Furthermore, the multi-layer neural mechanism of the HVS inspired the development of multi-layer neural networks, an approach now commonly known as deep learning. Along this direction, a family of non-linear machine learning algorithms was developed to model complex data representations. It is the multi-layer artificial neural architecture that allows high-level representations of data to be learned through multi-scale analysis, from low-level primitives up to semantic features. Reassuringly, this type of multi-layer neural network resembles the hierarchical mechanism of the HVS. In the next chapter, we will cover the basics of artificial neural networks.
IOP Publishing
Machine Learning for Tomographic Imaging
Ge Wang, Yi Zhang, Xiaojing Ye and Xuanqin Mou
Chapter 3
Artificial neural networks
3.1 Basic concepts
In chapter 2, we introduced tomographic reconstruction based on a learned structural dictionary, in which prior information about low-level image features is expressed as atoms of an over-complete basis. This prior information is the result of image information extraction. Indeed, finding an efficient way to express the information contained in diverse image content is an essential task. In the development of deep learning techniques, it has become a common belief that multi-layer neural networks extract image information at different semantic levels, thereby representing image features effectively and efficiently, which is consistent with how the human visual system (HVS) perceives natural images. Therefore, in this chapter we focus on the basics of artificial neural networks, providing the foundation for feature representation and reconstruction of medical images using deep neural networks.
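To make the dictionary idea concrete, the following sketch (our own illustration, not an implementation from chapter 2) represents a signal as a sparse combination of atoms from an over-complete dictionary using a simple greedy matching pursuit: at each step, the atom most correlated with the current residual is selected and its contribution is removed. The random dictionary and the two-atom test signal are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Over-complete dictionary: more atoms (k) than signal dimensions (n).
n, k = 8, 20
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)     # normalize atoms to unit length

# Construct a signal that truly is a sparse mix of two atoms.
x = 1.5 * D[:, 3] - 0.8 * D[:, 11]

def matching_pursuit(x, D, n_iters=10, tol=1e-6):
    """Greedily approximate x with a few dictionary atoms."""
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iters):
        corr = D.T @ residual              # correlation with every atom
        j = np.argmax(np.abs(corr))        # best-matching atom
        coeffs[j] += corr[j]
        residual -= corr[j] * D[:, j]      # strip its contribution
        if np.linalg.norm(residual) < tol:
            break
    return coeffs, residual

coeffs, residual = matching_pursuit(x, D)
print("selected atoms:", np.flatnonzero(np.abs(coeffs) > 1e-6))
print("residual norm: %.2e" % np.linalg.norm(residual))
```

Only a handful of the 20 coefficients end up non-zero, which is exactly the sense in which the dictionary representation encodes prior information sparsely.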
3.1.1 Biological neural network
Artificial neural networks originated from attempts to mimic biological neural systems. Before being introduced to deep learning, it is helpful to understand the connection between artificial neural networks and their biological counterparts.
The hierarchical structure of the HVS is shown in figure 1.2. In the HVS, features are extracted layer by layer. As described in chapter 1, in the ‘what’ pathway the V1/V2 areas are mainly sensitive to edges and lines, the V4 area senses object shapes, and finally the IT region completes object recognition. In this process, the receptive field steadily increases in size, and the extracted features become increasingly complex.
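The growth of the receptive field from layer to layer can be quantified with the standard recurrence used for stacked convolution-like layers (our own illustrative sketch, not from the text): if a layer has kernel size $k$ and stride $s$, the receptive field grows by $(k-1)$ times the cumulative stride of all earlier layers. The specific kernel/stride values below are assumptions chosen only to show the trend.

```python
def receptive_fields(layers):
    """Receptive field size after each layer.

    layers: list of (kernel_size, stride) pairs, input to output.
    Recurrence: r <- r + (kernel - 1) * jump;  jump <- jump * stride,
    where jump is the spacing of one layer's units in input pixels.
    """
    r, jump = 1, 1          # a single input pixel sees only itself
    sizes = []
    for kernel, stride in layers:
        r = r + (kernel - 1) * jump
        jump *= stride
        sizes.append(r)
    return sizes

# Three 3x3 layers, the first with stride 2: the field widens each layer,
# loosely mirroring the V1/V2 -> V4 -> IT progression described above.
print(receptive_fields([(3, 2), (3, 1), (3, 1)]))  # -> [3, 7, 11]
```

The point of the exercise is only that deeper units see wider input regions, just as neurons deeper in the ‘what’ pathway respond to ever larger portions of the visual field.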
To a large degree, an artificial neural network attempts to duplicate the biological neural network from the perspective of information processing. As a result, an artificial neural network serves as a simple mathematical model, and different networks are defined by different interconnections among various numbers of artificial neurons. A neural network is a computational framework comprising a large number of neurons, the basic computing units, connected to each other with varying