Machine Learning for Tomographic Imaging. Professor Ge Wang.

ISBN: 9780750322164
as an additional variable along with α and x. The objective function is solved in an alternating fashion: all of the variables D, α, and x are kept fixed except for one at a time, gradually converging to a satisfactory sparse representation α, a final image x, and the associated dictionary D. As mentioned above, dictionary learning-based reconstruction consists of two steps: sparse coding and image updating. Note that the updating step for an intermediate reconstructed image x with a fixed sparse representation α˜ and the current dictionary D˜ is exactly the same as that for GDSIR expressed in equation (2.28). For the sparse coding step, D and α are estimated with a fixed reconstructed image xt, and the objective function is thus as follows:

min over D, α: Σ_s ‖E_s xt − Dα_s‖₂² + Σ_s ν_s ‖α_s‖₀

      This equation is the generic dictionary learning and sparse representation problem, which can be solved with respect to α and D in an alternating manner using the K-SVD algorithm described in subsection 2.2.2.
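      The sparse coding stage of this alternation is typically handled with orthogonal matching pursuit (OMP), as in algorithms 2.2 and 2.3 below. The following is a minimal, self-contained sketch of OMP on a single patch; the dictionary sizes and test signal are illustrative and not taken from Xu et al (2012):

```python
import numpy as np

def omp(D, y, sparsity):
    """Greedy OMP sketch: approximately solve
    argmin_a ||y - D a||_2  subject to  ||a||_0 <= sparsity."""
    residual = y.copy()
    support = []
    a = np.zeros(D.shape[1])
    for _ in range(sparsity):
        # Select the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares refit of the coefficients on the support.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        a[:] = 0.0
        a[support] = coeffs
        residual = y - D @ a
    return a

# Toy check: a "patch" synthesized from two atoms of a random,
# unit-norm, overcomplete dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
true_a = np.zeros(256)
true_a[[3, 100]] = [1.5, -2.0]
y = D @ true_a
a_hat = omp(D, y, sparsity=2)
print(np.linalg.norm(y - D @ a_hat))
```

      The greedy atom selection makes each iteration cheap, which is why OMP is the usual choice for the patch-wise coding steps in the algorithms below.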

      The GDSIR and ADSIR workflows are summarized in algorithms 2.2 and 2.3, respectively. To improve the convergence speed, a fast online algorithm is used to train a dictionary from patches extracted from an intermediate image.

      Algorithm 2.2. Workflow for the GDSIR algorithm. Reprinted with permission from Xu et al (2012). Copyright 2012 IEEE.

Global dictionary learning
1: Choose parameters for dictionary learning.
2: Extract patches to form a training set.
3: Construct a global dictionary.
Image reconstruction
4: Initialize x0, α0, and t = 0.
5: Set parameters λ, ε, and L0S.
6: while a stopping criterion is not satisfied do
7: Update xt−1 to xt using equation (2.29).
8: Represent xt with a sparse code αt using OMP.
9: end while
10: Output the final reconstruction.
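      The alternation in algorithm 2.2 can be sketched in a few lines. This is a hypothetical toy, not the actual update of equation (2.29): the image update is replaced by a plain gradient step on the data fidelity term, and the global dictionary is taken to be orthonormal so that the OMP step reduces to keeping the k largest coefficients:

```python
import numpy as np

def gdsir_sketch(A, b, D, n_iter=100, step=0.1, lam=1.0, k=2):
    """Toy GDSIR-style loop with a FIXED global dictionary D.
    A, b: system matrix and measured data; D orthonormal, so
    sparse coding is coefficient hard-thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Step 7 stand-in: gradient step on the data fidelity term.
        x = x - step * A.T @ (A @ x - b)
        # Step 8 stand-in: sparse-code x in D, keep k largest
        # coefficients, then pull x toward its sparse approximation.
        c = D.T @ x
        keep = np.argsort(np.abs(c))[-k:]
        alpha = np.zeros_like(c)
        alpha[keep] = c[keep]
        x = (x + lam * D @ alpha) / (1.0 + lam)
    return x

# Toy usage: recover a signal that is 2-sparse in D from
# random linear measurements.
rng = np.random.default_rng(1)
D, _ = np.linalg.qr(rng.standard_normal((16, 16)))  # orthonormal
x_true = D @ np.array([3.0, -2.0] + [0.0] * 14)
A = rng.standard_normal((32, 16)) / np.sqrt(32)
b = A @ x_true
x_hat = gdsir_sketch(A, b, D)
print(np.linalg.norm(x_hat - x_true))
```

      The key structural point survives the simplifications: the dictionary is trained once and held fixed, and every iteration alternates a data-consistency update with a sparsity-enforcing step.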

      Algorithm 2.3. Workflow for the ADSIR algorithm. Reprinted with permission from Xu et al (2012). Copyright 2012 IEEE.

1: Choose λ, ε, L0S and other parameters.
2: Initialize x0, D0, α0, and t = 0.
3: while a stopping criterion is not satisfied do
4: Update xt−1 to xt using equation (2.29).
5: Extract patches from xt to form the training set.
6: Construct a dictionary Dt from the training set.
7: Represent xt with a sparse code αt using OMP.
8: end while
9: Output the final reconstruction.
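      The per-iteration dictionary construction (step 6) can be carried out with K-SVD, whose characteristic move is the rank-1 atom update: for each atom, form the residual of the patches that use it with that atom's contribution removed, and refit the atom and its coefficients from the leading singular vectors of that residual. A minimal sketch, with patches as the columns of Y and codes A assumed to come from a prior OMP pass (the sizes and names are illustrative):

```python
import numpy as np

def ksvd_atom_update(D, A, Y):
    """One K-SVD atom-update sweep. Y: patches as columns,
    D: dictionary, A: sparse codes with Y ≈ D @ A. Each atom and
    its coefficients are refit to the rank-1 SVD of the residual
    of the patches using that atom."""
    for k in range(D.shape[1]):
        users = np.nonzero(A[k, :])[0]   # patches that use atom k
        if users.size == 0:
            continue
        # Residual without atom k's own contribution.
        E = Y[:, users] - D @ A[:, users] + np.outer(D[:, k], A[k, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                # new unit-norm atom
        A[k, users] = s[0] * Vt[0, :]    # refreshed coefficients
    return D, A

# Toy usage: the sweep never increases the patch fitting error.
rng = np.random.default_rng(2)
D = rng.standard_normal((8, 12))
D /= np.linalg.norm(D, axis=0)
A = np.where(rng.random((12, 30)) < 0.2,
             rng.standard_normal((12, 30)), 0.0)
Y = D @ A + 0.01 * rng.standard_normal((8, 30))
before = np.linalg.norm(Y - D @ A)
D, A = ksvd_atom_update(D, A, Y)
print(np.linalg.norm(Y - D @ A) <= before)
```

      Because each atom update is the optimal rank-1 fit given the other atoms, the Frobenius fitting error is monotonically non-increasing, which is what makes interleaving this sweep with OMP coding well behaved inside the ADSIR loop.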

      2.3.2.2 Experimental results

      The dictionary learning algorithms produced encouraging image reconstructions from low-dose data. Compared to the traditional regularizers widely used for CT reconstruction, the dictionary learning results are superior in terms of edge preservation and noise suppression. Under few-view and/or low-dose conditions, the dictionary learning framework performed very well; two examples are shown in figures 2.6 and 2.7.


      Figure 2.6. Reconstructed images from a low-dose sinogram collected in a sheep lung CT perfusion study. The upper row shows the images reconstructed using the FBP, GDSIR, ADSIR, TVSIR, and GDNSIR methods (from left to right), respectively. The magnified local regions are shown below the FBP result (upper left, upper right, lower left, and lower right correspond to GDSIR, ADSIR, TVSIR, and GDNSIR, respectively). The display window is [−700,800] HU. The second to fifth images in the bottom row are the difference images between the FBP image and the results obtained using the GDSIR, ADSIR, TVSIR, and GDNSIR methods, respectively, in the display window [−556,556] HU. Reproduced with permission from Xu et al (2012). Copyright 2012 IEEE.

      In addition to using the sheep dictionary to reconstruct images of the same sheep, Xu et al also used it to reconstruct human chest CT images. Remarkably, the global dictionary trained on images of the sheep performed well in the case of human chest scans, which implies that CT images share low-level structural information, just as natural images do. In fact, the dictionary learning approach can transfer natural image statistics to medical image reconstruction, which has a profound impact on tomographic imaging in general (Xu et al 2012).


      Figure 2.7. Reconstruction results using the FBP, GDSIR, and ADSIR methods, respectively (from top to bottom), with the left and right columns representing the results from 1100 and 550 projection views, respectively, in the display window [0,2]. Reprinted with permission from Xu et al (2012). Copyright 2012 IEEE.

      The L0-norm-based dictionary learning method described above is highly nonconvex and computationally nontrivial, although promising pilot results were reported. Motivated by this challenge, an algorithm for low-dose CT via L1-norm-based dictionary learning was developed to avoid the non-convexity. It has been verified that the L1-norm-based sparsity constraint yields a performance similar to that of GDSIR and ADSIR (Mou et al 2014).
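      To see why the L1 relaxation is attractive, note that the patch-coding subproblem becomes convex, and for an orthonormal dictionary it even admits a closed-form solution via soft-thresholding. This is a standard result, sketched below with illustrative names rather than the formulation of Mou et al (2014):

```python
import numpy as np

def soft_threshold(c, lam):
    # Entry-wise minimizer of 0.5*(a - c)**2 + lam*|a|.
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

def l1_sparse_code(D, y, lam):
    """For an ORTHONORMAL dictionary D, the convex coding problem
    argmin_a 0.5*||y - D a||_2^2 + lam*||a||_1
    has the closed form soft_threshold(D.T @ y, lam)."""
    return soft_threshold(D.T @ y, lam)

# With D = I, coding is just thresholding of y itself: large
# entries shrink by lam, entries with |y_i| <= lam go to zero.
y = np.array([3.0, -0.5, 1.0])
a = l1_sparse_code(np.eye(3), y, 1.0)
print(a)
```

      Unlike the greedy L0 coding, this step is a global minimizer of its (convex) subproblem, which is the practical payoff of moving from the L0 to the L1 penalty.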

      Tomographic image reconstruction is an inverse problem. Due to non-ideal factors in the data acquisition, such as noise and errors introduced in practice, as well as possible data incompleteness, image reconstruction can be a highly ill-posed problem. Thanks to Bayesian inference, prior knowledge can be utilized to solve the inverse problem. Compared with the simplest iterative algorithms, which only address the data fidelity term, regularized/penalized reconstruction techniques take advantage of the posterior probability and naturally incorporate prior knowledge into the reconstruction process. Consequently, a regularized solution lies in a much narrower search space of the unknown variables. In determining how to extract the prior knowledge, or in other words the intrinsic distribution properties of images, the theory of natural image statistics and the HVS give us great inspiration. Both anatomical and neurophysiological studies indicate a multi-layer HVS architecture that supports a sparse representation, eliminating image redundancy as much as possible and using statistically independent features when we perceive natural scenes. Inspired by this discovery, machine