Graph Spectral Image Processing. Gene Cheung.

Author: Gene Cheung
Publisher: John Wiley & Sons Limited
Genre: Software
ISBN: 9781119850816
there is improvement due to the second iteration. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
      Figure 9.6. Classification error rate (%) as a function of label noise for the two datasets. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
      Figure 9.7. The block diagram of the unweighted graph generation scheme. V0(·) is a feature map function that reflects the node-to-node correlation. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
      Figure 9.8. The graph-based classifier and graph update scheme. The green and blue colors denote input and output, respectively. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
      Figure 9.9. The overall block diagram of the DynGLR-Net for r = 2. Given observations X, G-Net first learns an initial undirected and unweighted k-NN graph by minimizing LossE. The resulting edge matrix E0 is used in the GLR iteration. The learnt shallow feature map f1(X) = {X, ZD(X)} is then used as an input to learn a CNNC1 network for assigning weights to the initial graph edges. Given a subset of potentially noisy labels, Ẏ, GLR is performed on the constructed undirected and weighted graph to restore the labels. The restored labels are used in the following GLR iterations.
      Figure 9.10. CNNCr neural nets for the CIFAR10 dataset: “pool/q/w” refers to a max-pooling layer with q = pool size and w = stride size. “x conv y/z” refers to a 1D convolutional layer with y filters, each with kernel size x and stride size z. “fc x” refers to a fully connected layer with x = number of neurons. “reshape x” refers to a reshape layer that transforms the size of the input to x. “avg” refers to the global average-pooling layer (Lin et al. 2013). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
      Figure 9.11. CNNHU neural nets for the CIFAR10 dataset. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
      Figure 9.12. The magnitude of the GFT coefficients for the CIFAR10 dataset using sufficient data under a 30% label noise level. The density of each eigenvalue λ across all experiments on the testing sets is represented through color maps. The top row shows the result after initialization and before GLR (G-Net output); the second and third rows show the results after the first (r = 1) and second (r = 2) iterations, respectively. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip

      11 Chapter 10
      Figure 10.1. Point cloud classification network (Simonovsky and Komodakis 2017). The network outputs the classification score vector y ∈ Rc, where c is the number of classes. GCONV: graph convolution operation; BNORM: batch normalization; FC: fully connected layer. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
      Figure 10.2. Point cloud segmentation network (Wang et al. 2019). The network outputs the per-point classification scores Y ∈ RN×p for p semantic labels. ⊕: concatenation. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
      Figure 10.3. Part segmentation results for chairs, lamps and tables. Figure from (Wang et al. 2019). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
      Figure 10.4. Image denoising network. Figure from (Valsesia et al. 2019a). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
      Figure 10.5. Graph-convolutional layer. Figure from (Valsesia et al. 2019a). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
      Figure 10.6. Extract from Urban100 scene 13, σ = 25. Left to right: ground truth, noisy (20.16 dB), BM3D (30.40 dB), DnCNN (30.71 dB), NLRN (31.41 dB), GCNN (31.53 dB). Figure from (Valsesia et al. 2019a)
      Figure 10.7. Receptive field (green) of a single pixel (red, circled in purple) for the three graph-convolutional layers in the LPF1 block with respect to the input of the first graph-convolutional layer in the block. Top row: gray pixel on an edge. Bottom row: white pixel in a uniform area. Figure from (Valsesia et al. 2019a). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
      Figure 10.8. Generative adversarial network. A generator G maps a latent vector z into a sample x̂ of the data distribution. The discriminator D can be interpreted as measuring the optimal transport cost between the true data and the generated data distributions. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
      Figure 10.9. Graph-convolutional GAN generator. The graph-convolutional and upsampling layers use a k-nearest-neighbor graph computed from the feature vectors at the input of the layer. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
      Figure 10.10. Generated point clouds. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
      Figure 10.11. Graph-convolutional variational autoencoder for shape completion. Figure from (Litany et al. 2018). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
      Figure 10.12. Examples of completed shapes. Figure from (Litany et al. 2018). For a color version of this figure, see www.iste.co.uk/cheung/graph.zip

      List of Tables

      1 Chapter 2
      Table 2.1. Comparison between different GSP-based approaches to graph learning. Table from (Dong et al. 2019) with permission

      2 Chapter 4
      Table 4.1. DCTs/DSTs corresponding to ¯ with different vertex weights
      Table 4.2. Comparison of KLT, S-GFT and NS-GFT with the RDOT scheme in terms of BD-rate (% bitrate reduction) with respect to the DCT. Smaller (negative) BD-rates mean better compression
      Table 4.3. Comparison of KLT, S-GFT and NS-GFT for coding of different prediction modes in terms of BD-rate with respect to the DCT. Smaller (negative) BD-rates mean better compression
      Table 4.4. The contribution of GL-GFT and EA-GFT in terms of BD-rate with respect to the DCT. Smaller (negative) BD-rates mean better compression

      3 Chapter 6
      Table 6.1. Property comparison of different graph Laplacians
      Table 6.2. Comparison with different graph variants in PSNR (dB) at QF = 5

      4 Chapter 7
      Table 7.1. Classification accuracy (%) on the ModelNet40 dataset

      5 Chapter 9
      Table 9.1. Classification error rate (%) for the CIFAR10 dataset for different labeling ratios (%)
      Table 9.2. Classification error rate (%) for the CIFAR10 dataset using sufficient training data under different label noise levels

      6 Chapter 10
      Table 10.1. Mean F1 score weighted by class frequency on the Sydney Urban Objects dataset (De Deuge et al. 2013)
      Table 10.2. Mean class accuracy (respectively, mean instance accuracy) on ModelNet datasets (Wu et al. 2015)
      Table 10.3. Part segmentation results on the ShapeNet part dataset (Yi et al. 2016). The results show the mean computed for all classes; for more detailed results, we refer the reader to (Wang et al. 2019)
      Table 10.4. Natural image denoising results. The evaluation metric is PSNR (dB)
      Table 10.5. Quantitative comparisons
      Table 10.6. Completion error for synthetic range scans

      Guide

      1  Cover

      2  Table of Contents

      3  Title Page

      4  Copyright

      5  Introduction to Graph Spectral Image Processing

      6  Begin Reading

      7  List of Authors

      8  Index

      9  End User License Agreement
