Chapter 10
Figure 10.1. Point cloud classification network (Simonovsky and Komodakis 2017). The network outputs the classification score vector y ∈ Rc, where c is the number of classes. GCONV: graph convolution operation; BNORM: batch normalization; FC: fully connected layer. For a color version of this figure, see www.iste.co.uk/cheung/graph.zip
Figure 10.2. Point cloud segmentation network (Wang et al. 2019). The network outputs the per-point classification scores Y ∈ RN×p for p semantic labels. ⊕: concatenation
Figure 10.3. Part segmentation results for chairs, lamps and tables. Figure from (Wang et al. 2019)
Figure 10.4. Image denoising network. Figure from (Valsesia et al. 2019a)
Figure 10.5. Graph-convolutional layer. Figure from (Valsesia et al. 2019a)
Figure 10.6. Extract from Urban100 scene 13, σ = 25. Left to right: ground truth, noisy (20.16 dB), BM3D (30.40 dB), DnCNN (30.71 dB), NLRN (31.41 dB), GCNN (31.53 dB). Figure from (Valsesia et al. 2019a)
Figure 10.7. Receptive field (green) of a single pixel (red, circled in purple) for the three graph-convolutional layers in the LPF1 block with respect to the input of the first graph-convolutional layer in the block. Top row: gray pixel on an edge. Bottom row: white pixel in a uniform area. Figure from (Valsesia et al. 2019a)
Figure 10.8. Generative adversarial network. A generator G maps a latent vector z into a sample x̂ of the data distribution. The discriminator D can be interpreted as measuring the optimal transport cost between the true data distribution and the generated data distribution
Figure 10.9. Graph-convolutional GAN generator. The graph-convolutional and upsampling layers use a k nearest neighbor graph computed from the feature vectors at the input of the layer
Figure 10.10. Generated point clouds
Figure 10.11. Graph-convolutional variational autoencoder for shape completion. Figure from (Litany et al. 2018)
Figure 10.12. Examples of completed shapes. Figure from (Litany et al. 2018)
List of Tables
Chapter 2
Table 2.1. Comparison between different GSP-based approaches to graph learning. Table from (Dong et al. 2019), with permission
Chapter 4
Table 4.1. DCTs/DSTs corresponding to ¯ with different vertex weights
Table 4.2. Comparison of KLT, S-GFT and NS-GFT with the RDOT scheme in terms of BD-rate (% bitrate reduction) with respect to the DCT. Smaller (negative) BD-rates mean better compression
Table 4.3. Comparison of KLT, S-GFT and NS-GFT for coding of different prediction modes in terms of BD-rate with respect to the DCT. Smaller (negative) BD-rates mean better compression
Table 4.4. The contribution of GL-GFT and EA-GFT in terms of BD-rate with respect to the DCT. Smaller (negative) BD-rates mean better compression
Chapter 6
Table 6.1. Property comparison of different graph Laplacians
Table 6.2. Comparison with different graph variants in PSNR (dB) at QF = 5
Chapter 7
Table 7.1. Classification accuracy (%) on the ModelNet40 dataset
Chapter 9
Table 9.1. Classification error rate (%) for the CIFAR10 dataset for different labeling ratios (%)
Table 9.2. Classification error rate (%) for the CIFAR10 dataset using sufficient training data under different label noise levels
Chapter 10
Table 10.1. Mean F1 score weighted by class frequency on the Sydney Urban Objects dataset (De Deuge et al. 2013)
Table 10.2. Mean class accuracy (respectively, mean instance accuracy) on ModelNet datasets (Wu et al. 2015)
Table 10.3. Part segmentation results on the ShapeNet part dataset (Yi et al. 2016). The results show the mean computed for all classes; for more detailed results, we refer the reader to (Wang et al. 2019)
Table 10.4. Natural image denoising results. The evaluation metric is PSNR (dB)
Table 10.5. Quantitative comparisons
Table 10.6. Completion error for synthetic range scans