Method                | Water  | Urban  | Vegetation | Bare soil | Containers | Overall accuracy | Time [s]
Proposed method       | 100%   | 78.12% | 89.46%     | 98.78%    | 47.12%     | 82.69%           | 254
(Storvik et al. 2009) | 99.95% | 97.32% | 90.81%     | 96.22%    | 37.25%     | 79.44%           | 298
(Voisin et al. 2014)  | 100%   | 75.24% | 87.16%     | 98.89%    | 49.31%     | 82.12%           | 668
The effectiveness of the proposed technique, and its improvement over the algorithm in Storvik et al. (2009), are quantitatively confirmed by the classification accuracies on the test samples of the aforementioned classes (see Table 1.1). All considered methods discriminated the “containers” class poorly, owing to its significant overlap with the “urban” class in the multisensor or multispectral feature space. As a further evolution of the present technique, the discrimination of this class could be improved by incorporating texture features. On the one hand, the multisensor multiscale technique in Voisin et al. (2014) also achieved accurate performance on the test samples, quite similar to that of the proposed method. On the other hand, the proposed algorithm improved on this benchmark approach with regard to the spatial detail of the resulting maps (see Figure 1.6 (e) and (g)) and to the overall computational burden (see Table 1.1). The latter advantage is due to the fact that addressing multisensor fusion through multiple quad-trees in cascade leverages the time efficiency of the sequential recursive formulation of MPM, without requiring the challenging and potentially time-consuming modeling of the joint pdf of optical and SAR data. By contrast, this modeling problem is central to the approaches in Storvik et al. (2009) and Voisin et al. (2014), which use copulas and meta-Gaussian densities for this purpose.
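The recursive MPM formulation on a quad-tree can be illustrated with a toy sketch on a single parent node and its four children. This is a minimal illustration, not the authors' implementation: the function name, the two-class setup and all numerical values are invented for the example, and a real hierarchical MRF would recurse over many scales with per-sensor likelihoods at each level. The upward pass accumulates subtree likelihoods; the downward pass then yields the exact marginal posteriors that the MPM criterion maximizes.

```python
import numpy as np

def mpm_quadtree(leaf_lik, root_lik, trans, prior):
    """Exact MPM inference on a one-level quad-tree (root + 4 leaves).

    leaf_lik : (4, K) array, p(y_leaf | class) for each leaf
    root_lik : (K,) array, p(y_root | class)
    trans    : (K, K) array, trans[parent, child] = P(child class | parent class)
    prior    : (K,) array, class prior at the root
    Returns (root_post, leaf_post): marginal posteriors p(class | all data).
    """
    # Upward pass: beta_c(x) = p(y_c | x_c = x) at each leaf, then the
    # message from child c to the root: m_c(xp) = sum_xc P(xc|xp) beta_c(xc).
    beta_leaves = leaf_lik                       # (4, K)
    msgs = beta_leaves @ trans.T                 # (4, K), indexed by parent class
    beta_root = root_lik * np.prod(msgs, axis=0)

    # Downward pass: posterior at the root, then at each leaf.
    root_post = beta_root * prior
    root_post /= root_post.sum()
    leaf_post = np.empty_like(beta_leaves)
    for c in range(4):
        # joint over (xp, xc): P(xp|y) * P(xc|xp) * beta_c(xc) / m_c(xp)
        joint = (root_post / msgs[c])[:, None] * trans * beta_leaves[c][None, :]
        leaf_post[c] = joint.sum(axis=0)
    return root_post, leaf_post

# Toy two-class example: children tend to inherit the parent label.
prior = np.array([0.5, 0.5])
trans = np.array([[0.9, 0.1], [0.1, 0.9]])
root_lik = np.array([0.6, 0.4])
leaf_lik = np.array([[0.8, 0.2], [0.7, 0.3], [0.9, 0.1], [0.2, 0.8]])
root_post, leaf_post = mpm_quadtree(leaf_lik, root_lik, trans, prior)
labels = leaf_post.argmax(axis=1)   # MPM decision at the finest scale
```

In this toy run the context propagated from the parent outweighs the fourth leaf's own likelihood, which is the regularizing behavior that hierarchical MPM provides across scales.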
1.3.2. Results of the second method
Two very-high-resolution time series, again acquired over Port-au-Prince, Haiti, were used in the experiments with the second proposed method. Both consist of Pléiades pansharpened data at a spatial resolution of 0.5 m (see Figures 1.7(a) and 1.8(a)), HH-polarized X-band COSMO-SkyMed spotlight data at a resolution of 1 m (see Figures 1.7(b) and 1.8(b)) and HH-polarized C-band RADARSAT-2 ultrafine data with a pixel spacing of 1.56 m (see Figures 1.7(c) and 1.8(c)). The acquisition dates of the three images in each series were a few days apart from one another. The series correspond to two different sites in the Port-au-Prince area, which are shown in Figures 1.7 and 1.8 and cover 1000 × 1000 and 2400 × 600 pixel grids at the finest resolution, respectively. The main classes in the two scenes are the same as in the previous section. Training and test samples associated with the two sites, annotated by an expert photointerpreter, were used to train the second proposed method and to quantitatively measure its performance. The 0.5 m pixel grid of the Pléiades image was used as the reference finest resolution, and the RADARSAT-2 image was slightly resampled to a pixel spacing of 4 · 0.5 = 2 m in order to match the power-of-2 structure associated with the quad-tree (see also Figure 1.5). Antialiasing filtering was applied during this minor downsampling from 1.56 to 2 m, which is expected to have a negligible impact on the classification output, since the ratio between the original and resampled pixel spacings is close to 1.
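The 1.56 m to 2 m resampling step can be sketched as follows. This is an illustrative sketch only, assuming a Gaussian low-pass prefilter (with a skimage-style bandwidth heuristic) followed by bilinear interpolation via scipy.ndimage; the chapter does not specify which antialiasing filter was actually used, and the toy input array stands in for the RADARSAT-2 amplitude data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def resample_to_grid(img, src_spacing, dst_spacing):
    """Antialiased resampling from src_spacing to dst_spacing (metres/pixel).

    For the RADARSAT-2 case in the text: 1.56 m -> 2 m, a mild
    downsampling (zoom factor 0.78) preceded by Gaussian low-pass filtering.
    """
    factor = src_spacing / dst_spacing           # < 1 means downsampling
    if factor < 1:
        # skimage-style heuristic for the antialiasing bandwidth
        sigma = (1.0 / factor - 1.0) / 2.0
        img = gaussian_filter(img.astype(np.float64), sigma)
    return zoom(img, factor, order=1)

# Toy SAR-like amplitude image on the original 1.56 m grid
rng = np.random.default_rng(0)
sar = rng.gamma(shape=1.0, scale=1.0, size=(200, 200))
resampled = resample_to_grid(sar, 1.56, 2.0)     # zoom factor 1.56/2 = 0.78
```

For a mild factor such as 0.78, the prefilter bandwidth is small, consistent with the observation in the text that this resampling should barely affect the classification; the resulting 2 m grid then aligns with the 0.5 m and 1 m levels of the quad-tree.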
In principle, the second proposed method can be applied in two distinct ways that differ in the ordering of the two SAR data sources across the two quad-trees: the COSMO-SkyMed image in the first quad-tree and the RADARSAT-2 image in the second, or vice versa. Preliminary experiments, omitted here for brevity, indicated that this choice of ordering had no relevant impact on the output classification map.
Quite accurate performance on the test samples was obtained by the proposed method in the multimission, multifrequency and multiresolution fusion task addressed in this experiment (see Table 1.2). The maps obtained by classifying the compound COSMO-SkyMed/RADARSAT-2/Pléiades time series of the two sites also exhibited remarkable spatial regularity (see Figures 1.7(g) and 1.8(g)). As in the previous experiment, rather low accuracy was achieved for the “containers” class, again because of its overlap with the “urban” class in the multisensor feature space.
Figure 1.7. Second proposed method – first test site. (a) Pléiades (©CNES distribution Airbus DS), (b) COSMO-SkyMed (©ASI 2011) and (c) RADARSAT-2 (©CSA 2011) images of the input series. The SAR images are shown after histogram equalization. The R-band of the optical image is displayed. Classification maps obtained by operating with (d) only Pléiades data, (e) Pléiades and COSMO-SkyMed data, (f) Pléiades and RADARSAT-2 data and (g) the proposed technique with all data in the series.
Color legend: water urban vegetation bare soil containers
For a color version of this figure, see www.iste.co.uk/atto/change2.zip
Figure 1.8. Second proposed method – second test site. (a) Pléiades (©CNES distribution Airbus DS), (b) COSMO-SkyMed (©ASI 2011) and (c) RADARSAT-2 (©CSA 2011) images of the input series. The SAR images are shown after histogram equalization. The R-band of the optical image is displayed. Classification maps obtained by operating with (d) only Pléiades data, (e) Pléiades and COSMO-SkyMed data, (f) Pléiades and RADARSAT-2 data and (g) the proposed technique with all data in the series.
Color legend: water urban vegetation bare soil containers
For a color version of this figure, see www.iste.co.uk/atto/change2.zip
Table 1.2. Second proposed method: classification accuracies on the test set of the time series composed of Pléiades, COSMO-SkyMed (CS) and RADARSAT-2 (RS) images
Method | Water | Urban | Vegetation | Bare soil | Containers | Overall accuracy