Moreover, many other studies have reported substantial differences between the “exponential” and “kernel” estimates of hazard parameters. The Monte Carlo analyses by Kijko et al. (2001) mentioned earlier, and the actual-data studies by Lasocki and Papadimitriou (2006), suggest that, when these estimates differ, the “kernel” estimate is the more accurate one. All of this favors the kernel estimation of magnitude distribution functions for seismic hazard assessment.
PSHA using the kernel estimation of magnitude distribution as an alternative to the parametric models [1.19] and [1.20] has been implemented on the IS-EPOS Platform (tcs.ah-epos.eu; Orlecka-Sikora et al. 2020). The kernel estimation of magnitude distribution is also applied in the SHAPE software package for time-dependent seismic hazard analysis (Leptokaropoulos and Lasocki 2020). SHAPE is open source and can be downloaded from https://git.plgrid.pl/projects/EA/repos/seraapplications/browse/SHAPE_Package.
Figure 1.4. Comparison of the estimates obtained from the kernel estimation method (solid lines) and from the exponential model of magnitude distribution (dashed lines) for the data from the Northern Aegean Sea. Left – the CDF estimates juxtaposed with the cumulative histogram of magnitude data. Right – the mean return period estimates. Reprinted from Lasocki and Papadimitriou (2006, Figures 5b and 6b)
Figure 1.5. “Kernel” (circles) and “exponential” (squares) estimates of the mean return periods juxtaposed with the mean return period estimates drawn from the actual 94-year-long observations of seismicity in the Northern Aegean Sea (diamonds). Reprinted from Lasocki and Papadimitriou (2006, Figure 7b)
1.5. Interval estimation of magnitude CDF and related hazard parameters
Orlecka-Sikora (2004, 2008) presented a method for assessing the confidence intervals of the CDF estimated by the kernel method. It builds on the bias-corrected and accelerated (BCa) method of Efron (1987), together with the smoothed bootstrap and second-order bootstrap samples, and is called the iterative bias-corrected and accelerated (IBCa) method.
To simplify the notation, let $\{M_i\}$, $i = 1, \ldots, n$, be an n-element sample of already randomized magnitudes [1.21].
The j-th jackknife sample, $j \le n$, is the $(n-1)$-element sample $\{M_1, M_2, \ldots, M_{j-1}, M_{j+1}, \ldots, M_n\}$, that is, the initial sample with the j-th element removed. Hence, there are at most n jackknife samples.
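For concreteness, a minimal sketch of the jackknife resampling in Python/NumPy; the function name and the array representation of the magnitude sample are illustrative assumptions, not part of the original method description:

import numpy as np

def jackknife_samples(magnitudes):
    """Yield the n leave-one-out (jackknife) samples of a magnitude sample."""
    m = np.asarray(magnitudes, dtype=float)
    for j in range(m.size):
        # the j-th jackknife sample: the initial sample with element j removed
        yield np.delete(m, j)

Each yielded array has n − 1 elements, so at most n distinct jackknife samples exist, as stated above.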
The first-order smoothed bootstrap samples are obtained in the same way as previously (equation [1.25]). The k-th smoothed bootstrap sample, $\mathcal{M}^{(k)}$, is composed of:

$$M_i^{*} = M_{J(i)} + h\,\varepsilon_i, \quad i = 1, \ldots, n \qquad [1.33]$$

where $J(i)$, $i = 1, \ldots, n$, are integers drawn at random, with replacement, from $\{1, 2, \ldots, n\}$, $h$ is the smoothing factor estimated from the initial sample, and $\varepsilon_i$ are independent standard normal random numbers.
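A minimal sketch of drawing one first-order smoothed bootstrap sample according to [1.33], assuming a Gaussian kernel and a smoothing factor h already estimated from the data (both are assumptions here; the function name is illustrative):

import numpy as np

def smoothed_bootstrap_sample(magnitudes, h, rng=None):
    """Draw one smoothed bootstrap sample (equation [1.33]): resample with
    replacement, then perturb each draw with Gaussian kernel noise of width h."""
    rng = np.random.default_rng() if rng is None else rng
    m = np.asarray(magnitudes, dtype=float)
    resampled = rng.choice(m, size=m.size, replace=True)   # M_{J(i)}
    return resampled + h * rng.standard_normal(m.size)     # + h * eps_i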
The second-order smoothed bootstrap samples are based on a first-order bootstrap sample, say $\mathcal{M}^{(k)}$, and the kernel CDF estimate obtained from $\mathcal{M}^{(k)}$, say $\hat{F}^{(k)}$. Such a sample is composed of:

$$M_i^{**} = M_{J(i)}^{*(k)} + h^{(k)}\,\varepsilon_i, \quad i = 1, \ldots, n \qquad [1.34]$$

where $M_{J(i)}^{*(k)}$ are elements drawn at random, with replacement, from $\mathcal{M}^{(k)}$, and $h^{(k)}$ is the smoothing factor estimated from $\mathcal{M}^{(k)}$.
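Continuing the sketch, a second-order sample is drawn from a first-order sample after re-estimating its smoothing factor. This reuses smoothed_bootstrap_sample() from the previous sketch; estimate_bandwidth is a placeholder for whichever optimal-smoothing estimator the study applies to the initial sample, and is a hypothetical name:

def second_order_sample(first_order, estimate_bandwidth, rng=None):
    """Draw a second-order smoothed bootstrap sample from a first-order
    sample M^(k), using a smoothing factor h^(k) re-estimated from it."""
    h_k = estimate_bandwidth(first_order)          # h^(k) from M^(k), placeholder
    return smoothed_bootstrap_sample(first_order, h_k, rng)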
The IBCa method can be presented in the following steps:
– Step 1. Generate the n jackknife samples, estimate the kernel CDFs from them, and evaluate the acceleration constant (a numerical sketch follows this step):

$$a = \frac{\sum_{j=1}^{n}\left[\hat{F}_{(\cdot)} - \hat{F}_{(j)}\right]^{3}}{6\left\{\sum_{j=1}^{n}\left[\hat{F}_{(\cdot)} - \hat{F}_{(j)}\right]^{2}\right\}^{3/2}} \qquad [1.35]$$

where $\hat{F}_{(j)}$ denotes the kernel CDF estimate obtained from the j-th jackknife sample and $\hat{F}_{(\cdot)} = \frac{1}{n}\sum_{j=1}^{n}\hat{F}_{(j)}$ is their mean; [1.35] is evaluated pointwise, at every magnitude value at which the confidence interval is sought.
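As an illustration of Step 1, a rough numerical sketch of [1.35] under simplifying assumptions: a fixed-bandwidth Gaussian kernel stands in for whichever kernel CDF estimator is actually used, the smoothing factor h is taken as known, and the evaluation grid and function name are illustrative:

import numpy as np
from scipy.stats import norm

def acceleration_constant(magnitudes, h, m_grid):
    """Jackknife estimate of the acceleration constant a (equation [1.35]),
    evaluated pointwise on a grid of magnitude values."""
    m = np.asarray(magnitudes, dtype=float)
    grid = np.asarray(m_grid, dtype=float)
    n = m.size
    F_jack = np.empty((n, grid.size))
    for j in range(n):
        mj = np.delete(m, j)                       # j-th jackknife sample
        # Gaussian-kernel CDF estimate on the grid (fixed bandwidth h)
        F_jack[j] = norm.cdf((grid[None, :] - mj[:, None]) / h).mean(axis=0)
    d = F_jack.mean(axis=0) - F_jack               # F_(.) - F_(j) at each grid point
    return (d ** 3).sum(axis=0) / (6.0 * ((d ** 2).sum(axis=0)) ** 1.5)

The returned array holds one value of a per grid point, matching the pointwise evaluation of [1.35] noted above.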