2.2.3. RSQSim
The rate- and state-dependent constitutive properties for the sliding strength of faults (Dieterich 1994) are the principal ingredients of the RSQSim simulator algorithm (Dieterich and Richards-Dinger 2010; Colella et al. 2011; Richards-Dinger and Dieterich 2012). From the viewpoint of the interevent time distribution, RSQSim is generally regarded as the only simulator that produces aftershocks as well as time-dependent increases in the conditional probability of nearby earthquakes following a significant event (Tullis et al. 2012a,b; Field 2015). Moreover, Dieterich and Richards-Dinger (2010) explicitly mention the occasional presence of multiple events occurring as pairs and, more rarely, as triplets in their simulated catalogs. More recently, RSQSim has been applied to the simulation of aftershock sequences (Xu et al. 2014), to the modeling of injection-induced seismicity (Dieterich et al. 2015), to the Wellington, New Zealand, fault network (Christophersen et al. 2017) and to the replication of seismic hazard statistics (Shaw et al. 2018).
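For reference, the constitutive law in question can be written in its standard Dieterich–Ruina form, here with the commonly used aging law for the evolution of the state variable (this is the general textbook formulation; RSQSim itself replaces the continuous law with computationally efficient approximations based on discrete sliding states):

$$\tau = \sigma \left[ \mu_0 + a \ln\!\left(\frac{V}{V_0}\right) + b \ln\!\left(\frac{\theta V_0}{D_c}\right) \right], \qquad \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c},$$

where $\tau$ and $\sigma$ are the shear and normal stresses acting on the fault, $V$ is the sliding velocity, $\theta$ is the state variable, $D_c$ is a characteristic slip distance, $V_0$ is a reference velocity, and $\mu_0$, $a$ and $b$ are empirically determined constants.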
2.2.4. ViscoSim
Based on a simplified viscoelastic-cycle model of fault interaction (Pollitz and Schwartz 2008), Pollitz (2011, 2012) developed the ViscoSim earthquake simulator, which is the only known simulation code that includes viscoelastic stress transfer and a layered model of the Earth's crust.
2.2.5. Other simulation codes
Besides the above-mentioned simulation algorithms, other simulation codes have been published in the recent seismological literature. To test whether a characteristic earthquake hypothesis is preferable to a simpler time-independent hypothesis, Parsons and Geist (2009) applied a simple simulator-based model to paleoseismological records collected at Wrightwood on the San Andreas Fault and on segments of the Wasatch fault. Their simulations were constrained by observed slip rates and were based on the Gutenberg–Richter magnitude distribution. They showed that the Gutenberg–Richter distribution can be used as an earthquake occurrence model on sub-segments of different sizes on individual faults in probabilistic earthquake forecasting. The same simulator was later applied by Parsons et al. (2013) to the Nankai–Tonankai–Tokai subduction zones in Japan. They found that using the convergence rate as a primary constraint allows the simulator to replicate much of the spatial distribution of observed segmented rupture rates along these subduction zones, although rate differences between the two forecast methods were noted.
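By way of illustration, the essence of such a slip-rate-constrained, Gutenberg–Richter-based approach can be sketched in a few lines of Python. This is only a minimal sketch under generic assumptions, not the code of Parsons and Geist (2009); all function names and parameter values are hypothetical placeholders:

```python
import numpy as np

def gr_sample(b, m_min, m_max, size, rng):
    """Sample magnitudes from a truncated Gutenberg-Richter distribution
    by inverse-transform sampling of the exponential magnitude density."""
    u = rng.random(size)
    beta = b * np.log(10.0)
    c = 1.0 - np.exp(-beta * (m_max - m_min))
    return m_min - np.log(1.0 - u * c) / beta

def moment(m):
    """Seismic moment (N m) from moment magnitude (Hanks-Kanamori relation)."""
    return 10.0 ** (1.5 * m + 9.05)

rng = np.random.default_rng(42)

# Hypothetical fault-segment parameters (placeholders, not values from the paper)
mu = 3.0e10            # rigidity (Pa)
area = 80e3 * 15e3     # fault area: 80 km x 15 km (m^2)
slip_rate = 0.02       # long-term slip rate: 20 mm/yr (m/yr)
moment_rate = mu * area * slip_rate   # tectonic moment budget (N m / yr)

# Draw G-R magnitudes until their cumulative moment balances the budget
# accumulated over a target catalog duration.
duration = 10_000.0    # years
budget = moment_rate * duration
mags, total = [], 0.0
while total < budget:
    m = gr_sample(b=1.0, m_min=6.0, m_max=7.8, size=1, rng=rng)[0]
    mags.append(m)
    total += moment(m)

print(f"{len(mags)} events in {duration:.0f} yr; "
      f"mean recurrence = {duration / len(mags):.0f} yr")
```

The key design point, under these assumptions, is that the long-term slip rate enters only through the moment budget, while the relative frequency of event sizes is fixed by the Gutenberg–Richter exponent.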
Barall and Tullis (2015) studied the performance of triangular fault elements in earthquake simulators, finding that, contrary to expectations, rectangles overall perform as well as or better than triangles when computing stresses on curved fault surfaces. They also found that one triangulation may perform significantly better than another.
Parsons et al. (2018) considered three different simulation algorithms to assess how magnitude–frequency results for individual faults depend on the method and its parameters, complementing the existing Uniform California Earthquake Rupture Forecast, version 3, efforts. Finally, Shaw (2019) improved the RSQSim algorithm with a new generalized hybrid loading method that drives faults at their desired slip rates while applying more regularized stressing rates, allowing faults to slip in a more natural way.
2.2.6. Comparisons among simulators
The various methods and models adopted in different simulator algorithms have led to several studies comparing the results obtained by these simulators. A journal issue focused on earthquake simulators dealt with this subject (Tullis 2012). In that volume, two papers (Tullis et al. 2012a,b) were specifically devoted to the comparison of the four main simulators developed in California. Moreover, Field (2015) carried out an in-depth analysis of the performance of three simulators, namely ALLCAL, RSQSim and ViscoSim, focusing in particular on the recurrence interval distributions of each simulator (see Figure 2.2). In Figure 2.2, we have also added the probability density functions of the Brownian Passage Time (BPT) renewal model, computed for the respective minimum, mean and maximum recurrence times Tr and the coefficients of variation Cν obtained from the simulated distributions (Mosca et al. 2012). More recently, Wilson et al. (2017) addressed the problem of verifying earthquake simulators with observed data and discussed the performance of ALLCAL, Virtual California, ViscoSim and RSQSim.
Figure 2.2. Recurrence-time distributions from three different physics-based earthquake simulators (as labeled) at the Pitman Canyon paleoseismic site on the southern San Andreas Fault. This analysis only considers ruptures with an area greater than the square of the average down-dip width. The maximum-likelihood analysis of the paleoseismic data gives an observed mean recurrence interval of 173 years (with 95% confidence bounds of 106–284 years) (based on Field 2015). The three colored lines depict the BPT probability density functions computed for the respective minimum, mean and maximum recurrence times Tr and the coefficients of variation Cν obtained from the simulated distributions. For a color version of this figure, see www.iste.co.uk/limnios/statistical.zip
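For concreteness, the BPT density overlaid in Figure 2.2 has a simple closed form: it is the inverse Gaussian distribution parameterized by the mean recurrence time Tr and the aperiodicity (coefficient of variation) Cν. A minimal Python sketch of its evaluation follows; the parameter values are illustrative placeholders, not those used in the figure:

```python
import numpy as np

def bpt_pdf(t, tr, alpha):
    """Brownian Passage Time (inverse Gaussian) probability density with
    mean recurrence time tr and coefficient of variation alpha."""
    t = np.asarray(t, dtype=float)
    return (np.sqrt(tr / (2.0 * np.pi * alpha**2 * t**3))
            * np.exp(-((t - tr) ** 2) / (2.0 * tr * alpha**2 * t)))

# Placeholder (Tr, Cv) pairs standing in for the minimum, mean and maximum
# recurrence statistics extracted from a simulated distribution.
t = np.linspace(1.0, 600.0, 600)   # elapsed time since the last event (years)
for tr, cv in [(120.0, 0.4), (173.0, 0.5), (280.0, 0.6)]:
    f = bpt_pdf(t, tr, cv)
    print(f"Tr = {tr:.0f} yr, Cv = {cv}: density peaks near t = {t[np.argmax(f)]:.0f} yr")
```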
2.3. Conceptual evolution of a physics-based earthquake simulator
Although based on some of the principles adopted in the simulator algorithms described in section 2.2, the simulation algorithm described in this section was developed independently. It aimed to demonstrate that a model characterized by a few simple and reasonable assumptions can replicate not only the spatial features, but also the temporal behavior and the scaling laws of the observed seismicity. The relations among the source parameters used in this algorithm have a physical justification and are consistent with the empirical relations known from the literature (see the appendix in section 2.5).
2.3.1. A physics-based earthquake simulator (2015)
In a study of the Corinth Gulf fault system (CGFS), Console et al. (2015) introduced a new and original earthquake simulator. The algorithm applied in their study builds on concepts introduced for earthquake simulators in California (Tullis 2012), such as the constraint imposed by the long-term slip rate on fault segments and the adherence to a physics-based model of rupture growth, without making use of time-dependent rheological parameters on the fault. Because of its limited sophistication, this algorithm is suitable for producing synthetic catalogs that resemble the long-term seismic activity of relatively simple fault systems, containing hundreds of thousands of earthquakes of moderate magnitude, even with quite modest computing resources. The basic concepts upon which this algorithm is built are shown in the flow chart of Figure 2.3. A detailed outline of the computer code is provided in the appendix in section 2.6. Here, we recall only the main features.
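Purely as a conceptual illustration of the driving principle just described (slip-rate loading combined with threshold-triggered rupture growth), the following Python sketch shows a toy version of such a simulation loop. It is emphatically not the code outlined in section 2.6; the one-dimensional cell geometry, the growth rule and all numerical values are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D discretization of a fault into cells (all values are placeholders)
n_cells = 200
slip_rate = 5e-3                            # long-term slip rate (m/yr)
stiffness = 0.2                             # stress per metre of slip deficit (arbitrary units)
load_rate = stiffness * slip_rate           # tectonic stressing rate (units/yr)
strength = rng.uniform(0.9, 1.1, n_cells)   # failure thresholds
stress = rng.uniform(0.0, 0.9, n_cells)     # initial stress state

catalog = []
t = 0.0
while t < 50_000.0:
    # Load all cells uniformly until the first one reaches failure
    dt = float(np.min((strength - stress) / load_rate))
    t += dt
    stress += load_rate * dt

    # Nucleate at the failing cell and grow the rupture into highly stressed neighbours
    hypo = int(np.argmin(strength - stress))
    ruptured, frontier = {hypo}, [hypo]
    while frontier:
        c = frontier.pop()
        for nb in (c - 1, c + 1):
            if 0 <= nb < n_cells and nb not in ruptured and stress[nb] > 0.8 * strength[nb]:
                ruptured.add(nb)
                frontier.append(nb)

    # Complete stress drop on the ruptured cells; record time and rupture size
    idx = np.fromiter(ruptured, dtype=int)
    stress[idx] = 0.0
    catalog.append((t, len(idx)))

print(f"{len(catalog)} synthetic events; largest rupture spans "
      f"{max(size for _, size in catalog)} cells")
```

Even in this toy form, the two constraints emphasized above are visible: the long-term slip rate fixes the loading (and hence the long-term event rate), while rupture extent emerges from the stress state of neighbouring cells rather than from a prescribed magnitude distribution.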