3. Combinations and Extensions to Accommodate Aftershocks
3.1. STEP-EEPAS Mixture
The Short-Term Earthquake Probabilities (STEP) model [49] is an aftershock model based on the Omori–Utsu aftershock-decay relation [4]. The STEP model has a background component, λSTAT, and a time-dependent clustering component, λCLUST. The expected number of earthquakes in the jth time, magnitude, location bin (tj, mj, xj, yj) is given by

λ(tj, mj, xj, yj) = max[λSTAT(mj, xj, yj), λCLUST(tj, mj, xj, yj)],

i.e., the larger of the background and clustering rates in each bin.
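As a minimal illustration (not the operational STEP implementation), the clustering component can be driven by the Omori–Utsu decay and combined with the background as above; the parameter values here are placeholders:

```python
def omori_utsu_rate(t, K=0.05, c=0.05, p=1.1):
    """Omori-Utsu aftershock rate at time t (days) after a mainshock;
    K, c, p are placeholder decay parameters."""
    return K / (t + c) ** p

def step_bin_rate(lam_stat, lam_clust):
    """STEP forecast rate for one (time, magnitude, location) bin:
    the larger of the background and clustering components."""
    return max(lam_stat, lam_clust)

# Example: 10 days after a mainshock, the clustering rate still exceeds
# a background rate of 1e-4 events per bin.
print(step_bin_rate(1e-4, omori_utsu_rate(10.0)))
```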
STEP and EEPAS were linearly combined to enhance short-term earthquake forecasting in California [50]. Using the Advanced National Seismic System (ANSS) catalogue of California over the period 1984–2004, the optimal mixture model for forecasting earthquakes with M ≥ 5.0 was found to be a convex linear combination of 0.42 of EEPAS and 0.58 of STEP. This mixture gave an average probability gain of more than 2 (i.e., an information gain per earthquake, ln(probability gain), of more than 0.7) compared to each of the individual models when forecasting ahead for the next 24 h period. The contribution from EEPAS can be weighted by magnitude to enhance performance at high target magnitudes. The STEP-EEPAS mixture improves short-term forecasting by allowing for the aftershocks of earthquakes that have already occurred.
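In code, the convex combination and its information gain per earthquake might be computed as follows; this is a sketch assuming precomputed rate densities evaluated at the target earthquakes, not the implementation of [50]:

```python
import numpy as np

def mixture_rate(lam_eepas, lam_step, w_eepas=0.42):
    """Convex linear combination of EEPAS and STEP rate densities,
    using the optimal weights reported for California (0.42/0.58)."""
    return w_eepas * lam_eepas + (1.0 - w_eepas) * lam_step

def info_gain_per_eq(lam_model, lam_ref):
    """Information gain per earthquake: mean log rate-density ratio at
    the target earthquakes. A value above ln(2) ~ 0.69 corresponds to
    an average probability gain above 2."""
    return np.mean(np.log(lam_model / lam_ref))
```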
3.2. EEPAS with Aftershocks Model
The EEPAS with aftershocks model (EAS)
[51] has a different purpose from the STEP-EEPAS mixture. It allows for aftershocks of earthquakes expected to occur under the EEPAS model, but not for aftershocks of earthquakes that have already occurred. It is aimed at improving medium-term forecasts by including the associated aftershocks of expected mainshocks in the forecast. The model assumes that the number of expected aftershocks depends on the mainshock magnitude, that their magnitude distribution follows the Gutenberg-Richter relation
[52], and that their spatial distribution is consistent with Utsu’s areal relation
[53]. This involves a modification of the EEPAS model to include several additional parameters: the Gutenberg-Richter b-value for aftershocks, an aftershock productivity parameter
θ, the minimum magnitude difference
γ by which a mainshock exceeds its largest aftershock, and the proportion
pM of earthquakes in the target magnitude range that are mainshocks. The effect is to change the magnitude and spatial distributions of the transient contributions of precursors to the rate density. Versions of the EEPAS and EAS models, with contributing earthquakes equally weighted and with aftershocks down-weighted, were fitted to a 10-year period and independently tested on a later 10-year period of the catalogues of California and the Kanto region of central Japan
[51]. For the testing period, the information gain of the EAS models over their EEPAS counterparts was about 0.1 on average. This confirmed the efficacy of the modifications. However, the expected number of aftershocks was found to strongly depend on the assumed maximum magnitude. This creates a difficulty in the practical application of the EAS model.
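As an illustration of why the assumed maximum magnitude matters, the following sketch shows how the expected number of aftershocks can scale with mainshock magnitude; the functional form and parameter values are illustrative assumptions, not those fitted in [51]:

```python
def expected_aftershocks(m_main, m_min, theta=-2.0, b=1.0, gamma=1.2):
    """Expected aftershocks with magnitude >= m_min for a mainshock of
    magnitude m_main, under an assumed Gutenberg-Richter productivity
    law 10**(theta + b*(m_main - m_min)), with no aftershocks expected
    above m_main - gamma (the assumed mainshock-aftershock gap)."""
    if m_min > m_main - gamma:
        return 0.0
    return 10 ** (theta + b * (m_main - m_min))

# The count grows exponentially with mainshock magnitude, so a total
# computed over an expected mainshock population is dominated by, and
# therefore sensitive to, the assumed maximum magnitude.
for m in (6.0, 7.0, 8.0):
    print(m, expected_aftershocks(m, m_min=5.0))
```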
3.3. Janus Model: EEPAS-ETAS Mixture
The Janus model is an additive mixture of the EEPAS model and an epidemic-type aftershock sequence (ETAS) model. From each contributing earthquake, it looks both to the larger earthquakes expected to follow it in the medium term and to the mostly smaller earthquakes expected to follow it in the short term. In
[54], the Janus model was optimized for time horizons ranging from 0 to 3000 days (i.e., up to more than 8 years) on the New Zealand and California earthquake catalogues. For each time horizon of interest, the EEPAS parameters were refitted with the delay set equal to the time horizon. It was found that the ETAS model is much more informative than EEPAS for forecasting with short time horizons of a few days, but even with a zero time horizon, the Janus model outperforms ETAS with an information gain per earthquake (IGPE) of about 0.1. For time horizons of 10–3000 days, the Janus model outperforms both ETAS and EEPAS with IGPEs ranging from 0.2 to 0.5. As the time horizon lengthens beyond six months in New Zealand and two years in California, the EEPAS model becomes more informative than ETAS and becomes the major component of the optimal mixture. In
[54], it was concluded that both cascades of triggering and the precursory scale increase phenomenon contribute to earthquake predictability and that these contributions are largely independent.
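In symbols, the Janus rate density can be written as an additive mixture (notation assumed here; η denotes the fitted mixing weight, which varies with the time horizon):

$$
\lambda_{\mathrm{Janus}}(t,m,x,y) = \eta\,\lambda_{\mathrm{ETAS}}(t,m,x,y) + (1-\eta)\,\lambda_{\mathrm{EEPAS}}(t,m,x,y), \qquad 0 \le \eta \le 1.
$$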
3.4. Hybrid Forecasting in New Zealand
EEPAS is now used for public earthquake forecasting in New Zealand, as one of the core elements of a hybrid forecasting tool. Public forecasting was initiated in New Zealand as a response to the devastating Canterbury earthquake sequence. This sequence began with the September 2010
M7.1 Darfield earthquake
[55] and continued with the 22 February 2011
M6.3 Christchurch earthquake
[56]. The Christchurch earthquake and subsequent earthquakes of about
M6 in the vicinity of Christchurch resulted in the deaths of 185 people and over NZ$40 billion of damage to buildings and infrastructure
[57]. The faults that ruptured during this sequence were unknown prior to the sequence and hazard was considered to be low in Christchurch
[58]. As a result of this sequence, attention was drawn to statistical forecasting models. A model with time-varying and long-term components was developed to forecast the following 50 years of expected earthquakes and resulting hazard in Canterbury. This was used to inform decisions for the rebuilding of Christchurch
The time-varying component was provided by a mixture of EEPAS and aftershock models, and the time-invariant component by a mixture of different smoothed seismicity models
[29][59]. Such statistical modelling can serve as a supplement to standard probabilistic seismic hazard analysis (PSHA) (e.g.,
[58][60]).
Following the November 2016
M7.8 Kaikoura earthquake
[61], a modified hybrid model with three components—short-term, medium-term and long-term—was developed
[27] to forecast the expected earthquakes and resulting hazard over the following 100 years. This model was used to inform decision-makers involved in the reinstatement of road and rail networks in the northern South Island. It is a gridded model, in which EEPAS provides the medium-term component.
4. Challenges in EEPAS Forecasting
Since its introduction in 2004, EEPAS has been a successful forecasting model for well-catalogued regions including New Zealand
[16][24][26][27][54][62][63], California
[16][20][25][54][64], Japan
[17][18][21][22][65], and Greece
[19]. To address the limitations imposed by the input earthquake catalogues, EEPAS has undergone many revisions. As mentioned earlier, one milestone in the improvement of EEPAS was the compensation of forecasts for missing precursory earthquakes in the time-lag between the end of the catalogue and the forecasting time horizon. Researchers have also learned how to compensate EEPAS forecasts for the limited record of precursory information before any target earthquake. Overall, the current version of EEPAS is much better adapted to the limitations of any earthquake catalogue than previous versions were. However, there are still significant challenges and unknowns, as outlined here.
4.1. Understanding the Physics behind the Ψ-Phenomenon
The
Ψ-phenomenon and EEPAS model are empirically based. However, the
Ψ-phenomenon can be identified as easily in synthetic catalogues as in real earthquake catalogues, and the EEPAS model also works well on synthetic catalogues
[66][67]. Synthetic catalogues are based on physical components such as fault networks, slip rates on faults, friction laws, and Coulomb stress calculations
[68][69]. The earthquake generation process of each synthetic earthquake is in principle traced through the stress transfer between neighbouring faults. This leads to an eventual failure of the fault that produces the earthquake. Ideally, the origin of the
Ψ-phenomenon should be explained by a similar physics-based concept. Such an understanding is likely to be helpful in guiding future refinements of the EEPAS model.
4.2. Incorporating Dependence on the Long-Term Earthquake Rate
Researchers have learned from analysis of synthetic catalogues that the scale of the EEPAS time distribution is inversely proportional to the slip rate on faults. Slip rates are related to the long-term rate of the earthquakes that they generate
[70]. Therefore, researchers should expect the scale of the EEPAS time distribution to be inversely related to the long-term earthquake rate.
If the spatial variability of the long-term earthquake rate is known, it can be incorporated into the EEPAS model using Equation (19). This is straightforward and does not add to the number of fitted parameters in the model. The challenge is how best to estimate it from existing data sources
[71]. The long-term earthquake rates can be estimated from smoothed seismicity, strain rates, faults and their slip rates, the location of plate boundaries, or some combinations of these. The main limitation is the restricted length of the available catalogue against which to test them.
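A minimal sketch of the idea (not Equation (19) itself, which is not reproduced here): if the time scale is inversely proportional to the long-term rate, the log10 mean of the EEPAS precursor-time distribution shifts down as the local rate increases relative to an assumed reference rate:

```python
import numpy as np

def adjusted_a_T(a_T_ref, longterm_rate, reference_rate):
    """Shift the log10 mean of the precursor-time distribution so that
    the time scale 10**a_T is inversely proportional to the local
    long-term earthquake rate (reference-rate normalization assumed)."""
    return a_T_ref - np.log10(longterm_rate / reference_rate)

# A cell with ten times the reference rate gets a time scale ten times
# shorter (a_T reduced by one unit of log10 days).
print(adjusted_a_T(4.0, 1e-3, 1e-4))  # -> 3.0
```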
4.3. A Three-Dimensional Version of EEPAS?
The EEPAS model at present only makes use of two spatial dimensions: latitude and longitude. All earthquakes within a chosen depth range are treated the same, regardless of their estimated hypocentral depths. This is primarily because depth determinations are often poorly constrained. In the New Zealand catalogue, many depths are fixed by analysts, rather than directly estimated, because of the difficulty of estimating depths using a 2D velocity model and the available seismograph network. Researchers expect that the seismograph network will become denser over time and that a comprehensive 3D velocity model
[72] will be incorporated in the GeoNet earthquake locator. As a result, the precision of depth determinations will improve. Then, it will make sense to shift to three-dimensional distance determinations in the EEPAS model.
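The change would essentially amount to replacing epicentral with hypocentral distances; a minimal sketch using a local flat-earth approximation (adequate at the scale of EEPAS smoothing distances):

```python
import math

def epicentral_distance_km(lat1, lon1, lat2, lon2):
    """Approximate horizontal separation using a local equirectangular
    projection (sufficient for distances of tens of kilometres)."""
    mean_lat = math.radians((lat1 + lat2) / 2.0)
    dx = 111.32 * math.cos(mean_lat) * (lon2 - lon1)
    dy = 111.32 * (lat2 - lat1)
    return math.hypot(dx, dy)

def hypocentral_distance_km(lat1, lon1, z1, lat2, lon2, z2):
    """3D distance: horizontal separation plus depth difference (km)."""
    return math.hypot(epicentral_distance_km(lat1, lon1, lat2, lon2),
                      z2 - z1)
```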
4.4. Target-Earthquake Oriented Compensation for Missing Precursors
Researchers have shown how to compensate EEPAS for missing earthquakes with a fixed lead time. However, applying a fixed lead time is potentially wasteful of precious earthquake catalogue data. Ignoring the early earthquakes in a catalogue can adversely affect the forecasting of large earthquakes, which have very long precursor times. It is the largest earthquakes that researchers are ultimately most interested in forecasting, even though conformity to the Gutenberg-Richter law limits their contribution to the information gain.
The challenge is then to use as much of the past catalogue as possible and to compensate the forecast of each target earthquake for the incompleteness of precursory contributions at each point in time, location, and magnitude. This is what researchers call “target-oriented” compensation. Shifting from a fixed lead time to target-oriented compensation would involve modifying the relevant model equations.
The incompleteness of precursory contributions for each target earthquake depends on the completeness of the catalogue in its vicinity in the period prior to its occurrence.
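One possible form of such compensation, sketched under the assumption that precursor times are lognormally distributed (as in EEPAS), with mu_T and sigma_T the mean and standard deviation of log10 precursor time in days (in EEPAS the mean depends on the precursor magnitude); the floor parameter is an assumed safeguard:

```python
import numpy as np
from scipy.stats import norm

def observable_fraction(lead_time_days, mu_T, sigma_T):
    """Probability that a precursor of a given target earthquake falls
    within the available lead time, under a lognormal time distribution."""
    return norm.cdf((np.log10(lead_time_days) - mu_T) / sigma_T)

def compensation_factor(lead_time_days, mu_T, sigma_T, floor=0.05):
    """Inflate the transient rate-density contribution by the reciprocal
    of the observable fraction; the floor caps the factor when nearly
    all precursors predate the catalogue."""
    return 1.0 / max(observable_fraction(lead_time_days, mu_T, sigma_T),
                     floor)
```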
4.5. Accommodating Variable Incompleteness of the Earthquake Catalogue
Completeness of an earthquake catalogue varies with time, magnitude, and location, depending on the network configuration and instrumentation
[73]. Treatments of catalogue completeness can range from simple to elaborate. In the simplest approach, one might choose a starting time
t0 after which the input catalogue is approximately complete for all magnitudes above a minimum threshold
m0. Target-oriented compensation could then be applied based on the lead time between
t0 and the time of each target earthquake. A more elaborate approach would be to estimate a magnitude-dependent starting time
t0(m) at which the catalogue becomes complete for magnitude
m>m0. The lead time
L(m) for a given target earthquake then varies with the input magnitude. Furthermore, one can also take into account the effect of spatial variations in catalogue completeness, due to time-varying coverage of the region of interest by the seismic network, by estimating a location-dependent starting time at which the catalogue is complete for input magnitudes
m>m0 and the resulting variable lead times
L(m) at the time when a target earthquake occurs.
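A sketch of the simpler, magnitude-dependent variant: given a step-wise completeness model t0(m) (assumed input), the lead time for a target earthquake at time t_target follows directly:

```python
def t0_of_m(m, completeness_steps):
    """Completeness start time for input magnitude m, from a list of
    (magnitude_threshold, start_time) pairs sorted by magnitude:
    larger earthquakes are typically complete further back in time."""
    t0 = None
    for m_c, t_start in completeness_steps:
        if m >= m_c:
            t0 = t_start
    return t0

def lead_time(t_target, m, completeness_steps):
    """Magnitude-dependent lead time L(m) for a target earthquake."""
    return t_target - t0_of_m(m, completeness_steps)

# Hypothetical example: complete above M3 since 1987, above M4 since
# 1964, above M5 since 1940.
steps = [(3.0, 1987.0), (4.0, 1964.0), (5.0, 1940.0)]
print(lead_time(2020.0, 4.5, steps))  # -> 56.0
```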
4.6. Optimal Use of the Space-Time Trade-Off
The space-time trade-off of precursory seismicity presents opportunities to improve EEPAS forecasts by mixing models from points on the line of even trade-off, as previously demonstrated
[64]. However, optimally incorporating the trade-off into the EEPAS model remains a challenge. The space-time trade-off imposes a relation between the fitted values
aT and
σA. However, it may also affect other parameters, such as
σT.
It is undesirable to incorporate the trade-off subjectively. Ideally, a revised fitting process would automatically integrate contributions to the forecast from points along the line. This would require some reformulation of the model.
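For instance, a revised fitting process might average the rate densities of variants fitted at sampled points along the trade-off line; the sketch below assumes a list of such fitted variants and, for simplicity, equal weights, which a full implementation would instead optimize:

```python
def traded_off_mixture(rate_fns, t, m, x, y):
    """Equal-weight additive mixture of EEPAS variants fitted at
    different points on the line of even space-time trade-off
    (e.g., longer time scales paired with smaller spatial scales)."""
    return sum(f(t, m, x, y) for f in rate_fns) / len(rate_fns)
```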
4.7. Development of a Global Forecasting Model
An important goal for the future is the development of a global EEPAS model. The aim is to forecast the largest earthquakes, e.g., M≥7, expected to occur anywhere in the world, with time horizons extending out to several decades, using a global catalogue. All factors now known to affect the EEPAS parameters—incompleteness of precursory earthquakes, the space-time trade-off, and dependence on long-term earthquake rates—need to be simultaneously addressed in a coherent way to develop such a global model.
The model would be regionally adjustable to accommodate variation in the earthquake rate and the space-time trade-off of precursory seismicity. It would also include compensation for incompleteness of precursory contributions. Earthquake occurrence rates vary by several orders of magnitude between plate-boundary regions and continental regions. The time distribution in the EEPAS model would therefore vary over a similarly wide range. This induces far more variability in the completeness of precursory contributions than in regional catalogues, which adds to the challenge. These complexities imply there is still some way to go to develop a global EEPAS model.