Examples and Tutorials#

This section demonstrates PyEEPAS integration with the earthquake forecasting ecosystem through four interactive notebooks.

Framework Integration#

PyEEPAS is designed to work seamlessly with established seismological tools:

  • pyCSEP (Savran et al., 2022; Graham et al., 2024): Standardized forecast evaluation using consistency tests (L-test, N-test, S-test, M-test) and comparative scoring rules (log-likelihood, Brier, Kagan information scores).

  • SeismoStats (Mirwald et al., 2025): Statistical seismology package providing robust b-value estimators (b-positive, Tinti, Kijko-Smit) and catalog preprocessing tools.

  • Rectangular Algorithm (Christophersen et al., 2024): Automated \(\Psi\) phenomenon detection that identifies precursor-mainshock pairs and derives initial EEPAS parameter estimates through fixed-effects regression.

The complete workflow from raw catalog preprocessing to rigorous forecast evaluation is demonstrated below.

Notebooks#

Notebook 1: Automated \(\Psi\) Phenomenon Detection#

Examining the \(\Psi\) Phenomenon in Italian Seismicity

Purpose: Demonstrate the rectangular algorithm for automated precursor identification.

Integration Highlights:

  • Automated \(\Psi\) Detection: The rectangular algorithm (Christophersen et al., 2024) systematically identifies all precursor-mainshock pairs within a specified search radius (e.g., 400 km).

  • Fixed-Effects Regression: Addresses the space-time trade-off inherent in nested observations by properly distinguishing within-mainshock correlation from between-mainshock scaling relationships.

  • Initial Parameter Estimation: Derives initial values for EEPAS parameters from empirical scaling relations:

    \[\begin{split}\log_{10} T_p &= a_T + b_T M_p \\ \log_{10} A_p &= a_A + b_A M_p \\ M_m &= a_M + b_M M_p\end{split}\]

Outputs:

  • Scatterplots with fitted scaling relations

  • Initial parameter estimates

Key Advantage: Replaces manual \(\Psi\) identification (a major barrier to EEPAS adoption) with a fully automated, reproducible procedure.
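As an illustration of the kind of scaling-relation fit involved, the sketch below recovers \(a_T\) and \(b_T\) from synthetic precursor data by ordinary least squares. This is a simplified stand-in: the actual algorithm uses fixed-effects regression across mainshocks, and all data here are synthetic, not from the Italian catalog.

```python
import numpy as np

# Synthetic precursor magnitudes M_p and precursor times T_p (days),
# generated from log10(T_p) = a_T + b_T * M_p plus small noise.
rng = np.random.default_rng(0)
a_T_true, b_T_true = 1.0, 0.5
M_p = rng.uniform(4.0, 6.5, size=200)
log_T_p = a_T_true + b_T_true * M_p + rng.normal(0.0, 0.1, size=200)

# Ordinary least-squares fit of log10(T_p) on M_p
# (polyfit returns [slope, intercept] for degree 1).
b_T_hat, a_T_hat = np.polyfit(M_p, log_T_p, 1)

print(f"a_T = {a_T_hat:.2f}, b_T = {b_T_hat:.2f}")
```

The recovered coefficients serve as starting values for the subsequent maximum-likelihood optimization, which is why a rough but automated fit is sufficient at this stage.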

Notebook 2: Forecast Evaluation (Reproduce Published Results)#

EEPAS Italy Results: Visualization and PyCSEP Evaluation

Purpose: Evaluate EEPAS forecasts from config_italy_reproduce.json using pyCSEP.

Configuration: Reproduces the published results of Biondini et al. (2023), using the same initial parameters reported in the literature.

Key Features:
  • Validates that the framework can replicate the published results in under an hour

  • Uses manually-determined initial EEPAS parameters from literature

  • Demonstrates computational efficiency of the Python implementation

Notebook 3: End-to-End Pipeline Evaluation (Automated Parameter Estimation)#

EEPAS Italy Results: Visualization and PyCSEP Evaluation

Purpose: Evaluate EEPAS forecasts from config_italy_endtoend.json — the complete automated end-to-end workflow.

Configuration: Fully automated pipeline from raw catalog to forecast evaluation:
  • mT = 4.95 (target magnitude threshold)

  • m0 = 2.95 (completeness magnitude, ~2 units below mT)

  • Initial parameters automatically estimated using rectangular algorithm and fixed-effects regression

  • Automatic boundary adjustment during optimization
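Such a configuration might look like the fragment below. This is a hypothetical sketch: the actual schema and key names of config_italy_endtoend.json are not shown in this section and may differ.

```json
{
  "mT": 4.95,
  "m0": 2.95,
  "initial_parameters": "rectangular_algorithm",
  "boundary_adjustment": "auto"
}
```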

Integration Highlights:

  • Consistency Tests:

    • L-test: Overall likelihood fit (observed earthquakes vs forecast)

    • N-test: Total event count (Poisson or Negative Binomial)

    • S-test: Spatial distribution consistency

    • M-test: Magnitude distribution consistency

  • Comparative Scoring:

    • Log-likelihood score: Measures overall model fit

    • Brier score: Particularly suitable for rare events

    • Kagan information score: Quantifies spatial informativeness

  • pyCSEP-Compatible Output: EEPAS generates gridded forecasts in the standard format required by pyCSEP, enabling seamless integration with the Collaboratory for the Study of Earthquake Predictability (CSEP) testing framework.

Key Outputs:
  • Spatial forecast maps with observed earthquakes

  • PyCSEP consistency test results (L-test, N-test, S-test, M-test)

  • Comparative scoring metrics (log-likelihood, Brier score)

  • Improved log-likelihood scores relative to manually initialized parameters

Key Advantage: Demonstrates that automatically estimated parameters can be used directly for forecasting without manual \(\Psi\) identification, passing strict consistency tests and achieving improved performance. This removes a major barrier to EEPAS adoption in new regions.
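The N-test logic, for example, reduces to comparing the observed event count against the Poisson distribution implied by the forecast's total rate. pyCSEP's `poisson_evaluations` module provides the full implementation; the sketch below is a stripped-down, self-contained illustration of the two-sided quantile formulation.

```python
import math

def poisson_cdf(k: int, lam: float) -> float:
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def n_test(total_forecast_rate: float, n_observed: int):
    """Two-sided Poisson N-test quantiles.

    delta1: probability of observing at least n_observed events
            (small delta1 -> forecast rate too low).
    delta2: probability of observing at most n_observed events
            (small delta2 -> forecast rate too high).
    """
    delta1 = 1.0 - poisson_cdf(n_observed - 1, total_forecast_rate)
    delta2 = poisson_cdf(n_observed, total_forecast_rate)
    return delta1, delta2

# Example: forecast predicts 10 events in total, 12 were observed.
d1, d2 = n_test(total_forecast_rate=10.0, n_observed=12)
print(f"delta1 = {d1:.3f}, delta2 = {d2:.3f}")
```

A forecast passes the N-test when neither quantile falls below the chosen significance level, i.e., the observed count is not in either tail of the forecast distribution.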

Notebook 4: Catalog Preprocessing with SeismoStats#

EEPAS Italy Preprocessing: Estimation of Completeness (\(m_c\)) and b-value

Purpose: Estimate magnitude of completeness (\(m_c\)) and b-value using SeismoStats.

Integration Highlights:

  • SeismoStats Package: Provides multiple b-value estimators:

    • b-positive (Lippiello & Petrillo, 2024): Maximum likelihood with positive bias correction

    • Tinti & Mulargia (1987): Accounts for magnitude binning

    • Kijko & Smit (2012): Handles temporal variation in completeness

  • Magnitude of Completeness (\(m_c\)) Estimation:

    • Maximum curvature method

    • Goodness-of-fit test (GFT)

    • b-value stability analysis

  • Gutenberg-Richter Validation:

    \[\log_{10} N(m) = a - b \cdot m\]

    where \(b \approx 1.0\) for most regions.

EEPAS Parameter Dependency:

The estimated b-value feeds directly into EEPAS:

\[\beta = b \ln 10 \approx 2.303 b\]

This \(\beta\) parameter appears in:

  • Incompleteness correction: \(\Delta(m) = \Phi\left(\frac{m - a_M - b_M m_0 - \sigma_M^2 \beta}{\sigma_M}\right)\)

  • Normalization factor: \(\eta(m) \propto \exp\{-\beta[a_M + (b_M - 1)m + \sigma_M^2 \beta / 2]\}\)
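A quick numerical check of this dependency converts a b-value to \(\beta\) and evaluates the incompleteness correction \(\Delta(m)\) via the standard normal CDF. The parameter values below are illustrative placeholders, not fitted Italian values.

```python
import math

def std_normal_cdf(x: float) -> float:
    """Phi(x) computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def incompleteness_correction(m, a_M, b_M, m0, sigma_M, beta):
    """Delta(m) = Phi((m - a_M - b_M*m0 - sigma_M^2 * beta) / sigma_M)."""
    return std_normal_cdf((m - a_M - b_M * m0 - sigma_M**2 * beta) / sigma_M)

b = 1.0
beta = b * math.log(10)  # beta = b ln 10 ~ 2.303 for b = 1

# Illustrative parameter values (placeholders, not fitted to Italy).
a_M, b_M, m0, sigma_M = 1.0, 1.0, 2.95, 0.4

deltas = [incompleteness_correction(m, a_M, b_M, m0, sigma_M, beta)
          for m in (4.0, 5.0, 6.0)]
print([f"{d:.3f}" for d in deltas])
```

As expected, \(\Delta(m)\) increases monotonically toward 1 with magnitude: larger mainshocks are progressively less affected by precursor incompleteness.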

Key Advantage: Ensures EEPAS forecasts are based on statistically sound catalog preprocessing, avoiding biases from incorrect \(m_c\) or \(b\)-value estimates.

Complete Workflow Summary#

The four notebooks demonstrate a complete earthquake forecasting pipeline:

  1. Preprocessing (Notebook 4): Estimate \(m_c\) and \(b\) using SeismoStats

  2. Parameter Initialization (Notebook 1): Automate \(\Psi\) detection using Rectangular Algorithm

  3. Model Fitting: Optimize EEPAS parameters via maximum likelihood (command-line tools)

  4. Forecast Generation: Create gridded rate forecasts (command-line tools)

  5. Statistical Evaluation (Notebooks 2 and 3): Validate forecasts using pyCSEP

This end-to-end integration showcases EEPAS as a modern, reproducible framework for medium- to long-term earthquake forecasting, fully compatible with the established tools of statistical seismology.

References#

  • Christophersen, A., Rhoades, D. A., & Hainzl, S. (2024). Algorithmic Identification of the Precursory Scale Increase Phenomenon in Earthquake Catalogs. Seismological Research Letters, 95(6), 3464–3481.

  • Brier, G. W. (1950). Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1), 1–3.

  • Graham, K. M., Bayona, J. A., Khawaja, A. M., et al. (2024). New features in the pyCSEP toolkit for earthquake forecast development and evaluation. Seismological Research Letters, 95(6), 3449–3463.

  • Kagan, Y. Y. (2009). Testing long-term earthquake forecasts: likelihood methods and error diagrams. Geophysical Journal International, 177(2), 532–542.

  • Lippiello, E., & Petrillo, G. (2024). b-more-incomplete and b-more-positive: Insights on a robust estimator of magnitude distribution. Journal of Geophysical Research: Solid Earth, 129(2), e2023JB027849.

  • Mirwald, A., Schmid, N., Han, M., Rohnacher, A., Mizrahi, L., Ritz, V. A., & Wiemer, S. (2025). SeismoStats: A Python Package for Statistical Seismology. GitHub repository. swiss-seismological-service/SeismoStats

  • Savran, W. H., Bayona, J. A., Iturrieta, P., Asim, K. M., Bao, H., Bayliss, K., Herrmann, M., Schorlemmer, D., Maechling, P. J., & Werner, M. J. (2022). pyCSEP: A Python toolkit for earthquake forecast developers. Seismological Research Letters, 93(5), 2858–2870.