Researchers from Google’s AI division and Harvard University have created an AI model capable of predicting the locations of aftershocks up to one year after a major earthquake. The model was trained on data from 199 major earthquakes and the roughly 130,000 aftershocks that followed them, and it proved more accurate than the standard method used to forecast aftershock locations today.
Aftershocks included in the dataset used to train the neural network took place within a region extending 100 kilometers horizontally and 50 kilometers vertically (in depth) from each earthquake’s epicenter.
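That setup can be pictured as dividing the region around each epicenter into grid cells and labeling each cell by whether an aftershock occurred inside it. The sketch below illustrates the idea; the 5 km cell size is an illustrative assumption, not a figure from the article.

```python
import numpy as np

# Illustrative cell size only; the article does not specify one.
CELL_KM = 5.0

def label_cells(aftershocks_xyz, horiz_extent=100.0, depth_extent=50.0, cell=CELL_KM):
    """aftershocks_xyz: (N, 3) array of (x, y, depth) offsets in km from the
    epicenter. Returns a boolean grid marking cells containing >= 1 aftershock.
    Events outside the 100 km horizontal / 50 km depth box are ignored."""
    nx = int(2 * horiz_extent / cell)   # x spans [-100, 100) km
    ny = int(2 * horiz_extent / cell)   # y spans [-100, 100) km
    nz = int(depth_extent / cell)       # depth spans [0, 50) km
    grid = np.zeros((nx, ny, nz), dtype=bool)
    for x, y, z in aftershocks_xyz:
        if abs(x) < horiz_extent and abs(y) < horiz_extent and 0 <= z < depth_extent:
            i = int((x + horiz_extent) // cell)
            j = int((y + horiz_extent) // cell)
            k = int(z // cell)
            grid[i, j, k] = True
    return grid
```

Framing the problem on a grid like this turns aftershock forecasting into a per-cell binary classification task, which is the kind of problem a neural network can learn directly.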
“We found that after feeding these model stress changes into the neural network, the neural network could sort of predict aftershock locations in the testing dataset more accurately than the sort of baseline Coulomb failure stress change criterion that’s used a lot in studies of aftershock locations,” Phoebe DeVries of the Department of Earth and Planetary Sciences at Harvard University told VentureBeat in a phone interview.
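The Coulomb failure stress change criterion DeVries mentions as the baseline can be sketched in a few lines: a location is flagged as prone to aftershocks when the stress change ΔCFS = Δτ + μ′Δσₙ exceeds a small positive threshold. The friction coefficient of 0.4 and the 0.01 MPa trigger threshold below are common illustrative choices in the literature, not values reported in this study.

```python
def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    """Coulomb failure stress change in MPa.

    d_shear_mpa:  shear stress change resolved on a receiver fault.
    d_normal_mpa: normal stress change (positive = unclamping the fault).
    mu_eff:       effective friction coefficient (assumed value).
    """
    return d_shear_mpa + mu_eff * d_normal_mpa

def coulomb_predicts_aftershock(d_shear_mpa, d_normal_mpa, threshold_mpa=0.01):
    """Baseline criterion: predict aftershocks where delta-CFS exceeds
    a small positive threshold (assumed here to be 0.01 MPa)."""
    return coulomb_stress_change(d_shear_mpa, d_normal_mpa) > threshold_mpa
```

The study’s neural network, by contrast, was fed the stress changes themselves and learned its own mapping to aftershock likelihood, rather than relying on a single fixed threshold rule like this one.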
Data used to train the model came from noteworthy earthquakes over the past few decades, such as the 2004 Sumatra earthquake, the 2011 earthquake in Japan, the 1989 Loma Prieta earthquake in the San Francisco Bay Area, and the 1994 Northridge earthquake near Los Angeles.
The results were published today in the journal Nature. The study was authored by DeVries, Harvard University earth and planetary science professor Brendan Meade, and Google machine learning researchers Martin Wattenberg and Fernanda Viégas.
No actual seismologists took part in the research, though DeVries and Meade consider themselves to be computational earth scientists.
Lessons learned training the AI model will be used to explore an even bigger question: What triggers earthquakes?
“While most neural networks are extraordinarily difficult to interpret, and are sometimes referred to as black boxes, I think because we had some idea of the physics that might be involved in this, we brought with us knowledge that the transfer of stresses via elasticity was important,” Meade told VentureBeat in a phone interview. “It turned out that our result was interpretable. We could look at what was coming out of this network and actually make sense of it, and it’s actually pointed us to some possibly different physical theories of what causes earthquake triggering, and so it’s leading us in a new direction, which is exciting for us.”
The model is unable to factor in earthquakes produced by other major natural disasters, such as a volcanic eruption, Meade said.
“Any machine learning application, whether or not the neural network has inferential power, really depends not only on the architecture but on the training set used for it, and we used no training sets related to volcanoes or anything like that, so we have no reason to believe at all it would work for events like that,” he said.
The model was trained on historical data from past major earthquakes, but going forward it will be informed by data from future quakes, Meade said.
Updated 12:08 p.m. Aug. 30 Correction: The original version of this article stated that Brendan Meade was a Google employee when he is in fact a professor at Harvard University.