Despite past failures, geophysicists think earthquake prediction might yet be possible
Jeff McGuire says he does not want to be known as the guy who predicts earthquakes. But in September 2008, a magnitude 6.0 quake shook the ocean floor at a fault along the East Pacific Rise — within 10 kilometers of the spot, and within the year-and-a-half window, that McGuire and his colleagues had predicted.
It is in fact possible to predict a large quake on a short timescale, says McGuire, of the Woods Hole Oceanographic Institution — when the geology is relatively simple, as on a transform fault along the East Pacific Rise. And his year-and-a-half time frame is short compared with the typically decades-long forecasts for large earthquakes on other types of faults.
On the continents, however, the geology is not so simple. Most big quakes happen at the boundaries between tectonic plates, on faults known to have experienced big quakes in the past. Yet some of the largest quakes in the United States have happened nowhere near a plate boundary, and some big quakes have occurred where no fault was previously known to exist. Decades of false starts and failures have led many experts to conclude that making accurate short-term predictions of those rare but big earthquakes is a hopeless quest.
“We understand the complexity of earthquakes,” says David Jackson, a seismologist at the University of California, Los Angeles. “Big ones start as little ones and find enough energy to keep going.” Figuring out when big ones are going to happen requires knowing much more about how quakes begin, and also about how big they will grow once they start.
“It may be that the amount of information that we need to have about the fault to predict how big an earthquake will grow is, for all practical purposes, unknowable,” says geophysicist Greg Beroza of Stanford University. “Still, until we have a deeper understanding of fault behavior, I think it’s important to keep an open mind.”
More and more researchers’ minds have been opening. While scientists aren’t exactly optimistic, some of their pessimism about prediction is fading, says Susan Hough, a seismologist in the U.S. Geological Survey’s Pasadena, Calif., office. “There’s a lot more talk in serious seismology circles about prediction.”
Fueling the new attitude are tools, knowledge, technologies and data that researchers didn’t have before, says Mike Blanpied of the USGS Earthquake Hazards Program and the National Earthquake Prediction Evaluation Council. Some 4,000 stations, for example, now monitor Earth’s known faults. Global Positioning System receivers constantly track the ground as it moves. Instruments that measure tiny changes in strain and motion deep underground are embedded in some of the world’s most dangerous faults. And computers store and process huge data sets on it all.
These ingredients add up to a perfect recipe for a mathematical approach to understanding earthquakes — and offer the intriguing possibility that a quantitative picture of the ways quakes change how rocks share and trade stress underground could help in determining when a small quake will become a big one.
“There’s a lot of data now to work with that didn’t exist earlier,” Blanpied says.
Many seismologists maintain that providing absolute, short-term prediction of a big quake in a specific place is not possible. “Scientific focus is better directed at long-term forecasts to improve the information that goes into building codes and insurance rates,” says David Applegate, senior science adviser for earthquake hazards at USGS in Reston, Va. Other researchers are focused on developing early warning systems, already deployed in Japan, which detect an earthquake’s early seismic waves and give people crucial seconds of notice before shaking begins. But other experts can’t resist pursuing the idea that a week’s, month’s or even year’s warning of a large quake might be possible.
One session at a meeting this year of the Seismological Society of America was titled “Global Collaborative Earthquake Predictability Research,” the mission of an international research consortium now in its third year. The goal of the group, the Collaboratory for the Study of Earthquake Predictability, is to standardize and make more scientific the way earthquake predictions are stated and tested, says its director, Tom Jordan of the University of Southern California in Los Angeles. The group also aims to continuously monitor faults around the world and test how well statistical, quantitative models of earthquake likelihood hold up to what happens in real time. One testing center was set up at USC three years ago.
Not long ago, “earthquake” and “predictability” would have been unlikely adjoining words in a seismological society session. Long faded was the excitement created by the apparently successful prediction of a magnitude 7.3 quake that struck Haicheng, China, in 1975.
“It was kind of a successful prediction,” Hough says, “although it turns out, when you dig into it, it was largely a matter of luck.”
Inspired in part by the Haicheng prediction, the U.S. government started the National Earthquake Hazards Reduction Program and billed it largely as a prediction program, says Hough, who is writing a book on quake prediction.
But during the 1980s and 1990s, geologists’ pessimism about the feasibility of prediction grew. “Earthquakes happened that hadn’t been predicted,” Hough says.
Still, researchers kept looking for clear precursor signals. For example, since the 1970s some researchers have explored the idea that the ground releases unusual amounts of radon gas before a quake starts. But sometimes the gas release precedes large quakes, and sometimes it doesn’t, Applegate says. Similarly, the premise that underground changes create low-frequency magnetic waves detectable in the atmosphere gained wide favor after a reported magnetic signal for the magnitude 6.9 Loma Prieta earthquake that shook the San Francisco Bay Area in October 1989. But a new analysis of that data in the April Physics of the Earth and Planetary Interiors concludes that the signal was the result of a malfunctioning amplifier on a sensor.
The main problem with earthquake predictions, however, is getting the time right. “We think we’re probably doing an OK job of identifying where earthquakes are likely to occur and probably pretty well on what the largest magnitudes are expected to be for a particular fault for a particular area,” Blanpied says. “We’ve even been able to get the long-term rate of earthquakes. But saying when a particular earthquake is going to occur has really not ever been done.”
Shakes before quakes
Many seismologists have turned to assessing various factors, rather than searching for any one sign. One idea is that patterns of small quakes, which rattle the Earth every day, contain information about dynamics around faults and within rocks.
“There’s this appreciation that the big ones grow out of the little ones, and that the little ones redistribute the stress,” says UCLA’s Jackson. “They set up the stress for the big ones.”
Much of the current work aims to decode how stress is distributed and redistributed far below the surface and among more than one fault in an area. Understanding that pattern could help scientists recognize when stress is setting the stage for a large quake.
In 2000, Jackson and several colleagues proposed the Regional Earthquake Likelihood Model, or RELM, as a method for testing forecasts of earthquake odds. RELM allows probabilities to be tested not along one fault, but by units of location, magnitude and time. A modeler divides an area into grids of projected probability. From there, the model can be measured against what actually happens over time in the area, and also measured against other models of probability. “RELM is flexible precisely because it is a blank sheet,” Jackson says. “Put probabilities in, and see whether future earthquakes occur in the high or low probability areas you specified.”
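The grid-and-score idea behind RELM can be sketched in a few lines. The cells, rates and counts below are invented for illustration, and the Poisson log-likelihood score shown is one common way such grid forecasts are compared in this kind of testing, not necessarily the exact metric any one group uses:

```python
import math

# Hypothetical forecast: expected number of quakes per (lat bin, lon bin,
# magnitude bin) cell over the test period. Keys and rates are illustrative.
forecast = {
    ("34.0N-34.1N", "118.2W-118.1W", "M4-5"): 0.20,
    ("34.0N-34.1N", "118.2W-118.1W", "M5-6"): 0.02,
    ("33.9N-34.0N", "118.2W-118.1W", "M4-5"): 0.05,
}

# Observed quake counts in the same cells over the same period.
observed = {
    ("34.0N-34.1N", "118.2W-118.1W", "M4-5"): 1,
    ("34.0N-34.1N", "118.2W-118.1W", "M5-6"): 0,
    ("33.9N-34.0N", "118.2W-118.1W", "M4-5"): 0,
}

def poisson_log_likelihood(forecast, observed):
    """Score a grid forecast: sum of log P(n_i | rate_i) over cells,
    treating each cell's quake count as Poisson-distributed."""
    total = 0.0
    for cell, rate in forecast.items():
        n = observed.get(cell, 0)
        # log of the Poisson pmf: n*log(rate) - rate - log(n!)
        total += n * math.log(rate) - rate - math.lgamma(n + 1)
    return total

print(round(poisson_log_likelihood(forecast, observed), 3))
```

A higher (less negative) joint log-likelihood means the forecast matched what actually happened better, which is what lets competing models be ranked against one another over time.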
A former postdoctoral researcher with Jackson, Agnès Helmstetter, now at the University of Grenoble in France, used the RELM approach to suggest that an area’s distribution of past seismicity, including quakes as low as magnitude 2, can be a map for predicting future large earthquakes. Her application of RELM is one of the models being compared against real-time measurements of ongoing seismic activity around the collaboratory’s first testing center at USC.
So far, Helmstetter’s application of the model is proving robust against almost three years of seismic measurements, Jackson says. One possible reason is that she is using a large amount of seismic data.
“Forecasts based on past seismicity may provide more accurate forecasts than models based on geological data — fault location and slip rate — because many faults are not known, or spatial resolution is not as good as seismicity data,” Helmstetter says.
The most hopeful lead for specifying earthquake location and likelihood, Jackson says, is the observation that earthquakes cluster. Such a swarm, a sudden jump in the number of small quakes in one area, clearly signaled the magnitude 6 quake on the East Pacific Rise fault, McGuire says. “The fault kind of turned on for a few days,” he says.
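A minimal sketch of what detecting a fault that has "turned on" might look like: flag any day where the recent average of small-quake counts jumps well above the longer-term background rate. The window lengths, threshold and counts below are invented for illustration, not taken from any operational system:

```python
def swarm_alert(daily_counts, window=3, baseline=30, factor=5.0):
    """Flag days where the mean small-quake count over the last `window`
    days reaches `factor` times the mean over the preceding `baseline`
    days. All thresholds here are illustrative."""
    alerts = []
    for i in range(baseline + window, len(daily_counts) + 1):
        recent = daily_counts[i - window:i]
        prior = daily_counts[i - window - baseline:i - window]
        prior_mean = sum(prior) / baseline
        recent_mean = sum(recent) / window
        if prior_mean > 0 and recent_mean >= factor * prior_mean:
            alerts.append(i - 1)  # index of the day that triggered
    return alerts

# A quiet background of roughly 2 events per day, then a sudden burst.
counts = [2] * 33 + [1, 3, 2] + [25, 40, 30, 12]
print(swarm_alert(counts))
```

The hard part, as the L'Aquila and Bombay Beach episodes below show, is not detecting the jump but deciding what it means: most swarms are not followed by a large quake.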
Possibilities and probabilities
On the continents, the picture isn’t as clear. Take central Italy, where small quakes began shaking the region in December 2008. By April, 400 small and medium quakes — including one at magnitude 4.5 in March — had added to the swarm. The Istituto Nazionale di Geofisica e Vulcanologia, or INGV, in Rome estimated that the chances of a large, damaging quake at that time were between 0.1 percent and 0.3 percent for any given week.
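Weekly probabilities that small compound only slowly. Assuming, for simplicity, that weeks are independent (real hazard models treat time dependence more carefully), the chance of at least one event in n weeks is 1 - (1 - p)^n:

```python
def prob_within(weekly_p, weeks):
    """Chance of at least one event over `weeks` weeks, assuming each
    week is an independent trial with probability `weekly_p` — a
    simplification for illustration."""
    return 1 - (1 - weekly_p) ** weeks

# Even at the upper INGV estimate of 0.3 percent per week, the chance of
# a damaging quake at some point in a full year stays below 15 percent.
print(round(prob_within(0.003, 52), 3))
```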
Meanwhile, along the southern portion of California’s San Andreas fault, seismometers were measuring a similar earthquake swarm around the town of Bombay Beach, just south of the Salton Sea, with a magnitude 4.8 quake hitting on March 24. Soon after, the Southern California Earthquake Center announced that chances of a large earthquake striking were 1 to 5 percent over the coming days.
No large quake struck the Bombay Beach area. But on April 6, a magnitude 6.3 quake hit Italy, with its epicenter near the town of L’Aquila. The damage was severe, mainly to old buildings not constructed to withstand shaking. Almost 300 people were reportedly killed.
“Even though the probabilities go way up, they are still low,” Jordan says. “So it’s hard to know exactly what to do in terms of advising people. Chances are nothing is going to happen. But in the case of Italy, it did.”
Warner Marzocchi, a chief scientist at INGV, says using earthquake swarms as clues to big quakes is a promising technique. But it’s far from a sure thing. “If you think this swarm was enough to raise an alarm, you would have to raise a lot of alarms,” he says of the small quakes preceding L’Aquila. Similar swarms occurred in previous years, he says, with no big quakes.
Massimo Cocco, also at INGV, notes that claims that the L’Aquila earthquake had been predicted have little merit. “There was no prediction for this earthquake. There was a claiming, but it was not released in a way that can be evaluated scientifically.”
Such supposed quake predictions are among the reasons Jordan and his colleagues started the collaboratory. Part of the group’s mission is to infuse rigor and standardization into earthquake prognostication, so that predictions are spelled out — with a specific geographical location, time and magnitude range — as scientific hypotheses that can be tested.
Central Italy will be one of the testing sites for the collaboratory, Jordan says. And researchers are already applying the RELM model to the L’Aquila aftershocks, Cocco says, to better understand how the main shock affected stress in the surrounding rock.
Cocco adds that several hundred of the swarm quakes were clustered near the April 6 quake’s nucleation, the area in a fault where an earthquake begins. “We need to work more to understand the nucleation and the physical processes responsible for earthquake initiation,” he says.
The early buildup
The nucleation — the start — of earthquakes could be slow.
Strain builds as plates press each other and try to slide past one another over hundreds of years. At a fault — a weak point in the crust — the strain eventually overcomes the friction between the plates. This sudden frictional failure, or rapid fault slip, happens in seconds and generates seismic waves that travel through rock. In large earthquakes, fault slip may reach the surface.
In 1995, Beroza and William Ellsworth, also of Stanford, proposed a possible transition period between the gradual building of strain and the sudden slip. It is a slow preparation process, the tail end of which generates a seismic signal, Beroza says. The duration of the process could correspond to the size of the resulting earthquake.
This transition process could be related to another phenomenon exciting earth scientists. In 2002, Kazushige Obara, using data from intensive earthquake monitoring in Japan after the 1995 Kobe quake, reported that slow, almost imperceptible movement happens within portions of faults deep below where earthquakes occur. These changes, detectable only with the most sensitive instruments that discern tiny movements in rock, are too slow to generate strong seismic waves. Earth scientists studying this subtle movement use terms such as slow quakes, silent quakes, aseismic slip, nonvolcanic tremor, slow transients and episodic tremor and slip.
Researchers are only beginning to grasp the slow quake phenomenon. “It has not taught us much about garden-variety earthquakes,” says geophysicist John Vidale of the University of Washington in Seattle. But he adds that “tremor has the potential to silhouette patches that will break in future earthquakes. In the Pacific Northwest, for example, the location of tremor suggests that the next magnitude 9 earthquake in the region can break closer to the Puget Sound than we previously thought.”
Beroza says the slow quakes detected so far haven’t been followed by any large earthquakes, but notes that seismologists have been watching these silent quakes for only eight years. The quakes do increase stress on the faults. A team reports in the May 26 Eos that monitoring slow quakes in a subduction zone around the state of Guerrero in Mexico could offer a way to understand how stress is being distributed among different parts of the fault. “As it gets closer to the time of the big earthquake,” Beroza says, “the character or the frequency of the slow earthquake might change.”
The new understanding might reveal a possible precursor pattern for large quakes. Or it might not.
“People need to realize that some natural events like earthquake occurrence have a probabilistic nature,” says Marzocchi of Italy’s INGV. “We may not be able to predict earthquakes. We may only be able to make forecasts.”
Visit the Collaboratory for the Study of Earthquake Predictability website: www.cseptesting.org
Beroza, G.C., and S. Ide. 2009. Deep tremors and slow quakes. Science 324(May 22):1025-1026. 10.1126/science.1171231
Wang, K., et al. 2006. Confirming a Chinese earthquake prediction. Geotimes (June 26).
Jackson, D.D. 2003. Earthquake prediction and forecasting. IUGG 2003 General Assembly of the International Union of Geodesy and Geophysics 150(June 30):335-348.
Helmstetter, A., Y.Y. Kagan, and D.D. Jackson. 2007. High-resolution time-independent grid-based forecast for M ≥ 5 earthquakes in California. Seismological Research Letters 78(January/February):78-86.
Jordan, T.H. 2006. Earthquake predictability, brick by brick. Seismological Research Letters 77(January/February):3-6.
Ellsworth, W.L., and G.C. Beroza. 1995. Seismic evidence for an earthquake nucleation phase. Science 268(May 12):851-855.
Fraser-Smith, A.C., et al. 1990. Low-frequency magnetic field measurements near the epicenter of the Ms 7.1 Loma Prieta earthquake. Geophysical Research Letters 17(August):1465-1468.