
Will We Ever Be Able to Predict Earthquakes?

Once, seismologists correctly predicted a major quake. They were only 12 years off on the timing.

Layers of earth are deformed by the collision of the Pacific and North American plates along the southern San Andreas Fault north of the Salton Sea near Mecca, Calif.

Photo by David McNew/Getty Images

Excerpted from Earthquake Storms: The Fascinating History and Volatile Future of the San Andreas Fault by John Dvorak, out now from Pegasus.

Reporter: Did anyone predict last night’s earthquake?

Charles Richter: Not yet.

The 1970s was to be the decade when earthquake prediction became a reality. In 1975 the Chinese government announced that it had ordered an evacuation of a major city hours before a major quake struck. The Russians were saying that they were using a variety of techniques routinely to predict not only large earthquakes but moderate ones, too. And in the United States, scientists had discovered a number of anomalies near Los Angeles—a broad uplift of the ground, a slowing of seismic waves, increased emission of radon gas—that indicated the city would soon be razed by strong seismic shaking.

And then it all collapsed—that is, the effort to predict earthquakes collapsed.

In 1976 the greatest seismic calamity yet in modern times left hundreds of thousands of people dead in China—and government officials gave no indication that they knew such a disaster was coming.* The evidence used by Russian scientists was scrutinized by other scientists who concluded the Russian effort was plagued by inconsistencies and unsubstantiated claims. And in the United States, additional measurements showed that the anomalies that seemed to doom Los Angeles had either disappeared, or, maybe, had never existed at all.

And the world community of seismologists remains divided—at times, vehemently—over the issue of whether it will ever be possible to predict earthquakes. It’s a question that’s been raised again as the network of faults in Southern California has awakened with seismic activity in recent months. It is a complex problem. And, to date, no one has predicted an earthquake.

An attempt to predict an earthquake near Parkfield, Calif., in the 1980s failed. But this effort was the closest anyone has come to predicting an earthquake—that is, to identifying the fault, giving the magnitude, and limiting the time period when the calamity would occur.

* * *

The San Andreas Fault can be divided into three main segments. The northern segment runs from Cape Mendocino to San Juan Bautista—the part of the fault that ruptured in 1906. The southern segment begins around Cholame, just north of Carrizo Plain, and runs south, eventually forming the southern boundary of the Mojave Desert, continues through Cajon Pass and San Gorgonio Pass, can be picked up 20 miles east of Palm Springs in Coachella Valley, and ends at Bombay Beach on the east side of the Salton Sea. The northern half of the southern segment—from Cholame to Cajon Pass—ruptured in 1857; the southern half of the southern segment—from San Gorgonio Pass to Bombay Beach—did so in about 1690. So all of the San Andreas Fault has broken during a major earthquake in the last few hundred years except for a short middle segment that runs from San Juan Bautista to Cholame and includes the ranching community of Parkfield. This 150-mile segment of the San Andreas Fault is distinctly different from the other parts of the fault: Here the fault is slowly and continuously sliding.

Ten miles south of San Juan Bautista is DeRose Vineyards. It is a family-owned business where the winemaking and tasting room is located in a large building with a concrete floor and metal walls and roof. On the day I visited, I identified myself as an earthquake tourist. The person who was pouring the wine pointed immediately to the center of the building and said, “It’s over there.”

Here the trace of the San Andreas Fault is all too apparent. Running along the floor is a line of broken concrete slabs, up to a foot across, that extends the full length of the building. Where the fault runs beneath a metal wall, the wall has been sheared apart, the two halves now standing as much as two feet apart. Broken ends of twisted rebar are exposed where the metal wall once connected to the concrete floor. A plaque attached to a wall in the center of the DeRose Winery building proclaims the San Andreas Fault at this spot to be a registered natural landmark.

If one drives south of DeRose Vineyards, one can find sets of cracks running diagonally across the pavement. These, too, are the San Andreas Fault. They are visible, as is the slow destruction of the winery at DeRose Vineyards, because along this segment the fault is always sliding. And the sliding can be found as far south as Parkfield, where the fault runs under a bridge. As one might expect, the bridge has a distinct bend over the exact place where it crosses the San Andreas Fault.

The slow sliding is known as seismic creep, caused in part by a constant jitter of small earthquakes. At DeRose Vineyards, the fault slides about an inch a year. At Parkfield, it is half that amount, which means occasionally the Parkfield section has to catch up. It does so with a jolt—a moderate earthquake.

Six times—in 1857, 1881, 1901, 1922, 1934, and 1966—the Parkfield section has surged forward. Each event has been nearly identical in size—corresponding to a magnitude-6 earthquake—and each successive event has occurred, on average, 22 years after the previous one. Moreover, there seemed to be definite precursory signs before the last two events. The main shocks of 1934 and 1966 were each preceded, 17 minutes earlier, by a strong foreshock that was felt over a wide area. Furthermore, an irrigation pipe that crossed the rupture zone separated nine hours before the 1966 event. All this gave credence to the idea that the next Parkfield earthquake might be predicted.
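
As a rough back-of-the-envelope illustration, the 22-year figure and the idea of "catching up" can be reconstructed from the numbers quoted above (the six event years and the roughly half-inch-a-year creep rate at Parkfield); treating those figures as exact is an assumption made only for this sketch.

```python
# Rough check of the Parkfield recurrence arithmetic. The event years and the
# ~0.5 inch/year creep figure come from the passage above; treating them as
# exact values is an assumption made only for illustration.

event_years = [1857, 1881, 1901, 1922, 1934, 1966]

intervals = [later - earlier for earlier, later in zip(event_years, event_years[1:])]
average_interval = sum(intervals) / len(intervals)
print("Intervals between events:", intervals)               # [24, 20, 21, 12, 32]
print(f"Average recurrence: {average_interval:.1f} years")  # about 22

# If Parkfield creeps roughly half an inch a year less than the inch a year
# seen at DeRose, the shortfall it must "catch up" over one average cycle is:
shortfall_inches = 0.5 * average_interval
print(f"Accumulated shortfall per cycle: about {shortfall_inches:.0f} inches")
```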

In 1985 a panel of 12 scientists, formally known as the National Earthquake Prediction Evaluation Council, endorsed a Parkfield prediction, saying that there was a 95 percent chance that a magnitude-6 earthquake would occur along the Parkfield section of the San Andreas Fault by 1993. A dense network of instruments was installed in the hope of trapping the earthquake and detecting any precursory signs in seismic patterns, ground movements, electric or magnetic fields, radon-gas emission, or the chemistry or level of water wells. And then people waited.

Twice, an “A”-level alert was issued, on Oct. 19, 1992, and again on Nov. 14, 1993. Both alerts were triggered by felt earthquakes similar in size to the foreshocks that preceded the 1934 and 1966 events. Both times there was heightened concern that the predicted earthquake might occur within the next 72 hours. California state agencies and emergency services were notified. And both times … nothing happened.

The year 1993 came and went, and no earthquake. Then 1994, 1995, and so on. Finally, at 10:15 in the morning on Sept. 28, 2004, a magnitude-6 earthquake ruptured the Parkfield section of the San Andreas Fault. The predicted earthquake had occurred. Or had it?

There were important differences between the events of 1934 and 1966 and the one that occurred in 2004. First, in 1934 and 1966, the ground rupture began north of Parkfield and propagated south. In 2004 it was in the opposite direction: The rupture started south of Parkfield and propagated north. More important, in the dense network of instruments there were no precursors recorded minutes, hours, or days before the event. There was no foreshock or increase of seismic activity before the event. There was no damage to irrigation pipes. There was no measured change in electric or magnetic fields or in chemistry or level of water wells. Most disconcerting, there was no measured ground movement: There was no warping or rise or fall of the ground surface. There was no underground compression or slight expansion of rock—no dilatancy—and this could be measured with great precision.

Five instruments known as borehole strainmeters were installed within a few miles of where the 2004 rupture formed. Essentially, each instrument consists of a fluid-filled bag stuffed deep down a borehole. If the surrounding rock is compressed or stretched by a tiny amount—equivalent to taking a 100-mile-long rigid bar and compressing or stretching one end by the diameter of a human hair—the bag undergoes a small compression or expansion. But no change was recorded anywhere from weeks down to seconds before the earthquake. As far as anyone can tell, the 2004 Parkfield earthquake was a spontaneous event.
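
To attach a number to that analogy: assuming a typical human hair is about four thousandths of an inch across (an assumed figure, not one given in the text), the implied sensitivity works out to a strain of well under one part per billion. A minimal sketch of the arithmetic:

```python
# Strain implied by the "hair across a 100-mile bar" analogy above. The
# ~0.004-inch hair diameter is an assumed typical value, used only to attach
# a number to the sensitivity; the analogy itself comes from the passage.

hair_diameter_inches = 0.004
bar_length_inches = 100 * 5280 * 12   # 100 miles, converted to inches

strain = hair_diameter_inches / bar_length_inches
print(f"Implied strain: about {strain:.1e}")   # roughly 6e-10, under a part per billion
```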

The Parkfield experiment was successful in identifying where an earthquake would occur and how big it would be, though the all-important when was missed; the event came 12 years too late. Which raises the question: Will it ever be possible to predict earthquakes?

The answer, as it is seen today, is: maybe.

The question of earthquake prediction can be reduced to a more tractable and straightforward question: What triggers a large earthquake?

Imagine this: Initially, an earthquake fault, such as the San Andreas, is relatively quiet. Only a few small earthquakes are occurring, popping off, in a familiar analogy, like kernels of heated popcorn. Then the popping of one earthquake kernel happens to set off more kernels, and those set off more kernels until there is an explosion, or cascade, of kernels popping, a rupture forms, and a large earthquake is produced.

Or imagine this: The lower region of the San Andreas Fault is slowly sliding—without earthquakes—because here the rocks are hot and plastic and are driven to slide smoothly by the slow and constant movement of the Pacific and North American plates. As the slipping region grows, the sliding accelerates until it reaches a critical speed at which a rupture forms in the brittle overlying rock, and a large earthquake is produced.

In the former case—the cascade model of earthquake kernels—the beginning of any large earthquake is no different from the beginnings of countless small ones, which means it is impossible to ever predict large earthquakes.

In the latter case—the pre-slip model—a long process occurs that prepares the San Andreas for a sudden and major slip. In that case, earthquakes might be predicted if we can figure out how to measure the slow sliding and subsequent buildup.
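
The consequence of the cascade picture can be made concrete with a toy branching-process simulation; this is only an illustration of the popcorn analogy, not a model the author describes. Every run starts from a single identical kernel, and nothing about that first pop distinguishes a cascade that fizzles from one that runs away.

```python
import random

# Toy "popcorn cascade": a minimal branching-process sketch of the cascade
# picture above, offered as an illustration only. Each popped kernel sets off
# 0, 1, or 2 new kernels at random, with an average slightly below one, so most
# cascades fizzle quickly while a rare few run away into very large events.

def cascade_size(max_size=10_000):
    active, total = 1, 1          # every cascade starts from one identical kernel
    while active and total < max_size:
        new = 0
        for _ in range(active):
            new += random.choices([0, 1, 2], weights=[0.35, 0.40, 0.25])[0]
        active = new
        total += new
    return total

random.seed(1)
sizes = [cascade_size() for _ in range(20_000)]
print("Largest cascade:", max(sizes))
print("Share that stayed tiny (fewer than 5 kernels):",
      sum(s < 5 for s in sizes) / len(sizes))
# The point: a giant cascade and a tiny one are indistinguishable at the start,
# which is why, under this picture, a large earthquake would give no warning.
```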

Which idea is true—or whether earthquakes work in some other manner entirely—is still the focus of much research today and is hotly debated. But this much is true: When there is a major earthquake, the probability of another major earthquake happening soon after in the same region goes way up. Once the Earth’s crust starts to adjust to the slow buildup of pressure between the tectonic plates, that pressure may be relieved not by a single large event but by a series of major earthquakes striking within a short period.

To put this in concrete numbers: History shows that whenever there is a major earthquake in California, say a magnitude-6 event, which can do substantial damage, there is a 1-in-20 chance that another earthquake of equal or greater magnitude will happen in the same general area within the next three days.

This leads to a practical concern. After a major earthquake, people should brace themselves for an equal or larger event. Emergency services, such as fire and police stations and hospitals, need to prepare for additional injuries and for the disruption of still more roads and utilities. And rescuers attempting to reach people already trapped under debris should be aware that a larger earthquake could strike and compound the catastrophe.

Excerpted from Earthquake Storms: The Fascinating History and Volatile Future of the San Andreas Fault by John Dvorak, out now from Pegasus.

*Correction, April 22, 2014: This piece misidentified the 1976 Tangshan earthquake as the greatest seismic calamity yet. The 1556 Shensi earthquake killed more people, according to the USGS.