Web edition: June 18, 2010
Print edition: July 3, 2010; Vol.178 #1 (p. 32)
Catastrophes come in all shapes and sizes, but some basic causative principles underlie most of them. Robert Bea, an engineer at the University of California, Berkeley, has studied system failures from space shuttle explosions to levee breaks during Hurricane Katrina — but as a former oil rig worker he is most familiar with drilling disasters. Bea has thus assumed a key role in analyzing the response to the Deepwater Horizon spill in the Gulf of Mexico (Page 5). He spoke with Science News contributing editor Alexandra Witze about why the spill could have been foreseen.
You’ve looked at failures in more than 600 engineering systems. How does the Deepwater Horizon spill compare?
It is following this road map to disaster exactly. When I came back from Katrina in New Orleans, I got in front of my class and said I have a new equation for disaster. I was trying to be dramatic. But it is A plus B equals C.
A is important. It’s things like extreme pressures, temperatures, darkness, earthquakes, hurricanes, ice that goes bump in the night in the Arctic, volcanoes that spew into the sky. This is Mother Nature doing what she has done for millions and millions of years. B is ... kind of natural too. It includes people’s hubris, arrogance, greed, ignorance and a real killer called laziness. C is the disaster that comes sooner or later. This story, as we best know it now, is tracking that equation perfectly.
How big is the role of human fallibility?
Eighty percent of high-consequence accidents fall in the second category, [that] of human-factor uncertainties. Of that, 60 percent traces back to trouble in operations and maintenance — things are designed that can’t be built, operated or maintained as intended. When all of these failures get through, you have a ticking time bomb.
A subcategory is unknown knowables, where information exists but something prevents us from analyzing it properly. No matter how good you are and how much insight you think you have, you can’t predict everything. You have to be on constant alert for this category of uncertainty, because it requires a very different set of management tools.
Whose fault was the spill?
The government’s responsibility broke down. Industry’s responsibility broke down. The only one that didn’t is the environment, and unfortunately she’s getting treated pretty badly right now. It is a collective set of breakdowns. The crucial one is government — they’re the parents in the family. Industry are the children. Here the children told the parents what to do. It’s an entire chain: the tool pusher, the rig worker, the company man representing BP, the people in the Minerals Management Service office in New Orleans. Everybody has a share in this one.
The Interior Department is restructuring the Minerals Management Service (the federal agency overseeing offshore drilling) as a result of the spill. Will that make a difference?
Reorganization at the time you’ve got catastrophes is not a good thing [but it can work]. At the Piper Alpha platform in the North Sea, I went to work three days after it blew up [in 1988], killing 167 people. The United Kingdom found its regulation was part of the causation of the accident. It has completely restructured, and it is a leader in this work today. [In 1980] I went to work a few days after the Alexander Kielland accident in the Norwegian sector of the North Sea. Today Norway is helping lead the world in regulatory and industrial operations. The U.K. and Norway both took big, strong, intense kicks in the seat of the pants to restructure and refocus.
What about “quiet failures” — things we don’t know about but that could go wrong at any moment? How common are those?
I see these in court. I’m involved in one right now — the failure of the flood protection system for greater New Orleans during Katrina. In Australia, I’m working on a challenge that is so like the Deepwater Horizon it’s not funny. The tracks of the Montara blowout [in 2009] are damn near identical to this one. And the American public doesn’t know anything about it.
I’ve worked on an Indonesian deepwater development, 10,000 feet deep. That operator, after two years of intense study of the risk involved, said that the reserve remains underdeveloped today because the technology is not there to prevent failures or to mitigate them.
What can we do now to prevent such catastrophes from happening again?
NOAA [the National Oceanic and Atmospheric Administration] and EPA have mobilized and are planning for 10 years from now. We’ll be smarter. We’ll have much more information on the environmental impacts and on organizational breakdowns. The knowledge will be there. The question will be, do we react properly to that knowledge? I hope so.