The symposium — co-sponsored by the Department of Earth, Atmospheric and Planetary Sciences; the Department of Urban Studies and Planning; the Department of Civil and Environmental Engineering; the Center for Global Change Science; and the MIT Energy Initiative — brought together three panels to address what happened, what can be learned and how the event affected people and ecosystems.
Among the speakers, Department of Mechanical Engineering Emeritus Professor Jerome Milgram questioned the usefulness of oil booms in this cleanup effort. He said that in some situations booms can be helpful, such as for collecting oil from some contained spills in calm water. But in a case like this one, with a widespread spill in the open ocean, the use of booms served very little purpose.
Some 850 miles of booms and 5,500 cleanup vessels were deployed to combat this spill, Milgram said, “something like we’ve never seen before.” The amount of oil collected by these means was about 3 percent of the total released from the runaway well, according to government estimates, though Milgram thinks the correct figure is less than 1.5 percent.
Yet despite having himself invented a highly sophisticated oil-boom system, Milgram said that “the bottom line is, what was collected was insignificant.”
Milgram has done detailed analyses of how different kinds of oil booms perform under real-world conditions, and he concludes that the only real benefit from the use of booms and skimmers in the Gulf spill was to provide employment for some of the local people. “When it comes to surface cleanup in the open sea, use your money for something better, in my opinion,” he said.
But Milgram, who led a team that worked on capping the second-largest oil spill ever in the Gulf — the Ixtoc spill off the Mexican coast in 1979 — said the impact of this year’s spill will likely be less serious than many had feared. Marshlands “take a long time to recover, but they do,” he said. The 1979 spill released about 60 percent of the volume of this year’s spill and left traces that can still be found by digging, but “by and large, it wasn’t much of a problem” for the coastal ecosystem, he said.
But many things could be done to prepare for another spill, said Alexander Slocum, professor of mechanical engineering, who was part of a team of scientists and engineers selected by U.S. Secretary of Energy Steven Chu to provide assistance in the oil-spill recovery operations. Slocum said more planning and stockpiling of resources must be done in advance. For example, a variety of caps and collection devices, similar to those that were built on the fly in this case, could be designed, built and kept ready to deploy.
“We can do this safely, we can be ready,” he said. Over the course of the summer, BP, along with the team of experts and others working on the problem, developed a variety of devices to attempt to plug the well — contingencies that could be used depending on how conditions at the well unfolded — including a system for plugging the pipe even if the entire blowout-preventer assembly broke off, leaving just a jagged stub of pipe emerging from the seafloor. In that case, they would have inserted into the stub a steel device containing shaped explosive charges that would be detonated to weld the capping device to the well casing itself. “We did have an ultimate solution,” Slocum says.
Having developed that collection of tools and devices, he said, “We’re suggesting we should have this arsenal available from Day One,” rather than waiting for an emergency to happen.
But the key to preventing such calamities lies in better risk management, suggested Nancy Leveson, professor of aeronautics and astronautics, who has written and consulted extensively on risk prevention in complex engineering systems, such as the space shuttle. Ultimately, she said, major disasters often result from a “culture of denial” within organizations that prevents potential problems from being recognized early enough to be resolved.
In addition, managers are often focused on the wrong indicators, confusing occupational safety with system safety, for example. They look at the number of days employees are out of work because of accidents as a measure of safety, while ignoring potential warning signs of a larger problem. “They’re managing the wrong feedback,” Leveson says. “They focus on operator error or technical failures, and ignore systemic and management factors.”
But there are better ways of managing risks, and one useful approach is to study the methods of companies that have controlled risks more successfully. “We can fix these things,” she said. “They don’t need to happen.”