ABSTRACT
A large oil spill is not simply a physical phenomenon. Those communicating scientific information about a major incident need to be keenly aware of the political, media, and legal consequences of any scientific communications, something the author learned while co-leading two expert teams during the Deepwater Horizon oil spill to (1) estimate the flow rate by Particle Image Velocimetry (PIV) and (2) determine the early fate of the oil (Oil Budget Calculator or OBC).
Unlike in past experience at smaller incidents, reports to the unified command were often transmitted to third parties such as politicians, single-issue lobbies, and various media outlets. Assumptions about the receiver's technical background and neutrality that were valid for the unified command often did not apply to these other groups. As a result, sound scientific conclusions were sometimes misunderstood, misreported, or misused. Resources then had to be dedicated to correcting the misinformation, often a more difficult task than generating the information itself.
Based upon this experience, the following lessons learned are recommended:
- (1) Carefully pick scientific team members based not only upon expertise but also upon the ability to function within group constraints and unified command rules. The eccentric scientist can be a valid stereotype, but emergency events require each individual to subordinate his or her personal ego to team goals. Special interest groups and media organizations may attempt to enlist individual team members to their cause, sowing discord within the group. The PIV team members in particular were subject to media invitations to disclose inner discussions and divisions among the members.
- (2) Vet results not only with the Unified Command but also with team members, by written agreement, prior to public release. Private squabbling over obscure technical points, typical among academic researchers, will be magnified by the press if made public before the points are resolved.
- (3) Carefully write scientific conclusions to lessen the risk that individuals and organizations may misinterpret or spin the results to their own interest. Both the PIV and OBC teams suffered some misinterpretation of their results by non-scientists, a consequence that might have been reduced if the limitations of the results had been more clearly identified and explained in layman's terms.
INTRODUCTION:
Those individuals who choose the scientific professions for their career paths expect to face challenges from peers with regard to scientific claims they may produce. The science and engineering professions usually have established procedures and venues (e.g. refereed journals or technical conferences) where competing viewpoints are weighed against experimental data and qualified expert judgment. Unfortunately, few scientists are trained to face challenges to their scientific communications that arise from non-experts, often based on misunderstandings or external motives. Yet those experts who provide scientific advice on response and impacts from spills of national significance are almost certain to face such challenges. The purpose of this article is to provide insights learned during the Deepwater Horizon spill on such challenges that will hopefully assist others involved in the next ‘Big One’.
SPILL BACKGROUND:
The Deepwater Horizon (DWH) oil spill began on April 20, 2010, when expanding gas in the riser ignited, causing an explosion, collapse of the rig, and the largest spill in North American waters. Oil leakage continued for 87 days, releasing crude oil (minus surface capture) somewhere between the BP estimate of 2.45 million bbl (Blunt, 2013) and the U.S. federal government estimate of 4.2 million bbl (McNutt et al., 2011). Because the leak originated almost a mile below the surface, standard approaches to predicting spill volume and oil behavior were inadequate. The initial estimates of 1000 bbl/day, soon changed to at least 5000 bbl/day, did not match observations in the field. Available spill weathering models were calibrated for surface spills, making the mass balance calculations used in cleanup decisions unreliable. The National Oceanic and Atmospheric Administration Office of Response and Restoration (NOAA/OR&R) provides scientific advice to the federal on-scene command for spills in navigable waters. As senior scientist for OR&R, the author assembled two teams of federal government and external experts to assist in estimating (1) the sub-surface release rate of oil as part of the overall Flow Rate Technical Group (FRTG) and (2) the fate of the oil for response purposes. Calculations from both teams were used to provide estimates for the Incident Command System (ICS) Form 209 on a periodic basis.
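The mismatch between the early flow-rate figures and the eventual totals can be seen with simple arithmetic over the 87-day release. The sketch below is illustrative only: it assumes a constant daily rate and ignores surface capture and the time-varying flow that the official estimates accounted for.

```python
# Illustrative arithmetic only: cumulative release implied by several
# constant daily flow-rate estimates over the 87-day spill duration.
# The official totals also reflect surface capture and a time-varying
# rate, which this sketch deliberately ignores.

SPILL_DAYS = 87

def cumulative_bbl(rate_bbl_per_day, days=SPILL_DAYS):
    """Total barrels released at a constant daily rate."""
    return rate_bbl_per_day * days

# Early official estimates (bbl/day) fall far short of either total:
for rate in (1_000, 5_000):
    print(f"{rate:>6} bbl/day -> {cumulative_bbl(rate):>9,} bbl over {SPILL_DAYS} days")

# Average rates implied by the two published totals:
for label, total_bbl in (("BP estimate", 2_450_000), ("U.S. federal estimate", 4_200_000)):
    print(f"{label}: ~{total_bbl / SPILL_DAYS:,.0f} bbl/day average")
```

Even the revised 5000 bbl/day figure implies well under half a million barrels, an order of magnitude below either published total, which is why the early estimates could not be reconciled with field observations.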
ICS is the chosen approach in the United States to develop a command and control structure for emergency response, rapidly integrating experts from different agencies and organizations. A key requirement for effective response using ICS is the construction of an incident action plan (IAP) for a particular operational period. The mass balance in Form 209 is a critical piece of this plan. Normally, spill estimates used to support the IAP are of great interest only to the on-scene command and key staff. Supplementary explanatory material on spill volume or oil behavior calculations is usually not required by such spill professionals. However, the Deepwater Horizon spill was anything but normal.
The Gulf of Mexico is not an isolated location, and interest in, and media access to, events in its waters are intense. Deepwater Horizon spill reports intended only for command center personnel were apt to appear the same evening on CNN. Academic researchers seeking funding for their pet projects, advocacy groups for various causes, and media outlets looking for a good story were quick to exploit any proclamation or action by the response team for their own purposes, in many cases regardless of the actual facts. In such a climate, it is naïve to expect impartiality or expert understanding. Instead, scientific communications and actions have to be vetted as if the scientist were a candidate for political office, where all statements are subject to non-contextual interpretation. The following recommendations, based upon the DWH experience, are passed on to those who may in the future face similar circumstances.
SCIENTIFIC TEAM SELECTION:
Virtually every scientist old enough to have gray in his or her hair has more than once had to select and lead a team of experts to perform a specific research project. In many cases each expert brings unique knowledge or skills to the team, but there is also likely a great deal of overlap in subject mastery. This can be a source of conflict because great expertise is often accompanied by great ego. The eccentric scientist is a valid stereotype, but emergency events require the individual to subordinate that ego to the goals of the team. Moreover, most researchers are not exposed to outsiders who attempt to exploit their credentials or political leanings for ulterior motives. As such, they may be more vulnerable than the seasoned responder to such appeals.
Deepwater Horizon illustrated the consequences, both intended and not, of choosing team members only from seasoned spill professionals or choosing those who were highly qualified subject matter experts but new to emergency response. In the first category, the author assembled participants whose names are probably well known to many IOSC attendees. This team undertook the task of constructing a new mass balance tool to assign the spilled oil to the various ICS 209 categories, useful as an indicator to command staff of how well the response was doing and how big a job remained. The most commonly used government software model for this purpose, ADIOS2 (Lehr et al., 2002), was neither designed for nor adequate to predict the behavior of oil spilled at such depths. The assembled team members, many of whom had participated in ADIOS2 development, were highly knowledgeable in the field and, equally important, aware of the format and accuracy needed for predictions to the response command. The wild card was the unusual time constraint to develop the tool. While ADIOS2 took a few years to develop, the team had just a few weeks to develop what became known as the Oil Budget Calculator (OBC). This meant that the standard documentation and review that accompany such model development had to be postponed.
As this was the second team assembled (the first is discussed shortly), team members were chosen for scope of knowledge and for inclusiveness, seeking representatives from government, academia, and industry. Unfortunately, outside legal and political pressures limited optimal selection. BP, perhaps understandably, showed reluctance to have its experts be part of the team, although the company did provide technical data. Other oil company experts agreed or declined to participate based, it seemed, upon the advice of their companies' legal advisors. One surprising refusal came from another government agency: while the actual expert gladly provided unofficial recommendations to the team, top officials in his agency prevented the inclusion of his name in the list of contributors. The remaining names, true unsung heroes in the response effort, can be found in the Technical Documentation for the Oil Budget Calculator (Federal Interagency Solutions Group, Oil Budget Calculator Science and Engineering Team, 2010).
Whether from industry, government, or academia, the OBC team experts were almost all familiar with spill response. Excluded from the team were certain academics who appeared in the media with ideas on how to calculate spill behavior but lacked the track record on actual spills possessed by those selected. Arguments can be made for and against this decision. On the one hand, little effort had to be spent on background briefing, since the team members already understood the requirements and limitations of spill forecasting. On the other hand, exclusion reduced the opportunity for innovative approaches that the outside academics might have provided and encouraged their criticism of the team's efforts in the media. On balance, it is generally better to be inclusive of such academics and work with them as part of the spill response.
Unlike the team that built the OBC, the team that estimated the flow rate by particle image velocimetry (PIV) was predominantly new to the spill field. Valuable early response time had to be expended explaining to the members the requirements and limitations of spill forecasting. This was accomplished by supplying background material and reference experts, who themselves did not estimate the leak rate but served as a resource to the PIV experts who did. These supplementary experts included a petroleum engineer, a well blowout modeler, and two highly qualified statisticians from the National Institute of Standards and Technology (NIST).
The PIV experts, with a single exception, were chosen directly or indirectly by the author and were Ph.D. professors or their research assistants at leading academic institutions. In many cases, they were already familiar with the PIV research of the other members. While each was given the opportunity to utilize his or her own pet PIV method, respect for the other academics generally led to consensus estimates after some discussion. The challenge was to expedite those discussions to meet deadlines imposed by the on-scene command. The single non-academic PIV expert had, in the author's opinion, some difficulty joining the consensus, often taking a more defensive stance. A suggestion by the head of the FRTG to drop this member from the team was rejected; in hindsight this was possibly a mistake, since his participation later led to criticism of the team's conclusions by an outside advocacy group with whom this member had contact.
Among the academic team members there may have been broad agreement on the team's calculation results, but disagreement remained on the proper role of the group, and of individual members, outside the group. Team members concurred not to disclose the internal discussions of the group to the media. Unfortunately, this did not prevent individual members from succumbing to media sirens that encouraged them to vent their personal opinions on the nightly news, forcing a rapid damage control exercise by the author and others.
The DWH lesson for future team leads is to choose members based not only upon their expertise but also their ability to function within group and ICS constraints. Big spills bring the threat of magnification of team friction. Such friction can easily interfere with the successful operations of the group and be exploited by outside interests.
TEAM PREDICTIONS:
While ICS is a formal process, for smaller spills it is often carried out in a reduced implementation. Certainly this is true for spill forecasts, where an individual forecaster may perform some particular calculations, hold a brief preliminary meeting with other team members to see if anybody objects, and, if no one does, send the results to the on-scene command. This is inadequate for spills with high media and/or stakeholder interest. For such incidents, all team predictions will be subject to future intense scrutiny. It is extremely important to document not only the forecasts and scientific techniques used by the team but also the method by which team members reached consensus.
This became apparent with the Deepwater Horizon spill. The PIV team, estimating the flow rate, used measurement procedures unfamiliar to the oil spill community. Therefore, the decision was made early to document extensively, even while the cleanup was still ongoing, the variations employed by the individual PIV experts. Each expert was allowed to describe his method and conclusions in a separate appendix included with the team's summary report. While the method of reaching consensus was actually quite formal, involving team meetings assisted by statistical analysis of the group predictions by NIST statisticians, it was incompletely documented, leading to charges of bias and misreporting of flow results by an outside interest group seeking to make the government spill response an issue for its cause. Unfortunately, the interest group's charges resulted in the expenditure of federal resources on a formal inquiry in order to show the accusations were without merit.
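The statistical pooling of independent expert estimates can take many forms. The sketch below shows one common and simple approach, a median with a spread measure; it is a hypothetical illustration, not the actual procedure the NIST statisticians used, and the expert values are made up.

```python
# Hypothetical sketch of pooling independent expert flow-rate
# estimates into a consensus value with an uncertainty band. This is
# NOT the actual NIST analysis from DWH, only one standard way such
# estimates can be combined; the input numbers are invented.
import statistics

def consensus(estimates_bbl_per_day):
    """Return (central value, spread) for a set of expert estimates.

    The median resists distortion by a single outlying expert; the
    sample standard deviation gives a rough uncertainty band.
    """
    central = statistics.median(estimates_bbl_per_day)
    spread = statistics.stdev(estimates_bbl_per_day)
    return central, spread

expert_estimates = [35_000, 50_000, 60_000, 45_000, 55_000]  # made-up values
central, spread = consensus(expert_estimates)
print(f"consensus ~{central:,.0f} bbl/day, spread ~{spread:,.0f} bbl/day")
```

Whatever pooling rule is chosen, the DWH lesson is that the rule itself, and each expert's input to it, must be written down at the time; an undocumented consensus, however rigorous in practice, cannot be defended afterward.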
So as not to repeat the PIV report situation, the final Oil Budget Calculator team results and method of consensus were well documented. All the major participants individually signed off on the final report prior to its release. Fourteen independently selected outside experts peer-reviewed this final report, and their comments, plus the responses from the OBC team, were included in the document. Of course, subsequent research has refined, and will continue to refine, the calculations by the OBC team, but the report remains an important reference. The downside of this approach is that documentation of method, including peer review, must significantly lag the actual spill predictions. In the case of the OBC final report, the delay was several months. This led to complaints by outside academics about the ‘black box’ nature of the OBC and to Congressional investigations into its reliability prior to the documentation release.
With the wisdom of hindsight, a hybrid approach to producing and documenting spill forecasts is recommended. Preliminary reports should not only describe the team's technical approach and predictions but also extensively document how members reached consensus. All team members should sign off on any predictions, and their expected uncertainties, prior to release. The lead should ensure that these predictions are not misstated in official reports and make clear that peer review will be conducted as soon as practical after the spill emergency, with comments solicited from outside experts.
REPORTING CONCLUSIONS:
All authors must consider the background of the audience when communicating information. For most spills, the target audience is easily defined: spill responders, responsible party representatives, and stakeholder organizations. Generally, members of these target audiences are well versed in the nature and consequences of spills and the subsequent response. Spill forecasters do not have to carefully define terms such as ‘biodegradation’, ‘emulsion’, ‘shoreline impact’, and ‘chemical dispersants’ in their reports. Usually, the same audience is aware of the methods used to generate the forecasts and their expected accuracy. Extensive background material on spill behavior is neither required nor desired, since spill scientists are expected to direct their limited resources toward producing timely predictions.
For large spills such as the Deepwater Horizon, these assumptions may not hold. The actual audience will probably be much larger than the target audience. A great deal of confusion occurred because of media and politician misinterpretation of the controversial Oil Budget Calculator pie chart (Figure 1). No spill professional would conclude that this pictorial representation of the ICS 209 form operational oil budget table showed that spilled oil not in the remaining (surface) category had been removed from the environment. However, a high-level government official, unfamiliar with spill response terminology, made just this claim on national television. Moreover, while an initial preliminary report of the OBC team results carefully listed all those scientists who had contributed in some manner to the report, with the caveat that inclusion of their names did not necessarily constitute endorsement of all the report's conclusions, the caveat was later deleted out of concern that it might create confusion and controversy. Also, as mentioned earlier, a leading government scientist who assisted in the development and review of the OBC was not listed in the published contributor list, at the direction of his superiors. These decisions, while well intentioned, led to future controversy. The author was called before a Congressional subcommittee to explain why three-fourths of the oil was gone, something never concluded by the scientific team. A watchdog group wanted an explanation of why a well-recognized government scientist was dropped from the contributor list and suggested that the White House was pressuring scientists to make changes to the actual team calculations, a situation that did not happen. Investigative reporters began interviewing the OBC scientific contributors to see if there was a potential scandal. Fortunately, the media soon recognized there was not and turned to other matters.
Hindsight suggests that conclusions released by scientific teams should carefully define, in layman's language, terms not familiar to those outside the spill response field. Uncertainty should be embedded into any report, as demonstrated, for example, in the OBC mass balance pie chart (Figure 2) found in the OBC final report (Federal Interagency Solutions Group, Oil Budget Calculator Science and Engineering Team, 2010). Media professionals can provide valuable assistance in wording the conclusions to minimize possible misinterpretation. Be careful about editing any contributions or preliminary results in a way that might lead to future claims of cover-up. Anticipate that, in spite of best efforts, conclusions may be subjected to unfair and unfounded challenges by those more interested in their cause than in the scientific facts.
SUMMARY AND CAVEATS:
Scientific forecasting at large spill incidents requires skills that are not typical of the average researcher. Hopefully, this article, describing lessons learned during the Deepwater Horizon spill, will be of assistance to those experts tasked with this job in the next ‘Big One’. Of course, no two spills are identical, and a different set of controversies will face those experts. However, the recommendations above should still largely apply.
The contents of this paper reflect the personal views of the author and may not reflect the opinions and conclusions of NOAA, the U.S. government or other team members.