Oil spill modeling has developed tremendously over the past four decades, from the simple floating-particle trajectories used for the 1979 Ixtoc spill to four-dimensional oil trajectory and fate models coupled with biological exposure, toxicity, and population models, begun in the late 1980s to early 1990s and spurred by the Exxon Valdez, other major spills of that time, and the Oil Pollution Act of 1990. While many of the basic concepts and algorithms were developed in the 1990s, advances in computer hardware and software, as well as modeling techniques, have allowed much better resolution of the needed model inputs, computations and outputs. Data availability and assimilation have greatly improved the performance of meteorological, hydrodynamic and ice (metocean) models, which are critical inputs to oil spill modeling. While large oil spills are traumatic events adversely impacting the environment and socioeconomic interests, they provide opportunities for process studies, model development and validation, due to the resources supporting these efforts and the collection of needed data. Among other lessons, the Ixtoc spill modeling demonstrated the need for comprehensive, time-varying winds and currents from metocean models as input. In modeling the Exxon Valdez spill, spatially and temporally (hourly) varying winds driven by mountainous terrain, as well as coastal currents, were highly influential to the trajectory and fate of the oil. Measurement data were needed to drive the modeling, as the existing metocean models were not sufficiently accurate to account for observed oil movements. Other large spills in 1989 were in estuaries and coastal areas where oil movements were primarily driven by river and tidal currents, for which hydrodynamic models were reliable. The 1996 North Cape oil spill was the first for which water column exposure and effects modeling could be verified with field data. 
The Erika and Prestige spill trajectories were largely driven by the ships' movements while releasing oil, along with the winds. In modeling the Deepwater Horizon spill, metocean models were able to predict observed surface oil movements over periods of several days and in terms of general overall direction. However, the modeled deepwater plume moved in various directions depending upon the hydrodynamic model used as input, highlighting the need for more accuracy in ocean models below surface waters. Recent developments in instrumentation, remote sensing, and data assimilation should improve both deepwater and surface trajectories. This, combined with advancements in toxicity modeling and supporting data, will facilitate confidence in biological effects modeling results. Described research and monitoring needs are based on these modeling lessons learned.
There is a growing demand by both government and industry for oil spill fates and effects modeling to evaluate the likely trajectory and fate of oil, potential environment impacts of spills, implications of spill response activities, consequences of potential regulatory policies, potential liabilities, and oil spill injuries as part of natural resource damage assessment (NRDA). This demand is driven by government regulations, limitations of data collections from field and laboratory work, growing power of computers for performing data analyses and modeling calculations, and the urgency to assess the impacts of catastrophic spills like the Ixtoc blowout (1979), Exxon Valdez (1989), Erika (1999) and Deepwater Horizon (2010) oil spills. As scientific information has improved over the decades, so have oil spill modeling capabilities.
This paper provides an overview of oil spill model development and lessons learned from applications over the past four decades. The focus is on compiled models and their capabilities, as opposed to models of specific processes. Applications addressed are simulations of specific spill cases, as opposed to hypothetical spills evaluated in risk assessments and planning, since comparisons of model predictions with observational data demonstrate capabilities and sources of uncertainties. Note that a model is calibrated by adjusting model inputs to bring the model in agreement with measurement data, whereas a model is validated (i.e., verified as accurate) by comparing model results to independent measurement data not used to run or adjust the model. Oil spill observational data are typically scant or unavailable, except in the case of catastrophic spills when attention, effort and monetary resources are focused on studying the event. Thus, our lessons come mainly from model comparisons to observations during catastrophic spills.
Oil spill modeling is performed to address specific questions related to the spill evaluated. Spill responders would like the modeling to predict, as a forecast, where oil will go, when it will reach different locations, how much will reach the locations and what difference it would make if certain response actions are taken. Impact assessments and NRDA use hindcast modeling to evaluate resource exposures to oil and adverse effects resulting from the exposures. An important consideration is the degree of accuracy needed to answer the questions being asked. Accuracy is determined both by the model and the input data used to produce the model result. Input data, particularly wind and currents, are more accurate when developed after the event than in forecast mode. (Consider the accuracy of weather predictions versus retrospective analyses).
The above considerations should be kept in mind when evaluating modeling of specific spill cases. The following sections present an overview of oil spill model development over the past four decades, methods used for applying models to spill forecasts and hindcasts, results and discussion of example model applications to specific catastrophic spills, and conclusions with respect to lessons learned from the modeling exercises.
OIL SPILL MODEL DEVELOPMENT
Many oil spill models have been developed over the past 40 years, and several comprehensive reviews of oil spill trajectory and fate modeling have been performed (Huang, 1983; Spaulding, 1988, 2017; Reed et al. 1999; Afenyo et al., 2016) to assess the state of the practice, summarize key developments, and project future research. The vast majority of oil spill models are used to predict or hindcast an oil spill trajectory or potential trajectories (e.g., Yapa et al. 1992; Proctor et al., 1994; Beegle-Krausse, 2001; Price et al., 2003; Mariano et al. 2011; Dietrich, et al., 2012; NOAA, 2014; Boufadel et al., 2014; Weisberg et al., 2017), focusing on floating oil transport, with the purposes of informing spill response (real time or in drills) and contingency planning. In these cases, the wind and current data used are the major influences on the accuracy of the results and so are critical inputs. Detailed oil compositional data may not be used or needed, if the modeling is performed to inform response activities and planning.
In addition to oil trajectory, three-dimensional oil fate models evaluate such processes as oil weathering (evaporation, emulsification, photo-oxidation), dissolution, biodegradation, oil property changes, entrainment of floating oil into the water column, oil droplet behavior and surfacing, interaction with particulates (mineral and organic), sedimentation, and shoreline stranding, with the goal of evaluating oil exposure, fate and mass balance. Earlier oil fate models (Mackay et al., 1982a,b; Mackay and McAuliffe, 1988; Reed, 1989; Daling et al., 1990; Spaulding et al., 1992) laid the groundwork, including the major processes but parameterizing them with limited data. In the 1990s and 2000s, a great deal more information became available, and more complex three-dimensional multi-component oil trajectory and fate models were developed (e.g., Anderson et al., 1990; French et al. 1996; Reed et al. 2000; French-McCay 2003, 2004; French-McCay et al. 2018b; Daae et al. 2018). These models continue to be improved as data informing their algorithms become available.
Subsea oil and gas blowouts add an additional complication to oil spill modeling due to the buoyancy of the plume released into the water column. Buoyant plume models (Rye and Brandvik, 1997; Spaulding et al. 2000; Johansen, 2000, 2003; Yapa et al. 2001; Zheng et al. 2003; Socolofsky and Adams, 2002; Socolofsky et al. 2011, 2015; Johansen et al., 2013; Gros et al., 2017; Spaulding et al. 2017) focus on modeling the rising plume until it has entrained sufficient seawater to reach a neutrally buoyant “trap height”, forming an intrusion in the water column, or until the plume has breached the water surface. Oil droplets are released from the intrusion and rise based on their individual buoyancies. A full trajectory and fate far-field model is needed to evaluate the movements and fate of the oil released from the buoyant plume.
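Within such far-field models, droplets released from the intrusion are typically tracked individually, with rise speed governed by each droplet's buoyancy. As a hedged illustration of the strong size dependence, here is a minimal sketch using Stokes' law, which applies only to small droplets at low Reynolds number; full plume models use more general drag formulations, and the density and viscosity values below are assumed, representative numbers rather than values from any specific study.

```python
# Sketch: terminal rise velocity of a small oil droplet (Stokes regime).
# Assumes Re << 1; operational models switch formulations for larger droplets.
# rho_oil, rho_sw (kg/m^3) and mu (Pa*s) are illustrative, assumed values.
def stokes_rise_velocity(d_m, rho_oil=900.0, rho_sw=1025.0, mu=1.4e-3, g=9.81):
    """Rise velocity (m/s) of a droplet of diameter d_m (m)."""
    return g * (rho_sw - rho_oil) * d_m ** 2 / (18.0 * mu)

# Rise speed scales with diameter squared, so a 1-mm droplet surfaces
# ~100x faster than a 100-micron droplet:
v_small = stokes_rise_velocity(100e-6)  # ~0.0005 m/s
v_large = stokes_rise_velocity(1e-3)    # ~0.05 m/s
```

This quadratic dependence on diameter is why the droplet size distribution leaving the nearfield plume (and how dispersants alter it) so strongly controls whether oil surfaces near the wellhead or remains at depth.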
Relatively few models have been designed and used to quantitatively evaluate the impacts of oil on wildlife, aquatic organisms and habitats. Typically, a biological effects model is coupled to an oil fates model capable of providing the needed oil exposure information. The earliest models (e.g., Samuels and Lanfear 1982; Ford et al. 1982; Spaulding et al. 1983; Reed and Spaulding, 1984; Reed et al., 1989; Jayko et al., 1990; Seip et al. 1991) overlaid oil trajectories on biological resource maps or modeled distributions of wildlife and quantified impacts via intersections with oil greater than a threshold amount. In some cases, wildlife (seabird or marine mammal) population and migration models were used to simulate the distribution, behavior and recovery of the affected species, in conjunction with their intersection with oil trajectories (Samuels and Lanfear, 1982; Ford et al., 1982; French et al., 1989; Seip et al., 1991). In these modeling efforts, the effects threshold for wildlife was based on an oil thickness or mass, although quantitative information definitively indicating what dose would be lethal was not available. As part of the development of Spill Impact Model Application Package (SIMAP), French et al. (1996), French-McCay (2009), and French-McCay et al. (2018d) have reviewed published literature and established reasonable oil thickness thresholds for effects on wildlife and habitats based on available information. However, quantitative dose-response data for a range of oils and weathering states are needed to reduce uncertainty in estimating wildlife impacts.
For aquatic organisms, impact thresholds based on surface oil are inadequate, particularly when evaluating implications of chemical dispersant use. Oil spill modeling addressing effects on aquatic biota is described in French and French (1989), French et al. (1996), French-McCay (2003, 2004, 2009), and Vikebo et al. (2013, 2015). The following summary is based on model development and applications described in French-McCay (2002, 2003, 2009). Effects on aquatic organisms are related to concentrations of dissolved oil components in the water. Exposure to microscopic oil droplets may also impact aquatic biota either mechanically (e.g., filter feeders) or as a conduit for exposure to semi-soluble components (which might be taken up via the gills or digestive tract). The effects of uptake of the dissolved components into tissues are additive. Thus, an additive acute toxicity model and toxicity data for individual compounds may be used to estimate the effects of the mixture to which aquatic organisms are exposed. This Toxic Units modeling approach for evaluating toxicity of oil (Peterson 1994; French et al. 1996; French-McCay, 2002; McGrath and Di Toro, 2009; Redman et al., 2012) was developed based on earlier research on the additive effects of organic pollutants (e.g., McCarty et al. 1992; McCarty and Mackay, 1993). An additional complication, not often addressed, is that mortality is a function of duration of exposure – the longer the duration of exposure, the lower the effects concentration (see review in French McCay, 2002). Thus, the most accurate biological effects modeling tracks the exposure of biota as they move in the environment or the water passes them by, and employs a Toxic Units approach corrected for duration of exposure. See NASEM (2019) for a recent review of aquatic toxicity modeling as applied to oil spills.
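The core of the additive Toxic Units approach described above can be sketched in a few lines. This is a hedged illustration only: the concentrations and LC50 values are hypothetical placeholders, not measured data, and operational implementations also correct LC50s for exposure duration, as noted above.

```python
# Minimal sketch of the additive Toxic Units (TU) approach:
# TU_i = C_i / LC50_i for each dissolved component, and the mixture is
# predicted lethal to ~50% of exposed organisms when sum(TU) >= 1.
# All concentration and LC50 values below are illustrative placeholders.
def total_toxic_units(concentrations, lc50s):
    """Sum of per-compound toxic units; inputs in consistent units (e.g., ug/L)."""
    return sum(c / lc50 for c, lc50 in zip(concentrations, lc50s))

# Three dissolved components, each at only half its individual LC50:
tu = total_toxic_units([50.0, 10.0, 2.0], [100.0, 20.0, 4.0])  # 1.5
# The mixture exceeds 1 TU (predicted toxic) even though no single
# compound exceeds its own LC50 -- the essence of additivity.
```

This is why mixture toxicity cannot be judged compound-by-compound against individual thresholds: many components each below their own LC50 can still sum to a lethal exposure.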
METHODS FOR MODEL APPLICATION – DATA INPUTS
Oil spill trajectory and fate models require a number of data inputs including information defining the spill scenario (date, timing, location(s), conditions of release), oil property and composition data, geographic data (shoreline, habitat/shore type mapping, bathymetry), winds, currents, ice cover, temperature, salinity, suspended particulate concentrations, and spatial and temporal details regarding spill response activities (i.e. removal locations, timing and amounts of oil and water recovered, burn locations, timing and estimates of the amount burned, dispersant application amounts, timing, locations, and effectiveness of the application, etc.). The model's accuracy is dependent upon inputs of data of appropriate quality and accuracy to answer the questions being addressed. Lessons learned over the last four decades are therefore in part related to the availability and development of sufficiently accurate input data relevant to the level of uncertainty in the model results that may be accepted for addressing the questions being asked.
Forty years ago, maps of shoreline, bathymetry, habitat types, temperature and salinity were difficult to obtain and often in paper format. However, in the last decade or two, digital geographical information has become readily available for most parts of the globe. Oil spill modeling readily utilizes these data employing specialized software, allowing modelers to quickly develop model and data grids, as well as mapped output.
As will be discussed further below, wind and current data 40 years ago were typically limited to time series measurement data at specific stations, or opportunistic data (e.g., ship drift) offshore. Today, global, mesoscale and regional meteorological and hydrodynamic models produce products that are posted regularly on the internet. Coupled ice models are also under active development (see review in French-McCay et al. 2018e). One of the lessons learned discussed below is the importance of the accuracy of these data sets as inputs for oil spill modeling. Oil fate modeling, as well as biological effects modeling for planktonic organisms, should be driven by an accurate description of the winds, currents and ice cover at all significant spatial and temporal scales for addressing the questions and objectives of the study.
Biological effects modeling requires toxicity data for species of interest and data describing the abundance or densities (or relative distributions) of exposed organisms (i.e., baseline density data). Toxicity data can be based on statistically analyzed compilations of information or on specific bioassays for species and life stages of special concern.
RESULTS AND DISCUSSION – MODELING OF HISTORICAL SPILL CASES
Santa Barbara Oil Spill (January 1969; subsea blowout 10 km off California coastline)
Spill Circumstances: An estimated 80,000 to 100,000 barrels (13,000 to 16,000 m3) of crude oil spilled, creating an oil slick 35 miles long along California's coast and killing thousands of birds, fish and sea mammals.
Spill Modeling: Computer models had not yet been developed and critical input data such as modeled winds and currents were not available.
Lessons Learned: In addition to capable computers, meteorological and hydrodynamic modeling is needed to simulate a spill's trajectory and fate; such critical input data were not available at the time of that spill. The public outrage and media attention regarding the spill contributed to numerous pieces of environmental legislation within the next several years (e.g., the National Environmental Policy Act (NEPA, 1970) and the Clean Water Act, 1972).
Amoco Cadiz (16 March 1978; oil tanker aground 5 km off the coast of Brittany, France)
Spill Circumstances: The oil tanker Amoco Cadiz ultimately split in three and sank. The National Oceanic and Atmospheric Administration (NOAA) estimated that 220,880 metric tonnes of oil were spilled. The coastline of northwestern France was heavily oiled.
Spill Modeling: Reed and Gundlach (1989) and Reed et al. (1989) developed a shoreline oil interaction model based on nearshore hydrodynamics and holding capacities of different shoreline types for oil. Trajectory modeling of the offshore movements of the oil was driven by wind data from Brest airport and tidal current modeling, which proved sufficiently accurate.
Lessons Learned: Detailed observational data of shoreline oiling allowed for the development and verification of the shoreline interaction model. The derived oil holding capacities of different shoreline types have been used in oil spill modeling for other spills (see French et al. 1996 for details).
Ixtoc oil spill (June 1979 – March 1980; subsea blowout in the Bay of Campeche, Mexico)
Spill Circumstances: The Ixtoc I well blew out at 50 m depth off the coast of Mexico, west of the Yucatan Peninsula. The well initially flowed at about 30,000 bbl/day, but pumping of mud and steel objects into the well eventually reduced the flow to about 10,000 bbl/day (~1,600 m3/day).
Spill Modeling: NOAA hindcasted the spill with an early version of its oil spill trajectory model, then called the On-Scene Spill Model (OSSM). Oil movements from the start of the release to the times of aerial mappings were used to modify the mean current pattern, which was based on historical dynamic topographies and hindcasting (Torgrimson, 1981; Galt, 1981). Thus, this was an iterative approach that inferred the currents used for the trajectory from observations of the slick movements. Mackay et al. (1981) modeled oil weathering, comparing it to detailed field chemistry measurements, for the first time for any spill.
Lessons Learned: Anderson (1983) studied the sensitivity of a modeled surface trajectory prediction to three wind input data sets (atmospheric pressure-based wind fields; wind drift observations; Brownsville airport wind measurements) and three current input data sets (surface Ekman currents based on wind data; hydrodynamic model output by Dynalysis of Princeton completed in 1981; currents from a geostrophic model), demonstrating their general inadequacy for accurate long-term trajectory estimates. What was needed was comprehensive, time-varying winds and currents from metocean models at resolutions not possible at that time.
Exxon Valdez Oil Spill (EVOS; 24 March 1989; oil tanker; Prince William Sound [PWS])
Spill Circumstances: 250,000 bbl (34.8 million kg) of Alaskan North Slope crude oil was spilled early on 24 March 1989 at Bligh Reef. On March 24 and 25, winds were light, but on March 26 a large storm involving strong northeasterly winds emulsified and dispersed the oil widely in central PWS (Galt et al. 1991). The floating oil movement was generally towards the southwest, out of PWS at Montague Strait, and continuing along the coast towards the southwest.
Spill Modeling: Galt et al. (1991) performed oil spill trajectory modeling of the spill, simulating the oil's movements in PWS and along the Alaskan coast. Wolfe et al. (1994) estimated the ultimate fate and a mass balance for the spill. French-McCay (2004) modeled oil fate and impacts occurring in PWS over the first 60 days after the release, using a seasonal mean tidal current field simulated using a hydrodynamic model applied to PWS. Wind data from several nearby stations in PWS were used for the simulation, choosing the closest station to the floating oil on each date. Model-estimated bird kills were in good agreement (within a factor of 2) for species groups that were not observed to avoid oil. For those where avoidance was observed, model estimates were a factor of 4–5 higher than the field estimates. However, the uncertainty in abundance estimates and/or probability of oiling could account for the differences.
Lessons Learned: Spatially and temporally (hourly) varying winds driven by mountainous terrain, as well as coastal currents, were highly influential to the trajectory and fate of the oil. Measurement data were needed to drive the modeling, as the existing metocean models were not sufficiently accurate to account for observed oil movements. Meteorological models that resolve spatial details of the wind field are particularly important in coastal waters near mountains and with strong diurnal sea breezes. Good wildlife density data were available and needed in order to estimate wildlife impacts. Indications of bird avoidance were uncertain, pointing to the need for data on bird behavior around spilled oil.
North Cape Oil Spill (January 1996; barge grounding; south coast of Rhode Island, USA)
Spill Circumstances: 828,000 gallons (2,682 metric tons) of home heating (light fuel) oil spilled into the surf zone, entraining oil into the water column in heavy surf, resulting in high concentrations of dissolved components in shallow water that killed millions of water column and benthic organisms, many of which washed up on beaches (French-McCay 2003).
Spill Modeling: Movements of oil droplets entrained in the surf zone were primarily by tidal currents, for which hydrodynamic modeling was available and verified as reliable. Oil fate model predictions were compared with observed oil distributions and measured PAH concentrations in water and sediments. Biological baseline density data were available for lobsters and birds, which were monitored in the spill area before and after the spill (French McCay, 2003).
Lessons Learned: The oil fate model simulation was validated with observations of surface oil and measurements of PAHs in water samples taken after the spill, confirming the modeled dissolution and dispersion estimates were reasonable. The model-estimated mortalities of lobsters (8.3 million) and birds (2,226–4,355) were verified with field-based estimates of lobsters (9 million) and birds (2,400) killed by the spill (French McCay, 2003). This spill case was the first for which water column exposure and effects modeling could be verified with field data.
PEPCO Pipeline, Maryland (7 April 2000; pipeline spill; Patuxent, Maryland)
Spill Circumstances: Approximately 422 MT (120,000 gal) of a mixture of No. 2 (light) and No. 6 (heavy) fuel oils were released from a pipeline, which transported oil to the Potomac Electric Power Company (PEPCO) Chalk Point Facility, into Swanson Creek, a tidal tributary of the Patuxent River, Maryland, with extensive wetlands (saltmarsh).
Spill Modeling: French-McCay et al. (2006) used measured wind data and hydrodynamic modeling of tidal and river flow for the simulation. The trajectory was verified with observations. Employing pre-spill bird density estimates, model estimates of birds oiled agreed with field-based estimates within a factor of 2–3.
Lessons Learned: Given the large effort put into the field estimates and the contained water body making field observation feasible, the uncertainties in the field estimates of birds oiled are likely much lower than for many other spills. Thus, the agreement of the model to the field estimates verifies that the model algorithms are realistic. This analysis also demonstrated the importance of obtaining reliable baseline bird density data.
Prestige Oil Spill (13–19 November 2002; oil tanker; North Atlantic off Spain)
Spill Circumstances: Damaged in severe weather, the oil tanker M/T Prestige leaked oil off the coast of Spain as the vessel was towed offshore, first to the northwest and then towards the southwest, until 19 November 2002, when the vessel broke in half and both the bow and stern sank approximately 250 kilometers off the coast of northern Spain. The coastline of northern Spain was heavily oiled by the heavy fuel oil carried as cargo.
Spill Modeling: French-McCay et al. (2013) performed spill trajectory modeling using interpolated wind records from four offshore buoys and four stations at the coastline. Oceanic currents used were based on data from drifters released in the spill area during December 2002. Oil was released from the M/T Prestige from various tanks over an extended period and in varying locations as the ship moved. Thus, the release details were complex and were derived from detailed information in the ship's log and measurements of tank contents (ullage) before sailing and on 14 November 2002. Model results were validated by available observations of surface oil (including a 17 November satellite SAR image) and shoreline oiling.
Lessons Learned: To account for the observed oil distribution and movements, the details of the release (timing of differing amounts of oil released at the surface and from underwater cracks) needed to be worked out. Winds accounted for most of the oil drift, but the drifter-based current data were required to move the oil northeast and eastward along northern Spain.
Deepwater Horizon Oil Spill (DWHOS; April–July 2010; subsea blowout; Gulf of Mexico)
Spill Circumstances: The Deepwater Horizon oil and gas blowout ~1500 m below the water surface of the Northern Gulf of Mexico released oil until 15 July 2010, 87 days after the initial explosion and fire on the rig. A massive response ensued, including use of mechanical recovery equipment, in situ burning, and surface and subsea applications of dispersants.
Spill Modeling: Many scientific and modeling studies were funded as a result of this spill. Trajectory modeling was performed by a number of modelers (MacFadyen et al., 2011; Mariano et al., 2011; Paris et al., 2012; Dietrich et al., 2012; Boufadel et al., 2014; North et al., 2015; Testa et al., 2016; Weisberg et al., 2017), most of whom focused on modeling the oil's movements. Gros et al. (2017) and Spaulding et al. (2017) modeled the dynamics of the nearfield buoyant plume. Gros et al. (2017) evaluated dissolution of soluble hydrocarbons in the intrusion, comparing results to measurements of volatile hydrocarbons in the atmosphere taken by overflights. French-McCay et al. (2015; 2016; 2018a,c) performed modeling of the full fate of the oil in the far field, both as part of the NRDA efforts and later in hindcast for validating the model by comparing results to remote-sensing-based floating oil maps, shoreline oiling distributions, and subsea chemistry sample data.
Lessons Learned: Gros et al. (2017) showed that much of the soluble light hydrocarbons dissolved in the intrusion at depth. Model-predicted concentrations by French-McCay et al. (2015, 2016, 2018a) within 10–15 km of the wellhead agreed in magnitude with measured concentrations. Additional analyses undertaken to validate the SIMAP model (French-McCay et al. 2018a, c) found that predictions of floating oil mass agreed well with estimates based on remote sensing data. Metocean models predicted observed surface oil movements over periods of several days and in terms of general overall direction. However, the modeled deepwater plume moved in various directions depending upon the hydrodynamic model used as input, highlighting the need for more accuracy in ocean models below surface waters. Uncertainties in the oil droplet size distributions released from the nearfield plume are still debated (NASEM 2019).
Based on the modeling studies to date, the majority of biological impacts are to wildlife (birds, mammals and turtles) contacting oil and/or volatiles and exposures of nearshore and coastal habitats. Large effects on fish and invertebrates result when (1) the spill is large; (2) entrainment is high (i.e., by a subsea blowout or by high winds or surf) before the oil has weathered; (3) the oil is of low viscosity such that it may be easily entrained (e.g., home heating oil and light crude oils); and (4) dispersion in the water column is slow (i.e., in weak currents or in shallow water where dispersion is limited by geographical confinement), such that exposure duration is relatively long.
Validation studies using SIMAP show the model's ability to predict trajectory, mass balance (fate), and concentrations is reliable when several key inputs are well characterized: winds, currents, ice cover, oil properties and composition, and specific details on the release conditions, such as oil amount(s), timing and location(s) (French and Rines 1997; French-McCay 2003, 2004; French-McCay and Rowe 2004; French-McCay et al. 2013, 2015, 2016, 2018a,c,e). However, modeled transport would not be expected to exactly match the actual current- and wind-driven transport that occurred at the time of the spill, because the hydrodynamic and wind models used as input would not necessarily capture all the small-scale features, such as fronts and smaller eddy positions and timing in the offshore areas (see also MacFadyen et al. (2011), Dietrich et al. (2012), Boufadel et al. (2014), and Weisberg et al. (2017) for their perspectives on floating oil transport). Thus, one would expect general agreement between modeled oil distributions and observations, such as those based on remote sensing, but the model might be displaced by distances consistent with differences between the small-scale features in the input models and the actual patterns at the time. As wind, hydrodynamic, and ice models improve in resolution, the accuracy of modeled trajectories will improve as well.
In some spills (e.g., Prestige, DWHOS), details of the oil (and gas) release scenario (volume(s), timing, location(s)) matter. The trajectory is particularly sensitive to the timing and duration of a release, a consideration often overlooked when preparing inputs for modeling. The degree of detail needed in defining oil properties and composition depends on the questions being asked. If the questions relate to where the oil is going and when it will get there (i.e. response), detailed oil composition data are not needed, and the oil can be simply characterized by its density and viscosity. However, if exposure concentrations for aquatic resources are desired, detailed compositional data are needed.
The SIMAP biological effects model has been validated using simulations for ~30 spill events where data are available for comparison (French and Rines 1997; French-McCay 2003, 2004; French-McCay and Rowe, 2004; French-McCay et al. 2013, 2015, 2016, 2018a, c). In most cases, only the wildlife impacts could be verified because of limitations of the available observational data. However, in the North Cape spill simulations, both wildlife and water column impacts (on lobsters) could be validated (French-McCay 2003). The modeling results show that the algorithms in the model are valid when input data for toxicity and abundance are accurate.
The considerable range in sensitivities to oil components suggests that bioassay data for species/life stages of concern need to be measured. However, using a Toxic Units approach simplifies this problem in that toxicity of only a few compounds with a range of solubilities need be measured (see French-McCay, 2002; McGrath and Di Toro, 2009; Redman et al., 2012; NASEM, 2019 for details). The availability of toxicity data has increased greatly over the past 40 years. Better chemical analytical methods have allowed more of the thousands of compounds in oil to be included and addressed by a Toxic Units approach. In a risk assessment context, the range of toxicity sensitivities is well understood (from the above-cited compilations), and so thresholds of concern and other statistics can be estimated from existing data sets.
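One way this extrapolation across solubilities is often framed in narcosis-based models (e.g., the target lipid model of McGrath and Di Toro, 2009, cited above) is that log(LC50) declines roughly linearly with log(Kow), so bioassays on a few compounds spanning a solubility range anchor a regression that predicts LC50s for unmeasured compounds. The sketch below is a hedged illustration of that idea only; the slope and intercept are placeholder values, not fitted coefficients from any published model.

```python
# Illustrative narcosis-style extrapolation: more hydrophobic (higher Kow)
# compounds are predicted to have lower LC50s (more toxic per unit
# concentration), though they are also less soluble in water.
# Slope and intercept are placeholders for illustration, not fitted values.
def predict_log_lc50(log_kow, slope=-0.94, intercept=1.5):
    """Predicted log10(LC50) from a compound's log10(Kow)."""
    return slope * log_kow + intercept

# A compound with log Kow = 5 is predicted substantially more toxic
# (lower LC50) than one with log Kow = 3:
delta = predict_log_lc50(3.0) - predict_log_lc50(5.0)  # ~1.88 log units
```

Measuring toxicity for a handful of compounds spanning the solubility range thus constrains the whole line, which is why the Toxic Units approach does not require bioassays for every one of the thousands of compounds in oil.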
Uncertainties in biological effects model results are proportional to uncertainties in biological densities, which will always be present due to the natural patchiness and variability of species distributions. As it will never be feasible to synoptically sample all locations and times of concern for all biota and life stages, statistical models developed over the past decades are increasingly being used to quantify biological distributions based on available data. In many analyses, such as in ecological risk assessments, evaluation of percent losses for populations of interest may be the appropriate measure of impact. This would avoid the difficulty of quantifying densities of biota at the time and location of the spill.
Oiling probabilities, given oil passing through an area where animals are present (French-McCay, 2004, 2009), have been based on limited data available in the literature. More detailed and quantitative information on the behavior of birds and other biota in the presence of oil (e.g., avoidance and attraction) would improve the knowledge base and the ability to estimate impacts from spills. In addition, quantitative dose-response data are needed to establish more definitive oil toxicity thresholds for wildlife.
What has yet to be addressed in models are the implications of sublethal effects of chronic contamination; indirect effects via changes in ecosystem structure; and behavioral changes resulting in reduced growth, survival or reproductive success. The latter depend on understanding of population and/or ecosystem structure and dynamics, areas of active study that are difficult to quantify and remain a challenge for future research.