Abstract
Governments and industry have been cooperating in the development of oil spill preparedness for more than 30 years. This has included support for the ratification and implementation of instruments such as the International Convention on Oil Pollution Preparedness, Response and Co-operation (OPRC 90), which provides the basis for collaborative efforts between governments and industry to prepare for and respond to marine oil pollution. Joint activities implemented in this framework represent a major investment, and it is important to measure and track the extent to which they have led to sustained improvements.
This paper examines the challenges of measuring progress in oil spill preparedness that have emerged over time, leading to the development of different tools and systems to monitor long-term developments.
It will first review the metrics and tools used to assess the key elements of preparedness, focusing on regions where the International Maritime Organization (IMO) - industry Global Initiative has been active since 1996. The challenges of ascribing and assessing the indicators will be highlighted. Whilst a quantitative method, such as the IPIECA Global Risk Analysis, is useful for technical aspects and for comparing progress over time and between different regions, it has a number of caveats, including the verification of data and the need to ensure that preparedness frameworks described in national strategy are translated into credible response capability. There is thus a need for more refined metrics and a complementary qualitative approach. Moreover, the difficulty of catalysing lasting change without sustained effort has been recognized. This paper will discuss why the measures should serve both evaluation and decision-making, and explain why it is key to build more comprehensive (from legal basis to implementation processes and equipment) and sustainable national preparedness systems. The indicators cover a range of aspects of oil spill readiness and should enable a picture of both national and regional preparedness to be constructed, which informs decisions on future actions and activities. The benefits of a step-based approach and the potential for tools such as the Readiness Evaluation Tool for Oil Spills (RETOS™) to underpin broader evaluations will be highlighted. Finally, the need for an enhanced methodology to measure progress in preparedness and its consistency with risk exposure is discussed.
Introduction
Launched in South Africa in 1996, the Global Initiative (GI) is a programme through which the International Maritime Organization (IMO) and IPIECA (the global oil and gas industry association for advancing environmental and social performance) work together to encourage and facilitate the improvement of global oil spill preparedness and response capabilities in accordance with the provisions set out in the International Convention on Oil Pollution Preparedness, Response and Co-operation (OPRC 90). The OPRC 90 Convention promotes industry-government cooperation and encourages the two to work together to address the core elements of effective preparedness and response to oil spills. The GI aims to encourage and facilitate the development and implementation of national, regional and sub-regional oil spill contingency plans (particularly in developing countries) and to increase the ratification of relevant international conventions, including OPRC 90. Having appropriate measures enables oil spill preparedness programmes to develop effective work programmes comprising suitably targeted activities. The measures also inform national decision-makers in their allocation of the resources needed to improve preparedness, as well as providing reassurance to stakeholders that programmes are achieving their aims.
This paper examines the challenges of measuring progress in oil spill preparedness that have emerged over time, leading to the development of different tools and systems to monitor long-term developments. It first describes the current metrics used in the GI regions, highlighting the fact that, although similar indicators stemming from the OPRC Convention are widely used, there is no harmonized way of measuring progress across the GI regions. The constraints and challenges of measuring progress and comparing results globally are then discussed through the presentation of the global risk assessments conducted by IPIECA in 1998, 2009 and 2019. This leads to a reflection on the need to find a consistent system for monitoring sustained progress globally, for both the programmes and decision-makers.
I. Current metrics used in the GI regions
Between 1996 and 2003, although also active in the Caribbean, the Mediterranean, the Caspian and Black Seas, and South East Asia, much of the early GI effort focused on the West and Central African (WACAF) region. Technical missions and training workshops/courses resulted in a substantial increase in African nations' ratification of relevant conventions. A reassessment of the project subsequently led to its regionalisation in order to drive progress more rapidly. It was also recognized that the GI is a long-term process, and that the results attained so far in some countries were good examples that were being replicated elsewhere. Three main regional initiatives were created progressively:
Caspian Sea, Black Sea and Central Eurasia: the Oil Spill Preparedness Regional Initiative (OSPRI) was formed in 2003;
West, Central and Southern Africa: the Global Initiative for West, Central and Southern Africa (GI WACAF Project) was launched in April 2006 in Gabon; and
South East Asia: the Global Initiative for Southeast Asia (GI SEA Project) was formally launched in Jakarta in March 2013.¹
Since their creation, all three projects have used indicators to assess their needs and measure achievements. Whilst the regions are, in essence, broadly aligned in the key metrics, there are variations in the indicators used and differences in methods to collect data. This can present a barrier to a holistic, consistent and precise measure of progress over time and across the different regions.
a. The method used to ascribe indicators differs across the three regions
The regionalization of the programme was meant to drive progress more rapidly and deliver effective and sustainable preparedness improvements in a reasonable time frame, thus overcoming some of the obstacles of the early years of the GI. However, it was also noted that this changeover to a regional approach might result in the loss of strategic analysis (Micallef and Thiam, 2008). This may be visible in the way progress is monitored. All three initiatives monitor the status of ratification of the core IMO conventions addressing oil spill preparedness and response, and liability and compensation for oil tanker spills: OPRC 90; the International Convention on Civil Liability for Oil Pollution Damage (CLC 92); and the International Convention on the Establishment of an International Fund for Compensation for Oil Pollution Damage (FUND 92). The status of National Oil Spill Contingency Plans (NOSCPs) is also tracked. Some initiatives add the relevant regional conventions to the list or encompass a broader list of conventions. However, when it comes to a more refined evaluation of the situation of individual countries and of a region, the indicators and the methods used vary among the initiatives.
OSPRI
The Oil Spill Preparedness Regional Initiative (OSPRI) was formed in 2003 with a mission “to encourage and support industry and governments to work cooperatively, promoting the adoption of proven, credible, integrated and sustainable national, regional and international oil spill response capability” in the Caspian Sea, Black Sea and Central Eurasian region (Taylor et al., 2005). OSPRI was the first regional programme established and initially used a set of 11 high-level ‘critical success factors’ for the two sea basins and the Turkish Straits (the latter being a major choke point of concern). These are shown in Figure 1, with the assessed status at the commencement of the effort (2003) compared with that at the end of 2007.
The original metrics used by OSPRI as assessed in 2007 (R = not yet addressed or requires significant future work, Y = partially on track or work-in-progress and G = on track or completed)
It became clear to OSPRI members during 2007 that successes at the regional level meant that a more refined set of metrics was needed; at this time, there was also the experience from GI WACAF to consider and from which to learn. Whilst regional metrics had proved useful, it was also clear that more detailed national analysis was needed to direct and prioritise the initiative's efforts. Therefore, an expanded set of 18 measures was established, partially reflecting the GI WACAF KPIs. The measures are listed in Table 1.
Crucially, the assessment was expanded to cover each of the ten countries within the region individually. Each of these metrics is assessed as “Completed”, “Work in progress” or “Not yet addressed”. Continual progress has been tracked since then using these measures. After 10 years, the OSPRI members are again reviewing their metrics and have agreed to utilize the RETOS™ tool at the national level, to acquire additional detail in assessing preparedness, tracking changes and prioritizing activities. The use of RETOS™ is discussed in detail later in this paper. An initial RETOS™ ‘Level A’ assessment for the ten countries in the OSPRI region has been conducted and has indicated how this tool can differentiate preparedness between countries (see Figure 2).
Summary RETOS radar charts for the ten countries in the OSPRI region
GI WACAF
During the early years of GI WACAF, a broad picture of the state of preparedness and response capability in the region was drafted. The information provided by relevant government and industry representatives provided the necessary material to undertake a gap analysis study (GI WACAF Annual Review, 2006). The data collected in 2006 have been monitored since then and regularly updated against key indicators covering the requirements of the OPRC 90 Convention. Since 2006, using data submitted by each country during the biennial Regional conferences, it has been possible to monitor the evolution of the level of national preparedness and response capabilities across the region. Six main indicators have been developed to follow this evolution as shown in Table 2.
Whilst the trend across the region has shown a marked development in oil spill response capability, the rate at which this development has taken place has varied significantly among individual countries, with some advancing more rapidly than others. In this regard, the project's work programme has become much more diversified in recent years in order to respond to the needs of individual countries. To reflect this diversified programme, and as capabilities improved, the Project Secretariat developed new indicators to provide a more qualitative report of the progress made and a better picture of the level of oil spill preparedness and response in the region. These supplementary indicators were subsequently endorsed by the Steering Committee in 2015 and are shown in Table 3.
Their status in 2019 is indicated in Figure 4.
These new supplementary indicators allow the Project to have a more refined and qualitative overview of progress and to see whether the frameworks described in national strategy are translated into credible response capability, which is key to building more comprehensive (from legal basis to implementation processes and equipment) and sustainable national preparedness systems. In addition to this generic picture of the region, GI WACAF also puts together Country profiles every two years that are more descriptive and qualitative.
They help to better understand the progress and needs of each country, and thus to tailor activities.
GI SEA
GI SEA had the benefit of learning from the OSPRI and GI WACAF experience. As highlighted earlier, the key metrics are largely aligned, based on the requirements of the OPRC 90 Convention. A high-level view is first taken, from a regulatory perspective, looking at the following elements (similar to the primary indicators used in GI WACAF):
Are there law(s) on protection of the environment?
Are there regulation(s)/ guidelines on oil spill preparedness and response?
Is there a national authority for managing oil spill preparedness and response?
It was recognised that all countries within the remit of GI SEA (the ten member states of the Association of Southeast Asian Nations, ASEAN) have established national laws for the protection of the environment from pollution. While six of the ten countries have specific regulations/guidelines on oil spill preparedness and response, these are not fully developed in the others. However, all the countries have designated a national authority for oil spill preparedness and response. Clearly, the level of oil spill preparedness and response capability varies across the ASEAN Member States.
Building on the above assessment, GI SEA then considers the status of each country with regard to its NOSCP and the additional policies/plans that support effective oil spill preparedness and a prompt and efficient response; the indicators included, and the measurement criteria for each, are summarised in Table 4.
A snapshot of the current state of oil spill preparedness across the ASEAN Member States, following the above indicators, is provided in Figure 5.
Like GI WACAF, GI SEA also puts together Country profiles that are more descriptive and qualitative. These are not publicly available in the immediate term and are meant for internal use by the Project. Additionally, GI SEA monitors the regional, sub-regional and bilateral agreements in place between the ASEAN Member States. A number of countries are Party to active and long-established bilateral and sub-regional arrangements, while other countries are not.² These arrangements pre-date the ASEAN Regional Oil Spill Contingency Plan, which was adopted in November 2018. Considering the differences across the ten countries, it is clear that the cooperative mechanism presented in the regional plan is yet to be fully exercised.
Although the metrics are well developed in each region, the assessment method differs from one region to another. This divergence limits the ability of the GI programme to present and visualise the collective progress in oil spill preparedness and response on a global scale. A harmonized way of measuring progress globally was therefore felt to be beneficial.
II. Attempt to measure progress and compare results: constraints and challenges
a. Setting internationally comparable standards: the example of the IPIECA global risk assessment
In the light of the above, it appears that common global indicators were needed to monitor progress over time and compare regions. These indicators had to be broad enough to adapt to different institutional systems, and to be based on harmonized and accepted good practices. A global risk assessment was conducted in 1998, at the beginning of the GIs. It is one of the few existing ways of measuring progress in preparedness, and its consistency with risk exposure, over time and globally. The approach is recognised as having various limitations and flaws; however, it was retained for directly comparable updates in 2009 and 2019. The assessment is based on the geographical regions shown in Figure 6:
This work uses twenty broad-brush criteria for assessing maritime countries and consolidates these into a regional assessment, based on ten metrics for preparedness and ten for exposure. The individual country scores are averaged to give a regional score. Scores are based on the best available information, with the caveat that some rely on personal knowledge. The assessment criteria are presented in the following tables.
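As a minimal sketch of this consolidation step (the criteria names, point values and countries below are hypothetical placeholders, not the actual IPIECA assessment data), the scoring logic can be expressed as follows:

```python
# Minimal sketch of the consolidation step: each country is scored against
# ten preparedness and ten exposure criteria, and the country scores are
# averaged to give a regional score. All names and values are hypothetical.
from statistics import mean

def country_score(criteria_scores: dict[str, float]) -> float:
    """Sum of the individual criterion scores for one country."""
    return sum(criteria_scores.values())

def regional_score(countries: dict[str, dict[str, float]]) -> float:
    """Average of the country scores across a region."""
    return mean(country_score(c) for c in countries.values())

# Hypothetical example: two countries, three (of ten) preparedness criteria.
region = {
    "Country A": {"OPRC 90 ratified": 10, "NOSCP in place": 5, "Exercises held": 2},
    "Country B": {"OPRC 90 ratified": 10, "NOSCP in place": 10, "Exercises held": 6},
}
print(f"Regional preparedness score: {regional_score(region):.1f}")  # 21.5
```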
b. Limits of the exercise
The outcomes for 2019 show that exposure remains relatively consistent globally, except in South East Asia, and that preparedness has increased considerably where GIs have been active (regions highlighted in red in Figure 7). However, these results need to be treated with caution, taking into consideration the imperfections and caveats that accompany the methodology. Some are specific to the methodology and can be addressed; others are inherent to the exercise of quantifying progress and present greater challenges.
In the first category (caveats specific to the methodology), the following challenges can be listed. First, the geographical regions are arbitrary definitions (though this is also true of the initiatives themselves); for instance, the Mediterranean is displayed as one region (scoring 80), although Southern Mediterranean preparedness is lower (77.6).
Whilst this method is useful for technical aspects (equipment, training, emergency plans, etc.), it does not consider the specificities of each country, such as lawmaking processes, the effectiveness of response procedures, or institutional maturity. For instance, it gives the maximum score to countries that have ratified the 2003 Supplementary Fund, but, unlike CLC and Fund, ratification of this convention may not be relevant for all countries, depending on the volume of oil imported. Indeed, the contribution system for the Supplementary Fund differs from that of the 1992 Fund: for the purpose of paying contributions, at least 1 million tonnes of contributing oil are deemed to have been received each year in each Member State. Additionally, the Bunkers Convention is only worth 1 point, even though statistics provided by ITOPF now show a decline in tanker spills and an increase in non-tanker spills, including bunker spills.
Figure 8 shows that regions are, in general, prepared for the risks they face, but this has to be nuanced by the fact that the methodology only takes into account 10 exposure criteria (as many as there are preparedness criteria), whereas traditional disaster risk prevention methodologies take more risk factors into account. There may be a need to correlate higher-risk regions with industry activity, especially exploration, or with larger risk factors (global marine traffic, piracy, etc.).
In the second category (challenges inherent to the exercise of measuring progress), one of the main problems is the subjectivity of the answers, which is linked to a more general challenge in measuring progress made by governments. The evaluation is based on the data collected by the GIs and other sources, and the collection method may not be the same for all of them. Even though the projects seek to obtain the most accurate data possible, accuracy depends on the countries' own perceptions. The verification of the data provided is a major challenge and presupposes field knowledge of every country in the region. This is compounded by the fact that the quantitative method ascribes figures to facts that may not reflect a more nuanced reality.
c. Qualitative versus quantitative approach
In view of the good scores of some countries in the assessment, compared with the lack of effective implementation of their policies, a complementary qualitative assessment appears necessary. The GI WACAF country profiles, for instance, are therefore very important, as they provide a more precise analysis of the preparedness level of the countries. Primary and secondary indicators are cross-checked, and space is given to commentary from the administration. The case of Benin is illustrative. Benin formally approved its NOSCP in 2006 and has a designated authority in place. However, the process of updating the NOSCP has stalled, and the plan has proved ineffective and out of date since the reshuffle of the Beninese maritime administration. In the case of an oil spill, the current NOSCP is thus of little use, especially as the different parts that should constitute an NOSCP (see the GI WACAF secondary indicators) are not yet developed. Despite the lack of effective implementation, the situation of Benin has improved since 1998: the country has ratified conventions, is now tackling the problem of overlapping institutional mandates, and is committed to further developing parts of its NOSCP.
Notwithstanding the above, significant progress is observed in the regions where a GI is operating (highlighted in Figure 7). The GIs have indeed contributed substantially to raising awareness, to the ratification of IMO conventions (especially OPRC 90), to legislative improvements and to capacity building.
The reflection on the methodology used for setting up indicators leads to a reflection on the aim of the indicators. Indicators serve two key purposes:
To measure if current programme efforts are successful and worthwhile in building improved and sustainable preparedness and response, and
To identify areas for improvement and thereby inform decision-making and prioritization of future programme activities.
Regarding the first point, the methodology described above is useful, as it allows consistent comparison of progress over time. But, as explained, it contains caveats, and the results must be viewed with care and cross-checked against more qualitative or detailed analysis. As for the second point, there is now a wider range of more sophisticated tools for measuring preparedness (e.g. RETOS™), and individual GIs can measure preparedness and implementation with greater granularity.
Having appropriate measures enables oil spill preparedness programmes to develop effective work programmes comprising suitably targeted activities. The measures also inform national decision-makers in their strategic choices to improve preparedness. However, these methods, RETOS™ as well as the GI metrics, do not check preparedness against the risks that the countries face. Therefore, an enhanced methodology to measure progress in preparedness and its consistency with risk exposure is needed.
III. Finding a consistent system for monitoring sustained progress globally
The objective of having good indicators is to have a consistent measure over time and across regions, but also to help governments and decision-makers sustain progress. That is why the indicators used to assess the capacities and needs of a country should be consistent with those used to measure progress. The methodology is important in order to avoid (or at least to minimize) arbitrary and subjective answers, so that both industry and governments can use the indicators with confidence and so that GIs can better monitor progress.
a. Assessing readiness and needs: existing tools
GI WACAF has structured its approach in three steps. First, establishing the legislative and regulatory framework is the foundation of an effective national preparedness and response system.
Second, countries are encouraged to set up policies, processes and tools to implement the legislative framework.
Finally, cross-functional aspects, such as the implementation of databases and transboundary cooperation, are recognized as an integral part of a successful approach.
GI WACAF 3-step approach from basic and core elements to more advanced elements of pollution preparedness and response
This methodology is useful for assessing both the readiness of the country and, subsequently, its needs, by initially checking whether the basics are covered. The metrics then used to conduct the assessment are key to obtaining the most complete evaluation. So far, GI WACAF has been using its own indicators, as described above, but the potential for tools such as the Readiness Evaluation Tool for Oil Spills (RETOS™) to underpin broader evaluations has to be considered.
Indeed, RETOS™ is more comprehensive and is an internationally recognized tool that was developed to set standards, or “good practice” guidelines, on how to evaluate oil spill response readiness in terms of Oil Spill Response Plans (OSRPs) and actual response capability. It was meant to fill the gap created by the absence of international assessment standards for the content of information that should be included in OSRPs, or for the format and presentation of the information within an OSRP.
Readiness includes the experience, training and skills of the personnel involved in oil spill preparedness and response, not only having a plan on paper. These human elements are intangible and difficult to evaluate. The assessment process tries to compensate for these limitations and to make the assessment as objective as possible through a sound methodology and precise indicators. The objectives are to: 1) evaluate the status for a defined OSR scope; 2) identify gaps or areas for improvement; 3) define priorities; 4) assign tasks, responsibilities and schedules; 5) measure advances; and 6) re-evaluate. Thus, the tool can be used not only for a first assessment but also as a means to improve and measure progress over time.
The tool can be used at different scopes. For the purpose of this paper, the scope to be considered is “Government – National (& multinational)”. Then, for each scope to be assessed, the user can select one of three assessment levels (A, B and C). The tool is flexible and can be tailored for governments, as institution-specific criteria can be developed and added to it. The criteria progress from what may be considered fundamental aspects of oil spill response management capability (Level A) to very complete and/or best international practice (Level C). At the end of the assessment, the tool creates a summary radar chart (see Figure 10) with 10 categories.
Finally, the tool provides each user with a Global Performance Analysis, with sub-scores per assessment category and a simple display of results by category; a Global Improvement Program/Implementation Plan is auto-generated with priorities for improvement, as shown in Figure 11. It is important to note that, for Level A, certain critical criteria ensuring basic management are highlighted in yellow in the tool. This encourages a focus on the essential elements for a country commencing the preparedness process.
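As an illustration of the kind of aggregation involved (a sketch only: the data structures, category names and ordering rule below are assumptions made for illustration, not the actual RETOS™ implementation), category sub-scores and a prioritized gap list could be derived as follows:

```python
# Illustrative sketch only, not the actual RETOS(TM) implementation: each
# criterion is marked met/unmet within a category; the sub-score is the
# percentage of criteria met, and unmet criteria are listed with critical
# ones first, then those in the lowest-scoring categories.
from dataclasses import dataclass

@dataclass
class Criterion:
    description: str
    met: bool
    critical: bool = False  # Level A highlights some criteria as critical

def sub_score(criteria: list[Criterion]) -> float:
    """Percentage of criteria met within one assessment category."""
    return 100 * sum(c.met for c in criteria) / len(criteria)

def improvement_plan(categories: dict[str, list[Criterion]]) -> list[str]:
    """Unmet criteria, critical ones first, then lowest sub-scores first."""
    gaps = [(not c.critical, sub_score(crits), name, c.description)
            for name, crits in categories.items()
            for c in crits if not c.met]
    return [f"{name}: {desc}" for _, _, name, desc in sorted(gaps)]

# Hypothetical Level A data for two of the ten radar-chart categories.
assessment = {
    "Legislation & Regimes": [
        Criterion("OPRC 90 ratified", met=True),
        Criterion("NOSCP formally approved", met=False, critical=True),
    ],
    "Training & Exercises": [
        Criterion("National exercise held in last year", met=False),
        Criterion("Responder training programme in place", met=True),
    ],
}
for name, crits in assessment.items():
    print(f"{name}: {sub_score(crits):.0f}%")
for task in improvement_plan(assessment):
    print("Priority:", task)
```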
b. Limits of the tools for an easy snapshot and overall picture
RETOS™ is a proven international tool for assessing the capacities of countries and their needs. However, in order to conduct such an assessment in every country, assessors need to be trained, and the challenges of centralising the answers, following up and monitoring progress after the first assessment would remain. The tool is very comprehensive and may therefore be cumbersome to follow up, especially for countries with very little administrative capacity that need the most support. It is difficult to strike a balance between a manageable number of indicators, keeping the information simple and functional, and a sufficiently complete picture of reality. One idea is to monitor progress with indicators based on the 10 RETOS™ categories and to repeat the assessment in the countries periodically, as feasible. The detailed answers to the questions could be consulted if more precise monitoring were needed, and could be added to complete country profiles.
It should also be noted that the RETOS™ assessment only considers the level of preparedness and not risk exposure. Therefore, as comprehensive as the tool is, it is nonetheless not complete, and the results of the assessment have to be incorporated into a more comprehensive methodology.
c. Building a common framework based on existing tools and the experience of the three GI regions?
There is a need for an internationally recognized method of measuring and analysing preparedness and response, and of checking it against the risks and exposure of the countries. In order to enhance the IPIECA methodology previously described, inspiration can be taken from the Sendai Framework for Disaster Risk Reduction 2015–2030, adopted by the Member States of the United Nations. This framework implements a methodology that establishes a series of metrics allowing comparison between risk levels for each country at the national level. It establishes a reference formula to measure risk based on four indicators: hazard, exposure, vulnerability and resilience.⁴
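A worked sketch of that reference formula (see footnote 4), with all four factors assumed to be normalized to the range 0–1 and the values purely illustrative, shows how stronger preparedness (resilience) drives the composite risk down:

```python
# Sketch of the Sendai-style reference formula from footnote 4:
#   risk = hazard x exposure x vulnerability x (1 - resilience)
# Inputs are assumed normalized to [0, 1]; all values are illustrative.

def risk(hazard: float, exposure: float, vulnerability: float,
         resilience: float) -> float:
    """Composite risk score; higher resilience (preparedness) lowers risk."""
    return hazard * exposure * vulnerability * (1 - resilience)

# A hypothetical country with heavy tanker traffic but strong preparedness
# (e.g. resilience approximated from a RETOS(TM)-style score of 80%):
print(f"risk = {risk(hazard=0.7, exposure=0.9, vulnerability=0.5, resilience=0.8):.3f}")
# risk = 0.063
```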
While this paper provides a study of how to better measure the preparedness level (or resilience), it would be interesting to investigate further how international criteria and scoring could be developed to measure the risk factors, i.e. hazard, exposure and vulnerability. Such an effort would add value to the work already undertaken through the GI programmes (which can be complemented by the RETOS™ tool), while also helping the programmes and decision-makers tailor their improvement programmes not only according to their level of preparedness but also commensurate with the risks they face.
Conclusion
First, this paper has described how the GI initiatives have gradually improved the way they measure progress in oil spill preparedness and response. The GI initiatives provide good tools for enhancing the criteria used, as they provide a refined analysis of each region supported by field experience. However, this regional approach hinders both a meaningful global comparison (as the metrics vary among regions) and a comparison over time (as the metrics have evolved). Moreover, these metrics focus only on the preparedness side and not on risk exposure. The advantage of the IPIECA global risk assessment is that it enables progress to be monitored both globally and over time, as assessments using this method have been made every ten years. It also takes exposure factors into account and checks preparedness against the risks faced. However, this method needs refinement in order to better verify progress.
Regarding the preparedness and response level, the use of RETOS™ in every country could be a way forward to draw a global picture. The implementation, however, requires significant time and effort. The use of the tool in OSPRI is encouraging. This could provide the direction for the GI programmes to adopt a common strategy, while keeping in view the nuances and needs specific to each region.
Looking into the future, it might be worthwhile to incorporate risk exposure into any analysis of oil spill preparedness and response, at either the national or the regional level. In this regard, inspiration can be drawn from existing international tools for disaster risk management. Such a comprehensive assessment would help maintain a common strategic analysis. Fundamentally, while the metrics will evolve over time, the intent of measuring progress must not be lost, and the choice of measures has to satisfy realistic needs.
1 GI SEA website: http://www.gisea.org/#home; OSPRI website: www.ospri.online; GI WACAF website: https://www.giwacaf.net/en/
3 CLC: the International Convention on Civil Liability for Oil Pollution Damage, 1992; Fund: International Convention on the Establishment of an International Fund for Compensation for Oil Pollution Damage, 1992
4 Risk = hazard × exposure × vulnerability × (1 − resilience)