ABSTRACT
Incident responses, whether exercises or actual events, of varying size and complexity, are carried out daily by governments, businesses, and Non-Governmental Organizations (NGOs) across the world. While evaluations, or assessments, of these responses are usually conducted, they are not always conducted effectively. We will address common challenges facing the evaluation/assessment process and what can be done to optimize it for effective and actionable feedback.
Reviewing evaluation feedback gathered over the last ten years from Tier 3, Worst Case Discharge (WCD) exercises held across the globe, we have observed common issues that impact the quality of evaluator feedback. We have developed methods and guidelines to overcome these issues and collect consistent, high-quality evaluation feedback and data.
Key to this is the development of the evaluation materials and the selection of trained and qualified evaluators. An evaluator is expected to answer the evaluation questions provided, so if the answers do not provide the quality of data needed, the problem lies in the question set. For example, there should not be any questions that can be answered with a simple “yes” or “no”; questions should instead be phrased as directives requiring a narrative response.
This brings us to the evaluators: you can have the best-worded questions and still receive poor-quality feedback if the evaluators are not subject matter experts in the operation or process you are asking them to evaluate. When selecting evaluators and developing the evaluation process, factors to consider include: Have they been properly trained in the evaluation process? Has what “good” looks like been defined, or is it left to each individual to decide? Are they required to support their positions with a narrative?
Digital evaluation tools face similar challenges and are only as good as their designer and the end user. A truly digital tool includes some level of automated analysis or data consolidation, which is difficult where the pathways to a successful response are not binary (yes or no), given human factors and team dynamics. Just as driving an expensive sports car does not make you a better driver, using a complex digital tool does not make for better evaluation feedback. If you ask the wrong questions, or the evaluators are not competent, the results will not be what you need regardless of which evaluation approach or tool you employ.
Beyond the questions, the evaluators, and the training they receive in the process, additional factors influence the effectiveness of evaluation feedback. Among these are cultural, personal, and geopolitical influences. In some cultures, it is considered rude to mention any performance gaps, and in some political environments, constructive feedback may be considered criticism of the existing political administration and simply cannot be made. As a result, evaluation feedback may not be accurate and could pose a significant risk in the future.
What is the solution? We will discuss the influencing factors identified and the methods we are implementing to eliminate or minimize their impact. By utilizing a standardized approach that provides benchmarks, we can compare apples to apples, allowing us to develop actionable improvement strategies that support continuous improvement across the globe.