Background Patients who decompensate overnight experience worse outcomes than those who do so during the day. Just-in-time (JIT) simulation could improve on-call resident preparedness but has been minimally evaluated in critical care medicine (CCM) to date.
Objective To determine whether JIT training improves residents’ performance in simulation and whether those skills transfer to better clinical management in adult CCM.
Methods Second-year medicine residents participated in simulated decompensation events aligned to common medical intensive care unit (MICU) emergencies predicted by their attending intensivist to occur overnight. Simulation faculty scored their performance via critical action checklists. If a decompensation event occurred overnight, MICU attendings also rated residents’ clinical management. At the rotation’s conclusion, a variant of one previously trained scenario was simulated to assess for performance improvement. Resident perceptions were surveyed before, during, and after completion of the study.
Results Twenty-eight residents participated; 22 of 28 (79%) completed the curriculum. Management of simulated decompensations improved following training (initial simulation checklist completion rate 60% vs 80% on final simulation; P≤.001, Wilcoxon r=0.5). Decompensation events occurred in 27 (45%) of the 60 shifts evaluated, with no observed difference in faculty ratings of overnight performance (median rating 4.5 if trained for the event vs 3.0 if untrained; U=58.50; P=.12; Mann-Whitney r=0.30). Residents’ self-reported preparedness to manage MICU emergencies improved significantly following training, from a median of 3.0 to 4.0 (P=.006, Wilcoxon r=0.42).
Conclusions JIT simulation training improved residents’ performance in simulation and their self-reported preparedness, but it did not demonstrably improve faculty-rated management of actual overnight events.
Introduction
Hospitalized patients who decompensate overnight experience worse outcomes than those who do so during the day.1-10 Work hour restrictions and night float rotations, intended to mitigate clinician fatigue and improve overnight staffing, have had mixed results on patient safety.11-20 The persistence of an “off-hours effect” may reflect the relative inexperience of those on duty overnight, often residents in academic medical intensive care units (MICUs),21,22 and the correspondingly lower likelihood that necessary interventions will be performed promptly and properly.
Lack of preparedness can be addressed through training. High-fidelity simulation is particularly effective for developing complex clinical skills, allowing for active learning in an environment where errors do not negatively affect patient safety. In critical care settings, simulation has demonstrated superiority over other teaching modalities.23-37
“Just-in-time” (JIT) simulation, or anticipatory simulation, is a type of training conducted prior to a predicted clinical event, thereby leveraging temporal proximity to mitigate skill decay.38 JIT interventions have frequently been employed to improve the performance of specific high-acuity procedures and have demonstrated efficacy.36,39-46 However, the value of the JIT approach for more complex clinical skills in adult critical care has not yet been meaningfully evaluated (ie, at higher Kirkpatrick levels).47 The aim of this study was to determine whether JIT training improves residents’ performance in simulation and if those skills transfer to improved clinical management in adult critical care medicine. A secondary aim of our study was to evaluate resident satisfaction with the JIT simulation curriculum.
Critical care patients experiencing overnight adverse events have worse outcomes, and resident preparation to handle these events has been minimally studied.
Just-in-time simulation of commonly occurring overnight patient decompensation improved resident self-reported preparedness and later simulation performance, but did not change faculty assessment of resident performance in the low number of overnight events that occurred.
Just-in-time simulation to improve resident preparedness for handling overnight patient critical care decompensation is a promising strategy.
Methods
Setting and Participants
We conducted a prospective, nonrandomized, observational study evaluating the effectiveness of a JIT simulation training program on residents’ performance managing clinical decompensations in the MICU. Training took place in the Manhattan Veterans Affairs (VA) Simulation Learning Center adjacent to the 12-bed combined medical and cardiac intensive care unit at the VA New York Harbor Healthcare System’s Manhattan Campus (VA MICU), an urban teaching hospital affiliated with the New York University Grossman School of Medicine (NYUSOM). The VA MICU is staffed by residents from the NYU internal medicine (IM) residency program; most residents rotate there for 4 weeks. All second-year residents rotating between October 2019 and November 2021 were invited to participate. To ensure consistency in participants’ prior exposure to critical care medicine, first- and third-year residents were excluded from this study. Data collection was paused twice during the COVID-19 pandemic to accommodate reallocations of hospital resources and concluded once a full year’s worth of second-year residents had participated in the program (Figure 1).
Figure 1. Participant Enrollment Flowchart and Demographics
Abbreviations: VA, Veterans Affairs; MICU, medical intensive care unit; PGY, postgraduate year.
During the study period, one PGY-2 resident was on call each night as the most senior in-house physician responsible for leading the initial management of any clinical changes that occurred overnight. Fellows and faculty were available for consultation by phone and, if necessary, to come to the hospital. Senior residents were on 24-hour call every third night and received simulation training during the preceding day. Thus, residents could complete a maximum of 4 simulations during their rotation; they could opt out of a session because of conflicting clinical responsibilities. Online supplementary data Appendix 1 outlines the typical monthly schedule.
Interventions and Outcomes Measured
The initial curriculum was co-created by B.S.K. (head of the Simulation Learning Center), C.B.D. (Associate Program Director of the NYU IM residency), R.R. (Director of Nocturnist Medicine at Bellevue Hospital Center), and S.S.N. (Senior Simulation Fellow at the time of curriculum implementation), with iterative improvements contributed by all research team members. The most common clinical decompensation events seen in ICU settings were identified through review of reimbursement data from the Agency for Healthcare Research and Quality,48 and 9 scenarios were included in the curriculum: symptomatic bradycardia, ventricular tachycardia/fibrillation arrest, septic shock, hemorrhagic shock, hypoxic respiratory failure, hypercapnic respiratory failure, ventilator troubleshooting, massive pulmonary embolism, and elevated intracranial pressure (ICP). Scenarios were excluded if they were difficult to simulate (eg, acute renal failure) or to predict (eg, specific pharmacotherapy overdose), or if they were typically encountered in the adjoining cardiac critical care unit (eg, cardiogenic shock). R.R. created a set of clinical cases and critical action checklists for each of the 9 scenarios, with 15 key performance elements based on best practices for management of each scenario (online supplementary data Appendices 2-4).
We utilized Messick’s validity framework to develop and modify our scoring checklists.49,50 All materials were reviewed and edited by a panel of 3 additional faculty with expertise in critical care and simulation-based medical education (B.S.K., S.S.N., A.A.); discrepancies in checklist content were resolved by consensus through a modified Delphi process.51-54 Panel members discussed the checklists’ fitness for measuring each session’s objectives and agreed to specific rating rules (eg, how to score a checklist if the scenario deviated from the script). The authors attempted to mitigate interrater variability in checklist documentation by creating trichotomous scoring systems (done, partially done, or not done). Once checklist content, structure, and rating rules were finalized, faculty evaluators were trained, and all scenarios were piloted with a group of residents enrolled in a simulation elective rotation. Ultimately, a post-hoc calculation of interrater reliability was performed using a sample of 5 of the most simulated scenarios (septic shock [twice], elevated ICP, hypoxic and hypercapnic respiratory failure) independently scored by faculty (R.R., J.W.T.).
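As an illustration of how such an interrater check on trichotomous checklist scores might be computed, the sketch below calculates a linearly weighted kappa in Python with scikit-learn; the manuscript does not specify its statistical software, and the rater_a and rater_b values are hypothetical placeholders rather than study data.

```python
# Hypothetical sketch: weighted kappa for two raters' trichotomous checklist
# scores (0 = not done, 1 = partially done, 2 = done). Values are invented
# placeholders for illustration only, not the study's data.
from sklearn.metrics import cohen_kappa_score

rater_a = [2, 2, 1, 0, 2, 1, 2, 0, 1, 2, 2, 1, 0, 2, 2]  # 15 checklist items
rater_b = [2, 2, 1, 0, 2, 2, 2, 0, 1, 2, 1, 1, 0, 2, 2]

# Linear weights penalize a done/not-done disagreement more heavily than a
# done/partially-done disagreement, matching the ordinal scoring scale.
kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"Weighted kappa = {kappa:.2f}")
```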
Case selection for each session was based on a conversation with the attending intensivist, who was asked to predict which decompensation was likely to occur that night according to the clinical status of patients in the MICU. During each 30-minute simulation session, residents received direct observation and feedback from simulation faculty. Each case utilized a prescribed script and clinical deterioration sequence delivered via a high-fidelity, ventilator-compatible patient simulator (SimMan 3G, Laerdal Global Health). We used the ASL 5000 Breathing Simulator (IngMar Medical) to display ventilator waveforms and facilitate adjustments. During simulations, participants obtained a history, conducted a physical examination, requested laboratory and imaging studies, called consultants, and performed a limited number of procedures. Researchers assessed participant performance in real time using the critical action checklists. Immediately following each simulation, a semistructured debrief with faculty occurred. Checklist scoring responsibilities rotated among multiple authors to promote blinding to participants’ prior performances.
To assess for performance improvement, residents completed a final “variant” simulation at the end of the rotation (online supplementary data Appendix 3). Each variant case evaluated the resident’s ability to manage 1 of the 9 clinical scenarios previously completed but applied to a different patient presentation. For example, hypoxic respiratory failure was due to hypertensive emergency with flash pulmonary edema in the initial case, then presented in the context of pneumonia and acute respiratory distress syndrome in the final variant. We used variant cases, rather than repeat simulations, to better assess whether higher-order learning (ie, skill transfer) occurred during the initial training session, rather than merely the lower-order process of “remembering” previous instruction through recognition and recall of prior simulations (ie, skill retention).55 We maintained the critical actions and scoring checklists for the variant cases to compare performance before and after exposure to the curriculum.
To assess whether simulated skills transferred into real clinical settings, we surveyed faculty regarding residents’ actual clinical management decisions throughout their rotations. The morning after every simulation call night, the daytime VA MICU attending intensivists were asked whether a clinical decompensation event had occurred overnight and to rate the resident’s management on a 5-point Likert scale (1=very poor to 5=very good; online supplementary data Appendix 5). At the end of the data collection phase, we separated faculty assessments into 2 groups according to whether the resident had previously received simulation training for the event that occurred. Additionally, residents were surveyed regarding their perceived preparedness and prior exposures to overnight decompensations before each simulation and at the conclusion of the curriculum (online supplementary data Appendices 6-8).
Statistical Analysis
Residents who did not complete the final simulation for any reason were considered lost to follow-up, and their data were excluded from comparison testing. Prior to analysis, we de-identified all data to ensure analysts were blinded to the identity and performance of individual participants. Shapiro-Wilk tests revealed that our data departed from normality, so we used nonparametric tests for analysis and calculated 95% confidence intervals (CIs) and effect sizes for all findings. We used Wilcoxon signed rank testing to evaluate changes in residents’ performance between the initial and final (variant) simulations as well as in their self-perceived preparedness, rated on 5-point Likert scales at the start and end of the curriculum. We used the Mann-Whitney U (Wilcoxon rank sum) test to compare the management of actual overnight events according to whether the resident had previously received simulation training for that clinical scenario, and we reported r values to estimate effect sizes for both comparisons. Finally, we performed thematic analysis56 of qualitative responses from resident and faculty surveys to further explore their perceptions of the curriculum. One author (R.R.) coded the complete transcript of all survey data, then inductively determined salient themes, which were organized into domains with 1 to 2 representative quotations for each theme. Codes were assigned positive, negative, or neutral values; where themes featured both positive and negative codes, both representative quotations were presented.
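For illustration only, a minimal sketch of the paired and unpaired nonparametric comparisons described above is shown below, assuming Python with NumPy and SciPy (the manuscript does not report its analysis software); all numeric values are hypothetical placeholders rather than study data, and the effect size r is approximated as Z/√N from the 2-sided P value.

```python
# Illustrative sketch of the nonparametric comparisons described above,
# run on hypothetical placeholder values rather than the study's data.
import numpy as np
from scipy import stats

# Paired checklist completion rates (%) for the same residents on the
# initial and final (variant) simulations (hypothetical values).
initial = np.array([55, 60, 65, 50, 70, 60, 75, 58])
final = np.array([75, 80, 85, 70, 90, 78, 88, 82])

# Wilcoxon signed rank test for paired, non-normal data.
w_stat, w_p = stats.wilcoxon(initial, final)

# Effect size r = Z / sqrt(N), with Z approximated from the 2-sided P value.
z = stats.norm.isf(w_p / 2)
r_paired = z / np.sqrt(len(initial))

# Mann-Whitney U (Wilcoxon rank sum) test for the unpaired comparison of
# attending ratings (1-5 Likert) with vs without prior JIT training
# (hypothetical values).
trained = np.array([5, 4, 5, 4, 3, 5, 4, 5])
untrained = np.array([3, 4, 3, 2, 4, 3, 3])
u_stat, u_p = stats.mannwhitneyu(trained, untrained, alternative="two-sided")

print(f"Wilcoxon signed rank: W={w_stat:.1f}, P={w_p:.3f}, r={r_paired:.2f}")
print(f"Mann-Whitney U: U={u_stat:.1f}, P={u_p:.3f}")
```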
This study met the NYUSOM’s criteria for self-certification as quality improvement for program evaluation purposes, rather than as human subjects research, and thus did not require institutional review board review.
Results
During the study, 32 of 38 (84%) VA MICU overnight senior resident shifts were covered by second-year residents; 28 of these residents participated in 76 simulation training sessions, and 22 of 28 (79%) completed the curriculum (Figure 1). Participants completed a mean of 2.7 (range 1-4) simulations each, with some simulations missed because of time pressures or interruptions from the COVID-19 pandemic.
Relative to their performance on initial simulations, residents’ performance on the final (variant) simulations across various clinical scenarios (n=22) improved significantly, from a median of 60% completion of checklist critical actions (95% CI 59.40-68.57) on initial simulation to 80% completion on final simulations (95% CI 71.64-83.00); P≤.001, Wilcoxon r=0.5 (Figure 2). A post hoc interrater reliability analysis of 5 scored checklists showed strong agreement between raters (weighted κ=0.843 [95% CI 0.75-0.94], P<.0001).
There were 27 decompensation events during the study for which faculty assessment data were available, with a next-day attending survey response rate of 79% (60 of 76). In 11 instances, the resident had not received training for the type of clinical event that occurred; in 16, the resident had received simulation training for that scenario (online supplementary data Appendix 9). If the resident had received JIT training, the median attending rating of trainee management of actual overnight events in the MICU on a 5-point Likert scale was 4.5 (95% CI 3.74-4.63); if untrained, the median rating was 3.0 (95% CI 3.16-4.11); U=58.50, P=.12, Mann-Whitney r=0.3 (Figure 3). The groups did not differ significantly in prior experience rotating in the ICU, baseline or final simulation score, or reported experience of prior decompensations during their ICU block (Table).
Residents’ self-ratings of preparedness to manage MICU emergencies improved significantly following training, from a median of 3.0 (neither prepared nor unprepared) at the outset of the curriculum to 4.0 (somewhat prepared) at its completion (P=.006; Wilcoxon r=0.42; Figure 4). The resident response rate was 28 of 28 (100%) for the initial survey and 44 of 48 (92%) for subsequent surveys.
There was no difference in baseline exposure to critical care, simulation performance, faculty rating, or preparedness scoring between the groups of residents who participated in the curriculum before or during the COVID-19 pandemic (online supplementary data Appendix 10). We observed a statistically significant decrease in the number of simulations per participant during the pandemic, with a change in simulated scenarios largely driven by an increase in hemorrhagic shock and a decrease in elevated ICP cases.
Final resident feedback on the curriculum is shown in online supplementary data Appendix 11. Free-text responses to survey questions fell into 3 domains: those pertaining to the educational environment used for simulation, those regarding the relevance of the training to residents’ clinical practice, and faculty comments on the quality and safety of residents’ actual clinical management following training. Selected findings and exemplar quotations are listed in online supplementary data Appendix 12.
Discussion
In this study, we found that completion of the curriculum significantly improved residents’ ability to perform critical action steps in simulations of common ICU emergencies, as well as their self-rated preparedness to respond to such events.
Prior studies of simulation-based education in the MICU show similar effects. Singer et al reported higher scores on checklist assessments of resident performance in cases of septic shock/hypoxic respiratory failure, ventilator alarm management, and evaluation of spontaneous breathing trials by residents trained using simulation rather than didactic-based education.23 Schroedl et al found that residents exposed to simulation outperformed traditionally educated residents on bedside clinical assessment of mechanical ventilation and invasive hemodynamic monitoring parameters.30 The use of JIT simulation training in ICU education is less studied. Nishisaki et al performed JIT simulation training for endotracheal intubation for residents rotating in the pediatric ICU and noted that it did not improve first-attempt or overall success rates but did improve resident participation.46 Our study accords with the existing literature demonstrating the benefits of simulation training for critical action checklist completion and extends the use of JIT simulation to the adult ICU population.
Our study has several important limitations. It is unclear to what extent residents’ learning through clinical practice in the MICU contributed to some of our improved outcomes. Additionally, although we attempted to keep daytime ICU faculty blinded to the identity of each simulation scenario, their assessments of overnight performance may have been biased by foreknowledge of which overnight decompensation was simulated and by the quality of support provided by the overnight fellow. The surveyed attending intensivists also did not receive formal rater training, further suggesting that their ratings should be interpreted with caution. Finally, the use of a single coder may have introduced personal bias and a singular perspective into the qualitative analysis of faculty and resident comments.
Our study is a single-institution intervention targeting a specific training level of IM residents, and care should be taken in generalizing the findings. In particular, replicating this curriculum in other MICUs may pose significant logistical challenges. Our institution is fortunate to have a simulation lab near our MICU that is staffed with a critical care faculty member and a senior critical care simulation fellow, facilitating easy interruption of clinical assignments for simulation education during the week. Even so, because the team makeup of our VA MICU has undergone further reorganization in response to the COVID-19 pandemic, we are no longer able to support the curriculum as described at the time of this publication. Institutions lacking these resources may have trouble recreating our curricular framework.
We did not find evidence that JIT simulation training improved residents’ management of actual overnight events in the MICU. Although our comparison of attending ratings for the trained and untrained groups did not achieve statistical significance (P=.12), we believe it may have educational significance, as it represented a shift from average to above-average performance after training. Relative to most effect sizes in education research, the moderate effect size of this improvement (Mann-Whitney r=0.3) was notable.57-59 The lack of statistical significance may be attributable to our small sample size, as decompensation events occurred infrequently. A lack of standardized rater training for the service attendings may also have contributed. We employed a global performance scale (online supplementary data Appendix 5) instead of more granular measures of assessment, such as specific inquiries into critical actions taken in accordance with guidelines (eg, sepsis bundle completion in septic shock). Because no resident’s performance was rated below average despite specific negative feedback comments (online supplementary data Appendix 12), inflated ratings in both groups may have obscured differences in performance that a more prescriptive survey instrument could have captured.
Viewed through the lens of the Kirkpatrick model’s 4 levels of evaluation,47 this simulation curriculum had a positive impact on participants’ Reactions (satisfaction surveys) and Learning (checklist performance) but did not have a significant impact on Behaviors (intensivist reviews) and was not designed to evaluate changes in Results (ICU outcomes). Further research, utilizing a similar simulation structure but with more rigorous faculty assessment training and more clearly defined survey instruments aligned to best practices, is needed to determine whether adoption of the JIT simulation model into the longitudinal MICU curriculum has a measurable impact on overnight resident management behaviors and resultant patient outcomes.
Conclusions
In this pilot study of JIT simulation training for overnight emergencies, second-year medicine residents exposed to training during their MICU rotation demonstrated better performance in a simulation setting, reported feeling better prepared for their overnight calls, and expressed high satisfaction with the curriculum.
The authors would like to thank Jose D. Chuquin, MD, Molly Forster, MD, Alexandria E. Imperato, MD, Jesse B. Rafel, MD, Grace T. Gibbon, MPH, Dhawani Shah, MPH, and Jordan Murphy, MPH.
References
Editor’s Note
The online supplementary data contains resources, surveys, further data from the study, and a visual abstract.
Author Notes
Funding: The authors report no external funding source for this study.
Conflict of interest: The authors declare they have no competing interests.
This work was previously presented at the virtual Chest Annual Meeting, October 18-21, 2020.