Background: Patients who decompensate overnight experience worse outcomes than those who do so during the day. Just-in-time (JIT) simulation could improve on-call resident preparedness but has been minimally evaluated in critical care medicine (CCM) to date.

Objective: To determine whether JIT training can improve residents' performance in simulation and whether those skills transfer to better clinical management in adult CCM.

Methods: Second-year medicine residents participated in simulated decompensation events aligned to common medical intensive care unit (MICU) emergencies predicted to occur overnight by their attending intensivist. Simulation faculty scored their performance via critical action checklists. If the predicted event occurred, MICU attendings also rated residents' clinical management. At the rotation's conclusion, a variant of one previously trained scenario was simulated to assess for performance improvement. Resident perceptions were surveyed before, during, and after completion of the study.

Results: Twenty-eight residents participated; 22 of 28 (79%) completed the curriculum. Management of simulated decompensations improved following training (checklist completion rate 60% in initial vs 80% in final simulations; P≤.001; Wilcoxon r=0.5). Predicted events occurred in 27 (45%) of the 60 shifts evaluated, with no observed difference in faculty ratings of overnight performance (median rating 4.5 if trained vs 3.0 if untrained; U=58.50; P=.12; Mann-Whitney r=0.30). Residents' self-reported preparedness to manage MICU emergencies improved significantly following training, from a median of 3.0 to 4.0 (P=.006; Wilcoxon r=0.42).

Conclusions: JIT simulation training improved residents' performance in simulation.

Hospitalized patients who decompensate overnight experience worse outcomes than those who do so during the day.1-10  Work hour restrictions and night float rotations, intended to mitigate clinician fatigue and improve overnight staffing, have had mixed results on patient safety.11-20  The persistence of an “off-hours effect” may be due to the relative inexperience of those on duty overnight—often residents in academic medical intensive care units (MICUs)21,22 —and the correspondingly lower likelihood that necessary interventions would be promptly or properly performed.

Lack of preparedness can be remedied with training. High-fidelity simulation is particularly effective for developing complex clinical skills, allowing for active learning in an environment where errors do not negatively affect patient safety. Simulation has demonstrated superiority over other teaching modalities in critical care settings.23-37

“Just-in-time” (JIT) simulation, or anticipatory simulation, is a type of training conducted prior to a predicted clinical event, thereby leveraging temporal proximity to mitigate skill decay.38  JIT interventions have frequently been employed to improve the performance of specific high-acuity procedures and have demonstrated efficacy.36,39-46  However, the value of the JIT approach for more complex clinical skills in adult critical care has not yet been meaningfully evaluated (ie, at higher Kirkpatrick levels).47  The aim of this study was to determine whether JIT training improves residents’ performance in simulation and if those skills transfer to improved clinical management in adult critical care medicine. A secondary aim of our study was to evaluate resident satisfaction with the JIT simulation curriculum.

KEY POINTS
What Is Known

Critical care patients experiencing overnight adverse events have worse outcomes, and resident preparation to handle these events has been minimally studied.

What Is New

Just-in-time simulation of commonly occurring overnight patient decompensation improved resident self-reported preparedness and later simulation performance, but did not change faculty assessment of resident performance in the low number of overnight events that occurred.

Bottom Line

Just-in-time simulation to improve resident preparedness for handling overnight patient critical care decompensation is a promising strategy.

Methods

Setting and Participants

We conducted a prospective, nonrandomized, observational study evaluating the effectiveness of a JIT simulation training program on residents' performance managing clinical decompensations in the MICU. Training took place in the Manhattan Veterans Affairs (VA) Simulation Learning Center adjacent to the 12-bed combined medical and cardiac intensive care unit at the VA New York Harbor Healthcare System's Manhattan Campus (VA MICU), an urban teaching hospital affiliated with the New York University Grossman School of Medicine (NYUSOM). The VA MICU is staffed by residents from the NYU internal medicine (IM) residency program; most residents rotate there for 4 weeks. All second-year residents rotating between October 2019 and November 2021 were invited to participate. To ensure consistency in participants' prior exposure to critical care medicine, first- and third-year residents were excluded from this study. Data collection was paused twice during the COVID-19 pandemic to accommodate reallocations in hospital resources and concluded once a full year's worth of second-year residents had participated in the program (Figure 1).

Figure 1

Participant Enrollment Flowchart and Demographics

Abbreviations: VA, Veterans Affairs; MICU, medical intensive care unit; PGY, postgraduate year.


During the study period, one PGY-2 resident was on call each night as the most senior in-house physician responsible for leading the initial management of any clinical changes that occurred overnight. Fellows and faculty were available for consultation by phone and, if necessary, to present to the hospital. Senior residents were on 24-hour call every third night and received simulation training during the preceding day. Thus, residents could complete a maximum of 4 simulations during their rotation; they could opt out of a session because of conflicting clinical responsibilities. Online supplementary data Appendix 1 outlines the typical monthly schedule.

Interventions and Outcomes Measured

The initial curriculum was co-created by B.S.K. (head of the Simulation Learning Center), C.B.D. (Associate Program Director of the NYU IM residency), R.R. (Director of Nocturnist Medicine at Bellevue Hospital Center), and S.S.N. (Senior Simulation Fellow at the time of curriculum implementation), with iterative improvements contributed by all research team members. The most common clinical decompensation events seen in ICU settings were identified through review of reimbursement data from the Agency for Healthcare Research and Quality,48 and 9 scenarios were included in the curriculum: symptomatic bradycardia, ventricular tachycardia/fibrillation arrest, septic shock, hemorrhagic shock, hypoxic respiratory failure, hypercapnic respiratory failure, ventilator troubleshooting, massive pulmonary embolism, and elevated intracranial pressure (ICP). Scenarios were excluded if they were difficult to simulate (eg, acute renal failure) or predict (eg, specific pharmacotherapy overdose), or if they were typically encountered in the adjoining cardiac critical care unit (eg, cardiogenic shock). R.R. created a set of clinical cases and critical action checklists for each of the 9 scenarios, with 15 key performance elements based on best practices for management of each scenario (online supplementary data Appendices 2-4).

We utilized Messick’s validity framework to develop and modify our scoring checklists.49,50  All materials were reviewed and edited by a panel of 3 additional faculty with expertise in critical care and simulation-based medical education (B.S.K., S.S.N., A.A.); discrepancies in checklist content were resolved by consensus through a modified Delphi process.51-54  Panel members discussed the checklists’ fitness for measuring each session’s objectives and agreed to specific rating rules (eg, how to score a checklist if the scenario deviated from the script). The authors attempted to mitigate interrater variability in checklist documentation by creating trichotomous scoring systems (done, partially done, or not done). Once checklist content, structure, and rating rules were finalized, faculty evaluators were trained, and all scenarios were piloted with a group of residents enrolled in a simulation elective rotation. Ultimately, a post-hoc calculation of interrater reliability was performed using a sample of 5 of the most simulated scenarios (septic shock [twice], elevated ICP, hypoxic and hypercapnic respiratory failure) independently scored by faculty (R.R., J.W.T.).

Case selection for each session was based on a conversation with the attending intensivist, who was asked to predict which decompensation was likely to occur that night according to the clinical status of patients in the MICU. During each 30-minute simulation session, residents received direct observation and feedback from simulation faculty. Each case utilized a prescribed script and clinical deterioration sequence delivered via a high-fidelity, ventilator-compatible patient simulator (SimMan 3G, Laerdal Global Health). We used the ASL 5000 Breathing Simulator (IngMar Medical) to display ventilator waveforms and facilitate adjustments. During simulations, participants obtained a history, conducted a physical examination, requested laboratory and imaging studies, called consultants, and performed a limited number of procedures. Researchers assessed participant performance in real time using the critical action checklists. Immediately following each simulation, a semistructured debrief with faculty occurred. Checklist scoring responsibilities rotated among multiple authors to promote blinding to participants’ prior performances.

To assess for performance improvement, residents completed a final “variant” simulation at the end of the rotation (online supplementary data Appendix 3). Each variant case evaluated the resident’s ability to manage 1 of the 9 clinical scenarios previously completed but applied to a different patient presentation. For example, hypoxic respiratory failure was due to hypertensive emergency with flash pulmonary edema in the initial case, then presented in the context of pneumonia and adult respiratory distress syndrome in the final variant. We used variant cases—rather than repeat simulations—to better assess whether higher-order learning (ie, skill transfer) occurred during the initial training session beyond cognitive processes within the lower-order taxon of “remembering” previous instruction, such as recognition and recall of prior simulations (ie, skill retention).55  We maintained the critical actions and scoring checklists for the variant cases to compare performance before and after exposure to the curriculum.

To assess whether simulated skills transferred into real clinical settings, we surveyed faculty regarding residents’ actual clinical management decisions throughout their rotations. The morning after every simulation call night, the daytime VA MICU attending intensivists were asked whether a clinical decompensation event occurred overnight and to rate resident management on a 5-point Likert scale (1=very poor to 5=very good; online supplementary data Appendix 5). At the end of the data collection phase, we separated faculty assessments into 2 groups pertaining to an event for which the resident had or had not previously received simulation training. Additionally, residents were surveyed regarding their perceived preparedness and prior exposures to overnight decompensations before each simulation and at the conclusion of the curriculum (online supplementary data Appendices 6-8).

Statistical Analysis

Residents who did not complete the final simulation for any reason were considered lost to follow-up, and their data were excluded from comparison testing. Prior to analysis, we de-identified all data to ensure analysts were blinded to the identity and performance of individual participants. Shapiro-Wilk tests revealed that our data departed from normality, so we used nonparametric tests for analysis and calculated 95% confidence intervals (CIs) and effect sizes for all findings. We used Wilcoxon signed rank testing to evaluate changes in residents' performance between the initial and the final (variant) simulations, as well as in their self-perceived preparedness, rated on 5-point Likert scales at the start and end of the curriculum. We employed Mann-Whitney U (Wilcoxon rank sum) testing to compare the management of actual overnight events based on whether the residents had previously received simulation training for that clinical scenario, and reported Wilcoxon and Mann-Whitney r values to estimate effect size. Finally, we performed thematic analysis56 of qualitative responses from resident and faculty surveys to further explore their perceptions of the curriculum. One author (R.R.) coded the complete transcript of all survey data, then inductively determined salient themes, which were organized into domains with 1 to 2 representative quotations for each theme. Codes were assigned positive, negative, or neutral values; where themes featured both positive and negative codes, both representative quotations were presented.
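As a rough sketch of the paired comparison described above, the example below computes a Wilcoxon signed rank statistic and the effect size r = Z/√N on hypothetical checklist scores (the data are illustrative, not the study's); a real analysis would use a statistics package rather than this hand-rolled normal approximation.

```python
# Illustrative sketch (hypothetical scores, not the study data) of a
# Wilcoxon signed rank test on initial vs final (variant) checklist
# completion, with the effect size r = Z / sqrt(N) used in the article.
import math

initial = [55, 60, 62, 58, 65, 60, 70, 50, 64, 59]  # % checklist completion
final   = [75, 80, 78, 70, 85, 82, 88, 72, 80, 76]

# Rank the absolute paired differences (zeros dropped), averaging ties.
diffs = [f - i for i, f in zip(initial, final) if f != i]
order = sorted(range(len(diffs)), key=lambda k: abs(diffs[k]))
ranks = [0.0] * len(diffs)
pos = 0
while pos < len(order):
    end = pos
    while end + 1 < len(order) and abs(diffs[order[end + 1]]) == abs(diffs[order[pos]]):
        end += 1
    avg = (pos + end) / 2 + 1  # average rank for the tie block
    for k in range(pos, end + 1):
        ranks[order[k]] = avg
    pos = end + 1

# Sum of ranks for positive differences, then normal approximation.
w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
n = len(diffs)
mu = n * (n + 1) / 4                              # mean of W+ under H0
sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)  # SD of W+ under H0
z = (w_plus - mu) / sigma
r_effect = abs(z) / math.sqrt(n)                  # effect size r = Z / sqrt(N)
print(f"W+={w_plus}, Z={z:.2f}, r={r_effect:.2f}")
```

With every hypothetical resident improving, W+ equals the full rank sum and the approximation yields a large r, mirroring the direction (though not the magnitude) of the study's reported effect.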

This study met the NYUSOM’s criteria for self-certification as quality improvement for program evaluation purposes, rather than as human subjects research, and thus did not require institutional review board review.

Results

During the study, 32 of 38 (84%) VA MICU overnight senior resident shifts were covered by second-year residents, 28 of whom participated in 76 simulation training sessions; 22 of 28 (79%) residents completed the curriculum (Figure 1). Participants completed a mean of 2.7 (range 1-4) simulations each, with some simulations missed because of time pressures or interruptions from the COVID-19 pandemic.

Relative to their performance on initial simulations, residents' performance on the final (variant) simulations across various clinical scenarios (n=22) improved significantly, from a median of 60% completion of checklist critical actions (95% CI 59.40-68.57) on initial simulation to 80% completion in final simulations (95% CI 71.64-83.00); P≤.001, Wilcoxon r=0.5 (Figure 2). A post-hoc interrater reliability analysis of 5 scored checklists showed strong agreement between raters (weighted κ=0.843 [95% CI 0.75-0.94], P<.0001).
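A weighted kappa like the one reported above can be computed in a few lines; the ratings below are hypothetical (not the study's data) and use a linear disagreement weighting over the trichotomous done/partially done/not done checklist scale.

```python
# Rough illustration (hypothetical ratings, not the study data) of a
# linearly weighted kappa for the trichotomous checklist scale
# (0 = not done, 1 = partially done, 2 = done) scored by two raters.
from collections import Counter

rater_a = [2, 2, 1, 0, 2, 1, 2, 0, 1, 2, 2, 1]
rater_b = [2, 2, 1, 0, 2, 2, 2, 0, 1, 2, 1, 1]

categories = [0, 1, 2]
n = len(rater_a)

counts = Counter(zip(rater_a, rater_b))  # joint rating counts
marg_a = Counter(rater_a)                # marginal counts, rater A
marg_b = Counter(rater_b)                # marginal counts, rater B

# Observed and chance-expected disagreement, weighted by |i - j|.
obs = sum(abs(i - j) * counts[(i, j)] / n
          for i in categories for j in categories)
exp = sum(abs(i - j) * marg_a[i] * marg_b[j] / n**2
          for i in categories for j in categories)

kappa_w = 1 - obs / exp
print(f"linearly weighted kappa = {kappa_w:.2f}")
```

The weighting means a done/partially-done disagreement counts half as much as a done/not-done disagreement, which suits an ordered scale; an unweighted kappa would penalize both equally.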

Figure 2

Resident Critical Action Checklist Performance


There were 27 decompensation events during the study for which faculty assessment data were available, with a next-day attending survey response rate of 79% (60 of 76). In 11 instances, the resident had not received training for the type of clinical event that occurred; for 16, the resident received simulation training for that scenario (online supplementary data Appendix 9). If the resident received JIT training, median attending rating of trainee management of actual overnight events in the MICU on a 5-point Likert scale was 4.5 (95% CI 3.74-4.63); if untrained, median rating was 3.0 (95% CI 3.16-4.11); U=58.50, P=.12, Mann-Whitney r=0.3 (Figure 3). Neither group differed significantly in prior experience rotating in the ICU, baseline or final simulation score, or reported experience of prior decompensations during their ICU block (Table).

Figure 3

Faculty Ratings of Overnight Resident Management by Training Status

Table

Comparison of Clinical Experience and Performance by Training Group


Residents’ self-ratings of preparedness to manage MICU emergencies improved significantly following training, from a median of 3.0 (neither prepared nor unprepared) at the outset of the curriculum to 4.0 (somewhat prepared) at its completion (P=.006; Wilcoxon r=0.42; Figure 4). The resident response rate was 28 of 28 (100%) for the initial survey and 44 of 48 (92%) for subsequent surveys.

Figure 4

Resident Self-Reported Preparedness


There was no difference in baseline exposure to critical care, simulation performance, faculty rating, or preparedness scoring between the groups of residents who participated in the curriculum before or during the COVID-19 pandemic (online supplementary data Appendix 10). We observed a statistically significant decrease in the number of simulations per participant during the pandemic, with a change in simulated scenarios largely driven by an increase in hemorrhagic shock and a decrease in elevated ICP cases.

Final resident feedback on the curriculum is shown in online supplementary data Appendix 11. Free-text responses to survey questions fell into 3 domains: those pertaining to the educational environment used for simulation, those regarding the relevance of the training to residents’ clinical practice, and faculty comments on the quality and safety of residents’ actual clinical management following training. Selected findings and exemplar quotations are listed in online supplementary data Appendix 12.

Discussion

In this study, we found that completion of the curriculum significantly increased residents' ability to perform critical action steps in simulations of common ICU emergencies, as well as their self-rated preparedness to respond.

Prior studies of simulation-based education in the MICU show similar effects. Singer et al reported higher scores on checklist assessments of resident performance in cases of septic shock/hypoxic respiratory failure, ventilator alarm management, and evaluation of spontaneous breathing trials by residents trained using simulation rather than didactic-based education.23 Schroedl et al found that residents exposed to simulation outperformed traditionally educated residents on bedside clinical assessment of mechanical ventilation and invasive hemodynamic monitoring parameters.30 The use of JIT simulation training in ICU education is less studied. Nishisaki et al performed JIT simulation training for endotracheal intubation for residents rotating in the pediatric ICU and noted that it did not improve first-attempt or overall success rates but did improve resident participation.46 Our study accords with the existing literature demonstrating the benefits of simulation training for critical action checklist completion, and it extends the use of JIT simulation to the adult ICU population.

Our study has several important limitations. It is unclear to what extent residents' learning through clinical practice in the MICU contributed to some of our improved outcomes. Additionally, though we attempted to blind raters to the identity of each simulation scenario, daytime ICU faculty's assessments of overnight performance may have been biased by foreknowledge of which overnight decompensation was simulated and by the quality of support provided by the overnight fellow. The surveyed attending intensivists also did not receive formal rater training, so their ratings should be interpreted with caution. Finally, the use of a single coder may have introduced personal bias and a singular perspective into the qualitative analysis of faculty and resident comments.

Our study is a single-institution intervention targeting a specific training level of IM residents, and care should be taken in generalizing the findings. In particular, replicating this curriculum in other MICUs may pose significant logistical challenges. Our institution is fortunate to have a simulation lab, staffed with a critical care faculty member and a senior critical care simulation fellow, near our MICU, facilitating interruption of weekday clinical assignments for simulation education. Even so, as the team makeup of our VA MICU has undergone further reorganization in response to the COVID-19 pandemic, we are no longer able to support the curriculum as described at the time of this publication. Institutions lacking these resources may have trouble recreating our curricular framework.

We did not find evidence that JIT simulation training improved residents' management of actual overnight events in the MICU. Though our comparison of attending ratings for the trained and untrained groups did not achieve statistical significance (P=.12), we believe it may have educational significance, as it represented a shift from average to above-average performance after training. Relative to most effect sizes in education research, the moderate effect size (Mann-Whitney r=0.3) of this improvement was notable.57-59 The lack of statistical significance may be attributable to our small sample size, as decompensation events occurred infrequently. A lack of standardized training for the service attending raters may also have contributed. We employed a global performance scale (online supplementary data Appendix 5) rather than more granular measures of assessment, such as specific inquiries into critical actions aligned to guidelines (eg, sepsis bundle completion in septic shock). Because no resident's performance was rated below average despite specific negative feedback comments (online supplementary data Appendix 12), rating inflation in both groups may have obscured differences in performance that a more prescriptive survey instrument could have captured.

Viewed through the lens of the Kirkpatrick model’s 4 levels of evaluation,47  this simulation curriculum had a positive impact on participants’ Reactions (summative assessments) and Learning (checklist performance) but did not have a significant impact on Behaviors (intensivist reviews) and was not designed to evaluate changes in Results (ICU outcomes). Further research, utilizing a similar simulation structure but with more rigorous faculty assessment training and defined survey instruments aligned to best practices, is needed to see if adoption of the JIT simulation model into the longitudinal MICU curriculum has a measurable impact on overnight resident management behaviors and resultant patient outcomes.

Conclusions

In this pilot study of JIT simulation training for overnight emergencies, second-year medicine residents exposed to training during their MICU rotation demonstrated better performance in a simulation setting, reported feeling better prepared for overnight call, and expressed high satisfaction with the curriculum.

The authors would like to thank Jose D. Chuquin, MD, Molly Forster, MD, Alexandria E. Imperato, MD, Jesse B. Rafel, MD, Grace T. Gibbon, MPH, Dhawani Shah, MPH, and Jordan Murphy, MPH.

References

1. Neuraz A, Guerin C, Payet C, et al. Patient mortality is associated with staff resources and workload in the ICU: a multicenter observational study. Crit Care Med. 2015;43(8):1587-1594.
2. Churpek MM, Edelson DP, Lee JY, et al. Association between survival and time of day for rapid response team calls in a national registry. Crit Care Med. 2017;45(10):1677-1682.
3. Ofoma UR, Basnet S, Berger A, et al. Trends in survival after in-hospital cardiac arrest during nights and weekends. J Am Coll Cardiol. 2018;71(4):402-411.
4. Lyndon A, Lee HC, Gay C, et al. Effect of time of birth on maternal morbidity during childbirth hospitalization in California. Am J Obstet Gynecol. 2015;213(5):705.e1-705.e11.
5. Sorita A, Ahmed A, Starr SR, et al. Off-hour presentation and outcomes in patients with acute myocardial infarction: systematic review and meta-analysis. BMJ. 2014;348:f7393.
6. Angus DC, Shorr AF, White A, Dremsizov TT, Schmitz RJ, Kelley MA. Critical care delivery in the United States: distribution of services and compliance with Leapfrog recommendations. Crit Care Med. 2006;34(4):1016-1024.
7. Wong HJ, Morra D. Excellent hospital care for all: open and operating 24/7. J Gen Intern Med. 2011;26(9):1050-1052.
8. Bray BD, Cloud GC, James MA, et al. Weekly variation in health-care quality by day and time of admission: a nationwide, registry-based, prospective cohort study of acute stroke care. Lancet. 2016;388(10040):170-177.
9. Peberdy MA, Ornato JP, Larkin GL, et al. Survival from in-hospital cardiac arrest during nights and weekends. JAMA. 2008;299(7):785-792.
10. Robinson EJ, Smith GB, Power GS, et al. Risk-adjusted survival for adults following in-hospital cardiac arrest by day of week and time of day: observational cohort study. BMJ Qual Saf. 2016;25(11):832-841.
11. Bolster L, Rourke L. The effect of restricting residents' duty hours on patient safety, resident well-being, and resident education: an updated systematic review. J Grad Med Educ. 2015;7(3):349-363.
12. Ulmer C, Wolman DM, Johns MM. Resident Duty Hours: Enhancing Sleep, Supervision, and Safety. National Academies Press; 2009.
13. Baldwin K, Namdari S, Donegan D, Kamath A, Mehta S. Early effects of resident work-hour restrictions on patient safety: a systematic review and plea for improved studies. J Bone Joint Surg Am. 2011;93(2):e5.
14. Jamal M, Doi SA, Rousseau M, et al. Systematic review and meta-analysis of the effect of North American working hours restrictions on mortality and morbidity in surgical patients. Br J Surg. 2012;99(3):336-344.
15. Philibert I, Nasca T, Brigham T, Shapiro J. Duty-hour limits and patient care and resident outcomes: can high-quality studies offer insight into complex relationships? Ann Rev Med. 2013;64:467-483.
16. Reed D, Fletcher K, Arora VM. Systematic review: association of shift length, protected sleep time, and night float with patient care, residents' health, and education. Ann Intern Med. 2010;153(12):829-842.
17. Moonesinghe SR, Lowery J, Shahi N, Millen A, Beard JD. Impact of reduction in working hours for doctors in training on postgraduate medical education and patients' outcomes: systematic review. BMJ. 2011;342:d1580.
18. Fletcher KE, Davis SQ, Underwood W, Mangrulkar RS, McMahon LF, Saint S. Systematic review: effects of resident work hours on patient safety. Ann Intern Med. 2004;141(11):851-857.
19. Fletcher KE, Underwood W, Davis SQ, Mangrulkar RS, McMahon LF, Saint S. Effects of work hour reduction on residents' lives: a systematic review. JAMA. 2005;294(9):1088-1100.
20. Fletcher KE, Reed DA, Arora VM. Patient safety, resident education and resident well-being following implementation of the 2003 ACGME duty hour rules. J Gen Intern Med. 2011;26(8):907-919.
21. Maratta C, Hutchison K, Moore GP, et al. In-house, overnight physician staffing: a cross-sectional survey of Canadian adult ICUs. Crit Care Med. 2020;48(12):e1203-e1210.
22. Diaz-Guzman E, Colbert CY, Mannino DM, Davenport DL, Arroliga AC. 24/7 in-house intensivist coverage and fellowship education: a cross-sectional survey of academic medical centers in the United States. Chest. 2012;141(4):959-966.
23. Singer BD, Corbridge TC, Schroedl CJ, et al. First-year residents outperform third-year residents after simulation-based education in critical care medicine. Simul Healthc. 2013;8(2):67-71.
24. Seymour NE, Gallagher AG, Roman SA, et al. Virtual reality training improves operating room performance: results of a randomized, double-blinded study. Ann Surg. 2002;236(4):458-463.
25. Grantcharov TP, Kristiansen VB, Bendix J, Bardram L, Rosenberg J, Funch-Jensen P. Randomized clinical trial of virtual reality simulation for laparoscopic skills training. Br J Surg. 2004;91(2):146-150.
26. Ahlberg G, Hultcrantz R, Jaramillo E, Lindblom A, Arvidsson D. Virtual reality colonoscopy simulation: a compulsory practice for the future colonoscopist? Endoscopy. 2005;37(12):1198-1204.
27. Chaer RA, DeRubertis BG, Lin SC, et al. Simulation improves resident performance in catheter-based intervention. Ann Surg. 2006;244(3):343-352.
28. Steadman RH, Coates WC, Huang YM, et al. Simulation-based training is superior to problem-based learning for the acquisition of critical assessment and management skills. Crit Care Med. 2006;34(1):151-157.
29. Beal M, Kinnear J, Anderson C, Martin T, Wamboldt R, Hooper L. The effectiveness of medical simulation in teaching medical students critical care medicine: a systematic review and meta-analysis. Simul Healthc. 2017;12(2):104-116.
30. Schroedl CJ, Corbridge TC, Cohen ER, et al. Use of simulation-based education to improve resident learning and patient care in the medical intensive care unit: a randomized trial. J Crit Care. 2012;27(2):219.e7-219.e13.
31. Ottestad E, Boulet JR, Lighthall GK. Evaluating the management of septic shock using patient simulation. Crit Care Med. 2007;35(3):769-775.
32. Pusic MV, Kessler D, Szyld D, Kalet A, Pecaric M, Boutis K. Experience curves as an organizing framework for deliberate practice in emergency medicine learning. Acad Emerg Med. 2012;19(12):1476-1480.
33. Niles D, Sutton RM, Donoghue A, et al. "Rolling refreshers": a novel approach to maintain CPR psychomotor skill competence. Resuscitation. 2009;80(8):909-912.
34. Bender J, Kennally K, Shields R, Overly F. Does simulation booster impact retention of resuscitation procedural skills and teamwork? J Perinatol. 2014;34(9):664-668.
35. Stross JK. Maintaining competency in advanced cardiac life support skills. JAMA. 1983;249(24):3339-3341.
36. Branzetti JB, Adedipe AA, Gittinger MJ, et al. Randomized controlled trial to assess the effect of a just-in-time training on procedural performance: a proof-of-concept study to address procedural skill decay. BMJ Qual Saf. 2017;26(11):881-891.
37. Smith KK, Gilcreast D, Pierce K. Evaluation of staff's retention of ACLS and BLS skills. Resuscitation. 2008;78(1):59-65.
38. Posner GD, Clark ML, Grant VJ. Simulation in the clinical setting: towards a standard lexicon. Adv Simul (Lond). 2017;2:15.
39. Aggarwal R. Just-in-time simulation-based training. BMJ Qual Saf. 2017;26(11):866-868.
40. Bonz JW, Pope JK, Wong AH, Ray JM, Evans LV. Just-in-time clinical video review improves successful placement of Sengstaken-Blakemore tube by emergency medicine resident physicians: a randomized control simulation-based study. AEM Educ Train. 2021;5(3):e10573.
41. Braga MS, Tyler MD, Rhoads JM, et al. Effect of just-in-time simulation training on provider performance and patient outcomes for clinical procedures: a systematic review. BMJ Simul Technol Enhanc Learn. 2015;1(3):94-102.
42. Cheng Y-T, Liu DR, Wang VJ. Teaching splinting techniques using a just-in-time training instructional video. Pediatr Emerg Care. 2017;33(3):166-170.
43. Stoler GB, Johnston JR, Stevenson JA, Suyama J. Preparing emergency personnel in dialysis: a just-in-time training program for additional staffing during disasters. Disaster Med Public Health Prep. 2013;7(3):272-277.
44. Carlson KJ, Klute LM, Dowdall JR, et al. Just-in-time simulation training for nasopharyngeal specimen collection during the SARS-CoV-2 pandemic. J Contin Educ Health Prof. 2022;42(1):e88-e91.
45. Thomas AA, Uspal NG, Oron AP, Klein EJ. Perceptions on the impact of a just-in-time room on trainees and supervising physicians in a pediatric emergency department. J Grad Med Educ. 2016;8(5):754-758.
46. Nishisaki A, Donoghue AJ, Colborn S, et al. Effect of just-in-time simulation training on tracheal intubation procedure safety in the pediatric intensive care unit. Anesthesiology. 2010;113(1):214-223.
47. Kirkpatrick JD, Kirkpatrick WK. Chapter 2: The New World Kirkpatrick—An Overview. In: Kirkpatrick's Four Levels of Training Evaluation. ATD Press; 2016.
48. Agency for Healthcare Research and Quality. Utilization of Intensive Care Services, 2011: Statistical Brief #185. Published December 2014. Accessed September 6, 2024. https://hcup-us.ahrq.gov/reports/statbriefs/sb185-Hospital-Intensive-Care-Units-2011.pdf
49. Cook DA, Hatala R. Validation of educational assessments: a primer for simulation and beyond. Adv Simul (Lond). 2016;1:31.
50. Messick S. Validity. In: Linn RL, ed. Educational Measurement. 3rd ed. American Council on Education and Macmillan; 1989:13-104.
51. Jones J, Hunter D. Qualitative research: consensus methods for medical and health services research. BMJ. 1995;311(7001):376-380.
52. Stewart J, O'Halloran C, Harrigan P, Spencer JA, Barton JR, Singleton SJ. Identifying appropriate tasks for the preregistration year: modified Delphi technique. BMJ. 1999;319(7204):224-229.
53. Duffield C. The Delphi technique. Aust J Adv Nurs. 1988;6(2):41-45.
54. Williams PL, Webb C. The Delphi technique: a methodological discussion. J Adv Nurs. 1994;19(1):180-186.
55. Mayer RE. Rote versus meaningful learning. Theory Pract. 2002;41(4):226-232.
56. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77-101.
57. Kraft MA. Interpreting effect sizes of education interventions. Educ Res. 2020;49(4):241-253.
58. National Center for Special Education Research. Translating the Statistical Representation of the Effects of Education Interventions into More Readily Interpretable Forms. Published November 2012. Accessed September 6, 2024. https://ies.ed.gov/ncser/pubs/20133000/pdf/20133000.pdf
59. Sullivan GM, Feinn R. Using effect size—or why the P value is not enough. J Grad Med Educ. 2012;4(3):279-282.

The online supplementary data contains resources, surveys, further data from the study, and a visual abstract.

Funding: The authors report no external funding source for this study.

Conflict of interest: The authors declare they have no competing interests.

This work was previously presented at the virtual Chest Annual Meeting, October 18-21, 2020.
