Abstract
Throughout their medical education, learners face multiple transition periods associated with increased demands, producing stress and concern about the adequacy of their skills for their new role.
We evaluated the effectiveness of boot camps in improving clinical skills, knowledge, and confidence during transitions into postgraduate or discipline-specific residency programs.
Boot camps are in-training courses combining simulation-based practice with other educational methods to enhance learning and preparation for individuals entering new clinical roles. We searched MEDLINE, CINAHL, PsycINFO, EMBASE, and ERIC using boot camp and comparable search terms. Inclusion criteria were studies that reported on medical education boot camps, involved learners entering new clinical roles in North American programs, and reported empirical data on the effectiveness of boot camps in improving clinical skills, knowledge, and/or confidence. A random-effects model meta-analysis was performed to combine mean effect size differences (Cohen's d) across studies based on pretest/posttest or comparison group analyses.
The search returned 1096 articles, 15 of which met all inclusion criteria. Combined effect size estimates showed learners who completed boot camp courses had significantly “large” improvements in clinical skills (d = 1.78; 95% CI 1.33–2.22; P < .001), knowledge (d = 2.08; 95% CI 1.20–2.96; P < .001), and confidence (d = 1.89; 95% CI 1.63–2.15; P < .001).
Boot camps were shown to be an effective educational strategy for improving learners' clinical skills, knowledge, and confidence. The predominance of pretest/posttest research designs limits the strength of these findings.
Introduction
Throughout undergraduate and postgraduate medical education, learners face a series of stressful transitions that can raise questions about the adequacy of their skills for their new role. For example, the transition from medical school to postgraduate training or into a discipline-specific subspecialty residency program (ie, junior to senior) can be stressful because of the variability of teaching and learning opportunities provided for clinical or procedural skills training as well as the increased expectations placed on learners' clinical responsibilities.1–4 Similarly, the transition from residency to fellowship has been cited as a time of great stress given the increased complexity of patients' needs, trainees' personal sense of responsibility, and expectations of clinical performance without failure.5,6 Furthermore, these periods have also been associated with increased rates of psychiatric morbidity and burnout among trainees7 and a growing concern for patient care and safety, giving rise to terms like July phenomenon or July effect.8,9
Concerns are amplified under the duty hour restrictions currently in effect for trainees. Less time spent learning clinical and procedural skills can reduce learner competence and compromise patient safety. For instance, Poulose et al10 found a greater rate of needle injuries after work hour restrictions, suggesting that reduced clinical experience leads to less proficiency in some procedural tasks. Given the medicolegal concerns associated with medical errors arising from work hour restrictions and transition periods, there has been a strong push toward introducing new educational strategies within clinical training programs to mitigate those effects.11
Before the development of competency-based education frameworks, teaching and learning in most specialties took the form of a master-apprentice relationship in which trainees observed their clinical mentors in practice. As learners became more familiar with the knowledge and skills of their specialty, they began to perform more intricate tasks with growing independence.11,12 According to Ericsson's13 theory of how expertise is achieved, active engagement involving deliberate practice and immediate feedback is essential to the acquisition of knowledge and the ability to perform. A recent meta-analysis comparing traditional medical education to simulation-based medical education with deliberate practice supported the superiority of the latter approach for enhancing learners' clinical knowledge and skills.14 Using this theoretic principle as a guide to enhancing clinical expertise, many medical education programs across North America have adopted the concept of deliberate practice and adapted it in "boot camp" courses specifically designed to ease learners into new clinical roles during transition periods.
Despite the increasing popularity of boot camps in medical education, no prior review of their effectiveness has been published. The objective of this study was to perform a descriptive analysis of boot camps in medical education and a meta-analysis assessing the effectiveness of boot camp courses as an educational strategy to prepare learners during transition periods in clinical training programs. The main focus was to assess the effectiveness of boot camps across 3 important areas: clinical skills performance, knowledge acquisition, and confidence in clinical abilities.
Methods
Study Selection
In consultation with a medical librarian, we conducted a systematic review of research on the effectiveness of boot camps published between January 1995 and April 2013 using MEDLINE, CINAHL, PsycINFO, EMBASE, and ERIC. For the purpose of study selection, boot camps were defined as courses designed to enhance learning and preparation for those entering new clinical roles through simulation-based practice and other related educational strategies. Search terms were identified through a review of the literature on medical education boot camps and encompassed all synonyms used to describe boot camp courses, including crash course, capstone course, transition course, and rookie camp. The search was further limited to the heading medical education. Two authors (C.B. and J.A.) independently retrieved and reviewed articles for study selection and reviewed the references of all selected articles to identify additional studies for inclusion. All conflicts in article selection were resolved by consensus.
Eligibility Criteria
Studies were included if they (1) reported on medical education boot camps for trainees transitioning from medical school or already in a residency program in North America, (2) involved trainees who were expected to enter new clinical roles, (3) assessed the effectiveness of the boot camp session(s) with a pretest/posttest design (ie, immediately after the boot camp intervention) or with a comparison group, and (4) provided empirical data on clinical skills, knowledge acquisition, and/or confidence measures. We restricted the search to North American studies to ensure that the different formats of boot camp courses reflected similar clinical transition periods for medical students and residents. For this study, residents who completed preliminary training were considered to have been provided with a boot camp to assist with the transition into discipline-specific training.
Data Selection and Abstraction
To address concerns about bias, we conducted a comprehensive search and applied strict selection criteria, with interrater reliability established through independent review. Two authors (C.B. and J.A.) independently reviewed abstracts and coded the data from the full-text articles identified. The following data were collected: year of study, specialties involved, sample size, level of training of the participants, duration of the boot camp, type of study research design, boot camp design and definition, outcome variables assessed, and the measured empirical values reported, including means and standard deviations.
Statistical analysis was performed using Stata version 12.1 (StataCorp LP). All data from the outcome measures were continuous, and the meta-analysis was performed using sample means and standard deviations to calculate Cohen's d for effect size differences.15 For studies where standard deviations were not provided, we used the P values from the t test statistic to estimate the comparable effect size difference.16 We chose the random-effects model for our meta-analysis based on the variability among studies in length of boot camp session(s), content delivered, evaluations used, and levels of transition for the different groups' participants.17 The interpretation of the magnitude of the combined effect sizes was based on Cohen's15 suggestions: d = 0.20 to 0.49 is small, d = 0.50 to 0.79 is medium, and d ≥ 0.80 is considered a large effect size difference. The heterogeneity of the combined study outcomes was tested using the I² statistic, which estimates the between-studies variance: an I² value of 50% implies that half of the observed variability can be attributed to between-studies variance (heterogeneity in the boot camp interventions) and the other half to within-study variance (ie, sampling error).18 To minimize study heterogeneity, studies comparing pretest/posttest scores for boot camp participants were analyzed separately from studies using comparison or control groups.
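To make the pooling procedure concrete, the sketch below illustrates (in Python rather than the Stata used for the actual analysis, and with hypothetical effect sizes, not data from the included studies) how per-outcome Cohen's d values can be combined under a common random-effects estimator (DerSimonian-Laird) and how the I² statistic is derived from Cochran's Q. It is an illustrative sketch of the standard method, not the authors' analysis code.

```python
import math

def cohens_d(mean_post, mean_pre, sd_pooled):
    """Standardized mean difference (Cohen's d)."""
    return (mean_post - mean_pre) / sd_pooled

def d_from_t(t, n1, n2):
    """One common approximation of d from an independent-groups
    t statistic when standard deviations are not reported."""
    return t * math.sqrt(1.0 / n1 + 1.0 / n2)

def dl_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling.
    Returns (pooled d, its standard error, I^2 as a percentage)."""
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    d_fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
    q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    d_pooled = sum(wi * di for wi, di in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # Higgins I^2
    return d_pooled, se, i2

# Hypothetical per-outcome effect sizes and sampling variances
d_vals = [1.2, 1.9, 2.1, 1.5]
d_vars = [0.10, 0.15, 0.20, 0.12]
d, se, i2 = dl_random_effects(d_vals, d_vars)
print(f"pooled d = {d:.2f}, 95% CI {d - 1.96 * se:.2f} to {d + 1.96 * se:.2f}, I2 = {i2:.1f}%")
```

Under the random-effects model, the between-study variance (tau²) widens each study's weight denominator, so highly heterogeneous outcome sets yield wider confidence intervals than a fixed-effect analysis would.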
Results
The literature search identified 1096 articles. After screening, 36 full-text articles were assessed for eligibility. Of those 36 articles, 21 were excluded because they failed to provide sufficient empirical data, did not pertain to boot camps, reported duplicate data, or were not based in medical education or in North America. Fifteen studies met all inclusion criteria.1,3,19–31 Data from those 15 articles were extracted and coded independently by 2 authors (C.B. and J.A.) to ensure consistent data collection and accurate effect size calculations. Any discrepancies were reviewed by the other authors until agreement was achieved.
Descriptive Review
From the 15 studies included in our meta-analysis, a range of definitions for boot camps was used (table 1). Despite differences in wording, most described a boot camp as an early preparatory course or orientation session for learners undergoing a transition in medical education. The design and development of these boot camps varied from an informal curricular design by surgical staff19 to a formal needs assessment approach to understanding clinical skill development.26 Although a variety of educational methods and teaching strategies were used, every boot camp used some form of low- or high-technology simulation as a key component. In addition, 10 of 15 boot camps (67%) described providing either immediate or formative feedback to trainees (3 boot camps did not specify whether feedback was given, and 2 engaged only in debriefing sessions following simulation and case scenarios).
Regarding the particular medical education transitions targeted by the boot camps, studies focused on either the transition from medical school to graduate/postgraduate education (6 of 15, 40%) or into a discipline-specific (eg, specialty/subspecialty) residency program (9 of 15, 60%; table 2). Only 1 boot camp specifically for fellows was identified in our search, and it provided insufficient data for statistical analysis.5 The number of boot camp participants ranged from 6 to 47, and participants came from a variety of specialties, including internal medicine, general surgery, obstetrics and gynecology, cardiac surgery, thoracic surgery, and orthopedic surgery. Fourteen of 15 (93%) studies involved surgical specialties or subspecialties (table 2). For boot camps involving medical students at the end of their clerkship period, students had applied or were matched to surgical specialties. The duration of the boot camps varied from 4 hours (over 2.5 days)23 to 160 hours (over 30 days)27; for example, one was completed in a single day (8 hours) for residents transitioning into otolaryngology,26 whereas another was spread across a 7-week period (2 to 3 hours per week) for medical students about to begin surgery residency.3
The majority of studies used a pretest/posttest (13 of 15, 87%) assessment process to determine the effectiveness of the boot camp intervention immediately on completion (table 2). Four studies also included a comparison group.3,19,27,30 One study assessed boot camp participants' performance using only a comparison group that had not completed the boot camp,1 and 1 study compared participants only to historic controls.31 Three sets of learner outcome measures were identified and, if reported, were combined across studies: (1) clinical skills performance (6 of 15, 40%), (2) knowledge acquisition (4 of 15, 27%), and (3) participants' confidence in their clinical abilities (8 of 15, 53%).
Clinical Skills Performance
Six boot camp studies (40%) provided pretest/posttest data on trainees' performance of different clinical skills, using either task-specific checklist scores or global rating scales, such as the Objective Structured Assessment of Technical Skills.32 Although some of the clinical performances assessed related to overall completion of the surgical or procedural skill itself (eg, central line insertion, chest tube insertion, lumbar puncture), many outcome measures related to generic skills important for successful clinical practice (eg, restricted space tying, suturing). As shown in figure 1, 16 clinical skill outcomes were combined across the 6 studies for a large effect size of d = 1.78 (95% CI 1.33–2.22, P < .001). Heterogeneity was moderately high, with an I² value of 81.2% (P < .01); reporting a random-effects model for combined effect sizes is recommended when heterogeneity exceeds 50%.
figure 1 Random and Fixed-Effects Model Forest Plot for Boot Camp Clinical Skills Improvement, Pretest/Posttest
Abbreviations: GRS, global rating scale; OSATS, Objective Structured Assessment of Technical Skills.
Table 3 presents a combined random-effects model for the 4 studies (27%, 9 outcomes) in which boot camp trainees' clinical skills performance scores were contrasted with a comparison group; the effect size difference was large (d = 2.73; 95% CI 1.36–4.07; P < .001).
Knowledge Acquisition
For improvements in knowledge acquisition, 4 boot camp studies provided pretest/posttest mean scores on 5 outcome measures, assessed by multiple-choice and short answer question examinations (figure 2). The knowledge outcomes included general and course-specific surgical knowledge25 and knowledge of surgical ward management.29 A random-effects model for the combined effect sizes showed that participation in boot camps significantly improved learners' medical and surgical knowledge (d = 2.08; 95% CI 1.20–2.96; P < .001). Heterogeneity was high, with an I² value of 82.1% (P < .01). Only 2 studies using comparison groups provided data on knowledge acquisition, so meta-analytic comparisons were not performed.
figure 2 Random and Fixed-Effects Model Forest Plot for Boot Camp Knowledge Acquisition Improvement, Pretest/Posttest
Abbreviations: MCQ, multiple-choice question; SAQ, short answer question.
Confidence in Clinical Abilities
Eight studies provided pretest/posttest data for 37 outcomes related to improvement in trainees' perceived confidence, measured on Likert scales from strongly disagree to strongly agree (figure 3). For example, Todd et al28 surveyed (pretest/posttest) first-year residents about their confidence in managing a number of clinical presentations, including sepsis, acute hypoxia, chest pain, decreased urine output, and postoperative fever. A random-effects model for the combined effect sizes showed significant improvement in confidence, with a large effect size of d = 1.89 (95% CI 1.63–2.15, P < .001). Heterogeneity was moderate, with an I² value of 65.4% (P < .01). Only 2 studies using comparison groups provided data on confidence, so a meta-analytic comparison was not performed.
figure 3 Random and Fixed-Effects Model Forest Plot for Boot Camp Confidence Improvement, Pretest/Posttest
Abbreviations: ENT, ear, nose, and throat; mgt, management.
Discussion
As shown by the small number of studies reporting on the effectiveness of boot camps in easing learners' transition to the next stage of their training, boot camps are still in their infancy as a medical education strategy. In identifying articles to include in our meta-analysis, we encountered a variety of definitions for boot camps. Although exact wording differed, certain commonalities were present: all studies described boot camps as courses or sessions aimed at preparing or orienting learners undergoing clinical transitions, boot camps were structured as short and focused courses, and a wide variety of educational methods were applied in each camp, with a particular focus on the use of simulation. In addition, most boot camps described a focus on providing immediate and/or formative feedback to participants, in keeping with the Ericsson13 theory on how to support the development of expert performance.
Based on the boot camp descriptions from the literature included in the meta-analysis, the following definition is proposed: "A boot camp is a focused course designed to enhance learning, orientation, and preparation for learners entering a new clinical role. This is achieved through the use of multiple educational methods with a focus on deliberate practice with formative feedback." Consistent with Ericsson's13 theory of what is required to develop expertise, the introduction of boot camp courses during career transition periods in clinical training may enhance trainees' clinical skills, knowledge, and confidence. The main finding of the meta-analysis is that learners who completed boot camps had significantly "large" improvements in (1) clinical skills development (d = 1.78; 95% CI 1.33–2.22; P < .001), (2) knowledge acquisition (d = 2.08; 95% CI 1.20–2.96; P < .001), and (3) perceived confidence (d = 1.89; 95% CI 1.63–2.15; P < .001).
The effect sizes can also be interpreted in terms of the average percentile standing of trainees before and after participation in the boot camp course.15 An effect size of d = 1.50 indicates that the posttest mean falls at the 93.3rd percentile of the distribution of pretest scores (or of a non-boot camp, no-intervention comparison group). The boot camp studies included in this analysis demonstrated that an educational intervention that includes deliberate practice in the development of clinical skills, knowledge, and confidence may provide immediate benefits to trainees.
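As a quick check of this interpretation (assuming, as is standard, that percentiles are obtained from the standard normal cumulative distribution function), the reported effect sizes map to percentiles as follows:

```python
from scipy.stats import norm  # SciPy's standard normal CDF

# Percentile standing of the posttest mean within the pretest
# (or no-intervention comparison) distribution, assuming normality.
for d in (1.50, 1.78, 1.89, 2.08):
    print(f"d = {d:.2f} -> {norm.cdf(d) * 100:.1f} percentile")
# d = 1.50 -> 93.3, matching the figure quoted above.
```

Under the same assumption, the pooled effect sizes for clinical skills (d = 1.78), confidence (d = 1.89), and knowledge (d = 2.08) place the average posttest trainee at roughly the 96th, 97th, and 98th percentiles of the pretest distribution, respectively.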
To further support these findings, a meta-analysis of the combined effect sizes for clinical skill improvement across studies that used a comparison group demonstrated a large effect size difference between the intervention and comparison groups (d = 2.73; 95% CI 1.36–4.07; P < .001). Because comparison groups comprised residents at similar levels of training,1,3,27,31 the improvement found can be attributed to the course itself rather than to a maturation or testing effect. We were unable to perform a similar analysis for knowledge or confidence because few comparison group studies reported these outcomes.
Although the combined effect sizes for the clinical skills, knowledge acquisition, and perceived confidence measures were all statistically significant, there was a significant amount of heterogeneity among the boot camp studies.18 Heterogeneity for studies that reported on clinical skills development was moderately high (I² = 81.2%, P < .01), reflecting variability in the skills being assessed (eg, tying a 2-handed knot, neonatal resuscitation, chest tube insertion) and in the assessment measures used (eg, length of checklists, types of global rating scales). Studies that reported on knowledge outcomes also had moderately high levels of heterogeneity (I² = 82.1%, P < .01), which can be partly explained by the lack of standardization among the examinations used to assess trainees' knowledge acquisition. Studies that reported on confidence improvement showed the lowest levels of heterogeneity (I² = 65.4%, P < .01), explained in part by the fact that all confidence measures were assessed using similar self-report questionnaires that asked trainees to rate confidence in their abilities as a result of participating in the boot camp.
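For reference, these I² values follow from Cochran's Q statistic and its degrees of freedom (df, one less than the number of outcomes combined) in the usual way:

$$I^2 = \max\left(0,\ \frac{Q - df}{Q}\right) \times 100\%$$

so an I² of 81.2%, for example, implies that roughly four-fifths of the observed variability reflects between-study differences in the boot camp interventions rather than sampling error.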
The remainder of the heterogeneity among studies is to be expected given their less rigorous research designs. For example, none of the studies in this meta-analysis used random selection of trainees, and most used single-group, pretest/posttest designs. Use of pretest and posttest scores can increase variability among studies depending on when the examinations were given relative to the end of the boot camp (eg, posttest scores obtained immediately after a 1- to 3-day boot camp can be influenced by the pretest, whereas results of boot camps run over 4 to 7 weeks can be influenced by maturation effects). Although there was moderately high heterogeneity among studies (a reflection of the variability in the intensity and duration of the boot camp interventions), a fairly symmetric funnel plot indicated that publication bias in favor of positive findings was unlikely.
Limitations
We found significant variability among studies in the specialties involved, the number of participants (range, 6 to 47), the stage of the trainees' transition (eg, from medical school to residency or into a residency specialty), and the duration of the boot camp. Aside from the heterogeneity associated with single-group, pretest/posttest research designs, further limitations of the study include the restricted scope of the types of boot camps (ie, all but 1 study involved surgical disciplines), the range of outcomes assessed, and the lack of long-term follow-up with trainees. The majority of outcomes reported in the studies focused on medical expert-related competencies, which limits extrapolation of the results to other clinical competencies, such as communication with patients and their families or the development of skills for collaborating with colleagues and coworkers; however, the main concern for trainees during transition to residency and fellowship appears to be development of surgical/procedural skills, including performing procedures, evaluating patients, and running codes.5,6 Finally, the results were limited because most studies provided little information about the costs and feasibility of running boot camps or long-term follow-up data on retention of knowledge, skills, or attitudes.
As training hours are reduced by the widespread implementation of duty hour limits, and as the safety of care in teaching settings becomes a growing public concern, there is a compelling need for curricular change, and programs need to look for new ways to address these concerns. The use of boot camp courses may provide benefit during career transitions by increasing trainees' knowledge, skills, and confidence. Future studies that use rigorous research designs (eg, comparison of boot camp interventions to other approaches), include assessments of feasibility and costs, and explore delayed outcomes are needed for a comprehensive assessment of the benefits of this intervention.
References
Author notes
All authors are in the Faculty of Medicine at the University of Calgary, Calgary, Alberta, Canada. Christopher Blackmore, MD, is General Surgery Resident, Department of Surgery; Janice Austin, MD, is General Surgery Resident, Department of Surgery; Steven R. Lopushinsky, MD, MSc, is Clinical Assistant Professor, Department of Pediatric Surgery; and Tyrone Donnon, PhD, is Associate Professor, Medical Education and Research Unit, Department of Community Health Sciences.
Funding: The authors report no external funding source for this study.
Conflict of interest: The authors declare they have no competing interests.