Feedback Credibility in a Formative Postgraduate OSCE: Effects of Examiner Type
Editor's Note: The following are the Top 3 Research in Residency Education Papers selected by the JGME and the Royal College of Physicians and Surgeons of Canada for the 2016 International Conference on Residency Education meeting in Niagara Falls, Canada. A full listing of submitted abstracts appears online (http://www.jgme.org/page/ICREAbstracts). Underlined author names indicate presenting author at the conference.
Winning Paper
Introduction: Studies indicate that learners' perspectives are key determinants of feedback effectiveness. Previous data from our objective structured clinical examination (OSCE) suggested that residents perceived feedback from faculty as more credible than feedback from standardized patients (SPs), and perceived generalist faculty feedback as more credible than subspecialist feedback. The present study was designed to systematically examine how residents perceive the credibility of feedback from SPs compared with faculty examiners, and whether feedback credibility is higher for faculty examiners whose subspecialty is congruent with station content than for faculty who are generalists or subspecialty incongruent.
Methods: During a formative, 5-station internal medicine OSCE, residents received immediate feedback from faculty examiners or from SPs. For clinical scenario stations, each resident received feedback from at least 1 specialty-congruent and 1 specialty-incongruent faculty examiner. For communication stations, residents were randomized to receive feedback from SPs or faculty, with feedback controlled for content. After the OSCE, residents rated the perceived credibility of feedback providers. Results were analyzed through multivariable linear regression.
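For readers interested in the analytic approach, the sketch below illustrates one way such regression models of credibility ratings might be set up in Python. This is not the authors' code; all column names and data values are hypothetical.

```python
# A minimal sketch (not the authors' code) of the analysis described
# above; all data values and column names here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Communication stations: feedback from faculty vs standardized patients.
comm = pd.DataFrame({
    "credibility": [6.5, 5.0, 6.8, 5.5, 6.2, 5.8, 6.4, 5.1],
    "provider":    ["faculty", "sp", "faculty", "sp",
                    "faculty", "sp", "faculty", "sp"],
    "pgy":         [1, 2, 3, 1, 2, 3, 1, 2],
})
m1 = smf.ols("credibility ~ C(provider) + pgy", data=comm).fit()
print(m1.params)

# Scenario stations: congruent vs incongruent subspecialists vs generalists.
scen = pd.DataFrame({
    "credibility": [6.7, 6.0, 6.4, 6.6, 5.9, 6.3, 6.8, 6.1],
    "examiner":    ["congruent", "incongruent", "generalist", "congruent",
                    "incongruent", "generalist", "congruent", "incongruent"],
    "pgy":         [1, 2, 3, 1, 2, 3, 1, 2],
})
m2 = smf.ols("credibility ~ C(examiner) + pgy", data=scen).fit()
print(m2.params)
```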
Results: A total of 197 residents, 36 faculty, and 12 SPs participated. For communication stations, the credibility of faculty feedback was rated significantly higher than that of SP-provided feedback (6.3 versus 5.4/7, P < .0001). For scenario stations, feedback credibility was higher for congruent subspecialists compared with incongruent subspecialists (6.6 versus 6.0/7, P < .001) and generalists (6.6 versus 6.4/7, P < .001).
Conclusions: Faculty specialty congruency with station content led to higher feedback credibility. Communication skills feedback from faculty was perceived as more credible by residents than feedback from SPs. These results support a significant effect of perceived expertise on credibility of feedback, which warrants further study.
Beyond Rater Cognition: The Impact of Supervisor Continuity on the Quality of Documented Work-Based Assessments
Purpose: Barriers to completing high-quality work-based assessments include relational factors such as the episodic and fragmented interaction that often exists between clinical supervisors and trainees. In an effort to increase supervisor-trainee continuity in the Department of Emergency Medicine at the University of Ottawa, Clinical Teaching Teams (CTTs) were created, in which a resident and a clinical supervisor work matched shifts together throughout the year. The aim of this study was to determine the impact of increased supervisor-trainee continuity on the quality of assessments documented on Daily Encounter Cards (DECs).
Methods: DECs completed by 20 clinical supervisors were sorted into 3 groups representing differing degrees of supervisor-trainee continuity (ie, supervisor paired with a CTT emergency medicine resident, a non-CTT emergency medicine resident, or an off-service resident). DECs were scored using the Completed Clinical Evaluation Report Rating (CCERR), a previously validated 9-item quantitative measure of DEC quality. Mean scores across the continuity groups were compared using a univariate ANOVA.
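As an illustration only, a one-way (univariate) ANOVA of CCERR scores across the 3 continuity groups could be run as below; the scores shown are fabricated for the sketch, not the study's data.

```python
# A minimal sketch, with made-up CCERR scores, of the one-way ANOVA
# comparing mean DEC quality across the 3 continuity groups.
from scipy import stats

ctt         = [21.0, 18.5, 27.2, 15.9, 22.3]   # hypothetical CCERR scores
non_ctt     = [21.9, 19.4, 24.1, 23.5, 20.6]
off_service = [20.7, 17.2, 22.9, 19.8, 21.1]

f_stat, p_value = stats.f_oneway(ctt, non_ctt, off_service)
print(f"F = {f_stat:.2f}, P = {p_value:.3f}")
```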
Results: Mean CCERR scores for the CTT (21.0, SD = 5.8), non-CTT (21.9, SD = 4.2), and off-service (20.7, SD = 4.0) groups differed (P = .019). A statistically significant difference in means between the non-CTT and off-service groups was observed (P = .04); however, the magnitude of this difference was small (partial eta-squared = 0.03) and not educationally significant. The number of encounters between supervisor and trainee did not have a significant effect on CCERR scores (P = .43).
Conclusions: Increasing supervisor-trainee continuity alone did not result in higher quality assessments of clinical performance. Additional research focusing on the educational alliance between supervisor and trainee may hold greater promise.
Establishing Absolute Standards for Technical Performance
Introduction: Standard-setting methodologies have been used in medicine primarily for written examinations at the undergraduate and postgraduate levels. To date, standard setting has not been used to determine competency for trainees in surgical skill assessment. The objective of this systematic review was to identify studies that systematically establish cutoff values for procedural skill assessment.
Methods: A systematic review describing the use of absolute standard-setting methodologies to assess procedural performance was conducted by searching MEDLINE, Embase, PsycINFO, and the Cochrane Database of Systematic Reviews. Abstracts of retrieved studies were reviewed, and those meeting the inclusion criteria were selected for full-text review. Data were extracted in a systematic manner, and the validity and quality of evidence presented in the included studies were assessed using the Medical Education Research Study Quality Instrument (MERSQI).
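As a sketch of how such quality grading might be tabulated, total MERSQI scores (maximum 18) can be computed per study and thresholded. The domain names and scores below are hypothetical placeholders, not the review's extraction data.

```python
# A minimal sketch, assuming hypothetical per-study MERSQI domain scores,
# of tallying total scores (max 18) and flagging higher-quality studies.
import pandas as pd

# Hypothetical extraction table: one row per included study,
# one column per MERSQI domain.
mersqi = pd.DataFrame({
    "study":     ["A", "B", "C"],
    "design":    [2.0, 1.5, 3.0],
    "sampling":  [2.0, 1.0, 3.0],
    "data_type": [3.0, 1.0, 3.0],
    "validity":  [2.0, 1.0, 3.0],
    "analysis":  [2.0, 2.0, 3.0],
    "outcomes":  [1.5, 1.0, 2.0],
})

mersqi["total"] = mersqi.drop(columns="study").sum(axis=1)
mersqi["high_quality"] = mersqi["total"] >= 14  # threshold used above
print(mersqi[["study", "total", "high_quality"]])
```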
Results: Of the 1762 studies identified, 38 used standard-setting methodology for the assessment of procedural skill. Of these, 25 used participant-centered methods and 13 used item-centered methods. The included studies assessed residents (26 studies), fellows (7), and staff physicians (17). Eighteen articles received a MERSQI score of 14 of 18 or higher, while 20 did not meet this mark.
Conclusions: The 38 studies included in this analysis demonstrate that absolute standard-setting methodologies can be used to establish cutoffs for procedural skill assessments, including those taking place in the clinical setting. Establishing benchmarks for technical skill is particularly important before new assessments are implemented in surgical training, such as the Competence by Design curriculum being introduced in many residency programs.
Resident Evaluations of Faculty: Resident Versus Faculty Perspectives
Editor's Note: The following are the Top 5 Resident Papers selected by the JGME and the Royal College of Physicians and Surgeons of Canada for the 2016 International Conference on Residency Education meeting in Niagara Falls, Canada. A full listing of submitted abstracts appears online (http://www.jgme.org/page/ICREAbstracts). Underlined author names indicate presenting author at the conference.
Winning Paper
Introduction: Resident assessments of faculty are integral to postgraduate education. We previously surveyed residents on their perspectives of the written assessments they complete of faculty physicians. We now sought to determine how faculty perspectives compare to residents' perspectives.
Methods: We designed an anonymous online survey for internal medicine faculty at McMaster University. The questions mirrored those of the 2015 resident survey.
Results: A total of 42 of 65 faculty (65%) completed our survey. Faculty identified resident assessments as being most important for personal satisfaction and development as teachers, consistent with previous studies. Faculty identified the major barriers to residents providing honest assessments as inadequate time with preceptors, concerns that assessments will not effect change, and inadequate time to complete assessments. In contrast, residents reported that their main barrier to being honest was concern about the anonymity of the assessments. Nonetheless, only 16% of faculty reported ever being able to identify the resident who completed their assessment. Assessments were rated as equally effective in identifying faculty's strengths and areas for improvement, but much less effective in identifying strategies for improvement. Residents reported being honest on faculty assessments 58% of the time, but faculty thought residents were honest only 12% of the time. Faculty reported modifying their behaviors based on resident feedback 37% of the time, though residents thought they did so only 9% of the time. The main areas in which faculty modified their teaching behaviors were resident autonomy, bedside teaching, and availability in clinical settings.
Conclusions: There is a large disconnect between resident and faculty perspectives on resident assessments of faculty.
Organizational Change: Another Competency to Consider?
Introduction: The introduction of competency-based medical education (CBME) across Canada has resulted in a fundamental shift from time-based to outcomes-based training. Not only is there a need to change the structure and framework of medical education, but there is also a need to change attitudes and beliefs about how our institutions approach education. Readiness for change and organizational change management strategies are well recognized among business managers, as many change initiatives fail. Within medical education, organizational change competency has not been discussed when evaluating how best to facilitate and implement CBME. As more programs begin to embrace CBME, the dynamics of organizational change will become crucial to the success or failure of these new training platforms.
Methods: Using a retrospective case analysis of the implementation of CBME within 1 Canadian institution, this narrative review focuses on 3 themes: (1) developing a definition of organizational change competency as applied to medical education; (2) summarizing key organizational change management theories to help effectively manage, lead, and implement CBME; and (3) discussing how we may assess for organizational change competency within our teaching institutions.
Conclusions: New initiatives have a high failure rate across all professional domains. To ensure the successful implementation of CBME, institutions should consider whether they possess organizational change competency.
Accreditation Council for Graduate Medical Education Case Logs as an Indication of Operative Competency for Vascular Anastomoses: A Pilot Study
Introduction: The Accreditation Council for Graduate Medical Education (ACGME) continues to play an integral role in the accreditation of surgical programs. The institution of case logs to demonstrate the competency of graduating residents is a key educational component of evaluation. This study compared the number of vascular cases surgical residents had completed with their operative proficiency, quality of anastomosis, and confidence in a simulation setting.
Methods: General surgery residents participated in a simulation lab in which they completed an end-to-side anastomosis. Residents ranging from postgraduate year 1 to 5 performed a timed task and were evaluated for technical proficiency using a previously validated global rating scale and for quality of the anastomosis (Duran et al, 2014). Participants completed a survey regarding their confidence with the procedure and future fellowship plans. Univariate and multivariate analyses were performed.
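A minimal sketch, on fabricated data, of how residents might be grouped into case-volume quartiles and compared is shown below; none of these numbers come from the study, and the specific tests are illustrative assumptions.

```python
# A minimal sketch, on fabricated illustrative data, of dividing residents
# into quartiles by logged vascular cases and testing for group differences.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "cases":       [2, 5, 8, 12, 15, 20, 24, 30, 35, 41, 48, 55],  # hypothetical
    "quality":     [12, 14, 18, 22, 23, 24, 25, 24, 20, 19, 21, 22],
    "proficiency": [15, 18, 20, 25, 27, 29, 30, 31, 30, 32, 33, 35],
})

# Quartiles of ACGME-logged vascular case volume.
df["quartile"] = pd.qcut(df["cases"], q=4, labels=["Q1", "Q2", "Q3", "Q4"])

# Univariate test of anastomosis quality across quartiles.
groups = [g["quality"].values for _, g in df.groupby("quartile", observed=True)]
print(stats.f_oneway(*groups))

# Correlation between case volume and proficiency score.
print(stats.spearmanr(df["cases"], df["proficiency"]))
```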
Results: A total of 18 general surgery residents were available for evaluation; 2 were excluded due to deficient case logs. The residents were evenly distributed across clinical years. Residents were divided into quartiles according to the number of vascular cases recorded in the ACGME database. The second and third quartiles had the highest confidence (P = .048) and the best quality of final product (P = .014). No correlation was found between the number of cases and the proficiency score or time to completion.
Conclusions: ACGME case logs, which are a requirement for completion of general surgery residency, may not be indicative of resident competency and technical proficiency. Careful examination of resident operative technique is likely the best measure of competency.
What Are the Sleep Patterns of Residents?
Introduction: Decisions regarding resident work hours and fatigue management should be based on accurate information. Most data on resident sleep come from retrospective sleep logs, an inaccurate method. The purpose of this study was to objectively define the sleep patterns of residents through a method known as actigraphy. We hypothesized that residents get less sleep than other Canadians and that their sleep is further decreased when on call.
Methods: This was a cross-sectional cohort study. The sleep patterns of participants were recorded for 2 weeks using sleep logs and actigraphy monitors, devices that estimate sleep patterns by measuring movement and light. Statistical analysis included t tests and linear regression.
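For illustration, the t tests and regression described could look like the following in Python; all sleep values and the pairing structure are invented for the sketch.

```python
# A minimal sketch, using invented sleep durations, of the t tests and
# linear regression described above.
import numpy as np
from scipy import stats

baseline = np.array([5.1, 4.7, 6.0, 4.3, 5.5, 4.9])   # hypothetical hours
on_call  = np.array([4.0, 3.9, 5.2, 3.6, 4.8, 4.1])

# Paired t test: on-call sleep vs each resident's baseline.
t_stat, p_value = stats.ttest_rel(baseline, on_call)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")

# One-sample t test against the matched population norm (8.1 hours).
print(stats.ttest_1samp(baseline, popmean=8.1))

# Simple linear regression: sleep duration vs postgraduate year.
pgy = np.array([1, 2, 3, 4, 5, 2])
print(stats.linregress(pgy, baseline))
```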
Results: Participants from the University of Calgary orthopaedic (21) and pediatrics (5) residency programs were recruited. Residents had less average sleep (M = 4.9 hours, SD = 3.1 hours) than national age- and sex-matched norms (M = 8.1 hours, P = .00001). Residents' total sleep on call was 0.89 hours less (P = .02) than their baseline sleep duration. Orthopaedic residents got 2.3 hours less sleep than pediatrics residents (P = .00001). Regression demonstrated that enrollment in a medical program and higher postgraduate year were both associated with increased sleep.
Conclusions: Residents get less sleep than the general population, and their sleep decreases by almost another hour when they are on call. Orthopaedic residents get 2.3 hours less sleep per night on average than their pediatrics colleagues. Factors that influence sleep include the type of program and postgraduate year. This study provides accurate information on residents' sleep patterns and demonstrates a reliable way to measure them.
Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS): Promoting a Culture of Safety
Introduction: Adverse events due to medical error are a source of preventable morbidity and mortality in Canada's emergency departments (EDs). TeamSTEPPS is a method to minimize these errors. Although TeamSTEPPS is widely implemented, the optimal method of education has not been standardized. This study objectively measured the use of TeamSTEPPS strategies before and after implementation of a novel TeamSTEPPS program.
Methods: A 12-month longitudinal TeamSTEPPS program was introduced to physicians, nurses, and allied health care professionals in a tertiary care ED. An academic approach consisting of group huddles, educational props, and social media, including Facebook, Twitter, and YouTube, was employed. Trained observers used a performance observation tool to record and quantify the use of strategies by staff. The main study endpoints were improvement in behavior metrics and sustained use of TeamSTEPPS methodology.
Results: Logistic regression and chi-square tests were used for data analysis. Two modules, Call Out/Check Back and Shared Mental Model, demonstrated a statistically significant increase in utilization over the observation period (P = .003 and P = .048, respectively). Behavior metrics for Leadership & Team Structure were nonsignificant (P = .29). Additionally, other modules, including Briefs/Debriefs/Huddles, Challenge & CUS Rule, DESC Script, and Situation Monitoring, showed perfect performance rates both pre- and post-implementation in this group of already high-performing providers.
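As an illustration of the chi-square portion of this analysis, a pre/post contingency table for a single module could be tested as below; the observation counts are invented, not the study's data.

```python
# A minimal sketch, with invented observation counts, of the chi-square
# test comparing pre- and post-implementation use of a TeamSTEPPS module.
import numpy as np
from scipy import stats

# Rows: pre- and post-implementation; columns: module used vs not used.
observed = np.array([[18, 42],    # hypothetical pre-implementation counts
                     [37, 23]])   # hypothetical post-implementation counts

chi2, p_value, dof, expected = stats.chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, P = {p_value:.3f}")
```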
Conclusions: TeamSTEPPS was successfully implemented in a busy tertiary care ED, and sustained use of the implemented strategies was identified. Initially high levels of staff performance resulted in noncritical differences in many endpoints; however, improvement in 2 critical behavior metrics was ultimately achieved.