ABSTRACT
Background The medical workplace presents challenges for workplace-based learning. Structured debriefing of shared clinical experiences has been proposed as a way to take advantage of workplace-based learning in a setting that facilitates deep learning conversations.
Objective To investigate faculty and learner acceptance of private, face-to-face, structured debriefing of performance of entrustable professional activities (EPAs).
Methods During the 2020-2021 academic year, faculty at the University of Colorado (CU) and the University of Utah (UU) debriefed fellow performance of jointly selected EPAs in neonatal-perinatal medicine pertinent to shared 1- to 3-week clinical rotations. Private, face-to-face debriefing was structured by a comprehensive EPA-specific list of behavioral anchors describing 3 levels of entrustment/accomplishment. Sessions ended with joint decisions as to level of entrustment/accomplishment and goals for improvement. We used thematic analysis of semistructured fellow interviews and faculty focus groups to identify themes illustrated with representative quotations.
Results We interviewed 17 fellows and 18 faculty. CU participants debriefed after clinical rotations; UU participants usually debriefed during rotations. Debriefing sessions covering 1 to 2 EPAs lasted 20 to 40 minutes. Themes represented in fellow interviews and faculty focus groups suggested that debriefing facilitated formative feedback along with shared understanding of clinical performance and assessment criteria. The standardized format and private conversations supported assessment of aspects of performance that might otherwise have been overlooked or avoided. The conversations also provided valuable opportunities for formative discussion of other matters of importance to fellows.
Conclusions Structured debriefing of recently shared clinical experiences fostered formative assessment viewed positively by teachers and learners.
Introduction
The requirements for optimal workplace-based medical education are clear.1-5 It is equally clear that satisfying those requirements in a busy medical workplace is an ongoing challenge.6,7 Fragmented teacher-learner interactions6 make it difficult to achieve such basic elements of workplace-based learning as joint teacher-learner identification of learning goals, joint reflection on directly observed performance with confidential interactive feedback, and jointly selected goals for improvement.2,4,5 Finding opportunities for detailed discussion of pathophysiology and management, and of matters such as professionalism and leadership, is also difficult.6
How might learners learn from the workplace away from its pressures? Tavaras et al8 proposed that structured debriefing might lead to especially effective formative assessment and learning. Although debriefing has been applied primarily to simulation and brief clinical encounters,9,10 its principles should be applicable in any setting.8,11-13
Methods
In 2020, we introduced assessment of performance on entrustable professional activities (EPAs) to faculty and fellows in neonatal-perinatal medicine training programs at the University of Colorado (CU) and the University of Utah (UU).14 We asked faculty-fellow dyads to debrief performance on 1 to 2 jointly selected EPAs after 1- to 3-week shared clinical experiences, using the R2C2 model, an evidence-based reflective model for providing assessment feedback.15 Debriefing was structured by EPA-specific behavioral anchors developed by T.A.P., M.D.J., and members of the CU faculty (online supplementary data) that defined 3 levels of performance: entrustment to practice with direct supervision, with reactive supervision (on request and/or post hoc), and without supervision.16 We asked each dyad to reach agreement on the anchors representing their shared experience and then to select an overall entrustment/accomplishment category. Conversations were private, conducted either in person or virtually (Zoom Video Communications Inc).
Investigators not involved in clinical supervision used open-ended questions (online supplementary data) to conduct semistructured 30-minute individual interviews of fellows in person or by Zoom. We used similar questions (online supplementary data) for semistructured 1-hour focus groups with 4 groups of faculty, 2 at CU (in person) and 2 at UU (by Zoom), selected to represent a cross section of age, experience, gender, and career focus (research or clinical). Interviews were recorded, transcribed, reviewed for accuracy, and, after deidentification of fellow responses, loaded into coding software (NVivo, QSR International). Working independently and then together, T.A.P., G.G., and M.D.J. employed directed content analysis17 using codes based on R2C2 assessment18 and elements of validity19 to determine how well participants adhered to the assessment model and to identify evidence of validity. During that analysis, we noted comments evaluating the intervention and created additional codes accordingly. Codes suggested candidate themes that were confirmed by subsequent analysis.20 Codes, coding structure, and themes developed at CU were reviewed by C.C.Y. and C.B.T. The entire team reached agreement on the codes, coding structures, and the sufficiency of data to support specification of themes. We addressed reflexivity through ongoing discussion of biases within a team whose members had different professional backgrounds and experiences.
This study was reviewed and found to be exempt by the institutional review boards of the CU and UU Schools of Medicine.
Results
We interviewed 11 of 12 fellows at CU and all 6 fellows at UU, and 10 faculty at CU and 8 at UU participated in focus groups, all regarding their experiences during the 2020-2021 academic year. We found no systematic differences between institutions. Themes were represented in both fellow interviews and faculty focus groups.
Formative Assessment
Interviewees described conversations as consisting almost entirely of formative assessment. Behavioral anchors structured dialogue around “specific management with patients or specific topics or learning points” (Fellow), providing “time to talk about some of these things in more detail that you otherwise don’t get when you’re on service” (Fellow). Debriefing gave faculty “a real opportunity to spend a little more time finding out what they [fellows] think, what they feel their weaknesses are, what their fears are” (Faculty). Fellows often used conversations to discuss matters not strictly pertinent to the EPA under consideration, such as “what skills do I need to work on to be ready to come out of fellowship” or to “reach the next level” (Faculty) and “concrete steps that you can take to improve” (Fellow).
Behavioral Anchors
Behavioral anchors provided “scaffolding” (Fellow) and a “template” (Faculty) for the debriefing and a “common language” (Faculty) with “more specifics for where my performance was” (Fellow). Debriefing according to a fixed format was perceived to make reviews “consistent from faculty member to faculty member” (Fellow). Faculty commented on the developmental arrangement of anchors: “…somehow having some of these things written down on a piece of paper maybe makes them [fellows] realize that this is a normal process and it’s not like you just screwed up because you’re not good.” The comprehensive list of behavioral anchors “opened the door for conversations that I think we wouldn’t have otherwise had” (Faculty) such as cost, leadership, and interpersonal and interprofessional relationships.
Face-to-Face Conversations
Private conversations provided “a safe place” (Fellow) to receive challenging feedback and permitted “vulnerable discussion[s]” (Fellow) about clinical dilemmas and decision-making, conversations that “I don’t think would be good to do on rounds” (Fellow). They allowed fellows to “add more detail” (Fellow) to self-assessment and let fellows and faculty clarify faculty assessments that might otherwise have been “interpreted or portrayed in a certain way that was different than was intended” (Fellow).
Comparison to Other Assessment Methodologies
Respondents characterized remotely delivered electronic feedback negatively, as “a black hole…without opportunity for a dialogue” (Faculty). Debriefing, by contrast, was described as “the best feedback kind of sessions or way to give feedback that I’ve had in medical school, residency, or fellowship” (Fellow) and as an approach that “far surpasses the one-way electronic evaluation you get or…ad hoc unstructured conversations” (Faculty).
Logistics
Debriefing of performance on 1 to 2 EPAs lasted 20 to 40 minutes. Conversations that proceeded to broader discussions of performance lasted longer. Prompt assessment enhanced the “quality and granularity” (Faculty) of debriefing. Delayed sessions might feel “disconnected” (Fellow) from the experience. Despite reminder emails, participants noted that finding time for review sessions was challenging. A related challenge was disagreement as to who, faculty or fellow, should assume responsibility for scheduling.
Discussion
Evaluations of structured debriefing of shared clinical experiences defined by EPAs and developmentally tiered behavioral anchors were positive. Private debriefing promoted shared understanding of assessment and feedback and made possible discussion of topics that might otherwise have been overlooked or avoided.
Why was this approach viewed positively? Likely most important was that the intervention had features that the literature suggests promote acceptance of formative feedback. Faculty observed learner performance directly7 over consecutive days of collegial work, minimizing the possibility that fellows would regard the assessment as unrepresentative of their overall performance.21,22 Interviewees stated that comprehensive written behavioral anchors seemed to improve the consistency of assessment across raters and rating sessions,23 likely supporting learner perceptions of fairness.24 Face-to-face interactive conversations seemed to lessen the likelihood that anchors, their application to performance, and next steps for learning would be overlooked, misunderstood, or perceived as irrelevant or unfair.23-25 Comments comparing assessment and feedback in this intervention with remote electronic assessment and feedback support the outsized importance of dialogue in formative assessment.8,26,27
This study has limitations. It involved a single subspecialty28 with relatively small training programs, settings in which it may be easier to develop the trusting relationships important for formative feedback.21,25 Furthermore, the quality of formative assessment was documented only by attestation. Additional research is needed to determine whether a debriefing approach to assessment is broadly applicable. Barriers to implementation affecting its feasibility include the time needed for debriefing, the need for scheduling reminders, and the need for clear direction as to who is responsible for scheduling.
Conclusions
Structured debriefing of shared learner-teacher clinical experiences defined by EPAs supported robust formative assessment and was viewed positively by teachers and learners.
The authors would like to thank Tai Lockspeiser, MD, MEd, for her assistance in editing the revised manuscript.
References
Editor’s Note
The online supplementary data contains examples of EPAs with behavioral anchors and the fellow interview protocol.
Author Notes
Funding: The authors report no external funding source for this study.
Conflict of interest: The authors declare they have no competing interests.
Jennifer Gong, PhD, died 2019.