Multiple snapshots of a trainee's real-time performance are required for program directors and Clinical Competency Committees to provide a comprehensive performance assessment and deliver formative feedback.1 Multi-source feedback (MSF), or 360-degree evaluation, is a formal questionnaire-based assessment method that relies on workplace-based direct observation of performance by multiple raters from different perspectives.2–5 While decades of MSF studies in medical education support its feasibility and provide validity evidence,2–4 educators wishing to implement MSF programs may be intimidated by issues of time, labor intensity, and scope.1,5,6
What Is Known
The MSF approach is best suited for direct observation of real-time trainee performance. Using diverse raters' perspectives over multiple observations yields a more comprehensive assessment of a learner's performance than any single-source approach.2–4 The MSF approach can be powerful for assessing behaviors in nontechnical or nonprocedural domains, such as professionalism, interpersonal and communication skills, and practice-based learning and improvement, that are difficult to capture by traditional methods.1,3,4 Educators planning to develop an MSF tool must adhere to a rigorous development process to optimize instrument feasibility, reliability, and validity as well as to avoid measurement errors. Steps in this process include developing new items or selecting existing items that will yield specific, meaningful feedback to trainees,1,6 identifying the behaviors that each rater group is best suited to observe and judge, determining who the assessors are, and training the raters. Validity evidence can be demonstrated by correlations between MSF instruments (concurrent validity) and between MSF ratings and other assessment measures (predictive validity).3
Program directors should:
Seek to implement assessment programs that incorporate data from multiple sources, based on multiple snapshots of real-time, in situ workplace observations.
Prioritize adopting, adapting, or creating multi-source feedback instruments to optimize validity evidence while ensuring feasible implementation (eg, each rater assesses only the performances they are best suited to observe).
Continue to cultivate a feedback culture that is safe and supportive and that promotes frequent feedback conversations between learners and credible raters.
Sources of the Feedback: The Raters
The MSF raters can include self, peers, supervisors or preceptors, nonphysician coworkers (eg, nurses, social workers, pharmacists), and patients.3,6 Self-assessment plays an integral role in the MSF process (table). Trainees' self-assessment, when compared with other MSF ratings, may identify gaps needing further attention.2–4 Eight to 20 credible rater assessments (depending on the source) are needed to optimize validity. Residents perceive raters as credible when raters directly observe their performance and are capable of providing meaningful feedback about that performance.1,4 Thus, each rater should assess only the domains matched to the scope of their role and relationship with the trainee, so that their feedback will be perceived as credible.4–6 Trainees have identified patient feedback as a particularly powerful trigger for behavior change.4,6
Delivering MSF: The Impetus to Change Behavior
The MSF approach may be more likely to lead to behavioral change when trainees buy in to the goals and value of MSF from credible raters.2 Facilitated feedback delivery by a mentor or coach may improve learner receptivity and incorporation of MSF.4 Adding narrative comments to numerical item scores makes the feedback more meaningful.4,6 Trainees report that guidance in identifying gaps between self-assessment and MSF results improves MSF acceptance and helps them generate an action plan to change behavior.4
How You Can Start TODAY
Perform a local needs assessment: How is your program already obtaining real-time observation data? Identify existing survey-based feedback of trainees, and look for opportunities to expand your MSF instruments and rater sources.
Educate residents, faculty, and ancillary staff about the rationale behind MSF and the professional behaviors that are most amenable to MSF assessments.
Adopt or adapt available MSF tools, and pilot these in a simulated educational setting to increase learner and faculty familiarity with the tools. Sample items and instruments are available for adaptation.2,7–9
Select and train raters whom trainees will perceive as credible; raters must be able to directly observe the behaviors you intend to assess.
What You Can Do LONG TERM
Consider faculty development aimed at promoting a culture of feedback within your training program and institution that includes MSF.
Build a framework to support MSF use in Clinical Competency Committees for subcompetency assessment and to inform entrustment decisions.
Provide MSF to trainees as part of semiannual reviews to identify improvement targets with action plans.
Implement MSF to assess faculty teaching; incorporate this into the annual performance review with the department chair.
Editor's Note: In order to expand the resident assessment toolbox for programs and medical education researchers, JGME is presenting a series of Rip Outs on assessment instruments “beyond surveys.” This multi-source feedback Rip Out is the first in this series.