Background

Workplace-based assessment (WBA) is a key assessment strategy in competency-based medical education. However, its full potential has not been realized because of concerns about reliability, validity, and accuracy. Frame of reference training (FORT), a rater training technique that helps assessors distinguish between learner performance levels, can improve the accuracy and reliability of WBA, but the effect size is variable. Understanding the benefits and challenges of FORT can help improve this rater training technique.

Objective

To explore faculty's perceptions of the benefits and challenges associated with FORT.

Methods

Subjects were internal medicine and family medicine physicians (n=41) who participated in a rater training intervention in 2018 consisting of in-person FORT followed by asynchronous online spaced learning. We assessed participants' perceptions of FORT in post-workshop focus groups and an end-of-study survey. Focus groups and survey free text responses were coded using thematic analysis.

Results

All subjects participated in 1 of 4 focus groups and completed the survey. Four benefits of FORT were identified: (1) the opportunity to apply skills frameworks via deliberate practice; (2) demonstration of the importance of certain evidence-based clinical skills; (3) practice that improved the ability to discriminate between resident skill levels; and (4) recognition of the importance of direct observation and the dangers of using proxy information in assessment. Challenges included time constraints and task repetitiveness.

Conclusions

Participants believe that FORT serves multiple purposes, including helping them distinguish between learner skill levels while demonstrating the impact of evidence-based clinical skills and the importance of direct observation.

Objectives

To explore faculty's perceptions of the benefits and challenges associated with frame of reference rater training.

Findings

Participants felt frame of reference training offered an opportunity to apply skills frameworks via deliberate practice, demonstrated the importance of evidence-based clinical skills, improved their ability to discriminate between resident skill levels, and highlighted the importance of direct observation.

Limitations

It is uncertain how findings generalize to other specialties or rater training for other skills.

Bottom Line

This study provides an understanding of how frame of reference training impacts faculty's beliefs about and approach to workplace-based assessment.

Workplace-based assessment (WBA) is a key assessment strategy in medical education, particularly in competency-based medical education. Improving WBA quality requires faculty development,1-4 specifically to improve faculty's ability to observe, synthesize observations into a judgment, encode judgments into an entrustment rating, provide feedback, and coach learners.3-5 However, the impact of faculty development interventions on the reliability and accuracy of WBA in medical education has been small to negligible.6-12 As such, more work is needed to delineate the unique effects of the various components and implementation designs of rater training.4,5

Performance dimension training (PDT) and frame of reference training (FORT) are 2 rater training techniques that can improve performance appraisal assessments.13,14 PDT trains assessors to recognize the appropriate behaviors or dimensions of a given competency or skill using evidence-informed definitions supported by examples in written vignettes, videos, or role-plays.13,14 FORT helps assessors discriminate between variations in the quality of demonstrated skills by having participants individually assess multiple videos of individual learners with different skill levels and then discuss discrepancies in observations and ratings together.13,14 For example, during PDT, assessors are asked to identify the behaviors that constitute aspirational shared decision-making and are shown examples of these behaviors in video vignettes. Then, during FORT, facilitators ask team members to individually assess several stimulus videos of a resident counseling a patient about starting a statin, in which the resident demonstrates a range of skills from poor to aspirational. Assessors then discuss the videos as a group using a compare-contrast approach, sharing assessments and discussing discrepancies to create a shared mental model that guides assessment judgments.13,14 In a business setting, PDT and FORT can improve assessment accuracy and minimize rating variability by decreasing rater errors or biases unrelated to the targeted performance behaviors.13,14 The impact of these techniques in medical education has been more variable.9-12

We previously published a rater training study that incorporated emerging rater cognition theories (the use of variable frames of reference, inference, and uncertainty in how to translate observations into numerical ratings) to determine which components of rater training might improve WBA.15-19 In that study we explored how PDT affected participants' approach to WBA.19 More recently, we demonstrated that rater training that included FORT increased the accuracy and specificity of observations.20 Clarifying the mechanisms of FORT in medical education may provide further insights into rater cognition and how best to conduct FORT to improve WBA quality and accuracy.21,22 To our knowledge, no study has examined faculty perceptions of FORT in medical education or the mechanism by which it influences assessors' judgments.

The purpose of this study was to understand assessors' perceptions of FORT, in particular its benefits and challenges.

Methods

We previously published a randomized controlled trial of a longitudinal rater training intervention to improve WBA.20 The current study focuses only on the intervention group of that trial and on perceptions of FORT.

Participants

Between December 2017 and August 2018, we emailed family and internal medicine residency program directors at 138 programs in 6 Midwest states and 186 programs in 5 Mid-Atlantic states to solicit their interest in enrolling faculty in the study. Program directors provided email addresses for potential participants. Eligible participants needed to be general practitioner faculty who (1) were responsible for outpatient clinical training and evaluation of residents; (2) provided outpatient care for their own patient panel; (3) had held a faculty position for at least 1 year; and (4) were available for a 2-day study session. Participants who agreed to enroll were initially randomized to the control (n=45) or intervention arm (n=49). The current study focuses only on the 41 intervention group participants who completed the study (7 dropped out post-randomization and 1 dropped out mid-trial secondary to illness); control group participants are not included. Participants received a modest $150 honorarium from the Accreditation Council for Graduate Medical Education. The intervention group was eligible for up to 14.25 Continuing Medical Education and Maintenance of Certification credits.
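For readers who want a concrete picture of the allocation step, the following is a minimal sketch assuming simple (coin-flip) randomization; the trial's actual allocation procedure is reported in the parent study,20 and the function name and seed here are hypothetical.

```python
import random

def allocate_arms(participant_ids, seed=2018):
    """Hypothetical simple randomization: each enrolled participant is
    independently assigned to an arm, so arm sizes can differ slightly
    (consistent with the 45/49 split reported above)."""
    rng = random.Random(seed)
    return {pid: rng.choice(["control", "intervention"])
            for pid in participant_ids}

# Example: 94 enrolled participants
arms = allocate_arms(range(1, 95))
print(sum(1 for a in arms.values() if a == "control"), "control;",
      sum(1 for a in arms.values() if a == "intervention"), "intervention")
```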

Development of Stimulus Videos

Between June 2016 and June 2018, with the assistance of 6 experts in physician-patient communication and trainee assessment, and using evidence from the literature, we created stimulus videos depicting residents taking a history from or counseling patients.23-28  We created 27 videos (9 scenarios, each demonstrating 3 different levels of resident skill) for rater training using recommended guidelines.29  In the online supplementary data, we summarize the steps we used to create the videos and their associated answer keys with the expert-informed consensus entrustment rating and narrative assessment.
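The structure of the stimulus set can be summarized with a short sketch (illustrative only; the scenario names are placeholders, and the skill-level labels are taken from the descriptions later in this article):

```python
from itertools import product

# 9 clinical scenarios x 3 resident skill levels = 27 stimulus videos
SCENARIOS = [f"scenario_{i}" for i in range(1, 10)]      # placeholder names
SKILL_LEVELS = ["poor", "satisfactory", "aspirational"]  # labels from the text

videos = [{"scenario": s, "skill_level": lvl}
          for s, lvl in product(SCENARIOS, SKILL_LEVELS)]
assert len(videos) == 27
```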

Intervention and Assessments

At baseline, participants completed a self-administered, web-based demographic questionnaire and assessed 10 stimulus videos (5 history taking and 5 counseling) using an online rater assessment form that asked them to identify what the resident in the video did well, what required improvement, and how they would supervise the resident going forward (prospective entrustment decision).20 Participants attended 1 of 4 rater trainings in the fall of 2018. Each rater training consisted of 2 in-person, 3-hour workshops held over 2 days, immediately following the baseline assessment. The rater training content and format (online supplementary data) were informed by our prior research and included PDT and FORT.9,15-19 During PDT, participants created frameworks of the skills required for history taking and counseling. They then reviewed an evidence-based framework for each skill and revised their frameworks as needed.23-28 During FORT, participants applied the framework to the 2 remaining stimulus videos in the series, comparing their assessments to the answer keys as a group.

After each of the 4 rater training workshops, 1 of 4 individuals with expertise in focus group facilitation (but otherwise unrelated to the study) led a focus group about the rater training (focus group guide provided as online supplementary data). Focus groups occurred immediately after the rater training so that participants could provide specific feedback about the rater training. Study investigators were not present during the focus groups, which were audio-recorded, transcribed, and de-identified.

Four weeks after the in-person workshops, participants started 3 asynchronous, online, spaced learning FORT modules with timed deliverables spaced 6 weeks apart using the Canvas Learning Management System (Instructure Inc). In spaced learning, a course is divided into short duration modules with breaks between the sessions. We included spaced learning because knowledge retention is enhanced when learning sessions are spaced in time.30  During each spaced learning module, participants were prompted to watch a series of history taking and counseling stimulus videos, each depicting 2 different levels of resident skill. We instructed participants to use their frameworks to guide observations. After rating each video, participants reviewed the answer key and identified similarities and differences between their own observations and those of the expert. A third video for each series was available for optional review (online supplementary data). A study investigator (J.K. or E.H.) moderated the spaced learning discussion boards.
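The module timing can be expressed as a minimal sketch of the schedule described above (the workshop date below is hypothetical):

```python
from datetime import date, timedelta

def spaced_learning_schedule(workshop_date, n_modules=3,
                             lead_in_weeks=4, gap_weeks=6):
    """Module start dates: the first module opens 4 weeks after the
    in-person workshop, with deliverables spaced 6 weeks apart."""
    first = workshop_date + timedelta(weeks=lead_in_weeks)
    return [first + timedelta(weeks=gap_weeks * i) for i in range(n_modules)]

# Hypothetical fall 2018 workshop date
for i, start in enumerate(spaced_learning_schedule(date(2018, 10, 1)), 1):
    print(f"Module {i} opens {start.isoformat()}")
```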

At a minimum of 4 weeks after the last spaced learning module (March to May 2019), participants watched and rated 10 stimulus videos (5 history taking and 5 counseling). Participants completed a 19-item end-of-study survey focused on the intervention (online supplementary data). Questions asked about the benefits of spaced learning (rated on a 5-point Likert scale where 1=strongly disagree, 5=strongly agree) and spaced learning timing and work effort (rated on a 3-point scale). There were 3 open-ended questions eliciting (1) strengths of spaced learning to improve skills in direct observation; (2) ways spaced learning could be improved; and (3) barriers to spaced learning participation.

Analysis

We used descriptive statistics to summarize demographic and end-of-study survey data. Two investigators (J.K., L.C.) independently coded focus group transcripts and open-ended survey questions using thematic analysis.31  The investigators began by familiarizing themselves with the data, coding data relevant to the research question, and generating initial themes. The investigators met multiple times to review, discuss, reconcile, and name the themes identified. Themes were then shared with the third investigator (E.H.) for additional input and clarification.
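As an illustration of the descriptive statistics, the sketch below summarizes a single 5-point Likert item in the way such survey items are typically reported; the responses are invented for illustration and do not reproduce the study's data:

```python
from collections import Counter
from statistics import mean, median

def summarize_likert(responses):
    """Descriptive summary of one 5-point Likert item
    (1=strongly disagree ... 5=strongly agree)."""
    counts = Counter(responses)
    return {
        "n": len(responses),
        "mean": round(mean(responses), 2),
        "median": median(responses),
        "pct_agree": round(100 * sum(r >= 4 for r in responses)
                           / len(responses), 1),
        "distribution": {k: counts.get(k, 0) for k in range(1, 6)},
    }

# Invented responses from 41 participants (not the study's data)
example_item = [5] * 20 + [4] * 15 + [3] * 4 + [2] * 2
print(summarize_likert(example_item))
```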

The Institutional Review Board of the University of Pennsylvania Office of Regulatory Affairs approved this study. All participants provided informed consent.

Results

Table 1 summarizes participant demographics. All participants attended 1 of 4 post-workshop focus groups. From the focus groups we identified 4 themes describing FORT benefits and 6 themes describing challenges. Table 2 provides example quotes for each theme.

Table 1

Baseline Characteristics of Participants
Table 2

Benefits and Challenges Associated with Frame of Reference Training (FORT)

Benefits of FORT

First, FORT allowed participants to practice applying their previously created frameworks to different videos. Participants described how watching multiple videos of the same encounter enabled them to apply the frameworks across a progression of resident skill levels, and how watching the same encounter at 3 different skill levels promoted deliberate practice. Writing out their observations, they explained, required them to commit to their assessments, further facilitating deliberate practice.

Second, watching videos of the same encounter at 3 different resident skill levels helped participants gain clarity about the importance of specific clinical skills. Initially some participants doubted that a particular framework behavior was important and necessary for safe, effective, patient-centered care (for example, asking patients to prioritize their agenda at the beginning of a visit). However, when participants watched the video in which the resident started the encounter with agenda-setting, they were able to see why that behavior was beneficial and important. Seeing aspirational performance thus helped highlight the value and importance of certain clinical behaviors that may have initially been dismissed as unimportant or minimally important.

Third, seeing the same clinical encounter performed at 3 resident skill levels helped participants discriminate between performance levels. While it was sometimes difficult for participants to distinguish between poor and satisfactory performance (calling for direct and indirect supervision, respectively) or between satisfactory and aspirational performance (no supervision needed), watching the sequence of 3 videos helped participants better distinguish between resident skill levels. Participants described how comparing and contrasting residents across the video series enabled them to better understand the range of behaviors for, or variable execution of, a given skill. For example, across the video series, participants could see a range of how much and how well a resident explored the physical, psychological, and emotional impact of a symptom on a patient.

Fourth, participants described how watching the 3-video series emphasized the importance of direct observation. Multiple participants described having an "aha" moment when they realized that a resident's oral case presentation might be identical after all 3 video encounters and might not represent what occurred during the patient visit. As such, FORT underscored how a resident's patient presentation was an incomplete and inadequate proxy for what occurred during an office visit. Participants recognized that, while the information a resident obtained from the patient might be the same, the patient's experience during the encounter and subsequent outcome of the visit could substantially differ. This realization further reinforced the value of direct observation.

Challenges With FORT

Participants described several challenges with FORT. Given the similarities between the videos in a series, participants found it difficult to recall what occurred in which video, and they questioned whether the same or a different actor should portray the resident in all 3 videos. Participants also recommended making the differences between the videos more subtle, with the poorly skilled resident a little better and the aspirational resident a little less skilled. Some felt that writing down their observations for each video in the series was tedious, particularly given the similarity of the videos. Participants also described how watching videos as a group may have caused cueing when other participants had verbal or nonverbal reactions to a video, potentially promoting groupthink. Because participants knew that the residents in the videos were portrayed by actors, they were at times uncertain whether the behaviors they observed should be attributed to the resident's skill or the actor's performance. Finally, during FORT we asked participants whether any behaviors on the framework were not essential for safe, effective, patient-centered care; several described uncertainty about removing behaviors when the frameworks had been presented as evidence-based.

End-of-Survey Results About FORT Spaced Learning

All participants (n=41) completed the end-of-study survey. Participants valued spaced learning as an approach to improve their skills in direct observation and feedback (Table 3) and viewed the number and timing of the spaced learning modules favorably (data not shown).

Table 3

Faculty Perceptions of Frame of Reference Training Spaced Learning on End-of-Study Survey

Almost all participants answered the open-ended questions on the benefits of (n=39), areas to improve (n=39), and barriers to participating in (n=38) FORT spaced learning. Survey responses reiterated many of the focus group themes. Participants described how spaced learning afforded additional opportunities for repeated practice, which helped refresh skills in direct observation while bringing intervening real-world experience to practice. Repeated practice mitigated the loss of previously acquired skills, promoting longer-term learning. Practice also reinforced the frameworks, thereby building an "internal model" for the competency being observed. As a result, applying the frameworks started to become "second nature."

Participants also described the benefit of comparing their assessments to the answer keys during spaced learning. The answer keys helped participants differentiate between skill levels across the video series. The combination of seeing more cases and comparing assessments to the answer keys made it easier for participants to differentiate between good and fair resident skill levels. With time, participants described being better able to identify the more subtle differences in resident skills. Additionally, participants valued the ability to compare their assessments to the answer keys to see how they could further improve as evaluators. Finally, the FORT spaced learning continued to serve as a reminder that resident behaviors in the room with a patient may not translate into their oral case presentations.

The greatest challenge with spaced learning was time. Participants described how it was difficult to complete the spaced learning given their competing responsibilities. Some wondered whether it would be better to assign fewer videos per module, or to make each video shorter but offer more modules. Several commented that typing their observations was tedious. Finally, some participants recommended expanding the modules to skills beyond history taking and counseling.

Discussion

In this study we explored the mechanism by which FORT, delivered through 2 in-person workshops and 3 asynchronous online spaced learning modules, impacted participants' approach to WBA. We found that FORT provided participants an opportunity to practice applying assessment frameworks, highlighted the importance of specific evidence-based clinical skills in patient care, helped participants improve their ability to discriminate between skill levels, and emphasized the importance of direct observation and the dangers of using proxy information in assessment. There are ongoing calls to improve assessment across the undergraduate to graduate medical education continuum.32-34 Importantly, the ability to assess effectively requires training and practice.2-5,21 To our knowledge this is the first study to explore the benefits and challenges of WBA FORT in medical education.

The participants in our study recognized how FORT provided an opportunity to engage in deliberate practice of assessment through repetition, reflection, and feedback using the answer keys. The acquisition of expertise requires deliberate practice,35  and acquiring expertise in assessment is likely no different. The ability to compare individual assessments to expert raters has previously been identified as an important FORT technique.13  Deliberate practice requires motivation and endurance, and participants described how this practice sometimes felt tedious and time consuming. As such, more research is needed to determine how best to optimize stimulus video content and delivery.

Faculty evaluation of learners is likely related to faculty's own clinical skills.15,16 Our findings highlight how FORT may help faculty shift away from using their own clinical skills as the standard when evaluating residents. Furthermore, if faculty do not routinely perform a particular skill, or if they do not believe that behavior is important for effective patient care, they are unlikely to comment on it or assess it in their learners. While reviewing evidence-based clinical skills frameworks may suffice to convince a few faculty members of the importance of these specific clinical skills,19 we found that faculty could better appreciate why certain skills were important and beneficial after seeing the 3-video series, particularly the video of the resident with aspirational skills. Therefore, FORT may highlight the importance of specific clinical skills and increase the likelihood that faculty will use them in criterion-referenced resident assessment.

Successful implementation of WBA requires that both individuals and the organizational culture value direct observation,5,36 yet it can be difficult to get faculty to buy into its importance.37 An unexpected finding was that watching the 3-video series reinforced the importance of direct observation. The video series helped participants appreciate, in a more salient and visceral way, how a resident might present a patient the same way despite 3 very different patient encounters. This realization reinforced the importance of direct observation and the dangers and limitations of using proxy information. Going forward, using a video series may be an effective strategy to increase buy-in to direct observation.

The rater training in this study was longer than in most previously published studies (6 hours of in-person workshops and 3 online asynchronous spaced learning modules). Effective rater training takes time, which raises the real challenge of how to integrate more intensive rater training into medical education assessment programs given faculty's limited time and competing responsibilities. That said, if we want to improve assessment, we need to recognize that there are no easy fixes: effective assessment takes time, practice, and faculty development.5,21,37 Iterative practice is particularly important given that prior rater training interventions have shown drift effects, in which assessors demonstrate high interrater reliability initially but poor reliability in subsequent performance assessments.38,39 Moreover, training assessors in resident observation helps establish trust in, and the reliability and validity of, the feedback faculty provide to learners after an observation, thereby shifting the cultural view of WBA toward time well spent. Future research will need to explore how best to implement FORT to maximize effectiveness while balancing feasibility.

There are several limitations to our findings. First, we needed to recruit from multiple residency programs to identify faculty for this study, and participants' motivation may differ from that of those who chose not to participate (volunteer bias). Second, it is uncertain how the findings generalize to other specialties or to rater training for other skills (for example, procedural skills). Third, there was variable participation in the spaced learning. Fourth, participants may have described more benefits of training to justify the time they spent participating. Finally, although the impact of WBA is maximized when it is followed by feedback, this study did not address the impact of training on feedback.

Conclusions

Individuals participating in FORT describe how it not only enables deliberate practice to improve discrimination between skill levels but also reinforces the evidence-informed skill frameworks and the importance of direct observation.

Acknowledgments

The authors would like to thank the following individuals: Denise LaMarra, MS, CHSE, Janice Radway, BA, Marc Shalaby, MD, and the Perelman School of Medicine Standardized Patient Program, including the actors, for assistance developing stimulus videos; Carol Chou, MD, Nicole Defenbaugh, PhD, Richard Frankel, PhD, Benjamin Kinnear, MD, MEd, Denise LaMarra, MS, CHSE, and Leigh Simmons, MD, for assistance developing the stimulus videos; Stephanie Taitano and the Perelman School of Medicine Faculty Affairs and Professional Development Staff for their assistance creating the spaced learning modules; Anthony R. Artino Jr, PhD, for his help with the end-of-study survey; and focus group leaders Elizabeth Bernabeo, MPH, Justin Bittner, Laura Hirshfield, PhD, and Judy Shea, PhD.

References

1. Anderson HL, Kurtz J, West DC. Implementation and use of workplace-based assessment in clinical learning environments: a scoping review. Acad Med. 2021;96(suppl 11):164-174.
2. Kogan JR, Hatala R, Hauer KE, Holmboe E. Guidelines: the do's, don'ts, and don't knows of direct observation of clinical skills in medical education. Perspect Med Educ. 2017;6(5):286-305.
3. Massie J, Ali JM. Workplace-based assessment: a review of user perceptions and strategies to address the identified shortcomings. Adv Health Sci Educ Theory Pract. 2016;21(2):455-473.
4. Prentice S, Benson J, Kirkpatrick E, Schuwirth L. Workplace-based assessments in postgraduate medical education: a hermeneutic review. Med Educ. 2020;54(11):981-992.
5. Lorwald AC, Lahner FM, Mooser B, et al. Influences on the implementation of Mini-CEX and DOPS for postgraduate medical trainees' learning: a grounded theory study. Med Teach. 2019;41(4):448-456.
6. Newble DI, Hoare J, Sheldrake PF. The selection and training of examiners for clinical examination. Med Educ. 1980;14(5):345-349.
7. Noel GL, Herbers JE, Caplow MP, Cooper GS, Pangaro LN, Harvey J. How well do internal medicine faculty members evaluate the clinical skills of residents? Ann Intern Med. 1992;117(9):757-765.
8. George BC, Teitelbaum EN, DaRosa DA, et al. Duration of faculty training needed to ensure reliable OR performance ratings. J Surg Educ. 2013;70(6):703-708.
9. Holmboe ES, Hawkins RE, Huot SJ. Effects of training in direct observation of medical residents' clinical competence: a randomized trial. Ann Intern Med. 2004;140(11):874-881.
10. Cook DA, Dupras DM, Beckman TJ, Thomas KG, Pankratz VS. Effect of rater training on reliability and accuracy of mini-CEX scores: a randomized controlled trial. J Gen Intern Med. 2009;24(1):74-79.
11. Robertson RL, Vergis A, Gillman LM, Park J. Effect of rater training on the reliability of technical skills assessment: a randomized controlled trial. Can J Surg. 2018;61(6):405-411.
12. Weitz G, Vinzentius C, Twesten C, Lehnert H, Bonnemeier H, Konig IR. Effects of a rater training on rater accuracy in a physical examination clinical skills assessment. GMS Z Med Ausbild. 2014;31(4):Doc41.
13. Woehr DJ, Huffcutt AI. Rater training for performance appraisal: a quantitative review. J Occup Organ Psychol. 1994;67(3):189-205.
14. Feldman M, Lazzara EH, Vanderbilt AA, DiazGranados D. Rater training to support high-stakes simulation-based assessments. J Contin Educ Health Prof. 2012;32(4):279-286.
15. Kogan JR, Conforti L, Bernabeo E, Iobst W, Holmboe E. Opening the black box of clinical skills assessment via observation: a conceptual model. Med Educ. 2011;45(10):1048-1060.
16. Kogan JR, Hess BJ, Conforti LN, Holmboe ES. What drives faculty ratings of residents' clinical skills? The impact of faculty's own clinical skills. Acad Med. 2010;85(suppl 10):25-28.
17. Yeates P, O'Neill P, Mann K, Eva K. Seeing the same thing differently: mechanisms that contribute to assessor differences in directly-observed performance assessments. Adv Health Sci Educ Theory Pract. 2013;18(3):325-341.
18. Gingerich A, Kogan J, Yeates P, Govaerts M, Holmboe E. Seeing the 'black box' differently: assessor cognition from three research perspectives. Med Educ. 2014;48(11):1055-1068.
19. Kogan JR, Conforti LN, Bernabeo E, Iobst W, Holmboe E. How faculty members experience workplace-based assessment rater training: a qualitative study. Med Educ. 2015;49(7):692-708.
20. Kogan JR, Dine CJ, Conforti LN, Holmboe ES. Can rater training improve the quality and accuracy of workplace-based assessment narrative comments and entrustment ratings? A randomized controlled trial [published online ahead of print July 21, 2022]. Acad Med.
21. ten Cate O, Balmer DF, Caretta-Weyer H, Hatala R, Hennus MP, West DC. Entrustable professional activities and entrustment decision making: a development and research agenda for the next decade. Acad Med. 2021;96(suppl 7):96-104.
22. Cook DA, Bordage G, Schmidt HG. Description, justification and clarification: a framework for classifying the purposes of research in medical education. Med Educ. 2008;42(2):128-133.
23. Lane JL, Gottlieb RP. Structured clinical observations: a method to teach clinical skills with limited time and financial resources. Pediatrics. 2000;105(4 Pt 2):973-977.
24. Makoul G. The SEGUE framework for teaching and assessing communication skills. Patient Educ Couns. 2001;45(1):23-34.
25. Lyles JS, Dwamena FC, Lein C, Smith RC. Evidence-based patient-centered interviewing. J Clin Outcomes Manag. 2001;8(7):28-34.
26. Duke P, Frankel RM, Reis S. How to integrate the electronic health record and patient-centered communication into the medical visit: a skills-based approach. Teach Learn Med. 2013;25(4):358-365.
27. Frankel RM, Stein T. Getting the most out of the clinical encounter: the four habits model. J Med Pract Manage. 2001;16(4):184-191.
28. Braddock CH, Edwards KA, Hasenberg NM, Laidley TL, Levinson W. Informed decision making in outpatient practice. JAMA. 1999;282(24):2313-2320.
29. Canavan C, Holtman MC, Richmond M, Katsufrakis PJ. The quality of written comments on professional behaviors in a developmental multisource feedback program. Acad Med. 2010;85(suppl 10):106-109.
30. Carpenter SK, Cepeda NJ, Rohrer D, Kang SH, Pashler H. Using spacing to enhance diverse forms of learning: review of recent research and implications for instruction. Educ Psychol Rev. 2012;24:369-378.
31. Nowell LS, Norris JM, White DE, Moules NJ. Thematic analysis: striving to meet the trustworthiness criteria. Int J Qual Methods. 2017;16:1-13.
32. The Coalition for Physician Accountability. The Coalition for Physician Accountability's Undergraduate Medical Education-Graduate Medical Education Review Committee (UGRC): Recommendations for Comprehensive Improvement of the UME-GME Transition.
33. McConville JF, Woodruff JN. A shared evaluation platform for medical training. N Engl J Med. 2021;384(6):491-493.
34. AACOM, AAMC, ACGME, ECFMG/FAIMER. Transition in a Time of Disruption: Practical Guidance to Support the Move from Undergraduate Medical Education to Graduate Medical Education. Published March 2021. Accessed December 13, 2022. https://www.aamc.org/media/51991/download
35. Ericsson KA. Deliberate practice and acquisition of expert performance: a general overview. Acad Emerg Med. 2008;15(11):988-994.
36. Young JQ, Sugarman R, Schwartz J, O'Sullivan PS. Faculty and resident engagement with a workplace-based assessment tool: use of implementation science to explore enablers and barriers. Acad Med. 2020;95(12):1937-1944.
37. Massie J, Ali JM. Workplace-based assessment: a review of user perceptions and strategies to address the identified shortcomings. Adv Health Sci Educ Theory Pract. 2016;21(2):455-473.
38. McLaughlin K, Ainslie M, Coderre S, Wright B, Violato C. The effect of differential rater function over time (DRIFT) on objective structured clinical examination ratings. Med Educ. 2009;43(10):989-992.
39. Hemmer PA, Dadekian GA, Terndrup C, et al. Regular formal evaluation sessions are effective as frame-of-reference training for faculty evaluators of clerkship medical students. J Gen Intern Med. 2015;30(9):1313-1318.

Author notes

Funding: The authors report no external funding source for this study.

Competing Interests

Conflict of interest: The authors declare they have no competing interests.

Editor's Note: The online version of this article contains an overview of creation of the training videos and expert answer keys, components of rater training faculty development, the focus group interview guide, and the survey used in the study.
