Background Teaching faculty request timely feedback from residents to improve their skills. Yet even with anonymous processes, this upward feedback can be difficult to obtain as residents raise concerns about identification and repercussions.

Objective To examine faculty perception of the quality and content of feedback from residents after increasing anonymity and sacrificing timeliness.

Methods Between 2011 and 2017, an associate program director at a large internal medicine residency program met briefly with each resident individually, shortly after their rotation, to obtain feedback about their teaching faculty. To improve anonymity, residents were promised their feedback would not be released until they graduated. In 2019, all feedback was collated and released to faculty at one time. We administered 3 timed, voluntary, anonymous, 36-item closed-ended surveys asking faculty about the feedback's content and value and asking them to characterize the feedback as praise, constructive feedback, or criticism.

Results Of 189 eligible faculty, 140 completed all 3 surveys (74.1% response rate). Faculty reported this feedback to be of higher quality (81.0%, 81 of 100) and quantity (82.4%, 84 of 102) than prior feedback, and 85.4% (88 of 103) agreed it was more specific. Faculty identified less praise (median 35.0% vs median 50.0%, P<.001) and more negative constructive feedback (median 20.0% vs median 5.0%, P<.001) compared to prior feedback. While 82.9% (116 of 140) of faculty initially reported the feedback would change their behavior, only 63.6% (89 of 140) felt the same way 3 months after receiving it (P<.001). Faculty were divided on the necessity of a time delay, with 41.4% (58 of 140) believing it reduced the feedback’s value. Despite the delay, 32.1% (45 of 140) felt they could identify residents.

Conclusions Offering a substantial delay in feedback delivery increased anonymity and enabled residents to furnish more nuanced and constructive comments; however, faculty opinions diverged on whether this postponement was valuable.

Feedback is considered fundamental to performance improvement in medical education for trainees and faculty, especially as studies show self-assessment can be flawed.1-4 The Accreditation Council for Graduate Medical Education requires that residents have the opportunity to evaluate faculty, a process often called upward feedback.5-7 The predominant goals of upward feedback are to improve clinical teaching and support faculty promotion.6-9 While most faculty desire feedback from trainees,10 the literature is mixed on what type of feedback improves clinical teaching.6,11 Numerical rating scales have not been shown to change teaching performance, while written comments have mixed results.12-16

An inherent issue in faculty feedback is the power differential; feedback to faculty is perceived as having greater consequences.8 Residents raise concerns that providing anything beyond praise could negatively affect relationships, given that clinical education occurs in small, intimate teams.7-9,17 If faculty react poorly, this could result in less positive evaluations or recommendations, with potential downstream consequences for fellowships or jobs.7,18,19 In addition, residents may feel uncomfortable providing constructive upward feedback.20

Residency programs often make upward feedback anonymous to reduce resident concerns, yet resident discomfort persists.21 Small departments with fewer faculty find it difficult to maintain anonymity, and even in larger hospitals, faculty may attend only once or twice a year, making it possible to identify the resident.19 These concerns, along with resident workload and time pressures, often limit the amount of feedback to faculty, resulting in vague, generalized praise rather than anything actionable. This violates the second tenet of Feedback Intervention Theory, which posits that the effectiveness of feedback depends on a credible feedback source, specific and actionable content, and how the individual perceives the content.8,13,22-24

Ensuring anonymity also limits the ability to provide timely feedback, which is considered a cornerstone of effective feedback, so balancing timeliness and anonymity can pose challenges.8 It is unclear what time delay is needed to obtain anonymous yet actionable feedback from residents.

With this question in mind, we developed a longitudinal approach to collecting resident feedback, promising anonymity by withholding release of their comments until they completed training. We sacrificed timely feedback to protect resident anonymity and anticipated residents would provide more constructive and candid comments.7,19  We hypothesized that faculty would find this time-delayed feedback valuable and actionable.

What Is Known

Faculty want feedback on their teaching, but residents express concerns about anonymity when feedback is given at completion of experiences.

What Is New

This study examined internal medicine faculty perceptions of feedback from residents that was collected immediately but provided only after the residents graduated. Faculty reported that the delayed feedback was greater in value, quantity, and specificity, but they had mixed preferences regarding timing.

Bottom Line

With the anonymity of delayed feedback, residents provided different feedback, which faculty rated as higher in quality and quantity, but some faculty found that the delay reduced its value.

Setting and Participants

This study was completed at Massachusetts General Hospital (MGH), an academic hospital in Boston, Massachusetts. Participants were inpatient teaching faculty and internal medicine residents. During the study period, the residency program had more than 170 trainees. The general medicine inpatient teams consisted of a postgraduate year (PGY) 2 resident leading 2 to 4 interns (PGY-1) in the care of 12 to 24 patients, with 1 to 2 teaching faculty. Rotations were 2 weeks long, and PGY-2 residents completed 4 to 6 rotations.

Intervention

From 2011 to 2017, the inpatient associate program director (APD; K.M.F.) held a brief 10-minute meeting with each PGY-2 resident shortly after their rotation to ask about their experience with the faculty. Residents had the opportunity to discuss any issues, but the meeting largely focused on feedback. Residents were told their feedback was confidential, would be combined with that of other residents, and would be released only after they finished training.

The residents were asked 4 questions, chosen based on review of the literature: (1) How were your attendings? (2) What was their teaching style? (3) Can you describe the level of autonomy? and (4) What were their strengths and areas for improvement? Residents’ responses were typed up in real time and saved on a hospital shared drive with access limited to study personnel. During this study period, the program expected residents to provide verbal feedback and complete anonymous written evaluations released yearly.

In 2019, the individual feedback about each faculty member was collated, with comments rearranged out of chronological order to further reduce the risk of resident identification; each resident’s comment appeared as a distinct paragraph. To evaluate faculty responses to this feedback, faculty completed a timed, 3-part survey from January through March. Faculty were informed about the project via email and asked about their prior experience with resident feedback on inpatient services. Upon completion of survey 1, faculty were emailed their personal feedback, along with an explanation of the process and examples (online supplementary data Appendix A and B). Immediately after reading their feedback, they were asked to complete survey 2. Faculty were offered the opportunity to meet with the APD to discuss their feedback. Faculty who did not want to participate in the study were emailed their feedback separately. Because feedback often triggers emotions, survey 3 was sent 3 months after faculty received their feedback to assess whether their views had changed over time.22 The secure web-based application Research Electronic Data Capture (REDCap)25,26 was used to manage survey distribution and collect responses, and automated reminders were sent at 1, 2, and 4 weeks. We obtained outside email addresses for faculty who had left MGH. Those completing survey 2 were entered into a lottery for a gift card, and all faculty received a $5 coffee card for completing survey 3.

Outcomes Measured

The 3 surveys were developed from a literature review and a published description of types of feedback.11,27 The first survey drafts and research methodology were presented to an advisory panel of 20 medical education researchers for feedback. Two experts in survey design reviewed and edited the surveys, and 2 faculty not in the study pilot tested them prior to distribution (see online supplementary data for surveys). The surveys collected basic demographic information (age, gender, and years post-residency training). Items were rated on a 5-point Likert scale or had open text boxes. Based on a previously published model,27,28 faculty were asked to estimate whether the feedback they received contained praise (positive statements about you as a teacher), positive constructive feedback (affirming comments about specific behaviors), negative constructive feedback (corrective comments about specific behaviors), and criticism (negative statements about you as a teacher). Identifying information was kept separate to ensure anonymity and confidentiality.

Analysis of the Outcomes

Descriptive statistics were used to summarize data, such as the characteristics of respondents. Inferential statistics were used to compare responses between surveys; the surveys were analyzed as independent rather than repeated measures, a more conservative approach that allowed more data points to be included in specific analyses. Because some data were not normally distributed, nonparametric tests were used. Mann-Whitney U tests were used to compare the percentages of the 4 specific types of written feedback faculty received at baseline and with the new approach. Two-proportion z-tests were used to sequentially compare proportions of faculty agreement, faculty confidence, and faculty perception of quality between 2 surveys at a time. Stata 17 (StataCorp) and 2 online calculators (Social Science Statistics and Statology) were used for these analyses. Statistical significance was set at P<.05.
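To make the analytic approach concrete, the sketch below shows how both tests could be run in Python with SciPy and NumPy. It is illustrative only: the Mann-Whitney example uses hypothetical praise percentages (the raw survey responses are not public), while the two-proportion z-test reuses the published counts from one reported comparison.

```python
# Illustrative sketch of the two analyses described above.
import numpy as np
from scipy import stats

# Mann-Whitney U test: compare distributions of the percentage of
# "praise" faculty estimated in prior vs new feedback (hypothetical data).
prior_praise = [50, 75, 35, 60, 40, 80, 55, 65]  # % praise, survey 1
new_praise = [35, 20, 40, 30, 50, 25, 15, 45]    # % praise, survey 2
u_stat, p_val = stats.mannwhitneyu(prior_praise, new_praise,
                                   alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, P = {p_val:.3f}")

# Pooled two-proportion z-test, treating the two surveys as independent
# samples (the paper's more conservative assumption).
def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    p_pool = (x1 + x2) / (n1 + n2)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    return z, 2 * stats.norm.sf(abs(z))  # two-tailed P value

# Reproduces the reported drop in perceived value between survey 2 and
# survey 3: 94.3% (132 of 140) vs 84.3% (118 of 140), reported P = .01.
z, p = two_proportion_z(132, 140, 118, 140)
print(f"z = {z:.2f}, P = {p:.3f}")  # z is about 2.71, P about .007
```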

This study was deemed exempt by the Partners Human Research Committee.

Characteristics of Respondents

Over the 6 years, feedback was collected from 371 PGY-2 residents about 251 faculty. Faculty who had retired or died, or who had feedback from only one resident (which would have made that resident easily identifiable), were excluded, leaving 189 eligible faculty. Survey 1 was completed by 157 faculty (83.1%), survey 2 by 151 (79.9%), and survey 3 by 140 (74.1%). Survey responses from the 140 faculty who completed all 3 surveys were included in the analysis. Most respondents were men (59.3%), had finished training 11 or more years earlier (55.7%), and had served as attendings at MGH for 5 or more years (62.1%; Table 1). This matched current MGH faculty demographics.

Table 1

Faculty Demographics and Prior Feedback Experience (Survey 1)


Experience of Prior Feedback

Based on prior experience, the majority of faculty reported receiving verbal feedback at least some of the time (37.1% [52 of 140] often/always, 41.4% [58 of 140] sometimes), while far fewer received written feedback (2.9% [4 of 140] often/always, 12.1% [17 of 140] sometimes; Table 1).

New Feedback

After reading their new feedback, 72.9% (102 of 140) of faculty agreed/strongly agreed it matched expectations. When asked to compare the new feedback to prior feedback, written or verbal, 81.0% (81 of 100) agreed/strongly agreed the feedback was higher quality, 82.4% (84 of 102) higher quantity, and 85.4% (88 of 103) more specific. When asked about the new feedback approach, 77.9% (109 of 140) agreed/strongly agreed it was structured well, 93.6% (131 of 140) said they planned to reflect on their feedback, and despite the time delay, 32.1% (45 of 140) believed they could identify some residents (Table 2).

Table 2

Faculty Responses to New Feedback Approach (Survey 2)


Comparing Content of Previous vs New Feedback

When comparing the 4 specific types of feedback, faculty identified differences between prior feedback and the new feedback in the percentages of praise and negative constructive feedback. Faculty reported a decrease in praise (median 50.0% [IQR 35.0-75.0%] vs median 35.0% [IQR 20.0-50.0%], P<.001) and an increase in negative constructive feedback (median 5.0% [IQR 0.0-15.0%] vs median 20.0% [IQR 10.0-25.0%], P<.001; Table 3).

Table 3

Content of Previous Versus Current Feedback (Survey 1 and Survey 2)


When asked about the feedback content compared to prior feedback, more faculty agreed/strongly agreed the new feedback focused on teaching (92.9% [130 of 140] vs 75.9% [101 of 133], P<.001), clinical knowledge (72.1% [101 of 140] vs 51.9% [69 of 133], P=.001), leadership skills (67.9% [95 of 140] vs 43.6% [58 of 133], P<.001), and time management (72.1% [101 of 140] vs 35.3% [47 of 133], P<.001). However, fewer responded that the new approach focused on interactions with patients (47.1% [66 of 140] vs 70.7% [94 of 133], P<.001). More faculty were confident that the feedback provided via the new approach was truthful compared with prior feedback (62.9% [88 of 140] vs 7.5% [10 of 133], P<.001).

Faculty Beliefs About Feedback

Faculty agreed/strongly agreed the new feedback was more valuable than prior written feedback (94.3% [132 of 140] vs 67.1% [49 of 73], P<.001) and a higher proportion agreed the new feedback would likely lead to a change in their teaching behaviors (82.9% [116 of 140] vs 50.7% [37 of 73], P<.001; online supplementary data Table).

Nearly all faculty reported a belief that residents should provide feedback, which did not change with the new feedback (98.6% [138 of 140] vs 96.4% [135 of 140]). Over a third of faculty believed resident feedback is only valuable when given in a timely fashion, which did not change after receiving this time-delayed feedback (37.1% [52 of 140] vs 35.7% [50 of 140]). Faculty were split on whether a time delay is needed to obtain constructive feedback, and a sizable minority (41.4%, 58 of 140) felt the time delay reduced the value of the feedback. There were no differences between prior and new feedback in how the feedback made faculty feel (online supplementary data Table).

Response to New Feedback and 3 Months Later

Three months after receiving their feedback, most faculty still agreed the value of the feedback was high, but the percentage had decreased (94.3% [132 of 140] vs 84.3% [118 of 140], P=.01). There was a decrease in agreement they would change teaching behaviors (82.9% [116 of 140] vs 63.6% [89 of 140], P<.001). When asked how the feedback made them feel, there was a decrease in interest to improve their teaching (87.1% [122 of 140] vs 77.9% [109 of 140], P=.04) and an increase in frustration about their teaching role (12.9% [18 of 140] vs 23.6% [33 of 140], P=.02; Table 4).

Table 4

New Feedback Approach 3 Months Later (Comparison of Survey 2 and Survey 3)


Discussion

We explored a longitudinal approach to providing resident feedback to teaching faculty, sacrificing timeliness in exchange for anonymity, which resulted in more constructive and specific feedback. A majority of faculty reported that this new feedback was more valuable, truthful, and of higher quality than their prior experience and said it would lead them to change their teaching behaviors. More than a third noted the time delay decreased the value of the feedback. Faculty were divided on whether a time delay was necessary to obtain anonymous content, as a third believed they could still identify some residents.

This study echoes findings from others’ work that obtaining feedback from busy residents is challenging.13,14 Contributing factors include resident workload and time pressure, along with fear of repercussions for any feedback beyond praise.7,8,17,19,29 The business literature advises caution in providing upward feedback to bosses, and the higher education literature supports anonymity to protect learners.21,29-31 There are limited studies in graduate medical education (GME) regarding the best methods for evaluating faculty, including the impact of anonymity.8,11,13,18 One study evaluating non-anonymous feedback found some benefits, but noted that residents worried about consequences, and concluded that anonymous feedback must be available.8 With small clinical teams, anonymity is difficult to achieve in GME. Even with a significant time delay, faculty in this study believed they could still identify some residents.

Feedback Intervention Theory (FIT) suggests feedback is more effective when it avoids praise and criticism and focuses on specific behaviors.11,22-24,32 In our study, this method of collecting feedback shifted the balance from praise to constructive feedback, with faculty reporting that they would change their behavior. Another tenet of FIT is the trustworthiness of the source: to accept feedback, recipients need to believe it is honest and accurate.4,21,29 In an earlier study of non-anonymous feedback, faculty were concerned that residents “moderated the message.”8 In our study, nearly 63% of faculty were confident residents were more truthful, making the feedback potentially more actionable.33,34 Finally, the third tenet of FIT concerns how the recipient perceives and reacts to the feedback, especially emotionally. Our study asked faculty to categorize their perceptions of the feedback initially and again later, to determine whether processing the feedback had changed their views. Over time, faculty were less positive about the feedback, perhaps reducing its value, since feedback effectiveness is in large part determined by the recipient.

We observed that this method of collecting feedback required a considerable time investment, which may not be possible at other institutions. However, it had several additional benefits, including sending an implicit message that feedback about faculty was important and enabling real-time awareness of concerns. All residents agreed to meet with the APD.

This study has multiple limitations, including being a single-center study with its own feedback culture and practice. The feedback spanned 6 years, resulting in a significant time delay, which likely contributed to a reduction in its perceived value. Bias may have been introduced by notifying faculty with a letter explaining the new feedback process and contextualizing the content immediately before they compared this feedback to prior experiences. In addition, the time between receiving prior feedback and reading the new feedback varied widely for each faculty member, which also may have affected faculty perception. Because the residents’ feedback was typed verbatim by one individual, this may have introduced partiality into the process. Our survey design included agreement scales, which can introduce bias. While the survey response rate is greater than in other studies of physicians, the 17 faculty nonresponders may bias the results in unknown ways.35

The optimal time delay for ensuring anonymity but still providing valuable upward feedback to faculty remains unclear.36  Future research might investigate shorter time delays than those used in this study, assessing resident anonymity and comfort, feedback quality, and faculty perception of its value.

This approach to obtain resident feedback for teaching faculty resulted in more constructive feedback, which faculty perceived as valuable and planned to utilize. The anonymity achieved with a significant time delay likely resulted in improved content, but faculty were divided on whether the delay’s benefits justified the tradeoff.

The authors would like to thank the Massachusetts General Hospital internal medicine residency program and teaching faculty for their support and cooperation and the MGH Education Council for reviewing the methods and surveys.

1. Wilkerson L, Irby DM. Strategies for improving teaching practices: a comprehensive approach to faculty development. Acad Med. 1998;73(4):387-396.
2. Fleming M, Vautour D, McMullen M, et al. Examining the accuracy of residents’ self-assessments and faculty assessment behaviours in anesthesiology. Can Med Educ J. 2021;12(4):17-26.
3. Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. JAMA. 2006;296(9):1094-1102.
4. Eva KW, Armson H, Holmboe E, et al. Factors influencing responsiveness to feedback: on the interplay between fear, confidence, and reasoning processes. Adv Health Sci Educ Theory Pract. 2012;17(1):15-26.
5. Accreditation Council for Graduate Medical Education. ACGME Common Program Requirements (Residency).
6. Haydar B, Charnin J, Voepel-Lewis T, Baker K. Resident characterization of better-than- and worse-than-average clinical teaching. Anesthesiology. 2014;120(1):120-128.
7. Afonso NM, Cardozo LJ, Mascarenhas OA, Aranha AN, Shah C. Are anonymous evaluations a better assessment of faculty teaching performance? A comparative analysis of open and anonymous evaluation processes. Fam Med. 2005;37(1):43-47.
8. Dudek NL, Dojeiji S, Day K, Varpio L. Feedback to supervisors: is anonymity really so important? Acad Med. 2016;91(9):1305-1312.
9. Beckman TJ, Reed DA, Shanafelt TD, West CP. Impact of resident well-being and empathy on assessments of faculty physicians. J Gen Intern Med. 2010;25(1):52-56.
10. Dent MM, Boltri J, Okosun IS. Do volunteer community-based preceptors value students’ feedback? Acad Med. 2004;79(11):1103-1107.
11. Baker K. Clinical teaching improves with resident evaluation and feedback. Anesthesiology. 2010;113(3):693-703.
12. Schum TR, Yindra KJ. Relationship between systematic feedback to faculty and ratings of clinical teaching. Acad Med. 1996;71(10):1100-1102.
13. Cohan RH, Dunnick NR, Blane CE, Fitzgerald JT. Improvement of faculty teaching performance: efficacy of resident evaluations. Acad Radiol. 1996;3(1):63-67.
14. Litzelman DK, Stratos GA, Marriott DJ, Lazaridis EN, Skeff KM. Beneficial and harmful effects of augmented feedback on physicians’ clinical-teaching performances. Acad Med. 1998;73(3):324-332.
15. Risucci DA, Lutsky L, Rosati RJ, Tortolani AJ. Reliability and accuracy of resident evaluations of surgical faculty. Eval Health Prof. 1992;15(3):313-324.
16. Cox SS, Swanson MS. Identification of teaching excellence in operating room and clinic settings. Am J Surg. 2002;183(3):251-255.
17. Ramani S, Lee-Krueger RCW, Roze des Ordons A, et al. Only when they seek: exploring supervisor and resident perspectives and positions on upward feedback. J Contin Educ Health Prof. 2022;42(4):249-255.
18. Guerrasio J, Weissberg M. Unsigned: why anonymous evaluations in clinical settings are counterproductive. Med Educ. 2012;46(10):928-930.
19. Daberkow DW 2nd, Hilton C, Sanders CV, Chauvin SW. Faculty evaluations by medicine residents using known versus anonymous systems. Med Educ Online. 2005;10(1):4380.
20. Mann K, van der Vleuten C, Eva K, et al. Tensions in informed self-assessment: how the desire for feedback and reticence to collect and use it can conflict. Acad Med. 2011;86(9):1120-1127.
21. Wachtel H. Student evaluation of college teaching effectiveness: a brief review. Assess Eval High Educ. 1998;23(2):191-212.
22. Dowding D, Merrill J, Russell D. Using feedback intervention theory to guide clinical dashboard design. AMIA Annu Symp Proc. 2018;2018:395-403.
23. Brown B, Gude WT, Blakeman T, et al. Clinical performance feedback intervention theory (CP-FIT): a new theory for designing, implementing, and evaluating feedback in health care based on a systematic review and meta-synthesis of qualitative research. Implement Sci. 2019;14(1):40.
24. Kluger AN, DeNisi A. The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol Bull. 1996;119(2):254-284.
25. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381.
26. Harris PA, Taylor R, Minor BL, et al. The REDCap consortium: building an international community of software platform partners. J Biomed Inform. 2019;95:103208.
27. Geraghty S. Types and sources of feedback in the workplace. talkdesk.
28. Canavan C, Holtman MC, Richmond M, Katsufrakis PJ. The quality of written comments on professional behaviors in a developmental multisource feedback program. Acad Med. 2010;85(suppl 10):106-109.
29. Fluit CV, Bolhuis S, Klaassen T, et al. Residents provide feedback to their clinical teachers: reflection through dialogue. Med Teach. 2013;35(9):e1485-e1492.
30. Gallo A. How to give your boss feedback. Harvard Business Review. Published March 24, 2010. Accessed July 30, 2024. https://hbr.org/2010/03/how-to-give-your-boss-feedback
31. Winter J. Giving feedback to your boss—like a boss. Forbes. February 18, 2013.
32. Maker VK, Curtis KD, Donnelly MB. Faculty evaluations: diagnostic and therapeutic. Curr Surg. 2004;61(6):597-601.
33. Yarris LM, Fu R, LaMantia J, et al. Effect of an educational intervention on faculty and resident satisfaction with real-time feedback in the emergency department. Acad Emerg Med. 2011;18(5):504-512.
34. Kornegay JG, Kraut A, Manthey D, et al. Feedback in medical education: a critical appraisal. AEM Educ Train. 2017;1(2):98-109.
35. Kellerman SE, Herold J. Physician response to surveys. A review of the literature. Am J Prev Med. 2001;20(1):61-67.
36. Gonzalo JD, Heist BS, Duffy BL, et al. Content and timing of feedback and reflection: a multi-center qualitative study of experienced bedside teachers. BMC Med Educ. 2014;14:212.

The online supplementary data contains instructions for faculty, an example of resident comments about an attending, further data from the study, the surveys used in the study, and a visual abstract.

Funding: A small grant was obtained from the Massachusetts General Hospital Department of Medicine’s Center for Educational Innovation and Scholarship.

Conflict of interest: The authors declare they have no competing interests.
