Background

The Standardized Letter of Evaluation (SLOE) stratifies the assessment of emergency medicine (EM)-bound residency applicants. However, bias in the SLOE, particularly regarding race and ethnicity, remains underexplored.

Objective

This study aims to assess whether underrepresented in medicine (UIM) and non-UIM applicants are rated differently on SLOE components.

Methods

This was a cross-sectional study of EM-bound applicants across 3 geographically distinct US training programs during the 2019-2020 application cycle. Using descriptive and regression analyses, we examined the differences between UIM and non-UIM applicants for each of the SLOE components: the 7 qualifications of an EM physician (7QEM), global assessment (GA) rating, and projected rank list (RL) position.

Results

Out of a combined total of 3759 applicants, 2002 (53.3%) unique EM-bound applicants were included. UIM applicants had lower ratings for each of the 7QEM questions, GA, and RL positions. Compared with non-UIM applicants, only some of the 7QEM components ("Work ethic and ability to assume responsibility," "Ability to work in a team," and "Ability to communicate a caring nature") were associated with their SLOE. "Commitment to EM" correlated more with GA for UIM than for non-UIM applicants.

Conclusions

This study shows a difference in SLOE ratings, with UIM applicants receiving lower ratings than non-UIM applicants.

Underrepresented in medicine (UIM) students face multiple systemic barriers, including bias and discrimination.1,2  The Association of American Medical Colleges (AAMC) describes UIM as “racial and ethnic populations that are underrepresented in the medical profession relative to their numbers in the general population.”3  UIM students experience discrimination, stereotyping, and racial disparities in clinical grading and evaluation.4-8 

The Standardized Letter of Evaluation (SLOE) provides evaluative assessments of students9  critical to residency screening and selection,10,11  and is a composite of a grade, global assessment (GA), predicted rank list position (RL), and assessment of the 7 qualifications of an emergency medicine (EM) physician (7QEM). A SLOE template is provided as online supplementary data. As United States Medical Licensing Examination (USMLE) Step 1 transitions to pass/fail, the SLOE will likely carry greater weight in applicant selection.12,13  Prior research illustrates the presence of gender differences,14,15  but there are no data on racial or ethnic differences in SLOEs.

We performed a multi-institutional, cross-sectional, convenience sample study of SLOEs among 3 US EM residency programs (Rush University, Stanford University, and University of Florida–Jacksonville) through the Electronic Residency Application Service (ERAS) during the 2019-2020 application cycle. The programs represented distinct program lengths, types, and geographic regions. The academic year selected for study preceded COVID-19 and the transition of USMLE Step 1 to pass/fail. We defined UIM using the AAMC's initial definition of "racial groups of Black, Mexican-American, mainland Puerto Rican, and Native American (American Indian and natives of Alaska and Hawaii)" because ERAS allows filtering for these self-reported variables.3 When an applicant applied to multiple programs, only a single application was reviewed. Our theoretical orientation is post-positivist, and our theoretical framework is aligned with a post-colonial lens to examine the effect of race on modern structures.16

We included all applicants from Liaison Committee on Medical Education- or Commission on Osteopathic College Accreditation-accredited medical schools who applied to at least one of the 3 institutions.

Abstractors from each institution used a pre-piloted, standardized data abstraction tool to collect the following data from ERAS: AAMC number, self-identified gender and race/ethnicity, and medical school. For each SLOE, we collected the rating for all 7QEM questions, GA, and predicted RL position. Consistent with prior literature, ordinal values were assigned for GA, RL, and the 7QEM.17,18 The first 5 QEM questions used the following anchors: (1) "Below peers," (2) "At level of peers," and (3) "Above peers." The sixth QEM used (1) "More than peers," (2) "Same as peers," and (3) "Less than peers." The seventh QEM used (1) "Good," (2) "Excellent," and (3) "Outstanding." For GA and RL, students are assessed in comparison to other applicants as top 10% (4), top third (3), middle third (2), or lower third (1). Because each applicant included in the study had a single averaged rating for each SLOE component, the data were treated as continuous variables.17,18
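To make the coding scheme concrete, the following is a minimal sketch, not the study's actual Excel/SPSS workflow; the file layout, column names, and sample records are hypothetical. It shows one way the ordinal anchors could be mapped to numeric values and averaged to a single rating per applicant per component.

```python
import pandas as pd

# Hypothetical SLOE-level records; all column names and values are illustrative.
sloes = pd.DataFrame({
    "applicant_id":      ["A1", "A1", "A2"],
    "qem1":              ["At level of peers", "Above peers", "Below peers"],
    "global_assessment": ["Top third", "Top 10%", "Middle third"],
    "rank_list":         ["Top third", "Top third", "Lower third"],
})

# Ordinal values assigned as described in the text.
qem_map   = {"Below peers": 1, "At level of peers": 2, "Above peers": 3}
ga_rl_map = {"Lower third": 1, "Middle third": 2, "Top third": 3, "Top 10%": 4}

sloes["qem1_score"] = sloes["qem1"].map(qem_map)
sloes["ga_score"]   = sloes["global_assessment"].map(ga_rl_map)
sloes["rl_score"]   = sloes["rank_list"].map(ga_rl_map)

# Each applicant receives a single averaged rating per SLOE component,
# which is then treated downstream as a continuous variable.
per_applicant = sloes.groupby("applicant_id")[["qem1_score", "ga_score", "rl_score"]].mean()
print(per_applicant)
```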

We used a repeated measures analysis of variance to examine the dependent variable of 7QEM ratings with one between-subjects independent variable of UIM status to determine whether mean ratings differed between groups. All analyses were computed using SPSS 26 (IBM Corp, Armonk, NY). Data were recorded and coded using a Microsoft Excel spreadsheet. Further discussion of the analytic approach is included in the online supplementary data.
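Purely as an illustration of this analytic approach (the study itself used SPSS 26), a comparable mixed-design analysis, with the 7QEM question as the repeated factor and UIM status as the between-subjects factor, could be run in Python with the pingouin package. The file name, column names, and long-format layout below are assumptions, not the study's actual data structure.

```python
import pandas as pd
import pingouin as pg

# Assumed long format: one row per applicant per 7QEM question, with the
# applicant's averaged rating for that question and their UIM status.
long_df = pd.read_csv("qem_ratings_long.csv")  # hypothetical file
# expected columns: applicant_id, qem_question (1-7), rating, uim_status

aov = pg.mixed_anova(
    data=long_df,
    dv="rating",            # dependent variable: 7QEM rating
    within="qem_question",  # repeated factor: the 7 QEM questions
    subject="applicant_id",
    between="uim_status",   # between-subjects factor: UIM vs non-UIM
)
print(aov)  # F statistics, p-values, and partial eta squared (np2) per effect
```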

This study was granted an exemption from all 3 Institutional Review Boards.

Out of 3759 applicants, 3250 (86.5%) met the initial inclusion criteria. Of these, 1248 (38.4%) applicants were excluded. Exclusion criteria and demographics of all applicants meeting inclusion criteria can be found in the Figure. Our included applicants represented 58.8% (2002 of 3405) of all EM applicants in the 2019-2020 application cycle ERAS data.19 The included applicants contributed 5433 SLOEs to the data set, of which 4717 met inclusion criteria. A total of 716 SLOEs were excluded: 60 subspecialty SLOEs, 118 with incomplete data, 157 not written by program leadership, and 425 written by a letter writer who wrote fewer than 10 SLOEs the previous year, with 44 SLOEs meeting more than one exclusion criterion. Of the 4717 included SLOEs, 891 (18.9%) were from UIM applicants.

UIM applicants received lower mean 7QEM ratings (2.39 vs 2.45, η²=0.01), lower mean GA ratings (2.40 vs 2.59, η²=0.01), and lower mean RL positions (2.42 vs 2.59, η²=0.01) than non-UIM applicants. Consistent with prior research,17,18 GA and RL were also converted back into their anchor categories, and the differences in percentages were statistically significant. The Table displays the mean rating for each QEM and the effect size, reported as partial eta squared (η²).
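For readers unfamiliar with this effect size measure, partial eta squared is the ratio of the sum of squares for an effect to the sum of that effect's and the error sums of squares (a standard definition, not restated in the original methods):

$$
\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
$$

By common benchmarks, values near 0.01 indicate a small effect, consistent with the interpretation offered in the discussion below.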

While all 7QEM ratings had significant correlations with GA, the linear regression model revealed that only ratings on "Commitment to EM," "Ability to develop a differential," "Guidance needed," and "Prediction of success" were associated with GA for UIM applicants. An additional linear regression model showed that, while all 7QEM ratings were significantly correlated with RL, only ratings on "Ability to develop a differential," "Ability to work with a team," "Ability to communicate a caring nature," "Guidance needed," and "Prediction of success" were significantly associated with RL position for UIM applicants (Table).
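As a hedged sketch of this regression step (the study's exact SPSS model specification is not reproduced here, and the file and variable names are hypothetical), a multiple linear regression of GA on the 7QEM ratings within the UIM subgroup could be written as follows; the same form with the RL rating as the outcome would mirror the RL model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed applicant-level data: one averaged rating per SLOE component.
df = pd.read_csv("sloe_applicant_level.csv")  # hypothetical file
uim = df[df["uim_status"] == "UIM"]

# Regress global assessment on the seven QEM ratings (illustrative names).
model = smf.ols(
    "ga_rating ~ qem_commitment + qem_work_ethic + qem_differential + "
    "qem_teamwork + qem_caring + qem_guidance + qem_prediction",
    data=uim,
).fit()
print(model.summary())  # coefficients show which QEM ratings predict GA
```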

To our knowledge, this is the first study to specifically assess for differences in SLOE scoring between UIM and non-UIM applicants across GA, RL, and all 7QEM questions. We found a difference in SLOE ratings, with UIM applicants receiving lower ratings than non-UIM applicants. The effect sizes are small yet consistent across all findings and may represent systematic bias. These findings add to the growing literature recognizing UIM students' experience of pervasive bias and discrimination in medical education.4-8 With USMLE Step 1 transitioning to pass/fail,12,13 the SLOE may be vulnerable to bias and should be examined further.

This study is subject to the limitations inherent to cross-sectional research. The data are limited to select EM programs and may not reflect all EM SLOEs. Additionally, we defined UIM using the AAMC definition,3 although bias may also have affected applicants from other racial and ethnic groups who do not self-identify as UIM. While we identified differences in the SLOE domain ratings of UIM vs non-UIM applicants, the relevance of these findings remains unclear, as no data exist to date on how a specific rating affects residency ranking and Match success. In addition, a significant limitation is that we did not adjust for other application variables. Many potential sources of bias beyond structural racism may contribute to different ratings in the SLOE, and this study was not designed to elucidate the causal factors. Finally, our use of multiple linear regression analysis is exploratory and should be interpreted with caution.

Bias exists in different aspects of the residency application. Our work highlights differential ratings in the SLOE for UIM vs non-UIM applicants. Future work should investigate how these differences impact the ranking of applicants. Additionally, more work is needed to compare the SLOE with other objective evaluative tools concerning racial and ethnic equity in grading across different specialties. Finally, future studies could assess the impact of diversity training for faculty on SLOE scoring.

Our study evaluated the relationship between race/ethnicity and SLOE components. We found differences in the overall ratings of the 7QEM questions, the predictors of GA, and the anticipated RL position on SLOEs for UIM and non-UIM applicants.

1. Grimm LJ, Redmond RA, Campbell JC, Rosette AS. Gender and racial bias in radiology residency letters of recommendation. J Am Coll Radiol. 2020;17(1 Pt A):64-71.
2. Capers Q, Clinchot D, McDougle L, Greenwald AG. Implicit racial bias in medical school admissions. Acad Med. 2017;92(3):365-369.
3. Association of American Medical Colleges. Underrepresented in Medicine Definition. Accessed November 4, 2021.
4. Williams DR, Mohammed SA. Discrimination and racial disparities in health: evidence and needed research. J Behav Med. 2009;32(1):20-47.
5. Lewis TT, Cogburn CD, Williams DR. Self-reported experiences of discrimination and health: scientific advances, ongoing controversies, and emerging issues. Annu Rev Clin Psychol. 2015;11:407-440.
6. Low D, Pollack SW, Liao ZC, et al. Racial/ethnic disparities in clinical grading in medical school. Teach Learn Med. 2019;31(5):487-496.
7. Lee KB, Vaishnavi SN, Lau SKM, Andriole DA, Jeffe DB. "Making the grade:" noncognitive predictors of medical students' clinical clerkship grades. J Natl Med Assoc. 2007;99(10):1138-1150.
8. Ross DA, Boatright D, Nunez-Smith M, Jordan A, Chekroud A, Moore EZ. Differences in words used to describe racial and gender groups in Medical Student Performance Evaluations. PLoS One. 2017;12(8):e0181659.
9. Keim SM, Rein JA, Chisholm C, et al. A standardized letter of recommendation for residency application. Acad Emerg Med. 1999;6(11):1141-1146.
10. Love JN, Smith J, Weizberg M, et al. Council of Emergency Medicine Residency Directors' standardized letter of recommendation: the program director's perspective. Acad Emerg Med. 2014;21(6):680-687.
11. Negaard M, Assimacopoulos E, Harland K, Van Heukelom J. Emergency medicine residency selection criteria: an update and comparison. AEM Educ Train. 2018;2(2):146-153.
12. Neville AL, Smith BR, de Virgilio C. USMLE Step 1 scoring system change to pass/fail-an opportunity for change. JAMA Surg. 2020;155(12):1093-1094.
13. Cangialosi PT, Chung BC, Thielhelm TP, Camarda ND, Eiger DS. Medical students' reflections on the recent changes to the USMLE Step Exams. Acad Med. 2021;96(3):343-348.
14. Miller DT, McCarthy DM, Fant AL, Li-Sauerwine S, Ali A, Kontrick AV. The Standardized Letter of Evaluation narrative: differences in language use by gender. West J Emerg Med. 2019;20(6):948-956.
15. Li S, Fant AL, McCarthy DM, Miller D, Craig J, Kontrick A. Gender differences in language of Standardized Letter of Evaluation narratives for emergency medicine residency applicants. AEM Educ Train. 2017;1(4):334-339.
16. Varpio L, Paradis E, Uijtdehaage S, Young M. The distinctions between theory, theoretical framework, and conceptual framework. Acad Med. 2020;95(7):989-994.
17. Mannix A, Monteiro S, Miller D, et al. Gender differences in emergency medicine Standardized Letters of Evaluation. AEM Educ Train. 2022;6(2):e10740.
18. Miller DT, Krzyzaniak S, Mannix A, et al. The Standardized Letter of Evaluation in emergency medicine: are the qualifications useful? AEM Educ Train. 2021;5(3):e10607.
19. Association of American Medical Colleges. ERAS Statistics. Accessed March 7, 2021.

The authors would like to thank Dr. CJ Foote for his statistical analysis support on the preliminary data of this manuscript. We also thank Dr. Stefanie Sebok-Syer for her expertise and critical reading of the manuscript.

Author notes

Editor's Note: The online version of this article contains the SLOE template and further analysis from the study.

Funding: Dr. Gottlieb reports funding from the Centers for Disease Control and Prevention, Emergency Medicine Foundation, and Society for Academic Emergency Medicine.

Competing Interests

Conflict of interest: The authors declare they have no competing interests.

This work was previously presented at CORDAA22, San Diego, CA, March 28, 2022; Society for Academic Emergency Medicine Western Regional Meeting, Stanford, CA, April 1, 2022; Society for Academic Emergency Medicine New England Regional Meeting, Providence, RI, April 6, 2022; and SAEM22, New Orleans, LA, May 12, 2022.
