The COVID-19 pandemic has irrevocably changed academic medicine and medical education, including a complete overhaul of the residency and fellowship selection process. Applicant interviews are now virtual, and visiting electives, which often serve as “audition rotations,” have been cancelled or highly restricted by national bodies.1,2 Previously, we reviewed the topics of recruitment3 and interviews.4
In this guide, we explore findings from fields outside of medicine, such as the social sciences and business, to inform residency selection with a focus on file review and rank list determination.
We also summarize recent evidence on trainee assessment during candidate ranking, as program leaders may find it worthwhile to harness the most current evidence while reimagining their systems. Studies show that existing selection systems are prone to rater and systemic biases. In the era of the Me Too and Black Lives Matter movements, there is a compelling case to go beyond a simple digital conversion of existing processes and to approach this task as an opportunity to redesign a better system for the next generation of learners and patients.
Revisiting Selection and Ranking
Program directors use the Electronic Residency Application Service (ERAS) or Canadian Resident Matching Service (CaRMS) application data to attempt to predict which applicants will be most successful in their program. The COVID-19 pandemic has resulted in changes to the standard data typically used for rank order decisions. Traditionally, specialty-specific letters of recommendation and in-person interviews have played an important role in determining applicant rank order position.5–7 Applicants are likely to have fewer specialty-specific letters of recommendation in the 2020–2021 application cycle. Emergency medicine (EM) program directors have created a template for a non-specialty letter of reference to guide faculty in non-EM specialties regarding the specific characteristics and assessments prioritized in EM.8 Specialties that emphasize letters of recommendation written by faculty within the specialty may consider creating similar templates.
Programs may also want to obtain information more explicitly during interviews, given that interviews will now be conducted virtually and the time spent directly engaging with applicants may be truncated. Providing implicit bias training to interviewers and using a standardized, structured interview process (eg, multiple mini-interviews) may help programs maximize objectivity and gather information that is more likely to predict an applicant's future performance.9–13
Some programs have employed scoring rubrics or mathematical models to assist in the selection process.14–16 Evidence for the ability of various application data or summative scoring rubrics to predict future resident performance has been mixed.14,15,17,18 Previously used scoring rubrics may need to be restructured or reweighted, potentially by incorporating diversity as a metric, to mitigate existing structural bias within medicine. Expanding screening tools to capture a broader spectrum of candidate aptitudes may help create more diverse classes of residents. Additionally, programs may consider blinding applicant gender and first names, or excluding extracurricular activities, to promote equity during the ranking process.19 Extracurricular activities may provide insights into applicant interests but may also introduce bias related to socioeconomic status.19
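For programs that use quantitative rubrics, the reweighting and blinding described above can be illustrated concretely. The short Python sketch below is hypothetical: the domain names, weights, blinded fields, and diversity metric are placeholders invented for illustration, not values drawn from the cited studies, and a program would substitute its own consensus-derived rubric.

```python
# Hypothetical illustration only: the domains, weights, and blinding rules below
# are placeholders, not values drawn from the cited studies; a program would
# replace them with its own consensus-derived rubric.

# Identity fields a committee might blind before scoring to reduce bias
BLINDED_FIELDS = {"first_name", "gender"}

# Example domain weights; "reweighting" the rubric means revisiting these values
WEIGHTS = {
    "clinical_grades": 0.30,
    "letters_of_recommendation": 0.25,
    "interview_score": 0.25,
    "diversity_contribution": 0.20,  # holistic metric added to the rubric
}


def blind(application: dict) -> dict:
    """Return a copy of the application with blinded identity fields removed."""
    return {k: v for k, v in application.items() if k not in BLINDED_FIELDS}


def composite_score(application: dict) -> float:
    """Weighted sum of domain scores (each domain scored 0-10 by reviewers)."""
    blinded = blind(application)
    return sum(weight * blinded[domain] for domain, weight in WEIGHTS.items())


# Usage: rank a small, fictional applicant pool by composite score
applicants = [
    {"id": "A01", "first_name": "redacted", "gender": "redacted",
     "clinical_grades": 8, "letters_of_recommendation": 7,
     "interview_score": 9, "diversity_contribution": 6},
    {"id": "A02", "first_name": "redacted", "gender": "redacted",
     "clinical_grades": 7, "letters_of_recommendation": 9,
     "interview_score": 8, "diversity_contribution": 9},
]
ranked = sorted(applicants, key=composite_score, reverse=True)
print([applicant["id"] for applicant in ranked])  # highest composite first
```

In practice, the weights and blinded fields would be set, and periodically revisited, by the selection committee, and blinding could be applied before any human review rather than only at the scoring step.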
Reimagining Selection and Ranking in the Post-COVID-19 World
Although not a universal practice, some programs examine candidates' social media profiles as a screening or selection tool.20–23 Programs that have used social media review as part of the selection process have reported mixed results.20–23 Given the limitations that a virtual recruitment season places on programs, review of applicant social media pages may play an increased role in the selection process.
Alternative modalities for assessment may provide additional information to program directors. Some programs have incorporated assessment of technical skills or critical thinking into their application process.24,25 Having applicants participate in, and be assessed during, virtual educational sessions (eg, simulation, skills lab, case-based discussions) in real time or via video may provide additional insight into their capabilities. These assessments could take place prior to selection for interviews or during the interview process. Limited data suggest that incorporating a pre-interview assessment does not negatively influence applicant perceptions of the program.26 Additionally, considering assessments beyond the cognitive and skill-based realms, such as of personal and professional characteristics, may help program directors capture a more holistic representation of applicants. Several different assessments have been used in medical education to measure non-cognitive dimensions such as situational judgement, personality traits, and professional characteristics.11,13,27–30 Limited data suggest that these types of assessments predict rank list position and future performance and may be more predictive than traditional cognitive assessments.13,27,28,30–32 The table summarizes evidence-informed tactics for scoring and ranking application elements.
Considerations for Rank Order Determination
Because the transition to virtual experiences in the graduate medical education (GME) application process has been abrupt, processes are rife with uncertainty for applicants and programs. The methods for determining rank list order should minimize implicit bias, maximize diversity and inclusion, and prioritize the factors most likely associated with resident success, as defined by each program. The first priority of a program's rank list is to ensure that all positions are filled. Given the efficiency and cost savings of virtual interviews, students may apply to and interview at an even greater number of programs this year, making genuine interest harder to gauge. Program directors may therefore wish to implement processes to assess applicant interest in the program prior to interviewing, as well as increase the number and diversity of the candidates invited to interview.
With reductions and changes in the information available for rank list discussions, programs may benefit from reaching consensus early, prior to the interview season, on which applicant factors matter most to the program. This determination should take into account the program's current strengths, weaknesses, and vision. Explicitly naming these factors can help ensure that they are elicited during the selection process and are thus available for consideration during rank list development. Program directors can apply published strategies for minimizing implicit bias to rank meetings and consider adding diversity as a metric in ranking spreadsheets and rubrics.33
Finally, because selection committee activities may also be conducted virtually, it is important to consider how virtual interactions may change the usual consensus processes. Literature on group decision-making in clinical competency committees may be relevant to rank meeting discussions.34 Hauer and colleagues suggested that attention to group size, the group's understanding of its work, the role of the leader, information-sharing procedures, and time pressures is necessary to ensure optimal outcomes.34
Conclusions
The disruptions caused by the COVID-19 pandemic necessitate changes to the GME selection process. The limited availability of specialty-specific letters of recommendation and other traditional application elements, along with the elimination of in-person interactions, requires revision of screening rubrics and consideration of alternative assessments. Program leadership may want to create processes to assess applicant interest, reach early consensus on which applicant factors are prioritized and thereby assessed, and deliberately incorporate procedures that ensure equity.