The overall match rate in the National Resident Matching Program (NRMP) Main Residency Match, as well as the rates for applicant subgroups such as allopathic senior medical students and foreign-trained physicians, has remained remarkably stable over time. This suggests that the Match's competitiveness has not undergone any major changes.1 Nevertheless, nearly every specialty has experienced a steady increase in the number of residency programs to which each student applies, creating waste in the residency recruitment process. As a result, more resources are being expended without demonstrable improvement in Match outcomes.2 The increased financial cost of the application process and in-person interviews burdens applicants, while residency programs devote more faculty and staff resources to reviewing and interviewing a larger number of applicants. Given medical students' focus on recruitment and time away for interviews, the educational value of the fourth year—to ensure preparedness for residency—is eroding.3,4 Furthermore, the influx of applications may lead to program practices that rely on easily filterable markers of success, such as United States Medical Licensing Examination (USMLE) scores. Many applicants with genuine interest, and even good prospects of being a "good fit" based on other metrics, may be disadvantaged in the current system, because highly competitive applicants receive more interview invitations than they can accept and may hoard interview slots until late in the recruitment season. Students and residency program directors alike lament the phenomenon of application inflation, yet the numbers continue to grow.
Over the past 5 years, several solutions to application inflation and its concomitant waste of resources for students and programs have been proposed. Some have called for limits on the total number of applications per applicant, which presupposes that any competitive advantage or disadvantage would be felt evenly across all applicants.2,5 However, given applicants' consumer right to apply to as many programs as is financially feasible, this option has failed to gain traction. Further, students with special circumstances (eg, those participating in the couples match or those with an irregularity on their record, such as a failed class or a failed USMLE attempt) may require an exception to any limit or risk going unmatched. Otolaryngology programs now require a program-specific paragraph as part of each student's application, both to encourage students to consider their "fit" for a particular program and to deter application inflation. In the first year of implementation, the number of applications declined, but the number of programs that went unmatched increased, raising concerns about this approach.6
While a cap on applications is unlikely to become a reality, medical student groups have repeatedly called for programs to provide specific data about the characteristics of matched applicants, to offer guidance about an applicant's competitiveness.7 Otherwise, how is a student to know whether they are sufficiently competitive? Programs have been reluctant to publish these data, worrying that doing so may limit their opportunities to attract more competitive applicants. It appears that the undergraduate medical education (UME) and graduate medical education (GME) communities are at an impasse.
In this issue of the Journal of Graduate Medical Education, Whipple and colleagues propose a potential solution, one that would not meaningfully disadvantage any applicant while improving the chances that students and programs find the right match.8 Their proposed solution is elegantly simple: applicants would have the option to designate some programs (presumably a limited number) as "preferred programs." Using real numbers from the 2014 Otolaryngology Match, the authors demonstrate that, if medical students selected a subset of their applications as "preferred," the majority of students would receive more interview invitations per application than in the current system. Only the most competitive applicants, as evaluated by both "easy-to-assess" measures (USMLE scores, class rank, grade point average, Alpha Omega Alpha status, number of publications, geographic preference) and "hard-to-assess" measures (letters of recommendation, personal recommendations, personal statement, qualitative performance reviews, awards, volunteer activities, research interests), were ostensibly disadvantaged in this model, receiving fewer interview invitations. Yet because these most competitive students may receive more invitations than they can accommodate, a system in which students have the option to designate preferred programs would not actually disadvantage this group.
The authors have proposed an intriguing solution. In their scenario, by knowing which applicants have genuine interest, programs could devote more time to fully evaluating those applications when extending interview invitations. Program resources would be spent primarily on the review of applicants sincerely interested in the program, thereby minimizing the waste inherent in reviewing an applicant with less interest.
However, there are key limitations to the proposed solution. First, it is not clear how student preference designation would perform in less competitive specialties and residency programs. The simulation model was performed in one of the most competitive surgical subspecialties—most specialty matches do not fit the otolaryngology model. Second, it may be difficult to garner support from students and medical schools for this system. While the authors propose that students would not be required to reveal preferences, students may feel pressured to reveal preferences or risk losing an interview invitation to a desired program. Third, by limiting the number of programs a student can designate as preferred (which presumes that programs will fill most or all of their interview slots with competitive applicants from the "preferred program" pool), the authors propose a system with a virtual cap rather than an actual one. To date, an application cap has not been acceptable to students or schools; therefore, it is not clear that a virtual cap would be any more welcome. Finally, students may see this as yet another example in which they are being asked—essentially required—to disclose information while programs continue to resist publishing statistics on matched applicants. This imbalance will do nothing to improve trust and communication between students and the programs to which they are applying for further training.
The model proposed by Whipple and colleagues remains promising. To our knowledge, this is the first published argument to demonstrate that, when residency program leaders know which applicants have sincere interest in their program, more interview invitations will be extended per application for the vast majority of students. However, while novel and creative, the proposed approach remains a technical fix to the challenge of application inflation. Essentially, it would redistribute invitations, yet the reasoning that this constitutes improvement is somewhat circular: even if a greater share of applicants receives more invitations, it does not follow that a better fit will result, that applicants will be less stressed, or that applicants or programs will be more satisfied with the process. Nor will this approach enable students to substantially refocus their energies on the educational value of the fourth year of medical school.
We urge the medical education community to consider more radical solutions. The American Medical Association recently called for proposals from institutions to pilot disruptive innovations across the UME to GME continuum—innovations that will promote students' preparedness for training and well-being in residency. This may create favorable conditions for exploring new models of residency application, recruitment, and selection.
In the era of competency-based education and promotion, and of educational handoffs along the UME to GME continuum, perhaps the current residency application process is no longer the right "fit." What if programs engaged in mission-driven recruitment and sought to attract applicants who have achieved competencies specific to that mission? Programs could list one or more tracks, informed by stakeholder input and alumni outcomes, that reflect their strengths, along with the competencies they want incoming interns in each track to have achieved. An internal medicine program, for example, could choose to list any number of specializations: rural health, cardiology fellowship preparedness, health informatics, quality improvement, primary care, community practice, and so on. Students and their medical schools would submit educational portfolios that reflect program "fit," in contrast to the current model, which primarily emphasizes overall medical school performance and suitability for a core specialty. We could realize the vision of a true educational continuum, with an effective handoff across the UME to GME transition. Or, what if students were guaranteed residency positions at their institution (or a consortium of institutions) as part of their acceptance to medical school? They could then focus their energy on developing the competencies to be the best physicians they could be, rather than on building a track record that makes them competitive for the residency Match. Imagine that: medical students' tremendous capacities focused on becoming physicians rather than on being competitive residency applicants.