Galen Eminence, MD
Dean of Students
The Medical School
Dear Dr Eminence:
The leaves are starting to turn outside, and my computer is filling rapidly with applications from your graduates who want to become anesthesiologists. Of course, I'm grateful for their interest. Like any program director, I stand before 2 sets of decisions. Which of these applicants will I invite to visit our program, and once those interviews are over, how do I fit these multifaceted, talented young physicians into the 1-dimensional hierarchy of our National Resident Matching Program match list?
Although the evidence supporting my belief is thin,1 I assume that students' performances in your medical school will predict their success during residency. That sentiment is widespread. Multiple surveys of program directors have shown that programs' decisions to interview applicants are heavily influenced by medical school performance, particularly in the required clinical clerkships.2,3
For that reason, I am particularly interested in reading the summative evaluations in the applicants' Medical Student Performance Evaluations (MSPEs)—their Dean's Letters. Every year, however, I'm puzzled by your and other schools' reluctance to report comparative data about the performance of your students. For example, you wrote:
Although the [Association of American Medical Colleges] recommends that the MSPE contain a comparative summative performance statement, [the] School of Medicine's evaluation system was not designed to provide a specific comparative performance ranking of our students. For that reason, we encourage review of this evaluation letter in its entirety.
Some schools are like yours and refuse outright to produce comparative data. Others cloak their data in a ritual familiar to any program director: The last paragraph contains some verbal descriptor—Outstanding! Superior! Excellent!—which is matched to a histogram buried in the tables at the bottom of the letter. Schools' distaste for reporting this information is palpable.
One possible reason for this reluctance is obvious: Medicine is a cooperative endeavor. One surgeon's excellent technique does nothing to subtract from the care given by the surgeon in the adjacent operating room. The habit of cooperation starts in medical school, so students are generally graded against objective criteria and not against each other. The hospital would be a sadder and lesser place were it given over to the bald competition common in other areas of graduate study.
The residency match, however, is competitive. It cannot be otherwise. A place in my residency occupied by one successful applicant can't simultaneously be given to another. Accordingly, our match lists are ranked, and applicants are explicitly compared with each other. The reality of these comparisons is uncomfortable: I'm quite confident that the applicant at the very bottom of our rank list will become a fine anesthesiologist. Perhaps these lists dishonor the cooperative nature of medicine, but the practical necessity of the match makes them inevitable.
Reflecting this discomfort, many Dean's Letters, like yours, urge program directors to read the letter in its entirety and to evaluate the applicant as a whole. You rightly make the point that medicine comprises more than examination performance and that your students are greater than the sum of their grades. This holistic viewpoint doesn't change in residency; residents' abilities as physicians are not summarized by their latest in-training examination scores.
That logic, however, does not eliminate the need to assess applicants competitively. Rather, it suggests that your comparative assessments should be based broadly, considering all facets of a student's progress, not just his or her scores. These comparative rankings should be made by medical schools, and not by program directors, for 2 reasons.
First, the narrative comments included in the Dean's Letters are, in reality, recommendations and not evaluations. Most faculty members, typing their end-of-rotation evaluations, are quite aware that their comments may end up on my desk next autumn. Who would want to harm the prospects of his or her own student with an ill-chosen word? The result, predictably, is mush. You described one of your graduates as an “outstanding” student with an “above-average fund of knowledge,” destined to be a “fantastic house officer” … but the graduate nonetheless appears to fall near the bottom of the class. Is this individual truly outstanding? Simply competent? Less? I have no idea. This can't possibly be the best way to allow residency selection committees to get to know applicants.
Second, it is impossible for me as a program director to have as complete a view of a medical student's abilities as you enjoy. For 3½ years, you have been privy to a wealth of information: examination scores, performance in workshops, assessments of professionalism, in-person meetings, and evaluations from faculty, residents, and other students. If you cannot make a comparative assessment with 3½ years' worth of experience, how can I possibly do a better job with 4 pages of information on my computer?
I suspect that your reluctance to issue explicit rankings stems from your worry about harming the prospects of the students in the bottom part of their class. You probably worry that branding an applicant with a scarlet “fourth quartile” will harm the chances of what may be a very competent student who merely falls near the bottom of a high-achieving cohort. That concern, although well intentioned, is misplaced. First, program directors recognize the high competence threshold that all medical students must clear. Second, the only way to meaningfully hurt students' applications is to change the relative numbers of graduates and available residency positions. As long as that ratio doesn't change, better information about applicants can only make the match process more accurate and more efficient. As the number of American medical graduates grows, the need to make the match process as equitable as possible can only become more acute.
Unfortunately, the toll of this imperfect communication falls on the medical students. Faced with a Dean's Letter with no ranking or context, comprising only a list of free-floating adjectives, many program directors will make their decisions based on the data that they do have. Usually, that means a disproportionate reliance on standardized test scores: exactly the outcome that you and your school would like to avoid.
So please, Dean Eminence: rank your students. Use all of the criteria that you find relevant: humanism, professionalism, knowledge, anything, but provide some sort of ranking for residency programs to use as a basis for comparison. When you don't, we have to do it ourselves, and we certainly won't do it as well as you can. Give us the rankings. We can handle the truth.
Sincerely yours,
Richard Benzinger, MD, PhD
Program Director
Richard Benzinger, MD, PhD, is Assistant Professor of Anesthesiology and Residency Program Director, Washington University School of Medicine.