Variables in cardiology fellowship applications have not been objectively analyzed against applicants' subsequent clinical performance. We investigated possible correlations in a retrospective cohort study of 65 cardiology fellows at the Mayo Clinic (Rochester, Minn) who began 2 years of clinical training from July 2007 through July 2013. Application variables included the strength of comparative statements in recommendation letters and the authors' academic ranks, membership status in the Alpha Omega Alpha Honor Medical Society, awards earned, volunteer activities, United States Medical Licensing Examination (USMLE) scores, advanced degrees, publications, and completion of a residency program ranked in the top 6 in the United States. The outcome was clinical performance as measured by a mean of faculty evaluation scores during clinical training.
The overall mean evaluation score was 4.07 ± 0.18 (scale, 1–5). After multivariable analysis, evaluation scores were associated with Alpha Omega Alpha designation (β=0.13; 95% CI, 0.01–0.25; P=0.03), residency program reputation (β=0.13; 95% CI, 0.05–0.21; P=0.004), and strength of comparative statements in recommendation letters (β=0.08; 95% CI, 0.01–0.15; P=0.02), particularly in letters from residency program directors (β=0.05; 95% CI, 0.01–0.08; P=0.009).
Objective factors to consider in the cardiology fellowship application include Alpha Omega Alpha membership, residency program reputation, and comparative statements from residency program directors.
The quality of patient care that academic cardiology practices provide depends on recruiting excellent fellows. Fellows often become faculty members at the institutions where they train and thus influence the future of academic cardiology. The recruitment process is crucial but subjective, often because selection committees lack strong objective evidence for evaluating candidates.
Investigators have studied some aspects of application data to predict subsequent performance. Low Medical College Admission Test (MCAT) scores have correlated with adverse disciplinary action by state licensing boards,1 and MCAT scores correlate with medical school clerkship grades and United States Medical Licensing Examination (USMLE) scores.2–4 The USMLE scores appear to correlate with performance during surgical internship.5 Medical school grades have predicted internship performance in various specialties.6,7 Among internal medicine residency candidates, positive statements in recommendation letters comparing applicants with their peers are associated with professionalism during internship.8 Useful data are available to guide candidate selection for general surgery and surgical subspecialty programs9–11; however, whether these findings can be extended to other subspecialty fellowships is unclear. Given the limited research into best educational practices in cardiology12 and the lack of data to predict clinical performance among applicants for cardiology fellowships, improving cardiology training necessitates collecting data specific to cardiology fellowship applicants.13,14
To determine whether data collected from standard cardiology fellowship applications can be used to predict clinical performance during training, we conducted a retrospective analysis of cardiac fellows from our institution.
Study Population and Methods
We conducted a retrospective cohort study of 7 classes of cardiology fellows who began 2 years of core clinical training from July 2007 through July 2013 at the Mayo Clinic Cardiovascular Diseases Fellowship program (Rochester, Minn), an accredited general cardiovascular program. Annually, 8 or 9 fellows entered through the National Resident Matching Program, and 1 to 3 fellows joined through research pathways sponsored by the National Institutes of Health or our institution's Clinical Investigator Program. All fellows completed 2 years of required clinical rotations in conjunction with research or subspecialty training.
All fellows who entered the program during the study period were eligible for inclusion. All data were internal, confidential, and deidentified for analytical purposes. Data on rejected applicants were excluded. This study received an exemption from the Mayo Clinic Institutional Review Board.
Some medical schools have no Alpha Omega Alpha Honor Medical Society (AΩA) chapter, so AΩA membership was a 3-level variable (yes, no, and “not offered”).
We analyzed 2 variables regarding the authors of each candidate's recommendation letters. First, we assigned a numerical value to the academic rank of each letter's author8 : full professor, 5; associate professor, 4; assistant professor, 3; instructor, 2; community physician, 1; and unknown or not stated, 0. When multiple authors signed a letter, we recorded the senior author's rank. We averaged these scores to determine a mean academic rank of the authors for each candidate. Next, we recorded whether each letter was written by the candidate's residency program director or by someone who had a different title.
We also determined the strongest comparative statement in each letter, defined as a phrase that directly compared a candidate with peers.8 Highly positive statements, such as “Dr. X is the best resident with whom I have worked,” received a rating of “most enthusiasm” and a score of 3. Statements such as “Dr. Y is among the best residents with whom I have worked” were rated “moderate enthusiasm” and scored 2. Statements such as “Dr. Z performs at or above the level of his or her peers” were rated “neutral enthusiasm” and scored 1. Letters without comparative statements scored 0. Quotations embedded within letters were excluded from analysis because they were not the writer's direct observations. We averaged the scores for each candidate's letters.
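The scoring and averaging described above can be sketched in a few lines of code. This is purely an illustrative restatement of the rubric, not the study's actual software (which is not described); the data values and function name are hypothetical.

```python
# Illustrative sketch of the comparative-statement scoring rubric.
# Scores follow the rubric in the text; the example data are hypothetical.
ENTHUSIASM_SCORES = {
    "most": 3,      # e.g., "the best resident with whom I have worked"
    "moderate": 2,  # e.g., "among the best residents with whom I have worked"
    "neutral": 1,   # e.g., "at or above the level of his or her peers"
    "none": 0,      # no comparative statement in the letter
}

def mean_statement_score(letter_ratings):
    """Average the strongest-comparative-statement scores across a
    candidate's recommendation letters."""
    scores = [ENTHUSIASM_SCORES[r] for r in letter_ratings]
    return sum(scores) / len(scores)

# A hypothetical candidate with four letters:
print(mean_statement_score(["most", "moderate", "none", "neutral"]))  # 1.5
```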
To maximize agreement, 3 authors (MWC, TJB, and KWK) reviewed letters, discussed ratings, and reached consensus. Because one author (MWC) was also a study subject, the other authors reviewed his letters, which were excluded from the initial calibration process.
Our primary outcome was clinical performance, from a composite of all clinical evaluations during the first 2 years of clinical cardiology training. Multiple faculty members rated each fellow on multiple variables after each clinical rotation (scale, 1–5). Ten variables were universal (Table II). Ratings on each variable from each evaluator were averaged for a total score. Scores from all evaluations that a fellow received were averaged into one score.
Continuous variables were reported as mean ± SD or as median and interquartile range; categorical variables were reported as number and percentage. Relationships between predictor and outcome variables were analyzed with use of simple and multiple linear regression. Variables with P <0.1 on univariable regression were entered into the multiple-regression analysis and were removed in turn until all remaining variables had P ≤0.05. Results of linear regression were reported as β coefficients with 95% CIs. Regression R2 values were used to compare the strength of association between different variables and models. Pairwise associations between multiple-regression variables were evaluated for collinearity. P <0.05 was considered statistically significant. Analyses were conducted with use of SAS version 9.4 (SAS Institute Inc.).
We found 67 fellows eligible for our study but excluded 2 because of incomplete applications. The remaining 65 fellows (mean age, 32 ± 4 yr; 41 men) had completed 18 different residency programs. Forty of the 43 fellows whose residency program was ranked in the top 6 by the Doximity Residency Navigator16 had completed the Mayo Clinic Internal Medicine Residency (Table III).
Of the 255 letters of recommendation, 62 (24%) had been signed by training program directors (Table IV). Three applications had no program director's letter. Three study investigators (MWC, TJB, and KWK) scored 12 letters on 3 candidates; initial agreement was satisfactory (W=0.95). Differences were reconciled, and the process was repeated for 16 letters on 4 candidates (W=0.96). The remaining 227 letters (on 58 candidates) were divided between the reviewers, who gave each letter one score.
The mean strength of statements comparing the learner with peers was 1 ± 1.2. Of the 255 letters, 131 (51%) had no comparative statement, including 28 of 62 program director letters (45%) and 103 of 193 letters from others (53%). Two applications had no comparative statements. The mean academic rank of authors was 3.6 ± 1.8, and full or associate professors authored 173 letters.
In total, 142 faculty completed 4,494 fellow evaluations. Of 27,941 evaluation items, 23 scored a 1 (0.1%), 293 scored 2 (1%), 4,009 scored 3 (14.3%), 16,830 scored 4 (60.2%), and 6,786 scored 5 (24.3%) (mean, 4.07 ± 0.18). Scores of the 10 universal evaluation items had a Cronbach α of 0.98.
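The Cronbach α reported above summarizes the internal consistency of the 10 universal evaluation items. As a hypothetical illustration (the function and the miniature score matrix below are invented for demonstration), the statistic can be computed directly from an item-score matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical evaluations: 4 fellows rated on 3 items (scale, 1-5).
scores = [[4, 4, 5],
          [3, 3, 3],
          [5, 5, 5],
          [4, 4, 4]]
print(round(cronbach_alpha(scores), 2))  # 0.96
```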
Univariable analysis revealed that AΩA membership, completion of a top-6 residency program, and strong comparative statements (particularly from program directors) were significantly associated with the primary outcome (Table V). The lack of association between academic rank and evaluation scores persisted after excluding the 16% of letters in which no academic rank was specified.
AΩA Status. Among 25 fellows who did not complete a Mayo Clinic residency, 11 had had no AΩA opportunity. Three of the remaining 14 were AΩA members (mean evaluation score, 4.14 ± 0.12), and 11 were not (mean score, 3.98 ± 0.17) (P=0.03).
Multivariable analysis revealed that AΩA status, training at a top-6 residency program, and strong comparative statements were independently associated with evaluation scores (Table VI). Fellows who had not achieved AΩA membership when it was available had independently lower scores than did those who achieved membership or who lacked the opportunity. The association between strong comparative statements and the primary outcome remained significant when only letters from program directors were included. Separate analyses revealed no collinearity between independent variables in the multivariable analysis.
Our findings identify factors objectively associated with clinical performance during cardiology fellowship and have implications for selecting candidates for fellowship programs.
Research productivity during residency has been associated with future research productivity.21 However, trainees who heavily focus on research may neglect clinical obligations.22 In a small study of internal medicine residents, no positive correlation was found between prior scholarly productivity and clinical performance.23 In contrast, results from a large study of residents at our institution identified a positive association.24 The absence of similar findings in the current study may be related to small sample size, so future investigations should involve a larger cohort of cardiology fellows and multi-institutional designs.
Our finding of no association between clinical performance and advanced degrees, awards, volunteer activities, and standardized test scores does not mean that these factors are unimportant; for example, USMLE scores have been associated with performance in later training5,25 and with later performance on standardized tests.11,26,27 The time between USMLE testing and cardiology fellowship (often ≥3 yr) and the relatively narrow range of USMLE scores perhaps restricted our observations, and we did not compare USMLE scores with performance on the cardiology in-training examination.28 Study results from other specialties suggest that USMLE scores can predict good performance on in-training or board examinations,26,27,29 so studies of cardiology fellows are warranted.
Completing a top-6 internal medicine residency program predicted strong clinical performance.16 The methodology underlying Doximity rankings is similar to that of the annual physician survey for the U.S. News & World Report rankings of best hospitals.30 Although perhaps not subject to rigorous academic scrutiny, the Doximity rankings are widely available and may influence candidates' perception of programs.31–33 In our study, the relationship between residency and performance during cardiology fellowship may validate the Doximity reputation scores. Possible associations between reputation rankings and recognized criteria for the quality of graduate medical education programs should be investigated.
The association between AΩA status and performance is somewhat surprising. It suggests that early academic excellence can predict performance later. In our study, the 30 cardiology fellowship applicants without AΩA opportunity (24 from international medical schools and 6 from Mayo Medical School) performed better than did applicants who failed to achieve membership at schools that had AΩA chapters. (Of note, some reputable medical schools decline AΩA participation.)
An association between AΩA status and residency program possibly confounds the association between AΩA status and fellowship performance. However, among fellows who did not complete the Mayo Clinic residency, the difference in evaluation scores between AΩA and non-AΩA members suggests that AΩA designation predicts performance independently of residency program. Larger studies are warranted.
The association between favorable comparative statements and subsequent clinical performance replicates our earlier findings that recommendation letters predict professionalism scores among first-year internal medicine residents.8 Of note, the association in the current study pertains to global clinical performance variables related to professionalism (Table II), supporting the power of observation-based assessments in predicting subsequent professional behaviors.
The association between comparative statements and performance was strongest for letters signed by residency program directors (Table VI). Although program directors may have relatively little direct contact with residents, letters may contain information from (or may have actually been written by) associate directors who can judge relative performance on the basis of multiple meaningful clinical observations over time.34 In contrast, a writer's academic rank was less important, implying that a writer's academic reputation is less important than a writer's relationship with a resident.
Fellowship candidates need not tailor their applications to match the predictors identified in our study. Even though our findings objectify application analysis, other variables influence fellowship performance, and selection committees should continue to evaluate applications on their own merits.
Our single-institution study evaluated relatively few fellows. Regardless, the Mayo Clinic Cardiovascular Diseases Fellowship is one of the largest in the United States, making it suitable for this research from the perspective of sample size and representation of a typical program. Furthermore, our independent variables are used by other cardiology training programs when selecting candidates. Our findings should be validated in multi-institutional studies that include a diverse group of cardiology trainees and fellowship programs.
Forty of our 65 fellows had completed the Mayo Clinic Internal Medicine Residency Program (6th in the 2015 Doximity rankings) and may have excelled during fellowship because of familiarity with our institution. Our findings will probably pertain to cardiology fellowship programs that match many internal candidates.
We applied the 2015 Doximity rankings to fellows who began our program from 2007 through 2013. Because substantial year-to-year variability in the rankings is unlikely and historical rankings were not available, we think that applying these Doximity data to the period of our study was appropriate.
Our primary outcome, the composite of all clinical evaluations during the first 2 years of cardiology training, has not been formally validated; however, previous assessments with the same instrument items identified valid content, internal structure, and relations to other variables.8,18,19 Furthermore, the current study's outcome had strong internal consistency.17
Finally, our study was limited by the lack of suitable studies for comparison with other groups of cardiology trainees.12
To our knowledge, our investigation is the first to predict educational performance outcomes in cardiology fellows by using fellowship application data. We found that comparative statements in recommendation letters, membership in the AΩA Honor Medical Society, and completion of a top-6 residency program were associated with clinical performance in a large academic cardiology fellowship program. Fellowship selection committees may consider these variables when evaluating candidates for their programs.
This work was funded in part by the 2016 Mayo Clinic Endowment for Education Research Award.