Background

In 1999, the Accreditation Council for Graduate Medical Education (ACGME) Outcome Project began to focus on resident performance in the 6 competencies of patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice. Beginning in 2007, the ACGME began collecting information on how programs assess these competencies. This report describes the nature and extent of those assessments.

Methods

Using data collected by the ACGME for site visits, we use descriptive statistics and percentages to describe the number and types of methods and assessors that accredited programs (n = 4417) report using to assess the competencies. Observed differences among specialties, methodologies, and assessors are tested with analysis of variance procedures.

Results

Almost all programs (>97%) report assessing all of the competencies and using multiple methods and multiple assessors. Similar assessment methods and evaluator types were used consistently across the 6 competencies. However, there were some differences in the use of patients and family members as assessors: primary care and ambulatory specialties used them to a greater extent than other specialties did.

Conclusion

Residency programs are emphasizing the competencies in their evaluation of residents. Understanding the scope of evaluation methodologies that programs use in resident assessment is important for both the profession and the public, so that together we may monitor continuing improvement in US graduate medical education.

Editor's Note: The ACGME News and Views section of JGME includes data reports, updates, and perspectives from the ACGME and its review committees. The decision to publish the article is made by the ACGME.

The Accreditation Council for Graduate Medical Education (ACGME) initiated the Outcome Project in the late 1990s, thus furthering its mission to ensure and improve the quality of graduate medical education (GME) in the United States. Through extensive research and collaboration with the American Board of Medical Specialties and various constituencies and GME stakeholders, the Outcome Project delineated 6 domains that underlie medical competence: patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice.

In 1999, the Outcome Project began to focus the ACGME and the GME community on resident performance and the assessment of these competencies as indicators of residency program effectiveness. By July 2002, the requirements for all programs included language specific to each of these competencies, and in July 2007, the program requirements for each specialty included specialty-specific language for each of the 6 competencies.

During this phase of development, programs began to report using educational outcome data to improve individual resident and overall program performance. Simultaneously, the ACGME required programs to document the assessment tools used to evaluate resident performance in these 6 domains of clinical competence. This information represents the collective improvements programs are making toward outcomes-based learning and assessment.

The ACGME collects data for ongoing assessment and site visit preparation from all accredited residency programs and sponsoring institutions through the Accreditation Data System. In 2007, the ACGME began collecting quantitative data about the methods of evaluation used, as well as information about the evaluators, for assessing resident performance in each of the competencies. These data, along with several narrative questions about the evaluation process, make up the Evaluation Section of the Program Information Form, the program-level self-study document prepared for the site visit and subsequent accreditation review.

We analyzed these assessment and evaluator data for each of the competencies for both specialty (n = 2474) and subspecialty (n = 1943) programs that provided updated information within the last 2 years (July 1, 2008, to June 30, 2010). Programs selected from among 22 categories (plus “other”) of assessment methods and from among 19 categories (plus “other”) of evaluators. We explored these data using descriptive statistics (means and frequencies), and statistical comparisons used analysis of variance procedures.
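
To illustrate the kind of computation involved, the sketch below shows how descriptive statistics and a one-way analysis of variance could be produced from a flat, program-level extract. It is a minimal sketch under stated assumptions: the file name and column names (specialty_group, n_patient_family_evals) are hypothetical stand-ins rather than actual Accreditation Data System fields, and the code is not the analysis software used for this report.

    # Minimal sketch of the descriptive and analysis of variance comparisons
    # described above. The input file and column names are hypothetical, not
    # actual Accreditation Data System fields.
    import pandas as pd
    from scipy import stats

    # One row per program: a specialty grouping and, for example, the number of
    # patient/family evaluations of residents that the program reported.
    programs = pd.read_csv("program_assessments.csv")

    # Descriptive statistics (means and frequencies) by specialty group.
    print(programs.groupby("specialty_group")["n_patient_family_evals"].describe())

    # One-way analysis of variance comparing the specialty groups
    # (eg, primary care/ambulatory vs surgical vs hospital-based).
    groups = [g["n_patient_family_evals"].dropna()
              for _, g in programs.groupby("specialty_group")]
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"F = {f_stat:.2f}, P = {p_value:.4f}")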

Although the ACGME did not require subspecialty fellowships to adhere to the initial Outcome Project timeline, many of those programs have provided data on their assessments and evaluators. Given the more intense scrutiny of physician competency within specialties leading to initial board certification, however, we concentrate most of our results on those pipeline specialty programs. For a more complete discussion of pipeline specialty programs, see the analysis by Byrne and colleagues1 in this issue.

To date (as of June 30, 2010), almost one-half (49.9%, n = 4417) of all accredited pipeline and subspecialty programs have entered assessment methods and evaluators as part of their Program Information Form preparation. Almost all programs (>97%) report at least 1 assessment method and evaluator for each competency (Figure 1). The mean number of assessments per competency ranged from 2.7 (for systems-based practice) to 4.0 (for patient care); the mean number of evaluators ranged from 5.6 (practice-based learning) to 8.2 (professionalism).

Figure 1. Percentages of Programs Assessing Each of the Competencies

The remaining results are limited to the pipeline specialties, that is, those leading to initial board certification (n = 2474; 56.0% of all reporting programs). In analyses not shown here, patterns of competency evaluation did not differ between pipeline and subspecialty programs.

There is considerable variability in the types of assessment methods and evaluators used within the pipeline specialties. Figure 2 shows the percentages of programs using each type of assessment method: 90.9% and 81.1% of programs use direct observation and global assessment, respectively. Conversely, relatively few programs include standardized patient exams (12.3%) or reviews of drug prescribing procedures (7.8%) in their assessments. Figure 3 shows the percentages of programs using each type of evaluator. Most programs rely on program directors, attendings, and other faculty to evaluate their residents: 94.1%, 90.1%, and 82.3% of programs, respectively, report using these evaluators.

Figure 2. Percentages of Programs Using Each Assessment Method

Figure 3. Percentages of Programs Using Each Type of Evaluator

Despite this variability in assessment method and evaluator types, a consistent pattern emerges across the 6 competencies: programs tend to use the same methods and evaluators to a similar degree, regardless of the specific competency. Figure 4 presents the percentage of programs using each assessment method across the 6 competencies. These data make clear that global assessment and direct observation, used by approximately 67% of all programs (and, to a lesser extent, multisource assessments), are programs' most common choices of assessment method, regardless of competency.

Figure 4. Percentages of Programs Using Each Method Type, by Competency

It is not surprising that approximately 74% of programs report using the in-training examination to assess medical knowledge, while few programs (approximately 10%) use it to assess the other competencies. Approximately 28% of programs use patient surveys to assess 4 of the competencies (communication skills, patient care, practice-based learning, and professionalism).

Figure 5 presents the percentages of programs using each type of evaluator for each of the competencies. The patterns are similar to those for the assessment methods: programs rely heavily on a few kinds of evaluators (program directors and faculty are used by at least 70% of programs) to assess all of the competencies.

Figure 5. Percentages of Programs Using Each Evaluator Type, by Competency

However, a few evaluator types are used more heavily for certain competencies than for others. For example, programs use patients and family members to assess interpersonal and communication skills, professionalism, and patient care to a greater extent than the other competencies.

We further explored this source of variability by comparing the percentages of programs in each specialty using patients and family members as evaluators; Figure 6 shows these data. There are significant differences by type of specialty. Specifically, programs in the primary care specialties (family medicine, internal medicine, pediatrics, and obstetrics and gynecology) use patients and family members as evaluators to a greater extent than do programs in the surgical or hospital-based specialties, F(2,1453) = 2.95, P = .05. However, this finding may be due, in part, to the significantly greater number of resident evaluations from patients and families done in those specialties, F(2,1453) = 18.23, P < .0001.

Figure 6. Percentages of Programs Using Patients or Family Members as Evaluators, by Specialty and Competency

Programs have integrated assessment of the competencies into their education using a variety of methods and assessors. However, our results show programs relying heavily on global assessment and direct observation methods, which may explain the difficulty in differentiating or separating the competencies in day-to-day clinical endeavors.2–4

This difficulty may also be a function of many programs' reliance on a few evaluator types (faculty and program directors). Faculty evaluators are important, and reliance on their assessments need not be detrimental to careful evaluation. Indeed, if thoughtfully done, evaluations from faculty are rich sources of multidimensional assessment.5

The preponderance of faculty evaluation may be an artifact of our data collection. The nomenclature used for evaluators (attending, faculty supervisor, faculty member) may be more specific than necessary, and programs may have selected all faculty types in the interest of completeness when reporting their data to the ACGME. In addition, the number of assessments could be inflated by the inclusion of both assessment and feedback tools in programs' Accreditation Data System submissions.

An area of assessment that does show variability across the competencies is the use of patient and family evaluations of resident performance. Programs' reported use of these evaluators is encouraging, as patient-centered care figures prominently in the recent external attention focused on resident competencies and preparedness to practice medicine independently.6 Although few studies have examined residents' impact on patient safety, several researchers have called for more integration of patient safety and quality improvement initiatives into residency program curricula, making the involvement of patients and families critical.7–10

An interesting finding in these data is the variability among specialties in using patients and family members to assess residents in the areas of patient care, professionalism, and communication skills. The larger percentages and greater numbers of assessments found in the primary care and ambulatory specialties may reflect the more sustained and repeated single-patient interactions those specialties allow. However, it is notable that approximately two-thirds of thoracic surgery and ophthalmology programs also reported using patient and family evaluation of residents in at least some areas, indicating that surgical disciplines have found ways to incorporate such evaluations into their assessments. All clinical specialty programs should be encouraged to involve patients and their families in assessment as the focus on resident outcomes continues.

Our analysis has several limitations. We have a relatively large amount of information on the use of evaluators and assessment methods; however, we do not have data on programs' perceptions of the relative merits of each. For example, programs may find some methods easy to use but not especially informative, or vice versa. Similarly, we have no data on the frequency or quality of the assessment methodologies. Finally, these data should be interpreted with caution, as information on assessment practices was self-reported by the programs to the ACGME. Although ACGME site visitors provide an external validity check on the existence of the measures and assessments programs use, data reports from the programs may not be entirely free from bias.

It is gratifying to observe that nearly 100% of pipeline programs (those leading to initial certification), as well as more than 97% of subspecialty programs, are actively assessing the competencies. This suggests that essentially all allopathic GME programs in the United States emphasize the competencies in their resident assessment. It reassures both the profession and the public that traditional elements of performance (medical knowledge, patient care), previously underemphasized elements (interpersonal and communication skills, professionalism), and the so-called new elements (practice-based learning and improvement, systems-based practice) are all facets of resident assessment in ACGME-accredited programs.

The similarity in the use of many of the assessment methods and evaluators suggests that the ACGME data collection instrument might be simplified. For example, asking about “faculty” rather than “attendings,” “faculty supervisors,” and “faculty members” may help reduce site visit paperwork and provide more succinct data to the Residency Review Committees. It may also permit more relevant summarization of information for feedback to the profession and the public. We will continue to explore these data to discover better ways to accurately and succinctly document programs' assessment of residents.

Understanding the range of evaluation methodologies used by residency programs, and the results of these assessments, in conjunction with the milestones of clinical competence currently under development at the ACGME, will help us ensure the effectiveness of our GME programs.

References

1. Byrne L, Holt KH, Richter T, Miller RS, Nasca TJ. Tracking residents through multiple residency programs: a different approach for measuring residents' rates of continuing GME in ACGME-accredited programs. J Grad Med Educ. 2010;2(4):xx–xx.
2. Huddle TS, Heudebert GR. Taking apart the art: the risk of anatomizing clinical competence. Acad Med. 2007;82(6):536–541.
3. Lurie SJ, Mooney CJ, Lyness JM. Measurement of the general competencies of the Accreditation Council for Graduate Medical Education: a systematic review. Acad Med. 2009;84:301–309.
4. Kogan JR, Holmboe ES, Hauer KE. Tools for direct observation and assessment of clinical skills of medical trainees: a systematic review. JAMA. 2009;302(12):1316–1326.
5. Ginsburg S, McIlroy J, Oulanova O, Eva K, Regehr G. Toward authentic clinical evaluation: pitfalls in the pursuit of competency. Acad Med. 2010;85(5):780–786.
6. Blumenthal D, Gokhale M, Campbell EG, Weissman JS. Preparedness for clinical practice: reports of graduating residents at academic health centers. JAMA. 2001;286(9):1027–1034.
7. Windish DM, Reed DA, Boonyasai RT, Chakraborti C, Bass EB. Methodological rigor of quality improvement curricula for physician trainees: a systematic review and recommendations for change. Acad Med. 2009;84(12):1677–1692.
8. Singh R, Naughton B, Taylor JS, et al. A comprehensive collaborative patient safety residency curriculum to address the ACGME core competencies. Med Educ. 2005;39(12):1195–1204.
9. ten Cate O, Scheele F. Competency-based postgraduate training: can we bridge the gap between theory and clinical practice? Acad Med. 2007;82(6):542–547.
10. Varkey P, Karlapudi S, Rose S, Nelson R, Warner M. A systems approach for implementing practice-based learning and improvement and systems-based practice in graduate medical education [published correction appears in Acad Med. 2009;84(6):694]. Acad Med. 2009;84(3):335–339.

Author notes

All authors are at the Accreditation Council for Graduate Medical Education (ACGME). Kathleen D. Holt, PhD, is Senior Analyst and Director of Special Projects, Applications and Data Analysis Department, and adjunct professor in the Department of Family Medicine at the University of Rochester; Rebecca S. Miller, MS, is Senior Vice President, Applications and Data Analysis Department; Thomas J. Nasca, MD, MACP, is the ACGME's Chief Executive Officer and Professor of Medicine at Jefferson Medical College.

We thank the Department of Applications and Data Analysis at the ACGME for collecting and ensuring the quality and accuracy of the graduate medical education data.

The ACGME provided support for Dr Holt, Ms Miller, and Dr Nasca for this research.

The funding body had no role in the design of the study; the collection, analysis, and interpretation of the data; the writing of the manuscript; or in the decision to submit the manuscript for publication.