This report reflects the findings of the first set of visits of the Clinical Learning Environment Review (CLER) Program1,2 that were conducted from September 30, 2012 to March 20, 2015. These initial visits were aimed at collecting a comprehensive base of evidence on how the nation's clinical learning environments (CLEs) approach the six CLER focus areas. Specifically, the CLER Program explored issues related to the five key questions that were posed to the program at its outset.3 It also sought to enrich understanding of the nation's CLEs and establish a benchmark for subsequent site visits. Owing to these broad objectives, the CLER Program employed a mixed methods approach, utilizing both quantitative and qualitative information gathering and analysis methods. It is the collective results of this effort that informed the aggregated findings in this report.

In 2012 there were 696 ACGME-accredited Sponsoring Institutions (SIs) with 1,767 participating sites,a which are the hospitals, medical centers, and ambulatory units where graduate clinical training takes place. This report examined 297 CLEs that had three or more ACGME-accredited core specialty programs. The CLEs visited were affiliated with 297 SIs that included 8,878 residency programs (92% of all ACGME programs) and 111,482 residents and fellows (90% of all residents and fellows in ACGME-accredited programs).b

Many of the SIs had several institutions that served as participating sites. Due to resource limitations, it was necessary to select one clinical site for each CLER visit. The selection was based on two factors: (a) the CLE that served the largest possible number of programs for that SI and (b) the availability of both the Designated Institutional Official (DIO) and the Chief Executive Officer (CEO) of that CLE to be present at the opening interview and the exit interview. If the preferred site could not be scheduled based on these selection factors, an alternative site was selected.

The CLER site visit protocol was designed as a structured visit with a schedule of events (see Figure 1).

The institutions were notified of the CLER visit at least 10 days prior to arrival. The relatively short notice was intended to maximize efforts to gather real-time information from those who would be interviewed.

The visits were conducted by teams of two to four site visitors over a period of two to three days. The visit length and number of site visitors varied primarily with the number of programs and the number of residents and fellows at the site. Each CLER visit team was led by a full-time salaried employee of the ACGME. Additional team members were either other ACGME staff or trained volunteers from the graduate medical education (GME) community.

The group sessions during each visit were always conducted in the same order: (1) an initial group interview with the CEO, members of the executive team (e.g., Chief Medical Officer, Chief Nursing Officer), the DIO, and a resident representative; (2) a short interview with patient safety and health care quality leadership; (3) a group interview with residents and fellows; (4) a group interview with faculty members; (5) a group interview with program directors; (6) a second interview with patient safety and health care quality leadership; and (7) an exit meeting with the CEO, members of the executive team, the DIO, and a resident representative. All interviews took place in a quiet location without disruption and did not exceed 90 minutes.

The purpose of the initial meetings with executive leadership and patient safety/quality leaders was to become familiar with the basic language and culture of the CLE's current activities in the six CLER focus areas. This information helped inform subsequent interviews and observations during the CLER visit.

The resident and fellow group interviews were conducted with six to 30 participants per session who were peer-selected across the programs. Resident attendance was mostly limited to those in postgraduate year (PGY) 2 or higher. Because the CLER visits were conducted year-round, it was recognized that many first-year residents would lack sufficient clinical experience to assess the learning environment, particularly in the early months of the PGY-1 year. Faculty members were chosen to broadly represent the residency and fellowship programs at the CLE, with six to 30 faculty members per interview group. Group sessions with program directors ranged from three to 30 participants and included the leaders of the ACGME-accredited core and fellowship programs at the institution; associate program directors were included when program directors were not available. For CLEs with more than 30 programs, two independent sets of interviews were conducted with residents and fellows, faculty members, and program directors, with no more than 30 participants per session.

Additionally, the CLER site visit team conducted a set of walking rounds, escorted by residents and fellows, to observe various patient floors, units, and service areas. The walking rounds enabled the site visitors to gather feedback from physicians, nurses, and other health care professionals in the clinical setting. There were at least three sets of walking rounds per institution, each 90 minutes in length. For larger institutions, a fourth round lasting 60 to 90 minutes was added.

Throughout each visit, the CLER team conducted huddles to discuss the information they had gathered. Later, they held a team meeting to synthesize their findings, reach consensus, and prepare both an oral report and a draft of the written narrative report. At the exit meeting, the team shared its oral report with executive leadership, which covered the site visitors' initial feedback on the six CLER focus areas. The written report, delivered approximately six to eight weeks after the visit, reflected the same topics but with a more comprehensive and detailed set of observations. Both the oral and written report were intended to provide formative information that would help the CLE leadership assess their practices in the six focus areas, inform resident and fellow training, and guide improvements in the CLE.

Survey Instruments

Group interviews were conducted using a structured questionnaire developed under the guidance of experts in GME and/or the six CLER focus areas. The questionnaires contained both closed- and open-ended questions. After initial validation by expert review, the instruments were field tested on five CLER site visits. At the conclusion of each of these visits, the items were refined as part of an iterative design process; with each iteration, items were reviewed and revised as necessary based on feedback from interviewees and interviewers.

Walking Rounds

The walking rounds were designed to facilitate random, impromptu interviews with residents, fellows, and other clinical staff across a number of clinical areas (e.g., inpatient and outpatient areas, emergency departments) where residents and fellows were trained based on the SI's ACGME-accredited specialty and sub-specialty programs.

The walking rounds were also designed to (1) triangulate, confirm, and cross-check findings from the group interviews and (2) glean new information on residents and fellows' experiences across the six focus areas. In conjunction with group interviews, the walking rounds provided important information that could either confirm or conflict with that gathered in group sessions. The result was a broader and more accurate understanding of the CLE than either method could provide alone.

CLER Site Visit Reports

Findings from each visit were synthesized in a written report. A formal template of the written report was developed and refined over time in the early stages of the program. The template was designed to assist the CLER site visit team in preparing both the oral and written reports and to ensure that each of the six focus areas was fully addressed. The reports also included a brief description of the site and any of its unique aspects. All members of the CLER site visit team reviewed and edited each report for accuracy and to achieve consensus on the findings.

Other Sources of Data

Several other sources of data were used to augment the site visit data, including the ACGME annual data reportsc and the 2013 American Hospital Association (AHA) Annual Survey Database.d The ACGME reports provided information on the SIs, programs, and physicians in GME, including the number of ACGME-accredited programs, number of residents and fellows matriculated, and university affiliation. The AHA data offered CLE information, including type of ownership (e.g., nongovernment, not-for-profit vs. investor-owned, for-profit) and size, as measured by the number of staffed acute care beds.

Group Interviews with an Audience Response System

Group interviews with residents and fellows, faculty members, and program directors were conducted using a computerized audience response system (ARS)e that allowed for anonymous answers to closed-ended questions. Data from the ARS were exported into Microsoft Excel® and then into a software package for statistical analysis. Responses to open-ended questions were documented qualitatively. The three surveys—one each for residents and fellows, faculty members, and program directors—consisted of 43, 24, and 30 closed-ended questions, and 30, 27, and 37 open-ended questions, respectively.

Group Interviews with No Audience Response System

All responses were documented qualitatively for group interviews with the CEO, members of the executive team, the DIO, and the resident representative (32 questions) as well as with the leadership of the patient safety and health care quality programs (40 questions).

Descriptive statistics were used to summarize and describe the distribution and general characteristics of the SIs, CLEs, and groups interviewed. For SIs, the information includes SI type (e.g., teaching hospital, medical school) and the number of ACGME-accredited residency and fellowship programs per institution. CLE characteristics include type of ownership (e.g., nongovernment, not-for-profit), number of licensed beds, and total staff count. For the group interviews, demographic information, including gender and medical specialty, was recorded.

Analysis of ARS Data

Analysis was conducted at two levels: (1) individuals (e.g., residents and fellows) and (2) CLEs. For the first level of analysis, results are based on the total sample of individuals surveyed, presented as percentages. The second level examines differences between CLEs when individual responses are aggregated at the CLE level. CLE results are presented as median and interquartile range. Analysis of the data at these two levels provides a national overview of the state of engagement in the six focus areas and compares CLEs on these outcomes.
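The two-level analysis described above can be sketched as follows. This is an illustrative example in Python with made-up response data, not the SPSS workflow the program actually used; the CLE identifiers and responses are hypothetical.

```python
import pandas as pd

# Hypothetical ARS responses: one row per individual, with the CLE they
# belong to and a yes/no answer to one closed-ended question (1 = yes, 0 = no).
responses = pd.DataFrame({
    "cle_id": ["A", "A", "A", "B", "B", "C", "C", "C", "C"],
    "answered_yes": [1, 0, 1, 1, 1, 0, 0, 1, 0],
})

# Level 1: national overview -- percentage across all individuals surveyed.
national_pct = 100 * responses["answered_yes"].mean()

# Level 2: aggregate individual responses to the CLE level, then summarize
# the distribution of CLE-level percentages as median and interquartile range.
cle_pct = responses.groupby("cle_id")["answered_yes"].mean() * 100
median = cle_pct.median()
q1, q3 = cle_pct.quantile(0.25), cle_pct.quantile(0.75)

print(f"National: {national_pct:.1f}% of individuals answered yes")
print(f"CLE-level: median {median:.1f}%, IQR {q1:.1f}-{q3:.1f}%")
```

The first level treats each surveyed individual as the unit of analysis; the second treats each CLE as the unit, so that large and small institutions contribute equally when comparing CLEs.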

Chi-square analysis was used to compare resident and fellow responses and to identify any relationships in responses by (a) gender, (b) residency year, and (c) specialty grouping. Chi-square analysis was also used to explore if differences were associated with CLE characteristics: (a) regional location, (b) bed size, and (c) type of ownership. Grouping of CLE-specific variables (e.g., bed size) was based on categories in the annual AHA survey. Statistical significance was evaluated at P values of .05 or less. All statistical analysis was conducted using IBM® SPSS Statistics® version 22.0.4 
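A chi-square test of the kind described could be run along these lines. The sketch below uses Python's scipy rather than the SPSS software the report names, and the contingency table values are entirely hypothetical.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: resident/fellow responses to one
# closed-ended question ("yes"/"no"), cross-tabulated by a CLE
# characteristic (e.g., bed-size category from the AHA survey groupings).
#                 yes   no
observed = [[120,  80],   # small (fewer beds)
            [200, 100],   # medium
            [150, 150]]   # large

chi2, p, dof, expected = chi2_contingency(observed)

# Significance evaluated at P values of .05 or less, as in the report.
print(f"chi-square = {chi2:.2f}, df = {dof}, P = {p:.4f}")
print("significant" if p <= 0.05 else "not significant")
```

A significant result here would indicate only that responses are associated with bed-size category, not why; consistent with the report's caution, no causal interpretation follows.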

Analysis of CLER Site Visit Reports

Specific findings based on responses to non-ARS questions and interviews on walking rounds were systematically coded, extracted, and recorded as frequency counts for further descriptive analysis. Overall percentages and percentages stratified by CLE region, bed size, and type of ownership are reported.
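Tabulating coded findings as overall and stratified percentages could look like the following. This is a hypothetical illustration in Python; the coding categories and counts are invented for the example.

```python
import pandas as pd

# Hypothetical coded findings from site visit reports: one row per CLE,
# with its AHA ownership category and whether a given finding was coded
# as present in that CLE's report.
reports = pd.DataFrame({
    "ownership": ["not-for-profit", "not-for-profit", "for-profit",
                  "government", "not-for-profit", "for-profit"],
    "finding_present": [True, False, True, True, True, False],
})

# Overall percentage of CLEs where the finding was coded as present.
overall = 100 * reports["finding_present"].mean()

# The same percentage stratified by ownership type.
by_ownership = 100 * reports.groupby("ownership")["finding_present"].mean()

print(f"Overall: {overall:.1f}%")
print(by_ownership.round(1))
```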

Development of Overarching Themes and Findings in the Focus Areas

The overarching themes and findings by focus areas were determined in three stages. First, the CLER Program staff aggregated and de-identified the results and presented them in summary form to the CLER site visitors. Next, the CLER site visitors reviewed and commented on the results, and offered additional findings by consensus. Based on feedback from the site visitors, the CLER Program staff revised the summary of results before presenting them to the CLER Evaluation Committee. Lastly, the members of the CLER Evaluation Committee reviewed the results and prioritized a set of key findings for each of the six focus areas. The committee also identified the four overarching themes that cut across all of the focus areas, achieving its decisions by consensus.

In addition to prioritizing the findings, the CLER Evaluation Committee developed a set of commentaries to discuss the importance of the findings and their impact on patient care and the education of our nation's future physician workforce.

Triangulation and Cross-Validation

Overall accuracy in the conclusions was enhanced by triangulating the findings. The findings were cross-validated for consistency and corroboration using multiple sources of complementary evidence and analytic techniques. For example, the ARS results were more meaningful when supplemented by critical qualitative information, and vice versa. Multiple sources of data provided greater insight and minimized the inadequacies of individual data sources when a finding was supported in multiple places.

The strengths and weaknesses of each individual method underscored the need to apply a mixed methods approach to seek more valid and reliable results. Moreover, corroborating both quantitative and qualitative evidence provided a richer, more balanced, and comprehensive perspective by allowing for deeper dimensions of the data to emerge.

As with any formative learning process, there are a number of limitations to the CLER Program that warrant consideration in using the information in this report. Perhaps most important, the findings do not suggest cause and effect. Close study over time is needed to allow these baseline observations to inform desired changes to improve the CLE. Common to all of the overarching themes is a high degree of variability both within and across sites. Over time, it will be useful to better understand how to best address this variability.

Second, the findings in this report are based on a series of sampled populations. For each visit, the CLER teams interviewed a sample of residents and fellows, faculty members, program directors, and other clinical and administrative staff—with the aim of broad representation across all programs (e.g., proportionally more individuals from larger programs). While the goal was to achieve a degree of representativeness, the sample may or may not reflect the entire population. Given that the CLER Program is a formative assessment, this approach to sampling allowed for a broad and in-depth understanding of socially complex systems such as CLEs.

Lastly, while this aggregated set of findings is designed to be highly representative, the CLEs that were not included in the sample may represent different experiences and consequently could yield different conclusions as the CLER Program goes on to consider them in the future.

1. Weiss KB, Bagian JP, Nasca TJ. The clinical learning environment: the foundation of graduate medical education. JAMA. 2013;309(16):1687-1688.

2. Weiss KB, Wagner R, Nasca TJ. Development, testing, and implementation of the ACGME Clinical Learning Environment Review (CLER) Program. J Grad Med Educ. 2012;4(3):396-398.

3. Wagner R, Patow C, Newton R, Casey BR, Koh NJ, Weiss KB; CLER Program. The overview of the CLER Program: CLER National Report of Findings 2016. J Grad Med Educ. 2016;8(2 suppl 1):11-14.

4. IBM SPSS Statistics for Windows. Version 22.0. Armonk, NY: IBM Corp; 2013.

a Each SI is composed of one or more participating sites for training; the number of participating sites is limited to major participating sites where resident education occurred.

b Source: The Accreditation Council for Graduate Medical Education (ACGME) annual data reports. The ACGME annual data reports contain the most recent data on the programs, institutions, and physicians in graduate medical education as reported by all medical residency SIs and ACGME-accredited programs.

c The ACGME annual data reports contain the most recent data on the programs, institutions, and physicians in GME as reported by all medical residency SIs and ACGME-accredited programs.

d The AHA Annual Survey Database™ includes data from the AHA Annual Survey of Hospitals, the AHA registration database, the US Census Bureau population data, and information from hospital accrediting bodies and other organizations.

e Audience Response Systems, Inc.

Author notes

CLER Program: Mark Bixby, MD; Baretta R. Casey, MD, MPH, FAAFP; Robin Dibner, MD; Anne Down; Staci Fischer, MD; Constance Haan, MD, MS, MA; Scott A. Holliday, MD*; Elizabeth Kimball, MA; Nancy J. Koh, PhD; Robin Newton, MD, FACP; Carl Patow, MD, MPH, FACS*; Mark Pian, MD*; Dale Ray, MD, MMM; Melissa Schori, MD, FACP, MBA; Robin Wagner, RN, MHSA; Elizabeth Wedemeyer, MD, FAAP; Kevin B. Weiss, MD

* Former staff member