Background

The Accreditation Council for Graduate Medical Education Clinical Learning Environment Review (CLER) program visits 1 participating site per sponsoring institution. While valuable, feedback on that site does not necessarily generalize to all learning environments where trainees and faculty provide clinical care, and institutions may be missing significant insights into their other clinical learning sites.

Objective

We explored how the Emory Learning Environment Evaluation process—modeled after CLER—could be used to improve the learning environments at 5 major clinical training sites.

Methods

Participants were recruited via e-mail. Sites hosted separate 60-minute sessions for medical students, residents and fellows, and faculty. We used the CLER Pathways to Excellence to develop a combination of fixed-choice and open-ended questions deployed via an audience response system and verbal queries. Data were analyzed primarily through descriptive statistics and graphs.

Results

Across sites, per session, medical student participants ranged from 9–16, resident and fellow participants from 21–30, and faculty participants from 15–29. Learners agreed that sites: (1) provided a supportive culture for requesting supervision (students 100%; residents and fellows 70%–100%), and (2) provided a supportive culture for reporting patient safety events (students 94%–100%; residents and fellows 91%–95%). Only a minority of residents and fellows and faculty agreed that they were educated on how to provide effective supervision (residents and fellows 21%–52%; faculty 45%–64%).

Conclusions

Data from this process have helped standardize improvement efforts across multiple clinical learning environments within our sponsoring institution.

What was known and gap

The CLER program provides important feedback on the clinical learning environment, but assesses only 1 site for each sponsoring institution.

What is new

An internal listening and data collection process based on CLER gathered learner and faculty insights and feedback across 4 clinical sites at 1 sponsoring institution.

Limitations

Single institution study limits generalizability; potential for response bias due to respondent selection.

Bottom line

Feedback from learners and faculty helped standardize improvement efforts across multiple clinical learning environments.

The Accreditation Council for Graduate Medical Education (ACGME) Clinical Learning Environment Review (CLER) program was designed to provide clinical sites affiliated with ACGME-accredited institutions with periodic feedback that addresses the quality and safety of the learning environment.1 The program conducts periodic site visits, with 1 participating site visited per sponsoring institution.1 While this feedback is valuable, it may not generalize to the learning environment at the sponsoring institution's other clinical sites where learners and faculty provide clinical care. As a result, institutions may be missing significant insights regarding their overall clinical learning environment.

The CLER site visit protocol includes trainee and faculty interview sessions.1  A survey of early impressions of the CLER program from the designated institutional official community found that participation in CLER site visits resulted in (1) more involvement from the graduate medical education (GME) community in improving 1 or more CLER focus areas; (2) changes to the clinical learning environment (eg, increased focus on patient safety and quality improvement); and (3) better recognition of the resources needed to improve the learning environment.2 

At Emory University School of Medicine, students, residents and fellows, and faculty provide clinical care in 5 major participating sites. Although the institution had received CLER information for 1 site, leadership desired a more comprehensive understanding of the learning environment at the other 4 major participating sites.

To obtain data specific to all 5 learning environments and to capitalize on the benefits of the CLER program, we developed an internal Emory Learning Environment Evaluation (ELEE) process, modeled after the CLER visit protocol.3  The goal was to provide feedback to hospital leadership, and use the findings to generate strategic initiatives to improve the institution's 5 clinical learning environments.

We explored how ELEE could be used to improve the learning environments at all major clinical sites. We hypothesized that the outcomes of the process would provide valuable feedback to hospital leadership and also enable our GME office to identify opportunities for improvement at these participating sites.

Setting and Participants

ELEE started in June 2017 and involved Emory medical students, residents, fellows, and faculty assigned to our 5 participating sites at the time of a scheduled session.

Design

To recruit participants, an e-mail was sent to program directors and clerkship directors detailing the study and inviting them to identify up to 2 students, residents and fellows, and faculty from their program who would be rotating/working at a given clinical site during the data collection period. The e-mail also included information about the clinical site of interest, as well as the date, location, and time of data collection. Following the CLER site visit protocol, the goal was to recruit up to 30 participants per 60-minute session. Each site hosted a separate session for medical students, residents and fellows, and faculty.

Potential participants were sent a calendar invite with study details and a study information sheet. During each session, the content of the information sheet was read aloud to participants, and consent to participate was obtained using an initial question fielded via an audience response system (Poll Everywhere, San Francisco, CA). The audience response system was set to the “Responses are anonymous” option. No compensation other than catering was provided to participants. To encourage open dialogue, each session was led by at least 2 members of our education leadership group who were not directly engaged in the learning environment being studied.

The CLER Pathways to Excellence4,5 document was used to develop survey questions related to the 6 CLER focus areas: (1) Patient Safety; (2) Health Care Quality; (3) Care Transitions; (4) Supervision; (5) Well-Being; and (6) Professionalism. We developed up to 6 questions per focus area (table). Questions were fielded using a combination of yes/no questions (via the audience response system) and verbal open-ended questions.

Table: Sample Questions and Responses From ELEE Questionnaire

Modeling the CLER site visit format,3 data collection occurred in a room set up to project a PowerPoint presentation; rooms were configured in a U-shape or boardroom arrangement. During each session, participants responded to survey questions by accessing the audience response system on their cell phones, using either text message or an online browser. Sessions for residents, fellows, and faculty included 33 questions (1 informed consent question, 22 yes/no questions, 10 open-ended questions). Student sessions included 30 questions (1 informed consent question, 21 yes/no questions, 8 open-ended questions). Both are provided as online supplemental material. For open-ended questions, participants were encouraged to respond verbally in an open-ended discussion, and comments were recorded. After each session, study leaders provided a written summary to hospital leadership and met with these individuals to review results and develop strategies for improvement.

The study was deemed exempt by the Emory University Institutional Review Board.

Analysis

Categorical (yes/no) data were reported through descriptive statistics and graphs. Aggregated comments (derived from open-ended questions) were stripped of identifiers. Comments were used to help contextualize results from categorical data. A table was also generated to compare responses across sites.

Results

To date, the ELEE process has been successfully deployed in 4 of 5 participating sites. Across sites, based on the goal of 30 participants per session, medical student participants ranged from 9–16 (30%–53%), resident and fellow participants from 21–30 (70%–100%), and faculty participants from 15–29 (50%–97%).

Participants were not required to respond to every question, and the number of respondents per question varied. To make it easier to compare responses across participant groups and sites, the percentage responding yes for each yes/no question was calculated and reported (provided as online supplemental material).

Areas of Strength

Across the 4 participating sites, most learners agreed that sites: (1) provided a supportive culture for requesting supervision (students 100%; residents and fellows 70%–100%); (2) provided an environment of professionalism that supported honesty and integrity and respectful treatment of others (students 83%–100%, residents and fellows 86%–100%); and (3) provided a supportive culture for reporting patient safety events (students 94%–100%; residents and fellows 91%–95%).

During open-ended discussions, learners spoke about being aware of what to report in terms of patient safety errors, but being unaware of the patient safety reporting system used at each site. Learners indicated being aware of multiple ways to obtain supervision (eg, upper-level resident, clinical chief of service, or faculty). With regard to transitions of care, all groups agreed that they used direct communication in the development of patient care plans among primary and consulting teams (students 88%–100%; residents and fellows 90%–100%; faculty 75%–100%), and with regard to professionalism, all groups agreed that, if needed, they would use the training site processes for reporting unprofessional behavior (students 82%–93%; residents and fellows 84%–91%; faculty 80%–96%).

Areas for Improvement

Across the 4 participating sites, a minority of learners stated that they (1) knew how to report patient safety events (students 0%–30%; residents and fellows 37%–63%); (2) received specialty-specific data on quality metrics and benchmarks related to their patient populations (students 0%–6%; residents and fellows 13%–38%); and (3) were aware of site processes for reporting unprofessional behavior (students 15%–27%; residents and fellows 21%–59%).

With regard to supervision, a minority of residents and fellows and faculty agreed that they were educated on how to provide effective supervision (residents and fellows 21%–52%; faculty 45%–64%). Finally, with regard to well-being, a minority of the faculty agreed that (1) sites demonstrated system-based actions for preventing, eliminating, or mitigating impediments to the well-being of learners and faculty members (21%–63%), and (2) sites demonstrated mechanisms for identification, early intervention, and ongoing support of learners and faculty members who are at risk of or are demonstrating self-harm (26%–64%).

Comments around well-being were the most site-specific. For example, while participants across sites provided examples of well-being initiatives such as on-call meals, cafeteria space, snacks at nurses' stations, and tobacco-free campuses, concerns brought up were site-specific (eg, nurse shortages, not enough consultant spaces, or the cumbersome nature of the electronic health record system).

Discussion

The development of ELEE provided several positive outcomes, including (1) increased learner and faculty engagement in continuous quality improvement in the learning environment6; (2) enhanced communication between the medical school and hospital leadership7; (3) education of learners and faculty regarding CLER focus areas8; and (4) identification of opportunities for improvement in the 6 focus areas across a variety of learning environments.

As a result of ELEE, our GME office and participating sites have initiated several projects to address identified areas of improvement. For example, in collaboration with our residents, we developed patient safety–related educational materials to teach residents and fellows how to report patient safety events at each training site. With regard to the need for specialty-specific data on quality metrics and benchmarks related to patient populations, a hospital system that currently generates site-specific quality metrics is developing a system to provide site-specific and individualized, specialty-specific quality metrics to all residents and fellows on a quarterly basis.

As a secondary outcome, the experience gained from the ELEE sessions helped to prepare our faculty, program directors, residents and fellows, and leadership at our primary institution for the subsequent CLER visit.

We encountered 3 challenges that should be considered if implementing this process. First, we found that medical student recruitment was challenging because their rotations often were assigned at the last minute. This process was improved after requesting help from the Associate Dean of Clinical Education. Second, medical students reported that some questions did not seem relevant to them, such as receiving specialty-specific quality data related to their patient populations. Third, we encountered challenges obtaining formal approval for our Veterans Affairs Medical Center site, which required a separate institutional review board process. Our plan is to develop a site-specific study protocol to visit this site in the future.

The ELEE process had financial costs. We purchased lunch for the faculty sessions and offered snacks for the earlier sessions with medical students and residents. The cost was approximately $300 per site. There were offsetting benefits. Based on informal feedback, medical students, residents, fellows, faculty, and hospital leadership considered this initiative positive, and it resulted in the discussion of improvement opportunities with leaders from the different clinical sites.

Limitations of ELEE include the potential for biased recruitment of participants by program/clerkship leadership and a limited number of medical student participants, which may have resulted in less anonymity and lower likelihood of honest responses. In the future, we plan to have peer-selected participants and hope to recruit additional medical students. The generalizability of this study is also limited given that data were collected in a single institution.

We intend to repeat this process every 2 years and hold annual meetings with hospital leadership to develop and review projects to address identified areas of improvement, verify that improvements are made, and evaluate the impact of changes.

Use of an internal data collection protocol based on the CLER visit process at multiple participating sites within an ACGME-accredited institution enabled GME offices at participating sites to have a more comprehensive understanding of performance in each of the 6 CLER focus areas. Data gathered inform continuous quality improvement and standardization of efforts across multiple clinical learning environments within a single sponsoring institution.

References

1. Koh NJ, Wagner R, Weiss KB. The methodology for the CLER National Report of Findings 2016. J Grad Med Educ. 2016;8(2 suppl 1):15–19.
2. Koh NJ, Wagner R, Sun H, et al. Early impressions of the CLER Program: a survey of the designated institutional official community. J Grad Med Educ. 2016;8(3):478–482.
3. Weiss KB, Wagner R, Nasca TJ. Development, testing, and implementation of the ACGME Clinical Learning Environment Review (CLER) Program. J Grad Med Educ. 2012;4(3):396–398.
4. Weiss KB, Bagian JP, Wagner R, et al. Introducing the CLER Pathways to Excellence: a new way of viewing clinical learning environments. J Grad Med Educ. 2014;6(3):608–609.
5. Weiss KB, Bagian JP, Wagner R. CLER Pathways to Excellence: expectations for an optimal clinical learning environment (executive summary). J Grad Med Educ. 2014;6(3):610–611.
6. Counte MA, Meurer S. Issues in the assessment of continuous quality improvement implementation in health care organizations. Int J Qual Health Care. 2001;13(3):197–207.
7. Kaplan HC, Brady PW, Dritz MC, et al. The influence of context on quality improvement success in health care: a systematic review of the literature. Milbank Q. 2010;88(4):500–559.
8. Tess A, Vidyarthi A, Yang J, et al. Bridging the gap: a framework and strategies for integrating the quality and safety mission of teaching hospitals and graduate medical education. Acad Med. 2015;90(9):1251–1257.

Author notes

Editor's Note: The online version of this article contains the Emory Learning Environment Evaluation (ELEE) questionnaire and a table of ELEE site comparison for yes/no questions.

Funding: The authors report no external funding source for this study.

Competing Interests

Conflict of interest: The authors declare they have no competing interests.

This study was presented as a poster at the ACGME Annual Educational Conference, Orlando, Florida, February 28–March 4, 2018, and the AAMC Continuum Connections meeting, Orlando, Florida, April 28–May 1, 2018.
