Background

The clinical learning environment (CLE) is a priority focus in medical education. The recent addition of teaming and health care systems to the Accreditation Council for Graduate Medical Education's Clinical Learning Environment Review (CLER) obligates educators to monitor these areas. Tools to evaluate the CLE would ideally be: (1) appropriate for all health care team members on a specific unit/project; (2) informed by contemporary learning environment frameworks; and (3) feasible/quick to complete. No existing CLE evaluation tool meets these criteria.

Objective

This report describes the creation and preliminary validity evidence for a Clinical Learning Environment Quick Survey (CLEQS).

Methods

Survey items were identified from the literature and other data sources, sorted into 1 of 4 learning environment domains (personal, social, organizational, material), and reviewed by multiple stakeholders and experts. Leaders from 6 interprofessional graduate medical education quality improvement/patient safety teams distributed this voluntary survey to their clinical team members (November 2019–mid-January 2021) using electronic or paper formats. Validity evidence for this instrument was based on content, response process, internal structure, reliability, relations to other variables, and consequences.

Results

Two hundred one CLEQS responses were obtained. The survey took 1.5 minutes on average to complete, with good overall reliability (Cronbach's α = 0.83). The Cronbach's alpha for each learning environment domain paired with the overall item ranged from 0.50 (personal) to 0.79 (social). There were strong associations with other measures and clarity about improvement targets.

Conclusions

CLEQS meets the 3 criteria for evaluating CLEs. Reliability data supports its internal consistency, and initial validity evidence is promising.

Objectives

To describe the creation, reliability, and preliminary validation of a new, theory-based, short (10-item) instrument, the Clinical Learning Environment Quick Survey (CLEQS).

Findings

Reliability data supports its internal consistency, and validity evidence is favorable and complements existing annual and semiannual accreditation and system tools.

Limitations

This was a single-institution sample.

Bottom Line

CLEQS is an innovative, quick, theory-based tool that can be used by interprofessional team members including learners to evaluate clinical learning environments, without causing undue hardship, offering insights for continuous quality improvement consistent with CLER.

Clinical practice environments shape the quality of patient care and learning, making them important and required targets for program evaluation in graduate medical education (GME) in the United States.1-7  The Accreditation Council for Graduate Medical Education (ACGME) Clinical Learning Environment Review (CLER) seeks to optimize the clinical learning environment (CLE), recently adding teaming as an essential part of interprofessional learning and development and recognizing the health care system's responsibility for the CLE.8,9  The ACGME Common Program Requirements (CPRs) also emphasize the importance of interprofessional teamwork in the CLE and of active participation in interprofessional quality improvement and patient safety initiatives.10  To evaluate CLE teaming, including interprofessional teams' quality/safety improvement efforts, a quick CLE survey is needed that can be completed periodically by all team members active in a unit/project (eg, learners, clinicians, other clinical staff). Such a formative tool would allow project teams, GME leaders, and their sponsoring organizations to track changes over time, target improvement interventions, and present data about the quality of the CLE to CLER and program reviewers.

An extensive review of research on interventions designed to improve learning environments in health professions education by Gruppen et al identified 4 domains that should be considered in reviewing a learning environment: personal, social, organizational, and material (physical/virtual spaces).11-13  This framework is more inclusive than prior frameworks14,15  because it expands the social domain (eg, how individuals interact and work together as a team) and includes the material domain (eg, physical and virtual spaces, equipment), treating these as inseparable interactions of everyday organizational life per the theory of sociomateriality.16,17

A recent review of existing instruments that measure CLEs found that they do not adequately sample all 4 learning environment domains, are lengthy (eg, ≥ 25 items), which limits completion rates, and do not examine the CLE from the perspective of all members of a unit/team (eg, learners, clinicians, staff).13,18  The review authors recommend the development of shorter instruments that sample the 4 domains.18

This innovation article describes the creation, reliability, and preliminary validation of a new, theory-based, short (10-item) CLE instrument, the Clinical Learning Environment Quick Survey (CLEQS), and provides the results of an initial test of its application in multiple clinic unit-based quality/safety project teams. CLEQS was explicitly designed to complement existing annual and semiannual accreditation and system tools and be appropriate for all participants in the clinical workplace (including residents and fellows).

Survey Tool Development

To develop a CLE survey that could be quickly completed by learners and all other members of the health care team at the unit/team level, we started by developing content (items) reflecting the contemporary 4-domain learning environment constructs (personal, social—including teaming, organizational, and material) described by Gruppen et al.12,13  Encouraged by the inclusion of teaming and health care systems in the CLER Pathways to Excellence 2.0, we created items consistent with content drawn from 4 different data sources: existing education-oriented surveys,18,19  CLER principles on teaming,10,20  in-use sponsoring institution health system surveys,21,22  and literature on the CLE.9,23,24  Content from these sources can be connected to the 4 learning environment domains.

Key concepts from each of the data sources were assigned by the lead author (D.S.) to one of Gruppen et al's 4 quadrants13  and confirmed by at least 2 other authors. This process yielded multiple common focal areas within each quadrant. For example, in the personal domain, multiple sources identified wellness items highlighting purpose/meaning9,12,19,20,25  and psychological safety items.12,18,21,25  For each focal area, 2 to 3 items were then proposed to ensure sufficient coverage of each domain; together with an overall item, this yielded 10 items total (< 5 minutes to complete) to minimize survey fatigue,26  using scales similar to those of existing instruments.

The items were then reviewed and edited by multiple stakeholders across the continuum of health professions education and practice (residents/fellows, GME leadership, medical students, continuing professional education, interprofessional education, clinical and interprofessional leaders) and by an expert in research on learning environments (D.I.) to ensure that the items were applicable to their health care team and quality improvement project team members. Several stakeholder representatives (residents, faculty members, interprofessional leaders, managers) engaged in read/think-aloud exercises as they considered the items. Items were then revised to be appropriate for use by all team members, with response options (length, scale) similar to existing system-wide tools and their national benchmark data, to ultimately allow comparison with institutional surveys. See the Figure for the 10 survey items and associated scales, including the referent sources for each item, organized by learning environment domain.

Figure

10-Item Clinical Learning Environment Quick Survey (CLEQS) by 4 Learning Environment Domains With Cronbach's Alpha and Referents13

a 4-point scale (Yes, Definitely; Yes, with some minor concerns; No, with some major concerns; No, Definitely Not)

b 5-point scale anchored at 1 by "Guarded" (eg, afraid to ask questions or speak up; takes minimal/no risks to get along) and at 5 by "Candid" (eg, it's OK to ask questions and raise concerns; safe to take risks to learn)

c 7-point scale anchored at 1 (Strongly Disagree) and 7 (Strongly Agree)

d 5-point scale (Strongly Agree; Agree; Neither Agree nor Disagree; Disagree; Strongly Disagree)

e 5-point scale (Not at all Effective; Slightly Effective; Somewhat Effective; Very Effective; Extremely Effective)

Note: Cronbach's alpha analysis was performed to evaluate the overall consistency of the survey and to evaluate the degree to which each of the 4 domains was associated with the overall item (ie, Would you recommend this workplace to your colleagues?).


Setting and Participants

Six GME interprofessional project teams from 2 Aurora Health Care (a part of Advocate Aurora Health) teaching hospitals/clinics in Milwaukee, Wisconsin, participating in the Alliance of Independent Academic Medical Centers' National Initiative VII (NI-VII) on Teaming for Interprofessional Collaborative Practice27  were invited to participate in the survey. The sites for the 6 interprofessional quality improvement/safety project teams included inpatient and ambulatory sites in cardiology, family medicine, internal medicine, obstetrics and gynecology, radiology, and GME leadership. These teams engaged residents (training levels 1-4), fellows, faculty members, and multiple health professionals (physician assistants, nurse practitioners, nursing, pharmacy, lab/imaging technicians, social workers, speech pathologists, medical assistants). Team leaders distributed the survey to their respective clinical team members between November 2019 and mid-January 2021. Because this was a voluntary convenience sample, team leaders were asked to estimate the number of team members in their clinical units who could potentially complete the survey in order to calculate a response rate. Respondents completed the CLEQS using SurveyMonkey or, to avoid clinical firewalls and allow easy completion in the midst of busy clinical practice, on paper (with data subsequently entered into the survey tool).

Survey Analysis—Reliability and Validity

Establishing the reliability and validity of a survey tool is essential to its credibility.28  Validity of a survey tool refers to a carefully structured argument that supports appropriate interpretations of instrument scores.28-31  Consistent with accepted standards, we sought validity evidence associated with content, response process, internal structure, relations to other variables, and consequences of our new instrument. Internal consistency was determined using Cronbach's alpha; all item scales were converted to a standard scale for analysis. Cronbach's alpha was also used to evaluate the degree to which each of the 4 domains was associated with the overall item (ie, Would you recommend this workplace to your colleagues?), as preliminary evidence that the survey's internal structure is consistent with the underlying 4-domain framework. To assess feasibility of tool use by response time, the online survey tool's time tracker was enabled, yielding the average completion time across all e-respondents.
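
To make the two analysis steps above concrete, a minimal sketch follows (Python with pandas). This is an illustrative reconstruction under stated assumptions, not the authors' actual analysis code; the column names and scale ranges are hypothetical.

```python
# Illustrative sketch, not the authors' analysis code: standardize items
# reported on different scales to a common 0-1 metric, then compute
# Cronbach's alpha. Column names and scale ranges are hypothetical.
import pandas as pd

def rescale(col: pd.Series, low: float, high: float) -> pd.Series:
    """Min-max convert an item's raw scale (eg, 1-4, 1-5, 1-7) to 0-1."""
    return (col - low) / (high - low)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    return (k / (k - 1)) * (
        1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1)
    )

# Hypothetical usage with three of the 10 items:
# responses = pd.read_csv("cleqs_responses.csv")
# ranges = {"meaningful_work": (1, 4), "psych_safety": (1, 5), "teamwork": (1, 7)}
# std = pd.DataFrame({c: rescale(responses[c], *r) for c, r in ranges.items()})
# print(f"overall alpha = {cronbach_alpha(std.dropna()):.2f}")
```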

Descriptive statistics, including the number of surveys completed and item means, were provided to each team's leaders during a working group meeting. Team leaders and GME leadership were asked to review the results from their specific area of responsibility and advise whether the results were consistent with similar (bi)annual data about their respective hospital/clinic/program units collected by other system-wide and accreditation surveys (Culture of Safety Survey,21  Engagement Survey,22  ACGME Resident/Faculty Surveys19), and whether the results pointed to actionable changes (consequences). Results and each team's brief comments were noted in the working minutes from the group meeting.

Since monitoring the CLE and quality/safety interprofessional project teams is an accreditation requirement, the first author's Research Subject Protection Program determined that this type of work does not constitute human subjects research.

A total of 201 surveys were completed between November 2019 and mid-January 2021. Team leaders, who were responsible for survey distribution, did not explicitly track distribution but reported a "strong" response rate (estimated around 70%). Consistent with the aim of having a diverse set of respondents, a mix of trainees, faculty members, and others in the CLE completed the survey. Sixty percent of respondents were physicians, either residents or fellows (38%, 77 of 201) or faculty members (22%, 45 of 201). Twenty-one percent (42 of 201) were other clinicians, including 17 nurses, 15 lab techs, 5 nurse midwives, 1 speech pathologist, 1 social worker, and 3 others. The remaining 18% were other clinic and lab staff (n = 18), medical assistants (n = 5), program coordinators (n = 9), and medical students (n = 5). The average time required to complete the online survey form was 1.5 minutes (range 1-2 minutes).

CLEQS reliability, calculated using all items, was Cronbach's α = 0.83. Examining the individual item alphas, all were in the acceptable range (≥ 0.80), and the item-total correlations were in the preferred 0.30-0.60 range,32  except for "The work I do is meaningful," which was below 0.30 (Table 1). While removing this item would improve the overall alpha, it was a key item across multiple data sources in the content review, including CLER, and thus was retained. Item correlations are available in Table 2. The Cronbach's alpha for each of the Gruppen et al learning environment domains13  paired with the overall item ranged from 0.79 for social to 0.50 for personal (see Figure), falling within the acceptable range for a short survey.32
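
For readers who wish to reproduce this style of item-level analysis, the sketch below computes the two diagnostics discussed above, the corrected item-total correlation and alpha-if-item-deleted. It is illustrative only; the DataFrame column names are hypothetical, not the study's dataset.

```python
# Hedged sketch (not the authors' code) of item-level diagnostics: the
# corrected item-total correlation (each item vs the sum of the remaining
# items) and Cronbach's alpha recomputed with that item removed.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    return (k / (k - 1)) * (
        1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1)
    )

def item_diagnostics(items: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for col in items.columns:
        rest = items.drop(columns=col)
        rows.append({
            "item": col,
            # Correlation of the item with the total of all other items;
            # values below ~0.30 flag items that track the rest of the
            # scale weakly (as "The work I do is meaningful" did here).
            "item_total_r": items[col].corr(rest.sum(axis=1)),
            # Overall alpha recomputed without this item.
            "alpha_if_deleted": cronbach_alpha(rest),
        })
    return pd.DataFrame(rows)
```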

Table 1

Clinical Learning Environment Quick Survey (CLEQS) Cronbach Coefficient Alpha by Item (n = 201)

Table 2

Pearson Correlation Coefficients for Clinical Learning Environment Quick Survey (CLEQS) Items Among Group-Level Variables (n = 201)


Each project's team leaders reviewed the results by CLEQS item and domain. The results varied by team, with mean rating differences between teams by item ranging from > 0.60 on a 4-point scale to > 2.2 on a 5-point scale (Table 3). This variability between teams' CLEQS results indicates that the survey can identify differences among units and point to CLE areas in need of improvement. To address response bias concerns that can impact validity,33  program directors, service unit medical directors, and GME leaders confirmed that their respective teams' results were consistent with service line/unit/program data from other system-wide and accreditation tools (Culture of Safety Survey, Team Member Engagement Survey, ACGME Resident/Faculty Surveys). For example, one unit received consistently low scores on "My direct supervisor/attending provides sufficient supervision/feedback," while another unit received low scores on effective and collaborative teamwork. Other units received consistently high scores for feeling supported by team members or having clear expectations. Team leaders reported that this discrimination between items allowed them to focus on celebrating strengths and targeting improvement strategies specific to their teams. In contrast to system-wide/accreditation tools that are administered annually, they noted that CLEQS can be administered more frequently to provide targeted progress monitoring and longitudinal tracking.
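
A brief, hypothetical sketch of how such between-team comparisons can be tabulated follows (the "team" column and item names are assumptions, not the published dataset):

```python
# Illustrative only: tabulate per-team item means and the between-team
# spread summarized in Table 3. Column names are hypothetical.
import pandas as pd

def team_item_means(responses: pd.DataFrame, item_cols: list[str]) -> pd.DataFrame:
    """Mean rating per team for each item, on each item's native scale."""
    return responses.groupby("team")[item_cols].mean()

def between_team_spread(means: pd.DataFrame) -> pd.Series:
    """Max minus min team mean per item; larger spreads flag items that
    discriminate between units (eg, > 0.60 on a 4-point scale here)."""
    return means.max() - means.min()
```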

Table 3

Clinical Learning Environment Quick Survey (CLEQS) With Directions and Items With Associated Scales, Means, and Ranges


We describe the development and pilot testing of CLEQS, an innovative, theory-based tool that can be quickly completed by learners and interprofessional health care team members. Survey reliability was strong. Preliminary validity evidence supports a reasoned argument for CLEQS' validity consistent with Messick's model.31  Item content reflected the 4 learning environment domains.12,13  Response process evidence was obtained via the iterative review of items by multiple key stakeholders, ranging from trainees and faculty to clinical and interprofessional leaders, and by an expert in evaluation of learning environments.18  The relationship to other variables and the ability to identify specific areas for improvement form a strong consequential validity argument, buttressed by team leaders highlighting the tool's utility for acknowledging strengths and tracking CLE improvements at repeated intervals.

From a practical perspective, this tool offers leaders in clinical education a survey instrument that can be easily administered at frequent intervals and is sensitive to the 4 domains of the learning environment and the focal areas within each domain.11  The ability to quickly gather perspectives from all CLE members in a unit or those involved with quality/safety clinical project teams overcomes the barrier of waiting for system/accreditation data sets. This allows GME quality/safety project, program, and/or institutional leaders to initiate and monitor targeted interventions to improve the clinical learning/work environment. While other CLE inventories exist,18  none of them meet all 3 criteria identified by the authors (theory-based, appropriate for all team members, and short).

While the social domain had a strong Cronbach's alpha of 0.79, suggesting that the items clustering in that domain were homogeneous, the other domains had lower alpha levels, implying that the concepts within those domains were somewhat more heterogeneous. This is to be expected with any short survey, as increasing the number of items typically increases reliability.34  Conceptually, the social domain items are quite similar (all related to interpersonal teaming), while items in the other 3 domains comprise clusters of focal areas: personal (individual meaning and personal safety), material (access to resources and space), and organizational (clarity of expectations and supervision/feedback). The focal areas cover the subconcepts of each domain and enrich the diagnostic accuracy of the instrument while retaining the virtue of brevity.
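
A standard psychometric illustration of this point (our addition, not part of the original analysis) is the Spearman-Brown prediction formula for an instrument lengthened k-fold with parallel items:

```latex
% Spearman-Brown prediction: reliability of an instrument with
% reliability \rho after lengthening it k-fold with parallel items
\rho_k = \frac{k\,\rho}{1 + (k - 1)\,\rho}
% Worked example: doubling (k = 2) a domain with \rho = 0.50
% predicts 2(0.50) / (1 + 0.50) \approx 0.67.
```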

There are several limitations associated with the development and pilot testing of CLEQS. While the respondents were from multiple specialties/service units in 2 teaching hospitals and affiliated clinics, they were all involved in a specific project; thus they constituted a convenience sample and may not be representative of their larger affiliated programs/units. While we applied Messick's unified theory of validity evidence to guide and inform our results, the sampling and convergent validity aspects should be addressed in subsequent work. Ultimately, statistical comparisons of CLEQS data with other data from our programs35  and the ACGME, along with system-wide tools, will be completed.

CLEQS is an innovative, quick, theory-based tool that can be used to evaluate CLEs by interprofessional team members, including learners in those settings, without causing undue hardship for participants. This instrument focuses on the CLE at the unit level, rather than the overall program, providing insights for continuous improvement at the micro level. Reliability data supports its internal consistency, and early validity evidence for this innovation is favorable.

The authors would like to thank each of the 6 Aurora Health Care GME interprofessional team leaders and members for participating in the pilot as part of their Alliance of Independent Academic Medical Centers' National Initiative VII (NI-VII) projects on Teaming for Interprofessional Collaborative Practice, and Kayla Heslin, MPH, Aurora University of Wisconsin Medical Group–Aurora Health Care, for her statistical support.

1. Genn JM. AMEE Medical Education Guide No. 23 (part 1): curriculum, environment, climate, quality and change in medical education—a unifying perspective. Med Teach. 2001;23(4):337-344.
2. Genn JM. AMEE Medical Education Guide No. 23 (part 2): curriculum, environment, climate, quality and change in medical education—a unifying perspective. Med Teach. 2001;23(5):445-454.
3. Nasca TJ, Weiss KB, Bagian JP. Improving clinical learning environments for tomorrow's physicians. N Engl J Med. 2014;370(11):991-993.
4. Nordquist J, Hall J, Caverzagie K, et al. The clinical learning environment. Med Teach. 2019;41(4):366-372.
5. Palmgren PJ. It takes two to tango: an inquiry into healthcare professional education environments. Stockholm, Sweden: Karolinska Institutet; 2016.
6. Tackett S, Wright S, Lubin R, Li J, Pan H. International study of medical school learning environments and their relationship with student well-being and empathy. Med Educ. 2017;51(3):280-289.
7. Wagner R, Weiss KB, Passiment ML, Nasca TJ. Pursuing excellence in clinical learning environments. J Grad Med Educ. 2016;8(1):124-127.
8. Co JP, Weiss KB; CLER Evaluation Committee. CLER Pathways to Excellence, Version 2.0: executive summary. J Grad Med Educ. 2019;11(6):739-741.
9. Accreditation Council for Graduate Medical Education. Common Program Requirements. 2021.
10. Accreditation Council for Graduate Medical Education. CLER Pathways to Excellence: Expectations for an Optimal Clinical Learning Environment to Achieve Safe and High-Quality Patient Care, Version 2.0. 2021.
11. Josiah Macy Jr. Foundation. Improving Environments for Learning in the Health Professions. Proceedings of a conference chaired by David M. Irby, PhD. 2021.
12. Gruppen LD, Irby DM, Durning S, Maggio L. Interventions designed to improve the learning environment in the health professions: a scoping review. MedEdPublish. 2018;7(3). doi:10.15694/mep.2018.0000211.1
13. Gruppen LD, Irby DM, Durning SJ, Maggio LA. Conceptualizing learning environments in the health professions. Acad Med. 2019;94(7):969-974.
14. Moos RH. Conceptualizations of human environments. Am Psychol. 1973;28(8):652-655.
15. Schönrock-Adema J, Bouwkamp-Timmer T, van Hell EA, Cohen-Schotanus J. Key elements in assessing the educational environment: where is the theory? Adv Health Sci Educ Theory Pract. 2012;17(5):727-742.
16. Fenwick T. Sociomateriality in medical practice and learning: attuning to what matters. Med Educ. 2014;48(1):44-52.
17. Orlikowski WJ. Sociomaterial practices: exploring technology at work. Org Studies. 2007;28(9):1435-1448.
18. Irby DM, O'Brien BC, Stenfors T, Palmgren PJ. Selecting clinical learning environment instruments for medicine using a four domain framework. Acad Med. 2021;96(2):218-225.
19. Accreditation Council for Graduate Medical Education. Resident/Fellow and Faculty Surveys. 2021.
20. Edmondson AC. The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. Hoboken, NJ: John Wiley & Sons; 2018.
21. Agency for Healthcare Research and Quality (AHRQ). Surveys on Patient Safety Culture (SOPS). 2021.
22. Perceptyx. Employee engagement surveys. 2021.
23. Nordquist J, Sundberg K, Laing A. Aligning physical learning spaces with the curriculum: AMEE Guide No. 107. Med Teach. 2016;38(8):755-768. doi:10.3109/0142159X.2016.1147541
24. Cooper AZ, Simpson D, Nordquist J. Optimizing the physical clinical learning environment for teaching. J Grad Med Educ. 2020;12(2):221-222.
25. Advocate Aurora Health. 2019 Team Member Engagement Survey. Administered by Perceptyx.com. Internal access. September 2019.
26. Porter SR, Whitcomb ME, Weitzer WH. Multiple surveys of students and survey fatigue. In: Porter SR, ed. Special Issue: Overcoming Survey Research Problems. New Directions for Institutional Research. 2004;(121):63-73.
27. Alliance of Independent Academic Medical Centers. National Initiative VII: Teaming for Interprofessional Collaborative Practice. 2021.
28. Sullivan GM. A primer on the validity of assessment instruments. J Grad Med Educ. 2011;3(2):119-120.
29. Downing SM. Validity: on the meaningful interpretation of assessment data. Med Educ. 2003;37(9):830-837.
30. Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: theory and application. Am J Med. 2006;119(2):166.e7-166.e16. doi:10.1016/j.amjmed.2005.10.036
31. Messick S. Validity of psychological assessment: validation of inferences from persons' responses and performances as scientific inquiry into score meaning. Am Psychol. 1995;50(9):741-749.
32. Gable RK, Wolfe MB. Instrument Development in the Affective Domain: Measuring Attitudes and Values in Corporate and School Settings. Boston, MA: Kluwer Academic Publishers; 1993.
33. Halbesleben JRB, Whitman MV. Evaluating survey quality in health services research: a decision framework for assessing nonresponse bias. Health Serv Res. 2013;48(3):913-930.
34. Bolarinwa OA. Principles and methods of validity and reliability testing of questionnaires used in social and health science researches. Niger Postgrad Med J. 2015;22(4):195-201.
35. Dyrbye LN, Hunderfund ANL, Winters RC, et al. The relationship between residents' perceptions of residency program leadership team behaviors and resident burnout and satisfaction. Acad Med. 2020;95(9):1428-1434.

Author notes

Funding: The authors report no external funding source for this study.

Competing Interests

Conflict of interest: The authors declare they have no competing interests.