ABSTRACT
The clinical learning environment (CLE) is a priority focus in medical education. The Accreditation Council for Graduate Medical Education's Clinical Learning Environment Review (CLER) recently added teaming and health care systems as focus areas, obligating educators to monitor them. Tools to evaluate the CLE would ideally be: (1) appropriate for all health care team members on a specific unit/project; (2) informed by contemporary learning environment frameworks; and (3) feasible and quick to complete. No existing CLE evaluation tool meets these criteria.
This report describes the creation and preliminary validity evidence for a Clinical Learning Environment Quick Survey (CLEQS).
Survey items were identified from the literature and other data sources, sorted into 1 of 4 learning environment domains (personal, social, organizational, material), and reviewed by multiple stakeholders and experts. Leaders from 6 interprofessional graduate medical education quality improvement/patient safety teams distributed this voluntary survey to their clinical team members (November 2019–mid-January 2021) in electronic or paper formats. Validity evidence for the instrument was based on content, response process, internal structure (including reliability), relations to other variables, and consequences.
Two hundred one CLEQS responses were obtained, taking 1.5 minutes on average to complete, with good reliability (Cronbach's α = 0.83). The Cronbach's alpha for each learning environment domain paired with the overall item ranged from 0.50 for personal to 0.79 for social. There were strong associations with other measures and clarity about improvement targets.
CLEQS meets the 3 criteria for evaluating CLEs. Reliability data supports its internal consistency, and initial validity evidence is promising.
Introduction
Clinical practice environments shape the quality of patient care and learning, making them important and required targets for program evaluation in graduate medical education (GME) in the United States.1–7 The Accreditation Council for Graduate Medical Education (ACGME) Clinical Learning Environment Review (CLER) seeks to optimize the clinical learning environment (CLE), recently adding teaming as an essential part of interprofessional learning and development and recognizing the health care system's responsibility for the CLE.8,9 The ACGME Common Program Requirements (CPRs) also emphasize the importance of interprofessional teamwork in the CLE and active participation in interprofessional quality improvement and patient safety initiatives.10 Given the need to evaluate teaming in the CLE, including interprofessional teams' quality/safety improvement efforts, a quick CLE survey is needed that can be completed periodically by all team members active in a unit/project (eg, learners, clinicians, other clinical staff). Such a formative tool would allow project teams, GME leaders, and their sponsoring organizations to track changes over time, target improvement interventions, and present data about the quality of the CLE to CLER and program reviewers.
An extensive review by Gruppen et al of research on interventions designed to improve learning environments in health professions education identified 4 domains that should be considered in reviewing a learning environment: personal, social, organizational, and material (physical/virtual spaces).11–13 This framework is more inclusive than prior frameworks14,15 because it expands the social domain (eg, how individuals interact and work together as a team) and adds the material domain (eg, physical and virtual spaces, equipment), treating these aspects of everyday organizational life as inseparably intertwined per the theory of sociomateriality.16,17
A recent review of existing instruments that measure CLEs found that they do not adequately sample all 4 learning environment domains, are lengthy (eg, ≥ 25 items), which limits completion rates, and do not examine the CLE from the perspective of all members of a unit/team (eg, learners, clinicians, staff).13,18 The review authors recommend the development of shorter instruments that sample all 4 domains.18
This innovation article describes the creation, reliability, and preliminary validation of a new, theory-based, short (10-item) CLE instrument, the Clinical Learning Environment Quick Survey (CLEQS), and provides the results of an initial test of its application in multiple clinical unit-based quality/safety project teams. CLEQS was explicitly designed to complement existing annual and semiannual accreditation and system tools and to be appropriate for all participants in the clinical workplace (including residents and fellows).
Methods
Survey Tool Development
To develop a CLE survey that could be quickly completed by learners and all other members of the health care team at the unit/team level, we started by developing content (items) reflecting the contemporary 4-domain learning environment constructs (personal, social—including teaming, organizational, and material) described by Gruppen et al.12,13 The inclusion of teaming and health care systems in the CLER Pathways to Excellence 2.0 prompted us to create items consistent with content drawn from 4 data sources: existing education-oriented surveys,18,19 CLER principles on teaming,10,20 in-use sponsoring institution health system surveys,21,22 and literature on the CLE.9,23,24 Content from these sources maps onto the 4 learning environment domains.
Key concepts from each of the data sources were assigned by the lead author (D.S.) to 1 of Gruppen et al's 4 quadrants13 and confirmed by at least 2 other authors. This process yielded multiple common focal areas within each quadrant. For example, in the personal domain, multiple sources identified wellness items highlighting purpose/meaning9,12,19,20,25 and psychological safety items.12,18,21,25 For each focal area, 2 to 3 items were then proposed to sufficiently cover each domain; together with an overall item, this yielded 10 items total (< 5 minutes to complete) to minimize survey fatigue,26 using scales similar to those of existing instruments.
The items were then reviewed and edited by multiple stakeholders across the continuum of health professions education and practice (residents/fellows, GME leadership, medical students, continuing professional education, interprofessional education, clinical and interprofessional leaders), as well as by an expert in research on learning environments (D.I.), to ensure that the items were applicable to their health care team and quality improvement project team members. Several stakeholder representatives (residents, faculty members, interprofessional leaders, managers) engaged in read/think-aloud sessions as they considered the items. Items were then revised to be appropriate for use by all team members, with response options (length, scale) similar to those of existing system-wide tools and their national benchmark data, ultimately allowing comparison with institutional surveys. See the Figure for the 10 survey items and associated scales, including the referent sources for each item, organized by learning environment domain.
Setting and Participants
Six GME interprofessional project teams from 2 Aurora Health Care (a part of Advocate Aurora Health) teaching hospitals/clinics in Milwaukee, Wisconsin, participating in the Alliance of Independent Academic Medical Centers' National Initiative VII (NI-VII) on Teaming for Interprofessional Collaborative Practice27 were invited to participate in the survey. The sites for the 6 interprofessional quality improvement/safety project teams included inpatient and ambulatory sites in cardiology, family medicine, internal medicine, obstetrics and gynecology, radiology, and GME leadership. These teams engaged residents (training levels 1–4), fellows, faculty members, and multiple health professionals (physician assistants, nurse practitioners, nurses, pharmacists, lab/imaging technicians, social workers, speech pathologists, medical assistants). Team leaders distributed the survey to their respective clinical team members between November 2019 and mid-January 2021. Because this was a voluntary convenience sample, team leaders were asked to estimate the number of team members in their clinical units who could potentially complete the survey, in order to calculate a response rate. To avoid clinical firewalls and allow easy completion in the midst of busy clinical practice, respondents completed the CLEQS using SurveyMonkey or on paper (with data subsequently entered into the survey tool).
Survey Analysis—Reliability and Validity
Establishing the reliability and validity of a survey tool is essential to its credibility.28 Validity of a survey tool refers to a carefully structured argument that supports appropriate interpretations of instrument scores.28–31 Consistent with accepted standards, we sought validity evidence associated with the content, response process, internal structure, relations to other variables, and consequences of our new instrument. Internal consistency was determined using Cronbach's alpha. All item scales were converted to a standard scale for analysis. Cronbach's alpha analysis was also performed to evaluate the degree to which each of the 4 domains was associated with the overall item (ie, Would you recommend this workplace to your colleagues?) as preliminary evidence that the internal structure of the survey is consistent with the underlying 4-domain framework. To assess feasibility in terms of response time, the online survey tool's time tracker was enabled, yielding an average completion time across all electronic respondents.
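To make these scoring steps concrete, the following is a minimal sketch of the two computations described above: rescaling mixed 4- and 5-point items to a common scale and computing Cronbach's alpha. It assumes responses are arranged as a respondents-by-items matrix; the function names and rescaling bounds are illustrative, not the study's actual analysis code.

```python
import numpy as np

def rescale(col, lo, hi):
    """Map a Likert item onto a common 0-1 scale so that 4-point and
    5-point items can be pooled in a single analysis."""
    col = np.asarray(col, dtype=float)
    return (col - lo) / (hi - lo)

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical usage with two raw item columns, q1 (1-5) and q2 (1-4):
# responses = np.column_stack([rescale(q1, 1, 5), rescale(q2, 1, 4)])
# alpha = cronbach_alpha(responses)
```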
Descriptive statistics, including the number of surveys completed and item means, were provided to each team's leaders during a working group meeting. Team leaders and GME leadership were asked to review the results from their specific area of responsibility and advise whether the results were consistent with similar (bi)annual data about their respective hospital/clinic/program units collected by other system-wide and accreditation surveys (Culture of Safety Survey,21 Engagement Survey,22 ACGME Resident/Faculty Surveys19), and whether the results pointed to actionable changes (consequences). Results and each team's brief comments were noted in the working minutes from the group meeting.
Because monitoring the CLE and quality/safety interprofessional project teams is an accreditation requirement, the first author's institutional Research Subject Protection Program determined that this type of work does not constitute human subjects research.
Results
A total of 201 surveys were completed during the survey period (November 2019–mid-January 2021). Team leaders, who were responsible for survey distribution, did not explicitly track distribution but reported a “strong” response rate (estimated at around 70%). Consistent with the aim of having a diverse set of respondents, a mix of trainees, faculty members, and others in the CLE completed the survey. Sixty percent of respondents were physicians, either residents or fellows (38%, 77 of 201) or faculty members (22%, 45 of 201). Twenty-one percent (42 of 201) were other clinicians: 17 nurses, 15 lab technicians, 5 nurse midwives, 1 speech pathologist, 1 social worker, and 3 others. The remaining 18% were other clinic and lab staff (n = 18), medical assistants (n = 5), program coordinators (n = 9), and medical students (n = 5). The average time required to complete the online survey was 1.5 minutes (range 1–2 minutes).
CLEQS reliability was calculated using all items, resulting in a Cronbach's α = 0.83. Examining the individual item alphas, all were in the acceptable range (0.80 or greater), and the item correlations were in the preferred 0.30–0.60 range,32 except for “The work I do is meaningful,” which fell below 0.30 (Table 1). While removing this item would improve the overall reliability alpha, it was a key item across multiple data sources in the content review, including CLER, and thus was retained. Item correlations are available in Table 2. The Cronbach's alpha for each of the Gruppen et al learning environment domains paired with the overall item ranged from 0.79 for social to 0.50 for personal (see Figure), all within the acceptable range for a short survey.32
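The item-level screening reported above (item-total correlations and the effect of dropping an item on alpha) can be sketched as follows. This reuses the cronbach_alpha helper from the earlier sketch and is an illustrative reconstruction under those same assumptions, not the study's code.

```python
import numpy as np

def item_diagnostics(items):
    """Corrected item-total correlation and alpha-if-deleted for each item.

    items: respondents-by-items matrix on a common scale.
    Reuses cronbach_alpha() from the sketch above.
    """
    items = np.asarray(items, dtype=float)
    out = []
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1)
        # Correlate the item with the sum of the remaining items;
        # values below ~0.30 would flag an item for review.
        r = np.corrcoef(items[:, j], rest.sum(axis=1))[0, 1]
        out.append({"item": j,
                    "item_total_r": r,
                    "alpha_if_deleted": cronbach_alpha(rest)})
    return out
```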
Each project's team leaders reviewed the results by CLEQS item and domain. The results varied by team, with mean rating differences between teams by item ranging from > 0.60 on a 4-point scale to > 2.2 on a 5-point scale (Table 3). This variability between teams' CLEQS results indicates that the survey can identify differences among units and point to CLE areas in need of improvement. To address response bias concerns that can impact validity,33 program directors, service unit medical directors, and GME leaders confirmed that their respective team's results were consistent with service line/unit/program data from other system-wide and accreditation tools (Culture of Safety Survey, Team Member Engagement Survey, ACGME Resident/Faculty Surveys). For example, one unit received consistently low scores on “My direct supervisor/attending provides sufficient supervision/feedback,” while another unit received low scores on effective and collaborative teamwork. Other units received consistently high scores for feeling supported by team members or having clear expectations. Team leaders reported that this discrimination between items allowed them to focus on celebrating strengths and targeting improvement strategies specific to their teams. In contrast to system-wide/accreditation tools that are administered annually, they noted that CLEQS can be administered more frequently to provide targeted progress monitoring and longitudinal tracking.
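A simple way to surface this kind of between-team discrimination is to compute, for every item, the spread of team mean ratings. The sketch below assumes the same respondents-by-items matrix as above plus a hypothetical team label per respondent; it illustrates the idea rather than reproducing the study's analysis.

```python
import numpy as np

def between_team_spread(items, team_ids):
    """Per-item spread (max minus min) of team mean ratings.

    items: respondents-by-items matrix on a common scale.
    team_ids: team/unit label for each respondent.
    Items with a large spread discriminate between units and mark
    candidate targets for unit-specific improvement work.
    """
    items = np.asarray(items, dtype=float)
    team_ids = np.asarray(team_ids)
    team_means = np.vstack([items[team_ids == t].mean(axis=0)
                            for t in np.unique(team_ids)])
    return team_means.max(axis=0) - team_means.min(axis=0)
```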
Discussion
We describe the development and pilot testing of CLEQS, an innovative, theory-based tool that can be quickly completed by learners and interprofessional health care team members. Survey reliability was strong. Preliminary validity evidence supports a reasoned argument for CLEQS' validity consistent with Messick's model.31 Item content reflected the 4 learning environment domains.12,13 Response process evidence was obtained via the iterative review of items by multiple key stakeholders, ranging from trainees and faculty to clinical and interprofessional leaders, and an expert in the evaluation of learning environments.18 The relationship to other variables and the ability to identify specific areas for improvement form a strong consequential validity argument, buttressed by team leaders highlighting the tool's utility for acknowledging strengths and tracking CLE improvements at repeated intervals.
From a practical perspective, this tool offers leaders in clinical education a survey instrument that can be easily administered at frequent intervals and is sensitive to the 4 domains of the learning environment and the focal areas within each domain.11 The ability to quickly gather perspectives from all CLE members in a unit or those involved with quality/safety clinical project teams overcomes the barrier of waiting for system/accreditation data sets. This allows GME quality/safety project, program, and/or institutional leaders to initiate and monitor targeted interventions to improve the clinical learning/work environment. While other CLE inventories exist,18 none of them meet all 3 criteria identified by the authors (theory-based, appropriate for all team members, and short).
While the social domain had a strong Cronbach's alpha of 0.79, suggesting that the items clustering in that domain were homogeneous, the other domains had lower alpha levels, implying that the concepts within those domains were more heterogeneous. This is to be expected with any short survey, as increasing the number of items typically increases reliability.34 Conceptually, the social domain items are quite similar (all related to interpersonal teaming), while items in the other 3 domains are composed of a cluster of focal areas: personal—individual meaning and personal safety; material—access to resources and space; and organizational—clarity of expectations and supervision/feedback. The focal areas cover the sub-concepts of each domain and enrich the diagnostic accuracy of the instrument while retaining the virtue of brevity.
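The trade-off between scale length and reliability referenced here is commonly formalized by the Spearman-Brown prophecy formula (offered as context, not a computation from this study): if a scale with reliability $\rho$ is lengthened by a factor $n$ using comparable items, its predicted reliability is

$$\rho^{*} = \frac{n\,\rho}{1 + (n-1)\,\rho}.$$

For example, doubling a scale with a reliability of 0.83 ($n = 2$) predicts $\rho^{*} = 1.66/1.83 \approx 0.91$, illustrating why a deliberately short instrument such as CLEQS accepts somewhat lower domain-level alphas in exchange for feasibility.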
There are several limitations associated with the development and pilot testing of CLEQS. While the respondents were from multiple specialties/service units across 2 teaching hospitals and affiliated clinics, all were drawn from a single institution and were involved in a specific project; they thus constitute a convenience sample and may not be representative of their larger affiliated programs/units. While we applied Messick's unified theory of validity evidence to guide and inform our results, the sampling and convergent validity aspects should be addressed in subsequent work. Ultimately, statistical comparisons of CLEQS data with other data from our programs,35 the ACGME, and system-wide tools will be completed.
Conclusions
CLEQS is an innovative, quick, theory-based tool that interprofessional team members, including learners, can use to evaluate CLEs without undue burden. This instrument focuses on the CLE at the unit level, rather than the overall program, providing insights for continuous improvement at the micro level. Reliability data support its internal consistency, and early validity evidence for this innovation is favorable.
The authors would like to thank each of the 6 Aurora Health Care GME interprofessional team leaders and members for participating in the pilot as part of their Alliance of Independent Academic Medical Centers' National Initiative VII (NI-VII) projects on Teaming for Interprofessional Collaborative Practice and Kayla Heslin, MPH, Aurora University of Wisconsin Medical Group–Aurora Health Care for her statistical support.
References
Author notes
Funding: The authors report no external funding source for this study.
Competing Interests
Conflict of interest: The authors declare they have no competing interests.