Background

The clinical learning environment (CLE) is frequently assessed using perceptions surveys, such as the AAMC Graduation Questionnaire and ACGME Resident/Fellow Survey. However, these survey responses often capture subjective factors not directly related to the trainee's CLE experiences.

Objective

The authors aimed to quantify these subjective factors as “calibration bias” and to show how it varies by health professions education discipline and co-varies with program, patient-mix, and trainee factors.

Methods

We measured calibration bias using 2011–2017 US Department of Veterans Affairs (VA) Learners' Perceptions Survey data to compare medical students and physician residents and fellows (n = 32 830) with nursing (n = 29 758) and allied and associated health (n = 27 092) trainees.

Results

Compared to their physician counterparts, nursing trainees (OR 1.31, 95% CI 1.22–1.40) and allied/associated health trainees (1.18, 1.12–1.24) tended to overrate their CLE experiences. Across disciplines, respondents tended to overrate CLEs when reporting 1 higher level (of 5) of psychological safety (3.62, 3.52–3.73), 1 SD more time in the CLE (1.05, 1.04–1.07), female gender (1.13, 1.10–1.16), 1 of 7 lower academic level (0.95, 1.04–1.07), and having seen the lowest tercile of patients for their respective discipline who lacked social support (1.16, 1.12–1.21) and had low income (1.05, 1.01–1.09), co-occurring addictions (1.06, 1.02–1.10), and mental illness (1.06, 1.02–1.10).

Conclusions

Accounting for calibration bias when using perception survey scores is important to better understand physician trainees and the complex clinical learning environments in which they train.

What was known and gap

While subjective factors are believed to influence how residents and fellows rate their clinical learning environments, how to calibrate for these influences when using such ratings to rank programs by their performance is not well understood.

What is new

We measure calibration bias and show how biases vary by discipline, the trainee's program and facility factors, and the mix of patients that trainees see.

Limitations

Study data were limited to the Department of Veterans Affairs medical centers and to a limited set of predictor factors.

Bottom line

Educators must integrate calibration bias metrics into their perceptions survey results to better understand their residents and fellows and the complex clinical learning environments in which they train.

A critical component of medical education is the clinical learning environment (CLE), where trainees engage in supervised patient care to acquire competencies necessary to enter independent practice.1 To evaluate CLEs for program accreditation, faculty evaluations, and program rankings, education leaders turn to perceptions surveys,2 such as the AAMC Medical School Graduation Questionnaire3 and the Accreditation Council for Graduate Medical Education (ACGME) Resident/Fellow Survey.4 These surveys ask respondents to rate items on 5-point scales (satisfaction, agreement, excellence), with item responses grouped into domains reflecting CLE constructs such as supervision, interaction with faculty, clinical experience, scut work, research opportunities, working environment, personal experiences, and professionalism.2,5,6 While perception surveys can reflect CLE qualities, critics charge that responses may also vary with how questions are framed,7 surveys are designed,8 and response options are quantified.9 Importantly, respondents' subjective characteristics,10 including personality traits, perceptions of personal support, peer morale, and autonomy,11–13 and how respondents retrieve information, make judgments, and interpret survey questions,14–16 have also been shown to impact perception survey responses.

In this study, we propose a theoretical framework that defines calibration bias as the difference between a trainee's self-reported rating and the rating the trainee would have given had they responded with the subjective characteristics of the average trainee respondent. If calibration bias were controlled, trainees would rate the same experience in exactly the same way. Well-validated data from the Department of Veterans Affairs (VA) national CLE surveys10 are used to approximate calibration bias and to assess: (1) whether such biases exist in well-validated survey data; and, if so, (2) whether calibration bias varies by discipline and (3) by trainee and CLE factors.

Conceptual Model

Derived from the 9-criteria evaluation framework,10 figure 1 shows calibration bias as a mediator between the CLE, as the object to be assessed, and the domain scores used to assess the CLE. The bias is a result of subjective17–19 factors that impact how a respondent's experiences are perceived, and threshold9,20 factors that impact how respondents value the 5-point response options they must select to rate those perceptions. These biases can lead respondents to over- or under-rate experiences compared to an “average” rater who, by definition, has no calibration bias. Based on our framework, possible remedies include changes in survey design, administration, scoring, and analyses.

figure 1

Role of Calibration Bias on Relationship Between Clinical Learning Environment and Trainee Perceptions Survey Scores

a Facility-level calibrating items for this example include parking, convenience of the facility location, and electronic health record. Respondent satisfaction with these calibrating items is scored as the calibration index.


Calibration bias is not directly observable. To test for its presence in validated data, we created a calibration index in which respondents rate their satisfaction with selected CLE facility-level “calibrating items,” such as parking, facility location, and electronic health record (EHR), whose experiences are not likely to vary among trainees reporting on the same facility and academic year. The index equals the respondent's calibrating item score minus the average of all such scores from respondents at the given facility and academic year. If calibrating item experiences are invariant, then from figure 1 any variation in index scores must be the result of mediating subjective and threshold factors. A second test does not depend on strictly invariant item experiences. As shown in figure 1, associations between the index score and trainee, patient, program, and other facility-level factors that are not expected to impact a trainee's calibrating item experiences can only be observed if respondents answered the survey in the presence of calibration biases and those biases are influenced by such factors.

Data Setting and Sample

Data came from the Department of Veterans Affairs (VA) Learners' Perceptions Survey (LPS) for physician, nursing, and allied and associated health trainees,21 collected from July 1, 2010, through August 30, 2017. Validated elsewhere,10 the LPS is an anonymous, voluntary, Office of Management and Budget-approved, web-based perceptions survey administered annually to trainees who rotate through a VA medical center as part of a required curriculum for an accredited health professions education program. LPS respondents were solicited through advertising, capturing only 11% of all VA trainees. However, LPS findings have been widely published,10 with physician resident respondents shown to be comparable by specialty, academic level, international status, and gender with US physician residents in ACGME-accredited non-pediatric and non-OB-GYN programs.22

Calibration Bias

Calibration bias is estimated by a proxy index based on how respondents rated their satisfaction with 3 calibrating items on a 5-point scale. Calibrating items are parking, location convenience, and EHRs. Item responses are scored as 1 for “very dissatisfied,” 2 “somewhat dissatisfied,” 3 “neutral,” 4 “somewhat satisfied,” and 5 “very satisfied.” The index, Cindex, is computed by taking the average of the 3 calibrating item scores and subtracting the mean of such averages computed for all survey respondents at the given facility.22,23 Cindex is also expressed in standard deviates (Cz = Cindex divided by the sample SD of Cindex) and in binary form (Cbinary = 1 if Cz > 0, and 0 if Cz ≤ 0). Positive Cindex values indicate trainees whose subjective factors put them at risk of overrating their experiences compared to that of an average respondent, while negative values indicate trainees at risk of underrating their experiences.
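As a concrete illustration, the index computation described above can be sketched in a few lines of pandas. The DataFrame layout and column names (`facility`, `year`, `parking`, `location`, `ehr`) are hypothetical stand-ins, not the actual LPS field names.

```python
import pandas as pd

def calibration_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Score Cindex, Cz, and Cbinary from 3 calibrating items rated 1-5.

    Column names are illustrative, not the LPS's actual variable names.
    """
    out = df.copy()
    # Average the respondent's 3 calibrating item scores.
    out["item_mean"] = out[["parking", "location", "ehr"]].mean(axis=1)
    # Subtract the mean of those averages within each facility x year cell.
    cell_mean = out.groupby(["facility", "year"])["item_mean"].transform("mean")
    out["C_index"] = out["item_mean"] - cell_mean
    # Express in standard deviates, then dichotomize at zero.
    out["C_z"] = out["C_index"] / out["C_index"].std(ddof=1)
    out["C_binary"] = (out["C_z"] > 0).astype(int)
    return out
```

Positive C_index values flag respondents at risk of overrating relative to the average respondent at the same facility in the same academic year.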

The psychometric properties of Cindex have been estimated previously for VA trainees.10 Calibration index values were found to have a mean of −0.06, a range of −3.60 to 2.00, an SD of 0.84, facility-level clustering (ICC = 0.05), and test-retest reliability (ICC = 0.86). We also reported modest scalability (H = 0.38) and internal consistency (Cronbach's alpha = 0.59) among the 3 calibrating items. This is not surprising, as parking and location fall under working environment, while EHR falls under clinical environment. Because calibration bias is expected to reflect the subjective properties of trainees, combining distinct items is analogous to measuring illness severity by counting comorbidities, even though such diseases are clinically distinct and unrelated.24
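The internal consistency figure cited above can be checked with a standard Cronbach's alpha computation. The snippet below is a generic implementation of the usual formula, not the study's own code.

```python
import numpy as np

def cronbach_alpha(items) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

An alpha near 0.59, as observed here, indicates the 3 items move together only loosely, consistent with treating them as distinct markers of a shared respondent trait rather than a unidimensional scale.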

Covariates

Trainee and CLE covariates were computed from LPS survey responses, previously shown to have high consistency (Cronbach's alpha) and test-retest reliability.10 Trainee covariates included professional discipline across 26 professions, academic level in years since high school, and gender. CLE covariates included the percent of time the trainee spent in VA; psychological safety,25 computed from 5-point agreement with the statement “Members of the clinical team of which I was a part are able to bring up problems and tough issues”; a 5-point VA facility service complexity score26; and mix of patients seen, ranked into terciles by discipline, for patients “age 65 and over,” with “chronic mental illness,” “chronic medical illness,” “multiple illnesses,” “substance dependence,” and “low income,” and those who “lacked social support.”
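The within-discipline tercile ranking of patient-mix counts can be done with a grouped quantile cut. The column names and counts below are hypothetical, purely to illustrate the construction.

```python
import pandas as pd

def discipline_terciles(df: pd.DataFrame, col: str) -> pd.Series:
    """Rank a patient-mix count into terciles (1-3) within each discipline."""
    return df.groupby("discipline")[col].transform(
        lambda s: pd.qcut(s, 3, labels=False) + 1  # 0-based bins -> terciles 1..3
    )

# Hypothetical example: counts of low-income patients seen per trainee.
df = pd.DataFrame({
    "discipline": ["medicine"] * 6 + ["nursing"] * 6,
    "n_low_income": [2, 5, 9, 14, 20, 31, 1, 3, 4, 6, 8, 12],
})
df["low_income_tercile"] = discipline_terciles(df, "n_low_income")
```

Ranking within discipline matters because expected patient-mix differs across professions; a count that is low for one discipline may be typical for another.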

Analyses

Independent associations were estimated by regressing the calibration index on trainee and program factors using SPSS generalized linear models, with an identity link function and Gaussian distribution for Cindex and Cz, and a logit link function and binomial distribution for Cbinary.

Results

Table 1 describes sample means, SDs, and frequencies of all study variables. Cindex ranged from −3.563 to 1.628 with an SD of 0.7966, consistent with the theory that calibration biases exist and vary by trainee.

table 1

Sample Demographics


Table 2 shows how calibration, computed as the percent of respondents with positive index values (Cz > 0), varied by discipline (P < .001), ranging from 43.0% (psychology) to 66.9% (physical therapy). Table 3 shows allied and associated health trainees and nursing trainees were 17.8% and 30.5%, respectively, more likely to overreport favorable ratings (Cbinary = 1) and had higher average index scores by 0.085 and 0.142 standard deviates, compared to their physician counterparts. These findings are consistent with our hypothesis that calibration biases vary by academic discipline.

table 2

Percent Trainees with a Positive Calibration Index Score and a High Psychological Safety Rating, and Their Associations, by Discipline

table 3

Independent Associations Between Trainee and CLE Factors and Calibration Index, by How Calibration Is Scoreda


Table 2 shows that psychological safety, computed as the percent of trainees who strongly agreed that their CLE was psychologically safe, was associated with calibration on a discipline-by-discipline basis (except chiropractic). As with calibration, psychological safety also varied by discipline (P < .001). Figure 2 shows that disciplines with a higher percentage of trainees who had a positive calibration index value also had a higher percentage of trainees who strongly agreed their CLE was psychologically safe (r = 0.786, P < .001). Table 2 also reveals that the associations between calibration and psychological safety varied by discipline (P < .001). Across disciplines, the size of the association between calibration and psychological safety was negatively correlated with the percent of a discipline's trainees who strongly agreed their CLE was psychologically safe (r = −0.564, P = .003).

figure 2

Percent Trainees Reporting Positive Calibration Index Valuesa and Psychologically Safeb Clinical Learning Environment by Professional Discipline

a Percent of respondents with Cz > 0 (above sample mean).

b Percent of respondents who “strongly agreed” that their VA clinical learning environment was psychologically safe.


Table 3 describes the independent associations between trainee and CLE factors and calibration, across all study variables, by how bias was scored. Psychological safety had overwhelmingly the largest association: a one-level increase on the 5-point psychological safety scale was associated with an average increase in calibration (Cz) of 0.476 standard deviates. Calibration was also positively associated with lower academic level, female gender, percent of time the trainee spent in VA, more complex facilities, and seeing fewer patients than expected for the respondent's discipline with chronic mental illness, chronic medical illness, alcohol/substance dependence, low income, and lacking social/family support.

Discussion

Our findings highlight the importance of accounting for calibration bias when interpreting CLE perceptions survey scores. Calibration bias is viewed as subjective and threshold factors mediating between a trainee's CLE experience and their satisfaction rating of that CLE. We measured bias severity using an index score that averages how trainees rated the 3 calibrating items, mean-centered by facility and academic year. We observed that these index values varied by trainee and discipline, suggesting calibration biases exist, but only to the extent that trainee experiences were invariant by facility and academic year. In exploratory analyses, we found that patient, program, and trainee factors not expected to impact trainee experiences with parking, location, and EHR were consistently associated with index values, further suggesting the presence of calibration biases.

Our findings are consistent with studies that have shown adjusting for Cindex, under different names, has led to significant changes in reported results when assessing primary care continuity clinics,27  psychological safety,23  trainee preferences for education program elements,22  and interprofessional team care.28  Of note, calibration bias was accounted for by including the index as a covariate to explain a CLE domain score of interest.

The finding of a strong association between psychological safety and calibration bias is also consistent with the role psychological safety has been seen to play in the workplace29–31 and in health professions education,32,33 together with its connection to CLE satisfaction,23 work-related communication,34 team tenure,35,36 perceived care,37 self-awareness, burnout, civility,38 and mental health.39 Our findings suggest trainees who believe their CLE is psychologically unsafe will tend to underrate their CLE experiences below what a rater unaffected by psychological safety would otherwise have rated those same experiences.

Overall, resident physicians reported more negative calibration bias and lower levels of psychological safety compared to their nursing and allied and associated health trainee counterparts. These findings are consistent with the pressures physician trainees face when engaged in the care of complex patients in situations with high levels of ambiguity and uncertainty,40–42 where trainees assume the role of an apprentice with expectations of becoming independent practitioners.43 Resident physicians advance their professional development by engaging in patient care in a supervised environment,44,45 where risk-taking behavior frequently occurs in complex hierarchical situations fraught with uncertainty.38,46–48

This study has limitations. Our methods do not allow separate estimates for subjective and threshold biases. Computation rests on the assumption that calibrating item experiences are either invariant or, at minimum, not affected by other trainee and CLE factors. We also assumed respondents both comprehended the meaning of, and recalled information relevant to answering, the 3 index questions. Similarly, trainee responses may be subject to additional biases when assessing sensitive topics.46–48 However, we believe the impact of such pressures may be minimal because the survey was administered nationally, with only aggregate scores reported to program directors. We also assumed calibration bias is a property of trainees. An alternative approach is to construct separate indices derived from experience-invariant items that are related to the CLE construct of interest.

Readers are also cautioned about extrapolating results to different clinical settings, as VA medical centers can differ from non-VA clinical settings. In addition, with an 11% sampling rate, it is unlikely this convenience sample represents all VA trainees.49,50 However, our purpose here was to compare trainees by discipline, and the LPS resident physician sample has been shown to be comparable to US residents in non-pediatric and non-OB-GYN programs by international status, PGY level, and specialty.22

Finally, our list of Cindex predictors was limited. Future studies should consider the prevalence of depressive disorder, burnout, and chronic anxiety among trainees and teaching faculty that have been shown to be associated with high pessimism, negative perceptions,51  negative-selective memory,52  lower satisfaction intensity,53,54  increased frequency of medical errors,55  and higher rates of medical negligence and malpractice litigation.56,57 

This study offers evidence that a trainee's subjective and threshold factors introduce calibration biases that impact how responses to CLE perceptions surveys should be scored, analyzed, and interpreted. The integration of calibration bias metrics into CLE perceptions surveys should be an integral element in the quest to better understand medical trainees and the complex clinical learning environments in which they train.

References

1. Loo LK, Byrne JM. Towards robust validity evidence for learning environment assessment tools. Acad Med. 2015;90(6):698–699.
2. Colbert-Getz JM, Kim S, Goode VH, Shochet RB, Wright SM. Assessing medical students' and residents' perceptions of the learning environment: exploring validity evidence for the interpretation of scores from existing tools. Acad Med. 2014;89(12):1687–1693.
3. Carney PA, Rdesinski R, Blank AE, Graham M, Wimmers P, Chen HC, et al. Utility of the AAMC graduation questionnaire to study behavioral and social sciences domains in undergraduate medical education. Acad Med. 2010;85(1):169–176.
4. Holt KD, Miller RS, Philibert I, Heard JK, Nasca TJ. Residents' perspectives on the learning environment: data from the Accreditation Council for Graduate Medical Education resident survey. Acad Med. 2010;85(3):512–518.
5. Jaffe RC, Bergin CR, Loo LK, Singh S, Uthlaut B, Glod SA, et al. Nesting domains: a global conceptual model for optimizing the clinical learning environment. Am J Med. 2019;132(7):886–991.
6. Jaffe RC, Bergin CR, Loo LK, Singh S, Uthlaut B, Glod SA, et al. Reactive, holistic, proactive: practical applications of the AAIM learning and working environment conceptual model. Am J Med. 2019;132(8):995–1000.
7. Schwartz N. Self-reports: how questions shape the answers. Am Psychol. 1999;54(2):93–105.
8. Phillips AW, Artino AR. Lies, damned lies, and surveys. J Grad Med Educ. 2017;9(6):677–679.
9. Yock Y, Lim I, Lim YH, Lim WS, Chew N, Archuleta S. Sometimes means some of the time: residents' overlapping responses to vague quantifiers on the ACGME-I resident survey. J Grad Med Educ. 2017;9(6):735–740.
10. Kashner TM, Clarke CT, Aron DC, Byrne JM, Cannon GW, Deemer DA, et al. The 9-criteria evaluation framework for perceptions survey: the case of VA's Learners' Perceptions Survey. Biostat Epidemiol. 2020;4(1):140–171.
11. Vasileva-Stojanovska T, Malinovski T, Marina V, Dobri J, Trajkovik V. Impact of satisfaction, personality, and learning style on educational outcomes in a blended learning environment. Learn Indiv Diff. 2015;38:127–135.
12. Lam H-TC, O'Toole TG, Arola PE, Kashner TM, Chang BK. Factors associated with the satisfaction of millennial generation dental residents. J Dental Educ. 2012;76(11):1416–1426.
13. Perone JA, Fankhauser GT, Adhikari D, Mehta HB, Woods MB, Tyler DS, et al. It depends on your perspective: resident satisfaction with operative experience. Am J Surg. 2017;213(2):253–259.
14. Artino AR, La Rochelle JS, Dezee KJ, Gehlbach H. Developing questionnaires for educational research: AMEE guide no. 87. Med Teach. 2014;36(6):463–474.
15. Krosnick JA. Survey research. Annu Rev Psychol. 1999;50:537–567.
16. Karabenick SA, Woollery ME, Friedel JM, Ammon BV, Blazevski J. Cognitive processing of self-reports items in education research. Educ Psychol. 2007;42(3):139–151.
17. Vlaev I, Chater N, Neil S, Brown GDA. Does the brain calculate value? Trends Cogn Sci. 2011;15(11):546–554.
18. Parducci A. Happiness, Pleasure, and Judgement: The Contextual Theory and Its Applications. Mahwah, NJ: Erlbaum Associates; 1995.
19. Smith RH, Diener E, Wedell DH. Intrapersonal and social comparison determinants of happiness: a range-frequency analysis. J Pers Soc Psychol. 1989;56(3):317–325.
20. Johnson TR. On the use of heterogeneous thresholds ordinal regression model to account for individual differences in response style. Psychometrika. 2003;68(4):563–583.
21. Keitz SA, Holland GJ, Melander EH, Bosworth HB, Pincus SH. The Veterans Affairs Learners' Perceptions Survey: the foundation for education quality improvement. Acad Med. 2003;78(9):910–917.
22. Kashner TM, Hettler DL, Zeiss RA, Aron DC, Bernett DS, Brannen JL, et al. Has interprofessional education changed learning preferences? A national perspective. Health Serv Res. 2017;52(1):268–290.
23. Torralba KD, Loo LK, Byrne JM, Baz S, Cannon GW, Keitz SA, et al. Does psychological safety impact the clinical learning environment for resident physicians? Results from the VA's Learners' Perceptions Survey. J Grad Med Educ. 2016;8(5):699–707.
24. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373–383.
25. Edmondson A. Psychological safety and learning behavior in work teams. Admin Sci Quart. 1999;44(2):350–383.
26. Peterson D. FY17 VHA Facility Complexity Model Overview. Veterans Health Administration Office of Productivity, Efficiency, and Staffing. November 6, 2017.
27. Byrne JM, Chang BK, Gilman SC, Keitz SA, Kaminetzky CP, Aron DC, et al. The learners' perceptions survey-primary care: assessing resident perceptions of internal medicine continuity clinics and patient-centered care. J Grad Med Educ. 2013;5(4):587–593.
28. Byrne JM, Kashner TM, Gilman SC, Wicker AB, Bernett DS, Aron DC, et al. Do patient aligned medical team models of care impact VA's clinical learning environments? Health Services Research and Development/Quality Enhancement Research Initiative National Conference. Philadelphia, PA; July 8–10, 2015.
29. Frazier ML, Fainshmidt S, Klinger RL, Pezeshkan A, Vracheva V. Psychological safety: a meta-analytic review and extension. Personnel Psychol. 2017;70(1):113–165.
30. Newman A, Donohue R, Eva N. Psychological safety: a systematic review of the literature. Human Res Manag Rev. 2017;27(3):521–535.
31. Edmondson AC. The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. New York, NY: Wiley; 2019.
32. Torralba KD, Puder D. Psychological safety among learners: when connection is more than just communication. J Grad Med Educ. 2017;9(4):538–539.
33. Bynum WE, Haque TM. Risky business: psychological safety and the risks of learning medicine. J Grad Med Educ. 2016;8(5):780–782.
34. Yanchus NJ, Derickson R, Moore SC, Bologna D, Osatuke K. Communication and psychological safety in veterans health administration work environments. J Health Organ Manag. 2014;28(6):754–776.
35. Koopman J, Lanaj K, Wang M, Zhou L, Shi J. Nonlinear effects of team tenure on team psychological safety climate and climate strength: implications for average team member performance. J App Psychol. 2016;101(7):940–957.
36. Rosenbaum L. Cursed by knowledge: building a culture of psychological safety. N Engl J Med. 2019;380(8):786–790.
37. Binyamin G, Friedman A, Carmeli A. Reciprocal care in hierarchical exchange: implications for psychological safety and innovative behaviors at work. Psychol Aesth Creat Art. 2017;12(1):79–88.
38. Hernandez W, Luthanen A, Ramsel D, Osatuke K. The mediating relationship of self-awareness on supervisor burnout and workgroup civility & psychological safety: a multilevel path analysis. Burnout Res. 2015;2(1):36–49.
39. Woody RH. Psychological safety for mental health practitioners: suggestions from a defense lawyer. Psychol Inj Law. 2016;9(2):198–202.
40. Domen RE. The ethics of ambiguity: rethinking the role and importance of uncertainty in medical education and practice. Acad Pathol. 2016;3:2374289516654712.
41. Torralba KD, Baz S, Byrne JM, Kashner TM. Understanding Psychological Safety and its Role in Quality Improvement in Graduate Medical Education. In: Textbook for Medical Education Programs. 12th ed. Alexandria, VA: Alliance for Academic Internal Medicine; 2017.
42. Hillen MA, Gutheil CM, Strout TD, Smets EM, Han PK. Tolerance of uncertainty: conceptual analysis, integrative model, and implications for healthcare. Soc Sci Med. 2017;180:62–75.
43. Cruess RL, Cruess SR, Steinert Y. Medicine as a community of practice: implications for medical education. Acad Med. 2018;93(2):185–191.
44. Goodwin D, Pope C, Mort M, Smith A. Access, boundaries and their effects: legitimate participation in anaesthesia. Sociol Health Illn. 2005;27(6):855–871.
45. Amalberti R, Vincent C, Auray Y, de Saint Maurice G. Violations and migrations in health care: a framework for understanding and management. Qual Saf Health Care. 2006;15(suppl 1):i66–71.
46. Loo LK, Puri N, Kim DI, Kawayeh A, Baz S, Hegstad D. “Page me if you need me”: the hidden curriculum of attending-resident communication. J Grad Med Educ. 2012;4(3):340–345.
47. Farnan JM, Johnson JK, Meltzer DO, Humphrey HJ, Arora VM. Resident uncertainty in clinical decision making and impact on patient care: a qualitative study. Qual Saf Health Care. 2008;17(2):122–126.
48. Bush RW. Supervision in medical education: logical fallacies and clear choices. J Grad Med Educ. 2010;2(1):141–143.
49. Fauth T, Hattrup K, Mueller K, Roberts B. Nonresponse in employee attitude surveys: a group-level analysis. J Business Psychol. 2013;28(1):1–16.
50. Prins JT, Hoekstra-Weebers JEHM, Gazendam-Donofrio SM, Dillingh GS, Bakker AB, Huisman M, et al. Burnout and engagement among resident doctors in the Netherlands: a national study. Med Educ. 2010;44(3):236–247.
51. Beck AT, Rush JA, Shaw BF, Emery G. Cognitive Therapy of Depression. New York, NY: Guilford Press; 1979.
52. Williams JMG, Barnhofer T, Crane C, Hermans D, Raes F, Watkins E, et al. Autobiographical memory specificity and emotional disorder. Psychol Bull. 2007;133(1):122–148.
53. Kassam A, Horton J, Shoimer I, Patten S. Predictors of well-being in resident physicians: a descriptive and psychometric study. J Grad Med Educ. 2015;7(1):70–74.
54. Dunn J, Ng SK, Breitbart W, Aitken J, Youl P, Baade PD, et al. Health-related quality of life and life satisfaction in colorectal cancer survivors: trajectories of adjustment. Health Qual Life Outcomes. 2013;11(1):46.
55. Thomas NK. Resident burnout. JAMA. 2004;292(23):2880–2889.
56. Anagnostopoulos F, Liolios E, Persefonis G, Slater J, Kafetsios K, Niakas D. Physician burnout and patient satisfaction with consultation in primary health care settings: evidence of relationships from a one-with-many design. J Clin Psychol Med Settings. 2012;19(4):401–410.
57. Zis PA, Anagnostopoulos F, Sykioti P. Burnout in medical residents: a study based on the job demands-resources model. Sci World J. 2014;2014:673279.

Author notes

Funding: This study was funded by the Department of Veterans Affairs, Veterans Health Administration, in the Office of Research and Development, Health Services Research and Development Service IIR#14-071 and IIR#15-084, and by the MacPherson Society, Loma Linda University School of Medicine.

Competing Interests

Conflict of interest: The authors declare they have no competing interests.

The authors would like to thank the following individuals for their help on the project: Christopher T. Clarke, PhD, David Bernett, BA, and Laura Stefanowycz, Office of Academic Affiliations Data Management and Support Center, St. Louis, MO; Grant W. Cannon, Utah VA Medical Center, University of Utah School of Medicine; Catherine P. Kaminetzky, Puget Sound VA Healthcare System, University of Washington School of Medicine; Sheri A. Keitz, University of Massachusetts School of Medicine; and Shanalee Tamares, MLIS, and Daniel Reichert, MD, Loma Linda University.