Background

Residents receive little information about how they interact with patients.

Objective

This pilot study assessed the feasibility and validity of a new 16-item tool developed to assess patients' perspectives of interns' communication skills and professionalism and the team's communication.

Methods

Feasibility was determined by the percentage of surveys completed, the average time for survey completion, the percentage of target interns evaluated, and the mean number of evaluations per intern. Generalizability was analyzed using an (evaluator:evaluatee) × item model. Simulated D studies estimated optimal numbers of items and evaluators. Factor analysis with varimax rotation was used to examine the structure of the items. Scores were correlated with other measures of communication and professionalism for validation.

Results

Most patients (225 of 305 [74%]) completed the evaluation. Each survey took approximately 6.3 minutes to complete. In 43 days over 18 weeks, 45 of 50 interns (90%) were evaluated an average of 4.6 times. Fifty evaluations would be required to reach a minimally acceptable coefficient (0.57). Factor analysis identified a 2-factor structure. The evaluation did not correlate with faculty evaluations of resident communication but did correlate weakly (r = 0.140, P = .04) with standardized patient evaluations.

Conclusions

A large number of patient evaluations are needed to reliably assess intern and team communication skills. Evaluations by patients add a perspective in assessing these skills that is different from those of faculty evaluations. Future work will focus on whether this new information adds to existing evaluation systems and warrants the added effort.

What was known

Residents benefit from feedback on their developing interpersonal and communication skills.

What is new

A brief patient assessment of communication skills tapped into constructs different from faculty evaluation of trainees' skills, although a large sample of evaluations was needed to achieve acceptable reliability.

Limitations

Single-institution study may limit generalizability; items were read to patients, and responses may have differed in an anonymous administration of the tool.

Bottom line

Patient evaluations add a different perspective in assessing resident communication skills. Future research is needed to assess whether the additive value warrants the added effort.

Editor's Note: The online version of this article includes the mean scores for intern communication and team-based communication items.

Communication and professionalism are important components of patient-physician relationships.1 Effective communication improves patients' emotional health and contributes to symptom resolution, improved functional and physiologic status, better pain control, higher patient adherence and satisfaction, and a lower probability of malpractice suits.1–5

Residents receive little information about how they interact with patients. Most feedback is provided by faculty who may observe a resident-patient interaction6; however, these observations tend to occur infrequently. Additionally, feedback from faculty may not take into account the patient's perspective of the interaction with the trainee. The patient's perspective of residents' communication skills is important given the fundamental role of patient-centeredness in high-quality care.7 Patients' evaluations provide a different perspective of residents' behavior, empower patients to contribute to medical education, and give insight into improving the patient-physician interaction.8–11

Patients could have a unique view of the effectiveness of communication within teams. Patients interact not only with their primary physicians but also with those physicians providing cross-coverage. As patient handoffs increase,12 communication within a team is crucial to maintaining safe and effective care.13 While there is a growing body of literature on patient assessment of individual physicians, to our knowledge, no studies have assessed team communication from patients' perspectives. Patients' perceptions of communication among providers, as they relate to the care they receive during their hospitalization, may give a unique perspective in identifying maladaptive team dynamics.

The Accreditation Council for Graduate Medical Education (ACGME) mandates multisource assessment of all trainees14 that includes evaluations from faculty, other trainees, nonphysician colleagues, and patients. In internal medicine (IM), patient surveys such as the American Board of Internal Medicine's (ABIM) Patient Satisfaction Questionnaire (PSQ)15 and a portion of the National Committee for Quality Assurance survey16 have been used to assess physicians' communication skills. However, previous research on patient evaluations of residents is relatively limited and has focused predominantly on the ambulatory setting.17–20 In an ophthalmology residency clinic, a patient satisfaction survey was able to detect differences in patients' perceptions of communication among individual residents.20 Given that many trainees spend most of their time in the inpatient setting, it is important to assess trainees there. However, only a few studies in inpatient pediatrics21–23 and IM24 have evaluated residents in the inpatient setting, and these studies have not demonstrated an efficient process or a validated tool.

A barrier to implementing patient assessments of resident communication skills is the large number of evaluations necessary to make assessment decisions.17,25 Several strategies may help overcome this barrier. First, evaluation collection could be expanded to the inpatient setting to yield higher numbers of evaluations. Second, while electronic surveys using a tablet device have been shown to improve patient response rates,26 it is not known whether electronically completed, tablet-based evaluations administered while patients are still hospitalized can improve response rates. Third, because a high number of patient evaluations of residents is typically required and many programs have multiple trainees, an electronic format also may make data synthesis and reporting more feasible.

Given limited published experience with collecting electronic patient assessments of IM residents' communication skills and team communication skills in the inpatient setting, we developed a new iPad-based tool to assess patients' perspectives of their interns' communication and professionalism qualities, as well as the team's communication qualities. A pilot study was designed to estimate feasibility and to evaluate validity evidence based on internal structure (reliability) and relationships with other variables.27

Development of the Intern and Team Communication Instrument

An iPad application was built for a 16-item instrument assessing the patient's perspective of an intern's skills in communication and professionalism (12 items) and the communication of that intern's team with each other (4 items). Instrument content was based on a literature review of preexisting tools.15,16,21,28 Nine questions were adapted from the ABIM PSQ,15 2 from the Consumer Assessment of Healthcare Providers and Systems,16 and 1 from the Physicians' Humanistic Behaviors Questionnaire Humanism scale.28 Two items pertaining to team communication were adapted from an existing instrument,21 and 2 were novel. All items were rated on a 5-point scale (1 = poor, 2 = fair, 3 = good, 4 = very good, 5 = excellent), with a sixth option of unable to answer/not applicable.

Obtaining Patient Evaluations

The study took place on 4 IM inpatient services between February and June 2012. Each team consisted of an attending physician, a resident, 2 interns (or 1 intern and 1 subintern), and 1 to 2 medical students. Fifty interns rotated on these services during the study period for 1 to 4 weeks. On a daily basis, using the hospital electronic health record, a research assistant (RA) identified all patients cared for by an intern who were being discharged that day, as well as patients cared for by an intern on the day that intern was rotating to another service. Patients cared for by rotating interns from other residency programs or by medical student subinterns were excluded. On the day of discharge or intern switch day, the RA asked patients to complete the survey. A photograph of the intern being evaluated was presented to confirm correct identification. The RA read the survey items and possible answers aloud. Patients were excluded if their primary language was not English or if they could not participate due to altered mental status.

Approval was obtained from the Institutional Review Board of the Perelman School of Medicine at the University of Pennsylvania.

Performance of the Instrument

To determine feasibility, descriptive statistics were used to determine the percentage of surveys completed, the average time for survey completion, the percentage of target interns evaluated, and the mean number of evaluations per intern. Frequencies for all items were computed. To assess internal structure, homogeneity was assessed using Cronbach's alpha. Generalizability was analyzed using an (evaluator:evaluatee) × item model, which determines how much of the observed variation is explained by the evaluator, the evaluatee, or the instrument items. Simulated D studies, which provide theoretical estimates of how score precision would change if items or evaluators were added or removed, estimated optimal numbers of items and evaluators. Factor analysis with varimax rotation examined the structure of the 16 items. Average scores were compared between patients who were in an isolation room and those who were not, and between patients who were admitted by the intern being evaluated and those who were not.

Validity Evidence for the Instrument

The scores of the patient evaluations were correlated with other measures: evaluations by faculty and a standardized patient (SP) examination. In our program, faculty members are required to complete an electronic evaluation of each intern with whom they work for at least 1 week. These end-of-rotation evaluations provide a summative assessment of a trainee based on the 6 ACGME core competencies. Three items assess the trainee's communication skills and professionalism, and 1 assesses teamwork. Each item is scored on a Likert scale of 1 (lowest) to 9 (highest). Patient evaluation scores of communication skills were correlated with faculty ratings on communication. Patient scores of interteam communication were correlated with faculty ratings on teamwork.

Interns at our institution are required to complete an SP examination during the second half of their intern year. This examination assesses their communication skills and professionalism in 4 different counseling scenarios. Interns were assessed by the SP using case-specific checklists with 7 to 10 yes/no items indicating whether key counseling elements had been performed. Six interpersonal skills (eliciting information, listening, giving information, respectfulness, empathy, and professionalism) were rated on a 4-point scale (1 = poor/almost never, 2 = fair/somewhat less, 3 = good/somewhat more, 4 = very good/almost always). An additional item asked, “How comfortable would you feel referring a family member or friend to this doctor?” (1 = not at all, 2 = somewhat, 3 = comfortable, 4 = very comfortable). Interns received feedback if they required remediation. We assessed the correlation between trainees' scores on the SP interpersonal communication assessment and the results of the patient evaluations of intern communication.

Obtaining Patient Evaluations

Of 305 patients approached, 225 (74%) completed surveys over 18 weeks. Thirty-four patients (11%) refused to complete the survey; the main reasons for refusal included inconvenient timing prior to discharge or during a meal, inadequate pain management, or not desiring participation. Twenty-four patients (8%) were unable to complete the survey due to language or cognitive barriers. Twenty-two patients (8%) did not recognize the intern being evaluated or believed they had not spent enough time with him or her. An indeterminate number of surveys were not completed due to factors such as network connectivity problems, technical issues with the iPad application, and inability to identify all patients being discharged. Each survey took on average 6.3 minutes to complete (range = 6, SD = 2.36). Of 50 interns, 45 (90%) received at least 1 evaluation. On average, interns were evaluated 4.6 times (range = 10, SD = 3.2).

Performance of the Instrument

Mean scores for intern communication and team-based communication items were 4.2 (range = 3.5) and 3.6 (range = 3.8), respectively (provided as online supplemental material). Survey items and the intern being evaluated explained 9.2% and 1.5% of variation, respectively (table 1). The reproducibility coefficient for 5 evaluations per intern was 0.12. Increasing the number of evaluations to 12 or 15 raised the coefficient to 0.25 or 0.29, respectively. Fifty evaluations would be required to reach a coefficient close to acceptable (0.57) and 165 evaluations to reach an ideal coefficient (0.80). Scores on the patient evaluations did not significantly differ by whether the patient was in isolation (n = 53, P = .33) or whether the patient was admitted by the intern being evaluated (n = 58, P = .47).
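This scaling of the reproducibility coefficient with the number of evaluations follows the usual D-study projection, which is equivalent to the Spearman-Brown prophecy formula. The sketch below is illustrative only: the single-evaluation reliability is back-calculated from the reported 5-evaluation coefficient of 0.12, not taken from the study's variance components.

```python
# Simulated D-study sketch: project the reproducibility coefficient for the
# mean of n evaluations per intern from a single-evaluation reliability r1.

def projected_coefficient(r1: float, n: int) -> float:
    """Spearman-Brown projection: reliability of the mean of n evaluations."""
    return n * r1 / (1 + (n - 1) * r1)

# Back-calculate r1 from the reported 5-evaluation coefficient of 0.12
# (an illustrative assumption, not a value reported in the study):
# solve 0.12 = 5*r1 / (1 + 4*r1) for r1.
r1 = 0.12 / (5 - 4 * 0.12)

for n in (5, 12, 15, 50, 165):
    print(n, round(projected_coefficient(r1, n), 2))
```

With this single back-calculated parameter, the projected coefficients land close to the reported values of 0.25, 0.29, 0.57, and 0.80 for 12, 15, 50, and 165 evaluations, which is why so many evaluations are needed when a single evaluation carries little reliable signal.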

TABLE 1

Generalizability Analysis and Simulated D Studies of Patient Evaluations

A principal component factor analysis with varimax rotation revealed a 2-factor structure (table 2). The first factor, individual physician communication, comprised 12 items with factor loadings ranging from 0.685 to 0.865. The second factor, team communication, comprised 4 items with factor loadings ranging from 0.646 to 0.852.
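For a 2-factor solution like this one, varimax rotation reduces to a one-parameter problem: find the orthogonal rotation angle that maximizes the varimax criterion (the summed variance of squared loadings across factors), which pushes each item toward loading highly on one factor and near zero on the other. A minimal sketch using made-up loadings, not the study's data:

```python
import math

def varimax_criterion(loadings):
    """Raw varimax criterion: sum over factors of the variance of squared loadings."""
    p = len(loadings)
    total = 0.0
    for j in range(2):  # 2-factor case
        sq = [row[j] ** 2 for row in loadings]
        total += sum(x * x for x in sq) / p - (sum(sq) / p) ** 2
    return total

def varimax_2factor(loadings, steps=2000):
    """Grid search over the rotation angle (period pi/2); returns the rotated
    loading matrix maximizing the varimax criterion. 2-factor case only."""
    best, best_crit = loadings, varimax_criterion(loadings)
    for k in range(1, steps):
        t = (math.pi / 2) * k / steps
        c, s = math.cos(t), math.sin(t)
        rotated = [(a * c + b * s, -a * s + b * c) for a, b in loadings]
        crit = varimax_criterion(rotated)
        if crit > best_crit:
            best, best_crit = rotated, crit
    return best

# Illustrative loadings with simple structure, deliberately mixed by a
# 30-degree rotation; varimax should recover the simple structure.
simple = [(0.85, 0.05), (0.80, 0.10), (0.05, 0.80), (0.10, 0.75)]
t0 = math.radians(30)
mixed = [(a * math.cos(t0) + b * math.sin(t0),
          -a * math.sin(t0) + b * math.cos(t0)) for a, b in simple]
recovered = varimax_2factor(mixed)
```

Production analyses would use a dedicated routine (e.g., Kaiser's iterative algorithm) rather than a grid search; the grid version is shown because it makes the objective explicit.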

TABLE 2

Factor Structure of Tablet-Completed Patient Evaluation of Internal Medicine Interns' Communication Skills and Team Communication


Validity Evidence for the Instrument

The Cronbach's alpha for all items on the instrument was 0.961, with values of 0.970 and 0.816 for the individual physician communication and team communication items, respectively, indicating a high level of internal consistency. The average of each trainee's SP examination interpersonal communication score was significantly correlated with the patient evaluation individual physician communication factor (r = 0.140, P = .04). Patient evaluations of individual physician communication and team communication did not significantly correlate with faculty evaluations (table 3).

TABLE 3

Correlation of Patient Evaluations of Interns' Communication Skills and Team Communication and Faculty and Standardized Patient Evaluations of Interns

Discussion

A 16-item patient evaluation assessing interns' communication and professionalism skills and team communication was developed. Most patients (225 of 305 [74%]) were able to complete the evaluation. Factor analysis confirmed that the evaluation assessed 2 separate factors. Although each evaluation took only approximately 6 minutes to complete, at least 50 evaluations would be required per intern for reliable assessment, replicating other studies that have also found at least 50 patient evaluations to be necessary.15,17,24,25

Patient evaluations did not correlate with faculty measures of communication but did weakly correlate with scores obtained from an SP exercise. This pattern may reflect the fact that patients see a different aspect of communication that cannot be, or was not, observed by faculty, although additional validity testing should be performed to test this hypothesis. Most physicians and educators agree that patients' evaluations add a unique perspective to physician assessment and can empower patients.11,29,30 The lack of correlation with evaluations by faculty may mean that patient evaluations add to the overall assessment of communication and professionalism. Faculty assessment of teamwork also did not correlate with patient evaluation of teamwork; it is not clear that patients' assessment of team cohesiveness measures the same skills an attending uses to assess a trainee's ability to function within a team. Further validity testing is required. Reasons for patients' refusal to complete evaluations may provide useful information. For example, 8% (25 of 305) of patients were unable to identify their primary provider while hospitalized or stated they had not spent enough time with the intern to assess his or her skills. This qualitative information may help identify outliers and may in itself suggest a deficit in communication skills. Alternatively, it may provide information about the structure of care delivery in the era of duty hour regulations and the patient care discontinuity created by increased handoffs.12 Further exploration of the impact of increased handoffs on patients' perceptions of communication with their intern providers would be beneficial.

Our study highlights the questionable feasibility of using patient evaluations for summative feedback during inpatient training. Our instrument was adapted primarily from the ABIM PSQ, which demonstrated reliability with a reproducibility coefficient of 0.7 when obtaining 20 evaluations and 0.8 with 35 or more evaluations.15 In our study, at least 50 evaluations were required per trainee to achieve acceptable reliability. Differences between the ambulatory and inpatient settings could account for the larger number of evaluations needed: despite longer exposure to trainees in the inpatient setting, patients see multiple providers during a hospital stay and may not develop long-term relationships. Other inpatient studies have similarly demonstrated that at least 50 evaluations are required for a coefficient of 0.67.24 For large residency programs, even 20 to 35 evaluations may not be feasible. Additional research is needed to determine whether patient evaluations collected electronically are associated with higher response rates and better validity than paper-based evaluations and telephone surveys, which may be limited by low response rates. If electronic evaluations improve response rates and assist in data synthesis, this assessment modality may help collect information about trainees in residency programs with fewer resources. Given the weak correlation of the patient evaluations with the SP evaluation of trainees, future investigations should explore the possibility of using an SP exercise without the need to obtain a large number of patient evaluations.

There are several limitations. This study was conducted at a single institution. The items on the tool were read by the RA. Patient responses might vary if they were given privacy to report answers.24 Additional research is needed to determine whether the individual collecting the data needs to be disassociated from the patient care team to promote honest feedback. The survey also collected only quantitative data with no option for qualitative information. Narrative patient feedback may explore what patients view as important in communication and professionalism.31,32 

A 16-item patient evaluation assessing interns' communication and professionalism skills and team communication was developed. A large number of evaluations is required to provide a reliable patient assessment of trainee communication skills, and this may be prohibitive for implementing such evaluations during training. If patient evaluations continue to be a required component of trainee assessment in IM, or if there is interest in expanding patient input in other specialties, future research will need to focus on effectively collecting such feedback to benefit physicians in training and continuing medical education.

References

1. Stewart MA. Effective physician-patient communication and health outcomes: a review. CMAJ. 1995;152(9):1423–1433.
2. van der Leeuw RM, Lombarts KM, Arah OA, Heineman MJ. A systematic review of the effects of residency training on patient outcomes. BMC Med. 2012;10:65.
3. Pollak KI, Alexander SC, Tulsky JA, Lyna P, Coffman CJ, Dolor RJ, et al. Physician empathy and listening: associations with patient satisfaction and autonomy. J Am Board Fam Med. 2011;24(6):665–672.
4. Levinson W, Roter DL, Mullooly JP, Dull VT, Frankel RM. Physician-patient communication. The relationship with malpractice claims among primary care physicians and surgeons. JAMA. 1997;277(7):553–559.
5. King A, Hoppe RB. “Best practice” for patient-centered communication: a narrative review. J Grad Med Educ. 2013;5(3):385–393.
6. McLeod PJ, Tamblyn R, Benaroya S, Snell L. Faculty ratings of resident humanism predict patient satisfaction ratings in ambulatory medical clinics. J Gen Intern Med. 1994;9(6):321–326.
7. Committee on Quality of Healthcare in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: Institute of Medicine; 2001.
8. Reinders MD, Blankenstein AH, van der Horst HE, Knol DL, Schoonheim PL, van Marwijk HW. Does patient feedback improve the consultation skills of general practice trainees? A controlled trial. Med Educ. 2010;44(2):156–164.
9. Duffy FD, Gordon GH, Whelan G, Cole-Kelly K, Frankel R, Buffone N, et al. Assessing competence in communication and interpersonal skills: the Kalamazoo II report. Acad Med. 2004;79(6):495–507.
10. Delbanco TL. Enriching the doctor-patient relationship by inviting the patient's perspective. Ann Intern Med. 1992;116(5):414–418.
11. Dalia S, Schiffman FJ. Who's my doctor? First-year residents and patient care: hospitalized patients' perception of their “main physician.” J Grad Med Educ. 2010;2(2):201–205.
12. Horwitz LI, Krumholz HM, Green ML, Huot SJ. Transfers of patient care between house staff on internal medicine wards: a national survey. Arch Intern Med. 2006;166(11):1173–1177.
13. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign-out. J Hosp Med. 2006;1(4):257–266.
14. Accreditation Council for Graduate Medical Education. Internal medicine program requirements.
15. American Board of Internal Medicine PSQ Project Co-Investigators. Final report on the patient satisfaction questionnaire project. Philadelphia, PA: American Board of Internal Medicine; 1989.
16. Consumer Assessment of Healthcare Providers and Systems (CAHPS); Agency for Healthcare Research and Quality. US Department of Health and Human Services. http://www.cahps.ahrq.gov/. Accessed May 9, 2013.
17. Tamblyn R, Benaroya S, Snell L, McLeod P, Schnarch B, Abrahamowicz M. The feasibility and value of using patient satisfaction ratings to evaluate internal medicine residents. J Gen Intern Med. 1994;9(3):146–152.
18. Yancy WS Jr, Macpherson DS, Hanusa BH, Switzer GE, Arnold RM, Buranosky RA, et al. Patient satisfaction in resident and attending ambulatory care clinics. J Gen Intern Med. 2001;16(11):755–762.
19. Dawn AG, Lee PP, Hall-Stone T, Gable W. Development of a patient satisfaction survey for outpatient care: a brief report. J Med Pract Manage. 2003;19(3):166–169.
20. Jagadeesan R, Kalyan DN, Lee P, Stinnett S, Challa P. Use of a standardized patient satisfaction questionnaire to assess the quality of care provided by ophthalmology residents. Ophthalmology. 2008;115(4):738–743.
21. Co JP, Mohamed H, Kelleher ML, Edgman-Levitan S, Perrin JM. Feasibility of using a tablet computer survey for parental assessment of resident communication skills. Ambul Pediatr. 2008;8(6):375–378.
22. Brinkman WB, Geraghty SR, Lanphear BP, Khoury JC, Gonzalez del Rey JA, Dewitt TG, et al. Effect of multisource feedback on resident communication skills and professionalism: a randomized controlled trial. Arch Pediatr Adolesc Med. 2007;161(1):44–49.
23. Brinkman WB, Geraghty SR, Lanphear BP, Khoury JC, Gonzalez del Rey JA, DeWitt TG, et al. Evaluation of resident communication skills and professionalism: a matter of perspective? Pediatrics. 2006;118(4):1371–1379.
24. Woolliscroft JO, Howell JD, Patel BP, Swanson DB. Resident-patient interactions: the humanistic qualities of internal medicine residents assessed by patients, attending physicians, program supervisors, and nurses. Acad Med. 1994;69(3):216–224.
25. Nelson EC, Gentry MA, Mook KH, Spritzer KL, Higgins JH, Hays RD. How many patients are needed to provide reliable evaluations of individual clinicians? Med Care. 2004;42(3):259–266.
26. Parker MJ, Manan A, Urbanski S. Prospective evaluation of direct approach with a tablet device as a strategy to enhance survey study participant response rate. BMC Res Notes. 2012;5:605.
27. Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: theory and application. Am J Med. 2006;119(2):166.e7–e16.
28. Weaver MJ, Ow CL, Walker DJ, Degenhardt EF. A questionnaire for patients' evaluations of their physicians' humanistic behaviors. J Gen Intern Med. 1993;8(3):135–139.
29. Weissman JS, Schneider EC, Weingart SN, Epstein AM, David-Kasdan J, Feibelmann S, et al. Comparing patient-reported hospital adverse events with medical record review: do patients know something that hospitals do not? Ann Intern Med. 2008;149(2):100–108.
30. Gallagher TH, Levinson W. Physicians with multiple patient complaints: ending our silence. BMJ Qual Saf. 2013;22(7):521–524.
31. Bergman AA, Connaughton SL. What is patient-centered care really? Voices of Hispanic prenatal patients. Health Commun. 2013;28(8):789–799.
32. Wiggins MN, Coker K, Hicks EK. Patient perceptions of professionalism: implications for residency education. Med Educ. 2009;43(1):28–33.

Author notes

C. Jessica Dine, MD, MSHPR, is Associate Program Director, Internal Medicine Residency Program, and Assistant Professor of Medicine, Perelman School of Medicine, University of Pennsylvania; Stefanie Ruffolo, MD, is Senior Resident, Internal Medicine and Pediatrics Residency Program, Hospital of the University of Pennsylvania; Jennifer Lapin, PhD, is Senior Research Analyst and Graduate Medical Education Director, Office of Evaluations and Assessment, Perelman School of Medicine, University of Pennsylvania; Judy A. Shea, PhD, is Associate Dean of Medical Education Research and Director of Faculty Growth and Development, Perelman School of Medicine, University of Pennsylvania; and Jennifer R. Kogan, MD, is Director for Undergraduate Education, Department of Medicine, and Associate Professor of Medicine, Perelman School of Medicine, University of Pennsylvania.

Funding: This study was funded in part by a Division of General Internal Medicine grant from the University of Pennsylvania.

The authors would like to thank the interns and patients who participated in this research study, as well as the residency program leadership.

Supplementary data