Background

End-of-shift assessments (ESA) can provide representative data on medical trainee performance but do not occur routinely and are not documented systematically.

Objective

To evaluate the implementation of a web-based tool with text message prompts to assist mobile ESA (mESA) in an emergency medicine (EM) residency program.

Methods

The mESA program used timed text messages to prompt faculty and trainees to expect in-person qualitative ESA in a milestone content area and to prompt faculty to record descriptive performance data through a web-based platform. We assessed implementation between January 2018 and November 2019 using the RE-AIM framework (reach, effectiveness, adoption, implementation, and maintenance).

Results

Reach: 96 faculty and 79 trainees participated in the mESA program. Effectiveness: From surveys, approximately 72% of faculty and 58% of trainees reported increases in providing and receiving ESA feedback after program implementation. From ESA submissions, trainees reported receiving in-person feedback on 90% of shifts. Residency leadership confirmed perceived utility of the mESA program. Adoption: mESA prompts were sent on 7792 unique shifts across 4 EDs, on all days of the week, and at different times of day. Faculty electronically submitted ESA feedback on 45% of shifts. Implementation quality: No technological errors occurred. Maintenance: Completion of in-person ESA feedback and electronic submission of feedback by faculty were stable over time.

Conclusions

We found mixed evidence in support of using a web-based tool with text message prompts for mESA for EM trainees.

What was known and gap

End-of-shift assessments of medical trainees are important to conduct but suboptimally performed and not systematically recorded.

What is new

To improve end-of-shift assessments, a digital tool that uses text message prompts and feedback with web-based data entry was implemented in an academic emergency medicine residency over a 22-month period.

Limitations

The study was conducted at a single institution and included only emergency medicine residents, limiting generalizability to other sites, training levels, and specialties.

Bottom line

A digital tool utilizing text messaging and web-based entry can improve end-of-shift assessments for medical trainees but has yet to be optimized.

Assessment and feedback are critical components of the education and training of medical trainees.1  The Accreditation Council for Graduate Medical Education (ACGME) requires that residency programs provide trainees with assessment of clinical performance,2  but does not specify how such assessments and feedback should be conducted or recorded.

One potentially useful form of trainee assessment and feedback occurs at the end of each clinical day or shift. These end-of-shift assessments (ESA) overcome limitations of less frequent end-of-rotation assessments and allow for contemporaneous feedback to trainees.3–5 However, ESA add a daily burden on clinicians to complete and document these assessments, limiting systematic submissions. When documented on traditional paper forms, ESA also add significant administrator burden to collect and collate submissions.

Electronic platforms can assist ESA by prompting clinicians to perform assessments and by automating the collection of qualitative and quantitative feedback on trainee performance. Electronic ESA platforms have been used for surgical residents1,4 and medical students.1,5 There is some evidence that electronic ESA programs improve completion rates above paper forms.6–8 Still, evidence of the uptake and utility of computerized ESA programs remains scant. In this study, we examine the implementation of a web-based tool with text message prompts to assist mobile ESA (mESA) in an emergency medicine (EM) residency program.

Setting and Participants

We implemented the system in January 2018 in a 3-year EM residency program where trainees rotate through 4 emergency departments (EDs). Each site is an academic, tertiary care hospital, with a combined annual volume of approximately 200 000 ED patient visits.

Interventions

The mESA program has several core components (Box 1, Figure 1). For faculty, at the scheduled shift start time, the program sends a text message indicating which trainee is to be assessed and assigns a randomly selected content area. Content areas are based on ACGME EM milestones9 and include communication, medical decision-making, efficiency, professionalism, and focused history and physical examinations (see table in online supplemental material). The intent is to “prime” the faculty member to provide feedback and allow them to prospectively collect experiential data during the shift to inform their assessment.

Box 1 Core Components of Web-Based Tool With Text Message Prompts
  1. System for querying commercially available scheduling software and extracting shift-related data.

  2. System for matching faculty-trainee dyads by location of hospital emergency department, location of shift within ED (ie, “pod”), and duration of overlap between faculty and trainee.

  3. System for sending text messages to faculty and trainees at scheduled intervals.

  4. System for receiving and storing incoming text messages.

  5. Secure website for presenting trainee assessment fields to residency faculty.

  6. Searchable database of all ESA outgoing and incoming data for administrators.
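
As an illustration of component 2, the Python sketch below shows one way the faculty-trainee matching could work. It is a hypothetical sketch, not the program's actual code; the Shift record, field names, and the tie-breaking rule of picking the longest overlap are assumptions based on the description above.

from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class Shift:
    # Hypothetical shift record extracted from the scheduling software (component 1)
    person: str       # faculty or trainee name
    ed_site: str      # hospital emergency department
    pod: str          # location of shift within the ED
    start: datetime
    end: datetime

def overlap_minutes(a: Shift, b: Shift) -> float:
    """Minutes that two shifts overlap; 0 if they do not overlap."""
    latest_start = max(a.start, b.start)
    earliest_end = min(a.end, b.end)
    return max((earliest_end - latest_start).total_seconds() / 60, 0)

def match_faculty(trainee_shift: Shift, faculty_shifts: List[Shift]) -> Optional[Shift]:
    """Pair a trainee shift with the faculty shift at the same ED and pod
    that has the longest overlap (component 2)."""
    candidates = [
        f for f in faculty_shifts
        if f.ed_site == trainee_shift.ed_site
        and f.pod == trainee_shift.pod
        and overlap_minutes(f, trainee_shift) > 0
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda f: overlap_minutes(f, trainee_shift))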

Figure 1: Overview of Web-Based Tool With Text Message Prompts for End-of-Shift Resident Evaluations

One hour prior to shift end, the mESA program sends a text hyperlink to a secure mESA website where the faculty member completes a 3-question assessment form (screenshots provided as online supplemental material). If feedback is not submitted by 8am the day following the prompt, a reminder text is sent. Each month, faculty receive the text: “In [month] you completed [#] out of [#] assessments for a [%] completion rate.”
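
As a minimal sketch of this messaging schedule, the Python snippet below computes the three faculty-facing send times described above (priming prompt at shift start, feedback link 1 hour before shift end, reminder at 8am the day after the prompt) and composes the monthly summary text. It is illustrative only; the function names and data structures are assumptions, not the program's implementation.

from datetime import datetime, timedelta
from typing import Dict

def message_schedule(shift_start: datetime, shift_end: datetime) -> Dict[str, datetime]:
    """Send times for the three faculty-facing texts tied to one shift."""
    feedback_link = shift_end - timedelta(hours=1)           # hyperlink to the assessment form
    reminder = (feedback_link + timedelta(days=1)).replace(  # 8am the day after the prompt,
        hour=8, minute=0, second=0, microsecond=0)           # sent only if no submission yet
    return {
        "priming_prompt": shift_start,   # names the trainee and the content area
        "feedback_link": feedback_link,
        "reminder": reminder,
    }

def monthly_summary(month: str, completed: int, assigned: int) -> str:
    """Compose the monthly completion-rate text sent to each faculty member."""
    rate = round(100 * completed / assigned) if assigned else 0
    return f"In {month} you completed {completed} out of {assigned} assessments for a {rate}% completion rate."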

For trainees, at shift start, the mESA program texts them the name of the assigned faculty member and content area. One hour prior to shift end, the program texts: “Did [faculty] provide you with any in-person feedback on [X] performance today? Please reply Y or N.”
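
The trainee's reply has to be interpreted before it is stored (component 4 in Box 1). A minimal, hypothetical Python sketch of that normalization step; the function name and the tolerated reply variants are assumptions:

from typing import Optional

def parse_feedback_reply(body: str) -> Optional[bool]:
    """Interpret a trainee's text reply to the end-of-shift question.
    Returns True for a 'yes' reply, False for 'no', and None if the reply
    cannot be interpreted, so an administrator can follow up."""
    text = body.strip().lower()
    if text in {"y", "yes"}:
        return True
    if text in {"n", "no"}:
        return False
    return None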

Prior to mESA implementation, all 4 sites used paper ESA cards. The mESA program was tested internally before rollout, and participants were able to opt out at any time. Rollout followed a stepwise process: site 1 ran alone for 3 months, sites 2 and 3 were added in month 4, and site 4 in month 5.

Outcomes

We evaluated the mESA program using the reach, effectiveness, adoption, implementation, and maintenance (RE-AIM) framework.10,11 Data sources included a database of incoming and outgoing text messages and web-based feedback (Microsoft Access), a web-based database of survey results (REDCap), and descriptive residency leadership feedback collected via email.

Analysis

Reach was assessed by examining the representativeness of faculty opting into the program and measuring the number of residents participating. Effectiveness was assessed through surveys from faculty, trainees, and residency leadership and through ESA reports from trainees. Adoption was assessed by examining the percentage of ESA prompts where faculty submitted online trainee feedback. We examined the relationship between program burden (ie, number of prompts) and online submission rates by faculty using correlation coefficients. Implementation quality was assessed by examining technological errors and issues. Maintenance was assessed by examining trends in ESA submission rates over time.
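
For illustration, the burden-versus-adoption analysis could be computed per faculty member as in the Python sketch below. This is not the authors' analysis code; the column names and toy records are hypothetical.

import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-shift log exported from the prompt database:
# one row per prompted shift, flagging whether web-based feedback was filed.
shifts = pd.DataFrame({
    "faculty_id": ["A", "A", "A", "B", "B", "C", "C", "C", "C"],
    "submitted":  [True, False, True, True, True, False, True, False, True],
})

# Program burden = total prompts per faculty; adoption = proportion submitted.
per_faculty = shifts.groupby("faculty_id")["submitted"].agg(
    n_prompts="size", submission_rate="mean")

r, p = pearsonr(per_faculty["n_prompts"], per_faculty["submission_rate"])
print(f"Pearson r = {r:.2f} (P = {p:.3f})")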

Approximately 2 months after mESA program initiation at site 1, we invited the EM faculty at this site to participate in a 9-question (1 open- and 8 closed-ended) online survey (provided as online supplemental material). The survey, developed by the authors (clinical educators) without further testing, was designed to gauge faculty members' experience with providing residents with end-of-shift feedback before and after the implementation of the web-based tool. We also evaluated faculty comfort and satisfaction levels with the new program and solicited faculty feedback and suggestions for improvement. The faculty survey was only sent to site 1 because this was the first of our 4 sites to come online. Survey feedback from this site helped inform our rollout process at the other sites.

Similarly, approximately 2 months after mESA program initiation, we invited EM residents to participate in a 10-question (1 open- and 9 closed-ended) online survey (faculty and resident surveys provided as online supplemental material). A single email invitation was sent to complete the survey. The resident survey was designed to assess resident perceptions of how often they received performance feedback from faculty before and after implementation of the program. We also assessed satisfaction with the new system and solicited suggestions for improvement. After approximately 1 year, residency leaders were surveyed over email to provide descriptive feedback on mESA.

The institutional review board at the University of Pittsburgh approved the survey study. The remainder of the data was collected as quality improvement.

Reach

We targeted 100 eligible faculty to participate in the mESA. Four opted out (96% reach). Feedback from faculty who opted out suggested they preferred to provide feedback through other methods (n = 2) or did not own a smartphone (n = 2). Of 80 eligible trainees, 79 were enrolled in the mESA program over the 22 months of implementation, with only one opting out (99% reach).

Effectiveness

Eighteen faculty from site 1 (85% of eligible faculty at this site) completed the survey (10 faculty with up to 10 years of practice, 8 with >10 years). Faculty endorsed the following reasons for not providing ESA feedback when using paper cards (prior to mESA): 14 forgot, 14 felt they were too busy, 13 felt the residents were too busy, and 4 felt discomfort with providing feedback. Thirteen of 18 (72%) agreed that receiving a prompt at the beginning of a shift made it easier for them to collect relevant data to inform feedback; 17 of 18 (94%) agreed that the reminder at the end of the shift was useful for providing in-person feedback, and 16 of 18 (89%) would recommend mESA to other training programs. Other feedback included concern about the narrow scope of content areas (n = 2) and feeling burdened by the number of prompts (n = 3).

The survey was sent to 48 trainees (the total number in our program at a given time), and 24 (50%) responded (year 1: n = 12; year 2: n = 6; year 3: n = 6). Sixteen of 24 (67%) trainees reported that receiving a prompt at the beginning of a shift specifying the content area for assessment made it easier to focus on that area during the shift, and 14 (58%) felt the end-of-shift prompt was a helpful reminder to approach the attending for feedback. Nineteen of 24 (79%) would recommend the mESA program to other residencies. Other feedback included supportive statements about improvements relative to paper cards (n = 3) and concern that content areas may be too specific and create barriers to feedback in other areas (n = 2). Text message feedback from trainees confirmed that in-person ESA feedback was provided 90% of the time.

After approximately 1 year, 7 residency leaders (4 program directors and 3 core education faculty) were surveyed via email to provide feedback on mESA. This was solely intended to collect descriptive feedback from core leadership. They universally agreed that the mESA provided added value to the quality of assessment, primarily through improved face-to-face feedback at shift end, improved feedback at the completion of a rotation, and summary assessments for the clinical competency committee (Box 2).

Box 2 Residency Leadership and Core Education Faculty Survey Comments
  • “The adaptability of the interface allowed for a rapid change that aligns milestone assessments with summative feedback gathered from the month's text messages.”

  • “By the end of the month, I generally have a good ‘feel’ for the resident's performance and can summarize into a nice paragraph for the eval.”

  • “Convenient and therefore increases response rate.”

  • “I have found that with a more focused approach I often receive more useful and constructive feedback. Attendings can consider the resident's performance throughout the shift because they are alerted prior to the beginning of the shift. This allows them to focus their thoughts rather than provide general feedback.”

  • “By expanding the categories, we have been able to obtain useful feedback for our milestones reports.”

  • “We get more volume, more of the time in terms of useful comments.”

  • “The evaluators have an easier time completing summative evaluations.”

  • “Easier to see trends with this program.”

  • “Higher completion rate.”

  • “As a site evaluator, having the data available is immensely helpful for creating a summative evaluation.”

Adoption

Between January 3, 2018 and November 4, 2019, mESA prompts were sent on 7792 unique shifts. The total number of prompts received by individual faculty varied from 2 to 401 (mean = 81 prompts per faculty member). Faculty submitted online ESA feedback on trainees for 3526 unique shifts (overall ESA submission rate 45%). Forty percent of ESA were prompted between 7am and 7pm, 34% between 7pm and 12am, and 27% between 12am and 7am. Submission rates varied by time of day: web-based feedback was missing for 54% of prompts sent between 7am and 7pm, 50% of prompts sent between 7pm and 12am, and 56% of prompts sent between 12am and 7am (chi-square test; P < .0001).
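
For illustration, the time-of-day comparison reported above corresponds to a chi-square test of independence on a 2x3 table of submitted versus missing assessments by prompt time band. The Python sketch below uses placeholder counts, not the study's raw data.

from scipy.stats import chi2_contingency

# Hypothetical 2x3 contingency table: rows = web-based ESA submitted / missing,
# columns = prompt time bands (7am-7pm, 7pm-12am, 12am-7am).
# Counts are illustrative placeholders, not the study's data.
table = [
    [1430, 1320, 780],   # submitted
    [1680, 1320, 990],   # missing
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, df = {dof}, P = {p:.4g}")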

Implementation Quality

Most mESAs were prompted at site 1 (70%), with 14% and 12% at sites 2 and 3, respectively, and 5% at site 4, because site 1 had a higher number of resident shifts during the study period and was the first to use the system. We invested approximately $20,000 to build, implement, and maintain the system. We did not identify any technological errors, such as erroneous, missing, or delayed messages. One technological issue that arose during implementation was how to handle situations in which a faculty member worked with 2 trainees on the same shift for the same amount of time. We initially solved this by randomly choosing only one of the trainees; a later version of the program identifies faculty-trainee dyads by specific shift name. A second issue was mitigated by asking enrollees to alert us if their phone numbers changed and by creating an alert system for undelivered text messages. Shift schedule changes were handled by building a program interface that allows the administrator to manually document such alterations.

Maintenance

Figure 2 depicts web-based submission rates, which stabilized around month 6 of 22. There was no evidence that the total number of prompts received by a faculty member was associated with more missing web-based ESA submissions (correlation coefficients not significant; Figure 3).

Figure 2: Percentage of Shifts With ESA Prompts Where Faculty Submitted Web-Based Feedback Over Time

Figure 3: Scatterplot of Total Number of Shifts a Given Faculty Member Has Received ESA Prompts by Percentage of ESA Submitted Through Web-Based Portal

This is, to our knowledge, the first study to lend support to the use of an electronic ESA tool in an EM residency. Utilizing the RE-AIM framework, we found evidence of reach, effectiveness, implementation quality, and maintenance for our intervention, but suboptimal adoption.

Although our study focused on EM, the findings are relevant to most specialties that utilize shift work. Assessment tools are most useful when they are acceptable to trainees and faculty, reliable, valid, cost-effective, and able to impact future practice.12 For training leadership, ESA is a practical and effective means of helping trainees improve13 and correlates with clinical competency committees' decisions about trainees' milestone proficiency.14 For trainees, it strengthens the apprenticeship model, wherein trainees work closely with more senior physicians.15 While residents model themselves on their trainers, their personal growth is greatly augmented by directed feedback on clinical performance. Such feedback is best incorporated when it is specific, timely, and actionable. For administration, mESA reduces burden by centralizing the collection of feedback.16

Our finding of good mESA reach may be due to a combination of perceived need for improving the process of ESA, the willingness to try technological solutions, and the process of opt-out (as opposed to opt-in) enrollment. In support of effectiveness, we found that the majority of faculty and trainees sampled approved of the mESA tool. Feedback from residency leadership largely echoed what was provided by site faculty and residents and added that the database of mESA submissions provides value by making milestone and end-of-rotation summary assessments more data-driven. The mESA tool was stable and did not crash or lose data during implementation, providing evidence of program quality.

The 45% web-based submission rate by faculty was lower than rates reported in other studies of paper ESAs,17 but higher than the historic submission rates at our sites (estimated to be around 30%). We designed the mESA program with a focus on descriptive (not quantitative) feedback, which imposes a higher completion burden on faculty but potentially yields more useful information for training leadership. Additionally, residents reported high rates of real-time feedback, exceeding the completion rate of web-based assessments. This demonstrates that the mESA prompted real-time discussions about performance for formative assessment during the pilot. We included monthly feedback to faculty on the percentage of ESA submissions completed in the hope of influencing future submission rates. Lower rates of participation overnight may have been due to fatigue at the end of those shifts. It may be reasonable to vary data collection by time of day so that overnight shift workers are prompted to complete mESAs after sleeping.

A limitation of the study is the lack of a control or comparison group. Second, we were unable to recruit individuals who never participated in mESA for a descriptive interview, limiting our understanding of the barriers influencing initial engagement and participation. Furthermore, because the surveys did not have prior evidence of validity, respondents may have interpreted questions differently than intended.

Future mESA design considerations should include departmental incentives tied to completion or a reduction in the frequency of mESA prompts. We noted clusters of low responders among the faculty and found that certain faculty members are simply not comfortable with smartphone technology.

Although it is likely inevitable that certain faculty members will have poor participation regardless of the tool used, further research is needed to determine the reasons for the low rates. Because our assessment system is web-based, future iterations of the program could include sending prompts via email or other means to faculty without smartphones. Design improvements could include limiting prompts to shift-start for trainees and shift-end for faculty. Finally, future studies examining the relationship between ESA feedback and other measures of trainee performance would be beneficial.

We found that a mobile smartphone ESA program reached its intended users, was found to be effective by stakeholders, and was adopted at moderate and stable rates over time.

References

1. Bernard AW, Kman NE, Khandelwal S. Feedback in the emergency medicine clerkship. West J Emerg Med. 2011;12(4):537-542.

2. Accreditation Council for Graduate Medical Education. ACGME Common Program Requirements. 2020.

3. Warrington S, Beeson M, Bradford A. Inter-rater agreement of end-of-shift evaluations based on a single encounter. West J Emerg Med. 2017;18(3):518-524.

4. Lefebvre C, Hiestand B, Glass C, Masneri D, Hosmer K, Hunt M, et al. Examining the effects of narrative commentary on evaluators' summative assessments of resident performance. Eval Health Prof. 2020;43(3):159-161.

5. Bandiera G, Lendrum D. Daily encounter cards facilitate competency-based feedback while leniency bias persists. CJEM. 2008;10(1):44-50.

6. Hartranft TH, Yandle K, Graham T, Holden C, Chambers LW. Evaluating surgical residents quickly and easily against the milestones using electronic formative feedback. J Surg Educ. 2017;74(2):237-242.

7. Tews MC, Treat RW, Nanes M. Increasing completion rate of an M4 emergency medicine student end-of-shift evaluation using a mobile electronic platform and real-time completion. West J Emerg Med. 2016;17(4):478-483.

8. Mooney JS, Cappelli T, Byrne-Davis L, Lumsden CJ. How we developed eForms: an electronic form and data capture tool to support assessment in mobile medical education. Med Teach. 2014;36(12):1032-1037.

9. Accreditation Council for Graduate Medical Education. Milestones. 2020.

10. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322-1327.

11. Gaglio B, Shoup JA, Glasgow RE. The RE-AIM framework: a systematic review of use over time. Am J Public Health. 2013;103(6):e38-e46.

12. Van Der Vleuten CPM. The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ. 1996;1(1):41-67.

13. Donoff MG. Field notes: assisting achievement and documenting competence. Can Fam Physician. 2009;55(12):1260-1262, e100-e102.

14. Regan L, Cope L, Omron R, Bright L, Bayram JD. Do end-of-rotation and end-of-shift assessments inform clinical competency committees' (CCC) decisions? West J Emerg Med. 2018;19(1):121-127.

15. Epstein RM. Assessment in medical education. N Engl J Med. 2007;356(4):387-396.

16. Lawson L, Jung J, Franzen D, Hiller K. Clinical assessment of medical students in emergency medicine clerkships: a survey of current practice. J Emerg Med. 2016;51(6):705-711.

17. Kogan JR, Shea JA. Implementing feedback cards in core clerkships. Med Educ. 2008;42(11):1071-1079.

Author notes

Editor's Note: The online version of this article contains milestone-based content areas for resident assessment, screenshots of the web-based interface for faculty, and the surveys used in the study.

Funding: The authors report no external funding source for this study.

Competing Interests

Conflict of interest: The authors declare they have no competing interests.
