Background An easy-to-use application to facilitate direct observation and allow for 2-way feedback between residents and faculty is needed.

Objective To develop a mobile-based application (app) with the goals of (1) providing just-in-time feedback to residents; (2) improving timeliness of feedback by faculty; and (3) allowing residents to comment on the value of faculty feedback.

Methods Fifty-one of 69 (74%) internal medicine (IM) residents and 20 of 25 (80%) IM core faculty participated in the study from July 1, 2020, to December 31, 2021. An iOS app was designed by authors with expertise in medical education and application development to capture entrustable professional activity (EPA)-based feedback (eg, informed consent) based on direct observation of residents’ skills in the workplace. App utilization and the characteristics of faculty narrative comments were examined by exporting the data from the database server. End-user satisfaction was examined using a survey instrument.

Results Eighty-seven percent of assessments (117 of 134) initiated were fully completed by residents and faculty. Faculty narrative comments were noted in 97% (114 of 117) of completed assessments and 64% (75 of 117) of residents’ feedback to the faculty contained narrative comments. Eighty-three percent (97 of 117) of comments were behaviorally specific and 71% (83 of 117) contained an actionable item. Eighty-six percent (18 of 21) of residents and 90% (9 of 10) of core faculty stated that this application promoted an educational interaction between them.

Conclusions This app facilitates the efficient completion of EPA-based formative assessments and captures bidirectional feedback in the workplace setting.

Multiple studies have suggested that feedback is less meaningful if it does not include direct observation or guidance for performance improvement.1-4  As clinical settings become more demanding and time becomes scarcer, current interfaces for documenting these direct observation-based assessments remain challenging to use.

Traditionally, specialties in graduate medical education have used mobile applications with a milestone- and competency-based framework to capture feedback and assessment data.5-8  The Association of American Medical Colleges entrustable professional activities (EPAs) provide a framework for delivering effective formative assessment (feedback) in an individualized, setting-appropriate manner and for motivating learners to develop behaviors, skills, attitudes, and knowledge.9-11  EPAs can also inform advancement decisions by focusing on the essential discrete professional activities a resident should be able to perform. EPA-based assessments were used in our mobile application because they provide a framework that translates competencies into clinical practice.

To our knowledge, there is a paucity of literature describing applications that allow residents to provide real-time feedback to faculty about the usefulness of their feedback. In addition, few published reports combine the EPA framework with a mobile platform.12-14  This is the first article describing an EPA-based application (app) that delivers workplace-based assessments for internal medicine (IM) residents and also provides feedback to the faculty observers. The goals of developing this electronic, mobile-based app (qUIkcoach) were to (1) provide just-in-time feedback to residents; (2) improve the timeliness of feedback offered by faculty; and (3) allow residents to comment on the value of the feedback provided to them.

Setting and Needs Assessment

Fifty-one IM residents and 20 IM core faculty (CF) participated in the study conducted at a large teaching hospital over an 18-month period (July 1, 2020, to December 31, 2021). The design of the app and the study were initiated in response to a needs assessment showing dissatisfaction with the current paper-based workplace-based assessment (mini-clinical evaluation exercise [mini-CEX]; online supplementary data Table 1). The main areas of dissatisfaction included: (1) evaluation of broad skill categories rather than discrete clinical skills; (2) unavailability of paper forms at the point of care; and (3) suboptimal “just-in-time” completion rates.

Table 1

Post-Study Surveys Completed by Residents (1A) and Core Faculty Observers (1B)


Design of Mobile App

Authors with expertise in medical education (J.R., M.S.) and mobile application design (G.M.) developed the iOS app in Apple’s Swift programming language using Xcode, Apple’s integrated development environment. The app uses Google Firebase to send real-time notifications when an assessment is created (by either resident or CF) and when the feedback is completed (by either resident or CF). Assessment data are stored in a Microsoft SQL Server database, while data communication is accomplished via a web API written in C#. User identity is authenticated by the university’s single sign-on service and requires a unique user log-in. While the design of this application allows it to be made available to other programs beyond our institution, financial resources (estimated cost of $4,000) to host and maintain the web and database servers are required.
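The architecture above pairs native and web clients with a server-side store. As a rough sketch of the kind of assessment record such a backend might persist, the following uses entirely hypothetical field names (the actual schema is not published); it also marks the completion step at which, in the real app, Firebase would push a notification to the resident:

```typescript
// Hypothetical sketch of an EPA assessment record; field names are
// illustrative assumptions, not the app's actual schema.
type Role = "resident" | "coreFaculty";

interface Assessment {
  id: string;
  epa: string;                // e.g., "informed consent"
  initiatedBy: Role;
  entrustmentLevel?: number;  // global rating chosen by the faculty observer
  facultyComment?: string;    // narrative feedback to the resident
  residentFeedback?: string;  // resident's comment on the faculty feedback
  completed: boolean;
}

// Completing an assessment records the entrustment level and narrative
// comment; in the real app this is where a push notification would fire.
function completeAssessment(
  a: Assessment,
  level: number,
  comment: string
): Assessment {
  return { ...a, entrustmentLevel: level, facultyComment: comment, completed: true };
}

const draft: Assessment = {
  id: "a1",
  epa: "informed consent",
  initiatedBy: "resident",
  completed: false,
};
const done = completeAssessment(draft, 3, "Clear explanation of risks.");
```

Returning a new object rather than mutating the draft keeps each submission an immutable record, which suits an audit-style assessment store.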

The feedback cycle begins with the initiation of feedback by either the resident or the CF and ends with delivery of feedback to both the resident and the CF (Figure). Radio buttons allow for efficient recording of the assessments for each of the EPA-based discrete skills on the checklist. In addition, a level of entrustment (global assessment) is also chosen by the CF for every observation. Screenshots of 2 EPA-based assessments (verbal handoff and informed consent) are included as examples in online supplementary data Figures 1 and 2. Narrative comments are made using the voice-to-text feature. The app is designed to allow assessments to be submitted only when all fields are completed. After the assessment is submitted by the CF, a notification is sent to the resident. The resident can then view the submitted assessment and enter feedback on the utility and quality of the feedback provided by the CF. To ensure anonymity, the CF can only view aggregate feedback (minimum of 5 entries) from the residents.
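The cycle described above can be read as a small state machine, with the anonymity rule acting as a visibility gate on resident-to-faculty feedback. A minimal sketch under those assumptions (state names are ours, not the app's):

```typescript
// Sketch of the bidirectional feedback cycle as a three-state machine,
// plus the anonymity rule: faculty see resident feedback only once 5
// or more entries exist. State names are illustrative.
type CycleState = "initiated" | "facultyCompleted" | "completed";

const next: Record<CycleState, CycleState | null> = {
  initiated: "facultyCompleted",  // faculty submits checklist + entrustment level
  facultyCompleted: "completed",  // resident reviews and rates the feedback
  completed: null,                // terminal: both sides have received feedback
};

function advance(s: CycleState): CycleState {
  const n = next[s];
  if (n === null) throw new Error("feedback cycle already complete");
  return n;
}

// Anonymity gate: return nothing until the aggregate threshold is met.
const MIN_AGGREGATE = 5;
function visibleToFaculty(entries: string[]): string[] {
  return entries.length >= MIN_AGGREGATE ? entries : [];
}
```

The threshold gate is the design choice that lets residents comment candidly on faculty feedback without individual comments being attributable.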

Figure

Illustration of Bidirectional Feedback Cycle Used in the Mobile Application


The user interface of the app is divided into 3 tabs: Create, Pending, and Completed. These tabs make the interface intuitive and easy to follow and allow users to quickly navigate the app. The resident or CF may initiate an assessment for an individual resident by clicking the Create tab. Once an assessment is created, the notification feature of the app delivers the request to the CF (online supplementary data Figure 3) and moves the assessment to the Pending tab. The list of assessments is color-coded, so assessments that need to be completed are easily identifiable. The number included in the red circle indicates the number of pending assessments and implies that action is needed (online supplementary data Figure 4). The assessment moves to the Completed tab once the resident has reviewed the CF feedback and returned feedback to the CF. Following completion of the faculty feedback, an email containing the feedback and entrustment rating is automatically generated and sent to the resident, faculty, and residency program. All data are automatically uploaded and available to the residency program leadership via the dashboard.
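The tab and badge behavior above amounts to partitioning assessments by whether either side still owes an action. A simplified sketch, with assumed field names rather than the app's actual data model:

```typescript
// Simplified sketch of the tab/badge logic: an assessment is Completed
// only after both the faculty observer and the resident have acted;
// anything still owing action sits in Pending and feeds the red badge
// count. Field names are assumptions.
interface AppItem {
  id: string;
  facultyDone: boolean;   // faculty has submitted the assessment
  residentDone: boolean;  // resident has reviewed and returned feedback
}

function tabOf(i: AppItem): "Pending" | "Completed" {
  return i.facultyDone && i.residentDone ? "Completed" : "Pending";
}

// Number shown in the red notification badge.
function badgeCount(items: AppItem[]): number {
  return items.filter((i) => tabOf(i) === "Pending").length;
}

const items: AppItem[] = [
  { id: "1", facultyDone: true, residentDone: true },
  { id: "2", facultyDone: true, residentDone: false },
  { id: "3", facultyDone: false, residentDone: false },
];
```

Deriving the badge count from item state, instead of storing it separately, keeps the count from ever drifting out of sync with the list.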

For the study we created a web-based version of qUIkcoach for Android users. The design of this web-based application is identical to that of the iOS app, giving Android users a similar experience.

Resident and Faculty Development

All IM residents and CFs were trained to use the app during 20-minute sessions that were repeated 3 times prior to the start date. All participants were also trained on the feedback framework proposed by Gigante et al.15  This framework was also included in the app for reference (online supplementary data Figure 5). Residents and CFs received monthly informational email reminders following the training sessions.

Intervention

CFs and residents were encouraged to use the mobile app, but no expectations for numbers of assessments to be completed were set for the study. Automated reminders were sent if there were incomplete assessments.

Outcomes

To assess the feasibility, utility, and validity of the intervention, 3 outcomes were examined: (1) utilization; (2) characteristics of faculty narrative comments (content and polarity, specificity, and actionability); and (3) end-user satisfaction. Application utilization and characteristics of faculty narrative comments were examined by exporting the data from the database server. End-user satisfaction data were gathered using an electronic survey (online supplementary data Table 2) delivered to CFs and residents following the 18-month study period. Using a previously published method for describing comments, we characterized each CF comment, defined as a grouping of words focused on a unique concept or behavior, in 3 dimensions: (1) content and polarity (ie, reinforcing/corrective); (2) specificity (ie, general comment vs behaviorally specific); and (3) actionability.16  Two authors (J.R. and M.S.) independently coded each comment and then compared assigned codes. Differences were resolved through consensus. Resident feedback on the quality of CF feedback was not examined in this study.
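The three-dimension coding scheme and the two-coder comparison can be sketched as follows; the types and the all-dimensions-match agreement rule are our illustrative reading of the method, not the study's actual coding software:

```typescript
// Sketch of the three-dimension coding applied to each faculty comment
// (content/polarity, specificity, actionability), with a simple check
// of agreement between two independent coders. Illustrative only.
interface CommentCoding {
  polarity: "reinforcing" | "corrective";
  specificity: "general" | "behaviorallySpecific";
  actionable: boolean;
}

// Two coders agree when all three dimensions match; disagreements go
// to consensus discussion, as in the study.
function codersAgree(a: CommentCoding, b: CommentCoding): boolean {
  return (
    a.polarity === b.polarity &&
    a.specificity === b.specificity &&
    a.actionable === b.actionable
  );
}

const coderA: CommentCoding = {
  polarity: "corrective",
  specificity: "behaviorallySpecific",
  actionable: true,
};
// Second coder differs on a single dimension, so consensus is needed.
const coderB: CommentCoding = { ...coderA, actionable: false };
```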

This project was deemed nonhuman subjects research by the Institutional Review Board of the University of Iowa.

Utilization

A total of 134 assessments were initiated during the study period. Of the 134 initiated assessments, 117 (87%) were fully completed by both residents and faculty. Thirty-nine percent (46 of 117) of completed assessments were performed in the inpatient setting and 61% (71 of 117) in the outpatient setting. Narrative comments by CFs were noted in 97% (114 of 117) of completed assessments, whereas 64% (75 of 117) of residents provided narrative feedback to CFs.

Feedback Characteristics

Forty percent (47 of 117) of the completed CF assessments generated a single narrative comment, 15% (18 of 117) contained 2 comments, and 44% (52 of 117) included 3 or more comments. Eighty-three percent (97 of 117) of comments were behaviorally specific (eg, “Do recommend you ask one more time to make sure no other questions. Also consider checking in after each chunk”) and 17% (20 of 117) were general (eg, “Overall great job. You made the patient feel very comfortable with the procedure”). Additionally, 71% (83 of 117) of comments had an actionable item as part of the narrative feedback. In terms of polarity, 83% (97 of 117) of assessments contained reinforcing comments, 71% (83 of 117) contained corrective feedback, and 65% (77 of 117) contained both reinforcing and corrective feedback.

End-User Satisfaction

The post-study survey was completed by 50% (10 of 20) of CFs and 41% (21 of 51) of residents (Table 1A and 1B). All (10 of 10) CFs reported that they were able to complete the assessment in less than 10 minutes (30% completed it within 5 minutes). In addition, 100% (10 of 10) of CFs reported that they finished the assessment either immediately after the observation or at some point during the observation day. Ninety percent (9 of 10) of CF respondents thought that the application was easy to use and the rating forms were easy to understand. One hundred percent of CFs (10 of 10) and residents (21 of 21) surveyed responded that workplace-based assessment is an important part of training.

This mobile application offers the possibility of efficiently completing and capturing workplace-based assessments (WBAs) based on direct observation. The CF narrative feedback included a high proportion of behaviorally specific comments and actionable items. In addition to providing feedback to residents, this is the first platform to make the feedback bidirectional (CF to resident and resident to CF) and to allow residents to assess the usefulness of the feedback they receive.

Medical educators who have implemented WBAs have encountered significant challenges. Common barriers include lack of time and competing demands that interfere with the faculty member’s ability to complete these assessments. The design of our app addresses some of these barriers by making the user interface easy to use and time efficient. Multiple specialties (pediatrics, surgical specialties, family medicine) have employed different frameworks (milestones, competencies) using mobile applications to facilitate more efficient capture, delivery, and aggregation of assessment data.5-8,17-19  Initial outcomes of these reports have supported the feasibility and utility of mobile platforms. At the same time, concerns have been raised that the competencies and milestones used in assessments are too numerous, too granular, and/or too abstract for frontline faculty to use. We used EPA-based assessments in our mobile app because they provide a framework that translates competencies into clinical practice. The EPA-based assessments used were refined from those used in our previous study.20

The outcomes of this study indicate that, when designed with user interface principles in mind, assessments can be completed quickly and generate bidirectional feedback. The app’s efficiency supports the feasibility of its use within a training environment, and the narrative comment analysis supports its utility. While the completion rate of the narrative comment section by CFs was high, the resident narrative comment completion rate indicates room for improvement. More faculty development will be needed to increase the number of specific actionable items included in the feedback. The completion rate of the post-study survey was below expectations, preventing us from drawing definitive conclusions. In the future, time stamps for evaluation creation and completion will be added so that we can more accurately measure the time needed to complete an assessment.

Currently we are developing an integrated dashboard to allow for residents and CFs to view their cumulative data, which will allow us to gather additional evidence of the app’s validity and usability, both for coaching and for judging competence.

This mobile application facilitates the efficient completion of EPA-based formative assessments and captures bidirectional feedback in the workplace setting, combining both functions in a single platform.

The authors would like to thank Teresa Ruggle for her support in the development of the graphics for this article.

1. Holmboe ES. Faculty and the observation of trainees’ clinical skills: problems and opportunities. Acad Med. 2004;79(1):16-22.
2. Hasley PB, Arnold RM. Summative evaluation on the hospital wards. What do faculty say to learners? Adv Health Sci Educ Theory Pract. 2009;14(3):431-439.
3. Cantillon P, Sargeant J. Giving feedback in clinical settings. BMJ. 2008;337:a1961.
4. Van Hell EA, Kuks JB, Raat AN, Van Lohuizen MT, Cohen-Schotanus J. Instructiveness of feedback during clerkships: influence of supervisor, observation and student initiative. Med Teach. 2009;31(1):45-50.
5. Cooney CM, Redett RJ 3rd, Dorafshar AH, Zarrabi B, Lifchez SD. Integrating the NAS Milestones and handheld technology to improve residency training and assessment. J Surg Educ. 2014;71(1):39-42.
6. Page CP, Reid A, Coe CL, et al. Learnings from the pilot implementation of mobile medical milestones application. J Grad Med Educ. 2016;8(4):569-575.
7. Bohnen JD, George BC, Williams RG, et al. The feasibility of real-time intraoperative performance assessment with SIMPL (System for Improving and Measuring Procedural Learning): early experience from a multi-institutional trial. J Surg Educ. 2016;73(6):e118-e130.
8. Fitzpatrick R, Paterson NR, Watterson J, Seabrook C, Roberts M. Development and implementation of a mobile version of the O-SCORE assessment tool and case log for competency-based assessment in urology residency training: an initial assessment of utilization and acceptance among residents and faculty. Can Urol Assoc J. 2019;13(2):45-50.
9. Lomis K, Amiel JM, Ryan MS, et al. Implementing an entrustable professional activities framework in undergraduate medical education: early lessons from the AAMC Core Entrustable Professional Activities for Entering Residency Pilot. Acad Med. 2017;92(6):765-770.
10. Lypson ML, Frohna JG, Gruppen LD, Woolliscroft JO. Assessing residents’ competencies at baseline: identifying the gaps. Acad Med. 2004;79(6):564-570.
11. Carraccio C, Englander R, Gilhooly J, et al. Building a framework of entrustable professional activities, supported by competencies and milestones, to bridge the educational continuum. Acad Med. 2017;92(3):324-330.
12. Young JQ, Sugarman R, Schwartz J, McClure M, O’Sullivan PS. A mobile app to capture EPA assessment data: utilizing the consolidated framework for implementation research to identify enablers and barriers to engagement. Perspect Med Educ. 2020;9(4):210-219.
13. Diwersi N, Gass JM, Fischer H, Metzger J, Knobe M, Marty AP. Surgery goes EPA (entrustable professional activity)—how a strikingly easy to use app revolutionizes assessments of clinical skills in surgical training. BMC Med Educ. 2022;22(1):559.
14. George BC, Bohnen JD, Williams RG, et al. Readiness of US general surgery residents for independent practice. Ann Surg. 2017;266(4):582-594.
15. Gigante J, Dell M, Sharkey A. Getting beyond “good job”: how to give effective feedback. Pediatrics. 2011;127(2):205-207.
16. Lockyer JM, Sargeant J, Richards SH, Campbell JL, Rivera LA. Multisource feedback and narrative comments: polarity, specificity, actionability, and CanMEDS roles. J Contin Educ Health Prof. 2018;38(1):32-40.
17. Hicks PJ, Margolis MJ, Carraccio CL, et al. A novel workplace-based assessment for competency-based decisions and learner feedback. Med Teach. 2018;40(11):1143-1150.
18. Eaton M, Scully R, Schuller M, et al. Value and barriers to use of the SIMPL tool for resident feedback. J Surg Educ. 2019;76(3):620-627.
19. Torre DM, Simpson DE, Elnicki DM, Sebastian JL, Holmboe ES. Feasibility, reliability and user satisfaction with a PDA-based mini-CEX to evaluate the clinical skills of third-year medical students. Teach Learn Med. 2007;19(3):271-277.
20. CarlLee S, Rowat J, Suneja M. Assessing entrustable professional activities using an orientation OSCE: identifying the gaps. J Grad Med Educ. 2019;11(2):214-220.

The online supplementary data contains the surveys used in the study and further resources.

Funding: The authors report no external funding source for this study.

Conflict of interest: The authors declare they have no competing interests.

Some data from this article were presented at the Association of Program Directors in Internal Medicine National Meeting, April 10-13, 2022, Charlotte, North Carolina, USA.

Supplementary data