Setting and Problem
Formative evaluations are a critical component of monitoring an individual's learning and improvement over time. These evaluations provide objective feedback on patient care, interpersonal skills, communication, and professionalism, and must come from multiple sources, such as patients and families. Obtaining physician-specific patient evaluations can be challenging, especially for one-time encounters such as emergency department or acute care visits.
In the pediatric emergency medicine fellowship program at the University of Alabama at Birmingham School of Medicine, the majority of patient encounters are single visits. In an effort to obtain fellow-specific evaluations, we have traditionally relied on fellows to distribute and collect an established number of their own evaluations (3 per year).
Problems with this method have included selection bias, with fellows self-selecting patients with whom they had a positive experience; all fellows obtaining only the minimum requirement of 9 patient evaluations over 3 years; only 28% of these evaluations containing comments; and more than 50% of fellows delaying completion of their minimum requirement until the final month of their training. Together, these issues negated the formative value of the evaluations.
Intervention
Our fellowship program embarked on a project to increase patient evaluations for individual fellows. The primary goal was to have evaluations completed at the time of discharge from the emergency department, using a fellow-specific QR code. Secondary goals included obtaining more timely patient evaluations with less selection bias and more formative feedback. A 10-question patient survey was created in SurveyMonkey, consisting of 9 yes-or-no questions and 1 free-text response. Each fellow was linked to a separate QR code (figure), and a flipbook was created with each fellow's picture and QR code. This project was reviewed by the Institutional Review Board at the University of Alabama at Birmingham and was deemed exempt. The project began with a 1-week trial, during which a medical student collected the feedback.
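Fellow-specific codes of this kind can be produced with freely available tools. The following is a minimal sketch, assuming the open-source Python qrcode package (not a tool named in the project) and placeholder fellow identifiers and survey links; in practice, each URL would be the fellow's own SurveyMonkey collector link.

```python
# Minimal sketch: generate one QR code image per fellow, each pointing to that
# fellow's survey link, for printing into a flipbook.
# Assumes the open-source "qrcode" package with Pillow: pip install qrcode[pil]
# Fellow identifiers and URLs below are placeholders, not the project's real links.
import qrcode

fellow_survey_links = {
    "fellow_01": "https://www.surveymonkey.com/r/EXAMPLE01",
    "fellow_02": "https://www.surveymonkey.com/r/EXAMPLE02",
    "fellow_03": "https://www.surveymonkey.com/r/EXAMPLE03",
}

for fellow_id, url in fellow_survey_links.items():
    # qrcode.make() returns a PIL image; save one PNG per fellow.
    img = qrcode.make(url)
    img.save(f"{fellow_id}_qr.png")
```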
A convenience sample of patients cared for by fellows was approached at the end of their visit. Patients or families were asked if they were willing to complete a survey about their care. Information was collected on the number of families approached, whether they agreed to complete the survey, whether they had a smartphone, and whether a QR reader was downloaded on it. A departmental tablet was available for survey completion if families did not want to use their smartphones. Families used the flipbook to select their provider and scan the corresponding code.
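For illustration only, the information tracked during the trial amounts to a short record per family approached; the field names in the sketch below are hypothetical, not taken from the project's data collection form.

```python
# Hypothetical record of the fields tracked for each family approached in the trial.
from dataclasses import dataclass

@dataclass
class ApproachRecord:
    fellow_id: str          # which fellow cared for the patient
    agreed_to_survey: bool  # did the family agree to complete the survey?
    has_smartphone: bool    # did they have a smartphone available?
    has_qr_reader: bool     # was a QR reader already downloaded on it?
    used_tablet: bool       # did they use the departmental tablet instead?
```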
Outcomes to Date
During the trial, 45 patients and families were approached for 8 fellows (range of 5 to 7 patients per fellow). Forty-one of 45 (91%) completed the evaluation. Thirty-three of 41 (80%) used a smartphone to complete the evaluation, 8 of 41 (20%) used the departmental tablet, and 20 of 41 (49%) provided text responses. Among the patients and families who did not complete the evaluation, reasons included no parent being available, the patient's condition precluding participation, and the family's desire to leave.
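The reported percentages follow directly from these counts; the brief sketch below, with the trial counts entered by hand from the text, reproduces the arithmetic (rounded to whole percentages).

```python
# Recompute the trial percentages reported above from the raw counts.
approached = 45
completed = 41
smartphone = 33
tablet = 8
text_response = 20

def pct(numerator, denominator):
    # Round to the nearest whole percent, as reported in the text.
    return round(100 * numerator / denominator)

print(f"Completion rate: {pct(completed, approached)}%")          # 91%
print(f"Used smartphone: {pct(smartphone, completed)}%")          # 80%
print(f"Used departmental tablet: {pct(tablet, completed)}%")     # 20%
print(f"Provided a text response: {pct(text_response, completed)}%")  # 49%
```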
The high level of response to this simple, cost-free technique has been encouraging. During the 1-week trial we received more than 50% of the minimum number of required evaluations, and the proportion of evaluations containing text responses increased from 28% to 49%, showing promise for improved feedback.
The next phase of the project will be to embed the survey into the discharge process. A nurse, clerk, or other staff member will offer the departmental tablet and flipbook to obtain a patient evaluation. These results will then be disseminated to the fellows monthly, providing feedback that is timelier, free of selection bias, and formative. Replication of this process would be simple and cost-free, and it could be a way for any program to increase its patient evaluations.