Failure to follow up and communicate test results to patients in outpatient settings may lead to diagnostic and therapeutic delays. Residents are less likely than attending physicians to report results to patients, and may face additional barriers to reporting, given competing clinical responsibilities.
This study aimed to improve the rates of communicating test results to patients in resident ambulatory clinics.
We performed an internal medicine, residency-wide, pre- and postintervention, quality improvement project using audit and feedback. Residents performed audits of ambulatory patients requiring laboratory or radiologic testing by means of a shared online interface. The intervention consisted of an educational module viewed with initial audits, development of a personalized improvement plan after Phase 1, and repeated real-time feedback comparing individual performance with clinic- and program-level performance. Outcomes included results communicated within 14 days and prespecified "significant" results communicated within 72 hours.
A total of 76 of 86 eligible residents (88%) reviewed 1713 individual ambulatory patients' charts in Phase 1, and 73 residents (85%) reviewed 1509 charts in Phase 2. Follow-up rates were higher in Phase 2 than Phase 1 for communicating results within 14 days and significant results within 72 hours (85% versus 78%, P < .001; and 82% versus 70%, P = .002, respectively). Communication of “significant” results was more likely to occur via telephone, compared with communication of nonsignificant results.
Participation in a shared audit and feedback quality improvement project can improve rates of resident follow-up and communication of results, although communication gaps remained.
Residents may face added barriers to timely reporting of test results to patients in outpatient settings, which may lead to diagnostic and therapeutic delays.
A residency-wide quality improvement project with audit and feedback in an internal medicine program.
Single specialty, single institution study reduces generalizability; self-reported data have a potential for recall bias.
The intervention improved the timeliness of communication of routine and significant results.
Failure to inform patients of test results and to document that communication may lead to diagnostic and therapeutic delays, and is a common source of malpractice claims.1–3 Previous studies have described strategies for following up test results, but these strategies vary widely by clinical setting.2,4–7 Furthermore, time pressures and complex interfaces among clinicians, staff, electronic health records (EHRs), and laboratory and radiology facilities make result follow-up increasingly difficult.2,4–7
Previous work has shown residents are less likely than attending physicians to report results to patients.6 Although residents have smaller patient panels than full-time providers, their reduced time within primary care clinics and additional clinical responsibilities, such as inpatient rounding, pose potential barriers to following up results expeditiously. Communicating results, therefore, is an important skill to cultivate. We sought to institute a program-wide quality improvement (QI) project around an ambulatory clinic result follow-up standard, including definitions of “significant” abnormal results and appropriate communication time frames.
The Duke Internal Medicine Residency Program includes 41 categorical and 9 preliminary interns, and 86 categorical postgraduate year 2 (PGY-2) and PGY-3 residents. Categorical residents participate in 1 of 3 continuity clinics: a community-based clinic (Clinic 1), a Veterans Affairs clinic (Clinic 2), and a faculty practice clinic (Clinic 3). All 3 clinics were included in this study. During PGY-1, all trainees participate in an online curriculum during ambulatory blocks that teaches basic QI vocabulary and processes; interns were therefore excluded from the shared QI project.8 Rather than repeat this curriculum, PGY-2 and PGY-3 residents on ambulatory rotations spend 1 half-day honing practical QI skills by participating in a shared QI project chosen by program and resident leadership, with audit and feedback methodology utilizing Microsoft SharePoint, as previously described.9
Each shared SharePoint Individual Performance Improvement Module is accessed online via password on internal Duke servers, and engages upper-level residents on a systems-based practice issue relevant to ambulatory general internal medicine. The module allows individual residents to implement unique improvement strategies and provides individual-level data to gauge success. For the 2013–2014 academic year, follow-up of ordered labs and studies from resident continuity clinics was the selected topic. First, resident, clinic, and program leadership developed a standard that all available lab and study results should be communicated to patients within 14 days. Additionally, leadership created a list of “significant” result examples to be communicated within shorter time frames (table 1). Any result that significantly changed patient care, even if not listed, was similarly considered “significant.” The primary outcomes were rates of result communication within the specified 14-day or 72-hour time frames.
The online project occurred in 2 phases relative to the creation of an individual “aims” statement. In the first ambulatory half-day (Phase 1), residents reviewed a 24-slide educational module describing the project goals and metrics, lab follow-up gaps and consequences, the follow-up standard, and QI principles, including creation of “aims” statements and plan-do-study-act cycles.10 Residents then completed retrospective chart audits for at least their most recent 20 personal ambulatory patients with ordered tests or studies, and noted whether communication was documented. All data entered by residents contained no protected health information or actual result values. Data abstracted included study type, whether results were “significant” findings, time frames of communication, and method of communication. As part of chart reviews, residents were asked to review at least 10 patients with “significant” results.
After completing data entry, residents were shown real-time comparison graphs of personal communication rates relative to program standards and aggregate clinic peer and overall program performance. To complete Phase 1, residents developed an individual performance improvement plan, including an “aims” statement and “next steps.”
Data entry during the second ambulatory half-day (Phase 2) was identical to Phase 1. Rather than create another improvement plan, residents instead commented on project successes, improvement barriers, and opportunities identified from their previously recorded performance plans. Phase 2 occurred at least 3 months from Phase 1 to allow time to implement improvement plans. Residents without ambulatory blocks during half of the academic year were excused from that phase. Throughout the academic year, faculty leaders presented clinic- and program-level data to residents via lectures, program newsletters, and online announcements.
This study was considered exempt by the Duke University Health System Institutional Review Board.
Measures and Analysis
Rates of test result follow-up within 14 days and significant test result follow-up within 72 hours were the primary outcomes. Rates were calculated as proportions of patients to whom residents communicated results within specified periods. Rates were summarized across residents by phase and by phase within each clinic. Wilcoxon signed rank tests for nonparametric paired samples were used to compare follow-up rates. Chi-square tests were used to compare the distributions of clinic by phase.
The secondary outcome was the type of follow-up communication performed. Chi-square tests were used to examine changes in the distribution of communication type by clinic across phases and between significant and nonsignificant results within each phase. A 2-sided significance level of .05 was used for all statistical tests, which were conducted using SAS version 9.4 (SAS Institute Inc, Cary, NC).
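The paired and contingency-table comparisons described above can be sketched in Python with SciPy rather than SAS. This is an illustrative sketch only: the per-resident rates and clinic counts below are fabricated placeholders, not study data.

```python
# Illustrative sketch of the analyses described above (SciPy stand-in for SAS).
# All numeric values are fabricated for demonstration, not study data.
from scipy.stats import wilcoxon, chi2_contingency

# Paired per-resident 14-day follow-up rates (proportions), Phase 1 vs Phase 2.
phase1 = [0.70, 0.75, 0.80, 0.65, 0.78, 0.72, 0.81, 0.69]
phase2 = [0.82, 0.85, 0.88, 0.74, 0.83, 0.80, 0.90, 0.77]

# Wilcoxon signed rank test for nonparametric paired samples.
stat, p = wilcoxon(phase1, phase2)
print(f"Wilcoxon signed rank: statistic={stat:.1f}, p={p:.4f}")

# Chi-square test comparing the distribution of residents by clinic
# across phases (rows = phases, columns = clinics; counts illustrative).
counts = [[30, 25, 21],   # Phase 1
          [28, 24, 21]]   # Phase 2
chi2, p2, dof, _ = chi2_contingency(counts)
print(f"Chi-square: chi2={chi2:.2f}, dof={dof}, p={p2:.4f}")
```

A 2-sided p-value below .05 from the Wilcoxon test would indicate a significant paired change in follow-up rates between phases, mirroring the SAS-based analysis reported here.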
Follow-up rates and patients reviewed per resident and clinic are summarized in table 2. The distribution of residents by clinic was not significantly different between phases. Overall, 76 of 86 residents (88%) completed Phase 1, reviewing 1713 of their ambulatory patients' charts, and 73 residents (85%) completed Phase 2, reviewing 1509 charts. The mean number of charts reviewed was 22.5 in Phase 1 (SD = 5.3; range, 1–30) and 20.7 in Phase 2 (SD = 4.9; range, 5–30). Follow-up rates were higher in Phase 2 than in Phase 1 for communicating all results within 14 days and significant results within 72 hours (85% versus 78%, P < .001; and 82% versus 70%, P = .002, respectively).
Follow-up rates are summarized by phase within each clinic in table 3. Follow-up rates within 14 days were higher in Phase 2 for all clinics (Clinic 1: 79% versus 70%, P = .006; Clinic 2: 89% versus 85%, P = .02; Clinic 3: 94% versus 86%, P = .008). Follow-up rates for significant tests within 72 hours were higher in Phase 2 for Clinics 1 and 3 (83% versus 68%, P = .04; and 95% versus 73%, P = .03, respectively), but did not change for Clinic 2.
We collected information on tests ordered and communication methods by phase and clinic. In Clinics 1 and 2, most additional communication regarding results utilized patient letters. In Clinic 3, communication shifted to include more annotation for patient portal review and direct telephone calls. Communication of “significant” results was more likely to utilize telephone calls and less likely to employ letters.
This QI effort demonstrated that engaging residents in a web-based, residency-wide, audit and feedback QI project can increase rates of communicating test results to patients within prespecified time frames.
Residents in this study reported results to patients more often than residents in a previous self-reported frequency survey, in which 21% of residents stated they “sometimes” reported results, and 8% reported they “never” reported results.6 While our study did not categorize residents by self-reported frequency, if 30% had responded to results only “sometimes” or “never,” follow-up rates would have been much lower. Overall, rates of communication in both phases were consistent with a previous systematic review, which found failure to follow up on 7% to 62% of lab tests and 1% to 35% of radiologic studies.4 Even with the follow-up standard, residents in our study still failed to communicate results within 14 days in 15% of cases and failed to communicate significant results within 72 hours in 20%. Some failures may be related to local clinic-specific details; for example, the faculty practice model clinic utilized a patient portal system more frequently than clinics whose patients likely have less Internet access. Other factors, such as embedding continuity clinics between or within other rotations, may be more generalizable. As the previous study was performed before lab results were routinely incorporated into EHRs, technology differences may have facilitated some of the communication improvement.2 However, a persistent fraction of results went uncommunicated, suggesting that EHRs alone are insufficient to guarantee communication. Our clinics continue to emphasize communication timing standards, and are revisiting this project this academic year.
Previous work shows patients and providers prefer different methods of communication based on testing type, result abnormality, and patient factors, such as health literacy.7,11–13 In general, both patients and providers are more likely to prefer follow-up visits if results are abnormal and/or carry a high emotional burden.11,13 The added complexity of resident clinical responsibilities should compel programs to investigate alternative strategies to communicate important results. Across our clinics, the differences in patient access and use of portals have led to large differences in communication type.
Use of telephone-based EHR portal apps with alert functions may improve communication, as residents may be more likely to check such apps than to log into virtual computer networks during off-site rotations. In revisiting this project, residents from 2 clinics now have app-based EHR access for tracking results, and concurrent systematic efforts have increased enrollment in online portals. Additionally, continuity may be improved with more predictable schedules (ie, 4 + 1 blocks), potentially easing in-person communication.
This platform and project have significant implications in the current training environment. First, providing residents with personal performance data in a meaningful, systems-based manner as in this project meets requirements for multiple aspects of the Clinical Learning Environment Review, and engaging trainees in this practice-based learning and improvement exercise reinforces skills translatable into future clinical environments. Our tool provides real-time feedback relative to peers who have already completed the project, likely increasing the overall educational impact. The basic software interface is flexible and could be modified by other training programs or institutions.9
Limitations of this work include reliance on self-reported data and a lack of resources to verify the accuracy of reported data. In addition, the project focused on individual performance rather than larger systems, and improvements may not be sustained. Finally, as outcomes were measured at baseline and after multiple interventions, it is not possible to know which components of the multifaceted intervention were essential for improvement. To understand the sustainability of this audit and feedback model, we currently are repeating the audit and feedback project. Further work is needed to define the best processes for communicating test results in alignment with patient preferences and urgency of follow-up.
In this internal medicine residency project, an intervention combining a brief review of QI concepts with resident chart reviews and periodic reporting of comparative clinic and program summary performance resulted in improved timeliness of communicating study results to patients. Improvement was seen for both routine and significant test results and across community, Veterans Affairs, and university clinics.
Funding: The authors report no external funding source for this study.
Conflict of interest: The authors declare they have no competing interests.
This work was presented as a poster at the Society of General Internal Medicine Annual Scientific Meeting, Toronto, Ontario, Canada, April 22–25, 2015.