There is an unmet need for formal curricula to deliver practice feedback training to residents.
We developed a curriculum to help residents receive and interpret individual practice feedback data and to engage them in quality improvement efforts.
We created a framework based on resident attribution, effective metric selection, faculty coaching, peer and site comparisons, and resident-driven goals. The curriculum used electronic health record–generated resident-level data and disease-specific ambulatory didactics to help motivate quality improvement efforts. It was rolled out to 144 internal medicine residents practicing at 1 of 4 primary care clinic sites from July 2016 to June 2017. Resident attitudes and behaviors were tracked with presurveys and postsurveys, completed by 126 (88%) and 85 (59%) residents, respectively. Data log-ins and completion of educational activities were monitored. Group-level performance data were tracked using run charts.
Survey results demonstrated significant improvements on a 5-point Likert scale in residents' self-reported ability to receive (from a mean of 2.0 to 3.3, P < .001) and to interpret and understand (mean of 2.4 to 3.2, P < .001) their practice performance data. There was also an increased likelihood they would report that their practice had seen improvements in patient care (13% versus 35%, P < .001). Run charts demonstrated no change in patient outcome metrics.
A learner-centered longitudinal curriculum on ambulatory patient panels can help residents develop competency in receiving, interpreting, and effectively applying individualized practice performance data.
What was known and gap: Physicians are expected to review and analyze performance data to execute practice-based improvement for their patients, but few residency programs have published the frameworks used to design and implement curricula addressing practice feedback.
What is new: A curriculum to help residents receive and interpret individual practice feedback data and to engage them in quality improvement efforts.
Limitations: Surveys lacked validity evidence, and the curriculum was implemented in one residency program, which may limit generalizability.
Bottom line: The curriculum helped residents develop competency in receiving, interpreting, and effectively applying individualized practice performance data.
Physicians are expected to review and analyze performance data to execute practice-based improvement for their patients.1 As national policies such as the Medicare Access and CHIP Reauthorization Act of 2015 and practice recognition programs such as the National Committee for Quality Assurance Patient-Centered Medical Home have created incentives to increase ambulatory care quality across the country, the Accreditation Council for Graduate Medical Education (ACGME) has similarly aligned its objectives.2,3 Within the Practice-Based Learning and Improvement (PBLI) core competency, the ACGME has identified the subcompetency of improving via performance audit to train the next generation of clinicians to deliver high-quality, efficient health care.4
Most resident feedback studies have focused on inpatient performance metrics; few have utilized ambulatory population health metrics. Interventions that provided residents with practice feedback in conjunction with educational sessions, self-reflection, and involvement in quality improvement have had the most success in improving both process and clinical outcome measures,5–9 while those that provided residents with their data in isolation have been less successful.10,11 However, few programs have published the frameworks they used to design and implement a longitudinal and multimodal curriculum addressing practice feedback.5,9
Implementing a practice-based improvement curriculum requires accurate, resident-specific performance outcomes for patients. Previously published PBLI efforts have often relied on manual chart review because of difficulties accessing and automatically compiling personalized resident-level data from the electronic health record (EHR).5,12–15 With increasing EHR experience and usability, health care systems have an opportunity to provide more detailed, extensive, and frequent data to physicians.16
To our knowledge, this is the first described longitudinal residency curriculum to use a structured framework and individualized EHR-level data to guide how residents receive practice feedback. We aimed to design a curriculum that would help residents receive and interpret data on their patient panels, engage them in quality improvement efforts, and prepare them for the practice feedback they will likely receive throughout their careers.
Setting and Participants
The initial year of the program was conducted with 144 internal medicine residents (both categorical and primary care residents) from July 2016 to June 2017. The continuity clinic sites included 2 hospital-based clinics, a community-based practice, and a Veterans Affairs (VA) clinic.
The curriculum incorporated opportunities for residents to engage in the 5 elements of PBLI: responsibility for a panel of patients, auditing that panel based on evidence-based criteria, comparing the audit to benchmarks to explore potential deficiencies (and successes), identifying areas for change, and engaging in a quality improvement intervention.4
Key curricular design features included the following:
Longitudinal feedback provided at multiple points in time
A learner-centered approach that includes built-in self-reflection, individual goal setting and quality improvement activities, and individualized faculty coaching
Multimodal activities ranging from large group discussions to one-on-one coaching
Curriculum complementary to existing outpatient didactic curriculum and clinical practice
All participants were provided study information sheets, and anonymous survey participation was optional. Surveys were developed by the authors without further testing.
Our framework for designing the curriculum included 5 key elements:
Resident Attribution
Accurate identification of a resident's panel of patients is necessary to create a sense of ownership and responsibility for that panel. To capture as many of the patients our residents were caring for as possible, our only requirement for attribution was that the resident was listed in the primary care physician field in the EHR.
Metric Selection
We chose metrics that (1) residents feel they have the power to impact, (2) have a large enough denominator in small resident panels, (3) offer the opportunity for disease-based teaching, and (4) align with institutional quality improvement goals to allow residents to coordinate with larger-scale improvement efforts. Our initial metrics were blood pressure control in patients with hypertension and colorectal cancer screening for indicated patients. We utilized the practice feedback intervention suggestions outlined by Brehaut and colleagues17 as guidance for metrics delivery, including highlighting specific goals, providing individual data with comparators, addressing the credibility of the information, and preventing defensive reactions.
Faculty Coaching
Faculty coaching was the backbone of the curriculum. Faculty initially helped residents address the accuracy of their results and understand what the results implied for their practice patterns and behaviors. They later worked one-on-one with residents to identify potential opportunities for change and specific steps for utilizing their clinic team to help optimize care for their panel. Prior to the sessions, all faculty mentors received in-person training on the data delivery system and educational goals. They also received reference materials, residents' completed self-assessments, and examples of individualized coaching.
Peer and Cross-Site Comparisons
Peer comparisons prompted reflection on when outlying performance may be appropriate and when it represents a learning opportunity: for example, when lower rates of colorectal cancer screening reflect a patient panel with more barriers to screening compared with another clinic, or when a resident's outcomes differ sharply from a peer's despite similar clinics and populations. In small group clinic sessions, high-performing residents shared strategies they used in real time. Large group discussions allowed residents to discuss differences among sites and review clinic processes to replicate success. Posters were created for each clinic workroom to increase data visibility, display clinic-level trends, and recognize top performers.
Quality Improvement Focus
Residents used a self-assessment worksheet to reflect on their performance, set personal goals, and identify individual-level and systems-level interventions to help improve their performance. We encouraged residents to apply techniques taught in didactics to prioritize potential improvements, including strategic prioritization and the Impact vs. Effort Matrix.18 This served to counteract the tendency of residents to focus on individual interventions and to instead consider team- and system-level interventions.
We implemented this curriculum over the course of an academic year and have replicated it in subsequent years. The formal education components are outlined in the table.
We sent electronic presurveys and postsurveys to residents and asked them to self-report how frequently they engaged in practice feedback and in panel management activities and whether they thought reviewing data was useful in improving practice patterns and quality of care. We analyzed survey data using summative statistics, chi-square tests, and paired t tests, as appropriate. In the postsurveys, we also solicited written feedback on the curriculum. As initial educational process outcomes, we tracked how frequently residents logged in to view their data, the percentage of residents who attended educational activities, and self-assessment completion rates. We also tracked time spent by residents, faculty, and coordinators. Group-level performance on initial patient metrics was readily available for 3 of 4 clinics and was tracked using run charts.
Our institutional review board determined that this project was a quality improvement effort that did not require full review.
More than 90% of residents participated in each of the outlined curricular activities, and 100% (144 of 144) completed the self-assessment, which asked them to access their personal data at least once.
A total of 88% (126 of 144) of residents completed the presurvey and 59% (85 of 144) completed the postsurvey. Presurveys and postsurveys demonstrated significant improvements on a 5-point Likert scale in residents' self-reported ability to receive (from a mean of 2.0 to 3.3, P < .001) and to interpret and understand (mean of 2.4 to 3.2, P < .001) their practice performance data. Residents also reported significant improvement in receiving coaching on how to improve their practice performance (mean of 2.4 to 3.2, P < .001).
Self-reported application of these skills in clinical practice also increased. Although residents most often reported "never" for all 3 behaviors in presurveys, figures 1 through 3 show the increased frequency with which residents reported the following: (1) looking up practice performance data (percentage responding "sometimes" or "frequently" increasing from 16% [20 of 126] to 64% [54 of 85], P < .001); (2) using that data to identify opportunities for change (15% [19 of 126] to 60% [51 of 85], P < .001); and (3) adjusting their workflow or clinic processes to help improve practice performance (26% [33 of 126] to 64% [54 of 85], P < .001).
Resident perceptions of the utility and impact of reviewing practice performance data also changed. The proportion of residents who agreed or strongly agreed that reviewing practice performance data is useful for improving practice patterns did not change significantly (72% [91 of 126] to 82% [70 of 85], P = .09). The proportion who agreed or strongly agreed that their practice had seen improvements in patient care by reviewing practice performance data increased from 13% (16 of 126) to 35% (30 of 85, P < .001).
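The between-survey comparisons of proportions above can be reproduced with a short calculation. This is an illustrative sketch, not the authors' analysis code, and it assumes Pearson's chi-square on a 2×2 table without continuity correction (the methods state only that chi-square tests were used):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (df = 1) for the 2x2 table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    # Shortcut form of sum((observed - expected)^2 / expected)
    return n * (a * d - b * c) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d)
    )

# Agreed or strongly agreed that patient care improved:
# presurvey 16 of 126, postsurvey 30 of 85
stat = chi_square_2x2(16, 126 - 16, 30, 85 - 30)
print(round(stat, 1))  # 15.2
```

The statistic of roughly 15.2 exceeds the df = 1 critical value of 10.83, consistent with the reported P < .001.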
Resident log-ins could be tracked at 3 of our 4 clinic sites (58%, 84 of 144 residents) and increased in parallel with curricular activities throughout the year (figure 4). Group-level performance on the initial 2 metrics was readily available at the same 3 sites. Run charts demonstrated stability in colon cancer screening rates and hypertension control over the course of the intervention, as well as nonrandom variation in the form of a shift in the data toward higher colorectal cancer screening rates later in the year and in the first few months of postintervention follow-up (figure 5a and b).
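The nonrandom variation described above refers to a standard run-chart rule: a "shift" of 6 or more consecutive points on the same side of the median. A minimal sketch of that rule, using hypothetical monthly screening rates (not the study's actual data):

```python
def median(values):
    """Median of a list of numbers."""
    s, n = sorted(values), len(values)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def has_shift(values, run_length=6):
    """Detect a run-chart 'shift': run_length or more consecutive
    points on the same side of the median."""
    m = median(values)
    run, side = 0, 0
    for v in values:
        if v == m:
            continue  # points on the median neither extend nor break a run
        s = 1 if v > m else -1
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False

# Hypothetical monthly colorectal cancer screening rates:
# flat early months, then sustained higher rates late in the year
rates = [0.42, 0.40, 0.41, 0.43, 0.41, 0.40,
         0.46, 0.47, 0.48, 0.47, 0.49, 0.48]
print(has_shift(rates))  # True
```

Rules of this kind distinguish sustained process change from random month-to-month variation, which is why the late-year rise in screening rates registered as nonrandom even though overall rates appeared stable.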
Both resident and faculty acceptability were high, with enthusiasm about the availability of data and tools to help with data interpretation. Residents suggested a variety of additional metrics for which they wanted future feedback and expressed interest in using their data to drive quality improvement projects. Most frustrations centered on technical problems with data accessibility or accuracy of panel identification. Faculty were supportive of the framework and willing to devote curricular time to coach the residents. Residents and faculty thought further faculty training with additional resources and experience could be helpful.
The support of residency leadership and clinic site directors, as well as curricular flexibility in the 4+1 scheduling model, made the curriculum feasible. Relatively little curricular time was used (1 hour of ambulatory didactic lecture time and two 30-minute sessions of preclinic educational time). Residents were expected to do a small amount of practice feedback work (such as completing the self-assessment, which took 15 minutes on average) during their half-day of administrative time. Faculty development included a 45-minute meeting with ambulatory associate program directors and clinic site directors. Each of the 10 faculty clinic champions also had a 20-minute one-on-one session with a chief resident to review curricular goals and resident data. A nurse clinical quality specialist devoted approximately 10 hours to data management and analysis over the 1-year period.
Using a structured framework, we implemented a longitudinal curriculum centered on residents' ambulatory patient panels. The curriculum was feasible to add to our residency program without significant additional learner or instructor time and achieved high levels of resident and faculty acceptability. Residents reported significant improvements in their ability to receive, interpret, and understand practice feedback. They logged in to access their data more frequently and participated in curricular activities at high rates. Patient outcomes for the chosen metrics did not change among our resident patient panels.
To our knowledge, this is the first described longitudinal residency curriculum to use a structured framework and individualized EHR-level data to guide how residents receive practice feedback. Prior studies of resident practice feedback interventions have relied on manual chart review, which can provide meaningful feedback but is more time-intensive and less replicable for a larger number of quality measures over time.5,11–15 This framework was designed specifically to frame messaging for residents around acting on clinically meaningful, valid metrics to help improve the quality of care they deliver and to overcome some of the typical challenges of practice feedback. Such challenges include those generalizable to all physicians (adequate time, data accuracy, and systems support to help physicians use data to effect change) and those unique to residents (small panel sizes, varied clinic settings, and competing educational objectives).
We used strategies highlighted by 2 reviews indicating that practice feedback is most effective at improving practice when provided multiple times, combined with other interventions (eg, education, guidelines, reminders), and tied to specific goals and action plans.19,20 Prior studies of resident practice feedback interventions have reached largely similar conclusions: practice feedback delivered in isolation10–12 is less effective at improving quality outcomes than multifaceted interventions.5–9,21
Despite modeling these proven strategies, patient outcomes remained largely stable, similar to prior published research demonstrating inconsistent effects of practice feedback on outcomes.19 However, run charts did demonstrate a nonrandom trend toward improved outcomes when including the first few months of postintervention follow-up. Prior resident studies with improved clinical outcomes have largely seen those improvements over the course of 2 or more years.15,21 More time is likely needed to determine if improved educational outcomes also translate to improved patient outcomes.
The curriculum was implemented in a single residency program and may not be generalizable, although it was successful within a large academic program with multiple ambulatory clinics and 2 EHR systems. The surveys had no validity evidence, so respondents may have interpreted questions differently than intended. We also did not have long-term data on how residents view practice performance over the course of their residency or, more importantly, their careers.
In the future we hope to combine the practice feedback framework with an ambulatory quality improvement curriculum to help motivate data-driven individual and group efforts to improve patient outcomes. Further research is needed to see if similar success can be obtained in other programs, including those in different specialties that also have ambulatory patient panels. Most importantly, long-term research is needed to see if these efforts successfully prepare residents to receive and effectively use practice feedback data throughout their careers.
A longitudinal practice feedback curriculum that used EHR-generated provider-level data complemented an ambulatory didactic curriculum to help residents develop PBLI competencies and identify both individual and large-scale opportunities for quality improvement.
Funding: The authors report no external funding source for this study.
Conflict of interest: Dr Gupta is the Director of Evaluation and Outreach at Costs of Care Inc.
The authors would like to thank Drs Jodi Friedman, Lisa Skinner, Christina Harris, Mina Ma, Peter LeFevre, Anna Chirra, and Allison Diamant for their support, suggestions, and time in implementing this curriculum. The authors would also like to thank Meghan Nechrebecki, Sean Furlong, and Vilay Khandewal for their instrumental assistance in data acquisition and troubleshooting.