ABSTRACT
The Accreditation Council for Graduate Medical Education specifies that trainees must receive clinical outcomes and quality benchmark data related to their patient populations. Program directors (PDs) are challenged to identify meaningful data and provide them in formats acceptable to trainees.
We sought to understand what types of patients, data/metrics, and data delivery systems trainees and PDs prefer for supplying trainees with clinical outcomes data.
Trainees (n = 21) and PDs (n = 12) from multiple specialties participated in focus groups during academic year 2017–2018. Focus group discussions were analyzed to identify key themes for providing clinical outcomes data to trainees.
Trainees and PDs differed in how they identified patients for clinical outcomes data for trainees. Trainees were interested in encounters where they felt a sense of responsibility or had autonomy/independent decision-making opportunities, continuity, or learned something new; PDs used broader criteria including all patients cared for by their trainees. Both groups thought trainees should be given trainee-level metrics and consistently highlighted the importance of comparison to peers and/or benchmarks. Both groups found value in “push” and “pull” data systems, although trainees wanted both, while PDs wanted one or the other. Both groups agreed that trainees should review data with specific faculty. Trainees expressed concern about being judged based on their patients' clinical outcomes.
Trainee and PD perspectives on which patients they would like outcomes data for differed, but they overlapped for types of metrics, formats, and review processes for the data.
What was known and gap
The Accreditation Council for Graduate Medical Education specifies that trainees must receive clinical outcomes and quality benchmark data. Program directors (PDs) must identify meaningful data and provide them in formats acceptable to trainees.
What is new
Focus groups of PDs and trainees sought to understand what types of patient populations, data/metrics, and data delivery systems trainees and PDs prefer for supplying trainees with clinical outcomes data.
Limitations
The study was conducted at a single institution, limiting generalizability, and results may have been influenced by factors such as an institutional culture of team-based care.
Bottom line
Trainee and PD perspectives on which patients they would like outcomes data for differed, but they overlapped for types of metrics, formats, and review processes for the data.
Introduction
Learning from experience is the core of practice-based learning and improvement. The Accreditation Council for Graduate Medical Education (ACGME) Common Program Requirements state that residents and fellows should be provided data about clinical outcomes and quality benchmarks.1 The Clinical Learning Environment Review (CLER) Health Care Quality (HQ) Pathway 3 further specifies that data should be provided with specific attention to trainees' own patients rather than their service or clinical group.2 Results from early CLER visits showed that programs vary significantly in how they provide clinical data to trainees.3
To provide trainees with optimal data for feedback, we must accurately attribute patients to trainees, either individually or at the team level, and identify meaningful metrics. If trainees do not have confidence in the feedback, the impact may be lessened.4 Therefore, trainees must trust that data are accurate relative to the outcome, but also relative to their own roles in the patient's care. Other authors have described algorithms for attributing patients to trainees5,6 and proposed the concept of distinguishing between attribution to trainees and contribution by trainees.7
Our institution developed a trainee clinical dashboard, “ResiDash,” as a tactic to provide clinical feedback. This dashboard uses attribution rules (eg, authorship of specific note types) to attribute patients to individual trainees and calculate individual quality metrics. Early feedback on our dashboard revealed that trainees often did not perceive the metric data as reflective of their own practice because they did not agree with the attribution algorithm. For example, some trainees felt that when “cross-covering” they had less ownership of outcomes because plans may have been directed by other team members. Several other investigators have shared their experiences and challenges in using dashboards for trainee clinical metrics.5,8 We could not identify reports of how these dashboards are used by clinical competency committees or in other assessments.
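To make the idea of rule-based attribution concrete, the sketch below shows one simplified way such rules could be expressed. The note types, field names, and metric are hypothetical illustrations under assumed data structures; they do not reproduce the actual ResiDash logic.

```python
# Minimal sketch of rule-based patient attribution, assuming a simplified
# EHR note feed. Note types, thresholds, and field names are hypothetical
# illustrations, not the actual ResiDash attribution rules.
from collections import defaultdict

# Note types that, for this sketch, signal meaningful trainee involvement.
ATTRIBUTING_NOTE_TYPES = {"H&P", "Progress Note", "Operative Note", "Discharge Summary"}

def attribute_patients(notes):
    """Map each trainee to the set of patients attributed to them.

    `notes` is an iterable of dicts with keys: 'author_id', 'author_role',
    'patient_id', and 'note_type'.
    """
    attribution = defaultdict(set)
    for note in notes:
        if note["author_role"] != "trainee":
            continue
        if note["note_type"] in ATTRIBUTING_NOTE_TYPES:
            attribution[note["author_id"]].add(note["patient_id"])
    return attribution

def readmission_rate(attributed_patient_ids, readmitted_ids):
    """Example trainee-level metric: proportion of attributed patients readmitted."""
    if not attributed_patient_ids:
        return None
    return len(attributed_patient_ids & readmitted_ids) / len(attributed_patient_ids)
```

As the trainee feedback above suggests, the weak point of such rules is not the computation but the denominator: a cross-covering trainee may author a qualifying note without feeling ownership of the resulting outcomes.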
We designed this focus group study using a constructivist approach9 to elucidate trainee and program director (PD) preferences regarding the patient populations, data/metrics, and data format for providing patient outcomes data to trainees.
Methods
Our study was conducted during academic year 2017–2018 at a large, urban, multisite, university-based institution with more than 100 training programs and 1600 trainees. We recruited trainee participants via e-mail, using a log of users who had accessed our ResiDash dashboard (114 users, all of whom were invited). We recruited PD participants via our Graduate Medical Education Committee membership, as well as specific PDs who had previously expressed interest in this subject. We separated trainees from PDs in focus groups because we wanted trainees to feel safe expressing their views. We scheduled focus groups based on participant availability to include a range of levels and specialties. We accepted all respondents who could attend one of the focus groups. We offered a $50 gift card as an incentive to all participants.
We used a constructivist approach with recognition that trainees and PD perspectives would generate and shape the meaning of the focus group discussions. Focus groups allow for participants to interact with each other and endorse ideas in real time,9 which we hoped would allow us either to identify broadly applicable guidelines or to determine the impracticality of that approach.
We developed a focus group guide with questions informed by the ACGME CLER HQ Pathway 3 (“Residents/fellows receive, from the clinical site, specialty-specific data on quality metrics and benchmarks related to their patient populations”).2 We added questions about the formatting of data based on prior feedback from users who had accessed ResiDash. Focus group interviews were conducted by an interviewer with relevant training and content expertise (G.R.), supported by research assistants with backgrounds in qualitative research (M.T. and S.C.M.).
The focus group guide (provided as online supplemental material) focused on 3 major areas: “who,” “what,” and “how.” We asked trainee participants to define the population of patients for whom they have participated in care and would be interested in data; what kinds of data they would be interested in; and how they would like those data presented and reviewed. We asked PDs similar but reworded questions to understand how they think about data specifically for their trainees. One author (G.R.) conducted the focus groups, which were audiorecorded and transcribed for coding.
We used conventional content analysis to analyze transcripts, capturing what trainees and PDs offered independent of any existing frameworks.10 We conducted an initial review of transcripts from each focus group, developed codes independently, then compared and consolidated them. Three researchers participated in coding and analysis for each transcript. Transcripts were analyzed using a constant comparative approach, iteratively comparing similarities and differences in the data to develop a sense of patterns and themes.11 The researchers included 1 clinician who was very familiar with the content and context (G.R.), 1 education research faculty member (C.B.), and 1 research assistant (M.T. or S.C.M.) to provide a check against potential bias the authors may have had due to their roles in graduate medical education (GME).
The University of California, San Francisco Committee on Human Research approved the study.
Results
A total of 21 trainees, representing 13 unique training programs, participated in 1 of 3 focus groups. Of these, 8 were currently enrolled in a core residency training program, 10 were in a fellowship program, and 3 were recent residency graduates within 1 month of graduation who answered as if they were still residents. A total of 12 PDs (including 1 associate director), representing 12 unique residency and fellowship programs, participated in 1 of 3 focus groups. Of note, 1 PD focus group included only 2 participants. Focus group composition by number, specialty, and role is reported in table 1.
We identified 4 themes from the focus group data (table 2) and organized these themes around key topics. Trainees and PDs applied different criteria to identify the populations of patients for whom they wanted data (PDs used much broader criteria). The groups overlapped significantly on what kinds of data they were interested in (focus on individual metrics) and the format of the data (need for both “push” and “pull,” customizable, and comparison to peers). The groups also agreed that trainees should review data with faculty, although trainees identified concerns about which faculty would be appropriate for this activity.
Approaches to Identifying Patients (“Who”)
Trainees and PDs generally took different approaches to identifying the patients for whom trainees should receive data about their clinical practice (ie, the denominator). Trainees tended to describe specific criteria based on their level of direct involvement, while PDs used broader criteria (eg, all patients on trainee-led services).
Trainees were interested in data around “meaningful” encounters, described as patients for whom they felt a sense of responsibility and/or had autonomy/independent decision-making opportunities, had continuity, or learned something new. Examples included deciding on the plan during a clinic visit, deciding that a patient needed surgery, or operating on a patient in specific roles.
Trainees who functioned as consultants were interested in patients for whom they made specific recommendations and whether those recommendations ended up being correct. PDs were less consistent in their discussion of autonomy. Many focused on the context of team care rather than individual care. Several PDs expressed that trainees should receive data on all patients who were on their residents' care teams. An alternative perspective that garnered agreement from PDs was the importance of focusing on patients for whom trainees had independence and/or autonomy, to avoid pushback from residents about their actual effect on care.
Continuity of care was a common factor used by trainees and PDs across specialties. One trainee described how patients for whom they had written consecutive notes, or patients in their panel, might be useful ways to identify patients meaningful to them. However, this same resident also noted that primary care panels were not always reliable markers of continuity.
In addition to data reflecting their clinical practice, trainees expressed interest in receiving clinical data regarding patients from whom they learned something new, even if they had less ownership or involvement. Trainees were particularly interested in learning from cases involving a misdiagnosis and discussed the value of relaying this information back to the residents involved.
Specific Data of Interest (“What”)
Multiple trainees and PDs expressed a desire to receive patient caseload data for logs as well as for identifying gaps in clinical experience. PDs added that these data also would be useful for monitoring consistency of trainee clinical experiences within a program.
Beyond logs, trainees and PDs were interested in individual patient-level metrics that directly related to trainees' delivery of care, rather than system or operational metrics. Participants highlighted the importance of providing clinical outcomes data that are essential and relevant to trainees' own practice.
PDs and trainees described clinical and quality improvement (QI) metrics in the context of individual specialties, using narrow metrics (eg, adherence to specific practice guidelines) as well as broadly applicable metrics (eg, complications, test utilization, readmissions). Both groups emphasized how the patient population would largely dictate the metrics of interest, and therefore residents might potentially receive data on different metrics each month. One PD described the ideal state as one in which “faculty members identify a specific metric that they think is meaningful and valuable in the context of their practice.” PDs also thought that trainees in outpatient practices should receive the same metrics as faculty in those practices.
Trainees and PDs consistently described a desire to compare trainees' performance and experiences with peers, and shared how this may inform how they can change clinical practice or influence learning goals. Trainees indicated that different metrics might allow for comparison either across specialties or to providers in the same specialty at other institutions.
Overall, trainees and PDs expressed limited interest in institutional-level metrics, such as those reported on health systems' QI reports. They did acknowledge that certain institution-wide operational metrics (eg, hospital-acquired conditions) could be beneficial to their learning, although the utility would be greater as they approached transition to attending physician roles. Trainees were more interested in adverse events data than revenue or cost data.
Preferred Data Format (“How”)
Trainees and PDs had mixed responses as to how they would like to access or receive data. The majority of trainees supported dual approaches: regular e-mail notifications, including key outcome metrics, in addition to a customizable online interface. PDs tended to be split—some wanted data pushed to them and others wanted data only on demand—and very few wanted both. Both groups discussed the importance of presenting data with a standard of comparison for benchmarking.
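As an illustration of what a “push” digest pairing trainee-level metrics with a peer benchmark might look like, here is a minimal sketch; the metric names, values, and formatting are invented for illustration and are not drawn from ResiDash or any described system. The complementary “pull” side would be an on-demand, customizable dashboard view.

```python
# Hedged sketch of a "push" e-mail digest pairing a trainee's metrics with a
# peer comparison. Metric names and values are invented for illustration only.
def format_digest(trainee_name, metrics):
    """Build a plain-text digest; `metrics` maps metric name -> (own value, peer median)."""
    lines = [f"Monthly clinical outcomes summary for {trainee_name}", ""]
    for name, (own, peer_median) in metrics.items():
        direction = "above" if own > peer_median else "at or below"
        lines.append(f"- {name}: {own:.0%} (peer median {peer_median:.0%}, {direction} peers)")
    lines.append("")
    lines.append("Full, customizable views are available on the dashboard.")
    return "\n".join(lines)

# Example usage with invented values
print(format_digest("Dr. Example", {
    "30-day readmission rate": (0.12, 0.10),
    "Guideline-concordant antibiotic duration": (0.85, 0.80),
}))
```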
Approach to Review and Utility of Data
Trainees described tangible and intangible benefits of using patient data to shape their clinical practice as individuals, as part of a team, and institutionally. Trainees expressed enthusiasm for using available clinical outcomes data as springboards for discussions regarding QI projects, which could drive meaningful improvement within their specialties or divisions.
Almost universally, participants felt that trainees should review the data with a faculty member. Some felt that a PD was the appropriate person, while others felt the opposite. Several trainees specifically expressed concern about reviewing data with a PD—either that the clinical data would be used to give an unfavorable evaluation, or that their PD might not know the clinical context. These trainees preferred to review their data with a clinical supervisor. PDs agreed that trainees would benefit most from reviewing data with a clinical mentor who could provide context.
Discussion
In this study, we sought to understand trainee and PD preferences regarding patient populations, data/metrics, and data format for providing patient outcomes data to trainees. We accomplished this by describing which patients trainees and PDs are most interested in (“who”), what types of metrics they find useful and meaningful (“what”), and how they would like the data presented and reviewed (“how”). Trainees and PDs expressed some overlapping and diverging views on “who” the patients of interest are; there was greater overlap on the “what” and “how.”
In our focus groups, trainees described criteria for meaningful encounters that they could then use to identify patients of interest, whereas PDs tended to include all patients on trainee-led services. This lack of alignment creates a challenge, as patient identification and attribution are a key foundational step in providing data. If trainees and faculty do not agree on which patients are included, there is a greater chance they will not agree on the credibility of the feedback. Trainees' focus on subjective criteria, such as “meaningful” encounters, also creates a challenge for reports generated from electronic health record data.
Amid these challenges, approaches to providing meaningful data will likely require a combination of algorithm-based attribution and direct identification of patients by trainees. Algorithm-based attribution enables documentation of clinical exposure, serving one role in providing feedback.5 Direct identification of patients enables trainees to identify patients in real time as they actively participate in patient care and/or identify learning opportunities, as well as retrospectively after they have had time to reflect on experiences.12
These dual approaches to patient identification will be useful for different purposes. Algorithms will likely be more useful for populations of patients and identification of practice patterns. Alternatively, direct identification provides examples of specific cases that might constitute a portfolio of meaningful clinical experiences. Notably, when trainees directly identify patients with an interest in follow-up, they may use criteria that only indirectly relate to clinical care. Some trainees may desire follow-up on patients based on curiosity, generally related to clinical uncertainty, personal attachment, and patient vulnerability.13 Given that the underlying reasons for identifying patients will influence the learning that arises, the purpose of sharing data should be identified prospectively.
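A minimal sketch of how these two identification pathways could be kept side by side is shown below, under assumed, hypothetical data structures (nothing here reflects an implemented system): the algorithm-attributed panel supports aggregate practice-pattern metrics, while trainee-flagged cases, with their stated reasons recorded prospectively, support a portfolio of meaningful experiences.

```python
# Hedged sketch of combining algorithm-based attribution with direct,
# trainee-initiated identification. Data structures and labels are
# illustrative assumptions, not a described implementation.
from dataclasses import dataclass, field

@dataclass
class TraineePanel:
    """Patients linked to one trainee, tagged by how the link was made."""
    trainee_id: str
    algorithm_attributed: set = field(default_factory=set)  # eg, from note-authorship rules
    self_identified: dict = field(default_factory=dict)     # patient_id -> stated reason

    def flag_patient(self, patient_id, reason):
        """Trainee marks a patient for follow-up, recording the purpose prospectively."""
        self.self_identified[patient_id] = reason

    def practice_pattern_population(self):
        """Population suited to aggregate practice-pattern metrics."""
        return self.algorithm_attributed

    def portfolio_cases(self):
        """Cases suited to an individual portfolio of meaningful clinical experiences."""
        return set(self.self_identified)

# Example use with invented identifiers
panel = TraineePanel("trainee_001")
panel.algorithm_attributed.update({"pt_101", "pt_102", "pt_103"})
panel.flag_patient("pt_200", reason="diagnostic uncertainty; wants follow-up on final diagnosis")
```

Tagging each link with its source keeps the two purposes distinct, so that the reason data are being shared can be identified before they are reviewed.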
Our work underscored the findings of other authors, who found that residents are interested in patient-level feedback,14 but also that residents need to have confidence in the data (in our case, the attribution of the data) if they are to use it for improvement.4 Our respondents also highlighted the importance of trusted mentors for contextualizing any feedback.15,16 Educators should be aware that providing retrospective feedback has the potential to trigger a strong emotional reaction.17 Finally, we noted that residents expressed less interest in institutional metrics, and their faculty acknowledged this lower level of interest without great concern. This finding is consistent with the CLER report, which noted that physician groups were more likely to identify departmental priorities than institutional priorities,3 and highlights an opportunity for greater alignment between GME and health systems.
An important limitation of our study is that it was conducted at a single institution. It is likely that our results were impacted by factors including an institutional culture of team-based care, as well as the limited ways in which faculty in our institution are currently able to access meaningful patient-level data for feedback. Smaller fellowship programs may have been overrepresented among our participants, and some larger programs (eg, emergency medicine, psychiatry) were not represented, which leaves open the possibility that we missed important perspectives. Interestingly, emergency medicine is a setting in which a trainee might manage all aspects of a patient's care during an encounter, so attribution is more straightforward. There is significant variation in care models between specialties, related to the duration of patient encounters and the number of providers seen during a single encounter. These differences may affect the types of data that are desired by learners and their PDs.
As GME and health care delivery models evolve, educators and health system leaders are challenged to provide credible clinical feedback to residents in their health care systems.2 Educators, quality specialists, and informaticists will need to work together to develop practical and effective data reporting. Only through effective collaboration will we be able to provide the right data, about the right patients, to the right learners, at the right time.
Conclusions
Trainees and PDs are able to describe criteria for identifying patients about whom they would like clinical outcomes data, what data they are interested in, and how they would like to receive these data. Criteria differed for the patient population to be included but overlapped for types of metrics as well as the formats and review processes.
References
Author notes
Editor's Note: The online version of this article contains the focus group guide.
Funding: This research was funded using intramural funds from the Haile T. Debas Academy of Medical Educators at UCSF.
Competing Interests
Conflict of interest: The authors declare they have no competing interests.
Portions of this research were previously presented as oral abstracts at the UCSF Education Showcase, San Francisco, California, April 29–May 1, 2019.
The authors would like to thank Laura Ellerbe, MS, who participated in data analysis, and Bridget O'Brien, PhD, who provided very helpful feedback on the manuscript.