To ensure that residency and fellowship programs create environments where trainees can learn how individual practice relates to the larger health care system, the Accreditation Council for Graduate Medical Education (ACGME) core competencies include practice-based learning and improvement and systems-based practice. Accordingly, the ACGME Common Program Requirements mandate that trainees receive data on quality metrics and benchmarks related to their patient populations and demonstrate competence in using information technology for learning.1,2 

One possible approach to achieve these aims is through dashboards, which are “visual displays of the most important information” designed to synthesize and concisely visualize complex data.3  When employed in the medical setting, they can be used to monitor case mix, increase adherence to practice guidelines, and facilitate higher quality and more efficient clinical care.4-14  Within medical education, dashboards also hold the potential to enhance training in such fields as preventive medicine, population health, and systems management.15-19  Detailed below are strategies for successful design and implementation of clinical dashboards for graduate medical education (GME) training programs.

Whether designing a learner-focused dashboard from scratch or adapting a prebuilt one, it is important to understand the fundamental features of a high-quality dashboard. The development and evaluation of a dashboard comprise several key steps: identification of performance metrics, consideration of data sources, creation of an effective visual display, and curriculum integration (Figure 1). The design requires collaboration with information technology (IT) professionals, administrators, and program leadership as well as input from trainees at each step of the process. In the authors' experience, programs should plan for a timeline of 6 to 12 months for the design and initial implementation phase, recognizing that the essential processes of dashboard refinement and maintenance require ongoing investment of resources. Institutional support, such as funding and IT resources, may be garnered by leveraging the need to satisfy ACGME requirements and by achieving financial goals of the health system.

Figure 1

Learner-Centered Dashboard Design

Note: The design of a learner-centered dashboard involves several discrete steps, requiring learner engagement, faculty development, and cultural shift throughout the process. Within each step, programs must consider several principal characteristics (boxes) to tailor the dashboard to the needs of their learners.

Performance metrics ideally represent measures of successful patient care and population management.20  Deciding on the type and number of performance metrics is paramount in dashboard design. Currently, there is no common taxonomy of metrics for different specialties; however, some reported guiding principles are to employ metrics that are resident-sensitive, educationally aligned, and appropriate to panel size and case mix.21-24

First, programs should identify metrics that are resident-sensitive. Resident-sensitive quality measures have been defined as ones that “require an action by the resident, with the resident possessing a realistic opportunity to do so that directly effects patient care.”21  For example, while lead screening is an important pediatric quality metric, if a nursing outreach initiative is driving rates of screening, resident contribution to the metric may be low and not reflect their performance. Trainee input can help generate consensus on those metrics most relevant to their practice.

Second, programs should consider how metrics align with and inform educational goals.23  In radiology training, for example, a dashboard may display the percentage of time a resident's initial read was modified by an attending. This metric functions as a surrogate measure of their diagnostic accuracy and should improve with ongoing education. Furthermore, an analysis of reads that require substantive edits can inform resident- and program-level educational interventions.25 

Third, programs should consider which metrics are suitable given panel size and case mix. One way to address small panels with heterogeneous case mix is to focus on process measures, which are less subject to case mix variability. Process measures are also more sensitive to differences in care than outcome metrics, for which the measured event may be too uncommon to detect meaningful differences among residents. For example, the appropriate use of venous thromboembolism prophylaxis may be a more useful measure of surgical trainee practice than the reduction in rates of postoperative pulmonary embolism.26  If used, process measures must be linked to patient-related outcomes through strong evidence.
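
To make the panel-size argument concrete, the back-of-the-envelope sketch below contrasts how many outcome events versus process decisions a single trainee might generate in a year; the caseload and incidence figures are hypothetical assumptions for illustration, not published rates.

```python
# Illustrative arithmetic (hypothetical figures): why rare outcomes are hard to
# compare across small trainee panels, while process measures accrue quickly.
caseload = 60                   # assumed operative cases attributed to one trainee per year
pe_incidence = 0.01             # assumed baseline postoperative pulmonary embolism rate
prophylaxis_opportunities = 60  # nearly every eligible case is a prophylaxis decision

expected_pe_events = caseload * pe_incidence
print(f"Expected PE events per trainee per year: {expected_pe_events:.1f}")
print(f"Prophylaxis decisions observed per trainee per year: {prophylaxis_opportunities}")
# Roughly 0.6 expected outcome events cannot distinguish trainees, whereas dozens
# of observed process decisions can.
```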

In contrast, clinical outcome measures are attractive in that they are considered the gold standard and are often readily available, since health care organizations may already report them to monitor quality. One way to minimize the effect of small trainee panel sizes is to choose composite metrics, such as the diabetes measures in the Healthcare Effectiveness Data and Information Set. Aggregation of metrics has been shown to increase the reliability of determining the physician “thumbprint.”27  The concept of the “balanced scorecard” suggests the use of a range of metrics, including clinical outcome measures, process measures, and patient satisfaction scores, to provide a holistic view of the trainee.28
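
As a rough illustration of such aggregation, the sketch below pools several related diabetes measures into a single composite rate per resident panel; the metric names, counts, and pooling approach are hypothetical assumptions for illustration rather than a prescribed scoring method.

```python
# Illustrative sketch (hypothetical metric names and counts): pooling related
# diabetes measures into one composite rate per resident panel.

panels = {
    "resident_a": {"a1c_tested": 38, "a1c_controlled": 30, "eye_exam": 25,
                   "nephropathy_screen": 33, "eligible": 45},
    "resident_b": {"a1c_tested": 18, "a1c_controlled": 12, "eye_exam": 10,
                   "nephropathy_screen": 15, "eligible": 22},
}

COMPONENTS = ["a1c_tested", "a1c_controlled", "eye_exam", "nephropathy_screen"]

def composite_rate(panel: dict) -> float:
    """Pool numerators and denominators so a small panel contributes more
    observations to one composite than to any single component metric."""
    numerator = sum(panel[c] for c in COMPONENTS)
    denominator = panel["eligible"] * len(COMPONENTS)
    return numerator / denominator if denominator else 0.0

for resident, panel in panels.items():
    print(f"{resident}: composite diabetes care rate = {composite_rate(panel):.0%}")
```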

Finally, it is worth considering that a dashboard not only communicates performance to learners, but also imparts the values of the program and health system.

The next step is to evaluate the data sources underpinning the dashboard. Common sources include the electronic health record (EHR), which can yield patient-level data and process measures, as well as patient satisfaction scores and billing data. The choice of data source may also affect provider attribution, depending on whether data are organized at the level of the individual or the care team.29  For example, length of stay is a metric that better reflects interdisciplinary team function than an individual's clinical management. Even metrics that are ascribed to an individual, however, can be influenced by the actions of colleagues, preceptors, or clinical staff.24  Programs can either select metrics that are more representative of independent practice or try to account for interdependence by adjusting for faculty supervision or clinic site. Emphasizing that the data reflect population health and opportunities for improvement, rather than a report card, can also alleviate concerns about dashboard performance and attribution.30  Programs that want to use these data summatively, rather than formatively, may find progress over time to be a better indicator of performance than static metrics.25
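
One crude way to account for site-level interdependence, sketched below with hypothetical field names and values, is to report each trainee's rate relative to the mean of their own clinic site rather than against a single program-wide figure.

```python
# Illustrative sketch (hypothetical data): comparing each resident's screening
# rate to the mean of their own clinic site as a crude adjustment for
# site-level interdependence.
from collections import defaultdict
from statistics import mean

records = [
    {"resident": "a", "site": "east", "rate": 0.62},
    {"resident": "b", "site": "east", "rate": 0.55},
    {"resident": "c", "site": "west", "rate": 0.41},
    {"resident": "d", "site": "west", "rate": 0.47},
]

by_site = defaultdict(list)
for r in records:
    by_site[r["site"]].append(r["rate"])

site_means = {site: mean(rates) for site, rates in by_site.items()}

for r in records:
    delta = r["rate"] - site_means[r["site"]]
    print(f"Resident {r['resident']} ({r['site']}): {r['rate']:.0%} "
          f"({delta:+.0%} vs site mean)")
```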

With any metric, it is vital that programs validate the data, as inaccurate or poorly attributed information will quickly undermine trust in the dashboard. EHR data can be validated by running a second, related database query and comparing the results; for example, a report of patients on controlled substances could be compared with a list of patients receiving opioid prescriptions. A final validation step is to recruit a group of trainees to review the results by hand for errors or omissions.
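
A minimal sketch of this kind of cross-check is shown below, assuming a hypothetical reporting database with controlled_substance_patients and opioid_prescriptions tables; the table and column names are illustrative only, not a real EHR schema.

```python
# Illustrative sketch: validating a dashboard metric by comparing two
# independently derived patient lists from a reporting database.
# Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect("reporting.db")  # assumed read-only reporting copy, not the live EHR

controlled = {row[0] for row in conn.execute(
    "SELECT patient_id FROM controlled_substance_patients")}
opioid = {row[0] for row in conn.execute(
    "SELECT DISTINCT patient_id FROM opioid_prescriptions WHERE active = 1")}

# Patients appearing in one list but not the other flag possible attribution
# or extraction errors for manual review.
print("On opioids but missing from controlled-substance report:", opioid - controlled)
print("Flagged as controlled-substance but no active opioid order:", controlled - opioid)
```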

Despite data validation, EHR-sourced dashboards are subject to certain limitations. Notably, data can be extracted only from structured fields, and free text in a note may not be recognized. Once a dashboard is rolled out, programs should consider a reporting infrastructure so that users can flag concerns and have them addressed rapidly.

Dashboards should simplify data sets to quickly convey current performance (Figure 2). To maximize impact, dashboards must have high visibility with clear infographics and images; excessive or poorly designed features can distract learners.31  Color-coding data can communicate levels of performance. Designing dashboard features that are interactive and actionable may empower trainees to improve their quality metrics. For example, a dashboard element should not just show rates of breast cancer screening, but also link to a report listing the individual patients due for mammography.
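
As a simple illustration of such a color-coded, actionable element, the sketch below maps a screening rate to a traffic-light status and exposes the underlying list of patients who are due; the thresholds, field names, and patient data are hypothetical.

```python
# Illustrative sketch (hypothetical thresholds and data): a color-coded,
# actionable dashboard element for breast cancer screening.

patients = [
    {"name": "Patient 1", "mammogram_up_to_date": True},
    {"name": "Patient 2", "mammogram_up_to_date": False},
    {"name": "Patient 3", "mammogram_up_to_date": False},
]

def status_color(rate: float) -> str:
    """Map performance to a traffic-light color for the dashboard tile."""
    if rate >= 0.80:
        return "green"
    if rate >= 0.60:
        return "yellow"
    return "red"

screened = sum(p["mammogram_up_to_date"] for p in patients)
rate = screened / len(patients)
due = [p["name"] for p in patients if not p["mammogram_up_to_date"]]

print(f"Breast cancer screening: {rate:.0%} ({status_color(rate)})")
print("Drill-down - patients due for mammography:", due)
```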

Figure 2

Sample EHR Embedded Internal Medicine Resident Dashboard

Note: Dashboards provide residents a visual display of a range of preselected metrics and can quickly convey individual panel performance. The dashboard shown in Figure 2 was designed as part of a Health Resources and Services Administration Grant Award T0BHP28574-01-00 for Primary Care Training and Enhancement and is shown with permission from EPIC Systems Corporation.

Dashboard displays may show data for single or multiple providers. When using a dashboard with trainees, a program must decide whether to keep individual performance data confidential or to display information from all providers. Comparison of individual data to aggregate peer data may be helpful to benchmark individual performance within the context of the training program while maintaining privacy. Additionally, comparisons to national data can be a useful teaching tool regarding standards of care.
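
One privacy-preserving way to frame such comparisons, sketched below with hypothetical values, is to show each trainee only their own rate next to the de-identified aggregate of program peers and a national benchmark.

```python
# Illustrative sketch (hypothetical rates and benchmark): one trainee's rate
# shown against the de-identified program aggregate and a national benchmark.
from statistics import mean

program_rates = {"res_01": 0.72, "res_02": 0.65, "res_03": 0.81, "res_04": 0.58}
NATIONAL_BENCHMARK = 0.70  # assumed published standard-of-care target

def individual_view(resident_id: str) -> str:
    own = program_rates[resident_id]
    peers = mean(rate for rid, rate in program_rates.items() if rid != resident_id)
    return (f"Your rate: {own:.0%} | Program peers (aggregate): {peers:.0%} | "
            f"National benchmark: {NATIONAL_BENCHMARK:.0%}")

print(individual_view("res_02"))
```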

While the ACGME has emphasized the importance of providing trainees with practice data, there has been little research on how to deliver this information meaningfully.32  To effectively teach dashboard skills and promote their use, programs need to make dashboards accessible and integrate them into thoughtful curricula.33  These curricula should focus on understanding performance metrics, evaluating data accuracy, and integrating dashboards into clinical practice. Pairing education about a specific metric with its dashboard display can transform abstract medical knowledge into something tangible. To make the delivery of performance data actionable, there must be complementary quality improvement (QI) and panel management initiatives. Within GME, it is often difficult to design and implement practice-based improvement curricula because of challenges obtaining individual data, lack of faculty expertise, and asynchronous schedules.34  However, there are QI tools that can be used alongside dashboards to create practice improvement curricula. For example, in internal medicine, one model for guiding practice improvement is the American Board of Internal Medicine practice improvement module (PIM).35-37  PIMs are designed to guide providers or groups of practitioners, even those with limited QI training, through clinic-based improvement projects.38  These resources allow QI initiatives to be completed on an independent schedule. Programs can create PIMs that draw on dashboard data and are grounded in clinical quality goals and local resources.

Although direct observation of trainees provides critical information regarding behaviors, experts now suggest additional performance measures that include patient outcomes.15  A thoughtful approach to this cultural shift toward patient and population outcomes, undertaken alongside dashboard development, is essential and requires developing a shared model among program leadership, faculty, and trainees.

Programs can empower trainees to be a driving force in the development of dashboards by including them in all steps of the process.7  Furthermore, trainees should understand that their medical training is linked to clinical outcomes, with dashboards acting as a potential tool for performance improvement and assessment. There is also a need for faculty development to help strengthen the connection between quality improvement, performance measures, and medical education. Financial linkage of clinical outcomes and medical education can also help drive and expedite cultural change.6 

Limitations to the use of dashboards in medical education include the potential inability to attribute the contribution of a single trainee to a performance measure, difficulty comparing performance measures across different clinical settings, and insufficient data points to render an adequate performance evaluation.6  While the EHR is the most obvious choice for dashboard placement, many EHRs are not optimized to support dashboard integration because of issues with functionality, display, and inadequate data mining capability.8  Furthermore, dashboard implementation requires substantial financial and health IT resources.

From an academic standpoint, further research is needed to evaluate the impact of dashboards within GME, delineate the financial and personnel costs of dashboard creation, and identify the optimal way to integrate dashboards into educational curricula. We also suggest that governing bodies such as the ACGME increase their support for the use of clinical performance data in training.

With the growing use of big data analytics, the aggregation and display of personalized metrics will play an increasing role in medicine.39  Electronic dashboards show promise for improving the clinical care of patients as well as medical training. To expand their use in GME, dashboards should include valid data and be designed such that they are accessible, actionable, and clinically relevant. Electronic dashboards that can meaningfully distill complex data into useful information may promote advancements in preventive medicine, population health, and systems management. In GME, a cultural change is needed to support the integration of clinical performance data through dashboards.

1. Accreditation Council for Graduate Medical Education. Common Program Requirements (Residency). 2019.
2. Accreditation Council for Graduate Medical Education. Common Program Requirements (Fellowship). 2019.
3. Few S. Dashboard confusion: a clear understanding of dashboards requires delving beneath the marketing hype. Informatech. 2019.
4. Waitman LR, Phillips IE, McCoy AB, Danciu I, Halpenny RM, Nelsen CL, et al. Adopting real-time surveillance dashboards as a component of an enterprisewide medication safety strategy. Jt Comm J Qual Patient Saf. 2011;37(7):326-332.
5. Banerjee D, Thompson C, Kell C, Shetty R, Vetteth Y, Grossman H, et al. An informatics-based approach to reducing heart failure all-cause readmissions: the Stanford heart failure dashboard. J Am Med Inform Assoc. 2017;24(3):550-555.
6. Roberts DH, Gilmartin GS, Neeman N, Schulze JE, Cannistraro S, Ngo LH, et al. Design and measurement of quality improvement indicators in ambulatory pulmonary care: creating a "culture of quality" in an academic pulmonary division. Chest. 2009;136(4):1134-1140.
7. Chahine S, Kulasegaram KM, Wright S, Monteiro S, Grierson LEM, Barber C, et al. A call to investigate the relationship between education and health outcomes using big data. Acad Med. 2018;93(6):829-832.
8. Anderson D, Zlateva I, Khatri K, Ciaburri N. Using health information technology to improve adherence to opioid prescribing guidelines in primary care. Clin J Pain. 2015;31(6):573-579.
9. Yigitbasioglu OM, Velcu O. A review of dashboards in performance management: implications for design and research. Intern J Account Info Syst. 2012;13(1):41-59.
10. Ward CE, Morella L, Ashburner JM, Atlas SJ. An interactive, all-payer, multidomain primary care performance dashboard. J Ambul Care Manage. 2014;37(4):339-348.
11. Dowding D, Randell R, Gardner P, Fitzpatrick G, Dykes P, Favela J, et al. Dashboards for improving patient care: review of the literature. Int J Med Inform. 2015;84(2):87-100.
12. Buttigieg SC, Pace A, Rathert C. Hospital performance dashboards: a literature review. J Health Organ Manag. 2017;31(3):385-406.
13. Wilbanks BA, Langford PA. A review of dashboards for data analytics in nursing. Comput Inform Nurs. 2014;32(11):545-549.
14. Durojaiye AB, Snyder E, Cohen M, Nagy P, Hong K, Johnson PT. Radiology resident assessment and feedback dashboard. Radiographics. 2018;38(5):1443-1453.
15. Triola MM, Hawkins RE, Skochelak SE. The time is now: using graduates' practice data to drive medical education reform. Acad Med. 2018;93(6):826-828.
16. Levin JC, Hron J. Automated reporting of trainee metrics using electronic clinical systems. J Grad Med Educ. 2017;9(3):361-365.
17. Friedman KA, Raimo J, Spielmann K, Chaudhry S. Resident dashboards: helping your clinical competency committee visualize trainees' key performance indicators. Med Educ Online. 2016;21:29838. doi:10.3402/meo.v21.29838.
18. Sun J, Li KY, Peng P, Genes N, Chung A. 119 Development of a clinical performance dashboard to empower resident education. Ann Emerg Med. 2017;70(4 suppl):48-49.
19. Boscardin C, Fergus KB, Hellevig B, Hauer KE. Twelve tips to promote successful development of a learner performance dashboard within a medical education program. Med Teach. 2018;40(8):855-861.
20. Baker DW, Qaseem A. Evidence-based performance measures: preventing unintended consequences of quality measurement. Ann Intern Med. 2011;155(9):638-640.
21. Schumacher DJ, Holmboe ES, van der Vleuten C, Busari JO, Carraccio C. Developing resident-sensitive quality measures: a model from pediatric emergency medicine. Acad Med. 2018;93(7):1071-1078.
22. Smirnova A, Sebok-Syer SS, Chahine S, Kalet AL, Tamblyn R, Lombarts KMJMH, et al. Defining and adopting clinical performance measures in graduate medical education: where are we now and where are we going? Acad Med. 2019;94(5):671-677.
23. Kalet AL, Gillespie CC, Schwartz MD, Holmboe ES, Ark TK, Jay M, et al. New measures to establish the evidence base for medical education: identifying educationally sensitive patient outcomes. Acad Med. 2010;85(5):844-851.
24. Sebok-Syer SS, Chahine S, Watling CJ, Goldszmidt M, Cristancho S, Lingard L. Considering the interdependence of clinical performance: implications for assessment and entrustment. Med Educ. 2018;52(9):970-980.
25. Choi HH, Clark J, Jay AK, Filice RW. Minimizing barriers in learning for on-call radiology residents-end-to-end web-based resident feedback system. J Digit Imaging. 2018;31(1):117-123.
26. Mant J. Process versus outcome indicators in the assessment of quality of health care. Int J Qual Health Care. 2001;13(6):475-480.
27. Kaplan SH, Griffith JL, Price LL, Pawlson LG, Greenfield S. Improving the reliability of physician performance assessment: identifying the "physician effect" on quality and creating composite measures. Med Care. 2009;47(4):378-387.
28. Kaplan RS, Norton DP. Using the balanced scorecard as a strategic management system. Harvard Bus Rev. 1996;74(1):75-85.
29. Gebauer S, Steele E. Questions program directors need to answer before using resident clinical performance data. J Grad Med Educ. 2016;8(4):507-509.
30. Hong CS, Atlas SJ, Chang Y, Subramanian SV, Ashburner JM, Barry MJ, et al. Relationship between patient panel characteristics and primary care physician clinical performance rankings. JAMA. 2010;304(10):1107-1113.
31. Iselin ER. The effects of information load and information diversity on decision quality in a structured decision task. Account Org Society. 1988;13(2):147-164.
32. Accreditation Council for Graduate Medical Education. ACGME Program Requirements for Graduate Medical Education in Internal Medicine. 2019.
33. Miller ME, Patel A, Schindler N, Hirsch K, Ming M, Weber S, et al. Bridging the gap: interdepartmental quality improvement and patient safety curriculum created by hospital leaders, faculty, and trainees. J Grad Med Educ. 2018;10(5):566-572.
34. Esch LM, Bird AN, Oyler JL, Wei Lee W, Shah SD, Pincavage AT. Preparing for the primary care clinic: an ambulatory boot camp for internal medicine interns. Med Educ Online. 2015;20:29702.
35. Duffy FD, Lynn LA, Didura H, Hess B, Caverzagie K, Grosso L, et al. Self-assessment of practice performance: development of the ABIM Practice Improvement Module (PIM). J Contin Educ Health Prof. 2008;28(1):38-46.
36. Oyler J, Vinci L, Johnson JK, Arora VM. Teaching internal medicine residents to sustain their improvement through the quality assessment and improvement curriculum. J Gen Intern Med. 2011;26(2):221-225.
37. American Board of Internal Medicine. QI/PI Activities. 2019.
38. Lu LB, Barrette EP, Noronha C, Sobel HG, Tobin DG, eds. Leading an Academic Medical Practice. Springer International Publishing; 2018.
39. Kapp JM, Simoes EJ, DeBiasi A, Kravet SJ. A conceptual framework for a systems thinking approach to US population health. Syst Research Behav Sci. 2017;34(6):686-698.