Background 

Programmatic assessment is the intentional collection of key data from multiple sources for both assessment of learning and assessment for learning.

Objective 

We developed a system of programmatic assessment (PA) to document competency progression (summative assessment) and to provide assessment for learning that assists residents in their formative development.

Methods 

The programmatic assessment was designed iteratively from 2014 through 2016. All assessments were first categorized by competency domain and source of assessment. The number of assessment modalities for each competency domain was collected. These multisource assessments were then mapped by program leadership to the milestones to develop a master PA blueprint. A resident learning management system provided the platform for aggregating formative and summative data, allowing residents and faculty ongoing access to guide learning and assessment. A key component of programmatic assessment was to support resident integration of assessment information through feedback by faculty after shifts and during monthly formal assessments, semiannual resident reviews, and summative judgments by the Clinical Competency Committee.

Results 

Through the PA, the 6 competency domains are assessed through multiple modalities: patient care (22 different assessments), professionalism (18), systems-based practice (17), interpersonal and communication skills (16), medical knowledge (11), and practice-based learning and improvement (6). Each assessment provides feedback to the resident in various formats. Our programmatic assessment has been utilized for more than 2 years with iterative improvements.

Conclusions 

The implementation of programmatic assessment allowed our program to organize diverse, multisourced feedback to drive both formative and summative assessments.

The primary purpose of residency programs is to train competent physicians. This requires effective and comprehensive assessment of residents to determine progression. Growing recognition of the importance of assessment both as a basis for advancement decisions and as a deliberate driver of resident learning creates a demand for a program of systematic, robust, multisource assessment.1-5

A best practice model of programmatic assessment (PA) was described by van der Vleuten and colleagues,1,3,4  entailing the intentional collection of key data points from multiple sources for both assessment of learning and assessment for learning. The aggregation of data over the course of training provides a comprehensive picture of resident performance for feedback and summative progress decisions, in keeping with Accreditation Council for Graduate Medical Education (ACGME) requirements for assessment of trainees.6  In this brief report, we describe the University of Michigan Emergency Medicine Programmatic Assessment for consideration of adoption or adaptation by other programs.

Setting

We implemented our programmatic assessment in a 4-year emergency medicine residency with 16 residents per year at a university hospital and 2 affiliated community hospitals. The PA was designed and implemented iteratively from 2014 through 2016.

Intervention: Implementation of Programmatic Assessment

Developing an Assessment Master Plan

We developed the master plan for the PA by mapping existing assessments to competencies and noting gaps. Program leadership collected all current assessments and mapped them to the competencies. We noted the source of the data (eg, resident, faculty, nurse) and the modality of assessment (eg, direct observation, standardized procedure simulation lab, global faculty assessment). Each assessment modality was intentionally collected and weighted as high or low stakes as data were aggregated. Mapping agreement was achieved through consensus of program leadership. The PA master plan (table) outlined how each competency domain was assessed by various modalities, with data from various sources and multiple perspectives.5,7-9 The number of modalities for each competency domain was tallied. After competency/assessment mapping, we reviewed the map to determine and address assessment gaps. The ability to identify areas of deficient assessment is an essential aspect of programmatic assessment, which is an iterative process focused on continued improvement.
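For programs that wish to prototype a similar blueprint electronically, the mapping logic can be sketched in a few lines of code. The sketch below is purely illustrative; the assessment names, sources, domains covered, and gap threshold are assumptions for demonstration, not our program's actual data or tooling.

from collections import defaultdict

# Hypothetical assessment entries: (assessment name, data source, competency domains covered).
ASSESSMENTS = [
    ("End-of-shift evaluation", "faculty", ["Patient Care", "Professionalism"]),
    ("Nursing assessment", "nurse", ["Interpersonal and Communication Skills", "Professionalism"]),
    ("In-training examination", "exam", ["Medical Knowledge"]),
    ("Simulation case", "faculty", ["Patient Care", "Medical Knowledge"]),
]

ALL_DOMAINS = [
    "Patient Care",
    "Medical Knowledge",
    "Practice-Based Learning and Improvement",
    "Interpersonal and Communication Skills",
    "Professionalism",
    "Systems-Based Practice",
]

def build_blueprint(assessments):
    """Group assessments under each competency domain they are mapped to."""
    blueprint = defaultdict(list)
    for name, source, domains in assessments:
        for domain in domains:
            blueprint[domain].append((name, source))
    return blueprint

blueprint = build_blueprint(ASSESSMENTS)

# Tally modalities per domain and flag domains with sparse coverage (assumed threshold of 2).
for domain in ALL_DOMAINS:
    mapped = blueprint.get(domain, [])
    flag = "  <-- assessment gap" if len(mapped) < 2 else ""
    print(f"{domain}: {len(mapped)} assessment(s){flag}")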

Data Aggregation and Access for Residents and Faculty

Programmatic assessment required a system of collecting, organizing, and storing data easily accessible to residents and program leadership. We used MedHub (MedHub, Minneapolis, MN) to house the PA data, including an electronic learning portfolio; structured assessments; examination scores; duty hours; nursing, peer, faculty, global, and shift assessments; milestone judgments; US Medical Licensing Examination scores; and remediation letters (as needed). Additional data were tracked outside of the learning management system, such as patient counts, student teaching evaluations, and clinical skills assessments.
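As a hedged illustration of the aggregation step (this is not MedHub's data model or interface, which we do not reproduce here), a minimal sketch of grouping multisource assessment records into per-resident portfolios might look like the following; the record fields and example values are hypothetical.

from dataclasses import dataclass
from datetime import date
from typing import Dict, List

@dataclass
class AssessmentRecord:
    # Hypothetical record structure for illustration only.
    resident: str
    source: str      # eg, "faculty", "nurse", "peer", "exam"
    modality: str    # eg, "shift assessment", "in-training examination"
    stakes: str      # "low" or "high"
    completed: date
    narrative: str = ""

def aggregate_by_resident(records: List[AssessmentRecord]) -> Dict[str, List[AssessmentRecord]]:
    """Group all assessment records by resident for longitudinal review."""
    portfolio: Dict[str, List[AssessmentRecord]] = {}
    for record in records:
        portfolio.setdefault(record.resident, []).append(record)
    for entries in portfolio.values():
        entries.sort(key=lambda r: r.completed)  # chronological order within each portfolio
    return portfolio

# Example with two hypothetical records for one resident.
records = [
    AssessmentRecord("Resident A", "faculty", "shift assessment", "low", date(2016, 3, 2)),
    AssessmentRecord("Resident A", "nurse", "global assessment", "low", date(2016, 1, 15)),
]
portfolio = aggregate_by_resident(records)
print([r.completed.isoformat() for r in portfolio["Resident A"]])  # ['2016-01-15', '2016-03-02']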

Enhancing Feedback to Learners

To provide residents with both assessment of learning and assessment for learning, it was important to offer feedback at multiple opportunities.1,10 For example, a summative clinical skills competency examination included simulation, oral boards, standardized patients, and diagnostic testing; faculty provided immediate feedback on strengths and opportunities for improvement.

The PA was designed to provide multifaceted formative assessment. Each low-stakes assessment (see the table) provided feedback to the residents.7 Because residents were directly supervised during shifts, they received real-time feedback on performance, including verbal debriefings during the shift; we encouraged this through faculty development on how to provide feedback. We asked faculty to complete monthly assessments that included numeric scoring and narrative feedback on (1) strengths; (2) areas of development; and (3) a targeted focus area, such as documentation quality or communication skills. Further, we provided a monetary incentive if, as a team, faculty completed an average of 3 resident assessments per faculty member per month. During faculty meetings, we set aside 10 to 20 minutes to collect and discuss information about resident performance. Feedback from all of these assessments was provided to the residents.
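As a minimal sketch of the incentive arithmetic described above, the following code checks whether a faculty group averaged at least 3 completed assessments per faculty member in a given month; the counts and identifiers are hypothetical, and this is not the system our program used to track completion.

def met_incentive_threshold(completed_per_faculty, threshold=3.0):
    """Return True if the faculty group averaged at least `threshold` assessments per member."""
    if not completed_per_faculty:
        return False
    average = sum(completed_per_faculty.values()) / len(completed_per_faculty)
    return average >= threshold

# Hypothetical one-month counts of completed resident assessments, keyed by anonymized faculty ID.
march_counts = {"faculty_01": 5, "faculty_02": 2, "faculty_03": 4, "faculty_04": 1}
print(met_incentive_threshold(march_counts))  # True: the group average is exactly 3.0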

Resident mentoring was provided through semiannual meetings with program leadership to review performance (table) and set goals, using a SMART (Specific, Measurable, Achievable, Relevant, Time bound) goals approach.11  An important part of the discussion was to provide programmatic assessment data including nursing, faculty, and peer feedback and to reflect on data from clinical skills examinations (a half-day of multimodal assessment utilizing simulation, oral board cases, written content knowledge, and standardized patient interactions).

Ensuring Trustworthy Competency Assessments

High-stakes decisions regarding resident progress were made every 6 months by the Clinical Competency Committee (CCC) according to ACGME guidelines. These higher-stakes decisions were based on the programmatic assessment, which brought together many data points across contexts, methods, and assessors.12,13 This was important because decisions based on a single assessment source carry the risk that the source does not reflect the full construct being assessed. Additionally, because multisourced data fed into each competency, the decision-making process supported more trustworthy and defensible judgments.14
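One way to make this idea concrete, purely as an illustration and not as the CCC's actual procedure, is a simple check that the evidence behind each competency judgment draws on several distinct sources and methods; the thresholds and example entries below are assumptions.

def is_well_supported(evidence, min_sources=3, min_methods=2):
    """evidence: list of (source, method) pairs behind one competency judgment."""
    sources = {source for source, _ in evidence}
    methods = {method for _, method in evidence}
    return len(sources) >= min_sources and len(methods) >= min_methods

# Hypothetical evidence behind a patient care milestone judgment.
patient_care_evidence = [
    ("faculty", "shift assessment"),
    ("nurse", "global assessment"),
    ("peer", "global assessment"),
    ("faculty", "simulation"),
]
print(is_well_supported(patient_care_evidence))  # True: 3 distinct sources, 3 distinct methods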

If a resident was not meeting milestone goals, the CCC guided the remediation plan. To better understand performance gaps, the CCC reviewed all assessments and identified areas of weakness. This looked beyond ratings on the ACGME milestones15  and included the original assessment data. For example, a resident may communicate well during simulation but may need remediation based on nursing or patient assessments. Compared with typical CCC practice, use of the PA facilitated broad input and promoted individualized remediation plans that distinguished global competency deficiencies from situational challenges.

Monitoring and Evaluating the Learning Effect

An important aspect of ongoing program evaluation and improvement was the annual review of the PA blueprint to ensure that broad, multisource assessments were collected and to identify assessment or learning gaps. For example, we reworked our pediatric skills examination to better cover gaps in learning and assessment. Additionally, we reviewed the quality of our assessments to ensure that validity and reliability evidence was collected.

Promoting Interaction Among Stakeholders

To maintain an open stakeholder dialogue about performance, assessment, and program evaluation, we discussed assessment in regular meetings, including joint resident-faculty educational conferences, graduate medical education committees, department operations committees, and education leadership and faculty meetings.

This study was determined to be not regulated by the University of Michigan Institutional Review Board.

Results

Our programmatic assessment for emergency medicine residents uses multiple modalities for each competency domain: patient care (22 different assessments), professionalism (18), systems-based practice (17), interpersonal and communication skills (16), medical knowledge (11), and practice-based learning and improvement (6). The system was accepted by the residents, as demonstrated by our ACGME resident survey rating the program at or above the national mean for most of the elements that compose the evaluation category. From a practical perspective, under this rigorous programmatic assessment, all of our residents were progressing successfully through the program.

Discussion

We demonstrated the feasibility of constructing a rigorous program of assessment in a residency program. The PA created a structure that allowed the program to define areas of rigorous assessment and those that needed further development. It also informed the deliberations of the CCC and allowed appropriate weighting of information. Our PA required resources; some of the assessments within the PA, such as our clinical skills examination, were quite resource intensive. Others, such as global, peer, and nursing assessments, were routine practice and required fewer resources. Other challenges included faculty development and securing buy-in from residents and faculty.

Benefits of programmatic assessment included a thoughtful focus on learning and feedback through a process that incorporated multiple, varied sources of data to provide a more reliable and trustworthy view of each resident.1,4,16-18 Hauer et al19 noted the importance of structured decision making in optimizing CCC performance. The PA allowed our CCC to proceed with knowledge of the available data sources and assessment methods. It also structured information sharing, ensuring that all data were considered in decisions by making explicit the sources of information derived from the assessments.

There are several limitations to this study. First, it was a single-program study, which limits generalizability. Second, implementation of programmatic assessment required resources. Finally, our study did not examine whether the quality of assessments improved after the institution of programmatic assessment.

Further studies should examine the sustainability and cost-effectiveness of programmatic assessment and its longer-term impact on the quality of assessments in competency-based education. In addition, while all assessments in a PA program are examined, some likely carry more validity evidence, reliability, and weight than others. It will be important to determine how many assessments are necessary to support generalizable summative judgments.

Our implementation of programmatic assessment allowed our program to organize diverse, multisourced feedback to drive both formative and summative assessments to assist residents in their professional development.

References

1. van der Vleuten CP, Schuwirth LW, Driessen EW, et al. A model for programmatic assessment fit for purpose. Med Teach. 2012;34(3):205-214.
2. van der Vleuten C, Schuwirth L. Assessing professional competence: from methods to programmes. Med Educ. 2005;39(3):309-317.
3. Schuwirth LW, van der Vleuten CP. Programmatic assessment: from assessment of learning to assessment for learning. Med Teach. 2011;33(6):478-485.
4. van der Vleuten CP, Schuwirth LW, Driessen EW, et al. Twelve tips for programmatic assessment. Med Teach. 2015;37(7):641-646.
5. Dijkstra J, Galbraith R, Hodges BD, et al. Expert validation of fit-for-purpose guidelines for designing programmes of assessment. BMC Med Educ. 2012;12:20.
6. Nasca TJ, Philibert I, Brigham T, et al. The next GME accreditation system—rationale and benefits. N Engl J Med. 2012;366(11):1051-1056.
7. Hauff SR, Hopson LR, Losman E, et al. Programmatic assessment of level 1 milestones in incoming interns. Acad Emerg Med. 2014;21(6):694-698.
8. Santen SA, Rademacher N, Heron SL, et al. How competent are emergency medicine interns for level 1 milestones: who is responsible? Acad Emerg Med. 2013;20(7):736-739.
9. Chan T, Sherbino J, McMAP Collaborators. The McMaster Modular Assessment Program (McMAP): a theoretically grounded work-based assessment system for an emergency medicine residency program. Acad Med. 2015;90(7):900-905.
10. Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007;77(1):81-112.
11. Doran GT. There's a S.M.A.R.T. way to write management's goals and objectives. Manage Rev (AMA Forum). 1981;70(11):35-36.
12. Colbert CY, Dannefer EF, French JC. Clinical competency committees and assessment: changing the conversation in graduate medical education. J Grad Med Educ. 2015;7(2):162-165.
13. Beeson MS, Carter WA, Christopher TA, et al. Emergency medicine milestones. J Grad Med Educ. 2013;5(suppl 1):5-13.
14. Schuwirth L, van der Vleuten CP. Programmatic assessment and Kane's validity perspective. Med Educ. 2012;46(1):38-48.
15. Wu JS, Siewert B, Boiselle PM. Resident evaluation and remediation: a comprehensive approach. J Grad Med Educ. 2010;2(2):242-245.
16. Driessen EW, van Tartwijk J, Govaerts M, et al. The use of programmatic assessment in the clinical workplace: a Maastricht case report. Med Teach. 2012;34(3):226-231.
17. Bok HGJ, Teunissen PW, Favier RP, et al. Programmatic assessment of competency-based workplace learning: when theory meets practice. BMC Med Educ. 2013;13:123.
18. Heeneman S, Oudkerk Pool A, Schuwirth LW, et al. The impact of programmatic assessment on student learning: theory versus practice. Med Educ. 2015;49(5):487-498.
19. Hauer KE, Cate OT, Boscardin CK, et al. Ensuring resident competence: a narrative review of the literature on group decision making to inform the work of clinical competency committees. J Grad Med Educ. 2016;8(2):156-164.

Funding: The authors report no external funding source for this study.

Conflict of interest: The authors declare they have no competing interests.

The abstract of this study was presented at AMEE 2017: The Annual Conference of the Association for Medical Education in Europe, Helsinki, Finland, August 26–30, 2017.