It is common to complete evaluations of graduate medical education (GME) programs, present them at conferences, publish them in peer-reviewed journals, add them to curricula vitae (CVs), and then move on without using them to enact changes in the programs themselves. This pattern may reflect the reality that many individuals perceive and conduct program evaluations as if they were research.1  While research and program evaluation use similar methods, they have distinct purposes, timelines, audiences, and, most notably, intended uses.2  Evaluations of GME programs should be used, for example, to inform program decisions and modifications, expand program stakeholders' knowledge, stimulate changes in organizational culture, or improve the quality of training.1,3,4  They need to be more than intellectual exercises resulting in accomplishments listed on CVs.5  As such, we emphasize that evaluation use is an essential consequence of program evaluation. Those involved in program evaluation should discuss it from the onset of every evaluation and keep it prominent throughout. We also promote the adage “use-it-or-lose-it” to stress timely program evaluation use. Yet the literature on program evaluation in GME often neglects use, including how the choice of evaluation approach can influence it.1,6,7  In this article, we explain evaluation use by describing both the use of evaluation findings and process use (ie, changes resulting from engagement in the evaluation process itself).1,8  We also suggest strategies, including evaluation approaches, that faculty can use to increase evaluation use in GME.

The 3 categories of use of evaluation findings are instrumental, conceptual, and symbolic. Instrumental use refers to instances where stakeholders use evaluation findings to take direct actions (eg, improvements, changes, terminations) in a program.9  For example, evaluation findings show that residents in a GME program are struggling to complete their research projects. Using the findings, the GME team implements new research training activities to assist residents in completing their projects. Conceptual use describes occurrences where stakeholders use evaluation findings to evolve their understanding of a program but do not take direct actions based on these findings.4  For instance, the GME team acknowledges the findings that residents are struggling to complete their research projects. These findings inform their understanding of why residents are not attending academic conferences to present their research. Lastly, symbolic use occurs when stakeholders use the sheer existence of a completed evaluation to comply with reporting requirements or to justify a previously made program action.4  For example, the funding university requires the GME program to complete an evaluation to retain funding for residents' research projects. The GME team completes an evaluation and presents the report to the university. Alternatively, before the evaluation, the GME program hired a research assistant to help residents with their research projects; the team then uses the evaluation findings to justify that hire. In GME, we emphasize instrumental use, as this form of use leads to actions that can improve programs. However, the use of evaluation findings is typically a short-term consequence of evaluation because findings are relevant only within a specific and limited timeframe (ie, use-it-or-lose-it).

On the other hand, process use can have an ongoing influence on individuals, programs, and organizations. It recognizes that evaluation processes themselves can affect attitudes, thought processes, and behaviors.10  Process use encompasses the learning stakeholders gain from their involvement in an evaluation as well as the effects of evaluation processes on program functioning and organizational culture.11  Process use does not require changes to a program or direct actions based on evaluation findings. There are 6 types of process use, which we illustrate with examples:

  1. Facilitating stakeholders' shared understanding of the program: Evaluation activities result in the GME team agreeing on their program's goals and activities.

  2. Supporting and reinforcing a program intervention: Evaluation processes require the GME team to communicate and collaborate, skills that their program's educational intervention aims to enhance.

  3. Increasing stakeholders' engagement as well as their evaluation and critical thinking skills: Evaluation involvement teaches the GME team how to conduct evaluations and demonstrates the value of doing so. Thus, it enhances their commitment to evaluation and to the program itself.

  4. Facilitating program and organizational development: Evaluation involvement may lead the GME team to value and become responsive to program feedback. Such changes contribute to their organization's evaluation capacity and learning functions.

  5. Infusing evaluation thinking into the organization's culture: Evaluation involvement leads the GME team to think like evaluators in their everyday roles; therefore, an evaluation culture emerges within their organization.

  6. Promoting instrumentation effects: Evaluation involvement increases the GME team's understanding of what program aspects are evaluation foci. Thus, they ensure that what gets evaluated remains a priority within the program.11 

When stakeholders are involved in evaluation processes, they enter an evaluation culture and learn to think and look at things through an evaluative lens. They can also apply the knowledge and skills they develop (eg, evaluation knowledge, methodological and facilitation skills) to strengthen their organization's ability to design, implement, interpret, and use evaluations, thereby building its evaluation capacity. In this sense, process use is valuable throughout and following an evaluation, and in various GME settings, regardless of the evaluation findings or recommendations.12 

The Table presents strategies that faculty involved in program evaluation can employ to increase evaluation use.

Table

Strategies to Increase Program Evaluation Use

In closing, it is imperative to remember that evaluation use, especially process use, can occur throughout a program evaluation rather than simply at its conclusion.10  Evaluation use can start at the planning stage and continue well beyond a presentation or publication of an evaluation. Program evaluators need a use-it-or-lose-it perspective throughout the evaluation process to maximize improvements to training. This perspective will maintain stakeholders' faith in the value of evaluation, as they witness evaluation efforts leading to timely, actionable findings and processes. Ultimately, we must embrace evaluation use to ensure that the consequences (both positive and negative) of program evaluation reach all stakeholders and programs, not only conference attendees, readers of peer-reviewed journals, or our CVs.

1. Balmer DF, Riddle JM, Simpson D. Program evaluation: getting started and standards. J Grad Med Educ. 2020;12(3):345-346.
2. Mathison S. What is the difference between evaluation and research and why do we care? In: Smith NL, Brandon PR, eds. Fundamental Issues in Evaluation. The Guilford Press; 2008:183-196.
3. Balmer DF, Rama JA, Simpson D. Program evaluation models: evaluating processes and outcomes in graduate medical education. J Grad Med Educ. 2019;11(1):99-100.
4. Johnson K, Greenseid L, Toal S, King JA, Lawrenz F, Volkov B. Research on evaluation use: a review of the empirical literature from 1986 to 2005. Am J Eval. 2009;30(3):377-410.
5. Alkin M. Evaluation Essentials. The Guilford Press; 2011.
6. Cook DA. Twelve tips for evaluating educational programs. Med Teach. 2010;32(4):296-301.
7. Goldie J. AMEE education guide no. 29: evaluating educational programmes. Med Teach. 2006;28(3):210-224.
8. Yarbrough DB, Shulha LM, Hopson RK, Caruthers FA. The Program Evaluation Standards: A Guide for Evaluators and Evaluation Users. 3rd ed. Sage Publications; 2011.
9. Shulha LM, Cousins JB. Evaluation use: theory, research, and practice since 1986. Am J Eval. 1997;18(1):195-208.
10. Alkin M, King JA. The historical development of evaluation use. Am J Eval. 2016;37(4):568-579.
11. Patton MQ. Process use as a usefulism. New Dir Eval. 2007;2007(116):99-112.
12. Amo C, Cousins JB. Going through the process: an examination of the operationalization of process use in empirical research on evaluation. New Dir Eval. 2007;2007(116):5-26.
13. Cousins JB, Chouinard JA. Participatory Evaluation Up Close: An Integration of Research-Based Knowledge. Information Age Publishing; 2012.
14. Moreau K. Twelve tips for planning and conducting a participatory evaluation. Med Teach. 2017;39(4):334-340.
15. Patton MQ, Campbell-Patton C. Utilization-Focused Evaluation. 5th ed. Sage Publications; 2021.
16. Taut S, Alkin M. Program staff perceptions of barriers to evaluation implementation. Am J Eval. 2003;24(2):213-226.
17. National Center for Chronic Disease Prevention and Health Promotion. Evaluation Reporting: A Guide to Help Ensure Use of Evaluation Findings. US Department of Health and Human Services; 2013.