Purpose

To assess whether a novel evaluation tool could guide curricular change in an internal medicine residency program.

Method

The authors developed an 8-item Ecological Momentary Assessment tool and collected residents' daily evaluations of the relative educational value of 3 differing ambulatory morning report formats (scale: 8 = best, 0 = worst). From the evaluations, they made a targeted curricular change and used the tool to assess its impact.

Results

Residents completed 1388 evaluation cards for 223 sessions over 32 months, with a response rate of 75.3%. At baseline, there was a decline in perceived educational value with advancing postgraduate year (PGY) for the overall mean score (PGY-1, 7.4; PGY-2, 7.2; PGY-3, 7.0; P < .01) and for the percentage reporting greater than 2 new things learned (PGY-1, 77%; PGY-2, 66%; PGY-3, 50%; P < .001). The authors replaced the format of a lower scoring session with one of higher cognitive content to target upper-level residents. The new session's mean score improved (7.1 to 7.4; P = .03); the adjusted odds ratio before and after the change for answering “Yes, definitely” to “Area I need to improve” was 2.53 (95% confidence interval [CI], 1.45–4.42; P = .001), and for “Would recommend to others,” it was 2.08 (95% CI, 1.12–3.89; P = .05).

Conclusions

The Ecological Momentary Assessment tool successfully guided ambulatory morning report curricular changes and confirmed their impact. The Ecological Momentary Assessment concepts of multiple, frequent, and timely evaluations can be successfully applied to residency curriculum redesign.

A comprehensive curriculum is vital for graduate medical education to ensure that educational goals are met and trainees achieve professional competence.1 Educators hold differing views about the essential elements of a medical curriculum.2–5 Given the paucity of peer-reviewed published materials, medical educators and program directors6,7 are left to develop curricula de novo.8 As a result, many current curricula may not be effective.9,10

Ideally, curricular development should be an ongoing process in which changes are guided by evaluation and feedback on the curriculum's performance in a continuous loop, such as the Kern model of medical education.11 Evaluating the effectiveness of a graduate medical education curriculum is challenging.7,12,13 Standard evaluation methods can be limited by ceiling effects, recall bias, and the time elapsed between the educational experience and its evaluation. Without timely and effective methods for measuring the impact of a curriculum, how do medical educators know whether the content is valuable and whether curricular changes are effective?

Ecological Momentary Assessment (EMA) was developed by social scientists to evaluate research participants' behavior and cognition.14,15 It is based on the principle that reports of an experience obtained close in time to the experience, and frequently over the course of a study, minimize recall bias, give more sensitive assessments, and capture greater variability and detail.14 Ecological Momentary Assessment is designed to be unobtrusive, to reduce respondent burden and fatigue. We are unaware of studies applying the concepts of EMA to curricular reform in graduate medical education.

From resident feedback, we recognized a need to improve our ambulatory curriculum, including our ambulatory morning report (AMR). This report is a teaching conference focused on ambulatory medicine, similar to traditional inpatient-based teaching conferences, and became part of our ambulatory curriculum in the 1990s.16,17 In planning our curricular change, we reviewed existing AMR evaluations and found only 1 question, completed by graduating residents at their exit interview. This single question suffered from a ceiling effect, lacked detail, and was inadequate to inform curricular changes. We explored whether we could apply a more rigorous evaluation process to guide our curricular change. We used the concepts of EMA for a curricular redesign to (1) assess the baseline educational value of the AMR sessions, (2) identify areas needing improvement, and (3) measure the impact of subsequent curricular changes.

Study Design

We conducted a 3-phase prospective observational study to evaluate our AMR sessions. Phase 1 involved the development and use of the EMA evaluation tool. In phase 2, we analyzed and interpreted the data to guide curricular change. For phase 3, we implemented the curricular change and evaluated its impact through continued use of the EMA tool.

Participants

The University of Alabama at Birmingham (UAB) Internal Medicine Residency Program is a university-based program with 125 residents. Each month, approximately one-third of the residents are assigned to ambulatory experiences, including general medicine and subspecialty clinics (eg, rheumatology, geriatrics, and endocrinology), urgent care clinics, and general medicine consults. These residents are required to attend AMR with a chief medical resident and 1 to 3 general internal medicine faculty members.

Setting

We conduct AMR 3 days a week for 1 hour. For the first half of each session, a resident presents a case from an ambulatory experience to the faculty moderator. To ensure a comprehensive exposure to ambulatory medicine, the second half of each session has structured teaching of an assigned ambulatory topic of the week (such as “Outpatient Management of Chronic Obstructive Pulmonary Disease”). Because residents only attend AMR on ambulatory rotations, the weekly topics are repeated each academic year.

Before our curricular change, for the second half of AMR, Tuesdays were dedicated to a case-based general didactic session; Wednesdays, to board-review questions; and Fridays, to faculty case presentation, all focused on the topic of the week. For our curricular redesign, we evaluated the relative educational value of a different format for the second half of the Wednesday sessions.

Evaluation Tool Development

We developed an 8-item evaluation tool to assess the value of each session. Based on Skeff's teaching domains of promoting understanding, retention, and evaluation,18 a group of 8 clinician educators within the Division of General Internal Medicine developed questions through an iterative process. We limited the number of questions so the EMA tool could be completed in less than 1 minute. We avoided a numerical Likert scale and used a descriptive response scale when possible to improve the clarity of responses. We pilot tested the logistics of data collection from September 1 through October 30, 2006; we did not analyze the pilot data. The final EMA tool was a card with 8 questions: (1) I would recommend today's AMR to a colleague, (2) I learned something relevant to patient care, (3) More sessions should be structured like today's, (4) The content was in an area I needed to improve (response scale for questions 1–4: “yes definitely,” “yes somewhat,” “no”), (5) Number of new things learned today, (6) Number of AMRs per week to be structured like today's, (7) Number of AMRs per week to cut so more can be like today's, and (8) Number of AMRs today's faculty should facilitate.

We performed factor analysis and found that questions 1 to 4 loaded well onto a single factor (eigenvalue, 3.04). We therefore created an overall scale for questions 1 to 4 by assigning points (yes definitely = 2 points; yes somewhat = 1 point; no = 0 points; maximum score = 8, minimum score = 0) and analyzed the remaining items individually. We kept all 8 questions on the tool for the entire study period to avoid any potential for response bias. Questions 6 and 7 proved confusing to residents, based on feedback and inconsistent responses, and were not used to inform curricular change. Question 8 evaluated faculty and likewise was not used for curricular change.
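
To make the scoring concrete, the sketch below encodes one completed card and derives the overall scale for questions 1 to 4. The field names and the example card are our own illustrative assumptions, not part of the published tool.

```python
# Illustrative sketch only: one completed EMA card and the overall scale
# described above (questions 1-4; "yes definitely" = 2 points,
# "yes somewhat" = 1 point, "no" = 0 points; total ranges 0-8).
# Field names and the example responses are hypothetical.

POINTS = {"yes definitely": 2, "yes somewhat": 1, "no": 0}

def overall_score(card: dict) -> int:
    """Sum the points for the 4 descriptive-scale questions (maximum 8)."""
    keys = ["recommend", "learned_relevant", "more_like_today", "area_to_improve"]
    return sum(POINTS[card[k]] for k in keys)

# A hypothetical completed card (question 5 is a free count).
card = {
    "recommend": "yes definitely",        # Q1: would recommend to a colleague
    "learned_relevant": "yes somewhat",   # Q2: learned something relevant
    "more_like_today": "yes definitely",  # Q3: more sessions like today's
    "area_to_improve": "no",              # Q4: content in an area I need to improve
    "new_things_learned": 3,              # Q5: number of new things learned
}

print(overall_score(card))  # -> 5
```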

Process and Data Collection

Following the principles of EMA, the chief medical resident or a faculty member distributed the cards to all resident participants immediately after each AMR session. Residents completed the cards and anonymously placed them in a box in the conference room. We collected cards for a baseline assessment (November 1, 2006–June 20, 2008) and after the curricular change was implemented (July 9, 2008–June 20, 2009). Each card was linked by date to an AMR session, allowing identification of the topic, resident presenter, and faculty moderator. The Institutional Review Board at the University of Alabama at Birmingham approved the study. Resident participation was voluntary and without incentives. None of the authors report a conflict of interest with any products or services in this study.
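
As a minimal sketch of that date-based linkage (all table and column names are our own hypothetical choices, and the rows are invented), each anonymous card needs to carry only the session date, which can then be joined against a log of AMR sessions:

```python
import pandas as pd

# Hypothetical sketch: anonymous cards carry only a session date, which is
# joined to a session log to recover the topic, presenter format, and faculty
# moderator without identifying the respondent.

cards = pd.DataFrame({
    "date": ["2007-01-10", "2007-01-10", "2007-01-12"],
    "overall_score": [7, 8, 6],
})

sessions = pd.DataFrame({
    "date": ["2007-01-10", "2007-01-12"],
    "format": ["board review", "faculty case presentation"],
    "topic": ["COPD", "COPD"],
    "faculty": ["Moderator A", "Moderator B"],
})

# Left join keeps every card, attaching its session's metadata by date.
linked = cards.merge(sessions, on="date", how="left")
print(linked)
```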

Analysis

We used standard descriptive statistics and calculated the Cronbach α to measure the internal consistency of questions 1 to 4 and questions 5 to 8. We used logistic regression to obtain odds ratios between the dependent variables (card questions) and the independent variable (baseline versus after the curricular change), adjusting for PGY level and academic month quartile (July to September, October to December, January to March, April to June). We selected academic month quartile to account for temporal effects of residents' increasing clinical experience throughout the academic year. We used a significance level of P < .05.
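
A minimal sketch of this analysis, assuming the cards have been assembled into a table with one row per card: the variable names are hypothetical, statsmodels is our choice of package rather than the one the authors used, and the data below are simulated stand-ins, not study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated stand-in for the card data, one row per card (names hypothetical).
n = 1388
df = pd.DataFrame({
    "q1": rng.integers(0, 3, n),        # 0 = no, 1 = yes somewhat, 2 = yes definitely
    "q2": rng.integers(0, 3, n),
    "q3": rng.integers(0, 3, n),
    "q4": rng.integers(0, 3, n),
    "post": rng.integers(0, 2, n),      # 0 = baseline, 1 = after curricular change
    "pgy": rng.integers(1, 4, n),       # PGY-1 through PGY-3
    "quartile": rng.integers(1, 5, n),  # academic month quartile
})

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

print(cronbach_alpha(df[["q1", "q2", "q3", "q4"]]))

# Adjusted logistic regression for one card question: the outcome is answering
# "yes definitely" (coded 2), adjusted for PGY level and academic quartile.
df["yes_definitely"] = (df["q4"] == 2).astype(int)
model = smf.logit("yes_definitely ~ post + C(pgy) + C(quartile)", data=df).fit()
print(np.exp(model.params["post"]))          # adjusted odds ratio for the change
print(np.exp(model.conf_int().loc["post"]))  # 95% confidence interval
```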

Results

We collected 1388 cards over 146 days of baseline assessment (phase 1) and 77 days of follow-up after the curricular change (phase 3). For the days on which resident attendance was documented (168 of 223 study days), 1128 cards were turned in by 1649 attending residents (overall response rate, 75.3%). The mean number of residents per AMR session was 9.8 (SD, 4.1); the mean number of cards collected per session was 5.6 (SD, 3.0) at baseline and 7.4 (SD, 4.0) after the curricular change (P < .001). There were no significant differences in sex, postgraduate year, or day of the week between baseline and after the curricular change (table 1). The Cronbach α was 0.88 for questions 1 to 4 and 0.68 for questions 5 to 8 (acceptable value, >0.7), confirming the greater internal consistency of questions 1 through 4.

TABLE 1

Ambulatory Morning Report Characteristics

Phase 1, Baseline Evaluation

Our baseline curricular assessment found a stepwise decline in the perceived value of AMR with advancing postgraduate year of training (table 2). The percentage of residents who reported learning “greater than 2 new things today” decreased with advancing PGY level (PGY-1, 77%; PGY-2, 66%; PGY-3, 50%; P < .001).

TABLE 2

Ambulatory Morning Report (AMR) Quality by Level of Training, Baseline Evaluation

The ratings for the first 4 questions of the card did not differ by day-of-the-week format: general didactic sessions (Tuesday; mean, 7.3; SD, 1.4), board review questions (Wednesday; mean, 7.1; SD, 1.5), and faculty case presentation (Friday; mean, 7.1; SD, 1.6) (P = .15). There was also no difference in the PGY-3 residents' mean scores (Tuesday: mean, 7.2; SD, 1.6; Wednesday: mean, 6.9; SD, 1.7; Friday: mean, 7.0; SD, 1.8) (P = .35).

Phase 2, Curriculum Change

On the basis of the baseline analysis, in July 2008 we instituted a curricular change to target the needs of our senior residents. We changed the Wednesday session from board review questions to a case-based didactic and modified the content to be a subtheme of the topic of the week, with a higher cognitive level of teaching material. For example, if the topic of the week was “Outpatient Management of Chronic Obstructive Pulmonary Disease,” the Wednesday session became a case-based didactic session on “Interpretation of Pulmonary Function Tests.” We selected the Wednesday session for change because it was logistically feasible to add the subtheme concept on that day. We did not change the other lower-scoring session (Friday) because we wanted to maintain faculty presence in the conference and felt that its removal might deprive residents of valuable role modeling of how clinicians in real practice manage difficult ambulatory cases.

Phase 3, Curricular Change Evaluation

After the curricular change, the total score for the Wednesday sessions improved from 7.1 (SD, 1.5) to 7.4 (SD, 1.3) (P = .03) and became the highest of the 3 sessions. Ratings for the other sessions remained unchanged. In the unadjusted analysis, the ratings for the first 4 questions of the card improved for the new session, as follows: (1) I would recommend today's AMR to a colleague, by 7.2% (95% CI, 0.4–14.0; P = .05); (2) I learned something relevant to patient care, by 3.3% (95% CI, −3.0 to 9.5; P = .32); (3) More sessions should be structured like today's, by 5.8% (95% CI, −2.1 to 13.8; P = .16); and (4) The content was in an area I needed to improve, by 12.2% (95% CI, 4.6–19.8; P = .003).

The unadjusted and adjusted odds ratios before and after the curricular change for the new Wednesday sessions are shown in the figure. After adjustment for PGY level and academic month, the odds ratio for answering “Yes, definitely” to “Area I need to improve” was 2.53 (95% CI, 1.45–4.42; P = .001). The adjusted odds ratio for “Would recommend to others” was 2.08 (95% CI, 1.12–3.89; P = .05).

FIGURE

Unadjusted and Adjusted Odds Ratios for Residents Answering “Yes, Definitely” to Various Aspects of Ambulatory Morning Report Evaluation After the Curricular Change

By level of training, there was no significant change from baseline in the percentage of residents who reported learning greater than 2 new things, either overall (PGY-1, 77% vs 76%, P = .83; PGY-2, 66% vs 69%, P = .46; PGY-3, 50% vs 57%, P = .09) or for the new sessions (PGY-1, 72% vs 86%, P = .12; PGY-2, 70% vs 71%, P = .87; PGY-3, 52% vs 60%, P = .31). Overall, the mean rating for the curriculum by PGY-3 residents was unchanged (7.0 vs 6.9).

Discussion

We successfully applied the concepts of EMA (daily, unobtrusive, immediate evaluations) as an innovative evaluation tool to improve a specific component of our ambulatory curriculum. Our evaluation tool identified the relative value of 3 different AMR formats and revealed a decline in educational value with advancing PGY level. With our second year of evaluations, we demonstrated that the curricular change was successful, even after adjusting for level of training and timing within the academic year. After the curricular change, the new format became the highest rated of all AMR sessions.

The Accreditation Council for Graduate Medical Education (ACGME) requires that programs evaluate their curriculum at least annually, using data to demonstrate continuous improvement of educational processes, including the curriculum.19 With the new emphasis on outcomes, programs are developing educational initiatives that need evaluation.20,21 Ecological Momentary Assessment fulfills an important need in graduate medical education by allowing programs to demonstrate continuous improvement in their educational processes. Our study provides an innovative model that program directors can use to evaluate new initiatives and existing curricula. The daily, rapid evaluations provided immediate, real-time information about the educational content without a large investment of time or resources. Programs can make curricular changes quickly, without waiting for a full year of evaluations, and can limit residents' exposure to an ineffective curriculum rather than leaving it in place while a lengthy and cumbersome evaluation process runs its course. It is a practical and feasible evaluation method that provided valuable information about our AMR formats, identified a session to target, and confirmed that the change was successful.

We learned a great deal from our experience in this residency program. The process required continual attention to distribute and collect the cards and to analyze the data quickly, and it therefore should be limited to assessing a specific change for a finite period of time. We needed buy-in from residents to complete the cards, so the process had to be unobtrusive; we ensured that card completion took less than 1 minute. We avoided resident fatigue because residents attended AMR only on ambulatory rotations. We found that when the chief medical resident or faculty did not distribute the cards, the residents did not independently seek them out. Even so, 75% of the sessions had a response rate of more than 50%.

Our study has limitations. First, we measured the perceived value of the teaching sessions and did not measure the impact on medical knowledge. Second, we designed the evaluations to be completely anonymous, so we were not able to account for clustering effects or repeated measures in the analysis. Third, our observations are limited to a single conference at a single institution; however, testing a proof of concept is necessary before wider implementation. Finally, we detected a total mean score improvement of only 0.3 for the changed session (7.1 to 7.4), indicating a residual ceiling effect with our EMA tool; still, with more than 1300 cards turned in, this difference was statistically significant and was consistent with the responses on the other items. There was a measurable educational improvement that confirmed the curricular change was successful.

Ecological Momentary Assessment has been used in a variety of interventions that inform patient behavior, such as smoking cessation, pediatric obesity, and adolescent behavior.14,15,22 It has rarely been used to inform behavior and learning in graduate medical education. In our literature review, we found 1 study that used EMA in a family medicine residency program to assess the cognitive impact of electronic knowledge resources,23 and another that applied EMA concepts to give residents daily written feedback from faculty on their clinical performance in a rheumatology clinic.24 Many programs modify an existing curriculum based on perceived value or informal resident feedback. To our knowledge, we are the first to use EMA to evaluate and inform a residency curricular change. Our EMA tool allows a more reliable, structured evaluation while avoiding the burden of a full needs assessment and the 6-step approach. This “fast-track” evaluation method overcomes many limitations of the standard evaluation methods used in medical education.14,15 Our study provides evidence that brief, daily evaluation cards can serve as an innovative evaluation method to improve a residency curriculum.

Program directors can use EMA to rapidly assess educational components of their curriculum. The evaluations were unobtrusive to residents and took less than 1 minute to complete, yet they offered specific feedback about the sessions' relative value, allowed targeted curricular changes, and confirmed the impact on learners. With the novel concepts of EMA, program directors can implement mandated ACGME requirements for continuous improvement of their educational processes, with little investment of time and cost, and obtain specific feedback that overcomes the limitations of standard evaluation methods.

References

1. Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA. 2002;287(2):226–235.
2. Burke W, Baron RB, Lemon M, Losh D, Novack A. Training generalist physicians: structural elements of the curriculum. J Gen Intern Med. 1994;9(4 suppl 1):S23–S30.
3. Ende J, Atkins E. Conceptualizing curriculum for graduate medical education. Acad Med. 1992;67(8):528–534.
4. Huddle TS, Heudebert GR. Internal medicine training in the 21st century. Acad Med. 2008;83(10):910–915.
5. Loftus TH, McLeod PJ, Snell LS. Faculty perceptions of effective ambulatory care teaching. J Gen Intern Med. 1993;8(10):575–577.
6. Di Francesco L, Pistoria MJ, Auerbach AD, Nardino RJ, Holmboe ES. Internal medicine training in the inpatient setting: a review of published educational interventions. J Gen Intern Med. 2005;20(12):1173–1180.
7. Thomas P. Medical education curricula: where's the beef? J Gen Intern Med. 1999;14(7):449–450.
8. Thomas PA, Gebo KA, Hellmann DB. A pilot study of peer review in residency training. J Gen Intern Med. 1999;14(9):551–554.
9. Raman M, Shaffer E, Lockyear J. Gastroenterology fellowship training: approaches to curriculum assessment and evaluation. Can J Gastroenterol. 2008;22(6):559–564.
10. Sheets KJ, Anderson WA. The reporting of curriculum development activities in the health professions. Teach Learn Med. 1991;3:6.
11. Kern DE, Branch WT Jr, Jackson JL, et al. Teaching the psychosocial aspects of care in the clinical setting: practical recommendations. Acad Med. 2005;80(1):8–20.
12. Green ML. Identifying, appraising, and implementing medical education curricula: a guide for medical educators. Ann Intern Med. 2001;135(10):889–896.
13. Reed D, Price EG, Windish DM, et al. Challenges in systematic reviews of educational intervention studies. Ann Intern Med. 2005;142(12, pt 2):1080–1089.
14. Moskowitz DS, Young SN. Ecological momentary assessment: what it is and why it is a method of the future in clinical psychopharmacology. J Psychiatry Neurosci. 2006;31(1):13–20.
15. Shiffman S, Stone AA, Hufford MR. Ecological momentary assessment. Annu Rev Clin Psychol. 2008;4:1–32.
16. Demopoulos B, Pelzman F, Wenderoth S. Ambulatory morning report: an underutilized educational modality. Teach Learn Med. 2001;13(1):49–52.
17. Spickard A III, Hales JB, Ellis S. Outpatient morning report: a new educational venue. Acad Med. 2000;75(2):197.
18. Litzelman DK, Stratos GA, Marriott DJ, Skeff KM. Factorial validation of a widely disseminated educational framework for evaluating clinical teachers. Acad Med. 1998;73(6):688–695.
19. Accreditation Council for Graduate Medical Education. Program requirements for graduate medical education in internal medicine. Available at: www.acgme.org/acWebsite/downloads/RRC_progReq/140_internal_medicine_07012009.pdf. Revised July 1, 2009. Accessed April 16, 2010.
20. Meyers FJ, Weinberger SE, Fitzgibbons JP, et al. Redesigning residency training in internal medicine: the consensus report of the Alliance for Academic Internal Medicine Education Redesign Task Force. Acad Med. 2007;82(12):1211–1219.
21. Accreditation Council for Graduate Medical Education (ACGME), American Board of Medical Specialties (ABMS). Toolbox of assessment methods, 2000. Available at: www.acgme.org/Outcome/assess/Toolbox.pdf. Accessed April 22, 2010.
22. Rofey DL, Hull EE, Phillips J, Vogt K, Silk JS, Dahl RE. Utilizing Ecological Momentary Assessment in pediatric obesity to quantify behavior, emotion, and sleep. Obesity (Silver Spring). 2010;18(6):1270–1272.
23. Pluye P, Grad RM. Cognitive impact assessment of electronic knowledge resources: a mixed methods evaluation study of a handheld prototype. AMIA Annu Symp Proc. 2006:634–638.
24. Humphrey-Murto S, Khalidi N, Smith CD, et al. Resident evaluations: the use of daily evaluation forms in rheumatology ambulatory care. J Rheumatol. 2009;36(6):1298–1303.

Author notes

Lisa L. Willett, MD, is Associate Professor of Medicine and the Associate Director of the Internal Medicine Residency Program in the Division of General Internal Medicine at the University of Alabama at Birmingham; Carlos A. Estrada, MD, MS, is Senior Scholar, Veterans Affairs National Quality Scholars Program at the Birmingham VA Medical Center and Professor of Medicine at the University of Alabama at Birmingham; Terry C. Wall, MD, MPH, is Professor of Pediatrics in the Division of General Pediatrics and Adolescent Medicine; Heather L. Coley, MPH, is a Program Manager in the Division of General Internal Medicine at the University of Alabama at Birmingham; Julius Ngu, MD, MPhil, is in the Division of General Internal Medicine at the University of Alabama at Birmingham; William Curry, MD, is Professor of Medicine in the Division of General Internal Medicine at the University of Alabama at Birmingham; Amanda H. Salanitro, MD, is an Instructor at the Geriatric Research, Education, and Clinical Center at the VA Tennessee Valley Healthcare System and the Section of Hospital Medicine at Vanderbilt University; and Thomas K. Houston, MD, MPH, is Director of the eHealth QUERI at the Bedford VAMC and Chief of the Division of Health Informatics and Implementation Science at the University of Massachusetts Medical School.

Presented in part at the 2010 Southern Society of General Internal Medicine Annual Meetings, New Orleans, Louisiana.

Funding and support: This study was funded by a Health Sciences Foundation grant for Medical Education Research from the University of Alabama at Birmingham to Drs Houston and Willett.

Disclosures: The authors report no conflict of interest.

Ethical approval: This study was approved by the Institutional Review Board of the University of Alabama at Birmingham.

Disclaimer: The opinions expressed in this article are those of the authors alone and do not reflect the views of the Department of Veterans Affairs.