Abstract
Simulation training has emerged as an effective method of educating residents in cardiac emergencies. Few studies have used emergency simulation scenarios as an outcome measure to identify training deficiencies within residency programs.
The purpose of this study was to evaluate postgraduate year-1 (PGY-1) residents on their ability to manage an acute coronary syndrome and cardiac arrest scenario before and after internship in order to provide outcome data to improve program performance.
A total of 58 PGY-1 residents from 10 medical specialties were evaluated using a human patient simulator before and after internship. They were given 12 minutes to manage a patient with acute coronary syndrome and ventricular fibrillation due to hyperkalemia. An objective checklist following basic and advanced cardiac life support guidelines was used to assess performance.
A total of 58 interns (age, 25 to 44 years [mean, 29.1]; 38 [65.5%] men; 41 [70.7%] allopathic medical school graduates) participated in both the incoming and outgoing examinations. Overall chest pain scores increased from a mean of 60.0% to 76.1% (P < .01). Medical knowledge performance improved from 51.1% to 75.5% (P < .01). Systems-based practice performance improved from 40.9% to 71.0% (P < .01). However, patient care performance declined from 93.4% to 80.2% (P < .01).
A simulated acute coronary syndrome and cardiac arrest scenario can evaluate incoming PGY-1 competency performance and test for interval improvement. This assessment tool can measure resident competency performance and evaluate program effectiveness.
Editor's Note: The online version of this article contains the Faculty Evaluation form.
Background
Acute coronary syndrome (ACS) is the leading cause of death in the United States, with up to 1.5 million events each year.1 The incidence of sudden cardiac death in the United States is estimated to be as high as 250 000 per year.2 Because of the importance of early, appropriate management of these medical conditions, simulation training has emerged as an effective method of educating residents in cardiac emergencies.3
Residents trained using simulated resuscitations have shown improved cognitive performance and better adherence to advanced life support protocols.4–7 Simulation has also been used to improve resident orientation and to promote teamwork and patient safety during resuscitations.8–10 One academic program has shown improved quality of care during actual cardiac events as a result of resident training with simulation.11 Participating in realistic learning scenarios using human patient simulators has also been viewed as enjoyable and educational by trainees and faculty.12,13
Competency-based simulation examinations have been used to provide outcome data before and after internship to improve overall program performance.14 Educational outcome data regarding cardiac resuscitation have been limited to testing residents before and after brief simulation courses.7 The purpose of our study was to evaluate postgraduate year-1 (PGY-1) residents' ability to manage a simulated ACS and cardiac arrest patient before and after internship to provide outcome data to improve program performance.
Methods
Participants
A total of 61 PGY-1 residents from 10 medical specialties were evaluated using a simulated chest pain and cardiac arrest scenario on a human patient simulator before and after internship in June 2006 and 2007. The residents were entering Accreditation Council for Graduate Medical Education–accredited programs in emergency medicine, internal medicine, family medicine, pediatrics, obstetrics and gynecology, neurology, general surgery, orthopedics, otolaryngology, and transitional year. They completed the standard rotations for their specialty training programs during the PGY-1 year prior to retesting.
ACS and Cardiac Arrest Scenario
Residents participating in the study were given 12 minutes to evaluate and initially manage a 67-year-old man with a known history of coronary artery disease and diabetes who presented to the emergency department with chest pain. The patient was a high-fidelity human patient simulator that was able to answer questions and provide vital signs, physical examination findings, and cardiac rhythms. Residents had access to emergency equipment, including a cardiac monitor, defibrillator, oxygen, airway supplies, and medications. If requested, an electrocardiogram (ECG) was provided, which revealed changes consistent with an acute ST elevation myocardial infarction.
Residents were required to recognize and manage ventricular fibrillation when the patient became pulseless and unresponsive. Then, using the results of a chemistry panel showing hyperkalemia, residents were asked to identify the electrolyte abnormality most likely responsible for the cardiac arrest and list all possible treatments for the abnormality, beginning with the medications that should be used first in this patient. Patient care, medical knowledge, and systems-based practice competencies were assessed using an electronic evaluation form.
The simulated scenario with evaluation form was created by a board-certified physician and reviewed for accuracy and content by 4 additional physician educators. The station was standardized to ensure that all residents received the same information without hints, reminders, or recommendations from the evaluator. The timing of each station was strictly enforced.
Evaluation Forms and Data Interpretation
A single faculty physician evaluated each resident before and after internship using a computerized evaluation form with both objective and subjective criteria. An objective checklist based on established advanced cardiac life support guidelines was used for the patient care and medical knowledge competencies and graded in a binomial (yes/no) format. Systems-based practice competencies were graded using a 5-point Likert scale (1 = needs improvement, 2 = below average, 3 = average, 4 = above average, 5 = excellent). The scores were reported as a percentage of the maximum 250 points available.
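As a concrete illustration of this scoring scheme, the following is a minimal sketch in Python. The item counts and the 10-point weight for the binary items are assumptions chosen so the maximum works out to 250 points; the article specifies only the yes/no and 5-point Likert formats, the 6 systems-based practice skills, and the 250-point maximum.

```python
# Minimal sketch of the station scoring described above.
# Assumptions (not from the article): 22 binary checklist items worth
# 10 points each (220 points) plus six 5-point Likert items (30 points),
# giving the stated 250-point maximum.

BINARY_ITEM_WEIGHT = 10  # assumed point value per yes/no checklist item
LIKERT_MAX = 5           # 5-point Likert scale for systems-based practice

def station_percentage(binary_items, likert_scores):
    """Return the station score as a percentage of the maximum points.

    binary_items:  list of bools (yes/no checklist items)
    likert_scores: list of ints in 1..5 (systems-based practice skills)
    """
    earned = sum(binary_items) * BINARY_ITEM_WEIGHT + sum(likert_scores)
    maximum = (len(binary_items) * BINARY_ITEM_WEIGHT
               + len(likert_scores) * LIKERT_MAX)
    return 100.0 * earned / maximum

# Example: 18 of 22 checklist items completed, mixed Likert ratings.
print(f"{station_percentage([True] * 18 + [False] * 4, [4, 3, 4, 3, 4, 3]):.1f}%")
# -> 80.4%
```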
Data were collected on standardized InfoPath forms (Microsoft Corporation, Redmond, WA) and consolidated into an Excel spreadsheet (Microsoft). Station scores were compared for each individual resident and by residency department. SAS version 9.1 (SAS Institute Inc, Cary, NC) was used to conduct the analyses, which relied on standard univariate methods throughout. Statistical significance was set at P < .05. Changes in outcomes over time were analyzed using the Wilcoxon signed rank test for paired data. Changes in binomial (yes/no) data were analyzed using the McNemar test. Changes in ordinal outcomes (Likert scale) were analyzed using the Mantel-Haenszel mean score statistic. The sample size was sufficient to allow asymptotic methods.
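For readers who wish to reproduce this style of paired analysis, the sketch below (with made-up data) applies the Wilcoxon signed rank test to paired station percentages and the McNemar test to a single binary checklist item. The Mantel-Haenszel mean score statistic used for the Likert items is a SAS procedure option without a direct SciPy equivalent and is omitted here.

```python
# Illustrative paired analyses with hypothetical data; the study itself
# used SAS version 9.1.
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)

# Paired station percentages for 58 residents, before and after internship.
pre = rng.normal(60, 10, size=58)
post = pre + rng.normal(16, 8, size=58)  # simulate improvement after the year

stat, p = wilcoxon(pre, post)  # Wilcoxon signed rank test for paired data
print(f"Wilcoxon signed rank: statistic={stat:.1f}, P={p:.3g}")

# One yes/no checklist item as a 2x2 table of paired outcomes:
# rows = incoming result (yes, no); columns = outgoing result (yes, no).
table = np.array([[20, 5],
                  [25, 8]])
result = mcnemar(table, exact=True)  # McNemar test for paired binary data
print(f"McNemar: statistic={result.statistic:.0f}, P={result.pvalue:.3g}")
```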
Reliability of this assessment tool was strengthened by using the same evaluator for the before and after tests of all residents, by providing the evaluator with standardized instructions and training on the evaluation form, and by using multiple checklist items, most of which contained objective “yes/no” selections. For the subjective checklist items, the study relied on the single evaluator's internal judgment. The “practice effect bias” from using the same scenario for before and after testing was minimized by not providing residents with feedback after their initial test and by not informing residents that the second test would use the same scenario. This bias was also decreased by the time interval between tests.15
This study received institutional review board approval from Madigan Army Medical Center, Fort Lewis, Washington.
Results
A total of 61 PGY-1 residents from 10 residency programs participated in the incoming chest pain scenario in June 2006. Three residents did not participate in the end-of-year scenario because of scheduling conflicts. The data analysis for this study was therefore based on the 58 examinees who participated in both the incoming and outgoing chest pain scenarios. Participants ranged in age from 25 to 44 years, with a mean age of 29.1 years. There were more men (38, 65.5%) than women (20, 34.5%), and all were graduates of US medical schools, with more allopathic (41, 70.7%) than osteopathic (17, 29.3%) residents. The represented programs included internal medicine (n = 10), transitional year (n = 10), emergency medicine (n = 8), family medicine (n = 8), pediatrics (n = 6), general surgery (n = 5), obstetrics and gynecology (n = 4), neurology (n = 3), orthopedics (n = 2), and otolaryngology (n = 2).
There was statistically significant improvement in overall chest pain station scores among all residents (60.0% to 76.1%; P < .01), with improvement seen in each specialty (figure). Total patient care subscores declined from 93.4% to 80.2% (P < .01), with most checklist items showing a decline (table 1). Overall medical knowledge subscores improved significantly from 51.1% to 75.5% (P < .01), with all checklist items showing improvement (table 2). Total systems-based practice scores improved from a mean Likert score of 2.04 to 3.55, or from 40.9% to 71.0% (P < .01). All 6 skills in the systems-based practice section showed statistically significant improvement (table 3).
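As an aside on how the percentage figures relate to the Likert means, the reported values are consistent with expressing the mean score as a fraction of the 5-point maximum. This is an inference from the numbers, not stated explicitly in the article; the 0.1% difference on the incoming score presumably reflects rounding of the unrounded mean:

$$\frac{3.55}{5} \times 100 = 71.0\%, \qquad \frac{2.04}{5} \times 100 = 40.8\% \approx 40.9\%.$$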
Discussion
All residents at this facility completed advanced cardiac life support training after initial testing, and all residents showed an improvement in their ability to evaluate and manage a simulated patient with ACS and ventricular fibrillation after 1 year of postgraduate medical training. However, the data suggest that exposure and repetition may be factors leading to greater improvement. Pediatrics interns showed less improvement than interns in programs that focus on adult medicine, most likely because cardiac events are rare in that discipline. In contrast, emergency medicine residents, who likely have the most contact with cardiac events, achieved the highest end-of-year score (87.6%). The level of improvement may also vary with each program's curriculum and the importance of this training to the medical specialty. This study did not address the curriculum content of each specialty in relation to advanced cardiac life support; further study could be undertaken to assess this.
The initial scores showed great variability among specialties, ranging from a low of 48.3% for the neurology residents to a high of 69.9% for the emergency medicine residents. Because the same evaluator and objective checklists were used throughout, this variability likely reflects differences in the knowledge residents brought into residency, which in turn reflects what they learned in medical school.
Identifying Areas for Curricular Improvement
Results from this simulated scenario can be used to target specific resident learning needs. Marked improvements were seen in selected areas of the treatment of myocardial ischemia, ventricular fibrillation, and hyperkalemia. The use of morphine, oxygen, nitroglycerin, and aspirin for ACS also appeared to be well known both before and after internship. Despite statistically significant improvement in most medical knowledge areas at the end of the year, fewer than half of the participants recognized ECG findings of inferior wall ischemia or considered fibrinolytic therapy for ischemia. Identification of ST elevation or ordering of troponin tests occurred in only 75% of those tested after the PGY-1 year. Curriculum improvements in these deficient areas could lead to better ECG interpretation and ACS management skills within a residency program. Scores from the initial testing represent what the residents learned during medical school and could be provided as feedback to the originating schools.
Decline in Patient Care Scores Over the Year of Training
Many of the patient care competency skills tested focused on basic life support, and residents' performance in these areas decreased on repeat testing in both the ACS and the ventricular fibrillation portions of the scenario. There was a small, statistically nonsignificant drop in some patient care skills during the ACS portion of the encounter. Participants may have recalled their pretest experience and simply waited for the patient to decompensate into ventricular fibrillation rather than conducting an initial assessment of the patient.
During the cardiac arrest phase, the higher patient care scores on initial testing may reflect the PGY-1 residents' strong knowledge of basic life support (checking the patient's airway, breathing, and circulation) and their inexperience with rhythm recognition and advanced cardiac life support. Viewed this way, a decline in patient care scores on end-of-year testing may actually represent improvement, because residents who recognize ventricular fibrillation early can provide timely defibrillation. The PGY-1 year may have taught these advanced management principles, which could lead to better patient outcomes. Determining the time lapse between ventricular fibrillation recognition and the initial shock may clarify whether the disparity results from a loss of basic life support skills on the end-of-year test or from better skill at rhythm recognition and prompt defibrillation. However, this drop in patient care scores could also represent residents treating the monitor rather than the patient.
Study Limitations
Simulation can serve as a valuable assessment tool in residency; however, human patient simulators are not a perfect substitute for real patients and clinical practice. Initial scores may have been artificially low if participants were unfamiliar with working in a simulated environment. As shown in table 3, no participant initially used the blood pressure cuff correctly, which may reflect a lack of familiarity with the simulator's equipment at the beginning of the training year. Residents in some training programs may gain more experience with medical simulation during the intern year, making them more successful in a simulated environment.
The evaluator was not blinded to whether a resident was taking the before or the after test, and knowing that a resident had completed internship may have led the evaluator to grade the outgoing test more stringently. No analysis was done to determine whether the checklists were responsive to the increased expertise residents gained over 1 year of training. Blinding the evaluators in future studies may clarify whether such a bias existed.
In our study, interrater variability was not a concern because the same single evaluator graded every resident before and after internship. If more than 1 evaluator were used, it would be important to assess agreement across evaluators by having them evaluate the same resident, for both the objective checklist items and the subjective assessments.
Recommendations for Future Research
This study differs from prior studies in that it tested chest pain and cardiac arrest management and the improvement of that knowledge over the course of the PGY-1 training year. Further study may determine whether simulation scores predict physician success in residency, on board examinations, or in clinical practice, and whether curriculum changes prompted by these findings lead to further improvement in outgoing scores. Resident surveys could also be considered to describe residents' experience with these types of resuscitations. With larger sample sizes, further assessment by category, such as specialty, prior experience, medical school, number of emergency medicine rotations completed, or age, could be conducted; subcategory samples, such as those by specialty, were not large enough for comparison in this study.
References
Author notes
Susan P. Opar, MD, is on the Faculty of the Family Medicine Residency Program, Madigan Army Medical Center; Matthew W. Short, MD, FAAFP, is Director, Transitional Year Program, Madigan Army Medical Center, and Adjunct Assistant Professor of Family Medicine, Uniformed Services University of the Health Sciences, School of Medicine, and Clinical Assistant Professor of Family Medicine, University of Washington School of Medicine; Jennifer E. Jorgensen, MD, FACP, is a Gastroenterology Fellow, University of Michigan Medical Center; Robert B. Blankenship, MD, FACEP, is Medical Director, St. Vincent Medical Center; and Bernard J. Roth, MD, FACP, FACCP, is Pulmonary Disease Subspecialty Education Coordinator, Madigan Army Medical Center, and Professor of Medicine, Uniformed Services University of the Health Sciences, and Clinical Professor of Medicine, University of Washington, Division of Pulmonary/Critical Care Medicine.
The views expressed are those of the authors and do not reflect the official policy of the Department of the Army, the Department of Defense, or the US Government.
An oral presentation of this work was given on April 25, 2008, at Western Regional Medical Command Madigan Research Day, Fort Lewis, WA. A poster presentation, “Acute Coronary Syndrome and Cardiac Arrest: Using Simulation to Assess Resident Performance and Program Outcomes,” was given March 4-7, 2010, at the 2010 Accreditation Council for Graduate Medical Education Annual Education Conference, Nashville, TN.