The Accreditation Council for Graduate Medical Education (ACGME) expects programs to perform an annual program evaluation and, with the transition to the Next Accreditation System in 2013,1 specifies a more comprehensive self-study as part of the 10-year accreditation review. Both are components of a larger approach to ongoing monitoring and promoting improvement in all programs, including programs in compliance with the accreditation standards.1,2 In an earlier article, we described common improvement priorities in a large sample of programs that participated in a voluntary site visit following completion of their self-study.3 This second article focuses on attributes of effective program evaluation and improvement. The intent is to offer actionable recommendations for enhancing and accelerating improvement in all types of programs. The approach scales to a program's size, current status, and the time and other resources available for program improvement.
The article focuses on the 5 dimensions of effective program improvement efforts shown in the box. These attributes emerged from interviews with leadership, faculty, and residents in programs that were particularly effective in making improvements, as well as in programs that struggled with evaluation and improvement. We also used field notes, written feedback from participating programs, and discussions of these concepts by the site visit teams.
Linking improvements to program aims and environmental context
Executing the plan-do-study-act (PDSA) cycle
Managing and tracking improvement data
Stakeholder engagement and involvement in improvement activities
Coordination between program, departmental, and institutional aims and priorities
The 5 dimensions were initially envisioned to evolve into an assessment tool for use by programs in self-evaluations and as part of the accreditation site visit.2 This would follow a developmental model, with feedback to help programs progress to the “next level.” However, closer examination showed that attempts to “score” programs on these dimensions, even as a formative assessment, would not be fair, because most dimensions do not follow an evolutionary model and there is a range of effective practices across specialties and program sizes. The 5 dimensions nonetheless provide a shared mental model of the attributes of effective program improvement activities. They are currently being used by ACGME accreditation field representatives to offer formative feedback to programs during the 10-Year Accreditation Site Visit.
Linking Improvement to Aims and Context
Programs on continued accreditation currently have few or no citations, and the majority of citations are resolved in a single annual accreditation cycle.4 Setting aims allows these programs to make improvements in areas important to program leaders, faculty, and trainees. Aims can be set as part of the annual program evaluation or the self-study, and revised as needed. Aims can differentiate a given program, and position it to meet local, regional, and, for some programs, national needs. Aims may relate to attributes of the individuals who matriculate into the program, such as recruiting individuals who have overcome barriers or those with the potential to excel in areas such as leadership or advocacy. Aims often relate to the attributes of graduates, and in an earlier article, we identified that educating physicians to be fully prepared for unsupervised practice is a common aim across a range of specialties.3 Other aims for graduates may emphasize future careers in academic medicine, research and generating new knowledge, or practice in underserved areas. A less frequently seen, yet highly useful set of aims relates to the attributes of the program itself, such as providing “a culture that is supportive, respectful, and compassionate toward trainees, patients, and colleagues.”3
Aims are a powerful way of engaging faculty, trainees, and other stakeholders in a discussion of the program as a first step toward improvement, whether this occurs as part of the self-study, during the annual program evaluation, or in response to emerging problems or information suggesting a need for change.
Aims guide improvement activities by prompting an assessment of whether current activities further those aims. Creating a table of aims, the activities that further them, and improvement projects may reveal current or planned projects without a link to any aim, key aims without current activities, or aims without an improvement project. Such a table can anchor a conversation that helps prioritize improvements and identify new improvement projects in areas important to the program.
Completing the Plan-Do-Study-Act Cycle
An established quality improvement guide describes the improvement process as repeated cycles of widening and narrowing focus.5 In the initial phase, an expanded focus allows for broad consideration of possible improvements, followed by narrowing to a few key priorities to ensure follow-through. A similar process is used for the improvement work itself, with an expanded focus generating possible causes of a problem and a narrowed focus identifying the root cause that will be addressed through the improvement effort. Finally, the focus is expanded again to consider a wide range of possible solutions, and then narrowed to select the intervention for testing during the plan-do-study-act (PDSA) cycle.6
The ACGME by design instituted a time lag between the self-study and the 10-Year Accreditation Site Visit to give programs time to make improvements in key areas identified during the self-study and to demonstrate these improvements during the site visit. A common cause of “arrested” improvement is that a sizable share of improvement interventions are abandoned or considered complete before a full PDSA cycle has been performed. There are multiple reasons improvement initiatives arrest at the “Plan” or “Do” phases of the cycle, including competing demands on the individuals charged with managing improvements, selecting too many projects at a given time, and a lack of understanding of the key attributes and the importance of the “Study” and “Act” components of the improvement cycle.
The “Study” element entails evaluating the data, comparing them to a hypothesis or a less formal set of expectations, assessing successes, failures, surprises, and unintended consequences (good and bad), and summarizing what was learned. At this phase, assessing the effectiveness of interventions is critical, yet it frequently is not done. The final “Act” step of the cycle is when key decisions are made, with the options of adopting the change, adapting the change, or abandoning the intervention in favor of a better approach. Each option needs to be considered in an evidence-based way, using the data collected throughout the cycle. The ACGME has provided an easy-to-use form for tracking the dimensions of the PDSA cycle7 to assist individuals charged with improvement activities in navigating these important, often less well understood, phases of the improvement cycle.
For some areas, such as improving graduates' board certification performance or increasing minority recruitment, outcome data may not be available in time to guide short-term initiatives; in these cases it is useful to look at “early” data to guide refinements to interventions. For an effort to increase graduates' performance on the certifying board examination, this may include in-training examination data or even residents' evaluations of a new board-focused curriculum.
Managing Improvement Action Plans and Data
Effective improvement activities rely on data and on individuals who are charged with tracking outcomes and managing the data. The ACGME Common Program Requirements specify that the Program Evaluation Committee needs to document action plans from the annual program evaluation and track and document progress.8 A wide range of data can be used in program improvement, and the ACGME offers a list of high-value data for the annual program evaluation and the self-study.9
A perception that the program director should be solely responsible for managing action plans can be a barrier to effective improvement tracking. Entrusting data tracking to faculty, the program coordinator, or senior trainees may accelerate data reporting and the resulting improvement. In many programs with effective improvement processes, multiple individuals have been delegated responsibility for specific projects, with regular meetings to ensure shared accountability and monitoring of progress. The ACGME has created forms for tracking program improvement data,10,11 available from the self-study website, and many sponsoring institutions have designed institutional forms that are used when the Graduate Medical Education Committee reviews program improvement activities. Forms should specify who is responsible for a given improvement intervention, relevant deadlines and dates when updates are expected, the types of data and how they are being collected, and who is responsible for data collection and aggregation.
Stakeholder Involvement and Engagement
An important textbook on utilization-focused evaluation recommends that useful evaluations focus on “intended uses, by primary intended users.”12 Program leadership, faculty, and trainees are the primary intended users, and their input and involvement in the improvement process are critical. Yet they are often not the only stakeholders. A common source of input into the program evaluation is feedback from program graduates, such as a survey of graduates at 1 year and 5 years in practice. Nurses and other members of the multidisciplinary team can also offer feedback important to program improvement efforts.
The figure shows 4 levels of stakeholder engagement. Of note, while more stakeholder engagement generally is associated with more meaningful and accelerated improvement, there is a wide range of approaches for how this might be achieved, including different models for small versus large programs. No single approach to engaging stakeholders will be effective across all programs.
Coordination Between Program, Departmental, and Institutional Aims and Priorities
This dimension is critical for aligning program-level efforts with institutional priorities, such as the quality and safety of care or educating the physician workforce needed in the region. ACGME guidance for the self-study suggests vetting aims with departmental and institutional leadership to promote alignment, which may also assist the program in securing resources. At the same time, findings from programs that underwent a voluntary pilot visit suggest that conflict between program and departmental or institutional priorities is not infrequent, such as a program's aim to educate generalist physicians conflicting with its department's efforts to increase subspecialty faculty and a focus on subspecialized care.3 When program aims and departmental or institutional goals appear to be in conflict, conversations with departmental and institutional leadership are needed to ensure appropriate support for important program aims.
Another area for coordination is between the core program and its subspecialty programs. In certain specialties, this coordination is well established, as the subspecialty program directors report to the core program director. In other programs, improvement efforts may occur within individual programs with no coordination. This can result in missed opportunities to collaborate on common improvement priorities and to conduct a broader evaluation of the core and subspecialty programs by the group of program directors, which may identify shared priorities and enhance the likelihood that interventions will succeed.
Combining Useful Practices in Several Dimensions: The Rapid Improvement Cycle
An effective way to accelerate improvement is through rapid feedback cycles, which combine attributes of several of the dimensions of effective improvement. Rapid improvement cycles use “good enough” data, have short timelines of several weeks to a few months, and include a reflective debriefing session on the current improvement data. During these sessions, priorities are actively discussed with stakeholders as part of the “Study” phase of the improvement cycle, before moving to the “Act” phase of implementing the change, abandoning the given intervention, or repeating the PDSA cycle.
Rapid improvement cycles are aided by communicating the improvement plan and the results achieved through brief, focused (1-page) documents that facilitate discussion and allow stakeholders to co-own the improvement process. Questions should focus on “What have we learned? What are the implications for what we are trying to achieve? What are the next steps?” An added benefit is increased faculty and trainee awareness of, and ownership in, program improvement initiatives. Finally, enhancing the focus on the effectiveness of action plan steps may increase the program's ability to modify improvement interventions when measurements do not (yet) show the expected improvement.
Institutional Engagement in Program Improvement
Institutions have been active in reviewing program improvement efforts through their Graduate Medical Education Committees, even in the absence of a requirement for this activity. These reviews are valuable both to programs and to the institution's oversight role in an era when most programs on continued accreditation receive few citations. Information shared by institutional leadership across a range of recent site visits suggests these reviews are deemed high value because they identify cross-program concerns and needs, highlight best practices for wider sharing, and showcase the experience of programs that completed an early self-study or 10-Year Accreditation Site Visit to facilitate learning. Institutional committees may also suggest common priority areas for programs, such as resident and faculty well-being, enhancing the appeal of didactic sessions, the business of medicine, or interprofessional education and practice.13
Conclusion
The 5 dimensions of effective program improvement are offered as suggestions for programs that wish to enhance and accelerate their improvement process through both their annual program evaluation and the self-study. They are flexible and can be tailored to the needs of the given program and the current state of its improvement effort.