Editor's note: This is a commentary on Wagner D, Lypson M. Centralized assessment in graduate medical education: cents and sensibilities. J Grad Med Educ. 2009;1:21–27.
The article by Wagner and Lypson is thought-provoking at a number of levels for health care professionals. It is valuable not only to consider the immediate implications of assessing incoming residents' preparedness for clinical practice and learning, but also to see how the work may serve as a probe for a number of systemic issues in the education of health professionals.
At first pass, the authors describe 2 remarkable efforts to improve the assessment and training of new postgraduate-year-1 (PGY-1) doctors. These efforts are notable for their involvement of expert clinical leadership in curriculum design, a clear focus on concepts and skills that may be inadequately addressed in traditional undergraduate education, and a process that is standardized yet leaves room for continuous iteration and improvement. The efforts clearly add substance and value to the residency programs involved, and they also appear to give new residents a much clearer roadmap to their own optimal training. As such, this approach has much to recommend it for adaptation in any number of postgraduate education programs, helped by the pragmatic focus on key steps and lessons for those who want to initiate a similar process.
On another level, by creating an assessment process and data, the authors provide a valuable, if imperfect, lens into our training systems and the results they deliver. We all strive to train and support skilled physicians. Even those of us who have little contact with resident education are completely dependent on this process for our health care providers and leaders. As a physician in a large health maintenance organization, I have limited involvement with the process of graduate medical education, but I work with 6000 examples of the practicing physicians it produces: my colleagues in the medical group. How reliably are we preparing our next generation, and what can the current study add to our knowledge and understanding of opportunities for improvement and the steps we need to take?
Opportunities for improvement will be made manifest by performance gaps. These gaps are identified consistently in a number of areas in the current work. In analyzing these, several assumptions can be made and are supported by the data.
First, these results are real, and they identify issues that very likely translate into actual clinical care. Errors and failures to reliably deliver care are huge problems in our current health care delivery systems.1
Second, these skills deficits are likely to add to cost, and they may cause harm. Data from the Keystone project are one example.2 In that study, basic skills and protocols for central line insertion were found to be lacking on a very broad scale, leading to harm and expense from bloodstream infections; these complications could be reduced significantly by focused performance improvement efforts. Surgical site infections are another example of considerable harm, with well-documented opportunities for improvement.3–5 This adds urgency to finding and remediating these problems. It also opens the possibility of a far more robust system for tracking the benefit of these interventions, which may assist with funding them. A system that can truly measure the cost of harm, optimize the reliability of care, and support the training of new physicians will be the model for the future.
Third, we must be clear that the skills deficits measured are almost always a reflection on the educational system, not on the quality of the entering resident. The PGY-1 physicians tested in this process are selected for high intelligence and work ethic; improving performance and competence is almost entirely a matter of systemic reform, not a winnowing of imperfect candidates.
So where do we need to change how we do things? A 60 percent score on patient care items is cause for concern. Gaps in psychomotor skills and breaches in aseptic technique are not what we would expect from a truly reliable training process. This may or may not be a new phenomenon. On one hand, the baseline reliability and procedural skill seen with one procedure, central line placement, might suggest that we have not fully succeeded in this task. Data from the Keystone project point to significant and costly harm associated with inconsistent central line practices, and many of those lines were placed by senior physicians well out of training.
In addition, simply avoiding the complications of the procedure may be a small part of the potential for improvement. Recent data from the Institute for Healthcare Improvement's Surviving Sepsis Campaign6 suggest that clinicians may avoid performing procedures in which they believe they lack skill. In the case of sepsis, the harm from failure to place a central venous pressure line and optimize therapy may be a leading cause of preventable mortality.7 Clearly, there is room for improvement for doctors at all stages of their careers.
At the same time, some aspects of this “performance gap” in residency entrants may have been worsened by well-intentioned safety interventions. In an effort to avoid harm, attending supervision of emergencies such as cardiac arrests has been greatly increased over historical levels, and invasive procedures have been transferred to smaller and smaller numbers of providers. This has almost certainly improved the safety of patients, and we clearly cannot return to practicing on real patients to attain basic skills. However, we have paid a price in reduced learning opportunities, and newly minted graduates are a clear example. In regional simulation-based training, including fundamentals of critical care skills classes, we find that many of our newest colleagues were not consistently trained to perform procedures that were a routine part of practice in the past.
This is one of our main opportunities for systemic change. The authors have found at least some of the gaps; they also note that remediation efforts limited to reading or lectures may not be powerful enough to attain the measurable goals of acceptable and consistent performance at the “shows how” and “does” levels. Immersive learning, simulation, and training until proficiency is attained all show promise in this area.
What might be the next steps? It seems clear that the future will be one of objective and demonstrated competence for physicians, and of reliable care delivery for health care systems. Action on problem areas such as those highlighted by Wagner and Lypson is an imperative. Although more data will always be of value, the areas noted in this article provide a starting list not only for education but for systemic quality improvement. Better care ultimately will save considerable cents, a linkage that hopefully can be measured and translated into robust and sustained educational processes in the future.
Author notes
Paul Preston, MD, is a staff anesthesiologist, simulation instructor, and physician safety educator for The Permanente Medical Group, a part of Kaiser Permanente Northern California.