I recently visited a retired physician who practiced in Roma, Texas, population 10 000. His family has lived and ranched in Roma for 6 generations, since well before the border was moved and Roma became part of the United States. As the town's first and only physician for most of his 4-plus decades in practice, Dr Mario Ramirez did it all: deliveries, house calls, minor surgery, and seeing 60 patients a day. After wearing himself ragged racing to attend women in labor far from his office, he managed to rent a historical house right on Roma's central square and convert it into a tiny hospital—the first of its kind—where women in labor could be watched over and delivered of their infants.
The hospital's main challenge was not funding or patient volume, but regulations. First, the fire standards were not met, so Dr Ramirez had to buy discarded metal stairs from old buildings in San Antonio that were being renovated for the World's Fair and affix them to the outside of his building. Next, he was cited for not having a sprinkler system, which required running pipes throughout his historical building. Finally, he was told the corridors of this ancient adobe house needed to be widened to meet code. Regulations, the need for him to meet “standards of care,” ultimately resulted in no care; regulations were the undoing of what had been a godsend to the women and children of Starr County.
A similar phenomenon seems to have occurred with the teaching and testing of physical diagnosis skills: these skills are not being taught beyond the medical school physical diagnosis course of the first and second years, and we do not test for their acquisition at the completion of residency. As a result, many of us believe there has been an embarrassing decline in basic bedside skills. The reason is that the high-stakes exam that once existed (and that would inevitably have led residents to take pains with their technique) was abandoned some years ago because the test did not meet code, so to speak. It was thought to be too subjective, too arbitrary, and not standardized. In attempting to make a “great” test out of a subjective and variable one, we did away with any test of clinical skills, at least in the field of internal medicine.
The exam of old was no doubt full of flaws, but its very existence sent the message that clinical skills mattered. They still matter, but we have replaced such tests with psychometrically valid multiple-choice tests that do not assess any of these skills. Residency programs were assigned the responsibility of making sure clinical skills were acquired; however, programs have been unable to teach, let alone reliably evaluate, the bedside skills of trainees.
An academic physician in America who trained in Britain (and still finds the time to return to Britain to assist in the clinical exams for membership in the Royal College of Physicians) wrote to me saying, “One cannot pass the medicine boards in the UK without passing the half-day clinical test, which is an amazingly sensitive way to find out who has ever been ‘around’ patients. You can tell within 30 seconds which candidates have been trained at the bedside by their rapport and handling of patients” (J. Baillie, MBChB, written communication, January 5, 2010).
We need better ways to test who has been “around” patients. What we test so well now is the resident's facility with the “iPatient”—the virtual construct of the patient residing in the computer (while the real patient in the bed becomes a mere icon for the data file in the computer).1 In that sense, we are much like the drunk who has lost his keys somewhere on a dark street but insists on searching for them under a faraway streetlight, because that is where the light is good.
The problem begins in the medical student years—the United States Medical Licensing Examination “clinical skills” exam seems to test everything but the kind of clinical skills and technique I am talking about here. A student told me that in the exam scenarios with standardized patients, all one needs to do is approach the patient with a tendon hammer for the knee to shoot out: in other words, you are being tested on whether you thought of checking a reflex, not on whether you can actually check one. Consider this recent letter to the editor from a medical student in Britain:
“This difference between the UK and [the] USA was highlighted when I recently spent a month at a New York teaching hospital as part of my elective. The attending physician asked for a list of the signs of clubbing, something that I was taught in my first week of clinical medicine, however, neither of the 2 fellows I was with could answer. This and other examples during my time in the USA confirmed to me the value of the clinical method, reinforced during numerous bedside teaching sessions during my clinical education in the UK and that from this respect, the UK training system will better allow me to become the physician I desire to be.”2
This student has perhaps observed the fact that “rounds” in too many teaching institutions consist of the attending sitting in a room with residents and “card flipping.” The patient visit (if it occurs at all) happens as an afterthought, and often the attendings do this on their own. Candidly, many younger attendings will admit that the act of rounding at the bedside with the team is an area of discomfort and anxiety—they are much more comfortable wrestling with the vagaries of sodium or following “critical pathways.”3 The fault is of course not theirs, but ours, as educators, for not having made these skills something we value and therefore test.
With all of our talk about quality and medical errors, I think the lay public would be somewhat scandalized to learn that we have wonderful multiple-choice standardized tests that measure cognitive ability, and yet have nothing that checks to see that the graduates of our residency programs can competently examine a patient, feel a spleen when it is massively enlarged, recognize clubbing of the fingers, define cranial nerve deficits, or detect a massive pleural effusion—although we readily tick off all these things as done on the electronic medical record (or even more often write NAD, which really means “not actually done” as opposed to “no abnormality detected”).
Following are some solutions:
Emphasize bedside technique during residency training: not just a theoretical knowledge of how to percuss, but actual instruction and supervision, repetition and feedback. To this end, we at Stanford have developed what we call the Stanford 25: twenty-five technique-dependent physical diagnosis maneuvers that we will teach to all our interns and watch as they perform them. One example is the ankle jerk in the bedridden patient; if this can be done (it requires proper positioning and proper use of the hammer), then it is likely that other reflexes can be elicited just as easily. When the interns become junior residents, they will have some expertise and a repertoire of things to show and teach the new interns. Technique matters.4,5
Teach the use of bedside exam skills as a valuable way of setting priorities with complicated patients, a way of establishing a hierarchy in a “problem” list that scrolls off the page. These days, the data being generated with each patient are mind-boggling, but only a visit with the patient can elicit where it hurts or what part is tender.
Correlate the relevant anatomy and physiology with disease in a more direct way: the wonderful technology of imaging should allow more direct correlation and sharpening of skills—if the computed axial tomography scan shows that the spleen should be palpable, we should go back and see whether it was indeed palpable, and whether we missed it during examination because of poor technique.
Promote physical exam findings as valid phenotypic markers, rather than as archaic collections of eponyms; in the genomic era it is clear that the phenotypic markers of disease are often more meaningful and predictive than the genotype. Finding nicotine stains on the fingernails, or acanthosis nigricans, or xanthelasma on the eyelids is much more predictive of the risk of cancer, diabetes, or coronary artery disease, respectively, than mapping out the patient's genome.
Incorporate 20th-century technology (bedside ultrasonography, panoptic ophthalmoscopy) into the required bedside skill set.
Develop a high-stakes exam so that the credential of being board-certified implies more than one's ability to study the Medical Knowledge Self-Assessment Program (MKSAP). Board certification should imply that there is some clinical skill that has been acquired and tested at the bedside and far away from a computer. A model might well be the London MRCP (member of the Royal College of Physicians) exam. Humphrey Hodgson, MBChB,6 senior censor and vice president of education/training for the Royal College of Physicians of London, wrote in the British Medical Journal in response to our description of the Stanford 25:
“Whilst the MRCP exam may not for the whole of its 150-year history [have] been an entirely appropriate test of clinical skills (the Greek language option being dropped only in 1936, and the requirement for wearing morning dress in 1943), in its current form it provides just such an assessment, and much work has been performed over the last decade to ensure that it is a robust and fair assessment. The clinical component – PACES, Practical Assessment of Clinical Examination Skills – covers directly observed procedures in physical examination, as well as history taking, communication, and consultation and diagnostic skills. Twenty-four of the 25 physical diagnostic manoeuvres taught to the Stanford trainees read exactly like a membership candidate's aide-memoire. The MRCP examination also includes a wide-ranging knowledge-based assessment and assessment of problem-solving ability.
The MRCP diploma remains an internationally recognised imprimatur, and many of us believe that its assessment of the direct interaction between doctor and patient is critical in earning that position. As your leader states, the public legitimately expect their physicians to be competent in physical examination. In addition, importantly for the medical graduate undergoing training in General Internal Medicine in the UK, in the near future completion of the Core Medical Training (CMT) component will involve (for those who started CMT during or after August 2009) obtaining the full MRCP(UK) diploma. The public can be reassured therefore that doctors in higher Medical Training have demonstrated the important skills your leader highlights.”
The American public, not to mention those of us involved in graduate medical education, deserves the same reassurance—that our graduates have the technique and skills to handle patients in expert fashion. We forget too often that the examination of the patient is a time-honored ritual, and that the purpose of all rituals—weddings, presidential inaugurations—is transformation. Can there be a ritual more significant than one human being confiding in another and then disrobing and allowing touch? In any other context this would be assault. It is crucial that our residents' skills at the end of training be worthy of this ritual. The transformation associated with this ritual is beyond measure: it is the generation of trust and the sealing of the patient-physician relationship.
References
Author notes
Abraham Verghese, MD, MACP, is Professor and Senior Associate Chair for the Theory and Practice of Medicine in Internal Medicine at Stanford University.