When training to become a physician, a key development is simply learning the parlance. I vividly recall sitting in a lecture hall on my first day of medical school and being told by the dean of students, “In your first year of medical school alone, you will learn 10 000 new vocabulary words.” I was never sure where he obtained this statistic, and I have only been able to corroborate his claim through anecdotal online reports based on sketchy information. Nonetheless, anyone in medical education would agree that the better part of medical training hinges on talking the talk.

As I have become interested in how we teach clinical reasoning, I am increasingly bothered by some of the popular phrases that seep into the lexicon of our impressionable trainees. Many of these sayings are familiar to learners and teachers alike: “until proven otherwise,” “high index of suspicion,” among others. On the surface, these expressions seem to encapsulate lessons learned from our predecessors' missteps, and there is a sage aura around many of them.

However, I contend that these phrases convey superficial wisdom at best. Ambiguous and open to multiple interpretations, they belie the complexity of the clinical reasoning process. By their very nature, they are reductionist and thus are antithetical to the Bayesian approach that underpins sound probabilistic reasoning.1  I refer to these phrases as pseudo-probabilistic aphorisms, and I believe they are a scourge to advancing trainees' clinical reasoning. I propose the following list of flawed partial pearls of wisdom be returned to the oysters from which they came.
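As a point of reference, and in my own notation rather than anything drawn from the cited review, the Bayesian relationship I have in mind is usually written in its odds and likelihood-ratio form, where D is the diagnosis under consideration:

$$\text{posttest odds of } D = \text{pretest odds of } D \times LR, \qquad LR = \frac{P(\text{finding} \mid D)}{P(\text{finding} \mid \text{not } D)}$$

Each of the phrases discussed below gestures vaguely at one of these quantities, a prior, a likelihood ratio, or a posterior probability, without ever committing to a number.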

When I hear the phrase “high index of suspicion,” a patient with an aortic dissection comes to mind. Dissections are often deadly, frequently missed, and easily mistaken for more common chest pain disorders. Dissection is the quintessential example of when I need heightened suspicion: a do-not-miss diagnosis that is uncommon and elusive in routine diagnostic algorithms.

But that is my interpretation, and therein lies the problem. There is no universal definition of what constitutes a “high index of suspicion.” I am not alone in wondering exactly what index we are using to measure suspicion.2 In fact, neither the Merriam-Webster nor the Oxford dictionary has a satisfying definition of index that would make this phrase semantically correct, let alone diagnostically helpful.

Google and Google Scholar search results show the phrase linked to (1) extremely common diagnoses such as atrial fibrillation, irritant dermatitis, or alcoholism; (2) unusual manifestations of protean conditions that are difficult to diagnose definitively, such as extrapulmonary tuberculosis; and (3) extraordinarily rare entities that one would never suspect at all, such as a spontaneous abdominal aortic infection.3 

Again, I return to the aortic dissection as the best fit for my understanding of “high index of suspicion.” But why use such an ambiguous, nearly nonsensical phrase to convey this idea? Is it not more instructive to implore trainees to consider the possibility of an aortic dissection, and potentially deviate from routine diagnostic algorithms, when seeing patients with acute chest pain?

The phrase “low threshold” is used in both diagnostic and therapeutic decisions, and I encounter it most frequently in handoffs. I suspect many of us intuitively understand the concept, but I personally have been vexed when attempting to put it into action. Here is a typical exchange between me and a colleague at sign-out in the intensive care unit.

Colleague: “Have a low threshold to intubate this patient.”

Me (while eyeballing the electronic chart during handoff): “This patient has a respiratory rate of 30 and marginal oxygen saturations on high-flow nasal cannula. What exactly is the threshold?”

Colleague: “You know, just have a low threshold if he gets worse.”

Me: “These numbers don't look great. Again, what is my threshold?”

Thresholds are familiar to those acquainted with clinical reasoning. When the treatment threshold is breached, we judge a diagnosis sufficiently likely to warrant initiating treatment without additional diagnostic testing. On the other hand, when the probability of a particular condition falls below the test threshold, no further diagnostic testing is indicated. Uses of thresholds outside these settings are utterly unhelpful because, like “indices of suspicion,” a threshold implies something quantifiable. If we cannot define a threshold in concrete terms, we are doing nothing more than reminding our colleagues, “Be careful!”
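To make concrete what a quantifiable threshold could look like, here is a minimal sketch borrowed from the classic threshold model of clinical decision making; the symbols are my own shorthand, not anything offered at that sign-out. If Harm is the expected net cost of intervening on a patient who does not need the intervention, and Benefit is the expected net gain of intervening on one who does, the break-even probability is

$$p_{\text{act}} = \frac{\text{Harm}}{\text{Harm} + \text{Benefit}},$$

and intervention is warranted once the estimated probability that the patient needs it exceeds $p_{\text{act}}$. “Have a low threshold to intubate” specifies no Harm, no Benefit, and no probability; it asks me to lower a number that no one has defined.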

I remember first hearing “until proven otherwise” on my surgical oncology rotation as a third-year medical student. “Patients over the age of 50 with anemia due to occult gastrointestinal blood loss have colon cancer until proven otherwise.”

Now I could justify sending a quinquagenarian with iron deficiency anemia for a colonoscopy, but what should I do if the colonoscopy fails to demonstrate any source of bleeding, let alone cancer? If I followed the surgical oncologist's dictum exactly, every such patient would get a colectomy. Even as a third-year student, I understood such an approach was sheer stupidity.

But assume, for example, that a resident seeing a patient with a suspicious skin lesion is told by the preceptor that it is “skin cancer until proven otherwise.” What should the resident do if a shave biopsy shows no evidence of cancer? Does he or she invoke sampling error and subject the patient to another biopsy? Or does the resident feel comfortable with expectant management when he or she sees the reassuring result?

Keep in mind that the phrase “until proven otherwise” gives undue weight to aggressive diagnostic pursuits without considering the potential risks. When we tell trainees to work up problems ad infinitum, we generate false positives and overdiagnoses. More importantly, we miss opportunities to teach the nuances of clinical reasoning.

I recently treated an elderly man who had been experiencing myalgias and had elevated inflammatory markers. He was referred from the resident clinic to a rheumatologist for polymyalgia rheumatica (PMR), but the specialist disagreed with the tentative diagnosis, pointing out that PMR was a “diagnosis of exclusion.” End of story. And I wondered, “To the exclusion of what?”

To exclude a diagnosis implies there is a prespecified set of possibilities. The most logically sound interpretation of this phrase is that we should systematically rule out other diagnoses before settling on the “diagnosis of exclusion.” However, without other diagnostic anchors, a “diagnosis of exclusion” becomes an endless journey in a sea of possibilities.

I suspect this rheumatologist was suggesting one of the following options: (1) PMR is entirely a clinical diagnosis, and it was excluded by how poorly the patient's symptoms matched the typical pattern; (2) there are various conditions that mimic PMR, which are more easily and reliably evaluated (and thus potentially excluded) through routine diagnostic testing; or (3) there are alternative explanations for each of the patient's isolated symptoms (eg, the muscle aches are from the statin, and the mildly elevated C-reactive protein is due to obesity). Treating physicians should consider these common possibilities before attempting to unify the findings into a less common diagnosis.

So how do we improve on this phrase? I would suggest: “The diagnosis you propose is made on clinical grounds. Carefully consider this limited set of other diagnoses before settling on a convenient but difficult-to-disprove alternative.”

Learning the language of clinical reasoning is imperative to a physician's diagnostic and professional development. Pseudo-probabilistic aphorisms may make us sound smart, but ultimately they may thwart our quest for diagnostic excellence.

1. Elstein AS, Schwartz A. Clinical problem solving and diagnostic decision making: selective review of the cognitive literature. BMJ. 2002;324(7339):729-732.
2. Weissberg D. What is an index of suspicion? J Am Coll Surg. 2007;204(3):520.
3. Ewart JM, Burke ML, Bunt TJ. Spontaneous abdominal aortic infections. Essentials of diagnosis and management. Am Surg. 1983;49(1):37-50.