Several months ago, a neuropathologist at a nearby university medical center reviewed one of my cases before patient treatment. The university neuropathologist's diagnosis differed from my own—oligodendroglioma (World Health Organization [WHO] grade II, university diagnosis) versus anaplastic astrocytoma (WHO grade III, my diagnosis).

My reaction to this news was probably similar to that of other community pathologists who find themselves in a comparable situation: Which diagnosis was correct? Did the difference matter? How often do discrepancies such as this occur? Are outside reviewers prone to change diagnoses simply to demonstrate their worth? A recent Wall Street Journal article1 came to mind: “What if the Doctor Is Wrong?” the headline asked. Was I one of those doctors? I sent the case to an internationally recognized neuropathology expert for adjudication.

In this issue of the Archives, Swapp, Aubry, Salomão, and Cheville2 examine the practice of routinely reviewing all previous extradepartmental (“outside”) pathology diagnoses issued for patients presenting to the Mayo Clinic, Rochester, Minnesota. Among 71 811 cases subjected to review during a 6-year period, Mayo pathologists identified what the study authors considered to be a significant diagnostic discrepancy in 0.6% of cases. Review of a subset of discrepant cases revealed that treatment was altered by the diagnostic change for 90% of patients.

The results of Swapp et al2  are comparable to those of previous reviews, although others have reported somewhat higher rates of disagreement.3,4  Readers should be cautious before generalizing from any retrospective single-institution study. Referral patterns and the diagnostic styles of community and reviewing pathologists are likely to impact discrepancy rates. For example, patients seen at the Mayo Clinic are likely to be wealthier than the average US patient and may have better access to initial care of high quality. Perhaps discrepancy rates are greater when reviewing original pathology diagnoses rendered on the less affluent. We do not know.

The practice of routinely reviewing outside cases is costly. Using the current Medicare global reimbursement for slide review (CPT code 88321) as a proxy for cost, the expense to detect a single discrepancy with an underlying 0.6% error rate is $16 000. Detection cost per case will be much higher if ancillary studies are performed, such as immunoperoxidase or molecular analyses. If an average pathologist can review 2500 outside cases per year and costs her employer $350 000 with benefits and institutional overhead, then the cost to detect a discrepant diagnosis with an underlying 0.6% error rate is $23 000. In the series by Swapp et al,2 treatment was not altered for 10% of patients with discrepant diagnoses, which means that the cost of identifying an error that actually alters treatment is about 11% higher. Administrators and a public fed up with rising health care costs have a right to ask: Is the juice worth the squeeze?
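The arithmetic behind these figures can be sketched in a few lines. This is a back-of-envelope illustration only: the ~$96 per-case CPT 88321 proxy is inferred from the $16 000 figure quoted above and is an assumption, not a number stated in the text.

```python
def cost_per_detection(cost_per_case, discrepancy_rate):
    """Expected spend to surface one discrepant diagnosis:
    per-case review cost divided by the underlying error rate."""
    return cost_per_case / discrepancy_rate

# Medicare CPT 88321 proxy (assumed ~$96/case) at a 0.6% error rate.
print(round(cost_per_detection(96, 0.006)))                # 16000

# Staffing proxy: $350,000 fully loaded salary / 2,500 cases per year.
per_case_staff = 350_000 / 2_500                           # $140 per review
print(round(cost_per_detection(per_case_staff, 0.006)))    # 23333

# Only 90% of discrepancies altered treatment, so the cost per
# treatment-altering detection is 1/0.9, i.e., about 11% higher.
print(round(cost_per_detection(per_case_staff, 0.006) / 0.9))  # 25926
```

The last line shows why the 10% of discrepancies that did not change treatment inflates the effective cost by roughly 11% rather than 10%.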

Many pathology diagnoses set in motion a chain of events that have profound implications for individual patients. A breast is removed (or, instead, the patient is pronounced cancer-free and sent home without surgery). Immunosuppression is induced (or, instead, steroids are withheld and antibiotics are prescribed for a systemic fungal infection). It is difficult to calculate the impact of incorrect pathology diagnoses on patients, institutions, and society. And because the consequences of an incorrect diagnosis are difficult to assess, Swapp et al2  did not estimate the cost-benefit or cost-effectiveness of routinely reviewing outside pathology material. Yet, if we are going to make rational decisions about where to deploy resources in the future, our profession will need to learn how to make these calculations even if our estimates are only approximations.

The report by Swapp et al2 adds an interesting twist to the subject of extradepartmental review. Follow-up biopsy material was available for 86 of the 166 discrepant cases the authors selected for more intensive study. In 85% of these cases, subsequent biopsy confirmed the review diagnosis; in the remaining 15%, another diagnosis (usually the original diagnosis) proved to be correct. The authors are to be commended for being open to the possibility that reviewers' diagnoses are not necessarily right—a possibility that has been inexplicably neglected in other studies. Nevertheless, the authors' findings are unsettling. If this proportion can be generalized to all 71 811 cases in their series, then the routine review of extradepartmental diagnoses resulted in the correction of 366 errant diagnoses (0.5%) and the creation of 65 new, incorrect diagnoses (0.1%). Any cost-benefit analysis of routine extradepartmental review will need to factor reviewer error into the calculus.
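The projection in the preceding paragraph follows directly from the reported rates. A minimal sketch, assuming the 85%/15% split from the follow-up subset applies uniformly to all discrepant cases in the series:

```python
# Project the follow-up findings onto the full 71,811-case series.
total_cases = 71_811
discrepancy_rate = 0.006          # 0.6% significant discrepancies
confirm_rate = 0.85               # follow-up biopsy confirmed review diagnosis

discrepant = round(total_cases * discrepancy_rate)   # ~431 discrepant cases
corrected = round(discrepant * confirm_rate)         # errant diagnoses fixed
new_errors = discrepant - corrected                  # incorrect diagnoses created

print(discrepant, corrected, new_errors)             # 431 366 65
```

Running this reproduces the figures cited above: roughly 366 corrected diagnoses (0.5% of the series) against 65 newly created errors (0.1%).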

The advisability of reviewing outside cases need not be answered with a simple yes or no: an extradepartmental review policy need not be all-or-none. While universal review of outside cases will identify the largest number of incorrect diagnoses, selective review of higher-risk cases will produce the greatest return on investment. More research is required to define criteria that might be used to trigger a review or to exempt a case from review. Does it make sense to forgo review for straightforward diagnoses, certain tissue types, or diagnoses rendered at particular institutions or by particular individuals? Renshaw and Gould5 report that error rates differ by pathologist and that certain tissue types are particularly problematic (brain, breast, and skin). Perhaps a more targeted approach will better balance benefit with cost. And if certain types of outside biopsy reviews are shown to be particularly cost-effective when patients visit new institutions, perhaps those types of review should also be conducted for the large majority of patients who elect to be treated at the same institution in which they were diagnosed.

And what of my case? The expert who reviewed the biopsy I had examined diagnosed a low-grade oligoastrocytoma—a diagnosis that differed from my original interpretation and also (slightly) from the interpretation of the local university neuropathologist. As a result of the outside reviews, the tumor was downgraded from WHO grade III to WHO grade II, the patient's prognosis improved, and radiation therapy, which would normally have been administered immediately, will instead be withheld until symptoms progress. This particular review postponed expensive and burdensome radiotherapy.

The literature seems clear that review of outside diagnoses before treatment will correct some diagnostic errors while creating relatively fewer new errors. What is not yet clear is whether this activity is a good institutional or societal investment in a resource-constrained environment, and whether extradepartmental reviews should be conducted routinely or selectively.

## References

1. Landro L. What if the doctor is wrong: some cancers, asthma, other conditions can be tricky to diagnose, leading to incorrect treatments. Wall Street Journal. January 17, 2012.
2. Swapp RE, Aubry MC, Salomão DR, Cheville JC. Outside case review of surgical pathology for referred patients: the impact on patient care. Arch Pathol Lab Med. 2013;137(2):233–240.
3. Kronz JD, Westra WH, Epstein JI. Mandatory second opinion surgical pathology at a large referral hospital. Cancer. 1999;86(11):2426–2435.
4. Abt AB, Abt LG, Olt GJ. The effect of interinstitution anatomic pathology consultation on patient care. Arch Pathol Lab Med. 1995;119(6):514–517.
5. Renshaw AA, Gould EW. Measuring errors in surgical pathology in real-life practice. Am J Clin Pathol. 2007;127(1):144–152.

## Author notes

The author has no relevant financial interest in the products or companies described in this article.