Context

Additional reviews of surgical pathology and cytology cases have been shown to detect diagnostic discrepancies.

Objective

To develop, through a systematic review of the literature, recommendations for the review of pathology cases to detect or prevent interpretive diagnostic errors.

Design

The College of American Pathologists Pathology and Laboratory Quality Center in association with the Association of Directors of Anatomic and Surgical Pathology convened an expert panel to develop an evidence-based guideline to help define the role of case reviews in surgical pathology and cytology. A literature search was conducted to gather data on the review of cases in surgical pathology and cytology.

Results

The panel drafted 5 recommendations, with strong agreement from open comment period participants ranging from 87% to 93%. The recommendations are: (1) anatomic pathologists should develop procedures for the review of selected pathology cases to detect disagreements and potential interpretive errors; (2) anatomic pathologists should perform case reviews in a timely manner to avoid having a negative impact on patient care; (3) anatomic pathologists should have documented case review procedures that are relevant to their practice setting; (4) anatomic pathologists should continuously monitor and document the results of case reviews; and (5) if pathology case reviews show poor agreement within a defined case type, anatomic pathologists should take steps to improve agreement.

Conclusions

Evidence exists that case reviews detect errors; therefore, the expert panel recommends that anatomic pathologists develop procedures for the review of pathology cases to detect disagreements and potential interpretive errors, in order to improve the quality of patient care.

The test cycle in surgical pathology and cytology is similar to the test cycle of other laboratory tests.1–4 It is composed of the preanalytic, analytic, and postanalytic phases. The preanalytic and postanalytic phases are very similar to tests in the clinical laboratory. The preanalytic phase is composed of specimen acquisition, specimen labeling, and delivery to the laboratory, where the specimen is prepared for the analytic phase. The postanalytic phase begins with report generation and ends with delivery of the report to the clinician.

Unlike the other phases of the test cycle, the analytic phase is substantially different in surgical pathology and cytology (versus clinical pathology) in that it involves the inherent judgment of the pathologist at the time of slide interpretation.5–9 It is therefore more subjective than clinical laboratory tests. There are many factors that contribute to an accurate interpretive diagnosis, including: (1) the pathologist's knowledge and experience, (2) clinical correlation, (3) standardized diagnostic criteria and taxonomy, (4) confirmatory ancillary studies when available, and (5) secondary review of cases.

Studies have shown the additive value of clinical correlation, standardization of diagnostic criteria, and taxonomy and confirmatory ancillary testing to the accuracy of surgical pathology and cytology diagnoses.5,10–15 Several of these factors contribute to establishing a precise diagnosis, but the pathologist's knowledge and experience remain the essential factors in interpretive diagnosis. Although numerous studies have shown that case reviews help detect interpretive diagnostic errors, there have been no efforts to formalize this practice as a strategy to reduce errors. In considering processes occurring in surgical pathology and cytology, targeted case reviews could be an integral component of a quality assurance plan that is aimed proactively at preventing errors before they have a potential adverse impact on patient care.

The College of American Pathologists (CAP) Pathology and Laboratory Quality Center in association with the Association of Directors of Anatomic and Surgical Pathology (ADASP) convened an expert panel to systematically review the literature and develop an evidence-based guideline to help define the role of case reviews in surgical pathology and cytology. We focused on the contribution of case reviews to error detection and prevention of interpretive diagnosis.

PANEL COMPOSITION

The CAP Pathology and Laboratory Quality Center (the Center) and ADASP convened an expert panel consisting of practicing pathologists with expertise and experience in surgical pathology. Members included practicing pathologists in the United States and Canada. The CAP and ADASP approved the appointment of the project cochairs and expert panel members. In addition, a physician-methodologist experienced in systematic review and guideline development consulted with the panel throughout the project.

CONFLICT OF INTEREST POLICY

Prior to acceptance on the expert panel, potential members completed the CAP conflict of interest disclosure process, whose policy and form require disclosure of material financial interest in, or potential for benefit of significant value from, the guideline's development or its recommendations. The potential members completed the conflict of interest disclosure form, listing any relationship that could be interpreted as constituting an actual, potential, or apparent conflict. Potential conflicts were managed by the cochairs. Everyone was required to disclose conflicts prior to beginning and continuously throughout the project's timeline. Disclosed conflicts of the expert panel members are listed in the Appendix. The CAP and ADASP provided funding for the administration of the project; no industry funds were used in the development of the guideline. All panel members volunteered their time and were not compensated for their involvement. Please see the supplemental digital content (SDC) available at www.archivesofpathology.org in the January 2016 table of contents for full details on the conflict of interest policy.

OBJECTIVE

The panel addressed the overarching question, “What are the most effective ways to reduce interpretive diagnostic errors in anatomic pathology?” The key questions that the panel addressed were:

1. Does targeted review (done at either the analytic or the postanalytic phase) of surgical pathology or cytology cases (slides and/or reports) reduce the error rate (often measured as amended reports) or increase the rate of interpretive error detection compared with no review, random review, or usual review procedures?

2. What methods of selecting cases for review have been shown to increase/decrease the rate of interpretive error detection compared with no review, random review, or usual review procedures?

METHODS

A detailed account of the methods used to create this guideline can be found in the SDC, including additional scope questions.

Systematic Literature Review and Analysis

A systematic literature search was completed for relevant evidence in MEDLINE using both OvidSP and PubMed (January 1, 1992, to October 31, 2012). The search strategy included medical subject headings and text words to capture the general concepts of pathology and quality (eg, pathology, surgical; pathology, clinical; pathology and quality improvement; quality assurance, health care; quality control; reproducibility of results), and a targeted concept of slide/case review. MEDLINE searches were supplemented with a search of Google Scholar, a search for meeting abstracts (2008–2012) using both Biosis Previews and hand searching, and a focused hand search of identified pathology journals (2008–2012). An update of the OvidSP search was conducted through October 2013. All searches were limited to human studies published in English. Reference lists of included articles were also reviewed for relevant reports. Detailed information regarding the literature search strategy can be found in the SDC.

Eligible Study Designs

All study designs were included in the initial literature search. In addition to journal articles, the search identified monographs and meeting abstracts. During evidence review, articles that did not present new evidence were excluded, including letters, commentaries, and editorials.

Inclusion Criteria

Published studies were selected for full-text review if they met each of the following criteria:

1. English-language articles/documents that addressed surgical pathology or cytology studies and provided data or information relevant to one or more key questions; and

2. Original research addressing pathology case reviews.

Exclusion Criteria

Editorials, letters, commentaries, and invited opinions were not included in the study. Articles were also excluded if the full article was not available in English, did not address any key question, and/or focused primarily on clinical pathology studies, including all other specialties except radiology. Articles were also excluded if they focused on any of the following: preanalytic specimen processes, noninterpretive postanalytic processes, additional diagnostic techniques, issues related to competency, use of checklists, standardized language, taxonomy, or formatting.

Outcomes of Interest

The panel assessed studies that identified discrepancies in interpretation between a primary pathologist review and a second pathologist review as a way of estimating the error rate. To the extent that erroneous readings can be identified in excess of an expected degree of disagreement, a method of targeted review can be said to be effective. Studies with a control group are therefore desirable; as a practical matter, however, it is necessary to examine uncontrolled series as well. Studies had to report the number of discrepant diagnoses among a defined population of examined specimens in order to allow calculation of a discrepancy rate.
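As a concrete illustration of the outcome measure described above, a discrepancy rate is simply the number of discrepant diagnoses divided by the defined population of reviewed specimens; an interval estimate helps convey how much an observed rate from a finite case series could vary. The sketch below is hypothetical (the counts and the function name are invented for illustration, not drawn from any study in this review):

```python
# Hypothetical sketch: computing a discrepancy rate from review counts,
# with a Wilson score 95% interval to convey sampling uncertainty.
# The counts below are invented, not taken from any cited study.
import math

def discrepancy_rate(discrepant: int, reviewed: int, z: float = 1.96):
    """Return the observed discrepancy rate and a Wilson score interval."""
    if reviewed == 0:
        raise ValueError("no cases reviewed")
    p = discrepant / reviewed
    denom = 1 + z**2 / reviewed
    center = (p + z**2 / (2 * reviewed)) / denom
    half = (z / denom) * math.sqrt(
        p * (1 - p) / reviewed + z**2 / (4 * reviewed**2)
    )
    return p, (max(0.0, center - half), min(1.0, center + half))

# Example: 12 discrepant diagnoses among 480 reviewed specimens.
rate, (low, high) = discrepancy_rate(discrepant=12, reviewed=480)
print(f"rate = {rate:.1%}, 95% CI ({low:.1%}, {high:.1%})")
```

The interval matters because small review series can produce rates that look alarmingly high or reassuringly low purely by chance.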

The panel did not assess discrepancies from the preanalytic specimen process (related to tissue collection and processing) or postanalytic errors (eg, typographic or transcription errors, amended reports), additional diagnostic techniques (eg, immunomarkers), issues related to competency, or the use of checklists, standardized language, taxonomy, or formatting.

Various studies classify errors in different ways (eg, major versus minor, clinically significant versus insignificant). Recognizing that all errors are not alike, we assessed the severity of interpretive errors according to the clinical impact on a patient.16 We considered the clinical impact of errors as follows: (1) diagnostic thinking (error results in a change in diagnosis or diagnostic category); (2) therapeutic efficacy (error results in a change in therapeutic choice); or (3) patient outcome efficacy (error results in a change in outcome [eg, procedure avoided]; demonstrating this unequivocally may require long-term follow-up). We also considered the efficiency or cost (in terms of effort or dollars) that a targeted review strategy entails.

Quality Assessment

The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach provides a system for rating quality of evidence and strength of recommendations that is explicit, comprehensive, transparent, and pragmatic, and is increasingly being adopted by organizations worldwide.17 The GRADE approach examines the quality of evidence at the level of individual studies and also at the review level. GRADE was used for rating the quality of evidence. At the individual study level, we assessed studies according to three criteria: (1) study design rating, (2) risk of bias rating, and (3) applicability concerns. For a full description of the assessment, refer to the SDC. A physician-methodologist consultant experienced in systematic review and guideline development rated the quality of each study, constructed evidence tables and summary of findings tables, and, along with the panel, developed quality of evidence ratings. The quality of evidence definitions from GRADE are shown in Table 1.

Table 1. 

Quality of Evidence Ratings in the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) Frameworka


Assessing the Strength of Recommendations

Development of recommendations requires that the panel review the identified evidence and make a series of key judgments (using procedures described in the SDC). CAP uses a three-tier system to rate the strength of recommendations instead of the traditional two-tier approach of strong or weak recommendations. This approach is consistent with prior CAP guidelines (Table 2).

Table 2. 

Strength of Recommendationsa


Guideline Revision

This guideline will be reviewed every 4 years, or earlier in the event of publication of substantive and high-quality evidence that could potentially alter the original guideline recommendations. If necessary, the entire panel will reconvene to discuss potential changes. When appropriate, the panel will recommend revision of the guideline to CAP and ADASP for review and approval.

Disclaimer

The CAP developed the Pathology and Laboratory Quality Center as a forum to create and maintain evidence-based practice guidelines and consensus statements. Practice guidelines and consensus statements reflect the best available evidence and expert consensus supported in practice. They are intended to assist physicians and patients in clinical decision-making and to identify questions and settings for further research. With the rapid flow of scientific information, new evidence may emerge between the time a practice guideline or consensus statement is developed and when it is published or read. Guidelines and statements are not continually updated and may not reflect the most recent evidence. Guidelines and statements address only the topics specifically identified therein and are not applicable to other interventions, diseases, or stages of diseases. Furthermore, guidelines and consensus statements cannot account for individual variation among patients and cannot be considered inclusive of all proper methods of care or exclusive of other treatments. It is the responsibility of the treating physician or other health care provider, relying on independent experience and knowledge, to determine the best course of treatment for the patient. Accordingly, adherence to any practice guideline or consensus statement is voluntary, with the ultimate determination regarding its application to be made by the physician in light of each patient's individual circumstances and preferences. CAP and ADASP make no warranty, express or implied, regarding guidelines and statements, and specifically exclude any warranties of merchantability and fitness for a particular use or purpose. CAP and ADASP assume no responsibility for any injury or damage to persons or property arising out of or related to any use of this statement or for any errors or omissions.

RESULTS

A total of 823 studies met the search term requirements, of which 137 articles were included for data extraction. Excluded articles were retained as discussion or background references. The panel convened 26 times (25 by teleconference and 1 face-to-face meeting) to develop the scope, draft recommendations, review and respond to solicited feedback, and assess the quality of evidence that supports the final recommendations. The panel employed a nominal group technique for consensus decision-making to encourage unique input with balanced participation among group members. An open comment period was held from December 2, 2013, through January 21, 2014, during which draft recommendations were posted on the CAP Web site. Five recommendations were drafted, with strong agreement for each recommendation from the open comment period participants ranging from 87% to 93% (refer to Outcomes in the SDC for full details). The expert panel modified the draft recommendations based on the feedback given during the considered judgment process. An independent review panel, masked to the expert panel and vetted through the conflict of interest process, then provided final review of the manuscript and recommended it for approval by the CAP.

All articles discussed the review of cases by one or more additional pathologists. Table 3 shows a summary of the studies that demonstrated a discrepancy rate. The studies could be grouped by service (surgical pathology versus cytology), organ system, and type of review (internal versus external). Most studies expressed a discrepancy rate; a smaller percentage conveyed a major or significant discrepancy rate. In our review, 81 different definitions of major or significant diagnostic discrepancies were used. These definitions sort into three general categories of major or significant discrepancy: (1) differences with demonstrated impact on patient care, (2) differences with potential impact on patient care, and (3) differences that indicate substantial diagnostic changes (ie, benign versus malignant or positive versus negative diagnoses), without regard to actual or potential clinical impact. The final recommendations are summarized in Table 4.

Table 3. 

Summary of Studies That Express a Discrepancy or Major Discrepancy Rate

Table 4. 

Guideline Statements and Strength of Recommendation


GUIDELINE STATEMENTS

1. Recommendation

—Anatomic pathologists should develop procedures for the review of pathology cases to detect disagreements and potential interpretive errors, and to improve patient care.

The literature review demonstrated that a review of cases detects discrepancies and errors, and furthermore, that discrepancy or error rates fall in a range that is clinically important. The evidence was inadequate to demonstrate a direct impact on patient safety, because few studies reported the clinical impact on patient outcomes that resulted from interpretive errors. The overall GRADE quality of evidence was low, but because of the consistent findings, across a large number of studies, of clinically important major discrepancy rates, and the significant impact that a diagnostic error may be expected to have on an affected individual, the panel graded this guideline statement as a CAP “recommendation.” Because the reported methods and rates vary considerably, the evidence was insufficient to determine a precise prevailing rate of discrepancy or error.

Numerous studies have demonstrated that the review of cases by a second pathologist detects disagreements and potential errors. Significant error rates range from 0.1% to more than 10%, depending on the method of review and the type of cases. The manner and extent of case reviews are outlined in the following recommendations, but they should be tailored to the needs of the individual laboratory. Ideally, the goal is to enhance collaborative diagnostic teamwork and reduce errors.

Data extraction was performed on 137 articles that described some type of surgical pathology and/or cytology case review. A subset of those articles is summarized in Table 3. Every article that underwent data extraction found that review of cases detected errors or discrepancies in diagnosis. Some articles defined errors as “significant” or “major” (see the definitions in the “Outcomes of Interest” section above). Although such discrepancies represented a smaller proportion of cases, every study found that they were detected by case review, leading to the conclusion that case reviews have the potential to reduce errors and improve patient care. One program, Cancer Care Ontario, has undertaken a similar analysis of the literature and recommends selective review of oncologic cases.18

Table 5. 

Studies That Offer a Comparison of Prospective Versus Retrospective Review of Surgical Pathology and Cytology Cases


2. Recommendation

—Anatomic pathologists should perform case reviews in a timely manner to have a positive impact on patient care.

The literature review found 4 moderate-quality comparative studies showing that prospective reviews (before sign-out), compared with retrospective reviews (after sign-out), can reduce disagreement/major disagreement rates and amended report rates (Table 5). The evidence was inadequate to demonstrate a direct impact on patient safety because few studies reported patient outcomes that resulted from interpretive errors. The GRADE quality of evidence is low, but because of the consistent findings in these 4 studies and the absence of contradictory studies, the panel graded this guideline statement as a CAP “recommendation.”

Secondary reviews of surgical pathology and cytology cases should be performed in a timely manner to ensure appropriate treatment decisions and patient care. Ideally, prospective reviews are encouraged if possible, and resources should be directed to the timely review of cases prior to rendering diagnoses. Nevertheless, in some circumstances retrospective reviews may also be useful—such as when prospective reviews are not possible because of various laboratory limitations and constraints usually related to turnaround time requirements—but it is expected that these reviews also occur in a timely manner. In addition, there are specific settings—for example, clinical correlation conferences and correlating cytology/biopsy cases with excision specimens—where retrospective secondary reviews may be a standard process. The preference for prospective review stated above should not be interpreted as a justification for no longer performing these retrospective reviews.

If review of cases is to have an impact on patient care, then detection of errors must occur prior to definitive treatment. Detection of errors at this point also prevents unnecessary and potentially harmful therapies. The ideal time for review of cases is before cases are signed out. This is best for several reasons: (1) the correct treatment plan may be constructed based on the reviewed diagnosis, (2) this minimizes any rework or amendment of reports, (3) this minimizes any potential confusion about the diagnosis, and (4) this builds confidence and trust in the laboratory system. Some articles demonstrated that review of cases before sign-out is associated with a decrease in amended reports within the same laboratory, particularly amended reports for diagnostic edits (Table 5).19–22 An additional article demonstrates an association of lower amended report rates in laboratories that perform prospective reviews versus laboratories that perform retrospective reviews.23

The definition of timely can be significantly affected by communication with the clinician. For example, when cases are sent out of the laboratory for second opinion, these second opinions may take days or even weeks to come back. If the clinician is notified and is deferring treatment until the diagnosis becomes available, the consultation is still considered timely. In a similar manner, in any case where the second in-house review may take any significant length of time to perform, clinician notification of this process would still be considered a timely review.

On the other hand, review of cases after sign-out is well established and accepted. Examples include review of cases for many types of multidisciplinary conferences. These reviews tend to be incorporated into the normal workflow for case management and with the treatment decisions frequently deferred to and based on the multidisciplinary discussions at these conferences. Although these reviews may lead to amended reports and some rework, they tend to be timely and lead to appropriate patient care.

Review of cases temporally distant from when the case was originally signed out has the potential to be devastating for the patient as well as the pathologist. Identifying an error months or years after the case is finalized may not help the patient, since it is likely the patient has already been managed based on the original diagnosis. Alternatively, it may lead to the realization of a diagnosis not previously known, possibly resulting in delay of appropriate therapy with the potential for a more advanced stage of disease.

3. Expert Consensus Opinion

—Anatomic pathologists should have documented case review procedures that are relevant to their practice setting.

The quality of evidence was low in support of using case review procedures compared with no case review procedures, and in support of targeted reviews versus random case reviews; the evidence was very low with regard to distinguishing among different methods of review. The overall GRADE quality of evidence was very low, leading the panel to rate this guideline statement with the CAP strength of recommendation of “expert consensus opinion.”

A wide variety of review processes have been reported in the literature, with varying levels of error detection of interpretive diagnoses. Depending on the method, reviews may also affect turnaround time, increase workload, and add expense. The ideal method may depend on the practice setting, which may range from a small general community practice to a highly subspecialized academic practice. Therefore, the review processes should be tailored to the type of practice setting in order to maximize error detection while minimizing negative impacts. Targeted review, general review, percentage of cases reviewed, blinded review, review of cases with known high rates of missed lesions, and other methods should be considered to design a system best suited for individual practices. The laboratory medical director is responsible for determining the policy.

Departments of pathology should formalize their review processes and include any and all review procedures in their quality assurance plan for surgical pathology and cytology. A plan to conduct and document review of cases may be the last opportunity to identify a diagnostic error. Although many types of reviews have been reported, departments should routinely document when departmental reviews occur. These reviews may include but are not limited to: (1) review of a selected percentage of cases, (2) review of selected types of diagnoses, (3) review of a selected organ system or specimen type, (4) review of random cases, (5) review of cases for multidisciplinary conferences, (6) in-house cases sent outside for review, (7) review of cases during cytology-histology correlation, (8) review of cases in a consensus conference, and (9) review of cases for any other reason.

Important principles to consider when reviewing cases include: (1) The reviewing pathologist should independently formulate opinions without influence from others. Some have suggested that blinded reviews are optimal, but this has not clearly been demonstrated to be necessary.24–26 (2) The reviewing pathologist ideally should have sufficient expertise in the material he or she is reviewing. (3) Case reviews performed prior to sign-out could be used to build collaborative teamwork and are excellent opportunities for pathologists to learn and improve their skills. (4) Review strategies should include negative cases because many errors are false negatives.27,28 (5) Targeted review of selected organs or diseases leads to detection of more errors compared with review of cases randomly.29

When review of cases occurs, the actual review should be documented in some fashion. When cases are reviewed before sign-out, many departments choose to document the review within the body of the report. Reviews may also be documented in a separate quality assurance log, either on paper or electronically. Another option is to have reviews be documented as addenda to the report. Case logs of multidisciplinary conferences or other types of conferences may also be included with the quality assurance documentation of reviews.

Defining adequate review policies may be difficult in specific settings. First, one of the most problematic settings to define adequate review policies is that of the solo pathologist, where there is no other pathologist in-house with whom to share the case. Essentially all of the published data involve a second pathologist reviewing the work of the initial pathologist. Although there may be value in the practice, there are few or no data available concerning a single pathologist reviewing his or her own work a second time. At a minimum, an adequate policy in this setting might consist of documenting review of cases for clinical conferences and all cases that are reviewed at an outside laboratory.

Second, for small pathology groups with more than one pathologist, although in-house review of cases by a second pathologist is encouraged, in this setting a review strategy similar to that for a solo practitioner may also suffice. Third, in some tertiary care settings, very few cases may be sent to an outside laboratory for a second opinion. However, many, if not all, of these tertiary settings have large subspecialized pathology groups. In this setting, documentation of cases reviewed at clinical conferences and cases that are shown within and between these subspecialty groups might qualify as the minimum review strategy.

4. Expert Consensus Opinion

—Anatomic pathologists should continuously monitor and document the results of case review.

Based on agreement studies, the quality of evidence was low for the finding that interobserver agreement is poor for several defined diagnoses and/or organ systems. The panel's literature review found no studies that directly related continuous monitoring to diagnostic agreement or improvement. The GRADE quality of evidence was very low, leading the panel to rate this guideline statement with the CAP strength of recommendation of “expert consensus opinion.” However, based on traditional quality assurance principles, it is our expert consensus opinion that monitoring is necessary to maintain confidence in the quality of diagnoses.

Once the review design is established, the review procedure should be adhered to and monitored to ensure that the program is functioning as intended and that all anatomic pathologists are compliant. This may be assessed by methods such as monitoring overall rates of case review pre–sign-out or post–sign-out, monitoring amended/revised report rates, monitoring minor/major discrepancies, leveraging laboratory information system software to include review status in the permanent electronic records of cases, and/or making the review process and results part of ongoing professional practice evaluation.

In addition, the results of the reviews should be assessed for areas to investigate and evaluate additional questions about the practice, such as elucidating previously unidentified areas where local variability exists, identifying specific case types where additional internal or external expertise might be of value, recognizing individual practitioners who may need to acquire additional proficiency, and detecting systems and process issues that contribute to diagnostic disagreements and potential errors.

In monitoring and documenting the results of the review process, individual practices should operate with the knowledge that some diagnoses have a known reputation for high interobserver variability, and that for such cases “disagreements” between observers may not indicate an interpretive error, per se. Examples of such diagnoses include, but are not limited to, epithelial dysplasia in inflammatory bowel disease and in the setting of Barrett metaplasia of the esophagus, reactive versus squamous intraepithelial lesions in the uterine cervix, and Gleason grading of prostatic adenocarcinoma (Table 6).

Table 6. A Sampling of Studies That Demonstrate Low, Intermediate, and High Levels of Diagnostic Agreement

Laboratories should develop written procedures and record the results of their intradepartmental review studies. A simple method to document the department's effort in review is to measure the percentage (or rate) of cases reviewed by a second pathologist prior to sign-out. A recent study has documented that departments with review policies reexamine cases at a rate of about 10%.30 The total percentage of cases that are reviewed before and after sign-out is not known.

If a department implements a policy of reviewing specific types of cases, then periodic audits of that case type may be indicated. Departments also commonly track amended reports with diagnostic edits by the sign-out pathologist, or cases that have undergone external diagnostic review after sign-out; the number and percentage of cases with minor or major diagnostic disagreements may be tabulated for the department and for individual practitioners. In some cases, it may not be entirely clear to pathologists whether a difference in opinion should result in an amendment or an addendum to the report. Should this approach be chosen, the distinction between amendments and addenda used by the practice must be explicit, and participating pathologists must apply it consistently. Additional methods to assess diagnostic proficiency, such as frozen section–paraffin section correlation and cytology–follow-up tissue biopsy correlation, may also be incorporated into an overall quality improvement program.
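The review-rate and discrepancy-rate monitoring described above amounts to simple tabulation. As a minimal sketch (the case-log structure, field names, and data here are hypothetical illustrations, not part of the guideline), a practice's metrics could be computed as follows:

```python
from collections import Counter

# Hypothetical case log: each record notes whether a second pathologist
# reviewed the case before sign-out and the category of any discrepancy
# found (None, "minor", or "major"), mirroring the distinctions in the text.
cases = [
    {"reviewed_pre_signout": True,  "discrepancy": None},
    {"reviewed_pre_signout": True,  "discrepancy": "minor"},
    {"reviewed_pre_signout": False, "discrepancy": None},
    {"reviewed_pre_signout": True,  "discrepancy": "major"},
    {"reviewed_pre_signout": False, "discrepancy": None},
]

def review_metrics(cases):
    """Return the pre-sign-out review rate and, among reviewed cases,
    the rate of each discrepancy category."""
    reviewed = [c for c in cases if c["reviewed_pre_signout"]]
    counts = Counter(c["discrepancy"] for c in reviewed
                     if c["discrepancy"] is not None)
    review_rate = len(reviewed) / len(cases) if cases else 0.0
    discrepancy_rates = {cat: n / len(reviewed) for cat, n in counts.items()}
    return review_rate, discrepancy_rates

rate, disc = review_metrics(cases)
print(f"Pre-sign-out review rate: {rate:.0%}")  # 3 of 5 cases -> 60%
for category in sorted(disc):
    print(f"{category} discrepancy rate among reviewed cases: {disc[category]:.0%}")
```

Tabulating the same counts per pathologist, rather than per department, would support the individual-practitioner monitoring the text describes.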

Other methods of documenting diagnostic proficiency are also acceptable.

5. Expert Consensus Opinion

—If pathology case reviews show poor agreement within a defined case type, anatomic pathologists should take steps to improve agreement.

Evidence regarding the best methods to improve agreement in areas where it is poor was lacking: studies on methods to improve agreement either were not found in our literature search or fell outside its scope. Therefore, the GRADE quality of evidence was not assessed, leading the panel to rate this guideline statement with the CAP strength of recommendation of “expert consensus opinion.” The best approaches likely differ based on features of the disease, individual practice patterns, and available ancillary diagnostic tests.

The causes of poor agreement within and among anatomic pathology groups vary. Two external factors should be taken into account when assessing interobserver disagreement. First, some diagnoses have inherently higher interobserver variation than others, and these differences should be acknowledged (Table 6). Second, pathologic diagnoses are dynamic and terminology changes over time; the use of different designations for the same entity should be recognized and remedied as a problem distinct from interobserver variation itself. If poor interobserver agreement is discovered within a practice, particularly disagreement exceeding published norms, then the practice members should use specific improvement methods (eg, consensus conferences, calibration slide sets) to improve consensus.


Pathologists' experience, expertise, and abilities vary with respect to some lesions with demonstrated high rates of diagnostic discrepancy. Examples include the assessment of thyroid lesions by fine-needle aspiration cytology and the assessment of esophageal dysplasia in the setting of Barrett esophagus. The use of standardized diagnostic criteria is a powerful approach to reducing diagnostic variation.31–33 Methods shown to reduce interobserver disagreement include intradepartmental consensus conferences with the acceptance and use of uniform diagnostic criteria; calibration slide sets have also been used. Reinforcement of agreed-upon criteria works well when combined with review and discussion of problematic cases. Disagreement due to other factors, including but not limited to lack of familiarity with certain conditions and recent changes in diagnostic criteria or procedures, should also be addressed.

LIMITATIONS

Because secondary review of cases detects and corrects errors, it is natural to ask whether these data can be used to measure quality in an anatomic pathology laboratory. Such a measure would be of tremendous interest to pathologists, clinicians, employers, insurers, and patients. At present, however, it is not clear how best to interpret the results of these reviews, and they should not be used to compare the quality of two different pathology laboratories. The reasons for this include but are not limited to the following:

  1. The source of errors in different laboratories may differ.

  2. The definition of error in different laboratories may differ.

  3. The clinical significance of errors may differ.

  4. The methods used to detect errors may differ.

  5. The sensitivity of the review method is not controlled for and is unknown.

  6. Expected ranges of performance are not well defined.

At present, there is supporting evidence to state that:

  1. Second reviews successfully detect and reduce errors.

  2. Groups that perform second reviews have a lower error rate than they would without them.

  3. Groups that perform second reviews have a measure of quality that may be of use within the group.

  4. Groups that perform second reviews and fail to detect significant errors (<1 per 1000 cases) may have a problem with the sensitivity of their second reviews.
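The threshold in the last point reduces to a simple rate comparison. As a hypothetical sketch (the function, the threshold default, and the sample numbers are illustrative only, not prescribed by the guideline), a group could flag a possibly insensitive review process like this:

```python
def review_sensitivity_flag(errors_detected, cases_reviewed, threshold=1 / 1000):
    """Return (flagged, rate). flagged is True when the detected-error rate
    falls below the threshold (default 1 per 1000 cases, per the text above),
    suggesting the second-review process itself may lack sensitivity."""
    if cases_reviewed <= 0:
        raise ValueError("cases_reviewed must be positive")
    rate = errors_detected / cases_reviewed
    return rate < threshold, rate

# 2 significant errors detected across 5000 reviewed cases: 0.0004 < 0.001,
# so the sensitivity of the review process is called into question.
flagged, rate = review_sensitivity_flag(errors_detected=2, cases_reviewed=5000)
print(flagged, rate)
```

A flag raised by such a check would prompt scrutiny of the review method itself, not a conclusion that the laboratory is error-free.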

Finally, to measure or compare quality between groups, the following areas need further investigation:

  1. Identify the optimal review methods that are applicable across many different groups.

  2. Identify methods to measure and ensure the sensitivity of the second review.

  3. Standardize criteria for review methods, definitions of errors, and optimal results.

  4. Define expected ranges of performance.

  5. Define methods to verify performance that falls outside the expected range.

CONCLUSION

We recommend that surgical pathology and cytology laboratories adopt a system of timely secondary case reviews that is suited to their practice and helps detect or prevent diagnostic interpretive errors.

For additional questions and comments, contact the Pathology and Laboratory Quality Center at center@cap.org.

References

1. Nakhleh RE, Fitzgibbons PL, eds. Quality Management in Anatomic Pathology: Promoting Patient Safety Through Systems Improvement and Error Reduction. Northfield, IL: College of American Pathologists; 2005.
2. Valenstein P. Quality Management in Clinical Laboratories: Promoting Patient Safety Through Risk Reduction and Continuous Improvement. Northfield, IL: College of American Pathologists; 2005.
3. Wagar EA, Horowitz RE, Siegal GP, eds. Laboratory Administration for Pathologists. Northfield, IL: College of American Pathologists; 2011.
4. Howanitz PJ. Errors in laboratory medicine: practical lessons to improve patient safety. Arch Pathol Lab Med. 2005;129(10):1252–1261.
5. Meier FA, Varney RC, Zarbo RJ. Study of amended reports to evaluate and improve surgical pathology processes. Adv Anat Pathol. 2011;18(5):406–413.
6. Nakhleh RE. Diagnostic error in surgical pathology. Diagn Histopathol. 2013;19(12):433–437.
7. Renshaw AA, Gould EW. Increasing agreement over time in interlaboratory anatomic pathology consultation material. Am J Clin Pathol. 2013;140(2):215–218.
8. Raab SS, Meier FA, Zarbo RJ, et al. The “Big Dog” effect: variability assessing the causes of error in diagnoses of patients with lung cancer. J Clin Oncol. 2006;24(18):2808–2814.
9. Raab SS, Nakhleh RE, Ruby SG. Patient safety in anatomic pathology: measuring discrepancy frequencies and causes. Arch Pathol Lab Med. 2005;129(4):459–466.
10. Al-Maghrabi JA, Sayadi HH. The importance of second opinion in surgical pathology referral material of lymphoma. Saudi Med J. 2012;33(4):399–405.
11. Arbiser ZK, Folpe AL, Weiss SW. Consultative (expert) second opinions in soft tissue pathology: analysis of problem-prone diagnostic situations. Am J Clin Pathol. 2001;116(4):473–476.
12. Coffin CS, Burak KW, Hart J, Gao ZH. The impact of pathologist experience on liver transplant biopsy interpretation. Mod Pathol. 2006;19(6):832–838.
13. Rakovitch E, Mihai A, Pignol JP, et al. Is expert breast pathology assessment necessary for the management of ductal carcinoma in situ? Breast Cancer Res Treat. 2004;87(3):265–272.
14. Swapp RE, Aubry MC, Salomao DR, Cheville JC. Outside case review of surgical pathology for referred patients: the impact on patient care. Arch Pathol Lab Med. 2013;137(2):233–240.
15. Ray-Coquard I, Montesco MC, Coindre JM, et al. Sarcoma: concordance between initial diagnosis and centralized expert review in a population-based study within three European regions. Ann Oncol. 2012;23(9):2442–2449.
16. Fryback DG, Thornbury JR. The efficacy of diagnostic imaging. Med Decis Making. 1991;11(2):88–94.
17. Guyatt GH, Oxman AD, Vist GE, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650):924–926.
18. Cancer Care Ontario. Pathology and laboratory medicine evidence-based series (EBS). Cancer Care Ontario Web site; 2014.
19. Lind AC, Bewtra C, Healy JC, Sims KL. Prospective peer review in surgical pathology. Am J Clin Pathol. 1995;104(5):560–566.
20. Novis D. Routine review of surgical pathology cases as a method by which to reduce diagnostic errors in a community hospital. Pathol Case Rev. 2005;10(2):63–67.
21. Owens SR, Wiehagen LT, Kelly SM, et al. Initial experience with a novel pre-sign-out quality assurance tool for review of random surgical pathology diagnoses in a subspecialty-based university practice. Am J Surg Pathol. 2010;34(9):1319–1323.
22. Renshaw AA, Gould EW. Measuring the value of review of pathology material by a second pathologist. Am J Clin Pathol. 2006;125(5):737–739.
23. Nakhleh RE, Zarbo RJ. Amended reports in surgical pathology and implications for diagnostic error detection and avoidance: a College of American Pathologists Q-Probes study of 1,667,547 accessioned cases in 359 laboratories. Arch Pathol Lab Med. 1998;122(4):303–309.
24. Raab SS, Stone CH, Jensen CS, et al. Double slide viewing as a cytology quality improvement initiative. Am J Clin Pathol. 2006;125(4):526–533.
25. Renshaw AA, Cartagena N, Granter SR, Gould EW. Agreement and error rates using blinded review to evaluate surgical pathology of biopsy material. Am J Clin Pathol. 2003;119(6):797–800.
26. Renshaw AA, Gould EW. Comparison of disagreement and amendment rates by tissue type and diagnosis: identifying cases for directed blinded review. Am J Clin Pathol. 2006;126(5):736–739.
27. Troxel DB. Medicolegal aspects of error in pathology. Arch Pathol Lab Med. 2006;130(5):617–619.
28. Kornstein MJ, Byrne SP. The medicolegal aspect of error in pathology: a search of jury verdicts and settlements. Arch Pathol Lab Med. 2007;131(4):615–618.
29. Raab SS, Grzybicki DM, Mahood LK, Parwani AV, Kuan SF, Rao UN. Effectiveness of random and focused review in detecting surgical pathology error. Am J Clin Pathol. 2008;130(6):905–912.
30. Nakhleh RE, Bekeris LG, Souers RJ, Meier FA, Tworek JA. Surgical pathology case reviews before sign-out: a College of American Pathologists Q-Probes study of 45 laboratories. Arch Pathol Lab Med. 2010;134(5):740–743.
31. The Bethesda System for reporting cervical/vaginal cytologic diagnoses: report of the 1991 Bethesda Workshop. Am J Surg Pathol. 1992;16(9):914–916.
32. Cibas ES, Ali SZ. The Bethesda System for reporting thyroid cytopathology. Am J Clin Pathol. 2009;132(5):658–665.
33. Reid BJ, Haggitt RC, Rubin CE, et al. Observer variation in the diagnosis of dysplasia in Barrett's esophagus. Hum Pathol. 1988;19(2):166–178.
34. Teutsch SM, Bradley LA, Palomaki GE, et al. The evaluation of genomic applications in practice and prevention (EGAPP) initiative: methods of the EGAPP Working Group. Genet Med. 2009;11(1):3–14.
35. Abt AB, Abt LG, Olt GJ. The effect of interinstitution anatomic pathology consultation on patient care. Arch Pathol Lab Med. 1995;119(6):514–517.
36. Agarwal S, Wadhwa N. Revisiting old slides–how worthwhile is it? Pathol Res Pract. 2010;206(6):368–371.
37. Allsbrook WCJ, Mangold KA, Johnson MH, Lane RB, Lane CG, Epstein JI. Interobserver reproducibility of Gleason grading of prostatic carcinoma: general pathologist. Hum Pathol. 2001;32(1):81–88.
38. Ascoli V, Bosco D, Carnovale Scalzo C. Cytologic re-evaluation of negative effusions from patients with malignant mesothelioma. Pathologica. 2011;103(6):318–324.
39. Berney DM, Fisher G, Kattan MW, et al. Pitfalls in the diagnosis of prostatic cancer: retrospective review of 1791 cases with clinical outcome. Histopathology. 2007;51(4):452–457.
40. Boiko PE, Piepkorn MW. Reliability of skin biopsy pathology. J Am Board Fam Pract. 1994;7(5):371–374.
41. Castanon A, Ferryman S, Patnick J, Sasieni P. Review of cytology and histopathology as part of the NHS Cervical Screening Programme audit of invasive cervical cancers. Cytopathology. 2012;23(1):13–22.
42. Clary KM, Silverman JF, Liu Y, et al. Cytohistologic discrepancies: a means to improve pathology practice and patient outcomes. Am J Clin Pathol. 2002;117(4):567–573.
43. Coblentz TR, Mills SE, Theodorescu D. Impact of second opinion pathology in the definitive management of patients with bladder carcinoma. Cancer. 2001;91(7):1284–1290.
44. Cook IS, McCormick D, Poller DN. Referrals for second opinion in surgical pathology: implications for management of cancer patients in the UK. Eur J Surg Oncol. 2001;27(6):589–594.
45. Golfier F, Clerc J, Hajri T, et al. Contribution of referent pathologists to the quality of trophoblastic diseases diagnosis. Hum Reprod. 2011;26(10):2651–2657.
46. Jacques SM, Qureshi F, Munkarah A, Lawrence WD. Interinstitutional surgical pathology review in gynecologic oncology: II. Endometrial cancer in hysterectomy specimens. Int J Gynecol Pathol. 1998;17(1):42–45.
47. Jones K, Jordan RC. Patterns of second-opinion diagnosis in oral and maxillofacial pathology. Oral Surg Oral Med Oral Pathol Oral Radiol Endod. 2010;109(6):865–869.
48. Kennecke HF, Speers CH, Ennis CA, Gelmon K, Olivotto IA, Hayes M. Impact of routine pathology review on treatment for node-negative breast cancer. J Clin Oncol. 2012;30(18):2227–2231.
49. Kwon JS, Francis JA, Qiu F, Weir MM, Ettler HC. When is a pathology review indicated in endometrial cancer? Obstet Gynecol. 2007;110(6):1224–1230.
50. Layfield LJ, Jones C, Rowe L, Gopez EV. Institutional review of outside cytology materials: a retrospective analysis of two institutions' experiences. Diagn Cytopathol. 2002;26(1):45–48.
51. Lester JF, Dojcinov SD, Attanoos RL, et al. The clinical impact of expert pathological review on lymphoma management: a regional experience. Br J Haematol. 2003;123(3):463–468.
52. Lueck N, Jensen C, Cohen MB, Weydert JA. Mandatory second opinion in cytopathology. Cancer. 2009;117(2):82–91.
53. Lytwyn A, Salit IE, Raboud J, et al. Interobserver agreement in the interpretation of anal intraepithelial neoplasia. Cancer. 2005;103(7):1447–1456.
54. Manion E, Cohen MB, Weydert J. Mandatory second opinion in surgical pathology referral material: clinical consequences of major disagreements. Am J Surg Pathol. 2008;32(5):732–737.
55. Matasar MJ, Shi W, Silberstien J, et al. Expert second-opinion pathology review of lymphoma in the era of the World Health Organization classification. Ann Oncol. 2012;23(1):159–166.
56. McBroom HM, Ramsay AD. The clinicopathological meeting: a means of auditing diagnostic performance. Am J Surg Pathol. 1993;17(1):75–80.
57. McGinnis KS, Lessin SR, Elder DE, et al. Pathology review of cases presenting to a multidisciplinary pigmented lesion clinic. Arch Dermatol. 2002;138(5):617–621.
58. Murali R, Hughes MT, Fitzgerald P, Thompson JF, Scolyer RA. Interobserver variation in the histopathologic reporting of key prognostic parameters, particularly Clark level, affects pathologic staging of primary cutaneous melanoma. Ann Surg. 2009;249(4):641–647.
59. Owens SR, Dhir R, Yousem SA, et al. The development and testing of a laboratory information system-driven tool for pre-sign-out quality assurance of random surgical pathology reports. Am J Clin Pathol. 2010;133(6):836–841.
60. Pinto Sanchez MI, Smecuol E, Vazquez H, Mazure R, Maurino E, Bai JC. Very high rate of misdiagnosis of celiac disease in clinical practice. Acta Gastroenterol Latinoam. 2009;39(4):250–253.
61. Pongpruttipan T, Sitthinamsuwan P, Rungkaew P, Ruangchira-urai R, Vongjirad A, Sukpanichnant S. Pitfalls in classifying lymphomas. J Med Assoc Thai. 2007;90(6):1129–1136.
62. Prayson RA, Agamanolis DP, Cohen ML, et al. Interobserver reproducibility among neuropathologists and surgical pathologists in fibrillary astrocytoma grading. J Neurol Sci. 2000;175(1):33–39.
63. Proctor IE, McNamara C, Rodriguez-Justo M, Isaacson PG, Ramsay A. Importance of expert central review in the diagnosis of lymphoid malignancies in a regional cancer network. J Clin Oncol. 2011;29(11):1431–1435.
64. Qureshi A, Loya A, Azam M, Hussain M, Mushtaq S, Mahmood T. Study of parameters to ensure quality control in histopathology reporting: a meta-analysis at a tertiary care center. Indian J Pathol Microbiol. 2012;55(2):180–182.
65. Raab SS, Grzybicki DM, Janosky JE, et al. Clinical impact and frequency of anatomic pathology errors in cancer diagnoses. Cancer. 2005;104(10):2205–2213.
66. Ramsay AD, Gallagher PJ. Local audit of surgical pathology: 18 months' experience of peer review-based quality assessment in an English teaching hospital. Am J Surg Pathol. 1992;16(5):476–482.
67. Renshaw AA, Gould EW. Correlation of workload with disagreement and amendment rates in surgical pathology and nongynecologic cytology. Am J Clin Pathol. 2006;125(6):820–822.
68. Renshaw AA, Pinnar NE, Jiroutek MR, Young ML. Blinded review as a method for quality improvement in surgical pathology. Arch Pathol Lab Med. 2002;126(8):961–963.
69. Renshaw AA, Pinnar NE, Jiroutek MR, Young ML. Quantifying the value of in-house consultation in surgical pathology. Am J Clin Pathol. 2002;117(5):751–754.
70. Santoso JT, Coleman RL, Voet RL, Bernstein SG, Lifshitz S, Miller D. Pathology slide review in gynecologic oncology. Obstet Gynecol. 1998;91(5, pt 1):730–734.
71. Sharif MA, Hamdani SN. Second opinion and discrepancy in the diagnosis of soft tissue lesions at surgical pathology. Indian J Pathol Microbiol. 2010;53(3):460–464.
72. Shoo BA, Sagebiel RW, Kashani-Sabet M. Discordance in the histopathologic diagnosis of melanoma at a melanoma referral center. J Am Acad Dermatol. 2010;62(5):751–756.
73. Thomas CW, Bainbridge TC, Thomson TA, McGahan CE, Morris WJ. Clinical impact of second pathology opinion: a longitudinal study of central genitourinary pathology review before prostate brachytherapy. Brachytherapy. 2007;6(2):135–141.
74. Trotter MJ, Bruecks AK. Interpretation of skin biopsies by general pathologists: diagnostic discrepancy rate measured by blinded review. Arch Pathol Lab Med. 2003;127(11):1489–1492.
75. Tsuda H, Akiyama F, Kurosumi M, Sakamoto G, Watanabe T. Monitoring of interobserver agreement in nuclear atypia scoring of node-negative breast carcinomas judged at individual collaborating hospitals in the National Surgical Adjuvant Study of Breast Cancer (NSAS-BC) protocol. Jpn J Clin Oncol. 1999;29(9):413–420.
76. Wayment RO, Bourne A, Kay P, Tarter TH. Second opinion pathology in tertiary care of patients with urologic malignancies. Urol Oncol. 2011;29(2):194–198.
77. Wechsler J, Bastuji-Garin S, Spatz A, et al. Reliability of the histopathologic diagnosis of malignant melanoma in childhood. Arch Dermatol. 2002;138(5):625–628.
78. Westra WH, Kronz JD, Eisele DW. The impact of second opinion surgical pathology on the practice of head and neck surgery: a decade experience at a large referral hospital. Head Neck. 2002;24(7):684–693.
79. Zaino RJ, Kauderer J, Trimble CL, et al. Reproducibility of the diagnosis of atypical endometrial hyperplasia: a Gynecologic Oncology Group study. Cancer. 2006;106(4):804–811.
80. Hahm GK, Niemann TH, Lucas JG, Frankel WL. The value of second opinion in gastrointestinal and liver pathology. Arch Pathol Lab Med. 2001;125(6):736–739.
81. Ahmed Z, Yaqoob N, Muzaffar S, Kayani N, Pervez S, Hasan SH. Diagnostic surgical pathology: the importance of second opinion in a developing country. J Pak Med Assoc. 2004;54(6):306–311.
82. Bomeisl PEJ, Alam S, Wakely PEJ. Interinstitutional consultation in fine-needle aspiration cytopathology: a study of 742 cases. Cancer. 2009;117(4):237–246.
83. Davidov T, Trooskin SZ, Shanker BA, et al. Routine second-opinion cytopathology review of thyroid fine needle aspiration biopsies reduces diagnostic thyroidectomy. Surgery. 2010;148(6):1294–1301.
84. Aldape K, Simmons ML, Davis RL, et al. Discrepancies in diagnoses of neuroepithelial neoplasms: the San Francisco Bay Area Adult Glioma Study. Cancer. 2000;88(10):2342–2349.
85. Bajaj J, Morgenstern N, Sugrue C, Wasserman J, Wasserman P. Clinical impact of second opinion in thyroid fine needle aspiration cytology (FNAC): a study of 922 interinstitutional consultations. Diagn Cytopathol. 2012;40(5):422–429.
86. Baloch ZW, Hendreen S, Gupta PK, et al. Interinstitutional review of thyroid fine-needle aspirations: impact on clinical management of thyroid nodules. Diagn Cytopathol. 2001;25(4):231–234.
87. Bejarano PA, Koehler A, Sherman KE. Second opinion pathology in liver biopsy interpretation. Am J Gastroenterol. 2001;96(11):3158–3164.
88. Bruner JM, Inouye L, Fuller GN, Langford LA. Diagnostic discrepancies and their clinical impact in a neuropathology referral practice. Cancer. 1997;79(4):796–803.
89. Butler ST, Youker SR, Mandrell J, Flanagan KH, Fosko SW. The importance of reviewing pathology specimens before Mohs surgery. Dermatol Surg. 2009;35(3):407–412.
90. Chafe S, Honore L, Pearcey R, Capstick V. An analysis of the impact of pathology review in gynecologic cancer. Int J Radiat Oncol Biol Phys. 2000;48(5):1433–1438.
91. Chan YM, Cheung AN, Cheng DK, Ng TY, Ngan HY, Wong LC. Pathology slide review in gynecologic oncology: routine or selective? Gynecol Oncol. 1999;75(2):267–271.
92. Corley DA, Kubo A, DeBoer J, Rumore GJ. Diagnosing Barrett's esophagus: reliability of clinical and pathologic diagnoses. Gastrointest Endosc. 2009;69(6):1004–1010.
93. Epstein JI, Walsh PC, Sanfilippo F. Clinical and cost impact of second-opinion pathology: review of prostate biopsies prior to radical prostatectomy. Am J Surg Pathol. 1996;20(7):851–857.
94. Jara-Lazaro AR, Thike AA, Tan PH. Diagnostic issues in second opinion consultations in prostate pathology. Pathology. 2010;42(1):6–14.
95. Kamat S, Parwani AV, Khalbuss WE, et al. Use of a laboratory information system driven tool for pre-signout quality assurance of random cytopathology reports. J Pathol Inform. 2011;2:42.
96. Kishimoto R, Saika T, Bekku K, et al. The clinical impact of pathological review on selection the treatment modality for localized prostate cancer in candidates for brachytherapy monotherapy. World J Urol. 2012;30(3):375–378.
97. Kronz JD, Milord R, Wilentz R, Weir EG, Schreiner SR, Epstein JI. Lesions missed on prostate biopsies in cases sent in for consultation. Prostate. 2003;54(4):310–314.
98. Lehnhardt M, Daigeler A, Hauser J, et al. The value of expert second opinion in diagnosis of soft tissue sarcomas. J Surg Oncol. 2008;97(1):40–43.
99. Lurkin A, Ducimetiere F, Vince DR, et al. Epidemiological evaluation of concordance between initial diagnosis and central pathology review in a comprehensive and prospective series of sarcoma patients in the Rhone-Alpes region. BMC Cancer. 2010;10:150.
100. Mellink WA, Henzen-Logmans SC, Bongaerts AH, Ooijen BV, Rodenburg CJ, Wiggers TH. Discrepancy between second and first opinion in surgical oncological patients. Eur J Surg Oncol. 2006;32(1):108–112.
101. Murphy WM, Rivera-Ramirez I, Luciani LG, Wajsman Z. Second opinion of anatomical pathology: a complex issue not easily reduced to matters of right and wrong. J Urol. 2001;165(6, pt 1):1957–1959.
102. Park JH, Kim HK, Kang SW, et al. Second opinion in thyroid fine-needle aspiration biopsy by the Bethesda system. Endocr J. 2012;59(3):205–212.
103. Santillan AA, Messina JL, Marzban SS, Crespo G, Sondak VK, Zager JS. Pathology review of thin melanoma and melanoma in situ in a multidisciplinary melanoma clinic: impact on treatment decisions. J Clin Oncol. 2010;28(3):481–486.
104. Scott CB, Nelson JS, Farnan NC, et al. Central pathology review in clinical trials for patients with malignant glioma: a report of Radiation Therapy Oncology Group 83-02. Cancer. 1995;76(2):307–313.
105. Selman AE, Niemann TH, Fowler JM, Copeland LJ. Quality assurance of second opinion pathology in gynecologic oncology. Obstet Gynecol. 1999;94(2):302–306.
106. Staradub VL, Messenger KA, Hao N, Wiley EL, Morrow M. Changes in breast cancer therapy because of pathology second opinions. Ann Surg Oncol. 2002;9(10):982–987.
107. Vivino FB, Gala I, Hermann GA. Change in final diagnosis on second evaluation of labial minor salivary gland biopsies. J Rheumatol. 2002;29(5):938–944.
108. Weir MM, Jan E, Colgan TJ. Interinstitutional pathology consultations: a reassessment. Am J Clin Pathol. 2003;120(3):405–412.
109. Fraser S, Lanaspre E, Pinto T, Goderya R, Chandra A. Thyroid FNA diagnosis: correlation between referral and review diagnosis in a network MDM. Cytopathology. 2011;22(4):ii.
110. Idowu MO, Jain R, Pedigo MA, Powers CN. Is a second pathologist's review of ASC-H useful in reducing false negative diagnosis. Mod Pathol. 2008;21(suppl 1s):74A.
111. Jing X, Knoepp SM, Roh MH, et al. Group consensus review minimizes the diagnosis of “follicular lesion of undetermined significance” and improves cytohistologic concordance. Diagn Cytopathol. 2012;40(12):1037–1042.
112. Kuroiwa K, Shiraishi T, Naito S. Discrepancy between local and central pathological review for radical prostatectomy specimens. J Urol. 2009;181(4 suppl):58.
113. LaCasce AS, Kho ME, Friedberg JW, et al. Comparison of referring and final pathology for patients with non-Hodgkin's lymphoma in the National Comprehensive Cancer Network. J Clin Oncol. 2008;26(31):5107–5112.
114. Nguyen PL, Schultz D, Renshaw AA, et al. The impact of pathology review on treatment recommendations for patients with adenocarcinoma of the prostate. Urol Oncol. 2004;22(4):295–299.
115. Raab SS, Geisinger EM, Parwani AV, Jensen C, Vrbin CM, Grzybicki DM. Effect of double viewing needle core prostate biopsy tissues on error reduction. Mod Pathol. 2008;21(suppl 1s):358A.
116. Saglam O, Pederson A, Zhang Z, Stone CH, Kini S. Retrospective review and analysis of pancreaticobiliary specimens with discordant cytohistologic correlation. Cancer Cytopathol. 2008;114(s5):421–422.
117. Tatsas A, Herman J, Hruban R, et al. Second opinion in pancreatic cytopathology. Cytojournal. 2011;8(suppl 1):S89.
118. van Dijk MC, Aben KK, van Hees F, et al. Expert review remains important in the histopathological diagnosis of cutaneous melanocytic lesions. Histopathology. 2008;52(2):139–146.
119. van Rhijn BW, van der Kwast TH, Kakiashvili DM, et al. Pathological stage review is indicated in primary pT1 bladder cancer. BJU Int. 2010;106(2):206–211.
120. Whitehead ME, Fitzwater JE, Lindley SK, Kern SB, Ulirsch RC, Winecoff WFI. Quality assurance of histopathologic diagnoses: a prospective audit of three thousand cases. Am J Clin Pathol. 1984;81(4):487–491.
121. Eskander RN, Baruah J, Nayak R, et al. Outside slide review in gynecologic oncology: impact on patient care and treatment. Int J Gynecol Pathol. 2013;32(3):293–298.
122. Gaudi S, Zarandona JM, Raab SS, English JCI, Jukic DM. Discrepancies in dermatopathology diagnoses: the role of second review policies and dermatopathology fellowship training. J Am Acad Dermatol. 2013;68(1):119–128.
123. Gerhard R, da Cunha Santos G. Inter- and intraobserver reproducibility of thyroid fine needle aspiration cytology: an analysis of discrepant cases. Cytopathology. 2007;18(2):105–111.
124. Haws B, St Romain P, Mammen J, Fraga GR. Secondary review of external histopathology on cutaneous oncology patients referred for sentinel lymph node biopsy: how often does it happen and is it worth it? J Cutan Pathol. 2012;39(9):844–849.
125. Kukreti V, Patterson B, Callum J, Etchells E, Crump M. Pathology the gold standard - a retrospective analysis of discordant “second-opinion” lymphoma pathology and its impact on patient care. Blood. 2006;108(11):348.
126. Pomianowska E, Grzyb K, Westgaard A, Clausen OP, Gladhaug IP. Reclassification of tumour origin in resected periampullary adenocarcinomas reveals underestimation of distal bile duct cancer. Eur J Surg Oncol. 2012;38(11):1043–1050.
127. Randall RL, Bruckner JD, Papenhausen MD, Thurman T, Conrad EUI. Errors in diagnosis and margin determination of soft-tissue sarcomas initially treated at non-tertiary centers. Orthopedics. 2004;27(2):209–212.
128. Wurzer JC, Al-Saleem TI, Hanlon AL, Freedman GM, Patchefsky A, Hanks GE. Histopathologic review of prostate biopsies from patients referred to a comprehensive cancer center: correlation of pathologic findings, analysis of cost, and impact on treatment. Cancer. 1998;83(4):753–759.
129. Tan YY, Kebebew E, Reiff E, et al. Does routine consultation of thyroid fine-needle aspiration cytology change surgical management? J Am Coll Surg. 2007;205(1):8–12.
130. Brimo F, Schultz L, Epstein JI. The value of mandatory second opinion pathology review of prostate needle biopsy interpretation before radical prostatectomy. J Urol. 2010;184(1):126–130.
131. Brochez L, Verhaeghe E, Grosshans E, et al. Inter-observer variation in the histopathological diagnosis of clinically suspicious pigmented skin lesions. J Pathol. 2002;196(4):459–466.
132. Chan TY, Epstein JI. Patient and urologist driven second opinion of prostate needle biopsies. J Urol. 2005;174(4, pt 1):1390–1394.
133. Dhir R, Parwani AV, Zynger DL. Impact of bladder biopsy second review on pathological stage and subsequent patient management. Lab Invest. 2009;89(suppl 1):165A.
134. Fajardo DA, Miyamoto H, Miller JS, Lee TK, Epstein JI. Identification of Gleason pattern 5 on prostatic needle core biopsy: frequency of underdiagnosis and relation to morphology. Am J Surg Pathol. 2011;35(11):1706–1711.
135. Khazai L, Middleton LP, Goktepe N, Liu BT, Sahin AA. Breast pathology second review identifies clinically significant discrepancies in 10% of cases. Lab Invest. 2012;92(suppl 1):46A.
136. Kommoss S, Pfisterer J, Reuss A, et al. Specialized pathology review in patients with ovarian cancer: highly recommended to assure adequate treatment: results from a prospective study. Lab Invest. 2012;92(suppl 1):281A.
137. Kronz JD, Westra WH, Epstein JI. Mandatory second opinion surgical pathology at a large referral hospital. Cancer. 1999;86(11):2426–2435.
138. Li X, Heller K, Cangiarella J, Simsir A. Interinstitutional second opinion in thyroid cytology: should second opinion be mandated prior to definitive surgery if fine needle aspiration was performed elsewhere? Cytojournal. 2011;8(3):S63.
139. Price JA, Grunfeld E, Barnes PJ, Rheaume DE, Rayson D. Inter-institutional pathology consultations for breast cancer: impact on clinical oncology therapy recommendations. Curr Oncol. 2010;17(1):25–32.
140. Renshaw AA, Gould EW. Comparison of disagreement and error rates for three types of interdepartmental consultations. Am J Clin Pathol. 2005;124(6):878–882.
141. Safrin RE, Bark CJ. Surgical pathology sign-out: routine review of every case by a second pathologist. Am J Surg Pathol. 1993;17(11):1190–1192.
142. Tavora F, Fajardo DA, Lee TK
,
et al
.
Small endoscopic biopsies of the ureter and renal pelvis: pathologic pitfalls
.
Am J Surg Pathol
.
2009
;
33
(
10
):
1540
1546
.
143
Tsung
JS.
Institutional pathology consultation
.
Am J Surg Pathol
.
2004
;
28
(
3
):
399
402
.
144
Kerkhof
M
,
van Dekken
H
,
Steyerberg
EW
,
et al
.
Grading of dysplasia in Barrett's oesophagus: substantial interobserver variation between general and gastrointestinal pathologists
.
Histopathology
.
2007
;
50
(
7
):
920
927
.
145
Oyama
T
,
Allsbrook
WCJ
,
Kurokawa
K
,
et al
.
A comparison of interobserver reproducibility of Gleason grading of prostatic carcinoma in Japan and the United States
.
Arch Pathol Lab Med
.
2005
;
129
(
8
):
1004
1010
.
146
Van der Kwast
TH
,
Evans
A
,
Lockwood
G
,
et al
.
Variability in diagnostic opinion among pathologists for single small atypical foci in prostate biopsies
.
Am J Surg Pathol
.
2010
;
34
(
2
):
169
177
.
147
van der Kwast
TH
,
Collette
L
,
Van Poppel
H
,
et al
.
Impact of pathology review of stage and margin status of radical prostatectomy specimens (EORTC trial 22911)
.
Virchows Arch
.
2006
;
449
(
4
):
428
434
.
148
Fadare
O
,
Parkash
V
,
Dupont
WD
,
et al
.
The diagnosis of endometrial carcinomas with clear cells by gynecologic pathologists: an assessment of interobserver variability and associated morphologic features
.
Am J Surg Pathol
.
2012
;
36
(
8
):
1107
1118
.
APPENDIX. 

Disclosed Interests and Activities November 2011–September 2014
Author notes

Supplemental digital content is available for this article at www.archivesofpathology.org in the January 2016 table of contents.

Competing Interests

Authors' disclosures of potential conflicts of interest and author contributions are found in the appendix at the end of this article.