Context.—

Quality measures that are supported by evidence-based clinical practice guidelines are preferred for assessing the quality of pathologists' practices. Careful testing of a measure helps ensure that scores obtained by that measure reflect the quality of a pathologist's practice.

Objective.—

To specify a new quality measure and to demonstrate through testing that it is suitable for measuring pathologists' appropriate incorporation of information regarding microsatellite instability (MSI) and/or mismatch repair (MMR) status in pathology reports for colorectal, endometrial, gastroesophageal, and small bowel carcinoma.

Design.—

The College of American Pathologists collaborated with the American Gastroenterological Association to specify and test the new measure. Face validity testing was used to investigate the validity of the measure. Feasibility testing was conducted to determine whether the data elements required by the measure specification were readily accessible. Signal-to-noise analysis was used to characterize the measure's reliability.

Results.—

Guideline recommendations for MSI and/or MMR testing supported specifications for the measure. Face validity testing indicated that the measure could distinguish the quality of care provided. Data elements required by the measure specification were found to be accessible, which supported the measure's feasibility. Reliability testing showed that differences in measure score were attributable to real differences in performance rather than random variation in scoring.

Conclusions.—

The Mismatch Repair or Microsatellite Instability Biomarker Testing Status in Colorectal Carcinoma, Endometrial, Gastroesophageal, or Small Bowel Carcinoma measure was appropriately specified, and testing demonstrated that it is well suited for characterizing the quality of pathologists' communication of MMR and/or MSI status.

The Medicare Access and CHIP Reauthorization Act of 2015 established the Quality Payment Program (QPP), which allowed for adjustments to reimbursement based on a provider's demonstration of quality and cost of care. The QPP offers 2 payment pathways: the Merit-based Incentive Payment System (MIPS) and Advanced Alternative Payment Models. MIPS is the default pathway and the one in which most eligible pathologists participate. Within MIPS, 4 performance categories determine the final score, which can result in a payment adjustment: quality, improvement activities, promoting interoperability, and cost. Currently, most MIPS-eligible pathologists are not required by the Centers for Medicare & Medicaid Services (CMS) to participate in the cost and promoting interoperability categories. However, pathologists actively participate in the improvement activities and quality categories, with quality historically constituting 85%1 of the final score for pathologists. A recent change in the 2022 Quality Payment Program Final Rule reweighted quality and improvement activities for small groups (≤15 pathologists) such that each performance category contributes 50%1 of the final score; quality remains weighted at 85% of the final score for larger groups.

Eligible pathologists can use a CMS-approved Qualified Clinical Data Registry (QCDR) to report performance on quality measures to CMS. QCDRs can submit approved QCDR measures, MIPS Clinical Quality Measures (CQMs), and information regarding improvement activities on behalf of the pathologist or group. A QCDR measure is available only to users of the QCDR that includes it, whereas MIPS CQMs are publicly available and have undergone an extensive public approval process. The College of American Pathologists (CAP) maintains the only pathology-specific QCDR, the Pathologists Quality Registry, which includes 6 QCDR measures available only through that registry. The CAP Quality and Clinical Data Registry Affairs Committee collaborates with CAP committees, pathologists, and other organizations to update and develop performance measures that are well suited for demonstrating the quality and value of pathologists' practices within quality payment programs such as MIPS.
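As a simple illustration of how the category weights described above combine (a hypothetical example; it assumes quality and improvement activities are the only scored categories, with the weight not assigned to quality falling to improvement activities):

\[
\text{Final score} = w_{\text{quality}} S_{\text{quality}} + w_{\text{IA}} S_{\text{IA}}
\]

Under the 2022 rule, a small group with category scores of 80 (quality) and 100 (improvement activities) would receive \(0.50 \times 80 + 0.50 \times 100 = 90\), whereas a larger group with the same category scores would receive \(0.85 \times 80 + 0.15 \times 100 = 83\).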

Mismatch repair (MMR) or microsatellite instability (MSI) testing is indicated in several clinical settings.2–6 Despite the well-established need for this information, gaps in documentation of MMR and/or MSI status persist.7–19 In the 2019 and 2020 reporting years, CAP18 “Mismatch Repair (MMR) or Microsatellite Instability (MSI) Biomarker Testing to Inform Clinical Management and Treatment Decisions in Patients with Primary or Metastatic Colorectal Carcinoma” was a high-priority, proprietary QCDR measure in the Pathologists Quality Registry. CAP18 measured the percentage of surgical pathology reports that addressed the status of MMR proteins by immunohistochemistry or MSI by DNA-based testing methods for all primary and metastatic colorectal carcinomas. In 2020, a similar measure, CAP31 “Endometrial Carcinoma Testing for MMR, MSI, or Both,” was added to the Pathologists Quality Registry QCDR to measure the percentage of surgical pathology reports with a diagnosis of endometrial carcinoma (EMCA) that addressed MSI and/or MMR status.

CAP creates and maintains multiple measures to help pathologists participate successfully in MIPS. In 2020, CMS requested that CAP consider combining 2 measures, CAP18 and CAP31, as part of a broader initiative to reduce the overall number of measures and create more meaningful and inclusive measures for the MIPS program. The Quality and Clinical Data Registry Affairs Committee was also aware of a forthcoming CAP clinical practice guideline for MMR and MSI testing in patients being considered for checkpoint inhibitor therapy and was interested in creating a measure for determining how well a pathologist communicates MMR/MSI status in relevant tumors. Following publication of the American Gastroenterological Association (AGA) guideline on the diagnosis and management of Lynch syndrome,20 the AGA Quality Committee initiated its own quality measure development process regarding testing of tumors for MMR/MSI. After initial development, and in response to feedback during a public comment period that highlighted the importance of pathologist involvement in MMR/MSI testing, the AGA Quality Committee began a process of measure harmonization in collaboration with the CAP Quality and Clinical Data Registry Affairs Committee. Although combining measures within a single medical specialty is not rare, harmonizing measures between specialties can be challenging. This manuscript documents the collaborative process that led to the creation of CAP33 “Mismatch Repair (MMR) or Microsatellite Instability (MSI) Biomarker Testing Status in Colorectal Carcinoma, Endometrial, Gastroesophageal, or Small Bowel Carcinoma” and its acceptance as a MIPS CQM.

CAP33 was expected to be broadly useful to many pathologists participating in MIPS, but initially it would be available exclusively to pathologists using the Pathologists Quality Registry QCDR. Hence, the CAP Quality and Clinical Data Registry Affairs Committee and AGA Quality Committee were interested in moving the measure to the public domain. Supporting such a move, CMS has indicated a goal of having measures that are applicable to or endorsed by multiple specialties. Given the similarities in measure content and overlapping interests, collaborating to test this measure for the public domain was a natural step for CAP and AGA. Furthermore, CAP performed additional testing and development of the measure in order to submit it for approval by CMS as a MIPS CQM, with a goal of benefiting the pathology community beyond only Pathologists Quality Registry users. Inclusion in MIPS could also make this measure reportable for other relevant specialties, such as gastroenterology, surgery, and oncology.

Submitting a measure to the public domain for MIPS reporting is a lengthy, multistep project. A key step is submission to the Measures Under Consideration (MUC) list. To be considered for the MUC list, CAP33 required validity, feasibility, and reliability testing. Once selected for the MUC list, a measure proceeds through an extensive expert review by the multistakeholder Measures Application Partnership, convened by the National Quality Forum (NQF), as well as a public comment period and rule-making process prior to acceptance as a MIPS measure.

Measure Development

Prior to the creation of CAP33, the Pathologists Quality Registry had 2 independent MMR/MSI measures: 1 addressing colorectal cancer (CRC-only measure) and another addressing endometrial cancer (EMCA-only measure). These 2 existing measures were combined, and additional diagnoses were added based on the anticipated MMR and MSI Testing in Patients Being Considered for Checkpoint Inhibitor Therapy guideline, then under development, to form a comprehensive new measure focused on the appropriate use of MMR/MSI testing.

Measure Testing

After the new measure specification was drafted, measure testing was performed in accordance with recommendations from the National Quality Forum and the CMS Measures Management System Blueprint for Measure Development. Three types of testing were performed for the combined MMR/MSI status measure. Face validity testing was conducted to evaluate whether the measure, as specified, reflected the intended quality practice of addressing MMR/MSI status. Feasibility testing was conducted to determine whether the data elements required to assess MMR/MSI status were readily available to potential users. Reliability testing was conducted to assess whether the written specification would produce reproducible results for a consistent group being measured. Together, validity and reliability testing are considered scientific acceptability testing.

Face validity testing was done in 2 parts. First, CAP experts from the Quality Practices Committee who were uninvolved in measure development were asked to rate their agreement, on a scale from 1 (strong disagreement) to 5 (strong agreement), with the statement, “The MMR/MSI Testing Status quality measure as described will accurately distinguish good quality from poor quality among pathologists.” Separately, a larger group of clinicians who order MMR or MSI testing, including volunteers from the AGA Institute Quality Committee, were surveyed to ascertain whether the measure correlated with quality of care. They responded on a scale from 1 (strong disagreement) to 4 (strong agreement) to the statement, “The MMR/MSI Testing Status quality measure as described [above] will accurately distinguish between good and poor quality of care.” In the first round of face validity testing with pathologists, the 1 to 5 scale allowed participants to respond “Neither Agree nor Disagree,” which was not informative; hence, a 1 to 4 scale was employed when surveying gastroenterologists. The different response scales (ie, 1–5 versus 1–4) did not compromise the interpretation of results because responses from the 2 groups were analyzed separately. Validity would be judged successful if a majority of responses indicated agreement.

Feasibility testing was performed for pathologists and gastroenterologists. To assess feasibility among pathologists, practices using the Pathologists Quality Registry were surveyed using a scorecard, as recommended by NQF, to assess whether pathologists would be able to access the data elements in pathology reports necessary to use and report performance with the measure. The scorecard gauged not only whether the data elements were accessible but also whether they were captured as free text or as structured data with associated coding systems. Feasibility for pathologists using the Pathologists Quality Registry QCDR was also assessed by examining real-world experience with reporting of the related CRC-only and EMCA-only measures, which required similar data elements. Among gastroenterologists, feasibility was scored by AGA Institute Quality Committee members who were surveyed using an equivalent scorecard, modified to address their electronic health records rather than the laboratory information systems that pathologists use. Like the scorecard for pathologists, the gastroenterologists' version asked whether data elements were available as free text or structured data and whether a coding system was used. If most data elements needed to report the measure were available, feasibility would be judged acceptable.

Reliability testing was accomplished with the CRC-only measure and the 2021 combined MMR/MSI status measure, CAP33. Reliability was assessed at the clinician level for the CRC-only measure using data from the Pathologists Quality Registry QCDR from January 1, 2019, to December 31, 2019. For each clinician, the total eligible population, number of exclusion and exception cases, and population that met the quality action were extracted from the registry. A performance rate on the measure was calculated for each pathologist from deidentified data, and a mean performance rate for the included pathologists was determined. Using this population, reliability of the measure was calculated as the ratio of signal to noise using the method described by Adams21 and Adams et al,22 with signal defined as the proportion of variability among pathologists' scores that can be explained by true differences in performance and noise as the total variability in measured performance.
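For context, the Adams signal-to-noise reliability can be written compactly under the β-binomial model (a standard formulation of this approach; the notation below is ours, not taken from the measure documentation):

\[
R_j = \frac{\sigma^2_{\text{between}}}{\sigma^2_{\text{between}} + \sigma^2_{\text{within},j}},
\qquad
\sigma^2_{\text{between}} = \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)},
\qquad
\sigma^2_{\text{within},j} = \frac{p_j(1-p_j)}{n_j},
\]

where α and β are the fitted β-binomial parameters, p_j is pathologist j's performance rate, and n_j is that pathologist's number of eligible cases; R_j approaches 1 when true between-pathologist differences dominate case-sampling noise.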

Following the written specification of the MMR/MSI status measure, reliability testing was conducted with data submitted voluntarily by members of the Quality and Clinical Data Registry Affairs Committee. These members extracted deidentified cases of colorectal, endometrial, gastroesophageal, and small bowel carcinoma from their respective laboratory information systems using the same process and keywords that a pathology practice using the Pathologists Quality Registry QCDR would use. Because of the public health emergency in 2020 and the recognized changes in case volume and distribution, representative data from 2019 were used. Pathologists evaluated the cases to categorize them as Performance Met or Not Met after consideration of specified Exclusions and Exceptions. These data were provided in aggregate to Pathologists Quality Registry staff in the same format as the data pulled from the Registry for the CRC-only measure. Data were then analyzed using the same method as for the CRC-only measure: signal-to-noise analysis as described by Adams21 and Adams et al.22

Measure Development

Following a request for public comment in August 2018, CAP became aware that the AGA was independently developing a Lynch syndrome screening measure, related to MMR/MSI, that was intended for submission to CMS's MUC list. Concurrently, CAP was developing the CRC-only MMR/MSI measure. In February 2019, AGA and CAP clinical leads met, compared the language of their respective measures, and agreed to collaborate on a harmonized measure. By summer, language was agreed upon, and CAP submitted the harmonized measure as a QCDR measure in August 2019. This version was accepted for use in 2020 in the Pathologists Quality Registry QCDR alongside the EMCA-only measure.

In early 2020, the draft multispecialty guideline recommendations23 for MMR/MSI testing in patients being considered for checkpoint inhibitor therapy were circulated for public comment. Recognizing the overlap with the current MMR/MSI measure, and based on CMS's desire for measures to be combined as much as possible, the existing Pathologists Quality Registry QCDR measure was expanded with the agreement of AGA24 to include not only colorectal and endometrial carcinomas but also gastroesophageal and small bowel carcinomas (Table).

Table. Mismatch Repair (MMR) or Microsatellite Instability (MSI) Biomarker Testing Status in Colorectal Carcinoma, Endometrial, Gastroesophageal, or Small Bowel Carcinoma

Measure Testing

For face validity testing, 9 pathologists from the Quality Practices Committee responded to the survey, producing a mean score of 4.3 on a scale from 1 to 5 (5 = strongly agree) in favor of a valid measure specification. In addition, 37 of 40 surveyed gastroenterologists and genetic counselors agreed or strongly agreed, producing a mean score of 3.4 on a scale from 1 to 4 (4 = strongly agree) in favor of a valid measure specification. These results were interpreted as supporting the validity of the measure, from the perspective of pathologists, gastroenterologists, and genetic counselors, for the purpose of identifying high-quality care and differentiating good performance from poor performance.

Feasibility testing of the new measure was performed separately for 2 groups: pathologists and gastroenterologists. Four pathology practices that use the Pathologists Quality Registry participated in testing by classifying data elements on a scorecard as either “yes” or “no” to indicate the availability of the elements in pathology reports as well as the format of the elements. Practice size varied from 4 to 40 pathologists. Scores for data availability indicated that most data elements were available to most practices, which was interpreted as evidence that the data elements needed to report this measure are commonly available in a laboratory information system (LIS) without extra effort to capture or locate them. The data elements that were not commonly available were consistent across practices and included patient reason for not performing/ordering MMR/MSI tests (eg, patient refused) and health system reason for not performing/ordering MMR/MSI tests (eg, payor-related issues). Follow-up interviews with pathologists indicated that these elements would not be captured in the LIS because the information is very rarely conveyed from the ordering health care provider to the pathologist. Scores for structure of the data were comparatively lower than data availability scores, and practices indicated that most data elements were captured as free text rather than structured data (structured data fields include codes, drop-down boxes, items on the CAP electronic Cancer Checklist, etc). When data were captured in structured fields, much of the data were captured with a standard terminology, such as ICD-10 or CPT codes.

Feasibility among pathologists also was assessed based on the real-world experience from the Pathologists Quality Registry QCDR. In 2019, a total of 11 practices with 56 clinicians reported on 817 cases to CMS for the CRC-only measure. In 2020, 12 practices with 123 clinicians submitted 4796 cases to CMS for the CRC-only measure, and 8 practices with 80 clinicians submitted 456 cases to CMS for the EMCA-only measure. These results demonstrate feasibility among Pathologists Quality Registry QCDR users for the individual measures. In 2021, a total of 22 practices reported 7923 cases to CMS for the combined MMR/MSI measure, indicating real-world feasibility of this measure as well.

To assess feasibility for gastroenterologists, AGA members were surveyed using a scorecard similar to that for pathologists, with language modified to accommodate electronic health record data rather than LIS data. Like the scorecard for pathologists, the gastroenterologists' version captured whether data elements were available as structured data or free text and whether a coding system was used. Among the 24 respondents, scores were similar to the pathologists' results. Most of the individual data elements were available to most practices and were captured in the electronic health record as part of routine patient care documentation. Although gastroenterologists captured more data in structured fields than pathologists, many data remained free text or unstructured. When gastroenterologists captured data in a structured format, the data were generally represented using a standard terminology, consistent with the finding for pathology practices capturing structured data.

Reliability testing was done for both the CRC-only measure and the 2021 combined MMR/MSI status measure. In 2020, reliability testing was performed at the clinician level for the CRC-only measure using data from the Pathologists Quality Registry QCDR from January 1, 2019, to December 31, 2019. In 2019, a total of 56 clinicians submitted data to CMS for this measure. Additional pathologists entered data but opted not to submit them to CMS because doing so would not have affected their MIPS scores. The total number of pathologists who sent at least 1 case to the Pathologists Quality Registry QCDR for the CRC-only measure in 2019 was 80, covering 2799 patients. For each pathologist, the total eligible population, number of exclusion and exception cases, and population that met the quality action were pulled from the Pathologists Quality Registry QCDR. A performance score for each pathologist was calculated with deidentified data. Performance scores are expressed as a percentage of cases with Performance Met. Scores for individual pathologists ranged from 0 to 100% Performance Met, and the mean performance score for the 80 included pathologists was 49.5%. Using this population, the reliability of the measure was calculated as the ratio of signal to noise, with signal being the proportion of variability among pathologists' scores that can be explained by true differences in performance and noise being the total variability in measured performance. The performance rates indicated clinically meaningful variation across pathologists. A reliability value of 0 implies that all the variability in a measure is attributable to measurement error; a value of 1 implies that all the variability is attributable to real differences in physician performance. Estimates were based on a β-binomial distribution (α = .12, β = .16). The best estimate of reliability for the CRC-only measure was 0.98, which is consistent with very high reliability.

Reliability testing was also conducted on the 2021 combined MMR/MSI status measure, CAP33. Data were collected from members of the Quality and Clinical Data Registry Affairs Committee following the measure specification. These members obtained deidentified cases of colorectal, endometrial, gastroesophageal, and small bowel carcinoma from their respective laboratory information systems using the same processes and keywords used by Pathologists Quality Registry QCDR users. Data from 6 practices, for a total of 51 pathologists covering 1282 cases, were provided in aggregate to Pathologists Quality Registry staff in the same format as the data pulled from the Pathologists Quality Registry QCDR for the CRC-only measure. Substantial variability was seen in performance scores with the fully specified measure, suggesting that reliability of the measure would be very good; this was consistent with data submitted to CMS for performance year 2022 for the combined MMR/MSI measure, in which performance scores for pathologists ranged from 1.56% to 100% (unpublished data from Pathologists Quality Registry QCDR). The data submitted by Quality and Clinical Data Registry Affairs Committee members were then analyzed using the same method as was used for the CRC-only measure: signal-to-noise analysis using a β-binomial model. The mean performance score (expressed as a percentage of cases with Performance Met) was 60% for the 51 included pathologists, with scores for individual pathologists ranging from 0 to 100% Performance Met. Estimates were based on a β-binomial distribution (α = .20, β = .12). The best estimate of the reliability for CAP33 was 0.96 (SD, 0.07), which is consistent with very high reliability. Variability in scores is therefore attributable to true differences in performance rather than noise in measurement.
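To make the reported estimates concrete, the following sketch is a minimal, illustrative signal-to-noise computation under the β-binomial model. It uses the published α, β, mean performance rates, and case counts, but the even split of cases across pathologists is our simplifying assumption (the actual analysis used each pathologist's real case volume), so the outputs approximate rather than reproduce the reported values of 0.98 and 0.96.

# Illustrative Adams-style signal-to-noise reliability check.
# Parameters (alpha, beta, mean rates, case counts) come from the text;
# the even distribution of cases across pathologists is an assumption.

def between_variance(alpha: float, beta: float) -> float:
    """Pathologist-to-pathologist variance implied by the fitted beta distribution."""
    return (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))

def reliability(alpha: float, beta: float, mean_rate: float, n_cases: float) -> float:
    """Signal-to-noise reliability for a pathologist scored on n_cases cases."""
    signal = between_variance(alpha, beta)          # true performance variation
    noise = mean_rate * (1 - mean_rate) / n_cases   # binomial sampling error
    return signal / (signal + noise)

# CRC-only measure: 80 pathologists, 2799 cases, mean performance 49.5%.
print(round(reliability(0.12, 0.16, 0.495, 2799 / 80), 2))  # -> 0.96 (reported: 0.98)

# Combined CAP33 measure: 51 pathologists, 1282 cases, mean performance 60%.
print(round(reliability(0.20, 0.12, 0.60, 1282 / 51), 2))   # -> 0.95 (reported: 0.96)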

Existing measures present opportunities for creating new measures when the guidelines from which they are derived are updated. Measure development and full measure testing can be achieved successfully through the collaboration of specialty societies, and measures that span the care continuum benefit from the input of relevant providers. Such measures will become increasingly important as CMS and other national organizations sharpen their focus on patient care journeys. Measure testing is a demanding task; it does not prove that a measure will succeed in real-world use, but it can detect flaws in a measure before deployment. Although preferred testing methods have been described, the overall evaluation of scientific acceptability often requires the consensus of a committee of experts who can balance the varied and sometimes conflicting information collected during testing. In testing the MMR/MSI measure, pathologists agreed that the measure distinguishes quality performance, and the survey of gastroenterologists was compatible with the pathologists' assessment. Feasibility testing can also be achieved successfully through engagement of society members, as seen here: members of both societies contributed information for feasibility testing, and existing data from similar measures in a QCDR were leveraged as added evidence of feasibility. Volunteer data submission and access to existing data in the Pathologists Quality Registry QCDR were used successfully for reliability testing. Of note, the availability of data in the Pathologists Quality Registry was instrumental to the success of reliability testing because it minimized the burden on volunteers to manually abstract data. Data in the Pathologists Quality Registry are reviewed repeatedly by participating clinicians throughout the year before submission to CMS and are therefore highly likely to be accurate and complete.

When considering how to assist practices in successfully implementing the measure, resources have been developed and deployed that account for the various mechanisms by which data come to the Pathologists Quality Registry QCDR. To support practices using the combined MMR/MSI measure, user guides comparing the previous versions to the current version have been created, along with a walkthrough video and a frequently asked questions document. In addition, documentation to assist practices that integrate their LIS with the Pathologists Quality Registry QCDR has been created based on the specification and known keywords. Resources were shared with the 22 practices that selected the combined MMR/MSI measure as one of their priority QCDR measures for 2021, as well as with practices that selected the measure for 2022.

The intent of a quality measure is to assess the quality of care provided to patients. Biomarker testing relevant to the clinical context provides pertinent information to guide prompt treatment decisions for a patient. Referring physicians depend on pathologists' diagnoses as well as supplementary information that informs patient management decisions. When patients' pathology reports include the relevant biomarker status, the application of evidence-based guidelines and the elimination of unnecessary duplicative testing become possible. These benefits align with the National Quality Strategy domain of Communication and Care Coordination. Indeed, this measure can help pathologists monitor their success in effectively communicating important information for the purpose of care coordination and the efficient use of resources.

MMR or MSI Biomarker Testing Status in Colorectal Carcinoma, Endometrial, Gastroesophageal, or Small Bowel Carcinoma was an approved QCDR measure for 2021, and testing supported its scientific acceptability. The measure was then successfully submitted in 2021 and approved for consideration for conversion to a MIPS CQM and potential public use in 2023. Subsequently, the measure specification and testing results were reviewed by committees of the NQF and earned NQF endorsement. CMS also vetted the measure and approved it as a MIPS CQM (QPP 491)25 for public use effective January 1, 2023.

The authors gratefully acknowledge the contributions of Colleen Skau, PhD, of the CAP, who was instrumental in developing the measure and guiding it through the testing and submission processes; David Godzina, MBA/MA, of the AGA, who was a key coordinator of the measure creation process; and Thomas A. Long, MPH, of the CAP, who provided statistical evaluation of the measure.

References

1. Comparison of 2022 and 2023 MIPS requirements. College of American Pathologists Web site. Accessed February 27, 2023.
2. Sepulveda AR, Hamilton SR, Allegra CJ, et al. Molecular biomarkers for the evaluation of colorectal cancer: guideline from the American Society for Clinical Pathology, College of American Pathologists, Association for Molecular Pathology, and American Society of Clinical Oncology. Arch Pathol Lab Med. 2017;141(5):625–657.
3. Mismatch repair and microsatellite instability testing for immune checkpoint inhibitor therapy statements and strengths of recommendations. College of American Pathologists Web site. Accessed February 27, 2023.
4. Benson AB, Venook AP, Al-Hawary MM, et al. National Comprehensive Cancer Network (NCCN) guidelines insights: colon cancer, version 2.2018. J Natl Compr Canc Netw. 2018;16(4):359–369.
5. NCCN clinical practice guidelines in oncology: uterine neoplasms version 1. National Comprehensive Cancer Network Web site. Accessed February 27, 2023.
6. Society of Gynecologic Oncology clinical practice statement: screening for Lynch syndrome in endometrial cancer. Accessed February 27, 2023.
7. Eriksson J, Amonkar M, Al-Jassar G, et al. Mismatch repair/microsatellite instability testing practices among US physicians treating patients with advanced/metastatic colorectal cancer. J Clin Med. 2019;8(4):558.
8. Pokharel HP, Hacker NF, Andrews L. Changing patterns of referrals and outcomes of genetic participation in gynaecological-oncology multidisciplinary care. Aust N Z J Obstet Gynaecol. 2016;56(6):633–638.
9. Mathiak M, Warneke VS, Behrens HM, et al. Clinicopathologic characteristics of microsatellite instable gastric carcinomas revisited: urgent need for standardization. Appl Immunohistochem Mol Morphol. 2017;25(1):12–24.
10. Bae YS, Kim H, Noh SH, Kim H. Usefulness of immunohistochemistry for microsatellite instability screening in gastric cancer. Gut Liver. 2015;9(5):629–635.
11. Abrha A, Shukla ND, Hodan R, et al. Universal screening of gastrointestinal malignancies for mismatch repair deficiency at Stanford. JNCI Cancer Spectr. 2020;4(5):pkaa054.
12. Cohen SA. Current Lynch syndrome tumor screening practices: a survey of genetic counselors. J Genet Couns. 2014;23(1):38–47.
13. Froelich W. Disparities in MSI/MMR biomarker testing for colorectal cancer. Oncology Times. 2020;42(22):35.
14. Jain A, Shafer L, Rothenmund H, Kim CA, et al. Suboptimal adherence in clinical practice to guidelines recommendation to screen for Lynch syndrome. Dig Dis Sci. 2019;64(12):3489–3501.
15. Latham A, Srinivasan P, Kemel Y, et al. Microsatellite instability is associated with the presence of Lynch syndrome pan-cancer. J Clin Oncol. 2019;37(4):286–295.
16. Llor X. Lynch syndrome: widening the net. Gastroenterology. 2019;157(5):1432–1434.
17. Pan JY, Haile RW, Templeton A, et al. Worldwide practice patterns in Lynch syndrome diagnosis and management, based on data from the International Mismatch Repair Consortium. Clin Gastroenterol Hepatol. 2018;16(12):1901–1910.
18. Shaikh T, Handorf EA, Meyer JE, et al. Mismatch repair deficiency testing in patients with colorectal cancer and nonadherence to testing guidelines in young adults. JAMA Oncol. 2018;4(2):e173580.
19. Smyth EC, Wotherspoon A, Peckitt C, et al. Mismatch repair deficiency, microsatellite instability, and survival: an exploratory analysis of the Medical Research Council Adjuvant Gastric Infusional Chemotherapy (MAGIC) trial. JAMA Oncol. 2017;3(9):1197–1203.
20. Rubenstein JH, Enns R, Heidelbaugh J, Barkun A; Clinical Guidelines Committee. American Gastroenterological Association Institute guideline on the diagnosis and management of Lynch syndrome. Gastroenterology. 2015;149(3):777–782.
21. Adams JL. The Reliability of Provider Profiling: A Tutorial. Santa Monica, CA: RAND Corporation; 2009. Accessed February 27, 2023.
22. Adams JL, Mehrotra A, Thomas JW, McGlynn EA. Physician cost profiling–reliability and risk of misclassification. N Engl J Med. 2010;362(11):1014–1021.
23. Bartley AN, Mills AM, Konnick E, et al. Mismatch repair and microsatellite instability testing for immune checkpoint inhibitor therapy: guideline from the College of American Pathologists in collaboration with the Association for Molecular Pathology and Fight Colorectal Cancer. Arch Pathol Lab Med. 2022;146(10):1194–1210.
24. Leiman DA, Cardona DM, Kupfer SS, et al; American Gastroenterological Association Quality Committee; College of American Pathologists Quality Payment Measure Committee. American Gastroenterological Association Institute and College of American Pathologists quality measure development for detection of mismatch repair deficiency and Lynch syndrome management. Gastroenterology. 2022;162(2):360–365.
25. QPP 491: mismatch repair (MMR) or microsatellite instability (MSI) biomarker testing status in colorectal carcinoma, endometrial, gastroesophageal, or small bowel carcinoma registry specifications. College of American Pathologists Web site. Accessed March 27, 2023.

Competing Interests

The authors have no relevant financial interest in the products or companies described in this article.