Context.—In 2008, the Joint Commission (JC) implemented a standard mandating formal monitoring of physician professional performance as part of the process of granting and maintaining practice privileges.
Objective.—To create a pathology-specific management tool to aid pathologists in constructing a professional practice-monitoring program, thereby meeting the JC mandate.
Design.—A total of 105 College of American Pathologists (CAP)–defined metrics were created. Metrics were based on the job descriptions of pathologists' duties in the laboratory, and metric development was aided by experience from the Q-Probes and Q-Tracks programs. The program was offered in a Web-based format, allowing secure data entry, customization of metrics, and central data collection for future benchmarking.
Results.—The program was live for 3 years, with 347 pathologists subscribed from 61 practices (median, 4 per institution; range, 1–35). Subscribers used 93 of the CAP-defined metrics and created 109 custom metrics. The median number of CAP-defined metrics used per institution was 5 (range, 1–43), and the median number of custom-defined metrics per institution was 2 (range, 1–5). Most frequently, 1 to 3 metrics were monitored (42.7%), with approximately 20% each following 4 to 6 metrics, 7 to 9 metrics, or greater than 10 metrics. Anatomic pathology metrics were used more commonly than clinical pathology metrics. Owing to low registration, the program was discontinued in 2016.
Conclusions.—Through careful vetting of metrics it was possible to develop a pathologist-specific management tool to address the JC mandate. While this initial product failed, valuable metrics were developed and implementation knowledge was gained that may be used to address new regulatory requirements for emerging value-based payment systems.
In 2008, the Joint Commission (JC) implemented a new standard mandating evaluation of practitioners' professional performance as part of the process of granting and maintaining practice privileges in a health care organization.1–3 The mandate included 2 distinct forms of evaluation: (1) Ongoing Professional Practice Evaluation (OPPE) and (2) Focused Professional Practice Evaluation (FPPE). OPPE is intended to be a means of performance evaluation that is conducted on an ongoing basis, with the aims to monitor competency, identify areas for possible improvement, and use objective data in decisions for continuance of practice privileges. FPPE involves more specific and time-limited monitoring of performance in 3 situations: (1) when a provider is initially granted privileges, (2) when new privileges are requested for an already privileged provider, and (3) when performance problems involving a privileged provider are identified (either through the OPPE process or by any other means, such as complaints or significant departure from accepted practice).
At the heart of the JC mandate is the concept that professional practice evaluation must reflect the competency of individual pathologists (not the laboratory or pathology department as a whole). This must be accomplished through objective measures, or metrics, that assess an individual pathologist's quality and timeliness of work, as well as compliance with requirements of continuing medical education and institutional service. The JC recommends that OPPE/FPPE programs organize performance metrics based on the 6 competency areas defined by the Accreditation Council for Graduate Medical Education (ACGME) and the American Board of Medical Specialties: patient care, medical/clinical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice.
A challenge in meeting the JC mandate is the lack of detail the mandate provides. Specifically, the mandate does not indicate what metrics to use, how many metrics to use, or how often the metrics must be measured. The requirement is simply that there must be a system in place to objectively monitor practice performance. As a general guide, selection of sufficient metrics to monitor aspects of most or all of the key components of a medical provider's practice is recommended. It is the organized medical staff of the health care organization who ultimately must approve the number and type of metrics monitored for any individual practitioner or group of similar practitioners. While the JC recommends that an OPPE/FPPE program be organized into the 6 competencies above, it is not required that all 6 areas be monitored for a given practitioner. Also, in the case of OPPE, “ongoing” is defined as “more frequently than annually,” but is not further specified. Likewise, the mandate does not specify the length of time for a period of FPPE.
FPPE is conducted to fulfill specific requirements for initial establishment of practice privileges or to monitor remediation of specific practice parameters for an already privileged practitioner. Thus, FPPE is intended to be done on an episodic basis and for a predefined and finite period. The FPPE process should be clearly defined and documented with criteria and a monitoring plan; it should also be of fixed duration and have predetermined measures or conditions for acceptable performance. Most organizations set up FPPE monitoring sessions for periods of 3 to 6 months; however, this is left up to the discretion of each health care organization. For infrequently performed services, longer periods of monitoring, such as 12 months, may be appropriate. An alternative approach for infrequently performed services may be monitoring of a predetermined number of service events (eg, review of the diagnoses made on the first 10 or 20 of a specified type of surgical pathology specimen), rather than monitoring for a prescribed period. The duration and scope of FPPE monitoring may also be adjusted for the level of documented training and experience of a practitioner, with shorter monitoring periods or fewer service events for more experienced practitioners. It should be noted that all practitioners must be subjected to FPPE for new privileges, even those with extensive prior experience at other health care organizations.
OPPE is a system whereby a practitioner's performance is assessed and evaluated on an ongoing basis, allowing continuous monitoring of the quality and effectiveness of the practitioner's practice. Once a practitioner has achieved practice privileges in a health care organization, OPPE should be done continuously and data collected and assessed periodically.
There are general professional practice performance reporting programs in existence and some are currently used by hospital systems. However, for the most part, such systems are either too clinician focused or too general in scope, and are not easily applied to pathology practice. To fill this gap, and, following a market survey establishing a need, the College of American Pathologists (CAP) created the Evalumetrics program as a pathology-specific management tool to aid pathology practices in meeting the JC mandate for OPPE and FPPE.
Planning and Premarket Survey
The concept of a CAP-developed OPPE/FPPE program was first presented at a quarterly meeting of the CAP Quality Practices Committee in August 2010. A needs assessment marketing survey was conducted in November 2010 to estimate pathologists' interest level in a CAP-developed program. The survey indicated significant interest in the program, with more than half of respondents finding the program valuable and more than one-third of respondents indicating they were likely to enroll. With the backing of the CAP Council on Scientific Affairs, the project moved forward.
Pathology-specific metrics were developed that were relevant to the daily workload, duties, and job descriptions of pathologists and laboratory medical directors. Many metrics were based on prior experience with the Q-Probes and Q-Tracks programs, which included well-established benchmarks that pathology practices could use for goal setting. Other metrics were written from scratch, and, in many instances, no benchmark data existed. In such cases, it was planned to accrue subscriber data in order to eventually create benchmarks. Regardless of the existence of any published benchmark, every metric was written with the caveat that each pathology practice must determine goals and benchmarks that are appropriate for its particular practice setting. In addition to the members of the CAP Quality Practices Committee, input was provided by several CAP Resource Committees, including Autopsy, Cytopathology, Molecular Oncology, Forensic Pathology, and Neuropathology. All metrics were provided as fully developed measures of performance in each general and subspecialty area of pathology practice. The listing for each of the metrics included a description of the metric, the data collection format (rate/percentage, count, mean/median, and binary/yes-no), the competency area(s) covered, calculation methodology, suggested threshold, available national benchmarking information, and relevant references. To assist the subscriber in collection of relevant raw data and performance of all required calculations, an electronic worksheet was made available for most metrics for printing and off-line use by the subscriber.
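To make the rate/percentage data collection format concrete, the following is a minimal sketch of how a rate-based metric with a suggested threshold could be computed; the class, field names, example counts, and threshold are illustrative assumptions, not the Evalumetrics program's actual implementation.

```python
# Hypothetical sketch of a rate-based metric with a practice-defined
# threshold; all names and numbers here are illustrative only.
from dataclasses import dataclass

@dataclass
class RateMetric:
    name: str
    numerator: int        # eg, amended reports in the period
    denominator: int      # eg, total reports signed out in the period
    threshold_pct: float  # suggested goal, set by the practice

    def rate_pct(self) -> float:
        """Rate expressed as a percentage of the denominator."""
        return 100.0 * self.numerator / self.denominator

    def meets_threshold(self) -> bool:
        """True when the observed rate is at or below the goal."""
        return self.rate_pct() <= self.threshold_pct

m = RateMetric("Amended surgical pathology reports",
               numerator=3, denominator=600, threshold_pct=1.0)
print(f"{m.name}: {m.rate_pct():.2f}% (meets goal: {m.meets_threshold()})")
```

The same structure accommodates the other data collection formats (count, mean/median, binary) by swapping the calculation method, which is why each metric listing specified its format and calculation methodology explicitly.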
The metrics were divided into 4 major categories: (1) general pathology, (2) clinical pathology (CP), (3) anatomic pathology (AP), and (4) procedural services. General pathology includes metrics that are generally applicable to all pathologists such as continuing education or on-call performance. Clinical pathology and AP were further stratified into both core and specific practice areas. Core metrics were to be used for pathologists who practice in all subspecialties of AP or CP. The use of specific practice area metrics was most applicable to pathologists who have delineated privileges limited to 1 or more pathology subspecialties (eg, chemical pathology in CP, cytopathology in AP).
An important component of FPPE and OPPE is peer review, which may be conducted by using multiple sources of information, including (1) the review of individual cases, (2) the review of aggregate data for compliance with general rules of the medical staff and clinical standards, and (3) the use of rate-based measures in comparison with established benchmarks or norms. A peer is defined as a medical staff member within the same service or practice area.
Clinical Pathology Metrics
Professional practice evaluation to support privileging in CP was evaluated by using 3 major categories of performance: (1) practice activity, (2) timeliness, and (3) competence. Practice activity or workload was measured to show that the pathologist is actively practicing in the area(s) of CP for which privileges are granted. Timeliness was measured to show the pathologist's ability to meet turnaround expectations required for quality patient care and laboratory management. Competence was measured to show the pathologist's knowledge and skills in the practice area(s) for which privileges are granted. Since leadership and management of the clinical laboratory is a significant component of laboratory medicine, each of the 3 major categories listed above was further stratified into either clinical or laboratory management areas of practice for which the pathologist is responsible. Finally, specialized measures were included to evaluate performance of medical procedures performed by the clinical pathologist (eg, bone marrow aspiration and biopsy) or low volume, infrequent activity (eg, method validation). Table 1 displays examples of different OPPE metrics that could be selected for hematology practice.
Anatomic Pathology Metrics
Similar to CP, professional practice evaluation to support privileging in AP was evaluated by using 3 major categories of performance: (1) practice activity, (2) timeliness, and (3) competence. Practice activity or workload was measured to show that the pathologist is actively practicing in the area(s) of AP for which privileges are granted. These included number of surgical pathology, cytopathology, and autopsy cases performed. Timeliness was measured to show the pathologist's ability to meet turnaround expectations required for quality patient care and laboratory management. Commonly measured areas were turnaround time (TAT) of routine biopsy cases, TAT of cytology cases, and preliminary and final report TAT of autopsy cases. Competence was measured to show the pathologist's knowledge and skills in the practice area(s) for which privileges are granted. Competence in AP was based primarily on peer review of cases, which could be accomplished several ways, including case review for conferences, through systematic review of a set percentage of cases, or review of selected case types. Management of the laboratory is a significant component of the work of AP leadership, thus each of the 3 major categories was further divided for evaluation of administrative activities of a pathologist. This included performing administrative tasks, such as signing off on procedures and policies in a timely and complete manner. Finally, specific measures were used to evaluate special situations, such as performance of medical procedures performed by the anatomic pathologist (eg, fine-needle biopsy) or low-volume, infrequent activity (eg, method validation for immunohistochemistry and other ancillary tests). Table 2 displays examples of different OPPE metrics that could be selected for cytopathology practice.
Information Technology Considerations
For ease of reporting, a Web-based portal was developed with the aid of a third-party developer. The program allowed for secure, password-protected access. To address any concern that the performance data might be discoverable by outside agencies, such as insurers or legislative bodies, subscribing practices reported data with confidential identifiers that were known only to the subscribing practices. Neither CAP nor any other agency had access to the identifiers. The reporting portal was built to be flexible enough to allow for customization of metrics (when desirable), and to allow for dashboard reporting to health care organization administrators. The portal was also a means for collection of benchmark data that could be provided to program subscribers as a means of refining target goals in the future.
Subscriber Registration and Choice of Metrics
The workflow to establish institutional practice areas, select metrics, and enroll pathologists in Evalumetrics is provided in the Figure. The system was designed to allow easy entry of information by either a physician leader or pathology group. When a subscriber first registered in the system and when new practitioners were added to a subscriber's database, information regarding each practitioner needed to be entered. Required information was a listing of the general areas of practice (eg, AP, CP, or both) and the subspecialty privilege area(s) (eg, cytopathology, hematopathology, chemistry) that were a regular part of pathologists' work. Such information was used to display prospective metrics for each pathologist. In addition to groups of metrics for subspecialty privileges, an optional general set of core metrics for AP and/or CP was offered in the form of a rapid startup option. These core AP and core CP metrics covered the basic core privileges and services provided by pathologists in each general practice area. Although adequate for basic monitoring of practice performance, the core metrics did not include practice measures specific to subspecialty areas of work. Note that these metrics were offered as suggestions only; the subscriber was free to choose 1 or more of these core metrics for each practitioner, but selection of these metrics was not obligatory. The subscriber was given the option to select any of these metrics and/or to enter their own subscriber-customized metrics for each practitioner. For OPPE, data entry frequencies of every month, every 3 months, or every 6 months were offered. For FPPE, the duration of monitoring and/or number of service events was to be entered for each metric at the time of selection. If data were not entered on time, the system alerted subscribers that data entry was due and still pending.
Archiving and Reporting
The program generated both on-demand and biannual system-generated reports, all of which could be downloaded as .pdf files. Various data formats were possible, including reports showing metrics and performance data for an individual practitioner. This format could be used as a trending report for a monitoring period of up to 2 contiguous years. High-level or executive summary reports showed the average performance of multiple practitioners in a department or group, and a comprehensive summary report was available for a more detailed comparison of performance trends across multiple providers. In all report formats, performance that met or did not meet the designated threshold for each metric was clearly identified and color coded.
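The threshold flagging described above can be illustrated with a minimal sketch; the metric, period labels, rates, and threshold below are hypothetical examples, not data from the actual reports.

```python
# Illustrative sketch of per-period threshold flagging of the kind the
# trending reports provided; all data here are hypothetical.
threshold_pct = 1.0  # assumed practice-defined goal for an amended-report rate
periods = {"2013-H1": 0.8, "2013-H2": 1.4, "2014-H1": 0.6}  # rate in %

for period, rate in sorted(periods.items()):
    status = "MEETS GOAL" if rate <= threshold_pct else "BELOW GOAL"
    print(f"{period}: {rate:.1f}% [{status}]")
```

In the actual reports this pass/fail distinction was conveyed by color coding rather than text labels, and trends could span up to 2 contiguous years of periods.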
Marketing of the Program
Promotion of the program was accomplished through several avenues, including a direct mail piece, advertising at pathology meeting expositions, Webinars, email, the CAP catalog, and the CAP Today periodical.4 In addition, live informational sessions were held by members of the CAP Quality Practices Committee at CAP annual meetings. Finally, individual meetings with interested potential participants were also conducted.
The Evalumetrics program go-live date was March 15, 2013. From its inception to its termination in 2016, the program had 61 institutional subscribers with 53 institutions actively participating and entering data (Table 3). Overall, there were 347 pathologists enrolled, with 337 pathologists having data elements populated in the program. Subscribing institutions had a median of 4 pathologists enrolled with a range of 1 to 35. Table 4 shows that 24 of 49 participating institutions (49%) had between 1 and 4 pathologists in their practice. Twenty-nine of 51 subscribing institutions (57%) were voluntary not-for-profit and 24 of 46 (52%) contained 250 beds or fewer. The median annual test volume of participating institutions was 923 498 (Table 5).
Table 6 shows the distribution of metrics among subscribers. The number of CAP-defined metrics offered by the Evalumetrics program was 105, of which 93 unique metrics were selected among all participants. The number of CAP-defined metrics used per institution ranged from 1 to 43 (median, 5). One to 3 metrics were monitored by 144 of 337 participants (42.7%), with 66 of 337 (19.6%) following 4 to 6 metrics, 62 of 337 (18.4%) following 7 to 9 metrics, and 65 of 337 (19.3%) following greater than 10 metrics. Thirty-five institutions created 109 custom-defined metrics with 1 to 5 of these metrics used per institution (median, 2).
Table 7 lists the 21 most popular CAP-proposed metrics chosen by at least 10% of participants. Virtually all of these metrics were from AP and most frequently focused on the competency of the individual pathologist (7 metrics). Monitoring of discrepancies with unsolicited extradepartmental review of surgical pathology cases (238 of 337, 70.6%) and monitoring concordance between frozen section diagnosis and final diagnosis (228 of 337, 67.7%) were the 2 most frequent metrics chosen. Other monitors of diagnostic accuracy followed by participants included the following: report revisions for interpretive discrepancies in surgical pathology by 111 of 337 (32.9%), report revisions for interpretive discrepancies in nongynecologic cytopathology by 37 of 337 (11%), correlation with extradepartmental review in nongynecologic cytopathology by 48 of 337 (14.2%), deferral rate of frozen section diagnoses by 47 of 337 (13.9%), and acceptable performance of mandatory proficiency testing in gynecologic cytopathology by 38 of 337 (11.3%).
Other AP metrics followed by participants focused on TAT, practice volume, and conveying information accurately through reports. Medical practice volume in surgical pathology was monitored by 59 of 337 participants (17.5%) and review of surgical pathology cases by a second pathologist before sign-out by 52 of 337 (15.4%). Turnaround time–associated metrics emphasized frozen sections by 190 of 337 participants (56.4%), routine biopsies by 118 of 337 (35%), and autopsies. Among autopsy metrics were TAT for final diagnoses of routine autopsies monitored by 61 of 337 (18.1%), TAT for complex autopsies by 44 of 337 (13.1%), and TAT for preliminary gross autopsy findings without ancillary testing by 44 of 337 (13.1%). The pathologist's role in conveying information accurately was highlighted by measuring completeness of surgical pathology cancer reports by 81 of 337 (24%), reports amended for noninterpretive discrepancies in surgical pathology by 63 of 337 (18.7%), and the adequacy of reporting lower esophageal biopsies for Barrett esophagus by 44 of 337 (13.1%).
Other metrics followed were more general and were applicable to both AP and CP. These tended to follow the categories recommended by the ACGME and included monitoring hours of continuing medical education by 118 of 337 (35.0%); attendance at institutional, departmental and interdepartmental meetings by 138 of 337 (40.9%); on-call reliability by 79 of 337 (23.4%); and participation in self-assessment modules by 38 of 337 (11.3%).
Owing to high cost of operation and shortfalls in anticipated registration, the Evalumetrics program was discontinued on March 31, 2016.
One author (S.J.M.), serving as quality assurance officer for her academic pathology practice group, secured a departmental investment in the Evalumetrics program in response to her institution's increased focus on OPPE reporting. Importantly for this case study, although the author became a member of the Quality Practices Committee of the College of American Pathologists, the author had not been involved in the development of the Evalumetrics product before 2013. At the launch of Evalumetrics on March 15, 2013, the author's academic institution became the first official customer of the product. The practice group at that time consisted of 20 anatomic pathologists covering autopsy, cytopathology, and surgical pathology; 3 pathologists with both anatomic and clinical pathology responsibilities; and 4 purely clinical pathologists practicing molecular diagnostics and transfusion medicine.
It was decided that calendar year 2013 would be a pilot phase for Evalumetrics within the practice group; therefore, 10 provider licenses were purchased rather than 27. The practice group leaders regarded the price point per license as appropriate. Ten provider profiles were created within the Web-based software program, representing the different practice patterns within the pathology group. Initial setup was time consuming, as several available metrics covered a single practice area, varying slightly in calculation method or definition. In addition, options existed to apply specific metrics in subspecialty anatomic practice areas such as dermatopathology and neuropathology. Significant director-level time was required to identify advantages and disadvantages applicable to different initial setup methods within the system.
The value of the system identified by the institution was the intellectual property of the metrics themselves. Common AP metrics, such as amended reports and TAT, were broken down in useful ways and published references were provided. For some practice areas, especially within CP, valuable new metrics were selected and applied to appropriate physicians.
Routine data entry into Evalumetrics was more complicated than expected for 2 main reasons. First, the preexisting departmental quality assurance program gathered data on a monthly basis, while the system asked for quarterly totals. Second, the system sometimes asked for direct metric data, and sometimes asked for raw data in numerator/denominator format for calculated metrics. This was problematic because existing laboratory information system reports in use at the institution did not always provide the numbers in the format required for data entry (raw or calculated). During 2013, 2 residents provided assistance during their separate departmental quality assurance rotations (1 month each), and 1 departmental staff assistant provided many hours of data entry time that required a fair amount of resident and attending physician oversight. During a 5-hour institution-site visit hosted in August 2013, leaders of the practice group along with residents and staff assistants involved in the project met with Evalumetrics leaders. The site visit consisted of real-time data entry demonstrations, report generation demonstrations, and discussion of strategies for streamlining data entry. A central theme was the need for direct data import strategies from the laboratory informatics system.
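The data-reformatting burden described above can be sketched as follows: monthly quality assurance counts had to be rolled up into quarterly numerator/denominator totals before entry. The variable names and counts are illustrative assumptions, not the institution's actual data or the Evalumetrics interface.

```python
# Illustrative sketch of rolling monthly QA counts into the quarterly
# numerator/denominator totals the system asked for; example data only.
monthly_counts = [  # (month, amended_reports, total_reports)
    ("Jan", 2, 510),
    ("Feb", 1, 480),
    ("Mar", 4, 530),
]

numerator = sum(n for _, n, _ in monthly_counts)
denominator = sum(d for _, _, d in monthly_counts)
print(f"Q1 entry: {numerator}/{denominator} "
      f"({100.0 * numerator / denominator:.2f}%)")
```

Trivial as this arithmetic is, the friction came from laboratory information system reports that did not emit counts in the required raw or calculated form, which is why direct data import from the laboratory informatics system became a central theme of the site visit.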
A pilot-phase subscription was continued at the institution through 2014, with periodic flurries of data entry as time allowed. During 2013–2014, the author simultaneously created 6-month OPPE summaries for practice group members, based on internal quality management program data and a few new metrics that were part of the Evalumetrics suite. These simple reports in Microsoft Excel (Microsoft Corporation, Redmond, Washington) format were approved to meet the JC criteria by the institutional Credentials Committee. Although the downstream opportunity for OPPE benchmarking beyond the practice group was deemed valuable, the continued expenditure of human resources in support of data entry into Evalumetrics could not be supported and the subscription was discontinued for 2015.
The Evalumetrics program was created as a tool for development and management of professional practice evaluation, using metrics that are pathology-specific. The program was developed in response to the JC's mandate that all practitioners undergo evaluation of practice as part of gaining privileges in a health care organization. The metrics in the program were developed by the CAP Quality Practices Committee, with input from various CAP Resource Committees. This level of expertise with quality program development was the foundation for the many pathology metrics that were created. The end result was a Web-based reporting platform with secure reporting of data and the ability for easy reporting to hospital or health system administrators.
The selection of metrics for AP by most participants conveniently fulfilled the central theme of the JC mandate that professional practice evaluation must reflect the competency of individual physicians. In the case of pathologists, this is most frequently the individual pathologist's contribution (not the laboratory or pathology department as a whole) to a test result or surgical pathology diagnosis. This emphasis was reflected by many of the metrics selected that focused either on the diagnostic accuracy of pathologists or the process of conveying information clearly and correctly through surgical pathology reports. The former had the 2 most popular metrics: monitoring discrepancies with unsolicited extradepartmental review in surgical pathology and disagreement between frozen section diagnoses and final diagnoses. The latter had 3 metrics that included compliance with cancer protocols, report revisions for noninterpretive errors, and the adequacy of reporting lower esophageal biopsies for Barrett esophagus.
During development of the program, it was felt that the greatest need for metrics was in the CP realm, rather than AP. This was due to the relative scarcity of Q-Probes and Q-Tracks that evaluated individual pathologist practice in CP. Metrics for AP, conversely, were more widely available. Thus, the committee found it surprising that subscribers used so few CP-specific metrics. The CP-related metrics that subscribers did choose were not specific to CP; rather, they monitored general activities of pathologists relating to professionalism, systems-based practice, and fund of knowledge. The general metrics with applicability to CP included attendance at institutional, intradepartmental, and interdepartmental meetings; continuing medical education; self-assessment modules; and on-call reliability. However, as users had the ability to track custom-defined activities within these categories, it is uncertain how many institutions followed CP-related activities. Subscribers had the ability to follow on-call reliability for frozen section calls and/or clinical calls, but the choice of which activity to follow was left to the subscriber. Similarly, there was no mandate by the Evalumetrics program that continuing medical education, self-assessment modules, and meeting attendance focus on CP.
The relatively infrequent use of CP-specific metrics could be due to the low volume of activity directly attributable to an individual pathologist in CP compared with AP, particularly when assessing diagnostic accuracy. Also, most pathologists practice both AP and CP, so AP metrics may be chosen preferentially. Many more surgical pathology specimens are attributed to a single pathologist than clinical specimens, as can be seen in the volume of bone marrow specimens compared to surgical pathology specimens in a typical practice setting. This in no way detracts from the effort involved in reading out a bone marrow and synthesizing the myriad molecular and genetic tests that often accompany a bone marrow, but simply reflects volume. Even in those CP areas where individual pathologist volumes may be substantial, such as in serum protein electrophoresis, there is generally little opportunity to measure competency, as these studies are not ordinarily submitted for extradepartmental review and follow-up on patients with repeated studies at other institutions is generally not available.
Perhaps even more surprising to the committee than the relative lack of CP metrics chosen was the near absence of metrics that specifically monitored pathologist competency in managing either clinical or anatomic laboratories. Possible metrics applicable to administration of laboratories involve on-call reliability and TAT. The former satisfies the pathologist's obligation under the Clinical Laboratory Improvement Amendments of 1988 (CLIA) to be responsible and available for clinical consultation, while the latter reflects overall laboratory efficiency. Turnaround time can be viewed as a surrogate measure of administrative competency in AP, as it is difficult to separate out the pathologist's activity from the entire progression of processing a specimen, particularly TAT of routine biopsies and frozen sections. It is also probably meaningless to attempt to separate the pathologist portion of TAT, as it is a reflection of the entire laboratory and it highlights the importance of having pathologists involved in the management of the laboratory.
The lack of usage of administrative metrics for CP activities potentially undervalues the pathologist's role in these complex endeavors. Given the recent emphasis on population health, value-based purchasing, and the gradual decline of fee-for-service in medicine, CP activities may be an area where a pathologist can demonstrate value on a large scale. In addition to ensuring prompt and accurate clinical test results, pathologists have the ability to further add value by providing input into proper test selection, test underutilization, and test overutilization. These activities should probably be considered as core competencies for pathologists with CP responsibilities.
A valuable component of the Evalumetrics program was gaining detailed feedback from 1 large subscribing academic center. The difficulties they reported provide some insight into the complexities of professional practice evaluation and likely apply to other practice settings. The group reported the expenditure of human resources for initial setup and continued data entry as the primary difficulty of the program. Disconnects between data formatting from laboratory information systems and data required for reporting in Evalumetrics were noted to be problematic. They did, however, see significant value in the CAP metrics themselves, with many measures broken down in useful ways, and convenient references for many metrics. The price point for licensing fees was not reported as a barrier. Ultimately, the group decided to use a combination of Evalumetrics program–derived metrics and custom metrics in a simplified spreadsheet format.
Ultimately, Evalumetrics suffered from low registration numbers and some attrition problems during its tenure. This occurred despite a predevelopment needs-based market assessment that revealed a strong desire for such a tool and one-third of survey respondents indicating a likelihood of enrolling. There are a number of possible reasons for this. First, the JC appeared to be slow to enforce its 2008 mandate, leaving little to no incentive to meet the mandate (at least initially). Second, some pathology practices were likely forced to use hospital-wide (ie, nonpathology-specific) programs, since it is easier for hospital administrators to deal with a single program. Third, it is likely that some practices created their own metrics, either based on CAP checklist items or prior Q-Probes and Q-Tracks. Fourth, it is likely that some attrition was a consequence of practices temporarily using Evalumetrics for guidance, after which the practices saw no use in continuing to be part of the program. Finally, to the outside observer, Evalumetrics looks large and cumbersome; it is possible that potential participants misunderstood the program and thought they had to participate in all offered metrics.
An interesting aspect of the Evalumetrics program was the attempt by a national organization to develop a quality system for practicing pathologists. While use of the program was voluntary and the program could be customized for each practice and/or pathologist, Evalumetrics could have helped establish quality benchmarks for practicing pathologists. Such benchmarks could be very useful for quality and value-based incentive programs such as the Merit-Based Incentive Payment System being developed by the Centers for Medicare & Medicaid Services, with transition scheduled for 2017. Will the experience with the Evalumetrics program temper involvement of national pathology organizations in developing and piloting benchmarks useful for the coming wave of quality, value, and outcomes transparency in our profession? Perhaps yes, or at least until regulatory and payment pressures become sufficiently acute to drive wholesale change in how we monitor our practices.
The CAP will maintain the Evalumetrics professional practice metrics and benchmarks as intellectual property. It is likely that this valuable information and experience will prove useful as reimbursement for pathology services becomes increasingly based on value-based purchasing and pay-for-performance models. A monograph of all the metrics is currently under development. Perhaps future improvements in interoperability between laboratory information systems will allow for easier data extraction, alleviating some of the burden of manual data collection.
On behalf of the CAP Quality Practices Committee, the authors wish to thank all the pathologists and pathology practices who participated in the Evalumetrics program. The committee is also grateful for the support of the CAP Council on Scientific Affairs, the assistance of many CAP Resource Committees, and the efforts of many CAP staff who devoted time to this project. The following additional CAP Quality Practices Committee members were involved in development of the Evalumetrics program: Don S. Karcher, MD, Glenn E. Ramsey, MD, Kathryn S. Dyhdalo, MD, Kirsten W. Alcorn, MD, Larry W. Massie, MD, Michael O. Idowu, MD, MPH, Peter J. Howanitz, MD, Peter L. Perrotta, MD, Jennifer L. Hunt, MD, MEd, Frederick A. Meier, MD, and Roberta L. Zimmerman, MD.
The authors have no relevant financial interest in the products or companies described in this article.