Electronic synoptic pathology reporting using xPert from mTuitive is available to all pathologists in British Columbia, Canada. Comparative feedback reports for pathologists and surgeons were created by using the synoptic reporting software.
Our objective was to use data stored in a single central data repository to provide nonpunitive, confidential, comparative feedback reports (dashboards) to individual pathologists and surgeons for reflection on their practice, and to use aggregate data for quality improvement initiatives.
mTuitive middleware was integrated with 5 different laboratory information systems so that 1 software solution (xPert) sent discrete data elements to the central data repository. Microsoft Office products were used to build the comparative feedback reports and keep the infrastructure sustainable. Two different types of reports were developed: individual confidential feedback reports (dashboards) and aggregated data reports.
Pathologists have access to an individual confidential live feedback report for the 5 major cancer sites. Surgeons get an annual confidential emailed PDF report. Several quality improvement initiatives were identified from the aggregate data.
We present 2 novel dashboards: a live pathologist dashboard and a static surgeon dashboard. Individual confidential dashboards incentivize use of nonmandated electronic synoptic pathology reporting tools and have increased adoption rates. Use of dashboards has also led to discussions about how patient care may be improved.
Cancer is the leading cause of death in Canada and is responsible for 30% of all deaths.1 Lung, breast, colorectal, and prostate cancer are the most commonly diagnosed types of cancer in Canada (excluding nonmelanoma skin cancer).1 These cancers account for about half (48%) of all new cancer cases,1 so these were the cancer types we focused on in this study.
The pathology report for resected cancer specimens forms the basis for evaluating the need for adjuvant therapy and for patient prognostication.2 The pathology reporting of cancer specimens needs to be standardized, complete, and structured so that therapeutic dilemmas and delays do not occur because of incomplete pathology reports.2,3 The use of standardized structured datasets for pathology cancer reporting has been shown to improve patient care and clinical outcomes.4,5 In particular, standardized electronic synoptic reporting using the College of American Pathologists (CAP)–approved cancer checklists ensures complete reports, reduces clinical errors, and provides reliable aggregate data as discrete data elements to analyze for laboratory quality assurance, research, cancer registry surveillance, and other secondary uses.6–8 The electronic version of the CAP Cancer Protocols (eCCs) is currently used by 35% to 40% of all practicing anatomic pathologists in the United States and Canada.4 CAP Cancer Protocols and Checklists have been endorsed by the Canadian Association of Pathologists (CAP-ACP) and are now a pan-Canadian content standard for pathology reporting in Canada.9
Health care organizations and professional groups are being asked to develop clear quality measures and metrics that can lead to performance improvements at provider and system levels.8 With this in mind, and to increase clinician engagement, this article describes pathologist and surgeon dashboards that provide clinically relevant quality indicators meaningful to them. This will allow them to take ownership of the data and assume an active role in improving ongoing personal and system performance.
Differences in the 5 laboratory information systems (LISs) used in British Columbia (ie, Meditech Client Server, Meditech Expanse, Cerner CoPath Plus, Sunquest CoPath, and Cerner Millennium) meant data could not be easily extracted and uniformly aggregated. A middleware solution using mTuitive’s xPert10 was the only tool at the time capable of interfacing with all 5 LISs in use across the province. xPert was installed at the first site in 2012, was rolled out to all pathology sites over the next 5 years, and is now available to all pathologists in the province. In 2016, British Columbia added mTuitive’s Central Data Repository (CDR) to the program. The CDR allowed the collection of discrete tumor data with patient-level data in a single database from all submitted synoptic reporting checklists across the province, and these data were controlled by British Columbia’s Electronic Synoptic Pathology Reporting Program.
Pathologists have more than 80 checklists available for use, based on the CAP cancer protocols. Of these, the synoptic reporting advisory committee concentrated on the 5 major cancer sites (breast, colon, lung, prostate, and endometrium). Using metrics developed by the Canadian Partnership Against Cancer for the 5 major cancer sites, individualized comparative feedback reports (dashboards) were created for both pathologists and surgeons, and aggregate data from individual hospital sites were assessed.
DESIGN AND METHODS
There were 2 main goals for this project. The first was to provide individual anonymized confidential personal practice data to pathologists and surgeons. The second was to use aggregate site-specific data to look for practice variation across the province and hence identify quality improvement (QI) initiatives to enhance patient care.
The first step was to integrate mTuitive middleware with all 5 LISs in the province so that 1 product (xPert) was available for electronic synoptic pathology reporting by all pathologists in the province. The electronic synoptic pathology data flow from xPert to a single database called the CDR. Patient and case-level data from each LIS flow to the CDR, where they are joined with the xPert data. A biweekly feed sends CDR data directly to the BC Tumor Registry. CDR data can also be queried to run a variety of scheduled and ad hoc reports (see supplemental digital content and Supplemental Figure 1, at https://meridian.allenpress.com/aplm in the February 2024 table of contents).
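As a rough illustration of this data flow, the following Python sketch joins discrete synoptic data elements (as they might arrive from xPert) with patient- and case-level LIS data on a shared accession identifier. All column names here are hypothetical; the actual CDR schema is not described in this article.

    import pandas as pd

    # Discrete data elements submitted from xPert (one row per synoptic
    # checklist); the field names below are illustrative only
    xpert = pd.DataFrame({
        "accession_id": ["S21-0001", "S21-0002"],
        "checklist": ["Colon", "Breast"],
        "nodes_examined": [14, None],
    })

    # Patient- and case-level data fed from each LIS (again, hypothetical fields)
    lis = pd.DataFrame({
        "accession_id": ["S21-0001", "S21-0002"],
        "site": ["Hospital A", "Hospital B"],
        "pathologist_alias": ["P012", "P156"],
    })

    # Join on the shared accession identifier to yield one queryable record
    # per case, analogous to how CDR data are queried for scheduled reports
    cdr = xpert.merge(lis, on="accession_id", how="left")
    print(cdr)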
The xPert-CDR system has 2 major advantages: (1) checklists need only be updated in xPert and are then automatically deployed to all LISs, rather than being updated separately in each LIS; and (2) control of the CDR allows access to and use of the data for comparative feedback reports, QI projects, and research.
A subset of the indicators, chosen by the pathologist champions for each checklist, was incorporated into the dashboards and endorsed by the provincial synoptic pathology reporting advisory committee. Pathologist champions are provincially recognized experts. The advisory committee includes representative pathologists and technical staff from each health authority.
Pathologist Dashboard
This was designed to be a 1-page overview of several actionable metrics for a particular period and personalized for each pathologist to allow that pathologist to compare their metrics with those of their peers. Actionable metrics were deliberated on and refined throughout this project by the synoptic pathology reporting advisory committee, clinical tumor-specific cancer care groups, and pathologist champions for each cancer type. The result was the establishment of a core and supplementary set of metrics shown in Table 1.
Surgeon Dashboard
The chosen metrics for the 3 most common cancer resections for the Surgeon Dashboard are seen in Table 2. Details of the development of the dashboards can be found in Supplemental Figure 2.
RESULTS
Pathologist Dashboard
An example of a single Microsoft Excel report representing the Pathologist Dashboard is shown in Figure 1 for pathologist P012 in 2020. The Microsoft Excel report had additional tabs explaining the reporting design choices for the pathologist, including how the cancer metrics and their corresponding targets were selected and which literature was used to compute the 95% CIs from binomial proportions.
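The article cites literature for the binomial-proportion CIs but does not reproduce the formula; as a minimal sketch, the Wilson score interval below is one common choice of 95% CI that behaves sensibly when counts per quarter are low.

    from math import sqrt

    def wilson_ci(successes: int, n: int, z: float = 1.96):
        """Approximate 95% Wilson score interval for a binomial proportion."""
        if n == 0:
            return (0.0, 1.0)
        p = successes / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return (max(0.0, centre - half), min(1.0, centre + half))

    # eg, 11 of 13 colon resections with >= 12 lymph nodes examined in a
    # quarter; the wide interval reflects the small sample size
    print(wilson_ci(11, 13))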
The Pathologist Dashboard is available 24/7 and is accessed through a Microsoft SharePoint site. Pathologists can access the Microsoft SharePoint site anytime they are logged into their hospital network account. Once they are logged into the network, there are no login credentials or passwords required to access the report. The SharePoint site will show each pathologist their unique alias code (eg, P012), which they will use to filter the report for their individual data.
Upon opening the report, the view defaults to the “2. Dashboard” tab at the bottom. This tab gives the core metrics reported by that pathologist for the time frame requested and draws the eye to the performance indicators highlighted by the green circles and yellow triangles. Where there are recognized and agreed-upon targets, these are shown. The pathologist can compare their metrics with those of their peers and evaluate whether there is practice variation or an opportunity for QI. They can then use any of the other tabs; for example, the “1. Start Here” tab explains the report’s contents and how to use and refresh the report each quarter. A training video describes the dashboard report, how to use it, and the data-updating process.
After reviewing their individualized dashboard in the “2. Dashboard” tab, the pathologist may wonder how they compared with their peers on a specific metric over time. They can navigate to the “3. Deep Dive” tab (Figure 2) to access time-series graphs and statistically significant insights for that metric. Furthermore, CIs were added to the graphs to take small sample sizes (ie, few synoptic reports per pathologist or surgeon) into account and to assist with interpreting apparently anomalous results. Figure 2 shows the “% colon resections with at least 12 lymph nodes examined” for the pathologist with alias P156.
The lightly shaded blue regions in the graph indicate the 95% CI for P156’s values per quarter. If the shaded blue region overlaps another colored line, the 2 entities’ values are not statistically significantly different. Conversely, when the shaded region does not overlap another colored line, the values can be considered different at the 95% confidence level. Furthermore, data point symbols were given different shapes so that the graph remains interpretable for color-blind readers and in black-and-white prints of the tab.
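To make the overlap rule concrete, here is an illustrative matplotlib sketch of such a time-series graph with a shaded CI band and distinct marker shapes; the quarterly values are invented for demonstration and do not come from the CDR.

    import matplotlib.pyplot as plt

    quarters = ["2020-Q1", "2020-Q2", "2020-Q3", "2020-Q4"]
    x = range(len(quarters))
    p156 = [85, 90, 78, 92]      # pathologist's quarterly rates (%), invented
    lo = [60, 68, 52, 72]        # lower 95% CI bounds (eg, Wilson interval)
    hi = [96, 98, 93, 99]        # upper 95% CI bounds
    province = [88, 87, 89, 90]  # provincial comparator line (%), invented

    fig, ax = plt.subplots()
    ax.plot(x, p156, marker="o", label="P156")                    # circles
    ax.fill_between(x, lo, hi, alpha=0.2, label="95% CI (P156)")  # shaded band
    ax.plot(x, province, marker="s", label="Province")            # squares
    ax.set_xticks(list(x))
    ax.set_xticklabels(quarters)
    ax.set_ylabel("% colon resections with >= 12 lymph nodes examined")
    ax.legend()
    plt.show()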
After the “3. Deep Dive” tab was shown to the Working Group, it was suggested that users may want to see CIs for site, health authority, and the province. Hence, Bisrey Analytics produced the “3b. Regionalized Deep Dive” tab (Figure 3). Rather than displaying all entities, the graph displays only the selected one. Another feature in this tab is the ability to select originating or submitting identifiers to distinguish counts of synoptic reports. In Figure 3, the dark green line shows the turnaround time rate for British Columbia, and the shaded area represents the CI.
Tabs 4, 6, 7, and 8 at the bottom of the Pathologist Dashboard (Figure 1) are for information sharing: they describe in detail the sources of the data presented, how target ranges were decided, and how the statistics were calculated, and one lists acronyms. These were included for transparency about the methodologic approach to the data.
Pathologists can use the Pathologist Dashboard to provide concrete data exemplifying their performance during annual performance reviews. For example, Figure 4 shows an individual pathologist’s performance compared with that of their peers in reporting endometrial carcinoma during a calendar year.
Surgeon Dashboard
An example of the Surgeon Dashboard is shown in Figure 5. This dashboard corresponds to surgeon G7289, showing their individual data for the calendar year 2021 related to the colorectal cancers reported synoptically by pathologists for that surgeon. It gives the core metrics for the time frame requested and draws the eye to the performance indicators highlighted by the green circles and yellow triangles, as described in the dashboard legend. Where there are recognized and agreed-upon targets, these are shown. The surgeons elected to receive their report once per annum as a confidential emailed PDF. The report also includes their specific metrics over time. For example, Figure 6 shows the percentage of colon resections with at least 12 lymph nodes harvested, allowing the individual surgeon to compare their data with those of their peers in the same hospital, in the same health authority, and across the province. The Deep Dive pages, similar to those in the Pathologist Dashboard, appear on subsequent pages of the PDF report and present time-series graphs with CIs for each metric. A surgeon’s individualized report contains only the pages for indicators relevant to their dashboard page.
Performance Testing
We used statistical testing of the aggregate data to determine how confident we are that a site or region is a positive or negative outlier (see Supplemental Figure 3).
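The exact test is described in Supplemental Figure 3, which we do not reproduce here. As a hedged sketch of the general idea, the following compares a site’s observed proportion with the provincial rate using a two-sided binomial test and reports the direction of any statistically significant deviation; the rates and counts are invented.

    from scipy.stats import binomtest

    provincial_rate = 0.90       # provincial proportion meeting the metric (assumed)
    site_hits, site_n = 40, 60   # site-level counts (invented example)

    result = binomtest(site_hits, site_n, provincial_rate, alternative="two-sided")
    if result.pvalue < 0.05:
        direction = "negative" if site_hits / site_n < provincial_rate else "positive"
        print(f"Site is a potential {direction} outlier (p = {result.pvalue:.4f})")
    else:
        print(f"No evidence the site is an outlier (p = {result.pvalue:.4f})")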
Reporting Aggregate Data for Quality Improvement Initiatives
Site and health authority level data can be presented in different types of graphs to reveal where variation exists and enable QI opportunities. One example of a QI initiative investigated pT3 colorectal cancers reported with macroscopic tumor perforation, suggesting incorrect tumor staging: colorectal cancer cases with macroscopic tumor perforation should be staged as pT4a or pT4b. Of 4010 colorectal cancers in the CDR, 44 cases (1%) reported macroscopic tumor perforation but a final stage of pT3, and we wanted to find the root causes of this discrepancy. All 44 cases were reviewed by 2 pathologists after all patient, facility, and pathologist identifiers were removed. Most were correctly reported as true perforation through the tumor, but the perforation was often iatrogenic and therefore did not change the pT3 status, that is, the tumor staging was correct. Many of the cases with tumor perforation were rectal cancers in patients who had received preoperative radiotherapy with or without chemotherapy; the perforation was through nonperitonealized tissue and therefore not stage pT4a, so pT3 was correct. In 7 cases the computer entry was incorrect and there was no macroscopic tumor perforation (ie, no perforation through the tumor); all were correctly staged as pT3. Fifteen of the 4010 cases (0.4%) were understaged. To avoid understaging, it needs to be clarified that perforation through the tumor makes the tumor pT4a even if tumor cells do not extend to the serosal edge of the perforation.
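A minimal sketch of the kind of CDR query behind this audit is shown below, assuming hypothetical column names; it flags colorectal cases reporting macroscopic tumor perforation while staged pT3, producing the candidate list for two-pathologist review.

    import pandas as pd

    # Simplified stand-in for CDR colorectal records; fields are hypothetical
    cases = pd.DataFrame({
        "accession_id": ["S1", "S2", "S3"],
        "pT": ["pT3", "pT4a", "pT3"],
        "macroscopic_perforation": [True, True, False],
    })

    # Perforation reported but final stage pT3: candidates for manual review
    flagged = cases[(cases["pT"] == "pT3") & cases["macroscopic_perforation"]]
    print(flagged)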
Variation Across Health Authorities
Lymph node harvest rate in colorectal cancer is an example of a metric that showed variable results between health authorities (see Supplemental Figure 4). We spent some time trying to understand why lymph node harvest rates in colorectal cancer resections vary between health authorities. Our steps included presenting data at site visits with pathologists, arranging for one of our colorectal specialist pathologists to host an educational webinar on lymph node harvesting, and inviting the site with the highest lymph node yield to give a Zoom presentation describing their step-by-step procedures. For the specific sites where the data signaled there may be a problem, we reviewed individual pathology reports, looking for a reason why lymph node retrieval was low. Possible explanations for low yield include not using a fat-clearing agent and short specimen length, but neither seemed to be the case at this site. As we monitored this metric, we noticed an improvement in lymph node yield, as shown in Figure 7.
Further examples of looking at 1 specific metric from different organ sites, that is, visceral pleural invasion in lung cancer specimens (Supplemental Figure 5), lymphovascular invasion in endometrial carcinoma (Supplemental Figure 6), chemotherapy response scoring in breast carcinoma (Supplemental Figure 7), and resection margin status in pT2 radical prostatectomy specimens (Supplemental Figure 8), illustrate how the aggregate data can be used to look for QI opportunities.
Variation Across Sites
Presenting data in different formats can reveal potential outliers. For example, breast histologic grade distribution displayed in a radial chart shows variation between sites (Figure 8). Each color on the graph represents a site. The radial chart shows that all sites, except the one represented in red, report a similar grade distribution.
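For readers unfamiliar with radial charts, the following matplotlib sketch draws one from invented grade distributions for 3 sites; a site whose polygon diverges from the others, like the red site in Figure 8, stands out visually.

    import numpy as np
    import matplotlib.pyplot as plt

    grades = ["Grade 1", "Grade 2", "Grade 3"]
    sites = {  # percentage of cases per grade; values are invented
        "Site A": [20, 50, 30],
        "Site B": [18, 52, 30],
        "Site C": [5, 30, 65],  # the divergent site
    }

    angles = np.linspace(0, 2 * np.pi, len(grades), endpoint=False).tolist()
    angles += angles[:1]  # repeat the first angle to close each polygon

    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    for site, dist in sites.items():
        ax.plot(angles, dist + dist[:1], marker="o", label=site)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(grades)
    ax.legend()
    plt.show()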
Another example shows the variation in intactness of the total mesorectal excision (TME) in rectal cancer (Supplemental Figure 9), displayed by pathology region. Variation in completeness of the mesorectum in TME specimens initiated a discussion of the significance of this finding.
Variation Over Time
Aggregate data can be shown in a run chart to show how results change over time. The run chart in Figure 3, for example, shows the percentage of colorectal cancer cases signed out within 14 calendar days of receipt in the laboratory, with the dark green line showing the rate for British Columbia and the shaded area the CI. Another example of a run chart is serosal penetration in colorectal cancer (Figure 9).
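As a hedged sketch of the metric behind this run chart, the following computes the quarterly percentage of cases signed out within 14 calendar days of receipt; the dates and column names are invented for illustration.

    import pandas as pd

    cases = pd.DataFrame({
        "received":   pd.to_datetime(["2021-01-04", "2021-02-10", "2021-04-02"]),
        "signed_out": pd.to_datetime(["2021-01-12", "2021-03-05", "2021-04-20"]),
    })

    # Flag cases signed out within 14 calendar days of receipt in the laboratory
    cases["within_14d"] = (cases["signed_out"] - cases["received"]).dt.days <= 14

    # Aggregate to the quarterly percentage plotted on the run chart
    cases["quarter"] = cases["received"].dt.to_period("Q")
    pct = cases.groupby("quarter")["within_14d"].mean() * 100
    print(pct)  # 2021Q1: 50.0 (one of two within 14 days), 2021Q2: 0.0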
Adoption Rate
Use of the xPert product is not mandatory, but it is strongly encouraged. A survey of 5254 eligible cancer resection cases in British Columbia during 2017–2019 showed that 3924 (75%) had a synoptic report submitted. The Pathologist Dashboard was made available to pathologists in 2019. In 2020, 110 of 138 pathologists (80%) were consistent users of the synoptic reporting software, and in 2021, this proportion increased to 96 of 113 pathologists (85%). Figure 10 shows increasing adoption in all health authorities.
DISCUSSION
British Columbia implemented Level 6 electronic synoptic pathology reporting in 2012–2017 and spent 2018–2020 using the data in our CDR to develop comparative feedback reporting for pathologists and surgeons and to pursue QI initiatives. There were hurdles along the way, including finding a sustainable solution that used programs already available within the organization, to save on costs and capital expenditure while optimizing the result. It became apparent early on that managing and using the vast amount of data in the CDR would require the expertise of data scientists, and they were key to developing a solution for our needs.
We believe that the use of data for comparative feedback reporting as outlined in this article is novel and could easily be transposed to other systems and improved on. For the pathologists, the dashboards include information related to the 5 major cancer sites (breast, lung, colorectal, prostate, and endometrium). Pathologists preferred a live, dynamic, up-to-date dashboard that the individual can manipulate in numerous ways, depending largely on the technologic prowess of the user. For users performing simple tasks, there is a 1-page “what you see is what you get” dashboard with target ranges clearly delineated in a nonthreatening way while suggesting opportunities for improvement if necessary. For the advanced user, there are additional pages offering run charts that show trends over time, radial charts that compare results between regions in a circular layout, and turnaround time charts that show case completion rates by site.
Surgeons overwhelmingly preferred a single annual report in a flat PDF format; at present this is restricted to breast, colorectal, and prostate cancers, with plans to expand to lung and endometrium. Because of issues related to confidentiality, the surgeons also wanted a nonmandatory opt-in program, making it incumbent on the surgeon to join. The volumes of cases done per quarter are quite low for most surgeons, and thus the CIs are very wide. In the future it may be desirable to switch from quarterly indicators to longer periods such as years.
Both dashboards are anonymized, confidential, and nonpunitive. The individual dashboards are not reviewed at any committee level. Aggregate site, health authority, and provincial data are discussed at the committee level, looking for practice variation and opportunities for QI; using the aggregated indicator reports, we were able to identify QI initiatives in every organ site.
Accepted target ranges were inserted on the dashboards where appropriate, with no implication that performance outside the target range was poor, but rather an inference that this would be an area to reflect on. We opted to show the target ranges in a nonconfrontational manner so that the dashboards are not viewed as punitive. Target selection needed to be transparent so pathologists and surgeons could be confident the targets were realistic and supported by the literature. Moreover, indicator light wording was specifically chosen to be less offensive (ie, yellow lights saying “observed value deviates from target range” instead of red lights saying “not meeting standards”). At present, displaying quarterly data, with few data points, makes it harder to distinguish statistical differences. In the future, we advise plotting years of data on the x-axis, thereby incorporating more checklists and narrowing the CI band, to identify statistical differences more easily.
The dashboards were primarily built by using currently supported IT infrastructure. Should a Microsoft PowerBI, Microsoft SQL Server Reporting Services, Tableau, or other such reporting server be available, one could translate this product to those platforms. That would require a significant degree of work, but the back end and design of the products illustrated in this article should serve as a good prototype design to guide that work.
The development of the dashboards began by consulting with pathologist checklist champions and surgical leads to determine which data elements should be included on the dashboards. Postimplementation questionnaires were circulated to pathologists and surgeons for feedback. We asked whether they trusted the data, the security and confidentiality of the data, and the nonpunitive nature of the project. Overall, responders did not express concern about any of these issues. Informal, in-person feedback during site visits was the most useful. It will be interesting to see whether the surgeons’ current preference for the annual report will change to a dynamic, up-to-date report similar to the Pathologist Dashboard. Streamlining the data flow allows for the possibility of scaling up to incorporate new cancer checklists in the future.
Use of the xPert product is not mandatory but it is strongly encouraged. One of the enticements to use xPert is to provide feedback to the pathologists and surgeons on their performance. In engaging pathologists in this way, we hoped to increase the adoption rate. Adoption rates can be used as a surrogate marker to assess the value pathologists get from the comparative feedback reports.
British Columbia’s unique approach to implementing electronic synoptic pathology reporting allowed for standardization across the province in an effective and efficient manner. There are many advantages to having a provincial synoptic reporting program. Surgeons from across British Columbia receive a standardized pathology report containing all the mandatory fields defined by CAP’s most recent cancer protocol. We have a provincial database that allows us to identify trends in patient referral patterns and in pathologist and surgeon patterns of practice. Analyzing the data also shows us where we are delivering consistent pathology service and where potentially anomalous results exist. These are all advantages at the system level but do not provide immediate benefit to individual pathologists. Giving feedback to pathologists in the form of individual dashboards created an incentive for pathologists to start using the software. Also, when pathologists take time to assess their performance, they can claim credits in the Maintenance of Certification program of the Royal College of Physicians and Surgeons of Canada. The pathologist can also use this information during annual performance reviews. As pathologists and surgeons continue to review their results, we expect this will encourage informal conversations between peers regarding individual practice. We also hope that the dashboard data will inform discussions within established communities of practice that support policy changes for best patient care. Although the synoptic checklist does not capture instances where information is missing from the pathology requisition, these discussions give pathologists and surgeons a venue to address such issues.
In summary, this initiative focuses on using discrete pathology data collected via synoptic reporting software to mobilize evidence at the point of care to (1) increase the knowledge capacity of pathologists, surgeons, and other interdisciplinary groups of clinicians; and (2) inform diagnostic and treatment care pathways, clinical practice guidelines, and/or program planning. Some of the QI initiatives identified included (1) colon lymph node harvest, (2) TME completeness, (3) breast biomarker reporting, (4) treatment response to neoadjuvant therapy in breast cancer, and (5) resection margin status in radical prostatectomy. The infrastructure created for the feedback reports allows us to expand to more checklist types and to create ad hoc reports.
This project was funded by the Canadian Partnership Against Cancer. We would like to thank the pathologists of British Columbia for their contributions to this project.
References
Author notes
Supplemental digital content is available for this article at https://meridian.allenpress.com/aplm in the February 2024 table of contents.
Bisra is a cofounder of Bisrey Analytics. The other authors have no relevant financial interest in the products or companies described in this article.
The opinions expressed in this paper are purely those of the authors and not those of Canadian Partnership Against Cancer or the Provincial Health Services Authority.