Abstract
Context.—Laboratory quality indicator data are most often presented and reported as a percentage of variance. This format may be misleading because the variances, and therefore the percentages, appear to be low.
Method.—Current data from laboratory quality indicators and national data derived from several years of College of American Pathologists Q-Probes studies were normalized to parts-per-million defects, as commonly practiced in the manufacturing and service industries for benchmarking performance.
Results.—Laboratory data in parts-per-million defects demonstrated opportunities for significant improvements in laboratory performance across the total testing process.
Conclusions.—Historical quality assurance programs do not appear to have significantly improved the total testing process. Manufacturing and service industries are using quality systems strategies, such as ISO 9000 and the Baldrige Award Criteria, to effect improvements in both productivity and cost. Quality system solutions for performance improvement may provide a systematic approach to improving laboratory performance.
In the last decade, the initiative for quality assurance and quality improvement in laboratories has been driven predominantly by the requirements of regulatory and accrediting agencies. The Clinical Laboratory Improvement Amendments of 1988 require that a clinical laboratory's quality assurance program include evaluation of each of the steps of the total testing process.1 The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) states in its Comprehensive Accreditation Manual for Pathology and Clinical Laboratory Services that the laboratory is required to “systematically assess and improve important functions and work processes and their outcomes.”2 The JCAHO also requires laboratories to perform external comparison of their performance with others in a process commonly known as benchmarking.
In response to these requirements, laboratories have identified indicators that measure their performance in key functions related to patient care and satisfaction. Performance on these indicators is to be reported periodically to the organization at large, along with an explanation of the follow-up actions to be taken to improve performance. The goal of improving performance is for the laboratory (and the health care organization) to design its processes well and to systematically measure, assess, and improve its performance to improve patient health outcomes.
Many organizations use the Design–Measure–Assess–Improve model to effectively manage their business processes and to drive improvement of organization performance and competitiveness. The expectations for achieving performance excellence are described in the Malcolm Baldrige National Quality Award Criteria for Performance Excellence,3 which operationalize the purposes of Public Law 100–107, the Malcolm Baldrige National Quality Improvement Act of 1987.
Category 4 of the award criteria (Information and Analysis) describes how organizations are to use measurement and analysis of performance through the creation and use of indicators that represent factors leading to improved customer, operational, and financial performance. Item 4.2 in category 4 specifically requires the organization to use data, information, competitive comparisons, and benchmarking information to promote major improvements in areas critical to its competitive strategy. To visualize internal and comparison information, organizations often use the Standard Six Sigma Benchmarking Chart.4 This chart plots the number of defects or adverse events, normalized to parts per million (ppm), against “sigma,” a common measure of variation in both industry and clinical laboratories. The Six Sigma strategy measures the degree to which any process deviates from its goal. Average products, regardless of their complexity, have a quality performance value of about 4 sigma. The best, or “world class,” products perform at the 6-sigma level. The sigma value indicates how often defects are likely to occur; the higher the sigma value, the less likely the process is to produce defects. Six Sigma philosophy holds that there is a direct correlation among the number of product defects, wasted operating costs, and the level of customer satisfaction. Consequently, as sigma increases, process reliability improves, operating costs go down, and customer satisfaction increases.
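The relationship between a sigma level and its defect rate can be sketched with a short calculation. This is a minimal illustration, not taken from the article: it assumes the conventional 1.5-sigma long-term drift allowance used in standard Six Sigma tables, and the function name is ours.

```python
import math

def sigma_to_ppm(sigma, shift=1.5):
    """Approximate long-term defects per million for a process at a given
    short-term sigma level, assuming the conventional 1.5-sigma drift."""
    z = sigma - shift
    # One-sided upper tail of the standard normal distribution.
    tail = 0.5 * math.erfc(z / math.sqrt(2))
    return tail * 1_000_000

print(round(sigma_to_ppm(4)))     # about 6210 ppm: "average" performance
print(round(sigma_to_ppm(6), 1))  # about 3.4 ppm: "world class"
```

Under this convention, the 4-sigma level corresponds to roughly 6000 defects ppm and the 6-sigma level to 3.4 defects ppm, matching the benchmarks quoted in the text.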
The Figure depicts common industry performance on the Six Sigma benchmarking chart. The circle at the 4-sigma level shows the average quality performance of common consumer services that are usually performed manually. These services show a range of defects from 3000 to 10 000 ppm. Of note is the high defect rate of tax advice provided by telephone by the Internal Revenue Service. The benchmark for what is called world class is a 3.4 ppm defect rate. The United States demonstrates world class performance in the very low defect rate for domestic airline fatalities, at 0.43 ppm.
This article presents quality indicator information collected from 3 clinical laboratories and normalizes it to ppm. In addition, information from the collective experience of hundreds of US laboratories—as reported in the College of American Pathologists (CAP) Q-Probes studies—is also normalized to ppm.
METHODS AND MATERIALS
For the first part of this study, data were collected by 3 clinical laboratories for a set of defined quality indicators. The laboratories represented a range of types and sizes, described as follows: a laboratory based in a medium-sized urban community hospital, a laboratory in a large urban medical center, and a regional multisite laboratory serving a wide metropolitan area.
The indicators chosen for monitoring by the laboratories represented selected parts of the total testing process, described as the preanalytic, analytic, and postanalytic phases of testing. Each phase of the testing process was monitored by at least 1 indicator. Some phases had more than 1 indicator because of issues unique to the laboratory that chose to monitor them.
Data were collected for date ranges specified for each laboratory. The nature of the indicator determined whether data could be obtained from the laboratory's information system or needed to be collected manually. Not all laboratories collected data on every indicator.
For the second part of this study, these data were compared with reports published as part of the CAP Q-Probes program.5 Q-Probes is a subscription service of periodic structured quality assurance studies, which laboratories can use to participate in interinstitutional comparison of performance. Q-Probes authors and CAP staff analyze the data collected and submitted by participating laboratories, prepare a written discussion of the study findings, and make general recommendations for improvement. Enrolled laboratories receive a report that benchmarks their facility's performance against that of other laboratories of a similar type. The number of laboratories participating in each study varies from about 300 to 700 institutions per study. Laboratories can subscribe to Q-Probes without submitting their data for analysis or can purchase the written report after publication. Q-Probes studies are written in a manner that allows laboratories to repeat the same study at a later time for purposes of internal comparison. Of the 70 CAP Q-Probes studies conducted between 1989 and 1995, 12 were chosen to represent various steps in the total testing process, also known as the path of workflow. Where possible, Q-Probes studies were chosen to correlate with the indicators monitored by the laboratories participating in this study.
FINDINGS
Table 1 represents the collective findings of all 3 laboratories for the indicators they chose to represent the total testing process. Data for each indicator are shown first as a percentage of variance, which is the most common format laboratories use to report quality indicator data to their respective institutions. In the last column of the table, the data are normalized to ppm for the purpose of comparing laboratory quality indicator data with general industry experience. The performance of these 3 laboratories must not be construed as representing or suggesting benchmarks of best practice; the data merely represent the performance of these laboratories at a specific point in time.
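The normalization used in the last column is simple arithmetic: a variance of 1% equals 10 000 defects ppm. A minimal sketch follows; the indicator names and rates are hypothetical, for illustration only, and do not reproduce the study's data.

```python
def percent_to_ppm(percent_variance):
    """Convert a quality-indicator variance expressed as a percentage
    into defects per million events (1% == 10,000 ppm)."""
    return percent_variance / 100 * 1_000_000

# Hypothetical indicator rates, for illustration only.
indicators = [("Order entry errors", 0.5), ("Mislabeled specimens", 0.05)]
for name, pct in indicators:
    print(f"{name}: {pct}% -> {percent_to_ppm(pct):,.0f} ppm")
```

Even an apparently low 0.05% variance corresponds to 500 defects ppm, which places the process well short of the 3.4 ppm world class benchmark.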
Table 2 presents data from the Q-Probes studies,6,7 shown first as the median value published in the written reports and then as normalized to ppm.
COMMENT
The percent variance of the 3 laboratories on their chosen quality indicators appears to be low, except for the high variances reported on missing information on cytology Papanicolaou smear requests and collection of therapeutic drug monitoring specimens. The percent variance on the Q-Probes data also appears to be low, except for cervicovaginal cytology specimen adequacy and timing for collection of therapeutic drug monitoring specimens.
When low absolute numbers of variance are divided by large test volume numbers, small variance percentages are the result. However, it is prudent to remember that a small percentage of a big number is itself a big number. Each variance event has the potential to (or in some cases did) adversely affect the patient; therefore, laboratories should not let low variance percentages on quality indicators lull them into a false sense of good performance.
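The point that a small percentage of a big number is itself a big number can be made concrete with a hypothetical annual test volume; the figures below are illustrative and are not drawn from the study.

```python
def affected_events(annual_volume, percent_variance):
    """Absolute number of variance events implied by a percentage rate
    applied to a given volume of tests."""
    return round(annual_volume * percent_variance / 100)

# A "low" 0.1% variance rate applied to 1 million annual tests
# still touches 1,000 patient specimens.
print(affected_events(1_000_000, 0.1))
```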
It is also misleading to suggest that when the laboratory's performance on a quality indicator remains stable at a low percentage over a period of time, the variance has reached an irreducible minimum due to “human nature.” What the data do indicate is that the process being measured is not capable of better performance without redesign and improvement. Six Sigma breakthroughs are the direct result of rethinking the way the work gets done, changing the process, and using automation where needed to improve the process.
Internal assessment is an essential element in quality systems such as ISO 9000,8 the American Association of Blood Banks accreditation program,9 and the model recently published for health care by the NCCLS.10 Quality indicators measure important aspects of the work processes and tasks that employees perform daily; therefore, it is reasonable to expect that collecting data to assess internal performance would be part of routine laboratory operations. Yet, most laboratory employees view data collection for quality indicators as extra work that hinders their ability to do their “real” jobs. As a result, staff are reluctant to collect and report data, and management is reluctant to actively review the data and take necessary action. Laboratory management must emphasize the role of obtaining facts and data for managing laboratory quality. In the laboratory staff's defense, however, much of the data that could be routinely analyzed to provide meaningful information about the laboratory's performance on quality indicators resides in the information system. Few management reports routinely provide quality indicator information other than turnaround time. The report-writing function of most laboratory information systems can be used to cull out some quality indicator data, but usually only after someone receives significant specialized training and makes many trial-and-error attempts at producing meaningful reports.
Six Sigma data for the airline industry show a baggage-handling performance of about 4000 ppm (0.4%) mishandled bags.4 The American public recognizes this performance as quality poor enough that travelers insist on carrying on more, and bigger, luggage to avoid the problem. Compare this 0.4% performance with any of the laboratories' or Q-Probes' quality indicator data and ask this question, “Are we satisfied with this level of performance from our accredited/regulated laboratories?”
If we consider that the pilot laboratories in this study represent the present state of quality performance, comparison of the present data with the historical Q-Probes data demonstrates that there has been no significant improvement in performance as the years have passed. One might surmise that despite 10 years of quality assurance programs in accredited/regulated laboratories, performance across the total testing process has not been significantly improved. The data in Table 2 represent laboratory performance at the 50th percentile. Therefore, the quality performance of accredited laboratories at percentiles below this level indicates a more serious performance deficit.
Further evaluation of the information presented in this paper should lead readers to speculate on the effect of current laboratory performance on patient outcomes. Each variance has the potential to adversely affect both the quality of patient care and the cost of that care. Therefore, laboratory performance measurements can be directly linked to issues faced in the organization's risk management program and in their quality cost assessment program. These considerations warrant further investigation.
Present quality assurance programs focus on the “find a problem, fix a problem” philosophy without regard for analyzing the underlying process that created the problem. To make significant improvements in laboratory performance, systematic approaches need to be considered. Manufacturing and service industries have successfully used the quality system approach of ISO 9000 or the Baldrige Award Criteria to effect improvements in both performance and productivity. Health care organizations are beginning to look to ISO 9000, and the US Congress has recently funded the Baldrige Award for Health Care.11
As health care begins to appreciate the lessons learned by the manufacturing and service sectors during the last 10 to 15 years and begins to implement quality system strategies, major breakthroughs such as the Six Sigma concept seem possible. It is hoped that accreditation programs will support and sponsor these new systematized approaches to quality to effect true performance improvement.
Acknowledgments
The authors acknowledge the following persons, who made significant contributions to data collection and manuscript review: Dianne Beesley, MT(ASCP)SBB, Emerson Clark, MBA, MT(ASCP), and James Picklo, SM(ASCP), SM(AAM), Exempla–Saint Joseph Hospital, Denver, Colo; and Mary Jane Eaves, MT(ASCP), Hennepin County Medical Center, Minneapolis, Minn.