Context.—During the past 25 years, the College of American Pathologists (CAP) Q-Probes program has been available as a subscription program to teach laboratorians how to improve the quality of clinical laboratory services.
Objective.—To determine the accomplishments of the CAP Q-Probes program.
Design.—We reviewed Q-Probes participant information, study data and conclusions, author information, and program accomplishments.
Results.—During this time, 117 Q-Probes clinical pathology studies were conducted by 54 authors and coauthors, 42 899 laboratories enrolled from 24 countries, and 98 peer-reviewed publications appeared and were cited more than 1600 times; the studies were featured 59 times in CAP Today. The most frequent studies (19) focused on turnaround times for results or products at specific locations (emergency department, operating room, inpatient, outpatient), for specific diseases (acute myocardial infarction, urinary tract infection), for availability at specific events such as morning rounds or surgery, for a specific result (positive blood cultures), and as a method for using data for improvement (stat test outliers). Percentile ranking of study participants provided benchmarks for each study, along with statistically defined attributes associated with better performance. Other programs, such as an ongoing quality improvement program (Q-Tracks), a laboratory competency assessment program, a pathologist certification program, and an ongoing physician practice evaluation program (Evalumetrics), have been developed from Q-Probes studies.
Conclusions.—The CAP's Q-Probes program has made significant contributions to the medical literature and has developed a worldwide reputation for improving the quality of clinical pathology services.
Before the word benchmark escaped into the variety and imprecision of common usage, it was a surveyor's technical term. Surveyors cut a mark into some durable material, a rock, wall, gate pillar, or face of a building, to indicate the starting, closing, or a suitable intermediate point to a line of levels for determination of altitudes over the face of a country. One of the first nationwide uses of benchmarks was completed in 1860 in Great Britain where 750 primary and 750 000 secondary benchmarks were carved into stones across the landscape to describe elevations above sea level.
Over the face of the landscape of clinical pathology, for 25 years, 117 Q-Probes studies, set in the durable material of 98 peer-reviewed publications, have indicated levels of performance in the starting or preanalytic phase of test ordering and specimen collection, the intermediate or analytic points of result generation, and the closing or postanalytic phase of result communication, in the total testing process.
Applied to the face of the country of clinical pathology, the Q-Probes survey model is composed of 10 features. First, Q-Probes studies are developed by a group of practicing laboratorians, the College of American Pathologists' (CAP) Quality Practices Committee (QPC). Second, the studies are pilot tested in real clinical laboratories before they are launched for use by participating laboratories. Third, data collection directions, collection forms, and management of the data generation are performed by CAP-QPC staff who have developed expertise in this survey task. Fourth, CAP statisticians have developed a consistent yet flexible approach that determines the benchmarks of quality performance and relates them to demographic and institutional (stratifying) variables, as well as features of the total testing process that are modifiable (practice) variables. Fifth, subscribing laboratories receive complete summaries of the Q-Probes study results; these summaries set the individual participant's performance against a horizon of best, median, and most improvable performance among other participants.
Sixth, the Q-Probes designers subject the studies' results to careful, collaborative reviews, which become publications. These publications summarize the studies' significant findings and make their way through the peer-review process to the medical literature. Seventh, later studies undergo modification, based on information gained from earlier studies, during a systematic, now quarter-century-long effort to cover all phases of testing in every laboratory section. Eighth, for more than 25 years the Q-Probes collaboration between the QPC and Q-Probes subscribers has built practical definitions of laboratory events and practical measures of performance that have proved themselves resistant to interlaboratory variation in their application. Ninth, the QPC and its staff have paid special attention to themes, particularly those of patient safety (patient identification and critical value result reporting), that have found acceptance as indices of laboratory quality from the federal regulator of American medicine, the Centers for Medicare & Medicaid Services (CMS), as well as the accrediting agencies on which CMS currently bestows deemed status to inspect laboratories: the CAP Laboratory Accreditation Program and the Joint Commission's Laboratory Inspection Program. Finally, the Q-Probes study model has recognized the value of defining satisfiers and dissatisfiers among physicians, nurses, and patients, especially in regard to process (cycle or turnaround) times, for the delineation of clinical laboratory benchmarks.
The Q-Probes experience in clinical pathology has shown that clinical laboratory testing can be successfully analyzed as a process, a sequence of definable steps, which have been conveniently divided into the now conventional preanalytic, analytic, and postanalytic phases. The Q-Probes model has helped overcome, in this context, barriers between extramural (to the laboratory) and intramural measures of performance, so the total testing process can improve.
The Q-Probes model has made this contribution by demonstrating, during 2½ decades, that from laboratory to laboratory, similarities and differences in performance are amenable to statistical comparison in real-world conditions and that such comparison yields statistically significant differences—and benchmarks—of performance.
The Q-Probes studies have demonstrated long-term ability to provide benchmarks—valid criteria by which participants can assess themselves—and to generate firm associations between stratifying and practice variables with these benchmarks. As we summarize in this article, more consistent performance, with shorter cycle times that better satisfy people interested in laboratory tests, specimens, and result reports, constitutes a successful model of clinical laboratory quality.
MATERIALS AND METHODS
Studies were developed by the CAP's QPC and were pilot tested in a small number of laboratories. Participants enrolled in the CAP's Q-Probes program for each specific study. Instructions were distributed along with the preprinted forms required for data gathering. Participants recorded the required information by monitoring their own clinical laboratory practices, collecting information for a short interval ranging from 1 to 3 months, or until a predetermined number of results sufficient for statistical analysis had been collected. Questionnaires about how the processes studied were organized in each laboratory were used to determine how maximal improvement could occur. Upon completion of the study, data collection forms were returned to the CAP by a specific date. Data were evaluated by statisticians and study authors, and a critique describing the study findings and suggestions for improvement was mailed to participants. Higher percentile rankings indicated better performance. Occasionally, a study was repeated and the results compared with those of the previous study.
We reviewed CAP clinical pathology Q-Probes studies, publications, historical information, and participant demographic information from the first Q-Probes study in 1989 through the program's first 25 years. Citation of manuscripts in the literature was determined by searching the Elsevier Scopus (Amsterdam, the Netherlands) database. Self-citations were included. Based on the primary quality indicator, studies were organized into the 3 major categories of the total testing process: preanalytic, analytic, and postanalytic phases. When studies included more than 1 phase, they were classified as covering the entire total testing process. Laboratories that participated in more than 1 study were counted as participants in all studies in which they participated. The most current CAP Laboratory Accreditation Program checklists were evaluated for the number of times clinical pathology Q-Probes citations were listed.
RESULTS
Table 1 displays the 98 peer-reviewed publications from clinical pathology Q-Probes studies, divided by the stage of the total testing process investigated. A single article was published from most studies, although 11 publications compared results among 2 to 5 studies repeated over time, with the longest comparison interval being 6 years. Most studies were conducted to establish quality indicators in the preanalytic phase, although the largest number of publications evaluated the entire total testing process. Among the studies exemplifying the entire total testing process category were 9 articles about using results of multiple Q-Probes studies for quality improvement and 19 timeliness studies describing aspects of blood product or test turnaround times. Five of the 13 publications in the analytic category described bedside glucose testing, and 3 of the 7 postanalytic studies were about critical value reporting. Manuscripts from other studies conducted during the current and previous years have been submitted or will be submitted for publication over the next few years.
Table 2 lists the demographics of the Q-Probes studies between 1989 and 2014, during which time 117 clinical pathology CAP Q-Probes studies were conducted with 42 899 participants. The number of studies varied between 3 and 9 per year, and between 1993 and 1996, 1 to 2 clinical pathology Q-Probes studies for small hospitals (<200 beds) were offered. Before and after that time, data from small hospitals were merged into a single database of all hospitals. These 117 clinical pathology Q-Probes studies were authored by 54 laboratorians and resulted in 98 peer-reviewed articles that were recognized in 1609 journal citations. A book, Quality Management in Clinical Laboratories: Promoting Patient Safety Through Risk Reduction and Continuous Improvement, edited by Valenstein,1 was published by the CAP in 2005. Twenty-nine abstracts were published in a variety of journals; of these, 6 were abstracts from Archives publications republished in The Journal of the American Medical Association, and 2 were abstracts republished in the Yearbook of Pathology and Laboratory Medicine. CAP Today featured 59 stories about clinical pathology Q-Probes studies during this time.
Table 3 describes the impact of the Q-Probes program on the practice of pathology and medicine. The CAP's Laboratory Accreditation Program checklists contain 61 citations of practice patterns established by Q-Probes studies, varying from 19 citations in the Laboratory General Checklist to 2 citations in each of the Urinalysis, Transfusion Medicine, Point-Of-Care Testing, Immunology, Hematology and Coagulation, and the All Common checklists.2 Five editorials about Q-Probes clinical pathology findings have appeared in the Archives of Pathology & Laboratory Medicine,3–7 and 14 medical journals have published clinical pathology Q-Probes data. Authors have discussed Q-Probes studies at 32 national and international medical meetings and at the 1991 International Meeting of the Juran Institute, a worldwide organization focused on product and service quality across all industries. The QPC also conducted the 17th CAP Conference cosponsored by The Joint Commission in 1990 where more than 300 participants gathered to discuss quality improvement in pathology. Four educational seminars, “CAPitalize on Education,” that highlighted Q-Probes, were cosponsored jointly by the CAP and The Joint Commission.
The Q-Probes program also received an award from the American Hospital Association's Healthcare Forum Journal in 1993 as one of the 6 best benchmarking programs in medicine that year. Two manuscripts developed from Q-Probes studies were nominated, in 1996 and again in 2001, for the Centers for Disease Control and Prevention's (CDC's) Charles C. Shepard Award, which recognizes the best original article published by a CDC scientist in laboratory science. In addition, between 2004 and 2007, the CAP was awarded a sizable 3-year CDC grant to study and define the "Best Practices for Standardized Quality Assurance Activities in Pathology and Laboratory Medicine."
Members of the QPC have been instrumental in developing additional CAP products to improve quality of clinical laboratories. These spinoffs include Q-Tracks, a CAP Competency Assessment Program with more than 41 000 yearly participants; Evalumetrics, a new ongoing professional practice evaluation/focused professional practice evaluation program; and the Specialty CAP Certificate Program. In addition, participation in Q-Probes studies was approved by the American Board of Pathology as a society-sponsored program to fulfill Maintenance of Certification Part IV requirements.
Table 4 summarizes benchmarks from 8 typical clinical laboratory studies established by the Q-Probes program. These studies are representative of the major disciplines (transfusion medicine, microbiology, hematology, chemistry, and point-of-care testing) as well as management aspects (costs of laboratory testing, staffing efficiencies, and customer satisfaction) of the clinical laboratory. These sample studies covered the preanalytic, analytic, and postanalytic phases as well as the entire total testing process. Benchmarks were established for the median (50th percentile) laboratory practice, for the best performing 10% of laboratories (defined as at or above the 90th percentile), and for the 10% of clinical laboratories that needed the most improvement. Although 2 to 4 benchmarks are listed in Table 4 for each of the sample studies, in most studies a larger number of benchmarks were established. For each benchmark, participant characteristics (enablers) were identified that were statistically associated with better performance. For example, in the blood utilization study, there was a relationship between the bed size or teaching status of participants and the cross-matched to transfused ratio; between the teaching status of participants or the use of individuals monitoring blood practices and the blood expiration rate; and between the teaching status of participants or an active monitoring program and blood wastage. The number of institutions participating in the studies listed in Table 4 ranged from 151 to 630; however, other studies have had participants from more than 900 institutions8 or from as few as 52 institutions.9 In these examples, the size of the databases ranged from 623 different practice patterns for notification of a test critical value to more than 18 million units of cross-matched blood in the blood utilization study.
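The percentile-based benchmarking described above can be sketched in a few lines of Python. This is an illustrative sketch only: the participant rates, the nearest-rank percentile method, and the function name are assumptions for demonstration, not the actual procedure used by CAP statisticians.

```python
def benchmarks(rates):
    """Percentile benchmarks for a per-participant event rate (%, lower is better).

    Because lower rates indicate better performance, laboratories ranked at or
    above the 90th percentile of performance sit at the 10th percentile of the
    raw rate distribution, and the most improvable decile sits at the 90th.
    Uses a simple nearest-rank percentile; illustrative only.
    """
    s = sorted(rates)

    def pct(p):
        # nearest-rank lookup into the sorted rates
        return s[round(p / 100 * (len(s) - 1))]

    return {
        "best performers (90th percentile rank)": pct(10),
        "median practice (50th percentile)": pct(50),
        "most improvable (10th percentile rank)": pct(90),
    }

# hypothetical wasted red blood cell unit rates (%) from 11 laboratories
rates = [0.2, 0.4, 0.5, 0.7, 0.9, 1.1, 1.4, 1.8, 2.3, 3.0, 4.5]
print(benchmarks(rates))
```

A participant would then locate its own rate against these three reference points to judge whether it performs near the benchmark, near the median, or among the laboratories most in need of improvement.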
The most common type of benchmark was expressed as a percentage, such as the percentage of red blood cell units wasted, the percentage of urine specimens contaminated, the percentage of patients bruised from phlebotomy, and the percentage of nurses satisfied with laboratory services. Cost (US dollars spent) was used as a benchmark for comparing glucose testing in the clinical laboratory with testing at the point of care; cutoffs for manual peripheral blood cell review and billable tests per employee were used as clinical laboratory efficiency measures. Decision points for when notification occurred for a markedly low or high result were among the subjects of the critical value study.
Table 5 summarizes findings in 19 Q-Probes publications based on the timeliness of tests, services, or products provided for patient care. Timeliness was the most common attribute studied across the clinical pathology disciplines. Evidence was provided that timeliness was influenced by the test studied, that the median was a better measurement than the mean to describe overall performance, and that clinicians considered turnaround time to begin with the placement of the order, whereas laboratorians considered it to begin when the specimen arrived in the clinical laboratory. In addition, benchmarks were established for stat tests, routine tests, automated early morning tests, and tests required for early morning rounds. The use of outliers also was recommended to improve turnaround times. Similarly, these studies established benchmarks for analytes found in cerebrospinal fluid; for stat chemistry tests such as potassium and troponin; for hematology tests such as hemoglobin; and for transfusion medicine tests such as the type and screen. The timeliness of outpatient phlebotomy and of the delivery of blood to the operating theater also was studied. A number of studies reviewed improvements in turnaround times over various periods by repeating a previous study and comparing data among studies.
The Figure is a photograph taken in 1990 of a celebratory QPC dinner where the outgoing chair received a committee award from the incoming chair.
COMMENT
In the late 1970s and early 1980s, some laboratorians expressed concern that all quality control and quality improvement activities occurred solely within the analytic testing phase within clinical laboratories, even though many of the processes such as ordering, obtaining, transporting, and preparing the specimen for measurement, followed by reporting the result to the patient's physician, were error prone and influenced the quality of testing far more than the analytic measurements.10 The frequency and magnitude of these errors and their influence on patient care led leaders of the CAP to begin to focus efforts on errors occurring before and after the analytic measurement. Ultimately, this led the CAP committee involved in monitoring quality control practices within the clinical laboratory to transition its focus to systematically investigating and reducing these types of errors. Shortly thereafter, scientists at the CDC also became concerned with errors in the preanalytic and postanalytic phases, and they too expanded their interest beyond the analytic measurement. As a first step, they coined the term total testing process to include the steps from the physician's perceived need for a test until the test result was returned and interpreted by the ordering physician. By the time the federal government began to release the Clinical Laboratory Improvement Amendments of 1988 (CLIA '88), the CAP committee was already field evaluating and implementing studies that identified indicators of quality throughout the total testing process by using a product they named Q-Probes. The CAP committee responsible for the Q-Probes program became known as the Quality Practices Committee, and laboratory participation in the first Q-Probes module began in January 1989.
The scope of the 117 Q-Probes studies during the past 25 years has spanned all major clinical laboratory disciplines, defining quality indicators and offering participants benchmarks from these studies. The success of this educational effort has resulted in the use of some of these indicators in laboratories worldwide. These studies also helped laboratorians assume responsibility for activities within the total testing process that occur outside of the clinical laboratory, as well as for point-of-care testing performed outside the laboratory by nonlaboratorians.
The main reason for implementing the Q-Probes program was to educate laboratorians about quality indicators in pathology laboratories and to demonstrate how to perform quality improvement activities not only within the clinical laboratory, but also in the preanalytic and postanalytic phases of the total testing process. Initial estimates were that it would take 5 years to accomplish this goal. The continuing value to participants and their patients is demonstrated clearly by the use of these types of studies 25 years later, long after the development of the study model and the initial estimate of a 5-year lifespan for the program. One of the major reasons for the continuing success of the Q-Probes program is that the studies are designed by experts, because many pathologists and their staffs lack the skill and knowledge to design such studies themselves. Additionally, these studies have made it easier to fulfill regulatory requirements and to continue performance improvement. Finally, as laboratory medicine has changed, the authors have introduced new indicators, crafted new Q-Probes studies, and developed new benchmarks in response to these changes.
Most studies were conducted to improve patient safety, although a sizable number of studies provided data to improve management of the clinical laboratory. When these studies were categorized within the total testing process, most studies were preanalytic, whereas the fewest studies were postanalytic. A number of studies, such as timeliness studies evaluating turnaround times, focused efforts throughout the entire total testing process. Phlebotomy studies evaluated complications occurring when clinical laboratory personnel drew blood directly from patients, whereas other studies were directed at controlling clinical laboratory documents or providing a safe workplace for clinical laboratory employees. At the time of the program's initial development in the late 1980s, laboratorians had not focused on preanalytic or postanalytic processes, nor did many laboratorians assume ownership of these processes. We believe that these studies have been instrumental in establishing the current concept that the preanalytic and postanalytic phases of the total testing process are primarily the responsibility of laboratorians.
These accomplishments have required tireless efforts by CAP members, other laboratorians, CAP staff, and program participants during these 25 years. Almost 43 000 participants in 24 countries have used these studies and have improved their practices based on the studies' conclusions. To help participants and other laboratorians improve patient care, authors have published a wealth of information in 14 medical journals and have made news almost 60 times in CAP Today. Recognition of the excellence of the program came from organizations such as the American Hospital Association's Healthcare Forum Journal, the Juran Institute, and the CDC. Widespread support by laboratorians in developing these studies, and by participants worldwide in enrolling, contributing resources, and submitting data for benchmarks, attests to how valuable these studies have been in improving performance as well as patient care worldwide. The importance of this work to clinical pathology is verified by more than 1600 citations in the medical literature and 61 citations in the CAP Laboratory Accreditation Program checklists. A multiyear CDC grant was awarded to the CAP based upon this infrastructure for studying clinical laboratories and emergency departments. All these successes attest that the Q-Probes program has been widely acknowledged throughout medicine as improving the quality of clinical pathology services.
Q-Probes studies have provided an extensive database of information on which valid statistical conclusions were based. These conclusions helped participants as well as other laboratories choose structures and processes that resulted in improvement and avoid those that resulted in poor performance. Early studies helped define the statistical concepts that should be used in describing data, and each study provided information that allowed a succeeding study to answer a more difficult question. For example, the number of participants in a few studies exceeded 900, and in some studies conclusions were based on more than 12 000 000 data points.
When the Q-Probes program was developed in 1989, there was almost no information available about the timeliness of clinical laboratory testing. One of the first Q-Probes studies was developed to address the timeliness of cerebrospinal fluid analysis, and since that time almost 20 additional timeliness-based studies in clinical pathology have followed. The Q-Probes studies provided recommendations on the statistical expression of results; the time intervals that should be used to monitor the timeliness of clinical laboratory results; benchmarks for the timeliness of stat testing, routine testing, and automated testing; and recommendations on how the timeliness of clinical laboratory results may be improved by analyzing turnaround time data and using solutions that others had implemented to drive improvement. Some specific conclusions were that timeliness was dependent on the test studied; that the median value should be used in preference to the mean value for data analysis; that participants identified as best performers provided valuable suggestions to improve performance, whereas participants in need of improvement provided suggestions on what did not work; and that clinicians, in contrast to laboratorians, considered timeliness to begin with the ordering of the test and end with the reporting of the result. This start time for timeliness forced laboratorians to consider preanalytic processes, such as transit times of specimens, as within their domain and to assume responsibility for the preanalytic and postanalytic phases of clinical laboratory testing.
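The preference for the median over the mean with turnaround time data, and the use of outlier rates as a monitoring tool, can be illustrated with a short sketch. The turnaround times and the 60-minute goal below are invented for illustration; they are not values drawn from any Q-Probes study.

```python
import statistics

# hypothetical stat test turnaround times in minutes; like most turnaround
# time distributions, the data are right-skewed by a few delayed specimens
tat = [28, 30, 31, 33, 35, 36, 38, 40, 42, 180]

mean_tat = statistics.mean(tat)      # inflated by the single delayed specimen
median_tat = statistics.median(tat)  # a better summary of typical service

# outlier monitoring: fraction of specimens missing an (assumed) 60-minute goal
outlier_rate = sum(t > 60 for t in tat) / len(tat)

print(f"mean {mean_tat} min, median {median_tat} min, "
      f"outliers {outlier_rate:.0%}")
```

Here the mean (49.3 minutes) overstates typical turnaround by nearly 14 minutes relative to the median (35.5 minutes), while the outlier rate (10%) isolates exactly the delayed specimens that process improvement should target.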
Almost 20 articles have been published based on timeliness data related to clinical laboratory services. Specific areas of the hospital, such as emergency departments, intensive care units, operating rooms, outpatient areas, inpatient areas, and phlebotomy suites, were studied. The tests and products studied were those from the major disciplines of the clinical laboratory that were ordered at stat or routine priority. Timeliness measurements were extended from tests to the expectations of patients for the timeliness of phlebotomy, the delivery of blood to the operating room, the timeliness of cardiac markers used in the diagnosis of myocardial infarction, the timeliness of clinical laboratory results for morning rounds, and the completion of the type and screen for patients undergoing surgical procedures.
Based on these studies, 2 important conclusions were identified. Clinicians required clinical laboratory results faster in many situations than the laboratories could provide them, and significant improvement in timeliness of clinical laboratory testing was very difficult, requiring years to achieve. In contrast to anatomic pathology where most Q-Probes studies investigated the accuracy of diagnoses, for clinical pathology tests, clinicians assume that tests are accurate. Hence, it is not surprising that the most important attribute of clinical laboratory tests is their timeliness.11
In conclusion, during the past 25 years the practice of pathology as well as the practice of medicine has changed remarkably. Timeliness is now recognized to be as important in clinical laboratory testing as precision and accuracy, and testing in most hospitals has moved in part from the clinical laboratory to the patient's bedside. The third major change is that pathologists now are responsible for the preanalytic and postanalytic phases and have made major progress in improving these processes. Also during this time, indicators of quality have been defined for all steps of the total testing process, and clinical laboratories throughout the world are now using the indicators and benchmarks developed by the Q-Probes program. National patient safety goals have been established by both the CAP and The Joint Commission pertaining to patient identification and critical value notification, problems that have been studied 8 times and 3 times, respectively, by Q-Probes studies. Now that data gathering from clinical laboratory information systems is fully automated for proficiency testing, it will be easier to capture this information automatically and to develop databases on clinical laboratory performance and on indicators of quality throughout the total testing process. The past 25 years have been very exciting times in the changing landscape of clinical pathology, and we believe that the future will be even more exciting when comparative data can be collected easily and used by laboratory personnel and accrediting agencies to continue to drive improvement in patient care.
We would like to thank our subscribers who participated voluntarily in the clinical pathology Q-Probes program and whose data helped improve not only their own practices but also the worldwide practice of pathology.
The authors have no relevant financial interest in the products or companies described in this article.