Context.—Despite compliance with quality control standards, concerns remain as to the accuracy and reliability of point-of-care testing.

Objective.—To assess a practical method for quality improvement using the context in which point-of-care testing is done.

Design.—Quality measures for point-of-care testing, making use of natural duplication of results obtained by other testing methods, were used to monitor testing quality and evaluate quality improvement interventions.

Setting.—Five adult intensive care units (total of 88 beds) in a large academic medical center, using point-of-care testing for blood gases, electrolytes, and hematocrit levels.

Participants.—Nurses performing bedside testing and laboratory personnel assigned the responsibility for supervising their performance.

Interventions.—Quality of testing was monitored continuously; where problems were identified, training and support interventions were implemented and their effects evaluated.

Main Outcome Measures.—Improvement in correlation coefficients and regression parameters of point-of-care hematocrit and potassium testing results compared with contemporaneous results from the core laboratory.

Results.—The initial survey found point-of-care potassium levels were tightly correlated with core laboratory results (r = 0.958). Baseline correlation coefficients and regression parameters for point-of-care hematocrit levels compared with core laboratory values varied widely from unit to unit. The intensive care units with the highest variances of bedside vs core laboratory testing received targeted interventions. Follow-up yielded evidence of dramatic improvement; 1 unit experienced an increase in correlation from 0.50 to 0.95.

Conclusions.—The findings suggest that, when point-of-care testing is highly dependent on operator technique, targeted interventions can resolve problems and provide reliable results at the bedside.

We describe an operator-focused approach to quality control and improvement for point-of-care testing (POCT). The approach was deployed to identify problems, develop corrective interventions, and evaluate their effectiveness. This method of quality control focuses on how POCT is performed on a routine basis by large numbers of nonlaboratory personnel and provides a means for monitoring and improving performance.

The University of Alabama at Birmingham (UAB) Health System maintains an Office of Bedside Testing (OBT) staffed by a medical technologist and a registered nurse.1 The OBT monitors all bedside testing, including blood gas and electrolyte testing, glucometry, and any other tests conducted by nonlaboratory staff outside the UAB Hospital Laboratories (henceforth referred to as the core laboratory). The OBT conducts quality control procedures, training, and proficiency testing in compliance with College of American Pathologists requirements.2 Standard quality control (QC) and quality assurance (QA) have indicated acceptable levels of accuracy, precision, and reliability. Nevertheless, nurses and physicians were anecdotally reporting POCT results at variance with patient baseline values and/or core laboratory results often enough to raise doubts about POCT. We initiated a study to identify problems with POCT and to evaluate actions taken to correct any problems found.

Hortin3 explains why traditional QC techniques fail to address problems unique to POCT. Central laboratory and POCT methods differ in several ways. Most significantly, POCT involves multiple instruments with many operators per instrument. Traditional QC applied to POCT is costly and ineffective in identifying important problems that can degrade the reliability of results. Our observational approach to QC and QA involves comparing POCT data with central laboratory results. The approach relies on naturally concurrent samples, avoids elaborate protocols, and has the added advantage that POCT operators are less aware of QA, since it takes place in the background of routine testing. Part of the QA system involves presenting the collected data in a comprehensible format that furnishes information useful for education and for demonstrating improvements as they occur.

The design of this study was more observational than experimental in nature—POCT takes place outside the core laboratory, and testing is conducted by nonlaboratory personnel who have other duties and different priorities than do laboratorians. Laboratory personnel, working through the OBT, can monitor, train, and advise POCT operators but do not have the degree of control that exists in a laboratory environment. Effective continuous quality improvement needs to recognize the circumstances in which testing is performed and adopt methods practically suited to specific clinical settings.

Two POCT devices from the i-STAT Corporation, Princeton, NJ, were used in the UAB Health System and in this study. (Use of trade names is for identification only and does not represent endorsement by the US Department of Health and Human Services or by the Public Health Service.) One device was the i-STAT Portable Clinical Analyzer (PCA). Five of 8 adult intensive care units (ICUs) were using PCAs for blood gas, chemistry, and hematocrit testing. These were the coronary care unit (CCU), the heart transplant intensive care unit (HTICU), the medical intensive care unit (MICU), the neurosciences intensive care unit (NICU), and the trauma/burns intensive care unit (TBICU). During the course of the study, blood analysis modules (BAMs) developed jointly by i-STAT and the Hewlett-Packard Corp (Baltimore, Md) were substituted for PCAs in the TBICU. For simplicity, we will refer to both PCA and BAM units collectively as i-STAT analyzers. In all, 62% of the adult ICU beds used i-STAT analyzers.

The population that we wished to study comprised the nearly 600 nurses and technicians who, records in the OBT indicated, had verified competency to perform i-STAT testing. We focused on hematocrit and potassium testing, because nurses and physicians expressed the most concern about those tests, and records showed that these analytes had the greatest probability of duplicate testing. The POCT hematocrit values were obtained using 2 types of disposable cartridges: the EG7+ (sodium, potassium, ionized calcium, pH, PCO2, PO2, and hematocrit), and the 6+ (sodium, potassium, chloride, urea nitrogen, glucose, and hematocrit). The core laboratory used Coulter STKS hematology analyzers (Coulter Corp, Miami, Fla). Core laboratory chemistry results were obtained using either Hitachi 747 (Boehringer Mannheim Corp, Indianapolis, Ind) or Ektachem 700 (Johnson & Johnson Corp, New Brunswick, NJ) chemistry analyzers.

Data on laboratory testing were extracted from the laboratory information system (LIS; Cerner Corp, Kansas City, Mo) and from i-STAT data management stations. The POCT hematocrit results were reviewed, and those matched with core laboratory hematocrits produced coincidentally on the same patient were retained for analysis. Coincidental testing occurred when test results on a POCT panel overlapped with results in a core laboratory panel performed at the same time. Frequently, a POCT blood gas or electrolyte panel was performed concurrently with a core laboratory hematocrit included in a complete blood cell count. Internally defined criteria for retention in the study required that POCT results be matched to core laboratory hematocrits reported within 1 hour on the same patient. The 1-hour time frame was chosen because the collection time for specimens reported to the core laboratory was found, in many instances, not to represent the true collection time but rather the time the order was entered into the laboratory computer. A similar data collection was made of i-STAT potassium levels matched to levels obtained in the core laboratory.
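
As a rough illustration of the matching step described above, the sketch below pairs POCT results with core laboratory results reported for the same patient within 1 hour. It is a minimal example, not the extraction logic actually used; the record fields (patient_id, collected_at, analyte, value, location) are assumptions for illustration.

```python
# A minimal sketch (not the authors' extraction code) of pairing POCT results
# with core laboratory results reported for the same patient within 1 hour.
# The record fields are assumptions for illustration; the actual data came
# from the Cerner LIS and the i-STAT data management stations.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Tuple


@dataclass
class Result:
    patient_id: str
    collected_at: datetime
    analyte: str        # e.g., "HCT" or "K"
    value: float
    location: str       # ICU for POCT results; "CORE" for core laboratory


def match_pairs(poct: List[Result], core: List[Result], analyte: str,
                window: timedelta = timedelta(hours=1)) -> List[Tuple[Result, Result]]:
    """Return (POCT, core) pairs for the same patient and analyte whose
    reported times fall within the specified window."""
    pairs = []
    for p in poct:
        if p.analyte != analyte:
            continue
        for c in core:
            if (c.analyte == analyte
                    and c.patient_id == p.patient_id
                    and abs(c.collected_at - p.collected_at) <= window):
                pairs.append((p, c))
    return pairs
```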

Specimen collection dates were used to follow the effects of interventions addressing problems identified by the baseline study. Patient location was also noted for matched results, which allowed analysis of results in each ICU. Data were collected for January to obtain baseline information on POCT correlation with core laboratory testing, and observations were added on a weekly basis thereafter.

The OBT reviewed our findings, communicated those findings to ICU staffs, and implemented changes intended to improve POCT reliability where problems were found. In-service training was being conducted in the NICU as the January data were collected, and the final training session was done in the first week of February. In February, the OBT concentrated on the TBICU and the MICU. In each of these units, nurses and physicians were presented with findings from the baseline study. The OBT training focused on sample-handling technique4,5 and included the following:

  1. The specimen should be drawn free of any anticoagulant and processed immediately. If there is any delay in processing, the specimen should be remixed thoroughly.

  2. Care should be taken that the test cartridge is neither overfilled nor underfilled.

  3. To accomplish this, all materials and the analyzer should be brought to the bedside at the beginning of the process. (In both the MICU and TBICU, additional analyzers were provided so that a sufficient number of instruments was available for the required volume of testing.)

Graphical presentations were prepared using Excel software (Microsoft Corp, Seattle, Wash), and statistical analysis was performed using SAS software (SAS Institute Inc, Cary, NC). Ordinary least squares regression was used to compare POCT and core laboratory hematocrit results. Models were evaluated to test the effect of outliers on parameter estimates, and scatterplots were visually evaluated. In addition, parallel models were constructed for test results in each ICU over time to allow comparison between units and assessment of the quality improvement interventions. Finally, an internally defined statistic, "percent of clinically significant differences," was calculated as the percentage of POCT hematocrit values at least 5 percentage points above or below the corresponding core laboratory values.
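
As an illustration of the calculations described in this paragraph, the sketch below fits the ordinary least squares line and computes the "percent of clinically significant differences." It is a minimal example using NumPy as an assumed substitute for the SAS procedures actually used, with the 5-point threshold taken from the definition above.

```python
# A minimal sketch of the calculations named above: an ordinary least squares
# fit of POCT hematocrit on core laboratory hematocrit and the internally
# defined "percent of clinically significant differences" (pairs differing by
# at least 5 hematocrit percentage points). NumPy is an assumed substitute
# for the SAS procedures actually used.
import numpy as np


def compare_hematocrit_methods(core_hct, poct_hct, threshold=5.0):
    core = np.asarray(core_hct, dtype=float)
    poct = np.asarray(poct_hct, dtype=float)

    # Regress POCT values (y) on core laboratory values (x).
    slope, intercept = np.polyfit(core, poct, 1)
    r = np.corrcoef(core, poct)[0, 1]

    # Percentage of pairs differing by at least `threshold` points.
    pct_clin_sig = 100.0 * np.mean(np.abs(poct - core) >= threshold)

    return {"slope": slope, "intercept": intercept, "r": r,
            "pct_clinically_significant": pct_clin_sig}
```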

Parameter estimates for EG7+ and 6+ hematocrits regressed on core laboratory values for aggregated and unit-level observations are shown in Table 1. The 6+ hematocrits tended to be more tightly correlated with core laboratory results than were EG7+ values. The EG7+ hematocrits varied more and exhibited systematic bias. Regression diagnostics, including residual error distributions, studentized residual values, and Cook D statistics for measuring influence and H statistics for leverage, indicated that no data points imposed significant leverage on the slope. Outliers did influence the intercept; those outliers, however, comprised nearly 10% of the data. In all cases, the range of data was sufficient such that the error in the x-axis variable did not influence the regression statistics.
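
The diagnostics listed above can be reproduced on any set of paired results with standard regression tooling. The sketch below is an illustrative substitute, assuming statsmodels in place of the SAS routines used in the study.

```python
# An illustrative sketch of the regression diagnostics described above
# (studentized residuals, Cook D statistics for influence, and H statistics
# for leverage), assuming statsmodels in place of the SAS routines used.
import numpy as np
import statsmodels.api as sm


def regression_diagnostics(core_hct, poct_hct):
    X = sm.add_constant(np.asarray(core_hct, dtype=float))
    fit = sm.OLS(np.asarray(poct_hct, dtype=float), X).fit()

    influence = fit.get_influence()
    cooks_d, _ = influence.cooks_distance            # influence on estimates
    leverage = influence.hat_matrix_diag             # H (hat) values
    studentized = influence.resid_studentized_internal

    return fit.params, cooks_d, leverage, studentized
```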

Table 1. Initial Parameter Estimates for Point-of-Care vs Core Laboratory Hematocrit Results*
We also regressed POCT potassium values against serum values reported on contemporaneously collected specimens sent to the core laboratory. A total of 165 paired potassium values showed much less variance. The EG7+ potassium levels evaluated against core laboratory levels yielded an intercept of −0.018, a slope of 0.958, and a correlation coefficient of 0.927. For 6+ assays, the intercept was −0.107, the slope 0.995, and the correlation coefficient 0.951. Neither of these intercepts differs significantly from 0, and neither slope differs significantly from 1.
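
The statement that neither intercept differs significantly from 0 and neither slope differs significantly from 1 corresponds to t tests on the fitted coefficients. A brief sketch of such tests follows, using statsmodels as an assumed stand-in for the statistical software actually used.

```python
# A brief sketch of testing whether a method-comparison intercept differs
# from 0 and the slope differs from 1, as reported for the potassium
# regressions. statsmodels is an assumed stand-in for the software used.
import numpy as np
import statsmodels.api as sm


def test_potassium_agreement(core_k, poct_k):
    X = sm.add_constant(np.asarray(core_k, dtype=float))
    fit = sm.OLS(np.asarray(poct_k, dtype=float), X).fit()

    # t tests of H0: intercept = 0 and H0: slope = 1.
    p_intercept = fit.t_test("const = 0").pvalue
    p_slope = fit.t_test("x1 = 1").pvalue
    return fit.params, p_intercept, p_slope
```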

Table 2 indicates the quality improvements achieved in each of the 3 ICUs (MICU, NICU, and TBICU) where interventions (ie, in-service training, provision of additional equipment) were initiated. In units where no special actions were taken (CCU, HTICU), performance remained the same or deteriorated. The greatest improvement was in the TBICU, as shown in the Figure. In January, i-STAT hematocrits were essentially uncorrelated with core laboratory values. By April, TBICU was the unit with the most reliable POCT hematocrits.

Table 2. Changes in Point-of-Care Testing Performance Over Time*

This study describes an approach to quality improvement for POCT that permits continuous process monitoring and performance improvement. The QC procedures adapted from those performed in centralized laboratory testing (eg, imprecision, linearity, and interpreter variability studies) are not adequate for POCT. Our approach allows the identification of specific problems (ie, unreliable hematocrit values in some ICUs) not exposed by traditional methods. Problem ICUs were identified without elaborate evaluation protocols. We have demonstrated that POCT processes can be monitored and corrected by a mechanism of continuous quality improvement. The comparison of POCT and core laboratory potassium results, by contrast, provided an example of a process already under control.

A search of the literature found several studies validating the reliability of i-STAT PCAs for blood gas and electrolyte testing.6–10 Reports on hematocrit testing were less conclusive. Two studies6,7 pronounced POCT hematocrits reliable, and another study5 indicated there might be a reliability problem but discounted it. In a study of POCT and emergency department length of stay, Parvin et al11 determined that hematocrits obtained with i-STAT PCAs were unreliable, even after efforts to improve operator technique. Our study sheds light on possible sources of variation among and within previously published studies. In addition, we monitored a very large number of operators performing POCT. The degree of accuracy required of hematocrit values bears on the vexing problem of transfusion decisions, which makes it difficult to decide whether the differences between methods noted here constitute significant errors. Our "clinically significant difference" criterion can be viewed as a benchmark, subject to modification in varying clinical contexts.

The OBT was established to address the multidisciplinary nature of POCT. Successful POCT requires collaboration between nurses and medical technologists.3,12 Nurses do not generally receive intensive training in laboratory technology as part of their education but do receive extensive training in adult learning and instructional methods. When nurses and medical technologists cooperate in coordinating bedside testing, effective guidelines for testing procedures can be developed and clearly communicated. In this case, the most important factor influencing the reliability of hematocrit results was sample handling. Simply presenting data from the baseline study to nurses was sufficient to produce significant improvement in testing done in the NICU. At the same time, in-service training was conducted to reinforce the importance of proper sample handling.11,12

Continuous quality improvement requires effective data management.1 Constant monitoring requires that POCT and core laboratory test results be readily comparable with baseline data and with one another. Practical data management at this level of detail can only be provided when POCT systems are interfaced with the core laboratory information system. It is also necessary that test results be readily accessible for import into a variety of analytic software applications. Finally, the data need to be transformed into information comprehensible to end users. We found that clinical staff responded most to correlation coefficients, scatterplots, and the percentage of POCT results with clinically (as opposed to statistically) significant differences. Numbers and diagrams that highlighted the relevant data were well received and maximized effective learning.
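
As an illustration of the kind of unit-level summary described above, the sketch below draws an annotated scatterplot for one ICU. matplotlib is an assumed substitute for the Excel graphics actually used, and the function and output file names are hypothetical.

```python
# An illustrative sketch of the unit-level summaries clinicians responded to:
# a scatterplot of paired hematocrits annotated with the correlation
# coefficient and the percent of clinically significant differences.
# matplotlib is an assumed substitute for the Excel graphics actually used,
# and the function and output file names are hypothetical.
import numpy as np
import matplotlib.pyplot as plt


def plot_unit_summary(core_hct, poct_hct, unit_name, threshold=5.0):
    core = np.asarray(core_hct, dtype=float)
    poct = np.asarray(poct_hct, dtype=float)

    r = np.corrcoef(core, poct)[0, 1]
    pct_sig = 100.0 * np.mean(np.abs(poct - core) >= threshold)

    plt.scatter(core, poct, s=12)
    lims = [min(core.min(), poct.min()), max(core.max(), poct.max())]
    plt.plot(lims, lims, linestyle="--")  # line of identity for reference
    plt.xlabel("Core laboratory Hct, %")
    plt.ylabel("Point-of-care Hct, %")
    plt.title(f"{unit_name}: r = {r:.2f}; "
              f"{pct_sig:.0f}% clinically significant differences")
    plt.savefig(f"{unit_name}_hct_scatter.png")
    plt.close()
```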

In the MICU and TBICU, nurses reported that an insufficient number of PCAs on the unit rendered immediate specimen analysis difficult or impossible, especially during peak testing periods (ie, scheduled morning specimen collections). Where this was the case (TBICU, MICU), providing additional analyzers produced immediate improvements. A lack of adequate resources has been hypothesized as a contributor to prolonged emergency department length of stay13 and emergency department test turnaround time (TAT).14 Another change that took place in the first week of March involved a nursing service decision to replace PCAs in the TBICU with BAMs. The BAMs, which were integrated into the bedside monitoring system, were easier to use and facilitated data management.

An alternative was to shift some routine testing from POCT to the core laboratory. This was the most cost-effective solution for the MICU, which explains why data collection there ended in March (Table 2). Decisions such as this depend on trade-offs among costs, TAT, and the perceived reliability of test results. Steindel and Howanitz15 reported on factors influencing laboratories' performance in reaching TAT goals. One significant factor was effective performance monitoring, especially where laboratories did not control specimen handling and transport. How these trade-offs are addressed depends on the TAT performance required by clinicians and the ability of the available testing options to meet those requirements within given cost constraints.

Our study describes the successful use of this approach, and UAB plans to completely automate the collection and analysis of data comparing POCT and core laboratory testing. Others planning to use a similar approach will have to modify the details of the method to fit local requirements. Smaller institutions might need to undertake such a project manually. There is probably no need to attempt to capture every instance of concurrent POCT and core laboratory results, so long as care is taken to collect a bias-free random sample. An alternative approach would monitor "deltas," or instances where test results diverge significantly, as locally defined, from each other. Monitoring the frequency of deltas over time allows for analysis of results that are not routinely duplicated. Corrective action as described here should be taken when the frequency exceeds a locally defined action limit. For either method, an unknown proportion of deltas will be true indicators of changes in patients' conditions. Any method used will require that resources be dedicated to data collection and analysis. Use of the natural experiment illustrated here, however, provides a method that achieves large improvements in quality at minimal added cost.
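
For readers considering the delta-monitoring alternative, the following minimal sketch counts divergent pairs per week and flags weeks whose delta rate exceeds an action limit. The 5-point delta and 10% action limit are illustrative assumptions, not values from this study.

```python
# A minimal sketch of the alternative "delta" monitoring described above:
# count divergent paired results per week and flag weeks whose delta rate
# exceeds a locally defined action limit. The 5-point delta and 10% action
# limit are illustrative assumptions, not values from this study.
from collections import defaultdict


def weekly_delta_rates(pairs, delta_limit=5.0, action_limit=0.10):
    """pairs: iterable of (week_label, poct_value, core_value) tuples."""
    totals = defaultdict(int)
    deltas = defaultdict(int)
    for week, poct, core in pairs:
        totals[week] += 1
        if abs(poct - core) >= delta_limit:
            deltas[week] += 1

    flagged = {}
    for week, n in totals.items():
        rate = deltas[week] / n
        flagged[week] = (rate, rate > action_limit)  # True => corrective action
    return flagged
```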

Continuous monitoring and quality improvement are possible and necessary for reliable POCT. For large health systems, POCT can involve hundreds of operators who do not have significant theoretical training in laboratory methods. Achieving and maintaining adequate quality depend on data and information management, effective multidisciplinary coordination, and the commitment of adequate resources. In the absence of continuous quality improvement, processes can spin out of control, resulting in a loss of confidence in POCT technology, duplication of testing, and potential risk to patients from inappropriate treatment.

Figure. Scatterplots of point-of-care hematocrit (Hct) compared with core laboratory Hct in the trauma/burns intensive care unit (TBICU) before and after interventions.

This study was conducted under a Cooperative Agreement with the UAB Department of Pathology, the Association of Schools of Public Health, and the Centers for Disease Control and Prevention (S661-17/17).

We thank Carol Howard, MT, MSPH, and LuAnn Hensley, RN, coordinators in the UAB Office of Bedside Testing. We also acknowledge the efforts of the critical care nurses at UAB for their feedback on the information presented and their active efforts to improve the quality of testing.

1. Hortin GL, Utz C, Gibson C. Managing information from bedside testing. Med Lab Observer. 1995;27:28-32.
2. College of American Pathologists. Commission on Laboratory Accreditation Inspection Checklist: Point-of-Care Testing, Section 30. Northbrook, Ill: College of American Pathologists; 1997.
3. Hortin GL. Beyond traditional quality control: how to check costs and quality of point-of-care testing. Med Lab Observer. 1997;29:31-37.
4. i-STAT Corp. i-STAT System Manual. Princeton, NJ: i-STAT Corp; November 1996.
5. National Committee for Clinical Laboratory Standards. Blood gas pre-analytical considerations: specimen collection, calibration, and controls. Wayne, Pa: National Committee for Clinical Laboratory Standards; 1993. Approved guideline C27-A.
6. Adams DA, Buus-Frank M. Point-of-care technology: the i-STAT system for bedside blood analysis. J Pediatr Nurs. 1995;10:194-198.
7. Bishop MS, Husain I, Aldred M, Kost GJ. Multisite point-of-care potassium testing for patient-focussed care. Arch Pathol Lab Med. 1994;118:797-800.
8. Erickson KA, Wilding P. Evaluation of a novel point-of-care system, the i-STAT portable clinical analyzer. Clin Chem. 1993;39:283-287.
9. Jacobs E, Vadasdi E, Sarkozi L, Colman N. Analytical evaluation of i-STAT portable clinical analyzer and use by nonlaboratory health-care professionals. Clin Chem. 1993;39:1069-1074.
10. Mock T, Morrison D, Yatscoff R. Evaluation of the i-STAT™ system: a portable chemistry analyzer for the measurement of sodium, potassium, chloride, urea, glucose, and hematocrit. Clin Biochem. 1995;28:187-192.
11. Parvin CA, Lo SF, Deuser SM, Weaver LG, Lewis LM, Scott MG. Impact of point-of-care testing on patients' length of stay in a large emergency department. Clin Chem. 1996;42:711-717.
12. Miller KA, Miller NA. Joining forces to improve point-of-care testing. Nurs Manage. 1997;28:34-37.
13. Saunder CE, Makins PK, Leblanc LJ. Modeling emergency department operations using advanced computer simulation systems. Ann Emerg Med. 1987;16:1244-1248.
14. Howanitz PJ, Steindel SJ, Cembrowski GS, Long TA. Emergency department stat test turnaround times: a College of American Pathologists study for potassium and hemoglobin. Arch Pathol Lab Med. 1992;116:122-128.
15. Steindel SJ, Howanitz PJ. Changes in emergency department turnaround time performance from 1990 to 1993: a comparison of two College of American Pathologists Q-probes studies. Arch Pathol Lab Med. 1997;121:1031-1041.