Context.—

Coronavirus disease 2019 (COVID-19) test performance depends on predictive values in settings of increasing disease prevalence. Geospatially distributed diagnostics with minimal uncertainty facilitate efficient point-of-need strategies.

Objectives.—

To use original mathematics to interpret COVID-19 test metrics; assess US Food and Drug Administration Emergency Use Authorizations and Health Canada targets; compare predictive values for multiplex, antigen, polymerase chain reaction kit, point-of-care antibody, and home tests; enhance test performance; and improve decision-making.

Design.—

PubMed and newsprint articles documented prevalence. Mathematica and open-access software helped perform recursive calculations, graph multivariate relationships, and visualize performance by comparing predictive value geometric mean-squared patterns.

Results.—

Tiered sensitivity/specificity were as follows: tier 1 (T1), 90% and 95%; T2, 95% and 97.5%; and T3, 100% and ≥99%. Tier 1 false negatives exceeded true negatives at >90.5% prevalence; false positives exceeded true positives at <5.3% prevalence. High-sensitivity/specificity tests reduced false negatives and false positives, yielding superior predictive values. Recursive testing improved predictive values. Visual logistics facilitated test comparisons. Antigen test quality fell off as prevalence increased. Multiplex severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)/influenza A/B/respiratory syncytial virus testing performed reasonably well compared with tier 3. Tier 3 performance with a tier 2 confidence band lower limit will generate excellent performance and reliability.

Conclusions.—

The overriding principle is to select the best combined performance and reliability pattern for the prevalence bracket. Some public health professionals recommend repetitive testing to compensate for low sensitivity. More logically, improved COVID-19 assays with less uncertainty conserve resources. Multiplex differentiation of COVID-19 from influenza A/B–respiratory syncytial virus represents an effective strategy if seasonal flu surges next year.

The goal of this research is to apply original mathematical relationships and visual logistics to facilitate the interpretation of the performance of coronavirus disease 2019 (COVID-19) diagnostics in settings of increasing disease prevalence, which currently prevail throughout the United States as the pandemic marches forward and the nation vaccinates its citizens. The article ends with a summary of recommendations for the use of COVID-19 tests.

Interpretation

Informed physicians protect the public at large from misleading claims about COVID-19 diagnostic test results that could put them, their families, and their communities at risk of COVID-19 surges, renewed lockdowns, and business and school disruptions. Practical examples (multiplex, antigen, polymerase chain reaction [PCR] kit, point-of-care [POC] antibody, and home tests) illustrate how mathematical analyses clarify the interpretation of diagnostic performance when testing individuals, gatewaying social activities, and surveying community contagion.

Background

The importance of metrics for test performance becomes apparent when one considers the wide range of COVID-19 prevalence found in different populations and locales, the effects of prevalence on the interpretation of test results, the need for national and international standardization, and the implementation of point-of-care testing (POCT) in current and future pandemics.

Prevalence and Geospace

Please see Table 2 in the article by Kost1  for a list of geospatial settings with low COVID-19 prevalence. Here, the supplemental material documents moderate (20%–70%) and high (70%–100%) prevalence and positivity rates (see supplemental digital content at https://meridian.allenpress.com/aplm).

Data were obtained from recent publications. Prevalence of COVID-19 changes rapidly with dynamic surges in different locales, migrating hot spots, incomplete testing, delayed reporting, social behavior, cultural disparities, episodic reopenings, and other factors, such as asymptomatic carriers, super-spreaders, marginally reliable early-release assays, and low test capture rate, which underestimates cases identified.

Visual Logistics

Visual logistics for COVID-19 diagnostics highlight major challenges and encourage improvements in clinical performance. They allow one to filter out tests that do not perform well despite receiving US Food and Drug Administration (FDA) Emergency Use Authorizations (EUAs). Understanding performance helps address the public health controversy of inexpensive repetitive testing versus high-quality diagnostics that can be delivered rapidly.

Objectives

The objectives are (1) to apply visual logistics for interpreting COVID-19 test performance easily and quickly, (2) to assess FDA EUA specifications and Health Canada targets, (3) to calibrate the performance of emerging tests relative to sensitivity and specificity tiers, and (4) to enhance diagnostic standards by revealing the impact of wide ranges in prevalence and uncertainty on the clinical utility of COVID-19 diagnostics. Results will help facilitate the badly needed national and international standardization of COVID-19 tests.

Overview

Please refer to the open access paper by Kost1  in the Archives of Pathology & Laboratory Medicine for a description of mathematical methods; computational design, software, and strategy; visual logistics; and human ethics. Explanation of the governing equations for performance analysis can be found in that paper, which primarily addressed low prevalence, typical of early COVID-19 outbreaks.

Mathematical Foundations

Computations used the equation set from the article by Kost.1  Equations (Eqs.) 7 through 14 were used to calculate positive predictive value (PPV) and negative predictive value (NPV), plus associated parameters through the rearrangement of variables. Ratios [Eqs. 15–17] and rates [Eqs. 18–21] enabled comparisons.

A recursive equation [Eq. 22] was used to analyze the effect of repetitive testing on NPV. Ideally, repeated tests should be performed with different designs, so-called orthogonal testing.1  The NPV recursive equation is t_(i+1) = [y(1 − p_i)] / [p_i(1 − x) + y(1 − p_i)], where x is sensitivity, y is specificity, the number of testing events, i, is an index from 1 to 3 or more, p_i and p_(i+1) are partition prevalences, and t_(i+1) is the sequential NPV.
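As a sketch, the recursion can be computed directly; in the following Python illustration, x denotes sensitivity, y specificity, and the partition prevalence for each subsequent testing event is taken as 1 minus the preceding NPV (ie, the posttest probability after a negative result):

```python
def recursive_npv(sensitivity, specificity, prevalence, rounds=3):
    """Sequential NPVs from t = y(1 - p) / [p(1 - x) + y(1 - p)],
    updating the partition prevalence p to 1 - t after each negative test."""
    x, y, p = sensitivity, specificity, prevalence
    npvs = []
    for _ in range(rounds):
        t = y * (1 - p) / (p * (1 - x) + y * (1 - p))
        npvs.append(t)
        p = 1 - t  # posttest probability becomes the next partition prevalence
    return npvs
```

With tier 1 performance (90%/95%) at 98% prevalence, this yields NPVs of approximately 16.2%, 64.8%, and 94.6% over 3 rounds, consistent with the recursive-testing results reported below (published intermediate values differ slightly from rounding).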

Table 1 presents the 3 performance tiers. A prevalence boundary, PB, was calculated by substituting NPV = t = 1 − RFO in Eq. 12, where RFO is the false-omission rate [Eq. 20]. This newly derived equation is PB = [(y)(RFO)] / [(1 − x) − (1 − x − y)(RFO)].
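A numeric sketch of this boundary in Python (a minimal illustration with x as sensitivity and y as specificity, not the authors' software):

```python
def prevalence_boundary(sensitivity, specificity, rfo):
    """Upper prevalence limit beyond which NPV = 1 - RFO cannot be achieved:
    PB = y * RFO / [(1 - x) - (1 - x - y) * RFO]."""
    x, y = sensitivity, specificity
    return y * rfo / ((1 - x) - (1 - x - y) * rfo)
```

At an RFO of 5%, tier 2 (95%/97.5%) gives a boundary near 50.6%, and a 50%-sensitivity/99%-specificity test tops out near 9.4%, matching the boundaries quoted later for Figure 3.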

Table 1

Performance Tiers and RFO Design Criteriaa


Graphs for antigen and POC tests illustrate uncertainty in the form of the upper and lower bounds of the 95% CIs for test sensitivity and specificity documented by the FDA in EUA announcements.2  Confidence intervals were used as stated in EUAs and were not adjusted for nonnormal distributions, theoretical considerations, or stochastic asymmetry.

Predictive Value Geometric Mean-Squared

Predictive value geometric mean-squared1  (PV GM2) visualizes how low (≤20%), moderate (20%–70%), and high (≥70%) prevalence affect diagnostic performance in a single graphic. The PV GM2 plots enhance awareness when assessing competing tests. This facilitates strategic planning of diagnostics as asymptomatic spreaders are discovered and vaccination alters severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) detectability.

The PV GM2 [Eq. 25] is computed by multiplying PPV and NPV, PV GM2 = PPV • NPV, that is, by multiplying the right-hand sides of Eq. 7 and Eq. 11, each expressed as a decimal fraction from 0 to 1.0. The resulting range of PV GM2 is 0 to 1.

The PV GM2 helps compare tiered sensitivity and specificity, government specifications, and commercial claims. Adjustments in sensitivity and specificity can customize tests for clinical objectives. The PV GM2 pattern recognition reveals the significance of uncertainty when the 95% CI also is displayed.
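As a sketch, PPV, NPV, and PV GM2 can be computed from the standard Bayes forms (consistent with Eqs. 7 and 11; x is sensitivity, y is specificity, p is prevalence):

```python
def ppv(x, y, p):
    """Positive predictive value, TP / (TP + FP)."""
    return x * p / (x * p + (1 - y) * (1 - p))

def npv(x, y, p):
    """Negative predictive value, TN / (TN + FN)."""
    return y * (1 - p) / (y * (1 - p) + (1 - x) * p)

def pv_gm2(x, y, p):
    """Predictive value geometric mean-squared, PPV * NPV (range 0 to 1)."""
    return ppv(x, y, p) * npv(x, y, p)
```

Note that a tier 3 test (x = 1) has NPV = 1 at any prevalence, so its PV GM2 reduces to PPV.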

Transformation of Pretest Probability (Prevalence) to Posttest Probability (RFO)

This proof demonstrates how sensitivity and specificity modulate pretest probability to generate posttest probability of COVID-19 for a negative test, RFO [Eq. 20]. We use this transformation: pretest probability → pretest odds → likelihood ratio → posttest odds → posttest probability.

Pretest odds = [pretest probability] / [1 – (pretest probability)] = p / (1 – p) where p, the prevalence, is (TP + FN) / N. Because N = TP + FN + TN + FP, then N – TP – FN = TN + FP. Therefore, p / (1 – p) = [(TP + FN) / N] / [1 – (TP + FN) / N] = [(TP + FN) / N] / [(N – TP – FN) / N] = (TP + FN) / (TN + FP), which is equivalent to the ratio [+ COVID-19]:[− COVID-19].

In terms of those with (plus sign) and those without (minus sign) COVID-19, the likelihood ratio can be expressed as: [FN / (+ COVID-19)] / [TN / (− COVID-19)]. Multiplying the pretest odds by the likelihood ratio generates the posttest odds of COVID-19. The likelihood ratio is a function of sensitivity and specificity.

For a negative test, the likelihood ratio is (1 – sensitivity) / specificity = [1 – TP / (TP + FN)] / [TN / (TN + FP)] = [(TP + FN – TP) / (TP + FN)] / [TN / (TN + FP)] = [FN / (TP + FN)] / [TN / (TN + FP)] = [FN / TN] × [(TN + FP) / (TP + FN)].

The posttest odds of a negative test equal the pretest odds times the likelihood ratio for a negative test, so we simplify, [p / (1 − p)] × [(1 − x)/y] = [(TP + FN) / (TN + FP)] × [FN/TN] × [(TN + FP) / (TP + FN)] = FN/TN.

Because the posttest probability is [posttest odds] / [1 + (posttest odds)], the posttest probability equals [FN/TN] / [1 + (FN/TN)] = FN / (TN + FN). However, FN / (TN + FN) is the false omission rate [Eq. 20]. Therefore, the posttest probability = RFO. Note also that RFO = 1 − NPV. Therefore, the posttest probability is 1 − NPV.
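A short numeric check of this proof (Python sketch; x is sensitivity, y specificity, p prevalence): the odds-based transformation and the direct false omission rate agree.

```python
def posttest_probability_negative(x, y, p):
    """Posttest probability of disease after a negative test, via
    pretest odds multiplied by the likelihood ratio for a negative test."""
    pretest_odds = p / (1 - p)
    lr_negative = (1 - x) / y
    posttest_odds = pretest_odds * lr_negative
    return posttest_odds / (1 + posttest_odds)

def false_omission_rate(x, y, p):
    """Direct RFO = FN / (FN + TN) expressed with sensitivity/specificity."""
    return (1 - x) * p / ((1 - x) * p + y * (1 - p))
```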

Uncertainty

Prevalence at maximum and minimum magnitudes of the 95% CI was found by setting the derivative of the difference in the upper confidence limit (UCL) and lower confidence limit (LCL), expressed as ΔPV GM2, equal to zero and solving for p. That is, d[ΔPV GM2] / dp = 0, where ΔPV GM2 = [(PPV × NPV)UCL – (PPV × NPV)LCL] and d[ΔPV GM2] / dp is the first derivative with respect to prevalence for the range 0 ≤ p ≤ 100%.

Software solved quartic equations of the form k₄p⁴ + k₃p³ + k₂p² + k₁p + k₀ = 0, where the coefficients k₀ through k₄ are real constants. Next, real roots were selected for 0 ≤ p ≤ 100%. Open-access Symbolab derivative calculator (https://www.symbolab.com/solver/derivative-calculator) and equation calculator (https://www.symbolab.com/solver/equation-calculator) solutions were confirmed with Mathematica (https://www.wolfram.com/mathematica/; all accessed January 9, 2021).

Uncertainty adds risk. Local and global minimum and maximum (x, y) points of uncertainty also were confirmed using the toggle function in the Desmos graphing calculator (https://www.desmos.com/calculator, accessed January 9, 2021) for the inset plots of ΔPV GM2 shown in the figures.
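The same extrema can also be located numerically rather than by solving the quartic symbolically. The following Python sketch uses a grid search with hypothetical 95% CI bounds (the sensitivity/specificity pairs are illustrative, not from a specific EUA):

```python
def pv_gm2(x, y, p):
    """PPV * NPV for sensitivity x, specificity y, prevalence p."""
    ppv = x * p / (x * p + (1 - y) * (1 - p))
    npv = y * (1 - p) / (y * (1 - p) + (1 - x) * p)
    return ppv * npv

def delta_pv_gm2(p, ucl=(0.98, 0.99), lcl=(0.88, 0.92)):
    """Width of the PV GM2 uncertainty band at prevalence p
    (hypothetical upper/lower confidence-limit pairs)."""
    return pv_gm2(*ucl, p) - pv_gm2(*lcl, p)

# Grid search as a numeric stand-in for solving d[dPV GM2]/dp = 0
ps = [i / 1000 for i in range(1, 1000)]
widths = [delta_pv_gm2(p) for p in ps]
p_at_max_uncertainty = ps[widths.index(max(widths))]
```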

FDA EUA Specifications

Figure 1 illustrates the impact of tier 1 requirements on COVID-19 diagnostics, which can qualify under EUAs for serologic tests reporting the presence or absence of SARS-CoV-2 antibodies (immunoglobulin [Ig] G, or IgG and IgM). The FDA specifications call for a sensitivity of 90% and a specificity of 95%.3  Graphs were created using Equations 17 (main curve) and 16 (upper left inset) with a sensitivity of 90% and a specificity of 95%.

Figure 1

The influence of prevalence on FN:TN versus FP:TP ratios for tier 1 specifications. Abbreviations: FN, false negative; FP, false positive; NPV, negative predictive value; PPV, positive predictive value; TN, true negative; TP, true positive.


The main curve on the right traces the ratio of false negative (FN) to true negative (TN) test results, FN/TN, which rises suddenly with high prevalence. Poor performance appears above 80% prevalence because FNs increase relative to TNs, attributable to the EUA requirement for sensitivity [TP / (TP + FN)] at 90%.

Note that at 80% prevalence, the FDA relaxation to ≥70% sensitivity for IgM antibody tests3  generates an FN/TN ratio of 1.3 and an NPV of only 44.2%, but a PPV of 98.2% because of relative saturation by TPs at high prevalence, that is, the increased incidence of those with COVID-19.

With prevalence less than 5.3%, the ratio of FP to TP (Figure 1, upper left) rises and PPV deteriorates rapidly. There are few infected individuals in the nearly disease-free population. False positives increase relative to TPs. A specificity of 95% generates the FPs. For prevalence ranges ≤20%, the FN/TN ratio is insignificant. For example, at 5% prevalence, the FN/TN ratio is <1% (0.006) [Eq. 17]. The FP/TP ratio is somewhat like a reflection of the FN/TN curve, but it is not a mirror image because sensitivity and specificity are not equal.
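The ratios plotted in Figure 1 follow directly from sensitivity (x), specificity (y), and prevalence (p); a Python sketch consistent with Eqs. 16 and 17:

```python
def fn_tn_ratio(x, y, p):
    """FN:TN = p(1 - x) / [y(1 - p)]."""
    return p * (1 - x) / (y * (1 - p))

def fp_tp_ratio(x, y, p):
    """FP:TP = (1 - y)(1 - p) / (x p)."""
    return (1 - y) * (1 - p) / (x * p)
```

For tier 1 at 5% prevalence, FN/TN evaluates to about 0.006, as noted above; for a relaxed 70% sensitivity at 80% prevalence, FN/TN is about 1.3.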

With an estimate of local prevalence, the health care provider or patient who receives a COVID-19 test result can use Figure 1 to determine the relative chance that the test result is an FN or TN or, alternately, an FP versus a TP. When the prevalence is intermediate, say 20% to 50%, chances of misleading test results diminish.

However, although prevalence across the United States may be as low as 2% or lower regionally in sheltered rural communities (eg, see Table 2 in Kost1), these low-prevalence regions are growing scarce. Prevalence and positivity rates are increasing and will continue to do so, with several geographic regions and subsets of the population already surprisingly high at 70% or more (see Supplement).

Performance Tiers

Figure 2 illustrates NPV for the 3 performance tiers (Table 1) while displaying NPV for prevalence from 0 to 100%. Tier 3 (green, top right) shows that if sensitivity is 100%, NPV will be 100% for the full range of prevalence, because there are no FNs. The false omission rate, RFO [Eq. 20], which reflects dangerous missed diagnoses, is equal to 1 – NPV, and therefore RFO is 0 for tier 3.

Figure 2

Negative predictive values with recursive testing with tier 1 specifications. Abbreviations: FN, false negative; NPV, negative predictive value; PPV, positive predictive value; TN, true negative.


Linkage

Tier 2 was designed (1) to generate a prevalence boundary of approximately 50% at a false omission rate of 5% (illustrated below), and also (2) to establish the lower boundary of the recommended 95% CI for tier 3. As a result, with tier 3 performance and tier 2 specifications for the 95% CI, the magnitude of the lower band of the confidence interval is relatively stable within 0.054 to 0.065 PV GM2 for prevalence ranging from 21.5% to 51.3%, and ∼0.1 or less for the entire range of 20% to 70%. Thus, tiers 3 and 2 can be tightly linked to reduce uncertainty in diagnostic test design.

Recursive Testing

Some public health practitioners recommend repeating tests with the assumption that serial tests will reduce false results. Figure 2 shows the progressive improvement in NPV in 3 rounds of recursive testing for tiers 1 (red) and 2 (blue). Under FDA EUA specifications (tier 1, red, left frame) for antibody tests and at a prevalence of 98%, the NPV of the first test is 16.2%.

A high prevalence of 98% was selected as the starting point to illustrate the large dynamic amplitude of subsequent rounds of testing using the recursive equation for NPV (see Materials and Methods). Negative predictive value increased to 64.7%, then to 94.6% on the second and third repetitions, respectively.

When sensitivity is increased to 95% and specificity to 97.5% (tier 2, blue, right frame), which approximates Canadian target specifications,1,4  the first NPV is 28.5%. With repeat testing, NPV is 88.6% and 99.3% on the second and third rounds, respectively. With regard to NPV, tier 3 (green, right frame) obviated the need for recursive testing. There were no FNs.

Prevalence Boundary

A prevalence boundary is defined as the upper limit of pretest probability (ie, prevalence), beyond which a diagnostic test cannot achieve a given posttest probability or a false omission rate, RFO. The RFO reflects the chance of stealth infection given a negative test result, or, expressed differently, 1 – NPV, the posttest probability of COVID-19. Figure 3 details this analysis for RFO = 5% (black horizontal line).

Figure 3

The false omission rate, RFO, determines the prevalence boundaries for different levels of sensitivity, tier 2, and tier 3, which has no boundary. Abbreviations: FDA, US Food and Drug Administration; FN, false negative; NPV, negative predictive value.


At an RFO of 5%, NPV is 95%; 1 in 20 negative results will be an FN, and therefore the posttest probability of missed infection is 5%. Investigators consider RFO >5% contributory to contagion and medically unacceptable,5  somewhat akin to positivity rates [RPOS = (TP + FP) / N, Eq. 21] of 5% to 8%, or 15% of intensive care unit capacity, used in California and other states as thresholds for social and business controls. Upward trends in positivity rates suggest that cases remain undiagnosed, additional tests should be conducted, and infected individuals should be isolated before they spread the disease.

The right column in the inset table in Figure 3 lists the prevalence boundaries for the given sensitivity and specificity pairs to the left. Color-coded numbers identify the prevalence boundaries shown graphically along the RFO = 5% line below. A tier 3 test is not limited, because NPV is 100% and RFO = 0 (bottom horizontal line). A tier 2 test (labeled) is limited to 50.6% prevalence. At higher prevalence, tier 2 sensitivity and specificity are inadequate to achieve the target RFO of 5%.

The FDA specifications for symptomatic patients by prescription (blue line no. 4) and nonprescription over-the-counter (OTC) tests6  (black line no. 5; Table 2) generate prevalence boundaries of 34.3% and 20.7%, respectively. If the prevalence exceeds these limits, the tests cannot achieve the RFO cutoff. Thus, test results become unreliable for patient counseling, especially if sensitivity falls off (purple lines, nos. 2 and 3) from inadequate swab sampling, use of saliva specimens with scant viral counts, and several other preanalytic failures.

Table 2

US Food and Drug Administration (FDA) Requirements for Nonlaboratory Coronavirus 2019 (COVID-19) Tests


Authors promoting pandemic mitigation suggest that tests with high specificity and sensitivity as low as 50% (red line no. 1) will, “…identify the vast majority of transmission events,” a strategy that can “…lead to the isolation of a large proportion of infected individuals while drastically reducing the isolation of uninfected contacts.”7 Figure 3 shows that for a specificity of 99% and a sensitivity of 50%, prevalence tops out at 9.4%, above which dangerous false omissions will exceed 5%.

False Omission Rates

Figure 4 maps pretest probability (prevalence) to sensitivity for specified posttest probabilities shown as curves (isopleths) of equal false omission rates, RFO, which readers will find useful for selecting tests. For example, with prevalence, that is, a pretest probability of 20% (step 1, blue), and a desired posttest probability (RFO) of 5% (step 2), an assay sensitivity of ∼80% (step 3, 79.2%) would be necessary.
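The sensitivity required at a given prevalence and target RFO can be found by inverting the RFO relation. The Python sketch below assumes a specificity of 99%, which reproduces the ~79.2% value quoted for this example (the specificity underlying Figure 4 is an assumption here):

```python
def required_sensitivity(prevalence, rfo, specificity=0.99):
    """Minimum sensitivity x so the posttest probability of a negative
    test does not exceed rfo, from RFO = p(1-x) / [p(1-x) + y(1-p)]
    solved for x. The 99% default specificity is an assumption."""
    p, y = prevalence, specificity
    return 1 - rfo * y * (1 - p) / (p * (1 - rfo))
```

At 20% prevalence and a 5% RFO this returns roughly 79.2% sensitivity; at a prevalence of 34.3% it returns roughly 90%, consistent with the prevalence boundary cited for that sensitivity.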

Figure 4

Starting with the pretest probability of coronavirus disease 2019 (COVID-19; prevalence), then selecting the false omission rate (Rfo; posttest probability), to establish adequate sensitivity. Abbreviation: NPV, negative predictive value.


As prevalence increases above 40%, however, the dynamic range (vertical span) of RFO becomes asymptotically narrower, allowing less latitude in assay sensitivity. To compound difficulties, unvaccinated people incur higher risk because human sources of infection become harder to identify when stealth infections go undetected.

Therefore, higher prevalence demands higher sensitivity to reduce the number of FNs and diminish chances of dangerous missed diagnoses. For example, with a sensitivity of 90% (step A, orange) and a posttest RFO of 5% (step B), the test would be limited to a low-moderate prevalence of 34.3% (step C). Above that prevalence, a higher sensitivity test is needed.

At the protective prevalence of ∼70% for community immunity (“herd immunity”), there is little or no room for uncertainty in the sensitivity of the test (Figure 4, upper right). False omissions would lead to underestimates of prevalence, and potentially a vicious cycle. Tier 3 performance probably will be needed to pick up SARS-CoV-2 variants (validation required) and would be effective in addressing and eliminating false omissions as prevalence tops out.

Practical Examples of COVID-19 Diagnostics

Example 1: First Multiplex Influenza A/B + SARS-CoV-2 + Respiratory Syncytial Virus

The main frame of Figure 5 displays the PV GM2 of the first FDA EUA for a multiplex test8  that simultaneously detects influenza A and B as well as SARS-CoV-2. The upper inset table lists sensitivity, expressed as positive percent agreement [PPA; TP / (TP + FN)] for EUA comparison data, and specificity, expressed as negative percent agreement [NPA; TN / (TN + FP)], for the 3-plex assay. Influenza A/B performance exceeds that of tier 3, whereas SARS-CoV-2 performance does not.1 

Figure 5

PV GM2 plots for the first multiplex SARS-CoV-2 assays with comparison of FDA EUA specifications, Canada Target requirements, and tier 3 high performance. Abbreviations: EUA, emergency use authorization; FDA, US Food and Drug Administration; NPA, negative percent agreement; NPV, negative predictive value; OTC, over-the-counter; PPA, positive percent agreement; PPV, positive predictive value; PV GM2, predictive value geometric mean-squared; RSV, respiratory syncytial virus; SARS-CoV-2, severe acute respiratory syndrome coronavirus 2.


A US multicenter group (including University of California Davis Health, Sacramento) reported a PPA for SARS-CoV-2 of 100% (CI, 97.7%–100%) and an NPA of 97.4% (CI, 94.1%–98.9%) for the 3-plex SARS-CoV-2/influenza A/B test in 494 patients; aliquots were run on the cobas 6800/8800 (Roche Diagnostics), and an analysis of 357 evaluable remnants was then conducted using test results generated by the small, portable Liat (also Roche Diagnostics).9 

The inset in the middle of Figure 5 displays the performance of a 4-plex FDA EUA10  assay that adds respiratory syncytial virus (RSV). Sensitivity and specificity data in the instructions for users for this influenza A/B-RSV-SARS-CoV-2 test were based on use of known archived samples. There are other multiplex SARS-CoV-2 + influenza A/B assays with RSV multiplexed into the assay format.11 

Figure 5 (main frame) also compares FDA OTC specifications (95% sensitivity, 99% specificity) with the original FDA EUA specifications (90% sensitivity, 95% specificity) and Canadian targets (95% sensitivity, 98% specificity). The comparison emphasizes that future designs for multiplex molecular diagnostics intended for any setting, POC or otherwise, should avoid these lower grades of performance and, as much as possible, eliminate the falloff as prevalence increases.

Example 2: First Five FDA EUA Antigen Tests

Figure 6 compares the performance of the first 5 FDA EUA antigen tests12  to that of tier 3 (green). The performance of 1 test with a claimed specificity of 100% (purple) falls off asymmetrically. Two other tests with PPVs of 100% (red) are problematic because performance deteriorates rapidly and significantly with increasing prevalence. Two additional tests (black, purple) do not match that of tier 3 but are similar in performance as prevalence hits ∼50% or higher. The performance of a test (blue) for which the EUA was refiled is mediocre.

Figure 6

Comparison of the performance of the first 5 SARS-CoV-2 antigen tests that obtained Food and Drug Administration emergency use authorization (FDA EUA) status versus Tier 3 performance with analysis of uncertainty. Abbreviations: NPA, negative percent agreement; NPV, negative predictive value; PPA, positive percent agreement; PPV, positive predictive value; PV GM2, predictive value geometric mean-squared; SARS-CoV-2, severe acute respiratory syndrome coronavirus 2.


The inset shows uncertainty as the upper and lower bounds of the 95% CIs in terms of PV GM2 for the test with rapidly decaying performance, identified by the upward-pointing arrow. The smaller inset plots the magnitude of the 95% CI, which has a local minimum at 26.1% prevalence and maxima at both low (<5%) and high (86.5%) prevalence. Figure 6 shows that antigen tests must be analyzed individually to determine the range of prevalence with the best performance and least uncertainty.

Example 3: First Portable PCR Test Kit and First Rapid COVID-19 IgG/IgM Antibody Test for POC

Figure 7 contrasts the symmetric versus asymmetric performance of 2 FDA EUAs: the left panel for the first portable PCR test kit for the direct molecular detection of SARS-CoV-213  (POC EUA metrics14  similar: PPA, 100% [CI, 89.0%–100%]; and NPA, 95.3% [CI, 87.1%–98.4%]), and the right panel for the first rapid COVID-19 IgG/IgM antibody test for assessment of immunity at POCs.15,16  Inset tables provide percent agreement and 95% CI details.

Figure 7

PV GM2 curves, uncertainty bands, and magnitudes of uncertainty for the first portable coronavirus disease 2019 (COVID-19) PCR test kit (left) and the first rapid COVID-19 IgG/IgM antibody test for the point-of-care (right) granted EUAs by the FDA. Abbreviations: COVID-19, coronavirus disease 2019; EUA, Emergency Use Authorization; FDA, US Food and Drug Administration; Ig, immunoglobulin; PCR, polymerase chain reaction; PV GM2, predictive value geometric mean-squared.


The magnitudes of the 95% CIs map as the bands between the black PV GM2 line (left) and curve (right) at the top and the boundaries of the lower confidence limits (red). On the left, PV GM2 is 1, based on claims of perfect (100%) sensitivity and specificity. However, FDA authorization was granted with a sample size of only 60. The 95% CI has minimum uncertainty at 50% prevalence but maximum uncertainty at 0% and 100% prevalence (see blue inset).

On the right, the magnitude of the 95% CI increases (blue inset) from the local minimum at 41.1% to a peak at 100% prevalence. This FDA EUA is based on a sample size of 118. The local maximum at ∼3% and global minimum at 0% cast doubt on the reliability of the antibody test in low-prevalence settings, consistent with observations by others.17  The portable and POC tests analyzed in Figure 7 need additional multicenter validation studies to confirm manufacturer claims of sensitivity and specificity, overall clinical utility, cost-effectiveness, and use at POCs.

Example 4: First OTC Fully At-Home POC Diagnostic Test for COVID-19

Figure 8 analyzes the first OTC COVID-19 antigen capillary action test strip, which received an FDA EUA to be performed entirely at home.18,19  The figure compares asymptomatic (left) and symptomatic individuals (right), along with the magnitudes of uncertainty (bottom panels). In the range of moderate prevalence from 20% to 70%, performance is better and uncertainty relatively “flat” for symptomatic individuals. In both cases, performance falls off at higher prevalence because of inferior sensitivity and decreasing NPV, typical of antigen tests.

Figure 8

Asymptomatic (left) and symptomatic (right) performance and uncertainty of the first antigen test granted FDA EUA when operated fully at home. Abbreviations: COVID-19, coronavirus disease 2019; EUA, Emergency Use Authorization; FDA, US Food and Drug Administration; LCL, lower limit of the 95% CI; NPA, negative percent agreement; PPA, positive percent agreement; PPV, positive predictive value; PV GM2, predictive value geometric mean-squared; UCL, upper limit of the 95% CI.


Dynamic Prevalence

Mathematical analyses and visual logistics help customize the design and selection of COVID-19 diagnostics. Prevalence remains highly unpredictable, largely unknown, and geographically variable for settings now moving from moderate to high. The selection of tests based on performance expectations must anticipate the dynamic ranges of prevalence shown in the Supplement.

Tests with adequate performance in low-prevalence settings may not perform well in high-prevalence ones, and vice versa. At low prevalence, suboptimal specificity creates unwanted FPs, and PPV suffers. At high prevalence, low sensitivity creates FNs and degrades NPV. The World Health Organization acceptable target product profile for POC rapid tests to detect SARS-CoV-2 infection (sensitivity 80%, specificity 97%)20  reflects subtier sensitivity that will generate poor performance.

Evaluations of COVID-19 tests based on the World Health Organization criteria without adequate consideration of prevailing prevalence fall short.21  However, visual logistics show that at intermediate prevalence of 20% to 50%, tiered assays perform reasonably well, provided disease prevalence does not top 50% and cross the prevalence boundary set by the false omission rate.

With high prevalence, false omissions will allow persons deemed free of COVID-19 to spread disease, propel outbreaks, and trigger setbacks. Tier 3 offers an operational solution. At low prevalence, a specificity of ≥99% conservatively avoids FPs. Sensitivity is 100%, so there are no FNs, NPV = 100%, and PV GM2 = PPV. Tests achieving tier 3 performance levels show consistently improving PV GM2 as prevalence increases, an excellent test design for an advancing pandemic.
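Because PV GM2 is the geometric mean-squared of the predictive values (PPV × NPV), the tier behavior can be checked numerically. A brief sketch using the tier sensitivities and specificities defined in this study:

```python
def pv_gm2(sens, spec, prev):
    """Predictive value geometric mean-squared: PPV * NPV."""
    tp, fp = sens * prev, (1 - spec) * (1 - prev)
    tn, fn = spec * (1 - prev), (1 - sens) * prev
    return (tp / (tp + fp)) * (tn / (tn + fn))

# Tier sensitivity/specificity pairs from this study
tiers = {"T1": (0.90, 0.95), "T2": (0.95, 0.975), "T3": (1.00, 0.99)}
profile = {name: [pv_gm2(se, sp, p) for p in (0.1, 0.5, 0.9)]
           for name, (se, sp) in tiers.items()}
# With 100% sensitivity, tier 3 has NPV = 1, so PV GM2 equals PPV
# and rises monotonically with prevalence.
```

The numbers confirm the text: tier 3 improves steadily as prevalence climbs, whereas tier 1 falls off at high prevalence because FNs erode NPV.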

Control of False Omissions

Given high specificity, low sensitivity can limit test performance because of prevalence boundaries (Figure 3) beyond which a test generates excessive false omissions. Low-sensitivity tests produce progressively more frequent false omissions as prevalence increases, that is, more infected people who have FN COVID-19 test results—potential stealth transmitters.

With tier 3 performance, false omissions do not occur, presuming that specimen procurement and processing are free of preanalytic errors. Tier 2 was designed to have a prevalence boundary of approximately 50% when the false omission rate (RFO) is 5% (Figure 3, no. 6); and tier 1, when the RFO is 10% (Table 1).

Figure 4 offers a practical algorithm to avoid false omissions. Start with the best assessment of local prevalence, select the desired RFO, and then find the required test sensitivity (blue pathway). Alternatively (orange pathway), start with sensitivity and find the prevalence boundary for a given RFO. Either way, higher prevalence demands higher sensitivity, increasingly so, because the dynamic range for RFO diminishes with higher prevalence (Figure 4, upper right).
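Both pathways reduce to algebra on the false omission rate, RFO = FN/(FN + TN). A sketch under that definition, solving the same relationship in both directions:

```python
def rfo(sens, spec, prev):
    """False omission rate: FN / (FN + TN)."""
    fn = (1 - sens) * prev
    tn = spec * (1 - prev)
    return fn / (fn + tn)

def required_sensitivity(prev, spec, rfo_max):
    """Blue pathway: minimum sensitivity keeping RFO <= rfo_max at this prevalence."""
    return 1 - rfo_max * (1 - prev) * spec / (prev * (1 - rfo_max))

def prevalence_boundary(sens, spec, rfo_max):
    """Orange pathway: prevalence at which RFO reaches rfo_max."""
    return rfo_max * spec / ((1 - sens) * (1 - rfo_max) + rfo_max * spec)

# Tier 2 (95% sensitivity, 97.5% specificity) with a 5% RFO yields a
# boundary near 50% prevalence, matching the tier 2 design target.
```

As prevalence approaches the boundary, required_sensitivity climbs steeply, which is the shrinking RFO dynamic range seen in the upper right of Figure 4.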

Impact of Novel COVID-19 Diagnostics

Influenza A/B + SARS-CoV-2 Multiplexing (Example 1)

Winters present significant challenges when differentiating seasonal flu from COVID-19. Mistaken diagnoses could perpetuate surges in both infections, although year-on-year influenza positivity in California is only 0.3% (2020–2021) versus 26.9% (2019–2020), with zero outbreaks thus far this winter. COVID-19 masking and safe spacing have truncated seasonal influenza.

Multiplex tests now are available that detect influenza A/B + SARS-CoV-2,8  and additionally RSV.10,11  These multiplex tests will prove essential for differential diagnoses of patients, including children, presenting with symptoms suggestive of COVID-19.

If suspicion of COVID-19 infection and positivity is moderate to high and/or testing is early (first day) or late (>7 days) in the clinical course, repeat testing and lower respiratory tract specimens may be needed to increase diagnostic yield and partially offset FN results.22 

Performance Falloff and Uncertainty of Antigen Tests (Example 2)

Dependence on sensitivity or specificity metrics alone can mislead. Each metric carries uncertainty, typically expressed as a 95% CI: roughly 1 time in 20, true performance will fall outside the claimed interval. Even within the CI, performance can vary widely, as illustrated for the COVID-19 antigen test shown in the inset of Figure 6.

That assay has dual problems: falloff in performance at high prevalence compounded by variable uncertainty throughout the prevalence range. The inset in Figure 6 shows the magnitude of uncertainty in terms of PV GM2. Uncertainty increases at both low prevalence and 80% to 90% prevalence. Starting from a local maximum, the CI attains a local minimum at 26.1% prevalence, followed by a local maximum at 86.5%. Clinical usefulness may be erratic, inconsistent, or confusing.

Rapid antigen testing may help in outbreaks when asymptomatic super-spreaders unknowingly cause roughly 40% of infections, because their viral load is high. However, an antigen test should be avoided when the prevalence in the community matches the range for which the test uncertainty is maximum.

For variants already spreading to numerous countries, R0 is as much as 0.7 higher,23  which means the minimum prevalence to achieve community (herd) immunity may be as high as 80%. The FDA has encouraged the use of authorized antigen tests for asymptomatic patients.24  Tests generating false omissions (FNs) will, however, underestimate incidence and corrupt research study results. Nonetheless, the volume of COVID-19 testing and genetic sequencing must be expanded significantly to keep up with variants.
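In the classic well-mixed model (a textbook relation, not derived in this article), the community immunity threshold is 1 − 1/R0, so even a modest increase in R0 raises the threshold appreciably. The baseline R0 below is an assumed illustrative value:

```python
def herd_immunity_threshold(r0):
    """Classic well-mixed-population threshold: fraction immune = 1 - 1/R0."""
    return 1 - 1 / r0

# Assumed baseline R0 of 3.0 versus a variant 0.7 higher (illustrative only)
baseline = herd_immunity_threshold(3.0)   # about 67%
variant = herd_immunity_threshold(3.7)    # about 73%
# An R0 of 5 would put the threshold at 80%, the upper figure cited above.
```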

Although antigen tests are fast, convenient, and accessible, they also are prone to FNs because of suboptimal sensitivity. For example, one rapid antigen test showed a sensitivity of 64.2% for specimens from symptomatic persons and only 35.8% for those lacking symptoms.25  In general, consider confirming results with a molecular diagnostic when prevalence lies outside the 20% to 40% range, viral load is likely to be low, symptoms are absent, or a missed diagnosis (false omission) could cascade into multiple deaths, for example, in a nursing home. Regardless, antigen tests with tier 2 or better sensitivity and specificity should gate airline travel. The Centers for Disease Control and Prevention approved both antigen and nucleic acid amplification testing 3 days prior to boarding a plane returning to the United States.

Firsts for the POC (Example 3)

Assays have been authorized for mobile use. The first portable COVID-19 test kit (Figure 7, left) may have poor reliability in either low or high prevalence because of uncertainty in sensitivity and specificity. On the right, the first rapid COVID-19 IgG/IgM antibody test for the POC displays the same type of uncertainty seen in Figure 6, that is, local and global maxima.

The FDA EUA process of allowing an N of only 60 (Figure 7, left frame) to launch a portable molecular diagnostic test kit generates uncertainty. Expected performance, indicated by black lines, appears good if not perfect on the left, but the lower confidence limit for sensitivity is 88.6% on the left and 88.7% on the right. These represent unacceptable sub–tier 1 performance levels. The bowl-shaped magnitude of the CIs (blue) raises concerns about reliability and consistency.
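The width of these limits follows from the small validation cohorts. When an observed result is perfectly concordant (x = n), the exact (Clopper-Pearson) lower 95% limit has the closed form (α/2)^(1/n). A hypothetical calculation (the positive/negative cohort split of the EUA study is inferred for illustration, not stated here):

```python
def cp_lower_perfect(n, alpha=0.05):
    """Exact (Clopper-Pearson) lower confidence limit when all n results agree,
    i.e., the observed proportion is n/n = 100%."""
    return (alpha / 2) ** (1 / n)

# Roughly 30 positive specimens with 100% observed agreement yields a lower
# limit near 88%, close to the 88.6% quoted; all 60 would give about 94%.
lower_30 = cp_lower_perfect(30)
lower_60 = cp_lower_perfect(60)
```

The calculation shows why, even with flawless observed agreement, an N this small cannot statistically exclude sub–tier 1 sensitivity.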

Entirely Home Testing (Example 4)

Home testing may prove valuable, particularly for those with symptoms of COVID-19, as the right frame of Figure 8 indicates. The band of relatively low uncertainty covers the range of moderate prevalence (20%–70%), which most communities are now entering. Home testing encourages empowerment, but self-testing should be accompanied by education. For example, a negative antigen test at home is not license to travel, gather, mingle, or work without precautions necessary to prevent COVID-19 contagion. As shown in Figure 6, the performance of antigen tests generally falls off as prevalence increases, and for some tests it declines rapidly. People should be wary of these drawbacks.

Impact of Uncertainty

On October 2, the state of Nevada blocked nursing homes from using federally provided rapid COVID-19 antigen testing equipment because of FP results, which place disease-free residents at extreme risk when transferred to units with patients known to be infected with COVID-19.26–28  On October 8, the US Department of Health and Human Services called the state action “inconsistent with and preempted by federal law” and ordered the state to resume using the equipment.

The federal government had supplied 14 000 nursing homes around the country with Quidel Corp and Becton Dickinson & Co antigen testing equipment in part to fulfill a federal mandate that nursing homes use POCT to evaluate all staff members.26–28  These tests obviate sending specimens off site for more expensive and slower direct molecular detection of SARS-CoV-2.

Nevada withdrew its directive on October 9 but responded, “…our goal remains united in protecting those most vulnerable in our communities.”23–25  Rapid antigen tests are designed to detect SARS-CoV-2 proteins. Nevada discovered that 23 of 39 antigen-positive tests were negative (59% FP rate) when checked with PCR-based molecular diagnostics. Nevada's request to pause for further review and training was declined. Subsequently, the state recommended follow-up of all convalescent care test results with molecular diagnostics.

Residents of nursing and long-term care facilities account for more than 170 000 COVID-19 deaths (35%) and ∼1.3 million cases nationwide (as of February 18, 2021).29  False positives generated in nursing homes by the Becton Dickinson Veritor test (bold red line in the main plot of Figure 6) were initially reported September 15.26  The purple line (Quidel Sofia) and bold red line (Becton Dickinson Veritor) illustrate antigen testing performance for the 2 tests initially banned by Nevada until overruled by the federal government. As prevalence increases, frequent FNs will become apparent because of lower sensitivity; false negatives produce falloff of the PV GM2 curves to the right in Figure 6.

However, the uncertainty of potential FPs appearing at low prevalence tends to level the playing field, as illustrated for the worst-performing test (Veritor) among the 5. This suggests that the federal order was premature in the absence of further performance evaluation and that standards for confidence limits should be narrower to improve reliability and confidence in COVID-19 tests.

Outbreak Control

During spring and early summer 2020, the spread of COVID-19 went undetected in part because of worldwide limited testing capacity.30  The United States has 337 000 clinical laboratory technologists, many of them weary from having performed COVID-19 testing for extended hours for several months. Expansion of testing outside the laboratory would help relieve these labor shortages in hospital laboratories and other sites operating under state and federal statutes.

Faster results would also better support tracing. The Broad Institute of MIT (Cambridge, Massachusetts) and Harvard (Boston, Massachusetts) reports test results in 24 hours.31  It increased testing staff from 125 to 375 personnel, who perform 70 000 tests per day at $25 each for 108 partners, of which 60% are northeastern colleges and schools; twice-weekly testing of students there has limited positivity rates to 0.01% to 0.2%. Hence, contagion can be controlled through coordinated planning, diligent testing, and rapid results reporting.

Interpreting results will remain challenging because knowledge of the pathophysiology of COVID-19 is evolving rapidly, and preanalytic errors can easily deteriorate predictive values. In reinfection,32–34  limited data complicate trending of antibody responses and interpretation of diagnostic truth as prevalence increases. We do not know whether vaccinations will be effective, or for how long, especially in light of new variants, although some speculate 8 to 9 months. Variants will affect COVID-19 test results, in part because they are thought to be more infectious and will increase prevalence.

POC Trends

Point-of-care COVID-19 testing, pivotal to pandemic control, has popped up in airports as governments like Germany make it a requirement35  and US airlines try to attract travelers.36,37  French trains have provided testing for free.35  Point-of-care testing is found in homes38,39  and drive-ups, which also provide influenza vaccinations. Portable multiplex molecular diagnostics provide vital diagnostic information for 20-minute triage, rational quarantine, and safe spacing (“social distancing”). Amazon conducts 50 000 tests per day.40  Points of need in industry, communities, immigration, and schools provide testing. Figure 8 assesses the first entirely at-home antigen test and its uncertainty. On an individual level, home testing and testing close to home empower people to engage in limiting contagion.

Public Health Policy

Community (herd) immunity to COVID-19 and new variants requires high prevalence, from 70% to 85%. Calculations based on diagnostic testing and/or surveillance41  and using a derived reproduction number, Rt (versus R0), generate proportions of the total population varying from 5.6% (Kuwait) to 85.0% (Bahrain), with 72% of the countries in the range of 56.1% to 74.8% or higher. The estimate for the United States is 69.9%,41  which supports the use of 70% as the threshold for high prevalence in this article. Highly contagious variants push the threshold for community immunity higher. The United States should prepare a long-term plan42  for POC strategies that relieve laboratory staff shortages, identify up to 40% of health care workers who are asymptomatic carriers,43  and accelerate high-quality testing at geospatially optimized sites.44  The widespread availability of COVID-19 testing will enable ethical public health policy.

Practice Principles

This research identified overriding design and practice principles for COVID-19 diagnostics and testing. Designers can optimize patterns of simultaneous high performance and high reliability for COVID-19 tests, given target ranges of prevalence. Likewise, tests can be selected on that basis. Table 3 outlines reasonable uses of COVID-19 diagnostics at low, moderate, and high prevalence. This table can help launch global standardization of performance requirements for hundreds of new COVID-19 tests or at least initiate a process for attaining uniform and consistent federal guidance and regulations and their enforcement.

Table 3

Recommended Coronavirus Disease 2019 (COVID-19) Diagnostics for Low, Moderate, and High Prevalence

Standards must encompass both performance and 95% CIs and link the two. High performance, such as tier 3 with the lower bound of the 95% CI set at tier 2 sensitivity and specificity (which were designed to serve that purpose), would guide consensus review of COVID-19 diagnostics for the realistic effectiveness of predictive values in relation to prevailing prevalence. Validation can focus on clinical efficacy in rapid decision-making, diagnostic efficiency for asymptomatic versus symptomatic patients, cost-effectiveness at breaking chains of viral transmission, and ultimately, impact on morbidity and mortality.45 

Additional tools to improve diagnostic practices comprise reference standards, large field validations, algorithmic confirmation of results, proven uses for screening and contact tracing, and archived assay results. In the post-EUA era ahead, one can expect poorly performing tests to be ignored or to fail.46  Importantly, test shortcomings can lead to underestimating the burden of COVID-1947  and other highly infectious diseases. Nonetheless, POC strategies are needed to deal with COVID-19 variants, prepare for future infectious crises, and avoid catastrophic losses of life, social disruptions, and economic disasters worldwide.

Raised Bar

High standards for COVID-19 test performance will facilitate pandemic control. Visual logistics identify room for improvement in COVID-19 test specifications for the precedents analyzed here, namely, first multiplex molecular diagnostics, antigen, new POC, portable PCR kit, and home tests (Figures 6 through 8).

Generally, determining the degree of individual infectiousness from reverse transcription–PCR results or Ct (cycle threshold) values48  remains challenging.49  In addition, relatively high-cost PCR assays limit capacity. As prevalence increases, FNs with some tests threaten sound management and create legal liability.50 

Epidemic mitigation depends on the frequency and accessibility of testing. Large-scale screening demands smaller, faster, smarter, and cheaper test formats at points of need, but POCT per se is not an excuse for inaccuracy. With viral load reference standards and further studies, antigen levels may ultimately be correlated with viral burden.

Operators should be trained to recognize symptomatic patients, time testing for presumed peaks in viral load, carefully collect specimens, avoid off-label use, and use good judgment if test results are used to shorten the duration of quarantine or confirm an immune response to vaccination.

Reliable, Accessible, Predictable

Poor reliability and unexpected results can be addressed by means of larger objective field validations that carefully investigate differences in test results for symptomatic versus asymptomatic individuals (eg, Figure 8). In asymptomatic individuals, FNs produced by POC antigen tests, whether intrinsic to the assay, from faulty specimen collection, or due to poor timing relative to viral load, may delay recognition of recurrences. Therefore, COVID-19 test results should be documented in well-organized open-access public registries.

A National Academy of Sciences report that focuses on asymptomatic surveillance concludes that accessibility and acceptance in rural small colleges are the primary virtues of saliva testing of students, faculty, and staff.51  However, a slow viral rise in saliva52  and a sensitivity as low as 73.1%53  or worse limit its utility, so performance metrics should be fully disclosed.52–55  Emerging studies fine-tune predictability, costs, and other criteria for community and health care screening using saliva specimens.56–59 

Standards and Sustainability

Original assays developed within a single laboratory no longer need FDA premarket approval through the 510(k) pathway for medical devices. Higher standards to improve EUA-grade diagnostics, statistical proof of performance, agreed confirmation algorithms, publicly accessible multicenter databases, and other evaluables are necessary to demonstrate real-world effectiveness for public health practice.60  False positives and negatives can alienate people from screening and cause skepticism, poor compliance, wasteful inefficiency, and risk of contagion.

Public health crises have taught us that speed counts.61  Point-of-care strategies belong permanently on the front lines to accelerate triage and therapeutic turnaround time. Highly performing diagnostics, wherever implemented, are necessary for sustainable diagnosis and treatment response. Integrated sustained approaches using multiple types of production (eg, 3-dimensional printed swabs) and diagnostic platforms are needed to meet increasing demands of stressed supply chains, overwhelming test volumes, and spread of viral variants that must be detected and traced quickly.

Innovation

Identified objectives comprise environmental screening for new threats, immediate investigations of outbreaks, isolation and treatment of those infected, worldwide immunologic observatories looking for antibodies in blood samples, infectious disease forecasting like weather reports, coordinated consensus standards, and 100-day vaccines using newly developed virology technologies.

Technologies for detection of SARS-CoV-2 in asymptomatic individuals must be improved. The performance of the home antigen test for asymptomatic individuals (Figure 8, left) barely exceeds tier 1 specifications, which are not adequate, although FDA authorization of home testing made it highly accessible to the public.

Strategies for detecting stealth threats represent a major opportunity for innovation. Using a baseline assumption that peak infectiousness occurs at the median of symptom onset and that 30% of infected individuals never develop symptoms, but are 75% as infectious as those who do, modelers found that persons with infections who never develop symptoms may account for 24% of transmissions.62  An additional 35% are presymptomatic spreaders. Together, these 2 groups account for 59% of all transmissions.62 
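The never-symptomatic share of transmission follows directly from the stated assumptions (30% of infections asymptomatic at 75% relative infectiousness). A sketch reproducing the 24% figure (the 35% presymptomatic share additionally depends on the modeled infectiousness profile around symptom onset, which is not reproduced here):

```python
def never_symptomatic_share(frac_asym=0.30, rel_infectiousness=0.75):
    """Fraction of all transmission attributable to people who never
    develop symptoms, under the stated modeling assumptions."""
    asym_weight = frac_asym * rel_infectiousness   # 0.30 * 0.75 = 0.225
    sym_weight = (1 - frac_asym) * 1.0             # symptomatic reference group
    return asym_weight / (asym_weight + sym_weight)

# 0.225 / 0.925, about 0.24 of all transmissions
share = never_symptomatic_share()
```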

UC Davis Health successfully implemented numerous fast, high-performance portable PCR instruments (Liat, Roche Diagnostics) in emergency departments and the health network.9  New rapid tests are rolling out to counter waves of infection and new variants in Europe, the United States, and other countries.63,64 

High-performance multiplex SARS-CoV-2 + influenza A/B + RSV molecular diagnostics (Figure 5) are needed for differential diagnosis during fall and winter flu seasons, although COVID-19 mitigation measures, such as masks, safe spacing, and lockdowns, diminished the need this past winter. All innovative SARS-CoV-2 multiplex formats must be validated for more contagious mutant variants.

Interestingly, a Massachusetts company was awarded the first EUA for a clustered regularly interspaced short palindromic repeats (CRISPR)–based COVID-19 test. CRISPR requires fewer steps, operates at room temperature, and yields a result quickly. South San Francisco entrepreneurs believe CRISPR can be fashioned into a portable SARS-CoV-2 test for home use.65  Assays will need updating to keep up with new mutations of SARS-CoV-2. New public health curricula will help facilitate the assimilation of these novel POC technologies.66–68 

POC Vision

The COVID-19 pandemic has proved unequivocally the high value of POCT. Public health practice can be enhanced by implementing POC strategies in communities and coordinating geospatial solutions in national and global strata.61,69–71  Smartphones provide accessible connectivity and alerts regarding exposure to COVID-19.

The pandemic is bolstering POC culture. People encountering COVID-19 testing in a variety of point-of-need locations, such as airports, workplaces, schools, convalescent care facilities, neighborhoods door-to-door, and limited-resource settings,72  assimilate mobile diagnostics into daily life, homes, and community activities.

Future health care professionals can comfortably embrace the concepts of POC culture and POC specialists,7376  individuals trained in diagnostics, medicine, therapeutics, epidemiology, and informatics who are equipped with POC technologies and knowledge adequate to respond immediately to outbreaks at points of need. These specialists can increase diagnostic field capacity and relieve central laboratories.

Point-of-care testing represents the new normal for outbreak detection, rapid diagnosis, disease mitigation, regional epidemic control, and rapid decision-making in the COVID-19 pandemic77–83 and in cleverly optimized geospatial POC strategies, the vision for the 21st century.

This work was supported in part by the Point-of-Care Testing Center for Teaching and Research (POCT•CTR) and by the author, its director. I would like to thank the creative students, fellows, and international scholars who design and study POC strategies in the United States and other countries. I am grateful to have received a Fulbright Scholar Award 2020–2021, which supports geospatial analysis of highly infectious diseases and cardiac rescue, strategic POCT field research in ASEAN member states, Cambodia, and the Philippines, and community and university lectures. I thank Doug Kirk, MD, chief medical officer, and Kimberly Bleichner-Jones, MBA, executive director of administration, UC Davis Health, for their support of this Fulbright program. Figures and tables are provided courtesy and permission of Knowledge Optimization, Davis, California.

1.
Kost
GJ
.
Designing and interpreting COVID-19 diagnostics: mathematics, visual logistics, and low prevalence
.
Arch Pathol Lab Med
.
2021
;
145
(3)
:
291
307
.
3.
Stenzel
T.
FDA virtual town hall series: immediate in effect guidance on Coronavirus (COVID-19) diagnostic tests. May 6, 2020
.
2021
.
4.
Health Canada.
COVID-19 serological testing devices: notice on sensitivity and specificity values. June 24, 2020
.
2021
.
5.
Raschke
RA,
Curry
SC,
Glenn
T,
Gutierrez
F,
Iyengar
S. A
Bayesian analysis of strategies to rule out COVID19 using reverse transcriptase-polymerase chain reaction (RT-PCR)
[published online ahead of print
April
27,
2020]
.
Arch Path Lab Med. doi: 10.5858/arpa.2020-0196-LE
6.
US Food and Drug Administration.
Template for manufacturers of molecular and antigen diagnostic COVID-19 tests for non-laboratory use: section 8, clinical evaluation, paragraphs 5 and 6, p. 16; and section 10: alternate clinical study approaches, paragraph 10 A and B
,
pp.
16
17
.
July 29, 2020.
2021
.
7.
Kennedy-Shaffer
L,
Baym
M,
Hanage
WP
.
Perfect as the enemy of the good: using low-sensitivity tests to mitigate +SARS-CoV-2 outbreaks
.
2020
.
Digital Access to Scholarship at Harvard Web site.
2021
.
8.
US Food and Drug Administration.
Emergency Use Authorization: cobas SARS-CoV-2 & Influenza A/B, instructions for users
,
Table 22, p. 38 of 44. September 3, 2020.
2021
.
9.
Hansen
G,
Marino
J,
Wang
ZX,
et al
Clinical performance of the point-of-care cobas Liat for detection of SARS-CoV-2 in 20 minutes: a multicenter study
.
J Clin Microbiol
.
2021
;
59
(2)
:
e02811
e02820
.
10.
Cepheid Xpert Xpress SARS-CoV-2/Flu/RSV.
US Food and Drug Administration Web site
.
January
9,
2021
.
11.
Seegene Inc.
Seegene introduces a high throughput 8-plex Test for Flu A, Flu B, RSV and COVID-19 with dual internal controls
[press release].
Cision PR Newswire
.
September 8, 2020.
2021
.
12.
US Food and Drug Administration.
In vitro diagnostic EUAs: individual EUAs for antigen diagnostic tests for SARS-CoV-2
.
2021
.
13.
US Food and Drug Administration.
Visby medical package insert: COVID-19. September 16, 2020
.
2021
.
14.
US Food and Drug Administration.
Visby medical package insert: COVID-19 point of care. February 8, 2021
.
2021
.
15.
US Food and Drug Administration.
Coronavirus (COVID-19) update: FDA authorizes first point-of-care antibody test for COVID-19
.
2021
.
16.
US Food and Drug Administration.
EUA authorized serology test performance: Assure COVID-19 IgG/IgM rapid test device. October 9, 2020
.
2021
.
17.
Mathur
G,
Mathur
S.
Antibody testing for COVID-19: can it be used as a screening tool in areas with low prevalence?
Am J Clin Pathol
.
2020
;
154
:
1
3
.
18.
US Food and Drug Administration.
Coronavirus (COVID-19) update: FDA authorizes antigen test as first over-the-counter fully at-home diagnostic test for COVID-19. December 15, 2020
.
2021
.
19.
US Food and Drug Administration.
Individual EUAs for antigen diagnostic tests for SARS-CoV-2: ellume Information for Users
.
2021
.
20.
World Health Organization.
Target Product Profiles for Priority Diagnostics to Support Response to the COVID-19 Pandemic v.1.0
.
Geneva, Switzerland
:
WHO;
2020
.
21.
Ghaffari
A,
Meurant
R,
Ardakani
A.
COVID-19 point-of-care diagnostics that satisfy global target product profiles
.
Diagnostics
.
2021
;
11
(115)
:
1
12
.
22.
Dugdale
CM,
Anahtar
MN,
Chiosi
JJ,
et al
Clinical, laboratory, and radiological characteristics of patients with initial false negative SARS-CoV-2 nucleic acid amplification test results
.
Oxford University Open Forum Infectious Diseases
.
November
24,
2020
.
23.
Volz
E,
Mishra
S,
Chand
M,
et al
Transmission of SARS-CoV-2 lineage B.1.1.7 in England: insights from linking epidemiological and genetic data
[posted online
January
4,
2021]
.
medRxiv
.
24.
US Food and Drug Administration.
FDA Virtual Townhall Series
.
December 16, 2020.
January
9,
2021
.
25.
Prince-Guerra
JL,
Almandares
O,
Nolen
LD,
et al
Evaluation of Abbott BinaxNOW rapid antigen test for SARS-CoV-2 infection at two community-based testing sites—Pima County, Arizona, November 3-17, 2020
.
MMWR Morb Mortal Wkly Rep
.
2021
;
70
:
100
105
.
26.
Mathews
AW
.
Testing maker probes false-positive results
.
Wall Street Journal
.
2020
;
276
(64)
:
A6
.
27.
Mathews
AW,
Abbott
B.
U.S. orders Nevada to allow rapid tests in nursing homes
.
Wall Street Journal
.
2020
;
276
(86)
:
A8
.
28.
Abbott
B,
Mathews
AW
.
Nevada lifts block on some rapid tests
.
Wall Street Journal
.
2020
;
276
(87)
:
A6
.
29.
The COVID Tracking Project.
The long-term care COVID tracker
.
2021
.
30.
Douglas
J,
Meichtry
S,
Barnett
A.
Europe outpaces U.S. in key case gauge: France declares state of emergency, imposes curfew on Paris as virus flares up on continent
.
Wall Street Journal
.
2020
;
276
(90)
:
A9
.
31.
Korn
M.
A testing lab helps keep northeast colleges open
.
Wall Street Journal
.
2020
;
276
(92)
:
A3
.
32.
Mathews
A.
Reinfection case indicates recovered patients are at risk
.
Wall Street Journal
.
2020
;
276
(88)
:
A7
.
33.
Tillett
RL,
Sevinsky
JR,
Hartley
PD,
et al
Genomic evidence for reinfection with SARS-CoV-2: a case study
.
Lancet Infect Dis
.
2021
;
21
(1)
:
52
58
.
34.
Iwasaki
A.
What reinfections mean for COVID-19
.
Lancet Infect Dis
.
2021
;
21
(1)
:
3
5
.
35.
Pancevski
B.
European alarm grows over surge–rise in COVID-19 cases coincides with peak tourist season, worries about complacency
.
Wall Street Journal
.
2020
;
276
(24)
:
A7
.
36.
Sider
A.
United Airlines to offer Covid tests to some travelers
.
Wall Street Journal
.
2020
;
276
(73)
:
B3
.
37.
McCartney
S.
Can airport Covid testing get people flying again?
Wall Street Journal
.
2020
;
276
(84)
:
A11
.
38.
US Food and Drug Administration.
Coronavirus (COVID-19) update: FDA authorizes first standalone at-home sample collection kit that can be used with certain authorized tests. May 16, 2020
.
2021
.
39.
US Food and Drug Administration.
Coronavirus (COVID-19) update: FDA authorizes first diagnostic test using at-home collection of saliva specimens. May 8, 2020
.
2021
.
40.
Herrera
S.
Amazon reveals Covid cases in workforce
.
Wall Street Journal
.
2020
;
276
(79)
:
B4
.
41.
Kwok
KO,
Lai
F,
Wei
WI,
Wong
SYS,
Tang
JWT
.
Herd immunity–estimating the level required to halt the COVID-19 epidemics in affected countries
.
J Infect
.
2020
;
80
:
e32
e33
.
42.
Kost
GJ
.
Protecting Americans from Highly Infectious Diseases Through the Creation, Dissemination, and Promotion of National Point-of-care Testing Policy and Guidelines
.
Davis, CA
:
Knowledge Optimization;
2017
.
43.
Gomez-Ochoa
SA,
Franco
OH,
Rojas
LZ,
et al
COVID-19 in healthcare workers: a living systematic review and meta-analysis of prevalence, risk factors, clinical characteristics, and outcomes
.
Am J Epidemiol
.
2021
;
190
(1)
:
161
175
.
44.
Kost
GJ
.
Geospatial hotspots need point-of-care strategies to stop highly infectious outbreaks: ebola and coronavirus
.
Arch Pathol Lab Med
.
2020
;
144
(10)
:
1166
1190
.
45.
Neilan
AM,
Losina
E,
Bangs
AC,
et al
Clinical impact, costs, and cost-effectiveness of expanded SARS-CoV-2 testing in Massachusetts
[published online ahead of print
September
18,
2020]
.
Clin Infect Dis. doi: 10.1093/cid/ciaa1418
46.
Abbott
B,
Krouse
S.
Rapid Covid-19 tests go unused
.
Wall Street Journal
.
2021
;
277
:
A1,A6.
47.
Wu
SL,
Mertens
AN,
Crider
YS,
et al
Substantial underestimation of SARS-CoV-2 infection in the United States
.
Nat Commun
.
2020
;
11
(4507)
:
1
11
.
48.
Buchan
BW,
Hoff
JS,
Cmehlin
CG,
et al
Distribution of SARS-CoV-2 PCR cycle threshold values provide practical insight into overall and target-specific sensitivity among symptomatic patients
.
Am J Clin Pathol
.
2020
;
154
(4)
:
479
485
.
49.
La Scolla
B,
Le Bideau
M,
Andreani
J,
et al
Viral RNA load as determined by cell culture as a management tool for discharge of SARS-CoV-2 patients from infectious disease wards
.
Eur J Clin Microbiol Infect Dis
.
2020
;
39
(6)
:
1059
1061
.
50.
US Food and Drug Administration.
Risk of false results with the Curative SARS-CoV-2 test for COVOD-19: FDA safety recommendation. January 4, 2021
.
2021
.
51. National Academies of Sciences, Engineering, and Medicine. COVID-19 Testing Strategies for Colleges and Universities. Washington, DC: The National Academies Press; 2020.
52. Winnett A, Cooper MM, Shelby N, et al. SARS-CoV-2 viral load in saliva rises gradually and to moderate levels in some humans [posted online December 11, 2020]. medRxiv.
53. Senok A, Alsuwaidi H, Atrah Y, et al. Saliva as an alternative specimen for molecular COVID-19 testing in community settings and population-based screening. Infect Drug Resist. 2020;13:3393–3399.
54. Czumbel LM, Kiss S, Farkas N, et al. Saliva as a candidate for COVID-19 diagnostic testing: a meta-analysis. Front Med. 2020;7(465):1–10.
55. Michelmore RW. Campus ready COVID-19 testing, University of California, Davis. Technical FAQ: how accurate is it? UC Davis Web site. 2020.
56. Deckert A, Anders S, de Allegri M, et al. Effectiveness and cost-effectiveness of four different strategies for SARS-CoV-2 surveillance in the general population (CoV-Surv Study): a structured summary of a study protocol for a cluster-randomized, two-factorial controlled study. Trials. 2021;22(39):1–4.
57. Fernandez-Gonzalez M, Agullo V, de la Rica V, et al. Performance of saliva specimens for the molecular detection of SARS-CoV-2 in the community setting: does sample collection method matter? [published online ahead of print January 8, 2021]. J Clin Microbiol.
58. Zhang K, Shoukat A, Crystal W, et al. Routine saliva testing for the identification of silent COVID-19 infections in healthcare workers [published online ahead of print January 11, 2021]. Infect Control Hosp Epidemiol. 2021.
59. Bastos ML, Perlman-Arrow S, Menzies D, Campbell JR. The sensitivity and costs of testing for SARS-CoV-2 infection with saliva versus nasopharyngeal swabs [published online ahead of print January 12, 2021]. Ann Intern Med. 2021.
60. National Academies of Sciences, Engineering, and Medicine. Rapid Expert Consultation on Critical Issues in Diagnostic Testing for the COVID-19 Pandemic (November 9, 2020). Washington, DC: The National Academies Press; 2020.
61. Kost GJ, Curtis CM, eds. Global Point of Care: Strategies for Disasters, Emergencies, and Public Health Preparedness. Washington, DC: AACC Press-Elsevier; 2015.
62. Johansson MA, Quandelacy TM, Kada S, et al. SARS-CoV-2 transmission from people without COVID-19 symptoms. JAMA Netw Open. 2021;4(1):e2035057.
63. Pancevski B. Rapid tests blossom amid rise in cases. Wall Street Journal. 2020;276(87):A7.
64. Colchester M. Europe's climbing load strains testing programs. Wall Street Journal. 2020;276(84):A8.
65. Vinluan F. COVID-19 drives new push for CRISPR-based home diagnostics (with commentary by Dr. Kost). Timmerman Report. January 9, 2021.
66. Kost GJ, Zadran A, Zadran L, Ventura I. Point-of-care testing curriculum and accreditation for public health–enabling preparedness, response, and higher standards of care at points of need. Front Public Health. 2019;8(385):1–15.
67. Kost GJ, Zadran A. Schools of public health should be accredited for, and teach the principles and practice of point-of-care testing. J Appl Lab Med. 2019;4:278–283.
68. Kost GJ. Public health education should include point-of-care testing: lessons learned from the COVID-19 pandemic. EJIFCC. 2021; in press.
69. Kost GJ. Geospatial science and point-of-care testing: creating solutions for population access, emergencies, outbreaks, and disasters. Front Public Health. 2019;7(329):1–31.
70. Curtis A, Ajayakumar J, Curtis J, et al. Geographic monitoring for early disease detection (GeoMEDD). Sci Rep. 2020;10:21753.
71. Kalyatanda GS, Archibald LK, Patnala S, et al. No human exists in isolation or as an island: the outcomes of a multidisciplinary, global, and context-specific COVID-19 consortium. Am J Dis Med. 2020;15:219–222.
72. Kost GJ, Katip P, Vansith K, Negash H. The final frontier for point of care: performance, resilience, and culture. Point of Care. 2013;12(1):1–8.
73. Kost GJ, Ferguson WJ, Kost LE. Principles of point of care culture, the spatial care path™, and enabling community and global resilience. e-Journal IFCC. 2014;25(2):134–153.
74. Kost GJ, Zhou Y, Katip P. Understanding point of care culture improves resilience and standards of care in limited-resource countries. In: Kost GJ, Curtis CM, eds. Global Point of Care: Strategies for Disasters, Emergencies, and Public Health Preparedness. Washington, DC: AACC Press-Elsevier; 2015:471–490.
75. Liu X, Zhu X, Kost GJ, et al. The creation of point-of-careology. Point of Care. 2019;18:77–84.
76. Carter JG, Iturbe LO, Duprey J-LHA, et al. Sub-5-minute detection of SARS-CoV-2 RNA using a reverse transcriptase-free exponential amplification reaction, RTF-EXPAR [posted online January 4, 2021]. medRxiv.
77. Dinnes J, Deeks JJ, Adriano A, et al. Rapid, point-of-care antigen tests and molecular-based tests for diagnosis of SARS-CoV-2 infection [review]. August 26, 2020. Cochrane Database Syst Rev. 2020;8:CD013705.
78. International Federation of Clinical Chemistry and Laboratory Medicine. Information guide on COVID-19. Updated December 2020.
79. Kamps BS, Hoffmann C. COVID Reference–The COVID Textbook. 6th ed. Hamburg: Steinhauser Verlag; 2021. https://covidreference.com. Accessed February 25, 2021.
80. Kost GJ. Newdemics, public health, small-world networks, and point-of-care testing. Point of Care. 2006;5(4):138–144.
81. Kost GJ. Point-of-care testing for pandemic management. Lab Insights. August 25, 2020.
82. Kost GJ. The Impact of Increasing Prevalence, False Omissions, and Diagnostic Uncertainty on COVID-19 Tests and the Future of Point-of-Care Testing. Boston, MA: Cambridge Healthtech Institute; 2021:16.

Author notes

Supplemental digital content is available for this article at https://meridian.allenpress.com/aplm in the July 2021 table of contents.

The author has no relevant financial interest in the products or companies referenced in this article.
