Process models that capture the myriad pathways pathogen-contaminated food may traverse before consumption, together with a dose-response function relating exposure to likelihood of illness, may represent a “gold standard” for quantitative microbial risk assessment. Nevertheless, simplifications that rely on measuring the change in contamination occurrence of a raw food at the end of production may provide reasonable approximations of the effects measured by a process model. In this study, we parameterized three process models representing different product–pathogen pairs (i.e., chicken–Salmonella, chicken–Campylobacter, and beef–E. coli O157:H7) to compare with predictions based on qualitative testing of the raw product before consideration of mixing, partitioning, growth, attenuation, or dose-response processes. The results reveal that reductions in prevalence generated from qualitative testing of raw finished product usually underestimate the reduction in likelihood of illness for a population of consumers. Qualitative microbial testing results depend on the test's limit of detection. The negative bias is greater for limits of detection closer to the center of the contamination distribution and diminishes as the limit of detection moves further into the right tail. Nevertheless, a positive bias can result when the limit of detection corresponds to very high contamination levels. Changes at these high levels translate to larger consumed doses, for which the slope of the dose-response function is smaller than at lower doses. Consequently, in these cases, a proportional reduction in prevalence of contamination results in a less than proportional reduction in probability of illness. The magnitudes of the biases are generally smaller for nonscalar (versus scalar) adjustments to the distribution.
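The central comparison can be sketched with a toy Monte Carlo simulation: a hypothetical normal distribution of log10 contamination levels, a scalar 1-log intervention, a qualitative test defined by a limit of detection, and an exponential dose-response model. All distribution and dose-response parameters below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical contamination distribution (log10 CFU per serving) and a
# scalar intervention that shifts every sample down by 1 log unit.
# These parameter values are assumptions for illustration only.
baseline = rng.normal(loc=1.0, scale=1.5, size=200_000)  # log10 CFU
treated = baseline - 1.0                                 # scalar 1-log shift

def prevalence(log10_levels, lod_log10):
    """Fraction of samples a qualitative test would call positive."""
    return np.mean(log10_levels >= lod_log10)

def mean_p_ill(log10_levels, r=1e-3):
    """Population-average illness probability under an exponential
    dose-response model, P(ill | dose) = 1 - exp(-r * dose)."""
    dose = 10.0 ** log10_levels
    return np.mean(1.0 - np.exp(-r * dose))

lod = 1.0  # limit of detection near the center of the baseline distribution
prev_reduction = 1.0 - prevalence(treated, lod) / prevalence(baseline, lod)
ill_reduction = 1.0 - mean_p_ill(treated) / mean_p_ill(baseline)

print(f"reduction in prevalence of positives:  {prev_reduction:.2f}")
print(f"reduction in mean illness probability: {ill_reduction:.2f}")
# With the limit of detection near the center of the distribution, the
# prevalence-based estimate understates the reduction in illness risk,
# i.e., the negative bias described in the abstract.
```

Moving `lod` further into the right tail shrinks this gap in the sketch, mirroring the abstract's observation that the negative bias diminishes at higher limits of detection.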
