Health departments and agencies such as the US Centers for Disease Control and Prevention have always relied on the technologies available to them contemporaneously to monitor the health of populations, identify emergent threats, and guide the allocation of human (eg, health educators, epidemiologists) and other (eg, pharmaceuticals) resources to mitigate any threats. Monitoring is defined as “the systematic collection, consolidation and evaluation of morbidity and mortality reports and other relevant data,” as well as “the regular dissemination of the basic data and interpretations to all who have contributed and to all others who need to know.”1(p3)
Long a cornerstone of infectious disease epidemiology, public health surveillance strategies are also used to monitor chronic conditions, health care utilization, and behaviors.2 The kinds of information collected have expanded with the growth of electronic health records, big data, and machine learning. Information is routinely extracted from diverse sources (eg, patient medical records, cell phones, social media accounts) without individuals' knowledge or explicit consent and shared with agencies (eg, law enforcement, social service agencies) and corporations whose interests in the information may diverge starkly from those of the data generators. The ethical implications of these practices have not been sufficiently examined.
Hypersurveillance of Black and other populations by law enforcement is a defining feature of US race relations.3 The information gained through hypersurveillance has historically been exploited to uphold white supremacist racial orders, from the role of overseers during slavery and White Citizens' Councils after slavery to the use of social science data by urban police departments to develop predictive models of suspected future crimes.4 Mistrust of law enforcement is rooted in these historical and ongoing practices.3,4 Of particular concern to health equity advocates, the widespread automated collection and sharing of public health surveillance data, especially with law enforcement, may undermine public health objectives and disproportionately impact the racially and socially marginalized populations that public health interventions are intended to serve.
While some presume the surveillance strategies of the early 21st century are unbiased, evidence from researchers and community organizers demonstrates that biases are embedded in the sources of information on which the systems rely and in the algorithms they use to analyze and present the information.5,6 Surveillance data are now generated, accessed, and shared with ease. This has led to the proliferation of data dashboards, which make the curated data available to the public or other audiences in near real time. Collectively, the power of surveillance technologies, the profitability of data sharing, the limited regulation of these practices, the increasing reliance on them, and the presumed objectivity of big data and surveillance help obscure their potential adverse impacts for racialized and marginalized communities.5
This Winter 2023 issue of the Rapid Assessment of COVID Evidence (RACE) Series highlights work from the COVID-19 Task Force on Racism and Equity being conducted by Amani et al7 (this issue) to enable health equity advocates to systematically evaluate the potential for public health surveillance systems to engage in these forms of harm. The article explains the development of a scoring schema for evaluating surveillance systems and presents the results of a pilot study that applied the schema to a set of systems identified in year 1 of the COVID pandemic as potentially useful for developing a surveillance system to monitor social determinants of COVID inequities. The pilot study's findings, together with the examples it discusses of harms to populations resulting from public health information sharing, underscore the potential for public health surveillance to harm communities, however inadvertently. There is an urgent need to regulate emerging technologies and advance ethical guidelines for their use.8
The concerns raised by Amani et al complement a set of concerns about equity in data dashboards and surveillance that is beyond the scope of the study. Equity has been defined as "assurance of the conditions for optimal health for all people" and it "requires valuing all individuals and populations equally, recognizing and rectifying historical injustices, and providing resources according to need."9(pS74) It is difficult to study a problem if it is not named. During the earliest days of the COVID pandemic, few systems collected and reported data disaggregated by race and ethnicity, which hampered efforts to detect disparities and provide "resources according to need." This suggests current surveillance strategies may undermine the attainment of equity in diverse communities, due both to the biases embedded in existing surveillance apparatuses and to the inadequacy of the assessments of disparities those apparatuses support.
The purpose of the RACE Series is to share findings from research being conducted by the COVID Task Force on Racism and Equity as the work is completed so that communities and others can use the information in support of their ongoing health equity efforts. This year-long partnership with Ethnicity & Disease benefits from the journal's focus on health equity and open access publication policy. Ethnicity & Disease makes all articles freely available to the public, which aligns with key principles of health equity.
The work was supported by UCLA David Geffen School of Medicine COVID-19 Research Award No. HE-28. Support for the Rapid Assessment of COVID Evidence (RACE) Series was provided in part by the Robert Wood Johnson Foundation (Grant No. 79361). The views expressed here do not necessarily reflect the views of the Foundation.