title: Counting Numbers
authors: Francis, John G.; Francis, Leslie P.
date: 2021-03-18
journal: Sustaining Surveillance: The Importance of Information for Public Health
DOI: 10.1007/978-3-030-63928-0_2

Surveillance began with counting the numbers of people in the population. At various times in history, numbers have been used to assess the overall strength of the population, to identify the march of dangerous contagion, or to determine needs for food or labor. But even simple counting of population numbers, vital statistics, or reports of disease has been controversial. Information is power and the most rudimentary surveillance can be used both for good and for harm. This chapter sets ethical questions about these basic surveillance methods in historical and epistemological context. It gives examples of uses of data about population numbers, vital statistics, or outbreaks that have been clearly beneficial, as well as examples that have bordered on the genocidal. Counting numbers, as a rudimentary epidemiological method, also presents the opportunity to explore ethical problems raised by epidemiology as a science, such as incomplete data, biased data, or false negatives or positives. Today, with increasing understanding of disease and availability of prevention or treatment, the advantages of outbreak detection may be shared far more widely and more equally. Nonetheless, outbreak detection can generate fear and hostility if patterns of disease track otherwise disfavored groups. COVID-19 has revealed the importance of demographic data about the distribution of disease burdens-data that may either generate mistrust as people see their disadvantage starkly, or that may foster trust if the result is increased attention to disparities in treatment and in health.

Early records noted the locations and progress of pestilences such as plague or cholera. These records were aimed at a foremost goal of public health surveillance: detecting outbreaks of deadly disease. However, they often were not sufficiently timely to allow societies to take action against the disease as it spread; instead, they served primarily as historical records of the disease's impact.

Outbreak detection presents a compelling case for timely and comprehensive information to avoid, mitigate, or-most hopefully-extinguish the spread of sickness. Counting cases of illness and death within a population can reveal the severity and distribution of an outbreak, as well as possibilities for exposure as the outbreak spreads to epidemic or pandemic levels. With COVID-19, daily numbers of reported new cases, deaths, and reportedly "recovered" patients who have survived at least three weeks from diagnosis serve to remind political leaders and the public of the pandemic's toll. Estimates of "excess" deaths-numbers of deaths beyond what would normally have been expected within a population-are also used to assess the pandemic's hidden impact. Vital statistics also reveal the pandemic's disparate impact on the elderly, people of color, people with disabilities, and the poor.

But even simple counting of population numbers, vital statistics, or reports of disease has been controversial. Forceful opposition to systematic counting of the population has persisted throughout history. Some judged that counting the population was sacrilegious because of fears that it might incur the wrath of God (NISRA 2019).
Others were concerned that the results would reveal the country's weaknesses to its enemies or that gathering the information was a threat to individual liberties. Still others resist collecting certain kinds of demographic information about the population; for example, France does not collect racial and ethnic categories in its census (Léonard 2015).

These controversies raise some of the most fundamental issues about surveillance. Information is power, and the most rudimentary surveillance can be used both for good and for harm. Knowledge of population numbers has enhanced the power of monarchs to levy oppressive taxes or conscript soldiers. It has been regarded as sacrilegious in some religious traditions. It has been deployed to reveal migration patterns or to find immigrants themselves, as well as, in the U.S., to allocate political representation in ways that have at times been unfair or clearly discriminatory. It has been thought to reveal generalizations about population subgroups that may be stigmatizing or degrading.

Political leaders may also wish to suppress information about numbers or present numbers in ways that make them appear more favorable than they actually are. Rulers may hope to moderate alarm among the population or to enhance their political positions. President Trump's decision in July 2020 to have hospitals report COVID-19 data to the Department of Health and Human Services rather than to the Centers for Disease Control and Prevention was criticized by those who feared exactly this kind of data manipulation for political gain (Stolberg 2020).

This chapter sets ethical questions about these basic surveillance methods in historical and epistemological context. It gives examples of uses of data about population numbers, vital statistics, or outbreaks that have been clearly beneficial, as well as examples that have bordered on the genocidal. Counting numbers, as a rudimentary epidemiological method, also presents the opportunity to consider some of the problems raised by epidemiology as a science and the ethical implications of how these problems are answered. How can the science of epidemiology-or science more generally-go wrong in ways that might undermine ethical justifications for surveillance?

The spread of bubonic plague presents perhaps the most sustained example of counting cases of disease over the centuries. Recent outbreaks of contagion have been counted, too, from Ebola to COVID-19. Daily logs of new infections and deaths were published as the COVID-19 pandemic spread. Common themes are reflected in these numbers and how they are publicized. The roles of fear, misunderstanding, and mistrust are apparent. So is the perception of contagious disease as attacking from without and the use of military rhetoric in describing responses to its spread. Some communications have stigmatized disease victims. Others have emphasized the interactions between health disparities and burdens of infection. Finally, people seem to care about having information that is timely and complete. All of these themes can be found in the history of the plague.

Plague, caused by the bacterium Yersinia pestis, stimulated collection of vital statistics. Plague is deadly, with fatality rates of 30% to 100% without treatment. Plague killed more than fifty million people in Europe alone during the fourteenth century, over half of the population at the time.
Plague is also ugly: its symptoms include weakness, seizures, diarrhea and vomiting, bleeding, swollen lymph nodes, and the blackened skin that gave it the label "Black Plague." For most of history, all that people could do for protection was to become aware of where disease had broken out and make efforts to avoid it; today, early antibiotic therapy can successfully treat most cases of plague. How plague spread-through bites of infected fleas or lice-was not known until the last century, either. Even today, how plague spreads remains an ongoing subject of study; until recently, rats were thought to be the vehicles transporting fleas, especially aboard ships, but modeling now suggests human transport was to blame for carrying plague-bearing fleas across the globe (Dean et al. 2018).

The plague also illustrates the range of human reactions in the face of pandemic spread. Immediate threats of horrifying and deadly diseases are commonly met by panic and fear. Plague was recognizable, disgusting, and deadly. All too often, fearful reactions to such diseases trace racial or ethnic lines, judgments of moral opprobrium, or both. Reactions may condemn or destroy cultural or religious practices. Those supposed to be victims of disease may be seen as deadly sources of contagion and quarantined, banished, or exterminated.

As the plague swept apparently inexorably across Europe during the early Renaissance, strategies to prevent disease spread developed, such as quarantine and the cordon sanitaire. Quarantine-the practice of keeping ships offshore for forty days until all disease was supposed to have died out-was instituted in the region of Venice in 1377 to protect against plague (Tognotti 2013; Gensini et al. 2004). The Venetian Republic appointed three guardians to detect and exclude ships carrying the disease (Declich and Carter 1994). The first English quarantine regulations were adopted in 1663 in London and the initial French regulations in 1683 in the port city of Marseille-both also to keep plague from entering those cities. A further strategy to halt disease spread was drawing geographical lines that could not be crossed-the so-called cordon sanitaire. The heroic village of Eyam in England self-isolated, perhaps creating even greater risk for its residents by transforming their disease to its deadlier pulmonary form (Massad et al. 2004).

Plague arrived on the west coast of the United States towards the end of the nineteenth century, apparently from China via Hawai'i during a pandemic that originated in southern China and spread widely in Asia and Europe. The disease was first found in Hawai'i among Chinese residents. At the time, although the bacillus causing the disease could be identified, its mode of transmission was unknown. The assumption was common that white European ancestry conferred immunity (Randall 2019, p. 6). Honolulu's Chinatown was quarantined out of fear of the pestilence. When the home of one of the victims was burned to eradicate the infection, a shift in the winds flared the fire out of control and Honolulu's entire Chinatown was reduced to ashes.

When a case of plague was identified in San Francisco's Chinatown a few months later, it appeared in a city where anti-Chinese sentiment was fierce. Prejudice and fear combined to impose immediate quarantine on all of Chinatown. But quarantine competed with corruption and concerns about its economic impact on the city.
Thereafter, efforts to address the plague ricocheted between quarantine and release amidst disbelief in bacteriological confirmation of the disease. (Suspicions of science are not merely a phenomenon of the present day.) Evidence was also clear that bodies were being hidden for fear of discovery (Randall 2019, p. 58). Restrictions were imposed to prevent people of Chinese ancestry who had not received an experimental vaccine from traveling outside the state. These impositions were enjoined by the courts as violating constitutional rights to equal protection (Wong Wai 1900; Jew Ho 1900) because no evidence had been provided for imposing the restrictions only on Asian residents of San Francisco (McClain 1986). From the outbreak in San Francisco, plague spread across the bay and to Los Angeles and beyond; the disease is now endemic throughout the western United States. Ignorance about disease transmission, prejudice, and economic protectionism all contributed to this disease spread and the failure to prevent its becoming endemic in the U.S.

Today, plague is found on all continents, although the three most affected countries are the Democratic Republic of Congo, Madagascar, and Peru. According to the World Health Organization (WHO 2017a), surveillance is essential to identify and manage plague outbreaks wherever they might occur. Likewise, surveillance is critical to detecting outbreaks of polio, cholera, Ebola, avian influenza, and the myriad other infectious diseases known in the world today-along with emerging infections as yet unknown. Counting numbers of people who are sick and dying still matters to this enterprise, although far more sophisticated surveillance techniques are also in use.

Ebola is a relatively new zoonosis, a disease initially transmitted from non-human animals to humans. Because people with Ebola bleed copiously, caregivers for them are at high risk of infection without effective precautions. Burial practices that involve bathing or dressing infected corpses are also highly dangerous as they may involve contact with infected fluids. Only recently have vaccination and treatment become available for this once highly deadly disease (Farmer 2020; Maxmen 2010). Timely information is thus critical to prevent Ebola spread, yet the history of identifying Ebola outbreaks is a history of surveillance challenges and failures.

The 2014-2016 outbreak in West Africa is an illustration. The first cases were identified in December of 2013; WHO was first notified of the outbreak in March of 2014 but did not declare it a public health emergency of international concern until August (Kalra et al. 2014). The response has been criticized as the result of "the combination of dysfunctional health systems, international indifference, high population mobility, local customs, densely populated capitals, and lack of trust in authorities after years of armed conflict" (Farrar and Piot 2014, p. 1545). By the time the outbreak's end was declared, over 28,000 cases had been confirmed, 40% of which were fatal. Moreover, the consequences of the epidemic reverberate. Health infrastructures have been decimated by the deaths of healthcare workers, with resulting impacts on vaccinations against diseases such as measles. Social disintegration, food insecurity, and psychological trauma remain in the epidemic's wake (Kaner and Schaack 2016).
In the United States, communication missteps and subsequent mistrust led to cries for closing borders, quarantining anyone from an affected area, and augmenting powers of the federal government to restrict travel.

Gaining the information needed to respond to epidemics such as Ebola is complex. It depends on the existence of health infrastructures and trust in their use. It requires recognition of events, transmission of information about them, and analysis of the information thus gained. A failure of any of these can be devastating, as West Africa learned to its peril in 2014. Through public notice of the deliberations of the Emergency Committee, the WHO is attempting to achieve transparency and accurate communication as suggested in its 2017 surveillance guidelines (WHO 2017b). Its perceived success in meeting this goal was challenged by COVID-19, however.

COVID-19 is the disease caused by the novel coronavirus SARS-CoV-2, which apparently emerged into human-to-human transmission in late 2019. As of just over six months into the pandemic, much was changing and much remained unknown. Crystal clear, however, was that many areas of the globe lost weeks in early 2020 that were vital to prevention of disease spread. Concerns were raised that the WHO had been too slow in recognizing the threat. Allegations that the WHO had delayed in sounding the alarm in deference to China were invoked by President Donald Trump in notifying the WHO of the United States' intention at the time to withdraw from that organization (Rogers and Mandavilli 2020).

Contagious diseases like plague are nasty and scary. They are enemies that attack from outside the body. They cause distressing symptoms that are regarded with fear and distrust, such as copious bleeding, vomiting, or diarrhea. Their results may be disfigurement, odors and filth, or sudden death. When their causes are unknown, but risks of transmission appear high, the presence of contagion may lead people to avoid, stigmatize, imprison, or kill those who are thought to be sources of infection. Possessions, dwellings, or communities may be burned to eradicate what are thought to be sources of infection. These reactions may track, and be intensified by, lines of class, ethnicity, or especially race. They may also be linked to moral condemnation of those who are ill. Fear in the face of deadly outbreaks is neither surprising nor unjustifiable but has encouraged conducting surveillance in ways that are at best morally problematic. Here, we highlight stigma and isolation, cultural disruption, and moral condemnation as particularly serious ethical risks of even rudimentary surveillance such as counting population numbers or cases of disease.

Leprosy has been one of the most vilified diseases throughout history. To be a "leper" is to be an outcast. Caused by a bacillus, leprosy is contagious. People with leprosy have skin ulcers, lose feeling in affected areas of the body, and, in more advanced stages of the disease, experience contractures of fingers, toes, and limbs, or lose digits. They are, in short, (de)formed. Lepers have been avoided, shunned, and shunted off to faraway colonies so that others would not have to see them, touch them, or risk infection from them. The Bible portrays leprous skin conditions as uncleanliness. Leprosy has been seen as a mark of shame from God (Grzybowski et al. 2016)-the classic "stigma." Because individuals vary genetically in susceptibility to leprosy infection, leprosy also has been associated with particular ancestral subgroups.
So, in Hawai'i, Native Hawai'ians were disproportionately infected when the disease arrived. Until seafaring European explorers reached their shores in the late eighteenth century, the Hawai'ian Islands had been isolated from contact with others, and their residents had not developed resistance to many infectious diseases, including venereal disease and leprosy. Leprosy likely came to the Islands by the 1840s. By 1865 its spread had frightened authorities, and the Legislative Assembly enacted "An Act to Prevent the Spread of Leprosy." The law required physicians to report all suspected cases, established a hospital, and set up an isolated colony to quarantine infected persons on the Kalaupapa peninsula on the island of Molokai.

Kalaupapa National Historical Park in Hawai'i stands today as a memorial to the 8000 people who were banished to that remote peninsula and died there of leprosy (Greene 1980). The peninsula was surrounded by the ocean on three sides and two-thousand-foot cliffs on the fourth; it could be reached only at two ocean landings and then only in good weather. Anyone suspected of leprosy was isolated on the peninsula, often by force, including some who were not ill but became so after being quarantined in close contact with others. The residents forcibly relocated to Kalaupapa did not starve-the peninsula was agriculturally fertile-but they were subjected to appalling living conditions, died at high rates, and never saw families or friends again (Tayman 2006). Father Damien, a priest who ministered to those on the colony and eventually died of leprosy himself, was sainted by the Catholic Church in 2009.

The vast majority (97%) of those isolated at Kalaupapa were Native Hawai'ians. Their isolation was imposed by the European settlers on the islands, not by the Native Hawai'ians themselves. Western attitudes of disgust towards those with leprosy were not shared by Native Hawai'ians (Amundson and Ruddle-Miyamoto 2010). The Act to Prevent the Spread of Leprosy and the policies that followed have been sharply criticized for discrimination on the basis of race and disability.

The story of Kalaupapa is gripping but not unique. Many other stories could also be told to illustrate how fear can interact with prejudice to generate disparately harsh treatment of those identified as ill who fall into disadvantaged minority groups. Chinese in San Francisco were mistakenly quarantined for plague, Haitians were stigmatized as the bearers of AIDS, and even very recently racism echoed in the panic about the possibility that Ebola would come to the United States. As Gizmodo journalist Stassa Edwards (2014) writes: "The Western medical discourse on Africa has never been particularly subtle: the continent is often depicted as an undivided repository of degeneration. Comparing the representations of disease in Africa and in the West, you can hear the whispers of an underlying moral panic: a sense that Africa, and its bodies, are uncontainable. The discussion around Ebola has already evoked-almost entirely from Tea Party Republicans-the explicit idea that American borders are too porous and that all manners of perceived primitiveness might infect the West."

To be sure, collection of data about births and deaths or population numbers does not by itself cause fear or stigma. Context matters: are the disease and how it spreads well understood? Can it be readily treated? Does it cause symptoms that disgust or horrify, such as uncontrollable diarrhea or hemorrhage?
Does it disfigure or maim, like leprosy? Does it track-or even appear to track-racial or ethnic lines? Are those who are disproportionately afflicted from groups already disfavored for other reasons? Is the disease possibly related to conduct judged to be immoral at the time, such as prostitution? And is there any way to regard the ill as causing their own misfortune, and thus as responsible and blameworthy for it? If the answer to any of these questions is "yes," information about numbers may fuel fear and stigma, especially if it reveals information about population subgroups disproportionately affected by conditions that are frightening and poorly understood.

The arrival of explorers and colonizers brought globalization of disease to previously remote areas. Despite some disputes about the numbers (Roberts 1989), native populations were clearly devastated by infections such as measles or smallpox. Although at least much of the disease spread appears to have been unintentional, its impact has been compared to the Holocaust and other genocides (Brave Heart and DeBruyn 1998). At the same time, colonizers were exposed to infections novel to them but endemic in colonized areas, such as malaria or yellow fever. Catastrophic declines in indigenous population numbers, joined by environmental effects of colonization and the imposition of measures to protect colonists from the new diseases they encountered, caused extensive cultural disruption.

So-called "tropical medicine" developed intertwined with the history of colonialism, according to historian Deborah Neill's comprehensive account (2012). Emerging along with increasing understanding of the germ theory of disease, the specialty of tropical medicine was designed to protect European colonizers from new infections they encountered. It also aimed to safeguard the health of workers who were needed for development and exploitation of natural resources. While as a specialty it contributed greatly to the understanding of diseases such as yellow fever, encephalitis, and malaria, it often reflected racist attitudes about the superiority of Western practices of hygiene and cleanliness and the backwardness of local populations. It also contributed to the ability of colonizers to remain in power and shaped how they exercised the power they had.

Separating population groups was the primary recommendation of tropical medicine for preventing disease spread to European colonists (Neill 2012, p. 91). Native populations were judged to be reservoirs of disease because of their poor hygiene. Congregating them into their own defined areas was believed to reduce risks of disease transmitted by mosquitoes. These recommendations for segregation dovetailed as well with the political and economic interests of settlers. Their echoes persist today, as Neill writes (2012, p. 101): "The legacy of segregation fueled deep racial divisions that would haunt European administrations as well as emerging African nations well into the twentieth and twenty-first centuries."

Disruption caused by colonial settlements changed patterns of disease activity, just as population expansion and climate change are continuing to do today. In the late nineteenth century, for example, a particularly deadly sleeping sickness epidemic spread through central Africa, originating with changes in the habitat of the tsetse fly.
In addition to surveillance, strategies adopted by European colonists to address the epidemic included forced moves of populations from infected areas, travel restrictions, and separate camps for those already ill. Those believed to be infected were subjected to treatments such as high doses of arsenic, even at levels that caused blindness and death. Not surprisingly, these impositions met with resistance and rebellion. The cultural disruption from forced relocation of entire villages was particularly extensive. Historian Helen Tilley (2016), in commentary for the American Medical Association Journal of Ethics, judges these efforts to have been forms of structural violence that "underpinned colonial rule" and "not only disrupted people's lives and livelihoods but also created enduring inequalities that laid the groundwork for more damage."

Some infectious diseases, such as syphilis, are sexually transmitted. Their identification and control have been associated with moral condemnation and blame of those who become infected and who transmit infection. As Chapter 3 discusses in more detail, methods of case identification and contact tracing were importantly shaped by these moralistic judgments. But even apart from such judgments about individual conduct, populations or population subgroups have been identified with the moral taint of diseases. Early on, venereal disease was associated with prostitution and lewdness; the very term "venereal" is rooted in the Latin for sexual love. Wet nurses were also condemned for disease transmission by staunch Calvinists seeking to "save the family from corruption" (Siena 1998). Women were generally blamed for the disease; according to Siena, beginning in the seventeenth century the theory was prevalent that venereal disease originated spontaneously from putrefaction in the womb. In the U.S., racist claims about sexual licentiousness have been offered to explain the disproportionate rates of sexually transmitted infections among blacks, although the explanations lie with social networks and access to care rather than rates of sex (CDC 2019; Adimora et al. 2006). For a time, it was believed that the natural history of diseases such as syphilis differed between blacks and whites.

Nowhere has moral condemnation of the infected been more apparent than in attitudes toward HIV/AIDS. The association of AIDS with homosexuality-it was at one point referred to as "gay related immune deficiency" or "GRID" before the virus causing it was identified-led moral conservatives and religious groups to call for criminalization of transmission of the disease. In 1988, while Ronald Reagan was president, a presidential commission advocated criminal penalties for knowing transmission of HIV; approximately half the states in the U.S. enacted such laws. The U.S. Agency for International Development encouraged enactment of similar laws in sub-Saharan Africa, and these laws proliferated as a result (Francis and Francis 2013). In South Africa, over 30% of women are HIV positive, and these women experience high rates of intimate partner violence for their supposed infidelity, even when they are pregnant and even if it is likely that they were infected by their intimate partners (Bernstein et al. 2016). Uganda, with the support of evangelical Christians, enacted an Anti-Homosexuality Act in 2014 that was struck down by its constitutional court but that continues to resonate politically.
A report by Human Rights Watch (2018) illustrates the toll that such moral panic against same sex relationships can take on groups and their health. In Indonesia in early 2016, anti-gay pronouncements were widespread in media. The country's largest Muslim organization, the Nahdlatul Ulama, urged criminalization of LGBT activism and forced reformation of LGBT people. This anti-LGBT rhetoric was followed by raids on locations where LGBT people were thought to be, including private homes, with arrests and convictions for pornography. Human Rights Watch reports that this anti-LGBT activity has been correlated with a sharp spike in HIV infections and with increasing difficulty for public health outreach workers in reaching people at risk of infection.

Such fear and condemnation of infectious disease threaten to undermine cooperation with even simple forms of surveillance. Many-but not all-of the examples we have given were associated with times before much was understood about the causes of disease or disease transmission, when avoidance and isolation seemed the only realistic responses. However, even with scientific progress, uncertainty, controversy, and reactions of mistrust remain. These must also be addressed if even simple forms of surveillance are to be reliably sustained.

Such reactions of fear to contagious disease were in part rooted in the limited scientific understanding of disease causation and transmission. But, however impressive advances in science may be, for epistemological and ethical reasons advances cannot be expected to resolve all problems of fear of disease. Some diseases are deservedly frightening. Science is imperfect. Analytic paradigms are contested, there is much uncertainty, and knowledge is evolving. Contested normative assessments are invoked in support of judgments about the significance of results and policy recommendations based on them.

Behind these controversies lie deep epistemological cleavages. Studies of social epistemology and epistemic injustice have shed new light on the status of scientific claims. Social epistemologists explore how social structures influence judgments about what is knowledge and what should be studied. Epistemic injustice occurs when testimony of disfavored groups is disbelieved, suppressed, or never even articulated. For example, when the testimony of women about their sexual activity is disbelieved or never heard, misunderstandings about disease transmission patterns may flourish. Ethical problems with the conduct of science complicate these epistemological issues. Unjust exploitation of research subjects and conflicts of interest give legitimate reason for questioning scientific claims. It should thus come as no surprise that generalized suspicions of science have arisen to challenge trust in the use of science for public health initiatives. Some uncertainties in science and their impact on trust in surveillance are ineluctable. Additional problems for trust can be brought by the failure to recognize that ethical choices are intertwined with apparently simple empirical claims and, even more, by ethical malfeasance in the conduct of science.

One of the most-if not the most-celebrated public health successes of all time was John Snow's identification of water from the Broad Street pump as the source of a cholera epidemic in London in 1854. Yet Snow's work was bitterly contested and greeted by many with disbelief. During the mid-nineteenth century, scientists vigorously debated the causes of disease.
Some attributed disease outbreaks to "miasma": bad air rising from decaying organic matter. The odors of unsanitary urban areas were thought to indicate unhealthy levels of miasma. Defenders of the miasma theory of disease argued that disease could be eradicated by proper sanitation to avoid more intense releases of miasma. These miasma theorists were important early supporters of public works and aid for the poor (UCLA Department of Epidemiology 2015). Their critics asserted the germ theory of disease: that microorganisms transmitted disease from person to person. The shift from miasma-an environmental theory of disease-to microorganisms also presaged a shift away from social programs for addressing disease to strategies singling out infected individuals.

When cholera broke out in London in 1849, John Snow, an early proponent of the germ theory of disease, painstakingly correlated cholera deaths in London with the water supply. Snow's work benefited from the more systematic methods of modern public health surveillance that had been instituted in the nineteenth century. In England, the Health of Towns Committee of Liverpool appointed the first designated public health officer, Thomas Fresh, as Inspector of Nuisances in 1844 (Parkinson 2013). William Farr, superintendent of the Statistical Department of the British General Register Office from 1838 until 1880 and generally regarded as the developer of the modern concept of public health surveillance, standardized methods for collecting and analyzing vital statistics (Langmuir 1976). Edwin Chadwick, secretary of the Poor Law Commission, encouraged studies of the life and health of the London working class with the aim of improving sanitary conditions; the study's recommendations for a national board of health, local district health boards, and district medical officers were adopted in the Public Health Act of 1848 (Committee 1988, p. 60). Although Farr was an adherent of the miasma theory of disease, his data were useful for Snow's investigation of patterns of cholera deaths.

For several years, despite Snow's correlations, scientific controversy raged over the mechanisms of cholera spread. Then in 1854, in the Soho area of London, Snow observed the correlation between cholera deaths and drinking the water from a particular local source, the Broad Street pump. Snow urged removal of the handle from the pump and stopped the outbreak in its tracks (Hempel 2007, Chs. 14-15). With the discovery of microorganisms in the water, Snow's work provided critical evidence for the germ theory of disease.

Scientific understanding of disease causation and transmission has of course grown exponentially since Snow's day. Nonetheless, much remains unknown or controversial about disease classification and etiology. Novel infections-as HIV once was and COVID-19 now is-pose particular challenges because in the beginning little is known about the disease itself or how it may be transmitted. Lines between what are regarded as infectious conditions and what are not so regarded continue to shift. For example, cancer, once feared as infectious and transmissible, came to be regarded as an enemy solely within the body of its host. Today, however, HPV is known to play a role in causing some cancers, returning these diseases to the category of conditions that can be transmitted from some to others and even prevented by vaccination. Other transmissible viruses such as Hepatitis C have also been implicated in causing some cancers.
"Cancer" is coming to be regarded as a multiplicity of conditions implicating a variety of environmental, infectious, and genetic factors. Cancer research is now exploring immunotherapies that draw on paradigms from infectious disease treatment, and infectious disease treatment is likewise benefiting from developments in cancer immunotherapy, thus further blurring the lines between cancer and other disease processes (Hotchkiss and Moldawer 2014) . To take another example, studies of the human microbiome are yielding increasing knowledge of potential interconnections between varieties in bacteria and other microorganisms within the body and disease states (Lloyd-Price et al. 2017) . Not incidentally for this volume, increasing understanding of the role played in human health by microbial communities in the gut, vagina, mouth, mucous membranes, and skin, may also suggest important but as yet unexplored changes in directions for surveillance. The consequences for privacy of systematic collection of material internal to the human body are significant. People from different geographic regions may have different microbiota, so it may be possible to identify their nationality from samples. Changes in the microbiome may be associated with behavior, contact with others, or prior medical treatment, too. Because microbiomes are unique to individuals, samples may provide sufficient data for individual identification, even small samples from touching human skin (Gligorov et al. 2014 ) Yet depending on the directions taken by scientific understanding of the human microbiome, surveillance may need to move in hitherto unexpected directions to be effective for some purposes. Scientific understanding of changes in population health is also evolving. As already outlined, surveillance as a means of tracking the health of populations has a long and contentious history. Census data about population numbers can be a measure of strength-but can also be a way for enemies to discern weak spots or for those in power to learn where to extract resources. Information about disease incidence or prevalence can be both instrumental in disease prevention or a source of stigma and exclusion. Decisions to collect-or not to collect-information about populations, as well as judgments about the significance of this information-also harbor controversies with ethical implications. The seventeenth century saw the emergence of systematic data collection about populations and the early development of epidemiology as a science. In the midst of wars and recurring outbreaks of plague, the English kings mandated the Church of England to maintain a parish-level system for recording plague deaths. The information was sent to London and published weekly as the London Bills of Mortality; those who were more fortunate were able to rely on this information to decide where to move to try to avoid the plague. Using 50 years of the Bills, John Graunt in the 1660s developed the first known tables of health data and techniques of analysis that enabled him to hypothesize fluctuating environmental causes for plague deaths. During the eighteenth century, Johann Peter Frank in Germany advocated surveillance of school health, injury prevention, maternal and child health, and public water and sewage treatment as part of a system of police medicine. Mirabeau and other French Revolutionary leaders saw population health as the state's responsibility. 
In the American colonies, Rhode Island required innkeepers to report contagious diseases and later enacted required reporting of smallpox, yellow fever, and cholera (Declich and Carter 1994).

By the late eighteenth century, population growth had appeared as a new threat. Thomas Malthus' argument that geometrical population growth would soon outstrip food supplies spurred Parliament to pass the Census Act in 1800. The first systematic count of the population of both England and Wales was held just afterwards, in 1801 (Office for National Statistics 2015). In the United States, the first formal census count occurred in 1790 and divided the population into free white males (over and under 16), white females, other free persons, and slaves (United States Census Bureau 2015). To this day, data collected through the census remain a critical source for public health. During the nineteenth century, in both the United Kingdom and the US, public health efforts were often aligned with reform movements to address the conditions of the urban poor (Fairchild et al. 2010).

U.S. census data provide key background information about population trends, housing characteristics, educational attainment, and economic characteristics of geographic areas divided by ZIP code. These data are combined with other data, such as health surveys, to produce community health dashboards. These compilations of health indicators can be used by communities for their own improvement, by local public health authorities, by health care providers making decisions about where to locate or what kinds of care to provide, and by businesses seeking to understand workforce availability in decisions about where to locate. They are widely used by researchers interested in studying the social determinants of health.

Despite their great utility, in the U.S. at least there also are significant barriers to many uses of census data, particularly in forms that could allow individuals to be identified. Legal restrictions on the data that may be shared, including restrictions on sharing among agencies of the federal government, are designed to protect the integrity of the data-gathering process. Under U.S. law, for example, the Census Bureau may not share identifiable information with the U.S. Department of Homeland Security. Nonetheless, despite these protections, immigrants and others with fears about their status or about stigmatization may be especially reluctant to answer the calls of census takers. Concerns are also raised about individual privacy and the potential for targeting, especially if census data are released with narrow geographic delineations even when they have been deidentified. In response, the Census Bureau takes a number of steps to protect privacy (Census Bureau 2019). But controversies about what data the census should collect and what uses may be made of these data continue to rage in the U.S., as the Trump Administration's failed efforts to include a citizenship question on the 2020 census illustrate.

Indeed, as the case of leprosy in Hawai'i illustrates, public health interventions have not always been benign. To take another notorious example, Dr. Walter Lindley, officer of the fledgling health department in Los Angeles in 1879, called for the improvement of sanitation, the construction of a municipal sewer system-and the eradication of Chinatown. Over the decades, city health officials portrayed Los Angeles as "pristine," but its immigrant sections-Chinese, Japanese, or Mexican-as "rotten spots" (Molina 2006, p. 1).
To take a much simpler current example, information about demographic trends in a community, such as declining birth rates or increasing percentages of the elderly in the population, coupled with rising health care costs, declining levels of educational attainment, or poor housing stock, may lead businesses to avoid investing in that community, thus exacerbating any existing downward spiral.

From the perspective of epidemiology as a science, initial problems lie with the data itself. Some data may simply be absent, for example if population subgroups refuse to answer questions or cannot be found. Surveys using landline telephones have become increasingly unreliable as indicators of population trends, for example, because they tend to under-represent minorities and people in younger age cohorts. Researchers using these data have devised various methods of correcting for bias in survey responses, but the success of these methods remains controversial. In addition, questions may be framed in ways that are problematic and elide or confuse important information. For example, the U.S. government categories for race, used by law to collect the census, include only these options: white, black or African-American, American Indian or Alaska Native, Asian, and Native Hawaiian or Other Pacific Islander. People are asked to self-identify and, since 2000, have been given the choice to identify as more than one race (Census Bureau 2018). By relying on self-identification, the methodology captures at best perceived race and must not be understood to be measuring race in any other sense. The methodology also has been criticized for the failure to make important distinctions, for example by lumping "Asians" together into a single category.

Beyond gaps and confusions in the data lie controversies over analytic techniques. The learning algorithms developed through artificial intelligence may magnify existing deficits in the data. If people in a given location show up initially overrepresented in data about the presence of a condition, and then as a result are subject to disproportionate testing recommendations, the disproportion with which they are identified as having the condition may increase. An example is drug testing within particular subpopulations judged to have higher proportions of drug use to begin with. Testing rates in these subpopulations may simply replicate earlier suspicions. Meanwhile, the failure to test other subpopulations may miss information about even higher drug use percentages.

When even relatively straightforward findings derived from population data are used to draw public policy recommendations, choices must be made about the comparative importance of false positive or false negative results. Take the death penalty as a harsh example. Assuming that the criminal justice system is sometimes imperfect in convicting someone of a capital offense, which is worse: putting an actually innocent person to death (false positive) or letting an actually guilty person go free (false negative)? Policymakers differ in how they answer this question about the death penalty. Some argue that the state should avoid executing the innocent at almost any cost. Others argue that public safety and the potential for deterrence matter more, even if there may be risks of executing the innocent.
These policymakers may also disagree about the extent of the risks-death penalty critics tend to emphasize probabilities not only of actual innocence of the act altogether (wrong person cases) but also of innocence of an offense of the degree of severity charged (right person, wrong offense cases)-but the point here is that they disagree about how to weigh ethically the respective probabilities of these errors. And they disagree about where the burden of persuasion should lie when information is incomplete: is it up to a proponent of the death penalty to argue that it deters, or up to an opponent to produce evidence that it does not?

Similar judgments are made all the time about medical tests. If detecting a positive is very important, because it might enable a lifesaving intervention, thresholds of suspicion arguably should be set high. The sensitivity of a test-whether it misses actual positives-will be judged more important than its specificity-whether it also sweeps up a comparatively high number of false positives. On the other hand, if the risks of over-identification are high, as they would be if there is a significant probability of executing people who are actually innocent, arguably the specificity of a test-whether it identifies a high number of positive cases that are false positives-will be judged more important than its sensitivity. An underlying complication to setting these thresholds is that in populations in which true positives are infrequent, the probability that a positive test is a true positive is low, whereas in high frequency populations it is higher. Failure to understand this complication is known as the base rate fallacy. If rates of false positives and false negatives are not well understood, the result may be policies that are based on erroneous assumptions about either the frequency, or the comparative absence, of a condition of concern.

Value judgments are part of the equation in deciding how to set these thresholds. People might differ about how to weigh the risks and costs of intervening with people who test positive and the importance of the goal of the intervention itself. These differences might apply to judgments about the goals of intervention for the individual him or herself, or about the importance of protecting others from the individual. People might also differ about the extent to which it is reasonable to defer to expertise and whether it is morally acceptable to intervene paternalistically with people for their own good. Some might argue that even if individuals are likely to make poorly reasoned judgments because they lack understanding of simple points about probability such as the base rate fallacy, these decisions should still be theirs to make and they should be allowed to take risks at least with themselves, if not to protect others from them. There may also be differences of opinion about how much an individual's decision affects only him or herself; for example, in the controversy about whether to require vaccination there are disagreements about whether its primary goal is protection of the individual or protecting others from the individual's becoming a source of contagion. A further complication is that there might be reasons for questioning expert advice. Conflicts of interest-well known for their influence on medical practice-are one reason why expert advice might come under fire. Injustice-or the history of injustice-is another, as we discuss further in a later section of this chapter.
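The base rate point can be made concrete with a worked example; the numbers here are hypothetical, chosen purely for illustration. Suppose a test has a sensitivity of 99% and a specificity of 95%, and suppose the condition affects 1 in 1,000 people in the screened population. By Bayes' theorem, the probability that a person who tests positive actually has the condition (the positive predictive value) is

\[
P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)} = \frac{0.99 \times 0.001}{0.99 \times 0.001 + 0.05 \times 0.999} \approx 0.019.
\]

Fewer than two positives in a hundred are true positives, despite the test's apparently excellent accuracy. If the prevalence were instead 10%, the same calculation would yield a positive predictive value of about 69%: the trustworthiness of a positive result depends on how common the condition is in the population, not just on the quality of the test.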
Screening mammography for breast cancer is a particularly controversial example of these problems with the sensitivity and specificity of methods for screening populations (Plutynski 2012). On the one hand, early detection of these cancers is thought to be important for both morbidity and longer-term mortality. If so, some argue, the threshold for suspicion should be set high. Moreover, defenders of such a high threshold argue, the worry and subsequent evaluation associated with assessing whether an initially identified positive is a true positive should be judged far less serious than the risks of an early death from cancer. On the other hand, setting a high threshold may recommend invasive and potentially risky treatment for women who would never have needed it because their early cellular changes might never have become cancerous. Further, critics of a high sensitivity threshold say, the impact of worry and repeat testing should not be so readily discounted. One unanticipated consequence of undergoing evaluation for a false positive may be to discourage the willingness to return for mammography screening in later years, thus decreasing the likelihood that women once identified as false positives will return for medically appropriate care (Shen et al. 2018). Concerns about medical paternalism loom large in these debates. So too do allegations of conflicts of interest on the part of providers who may have economic incentives to over-test and over-treat.

Public health surveillance likewise raises these normative questions about the import of false positives and false negatives. Signals from reports of mortality or morbidity also may be misread. Consider an apparent uptick in reported influenza cases requiring hospitalization or resulting in death. This uptick might be an accurate signal of the spread of a very serious form of influenza-a true positive signal. Or, it might be a flaw in the data if cases are counted twice because they were reported by both the initial diagnosing physician and the treating hospital. Or, it might be the result of misdiagnoses of a respiratory infection, or of selective case reporting-both misleading as signals of an influenza outbreak. If only the very sick seek out hospital care, while many others who were infected simply treat themselves at home, reports of fatality rates based on the hospital data may be greatly exaggerated. On the other hand, there may be underestimations based on available data if patients never come to hospitals, reports of their conditions are not made, or tests erroneously report that they are not ill. These judgments and the policy recommendations based on them will be affected by value judgments and problems that may be raised about them.

To illustrate, in 2009 reports surfaced of an outbreak of a strain of influenza in Mexico that appeared to be very serious. Reports were that the outbreak had reached epidemic status and would soon become pandemic. World-wide panic ensued, with school closures, travel bans, and widescale rejection of Mexican products even though they could not be sources of contagion. The WHO declared a public health emergency of international concern. Because the virus had originated in pigs, the flu was called "swine flu." Pork producers were hit particularly hard even though the disease could not be spread by eating pork.
The economic consequences for Mexico in particular were extensive: one estimate put tourist industry losses at $2.8 billion overall and pork production losses at $2 million monthly (Rassy and Smith 2013). Clearly, the decision to declare a public health emergency had major economic and social consequences for Mexico. Whether it was truly such an emergency, however, was another question.

The 2009 outbreak was the first test, in the context of a potential pandemic, of the WHO International Health Regulations adopted in 2005. After the supposed pandemic proved "a damp squib," the WHO was resoundingly criticized for overreacting to the event and for undisclosed conflicts of interest that had allegedly affected the decisions it made (Godlee 2010). Estimates of the severity of the form of influenza were one of the problems: because only those who were seriously ill had appeared for treatment in the area of Mexico where transmission of the disease was thought to have started, and these patients had high rates of mortality and morbidity, unwarranted conclusions were quickly drawn about the lethality of the influenza strain. A subsequent WHO report responding to the criticism emphasized that "the main ethos of public health is one of prevention: … in the face of uncertainty and potentially serious harm, it is better to err on the side of safety" (WHO 2011). That several WHO scientific advisors had not disclosed their economic ties to pharmaceutical companies manufacturing influenza prophylactics did not help trust in these judgments, however. While finding "no evidence of malfeasance," the report did acknowledge the uncertainties involved in the decisions that WHO had made and the importance for all countries of making decisions such as imposing travel restrictions on the basis of sound evidence (WHO 2011, p. 11).

By contrast, as we described above, the WHO came under criticism for delaying judgments about the significance of the Ebola epidemic in West Africa in 2014. And it has been subject to a chorus of criticism about alleged delays in reporting the COVID-19 situation and in declaring it a Public Health Emergency of International Concern. Indeed, President Trump cited the WHO's supposed failures and favoritism to the Chinese to justify his decision to give notice that the U.S. intended to withdraw from the WHO (Rogers and Mandavilli 2020).

Science and decisions based on it take place under conditions of admitted uncertainty. Judgment calls are made about data collection, analysis, and subsequent policy recommendations. If judgments are not carefully made and carefully explained-and this may be difficult in the face of panic-the result may be serious and unwarranted disadvantage for some. Such results may be unjust or complicate existing injustice, particularly if protective judgments benefit those who are already better off or entrench structural injustice. They may also undermine trust in data, generating reluctance to participate in or believe the pronouncements of subsequent surveillance activities.

A further complication in these reactions to threats of disease is that human beings are not very good at understanding risks, even when they have been professionally trained to do so. Recent empirical work about how people make decisions has documented common cognitive biases that are relevant to risk judgments. These biases are useful shortcuts to reasoning, but they are also frequently misleading.
Understanding these biases and how they may affect risk judgments is critical to understanding what is to be surveilled and how the results of surveillance may be communicated and understood.

One such bias is called the "availability heuristic." Famously identified by Tversky and Kahneman (1973), this bias describes how people are likely to judge that a risk is greater when they have recently heard about a relevant occurrence. For example, after a highly-publicized airplane crash, people rate air travel as less safe than they otherwise would. During the Ebola epidemic, people believed that the chances of transmission in the U.S. from someone returning from the affected area were far higher than they actually were. Such misjudgments occurred not only in the general population but even among professionals; in the U.K., for example, diagnoses of possible viral hemorrhagic fever rose markedly when the Ebola epidemic was prominent in media reports (Curran 2015). In clinical care, health care providers are disproportionately likely to identify a patient as having a particular condition when they have recently missed that diagnosis in another case made memorable by the harm that resulted to the patient. Relatedly, people may be likely to judge that a particular case is representative of a population when it is seen as similar to other population members, even if it is not. The judgment that the severe cases of flu seen in Mexico in the 2009 epidemic were representative exemplified this bias. Outbreak reports of cases that are not randomly selected are notably prone to this bias of assuming that they are representative of the population more generally (Curran 2015).

Anchoring and framing are another type of relevant cognitive bias in judgments of risk. With this bias, how a problem is defined may affect the judgments that are made. Well-known examples of this bias are whether an outcome is presented in frequency or percentage form and whether it is presented in terms of possibility of death or possibility of survival. That is, people will judge an outcome as more likely if they are told that it occurs in 10 out of 100 cases than if they are told it occurs 10% of the time. And people are more likely to choose an alternative with a 95% chance of survival than with a 5% chance of death-even though numerically speaking the two are equivalent. Framing in terms of gender, race or ethnicity, age or disability may also affect clinical judgment in ways that lead to errors (Bui et al. 2015).

Judgments of immorality complicate these cognitive biases. Recent empirical work on risk perception indicates that people tend to believe that risks are greater when they judge conduct to be immoral. For example, when parents leave their children for short periods of time for reasons people judge to be morally problematic, people judge risks to the children as higher than if the reasons are seen as innocent (Thomas et al. 2016). Applied to disease, this would suggest that judgments of risks of transmission are higher when conduct is judged to be immoral. This phenomenon might explain why prostitutes are judged more likely to transmit disease than the clients who visit them. When actions are judged to be intentional, moreover, any harm they cause is seen as greater than if the same results occurred from actions not judged to be intentional (Ames and Fiske 2015). And causation is more likely to be attributed to actions judged to be immoral (Alicke 1992).
How these cognitive biases affect public perceptions of risk has been the subject of extensive study. From plague to Ebola, the threat of disease has been both exaggerated and discounted through these biases. Their impact on clinical care has also been studied, but primarily with hypothetical rather than real-world examples (Saposnik et al. 2016; Blumenthal-Barby and Krieger 2015). People are more likely to see risks as serious when these heuristics are at work. Heuristics may also compound one another, and thus may play a role in why presuppositions about populations are so difficult to shake. It might be surmised that public health professionals are no exception in displaying cognitive biases, although this apparently has not been systematically examined to any extent (Greenland 2017). Uncertainties about scientific judgments have been compounded by the impact of scientific misconduct such as exploitation or racism. Public health researchers have exploited research subjects in ways that reveal clear racism. Public health recommendations also have been met with complaints about conflicts of interest that have grounded mistrust. While the impact of misconduct should not be exaggerated, for trust in data collection and use to be sustained these concerns must be acknowledged and addressed. The legacy of the Tuskegee syphilis study casts a long shadow over attitudes towards public health and medicine in the United States today. But it is not the only shadow; eugenics, racism, and the exploitation of human participants in research have all played roles, not only in Nazi Germany but in many other circumstances. The Tuskegee study has become iconic in explaining reluctance among Blacks in the U.S. to trust health care institutions, although its role has perhaps been distorted in comparison to other factors. The study has been particularly well described by the feminist historian Susan Reverby (2009), whose excellent account is the basis for some of the discussion that follows. In the early part of the twentieth century, syphilis was feared for how it ravaged both body and mind. Because the disease was known to be sexually transmitted, those who were infected were often stigmatized and morally condemned for their supposed licentiousness. Prevalence of the disease was little understood, and only primitive treatment in the form of mercury and arsenic was available. In the U.S., the disease was thought to be a particular problem among Blacks because of what was supposed to be their sexual irresponsibility. Syphilis also was judged to take different forms in Blacks and in whites, supposedly for genetic reasons, and there were links between anti-syphilis campaigns and eugenics. By the 1930s, syphilis had become a subject of major public health concern and a particular target of the U.S. Public Health Service; Chapter 3 describes the development of case identification and contact tracing as a response. Before effective treatment became available, however, the natural history of the disease was of particular scientific interest. Macon County, Alabama, was selected for study because of its large Black population, high incidence of the disease, and the presence of the respected Tuskegee Institute for Black education. People in the county were poor, illiterate, and lacked access to health care. Several demonstration projects for syphilis treatment had been instituted in the effort to improve the health of Blacks in the South, including one in Macon County, but largely without success.
After the demonstration projects ended, the "Tuskegee Study of Untreated Syphilis in the Negro Male" began in 1932 to learn more about the natural history of the disease. Those who initiated and continued the Study justified it by the importance of addressing scientific uncertainties about the course of the disease. When the Study started, the only treatment available, neosalvarsan, was risky, expensive, and required frequent injections that were difficult to provide in a rural setting. By the late 1940s, penicillin therapy had become available, yet the Study continued until 1972, nearly twenty-five years longer. (Parenthetically, although responsibility for conceptualizing the Study has traditionally been attributed to a minor public health official, recent revelations indicate that Dr. Thomas Parran, the later head of the Public Health Service who campaigned fiercely against syphilis, may have played a role in its origins. As a result, the University of Pittsburgh decided to remove Parran's name from the building that houses its school of public health, of which Parran had served as dean.) The Tuskegee Study included six hundred Black males, 399 with syphilis and 201 controls. Its aim was to observe the natural course of untreated syphilis. Participants entered the Study with a promise of $50 for burial expenses. They were deceived into believing that they were receiving treatment for their "bad blood," when they were not receiving any treatment at all. Instead, they were knowingly kept in the Study long after penicillin had become readily available as an effective treatment for syphilis. Their sexual partners and their children were also at risk from their untreated infections, although syphilis is not contagious in its late stage. An Associated Press reporter's revelations about the Study finally resulted in its termination in 1972. In 1997, survivors of the Study and relatives who had been injured received a formal apology from President Clinton. Survivors also sued and received compensation; the Trump administration has contended that any leftover settlement funds should revert to the U.S. government, but descendants of the participants object (Reeves 2017). "Tuskegee" has become a metaphor for public health abuse in the U.S. It created momentum for the development of the regime regulating the ethics of research in the United States, a regime that primarily emphasizes the individual informed consent judged to have been so clearly lacking in the Tuskegee Study, and that we raise questions about for public health in Chapter 6. In Reverby's judgment, the frequent comparison in ethics discussions between Tuskegee and Nazi medical experimentation mistakenly fixes the Study in the 1930s rather than in the ongoing racism that it continued to demonstrate. It thus constructs Tuskegee more as a violation of individual autonomy than as the manifestation of structural injustice that it actually was. Mistakenly understood by comparison to Nazi racism, Reverby contends (2009, p. 187), Tuskegee became a convenient explanation for minority mistrust of health care and medical research, and its condemnation has become a code for expressing supposed racial sensitivity. Instead, the Study should serve as a reminder of the many social factors that contribute to racial disparities in health in the U.S. today. Reverby thus concludes (p. 194):
"The arguments about the problems of informed consent and the development of the entire institutional review board infrastructure [required by the federal regulations for the conduct of research with human subjects] provided little focus on the links between race and science or the problems of equity." As AIDS appeared on the scene, taking its toll disproportionately on Blacks, conspiracy theories about the role of medical experimenters in transmitting the disease circulated widely. Reverby relates (p. 200) the spread of false rumors that federal government health workers had deliberately infected Tuskegee participants with syphilis, thus perpetrating American genocide. Similar rumors also circulated about AIDS, both in the U.S. and across the globe, generating ongoing mistrust of scientists, especially those with connections to governments. These rumors have continued to be used in opposition to vaccination and other medical interventions. Tuskegee and the cultural meanings it later assumed are a powerful illustration of how injustice in the conduct of science can create legacies of mistrust for public health data collection and its use to address health problems. Contemporary health care and research, both in the U.S. and elsewhere, are beset by economic incentives. Pharmaceutical companies are a major part of the mix, but so also are providers, insurers, and researchers. Public health, and even charities furthering public health goals, have also come under suspicion of the taint of conflicts of interest affecting the decisions they make. Evidence is robust that people are influenced by their economic interests even when they sincerely believe that they are not. The role of for-profit pharmaceutical companies in influencing science and, in turn, public health is extensive. Although most major medical journals require disclosure of conflicts of interest, pharmaceutical companies fund a great deal of medical research. Funding influences decisions about what to study and what research questions to ask. It influences which endpoints are chosen as measures of success, such as whether an average of two months' increased survival is a meaningful result in comparison with the costs and side effects of a novel therapeutic agent. Fleck and Danis (2017) point out the complex normative judgments involved in making recommendations about whether such a therapeutic agent should be cleared for marketing or paid for out of shared funds, especially when trials of the drug may not have included measures of the quality of life of patients taking it or of its efficacy in comparison with other treatments. They argue that when prices are very high, as high as $750,000 for a limited gain such as a two-month increase in survival, it is ethical to refuse to pay for the treatment out of shared funds and to expect patients who wish to have this option to purchase an insurance rider to pay for it. Pharmaceutical company sponsorship also influences decisions about how to structure clinical trials, for example whether to test a new agent against placebo or against a therapy currently in use. Showing that a new agent has results comparable to current therapy may not indicate that it is effective; it could show only that both are similarly ineffective. Drug company sponsorship also influences whether studies are replicated or their results reproduced. Despite requirements in the U.S. that clinical trials be registered with a federal database, ClinicalTrials.gov, negative study findings may not be published.
Findings may never become public that a particular drug had no effect on the condition of interest. One study of clinical trials involving children found that 30% did not result in publication and that the rate of non-publication for industry-sponsored trials was double that of trials sponsored by academic institutions (Pica and Bourgeois 2016). Importantly for public health, conflicts of interest may also affect judgments about when conditions should be considered pathological and in need of intervention. The identification of hypertension is an example. The American Academy of Family Physicians and the American College of Physicians declined to endorse the American Heart Association and American College of Cardiology guidelines identifying hypertension as blood pressure over 130/80 mm Hg, a threshold that would classify 46% of U.S. adults as in need of intervention, in part out of concerns over conflicts of interest (AAFP 2017). One commentator characterized this dispute not just as a conflict between specialists and generalists, but as "instead about medical care (primary or specialty care) versus public health. From the public health perspective it is inevitably a losing proposition and a rearguard action for doctors to treat mildly elevated blood pressures with medicine or even individual lifestyle advice" (Husten 2018). Earlier in this chapter we pointed out concerns about conflicts of interest in the World Health Organization's decision to declare the 2009 influenza outbreak a public health emergency of international concern. This declaration was the initial test of the new International Health Regulations that had entered into force in 2007. These regulations allowed for the declaration of public health emergencies of international concern and, unlike the prior regulations they replaced, did not limit such declarations to a specific list of serious contagious diseases such as cholera or plague. Under Article 48 of the Regulations, the Director-General of the WHO was to appoint an Emergency Committee of experts to advise on whether an outbreak constituted an emergency of international concern. The experts were to be selected from the WHO roster of experts for advisory panels. The membership of the Emergency Committee responsible for declaring pandemic status was not made public, a lack of transparency that gave rise to concerns that its members may have been influenced by industry ties (Godlee 2010). Reportedly, pharmaceutical companies made high profits from the sale of flu vaccines, with estimates ranging from $7 billion to $10 billion depending on the company (Godlee 2010). In addition, the expert responsible for authoring the WHO's guidance on the use of antivirals during a pandemic reportedly was receiving payments at the same time from Roche, maker of Tamiflu, the principal antiviral being stockpiled for prophylactic use in case of an outbreak (Cohen and Carter 2010). The Council of Europe Parliamentary Assembly was sharply critical of the WHO decision to declare a pandemic and called for more transparency in making these decisions (Council of Europe 2010). Although, as we described above, the WHO's subsequent assessment of the declaration concluded that there had been no malfeasance, suspicion lingered. The report and the heat it generated indicate the importance, for trust in the science used by public health, of fully disclosing and appropriately managing conflicts of interest.
As the WHO establishes partnerships with non-state actors in public health, conflicts of interest are a principal concern, as we discuss further in Chapter 7. Another illustration of how known financial incentives can undercut trust in public health recommendations is the campaign by Merck in support of Gardasil, the vaccine it manufactured against the human papillomavirus (HPV) (Schwartz et al. 2007). HPV is sexually transmitted and can cause a variety of cancers, primarily of the cervix, penis, and tongue. Development of a vaccine against the infection was regarded as a highly important public health measure, especially for areas of the globe without regular access to cervical cancer screening. The vaccine was not cheap: to the contrary, at $120 for each of the three required doses, it was the most expensive vaccine ever marketed. In the media, it was eye-catchingly described as a vaccine that could prevent cancer. After U.S. licensure of the vaccine in 2006, the CDC Advisory Committee on Immunization Practices recommended that all women ages 13-26 receive the vaccine. Nearly half of the states considered legislation requiring vaccination for school entry. When Merck's extensive behind-the-scenes lobbying to promote the vaccine became known, controversy erupted over the recommendations for its routine administration. To be sure, the vaccine requirements were controversial for other reasons as well: the vaccine protected against a sexually transmitted disease, the recommendations applied to young teenagers not believed to be (or desired to be) sexually active, they were perceived as interfering with parental authority over children, and they applied to females only, despite the fact that males could transmit the disease and were themselves at some risk from the infection. The controversy lingered and fed into more general suspicions of immunization prevalent in the U.S. and elsewhere at the time. The ground is thus fertile for suspicion of science. Trust or distrust in science may be shaped by the stages of a health threat and perceptions of its severity among the population. Uncertainties and contested normative judgments have been complicated by serious ethical problems in the conduct of science, such as the exploitation of research subjects or conflicts of interest. In addition, science has become political, figuring in debates over clean water and air, climate change, abortion, and many other hot-button issues. Distrust of science is widespread, as the case of vaccination illustrates. Vaccines to prevent disease are one of the most powerful weapons public health can deploy. Yet suspicions about their efficacy and risks, that is, about vaccine science, continue to depress vaccination rates. Maintaining the levels of vaccination needed to achieve herd immunity, the level of population immunization needed to prevent disease spread, remains a fragile enterprise (see the sketch below). Self-interested objections to vaccination as expecting individuals to take risks for the benefit of others in the community are part of the problem. So are parents' objections to taking any risks on behalf of their children for the benefit of others. But behind these objections lies mistrust of scientific representations of the efficacy, benefits, and risks of vaccination.
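The fragility of herd immunity has a simple mathematical core. In the standard textbook model, the herd immunity threshold is 1 - 1/R0, where R0 is the basic reproduction number of the disease; the more contagious the disease, the higher the coverage required. The sketch below assumes this simple model and uses commonly cited, approximate R0 values for illustration only, not figures from this chapter's sources.

```python
# Minimal sketch of the classic herd immunity threshold, 1 - 1/R0.
# R0 values are rough textbook approximations used only for illustration.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to stop sustained spread."""
    return 1.0 - 1.0 / r0

for disease, r0 in [("measles", 15.0), ("seasonal influenza", 1.5)]:
    print(f"{disease}: R0 ~ {r0} -> threshold ~ {herd_immunity_threshold(r0):.0%}")
# measles: R0 ~ 15.0 -> threshold ~ 93%
# seasonal influenza: R0 ~ 1.5 -> threshold ~ 33%
```

For a disease as contagious as measles, even a few percentage points of refusal can push coverage below the threshold, which helps explain how localized clusters of refusal, like those described next, can seed outbreaks.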
In 1998, a paper published in the highly regarded UK medical journal The Lancet purported to establish a link between vaccination against measles, mumps, and rubella (MMR) and autism. The paper, by Wakefield and twelve colleagues (1998), reported a series of 12 cases in which children who had been immunized were later allegedly found to have pervasive developmental disorders. The widely publicized paper caused a sensation that resulted in vaccination refusals to the point that herd immunity was compromised in areas of the UK and elsewhere. From the beginning, a red flag was that the report was highly speculative in generalizing from twelve cases, and epidemiological studies soon questioned the findings (Rao and Andrade 2011). Six years later, ten of the twelve original coauthors retracted their support for the claims about causal relationships between immunization and developmental disorders made in the article (Murch et al. 2004). Although an initial investigation by The Lancet found no misconduct in the publication, it did report that Wakefield had failed to disclose that he had been funded by the Legal Aid Board for a pilot study that included some of the same children, funding that could have been perceived as a conflict of interest he should have revealed (Lancet 2004). Finally, in 2010, The Lancet fully retracted the Wakefield paper based on conclusions that the children reported in the series had not been consecutively referred and that the study had not received local ethics committee approval (Lancet 2010). Despite full scientific repudiation of the supposed link between vaccination and autism, suppositions about the connection will not die out. Parents, worried about the impact of the many vaccines now given to their infants, continue to report beliefs about sudden behavior changes after immunization. "Anti-vax" has become a movement worldwide, and vaccination rates have again slipped below herd immunity levels in some locations. The result has been outbreaks of disease, including one from exposures at the Disneyland resort in California that sickened more than 125 people and resulted in legislation repealing California's "personal belief" exemption from vaccination requirements for school children. Suspicions about vaccine safety are one explanation for low immunization rates. Poverty and sporadic access to medical care are others. It is notable, however, that vaccine mistrust is not always correlated with poverty or lack of education; in California, for example, wealthy Marin County had one of the highest rates of unvaccinated children at the time of the Disneyland outbreak. Many parents opposed to vaccination are also suspicious of the role of the state more generally. There are high rates of vaccine refusal among families who home school their children, and the California statute requiring proof of immunization for school entry exempts children who are home schooled (California 2020). Home-schooled children of course do not stay at home all of the time and may be out and about in places where children gather, such as Disneyland, just as much as other children are. Vaccine refusal has reportedly also been taken up as a cause by women who had adverse experiences with medicalized birth, such as unanticipated caesarean section deliveries, and who see vaccination as "unnatural" (Reich 2016). These women may feel that their understanding of the birthing process was ignored by physicians; put in more philosophical terms, that epistemological injustice deprived them of an opportunity for natural childbirth that they treasured.
Vaccine denialists sometimes draw on alternative theories of knowledge in support of their views. Mark Navin (2013) relates how leaders of the anti-vaccination movement describe their experiences with physicians who failed to listen to their concerns about their children. According to Navin, these denialists see themselves as victimized by the epistemological injustice of disbelief in their testimony. They claim a kind of "democratic" authority as citizen-scientists who have been ignored by experts. These arguments gain credence, of course, if the supposed experts can be discounted for other reasons, such as the conflicts of interest recounted above. Despite his own conflicts, Wakefield is regarded in the anti-vax world as a hero who listened to and believed his patients and who has been unjustly vilified by a scientific establishment that may itself be corrupt. Navin's conclusion is not that the vaccine denialists are justified; he describes their own epistemic vices in detail. It is that public health advocates must not only make the scientific case in favor of vaccination but also attend to their own epistemic vices, such as the failure to listen to concerned parents who may have made accurate observations about their children. The story of autism fear and vaccines has drawn particular attention for its lessons for scientific communication. At the time the fraudulent paper was published, worries were already running high about increasing, but unexplained, rates of autism in the population. Wakefield's paper struck a sympathetic chord among parents who worried about toxic combinations injected into their children. According to the communication scholars Burgess et al. (2006), the vaccine's apparently causing autism had a high "outrage" factor among the public because it was judged to have inflicted severe neurological damage on innocent young children. Parents, moreover, were regarded as having been wrongfully coerced by the British National Health Service's reimbursement policies favoring the combined form of MMR immunization. Scientists, by contrast, emphasized the low or absent probability of the connection, but these claims did not have the same popular salience. Advisory committees recommending policy decisions, such as the Advisory Committee on Immunization Practices that makes vaccine recommendations to the U.S. CDC, must manage these fears (Martinez 2012). In Chapter 7, we consider further how communication through groups may help to achieve the values of respect for individuals that have led some to insist that parental decisions not to consent to their children's vaccination should be honored without further question. Vital statistics can provide important information about the health of populations. For centuries, vital statistics, together with reports about outbreaks, were the primary way for the more fortunate to avoid exposure to diseases that could not otherwise be treated. Those who were not so fortunate succumbed. Today, with increasing understanding of disease and the availability of prevention or treatment, the advantages of outbreak detection may be shared far more widely and more equally. Nonetheless, outbreak detection can generate fear and hostility if patterns of disease track otherwise disfavored groups.
On the other hand, COVID-19 has revealed the importance of demographic data about the distribution of disease burdens, data that may either generate mistrust as people see their disadvantage starkly, or that may foster trust if the result is increased attention to disparities in treatment and in health. The advance of science, particularly in understanding the causation and treatment of contagious disease, made it all the more compelling to identify outbreaks quickly. But scientific advancement is not a panacea. Scientific uncertainty is ineluctable and scientific knowledge is incomplete. When there is reason to believe the practice of science cannot be trusted, because of conflicts of interest, politicization, outright fraud, or exploitation, the consequences may be dire for public health activities using even simple forms of data such as vital statistics. Trust in science to deal with what can be learned from surveillance requires not only scientific integrity but also the appearance of scientific integrity and the successful communication of that appearance.

References

HIV and African Americans in the Southern United States: Sexual Networks and Social Context
Culpable Causation
AAFP. 2017. AAFP Decides to Not Endorse AHA/ACC Hypertension Guideline
Perceived Intent Motivates People to Magnify Observed Harms
A Wholesome Horror: The Stigmas of Leprosy in 19th Century Hawaii
Intimate Partner Violence Experiences by HIV-Infected Pregnant Women in South Africa: A Cross-Sectional Study
Cognitive Biases and Heuristics in Medical Decision Making: A Critical Review Using a Systematic Search Strategy
The American Indian Holocaust: Healing Historical Unresolved Grief. American Indian and Alaska Native Mental
Framing Effects on Physicians' Judgment and Decision Making
The MMR Vaccination and Autism Controversy in United Kingdom 1998-2005: Inevitable Community Outrage or a Failure of Risk Communication?
California Health & Safety Code § 120335(f) (2020)
Centers for Disease Control and Prevention (CDC). 2019. HIV and African Americans
The Past, Present, and Future of Public Health Surveillance. Scientifica (Cairo)
WHO and the Pandemic Flu "Conspiracies"
HPV Vaccination Mandates: Lawmaking amid Political and Scientific Controversy
Committee for the Study of the Future of Public Health. 1988. The Future of Public Health
The Handling of the H1N1 Pandemic: More Transparency Needed
Outbreak Column 16: Cognitive Errors in Outbreak Decision Making
Human Ectoparasites and the Spread of Plague in Europe During the Second Pandemic
Public Health Surveillance: Historical Origins, Methods and Evaluation
Vital Statistics Collected by the Government
From Miasma to Ebola: The History of Racist Moral Panic Over Disease
The EXODUS of Public Health: What History Can Tell Us About the Future
The Ebola Emergency: Immediate Action, Ongoing Strategy
Fevers, Feuds, and Diamonds: Ebola and the Ravages of History
How Should Therapeutic Decisions about Expensive Drugs Be Made in Imperfect Environments?
HIV Treatment as Prevention: Not an Argument for Continuing Criminalisation of HIV Transmission
The Concept of Quarantine in History: From Plague to SARS
Privacy, Confidentiality, and New Ways of Knowing More
Conflicts of Interest and Pandemic Flu
Great Moments in Statistics
Exile in Paradise: The Isolation of Hawai'i's Leprosy Victims and Development of Kalaupapa Settlement, 1865 to the Present
Invited Commentary: The Need for Cognitive Science in Methodology
Leprosy: Social Implications from Antiquity to the Present
The Strange Case of the Broad Street Pump: John Snow and the Mystery of Cholera
Parallels Between Cancer and Infectious Disease
"Scared in Public and Now No Privacy": Human Rights and Public Health Impacts of Indonesia's Anti-LGBT Moral Panic
The Blood Pressure Guideline War Is Not A Fake War
103 Fed Rep
The Emergence of Ebola as a Global Health Security Threat: From 'Lessons Learned' to Coordinated Multilateral Containment Efforts
Understanding Ebola: The 2014 Epidemic
Retraction: Ileal-lymphoid-nodular Hyperplasia, Non-specific Colitis, and Pervasive Developmental Disorder in Children
William Farr: Founder of Modern Concepts of Surveillance
Who Counts in the Census? Racial and Ethnic Categories in France
Strains, Functions and Dynamics in the Expanded Human Microbiome Project
Managing Scientific Uncertainty in Medical Decision Making: The Case of the Advisory Committee on Immunization Practices
The Eyam Plague Revisited: Did the Village Isolation Change Transmission from Fleas to Pulmonary?
World's Second-Deadliest Ebola Outbreak Ends in Democratic Republic of the Congo
Of Medicine, Race, and American Law: The Bubonic Plague Outbreak of 1900
Fit to be Citizens? Public Health and Race in Los Angeles
Retraction of an Interpretation
Competing Epistemic Spaces: How Social Epistemology Helps Explain and Evaluate Vaccine Denialism
Networks in Tropical Medicine: Internationalism, Colonialism, and the Rise of a Medical Specialty
Office for National Statistics
Thomas Fresh (1803-1861), Inspector of Nuisances, Liverpool's First Public Health Officer
Discontinuation and Nonpublication of Randomized Clinical Trials Conducted in Children
Ethical Issues in Cancer Screening and Prevention
The MMR Vaccine and Autism: Sensation, Refutation, Retraction, and Fraud
Black Death at the Golden Gate: The Race to Save America from the Bubonic Plague
The Economic Impact of H1N1 on Mexico's Tourist and Pork Sectors
Tuskegee Syphilis Study Descendants Seek Settlement Money
Of Natural Bodies and Antibodies: Parents' Vaccine Refusal and the Dichotomies of Natural and Artificial
Examining Tuskegee: The Infamous Syphilis Study and Its Legacy
Disease and Death in the
Trump Administration Signals Formal Withdrawal From W.H.O. The New York Times
Cognitive Biases Associated with Medical Decisions: A Systematic Review
Lessons from the Failure of Human Papillomavirus Vaccine State Requirements
The Impact of False Positive Breast Cancer Screening Mammograms on Screening Retention: A Retrospective Population Cohort Study in Alberta
Pollution, Promiscuity, and the Pox: English Venerology and the Early Modern Discourse on Social and Sexual Danger
Are There Characteristics of Infectious Diseases that Raise Special Ethical Issues?
Trump Administration Strips C.D.C. of Control of Coronavirus Data. The New York Times
The Colony
No Child Left Alone: Moral Judgments About Parents Affect Estimates of Risk to Children
Medicine, Empires, and Ethics in Colonial Africa
Lessons from the History of Quarantine, from Plague to Influenza A
Availability: A Heuristic for Judging Frequency and Probability
Competing Theories of Cholera
Race: About Ileal-lymphoid-nodular Hyperplasia, Non-specific Colitis, and Pervasive Developmental Disorder in Children
Report of the Review Committee on the Functioning of the International Health Regulations (2005) in relation to Pandemic (H1N1)