Forum: Technology, Ecology, and Human Health Since 1850
Authors: Otter, Chris; Breyfogle, Nicholas; Brooke, John L.; Webel, Mari K.; Klingle, Matthew; Otter, Chris; Price-Smith, Andrew; Walker, Brett L.; Nash, Linda
Date: 2015-09-11
Journal: Environmental History
DOI: 10.1093/envhis/emv113

A tsetse fly bites into human skin, releasing trypanosome parasites that course into the lymphatic system and bloodstream. An obese, sedentary middle-aged man's body slowly becomes insulin resistant; regulation of glucose levels collapses, and he develops type 2 diabetes. A frightened cow defecates in an abattoir, releasing virulent Escherichia coli O157:H7 bacteria that splatter over other animals. Later, these bacteria transfer to the sides of freshly slaughtered beef. A seventy-eight-year-old woman steps off a plane in Toronto and unwittingly brings severe acute respiratory syndrome (SARS) into North America. A cloud of aerosolized asbestos and metals billows forth from the pulverized wreckage of the World Trade Center and silently insinuates itself in the pleural tissue of rescue operators. Agents of human disease come in many forms. Some are simple, some are complex; some are living, some are not. E. coli bacteria are prokaryotic, simple, and ancient, while the trypanosomes causing sleeping sickness are more complex eukaryotes that contain nuclei and reproduce sexually. Malignant mesothelioma, by contrast, arises from the contact of lifeless toxins with delicate pleural tissue. Other entities, like the prions causing scrapie, Creutzfeldt-Jakob disease, and kuru, straddle the enigmatic boundary between living and nonliving. But all of them, as Linda Nash has powerfully argued, move through materially specific ecologies: bloodstreams, air, rivers, water mains, hamburgers. 1 All diseases have their own distinct pathogen-ecology system, an actor-network or assemblage composed of specific environments, disease agents, and bodies. Pathogen-ecology systems have a long history that has been conceptualized in schematic, but nonteleological, terms. Tony McMichael argues that the ecological history of human disease can be comprehended in terms of four distinct but overlapping waves. The first occurred following early human settlements in the Neolithic; a combination of higher aggregate populations and cohabitation with animals allowed microbial transfer from animals to humans. During the second wave, contacts between Eurasian populations transmitted microbes between cultures (the Black Death is the paradigmatic example), and formerly epidemic diseases became endemic. After 1500 CE, the third wave saw pathogens like smallpox distributed around the globe via the Columbian Exchange. 2 The focus of this Forum is McMichael's fourth and most recent wave, which comprises the large-scale ecological and technological transitions underway since the later nineteenth century. 3 These transformations include the construction of massive and systemic infrastructures, unprecedented urbanization, the development and deployment of new materials, mass deforestation, the disintegration of natural habitats, pollution, species extinction, and industrialized agriculture. 4 Through such processes, humans have unsettled planetary ecologies to an unparalleled extent. The public health effects of this relentless human niche construction have been dramatic and in some ways paradoxical.
The classic infectious diseases in the more developed parts of the world have largely been conquered through vaccines and antibiotics, and human life expectancy has risen dramatically. However, new diseases are emerging across the globe at a historically unprecedented rate and are circulating efficiently through technological networks, causing an increased amount of microbial and toxic "traffic." 5 Meanwhile, a second set of transformations stems from large-scale engineering and development projects. Urbanization and the construction of large reservoirs and highways have disturbed ecologies and triggered the irruption of previously unknown diseases into human societies, particularly in tropical parts of South America, Africa, and Asia. This is particularly true of arboviruses. In South Asia, Kyasanur Forest disease, a tick-borne viral fever, appears to have emerged when deforestation and cattle grazing forced ticks into contact with humans. 12 Colonization of Amazonia led to the introduction of Oropouche fever, caused by a virus spread by a biting midge called Culicoides paraensis, into human populations, while urbanization has been a major factor in the planetary expansion of dengue fever. 13 The role of colonialism in disturbing disease ecologies is addressed by Mari Webel in her essay on sleeping sickness. Third, modern technological networks have acted as evolutionary spaces, within which new pathogens have emerged, evolved, and found unique niches. 14 Chris Otter and Andrew Price-Smith each address this issue, providing two examples of the numerous diseases emerging in advanced industrial society. Legionnaires' disease, for example, was first observed in the United States in 1976. The bacterium thrives only in warm water between 25°C and 42°C, conditions rarely encountered outside of human-built water infrastructures: whirlpools, showers, hot tubs, hot-water tank sediment, and cooling towers. 15 Fourth, the Anthropocene is an age of material transition and diversification. While traditional technological networks were composed of wood, stone, and iron and powered by organic energy sources, newer technological systems incorporate a wider range of materials, particularly rarer metals and synthetic substances. 16 Many of these materials were used for functionally specific and critical elements of technological systems like batteries, insulation, fuses, and microchips. Additionally, many new chemical products like nylon, PCBs, Teflon, and stilbestrol, used in everyday consumer items, permeate our planet. Living bodies soon began to inhale and ingest these materials, resulting not only in the emergence of various forms of cancer, but also in conditions like asthma and allergies. 17 Disposing of such materials is also a serious environmental problem. 18 As Gregg Mitman has argued, the "multitudinous exposures permeating our modern world" mark this epoch as materially distinct from earlier periods. 19 Brett Walker compellingly illustrates this development in his essay on asbestos and 9/11. In the Anthropocene, motorization and aviation have allowed people and pathogens to move at unprecedented speeds. Electrification and computerization enable information to move almost instantaneously. In the "Age of Speed," humans must adapt to the transformed pace of life. 20 Jonathan Crary's recent 24/7 offers a stark warning about our increasingly sleepless society. 21 Yet the Anthropocene is not purely an age of hyperactivity.
Increased bodily stasis, inactivity, obesity, and metabolic derangement comprise the fifth aspect of the Anthropocene's disease regime, which Matthew Klingle examines in his essay on diabetes. According to one report, chairs are becoming the twenty-first-century version of the cigarette. 22 Globally, type 2 diabetes rose sevenfold between 1975 and 2005. 23 The metabolic syndrome, a cluster of conditions including irregularity of glucose regulation, high blood pressure, and obesity, is the archetypal "mismatch disease," resulting from a disjuncture between our bodies and our reconfigured technological landscape. Here, the entire nutritional and technological milieu is the disease agent. A list of such hypothesized conditions now includes attention deficit/hyperactivity disorder, carpal tunnel syndrome, fibromyalgia, myopia, and obsessive-compulsive disorder. 24 In the developed world, noncommunicable diseases are the major killers in the Anthropocene. A sixth and final dimension of this Anthropocene pathogen-ecology is epistemological. Disease in an age of technological and material transition is increasingly visualized, mapped, analyzed, calculated, comprehended, and predicted through multiple public health networks and technologies. The mapping of disease emerged in the nineteenth century as a key technique for understanding its specific ecology, as Mari Webel adeptly shows in her essay on sleeping sickness. These processes of mapping increasingly revealed disease ecologies to be technological or infrastructural in nature: John Snow's identification of the Broad Street Pump is the iconic example here. 25 Statistical methods were simultaneously utilized to elucidate epidemiology. Such techniques have been joined by laboratory studies establishing (contested) thresholds of toxicity and the rise of disease modeling. 26 In addition, the improved accuracy and power of instruments of detection have facilitated the identification of smaller and smaller disease entities. Electron microscopes enabled the visualization of viruses in the 1930s. The environmental effects of DDT were only uncovered following the development in the later 1950s of devices that could detect a thousand-millionth of a milligram of a substance. 27 Large-scale commercial systems often have their own concomitant systems for monitoring and preventing disease. The cattle complex, for example, has attempted to improve the traceability of animals through tattoos, tags, and implants. 28 The tracking of human disease carriers-"viral traffic studies"-is also an essential dimension of contemporary epidemiology. 29 A combination of such techniques, aided by the acceleration of computing capacity, has facilitated the development of vast systems of knowledge production, such as the Food Emergency Response Network and PulseNet/Enter.Net. These six dimensions are, to repeat, in no sense exhaustive. The essays in this Forum explore all six, albeit in different ways and with different emphases. The authors all contributed to the 2012-13 program on "Health and Disease in World History" held at The Ohio State University's Center for Historical Research. During this program, the relationship between ecology and disease was a recurring issue, and the Forum thus attempts to explore these various facets of pathogen-ecologies in the recent human past. The first article, by Mari Webel, explores the history of sleeping sickness in early twentieth-century East Africa.
Webel specifically focuses on ecological disruption and knowledge production. Although the disease was an old one, it was new to Europeans, whose colonial activity disturbed local ecologies and triggered the expansion of the disease. The response was characteristic of European imperialism: an ongoing attempt to expunge the perceived natural perils of an undeveloped environment, associated with dangerous forests and swamps, and to replace them with something more manufactured and manageable. It also entailed an attempt to comprehend the disease's specific patterns of movement through the use of an elaborate and laborious system of fly-catchers, whose activities allowed the mapping of the disease onto the colonial environment. Matthew Klingle explores a different history, that of type 2 diabetes in twentieth-century America. This is a mismatch disease caused by bodies undergoing rapid and incomplete assimilation to a new material and dietary milieu. When Native American groups shifted to the diet of white people, specifically foods rich in sugar and corn syrup, and began adopting the more sedentary lifestyles associated with modern technologies, rates of type 2 diabetes accelerated alarmingly. This phenomenon can only be explained environmentally, and specifically by an environment characterized by an abundance of refined foods and technologies of stasis. Development, Klingle shows, has the capacity to produce new forms of degeneration. Klingle also draws our attention to the ways in which poverty and its attendant stresses are a strong contributing factor in this history. The theme of food is continued by Chris Otter, who looks at the emergence of novel foodborne pathogens in the twentieth century, particularly listeriosis and Escherichia coli O157:H7, in relation to the development of large-scale food systems. In the process, he highlights issues relating to expansive networks and evolutionary history. It has long been understood that long-distance transportation and processing of food can contribute to the formation and distribution of pathogens, an understanding that catalyzed attempts to retard decay, particularly through refrigeration. However, it has proved impossible to prevent food systems from spreading disease. Listeria, for example, survives at cold temperatures and has thus thrived within refrigerated environments. The appearance of E. coli O157:H7 in 1982, a deadly version of an otherwise unremarkable inhabitant of mammalian guts, prompted intense speculation about the role played by contemporary food systems in the distribution and even generation of disease. Giant feedlots, grain feeding, and the administration of antibiotics have all been implicated in the emergence of this deadly pathogen. The evolution of the microbe, while not entirely determined by radical ecological change, has clearly been significantly shaped by the transformation of food systems. Andrew Price-Smith furthers the argument here that so-called advanced technological systems can actually cause disease, not by malfunctioning but by working perfectly. He focuses on SARS and the increasing phenomenon of antibiotic resistance. SARS, he shows, is a virus explicitly associated with affluent and technologically sophisticated societies.
It found a niche in advanced hospitals, where air-conditioning systems facilitated its spread, and it traveled globally through the medium of intercontinental air travel, which distributes microbes far more quickly than the ships that brought cholera to European shores in earlier centuries. Antibiotic resistance, meanwhile, is becoming one of the biggest public health concerns in the West. 30 The argument is thus that progress in medical technology continually generates microbial resistance, as microbes evolve to exploit the new niches it creates. Those parts of the world remaining more "natural" and "undeveloped," paradoxically perhaps, are less vulnerable to certain diseases, the so-called plagues of affluence. Brett Walker explores the new materials of the Anthropocene by using the evocative example of 9/11. While most scholarly focus, quite understandably, has been directed toward the appalling acts of terrorists, Walker addresses another, more morally ambiguous, story. The World Trade Center was itself a giant vertical technological system, or series of interwoven systems, composed of a vast number of different chemicals and synthetic materials, many of which were toxic or carcinogenic. The destruction of the towers reduced these materials to a giant plume of dust and vapor. Rescue workers inhaled this entirely novel material cocktail and, in turn, developed cancer. Walker's particular focus is on asbestos, which allowed the fireproofing of very tall buildings and was released into the atmosphere during the attacks. Again, Walker's piece indicates the ways in which technologically advanced systems and structures can have dramatic, and often unpredictable, health effects. Finally, Linda Nash provides a commentary in which she contextualizes the Forum's essays by examining the ways in which environmental historians have addressed the history of health and disease. She suggests that the development of the environmental history of disease has drawn on the work of social historians of medicine and disease ecologists. The Forum, she notes, utilizes the frame of disease ecology rather more than that of the social history of medicine. In doing so, she reminds us, the broader social, economic, and political forces shaping technological landscapes and disease ecologies can sometimes be obscured. Further, the knowledge of disease in the Anthropocene is always contextual, situated, and historical. Taken together, the essays demonstrate various aspects of our current ecological disease regime. While ecology remains inescapable, it has become increasingly shaped and built by humans. This human-built world, in turn, provides an elaborate set of networks within which a range of disease agents emerges and circulates. Human health is now maintained or disturbed within the complex niches we have constructed for ourselves. Given the palpable environmental inequalities underpinning social relations, one would expect clear social consequences: this is evident with type 2 diabetes as well as with conditions as disparate as enteric infections in the developing world and occupational cancers in Europe and America. However, some emergent diseases are only evident in more technologically advanced areas. SARS is a case in point, but other emergent diseases, like E. coli O157:H7 and bovine spongiform encephalopathy, are also largely limited to the developed world. If the Anthropocene is the age when humans become "geological agents," it is equally the age when our engineered environment simultaneously nurtures and sickens us.
31 Two final points are perhaps in order. It has become common to view the Anthropocene as the epoch of "the sixth extinction." 32 This might be true at a faunal level, but at the microbial level, it is clearly also an age of accelerated emergence. Finally, while long-distance technological networks have connected most of the world, every pathogen-disease system has its own specific, delimited ecology. The world is not evenly suffused by singular disease risks, in the manner of Ulrich Beck's "globalized risks." 33 If disease ecologies are inescapable, they are also always local. These essays explore some aspects of this tangled history. In the early twentieth century, much remained uncertain about sleeping sickness, a deadly parasitic disease spread by a biting fly. After 1903, scientists generally agreed that infection with a protozoan parasite caused the disease and that the tsetse fly was its insect vector. 1 What they did not yet know was how long a fly could remain infectious after biting a person with parasites in his bloodstream; which animals in the African environment served as reservoirs for the disease; which species of tsetse flies carried the disease; and what constituted the preferred ecology, or ideal habitat, for the flies that threatened to endanger human populations across a wide swath of the continent. This essay considers pathogens and their ecologies at a particular moment in history, amid the advance of colonial rule in Africa's Great Lakes region when both sleeping sickness and the documentation and mapping of it appear to expand in tandem. Historians familiar with sleeping sickness will be equally familiar with the voluminous data-the maps, charts, and reports signaling this epistemological shift-that colonial medical officers and tropical medicine researchers produced and that have allowed studies of sleeping sickness to develop so robustly. 2 Colonial scientists and administrators documented fly habitats, conducted disease transmission experiments in animals ranging from monkeys to crocodiles, tested a wide variety of chemicals and drugs on human and animal subjects, established and managed cordons sanitaires and sleeping sickness isolation camps, and cleared bush from lakeshores and riverbanks to destroy fly habitats. Collectively, across sub-Saharan Africa, colonial health services launched and pursued diverse anti-sleeping sickness campaigns that frequently constituted the most assertive and permanent colonial presence to date for affected African communities. They did so based on particular understandings of the African environment and of the disease ecology of sleeping sickness, ultimately driven by the imperatives of capital to secure the economic future of threatened colonies. In the Great Lakes region, such campaigns focused largely on riverine tsetse habitats, characterized by dense vegetation along waterways and lakeshores. Colonial campaigns also relied on an understanding of fly feeding behavior that initially included mammals, reptiles, and birds, and left open many possibilities for sustaining fly populations-and therefore the possibility of disease transmission under the right conditions-with or without a local human presence. Sleeping sickness interventions, particularly those targeted at altering African environments and tsetse fly habitats, operated within changing regimes of labor and land use. 
Early colonial understandings of how landscape and disease were related intertwined with African experiences of tax collection, labor recruitment, and diverse attempts to profoundly reshape the ways that people fished, farmed, traded, and traveled. But in order to make such sweeping attempts at intervention, colonial administrators and medical officers first had to develop a comprehensive knowledge of the ecologies and places of sleeping sickness-to map landscapes with a focus on sleeping sickness and within rubrics of "infected" or "healthy," "contaminated" or "clean." Such processes of producing knowledge about disease ecology and fly habitats relied on the collaborative as well as independent labor of young African men employed as specimen collectors, "fly boys" in British areas and Fliegenfänger (fly-catchers) in German areas. 3 Fly-catchers' tedious and intensive groundwork provided the foundation for colonial understandings of sleeping sickness epidemiology and disease ecology, and, in turn, shaped strategies for ecological and environmental interventions. In the early period of colonial medical interventions targeting sleeping sickness, specimen collectors' work did much to turn the uncertain into the known by classifying an ecological niche as "infected" or "healthy," categories that were readily extended to an area's inhabitants. Here, I focus not on the aspects of ecological change that may have facilitated the spread of sleeping sickness, but on the practical and intellectual work that enabled an understanding of disease ecology to develop in the first place. I posit that exploring auxiliaries' work allows us to understand more fully the specific processes of knowledge production that led to the development of ideas about sleeping sickness and East African environments and shaped environmental interventions on the ground. Changing disease ecologies and uses of the environment affected exposure of both human and animal populations to trypanosome parasites in the late nineteenth and early twentieth centuries, linking ecological change to colonial incursion and concomitant political and social disruption. 4 The associated changes in disease ecology, human mobility, resource extraction, and governance implicated in the spread of epidemic sleeping sickness fit into current understandings of the sweeping transformations of the Anthropocene. 5 Arguments positing the Anthropocene as a new era in geological and human history provide a temporal and interdisciplinary framing that historians of health can use to analyze diverse multi-sectoral change. But efforts to understand shifts in disease incidence spatially and environmentally far precede this modern reframing of moments of rupture. 6 Particularly in early twentieth-century colonial contexts, such efforts were generally not global in scale, focusing instead on the political geography of territories and protectorates, for example, or the topographies of mountain ranges, lakes, or forests. But we might, as the authors of this Forum's introduction suggest, productively view epistemological innovations and knowledge production as another dimension within which we consider the singularity of the Anthropocene. This essay reflects on colonial efforts to connect environment and disease, particularly specimen collection and mapping, around Lake Victoria in the early twentieth century. 
It pieces together a general picture of the activities of the diverse Fliegenfänger at work in the German anti-sleeping sickness campaign at Lake Victoria and Lake Tanganyika before the First World War, relying on reports, photographs, and correspondence produced at the time. It then focuses on a specific field of activity on the eastern shore of Lake Victoria to explore one example of how fly work fit into the designs of the colonial campaign against sleeping sickness. Although it is no small leap from the Lake Victoria littoral to lower Manhattan, Walker's contribution here also considers work and related flows of information and expertise (or their blockage), encouraging us to think about labor, risk, and how individuals' health can be caught up in wider global structures of resource extraction and capital. Diseases enabled by rapid technological change (as explored in Otter's and Price-Smith's contributions) or understood in tension between inherent qualities and recent shifts (as critiqued by Klingle's study of diabetes) are not intuitive comparative companions for sleeping sickness. New technologies are striking in their absence, as the past century is best characterized by a lack of development in either testing or treatment resources for this predominantly rural disease. And yet we might take these disparate diseases alongside one another, as this Forum seeks to do, to consider comparatively the historical roots of how connections between particular populations and particular diseases, set within particular environments, are created and reified. The expansion of epidemic sleeping sickness in the Belgian Congo during the last decade of the nineteenth century and in the Lake Victoria region during the first decade of the twentieth century took a significant demographic toll on affected African communities. 7 Historical epidemiological treatments of the disease's expansion in this period, while acknowledging local variability, generally point to a convergence of causative events and factors, including the disruption of previous patterns of environmental management due to the social and political upheaval triggered by colonial incursion (the Ford thesis), reduction of land under grazing due to diseases affecting cattle and wild game, and changes in human exposure to the disease due to mobility in and out of tsetse habitats. 8 Labor and mobility also played a key role in the epidemic in certain areas; Lyons connects Belgian colonial demands for rubber and gold to changing exposure to sleeping sickness, along with heightened susceptibility, among populations in northeastern Congo. 9 But this nuanced understanding of epidemiological change and the epidemic's origins, not to mention the possible role of colonial demands in aggravating, rather than mitigating, the disease's expansion, was not a part of colonial appraisals at the time. Instead, colonial regimes focused on endangered populations and the threat sleeping sickness posed to colonial economies. They developed diverse approaches toward eradicating the disease-often using common strategies or principles drawn from the new field of tropical medicine-that changed over the course of the epidemic's first twenty years. 10 British efforts to stem the tide of the disease focused primarily on coastal Buganda (including the Ssese Islands) and Busoga, where an estimated 250,000 to 300,000 people died from the disease before 1920.
11 There, efforts combined attempts to control African population movement and land use with attempts to clear vegetation and destroy tsetse habitats. The British campaign against sleeping sickness in the Uganda Protectorate preceded the German campaign in German East Africa by just a few years. British counterparts provided German scientists with points of reference for anti-sleeping sickness measures, such as the depopulation of coastal areas and the development of isolation camps, as well as research methods, such as a practical orientation to laboratory techniques needed for research into the disease. 12 At the outset of German investigations into sleeping sickness in 1903, British use of African personnel to sample tsetse fly vector populations likewise influenced German colonial officers' practices. 13 As the German anti-sleeping sickness campaign took shape around Lake Victoria-near Bukoba on the western shore and near Shirati on the eastern shore-Fliegenfänger were certainly a part of initial efforts to identify fly species and their habitats. In areas such as Shirati District on eastern Lake Victoria or the northern littoral of Lake Tanganyika, German colonial doctors asserted that sleeping sickness was endemic, but they also insisted that its arrival was recent and connected to the migration of people carrying the disease from areas of British Uganda, in the case of Shirati, and from the Belgian Congo, in the case of Lake Tanganyika. This reflected a belief that the disease was entirely novel in German-controlled areas. When people carrying trypanosomes into tsetse fly habitats arrived-when the presence of clinical cases of sleeping sickness overlapped with fly vector habitats-medical and sanitation officers labeled such areas as "infected." Characterizing an area or a specific landscape in Africa as infected was not without precedent. It drew on Euro-American ideas current in the early twentieth century of a pathologized, dangerous African natural environment and also made reference to historical etiologies of disease derived from miasmatic theory. 14 The new technologies and tactics of tropical medicine aimed to undermine the former steadily, making Africa safe for Europeans (while still bolstering particular ideas about "the tropics" as a place apart), and to dismiss the latter once and for all by identifying specific pathogens for ailments that had seemed connected to particular kinds of environments. 15 German colonial officials combined surveys of insect and human populations to create specific local maps of areas of concern. At Lake Victoria, these areas radiated outward from Shirati, just south of the Ugandan border to the east, and Bukoba, just south of the Ugandan border to the west (see figure 1). 16 Medical officers assigned to the sleeping sickness campaign created "sketch maps" that identified territories by a chief or king's name and were oriented around major named waterways. These maps accompanied narratives of exploration and first forays through a new district (in the initial months of the anti-sleeping sickness campaign) and grew more refined and detailed with return trips through known areas of tsetse or sleeping sickness (as the campaign took shape).
In either case, maps produced for administrators and senior scientists in East Africa and Berlin focused on differentiating between healthy and infected landscapes, specifically the overlap between clinical cases and tsetse that meant the disease could spread, and not on indicating population distribution, rates of infection, or density of fly vectors-data that field scientists developed. What is striking now when reading colonial doctors' reports alongside the accompanying maps they produced is how they flatten the detailed experiences of searching for flies and the sick, removing a sense of time spent, negotiations made, and orientation lost and recovered. These experiences were often included in reports in great detail, along with numbers of people examined for sleeping sickness, confirmed cases of the sick, and descriptions of plants, terrain, and climate. While these details are, admittedly, necessary casualties of presenting the information desired about the extent of sleeping sickness in German territories in a general manner, they represent a silencing in the colonial archive, as well as an opportunity for historians of health, ecology, and environment. Revisiting the work of surveying and collection opens up our perspective on the processes by which knowledge about African environments was created and on whose labor those processes depended. 17 German attention to African communities within the sleeping sickness campaign focused on its colonial borderlands, spaces within which uncontrolled African mobility posed the most serious threat to the wider health of the German East African protectorate. As such, two officers from an initial expedition (led by eminent scientist Robert Koch) in 1906-7 were responsible for operations on opposite sides of Lake Victoria in the fall of 1907. Using Shirati, the station on the eastern lakeshore, as a base, searches for flies and the sick focused on the lakeshore itself, spanning the short distance north to the Uganda border and then further south to the bays where the Mori and Mara Rivers entered the lake. Off the lake, the Mori River watershed was the primary focus of doctors' investigations, extending up river for twenty to thirty miles to the village of Utegi. Dr. Oskar Feldmann, one of the officers tapped to explore conditions around the lake, conducted an initial survey of the area around Shirati in September 1907. It revealed several hundred potential cases of sleeping sickness, of which a few score were confirmed to have trypanosomes in their bodies. At the same time, Feldmann also placed Glossina palpalis, the tsetse fly, on the map, collecting specimens and then inquiring whether people living on the shore of the lake or along the Mori and Mara Rivers were familiar with the biting flies. While the majority of his report reads, typical of Feldmann's bravado, as a tale of maneuvers through new territories where he singlehandedly surveyed several thousand people for sleeping sickness, subsequent plans for the anti-sleeping sickness campaign in the area reveal a cadre of shadow assistants that enabled his work. Feldmann mentioned in passing how "his fly-catchers," working independently from him, helped to assemble a picture of sleeping sickness ecology in the area. 18 African assistants who circulated along with Feldmann into the Mori bay and watershed, as well as along the lakeshore, allowed him to map the extent of sleeping sickness and presence of the fly vector within a wide swath of territory, and to do so relatively quickly. 
He incorporated these workers into a proposed budget for a sleeping sickness camp at Shirati, and several months later, when Feldmann had been posted to Lake Tanganyika to organize anti-sleeping sickness efforts there, fly-catchers and their provisioning again made up a key item in his budget. The collectors at Lake Tanganyika were to be paid 12 rupees each per month and provided with boots, suits, and leg wrappings to protect them from the bites of infected flies. 19 This wage was more than that for porters and men who policed the lakeshore and, by comparison, was triple the yearly hut tax of 4 rupees that the German East African government required at the time. 20 We have a sense, then, that specimen collecting-working as a Fliegenfänger for the Germans-was comparatively lucrative work for the few who had it. We unfortunately do not know how these men came to be employed or any great detail about their age, place of origin, or training. Perhaps local authorities mediated their employment, as with a group of medical auxiliaries posted north of Bukoba, or perhaps they were drawn from the ranks of young men working in another capacity for the administration at Shirati. 21 Their assistance became indispensable for the campaign around Shirati, where dense fly populations, scant infrastructure, and a Luo-speaking population perceived as being more mobile and dispersed made for a complex and constantly shifting intervention. 22 Accurate collection of tsetse and mapping of fly habitats shaped German officers' arguments that the district was by and large verseucht (contaminated), and, by extension, that preventing the spread of the disease using ambulatory treatment with antitrypanosomal drugs in a central isolation camp was not feasible. German officials employed the latter strategy near Bukoba, on the opposite side of the lake, where tsetse ecology differed and flies were less common. The campaign around the Mara and Mori Rivers, in contrast, focused on aggressive Abholzung (literally "deforestation," but here meaning bush clearing) along the river courses, destroying fly habitats where people were most likely to come into contact with tsetse during regular use of the river, even during drier seasons when fishing or gathering water was less frequent. Species identification and extensive specimen collection allowed targeted Abholzung, but, in practice, radical clearing took place anywhere that flies had been spotted where people also might be. Such clearing generally took the form of cutting back bush and grasses rather than burning, and occasionally also involved replanting cleared areas with more desired plants such as cassava. 23 It was labor-intensive work under shifting regimes of labor that included corvée (forced labor) and the arrangement of a set number of days per month in exchange for hut tax forgiveness, as well as wage labor. By June 1908, German doctors had begun to set up a camp aimed at treatment and research at Utegi, inland on the Mori River. 24 Fly-catchers regularly deployed from Shirati and Utegi, likely accompanying further exploration to the south along the Mara River as the local campaign expanded its reach. By 1912, specimen collection also appears to have been experimentally used as a measure to get rid of flies completely along the Mara River. A Dr. Breuer reported that three Fliegenfänger caught over 4,700 flies in the last quarter of the year, destroying fly pupae as well.
25 In the second quarter of 1913, another doctor reported from Shirati that Fliegenfänger caught 52,000 flies, but also, more significantly, that auxiliaries were following up on clearing work near the town of Musoma, checking to see if fly habitats had indeed been destroyed along the lakeshore. Based on their "close checking" of the area, he was pessimistic that full clearing would yet be successful. 26 Here, auxiliaries had become reliable proxies, their familiarity with flies and fly habitats allowing the campaign a broader reach in a variety of locations in the district. Elsewhere in the district, clearing work yielded more encouraging results: along the Mori River, where intensive work by fly-catchers had been ongoing, sections were considered "fly free" by early 1914. Simultaneously, another group worked independently on an island in the lake, where a year-long experiment in the "mechanical" extermination of flies was coming to an end, ultimately proving an ineffective way of getting rid of tsetse. 27 As with many vector-borne diseases, the options for inhibiting the spread of sleeping sickness were-and are-limited either to interfering with the parasite's life cycle in the human body through drug treatment or to interrupting the transmission of the disease by separating fly vectors from human hosts. For German officials in the early twentieth century on the eastern lakeshore of Lake Victoria, choosing to deploy the latter strategy in certain areas meant attempting a wholesale reduction in tsetse populations through destruction of fly habitats. Alternatives, such as moving people away from fly habitats by relocating villages, protecting individuals from fly bites, or large-scale treatment with antiparasitic drugs, were generally untenable around Shirati. Drug treatments, including experimental therapies, remained an important and controversial component of the German anti-sleeping sickness campaign, although with very limited benefits and many adverse effects. 28 Increasingly specific knowledge of the disease ecology of sleeping sickness on the eastern lakeshore did not lead to increasingly beneficial interventions. Clearing bush and replanting with certain food crops did not eliminate tsetse fly habitats; flies recolonized the same areas with the new vegetation. Furthermore, labor regimes oriented around clearing work created ongoing tension between German officials and communities in the Mara and Mori watersheds, as one among many new and potentially contentious colonial demands. The problem of sleeping sickness provides inroads for historical analyses of disease ecology and the impact of political, economic, and social change on wider dynamics of exposure and vulnerability. Contrasting with diseases of affluence or those generated through the use of new materials or technologies, as discussed elsewhere in this Forum, sleeping sickness in the modern era has been a disease of poorer rural African populations, and its recent history has been marked more by technological lags and gaps than by forward momentum, innovation, or technological change. As research attention in pharmaceuticals and medicine focused elsewhere in the latter half of the twentieth century, vector-centered preventive interventions persisted, and drug treatments remained difficult to access, risky, and fraught with uncertainty.
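The two levers described above, treating the infected person or keeping flies and people apart, correspond to the terms of the Ross-Macdonald framework that later vector-borne epidemiology made standard. The expression below is a modern textbook convention rather than anything drawn from Webel's essay or the colonial record, and it ignores complications such as animal reservoirs; it is offered only as an illustrative sketch of why habitat clearing and the separation of hosts and vectors were, in principle, plausible levers on transmission.

\[ R_0 = \frac{m\,a^{2}\,b\,c\,p^{n}}{r\,(-\ln p)} \]

Here m is the number of flies per human host, a the daily rate at which a fly bites humans, b and c the probabilities of transmission per infective bite (fly to human and human to fly), p the fly's daily survival probability, n the extrinsic incubation period of the parasite within the fly, and 1/r the average duration of human infectiousness. Reducing fly density m (bush clearing) lowers transmission linearly, reducing the human-biting rate a (relocating villages, protective clothing) lowers it quadratically, and drug treatment acts through r by shortening the period during which a sick person can infect flies.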
Disease prevention campaigns during the colonial period, such as those targeting sleeping sickness, allow historians not only to trace the genealogy of modern interventions, but also to understand how the information that fed particular policies was mediated by diverse labor on the ground. Histories of research and labor nestle within the more prominent story of epidemic disease and its prevention, and teasing out these histories reminds us of the variability and contingency of colonial programs in this early period. This is particularly interesting when considering sleeping sickness in historical perspective, as the focal nature of the disease has often meant continued attempts at intervention in a particular place over nearly a century, sometimes by multiple state and nongovernmental actors. Given the ongoing, if diminishing, impact of sleeping sickness on rural African communities, a vigorous and multidisciplinary discussion of aspects of the historical and present burden of disease remains relevant. Such a discussion potentially gives deeper context for patterns of disease causation, for example, and the factors that motivate community resistance to or engagement with public health measures.

Mari K. Webel is an assistant professor of history at the University of Pittsburgh, specializing in African history and the history of health. She is currently completing a book on the politics of sleeping sickness prevention in East Africa entitled Negotiated Interventions: Sleeping Sickness, Community, and Authority in the Great Lakes Region, 1890-1920, and pursuing a project on the recent history of the "neglected tropical diseases" and global health programs in Africa.

16 Along Lake Tanganyika, areas of concern for the sleeping sickness campaign were more widespread, constituted by a string of posts between Ujiji and Usumbura (Bujumbura), their immediate hinterlands, and nearest landings on the lakeshore. Away from the coast, concern followed commerce, with German attention directed at oil palm plantations and a salt production site.

In recent decades, chronic diseases have emerged to become significant global health threats, with diabetes as one of the leading causes of death and disability. Many observers have framed diabetes as a "mismatch disease" where cultural changes, from abundant calories to sedentary lifestyles, have outpaced human evolutionary biology. When seen historically, however, labeling diabetes as a "disease of civilization" is an old narrative that intersects with changing ideas about racial vulnerability, environmental degradation, and anxieties over progress. This essay explores how the frame of diabetes as a mismatch disease emerged, in part, from research on Native Americans, and it argues that, ultimately, the present-day diabetes epidemic can only be explained environmentally, which requires historians to broaden what they consider as the "environment" in their studies. We may live in the Anthropocene, but when we apply the term to our modern bodies, we also live in the age of chronic disease. Three-quarters of all health care spending in the United States goes toward the treatment of chronic diseases. They are the leading cause of death and disability in the developed world and are fast becoming a major cause of premature death and morbidity in the developing world. 1 Yet for all of their devastation, chronic diseases have only received concerted attention from historians in the past decade.
2 One reason is historical: chronic diseases were largely invisible or uncommon until the postwar era. Before then, infectious disease was the greatest health threat. Following the so-called epidemiological transition, when Western nations gained the upper hand on infectious disease, a new period of "managed fear" emerged as diseases once considered fatal were now controllable, if at tremendous cost to patients and caregivers. 3 No chronic illness may better exemplify our state of ambient dread than diabetes. According to the US Centers for Disease Control and Prevention, as of 2010 approximately 25.8 million people, or 8.3 percent of all Americans, were living with diabetes. Statistics for global prevalence are similar, with approximately 80 percent of diabetes deaths occurring in low- to middle-income nations. 4 Once considered rare, diabetes is now ubiquitous and increasing. Once considered either an act of fate or a failing of human nature, it is now seen as the result of modernity run amok. Several other authors in this Forum make similar claims about the emergence of new pathogen-ecology systems in the Anthropocene. Brett Walker archly notes how the materials of our built environment, from asbestos to rare earths, are now chemical weapons of global terrorism. Andrew Price-Smith points to nosocomial diseases like the SARS coronavirus as the deadly fruit of modern health care and transportation technology. Chris Otter traces how virulent foodborne pathogens, like Escherichia coli, have coevolved with our ever more complex food systems. As this Forum suggests, civilization is no safe harbor for health because, as Linda Nash has trenchantly observed, our "most controlled spaces" have often become "the most physically threatening." 5 Diabetes also encapsulates this curious anxiety of modern life-that progress is pathological. 6 Defining and enumerating progress as a risk factor, however, has proved to be a thorny task, especially for chronic diseases like diabetes. Over the past century-plus, physicians and scientists have offered varied and sometimes contradictory explanations for today's diabetes epidemic. Exploring all of these "hybrid causations" is beyond the scope of this essay, but one particular frame has become pervasive in recent years: diabetes as a "mismatch disease." 7 The theory goes like this: evolution favored human ancestors who could lard away calories from energy-rich foods, but today, cultural change has outstripped biological adaptation. Human culture has become, in the words of evolutionary biologist Daniel Lieberman, "the dominant force of evolutionary change acting on the human body" as well as upon the planet itself. 8 Put another way, we have shaped more than the environments enfolding us; we have also shaped the environments within us. While framing diabetes as a "mismatch disease" has become widely accepted as the latest explanation for the global scourge of diabetes, the origins of the concept, to paraphrase Nash, are both inescapably local and older than we may realize. The frame was first created in the late nineteenth century when physicians and nurses across the Western world noticed increased diabetes incidence and prevalence. Many attributed the rise to longer life spans, but others pointed to changes in diet, body size, and increased affluence plus the stresses of industrial urban life. As British physician Robert Saundby wrote in 1900, diabetes was "one of the penalties of advanced civilization."
9 Others added racial explanations to the list, insisting that diabetes was a Judenkrankheit, a Jewish disease, rooted in ethnic tendencies toward obesity or overeating. 10 This type of diabetes was previously called maturity or adult-onset diabetes, or fat diabetes; it is called type 2 diabetes today. The other main variant, which usually strikes in youth, was called juvenile-onset or thin diabetes; this is type 1 diabetes, and it was a death sentence. The discovery of injectable insulin in 1921 revolutionized the treatment of diabetes, specifically type 1, by saving lives with the trade-off of transmuting a once fatal illness into a chronic disorder. 11 But insulin therapy did not shatter the frame of diabetes as a disease of progress. Rather, the frame shifted to enclose other groups, notably Native Americans. At the start of the twentieth century, American Indians were defined as uniquely vulnerable to acute infectious illnesses like tuberculosis. While observers blamed inadequate sanitation, limited health care, and poor nutrition for high infectious disease prevalence, many also believed that Native susceptibility was "embedded in their bodies." 12 The frame of Indians as epidemiologically fragile expanded to include chronic disease, thanks, in large part, to long-term research by the National Institutes of Health (NIH) on the Akimel O'odham, or Pima. Launched in 1965, the NIH studies, which continue in some form to this day, were originally conceived to compare rheumatoid arthritis prevalence in two communities: the Niitsítapi (Blackfeet Confederacy) of Montana and the Akimel O'odham of south-central Arizona. After discovering that nearly one in three Pima had significantly elevated blood glucose levels, the NIH shifted to investigate diabetes instead. G. Donald Whedon, director of the National Institute of Arthritis, Metabolism, and Digestive Diseases, reported in 1965 that the Pima provided "an unparalleled opportunity" to study "the influence of heredity and environment" on the natural history of diabetes. Similar studies on the general US population, he continued, would be impossible due to the overwhelming scale and "extreme mobility" of the sample size. Left unmentioned in Whedon's report was the Pima's relative confinement to the Gila River Indian Community, their sovereign reservation. 13 The discovery of diabetes among the Pima surprised the NIH researchers, but other investigators had already laid the foundations of inquiry. The physician-anthropologist Aleš Hrdlička claimed in 1908 that while Indians' constitution was "superior to that of the whites living in larger communities," the Pima were also much heavier than their counterparts. He blamed their "sedentary" habits, increased life span, and their changed diet that included "everything obtainable that enters into the dietary of the white man." 14 By the postwar years, other researchers speculated that southwestern tribes residing in "recently-irrigated, low-altitude areas of high average year-round temperatures," like the Pima, had higher diabetes prevalence than Indians "in high-altitude plains and timberlands" because of their dietary customs and dependence upon agriculture. 15 Some added genetics to the original formulation of race and environment to explain Indian predisposition, suggesting that groups like the Pima were "a natural group for the critical study of diabetes" because of their "marked familial history . . . and inbreeding." 16 One popular theory, proposed in 1960 by physical anthropologist James V. 
Neel, was the "thrifty genotype" hypothesis. Neel argued that periods of famine or undernutrition had acted as a selective pressure within hunter-gatherer societies, where metabolic efficiency was an advantage. Now, an evolutionary advantage had become a disadvantage. 17 While Neel may not have intended his hypothesis to be applied to Native peoples, the NIH researchers, along with other scientists studying indigenous populations, initially accepted Neel's concept even as some at the time considered it incomplete. Indeed, Neel's theory has been amended several times and remains controversial and widely disputed to this day. 18 All of these theories relied, in part or in whole, on environmental and evolutionary mechanisms to explain diabetes prevalence. Moreover, early evidence to support these theories often came from research on Native communities. As the first group of NIH scientists dispatched to Arizona argued in 1967, the "broader and more intensive study of the evolution of diabetes in the Pima Indians may produce information applicable to the general problem of obesity diabetes." 19 Almost three decades later, the Pima studies had arguably become some of the most important research on diabetes in the world. Peter H. Bennett, former director of NIH field studies in Arizona, stated in 1999, "environmental risk factors" were likely responsible for diabetes prevalence among the Pima "because the genetic constitution of the population cannot change over such a short period of time." In his mind, the diabetes epidemic was "a clear example of genetic-environmental interaction" where certain groups, like the Pima, were expressly afflicted because their evolutionary history and changed environment collided. 20 While the Pima were uniquely affected by history, the NIH studies were important because they were, according to Bennett, "generalizable [and not] for the Pimas alone." 21 Likewise, William Knowler, another NIH scientist in Phoenix, said in a 1999 interview with the Arizona Republic, "a lot of the way diabetes is treated throughout the country and the world is based on things that we learned with the Pima Indians." 22 It is important to note that Native peoples have not been without agency in this reframing of diabetes. Scientific knowledge derived from and produced within indigenous spaces and bodies was later used by Natives to promote and protect their own health. In a 1980 memo, the US Indian Health Service urged tribal governments to create diabetes education and prevention programs based on the NIH's findings. Chuck Raymond, a noted Native artist and member of the Winnebago Tribe of Nebraska, created posters and brochures in the late 1970s touting traditional foods, exercise, and portion control for the Swanson Center for Nutrition, an Omaha-based clinic devoted to Indian health care. Some initiatives, like the Zuni (Pueblo) Diabetes Project in New Mexico, became models for best practices for communities across North America. 23 But an unintended consequence of scientific studies on Native communities was to reshape popular narratives about ill health and progress. Journalists and publicists for the NIH alike sometimes reinforced crude tropes of primitive Indians, labeling the Pima as "living laboratories" whose scientific utility came from their seeming helplessness before the onslaught of development. 
24 One 1996 NIH publication, The Pima Indians: Pathfinders for Health, compared the Pimas' willingness to participate at the forefront of scientific exploration to their ancestors' roles as US Army cavalry scouts during the Mexican-American War or as guides for white American emigrants during the California gold rush. The Pima were folded into one of America's foundational myths, the frontier experience, a story that mixed environmental and racial metaphors to explain, if inconsistently, both progress and decline. 25 The upshot of such representations, which were reproduced in other scientific studies and popular accounts of that research, was the icon of the diabetic Indian. Further variations of the same stereotype emerged as well: the diabetic black, the diabetic Mexican, and so on. By the late twentieth century, the face of diabetes in America had morphed from an affluent Jew into a person of color, probably poor, from an inner-city neighborhood or an isolated rural hamlet. 26 These new stereotypes were extensions of the idea of diabetes as a disease of civilization and have prompted further research into the riddle of escalating incidence and prevalence, particularly of type 2 diabetes. Despite the widespread appeal of genetic explanations, the role of a degraded environment has received increased attention in the past two decades, often to explain the disproportional effect on minority groups and the poor. There are three main lines of inquiry, all of which probe how our human-built, eco-technological world may have changed our bodies as well as our environments, creating what social epidemiologist Nancy Krieger has called "embodied inequalities." 27 First, renewed attention to dietary changes, like the widespread consumption of simple carbohydrates, has led some researchers to question the long-held assumption that all calories are metabolically equal. Not surprisingly, diets high in added sugar have been found to be more commonplace in poor and minority communities, where access to fresh, unprocessed food is often difficult or expensive. 28 Second, research into what scientists call "allostatic load"-the persistent accumulation of stress within the body-suggests that stressors can affect the autonomic nervous, adrenocortical, and endocrine systems. Taken alone, allostatic load may not cause diabetes directly. In combination with poor diet and inadequate health care, dangerous environmental conditions, from familial strife to financial insecurity to racial discrimination, can literally get under the skin, spur unnecessary insulin production, and produce diabetes. Allostatic load is perhaps another expression of what Rob Nixon calls "slow violence," a kind of "delayed destruction that is dispersed across time and space." 29 Finally, beginning in the 1990s, endocrinologists and public health scientists began linking exposure to chemicals broadly called endocrine disruptors, which mimic the functions of human hormones like insulin, to a host of health problems, including diabetes. While this research remains disputed, the leading professional association of hormone scientists, The Endocrine Society, has publicly criticized the US Food and Drug Administration for failing to warn or safeguard consumers. Meanwhile, environmental justice advocates, pointing to the evidence that communities of color and the poor suffer disproportionately from environmental hazards, have added exposure to diabetogenic and obesogenic chemicals to their litany of concern.
CONCLUSION: THE PARADOXES OF PROGRESS

All of these potential pathways, individually or in combination, may also be helping to entrench and reproduce diabetes within particular populations through epigenetic regulation of genetic codes. Put simply, the patterns of inequality that yielded disproportionate incidence and prevalence of diabetes may be persistent and growing. As one pair of endocrinologists argued in 2011, "the synergy of coexisting poverty, poor education, and pollution likely contributes to the pathogenesis of metabolic diseases" like diabetes, creating what they call a "paradox of progress." 31 Central to this paradox is an expanded sense of what counts as "the environment." These paradoxes are now attracting the attention of historians and scientists alike. Recent advances in the biological sciences, according to historian John L. Brooke and anthropologist Clark Spencer Larsen, have reopened "questions about how cultural nurture has been shaping biological nature throughout human history." Rather than suggesting a crude reemergence of simple evolutionary or environmental explanations, however, Brooke and Larsen note that such questions are "being debated on history's disciplinary turf." 32 The emphasis on basic biomedical research in discussions about chronic diseases like diabetes, while important, may obscure historical explanations for why certain populations have become more prone to so-called diseases of civilization during the Anthropocene. For historians, the stakes of engaging in these debates are personal as well. Diabetes disassembles bodies and wrecks lives. The Canadian writer Lawrence Hill poignantly describes watching his father, Daniel G. Hill III, an African American emigrant who became a noted Ontarian academic and politician, falling "apart limb by limb" as diabetic complications claimed his life. Hill himself lives with the disease and worries his son might be next. "He is in a long line of diabetics," Hill concludes, arcing "like an arrow through the male line of my family." 34 Hill's story reminds us that of all major diseases, diabetes is one of the few where the condition is also a label for the afflicted. Yet diabetic is more than an adjective; it is an embodied identity that can become, for some, an embodied inequality as well. By enlarging the frame of "the environment," historians can help to reimagine the stories we tell about chronic diseases and the people who live with them. 35 Telling new stories may also, in turn, reorient how we address diabetes and other chronic conditions as the inescapable products of ecologies of inequality. Matthew Klingle is associate professor of history and environmental studies at Bowdoin College. He is the author of Emerald City: An Environmental History of Seattle (Yale University Press, 2007), recipient of the Ray Allen Billington Prize from the Organization of American Historians. This article is part of a larger research project on the environmental and social history of diabetes and chronic disease that has been supported by an Andrew W. Mellon Foundation New Directions Fellowship. Many thanks to the organizers and participants in the "Health and Disease in World History" symposium at The Ohio State University's Center for Historical Research, particularly John Brooke and Chris Otter, as well as my colleagues in this Forum for their comments and suggestions, especially Linda Nash and an anonymous reviewer.
Diabetes is perhaps better understood as a spectrum of related diseases, with some physicians and scientists arguing that each variant is etiologically unique. With type 1 diabetes, the disease emerges when the beta cells of the pancreas are unable to produce the hormone insulin, which regulates how the body's cells metabolize or store glucose, their primary fuel. In type 2 diabetes, which is far more common, the body's cells no longer respond effectively to insulin. The beta cells go into overdrive, producing more insulin than normal, because the tissues where the body stores glucose-primarily the liver and fat-have grown resistant to insulin's signal. The consequence is a cascading process whereby the overproduction of insulin drives further weight gain, further insulin "resistance," and many horrifying complications as a result. The story of type 1 diabetes and the discovery of insulin is beyond the scope of this essay, but the definitive work on the subject remains Michael Bliss, The Discovery of Insulin.

Toxic Foodways: Agro-Food Systems, Emerging Foodborne Pathogens, and Evolutionary History

Abstract: This essay explores the environmental and evolutionary history of foodborne pathogens in the twentieth century. The development of large-scale industrialized food systems in the nineteenth century led to the reduction in various traditional forms of foodborne disease, such as ergotism, which thrived in the absence of effective climatic control, fungicides, and food storage technology. However, these same systems allowed other, emergent, foodborne pathogens to thrive and circulate. This essay examines two of these-Listeria monocytogenes and Escherichia coli O157:H7-and argues that these pathogens emerged in specific niches created by industrialized food systems. Specific developments in food processing, production, and consumption, including large-scale feedlots and rising antibiotic use, are responsible for some of the critical genetic developments in these pathogens' history; foodborne pathogens, I suggest, have both an environmental and an evolutionary history.

Ergotism, a disease caused by a parasitic fungus of rye, haunted medieval and early modern Europe. There were 132 epidemics of ergotism between 591 and 1789. 1 The fungus thrived in cool, wet climates and struck populations whose diet was largely limited to rye. It manifested itself in two forms: convulsive and gangrenous. It was a spectacular and disturbing condition, generating hallucinations and leaving victims maimed and disabled. These terrible, inexplicable bodily phenomena were, according to some scholars, responsible for numerous witchcraft panics in Europe and North America. 2 Today, however, ergotism is practically nonexistent. The fatal combination of damp climates, deep poverty, reliance on a single crop, and poor storage facilities, which forced peasants to devour every last grain of rye, has vanished. Foodborne pathogens have, however, arguably increased in significance over the past century. Ergotism no longer thrives, but many other foodborne pathogens are flourishing, including salmonella, listeriosis, and Escherichia coli O157:H7. In 2011, 48 million Americans were sickened by foodborne pathogens, and three thousand died. 3 These pathogens are sustained by historically novel food ecologies that have produced multiple milieux for the emergence, circulation, and dispersal of new foodborne pathogens. This is both an environmental and an evolutionary event.
The systems supplying food in medieval Europe were, to adopt Lewis Mumford's parlance, eotechnic: foodstuffs moved along rivers or were carried by animals while cereals were ground by water mills and windmills. 4 These systems were, simply put, profoundly delimited by the vagaries of weather, climate, and topography. Food usually moved over relatively short distances, and it was hoarded as insurance against famine. Temperature control and preservation techniques were limited to natural ice, subterranean caves, pickling, air drying, salting, or smoking. Hence ergotism was produced within a very particular agro-food system, in which defenses against fungi, putrefaction, and pests were minimal. Food was frequently consumed when stale, moldy, or rotting; "freshness" was not a concern for the medieval consumer. 5 Agriculture itself utilized decomposing organic matter for fertilizer while trades like leather making utilized urine and dung. Ergotism represented the dark side of what André Guillerme called the "fungal economy" of the medieval period. 6 This eotechnic food system was slowly dismantled from the eighteenth century, and a new food system (or set of food systems) replaced it. These new systems were increasingly driven by fossil fuels, which enabled food to be distributed more effectively and over longer distances. There was a slow, uneven shift from an "eotechnic" system to a "paleotechnic" one, or from an "organic" system to a "mineral" one. 7 This carried the history of food systems over the threshold of the Anthropocene. The ensuing high-energy food economy reduced reliance on local accumulations of rotting food, increased the mobility of foodstuffs, and facilitated their collection in centralized processing hubs such as mills, refineries, and abattoirs. This simultaneously liquid and concentrated agro-food system would, however, have remained "fungal" without the later nineteenth-century development of reliable mechanical refrigeration, insulation technologies, and humidity control. Microbial proliferation could thus be retarded. Technologies of atmospheric management permeated the food system, knitting together abattoirs, railways, storage depots, and domestic spaces, producing "cold chains." Pasteurization made bacterial survival and growth even more difficult. Canning allowed foods to be distributed over longer distances. Finally, states passed laws regulating the food industry and subjected the system to increasingly stringent inspection. This division into medieval/eotechnic/organic and modern/paleotechnic/mineral is a simplification, but it is, I think, a helpful one. The latter operated over longer distances, concentrated production in critical hubs, and controlled atmospheres to preclude the decay that characterized older food systems. Ergotism thus became very rare. The sale of putrid meat and tainted milk declined. Foodstuffs became more durable. The modern food system might appear to represent a demonstrable improvement over its predecessor. This is, however, too simple a conclusion. The history of foodborne pathogens, like the other histories in this Forum, demonstrates how environmental transformation has provided new niches for the recrudescence of older diseases or the emergence of novel ones. In English, "food poisoning" had appeared as a general phrase by the 1880s. 8 Contemporaries did not, obviously, claim that foodborne pathogens were an entirely new phenomenon, but they routinely observed that they seemed to be increasingly implicated in disease.
Dietary transition, as Matthew Klingle shows in his article, had a dark side. In 1899 Victor Vaughan, professor of hygiene at the University of Michigan, argued that "an actual increase in the number of outbreaks of food poisoning" was evident. 9 This rise continued into the twentieth century. How could a food system that so successfully destroyed the "poisons of the past" be equally successful at generating or mobilizing a new set of pathogens? The surge in foodborne disease was frequently linked to the changing landscape of food preparation and consumption. The rise of "restaurants, lunch clubs, canteens, cafeterias, snack and milk bars, schools, service camps, and training centres" meant that more people were consuming pre-prepared food. 10 Meat products were particularly dangerous. One 1962 estimate suggested that 73 percent of British food poisoning cases came from meat. 11 Communal feeding facilities encouraged the cooking, cooling, and reheating of such foods, which, along with faulty refrigerators, lack of thermometers, and careless food-handling practices, provided bacteria with ideal circumstances in which to thrive. In 1965 reheated stewed beef poisoned five hundred people in Westminster with Clostridium welchii (a pathogen, also known as Clostridium perfringens, particularly associated with mass catering). 12 To counter these risks, washbasins, continuous roller towels, rat proofing, serving tongs, soap dispensers, and rubber gloves permeated food preparation spaces. Germ theory reshaped the causal understanding of food poisoning. In 1888 August Gaertner isolated Bacillus enteritidis following a meat poisoning outbreak at Frankenhausen, Germany. 13 Between 1909 and 1923, many similar bacteria were clustered together under the genus Salmonella. 14 The incidence of salmonella grew across the twentieth century, as did the number of identified serotypes, of which there are currently over 2,500. Salmonella outbreaks followed a particular pattern: low attack rates (i.e., most people who ingest the pathogen do not develop illness) but "huge numbers of dispersed victims." 15 Salmonella scares turned the focus away from immediate conditions of food preparation and toward conditions of production. The microbe was easily spread between previously dispersed animals congregated in congested abattoir spaces: "the major cause of the problem of salmonellae in meat products is the spread of salmonellosis during the hours and days immediately prior to slaughter." 16 Pathogens easily passed from animal to animal in cramped, feculent lairages. Kitchen and restaurant negligence could no longer be solely blamed for infection. This situation was compounded by the twentieth-century rise of the broiler chicken industry and the industrialization of egg production. 17 An increasing number of salmonella outbreaks were associated with chicken or turkey. Some public health officials postulated that food systems exercised selective force, effectively shaping the evolution of pathogens. In 1931 the bacteriologist Edwin Jordan commented that the salmonella group "is a very large one and is apparently undergoing active evolutionary change at the present time." 18 Food systems, it seemed, had evolutionary consequences.
More recently, Michael Osterholm, director of the Center for Infectious Disease Research and Policy at the University of Minnesota, noted that "the very nature of the ever-growing and complex food supply chain, and the desire of consumers to have many different kinds of foods available at a moment's notice, has allowed for a whole new spectrum of pathogens to arrive on the scene." 19 Some of these pathogens are particularly deadly. Listeriosis was first recognized in 1924, although the bacterium causing the infection-Listeria monocytogenes-only acquired its current name in 1940 and gained recognition as a foodborne pathogen as late as the 1980s. 20 Its symptoms are generally meningitic or encephalitic: headaches, vomiting, fever, malaise, and then signs of a central nervous system infection. 21 Listeriosis has a very high fatality rate (9 to 30 percent), with risks highest for the young, the old, pregnant women, and the immunocompromised. 22 The number of cases of foodborne listeriosis has grown noticeably since the 1960s. The twentieth-century trend toward highly processed foods with long shelf lives encouraged the spread of listeriosis. 23 L. monocytogenes is a psychrophilic microorganism that can live and even grow at temperatures as low as 4 C, hence its association with cold meat, coleslaw, egg salad, ice cream, and cheese. Cold storage can act "essentially as a period of selective enrichment for this species." 24 Refrigeration, rather perversely, aided the organism's development. Broken thermostats, irregular defrost cycles, or close juxtaposition of food and lights made food milieux particularly hazardous. 25 The abattoir environment, again, helped the pathogen's development. Fats and proteins on abattoir equipment might provide a biofilm conducive to its growth, making it more prevalent after slaughter than before. 26 L. monocytogenes has also spread through minimally processed refrigerated fresh foods (bagged salads, broccoli, and lettuce). 27 Sanitary vigilance, hazard analysis, and chill chain management extended across the food chain, from slaughterhouses and cheese plants to refrigerators and delicatessen slicing machines. E. coli itself was first described by (and named after) Theodor Escherich in 1885. It diverged from Salmonella typhimurium well over 100 million years ago. 28 Its ecological niche is the intestinal systems of mammals, particularly cattle. But in 1982, a novel strain of the bacterium was implicated in two outbreaks of food poisoning in Michigan and Oregon traced to fast-food restaurants. This was E. coli O157:H7. There are, additionally, several cases of archived E. coli O157:H7 that predate 1982 by a decade or so. As Mari Webel notes in her contribution to this Forum, cataloguing, classifying, and mapping disease has become essential to its comprehension. 29 There are also other emergent "pathovars" of E. coli. 30 The defining clinical features of E. coli O157:H7 are bloody diarrhea and, in serious cases, hemolytic uremic syndrome (HUS) and death. 31 It has since spread through various infected foods including innocuous foods like apple cider and radishes. 32 It causes around 73,000 cases of disease annually in the United States. 33 It is impossible to know when E. coli O157:H7 first appeared on earth. But it is a classic "emerging pathogen." For some, this simply means a previously existing pathogen has proliferated and spread as a consequence of transformed and expanded food systems. 34 Others are less circumspect.
Two major texts on emergent infections explicitly call the microbe a "new pathogen." 35 Hugh Pennington, who has chaired two public inquiries into British E. coli outbreaks, concludes that the pathogen is "relatively new and . . . evolving quickly." 36 Either way, it is clearly increasing in incidence and geographic range. Here, environmental history meets evolutionary history. Bacteria do not reproduce sexually; their usual mode of reproduction is via division, which can involve recombination and mutation, and hence genetic change. 37 They also change via horizontal gene transfer, whereby mobile pieces of DNA insert themselves into bacterial genomes in order to reproduce. Microbiologists call such mobile genetic elements bacteriophages (viruses that infect bacteria), plasmids, and transposons. 38 Bacteria can also change through DNA deletion, thereby producing what are called "black holes." 39 The "relentless evolution" of the E. coli species is widely attested. 40 Indeed, few microorganisms have been so well studied. François Jacob, Elie Wollman, and Jacques Monod, for example, demonstrated through the study of E. coli in the 1950s how viruses insert themselves into bacteria. Microbiologists have constructed a detailed evolutionary sequence for the emergence of E. coli O157:H7 from its ancestral form, E. coli O55:H7. They have traced the various ways in which the pathogen acquired its deadly characteristics. First, E. coli acquired two shiga toxins through horizontal gene transfer. According to some studies, one was acquired before the divergence and one after. 41 These helped the microorganism protect itself from protozoa grazing in ambient intestinal space. 42 It also acquired the ability to cause hemolysis (the destruction of red blood cells) from a plasmid. Another emergent acquisition was the capacity to produce attachment and effacement lesions, which enable the pathogen to stick to the surface of the small intestine and generate fistulae through which shiga toxins enter the bloodstream, producing a cascade of symptoms that can culminate in death. This comes from a "pathogenicity island" acquired from another plasmid. 43 E. coli O157:H7 has gained the ability to scavenge large amounts of iron, something necessary for its pathogenicity. 44 The microbe has also lost a considerable amount of genetic material, including DNA that inhibited virulence. 45 In short, E. coli O157:H7 did not emerge fully formed but developed instead as a result of a series of evolutionary events. This multiplicity makes dating the emergence of the pathogen extremely difficult, if not impossible. Microbiologists have used molecular clocks, which rest on the theory that proteins and genes accumulate changes at a roughly constant rate over time, allowing divergence dates to be calculated. The divergence between E. coli O55:H7 and E. coli O157:H7 has been dated to approximately four hundred years ago, with each strain undergoing significant transformation thereafter. 46 There is, then, general consensus that E. coli O157:H7 is a "recent, derived state . . . rather than an ancestral condition of primitive E. coli." 47 The next task is to situate this genetic history within the history of food systems and food habits. The hamburger itself is a relatively new food product, sometimes containing the meat of hundreds of animals, whose ground and particulate nature allows pathogens originally caking the surface to be distributed throughout the meat. 48 Food preparation techniques (drunken or cavalier grilling or failing to clean knives, for example) exacerbate the risk.
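The molecular-clock reasoning mentioned above has a simple arithmetic form: an observed genetic distance between two strains, divided by twice an assumed, roughly constant substitution rate, yields an estimate of the time since they diverged. The figures below are purely hypothetical and are included only to illustrate the shape of the calculation, not to reproduce the values used in the E. coli studies cited here.

```latex
% Molecular-clock estimate of divergence time (hypothetical values for illustration).
% d  = pairwise genetic distance between two strains (substitutions per site)
% mu = substitution rate per site per year, assumed roughly constant
% The factor of 2 reflects that both lineages accumulate changes after the split.
\[
T \approx \frac{d}{2\mu}, \qquad \text{e.g.}\quad
T \approx \frac{8 \times 10^{-7}}{2 \times (1 \times 10^{-9})} = 400 \text{ years.}
\]
```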
Like sausages in the late nineteenth century, hamburgers became objects of significant unease in the 1980s and 1990s. 49 A more significant environmental factor is the transformed landscape of meat production associated with the "livestock revolution." 50 Bigger but more genetically homogeneous cattle populations, subjected to intense biological strain, have become increasingly agglomerated in massive feedlots and industrialized slaughtering complexes. 51 Fossil fuel-driven food chains then disperse meat over great distances: E. coli O157:H7 outbreaks soon occurred in Argentina, South Africa, and Japan. 52 Like the SARS coronavirus studied by Andrew Price-Smith, E. coli O157:H7 is a pathogen of the age of intercontinental air travel. Abattoir facilities, particularly pre-slaughter lairages, provide a perfect milieu for contamination. Cattle hides easily become splattered with E. coli O157:H7, especially in the presence of "super shedders," whose intestines disgorge remarkable volumes of the pathogen. 53 Sanitary vigilance is essential: water trough sediment, for example, can sustain large colonies of E. coli O157:H7. 54 More actively, basic transformations within the food system may have exercised selective pressure on the pathogen. Some studies, for example, have suggested that grain feeding increases E. coli's acid resistance, allowing it to survive in the human stomach. 55 These studies remain controversial, however, and simply asserting that grain feeding "caused" pathogenic forms of E. coli is almost certainly misleading. 56 More persuasive is the argument that the addition of antibiotics to animal feeds from the 1950s facilitated the acquisition of a shiga toxin. 57 Mutations giving the bacterium the ability to expel antibiotics suddenly had an evolutionary edge. 58 The large-scale administration of antibiotics also coincides with the first reports of HUS in humans. E. coli O157:H7, then, emerged through the acquisition and loss of many bits of DNA. There was no singular moment of appearance. It is fair to conclude that the industrialization of livestock very probably provided a milieu that selected for some, but not all, of these virulence factors, and it certainly allowed the pathogen to circulate throughout the human food chain. The pathogen has a complex history, some of which may be inaccessible to both the historian and the microbiologist. In 1931 when Edwin Jordan mused on the evolution of salmonella bacteria, he also reflected on the demise of ergotism: "modern improvements in the facilities for transporting food from regions of abundant harvest into regions where crops have failed, and the use of special methods for separating the diseased grain from the wholesome have greatly reduced the prevalence of ergotism." 59 He did not, however, connect the two phenomena. He failed to note that "modern improvements" in the food system were in a sense responsible both for arresting ergotism and for accelerating other forms of pathogenic evolution. High-energy transportation systems and industrialized slaughter, along with transformations in animal feeding, were producing new agro-food ecologies whose manifold niches allowed new microorganisms to circulate rapidly, gain virulence, and cross species barriers. 60 Meanwhile, mass catering, food processing, and food preparation practices also amplified the risks of novel foodborne pathogens, while an increasingly aging population was becoming immunosenescent and hence particularly vulnerable to these infections.
The result was the arrival of novel foodborne diseases of which salmonellosis, listeriosis, and E. coli O157:H7 are but three examples. This is an evolutionary process without end; a 2011 German outbreak of E. coli O104:H4 killed fifty-three people. 61 We will never know the precise moment when E. coli acquired the shiga toxins or attachment-effacement capacities that turned it into a killer germ. What, then, can we conclude about the emergence of E. coli O157:H7 and other foodborne pathogens? There are two possible conclusions. First, that E. coli O157:H7 evolved among bovine or other animal communities at some unspecified time in the past. Twentieth-century agro-food systems, typified by the mobilization and agglomeration of bovine populations, then allowed the pathogen to enter human communities. A second possibility is that these spatial trends, coupled with novel feeding regimes and the administration of antibiotics, actually created specific niches within which some, if not all, of the various genetic capacities of the pathogen were acquired. Both possibilities, however, demonstrate that today's agro-food systems have brought pathology along with plenty. These particular pathogen-disease ecologies are one of the most visceral ways in which consumer anxiety is experienced in the Anthropocene. I would like to thank Lisa Brady and the contributors to this Forum for their helpful comments on this essay, as well as the anonymous reviewer. Many thanks also to the organizers of, and participants in, the two panels on "biological history."

The Plagues of Affluence: Human Ecology and the Case of the SARS Epidemic

Abstract: In this essay I argue that infectious disease is not simply a product of conditions of poverty because the mutability of pathogens allows them to thrive in multiple niches throughout the complex human ecology. Consequently, certain conditions of affluence may actually contribute to the proliferation of certain diseases like the virulent coronavirus that causes severe acute respiratory syndrome (SARS). Thus certain pathogens colonize ecological niches within affluent and technologically sophisticated societies. These so-called plagues of affluence represent a challenge to global health that is largely unaddressed.

The concept of perfect and positive health is a utopian creation of the human mind. It cannot become reality because man will never be so perfectly adapted to his environment that his life will not involve struggles, failures, and sufferings.... The less pleasant reality is that in an ever-changing world each period and each type of civilization will continue to have its burden of diseases created by the unavoidable failures of adaptation to the new environment.
-René Dubos, Man Adapting (Yale University Press, 1965)

Hippocrates argued that human health was subject to the condition of the "airs and waters" and that the poor condition of the elements would dispose human populations to illness and pestilence. 1 The ancient Greeks were not cognizant of pathogens and their vectors of transmission, discoveries that resulted from the germ theory postulates developed in the nineteenth century by microbiologists such as Robert Koch and Louis Pasteur. Nonetheless, Hippocrates was the first to recognize that changes in environmental conditions could result in negative health consequences for human populations. Thus thinking about interactions between environment and disease is a hoary phenomenon.
The human species is increasingly confronted with a world out of balance because human-induced changes to the biosphere frequently result in significant nonlinear, negative, and frequently unanticipated biological consequences. Changes in the macro environment may alter the activity taking place in the microbial penumbra that envelops humanity, triggering the emergence of novel pathogens or the recrudescence of existing pathogens, or both. Thus in the microbial realm we have witnessed the emergence of novel pathogens in recent years (e.g., the severe acute respiratory syndrome [SARS] coronavirus and, recently, New Delhi metallo-beta-lactamase-1 [NDM-1]). 2 Specifically, environmental change often facilitates the zoonotic transmission of pathogens from their animal reservoirs and permits their endogenization within the human ecology. The common wisdom holds that human disease is typically the function of inequities in society, and thus poverty is held to be a primary driver of pathogenic emergence. However, I argue that while poverty may certainly contribute to the proliferation of certain diseases, such as malaria and cholera, other pathogens thrive under conditions of affluence. The SARS epidemic of 2002-3 is a case in point. The SARS coronavirus exhibited properties of a "jet-set" disease as it spread among the affluent population centers of East Asia and then leapt to Canada. I argue that the mutability of pathogens allows them to thrive in multiple niches throughout the complex human ecology and that certain conditions of affluence may actually contribute to the proliferation of certain diseases like the virulent coronavirus that causes SARS. The environment is not static: it is both dynamic and complex and consists of interactions between spaces, structures, technologies, and the constantly shifting microbial penumbra that surrounds humanity and encompasses the planet. Microbes have successfully colonized all ecological niches of the planet, ranging from those that thrive under conditions of poverty, like Shigella and Vibrio cholerae, to extremophiles that thrive under the most exceptionally hostile physical conditions of extreme cold, heat, acidity, darkness, and radiation. For example, acidophiles thrive in highly acidic environments inimical to other species of life, and other species of lithotrophic bacteria live in extreme darkness and consume minerals to survive. 3 Others, such as psychrophiles, thrive in extremely cold environments or exist in lightless lakes beneath the Antarctic ice mass. 4 Moreover, microbes are dynamic-moving between peoples and spaces and constantly evolving to exploit new ecological niches arising from changes in technology and modernity. A better way of visualizing it, however, is that humanity and bacteria are coevolving. 5 Advances in human ingenuity that generate new technologies actually create novel ecological niches that microbes will then colonize, and even thrive within. Thus we are currently observing the coevolution of landscapes, microbes, and the human body in the Anthropocene. Selective evolutionary pressures, often the result of environmental change, will then force microbes to adapt to (and colonize) ecological niches in both advanced and impoverished nations, in terms of both the distribution of wealth and technological interfaces with society. 6 Therefore there will be diseases of poverty and diseases of affluence.
The assumption that the most virulent and transmissible of new pathogens necessarily emanate from the least developed countries (and regions within those countries) is empirically specious. 7 As Otter notes in this Forum, technological transformations associated with affluence can contribute to the emergence and spread of pathogens such as E. coli O157:H7 and Listeria monocytogenes. Similarly, Klingle argues that affluence may result in shifts in the material and dietary regime of a body politic, which in turn results in the spread of diabetes. Walker's article argues that the construction of technologically sophisticated spaces can result in toxicity (through asbestos) that undermines the health of the population. Thus affluence may drive the emergence of noncommunicable diseases as well. The historical record illustrates the manner in which environmental and technological change offered novel ecological niches that microbes readily exploited. The advent of trade along the Silk Road from Europe to East Asia facilitated the emergence of the Black Death (Yersinia pestis) from its animal reservoirs in central Asia. The trade caravans created host-vector-pathogen relations that favored the proliferation of the bacteria in Europe and East Asia, and the concentration of populations within cities of Europe favored the rapid spread of the disease there. 8 In the early 1800s, the European projection of military power into South Asia, and the consequent repatriation of soldiers, permitted the expansion of cholera into the European theater, where it spread rapidly along riparian trade routes to engulf the continent during the 1830s. Perhaps the most intense example of microbial evolution associated with ecological change occurred during the First World War, as the H1N1 influenza virus mutated and became progressively more virulent in three successive waves that encircled the planet, killing over 50 million people and sickening hundreds of millions. Arguably, the very conditions of the war, namely the extreme density of human populations in cramped trenches, railroad cars, and naval ships, allowed the virus to jump from one host to another with ever-increasing rapidity. The increased rate of viral transmission is associated with the evolution of genetic traits of increasing lethality. 9 Thus the ecological conditions of the war itself facilitated this lethal manifestation of disease. 10 In recent years we have seen the emergence of pathogens that have colonized the novel ecological niches created by certain emerging technologies-for example, Legionnaires' disease, which emerged in Philadelphia during the summer of 1976. In this particular case, an outbreak of disease occurred during a meeting of the American Legion at a hotel in Philadelphia where 189 individuals were sickened and 29 perished. A subsequent investigation determined that a novel strain of bacteria, Legionella pneumophila, had colonized the air-conditioning cooling towers of the hotel in question and then used the AC system to colonize human hosts. 11 Thus the advent of Legionella represents another example of microorganisms colonizing novel technological systems. Consequently, the quest to create sanitary environments has, paradoxically, led to the emergence of dangerous pathogens that thrive in these novel and presumably sanitary ecological niches. The spread of BSE during the 1990s is another example of technological and economic shifts that resulted in the emergence of prions, an entirely new class of pathogen.
Prions are proteins that misfold from a benign form into a rogue variant that then proceeds to erode considerable portions of the brain of the host organism. Thus in cattle the prion-induced destruction of the brain resulted in mad cow disease. Prions spread via technologies that recycled ruminant remains back into feed for cattle and sheep, which allowed for the spread of prions throughout livestock, primarily in Europe. Human ingestion of infected foodstuffs (meats) then led to the colonization of human hosts by prions and the proliferation of Creutzfeldt-Jakob syndrome, which saw the rapid destruction of brain tissues resulting in the death of the human host. Global trade in such feed throughout the 1990s then allowed for the emergence of prion disease (BSE) in various other countries around the world (including Canada and the United States). In this particular case, the extraordinary violation of ecological principles, namely feeding infected remains of diseased cattle to other cattle, and then feeding those subsequently infected cattle to humans, resulted in a chain of prion transmission that would not have occurred in the natural world. Of note, BSE did not establish a significant foothold within the poorer societies of the planet, primarily because those less affluent societies wisely chose to forgo the unusual food production system developed in Europe, one that clearly put profits above safety. The SARS epidemic of 2002-3 is another instance of microorganisms colonizing and flourishing within the relatively advanced technological environments of the developed world. The etiology of SARS is complex, and so it deserves some elaboration. SARS is generated by a mutant and lethal coronavirus (SARS-associated coronavirus [SARS-CoV]) that probably exists in the natural reservoir of certain Chinese horseshoe bat populations in Southeast Asia, which communicated the virus to palm civets (Paguma larvata). 12 The SARS coronavirus strains that infected civets possessed a greater capacity to subsequently infect individual humans, and thereafter the virus became endogenized within the human ecology. 13 Thus the SARS coronavirus is a novel zoonosis that recently jumped from its natural reservoirs into the human ecology, apparently in late 2002. 14 The processes of zoonotic transfer of pathogens from animal populations into the human ecology illustrate the deep and persistent connectivity between humanity and the natural world. Nash rightly critiques those champions of modernity who assume a distance between the natural world and the constructed world. 15 The SARS epidemic illustrates that disease is not simply a consequence of the inequitable distribution of wealth, either globally or within societies. The SARS coronavirus took advantage of changes in the relationship between its natural hosts and the human ecology of East Asia and then spread via posh hotels, jet airliners, and technologically advanced hospital environments. Its proliferation in Hong Kong, Singapore, Beijing, and Toronto illustrates the principle that wealthier nations are not removed from the destructive effects of the novel agents of communicable disease. Indeed, such nations exhibit vulnerability to a range of pathogens that flourish under conditions of affluence and technological sophistication. According to Yanzhong Huang, the first SARS case occurred in the city of Foshan, near Guangzhou, in November 2002.
16 The index case for international transmission was the physician Liu Jianlun, who unwittingly fostered the viral chain of transmission when he traveled to Hong Kong and stayed at the Metropole Hotel. Liu then infected other travelers who consequently spread the illness throughout the nations of the Pacific Rim. On March 12, 2003, the World Health Organization (WHO) issued a global outbreak alert and initiated international surveillance efforts to track the contagion. Specifically, the epidemic resulted in 8,096 cases of infection (morbidity) and 774 deaths (mortality) between November 1, 2002, and August 7, 2003, exhibiting a case fatality rate of approximately 9.6 percent among those infected. 17 By that point the pathogen had affected countries all along the Pacific Rim (see figure 2). Once the SARS coronavirus entered the human ecology, it flourished within the context of affluent hotels and sophisticated hospitals in East Asia before it jumped to Toronto, Canada. Surprisingly, SARS seemed to thrive and move with ease within technologically advanced hospital environments and thus represents one of the premier nosocomial (hospital-acquired) diseases. 18 Conversely, SARS failed to become firmly established in the frequently open-air and relatively low-tech hospital environments of Laos, Cambodia, Malaysia, Indonesia, and Thailand. It appears that the virus could not adapt as readily to such environments and thus did not establish a critical foothold in those less enclosed medical ecologies. The virus thus only posed a significant threat to human populations in the most advanced countries of the region, those with the most technologically sophisticated and self-contained hospital ecosystems. It was therefore truly one of the first plagues of affluence. SARS's nosocomial transmission in affluent societies is borne out by the mortality data. Of an estimated 774 deaths from SARS, the majority occurred in relatively advanced hospital environs, primarily in East Asia and Canada (figure 3). Moreover, the varying success of countries in controlling the epidemic demonstrates that affluent nations with significant medical capacity, such as Canada and Singapore, had a much more difficult time containing the spread of the pathogen than did polities of lower capacity like Thailand. The primary way that SARS appears to spread is by close person-to-person contact. SARS-CoV seems to be transmitted most readily by respiratory droplets (droplet spread) produced when an infected person coughs or sneezes. Droplet spread can happen when droplets from the cough or sneeze of an infected person are propelled a short distance (generally up to three feet) through the air and deposited on the mucous membranes of the mouth, nose, or eyes of persons who are nearby. The virus also can spread when a person touches a surface or object contaminated with infectious droplets and then touches his or her mouth, nose, or eye(s). In addition, it is possible that SARS-CoV might be spread more broadly through the air (airborne spread) or by other ways that are not now known. 19 Therefore, the hermetic air-conditioned hospitals of developed societies appear to have enabled transmission of the SARS coronavirus. Consequently, SARS would seem to pose a greater threat to nations of higher capacity, and thus the effects of pathogens on a given society may be contextually dependent on the parameters of the human ecology of the society involved.
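For reference, the case fatality arithmetic implied by the cumulative WHO figures cited above is straightforward: deaths divided by probable cases over the same reporting period.

```latex
% Case fatality rate from the cumulative totals cited in the text.
\[
\mathrm{CFR} = \frac{\text{deaths}}{\text{probable cases}} = \frac{774}{8096} \approx 0.096,
\quad \text{i.e., roughly 9.6 percent.}
\]
```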
These contrasting outcomes suggest that, in the face of a nosocomial pathogen such as SARS, social ingenuity may offset any lack of technical ingenuity and infrastructure, allowing nations of low capacity to adapt to, and contain, the pathogen. The case of SARS also illustrates the paradoxical and dualistic role of modern technology in the face of novel outbreaks of contagion. Communications technology exhibited positive effects in the containment of SARS: text messages and cell phones conveyed warnings from within China, facilitated the networked response of the WHO through the Global Outbreak Alert and Response Network, and assisted in the accurate diagnosis of the pathogen. However, technologically sophisticated hospital environments facilitated nosocomial transmission, and the rapid spread of the virus was accomplished through jet airplane technologies. In the early years of the twenty-first century, an entire range of microbes has mutated to thrive in the advanced ecologies of wealthier nations. The hospitals of the Western world are now epicenters of transmission for the most formidable of drug-resistant microbes. This is primarily the product of a human ecology that is effectively tainted by the chronic overuse of antibiotics. One of these dangerous bacteria is vancomycin-resistant Enterococcus (VRE), which is now entirely resistant to vancomycin, one of our drugs of last resort (see figure 1, Supplementary Material). VRE proliferates within hospital environments and exhibits a very high mortality rate. However, one of the worrisome propensities of VRE is that it appears to be able to pass the genes responsible for its drug resistance to other types of bacteria. Because of this, resistance to vancomycin has now been observed in other bacteria such as vancomycin-resistant Staphylococcus aureus (VRSA). 20 Methicillin-resistant Staphylococcus aureus (MRSA) is another bacterium that has become resistant to many antibiotics (see figure 2, Supplementary Material). MRSA is rather common in hospital environments because health care workers often inadvertently transmit it. Fortunately, rates of MRSA infection have declined in the United States since 2010, largely due to increased medical attention to the issue. 21 Both of these resistant bacteria have emerged as the product of the overmedicated societies of the West, where the ubiquitous overuse of antibiotics (in both humans and livestock) has generated this new class of resistant bacteria. Most worrisome is the emergence of NDM-1, a product of the massive overuse of antibiotics and of medical tourism. The year 2010 saw the advent of this genetic sequence, which is easily transferred among various types of bacteria. Whatever bacterium hosts the sequence then acquires resistance to nearly all existing antibiotics. This gene sequence, which emerged out of India and spread across the globe, has been transferred across various types of bacteria and into both animal and human hosts. 22 As NDM-1 continues to proliferate through the bacterial world, many previously treatable diseases may increasingly prove to be incurable, resulting in rising morbidity, mortality, and fear. The spread of NDM-1 throughout the human ecology presents an extraordinary challenge to global public health. Natural systems exhibit profound complexity in which small changes that are gradually introduced over time may induce temporally distal nonlinearities. 23 Thus human attempts to tame nature typically result in unforeseen and often negative outcomes.
The chains of connectivity between disparate elements within a system may be exceedingly complex, often involving feedback loops and contingencies. Outcomes witnessed in one domain may consequently radiate outward to affect other domains, creating feedback loops that ultimately condition the evolutionary trajectory of the system. 24 Complex systems often reveal emergent properties, in that the system exhibits characteristics that are both greater than and qualitatively different from its constituent parts and might, therefore, be quite unexpected. Emerging diseases then are frequently the result of emergent properties where antecedent variables (e.g., population density, speed of transport, or ecological change) combine in unusual and unforeseen ways to facilitate the emergence of a given pathogen that then becomes endogenized within the human ecology. Ecological variables often combine to produce benign emergent properties such as ecosystem services. However, such emergent properties may also result in negative outcomes such as the emergence of an entirely novel class of pathogens in the form of infectious and lethal proteins, such as the prions that generate BSE. As the sociologist Emile Durkheim commented, "Whenever certain elements combine and thereby produce, by the fact of their new combination, new phenomena, it is plain that these new phenomena reside not in the original elements but in the totality formed by their union [or interaction]." 25 Thus complex systems may not exhibit properties that are attributable to their discrete components, but rather to the macro-level interaction of those components, and thus (under conditions of strong emergence) the whole is both greater than, and different from, the sum of its parts. 26 However, emergence is not limited to the rise and spread of pathogens; it also may affect their virulence. The traditional, and reductionist, view was that virulence was a function either of the pathogen or of the host. 27 However, we now know that complex interactions between multiple variables can affect virulence through processes of emergence. For example, the Great Influenza pandemic of 1917-19 exhibited marked increases in pathogenic virulence, which would seem to have resulted from properties of emergence. As the three major viral waves cascaded around the planet during this time period, each wave coincided with a marked increase in the virulence of the virus. 28 Recent research in the domain of virology suggests that as the viral speed of transmission accelerates, the virulence of that same pathogen increases. This is because, under diminished speed of viral transmission, the pathogen must keep its human host alive long enough to jump to a new host; thus traits of lethality are selected out. However, under scenarios of rapid viral transmission, such traits of virulence remain intact, and the pathogen in question may consequently evolve to become increasingly lethal. 29 One might then also consider the advent of the plagues of affluence as the result of emergent properties, where changing ecosystems and shifting technological landscapes combine to allow for the emergence of previously unknown pathogens such as SARS, VRE, BSE, MRSA, and NDM-1. The SARS epidemic clearly exhibited emergent properties in that it featured a novel zoonotic pathogen that leaped from its natural animal reservoirs into the high-density populations of southeastern China, whereupon it became endogenized in the human ecology of East Asia. 
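One conventional way to formalize the transmission-virulence trade-off described above, offered here only as a schematic sketch and not as the specific model used in the virological research the essay cites, expresses a pathogen's evolutionary success as a balance between transmission gains and the costs of killing or incapacitating the host:

```latex
% Schematic transmission-virulence trade-off (standard evolutionary-epidemiology form).
% beta(alpha) = transmission rate, which tends to rise with virulence alpha
% gamma       = host recovery rate;  mu = background host mortality
\[
R_0(\alpha) = \frac{\beta(\alpha)}{\gamma + \alpha + \mu}
\]
% Selection favors the virulence that maximizes R_0. Where crowded conditions
% (trenches, troop ships, packed railcars) let even severely ill, immobilized
% hosts keep transmitting, high virulence carries less of a transmission cost,
% and more lethal strains can be favored.
```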
The SARS coronavirus was then distributed via airplane travel throughout the region and thereafter to Canada. SARS was also transmitted in a nosocomial fashion, such that it flourished in the contained medical infrastructures of the developed world. VRE is undoubtedly emergent as well, in that it arose in the context of antibiotic saturation, combined with high population density and population mobility, all within the affluent medical environment of the Western world. Similarly, NDM-1 would also seem to exhibit properties of emergence, again stemming from antibiotic saturation, medical tourism, and rapid global migration. Paradoxically, then, in our rush to produce sanitary landscapes, we have created novel ecological niches that are exploited by a range of emergent pathogens-thus we now observe the advent of the plagues of affluence. These new landscapes are not just structures such as hospitals and laboratories; we have also altered the potentialities of the microbial penumbra that surrounds humanity by creating or altering microenvironments that force pathogens to evolve faster and faster. For example, the use of antimicrobial cleaning agents in households may wipe out 99.9 percent of an existing bacterial population on a given surface. The small population of bacteria that survive will exhibit some small genetic variance that distinguishes them from those that perished, and the survivors will simply flourish over time to colonize the previously sanitized environment. In this manner the use of antimicrobials and antibiotic medications creates evolutionary pressures that push microbes to mutate at a faster rate, often developing resistance to said antimicrobials. 30 The pursuit of sanitary landscapes, replete with contained hospital environments and antimicrobial saturation of our societies, locks us into a perpetual race against the microbes-one that we cannot win. 31 According to the WHO, antimicrobial resistance has now reached alarming levels across a broad spectrum of pathogens on a global scale. 32 Keiji Fukuda of the WHO commented that antimicrobial resistance is "a problem so serious that it threatens the achievements of modern medicine. A post-antibiotic era - in which common infections and minor injuries can kill - far from being an apocalyptic fantasy, is instead a very real possibility for the 21st century." 33 The anthropogenic disturbance of complex ecological systems on a planetary scale, and the attendant reduction in the resilience of ecosystems, promises to exacerbate processes of pathogenic emergence into the human ecology and the intensification of microbial resistance. Human ingenuity, manifest in public health and medicine, has controlled the spread of many illnesses, but it has also resulted in the emergence of diseases that are a product of the new sanitary landscapes. These diseases are actually products of human ingenuity, thriving on the very drugs and expensive new technologies that humans have created. In a very real sense they are the plagues of affluence, the biotic externalities resulting from the sanitary landscapes of the twenty-first century.

Nobody knows the amount of toxins that WTC workers, first responders, and Lower Manhattan residents were exposed to from the initial dust plume, but with an alarming spike in cancer deaths among people involved in the 9/11 emergency, the exposure amount was surely high. Among those toxins released was asbestos, which had coated parts of the steel skeleton of the iconic structures.
An alphabet soup of dangerous substances mined and extracted from around the world now resides in the bodies of New Yorkers, making them artifacts of the global Anthropocene Epoch. The toxic dust plume from the WTC served as a warning sign of the dangers that occur when the infrastructure of the modern built environment, constructed as it is from asbestos and other hazardous materials extracted globally, comes tumbling down during acts of terrorism and war.

On the morning of September 11, 2001, Michael Valentin and his wife awoke to the television showing two airliners smashing into the Twin Towers of the World Trade Center (WTC). Smoke billowed and fire exploded from the gaping holes left in the side of the towering buildings. Later that day, the American public learned that the first airplane was American Airlines Flight 11, a Boeing 767-223ER, originally bound for Los Angeles. Hijacked and under the control of terrorists, Flight 11 had punched the North Tower between the ninety-third and ninety-ninth floors. United Flight 175, the second airliner, had severed the South Tower between the seventy-seventh and eighty-fifth floors. Fifty-six minutes after impact, the South Tower buckled and collapsed; shortly thereafter, the North Tower followed. Within seconds, a swirling plume that engulfed Lower Manhattan in a pulsing storm of fire, dust, and debris was all that remained of the once-iconic buildings (see figure 1). Gathering his wits, Valentin, a member of the Manhattan South Vice Unit, contacted colleagues and together they drove to Highway 3, a police unit located near Grand Central Parkway. Eventually, they made their way to the 7th Precinct, in Manhattan's Lower East Side, along with a convoy of police officers and firefighters. As Valentin and his colleagues approached the swirling dust storm, a gray figure emerged from the plume, like a ship silently ghosting through the fog. It was a lone woman, staggering as she walked toward him. Valentin remembered that her "face was covered with powder except for circles around her eyes." He recalled, "You could see the look of horror in her face." When he asked if she needed help, she mumbled that she was going to walk over the Williamsburg Bridge and then home. 1 She then disappeared into the dust cloud as quickly and silently as she had appeared. One common memory that many first responders share about those initial hours was the "powder." At 5:20 that afternoon, as Valentin and his colleagues started working at Ground Zero, the forty-seven-story 7 WTC building, which had survived the original attacks, collapsed. "I could not believe what I was seeing with my own eyes," he testified to the US House of Representatives Judiciary Committee. "During the next few months, working in and around the World Trade Center site, I saw things that were unimaginable-the sights, sounds and smells of those months were burned into my memory for the rest of my life." Valentin served as part of a team that worked in and around Ground Zero for months. They performed security duties, served on bucket brigades, conducted door-to-door searches, retrieved human body parts from surrounding buildings, and moved equipment and supplies. It was, he explained, "like one long nightmarish blur from beginning to end." 2 During his work at Ground Zero, Valentin had one physical examination that included a chest x-ray. As he reported to the Judiciary Committee, "my lungs were clear, and I was healthy, as I had always been up until 9/11."
But Valentin began having serious health problems two years after the event. He suffered from intractable lung and sinus infections, as well as a painful searing sensation inside his ears. In 2004 Valentin began experiencing night sweats, and in September, on his fortieth birthday and three years after the Twin Towers collapsed, doctors discovered a large tumorous mass in his chest between his aorta and trachea. They told him it was likely lymphoma. The mass turned out to be benign, but while physicians examined his lymph nodes they discovered disturbing black particulates. Shortly thereafter, doctors diagnosed him with gallbladder problems and, when they surgically removed his gallbladder, they discovered another lymphatic tumor. While treating the tumor, doctors noticed that his lung functions had diminished, and he was diagnosed with a dangerous thickening of the pleural lining of his lungs. All told, doctors have since diagnosed this once healthy cop with gastroesophageal reflux disease, esophagitis, sinusitis, and a host of other serious conditions. Similarly, Valentin's partner, Ernest Vallebuona, has since been diagnosed with B-cell lymphoma. As Valentin explained to The Guardian, "We all have terminal illnesses, we are all going to die." 3 Of Valentin's numerous health concerns, the thickening of the pleural lining of his lungs is perhaps the most alarming because it is an early symptom of mesothelioma, an extremely rare lung cancer indisputably linked to asbestos exposure, according to health experts. Mesothelium is the thin membrane that lines the body cavity, and mesothelial tissue also covers many critical organs including the lungs. As malignant mesothelioma occurs in this tissue, it can be excruciatingly painful, particularly as fluid accumulates in the pleural space, a condition known as pleural effusion. Clinically speaking, the relationship between mesothelioma and asbestos is so solid that many experts consider it a "sentinel" cancer. If you have mesothelioma, in other words, you were exposed to asbestos at some point in your life, probably decades earlier. Much like the other contributions to this Environmental History Forum, this piece focuses on the emergent ecologies produced by the Anthropocene Epoch's modern built environments and how they conspire to produce deadly disease. Just as SARS is a jet-set disease, diabetes is a disease of modern civilization, and many foodborne bacteria emerge from the industrialization of food, WTC dust is an aerosolized version of our built world. Although mostly inorganic, it too, because of the alchemic qualities of its explosive creation, has the emergent qualities of the other diseases discussed in this Forum. What is different is that the environment from which it emerged, and which bestowed on it its emergent qualities, is an environment caused by terrorism. To this day, debate continues among health specialists regarding what actually caused Valentin's chronic ailments and tumors. He smoked Parliament cigarettes, for example, which complicates quick conclusions about his lung condition. But the fact that he had a chest x-ray immediately after 9/11 that showed his lungs to be healthy indicates that something transpired between the months immediately after 9/11 and his first diagnosis in 2004.
Health experts writing in the New England Journal of Medicine concede that the asbestos fibers in the WTC dust plume "might slightly increase the risk of mesothelioma," but they caution that any increase in the number of first responders suffering from the rare cancer "would not become evident for decades." 4 Normally, these health experts would be correct, but it is also possible that something alchemic happened in the fires of the 9/11 terrorist attacks that fundamentally altered the toxins and made them physically different. Indeed, some varieties of asbestos, because of their chemical composition, cause lung disease much earlier than others, and it is likely that Valentin was exposed to some of these dangerous asbestiform minerals. Valentin's pleural condition, after all, became evident in a matter of years, not decades. Something happened in the swirling fire-wrought dust storm that intensified the toxins and accelerated Valentin's conditions, a metamorphic moment that transformed the asbestos and its accompanying ingredients into something entirely new and utterly deadly. As scientists explain, because of its exotic ingredients and the unique circumstances of its creation, WTC dust is "new in the world." 5

This alchemy of global terrorism and disease occurred when the Twin Towers exploded and aerosolized, an event that took scientists and New York first responders into uncharted territory. As one observer described, "Metals and glass from windows and computers and girders were turned into mist particles that bonded with larger pieces of concrete, creating billions of tiny hybrid fragments, each coated with a sheath created from the elements of destruction. Asbestos was pulverized into pieces so tiny that ordinary tests devised to track the fibers missed them." 6 As one scientist commented, the fires "produced a multitude of additional chemicals and physical changes in the materials that were released." These fires generated an "additional series of highly toxic and noxious substances." 7 The WTC dust plume was more than the sum of its parts: as it swirled in Lower Manhattan it took on its own hybrid toxicological qualities, ones that resist reductionist answers to what caused Valentin's cancers. Inorganic chemistry, as it produced transformations under the historical circumstances of global terrorism, became an agent in this historical narrative. With WTC dust, hybridity, as Bruno Latour would suggest, was not simply the result of the intertwining of the politics, science, technology, and nature that encompassed the 9/11 event; it was also a material hybridity, forged in the multicausal fires of this specific historical moment (see figure 2). 8

Subsequently, Lower Manhattan's ecologies of terrorism have become one of the "most intensely studied and sampled environments on the planet." And the lungs of first responders are among the most monitored in the world, with over eleven thousand people having received chest x-rays and other forms of physical examination. 9 Environmental Protection Agency (EPA) officials insisted that Lower Manhattan was safe to return to only days after the WTC came crashing down. "Given the scope of the tragedy from last week," EPA Administrator Christine Todd Whitman infamously announced one week after the disaster, "I am glad to reassure the people of New York and Washington, D.C., that their air is safe to breathe and their water is safe to drink." 10
But critics, such as Joel Shufro, executive director of the New York Committee for Occupational Safety and Health, believed that officials put the health of financial markets ahead of the health of residents. Regarding asbestos exposure specifically, Shufro explained, "Those exposures may have grave adverse public health consequences, but we will not know exactly what those consequences are for decades." 11

In part, the debate regarding the residual health risks of WTC dust stems from testing technologies and protocols. The EPA used antiquated testing protocols, while private researchers used the latest in electron microscopes and fiber-counting procedures. Cate Jenkins, a chemist in the EPA's Region 8, which includes the Superfund site at Libby, Montana, alerted EPA officials in New York's Region 2 that the agency itself had declared their polarized light microscopy equipment obsolete in 1994, and she asked if they wanted to borrow the transmission electron microscopy (TEM) equipment from Region 8. She was told on a conference call: "We don't want you fucking cowboys here. The best thing they could do is transfer you to Alaska." 12 With TEM equipment, researchers often counted nine fibers for every one counted by EPA Region 2 scientists. Making a chilling connection to Montana, Jenkins acknowledged that, in light of the more accurate testing, "The concentrations of asbestos in both settled dusts inside homes in Libby is comparable to the settled dusts inside the buildings in lower Manhattan." 13 In essence, the ecologies of terrorism had reduced Lower Manhattan to a Superfund site, and Jenkins lost her job for explaining as much.

Using TEM and other sophisticated equipment, however, scientists have identified the basic global anatomy of WTC dust (see Figure 1, supplementary material). The pinkish substance, which has the consistency of flour, contains an alphabet soup of chemical elements spanning the periodic table from arsenic to zinc, with such deadly celebrities as arsenic, cadmium, mercury, lead, thallium, and uranium headlining the list. The dust also contains a dazzling variety of carcinogenic pollutants, such as PCBs, PAHs, PCDDs, and PCDFs, as well as nearly ninety types of chlorinated hydrocarbons. 14 Important for our later discussion, one test of the dust in a residential apartment building near the WTC site detected low levels of a kind of tremolite asbestos called richterite in the toxic pink dust. 15 Richterite is fairly rare, but it can be found in the vermiculite mine at Libby, Montana.

The WTC dust that sickened Valentin was historically constructed. Just as the making of the WTC had historical origins, so did its dramatic unmaking. But just as WTC dust is an artifact of history, so too are Valentin's diseases, because they resulted from a complex web of forces. In different but conspiring ways, these forces caused the destruction of the Twin Towers, the creation of WTC dust, and the development of Valentin's health challenges. Given that among Valentin's most pressing health concerns is the potential development of mesothelioma, the happenings in a small northwestern Montana town directed his new life course as much as any other historical factor. On detecting richterite asbestos in the settled dust of a residential apartment building, researcher E. B. Ilgren speculated, "Some associate richterite with the vermiculite found in Libby, Montana. This raises the possibility that it came from a W. R.
Grace Monokote spray used to insulate the WTC complex. Given the date of the construction of the complex and the use of Monokote during that time period, the findings of some richterite-containing vermiculite would not be unexpected." 16 Indeed, it would not be unexpected, but the history of W. R. Grace's Monokote, and the use of asbestos on the WTC in general, is a complicated story. It is intertwined with the histories of the use of asbestos in fire prevention in built environments, the growing medical awareness of the association between asbestos and pulmonary disease, the sacrificing of a small Montana town in the name of corporate profits, and stunningly inept state and federal regulatory instruments. It is also the story of two kinds of asbestos, chrysotile and amphibole, and the health concerns associated with these asbestiform minerals.

According to the US Geological Survey (USGS), the asbestos deposits near Libby are about 100 million years old. The deposits sit atop the Algonkian Belt, a Precambrian formation: the physical happenings in deep geologic time matter because they conspired to help determine the toxicity of Libby's asbestos in historical time. 17 The USGS has assigned the tremolite asbestos in Libby a host of names based on its chemical composition, including winchite and, as we have seen, richterite. The asbestos occurred naturally in the vermiculite deposits; its concentration ranged from 21 to 26 percent in raw vermiculite ore and 0.3 to 7 percent in the concentrated form. It is important to note that Libby's vermiculite contained unusually high levels of asbestos. Raw vermiculite ore samples from Enoree and Patterson, South Carolina, for example, sites of other W. R. Grace operations, contained 1 percent asbestos (roughly twenty-five times less than Libby's). Moreover, the asbestos occurring in Libby's vermiculite had a far higher percentage of fibers over 5 microns long than other deposits, making it particularly dangerous to human health because these long fibers lodge more easily in human tissue. Also important, Libby's asbestos has significant morphological differences from other deposits, often making it more toxic. 18 The important point is that naturally occurring causes, the morphological and toxicological qualities of Libby's amphibole asbestos born in its deep geologic past and manifested later, made it far more dangerous to human health than other kinds of asbestos.

Not knowing any of this geology, businessman Edward Alley first unearthed the vermiculite near Libby and named his new discovery Zonolite, which was hailed by the local press as having "a hundred and one uses." It was a miracle mineral. Alley incorporated Zonolite Corporation in 1927. With the help of the Great Northern Railroad, he began exporting his Zonolite product throughout the United States and beyond, to countries such as Scotland, England, and Japan. 19 In 1939 Chicago businessmen consolidated Zonolite Corporation and a handful of other vermiculite outfits into Universal Zonolite Insulation Company. Originally, Alley had shipped processed ore, but the new owners realized that shipping the smaller, more compact unprocessed ore was less expensive. Almost immediately, Zonolite opened some three hundred processing plants nationally and internationally, processing a product that principally served as "house fill" insulation.
But every time a train left Libby, and every time a new processing plant opened, new exposure pathways to Libby's killer asbestos were created across the United States and beyond. In 1963 the multinational W. R. Grace bought Zonolite, along with the vermiculite mine at Enoree, South Carolina. Grace was interested in the vermiculite, but also, at least originally, in the tremolite asbestos. By 1968, under W. R. Grace's management, Libby produced almost 70 percent of the vermiculite used in the Western world. 20 About this time, W. R. Grace shifted its interest from "house fill" insulation to spray-on cement and fireproofing used to protect the steel columns of multistory buildings. There was a spate of high-rise construction in the 1970s and 1980s, and for engineering purposes, erecting these buildings required heavy coats of spray-on insulation, which W. R. Grace produced and readily supplied.

With increased production under W. R. Grace, a cloud of vermiculite and asbestos dust emanating from the mine and its sorting facilities began to swallow Libby. Bob Dedrick, a Libby resident, remembered the dust at the old dry mill. "I remember people talking about it," he explained, "how dusty it was, so dusty that they just couldn't breathe. They couldn't stand it. And said they'd wear a handkerchief or something over it, and it would just plug up." 21 As a child growing up in Libby, Diane Keck played in the Zonolite dust piles near the ballpark. "It was real dusty," she said. "You'd jump on the piles and poof!" 22 But it was becoming clear that the dust was dangerous. After Zonolite was processed in California, bags of the ore by-product often would be returned to Libby, explained Bob Beagle. "For years," he recalled, "a plant in California used to send back the waste material after they expanded it, to us, and we'd haul it back up on the hill and bury it. And they'd come back with tags on it, 'Danger. Hazardous Material.'" 23 It was an ominous sign of things to come. It was killer dust that would eventually unite the histories of Libby and Lower Manhattan.

With the boom of skyscraper construction in the 1970s and 1980s, spray-on fireproofing became critical to engineering this new vertical built environment. Because of its critical role in fire prevention, one mineralogist could trumpet, "Asbestos may be termed indispensable to modern life." 24 It certainly became a ubiquitous part of modern life. Before spray-on asbestos insulation, large steel beams had to be pretreated with a cement coating, making them heavy, cumbersome to transport, expensive to erect, easy to damage, and difficult to build with. Cement often sloughed off during transport, making those parts of the steel beams vulnerable to buckling under heat. The Limpet process, as the practice of spraying on adhesive asbestos was called in Great Britain, was pioneered by the J. W. Roberts Company in 1932. British railway coach makers used the spray-on asbestos in their railcars to control condensation and noise, and to serve as thermal insulation. The process was first used in the United States in 1935, mostly for textured decorative finishes in nightclubs, restaurants, and hotels. Because the spray-on insulation proved an effective fireproofing, by the early 1950s Underwriters Laboratories had approved spray-on products from both National Gypsum Company and Asbestospray Company. In 1958 spray-on asbestos insulation was used in the construction of the sixty-story Chase Manhattan Bank building in New York City.
By 1970 over half of the multistory buildings erected in the United States had asbestos spray-on insulation guarding their towering steel skeletons. 25 In the face of the grim reality of deadly urban fires, asbestos was indeed a promising breakthrough. Usually, spray-on insulation contained 5 to 30 percent commercial-grade chrysotile asbestos, mineral wool, adhesives, and binders such as bentonite and synthetic resins. By 1968 an estimated 40,000 tons of spray-on asbestos insulation was being used in the United States. Engineers slated the World Trade Center, constructed between 1968 and 1973, to hold about 5,000 tons of spray-on insulation. In fact, spray-on insulation was critical to the architectural vision of the WTC complex. With the insulation, engineering plans required a much smaller footing because the bearing load was reduced; construction time was shorter because less concrete formwork was involved; the spray was more easily applied to irregular surface configurations and architectural shapes; and it saved money. Because steel loses its strength at approximately 1,100°F (590°C), spray-on insulation was critical for the fire rating of multistory buildings. It made sense for the construction of the 110-story WTC, where, on the best of occasions, evacuating the entire complex took several hours (see figure 3). 26

However, both building and destroying the WTC prompted debates regarding asbestos in built environments. In the late 1960s and early 1970s, the Twin Towers had become Ground Zero for another important event: the debate over the health risks associated with asbestos insulation. In 1964 Irving Selikoff of Mount Sinai Hospital and two colleagues published an article in the Journal of the American Medical Association linking lung cancer to working with asbestos insulation. The landmark study examined the health of some fifteen hundred members of the Asbestos Workers Union in the New York metropolitan area, as well as two local affiliates of the International Association of Heat and Frost Insulators and Asbestos Workers. The results determined, "For the period 1953-1957, there were 85 observed deaths compared with 56.6 expected deaths, and for the period 1958-1962, there were 88 observed deaths compared with only 54.4 expected deaths." More strikingly, of 632 insulation workers tracked, forty-five died of lung cancer, including three cases of mesothelioma, when only six such deaths were expected. In other words, people are always expected to die, but not in the numbers that these "laggers," "insulators," or "pipe coverers" were dying, and not from rare forms of lung cancer such as mesothelioma. 27 With this study, Selikoff and his colleagues focused attention on the dangers of asbestos use in insulation application, just at the time that it was becoming, as we have seen, "indispensable to modern life." Then, in another important study, published in the American Industrial Hygiene Association Journal, Selikoff and his colleagues focused their attention on the dangers of spraying insulation in particular. They investigated the two main roles in the spray-on process, the nozzle men (the sprayers) and the hopper men (the mixers), as well as the electricians, plumbers, painters, masons, and everybody else exposed to the large amounts of asbestos overspray that settled at the construction site. While builders erected the WTC complex, studies indicated that overspray from the insulation was releasing some five tons of chrysotile asbestos into the surrounding built environment. 28
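To make the force of Selikoff's numbers concrete, the observed-versus-expected arithmetic reported above can be restated as a ratio, often called a standardized mortality ratio (SMR). The sketch below is illustrative only: the figures are those quoted in the text, but the ratio itself is a calculation added here, not a value reported in the essay.

```python
# Rough restatement of the excess-mortality arithmetic quoted above from
# Selikoff's 1964 JAMA study of New York-area asbestos insulation workers.
# The observed and expected deaths are the figures given in the text; the
# observed/expected ratio (a standardized mortality ratio, SMR) is an
# illustrative calculation added here, not a number from the essay.

periods = {
    "1953-1957": {"observed": 85, "expected": 56.6},
    "1958-1962": {"observed": 88, "expected": 54.4},
}

for period, deaths in periods.items():
    smr = deaths["observed"] / deaths["expected"]
    print(f"{period}: {deaths['observed']} observed vs. "
          f"{deaths['expected']} expected deaths (SMR = {smr:.2f})")

# Lung-cancer deaths among the 632 insulation workers tracked:
observed_lung_cancer = 45  # including three mesotheliomas
expected_lung_cancer = 6
excess = observed_lung_cancer / expected_lung_cancer
print(f"Lung cancer: {observed_lung_cancer} observed vs. "
      f"{expected_lung_cancer} expected ({excess:.1f} times the expected toll)")
```

On these figures, the insulation workers were dying at roughly one and a half times the expected overall rate, and of lung cancer at more than seven times the expected rate, which is the disparity the passage above describes.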
The fact that overspray was contaminating the air around the Twin Towers, the site of the actual spraying, raised serious health concerns. In 1971 researchers including Selikoff had conducted postmortem examinations of twenty-eight New Yorkers and determined that twenty-four of them, or 85 percent, had chrysotile asbestos fibers lodged in their lungs. 29 Asbestos was already becoming a part of the bodies of New Yorkers at an alarming pace.

In the face of growing evidence that asbestos was killing insulation workers and contaminating the New York metropolitan area, the Port Authority of New York and New Jersey took the precautionary step of halting the use of asbestos-based spray insulation on the WTC. It represented an unusual step because the architectural and engineering plans had called for the use of asbestos spray in order to have a smaller footprint and use smaller columns. The halt came two years before the New York City Council passed a law banning spray-on asbestos insulation on February 25, 1972. Nonetheless, before the Port Authority took this precautionary step, insulators had sprayed insulation containing some 20 percent chrysotile asbestos on Tower 1 (the North Tower) up to the thirty-eighth floor. Above this point, insulators applied a supposedly asbestos-free alternative. The entirety of Tower 2 (the South Tower) was also sprayed with the "asbestos-free" alternative. Insulators also sprayed asbestos and vermiculite on some of the weight-bearing walls and areas where air erosion was a serious concern, such as in high-speed elevator shafts. In these areas, and a host of others, insulators sprayed a concoction with 80 percent asbestos. The steel columns of 7 WTC, the building Valentin had watched tumble to the ground while working at Ground Zero, were entirely coated with spray-on asbestos insulation. 30 Although engineers only used asbestos on parts of the WTC complex, the buildings still had thousands of tons of asbestos and vermiculite clinging to them. It is important to keep in mind that Tower 1 was struck between the ninety-third and ninety-ninth floors, and Tower 2 between the seventy-seventh and eighty-fifth floors, areas coated with an apparently less effective, asbestos-free alternative.

In the 1960s, Libby was still far from Ground Zero in the debates over the dangers of asbestos, but it was about to be permanently intertwined with the story of the toppling of the Twin Towers. As we have seen, New York City banned the spraying of asbestos in 1972. One year later, the EPA, as part of the National Emissions Standards for Hazardous Air Pollutants, followed suit and banned spray-applied asbestos fireproofing and insulation, largely in response to studies surrounding the construction of the Twin Towers. 31 As companies scrambled to design asbestos-free alternatives, W. R. Grace announced a dramatic "research breakthrough." In the face of the New York City and EPA bans on spray-on asbestos fireproofing, the company had designed a "completely asbestos-free" product, one whose "health and environmental aspects are overpowering." Actually, as the New York Times reported, the product was a familiar name in the spray-on insulation world, a product called Monokote. 32 In 1968 W. R. Grace had won the contract to fireproof the steel columns of the WTC. The product to be used, Monokote-3, contained a mixture of 12 percent commercial-grade chrysotile asbestos, 30 percent of Libby's low-grade vermiculite, and gypsum.
"In an effort to provide the proper fire protection and provide heat flow into the column it was felt that a dense vermiculite-gypsum plaster could best fulfill the needs" of engineers, claimed one W. R. Grace document, titled "Study of the Interior Fire Protection Requirements of the Exterior Columns of the World Trade Center Project." In this earlier version of the Monokote product, Libby's vermiculite provided the fireproofing, but because the company used grade-three vermiculite in the mix, it probably contained somewhere between 5 and 15 percent tremolite asbestos, as well as the industrial-grade chrysotile asbestos. But following the research of Selikoff and others, engineers stopped using asbestos-containing fireproofing on Tower 1 at the thirty-eighth floor. From that point upward on the columns of Tower 1, and all of the columns of Tower 2-some 244 steel beams surrounded each tower and more still surrounded elevator shafts and other locations-engineers used the new "asbestos-free" Monokote-4. However, despite W. R. Grace's assurances and reassurances, Monokote-4 did have discernable levels of tremolite asbestos, about 1 percent. The principal reason that asbestos was used in Monokote was that it eased the spraying process by keeping the finicky nozzles from clogging. This led to what some call the "Grace rule." The company managed to convince the EPA that it needed 1 percent asbestos to keep Monokote spraying, and that 1 percent asbestos did not represent a health threat to sprayers or other workers. Regardless, by 1977, Monokote was the leading fireproofing material in the United States. Workers had sprayed it on 60 to 80 percent of the 150,000 steel-columned structures built in the 1970s and 1980s. By the 1980s, Monokote, and Libby's tremolite asbestos, was everywhere. 33 Typically, W. R. Grace has denied that its fireproofing products were used on the WTC project, but Hyman Brown, the project engineer, rejects those denials. "So I can tell you right now that [Monokote-3] was used in the first building, most of the first building, and [Monokote -4] was used for the rest of the first building and the second building because we were told that [Monokote-4] did not have asbestos in it," he has explained in interviews with Andrea Peacock. Publications at the time claimed that the WTC required some 5,000 tons of spray-on fireproofing, which according to one estimate would be approximately 250 to 750 tons of dangerous tremolite asbestos. 34 Libby's vermiculite has its own history, one that follows its creation in geologic time, and its movement from a small Montana town, through the bodies of local residents, to the steel columns of the Twin Towers, and into the bodies of New Yorkers. This same history could be written for the scores of other dangerous elements, fibers, shards, and chemicals that fused and transformed in the 9/11 alchemies of terrorism that created the WTC dust. This is the emergent nature of the toxins discussed in this Forum. When the Twin Towers crumbled, the exotic materials of our built environment atomized. Engineers, driven by our modern desires and requirements, had designed this built world, but they had never done so in isolation from the economic, social, and cultural drivers of their historical circumstances. These engineers serve as historical actors in this narrative, but so too does the chemically morphing product of their labor. 
The WTC eventually became an agent of history itself, as it alchemized into something entirely new while it burned, tumbled, and crashed to the ground. This built environment was not separate from history but deeply embedded in it, as were the porous bodies of the New Yorkers who called this fabricated world their home. When that fabricated world turned to mist, Valentin and other first responders breathed this history into the deepest corners of their lungs as they searched with their bare hands for survivors and, later, body parts. In the end, the destruction of the built environment metastasized into real physiological changes: this history proved overwhelming once in their bodies.

Brett L. Walker is Regents Professor and Michael P. Malone Professor of History, Montana State University, Bozeman. He is the author of many books, including most recently A Concise History of Japan from Cambridge University Press, which explores Japanese history in the context of the Anthropocene Epoch.

Reading these essays, it is hard to believe that only two decades ago the idea of writing environmental histories of disease and health in the industrial era seemed somehow novel. Although Silent Spring, the seminal text of the modern environmental movement, focused precisely on the link between modern environments and their effects on human, as well as animal, health, the first generation of environmental historians, drawing their inspiration from ecology, confined their gaze to the effects of modernization on landscapes. 1 Instead, the origins of this topic lie primarily with two other groups of scholars: social historians of medicine and disease ecologists. For the former, the key issue was inequality, which, as they detailed, correlated strongly with disease occurrence and severity. Their data were often statistical, and socioeconomic class emerged as a crucial variable that encapsulated many of the physical-material conditions (poor housing, inadequate diets, polluted air and water, and dangerous factories) that characterized the daily lives of poor and marginalized people. Their stories were overwhelmingly about the unequal burden of disease, particularly in societies undergoing industrialization. 2 Disease ecologists, in contrast, were interested in the environmental-material conditions that affected the evolution of pathogens. Amid triumphal accounts of the success of modern medicine and optimistic calls for disease eradication in the mid-twentieth century, they offered a more skeptical perspective, insisting on the ability of pathogens to adapt to changed conditions. 3 In the late twentieth century, the emergence and reemergence of epidemics of infectious disease in the industrialized world seemed to validate their ecological concerns. By the end of the century, progressive accounts of medical history based on the so-called epidemiologic transition seemed overly simplistic. As new diseases like AIDS and SARS made clear, the trajectory of infectious disease in world history was far less linear than historians and medical scientists once assumed.

In their introduction, the editors frame these histories under the term Anthropocene, defining it in terms of its global scale, its sense of temporal crisis, and the centrality of technology to both environmental health problems and their solution. These are important insights, but the editors stop short of any critical reflection on this framing.
It is a frame, however, that relies more heavily on disease ecology and biomedical science than on social medicine. The discourse of the Anthropocene communicates several key features of the twentieth century, particularly the novel ecological situations posed by both climate change and rapidly increasing rates of species extinction. And yet it often fails to engage with the ways that inequality has structured health and disease. Consequently, it has the potential to underwrite a kind of "scalar politics," to use Nick King's phrase, in which the proffered solutions often rely on the further development of technologies (and consumption of resources) in the centers of power while "reducing the scale of [disease] intervention, from global political economy to laboratory investigation and information management." 4 In the wake of the devastating Ebola epidemic in West Africa, Bill Gates, the high-tech billionaire who now runs the influential Gates Foundation, argued that the outbreak, which killed more than eleven thousand people, demonstrated the need for substantial investments in a "global warning and response system" as well as more research on antiviral drugs, antibody treatments, and "RNA-based constructs." Yet as the World Health Organization pointed out, the epidemic was essentially confined to three of the world's poorest countries, all of which "have very weak health systems, lack human and infrastructural resources, and have only recently emerged from long periods of conflict and instability." 5 There are different ways to frame Ebola.

The essays here are diverse in their approach. Those of Otter and Price-Smith continue in the vein of disease ecology, showing how infectious pathogens continue to exploit novel ecological (and even technological) niches, while that of Klingle echoes earlier social histories by pointing out how poverty and marginalization allowed for the explosion of type 2 diabetes among Native American peoples. Webel's essay builds on work undertaken in colonial history since the 1980s. Scholars such as David Arnold, Randall Packard, and Warwick Anderson extended the social history of medicine to colonial contexts, bringing attention to the prominence of health, disease, and environment within colonial discourse and practice. They also drew on the new cultural history to demonstrate the deep imbrication of Western discourses of health with those of race. 6

The essay by Brett Walker is, in many ways, the most novel and "environmental" of the group. His focus on the health impacts of the World Trade Center's collapse begins to unpack the materials and ecologies that make up the contemporary built environment (in his case, the history of construction fireproofing, asbestos mining, and a particular industrial compound, Monokote) as well as their very uncertain futures. His story, like Carson's, centers on the new and toxic substances that industrialization has introduced into daily life, the material "diversification" that the editors cite, but his argument hinges not on pathogen evolution so much as on politics. His narrative links the mines of Libby, Montana, with the emergency responders of New York City, suggesting the complex networks of transportation and exchange that undergird the modern ecology of the postindustrial metropolis. These postindustrial ecologies emerge through complicated networks of politics, labor, and capital, and his essay gestures toward the role of unions, local and national politicians, corporations, and bureaucrats.
His story is similar to that told by Janet Ore, who has carefully unraveled the use of formaldehyde and other toxic materials in mobile home construction, linking the illnesses and struggles of plywood workers to those of working-class mobile-home owners. These innumerable "slow disasters," along with more spectacular events like 9/11, demonstrate that these novel chemical ecologies pose unknown risks that may only become visible through the concerted work of victims and their advocates over the space of decades. 7

Most of these essays, as well as several other works in the field, my own included, aim to upend again the traditional narrative of medical and health progress, underscoring the point that modern environments have not necessarily led to the control of pathogens and the improvement of health, but instead may contribute to the creation and propagation of disease. To do this, environmental historians marshal contemporary biomedical research on the ecology of pathogens, the toxicology of chemicals, and the epidemiology of chronic disease. The resulting histories bring the insights of modern biomedicine to the attention of historians and other social science scholars. In this way, their stories are similar to the first wave of environmental histories that indicted industrialism's impact on the land. Those early environmental histories relied on a particular science, Clementsian ecology, to lend authority to their narratives and to support their position that the technology-intensive manipulation of landscape had upset a preexisting natural balance.

Arguably, disease ecologists can write the evolutionary history of pathogens better than historians can, and the same holds for epidemiologists and epidemiology. But historians are typically far better equipped to elaborate the human and social variables that scientists inevitably oversimplify. A key strength of history lies in its attention to social and political details and their contingency. In addition, environmental historians offer close attention to material details. Too often in scientific (and some historical) writing, "modernity" and "technology" come to stand in for a host of more complex forces. In his cross-disciplinary work on the H5N1, or bird flu, virus, disease ecologist-cum-geographer Rob Wallace argues for a temporal link between the emergence of the virus in China and the adoption of neoliberal economic policies that enabled the explosion of factory bird farming. These are the kinds of linkages that deserve closer scrutiny and attention from historians. How did the modern hospital environment that now breeds SARS rise to dominance? What were the forces (of capital, labor, politics, law, medicine) that drove particular models of patient treatment and care, that standardized climate-controlled buildings, or that determined what materials would be mobilized to build late twentieth-century skyscrapers? And what were the alternative possibilities that were swept aside?

Ultimately, however, the essays in this Forum raise a thornier, if familiar, issue: the relationship between environmental history and science. Justifiably excited by new scientific knowledge that links environments ever more strongly to health and disease, environmental historians have not, for the most part, looked very hard at the contexts that have generated that knowledge. We have been down this path before.
Early environmental histories were jeremiads that drew their strength from a Clementsian ecology that emphasized stability and climax, even though that was hardly the only ecological paradigm available at the time. As the trajectory of the field makes clear, too close a reliance on certain natural sciences leads to histories that quickly become period pieces. 8 The displacement of Clementsian ecology had its benefits. Arguably the most important development within the field of environmental history over the last two decades has been its engagement with the history and sociology of science. The recognition that every science has a history of its own and emerges within overlapping fields of power relations (from the scale of the laboratory to that of the globe) should at least give historians pause when they turn to science to authorize their narratives, whether of landscape change or disease occurrence. Contrary to popular belief, the choice is not between an uncritical acceptance of scientific truth and science denial. There is a middle ground in which we can view science as generating powerful and useful knowledge while acknowledging the contexts that shape it. Who participates in its development? What kinds of projects are funded? What questions get asked? What kinds of answers are legitimated? Moreover, modern science is itself more diverse and conflictual than historians typically acknowledge; there is always more than one science to choose from. Klingle's acknowledgment of the longstanding controversy surrounding Neel's "thrifty genotype" hypothesis points, for example, to the diversity of opinions within the medical community and to much earlier challenges to racial explications of disease among Native Americans, an issue that Christopher McMillen has deftly explored with respect to tuberculosis. Even as modern science has grown more diverse and complex, our histories too often have not. 9

There are important exceptions. Christopher Sellers's now classic history of toxicology links the rise of this new science in the early twentieth century to the emergence of modern chemical corporations and the aspirations of a new group of university-based scientists who depended on corporate funding. Although toxicology remains the dominant science of chemical regulation to which environmentalists and citizen activists appeal, Sellers's work offers us a more critical perspective on both its history and its potential. 10 More recently, Sellers has looked at how scientists in different national contexts (the United States, Western Europe, and the USSR) assessed toxic effects and came to strikingly different conclusions about chemical dangers during the Cold War. In particular, Soviet scientists focused on more subtle effects that American scientists neglected, such as impacts on the central nervous system; as a result, Soviet health standards were typically more protective than those in the United States, by as much as an order of magnitude. Similarly, Kate Brown has written about the Soviet diagnosis of "chronic radiation sickness," which emerged in the mid-1950s to describe a cluster of nonspecific symptoms (chronic fatigue, loss of appetite, premature aging, aching joints) associated with ongoing exposure to radiation both within and downwind of nuclear production facilities.
Whereas the closed society of the USSR allowed the government to gather massive amounts of data on populations exposed to radiation, American officials studiously avoided collecting this kind of information because they feared generating popular opposition to nuclear weapons production as well as exposing the government to legal liability. Moreover, as Brown relates, when American scientists came to dominate radiation health research after the Cold War, the diagnosis of chronic radiation sickness disappeared from the literature. Her work, along with that of Sellers, shows how scientific representations express "their own nation's current configuration of workplace power relations," which are, in turn, the outcome of particular social and political histories. 11

These works reveal that even those sciences that generate a more "environmental" perspective have a context and history that require critical illumination. Disease ecology is no exception. Its emergence in the first half of the twentieth century was "contingent upon the vagaries of global capital, commerce, and war," as well as the environmental legacies of colonialism. 12 What are the contexts that have renewed the interest in disease ecology, and what forces and institutions are shaping that science today? How are modern disease ecology and epidemiology, with their close connections to genetics, similar to and different from their earlier incarnations in the mid-twentieth century? 13 And what are the political, economic, and institutional contexts that are shaping their current trajectory?

In addition, environmental historians should have more to say about the material contexts on which modern biomedical science depends. Knowledge, too, has its environmental costs, particularly modern scientific knowledge. 14 Perhaps most politically controversial, and thus most visible, is the reliance of biomedicine on massive numbers of nonhuman animals. Much of our scientific knowledge about diseases like E. coli infection, SARS, type 2 diabetes, and asbestosis has relied on the production and global circulation of specific strains of mice, rats, ferrets, and rhesus monkeys for laboratory experimentation. To the dismay of many researchers and institutional administrators, activist challenges to this research have made this animal "resource" increasingly visible. 15 But there are other impacts to consider. Contemporary biomedical research continues to be conducted in the usual places, precisely because those are the locations in which one is assured of the massive material resources necessary to sustain the high-tech laboratories on which that science depends: reliable and abundant energy supplies, high-quality buildings, good water supplies, rare metals, radioactive and toxic materials, innumerable chemicals, and a plethora of more ordinary consumer items. The modern biomedical lab is a very particular kind of local space. Within most of the industrialized world, and certainly within the United States, both "Big Science" and basic research are assumed to be values in their own right that inevitably generate practical results; it borders on heresy to ask if such science is necessary or warranted, if the benefits justify the costs, or what other solutions to disease might be possible. The perpetual return to "science" helps to ensure the flow of dollars and resources into the laboratories, universities, and cities of the (over)developed world: to genotype new viruses, to develop new vaccines, to manipulate data, to write computer code.
Moreover, in the face of global crisis, questions of inequality, along with the histories of colonialism and capitalism that have structured that inequality, are too easily relegated to the sidelines. But if the ultimate goal is improved global health, we need to recognize that the investments in these places may come at the expense of investments in basic public health and infrastructure elsewhere. Put another way, modern intellectual work, particularly in the natural sciences, has its own ongoing set of environmental and social implications, and these too need to be part of our stories.

Notes

15. For instance, the University of Washington's April 2015 decision to continue with the construction of a new $120 million "animal research facility" generated considerable news coverage. "Activists Trying to Stop Work on New UW Animal Lab," Seattle Times, April 23, 2015, http://www.seattletimes.com/seattle-news/education/activists-trying-to-stop-work-on-new-uw-animal-lab/.

Networks in Tropical Medicine: Internationalism, Colonialism, and the Rise of a Medical Specialty
The Colonial Disease: A Social History of Sleeping Sickness in Northern Zaire
Sleeping Sickness, Colonial Medicine and Imperialism: Some Connections in the Belgian Congo
The Power to Heal: African Auxiliaries in Colonial Belgian Congo and Uganda
Lords of the Fly: Sleeping Sickness Control in British Uganda
Trypanosomiasis Control in African History: An Evaded Issue?
The Social and Economic Effects of Sleeping Sickness
The Age of Water: The Urban Environment in the North of France
The Character of the Industrial Revolution in England
Food, Hygiene, and the Laboratory: A Short History of Food Poisoning in Britain, circa 1850-1950
Food Poisoning
Food Poisoning, Food-Borne Infection and Intoxication: Nature, History, and Causation, Measures for Prevention and Control
Reporting and Incidence of Food Poisoning
Food Poisoning and Food Hygiene
Food Poisoning and Food Infections
Emerging Epidemics: The Menace of New Infections
Salmonella in Swine, Cattle and the Environment of Abattoirs
Legislation
Food-Poisoning and Food-Borne Infection
A Disease of Rabbits Characterized by Large Mononuclear Leucocytosis, Caused by a Hitherto Undescribed Bacillus, Bacterium monocytogenes
Listeria monocytogenes, a Food-Borne Pathogen
Serving High-Risk Foods in a High-Risk Setting: Survey of Hospital Food Service Practices after an Outbreak of Listeriosis in a Hospital
Listeria monocytogenes, a Food-Borne Pathogen
A Review of the Incidence and Transmission of Listeria monocytogenes in Ready-to-Eat Products in Retail and Food Service Environments
Comparative Investigations of Listeria monocytogenes Isolated from a Turkey Plant, Turkey Products, and from Human Cases of Listeriosis in Denmark
Evolution in Bacteria: Evidence for a Universal Substitution Rate in Cellular Genomes
Shiga Toxin-Producing Escherichia coli: Yesterday, Today, and Tomorrow
On the Origins of Bacterial Pathogenic Species by Means of Natural Selection: A Tale of Coevolution
Etiology of Bloody Diarrhea among Patients Presenting to United States Emergency Departments: Prevalence of Escherichia coli O157:H7 and Other Enteropathogens
Emerging Foodborne Pathogens: Escherichia coli O157:H7 as a Model of Entry of a New Pathogen into the Food Supply of the Developed World
Shiga Toxin of Enterohemorrhagic Escherichia coli Type O157:H7 Promotes Intestinal Colonization
Emerging Foodborne Pathogens
Emerging Infections: Microbial Threats to Health in the United States
The Public Inquiry into the September 2005 Outbreak of E. coli O157 in South Wales
The History and Evolution of Escherichia coli O157 and Other Shiga-Toxin Producing E. coli
Black Holes, Antivirulence Genes, and Gene Inactivation in the Evolution of Bacterial Pathogens
The Relentless Evolution of Pathogenic Escherichia coli
History and Evolution of Escherichia coli O157
Grazing Protozoa and the Evolution of the Escherichia coli O157:H7 Shiga Toxin-Encoding Prophage
Bacteria: A Very Short Introduction
"Escherichia coli O157:H7," in Encyclopedia of Genetics
"Black Holes" and Bacterial Pathogenicity: A Large Genomic Deletion That Enhances the Virulence of Shigella spp. and Enteroinvasive Escherichia coli
Derivation of Escherichia coli O157:H7 from Its O55:H7 Precursor
Parallel Evolution of Virulence in Pathogenic Escherichia coli
Emerging Foodborne Pathogens
The Dangerous Sausage: Diet, Meat and Disease in Victorian and Edwardian Britain
The Livestock Revolution, Food Safety, and Small-Scale Farmers: Why They Matter to Us All
Should We Eat Meat? Evolution and Consequences of Modern Carnivory
Food Systems and the Changing Patterns of Food-Borne Zoonoses
Super Shedding of Escherichia coli O157:H7 by Cattle and the Impact on Beef Carcass Contamination
Ecology of Escherichia coli O157:H7 in Cattle and Impact of Management Practices
Invited Review: Effects of Diet Shifts on Escherichia coli in Cattle
Microcosm: E. coli and the New Science of Life
Emerging Foodborne Pathogens
Emerging Pathogens: Is E. coli O104:H4 the Next Strain to Watch?
SARS and the Global Risk of Emerging Infectious Diseases
Coronavirus as a Possible Cause of Severe Acute Respiratory Syndrome
Inescapable Ecologies: A History of Environment, Disease, and Knowledge
The Politics of China's SARS Crisis
Cumulative Number of Probable Reported Cases of SARS
Preliminary conclusion of the Campbell Commission
The first strains of VRSA were observed in 2007 in the United States
Making Health Care Safer
NDM-1: A Cause for Worldwide Concern
Control and Catastrophe in Human Affairs
Human Frontiers, Environments and Disease
The Health of Nations
The Rules of Sociological Method (Glencoe: The Free Press, 1938), xlvii; see also Philip W. Anderson, A Different Universe: Reinventing Physics from the Bottom Down
Reductionistic and Holistic Science
Evolution of Infectious Disease
Prevalence and Antimicrobial Resistance of Cronobacter sakazakii Isolated from Domestic Kitchens in Middle Tennessee, United States
The Challenge of Antibiotic Resistance
Strange But True: Antibacterial Products May Do More Harm Than Good
Antimicrobial Resistance in Hospital Organisms and Its Relation to Antibiotic Use
The Rise of the Enterococcus: Beyond Vancomycin Resistance
Mortality and Hospital Stay Associated with Resistant Staphylococcus aureus and Escherichia coli Bacteremia: Estimating the Burden of Antibiotic Resistance in Europe

At Montana State, Timothy LeCain and Michael Reidy continually challenge me to better my ideas and clarify my writing, as has my partner, LaTrelle Scherffius. I am also grateful to the anonymous readers at Environmental History for their thoughtful suggestions.

Statement to the House, House Judiciary Committee and Subcommittee on Immigration, Citizenship, Refugees, Border Security, and International Law
9/11's Delayed Legacy: Cancer for Many of the Rescue Workers
Treatise of World Trade Center (WTC) Dust Generated during the
Are We Ready? Public Health since 9/11
Health Risks from Exposure to Asbestos, Metals, and Various Chemicals due to the Collapse of the World Trade Center: An Environmental Residential Survey with a Commentary Related to Ground Zero Workers
We Have Never Been Modern, trans. Catherine Porter
Air Testing After Sept. 11 Attack Is Both Perplexing and Reassuring
EPA Regulators Say They've Learned From But Critics Remain Unconvinced
NY Officials Underestimate Danger
Libby Meets Manhattan: Connecting the Dots between a New York Terrorist Attack and a Montana Mining Disaster
Characterization of the Dust/Smoke Aerosol that Settled East of the World Trade Center (WTC) in Lower Manhattan after the Collapse of the WTC 11 September 2001
Chemical Analysis of World Trade Center Fine Particulate Matter for Use in Toxicological Assessment
Analysis of Aerosols from the World Trade Center Collapse Site
Risk Assessment for Asbestos-Related Cancer from the 9/11 Attack on the World Trade Center
Health Risks from Exposure to Asbestos, Metals, and Various Chemicals
For an excellent meditation on the relationship between deep geologic time and historical time, see Daniel Francis Zizzamia
Tremolite Asbestos Health Consultation: Chemical Specific Health Consultation: Tremolite Asbestos and Other Related Types of Asbestos
The True Story of How the W. R. Grace Corporation Left a Montana Town to Die (and Got Away with It)
Interview by Fredric L. Quivik, PhD. Oral interview, part of the Libby Oral History Project. Bob and Carrie Dedrick's home in Libby, MT
Interview by Fredric L. Quivik, PhD. Oral interview, part of the Libby Oral History Project. Diane Keck's home near Libby, MT
Interview by Fredric L. Quivik, PhD. Oral interview, part of the Libby Oral History Project
Asbestos and Fire: Technological Trade-offs and the Body at Risk
Application of Sprayed Inorganic Fiber Containing Asbestos: Occupational Health Hazards
The World Trade Center Catastrophe: Was the Type of Spray Fire Proofing a Factor in the Collapse of the Twin Towers?
Asbestos Exposure and Neoplasia
The World Trade Center Catastrophe
Chrysotile Asbestos in the Lungs of Persons
Information on the asbestos ban
Protecting the Product: A Special Report: Company's Silence Countered Safety Fears about Asbestos
Protecting the Product

Linda Nash is an associate professor in the Department of History at the University of Washington and director of the Center for the Study of the Pacific Northwest.

The Columbian Exchange: Biological and Cultural Consequences of 1492
A History of Public Health
Malaria in the Upper Mississippi Valley
The Cholera Years: The United States in 1832
The White Plague: Tuberculosis
Printed and published for the Atlantic Monthly Press by
Biological Aspects of Infectious Disease
Randall Packard's history of malaria offers an excellent account of these competing ideas in medical and health practice. Randall M. Packard, The Making of a Tropical Disease: A Short History of Malaria
The Scale Politics of Emerging Diseases
The Next Epidemic: Lessons from Ebola
Ebola Virus Disease
Colonizing the Body: State Medicine and Epidemic Disease in Nineteenth-Century India
Black Labor: Tuberculosis and the Political Economy of Health and Disease in South Africa
Colonial Pathologies: American Tropical Medicine, Race, and Hygiene in the Philippines
Mobile Home Syndrome: Engineered Woods and the Making of a New Domestic Ecology in the Post-World War II Era
Slow Violence and the Environmentalism of the Poor
Afterword: Environmental History: Watching a Historical Field Mature
"The Red Man and the White Plague": Rethinking Race, Tuberculosis, and American Indians, ca. 1890-1950
Hazards of the Job: From Industrial Disease to Environmental Health Science
Dispatches from Dystopia
The Cold War over the Worker's Body: Cross-National Clashes over Maximum Allowable Concentrations in the Post-World War II Era
Living in a Material World
McMillen's history of tuberculosis reveals the extent to which contemporary research on the genetics of tuberculosis relies on the troubled racist science of the early twentieth century. McMillen
There are many examples, for example, Jacob Darwin Hamblin, Arming Mother Nature: The Birth of Catastrophic Environmentalism