Inductive Risk and Values in Science*

Heather Douglas†
Department of Philosophy, University of Puget Sound

Although epistemic values have become widely accepted as part of scientific reasoning, non-epistemic values have been largely relegated to the "external" parts of science (the selection of hypotheses, restrictions on methodologies, and the use of scientific technologies). I argue that because of inductive risk, or the risk of error, non-epistemic values are required in science wherever non-epistemic consequences of error should be considered. I use examples from dioxin studies to illustrate how non-epistemic consequences of error can and should be considered in the internal stages of science: choice of methodology, characterization of data, and interpretation of results.

*Received February 2000; revised September 2000.

†Send requests for reprints to the author, Department of Philosophy, University of Puget Sound, 1500 North Warner, Tacoma, WA 98416-0094. I gave earlier versions of this paper at the Workshop on Values in Scientific Research at the University of Pittsburgh in October 1998 and at the Center for Nuclear and Toxic Waste Management at the University of California at Berkeley in November 1999. My thanks to those whose challenges and comments at those talks helped me refine these ideas. I also wish to thank Ted Richards for his continual help on this paper.

Philosophy of Science, 67 (December 2000), pp. 559-579. Copyright 2000 by the Philosophy of Science Association. All rights reserved.

1. Introduction. Despite its central importance to the philosophy of science, the issue of whether, which, and how values have a role to play in science has been discussed little in the past 50 years. With some exceptions (see below), the common wisdom of philosophers of science has been that only epistemic values have a legitimate role to play in science. While many have claimed that non-epistemic (i.e., social, ethical, political) values in fact do play a role in science, the normative standard of "value-free" (read "non-epistemic value free") science remains. In this paper, I will challenge that normative standard for large areas of science. I will argue that non-epistemic values are a required part of the internal aspects of scientific reasoning for cases where inductive risk includes risk of non-epistemic consequences. In these cases, value-free science is inadequate science; the reasoning is flawed and incomplete. Thus the normative standard needs to be reconsidered. For science that has clear non-epistemic impacts, being "value-free" is not a laudable goal. As I will note at the end of my paper, this does not mean that any argument whatsoever is a good argument in science. Accepting the role of values in science does not eliminate the requirement for good arguments. It only modifies the understanding of what can count as a good argument.

It must be noted that this challenge to the normative standard is obviously not the only one set forth recently. In the past ten years, feminist philosophers of science have challenged the normative standard, primarily by challenging the epistemic/non-epistemic distinction on which it rests.
(Rooney 1992; Longino 1996) I find their arguments largely persuasive, and I have written elsewhere on the porousness of this distinction. (Machamer and Douglas 1999) While the distinction cannot do the boundary work many philosophers would like it to do (i.e., maintaining the boundary between legitimate uses of values and illegitimate uses that would threaten the objectivity of science), the distinction can serve to remind us which goals the values primarily serve within a particular context. This is how I will use it for the remainder of this paper.

My focus for the role of values in science centers on Hempel's concept of "inductive risk." Hempel's articulation of inductive risk encapsulates the main arguments of the 1945-1965 debates over values in science. After using Hempel's work to provide the necessary background on inductive risk, I will discuss how inductive risk fits in with other work on the legitimate use of non-epistemic values in science. To illustrate how consideration of inductive risk can require the use of non-epistemic values, I will discuss examples from recent laboratory animal studies of dioxin's ability to induce cancer.1,2 It is precisely such contentious areas as this that have caused public questioning of science and that generate much heat but little light on the values in science question. With these examples, I hope to convince the reader that a decision by the scientists in the examples would not be complete without a consideration of non-epistemic values through inductive risk.

1. By "dioxin" I will be referring to the most toxic congener of the class of chemicals known as dioxins, 2,3,7,8-tetrachlorodibenzo-p-dioxin (or 2,3,7,8-TCDD), which is also the best studied.

2. The case studies on dioxin are drawn from my dissertation, The Use of Science in Policy-making: A Study of Values in Dioxin Science, The University of Pittsburgh, 1998. I thank Donald Mattison for assisting me with understanding dioxin studies.

2. Inductive Risk. From 1948 to 1965, a series of papers3 raised the issue of whether values could be a legitimate part of scientific reasoning. All those that argued for the legitimate use of values in science did so on the basis of the concept of inductive risk. Inductive risk, a term first used by Hempel (1965),4 is the chance that one will be wrong in accepting (or rejecting) a scientific hypothesis. While papers such as Churchman 1948 and Rudner 1953 argued that the risk of inductive error meant that values must play a role in science generally, other papers, such as Jeffrey 1956 and Levi 1962, sought to limit the influence of non-epistemic values in science. Because Hempel's views encapsulate considerations from both sides of this debate, I will focus on Hempel to introduce the concept of inductive risk. A more in-depth historical examination of this material must await a future paper.

3. Thanks to John Beatty for making me aware of this work.

4. My thanks to Eric Angner for bringing this article to my attention.

In his 1965 essay, "Science and Human Values," Hempel articulates the traditional view of philosophers regarding the possibility that values could act as presuppositions for scientific arguments. According to Hempel, value statements have no logical role to play when one is trying to support a scientific statement.
Judgments of value lack "all logical relevance to the proposed hypothesis since they can contribute neither to its support nor to its disconfirmation." (91) This traditional view does not encapsulate the entirety of Hempel's thinking on science and values, however. Hempel holds that values can serve as presuppositions to what he calls scientific method. Because no evidence can establish a hypothesis with certainty, "acceptance (of a hypothesis) carries with it the 'inductive risk'" that the hypothesis may turn out to be incorrect. (92) Inductive risk is the risk of error in accepting or rejecting hypotheses.

Hempel then considers what rules should be used by a scientist when accepting or rejecting hypotheses, arguing that values do have an important role to play in the rules of acceptance. (1965, 92) Hempel considers rules of acceptance to be "special instances of decision rules" (such as maximizing expected utility), which must consider both the possibility that the decision to accept (or reject) a hypothesis proves right and the possibility that it proves wrong. As Hempel states:

When a scientific rule of acceptance is applied to a specified hypothesis on the basis of a given body of evidence, the possible 'outcomes' of the resulting decision may be divided into four major types: (1) the hypothesis is accepted (as presumably true) in accordance with the rule and is in fact true; (2) the hypothesis is rejected (as presumably false) in accordance with the rule and is in fact false; (3) the hypothesis is accepted in accordance with the rule, but is in fact false; (4) the hypothesis is rejected in accordance with the rule, but is in fact true. The former two cases are what science aims to achieve; the possibility of the latter two represents the inductive risk that any acceptance rule must involve. (1965, 92)

In order to formulate the acceptance rules properly, Hempel suggests, one must decide how one values the various outcomes: "The problem of formulating adequate rules of acceptance and rejection has no clear meaning unless standards of adequacy have been provided by assigning definite values or disvalues to those different possible 'outcomes' of acceptance or rejection." (1965, 92) It is in this way that value statements act as legitimate premises in the decision whether to accept or reject a scientific hypothesis. Values are needed to weigh the consequences of the possible errors one makes in accepting or rejecting a hypothesis, i.e., the consequences that follow from the inductive risk.

Depending on the outcomes, different kinds of values will be required for the justification of an acceptance rule. For some cases, acceptance (or rejection) of a hypothesis will lead to a particular course of action and outcomes with non-epistemic effects. In these cases, the outcomes of the potential actions need to be evaluated using non-epistemic values in order to formulate rules of acceptance.
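Hempel's gloss on rules of acceptance as "special instances of decision rules" can be made concrete with a toy expected-utility calculation. The following sketch is my illustration rather than Hempel's own formalism; the function name and the utility numbers are arbitrary placeholders.

    # Toy version of an acceptance rule as a decision rule (my
    # illustration, not Hempel's formalism). Assigning utilities to
    # Hempel's four outcomes and maximizing expected utility yields a
    # probability threshold for acceptance; valuing the two kinds of
    # error differently moves that threshold.
    def acceptance_threshold(u_tp, u_tn, u_fp, u_fn):
        # u_tp: accept H, H true      u_fp: accept H, H false
        # u_tn: reject H, H false     u_fn: reject H, H true
        # Accept iff p*u_tp + (1-p)*u_fp >= p*u_fn + (1-p)*u_tn,
        # which solves to the threshold below.
        return (u_tn - u_fp) / ((u_tp - u_fn) + (u_tn - u_fp))

    # Errors valued symmetrically: accept at better-than-even odds.
    print(acceptance_threshold(u_tp=1, u_tn=1, u_fp=-1, u_fn=-1))  # 0.5
    # A false acceptance valued five times worse than a false rejection:
    # much stronger evidence is demanded before accepting.
    print(acceptance_threshold(u_tp=1, u_tn=1, u_fp=-5, u_fn=-1))  # 0.75

Which numbers to put into such a rule, when the outcomes have non-epistemic consequences, is exactly where non-epistemic values enter.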
In other cases, where the acceptance of a hypothesis will not lead clearly to any particular course of action, Hempel believes the question of how to assign values to the outcomes to be considerably more difficult.5 Instead of valuing the practical outcomes, one must instead consider the outcomes in terms of the goals of science, which Hempel describes as "the attainment of an increasingly reliable, extensive, and theoretically systematized body of knowledge." (1965, 93) In current terms, Hempel is providing a potential set of epistemic values with which to determine what our rules of acceptance ought to be: reliability, extensiveness, and systematization.6

5. A point made by Jeffrey (1956, 242).

6. Levi (1962) emphasized the use of epistemic values to the exclusion of all others.

Although inductive risk is present in scientists' decisions to accept a theory, it is not obvious that scientists should consider all the consequences entailed by inductive risk. One might argue, as both Richard Jeffrey and Ernan McMullin have, that we should not expect or demand that scientists consider the consequences of accepting a theory erroneously. (Jeffrey 1956; McMullin 1983) Such considerations, McMullin argued in his 1982 Presidential Address to the Philosophy of Science Association, should be made by those using or applying the science. Under this view, the value judgments attached to various outcomes, or "utilities," are not the concern of the scientists. As McMullin stated: "Such utilities are irrelevant to theoretical science proper and the scientist is not called upon to make value-judgments in their regard as part of his scientific work." (McMullin 1983, 8)

This argument, however, overlooks the authority science and scientists have in our culture and the important role scientists play in practical decision-making. Where science is "useful," it will have effects beyond the development of a body of knowledge. In many contexts, if a scientist affirms something as true, or accepts a certain theory, that statement is taken as authoritative and will have effects, potentially damaging ones, if the scientist is wrong. And, as I will argue below, scientists also take inductive risks in stages of science before acceptance or rejection of theories, thus considering risks never brought to the light of public decision-making. Public decision-makers thus have no ability to consider McMullin's "utilities," leaving the job to the scientists. To claim that scientists ought not consider the predictable consequences of error (or inductive risk) is to argue that scientists are somehow not morally responsible for their actions as scientists. To defend a completely "value-free" science would require such a move, one which seems far more dangerous than openly grappling with the role of values in science. Arguing that scientists have the same moral responsibilities as the rest of us is beyond the scope of this paper.

3. The Structure of Values in Science. Inductive risk provides just one way for values to play a role in science. It is a critical way, I believe, both for the dismantling of a value-free normative standard for science and for a clearer understanding of how and why scientific disputes occur.
In this section, I will place inductive risk into the context of other views on science and values, showing the general structure of how and where values play a role in science. In the process, I will expand on Hempel's limited view of inductive risk for acceptance of hypotheses, arguing that inductive risk is relevant throughout the scientific process.7

7. Churchman (1956, 247) recognizes the many places where scientists make decisions. My work can be seen as a clarification and further development of his views.

To place the idea of inductive risk in context, it should be noted that there are three decision points in the scientific process where non-epistemic values are widely recognized as having a legitimate role. (Here, I follow Longino 1990, 83-85.) First, values (both epistemic and non-epistemic) play important roles in the selection of problems to pursue. Second, the direct use to which scientific knowledge is put in society requires the consideration of non-epistemic values. For example, if science enables the development of a new technology, values are (or should be) consulted to determine whether such a technology is desirable. Third, non-epistemic values place limitations on methodological options, such as limitations on how we can use humans in experimentation.

In all of these cases where non-epistemic values are recognized as legitimate, those values play a direct role in the decision-making. One is not considering the consequences of error, as one is with inductive risk, but considering the direct consequences of a particular course of action. Performing experiments on humans without their consent is unethical not because of the chance that the methodological choice will lead to inaccurate and misused results, but because of the direct consequences of the methodology on the human subjects. Similarly, the use of science to develop an unwanted technology is unethical not because of the potential unintended consequences of the technology (although those might also be a problem), but because of the intended consequences. In these cases, moral concerns over the direct consequences of actions override any potential epistemic benefits, thus limiting choices. Longino has pointed out that these three roles for values in science are compatible with an "externality" picture of values in science. (Longino 1990, 85-86) Under this model, the "internal" process of scientific reasoning can go forward without the necessary inclusion of non-epistemic values. Non-epistemic values serve as constraints on some scientific choices, but do not interfere with internal scientific reasoning.

Consideration of values through inductive risk does not fit with the "externality" model, however. First, the role of values is indirect, instead of direct. As Hempel rightly pointed out, value judgments have no direct place in the argument for what should be taken to be true. However, because error is always a possibility, we are required to consider the consequences of error alongside the arguments concerning evidence. And the consideration of the consequences of error requires the consideration of values, both epistemic and non-epistemic. The role for values is there, even if it is not direct.
Second, with inductive risk, the places in the scientific process where values play a role are not limited to the outskirts of science.8 In the externality model, the internal stages of science remain free of values. Consideration of inductive risk, however, occurs throughout the scientific process. Although Hempel focused entirely on inductive risk at the point of theory acceptance, there are other places in the scientific process where inductive risk is relevant. If one follows the general schema of the methodology from a scientific research paper, significant inductive risk is present at each of the three "internal" stages of science: choice of methodology, gathering and characterization of the data, and interpretation of the data. At each point, one can make a wrong (i.e., epistemically incorrect) choice, with consequences following from that choice. A chosen methodology assumed to be reliable may not be. A piece of data accepted as sound may be the product of error. An interpretation may rely on a selected background assumption that is erroneous. Thus, just as there is inductive risk in accepting theories, there is inductive risk in accepting methodologies, data, and interpretations.

8. Longino (1990, 86 and 128-132) also grapples with the role of values in the internal processes of science, but she does not argue that non-epistemic values are required of internal science. Instead, she argues that values can affect science through background assumptions, a non-normative argument. Nor does her account make clear how background assumptions, ostensibly epistemic statements, carry ethical or societal values with them. This gap spurred me to deeper consideration of the problem.

By expanding where we see relevant inductive risk, the potential role for non-epistemic values has also expanded. Hempel was right in asserting that whether or not a piece of evidence confirms a hypothesis (given a set of background assumptions) is a relationship in which value judgments have no role. What evidence is available to support a theory, or which background assumptions we choose to hold, however, does involve value judgments through the consideration of inductive risk. In cases where the consequences of making a choice and being wrong are clear, the inductive risk of the choice should be considered by the scientists making the choice. In the cases I discuss below, the consequences of the choices include clear non-epistemic consequences, requiring non-epistemic values in the decision-making. Thus, where the weighing of inductive risk requires the consideration of non-epistemic consequences, non-epistemic values have a legitimate role to play in the internal stages of science. The externality model is overthrown by a normative requirement for the consideration of non-epistemic values, i.e., non-epistemic values are required for good reasoning.

In these cases where inductive risk is involved, non-epistemic values are not the sole determinant of whether to accept a given option. The scientist will need to consider both the quantity of evidence or degree of confirmation, to estimate the magnitude of the inductive risk, and the valuation of the consequences that would result from error, to estimate the seriousness or desirability of those consequences.
The weighing of these consequences, in combination with the perceived magnitude of the inductive risk (i.e., how likely one is to be wrong), determines which choice is more acceptable. Where non-epistemic consequences follow from error, non-epistemic values are essential for deciding which inductive risks we should accept, or which choice we should make.

4. Inductive Risk in Methodological Choice: Statistical Significance. As noted in the previous section, that non-epistemic values have a role to play in the making of methodological choices is little disputed. When a methodological option has direct consequences that are ethically unacceptable, that methodology is not considered a viable choice. Such is the case particularly for studies with human subjects, which must be performed with a constant eye towards the appropriate treatment of the subjects. Valuing direct consequences is, however, not the only way in which non-epistemic values play a role in methodological choice. In this section, I describe how inductive risks are taken when making the methodological choice of the appropriate level for statistical significance, and how these risks entail non-epistemic consequences. Under most circumstances, the choice of a level of statistical significance is made not through the explicit consideration of arguments for different statistical choices, but by the tradition of an area of research or the choice of a computer statistical package. I will not argue that such conventions have not worked reasonably well.9 Instead, I will examine the reasoning required to make a deliberate and explicit choice of a level of statistical significance for conducting toxicological studies, showing how non-epistemic values would play a role in such a choice.

9. In Regulating Toxic Substances, Carl Cranor argues that the standard assumptions have not been serving us well, leaving us with too many false negatives compared to false positives. He estimates that false negatives are in fact more costly to society than false positives. (See Cranor 1993, 71-78, 122-129, 135-137, and 153-157.)

The deliberate choice of a level of statistical significance requires that one consider which kinds of errors one is willing to tolerate. For any given test, one must find an appropriate balance between two types of error: false positives and false negatives. A false positive occurs when one accepts an experimental hypothesis that is in fact false; a false negative occurs when one rejects an experimental hypothesis that is in fact true. Changing the level of statistical significance changes the balance between false positives and false negatives. If one wishes to avoid more false negatives and is willing to accept more false positives, one should lower the standard for statistical significance. If, on the other hand, one wishes to avoid false positives more, one should raise the standard for statistical significance. For any given experimental test, one cannot lower both types of error; one can only make trade-offs from one to the other. To reduce both types of error, one must devise a new, more accurate experimental test (such as increasing the population size examined or developing a new technique for collecting data).
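To make this trade-off concrete, consider a minimal simulation sketch (my illustration, not any actual study design). It assumes a hypothetical 5% background tumor rate, a hypothetical 20% rate when the effect is real, dose groups of 50 animals, and a one-sided Fisher exact test; all of these choices are placeholders.

    # A minimal simulation of the false-positive / false-negative
    # trade-off in choosing a significance level (alpha). All rates and
    # group sizes are hypothetical, not data from any actual dioxin study.
    import numpy as np
    from scipy.stats import fisher_exact

    rng = np.random.default_rng(0)
    n = 50             # animals per group (a typical dose-group size)
    p_control = 0.05   # hypothetical background tumor rate
    p_dosed = 0.20     # hypothetical elevated rate when the effect is real
    trials = 2000

    def significant(k_dosed, k_control, alpha):
        # One-sided Fisher exact test: is the dosed tumor rate
        # significantly higher than the control rate at this alpha?
        table = [[k_dosed, n - k_dosed], [k_control, n - k_control]]
        _, p = fisher_exact(table, alternative="greater")
        return p < alpha

    for alpha in (0.10, 0.05, 0.01):
        # False positives: no real effect; both groups share the control rate.
        fp = np.mean([significant(rng.binomial(n, p_control),
                                  rng.binomial(n, p_control), alpha)
                      for _ in range(trials)])
        # False negatives: a real effect that the test fails to detect.
        fn = np.mean([not significant(rng.binomial(n, p_dosed),
                                      rng.binomial(n, p_control), alpha)
                      for _ in range(trials)])
        print(f"alpha={alpha:.2f}: false-positive rate {fp:.3f}, "
              f"false-negative rate {fn:.3f}")

Tightening alpha from 0.10 to 0.01 drives the estimated false-positive rate down while the false-negative rate climbs, which is precisely the trade-off at issue.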
In laboratory animal studies, tests for statistical significance are used to determine when the response of the dosed or exposed animals is significantly different from that of the non-dosed or control animals. Because of the control exerted in the lab over the conditions of the animals, if there is a response that is significantly different between the exposed and the control animals, that difference can generally be attributed to the dose given to the animals. The statistical comparison between exposed and control animals is particularly important for cancer studies, where both control animals and exposed animals will usually exhibit some cancer. What must be determined is whether the exposed animals exhibit significantly more cancer than the control animals. Only when a cancer rate is significantly different from the control cancer rate is it considered a genuine result of the dosing. Thus, setting the standard for statistical significance will affect what is considered a response caused by the dosing. The stricter the standard for statistical significance, the greater the difference between dosed animals and control animals will need to be for the response to be considered significant. Stricter standards lead to a reduction in the rate of false positives and an increase in the rate of false negatives. On the other hand, with a laxer standard for statistical significance, a smaller difference in cancer rates between exposed and control populations will be considered significant and attributed to the dosing regimen. This increases the likelihood of false positives but lowers the likelihood of false negatives.
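The point can be put numerically with another small sketch. The counts are hypothetical throughout, and the one-sided Fisher exact test is again my assumed test, not necessarily the one used in any given study.

    # Sketch: the stricter the significance standard, the larger the
    # tumor excess a dosed group must show before it counts as an effect
    # of the dose. All counts here are hypothetical placeholders.
    from scipy.stats import fisher_exact

    n_control, k_control = 86, 2   # control group: 2 tumors among 86 rats
    n_dosed = 50                   # dosed group size

    def min_significant_tumors(alpha):
        # Smallest dosed-group tumor count whose excess over the control
        # rate is judged significant at this alpha (one-sided test).
        for k in range(n_dosed + 1):
            table = [[k, n_dosed - k], [k_control, n_control - k_control]]
            _, p = fisher_exact(table, alternative="greater")
            if p < alpha:
                return k

    for alpha in (0.10, 0.05, 0.01):
        print(f"alpha={alpha:.2f}: at least "
              f"{min_significant_tumors(alpha)}/{n_dosed} dosed animals "
              f"with tumors needed to count as a response")

Raising the bar from alpha = 0.10 to alpha = 0.01 raises the minimum response that will be attributed to the dose, trading false positives for false negatives.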
In setting the standard for statistical significance, one must decide what balance between false positives and false negatives is optimal. In making this decision, one ought to consider the consequences of the false positives and false negatives, both epistemic and non-epistemic. I will focus here on the non-epistemic consequences. In laboratory animal studies testing the potential harms of environmentally pervasive chemicals (such as dioxins), the results are used to determine both whether the chemical has a particular effect and what the dose-response relationship is for the chemical and the effect. The results are then extrapolated to humans (a controversial subject I will not address here) and used to set regulatory standards for the chemical. In testing whether dioxins have a particular effect or not, an excess of false positives in such studies will mean that dioxins appear to cause more harm to the animals than they actually do, leading to overregulation of the chemicals. An excess of false negatives will have the opposite result, causing dioxins to appear less harmful than they actually are, leading to underregulation of the chemicals. Thus, in general, false positives are likely to lead to stronger regulation than is warranted (or overregulation); false negatives are likely to lead to weaker regulation than is warranted (or underregulation). Overregulation presents excess costs to the industries that would bear the costs of regulations. Underregulation presents costs to public health and to other areas affected by damage to public health. Depending on how one values these effects (an evaluation that requires the consultation of non-epistemic values), different balances between false positives and false negatives will be preferable.

In addition to whether a substance has a particular effect, laboratory animal studies are also used to determine the dose-response relationship for the effect. Two different models are used when interpreting dose-response data: the threshold model and the linear extrapolation model. (The choice of a model is discussed further below.) The threshold model assumes that there is no response or effect caused by the chemical under study below a certain dose; the dose below which there is no effect is called the threshold. The linear extrapolation model, on the other hand, assumes that the chemical under study is capable of producing biological effects, with decreasing rates, at ever decreasing doses. Thus one must extrapolate a curve back from the tested doses, usually a linear extrapolation through the origin.

Deciding whether an exposed group's response differs significantly from the control group's response is essential for determining the details of either dose-response model. The statistical significance test tells us whether or not the difference in response is significant and attributable to the dose. Thus, the standard for statistical significance will affect both what is considered a response and the shape of the dose-response curve. For the threshold model, the no-response level is determined by where there is no statistically significant response, i.e., the threshold is defined in terms of observable response, and observability is defined in terms of statistical significance. Thus a false negative generally means that the "safe" dose is set higher than it should be, which will be less protective of public health. Because dose levels in animal studies are usually set one order of magnitude apart, the "safe dose" resulting from a false negative will be at least one order of magnitude less protective than it should be. For a dose-response extrapolation curve, a false negative will have varied results, depending on the shape of the curve, but it will in general produce a less dangerous-looking curve, leading to laxer regulations. False positives, on the other hand, will produce excessively protective safe doses (in the threshold model) or more dangerous-looking dose-response curves (in the extrapolation model), generating stricter than necessary regulation.
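A small numerical sketch shows how far the two models can diverge at low doses. The doses, excess risks, and "acceptable" risk level below are all hypothetical numbers of my choosing, not values from the dioxin literature.

    # Sketch: the same (hypothetical) study results read through the
    # threshold model versus the linear extrapolation model.
    doses = [1.0, 10.0, 100.0]        # ng/kg/day, one order of magnitude apart
    excess_risk = [0.00, 0.06, 0.30]  # excess tumor rate over controls; the
                                      # lowest dose showed no significant response

    # Threshold model: no effect below the highest dose with no observed
    # response, so that dose serves as the threshold.
    threshold = max(d for d, r in zip(doses, excess_risk) if r == 0.0)

    # Linear extrapolation model: a line through the origin (here fit
    # simply through the highest-dose point), plus a policy-chosen
    # "acceptable" excess risk.
    slope = excess_risk[-1] / doses[-1]   # excess risk per ng/kg/day
    acceptable_risk = 1e-4                # a value judgment, not a measurement
    linear_safe_dose = acceptable_risk / slope

    print(f"Threshold model: no-effect threshold at {threshold} ng/kg/day")
    print(f"Linear model: dose for {acceptable_risk:.0e} excess risk is "
          f"{linear_safe_dose:.3f} ng/kg/day")

On these made-up numbers the two models differ by more than an order of magnitude in the dose they treat as safe, and a false negative at the lowest dose would shift either answer upward.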
In finding the appropriate balance between false positive and false negative errors, we must decide what the appropriate balance is between the consequences of those errors: overregulation and underregulation. Selecting an appropriate balance will depend on how we value the effects of those two consequences: whether we are more concerned about protecting public health from dioxin pollution or more concerned about protecting industries that produce dioxins from increased regulation. To value one objective and not the other at all does not seem to be a plausible position; we would not want to choose to have only false positives or only false negatives. Finding the balance requires, among other things, weighing the non-epistemic valuations of the potential consequences.

Reducing the possibility of any error by increasing the power of the study would help mitigate the dilemma here, but doing so is extremely difficult. For example, one way to reduce the risk of both false positives and false negatives is to increase the animal populations under study. Currently, most studies use 50-100 animals in each dose group. Increasing those numbers would decrease the chance of both false positives and false negatives. However, it is extremely expensive and difficult to do larger studies. A single two-year cancer study using 200 rats (50 at each of three dose levels and 50 controls) costs roughly $3 million. (Graham and Rhomberg 1996, 18) The cost and logistics of increased dose groups can be overwhelming. Perhaps some other solution to strengthen studies will present itself, but in the meantime, some balance must be struck. Where the balance should lie for dioxin studies is currently unclear. Regardless, determining the balance clearly requires an ethical value judgment in the internal stages of a scientific study.
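The cost side of this dilemma can be roughed out with a standard normal-approximation power calculation. In the sketch below, the 5% control and 15% dosed tumor rates are hypothetical, and the per-animal cost simply spreads the $3 million figure for a 200-rat study evenly, a crude assumption of mine.

    # Sketch: larger dose groups cut the false-negative rate at a fixed
    # significance level, but at rapidly growing cost. One-sided test
    # for a rise in tumor rate from 5% (control) to 15% (dosed); both
    # rates are hypothetical placeholders.
    from math import sqrt
    from statistics import NormalDist

    norm = NormalDist()
    p0, p1, alpha = 0.05, 0.15, 0.05
    z_alpha = norm.inv_cdf(1 - alpha)
    cost_per_rat = 3_000_000 / 200   # spreads the rough figure quoted above

    for n in (50, 100, 200, 500):                 # animals per group
        se = sqrt(p0 * (1 - p0) / n + p1 * (1 - p1) / n)
        power = norm.cdf((p1 - p0) / se - z_alpha)
        cost = 4 * n * cost_per_rat               # three dose groups + controls
        print(f"n={n:>3} per group: false-negative rate ~{1 - power:.2f}, "
              f"cost ~${cost / 1e6:.1f}M")

Even granting the crude assumptions, driving the false-negative rate toward zero multiplies the cost roughly tenfold, which is why some balance must be struck instead.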
5. Inductive Risk in Evidence Characterization: Rat Liver Tumors. Once one has implemented the method chosen for the study, the data must be gathered and characterized. Evidence characterization occurs late in the studies I examine here, after the laboratory animals have been dosed for many months. One can encounter difficulties in evidence characterization that are unforeseen when one is selecting a methodological approach. Deciding how to grapple with unexpected ambiguities in data sources is my concern in this section. I will show how, as with the other stages in science, there is inductive risk in making choices, and one must decide what amounts (or levels) and kinds of inductive risk are acceptable. Some of the consequences of the risks are non-epistemic, and thus non-epistemic values are needed to weigh the consequences and to make the choice.

In dioxin cancer studies, rodents (the animal group of choice because of their relatively short lifespans and rapid breeding cycles) are dosed for two years, close to a natural lifespan. At the end of those two years, full-body autopsies are performed on the animals to gather the endpoint data. Because dioxins appear to affect more than one organ site, all potential areas for cancerous growths are checked. In the studies relied upon by regulators for making decisions about dioxins, tissue and organ samples have been mounted on slides to be evaluated by toxicologists. One particular study, published in 1978 by Richard Kociba and other toxicologists at Dow Chemical, has been central to regulators in setting acceptable levels for dioxins in the environment. (Greenlee et al. 1991, 567; Huff et al. 1991, 72) The first long-term cancer study performed for dioxins, the Kociba study focused attention on cancers of the liver in female rats. (Kociba et al. 1978) The female rat liver slides have undergone at least three evaluations by pathologists, with different results. In Table 1, three different evaluations of the rat liver slides from the Kociba studies are given.

TABLE 1. Female Sprague-Dawley rat liver slide evaluations. (Adapted from EPA 1994, p. 6-5.)
Key: B = rats with benign tumors; M = rats with malignant tumors; T = total rats with tumors.

Dose Level            | 1978 Evaluation          | 1980 Evaluation | 1990 Evaluation        | Acute Toxicity in Rats
0 ng/kg/day (control) | B 8/86, M 1/86, T 9/86   | T 16/86         | B 2/86, M 0/86, T 2/86 | no acute liver toxicity observed; no acute animal toxicity
1 ng/kg/day           | B 3/50, M 0/50, T 3/50   | T 8/50          | B 1/50, M 0/50, T 1/50 | no acute liver toxicity observed; no acute animal toxicity
10 ng/kg/day          | B 18/50, M 2/50, T 20/50 | T 27/50         | B 9/50, M 0/50, T 9/50 | 8/9 livers with tumors show some acute toxicity; debatable acute animal toxicity
                      | p<0.001                  |                 |                        |

The first evaluation, from 1978, was originally