In defence of the value free ideal

Gregor Betz

Received: 14 May 2012 / Accepted: 13 November 2012
© Springer Science+Business Media Dordrecht 2013

G. Betz, Institute of Philosophy, Karlsruhe Institute of Technology, Kaiserstr. 12, 76135 Karlsruhe, Germany. e-mail: gregor.betz@kit.edu

Abstract  The ideal of value free science states that the justification of scientific findings should not be based on non-epistemic (e.g. moral or political) values. It has been criticized on the grounds that scientists have to employ moral judgements in managing inductive risks. The paper seeks to defuse this methodological critique. Allegedly value-laden decisions can be systematically avoided, it argues, by making uncertainties explicit and articulating findings carefully. Such careful uncertainty articulation, understood as a methodological strategy, is exemplified by the current practice of the Intergovernmental Panel on Climate Change (IPCC).

Keywords  Value free science · Inductive risks · Uncertainty · Scientific policy advice · Climate science · IPCC

1 Introduction

The ideal of value free science states that the justification of scientific findings should not be based on non-epistemic (e.g. moral or political) values. It derives, straightforwardly and independently, from democratic principles and the ideal of personal autonomy: As political decisions are informed by scientific findings, the value free ideal ensures—in a democratic society—that collective goals are determined by democratically legitimized institutions, and not by a handful of experts (cf. Sartori 1962, pp. 404–410). With regard to private decisions, personal autonomy would be jeopardized if the scientific findings we rely on in everyday life were soaked with moral assumptions (see Weber 1949, pp. 17–18).

However, the ideal of value free science has been at the heart of various controversies which have raged since at least Max Weber’s publications and involved philosophers and social scientists alike. More recently, philosophers have re-articulated and sharpened two types of criticism which I shall term the “semantic critique” and the “methodological critique”.1 The semantic critique, which was anticipated by Weber (1949, pp. 27–38), is, for example, set forth by Putnam (2002) and Dupré (2007). It claims that normative and factual statements are irreducibly interwoven because of the existence of thick concepts.2 That’s why science cannot get rid of value judgments; as a consequence, the ideal of value free science is allegedly unrealizable. The methodological critique dates back to a seminal paper by Rudner (1953), which incited a debate in the 1950s involving, amongst others, Jeffrey (1956) and Levi (1960). Various philosophers of science, including Philip Kitcher,3 Helen Longino,4 Torsten Wilholt,5 Eric Winsberg,6 Kevin Elliott7 and, most forcefully, Heather Douglas,8 currently seem to endorse the methodological critique.

1 A third, rather empirical critique challenges the ideal of value free science with regard to its (potentially) harmful side-effects. Adopting the ideal in scientific policy advice, the argument goes, might have the effect that worse decisions are eventually taken or that scientific advice, for being too nuanced or careful, is completely ignored in policy making; see for example Cranor (1990, p. 139) or Elliott (2011, pp. 55–80, in particular pp. 67–68).
Both Cranor and Elliott, though, don’t provide detailed empirical evidence to support the claim that providing value-free advice is socially harmful. Moreover, Elliott reconstructs the argument as a defence of the methodological critique (ibid., pp. 63–64, 68). I don’t agree: If it’s really harmful to give value-free advice, that’s clearly a reason in its own right not to do it—quite independently of any further sophisticated, methodological reasoning. Moreover, stressing that value-free advice is harmful does not invalidate the refutation of the methodological critique advanced below. This said, the charge that adopting the value-free ideal might be socially harmful—at least in some contexts and under certain conditions—has to be taken seriously and calls for further philosophical and empirical investigation.

2 A term coined by Williams (1985).

3 See Kitcher (2011, pp. 31–40). Besides endorsing the methodological critique reconstructed in this paper, Kitcher sets forth a further argument against value freedom which is based on the pervasiveness of so-called probative values in scientific inquiry (Kitcher 2011, pp. 37–40). While this argument deserves a more thorough discussion in its own right, I suspect that it hinges on an ambiguity. If probative values (e.g. worthiness, policy-relevance) are simply used to determine detailed research questions, their being non-epistemic doesn’t undermine value freedom. If probative values (e.g. burdens of proof), however, are used to infer scientific results, they represent plain epistemic values, and the value free ideal is left intact, as well.

4 Compare Longino (1990, 2002).

5 See Wilholt (2009).

6 See Winsberg (2010, pp. 93–119).

7 See Elliott (2011, pp. 55–80). Elliott distinguishes, further, two versions of what I call the methodological critique: the “gap argument” (ibid., pp. 62–66) and the “error argument” (ibid., pp. 66–70). While he sees clearly that these arguments are not completely independent but rely on joint premisses (e.g., ibid., p. 70), I’d even go a bit further: The “error argument”, on the one hand, applies only in situations where one faces inductive risks, i.e., where there is a gap to be bridged in the scientific inference chain. On the other hand, the “gap argument” stresses that non-epistemic values have to be alluded to in order to bridge (evidential or logical) gaps in scientific reasoning—and the methodological handling of errors seems just to be one such gap. In sum, it’s not clear to me whether we have two distinct (albeit closely related) arguments at all. The reconstruction unfolded in this paper merges the “gap argument” and the “error argument” in one line of argumentation.

In short, the methodological critique maintains that scientists have to make methodological decisions which require them to rely on non-epistemic value judgments.

This paper seeks to defuse the methodological critique. Allegedly arbitrary and value-laden decisions can be systematically avoided, it argues, by making uncertainties explicit and articulating findings carefully. The methodological critique is not only ill-founded, but distracts from the crucial methodological challenge scientific policy advice faces today, namely the appropriate description and communication of knowledge gaps and uncertainty.

The structure of this paper is as follows.
One can distinguish two basic versions of the methodological critique that rely on a common, and hence central, premiss, namely that policy-relevant scientific findings depend on arbitrary choices (Section 2). Such arbitrariness, the critics argue, arises in situations of uncertainty because scientists may handle inductive risks in different ways (Section 3).9 But that is not inevitable: On the contrary, arbitrary decisions are systematically avoided if uncertainties are properly expressed (Section 4). Such careful uncertainty articulation, understood as a methodological strategy, is exemplified by the current practice of the Intergovernmental Panel on Climate Change, IPCC (Section 5).

2 How arbitrariness undermines the value free ideal

There are two versions of the methodological critique, which object to the value free ideal in different ways while sharing a common core. A first variant of the critique argues that the value free ideal cannot be (fully) realized, a second variant states that it would be morally wrong to realize it. The common core of both versions is a kind of underdetermination thesis. It claims that every scientific inference to policy-relevant10 findings involves a chain of arbitrary (epistemically underdetermined) choices:

Thesis 1 (Dependence on arbitrary choices) To arrive at (adopt and communicate) policy-relevant results, scientists have to make decisions which (i) are not objectively (empirically or logically) determined and (ii) sensitively influence the results thus obtained.

8 In a series of philosophical studies, Douglas (2000, 2007, 2009) has revived and improved upon Rudner’s original argument.

9 So the critique, in particular as set forth by Heather Douglas, does not refer to Duhem-style underdetermination as discussed, e.g., in Quine (1975) or Laudan (1991), although it was originally articulated in such terms (cf. Rudner 1953).

10 This restriction to policy-relevant science is crucial, as Mitchell (2004) has stressed. Only if results are communicated and (potentially) influence policy decisions does the following claim hold.

According to the first version of the critique, one inevitably buys into non-epistemic value judgments by making an arbitrary, not objectively determined choice with societal consequences:11

Thesis 2 (Decisions value laden) Decisions which (i) are not objectively determined and (ii) sensitively influence policy-relevant results (to be adopted and communicated) are inevitably based—possibly implicitly—on non-epistemic value judgments.

From Theses 1 and 2, it follows immediately that science cannot be free of non-epistemic values:

Thesis 3 (Value free science unrealizable) Scientists inevitably make non-epistemic value judgments when establishing (adopting and communicating) policy-relevant results.

This first version of the methodological critique is set forth by Rudner (1953) and seems to be approved, e.g., by Wilholt (2009), Winsberg (2010) and Kitcher (2011). However, one of its premisses, Thesis 2, represents a non-trivial assumption. To see this, suppose that the scientist, when facing an arbitrary choice, simply rolls a die. It’s at least not straightforward which specific, non-epistemic normative assumptions she (implicitly) buys into by doing so.
This might explain why Heather Douglas unfolds a different and, I take it, stronger line of argument, which yields a second type of methodological critique.12 The second version, too, takes off from the premiss that policy-relevant scientific findings depend on arbitrary choices (Thesis 1). Because of science’s recognized authority, it reasons, the acceptance of some policy-relevant finding by a scientist is highly consequential.13

Thesis 4 (Policy-relevant results consequential) The policy-relevant results scientists arrive at (adopt and communicate) have potentially (in particular in case they err) morally significant societal consequences.

One of Douglas’ main examples may serve to illustrate this claim (cf. Douglas 2000). The scientific finding that dioxins are highly carcinogenic will spur policy makers to set up tight regulations and will, consequently, shut down otherwise socially beneficial economic activities. If scientists, however, refute that very hypothesis, dioxins won’t be banned and the public will be exposed to a potentially harmful substance. In any case, the scientists’ decision has far-reaching effects, which will be particularly harmful provided they err.

11 This is not to say that value judgements themselves are arbitrary in the sense of being irrational or unjustifiable. The critique merely maintains that every epistemically underdetermined decision relies on non-epistemic reasons—and is hence non-arbitrary from a broader, non-epistemic perspective. The argument unfolded in this paper is consistent with the view that non-epistemic normative claims can be supported and justified through (e.g. moral) reasoning.

12 See specifically Douglas (2007, pp. 122–126) and Douglas (2009, pp. 80–81).

13 This general claim entails that errors committed by scientists are highly consequential, too. Error probabilities and inductive risks will have a more specific argumentative rôle to play in the following section.

But whenever we face a choice with morally significant consequences, the idea of responsibility requires us to adopt a moral point of view, i.e. to consider ethical aspects when deliberating the decision:

Thesis 5 (Moral responsibility) Any decision that is not objectively determined and has, potentially, morally significant societal consequences should be based on non-epistemic value judgments (instead of being taken arbitrarily).

That’s why it would be morally wrong (in the light of Theses 1, 4 and 5) to follow the value free ideal in scientific inquiry:

Thesis 6 (Value free science unethical) Scientists should rely on non-epistemic value judgments when establishing (adopting and communicating) policy-relevant results.

The second version of the methodological critique constitutes a conclusive argument, at least once the arbitrariness thesis—the cornerstone of the critique—is granted. This very thesis will be further discussed in the following sections.

3 How uncertainty triggers arbitrariness

Policy making requires definite and unequivocal answers to various factual questions, or so it seems. Take Douglas’ example from health and environmental policy (cf. Douglas 2000): Are dioxins carcinogenic? Is there a threshold below which exposure to dioxins is absolutely safe—and if so, where? How many persons, out of a hundred, develop malignant tumors as a result of exposure to dioxins at current rates? Scientists are expected to answer these questions with “plain” hypotheses: Yes, dioxins are carcinogenic; or: no, they aren’t.
The safety threshold lies at level X; or: it lies at level Y. So and so many persons (out of a hundred) currently suffer from malignant tumors because of exposure to dioxins.

Under uncertainty, i.e. when the empirical evidence or the theoretical understanding of a system is limited, such plain hypotheses can neither be confirmed nor rejected beyond reasonable doubt. As the inference to the policy-relevant result, which the scientist is ultimately going to communicate, becomes error-prone, the error probabilities or, more generally, the inductive risks one is ready to accept when drawing the inference have to be specified. Clearly, errors can be committed all along the inference chain—including the generation and interpretation of data, the choice of the model, the specification of parameters and boundary conditions, etc.—so inductive risks have to be taken care of throughout the entire inquiry (and not merely at its final step). This gives the scientists considerable leeway, because there is no way in which the methodological management of inductive risks (specifically the acceptance of certain error probabilities) is objectively, i.e. empirically or logically, determined. The policy-relevant results inferred, e.g. statements about the carcinogenic effects of dioxins, depend sensitively on those methodological choices. Put concisely, the argument in favor of arbitrariness reads:

P1 To arrive at (adopt and communicate) policy-relevant results, scientists have to adopt or reject plain hypotheses under uncertainty.

P2 Under uncertainty, adopting or rejecting plain hypotheses requires setting error probabilities which one is willing to accept when drawing the respective inference and which sensitively affect the results that can be inferred.

P3 The error probabilities one is willing to accept in an inference are not objectively (empirically or logically) determined.

C Thus: Dependence on arbitrary choices (Thesis 1).

The argument relies crucially on a specific reconstruction of the decision situation that scientists face in policy advice. As P1 has it, the options available to scientists are fairly limited: they have to arrive at and communicate results of a quite specific type.14 And they cannot infer such plain hypotheses, at least not under uncertainty, without taking substantial inductive risks and hence committing themselves to a set of methodological choices that are not objectively determined. As long as the first premiss is granted, I take the above reasoning to be a very strong and convincing argument. But can P1 really be upheld?

4 Avoiding arbitrariness through articulating and recasting findings appropriately

Premiss P1, stated as narrowly as above, is false. Policy making can be based on hedged hypotheses that make the uncertainties explicit, and scientific advisors may provide valuable information without inferring plain, unequivocal hypotheses that are not fully backed by the evidence. Reconsider Douglas’ example (Douglas 2000).15 Rather than opting for a single interpretation of the ambiguous data, scientists can make the uncertainty explicit by working with ranges of observational values. Instead of employing a single model, inferences can be carried out for various alternative models.

14 A somewhat less specific, and hence weaker, analogue to premiss P1 figures prominently in Rudner’s exposition of the methodological critique.
Rudner (1953) claims, explicitly, that “the scientist as scientist accepts or rejects hypotheses”, and any satisfactory methodological account should explain how (ibid., p. 2). Such a rough analogue to premiss P1 is rather implicitly (but no less importantly) assumed in the writings of Douglas, too. Thus, Douglas (2000) explicitly affirms P2 by claiming that the scientists who study the health effects of dioxins (i) have to set a specific level of statistical significance when drawing the inference (ibid., p. 567), (ii) have to agree on an unequivocal interpretation of the available data (ibid., p. 571), and (iii) have to choose between two alternative (causal) models, the threshold and the linear extrapolation model (ibid., p. 573). Echoing her earlier analysis in a discussion of risk assessments, Douglas (2009, p. 142) claims that scientists frequently have to choose from equally plausible models in order to “bridge the gaps” and complete the assessment. But why are scientists required to make these choices? Presumably in order to arrive at policy-relevant results (that is, roughly, P1). However, a careful reading of the critics seems to reveal that they are not exactly committed to P1. The discussion in the next section will account for this observation.

15 See also footnote 14.

Rather than committing oneself to a single level of statistical significance, one may systematically vary this parameter, too. Acting as policy advisors, the scientists can communicate the results of such a (methodological) sensitivity analysis. The reported range of possibilities may then inform a decision under uncertainty.16

Reporting ranges of possibility is just one way of avoiding plain hypotheses and recasting scientific results in situations of uncertainty. Scientists could equally infer and communicate other types of hedged hypotheses that factor in the current level of understanding. They might make use of various epistemic modalities (e.g. it is unlikely/it is possible/it is plausible/etc. that ...) or simply conditionalize on unwarranted assumptions (e.g. if we deem these error probabilities acceptable, then ...; based on such-and-such a scheme of probative values, we find that ...; given that set of normative, non-epistemic assumptions, the following policy measure is advisable ...).17 In sum, scientists as policy advisors are far from being required to accept or refute plain hypotheses.18

With its original premiss in tatters, the above reconstruction of the methodological critique is in need of repair. But, in fact, premiss P1 can be easily amended to yield a much more plausible statement.19

P1’ To arrive at (adopt and communicate) policy-relevant results, scientists have to adopt or reject plain or hedged hypotheses under uncertainty.

For the sake of validity, we have to modify premiss P2 correspondingly.

16 In the sense of Knight (1921).

17 Curiously, conditionalizing on normative assumptions is exactly the strategy favored by Douglas (2009, p. 153) herself. It should be noted that such conditional scientific results comply with the value free ideal, because once uncertain or (non-epistemic) normative assumptions are placed in the antecedent of a hypothesis, they are clearly not maintained by the scientist anymore.

18 This is the bottom line of Jeffrey’s criticism (Jeffrey 1956), too. However, my argument deviates substantially from Jeffrey’s in allowing that uncertainties be made explicit otherwise than through probability assignments.
More generally, it seems to me a shortcoming of the current debate about value freedom that uncertainty articulation is, short-sightedly, identified with the assignment of probabilities. Not only does Douglas ignore non-probabilistic uncertainty statements, as I will argue below; Winsberg (2010, pp. 96, 119), Biddle and Winsberg (2010) and Kitcher (2011, p. 34), too, wrongly assume that giving value free policy advice requires one to make uncertainties explicit through probabilities. As an additional point, note that reporting hedged hypotheses is not identical with merely stating the (e.g., limited, partially conflicting) evidence and letting the policy makers decide on whether the plain hypothesis should be adopted or not. That’s, however, what Elliott (2011) seems to have in mind when referring to value-free scientific advice in the face of uncertainty, as formulations like scientists “passing the buck” (ibid., pp. 55, 64), “withholding their judgment or providing uninterpreted data to decision makers” (ibid., p. 55), letting “the users of information decide whether or not to accept the hypotheses” (ibid., p. 67) suggest. The point about making uncertainties fully explicit and reporting hedged hypotheses is (i) to enable policy makers to take the actual scientific uncertainty into account and (ii) to allow their normative risk preferences (level of risk aversion) to bear on the decision. Justifying decisions under uncertainty obviously does not require one to fully adopt an uncertain prediction or to act as if one of the uncertain forecasts were true; see also footnotes 20 and 21.

19 Besides systematic objections, the following modification also addresses the hermeneutic issue considered in footnote 14.

P2’ Under uncertainty, adopting or rejecting plain or hedged hypotheses requires setting error probabilities which one is willing to accept when drawing the respective inference and which sensitively affect the results that can be inferred.

While P1’ is now nearly analytic, P2’ turns out to be questionable. Does accepting hedged hypotheses, which are, thanks to epistemic qualification and conditionalization, weaker than plain ones, still involve substantial error probabilities? Douglas sees this counterargument, anticipating that “[some] might argue at this point that scientists should just be clear about uncertainties and all this need for moral judgment will go away, thus preserving the value-free ideal” (Douglas 2009, p. 85). But she rebuts:

Even a statement of uncertainty surrounding an empirical claim contains a weighing of second-order uncertainty, that is, whether the assessment of uncertainty is sufficiently accurate. It might seem that the uncertainty about the uncertainty estimate is not important. But we must keep in mind that the judgment that some uncertainty is not important is always a moral judgment. It is a judgment that there are no important consequences of error, or that the uncertainty is so small that even important consequences of error are not worth worrying about. Having clear assessments of uncertainty is always helpful, but the scientist must still decide that the assessment is sufficiently accurate, and thus the need for values is not eliminable. (Douglas 2009, p. 85)

This much is clear: Sometimes a probability (or, generally, an uncertainty) statement cannot be inferred, based on the available evidence, without a substantial chance to err. But Douglas, or the methodological critique, needs more: Every hedged (e.g.
epistemically qualified or suitably conditionalized) hypothesis involves substantial error probabilities. And that seems to be plainly false. Douglas either ignores or underestimates how far epistemic qualification and conditionalization might carry us. Consider, e.g.: “It is possible (consistent with what we know) that ...”, “We have not been able to refute that ...”, “If we assume these thresholds for error probabilities, we find that ...” Such results are, first of all, clearly policy relevant (think of mere possibility arguments,20 or worst case reasoning21)—even more so if complemented with a respective sensitivity analysis (varying, for instance, the error thresholds systematically). Secondly, such hypotheses are sufficiently weak, or can be further weakened, so that the available evidence suffices to confirm them beyond reasonable doubt. There is a simple reason for that: A scientific result which fully and comprehensively states our ignorance is itself well corroborated (for if it weren’t, it wouldn’t make the uncertainty fully explicit in the first place).22

20 Cf. Hansson (2011).

21 As discussed by Sunstein (2005), Gardiner (2006) and Shue (2010).

22 Note that in refuting P1 or, respectively, P2’, it suffices to say that scientists can weaken their empirical claims such that they are warranted beyond reasonable doubt; we are not, at this stage, committed to the view that they should do so. Only if the value free ideal is accepted, e.g. for reasons indicated in the very first paragraph, might the analysis unfolded in this section entail that scientists should carry out epistemic qualification or conditionalization, because this might be the only way to achieve value freedom.

To insist that some hedged hypotheses can be justified beyond reasonable doubt, even under uncertainty, doesn’t mean denying fallibilism. There always remains the logical possibility that we are wrong: The fundamental regularities observed in the past might break down in the future, other “laws of nature” might reign. Or all our scientists, being cognitively limited, might have committed—in spite of multiple independent checks and double-checks—a simple mistake (e.g. when performing a calculation). While this kind of uncertainty is indeed irreducible, it seems to me just irrelevant and without any practical bearing. Note that any scientific statement whatsoever (e.g. that the earth is not a disk) is affected by similar doubts.23 That’s nothing scientists have to worry about in scientific policy advice.

Let me explain this in some more detail. I take it that there is a vast corpus of empirical statements which decision makers—in private, commercial or public contexts—rightly take for granted as plain facts. I’m thinking for instance of results to the effect that plutonium is toxic, the atmosphere contains oxygen, coal burns, CO2 is a greenhouse gas, Africa is larger than Australia, etc. These findings have been thoroughly empirically tested, they have been derived independently from various well-confirmed theories, or they have been successfully acted upon millions of times. Even so, they are fallible, they are empirically and logically underdetermined, and they might be wrong for more trivial reasons: millions of people might have committed the same fallacy or might have been fooled by a similar optical illusion. Nobody is denying that. But such uncertainties are simply not decision-relevant.
More precisely, I suggest that (i) the corpus of statements that are—for all practical purposes—established beyond reasonable doubt and (ii) the well-established social practice of relying on them in decision making may serve as a benchmark to which scientists can refer in policy advice. The idea is that scientific policy advice comprises only results that are as well confirmed as those benchmark statements—so that policy makers can rely on the scientific advice in the same way as they are used to relying on other well-established facts. By making all the policy-relevant uncertainties explicit, scientists can further and further hedge their reported findings until the results are actually as well confirmed as the benchmark statements. (In the extreme, they might simply admit their complete ignorance as to the consequences of some policy option.)

23 Following David Hume (2000, p. 119), I tend to consider the evocation of such uncertainties as a sort of unreasonable skepticism, which comprises, in addition, doubting the existence of the external world or questioning our fundamental cognitive capacities.

24 See also Kitcher (2011, p. 34) for a similar “frank explanation” of a climate scientist. By carefully stressing the uncertainties, Kitcher’s hypothetical climatologist is—in contrast to what Kitcher seems to believe—almost fully complying with this paper’s methodological recommendations.

For illustrative purposes, we consider a ‘frank scientist’24 who tries to comply with the methodological recommendations outlined above in order to circumvent the non-epistemic management of inductive risks. She might address policy makers along the following lines: “You have asked us to advise you on a complicated issue with many unknowns. We cannot reliably forecast the effects of the available policy options, which you’ve identified, in a probabilistic—let alone deterministic—way. Our current insights into the system simply don’t suffice to do so. However, we find that, if policy option A is adopted, it is consistent with our current understanding of the system (and hence possible) that the consequences CA1, CA2, ... ensue; but note that we are not in a position to robustly rule out further effects of option A not included in that range. For policy option B, though, we can reliably exclude such-and-such a set of developments as impossible, which still leaves CB1, CB2, ... as a broad range of future possible consequences. These results are obviously not as telling as a deterministic forecast, but they represent all we currently know about the system’s future development. We, the scientists, think it’s not up to us to arbitrarily reduce these uncertainties. On the contrary, we think that democratically legitimized decision makers should acknowledge the uncertainties and determine—on normative grounds—which level of risk aversion is apt in this situation. Finally, the complex uncertainty statement I have provided above is as well confirmed as other empirical statements typically taken for granted in policy making (e.g., that plutonium is toxic, coal burns, earth’s atmosphere contains oxygen, etc.). That is because all we relied on in establishing the possibilistic predictions were such well-confirmed results.”
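For concreteness, the kind of reporting the frank scientist engages in can be thought of as the outcome of a methodological sensitivity analysis of the sort sketched in this section. The following minimal sketch is purely illustrative: it is not taken from Douglas’ dioxin study or from any actual risk assessment, and the data, the two dose-response models and the error levels are invented. Instead of fixing one model and one error probability, the contested choices are varied and the resulting range is reported as a hedged finding.

# Illustrative sketch only (not from the paper): a methodological sensitivity
# analysis that reports a hedged, possibilistic finding instead of a single
# plain hypothesis. All data, model names and error levels are invented.
from itertools import product
from statistics import NormalDist

observed_cases = 12     # hypothetical tumour cases in an exposed group
group_size = 1000       # hypothetical number of exposed individuals
baseline_rate = 0.008   # hypothetical background tumour rate

def excess_risk_upper_bound(model, alpha):
    """Upper-bound estimate of the excess risk per person, given a
    (hypothetical) dose-response model and an accepted error level alpha."""
    p_obs = observed_cases / group_size
    z = NormalDist().inv_cdf(1 - alpha)  # one-sided confidence bound
    upper = p_obs + z * (p_obs * (1 - p_obs) / group_size) ** 0.5
    excess = max(upper - baseline_rate, 0.0)
    if model == "linear-extrapolation":  # no safe threshold assumed
        return excess
    if model == "threshold":             # invented low-dose attenuation
        return 0.5 * excess
    raise ValueError(f"unknown model: {model}")

# Vary the contested methodological choices instead of fixing them once and for all.
models = ["linear-extrapolation", "threshold"]
alphas = [0.01, 0.05, 0.10]
results = {(m, a): excess_risk_upper_bound(m, a) for m, a in product(models, alphas)}

low, high = min(results.values()), max(results.values())
print("Hedged finding: across the models and error levels considered,")
print(f"the excess risk lies between {low:.4f} and {high:.4f} per person.")
for (m, a), r in sorted(results.items()):
    print(f"  if one accepts alpha = {a} and the {m} model: {r:.4f}")

Which value within the reported range should guide action is then left to the decision makers and their level of risk aversion; the advisors themselves have not had to accept any of the conditional results as the plain truth.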
5 Making uncertainties explicit and avoiding arbitrariness: the case of the IPCC

It is universally acknowledged that the detailed consequences of anthropogenic climate change are difficult to predict. Centurial forecasts of regional temperature anomalies or changes in precipitation patterns, let alone their ensuing ecological or societal consequences, are highly uncertain. So it is no wonder that climate scientists, in particular those involved in climate policy advice, have reflected extensively on how to deal with these uncertainties.25 A recent special issue of Climatic Change26 is further evidence of the attention climate science pays to uncertainty explication and communication. Some of the special issue’s discussion is devoted to the IPCC Guidance Note on Consistent Treatment of Uncertainties (Mastrandrea et al. 2010). The current Guidance Note, which is used to compile the Fifth Assessment Report (5AR), is a slightly modified version of the Guidance Note for the 4AR.27 The Guidance Note may serve as an excellent example of how the very statements and results scientists articulate and communicate are modified and chosen in the light of prevailing uncertainties. It thus illustrates the general strategy described in the previous section and provides a counter-example to the arbitrariness thesis (Thesis 1), underpinning the refutation of the methodological critique.

25 See, for example, the papers by Schneider (2002), Dessai and Hulme (2004), Risbey (2007) and Stainforth et al. (2007).

26 Volume 109, Numbers 1–2/November 2011.

27 Compare also Risbey and Kandlikar (2007) for a detailed discussion.

The Guidance Note distinguishes six epistemic states that characterize different levels of scientific understanding, and lack thereof, pertaining to some aspect of the climate system, as shown in Table 1.

Table 1  Types of uncertainty or knowledge states that require a specific articulation of the results and findings in scientific policy advice (adapted from Mastrandrea et al. 2010)

State of scientific understanding:
A) A variable is ambiguous, or the processes determining it are poorly known or not amenable to measurement.
B) The sign of a variable can be identified but the magnitude is poorly known.
C) An order of magnitude can be given for a variable.
D) A range can be given for a variable, based on quantitative analysis or expert judgment.
E) A likelihood or probability can be determined for a variable, for the occurrence of an event, or for a range of outcomes (e.g., based on multiple observations, model ensemble runs, or expert judgment).
F) A probability distribution or a set of distributions can be determined for the variable either through statistical analysis or through use of a formal quantitative survey of expert views.

For each of these epistemic states, the Guidance Note suggests which sort of statements may serve as appropriate explications of the available scientific knowledge. Thus, in case F), it advises scientists to state the probability distribution (while making the assumptions of the statistical analysis explicit), whereas in case C), it does not do so.28 From state A) to state F), the scientific understanding gradually increases, and the statements scientists can justifiably and reliably make become ever more informative and precise. If, as is the case in state A), current understanding is very poor, scientists might simply report that very fact, rather than dealing with significant inductive risks when inferring some far-reaching hypothesis (as the methodological critique has it).
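Schematically, the logic embodied in Table 1 can be read as a simple decision table: the form of the reported finding is a function of the epistemic state, and one falls back to a weaker form whenever the evidence does not warrant a stronger one. The following lines are merely my own illustrative rendering of that idea, not a formalism contained in the Guidance Note.

# Illustrative rendering only (not the Guidance Note's own formalism):
# the form of the reported finding depends on the state of understanding,
# and weaker forms are chosen whenever stronger ones are not warranted.
STATEMENT_FORMS = {
    "A": "report that the variable or process is poorly understood",
    "B": "report only the sign of the variable",
    "C": "report an order of magnitude",
    "D": "report a quantitative range",
    "E": "report a likelihood or probability for an outcome or range",
    "F": "report a probability distribution (stating its assumptions)",
}

def most_informative_warranted(supported):
    """Walk from the most to the least informative category and return the
    first one the current evidence actually supports; A is always available."""
    for state in ("F", "E", "D", "C", "B"):
        if supported.get(state, False):
            return state
    return "A"

# Hypothetical case: a quantitative range (D) is not warranted,
# but an order of magnitude (C) is.
evidence = {"F": False, "E": False, "D": False, "C": True}
state = most_informative_warranted(evidence)
print(state, "->", STATEMENT_FORMS[state])  # C -> report an order of magnitude

The point of such a fallback is that the advisors never make a stronger claim than the evidence supports; rather than accepting inductive risks, they change the kind of statement they report.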
Importantly, the statement that a process is poorly understood, that the evidence is weak and that the agreement amongst experts is limited—such a statement itself does not involve any practically significant and policy-relevant uncertainties (contra premiss P2’). The Guidance Note thus provides a blueprint for making uncertainties fully explicit and avoiding substantial inductive risks.29 This is not to say that the framework provided by the IPCC is perfect and flawless. In addition, I’m not claiming here that the actual IPCC assessment reports consistently implement the Guidance Note and articulate uncertainties in a flawless way. But even if the guiding framework and the actual practice might be improved upon, the IPCC example nonetheless shows forcefully how scientists can articulate results as a function of the current state of understanding and thereby avoid arbitrary (methodological) choices. This effectively defeats the methodological critique of the value free ideal.30

28 Note that, according to the Guidance Note, the explication of uncertainties does not necessarily depend on “our best climate models”, as Winsberg (2010, p. 111) assumes.

6 Conclusion

The methodological critique of the value free ideal is ill-founded. In a nutshell, the paper argues that there is a class of scientific statements which can be considered—for all practical purposes—as established beyond reasonable doubt. Results to the effect that, e.g., dinosaurs once inhabited the earth, alcohol freezes if sufficiently cooled, or methane is a greenhouse gas are not associated with uncertainties that are relevant to decision making. Frequently, of course, empirical evidence is insufficient to establish hypotheses equally firmly. But rather than reporting uncertain results, and managing the ensuing inductive risks, scientific policy advice may set forth hedged hypotheses that make the lack of understanding fully explicit. By sufficiently weakening the reported results, it is always possible for scientific policy advice to communicate only virtually certain (hedged) findings. That’s what the methodological critique of the value free ideal seems to underestimate.

29 One may raise the question whether, by recommending use of expert judgement and surveys of expert views, the Guidance Note in fact tolerates non-epistemic value judgements and represents, accordingly, a counterexample to this paper’s line of thought. The relation between expert judgements and non-epistemic value judgements is intriguing and clearly deserves closer attention. At this point, however, we have to be content with the following brief remarks: First of all, the Guidance Note prescribes the use of expert surveys in order to gauge the degree of agreement within the scientific community (cf. Mastrandrea et al. 2010, pp. 2–3), e.g. with a view to probability estimates. Such surveys hence represent one way to make the prevailing uncertainties explicit, fully in line with this paper. Secondly, I take it that inferring a hypothesis from expert judgement is not necessarily more uncertain than a well-founded inference from empirical evidence: In cases where we know that experts have acquired tacit knowledge and where many experts agree, expert judgement, too, can establish a hypothesis beyond reasonable doubt. Finally, whenever these conditions aren’t met, i.e.
whenever expert judgement doesn’t establish a hypothesis beyond reasonable doubt, the Guidance Note (in my understanding) recommends switching the category, e.g.: if experts don’t agree on a range of a variable (category D), one should attempt to provide an order of magnitude (category C) rather than reporting an uncertain and poorly founded quantitative range; if experts don’t agree on a probability for a variable (category E), they should try to estimate a range of possible values for that variable without assigning probabilities. So, on the one hand, the Guidance Note can be interpreted in a way such that it remains an example of how to eliminate non-epistemic value judgements, although it relies on expert views. On the other hand, the unspecific reference to expert judgements in the Guidance Note leaves room for different interpretations. Note, however, that this brief case study does not hinge on the Guidance Note being perfect or flawless in every single aspect, for it illustrates in any case the general methodological strategy of removing policy-relevant inductive risks by making uncertainties explicit.

30 The Guidance Note, by the way, endorses that ideal explicitly (cf. Mastrandrea et al. 2010, p. 2).

Let me comment on the general dialectic situation. This paper attempts to refute a critique of the value free ideal. Even if the refutation is successful, this does not in itself show that the value free ideal is justified. Further reasons (e.g. along the lines indicated in the introductory paragraph) have to be given for that claim. Similarly, other criticisms of the value free ideal—such as the semantic critique or the denial that epistemic and non-epistemic values can be reasonably distinguished in the first place—remain unaffected by this paper’s argument and have to be considered separately. Finally, instead of accepting this paper’s conclusion, one may as well deny one of its premisses (and claim, e.g., that scientific findings like methane being a greenhouse gas require management of policy-relevant inductive risks). It should go without saying that in philosophy, too, you may hold on to your position, come what may, provided you’re prepared to make adjustments elsewhere in your web of beliefs. In that sense, there exists no definite refutation of the critique of the value free ideal.

This said, consider, finally, the position which adheres to the value free ideal and maintains that it plays an important rôle in giving science and scientific policy advice its place in a democratic society. Seen from such a perspective, this paper reveals that the philosophical critique, precisely because it addresses a socially and politically relevant issue, is dangerous and risks undermining democratic decision making. While scientific policy advice should be guided by the ideal of value free science, the methodological critique reminds us, at most, that there are factual (contingent) problems in realizing this ideal: Scientists may lack the material or cognitive resources to identify all uncertainties, make them explicit and carry out sensitivity analyses. But (a) this does not affect value free science understood as an ideal which one should try to come close to.31 And (b) the unjustified criticism tends to obscure this methodological challenge (which is luckily addressed by the IPCC), rather than illuminating it and contributing to its remediation.

31 Or, in other words, the ideal of value free science can and should be understood as a regulative principle in a Kantian sense. Note that Kitcher (2011), too, conceives the ideal of well-ordered science (ibid., p. 125ff.) and the ideal of transparency (ibid., p. 151) along the same lines.
Acknowledgements  I’d like to thank two anonymous reviewers of EJPS and its editor in chief for the numerous and extremely valuable comments on an earlier version of this paper.

References

Biddle, J., & Winsberg, E. (2010). Value judgements and the estimation of uncertainty in climate modelling. In P.D. Magnus, & J. Busch (Eds.), New waves in philosophy of science (pp. 172–197). Basingstoke: Palgrave Macmillan.
Cranor, C.F. (1990). Some moral issues in risk assessment. Ethics, 101(1), 123–143.
Dessai, S., & Hulme, M. (2004). Does climate adaptation policy need probabilities? Climate Policy, 4(2), 107–128.
Douglas, H.E. (2000). Inductive risk and values in science. Philosophy of Science, 67(4), 559–579.
Douglas, H.E. (2007). Rejecting the ideal of value-free science. In H. Kincaid, J. Dupré, A. Wylie (Eds.), Value-free science? Ideals and illusions (pp. 120–139). New York: Oxford University Press.
Douglas, H.E. (2009). Science, policy, and the value-free ideal. Pittsburgh: University of Pittsburgh Press.
Dupré, J. (2007). Fact and value. In H. Kincaid, J. Dupré, A. Wylie (Eds.), Value-free science? Ideals and illusions (pp. 27–41). New York: Oxford University Press.
Elliott, K.C. (2011). Is a little pollution good for you? Incorporating societal values in environmental research. New York: Oxford University Press.
Gardiner, S.M. (2006). A core precautionary principle. The Journal of Political Philosophy, 14(1), 33–60.
Hansson, S.O. (2011). Coping with the unpredictable effects of future technologies. Philosophy & Technology, 24(2), 137–149.
Hume, D. (2000). An enquiry concerning human understanding. The Clarendon edition of the works of David Hume. Oxford: Clarendon Press.
Jeffrey, R.C. (1956). Valuation and acceptance of scientific hypotheses. Philosophy of Science, 23(3), 237–246.
Kitcher, P. (2011). Science in a democratic society. Amherst: Prometheus Books.
Knight, F. (1921). Risk, uncertainty and profit. Boston: Houghton Mifflin.
Laudan, L. (1991). Empirical equivalence and underdetermination. The Journal of Philosophy, 88(9), 449–472.
Levi, I. (1960). Must the scientist make value judgments? The Journal of Philosophy, 57(11), 345–357.
Longino, H.E. (1990). Science as social knowledge: Values and objectivity in scientific inquiry. Princeton: Princeton University Press.
Longino, H.E. (2002). The fate of knowledge. Princeton: Princeton University Press.
Mastrandrea, M.D., Field, C.B., Stocker, T.F., Edenhofer, O., Ebi, K.L., Frame, D.J., Held, H., Kriegler, E., Mach, K.J., Matschoss, P.R., Plattner, G.-K., Yohe, G.W., Zwiers, F.W. (2010). Guidance note for lead authors of the IPCC fifth assessment report on consistent treatment of uncertainties. Technical report, Intergovernmental Panel on Climate Change (IPCC).
Mitchell, S.D. (2004). The prescribed and proscribed values in science policy. In G. Wolters, & P. Machamer (Eds.), Science, values, and objectivity (pp. 245–255). Pittsburgh: University of Pittsburgh Press.
Putnam, H. (2002). The collapse of the fact/value dichotomy. Cambridge: Harvard University Press.
Quine, W.V.O. (1975). On empirically equivalent systems of the world. Erkenntnis, 9(3), 313–328.
Risbey, J. (2007). Subjective elements in climate policy advice. Climatic Change, 85(1), 11–17.
Risbey, J., & Kandlikar, M. (2007). Expressions of likelihood and confidence in the IPCC uncertainty assessment process. Climatic Change, 85(1), 19–31.
Rudner, R. (1953). The scientist qua scientist makes value judgements. Philosophy of Science, 20(1), 1–6.
Sartori, G. (1962). Democratic theory. Westport: Greenwood Press.
Schneider, S.H. (2002). Can we estimate the likelihood of climatic changes at 2100? Climatic Change, 52, 441–451.
Shue, H. (2010). Deadly delays, saving opportunities: Creating a more dangerous world? In S.M. Gardiner (Ed.), Climate ethics: Essential readings (pp. 146–162). New York: Oxford University Press.
Stainforth, D.A., Allen, M.R., Tredger, E.R., Smith, L.A. (2007). Confidence, uncertainty and decision-support relevance in climate predictions. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 365(1857), 2145–2161.
Sunstein, C.R. (2005). Laws of fear: Beyond the precautionary principle. The Seeley lectures. Cambridge: Cambridge University Press.
Weber, M. (1949). The meaning of “ethical neutrality” in sociology and economics. In E.A. Shils, & H.A. Finch (Eds.), Methodology of social sciences (pp. 1–48). Glencoe: Free Press.
Wilholt, T. (2009). Bias and values in scientific research. Studies in History and Philosophy of Science, 40, 92–101.
Williams, B.A.O. (1985). Ethics and the limits of philosophy. Cambridge: Harvard University Press.
Winsberg, E. (2010). Science in the age of computer simulation. Chicago: Chicago University Press.