(Accepted for publication in Philosophy of Science, subject to revision after presentation at the 2016 PSA meeting)

Using Democratic Values in Science: An Objection and (Partial) Response [1]

S. Andrew Schroeder (aschroeder@cmc.edu), Claremont McKenna College

Draft of June 2016

[1] For comments on earlier drafts of this paper, I thank Alex Rajczi and the students in a seminar on science and values at Claremont McKenna College. For discussions on related topics, I thank Gil Hersch, Daniel Steel, and Branwen Williams. This work was supported in part by a research grant from the Claremont McKenna College Center for Innovation and Entrepreneurship.

Abstract

Many philosophers of science have argued that social and ethical values have a significant role to play in core parts of the scientific process. A question that naturally arises is: when such value choices need to be made, which or whose values should be used? A common answer to this question turns to political values — i.e. the values of the public or its representatives. In this paper, I argue that this imposes a morally significant burden on certain scientists, effectively requiring them to advocate for policy positions they strongly disagree with. I conclude by discussing under what conditions this burden might be justified.

1. Values in Science and the Political View

By now, most philosophers of science probably agree that there is an important place for so-called contextual (i.e. personal, ethical, political) values in core parts of the scientific process, especially in areas where science is connected to policy-making. Values may appropriately play a role in evaluating evidence (Douglas 2009), choosing scientific models (Elliott 2011), structuring quantitative measures (Reiss 2013, ch. 8; Stiglitz, Sen, and Fitoussi 2010; Hausman 2015), and/or in preparing information for presentation to non-experts (Elliott 2006; Hardwig 1994; Resnik 2001; Schroeder 2016). The natural follow-up question has received less sustained attention: when scientists should make use of values, which (or whose) values should they use?[2]

[2] In some cases, the justification for incorporating values into the scientific process dictates an answer. Feminist critiques of historically androcentric fields, for example, suggest that non-androcentric values are needed as a corrective. I set aside such cases in this paper.

In some cases, philosophers of science criticize a value choice on substantive ethical grounds (e.g. Shrader-Frechette 2008; Hoffmann and Stempsey 2008). This suggests that the values to be used are the objectively correct ones. A second common view gives scientists latitude to choose whatever (reasonable) values they prefer or think best, usually supplemented by a requirement of transparency. This is suggested by many existing codes of scientific ethics, which impose few constraints on scientists in making such choices.[3] Finally, a third view says that scientists ought to use the appropriate political values — that is, the values held or endorsed by the public or its representatives — at least when those values are informed and substantively reasonable.[4] The most straightforward argument for this view grounds it in considerations of democracy or political legitimacy. If certain value choices are going to ultimately influence policy, then the public or its representatives have a right to make those choices (Douglas 2005; Intemann 2015; cf. Steele 2012; Kitcher 2001).

[3] Mara Walli, Matthew Wong, and I discuss this at length in a work-in-progress.

[4] I set aside, then, cases where the values, say, of a policy-maker are unreasonable, in the sense that they lie outside the range of values that ought to be tolerated in a liberal society. In such cases, an advocate of the political view may permit or require scientists to reject those unreasonable values. (See e.g. Resnik 2001.) Also, in this paper I will set aside the important question of what the political view ought to say when the values of the public diverge from the values of policy-makers. The answer to this question, I think, will depend on one's theory of political representation.

There are, of course, further possibilities, and these views can be combined in more complex ways (e.g. requiring scientists to use political values in some domains, while permitting them to use their personal values in others). But if, for simplicity, we stick to these three primary options, I think the third, which I will call the political view, is the most attractive. More precisely, I think that in most cases where values are called for in core parts of the scientific process, scientists should privilege political values.[5]

[5] This, of course, is proposed as a principle of professional ethics, not e.g. a legal requirement.

The most obvious concern with this view, and one that has received much attention from its advocates, is that it doesn't seem practical. It isn't feasible to ask citizens or policy-makers to weigh in at every point in the scientific process where values are required, and even if we could, non-experts often will not have the scientific background to fully understand the options before them. Substitutes for actual participation on the part of policy-makers or the public, such as asking scientists to predict what the public would choose or to determine what values policy-makers would hold upon reflection, seem to place unreasonable epistemic demands on scientists. Douglas (2005), Intemann (2015), Guston (2004), and others have argued that these problems aren't insurmountable, by suggesting specific ways that the concerns of policy-makers and the public can be brought into the scientific process. And Kevin Elliott (2006; 2011) has suggested a more general way we might make progress.

The political view goes hand-in-hand with a view of the relationship between science and policy that is widely held: that the role of a scientist is to promote informed decision-making by policy-makers.[6] Bioethicists have extensively discussed how health care professionals can promote informed decision-making on the part of patients and research subjects. Theoretical and empirical research has led to a range of suggestions for how physicians can promote informed decision-making, even in cases where a patient's values may be uncertain, different research subjects may hold different values, and so forth. Elliott's hope is that many of these suggestions can be adapted to the scientific case, or at least that a parallel research program could be carried out, informed by the work of bioethicists.[7] It is, of course, far from established that these proposals will work, but the range of options on the table strikes me as cause for optimism. And even if these solutions don't work in all cases, there is still bite to the political view, since it could still tell scientists to use political values when they can determine those values.

[6] See also Resnik (2001), Martin and Schinzinger (2010), and Schroeder (2016) for theoretical defenses of this idea, which is consonant with the mission statements of many scientific organizations and associations.

[7] See also Schroeder (2016) for how this might go.
Accordingly, in this paper I would like to describe a different and, I think, deeper concern with the political view, one which has been conspicuously absent from the literature thus far. In requiring scientists to guide certain aspects of their work by political values, we will sometimes in effect ask that they support political causes they may personally oppose and bar them from fully advocating for their preferred policy measures. We are, then, depriving scientists of important political rights possessed by the general public. In the remainder of this paper, I will spell out this objection more fully and explain why I think it has significant moral force. In the end, I will suggest that although there is reason to think that the objection doesn't ultimately undermine the political view, it nevertheless constitutes a significant cost that accompanies that view, which its proponents need to acknowledge.

2. Two Cases Where the Political View Seems Troublesome

The literature on values in science is vast and diverse, and so it will be useful to have some particular examples in mind. First, consider Douglas's (2000; 2009) argument that scientists should or must appeal to value judgments when resolving certain uncertainties that arise during the scientific process. Scientists conducting research into the potential carcinogen dioxin, for example, were faced with liver samples which had tumors that could not clearly be categorized as malignant or benign. In resolving such borderline or ambiguous cases, Douglas argues that scientists should appeal to contextual values when the constitutive norms of science don't dictate any resolution. In this case, health-protective values would lead scientists to classify borderline samples as malignant, while concerns about overregulation would lead scientists to classify those same samples as benign (Douglas 2000).

Second, consider the many choices that scientists have to make when preparing their results for presentation. How should uncertainty be characterized? (Should 90% or 95% confidence intervals be used?) Which study results should be highlighted? (Which drug side effects should be discussed at length, and which included as part of a long list?) How should statistics be summarized? (As means or medians? Should results be broken down by gender, or presented only in aggregate?) In making choices like these, scientists frequently must appeal to values — to decide, for example, which pieces of information are important and which are not.[8]

[8] For discussions, see Elliott (2006), Hardwig (1994), Keohane, Lane, and Oppenheimer (2014), Resnik (2001), and Schroeder (2016).

It is, I presume, fairly uncontroversial that these value choices — how to resolve uncertainties in the research process and how to present results — can influence policy in foreseeable ways. Douglas, for example, argues that this is the case in the dioxin studies. Classifying borderline samples as malignant will make dioxin appear to be a more potent carcinogen, likely leading policy-makers to regulate it more stringently (2000, 571). Keohane, Lane, and Oppenheimer (2014) show how a presentation choice made by the Intergovernmental Panel on Climate Change led to poor policy outcomes, which likely could have been avoided by presenting information differently. More generally, we know from a wealth of studies in psychology and behavioral economics that the way information is presented to someone can strongly influence her subsequent choices (Thaler and Sunstein 2008), and there have been several influential commentaries calling for scientists to more carefully "frame" their results (Nisbet and Mooney 2007; Lakoff 2010). So it seems straightforward that the value choices made by scientists can predictably affect policy.
If these value choices can influence policy, then in directing scientists to make them in accordance with political values — as opposed to the scientists' personal values — we are asking scientists to characterize policy-relevant material in a way that may promote an outcome they strongly disfavor. For example, suppose the scientists in Douglas's dioxin study value public health much more than they value keeping industry free from overregulation, but the public and its elected representatives have the opposite view. Further, suppose both views are substantively reasonable, in that they are within the range of policies eligible for adoption through democratic processes. In this case, the political view would tell the scientists to categorize borderline samples as benign, since that would better cohere with the public's values. This could make dioxin appear to have minimal carcinogenic effects, predictably leading to less regulation than would have occurred had the scientists classified borderline samples according to their own, health-protective values. Similarly, suppose an environmental economist conducting an impact study of a proposed construction project is herself deeply committed to the preservation of natural spaces. Nevertheless, if the public is strongly committed to economic development, the political view would require her to put front-and-center a detailed breakdown of the economic consequences of construction, while describing the ecological costs more briefly or in a less prominent place — likely frustrating her desire for preservation.

Notice that the concern here is not simply that scientists are being asked to provide information that will lead to an outcome they disfavor. I take it that any reasonable approach to scientific ethics will require that scientists communicate honestly, even in cases where that promises to yield policies they don't like. Similarly, I presume that scientists must also be forbidden from presenting information in ways that, though technically accurate, are nevertheless misleading. The problem here is that Douglas's scientists are being asked to characterize results in one way (as benign) that could, with equal scientific validity, have been characterized differently (as malignant). And our environmental economist is being asked to present her results in one way (highlighting economic benefits), when an alternate presentation (one highlighting ecological costs) would be equally honest, accurate, objective, transparent, clear, and so forth. In each case, then, we have a collection of underlying data which can be described or characterized in different ways, neither of which appears to be more scientifically valid than the other. The political view insists that scientists choose the description grounded in values they don't accept and which seems likely to promote policy outcomes they disfavor.
In this respect, the political view requires scientists to in effect advocate for, or at least tilt the playing field towards, political views they disagree with.[9]

[9] Can't we let the scientists advocate for their preferred positions in other ways? We could let scientists present their preferred interpretation separately. But if the political view is to have bite, presumably these alternate results will have to be clearly designated as such and offered in a less prominent place (e.g. in an appendix or online supplement). And we should of course permit scientists to advocate for their views outside of their scientific papers/reports. But it seems likely that these (private) statements will carry much less policy weight than their scientific ones.

3. Elliott and the Principle of Helpfulness

This seems clearly to be a significant imposition on scientists and thus a cost of the political view. It is therefore surprising that, so far as I can tell, philosophers who have argued for the political view have not commented on it. This is most striking in Elliott's work. Elliott, recall, argues that scientists should aim to promote informed decision-making among policy-makers, in something like the way physicians should aim to promote informed decision-making among patients. Standard accounts in bioethics say that it is the patient's values that carry the day: in normal cases, the physician's job is to help a patient make decisions that cohere with her own values. If the scientific case is analogous, then the scientist's job is to help policy-makers make decisions that cohere with their (or the public's) values. This, in turn, suggests that scientists should use political values when resolving uncertainties, presenting results, and so forth. In other words, Elliott's proposal seems to imply the political view.[10]

[10] In some work, Elliott appears to suggest that transparency about values may be enough (Elliott and Resnik 2014). That is, he doesn't seem to place (many) constraints on scientists' value choices, so long as they are open about those choices. If that is Elliott's view — and it is not clear to me that it is — it strikes me as in tension with his insistence that scientists promote informed decision-making. Surely I can better help you make a decision that coheres with your values by working from your values, rather than by working from my own values (even if I am open about what I am doing). Further, even if scientists are open about their value choices, policy-makers frequently won't have the technical expertise to be able to reinterpret a scientific study, replacing one set of values (the scientist's) with another (their own). (If values could so easily be swapped out by non-specialists, then much of the debate about values would be unimportant. Transparency is all we would require.)

The main defense Elliott offers for this view, however, relies on Scanlon's "Principle of Helpfulness":

    Suppose I learn, in the course of conversation with a person, that I have a piece of information that would be of great help to her because it would save her a great deal of time and effort in pursuing her life's project. It would surely be wrong of me to fail (simply out of indifference) to give her this information when there is no compelling reason not to do so.[11]

[11] Scanlon (1996, 224), quoted in Elliott (2011, 139).

Elliott sums up the idea this way: "[I]n situations where one can significantly help another individual by engaging in an action that requires little sacrifice, it is morally unacceptable not to help" (2011, 139). If the political view, however, requires characterizing data or presenting information in ways that promote policy choices a scientist strongly opposes, then this Principle doesn't apply. When the pro-health scientist is required to classify ambiguous samples as benign, that does involve a sacrifice. A refusal to do so — which would hinder the pro-industry policy-maker's ability to make an informed regulatory decision — would not be done "simply out of indifference". It would be done out of the scientist's desire to protect public health.
(Similar things, obviously, can be said about the environmental economist asked to highlight the economic aspects of a proposed construction project.)

Scanlon's Principle of Helpfulness is a quite weak one, applying only in cases where the agent in question can put forward no significant burden of compliance. That Elliott uses it to justify his informed decision-making framework, and implicitly the political view, suggests that he thinks such a view doesn't impose significant burdens on scientists. But if what I've said has been correct, that is wrong. Even if the political view is justified — and, as I've said, I think it is — we need to recognize that it asks a lot of scientists in cases where their values diverge from those of the relevant political body.

4. Physicians vs. Scientists

This, however, brings up an interesting question. If Elliott is right that the scientific case is analogous to the biomedical case, then shouldn't informed consent requirements in medicine be treated as similarly burdensome? Few bioethicists, though, would have sympathy for a physician who claimed that seeking informed consent constituted a significant ethical burden. (They may have sympathy for the claim that seeking informed consent is burdensome in more mundane ways — e.g. too time-consuming — but those complaints seem very unlike the scientists'.) I think that there is an important difference between the cases, which will help us to more clearly understand why the scientist is often burdened in a way that carries moral weight, while the physician normally is not.

We can see this by constructing a case which seems to put a physician in a position like the scientist's. Consider Jane, a doctor who strongly believes that the end of life for terminal patients is greatly enhanced by effective pain management, even if doing so shortens the patient's life or impairs his consciousness. For this reason, Jane has chosen palliative care as her specialty, making it her life's work to help dying patients avoid unnecessary pain. One of her patients, John, has continually insisted that he wants to remain as lucid as possible, even if that means agony. As he lies there, in agony, Jane suspects that if she framed the information properly — highlighting a medication's ability to relieve pain, while downplaying its cognitive effects — she might be able to get John to accept it. And accepting the medication, Jane strongly believes, would be much better for John. Nevertheless, standard interpretations of informed consent forbid her from doing so. Knowing that John is especially concerned about lucidity, she is ethically bound to highlight that information when informing him of his options. Unsurprisingly, John declines the pain medication and experiences what Jane regards as an awful death — precisely the kind of thing she went into palliative care to prevent.
Like our pro-health scientist, Jane has been asked to present information in a way that ultimately frustrates her deeply-valued goals. But imagine Jane complains to the ethics board at her hospital, arguing that it is burdensome to ask her to highlight to John the effects of pain medication on lucidity, because doing so would frustrate her deeply-held values. This complaint doesn't strike me as at all compelling. Why? Because Jane's values shouldn't hold any sway over John's medical choices. John has the right to reject pain medication, whatever Jane (or just about anyone else) thinks about it. Put another way, John has no obligation to take Jane's wishes into consideration when he makes his decision. His decision is ultimately his.

Now, imagine our pro-health scientist complains to her ethics committee, asserting that it is burdensome to ask her to present her data in a pro-industry light, when it could with equal scientific validity be presented in a pro-health light, because doing so would frustrate her deeply-held concern for public health. Or imagine the environmental economist complaining about having to foreground the economic benefits of the proposed construction project, since doing so will make it more likely that the project is approved and another natural space will be bulldozed. If we assume that the scientists are citizens of the society in question, then their situation is different from Jane's. As citizens in a democracy, their views should hold some sway over their government's policy choices. A government does have an obligation to take its citizens' views into consideration when making policy decisions. And when the government ultimately acts, it does so on the scientists' behalf. The decision is, in part, the scientists'. The scientists, then, are stakeholders and even part-decision-makers in the associated policy decisions, in a way that Jane is not a stakeholder in John's decision. This is true even if Jane cares more about John's decision than our scientists care about the policy decisions.

We can see, then, that the political view isn't burdensome simply because it directs scientists to promote or advocate for outcomes they disfavor. It is burdensome because it sometimes directs scientists to promote or advocate for disfavored views, on matters that they have a right to speak on, to a body that purports to act on their behalf. This is what gives their burden its moral significance.[12]

[12] What about cases where the scientists are not citizens of the society in question? In some cases, we can still make out a stakeholder claim. (When it comes to climate change, for example, we are all stakeholders in U.S. climate policy.) But such cases raise complications which I unfortunately can't discuss in a short paper like this one.

5. Justifying the Burdens of the Political View

Some scientists have recognized the burdens that even neutrality — let alone the political view — would impose on them:

    Conservation biology is inescapably normative. Advocacy for the preservation of biodiversity is part of the scientific practice of conservation biology. If the editorial policy of or the publications in [the journal] Conservation Biology direct the discipline toward an "objective, value-free" approach, then they do not educate and transform society… To pretend that the acquisition of "positive knowledge" alone will avert mass extinctions is misguided… Without openly acknowledging such a perspective, conservation could become merely a subdiscipline of biology, intellectually and functionally sterile and incapable of averting an anthropogenic mass extinction. (Barry and Oelschlaeger 1996)[13]

[13] This article was followed by a collection of commentaries, most of which generally supported the authors' views. Similar proposals seem to crop up frequently among conservation biologists, and are generally endorsed by those in the field (Marris 2006).
Most conservation biologists enter that field because of a strong commitment to the value of biodiversity and the preservation of nature (Marris 2006). Similar things are surely true of other scientific disciplines. (My experience has been that public health researchers and economists studying inequality disproportionately share certain political values.) To the extent that these values diverge from the values of the public and its representatives, the political view would require these scientists to continually characterize their results in ways structured by a value system they find unacceptable. (In this respect, things would be quite different for, say, climate scientists. Although their work is controversial, it nevertheless is founded on values that are widely shared. The potentially catastrophic consequences of climate change are ones that virtually everyone cares about. Climate change deniers typically object to the empirical claims made by climate scientists, not to the basic values they hold.)

Is it fair, then, to tell a conservation biologist, who perhaps entered the field because of her love for natural spaces and has spent the bulk of her life collecting information that she hopes can be used to preserve them, that she is nevertheless ethically bound to resolve uncertainties in her research in ways favorable to economic growth, or to present her results in ways that highlight the economic value (as opposed to, say, the private or aesthetic value) of undeveloped land?

I don't have a full answer to this question — such an answer would require more empirical information, as well as a fuller discussion of political philosophy — but I think we can see how the argument would go. There are a range of situations in which we impose significant restrictions on speech and advocacy for people in important social positions. The Code of Conduct for U.S. judges, for example, bars judges from publicly endorsing candidates for political office and from making speeches for political organizations.[14] Uniformed U.S. military personnel are not permitted to participate in political fundraising, speak at political events, or display political signs, even on their private vehicles.[15] Other constraints on speech and advocacy seem ethically appropriate for politicians, police officers, lawyers, and others. So, if there is an important public good served by constraining scientists' advocacy, it doesn't seem in principle problematic to do so.

[14] http://www.uscourts.gov/judges-judgeships/code-conduct-united-states-judges

[15] http://www.dtic.mil/whs/directives/corres/pdf/134410p.pdf

Two arguments along these lines seem promising. First, a distinctly political approach might argue that although imposing this burden on scientists does restrict important political rights of speech and advocacy, it is done in order to expand the political rights of others. By requiring scientists to work from the values of the public, the ability of the public to make informed policy choices and to effectively advocate for their own positions is enhanced.
Thus, although the political view constitutes a loss of political freedom to scientists, that loss is more than balanced by the gain in political freedom to the public as a whole. (A view like this seems generally consistent with an approach to democracy like Brettschneider's (2007).) Second, a straightforwardly consequentialist argument could point out the terrible consequences that threaten to follow if the public and/or policy-makers distrust scientific results. One of the primary arguments that has been put forward in favor of informed-consent approaches in bioethics has been that they promote trust on the part of patients. Similarly, Elliott's informed decision-making approach — which implies the political view — seems like a promising way to promote trust in science (Elliott 2011, 133-6; cf. Hardwig 1994; Resnik 2001). If, then, the political view proves to be an effective way of promoting public trust in science, which in turn heads off the problems that ensue when policy-makers disregard science, that could justify imposing significant burdens on scientists.

Neither of these defenses, of course, is anywhere near complete. But both do strike me as quite reasonable, and so I don't think the concerns I've discussed in this paper should lead proponents of the political view to give up that position. That said, it is important to note the form that these defenses take. Neither attempts to show that the burden on scientists is not morally significant (as, perhaps, we might be inclined to say about the complaint of the palliative care physician). Instead, they each point to compensating benefits — not necessarily enjoyed by the scientists in question — which morally outweigh the scientists' burden. This means that the political view, even if it is justified, comes at a real cost to scientists, which is something its proponents need to acknowledge.
References

Barry, Dwight and Max Oelschlaeger. 1996. "A Science for Survival: Values and Conservation Biology." Conservation Biology 10: 905-11.

Brettschneider, Corey. 2007. Democratic Rights: The Substance of Self-Government. Princeton University Press.

Douglas, Heather. 2000. "Inductive Risk and Values in Science." Philosophy of Science 67: 559-79.

Douglas, Heather. 2005. "Inserting the Public Into Science." In Democratization of Expertise? Exploring Novel Forms of Scientific Advice in Political Decision-Making, ed. Sabine Maasen and Peter Weingart, 153-169. Springer.

Douglas, Heather. 2009. Science, Policy, and the Value-Free Ideal. University of Pittsburgh Press.

Elliott, Kevin C. 2006. "An Ethics of Expertise Based on Informed Consent." Science and Engineering Ethics 12: 637-61.

Elliott, Kevin C. 2011. Is a Little Pollution Good for You? Incorporating Societal Values in Environmental Research. Oxford University Press.

Elliott, Kevin C. and David B. Resnik. 2014. "Science, Policy, and the Transparency of Values." Environmental Health Perspectives 122: 647-50.

Guston, David. 2004. "Forget Politicizing Science. Let's Democratize Science!" Issues in Science and Technology, Fall 2004.

Hardwig, John. 1994. "Toward an Ethics of Expertise." In Professional Ethics and Social Responsibility, ed. Wueste, 83-101. Rowman and Littlefield.

Hausman, Daniel. 2015. Valuing Health: Well-Being, Freedom, and Suffering. Oxford University Press.

Hoffmann, George and William Stempsey. 2008. "The Hormesis Concept and Risk Assessment: Are There Unique Ethical and Policy Considerations?" BELLE Newsletter 14: 11-17.

Intemann, Kristen. 2015. "Distinguishing Between Legitimate and Illegitimate Values in Climate Modeling." European Journal for Philosophy of Science 5: 217-32.

Keohane, Robert O., Melissa Lane, and Michael Oppenheimer. 2014. "The Ethics of Scientific Communication Under Uncertainty." Politics, Philosophy & Economics 13: 343-368.

Kitcher, Philip. 2001. Science, Truth, and Democracy. Oxford University Press.

Lakoff, George. 2010. "Why It Matters How We Frame the Environment." Environmental Communication 4: 70-81.

Marris, Emma. 2006. "Should Conservation Biologists Push Policies?" Nature 442: 13.

Martin, Mike and Roland Schinzinger. 2010. Introduction to Engineering Ethics (2nd ed.). McGraw-Hill.

Nisbet, Matthew and Chris Mooney. 2007. "Framing Science." Science 316: 56.

Reiss, Julian. 2013. Philosophy of Economics: A Contemporary Introduction. Routledge.

Resnik, David. 2001. "Ethical Dilemmas in Communicating Medical Information to the Public." Health Policy 55: 129-49.

Scanlon, Thomas M. 1996. What We Owe to Each Other. Harvard University Press.

Schroeder, S. Andrew. 2016. "Communicating Scientific Results to Policy-Makers." Paper presented at the American Philosophical Association Conference (Pacific Division). Available at .

Shrader-Frechette, Kristin. 1994. Ethics of Scientific Research. Rowman and Littlefield.

Shrader-Frechette, Kristin. 2008. "Ideological Toxicology: Invalid Logic, Science, Ethics About Low-Dose Pollution." BELLE Newsletter 14: 39-47.

Steele, Katie. 2012. "The Scientist qua Policy Advisor Makes Value Judgments." Philosophy of Science 79: 893-904.

Stiglitz, Joseph E., Amartya Sen, and Jean-Paul Fitoussi. 2010. Mis-measuring Our Lives: Why GDP Doesn't Add Up. The New Press.

Thaler, Richard and Cass Sunstein. 2008. Nudge. Yale University Press.