Values in Science: The Case of Scientific Collaboration

Kristina Rolin*†

Much of the literature on values in science is limited in its perspective because it focuses on the role of values in individual scientists' decision making, thereby ignoring the context of scientific collaboration. I examine the epistemic structure of scientific collaboration and argue that it gives rise to two arguments showing that moral and social values can legitimately play a role in scientists' decision to accept something as scientific knowledge. In the case of scientific collaboration some moral and social values are properly understood to be extrinsic epistemic values, that is, values that promote the attainment of scientific knowledge.

Received December 2013; revised November 2014.

*To contact the author, please write to: Helsinki Collegium for Advanced Studies, P.O. Box 4, 00014 University of Helsinki, Finland; e-mail: kristina.rolin@helsinki.fi.

†I am grateful to Kevin Elliott, Brad Wray, and the reviewers for Philosophy of Science for their helpful comments on earlier versions of the manuscript. I also wish to thank the audiences at the Society for Philosophy of Science in Practice meeting in Toronto and the workshop "The Role of Values in Social Inquiry" in Copenhagen in 2013. This research has been made possible by funding from the Academy of Finland Centre of Excellence in the Philosophy of the Social Sciences at the University of Helsinki.

Philosophy of Science, 82 (April 2015), pp. 157–177. Copyright 2015 by the Philosophy of Science Association.

1. Introduction. Much of the literature on values in science focuses on the role of values in individual scientists' decision making, thereby ignoring the context of scientific collaboration (see, e.g., Machamer and Wolters 2004; Kincaid, Dupré, and Wylie 2007; Carrier, Howard, and Kourany 2008). Yet, many scientists work in research teams and publish their findings in multi-authored articles (Wray 2002, 2006; Galison 2003). Scientific collaboration is often a practical necessity because the production and analysis of evidence are too expensive and time-consuming for any individual scientist to accomplish independently (Hardwig 1991; Wagenknecht 2013). Sometimes collaboration becomes a necessity because a research project draws on a variety of expertise from different disciplines (Thagard 1999; Andersen and Wagenknecht 2013). In such cases, a research team with a division of labor is capable of carrying out a project that no individual scientist could do on their own.

Acknowledging the importance of scientific collaboration has led many philosophers to examine its implications for the social epistemology of scientific knowledge. Some philosophers suggest that scientific knowledge emerging in collaborations involves collective beliefs or acceptances (Gilbert 2000; Bouvier 2004; Wray 2006, 2007b; Staley 2007; Andersen 2010; Rolin 2010; Cheon 2014). Some others suggest that the epistemic structure of scientific collaboration is based on relations of trust and interactions among scientists (Hardwig 1991; Kusch 2002; Thagard 2010; Fagan 2011, 2012; Andersen and Wagenknecht 2013; de Ridder 2013; Frost-Arnold 2013; Wagenknecht 2013, 2014). In the former case, a research team is thought to arrive at a group view that is not fully reducible to individual views.
In the latter case, each team member is thought to rely on testimonial knowledge that is based on her trusting other team members. These two models are not competing accounts of the epistemic structure of scientific collaboration. Sometimes scientific knowledge in collaborations takes the form of collective acceptance, sometimes it is an outcome of trust-based acceptance, and at other times it takes some other form.1

1. There is a controversy over the question of whether a collective belief (Gilbert 2000) is properly understood to be a belief or an acceptance. While the belief/acceptance distinction is not clear-cut, beliefs are often seen as involuntary and acceptances as voluntary states (Elliott and Willmes 2013, 811). I use the term "collective acceptance" to acknowledge that a research group's adopting a collective view is typically done voluntarily (Häkli 2006) and as a means of realizing the group's epistemic goals (Wray 2001).

In this paper I examine the implications of scientific collaboration for the debate concerning the proper role of moral and social values in science. Much of the debate is focused on the ideal of value-free science, the view that non-epistemic values are not allowed to intrude into the decision-making processes that scientists are engaged in when they accept something as scientific knowledge. Acceptance is thought to involve a judgment that a hypothesis or a theory is sufficiently well supported that it does not need to be submitted to further investigation for the moment (Lacey 1999, 13).2

2. My definition of the value-free ideal differs from Hugh Lacey's thesis of impartiality in that the requirement of value freedom is applied not only to the appraisal of acceptability but also to what Lacey calls the adoption of strategy (2005, 979). While impartiality proposes that a hypothesis or a theory is acceptable if and only if it manifests epistemic values highly in light of available evidence, the strategies adopted prior to acceptance may be socially value-laden (2005, 980).

A number of philosophers argue that the value-free ideal is not feasible—or even if it is feasible under some specific circumstances, there is no reason to adopt it as a criterion of good science (Longino 1990, 1995; Root 1993; Lacey 1999; Kitcher 2001, 2011; Solomon 2001; Douglas 2009; Kourany 2010). Arguments against the ideal are advanced in tandem with case studies where moral and social values are claimed to play a legitimate role in acceptance (see Anderson 1995, 2004; Douglas 2000; Intemann 2001; Richardson 2010; Crasnow 2014; Elliott and McKaughan 2014). While I do not object to all such arguments, I wish to challenge the assumption that all moral and social values are non-epistemic values and that, consequently, all cases where moral and social values legitimately enter into a decision to accept something count as arguments against the value-free ideal. I argue that in the context of scientific collaboration some moral and social values are properly understood to be epistemic rather than non-epistemic values.

By epistemic values I mean values that promote the attainment of truth, either intrinsically or extrinsically. As Daniel Steel explains, an epistemic value is intrinsic when manifesting that value constitutes an attainment of or is necessary for truth, and it is extrinsic when it promotes the attainment of truth without itself being an indicator or a requirement of truth (2010, 18).
For a value to promote the attainment of truth may mean that it leads scientists to support social arrangements that are instrumental in the epistemic success of science. For example, diversity is an extrinsic epistemic value insofar as it leads scientists to cultivate a diversity of perspectives, and this in turn facilitates transformative criticism in scientific communities (Longino 2002, 131). While moral and social values are not epistemic values intrinsically, they can be argued to be extrinsic epistemic values on the grounds that they lead scientists to act in ways that are conducive to truth.

In order to explain how the group perspective on values in science differs from the individual one, in section 2 I present a review of three well-known arguments against the value-free ideal: (1) an argument from pluralism with respect to epistemic values, (2) an argument from inductive risk, and (3) an argument from value-laden background assumptions. These arguments are built on slightly different yet overlapping analyses of what it means for a scientist to accept a hypothesis or a theory and how non-epistemic values can play a legitimate role in acceptance, which is thought to be a core epistemic moment in scientific inquiry. While the analysis of acceptance underlying these three arguments is complex, acceptance in this sense can be attributed to individual scientists and research teams alike. This analysis of acceptance does not do justice to scientific collaboration because it neglects epistemically significant differences between individual and collective acceptance, on the one hand, and between individual scientists relying and not relying on testimony, on the other hand.

In section 3 I review three normative views that have been proposed as alternatives to the value-free ideal. Miriam Solomon's social empiricism builds on the argument from pluralism with respect to epistemic values, Heather Douglas's conception of scientific integrity on the inductive risk argument, and Helen Longino's social account of objectivity on the argument from value-laden background assumptions. In these three views, the value-free ideal is replaced with guidelines and norms addressed either to individual scientists or to scientific communities. While the three views offer important insights concerning the proper role of values in science, they ignore an intermediate social level in science: scientific collaboration.3

3. Philip Kitcher's (2001, 2011) well-ordered science and Janet Kourany's (2010) ideal of socially responsible science are also alternatives to the value-free ideal. I postpone a discussion of their views to another occasion.

In sections 4 and 5 I introduce a more social analysis of acceptance than the one revealed in section 2. A more social analysis can be found in the two models of understanding the epistemic structure of scientific collaboration: collective acceptance and trust-based acceptance. I identify the moral and social values that can play a legitimate role in collective and trust-based acceptance. I argue that in the context of scientific collaboration these moral and social values should be understood as extrinsic epistemic values because they promote the attainment of truth.

2. Three Arguments against the Value-Free Ideal. When one argues against the value-free ideal, it is not sufficient to show that scientific research sometimes fails to be value-free. One has to show also that the ideal in itself is not feasible—or even if it is feasible under some circumstances, there are reasons that speak against its adoption as a standard of good science. In this section I review three arguments aiming to do so.
While the plausibility of these arguments depends on case studies, I leave case studies aside and focus on analyzing the conception of acceptance underlying the three arguments. The conception of acceptance is of interest here because scientific collaboration will urge philosophers to rethink acceptance in science.

2.1. Argument from Pluralism with Respect to Epistemic Values. A number of philosophers argue that the value-free ideal is not attainable because the set of epistemic values includes a variety of criteria and desiderata that cannot be realized at the same time, and non-epistemic values can legitimately play a role in determining which epistemic values scientists emphasize when they evaluate theories (Kuhn 1977; Rooney 1992; Kitcher 1993; Longino 1995; Solomon 2001; Elliott 2013; Elliott and McKaughan 2014). Arguments aiming to undermine the value-free ideal by drawing attention to the plurality of epistemic values follow two strategies. One strategy aims to show that the plurality of epistemic values is a consequence of the plurality of epistemic goals when the goals are taken to be either significant truths (Kitcher 1993; Anderson 1995) or empirical successes (Solomon 2001). Another strategy suggests that the plurality of epistemic values is revealed by studying actual practices of science. For example, Thomas Kuhn (1977) claims that the five epistemic values of accuracy, consistency, simplicity, breadth of scope, and fruitfulness have played a role in theory choice throughout the history of physics. Longino (1995) adds to this list six other values that, she argues, can be called epistemic on equally good grounds: empirical adequacy, novelty, ontological heterogeneity, complexity of interaction, applicability to human needs, and diffusion of power.

These arguments are based on an analysis of acceptance as a particular kind of value judgment. In this analysis, acceptance consists of two moments, "valuing" and "evaluation" (McMullin 1983, 5). "Valuing" is about choosing which epistemic values are applied in a particular decision-making situation, and "evaluation" is about assessing the extent to which a theory realizes the chosen epistemic values. Non-epistemic values can legitimately play a role in "valuing" because in this role they are thought to be epistemically harmless or even beneficial if they contribute to an efficient division of research efforts in scientific communities (Kitcher 1993; Solomon 2001). But non-epistemic values are not allowed to play a role in "evaluation." Next, I turn to an argument from inductive risk that suggests that "evaluation" cannot always be value-free either.

2.2. Argument from Inductive Risk. A number of philosophers argue that the value-free ideal is not feasible because non-epistemic values have a legitimate role to play in the evaluation of risks involved in acceptance (Douglas 2000, 2007, 2009; Wilholt 2009; Steel 2010, 2013; Elliott 2011; Biddle 2013; Brown 2013).
The most often cited version of the inductive risk argument can be found in Richard Rudner's 1953 article titled "The Scientist qua Scientist Makes Value Judgments." One premise in Rudner's argument is the view that a scientist as scientist accepts or rejects hypotheses and that acceptance involves uncertainty (1953, 2). In accepting a hypothesis, a scientist has to decide whether the evidence at hand is sufficiently strong to warrant the acceptance. This decision, Rudner argues, depends on the risks involved. If a scientist accepts a false hypothesis, there may be a cost associated with this type of error. In addition, if she rejects a true hypothesis, there may be another cost associated with the other type of error. The key premise in Rudner's argument is that the assessment of the costs involved in these two mistakes is a matter of moral value judgment (1953, 3).

It is important to notice that Rudner's argument builds on a "thick" conception of acceptance. Given this conception, acceptance involves three moments: the assessment of the evidential warrant of a hypothesis, the identification of error-related risks, and a moral value judgment concerning an acceptable level of risk. Thus, if one endorses a thick conception of acceptance, then the value-free ideal is not attainable because one moment in acceptance involves non-epistemic values. While some philosophers are critical of Rudner's conception of acceptance (Jeffrey 1956; Hempel 1981; McMullin 1983; Lacey 1999, 2005; Mitchell 2004), those who defend it argue that it is more relevant to the actual practice of science than a thin conception, which involves merely the assessment of the evidential warrant of a hypothesis (Douglas 2000, 2007; Biddle 2013; Steel 2013).
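The decision-theoretic structure behind Rudner's argument can be made concrete with a minimal sketch. The code below is my own illustration rather than anything found in Rudner or his defenders; the function name, the probabilistic reading of evidential warrant, and the cost figures are all stipulated for the example. It shows only that the level of warrant required for acceptance shifts with a judgment about how bad each kind of error would be.

```python
# A minimal decision-theoretic sketch of the inductive risk argument.
# Accepting a hypothesis is warranted when the expected cost of accepting
# it is lower than the expected cost of rejecting it:
#   accept iff (1 - p) * cost_false_accept < p * cost_false_reject,
# which rearranges to p > cost_false_accept / (cost_false_accept + cost_false_reject).

def acceptance_threshold(cost_false_accept: float, cost_false_reject: float) -> float:
    """Minimum probability of truth at which acceptance is warranted."""
    return cost_false_accept / (cost_false_accept + cost_false_reject)

# If falsely accepting "this drug is safe" is judged a hundred times worse
# than falsely rejecting it, very strong evidence is required ...
print(acceptance_threshold(cost_false_accept=100.0, cost_false_reject=1.0))  # ~0.99
# ... whereas symmetric error costs warrant acceptance on a bare preponderance.
print(acceptance_threshold(cost_false_accept=1.0, cost_false_reject=1.0))    # 0.5
```

On this reading, the moral value judgment Rudner identifies is precisely the assignment of the two cost parameters; the assessment of evidential warrant alone does not fix the threshold.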
Next, I discuss yet another well-known argument against the value-free ideal.

2.3. Argument from Value-Laden Background Assumptions. A number of philosophers argue that the value-free ideal is not feasible because non-epistemic values can legitimately influence the choice of background assumptions that play a role in a scientist's decision to accept a hypothesis (Longino 1990, 2002; Anderson 1995, 2004; Intemann 2001, 2005; Hawthorne 2010; Richardson 2010; Clough 2011; de Melo-Martín and Intemann 2012). For example, Longino argues that background assumptions are needed to establish the relevance of empirical evidence to a hypothesis or a theory (1990, 43–44; 2002, 127). While background assumptions may not always "encode" social values, they often do so (1990, 216). Value-laden background assumptions should not be judged as necessarily "bad" science because it is difficult to see how evidential reasoning could proceed without them (1990, 128 and 216). Whether value-laden background assumptions are acceptable or not will depend on a community practice where they are critically evaluated and either defended, modified, or rejected in response to criticism (1990, 73–74).

Let me summarize the three arguments I have reviewed in this section. The first argument constructs acceptance as a value judgment that includes both valuing and evaluation. Non-epistemic values can legitimately play a role in the valuing of certain epistemic values. The second argument suggests that the evaluation of empirical adequacy is a more complex affair than merely determining the degree of evidential warrant. When a scientist evaluates a hypothesis, she is expected to identify error-related risks and to make a moral judgment of their seriousness. The third argument reveals yet another dimension in the structure of acceptance. When a scientist evaluates a hypothesis, she makes value-laden judgments concerning the plausibility of particular background assumptions. Whereas the second argument is forward looking in the sense that a scientist is expected to reflect on the consequences of her decision to accept something as scientific knowledge, the third argument is backward looking in the sense that acceptance is thought to build on an existing body of scientific research providing background assumptions for evidential reasoning.

The analysis of acceptance underlying the three arguments is social in the sense that it draws attention to the context of scientific reasoning. The first argument draws attention to the context of particular epistemic values, the second argument to the context-dependent consequences of accepting a hypothesis, and the third argument to context-dependent background assumptions. Yet, it is important to notice that acceptance, as it is analyzed in the three arguments, can be attributed to individuals and research teams alike. In sections 4 and 5 I argue that in order to do justice to scientific collaboration, we need an analysis of acceptance that is more social than the one revealed in this section.

3. Three Normative Approaches to Values in Science. So far I have shown that there are reasons to believe that the value-free ideal is not feasible independently of whether acceptance is attributed to individual scientists or research teams. Thus, it is appropriate to discuss some normative approaches that have been proposed as alternatives to the value-free ideal. Interestingly, the alternative normative approaches offer guidance not only for individual scientists but also for scientific communities. Yet, I argue that such guidance does not meet the challenges posed by scientific collaboration.

3.1. Social Empiricism. Solomon's social empiricism gives recommendations for individual scientists and science policy makers (2001, 150). In her view, non-epistemic values can legitimately play a role in determining which empirical successes a scientist considers most important when she decides to work with a particular scientific theory. Solomon thinks that non-epistemic values can play an epistemically beneficial role in science insofar as they generate and maintain an efficient distribution of research efforts among those theories that have some empirical successes. Such a distribution, she argues, is a prerequisite to the long-term epistemic success of science. For this reason, a normative approach to values in science should not discourage the influence of non-epistemic values at the individual level in determining a scientist's choice of one theory over another (2001, 120). Solomon's main concern is the proper functioning of scientific communities. She recommends that science policy makers take steps to cultivate diversity and dissent in scientific communities (2001, 117–18). As they cannot know in advance which research programs will be fruitful, they are better off distributing their bets among several alternative lines of inquiry.

3.2. Scientific Integrity. While the term "policy" is mentioned explicitly in the title of Douglas's book Science, Policy, and the Value-Free Ideal, her main goal is to give advice not to policy makers but to individual scientists (2009, 19).
In her view, scientific integrity consists in keeping non-epistemic values to their proper roles in scientific reasoning (2009, 88; see also 156 and 176). In order to define their proper roles, Douglas makes a distinction between a direct and an indirect role. Values play a direct role when they act as reasons in themselves to accept a hypothesis or a theory and an indirect role when they act as reasons to accept a certain level of uncertainty (2009, 96). While non-epistemic values are not allowed to play a direct role in scientific reasoning, they can legitimately play an indirect one. A direct role is not acceptable because it means that non-epistemic values play the same role as evidence does (2009, 156). An indirect role, on the other hand, is acceptable because scientists are morally responsible for their knowledge claims and the predictable consequences of making such claims (2009, 106). As Douglas herself admits, her normative approach is meant to define a minimal criterion for good scientific practice rather than defend principles for an epistemically ideal community. Given that scientific integrity is a minimal criterion, an individual scientist can try to realize it even in a community that is less than ideal from an epistemic point of view. Next, I turn to an approach that is concerned not only with individual responsibilities but also with communities.

3.3. Social Account of Objectivity. In Longino's (1990, 2002) view, non-epistemic values can legitimately play a role in a scientist's choice of background assumptions as long as no one has challenged these assumptions. Individual scientists are not held responsible for policing the role of non-epistemic values in scientific inquiry on their own. Such a responsibility would be too demanding because "there are no formal rules, guidelines, or processes that can guarantee that social values will not permeate evidential relations" (2002, 50). An individual scientist may not even be aware that her preferred background assumptions resonate with certain non-epistemic values (1990, 80). For these reasons, a social account of objectivity is needed to make sure that value-laden background assumptions can be identified and criticized.

Like Solomon, Longino is concerned with the proper functioning of scientific communities. A social account of objectivity is the view that scientific knowledge is objective to the degree that a relevant scientific community satisfies the four norms of public venues, uptake of criticism, shared standards, and tempered equality of intellectual authority (1990, 76–81; 2002, 129–31). Yet, Longino's approach differs from Solomon's in that it does not assume that scientific communities are capable of realizing the normative ideal without assigning responsibilities to individual scientists. For example, the norm of uptake means that an individual scientist has an obligation to respond to criticism.

Let me wrap up my findings. While these three accounts address both individual scientists and scientific communities, they all neglect an intermediate social level in science: research groups. The guidelines and norms they offer are meant to be valid independently of whether scientists work outside or within research groups. In both cases an individual scientist is expected to work with empirically successful theories (Solomon 2001) and acknowledge her moral responsibility for error-related risks (Douglas 2009).
In both cases she has certain obligations in virtue of being a member of a scientific community (Longino 1990, 2002). Clearly, the term "social" in Solomon's and Longino's social epistemologies means that their epistemologies are concerned with scientific communities, not with research groups. It is time to turn to the epistemic structure of scientific collaboration, which gives rise to a more social analysis of acceptance than the one we have seen so far.

4. Values in Collective Acceptance. While there is a growing amount of literature on the role of collective acceptance and trust in science, this literature has not yet been explored in connection with the debate on the proper role of values in science. The literature on collective acceptance aims to understand community-wide scientific changes (Gilbert 2000; Andersen 2010), expert advisory committees (Beatty 2006), and scientific manifestos (Bouvier 2004). The literature on trust aims to account for the role of moral virtues in science (Hardwig 1991), the epistemic importance of gender and race equality in science (Rolin 2002; Wray 2007a), and relations between scientific and lay communities (Grasswick 2010; Anderson 2011). In this section I discuss the role of values in collective acceptance, and in the next section I discuss the role of values in trust-based acceptance. I argue that insofar as collective acceptance and trust-based acceptance play an epistemic role in science, some moral and social values can play a legitimate role in acceptance. While moral and social values are often seen as non-epistemic values, these particular moral and social values are extrinsic epistemic values.

4.1. Plural Subject Account of Collective Acceptance. A number of philosophers use Margaret Gilbert's plural subject account of collective belief to understand collective acceptance in science (Wray 2001, 2007b; Bouvier 2004; Beatty 2006; Häkli 2006; Staley 2007; Andersen 2010; Rolin 2010). On such an account, a group of scientists accepts a view insofar as the group members are jointly committed to accepting the view as a body (Gilbert 2000, 39). To claim that a group accepts a view in this sense is not the same thing as to claim that all or most group members accept the view. It is possible that a group accepts a view that all or some group members do not accept as their personal view. It is not, of course, common in scientific collaborations that a group's collective view and group members' personal views are in conflict; otherwise, group members would hardly consider collaboration as an attractive option for them. But it is important to notice that in principle an individual scientist's personal and collective views can diverge, and they sometimes do (Beatty 2006; Staley 2007). A scientist can let a particular view stand as the group's position even when she has some reservations or doubts concerning it.

Given Gilbert's account of collective acceptance, moral and social values are built into the very structure of collective acceptance. As Gilbert explains it, a group's joint commitment to accept a view as a body generates obligations for group members (2000, 44). Once a group member has openly expressed a commitment to jointly accept a view as the position of the group, she is obliged not to question the group view publicly.
In some research groups an individual scientist may ask that her name be removed from the list of authors if she does not personally agree with the argument in the joint paper and is not willing to let the argument stand as the position of the group. But when she signs a joint paper, her act is usually taken to mean that she has expressed a commitment to jointly accept the content of the paper, knowing that such a commitment generates obligations. While an obligation to accept the group view is not universal in the way that some moral obligations are, it is nevertheless of a moral and social nature because it is based on an agreement among group members. Therefore, a joint commitment provides group members with a moral and social reason for asserting and supporting a view (see also Mathiesen 2006, 169).

Thus, if collective acceptance is allowed to play an epistemic role in science, then a moral and social reason is allowed to play a role in acceptance. Such a reason is a scientist's commitment to carry on the collaboration even when it means that she has to suppress some of her personal views temporarily. While such a strong commitment to collaboration may seem to be irrational from an epistemic point of view, I argue that it is not necessarily so.

4.2. Group versus Individual Justification. A common assumption is that a group's joint commitment to accept a view as a body is not effective in getting at truth (Goldman 2004; Mathiesen 2006). This assumption, I argue, is false. From an individual point of view it may seem to be epistemically irrational to accept a view on the grounds that other group members accept it and one has promised to be loyal to the group. From the point of view of the group, however, the situation is different. As with individual scientists, research groups are also expected to provide epistemic justification for their views. Groups and individuals are not different in this respect. However, group justification differs from individual justification in that it involves not merely reasoning but also an aggregation procedure. By an aggregation procedure I mean a mechanism for aggregating group members' individual views into corresponding collective views endorsed by the group as a whole. Whereas valid reasoning is likely to lead an individual to accept a consistent set of views, it is not sufficient for a group to arrive at a consistent set of collective views. As Christian List and Philip Pettit (2011) argue, a group has to settle on an aggregation procedure (or some other type of decision-making procedure) in order to achieve a consistent set of collective views. Let me explain the argument in more detail.

Let us assume that a group G includes three persons, A, B, and C, and that each member of the group is competent in deductive reasoning. Let us assume also that the task at hand is to find out whether group G is justified in believing that (p & q) is true given that each group member has already made her individual judgments concerning the truth values of p and q. For example, if each group member believes that p is true and q is true, then each group member is justified in believing that (p & q) is true. In this case it does not make a difference whether the group decides to aggregate individual judgments concerning the premises or the conclusions. In both cases, group G is justified in believing that (p & q) is true.
However, the situation is more complex when the group members disagree about the premises but nevertheless want to arrive at a collective view. Let us assume that A believes that p is true and q is false, B believes that p is false and q is true, and C believes that both p and q are true. It follows that both A and B are justified in believing that (p & q) is false and C is justified in believing that (p & q) is true. Thus, group G seems to be justified in believing that (p & q) is false when it chooses to aggregate individual conclusions by means of a majority vote. Also, group G seems to be justified in believing that p and q are true when it chooses to aggregate individual premises by means of a majority vote. This is because two group members, A and C, believe that p is true and two group members, B and C, believe that q is true. The troubling upshot is that group G seems to be justified in believing an inconsistent set of propositions: p, q, and not (p & q).
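The problem just described can be made concrete in a short sketch. The following code is my own illustration of the judgment aggregation problem, not an implementation drawn from List and Pettit; the data structure and names are stipulated for the example. It computes the group verdict on (p & q) under the two aggregation procedures and shows that they come apart.

```python
# Judgment aggregation for group G = {A, B, C} on premises p and q.
members = {
    "A": {"p": True,  "q": False},
    "B": {"p": False, "q": True},
    "C": {"p": True,  "q": True},
}

def majority(votes):
    """True iff a strict majority of the votes is True."""
    return sum(votes) > len(votes) / 2

# Premise-based procedure: aggregate judgments on p and q, then derive (p & q).
p_group = majority([m["p"] for m in members.values()])  # True (A and C)
q_group = majority([m["q"] for m in members.values()])  # True (B and C)
premise_based = p_group and q_group                     # True

# Conclusion-based procedure: each member derives (p & q), then aggregate.
conclusion_based = majority([m["p"] and m["q"] for m in members.values()])  # False (only C)

print(f"premise-based verdict on (p & q):    {premise_based}")     # True
print(f"conclusion-based verdict on (p & q): {conclusion_based}")  # False
```

Endorsing the premise-wise verdicts together with the conclusion-wise verdict would commit the group to p, q, and not (p & q) at once; settling in advance on one procedure is what blocks the inconsistency.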
The abstract nature of the judgment aggregation problem has led some philosophers to question its relevance to scientific knowledge (Magnus 2013). Yet, I argue that there is at least one practical lesson research groups can learn from the problem. In order to block the possibility of inconsistent collective views, research groups need to "collectivize reason" (Pettit 2003, 176) by settling on an aggregation procedure (or another type of decision-making procedure). The reason for this is that consistency in a set of views is a necessary condition of having justified views, for both individuals and groups. Scientific publications are expected to be consistent independently of whether they are authored by individuals or groups (Rolin 2010, 220). Given the need to "collectivize reason" in research groups, a strong commitment to scientific collaboration is epistemically rational because it enables the group to steer clear of inconsistent collective views. Also, a group's decision to follow a particular aggregation procedure requires a joint commitment on behalf of its members. Whereas group members can conduct reasoning individually, the aggregation of individual views into a collective view cannot be done individually. It has to be done by the group jointly as a body. This is the case even when one group member is chosen as the lead author who is in charge of compiling individual contributions into a consistent manuscript. The other group members need to recognize the authority of the lead author, and they are asked to give their approval to the final outcome of the writing process. Thus, the moral and social values generated by a joint commitment are extrinsic epistemic values because they are conducive to internal consistency, and internal consistency is an intrinsic epistemic value because it is a necessary condition for truth (see, e.g., Steel 2010, 18).

4.3. Objections and Replies. So far I have argued that a joint commitment to accept an aggregation procedure and its outcome is epistemically rational insofar as it enables a group to avoid inconsistency. It is important to keep in mind also that when scientists work in research groups they can achieve more ambitious epistemic goals than they could if they were working on their own. Collaboration may sometimes require an individual to make a compromise, but the compromise is balanced by the epistemic benefits of collaboration (Fallis 2006; Wray 2006). Next, I wish to eliminate two objections that may be raised against my argument.

One objection is that there is no need to enforce an internally consistent collective view by means of judgment aggregation procedures if the group members take their time to deliberate on their views. Given enough time and resources, disagreements among group members will be ironed out and the group will arrive at a collective view that is not only internally consistent but also the personal view of each group member.4 In this case, the group's view can be understood in a summative way. On a summative account, the group accepts a view if and only if all or most group members accept the view (Gilbert 2000, 37). A summative account of group views in itself does not import moral and social values into acceptance because it does not involve the notion of joint commitment.

4. Solomon (2006) argues that an aggregation procedure is to be preferred to group deliberation because the latter can give rise to the phenomenon of groupthink. However, there is a controversy over the question whether groupthink is a problem in research teams (Tollefsen 2006; Wylie 2006; Wray 2014).

Against this objection I argue that group deliberation is not a feasible ideal in all areas of contemporary science. Kent Staley (2007) argues that deliberation is difficult to implement in very large research groups, which can be found in some areas of physics. Such groups can include up to 300 or even 400 scientists. Large research groups need to strike a balance between two aims. On the one hand, they seek to avoid making false claims, and on the other, they seek to make novel and significant true claims (Staley 2007, 323). The pressure to publish novel and significant results means that there is an incentive to aggregate individual views into a collective view even when it means that some group members' personal views are dismissed for the moment. As Staley explains, there has to be a willingness to make compromises between the individual group members' personal views and the collective statement of the group (2007, 324).

Another objection to my argument is that the moral and social obligations generated by a joint commitment cannot be epistemically rational because they are likely to suppress dissent in research groups, and dissent is epistemically beneficial, as Solomon (2001) argues (see sec. 3). While I grant that dissent is an epistemic resource in scientific communities (see also Zollman 2010; Fehr 2011; Intemann 2011; Rolin 2011), I argue that imposing epistemic conformity in research groups is not a problem as long as it is balanced by a reasonable amount of dissent in scientific communities. Acknowledging the epistemic importance of dissent does not undermine my argument; instead, it supports the view that there is an asymmetry between the social epistemology of research groups and the social epistemology of scientific communities. Whereas research groups are expected to speak as if they have one voice, scientific communities are not expected to do so. One might even argue that insofar as large-scale collaborations are becoming a rule rather than an exception, philosophers should be increasingly concerned with the question of how diversity of perspectives and dissent are maintained in scientific communities.

Having countered the two objections, I conclude that a particular moral and social value can play a legitimate role in acceptance if acceptance is understood to involve not only individual but also collective acceptance.
The moral and social value is an obligation that group members have in virtue of their joint commitment to let a particular view stand as the position of the group. I do not claim that all group views involve a joint commitment on behalf of group members. In some cases, a summative account of group views will probably be satisfactory. But I do claim that in some other cases, a collective account of group views is more adequate than a summative account. This is the case in large collaborations where the group is under pressure to publish novel and significant findings without waiting for all the group members to arrive at a summative consensus via deliberation. I have argued also that the moral and social values implicit in collective acceptance are extrinsic epistemic values because they promote the attainment of truth by guaranteeing the internal consistency of collective views.

Since collective acceptance is a special case, there is a need to develop an alternative, noncollective account of the epistemic structure of scientific collaboration. For example, what Melinda Fagan (2011) calls an "interactive" account of group views is a noncollective alternative to both the collective acceptance account and the summative account of group views. On an interactive account, the epistemic structure of scientific collaboration consists of relations of trust and other interactions among group members (Thagard 2010, 279; Fagan 2011, 251; Wagenknecht 2013, 207). In the next section I argue that even if one prefers a trust-based account of group views to a collective acceptance account, moral and social values can play a legitimate role in acceptance. As in the case of collective acceptance, a particular moral and social value is properly understood to be an extrinsic epistemic value.

5. Values in Trust-Based Acceptance. Trust plays an epistemic role in science when trust in a testifier functions as a reason to accept an observation report, an experimental result, a background assumption, or some other piece of information (Hardwig 1991; Kitcher 1993; Kusch 2002; Wilholt 2013; Wagenknecht 2014). The main difference between collective and trust-based acceptance is that whereas in the former case scientific knowledge is attributed to the group as a whole, in the latter case it is attributed to individual group members. An individual group member's set of justified views is extended dramatically if trust in another group member is seen as a sufficiently good ground for epistemic justification. Thus, trust makes it possible for each group member to know more than she could know otherwise.

As John Hardwig argues, trust in a testifier involves trust in the moral and epistemic character of the testifier (1991, 700). When a scientist trusts a testifier, she trusts that the testifier is honest in giving her testimony and competent in the relevant domain. In an empirical study of two research groups Susann Wagenknecht argues that scientists use various strategies to secure trust in other group members (2014, 21). Scientists are not only engaged in question-and-answer types of interactions; sometimes they also witness the work of their collaborators in order to increase their understanding of others' contributions. Yet, trust plays an irreducible role in acceptance because other group members do not know the details as well as the person who is in charge of running an experiment (Wagenknecht 2014, 11–13).
Next, I argue that while scientists often expect to have some empirical warrant to support their trust in the epistemic character of their collaborators, the moral character of their collaborators is often taken for granted.

5.1. Default Assumption of Honesty. Honesty is often taken for granted in scientific collaborations because evidence of moral character is incomplete. When group leaders recruit scientists into their teams, they may seek evidence of the moral character of the candidate in letters of recommendation (Frost-Arnold 2013). Also, when scientists work in relatively small and stable teams, they are likely to trust the moral character of other team members because an extended experience of collaboration gives them a good reason to do so (Wagenknecht 2014). But even when there is evidence of good moral character, trust in the moral character of other team members is underdetermined by evidence. This is because the very notion of character refers to a disposition to behave in certain ways across a range of social situations. Consequently, trust in the moral character of other scientists is at least partly based on a principle of charity.

The assumption of honesty plays an even more prominent role in large-scale collaborations where scientists do not know all the other group members personally. Being members of the same research group may be a reason for one group member A to trust the moral character of an unknown group member B if A believes that a third group member C whom A finds trustworthy knows B personally and has found B's moral character flawless. But even in this case, A's trust in the moral character of B and C is based on incomplete evidence.

The upshot is that honesty functions as a default assumption in trust-based acceptance. To say that honesty is a default assumption means that a testifier is assumed to be honest unless one has a reason to doubt it. Having made a mistake is not yet a reason to doubt a scientist's honesty since many mistakes are due to oversight, lack of experience, or some other shortcoming in the scientist's epistemic character. One has a reason to doubt someone's honesty when there is evidence of intentional attempts to distort the research process (or of gross negligence leading to such distortions). The default assumption of honesty is a moral value judgment because it is accepted for a moral reason. The moral reason is the belief that it is morally wrong to doubt another group member's honesty when one does not have a reason to do so.

It follows that if acceptance is understood to include trust-based acceptance, then a moral value judgment can play a legitimate role in acceptance. Also, the default assumption of honesty is an extrinsic epistemic value because it contributes to epistemic justification in the context of scientific collaboration. While honesty is not the only requirement for a person's being a trustworthy source of information, the assumption that a person is honest in giving her testimony is one reason to consider her trustworthy. Given the default assumption of honesty, trust in a testifier can be a good reason to accept a piece of information when a person does not have first-hand evidence. In scientific collaborations trust is often a superior reason to accept a piece of information because it gives one access to the best available evidence. As Hardwig (1991) explains, trust-based acceptance does not mean that evidence does not matter in epistemic justification.
Quite the contrary, trust-based acceptance is needed precisely because evidence matters and it is too extensive and complex to be had by any other means than by trusting others (Hardwig 1991, 706). Next, I wish to reply to three objections that may be raised against my argument.

5.2. Objections and Replies. The first objection is that reliance on moral value judgments concerning honesty can be reduced if not eliminated by designing reward systems and sanctions so that they provide scientists with a strong incentive to act in an honest way. When reward systems and sanctions work well, a scientist has a nonmoral reason to trust another scientist's testimony. The nonmoral reason is her belief that the other scientist is likely to act in an honest way because as a self-interested and rational actor the other scientist understands that it is in her best interest to do so. In what Karen Frost-Arnold (2013) calls a self-interest account of trust, scientists trust each other's testimony because they believe that sanctions for betraying trust are so serious that it is in their best interest to be trustworthy.

Against this objection I argue that while a self-interest account of trust can give a partial explanation of why scientists trust each other in collaborations, it has limitations. As Hardwig points out, institutionalized control mechanisms, such as replication of results, may diminish the need to rely on the moral character of the testifier, but they cannot obviate it (1991, 707). One reason for this is that replication is not always done because it may not lead to a publication in a high-impact journal. Replication is also costly and likely to delay other research projects. For example, randomized clinical trials often remain what James R. Brown (2010) calls "one-shot" science. Given that results are not always replicated, it may take a long time to discover dishonesty. And if dishonesty is not detected, it will not be punished. For this reason it is unlikely that moral value judgments will be eliminated from the evaluation of trustworthiness. As Hardwig explains it, "There are no 'people-proof' institutions" (1991, 707). If Hardwig is right, moral value judgments can legitimately play a role in the evaluation of trustworthiness. This means that they can legitimately play a role in trust-based acceptance.

The second objection to my argument is that the moral values implicit in trust-based acceptance are so remotely related to the attainment of truth that they do not deserve to be called epistemic values. Against this objection I argue that it is based on a narrow definition of epistemic value that is in need of further defense. Given a narrow definition, the set of epistemic values includes merely intrinsic epistemic values, that is, values that are either indicators of truth or necessary for truth (Steel 2010, 15). It does not include extrinsic epistemic values, that is, values that promote the attainment of truth without themselves being indicators or requirements of truth (Steel 2010, 18). Given the narrow definition, epistemic values can have a justification independently of scientists' historical reliance on them (Douglas 2013, 801). Consequently, epistemic values can be identified independently of scientific practices as they have evolved historically. While these features may be attractive for some purposes in philosophy of science, I see no reason to limit the scope of philosophical inquiry to intrinsic epistemic values.
Extrinsic epistemic values are no less interesting for those philosophers who aim to understand actual scientific practices.

As with the second objection, the third one is also concerned with the definition of epistemic value. Someone may argue that a broad definition of epistemic value, the one including not only intrinsic but also extrinsic epistemic values, tends to blur the epistemic/non-epistemic distinction altogether. If some moral and social values are extrinsic epistemic values, then almost any values can be argued to be epistemic values, the objection goes. But this is not the case. If the set of epistemic values includes extrinsic epistemic values, then the epistemic/non-epistemic distinction is context dependent because the effectiveness of extrinsic epistemic values in bringing about the desired epistemic ends depends on the circumstances, and the circumstances are likely to vary from one context to another (Steel 2010, 20). This means that it may be a demanding task to determine whether a value is epistemic extrinsically. But it does not mean that we should abandon the distinction between epistemic and non-epistemic values.

To sum up the argument, some moral and social values deserve to be called extrinsic epistemic values because in the context of scientific collaboration they promote epistemic justification, either the justification of group views (as I have argued in sec. 4) or the justification of individual views that are based on testimony (as I have argued in sec. 5). This result adds a novel dimension to the three normative approaches I have reviewed in section 3. Moral and social values are allowed to play a role in acceptance not only because they are required by moral responsibility (Douglas 2009) or because they can generate diversity and critical perspectives (Longino 1990; Solomon 2001). Some moral and social values should be permitted to play a role in acceptance because they are woven into the epistemic fabric of scientific collaboration.

6. Conclusion. The debate on the proper role of values in acceptance has been limited so far because it has focused either on individual scientists' decision making independently of scientific collaborations or on the proper functioning of scientific communities. In order to reveal the limitations, I have reviewed three arguments against the value-free ideal and three alternatives to the value-free ideal. In each case a research team is treated as an epistemic black box, and the epistemic significance of its inner organization is overlooked.

In order to explain the group perspective on values in science, I have introduced the notions of collective acceptance and trust-based acceptance. In the case of collective acceptance the group perspective means that the group is the agent of acceptance, whereas in the case of trust-based acceptance it means that an individual group member is epistemically dependent on other group members. Most significantly, the group perspective challenges the assumption that all moral and social values are non-epistemic values. In the case of collective acceptance, a joint commitment to collaboration generates moral and social obligations that can play a legitimate role in acceptance. In the case of trust-based acceptance, a default assumption of honesty is a moral value judgment that can play a legitimate role in acceptance.
In both cases, moral and social values are extrinsic epistemic values because they promote the epistemic justification of either group views or individual views in the context of epistemic dependency. Some values are moral, social, and epistemic at the same time.

REFERENCES

Andersen, Hanne. 2010. “Joint Acceptance and Scientific Change: A Case Study.” Episteme 7 (3): 248–65.
Andersen, Hanne, and Susann Wagenknecht. 2013. “Epistemic Dependence in Interdisciplinary Groups.” Synthese 190 (11): 1881–98.
Anderson, Elizabeth. 1995. “Knowledge, Human Interests, and Objectivity in Feminist Epistemology.” Philosophical Topics 23:7–58.
———. 2004. “Uses of Value Judgments in Science: A General Argument, with Lessons from a Case Study of Feminist Research on Divorce.” Hypatia 19 (1): 1–24.
———. 2011. “Democracy, Public Policy, and Lay Assessment of Scientific Testimony.” Episteme 8 (2): 144–64.
Beatty, John. 2006. “Masking Disagreement among Experts.” Episteme 3 (1–2): 52–67.
Biddle, Justin. 2013. “State of the Field: Transient Underdetermination and Values in Science.” Studies in History and Philosophy of Science 44:124–33.
Bouvier, Alban. 2004. “Individual Beliefs and Collective Beliefs in Science and Philosophy: The Plural Subject and the Polyphonic Subject Accounts.” Philosophy of the Social Sciences 34 (3): 382–407.
Brown, James R. 2010. “One-Shot Science.” In The Commodification of Academic Research: Science and the Modern University, ed. Hans Radder, 90–109. Pittsburgh: University of Pittsburgh Press.
Brown, Matthew. 2013. “Values in Science beyond Underdetermination and Inductive Risk.” Philosophy of Science 80 (5): 829–39.
Carrier, Martin, Don Howard, and Janet Kourany, eds. 2008. The Challenge of the Social and the Pressure of Practice: Science and Values Revisited. Pittsburgh: University of Pittsburgh Press.
Cheon, Hyundeuk. 2014. “In What Sense Is Scientific Knowledge Collective Knowledge?” Philosophy of the Social Sciences 44 (4): 407–23.
Clough, Sharyn. 2011. “Gender and the Hygiene Hypothesis.” Social Science and Medicine 72:486–93.
Crasnow, Sharon. 2014. “Feminist Science Studies: Reasoning from Cases.” Paper presented at the FEMMSS conference at the University of Waterloo, August 10–13.
de Melo-Martín, Inmaculada, and Kristen Intemann. 2012. “Interpreting Evidence: Why Values Can Matter as Much as Science.” Perspectives in Biology and Medicine 55 (1): 59–70.
de Ridder, Jeroen. 2013. “Epistemic Dependence and Collective Scientific Knowledge.” Synthese 191:37–53.
Douglas, Heather. 2000. “Inductive Risk and Values in Science.” Philosophy of Science 67:559–79.
———. 2007. “Rejecting the Ideal of Value-Free Science.” In Kincaid et al. 2007, 120–39.
———. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.
———. 2013. “The Value of Cognitive Values.” Philosophy of Science 80 (5): 796–806.
Elliott, Kevin. 2011. “Direct and Indirect Roles for Values in Science.” Philosophy of Science 78 (2): 303–24.
———. 2013. “Douglas on Values: From Indirect Roles to Multiple Goals.” Studies in History and Philosophy of Science 44:375–83.
Elliott, Kevin, and Daniel McKaughan. 2014. “Nonepistemic Values and the Multiple Goals of Science.” Philosophy of Science 81 (1): 1–21.
Elliott, Kevin, and David Willmes. 2013. “Cognitive Attitudes and Values in Science.” Philosophy of Science 80 (5): 807–17.
Fagan, Melinda. 2011. “Is There Collective Scientific Knowledge? Arguments from Explanation.” Philosophical Quarterly 61 (243): 247–69.
———. 2012. “Collective Scientific Knowledge.” Philosophy Compass 7 (12): 821–31.
Fallis, Don. 2006. “The Epistemic Costs and Benefits of Collaboration.” Southern Journal of Philosophy 44 (Supplement): 197–208.
Fehr, Carla. 2011. “What Is in It for Me? The Benefits of Diversity in Scientific Communities.” In Feminist Epistemology and Philosophy of Science: Power in Knowledge, ed. Heidi Grasswick, 133–55. Dordrecht: Springer.
Frost-Arnold, Karen. 2013. “Moral Trust and Scientific Collaboration.” Studies in History and Philosophy of Science 44:301–10.
Galison, Peter. 2003. “The Collective Author.” In Scientific Authorship: Credit and Intellectual Property in Science, ed. Mario Biagioli and Peter Galison, 325–55. London: Routledge.
Gilbert, Margaret. 2000. “Collective Belief and Scientific Change.” In Sociality and Responsibility: New Essays on Plural Subject Theory, 37–49. Lanham, MD: Rowman & Littlefield.
Goldman, Alvin I. 2004. “Group Knowledge versus Group Rationality: Two Approaches to Social Epistemology.” Episteme 1 (1): 11–22.
Grasswick, Heidi. 2010. “Scientific and Lay Communities: Earning Epistemic Trust through Knowledge Sharing.” Synthese 177:387–409.
Häkli, Raul. 2006. “Group Beliefs and the Distinction between Belief and Acceptance.” Cognitive Systems Research 7:286–97.
Hardwig, John. 1991. “The Role of Trust in Knowledge.” Journal of Philosophy 88 (12): 693–708.
Hawthorne, Susan. 2010. “Embedding Values: How Science and Society Jointly Valence a Concept—the Case of ADHD.” Studies in History and Philosophy of Biological and Biomedical Sciences 41:21–31.
Hempel, Carl. 1981. “Turns in the Evolution of the Problem of Induction.” Synthese 46:389–404.
Intemann, Kristen. 2001. “Science and Values: Are Value Judgments Always Irrelevant to the Justification of Scientific Claims?” Philosophy of Science 68 (Proceedings): S506–S518.
———. 2005. “Feminism, Underdetermination, and Values in Science.” Philosophy of Science 72 (5): 1001–12.
———. 2011. “Diversity and Dissent in Science: Does Democracy Always Serve Feminist Aims?” In Feminist Epistemology and Philosophy of Science: Power in Knowledge, ed. Heidi Grasswick, 111–32. Dordrecht: Springer.
Jeffrey, Richard. 1956. “Valuation and Acceptance of Scientific Hypotheses.” Philosophy of Science 23 (3): 237–46.
Kincaid, Harold, John Dupré, and Alison Wylie, eds. 2007. Value-Free Science? Ideals and Illusions. New York: Oxford University Press.
Kitcher, Philip. 1993. The Advancement of Science: Science without Legend, Objectivity without Illusions. New York: Oxford University Press.
———. 2001. Science, Truth, and Democracy. New York: Oxford University Press.
———. 2011. Science in a Democratic Society. New York: Prometheus.
Kourany, Janet. 2010. Philosophy of Science after Feminism. Oxford: Oxford University Press.
Kuhn, Thomas. 1977. “Objectivity, Value Judgment, and Theory Choice.” In The Essential Tension: Selected Studies in Scientific Tradition and Change, 320–39. Chicago: University of Chicago Press.
Kusch, Martin. 2002. Knowledge by Agreement. Oxford: Oxford University Press.
Lacey, Hugh. 1999. Is Science Value Free? Values and Scientific Understanding. London: Routledge.
———. 2005. “On the Interplay of the Cognitive and the Social in Scientific Practices.” Philosophy of Science 72 (5): 977–88.
List, Christian, and Philip Pettit. 2011. Group Agency: The Possibility, Design, and Status of Corporate Agents. Oxford: Oxford University Press.
Longino, Helen. 1990. Science as Social Knowledge. Princeton, NJ: Princeton University Press.
———. 1995. “Gender, Politics, and the Theoretical Virtues.” Synthese 104 (3): 383–97.
———. 2002. The Fate of Knowledge. Princeton, NJ: Princeton University Press.
Machamer, Peter, and Gereon Wolters, eds. 2004. Science, Values, and Objectivity. Pittsburgh: University of Pittsburgh Press.
Magnus, P. D. 2013. “What Scientists Know Is Not a Function of What Scientists Know.” Philosophy of Science 80 (5): 840–49.
Mathiesen, Kay. 2006. “The Epistemic Features of Group Belief.” Episteme 2 (3): 161–75.
McMullin, Ernan. 1983. “Values in Science.” In PSA 1982: Proceedings of the 1982 Biennial Meeting of the Philosophy of Science Association, vol. 2, ed. Peter D. Asquith and Thomas Nickles, 3–28. East Lansing, MI: Philosophy of Science Association.
Mitchell, Sandra. 2004. “The Prescribed and Proscribed Values in Science Policy.” In Machamer and Wolters 2004, 245–55.
Pettit, Philip. 2003. “Groups with Minds of Their Own.” In Socializing Metaphysics: The Nature of Social Reality, ed. Frederick Schmitt, 167–93. Lanham, MD: Rowman & Littlefield.
Richardson, Sarah. 2010. “Feminist Philosophy of Science: History, Contributions and Challenges.” Synthese 177:337–62.
Rolin, Kristina. 2002. “Gender and Trust in Science.” Hypatia 17 (4): 95–118.
———. 2010. “Group Justification in Science.” Episteme 7 (3): 215–31.
———. 2011. “Diversity and Dissent in the Social Sciences: The Case of Organization Studies.” Philosophy of the Social Sciences 41 (4): 470–94.
Rooney, Phyllis. 1992. “On Values in Science: Is the Epistemic/Non-epistemic Distinction Useful?” In PSA 1992: Proceedings of the 1992 Biennial Meeting of the Philosophy of Science Association, vol. 1, ed. David Hull, Micky Forbes, and Kathleen Okruhlik, 13–22. East Lansing, MI: Philosophy of Science Association.
Root, Michael. 1993. Philosophy of Social Science: The Methods, Ideals, and Politics of Social Inquiry. Oxford: Blackwell.
Rudner, Richard. 1953. “The Scientist qua Scientist Makes Value Judgments.” Philosophy of Science 20 (1): 1–6.
Solomon, Miriam. 2001. Social Empiricism. Cambridge, MA: MIT Press.
———. 2006. “Groupthink vs. the Wisdom of the Crowds: The Social Epistemology of Deliberation and Dissent.” Southern Journal of Philosophy 44 (Supplement): 28–42.
Staley, Kent. 2007. “Evidential Collaborations: Epistemic and Pragmatic Considerations in ‘Group Belief.’” Social Epistemology 21 (3): 321–35.
Steel, Daniel. 2010. “Epistemic Values and the Argument from Inductive Risk.” Philosophy of Science 77 (1): 14–34.
———. 2013. “Acceptance, Values, and Inductive Risk.” Philosophy of Science 80 (5): 818–28.
Thagard, Paul. 1999. How Scientists Explain Disease. Princeton, NJ: Princeton University Press.
———. 2010. “Explaining Economic Crises: Are There Collective Representations?” Episteme 7 (3): 266–83.
Tollefsen, Deborah. 2006. “Group Deliberation, Social Cohesion, and Scientific Teamwork: Is There Room for Dissent?” Episteme 3 (1–2): 37–51.
Wagenknecht, Susann. 2013. “Collaboration in Scientific Practice: A Social Epistemology of Research Groups.” PhD diss., Aarhus University.
———. 2014. “Facing the Incompleteness of Epistemic Trust: Managing Dependence in Scientific Practice.” Social Epistemology. doi:10.1080/02691728.2013.794872.
Wilholt, Thorsten. 2009. “Bias and Values in Scientific Research.” Studies in History and Philosophy of Science 40:92–101.
———. 2013. “Epistemic Trust in Science.” British Journal for Philosophy of Science 64:233–53.
Wray, K. Brad. 2001. “Collective Belief and Acceptance.” Synthese 129:319–33.
———. 2002. “The Epistemic Significance of Collaborative Research.” Philosophy of Science 69 (1): 150–68.
———. 2006. “Scientific Authorship in the Age of Collaborative Research.” Studies in History and Philosophy of Science 37:505–14.
———. 2007a. “Evaluating Scientists: Examining the Effects of Sexism and Nepotism.” In Kincaid et al. 2007, 87–106.
———. 2007b. “Who Has Scientific Knowledge?” Social Epistemology 21 (3): 337–47.
———. 2014. “Collaborative Research, Deliberation, and Innovation.” Episteme 11 (3): 291–303.
Wylie, Alison. 2006. “Socially Naturalized Norms of Epistemic Rationality: Aggregation and Deliberation.” Southern Journal of Philosophy 44 (Supplement): 43–48.
Zollman, Kevin. 2010. “The Epistemic Benefits of Transient Diversity.” Erkenntnis 72:17–35.