Scientific Explanation: Putting Communication First

Angela Potochnik
Department of Philosophy, ML 0374
University of Cincinnati
Cincinnati, OH 45221-0374
angela.potochnik@uc.edu

Abstract

Scientific explanations must bear the proper relationship to the world: they must depict what, out in the world, is responsible for the explanandum. But explanations must also bear the proper relationship to their audience: they must be able to create human understanding. With few exceptions, philosophical accounts of explanation either ignore entirely the relationship between explanations and their audience, or else demote this consideration to an ancillary role. In contrast, I argue that considering an explanation’s communicative role is crucial to any satisfactory account of explanation.

Acknowledgments

Thanks to my co-symposiasts Laura Franklin-Hall, Arnon Levy, and Michael Strevens for an interesting exchange, and to Levy and Strevens for comments on this paper. This research was supported by the Charles Phelps Taft Research Center at the University of Cincinnati.

1 Ontic and Communicative Senses of Explanation

Several philosophers have pointed out that the term “explanation” is used to mean different things. According to Craver (2014), for example, the subject that takes the verb “to explain” might be four different types of things: something out in the world, a person, a scientific representation, or a mental representation. Imagine a teaspoon of salt settling at the bottom of a beaker of water, rather than dissolving. One sense of the verb, what Craver calls the ontic use, enables us to say things like, “the solution’s having reached its saturation point explains why no more salt would dissolve in it.” In this case something out in the world, a state of affairs, is doing the explaining. In a second sense, the communicative use, we say things like “the chemist explained to her audience why no more salt would dissolve in the solution.” And in a third, representational use, we could say, “the solubility graph explains why no more salt would dissolve in the solution.”

Craver argues that the ontic sense of explanation is basic. He says, “scientific explanations are constructed and communicated by limited cognitive agents with particular pragmatic orientations. These topics are interesting, but they are downstream from discussions of what counts as an explanation for something else” (2014, 29). Strevens (2008) agrees. In his view, “what explains a given phenomenon is a set of causal facts... The communicative acts that we call explanations are attempts to convey some part of this explanatory causal information” (6). As Strevens notes, this ontological focus is traditional for philosophical accounts of scientific explanation. He provides a vivid metaphor: “a philosopher of explanation will... occasionally discuss communicative conventions, just as an astronomer might study atmospheric distortion so as to more clearly see the stars” (2008, 6).

On an ontic approach to explanation, then, communicative requirements are taken to merely distort or edit ontic explanations, and the latter are the appropriate target for philosophical accounts of explanation. Just as atmospheric distortion can only influence our view of the stars, not the stars themselves, so too are ontic explanations uninfluenced by communicative requirements. There does seem to be a kind of priority to the ontological features of an acceptable explanation.
Craver is right to say that representations count as explanatory in virtue of their relationship to the world, to what he calls “certain kinds of ontic structures.” Scientific explanations must be connected in the proper way to features of the world; this is what allows them to convey information about that world, and information of the right kind to be explanatory. Put most broadly, an explanation must reflect what is responsible for the phenomenon to be explained. This means that explanations must depict dependence relations: what, out in the world, bears responsibility for the phenomenon’s occurrence. And Strevens is right that the relationship that scientific explanations must bear to the world has received the lion’s share of philosophical attention. The kind of responsibility that is explanatory is primarily what is at issue among different accounts of explanation. Thus the deductive-nomological approach posits nomic responsibility as explanatory; causal approaches posit causal responsibility; the mechanistic approach posits causal interactions among hierarchically organized entities; and so on.

However, bearing the proper relationship to the world is only one of the tasks at which explanations must succeed. They also must establish a connection of the proper sort to human cognizers, to those seeking an explanation. There is no explanation unless something is (at least potentially) explained, and the latter is subject not only to facts about the world, but also to facts about cognition. Shifting the focus to the relationship between an explanation and its audience foregrounds the sense of explanation as a communicative act. Facts out in the world do not in themselves bear the proper relationship to human cognizers necessary for explanation. Those facts must be represented and communicated—and in the right way—in order for that connection to be forged. These features of explanations are thus also important to explanatory success.

And yet, philosophical discussions of explanation have tended to downplay the significance of scientific explanations considered as communicative acts. A number of influences on the actual explanations formulated in science have traditionally been relegated to the category of the “pragmatics” of explanation. This terminology suggests a parallel with linguistics, where pragmatics is the study of particular speech acts and variation in meaning due to context. The influences traditionally included in the category of pragmatics of explanation are the particular features of an explainer and an explanation’s audience, such as their epistemic circumstances and interests, as well as features of scientists and humans in general, including our cognitive features, epistemic circumstances, and shared interests. In short, everything relevant to an explanation’s relationship to its audience is typically deemed merely pragmatics. It is sometimes explicitly acknowledged that which explanation is in fact generated depends in part on such pragmatic considerations, but philosophical accounts of explanation by and large follow the prioritization that Craver describes: first one must determine what out in the world counts as an explanation, then one might choose to consider the “downstream” questions regarding pragmatics and communication. In actuality, few philosophers find reason to turn to these questions once they are deemed secondary. Indeed, in conversation Craver has referred to these questions as belonging in “the dustbin of pragmatics.” On some views, the pragmatics of explanation is nothing special, that is, in no way distinct from the pragmatics of linguistic communication more generally (Lewis 1986).

There are exceptions to this approach of downplaying the relationship explanations must bear to their audience, if only a few. Bromberger (1966) suggested that explanations should be taken as answers to particular why-questions. Van Fraassen’s (1980) pragmatic account of explanation also emphasizes the primacy of the audience’s concerns in shaping explanations. Yet van Fraassen suggests that the audience’s influence consists in determining what type of responsibility relation out in the world is explanatory, so his account of explanation is still primarily framed as an account of ontological explanatory relevance. Achinstein’s (1983) approach to explanation begins with the act of providing an explanation. For Achinstein, whether something counts as a good explanation depends on both the explainer and the audience for the explanation.[1]

[1] Some recent treatments of understanding also address the cognitive requirements of explanation, though these treatments are not generally focused on providing an account of explanation. See, for instance, de Regt et al. (2009).

I agree with Achinstein on this point. What counts as a good explanation, even in an ontic sense, depends on the explainer and the audience. Accordingly, sidelining the communicative purposes to which explanations are put is a mistake. In the next section, I argue that an explanation’s audience crucially shapes what facts count as explanatory. Then, in the final section, I argue that this requires accounts of explanation to privilege the relationship an explanation must bear to its audience. This communicative approach to an account of explanation may be uncommon in philosophy, but it accords well with some popular ideas about scientific explanation, including its role in producing understanding and the value of idealizations in explanations.

2 Explanatory Facts and the Audience

Ontic approaches to explanation focus on what out in the world is responsible for the phenomenon to be explained, what we might generically call dependence relations. And yet potentially many dependence relations bear responsibility for any given event. This is especially obvious for causal accounts of explanation, since causal dependencies stretch indefinitely far through time, and at any given point in time there may be several causal dependencies at play. But I suspect it is also true for other approaches to explanation, including law- and pattern-based accounts, and certainly for accounts that recognize multiple types of explanatory dependence relations, e.g. both causal and mathematical. Accordingly, for any explanation formulated in science, one must decide which of potentially many dependence relations to represent. What dependence relation an explanation represents is determined by what states of affairs it represents, and how those states of affairs are represented. These, in turn, are shaped by the focus of the explanation’s audience.

Treated in a certain way, these ideas are well appreciated. Lewis (1986) acknowledges that what is represented in an explanation, and how, are both influenced by the audience. He points out the “multiplicity of causes and the complexity of causal histories” (215) and acknowledges how this leads to multiple explanations formulated for a single event. He also notes that “information about what the causal history includes may range from the very specific to the very abstract” (220); in other words, even the same causes can be represented in many different ways. But Lewis does not take these ideas to be central to an account of explanation. These limitations apply only to explanations actually formulated at a given point in time, not to the explanation. According to Lewis, “among the true propositions about the causal history of an event, one is maximal in strength. It is the whole truth on the subject—the biggest chunk of explanatory information that is free of error. We might call this the whole explanation of the explanandum event, or simply the explanation” (1986, 218–19, emphases in original).[2] For Lewis, a philosophical account of explanation concerns this, the explanation—what we might call the ontic explanation. He thus embraces an ontic approach, according to which the only distinctive question about explanation is the nature of the explanatory dependence relations (Levy, draft).

[2] This idea is closely related to Railton’s notion of an “ideal explanatory text” (1981). According to Lewis, the difference is that on Lewis’s view this is a “vast structure” of causally related events, whereas Railton’s ideal text consists of a long string of deductive-nomological arguments.

This way of accommodating the audience’s influence on scientific explanations is common in philosophy, but in my view it is mistaken. Decisions about what dependence relations to represent, and how to represent them, are ineliminable from the project of explaining. These decisions about representation determine the dependence relation featured in an explanation. For this reason, they help determine the nature of the explanatory facts, viz., ontic explanations. Consideration of an explanation’s audience, or those seeking the explanation, is essential to providing an account of explanation. Or so I will argue. Let us first consider ways in which the audience influences what an explanation should represent.

Philosophers have long recognized the significance of how a phenomenon is characterized for what explains it. Deciding on the precise explanandum, or how to characterize the event to be explained, is an essential first step to formulating an explanation. Consider the phenomenon of blood-sharing among vampire bats. When vampire bats reassemble after a night of hunting, bats who had successful hunts regurgitate some blood and share it with any unsuccessful hunters in their brood. This phenomenon can be characterized in different ways, giving rise to different explananda. One might ask why bats share food with others in their brood, or one might ask why bats regurgitate blood. Both describe the same phenomenon, but they have different explanations. An explanation of food-sharing will involve facts about evolved cooperative behavior, that is, how it is that the bats evolved to share their food in this selfless way (regardless of how the sharing is accomplished). In contrast, an explanation of blood-regurgitation will involve facts about bat anatomy and physiology, how it is that bats regurgitate some amount of their stomach contents (regardless of what the regurgitation is used to accomplish).
On some accounts, specifying the explanandum isn’t enough to determine the explanation; the contrast class—an intended contrast with some counterfactual state of affairs—also plays a role. We might ask why vampire bats share food selectively rather than not sharing at all, or why they share food selectively rather than indiscriminately. These questions regard the same explanandum—the food-sharing exhibited by vampire bats—but contrast that state of affairs with two different alternatives: not sharing at all versus sharing indiscriminately. Contrastive views of explanation hold that these result in different explanations of the same explanandum. Both explanations relate to facts about evolved cooperative behavior, but the first focuses on what gave rise to cooperation instead of competition, whereas the second regards specifics about the form of cooperation, e.g. how social grooming facilitates selective, not indiscriminate, food-sharing.

Explanation-seekers thus uncontroversially influence the nature of explanations by setting the explanandum and, perhaps more controversially, the contrast class. Advocates of an ontic approach can accommodate these influences simply by granting that the explanandum and contrast class must be settled before there’s an answer to which states of affairs explain. The question, though, is whether the explanandum and contrast class exhaust the audience’s influence. I believe there’s no reason to expect so. Different explananda and contrast classes arise because explainers have different interests. The explanandum reflects what features of the phenomenon to be explained the explanation-seekers take to be salient, and the contrast class reflects what counterfactual alternatives the explanation-seekers take to be salient. These help indicate what exactly those seeking an explanation want to understand, but they may not fully settle the matter. Explanation-seekers also diverge in which of the dependence relations responsible for the phenomenon they take to be salient. This variation need not result in different explananda or contrast classes.[3]

[3] On this I am in agreement with van Fraassen (1980), who emphasizes the role of contrast classes but claims they are only one way in which contextual factors influence explanations.

One way to demonstrate that the explanandum and contrast class aren’t the only conduits for explanation-seekers’ influence is with an example where the causal facts, the explanandum, and the contrast class are all held fixed, and yet the explanation seems to vary. Consider the explanandum of why vampire bats share food selectively rather than not sharing (the contrast class). One explanation is reciprocal altruism. According to Wilkinson (1984), selective food-sharing evolved in bats because unsuccessful hunters faced such a high risk of starvation that sharing bats had greater fitness than non-sharing bats, because others would in turn share with them. If true, this is a good explanation of food-sharing among vampire bats. But it is not the only explanation. Imagine some biologists wonder how the trait of selective food-sharing (versus not sharing) is propagated in the vampire bat population. Here I don’t know the explanation, but I’ll sketch two possibilities. Perhaps there is genetic variation between sharing bats and non-sharing bats; then the genes that lead to sharing predominate given their selective advantage. Or, perhaps this is a learned trait: bats who are raised by sharers themselves share their food. Since sharers are advantaged, they raise more offspring and now predominate. If one of these explanations is true, it also explains why bats share food selectively instead of not sharing.

The trait-propagation explanation and the reciprocal altruism explanation represent different dependence relations, each responsible for the same phenomenon. They target the same explanandum and contrast class: why vampire bats share food selectively (rather than not sharing). But explanation-seekers’ interests can vary in a way that makes one, but not the other, a successful explanation. The reciprocal altruism explanation succeeds when researchers wonder about the role of natural selection in bringing about the trait in question. The trait-propagation explanation succeeds when researchers wonder about the role of genetic and other forms of transmission in bringing about the trait in question. One might wonder whether these different research interests result in different explananda. They do not. In this example, the research interests specify different features not of the event to be explained, the cooperative trait, but of the factors upon which that event depends. For researchers with one of these questions, the explanation that answers the other question is a non-explanation. This exemplifies how researcher interests influence the content of an explanation in a way that goes beyond their influence on the explanandum and contrast class.

The audience not only influences what an explanation should represent, but also how it should be represented. The two explanations sketched in my vampire bat example represent some different facts. For example, the reciprocal altruism explanation represents the ecological sources of fitness while the trait propagation explanation does not. But these explanations also represent some of the same facts in different ways. This point can be made most easily by focusing on level of detail, though I believe the accuracy of representation regularly varies as well. The reciprocal altruism explanation represents the selection dynamics in detail, showing that there’s an immediate cost to sharing but a long-term benefit. This is usually accomplished with an evolutionary game theory model. In contrast, the trait propagation explanation represents the selection dynamics in much less detail, using a simple parameter called the selection coefficient to indicate that sharing is selectively advantaged (a schematic version of this contrast is sketched below). The reverse holds for trait propagation: the reciprocal altruism explanation simply represents the trait as heritable (somehow or other), while the trait propagation explanation represents the details of trait propagation—genetic, epigenetic, learning, or some combination thereof.

These two explanations showcase different dependence relations in virtue of what facts they represent, and how they represent them. The reciprocal altruism explanation shows how selective food-sharing depended on certain ecological influences that selectively advantaged the trait, whereas the trait propagation explanation, if developed, would show how this trait depended on certain genetic or behavioral influences that enabled it to spread through the population, given its selective advantage. This illustrates how ontic explanations—what facts explain—depend on the audience’s interests, just as they uncontroversially depend on the characterization of the explanandum.
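To make the contrast in level of detail concrete, here is a minimal schematic sketch. It is not drawn from Wilkinson (1984) or from any particular published model; the symbols b, c, w, s, and p are illustrative placeholders. A reciprocal altruism model represents the structure of the interaction itself: a sharing bat pays an immediate fitness cost c (the blood donated) and, with probability w of future interaction, later receives a benefit b (blood received after a failed hunt), so that, roughly, sharing can be favored when

\[ w\,b > c. \]

A trait-propagation model instead compresses the entire selective advantage into a single selection coefficient s and spells out how the trait spreads. For instance, in a simple one-locus haploid model with relative fitnesses 1 + s for sharers and 1 for non-sharers, the frequency p of the sharing trait changes across generations as

\[ p' = \frac{p\,(1+s)}{1 + p\,s}. \]

The first representation is detailed about why sharing is advantageous and silent about transmission; the second is detailed about transmission and represents the advantage only through s.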
What about Lewis’s claim that these representational decisions affect what explanations are actually developed, but not the ultimate (ontic) explanation? One might think that a complete explanation is simply all the explanatory dependence relations that govern a phenomenon, and this, it may seem, does not depend on any particular audience. Let’s start by asking, for the two explanations of food-sharing in vampire bats, whether an integrated explanation that combines them wouldn’t be better. I think, to the contrary, this would be a worse explanation for either audience. Further, it errs not simply by violating communicative conventions as Lewis expects (e.g. by giving too much information). Instead, an explanation that includes non-focal dependence relations violates explanatory norms as well. It identifies the wrong ontic explanation. A reciprocal altruism explanation that also included trait propagation details, if formulated for an audience interested in ecological sources of fitness, would get the explanatory dependence relation wrong.

A full defense of this idea will have to wait, as it depends on details about explanation on which I remain neutral in this paper. The basic idea can be motivated as an extension of the idea of difference-making already familiar from discussions of causal explanation. Strevens (2008) argues that an explanation should only cite details that make a difference to the explanandum (as characterized), neglecting any other influences on the event itself. But I have argued that the audience influences which dependence relation is focal in the same way as it influences the characterization of the explanandum. If so, then an explanation should only cite details that make a difference to the focal dependence relation (for some explanandum). A reciprocal altruism explanation including detailed information about trait propagation incorrectly indicates that the selection effect of the environment, the focal dependence relation, itself depends on the details of trait propagation. In brief, by including details extraneous to the audience’s interests, the explanation misleadingly suggests a form of dependence that does not exist. It gets the ontic explanation, the nature of the explanatory dependence, wrong in virtue of violating communicative norms. So, in my view, scientists would get the ontic explanation wrong by including non-focal dependence relations.

But perhaps I’ve misinterpreted Lewis. His idea instead may be that the ultimate (ontic) explanation is a grouping of all explanatory dependence relations, a wellspring for any explanations actually formulated in science, but not a guide to explanations in the representational sense. The problem with this idea is that such an ontic explanation would have very little to do with actual explanatory practice. Presumably most philosophers want to maintain some relationship between explanations in an ontic sense and in a representational or communicative sense, but this version of an ontic explanation is no guide to explanations that should actually be formulated. In contrast, my alternative candidate for ontic explanations preserves that relationship. An ontic explanation should be taken to be whatever dependence relation (relevant to explanation-seekers’ interests) is responsible for a phenomenon (characterized in some way, contrasted with some alternative).

I have argued that what—out in the world—explains a phenomenon depends in subtle ways on what the explanation-seekers wish to understand. It’s not that the facts change based on our interests. But what citing a fact in an explanation signals about dependence relations does, I think, change based on explanation-seekers’ interests. Some dependence relations obtain, and explain, when closely related ones do not. Food-sharing in vampire bats depends on the details of trait propagation, even though the ecological sources of fitness do not depend on those details. Information about trait propagation can help explain selective food-sharing, but not when the audience’s interests make it so that this information signals a dependence that does not obtain. This information thus belongs in some explanations of the given explanandum, but not others. Are those facts part of the ontic explanation for selective food-sharing in vampire bats? It depends. If this is right, then we must first discern an audience’s interests before we can say what facts an explanation should feature, that is, what ontic explanation is called for.

For scientific explanations, the role of the audience in shaping what is explanatory is largely played by what we might call the research program. A scientific research program usually involves a choice of focal phenomena; hypotheses about the phenomena; and a methodology—a type of model, manner of investigation, etc. Research programs can also be influenced by what equipment is available, techniques researchers happen to be familiar with, and subtle features of the researchers themselves: their politics, their aesthetic preferences, their blindspots. An explanation always occurs in the context of some research program. This narrows the scope of investigation to certain types of dependence relations, thereby influencing the ontic explanation in the way I sketched above.

A consequence of the audience’s influence on what is explanatory is the maintenance of distinct explanations in science. This gives rise to an empirical prediction about explanatory practice, namely, the continuance and even proliferation of different scientific explanations for any phenomenon investigated by scientists with varied research interests. I expect integrated explanations to be generated only when some researchers are interested specifically in the interplay of multiple dependences. Even then, the resulting integrated explanations merely add to the variety of scientific explanations. I think these predictions are borne out by science (see, e.g., Potochnik 2013). In contrast, a traditional ontic approach may not entail the unification of scientific explanations for a given explanandum, but neither does it give reason to expect the proliferation of different explanations.

3 A Communicative Approach to Explanation

I have argued that explanation-seekers’ interests shape what features of the world are explanatory, and represented in what way. To be clear, this does not entail that the type of explanatory dependence relation is determined by scientists’ interests. Rather, the point is that whatever type(s) those are—causal, nomic, mathematical, etc.—which dependence relation of that type explains some phenomenon depends on the specific interests behind the request for explanation.

This motivates a communicative approach to explanation. By this I mean that the relationship between an explanation and its audience is absolutely central to the nature of scientific explanations (in any sense). To determine what explains some phenomenon, one must first ascertain the research focus that occasions the explanation. Only then can one pose the question of what dependence relation accounts for an explanandum. Considerations of an explanation’s communicative purposes are accordingly not downstream but upstream from considerations of what specific “ontic structures” our explanations should represent. By this I mean that the research program in which an explanation is formulated, the explanation’s communicative context, influences both representational and ontological features of explanations. The research program influences what an explanation should represent, and how. This in turn results in the explanation featuring different dependence relations. It’s possible that traditional philosophical accounts of explanation assume this matter of the research agenda has been resolved in any given instance of explaining, before an account of explanation focused on the question of ontological dependence gets going. But this does not render communicative context unimportant; it simply makes it invisible when it is actually primary.

Scientific explanations have classically been taken to be the means for generating understanding. Some philosophers have also explicitly defended a strong connection between explanation and understanding (e.g. Grimm 2010; Strevens 2013). But this is at odds with the nearly exclusive philosophical focus on the relationship explanations should bear to the world, and the resulting neglect of the relationship between explanations and explainers. Consider that Hempel (1965) holds both that explanations show us that a phenomenon was to be expected, and thereby enable us to understand the phenomenon, and also that explanations require demonstrating via logical deduction how a phenomenon depends on a law of nature. Yet nothing guarantees that the former is accomplished, and uniquely accomplished, by such derivations. Strevens (2008) says that “[he takes] scientific understanding to be that state produced, and only produced, by grasping a true explanation” (3). He also specifies that his account regards only the ontological sense of explanation, and that account takes the only full-fledged explanations to be descriptions of “the relevant causal mechanism in fundamental physical terms” (130–31). This seems as distantly related to human understanding as Hempel’s logical deductions.

Tension between an official account of explanation and the connection between explanation and understanding can be avoided by embracing a communicative approach to explanation. Explanations are the means to generating understanding, so it is important to see how explanations are shaped by the cognitive needs of explainers. Here, then, is an independent reason to think that the relationship between an explanation and its audience critically shapes the nature of scientific explanations, including even their ontological features. Explanations must be comprehensible to humans; they must generate human understanding. And yet, I have suggested there are many, possibly countlessly many, factors upon which any given phenomenon depends. Determining which factors to cite to generate human understanding requires consideration of what exactly explanation-seekers want to understand.

A communicative approach also accounts for what is distinctive about explanation as a scientific aim. Several philosophers have justified the value of explanation in particular with the idea that explanatory information is what would be missing for Laplace’s demon. This is a creature possessing all information about the current state of the universe and the (presumed deterministic) laws of nature and, on that basis, capable of predicting all future states and retrodicting all past states. From Douglas (2009): “The value of explanations can be rescued... when we recall that we are not Laplacian demons... We are finite beings, with finite mental capacities... Explanations help us to organize the complex world we encounter, making it cognitively manageable” (454). Citing explanation’s usefulness for limited human cognizers implicitly directs our attention to its communicative purposes. It is explanation in a communicative sense, explanations actually formulated for specific human audiences, that is relevant here.

Finally, a communicative approach to explanation accommodates the connections that have been posited between explanation and idealization. Those who defend the scientific value of idealizations largely base that defense on arguments for how idealizations can contribute to explanation. Focusing on explanation warrants an emphasis on how idealizations are cognitively useful to us, how they facilitate our understanding. But for this, explanations must be shaped not only by the relationship they must have to the world, their ontic features, but also by the relationship they must have to their audience. Positing an explanatory role for idealizations also requires that the communicative purposes of explanation be considered.

Explanations certainly face ontological requirements. Any successful explanation must cite the right kind of dependence relation (perhaps, e.g., a cause), properly related to the phenomenon to be explained (perhaps, e.g., a difference-maker for the explanandum). Nonetheless, given the centrality of the communicative requirements for explanation, there is only limited sense to be made of ontic explanations existing “out there” in the world. I have argued that there are many dependence relations related to a target phenomenon, only some of which belong in a given explanation. To figure out which explain, one must consider the communicative context—the research interests that occasion the explanation. These interests determine the precise explanandum, contrast class, and focal dependence relations. Any account of explanation must include consideration of the specific communicative needs an explanation is designed to meet.

References

Achinstein, Peter (1983), The Nature of Explanation, Oxford: Oxford University Press.
Bromberger, Sylvain (1966), “Why-Questions”, in R. Colodny, ed., Mind and Cosmos, Pittsburgh: University of Pittsburgh Press, 86–111.
Craver, Carl F. (2014), “The Ontic Conception of Scientific Explanation”, in Andreas Hüttemann and Marie Kaiser, eds., Explanation in the Biological and Historical Sciences, Springer.
de Regt, Henk W., Sabina Leonelli, and Kai Eigner, eds. (2009), Scientific Understanding: Philosophical Perspectives, Pittsburgh: University of Pittsburgh Press.
Douglas, Heather (2009), Science, Policy, and the Value-Free Ideal, Pittsburgh: University of Pittsburgh Press.
Grimm, Stephen R. (2010), “The Goal of Explanation”, Studies in History and Philosophy of Science 41: 337–344.
Hempel, Carl (1965), Aspects of Scientific Explanation and Other Essays in the Philosophy of Science, New York: Free Press.
Levy, Arnon (draft), “Against the Ontic Conception of Explanation”.
Lewis, David (1986), “Causal Explanation”, in Philosophical Papers, Vol. II, Oxford: Oxford University Press.
Potochnik, Angela (2013), “Defusing Ideological Defenses in Biology”, BioScience 63: 118–123.
Railton, Peter (1981), “Probability, Explanation, and Information”, Synthese 48: 233–256.
Strevens, Michael (2008), Depth: An Account of Scientific Explanation, Cambridge: Harvard University Press.
——— (2013), “No Understanding Without Explanation”, Studies in History and Philosophy of Science 44: 510–515.
van Fraassen, Bas C. (1980), The Scientific Image, Oxford: Clarendon Press.
Wilkinson, Gerald S. (1984), “Reciprocal Food Sharing in the Vampire Bat”, Nature 308: 181–184.