Philosophy of Science, 77 (July 2010). Copyright 2010 by the Philosophy of Science Association. All rights reserved.

Can Mechanisms Really Replace Laws of Nature?*

Bert Leuridan†‡

Today, mechanisms and mechanistic explanation are very popular in philosophy of science and are deemed a welcome alternative to laws of nature and deductive-nomological explanation. Starting from Mitchell's pragmatic notion of laws, I cast doubt on their status as a genuine alternative. I argue that (1) all complex-systems mechanisms must ontologically rely on stable regularities, while (2) the reverse need not hold. Analogously, (3) models of mechanisms must incorporate pragmatic laws, while (4) such laws themselves need not always refer to underlying mechanisms. Finally, I show that Mitchell's account is more encompassing than the mechanistic account.

*Received August 2008; revised January 2010.

†To contact the author, please write to: Centre for Logic and Philosophy of Science, Ghent University, Blandijnberg 2, B-9000 Belgium; e-mail: Bert.Leuridan@Ugent.be.

‡I would like to thank the following people (in alphabetical order) for their helpful comments and criticisms: Leen De Vreese, Isabelle Drouet, Phyllis McKay Illari, Joke Meheus, Sandy Mitchell, Federica Russo, Maarten Van Dyck, Erik Weber, Marcel Weber, and Jon Williamson, as well as three anonymous referees. The author is Postdoctoral Fellow of the Research Foundation—Flanders (FWO).

1. Introduction. Today, mechanisms and mechanistic models are very popular in philosophy of science, in particular in philosophy of the life sciences. Mechanicist philosophers like Machamer, Darden, and Craver (2000) and Bechtel and Abrahamsen (2005) set their face against the dominant position that strict laws of nature and deductive-nomological (D-N) explanation have occupied for years on end.1 Their opposition is not groundless. The criteria for lawfulness that have been advanced by, for example, Nagel (1961), Hempel (1965), and Goodman (1973) and that are considered the received view are highly problematic. Strict laws describe strict regularities. They are universal and have no genuine exceptions. They are nonvacuously true or at least very well supported by empirical data. They are general or nonlocal and contain only purely qualitative predicates. They also are projectable and have unlimited scope. Finally, they are somehow necessary (or noncontingent). If laws of nature are interpreted in this strict sense, we should classify almost all scientific generalizations as accidental. This holds for physics, as well as for chemistry, biology, and the social sciences.2

1. By 'law of nature' or 'natural law', I mean a generalization describing a regularity, not some metaphysical entity that produces or is responsible for that regularity. I also explicitly distinguish between traditional or strict laws and regularities (which are rightly criticized by the mechanicists) on the one hand and pragmatic laws and regularities on the other hand.

2. Cartwright (1983) has made the case regarding physics, and Christie (1994), regarding chemistry. Beatty (1995, 1997), Brandon (1997), and Sober (1997) have argued against strict laws in biology. Beed and Beed (2000) and Roberts (2004) discuss strict laws in the social sciences.

If there are no strict laws, there are no D-N explanations. Hence, the mechanicist alternative, which states that explanation involves mechanistic models (i.e., descriptions of mechanisms) instead of strict laws, might be very welcome. The received view has been attacked from other sides as well. Instead of abandoning the concept of natural law, Mitchell (1997, 2000) proposes to revise it. In her view, laws of nature should be interpreted pragmatically.
A generalization is a pragmatic law if it allows for prediction, explanation, and manipulation, even if it fails to satisfy the traditional criteria. To this end, it should describe a stable regularity, but not necessarily a strict one. What the precise relation is between mechanisms and stable regularities, or between mechanistic models and pragmatic laws, is still an open question; I address it in this article. Does the mechanistic account render Mitchell's pragmatic solution superfluous? No. I show that the mechanistic literature cannot replace (but rather depends on) talk in terms of laws of nature, provided the latter are conceived of pragmatically. What is more, Mitchell's account is more encompassing than the mechanicists' and thus deserves our attention even in view of the patent mechanistic successes.

In sections 2 and 3, I present mechanisms and mechanistic models and raise the question whether mechanisms really are an alternative to regularities. This question is answered in the rest of this article. In section 4, I discuss pragmatic laws and their corresponding regularities. Together, sections 2–4 set the stage for the arguments presented in the rest of this article, where I make four related claims. In sections 5 and 6, I substantiate two ontological claims and two epistemological claims, respectively. First, mechanisms are ontologically dependent on stable regularities. There are no mechanisms without both macrolevel and microlevel stable regularities. Second, there may be stable regularities without any underlying mechanism. Third, models of mechanisms are epistemologically dependent on pragmatic laws. To adequately model a mechanism, one has to incorporate pragmatic laws. Finally, pragmatic laws are themselves not epistemologically dependent on mechanistic models. They need not always refer to a mechanism underlying the regularity at hand. In section 7, I conclude by showing that Mitchell's account is more encompassing than the mechanicist account. Thus, that account cannot replace talk in terms of laws, provided the latter are conceived of pragmatically (which, I think, they should be).

2. Mechanisms. From the end of the 1970s onward, the concept of 'mechanism' has regained popularity in philosophy of science. Different families of concepts can be distinguished. In the Salmon/Dowe account (Salmon 1984; Dowe 2000), mechanisms are characterized in terms of causal processes and causal interactions. In this article, I do not consider this account. Rather, I focus on the complex-systems approach defended by, for example, Glennan, Woodward, Machamer and colleagues, and Bechtel and Abrahamsen. In this approach, mechanisms are treated as complex systems of interacting parts. Contrary to Salmon/Dowe mechanisms, complex-systems mechanisms (cs-mechanisms) are robust or stable. They form stable configurations of robust objects, and as a whole they have stable dispositions: the overall behaviors of these mechanisms (see Glennan 2002, S344–S346). This difference will prove very relevant in the following sections. Moreover, I focus on the theories of Machamer et al. (2000) and of Bechtel and Abrahamsen (2005).
(My findings can be extended to, e.g., Glennan [2002], Woodward [2002], and Craver [2007], all of whom explicitly endorse the role of what I call 'causal P-laws' in the mechanistic approach.) Both theories are mainly concerned with the life sciences, and they both present mechanisms and mechanistic explanation as an alternative to strict laws of nature and D-N explanation.

In their famous article, "Thinking about Mechanisms," Machamer et al. (2000) define mechanisms as complex systems:

(M*) [cs-]Mechanisms are entities and activities organized such that they are productive of regular changes from start or set-up to finish or termination conditions. (3)

Entities are the things that engage in activities. Activities are the producers of change. The authors defend a dualistic metaphysics that combines substantivalist notions with concepts from process philosophy. Entities and activities, they claim, are complementary, interdependent concepts (4–8). If entities and activities are adequately organized, they behave regularly.

Bechtel and Abrahamsen's (2005) definition of mechanisms is somewhat different, but it portrays mechanisms as organized, complex systems, too:

(M†) A [cs-]mechanism is a structure performing a function in virtue of its component parts, component operations, and their organization. The orchestrated functioning of the mechanism is responsible for one or more phenomena. (423)

Definition M† strongly resembles M*. The component parts are clearly entities. Operations resemble activities. (Bechtel and Abrahamsen [2005, 423] use the label 'operation' instead of 'activity' because they wish to draw attention to the involvement of parts.) And the mechanism's parts and activities must be organized or orchestrated. Yet Bechtel and Abrahamsen add to M* the notion of function (423–24).

Machamer et al. (2000) and Bechtel and Abrahamsen (2005) stress the role of mechanistic models in explanation. What is important is that they do so at the expense of strict laws. They signal several distinct but related problems regarding strict laws and D-N explanation. First, strict regularities are rarely if ever discovered in the life sciences. But if strict biological laws are rare or nonexistent, D-N explanation is not practicable in the life sciences. Second, Bechtel and Abrahamsen append to this that their account avoids some hard ontological problems. Staking on mechanisms as real systems in nature, they write, has the advantage that "one does not have to face questions comparable to those faced by nomological accounts of explanation about the ontological status of laws" (425). Third, even if there were strict biological laws, there is no denying that explanation in the life sciences usually takes the form of mechanistic explanation. Bechtel and Abrahamsen write: "Explanations in the life sciences frequently involve presenting a model of the mechanism taken to be responsible for a given phenomenon. Such explanations depart in numerous ways from nomological explanations commonly presented in philosophy of science" (421; my emphasis; see also Bechtel and Richardson 1993, 231). Machamer et al. (2000) make a stronger claim: "In many fields of science what is taken to be a satisfactory explanation requires providing a description of a mechanism" (1; my emphasis). Finally, both groups of authors argue that even if there were strict biological laws, D-N explanations would not be sufficiently explanatory.
Explanation, they say, involves more than subsumption under a law or regularity. Laws or regularities do not explain why some phenomenon occurs. According to Machamer et al. (2000), activities are essential for rendering phenomena intelligible. A mechanistic explanation makes a phenomenon intelligible by providing an elucidative relation between the explanans and the explanandum, that is, by revealing the productive relation between the mechanism's setup conditions, intermediate stages, and termination conditions. This productive relation is completely accounted for by the mechanism's activities: "It is not the regularities that explain but the activities that sustain the regularities" (21–22). They append to this that "regularities are non-accidental and support counterfactuals to the extent that they describe activities. . . . No philosophical work is done by positing some further thing, a law, that underwrites the productivity of activities" (7–8; terminological prudence is in order here. In my terms, regularities are ontological and cannot describe activities. And I do not adhere to laws as metaphysical entities that underwrite the productivity of activities). According to Bechtel and Abrahamsen (2005), subsumption under a law does not show why the explanandum phenomenon occurred: "Even if accorded the status of a law, this statement [concerning the ratio of oxygen molecules consumed to adenosine triphosphate in metabolism] merely brings together a number of actual and potential cases as exemplars of the same phenomenon and provides a characterization of that phenomenon. However, it would not explain why the phenomenon occurred—either in general or in any specific case" (422). To explain why, scientists (biologists) explain how. They provide a model of the mechanism underlying the phenomenon in question.

In short, M* and M† are motivated by the apparent shortcomings of the concepts of strict law/regularity and D-N explanation (in the context of the life sciences). Mechanisms and mechanistic explanation are put forward as an alternative. I side with the mechanicists in their critical assessment of both strict laws/regularities and D-N explanation. I also endorse the view that 'mechanism' and 'mechanistic explanation' are very fruitful concepts. Yet I doubt whether the mechanistic account provides an alternative to talk in terms of laws of nature.

3. Are Mechanisms an Alternative to Regularities? In this section, I show that both M* and M† depend on the concept of 'regularity'—at least prima facie: they mention regularities either explicitly or implicitly. In section 5, I argue that this is no coincidence: cs-mechanisms are ontologically dependent on the existence of regularities. Definition M* mentions regularities explicitly: mechanisms are productive of regular changes. Definition M† does not. However, it states that mechanisms perform a function. What is the relation between 'function' and 'regularity'? In this section, I argue that functions are best conceived of as dispositions, that dispositions always involve regularities, and hence that M† implicitly refers to regularities.

That functions are dispositional is evident from different theories of function. Consider first the dispositional theory of functions (e.g., Bigelow and Pargetter 1987).

(DTF) An effect e of a character c is a function of that character if it confers a survival-enhancing propensity on the creature having c.
Bigelow and Pargetter interpret propensities dispositionally. It is not required that e enhances survival (or reproduction) in all individuals all of the time. The dispositional theory of functions is itself not unquestioned. An alternative to DTF is the etiological theory of functions (e.g., Mitchell 2003, 92).

(ETF) An effect e of a character or component c is a function of that character or component if it has played an essential role in the causal history issuing in the presence of that very component.

During this causal history, c must have been selected over alternatives on the basis of its doing e, and it must have been produced or reproduced as a direct result of that selection process (96). By its reference to natural selection, the etiological theory links functions to fitness, which is a dispositional characteristic. So even if functions are interpreted etiologically, they should be regarded as dispositional.

Both DTF and ETF refer to evolution and selection, but perhaps selective function is not what Bechtel and Abrahamsen have in mind. (Bechtel and Abrahamsen [2005] are not clear about what they mean by 'function'.) A different and less restrictive notion of 'function' derives from the works of Cummins (see Craver [2007] for a detailed discussion of Cummins's work). For Cummins (1975, 756), function should not be linked with evolutionary considerations. Yet he explicitly links functions with dispositions: "Thus, function-ascribing statements imply disposition statements; to attribute a function to something is, in part, to attribute a disposition to it" (758). Even if the function of x is to do f, it is not required that x does f all the time.

Since functions are dispositions, they presuppose the existence of regularities, as I now show. Even if there is no consensus about the correct analysis of dispositions, all attempts seem to have in common that dispositions involve regularities. (For an overview of the most prevalent definitions of 'disposition', see Fara [2006].) Roughly, a disposition can be characterized as follows:

(DISP) An object is disposed to M when C if and only if, if it were the case that C and W, then it would φ(M).

Variable M refers to a manifestation, and C refers to the conditions of manifestation. In the case of the fragile glass, M could be 'breaking', and C could be 'being struck'. Variable W stands for the extra conditions that should be included in the definition or analysis of dispositions. The simple conditional analysis, which leaves W empty, falls victim to several counterexamples, and a large part of the literature about dispositions concerns the question of what other conditions should be included in W (e.g., David Lewis has suggested that an object is disposed to M when C if and only if it has an intrinsic property B such that, if it were the case that C and if the object were to retain B, then the object would M because C and because it has B; see Fara 2006, sec. 2.3). The φ operator stands for the modal or probabilistic strength that should be included in the definition of dispositions. According to the simple conditional analysis, the object should always M if it were the case that C. Again, this makes the simple conditional analysis vulnerable to several counterexamples. Therefore, it has been proposed to interpret φ less strictly, namely, habitually (Fara 2006, sec. 2.4) or probabilistically (Prior, Pargetter, and Jackson 1982).
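The division of labor between the two placeholders can be made explicit. Schematically (my notation; none of the authors just cited uses exactly this formalism), the analyses surveyed above all instantiate

\[
x \text{ is disposed to } M \text{ when } C \;\Longleftrightarrow\; \big[ (C \wedge W) \;\Box\!\!\rightarrow\; \phi(M) \big],
\]

where \(\Box\!\!\rightarrow\) is the counterfactual conditional. The simple conditional analysis leaves W empty and reads φ(M) as 'x always manifests M'; Lewis's proposal packs the retention of the intrinsic causal basis B into W; and the habitual and probabilistic readings weaken φ(M) to 'x typically manifests M' or 'the probability of M exceeds some threshold', respectively.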
What is relevant for the present discussion is as follows: even if we allow for dispositions that are seldom manifested when their manifestation conditions C obtain, φ cannot be replaced by 'never', since this would result in a contradictio in terminis.3 If the conditions in W are satisfied, P(M|C) > 0 (see n. 4).

3. The claim that dispositions involve regularities is explicitly found in Cummins (1975): "To attribute a disposition d to an object a is to assert that the behavior of a is subject to (exhibits or would exhibit) a certain lawlike regularity" (758). He calls these lawlike regularities 'dispositional regularities'. Contrary to me, Cummins seems to conceive of these as strict regularities.

4. If there are dispositions that are seldom manifested when their manifestation conditions C hold, what follows may serve as an example. A lottery is being held with 1,000,050 tickets. The tickets range from 1 to 1 million, but there are 51 tickets with the number 666. In this case, one might say, the lottery is disposed to select 666 as the winning number when a draw is made, even if the chance of selecting 666 is very low (51/1,000,050; I would like to thank Erik Weber for suggesting this example). I do not claim that such dispositions exist, and hence I do not claim that this lottery should be ascribed the disposition to have 666 as its outcome. What I want to claim is as follows: even if our definition of dispositions is so liberal that φ might refer to very low probabilities, dispositions still depend on the existence of regularities. Only, in this case the regularities involved would have very limited strength, to be compared with, e.g., the regularity relating syphilis to paresis. (For the concept of strength of regularities, see sec. 4.)

So far we can safely conclude that both Machamer et al. (2000) and Bechtel and Abrahamsen (2005) define mechanisms in terms of regularities (either explicitly or implicitly). This raises a first question, namely, whether this use of 'regularity' is necessary or unavoidable. This question is answered in section 5. It should also be noticed that neither gives a detailed characterization of these regularities. They only describe them negatively: they are not strict. This raises a second question that I answer in section 4, namely, how these regularities should be conceived of. Regularities are a blind spot in the mechanistic literature. This blind spot can be removed by means of a more adequate theory of regularities and lawfulness.

4. Pragmatic Laws and Regularities. Instead of rejecting the concept of natural law, Sandra Mitchell (1997, 2000) sets out to refine it. She, too, starts from the observation that the existing criteria for strict lawfulness are too restrictive, at least with respect to biology and perhaps also with respect to other sciences. Therefore, she proposes a pragmatic approach to the question of whether there are laws in biology. "The pragmatic approach focuses on the role of laws in science, and queries biological generalizations to see whether and to what degree they function in that role" (1997, S469). The roles of laws that Mitchell focuses on are prediction, explanation, and manipulation. If a generalization G is used for one or several of these tasks, it qualifies as a pragmatic law. Mitchell contrasts the pragmatic approach for evaluating the lawfulness of biology both with the normative and with the paradigmatic approach.
In the normative approach, one begins with a norm or definition of lawfulness, more specifically the traditional criteria for strict lawfulness (cf. above), and reviews the candidate biological generalization to see whether it meets the specified conditions. The paradigmatic approach begins with a set of exemplars of laws (characteristically in physics) and compares these to the generalizations in biology. If a match is found, the biological generalization is considered a law (Mitchell 1997, S469; 2000, 244–50). It should be noted, however, that paradigmatic and pragmatic considerations also played an important role in the works of Nagel (1961), Hempel (1965), and Goodman (1973). Criteria for lawfulness were assumed to rank Newton's paradigmatic laws of motion as natural laws and statements about the screws in Smith's car as accidental generalizations. Also, the criteria had to be such that laws are the vehicles for prediction (Goodman) and explanation (Hempel) par excellence. So Mitchell's approach does not differ radically in spirit from the traditional one.

The main difference, and also the most interesting one, concerns the new gradual criteria she proposes for ranking lawful generalizations (Mitchell 1997, S475–S478; 2000, 259–63). Generalizations are laws if and to the extent that they can be used for prediction, explanation, or manipulation. Therefore, they must be projectable: "The function of scientific generalizations is to provide reliable expectations of the occurrence of events and patterns of properties. The tools we use and design for this are true generalizations that describe the actual structures that persist in the natural world" (1997, S477). Given that these generalizations will seldom be universal, we need to know when (in what contexts) they hold and when they do not. The interesting problem is not that biological generalizations are contingent but how and to what extent. Therefore, if we want to use a generalization, we need to assess the stability and strength of the relation it describes. Stability and strength are two ontological parameters for the evaluation of a generalization's usefulness. (Mitchell also distinguishes several gradual representational criteria, such as degree of accuracy, level of ontology, simplicity, and cognitive manageability [1997, S477–S478; 2000, 259–63]. I do not discuss these criteria here.)

Stability. What are the conditions on which the regularity under study is contingent? How spatiotemporally stable are these conditions? And what is the relation between the regularity and its conditions (is it deterministic, probabilistic, etc.)?

Stability is a gradual criterion. All regularities are contingent in that they rest on certain conditions. These conditions are historically shaped and are to a certain extent spatiotemporally stable. Stability does not bear solely on the laws of physics. Only if contingency is interpreted gradually, Mitchell claims, will our conceptual framework be rich enough to account for the diversity of types of regularities and generalizations and for the complexity found in the sciences (1997, S469–S477; 2000, 250–59).

Strength, too, is a gradual criterion. (In my opinion, the gradual character of 'strength' is best expressed by framing this criterion as some kind of covariance or correlation—deterministic regularities being a limit case.)

Strength. How strong is the regularity itself? Does it involve low or high probabilities? Or is it deterministic? Does it result in one unique outcome? Or are there multiple outcomes?
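To see how gradual this criterion is, it may help to give strength a simple probabilistic gloss (my reconstruction; Mitchell does not commit herself to one particular measure): for a regularity linking a condition C to an outcome E,

\[
\text{strength} \;\approx\; P(E \mid C),
\]

with \(P(E \mid C) = 1\) as the deterministic limit and values barely above the base rate \(P(E)\) marking very weak regularities, such as the relation between syphilis and paresis mentioned in note 4. For quantitative variables, a covariance or correlation measure would play the same role.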
Mitchell's pragmatic approach gives rise to two questions that I should deal with. First, one may urge that it is too liberal in that it qualifies too many generalizations (especially very weak or unstable ones) as lawful. Second, one may question whether it sufficiently distinguishes between causal laws and noncausal laws.

Is Mitchell's approach too liberal? Obviously it is from the traditional point of view. Few pragmatic laws satisfy the criteria for strict lawfulness. But does this provide us with sufficient reason to deprive them of their honorific label 'law'? And more specifically, should the fact that Mitchell's approach allows for very weak or unstable pragmatic laws count as a shortcoming? I am of course willing to give up the word 'law', but I doubt this would be of any help. Moreover, there are two good reasons to stick to Mitchell's approach.

A first reason is that many scientific generalizations (in many different scientific disciplines) are called laws while failing to satisfy the criteria for strict lawfulness. By contrast, their status as laws and their usefulness in practice can be easily acknowledged within Mitchell's framework. Theories of lawfulness that apply more stringent criteria run the risk of selling these generalizations short. The history of classical genetics provides nice examples of such nonstrict scientific laws. William Bateson (1900) was deeply convinced that it would be both useful and possible to discover the laws of heredity. This conviction was mainly inspired by the works of Francis Galton (1889, 1897), who formulated the law of regression and what would later be known as the law of ancestral inheritance. (For a discussion of Galton's theory of heredity, see Leuridan [2007].) But at that time Bateson also got acquainted, via Hugo de Vries, with the works of Gregor Mendel (Mendel 1865/1933; de Vries 1900). What is particularly interesting is the way Bateson conceived of the laws of heredity. He acknowledged that both Galton's laws and Mendel's law (at that time, Bateson did not distinguish between the law of segregation and the law of independent assortment) are subject to exceptions and have a limited scope of application. However, this did not dissuade him from holding to the label 'law'. Nor did he later change his mind, when ever more exceptions to Mendel's laws were adduced by the biometricians who rejected Mendel's theory in favor of Galton's (Bateson 1902). In the works of Thomas Hunt Morgan and his coworkers (Morgan et al. 1915; Morgan 1919, 1926/1928), Mendel's findings of segregation and independent assortment were called laws, even if they were complemented with systematic explanations of their failures (coupling and crossing over, sex-linked inheritance, failure of dominance, and so on). Even today, textbooks in modern genetics start with an overview of Mendel's work (Klug and Cummings 1997, chap. 3). Mendel's findings were certainly not strict laws, but their usefulness can be acknowledged within the pragmatic approach, as can their status as 'laws'. Much research in classical genetics aimed at uncovering the conditions for the different regularities, assessing their stability, specifying their strength, and so on. Nothing is gained by merely claiming that these regularities are not lawful.

A second reason for sticking to Mitchell's approach is that it also nicely fits actual scientific practice.
Scientists invest plenty of time and money to discover (statistical) regularities that can be used for prediction, explanation, or intervention. Granted, few of the resulting descriptions are called laws. But what is more interesting is the fact that the criteria used nicely fit Mitchell's liberality. Austin Bradford Hill (1965, 295) famously addressed the problem of causal inference. His article is still very influential today (at least it is cited frequently). Hill envisaged situations in occupational medicine in which our observations reveal a statistically significant association between two variables (a disease or injury A and conditions of work B) but where our background knowledge (the general body of medical knowledge) does not suffice to determine whether the relation is causal. His article was unquestionably motivated pragmatically: "In occupational medicine our object is usually to take action. If this be operative cause and that be deleterious effect, then we shall wish to intervene to abolish or reduce death or disease" (300).

To be useful in reducing death or disease, an association need not be strong: "We may recall John Snow's classic analysis of the opening weeks of the cholera epidemic of 1854. . . . The death rate that he recorded in the customers supplied with the grossly polluted water of the Southwark and Vauxhall Company was in truth quite low—71 deaths in each 10,000 houses. What stands out vividly is the fact that the small rate is 14 times the figure of 5 deaths per 10,000 houses supplied with the sewage-free water of the rival Lambeth Company" (296). The weakness of the relation between sewage and cholera (P(A|B) is very low) does not make it unusable for occupational (preventive) medicine. It underlay interventions to improve public health. To be useful in reducing death or disease, an association need not be stable either: "Arsenic can undoubtedly cause cancer of the skin in man but it has never been possible to demonstrate such an effect on any other animal" (298). Whether arsenic causes cancer in animals is of little interest if the intended domain of application consists of humans. Evidence from humans should suffice. To conclude, the case of Hill shows that Mitchell's approach nicely fits the pragmatic slant of occupational medicine (which, after all, is part of the life sciences). And it shows that Mitchell's liberality regarding very weak or unstable pragmatic laws is an advantage rather than a disadvantage.

The case of Hill brings my argument to the second question that is raised by Mitchell's approach. Hill explicitly intended to distinguish causal regularities from mere associations, but Mitchell's framework provides no means for making such a distinction. A regularity can be very stable or very strong, even if it is spurious. The distinction is desirable for two reasons. First, causalists about explanation allege that all explanantia should cite (at least some of) the explanandum's causes. Second, there is widespread agreement among philosophers that manipulation requires causal relations. I do not take up a position regarding the indispensability of causal relations in either explanation or manipulation here.5 But in order not to lose the causalists about explanation or manipulation, I distinguish between causal regularities and noncausal ones.

This distinction can be drawn with the help of Woodward's theory. In Woodward's view, a generalization is explanatory if and only if it is invariant.
And it is invariant to the extent that it "remains stable or unchanged as various other changes occur" (2003b, 239). Different senses of invariance can be distinguished, but the most important sense is invariance under interventions, which is a gradual concept.6 Some generalizations are more invariant than others, depending on the range and importance of the interventions under which they are invariant. Invariance also involves a threshold. If a generalization is not stable under any interventions, it is noninvariant and hence neither causal nor explanatory (248–49).

5. Contrary to what is commonly assumed, policy may be based on noncausal or spurious relations. See Leuridan, Weber, and Van Dyck (2008) for the distinction between manipulative policy and selective policy.

6. Interventions are informally defined as follows (for a formal definition, see Woodward [2003b], 98–99): "an intervention on some variable X with respect to some second variable Y is a causal process that changes the value of X in an appropriately exogenous way, so that if a change in the value of Y occurs, it occurs only in virtue of the change in the value of X and not through some other causal route" (94).

With the help of Woodward's conceptual framework, Mitchell's concept of pragmatic law can be refined. Admittedly, Woodward defines 'laws' traditionally (2003b, 166–67). He also argues that lawfulness is not of any help regarding scientific explanation. Laws are only explanatory insofar as they are invariant. Laws that are not change-relating, and hence not invariant, are not explanatory (208). But this does not preclude us from joining the concepts of pragmatic law and invariance. (For more detailed comparisons between both frameworks, see Mitchell [2000], 258–59, and Woodward [2003b], 295–99.) In the remainder of this article, I repeatedly use the following four concepts:

(P-regularity). A regularity is a pragmatic regularity (a P-regularity) if it has some degree of stability and strength.

(P-law). A generalization is a pragmatic law (a P-law) if it describes a P-regularity. It has stability and strength to the extent that the regularity it describes is stable and strong. It allows one to a certain extent to predict, to explain, or to manipulate the world. It may, but need not, satisfy the criteria for strict lawfulness.

(cP-law). A P-law is a causal P-law (a cP-law) if it is invariant under some range of interventions. It allows one to a certain extent to predict, to explain, or to manipulate the world.7

(cP-regularity). A P-regularity is a causal P-regularity (a cP-regularity) if it is described by a cP-law.8

Up until now, we have seen that whereas cs-mechanisms are put forward as an alternative to strict regularities (sec. 2), they are nevertheless defined in terms of regularities (sec. 3). In this section, I have presented concepts of regularity/law that more nicely fit scientific practice. Now the question regarding the precise relations between cs-mechanisms and P-regularities and between mechanistic models and P-laws can be addressed.

7. For the use of cP-laws in nonmechanistic explanation both of single events and regularities, see sec. 7.

8. I am assuming that every regularity can be described by some generalization. Although Woodward defines interventions (and hence causation) with respect to generalizations, he does not oppose causal regularities (2003b, 14, 118–22).
In the interest of readability, I will write '(c)P-regularity' (respectively '(c)P-law') instead of 'P-regularity or cP-regularity' (respectively 'P-law or cP-law').

5. The Ontological Relations between Mechanisms and (c)P-Regularities. In this section, I first argue that cs-mechanisms are ontologically dependent on (c)P-regularities. No x can count as a mechanism unless it involves regularities. Then I investigate the reverse relation, that is, whether there can be (c)P-regularities without any underlying mechanism. Mechanisms are ontologically dependent on the existence of regularities both at the macrolevel and at the microlevel. First, no x can count as a cs-mechanism unless it produces some macrolevel regular behavior. Second, to produce such macrolevel regular behavior, this x has to rely on microlevel regularities.

In the life sciences, reference to mechanisms cannot be detached from matters of projectability. Morgan and his coworkers sought after the cs-mechanism of Mendelian heredity to explain both Mendel's findings and their exceptions in a systematic way (Morgan et al. 1915; Morgan 1919, 1926/1928). Mainly drawing from findings on fruit flies, they explained definite macrolevel behaviors (definite phenotypic ratios in subsequent generations of organisms) by referring to the behaviors (independent assortment, crossing over, interference, and so on) of a complex set of parts or entities (gametes, chromosomes, factors or genes, and so on). But they were not only interested in the fruit flies in their laboratories. They were interested in the mechanism of heredity in Drosophila and in other species as well. As evidence accumulated, both Mendelian inheritance and the underlying chromosomal mechanism were more and more considered a general phenomenon. In the end, Morgan formulated the theory of the gene (including Mendel's two laws) without reference to any specific species (1926/1928, 25). He likewise gave an abstract mechanistic explanation (chap. 3). The case of Morgan illustrates not only that talk in terms of laws is compatible with talk in terms of mechanisms but also that reference to mechanisms in the life sciences cannot be detached from matters of projectability.

Because of this concern for projectability, Glennan (2002, S345) stresses that the behavior of cs-mechanisms as a whole should be stable. At this point, the reader might worry that metaphysical issues (about what a mechanism is) get conflated with epistemological ones (about the use of mechanistic knowledge). Such worry would be baseless. It is not that our concern for projectability implies that mechanisms should be stable or robust. Rather, it implies that life scientists should search after robust mechanisms (it is a matter of fact that, to phrase it naively, they succeed in this). And if the concept of 'cs-mechanism' is to fit scientific practice (as is argued by Machamer et al. [2000, 1–2] and Bechtel and Abrahamsen [2005, 422]), it must incorporate this notion of stability. But, per definitionem, this comes down to the following statement:

(H-REG) There can be no cs-mechanism without some higher-level (c)P-regularity (i.e., the stable behavior produced by that mechanism).

Both M* and M† conform to H-REG. (Note that this is a stronger claim than the one I argued for in sec. 3.) Following M†, mechanisms perform a function. They have a dispositional property that φ-regularly results in M if the conditions C and W are satisfied.
Even very weak dispositions (see n. 4) can be accounted for by the concept of (c)P-regularity. Following M*, mechanisms are productive of regular changes from start or setup to finish or termination conditions: they exhibit cP-regularities.

Are no exceptions to H-REG possible? A prima facie exception has been provided by Bogen (2005, 398–400), who criticizes Machamer et al. (2000) for providing an unfounded regularist account of causation, and Machamer (2004, n. 1) has sided with him. According to regularism, there is no causation without regularity. By contrast, Bogen argues for an Anscombian account in which causality is one thing, and regularity is another. From this he concludes that mechanists need not invoke regularities or invariant generalizations. Some cs-mechanisms, he states, are too unreliable to fit regularism: "The mechanisms which initiate electrical activity in post-synaptic neurons by releasing neuro-transmitters are a case in point. They are numerous enough, and each of them has enough chances to release neurotransmitters to support the functions of the nervous system. But each one fails more often than it succeeds, and so far, no one has found differences among background conditions which account for this" (Bogen 2005, 400; my emphasis).

Does this example convincingly show that some cs-mechanisms are too unreliable to fit regularism? The answer to that question depends on what is meant by 'regularity' and 'regularism'. Anscombe (1981, 133) argues that causation does not imply strict regularity. Bogen (2005, 399, 411) seems to adopt this strict interpretation of 'regularity', too. If 'regularity' and 'regularism' are interpreted in this strict sense, then the example convincingly shows that mechanists need not be regularists. It does not show, however, that H-REG is false; it does not show that some cs-mechanisms go without (nonstrict) higher-level P-regularities. Neither P-regularities nor cP-regularities need be strict or deterministic. Nor do they need to be backed by strict regularities. The concept of (c)P-regularity is compatible with genuine indeterminism. (Bogen [2005, 398] classifies both Mitchell and Woodward as regularists. If regularism is interpreted strictly, this classification is unfounded. Mitchell [1997, S478] and Woodward [2003b, 41] explicitly leave room for nondeterministic pragmatic laws and invariant generalizations, respectively.)

A genuine exception to H-REG would be provided by a cs-mechanism that produces a token causal relation that happens only once. Such a unique token causal relation cannot be regarded as instantiating an actual regularity. (I would like to thank Phyllis McKay Illari for raising this point.) My account does not rule out unique token causal relations. I use Woodward's theory of invariance to distinguish between causal and noncausal P-laws and regularities. From this it does not follow that regularity (even weak regularity) is a necessary condition for causality. Yet can unique token causal relations be constituted by cs-mechanisms? I doubt this. Unique token causal relations rather seem the product of Salmon/Railton mechanisms, that is, actual sequences of interconnected events (cf. Glennan 2002, S345, S349–S350).

Let us turn now to the cs-mechanisms' microlevel dependence on cP-regularities. A mechanism's behavior is not groundless. It is produced by its component parts. Suppose now that some part p_i behaves completely irregularly: it may do a_i1 or a_i2 or . . .
or a_in, but what it does is the result of a completely random internal process. There is no relation whatsoever to the behavior of the other parts p_j of the mechanism or to the previous behaviors of p_i itself. Suppose, moreover, that the same holds for all the other parts of the mechanism. Clearly, this would make it very unlikely for the mechanism to produce a macrolevel P-regularity, let alone a cP-regularity. So unless the behavior of its parts is sufficiently stable and sufficiently strong, that is, unless it is P-regular, and unless these behaviors are organized sufficiently well, the mechanism's overall behavior will fail to be P-regular. (I do not rule out that some of the mechanism's parts behave strictly randomly. However, then, sufficiently many other parts should behave P-regularly, and their behavior should be organized sufficiently well.)

(L-REG) There can be no cs-mechanism without some lower-level (c)P-regularities (i.e., the regular behaviors, operations, or activities displayed or engaged in by the mechanism's parts).

Again, this is stressed by Glennan (2002, S344): a mechanism's parts must be objects—in the absence of interventions, their properties must remain relatively stable. Translating this to M* and M†, these parts' activities or operations must be (c)P-regularities.

Up until now, I have shown that there can be no cs-mechanisms without both macro- and microlevel regularities. But what about the reverse relation? Can there be a (c)P-regularity without an underlying mechanism? In other words, can there be fundamental regularities whose stability and strength are somehow sui generis? Glennan (1996, 61–63) assumes or stipulates that they exist. That is more than I need. In my view, fundamental P-regularities are possible, and that suffices to establish an ontological asymmetry between P-regularities and cs-mechanisms. (It might be the case that, as a matter of fact, all (c)P-regularities rest on some underlying mechanism—I see nothing metaphysically wrong in an infinite ontological regress of mechanisms and regularities.)

6. The Epistemological Relations between Mechanistic Models and (c)P-Laws. Drawing on the findings of the previous section, I now show that mechanistic explanation cannot dispense with (c)P-laws. To adequately describe cs-mechanisms, mechanistic models need to incorporate—and thus are epistemologically dependent on—(c)P-laws. By contrast, a generalization may count as a P-law without describing any underlying mechanism.

A large part of the complex-systems literature about mechanisms, especially the contributions by Machamer et al. (2000) and by Bechtel and Abrahamsen (2005), is motivated by the failure of the D-N model to provide an adequate account of scientific explanation (see sec. 2). Explanation, especially in the life sciences, rarely if ever involves subsumption under strict laws. Far more often it takes the form of mechanistic explanation: one models or describes the mechanism underlying the explanandum phenomenon. This raises the question what criteria a model should satisfy to count as a model of a cs-mechanism. The trivial answer is that it should adequately represent that mechanism. Less trivially, it should adequately represent (i) the mechanism's macrolevel behavior, (ii) the mechanism's parts and their properties, (iii) the operations they perform or the activities they engage in, and (iv) the organization of these parts and operations. Let us call this the adequacy criterion for mechanistic models (see also Craver 2006, 367–73). So, by section 5, the model should adequately describe both the macrolevel and the microlevel (c)P-regularities. Hence, by definition, it should incorporate (c)P-laws. Thus, the adequacy criterion implies that all mechanistic models must incorporate (c)P-laws, as the following schema makes explicit. (Note that my claim differs from Weber's [2008], in that I do not focus solely on physical laws.)
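Put schematically (again my notation, not the mechanicists'), the adequacy criterion requires that a mechanistic model have at least the following structure:

\[
\mathcal{M} \;=\; \langle L_{\text{macro}},\ \{L_1, \ldots, L_n\},\ O \rangle,
\]

where \(L_{\text{macro}}\) is a (c)P-law describing the mechanism's overall behavior (cf. H-REG), each \(L_i\) is a (c)P-law describing the activity or operation of a part \(p_i\) (cf. L-REG), and \(O\) is a description of how these parts and operations are organized.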
But then the following question arises. Is it possible to gain evidence for a generalization's lawfulness without relying on mechanistic background knowledge? Can one be convinced that some generalization describes a regularity that is sufficiently stable or strong (for some particular application context), and can one assess this stability or strength without any evidence for some underlying mechanism? In short, can a generalization count as a (c)P-law without referring to mechanisms? To be sure, this question is not idle, and moreover it has large epistemological import. It is not idle since mechanistic background knowledge is useful in assessing the lawfulness of regularities (see the previous discussion of T. H. Morgan; see also Darden 1991) and is so used in many different scientific disciplines. It has large epistemological import since, given what we know from the first part of this section, the epistemological dependence of (c)P-laws on mechanistic models would imply an infinite (and vicious) epistemological regress. To be sure that some model M is a model of a cs-mechanism, I would need to know that the generalizations G_1, . . . , G_n figuring in it are (c)P-laws. Yet then I would have to know the underlying mechanisms. It certainly does not do to rely on the existence of fundamental laws. First, in section 5, I have argued that fundamental regularities are not impossible. Yet it is still an open question whether they actually exist. Second, granted that there are fundamental regularities, few or no practicing biologists would turn to fundamental laws in explaining biological phenomena.

Machamer et al. (2000) have tried to solve the problem of infinite regress by introducing the notion of 'bottoming out'. In their conception, nested hierarchical descriptions of mechanisms bottom out in lowest-level mechanisms. These are fundamental only relative to the purposes of a given scientist or discipline. Their entities and activities are taken to be fundamental (and hence not calling for further explanation) relative to the research question at hand, even if it is known that in other scientific fields they are considered as nonfundamental, macrolevel phenomena (13). Although this notion nicely fits scientific practice, it offers at best a pseudosolution to our problem. By treating some entities and their behavior as fundamental (relative to some context), one has not thereby shown (but only assumed) that that behavior is sufficiently stable or strong (for that context).

In the rest of this section, I face the problem head on and show that (c)P-laws are not epistemologically dependent on mechanistic models. Mechanistic knowledge is not indispensable for the assessment of a generalization's lawfulness. Other means do at least as well. A most natural candidate is performing experiments.
Experiments are often ascribed the power to reveal causal connections and to confirm or refute claims about stable regularities, even if the relation between experiments and laws or theories is fraught with several problems (see Franklin 1995, 2003).9 Moreover, experiments are very frequently performed in biology and the biomedical sciences. The question now is to what extent P-regularities may be experimentally discovered or established without any knowledge of some underlying mechanism. I start by giving a very general characterization of experiments.

(EXP) In an experiment, an object is placed in some controlled environment. Using some apparatus, it is manipulated such that it assumes some definite property X = x. Then, again using some apparatus, the outcome is measured in some (other) property Y. More specifically, it is verified whether there is some relation between X = x and Y = y (for some or all possible values x of X and y of Y), and if so, what its strength is and how it can be characterized.

Let me briefly dwell on this description. The term 'object' should be interpreted as broadly as possible. It may refer to one particular material object or to some complex of objects or to some sample of liquid or gas and so on. An environment is 'controlled' if the relation between X and Y is not influenced or disturbed by other factors. Eliminating all possible disturbing factors (and all possible sources of error in general) is a very delicate and difficult task, a large part of which depends on statistical analysis and data reduction (cf. Galison 1987; Franklin 1990). I return to this issue in a moment. Emphasis is laid on 'manipulation' since this, much more than passive observation, is considered a particularly reliable way to find out causal relationships.10 Finally, apparatuses are often indispensable in experimental designs. They play at least three different roles: as a device for manipulation, for measurement, or to control disturbing influences. (In Radder [2003a], the role of technology and instruments in experiments is discussed several times by many different authors.)

9. Franklin (1995, 196–204) discusses three problems. The first is known as the 'theory-ladenness of observation'. Observation statements and measurement reports use terms whose meanings are determined by a particular theory. (This problem may be generalized. The realization of an experiment often also depends on theoretical insights about the experimental [object-apparatus] system and the possible interactions with its environment. Prior knowledge is needed about the object under study and about the instruments used [Radder 2003b, 165, 168–69].) The second is the 'Duhem-Quine problem'. If some hypothesis h generates a prediction e, it always does so together with some background knowledge b. Hence, if ∼e is observed instead of e, either h is to be blamed or b or both. So one can always save h by blaming b only. The third problem is the fact that experiments are fallible and that different experimental results may discord. Franklin concludes that although these problems are important and impel us to treat experimental results carefully, they are not insuperable. Experimental evidence may serve to test laws and theories.

10. Woodward (2003a) heavily stresses the connections between experimentation and manipulation on the one hand and causation on the other hand. In his view, experiments are not only an excellent tool for causal discovery and causal inference. To say that X causes Y also "means nothing more and nothing less than that if an appropriately designed experimental manipulation of [X] were to be carried out, [Y] (or the probability of [Y]) would change in value" (90).

This characterization, the mechanicist may argue, clearly reveals the use of mechanistic background knowledge in experimentation. If you want to create a controlled environment and rule out all disturbing influences, much is gained by knowing what these influences are. Such knowledge, furthermore, is outstandingly provided by mechanistic models.
I endorse this claim but deny that it is noxious for my argumentation. Mechanistic background knowledge is highly valuable for the experimenter. Yet it certainly is not indispensable.

In many experiments, namely, in randomized experimental designs, disturbing influences are not screened off physically. (See Psillos [2004, sec. 5] for a similar yet somewhat distinct discussion of randomized experimental designs and mechanisms.) Instead, experimenters endeavor to cancel out their influence by means of randomization. From the target population P, a sample S is randomly selected. The random sampling procedure should guarantee that the subjects in S do not differ drastically from the rest of the subjects in P. In other words, for any variable Z, its distribution in S should not deviate drastically from its distribution in P. (In practice this is not guaranteed. Randomization only works with certainty in the limit as sample size tends to infinity.) Then the subjects in S are randomly divided into an experimental group S_X and a control group S_K. All subjects in S_X are manipulated such that they assume some definite property X = x, whereas those in S_K are not so manipulated (X = ∼x)—often they are given a placebo. This procedure should guarantee that the subjects in S_X and S_K most closely resemble each other, except with respect to the cause variable X and its effects. (Instead of having only X = x and X = ∼x, one may also create several experimental groups, each with a different level of X.) Then the relation between X and the effect variable Y is measured.

Randomization is highly context independent. It allows control for disturbing influences without even knowing them. Thus, mechanistic background knowledge is no conditio sine qua non for experimentation, and it is not indispensable regarding the assessment of the lawfulness of a generalization—even if there were a mechanism underlying the corresponding regularity. Fortunately, we escape the problem of infinite epistemic regress.
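The statistical rationale behind this escape can be stated compactly (a standard identification argument; none of the authors cited above puts it quite this way). Random assignment makes every variable other than X and its effects equally distributed, in the limit, across the two groups:

\[
P(Z = z \mid S_X) \;=\; P(Z = z \mid S_K) \quad \text{for every such variable } Z,
\]

so that any observed difference

\[
P(Y = y \mid S_X) \;-\; P(Y = y \mid S_K) \;\neq\; 0
\]

can be attributed only to the manipulation of X. Nothing in the argument requires knowing which variables Z there are, which is exactly why no mechanistic inventory of disturbing factors is needed.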
7. Conclusion: Can Mechanisms Really Replace Laws of Nature? In this article, I have substantiated four claims. First, cs-mechanisms as defined in M* or M† necessarily involve (c)P-regularities. Second, even if it cannot be ruled out that all (c)P-regularities involve an underlying mechanism, it is at least possible that there are fundamental regularities. Third, no model can count as a mechanistic model unless it incorporates (c)P-laws. Finally, a generalization can be considered a (c)P-law even if it does not refer to an underlying mechanism.

Is there then any tension between the mechanistic literature and Mitchell's theory of pragmatic laws? In one sense, there is not. The above arguments show that the mechanistic literature cannot replace (but rather depends on) talk in terms of laws of nature, provided the latter are conceived of pragmatically. So the two must be compatible. Yet in another sense, there is a tension that deserves further elaboration. Mitchell's account is more encompassing than the mechanicists': there are stable generalizations (and corresponding regularities) that are not (or not easily or not fruitfully) amenable to mechanistic explanation yet that are useful for prediction, explanation, or manipulation. Whereas these are outside the scope of the mechanistic literature, Mitchell's account allows us to take them seriously. Fundamental P-laws, if existent, would provide an obvious example. Yet there are also genuinely biological and social P-laws that are not amenable to mechanistic explanation.

Sawyer (2004) argues that there are social properties, for example, "having a dispute" or "being a church," which are multiply realized in wildly disjunctive sets of lower-level mechanisms. Such properties have two features that are of interest for us: they are not amenable to mechanistic explanation, and they can figure in causal laws. "To the extent that social properties are real, [social mechanist] explanation may be limited to the explanation of individual cases that do not generalize widely, resulting in an interpretivist or case study approach rather than a science of generalizable laws and theories" (266; my emphasis; arguably, these causal laws are not strict). Sawyer focuses on the social sciences, but similar cases can be found in the biological sciences. Division of labor is a relatively stable phenomenon that is found in ants, bees, and wasps (Mitchell 2009, 46–48). In each species, division of labor is the outcome of a different evolutionary pathway, and it is realized by a different mechanism. For example, diverging behavior between colony members is partly explained in terms of genetic differences, but fire ants and honeybees harbor different degrees of genetic variability (48; Mitchell uses the case of division of labor for different purposes, i.e., to show that biological laws are contingent rather than necessary). In short, both in the social sciences and in biology there are stable regularities that are multiply realized by (wildly) disjunctive mechanisms. Focusing on these underlying mechanisms may be interesting, but it comes at a cost: one loses sight of the generality of the regularities (and corresponding laws) in question.

It could be objected that in these cases there is no single relatively general regularity but rather a wild disjunction of very local regularities (each constituted by a different mechanism) and that by treating them as single regularities, one illicitly lumps together things that should be distinguished. I disagree, however. Whether such lumping would be illicit depends on pragmatic considerations. If one endeavors to predict, to explain, or to bring about something, one may in some cases rely on generalizations that describe multiply realized P-regularities. What matters is whether they are sufficiently stable and strong for the purpose at hand, not whether they are the result of a single rather than many kinds of mechanisms. Focusing on the social sciences, Julian Reiss (2007, 176–81) argues that investigating causal mechanisms is not always a good strategy for accurate description, prediction, or control. On the basis of these findings, he argues for a more pluralistic methodology of the social sciences. Social scientists should not only strive for mechanistic models.
Description, prediction, and control are nonexplanatory aims of science. Reiss does not question the assumption that causal mechanisms play an essential role in theoretical explanation. Are Machamer et al. (2000) right in claiming that explanation requires providing a description of a cs-mechanism (cf. above)? I think they are not. In Woodward's framework, there is in a sense room for nonmechanistic explanations of both single events and regularities.

Woodward explicitly discusses the case of single events. He gives the example of invoking the ideal gas law to explain why the pressure of a sample of gas enclosed in a fixed volume increases when the temperature is increased: "According to the manipulationist account, so-called phenomenological laws or generalizations can figure in explanations . . . even if they tell us nothing about underlying mechanisms, processes or constituents" (2003b, 221; my emphasis). In my view, Woodward's theory may also leave room for nonmechanistic explanation of regularities. Consider a causal regularity, say A's causing B's. According to Woodward, explanatory relevance is intimately tied to what-if-things-had-been-different information. Suppose now that we find out that A is not a direct cause of B but that A is a cause of C, which in turn is a cause of B. This discovery would allow us to answer more what-if-things-had-been-different questions (e.g., about what would happen given disturbing influences on C) and thus should count as an explanation of the relation between A and B, even if A, B, and C are at the same mechanistic level (see the sketch below).11

11. Weber and Leuridan (2008) call this a 'mediating mechanism' as opposed to a 'cs-mechanism'. For the notion of 'levels of mechanisms', see Craver (2007, chap. 5).
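A toy simulation may help to see why. The following sketch is not Woodward's own formalism; the linear equations, coefficients, and noise terms are invented purely for illustration. It shows that once the mediating variable C is known, one can answer intervention questions about C that the bare A–B regularity leaves open.

    import random

    random.seed(0)  # fixed seed for the illustration

    def simulate(a, intervene_c=None):
        """Toy structural model A -> C -> B with small noise terms.

        If intervene_c is given, C is set to that value regardless of A,
        modelling an ideal intervention on the mediating variable.
        """
        c = 0.8 * a + random.gauss(0, 0.1) if intervene_c is None else intervene_c
        b = 1.5 * c + random.gauss(0, 0.1)
        return b

    # The bare A-B regularity answers only: what if A had been different?
    print(simulate(a=1.0), simulate(a=0.0))

    # Knowing C answers a further what-if-things-had-been-different
    # question: what happens to B if a disturbing influence fixes C at 0
    # while A remains at 1?
    print(simulate(a=1.0, intervene_c=0.0))

The second question cannot even be posed within the A–B regularity alone; that extra counterfactual information is what, on the manipulationist account, makes citing C explanatory.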
Does nonmechanistic explanation using cP-laws escape the objection we encountered in section 2, namely, that subsumption of some phenomenon under a law is not explanatory since it does not show why the phenomenon occurs (i.e., why the association holds)? I do not deny that mechanistic explanations, if available, are generally better than nonmechanistic explanations. They often provide more what-if-things-had-been-different information and hence better or deeper understanding (Woodward 2003b, 223). Yet the objection in question involves a shift of explanandum (from the original phenomenon that is subsumed under the law to the regularity described by that law). This shift may be legitimate or illegitimate. Whether subsumption under some cP-law counts as a good explanation depends on pragmatic factors (contextual features such as the explanation seeker's background knowledge). Consider the following example: "The slush on the sidewalk remained liquid during the frost because it had been sprinkled with salt." According to McMullin, who borrows the example from Hempel, this counts as an explanation only if one did not know that salt had been sprinkled there or if one did not know the effect of sprinkling salt on snow. "If someone who knowingly sprinkled salt on snow, which then proceeded to melt, were to seek an explanation for this, it would be a rather weak response to say that under these conditions, salt always does this!" (McMullin 1984, 214). What she, the explanation seeker, calls for is, presumably, a mechanistic explanation.12 In scientific contexts, the background knowledge of the explanation seekers is generally such that they would not settle for nonmechanistic explanations, and the shift of explanandum would be legitimate. Yet this does not alter the fact that on many occasions (mostly in everyday life) nonmechanistic explanations do suffice, and the shift of explanandum would be illegitimate.

12. McMullin calls this a retroductive explanation as opposed to a nomothetic explanation. By the latter, he means an explanation using strict laws, but the point applies to pragmatic laws as well. Note that Hempel (1965, 425–28) had an ambiguous attitude toward the pragmatic aspects of explanation.

The mechanicists are right in criticizing strict laws of nature and D-N explanation, and their analysis of mechanisms and mechanistic explanation is highly valuable. Yet the mechanistic account does not render Mitchell's pragmatic solution to the problem of lawfulness superfluous. It cannot replace (but depends on) talk in terms of pragmatic laws of nature, and it is less encompassing than Mitchell's theory.

REFERENCES

Anscombe, G. E. M. 1981. "Causality and Determination." In Collected Philosophical Papers. Vol. 2, Metaphysics and Philosophy of Mind, ed. G. E. M. Anscombe, 133–47. Minneapolis: University of Minnesota Press.
Bateson, William. 1900. "Problems of Heredity as a Subject for Horticultural Investigation." Journal for the Royal Horticultural Society 25:54–61.
———. 1902. Mendel's Principles of Heredity: A Defence. London: Cambridge University Press.
Beatty, John. 1995. "The Evolutionary Contingency Thesis." In Concepts, Theories, and Rationality in the Biological Sciences, ed. G. Wolters and J. Lennox, 45–81. Pittsburgh: University of Pittsburgh Press.
———. 1997. "Why Do Biologists Argue like They Do?" Philosophy of Science 64 (Proceedings): S432–S443.
Bechtel, William, and Adele Abrahamsen. 2005. "Explanation: A Mechanist Alternative." Studies in History and Philosophy of Biological and Biomedical Sciences 36:421–41.
Bechtel, William, and Robert C. Richardson. 1993. Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research. Princeton, NJ: Princeton University Press.
Beed, Clive, and Cara Beed. 2000. "Is the Case for Social Science Laws Strengthening?" Journal for the Theory of Social Behaviour 30 (2): 131–53.
Bigelow, John, and Robert Pargetter. 1987. "Functions." Journal of Philosophy 84 (4): 181–96.
Bogen, Jim. 2005. "Regularities and Causality: Generalizations and Causal Explanations." Studies in History and Philosophy of Biological and Biomedical Sciences 36:397–420.
Brandon, Robert N. 1997. "Does Biology Have Laws? The Experimental Evidence." Philosophy of Science 64 (Proceedings): S444–S457.
Cartwright, Nancy. 1983. How the Laws of Physics Lie. Oxford: Clarendon.
Christie, Maureen. 1994. "Philosophers versus Chemists Concerning 'Laws of Nature.'" Studies in the History and Philosophy of Science 25 (4): 613–29.
Craver, Carl F. 2006. "When Mechanistic Models Explain." Synthese 153:355–76.
———. 2007. Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience. Oxford: Clarendon.
Cummins, Robert. 1975. "Functional Analysis." Journal of Philosophy 72:741–64.
Darden, Lindley. 1991. Theory Change in Science: Strategies from Mendelian Genetics. Oxford: Oxford University Press.
de Vries, Hugo. 1900. "The Law of Segregation of Hybrids" [Das Spaltungsgesetz der Bastarde]. In The Origin of Genetics: A Mendel Source Book, ed. C. Stern and E. R. Sherwood, 107–17. San Francisco: Freeman.
Dowe, Phillip. 2000. Physical Causation. Cambridge: Cambridge University Press.
Fara, Michael. 2006. "Dispositions." In The Stanford Encyclopedia of Philosophy, ed. E. N. Zalta. Stanford, CA: Stanford University.
Franklin, Allan. 1990. Experiment, Right or Wrong. Cambridge: Cambridge University Press.
———. 1995. "Laws and Experiment." In Laws of Nature: Essays on the Philosophical, Scientific and Historical Dimensions, ed. F. Weinert, 191–207. Berlin: de Gruyter.
———. 2003. "Experiment in Physics." In The Stanford Encyclopedia of Philosophy, ed. E. N. Zalta. Stanford, CA: Stanford University.
Galison, Peter. 1987. How Experiments End. Chicago: University of Chicago Press.
Galton, Francis. 1889. Natural Inheritance. London: Macmillan.
———. 1897. "The Average Contribution of Each Several Ancestor to the Total Heritage of the Offspring." Proceedings of the Royal Society 61:401–13.
Glennan, Stuart S. 1996. "Mechanisms and the Nature of Causation." Erkenntnis 44:49–71.
———. 2002. "Rethinking Mechanistic Explanation." Philosophy of Science 69 (Proceedings): S342–S353.
Goodman, Nelson. 1973. Fact, Fiction, and Forecast. Indianapolis: Bobbs-Merrill.
Hempel, Carl G. 1965. Aspects of Scientific Explanation and Other Essays in the Philosophy of Science. New York: Free Press.
Hill, Austin B. 1965. "The Environment and Disease: Association or Causation?" Proceedings of the Royal Society of Medicine 58:295–300.
Klug, William S., and Michael R. Cummings. 1997. Concepts of Genetics. 5th ed. Upper Saddle River, NJ: Prentice-Hall.
Leuridan, Bert. 2007. "Galton's Blinding Glasses: Modern Statistics Hiding Causal Structure in Early Theories of Inheritance." In Causality and Probability in the Sciences, ed. F. Russo and J. Williamson, 243–62. Texts in Philosophy. London: College.
Leuridan, Bert, Erik Weber, and Maarten Van Dyck. 2008. "The Practical Value of Spurious Correlations: Selective versus Manipulative Policy." Analysis 68:298–303.
Machamer, Peter. 2004. "Activities and Causation: The Metaphysics and Epistemology of Mechanisms." International Studies in the Philosophy of Science 18 (1): 27–39.
Machamer, Peter, Lindley Darden, and Carl F. Craver. 2000. "Thinking about Mechanisms." Philosophy of Science 67 (1): 1–25.
McMullin, Ernan. 1984. "Two Ideals of Explanation in Natural Science." Midwest Studies in Philosophy 9:205–20.
Mendel, Gregor. 1865/1933. Versuche über Pflanzenhybriden. Ostwald's Klassiker der Exakten Wissenschaften. Leipzig: Akademische.
Mitchell, Sandra D. 1997. "Pragmatic Laws." Philosophy of Science 64 (Proceedings): S468–S479.
———. 2000. "Dimensions of Scientific Law." Philosophy of Science 67 (4): 242–65.
———. 2003. Biological Complexity and Integrative Pluralism. Cambridge: Cambridge University Press.
———. 2009. Unsimple Truths: Science, Complexity and Policy. Chicago: University of Chicago Press.
Morgan, Thomas H. 1919. The Physical Basis of Heredity. Philadelphia: Lippincott.
———. 1926/1928. The Theory of the Gene. Rev. ed. New Haven, CT: Yale University Press.
Morgan, Thomas H., Alfred Sturtevant, Hermann Muller, and Calvin Bridges. 1915. The Mechanism of Mendelian Heredity. New York: Henry Holt & Co.
Nagel, Ernest. 1961. The Structure of Science: Problems in the Logic of Scientific Explanation. London: Routledge & Kegan Paul.
Prior, Elizabeth W., Robert Pargetter, and Frank Jackson. 1982. "Three Theses about Dispositions." American Philosophical Quarterly 19 (3): 251–57.
Psillos, Stathis. 2004. "A Glimpse of the Secret Connexion: Harmonizing Mechanisms with Counterfactuals." Perspectives on Science 12 (3): 288–319.
Radder, Hans, ed. 2003a. The Philosophy of Scientific Experimentation. Pittsburgh: University of Pittsburgh Press.
———. 2003b. "Technology and Theory in Experimental Science." In Radder 2003a, 152–73.
Reiss, Julian. 2007. "Do We Need Mechanisms in the Social Sciences?" Philosophy of the Social Sciences 37 (2): 163–84.
Roberts, John T. 2004. "There Are No Laws of the Social Sciences." In Contemporary Debates in Philosophy of Science, ed. C. Hitchcock, 151–67. Malden, MA: Blackwell.
Salmon, Wesley. 1984. Scientific Explanation and the Causal Structure of the World. Princeton, NJ: Princeton University Press.
Sawyer, R. Keith. 2004. "The Mechanisms of Emergence." Philosophy of the Social Sciences 34 (2): 260–82.
Sober, Elliott. 1997. "Two Outbreaks of Lawlessness in Recent Philosophy of Biology." Philosophy of Science 64 (Proceedings): S458–S467.
Weber, Erik, and Bert Leuridan. 2008. "Counterfactual Causality, Empirical Research and the Role of Theory in the Social Sciences." Historical Methods 41 (4): 197–201.
Weber, Marcel. 2008. "Causes without Mechanisms: Experimental Regularities, Physical Laws, and Neuroscientific Explanation." Philosophy of Science 75 (5): 995–1007.
Woodward, Jim. 2002. "What Is a Mechanism? A Counterfactual Account." Philosophy of Science 69 (Proceedings): S366–S377.
———. 2003a. "Experimentation, Causal Inference, and Instrumental Realism." In Radder 2003a, 87–118.
———. 2003b. Making Things Happen: A Theory of Causal Explanation. New York: Oxford University Press.