Logical Foundations of Evidential Support

Branden Fitelson∗†

Abstract

Carnap's inductive logic (or confirmation) project is revisited from an "increase in firmness" (or probabilistic relevance) point of view. It is argued that Carnap's main desiderata can be satisfied in this setting, without the need for a theory of "logical probability". The emphasis here will be on explaining how Carnap's epistemological desiderata for inductive logic will need to be modified in this new setting. The key move is to abandon Carnap's goal of bridging confirmation and credence, in favor of bridging confirmation and evidential support.

[∗] Department of Philosophy, University of California–Berkeley, 314 Moses Hall #2390, Berkeley, CA 94720-2390

[†] I would like to thank audiences at the University of California–Berkeley, the University of Michigan, the University of Oklahoma, and the PSA 2004 symposium at which this paper was presented for useful comments and discussion (there are too many members of these audiences who have made valuable suggestions to name each one individually).

1 Setting the Stage: Three Carnapian Desiderata

In the second edition of Logical Foundations of Probability (LFP), Carnap (1962, xvi) distinguished two kinds of inductive-logical confirmation relations: confirmation as firmness, which he informally characterized as "How probable the hypothesis H is on the basis of the evidence E", and confirmation as increase in firmness, which he informally characterized as "How much the probability of H is increased when new evidence E is acquired (in addition to the prior evidence which, for simplicity, we shall take here as tautological)." Carnap devotes almost all of LFP to the task of explicating the former. Presently, I will discuss Carnap's approach to the former, and sketch my own approach to the latter. My discussion will focus, ultimately, on the relation between inductive logic (the confirmation as increase in firmness relation) and epistemology (the relation of incremental evidential support).

I begin with some quotes from LFP to set the stage. The first gives a general sense of the very idea of inductive logic, as a quantitative analogue (or generalization) of deductive logic:

Deductive logic may be regarded as the theory of the relation of logical consequence [⊢], and inductive logic as the theory of another concept [c] which is likewise objective and logical . . . degree of confirmation.

The next two quotes give Carnap's (1962, 200) informal characterizations of the terms "logical" and "objective" as they apply to the relations ⊢ and c:

The principal common characteristic of the statements in both fields is their independence of the contingency of facts. This characteristic justifies the application of the common term 'logic' to both fields.

That c is an objective concept means this: if a certain c value holds for a certain hypothesis with respect to a certain evidence, then this value is entirely independent of what any person may happen to think about these sentences.

It is clear that Carnap, at least, intends here to be saying that statements of confirmation theory should be analytic. But mere analyticity is not all that is required of confirmation theory. Confirmation (c) must be analogous to entailment (⊢) in certain other ways as well. Specifically, Carnap (1962, 202) adds:

They both involve the concept of range. 'E L-implies H' ['E ⊢ H'] means that the entire range of E is included in that of H, while . . .
'c(H, E) = 3/4' means three-fourths of the range of E is included in that of H.[1]

Moreover, Carnap emphasizes that both relations (⊢ and c) should be applicable to epistemology in analogous ways. He endorses analogous epistemic bridge principles for both relations. For the ⊢ relation, Carnap (1962, 201) endorses the following closure principle:

If E is known by the person X at the time t, then H is likewise known by X at t [provided that E ⊢ H is also known by X at t].

For c, Carnap (1962, 201) endorses the following principle, which he takes to be analogous to closure:

If E and nothing else is known by X at t, then the degree of belief justified by the knowledge of X at t is 3/4 [provided that c(H, E) = 3/4 is also known by X at t].

For Carnap, then, inductive logic involves a confirmation relation c which has (at least) the following three characteristics (I take these to be neutral between the firmness and increase in firmness conceptions of confirmation):

• Analyticity: Statements involving c should be analytic, like statements involving ⊢ are.
  – Carnap had a philosophical theory of analyticity. We won't worry too much here about the details of his theory of analyticity (since that is not our emphasis). Roughly, here we will (following the spirit of Carnap's informal characterizations of analyticity rather than the letter of his technical theory thereof) take 'analytically true' to mean 'true in the way that (true) mathematical statements are true'.

• Logicality: c should be a quantitative generalization of ⊢.
  – c(H, E) should be maximal (minimal) when E ⊢ H (E ⊢ ∼H).[2]

• Applicability: c should be applicable to epistemology in ways analogous to the ways in which ⊢ is applicable to epistemology.
  – Some kind of epistemic bridge principle(s) should hold, which connects c with some epistemic concept (much more on this below).[3]

[1] Here, Carnap has in mind confirmation as firmness. We'll have to generalize this 'range' analogy with entailment when we discuss confirmation as increase in firmness, below.

[2] That is, so long as the function c(H, E) is defined. I am ignoring the paradoxes of entailment (see below). I have generalized Carnap's "logical range" analogy, since that analogy breaks down when we are talking about increase in firmness. I have settled on this desideratum, since it is the strongest one I can think of that is capable of meaningfully applying to both the firmness and the increase in firmness conceptions of confirmation.

[3] As we will see below, Carnap has other epistemic applicability requirements in mind, which go beyond mere "epistemic bridge principles". I won't comment too much on these additional desiderata here, as this would require a much longer and involved treatment.

2 Carnap's Approach to Confirmation as Firmness

Carnap's c(·, ·)s are, basically, measures of the relative frequencies with which certain syntactical particles occur in the sentences of certain simple formal languages.[4] The building blocks of early Carnapian cs are "regular logical measure functions" m(·).
Such ms assign numbers in (0, 1) (which sum to 1) to each of the state descriptions (Z) of some (monadic) first-order language L. In his earliest system, Carnap uses m†(·), which assigns equal measure to each state description in L. Conditional "logical" probability (c†) is then given by a ratio of these unconditional "logical" probabilities (in the usual way): c†(H, E) = m†(H & E) / m†(E). This function c† satisfies the first two desiderata, above, because: (i) c† statements are analytic, since, given m†, they are determined by the structure of L, and (ii) c† generalizes entailment, since c† is maximal (minimal) when E ⊢ H (E ⊢ ∼H).[5]

Nonetheless, Carnap eventually rejects c†, because he thinks its applicability to epistemology is unsatisfactory. Interestingly, this is not because c† fails to undergird his desired epistemic bridge principle. Carnap (1962, 565) explains:

The choice of c† as the degree of confirmation would be tantamount to the principle never to let our past experiences influence our expectations for the future.

What Carnap means here is that, according to c†(H, E), no E can ever increase the firmness of any H (unless E ⊢ H). That is, we can never have c†(H, E) > c†(H, T), unless E ⊢ H.[6] Because of this "no learning from experience" problem, Carnap abandons c† in favor of c∗: a function constructed using a different measure function m∗, which assigns equal measure to the structure descriptions (Str) of L. The structure descriptions of L are just collections of state descriptions of L that are invariant under permutations of individual constants. This "clumping" of state descriptions avoids the "no learning from experience" problem. But Carnap thinks c∗ has other shortcomings concerning its epistemic applicability. This causes Carnap to add an adjustable parameter λ to his c-constructions, which yields a λ-continuum of c-functions.[7] Later, Carnap puzzled over still other epistemic application requirements that he thought no cλ-function could adequately meet. This led to the addition of further adjustable parameters to his systems, which allowed Carnap more flexibility in the ways he could carve up his languages L. But even this broader class of cs never seemed fully satisfactory to Carnap from the point of view of its epistemic applicability. In the end, the classes of probability functions that one can construct Carnap-style from such Ls don't seem rich enough to meet all the demands of general epistemic application (Maher 2001).

[4] Here, I have in mind Carnap's early writings on confirmation and inductive logic. Later on, he moved toward a more semantical approach and away from the early syntactical approaches to confirmation. This distinction is not terribly important for present purposes. So, because it is simpler to talk about Carnap's early approaches, that's what I will do. For a more in-depth discussion of Carnap's programme along present lines, see (Fitelson 2005).

[5] Carnap's cs get more than just the "endpoints" right: they give a tighter quantitative generalization in terms of "logical range". This cannot be achieved by measures of confirmation as increase in firmness. See (Kemeny & Oppenheim 1952) for similar remarks.

[6] It is somewhat strange that Carnap should see this as a problem for a measure of confirmation as firmness. But I'll let that pass. Carnap (in the early days) had a tendency to slide back and forth between considerations of firmness confirmation and considerations of increase in firmness confirmation. See (Michalos 1971) for an extended discussion of this aspect of Carnap's thought, and Popper's criticisms of it.

[7] Basically, λ is an index of caution. If λ is infinite, then no learning from experience is possible (this yields c†); if λ is set to a certain finite value, it yields c∗; etc.

At this point, the following two key questions come to mind:

• Why bother with such syntactical constructions in the first place?

• Can we have analyticity, logicality, and epistemic applicability?
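Before turning to these questions, it may help to see the c†/c∗ contrast in miniature. The following sketch is merely illustrative (the toy language, with one predicate and two individuals, and all of the variable names are my own choices, not Carnap's notation); it computes m†, m∗, and the corresponding conditional measures, and exhibits the "no learning from experience" problem for c†.

```python
from itertools import product
from fractions import Fraction
from collections import Counter

# Toy monadic language: one predicate F and two individuals, a and b.
# A state description fixes the truth values of Fa and Fb.
states = list(product([True, False], repeat=2))             # (Fa, Fb)

# m-dagger: equal measure on each of the 4 state descriptions.
m_dagger = {s: Fraction(1, len(states)) for s in states}

# m-star: equal measure on each structure description (here, the number of
# individuals that are F), split equally among its state descriptions.
struct_sizes = Counter(sum(s) for s in states)               # {2: 1, 1: 2, 0: 1}
m_star = {s: Fraction(1, len(struct_sizes)) / struct_sizes[sum(s)] for s in states}

def m(measure, prop):
    """Measure of a proposition (a set of state descriptions)."""
    return sum(measure[s] for s in states if prop(s))

def c(measure, H, E):
    """Carnap-style conditional confirmation: c(H, E) = m(H & E) / m(E)."""
    return m(measure, lambda s: H(s) and E(s)) / m(measure, E)

E = lambda s: s[0]   # E: Fa
H = lambda s: s[1]   # H: Fb

for name, measure in [("c-dagger", m_dagger), ("c-star", m_star)]:
    print(name, "prior m(H):", m(measure, H), "  c(H, E):", c(measure, H, E))
# c-dagger: prior 1/2, c(H, E) = 1/2 -> observing Fa never raises the firmness of Fb
# c-star:   prior 1/2, c(H, E) = 2/3 -> "learning from experience" is possible
```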
In the next two sections, I will explain how we can avoid bothering with Carnap-style logical constructions while preserving all three Carnapian desiderata.

3 Analyticity and Logicality on the Cheap

Carnap acts as if the analyticity of c requires its syntactical construction (within L) out of "logical measure functions". It's unclear why this should be true (even if you accept Carnap's formalistic understanding of analyticity). There seems to be an additional presupposition at work here: that the function c should be two-place in the same way that ⊢ is two-place. But the attempt to construct a two-place c seems to have failed. Even Carnap found the need to add adjustable parameters to his recipes for constructing cs. And the resulting c statements will not even be determinate (much less analytic) until all of these parameters are adjusted (and even Carnap concedes that – in general – they cannot be adjusted a priori). So, in the end, c isn't really two-place, even for Carnap. I suggest another (more direct and general) way to ensure analyticity:

• c is a three-place relation between E, H, and a probability model (or a family thereof) M.[8]

By making c explicitly a three-place relation in this way, we don't restrict ourselves to a limited class of probability models that can be cooked up using simple first-order languages L according to some Carnapian recipe. This is advantageous, because there are many probability models which seem salient to statistical and inductive inference that cannot be simulated by even the most sophisticated of Carnapian constructions (Maher 2001). Moreover, this (obviously) yields analyticity (even in Carnap's technical sense), since, given M, all probability statements – relative to M – are mathematically true.[9] So, we can easily obtain analyticity (and increased generality) by making c three-place rather than two-place.

[8] By a probability model, I mean a boolean algebra of propositions, together with a function satisfying whichever axioms for probability you prefer. I prefer Kolmogorov's (1933) axioms for probability, but others prefer to take conditional probability as primitive. See (Hájek 2003). This issue is not crucial for my present purposes (especially in light of the fact that I am ignoring cases in which E and/or H are non-contingent), and so I will remain neutral on it here.

[9] One might object that this gets us analyticity too cheaply. After all, in this sense the mathematical statements of mathematical physics can also be considered analytic (once we relativize them to concrete mathematical structures). This may be true, but a mathematical theory of physics needs to be more than just analytic in this sense (i.e., in the sense that its mathematical claims are analytic qua mathematical claims) – it also needs to be empirically adequate, etc. Likewise, theories of confirmation also need to satisfy logicality and applicability. So, the fact that analyticity is "cheap" in this way does not, to my mind, undermine the philosophical significance of inductive logic so understood. One of the reasons Carnap wanted the statements in his theory of confirmation to be analytic is that he wanted them (if true) to be knowable a priori. I don't see how this is possible if the statements aren't even determinate until certain adjustable parameters are fixed empirically (as in Carnap's systems). By getting analyticity "on the cheap", we avoid this problem. A complete response to this objection is beyond the scope of the present discussion.
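To fix ideas, the notion of a probability model invoked in footnote 8 can be made concrete in a minimal and purely illustrative way: finitely many "worlds" with probability weights, with propositions represented as sets of worlds. The particular worlds and weights below are my own stand-ins; the point is only that, once M is fixed, any statement of the form PrM(H | E) = r has a determinate, mathematically checkable truth value.

```python
from fractions import Fraction

# A toy probability model M: three "worlds" with probability weights.
# Propositions are sets of worlds, so the boolean operations are just the
# set-theoretic ones (union, intersection, complement).
M = {"w1": Fraction(1, 2), "w2": Fraction(1, 4), "w3": Fraction(1, 4)}

def pr(M, A):
    """Unconditional probability of proposition A in model M."""
    return sum(p for w, p in M.items() if w in A)

def pr_given(M, H, E):
    """Conditional probability Pr_M(H | E)."""
    return pr(M, H & E) / pr(M, E)

H = {"w1", "w2"}
E = {"w2", "w3"}

# Given M, this value is a mathematical fact about M (here, 1/2):
print(pr_given(M, H, E))
```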
How about logicality? Yep. That's not a problem either, since, in all probability models M (Carnapian or otherwise), PrM(H | E) will be maximal (minimal) when E ⊢ H (E ⊢ ∼H). However, when we make the three-place nature of c explicit, the epistemic applicability problem becomes prominent. To see this, consider the following naïve, M-relativized Carnap-style bridge principle for firmness and credence:

If X knows E and nothing else (a posteriori), and X knows (a priori) that PrM(H | E) = r, then X's credence in H, given E, should be r.

This just can't be true, since if it were true, then nothing about the relationship between the model M, the agent X, and/or the world would be required to constrain X's credences.[10] Hence, this principle would be implausible even if we had an understanding of what it means (in general) to "know E and nothing else." What made Carnap's bridge principle at all plausible (perhaps modulo the "and nothing else" clause) was that his models were supposed to be models of "a priori probability". As such, they had their epistemic credentials built in to them from the start. But what if there is no such thing as "logical" or "a priori" probability (as many of us now suspect)? Does this mean that inductive logic is impossible? Not necessarily. At least, that's what I will try to argue.

4 Confirmation as Increase in Firmness Revisited

Let's return to confirmation as increase in firmness now. It is well known (Fitelson 2001) that there are many measures of "the degree to which E increases the firmness of H." Interestingly, very few of these satisfy logicality. We can sum up the desiderata of logicality and "sensitivity to increase in firmness (or relevance)" as follows. For all contingent[11] E and H, c(H, E, M) should be:

• maximal if E entails H;
• > 0 if E and H are correlated in M;
• = 0 if E and H are independent in M;
• < 0 if E and H are anti-correlated in M;
• minimal if E entails ∼H.

As it turns out, of all the relevance measures used or defended in the historical literature on confirmation (up to ordinal equivalence), only

l(H, E, M) = log[PrM(E | H) / PrM(E | ∼H)]

and

l′(H, E, M) = log[PrM(H | E) / PrM(H | ∼E)]

satisfy logicality.[12] And if we impose the following additional constraint (which seems to be accepted by all historical practitioners of probabilistic confirmation theory):

If PrM(H | E1) ≥ PrM(H | E2), then c(H, E1, M) ≥ c(H, E2, M),

then we get historical uniqueness of l.[13] That's neat.

[10] This is tantamount to treating arbitrary Pr-functions as "expert probabilities" (Gaifman 1988).

[11] Cases in which E and/or H are not contingent are very tricky – even in deductive logic (e.g., the paradoxes of entailment). So, for simplicity, I will bracket those cases here.

[12] This fact about l was known to Kemeny & Oppenheim (1952), who are (strangely) not cited by Carnap (1962) in his discussion of confirmation as increase in firmness. What I am doing here differs from what Kemeny & Oppenheim were doing in at least two respects. First, they were still working with Carnap-style theories of "logical probability", which I have abandoned. And, second, they do not discuss the problem of epistemic applicability, which is a crucial problem that any adequate account of inductive logic must address.

[13] By "historical uniqueness", I mean only that l (or its ordinal equivalents) are the only historically proposed and/or defended measures (up to ordinal equivalence) which satisfy all of our present desiderata. To get mathematical uniqueness, one would need to make some rather strong continuity assumptions, which have no intuitive connection to material desiderata for inductive logic. Such dubious mathematical assumptions are used by Good (1960) and Milne (1996) for precisely this purpose. See (Halpern 1999) for a nice critical discussion of these sorts of arguments and the implausibility of the continuity assumptions they require. I prefer not to appeal to any such purely mathematical conditions, which cannot be seen as material desiderata for inductive logic. This is why I talk in terms of historical uniqueness rather than mathematical uniqueness. I am open to the possibility that additional material desiderata will be discovered which will narrow the field even further. That, it seems to me, is how scientific investigation should work.
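These conditions are straightforward to check numerically. The sketch below is mine (the deck representation, helper functions, and example propositions are illustrative assumptions, not part of the formal apparatus above); it computes l(H, E, M) in the standard 52-card model and exhibits the maximal, positive, zero, and minimal cases from the list above.

```python
import math
from fractions import Fraction

# A toy probability model M: the standard 52-card deck with uniform probability.
RANKS = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]
SUITS = ["S", "H", "D", "C"]
DECK = [(r, s) for r in RANKS for s in SUITS]

def pr(A):
    return Fraction(sum(1 for c in DECK if A(c)), len(DECK))

def pr_given(A, B):
    return pr(lambda c: A(c) and B(c)) / pr(B)

def l(H, E):
    """Log-likelihood-ratio measure l(H, E, M) = log [ Pr(E|H) / Pr(E|~H) ]."""
    num = pr_given(E, H)
    den = pr_given(E, lambda c: not H(c))
    if den == 0:
        return math.inf          # E entails H: maximal confirmation
    if num == 0:
        return -math.inf         # E entails ~H: minimal confirmation
    return math.log(num / den)

spade      = lambda c: c[1] == "S"
ace_spades = lambda c: c == ("A", "S")
face       = lambda c: c[0] in {"J", "Q", "K"}
king       = lambda c: c[0] == "K"
red        = lambda c: c[1] in {"H", "D"}
ace        = lambda c: c[0] == "A"

print(l(spade, ace_spades))   # inf  : E entails H            (maximal)
print(l(king, face))          # > 0  : E, H positively correlated
print(l(ace, red))            # 0.0  : E, H independent
print(l(ace, face))           # -inf : E entails ~H            (minimal)
# An E that is anti-correlated with H without refuting it would yield a
# negative, finite value.
```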
But I don't want to make too much of a fuss here about quantitative relevance confirmation claims of the form ⌜c(H, E, M) = r⌝, since I think the most objective claims (and the claims most analogous to entailment claims) are qualitative claims of the form ⌜c(H, E, M) ≷ 0⌝. Are there bridge principles relating the qualitative increase in firmness confirmation relation and some epistemically important relation? I think so. But these principles will not bridge confirmation as increase in firmness and credence (since confirmation as increase in firmness measures are not probability functions). Rather, they will bridge confirmation and (incremental) evidential support. Moreover, I suspect that there are also comparative bridge principles, which bridge comparative confirmation claims and comparative claims about evidential support (but these will be more controversial). Let's look at the qualitative case first.

It's useful here to compare the deductive and inductive cases. In the deductive case, the bridge principle is rather simple. If a rational agent knows E ⊢ H and E, then (ceteris paribus[14]) they may be said to know H on the basis of E. Figure 1 allows us to picture a concrete example. In Figure 1, our agent learns a posteriori that (E) a card drawn at random from a standard deck is the ace of spades, and they know a priori that E entails (H) the card is a spade. So, the agent may (ceteris paribus) be said to know that H on the basis of E. The important thing to note here is that we may ignore the stochastic properties of the process which generated E. It doesn't matter whether our typical model of random card draws is a correct model of the process which generated E. Because the agent knows that E entails H in this case, the agent will know H on the basis of E here independently of such considerations. Carnap wanted the same kind of independence to obtain when it comes to inductive-logical bridge principles. In other words, he wanted the inductive-logical picture to look as simple as our Figure 1.

[14] I don't mean to endorse a totally naïve closure principle for knowledge. This is a principle Carnap endorsed, and I am merely trying to tell a "Carnap-style" story here about firmness vs increase in firmness confirmation. To make this bridge principle more plausible, we would need to add various caveats, of course. I won't worry too much about such caveats here. All I need to assume is that there is some plausible bridge principle in the vicinity of this naïve one, and I think this is not such an implausible assumption. See (Hawthorne 2004) for a nice contemporary discussion (and defense) of deductive closure principles in epistemology.
[Figure 1: Entailment and knowledge. A stochastic process S (a random card draw) generates E = the card is the ace of spades; H = the card is a spade; the agent has a priori knowledge of the logical relation E ⊢ H and a posteriori knowledge of E (E generated by S), yielding knowledge of H.]

As I have implied above, I think that dream of Carnap's cannot be achieved. I think things just aren't so tidy in the case of inductive-logical bridge principles. That is, I think it is all but hopeless to have anything like the sort of bridge principle Carnap wanted (as stated above) for confirmation as firmness and credence. I am skeptical here (mainly) for two reasons. First, I don't know what it means for an agent to "know E and nothing else (a posteriori)".[15] Second, such a principle would seem to require the existence of objective, "a priori probabilities", and I don't think there are such things. That's the bad news. The good news is that I think we can avoid these problems when it comes to bridging confirmation as increase in firmness and evidential support. I will argue that bridging these concepts does not require either an "and the agent knows nothing else a posteriori" clause or the existence of objective "a priori probabilities". But some additional complexity will be required in the inductive case, because in the inductive case the nature of the stochastic process that generates E cannot be ignored. Again, it helps to picture the sort of case I have in mind. Figure 2 depicts an agent who learns a posteriori that (E) a card drawn at random from a standard deck is either a 10 or a Jack, and they know a priori that – in the standard probability model M used to model random card draws – E is positively correlated with (H) the card is a face card.

[15] Indeed, one might reasonably wonder how it is even possible for E to be the agent's only a posteriori knowledge, when their knowledge is supposed to be closed under logical consequence in accordance with the deductive bridge principle that Carnap himself endorsed.

[Figure 2: Qualitative confirmation and evidential support. A stochastic process S (a random card draw) generates E = the card is a 10 or a Jack; H = the card is a face card; a random card draw model M is such that l(H, E, M) > 0; the agent has a priori knowledge of the logical relation l(H, E, M) > 0, a posteriori knowledge of E (E generated by S), and knowledge of the correctness of M (the modeling relation between M and S); conclusion: E supports H.]

I submit that in such cases the agent knows (ceteris paribus[16]) that E evidentially supports H – provided that the agent also knows that the model M is a correct model of the stochastic process S that generated the event E. That is, I am suggesting the following bridge principle (or inference rule) relating confirmation as increase in firmness and evidential support:

a knows (a posteriori) that E by observing the outcome of a stochastic process S.
a knows (a priori) that E confirms (increase in firmness sense) H in the model M.
a knows (a posteriori) that M is a correct[17] model of the stochastic process S.
∴ a knows that E evidentially supports H.

[16] As in the deductive case, there are bound to be caveats required to make this bridge principle plausible. I won't try to articulate any of these here. This is a task for a longer treatise on inductive logic (I hope to articulate some of these in my book on inductive logic).

[17] A fair amount of metaphysical and epistemological work needs to go into this step. First, we need to say what it means for a probability model to be correct concerning a stochastic process, and second we need to say how we could come to know this. Presumably, the former would involve a story about the metaphysics of stochastic processes, and the latter would involve a delicate story about the epistemology of probability (delicate because there is the threat of a regress here, if we are inclined to be evidentialists about the knowledge of model-correctness itself). I will bracket these issues here, since this paper is mainly concerned with formulating prima facie plausible inductive-logical/epistemic bridge principles, not providing a complete story about probabilistic metaphysics and epistemology.
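As a concrete check on the Figure 2 example (the arithmetic is mine, but it follows directly from the standard 52-card model, with Jacks, Queens, and Kings as the face cards): PrM(E | H) = 4/12 and PrM(E | ∼H) = 4/40, so l(H, E, M) = log[(4/12)/(4/40)] = log(10/3) > 0. Equivalently, PrM(H | E) = 4/8 = 1/2 > 12/52 = PrM(H). So E is indeed positively correlated with H in M, which is just what the a priori premise of the inference rule requires.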
I think there are lots of examples illustrating the plausibility of the bridge principle stated above (and implicitly pictured in Figure 2).[18] For instance, when a pregnancy test (S) which is known to be reliable is administered (properly, and on an appropriate individual, etc.), the observation of a positive test result (E) constitutes evidence in favor of pregnancy (H). This sort of example fits the mold of our bridge principle perfectly. In such cases, the observation is of (E) an event generated by a stochastic process S, and in the probability model M of S which we know (let us assume) to be correct, there is a large positive likelihood ratio l(H, E, M). Such cases are canonical ones in which we are inclined to infer that E constitutes strong evidence in favor of H. Indeed, these are precisely the kinds of cases that statisticians use to illustrate how we should interpret the evidential import of the outcomes of stochastic processes (Royall 1997). As such, whereas Carnap was trying to provide a logical foundation for credence (or epistemic probability), we can now be seen to be aiming for a logical foundation for evidential support. I suspect that one of the reasons our increase in firmness/evidential support bridge principles are more plausible than their firmness/credence counterparts is that evidential relations are less sensitive to prior probabilities or background knowledge than credences are. See fn. 18 below and (Fitelson 2006) for further discussion of this issue.

[18] It seems to me that the most compelling of these involve cases where the notion of "evidential support" is of an objective/externalist nature. I think this bridge principle is harder to defend from a purely subjective/internalist point of view in epistemology. As such, traditional Bayesian confirmation theorists who tend to assume that all confirmation judgments must supervene on some credence function (sometimes, perhaps a historical or counterfactual one) may not always be happy with my examples. As far as I can see, this is bad news for traditional Bayesian confirmation theory, since these examples – even the ones that cannot be given an internalist gloss – seem compelling to me. Besides, I think that the problem of old evidence has already shown us that this supervenience assumption implicit in traditional Bayesian confirmation theory is false. That can be seen with reference to my pregnancy test example by thinking about cases in which the agent already knows that the outcome of the test was positive. To my mind, this is irrelevant to whether E evidentially supports H. Intuitively, E supports H here whether or not E is already known to have occurred. It seems that this directs us toward an externalist notion of evidence, and I am perfectly happy with that. I am not suggesting that my theory won't apply to subjective evidential relations, just that its most interesting and compelling applications are to cases involving objective evidential relations (e.g., those discussed by non-Bayesian statisticians).
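To put illustrative numbers on the pregnancy-test case (the figures are hypothetical, chosen only to make the structure vivid): suppose the correct model M of the test has PrM(E | H) = 0.95 (the test's sensitivity) and PrM(E | ∼H) = 0.01 (its false-positive rate). Then l(H, E, M) = log(0.95/0.01) = log 95 > 0, and this verdict does not depend on the prior PrM(H) at all. By contrast, the posterior PrM(H | E) can be large or small depending on that prior. This is one way of seeing the point just made: evidential support, so measured, is less sensitive to prior probabilities than credence is.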
Finally, I would like to say something about the prospects of formulating bridge principles relating comparative confirmation (as increase in firmness) and comparative evidential support. This case is more controversial, because the content of any such principle will depend sensitively on the ordinal structure of one's particular measure of confirmation (as increase in firmness). And there is widespread ordinal (or comparative) disagreement between such measures (Fitelson 2001). Nonetheless, I think such principles can be formulated. Figure 3 illustrates how I think such principles would work. In this example, our agent learns a posteriori that (E) a card drawn at random from a standard deck is an ace, and they know a priori that – in the standard probability model M used to model random card draws – l(H1, E, M) > l(H2, E, M), where H1 is the hypothesis that the card is either the ace of spades or the ace of diamonds, and H2 is the hypothesis that the card is the ace of clubs.

[Figure 3: Comparative confirmation and favoring. A stochastic process S (a random card draw) generates E = the card is an ace; H1 = the card is A♠ or A♢; H2 = the card is A♣; a random card draw model M is such that l(H1, E, M) > l(H2, E, M); the agent has a priori knowledge of this logical relation, a posteriori knowledge of E (E generated by S), and knowledge of the correctness of M; conclusion: E favors H1 over H2.]

I submit that in such cases the agent knows (ceteris paribus) that E favors H1 over H2. That is, the agent may infer in such cases that E constitutes better evidence for the truth of H1 than for the truth of H2. I have intentionally chosen an example here in which there are other measures of confirmation as increase in firmness which disagree with l on this comparative claim. For instance, if we follow Milne (1996) and use the ratio measure

r(H, E, M) = log[PrM(H | E) / PrM(H)],

then we get r(H1, E, M) = r(H2, E, M) in this example, which would not undergird the inference that E favors H1 over H2. This sort of example reflects a long-standing controversy over alternative probabilistic accounts of comparative (or relational) confirmation (and support). I don't have the space here to expand on this controversy, but I have written extensively on it elsewhere (Fitelson 2006).
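The disagreement between l and r here is easy to verify numerically. The following sketch is again merely illustrative (the deck representation and helper names are my own); it computes both measures, with natural logarithms, for the Figure 3 hypotheses.

```python
import math
from fractions import Fraction

# Standard 52-card deck with uniform probability.
RANKS = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]
SUITS = ["S", "H", "D", "C"]
DECK = [(r, s) for r in RANKS for s in SUITS]

def pr(A):
    return Fraction(sum(1 for c in DECK if A(c)), len(DECK))

def pr_given(A, B):
    return pr(lambda c: A(c) and B(c)) / pr(B)

def l(H, E):
    """l(H, E, M) = log [ Pr_M(E | H) / Pr_M(E | ~H) ]"""
    return math.log(pr_given(E, H) / pr_given(E, lambda c: not H(c)))

def r(H, E):
    """Milne's r(H, E, M) = log [ Pr_M(H | E) / Pr_M(H) ]"""
    return math.log(pr_given(H, E) / pr(H))

E  = lambda c: c[0] == "A"                      # the card is an ace
H1 = lambda c: c in {("A", "S"), ("A", "D")}    # ace of spades or ace of diamonds
H2 = lambda c: c == ("A", "C")                  # ace of clubs

print(l(H1, E), l(H2, E))   # log 25 ~ 3.22  vs  log 17 ~ 2.83 : l favors H1 over H2
print(r(H1, E), r(H2, E))   # both log 13 ~ 2.56               : r does not
```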
5 Conclusion

Carnap tried to provide an inductive-logical foundation for (epistemic) probability. I have explained what I think the shortcomings of his project were. Mainly, these involved problems with simultaneously satisfying three Carnapian desiderata for inductive logic, which I have called analyticity, logicality, and applicability. I then explained how one can overcome these difficulties by opting for an inductive-logical foundation for a different epistemic concept: evidential support. I briefly sketched my favored theory of inductive logic (in this sense), and I explained how it can simultaneously satisfy all three of our Carnapian desiderata for inductive logic. Moreover, I argued that the resulting framework is widely and intuitively applicable to the interpretation of statistical evidence. I have provided only a broad sketch of how I think this new approach to inductive logic should work. I plan to produce a fuller treatment in a future monograph.

References

Carnap, R., 1962, Logical Foundations of Probability (second edition), Chicago: University of Chicago Press.

Fitelson, B., 2001, Studies in Bayesian Confirmation Theory, PhD dissertation, University of Wisconsin–Madison (this item can be downloaded from http://fitelson.org/thesis.pdf).

Fitelson, B., 2005, "Inductive Logic", in An Encyclopedia of Philosophy of Science, S. Sarkar and J. Pfeiffer (eds.), New York: Routledge, forthcoming (this item can be downloaded from http://fitelson.org/il.pdf).

Fitelson, B., 2006, "Likelihoodism, Bayesianism, and Relational Confirmation", Synthese, forthcoming (this item can be downloaded from http://fitelson.org/synthese.pdf).

Gaifman, H., 1988, "A Theory of Higher Order Probabilities", in Causation, Chance, and Credence, B. Skyrms and W. L. Harper (eds.), Dordrecht: Kluwer Academic Publishers.

Good, I. J., 1960, "Weight of evidence, corroboration, explanatory power, information and the utility of experiments", Journal of the Royal Statistical Society, Series B 22: 319–331.

Hájek, A., 2003, "What Conditional Probability Could Not Be", Synthese 137: 273–323.

Halpern, J. Y., 1999, "A counterexample to theorems of Cox and Fine", Journal of Artificial Intelligence Research 10: 67–85.

Hawthorne, J., 2004, Knowledge and Lotteries, Oxford: Oxford University Press.

Kemeny, J. and P. Oppenheim, 1952, "Degrees of factual support", Philosophy of Science 19: 307–324.

Kolmogorov, A. N., 1933, Grundbegriffe der Wahrscheinlichkeitsrechnung, Berlin: Springer. English translation (1950): Foundations of the Theory of Probability, New York: Chelsea.

Maher, P., 2001, "Probabilities for Multiple Properties: The Models of Hesse and Carnap and Kemeny", Erkenntnis 55: 183–216.

Michalos, A., 1971, The Popper-Carnap Controversy, The Hague: Martinus Nijhoff.

Milne, P., 1996, "log[p(h | eb)/p(h | b)] is the one true measure of confirmation", Philosophy of Science 63: 21–26.

Royall, R., 1997, Statistical Evidence: A Likelihood Paradigm, New York: Chapman & Hall/CRC.