Blurring Out Cosmic Puzzles

Yann Benétreau-Dupin [*]
Department of Philosophy & Rotman Institute of Philosophy
Western University, Canada

Forthcoming in Philosophy of Science (PSA 2014 conference proceedings)

Abstract

The Doomsday argument and anthropic arguments are illustrations of a paradox. In both cases, a lack of knowledge apparently yields surprising conclusions. Since they are formulated within a Bayesian framework, the paradox constitutes a challenge to Bayesianism. Several attempts, some successful, have been made to avoid these conclusions, but some versions of the paradox cannot be dissolved within the framework of orthodox Bayesianism. I show that adopting an imprecise framework of probabilistic reasoning allows for a more adequate representation of ignorance in Bayesian reasoning, and explains away these puzzles.

[*] ybenetre@uwo.ca

1 Introduction

The Doomsday paradox and the appeal to anthropic bounds to solve the cosmological constant problem are two examples of puzzles of probabilistic confirmation. These arguments both make 'cosmic' predictions: the former gives us a probable end date for humanity, and the latter a probable value of the vacuum energy density of the universe. Both seem to allow one to draw unwarranted conclusions from a lack of knowledge, and yet one way of formulating them makes them a straightforward application of Bayesianism. They call for a framework of inductive logic that represents ignorance better than orthodox Bayesianism can, so as to block these conclusions.

1.1 The Doomsday paradox

The Doomsday argument is a family of arguments about humanity's likely survival.[1] There are mainly two versions of the argument discussed in the literature, both of which appeal to a form of Copernican principle (or principle of typicality or mediocrity). A first version, endorsed by, e.g., John Leslie (1990), dictates a probability shift in favor of theories that predict earlier end dates for our species, assuming that we are a typical, rather than atypical, member of that group.

The other main version, often referred to as the 'delta-t argument', was given by Richard Gott (1993) and has provoked both outrage and genuine scientific interest.[2] It claims to allow one to make a prediction about the total duration of any process of indefinite duration, based only on the assumption that the moment of observation is randomly selected. A variant of this argument, which gives equivalent predictions, reasons in terms of random sampling of one's rank in a sequential process (Gott, 1994).[3]

[1] See, e.g., (Bostrom, 2002, §6-7) and (Richmond, 2006) for reviews.
[2] See, e.g., (Goodman, 1994) for opprobrium and (Wells, 2009; Griffiths and Tenenbaum, 2006) for praise.
[3] The latter version doesn't violate the reflection principle, entailed by conditionalization, according to which an agent ought to have now a certain credence in a given proposition if she is certain she will have it at a later time (Monton and Roush, 2001).

The argument goes as follows. Let r be my birth rank (i.e., I am the rth human to be born), and N the total number of humans that will ever be born.

1. Assume that there is nothing special about my rank r. Following the principle of indifference, for all r, the probability of r conditional on N is
   $p(r \mid N) = \frac{1}{N}$.

2. Assume the following improper prior probability distribution for N:[4]
   $p(N) = \frac{k}{N}$,
   where k is a normalizing constant whose value doesn't matter.

3. This choice of distributions p(r|N) and p(N) gives us the prior distribution p(r):
   $p(r) = \int_{N=r}^{N=\infty} p(r \mid N)\, p(N)\, dN = \int_{N=r}^{N=\infty} \frac{k}{N^2}\, dN = \frac{k}{r}$.

4. Bayes's theorem then gives us
   $p(N \mid r) = \frac{p(r \mid N)\, p(N)}{p(r)} = \frac{r}{N^2}$,
   which favors small N.

[4] As Gott (1994) recalls, this choice of prior is fairly standard (albeit contentious) in statistical analysis. It is the Jeffreys prior for the unbounded parameter N, such that $p(N)\, dN \propto d\ln N \propto \frac{dN}{N}$: the probability for N to lie in any logarithmic interval is the same. The prior is called improper because it is not normalizable; it is usually argued that its use is justified when it yields a normalizable posterior.

To find an estimate at a confidence level α, we solve $p(N \le x \mid r) = \alpha$ for x, with $p(N \le x \mid r) = \int_r^x p(N \mid r)\, dN$. Upon learning r, we can thus make a prediction about N at the 95% confidence level: here, $p(N \le 20r \mid r) = 0.95$, that is, $p(N > 20r \mid r) < 5\%$.
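The 95% bound can be checked directly: integrating the posterior $p(N \mid r) = r/N^2$ from r to x gives $1 - r/x$, which equals 0.95 exactly when $x = 20r$. A minimal numerical sketch (the rank r = 100 is an arbitrary illustrative value, not part of the argument):

    import numpy as np

    r = 100                                     # illustrative birth rank (arbitrary)
    N = np.linspace(r, 10_000 * r, 2_000_000)   # grid over N >= r
    posterior = r / N**2                        # p(N|r) = r/N^2, normalized on [r, inf)

    cdf = np.cumsum(posterior) * (N[1] - N[0])  # numerical CDF of the posterior
    x95 = N[np.searchsorted(cdf, 0.95)]         # smallest x with p(N <= x | r) >= 0.95
    print(x95 / r)                              # ~20: recovers p(N <= 20r | r) = 0.95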
This result should strike us as surprising: we shouldn't be able to learn something from nothing! Indeed, according to this argument, we can make a prediction for N based only on knowing our rank r and on knowing nothing about the probability of r conditional on N, i.e., on being indifferent, or equally uncommitted, about any value it may take. If N is unbounded (possibly infinite), an appeal to our typical position (reflected in the choice of likelihood above) shouldn't allow us to make any prediction at all about N, and yet it does.

1.2 Anthropic reasoning in cosmology

Another probabilistic argument that claims to allow one to make a prediction from a lack of knowledge is commonly used in cosmology, in particular to solve the cosmological constant problem, i.e., to explain the value of the vacuum energy density $\rho_V$. This parameter presents physicists with two main problems:[5]

1. The time coincidence problem: we happen to live at the brief epoch, by cosmological standards, of the universe's history when it is possible to witness the transition from the domination of matter and radiation to vacuum energy ($\rho_M \sim \rho_V$).

2. There is a large discrepancy, of 120 orders of magnitude, between the (very small) observed values of $\rho_V$ and the (very large) values suggested by particle-physics models.

[5] See (Carroll, 2000; Solà, 2013) for an overview of the cosmological constant problem.

Anthropic selection effects (i.e., our sampling bias as observers existing at a certain time and place and in a universe that must allow the existence of life) have been invoked to explain both problems. In the absence of satisfying explanations, anthropic selection effects make the coincidence less unexpected, and account for the discrepancy between observations and the expectations suggested by the available theoretical background. But there is no known reason why having $\rho_M \sim \rho_V$ should matter to the advent of life.

Weinberg and his collaborators (Weinberg, 1987, 2000; Martel et al., 1998), among others, proposed anthropic bounds on the possible values of $\rho_V$. Furthermore, they argued that anthropic considerations may have a stronger, predictive role. The idea is that we should conditionalize the probability of different values of $\rho_V$ on the number of observers they allow: the most likely value of $\rho_V$ is the one that allows for the largest number of galaxies (taken as a proxy for the number of observers).[6] The probability measure for $\rho_V$ is then

$dp(\rho_V) = \nu(\rho_V)\, p_\star(\rho_V)\, d\rho_V$,

where $p_\star(\rho_V)\, d\rho_V$ is the prior probability distribution and $\nu(\rho_V)$ the average number of galaxies that form for a given $\rho_V$. Assuming that there is no known reason why the likelihood of $\rho_V$ should be special at the observed value, and because the anthropically allowed range of $\rho_V$ is very far from what we would expect from available theories, Weinberg and his collaborators argued that it is reasonable to take the prior probability distribution to be constant within the anthropically allowed range, so that $dp(\rho_V)$ can be calculated as proportional to $\nu(\rho_V)\, d\rho_V$ (Weinberg, 2000, 2). Weinberg then predicted that the value of $\rho_V$ would be close to the mean value in that range (assumed to yield the largest number of observers). This "principle of mediocrity", as Vilenkin (1995) called it, assumes that we are typical observers.

[6] This assumption is contentious; see, e.g., (Aguirre, 2001) for an alternative proposal.

Thus, anthropic considerations not only help establish the prior probability distribution for $\rho_V$ by providing bounds; they also allow one to make a prediction regarding its observed value. The initially uniform distribution is turned into a prediction, a sharply peaked distribution around a preferred value of $\rho_V$. This method has yielded predictions for $\rho_V$ only a few orders of magnitude away from the observed value.[7] This improvement, from 120 orders of magnitude to only a few, has been seen by its proponents as vindicating anthropically based approaches.

[7] The median value of the distribution obtained by such anthropic prediction is about 20 times the observed value $\rho_V^{\mathrm{obs}}$ (Pogosian et al., 2004).
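The structure of the prediction can be mimicked with a toy computation. In the sketch below, the anthropic range is rescaled to [0, 1] and a Gaussian stands in for the galaxy-abundance curve $\nu(\rho_V)$ (these choices are mine, for illustration; Weinberg's actual $\nu$ comes from structure-formation calculations):

    import numpy as np

    rho = np.linspace(0.0, 1.0, 10_001)          # anthropic range, rescaled to [0, 1]
    prior = np.ones_like(rho)                    # flat prior over the anthropic range
    nu = np.exp(-0.5 * ((rho - 0.5) / 0.15)**2)  # stand-in for the galaxy count nu(rho_V)

    posterior = nu * prior                       # dp = nu(rho) * p_star(rho) * d(rho)
    posterior /= np.trapz(posterior, rho)

    print(rho[np.argmax(posterior)])             # ~0.5: the flat prior has become a
                                                 # sharp prediction near the mid-range value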
1.3 The problem: Ex nihilo nihil fit

The two examples of this section, the Doomsday argument and anthropic reasoning, share a similar structure: (1) a uniform prior probability distribution reflects an initial state of ignorance or indifference, and (2) an appeal to typicality or mediocrity is used to make a prediction. This is puzzling: the two assumptions (of indifference and typicality) are meant to express neutrality, and yet from them alone we seem to be getting a lot of information. But assuming neutrality alone should not allow us to learn anything!

If anthropic considerations were able to provide us with only one bound (either a lower or an upper bound), then the argument used to make a prediction about the vacuum energy density $\rho_V$ would be formally identical to Gott's 1993 'delta-t argument': without knowing anything about, say, a parameter's upper bound, a uniform prior probability distribution over all possible ranges together with the assumed typicality of the observed value favors lower values for that parameter.

I will briefly review several approaches taken to dispute the validity of the results obtained from these arguments. We will see that, because dropping the assumption of typicality isn't enough to avoid these paradoxical conclusions, it is a more adequate representation of ignorance or indifference that we should pursue. I wish to show that, when dealing with events we are completely ignorant about, one can use an imprecise, Bayesian-friendly framework that better handles ignorance: it avoids the paradoxical, uncomfortable consequences of the Doomsday argument, and it better models the limited role anthropic considerations can play in the cosmological constant problem.
2 Typicality, indifference, neutrality

2.1 How crucial to those arguments is the assumption of typicality?

The appeal to typicality is central to Gott's 'delta-t argument', to Leslie's version of the Doomsday argument, and to Weinberg's prediction. This assumption has generated much of the philosophical discussion about the Doomsday paradox in particular. Nick Bostrom (2002) offered a challenge to what he calls the Self-Sampling Assumption (SSA), according to which "one should reason as if one were a random sample from the set of all observers in one's reference class." In order to avoid the consequence of the Doomsday argument, Bostrom suggested adopting what he calls the Self-Indication Assumption (SIA): "Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist." (op. cit.) But as he noted himself (Bostrom, 2002, 122-126), the SIA is not acceptable as a general principle. Indeed, as Dieks (1992) summarized:

    Such a principle would entail, e.g., the unpalatable conclusion that armchair philosophizing would suffice for deciding between cosmological models that predict vastly different chances for the development of human civilization. The infinity of the universe would become certain a priori.

The biggest problem with Doomsday-type arguments resting on the SSA is that their conclusion depends on the choice of reference class, and what constitutes "one's reference class" seems entirely arbitrary or ill-defined: is my reference class that of all humans, mammals, philosophers, etc.? Anthropic predictions are open to a similar criticism: the value of the cosmological constant most favorable to the existence of life (as we know it) may not be the same as that most favorable to the existence of intelligent observers, which might be definable in different ways.

Relatedly, Dieks (1992) and Radford Neal (2006) showed that a careful examination of the role of indexical information in the formulation of the Doomsday argument allows one to avoid its unpleasant conclusion. In particular, Neal (2006) argued that conditionalizing on non-indexical information (i.e., all the information at the disposal of the agent formulating the Doomsday argument, including all their memories) reproduces the effects of assuming both SSA and SIA. Indeed, conditionalizing on the probability that an observer with all one's non-indexical information exists (which is higher for a later Doomsday, and highest if there is no Doomsday at all) blocks the consequence of the Doomsday argument without invoking such ad hoc principles, and it avoids the reference-class problem.

Although full non-indexical conditioning cancels out the effects of Leslie's Doomsday argument (and, similarly, of anthropic predictions), it is not clear that it also allows one to avoid the conclusion of Gott's version of the Doomsday argument. Neal (2006, 20) dismisses Gott's argument because it rests only on an "unsupported" assumption of typicality.
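Neal's point can be made concrete with a toy contrast (my own construction, not Neal's model): under the SSA, learning one's rank r contributes a likelihood 1/N that shifts the posterior toward small N; under full non-indexical conditioning, the relevant datum is that some observer with exactly my (rank-including) memories exists, and in any world with N ≥ r there is exactly one rank-r slot for such an observer, so the likelihood is flat in N and the prior is left untouched.

    import numpy as np

    r = 60                                  # illustrative birth rank
    N = np.arange(1, 10_001, dtype=float)   # candidate total numbers of humans
    prior = np.ones_like(N)                 # flat prior over N, for simplicity

    # SSA: p(r|N) = 1/N for N >= r, else 0 -> mass shifts toward early Doomsday
    ssa_like = np.where(N >= r, 1.0 / N, 0.0)
    ssa_post = ssa_like * prior / np.sum(ssa_like * prior)

    # FNC: p(someone with exactly my memories exists | N) is the same small
    # constant for every N >= r, so the likelihood is flat over N >= r
    fnc_like = np.where(N >= r, 1.0, 0.0)
    fnc_post = fnc_like * prior / np.sum(fnc_like * prior)

    print(ssa_post[N <= 20 * r].sum())      # ~0.59: a strong shift toward small N
    print(fnc_post[N <= 20 * r].sum())      # ~0.11: the flat prior's own proportion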
There are indeed no good reasons to endorse typicality a priori (see, e.g., Hartle and Srednicki, 2007). One might then hope that not assuming typicality would suffice to dissolve these cosmic puzzles. Irit Maor et al. (2008) showed, for instance, that without it anthropic considerations don't allow one to really make predictions about the cosmological constant, beyond providing unsurprising boundaries, namely, that the value of the cosmological constant must be such that life is possible.

My approach in this paper, however, will not be to question the assumption of typicality in either of these cosmic puzzles. Indeed, in Gott's version of the Doomsday paradox, we would obtain a prediction even if we didn't assume typicality. Consider the formulation of Gott's argument using an improper prior (§1.1 above). Now, instead of assuming a flat probability distribution for our rank r conditional on the total number of humans N (i.e., $p(r \mid N) = \frac{1}{N}$), let us assume a non-uniform distribution. For instance, assume a distribution that favors our being born in the first decile of humanity's timeline (i.e., one that peaks around $r = 0.1 \times N$). We would then obtain a different prediction for N than if we had assumed a distribution that peaks around $r = 0.9 \times N$. This reasoning, however, yields an unsatisfying result if taken to the limit: even if we assume a likelihood distribution for r conditional on N sharply peaked at r = 0, we still obtain a prediction for N upon learning r (see fig. 1).[8]

[8] Tegmark and Bostrom (2005) used a similar reasoning to derive an upper bound on the likelihood of a Doomsday catastrophe.

[Figure 1: Posterior probability distributions for N conditional on r, obtained for r = 100 and assuming different likelihood distributions for r conditional on N (i.e., different assumptions as to our relative place in humanity's timeline), each of which peaks at a different value $\tau = \frac{r}{N}$. The lowermost curve corresponds to a likelihood distribution that peaks as $\tau \to 0$, i.e., if we assume $N \to \infty$.]

Therefore, in Gott's Doomsday argument, we would obtain a prediction at any confidence level whatever assumption we make as to our typicality or atypicality, and we would obtain one even if we assume $N \to \infty$. Thus, assuming typicality or not will not allow us to avoid the conclusion of Gott's Doomsday argument. Consequently, it is toward the question of a probabilistic representation of ignorance that I will now turn my attention.
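This behavior can be verified numerically. In the sketch below (my own illustration, with Beta-shaped likelihoods over $\tau = r/N$ standing in for the curves of fig. 1), every assumption about our place in humanity's timeline, including near-certain atypicality ($\tau$ peaked at 0), still returns a finite 95% bound on N:

    import numpy as np

    r = 100
    N = np.linspace(r, 100_000 * r, 4_000_000)   # generous upper cutoff for the grid

    # Likelihoods p(r|N) = f(r/N)/N, with f a Beta(a, b) shape over tau = r/N
    # (illustrative choices; normalization constants cancel after normalizing)
    for a, b, label in [(3, 19, "peak near tau = 0.1"),
                        (19, 3, "peak near tau = 0.9"),
                        (1, 50, "peak at tau -> 0 (maximal atypicality)")]:
        tau = r / N
        post = tau**(a - 1) * (1 - tau)**(b - 1) / N**2  # likelihood times prior k/N
        post /= np.trapz(post, N)
        cdf = np.cumsum(post) * (N[1] - N[0])
        x95 = N[np.searchsorted(cdf, 0.95)]
        print(label, x95 / r)                            # finite in every case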
2.2 A neutral principle of indifference?

One could hope that a more adequate prior probability distribution, one that better reflects our ignorance and is normalizable, might prevent the conclusion of these cosmic puzzles (especially Gott's Doomsday argument). The idea that a uniform probability distribution is not a satisfying representation of ignorance is nothing new; this discussion is as old as the principle of indifference itself.[9] Indeed, a uniform probability distribution is unable to fulfill invariance requirements that one should expect of a representation of ignorance or indifference. As argued by John Norton (2010), a representation of ignorance or indifference

- cannot be additive (and therefore does not obey the laws of probability),
- cannot be represented by the degrees of a one-dimensional continuum, such as the reals in [0, 1],
- must be invariant under redescription, and
- must be invariant under negation: if we are ignorant or indifferent as to whether or not α, we must be equally ignorant or indifferent as to whether or not ¬α.[10]

[9] See, e.g., (Syversveen, 1998) for a short review of the problem of representing non-informative priors.
[10] For an extended discussion of criteria for a representation of ignorance, with imprecise probabilities in particular, see (de Cooman and Miranda, 2007, §4-5).

For instance, in the case of the cosmological constant problem, if we adopt a uniform probability distribution for the value of the vacuum energy density $\rho_V$ over an anthropically allowed range of length $\mu$, then we are committed to asserting, e.g., that $\rho_V$ is 3 times more likely to be found in any given range of length $\frac{\mu}{3}$ than in any given range of length $\frac{\mu}{9}$. But such an assertion is not compatible with complete ignorance as to what value $\rho_V$ is more likely to have; hence the requirement of non-additivity for a representation of ignorance.

These criteria for a representation of ignorance or indifference cast doubt on the possibility for a probabilistic logic of induction to overcome these limitations.[11] I will argue that an imprecise model of Bayesianism, in which our credences can be fuzzy, is able to explain away these problems without abandoning Bayesianism altogether.

[11] The same goes for improper priors, as was argued, e.g., by Dawid et al. (1973).

3 Dissolving the puzzles with imprecise credence

3.1 Imprecise credence

It has been argued (see, e.g., Levi, 1974; Walley, 1991; Joyce, 2010) that Bayesian credences need not have sharp values, and that there can be imprecise credences (or 'imprecise probabilities', by misuse of language). An imprecise credence model recognizes "that our beliefs should not be any more definitive or unambiguous than the evidence we have for them" (Joyce, 2010, 320). Joyce defended an imprecise model of Bayesianism in which credences are represented not merely by a range of values but by a family of (probabilistic) credence functions. In this imprecise probability model,

1. a believer's overall credal state can be represented by a family C of credence functions [ci] (...). Facts about the person's opinions correspond to properties common to all the credence functions in her credal state.
2. If the believer is rational, then every credence function in C is a probability.
3. If a person in credal state C learns that some event D obtains (...), then her post-learning state will be $C_D = \{c(\cdot \mid D) : c \in C\}$, with $c(X \mid D) = \frac{c(X)\, c(D \mid X)}{c(D)}$.
4. A rational decision-maker with credal state C is obliged to prefer one action A to another A* when A's expected utility exceeds that of A* relative to every credence function in C. (Joyce, 2010, 288, my emphasis)

An analogy is sometimes given to illustrate this model: the overall credal state C acts as a committee whose members (each being analogous to a credence function ci) are rational agents who do not all agree with each other and who all update their credences in the same way, by conditionalizing on evidence they all agree upon. In this analogy, the properties of the committee's opinion (the overall credal state C) are those common to all the committee members' opinions.
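The committee analogy is easy to mimic in code. In this toy sketch (the numbers are made up for illustration), each member is a probability assignment over the same two hypotheses; all members conditionalize on the same evidence; and the credal state reports only what they share, here an interval of posterior values rather than a sharp one:

    import numpy as np

    likelihood_D = np.array([0.8, 0.3])   # p(D|H1), p(D|H2), shared by all members

    # Each row is one member's prior over (H1, H2); the members disagree.
    committee = np.array([[0.2, 0.8],
                          [0.5, 0.5],
                          [0.9, 0.1]])

    # Member-by-member conditionalization: c(H|D) = c(H) c(D|H) / c(D).
    posterior = committee * likelihood_D
    posterior /= posterior.sum(axis=1, keepdims=True)

    # The credal state's opinion about H1 is what all members have in common:
    # an interval of values, not a sharp credence.
    print(posterior[:, 0].min(), posterior[:, 0].max())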
This model allows one to represent simultaneously sharp credences, imprecise credences, and comparative probabilities. It can accommodate sharp credences, and with them the usual condition of additivity; but it can also accommodate less sharply defined relationships when credences are fuzzy. It does so by means of a family of credence functions, each of which is treated as in orthodox Bayesianism. This model is interesting when it comes to representing ignorance or indifference: it allows us to represent the credal state of ignorance by a set of functions that disagree with each other. In order to reframe our cosmic puzzles, two cases must be distinguished:

- in an unbounded case (here, Gott's Doomsday argument),[12] an imprecise prior credal set containing an infinite number of probability distributions, each normalizable, will not allow one to obtain any prediction;
- in a bounded case (here, anthropic predictions for $\rho_V$), it is possible to construct an imprecise prior credal set of probability distributions, each of which favors a different value of $\rho_V$, such that the invariance criteria given in §2.2 are fulfilled.

[12] The results from (Neal, 2006) countering Leslie's Doomsday argument still apply in the imprecise framework.

3.2 Blurring out Gott's Doomsday argument: Apocalypse Not Now

Let us see how we can reframe Gott's Doomsday argument with an imprecise prior credence for the total number of humans N, or, more generally, for the length of any process of indefinite duration X. Let our prior credence in X, C(X), be represented by a family of credal functions $\{c_\gamma\}$, each normalizable and defined on $\mathbb{R}_{>0}$. Thus, we avoid improper prior distributions. If all we assume is that X is finite but can be indefinitely large, then all we can say is that C(X) is monotonically decreasing and that $\lim_{X \to \infty} C(X) = 0$.

Let our prior credence C(X) then consist in the following set of functions $\{c_\gamma\}$, all of which decrease, but not at the same rate (similar to a family of Pareto distributions):

$c_\gamma(X) = \frac{k_\gamma}{X^\gamma}$,

with $\gamma > 1$ and $k_\gamma$ the normalizing constant that makes $c_\gamma$ integrate to one over its support. The limiting case $\gamma \to 1$ corresponds to $X \to \infty$, but $\gamma = 1$ must be excluded to avoid a non-normalizable distribution.

If we don't want to assume anything about $\frac{dC(X)}{dX}$ (other than its being negative), this prior set must be such that it contains functions whose rates of decrease are arbitrarily small. That is,

$\forall X \in \mathbb{R}_{>0},\ \forall \varepsilon \in \mathbb{R}_{<0},\ \exists c_\gamma \in C \text{ such that } \frac{dc_\gamma(X)}{dX} > \varepsilon$.

This requirement applies not to any one of the functions in C but to the set as a whole. It is what will block the conclusion of the Doomsday argument.[13]

[13] In order to avoid distributions too sharply peaked at $X \to 0$, further constraints can be placed on the variance of the distributions (namely, a lower bound on the variance) without affecting my argument.
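To see concretely how the set as a whole refuses to commit, consider a sketch in which each $c_\gamma$ is given support $[1, \infty)$ so that it is properly normalized, with $k_\gamma = \gamma - 1$ (an assumption of mine; the point only needs the tail behavior). For any proposed finite bound x, members with γ close enough to 1 assign the proposition 'X ≤ x' arbitrarily little prior probability, so no bound commands the whole set's assent:

    # Each member: c_gamma(X) = (gamma - 1) * X**(-gamma) on [1, inf),
    # a Pareto density; then P(X <= x) = 1 - x**(1 - gamma).
    def prior_prob_below(x, gamma):
        return 1.0 - x**(1.0 - gamma)

    x = 1e6  # any candidate bound, however large
    for gamma in [2.0, 1.5, 1.1, 1.01, 1.001]:
        print(gamma, prior_prob_below(x, gamma))
    # As gamma -> 1, P(X <= x) -> 0: however large x is, some member refuses
    # to bet that X lies below it, so the set yields no prior prediction.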
Let us see how such a prior credal set avoids the conclusion of Gott's Doomsday paradox. As in the Bayesian version of the argument given in §1.1, the principle of indifference gives us an expression for the likelihood of our rank r conditional on the total number of humans N, and, with our choice of prior for N, we obtain expressions for the prior for r and the posterior for N conditional on r. The credal functions $c_\gamma(r)$ in the set of distributions for the prior credence in r, C(r), can be expressed as follows:

$c_\gamma(r) = \int_{N=r}^{N=\infty} p(r \mid N)\, c_\gamma(N)\, dN = \int_{N=r}^{N=\infty} \frac{k_\gamma}{N^{\gamma+1}}\, dN$.

Bayes's theorem then yields an expression for the posterior credal functions:

$c_\gamma(N \mid r) = \frac{p(r \mid N)\, c_\gamma(N)}{c_\gamma(r)} = \frac{k_\gamma}{N^{\gamma+1}} \Big/ \int_{N=r}^{N=\infty} \frac{k_\gamma}{N^{\gamma+1}}\, dN$.

To find a prediction for N at the 95% confidence level, we solve $C(N \le x \mid r) = 0.95$ for x, with $C(N \le x \mid r) = \int_r^x C(N \mid r)\, dN$. Now, as $\gamma \to 1$, the prediction for x such that $C(N \le x \mid r) = 95\%$ diverges. In other words, this imprecise representation of prior credence in N, reflecting our ignorance about N, does not yield any prediction about N (see fig. 2).

[Figure 2: Posterior probability distributions for N conditional on r, obtained for $r = 1.2 \cdot 10^{11}$ and assuming different prior distributions for N (i.e., different assumptions as to the total number of humans there will ever be).]

Any of the credal functions $c_\gamma$ in the credal set as defined here would yield a prediction if taken individually. However, it is clear that such a prediction would rest solely on an arbitrary choice of prior, one that doesn't reflect our initial state of ignorance. Unless my prior credence can be represented by an infinite set of probability distributions rather than by a single one, I cannot avoid obtaining an arbitrarily precise prediction.

Other distributions decreasing at different rates (i.e., not as inverse powers of N) could have been included in the prior credal set $\{c_\gamma\}$, as long as they fulfill the criteria listed at the beginning of this section. However, no other distribution we could include would change this conclusion. In order to represent our credence about the length of a process of indefinite duration, it is necessary that our prior credal set include the functions $c_\gamma$ defined earlier, and that is sufficient to avoid the conclusions of the Doomsday argument.

3.3 Blurring out anthropic predictions

We are ignorant about what value of the vacuum energy density $\rho_V$ we should expect from our current theories. We now want to express the fact that, in the absence of a prior credence that tells us something about what we should expect, we shouldn't be in a position to confirm or disconfirm the assumption of typicality on which anthropic predictions for the cosmological constant rest. If we substitute imprecise prior and posterior credences in the formula from (Weinberg, 2000; see §1.2 above), we have

$dC(\rho_V) = \nu(\rho_V)\, C_\star(\rho_V)\, d\rho_V$,

with $C_\star(\rho_V)$ a prior credal set that excludes all values of $\rho_V$ outside the anthropic bounds, and $\nu(\rho_V)$ the average number of galaxies that form for a given $\rho_V$, which, as in §1.2, peaks around the mean value of the anthropic range.

In order for the prior credence $C_\star$ to express our ignorance, it should not favor any value of $\rho_V$. In the imprecise model, such a state of ignorance can be expressed by a set of probability distributions $\{c_{\star i}(\rho_V)\}$, all of them normalizable over the anthropic range and such that $\forall \rho_V, \exists c_{\star i}, c_{\star j} \in C_\star$ such that $\rho_V$ is favored by $c_{\star i}$ and not by $c_{\star j}$.[14] Such a prior credal set will not favor any value of $\rho_V$. Moreover, in order to fulfill the criterion of invariance under negation (according to which $C_\star(\rho_V) = C_\star(\neg \rho_V)$; see §2.2), one could define a credal set representing ignorance to be such that $\forall \rho_V, \forall c_{\star i} \in C_\star, \exists c_{\star j} \in C_\star$ such that $c_{\star i} = 1 - c_{\star j}$.[15]

[14] This can be obtained, for instance, with a family of Gamma distributions, each of which has its expected value at a different point in the anthropically allowed range. As in §3.2, in order to avoid dogmatic functions, a lower bound can be placed on the variance of all the functions in $C_\star$.
[15] Such a symmetry requirement need not be imposed in all cases; unwarranted conclusions can be avoided without necessarily assuming this condition.
With a prior credal set $C_\star$ thus defined, even with a distribution $\nu(\rho_V)$ peaked around the mean value of the anthropic range, the prediction $C(\rho_V)$ becomes very imprecise over the whole anthropic range. More importantly, we won't have $C(\rho_V^{\mathrm{obs}}) > C_\star(\rho_V^{\mathrm{obs}})$: there will be no agreement among all the distributions $c_{\star i} \in C_\star$ that learning the actual value $\rho_V^{\mathrm{obs}}$ provides a confirmatory boost for our assumption of typicality. The imprecise model thus provides us with a way to express our ignorance such that our assumption of typicality is neither confirmed nor disconfirmed. And yet this same approach doesn't prevent Bayesian induction altogether. Indeed, since all the functions in $C_\star$ are probability distributions that can be treated as in orthodox Bayesianism, any of them can be updated and, in principle, converge toward a sharper credence, provided sufficient updating.
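The absence of a unanimous confirmatory boost can be illustrated with a toy computation (all distributions here are my own illustrative choices: the anthropic range rescaled to (0, 1), a Gaussian stand-in for ν peaked mid-range, Beta priors with means spread across the range, and an observed value well below the mid-range prediction, mimicking $\rho_V^{\mathrm{obs}}$):

    import numpy as np

    rho = np.linspace(1e-4, 1 - 1e-4, 10_001)    # anthropic range rescaled to (0, 1)
    nu = np.exp(-0.5 * ((rho - 0.5) / 0.15)**2)  # stand-in for the galaxy count nu(rho_V)
    i_obs = np.searchsorted(rho, 0.2)            # 'observed' value, below the nu peak

    for mean in [0.02, 0.2, 0.5, 0.8]:           # committee of Beta priors, means spread out
        a, b = 50 * mean, 50 * (1 - mean)        # Beta(a, b) with the chosen mean
        prior = rho**(a - 1) * (1 - rho)**(b - 1)
        prior /= np.trapz(prior, rho)
        post = nu * prior / np.trapz(nu * prior, rho)
        # ratio > 1: this member boosts the observed value; ratio < 1: it demotes it
        print(mean, post[i_obs] / prior[i_obs])
    # The ratios land on both sides of 1, so the committee does not agree that
    # the observed value receives a confirmatory boost.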
4 Conclusion

These cosmic puzzles show that, in the absence of an adequate representation of ignorance, a logic of induction will inevitably yield unwarranted results. Our usual methods of Bayesian induction are ill-equipped to address either puzzle. I have shown that the imprecise credence framework allows us to treat both arguments in a way that avoids their undesirable conclusions. The imprecise model rests on Bayesian methods, but it is expressively richer than the usual Bayesian approach, which deals only with single probability distributions (i.e., sharp credence functions).

Philosophical discussions about the value of the imprecise model usually center on the difficulty of defining updating rules that don't contradict general principles of conditionalization (especially the problem of dilation). But the ability to solve such paradoxes of confirmation and to avoid unwarranted conclusions should be considered a crucial feature of the imprecise model, and one that plays in its favor.

References

Aguirre, A. (2001). Cold Big-Bang Cosmology as a Counterexample to Several Anthropic Arguments. Physical Review D 64(8), 1–12.
Bostrom, N. (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy. Routledge.
Carroll, S. (2000). The Cosmological Constant. arXiv: astro-ph/0004075v2, 1–50.
Dawid, A. P., M. Stone, and J. V. Zidek (1973). Marginalization Paradoxes in Bayesian and Structural Inference. Journal of the Royal Statistical Society, Series B (Methodological) 35(2), 189–233.
de Cooman, G. and E. Miranda (2007). Symmetry of Models versus Models of Symmetry. In W. L. Harper and G. Wheeler (Eds.), Probability and Inference: Essays in Honour of Henry E. Kyburg Jr., pp. 67–149. London: College Publications.
Dieks, D. (1992). Doomsday–Or: The Dangers of Statistics. The Philosophical Quarterly 42(166), 78–84.
Goodman, S. N. (1994). Future Prospects Discussed. Nature 368, 108–109.
Gott, J. R. (1993). Implications of the Copernican Principle for our Future Prospects. Nature 363(6427), 315–319.
Gott, J. R. (1994). Future Prospects Discussed. Nature 368, 108.
Griffiths, T. L. and J. B. Tenenbaum (2006). Optimal Predictions in Everyday Cognition. Psychological Science 17(9), 767–773.
Hartle, J. and M. Srednicki (2007). Are We Typical? Physical Review D 75(12), 123523.
Joyce, J. M. (2010). A Defense of Imprecise Credences in Inference and Decision Making. Philosophical Perspectives 24(1), 281–323.
Leslie, J. A. (1990). Is the End of the World Nigh? The Philosophical Quarterly 40(158), 65–72.
Levi, I. (1974). On Indeterminate Probabilities. The Journal of Philosophy 71(13).
Maor, I., L. Krauss, and G. Starkman (2008). Anthropic Arguments and the Cosmological Constant, with and without the Assumption of Typicality. Physical Review Letters 100(4), 041301.
Martel, H., P. R. Shapiro, and S. Weinberg (1998). Likely Values of the Cosmological Constant. The Astrophysical Journal 492(1), 29–40.
Monton, B. and S. Roush (2001). Gott's Doomsday Argument. PhilSci Archive, http://philsci-archive.pitt.edu/id/eprint/1205, 1–23.
Neal, R. M. (2006). Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning. arXiv preprint math/0608592, 1–56.
Norton, J. D. (2010). Cosmic Confusions: Not Supporting versus Supporting Not. Philosophy of Science 77(4), 501–523.
Pogosian, L., A. Vilenkin, and M. Tegmark (2004). Anthropic Predictions for Vacuum Energy and Neutrino Masses. Journal of Cosmology and Astroparticle Physics 2004(07), 005, 1–17.
Richmond, A. (2006). The Doomsday Argument. Philosophical Books 47(2), 129–142.
Solà, J. (2013). Cosmological Constant and Vacuum Energy: Old and New Ideas. Journal of Physics: Conference Series 453(1), 012015.
Syversveen, A. R. (1998). Noninformative Bayesian Priors: Interpretation and Problems with Construction and Applications.
Tegmark, M. and N. Bostrom (2005). Is a Doomsday Catastrophe Likely? Nature 438(7069), 754.
Vilenkin, A. (1995). Predictions from Quantum Cosmology. Physical Review Letters 74(6), 846–849.
Walley, P. (1991). Statistical Reasoning with Imprecise Probabilities. London: Chapman and Hall.
Weinberg, S. (1987). Anthropic Bound on the Cosmological Constant. Physical Review Letters 59(22), 2607–2610.
Weinberg, S. (2000). A Priori Probability Distribution of the Cosmological Constant. arXiv preprint astro-ph/0002387.
Wells, W. (2009). Apocalypse When? Calculating How Long the Human Race Will Survive. Springer Praxis Books. Praxis.