Measure theoretic analysis of consistency of the Principal Principle

Miklós Rédei and Zalán Gyenis

Article type: PSA 2014 Symposium Paper Submission

Author #1: Miklós Rédei
Affiliation #1: Department of Philosophy, Logic and Scientific Method, London School of Economics and Political Science, Houghton Street, London WC2A 2AE, UK, m.redei@lse.ac.uk

Author #2: Zalán Gyenis
Affiliation #2: BUTE Department of Algebra, Budapest, Hungary, gyz@renyi.hu

Abstract: Weak and strong consistency of the Abstract Principal Principle are defined in terms of classical probability measure spaces. It is proved that the Abstract Principal Principle is both weakly and strongly consistent. The Abstract Principal Principle is strengthened by adding a stability requirement to it. Weak and strong consistency of the resulting Stable Abstract Principal Principle are defined. It is shown that the Stable Abstract Principal Principle is weakly consistent. Strong consistency of the Stable Abstract Principal Principle remains an open question.

Acknowledgement: Research supported in part by the National Research, Development and Innovation Office, K 115593 and K 100715. M. Rédei thanks the Institute of Philosophy of the Hungarian Academy of Sciences, with which he was affiliated as Honorary Research Fellow while this paper was written.

1 The claims

This paper investigates the measure theoretic consistency of what we call the "Abstract Principal Principle". This consistency expresses that the Abstract Principal Principle is in harmony with the basic structure of measure theoretic probability theory. This type of consistency is tacitly assumed in the literature on the Principal Principle, although we will see that the consistency in question is not trivial. The main philosophical significance of proving such consistency is that without making sure that it obtains, the Abstract Principal Principle might be inconsistent as a general norm that guides the forming of subjective degrees of belief (credences): without such consistency a Bayesian agent would not always be able to adjust his degrees of belief to objective probabilities (e.g. chances) in a Bayesian manner, via Bayesian conditionalization.
After stating the Abstract Principal Principle informally in section 2, we define formally the weak and strong consistency of the Abstract Principal Principle (Definitions 3.1 and 3.3) in section 3, and state its weak and strong consistency (Propositions 3.2 and 3.4). We then argue that it is very natural to strengthen the Abstract Principal Principle by requiring it to satisfy a stability property, which expresses that conditional degrees of belief in events that are already equal (in the spirit of the Abstract Principal Principle) to the objective probabilities of those events do not change as a result of conditionalizing them further on knowledge of the objective probabilities of other events (in particular of events that are independent with respect to their objective probabilities). We call this amended principle the Stable Abstract Principal Principle (if stability is required only with respect to further conditionalizing on values of probabilities of independent events: the Independence-Stable Abstract Principal Principle). The stability requirement leads to suitably modified versions of both the weak and strong consistency of the (Independence-)Stable Abstract Principal Principle (Definitions 5.1 and 5.4). We prove that the Stable Abstract Principal Principle is weakly consistent (Proposition 5.2). This entails weak consistency of the Independence-Stable Abstract Principal Principle (Proposition 5.3). The strong consistency of both the Stable and the Independence-Stable Abstract Principal Principle remains an open problem, however; we conjecture that both consistencies hold.[1] No references are given until section 6, which puts the results into context; there we discuss the relevance of strong consistency of the Stable Abstract Principal Principle from the perspective of Lewis' Principal Principle and its "debugged" versions. The details of all the proofs are in the Appendix.

[1] G. Bana, in his contribution to the symposium and to the present volume, proved this conjecture.

2 The Abstract Principal Principle informally

The Abstract Principal Principle regulates probabilities representing the subjective degrees of belief p_subj(A) of an abstract Bayesian agent by stipulating that the p_subj(A) are related to the objective probabilities p_obj(A) as

    p_subj(A | ⌜p_obj(A) = r⌝) = p_obj(A)    (1)

where ⌜p_obj(A) = r⌝ denotes the proposition "the objective probability, p_obj(A), of A is equal to r".

The formulation (1) of the Abstract Principal Principle presupposes that both p_subj and p_obj are probability measures: additive maps defined on a σ-algebra, taking values in [0,1]. p_obj is supposed to be defined on a σ-algebra S_obj of random events, and p_subj is supposed to be a map whose domain of definition is a σ-algebra S_subj. It is crucial to realize that the σ-algebras S_obj and S_subj cannot be unrelated: for the conditional probability p_subj(A | ⌜p_obj(A) = r⌝) in eq. (1) to be well defined via Bayes' rule, the σ-algebra S_subj must contain both the σ-algebra S_obj of random events and, with every random event A, also the proposition ⌜p_obj(A) = r⌝; otherwise the formula p_subj(A | ⌜p_obj(A) = r⌝) cannot be interpreted as an expression of conditional probability specified by Bayes' rule. It is far from obvious, however, that given any σ-algebra S_obj of random events with any probability measure p_obj on S_obj, there exists a σ-algebra S_subj meeting these algebraic requirements in such a way that a probability measure p_subj satisfying condition (1) also exists on S_subj.
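To make the algebraic requirement concrete, the following minimal sketch builds a small subjective σ-algebra that contains both a copy h(A) of a random event and an event A′ representing the proposition ⌜p_obj(A) = r⌝, and computes the conditional probability in (1) by Bayes' rule. The sketch is our illustration only: the toy objective space, the numerical values and the weight alpha are assumptions, and the two-copy construction merely anticipates the one used in the Appendix.

```python
from fractions import Fraction as F

# Toy objective space (assumed): X_obj = {a, b} with p_obj({a}) = 3/10.
p_obj_atom = {'a': F(3, 10), 'b': F(7, 10)}
A = {'a'}

# Subjective space: two disjoint copies of X_obj, X_subj = X_obj x {1, 2}.
# The copy of A is h(A) = A x {1, 2}; the proposition-event "p_obj(A) = 3/10"
# is modelled as A' = (A x {1}) U (A-complement x {2}).
X_subj = [(w, i) for w in p_obj_atom for i in (1, 2)]
alpha = F(1, 2)   # with alpha = 1/2 the conditional below equals p_obj(A);
                  # eq. (28) of the Appendix gives the general choice of alpha

def p_subj(event):
    """Measure giving weight alpha to copy 1 and 1 - alpha to copy 2."""
    return sum((alpha if i == 1 else 1 - alpha) * p_obj_atom[w] for (w, i) in event)

h_A     = {(w, i) for (w, i) in X_subj if w in A}
A_prime = {(w, i) for (w, i) in X_subj if (w in A) == (i == 1)}

# Bayes' rule is applicable only because h(A) and A' both belong to S_subj:
conditional = p_subj(h_A & A_prime) / p_subj(A_prime)
assert conditional == p_obj_atom['a']      # p_subj(h(A) | A') = p_obj(A) = 3/10
print(conditional)
```

Choosing a weight other than 1/2 makes the conditional differ from p_obj(A); this is exactly the freedom exploited in the Appendix, where alpha is tuned so that the conditional equals a prescribed value p̂(A).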
If there existed a σ-algebra S*_obj of random events with a probability measure p*_obj giving the objective probabilities of events for which no σ-algebra S_subj exists on which a probability function p_subj satisfying (1) can be defined, then the Abstract Principal Principle would be inconsistent as a general norm: in this case the agent, being in the epistemic situation of facing the objective facts represented by (S*_obj, p*_obj), cannot have degrees of belief satisfying the Abstract Principal Principle, for fundamental structural reasons inherent in the basic structure of classical probability theory. We say that the Abstract Principal Principle is weakly consistent if it is not inconsistent in the sense described. (The adjective "weakly" will be explained shortly.)

Remark 2.1. One can construe the Principal Principle differently: taking it as a norm that regulates internal consistency of the Agent.[2] Under this construal the subjective degrees of belief should satisfy

    p_subj(A | ⌜p_obj(A) = r⌝) = r    for all r ∈ [0,1]    (2)

Here ⌜p_obj(A) = r⌝ is the proposition that the Agent believes that the objective probability of A is equal to r, and (2) requires that the Agent's subjective degrees of belief conditional on this belief should be equal to r; otherwise the Agent is inconsistent in his thinking. The difference between (1) and (2) is that r on the right hand side of (2) need not be equal to the real objective probability p_obj(A). The difference between these two interpretations plays no role, however, from the perspective of the consistency problem we investigate here: because of the universal quantification over p_obj in the consistency definitions and because of the universal quantification over r in (2), the two construals lead to the same consistency problem.

[2] We thank C. Hoefer and G. Bana for pointing this out in the discussion in the symposium.

3 Weak and strong consistency of the Abstract Principal Principle

(X, S, p) denotes a classical probability measure space, where S is a σ-algebra of (some) subsets of X and p is a probability measure on S. Given two σ-algebras S and S′, the injective map h : S → S′ is a σ-algebra embedding if it preserves all Boolean σ-operations. The probability space (X′, S′, p′) is called an extension of (X, S, p) with respect to h if h is a σ-algebra embedding of S into S′ that preserves the probability measure p:

    p′(h(A)) = p(A)    for all A ∈ S    (3)

Definition 3.1. The Abstract Principal Principle is called weakly consistent if the following hold: given any probability space (X_obj, S_obj, p_obj), there exist a probability space (X_subj, S_subj, p_subj) and a σ-algebra embedding h of S_obj into S_subj such that

(i) for every A ∈ S_obj there exists an A′ ∈ S_subj with the property

    p_subj(h(A) | A′) = p_obj(A)    (4)

(ii) if A, B ∈ S_obj and A ≠ B, then A′ ≠ B′.

Definition 3.1 says: given the "objective" probability space (X_obj, S_obj, p_obj), the σ-algebra S_subj in (X_subj, S_subj, p_subj) contains the "copies" h(A) of all the random events A ∈ S_obj and also an element A′ to be interpreted as representing the proposition "the objective probability, p_obj(A), of A is equal to r" (the proposition we denoted by ⌜p_obj(A) = r⌝). If A ≠ B then A′ ≠ B′ must hold, because ⌜p_obj(A) = r⌝ and ⌜p_obj(B) = s⌝ are different propositions; this is expressed by (ii) in the definition.
The main content of the Abstract Principal Principle is then expressed by condition (4), which states that the conditional degrees of belief p_subj(h(A) | A′) of an agent about random events h(A) ↔ A ∈ S_obj are equal to the objective probabilities p_obj(A), where the condition A′ is that the agent knows the values of the objective probabilities.

Proposition 3.2. The Abstract Principal Principle is weakly consistent.

The above proposition follows from Proposition 5.2, stated later, which asserts the weak consistency of the Stable Abstract Principal Principle.

Definition 3.3. The Abstract Principal Principle is defined to be strongly consistent if, in addition to conditions (i)-(ii) in Definition 3.1, the following holds:

(iii) the probability space (X_subj, S_subj, p_subj) is an extension of the probability space (X_obj, S_obj, p⁰_subj) with respect to h; i.e. we have

    p_subj(h(A)) = p⁰_subj(A)    for all A ∈ S_obj    (5)

The content of this additional requirement is that the agent's prior probability function p_subj, restricted to the random events, can be equal to a probability measure p⁰_subj on S_obj that may differ from the objective probabilities of the random events given by p_obj.

Proposition 3.4. The Abstract Principal Principle is strongly consistent if p_obj is absolutely continuous with respect to the agent's prior degrees of belief p⁰_subj.

4 The Stable Abstract Principal Principle

Once the agent has adjusted his subjective degree of belief by conditionalizing, p_subj(h(A) | ⌜p_obj(A) = r⌝) = r, he may then learn the value of another objective probability, ⌜p_obj(B) = s⌝, in which case he must conditionalize again. What should be the result of this second conditionalization? Since the agent's conditional degrees of belief p_subj(h(A) | ⌜p_obj(A) = r⌝) in A are already correct (equal to the objective probabilities), it would be irrational to change his already correct degree of belief about A upon learning an additional truth, namely the value of the objective probability p_obj(B). So a rational agent's conditional subjective degrees of belief should be stable in the sense of satisfying the following condition:

    p_subj(h(A) | ⌜p_obj(A) = r⌝) = p_subj(h(A) | ⌜p_obj(A) = r⌝ ∩ ⌜p_obj(B) = s⌝)    for all B ∈ S_obj    (6)

If A and B are independent with respect to their objective probabilities, p_obj(A∩B) = p_obj(A)·p_obj(B), and the conditional subjective degrees of belief are stable in the sense of (6), then (assuming the Abstract Principal Principle) one has

    p_subj(h(A) ∩ h(B) | ⌜p_obj(A) = r⌝ ∩ ⌜p_obj(B) = s⌝ ∩ ⌜p_obj(A∩B) = t⌝)    (7)
    = p_subj(h(A∩B) | ⌜p_obj(A) = r⌝ ∩ ⌜p_obj(B) = s⌝ ∩ ⌜p_obj(A∩B) = t⌝)
    = p_subj(h(A∩B) | ⌜p_obj(A∩B) = t⌝)
    = p_obj(A∩B) = p_obj(A)·p_obj(B)
    = p_subj(h(A) | ⌜p_obj(A) = r⌝) · p_subj(h(B) | ⌜p_obj(B) = s⌝)
    = p_subj(h(A) | ⌜p_obj(A) = r⌝ ∩ ⌜p_obj(B) = s⌝ ∩ ⌜p_obj(A∩B) = t⌝)    (8)
      · p_subj(h(B) | ⌜p_obj(A) = r⌝ ∩ ⌜p_obj(B) = s⌝ ∩ ⌜p_obj(A∩B) = t⌝)    (9)

Equations (7) and (8)-(9) mean that if the conditional subjective degrees of belief are stable, then, if A and B are objectively independent, they (their isomorphic images h(A), h(B)) are also subjectively independent: independent also with respect to the probability measure that represents conditional subjective degrees of belief, where the condition is that the agent knows the objective probabilities of all of A, B and A∩B. In this case the conditional subjective degrees of belief properly reflect the objective independence relations of random events; they are independence-faithful.
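The chain of equalities (7)-(9) can be checked numerically in a concrete model. The sketch below is our illustration only: it assumes a toy objective space (two fair coin flips) and borrows the product-space model that appears later in the Appendix (proof of Proposition 5.2), in which the copy of an event C is h(C) = C × Y and the proposition-event C′ is X_obj × {y_C, y}. In that model stability holds, and for objectively independent A and B the subjective conditional probabilities factorize as in (8)-(9).

```python
from fractions import Fraction as F

# Toy objective space (assumed): two independent fair coin flips.
X_obj = [(i, j) for i in (0, 1) for j in (0, 1)]
p_obj = {x: F(1, 4) for x in X_obj}
A  = {x for x in X_obj if x[0] == 1}      # first flip heads
B  = {x for x in X_obj if x[1] == 1}      # second flip heads
AB = A & B                                # p_obj(AB) = 1/4 = p_obj(A) * p_obj(B)

# Auxiliary space Y with one label per proposition-event plus an extra point 'y'
# (as in the Appendix construction); q is uniform, so q({'y'}) > 0.
Y = ['yA', 'yB', 'yAB', 'y']
q = {y: F(1, 4) for y in Y}

def p_subj(event):                        # product measure on X_obj x Y
    return sum(p_obj[x] * q[y] for (x, y) in event)

def h(C):                                 # copy of a random event C
    return {(x, y) for x in C for y in Y}

def prop(label):                          # proposition-event C' = X_obj x {label, 'y'}
    return {(x, y) for x in X_obj for y in (label, 'y')}

def cond(E, given):                       # conditional probability via Bayes' rule
    return p_subj(E & given) / p_subj(given)

A_p, B_p, AB_p = prop('yA'), prop('yB'), prop('yAB')
triple = A_p & B_p & AB_p

assert cond(h(A), A_p) == F(1, 2)                   # condition (1) for A
assert cond(h(AB), AB_p) == F(1, 4)                 # condition (1) for A n B
assert cond(h(A), A_p) == cond(h(A), A_p & B_p)     # stability, eq. (6)
# Independence-faithfulness, eqs. (7)-(9):
assert cond(h(A) & h(B), triple) == cond(h(A), triple) * cond(h(B), triple)
print(cond(h(A) & h(B), triple))          # 1/4 = p_obj(A) * p_obj(B)
```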
Note that for the subjective degrees of belief to satisfy the independence-faithfulness condition expressed by eqs. (7) and (8)-(9), it is sufficient that stability (6) holds only for the restricted set of elements B in the σ-subalgebra S^{A,ind}_obj of S_obj generated by the elements of S_obj that are independent of A with respect to p_obj. This motivates amending the Abstract Principal Principle by requiring stability of the subjective probabilities, resulting in the "Stable Abstract Principal Principle":

Stable Abstract Principal Principle. The subjective probabilities p_subj(A) are related to the objective probabilities p_obj(A) as required by equation (1); furthermore, the subjective probability function is stable in the sense that the following holds:

    p_subj(h(A) | ⌜p_obj(A) = r⌝) = p_subj(h(A) | ⌜p_obj(A) = r⌝ ∩ ⌜p_obj(B) = s⌝)    for all B ∈ S_obj    (10)

If the subjective probability function is only independence-stable, in the sense that (10) holds for all B ∈ S^{A,ind}_obj, then the corresponding principle is called the Independence-Stable Abstract Principal Principle.

5 Is the Stable Abstract Principal Principle strongly consistent?

Definition 5.1. The Stable Abstract Principal Principle is defined to be weakly consistent if it is weakly consistent in the sense of Definition 3.1 and the subjective probability function p_subj is stable: it satisfies condition (10). The Independence-Stable Abstract Principal Principle is defined to be weakly consistent if it is weakly consistent in the sense of Definition 3.1 and the subjective probability function p_subj is independence-stable: it satisfies (10) for all B ∈ S^{A,ind}_obj.

Proposition 5.2. The Stable Abstract Principal Principle is weakly consistent.

The above proposition entails

Proposition 5.3. The Independence-Stable Abstract Principal Principle is weakly consistent.

Definition 5.4. The Stable Abstract Principal Principle is defined to be strongly consistent if it is strongly consistent in the sense of Definition 3.3 and the subjective probability function p_subj is stable. The Independence-Stable Abstract Principal Principle is strongly consistent if it is strongly consistent in the sense of Definition 3.3 and the subjective probability function p_subj satisfies (10) for all B ∈ S^{A,ind}_obj.

Problem 5.5. Is the (Independence-)Stable Abstract Principal Principle strongly consistent?

The problem of the strong consistency of both the Stable and the Independence-Stable Abstract Principal Principle remains open.[3]

[3] See footnote 1.

6 Relation to other works

Lewis (1986) introduced the term "Principal Principle" to refer to the principle linking subjective beliefs to chances. In the context of the Principal Principle p_subj(A) is called the "credence", Cr_t(A), of the agent in event A at time t, p_obj(A) is the chance Ch_t(A) of the event A at time t, and the Principal Principle is the stipulation that credences and chances are related as

    Cr_t(A | ⌜Ch_t(A) = r⌝ ∩ E) = Ch_t(A) = r    (11)

where E is any admissible evidence the agent has at time t in addition to knowing the value of the chance of A. The proposition ⌜Ch_t(A) = r⌝ is clearly admissible evidence for (11), and, substituting E = ⌜Ch_t(A) = r⌝ into equation (11), we obtain

    Cr_t(A | ⌜Ch_t(A) = r⌝) = Ch_t(A) = r    (12)

which, at any given time t, is an instance of the Abstract Principal Principle if we make the identifications p_obj(A) = Ch_t(A), p_subj(A) = Cr_t(A). By Proposition 3.4 we know that, for any time parameter t, relation (12) is consistent with measure theoretic probability.
If, however, admissibility of evidence E is defined in such a way that propositions stating the values of chances of other events B at time t (i.e. propositions of the form ⌜Ch_t(B) = s⌝) are admitted as E, then (11) together with (12) entails that we should also have

    Cr_t(A | ⌜Ch_t(A) = r⌝ ∩ ⌜Ch_t(B) = s⌝) = Ch_t(A) = r    (13)

Relation (13) together with equation (12) is, at any given time t, an instance of the Stable Abstract Principal Principle if we make the identifications p_obj(A) = Ch_t(A), p_subj(A) = Cr_t(A) and p_obj(B) = Ch_t(B). Thus whether relations (13) and (12) can hold at all is exactly the question of whether the Stable Abstract Principal Principle is strongly consistent. If one allows as evidence E in (13) only propositions stating the values of objective chances of events B that are objectively independent of A, then the question of whether relations (13) and (12) can hold in general is exactly the question of whether the Independence-Stable Abstract Principal Principle is strongly consistent. Since Lewis regarded as admissible all propositions containing information that is "irrelevant" for the chance of A (Lewis 1986, 91), for Lewis admissible evidence should include propositions about the values of chances of events that are independent of A with respect to the probability measure describing their chances. Under this interpretation of "irrelevant" information, establishing the consistency of Lewis' Principal Principle as a general norm requires proving the consistency of the Independence-Stable Abstract Principal Principle.

It should be emphasized that this kind of consistency has nothing to do with any metaphysics of chances or with the concept of natural law that one may have in the background of the Principal Principle; in particular, the potential inconsistency at issue here is different from the one related to "undermining" (see below). This consistency expresses a simple but fundamental compatibility of the Principal Principle with the basic structure of probability theory.

Lewis himself saw a consistency problem in his Principal Principle (he called it the "Big Bad Bug"): if A is an event in the future of t that has a non-zero chance r > 0 of happening at that later time, but we have knowledge E about the future that entails that A will in fact not happen, E ⊂ A^⊥, then substituting this E into (11) leads to a contradiction, since r > 0. Such an A is called an "unactualized future that undermines present chances"; hence the phrase "undermining" to refer to this situation. Since certain metaphysical arguments led Lewis to think that one is forced to admit such evidence E, he tried to "debug" the Principal Principle (Lewis 1994); the same sort of debugging was proposed simultaneously by Hall (1994) and Thau (1994). Other debugging attempts have followed (Black 1998; Roberts 2001; Loewer 2004; Hall 2004; Hoefer 2007; Ismael 2008; Meacham 2010; Glynn 2010; Nissan-Rozen 2013; Pettigrew 2013; Frigg and Hoefer 2015), and to date no consensus has emerged as to which of the debugged versions of the Principal Principle is tenable: Vranas (2004) claims that there was no need for a debugging in the first place; Briggs (2009) argues that none of the modified principles works; Pettigrew (2012) provides a framework that allows one to choose the correct Principal Principle depending on one's metaphysical concept of chance.
Papers aiming at "debugging" Lewis' Principal Principle typically combine the following three moves (a), (b) and (c):

(a) Restricting the admissible evidence in (11) to a particular class A_A of propositions in order to avoid "undermining" (Hoefer 2007).

(b) Modifying the Principal Principle by replacing Ch_t(A) on the right hand side of (11) with a value F(A) given by a function F different from the objective chance function (the New Principle of Hall (1994); the General Principal Principle of Lewis (1980) and of Roberts (2001)).

(c) Modifying the Principal Principle by replacing the conditioning proposition ⌜Ch_t(A) = r⌝ ∩ E on the left hand side of (11) by a different conditioning proposition C_A, which is a conjunction of some propositions from S_obj, A_A, and propositions of the form ⌜p_obj(B) = r⌝ (the Conditional Principle and the General Principle of Vranas (2004); the General Recipe of Ismael (2008)).

To establish a theory of chance along a debugging strategy characterized by a combination of (a), (b) and (c), it is not enough, however, to show that undermining is avoided: one has to prove that the debugged Principal Principle is consistent in the sense of Definition 6.1 below, which is in the spirit of the notion of consistency investigated in this paper:

Definition 6.1. We say that the "(A_A, C_A, F)-debugged" Principal Principle is strongly consistent if the following hold: given any probability space (X_obj, S_obj, p_obj) and another probability measure p⁰_subj on S_obj, there exist a probability space (X_subj, S_subj, p_subj) and a σ-algebra embedding h of S_obj into S_subj such that

(i) for every A ∈ S_obj the set A_A is in S_subj, and for every A ∈ S_obj there exists a C_A ∈ S_subj with the property

    p_subj(h(A) | C_A) = F(A)    (14)

(ii) if A, B ∈ S_obj and A ≠ B, then C_A ≠ C_B;

(iii) the probability space (X_subj, S_subj, p_subj) is an extension of the probability space (X_obj, S_obj, p⁰_subj) with respect to h; i.e. we have

    p_subj(h(A)) = p⁰_subj(A)    for all A ∈ S_obj    (15)

(iv) for all A ∈ S_obj and for all B ∈ A_A we have

    p_subj(h(A) | C_A) = p_subj(h(A) | C_A ∩ B)    (16)

We say that the "(A_A, C_A, F)-debugged" Principal Principle is weakly consistent if (i), (ii) and (iv) hold.

Taking specific A_A, C_A and F, one obtains particular consistency definitions expressing the consistency of specific debugged Principal Principles. For instance, the stipulations

    C_A = B ∩ ⌜p_obj(A|B) = r⌝    (17)
    F(A) = p_obj(A)    (18)

yield Vranas' Conditional Principle (Vranas 2004, 370); whereas Hall's New Principle (Hall 1994, 511) can be obtained by

    C_A = H_{t,w} ∩ T_w    (19)
    F(A) = p_obj(A | T_w)    (20)

where H_{t,w} is "the proposition that completely characterizes w's history up to time t" (Hall 1994, 506) and T_w is the "proposition that completely characterizes the laws at w" (Hall 1994, 506), w being a possible world.

Proving consistency of the (A_A, C_A, F)-debugged Principal Principles is necessary for the respective debugged Principal Principles to be compatible with measure theoretic probability theory. To the best of our knowledge such consistency proofs have not been given: it seems that this type of consistency is tacitly assumed in the works analyzing the modified Principal Principles although, as the propositions and their proofs presented in this paper show, the truth of these types of consistency claims is far from obvious.

The problem of strong consistency of the Stable Abstract Principal Principle is also relevant from the perspective of the existence of particular models of the axioms of higher order probability theory (HOP) suggested by Gaifman (1988).
If one regards the theory of HOP as an axiomatic theory, then the question arises whether models of the theory exist. Gaifman provides a few specific examples that are models of the axioms (Gaifman 1988, 208–10), but he does not raise the general issue of what kinds of models exist. What one would like to know is whether any objective probability theory can be made part of a HOP in such a way that the objective probabilities are related to the subjective ones in the manner required by the HOP axioms. Proving the existence of such HOPs entails that the Stable Abstract Principal Principle is strongly consistent.

7 Appendix

7.1 Proof of strong consistency of the Abstract Principal Principle (Proposition 3.4)

The statement follows from Proposition 7.1 below if we make the following identifications:

• (X_obj, S_obj, p_obj) ↔ (X, S, p̂)
• (X_obj, S_obj, p⁰_subj) ↔ (X, S, p)
• (X_subj, S_subj, p_subj) ↔ (X′, S′, p′)

Proposition 7.1. Let (X, S, p) be a probability space and let p̂ be another probability measure on S such that p̂ is absolutely continuous with respect to p. Then there exists an extension (X′, S′, p′) of (X, S, p) with respect to an embedding h : S → S′ having the following properties:

(i) for all A ∈ S there is an A′ ∈ S′ such that p′(h(A) | A′) = p̂(A);
(ii) A ≠ B implies A′ ≠ B′.

Proof. We distinguish two cases: (i) the σ-algebra S is finite; (ii) S is not finite. When S is finite, the proof consists of two steps. In the first step we choose an arbitrary element A ∈ S and construct an extension (X*, S*, p*) of (X, S, p) with respect to an embedding h* in such a manner that in this extension this particular event A has a pair A′ = A* with the required properties. In Step 2 we repeat this step n−1 times, choosing each time another element of S, until we exhaust S and obtain the extension (X′, S′, p′) of (X, S, p).

Step 1. Take any A ∈ S. We wish to construct a space (X*, S*, p*) and a function h* : S → S* such that

• h* : (S, p) → (S*, p*) is a measure preserving, injective Boolean algebra homomorphism;
• there is an A* ∈ S* such that p*(h*(A) | A*) = p̂(A).

Let (X^1, S^1) and (X^2, S^2) be two disjoint copies of (X, S), and fix the algebra isomorphisms h_1 : (X, S) → (X^1, S^1) and h_2 : (X, S) → (X^2, S^2). Put X* = X^1 ∪ X^2 and define

    S* = { h_1(A) ∪ h_2(B) : A, B ∈ S }    (21)

It is a routine task to verify that S* is a Boolean algebra of subsets of X* with respect to the usual set theoretical operations ∪, ∩, \ (below we also use the notation A^⊥ to refer to the set theoretical complement of an element A with respect to a set that is fixed by the context). Define the map h* : S → S* by

    h*(A) = h_1(A) ∪ h_2(A)    for all A ∈ S    (22)

h* is a homomorphism between S and S*. Let 0 ≤ α ≤ 1 be any number and define p* on S* by

    p*(h_1(A) ∪ h_2(B)) := α·p(A) + (1−α)·p(B)    for all A, B ∈ S    (23)

For each A ∈ S we then have

    p*(h*(A)) = α·p(A) + (1−α)·p(A) = p(A)    (24)

Consequently, h* : (S, p) → (S*, p*) is a measure preserving, injective Boolean algebra homomorphism. For any fixed A ∈ S define A* by

    A* := h_1(A) ∪ h_2(A^⊥)    (25)

Our aim now is to choose α in such a way that the following is true:

    p*(h*(A) | A*) = p̂(A)    (26)

Some basic algebra shows that

    p*(h*(A) | A*) = α·p(A) / [α·p(A) + (1−α)·(1−p(A))]    (27)

Thus in order to satisfy (26) we have to choose α so as to guarantee

    α·p(A) / [α·p(A) + (1−α)·(1−p(A))] = p̂(A)    (28)

By assumption, if p(A) = 1 then p̂(A) = 1, and thus any α ≠ 0 makes (28) true. Similarly, if p(A) = 0, then p̂(A) = 0, which means that any α ≠ 1 will do. Also, if p̂(A) = 0, then α = 0 will do.
Therefore we may assume 0 < p(A) < 1 and 0 < p̂(A) ≤ 1. Rearranging equation (28) and using the notation p = p(A), r = p̂(A), we obtain

    α = (rp − r) / (rp − r + pr − p)    (29)

To guarantee (28) we only have to show that α in equation (29) lies between 0 and 1. Since 0 < p < 1 and 0 < r ≤ 1 we have rp < r and pr ≤ p. This means that both the numerator and the denominator of the fraction in (29) are negative, whence α is positive. On the other hand, we have

    0 ≥ pr − p
    rp − r ≥ rp − r + pr − p
    (rp − r) / (rp − r + pr − p) ≤ 1

Thus 0 ≤ α ≤ 1 can always be chosen so that equation (26) holds.

Step 2. We obtain (X′, S′, p′) by iterating Step 1. Let A_1, ..., A_n be an enumeration of S. Applying Step 1 with A_1 in place of A, one finds a space (X_1, S_1, p_1) = (X*, S*, p*), an event A_1* ∈ S_1 and an embedding h_1 : (X, S, p) → (X_1, S_1, p_1) such that

    p_1(h_1(A_1) | A_1*) = p̂(A_1)    (30)

Continuing in this way, we get elements (h_{i−1} ··· h_1(A_i))* ∈ S_i and a chain of extensions

    (X, S, p) → (X_1, S_1, p_1) → (X_2, S_2, p_2) → ··· → (X_n, S_n, p_n)

with embeddings h_1, h_2, ..., h_n, such that

    p_n( h_n ··· h_2 h_1(A_i) | h_n ··· h_{i+1}((h_{i−1} ··· h_1(A_i))*) ) = p̂(A_i)

holds for all A_i. Therefore we can complete the proof by letting

    (X′, S′, p′) = (X_n, S_n, p_n)
    h = h_n h_{n−1} ··· h_1
    A_i′ = h_n ··· h_{i+1}((h_{i−1} ··· h_1(A_i))*)

One has to verify that the extension in step j does not destroy the result of the previous one; but this is a consequence of h_j being an embedding that preserves the probability.

When the σ-algebra S is not finite, we take the extension (X′, S′, p′) to be the product space

    (X, S, p) ⊛ ([0,1], L, λ) = (X ⊛ [0,1], S ⊛ L, p ⊛ λ)

where ([0,1], L, λ) is the standard Lebesgue space over the unit interval and ⊛ denotes the special product of two probability spaces introduced in (Gyenis and Rédei 2011). The elements of S ⊛ L are certain functions from [0,1] to S; the embedding h : (X, S, p) → (X′, S′, p′) is via the constant function h(A)(x) = A (x ∈ [0,1]). The extension of p is given by

    p′(h(A)) = ∫₀¹ p(h(A)(x)) dλ(x) = ∫₀¹ p(A) dλ = p(A)

Fix a real number α ∈ [0,1] and take any Lebesgue measurable subset B ⊆ [0,1] with measure λ(B) = α. Write A′ for the function A′ : [0,1] → S defined by A′(x) = A if x ∈ B and A′(x) = A^⊥ otherwise. Then A′ ∈ S′ and one can verify easily that

    p′(h(A) | A′) = α·p(A) / [α·p(A) + (1−α)·(1−p(A))]    (31)

It follows that if we choose α such that

    α·p(A) / [α·p(A) + (1−α)·(1−p(A))] = p̂(A)    (32)

then we get p′(h(A) | A′) = p̂(A). That we can choose α to satisfy (32) is contained in the proof of the finite case.
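The choice of α in equations (28)-(29) and (32) can be checked mechanically. The following sketch is our addition: it computes α from (29) on a sample grid of admissible pairs (p, r) = (p(A), p̂(A)) and verifies both that α lies in [0,1] and that it satisfies (28); the grid values are assumptions made only for the check.

```python
from fractions import Fraction as F

def alpha_from(p, r):
    """alpha as given by equation (29), for 0 < p < 1 and 0 < r <= 1."""
    return (r * p - r) / (r * p - r + p * r - p)

def conditional(alpha, p):
    """Left hand side of equation (28): p*(h*(A) | A*)."""
    return alpha * p / (alpha * p + (1 - alpha) * (1 - p))

# Exact rational arithmetic avoids any rounding issues in the check.
for p in (F(k, 10) for k in range(1, 10)):        # sample values of p(A)
    for r in (F(k, 10) for k in range(1, 11)):    # sample values of p_hat(A)
        a = alpha_from(p, r)
        assert 0 <= a <= 1                        # alpha is a legitimate weight
        assert conditional(a, p) == r             # equation (28) is satisfied
print("equations (28)-(29) verified on the sample grid")
```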
7.2 Proof of weak consistency of the Stable Abstract Principal Principle (Proposition 5.2)

The statement of weak consistency of the Stable Abstract Principal Principle follows from Proposition 7.2 below if we make the following identifications:

• (X_obj, S_obj, p_obj) ↔ (X, S, p)
• (X_subj, S_subj, p_subj) ↔ (X′, S′, p′)

Proposition 7.2. Let (X, S, p) be a probability space. Then there exists an extension (X′, S′, p′) of (X, S, p) with respect to a σ-algebra homomorphism h : S → S′ such that

(i) for all A ∈ S there is an A′ ∈ S′ such that p′(h(A) | A′) = p(A);
(ii) A ≠ B implies A′ ≠ B′;
(iii) p′(h(A) | A′) = p′(h(A) | A′ ∩ B′) for all B ∈ S.    (33)

Proof. Let (X, S, p) be a probability space and let Y_0 be a set disjoint from S and having the same cardinality as S. We can think of Y_0 as having elements y_A labeled by the elements A ∈ S. Consider the set Y := Y_0 ∪ {y} = {y_A : A ∈ S} ∪ {y}, where y is an auxiliary element different from every y_A. Take the power set P(Y) and let q be any probability measure on P(Y) such that q({y}) ≠ 0. Then (Y, P(Y), q) is a probability space and we can form the product space

    (X′, S′, p′) = (X × Y, S ⊗ P(Y), p × q)

with p′ = p × q being the product measure on S ⊗ P(Y). The map h : S → S′ defined by h(A) := A × Y is an injective, measure preserving σ-algebra embedding. For each A ∈ S put

    A′ := X × {y_A, y}

It is clear that (ii) in the proposition holds for A′, B′ so defined. Utilizing that p′ is a product measure, one can verify by explicit calculation that both (i) and (iii) hold.

References

Black, R. (1998). Chance, credence, and the Principal Principle. The British Journal for the Philosophy of Science, 49:371–385.

Briggs, R. (2009). The anatomy of the Big, Bad Bug. Noûs, 43:428–449.

Frigg, R. and Hoefer, C. (2015). The best Humean system for statistical mechanics. Erkenntnis, 80:551–574.

Gaifman, H. (1988). A theory of higher order probabilities. In B. Skyrms and W. L. Harper, editors, Causation, Chance, and Credence. Proceedings of the Irvine Conference on Probability and Causation, Volume 1, volume 41 of The University of Western Ontario Series in Philosophy of Science, pages 191–219. Kluwer Academic, Dordrecht.

Glynn, L. (2010). Deterministic chance. The British Journal for the Philosophy of Science, 61:51–80.

Gyenis, Z. and Rédei, M. (2011). Characterizing common cause closed probability spaces. Philosophy of Science, 78:393–409.

Hall, N. (1994). Correcting the guide to objective chance. Mind, 103:505–518.

Hall, N. (2004). Two mistakes about credence and chance. Australasian Journal of Philosophy, 82:93–111.

Hoefer, C. (2007). The third way on objective probability: A sceptic's guide to objective chance. Mind, 116:449–596.

Ismael, J. (2008). Raid! Correcting the Big Bad Bug. Noûs, 42:292–307.

Lewis, D. (1980). A subjectivist's guide to objective chance. In R. C. Jeffrey, editor, Studies in Inductive Logic and Probability, volume II, pages 263–293. University of California Press, Berkeley. Reprinted in (Lewis 1986).

Lewis, D. (1986). Philosophical Papers, volume II. Oxford University Press, Oxford.

Lewis, D. (1986). A subjectivist's guide to objective chance. In Philosophical Papers, volume II, pages 83–132. Oxford University Press, Oxford.

Lewis, D. (1994). Humean supervenience debugged. Mind, 103:473–490.

Loewer, B. (2004). David Lewis' Humean theory of objective chance. Philosophy of Science, 71:1115–1125.

Meacham, C. J. G. (2010). Two mistakes regarding the Principal Principle. The British Journal for the Philosophy of Science, 61:407–431.

Nissan-Rozen, I. (2013). Jeffrey conditionalization, the Principal Principle, the desire as belief thesis and Adam's thesis. The British Journal for the Philosophy of Science, 64:837–850.

Pettigrew, R. (2012). Accuracy, chance and the Principal Principle. Philosophical Review, 121:241–275.

Pettigrew, R. (2013). A new epistemic utility argument for the Principal Principle. Episteme, 10:19–35.

Roberts, J. T. (2001). Undermining undermined: Why Humean supervenience never needed to be debugged. Philosophy of Science, 68:S98–S108. Proceedings of the 2000 Biennial Meeting of the Philosophy of Science Association, Part I: Contributed Papers.

Thau, M. (1994). Undermining and admissibility. Mind, 103:491–504.

Vranas, P. B. M. (2004). Have your cake and eat it too: The Old Principal Principle reconciled with the New. Philosophy and Phenomenological Research, LXIX:368–382.