Public Announcements, Public Lies and Recoveries
Kai Li and Jan van Eijck
Journal of Logic, Language and Information (2022). DOI: 10.1007/s10849-022-09351-4

Abstract The paper gives a formal analysis of public lies, explains how public lying is related to public announcement, and describes the process of recovery from false beliefs engendered by public lying. The framework treats two kinds of public lies: the simple lying update, and two-step lying, which consists of suggesting that the lie may be true followed by announcing the lie. It turns out that agents' convictions of what is true are immune to the first kind, but can be shattered by the second kind. Next, recovery from public lying is analyzed. Public lies that are accepted by an audience cannot be undone simply by announcing their negation. The paper proposes a recovery process that works well for restoring beliefs about facts but cannot be extended to beliefs about beliefs. The formal machinery of the paper consists of KD45 models and conditional neighbourhood models, with various update procedures on them. Completeness proofs for a number of reasoning systems (converse belief logic, public lies logic, lying and recovery logic, conditional neighbourhood logic, plus its dynamic version) are included.

Acknowledgements The authors wish to thank Hans van Ditmarsch and Malvin Gattinger for their help and advice. We are also grateful to an anonymous reviewer for his/her comments on an earlier version of the paper.

It has frequently been noted that the surest result of brainwashing in the long run is a peculiar kind of cynicism, the absolute refusal to believe in the truth of anything, no matter how well it may be established. In other words, the result of a consistent and total substitution of lies for factual truth is not that the lie will now be accepted as truth, and truth be defamed as lie, but that the sense by which we take our bearings in the real world - and the category of truth versus falsehood is among the mental means to this end - is being destroyed.
Hannah Arendt, Truth and Politics (Arendt (1967); Penguin Classics Edition, 2006)

The effect of public lies, according to Hannah Arendt, is that they destroy our bearings in the world. In this paper, we will make an attempt to explain this formally. We will also model how to recover from public lies. For this, we model public lies along the same lines as public announcements. Our starting point is the representation of knowledge, ignorance and belief by means of Kripke models, more specifically KD45 models. Further on in the paper, we will also use conditional neighbourhood models, to model conditional beliefs. We will model both public announcements and public lies as maps from Kripke models to Kripke models. The results of public lies are Kripke models where Bayesian conditioning gives wrong results, in the sense that agents can be 100% sure of things that are not true. The effect of public lies cannot be detected from the inside: agents still have fully consistent world views. The only thing is that they can be out of touch with reality. But the agents have no means of knowing this. In order to explain recoveries from false beliefs one has to invoke the effects of acting on false beliefs. The results or utilities of our actions are not determined by our beliefs but by the real world.
To see how rational investigation and approach to the truth should ideally proceed, consider the following quote from MacKay (2003): Denote the proposition 'the suspect and one unknown person were present' by S. The alternative, S̄, states 'two unknown people from the population were present'. The prior in this problem is the prior probability ratio between the propositions S and S̄. This quantity is important to the final verdict and would be based on all other available information in the case. Our task here is just to evaluate the contribution made by the data D, that is, the likelihood ratio P(D|S, H)/P(D|S̄, H). In my view, a jury's task should generally be to multiply together carefully evaluated likelihood ratios from each independent piece of admissible evidence with an equally carefully reasoned prior probability. [This view is shared by many statisticians but learned British appeal judges recently disagreed and actually overturned the verdict of a trial because the jurors had been taught to use Bayes' theorem to handle complicated DNA evidence.]

The core principles of rational belief seem to rely heavily on conditional reasoning. Suppose φ (a proposition that is not in contradiction with anything you know) is true. Would you then believe ψ? In other words, if the world would turn out to be φ, would you still believe ψ? It is important that the condition is not a counterfactual. If one knows that a condition does not hold, then speculating about what one would believe if it were otherwise is usually not fruitful for getting closer to the truth. We interpret belief in ψ conditional on φ in the information-theoretic sense. The inspiration for this is Bayesian update, with the following very useful notion of belief: belief as willingness to bet on ψ, given information φ.

In Sect. 3 we introduce the converse belief operator, we show how knowledge can be expressed in terms of belief and converse belief, and we model public lies along the lines of Steiner (2006) and Kooi and Renne (2011). Then we provide a recovery operation in Sect. 4. We show that if truth tellers are able to sequentially perform a recovery operation and a public announcement, then the audience can recover from false beliefs. However, this observation also indicates that liars can use the same tactic; if they do, then their alleged "truth" becomes a mutual conviction (mutual KD45 belief). In Sect. 5 we discuss the effect of public lies and recoveries on a slightly different notion of belief: propositions to which one assigns a probability greater than 0.5, under some condition. Because an audience may use prior probabilities different from those of truth tellers, and truth tellers are bound to announce the truth, liars may have an advantage over truth tellers. We provide logic systems and completeness proofs.
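To make the likelihood-ratio recipe from the MacKay quote concrete, here is a minimal Python sketch. It is ours, not code from the paper, and all numbers are invented for illustration:

```python
# A minimal sketch of MacKay's jury calculation: the posterior odds for S
# over its alternative are the prior odds multiplied by the likelihood
# ratio P(D|S)/P(D|~S) of each independent piece of evidence.
# All numbers below are hypothetical.

def posterior_odds(prior_odds: float, likelihood_ratios: list[float]) -> float:
    """Multiply prior odds P(S)/P(~S) by the likelihood ratio of each datum."""
    odds = prior_odds
    for ratio in likelihood_ratios:
        odds *= ratio
    return odds

prior = 1 / 1000            # hypothetical prior odds that the suspect was present
evidence = [5000.0, 2.0]    # hypothetical likelihood ratios (e.g. DNA, testimony)
odds = posterior_odds(prior, evidence)
print(f"posterior odds = {odds:.1f}, P(S|D) = {odds / (1 + odds):.3f}")
```

The point of the bracketed remark in the quote is that both factors matter: carefully evaluated likelihood ratios are of little use when combined with an unreasonable prior probability.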
At the core of epistemic logic is the representation of uncertainty by means of a set of current options for what the actual world could turn out to be like (cf. Stalnaker (2006)). Consider the case of a single fact, let us say the outcome of a coin toss, where the coin has landed, but is hidden under a cup. Let h represent the situation where the coin has landed heads up, and h̄ the situation where the coin has landed tails up. Ignorance of some individual a about this situation can be represented by a model with two worlds h and h̄ that a cannot distinguish. The actual world is h, but this indication of what is actually the case is invisible to the agent a. In general, if a representation for a knowledge situation contains a pointer to the actual world, then this pointer is always invisible to the knowing agents.

A situation where I know one thing and you know another thing has at least four possible states of affairs. Suppose a knows the status of p and b the status of q. Say they both toss a coin, and p denotes heads for a, q denotes heads for b. We now need to distinguish the four possible outcomes, in a picture where the solid arrows represent the uncertainty of a, and the dashed arrows are for b. For convenience, we leave out the self-loops. Note that the accessibility relation of a (and of b) is an equivalence relation: reflexive, transitive and symmetric. Models where all the accessibility relations are equivalence relations are called S5 models.

Public announcement logic was pioneered in Plaza (1989). Intuitively, a public announcement makes an agent restrict her belief to the announced case. A natural way to implement this is by restricting every belief-cell to φ-worlds after announcing φ. Observe that the public announcement update can be viewed as a restriction of the model and its accessibility relations to the set of worlds where the announcement is true. Alternatively, we can model public announcements by means of cutting accessibility links: the public announcement of φ results in cutting the links between φ-situations and ¬φ-situations. The precondition for publicly announcing φ is that φ is true in the real world. Viewed as a relational change, we can model this as the change from a to (?φ; a; ?φ). Notice that this change maps equivalence relations to equivalence relations. The key validity for public announcement is:

[!φ]K_a ψ ↔ (φ → K_a(φ → [!φ]ψ))

It expresses the equivalence of the following two statements: (1) a knows ψ after publicly announcing φ, and (2) if φ is true, then a knows that φ implies that ψ holds after a public announcement of φ. We make our assertion on the right (the assertion about the model after the update) conditional on !φ being executable, i.e., on φ being true. Note that the consequent simplifies to the equivalent formula φ → K_a[!φ]ψ.
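The restriction reading of the update is easy to sketch in Python. The encoding of models and the helper name announce below are ours, a toy illustration rather than code from the paper:

```python
# Sketch of the public announcement update: restrict the model to the
# worlds where the announced fact is true, and restrict each accessibility
# relation accordingly.

def announce(worlds, rel, truth, fact):
    """worlds: set of world names; rel: dict agent -> set of (w, v) pairs;
    truth: dict world -> set of atoms true there; fact: an atom."""
    keep = {w for w in worlds if fact in truth[w]}
    new_rel = {a: {(w, v) for (w, v) in pairs if w in keep and v in keep}
               for a, pairs in rel.items()}
    return keep, new_rel

# Two coin tosses: a world is named by its atoms, capital letter = atom false.
worlds = {"pq", "pQ", "Pq", "PQ"}
truth = {w: {c for c in w if c.islower()} for w in worlds}
rel = {"a": {(w, v) for w in worlds for v in worlds if ("p" in w) == ("p" in v)},
       "b": {(w, v) for w in worlds for v in worlds if ("q" in w) == ("q" in v)}}

keep, new_rel = announce(worlds, rel, truth, "p")   # publicly announce p
print(keep)                                          # {'pq', 'pQ'}: the p-worlds remain
```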
As public announcements can be regarded as changes of epistemic models, it is natural to consider whether lies can be treated in a similar way. Following Augustine, a lie is a statement that the liar disbelieves, and intends to make the listener believe (cf. Mahon (2016)). Thus a lie can be seen as an action with two preconditions: (1) the liar disbelieves his statement and (2) the liar intends to deceive the listener. In dynamic epistemic logic (DEL), the first precondition can be embedded in action models (cf. van Ditmarsch (2014)). The second precondition, about the liar's intention, can be modelled by introducing new modal operators for intentions (Sakama et al. (2010)), but this requires introducing new relations for each agent in the model, and it is omitted in most of the works using dynamic epistemic logic. To model a speaker who lies to a listener, let us reconsider the coin tossing example. Suppose the coin has landed heads up, and is hidden by agent a under a cup. Both b and c are ignorant of the situation, but know that a is aware of the status of the coin (picture: two worlds h and h̄, with a bc-link between them and abc-loops at each world). Note that it is common knowledge that neither b nor c can distinguish h and h̄.

In order to deal with the effects of lies one has to shift from S5 models to KD45 models (models where the epistemic state of an agent gets represented as a relation that is serial, transitive and euclidean, and is usually interpreted as 'belief'). Now suppose a privately lies to b that the coin has landed tails up. In the updated model all arrows of b pointing to h are eliminated and all those to h̄ remain unchanged. Agent b is deceived by a, and as a result b falsely believes that the actual world is h̄. Note that the relation represented by b's arrows is still serial, transitive and euclidean. In this picture the arrows of c are unchanged, which means that she knows about b's belief update, and everyone knows what she knows. This way of modelling was proposed by Steiner (2006). Steiner studied belief change on KD45 models, where the statement is announced to a subgroup of the population. Those outside the subgroup are unaffected, except for the fact that they will notice the belief changes of the subgroup. Thus an explanation of c's unaffectedness is that c noticed that a lied to b about h, but was suspicious about a's statement. Furthermore, c's overhearing and suspicion are common knowledge to all three of them. This seems a bit too strong. To get around this, it is convenient to assume that all agents in our models are either speakers or listeners. The key validity for this kind of lies is (van Ditmarsch et al. (2012)):

[¡_a φ]B_b ψ ↔ (¬B_a φ → B_b [!φ]ψ)

It expresses that b believing ψ after a lie that φ amounts to the following: if the liar does not believe φ, then b believes that after a truthful announcement that φ, ψ holds. If we focus on the effects of lies, we can further assume that all agents are listeners. Thus liars can be regarded as outside speakers/observers, and the two preconditions (liars' disbeliefs and intentions) of lying can be removed from our framework. Let us reconsider the coin tossing example with two coins. This time an outside speaker lies to a and b that the two coins landed differently (p ↔ ¬q); the pictures of the model before and after the lie that p ↔ ¬q illustrate the change. The statement p ↔ ¬q makes a and b falsely believe that the coins tossed by them landed differently. Furthermore, it becomes a common belief. Note that p ↔ ¬q would be a public announcement if the evaluation point were a world where exactly one of p and q is true. van Ditmarsch et al. (2012) suggest that this kind of update is a generalization of public announcement: if the announcement is true, it is a public announcement, and if it is false, the public is deceived into taking the announcement to be true. This kind of lying announcement is referred to as a public lie, and the outside liar is considered by van Ditmarsch (2014) as a malevolent agent who always tells falsehoods. The key validity for public lies is:

[¡φ]B_a ψ ↔ (¬φ → B_a [!φ]ψ)

It expresses that believing ψ after a lie that φ is equivalent to the following implication: ¬φ entails the belief that a public announcement of φ implies ψ. However, this kind of public lie assumes that the audience is credulous enough to accept any statement, even statements that contradict what the audience believes. Thus, by this axiom, after an unbelievable public lie, the audience will believe anything. Since this is not very realistic, in the next section we will turn our attention to more cautious audiences.
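The arrow-cutting effect of such a credulous lie can be illustrated with a small Python sketch. The encoding and the helper name lie are ours, and the example reuses the two-coin model from above:

```python
# Sketch of the credulous public lie update discussed above (our reading of
# van Ditmarsch et al. (2012), not code from the paper): every listener's
# arrows into worlds falsifying the announced formula are cut, while the
# actual world itself is kept, so listeners may end up believing a falsehood.

def lie(worlds, rel, holds):
    """holds: predicate world -> bool for the announced formula."""
    return {a: {(w, v) for (w, v) in pairs if holds(v)}
            for a, pairs in rel.items()}

worlds = {"pq", "pQ", "Pq", "PQ"}                 # capital letter = atom false
rel = {"a": {(w, v) for w in worlds for v in worlds if ("p" in w) == ("p" in v)},
       "b": {(w, v) for w in worlds for v in worlds if ("q" in w) == ("q" in v)}}

# Lie that the coins landed differently: p <-> not q.
differ = lambda w: ("p" in w) != ("q" in w)
rel_after = lie(worlds, rel, differ)
# From the actual world pq, agent a now only considers pQ possible:
print({v for (w, v) in rel_after["a"] if w == "pq"})   # {'pQ'}
```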
In this section we extend the framework to public lying. As mentioned earlier, in order to deal with the effects of lies one has to shift from belief based on S5 models to belief based on KD45 models. The epistemic state now means unshakable consistent conviction. A KD45 model is a model where all accessibility relations are serial, transitive and euclidean. A KD45 model looks like an octopus, for a KD45 relation R can be viewed as a union of two relations R_1 and R_2, where R_1 = {(x, y) ∈ R | (y, x) ∉ R} and R_2 = R − R_1. The R_1 part is the set of tentacles into the body of the octopus, and the R_2 part (the body of the octopus) is an equivalence relation. (Someone might say that a KD45 model looks more like a coronavirus - a body with spikes - but we prefer the less scary octopus image.)

Definition 1 A belief model is a tuple M = (W, R, V) where W is a nonempty set of worlds, each accessibility relation R_a is serial, transitive and euclidean (a D45 relation), and V is a valuation on W.

A belief model is thus a model where the belief-cell of a at w need not include w. Given world w and agent a, the belief-cell of a, written as R_a(w), is the set of worlds that are a-accessible from w. Note that w need not be in R_a(w). If R_a is serial and euclidean, then R_a restricted to R_a(w) is an equivalence relation. Indeed, it follows from seriality of R_a that R_a(w) is non-empty. Next, assume (w, u) and (w, v) are both in R_a. We have to show that (u, v) ∈ R_a. But this follows immediately from euclideanness of R_a.

In order to incorporate public lies and their effects, we turn our attention to belief operators. Our basic language includes belief operators and their converse operators. Let Ag be a finite set of agents, and let Prop be a set of propositional variables. The basic language is given by the following BNF-form:

φ ::= p | ¬φ | φ ∧ φ | B_a φ | B̄_a φ

∨, → and ↔ are defined as usual. B_a φ can be interpreted as "a is convinced of the truth of φ" or "a is certain of φ", in the sense that no new information will change this conviction. B̄_a is the converse of B_a, and B̄_a φ can be read as "if a's conviction is true, then she knows φ". We introduce these converse operators mainly for technical reasons, to ensure that the language is expressive enough to describe belief-cells. B̂_a abbreviates ¬B_a¬, and similarly for B̄_a.

Definition 2 Let M = (W, R, V) be a belief model. The key clauses of the truth conditions, for the belief operators and their converses, are:

M, w ⊨ B_a φ iff M, v ⊨ φ for all v with w R_a v
M, w ⊨ B̄_a φ iff M, v ⊨ φ for all v with v R_a w

We use [[φ]]_M for the set {w ∈ W | M, w ⊨ φ} as usual, and omit the index M if it is clear from the context. Note that we can use B_a B̄_a as the context or background knowledge operator K_a, which, as we will see later, cannot be affected by public lies nor by recoveries. Similar to typical S5 knowledge operators, we can derive the truth condition for K_a, and we use K̂_a φ for ¬K_a¬φ. Note that if B_a would satisfy weaker conditions, this would also affect the conditions for knowledge. For instance, if B_a only satisfies D, then the background knowledge K_a would satisfy only principle T. The reader can also check that if the system for B_a is D4.2, that is serial, transitive and convergent, then K_a satisfies the knowledge principles of the system S4.2 proposed by Stalnaker (2006). It is also worth noting that our language is more expressive than the language of standard epistemic and doxastic logic because, for instance, B̄_a⊥-worlds are outside the belief-cells of a, so that truth of B̄_a⊥ basically says that a's conviction is false.
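The D45 requirements and the notion of a belief-cell are easy to experiment with. The following Python sketch (ours, using the deceived agent b from the private-lie example above) checks the three properties and computes a belief-cell:

```python
# Assumed helpers (not from the paper) for checking D45 properties of a
# relation R, given as a set of (w, v) pairs, and for computing R_a(w).

def is_serial(R, W):
    return all(any((w, v) in R for v in W) for w in W)

def is_transitive(R):
    return all((w, u) in R for (w, v) in R for (v2, u) in R if v == v2)

def is_euclidean(R):
    return all((u, v) in R for (w, u) in R for (w2, v) in R if w == w2)

def belief_cell(R, w):
    return {v for (u, v) in R if u == w}

W = {"h", "t"}                      # heads / tails
R_b = {("h", "t"), ("t", "t")}      # b was deceived: all arrows point to t
print(is_serial(R_b, W), is_transitive(R_b), is_euclidean(R_b))  # True True True
print(belief_cell(R_b, "h"))        # {'t'}: at the actual world h, b believes t
```

Note how the actual world h lies outside b's belief-cell, the situation the converse operator B̄_b is designed to describe.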
We can also define knowledge-cells of a as follows:

[w]_a = {v ∈ W | there is a u with w R_a u and v R_a u},

that is, [w]_a = (R_a ∘ R_a^{-1})(w). It is easy to check that R_a ∘ R_a^{-1} is reflexive and symmetric. Before proving transitivity, we show the following lemma, stating that worlds in a knowledge-cell share the same belief-cell, which parallels the positive introspection principle B_a φ → K_a B_a φ discussed by Stalnaker (2006).

Lemma 4 Let M = (W, R, V) be a belief model, let a ∈ Ag and let v ∈ [w]_a. Then R_a(v) = R_a(w).

Proof Since v ∈ [w]_a, there is a u with w R_a u and v R_a u. Consider any w' ∈ R_a(w). Because R_a is euclidean, we have u R_a w', and then by transitivity of R_a and v R_a u we have v R_a w', i.e., w' ∈ R_a(v). This shows R_a(w) ⊆ R_a(v); the converse inclusion follows by symmetry of the argument.

Now suppose v ∈ [w]_a and u ∈ [v]_a. Using the above lemma we have R_a(w) = R_a(v), so a witness for u ∈ [v]_a is also a witness for u ∈ [w]_a, which implies the transitivity of R_a ∘ R_a^{-1}. Recall that R_a ∘ R_a^{-1} is reflexive and symmetric. It follows that R_a ∘ R_a^{-1} is an equivalence relation, and [w]_a is a knowledge-cell for our background knowledge operator K_a, as in standard epistemic logic.

The calculus CBL (for converse belief logic) is given by the KD45 axioms and rules for the operators B_a, the K axiom and the necessitation rule for the converse operators B̄_a, and the converse axioms

(BC) φ → B_a¬B̄_a¬φ
(CB) φ → B̄_a¬B_a¬φ

Axioms (BC) and (CB) are the usual converse axioms.

Theorem 5 The calculus CBL is sound and complete for belief models.

Proof Note that the KD45 system for standard doxastic logic is a sub-logic of this calculus, and is complete for belief models. Thus this theorem immediately follows from the completeness result for the converse axioms on bidirectional frames (Corollary 4.36 of Blackburn et al. (2001), Chapter 4).

Using this result it is easy to check the validity of the following formulas:

(1) (B_a φ ∧ ¬φ) → B̄_a⊥
(2) (¬B̄_a⊥ ∧ B̄_a φ) → B_a φ

(1) says that if agent a's convictions are false, then the real world is not a-accessible. (2) says that if a's beliefs are true and φ is her converse belief, then she is certain of φ. However, note that B̄_a φ → B_a φ is not valid: the real world may be outside a's belief-cells while a does not believe φ, in which case B̄_a φ is (vacuously) true and B_a φ is false.

In this subsection we model the effects of public lies on the convictions of an audience. As mentioned earlier, the traditional definition of lying requires that (1) the liar disbelieves the statement and (2) the liar intends to make the listener believe the statement. We will briefly discuss (1) at the end of Sect. 5, but since our formal machinery does not allow us to model intentions, we have no way of giving an account of (2). Because the credulous listeners of public lies at the end of the previous section are unrealistic, we turn our attention to cautious audiences. Our cautious audience assumption (Kooi and Renne (2011)) for conviction change is that an agent accepts statements consistent with her convictions, and rejects those she is certain are false. Not all public lies have such effects, and we focus on those that have. Brainwashing, for instance, can be regarded as a tactic of public lying that influences the convictions of its target audience by means of a repeated stream of public lies accompanied by suppression of diverging opinions. We will not look into the complex structure of brainwashing techniques in this paper. We simply treat brainwashing as public lying that has an effect on the convictions of its audience. Steiner (2006) studied belief change of audiences that react to announcements in accordance with the cautious audience assumption. Kooi and Renne (2011) showed that Steiner's system is a special case of their theory, and they call this kind of audience cautious. The effects of lying to a cautious audience, along with two other types of agents, are also modelled by van Ditmarsch (2014), again by means of action models. Instead of using the word "cautious", he calls such listeners skeptical and adds another precondition for lying: the skeptical listener "considers it possible that the speaker believes [the statement]".
However, since public lies and especially brainwashing are usually performed by powerful (and perhaps insane) people, it is hard for the audience to understand what is going on in their heads, let alone to remain skeptical if the statement looks authentic. Therefore we will restrict our attention to cautious audiences. A successful public lie that ¬φ will cut the accessibility links of the audience to the real world (where φ is true), which is modelled as the relational change from c to (?φ; c; ?¬φ) ∪ (?¬φ; c; ?¬φ). We use a dynamic update operator [‡φ] for public lying about φ; [‡φ]ψ can be interpreted as "after publicly lying that φ, ψ becomes true". The key validity for public lying is:

[‡φ]B_a ψ ↔ ((¬B_a¬φ → B_a(φ → [‡φ]ψ)) ∧ (B_a¬φ → B_a[‡φ]ψ))

This formula is a reformulation of axioms (A4) and (A5) for KD45 belief change given by Steiner (2006). In the corresponding picture, where M‡φ is the model updated after the public lie that φ, the formula [‡φ]B_a ψ says that, in M‡φ, all worlds t that are a-accessible from s satisfy ψ. However, as this update was originally treated as belief change in Steiner (2006), it can also be interpreted as a truthful public announcement if φ is believed by the speaker. Because the precondition on the speaker's epistemic state is abstracted away in our modelling (until the end of Sect. 5), public lies and truthful public announcements become the same update mechanism, which should not be surprising, as it is a reflection of the difficulty of detecting lies. Even though we will use [‡φ] both for public lying about φ and for the truthful announcement of φ, we will call it the public lying update for convenience. Nevertheless, we may still define public announcement consistent with the cautious audience assumption (with a slight abuse of notation):

[!φ]ψ := φ → [‡φ]ψ

The language L_PL for public lies is the basic language plus the operators [‡φ], given by the following BNF-form:

φ ::= p | ¬φ | φ ∧ φ | B_a φ | B̄_a φ | [‡φ]φ

Next, we can define the public lying update on belief models formally.

Definition 6 Let M = (W, R, V) be a belief model and let φ be an L_PL-formula. The key clause of the truth conditions for the public lying operators is: M, w ⊨ [‡φ]ψ iff M‡φ, w ⊨ ψ, where M‡φ = (W, R‡φ, V) and, for each agent a and world w,

R‡φ_a(w) = R_a(w) ∩ [[φ]]_M if this intersection is non-empty, and R‡φ_a(w) = R_a(w) otherwise.

It is easy to verify that the updated relation R‡φ_a is again serial, transitive and euclidean. However, public lies cannot influence one's background knowledge, as is shown in the following lemma.

Lemma 7 Let M = (W, R, V) be a belief model, let w ∈ W and let φ be any formula. Then the knowledge-cell of a at w in M‡φ is [w]_a.

Proof By Lemma 4, all worlds in [w]_a share the same belief-cell, so the update restricts the belief-cells of all worlds in [w]_a to the same non-empty subset (or leaves them all unchanged). Hence any two worlds in [w]_a still have a common a-successor in M‡φ, and since R‡φ_a ⊆ R_a, no new common successors are created, which completes our proof.

The calculus PLL (public lies logic) is CBL plus the following reduction axioms for [‡φ]:

(1) [‡φ]p ↔ p
(2) [‡φ]¬ψ ↔ ¬[‡φ]ψ
(3) [‡φ](ψ ∧ χ) ↔ ([‡φ]ψ ∧ [‡φ]χ)
(4) [‡φ]B_a ψ ↔ ((¬B_a¬φ → B_a(φ → [‡φ]ψ)) ∧ (B_a¬φ → B_a[‡φ]ψ))
(5) [‡φ]B̄_a ψ ↔ ((φ → B̄_a[‡φ]ψ) ∧ (¬φ → B̄_a(B_a¬φ → [‡φ]ψ)))

The intuition of axiom (5) is that if after [‡φ] agent a's convictions will still be true, then it is necessary that either φ is true, or a is certain of not φ.

Theorem 8 The calculus PLL is sound and complete for belief models.

Proof Because the new axioms are reduction axioms, it suffices to show that they are sound. It is easy to check that axiom (4) expresses the public lying update at the syntactic level. We only illustrate the soundness of axiom (5). Let M = (W, R, V) be a belief model, let w ∈ W and let φ, ψ be any L_PL-formulas. Suppose M, w ⊨ φ. Then for any v with v R_a w we have w ∈ R_a(v) ∩ [[φ]], so v R‡φ_a w; conversely, R‡φ_a ⊆ R_a, so the set of worlds from which w is a-accessible is unchanged, and the first conjunct is faithful. Suppose M, w ⊨ ¬φ, and consider any v with v R_a w. If there is a u ∈ R_a(v) such that M, u ⊨ φ, then R‡φ_a(v) = R_a(v) ∩ [[φ]] and w is cut off; if there is no such u, i.e., M, v ⊨ B_a¬φ, then R‡φ_a(v) = R_a(v) and v R‡φ_a w. This is exactly what the second conjunct says.

The case of a cautious audience accepting statements consistent with their convictions is formally captured by the following proposition.

Proposition 9 Let a ∈ Ag be any agent and let φ be any boolean formula. Then ⊢_PLL ¬B_a¬φ → [‡φ]B_a φ.

Proof Straightforward, by the completeness of calculus PLL and Definition 6.
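A minimal Python sketch of this cautious update, under our reading of Definition 6 (the helper name lie_cautious is ours), is the following:

```python
# Sketch of the cautious public lying update [‡φ]: an agent conditions her
# belief-cell on φ when that is consistent with her convictions, and
# ignores the announcement otherwise.

def lie_cautious(R, W, holds):
    """R: one agent's accessibility relation; holds: world -> bool for φ."""
    new_R = set()
    for w in W:
        cell = {v for (u, v) in R if u == w}
        accepted = {v for v in cell if holds(v)}
        new_R |= {(w, v) for v in (accepted if accepted else cell)}
    return new_R

W = {"h", "t"}
R_a = {(w, v) for w in W for v in W}             # a is ignorant of the coin
print(lie_cautious(R_a, W, lambda w: w == "t"))  # a now only considers t
print(lie_cautious({("h", "h"), ("t", "h")}, W, lambda w: w == "t"))
# a was certain of h: the lie that t is rejected, the relation is unchanged
```

The two calls illustrate the two cases of the cautious audience assumption: acceptance of a consistent statement, and rejection of a statement the agent is certain is false.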
The following proposition shows that if a statement φ has been accepted, a further statement of ¬φ cannot retract it.

Proposition 10 Let a ∈ Ag be any agent and let φ be any boolean formula. Then ⊢_PLL B_a φ → [‡¬φ]B_a φ.

Proof By induction on the construction of φ and axioms (1–3), we have ⊢_PLL φ ↔ [‡¬φ]φ. Hence ⊢_PLL B_a φ → B_a[‡¬φ]φ. Using axiom (4) we obtain ⊢_PLL B_a φ → [‡¬φ]B_a φ.

Using Proposition 9 it is trivial to show that, for each boolean formula φ,

⊢_PLL ⋀_{a ∈ Ag}(B̂_a φ ∧ B̂_a¬φ) → [‡φ] ⋀_{a ∈ Ag} B_a φ.

That is, if the audience all consider both φ and ¬φ possible, then after announcing φ, φ will become a mutual conviction. By Proposition 10, further announcements cannot retract this mutual belief. Thus it really matters which is announced first, a public lie or a truthful public announcement. Note that this does not hold anymore if instead of mutual conviction we consider common belief, for some of the audience may consider it possible that someone else is certain of ¬φ before the announcement of φ and will not be affected by the operation ‡φ.

In this section, instead of addressing the important but difficult question of how public lies can be detected, we focus on a simpler question: what does the process of recovery from false beliefs look like? When Donald Trump stated that the practice of voting by mail could lead to voter fraud, an FBI official testified that "there is no evidence of a coordinated mail-in voting fraud effort". In this example, the key to recovery is the statement "there is no evidence of ¬φ". We wish to model this retraction of the false belief ¬φ, the public opening of the mind for φ again, or, in still other words, the common realization that φ might be true. Adopting the notation ⇑φ for this, the sequence ⇑φ; !φ models the recovery from the false belief ¬φ followed by a public update with φ. Compared to contraction in AGM belief revision (see Gärdenfors (2003)), ⇑φ does not contract ¬φ from the audience's belief sets, but contracts the evidence supporting ¬φ, which leaves both φ and ¬φ possibly true. This effect is similar to forgetting (cf. van Ditmarsch et al. (2009)), with the difference that our operation does not erase all evidence either way. Erasure of all evidence for or against φ would result in the belief B̂_a φ ∧ B̂_a¬φ. The effect of ⇑φ, however, is to convince a that B̂_a φ is true. Thus if a is certain of φ, ⇑φ would not make her believe B̂_a¬φ. Another related topic is the discussion of reverse public announcement in Balbiani et al. (2016) and Haney (2018). The converse announcement update proposed by Balbiani et al. can be seen as a kind of recovery from public announcement, but it is not deterministic. Haney uses worlds in a canonical model to expand epistemic models in reverse public announcement updates. In contrast with this, we use the worlds in an agent's background knowledge to expand belief-cells in recoveries.

Suppose a current belief state is given by the relation c. Then the act of recovering from the false belief ¬φ is given by the relational change

c := c ∪ (c; c⁻¹; ?φ)

Explanation: we need to put φ-situations back into an agent's consideration. If you are in a situation s where φ is disbelieved, then you can recover the connection to any disbelieved φ-situation t by first taking a c step into the body of the octopus, and next taking a reverse c step out of the octopus again to t. This relational change also affects those who are certain of φ, for the act implicitly invites people to examine all φ-situations. We have the following key formula for the recovery from the false belief that ¬p:

[⇑p]B_a q ↔ (B_a q ∧ K_a(p → q))

What this says is this.
After the recovery from the false belief that ¬p, B_a q holds precisely if a is certain of q and knows that if p holds then q is true. Note that this opens the way for an axiomatization with reduction axioms. We can also define public recovery !⇑φ using the operation ⇑φ with the precondition that φ is actually true:

[!⇑φ]ψ := φ → [⇑φ]ψ

A truthful recovery ⇑φ is an operation ⇑φ with the precondition that the speaker is certain of φ, and a lying recovery ⇑φ is an operation ⇑φ with the precondition that the speaker is certain of ¬φ. Since the speaker's belief state is abstracted away in our modelling, we use the operation ⇑φ for both truthful and lying recovery, and simply call it recovery. A successful recovery ⇑φ for agent a means that a was previously convinced of ¬φ, and after the recovery she believes that φ possibly holds. However, using axiom (B4) we know that B_a¬φ implies B_a¬B̂_a φ, and since we assume agents are always cautious about influences on their convictions, how can a accept B̂_a φ? An explanation is that the use of axiom (B4) invokes introspection, which is perhaps too strong in real life: even if someone is aware that ¬B̂_a φ is part of his conviction, he may occasionally let it slip away. Thus, if "there is no evidence that φ is false" is announced when he is in a less conscious state, he might think that, since B̂_a φ is consistent with ¬φ, he may just as well accept it. In any case, recovery is never an easy task. It is usually performed by authorities or by people trusted by the audience, and it requires explanation, persuasion or repetition, all of which are abstracted away in our framework.

The language L_LR for public lies and recoveries is the language L_PL plus the operators [⇑φ], which gives the following BNF definition:

φ ::= p | ¬φ | φ ∧ φ | B_a φ | B̄_a φ | [‡φ]φ | [⇑φ]φ

[⇑φ] can be read as "there is no evidence that φ is false", and [⇑φ]ψ can be interpreted as "after the announcement that there is no evidence that φ is false, ψ becomes true". The key clause of the truth conditions for the recovery operators [⇑φ] is given below.

Definition 11 Let M = (W, R, V) be a belief model, and let φ be an L_LR-formula. Then M, w ⊨ [⇑φ]ψ iff M⇑φ, w ⊨ ψ, where M⇑φ = (W, R⇑φ, V) with, for each agent a,

R⇑φ_a = R_a ∪ (R_a; R_a⁻¹; ?φ)

Note that the updated relations R⇑φ_a are still serial, transitive and euclidean. Note also that neither public lies nor recoveries change one's background knowledge. This is borne out by the following lemma.

Lemma 12 Let M = (W, R, V) be a belief model, let w ∈ W and let φ be any formula. Then the knowledge-cell of a at w in M⇑φ is [w]_a.

Proof Suppose w and u have a common R⇑φ_a-successor v. Using Definition 11, we have either w R_a v (which implies v ∈ [w]_a) or v ∈ [w]_a directly, and similarly v must be in [u]_a. It follows that w and u are in the same knowledge-cell in M. The converse direction is immediate, since R_a ⊆ R⇑φ_a.

The calculus LRL (lying and recovery logic) is PLL plus the following reduction axioms for [⇑φ]:

(1) [⇑φ]p ↔ p
(2) [⇑φ]¬ψ ↔ ¬[⇑φ]ψ
(3) [⇑φ](ψ ∧ χ) ↔ ([⇑φ]ψ ∧ [⇑φ]χ)
(4) [⇑φ]B_a ψ ↔ (B_a[⇑φ]ψ ∧ K_a(φ → [⇑φ]ψ))
(5) [⇑φ]B̄_a ψ ↔ ((φ ∨ ¬B̄_a⊥) → K_a[⇑φ]ψ)

(4) parallels the recovery update ⇑φ of R_a in Definition 11. (5) describes the equivalent condition for [⇑φ]B̄_a ψ, and can be read as "if either φ is true or a's convictions are true before the recovery, then a knows that ψ will be true after the recovery ⇑φ".

Theorem 13 The calculus LRL is sound and complete for belief models.

Proof Since calculus LRL is PLL plus the reduction axioms (1–5), it suffices to show that these axioms are sound. We only prove the soundness of (4) and (5). Let M = (W, R, V) be a belief model, let w ∈ W and let φ, ψ be any L_LR-formulas. First consider (4). M, w ⊨ [⇑φ]B_a ψ iff all R⇑φ_a-successors of w satisfy [⇑φ]ψ iff, by Definition 11, all worlds in R_a(w) and all φ-worlds in [w]_a satisfy [⇑φ]ψ, iff M, w ⊨ B_a[⇑φ]ψ ∧ K_a(φ → [⇑φ]ψ). Next consider (5). From left to right: suppose M, w ⊨ [⇑φ]B̄_a ψ and M, w ⊨ φ ∨ ¬B̄_a⊥. We have either M, w ⊨ φ or w ∈ R_a(w). Consider any v ∈ [w]_a. If M, w ⊨ φ, then using Definition 11 we obtain v R⇑φ_a w. If w ∈ R_a(w), then by Lemma 4 we know that v R_a w, and hence using Definition 11 again we have v R⇑φ_a w. In both cases M, v ⊨ [⇑φ]ψ, and thus M, w ⊨ K_a[⇑φ]ψ. From right to left the reasoning is similar, using the observation that every world from which w is R⇑φ_a-accessible lies in [w]_a.
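The recovery update of Definition 11 can be sketched as follows. The Python helper recover is ours, and the example continues the coin scenario in which a was deceived into believing tails:

```python
# Sketch of the recovery update [⇑φ] as reconstructed above: R_a is extended
# with (R_a ; R_a⁻¹ ; ?φ), so the φ-worlds in the same knowledge-cell become
# accessible again.

def recover(R, holds):
    extra = {(w, t) for (w, u) in R for (t, u2) in R
             if u == u2 and holds(t)}
    return R | extra

R_a = {("h", "t"), ("t", "t")}              # after the lie: a believes t
R_rec = recover(R_a, lambda w: w == "h")    # "there is no evidence against h"
print(R_rec)   # {('h','t'), ('t','t'), ('h','h'), ('t','h')}: h is back in play
```

After the recovery, a considers both h and t possible again, so a subsequent truthful announcement of h will be accepted by the cautious update.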
The following proposition illustrates that the sequence ⇑p; !p indeed helps the audience recover from the lie that ¬p and convinces them that p.

Proposition 14 Let a ∈ Ag be any agent and let φ be any boolean formula. Then (1) ⊢_LRL K̂_a φ → [⇑φ][‡φ]B_a φ, and (2) ⊢_LRL K̂_a φ → [‡¬φ][⇑φ][‡φ]B_a φ.

Proof (1) follows from the reduction axioms and the completeness of LRL: after ⇑φ the belief-cell of a contains all φ-worlds of [w]_a, so ¬B_a¬φ holds, and Proposition 9 applies. (2) immediately follows from (1) if we can prove ⊢_LRL K̂_a φ → [‡¬φ]K̂_a φ. This is straightforward from Lemma 7 and Theorem 13.

Note that this proposition cannot be generalized to arbitrary formulas. For instance, the instance with φ := B̄_a⊥ is not valid in LRL, for B_a B̄_a⊥ (which is K_a⊥) is always false in belief models.

As an example of the effects of recoveries, suppose a tribe is facing the coordination game Stag Hunt. Everyone in the tribe has two options: to hunt for a stag together (STAG) or to catch a hare by themselves (HARE). Those who go hunting for hares can each get one hare, but it is better for the tribe to hunt stag together, as a stag provides much more food for everyone. The snag is that for successful stag-hunting everyone has to join in. If anyone abandons the joint task, the hunt fails. Thus there are two equilibria for the tribe: STAG and HARE. Now, to twist this into our own story, suppose that a priestess has decided that the omens are auspicious for stag hunting, and she has ordered two elders to convey her message. As it turns out, one of the elders is dishonest and the other one is honest. The dishonest elder publicly lies that the decision is HARE. Let p and ¬p stand for "the decision is STAG" and "the decision is HARE" respectively. Suppose everyone is fooled by the dishonest elder into believing that the priestess has decided HARE. Then a cure (the sequence ⇑p; !p) is to first claim that there is no evidence that the priestess has made up her mind to hunt for hares, and then to announce that the priestess' decision is actually STAG. By means of these two steps the tribe can reach a mutual conviction of STAG, as the above proposition implies: we get ⊢_LRL K̂_a p → [‡¬p][⇑p][‡p]B_a p for every tribesman a. However, the proposition also suggests a more vicious form of "public lying", executed in the very same sequence ⇑¬p; ‡¬p. This tactic can be seen in real life, for instance in cults proclaiming that "science is just another religion" before preaching their own doctrines.

In the stag hunt game, cooperation requires conventions or common knowledge, which one may assume to be acquired by a public announcement or a public event. But things are more complicated in real life. Chwe (2013) emphasizes the importance of common experience, where everyone is seeing the reactions of the rest of the audience; this makes it crucial to have public gatherings. Monderer and Samet (1989) consider cases where it is probable that not everyone hears a communication, and prove that in those situations common p-beliefs can approximate common knowledge, where p is a probability. Binmore (2008) suggests that "most conventions arise gradually and acquire force by a slow progression", and thus that not all conventions need to be common knowledge; some of them may be the product of social evolution. Because of these considerations it becomes important to investigate the effects of public lies and recoveries on subjective probabilities instead of on KD45 beliefs. In this section, we take steps in that direction. We will focus on a kind of very simple belief operator P_a: P_a φ is true if a's subjective probability of φ is greater than 0.5, that is, if a is willing to bet on φ.
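A minimal sketch of such a threshold belief, under the simplifying assumption (ours, not the paper's) of a uniform subjective probability over a belief-cell:

```python
# Lockean belief as a probability threshold: a is willing to bet on φ iff
# more than half of the worlds she considers possible satisfy φ.
# The uniform distribution over the cell is an assumption for illustration.

def lockean_belief(cell, holds, threshold=0.5):
    if not cell:
        return False
    return sum(1 for w in cell if holds(w)) / len(cell) > threshold

cell = {"w1", "w2", "w3"}                           # hypothetical belief-cell
rain = lambda w: w in {"w1", "w2"}                  # rains in two of three worlds
print(lockean_belief(cell, rain))                   # True: P_a(rain)
print(lockean_belief(cell, lambda w: not rain(w)))  # False: not both a set and
                                                    # its complement are believed
```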
These 0.5-beliefs (Monderer and Samet (1989)) are related to what Foley (1992) calls the Lockean thesis, and we will call them Lockean beliefs. For related work, see Hamblin (1959), Burgess (1969) and Herzig (2003). Herzig and Longin (2003) give a system of Lockean beliefs and KD45 beliefs on neighbourhood models. Ghosh and de Jongh (2013) present, among many other systems, a logic of these two kinds of beliefs for plausibility models. Lockean belief operators and probability models are discussed by van Eijck and Renne (2016). Many of our daily decisions are based on this kind of belief. For instance, if it has been raining for days, Bob may think that it is likely to rain tomorrow. If the weather forecast says that it will not rain, he may believe that there is no need to take an umbrella to work tomorrow. Neither of these beliefs is a KD45 belief. Since the weather forecast only provides predictions, the belief based on the forecast should be interpreted as a conditional belief, just like the belief based on the observation of rain today, which is also a conditional belief, with the observation of the recent state of the weather as an implicit condition. Conditional beliefs can be interpreted on plausibility models (cf. Baltag and Smets (2006), Pacuit (2013)); Demey (2013) studies public announcements on such models. Another way to represent conditional beliefs is by means of neighbourhood functions, as in van Eijck and Li (2017) and Girlando et al. (2018). If A is the proposition that it will rain tomorrow, then Bob's belief in A can be represented by putting A in his neighbourhood N_b. If B is the proposition that the weather forecast says it will not rain tomorrow, we can put the proposition that Bob will not take an umbrella (C) in his neighbourhood with condition B (C ∈ N_b(B)).

Assume p ranges over a set of proposition letters Prop, and a ∈ Ag. The language L_CN for conditional neighbourhood logic is our basic language plus the binary operators C_a, given by the following BNF definition:

φ ::= p | ¬φ | φ ∧ φ | B_a φ | B̄_a φ | C_a(φ, φ)

C_a(φ, ψ) can be read as "assuming φ, agent a is willing to bet ψ against ¬ψ". Let Ag be a finite set of agents. A conditional neighbourhood model M is a tuple (W, R, N, V) where

- (W, R, V) is a belief model;
- N : Ag × W × P(W) → PP(W) is a function that assigns to every agent a ∈ Ag, every world w ∈ W and every set of worlds X ⊆ W a collection N^w_a(X) of sets of worlds - each such set called a neighbourhood of X - subject to the conditions (c), (ec), (d) and (sc) below.

We call N a neighbourhood function; a neighbourhood N^w_a(X) for agent a in w, conditioned by X, is a set of propositions each of which agent a believes to be more likely true than its complement. Property (c) expresses that what is believed is also known; (ec) expresses equivalence of conditions, i.e., if an agent knows that two conditions are equivalent, then the agent's beliefs are the same under both conditions; (d) expresses "determinacy": an agent does not believe both a proposition and its complement (if Y ∈ N^w_a(X), then W − Y ∉ N^w_a(X)); (sc) expresses a form of "strong commitment": if the agent does not believe the complement of Y, then she must believe any weaker Z implied by Y. It was proved in van Eijck and Li (2017) that neighbourhood functions also satisfy properties (m), (ni) and (∅), for any a ∈ Ag, w ∈ W, X ⊆ W, where (m) and (ni) express monotonicity and no-inconsistency (an agent does not hold an inconsistent belief) respectively, and (∅) expresses that conditioning with information that contradicts what the agent knows will cause the agent to believe nothing anymore.
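To see how such a neighbourhood function can arise, here is a Python sketch (ours, a single-agent simplification rather than the paper's full definition) in which neighbourhoods are induced by a subjective probability, so that the neighbourhood of X collects the propositions more likely than their complements given X; determinacy (d) and monotonicity (m) then hold by construction:

```python
# A conditional neighbourhood induced by a (hypothetical) subjective
# probability: N(X) = all Y with P(Y | X) > 1/2.

from itertools import chain, combinations

W = ["u", "v", "x"]
prob = {"u": 0.5, "v": 0.3, "x": 0.2}           # hypothetical priors

def powerset(s):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def neighbourhood(X):
    """All Y with P(Y | X) > 1/2, i.e. the propositions believed given X."""
    pX = sum(prob[w] for w in X)
    if pX == 0:
        return set()                # conditioning on a contradiction: cf. (∅)
    return {Y for Y in powerset(W)
            if sum(prob[w] for w in Y & X) / pX > 0.5}

N = neighbourhood(frozenset({"u", "v"}))        # condition on {u, v}
print(frozenset({"u"}) in N)                    # True: P(u | {u,v}) = 0.625
print(frozenset({"v"}) in N)                    # False: its complement is believed
```

This probabilistic reading also motivates the threshold operator P_a of the previous section.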
Let M = (W , R, N , V ) be a conditional neighbourhood model, let w ∈ W . Then the key clauses of truth conditions are given by: As the Lockean belief P a φ is interpreted as "a is willing to bet φ", this decision on φ should be based on what a is certain of, namely her conviction. Thus using the above definition, Lockean belief operators P a can be given by: P a φ expresses our intuition that "given what a is certain of, a is willing to bet φ", and we can establish the following equivalence: We can also give a complete calculus CNL (conditional neighbourhood logic) for conditional neighbourhood models, which is calculus CBL plus the following axioms. Axiom (D) guarantees the truth of neighbourhood condition (d), (EC) would correspond to (ec), (M) to (m), (C) to (c) and (SC) to (sc). Using Theorem 5 and the completeness result in van Eijck and Li (2017) , we can easily derive the following completeness theorem. The calculus CNL for Conditional Neighbourhood logic given above is sound and complete for conditional neighbourhood models. After adding public lying operators and recovery operators into L C N , we can also define public lies and recoveries for conditional neighbourhood models. Definition 17 let M = (W , R, N , V ) be a conditional neighbourhood model, and let φ be any formula. Note that this definition relies on the fact that public lies and recoveries have no effect on an agent's background knowledge. Also note that the definition of the neighbourhood function does not depend on KD45 beliefs. The truth conditions for our two kinds of dynamic operators are defined as usual. This "dynamic version" of CNL (let's call it calculus CND) is CNL plus PLL and the following two reduction axioms for C a . Theorem 18 Calculus CND is sound and complete for conditional neighbourhood models. iff, by Lemma 7 and Definition 17, For axiom ( C) similarly using Lemma 12 and Definition 17, we can obtain that Our next proposition shows that every conviction φ is also a Lockean belief. For each agent a ∈ Ag and each formula φ ∈ L C N , C N D B a (φ) → P a (φ). Proof By (N)* we know that for each world w in the domain, R a (w) is always in the neighbourhood N w a (R a (w)). Thus the formula, which is equivalent to B a (φ) → C a (¬B a ⊥, φ), is valid. Use the completeness of CND to obtain its derivability. Using the completeness result of CND, we can show that the effects of public lying on Lockean beliefs are similar to the effects on convictions, i.e., if a's convictions are consistent with a boolean formula φ, then publicly lying that φ would make a become willing to bet φ against its negation, and it cannot be undone by announcing not φ. To see why it holds, first notice that by Proposition 9 and 10 we have φ) , and using Proposition 19 we can establish Can we also deduce that if the truth ¬φ is the first to be publicly announced, then public lying φ will not affect one's Lockean beliefs? Yes, if ¬φ can truly be publicly announced. However there is a difference between the facts we can observe and the propositions inferred from those facts. What is inferred from a fact may not be a fact, and thus such inferred propositions may not be announced by truth tellers. But usually it is the inferred propositions that really matter for one's decision, and people may infer differently from truth tellers. The importance of "carefully reasoned prior probability" was already mentioned by MacKay (2003) . 
The movie The Big Short, which tells the story of the unfolding of the financial crisis, provides another illustration of this. Bear Stearns stock has fallen more than 38% and everyone is pessimistic. Well, almost everyone, for Bruce Miller, a bullish investor, still believes that he should buy more stock. Because truth tellers cannot announce their conclusions, like "Bear Stearns will go bankrupt", as facts, they will not be able to persuade Miller. But liars have no such restriction: they can announce "Bear Stearns will not go bankrupt" as a fact. Thus liars can directly affect their audience's Lockean beliefs, while truth tellers can only hope that, when presented with the facts, their audience will draw the appropriate conclusions. This gives liars an advantage over truth tellers.

To model this, we treat the announcements by liars and truth tellers as agent announcements, as in van Ditmarsch (2014). We assume that the liar and the truth teller have the same epistemic status, and use e (one of the elders in the tribe) for either of them. Truthful public announcement [!φ]ψ (with the precondition that the announced proposition is what the truth teller is convinced of) is given by:

[!φ]ψ := B_e φ → [‡φ]ψ

Thus the ideal process of rational investigation described by MacKay can be approximated as follows. After the evidence D is announced, each agent a will examine whether P_a S is true. If the truth teller is certain that from the evidence D it is rational and scientific to infer that S is more likely, then naturally by the act !D she also expects that each agent a will endorse P_a S (until she is confronted by people like Miller who do not accept the evidence). Untruthful announcement [¡φ]ψ (which is either lying or bluffing in van Ditmarsch (2014)) is defined as:

[¡φ]ψ := ¬B_e φ → [‡φ]ψ

Note that the precondition ¬B_e φ differs from the requirement of the traditional definition of lying, namely that the liar disbelieves the statement, which is expressed by B_e¬φ.

Proposition 20 Let φ be any boolean formula, and let a ∈ Ag be any agent different from e. Then ⊢_CND ¬B_e φ ∧ B̂_a φ → [¡φ]P_a φ, and if ¬B_e φ is true, !φ is not executable.

Proof Immediate from Propositions 9 and 19.

To represent the invariance of R_e under the updates of public lies and recoveries, we need two reduction axioms:

[‡φ]B_e ψ ↔ B_e[‡φ]ψ and [⇑φ]B_e ψ ↔ B_e[⇑φ]ψ

The completeness proof is just routine. Proofs for the first axiom can be found in both Steiner (2006) and van Ditmarsch (2014).

As an example, let us go back to the tribe that is about to decide on a stag hunt, where there is also a gap to be bridged between "the decision (of the priestess) is STAG" (p) and "the tribe will perform STAG" (q). As q is still not settled, ¬B_e q ∧ B̂_e q holds for both elders. Suppose a tribesman, say Bob (b), believes that the tribe is so disorganized that even if the decision is STAG, they are still very likely to hunt hares instead. Thus if p is announced by the honest elder, Bob would still bet on ¬q. Usually, when trying to persuade our audience, we either state our own judgement or that of other people. So what if the honest elder announces her bet that q is true (i.e., P_e q)? This will work if C_b(P_e q, q) holds, namely if Bob accepts her Lockean belief that q, but it will not work if Bob is merely certain of p → P_e q, that is, that if p holds, the elders are willing to bet q. The following proposition illustrates the effects of announcing another agent's beliefs.

Proposition 21 Let a ∈ Ag ∪ {e} and b ∈ Ag be two distinct agents, let O be either B or P, and let φ be any boolean formula. Then ⊢_CND ¬B_b¬O_a φ → [‡O_a φ]B_b O_a φ.
Proof Using Definitions 6 and 17 and Theorem 18 we can obtain the conclusion.

Next, consider the dishonest elder, who can announce ¬q. This will result in every tribesman being convinced of ¬q. It follows that, to serve their own best interests, they all should hunt for hares, and thus ¬q will become true. Compare this with the true lies in Ågotnes et al. (2018), where a true lie is a formula φ satisfying ¬φ → [¡φ]φ; this may suggest that ¬q is a true lie. However, there is a slight difference: from a deterministic perspective, ¬q may not be false at the true history (past, present and future); it can only be false if we cannot remember our history. In our perspective, a more suitable candidate for a true lie would be the statement "it is inevitable that the tribe will hunt for hares", but the formal analysis of that is beyond our current scope.

As for recovery updates, we will show that the lying sequence ⇑φ; ‡φ is as deleterious for Lockean beliefs as it is for convictions.

Proposition 22 Let a ∈ Ag be any agent and let φ be any boolean formula. Then ⊢_CND K̂_a φ → [⇑φ][‡φ]P_a φ.

Proof Since Proposition 14 also holds for calculus CND, we know that ⊢_CND K̂_a φ → [⇑φ][‡φ]B_a φ; the conclusion then follows by Proposition 19.

With an abuse of notation, the truthful recovery [!⇑φ] and the untruthful recovery [¡⇑φ] can be introduced as follows:

[!⇑φ]ψ := B_e φ → [⇑φ]ψ and [¡⇑φ]ψ := ¬B_e φ → [⇑φ]ψ

Using the above proposition we can easily verify that:

Proposition 23 Let φ be any boolean formula, and let a ∈ Ag be any agent different from e. Then ⊢_CND ¬B_e φ ∧ K̂_a φ → [¡⇑φ][¡φ]P_a φ.

Again consider the tribal stag hunt. Recall that p and q stand for "the decision is STAG" and "the tribe will perform STAG". Using Proposition 23, it is easy to check that the lying sequence ¡⇑¬q; ¡¬q makes everyone willing to bet ¬q. Because !q is not viable for the honest elder, the best she can do is to execute the sequence !⇑p; !p. Then there will be a mutual conviction of p. However, after this, Bob, the pessimistic tribesman who holds C_b(p, ¬q), will still bet ¬q. Executing !P_e q will still be in vain if Bob knows that the elder is willing to bet q only if p holds. Our final proposition illustrates the effect of the recovery sequence when announcing the speaker's beliefs: to make the recovery sequence successful, the listener should agree with the speaker's belief.

Proposition 24 Let b ∈ Ag be any agent, let O be either B or P, and let φ be any boolean formula. Then ⊢_CND C_b(O_e φ, φ) → [⇑O_e φ][‡O_e φ]P_b φ.

Proof The proof is similar to that of Proposition 21: use Definition 17, Theorem 18 and the four reduction axioms for B_e.

This suggests that liars continue to have an advantage over truth tellers. Is there a remedy for this at all? Would imperatives like "all hunt for stag, and you go now!" prevent people from adopting unreasonable prior probabilities? Maybe we should conclude with the truism that there is no substitute for education. Note that liars also have an advantage over truth tellers in KD45 models, just as in conditional neighbourhood models, because the asymmetry between the preconditions of ! and ¡ does not rely on neighbourhood semantics. Nevertheless, we conclude this section by showing that neighbourhood semantics may be the better candidate for modelling the influence of authoritative opinions on public decisions.

Let us reconsider the previous stag hunt example informally. Suppose the message given by the priestess is so vague and ambiguous that it is common knowledge that no one knows the truth value of p (the decision of the priestess is STAG). The elders have more experience in interpreting the priestess's messages than the tribesmen. They both tend to believe that p is true, but are not certain, and they do not know each other's epistemic status.
By tradition, in this situation, after the message has been announced in the gathering there will be a vote to decide whether to perform STAG or HARE. We use s for "the tribe should perform STAG". Thus both B_a s and P_a s can express that a votes for STAG. The dishonest elder wants to manipulate the vote so that the tribe will hunt for hares. While he is not in a position to tell the tribesmen what they should do, he has two options: claiming that ¬p is a fact, or announcing that he is certain of ¬p. The former is not effective, because everyone knows the message is too vague even for him. The latter, however, can hardly be opposed, even by the honest elder, for one should be free to speak one's mind. Let a be a tribesman. We will examine under what conditions [‡B_e¬p]B_a¬s and [‡B_e¬p]P_a¬s hold respectively.

First consider [‡B_e¬p]B_a¬s. It can be checked that this formula is entailed by two propositions: B_a(B_e¬p → ¬p) and [‡B_e¬p](B_a¬p → B_a¬s). The first expresses that after learning e's conviction of ¬p, a is convinced of ¬p. The second can be interpreted as: after the announcement of the elder's conviction, a is certain of ¬p only if he is certain of ¬s. Thus, in order to manipulate the vote, the dishonest elder has to make sure that the tribesmen will blindly follow his "conviction". This is a strong condition, especially when one realizes that the elder's conviction is only his interpretation, which may be wrong.

Next consider [‡B_e¬p]P_a¬s. Using Proposition 23, it is entailed by two propositions as well: C_a(B_e¬p ∧ ¬B̄_a⊥, ¬p) and [‡B_e¬p](P_a¬p → P_a¬s). The first says that after learning e's conviction of ¬p, a is willing to bet ¬p. The second expresses that after learning e's conviction of ¬p, a is willing to bet ¬p only if he is willing to bet ¬s. In other words, a takes the elder's conviction as advice, which is common in real life. Whenever a piece of news, a policy or even a sign is important but hard to fully understand, we are likely to consult experts or read relevant analyses to form our own judgments, which in turn guide our decisions and actions. Comparison of the two cases shows that the requirements for manipulating Lockean beliefs are weaker than those for manipulating KD45 beliefs. Since public lying has to involve some form of belief manipulation, this suggests that neighbourhood semantics is perhaps more suitable than KD45 semantics for modelling the effects of public lies on public opinion.

We have modelled the effects of public lies on KD45 beliefs, and after introducing the converse belief operators, we have also provided recoveries from false beliefs, and we have axiomatized these updates. By first executing a recovery update and then a public announcement, an audience can be made to recover from false beliefs. However, similar tactics can be used by liars, so liars can still deceive cautious audiences. Next, we have investigated public lies and recoveries for conditional beliefs and Lockean beliefs. The reduction axioms for these updates turned out to be straightforward. Again, the analysis shows that those who do not stick to the truth have an advantage over truth tellers.

We end with some suggestions for further work. An obvious step would be to give a calculus for public lying and recovery in a language L_LB for probability models, and show soundness and completeness. As was mentioned above, common knowledge and common beliefs play important roles in cooperation.
What are sound axioms for public lying and recoveries from false beliefs for a language with common knowledge and common belief operators, and how can we show completeness? The key effect of public lying is that the community of agents loses touch with reality. This is detrimental for all agents, because the utilities of our actions in the world are determined by properties of the world, not by what agents believe about the world. To work this out formally, we need to add agent utilities, and use these to model the effects on individual agents when these agents act on false beliefs. This would allow us to connect up to the world of Paolo Galeazzi (Galeazzi (2017)).

We did not present a very detailed analysis of Stag Hunt, for we only used the game to illustrate flawed communication. For a full analysis of how cooperation is achieved in the game, one has to take common knowledge and utilities into account. In the preparation for the stag hunt, individual tribesmen will not cooperate unless others do so. So it is natural to assume that the more people an agent believes will cooperate, the more willing she is to take part. All of this is as yet beyond the scope of our framework. For suppose Alice is the only one in the group who firmly believes that the decision is STAG (B_a p) and is willing to bet on STAG (P_a q). After a public lie that ¬p, everyone else becomes certain of ¬p, and most of these believers will be willing to bet ¬q. In our framework, Alice's convictions cannot be affected by the lie that ¬p, and thus Alice will still bet q (P_a q). However, as Alice is aware that most of the others are willing to bet ¬q, she should actually become pessimistic about cooperation (P_a ¬q). How can we extend our framework to represent this?

Declarations. Conflict of interest: The authors have no financial or proprietary interests in any material discussed in this article. Ethical approval: This article does not contain any studies with human participants performed by any of the authors.

References

Ågotnes, T., van Ditmarsch, H., & Wang, Y. (2018). True lies. Synthese, 195, 4581–4615.
Arendt, H. (1967). Truth and politics. Reprinted in Between Past and Future, Penguin Classics Edition, 2006.
Balbiani, P., et al. (2016). Before announcement. In Advances in Modal Logic 11.
Baltag, A., & Smets, S. (2006). Conditional doxastic models: A qualitative approach to dynamic belief revision. Electronic Notes in Theoretical Computer Science, 165, 5–21.
Binmore, K. (2008). Do conventions need to be common knowledge? Topoi, 27, 17–27.
Blackburn, P., de Rijke, M., & Venema, Y. (2001). Modal Logic. Cambridge University Press.
Burgess, J. (1969). Probability logic.
Chwe, M. S.-Y. (2013). Rational Ritual: Culture, Coordination, and Common Knowledge. Princeton University Press.
Demey, L. (2013). Contemporary epistemic logic and the Lockean thesis. Foundations of Science.
Foley, R. (1992). The epistemology of belief and the epistemology of degrees of belief. American Philosophical Quarterly, 29(2).
Galeazzi, P. (2017). Play without regret. PhD thesis, ILLC, University of Amsterdam.
Gärdenfors, P. (2003). Belief Revision. Cambridge University Press.
Ghosh, S., & de Jongh, D. (2013). Comparing strengths of beliefs explicitly. Logic Journal of the IGPL, 21, 488–514.
Girlando, M., Negri, S., Olivetti, N., & Risch, V. (2018). Conditional beliefs: From neighbourhood semantics to sequent calculus. Review of Symbolic Logic, 11, 736–779.
Hamblin, C. L. (1959). The modal 'probably'. Mind, 68, 234–240.
Haney (2018). Reverse public announcement operators on expanded models.
Herzig, A. (2003). Modal probability, belief, and actions. Fundamenta Informaticae, 57, 323–344.
Herzig, A., & Longin, D. (2003). On modal probability and belief.
Kooi, B., & Renne, B. (2011). Arrow update logic. Review of Symbolic Logic, 4, 536–559.
MacKay, D. J. C. (2003). Information Theory, Inference, and Learning Algorithms. Cambridge University Press.
Mahon, J. E. (2016). The definition of lying and deception. In The Stanford Encyclopedia of Philosophy.
Monderer, D., & Samet, D. (1989). Approximating common knowledge with common beliefs. Games and Economic Behavior, 1, 170–190.
Pacuit, E. (2013). Dynamic epistemic logic I: Modeling knowledge and belief. Philosophy Compass, 8, 798–814.
Plaza, J. (1989). Logics of public communications. In Proceedings of the 4th International Symposium on Methodologies for Intelligent Systems.
Sakama, C., Caminada, M., & Herzig, A. (2010). A logical account of lying. In Logics in Artificial Intelligence (JELIA 2010), Springer.
Stalnaker, R. (2006). On logics of knowledge and belief. Philosophical Studies, 128, 169–199.
Steiner, D. (2006). A system for consistency preserving belief change. In Proceedings of the ESSLLI Workshop on Rationality and Knowledge.
van Ditmarsch, H. (2014). Dynamics of lying. Synthese, 191, 745–777.
van Ditmarsch, H., Herzig, A., Lang, J., & Marquis, P. (2009). Introspective forgetting. Synthese, 169, 405–423.
van Ditmarsch, H., van Eijck, J., Sietsma, F., & Wang, Y. (2012). On the logic of lying. In Games, Actions and Social Software, Springer.
van Eijck, J., & Li, K. (2017). Conditional belief, knowledge and probability. In Proceedings of TARK 2017.
van Eijck, J., & Renne, B. (2016). Update, probability, knowledge and belief.