Reaping the Informational Surplus in Bayesian Persuasion

Ronen Gradwohl, Niklas Hahn, Martin Hoefer, Rann Smorodinsky

June 3, 2020

The Bayesian persuasion model studies communication between an informed sender and a receiver with a payoff-relevant action, emphasizing the ability of a sender to extract maximal surplus from his informational advantage. In this paper we study a setting with multiple senders, but in which the receiver interacts with only one sender of his choice: senders commit to signals and the receiver then chooses, at the interim stage, with which sender to interact. Our main result is that whenever senders are even slightly uncertain about each other's preferences, the receiver receives all the informational surplus in all equilibria of this game.

The celebrated model of information design known as Bayesian persuasion (Kamenica and Gentzkow, 2011) studies a setting where some agents, typically known as senders, receive private information. Other agents, known as receivers, must choose an action that affects their own payoff as well as the senders'. The senders' challenge, and the focus of the literature, is to decide how much of the information they should share with the receivers. In particular, the Bayesian persuasion model captures settings where the informed senders can commit to a signal-a distribution over messages that depends on the senders' information-prior to learning their private information. Although senders take no action, the combination of access to private information with the ability to commit to a signal proves to be quite advantageous.

However, when multiple senders are involved, this advantage may decline. Indeed, recent work discusses the deterioration in senders' payoffs as more senders compete, at the same time benefiting the single receiver. We discuss some of these results in Section 1.2.

In this paper we provide a new model of competition among senders. In our model there are multiple senders and a state space for each of the senders and the receiver. States are a priori unknown, but there may be almost arbitrary correlation between different players' states. As always, senders commit to a signal prior to receiving any private information. In contrast with other models, however, our receiver is restricted to receiving one message, from a sender of his choice.

From a descriptive perspective, there are many settings in which the receiver may lack the attention to receive input from all senders. For example, the receiver may be a policymaker contemplating the implementation of some policy, and the senders may be experts who are knowledgeable about the policy's projected implications. Consulting with all the experts may be costly; instead, the policymaker might just choose one with whom to consult. Alternatively, the receiver may be a judge or court of law and the senders prosecutors arguing a case (as in the canonical example of Kamenica and Gentzkow, 2011). The court may be limited in the number of cases it can hear, and so may have to choose with which prosecutors to interact. From a normative perspective, when the receiver has flexibility in choosing how much advice to receive, he may prefer to limit the amount of advice. Indeed, our main result is that in all equilibria of the game, the receiver learns all of his payoff-relevant information.
That is, with as few as two competing senders, the receiver reaps all of the informational surplus. We obtain this strong dichotomy between the single- and multiple-sender cases in a very general model. We allow for an arbitrary information structure and arbitrary utility functions of the senders, as long as senders are not perfectly aligned in their objectives. Without this assumption-that is, if senders are perfectly aligned-the multiple-sender case is nearly identical to the single-sender case, and so the informational surplus is primarily enjoyed by the senders.

Our main result leads to two counterintuitive observations. First, from the receiver's point of view, the restriction to interacting with just a single sender is actually a benefit, and in fact should be self-imposed. Second, from the senders' point of view, commitment power can be a double-edged sword: a single sender interacting with a receiver is always better off with commitment power, but with more than one sender that same commitment power can be strictly harmful, and senders may be better off forgoing it. We now discuss these observations in greater detail.

Example 1 (inspired by an example of Kamenica and Gentzkow, 2011). There is a government policy that may be beneficial or harmful, and a policymaker who can either implement the policy (action P) or maintain the status quo (action Q). Additionally, there are two experts, and each is either biased or unbiased. The biased type always wants the policy implemented (similarly to the prosecutor in the example of Kamenica and Gentzkow, 2011), whereas the unbiased type is aligned with the policymaker. Each player obtains utility 1 when his preferred action is taken, and utility 0 otherwise. The prior distribution over the information is as follows: the policy is beneficial with probability 0.5 and each of the experts is unbiased with probability ε > 0, all independently. Each expert learns his own type and whether or not the policy is beneficial.

Before we discuss competition between experts, we consider the optimal signal for a single expert, absent competition. As a single, unbiased expert is fully aligned with the policymaker, he will fully disclose the policy's benefit. On the other hand, a biased expert will always recommend implementation. As a result, the policymaker will implement the policy upon hearing a recommendation to implement, and will maintain the status quo otherwise. This yields an expected utility of 1 to the expert and 0.5 + ε/2 to the policymaker.

We now return to the two-sender setting and consider two variants of the game. In the first, the policymaker chooses one expert with whom to interact (which is the model discussed in this paper). In the second, the policymaker interacts with both. The payoffs in all three scenarios are verified numerically in the sketch after this list.

• Two senders and one signal: This is the setting we analyze in the rest of the paper. The policymaker chooses a single expert, and observes the realization only of that expert's signal. We apply our main result, Theorem 1, to deduce that in this setting the policymaker always takes his correct action. This means that the experts' utility drops to 0.5 + ε/2, while the policymaker's increases to 1. Note that this conclusion holds for any ε > 0. However, at ε = 0 we witness a striking discontinuity, as all experts can then choose the sender-optimal signal from the single-expert case, and will fully enjoy their informational advantage.

• Two senders and two signals: What if the policymaker observes both experts' signal realizations? Consider the case where both experts adopt the optimal signal from the single-expert setting. As the policymaker observes all signal realizations, his optimal strategy is to maintain the status quo if one or more of the experts recommends this, and otherwise implement the policy. For small ε, the result is that the experts once again (almost) fully extract the informational surplus, while the policymaker expects a utility close to 0.5: with probability (1 − ε)², both experts are biased and recommend implementation, and the policymaker obliges, leading to an expected utility of 0.5 for the latter in this case. To see why this strategy profile constitutes an equilibrium, note that an unbiased expert always gets utility 1, and so cannot improve. A biased expert gets utility 1 when the other expert is also biased, since then both recommend their optimal action, implementation. The only situation in which a biased expert does not get his maximal utility is when the other expert is unbiased and the policy is harmful. However, in this case he cannot change the policymaker's action from Q to P, since the policymaker knows that the other expert's recommendation is aligned with his own preference.
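To make the arithmetic of Example 1 concrete, the following sketch enumerates the eight states and reproduces the payoffs discussed above. It is illustrative only: all names are ours, and the "two senders, one signal" row imposes the conclusion of Theorem 1 (the policymaker acts correctly) rather than deriving it from equilibrium analysis.

```python
from itertools import product

def example1_payoffs(eps):
    """Expected utilities in the three scenarios of Example 1, by direct
    enumeration. A state is (beneficial, type1, type2), with
    P(beneficial) = 1/2 and each expert unbiased w.p. eps, independently."""

    def rec(t, beneficial):
        # Single-expert optimal signal: an unbiased expert discloses fully,
        # a biased expert always recommends implementation.
        return 'P' if (t == 'biased' or beneficial) else 'Q'

    def u(want, action):
        return 1.0 if action == want else 0.0

    res = {s: {'policymaker': 0.0, 'expert1': 0.0}
           for s in ('single expert', 'two senders, one signal',
                     'two senders, two signals')}
    for ben, t1, t2 in product([True, False],
                               ['biased', 'unbiased'], ['biased', 'unbiased']):
        p = (0.5 * (eps if t1 == 'unbiased' else 1 - eps)
                 * (eps if t2 == 'unbiased' else 1 - eps))
        correct = 'P' if ben else 'Q'
        want1 = 'P' if t1 == 'biased' else correct
        actions = {
            # Expert 1 alone plays the single-expert optimal signal.
            'single expert': rec(t1, ben),
            # Theorem 1: the policymaker always ends up acting correctly.
            'two senders, one signal': correct,
            # Both experts play the single-expert signal; the policymaker
            # implements only if neither recommends the status quo.
            'two senders, two signals':
                'P' if rec(t1, ben) == rec(t2, ben) == 'P' else 'Q',
        }
        for s, a in actions.items():
            res[s]['policymaker'] += p * u(correct, a)
            res[s]['expert1'] += p * u(want1, a)
    return res

for s, v in example1_payoffs(0.1).items():
    print(f"{s:26s} policymaker = {v['policymaker']:.3f}, "
          f"expert 1 = {v['expert1']:.3f}")
# single expert              policymaker = 0.550, expert 1 = 1.000
# two senders, one signal    policymaker = 1.000, expert 1 = 0.550
# two senders, two signals   policymaker = 0.595, expert 1 = 0.955
```

At ε = 0.1 this matches the closed forms above: 0.5 + ε/2 = 0.55 and 1 − 0.5(1 − ε)² = 0.595.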
Thus, in the example above the receiver strictly prefers to choose and interact with just one sender rather than to observe both senders' signal realizations.

A sender that can commit to his future course of action enjoys an obvious advantage over one who cannot, whenever he monopolizes information. To see this, observe that a sender with commitment power can always simulate one who has no such power. In the competitive setting, however, all the informational surplus is enjoyed by the receiver, and so commitment power is no longer an obvious benefit. Indeed, the following example demonstrates the possibility that senders engaged in competition may prefer to have no commitment power.

Example 2. Consider a regulator who is contemplating the imposition of restrictions in the market for electronic cigarettes. The regulator can tap into one of two leading experts. Each of the experts learns his own payoff-relevant state as well as the receiver's payoff-relevant state, but not that of the other expert. Our main result, Theorem 1, implies that whenever experts commit to signals and only one sender is chosen by the regulator, the regulator always takes his preferred action. Thus, the unique equilibrium payoffs are 1 to the regulator and 0.4 to each expert. In contrast, absent any commitment power the two experts could provide no information in equilibrium; the regulator would then take an arbitrary action, as he is indifferent, and the experts would enjoy a higher payoff of 0.5. Thus, this example demonstrates that commitment power can be a double-edged sword for the senders: it is beneficial when there is a single sender, but can be strictly harmful when there are multiple senders.

The study of Bayesian persuasion was initiated by Aumann and Maschler (1966) and came back into focus more recently following the work of Kamenica and Gentzkow (2011), leading to a plethora of variants (e.g., Celli et al., 2020; Arieli and Babichenko, 2019; Ely, 2017; Ely et al., 2015; Au, 2015; Emek et al., 2012; Alonso and Câmara, 2016; Kolotilin, 2015; Goldstein and Leitner, 2018; Rabinovich et al., 2015; Koessler et al., 2019).
Kamenica (2019) provides a good overview of Bayesian persuasion and some of its extensions. Our research contributes to work on competing senders and the effect this competition has on the amount of information revealed to the receiver. This work includes Gentzkow and Kamenica (2017a), who study a model in which the decision maker receives the realized signals from every sender before making a decision. They show that increasing the number of senders cannot decrease the amount of information revealed to the receiver. Their work is supplemented by that of Li and Norman (2018), who show that the assumptions of that model are critical, and that the result does not hold if any of them is violated. We show a stark contrast between the model of Gentzkow and Kamenica (2017a) and one where the receiver is restricted to receiving the realized signal from a single sender: in the latter model the equilibrium signals of competing senders are fully informative to the receiver, whereas in the former there are cases where equilibria are not fully informative.

Au and Kawai (2020) study the case of n senders, each with a valuation drawn independently from a single known distribution, competing for the patronage of a receiver. Only a single sender can be chosen by the receiver, and only this sender gets a positive payoff; the receiver's payoff is determined by the chosen sender's valuation. The paper shows that an increase in the number of senders increases the amount of information revealed by each individual sender. In contrast, our model allows for almost arbitrary correlation between the senders and the receiver, and we allow multiple senders to profit from the receiver's decision. In our setting, already with two senders all equilibrium signals are fully informative.

Finally, Gentzkow and Kamenica (2017b) compare the information revealed when senders compete to that revealed by collusive senders in the same scenario. They identify a condition on the information structure that is necessary and sufficient for competition to increase the informativeness of signals in a broad class of settings. We note that this condition does not hold in our setup. They also show that their result does not hold if mixed equilibria are permitted, whereas our result holds for both pure and mixed equilibria. We use a different approach and focus on commitment power, comparing the information revealed and the payoffs obtained by a single sender to those of multiple competing senders.

Some of the notation is adapted from Gentzkow and Kamenica (2017b). There are n senders, denoted {1, . . . , n}, and a single receiver, denoted R. The state space is finite and is of the form Ω = Ω_1 × . . . × Ω_n × Ω_R, with a typical element ω = (ω_1, . . . , ω_n, ω_R). The senders and receiver share a common prior μ_0 over Ω. The receiver is the only player with a payoff-relevant action. Let A denote his set of actions, and let u_R : A × Ω_R → R and u_i : A × Ω_i → R be the payoff functions of the receiver and sender i, respectively. Note that each player's payoff depends only on the corresponding entry in the state space. This generalizes the standard model in which there is a common payoff-relevant state for all players, as the prior could be such that ω_R = ω_1 = . . . = ω_n always. Our assumptions on utilities will limit this generality to some extent, however.

We make two assumptions. The first, made solely to simplify the exposition, states that each player has a unique optimal action in each state:

Assumption 1 For every j ∈ {1, . . . , n, R} and ω_j ∈ Ω_j, the set arg max_{a∈A} u_j(a, ω_j) is a singleton.
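The following minimal sketch encodes these primitives for Example 1 with n = 2 experts; all names are ours, not the paper's. Folding the policy's quality into each expert's own state shows how the product structure accommodates correlation through the prior, and the last lines verify Assumption 1.

```python
from itertools import product

A = ['P', 'Q']  # the receiver's actions

# Each expert's payoff-relevant state bundles his type with the policy's
# quality, since u_i may depend only on omega_i; the receiver's state is the
# quality alone. The correlation lives in the prior, which puts mass only on
# states whose quality entries agree.
TYPES = ['biased', 'unbiased']
QUALITY = ['beneficial', 'harmful']
Omega_i = list(product(TYPES, QUALITY))   # Omega_1 = Omega_2
Omega_R = QUALITY

def prior(eps):
    """mu_0 over Omega = Omega_1 x Omega_2 x Omega_R for Example 1."""
    mu = {}
    for t1, t2, q in product(TYPES, TYPES, QUALITY):
        p = 0.5
        for t in (t1, t2):
            p *= eps if t == 'unbiased' else 1 - eps
        mu[((t1, q), (t2, q), q)] = p
    return mu

def u_R(a, w_R):
    return 1.0 if a == ('P' if w_R == 'beneficial' else 'Q') else 0.0

def u_i(a, w_i):
    t, q = w_i
    preferred = 'P' if (t == 'biased' or q == 'beneficial') else 'Q'
    return 1.0 if a == preferred else 0.0

def check_assumption_1(u, states):
    """Assumption 1: a unique optimal action in every state."""
    for w in states:
        best = max(u(a, w) for a in A)
        assert sum(u(a, w) == best for a in A) == 1, f"tie in state {w}"

check_assumption_1(u_R, Omega_R)
check_assumption_1(u_i, Omega_i)
print("Assumption 1 holds; prior mass =", sum(prior(0.1).values()))  # 1.0
```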
The second assumption is the main assumption underlying our theorem. In fact, without such an assumption the result does not hold, as demonstrated by Example 1 with ε = 0. In words, the assumption states that given the receiver's and any sender's states, there is still some residual uncertainty about other senders' states, and in particular about the possibility that their preferences are aligned with the receiver's.

Assumption 2 For every pair of senders i ≠ j and every (ω_j, ω_R) ∈ Ω_j × Ω_R that has positive probability under μ_0, there exists an ω_i ∈ Ω_i with P(ω_i | (ω_j, ω_R)) > 0 for which arg max_{a∈A} u_i(a, ω_i) = arg max_{a∈A} u_R(a, ω_R).

This assumption warrants some discussion, as it is the main driving force behind our result. First, note that the assumption is "weak" in that it is implied by the following "standard" assumptions: that the prior μ_0 has full support, and that there are no undesirable actions, i.e., for each sender i and each action a there is some state ω_i for which a = arg max_{a′∈A} u_i(a′, ω_i). Second, note that Assumption 2 is satisfied for any prior when senders' preferences are perturbed a bit so that each sender, independently but with arbitrarily small probability, is aligned with the receiver (as in Example 1).

Next, in order to describe the game we require some preliminary notation. A signal for player i is composed of an abstract message space M_i and a function π_i : Ω_iR → Δ(M_i), where Ω_iR = Ω_i × Ω_R and Δ(·) denotes the corresponding set of probability distributions. Let Π_i denote the set of all signals of sender i.

The persuasion game we study proceeds as follows. First, each sender i chooses a distribution π_i over Π_i. The receiver observes the realized vector of signals (π_1, . . . , π_n), one signal drawn from each sender's chosen distribution, after which he chooses one of the players, say j. The state is then realized, sender j learns the state ω_jR := (ω_j, ω_R), and the receiver gets (only) the message π_j(ω_jR) sent by the chosen player j. Finally, R takes an action in A and the payoffs of all players are realized.

Without loss of generality we assume that M_i ⊆ A, and interpret the message realized by the signal as a recommended action to the receiver. A signal π_i is incentive compatible (IC) if, upon any realization a ∈ A of π_i, the receiver's optimal action is a. We invoke the revelation principle and assume, without loss of generality, that senders are restricted to IC signals. Hereinafter we let Π_i denote the set of all such IC signals.

Denote by v_R(π) the expected utility of the receiver when he takes his optimal action following every realization of the signal π, and by v_i(π) the expected utility of sender i when the receiver takes these actions. We write π ⪰_j π′ when v_j(π) ≥ v_j(π′), and π ≻_j π′ when v_j(π) > v_j(π′), for any j ∈ {1, . . . , n, R}.

Note that for any sender j, a signal π_j and a realization a ∈ A of π_j induce a posterior belief over Ω. Denote this belief by π_j|_a; for a set S of players, (π_j|_a)_S denotes the marginal of this belief on the product of the corresponding state spaces, e.g., (π_j|_a)_i on Ω_i and (π_j|_a)_(i,R) on Ω_i × Ω_R. Denote by ⟨π_j⟩ the distribution of beliefs induced by realizations of π_j, and by ⟨π_j⟩_S the corresponding distribution of marginal beliefs.

Finally, we will be interested in the Nash equilibria of this game. We assume that when the receiver is indifferent between multiple senders, he chooses one of them uniformly at random; this is akin to the standard assumption in Bayesian persuasion that the receiver breaks ties in favor of the sender. (Our result holds also for weaker assumptions on the tie-breaking rule; see Appendix A.) To this end, fix the decision rule D : Π_1 × . . . × Π_n → Δ({1, . . . , n}) of the receiver as a uniformly-random choice of a sender from the set {j : π_j ⪰_R π_i ∀i ∈ {1, . . . , n}}.
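For finite examples, incentive compatibility and the values v_R and v_i can be computed mechanically. The sketch below uses a dict-based signal representation and function names of our own choosing; it recovers the payoffs 0.5 + ε/2 and 1 of the single-expert optimal signal from Example 1.

```python
from collections import defaultdict

def evaluate_signal(mu_iR, pi, u_R, u_i, A):
    """Check incentive compatibility of a signal and compute v_R and v_i.
    mu_iR is a prior over pairs (omega_i, omega_R); pi[(w_i, w_R)] is a dict
    mapping each message a to its probability. Returns (is_IC, v_R, v_i)."""
    msg_prob = defaultdict(float)   # P(pi = a)
    joint = defaultdict(float)      # P(pi = a, omega_iR)
    for w, p in mu_iR.items():
        for a, q in pi[w].items():
            msg_prob[a] += p * q
            joint[(a, w)] += p * q

    is_ic, v_R_val, v_i_val = True, 0.0, 0.0
    for a, pa in msg_prob.items():
        if pa == 0.0:
            continue
        # The receiver's expected utility of each action under the posterior
        # induced by message a; IC requires that a itself is optimal.
        eu = {b: sum(joint[(a, w)] * u_R(b, w[1]) for w in mu_iR) / pa
              for b in A}
        if eu[a] < max(eu.values()) - 1e-12:
            is_ic = False
        v_R_val += pa * eu[a]
        v_i_val += sum(joint[(a, w)] * u_i(a, w[0]) for w in mu_iR)
    return is_ic, v_R_val, v_i_val

# The single-expert optimal signal of Example 1, eps = 0.1: IC holds, with
# v_R = 0.5 + eps/2 = 0.55 and v_i = 1.
eps, A = 0.1, ['P', 'Q']
mu = {((t, q), q): 0.5 * (eps if t == 'unbiased' else 1 - eps)
      for t in ['biased', 'unbiased'] for q in ['beneficial', 'harmful']}
pi = {w: {('P' if (w[0][0] == 'biased' or w[1] == 'beneficial') else 'Q'): 1.0}
      for w in mu}
u_R = lambda a, wR: float(a == ('P' if wR == 'beneficial' else 'Q'))
u_i = lambda a, wi: float(
    a == ('P' if (wi[0] == 'biased' or wi[1] == 'beneficial') else 'Q'))
print(evaluate_signal(mu, pi, u_R, u_i, A))   # (True, 0.55, 1.0)
```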
Definition 1 A Nash equilibrium (NE) is a profile π = (π_1, . . . , π_n) of distributions over signals with the following property: there do not exist a sender i and a distribution π′_i over signals with E[v_i(π_{D(π′)})] > E[v_i(π_{D(π)})], where π′ is the profile π but with π′_i replacing π_i.

We stress that the receiver chooses a sender with whom to interact after he observes the signal π_i drawn from each sender i's distribution. Were he to make this choice after observing only the distributions themselves, the model would be identical to one restricted to pure strategies, since each distribution over signals is then equivalent, in terms of the senders' and receiver's eventual utilities, to some single signal.

We are interested in fully-informative signals, in which the receiver obtains enough information to always take his optimal action. Formally:

Definition 2 A signal π_i is fully informative if for every a ∈ supp(π_i) and every ω_R ∈ supp((π_i|_a)_R), the recommended action a = arg max_{b∈A} u_R(b, ω_R).

Moreover, we are interested in fully-informative profiles of signals:

Definition 3 A profile π is fully informative if, with probability 1, the realized profile (π_1, . . . , π_n) drawn from π satisfies the following: for every sender j that is chosen by the receiver with positive probability, π_j is fully informative.

Our main result is then:

Theorem 1 Every NE of the game is fully informative.
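Before turning to the proof, note that Definition 2 can be checked mechanically for finite examples: under Assumption 1 it amounts to requiring that every message sent with positive probability is the receiver's optimal action in every receiver state that can co-occur with it. A sketch, in the same illustrative representation as above:

```python
def is_fully_informative(mu_iR, pi, u_R, A, tol=1e-12):
    """An operational version of Definition 2: for every message a sent with
    positive probability and every receiver state omega_R that can co-occur
    with it, a must be the receiver's optimal action at omega_R."""
    for (w_i, w_R), p in mu_iR.items():
        if p <= tol:
            continue
        for a, q in pi[(w_i, w_R)].items():
            if q <= tol:
                continue
            if u_R(a, w_R) < max(u_R(b, w_R) for b in A) - tol:
                return False   # a reachable pair (a, omega_R) is suboptimal
    return True

# The single-expert optimal signal of Example 1 is not fully informative:
# a biased expert recommends P even when the policy is harmful.
eps, A = 0.1, ['P', 'Q']
u_R = lambda a, wR: float(a == ('P' if wR == 'beneficial' else 'Q'))
mu = {((t, q), q): 0.5 * (eps if t == 'unbiased' else 1 - eps)
      for t in ['biased', 'unbiased'] for q in ['beneficial', 'harmful']}
pi = {w: {('P' if (w[0][0] == 'biased' or w[1] == 'beneficial') else 'Q'): 1.0}
      for w in mu}
print(is_fully_informative(mu, pi, u_R, A))   # False
```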
The main idea underlying the proof of Theorem 1 is best illustrated in the simpler case in which π is a pure profile, say (π_1, . . . , π_n). Suppose that in this profile player j is always chosen by the receiver, but that π_j is not fully informative. In this case we can construct a profitable deviation for any other player i. First, in Lemma 1 we show that this other player can simulate the signal of player j, by playing a signal π′_i that yields the same marginal distribution over his own and the receiver's states as π_j. Because of this equivalence of marginal distributions, both the receiver and player i are indifferent between π_j and π′_i. Then, in Lemma 3 we show that player i can modify π′_i to obtain a signal π″_i that both he and the receiver strictly prefer to π′_i, and hence to π_j. He does so by making π′_i a touch more informative (to improve the receiver's utility), but only when his own preferences are aligned with the receiver's (to also improve his own utility). This is feasible for i because he knows his own state, while j does not. The deviation thus yields a profile in which the receiver always chooses π″_i, which player i strictly prefers to the current profile in which π_j is chosen.

We begin with some lemmas that will be used in the proof of Theorem 1.

Lemma 1 For every pair of senders i ≠ j and every signal π_j ∈ Π_j there exists a signal π_i ∈ Π_i for which ⟨π_i⟩_(i,R) = ⟨π_j⟩_(i,R).

In words, sender i can simulate any signal of j so that the posterior distribution over the payoff-relevant states of i and R equals that induced by j's signal.

Proof: The proof is by construction. Fix a pair of senders i ≠ j and a signal π_j ∈ Π_j. For each action a ∈ supp(π_j), recall that π_j induces a distribution (π_j|_a)_(i,R) over states Ω_i × Ω_R, namely a probability P(ω_iR | π_j = a) for each state ω_iR. Let the signal π_i be the distribution over A generated as follows: in each state ω_iR, signal π_i takes each value a ∈ A with probability P(π_j = a | ω_iR), where

P(π_j = a | ω_iR) = Σ_{ω_j ∈ Ω_j} P(ω_j | ω_iR) · P(π_j = a | ω_jR).

In this construction,

P(π_i = a, ω_iR) = P(π_j = a | ω_iR) · P(ω_iR) = P(π_j = a, ω_iR)

for every ω_iR and a. Thus, ⟨π_i⟩_(i,R) = ⟨π_j⟩_(i,R). □

Note that in Lemma 1, the marginal distribution over ω_R given any recommended action a is equal for π_j and π_i. Therefore, if π_j is IC, then so is π_i.

Lemma 2 For every pair of senders i ≠ j, every signal π_j ∈ Π_j, every realization a ∈ supp(π_j), and every state ω_jR ∈ supp((π_j|_a)_(j,R)), there exists an ω_i ∈ supp((π_j|_a)_i) for which arg max_{a′∈A} u_i(a′, ω_i) = arg max_{a′∈A} u_R(a′, ω_R).

Proof: Fix some realization a ∈ supp(π_j) and state ω_jR ∈ supp((π_j|_a)_(j,R)). By Assumption 2, there exists an ω_i ∈ Ω_i with P(ω_i | ω_jR) > 0 for which arg max_{a′∈A} u_i(a′, ω_i) = arg max_{a′∈A} u_R(a′, ω_R). It remains to show that this ω_i ∈ supp((π_j|_a)_i):

P(ω_i | π_j = a) ≥ P(ω_i, ω_jR | π_j = a) = P(ω_i | ω_jR) · P(ω_jR | π_j = a) > 0,

where the strict inequality follows from Assumption 2 and the fact that all other terms are also strictly positive. □
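The construction in Lemma 1 is a direct mixture computation, sketched below for Example 1; the dict representation and helper names are ours. The assertion verifies the lemma's conclusion: the joint law of the message and (ω_1, ω_R) is the same whether expert 2 plays the signal or expert 1 plays its simulation.

```python
from collections import defaultdict

def simulate_signal(mu, pi_j):
    """Lemma 1's construction: given a joint prior mu over triples
    (omega_i, omega_j, omega_R) and sender j's signal pi_j[(w_j, w_R)],
    build sender i's signal with
        pi_i(a | w_i, w_R) = sum_{w_j} P(w_j | w_i, w_R) * pi_j(a | w_j, w_R)."""
    p_iR = defaultdict(float)
    for (w_i, w_j, w_R), p in mu.items():
        p_iR[(w_i, w_R)] += p
    pi_i = defaultdict(lambda: defaultdict(float))
    for (w_i, w_j, w_R), p in mu.items():
        for a, q in pi_j[(w_j, w_R)].items():
            pi_i[(w_i, w_R)][a] += (p / p_iR[(w_i, w_R)]) * q
    return pi_i

def joint_message_law(mu, pi, own_index):
    """P(message = a, omega_1, omega_R) when the sender whose state is entry
    own_index (0 or 1) of the triple plays the signal pi."""
    out = defaultdict(float)
    for (w1, w2, wR), p in mu.items():
        w_own = (w1, w2)[own_index]
        for a, q in pi[(w_own, wR)].items():
            out[(a, w1, wR)] += p * q
    return out

# Expert 1 simulates expert 2's single-expert optimal signal in Example 1.
eps = 0.1
TYPES, QUAL = ['biased', 'unbiased'], ['beneficial', 'harmful']
mu = {((t1, q), (t2, q), q): (0.5 * (eps if t1 == 'unbiased' else 1 - eps)
                                  * (eps if t2 == 'unbiased' else 1 - eps))
      for t1 in TYPES for t2 in TYPES for q in QUAL}
pi_j = {((t, q2), q): {('P' if (t == 'biased' or q == 'beneficial') else 'Q'): 1.0}
        for t in TYPES for q2 in QUAL for q in QUAL}
pi_i = simulate_signal(mu, pi_j)
law_j = joint_message_law(mu, pi_j, own_index=1)
law_i = joint_message_law(mu, pi_i, own_index=0)
assert all(abs(law_j[k] - law_i[k]) < 1e-12 for k in set(law_j) | set(law_i))
print("joint law of (message, omega_1, omega_R) coincides, as in Lemma 1")
```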
Lemma 3 Fix a pair of senders i ≠ j and a signal π_j ∈ Π_j that is not fully informative. Then there exists a signal π_i ∈ Π_i for which both π_i ≻_R π_j and π_i ≻_i π_j.

Proof: Since π_j is not fully informative, there exist an action a ∈ supp(π_j) and a state ω ∈ supp((π_j|_a)_R) such that a ≠ arg max_{b∈A} u_R(b, ω). By Lemma 1 there exists a signal π′_i for which ⟨π′_i⟩_(i,R) = ⟨π_j⟩_(i,R). This implies that both v_i(π′_i) = v_i(π_j) and v_R(π′_i) = v_R(π_j).

Consider the set Ω_R^a := supp((π′_i|_a)_R). For every ω_R ∈ Ω_R^a, let b_{ω_R} = arg max_{b∈A} u_R(b, ω_R) be the optimal action for the receiver in state ω_R. By Lemma 2, for every ω_R ∈ Ω_R^a there exists a state ω_i^{b_{ω_R}} ∈ supp((π′_i|_a)_i) for which arg max_{b∈A} u_i(b, ω_i^{b_{ω_R}}) = b_{ω_R}. We now construct the signal π_i as follows:

1. Given any state, generate the recommendation a′ according to π′_i.
2. If the realized recommendation is a′ ≠ a, signal π_i is realized as a′.
3. If the realized recommendation is a′ = a and the state is (ω_i^{b_{ω_R}}, ω_R) for some ω_R ∈ Ω_R^a, then: (a) with probability ε_{ω_R} (to be determined below) signal π_i is realized as b_{ω_R}, and (b) with probability 1 − ε_{ω_R} the signal is realized as a′.
4. Otherwise (the realized recommendation is a′ = a but the state is not (ω_i^{b_{ω_R}}, ω_R) for any ω_R ∈ Ω_R^a), signal π_i is realized as a′.

We will set ε_{ω_R} in 3(a) above so that the following is achieved: if the realized recommendation of π′_i is a, then with some small probability π_i is fully informative and is realized as b_{ω_R} = arg max_{b∈A} u_R(b, ω_R). Furthermore, in the case when π_i is fully informative, the state ω_R is revealed (through the recommended action b_{ω_R}) only when the state of player i is ω_i^{b_{ω_R}}. That is, the signal reveals the optimal action for the receiver precisely in those cases when that same action is also optimal for sender i.

To achieve this, set Λ_b = {ω_R : b = arg max_{b′∈A} u_R(b′, ω_R)}. We will set ε_{ω_R} so that, conditional on realizing 3(a) above, each recommendation b is realized with probability proportional to P(ω_R ∈ Λ_b | π′_i = a); thus, this is akin to playing a fully informative signal with some small probability. Formally, set

ε_{ω_R} = ε · P(ω_R | π′_i = a) / P(ω_i^{b_{ω_R}}, ω_R | π′_i = a)

for ε > 0, chosen small enough so that ε_{ω_R} ≤ 1 for all ω_R ∈ Ω_R^a. Hence, if the realized recommendation of π′_i is a′ = a and the state is (ω_i^{b_{ω_R}}, ω_R) for some ω_R ∈ Ω_R^a, then the signal π_i is realized as b_{ω_R} with probability ε_{ω_R}, and as a otherwise.

Because the signal π_i thus constructed is sometimes realized as π′_i and sometimes is fully informative, π_i ≻_R π′_i. Furthermore, since the fully informative recommendation is realized precisely when sender i's and the receiver's preferences are aligned, π_i ≻_i π′_i. Combining this with the facts above that v_i(π′_i) = v_i(π_j) and v_R(π′_i) = v_R(π_j) yields π_i ≻_R π_j and π_i ≻_i π_j. □
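The perturbation in Lemma 3 can also be sketched numerically. The version below simplifies the proof's construction: instead of the calibrated state-dependent weights ε_{ω_R}, it uses one uniform ε, which is already enough to exhibit the strict improvement for both players in Example 1 (the commented evaluations use evaluate_signal from the sketch above). All names are ours.

```python
def sweeten(pi, u_R, u_i, A, a_bad, eps_mix=0.05):
    """Simplified Lemma 3 perturbation: whenever pi would send the (not
    fully informative) message a_bad in a state where sender i's and the
    receiver's unique optimal actions coincide, send that optimal action
    instead with small probability eps_mix."""
    new_pi = {}
    for (w_i, w_R), dist in pi.items():
        best_R = max(A, key=lambda b: u_R(b, w_R))
        best_i = max(A, key=lambda b: u_i(b, w_i))
        d = dict(dist)
        p_bad = d.get(a_bad, 0.0)
        if p_bad > 0 and best_R == best_i and best_R != a_bad:
            d[a_bad] = p_bad * (1 - eps_mix)
            d[best_R] = d.get(best_R, 0.0) + p_bad * eps_mix
        new_pi[(w_i, w_R)] = d
    return new_pi

# Expert 1's simulation pi' of expert 2's signal (Example 1, eps = 0.1):
# from expert 1's viewpoint it recommends Q, when the policy is harmful, only
# with the probability 0.1 that the *other* expert is unbiased.
eps, A = 0.1, ['P', 'Q']
u_R = lambda a, wR: float(a == ('P' if wR == 'beneficial' else 'Q'))
u_i = lambda a, wi: float(
    a == ('P' if (wi[0] == 'biased' or wi[1] == 'beneficial') else 'Q'))
mu = {((t, q), q): 0.5 * (eps if t == 'unbiased' else 1 - eps)
      for t in ['biased', 'unbiased'] for q in ['beneficial', 'harmful']}
pi_prime = {w: ({'P': 1.0} if w[1] == 'beneficial'
                else {'P': 1 - eps, 'Q': eps}) for w in mu}
pi_better = sweeten(pi_prime, u_R, u_i, A, a_bad='P')
# Evaluated with evaluate_signal from the earlier sketch, both players gain:
# evaluate_signal(mu, pi_prime, u_R, u_i, A)   -> (True, 0.55, 0.91)
# evaluate_signal(mu, pi_better, u_R, u_i, A)  -> (True, 0.55225, 0.91225)
```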
Finally, the following lemma essentially extends Lemma 3 to mixed strategies of the senders. It states that if in some profile a sender plays signals that are almost never chosen, then he has a profitable deviation.

Lemma 4 Fix a NE π that is not fully informative. Then no sender i's strategy π_i has positive measure on any set of signals that is chosen by the receiver with probability 0.

Before proving the lemma we develop some notation. For a profile π, let T_i = supp(v_R(π_i)) be the set of possible receiver payoffs attainable by a signal in sender i's support, and denote τ_i = min T_i. Furthermore, denote τ = max_i τ_i. Observe that the receiver will never choose a signal π_i with v_R(π_i) < τ, since there will always be some signal leading to higher utility available. Also, note that all signals π_i with v_R(π_i) > τ are potentially chosen in equilibrium. Finally, we denote by π_{-i} the profile π with sender i removed, and slightly abuse notation by writing D(π_{-i}) for a uniformly-random choice of a sender from the set {j : π_j ⪰_R π_k ∀k ∈ {1, . . . , n} \ {i}}.

Proof of Lemma 4: Since π_i is i's equilibrium strategy, i is indifferent between all but a measure 0 of the signals in the support of π_i. That is, for each such signal, sender i considers the expected utility from that signal: the probability that it is chosen by the receiver times the utility i derives from it. In equilibrium, this expected utility is the same for all signals in the support.

Now, suppose towards a contradiction that π_i has positive measure on a set of signals that is chosen by the receiver with probability 0. By the above, i thus sometimes plays signals that are almost never chosen by the receiver. This leads to utility arbitrarily close to E[v_i(π_{D(π_{-i})})] for sender i. The equilibrium indifference then implies that this same expected utility is obtained by sender i for all the signals in the support of π_i, and so his equilibrium payoff is arbitrarily close to E[v_i(π_{D(π_{-i})})].

We now construct a profitable deviation for sender i, as follows. Let j ≠ i and π_j satisfy v_i(π_j) ≥ v_i(π_k) for all senders k ≠ i and signals π_k ∈ supp(π_k) with v_R(π_k) ≥ τ, and for which v_R(π_j) ≥ τ. That is, π_j is the best signal for sender i played in equilibrium by any other sender j that is potentially chosen by the receiver. If π_j is not fully informative, then by Lemma 3 there exists a signal π′_i for sender i that is strictly better than π_j for both i and the receiver. Sender i's profitable deviation is then to play the strategy that is the same as π_i except that the set of signals chosen by the receiver with probability 0 is replaced with π′_i. This deviation is profitable, contradicting the assumption that π is a NE.

If π_j is fully informative, then there are two cases. First, if v_i(π_j) > E[v_i(π_{D(π_{-i})})], then sender i has a profitable deviation to always play a fully-informative signal. If instead v_i(π_j) = E[v_i(π_{D(π_{-i})})], then it must be the case that for all k ≠ i and signals π_k ∈ supp(π_k) for which v_R(π_k) ≥ τ, the equality v_i(π_k) = E[v_i(π_{D(π_{-i})})] holds. If some such π_k is not fully informative, then sender i can invoke the same profitable deviation as above, by simulating and improving upon π_k as in Lemma 3. If, on the other hand, all such π_k's are fully informative, then whenever the receiver chooses a signal from any sender other than i, that signal is fully informative. But this implies that the receiver always has a fully-informative signal available, and, since such a signal is optimal for him, he will always choose it. This contradicts the assumption that π is not fully informative. □

Proof of Theorem 1: Fix a NE π, and suppose towards a contradiction that π is not fully informative.

Observe that all senders have the same minimal τ_i, namely τ_i = τ. To see this, let j be the sender for whom τ_j is maximal-namely, τ_j ≥ τ_k for all k ∈ {1, . . . , n}-and suppose there is some sender i for whom τ_i < τ_j. But this implies that π_i has positive measure on those signals that lead to receiver payoffs in [τ_i, τ_j), signals that are never chosen. By Lemma 4, this is a contradiction.

Now, note that, since π is not fully informative, the signals leading to receiver utility τ are not fully informative (since fully informative signals lead to utilities that are maximal for the receiver and are thus always chosen). Next, we consider three cases, corresponding to whether no, some, or all sender strategies put positive measure on the set of signals that lead to receiver utility exactly equal to τ:

(i) P(v_R(π_i) = τ) = 0 for all i ∈ {1, . . . , n}.
(ii) P(v_R(π_i) = τ) > 0 and P(v_R(π_j) = τ) = 0 for some i, j ∈ {1, . . . , n}.
(iii) P(v_R(π_i) = τ) > 0 for all i ∈ {1, . . . , n}.

We begin with case (i). Let i be any sender, and note that the equality P(v_R(π_i) = τ) = 0 implies that P(v_R(π_i) ∈ (τ, τ + ε]) > 0 for all ε > 0 (for otherwise, τ would not be in the support of v_R(π_i)). Thus, sender i chooses a signal π_i with v_R(π_i) ∈ (τ, τ + ε] with positive probability. Furthermore, the probability that i's signal is chosen by the receiver, when he plays such a signal, approaches 0 with ε: as ε approaches 0, the probability that there are some j ≠ i and π_j such that v_R(π_j) > τ + ε approaches 1, and the receiver will choose the sender offering the signal with the highest v_R. Thus, sender i's strategy π_i has positive measure on a set of signals that is almost never chosen by the receiver. By Lemma 4, this is a contradiction.

We now consider case (ii), that P(v_R(π_i) = τ) > 0 and P(v_R(π_j) = τ) = 0 for some i, j ∈ {1, . . . , n}. Here, observe that signals π_i ∈ supp(π_i | v_R(π_i) = τ) are played by sender i with positive probability, but are chosen with probability 0 (since any signal of sender j will be preferred over π_i). As above, this again implies that sender i's strategy π_i has positive measure on a set of signals that is almost never chosen by the receiver. Again by Lemma 4, this is a contradiction.

Finally, we consider case (iii), that P(v_R(π_i) = τ) > 0 for all i ∈ {1, . . . , n}. Note that, under case (iii), P(v_R(π_{D(π)}) = τ) > 0. We consider three sub-cases.

First, suppose that for some sender i it holds that E[v_i(π_i) | v_R(π_{D(π)}) = τ] ≥ E[v_i(π_j) | v_R(π_{D(π)}) = τ] for all j ≠ i, with a strict inequality for at least one such j. Then sender i has the following profitable deviation: instead of choosing (π_i | v_R(π_i) = τ), sender i replaces every π_i ∈ supp(π_i | v_R(π_i) = τ) with a π′_i such that with (arbitrarily) small probability ε signal π′_i is realized as a fully-informative signal-that is, it is realized as arg max_{a∈A} u_R(a, ω_R) in every state ω_R ∈ Ω_R-and with probability 1 − ε it is realized as π_i.
For any ε > 0, the new signal π′_i will be more informative than π_j, and so the receiver will choose sender i with probability 1 (conditional on v_R(π_{D(π)}) = τ), leading to a strict increase in i's utility. Since ε can be arbitrarily small, this does not harm sender i much, and so for small enough ε the overall gain from the deviation is positive.

The second sub-case is that for some sender i it holds that E[v_i(π_i) | v_R(π_{D(π)}) = τ] < E[v_i(π_j) | v_R(π_{D(π)}) = τ] for some j ≠ i. In this case, sender i has the following profitable deviation: replace (π_i | v_R(π_i) = τ) by π′_i, where π′_i is the "simulation" of (π_j | v_R(π_j) = τ). That is, for each π_j ∈ supp(π_j | v_R(π_j) = τ), i plays the simulation π′_i of π_j guaranteed to exist by Lemma 1. Under this deviation, sender i's utility, conditional on receiver utility equal to τ, is equal to E[v_i(π_j) | v_R(π_{D(π)}) = τ]. By assumption, this is a strict improvement over E[v_i(π_i) | v_R(π_{D(π)}) = τ].

The third sub-case is that for some sender i it holds that E[v_i(π_i) | v_R(π_{D(π)}) = τ] = E[v_i(π_j) | v_R(π_{D(π)}) = τ] for all j ≠ i. Note that this expected utility is thus equal to E[v_i(π_{D(π)}) | v_R(π_{D(π)}) = τ]. Let ℓ and π_ℓ satisfy v_i(π_ℓ) ≥ v_i(π_k) for all senders k ≠ i and signals π_k ∈ supp(π_k); that is, π_ℓ is the signal yielding the highest utility to sender i played by any other sender. In this third sub-case sender i has the following profitable deviation. If π_ℓ is not fully informative, replace (π_i | v_R(π_i) = τ) by π′_i, where π′_i is the improvement on the simulation of π_ℓ guaranteed to exist by Lemma 3; by the lemma, both π′_i ≻_R π_ℓ and π′_i ≻_i π_ℓ. Since sender i deviates to a signal that yields him the highest possible utility over all other senders' signals, this deviation is profitable. If π_ℓ is fully informative and v_i(π_ℓ) > E[v_i(π_i) | v_R(π_{D(π)}) = τ], replace (π_i | v_R(π_i) = τ) by a fully-informative signal. This again is a profitable deviation. Finally, suppose π_ℓ is fully informative and that for all senders k ≠ i and signals π_k ∈ supp(π_k | v_R(π_k) = τ) the equality v_i(π_k) = E[v_i(π_{D(π)}) | v_R(π_{D(π)}) = τ] holds. If some such π_k is not fully informative, then sender i can invoke the same profitable deviation as above, by simulating and improving upon π_k as in Lemma 3. If, on the other hand, all such π_k's are fully informative, then whenever the receiver chooses a signal π_k from any sender other than i, and obtains utility v_R(π_k) = τ, that signal is fully informative. However, this is a contradiction, since π is not fully informative and so τ is strictly less than the receiver's optimal utility. □

Underlying our results is an observation that competing senders always prefer to be chosen by the receiver at the interim stage. Consequently, they engage in a kind of war of attrition and yield more and more information to the receiver until, finally, they disclose all of the receiver's payoff-relevant information. This result holds whenever the senders are not perfectly aligned in their interests. The sender's advantage when he monopolizes information thus becomes a knife-edge advantage with more than one sender: whenever senders' interests are perfectly aligned they can still enjoy this advantage, but any perturbation, no matter how small, will result in the complete transfer of the surplus to the receiver.

To model the possibility of senders' misalignment we consider a state space with a product structure, whereby each player's utility function depends on only one entry of the state.
Note that, as we allow correlation between senders and receiver, this model does not exclude the possibility that a sender and the receiver care about the same set of states (as is standard in the single-sender model). What our model allows is the introduction of conditions whereby one sender is not perfectly informed about the states of other senders, and so misalignment of interests is always possible.

Appendix A

In this appendix we discuss extensions of our main result to different tie-breaking rules of the receiver. We first observe that, although we assumed that when the receiver is indifferent between several senders' signals he chooses one uniformly at random, this assumption is not necessary for our proof. In fact, for the proof to go through, we need the following property: for every profile π = (π_1, . . . , π_n), sender i, deviation π′_i, and profile π′ = (π′_i, π_{-i}), if i ∉ supp(D(π)) and i ∉ supp(D(π′)), then the distributions D(π) and D(π′) are identical. This property is satisfied by any tie-breaking rule that depends only on the identities and signals of the best senders, from the receiver's perspective. It could even involve different distributions for different top senders or different signals that they choose. We note that without such a property, one problem that may arise is that a deviating sender may be "punished" by the receiver for deviating: for example, suppose i is never chosen under profile π, nor under π′. The receiver may nevertheless break ties differently under π′ than under π, in such a way as to harm i and so discourage him from deviating. This possibility is not accounted for in our proof of Theorem 1. We do not know whether our main result holds without this restriction on tie-breaking rules.

For the special case in which we restrict to pure NE, however, we do have a stronger result-one that is even stronger than only allowing for unrestricted tie-breaking rules. To describe this strengthening, denote by D the set of all optimal decision rules for the receiver:

D = {D : Π_1 × . . . × Π_n → Δ({1, . . . , n}) : i ∈ supp(D(π_1, . . . , π_n)) ⇒ π_i ⪰_R π_j ∀j ∈ {1, . . . , n}}.

In words, given a profile of players' signals, an optimal decision rule D of the receiver is any distribution over senders whose signals he prefers over the others'. We now define the notion of a (pure) pessimistic Nash equilibrium:

Definition 4 A pure pessimistic Nash equilibrium of the game is a profile (π_1, . . . , π_n) and a receiver's decision rule D ∈ D with the following property: for every sender i and signal π′_i there exists a decision rule D′ ∈ D for which

E[u_i(a, ω_i) | π_{D′(π′_i, π_{-i})}] ≤ E[u_i(a, ω_i) | π_{D(π_1, . . . , π_n)}],

where (π′_i, π_{-i}) is the profile (π_1, . . . , π_n) but with π′_i replacing π_i.

Note the quantifier on D′: counterfactually, a deviation is deemed profitable only if it is profitable under every optimal decision rule following the deviation. Hence the qualifier in the definition above-a deviating sender is pessimistic about the receiver's reaction to his deviation. Note that the set of pure pessimistic Nash equilibria is a superset of the set of pure NE, and so the weaker definition renders our main result stronger. In particular, the proof of Theorem 1 actually shows that all pure pessimistic NE of the game are fully informative. To see this, note that the proof, when restricted to pure equilibria, does the following: in a non-fully-informative profile, whenever a sender is not chosen with probability 1, he can construct a profitable deviation under which he will be chosen with probability 1. That is, the deviation is such that his signal is strictly preferred by the receiver over all other signals, and so the issue of tie-breaking is rendered moot.
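For concreteness, the uniform tie-breaking rule from the body of the paper and the invariance property discussed in this appendix can be expressed in a few lines (illustrative, with names of our own choosing):

```python
def decision_distribution(v_R_values):
    """The uniform tie-breaking rule: choose uniformly among the senders
    whose signals maximize the receiver's expected utility.
    Input: v_R_values[i] = v_R(pi_i)."""
    best = max(v_R_values)
    winners = [i for i, v in enumerate(v_R_values) if v == best]
    return {i: 1.0 / len(winners) for i in winners}

# The invariance property needed for the proof: if sender 0 is outside the
# support of D both before and after his deviation, the distribution over
# the remaining senders is unchanged.
print(decision_distribution([0.55, 0.70, 0.70]))  # {1: 0.5, 2: 0.5}
print(decision_distribution([0.60, 0.70, 0.70]))  # {1: 0.5, 2: 0.5} again
```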
References

Alonso, R. and Câmara, O. (2016). Persuading voters.
Arieli, I. and Babichenko, Y. (2019). Private Bayesian persuasion.
Au, P. H. (2015). Dynamic information disclosure.
Au, P. H. and Kawai, K. (2020). Competitive information disclosure by multiple senders.
Aumann, R. J. and Maschler, M. (1966). Game theoretic aspects of gradual disarmament.
Celli, A., Coniglio, S., and Gatti, N. (2020). Private Bayesian persuasion with sequential games.
Ely, J., Frankel, A., and Kamenica, E. (2015). Suspense and surprise.
Emek, Y., Feldman, M., Gamzu, I., Paes Leme, R., and Tennenholtz, M. (2012). Signaling schemes for revenue maximization.
Gentzkow, M. and Kamenica, E. (2017a). Bayesian persuasion with multiple senders and rich signal spaces.
Gentzkow, M. and Kamenica, E. (2017b). Competition in persuasion.
Goldstein, I. and Leitner, Y. (2018). Stress tests and information disclosure.
Kamenica, E. (2019). Bayesian persuasion and information design.
Kamenica, E. and Gentzkow, M. (2011). Bayesian persuasion.