key: cord-1049632-6umhh5qm authors: Kordonis, Ioannis; Lagos, Athanasios-Rafail; Papavassilopoulos, George P. title: Dynamic Games of Social Distancing During an Epidemic: Analysis of Asymmetric Solutions date: 2021-10-11 journal: Dyn Games Appl DOI: 10.1007/s13235-021-00403-1 sha: f5ea9eef558dcac0c483f29c653c9117fd4d9f36 doc_id: 1049632 cord_uid: 6umhh5qm

Individual behaviors play an essential role in the dynamics of transmission of infectious diseases, including COVID-19. This paper studies a dynamic game model that describes the social distancing behaviors during an epidemic, assuming a continuum of players and individual infection dynamics. The evolution of the players’ infection states follows a variant of the well-known SIR dynamics. We assume that the players are not sure about their infection state, and thus, they choose their actions based on their individually perceived probabilities of being susceptible, infected, or removed. The cost of each player depends both on her infection state and on the contact with others. We prove the existence of a Nash equilibrium and characterize Nash equilibria using nonlinear complementarity problems. We then exploit some monotonicity properties of the optimal policies to obtain a reduced-order characterization for Nash equilibrium and reduce its computation to the solution of a low-dimensional optimization problem. It turns out that, even in the symmetric case, where all the players have the same parameters, players may have very different behaviors. We finally present some numerical studies that illustrate this interesting phenomenon and investigate the effects of several parameters, including the players’ vulnerability, the time horizon, and the maximum allowed actions, on the optimal policies and the players’ costs.

The COVID-19 pandemic is one of the most important events of this era. By early April 2021, it had caused more than 2.8 million deaths and an unprecedented economic depression, and it had affected most aspects of people's lives in a large part of the world. During the first phases of the pandemic, non-pharmaceutical interventions (primarily social distancing) were among the most effective tools to control its spread [13]. Due to the slow roll-out of the vaccines, their uneven distribution, the emergence of SARS-CoV-2 variants, age limitations, and people's resistance to vaccination, social distancing is likely to remain important in a large part of the globe for the near future.

Mathematical modeling of epidemics dates back to the early twentieth century, with the seminal works of Ross [33] and Kermack and McKendrick [24]. A widely used modeling approach separates people into several compartments according to their infection state (e.g., susceptible, exposed, infected, recovered) and derives differential equations describing the evolution of the population of each compartment (for a review, see [1]). However, the description of individual behaviors (practice of social distancing, use of face masks, vaccination, etc.) is essential for understanding the spread of epidemics. Game theory is thus a particularly relevant tool. A dynamic game model describing the voluntary implementation of non-pharmaceutical interventions (NPIs) was presented in [32]. Several extensions have been published, including the case where infection severity depends on the epidemic burden [6] and different formulations of the cost functions (linear vs. nonlinear, and finite horizon vs. discounted infinite horizon or stochastic horizon) [11, 15, 37].
Aurell et al. [4] study the design of incentives to achieve optimal social distancing, in a Stackelberg game framework. The works [2, 10, 12, 17, 22, 30, 31, 39] study different aspects of the coupled dynamics between individual behaviors and the spread of an epidemic, in the context of evolutionary game theory. Network game models appear in [3, 18, 27, 29] , and the effects of altruism on the spread of epidemics are studied in [8, 14, 23, 26] . Another closely related stream of research is the study of the adoption of decentralized protection strategies in engineered and social networks [19, 21, 36, 38] . A review of game theoretic models for epidemics (including also topics other than social distancing, e.g., vaccination) is presented in [9, 20] . This paper presents a dynamic game model to describe the social distancing choices of individuals during an epidemic, assuming that the players are not certain about their infection state (susceptible (S), infected (I), or removed (R)). The probability that a player is at each health state evolves dynamically depending on the player's distancing behavior, the others' behavior, and the prevalence of the epidemic. We assume that the players care both about their health state and about maintaining their social contacts. The players may have different characteristics, e.g., vulnerable versus less vulnerable, or care differently about maintaining their social contacts. We assume that the players are not sure about their infection state, and thus, they choose their actions based on their individually perceived probabilities of being susceptible, infected, or removed. In contrast with most of the literature, in the current work, players-even players with the same characteristics-are allowed to behave differently. We first characterize the optimal action of a player, given the others' behavior, and show some monotonicity properties of optimal actions. We then prove the existence of a Nash equilibrium and characterize it in terms of a nonlinear complementarity problem. Using the monotonicity of the optimal solution, we provide a simple reduced-order characterization of the Nash equilibrium in terms of a nonlinear programming problem. This formulation simplifies the computation of the equilibria drastically. Based on that result, we performed numerical studies, which verify that players with the same parameters may follow different strategies. This phenomenon seems realistic since people facing the same risks or belonging to the same age group often have different social distancing behaviors. The model presented in this paper differs from most of the dynamic game models presented in the literature in the following ways: (a) The players act without knowing their infection state. However, they know the probability of being susceptible, infected, or recovered, which depends on their previous actions and the prevalence of the epidemic. (b) The current model allows for asymmetric solutions, i.e., players with the same characteristics may behave differently. The rest of the paper is organized as follows. Section 2 presents the game theoretic model. In Sect. 3, we analyze the optimization problem of each player and prove some monotonicity properties. In Sect. 4, we prove the existence of the equilibrium and provide Nash equilibrium characterizations. Section 5 presents some numerical results. Finally, 'Appendix' contains the proof of the results of the main text. 
This section presents the dynamic model for the epidemic spread and the social distancing game among the members of the society. We assume that the infection state of each agent can be susceptible (S), infected (I), recovered (R), or dead (D). A susceptible person gets infected at a rate proportional to the number of infected people she meets. An infected person either recovers or dies, at constant rates that depend on her vulnerability. An individual who has recovered from the infection is immune, i.e., she cannot get infected again. The evolution of the infection state of an individual is shown in Fig. 1.

We assume that there is a continuum of agents. This approximation is frequently used in game theoretic models dealing with a very large number of agents. The set of players is described by the measure space ([0, 1), B, μ), where B is the Borel σ-algebra and μ the Lebesgue measure. That is, each player is indexed by an i ∈ [0, 1). Denote by S_i(t), I_i(t), R_i(t), D_i(t) the probabilities that player i ∈ [0, 1) is susceptible, infected, recovered, or dead at time t. The dynamics is given by:

dS_i/dt = -r u_i(t) I^f(t) S_i(t),
dI_i/dt = r u_i(t) I^f(t) S_i(t) - α_i I_i(t),   (1)

with the removal flow α_i I_i(t) split between recovery and death at constant rates depending on the player's vulnerability, where r and α_i are positive constants, and u_i(t) is the action of player i at time t. The quantity u_i(t) describes player i's socialization, which is proportional to the time she spends in public places. The quantity I^f(t), which denotes the density of infected people in public places, is given by:

I^f(t) = ∫_0^1 u_j(t) I_j(t) dμ(j).   (2)

For the actions of the players, we assume that there are positive constants u_m and u_M such that

u_m ≤ u_i(t) ≤ u_M.

The constant u_m describes the minimum social contacts needed for an agent to survive, and u_M is an upper bound imposed by the government. The cost function of player i is given by:

J_i = G_i (1 - S_i(T)) - s_i ∫_0^T u_i(t) ũ(t) dt - s_i κ ∫_0^T u_i(t) dt,   (4)

where T is the time horizon. The parameter G_i > 0 depends on the vulnerability of the player and indicates the disutility a player experiences if she gets infected. The quantity 1 - S_i(T) corresponds to the probability that player i gets infected before the end of the time horizon. (Note that, in that case, the infection state of the player at the end of the time horizon is I, R, or D.) The second term corresponds to the utility a player derives from the interaction with the other players, whose mean action is denoted by ũ(t):

ũ(t) = ∫_0^1 u_j(t) dμ(j).

The third term indicates the interest of a person to visit public places. The relative magnitude of this desire is modeled by a positive constant κ. Finally, the constant s_i indicates the importance player i gives to the last two terms, which correspond to going out and interacting with other people.

Considering the auxiliary variable ū(t):

ū(t) = ũ(t) + κ,

and computing S_i(T) by solving (1), the cost can be written equivalently as:

J_i = G_i - G_i S_i(0) exp(-∫_0^T r u_i(t) I^f(t) dt) - s_i ∫_0^T u_i(t) ū(t) dt.   (6)

Without loss of generality, assume that R_i(0) = D_i(0) = 0 for all i ∈ [0, 1).

Assumption 1 There are M types of players. Particularly, there are M + 1 values 0 = ī_0 < ··· < ī_M = 1 such that the functions S_i(0), I_i(0) and the parameters G_i, s_i, α_i are constant on each interval [ī_{j-1}, ī_j).

Denote by m_j = μ([ī_{j-1}, ī_j)) the mass of the players of type j. Of course, m_1 + ··· + m_M = 1. The finite number of types assumption is very common in many applications dealing with a large number of agents. For example, in the current COVID-19 pandemic, people are grouped based on their age and/or underlying diseases to be prioritized for vaccination. Assumption 1, combined with some results of the following section, is convenient for describing the evolution of the states of a continuum of players using a finite number of differential equations.
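To illustrate the model, the following Python sketch simulates the type-level dynamics under piecewise-constant actions, using the form of the dynamics (1) reconstructed above, with the infected density in public places computed for finitely many types as I^f = Σ_j m_j u_j I_j. All numerical values (r, α_j, the masses, the initial conditions, and the action schedule) are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def simulate(S0, I0, masses, alpha, actions, r=1.0, dt=0.01, steps_per_interval=700):
    """Euler simulation of the type-level dynamics (assumed form of Eq. (1)).

    S0, I0, masses, alpha: arrays of length M (one entry per type).
    actions: array of shape (N, M); actions[k, j] is the constant action of
             type j on the k-th decision interval.
    Returns the trajectories of S and I sampled at every Euler step.
    """
    S, I = np.array(S0, float), np.array(I0, float)
    traj_S, traj_I = [S.copy()], [I.copy()]
    for u_k in actions:                      # one decision interval at a time
        for _ in range(steps_per_interval):
            I_f = np.sum(masses * u_k * I)   # infected density in public places
            new_inf = r * u_k * I_f * S      # rate of new infections per type
            S = S - dt * new_inf
            I = I + dt * (new_inf - alpha * I)
            traj_S.append(S.copy())
            traj_I.append(I.copy())
    return np.array(traj_S), np.array(traj_I)

# Illustrative run: two types, the second more cautious (lower action).
masses = np.array([0.5, 0.5])
actions = np.tile(np.array([[0.8, 0.4]]), (13, 1))   # 13 decision intervals
S_traj, I_traj = simulate(S0=[0.999, 0.999], I0=[0.001, 0.001],
                          masses=masses, alpha=np.array([0.7, 0.7]),
                          actions=actions)
print("final susceptible fractions per type:", S_traj[-1])
```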
Assumption 2 The interval [0, T) can be divided into subintervals [t_k, t_{k+1}), with 0 = t_0 < t_1 < ··· < t_N = T, such that the actions of the players are constant on these intervals.

Remark 2 Assumption 2 indicates that people decide only at a finite number of time instants t_k and follow their decisions during the time interval [t_k, t_{k+1}). A reasonable length for such an interval could be one week.

The action of player i in the interval [t_k, t_{k+1}) is denoted by u^i_k. Under Assumptions 1-3, there is a unique solution to the differential equations (1), with initial conditions S_·(0), I_·(0), and the integrals in (2), (4) are well defined (see Appendix A.1). We use the following notation:

I^f_k = ∫_{t_k}^{t_{k+1}} I^f(t) dt,    ū_k = ∫_{t_k}^{t_{k+1}} ū(t) dt.

For each player, we define an auxiliary cost by dropping the fixed terms of (6) and dividing by s_i:

J̃_i(u^i) = -b_i exp(-Σ_{k=0}^{N-1} r u^i_k I^f_k) - Σ_{k=0}^{N-1} u^i_k ū_k,   (7)

where b_i = S_i(0) G_i / s_i and u^i = [u^i_0, ..., u^i_{N-1}]^T. Denote by U = [u_m, u_M]^N the set of possible actions for each player. Observe that u^i minimizes J_i over the feasible set U if and only if it minimizes the auxiliary cost J̃_i. Thus, the optimization problem for player i is equivalent to:

minimize J̃_i(u^i) subject to u^i ∈ U.   (8)

Note that the cost of player i depends on the actions of the other players through the terms ū and I^f. Furthermore, the current actions of all the players affect the future values of ū and I^f through the SIR dynamics.

Assumption 5 Each player i has access only to the probabilities S_i and I_i and the aggregate quantities ū and I^f, but not to the actual infection states.

This assumption is reasonable in cases where test availability is very sparse, so that the agents cannot obtain reliable feedback on their health states. In the rest of the paper, we suppose that Assumptions 1-5 are satisfied.

In this section, we analyze the optimization problem of a representative player i, given ū_k and I^f_k > 0, for k = 0, ..., N-1. Let us first define the following composite optimization problem:

minimize over A ∈ [A_m, A_M] the function -b_i e^{-A} + f(A),   (9)

where A_m = Σ_k r u_m I^f_k, A_M = Σ_k r u_M I^f_k, and:

f(A) = min { -Σ_{k=0}^{N-1} u_k ū_k : Σ_{k=0}^{N-1} r u_k I^f_k = A, u ∈ U }.   (10)

The following proposition proves that (8) and (9) are equivalent and expresses their solution in a simple threshold form.

Proposition 1 (i) Any optimal solution u^i of (8) satisfies u^i_k ∈ {u_m, u_M} for all k. (ii) Problems (8) and (9) are equivalent, in the sense that they have the same optimal values, and u^i minimizes (8) if and only if there is an optimal A for (9) such that u^i attains the minimum in (10). Moreover, on [A_m, A_M], the function f is continuous, non-increasing, convex, and piecewise affine; furthermore, it has at most N affine pieces. (v) The optimal action has a threshold form: there is a λ such that u^i_k = u_m if ū_k/I^f_k < λ and u^i_k = u_M if ū_k/I^f_k > λ.

The idea of the proof of Proposition 1 is to reduce the problem of minimizing (8) to the minimization of the sum of the concave function -b_i e^{-A} and the piecewise affine function f(A). Then, the candidates for the minimum are only the corner points of f(A) and the endpoints of the interval on which f is defined. The form of the optimal action u^i comes from a Lagrange multiplier analysis of (10). For a detailed proof, see Appendix A.2.

Remark 4 Part (v) of the proposition shows that if the density of infected people in public places I^f is high, or the average socialization ū is low, then it is optimal for a player to choose a small degree of socialization. The optimal action of each player depends on the ratio ū_k/I^f_k. Particularly, there is a threshold λ such that the action of player i is u_m for values of the ratio below the threshold and u_M for ratios above the threshold. Note that the threshold is different for each player.

The fact that the optimal value of a linear program is a convex function of the constraint constants is known in the literature (e.g., see [35], Chapter 2). Thus, the convexity of the function f is already known from the literature.
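As a small illustration of the threshold structure in Proposition 1(v), the sketch below evaluates the auxiliary cost in the reconstructed form (7) and computes a best response by enumerating the candidate thresholds ū_k/I^f_k, which is essentially the simple procedure described next. The inputs (b_i, r, and the aggregates I^f_k, ū_k) are hypothetical values chosen only for the example.

```python
import numpy as np

def aux_cost(u, b, r, If, ubar):
    """Auxiliary cost (assumed form of Eq. (7)): -b*exp(-sum r u_k If_k) - sum u_k ubar_k."""
    return -b * np.exp(-np.sum(r * u * If)) - np.sum(u * ubar)

def best_response(b, r, If, ubar, u_m, u_M):
    """Best response of a single player to given aggregates If_k, ubar_k.

    By Proposition 1, an optimal action equals u_M where ubar_k/If_k exceeds
    some threshold and u_m elsewhere, so it suffices to test the N ratios as
    candidate thresholds (plus the all-u_m action).
    """
    ratios = ubar / If
    candidates = [np.full_like(ubar, u_m)]            # threshold above all ratios
    for lam in sorted(set(ratios)):
        candidates.append(np.where(ratios >= lam, u_M, u_m))
    costs = [aux_cost(u, b, r, If, ubar) for u in candidates]
    return candidates[int(np.argmin(costs))]

# Hypothetical aggregates over N = 4 decision intervals.
If = np.array([0.02, 0.05, 0.04, 0.01])      # infected density per interval
ubar = np.array([3.5, 3.2, 3.4, 3.6])        # mean action plus kappa, per interval
u = best_response(b=200.0, r=1.0, If=If, ubar=ubar, u_m=0.3, u_M=0.8)
print("best-response actions:", u)
```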
There is a simple way to solve the optimization problem (8) using the following steps. For each λ in the set Λ = {ū_k/I^f_k : k = 0, ..., N-1} ∪ {+∞}, compute u^λ with:

u^λ_k = u_M if ū_k/I^f_k ≥ λ, and u^λ_k = u_m otherwise.

Then, compare the values of J̃_i(u^λ), for all λ ∈ Λ, and choose the minimum.

We now prove some monotonicity properties of the optimal control.

Proposition 2 Assume that for two players i_1 and i_2, with parameters b_{i_1} and b_{i_2}, the minimizers of (9) are A_1 and A_2, respectively, and u^{i_1} and u^{i_2} are the corresponding optimal actions. Then:
(i) If b_{i_2} > b_{i_1}, then A_1 ≥ A_2.
(ii) If b_{i_2} > b_{i_1}, then the corresponding thresholds satisfy λ_1 ≤ λ_2 and, thus, u^{i_1}_k ≥ u^{i_2}_k for all k.
(iii) For any two players, the optimal actions are comparable; that is, it cannot happen that u^{i_1}_{k_1} > u^{i_2}_{k_1} and u^{i_1}_{k_2} < u^{i_2}_{k_2} for a pair of time steps k_1, k_2.

Thus, Proposition 2(ii) expresses the fact that if (a) a person is more vulnerable, i.e., she has a large G_i, or (b) she derives less utility from the interaction with the others, i.e., she has a smaller s_i, or (c) it is more likely that she is not yet infected, i.e., she has a larger S_i(0), then she interacts less with the others. The optimal control law can also be expressed in feedback form (see Appendix A.4.1).

In this section, we prove the existence of a Nash equilibrium and characterize it in terms of a nonlinear complementarity problem (NCP). By Proposition 1(i), it suffices to consider the pure strategies v_1, ..., v_{2^N}, i.e., the 2^N vectors with entries in {u_m, u_M}. Denote by:

Π_j = { π^j ∈ R^{2^N} : π^j_l ≥ 0, Σ_{l=1}^{2^N} π^j_l = m_j }

the set of possible distributions of actions of the players of type j, and by Π = Π_1 × ··· × Π_M the set of all possible distributions. Finally, let F : Π → R^{2^N · M} be the vector function of auxiliary costs; that is, the component F_{(j-1)2^N+l}(π) is the auxiliary cost, as introduced in (7), of a player of type j playing the strategy v_l when the distribution of actions is π. We denote by F_j(π) = [F_{(j-1)2^N+1}(π), ..., F_{j2^N}(π)] the vector of the auxiliary costs of the players of type j playing v_l, l = 1, ..., 2^N.

Let us recall the notion of a Nash equilibrium for games with a continuum of players (e.g., [28]).

Definition 1 A distribution of actions π ∈ Π is a Nash equilibrium if, for all j = 1, ..., M and l = 1, ..., 2^N:

π_{(j-1)2^N+l} > 0 implies F_{(j-1)2^N+l}(π) = min_{l'} F_{(j-1)2^N+l'}(π).

Let δ_j(π) be the value of problem (8), i.e., the minimum value of the auxiliary cost of an agent of type j. This value depends on π through the terms I^f and ū. Define Φ_j(π) = F_j(π) - δ_j(π) and Φ(π) = [Φ_1(π) ... Φ_M(π)]. We then characterize a Nash equilibrium in terms of a nonlinear complementarity problem (NCP):

π ∈ Π,   Φ(π) ≥ 0,   π ⊥ Φ(π),   (13)

where π ⊥ Φ(π) means that π^T Φ(π) = 0.

Proposition 3 (i) A distribution π ∈ Π corresponds to a Nash equilibrium if and only if it satisfies the NCP (13). (ii) A distribution π ∈ Π corresponds to a Nash equilibrium if and only if it satisfies the variational inequality:

(π' - π)^T F(π) ≥ 0, for all π' ∈ Π.   (14)

(iii) There exists a Nash equilibrium.

Proof See Appendix A.5.

In principle, we can use algorithms for NCPs to find Nash equilibria. The problem is that the number of decision variables grows exponentially with the number of decision steps. Thus, we expect that such methods would be applicable only for small values of N.

In this section, we use the monotonicity of the optimal strategies, shown in Proposition 2, to derive a reduced-order characterization of the Nash equilibrium. The actions at a Nash equilibrium have an interesting structure. Assume that π is a Nash equilibrium and:

V = { v_l : π_{(j-1)2^N+l} > 0 for some j }

is the set of actions used by a set of players with positive mass. Let us define a partial ordering ⪯ on Ũ = {v_1, ..., v_{2^N}}: v ⪯ v' if v_k ≤ v'_k for all k. Proposition 2(iii) implies that V is a totally ordered subset of Ũ (chain). For each time step k, denote by ρ_k the fraction of players who play u_M at step k, that is,

ρ_k = μ({ i ∈ [0, 1) : u^i_k = u_M }).

Given any vector ρ = [ρ_0 ... ρ_{N-1}] ∈ [0, 1]^N, we will show that there is a unique π ∈ Π such that the corresponding actions satisfy the conclusion of Proposition 2(iii) and induce the fractions ρ. An example of the relationship between π and ρ is given in Fig. 2. Let us define the following sets:

I_k = { i ∈ [0, 1) : u^i_k = u_M },   K_k = { i ∈ [0, 1) : u^i_k = u_m }.

Let k_1, ..., k_N be a reordering of 0, ..., N-1 such that ρ_{k_1} ≤ ρ_{k_2} ≤ ··· ≤ ρ_{k_N}.
Consider also the set V̂ = {v̂_1, ..., v̂_{N+1}} of N + 1 actions v̂_n with:

v̂_n having the component u_M at the time steps k_n, k_{n+1}, ..., k_N and the component u_m at the time steps k_1, ..., k_{n-1},   (16)

so that v̂_1 = [u_M, ..., u_M] and v̂_{N+1} = [u_m, ..., u_m]. Before stating the proposition, let us give an example.

Example 1 Consider the fractions described in Fig. 2. There are three types of players, with total masses m_1, m_2, and m_3, shown in blue, pink, and yellow, respectively. In this example, we assume that the actions of each player i ∈ [0, 1) are given by:

u^i_k = u_M if i < ρ_k, and u^i_k = u_m otherwise.   (17)

The sets I_k are then given by I_k = [0, ρ_k), and the sets K_k by K_k = [ρ_k, 1). The actions v̂_n are given by (16). The mass of the players of each type following each action is described in the following table:

Type 1 1 2 2 2 2 3 3
Mass (values as shown in Fig. 2)

The following proposition and its corollary present a method to compute π from ρ in the general case (i.e., without assuming a set of actions in a form similar to (17)).

Proposition 4 (i) For any pair of time steps k, k', it holds, up to subsets of measure zero, that either I_k ⊂ I_{k'} or I_{k'} ⊂ I_k.
(ii) If for some k, k', it holds that ρ_k = ρ_{k'}, then μ-almost surely all the players have the same action at k, k', i.e., μ({i : u^i_k = u^i_{k'}}) = 1.
(iii) Up to subsets of measure zero, the following inclusions hold: I_{k_1} ⊂ I_{k_2} ⊂ ··· ⊂ I_{k_N} and K_{k_1} ⊃ K_{k_2} ⊃ ··· ⊃ K_{k_N}.
(iv) For μ-almost all i ∈ I_{k_{n+1}} \ I_{k_n}, the action u^i is given by v̂_{n+1}; for μ-almost all i ∈ I_{k_1}, u^i = v̂_1; and for μ-almost all i ∈ [0, 1) \ I_{k_N}, u^i = v̂_{N+1}.

Proof See Appendix A.7.

Corollary 2 Given fractions ρ ∈ [0, 1]^N, the unique distribution π consistent with Proposition 4 and Proposition 2(ii) that induces the fractions ρ assigns, to the players of type j using the action v̂_n, the mass

μ([ī_{j-1}, ī_j) ∩ [ρ_{k_{n-1}}, ρ_{k_n})),   (20)

where we use the convention that ρ_{k_0} = 0 and ρ_{k_{N+1}} = 1.

Proof The proof follows directly from Propositions 4 and 2(ii).

There are at most M + N + 1 combinations of j, l such that π_{(j-1)2^N+l} > 0. Let us denote by π̂(ρ) the value of the vector π computed by (20).

Example 2 The situation is the same as in Example 1, but without assuming that the actions are given by (17). Then, Corollary 2 shows that π̂(ρ) is given by the table in Example 1.

Proposition 5 The fractions ρ_0, ..., ρ_{N-1} correspond to a Nash equilibrium if and only if:

H(ρ) := Σ_{j=1}^{M} Σ_{n=1}^{N+1} μ([ī_{j-1}, ī_j) ∩ [ρ_{k_{n-1}}, ρ_{k_n})) (F̃_{j,v̂_n}(π̂(ρ)) - δ_j(π̂(ρ))) = 0,   (21)

where F̃_{j,v̂_n}(π) is the cost of the action v̂_n for a player of type j. Furthermore, H(ρ) is continuous and nonnegative.

Proof See Appendix A.8.

The computation of an equilibrium has thus been reduced to the calculation of the minimum of an N-dimensional function. We exploit this fact in the following section to proceed with the numerical studies.

In this section, we give some numerical examples of Nash equilibrium computation. Section 5.1 presents an example with a single type of players, and Sect. 5.2 an example with many types of players. Section 5.3 studies the effect of the maximum allowed action u_M on the strategies and the costs of the players.

In this subsection, we study the symmetric case, i.e., all the players have the same parameters, and a fraction of the population is initially infected. We assume that κ = 3. We then compute the Nash equilibrium using a multi-start local search method for (21). We present some results for the case where b_i = 200. Figure 4 illustrates the evolution of S_i(t) and I_i(t) for players having different strategies. We observe that the trajectories of the S_i's do not intersect. What is probably more interesting is that the trajectories of the I_i's may intersect. This indicates that, toward the end of the time horizon, it is possible for a person who was less cautious, i.e., who used higher values of u_i, to have a lower probability of being infectious.
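Section 5.1 above mentions computing the equilibrium with a multi-start local search applied to (21). The sketch below shows only the search wrapper; H_of_rho is a stand-in placeholder (a smooth nonnegative test function), since a faithful implementation of H would require building π̂(ρ) via (20), simulating the dynamics, and evaluating the cost gaps.

```python
import numpy as np
from scipy.optimize import minimize

def H_of_rho(rho):
    """Placeholder for H(rho) in (21): nonnegative, and zero exactly at an equilibrium.

    A real implementation would (i) build pi_hat(rho) via (20), (ii) simulate the
    dynamics to obtain I^f_k and ubar_k, and (iii) sum the weighted cost gaps.
    Here a smooth nonnegative test function only illustrates the search itself.
    """
    target = np.linspace(0.2, 0.6, len(rho))          # hypothetical "equilibrium"
    return float(np.sum((rho - target) ** 2))

def multi_start_search(N, n_starts=20, seed=0):
    """Multi-start local search for a (near-)zero of H over [0, 1]^N."""
    rng = np.random.default_rng(seed)
    best_rho, best_val = None, np.inf
    for _ in range(n_starts):
        rho0 = rng.uniform(0.0, 1.0, size=N)
        res = minimize(H_of_rho, rho0, method="L-BFGS-B", bounds=[(0.0, 1.0)] * N)
        if res.fun < best_val:
            best_rho, best_val = res.x, res.fun
    return best_rho, best_val

rho, val = multi_start_search(N=13)
print("best H value found:", val)
```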
We then compute the Nash equilibrium for the case of multiple types of players. We assume that there are six types of players with vulnerability parameters G_1 = 100, G_2 = 200, G_3 = 400, G_4 = 800, G_5 = 1600, G_6 = 3200. The sociability parameter s_i is equal to 1 for all the players. The masses of these types are m_1 = 0.5 and m_2 = ··· = m_6 = 0.1. The initial condition is I_0 = 0.0001 for all the players, and the time horizon is 52 weeks (approximately a year). Here, we assume that the maximum action is u_M = 0.8. The rest of the parameters are as in the previous subsection.

We then analyze the case where the types of the players are as in Sect. 5.2 and the initial condition is I_0 = 0.005 for all the players, for various values of u_M. The time horizon is 13 weeks. Figure 6a illustrates the equilibrium fractions ρ_k for the various values of u_M. We observe that as u_M increases, the fractions ρ_k decrease. Figure 6b shows the evolution of the mass of infected players for the different values of the maximum action u_M. We observe that as u_M increases, the mass of infected players decreases. Figure 6c presents the costs of the several types of players for the different values of the maximum action u_M. We observe that players with low vulnerability (G = 100) always prefer a larger value of u_M, which corresponds to less stringent restrictions. For vulnerable players (e.g., G = 3200), the cost is an increasing function of u_M; that is, they prefer more stringent restrictions.

This paper studied a dynamic game of social distancing during an epidemic, with an emphasis on the analysis of asymmetric solutions. We proved the existence of a Nash equilibrium and derived some monotonicity properties of the agents' strategies. The monotonicity result was then used to derive a reduced-order characterization of the Nash equilibrium, simplifying its computation significantly. Through numerical experiments, we showed that both the agents' strategies and the evolution of the epidemic depend strongly on the agents' parameters (vulnerability, sociability) and the epidemic's initial spread. Furthermore, we observed that agents with the same parameters could have different behaviors, leading to rich, high-dimensional dynamics. We also observed that more stringent constraints on the maximum action (set by the government) benefit the more vulnerable players at the expense of the less vulnerable.

There are several directions for future work. First, we can study more general epidemic models than the SIR. Second, we can investigate different information patterns, including the cases where the agents receive regular or random information about their health state. Finally, we can compare the behaviors computed analytically with real-world data.

Note that the first two equations in (1) do not depend on R_i and D_i. Thus, it suffices to show that the first two equations of (1) have a unique solution. For any i ∈ [0, 1), if (S_i(t), I_i(t)) solve the differential equations (1) with initial condition (S_i(0), I_i(0)) ∈ [0, 1]^2, then (S_i(t), I_i(t)) remain in [0, 1]^2. Thus, we consider the solution of the modified differential equations (22), in which the right-hand sides of (1) are saturated by sat_B, where sat_B(z) = max(min(z, B), -B) and B = r u_M^2 + max_j α_j. Consider the Banach space X = L^1([0, 1), R^2), and let x_0 = (S_·(0), I_·(0)) : [0, 1) → R^2. Then, under Assumptions 1 and 3, it holds that x_0 ∈ X. For each interval [t_k, t_{k+1}), the differential equations (22) with the corresponding initial conditions can be written as:

dx/dt = f_k(x),   x(t_k) = x^k_0,   (23)

where, for x : i ↦ [S_i, I_i]^T, the value of f_k(x) ∈ X is given by the right-hand side of (22), evaluated with the actions u^·_k. For the initial condition, it holds that x^0_0 = x_0, and x^k_0 = x(t_k) is computed from the solution of (23) on the interval [t_{k-1}, t_k), for k ≥ 1. For all k, f_k is Lipschitz, and thus there is a unique solution to (23) (e.g., Theorem 7.3 of [7]).
Furthermore, both I · (t) and u · (t) are measurable and bounded. Thus, the integrals in (4), (2) are well defined. Note that from Assumption 1, we only used the fact that S · (0), I · (0) : [0, 1) → R are measurable and not the piecewise constant property. (i) Since r I f k > 0 and b i > 0, the cost (7) is strictly concave, with respect to u i k . Thus, the minimum with respect to u i k is either u m or u M . (ii) Since U is compact andJ is continuous, there is an optimal solution for (8) . Denote by u i, this solution. Further, denote by the values of problems (8) and (9), respectively. Then, for A = where the first inequality is due to the definition of V 2 and the second inequality is due to the definition of f (A ). To contradict, assume that V 2 < V 1 . Then, there is some A and an ε > 0 such that: Thus, there is aũ i such that A = N −1 k=0 rũ i k I f k and N −1 k=0ũ i kū k < f (A)+ε. Combining with (24), we getJ i (ũ) < V 1 − ε, which contradicts the definition of V 1 . For u i, minimizing (8), the problem (9) is minimized for A = N −1 k=0 ru i, k I f k and u i, attains the minimum in (10) . To see this, observe that otherwise we would have V 2 < V 1 . On the other hand, assume that A minimizes (9) and u i attains the minimum in (10) . Then, and thus, u i minimizes J i . (iii) The set u i ∈ U : N −1 is non-increasing and concave in S i k 0 . Using (28) , i.e., minimizing with respect to u i k 0 , we conclude to the desired result. (ii) The monotonicity properties of V k are a direct consequence of its definition (26) . The concavity of the optimal value follows easily from (i). (iii) From Proposition 1(i), we know that the optimal u i k is in the set, {u m , u M }. Using Proposition 2 to the subproblem starting at time step k, and applying the principle of optimality, we get the desired result. The fact that if S i k G i /s i = l k , then both u i k = u m and u i k = u M are optimal, is a consequence of the continuity ofṼ k . Propositions 6(iii) and 1(v) both express the optimal actions for a player i. The primary difference is that Proposition 1(v) uses dynamic programming, and thus, the policies obtained can better handle uncertainties in the individual dynamics of player i. Consider a Nash equilibrium π that induces the fractions ρ k , the mean actionsū k , and the mass of infected people in public places I f k , for k = 0, . . . , N −1. Consider also the auxiliary cost-to-go functionṼ k defined in (27) , and the corresponding thresholds l k . Then, based on the strategies in Proposition 28(iii) we will describe a Nash equilibrium in feedback strategies. Consider the set of strategies: . Proof Observe that the set of strategies (29) generates the same actions with the Nash equilibrium π. (i) Assume that a π ∈ satisfies (13) and fix a j ∈ {1, . . . , M}. For any l such that π ( j−1)2 N +l > 0, it holds ( j−1)2 N +l (π) = 0, that is F ( j−1)2 N +l (π) = δ j (π) = min l F ( j−1)2 N +l (π). Thus, π ∈ is a Nash equilibrium. Conversely, assume that π ∈ is a Nash equilibrium and fix a j ∈ {1, . . . , M}. There is an l such that π ( j−1)2 N +l > 0. Since π is a Nash equilibrium, it holds F ( j−1)2 N +l (π) = δ j (π) and for all other l , it holds F ( j−1)2 N +l (π) ≥ F ( j−1)2 N +l (π) = δ j (π), which implies (13) . (ii) Assume that π is a Nash equilibrium and π ∈ . Then, 2 N l=1 π ( j−1)2 N +l = 2 N l=1 π ( j−1)2 N +l = m j . Since π is a Nash equilibrium, it holds: Thus, (14) holds. Conversely, assume that (14) holds, for some π ∈ . 
If π is not a Nash equilibrium, then there is a j, l such that π ( j−1)2 N +l > 0 and F ( j−1)2 N +l > δ j (π). Then, if l is such that F ( j−1)2 N +l = δ j (π), taking π = π + π ( j−1)2 N +l e ( j−1)2 N +l − π ( j−1)2 N +l e ( j−1)2 N +l , we get (π − π) T F(π) < 0, which is a contradiction. (iii) With a slight abuse of notation, we write I f k (π),ū k (π) to describe the quantities I f k ,ū k when the distribution of actions is π andJ j (v l , π) to describe the auxiliary cost of a player of type j who plays action v j when the distribution of the actions is π. The quantities I f k (π),ū k (π),J j (v l , π), are continuous on π. Proof The state of the system evolves according to the set of M2 N +1 +1 differential equations: , and: The initial conditions are S j,v l (0) = S j (0), I j,v l (0) = I j (0), (Assumption 1) and z(0) = 0. The right-hand side of the differential equations depends continuously on π through the term I f . Furthermore, S j,v l (t), I j,v l (t) remain in [0, 1] for all j, v l . Thus, the state space of the system remains in [0, 1] M·2 N × R and the right-hand side of the differential equation is Lipschitz. Thus, Theorem 3.4 of [25] applies and S j,v l (t), I j,v j (t) and z(t) depend continuously on π. Thus, I f k = z(t k + 1)−z(t k ) depends continuously on π. Furthermore,ū k is continuous (linear) on π. Finally, the auxiliary cost J j (v l , π), due to its form (7), depends continuously π. To complete the proof, observe that F(π) is continuous and is compact and convex. Thus, the existence is a consequence of Corollary 2.2.5 of [16] . An alternative would be to use Theorem 1 of [34] or Theorem 1 of [28] , combined with Lemma 2 to prove the existence of a mixed Nash equilibrium and then use Assumption 1, to construct a pure strategy equilibrium. However, the reduction to an NCP is useful computationally. Thus, beginning from [u m , . . . , u m ] and changing at each step one point from u m to u M , we get a sequence of N + 1-ordered vectors. So, every maximal chain has length N + 1. Then, we prove that the number of such chains is N ! using induction. For N = 2, it is easy to verify that we have two chains of 3 elements. For N = n, we have n! maximal chains of n + 1 elements. Then, for N = n + 1 we consider one of the previous chains v 1 v 2 · · · v N and at each of its elements, we add an extra bit:ṽ i = [v i , β i ]. We observe that if β i = u M , then for all j > i it should hold β j = u M , in order for the new vectors to remain ordered under . Denote The fact that V has at most N + 1 elements is also a consequence of Corollary 1. (i) To contradict, assume that I k ⊂ I k and I k ⊂ I k . Then, there is a pair of players i 1 , i 2 such that u i 1 k = u i 2 k = u M and u i 2 k = u i 1 k = u m , which contradicts Proposition 2(iii). (ii) Without loss of generality, assume that I k ⊂ I k . Then: Thus, μ(I k \ I k ) = 0. Combining with I k ⊂ I k and the definition of I k , I k , we get μ({i : u i k = u i k }) = 1. (iii) The equality μ(I k ) = ρ k is immediate from the definition of ρ k . Consider a pair I k n , I k n+1 . There are two cases, ρ k n < ρ k n+1 and ρ k n = ρ k n+1 . In the first case, we cannot have I k n+1 ⊂ I k n . Thus, from (i) we have I k n ⊂ I k n+1 . If ρ k n = ρ k n+1 , then I k n ⊂ ∼ I k n+1 from part (ii). The inclusion K k n ⊃ K k n+1 is immediate from the definition. (iv) Let i ∈ I k n+1 \ I k n . Then, since i / ∈ I k n u i k n = u m for n ≤ n. On the other hand, μ-almost all i ∈ I k n+1 satisfy i ∈ I k n , for n > n. 
Thus, for μ-almost all i ∈ I_{k_{n+1}} \ I_{k_n}, the action u^i is given by (16). The proof is similar for i ∈ I_{k_1} and for i ∈ [0, 1) \ I_{k_N}.

If ρ corresponds to a Nash equilibrium, then, combining (13) and (20), we conclude that H(ρ) = 0. Conversely, since all the terms of (21) are nonnegative, H(ρ) = 0 implies that if μ([ī_{j-1}, ī_j) ∩ [ρ_{k_{n-1}}, ρ_{k_n})) > 0, then F_{(j-1)2^N+n}(π̂(ρ)) = δ_j(π̂(ρ)). Combining this with (20), we conclude that if, for some j, l, π_{(j-1)2^N+l} > 0, then F_{(j-1)2^N+l}(π) = δ_j(π), where π = π̂(ρ). That is, π is a Nash equilibrium.

From (20), we observe that π̂(ρ) is continuous with respect to ρ, since μ(·) is the Lebesgue measure. Moreover, (21) can be expressed as H(ρ) = π̂(ρ)^T Φ(π̂(ρ)). The fact that H(ρ) is nonnegative is a result of (13). Furthermore, from Lemma 2, F_{(j-1)2^N+n}(π) = J̃_j(v_n, π) is continuous with respect to π. Additionally, δ_j(π), which is the minimum of F_{(j-1)2^N+l}(π) = J̃_j(v_l, π) over all v_l, is continuous with respect to π as the minimum of continuous functions. So, Φ(π) = F(π) - δ(π) is continuous with respect to π, and H(ρ) is continuous with respect to ρ as a composition of continuous functions.
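The mapping (20) from the fractions ρ to the reduced distribution π̂(ρ) used in the proof above only involves Lebesgue measures of interval intersections. The sketch below computes these masses for hypothetical type boundaries ī_j and fractions ρ_k; it is a sketch of the assumed form of (20), not code from the paper.

```python
import numpy as np

def pi_hat(rho, type_boundaries):
    """Masses mu([i_{j-1}, i_j) ∩ [rho_{k_{n-1}}, rho_{k_n})) as in Eq. (20) (assumed form).

    rho: fractions rho_k, k = 0..N-1.
    type_boundaries: 0 = i_0 < ... < i_M = 1 defining the M types.
    Returns an (M, N+1) array; entry (j, n) is the mass of type j+1 playing v_hat_{n+1}.
    """
    rho_sorted = np.sort(np.asarray(rho, float))
    cuts = np.concatenate(([0.0], rho_sorted, [1.0]))   # rho_{k_0}=0, rho_{k_{N+1}}=1
    M = len(type_boundaries) - 1
    masses = np.zeros((M, len(cuts) - 1))
    for j in range(M):
        lo_j, hi_j = type_boundaries[j], type_boundaries[j + 1]
        for n in range(len(cuts) - 1):
            lo = max(lo_j, cuts[n])
            hi = min(hi_j, cuts[n + 1])
            masses[j, n] = max(0.0, hi - lo)             # Lebesgue measure of the overlap
    return masses

# Hypothetical example: 3 types occupying [0, 0.4), [0.4, 0.75), [0.75, 1) and N = 4 fractions.
print(pi_hat(rho=[0.3, 0.1, 0.5, 0.2], type_boundaries=[0.0, 0.4, 0.75, 1.0]))
```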
For A ∈ [A_m, A_M], there exists an optimal solution u^i that attains the minimum in (10). Since (10) is a feasible linear programming problem, there is a Lagrange multiplier λ (e.g., Proposition 5.2.1 of [5]), and u^i minimizes the corresponding Lagrangian. Thus, u^i_k = u_m if ū_k/I^f_k < λ and u^i_k = u_M if ū_k/I^f_k > λ. To compute f(A), we reorder k, using a new index k', such that ū_{k'}/I^f_{k'} is non-increasing. It follows that f is continuous and piecewise affine. Furthermore, since ū_{k'}/I^f_{k'} is non-increasing with respect to k', the slope of f is non-decreasing, i.e., f is convex.

We then show that, for all the non-differentiability points A of f, there is a unique u^i minimizing (10). If A is a non-differentiability point, it corresponds to a corner index k_0 in the reordered sequence. We then show that the unique minimizer in (10) is given by u^i_{k'} = u_M for k' ≤ k_0 and u^i_{k'} = u_m for k' > k_0. Indeed, u^i is feasible, and if u ≠ u^i is another feasible point, then, multiplying by ū_{k_0}/I^f_{k_0}, we obtain the corresponding inequality, which is strict if u_{k'} ≠ u_m for some k' > k_0. Therefore, u^i is optimal, and if u is also optimal, then it should satisfy u_{k'} = u_m for all k' > k_0. Combining this with the constraint Σ_{k=0}^{N-1} r u_k I^f_k = A, we conclude that u = u^i, i.e., the minimizer is unique. We have shown that if u^i is optimal, then there is a k_0 such that u^i_{k'} = u_M for k' ≤ k_0 and u^i_{k'} = u_m for k' > k_0. Then, using the original index k, the optimal control can be expressed in the threshold form of Proposition 1(v).

(i) Since A_1 is optimal for b_{i_1} and A_2 is optimal for b_{i_2}, it holds that:

-b_{i_1} e^{-A_1} + f(A_1) ≤ -b_{i_1} e^{-A_2} + f(A_2)   and   -b_{i_2} e^{-A_2} + f(A_2) ≤ -b_{i_2} e^{-A_1} + f(A_1).

Adding these inequalities and reordering, we get:

(b_{i_2} - b_{i_1})(e^{-A_1} - e^{-A_2}) ≤ 0,

and since b_{i_2} > b_{i_1}, we get A_1 ≥ A_2.

(ii) Using (v) of Proposition 1 and A_1 ≥ A_2, we get:

A_ℓ = Σ_{k} r I^f_k [u_m + (u_M - u_m) h_{λ_ℓ}(ū_k/I^f_k)],   ℓ = 1, 2,

where h_λ(·) is the Heaviside function, i.e., h_λ(x) = 1 if x ≥ λ and h_λ(x) = 0 otherwise. Since the right-hand side is non-increasing in λ_ℓ and A_1 ≥ A_2, it follows that λ_1 ≤ λ_2.

(iii) Assume that for k_1 ≠ k_2, u^{i_1}_{k_1} = u^{i_2}_{k_2} = u_m and u^{i_2}_{k_1} = u^{i_1}_{k_2} = u_M. Then, using (v) of Proposition 1, we obtain inequalities among ū_{k_1}/I^f_{k_1}, ū_{k_2}/I^f_{k_2}, λ_1, and λ_2 that cannot hold simultaneously, which is a contradiction.

In this section, we express the optimal control law in feedback form using dynamic programming. Let V_k(S^i_k, s_i, G_i) be the optimal cost-to-go from time k of a player with parameters s_i, G_i. The optimal cost-to-go can be expressed in terms of a function Ṽ_k(S^i_k, s_i/G_i), which we call the 'auxiliary cost-to-go.'

Proof (of Proposition 6) (i) The auxiliary cost-to-go can be written recursively; applying the principle of optimality, we obtain the corresponding dynamic programming recursion. We use induction. For k = N, the auxiliary cost-to-go is given by Ṽ_N(S^i_N, s_i/G_i) = -G_i S^i_N / s_i. That is, Ṽ_N(S^i_N, s_i/G_i) is non-increasing and concave in S^i_N. Assume that the claim holds for k = k_0. Then, for each fixed u^i_{k_0}, the quantity

References

An epidemiological model with voluntary quarantine strategies governed by evolutionary game dynamics.
Epidemic spreading and equilibrium social distancing in heterogeneous networks.
Optimal incentives to mitigate epidemics: a Stackelberg mean field game approach.
Bertsekas DP (1997) Nonlinear programming.
Game dynamic model of social distancing while cost of infection varies with epidemic burden.
Functional analysis.
Individual altruism cannot overcome congestion effects in a global pandemic game.
Game theoretic modelling of infectious disease dynamics and intervention methods: a review.
Modeling the effect of information quality on risk behavior change and the transmission of infectious diseases.
Mean-field game analysis of SIR model with social distancing.
Information-related changes in contact patterns may trigger oscillations in the endemic prevalence of infectious diseases.
ECDC (2020) Guidelines for the implementation of non-pharmaceutical interventions against COVID-19.
Disease dynamics in a stochastic network game: a little empathy goes a long way in averting outbreaks.
Contact rate epidemic control of COVID-19: an equilibrium view.
Finite-dimensional variational inequalities and complementarity problems.
Modelling the influence of human behaviour on the spread of infectious diseases: a review.
Impacts of game-theoretic activation on epidemic spread over dynamical networks.
Game-theoretic vaccination against networked SIS epidemics and impacts of human decision-making.
Game-theoretic frameworks for epidemic spreading and human decision making: a review.
A differential game approach to decentralized virus-resistant weight adaptation policy over complex networks.
Evolutionary game theory modelling to represent the behavioural dynamics of economic shutdowns and shield immunity in the COVID-19 pandemic.
Decisions and disease: a mechanism for the evolution of cooperation.
A contribution to the mathematical theory of epidemics.
Nash social distancing games with equity constraints: how inequality aversion affects the spread of epidemics.
Games of social distancing during an epidemic: local vs statistical information.
On a theorem of Schmeidler.
Networked SIS epidemics with awareness.
Risk perception and effectiveness of uncoordinated behavioral responses in an emerging epidemic.
Spontaneous behavioural changes in response to epidemics.
Game theory of social distancing in response to an epidemic.
An application of the theory of probabilities to the study of a priori pathometry.
Equilibrium points of nonatomic games.
Lectures on stochastic programming: modeling and theory.
Selfish response to epidemic propagation.
Equilibrium social distancing, Cambridge working papers in economics.
Decentralized protection strategies against SIS epidemics in networks.
Modelling epidemic dynamics under collective decision making.