title: Event-Triggered Control for Mitigating SIS Spreading Processes
authors: Hashimoto, Kazumune; Onoue, Yuga; Ogura, Masaki; Ushio, Toshimitsu
date: 2020-12-30

In this paper, we investigate the problem of designing event-triggered controllers for containing epidemic processes in complex networks. We focus on a deterministic susceptible-infected-susceptible (SIS) model, one of the well-known, fundamental models that capture epidemic spreading. The event-triggered control is formulated in the context of viral spreading, in which the control inputs (e.g., the amount of medical treatment, the level of traffic regulation) for each subpopulation are updated only when the fraction of infected people in the subpopulation exceeds a prescribed threshold. We analyze the stability of the proposed event-triggered controller and derive a sufficient condition for a prescribed control objective to be achieved. Moreover, we propose a novel emulation-based approach to the design of the event-triggered controller, and show that the design problem can be solved in polynomial time using geometric programming. We illustrate the effectiveness of the proposed approach through numerical simulations using an air transportation network.

Analysis and control of epidemic processes in complex networks [1] have been studied over the past decades in several research fields including epidemiology, computer science, social science, and control engineering, with applications in epidemic spreading of infectious diseases over human contact networks [2], malware spreading over computer networks [3], rumor propagation [4], and cascading failures or blackouts in electrical networks [5].
This research trend has been further strengthened by the current COVID-19 pandemic, which is posing a significant threat to humanity and the economy worldwide. In recent years, increasing attention has been paid in systems and control theory to the analysis and control of epidemic spreading processes, where the amount of medical treatment or the level of traffic regulation is treated as a control input to be designed, as in [1, 6, 7, 8]. Specifically, control strategies are designed for the mitigation of epidemic spreading processes (e.g., asymptotic or exponential convergence toward the disease-free state), based on the analysis of the network structure and of the dynamical systems capturing the spreading processes. In this context, most of the early works consider feedforward control strategies, in which suitable control inputs, applied constantly for all times, are designed [9, 10, 11, 12]. More recently, dynamical and feedback control strategies have been investigated, in which the control inputs are determined to adapt to the number of infected people and the time-varying nature of the dynamics [13, 14, 15, 16, 17, 18, 19]. In this paper, we are particularly interested in designing a novel feedback control strategy for a deterministic susceptible-infected-susceptible (SIS) model, which is one of the well-known and fundamental models of epidemic spreading processes (see, e.g., [20, 21]). The deterministic SIS model can address epidemic spreading processes in the following two different contexts. The first one is the individual-based context, where the model consists of n ≥ 2 individuals interacting with each other, and the state of each node represents the probability that the individual is infected.
The other one is the metapopulation context, where the model consists of n ≥ 2 subpopulations, each containing a group of individuals, and the state of each node indicates the fraction of infected individuals in that subpopulation. This paper focuses on the metapopulation context, as the state of each subpopulation can be measured or estimated in real time (see, e.g., [8]), so that a state-feedback controller can reasonably be implemented in practice. Various dynamical and feedback control strategies for SIS models have been proposed; see, e.g., [14, 17, 18]. For example, [14] proposed to employ model predictive control (MPC), in which the control inputs are computed by solving a finite-horizon optimal control problem online. The authors in [14] also showed that asymptotic stability of the disease-free state is guaranteed by the proposed controller. Moreover, [17] investigated the instability of the disease-free state under a linear state-feedback controller for a bi-virus epidemic spreading model. In addition, [18] proposed a method for designing a time-varying controller for the asymptotic stabilization of the epidemic processes. The feedback control strategies presented in the aforementioned papers [14, 17, 18] assume that the control inputs are updated continuously (for the continuous-time case) or per unit of time (for the discrete-time case). However, such frequent control updates are not necessarily suitable in practice, since even a small fluctuation of the processes forces us to update the control inputs. For example, for the level of traffic regulation, which can be regarded as one of the control inputs for mitigating epidemic spreading, continuous updates are not necessarily realistic, because even a small adjustment could require enormous effort. Thus, a more practical approach is to update the control inputs only when necessary, rather than updating the inputs at every time instant.
For example, the level of traffic regulation may preferably be changed only when the fraction of infected people in each subpopulation increases or decreases by a prescribed threshold. Motivated by the above observation, in this paper we propose to employ an event-triggered control framework [22] for containing epidemic spreading processes. In the proposed framework, the control inputs for each subpopulation are updated only when the fraction of infected people in the subpopulation increases or decreases by a given threshold. As can be seen in the current situation of COVID-19, employing event-triggered control for epidemic processes is reasonable and useful in practice, as many countries have been carrying out their mitigation strategies in an event-triggered manner. In Japan, for example, each prefecture has created its own cautionary levels according to the number of COVID-19 cases, and, for each cautionary level, the contents of the requests to prefectural residents are specified (see, e.g., [23]). In other words, each prefecture dynamically updates its own mitigation strategies only when the number of COVID-19 cases increases or decreases to some extent. Therefore, the event-triggered control framework could serve as a useful decision-making system that informs us when to update our mitigation strategies in response to dynamic changes in the number of infected people in the subpopulations. The main contribution of this paper is to formulate a framework for the event-triggered containment of the deterministic SIS model. We furthermore formulate the event-triggered control in a distributed manner, in which the control inputs for each subpopulation (or node in the graph) are updated based on well-designed local event-triggering conditions. The SIS model may not be appropriate to precisely capture the dynamical behavior of the current COVID-19 pandemic.
Nevertheless, this paper can be viewed as a first step towards a rigorous, mathematical formulation of event-triggered control for mitigating epidemic spreading. In particular, we provide both a theoretical stability analysis and a computationally efficient design procedure for the event-triggered controller. Specifically, the technical contribution of this paper is twofold: (i) We derive a sufficient condition for the event-triggered controller to achieve the prescribed control objective. We further show that the condition can be checked by solving a convex program; for details, see Section 4. (ii) Based on the analysis given in (i), we then propose a novel framework to design the event-triggered controller for mitigating the SIS spreading processes. In particular, we propose to leverage an emulation-based approach (see, e.g., [24, 25]) as a two-step procedure to design the parameters of the event-triggered controller. As we will see later, the main advantage of employing the emulation-based approach is that the design problem can be formulated as a geometric program (see, e.g., [26, 27]), which can be translated into a convex program and thus solved in polynomial time; for details, see Section 5. Our approach is related to applications of event-triggered control to multi-agent systems (see, e.g., [28, 29, 30, 31]). Note that our result differs from these previous results, in terms of both analysis and design, in the following aspects. While most of the previous works consider linear multi-agent systems with single- or double-integrator dynamics and a control objective of asymptotically achieving consensus, we study the nonlinear dynamical system arising from the SIS model and, furthermore, the control objective is not to achieve consensus but to asymptotically suppress the fraction of infected people below prescribed thresholds.
As will be seen in Section 4, this leads to the stability analysis of an event-triggered controller for positive, quadratic dynamical systems, which has not been fully investigated in the literature. The design procedure presented in this paper is an emulation-based design using geometric programming, which also differs from the ones in the aforementioned works. The remainder of this paper is organized as follows. In Section 2, we describe the dynamics of the SIS model and the control objective to be achieved in this paper. In Section 3, we describe the details of the proposed event-triggered control for the SIS model. In Section 4, we present a stability analysis that derives a sufficient condition for achieving the control objective under the event-triggered controller. In Section 5, we provide an emulation-based approach to the design of the event-triggered controller. In Section 6, numerical simulations are given to illustrate the effectiveness of the proposed approach. Finally, conclusions and future works are given in Section 7.

(Notation and convention): Let R, R_{>0}, N denote the set of real numbers, positive real numbers, and non-negative integers, respectively. Let 0 denote the vector or matrix whose elements are all 0. Let I_n and 1_n denote the n × n identity matrix and the n-dimensional vector whose elements are all 1, respectively. The transpose of vectors and matrices is denoted by (·)^T. For any real vector x = [x_1, x_2, . . . , x_n]^T ∈ R^n, the Euclidean norm and the ℓ1 norm are denoted by ‖x‖ and ‖x‖_1, respectively (i.e., ‖x‖ = (x_1^2 + x_2^2 + · · · + x_n^2)^{1/2} and ‖x‖_1 = |x_1| + |x_2| + · · · + |x_n|). Moreover, let diag(x) denote the n × n diagonal matrix whose i-th diagonal entry equals x_i. In addition, denote by supp(x) ⊆ {1, . . . , n} the support of x, i.e., supp(x) = {i ∈ {1, . . . , n} : x_i ≠ 0}. For any two real vectors x = [x_1, . . . , x_n]^T ∈ R^n, y = [y_1, . . . , y_n]^T ∈ R^n, we write x ≤ y if and only if x_i ≤ y_i for all i ∈ {1, . . . , n}.
Given a set N, we denote by |N| its cardinality. A directed graph is defined as the pair G = (V, E), where V = {1, . . . , n} is the set of nodes and E ⊆ V × V is the set of edges. The set of out-neighbors of node i is denoted by N_i^out, i.e., N_i^out = {j ∈ V : (i, j) ∈ E}. Similarly, the set of in-neighbors of node i (including node i itself) is denoted by N_i^in, i.e., N_i^in = {j ∈ V : (j, i) ∈ E}. Note that, by definition, if a node has a self-loop, both its in- and out-neighbors include the node itself, i.e., if (i, i) ∈ E, then i ∈ N_i^in and i ∈ N_i^out.

Next, we review some basic concepts of geometric programming [26]. Let the positive variables be given by y = [y_1, . . . , y_n]^T ∈ R^n_{>0}. A function g : R^n_{>0} → R_{>0} is called a monomial if it is of the form g(y) = c y_1^{a_1} · · · y_n^{a_n}, where c > 0 and a_1, . . . , a_n ∈ R are given constants. Moreover, a function f : R^n_{>0} → R_{>0} is called a posynomial if it is of the form f(y) = Σ_{i=1}^{K} c_i y_1^{a_{1i}} · · · y_n^{a_{ni}}, where K ∈ N_{>0} and c_i ≥ 0, a_{1i}, . . . , a_{ni} ∈ R are given constants. Given a set of posynomial functions f_0, f_1, . . . , f_k : R^n_{>0} → R_{>0} and a set of monomials g_1, . . . , g_q, a geometric program is an optimization problem of the form

  minimize f_0(y)  subject to  f_i(y) ≤ 1 (i = 1, . . . , k),  g_j(y) = 1 (j = 1, . . . , q).

Although a geometric program is not a convex program by itself, it can be converted into a convex problem through a logarithmic change of variables and a logarithmic transformation of the objective and the constraint functions (see, e.g., [32]). Hence, a geometric program can be solved efficiently in polynomial time.

In this section, we describe the dynamics of the epidemic spreading and the control objective of this paper. As previously stated in the Introduction, we adopt a deterministic susceptible-infected-susceptible (SIS) model in the metapopulation context (see, e.g., [8]). Consider a network that consists of n (n ≥ 2) groups of individuals, which are labeled by {1, . . . , n}.
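As a concrete check of this log-transformation, the sketch below verifies numerically that a posynomial becomes convex in the log-variables: with z = log(y), the transformed function is a log-sum-exp of affine functions of z. The coefficients c and exponent matrix A are arbitrary illustrative values.

```python
import numpy as np

# A posynomial f(y) = sum_i c_i * y1^{a_{1i}} * y2^{a_{2i}} with c_i > 0.
# After z = log(y), F(z) = log f(exp(z)) = log sum_i exp(a_i^T z + log c_i)
# is a log-sum-exp of affine functions, hence convex.
c = np.array([2.0, 0.5, 1.0])            # positive coefficients (illustrative)
A = np.array([[1.0, -2.0],               # exponent vector of term 1
              [0.5,  1.5],               # term 2
              [-1.0, 0.0]])              # term 3

def F(z):
    """log f(exp(z)): the GP objective/constraint after the log transform."""
    return np.log(np.sum(c * np.exp(A @ z)))

# Numerical check of midpoint convexity on random pairs of points.
rng = np.random.default_rng(0)
for _ in range(100):
    z1, z2 = rng.normal(size=2), rng.normal(size=2)
    assert F((z1 + z2) / 2) <= (F(z1) + F(z2)) / 2 + 1e-12
print("midpoint convexity holds on all sampled pairs")
```

A monomial transforms, in the same way, into a single affine function of z, which is why monomial equality constraints remain tractable.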
Individuals in each group are affected by those in their own group or those in the neighboring groups. The neighbor relationships among the groups are captured by a directed graph G = (V, E), where V = {1, . . . , n} is the set of nodes and E ⊆ V × V is the set of edges. If (i, j) ∈ E, it means that node j is affected by node i. Since individuals can be affected by those in their own group, the graph has self-loops at all nodes, i.e., (i, i) ∈ E for all i ∈ V. If (i, j) ∈ E with i ≠ j, it means that there is a possibility of contact from individuals in node i to those in node j. In the (metapopulation) SIS model, each node has a [0, 1]-valued state variable representing the fraction of infected individuals in the node. We let the state of node i at time t ≥ 0 be denoted by x_i(t). Under this notation, the scalar 1 − x_i(t) ∈ [0, 1] represents the fraction of individuals in node i that are not infected, which we call the susceptible subpopulation. The dynamics of the state variable of node i in the SIS model is then expressed as

  ẋ_i(t) = −δ_i x_i(t) + (1 − x_i(t)) Σ_{j∈N_i^in} β_ji x_j(t),   (1)

where δ_i > 0 and β_ji > 0 (j ∈ N_i^in) are called the recovery and the infection rates, respectively. Note that i ∈ N_i^in for all i ∈ V, since every node has a self-loop. As shown in (1), the epidemic spreading in the SIS model is captured by the following two processes: (a) infected individuals in node i recover from infection according to the recovery rate δ_i, and (b) susceptible individuals in node i are infected either from node i itself according to the infection rate β_ii or from other nodes having edges to i according to the infection rates β_ji, j ∈ N_i^in \ {i}. In this paper, it is supposed that we can dynamically control the recovery and the infection rates (see, e.g., [14]). For example, the recovery rate can be controlled by increasing or decreasing the amount of medical resources and treatments.
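To make the node-wise dynamics concrete, they can be integrated with a forward-Euler scheme; the 3-node network, rates, and step size below are illustrative choices, not values from the paper.

```python
import numpy as np

# Forward-Euler simulation of the metapopulation SIS dynamics
#   dx_i/dt = -delta_i * x_i + (1 - x_i) * sum_{j in N_i^in} beta_ji * x_j.
# The network, rates, and step size are illustrative assumptions.
n = 3
delta = np.array([1.0, 0.8, 1.2])          # recovery rates delta_i
# beta[j, i]: infection rate from node j to node i; self-loops on the diagonal
beta = np.array([[0.4, 0.2, 0.0],
                 [0.1, 0.5, 0.3],
                 [0.0, 0.2, 0.4]])

def step(x, dt=0.01):
    """One Euler step: recovery term plus infection pressure from in-neighbors."""
    return x + dt * (-delta * x + (1.0 - x) * (beta.T @ x))

x = np.array([0.9, 0.1, 0.5])              # initial fractions of infected
for _ in range(20000):                     # integrate up to t = 200
    x = step(x)
print(x)
```

For these particular rates the disease-free state is stable (the linearization matrix B^T − D is Hurwitz), so the state decays toward zero while remaining in [0, 1]^n.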
On the other hand, the infection rates from the neighbors can be controlled by traffic regulations (e.g., decreasing the number of flights) or the closure of facilities such as sightseeing places. This consideration allows us to replace the constants δ_i and β_ji in (1) with the time-dependent functions δ̄_i + u_i(t) and β̄_ji − v_ji(t), respectively, where δ̄_i > 0 and β̄_ji > 0 (j ∈ N_i^in) represent the natural (or baseline) recovery and infection rates before intervention, whereas the time-dependent scalars u_i(t) and v_ji(t) (j ∈ N_i^in) for t ≥ 0 represent the effect of applying control inputs and are assumed to satisfy u_i(t) ≥ 0 and v_ji(t) ∈ [0, β̄_ji] for all t ≥ 0. Then, the original SIS dynamics (1) is rewritten as

  ẋ_i(t) = −(δ̄_i + u_i(t)) x_i(t) + (1 − x_i(t)) Σ_{j∈N_i^in} (β̄_ji − v_ji(t)) x_j(t).   (2)

The model (2) can be further written in vector form as

  ẋ(t) = −(D̄ + U(t)) x(t) + (I_n − X(t)) (B̄ − V(t))^T x(t),

where x = [x_1, . . . , x_n]^T is the state vector and the n × n matrices D̄, B̄, X(t), U(t), and V(t) are defined by D̄ = diag(δ̄), B̄ = [β̄_ij], X(t) = diag(x(t)), U(t) = diag(u(t)), and V(t) = [v_ij(t)], using the vectors δ̄ = [δ̄_1, . . . , δ̄_n]^T and u(t) = [u_1(t), . . . , u_n(t)]^T. In this paper, we consider the following control objective: for every initial state x(0) ∈ [0, 1]^n, there exists t′ ≥ 0 such that the state trajectory satisfies

  w_m^T x(t) ≤ d̄_m,  for all t ≥ t′ and m ∈ {1, . . . , M},   (5)

where M ∈ N_{>0} is the number of given control objectives and w_m ∈ {0, 1}^n, d̄_m ≥ 0 are given weights and thresholds. For example, suppose that the nodes are divided into groups V_1, . . . , V_M with V_1 ∪ V_2 ∪ · · · ∪ V_M = V (for the illustration, see Fig. 1), and that we would like to stabilize the average of the states in each group V_m below a threshold x̄_m ≥ 0 in finite time, i.e., (1/|V_m|) Σ_{i∈V_m} x_i(t) ≤ x̄_m for all t ≥ t′, for some t′ ≥ 0. This control objective can be expressed by (5) if we choose d̄_m = |V_m| x̄_m and define w_m as the indicator vector of V_m, i.e., the i-th element of w_m is 1 if i ∈ V_m and 0 otherwise. As previously described in the Introduction, conventional feedback control strategies for SIS models assume that the control inputs can be updated continuously, i.e., the amount of medical resources and the qualitative degree of traffic regulations must be changed continuously or even per unit of time.
However, such frequent control updates are not necessarily suitable in practice, since even a small fluctuation of the states (fractions of infected individuals) forces us to update the control inputs. Hence, a more suitable approach would be to update the control inputs only when needed instead of continuously, i.e., the amount of medical resources and the traffic regulations are changed only when the fraction of infected individuals increases or decreases by prescribed thresholds. This leads us to the use of event-triggered control [22], in which the control inputs are updated only when needed according to a well-designed event-triggering condition (as detailed below). To formulate the proposed event-triggered control strategy, let t_0^i, t_1^i, t_2^i, . . . denote the triggering time instants at which the control inputs for the recovery rate of node i, u_i(t), and the infection rates from node i to its out-neighbors (including node i itself), v_ij(t) (j ∈ N_i^out), are updated. For simplicity, it is assumed that the initial updating time instants for all the nodes, i.e., t_0^i, i ∈ V, are given. For any t ∈ [t_0^i, ∞), node i evaluates the following event-triggering condition:

  |e_i(t)| ≤ σ_i x_i(t) + η_i,   (9)

where e_i(t) ∈ R denotes the error between x_i(t) and the state at the latest triggering time before t, i.e., e_i(t) = x_i(t) − x_i(t_ℓ^i) for t ∈ [t_ℓ^i, t_{ℓ+1}^i), and where the scalars σ_i, η_i, i ∈ V are the parameters that characterize the event-triggering condition. The parameters σ_i, η_i, i ∈ V are called the event-triggering gains, which will be designed later in this paper. It is assumed that the event-triggering gains are chosen such that

  σ_i ∈ (0, 1),  η_i ∈ (0, 1),   (11)

for all i ∈ V. If the condition (9) is satisfied, then node i does not update the control inputs, i.e., t ≠ t_{ℓ+1}^i. On the other hand, if (9) is violated, then node i updates the control inputs, i.e., t = t_{ℓ+1}^i. More specifically, the triggering time instants are given by

  t_{ℓ+1}^i = inf{ t > t_ℓ^i : |e_i(t)| > σ_i x_i(t) + η_i },   (12)

for all ℓ ∈ N.
Our choice of the event-triggering condition in (9) (as well as of the triggering time instants in (12)) is motivated as follows. Intuitively, the term σ_i x_i(t) on the right-hand side of (9) is more dominant than η_i when x_i(t) is large, and η_i is more dominant than σ_i x_i(t) when x_i(t) is very small. For example, when the state is very small, the control inputs are no longer updated unless the error |e_i(t)| exceeds (roughly) η_i. In particular, when the state is decreasing and eventually satisfies 0 < x_i(t) ≤ η_i for all t ≥ t′ (for some t′), the control inputs are no longer updated after t′, since the error |x_i(t) − x_i(t_ℓ^i)| does not exceed η_i for all t ≥ t′. Therefore, by using the event-triggering condition (9), it can be expected that the frequency of control updates becomes lower and lower as the state gets smaller and smaller.

Remark 1. It is necessary for the event-triggering gains η_i to be designed as η_i > 0 for all i ∈ V in our problem set-up in order to guarantee that the inter-event times are always positive, i.e., to avoid Zeno behavior, or equivalently to satisfy an event-separation property (see, e.g., [22, 33]). If we set η_i = 0, we cannot guarantee that the inter-event times are always positive under the dynamics (2); for certain state trajectories, the inter-event times can eventually become zero in finite time (see, e.g., [33]). If η_i > 0 for all i ∈ V, it follows that, for each t_ℓ^i, the next triggering time t_{ℓ+1}^i occurs only after the absolute error |e_i(t)| exceeds η_i. Since x_i(t) is continuous in t, there always exists ∆ > 0 such that the event-triggering condition (9) is satisfied for all t ∈ [t_ℓ^i, t_ℓ^i + ∆], and so the inter-event times are positive for all times.

For each node i ∈ V, the control inputs are updated according to the following linear state-feedback controller:

  u_i(t) = k_i x_i(t_ℓ^i),   (13)
  v_ij(t) = l_ij x_i(t_ℓ^i),  j ∈ N_i^out,   (14)

for t ∈ [t_ℓ^i, t_{ℓ+1}^i) and all ℓ ∈ N, where k_i, l_ij, i ∈ V, j ∈ N_i^out are parameters that characterize the control strategy.
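A minimal simulation sketch of this scheme, assuming the sampled-state feedback u_i(t) = k_i x_i(t_ℓ^i), v_ij(t) = l_ij x_i(t_ℓ^i) and a trigger threshold of the form σ_i x_i(t) + η_i; the network, rates, and gains below are illustrative values, not the paper's.

```python
import numpy as np

# Event-triggered SIS control sketch. Node i keeps its inputs frozen at the
# state of its latest triggering time until |x_i(t) - x_i(t_last)| exceeds
# sigma_i * x_i(t) + eta_i, at which point it re-samples the state.
n = 3
delta_bar = np.array([0.3, 0.2, 0.25])
beta_bar = np.array([[0.4, 0.2, 0.0],
                     [0.1, 0.5, 0.3],
                     [0.0, 0.2, 0.4]])   # beta_bar[i, j]: rate from i to j
k = np.array([1.0, 1.0, 1.0])            # recovery-control gains k_i
l_gain = 0.5 * beta_bar                  # infection-control gains l_ij <= beta_bar_ij
sigma = np.array([0.2, 0.2, 0.2])        # relative trigger thresholds
eta = np.array([0.01, 0.01, 0.01])       # absolute trigger thresholds

dt, T = 0.01, 100.0
x = np.array([0.9, 0.6, 0.7])
x_last = x.copy()                        # states at the latest triggering times
updates = 0

for _ in range(int(T / dt)):
    u = k * x_last                       # piecewise-constant recovery input
    v = l_gain * x_last[:, None]         # v[i, j] = l_ij * x_i(t_last)
    rate_in = (beta_bar - v).T @ x       # effective infection pressure on each node
    x = x + dt * (-(delta_bar + u) * x + (1.0 - x) * rate_in)
    triggered = np.abs(x - x_last) > sigma * x + eta   # trigger condition violated
    x_last = np.where(triggered, x, x_last)
    updates += int(triggered.sum())

print(f"final state {np.round(x, 4)}, total control updates: {updates}")
```

Because the absolute threshold η_i is strictly positive, the per-node update count stays far below the number of integration steps once the state settles.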
The parameters k_i, l_ij, i ∈ V, j ∈ N_i^out are called the control gains, which will be designed together with the event-triggering gains later in this paper. It is assumed that the control gains are chosen such that

  k_i ∈ (0, k̄_i],  l_ij ∈ (0, l̄_ij],   (15)

for all i ∈ V and j ∈ N_i^out, where k̄_i > 0 and l̄_ij ∈ (0, β̄_ij] (i ∈ V, j ∈ N_i^out) are given upper bounds on the control gains k_i and l_ij, respectively. The following proposition establishes the invariance of the region [0, 1]^n under the event-triggered controller (13), (14).

Proposition 1. Consider the SIS model (2) and the event-triggered controller (13), (14), in which the control and the event-triggering gains are, respectively, chosen such that (11) and (15) are satisfied for all i ∈ V and j ∈ N_i^out. Then, for every x(0) ∈ [0, 1]^n, the state trajectory satisfies x(t) ∈ [0, 1]^n for all t ≥ 0.

Proof. Since we apply the event-triggered controller (13), (14), where the control gains are chosen to satisfy (15) for all i ∈ V and j ∈ N_i^out, the control inputs are piecewise continuous in t and bounded, with u_i(t) ∈ [0, k̄_i] and v_ij(t) ∈ [0, β̄_ij] for all t ≥ 0. Writing the closed-loop dynamics as ẋ = f(x, µ), with µ the collection of all the control inputs, f satisfies the Lipschitz condition in Ω = [0, 1]^n [34]. Hence, the solution of ẋ = f(x, µ) exists and is unique in Ω. The fact that the state trajectory remains in Ω for all times can be shown as follows. Let ∂Ω ⊂ Ω be the boundary of Ω and, for all i ∈ V, let ∂Ω_{1,i} = {x ∈ Ω : x_i = 0} and ∂Ω_{2,i} = {x ∈ Ω : x_i = 1}. Note that the union of ∂Ω_{1,i} and ∂Ω_{2,i} over all i ∈ V comprises ∂Ω. In addition, let o_{1,i} ∈ {−1, 0}^n and o_{2,i} ∈ {0, 1}^n for i ∈ V be the outer normal vectors with respect to ∂Ω_{1,i} and ∂Ω_{2,i}, respectively, i.e., o_{1,i} (resp. o_{2,i}) is the vector whose i-th element is −1 (resp. 1) and whose other elements are 0. Then, for all x ∈ ∂Ω_{1,i} (where x_i = 0), we have o_{1,i}^T f(x, µ) = −ẋ_i = −Σ_{j∈N_i^in} (β̄_ji − v_ji) x_j ≤ 0. In addition, for all x ∈ ∂Ω_{2,i} (where x_i = 1), we have o_{2,i}^T f(x, µ) = ẋ_i = −(δ̄_i + u_i) ≤ 0. These two facts imply that, for every x on the boundary of Ω and for every u_i ∈ [0, k̄_i], v_ji ∈ [0, β̄_ji], i ∈ V and j ∈ N_i^in, the vector field f(x, µ) is tangential to or pointing inwards Ω, which shows that Ω is an invariant set (for related analysis, see, e.g., [35]).
Therefore, if x(0) ∈ Ω, the state trajectory satisfies x(t) ∈ Ω for all t ≥ 0.

In this section, we analyze the stability of the closed-loop system for the SIS model (2). In particular, we derive a sufficient condition under which, for appropriate selections of the control and the event-triggering gains, the control objective (5) is achieved by applying the event-triggered controller (13), (14). We start our stability analysis by introducing several additional parameters and notations. First, we collect the control and the event-triggering gains into the vectors k = [k_1, . . . , k_n]^T, σ = [σ_1, . . . , σ_n]^T, and η = [η_1, . . . , η_n]^T, with associated diagonal matrices. Second, we define a candidate Lyapunov function by

  V(x) = p^T x,

for a given Lyapunov parameter p = [p_1, . . . , p_n]^T ∈ R^n_{>0}. Note that we can use a linear function of x as a candidate Lyapunov function, since the dynamics (2) keeps the state non-negative for every x(0) ∈ [0, 1]^n (see Proposition 1). Moreover, let p*_m > 0 for all m ∈ {1, . . . , M} be given by

  p*_m = min{ p_i : i ∈ supp(w_m) },   (24)

where w_m ∈ {0, 1}^n is defined in (5). That is, p*_m represents the smallest value among the Lyapunov parameters whose indices belong to the support of w_m. Additionally, define the vectors s, r ∈ R^n that collect, respectively, the quadratic and linear coefficients of the state bound derived in the proof below. Finally, with S = diag(s) and P = diag(p), define the matrix Q ∈ R^{n×n} in terms of S, P, and the gains, and define the set W ⊂ R^n by

  W = { x ∈ R^n : x^T Q x − (r + ε 1_n)^T x ≤ 0 },   (28)

where ε > 0 is any positive constant. The following theorem gives a sufficient condition for the control objective (5) to be achieved and is the main result of this section.

Theorem 1. Consider the SIS model (2), the event-triggered controller (13), (14), and the control objective (5). Assume that the control and the event-triggering gains satisfying (11) and (15) are chosen such that

  θ* ≤ p*_m d̄_m,   (29)

for all m ∈ {1, . . . , M}, where θ* ∈ R is defined according to the following optimization problem:

  θ* = max { p^T x : x ∈ W },   (30)

where W ⊂ R^n is defined in (28).
Then, for any x(0) ∈ [0, 1]^n, the control objective (5) is achieved by applying the event-triggered controller (13), (14). In addition, for every selection of the control and the event-triggering gains satisfying (11) and (15), the optimization problem (30) is strictly convex.

In essence, Theorem 1 states that if the control and the event-triggering gains are appropriately chosen such that (29) is satisfied, then the control objective (5) is achieved by applying the event-triggered controller (13), (14). Theorem 1 also states that the optimization problem (30) is strictly convex for every selection of the control and the event-triggering gains. Hence, the condition (29) can be checked efficiently in polynomial time.

Remark 2. Note that the matrix Q may not be symmetric. Without loss of generality, if Q is not symmetric, we can replace Q by (Q^T + Q)/2 in the definition of the set W in (30), so that the matrix is symmetric and the convex optimization problem is given in a standard form [26]. Such a replacement is valid because x^T Q x = x^T ((Q^T + Q)/2) x for all x ∈ R^n.

Proof. Let us first show that the optimization problem (30) is strictly convex. From (28), the strict convexity of (30) can be shown by guaranteeing that the matrix Q is positive definite for every selection of the control and the event-triggering gains satisfying (11) and (15). From the definition of Q, its i-th diagonal element q_ii and its (j, i)-th (j ≠ i) off-diagonal elements q_ji can be computed explicitly, and the difference between the i-th diagonal element and the sum of the absolute values of the other elements in the i-th row is positive, where we use 1 − (σ_i + η_i)/2 > 0 for all σ_i ∈ (0, 1) and η_i ∈ (0, 1). Hence, Q is a strictly diagonally dominant matrix with positive diagonal entries, which implies, from the Gershgorin circle theorem [36], that Q is positive definite.
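Because Q is positive definite, (30) maximizes a linear function over an ellipsoid, so θ* admits a closed form obtained by completing the square; a sketch with illustrative Q, r, p, and ε (none taken from the paper):

```python
import numpy as np

# Maximize p^T x over W = {x : x^T Q x - b^T x <= 0}, Q positive definite.
# Completing the square gives W = {x : (x - c)^T Q (x - c) <= rho} with
# c = Q^{-1} b / 2, rho = b^T Q^{-1} b / 4, and hence
#   theta* = p^T c + sqrt(rho * p^T Q^{-1} p).
Q = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.2],
              [0.0, 0.2, 1.8]])          # symmetric positive definite (illustrative)
r = np.array([0.4, 0.1, 0.3])
p = np.array([1.0, 2.0, 1.5])            # Lyapunov parameters (illustrative)
eps = 0.1
b = r + eps * np.ones(3)

Qinv = np.linalg.inv(Q)
c = 0.5 * Qinv @ b                        # center of the ellipsoid
rho = 0.25 * b @ Qinv @ b                 # squared "radius" in the Q-metric
theta_star = p @ c + np.sqrt(rho * (p @ Qinv @ p))

# Sanity check: the unique maximizer lies on the boundary of W.
x_star = c + np.sqrt(rho / (p @ Qinv @ p)) * (Qinv @ p)
assert abs(p @ x_star - theta_star) < 1e-10
assert abs(x_star @ Q @ x_star - b @ x_star) < 1e-10
print("theta* =", theta_star)
```

The boundary check mirrors the argument in the proof of Theorem 1 that the optimal solution of (30) is unique and lies on the boundary of W.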
Next, we show that the control objective (5) is achieved by applying the event-triggered controller (13), (14). Substituting (13) and (14) into (2) and using x_i(t_ℓ^i) = x_i(t) − e_i(t), the closed-loop system of node i is given by

  ẋ_i(t) = −(δ̄_i + k_i (x_i(t) − e_i(t))) x_i(t) + (1 − x_i(t)) Σ_{j∈N_i^in} (β̄_ji − l_ji (x_j(t) − e_j(t))) x_j(t).

Moreover, due to the event-triggering condition (9), it follows that |e_i(t)| ≤ σ_i x_i(t) + η_i for all t ≥ 0 and i ∈ V. Combining this bound with the facts that x(t) ∈ [0, 1]^n for all t ≥ 0 (see Proposition 1), β̄_ji ≥ l_ji, and 1 − σ_i < 1, a componentwise computation yields an upper bound on ẋ(t) whose quadratic part is determined by Q and whose linear part is determined by r. The derivative of the Lyapunov function V(x) = p^T x along the trajectory then satisfies dV(x(t))/dt ≤ r^T x(t) − x^T(t) Q x(t). Now, consider θ* ∈ R computed from (30). From (30), it follows that x^T Q x − (r + ε 1_n)^T x ≤ 0 ⟹ p^T x ≤ θ* for all x ∈ R^n. Moreover, since (30) is strictly convex and corresponds to the maximization of a linear function (i.e., p^T x) over an ellipsoidal set (i.e., W), the optimal solution of (30) is unique and lies on the boundary of W. In other words, we have x*^T Q x* − (r + ε 1_n)^T x* = 0, where x* is the optimal solution of (30) (i.e., p^T x* = θ*). Hence, by taking the contrapositive, we obtain p^T x ≥ θ* ⟹ x^T Q x − (r + ε 1_n)^T x ≥ 0 for all x ∈ R^n. Therefore, the derivative of the Lyapunov function V along the trajectory of the SIS model satisfies dV(x(t))/dt ≤ −ε 1_n^T x(t) < 0 for all x(t) in the set Λ = {x ∈ [0, 1]^n \ {0} : p^T x ≥ θ*}. Since V̇ is negative in Λ, any state trajectory starting in Λ converges to the set Ω = {x ∈ [0, 1]^n : V(x) ≤ θ*} in finite time (see, e.g., Section 4.8 in [34]). Moreover, since V̇ is negative on the boundary of Ω away from the origin, Ω is an invariant set, i.e., once the state enters Ω, it remains therein for all future times. Therefore, for every x(0) ∈ [0, 1]^n, there exists t′ ≥ 0 such that p^T x(t) ≤ θ* for all t ≥ t′. Moreover, since p*_m is defined by (24), we have p*_m w_m^T x ≤ p^T x for all x ∈ [0, 1]^n, which implies that w_m^T x(t) ≤ θ*/p*_m for all t ≥ t′. Hence, if (29) holds for all m ∈ {1, . . . , M}, we have w_m^T x(t) ≤ d̄_m for all t ≥ t′ and m ∈ {1, . . . , M}.
Therefore, for any x(0) ∈ [0, 1]^n, the control objective (5) is achieved by applying the event-triggered controller (13), (14).

In this section, we investigate the design of the event-triggered controller. As shown in Theorem 1, the control objective (5) is achieved if the control and the event-triggering gains satisfy the inequality (29) for all m ∈ {1, . . . , M}, which we can efficiently check by convex optimization. However, it is not necessarily easy to use the inequality (29) directly for designing the control and the event-triggering gains, because the vector r and the matrix Q used to define the set W in the optimization problem (30) contain the parameters to be designed. To overcome this difficulty, in this section we present a tractable and numerically efficient method for designing the control and the event-triggering gains via convex relaxation techniques for the conditions required in Theorem 1, such that both the control and the event-triggering gains are designed in polynomial time. Specifically, we propose an emulation-based approach to the design of the control and the event-triggering gains. The emulation-based approach is a well-known technique for designing event-triggered controllers (see, e.g., [22, 24]) and basically consists of two steps. First, we find the set of control gains under the assumption that a continuous-time controller is implemented. Second, using the control gains obtained in the first step, we design the event-triggering gains such that the control objective is achieved. As will be shown below, both the former and the latter problems can be formulated as geometric programs [26], meaning that the control and the event-triggering gains can be found efficiently in polynomial time. We start by designing the control gains k_i, l_ij for all i ∈ V and j ∈ N_i^out.
As mentioned above, in the emulation-based approach, the control gains are designed under the assumption that the continuous-time controller is implemented; that is, (13) and (14) are replaced by

  u_i(t) = k_i x_i(t),  v_ij(t) = l_ij x_i(t),  j ∈ N_i^out,   (46), (47)

for all t ≥ 0. Moreover, define the constants r̃_c,i = −p_i δ̄_i + Σ_{j∈N_i^out} p_j β̄_ij and c_1,i for all i ∈ V. The following proposition shows that a set of control gains achieving the control objective (5) under the continuous-time controller can be found by solving a geometric programming problem.

Proposition 2. Consider the SIS model (2), the continuous-time controller (46), (47), and the control objective (5). Moreover, let k̃*_i, l̃*_ij, s̃*_c,i > 0 for all i ∈ V, j ∈ N_i^out and ε*_1, ε*_2, ε*_3, ξ*_c > 0 denote the optimal solution of k̃_i, l̃_ij, s̃_c,i > 0 for all i ∈ V, j ∈ N_i^out and ε_1, ε_2, ε_3, ξ_c > 0 in the geometric program that minimizes g_c(Z_c) subject to the constraints (52)–(56), where Z_c is the vector that collects all the decision variables in the optimization problem, i.e., Z_c = [k̃_i, l̃_ij, s̃_c,i, i ∈ V, j ∈ N_i^out, ε_1, ε_2, ε_3, ξ_c]^T, and g_c(·) is a given posynomial function. Then, the control objective is achieved by applying the continuous-time controller, in which the control gains k_i, l_ij are given by k_i = k̄_i − k̃*_i and l_ij = l̄_ij − l̃*_ij for all i ∈ V and j ∈ N_i^out.

Remark 3 (On the selection of the cost function g_c). For example, one could select the (posynomial) cost function g_c(·) as

  g_c(Z_c) = Σ_{i∈V} w_{k,i} / k̃_i + Σ_{i∈V} Σ_{j∈N_i^out} w_{l,ij} / l̃_ij,   (58)

where w_{k,i}, w_{l,ij} > 0 for all i ∈ V, j ∈ N_i^out are given weights. Note that k̃_i and l̃_ij for all i ∈ V and j ∈ N_i^out are the variables in the optimization problem satisfying k̃_i = k̄_i − k_i and l̃_ij = l̄_ij − l_ij (see Appendix A). Moreover, k̄_i, l̄_ij for all i ∈ V and j ∈ N_i^out are the constants that represent the upper bounds of the control gains (see (15)). Hence, reducing k_i (resp. l_ij) reduces the cost term 1/k̃_i (resp. 1/l̃_ij). Therefore, minimizing (58) subject to the constraints (52)–(56) aims at obtaining small control gains while achieving the control objective.

Let us now design the event-triggering gains, i.e., σ_i ∈ (0, 1), η_i ∈ (0, 1) for all i ∈ V.
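Gain-design problems of this kind — a posynomial cost minimized subject to posynomial constraints — are solved via the logarithmic change of variables from Section 1 followed by any convex/NLP solver. The toy geometric program below (unrelated to the paper's actual constraints) illustrates the mechanics.

```python
import numpy as np
from scipy.optimize import minimize

# Toy geometric program:
#   minimize   1/(y1 * y2)          (posynomial cost: y1^-1 * y2^-1)
#   subject to y1 + y2 <= 1         (posynomial constraint)
# After z = log(y) the cost is exp(-z1 - z2) and the constraint is
# log(exp(z1) + exp(z2)) <= 0, both convex, so a local solver finds the
# global optimum. The known solution is y1 = y2 = 0.5 with cost 4.

def cost(z):
    return np.exp(-z[0] - z[1])

def constraint(z):
    # SLSQP expects g(z) >= 0 for feasibility
    return -np.log(np.exp(z[0]) + np.exp(z[1]))

res = minimize(cost, x0=np.log([0.2, 0.6]),
               constraints=[{"type": "ineq", "fun": constraint}],
               method="SLSQP")
y = np.exp(res.x)
print("optimal y:", np.round(y, 4), "cost:", round(res.fun, 4))
```

In practice a dedicated GP solver would be used instead of a generic NLP routine, but the transformation is the same one that makes the design problems of Propositions 2 and 3 solvable in polynomial time.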
Consider the event-triggered controller (13), (14), in which the triggering time instants t^i_0, t^i_1, t^i_2, . . . are given according to (12). Fix the control gains as k_i = k*_i, l_ij = l*_ij, where k*_i, l*_ij (i ∈ V, j ∈ N_i^out) are the optimal control gains designed by solving the geometric programming problem proposed in Proposition 2. Moreover, define the constants c_3,i for all i ∈ V by (59). For technical reasons, we make Assumption 1. Recall that r̃_c,i and C are defined in (48) and (50), respectively. Hence, Assumption 1 implies that, for all i ∈ V satisfying r̃_c,i < 0, the condition (60) is satisfied. Hence, (60) implies that the (optimal) control gains k*_i, l*_ij, i ∉ C, j ∈ N_i^out should be chosen large enough that c_3,i + r̃_c,i is positive. This leads to the posynomial constraint (61), where ε > 0 is a given, arbitrarily small positive constant. Hence, c_3,i + r̃_c,i > 0 is achieved by additionally imposing (61) for all i ∉ C in the geometric program provided in Proposition 2. The following proposition shows that event-triggering gains achieving the control objective can be found by solving a geometric programming problem.

Proposition 3. Consider the SIS model (2), the event-triggered controller (13), (14), and the control objective (5). Let Assumption 1 hold, and let σ̃*_i, s̃*_e,i, η*_i, r̃*_e,i > 0 for all i ∈ V and ε*_1, ε*_2, ε*_3, ξ*_e > 0 be the optimal solution in the variables σ̃_i, s̃_e,i, η_i, r̃_e,i > 0 for all i ∈ V and ε_1, ε_2, ε_3, ξ_e > 0 of the geometric program minimizing g_e(Z_e) subject to the constraints (62)–(67), where Z_e is the vector that collects all the decision variables of the optimization problem, i.e., Z_e = [σ̃_i, s̃_e,i, η_i, r̃_e,i, i ∈ V, ε_1, ε_2, ε_3, ξ_e]^T, and g_e(·) is a given posynomial function.
Then, the control objective is achieved by applying the event-triggered controller (13), (14), in which the event-triggering gains σ_i, η_i are given by (68), (69).

Proposition 3 is proven by modifying the conditions required in Theorem 1 (see Lemma 2 in Appendix B), so that the conditions required to achieve the control objective can be translated into the posynomial constraints shown in (62)–(67). For the detailed proof, see Appendix B.

Remark 5 (On the selection of the cost function g_e). For example, one could select the cost function g_e(·) as in (70), where w_σ,i, w_η,i > 0 for all i ∈ V are given weight parameters. Note that σ̃_i and η_i for all i ∈ V are the variables satisfying σ̃_i = 1 − σ_i (see Appendix B), and that σ_i, η_i for all i ∈ V are the event-triggering gains. Hence, increasing σ_i (resp. η_i) reduces the cost term σ̃_i (resp. η_i^{-1}). From (12), increasing σ_i and η_i allows us to reduce the number of control updates. Therefore, minimizing (70) subject to the constraints (62)–(67) aims at obtaining large event-triggering gains so as to reduce the number of control updates while achieving the control objective.

In this section, we provide some discussions on the proposed approach. In Section 6.1, we provide a way to design the Lyapunov parameter p. In Section 6.2, we discuss the conservativeness of the geometric programming problems in Propositions 2 and 3 with respect to the condition derived in Theorem 1. Note that the Lyapunov parameter p ∈ R^n_{>0} must be given in Proposition 2 (and Proposition 3), which means that p must be chosen a priori, before designing the control and the event-triggering gains. For example, one could choose p = [p_1, . . . , p_n]^T by solving the linear program (71), where c_p > 0 is a given positive constant. Since r̃_c,i = −p_i δ̄_i + Σ_{j∈N_i^out} p_j β̄_ij for all i ∈ V (see (48)), the optimization problem (71) aims at finding p such that Σ_{i∈V} r̃_c,i is minimized.
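Since the cost of (71) is linear in p and the constraint set is the scaled simplex {p > 0, ‖p‖_1 = c_p}, the optimum concentrates (in the limit) on the coordinate with the smallest cost coefficient. A minimal numpy sketch on hypothetical data (illustrative rates, not the paper's network); in practice one would call an LP solver, and the closed form below is only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B_bar = rng.uniform(0.0, 0.05, size=(n, n))   # hypothetical baseline infection rates
delta_bar = rng.uniform(0.08, 0.10, size=n)   # hypothetical baseline recovery rates

# Cost of (71): sum_i r_c,i = sum_j c_j * p_j with c_j = -delta_bar[j] + sum_i B_bar[i, j].
c = -delta_bar + B_bar.sum(axis=0)

def solve_71(c, c_p, tol=1e-9):
    """Closed-form (limiting) solution of min c^T p s.t. p > 0, sum(p) = c_p:
    put almost all mass on argmin_j c_j, keeping p strictly positive."""
    p = np.full(len(c), tol)
    p[np.argmin(c)] += c_p - p.sum()
    return p

p1 = solve_71(c, 1.0)
p2 = solve_71(c, 2.0)
assert abs(p1.sum() - 1.0) < 1e-6 and abs(p2.sum() - 2.0) < 1e-6
# p scales linearly with c_p, so the choice of c_p is immaterial (cf. Proposition 4).
assert np.allclose(p2, 2.0 * p1, atol=1e-6)
```

The final assertion previews the scaling property formalized below: rescaling c_p only rescales p, leaving the feasibility of the subsequent gain-design problems unchanged.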
Moreover, recall that the vector r̃_c = [r̃_c,1, . . . , r̃_c,n]^T appears in the derivative of the Lyapunov function under the continuous-time controller (see (A.6) in the Appendix). Hence, intuitively, if the components r̃_c,i, i ∈ V, are smaller, then the term r̃_c^T x becomes smaller, and so we obtain a larger domain of x on which dV/dt is ensured to be negative: {x ∈ [0, 1]^n : r̃_c^T x − x^T S̃_c x < 0}. Thus, designing p such that the r̃_c,i become small has the potential to enlarge the domain of attraction. Note that it is indeed difficult to take the matrix S̃_c into account when designing p, since S̃_c must satisfy a constraint involving the control gains (whereas r̃_c,i does not depend on the control gains). In (71), the linear constraint ‖p‖_1 = p_1 + p_2 + · · · + p_n = c_p is used to normalize the Lyapunov parameter, so that the sum of all components of p equals c_p. In essence, this avoids the case where the optimal p becomes arbitrarily close to zero. For example, if β̄_ii > δ̄_i for all i ∈ V, then the cost in (71) is positive for all p > 0. Hence, if ‖p‖_1 = c_p were not imposed, we would obtain the optimal solution p* ≈ 0, since the optimizer drives the cost in (71) as close as possible to 0, i.e., Σ_{i∈V} (−p_i δ̄_i + Σ_{j∈N_i^out} p_j β̄_ij) → 0 as p → 0. One might wonder how to select c_p in (71). Here, without loss of generality, we can set c_p = 1; the choice of c_p does not affect the domain of the control gains k_i, l_ij, i ∈ V, j ∈ N_i^out (resp. the event-triggering gains σ_i, η_i, i ∈ V) for which the geometric programming problem in Proposition 2 (resp. Proposition 3) is feasible. Specifically, we have the following result:

Proposition 4. Let p^(1) = [p^(1)_1, . . . , p^(1)_n]^T and p^(2) = [p^(2)_1, . . . , p^(2)_n]^T denote the optimal solutions of (71) with c_p = γ^(1) and c_p = γ^(2), respectively, where γ^(1), γ^(2) > 0 with γ^(1) ≠ γ^(2) are any positive constants. Let (P.1) and (P.2) (resp. (Q.1) and (Q.2)) denote the geometric programming problem in Proposition 2 (resp.
Proposition 3) with the Lyapunov parameter given by p = p^(1) and p = p^(2), respectively. Then, the feasibility of (P.1) (resp. (Q.1)) implies the feasibility of (P.2) (resp. (Q.2)), and vice versa.

Proposition 4 implies that the domain of the control gains (resp. event-triggering gains) for which the geometric programming problem in Proposition 2 (resp. Proposition 3) with p = p^(1) is feasible equals the one with p = p^(2). Hence, if the cost function of the geometric programming problem in Proposition 2 depends only on the control gains, i.e., g_c(k̃_i, l̃_ij, i ∈ V, j ∈ N_i^out), then the optimal solution with p = p^(1) equals the one with p = p^(2). In general, we define the cost function to depend only on the control gains, since these are the parameters we wish to optimize (see (58) for an example of the cost function). Similarly, if the cost function of the geometric programming problem in Proposition 3 depends only on the event-triggering gains, i.e., g_e(σ_i, η_i, i ∈ V), then the optimal solution with p = p^(1) equals the one with p = p^(2) (see (70) for an example of the cost function). For the proof of Proposition 4, see Appendix C.

In this section, we discuss the potential conservativeness of Propositions 2 and 3. Note that the posynomial constraints in Propositions 2 and 3 are sufficient conditions for those in Theorem 1. Hence, it is worth discussing how conservative the posynomial constraints derived in Propositions 2 and 3 are with respect to those in Theorem 1. In deriving the posynomial constraints on the control gains (Proposition 2), the sufficiency arises because an upper bound (see (16)) has been taken for all i ∉ C. Thus, the conservativeness of the control-gain design in Proposition 2 increases as the number of nodes i satisfying i ∉ C (i.e., the cardinality of V\C) increases.
In other words, the conservativeness decreases as the number of nodes i satisfying i ∈ C (i.e., the cardinality of C) increases. Recall that C is defined by (73). If β̄_ii ≥ δ̄_i (i.e., the baseline infection rate of the node itself is larger than the natural recovery rate), then (73) holds. Hence, as the number of nodes satisfying β̄_ii ≥ δ̄_i increases, which may be the case when a huge outbreak occurs (i.e., the infection rates are large), the conservativeness of the control-gain design in Proposition 2 decreases. In deriving the posynomial constraints on the event-triggering gains (Proposition 3), the sufficiency arises in (B.5), since we have used the inequality given there (see (27)). The term (1/2) P L(I_n − G)(G + H) is the matrix that collects the coefficients of the cross terms x_i x_j, j ∈ N_i^in, in the derivative of x_i (see (37)). Since we neglect this term in deriving the posynomial constraints, σ_i, η_i will be selected more conservatively as the number of in-neighbors of node i grows. In other words, if the number of in-neighbors of node i is very large, very small σ_i, η_i could be obtained, which could result in frequent control updates (or the geometric programming problem in Proposition 3 may become infeasible). If the geometric programming problem for the control gains (Proposition 2) is infeasible, we can only try to change the candidate Lyapunov function (i.e., modify the parameter p) or, if allowed, enlarge the upper bounds k̄_i, l̄_ij of the control gains in (15) so as to increase the feasibility domain of the geometric programming problem in Proposition 2. If Proposition 2 is feasible but Proposition 3 is not, we can modify the cost function in Proposition 2 (e.g., change the weight parameters in (58)) or slightly tighten the constraints in Proposition 2 so as to make Proposition 3 feasible.
More specifically, in Proposition 2 we replace the constraints (52), (55), (56) with (74), (75), (76), where ε_s, ε_r ∈ (0, 1) are given positive constants. Note that letting ε_s, ε_r → 0 in (74), (75), and (76) recovers (52), (55), and (56), respectively. Let k̃*_i, l̃*_ij, s̃*_c,i > 0 for all i ∈ V, j ∈ N_i^out and ε*_1, ε*_2, ε*_3, ξ*_c > 0 denote any feasible solution of the posynomial constraints (53), (54), (74)–(76). The corresponding control gains are denoted by k*_i = k̄_i − k̃*_i, l*_ij = l̄_ij − l̃*_ij. Then, (77) follows. Moreover, if η_i = ε_r/c_3,i, then (78) follows. Additionally, from (55) and (56), (79) and (80) follow. Now, consider the feasibility problem provided in Proposition 3: find σ̃_i, s̃_e,i, η_i, r̃_e,i > 0 for all i ∈ V and ε_1, ε_2, ε_3, ξ_e > 0 such that (62)–(67) hold. Suppose that ε_r is chosen small enough that ε_r < c_3,i for all i ∈ V. Then, (77), (78), (79), and (80) imply that the posynomial constraints (62)–(67) are all feasible. Therefore, if we use the slightly tightened constraints in Proposition 2, and if they are feasible, we can guarantee the feasibility of the posynomial constraints in Proposition 3. Even if Proposition 3 is feasible, it is still possible that, due to the conservativeness described above, very small event-triggering gains σ_i, η_i are obtained. In such a case, we can use the result of Theorem 1 to reduce the conservativeness. That is, if the σ_i, η_i resulting from Proposition 3 are very small for some i, we can try to increase these parameters (i.e., σ_i ← σ_i + ε_σ, η_i ← η_i + ε_η for some ε_σ, ε_η > 0, with the other parameters fixed) and then check whether (29) in Theorem 1 holds. If (29) holds, then the control objective is achieved even with the modified event-triggering gains. Since the condition in Theorem 1 is less conservative than those in Proposition 3, we have the potential to enlarge the event-triggering gains.
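The enlargement heuristic just described can be sketched as a simple loop. Here `check_condition_29` is a hypothetical placeholder standing in for the convex feasibility check of (29) in Theorem 1, not the paper's implementation:

```python
def enlarge_gains(sigma, eta, check_condition_29, d_sigma=0.01, d_eta=0.001, max_iter=100):
    """Greedily bump the event-triggering gains (sigma_i, eta_i) and keep each
    bump only while the Theorem-1 condition (29) still holds; stop at the
    first candidate that fails the check."""
    for _ in range(max_iter):
        cand_sigma = [min(s + d_sigma, 1.0 - 1e-9) for s in sigma]
        cand_eta = [min(e + d_eta, 1.0 - 1e-9) for e in eta]
        if not check_condition_29(cand_sigma, cand_eta):
            break
        sigma, eta = cand_sigma, cand_eta
    return sigma, eta
```

With a toy oracle that accepts gains up to some cap, the loop returns the largest accepted gains; in the actual design, the oracle would be the convex program that verifies (29) with all gains fixed.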
Note that (29) can be checked via a convex program, since all the control and event-triggering gains are given at this point. While the above approach may be somewhat heuristic, it is useful in practice for reducing the conservativeness of Proposition 3.

In this section, we demonstrate the performance of the proposed event-triggered controller through a numerical simulation. The simulations have been conducted on macOS Big Sur, 8-core Intel Core i9 2.4 GHz, 32 GB RAM, using Python 3. (Fig. 2 caption: The graph is constructed from the statistical data [37, 38]. The directions of the edges in the graph are omitted for brevity. All nodes are labeled by their world airport codes (IATA 3-letter codes) and divided into three groups colored red (Group 1), blue (Group 2), and green (Group 3).) Moreover, we used CVXPY for solving the convex optimization problems. The code is available on GitHub: https://github.com/yugaro/etc-sis-solver.

(Problem setup): We apply the proposed approach against an epidemic spreading propagated over an air transportation network consisting of 50 airports in the United States (U.S.). The graph is constructed from the statistical data [37, 38] on the numbers of passengers and flights. More specifically, we extract from [37] the top 50 U.S. airports by number of passengers in 2019, and from [38] the numbers of flights among these airports. The resulting graph consists of 50 nodes (i.e., V = {1, . . . , 50}) representing the 50 airports, with edges between distinct nodes representing the existence of direct flights between them. Fig. 2 depicts the resulting graph structure, where the nodes in the network are divided into three groups denoted by V_1, V_2, V_3 ⊆ V, called Groups 1, 2, and 3, respectively. In this numerical simulation, we randomly choose the baseline recovery rate δ̄_i of each node i ∈ V from a uniform distribution on the interval [8.0 × 10^-2, 1.0 × 10^-1].
On the other hand, each baseline infection rate from a neighbor, β̄_ij (i ∈ V, j ∈ N_i^out \ {i}), is chosen on the interval (0, 5.0 × 10^-2] in accordance with the data [38]. More specifically, each β̄_ij (i ∈ V, j ∈ N_i^out \ {i}) is chosen to be proportional to the number of direct flights from the airport corresponding to node i ∈ V to the one corresponding to node j ∈ N_i^out \ {i}, and is normalized such that the maximum value of β̄_ij (i ∈ V, j ∈ N_i^out \ {i}) equals 5.0 × 10^-2. Moreover, each baseline infection rate of a node itself, β̄_ii, i ∈ V, is chosen to be proportional to the population of the city containing the airport corresponding to node i, and is normalized such that the maximum value of β̄_ii (i ∈ V) equals 5.0 × 10^-2. The control objective in this numerical simulation is to achieve (7) for all m ∈ {1, 2, 3} with x̄_1 = 8.0 × 10^-2, x̄_2 = 0.1, x̄_3 = 9.0 × 10^-2. In other words, we aim at containing the average fraction of infected people in Groups 1, 2, and 3 within the corresponding thresholds x̄_1 = 8.0 × 10^-2, x̄_2 = 0.1, and x̄_3 = 9.0 × 10^-2, respectively. As described in Section 2.2, such a control objective can be expressed by (5) with an appropriate selection of the parameters w_m ∈ {0, 1}^n and d̄_m for each m ∈ {1, 2, 3}. The upper bounds of the control gains are given by k̄_i = 5.2 × 10^-1, l̄_ij = 5.4 × 10^-2 for all i ∈ V and j ∈ N_i^out. The Lyapunov parameter p ∈ R^n_{>0} in (23) is chosen by solving (71). When solving the geometric programming problem for the optimal control gains in Proposition 2, we define the cost function g_c(·) by (58) with w_k,i = w_l,ij = 1 for all i ∈ V, j ∈ N_i^out. In addition, when solving the geometric programming problem for the optimal event-triggering gains in Proposition 3, we define the cost function g_e(·) by (70) with w_σ,i = w_η,i = 1 for all i ∈ V. (Simulation results): Fig.
3(a) plots the state trajectories of all nodes under the event-triggered controller designed by the emulation-based approach proposed in Section 5, where the initial state x(0) is drawn randomly from a uniform distribution on [0, 1]^n. For comparison, Fig. 3(b) shows the state trajectories without control inputs (i.e., the state trajectories with u_i(t) = v_ij(t) = 0 for all i ∈ V, j ∈ N_i^out, and t ≥ 0), where the initial state is the same as for the event-triggered controller. Fig. 3(a) shows that the fraction of infected people is contained effectively by the proposed event-triggered controller compared with the control-free case in Fig. 3(b). To verify that the control objective is achieved, Fig. 4 shows the trajectories of |V_m|^-1 Σ_{i∈V_m} x_i(t) for all m ∈ {1, 2, 3} under the proposed event-triggered controller (red solid line), without control inputs (green dotted line), and under the continuous-time controller (46), (47) (see Section 5.1; blue dashed line) whose control gains are the same as those of the event-triggered controller. Fig. 4 shows that, under the event-triggered controller, the average fraction of infected people in each group is appropriately contained such that the control objective is achieved (i.e., it converges below the prescribed thresholds x̄_1, x̄_2, and x̄_3), while preserving almost the same convergence performance as the continuous-time controller. It can also be seen that the event-triggered controller provides better control performance than the continuous-time controller. To explain why this happens, recall that the event-triggered controller updates the control inputs only when they are needed, while the continuous-time controller updates them continuously. Moreover, linear state feedback is employed by both the continuous-time controller and the event-triggered controller (see (13), (14)).
Thus, when the state (the fraction of infected people) is decreasing, the control input under the continuous-time controller decreases at all times, while the control input under the event-triggered controller is kept constant during the inter-event times and decreases only at the update times. Therefore, if the inter-event times are relatively long, the control input under the event-triggered controller tends to be larger than the one under the continuous-time controller. Consequently, the event-triggered controller tends to mitigate the infection further than the continuous-time controller (since it tends to apply larger control inputs) and thus provides better performance. Note that, when the state is increasing, the reverse holds: the control inputs under the event-triggered controller tend to be smaller than those under the continuous-time controller, yielding worse performance. In this numerical simulation, the event-triggered controller outperforms the continuous-time controller because, as shown in Fig. 3, many states are decreasing most of the time. In Fig. 5(a) and (b), we plot the trajectories of the control inputs u_i(t), v_ij(t) for i ∈ V_1 under the proposed event-triggered controller. In addition, Fig. 5(c) shows the corresponding inter-event times, i.e., t^i_{ℓ+1} − t^i_ℓ for ℓ ∈ N and i ∈ V_1. From the figures, we observe that the control inputs are not updated continuously but aperiodically, according to the proposed event-triggered controller. We furthermore observe that the inter-event times tend to become larger as time evolves and the state trajectories converge, which implies that the control inputs are updated only when the fraction of infected people at each node increases or decreases by the threshold designed by the proposed approach.
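The closed-loop behavior described above can be reproduced in miniature. The sketch below forward-Euler-integrates a small SIS network, ẋ_i = −(δ̄_i + u_i) x_i + (1 − x_i) Σ_j β̄_ij x_j, with u_i = k_i x_i held piecewise-constant between events; the network size, rates, gains, trigger form, and the omission of the v_ij inputs are all illustrative assumptions, not the paper's 50-node setup or its designed gains:

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps, dt = 4, 2000, 0.01
B_bar = rng.uniform(0.0, 0.05, (n, n))   # illustrative baseline infection rates
delta_bar = rng.uniform(0.08, 0.10, n)   # illustrative baseline recovery rates
k = np.full(n, 0.5)                      # illustrative control gains (v_ij omitted)
sigma, eta = 0.1, 0.01                   # illustrative event-triggering gains

x = rng.uniform(0.2, 0.8, n)
x0_mean = x.mean()
x_held = x.copy()                        # state sampled at the last update instants
updates = 0
for _ in range(steps):
    fired = np.abs(x - x_held) >= sigma * x_held + eta   # assumed trigger rule
    x_held = np.where(fired, x, x_held)
    updates += int(fired.sum())
    u = k * x_held                       # feedback held constant between events
    dx = -(delta_bar + u) * x + (1.0 - x) * (B_bar @ x)
    x = np.clip(x + dt * dx, 0.0, 1.0)

assert np.all((x >= 0.0) & (x <= 1.0))   # the SIS state stays in [0, 1]^n
assert 0 < updates < n * steps           # far fewer updates than continuous sampling
```

Counting `updates` against `n * steps` makes the headline trade-off concrete: the piecewise-constant feedback needs only sporadic updates, yet the infection level still decays in this run.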
In summary, we can confirm the validity of the proposed event-triggered controller: the control inputs for each node are updated only when they are needed, while preserving almost the same convergence performance as the continuous-time controller.

In this paper, we proposed an event-triggered control framework for a deterministic SIS model. In the proposed framework, the control inputs for each subpopulation are updated only when the fraction of infected people increases or decreases by a prescribed threshold, aiming at reducing the unnecessary burden of updating the control inputs. First, we analyzed the stability of the closed-loop system under the event-triggered controller. In particular, we derived a sufficient condition for the event-triggered controller to achieve a prescribed control objective. We further showed that the derived conditions are characterized by convex programs, which can thus be solved efficiently in polynomial time. Then, we proposed an emulation-based approach to the design of the event-triggered controller. In particular, we showed that the problem of designing the control and the event-triggering gains can be solved by geometric programs, which reduce to convex programs and can be solved efficiently in polynomial time. Finally, we confirmed the validity of the proposed approach through numerical simulations of an epidemic spreading over an air transportation network. The following aspects should be pursued in future work:

• In the event-triggered control framework presented in this paper, the state of each node must be monitored continuously in order to determine the triggering time instants. Moreover, the inter-event times might still be very small, especially when a huge outbreak happens. Hence, it is of great importance to impose constraints on the lower bound of the inter-event times, so as to further reduce the frequency of the control updates.
These issues could indeed be addressed by employing periodic event-triggered control [24], in which the states are measured only periodically (not continuously, as in the standard event-triggered control presented in this paper). Therefore, future work includes formulating periodic event-triggered control for mitigating epidemic spreading. • In our current problem setup, we impose constraints on the control gains (see (15)) in order to restrict the magnitude of the control inputs. However, in this formulation, the upper bound on the control input is attained only when the state equals 1 (i.e., u_i(t) = k̄_i iff x_i(t) = 1). Hence, it might be more reasonable and realistic to impose the constraints directly on the control inputs, e.g., u_i(t) ≤ ū_i, v_ij(t) ≤ v̄_ij for all t ≥ 0, rather than on the control gains. Considering how to impose u_i(t) ≤ ū_i, v_ij(t) ≤ v̄_ij when designing the event-triggered controller for epidemic spreading should therefore be pursued in future work. • In addition to the above, several research directions should be pursued in relation to the current COVID-19 pandemic. For example, it is of both theoretical and practical interest to investigate event-triggered control of more realistic epidemic models, such as SIR models, SEIR models, and various others (see, e.g., [39, 40, 41, 42]). Another important research direction is to develop a theoretical framework for event-triggered control of epidemics over temporal networks [43, 44], in order to account for the intrinsic time-variability of contact networks in human society. Additionally, time delays arising in the feedback loop, as well as parameter uncertainties in the epidemic models (see, e.g., [40]), should be taken into account in future investigations of event-triggered controller design.
Before proving Proposition 3, let us show the following lemma, a slight modification of Theorem 1.

Lemma 2. Consider the SIS model (2), the event-triggered controller (13), (14), and the control objective (5). Assume that the control and the event-triggering gains satisfying (11) and (15) are chosen such that (B.1) holds, where W_e = {x ∈ R^n : x^T S̃_e x − (r̃_e + ε 1_n)^T x ≤ 0} with ε > 0, S̃_e = diag(s̃_e), and the vectors r̃_e, s̃_e ∈ R^n are defined such that the inequalities (B.3), (B.4) are satisfied. Then, for any x(0) ∈ [0, 1]^n, the control objective (5) is achieved by applying the event-triggered controller.

As shown in the proof below, Lemma 2 is more conservative than Theorem 1, in the sense that it provides sufficient conditions for those provided in Theorem 1. However, Lemma 2 is useful for translating the conditions required to achieve the control objective into posynomial constraints (see the proof of Proposition 3 below). (Proof of Lemma 2): By Theorem 1, the control objective (5) is achieved by finding control and event-triggering gains such that (29) is satisfied for all m ∈ {1, . . . , M}. From (25), (26), and (27), it follows that (B.5) holds, where we used (B.3) and (B.4). Hence, it follows that x^T S̃_e x − (r̃_e + ε 1_n)^T x > 0 =⇒ x^T Q x − (r + ε 1_n)^T x > 0 for all x ∈ R^n. Therefore, letting W_e = {x ∈ R^n : x^T S̃_e x − (r̃_e + ε 1_n)^T x ≤ 0}, we have W ⊆ W_e (recall that W is defined in (28)). Hence, θ* in (30) is bounded accordingly, where c_3,i > 0 is the constant defined in (59). Thus, defining the new variables σ̃_i = 1 − σ_i for all i ∈ V, we obtain the constraint (B.9), where r̃_c,i is the constant defined in (48). If i ∈ C, we have r̃_c,i ≥ 0 and so r̃_e,i ≥ r̃_c,i + c_3,i η_i > 0. This implies that (B.9) is a posynomial constraint. Thus, the event-triggering gains are obtained by solving the geometric programming problem in Proposition 3. (In Appendix C, we write p*^(1)_m = min_{i∈supp(w_m)} p^(1)_i.)
Multiplying both sides of (C.1) by γ, we obtain (C.2). Letting c^(2)_1,i = p^(2)_i k̄_i + Σ_{j∈N_i^out} p^(2)_j l̄_ij, we obtain (C.3), where we used γ c^(1)_1,i = c^(2)_1,i. Moreover, multiplying both sides of (C.4) by γ, the analogous inequality follows with c^(2)_2,m = γ c^(1)_2,m. In addition, multiplying both sides of (C.5) by γ^2 yields the corresponding inequality in p^(2). Thus, the following feasibility problem admits a solution: find k̃_i, l̃_ij, s̃_c,i > 0 for all i ∈ V, j ∈ N_i^out and ε_1, ε_2, ε_3, ξ_c > 0 such that s̃_c,i + p^(2)_i k̃_i + Σ_{j∈N_i^out} p^(2)_j l̃_ij ≤ c^(2)_1,i for all i ∈ V, k_i + ε_1 ≤ k̄_i for all i ∈ V, l_ij + ε_2 ≤ l̄_ij for all i ∈ V and j ∈ N_i^out, and the remaining constraints hold with ξ_c = γ^2 ξ^(1)_c (C.11). Thus, the feasibility of the geometric program in Proposition 2 with p^(1) implies the feasibility of the geometric program in Proposition 2 with p^(2). It can be shown in the same way that the feasibility of the geometric program with p^(2) implies the feasibility of the geometric program with p^(1). Finally, returning to the proof of Proposition 3, we obtain r̃_c,i + c_3,i η_i ≤ r̃_e,i for all i ∈ C, which yields the posynomial constraint (62).

Acknowledgments. This work was supported in part by JSPS KAKENHI Grant 21H01353 and in part by the JST ERATO HASUO Metamathematics for Systems Design Project (No. JPMJER1603).

References
Epidemic processes in complex networks
Virus spread in networks
Modeling malware spreading dynamics
Information contagion: an empirical study of spread of news on Digg and Twitter social networks
Cascading failures in power grids: Analysis and algorithms
Traffic optimization to control epidemic outbreaks in metapopulation models
Traffic control for network protection against spreading processes
Analysis and control of epidemics: A survey of spreading processes on complex networks
Optimal resource allocation for network protection against spreading processes
Data-driven network resource allocation for controlling spreading processes
Optimal resource allocation for control of networked epidemic models
Distributed algorithm for suppressing epidemic spread in networks
Epidemic processes over adaptive state-dependent networks
Dynamic resource allocation to control epidemic outbreaks: a model predictive control approach
Output-feedback control of virus spreading in complex networks with quarantine
Robust economic model predictive control of continuous-time epidemic processes
Analysis and control of a continuous-time bi-virus model
Analysis and distributed control of periodic epidemic processes
Networked multi-virus spread with a shared resource: Analysis and mitigation strategies
Some discrete-time SI, SIR, and SIS epidemic models
On the dynamics of deterministic epidemic propagation over networks
An introduction to event-triggered and self-triggered control
Model-based periodic event-triggered control for linear systems
Periodic event-triggered control for linear systems
Convex Optimization
Geometric programming for optimal positive linear systems
Distributed event-triggered control for multi-agent systems
Event-based broadcasting for multi-agent average consensus
Distributed event-triggered control of multi-agent systems with combinational measurements
Event-triggered communication and control of networked systems for multi-agent consensus
A tutorial on geometric programming
Event-separation properties of event-triggered control systems
Nonlinear Systems (third edition)
A deterministic model for gonorrhea in a nonhomogeneous population
Matrix Analysis
Global stability analysis of an SIR epidemic model with demographics and time delay on networks
Can the COVID-19 Epidemic Be Controlled on the Basis of Daily Test Reports?
Generalized epidemic mean-field model for spreading processes over multilayer complex networks
Modelling the COVID-19 epidemic and implementation of population-wide interventions in Italy
Stability of spreading processes over time-varying large-scale networks
Epidemic processes over time-varying networks

Appendix A. To prove Proposition 2, let us first provide the following lemma, which gives a sufficient condition for the control objective (5) to be achieved under the continuous-time controller. Lemma 1.
Consider the SIS model (2), the continuous-time controller (46), (47), and the control objective (5). Assume that the control gains satisfying (15) are chosen such that the required conditions are satisfied for all m ∈ {1, . . . , M}, where θ̃*_c ∈ R is defined by a maximization over the set W_c = {x ∈ R^n : x^T S̃_c x − (r̃_c + ε 1_n)^T x ≤ 0} with ε > 0, S̃_c = diag(s̃_c), and s̃_c ∈ R^n is chosen such that the required inequality is satisfied. Then, for any x(0) ∈ [0, 1]^n, the control objective (5) is achieved by applying the continuous-time controller.

(Proof of Lemma 1): The proof is based on Theorem 1. Applying the continuous-time controller (46), (47) is equivalent to setting the event-triggering gains as σ_i → 0 and η_i → 0 for all i ∈ V (i.e., G → 0 and H → 0). Hence, by setting G → 0, H → 0 in (38), the upper bound of ẋ(t) under the continuous-time controller is computed as in (A.3). The derivative of the Lyapunov function V(x) = p^T x (under the continuous-time controller) is then given by (A.6), where we used r̃_c^T = p^T (B̄ − D̄) (see the statement after (48)) and (A.3). Now, similarly to (40), the analogous implication follows; hence, by taking the contrapositive, we obtain (A.8). Thus, the claim follows by the same procedure as in the proof of Theorem 1. Adding p_i k̄_i + Σ_{j∈N_i^out} p_j l̄_ij to both sides of (A.8), we obtain (A.9), where c̃_1,i is the constant defined in (49). Define the new decision variables k̃_i = k̄_i − k_i and l̃_ij = l̄_ij − l_ij. Then, the constraint (A.9) results in (52), which is a posynomial constraint (with respect to the variables k̃_i, l̃_ij, s̃_c,i). With the new decision variables k̃_i, l̃_ij (j ∈ N_i^out), the constraint (15) is rewritten as k̃_i > 0, l̃_ij > 0, i ∈ V, j ∈ N_i^out. The geometric program then seeks k̃_i, l̃_ij > 0, s̃_c = [s̃_c,1, . . . , s̃_c,n]^T > 0, and ε_1, ε_2, ε_3, ξ_c > 0 such that the posynomial constraints (52)–(56) are satisfied. Since k̃_i = k̄_i − k_i and l̃_ij = l̄_ij − l_ij, the (optimal) control parameters are given by (57) after solving the geometric programming problem shown in Proposition 2.
If i ∉ C, we have r̃_c,i + c_3,i η_i = 0, since η_i = −r̃_c,i/c_3,i (see (69)). Note that η_i = −r̃_c,i/c_3,i ∈ (0, 1) by Assumption 1. Thus, the constraint (B.9) for all i ∉ C trivially reduces to r̃_e,i ≥ 0, i ∉ C. Using the new decision variable σ̃_i, the constraint on σ_i (i.e., σ_i ∈ (0, 1)) is rewritten as σ̃_i ∈ (0, 1), which leads to the posynomial constraint (64). The constraint on η_i is given by (66), which follows trivially. Let us now translate (B.1) into a posynomial constraint. As with (A.14), the optimization problem (B.2) is solved analytically.

(Proof of Proposition 4): Let p^(1) = [p^(1)_1, . . . , p^(1)_n]^T and p^(2) = [p^(2)_1, . . . , p^(2)_n]^T denote the solutions of (71) with c_p = γ^(1) and c_p = γ^(2), respectively, where γ^(1), γ^(2) > 0 are any positive constants. From (71), it is easily shown that p^(2) = γ p^(1), where γ = γ^(2)/γ^(1). Here, we prove the claim only for the geometric program in Proposition 2 (i.e., that the feasibility of the geometric program in Proposition 2 with p^(1) implies the one with p^(2), and vice versa), since the proof for Proposition 3 proceeds in the same way. Let any feasible solution satisfying the posynomial constraints in Proposition 2 with p^(1) be given. In other words, we have