key: cord-0125582-dwd141lg authors: Bouvier, Jean-Baptiste; Xu, Kathleen; Ornik, Melkior title: Quantitative Resilience of Linear Driftless Systems date: 2021-01-28 journal: nan DOI: nan sha: d49991c2e98611aaed5ec1047945e2e02e6cadd2 doc_id: 125582 cord_uid: dwd141lg This paper introduces the notion of quantitative resilience of a control system. Following prior work, we study systems enduring a loss of control authority over some of their actuators. Such a malfunction results in actuators producing possibly undesirable inputs over which the controller has real-time readings but no control. By definition, a system is resilient if it can still reach a target after a loss of control authority. However, after a malfunction a resilient system might be significantly slower to reach a target compared to its initial capabilities. We quantify this loss of performance through the new concept of quantitative resilience. We define this metric as the maximal ratio of the minimal times required to reach any target for the initial and malfunctioning systems. Na"ive computation of quantitative resilience directly from the definition is a time-consuming task as it requires solving four nested, possibly nonlinear, optimization problems. The main technical contribution of this work is to provide an efficient method to compute quantitative resilience. Relying on control theory and on three novel geometric results we reduce the computation of quantitative resilience to a single linear optimization problem. We illustrate our method on two numerical examples: an opinion dynamics scenario and a trajectory controller for low-thrust spacecrafts. When failure is not an option, critical systems are built with enough redundancy to endure actuator failure [24] . The study of this type of malfunction typically considers either actuators locking in place [26] or actuators losing effectiveness but remaining controllable [27, 28] . However, when actuators can be subject to damage or hostile takeover, the malfunction may result in the actuators producing undesirable inputs over which the controller has real-time readings but no control. This type of malfunction has been discussed in [5] under the name of loss of control authority over actuators and encompasses scenarios where actuators and sensors are under attack [8] . In the setting of loss of control authority, undesirable inputs are observable and can have a magnitude similar to the controlled inputs, while in classical robust control the undesirable inputs are not observable and have a small magnitude compared to the actuators' inputs [4, 16] . The results of [6] showed that a controller having access to the undesirable inputs is considerably more effective than a robust controller. After a partial loss of control authority over actuators, a target is said to be resiliently reachable if for any undesirable inputs produced by the malfunctioning actuators there exists a control driving the state to the target [5] . However, after the loss of control the malfunctioning system might need considerably more time to reach its target compared to the initial system. In this work we thus introduce the concept of quantitative resilience for control systems in order to measure the delays caused by the loss of control authority over actuators. While concepts of quantitative resilience have been previously developed for water infrastructure systems [21] or for nuclear power plants [14] , such concepts only work for their specific application. 
In this work we formulate quantitative resilience as the maximal ratio of the minimal times required to reach any target for the initial and malfunctioning systems. This formulation leads to a nonlinear minimax optimization problem with an infinite number of equality constraints. Because of the complexity of this problem, a straightforward attempt at a solution is not feasible. While for linear minimax problems with a finite number of constraints the optimum is reached on the boundary of the constraint set [20] , such a general result does not hold in the setting of semi-infinite programming [11] where our problem belongs. However, the fruitful application of the theorems of [18, 19] stating the existence of time-optimal controls combined with the specific geometry of our problem, allow us to derive two bang-bang results concerning some nonlinear optimization problems. Then, the quantitative resilience of a driftless system is reduced to single linear optimization problem. As a first step toward the study of quantitative resilience for linear systems we restrict this work to driftless systems. Indeed, we will see that even with these simple dynamics the theory is already sufficiently rich. Furthermore, one can find an abundance of driftless systems in robotics [22] . The contributions of this paper are fourfold. First, we introduce the concept of quantitative resilience for systems enduring a loss of control authority over some of their actuators. Secondly, in the course of solving our central problem, we determine a simple analytical solution to a related nonlinear optimization problem with applications not restricted only to control theory. Thirdly, we provide an efficient method to compute the quantitative resilience of driftless systems by simplifying a nonlinear problem of four nested optimizations into a single linear optimization problem. Finally, based on quantitative resilience and controllability we establish a necessary and sufficient condition to verify if a system is resilient. The remainder of the paper is organized as follows. Section 2 introduces preliminary results concerning resilient systems and defines quantitative resilience. Section 3 establishes three optimization results that will prove crucial for the computation of quantitative resilience. To evaluate this metric we need the minimal time for the system to reach a target before and after the loss of control authority. We calculate this minimal time for the initial system in Section 4 and for the malfunctioning system in Section 5. Section 6 is the pinnacle of this work as we design an efficient method to compute quantitative resilience and assess whether a system is resilient or not. In Section 7 our theory is applied to an opinion dynamics scenario and on a linear trajectory controller for a low-thrust spacecraft. Appendices A and B gather all the lemmas required to prove our central nonlinear optimization result. The continuity of the minimal malfunctioning reach time is proved in Appendix C. Finally, we compute the dynamics of the low-thrust spacecraft in Appendix D. Notation: We use ∂X to denote the boundary of a set X and its interior is denoted X • := X\∂X. Set X is symmetric if for all x ∈ X, we have −x ∈ X. The convex hull of a set X is denoted with co(X). The set of integers from 1 to N is [N ] := {1, . . . , N }. We denote the set of nonnegative real numbers with R + := [0, ∞) and we use the subscript * to exclude zero, for instance R + * := (0, ∞). 
In the real n-dimensional space R^n we denote the Euclidean norm with ‖·‖ and the unit sphere with S := {x ∈ R^n : ‖x‖ = 1}. The ball of radius ε centered on x is B_ε(x) := {y ∈ R^n : ‖y − x‖ ≤ ε}. The scalar product of vectors is denoted by ⟨·, ·⟩. For x, y ∈ R^n_*, we write x, y for the signed angle from x to y in the 2D plane containing both of them. We take the convention that angles are positive in the clockwise orientation. We say that x ∈ [x_1, x_2] ⊂ R^n if there exists λ ∈ [0, 1] such that x = λx_1 + (1 − λ)x_2. The infinity-norm of a vector x ∈ R^n is ‖x‖_∞ := max{|x_i| : i ∈ [n]}. The image of a matrix A ∈ R^{n×m} is denoted Im(A) ⊂ R^n, its rank is rank(A) = dim Im(A) ≤ n and its norm is ‖A‖ := sup_{x ≠ 0} ‖Ax‖ / ‖x‖. Unless otherwise stated, the element at row i and column j of a matrix A is denoted by A_{i,j}. For square integrable functions f : R → R^n, the L²-norm is defined as ‖f‖²_{L²} := ∫_{t ∈ R} ‖f(t)‖² dt, and the L∞-norm is defined as ‖f‖_{L∞} := sup_{t ∈ R} ‖f(t)‖_∞. A set-valued function ϕ from X to Y is denoted ϕ : X ⇉ Y following [1]. The sequence x_0, x_1, . . . is denoted with {x_k}.

As a first step toward linear systems, we begin with driftless systems governed by the differential equation

ẋ(t) = B̄ū(t), with x(0) = x_0 ∈ R^n and ū ∈ Ū, (1)

where B̄ ∈ R^{n×(m+p)} is a constant matrix. Let u_max > 0 be the bound on the input magnitude, so that the set of allowable controls is

Ū := { ū : R_+ → R^{m+p} : ‖ū‖_{L∞} ≤ u_max }. (2)

After a malfunction, the system loses control authority over p of its m + p initial actuators. Because of the malfunction, the initial control input ū is split into the remaining controlled inputs u and the undesirable inputs w. Without loss of generality we always consider the columns C representing the malfunctioning actuators to be at the end of B̄. We split the control matrix accordingly: B̄ = [B C]. Then, the dynamics become

ẋ(t) = Bu(t) + Cw(t), with x(0) = x_0 ∈ R^n, u ∈ U and w ∈ W, (3)

with U := { u : R_+ → R^m : ‖u‖_{L∞} ≤ u_max } and W := { w : R_+ → R^p : ‖w‖_{L∞} ≤ u_max }. (4)

We will use the concept of controllability of [18]. A system following the dynamics (1) is controllable if for every target x_goal ∈ R^n there exist a control ū ∈ Ū and a time T such that x(T) = x_goal. We recall here the definition of the resilience of a system introduced in [6]. A system following the dynamics (1) is resilient to the loss of p of its actuators, corresponding to the matrix C as above, if for all undesirable inputs w ∈ W and all targets x_goal ∈ R^n there exist a control u ∈ U and a time T such that the state of the system (3) reaches the target at time T, i.e., x(T) = x_goal.

Proposition 1. If a system following the dynamics (1) is resilient, then the system ẋ(t) = Bu(t) is controllable.

Proof. Let y ∈ R^n, x_goal := y + x_0 ∈ R^n and w ∈ W such that w(t) = 0 for all t ≥ 0. Since the system is resilient, there exist u ∈ U and T ≥ 0 such that x(T) = x_goal, i.e., x_goal − x_0 = ∫_0^T (Bu(t) + Cw(t)) dt = B ∫_0^T u(t) dt =: Bz. Then, x_goal − x_0 = Bz = y ∈ Im(B), so rank(B) = n and ẋ(t) = Bu(t) is controllable.

By definition, a resilient system is still capable of reaching any target after losing control authority over p of its actuators. However, the time for this malfunctioning system to reach a target might be considerably larger than the time needed for the initial system to reach the same target. We introduce these two times for the target x_goal ∈ R^n and the target distance d := x_goal − x_0.

Definition. The nominal reach time T*_N is the shortest time required to reach the target for the initial system following (1):

T*_N(d) := inf { T ≥ 0 : there exists ū ∈ Ū such that x(T) = x_goal }. (5)

Definition.
The malfunctioning reach time T * M is the shortest time required to reach the target for the malfunctioning system following (3) when the undesirable input is chosen to make that time the longest: By definition, if the system is controllable, then T * N (d) is finite for all d ∈ R n , and if it is resilient, then T * M (d) is finite. We only write the argument d of T * N and T * M when their dependency on d needs to be highlighted. Definition. The ratio of reach times in the direction d ∈ R n is After the loss of control, the malfunctioning system can take up to t(d) times longer than the initial system to reach the target d+x 0 . Since the performance is degraded by the undesirable inputs, one can easily show that t(d) ≥ 1. We take the convention that t(d) = +∞ whenever T * M (d) = +∞, regardless of the value of T * N (d). To make this case coherent with (7) and (8) we choose We now define the quantitative resilience of a system. Definition. The quantitative resilience r q of a system following (3) is the inverse of the maximal ratio of reach times, i.e., Quantitative resilience can be defined in exactly the same way for general control systems, but we focus on linear driftless systems in this work. For a resilient system, r q ∈ (0, 1]. The closer r q is to 1, the smaller is the loss of performance caused by the malfunction. Quantitative resilience r q depends on matrices B and C, i.e., on the actuators that are producing undesirable inputs. One could also define the quantitative resilience of a system to the loss of any p actuators by taking the minimal r q over all configurations of malfunctions. Computing r q requires solving four nested optimization problems over continuous constraint sets, with three of them being infinite-dimensional function spaces. A brute force approach to this problem is doomed to fail. Thus, we focus on the following problem. Problem. Establish an efficient method to compute r q . In this section, we introduce three novel optimization results on polytopes that will be needed to compute quantitative resilience. The proofs rely heavily on geometric arguments. A polytope in R n is a compact intersection of finitely many half-spaces. With this definition polytopes are considered to be convex. They are an n-dimensional generalization of planar polygons. With this definition, a vertex of a polytope corresponds to the usual understanding of a vertex of a polytope. We can now state our first optimization result on polytopes. Proof. First, we will show that the maximum in (9) exists and has a unique argument. For x ∈ X, the set S(x) := y ∈ Y : y − x ∈ R + d is compact since it is a closed subset of the compact set Y . Since X ⊂ Y , we have x ∈ S(x) and so S(x) = ∅. The map : y → y − x is continuous, so it reaches a maximum over S(x). This maximum is reached at the point of Y the furthest of x in direction d, i.e., at a unique y * (x) ∈ ∂Y . Then the map y * : X → ∂Y introduced in (9) is well-defined. Note that y * (x) is the linear projection of x along +d onto ∂Y , so y * is continuous. Thus, the function : x → y * (x) − x is continuous and reaches a minimum over the compact and nonempty set X. This minimum is not necessarily achieved uniquely over X. Let x * ∈ X such that y * (x * ) − x * = min x ∈ X y * (x) − x . Since x * must minimize the distance between itself and y * (x * ) ∈ ∂Y , with X ⊂ Y obviously x * ∈ ∂X. For contradiction purposes assume now that x * is not on a vertex of ∂X. 
Let S x be the surface of lowest dimension in ∂X such that x * ∈ S x and dim S x ≥ 1. Let v be a vertex of S x , a := v − x * and x(α) := x * + αa for α ∈ R. Notice that x(0) = x * and x(1) = v. Due to the choice of v, the convexity of S x and x * not being a vertex, there exists ε > 0 such that x(α) ∈ S x for all α ∈ [−ε, 1]. We also define the lengths L(α) := y * x(α) − x(α) and L * := L(0). All these definitions are illustrated on Figure 1 . Figure 1 : The convexity of Y compels x * to be on a vertex. 1] . Assume that there exists α 0 ∈ (0, 1] such that L * < L(α 0 ). We introduce the convexity coefficient β := α 0 α 0 +ε and then By convexity of Y , z ∈ Y , which contradicts the optimality of x * . Thus, there is no α 0 ∈ (0, 1] such that L * < L(α 0 ). Therefore, for all α ∈ [0, 1], L(α) = L * . By taking α = 1, we have x(α) = v, so the minimum L * is also reached on a vertex v of X. Theorem 1 will help us calculate the malfunctioning reach time T * M of resilient systems. The following optimization result concerns a ratio of two optimization problems and will simplify the calculation of r q . exists and is finite. Proof. The sets S ± (x) := y ∈ Y : y ± x ∈ R + d are both closed subsets of Y , so they are compact. They are nonempty because x ∈ S − (x) and −x ∈ S + (x). Functions f ± : S ± → R defined as f ± (y) := y ± x are both continuous, so they each reach a maximum over respectively S ± . Let y − be the argument of the maximum at the denominator of r Y (d, x). Because of its optimality y − ∈ ∂Y . Since x ∈ Y • , we have y − − x > 0 for any d ∈ S. Then, r Y (d, x) exists and is finite. Proof. Set Y is compact because it is a polytope. Then, all the assumptions of Proposition 2 are satisfied and thus the ratio r Y (d, x) exists and is finite. Vector x is fixed, so we write r(d) := r Y (d, x) to alleviate the notation. The proof of this theorem relies on numerous geometric arguments and is quite long. To help the reader, we divided the proof into several lemmas all gathered in Appendix A. Let d 0 ∈ S. Since d 0 and x are two vectors of R n , there exists a two-dimensional plane P passing through the origin that contains both of d 0 and x. Let d ∈ S ∩ P and y + , y − be the arguments of the two maxima in (10) , so that r(d) = y + +x y − −x . Because of their optimality, y + ∈ ∂Y and y − ∈ ∂Y . Additionally, ±x ∈ P and d ∈ P, so y ± ∈ P. We will study how r(d) varies when d takes values in S ∩ P. To introduce all the necessary definitions we first consider the case where the rays directed by d, y − and y + all intersect the same face of ∂Y as illustrated on Figure 2 . We introduce the signed angles α := d, ∂Y , β := x, d, β + := x, y + and β − := x, y − . These angles are represented on Figure 2 and they all take value in [0, 2π). Let α 0 be the value of α when β = 0, i.e., when d is positively collinear with x. Definition. We say that y + is leading and y − is trailing when β − < β < β + , and conversely when β + < β < β − , we say that y − is leading and y + is trailing. If β + = β, then y + is collinear with d. So x is also collinear with d because y + + x ∈ R + d. Then, d and y − are collinear with x, so β − = β + = β ∈ {0, π}. The same conclusion is reached when β − = β. Thus, only when β ∈ {0, π}, neither y + nor y − are leading or trailing. For each d ∈ S ∩ P we define D := max y ∈ Y y : y ∈ R + d , whose existence is justified by the compactness of Y . Definition. We say that y ± is outside when y ± ± x > D. Otherwise y ± is inside. 
We parametrize all directions d ∈ S ∩ P by the angle β. Then, we will study how r(d) varies when β ∈ [0, 2π). We first establish in Lemma 1 of Appendix A that the ratio r(d) is constant on the faces of ∂Y . Then, r(d) can only change when d crosses a vertex. Prior to studying vertex crossings we need to find the range of β for which y ± is leading or trailing and outside or inside. Lemma 2 establishes the following statements. • If β ∈ (0, π), then y + is leading. If β ∈ (π, 2π), then y − is leading. • If α + β ∈ (0, π), then y + is outside. If α + β ∈ (π, 2π), then y − is outside. Based on Lemma 3 we can rewrite the above bullet list using only the angle α + β. • If α + β ∈ (α 0 , π), then y + is leading and outside. • If α + β ∈ (π, α 0 + π), then y + is leading and inside. • If α + β ∈ (α 0 + π, 2π), then y − is leading and outside. • If α + β ∈ (2π, α 0 + 2π), then y − is leading and inside. Figure 3 : Vector y ± is leading or trailing and inside or outside depending solely on α + β. The polygon Y ∩ P can then be divided into four regions as illustrated by Figure 3 . According to Lemma 4 and Lemma 5 in Appendix A, r(d) decreases during the crossing of a vertex when the leading vector y ± is outside. This situation occurs for α + β ∈ (α 0 , π) ∪ (α 0 + π, 2π). Following Lemma 6, r(d) increases during the crossing of a vertex when the leading vector y ± is inside. This situation occurs for α + β ∈ (π, α 0 + π) ∪ (2π, α 0 + 2π). The specific case of the vertices v π and v 2π is tackled by Lemma 7. To summarize we have proved the following: • if α + β ∈ (α 0 , π), then y + is leading and outside, so r(d) is decreasing, • if α + β ∈ (π, α 0 + π), then y + is leading and inside, so r(d) is increasing, • if α + β ∈ (α 0 + π, 2π), then y − is leading and outside, so r(d) is decreasing, • if α + β ∈ (2π, α 0 + 2π), then y − is leading and inside, so r(d) is increasing. Then, the maximum of r(d) over β ∈ [0, 2π) happens when α + β ∈ α 0 , α 0 + π . This situation corresponds to β ∈ 0, π , i.e., d collinear with x. Then max d ∈ P∩S r(d) = max r(x), r(−x) . Recall that we have worked with d in the plane P generated by the vectors x and d 0 ∈ S. Therefore, max The ratio of optimization problems describing r q is actually more complex than the one solved in Theorem 2 where the vector x ∈ R n is fixed. Building on Theorem 2 we will now introduce our optimization problems of interest. Proposition 3. Let X, Y be two nonempty symmetric polytopes in R n with X ⊂ Y • and d ∈ S. Then, (i) max Proof. (i) Let S := (x, y) ∈ X × Y : x + y ∈ R + d . Set S is a closed subset of the compact set X × Y , so S is compact. Since X and Y are nonempty, symmetric and convex, 0 ∈ X ∩ Y . Then, (0, 0) ∈ S, so S is nonempty. Function f : S → R defined as f (x, y) := x + y is continuous, so it reaches a maximum over S. (ii) For x ∈ X define S(x) := y ∈ Y : x + y ∈ R + d . Since S(x) is a closed subset of the compact set Y , S(x) is compact. Since −X ⊂ Y , we have −x ∈ S(x) and so S(x) = ∅. Function f x : S(x) → R defined as f x (y) = x + y is continuous, so it reaches a maximum over S(x). Lemma 14 in Appendix C shows that λ * is continuous in x and d, so y * is also continuous in x and d. Then, function f : is continuous, so it reaches a minimum over the compact and nonempty set X. (iv) Note that y * (x, d) ∈ ∂Y for all x ∈ X. Indeed, assume for contradiction purposes that there Let X and Y be two nonempty symmetric polytopes in R n with X ⊂ Y • , and let d ∈ S. We define Proof. 
Following Proposition 3, r X,Y is well-defined. Reusing y * from the above proof, we introduce For some d ∈ S the arg min and arg max in the above definitions might not be unique; if so we take x * M and x * N to be any such argument. According to Theorem 1, . Since sets X and Y are symmetric, functions y * N , y * M , x * N and x * M are odd. Then, r X,Y is an even function, i.e., r X, Since dim X = 1, we can take P to be a two-dimensional plane containing X. Then, we work with d ∈ P ∩ S. In Lemmas 8, 9 and 11 of Appendix B we prove that for the directions d not involved in the crossing of vertices v π and v 2π . These vertices were introduced in Lemma 3 and vertex crossing is defined in Lemma 9. With . Then, we can apply the proof of Theorem 2 showing that the maximum of r X,Y (d) for d not involved in the crossing of v π or v 2π is achieved at either x or −x. Lemma 10 states that r X,Y reaches a local minimum during the crossing of vertices v π and v 2π . Thus, the maximum of r X,Y over d ∈ P ∩ S is achieved at either x or −x. Then, max Since r X,Y is even these two values are equal, leading to max We will keep these optimization results under our belt for now and go back to the discussion of resilient systems. We start with the initial system of dynamics (1) and aim to calculate the nominal reach time T * N . We introduce the set of constant inputsŪ c := ū ∈ R m+p : ū ∞ ≤ u max . Proof. Dynamics (1) are linear in x andū. SetŪ defined in (2) is convex and compact. The system is controllable, so x goal is reachable. The assumptions of Theorem 4.3 of [18] are satisfied, leading to the existence of a time optimal controlû ∈Ū . Thus, the infimum in (5) is a minimum and The multiplication of the variablesū c and T prevents the use of linear solvers. Instead, we will consider after using the transformation λ = 1 T in (12) . Problem (13) is linear inū so the optimal control inputū * belongs to the boundary of the constraint set [20] for d = 0. The uninteresting case d = 0 has been treated in Remark 1, so we consider d = 0. Thenū * = 0, leading to ū * ∞ = u max . The optimization variable is then (ū, λ) and the constraints are the following We now introduce an interesting property of T * N that will be needed later. Proposition 5. The nominal reach time T * N is an absolutely homogeneous function of d, i.e., . We have established that the nominal reach time is absolutely homogenous and can be achieved with a constant control input. We can now tackle the dynamics of the malfunctioning system after a loss of control authority over some of its actuators. We study the system of dynamics (3), with the aim of computing the malfunctioning reach time T * M . We define the constant input sets and V c as the set of vertices of W c . Proposition 6. For a resilient system, d ∈ R n * and w ∈ W , the infimum T M (w, d) of (6) defined as is achieved with a constant control input u * d (w) ∈ U c . Proof. First, we show that the infimum of (6) is a minimum. Let d ∈ R n , d = 0 and w ∈ W . Then, with z := d − T 0 Cw(t) dt ∈ R n a constant vector once w is fixed. Since the system is resilient, any z ∈ R n is reachable. Additionaly, U is convex and compact, and (16) is linear in u. Then, according to Theorem 4.3 of [18] a time-optimal control exists. Following the proof of Proposition 4, we conclude that the infimum of (16) is a minimum, the optimum u * d (w) is independent of time and belongs to U c . We can now work on the supremum of (6). Proposition 7. 
For a resilient system and d ∈ R n * , the supremum T * M (d) of (6) is achieved with a constant undesirable input w * ∈ W c . Proof. We will show first that we can restrict the constraint space to W c and then that the supremum of (6) is a maximum. For d ∈ R n * , following Proposition 6, (6) simplifies to with Bu * d from Proposition 6. Let w ∈ W and consider Therefore, the constraint space of (17) can be restricted to W c . We define the function ϕ : When applying w c and u * d (w c ) the dynamics becomeẋ = ϕ(w c ). We now use the work from Neustadt [19] concerning the existence of optimal control inputs. Neustadt defines in [19] the attainable set from x 0 and using inputs in W c as which is continuous in w c . Set W c is compact, t 0 = 0 and x 0 ∈ R n are fixed. Then, Theorem 1 of [19] states that A Wc is compact. is the supremum of a continuous function over the compact set A Wc , so the supremum of (17) is a maximum achieved on W c . Following Propositions 6 and 7, the malfunctioning reach time can now be calculated with The simplifications achieved so far were based on existence theorems from [18, 19] upon which the bang-bang principle relies. The logical next step is to show that the maximum of (19) is achieved by the extreme undesirable inputs, i.e., at the set of vertices of W c , which we denote by V c . However, most of the work on the bang-bang principle considers systems with a linear dependency on the input [17, 18, 25] , while ϕ introduced in (18) is not linear in the input w c . The work from Neustadt [19] considers a nonlinear ϕ, yet his discussion on bang-bang inputs would require us to show that co(ϕ(W c )) = co(ϕ(V c )). Since ϕ is not linear, such a task is not trivial and in fact it amounts to proving that inputs in V c can do as much as inputs in W c , i.e., we would need to prove the bang-bang principle. Two more works [2, 9] consider bang-bang properties for systems with nonlinear dependency on the input. However, both of them require conditions that are not satisfied in our case. Work contained in [2] needs the subsystemẋ = Cw to be controllable, while [9] requires T M defined in Lemma 12 in Appendix C to be concave in w c . Thus, even if bang-bang theory seems like a natural approach to restrict the constraint space from W c to V c in (19), we had to establish our own optimization result, namely Theorem 1. We can now prove that the maximum of (19) is achieved on V c . Proposition 8. For a resilient system and d ∈ R n * , the maximum of (19) is achieved with a constant input w * ∈ V c , i.e., its components are w * i := ±u max for all i ∈ [p]. Proof. We introduce sets X := − Cw c : w c ∈ W c and Y := Bu c : u c ∈ U c . Then, using λ = 1 Then, our problem of interest becomes Sets U c and W c as defined in (14) are hypercubes in R m and R p respectively, and thus they are polytopes. Sets X and Y are defined as images of W c and U c under a linear transformation, so they are polytopes of R n [1] . To apply Theorem 1, we need to show that X ⊂ Y . Since the system is resilient, for all w c ∈ W c and all d 0 ∈ R n there exists u c ∈ U c and T ≥ 0 such that (Bu c + Cw c )T = d 0 . Then, for x = −Cw c ∈ X, x = 0 and d 0 = x there exists y ∈ Y and T > 0 such that (y − x)T = x. Then, y = λx with λ : We can now apply Theorem 1 and conclude that the minimum x * of (20) must be realized on a vertex of X. Now, we want to show that x * belongs to the image of V c by C. Let w c ∈ W c such that x * = −Cw c . If w c ∈ V c we are done. Otherwise, two possibilities remain. 
In the first case w c is on the boundary of the hypercube W c and then we take F to be the surface of lowest dimension of ∂W c such that w c ∈ F and dim F ≥ 1. The other possibility is that w c ∈ W • c ; we then define F := W c . Thus, in both cases V c ∩ F = ∅ and F is convex. Then, we take v ∈ V c ∩ F and a := v − w c ∈ F . Since dim F ≥ 1 and w c ∈ F , there exists some α > 0 such that w c ± αa ∈ F . Then . Since x * is a vertex of X and x ± ∈ X, according to our definition of vertices We have reduced the constraint set of (6) from an infinite-dimensional set W to a finite set V c of cardinality 2 p , with p being the number of malfunctioning actuators. Following Propositions 6, 7 and 8, the malfunctioning reach time can now be calculated with It is logic to wonder if the minimum of (21) could be restricted to the vertices of U c , just like we did for the maximum over W c . However, that is not possible. Indeed, w c is chosen freely in W c in order to make T * M as large as possible. On the other hand, u c is chosen to counteract w c and make Bu c + Cw c collinear with d. This constraint could not be fulfilled for all d ∈ R n if u c was only chosen among the vertices of U c . Similarly to the nominal reach time, T * M is also linear in the target distance. Proposition 9. The malfunctioning reach time T * M is an absolutely homogeneous function of d, i.e., Proof. Because of the minimax structure of (21), scaling like in the proof of Proposition 5 is not sufficient to prove the homogeneity of T * M (d). The polytope Y of R n has a finite number of faces, so we can choose d ∈ R n * not collinear with any face of Y . Since Y is convex, the ray y * (x, d) + αd : α ∈ R intersects with ∂Y at most twice. Since y * (x, d) ∈ ∂Y , one intersection happens at α = 0. If there exists another intersection, it occurs for some α 0 = 0. Since y * (x, λd) ∈ ∂Y , we have y * (x, d) + α(λ)d ∈ ∂Y . Then, α(λ) ∈ {0, α 0 } for all λ > 0. According to Lemma 12, T M is continuous in d, so α is continuous in λ but its codomain is finite. Therefore, α is constant and we know that α(1) = 0. So α is null for all λ > 0, We now extend this result to negative λ. For d ∈ R n * and w c ∈ W c , Using the symmetry of W c we obtain max We can now combine the initial and malfunctioning dynamics in order to evaluate the quantitative resilience of the system. Quantitative resilience is defined in (8) as the infimum of T * N (d)/T * M (d) over d ∈ R n . Using Proposition 5 and Proposition 9 we reduce this constraint to d ∈ S. Focusing on the loss of control over a single actuator we will simplify tremendously the computation of r q . In this setting, we can determine the optimal d ∈ S by noting that the effects of the undesirable inputs are the strongest along the direction described by the malfunctioning actuator. This intuition is formalized below. Theorem 4. For a resilient system following (3) with C a single column matrix, the direction d maximizing the ratio of reach times t(d) is collinear with the direction C, i.e., max Proof. We fix d ∈ S and we will evaluate the ratio of reach times t(d) in the direction d. Since C has a single column, W c = [−u max , u max ]. Then, according to Proposition 8, the worst undesirable input is w * (d) = ±u max for the direction d. Using the same transformation as in (13) We focus on the nominal reach time and proceed to the separation ofB = [B C] in (13): with X := Cw c : w c ∈ W c . We can now gather (22) and (23) into with r X,Y defined in (11) . 
In the proof of Proposition 8 we showed that sets X and Y are polytopes verifying X ⊂ Y and the resilience of the system states that for all d ∈ R n and x ∈ X there exists y ∈ Y such that x + y ∈ R + * d. Then, dim Y = n. To apply Theorem 3 we need to prove that X ⊂ Y • . Assume for contradiction purposes that there exists x 1 ∈ X ∩ ∂Y . Take d = −x 1 , then the best input is y = −x 1 ∈ ∂Y because Y is symmetric. Then, x 1 + y = 0 / ∈ R + * d, which contradicts the resilience of the system. Therefore, X ∩ ∂Y = ∅, i.e., X ⊂ Y • . Since U c and W c are symmetric, so are X and Y . Because C is a single column dim X = 1 and x * M (d) ∈ ∂X = ± Cu max . We can then apply Theorem 3 and conclude that max by scaling according to Propositions 5 and 9. Then, to calculate the quantitative resilience r q we only need to evaluate T * N (C) and T * M (C). The computation load can be even further reduced with the following result. Theorem 5. For a resilient system losing control over a single nonzero column C, r q = r max , where Proof. Letū ∈Ū c , u ∈ U c and w ∈ W c be the arguments of the optimization problems (12) and (21) for d = C = 0. We splitū = [u B u C ] such that u B ∈ U c and u C ∈ W c . Then, We consider the loss of a single actuator, thus W c = [−u max , u max ] ⊂ R which makes CwT * M (C) and Cu C T * N (C) collinear with C. From Proposition 8, we know that w = ±u max . Since w maximizes T * M (C) in (25), we obviously have w = −u max . On the contrary, u C is chosen to minimize T * N (C) in (25) , so u C = +u max . According to (25) , Bu B and Bu are then also collinear with C. The control inputs u B and u are chosen to minimize respectively T * N (C) and T * M (C) in (25) . Therefore, they are both solutions of the same optimization problem: We transform this problem into a linear one using the transformation λ = 1 τ : By combining all the results, (25) simplifies into: Following Theorem 4, r q = We introduced quantitative resilience as the solution of four nonlinear nested optimization problems and with Theorem 5 we reduced r q to the solution of a single linear optimization problem. We can then quickly calculate the maximal delay caused by the loss of control of a given actuator. So far, all our results need the system to be resilient. However, based on the work [6] verifying the resilience of a system is not an easy task. Besides, as explained in Section 2, the resilience criteria established in [6] cannot be applied to this paper because of a difference of setting in the set of allowable control inputs. Proposition 1 establishes only a necessary condition for resilience. The following proposition produces a necessary and sufficient condition. Proof. First, assume that the system (1) is resilient. Then, according to Proposition 1, the systeṁ x(t) = Bu(t) is controllable. Since Im(B) ⊂ Im(B), the system (1) is controllable a fortiori. If C = 0, then following Proposition 7, T * M (C) is finite. If C = 0, then T * M (C) is also finite according to Remark 1. Now, assume that the system (1) is controllable and T * M (C) is finite. If C = 0, then rank(B) = rank(B) = n. For any w ∈ W c and any d ∈ R n * , there exists u ∈ R m * such that Bu = d. Define u c := u u ∞ u max and T := u ∞ umax , then (Bu c + Cw)T = Bu c T = d and u c ∈ U c , so the system is resilient. For C = 0, because T * M (C) is finite, T M (w, C) is positive and finite for w ∈ W c = [−u max , u max ], with T M (·, ·) of Lemma 12 in Appendix C. There exists u w ∈ U c such that (Bu w +Cw)T M (w, C) = C. 
For all w ∈ W c , we have u w ∈ U c and more specifically if w = −u max , then We will now prove that B is full rank. Indeed, Then, for any d ∈ R n * , there exists u d ∈ R m * such that Bu d = d. For any w ∈ W c and for λ := according to (26) . To sum up, we have found that for all d ∈ R n * and all w ∈ W c , there exist λ > 0 and u λ ∈ U c such that Bu λ + Cw = λd, i.e., the system is resilient to the loss of column C. The intuition behind Proposition 10 is that a resilient system must fulfill two conditions: being able to reach any state, this is controllability, and doing so in finite time despite the worst undesirable inputs, which corresponds to T * M (C) being finite. Our goal is to relate resilience and quantitative resilience through the value of r max . To breach the gap between this desired result and Proposition 10, we evaluate the requirements on the ratio T * N (C) T * M (C) for a system to be resilient. Corollary 1. A system following (1) is resilient to the loss of control over a column C if and only if it is controllable and Proof. First, assume that the system (1) is resilient. Then, according to Proposition 1, it is controllable. If C ∈ R n * , we then have 0 < T * N (C) T * M (C) ≤ 1. If C = 0, following Remark 1 we have T * N (0) T * Now, assume that the system is controllable and T * N (C) T * M (C) ∈ (0, 1]. If C = 0, then T * M (C) = 0 according to Remark 1. We conclude with Proposition 10 that the system is resilient. Now for the case where C = 0, let d ∈ R n * . Since the system following (1) is controllable, T * N (C) is finite. Since C = 0, we have T * N (C) > 0. If T * M (C) = +∞, then T * N (C) T * M (C) = 0, which contradicts the assumption. By definition, T * M (C) ≥ 0, thus T * M (C) is finite. Then, according to Proposition 10, the system is resilient. Theorem 5 allows us to compute r q for resilient systems with a single linear optimization. We now want to extend that result to non-resilient systems, by showing that r max also indicates whether the system is resilient. Corollary 2. A system following (1) is resilient to the loss of control over a nonzero column C if and only if it is controllable and r max ∈ (0, 1]. Proof. For a resilient system with C = 0, following Theorem 5 we have r q = T * N (C) T * M (C) = r max . Then, according to Corollary 1 the resilient system is controllable and r q ∈ (0, 1]. Now assume that the system is controllable and r max ∈ (0, 1]. We will study λ * introduced in (24). If λ * + u max < 0, then by the definition of r max in (24), λ * − u max ≥ λ * + u max , leading to the impossible conclusion that −u max ≥ u max . If λ * + u max = 0, then r max = −2umax 0 = ∞, contradicting r max ∈ (0, 1]. Therefore, λ * + u max > 0. Let u * ∈ U c such that Bu * = λ * C. For w ∈ W c , we define T w := 1 λ * +w , so that (Bu * + Cw)T w = C. Note that T w is positive and finite because λ is finite. Then, Proposition 10 states that the system is resilient. We now have all the tools to assess the resilience and quantitative resilience of a driftless system. IfB is not full rank, the system following (1) is not controllable and there is no need to go further. Otherwise, we compute the ratio r max and using Corollary 2 we assess whether the system is resilient. If it is, Theorem 5 states that r max = r q , so we have already computed the quantitative resilience of the system. If it is not resilient, then r q = 0. We will now apply this method to two numerical examples. 
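Before turning to the examples, here is a minimal sketch of this procedure, assuming the loss of a single nonzero column C. It checks controllability through the rank of B̄ and computes r_max by solving the linear program suggested by the proofs of Theorem 5 and Corollary 2, namely maximizing λ subject to Bu = λC with ‖u‖_∞ ≤ u_max, and then forming r_max = (λ* − u_max)/(λ* + u_max); this closed-form expression is inferred from the proof of Corollary 2, since equation (24) is not reproduced above. The function name and the use of scipy are our own choices, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def quantitative_resilience(B_bar, lost_col, u_max=1.0):
    """Sketch: resilience check and quantitative resilience for the loss of one actuator.

    B_bar    : n x (m+p) control matrix of the driftless system x' = B_bar u_bar
    lost_col : index of the single malfunctioning (nonzero) column C
    Returns (is_resilient, r_q) following Corollary 2 and Theorem 5.
    """
    n = B_bar.shape[0]
    if np.linalg.matrix_rank(B_bar) < n:          # not controllable => not resilient
        return False, 0.0

    C = B_bar[:, lost_col]
    B = np.delete(B_bar, lost_col, axis=1)        # remaining controlled actuators
    m = B.shape[1]

    # Linear program: maximize lambda subject to B u - lambda C = 0, |u_i| <= u_max.
    # Variables are (u, lambda); linprog minimizes, so we minimize -lambda.
    cost = np.zeros(m + 1)
    cost[-1] = -1.0
    A_eq = np.hstack([B, -C.reshape(-1, 1)])
    b_eq = np.zeros(n)
    bounds = [(-u_max, u_max)] * m + [(None, None)]
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    if not res.success:                           # e.g. unbounded when C = 0
        return False, 0.0
    lam = -res.fun                                # optimal lambda*

    r_max = (lam - u_max) / (lam + u_max)         # formula inferred from Corollary 2
    if 0.0 < r_max <= 1.0:                        # Corollary 2: resilient iff r_max in (0, 1]
        return True, r_max                        # Theorem 5: r_q = r_max
    return False, 0.0
```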
Our first example considers a linearized model of a low-thrust spacecraft performing orbital maneuvers. We study the resilience of the spacecraft with respect to the loss of control over some thrust frequencies. Our second example features an opinion dynamics scenario where two agents are influenced by five different sources. We study how the loss of control over one of the sources affects the opinion shaping of the agents.

We consider a low-thrust spacecraft in orbit around a celestial body. Because of the complexity of nonlinear low-thrust dynamics, Kolosa [15] established a linear model for the spacecraft dynamics using Fourier thrust acceleration components. Given an initial state and a target state, the model simulates the trajectory of the spacecraft in different orbit maneuvers, such as an orbit raising or a plane change. The states of this linear model are the orbital elements: semi-major axis, eccentricity, inclination, longitude of the ascending node, argument of perigee and mean anomaly. Because of the periodic motion of the spacecraft, the thrust acceleration vector F can be expressed in terms of its Fourier coefficients α and β in the eccentric anomaly E, where F_R is the radial thrust acceleration, F_W is the circumferential thrust acceleration and F_S is the normal thrust acceleration. The work [12] determined that only 14 Fourier coefficients affect the average trajectory, and we use those 14 coefficients as the input ū. The Fourier coefficients considered in [12] have a magnitude of order 10^{-7}, so we can safely assume that for our case the Fourier coefficients are bounded by u_max = 1.

Following [15], the state-space form of the system dynamics is ẋ = B̄(x)ū. We calculate B̄(x) in Appendix D using the averaged variational equations for the orbital elements given in [12]. We implement the orbit raising scenario presented in [15], with the orbital elements of the initial and target orbits listed in Table 1. We approximate B̄(x) as a constant matrix B̄ taken at the initial state. We immediately notice that the two coefficients on the first row of B̄ are significantly larger than all the other coefficients of B̄. This difference of magnitude in B̄ reflects the difference of magnitude in the state x, where the semi-major axis a is significantly larger than any other element, as can be seen in Table 1.

Losing control over one of the 14 Fourier coefficients means that a certain frequency of the thrust vector cannot be controlled. Since the coefficients B̄_{1,5} and B̄_{6,1} have a magnitude significantly larger than the other coefficients of the first and last rows of B̄, respectively, we have the intuition that the system is not resilient to the loss of the 1st or the 5th Fourier coefficient. We will now assess the resilience of the system using the method described at the end of Section 6. The matrix B̄ is full rank, so ẋ = B̄ū is controllable. We denote with r_max and r_q the vectors whose components are respectively r_max(j) and r_q(j) for the loss of the frequency j ∈ {1, . . . , 14}. Since the 1st, 4th, 5th, 8th, 10th and 13th values of r_max are negative, according to Corollary 2 the system is not resilient to the loss of control over any one of these six corresponding frequencies. Their associated r_q is zero. This result validates our intuition about the 1st and 5th frequencies.
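This per-frequency analysis can be reproduced by looping the earlier sketch over the columns of B̄. The matrix below is only a random placeholder with the same 6 × 14 dimensions as the spacecraft model; the actual B̄ comes from Appendix D and is not reproduced here.

```python
import numpy as np

# Hypothetical stand-in for the 6 x 14 matrix B_bar of the orbit-raising scenario.
rng = np.random.default_rng(0)
B_bar = rng.normal(size=(6, 14))

for j in range(B_bar.shape[1]):
    resilient, r_q = quantitative_resilience(B_bar, lost_col=j, u_max=1.0)
    delay = 1.0 / r_q if resilient else float("inf")   # worst-case slowdown factor
    print(f"frequency {j + 1}: resilient={resilient}, r_q={r_q:.2f}, max delay x{delay:.1f}")
```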
Corollary 2 also states the resilience of the spacecraft to the loss over any one of the 2 nd , 3 rd , 6 th , 7 th , 9 th , 11 th , 12 th and 14 th frequency because their r max belongs to (0, 1]. Then, using Theorem 5 we deduce that Since r q (3), r q (7) and r q (9) are close to 1, the loss of one of these three frequency would not delay significantly the system. The lowest positive value of r q occurs for the 6 th frequency, r q (6) = 0.15. Its inverse, 1 rq(6) = 6.8 means that the malfunctioning system can take up to 6.8 times longer than the initial system to reach a target. The specific maneuver described in Table 1 leads to d = x goal − x 0 = 667, 0.067, 2, 2, 2, 2 . We compute the associated time ratios t(d) using (13) and (21) Then, losing control over one of the first four frequencies will barely increase the time required for the malfunctioning system to reach the target compared with the initial system. However, after the loss over the 7 th , 9 th , 11 th , 12 th , or the 14 th frequency of the thrust vector, the undesirable input can multiply the maneuver time by a factor of up to 151.1. If one of the 5 th , 8 th , 10 th , or the 13 th frequency is lost, then some undesirable inputs can render the maneuver impossible to perform. When computing r q , we have seen that the system is not resilient to the loss of the 1 st or the 4 th frequency. Yet, the specific target described in Table 1 happens to be reachable for the same loss since the 1 st and 4 th components of t(d) in (27) are finite. Indeed, r q speaks only about a target for which the undesirable inputs cause maximal possible delay. Opinion dynamics study how a group of agents shapes their opinions [10] . We are interested in scenarios where agents are affected by outside opinion drivers. The influence of an outside opinion source u can be studied with x(t + 1) = x(t) + µε u(t) − x(t) , a modified Deffuant model [23] where µ is a convergence parameter and ε encodes the strength of the opinion source u. For our purpose, we will consider u as an input to the system and switch to a continuous time model: We assume that agents have no interactions with each other, and the influence of an opinion driver is independent of the agent's opinion. Then,ẋ(t) = µε u(t). We refer to the outside sources as channels. An example is a consumer of multiple media sources with different levels of trust towards different media. The agents opinions are solely determined by the controller of the channels. The dynamic model is then a driftless system:ẋ(t) =Bū(t). The controller is using its channels to steer the opinion of each agent towards a target set. For instance, the controller could be a political party financing advertisments to sway the opinion of voters in swing states during election campaigns [13] . Or the controller could be a worldwide media conglomerate such as the News Corporation [3] . The COVID-19 pandemic has minimized direct interactions, hence making our setting more realistic. An extreme variant of this scenario is illustrated by the episode "Fifteen Million Merits" of the Black Mirror series [7] . A perturbing event, e.g., loss of influence, foreign acquisition of a news channel, or a new board of directors, causes one of the channels to become uncontrollable and to produce undesirable inputs. The controller has still access to this channel and is informed in real time of its content, while being unable to modify it. We consider n = 2 agents both having initially a neutral opinion: x 0 = (0, 0). 
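Before specifying targets, the sketch below illustrates the driftless opinion model by integrating ẋ = B̄ū with a forward Euler scheme. The 2 × 5 influence matrix is hypothetical except for its first column, which matches the value (0.8, 0.9) quoted later in the text, and the saturated controller is only an illustration device, not part of the paper.

```python
import numpy as np

# Hypothetical 2 x 5 influence matrix (agents x channels); only column 1 is taken
# from the text, the remaining values are invented for illustration.
B_bar = np.array([[0.8, -0.5, 0.6, -0.4, 0.1],
                  [0.9, -0.6, -0.5, 0.5, 0.1]])
u_max, dt = 1.0, 0.01
x = np.zeros(2)                        # both agents start from a neutral opinion
x_goal = np.array([1.0, 1.0])          # consensus target

for _ in range(5000):
    # Naive saturated controller steering the opinions toward the target.
    u = np.clip(B_bar.T @ (x_goal - x), -u_max, u_max)
    x = x + dt * (B_bar @ u)           # driftless opinion dynamics x' = B_bar u
    if np.linalg.norm(x - x_goal) < 1e-3:
        break

print(np.round(x, 3))                  # close to the consensus target (1, 1)
```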
Then, the target is d = x_goal ∈ R². For instance, d = (1, 1) is a consensus target, while d = (−1, 1) is a polarization target. The components of B̄, denoted by B̄_{i,j} ∈ [−1, 1], reflect the influence of channel j over agent i. Using as a guideline the resilience criterion from [6], we pick m + p = 2n + 1 = 5 different channels. For instance, consider a matrix B̄ in which both agents trust channel 1 but not channel 2, they have diverging opinions on channels 3 and 4, and neither is strongly influenced by channel 5. We compute r_max for the loss of control over each single channel: r_max = (0.02, 0.04, 0.14, 0.14, 0.91). Since rank(B̄) = 2 and all the values of r_max belong to (0, 1], according to Corollary 2 the system is resilient to the loss of control over any channel. Using Theorem 5 we have r_q = r_max. Because neither agent is significantly influenced by channel 5, the resilience to its loss is greater than the resilience to the loss of channel 1, which is trusted by both agents.

The inverse of r_q tells us how much extra time the malfunctioning system needs to reach some opinion state compared to the initial system: 1/r_q = (39.5, 26.3, 7.1, 7.1, 1.1). (28) From Theorem 4, we know that 1/r_q(j) = t(B̄_j), which is the ratio of reach times in the direction B̄_j, the j-th column of B̄. Thus, if x_goal = B̄_1 = (0.8, 0.9), then the loss of control over channel 1 can increase the time to reach this target by a factor up to 39.5. On the other hand, the loss of control over channel 5 has a negligible impact on the time to reach any target.

We now choose the target to be x_goal = (1, 1) and compare how the loss of each channel affects the delay to reach this target. Intuitively, since this is a consensus target, losing control over channel 1 or 2 will have a considerable impact, while the loss of the other channels should not be significant. Indeed, when calculating t(d) for the loss of each channel we obtain t(d) = (39.5, 26.3, 1.0, 1.0, 1.1), which confirms our intuition. If the controller has polarization objectives, for instance d = (−1, 1), then losing control of channel 3 or 4 should be problematic, while the others should have a smaller impact. Indeed, t(d) = (2.6, 1.7, 7.1, 7.1, 1.1), so the loss of channel 3 or 4 causes the most delay for a polarization target.

To illustrate Theorem 2 we compute the ratio t(d) for all directions d ∈ S parametrized by an angle β ∈ [0, 2π]. The red spikes show where d is collinear with the directions ±C. As can be seen on Figure 4a for the loss of the first channel, the spikes coincide with the maximum of t(d), located at 39.5 as announced in (28). The two red spikes correspond to the directions C and −C, and t takes the same value for these two directions: t(C) = t(−C). Indeed, we showed in the proof of Theorem 3 that r_{X,Y} is an even function, and in Theorem 4 we proved that r_{X,Y}(d) = t(d). Similarly, on Figure 4b for the loss of channel 3, the maximal ratio t(d) is 7.1 as calculated in (28) and is reached when d is collinear with C = ±B̄_3.

To quantify the drop in performance caused by the loss of control authority over actuators, this paper introduced the notion of quantitative resilience for control systems. Relying on bang-bang control theory and on three novel optimization results, we transformed a nonlinear problem consisting of four nested optimizations into a single linear optimization problem. This simplification leads to a computationally efficient algorithm to verify resilience and calculate the quantitative resilience of driftless systems. There are three promising avenues of future work.
Previous work on resilient systems only considered L 2 input bounds, while we worked here with L ∞ bounds. Then, the first direction of work is to build a proper resilience theory concerning these inputs. Secondly, we have only considered driftless systems because of the complexity of the subject. However, future work should be able to extend the concept of quantitative resilience to non-driftless linear systems. Finally, noting that Theorems 4 and 5 only concern the loss of a single actuator, our third direction of work is to extend these results to the simultaneous loss of multiple actuators. A Proof of Theorem 2 Lemma 1. When d, y + and y − all intersect the same face of ∂Y , the ratio r(d) is constant. Proof. We define the lengths Then, r(d) = D+δ + D−δ − . The sign of δ ± depends on whether y ± is inside or outside, as illustrated on Figure 5 . Because y + , y − and D all intersect the same face of ∂Y as illustrated on Figure 5 , the two triangles bounded by ∂Y , δ ± and ±x are congruent. Using the law of sines in these triangles, we have x sin α = δ + sin γ = δ − sin γ . Then, δ + = δ − =: δ > 0 and γ = π − α − β. Thus, δ D = x sin(α+β) D sin α . As we have seen before, the representation of γ on Figure 5 is only accurate when α + β ∈ [0, π). When α + β ∈ [π, 2π), we instead refer to Figure 8 . In this setting δ + = δ − =: δ < 0 and χ = 2π − α − β. We similarly use the sine law in the triangles of sides ∂Y , ±x and δ ± : The sine law uses lengths that must be positive, which explains the minus sign in front of δ. Therefore, the expression δ D = x sin(α+β) D sin α holds for all values of α + β. Noticing that r(d) = we can now evaluate r(d). We will prove that the ratio δ/D is the same for two directions d 1 ∈ S and d 2 ∈ S when their respective D, y + and y − all intersect the same face of ∂Y , as illustrated on Figure 6 . We also define The sum of the angles of the triangle in Figure 6 is Therefore, α + β is constant on faces of ∂Y . We also use the sine law in the triangle in Figure 6 and obtain D 1 sin α 2 = D 2 sin(π − α 1 ) = D 2 sin α 1 . Then, Hence, r(d 1 ) = r(d 2 ), the ratio r(d) is constant when d, y + and y − are on the same face of ∂Y . Thus, the variations of r(d) only occur when d is crossing a vertex. Lemma 2. The following statements are true: • If β ∈ (0, π), then y + is leading. If β ∈ (π, 2π), then y − is leading. • If α + β ∈ (0, π), then y + is outside. If α + β ∈ (π, 2π), then y − is outside. Proof. We assume that D, y + and y − all intersect the same face of ∂Y . The objective of this part is to learn the values of the angle β for which y ± is leading or trailing and outside or inside. Figure 7 represents the situation where the vector y + is leading and outside, while y − is trailing and inside. We want to determine for which values of β this situation arises. We apply the sine law in the two triangles of Figure 7 bounded by y ± , ±x and D: Then, we have x sin β = y − sin(β − β − ) = y + sin(β + − β). Since the three norms are positive, the three sine functions have the same sign. Since we assumed that y + is leading, we have 0 ≤ β − < β < β + ≤ 2π. Then, β − β − > 0 and β + − β > 0. For contradiction purposes, assume that β −β − > π, then sin(β −β − ) < 0 and so sin(β + −β) < 0, which leads to β + −β > π. Then, β + −β − > 2π, but that is impossible since β ± ∈ [0, 2π). Therefore, β − β − ∈ (0, π). Thus, sin(β − β − ) > 0, which leads to sin β > 0 and then β ∈ (0, π). To sum up, when y + is leading we have β ∈ (0, π). 
Now we study the other case, when y − is leading as represented on Figure 8 and we want to find the range of β where this situation occurs. We apply the sine law in the triangles delimited by y ± , D and ±x: Then, we have x sin β = − y − sin(β − − β) = − y + sin(β − β + ). Since y − is leading we have 0 ≤ β + < β < β − < 2π. Therefore β − − β > 0 and β − β + > 0. Assume for contradiction purposes that β − − β > π. Then, y − y + sin(β − − β) < 0, and so, sin(β − β + ) < 0. Thus, β − β + > π, which leads to the impossible conclusion that β − − β + > 2π. Therefore, β − − β ∈ (0, π), so − y − x sin(β − − β) < 0, i.e., β ∈ (π, 2π). To sum up, when y − is leading we have β ∈ (π, 2π). We also know that y + leading implies β ∈ (0, π), and for all β ∈ (0, π) ∪ (π, 2π) either y + or y − must be leading. We deduce that the converse of the two implications proved above are true: if β ∈ (0, π), then y + is leading, and if β ∈ (π, 2π), then y − is leading. Now, we want to determine the range of values of the angle α + β for which y ± is outside. Based on Figure 7 where y + is outside, we have α + β ∈ (0, π). Then, based on Figure 8 where y + is inside and y − is outside, we have α + β ∈ (π, 2π). The only situation where neither y + nor y − is outside occurs when y + + x = D = y − − x , i.e., at the vertices v π and v 2π , i.e., when α + β ∈ {π, 2π}. For all other values of α + β, either y + or y − is outside. We deduce that if α + β ∈ (0, π), then y + is outside, and if α + β ∈ (π, 2π), then y − is outside. Lemma 3. The following statements are true: • if α + β ∈ (α 0 , π), then y + is leading and outside, • if α + β ∈ (π, α 0 + π), then y + is leading and inside, • if α + β ∈ (α 0 + π, 2π), then y − is leading and outside, • if α + β ∈ (2π, α 0 + 2π), then y − is leading and inside. Proof. We have taken the convention that the angles are positively oriented in the clockwise orientation. According to (30), the angle α + β is constant on a face of ∂Y . When d crosses a vertex of external angle ε as represented on Figure 10 , the value of α has a discontinuity of +ε. Let q be the number of vertices of ∂Y and ε i the external angle of the i th vertex v i . Since Y ∩ P is a polygon, q i=1 ε i = 2π. We can then represent the evolution of α + β as a function of β with Figure 9 . Recall that α 0 is the value of α when β = 0. After a whole revolution α + β = α 0 + 2π. So there are two vertices v π and v 2π where α + β first crosses π and then 2π. In the eventuality that α + β = π or 2π on a face, we define v π or v 2π as the vertex preceding the face. This face is parallel with the span of x. Thus y + + x = y − − x = D, so y + and y − are neither outside nor inside. The ratio is r(d) = 1 on this face. Because of the monotonic evolution of α + β as a function of β, we can use α + β instead of β to parametrize the directions d. The interval β ∈ (0, π) is the same as α + β ∈ (α 0 , α 0 + π) and the interval β ∈ (π, 2π) is the same as α + β ∈ (α 0 + π, α 0 + 2π). Then, the bullet list established in Lemma 2 can be rewritten as claimed in this lemma. Proof. The leading vector y + is outside and crosses a vertex while β increases. We separate the vertex crossing into two parts: when only y + has crossed, and when both d and y + have crossed the vertex. Since we do not yet consider the vertices v π and v 2π , the leading vector is outside before and after the vertex. Let ε be the external angle of the vertex between the faces F 1 and F 2 of ∂Y as shown on Figure 10 . 
Because y + does not intersect F 1 anymore, δ + is shorter than δ − . We define l := δ − −δ + . Notice that the two green segments of length l in Figure 10 are parallel. We parametrize the position of y + on F 2 with the length m as defined on Figure 10 . When y + is at the vertex m = 0, and m increases with β. Using the sine law we can link the loss l with the distance m We calculate the ratio r(d) as a function of r F 1 , which is the value of r(d) on the face F 1 : By definition the length m is positive. Since x / ∈ ∂Y but y − ∈ ∂Y , we have D − δ − = y − − x > 0. We have seen previously that for y + to be leading and outside we need α + β ∈ (α 0 , π). In that case sin(α + β) > 0 and sin(β) > 0. Therefore, the term subtracted from r F 1 is positive, i.e., r(d) < r F 1 . We can now tackle the second part of the crossing, when y + and d both have crossed the vertex as illustrated on Figure 11 . Figure 11 : Part II of the crossing of a vertex by y + leading and outside as β increases. Because y − does not intersect F 2 , δ − is longer than δ + . As before, let l := δ − − δ + . Using the sine law, we can relate l to m and express the ratio r(d): . Since y + is leading and outside on F 2 we have α + β ∈ (α 0 , π), so sin(β) > 0. If α was still measured between d and F 1 , then its value would be α F 1 = α − ε. Since we are not considering the crossing of v π or v 2π , y + is also leading and outside on F 1 . Then, α F 1 + β ∈ (α 0 + π), i.e., α + β − ε ∈ (α 0 , π). This yields sin(α + β − ε) > 0, which makes l > 0, because the length m is positive by definition. Note that r F 2 = D+δ + D−δ + , which leads to r(d) > r F 2 . Thus, the ratio r(d) decreases during the crossing of a vertex when y + is leading and outside. Lemma 5. The ratio r(d) decreases when the leading vector y − is outside for a vertex crossing. Proof. The leading vector y − is outside and crosses a vertex while β increases. We separate the vertex crossing into two parts: when only y − has crossed, and when both d and y − have crossed the vertex. Since we do not yet consider the vertices v π and v 2π , the leading vector is outside before and after the vertex. Let ε be the external angle of the vertex between the faces F 1 and F 2 of ∂Y as shown on Figure 12 . Figure 12 : Part I of the crossing of a vertex by y − leading and outside as β increases. Since y − is outside and y + is inside, by definition (29), δ + < 0 and δ − < 0. We keep l := δ − −δ + like in the previous case. The distance m also increases monotonically with β as y − goes further away from the vertex. We apply the sine law in the same triangle as before: Thus, the relation linking m and l is the same whether y + or y − is leading: l = m sin(α+β) sin β . Besides, δ + and δ − are also related through the same equation with l. Therefore, the ratio r(d) as a function of r F 1 is the same as previously: . For the same reasons as above m > 0 and D − δ − > 0. We have established previously that to have y − leading and outside we need α + β ∈ (α 0 + π, 2π). In this situation sin(α + β) < 0 and sin β < 0. Therefore, the term subtracted from r F 1 is positive, so r(d) < r F 1 . Now we consider the second part of the crossing, when both y − and d are on F 2 , as illustrated on Figure 13 . Figure 13 : Part II of the crossing of a vertex by y − leading and outside as β increases. Since y + is inside and y − outside, we have δ + < 0 and δ − < 0 according to (29). Their length on Figure 13 is then given by −δ + and −δ − respectively. 
Because y_+ does not yet intersect F_2, −δ_+ is longer than −δ_−. We also reuse l := δ_− − δ_+. Using the sine law, we can relate l to m: l = m sin(α + β − ε) / sin β. As previously, m > 0. Since y_− is leading and outside on F_1 and on F_2, we have α + β − ε ∈ (α_0 + π, 2π) and α + β ∈ (α_0 + π, 2π). Therefore, sin(α + β − ε) < 0 and sin β < 0. Thus l > 0. Then, we can express the ratio r(d):

r(d) = (D + δ_+)/(D − δ_+ − l) > (D + δ_+)/(D − δ_+) = r_{F2},

with r_{F2} the value of r(d) on the face F_2. Therefore, r(d) decreases during the crossing of a vertex when y_− is leading and outside.

Lemma 6. The ratio r(d) increases when the leading vector is inside for a vertex crossing.

Proof. We base this reasoning on the proof of Lemma 4, where y_+ was leading and outside, but it could be done similarly based on Lemma 5. We now assume that the angles are positive when oriented counterclockwise. With this change of orientation, Figure 11 represents y_− leading and inside after crossing a vertex from face F_2 to F_1, while d and the trailing vector y_+ are still on F_2. The figure is the same, so (33) still holds. Since y_− is leading and inside on F_1 and F_2, we have α + β − ε ∈ (2π, α_0 + 2π) and α + β ∈ (2π, α_0 + 2π). Then, sin(α + β − ε) < 0 and sin β < 0, which leads to r(d) > r_{F2}, i.e., r(d) is increasing during the first part of that crossing. The second part of the crossing is illustrated by Figure 10. It represents d and y_−, leading and inside, having both crossed the vertex from F_2 to F_1, while y_+ is trailing and still on F_2. Similarly, (31) and (32) still hold. The difference is again in the range of angles, α + β ∈ (2π, α_0 + 2π). Then, sin(α + β) < 0 and sin β < 0, which leads to r(d) < r_{F1}. Therefore, r(d) is also increasing during the second part of this crossing. The same method can be applied to the proof of Lemma 5 to show that when y_+ is leading and inside, r(d) increases at vertex crossings.

Lemma 7. The ratio r(d) decreases when crossing the vertices v_π and v_2π.

Proof. As can be seen on Figure 3, before the vertices v_π and v_2π the leading vector is outside, but it comes inside after crossing the vertex. Because of this feature, Lemma 5 does not apply to the crossing of the vertices v_π and v_2π.

Figure 14: Part I of the crossing of v_π as β increases.

The first part of the crossing of v_π is illustrated on Figure 14. Notice that the situation is very similar to the one described by Figure 10. Indeed, equations (31) and (32) also hold for this case. Thus, r(d) decreases when y_+ crosses v_π. The second part of the crossing of v_π, as illustrated on Figure 15, is also similar to Figure 11. The difference is that y_+ is inside and thus δ_+ < 0. Since y_− is inside, δ_− > 0. If y_− were already intersecting F_2, it would be of the same length as δ_+. We have as usual l := δ_− − δ_+. We parametrize how far y_− is from v_π using the length m defined on Figure 15. The sine law gives l = m sin(α + β − ε) / sin β.

Figure 15: Part II of the crossing of v_π as β increases.

Since m > 0, α + β − ε < π and β ∈ (0, π), we have that l > 0. We then calculate the ratio r(d) and find that it is smaller than its value before the crossing because l > 0. Therefore, the ratio r(d) decreases when crossing v_π. The crossing of v_2π is identical except that y_− is leading instead of y_+.

In the following four lemmas we reuse the notation introduced in Theorem 2 and Appendix A. We know from Theorem 1 that x*_M(d) ∈ {x, −x} for all d ∈ S. In the case illustrated on Figure 16, x*_M(d) = −x because it maximizes δ_M, while in general we only know that ‖x*_M(d)‖ = ‖x‖.
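Before turning to Lemmas 8–11, it may help to collect the two relations that recur in the vertex-crossing arguments of Lemmas 4–7 above. The display below is only a reading sketch assembled from those proofs as reconstructed here; it does not reproduce the original numbered equations (31)–(33).

```latex
% Reading sketch of the recurring vertex-crossing relations (Lemmas 4--7).
% Part I: only the leading vector has crossed the vertex of external angle eps;
% Part II: both d and the leading vector have crossed it.
\begin{align*}
  \text{Part I: }\;  & l = \delta_- - \delta_+ = m\,\frac{\sin(\alpha+\beta)}{\sin\beta},
      \qquad r(d) = \frac{D+\delta_+}{D-\delta_-} = r_{F_1} - \frac{l}{D-\delta_-}, \\
  \text{Part II: }\; & l = \delta_- - \delta_+ = m\,\frac{\sin(\alpha+\beta-\varepsilon)}{\sin\beta},
      \qquad r(d) = \frac{D+\delta_+}{D-\delta_+ - l}
      \;\text{ versus }\; r_{F_2} = \frac{D+\delta_+}{D-\delta_+}.
\end{align*}
% Each of Lemmas 4--7 then reduces to a sign analysis of sin(alpha+beta),
% sin(alpha+beta-eps) and sin(beta) on the relevant range of alpha+beta.
```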
Lemma 8. For all d ∈ S, x*_N(d) = −x*_M(d) ∈ ∂X. Moreover, x*_N, x*_M and r_{X,Y} are constant on each face of ∂Y.

Proof. If α + β ∈ {π, 2π}, then X is parallel with a face of ∂Y, making x*_N and x*_M not uniquely defined. Regardless, we can still take x*_N(d) = −x*_M(d) ∈ ∂X. Otherwise, x*_N and x*_M are uniquely defined. Since x*_N(d) ∈ X and x*_M(d) ∈ X for all d ∈ S, and dim X = 1, the vectors x*_N(d) and x*_M(d) are always collinear. We then use Thales's theorem and obtain δ_N(d) = δ_M(d) ‖x*_N(d)‖ / ‖x*_M(d)‖. Since x*_N(d) is chosen to maximize δ_N and is independent from δ_M, it must have the greatest norm, so x*_N(d) ∈ ∂X. Due to α + β ∉ {π, 2π}, ‖x + y‖ is not constant. Because x*_N(d) is chosen to maximize ‖x + y‖ while x*_M(d) is minimizing it, we have x*_N(d) ≠ x*_M(d). Since they both belong to ∂X = {−x, x}, we have x*_N(d) = −x*_M(d). According to Lemma 15, the coupled functions (x*_N, y*_N)(d) = arg max_{x ∈ X, y ∈ Y} {‖x + y‖ : x + y ∈ R_+ d} are continuous. Since x*_N(d) ∈ {x, −x}, x*_N(d) is constant on the faces of ∂Y, and so is x*_M(d). Then, r_{X,Y}(d) = r(d) is constant on the faces of ∂Y according to Lemma 1.

Lemma 9. During the crossing of vertices before v_π as β increases, x*_N(d) and x*_M(d) are constant and opposite: x*_N(d) = −x*_M(d) ∈ ∂X.

Proof. We study the crossing of a vertex v of angle ε between the faces F_1 and F_2 of ∂Y. In Theorem 2, x was fixed, while here the vectors x*_M(d) and x*_N(d) depend on d, so we need a new definition for vertex crossings. For each vertex v we introduce x_v, the vector collinear with x going from v to the ray directed by d, as illustrated on Figure 17, and we say that the crossing of v is ongoing as long as ‖x_v‖ < ‖x‖. We also define δ_v := ‖x_v + v‖ − D, the value of δ obtained with x_v and v.

Figure 17: Illustration of x_v during the crossing of a vertex v, with y*_N leading.

Before starting the crossing of v_π, we have α + β ∈ (α_0, π). Then, as can be seen on Figure 16, y*_N is leading and outside, so it reaches the vertex before y*_M and d. The length of x*_N(d) can vary to maximize δ_N, so y*_N could still intersect F_1, even if the crossing is ongoing. We have seen in Lemma 8 that if y*_N is still on F_1, then it must be the furthest possible to maximize δ_N; in that case y*_N = v. Otherwise, y*_N intersects F_2. We want to establish a criterion to distinguish these two possible scenarios. We first consider the scenario where y*_N = v and x*_N(d) = x_v. We take y ∈ F_2 \ {v} such that y + x ∈ R_+ d, as represented on Figure 18, and we define δ := ‖x + y‖ − D. Since δ_N must be maximized by the choice of y*_N and y ≠ y*_N, we have δ < δ_N = δ_v. But ‖x‖ > ‖x_v‖, so the line segment corresponding to x crosses the interior of Y. Focusing on this part of Figure 18 we obtain Figure 19.

Figure 19: Illustration of the line segment corresponding to x crossing the interior of Y in Figure 18.

Two of the angles of the triangle delimited by F_1, F_2 and x are π − α − β and π − ε. Therefore, their sum is in (0, π) and thus α + β + ε > π. Since we assumed that α + β ∈ (α_0, π), the vertex v must in fact be v_π for this scenario to happen. Thus, the crossing of a vertex preceding v_π follows the second scenario, as depicted on Figure 17 with y*_N ∈ F_2. We study Figure 20, which is a more detailed view of Figure 17, with δ_0 depending solely on d and ε.

Figure 20: Illustration of x_v and x*_N in Figure 17.

Since x_v and x*_N(d) are collinear, we can apply Thales's theorem in Figure 20 and obtain that δ_N − δ_0 = (δ_v − δ_0) ‖x*_N(d)‖ / ‖x_v‖.

Lemma 10. During the crossing of v_π and v_2π, the ratio r_{X,Y}(d) reaches a local minimum.

Proof.
During the crossing of v_π, i.e., when ‖x_{v_π}‖ < ‖x‖, we have α + β ≤ π but α + β + ε > π. The situation is illustrated on Figure 21. We showed in Lemma 9 that y*_N = v_π and x*_N(d) = x_{v_π}. Once d has crossed v_π, we still have y*_N = v_π to maximize δ_N. Then, the equality x*_N(d) = x_{v_π} holds during the entire crossing of v_π, i.e., until x*_N(d) = −x. This second part of the crossing is illustrated on Figure 22.

Assume that during the entire crossing of v_π we have x*_M(d) = −x. Then, at the end of the crossing we would have y*_M = v_π and x*_M(d) = x_{v_π} = x*_N(d), which contradicts the definitions of x*_M(d) and x*_N(d). Thus, x*_M(d) does not remain equal to −x during the entire crossing. Since x*_M(d) ∈ {x, −x}, at some point x*_M switches to x as y*_M switches from F_1 to F_2. This switching point is illustrated on Figure 22, and y*_M becomes the leading vector.

By definition, ‖x_{v_π}‖ < ‖x‖ during the crossing. We have shown that x*_N(d) = x_{v_π}, so δ_N = δ_{v_π}. Now, using Thales's theorem in Figure 21, we have δ_{v_π} = δ_M ‖x_{v_π}‖ / ‖x*_M‖. Since ‖x*_M‖ = ‖x‖, we conclude that δ_{v_π} < δ_M during the crossing. Also, y*_N = v_π during the entire crossing, so r_{X,Y}(d) = (D + δ_{v_π})/(D − δ_M). During the first part of the crossing, y*_M intersects F_1, as represented on Figure 21. If y*_N were further on the dashed line of Figure 21, the ratio would be r_{F1} = (D + δ_M)/(D − δ_M), which is the value of r_{X,Y} on F_1. However, since δ_{v_π} < δ_M, we have r_{X,Y}(d) < r_{F1}. During the second part of the crossing, y*_M ∈ F_2 and x*_M(d) = x. If y*_N were on the dashed prolongation of F_2 in Figure 22, we would have r_{F2} = (D + δ_M)/(D − δ_M), the value of r_{X,Y} on F_2. However, since δ_{v_π} < δ_M, we have r_{X,Y}(d) < r_{F2}. Thus, r_{X,Y} reaches a local minimum during the crossing of v_π. During the crossing of v_2π we also have y*_N(d) = v_2π and r_{X,Y} reaching a local minimum, because the functions y*_N, y*_M, x*_N and x*_M are odd.

Lemma 11. During the crossing of the vertices encountered after v_π when β ∈ (0, π), x*_N(d) and x*_M(d) are constant and opposite: x*_N(d) = −x*_M(d) ∈ ∂X.

Proof. After the crossing of v_π, α + β ∈ (π, α_0 + π) and y*_M is leading and inside, as established in Lemma 10. Thus, y*_M is the first to reach vertex v, but since x*_M(d) = x we cannot have y*_M = v during the entire crossing. In Lemma 8 we showed that x*_N is continuous in d. Thus, x*_N(d) cannot switch like x*_M(d) to take the lead. Instead, x*_N(d) is trailing during the crossing, as illustrated on Figure 23. Since y*_N ∈ F_1 during the crossing, we can apply Thales's theorem on Figure 23 and obtain that, for a fixed d, δ_N is proportional to ‖x*_N(d)‖.
Thus, x*_N(d) ∈ ∂X and, since y*_N is trailing, we have x*_N(d) = −x during the entire crossing. By the definitions of x*_N(d) and x*_M(d), we have x*_N(d) ≠ x*_M(d). Also, both x*_N(d) and x*_M(d) belong to ∂X = {x, −x}, so x*_M(d) = x during the entire crossing. Following Lemma 9, we have shown that x*_N(d) and x*_M(d) are constant and opposite for the crossing of all vertices encountered when β ∈ (0, π), except for v_π. Since the functions x*_N and x*_M are odd, x*_N(d) and x*_M(d) are also constant and opposite for all the vertices encountered when β ∈ (π, 2π), except for v_2π.

We define a set-valued function ϕ for w_c ∈ W_c and d ∈ R^n_* as

ϕ(w_c, d) := Y ∩ (R_+ d − {C w_c}),   (34)

where Y := {B u_c : u_c ∈ U_c}. We can now define the function f : Gr ϕ → R^+_* as f(w_c, d, y) := T such that (y + C w_c) T = d. The resilience of the system implies that for all w_c ∈ W_c and all d ∈ R^n, the set ϕ(w_c, d) is nonempty. Since Y is compact and R_+ d − {C w_c} is closed, their intersection ϕ(w_c, d) is compact for all w_c ∈ W_c and all d ∈ R^n. Additionally, based on Lemma 13, ϕ satisfies the continuity Definition 17.2 of [1]. Thus, the conditions of the Berge Maximum Theorem [1] are satisfied, leading to the continuity of T_M in w_c and d.

Lemma 13. The set-valued map ϕ defined in (34) is continuous in the sense of Definition 17.2 of [1].

Proof. We define X := W_c × R^n and Y := {B u_c : u_c ∈ U_c}, so that the set-valued function is ϕ : X ⇉ Y. On the space X we introduce the norm ‖·‖_X as ‖(w, d)‖_X = ‖w‖ + ‖d‖. Since ‖·‖ is the Euclidean norm, ‖·‖_X is a norm on X. By Definition 17.2 of [1], we need to prove that ϕ is both upper and lower hemicontinuous at all points of X. First, using Lemma 17.5 of [1], we will prove that ϕ is lower hemicontinuous by showing that for an open subset A of Y, the lower inverse image ϕ^l(A) is open.
The lower inverse image of A is defined in [1] as ϕ^l(A) := {x ∈ X : ϕ(x) ∩ A ≠ ∅}. Let (w, d) ∈ ϕ^l(A), so that there exists λ ≥ 0 with λd − Cw ∈ A. Since λ ≥ 0 and ‖C‖ are both fixed and A is open, we can choose ε_d and ε_w positive and small enough so that λd' − Cw' ∈ A whenever ‖d' − d‖ < ε_d and ‖w' − w‖ < ε_w, and hence (w', d') ∈ ϕ^l(A). Then, we have shown that for all open subsets A of Y, ϕ^l(A) is open, and so ϕ is lower hemicontinuous.

To prove the upper hemicontinuity of ϕ, we will use Lemma 17.4 of [1] and prove that for a closed subset A of Y, the lower inverse image of A is closed. Let {x_k} be a sequence in ϕ^l(A) converging to x = (w, d) ∈ X. We want to prove that x ∈ ϕ^l(A). For each k ≥ 0, we have x_k = (w_k, d_k) and we define Λ_k := {λ_k ≥ 0 : λ_k d_k − C w_k ∈ A} ≠ ∅. Since A is a closed subset of the compact set Y, A is compact. Thus Λ_k has a minimum and a maximum; we denote them by λ_k^min and λ_k^max respectively. Since the sequences {d_k} and {w_k} converge, they are bounded. The set A is also bounded, thus the sequence {λ_k^max} is bounded from above by some λ^max < ∞. We define the segments S_k := {λ d_k − C w_k : λ ∈ [0, λ^max]} and S := {λ d − C w : λ ∈ [0, λ^max]}. These segments are all compact sets. We also introduce the sequences a_k := λ_k^min d_k − C w_k ∈ A ∩ S_k and b_k := λ_k^min d − C w ∈ S. Take ε > 0. Since {d_k} and {w_k} converge toward d and w respectively, there exists N such that for all k ≥ N, ‖a_k − b_k‖ ≤ λ^max ‖d_k − d‖ + ‖C‖ ‖w_k − w‖ < ε. The minimum defining dist(A, S) exists because A and S are both compact and the norm is continuous. Since a_k ∈ A and b_k ∈ S, we have dist(A, S) ≤ ‖a_k − b_k‖ for all k ≥ 0. Therefore, dist(A, S) = 0. So, A ∩ S ≠ ∅, leading to x ∈ ϕ^l(A). Then, ϕ^l(A) is closed and so ϕ is upper hemicontinuous.

Lemma 14. Let X and Y be two nonempty symmetric polytopes in R^n with X ⊂ Y°. Then, λ* is continuous in x and d.

Proof. According to Proposition 3 (ii), whose proof does not rely on the current lemma, λ* is well-defined. We introduce the set-valued function ϕ : X × S ⇉ Y as ϕ(x, d) := Y ∩ (R_+ d − {x}). If we take x = C w_c, then ϕ is the same as in (34). We define the graph of ϕ as Gr ϕ := {(x, d, y) ∈ X × S × Y : y ∈ ϕ(x, d)}, and the continuous function f : Gr ϕ → R_+ as f(x, d, y) = ‖x + y‖. The set X × S is compact and nonempty. Since Y is compact and R_+ d − {x} is closed, their intersection ϕ(x, d) is compact. The symmetry of X and Y leads to −x ∈ ϕ(x, d), so ϕ(x, d) ≠ ∅. According to Lemma 13, ϕ satisfies the continuity Definition 17.2 of [1]. Then, we can apply the Berge Maximum Theorem [1] and conclude that λ* is continuous in x and d.

Lemma 15. Let X and Y be two nonempty polytopes in R^n. Then, the coupled functions (x*_N, y*_N)(d) = arg max_{x ∈ X, y ∈ Y} {‖x + y‖ : x + y ∈ R_+ d} are continuous in d ∈ S.

Proof. Let Z := X + Y = {x + y : x ∈ X, y ∈ Y}. Then Z is a nonempty compact set. According to Proposition 3 (i), whose proof does not rely on the current lemma, max_{x ∈ X, y ∈ Y} {‖x + y‖ : x + y ∈ R_+ d} exists, and thus max_{z ∈ Z} {‖z‖ : z ∈ R_+ d} is also well-defined. We introduce the set-valued function ϕ : S ⇉ Z as ϕ(d) = Z ∩ R_+ d for d ∈ S. The proof of Lemma 13 can be easily adapted to show that ϕ is continuous, as it is defined very similarly to (34). The graph of ϕ is Gr ϕ := {(z, d) ∈ Z × S : z ∈ ϕ(d)}. The function f : Gr ϕ → R_+ defined as f(z, d) = ‖z‖ is obviously continuous. Then, m(d) := max_{z ∈ Z} {‖z‖ : z ∈ R_+ d} is continuous in d by the Berge Maximum Theorem [1]. We define z(d) := m(d) d ∈ Z for d ∈ S. This function is continuous since m is continuous, and z(d) = arg max_{z ∈ Z} {‖z‖ : z ∈ R_+ d}. Since x*_N(d) + y*_N(d) = z(d), these functions are continuous.

The control matrix B can be written in block form, with 0_{i,j} denoting the null matrix with i rows and j columns. We calculate the submatrices using the averaged variational equations for the orbital elements given in [12].

References
[1] Infinite Dimensional Analysis: A Hitchhiker's Guide.
[2] Global controllability and bang-bang steering of certain nonlinear systems.
[3] Switching power: Rupert Murdoch and the global business of media politics: A sociological analysis.
[4] On the minimax reachability of target sets and target tubes.
[5] Resilient reachability for linear systems.
[6] Designing resilient linear driftless systems.
[7] Currencies of control: Black Mirror, In Time, and the monetary policies of dystopia.
[8] Secure estimation and control for cyber-physical systems under adversarial attacks.
[9] On theoretical and numerical aspects of the bang-bang-principle.
[10] Opinion dynamics and bounded confidence models, analysis, and simulation.
[11] Semi-infinite programming: theory, methods, and applications.
[12] Reduction of low-thrust continuous controls for trajectory dynamics.
[13] Media politics: A citizen's guide.
[14] Development of a quantitative resilience model for nuclear power plants.
[15] Implementing a linear quadratic spacecraft attitude control system.
[16] Reachability analysis for uncertain systems - the ellipsoidal technique.
[17] Time optimal control systems.
[18] Calculus of Variations and Optimal Control Theory: A Concise Introduction.
[19] The existence of optimal controls in the absence of convexity conditions.
[20] Linear max-min programming.
[21] A systematic review of quantitative resilience measures for water infrastructure systems.
[22] Springer Handbook of Robotics.
[23] Opinion dynamics: models, extensions and external effects.
[24] How much redundancy: Some cost considerations, including examples for spacecraft systems.
[25] A bang-bang theorem with bounds on the number of switchings.
[26] An adaptive actuator failure compensation controller using output feedback.
[27] Attitude stabilization of spacecrafts under actuator saturation and partial loss of control effectiveness.
[28] Hybrid fault-tolerant flight control system design against partial actuator failures.
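As a brief illustration of the quantity m(d) = max{‖z‖ : z ∈ Z, z ∈ R_+ d} appearing in the proof of Lemma 15, the following minimal sketch computes it numerically for polytopes X and Y given by their vertices. This code is not part of the paper: the function name max_norm_along_ray is ours, the vertex representation of X and Y is an assumption, and scipy.optimize.linprog is used only as one convenient LP solver.

```python
import numpy as np
from scipy.optimize import linprog

def max_norm_along_ray(vertices_X, vertices_Y, d):
    """Compute m(d) = max{ ||z|| : z in X + Y, z in R_+ d } for the polytopes
    X = co(vertices_X) and Y = co(vertices_Y), as in the proof of Lemma 15.
    Decision variables: lambda >= 0 and convex weights a (for X) and b (for Y),
    with lambda * d = sum_i a_i v_i + sum_j b_j w_j, sum(a) = sum(b) = 1.
    Returns m(d) = lambda_max * ||d||, or None if the ray misses X + Y."""
    VX = np.asarray(vertices_X, dtype=float)   # shape (p, n)
    VY = np.asarray(vertices_Y, dtype=float)   # shape (q, n)
    d = np.asarray(d, dtype=float)
    n, p, q = d.size, VX.shape[0], VY.shape[0]

    # Variable vector: [lambda, a_1..a_p, b_1..b_q]
    num_vars = 1 + p + q
    c = np.zeros(num_vars)
    c[0] = -1.0                                # maximize lambda <=> minimize -lambda

    # Equalities: lambda*d - VX^T a - VY^T b = 0 ; sum(a) = 1 ; sum(b) = 1
    A_eq = np.zeros((n + 2, num_vars))
    A_eq[:n, 0] = d
    A_eq[:n, 1:1 + p] = -VX.T
    A_eq[:n, 1 + p:] = -VY.T
    A_eq[n, 1:1 + p] = 1.0
    A_eq[n + 1, 1 + p:] = 1.0
    b_eq = np.concatenate([np.zeros(n), [1.0, 1.0]])

    bounds = [(0, None)] * num_vars            # lambda >= 0, weights >= 0
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    if not res.success:
        return None
    return res.x[0] * np.linalg.norm(d)

# Example: X = segment [-1, 1] x {0}, Y = unit square, d = (1, 1)/sqrt(2).
if __name__ == "__main__":
    X_verts = [(-1.0, 0.0), (1.0, 0.0)]
    Y_verts = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]
    d = np.array([1.0, 1.0]) / np.sqrt(2.0)
    print(max_norm_along_ray(X_verts, Y_verts, d))   # expect sqrt(2), attained at z = (1, 1)
```

Since every point of the ray inside Z = X + Y is of the form λd, maximizing ‖z‖ along the ray amounts to maximizing λ, which is exactly the linear program solved above.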