key: cord-0755069-9k4qikl0 authors: Gijbels, Irène; Kika, Vojtěch; Omelka, Marek title: Multivariate Tail Coefficients: Properties and Estimation date: 2020-06-30 journal: Entropy (Basel) DOI: 10.3390/e22070728 sha: c1a0f2ec9120f72365d6dd502c37785c45ee5198 doc_id: 755069 cord_uid: 9k4qikl0 Multivariate tail coefficients are an important tool when investigating dependencies between extreme events for different components of a random vector. Although bivariate tail coefficients are well-studied, this is, to a lesser extent, the case for multivariate tail coefficients. This paper contributes to this research area by (i) providing a thorough study of properties of existing multivariate tail coefficients in the light of a set of desirable properties; (ii) proposing some new multivariate tail measurements; (iii) dealing with estimation of the discussed coefficients and establishing asymptotic consistency; and, (iv) studying the behavior of tail measurements with increasing dimension of the random vector. A set of illustrative examples is given, and practical use of the tail measurements is demonstrated in a data analysis with a focus on dependencies between stocks that are part of the EURO STOXX 50 market index. Assume that we have a d-variate random vector and we are interested in the tendency of the components to achieve extreme values simultaneously, that is, to take extremely small or extremely large values. In the bivariate setting, when d = 2, this so-called tail dependence has been studied thoroughly in the literature. Bivariate lower and upper tail coefficients appeared, for example, in [1] , but the idea of studying bivariate extremes dates back to [2] . These coefficients, being conditional probabilities of an extreme event given that another event is also extreme, have become the standard tool to quantify tail dependence of a bivariate random vector. Later, a generalization to arbitrary dimension d became of interest. 
The presence of more than two components, however, complicates the definition of tail dependency, and several proposals have appeared in the literature. These proposals include those made by [3, 4] or [5] , who adopted different strategies for conditioning in general dimensions. Further proposals were made for specific copula families, for example, by [6] for Archimedean copulas or by [7] for extreme-value copulas. In this paper, we aim to contribute to the discussion on the appropriateness of multivariate tail coefficients, from the viewpoint of properties that one would desire such coefficients to have. This study also entails the proposal of some new multivariate tail measures, for which we establish the properties. We investigate estimation of the discussed multivariate tail coefficients and establish consistency of all estimators. It is also of particular interest to find out how tail dependence measures behave when the dimension d increases. The organization of the paper is as follows. In Section 2, we briefly review some basic concepts about copulas and classes of copulas that will be needed in subsequent sections. Section 3 is devoted to the study of various multivariate tail dependence measures, whereas Section 7 discusses statistical estimation of these measures, including consistency properties. Section 4 investigates some further probabilistic properties of the multivariate tail dependence measures. Section 5 studies the behavior of the tail coefficient measures for Archimedean copulas when the dimension increases to infinity. A variety of illustrative examples is provided in Section 6, and it accompanies the studies that are presented in Sections 3 and 5. Finally, in Section 8, it is demonstrated how multivariate tail coefficients contribute to gaining insight into dependencies between stocks that are part of the EURO STOXX 50 market index. 
In this section, we briefly introduce concepts and notation from copula theory that will be necessary in the rest of this text. For more details on copulas, see e.g., [8] . Suppose that we have a d-variate random vector X = (X 1 , . . . , X d ) having a joint distribution function F. Let further F j denote the continuous marginal distribution function of X j for j = 1, . . . , d. Sklar's theorem [9] describes the relationship between the joint distribution function and the marginals, given by a unique copula function C d : [0, 1] d → [0, 1] such that F(x 1 , . . . , x d ) = C d (F 1 (x 1 ), . . . , F d (x d )). We denote the set of all d-variate copulas by Cop(d). From the above relationship, it is easily seen that the random vector U = (U 1 , . . . , U d ) = (F 1 (X 1 ), . . . , F d (X d )) has a joint distribution function C d , that is, with u = (u 1 , . . . , u d ) ∈ [0, 1] d , C d (u) = P(U ≤ u). The inequalities of vectors in this text are understood component-wise. The survival function C̄ d that is associated to a copula C d is defined as C̄ d (u) = P(U > u). The survival copula C S d that is associated to a copula C d is defined as the copula of the random vector 1 − U, that is, C S d (u 1 , . . . , u d ) = C̄ d (1 − u 1 , . . . , 1 − u d ). (1) Let π be a permutation of the set of indices {1, . . . , d}, i.e., π : {1, . . . , d} → {1, . . . , d}. The copula C π d is defined using a copula C d as [10] C π d (u 1 , . . . , u d ) = C d (u π(1) , . . . , u π(d) ), ∀u ∈ [0, 1] d . In every point of the unit hypercube [0, 1] d , the value of a copula C d is restricted by the lower Fréchet bound W d (u) = max(∑ d j=1 u j − d + 1, 0) and the upper Fréchet bound M d (u) = min(u 1 , . . . , u d ). In other words, W d (u) ≤ C d (u) ≤ M d (u) for all u ∈ [0, 1] d . The function M d is a copula for any d ≥ 2 and it is often called the comonotonicity copula, since it corresponds to the copula of a random vector X whose arbitrary component can be expressed as a strictly increasing function of any other component. If the components of a random vector X are mutually independent, the copula of X is the independence copula Π d (u) = ∏ d j=1 u j . 
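The Fréchet bounds and the independence copula are simple to evaluate numerically. The following minimal Python sketch (the function names are ours, chosen for illustration only) checks the bounds W d ≤ Π d ≤ M d on a grid of the unit cube:

```python
import itertools

def W(u):
    # lower Fréchet bound W_d(u) = max(sum(u_j) - d + 1, 0)
    return max(sum(u) - len(u) + 1.0, 0.0)

def M(u):
    # upper Fréchet bound (comonotonicity copula) M_d(u) = min(u_1, ..., u_d)
    return min(u)

def Pi(u):
    # independence copula Pi_d(u) = prod(u_j)
    p = 1.0
    for x in u:
        p *= x
    return p

# Check W_d(u) <= Pi_d(u) <= M_d(u) on a grid of the unit cube (d = 3).
grid = [i / 4 for i in range(5)]
assert all(W(u) <= Pi(u) <= M(u) for u in itertools.product(grid, repeat=3))
```

The same bounds hold for any copula C d in place of Π d , which is the content of the displayed inequality above.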
The copula that is associated to any subset of components of a d-dimensional random vector X is called a marginal copula of C d . A marginal copula can be calculated from the original copula by setting the arguments corresponding to the unconsidered components to 1. For example, the marginal copula C (1,...,d−1) d−1 of (X 1 , . . . , X d−1 ) can be obtained as C (1,...,d−1) d−1 (u 1 , . . . , u d−1 ) = C d (u 1 , . . . , u d−1 , 1), where C d is the copula of X. Marginal copulas can be used to calculate the survival function C̄ d of a copula C d , since C̄ d (u) = ∑ d j=0 (−1) j ∑ 1≤k 1 <···<k j ≤d C (k 1 ,...,k j ) j (u k 1 , . . . , u k j ), (2) where the term for j = 0 equals 1. In the study here, we pay particular attention to two classes of copulas: multivariate extreme-value copulas and multivariate Archimedean copulas. Definition 1 (Extreme-value copula). A d-variate copula C d is called an extreme-value copula if C d (u t 1 , . . . , u t d ) = (C d (u 1 , . . . , u d )) t for all t > 0 and all u ∈ [0, 1] d . This definition is only one of many ways of defining extreme-value copulas. For other definitions and properties, see, for example, ref. [11] . Every extreme-value copula C d can be expressed in terms of a so-called stable tail dependence function ℓ d : [0, ∞) d → [0, ∞) as C d (u 1 , . . . , u d ) = exp(−ℓ d (− log u 1 , . . . , − log u d )). (3) Denote by ∆ d−1 the (d − 1)-dimensional unit simplex ∆ d−1 = {(w 1 , . . . , w d ) ∈ [0, ∞) d : w 1 + · · · + w d = 1}. Every extreme-value copula can be equivalently expressed in terms of its Pickands dependence function A d : ∆ d−1 → [1/d, 1] as C d (u 1 , . . . , u d ) = exp[ (∑ d j=1 log u j ) A d ( log u 1 / ∑ d j=1 log u j , . . . , log u d / ∑ d j=1 log u j ) ]. (4) The function A d is the restriction of the function ℓ d to the unit simplex, that is, A d (w) = ℓ d (w) for w ∈ ∆ d−1 . Further, A d is convex and it satisfies max(w 1 , . . . , w d ) ≤ A d (w 1 , . . . , w d ) ≤ 1, for w = (w 1 , . . . , w d ) ∈ ∆ d−1 . The comonotonicity copula M d and the independence copula Π d are both extreme-value copulas with respective Pickands dependence functions A d (w) = max(w 1 , . . . , w d ) and A d (w) = 1, i.e., the lower and upper bounds above. Note that if A d (1/d, . . . , 1/d) = 1/d, then the corresponding copula must be the comonotonicity copula M d . Indeed, if A d (1/d, . . . , 1/d) = 1/d it follows from (4) that C d (u, . . .
, u) = u for every u ∈ (0, 1). Because, for any copula C d , it holds that C d (u) ≤ M d (u) for all u ∈ [0, 1] d , the upper Fréchet bound, and C d (u) ≥ C d (min(u 1 , . . . , u d ), . . . , min(u 1 , . . . , u d )), where the latter quantity equals min(u 1 , . . . , u d ) in this case and, consequently, C d (u) ≥ M d (u) for all u ∈ [0, 1] d . Hence, in this case, C d = M d . Similarly, if A d (1/d, . . . , 1/d) = 1, then the corresponding copula C d must be the independence copula Π d . To see this, first suppose that there exists a point w = (w 1 , . . . , w d ) ∈ ∆ d−1 with A d (w) < 1. Averaging over all permutations of the components of w and using the convexity of A d then gives A d (1/d, . . . , 1/d) ≤ (1/d!) ∑ π A d (w π(1) , . . . , w π(d) ) < 1, which is a contradiction. This means that A d (w) = 1 for every w ∈ ∆ d−1 . Immediately from (4), we get that C d (u) = ∏ d j=1 u j for every u ∈ [0, 1] d and, hence, C d = Π d . Finally, from Definition 1, it follows that the marginal copula of an extreme-value copula is also an extreme-value copula. We next provide an illustrative example. Example 1. Let C d be the d-variate extreme-value copula of (X 1 , . . . , X d ) and C d+1 be the (d + 1)-variate copula of (X 1 , . . . , X d , X d+1 ), where X d+1 is independent of (X 1 , . . . , X d ), that is, C d+1 (u 1 , . . . , u d+1 ) = C d (u 1 , . . . , u d ) u d+1 . Subsequently, from Definition 1, C d+1 is also an extreme-value copula. The stable tail dependence function ℓ d+1 can be expressed, using (3), as ℓ d+1 (x 1 , . . . , x d+1 ) = ℓ d (x 1 , . . . , x d ) + x d+1 . (5) Then from (5), A d+1 (w 1 , . . . , w d+1 ) = (1 − w d+1 ) A d ( w 1 /(1 − w d+1 ), . . . , w d /(1 − w d+1 ) ) + w d+1 , for w d+1 < 1. Another class of copulas that we consider is the class of multivariate Archimedean copulas, thoroughly discussed, for example, in [12] . Definition 2 (Archimedean copula). A non-increasing and continuous function ψ : [0, ∞) → [0, 1], which satisfies the conditions ψ(0) = 1, lim x→∞ ψ(x) = 0 and is strictly decreasing on [0, inf{x : ψ(x) = 0}), is called an Archimedean generator. 
A d-dimensional copula C d is called Archimedean if, for any u ∈ [0, 1] d , it permits the representation C d (u 1 , . . . , u d ) = ψ( ψ −1 (u 1 ) + · · · + ψ −1 (u d ) ) for some Archimedean generator ψ and its inverse ψ −1 : (0, 1] → [0, ∞), where, by convention, ψ(∞) = 0 and ψ −1 (0) = inf{x : ψ(x) = 0}. In [12] , the authors also provide a characterization of the Archimedean generators leading to some Archimedean copula by means of the following definition and proposition. Definition 3 (d-monotone function). A real function f is called d-monotone on [0, ∞), for d ≥ 2, if it is continuous on [0, ∞) and differentiable on (0, ∞) up to the order d − 2, the derivatives satisfy (−1) k f (k) (x) ≥ 0 for any x ∈ (0, ∞) and k = 0, 1, . . . , d − 2, and further (−1) d−2 f (d−2) is non-increasing and convex in (0, ∞). If f has derivatives of all orders in (0, ∞) and if (−1) k f (k) (x) ≥ 0 for any x ∈ (0, ∞) and any k = 0, 1, . . . , then f is called completely monotone. It can be shown that exactly this notion is the key to specifying which Archimedean generators can generate copulas. Proposition 1 (Characterization of Archimedean copulas). Let ψ be an Archimedean generator and d ≥ 2. Then ψ generates a d-variate Archimedean copula if and only if ψ is d-monotone on [0, ∞). Most of the well-known Archimedean generators are completely monotone; such generators are also called strict generators. For strict generators, ψ −1 (0) = ∞. However, the range of parameter values possibly depends on the dimension. We illustrate this with the Clayton copula family. Example 2. Let C d be the d-variate Clayton copula with parameter θ. In the bivariate case, its generator is defined as ψ θ (t) = (1 + θt) −1/θ + with θ ≥ −1. However, ψ θ is d-monotone only for θ ≥ −1/(d − 1) (see [12] ). That is, if we want to consider the Clayton copula in any dimension, we have to restrict ourselves to θ ≥ 0, where the case θ = 0 is defined as the limit θ → 0 and, in fact, corresponds to the independence copula. Figure 1 shows how the generator of the Clayton family depends on the parameter θ. When θ < 0 and, thus, ψ θ is not completely monotone, there exists t ∈ (0, ∞) such that ψ θ (t) = 0. Otherwise, for θ ≥ 0, lim t→∞ ψ θ (t) = 0, but for every t ∈ (0, ∞) we have ψ θ (t) > 0. In Figure 1 , we see the most common shape of the generator function. 
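For instance, the Clayton family with θ > 0 fits the representation above. The following Python sketch (the helper names are ours, for illustration) builds the d-variate copula value from the generator and its inverse; the result agrees with the known closed form C d (u) = (∑ d j=1 u j −θ − (d − 1)) −1/θ :

```python
def clayton_psi(t, theta):
    # Clayton generator psi_theta(t) = (1 + theta * t)^(-1/theta), for theta > 0
    return (1.0 + theta * t) ** (-1.0 / theta)

def clayton_psi_inv(u, theta):
    # inverse generator psi^{-1}(u) = (u^(-theta) - 1) / theta
    return (u ** (-theta) - 1.0) / theta

def archimedean_copula(u, psi, psi_inv):
    # C_d(u) = psi( psi^{-1}(u_1) + ... + psi^{-1}(u_d) )
    return psi(sum(psi_inv(x) for x in u))

theta = 2.0
C = archimedean_copula((0.3, 0.5, 0.7),
                       lambda t: clayton_psi(t, theta),
                       lambda x: clayton_psi_inv(x, theta))
```

The same `archimedean_copula` helper works for any strict generator, e.g., the Gumbel-Hougaard generator discussed below.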
The following lemma focuses on the behavior of generators close to t = 0 and is useful later in this text. Lemma 1. Let ψ be an Archimedean generator that generates a copula, differentiable on (0, ε) for some ε > 0. Then ψ′(0 + ) < 0, possibly ψ′(0 + ) = −∞. Proof. Suppose that ψ′(0 + ) = 0. Then, from the negativity of ψ′ on (0, inf{x : ψ(x) = 0}), ψ′ must decrease below zero, which is in contradiction with the fact that ψ′ is a non-decreasing function on (0, ∞), by the convexity of ψ. The following example shows that ψ′(0 + ) can be equal to −∞. Example 3. Let ψ θ (t) = exp(−t 1/θ ) for θ ≥ 1, which is the generator of the Gumbel-Hougaard family. Then ψ′ θ (t) = −(1/θ) t 1/θ−1 exp(−t 1/θ ), so that ψ′ θ (0 + ) = −∞ for θ > 1, while ψ′ 1 (0 + ) = −1. Recall that θ = 1 corresponds to the independence copula. Figure 2 shows how the generator of the Gumbel-Hougaard family depends on the parameter θ. In the bivariate case (i.e., d = 2), lower and upper tail coefficients are defined, respectively, as λ L (C 2 ) = lim u→0 + C 2 (u, u)/u and λ U (C 2 ) = lim u→1 − C̄ 2 (u, u)/(1 − u) = lim u→1 − (1 − 2u + C 2 (u, u))/(1 − u), if the limits above exist. Throughout the text, when defining these and other tail coefficients, we will assume the existence of the limits involved. The general idea behind the tail coefficients is to measure how likely it is that a random variable is extreme, given that another variable is extreme. These coefficients can take values between 0 and 1, since they are probabilities. For extreme-value copulas, the tail coefficients can be expressed as functions of the Pickands dependence function A 2 corresponding to the copula C 2 as λ U (C 2 ) = 2(1 − A 2 (1/2, 1/2)), and λ L (C 2 ) = 1 if A 2 (1/2, 1/2) = 1/2 while λ L (C 2 ) = 0 otherwise, (6) see [11] . That is, unless the studied copula is the comonotonicity copula, extreme-value copulas do not possess any lower tail dependence. Recall that, when A 2 (1/2, 1/2) = 1, the corresponding copula must be the independence copula Π 2 . Therefore, an extreme-value copula possesses upper tail dependence, unless the copula is the independence copula. In the case of Archimedean copulas, the tail coefficients can be expressed via the corresponding generator ψ as λ L (C 2 ) = lim t→ψ −1 (0) − ψ(2t)/ψ(t) and λ U (C 2 ) = 2 − lim t→0 + (1 − ψ(2t))/(1 − ψ(t)), see [14] . Note that both tail coefficients only depend on the behavior of the generator ψ in the proximity of the points 0 and ψ −1 (0). 
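These limit definitions can be checked numerically. The short Python sketch below (illustrative only; the function names are ours) approximates λ L for the bivariate Clayton copula, whose lower tail coefficient is known to equal 2 −1/θ for θ > 0, by evaluating C 2 (u, u)/u at a small u:

```python
def clayton_C2(u, v, theta):
    # bivariate Clayton copula, theta > 0
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def lower_tail_coeff(C2, u=1e-8):
    # lambda_L = lim_{u -> 0+} C(u, u) / u, approximated at a small fixed u
    return C2(u, u) / u

theta = 1.0
approx = lower_tail_coeff(lambda a, b: clayton_C2(a, b, theta))
exact = 2.0 ** (-1.0 / theta)  # known closed form for the Clayton family
```

Decreasing `u` further improves the approximation, up to floating-point accuracy; the same approach applies to λ U via the survival function.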
Recall that, in the case of strict Archimedean generators, the latter is equal to ∞. Given their meaning and mathematical expression, tail coefficients cannot be generalized to a general dimension d > 2 in a straightforward and unique way. We first propose a set of desirable properties that are expected to hold for any multivariate tail coefficient t d : Cop(d) → R and for any d-variate copulas C d and C d,m , m = 1, 2, . . . . The following properties are stated under the working condition that all tail coefficients (t d (C d ), t d+1 (C d+1 ), t d (C d,m ), and so on) exist. (T 1 ) (Normalization) t d (Π d ) = 0 and t d (M d ) = 1. (T 2 ) (Continuity) If lim m→∞ C d,m (u) = C d (u) for all u ∈ [0, 1] d , then lim m→∞ t d (C d,m ) = t d (C d ). (T 3 ) (Permutation invariance) t d (C π d ) = t d (C d ) for every permutation π of {1, . . . , d}. (T 4 ) (Addition of an independent component) For X d+1 independent of (X 1 , . . . , X d ), t d+1 (C d+1 ) = k d t d (C d ) for some constant k d ∈ [0, 1]. Property (T 4 ) could be formulated in a slightly stricter way, as (T 4 ') For X d+1 , independent of (X 1 , . . . , X d ), there exists a constant k d (t d ) ∈ [0, 1] not depending on C d such that t d+1 (C d+1 ) = k d (t d ) t d (C d ). Because both lower and upper tail dependence are of interest, usually we consider that t d has actually two versions t U,d and t L,d focusing on either upper tail (variables simultaneously large) or lower tail (variables simultaneously small) dependence, respectively. Thus we can also consider the following property (T 5 ) (Duality) t L,d (C d ) = t U,d (C S d ) and t U,d (C d ) = t L,d (C S d ). In general, some of the desirable properties above are easy to enforce. If one starts with a candidate coefficient t * d , property (T 1 ) can be achieved by defining t d (C d ) = (t * d (C d ) − t * d (Π d )) / (t * d (M d ) − t * d (Π d )). Property (T 3 ) can be achieved by taking an average of the candidate coefficient t * d over all of the permutations, t d (C d ) = (1/d!) ∑ π∈S d t * d (C π d ), where S d denotes all of the permutations of the set {1, . . . , d}. Note, however, that, especially for high dimensions, this significantly increases the computational complexity. In the case of property (T 5 ), we can simply use it to define an upper tail coefficient from the lower tail one (or the other way around). In the following, we briefly review multivariate tail coefficients proposed in the literature and elaborate on their behavior with respect to the desirable properties (T 1 )-(T 5 ). 
For brevity of presentation, we refer to (T 4 ) or its variant (T 4 ') as the "addition property". To simplify the notation, the subscript d of t d , denoting the dimension, will sometimes be omitted in the text, the dimension being clear from an argument of a functional t. Frahm (see [3] ) considered lower and upper extremal dependence coefficients ε L and ε U , respectively, defined as ε L (C d ) = lim u→0 + P(U max ≤ u | U min ≤ u) and ε U (C d ) = lim u→1 − P(U min > u | U max > u), (7) given the limits exist, where U max = max(U 1 , . . . , U d ) and U min = min(U 1 , . . . , U d ). These coefficients are not equal to λ L , λ U , respectively, in the bivariate case. More specifically, for any copula C 2 (see [3] ), ε L (C 2 ) = λ L (C 2 )/(2 − λ L (C 2 )) and ε U (C 2 ) = λ U (C 2 )/(2 − λ U (C 2 )). Thus, we can consider it more as a different type of tail dependence coefficient than a generalization of the bivariate tail coefficients. For extreme-value copulas, the extremal dependence coefficients can be stated in terms of the Pickands dependence function. Let C d be an extreme-value copula with Pickands dependence function A d and denote the Pickands dependence function of the marginal copula C (k 1 ,...,k j ) j as A (k 1 ,...,k j ) j . Subsequently, ε U (C d ) = [ ∑ d j=1 (−1) j+1 ∑ 1≤k 1 <···<k j ≤d j A (k 1 ,...,k j ) j (1/j, . . . , 1/j) ] / [ d A d (1/d, . . . , 1/d) ], (8) where the inner sum runs over all subsets {k 1 , . . . , k j } of {1, . . . , d}. As opposed to (8) , expression (9) below only involves the overall d-dimensional Pickands dependence function. This might be helpful, for example, during estimation, since not all of the lower-dimensional Pickands dependence functions in (8) then need to be estimated. Thus, for the lower extremal dependence coefficient, one obtains ε L (C d ) = 0 (unless C d = M d ), because the polynomial (in t) in the denominator contains lower-degree terms than the polynomial in the numerator. We can see that this behavior resembles λ L for bivariate extreme-value copulas, since the only extreme-value copula possessing lower tail dependence is the comonotonicity copula. For the upper extremal dependence coefficient, we can calculate ε U (C d ) = [ ∑ d j=1 (−1) j+1 ∑ 1≤k 1 <···<k j ≤d j ℓ d (w) ] / ℓ d (1, . . . , 1), (9) where, as above, w ℓ = 1/j if ℓ ∈ {k 1 , . . . , k j } and w ℓ = 0 otherwise. We next look into the tail coefficients (7) for Archimedean copulas. 
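The conditional-probability form of ε U also suggests a simple plug-in estimate from pseudo-observations. The following Python sketch is a toy estimator of our own, for illustration only (it is not the estimator studied later in the paper); on comonotone data the coefficient equals 1:

```python
def frahm_upper_eps(sample, u):
    # Plug-in estimate of epsilon_U = P(U_min > u) / P(U_max > u) at a
    # threshold u close to 1; `sample` holds pseudo-observations in [0, 1]^d.
    num = sum(1 for row in sample if min(row) > u)
    den = sum(1 for row in sample if max(row) > u)
    return num / den if den > 0 else 0.0

# Comonotone toy data: all components equal, so U_min = U_max and the ratio is 1.
n = 1000
comonotone = [((i + 0.5) / n,) * 3 for i in range(n)]
eps_hat = frahm_upper_eps(comonotone, u=0.95)
```

In practice, the threshold u would be chosen close to 1 but such that enough observations exceed it, which is the usual bias-variance trade-off of tail estimation.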
Let {C d } d≥2 be a sequence of d-dimensional Archimedean copulas with (the same) generator ψ. Subsequently, P(U max ≤ u) = C d (u, . . . , u) = ψ(dψ −1 (u)) (10) and P(U min ≤ u) = 1 − C̄ d (u, . . . , u) = 1 − ∑ d j=0 (−1) j (d choose j) ψ(jψ −1 (u)). (11) The corresponding derivatives with respect to u, if they exist, are d ψ′(dψ −1 (u))/ψ′(ψ −1 (u)) and ∑ d j=1 (−1) j+1 j (d choose j) ψ′(jψ −1 (u))/ψ′(ψ −1 (u)), respectively. Afterwards, the extremal dependence coefficients can be expressed as ε L (C d ) = lim t→ψ −1 (0) − d ψ′(dt) / [ ∑ d j=1 (−1) j+1 j (d choose j) ψ′(jt) ], (12) ε U (C d ) = lim t→0 + [ ∑ d j=1 (−1) j+1 j (d choose j) ψ′(jt) ] / [ d ψ′(dt) ], (13) where we used L'Hospital's rule to get to the equation in (12) , and the second equation in the derivation towards (13) . Recall that ψ −1 (1) = 0 and ψ −1 (0) = inf{u : ψ(u) = 0}. One can see that using L'Hospital's rule does not solve the 0/0 limit problem for a general ψ, and knowledge of the precise behavior of ψ′ is thus crucial for calculating the coefficients ε L (C d ) and ε U (C d ). As will be illustrated in Section 6, Archimedean copulas can have both extremal dependence coefficients non-zero, depending on the generator. For ε U , one additional assumption regarding the generator ψ may become useful. Because (from the definition of the generator) lim u→1 − ψ −1 (u) = 0, if the additional condition ψ′(0 + ) > −∞ is fulfilled, we get ε U (C d ) = [ ∑ d j=1 (−1) j+1 j (d choose j) ] ψ′(0 + ) / [ d ψ′(0 + ) ] = 0, using that, from Lemma 1, ψ′(0 + ) cannot be equal to zero. In other words, if ψ′(0 + ) > −∞, then the corresponding Archimedean copula is upper tail independent, in every dimension. Next, we investigate which of the desirable properties (T 1 )-(T 5 ) are satisfied for Frahm's extremal dependence coefficients ε L and ε U . Proposition 2. Frahm's extremal dependence coefficients ε L and ε U satisfy the normalization property (T 1 ), the permutation invariance property (T 3 ), the addition property (T 4 ), with k d (ε L ) = k d (ε U ) = 0 for every d ≥ 2, and the duality property (T 5 ). Proof. Normalization property (T 1 ) follows from straightforward calculations: for instance, ε L (Π d ) = lim u→0 + u d /(1 − (1 − u) d ) = 0 and ε L (M d ) = lim u→0 + u/u = 1. Permutation invariance property (T 3 ) follows immediately from the fact that the coefficients only depend on U max and U min , which do not depend on the order of the components of the random vector. Look now into the addition of an independent component, i.e., property (T 4 ). To be able to distinguish between the dimensions, we use the notation U max,d = max(U 1 , . . . , U d ) and U min,d = min(U 1 , . . .
, U d ). For X d+1 independent of (X 1 , . . . , X d ), we have P(U max,d+1 ≤ u) = u C d (u, . . . , u) and P(U min,d+1 ≤ u) ≥ P(U d+1 ≤ u) = u, so that ε L (C d+1 ) ≤ lim u→0 + C d (u, . . . , u) = 0, and an analogous argument gives ε U (C d+1 ) = 0, which means that the property about adding an independent component (T 4 ) holds with constants k d (ε L ) = k d (ε U ) = 0. We next look into duality (T 5 ). Using relation (1) between the survival function and the survival copula, the coefficients ε L and ε U can be rewritten as ε L (C d ) = lim u→0 + C d (u, . . . , u)/(1 − C̄ d (u, . . . , u)) and ε U (C d ) = lim u→1 − C̄ d (u, . . . , u)/(1 − C d (u, . . . , u)), and thus ε U (C d ) = lim v→0 + C S d (v, . . . , v)/(1 − C̄ S d (v, . . . , v)) = ε L (C S d ), where the substitution v = 1 − u was used. This proves the validity of the duality property (T 5 ). We suspect that the continuity property (T 2 ) does not hold in its full generality for most multivariate tail coefficients. To obtain insight into this, consider the following example with a sequence of copulas {C d,m } given by C d,m (u) = M d (u) if min(u 1 , . . . , u d ) ≤ 1/m, and C d,m (u) = 1/m + (1 − 1/m) ∏ d j=1 [(u j − 1/m)/(1 − 1/m)] otherwise. Note that the distribution that is given by C d,m is uniform on the set [1/m, 1] d and it corresponds to the upper Fréchet bound M d otherwise. Note that C d,m is a copula with an ordinal sum representation, see [8] (Section 3.2.2). It is easily seen that lim m→∞ C d,m (u) = Π d (u) for all u ∈ [0, 1] d , while ε L (C d,m ) = 1 for every m. On the other hand, ε L (Π d ) = 0. Hence, for this sequence of copulas, the continuity property (T 2 ) does not hold. However, a continuity property may hold, in general, under more specific conditions on the copula sequences. One such condition is that of a sequence of contaminated copulas, defined as follows. Let C d and B d,m , for m = 1, . . . , be d-variate copulas, and let α m be a sequence of numbers in [0, 1]. One considers the sequence of contaminated copulas C d,m = (1 − α m ) C d + α m B d,m . (14) Note that C d,m is a convex combination of the copulas C d and B d,m and, hence, is also a copula, see e.g., [8] . The interest is to investigate the behavior of a tail coefficient for the sequence C d,m when α m → 0, as m → ∞. Proposition 3. Suppose that, for any d-variate copulas C d and C d,m , m = 1, 2, . . . , there exists ε > 0, such that lim m→∞ sup u∈(0,ε) | C d,m (u, . . . , u)/(1 − C̄ d,m (u, . . . , u)) − C d (u, . . . , u)/(1 − C̄ d (u, . . . , u)) | = 0. (15) Further assume that ε L (C d,m ) exists for every m = 1, 2, . . . . Subsequently, lim m→∞ ε L (C d,m ) = ε L (C d ). In particular, condition (15) is satisfied for a sequence of contaminated copulas, as in (14), for which α m → 0, as m → ∞, and provided ε L (C d ) exists. Proof. 
Assumption (15) allows us to use the Moore-Osgood theorem to interchange the limits and, thus, lim m→∞ ε L (C d,m ) = lim m→∞ lim u→0 + C d,m (u, . . . , u)/(1 − C̄ d,m (u, . . . , u)) = lim u→0 + lim m→∞ C d,m (u, . . . , u)/(1 − C̄ d,m (u, . . . , u)) = ε L (C d ). Suppose now that we have a sequence of contaminated copulas, for which α m → 0, as m → ∞. Subsequently, one calculates C d,m (u, . . . , u)/(1 − C̄ d,m (u, . . . , u)) = [(1 − α m ) C d (u, . . . , u) + α m B d,m (u, . . . , u)] / (1 − C̄ d,m (u, . . . , u)). (16) One next realizes that max{B d,m (u, . . . , u), C̄ B d,m (u, . . . , u)} ≤ 1, where C̄ B d,m denotes the survival function of B d,m . Furthermore, with the help of Formula (2) for the survival function of a copula, one gets 1 − C̄ d,m (u, . . . , u) = (1 − α m )(1 − C̄ d (u, . . . , u)) + α m (1 − C̄ B d,m (u, . . . , u)) ≥ (1 − α m ) u, which implies (15) . Analogously, a similar result could be stated for ε U . Li (see [4] , Def. 2) defines so-called lower and upper tail dependence parameters, as follows. Let I h = {i 1 , . . . , i h } and J d−h = {j 1 , . . . , j d−h } be a partition of {1, . . . , d} into two non-empty sets, and set λ L (C d ; I h | J d−h ) = lim u→0 + P(U i ≤ u, ∀i ∈ I h | U j ≤ u, ∀j ∈ J d−h ) and λ U (C d ; I h | J d−h ) = lim u→1 − P(U i > u, ∀i ∈ I h | U j > u, ∀j ∈ J d−h ), given the expressions exist. It is evident that these coefficients heavily depend on the choice of the set I h . Additionally, this generalization includes the usual bivariate tail dependence coefficients λ L and λ U , by letting h = 1, I 1 = {1} and J 1 = {2} or the other way around. Li [4] further states that λ U (C d ; I h | J d−h ) = λ L (C S d ; I h | J d−h ) and, therefore, the duality property (T 5 ) is fulfilled. One can also notice that, for exchangeable copulas (i.e., symmetric in their arguments), the dependence parameters are in fact functions of the cardinality h rather than of the particular contents of I h . Especially in this case, it is worth introducing the shorter notation λ L (C d ; h) and λ U (C d ; h) for the common values. In paper [15] , it is shown that these coefficients can be rewritten while using one-sided derivatives of the diagonal section δ C d (u) = C d (u, . . . , u) of the corresponding copula in the following way: λ L (C d ; I h | J d−h ) = δ′ C d (0 + )/δ′ C d−h (0 + ), where δ C d−h denotes the diagonal section of the marginal copula of the conditioning components, and analogously for λ U while using one-sided derivatives at 1. Additionally, the authors in [15] comment on the connection with Frahm's extremal dependence coefficients ε L and ε U , which can also be expressed in terms of such one-sided derivatives, if all of the above quantities exist. De Luca and Rivieccio [6] (Def. 2) also use this way to measure tail dependence, although they consider it as a measure for Archimedean copulas only, since then the measures can be expressed while using the generator, as λ L (C d ; I h | J d−h ) = lim u→0 + ψ(dψ −1 (u))/ψ((d − h)ψ −1 (u)) = lim t→ψ −1 (0) − d ψ′(dt)/[(d − h) ψ′((d − h)t)], (17) λ U (C d ; I h | J d−h ) = lim t→0 + [ ∑ d j=1 (−1) j j (d choose j) ψ′(jt) ] / [ ∑ d−h j=1 (−1) j j (d−h choose j) ψ′(jt) ], (18) where we applied l'Hospital's rule for obtaining the equations in (17) and (18) . In contrast to Frahm's coefficient, here the additional condition ψ′(0 + ) > −∞ is not helpful, since it leads to λ U (C d ; I h | J d−h ) = [ ∑ d j=1 (−1) j j (d choose j) ] ψ′(0 + ) / [ ∑ d−h j=1 (−1) j j (d−h choose j) ] ψ′(0 + ), and the numerator and denominator are both equal to zero here. 
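For an exchangeable Archimedean copula, the ratio of diagonal sections can be evaluated directly. For the Clayton family, the limit of this ratio works out to (d/(d − h)) −1/θ ; the following Python sketch (the helper names are ours, for illustration) confirms this numerically for d = 3, h = 1:

```python
def clayton_diag(u, d, theta):
    # Diagonal section delta_{C_d}(u) = C_d(u, ..., u) of the d-variate
    # Clayton copula: (d * u^(-theta) - (d - 1))^(-1/theta)
    return (d * u ** -theta - (d - 1.0)) ** (-1.0 / theta)

def li_lambda_L(d, h, theta, u=1e-10):
    # lambda_L approximated by delta_{C_d}(u) / delta_{C_{d-h}}(u) at small u
    return clayton_diag(u, d, theta) / clayton_diag(u, d - h, theta)

approx = li_lambda_L(d=3, h=1, theta=1.0)
exact = (3.0 / 2.0) ** (-1.0 / 1.0)  # limit (d/(d-h))^(-1/theta) for Clayton
```

Note that the ratio is taken between the diagonal of the full copula and that of the (d − h)-variate marginal copula of the conditioning components, which for an Archimedean copula shares the same generator.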
Proposition 4. Li's tail dependence parameters λ L and λ U satisfy the normalization property (T 1 ), the addition property (T 4 ), and the duality property (T 5 ). Proof. Duality property (T 5 ) was shown in [4] . Normalization property (T 1 ) follows from straightforward calculations while using (17) and (18) ; for λ U , it also follows from the duality property (T 5 ). We now check property (T 4 ), the addition of an independent random component. Suppose that the added independent component belongs to the set I h+1 . Subsequently, the numerator of the defining conditional probability gains an independent factor tending to zero, so that λ L (C d+1 ; I h+1 | J d−h ) = 0. If the added independent component belongs to the set J d−h+1 , then, from the definition of the coefficient, the extra independent factor cancels from the numerator and the denominator, so that λ L (C d+1 ; I h | J d−h+1 ) = λ L (C d ; I h | J d−h ). Showing these statements for λ U is analogous. The proof of Proposition 4 thus shows that, in fact, property (T 4 ) is fulfilled if one distinguishes two cases. If the added independent component belongs to the set I h+1 , then (T 4 ) holds with k d (λ L ) = k d (λ U ) = 0 for every d ≥ 2. However, if the added independent component belongs to the set J d−h+1 , then (T 4 ) holds with k d (λ L ) = k d (λ U ) = 1. Permutation invariance (T 3 ) does not hold in general. However, if one would restrict to only permutations that permute indices within I h and within J d−h and not across these two sets, λ L and λ U would be invariant with respect to such permutations. Further, we might consider the special case h = d − 1, which is when we condition on only one variable. Subsequently, permutation invariance can be enforced by averaging over the conditioning variable, i.e., by working with (1/d) ∑ d j=1 λ L (C d ; {1, . . . , d} \ {j} | {j}), and analogously for λ U . A continuity property can be shown under a specific condition on the copula sequence, as is established in Proposition 5. Proposition 5. Suppose that, for any d-variate copulas C d and C d,m , m = 1, 2, . . . , there exists ε > 0, such that lim m→∞ sup u∈(0,ε) | δ C d,m (u)/δ C d−h,m (u) − δ C d (u)/δ C d−h (u) | = 0, (20) where δ C d−h and δ C d−h,m denote the diagonal sections of the marginal copulas of the conditioning components. Further assume that λ L (C d,m ; I h | J d−h ) exists for every m = 1, 2, . . . , as well as λ L (C d ; I h | J d−h ). Subsequently, lim m→∞ λ L (C d,m ; I h | J d−h ) = λ L (C d ; I h | J d−h ). In particular, condition (20) holds for a sequence of contaminated copulas, see (14) , for which α m → 0, as m → ∞, for which lim inf u→0 + δ C d−h (u)/u > 0, (21) and for which λ L (C d ; I h | J d−h ) exists. Proof. The first part of Proposition 5 is proven along the same lines as the proof of Proposition 3 and is hence omitted here. Consider now a sequence of contaminated copulas satisfying, in addition, (21) . 
We need to show that (20) holds. To see this, note that, similarly as in (16) , one gets δ C d,m (u)/δ C d−h,m (u) = [(1 − α m ) δ C d (u) + α m δ B d,m (u)] / [(1 − α m ) δ C d−h (u) + α m δ B d−h,m (u)]. (22) Further note that, for all sufficiently large m and for all u ∈ (0, ε), δ C d−h,m (u) ≥ (1 − α m ) δ C d−h (u) and δ B d,m (u) ≤ u. (23) Combining (21) , (22) and (23) now yields that (for all sufficiently large m) δ C d,m (u)/δ C d−h,m (u) = δ C d (u)/δ C d−h (u) + α m O(1), where the O(1)-term does not depend on u. Thus, one can conclude that condition (20) of Proposition 5 is satisfied. An analogous result as the one stated in Proposition 5 can be stated for λ U . Schmid and Schmidt (see [5] (Sec. 3.3)) considered a generalization of tail coefficients based on a multivariate conditional version of Spearman's rho, which is defined as ρ g (C d ) = [ ∫ [0,1] d C d (u) g(u) du − ∫ [0,1] d Π d (u) g(u) du ] / [ ∫ [0,1] d M d (u) g(u) du − ∫ [0,1] d Π d (u) g(u) du ] (24) for some non-negative measurable function g, provided that the integrals exist. The choice g(u) = 1 [0,p] d (u), for p ∈ (0, 1], puts all weight on the lower tail region [0, p] d , and the multivariate tail dependence measure is defined as λ L,S (C d ) = lim p→0 + ρ 1 [0,p] d (C d ) = lim p→0 + ∫ [0,p] d C d (u) du / ∫ [0,p] d M d (u) du , (25) provided the existence of the limit; the integrals involving Π d are of smaller order and vanish from the limit. Similarly, they define λ U,S (C d ) = lim p→0 + ρ 1 [1−p,1] d (C d ), with the choice g(u) = 1 [1−p,1] d (u). Additionally, these coefficients are not equal to λ L , λ U , respectively, in the bivariate case, so we can consider them more as a different type of tail dependence coefficient rather than a generalization. Proposition 6. Schmid's and Schmidt's tail dependence measure λ L,S satisfies the normalization property (T 1 ), the permutation invariance property (T 3 ), and the addition property (T 4 ) with k d (λ L,S ) = (d + 2)/(2(d + 1)). Proof. Normalization property (T 1 ) and permutation invariance (T 3 ) follow from the normalization property and permutation invariance of Spearman's rho, see, for example, [16] . When adding an independent component, one gets ∫ [0,p] d+1 C d+1 (u) du = (p 2 /2) ∫ [0,p] d C d (u) du and ∫ [0,p] d+1 M d+1 (u) du = p d+2 /(d + 2), so that, using ∫ [0,p] d M d (u) du = p d+1 /(d + 1), one obtains λ L,S (C d+1 ) = [(d + 2)/(2(d + 1))] λ L,S (C d ). This finishes the proof. In order for the duality property (T 5 ) to hold, the upper version should rather be defined as λ * U,S (C d ) = λ L,S (C S d ). This seems to be more logical, since λ U,S (C d ) can only be expressed, after substituting into (25) , as a ratio of integrals over the region [1 − p, 1] d , which cannot be further simplified. It is easy to show that in the bivariate case (i.e., d = 2) the coefficients λ U,S (C d ) and λ * U,S (C d ) coincide. For a general dimension d > 2, however, they can differ. 
The continuity property (T 2 ) cannot be shown in full generality, but a continuity property is fulfilled in the special case of a sequence of contaminated copulas, as in (14) . As stated in (6) , bivariate tail coefficients for extreme-value copulas can be simply expressed while using the corresponding Pickands dependence function. Thus, tail dependence is fully determined by the Pickands dependence function A 2 at the point (1/2, 1/2). The range of values for A 2 is limited by max(w 1 , w 2 ) ≤ A 2 (w 1 , w 2 ) ≤ 1, which also gives us 1/2 ≤ A 2 (1/2, 1/2) ≤ 1, where the bivariate tail coefficient λ U gets larger when A 2 (1/2, 1/2) is closer to 1/2. On the other hand, A 2 (1/2, 1/2) = 1 means tail independence. Following this idea, and given that also for the d-dimensional Pickands dependence function A d associated to a copula C d one has 1/d ≤ A d (1/d, . . . , 1/d) ≤ 1, a multivariate tail coefficient can be based on A d (1/d, . . . , 1/d). After proper standardization, this leads to λ U,E (C d ) = (1 − A d (1/d, . . . , 1/d)) / (1 − 1/d) = d (1 − A d (1/d, . . . , 1/d)) / (d − 1). (28) Note that such a coefficient is equal to a translation of the extremal coefficient given in [17] or [7] and defined as θ E = d A d (1/d, . . . , 1/d), so that λ U,E (C d ) = (d − θ E )/(d − 1). Schlather and Tawn (see [18] ) give a simple interpretation of θ E , related to the number of independent variables that are involved in the corresponding d-variate random vector. Proposition 8. The multivariate tail dependence coefficient λ U,E in (28) satisfies the normalization property (T 1 ), the continuity property (T 2 ), the permutation invariance property (T 3 ), and the addition property (T 4 ) with k d (λ U,E ) = (d − 1)/d. Proof. Normalization (T 1 ) and permutation invariance (T 3 ) follow immediately from the definition of λ U,E . If lim m→∞ C d,m (u) = C d (u), ∀u ∈ [0, 1] d , then also lim m→∞ A d,m (w) = A d (w), ∀w ∈ ∆ d−1 , which proves the validity of (T 2 ). For X d+1 independent of (X 1 , . . . , X d ), we can use Example 1 and obtain (d + 1) A d+1 (1/(d + 1), . . . , 1/(d + 1)) = ℓ d+1 (1, . . . , 1) = ℓ d (1, . . . , 1) + 1 = d A d (1/d, . . . , 1/d) + 1, so that λ U,E (C d+1 ) = (d + 1 − θ E − 1)/d = ((d − 1)/d) λ U,E (C d ). Remark 1. The duality property (T 5 ) is not applicable, since the survival copula of an extreme-value copula does not have to be an extreme-value copula. 
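For the Gumbel-Hougaard family one has A d (1/d, . . . , 1/d) = d 1/θ−1 , so λ U,E admits a closed form. A short Python sketch (the function names are ours, for illustration):

```python
def gumbel_pickands_at_center(d, theta):
    # A_d(1/d, ..., 1/d) = d^(1/theta - 1) for the Gumbel-Hougaard copula, theta >= 1
    return d ** (1.0 / theta - 1.0)

def lambda_UE(d, theta):
    # lambda_{U,E} = (1 - A_d(1/d,...,1/d)) / (1 - 1/d) = (d - theta_E) / (d - 1),
    # where theta_E = d * A_d(1/d,...,1/d) is the extremal coefficient
    A = gumbel_pickands_at_center(d, theta)
    return (1.0 - A) / (1.0 - 1.0 / d)
```

For θ = 1 (independence) the coefficient is 0; as θ grows, it approaches 1 (comonotonicity); and for d = 2 it reduces to 2 − 2 1/θ , the bivariate λ U of the Gumbel-Hougaard copula.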
A common element of the multivariate tail dependence measures discussed in Sections 3.1-3.3 is that they focus on the extremal behavior of all d components of a random vector X. However, one could also be interested in knowing whether there is any kind of tail dependence present in the vector, that is, even for subvectors of X. An interesting observation can be made for tail dependence measures that satisfy property (T 4 ) with k d = 0 for every d ≥ 2. Assume that X and Y are independent random variables. Then any tail measure t 2 (C 2 ) would be zero for the random couple (X, Y) and, no matter which random component we add, the tail measure for the extended random vector would stay 0. In other words, for any such tail dependence measure, this leads to tail independence of the d-dimensional random vector (X, . . . , X, Y), no matter what d is. Considering tail dependence of subvectors would be of particular interest in this case. Suppose that we have a multivariate tail coefficient µ L,d that can be calculated for general dimension d ≥ 2. Suppose further that this coefficient only depends on the strength of tail dependence when all of the components of a random vector are simultaneously large or small. This is the case for all multivariate tail coefficients mentioned in Sections 3.1-3.3. Subsequently, we can introduce a tail coefficient given by µ L (C d ) = ∑ d j=2 w d,j (d choose j) −1 ∑ 1≤k 1 <···<k j ≤d µ L,j (C (k 1 ,...,k j ) j ), (29) where (d choose j) −1 ∑ 1≤k 1 <···<k j ≤d µ L,j (C (k 1 ,...,k j ) j ) can be interpreted as an average tail dependence measure per dimension, and where the weights w d,j ≥ 0 sum up to one, ∑ d j=2 w d,j = 1. This measure deals with a disadvantage of the current multivariate tail coefficients, which assign a value of 0 to copulas for which d − 1 components are highly dependent in their tails while the d-th component is independent of them. When dealing with possible stock losses, for example, this situation should also be captured by a tail coefficient. Recall that the weight w d,j corresponds to the importance given to the average tail dependence within all the j-dimensional subvectors of X. 
Because tail dependence in a higher dimension is more severe, as more extremes occur simultaneously, it is natural to assume w d,2 ≤ w d,3 ≤ · · · ≤ w d,d . However, such an assumption excludes other approaches to measuring tail dependence. For example, setting w d,2 = 1 and w d,j = 0 for j = 3, . . . , d would lead to the construction of a tail dependence measure as the average of all pairwise measures. If the underlying bivariate measure satisfies (T 1 ), (T 2 ), (T 3 ), and (T 5 ) with d = 2 only, these properties are carried over to the pairwise measure. Additionally, (T 4 ) can be shown similarly as in Proposition 1 in [16] . Despite possibly fulfilling the desirable properties, all of the higher-dimensional dependencies are ignored, which is a clear drawback of such a pairwise approach. In the sequel, we focus on the setting where w d,2 ≤ w d,3 ≤ · · · ≤ w d,d . What happens in the case of the addition of an independent component (property (T 4 )) is not so straightforward, since the weights differ depending on the overall dimension d. The addition of an independent component increases the dimension and, thus, possibly changes all of the weights. However, one could try to come up with a weighting scheme that guarantees the fulfilment of property (T 4 ). Consider X d+1 independent of (X 1 , . . . , X d ). Suppose that the input tail dependence measures µ L,j satisfy property (T 4 ), with k j = k j (µ L,j ) for j = 2, . . . , d. First, we express µ L for the random vector (X 1 , . . . , X d+1 ), as µ L (C d+1 ) = ∑ d+1 j=2 w d+1,j (d+1 choose j) −1 ∑ 1≤k 1 <···<k j ≤d+1 µ L,j (C (k 1 ,...,k j ) j ). (30) Now, using property (T 4 ) in (30) , together with the fact that, for index j = 2, the summands involving X d+1 equal µ L,2 (Π 2 ) = 0 and, thus, can be omitted, one obtains an expression that is equal to µ L (C d ) with weights given as w̃ d,j = (d choose j) [ w d+1,j (d+1 choose j) −1 + k j w d+1,j+1 (d+1 choose j+1) −1 ], for every j = 2, . . . , d. A sufficient criterion for the fulfillment of property (T 4 ) would thus be to have w̃ d,j = c w d,j with a constant c ∈ [0, 1] not depending on j, for every j = 2, . . . , d. (31) Knowing the values k j , w d,j , w d+1,j , for j = 2, . . . , d, and w d+1,d+1 , one can check (31) . 
One rather general method of weight selection can then be as follows. Suppose that one wants the proportion of the weights w d,d 1 and w d,d 2 corresponding to two subdimensions d 1 and d 2 not to depend on the overall dimension d. This can be achieved by setting recursively w d+1,j = r d+1 w d,j for j = 2, . . . , d and w d+1,d+1 = 1 − ∑ d j=2 w d+1,j = 1 − r d+1 . The initial condition is obviously given as w 2,2 = 1. To obtain w d,2 ≤ w d,3 ≤ · · · ≤ w d,d , one needs that r d ∈ [0, 1/2] for every d = 3, . . . . Values of r d closer to 0 give more weight to the d-th dimension, values close to 1/2 limit its influence. If we further assume that r d = r, that is, r d does not depend on d, this further simplifies to w d,2 = r^{d−2} and w d,j = r^{d−j} (1 − r) for j = 3, . . . , d, for every d = 3, . . . . We next check the condition in (31) for this particular weight selection. Condition (31) can be rewritten in terms of r and the constants k j ; we refer to the resulting condition as (32). If k j = 1 for every j, as in one case of Li's tail dependence parameter, condition (32) allows for only one selection of r, namely r = 0. On the other hand, if k j = 0 for every j, r can take any value in [0, 1/2]. Looking from the other perspective, if r = 1/2, then condition (32) is satisfied only for particular values of the constants k j . Let us recall that these conditions can only be seen as sufficient, not necessary. A precise study of what happens when an independent component is added requires knowledge of the weighting scheme and knowledge of the underlying input tail dependence measure. In summary, the above discussion reveals that a measure that is able to detect tail dependence not only in a random vector as a whole, but also in lower-dimensional subvectors, can be constructed. A simple and interpretable weighting scheme proposed above can be used, such that several desirable properties of the tail dependence measure are guaranteed. For the convenience of the reader, we list in Table 1 all of the discussed tail dependence measures, with reference to their section number, and indicate which properties they satisfy.
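The recursive weighting scheme can be sketched as follows; the function builds the weights directly from the recursion w_{2,2} = 1, w_{d+1,j} = r·w_{d,j}, w_{d+1,d+1} = 1 − r stated above (a minimal sketch, assuming a constant ratio r).

```python
def subvector_weights(d, r):
    """Weights w_{d,j}, j = 2, ..., d, generated by the recursion in the
    text: w_{2,2} = 1; w_{d+1,j} = r * w_{d,j} for j <= d; and
    w_{d+1,d+1} = 1 - r. Returns a dict j -> w_{d,j}."""
    w = {2: 1.0}
    for dim in range(3, d + 1):
        w = {j: r * wj for j, wj in w.items()}  # rescale existing weights
        w[dim] = 1.0 - r                        # weight of the new top dimension
    return w
```

By construction the weights sum to one, and for r ≤ 1/2 they are nondecreasing in j, as required in the text.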
Frahm's extremal dependence coefficient
In Section 3, the focus was on properties (T 1 )-(T 5 ). In this section, we aim at exploring some further properties that might be of special interest. We, in particular, investigate the following types of properties. Here, t d (C d ) denotes a multivariate tail coefficient for C d ∈ Cop(d). When needed, we specify whether it concerns a lower or upper tail coefficient, referring to them as t L,d (C d ) and t U,d (C d ), respectively. Expansion property (P 1 ). Given is a random vector X = (X 1 , . . . , X d ) with copula C d . One adds one random component X d+1 to X. Denote the copula of the expanded random vector (X , X d+1 ) by C d+1 . How does t d+1 (C d+1 ) compare to t d (C d )? Monotonicity property (P 2 ). Consider two copulas C d,1 , C d,2 ∈ Cop(d). Does the following hold? (i) If C d,1 (u) ≤ C d,2 (u) for u in some neighborhood of 0, then t L,d (C d,1 ) ≤ t L,d (C d,2 ). (ii) If C d,1 (u) ≤ C d,2 (u) for u in some neighborhood of 1, then t U,d (C d,1 ) ≤ t U,d (C d,2 ). Convex-combination property (P 3 ). Suppose that the copula C d can be written as C d = αC d,1 + (1 − α)C d,2 for α ∈ [0, 1], and C d,1 , C d,2 ∈ Cop(d). What can we say about the comparison between t d (C d ) and αt d (C d,1 ) + (1 − α)t d (C d,2 )? For extreme-value copulas, we look into geometric combinations instead. The logic behind property (P 1 ) comes from the perception of a tail coefficient as a probability of extreme events of the components of the random vector happening simultaneously. Thus, when another component is added, the probability of having extreme events cannot increase. However, there is no such limitation from below, and adding a component can immediately lead to a decrease of the coefficient to zero. In the next subsections, we briefly discuss these properties for the multivariate tail coefficients discussed in Section 3. For Frahm's coefficient, it holds that L (C d+1 ) ≤ L (C d ) and analogously for the upper coefficient. This result can be found in Proposition 2 of [3].
For Li's tail dependence parameters, we need to distinguish two cases. If we add the new component to the set I h , then the parameter cannot increase. However, if the component is added to the set J d−h , no relationship can be shown in general. A special situation occurs when the component X d+1 added to the set J d−h is just a duplicate of a component that is already included in J d−h . Subsequently, the parameter obviously remains unchanged. For Schmid's and Schmidt's tail dependence measures, one cannot say, in general, how the coefficient λ L,S (C d+1 ) behaves when compared to λ L,S (C d ). As can be seen from (24), the integral expression decreases with increasing dimension d but, at the same time, the normalizing constant increases with d. For the tail coefficient for extreme-value copulas, λ U,E (C d ), it follows from Example 7 in Section 6 that the addition of another component can lead to an increase in this coefficient. See, in particular, also Figure 5. Concerning the monotonicity property (P 2 ), it is easily seen that (P 2 )(i) holds for Frahm's lower dependence coefficient L (C d ) if we additionally assume that the survival functions satisfy C̄ d,1 (u) ≤ C̄ d,2 (u) for u in some neighborhood of 0. Similarly, we need to additionally assume that C̄ d,1 (u) ≤ C̄ d,2 (u) for u in some neighborhood of 1 in order to show that (P 2 )(ii) holds. For Li's tail dependence parameters, property (P 2 ) does not hold in general. This is illustrated via the following example in case d = 4. Consider a random vector (U 1 , U 2 , U 3 , U 4 ) with uniform marginals and with distribution function a Clayton copula with parameter θ > 0 (see Example 6), given by C 4,1 (u) = (u 1 −θ + u 2 −θ + u 3 −θ + u 4 −θ − 3) −1/θ (see (39)). We denote this first copula by C 4,1 . Note that the random vector (U 1 , U 2 , U 3 ) has as joint distribution a three-dimensional Clayton copula with parameter θ, which we denote by C 3 . The vector (U 1 , U 2 , U 4 ) has the same joint distribution C 3 .
Next, we consider the copula of the random vector (U 1 , U 2 , U 3 , U 3 ) that we denote by C 4,2 . One has that C 4,2 (u) ≥ C 4,1 (u) for all u ∈ [0, 1] 4 . In Example 6, we calculate Li's lower tail dependence parameter for a d-variate Clayton copula; the resulting expression is given in (41). Applying this in the setting of the current example leads to an inequality between the two parameters that contradicts monotonicity property (P 2 )(i). From the definition of Schmid's and Schmidt's tail dependence measure, it is immediate that the monotonicity property (P 2 ) holds. For the tail coefficient for extreme-value copulas, λ U,E defined in (28), the monotonicity property (P 2 ) holds. To see this, recall from (3) that, for an extreme-value copula C d,1 , we can express its stable tail dependence function ℓ d,1 in terms of C d,1 and, hence, using that C d,1 ≤ C d,2 , it follows that ℓ d,1 ≥ ℓ d,2 . The same inequality holds for the Pickands dependence function A d,1 , which is the restriction of the stable tail dependence function ℓ d,1 to the unit simplex. Hence, C d,1 ≤ C d,2 also implies that A d,1 ≥ A d,2 . The monotonicity property then follows from the definition of the tail coefficient in (28). Consider a copula C d that is a convex combination of two copulas C d,1 and C d,2 , i.e., C d = αC d,1 + (1 − α)C d,2 for α ∈ [0, 1]. For the survival functions, we then also have C̄ d = αC̄ d,1 + (1 − α)C̄ d,2 . Before stating the results for the various multivariate tail coefficients, we first make the following observation, used as (34) below: for α, a, b, c, d ∈ [0, 1], the ratio formed from convex combinations of numerators and denominators can be compared with the convex combination of the individual ratios. Frahm's lower extremal dependence coefficient for the copula C d is a ratio of this form, and the comparison with α L (C d,1 ) + (1 − α) L (C d,2 ) follows from (34). The same conclusion can be found for Frahm's upper extremal dependence coefficient U . Li's lower tail dependence parameter for C d , a convex mixture of copulas, is again such a ratio, and an application of (34) gives the corresponding comparison, provided the individual parameters are suitably ordered. For an extreme-value copula, it does not make sense to look at convex combinations of two extreme-value copulas, since it cannot be shown, in general, that such a convex combination would again be an extreme-value copula.
A more natural way to combine two extreme-value copulas C d,1 and C d,2 is by means of a geometric combination, i.e., by considering C d = C α d,1 C 1−α d,2 , with α ∈ [0, 1]. In, for example, Falk et al. [19] (p. 123) it was shown that a convex combination of two Pickands dependence functions is also a Pickands dependence function. Denoting by A d,1 and A d,2 , the Pickands dependence functions of C d,1 and C d,2 , respectively, it then follows from (33) that the Pickands 2 . From this it is seen that C d is again an extreme-value copula. For the tail dependence coefficient for extreme-value copulas, it thus holds that i.e., the coefficient λ U,E of a geometric mean of two extreme-value copulas is equal to the corresponding convex combination of the coefficients of the concerned two copulas. A natural question to examine is an influence of increasing dimension on possible multivariate tail dependence. If one restricts to the class of Archimedean copulas, several results can be achieved, despite that similar problems with interchanging limits occur while studying the continuity property (T 2 ). First, let us formulate a useful lemma that describes the behavior of the main diagonal of Archimedean copulas when the dimension increases. Lemma 2. Let {C d } be a sequence of d-dimensional Archimedean copulas with (the same) generator ψ. Then for u ∈ [0, 1) and v ∈ (0, 1] Proof. The proof is along the same lines as the proof of Proposition 9 in [16] . This lemma can be used in the following statements that focus on individual multivariate tail coefficients. The first one to be examined is the Frahm's extremal dependence coefficient L . . Proof. The statement follows by the direct application of Lemma 2, since then An analogous result could be stated for U . The condition on interchanging limits is, in general, difficult to check. However, we discuss some examples in which the condition can be checked. A first example is that of the independence copula 1) . 
Consequently, in this example, the condition of interchanging limits holds. A second example is the Gumbel-Hougaard copula also considered in Example 7 in Section 6. For this copula it can be seen that, as in the previous example, the two concerned limits (when u → 0 and when d → ∞) are zero and, hence, interchanging the limits is also valid in this example. Proposition 10 further shows that if we construct estimators (based on values of u close to 0 or close to 1) of the limits above for Archimedean copulas in high dimensions, these will be very close to 0. Proof. Using Lemma 2, we obtain from which the statement of this proposition follows. An analogous statement could be formulated for λ U . What can one learn from the results in this section? Archimedean copulas may be not very appropriate in high dimensions, because of their symmetry, but they are a convenient class of copulas to use. It is good to be aware though that, when the dimension increases, the tail dependence of Archimedean copulas vanishes, at least from the perspective of L , λ and their upper tail counterparts. Obtaining similar results for different classes of copulas would also be of interest, for example, for extreme-value copulas with restrictions on Pickands dependence function. However, this is complicated by the fact that, unlike Archimedean copulas, extreme-value copulas do not share a structure that could be carried through different dimensions. Some insights into this behavior are studied using the examples given in Section 6. This section includes examples on both Archimedean and extreme-value copulas, as well as examples outside these classes. Example 4. Farlie-Gumbel-Morgenstern copula. Let C d be a d-dimensional Farlie-Gumbel-Morgenstern copula defined as where the parameters have to satisfy the following 2 d conditions This copula is neither an Archimedean nor extreme-value copula. We first consider Frahm's extremal dependence coefficients L and U . 
From (35), up to a constant, C d (u1) ≈ u d when u ≈ 0. Further, plugging (35) into (2) gives that 1 − C̄ d (u1) behaves like a polynomial u − u 2 + . . . when u ≈ 0. Thus, L (C d ) = 0, because the polynomial in the numerator converges to zero faster than the polynomial in the denominator. Similarly, one obtains U (C d ) = 0 since, again, the corresponding limits contain ratios of polynomials such that the polynomials in the numerators converge to zero faster than the polynomials in the denominators. To obtain λ L,S , the integral [0,p] d C d (u) du needs to be calculated. Consider now the special case when the only non-zero parameter is α = α 1,...,d ; the integral can then be evaluated explicitly. Going back to the general C d , we can notice that the resulting integral is always a polynomial in p, with the lowest power being 2d, and thus λ L,S (C d ) = 0. A similar calculation leads to λ U,S (C d ) = 0. Some further calculations (not presented here) also show that λ * U,S (C d ) = 0. From the perspective of all the above tail dependence coefficients, the Farlie-Gumbel-Morgenstern copula does not possess any tail dependence. Example 5. Cuadras-Augé copula. Let C d be a d-variate Cuadras-Augé copula, that is, a copula of the form C d (u) = [min(u 1 , . . . , u d )] θ (u 1 · · · u d ) 1−θ with θ ∈ [0, 1]. The Cuadras-Augé copula combines the comonotonicity copula M d with the independence copula Π d . If θ = 0, then C d becomes Π d . If θ = 1, then C d becomes M d . We again start with calculating L and U . From (2), we find the required diagonal and survival expressions, and Frahm's lower extremal dependence coefficient L is thus equal to 0 for θ ∈ [0, 1), since in that case the polynomial in u in the numerator converges to zero faster than the polynomial in the denominator. For U , using L'Hospital's rule leads to an explicit expression in θ and d. These values coincide with those calculated in [20] for a more general group of copulas. One can also notice that lim d→∞ U (C d ) = 0 for θ ∈ [0, 1). In other words, if the parameter θ is smaller than 1, any sign of tail dependence disappears when the dimension increases. If θ = 1, then U (C d ) = 1 for every d ≥ 2, which is no surprise since, in that case, C d is the comonotonicity copula M d .
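The Cuadras-Augé copula can be sketched numerically; the geometric-mixture form C_d = M_d^θ Π_d^{1−θ} used below is our reading of the (garbled) definition, consistent with the stated limiting cases θ = 0 and θ = 1.

```python
import numpy as np

def cuadras_auge(u, theta):
    """d-variate Cuadras-Auge copula, taken here (an assumption consistent
    with the text) as the geometric mixture
    C_d(u) = M_d(u)**theta * Pi_d(u)**(1 - theta)
    of the comonotonicity and independence copulas."""
    u = np.asarray(u, dtype=float)
    return float(np.min(u) ** theta * np.prod(u) ** (1.0 - theta))
```

On the diagonal this gives C_d(u, . . . , u) = u^{θ + d(1−θ)}, so C_d(u1)/u → 0 as u → 0 whenever θ < 1, in line with the absence of lower tail dependence noted above.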
This behavior is illustrated in Figure 3 that details the influence of the parameter θ on the speed of decrease of U (C d ) when d increases. A Cuadras-Augé copula is an exchangeable copula, which is invariant with respect to the order of its arguments. Therefore, when calculating Li's tail dependence parameters, only the cardinality of the index sets I h and J d−h plays a role. Subsequently, and by using L'Hospital's rule If θ = 1, then λ (24), we first need to calculate the integral [0,p] d C d (u) du. A straightforward calculation gives that which equals 1 when θ = 1 and 0 when θ ∈ [0, 1). Schmid's and Schmidt's lower tail dependence measure thus equals Frahm's lower extremal dependence coefficient L as well as Li's lower tail dependence parameter λ . Determining Schmid's and Schmidt's upper tail dependence measure λ U,S (C d ) in (25) is less straightforward. This dependence measure involves three integrals. Because its expression concerns the limit when p → 0, it suffices to investigate the behavior of the numerator and the denominator of (25) for p close to 0. From (27) it is easy to see that, for p close to 0, and, hence, the denominator of (25) behaves, for p close to 0, as For the integral [1−p,1] d C d (u) du, note that, since C d is an exchangeable copula, we can divide the integration domain [1 − p, 1] d into d parts depending on which argument from u 1 , . . . , u d is minimal. The integrals over each of the d parts are equal. We get where the approximation, valid for p close to 0, is based on a careful evaluation of the integral. For brevity, we do not include the details here. Consequently the numerator of (25) behaves, for p close to 0, as Combining (37) and (38) reveals that λ U,S (C d ) = θ, for all d ≥ 2. Other calculations (omitted here for brevity) lead to λ * U,S (C d ) = θ. A Cuadras-Augé copula is also an extreme-value copula. This can be seen through the following calculation, where the notation u (1) = min(u 1 , . . . , u d ) is used. 
One gets and, thus, C d is an extreme-value copula with Pickands dependence function This allows for calculating the tail coefficient for extreme-value copulas, λ U,E , as In case of the Cuadras-Augé copula, tail dependence measured by λ U,E does not depend on the dimension d. For illustration, the values of λ U,E (C d ) are included in Figure 3 . One can see that U and λ U,E behave very differently, both in terms of shapes and values. Let C d be a d-variate Clayton family copula defined as for θ > 0. The Clayton copula is an Archimedean copula and the behavior of its generator is studied in Example 2. For Frahm's lower extremal dependence coefficient, either using (12) or by factoring out as below, one obtains whereas, for Frahm's upper extremal dependence coefficient, using (13) with the derivative of the Clayton generator ψ (t) = −(1 + θt) −(1+θ)/θ , one finds Analytical calculation of lim d→∞ L (C d ) is not possible; however, insight can be gained by plotting L (C d ) as a function of the dimension d. This is done in Figure 4 . From the plot it is evident that L (C d ) decreases when the dimension increases. However, for larger parameter values, the decrease seems to be slow. A Clayton copula is also an exchangeable copula and, thus, when calculating Li's tail dependence parameters, only the cardinality of the index sets I h and J d−h comes into play. Then If, as in Proposition 11, the cardinality of J d−h is kept constant (equal to h * ) when the dimension increases, then In fact, in this example, even a milder condition is sufficient for achieving (42 Spearman's rho for the Clayton copula cannot be explicitly calculated and, thus, the values of λ L,S and λ U,S are unknown. Gumbel-Hougaard copula. Let C d be a d-variate Gumbel-Hougaard copula, defined as where θ ≥ 1. The Gumbel-Hougaard copula is the only copula (family) that is both an extreme-value and an Archimedean copula as proved in [21] (Sec. 2). 
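For the Clayton copula just discussed, the decrease of Frahm's lower coefficient with the dimension (Figure 4) can be checked numerically. The sketch below assumes that Frahm's lower coefficient is the limit, as u → 0, of C_d(u1)/(1 − C̄_d(u1)), consistent with the estimator used in Section 7, and evaluates this ratio at a small threshold; the survival function of the exchangeable copula is obtained by inclusion-exclusion over the margin diagonals.

```python
from math import comb

def clayton_diag(u, k, theta):
    """Diagonal of the k-margin of a Clayton copula: C_k(u, ..., u)."""
    if k == 0:
        return 1.0
    return (k * (u ** (-theta) - 1.0) + 1.0) ** (-1.0 / theta)

def frahm_lower_clayton(d, theta, u=1e-4):
    """Numerical approximation of Frahm's lower coefficient as the ratio
    C_d(u1) / (1 - survival(u1)) at a small threshold u."""
    # survival function at (u, ..., u) by inclusion-exclusion
    surv = sum((-1.0) ** k * comb(d, k) * clayton_diag(u, k, theta)
               for k in range(d + 1))
    return clayton_diag(u, d, theta) / (1.0 - surv)
```

For θ = 2, the approximation decreases from roughly 0.40 at d = 3 to roughly 0.32 at d = 4, illustrating the expansion property (P 1 ).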
The behavior of its Archimedean generator is studied in Example 3. Note that θ = 1 corresponds to the independence copula Π d and the limiting case θ → ∞ corresponds to the comonotonicity copula M d . As expected (see (10) ), for an extreme-value copula which is not the comonotonicity copula, the Frahm's lower extremal dependence coefficient is since the polynomial in u in the numerator converges to zero faster than the polynomial in the denominator. For the Frahm's upper extremal dependence coefficient, by using (13) with the derivative of the Gumbel-Hougaard generator ψ (t) = −1 θ exp(−t 1/θ )t 1/θ−1 , one obtains Analytical calculation of lim d→∞ U (C d ) is not possible; however, insights can be gained by plotting U (C d ) as a function of dimension d. This is done in Figure 5 . It is evident that U (C d ) decreases when the dimension increases; but, the decrease seems to be slow for larger parameter values. When comparing Figures 4 and 5 , one might come to a conclusion that L for the Clayton copula with parameter θ is equal to U for the Gumbel-Hougaard copula with the same parameter θ. Despite their similarity, that is not true, as can be easily checked by calculating both of the quantities for any pair (d, θ). When calculating Li's tail dependence parameters, one uses that the Gumbel-Hougaard copula is also an exchangeable copula and, thus, only the cardinality of the index sets I h and J d−h plays a role. Then This function of parameter θ, dimension d and cardinality h is rather involved and it is depicted in Figure 6 for different parameter choices and also two different selections of h. In one of the cases, h = d − 1 and thus corresponds to h * = 1 in Proposition 11. In the other case, the number of components on which we condition h * = h * (d) is chosen to increase with d, specifically h * (d) = Spearman's rho for a Gumbel-Hougaard copula cannot be calculated explicitly and thus the values of λ L,S and λ U,S are unknown. 
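The coefficient λ U,E for the Gumbel-Hougaard copula can be evaluated directly. The sketch below assumes A_d(1/d, . . . , 1/d) = d^{1/θ−1} for the Gumbel-Hougaard Pickands dependence function, together with the relation λ U,E = d(1 − A_d(1/d, . . . , 1/d))/(d − 1) implied by the extremal-coefficient identity quoted in Section 8; both are our reading of the garbled formulas.

```python
def lambda_ue_gumbel(d, theta):
    """lambda_{U,E} for a d-variate Gumbel-Hougaard copula (theta >= 1),
    using A_d(1/d, ..., 1/d) = d**(1/theta - 1) and the (assumed) relation
    lambda_{U,E} = d * (1 - A_d(1/d, ..., 1/d)) / (d - 1)."""
    return d * (1.0 - d ** (1.0 / theta - 1.0)) / (d - 1.0)
```

At d = 2 this reduces to the classical bivariate upper tail coefficient 2 − 2^{1/θ} of the Gumbel-Hougaard copula, and for θ > 1 it increases towards 1 as d grows, matching the limit lim d→∞ λ U,E (C d ) = 1 noted below.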
Pickands dependence function A d of a Gumbel-Hougaard copula is Note that lim d→∞ λ U,E (C d ) = 1. From our perspective, such a behavior is rather counter-intuitive and should be taken into account when using this tail coefficient. An overview of the results obtained in the illustrative examples is given in Table 2 . Before we move to the estimation of tail coefficients itself, we introduce the setting and notation for the estimation. Let X 1 , . . . , X n be a random sample of a d-dimensional random vector with copula C d where X i = (X 1,i , . . . X d,i ) for i ∈ {1, . . . n}. Throughout this section, the dimension d of a copula C d is arbitrary but fixed and, thus, for simplicity of notation, we omit the subscript d in C d . We consider the empirical copula where Similarly, we define the empirical survival function as For extreme-value copulas, one can take advantage of estimation methods for the Pickands dependence function or the stable tail dependence function. The estimation of these was discussed, for example, in [22] [23] [24] , or [7] . We briefly discuss the estimator for the Pickands dependence function, as proposed in [7] . The multivariate w-madogram, as introduced in [7] , is, for w ∈ ∆ d−1 , defined as where u 1/w j = 0 by convention if w j = 0 and 0 < u < 1. The authors in [7] further show a relation between Pickands dependence function and the madogram given by . This leads to the following estimator of Pickands dependence function However, the estimator A MD n is not a proper Pickands dependence function. To deal with this problem, they propose an estimator based on Bernstein polynomials that overcomes this issue and results into an estimator, which is a proper Pickands dependence function. The estimation of the Frahm's extremal dependence coefficients has not been discussed in the literature so far. However, a straightforward approach is to consider empirical approximations of the quantities in definition (7), i.e., L = C n (u n , . . . 
, u n ) where {u n } is a sequence of positive numbers converging to zero. The choice of u n is crucial for the performance of the estimator. Small values of u n provide an estimator with low bias but large variance, large values of u n provide an estimator with large bias but small variance. Note that, in applications, it is useful to think about u n as u n = k n n+1 , where k n stands for the numbers of extreme values used in the estimation procedure. Alternatively, if the underlying copula is known to be an extreme-value copula, the estimator can be based on the estimator of Pickands dependence function plugged into (11) . This results in the following estimator with w = 1/j if ∈ {k 1 , . . . , k j } and w = 0 otherwise. Similarly as for Frahm's extremal dependence coefficients, one can introduce the following estimators Also in this case, one can make use of the empirical copula (45). Recall the definition of λ L,S in (24), and consider p small. More precisely, let p n be a small positive number. Subsequently, one can calculate The estimator of λ L,S that could then be considered is of the form However, this quantity does not provide the value 1 for a sample from a comonotonicity copula. See the related discussion in [25] . This problem increases, while p n gets smaller. Thus, we propose to use an estimator defined as where the denominator is based on estimating [0,p] d M d (u) du using (46) and the fact that for a sample from a comonotonicity copula U 1,i = · · · = U d,i for every i ∈ {1, . . . , n} almost surely. Analogous arguments lead to an estimator of λ * U,S , as defined in (26), given by Because coefficient λ U,E , in (28), is a function of Pickands dependence function A d , estimation can again be based on estimation of A d . For example, the madogram estimator A MD n can be used, which results in the following estimator The consistency results for the suggested estimators can be found in the following propositions. 
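The plug-in estimation of Frahm's lower coefficient via the empirical copula can be sketched as follows; this is an illustrative sketch of the empirical approach described above, not the authors' code, with the threshold u playing the role of u_n = k_n/(n + 1).

```python
import numpy as np

def frahm_lower_hat(x, u):
    """Plug-in estimator C_n(u, ..., u) / (1 - bar{C}_n(u, ..., u)) of
    Frahm's lower extremal dependence coefficient, built from
    pseudo-observations.

    x : (n, d) array of observations
    u : threshold in (0, 1), e.g. u = k / (n + 1)
    """
    x = np.asarray(x, dtype=float)
    n = x.shape[0]
    # pseudo-observations: component-wise ranks rescaled to (0, 1)
    pseudo = (np.argsort(np.argsort(x, axis=0), axis=0) + 1) / (n + 1)
    c_n = np.mean(np.all(pseudo <= u, axis=1))    # empirical copula at (u, ..., u)
    cbar_n = np.mean(np.all(pseudo > u, axis=1))  # empirical survival function
    return float(c_n / (1.0 - cbar_n))
```

For a comonotone sample the estimator equals 1 for any threshold, while for an independent sample it is close to 0, reflecting the trade-off in the choice of u_n discussed above.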
Suppose that C d is a d-variate extreme-value copula. Subsequently, the estimators MD U and λ MD U,E are strongly consistent. Proof. The statement of the proposition follows by Theorem 2.4(b) in [7] , which states that Proposition 13. Suppose that u n , p n ∈ (n −δ , n −γ ) for some 0 < γ < δ < 1. (i) Then L and U are weakly consistent. Then λ L,S and λ * U,S are weakly consistent. (iii) Further suppose that (n C J d−h (u n 1)) → ∞. Subsequently, the following implications hold. is weakly consistent. Proof. We will only deal with the estimators of the lower dependence coefficients L , λ L,S and λ . The estimators of the upper dependence coefficients can be handled completely analogously. With the help of (A.22) of [26] , one gets that for each β < 1 2 U j,i = U j,i + U This, together with Lemma A3 in [27] (see also (A.12) in [26] ), implies that, for each ε > 0 with probability arbitrarily close to 1 for all sufficiently large n, it holds that Subsequently, conditionally on (47) and with the help of Chebyshev's inequality, one gets that Analogously, also As ε > 0 is arbitrary, one can combine (49) and (50) to deduce that C n (u n 1) = C(u n 1) + o P (u n ). Completely analogously with the help of (2), one can show that Further note that 1 − C(u n 1) = P(U min ≤ u n ) ≥ P(U 1 ≤ u n ) = u n . Now combining (51), (52) and (53) yields that L = C n (u n 1) 1 − C n (u n 1) = C(u n 1) + o P (u n ) 1 − C(u n 1) + o P (u n ) = C(u n 1) 1 − C(u n 1) First of all, note that it is sufficient to show that Further, it is straightforward to bound To prove the weak consistency of λ , it is sufficient to show that C n (u n 1) − C(u n 1) We start with the second convergence. Similarly, as in (48) for each ε > 0 with probability arbitrarily close to 1 for all sufficiently large n, one can bound Now, by the assumption in (iii), the ratios and can be made arbitrarily close to 1 for ε close enough to zero and n large enough. 
Further, by Chebyshev's inequality and, similarly, one can show also To show the first convergence in (57), one can proceed as in (48) (exploiting (47)) and arrive at C n (u n 1) − C(u n 1) Now, the second term on the right-hand side of the last inequality can be rewritten as which, thanks to the assumptions of the theorem and the existence of λ , can be made arbitrarily small by taking ε small enough and n sufficiently large. As an analogous lower bound can be derived for C n (u n 1)−C(u n 1) , one can conclude that the first convergence in (57) also holds. In this section, we illustrate the practical use of the multivariate tail coefficients via a real data example. The data concern stock prices of companies that are constituents of the EURO STOXX 50 market index. EURO STOXX 50 index is based on the largest and the most liquid stocks in the eurozone. Daily adjusted prices of these stocks are publicly available on https://finance.yahoo.com/ (downloaded 19 March 2020). The selected time period is 15 years, starting on 18 March 2005 and ending on 18 March 2020. Note that this period covers both the global financial crisis 2007-2008, as well as the sharp decline of the markets that was caused by COVID-19 coronavirus pandemic in early 2020. All the calculations are done in the statistical software R [28] . The R codes for the data application, written by the authors, are available at https://www.karlin.mff.cuni.cz/~omelka/codes.php. The preprocessing of the data was done, as follows. The stocks are traded on different stock exchanges and thus might differ in trading days. The union of all trading days is used and missing data introduced by this method are filled in by linear interpolation. No data were missing on the first or the last day of the studied time range. Negative log-returns are calculated from the adjusted stock prices and ARMA(1,1)-GARCH(1,1) is fitted to each of the variables (stocks), similarly as for example in [29] . 
We also refer therein for detailed model specification. Fitting ARMA(1,1)-GARCH(1,1) model to every stock does not necessarily provide the best achievable model, but residual checks show that the models are adequate. The standardized residuals obtained from these univariate models are used as the final dataset for calculating various tail coefficients. The total number of observations is n = 3847. Table 3 summarizes the stocks used for the analysis. It is of interest here to discuss tendency of extremely low returns happening simultaneously, which translates into calculating upper tail coefficients while working with negative log-returns. This allows us to use also the methods assuming that the data are coming from an extreme-value copula. Six different settings are considered: stocks from Group 1 (G1), from Group 2 (G2), from Group 3 (G3), from G1 and G2, from G1 and G3, and finally stocks from G2 and G3. The dimension d is equal to 3 for the first three settings and equal to 6 for the last three settings. Six different estimators are considered: U , MD U , λ * U,S , λ U,E , and λ with two different selections of the conditioning sets I h and J d−h . In one case, h * = d − h = 1 and we condition on only one variable. The specific choice of that one variable does not impact the result, as follows from (19) . The analysis with the conditioning on only one variable shows how the rest of the group is affected by the behavior of one stock. In the other case, we condition on all of the stocks, except for the one with largest market capitalization within the group. This analysis indicates how the largest player is affected by the behavior of the rest of the group. The estimators that are functions of the amount of data points k (recall from Section 7.2 that a common choice is u n = k n /(n + 1), with k n = k here) do not provide one specific estimate but rather a function of k. A selection of in some sense the best possible k requires further study. 
Intuitively, one should look at the lowest k for which the estimator is not too volatile. This idea was used in [30] for estimating bivariate tail coefficients by finding a plateau in the considered estimator as a function of k. The results of the analysis are summarized in Figures 7 and 8 and Table 4. Examining Figure 7, it seems that k around 100 would be a reasonable choice for the tail coefficients of Frahm, and of Schmid and Schmidt, for these data. For Li's tail dependence parameters, it appears from Figure 8 that, when conditioning on more than one variable, a larger value of k is needed, for example k = 200. For the tail dependence measurements for extreme-value copulas, we include the coefficients λ U,E and the original extremal coefficient θ E (see [17]), where the latter can be estimated from the former, since θ E = d(1 − ((d − 1)/d) λ U,E ). Recall that the various tail coefficient estimators estimate different quantities and, therefore, their values should not be compared to each other. However, a few general conclusions can be made based on Figures 7 and 8. Clearly, all the studied groups possess a certain amount of tail dependence. The combinations of groups also seem to be tail dependent, although the strength of dependence is smaller. Groups G2 and G3 seem to be slightly more tail dependent than G1, which suggests that sharing an industry influences tail dependence more than sharing a geographical location. The estimator of Frahm's extremal dependence coefficient in Figure 7a,b is clearly the smallest of all the estimators, in line with its "strict" definition in (7). The dots, representing the estimates under the assumption of the underlying copula being an extreme-value copula, are greater than the fully non-parametric estimators. This indicates that assuming an underlying extreme-value copula might not be appropriate.
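The plateau idea of [30] mentioned above can be sketched with a simple heuristic; the rule below (choosing the first window of k-values over which the estimates vary the least) is our own simplification, not the exact procedure of that reference.

```python
import numpy as np

def plateau_k(estimates, window=10):
    """Simple plateau heuristic: among all windows of `window` consecutive
    k-values, pick the first one in which the estimates vary the least, and
    return the k at its start together with the averaged estimate.

    estimates : 1-D array; estimates[i] is the estimate obtained with k = i + 1
    """
    e = np.asarray(estimates, dtype=float)
    # spread (max - min) of the estimates within each sliding window
    spreads = np.array([e[i:i + window].max() - e[i:i + window].min()
                        for i in range(len(e) - window + 1)])
    i0 = int(np.argmin(spreads))
    return i0 + 1, float(e[i0:i0 + window].mean())
```

On a synthetic curve that first decreases, then stays flat at 0.4, then decreases again, the heuristic locates the flat region and returns its level.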
The estimator of Schmid's and Schmidt's tail dependence measure in Figure 7c,d is much smoother as a function of k than the other estimators. However, it tends to move towards 0 or 1 for very low k. The estimator of Li's tail dependence parameter in Figure 8a suggests that, for all three groups, the probability of two stocks having an extremely low return given that the third stock has an extremely low return is approximately 0.2. The estimator in Figure 8d, on the other hand, suggests that, in all three group combinations, the largest company is heavily affected if the remaining five stocks have extremely low returns. For the group combinations G1 + G3 and G2 + G3, the estimated tail coefficient is, in fact, equal to 1. The values of λ U,E and θ E are presented in Table 4. One can notice that these measures also suggest that groups G2 and G3 are slightly more tail dependent than G1 or, in other words, that they likely contain fewer independent components (see [18]).

Conflicts of Interest: The authors declare no conflict of interest.

References:
- Parametric Families of Multivariate Distributions with Given Margins
- Bivariate extreme statistics
- On the extremal dependence coefficient of multivariate distributions
- Orthant tail dependence of multivariate extreme value distributions
- Multivariate conditional versions of Spearman's rho and related measures of tail dependence
- Multivariate Tail Dependence Coefficients for Archimedean Copulae
- Multivariate nonparametric estimation of the Pickands dependence function using Bernstein polynomials
- An Introduction to Copulas
- Random variables, joint distribution functions, and copulas
- Multivariate measures of concordance
- Extreme-value copulas
- Multivariate Archimedean copulas, d-monotone functions and ℓ1-norm symmetric distributions
- Probabilistic Metric Spaces
- Independence results for multivariate tail dependence coefficients
- On the specification of multivariate association measures and their behaviour with increasing dimension
- Max-stable processes and spatial extremes
- Inequalities for the extremal coefficients of multivariate extreme value distributions
- Laws of Small Numbers: Extremes and Rare Events
- On a family of multivariate copulas for aggregation processes
- A characterization of Gumbel's family of extreme value distributions
- Nonparametric estimation of the dependence function for a multivariate extreme value distribution
- Nonparametric estimation of an extreme-value copula in arbitrary dimensions
- Bias-corrected estimation of stable tail dependence function
- A note on nonparametric estimation of copula-based multivariate extensions of Spearman's rho
- Score tests for covariate effects in conditional copulas
- Functions of order statistics
- R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing
- Visualizing dependence in high-dimensional data: An application to S&P 500 constituent data
- Non-parametric estimation of tail dependence

This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.