Title: Joint inference on extreme expectiles for multivariate heavy-tailed distributions
Authors: Padoan, Simone A.; Stupfler, Gilles
Date: 2020-07-17

Abstract. The notion of expectiles, originally introduced in the context of testing for homoscedasticity and conditional symmetry of the error distribution in linear regression, induces a law-invariant, coherent and elicitable risk measure that has received a significant amount of attention in actuarial and financial risk management contexts. A number of recent papers have focused on the behaviour and estimation of extreme expectile-based risk measures and their potential for risk management. Joint inference of several extreme expectiles has however been left untouched; in fact, even the inference of a marginal extreme expectile turns out to be a difficult problem in finite samples. We investigate the simultaneous estimation of several extreme marginal expectiles of a random vector with heavy-tailed marginal distributions. This is done in a general extremal dependence model where the emphasis is on pairwise dependence between the margins. We use our results to derive accurate confidence regions for extreme expectiles, as well as a test for the equality of several extreme expectiles. Our methods are showcased in a finite-sample simulation study and on real financial data.

Expectiles, introduced by Newey and Powell (1987), induce risk measures which have recently gained substantial traction in the risk management context. Expectiles of an integrable random variable X are obtained as minimisers of asymmetrically squared deviations in the following sense:

ξ_τ(X) = arg min_{θ ∈ ℝ} E[η_τ(X − θ) − η_τ(X)],   (1)

where η_τ(u) = |τ − 1{u ≤ 0}| u² is the so-called expectile check function and 1{·} the indicator function. Expectiles can be seen as L²-analogues of quantiles, which can be obtained by minimising asymmetrically weighted mean absolute deviations (Koenker and Bassett, 1978):

q_τ(X) = arg min_{q ∈ ℝ} E[ρ_τ(X − q) − ρ_τ(X)],

where ρ_τ(u) = |τ − 1{u ≤ 0}| |u| is the quantile check function. Unlike the τth quantile, the τth expectile is always uniquely defined by its convex optimisation problem, and satisfies the first-order condition

τ E[(X − ξ_τ(X))_+] = (1 − τ) E[(ξ_τ(X) − X)_+].   (2)

In particular, expectiles depend on tail realisations of the loss variable as well as their probability. The advantages of the expectile include that it is the only risk measure, apart from the simple expectation, that defines a law-invariant, coherent (Artzner et al., 1999) and elicitable (Gneiting, 2011) risk measure; see Bellini et al. (2014) and Ziegel (2016). It follows from the elicitability property that expectiles benefit from the existence of a natural backtesting methodology. Quantiles, by contrast, are elicitable, but are often criticised for not being a coherent risk measure, and for missing out on important information about the tail of the underlying distribution since they only depend on the frequency of tail events. Meanwhile, the popular quantile-based Expected Shortfall is coherent and takes into account the actual values of the risk variable on the tail event, but is not elicitable. Formula (2) links expectiles to the notion of gain-loss ratio, which is a popular performance measure in portfolio management and is well known in the literature on no-good-deal valuation in incomplete markets (see Bellini and Di Bernardino, 2017, and references therein). Further investigations carried out by Ehm et al. (2016)
and Bellini and Di Bernardino (2017), among others, suggest that expectiles define perfectly sensible alternatives to the quantile and Expected Shortfall. Although expectile estimation dates back to Newey and Powell (1987) in the context of linear regression, it has been the subject of renewed interest in a large range of models; see for example Sobotka and Kneib (2012) and references therein, as well as Holzmann and Klar (2016) and Krätschmer and Zähle (2017), for the estimation of central expectiles of fixed order τ staying away from the tails of the underlying distribution. Meanwhile, probabilistic aspects of extreme expectiles, with τ ↑ 1, have been examined by Bellini et al. (2014) and Bellini and Di Bernardino (2017). Inference on extreme expectiles has been considered even more recently in Daouia et al. (2018, 2019, 2020). These results are limited to inference about extreme expectiles of a single sample of data; in other words, they do not make it possible to construct joint confidence regions for several extreme expectiles from different variables of interest. This is a substantial restriction in actuarial and financial applications, where practitioners are interested in evaluating the asymptotic dependence existing within several risk variables, stock prices or stock indices, and in carrying out precise joint inference about the extremes of these risk variables. Such questions are for instance considered in Jones et al. (2006) with nonparametric testing of equality of distortion risk measures in an actuarial context, in Straetmans et al. (2008) for the detection of tail asymmetries, in Zhou (2010) and Mainik et al. (2015) for the construction of diversified financial portfolios, and in Hurlin et al. (2017) as a way to directly compare risk measures between assets. Besides, an inspection of the results in Daouia et al. (2018) shows that, in the univariate case, standard plug-in asymptotic confidence intervals obtained from the asymptotic normality of the estimators in fact behave, more often than not, quite poorly in finite samples. In particular, the Gaussian QQ-plots in Appendix A.2 of Daouia et al. (2018) show that, despite the fact that the Gaussian distribution will in many cases be a reasonable model for the uncertainty of extreme expectile estimators, the sample variance of the estimators can be a long way off the variance obtained via a naive use of the theoretical Gaussian approximation. These two issues constitute a serious gap that should be addressed if expectiles are to be used widely in risk management. This paper contributes to filling that gap as follows. In a general framework of multivariate distributions with marginal heavy tails and extremal dependence between margins, and given independent and identically distributed (i.i.d.) data, we start by rigorously investigating the joint asymptotic normality of intermediate tail expectiles of the margins. The order of expectiles is such that τ = τ_n ↑ 1 with n(1 − τ_n) → ∞ as n → ∞, where n denotes the sample size. Let us highlight that the theoretical properties of the methods we shall consider, called the Least Asymmetrically Weighted Squares (LAWS) estimators and Quantile-Based (QB) estimators, had previously been analysed only for the estimation of a single extreme expectile. Our emphasis here is on describing the asymptotic dependence structure of our estimators using the concept of tail copula introduced and studied in Schmidt and Stadtmüller (2006).
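As a concrete illustration of characterisation (2), the τth expectile of a simple heavy-tailed model can be computed numerically by solving the first-order condition. The base-R sketch below is our own illustration (the Pareto model, the function name pareto_expectile and the choice γ = 1/3 are illustrative, not part of the paper) and uses the closed-form expected exceedance E[(X − θ)_+] of the Pareto distribution.

```r
# tau-th expectile of the Pareto distribution with survival function x^(-1/gamma), x >= 1,
# obtained by solving the first-order condition (2) numerically (requires 0 < gamma < 1)
pareto_expectile <- function(tau, gamma) {
  exceed <- function(theta) theta^(1 - 1 / gamma) / (1 / gamma - 1)   # E(X - theta)_+ for theta >= 1
  short  <- function(theta) theta - 1 / (1 - gamma) + exceed(theta)   # E(theta - X)_+ since E(X) = 1/(1 - gamma)
  foc    <- function(theta) tau * exceed(theta) - (1 - tau) * short(theta)
  uniroot(foc, interval = c(1, 1e12))$root
}

pareto_expectile(0.999, gamma = 1/3)   # an extreme expectile of a Pareto distribution with tail index 1/3
```

Because the left-hand side of (2) decreases and the right-hand side increases in θ, the root is unique and uniroot brackets it over the chosen interval.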
Our results are then used to tackle the important question of joint inference about tail expectiles from two distinct angles. First, we exploit our joint Gaussian asymptotics of tail expectile estimators to construct asymptotic joint confidence regions for tail expectiles. This is done by, on the one hand, designing specific finite-sample corrections for the standard plug-in asymptotic variance estimators of each expectile estimator to obtain accurate representations of marginal uncertainty. On the other hand, we construct an appropriate nonparametric estimator of the tail dependence between two such estimators pertaining to different marginals. This results in an estimate of the covariance matrix of our set of expectile estimators, used to build Gaussian confidence regions for the vector of expectiles of interest and resulting in a procedure that is computationally very fast and avoids having to resort to bootstrapping. Second, we tackle the important problem of testing whether tail expectiles across marginals are equal. We do so by adapting the classical likelihood ratio test of equal means in a Gaussian random vector. The deviance statistic in this testing procedure prominently features our covariance matrix estimators that will be used to construct accurate confidence regions. The outline of the paper is the following. Section 2 explains in detail our statistical context and contains the main theoretical results of the paper on joint intermediate and extreme expectile estimation. Section 3 explores the implications of our results on joint inference about tail expectiles. The finite-sample performance of the methods is examined on simulated data sets in Section 4 and on financial exchange rates data in Section 5. The methods and data considered in this article have been incorporated into the R package ExtremeRisks, freely available on CRAN. The Appendix gives further finite-sample results. Let (X i , 1 ≤ i ≤ n), with X i = (X i,j , 1 ≤ j ≤ d), be i.i.d. copies of a d-dimensional random vector X = (X 1 , . . . , X d ), with marginal distributions F j , associated survival functions F j = 1 − F j , and tail quantile functions U j (s) = inf{x ∈ R | F j (x) ≥ 1 − s −1 }, for s > 1. The realisations of X j may for example be seen as the negatives of generic financial positions, so that large positive values of X j represent extreme losses associated to one specific position, or as losses incurred by an insurance company in distinct lines of business. We focus on the joint estimation of extreme expectiles of X 1 , . . . , X d . We work with heavy-tailed distributions, representing the tail structure of many financial and actuarial data examples fairly well, see e.g. p.9 of Embrechts et al. (1997) . Mathematically, we assume ∀j ∈ {1, . . . , d}, ∀x > 0, lim The tail indices γ j > 0 specify marginal tail heaviness. With condition E| min(X j , 0)| < ∞, the assumption 0 < γ j < 1 ensures that the first moment of X j exists and thus expectiles of the X j are well-defined. These two conditions will be part of our assumptions throughout. More precisely, our overarching focus in the present paper is to establish the joint asymptotic distribution of tail expectile estimators of level τ close to 1. Specifically, according to (1), the expectile for the jth marginal distribution F j is defined as where η τ is the expectile check function defined below Equation (1). We consider hereafter the problem of the joint inference of (ξ τ,1 , . . . 
, ξ τ,d ), where the level τ is such that τ = 1 − p for a small value of p = p n . Two cases are considered, when p is (much) larger and smaller than 1/n, with n large: these are respectively the intermediate case, when nonparametric estimation methods can be used, and the properly extreme case when extrapolation methods whose rationale is rooted in the heavy-tailed assumption have to be developed. To carry out joint inference about estimators of extreme expectiles, we model here the extremal dependence structure between any two components of X in the form of a tail copula. This translates into the following general assumption that we shall work with throughout. Condition A. For every 1 ≤ j ≤ d, let F j and U j be the distribution function and tail quantile function associated to X j . Assume that the F j are continuous and: (i) U j is regularly varying with index γ j : U j (sx)/U j (s) → x γ j as s → ∞, for any x > 0. (ii) For any (j, ) with j = , there is a function R j, on [0, ∞] 2 \ {(∞, ∞)} such that Condition A(ii) formalises the existence of a limiting dependence structure in the upper tail of any two components X j and X , given by the tail copula R j, (Schmidt and Stadtmüller, 2006) . It is a weak assumption since it is satisfied by any X in the maximum domain of attraction of a multivariate extreme value distribution (de Haan and Ferreira, 2006) . Let 0 < τ n < 1 satisfy τ n → 1 and n(1 − τ n ) → ∞ as n → ∞. We focus on estimating tail expectiles of the X j at level τ n . We consider two methods: the nonparametric empirical counterpart of (3), called the Least Asymmetrically Weighted Squares (LAWS) estimator and a semiparametric Quantile-Based (QB) estimator built on our heavy-tailed assumption. Nonparametric estimator via asymmetric least squares We first consider estimating the expectile ξ τn,j of the marginal distribution F j by its empirical estimator ξ τn,j = arg min θ∈R n i=1 η τn (X i,j − θ). This LAWS estimator can be computed with iteratively reweighted least squares, or with standard minimisation routines such as uniroot in R. Theorem 2 in Daouia et al. (2018) shows that the empirical estimator ξ τn,j is consistent and n(1 − τ n )−asymptotically normal; this result is limited to the marginal estimation of an intermediate expectile. Our first main result provides the joint asymptotic normality of the estimators ξ τn,j , for 1 ≤ j ≤ d. Theorem 2.1. Assume that Condition A is satisfied. Assume further that there is δ > 0 such that E| min(X j , 0)| 2+δ < ∞ and that 0 < γ j < 1/2 for any 1 ≤ j ≤ d. Let τ n ↑ 1 be such that n(1 − τ n ) → ∞ as n → ∞. Then we have The covariance matrix V LAWS (γ, R) has entries To understand the above joint asymptotic distribution further, consider the case γ j = γ for any j ∈ {1, . . . , d}, when the X j have equivalent tails. By 1-homogeneity of the tail copula (see Schmidt and Stadtmüller, 2006 , Theorem 1(ii)), we have The variance term on the diagonal of this matrix is indeed equal to the asymptotic variance derived in Daouia et al. (2018, Theorem 2) in the univariate case. The covariance terms off the diagonal can be rewritten in terms of the asymptotic correlation of two estimators as The expression of the correlation structure C LAWS (γ, R) is similar to the one representing the contribution of temporal dependence in the variance of the intermediate marginal LAWS expectile estimator in a stationary time series, see Padoan and Stupfler (2020, Theorem 3.1) . 
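To fix ideas, here is a minimal base-R sketch of the LAWS estimator computed as the root of its asymmetric least-squares first-order condition, in the spirit of the uniroot-based computation mentioned above. It is an illustration only, not the ExtremeRisks implementation; the function name expectile_laws, the simulated Student-t sample and the choice k = 50 are ours.

```r
expectile_laws <- function(x, tau) {
  # Empirical tau-th expectile: zero of the first-order condition
  # sum_i |tau - 1{x_i <= theta}| (x_i - theta) = 0
  foc <- function(theta) sum(abs(tau - (x <= theta)) * (x - theta))
  uniroot(foc, interval = range(x), tol = 1e-10)$root
}

set.seed(42)
x     <- rt(1000, df = 3)        # heavy-tailed sample with tail index 1/3
tau_n <- 1 - 50 / length(x)      # intermediate level, k = n(1 - tau_n) = 50
expectile_laws(x, tau_n)
```

Since the first-order condition is strictly decreasing in θ and changes sign between the sample minimum and maximum, the bracketing interval range(x) always contains the unique root.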
Semiparametric estimator via a quantile-based procedure An alternative estimator is provided by the asymptotic proportionality relationship between expectile and quantile: where q τ,j is the quantile function of the jth marginal. This was first noted by Bellini et al. (2014) . This connection suggests the class of QB estimators ξ τn,j = ( γ −1 τn,j − 1) − γ τn,j q τn,j where for each j ∈ {1, . . . , d}, q τn,j and γ τn,j are consistent estimators of q τn,j and γ j . Throughout, the estimator q τn,j is taken to be q τn,j = X n− n(1−τn) ,n,j , where · is the floor function and X 1,n,j ≤ · · · ≤ X n,n,j denote the ascending order statistics of the sample (X 1,j , . . . X n,j ). There has been a wealth of research on the estimation of the tail index γ j ; we refer to Chapter 4 in Beirlant et al. (2004) and Chapter 3 of de Haan and Ferreira (2006) . We work below with the Hill estimator (Hill, 1975) with effective sample size k = n(1 − τ n ) : X n−i+1,n,j X n− n(1−τn) ,n,j . This estimator is in fact the maximum likelihood estimator in the purely Pareto model and is known to be optimal, in terms of rate of convergence, when the distribution function F j belongs to the wide Hall-Welsh class of models (Hall and Welsh, 1985) , that is where a j > 0, b j = 0 and ρ j < 0. See Drees (1998) . The asymptotic normality of a single one of the ξ τn,j has been investigated in Corollary 2 of Daouia et al. (2018) . To write the corresponding joint convergence result, we require the following set of second-order conditions designed to control the rate of convergence in (4). Condition B. Assume that Condition A(i) holds and that, for every 1 ≤ j ≤ d, where ρ j ≤ 0 and A j is a measurable function converging to 0 at infinity and having constant sign. Hereafter, (x ρ j − 1)/ρ j is to be read as log(x) when ρ j = 0. Condition B controls rates of convergences in Condition A(i): since |A j | is regularly varying with index ρ j (by Theorems 2.3.3 and 2.3.9 in de Haan and Ferreira, 2006) , the larger |ρ j | is, the faster |A j | converges to 0 and the smaller the error in the approximation of the right tail of U j by a purely Pareto tail will be. Any distribution part of the Hall-Welsh class (5) satisfies this kind of condition (as a consequence of Theorem 2.3.9 in de Haan and Ferreira, 2006) . Numerous examples of commonly used distributions that satisfy this assumption can be found in Beirlant et al. (2004) . Our next result, of interest in its own right, examines the joint convergence between Hill estimators and intermediate order statistics across marginals. A related result, limited to joint convergence of Hill estimators only, is Theorem 4 in Stupfler (2019). Theorem 2.2. Assume that Conditions A and B hold. Let τ n ↑ 1 be such that n(1−τ n ) → ∞ and, for any 1 ≤ j ≤ d, n(1 − τ n )A j ((1 − τ n ) −1 ) → λ j ∈ R as n → ∞. Then we have The covariance matrix Σ Q (γ, R) can be partitioned into 2 × 2 blocks Σ Q j, (γ, R), given by Σ Q j,j (γ, R) = γ 2 j I 2 (where I 2 denotes the 2 × 2 identity matrix) for any j ∈ {1, . . . , d} and for any j, ∈ {1, . . . , d} with j < . 
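As an illustration of the quantile-based construction, the following base-R sketch computes the Hill estimator with effective sample size k = ⌊n(1 − τ_n)⌋ and the resulting QB intermediate expectile estimate. The helper names hill_estimate and expectile_qb are ours, the top order statistics are assumed positive, and the code is only a sketch of the procedure described above.

```r
hill_estimate <- function(x, k) {
  # Hill (1975) estimator of the tail index based on the k largest observations
  # (the top k + 1 order statistics are assumed positive)
  xs <- sort(x)
  n  <- length(x)
  mean(log(xs[(n - k + 1):n])) - log(xs[n - k])
}

expectile_qb <- function(x, tau) {
  # Quantile-based (QB) estimator: (1/gamma_hat - 1)^(-gamma_hat) * q_hat(tau),
  # with q_hat(tau) an intermediate order statistic and gamma_hat the Hill estimate
  n <- length(x)
  k <- floor(n * (1 - tau))
  q_hat     <- sort(x)[n - k]
  gamma_hat <- hill_estimate(x, k)
  (1 / gamma_hat - 1)^(-gamma_hat) * q_hat
}
```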
Let us highlight that although there is asymptotically no correlation between the Hill estimator for a given marginal and the corresponding order statistic (see also Lemma 3.2.3 p.71 in de Haan and Ferreira, 2006) , there are generally nonzero correlations between pairs of Hill estimators, pairs of intermediate order statistics, as well as between the Hill estimator of a given marginal and an intermediate order statistic pertaining to another marginal. The desired result on the joint convergence of the ξ τn,j is now a corollary of Theorem 2.2. Set m(x) = (1 − x) −1 − log(x −1 − 1), for x ∈ (0, 1). Corollary 2.3. Work under the conditions of Theorem 2.2. Assume in addition that E| min(X j , 0)| < ∞, that 0 < γ j < 1 for any 1 ≤ j ≤ d and that n(1 − τ n )q −1 τn,j → µ j ∈ R as n → ∞. Then where the asymptotic bias b QB has components and the covariance matrix V QB (γ, R) has entries This result is the multivariate extension of Corollary 2 in Daouia et al. (2018) that is required for our purposes. Note also that unlike the latter, our result is written without the unnecessary assumption of an increasing (marginal) distribution function. We consider now the problem of most relevance to risk management in practice, which is to estimate extreme expectiles ξ τ n ,j , where τ n → 1 is such that n(1 − τ n ) → c ∈ [0, ∞). In risk management, one would typically consider τ n ≥ 1 − 1/n, see for example Chapter 4 of de Haan and Ferreira (2006) and Cai et al. (2015) in the context of extreme quantile esstimation. The basic idea, dating back to Weissman (1978) , is to extrapolate intermediate expectile estimators at level τ n to the extreme level τ n , beyond the observed data, using the marginal heavy tails assumption. This is warranted by convergence (4), which entails This suggests the following two estimators: the LAWS-based extrapolating estimator ξ τ n ,j = ξ τ n ,j (τ n ) = 1 − τ n 1 − τ n − γ τn,j ξ τn,j and the QB extrapolating estimator ξ τ n ,j = ξ τ n ,j (τ n ) = 1 − τ n 1 − τ n − γ τn,j ξ τn,j = ( γ −1 τn,j − 1) − γ τn,j q τ n ,j , where q τ n ,j is the Weissman estimator of the extreme quantile q τ n ,j (Weissman, 1978) . Our next main result towards our goal of carrying out joint inference about extreme expectiles is a statement of the joint convergence of these estimators across marginals. Theorem 2.4. Assume that Conditions A and B hold, with ρ j < 0 for any j ∈ {1, . . . , d}. Let τ n , τ n ↑ 1 with n(1 − τ n ) → ∞, n(1 − τ n ) → c ∈ [0, ∞) and n(1 − τ n )/ log[(1 − τ n )/(1 − τ n )] → ∞ as n → ∞. Assume also that for any 1 ≤ j ≤ d, n(1 − τ n )q −1 τn,j → µ j ∈ R and n(1 − τ n )A j ((1 − τ n ) −1 ) → λ j ∈ R as n → ∞. Let b = (λ j /(1 − ρ j )) 1≤j≤d and define a covariance matrix V (γ, R) by V j, (γ, R) = γ 2 j if j = , γ j γ R j, (1, 1) if j < . (i) Assume that there is δ > 0 such that E| min(X j , 0)| 2+δ < ∞ and that 0 < γ j < 1/2 for any 1 ≤ j ≤ d. Then (ii) Assume that E| min(X j , 0)| < ∞ and that 0 < γ j < 1 for any 1 ≤ j ≤ d. Then This result generalises the convergence of a single one of either the ξ τ n ,j or ξ τ n ,j , examined in Corollaries 3 and 4 of Daouia et al. (2018) . It is proven by showing that the joint asymptotic Gaussian distribution of our Weissman-type extrapolating estimators is exclusively governed by that of the Hill estimators used in the extrapolation procedure. 
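The extrapolation step itself is elementary; a base-R sketch is given below, reusing the sample x and the hypothetical helpers expectile_laws, expectile_qb and hill_estimate from the earlier sketches. The anchor level τ_n = 1 − k/n and the extreme level τ'_n = 1 − 1/n are illustrative choices.

```r
extrapolate_expectile <- function(xi_tau, gamma_hat, tau, tau_prime) {
  # Weissman-type extrapolation from the intermediate level tau to the extreme
  # level tau_prime, warranted by the heavy-tail approximation (4)
  ((1 - tau_prime) / (1 - tau))^(-gamma_hat) * xi_tau
}

n <- length(x); k <- 50
tau_n       <- 1 - k / n
tau_prime_n <- 1 - 1 / n
gamma_hat   <- hill_estimate(x, k)

xi_star_laws <- extrapolate_expectile(expectile_laws(x, tau_n), gamma_hat, tau_n, tau_prime_n)
xi_star_qb   <- extrapolate_expectile(expectile_qb(x, tau_n),   gamma_hat, tau_n, tau_prime_n)
```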
However, and even though the asymptotic behaviour of the Hill estimators is certainly crucial, correctly inferring the anchor intermediate expectile will also be important in finite-sample situations, as we shall show in our construction of confidence regions and in our simulation study. Equipped with our theory developed in Section 2, we derive asymptotic confidence regions for inference about extreme expectiles and provide a testing procedure for their equality. We start by the construction of confidence regions at intermediate and extreme levels. Of course, the study of the intermediate case is less important in practice since most applications in tail risk management focus on the estimation of risk measures at properly extreme levels. However, as we shall illustrate below, giving an accurate measure of the uncertainty about intermediate expectile estimators will be key to our definition of accurate Gaussian confidence regions for multiple extreme expectiles. Throughout this section, we let ξ τn = (ξ τn,1 , . . . , ξ τn,d ) and define similarly ξ τ n , ξ τn , ξ τn , ξ τ n and ξ τ n . The symbol 1 d denotes the d−dimensional vector with all entries equal to 1. All operations on vectors, apart from matrix operations, are meant componentwise. Using LAWS estimation Our main instrument is Theorem 2.1, namely Using this Gaussian asymptotic approximation to build a confidence region for ξ τn is a delicate task. In the multivariate case, this problem is even more difficult because of the additional nontrivial question of estimating the off-diagonal elements of V LAWS (γ, R) to model correctly the dependence between LAWS estimators. We investigate here a solution based on the proof of Theorem 2.1. If ϕ τ (y) = |τ − 1{y ≤ 0}|y is the derivative of η τ /2, one has the following nonparametric approximation of (γ, R) for large n: (γ, R) ≈ 1 (1 − τ n )ξ τn,j ξ τn, × E(ϕ τn (X j − ξ τn,j )ϕ τn (X − ξ τn, )) [1 + (2τ n − 1)F j (ξ τn,j )/(1 − τ n )][1 + (2τ n − 1)F (ξ τn, )/(1 − τ n )] . This approximation is our starting point for the construction of an estimator of V LAWS j, (γ, R). One could estimate each term in this nonparametric approximation directly; this turns out not to be the best-performing solution in practice because it tends to provide an underestimation of the marginal uncertainty on expectiles. Our solution, suggested by the results of extensive Monte-Carlo simulations, is the following. For the diagonal entry V LAWS For off-diagonal elements, the covariance Cov(ϕ τn (X j − ξ τn,j ), ϕ τn (X − ξ τn, )) is in practice found to be a good approximation of the direction of dependence within the data; a finitesample improvement on the estimation of the strength of this dependence is found by writing V LAWS j, (γ, R) ≈ γ j γ E(ϕ τn (X j − ξ τn,j )ϕ τn (X − ξ τn, )) (1 − τ n )ξ τn,j ξ τn, for large n. Our estimator of V LAWS (γ, R) is now constructed by plugging in the LAWS and Hill estimators, the empirical survival functions F n,j based on the X i,j (1 ≤ i ≤ n) and the empirical covariances m n,j, = 1 n n i=1 ϕ τn (X i,j − ξ τn,j )ϕ τn (X i, − ξ τn, ). This results in the estimator V LAWS n (γ, R) of V LAWS (γ, R) given elementwise by V LAWS n,j,j (γ, R) = 2 γ 2 τn,j 1 − 2 γ τn,j × 1 + F n,j ( ξ τn,j )/(1 − τ n ) 1 + (2τ n − 1) F n,j ( ξ τn,j )/(1 − τ n ) 2 and V LAWS n,j, (γ, R) = γ τn,j γ τn, m n,j, (1 − τ n ) ξ τn,j ξ τn, for j = . Under the assumptions of Theorem 2.1, this is indeed a consistent estimator of V LAWS (γ, R). 
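Once an estimate of the covariance matrix (such as the estimator of V^LAWS(γ, R) described above, or the one implemented in ExtremeRisks) is available, checking whether a candidate vector belongs to the (1 − α)-asymptotic confidence region reduces to a chi-square bound on a squared Mahalanobis norm. A minimal sketch, assuming the supplied matrix V_hat is positive definite (the function name in_confidence_region is ours):

```r
in_confidence_region <- function(xi_candidate, xi_hat, V_hat, n, tau_n, alpha = 0.05) {
  # Membership test for the asymptotic confidence ellipsoid:
  # || V_hat^{-1/2} sqrt(n(1 - tau_n)) (xi_hat / xi_candidate - 1) ||^2 <= chi^2_{d, 1 - alpha}
  z    <- sqrt(n * (1 - tau_n)) * (xi_hat / xi_candidate - 1)
  stat <- drop(crossprod(z, solve(V_hat, z)))
  stat <= qchisq(1 - alpha, df = length(xi_hat))
}
```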
When V LAWS (γ, R) is symmetric positive definite (in particular, no perfect asymptotic dependence between two components of X can be present), multiplying the left-hand side in (6) by the positive definite inverse square root [V LAWS (γ, R)] −1/2 of V LAWS (γ, R) and then plugging in our estimator V LAWS n (γ, R) produces an asymptotically Gaussian random vector with independent standard Gaussian components. Therefore, if · 2 denotes the Euclidean norm on R d and χ 2 d,1−α denotes the (1 − α)−quantile of the chi-square distribution with d degrees of freedom, one has Denoting by B d (0 d , r) the closed Euclidean ball in R d whose centre is the origin and radius is r, we find the corresponding (1 − α)−asymptotic LAWS-based confidence region for ξ τn as the random ellipsoid [Recall that all operations except the matrix product V LAWS n (γ, R) 1/2 u are meant com- Using QB estimation With the QB estimator, our main tool is Corollary 2.3: Similarly to what is observed when using LAWS estimators, great care has to be taken in constructing confidence regions based on this convergence. Contrary to the LAWS estimator, the QB estimator is asymptotically biased due to its reliance on the relationship (4). The jth component of this bias is essentially, as n → ∞, Two sources of bias therefore arise when using the QB estimator: one due to marginal tail heaviness and the other to the second-order framework. The correction of the latter source of bias involves estimating accurately the second-order parameter ρ j , which is a notoriously difficult problem (see e.g. the Introduction of Cai et al., 2013) , especially from the practical point of view since consistent estimators of ρ j typically suffer from low rates of convergence, see e.g. Goegebeur et al. (2010 Goegebeur et al. ( , p.2638 and Gomes et al. (2009, p.298) . As such, correcting second-order bias tends to increase finite-sample variability substantially, resulting in confidence regions that may be too conservative. By contrast, the simple expression of the bias component proportional to q τn,j makes its correction a straightforward task, with all estimators involved converging at the rate n(1 − τ n ) or more. This constitutes our rationale for concentrating specifically on the first source of bias with the estimator The covariance matrix V QB (γ, R), meanwhile, is estimated as follows: where the estimator of the tail copula function R j, is defined as [Here r n,i,j denotes the marginal rank of observation X i,j .] This estimator is a slightly modified version of the estimator of the empirical upper tail copula estimator given in Equation (13) in Schmidt and Stadtmüller (2006) . The estimator V QB n (γ, R) is a consistent estimator of V QB (γ, R), by a combination of Theorem 2.2 and known results on the uniform consistency of R τn,j, , see Schmidt and Stadtmüller (2006, Section 5) . A calculation entirely similar to the one carried out with the LAWS estimator now yields an (1 − α)−asymptotic QB confidence region for ξ τn as the random ellipsoid A comparison of these regions in terms of actual coverage will be carried out in Section 4. At the extreme level, the key result for our purposes is Theorem 2.4. Nevertheless, if one constructs an asymptotic confidence region directly from this result, the actual finite-sample coverage probability can be quite poor, even in the estimation of a single extreme expectile: see Appendix A.2 in Daouia et al. 
(2018) where Gaussian QQ-plots show that the observed variance of extreme expectile estimators can be fairly different from the asymptotic variance in the Gaussian approximation. We shall illustrate this in more detail in Section 4.1. Our idea is to, first, get a finer understanding of the uncertainty in the estimation of extreme expectiles. The gist of our method is that any estimator of the form where ξ τn,j is a consistent estimator of ξ τn,j , satisfies Under the conditions of Theorem 2.4, the second (random) term and the third (bias) term are dominated by the first term, leading to the common asymptotic distribution obtained therein. In practice however, the behaviour of ξ τn,j matters, and so does the correlation between ξ τn,j and γ τn,j , especially when log d n = log[(1 − τ n )/(1 − τ n )] is only moderately large. Investigating this uncertainty and correlation will lead us to define corrected Gaussian asymptotic confidence regions. All our confidence regions will be constructed on the log-scale; using this scale has been shown to improve finite-sample coverage of confidence regions for extreme risk measures (see e.g. p.628 in Drees, 2003 , in the context of extreme quantile estimation). We found from Monte-Carlo simulations that this is also the case for expectiles. Using the LAWS-based extrapolating estimator The crucial result is an extension of Theorem 2.1 giving the joint convergence of the Hill estimators and intermediate LAWS expectile estimators across marginals. Theorem 3.1. Assume that Conditions A and B hold. Assume further that there is δ > 0 such that E| min(X j , 0)| 2+δ < ∞ and that 0 < γ j < 1/2 for any 1 ≤ j ≤ d. Let τ n ↑ 1 be such that n(1 − τ n ) → ∞ and, for any when j = ∈ {1, . . . , d} and, elementwise, for any j, ∈ {1, . . . , d} with j < . Theorem 3.1 and Equation (8) suggest the following approximation for the LAWS-based extrapolating estimator on the log-scale: We now focus on the estimation of the bias term appearing in the above distributional approximation, and of the matrix V ,LAWS n (γ, R). Use Proposition 1(i) in Daouia et al. (2020) and the proof of Theorem 4.3.8 in de Haan and Ferreira (2006) Here and as above we neither emphasise nor estimate the bias term proportional to A j ((1 − τ n ) −1 ). We therefore suggest the following working approximation: To find an estimator of the covariance matrix V ,LAWS n (γ, R), we note that τn,j if j = γ τn,j γ τn, R τn,j, (1, 1) if j < with the notation of Section 3.1. Similarly Σ LAWS j, n,j, (γ, R). An estimation method for the off-diagonal entry Σ LAWS j, (γ, R)(1, 2) is obtained by noting that We thus estimate Σ LAWS j, (γ, R)(1, 2) with Σ LAWS n,j,j (γ, R)(1, 2) = γ 3 τn,j ( γ −1 τn,j −1) γ τn,j /(1− γ τn,j ) 2 when j = , and otherwise by (recall that ϕ τn (X − ξ τn, ) has expectation 0): The entry Σ LAWS j, (γ, R)(2, 1) is estimated by Σ LAWS n,j, (γ, R)(2, 1) defined in a similar fashion by exchanging j and . This suggests an estimator of V ,LAWS n (γ, R) defined elementwise as We finally deduce an (1 − α)−asymptotic LAWS-based confidence region for the extreme expectile ξ τ n as the deformed random ellipsoid One can easily deduce from that construction a LAWS-based asymptotic (1 − α)−confidence interval for the jth marginal extreme expectile ξ τ n ,j : where z 1−α/2 is the quantile of the standard Gaussian distribution at level 1 − α/2. This can be seen as an adjusted version of the confidence interval based on the LAWS estimator that is considered in Daouia et al. (2018) . 
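A simplified sketch of such a log-scale marginal interval is given below. Here var_log stands for an estimate of the asymptotic variance of √(n(1 − τ_n)) log(ξ̂*/ξ); the crude choice indicated in the comment, γ̂² log²[(1 − τ_n)/(1 − τ'_n)], ignores the finite-sample corrections developed above, so this should be read as an illustration rather than as the adjusted interval of this section. The function name ci_extreme_expectile is ours.

```r
ci_extreme_expectile <- function(xi_star, var_log, n, tau_n, alpha = 0.05) {
  # (1 - alpha) confidence interval for a single extreme expectile, built on the log-scale;
  # a crude choice is var_log = gamma_hat^2 * log((1 - tau_n) / (1 - tau_prime_n))^2
  half <- qnorm(1 - alpha / 2) * sqrt(var_log / (n * (1 - tau_n)))
  c(lower = xi_star * exp(-half), upper = xi_star * exp(half))
}
```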
Using the QB extrapolating estimator We rewrite Equation (8) for ξ τ n ,j as log ξ τ n ,j By Proposition 1(i) in Daouia et al. (2020) , the first component of the bias on the second line of the right-hand side is essentially a linear combination of 1/q τ n ,j and A j ((1 − τ n ) −1 ), which at the extreme level τ n are typically very small. The second component, meanwhile, is asymptotically proportional to A j ((1−τ n ) −1 ) (see the proof of Theorem 4.3.8 of de Haan and Ferreira, 2006) , and we have discussed previously how estimating this kind of bias component is not necessarily beneficial for confidence region construction. We then ignore these two bias terms and use a Taylor expansion to write, as n → ∞, Using Theorem 2.2 suggests the following approximation for the QB extrapolating estimator: This matrix is readily estimated with the matrix V ,QB n (γ, R) defined as This yields an (1 − α)−asymptotic QB confidence region for the extreme expectile ξ τ n as the deformed random ellipsoid We can also deduce from this confidence region a QB asymptotic (1 − α)−confidence interval for the jth marginal extreme expectile at level τ n : This is an adjusted version of the confidence interval based on the so-called indirect estimator in Daouia et al. (2018) . We shall compare the relative finite-sample performance of the intervals I τ n ,j,α and I τ n ,j,α , and of the regions E τ n ,α and E τ n ,α , in Section 4. An alternative way of carrying out joint inference about several risk measures is to test their equality. This is relevant to actuarial and financial practice, where risk managers may want to assess the asymptotic dependence between several risk variables, individual stock prices or stock indices, as well as whether certain assets or stocks should be considered riskier than others. We show here how our construction of asymptotic confidence regions can be used to design a test of equality of extreme expectiles. We focus here on properly extreme levels since this is the relevant case for extreme risk management. Consider, for an order τ = τ n → 1 where n(1 − τ n ) → c ∈ [0, ∞) as n → ∞, the system of hypotheses H 0 : ξ τ n ,1 = · · · = ξ τ n ,d = ξ τ n , H 1 : ∃(j, ) with j = such that ξ τ n ,j = ξ τ n , . To construct a testing procedure for this problem, we note that we have at our disposal jointly asymptotically Gaussian estimators of the ξ τ n ,j . Testing the equality of the ξ τ n ,j can thus be essentially viewed as testing the equality of the means of a Gaussian random vector. A simple and powerful solution to this problem is given by a likelihood ratio test, which we briefly recall here; more can be found in e.g. Silvey (1970) . Suppose that Z = (Z 1 , . . . , Z d ) is a d−dimensional Gaussian random vector with mean m and a known, positive definite covariance matrix V . Suppose that it is of interest to consider the nested models problem The (log-likelihood ratio) deviance statistic for testing the validity of model M 0 is In model M 0 , the statistic Λ has a chi-square distribution with d − 1 degrees of freedom. In our case, we can set Z to be the LAWS-based extrapolating estimator ξ τ n or the QB extrapolating estimator ξ τ n . This leads us to two distinct testing procedures. LAWS-based test Following the discussion of Section 3.2, we approximate the distribution of the vector Z = Z n = log ξ τ n + b QB / n(1 − τ n ) by a Gaussian distribution with mean m = m n = log ξ τ n and covariance matrix with the notation of Sections 3.1 and 3.2. 
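The generic Gaussian likelihood-ratio computation underpinning both versions of the test can be sketched as follows: with a known positive definite covariance V, the common mean under M0 is the generalised least-squares estimate and the deviance is the resulting quadratic form, to be compared with a χ²_{d−1} quantile. In our setting Z and V are replaced by the vector of log extrapolated expectile estimates and its estimated covariance matrix; the function name equal_means_deviance is ours.

```r
equal_means_deviance <- function(Z, V) {
  # Likelihood-ratio (deviance) statistic for H0: all components of E(Z) are equal,
  # when Z is Gaussian with known positive definite covariance matrix V
  Vi   <- solve(V)
  ones <- rep(1, length(Z))
  m0   <- drop(crossprod(ones, Vi %*% Z)) / drop(crossprod(ones, Vi %*% ones))  # GLS common mean
  res  <- Z - m0
  drop(crossprod(res, Vi %*% res))   # chi-square with d - 1 degrees of freedom under H0
}

# reject H0 at level alpha when equal_means_deviance(Z, V) > qchisq(1 - alpha, length(Z) - 1)
```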
We thus compute the test statistic We finally define a test with asymptotic type I error α by deciding that if Λ LAWS QB test Still following Section 3.2, we approximate the distribution of the vector Z = Z n = log ξ τ n by a Gaussian distribution with mean m = m n = log ξ τ n and covariance matrix with the notation of Section 3.2. We thus compute the test statistic A test with asymptotic type I error α is defined by rejecting H 0 if and only if Λ QB n > χ 2 d−1,1−α . Our goal is now to compare the performance of our inference procedures (asymptotic confidence regions and tests) on simulated data in a variety of models, before showcasing our procedures on a sample of real data. Here we study the finite-sample performance of the inferential methodology developed in Section 3. We first assess the quality of inference about marginal extreme expectiles. We then study the performance of our joint confidence regions for intermediate and extreme expectiles. Finally, we investigate the power of the tests for the equality of extreme expectiles. To save space, all Figures and Tables containing our full results are deferred to Appendix A. Here we simulate M = 10,000 samples of n = 1,000 independent observations from • The Fréchet distribution, having distribution function F (x) = exp(−x −1/γ ) for x > 0, • The Pareto distribution, having distribution function F (x) = 1 − x −1/γ for x > 1, • The Student-t distribution with 1/γ degrees of freedom. The tail index is chosen to be γ = 1/3 in each case. For each simulated sample we estimate the (univariate) expectile at the extreme level τ n = 0.999 = 1 − 1/n and we compute the associated confidence intervals I τ n ,α and I τ n ,α defined in Section 3.2 (there is no dependence on the label of the marginal in this univariate case) with 95% nominal coverage probability. The anchor intermediate level is taken to be τ n = 1 − k/n, with k ∈ [6, 300]. Then, we compute a Monte Carlo approximation of the relative Mean Squared Error (MSE) for the extrapolating point estimators and the actual coverage probability of the corresponding interval estimators. Results are collected in Figure I , see Appendix A. The top panels of this Figure show that the QB extrapolating estimator has lower relative MSE than its LAWS counterpart in the Fréchet and Pareto cases, and comparable MSE in the Student-t case. Interestingly, however, the adjusted interval estimators perform comparably in each case, and in fact the LAWS confidence interval has slightly better and more stable coverage, as the middle and bottom panels show. Our adjusted intervals provide visibly improved results compared to their unadjusted versions for all three distributions, with a remarkable improvement in the LAWS case for the Fréchet and Pareto distributions. By contrast, the actual non-coverage probability of the unadjusted versions is typically in the range of 15-25%. As a conclusion, it appears that in terms of marginal inference at the extreme level, the LAWS and QB extrapolating estimators are comparable, with a slight advantage for the former once our adjustment to the confidence interval has been applied. In the second and third parts of our experiments we work with, among others, two families of Archimedean copulae, which we briefly introduce below. Further details can be found in Joe (2014) . Let ϕ : (0, 1] → [0, ∞) be a convex and strictly decreasing function with ϕ(1) = 0 and ϕ(t) ↑ ∞ as t ↓ 0. 
The Archimedean copula in dimension d with generator ϕ is the d-dimensional distribution function C with uniform marginals defined by The Archimedean families we consider are, first, the Clayton family, defined through the generator ϕ(u) = θ −1 (u −θ − 1) for θ > 0. Here the components of u become independent for θ → 0, and completely dependent for θ → ∞. We also consider the Gumbel family, defined through the generator ϕ(u) = (− log(u)) ϑ for ϑ ≥ 1, with ϑ = 1 representing the case of independent variables and ϑ → ∞ the case of perfectly dependent variables. Our experiments are based on the below models for X = (X 1 , . . . , X d ) (we take d ≤ 5). (i) [Clayton-Fréchet model] Let U follow a Clayton copula with dependence parameter θ = 10. Take X j = (− log(U j )) −γ with γ = 1/3. Then X has Fréchet marginal distributions with tail index 1/3 and a Clayton copula dependence structure. (ii) [Gaussian-Student model] Let U follow a Gaussian copula. Pairwise correlation parameters are taken as ρ 1,2 = 0.8 for d = 2, (ρ 1,2 = 0.8, ρ 1,3 = 0.6, ρ 2,3 = 0.4) for d = 3, (ρ 1,2 = 0.8, ρ 1,3 = 0.6, ρ 1,4 = 0.4, ρ 2,3 = 0.5, ρ 2,4 = 0.4, ρ 3,4 = 0.4) for d = 4 and (ρ 1,2 = 0.8, ρ 1,3 = 0.6, ρ 1,4 = 0.4, ρ 1,5 = 0.2, ρ 2,3 = 0.5, ρ 2,4 = 0.4, ρ 2,5 = 0.3, ρ 3,4 = 0.6, ρ 3,5 = 0.4, ρ 4,5 = 0.3) for d = 5. Take X j = F −1 ν (U j ) where F ν is the Student-t distribution function with ν = 3 degrees of freedom. Then X has Student-t marginal distributions with tail index 1/3 and a Gaussian copula dependence structure. (iii) [Gumbel-Fréchet model] Let U follow a Gumbel copula with dependence parameter ϑ = 3. Take X j = (− log(U j )) −γ with γ = 1/3. Then X has Fréchet marginal distributions with tail index 1/3 and a Gumbel copula dependence structure. (iv) [Multivariate Student-t model] Let X follow a zero-mean multivariate Student-t distribution with ν = 3 degrees of freedom and a scale matrix given by ρ 1,2 = 0.8 for d = 2, (ρ 1,2 = 0.8, ρ 1,3 = 0.6, ρ 2,3 = 0.4) for d = 3, (ρ 1,2 = 0.8, ρ 1,3 = 0.6, ρ 1,4 = 0.4, ρ 2,3 = 0.5, ρ 2,4 = 0.4, ρ 3,4 = 0.4) for d = 4 and (ρ 1,2 = 0.8, ρ 1,3 = 0.6, ρ 1,4 = 0.4, ρ 1,5 = 0.2, ρ 2,3 = 0.5, ρ 2,4 = 0.4, ρ 2,5 = 0.3, ρ 3,4 = 0.6, ρ 3,5 = 0.4, ρ 4,5 = 0.3) for d = 5. In these four models, all univariate margins have the same tail index γ = 1/3. The components of X are asymptotically independent in models (i) and (ii), in the sense that all pairwise tail copulae are identically 0, and asymptotically dependent in models (iii) and (iv). Figure II in Appendix A shows typical samples from each model. It is important to note that even though models (i) and (ii) are technically cases of tail independence, finite samples can show a degree of dependence in the joint empirical tail. We also highlight that a sample generated from models (iii) or (iv) typically shows strong dependence in the joint upper tail. We first study the finite-sample behaviour of the intermediate expectile estimators. In each model, we simulate M = 10 4 samples of size n = m · 10 3 , with m ∈ {1, 1.5, 2, 2.5, 5, 10} and dimension d ∈ {2, 3, 4, 5}. We estimate the d-dimensional expectile ξ τn , with τ n = 1 − 1/ √ n, using the LAWS and QB expectile point estimators and the confidence regions E τn,α and E τn,α , with α = 0.05 (95% nominal coverage probability), described in Section 3.1. 
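For concreteness, samples from model (i) can be drawn in base R through the standard frailty (Marshall-Olkin) representation of the Clayton copula, as in the sketch below; this is our own illustrative generator (the name rclayton_frechet is ours), not necessarily the code used for the simulation study.

```r
rclayton_frechet <- function(n, d, theta = 10, gamma = 1/3) {
  # Clayton(theta) copula via its gamma frailty representation, then Frechet
  # margins with tail index gamma through X_j = (-log U_j)^(-gamma)
  V <- rgamma(n, shape = 1 / theta)              # frailty variable
  E <- matrix(rexp(n * d), nrow = n, ncol = d)   # iid unit exponentials
  U <- (1 + E / V)^(-1 / theta)                  # Clayton copula sample
  (-log(U))^(-gamma)
}

set.seed(1)
X <- rclayton_frechet(1000, d = 2)   # one sample from the Clayton-Frechet model
```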
Then, we compute a Monte Carlo approximation of the relative MSE of the LAWS and QB point estimators across all components and we report the actual (non-)coverage probabilities of the associated confidence regions (see Tables I, II and III in Appendix A) . With every model except the Gumbel-Fréchet model, the actual coverage probability of the LAWS confidence region estimator is close to the nominal level. With the Gumbel-Fréchet model, permissive confidence regions are generally obtained. This seems to be due to the strong dependence structure of the Gumbel-Fréchet model which is somewhat difficult to estimate accurately. The conclusions for the QB confidence region are similar. By contrast, the naive confidence regions obtained assuming that the margins are independent (and thus ignoring the question of the estimation of the asymptotic dependence between components) provide unsuitable regions whose actual non-coverage probabilities are either substantially higher than desired (for the LAWS estimator) or virtually equal to zero (for the QB estimator). Our proposal therefore allows to obtain considerably more accurate confidence regions than existing methods; moreover, while the LAWS confidence region performs best in the Clayton-Fréchet and Gaussian-Student-t models, the QB confidence region is better in the Gumbel-Fréchet and Multivariate Student-t models, and results do not seem to deteriorate significantly with increasing dimension (at least up to d = 5). To assess the performance of our methods at the extreme level, we keep the same simulation setting but with the difference that a single sample size n = 1,000 and the extreme level τ n = 0.999 = 1 − 1/n are used. Monte Carlo approximations of the actual coverage probabilities are displayed in Figure III , see Appendix A. Our proposed confidence region estimators provide satisfactory estimation results at the extreme level, with the exception of the QB region in the asymptotically independent case of the Clayton-Fréchet model. The LAWS-based confidence region seems to perform well, with very stable coverage probabilities close to the nominal level in Fréchet models, and a clearly identified stability region for values of k around 50 with a coverage probability close to the nominal level in Student models. There is no clear conclusion as to which method is best in a given case, with the LAWS method being at times slightly more conservative than the QB method, and in other models slightly more permissive. Results seem to be robust with respect to the dimension. In our final simulation experiment we check the performance of the tests for equality of several extreme expectiles. We keep the models of Section 4.2, although in each of the models (i)-(iv) we allow the tail index γ to vary within the interval [0.1, 0.4], for one margin of the joint distribution. In each case we simulate M = 10,000 samples of size n = 1,000 from the thus modified models. The null hypothesis of equal extreme expectiles, i.e. H 0 : ξ τ n ,1 = · · · = ξ τ n ,d = ξ τ n is then true if and only if γ = 1/3. Then we perform the LAWS and QB tests and we compute the proportion of rejections, thus deriving a Monte Carlo approximation of the type I error probability and the corresponding power of the test. Table IV in Appendix A reports the type I errors of the LAWS and QB versions of the test for τ n = 0.999 = 1 − 1/n, k = 50 and d = 2, 3, 4, 5. 
The QB version has a larger type I error than anticipated in the case of the Clayton-Fréchet model; in the other cases, our tests tend to have a lower type I error than expected. However, results obtained with the LAWS version tend to improve as the dimension increases, approaching the nominal level when d = 5. Figure IV in Appendix A displays the power of both versions of the test when γ ∈ [0.1, 0.4] and d = 2, 3, 4, 5. The power curves reflect the excellent power of both tests. The rejection rate increases (decreases) for stronger (weaker) dependence structures and the highest (lowest) rejection rate is indeed obtained with the Gumbel-Fréchet (Clayton-Fréchet) model. Our testing procedures appear to yield reasonably stable results across a wide range of parameters k, as Figure V in Appendix A shows in the case d = 2. The analysis of exchange rate risk is one of the most difficult tasks in economics. Links between exchange rates and fundamental economic principles have been established (see e.g. Engel and West, 2005) . A modern approach to understanding exchange rates uses a supplyand-demand analysis of the exchange rate seen as the price of domestic assets in terms of foreign assets (see Madura, 2014) . The exchange rate is influenced by a positive interest rate differential, in the short term, implying an appreciation of the home currency. In the long term, all other things being equal, a rise in a country's price level is correlated with depreciation of its currency, while an increased demand for exports (imports) is correlated with appreciation (depreciation) of its currency, see e.g. Harrison et al. (1992, p. 201) . We consider negative weekly log-returns (returns for brevity) of the exchange rates of the Great British Pound (GBP) versus the United States Dollar (USD), the Japanese Yen (JPY), the Canadian Dollar (CAD), the Australian Dollar (AUD) and the Norwegian Krone (NOK), from January 1, 1980 to June 26, 2020 1 . These samples of size n = 2,133 are plotted on the top panels of Figure VI in Appendix A. They are technically, of course, time series data; in our results we do not enter into the important but difficult question of handling serial dependence. This is the reason why, as suggested by Cai et al. (2015) , we chose to consider weekly returns as a way to substantially reduce the amount of dependence present in the exchange rates. The United States and Japan are developed, industrialised economies characterised by the presence of a large number of global firms and, in recent years, similar monetary policy leading to low interest rates, therefore a substantial degree of dependence between the GBP-USD and GBP-JPY exchange rates is to be expected. Canada and Australia are close partners of United States, accessing the American market for exports, attracting American capital and technology for economic development and sharing large international finance institutions. Hence, a fairly strong dependence among the GBP-USD, GBP-CAD and GBP-AUD exchange rates is expected as well. Such expectations are confirmed from the scatterplots in Figure 1 (see also Figure VIII in Appendix A). We also find visible dependence within the (GBP-CAD, GBP-NOK) and (GBP-AUD, GBP-NOK) pairs. Table 1 gives estimated correlations between exchange rates, suggesting strong correlations between GBP-USD and GBP-JPY, GBP-USD and GBP-CAD. 
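For concreteness, the quantities reported in Table 1 can be computed along the following lines: negative weekly log-returns, their sample correlations, and rank-based pairwise extremal coefficient estimates ω̂ = 2 − R̂(1,1), discussed in the next paragraphs. Here rates is a placeholder data frame of weekly exchange-rate levels with one column per currency pair (not the actual data object used in the paper), and the rank-based tail copula estimate is one simple variant, not necessarily the exact modification of the Schmidt-Stadtmüller estimator used in our procedures.

```r
# rates: hypothetical data frame of weekly levels, one column per currency pair
returns <- -apply(log(rates), 2, diff)   # negative weekly log-returns
round(cor(returns), 2)                   # sample correlations, as in Table 1

tail_copula_11 <- function(xj, xl, k) {
  # proportion, out of k, of observations simultaneously among the k largest in
  # both margins: a simple rank-based estimate of the tail copula R_{j,l}(1, 1)
  n <- length(xj)
  sum(rank(xj) > n - k & rank(xl) > n - k) / k
}
extremal_coefficient <- function(xj, xl, k) 2 - tail_copula_11(xj, xl, k)

# hypothetical column names, for illustration only
extremal_coefficient(returns[, "GBP-USD"], returns[, "GBP-CAD"], k = 150)
```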
The purpose of analysing multiple exchange rate returns simultaneously is that it can be useful in understanding and predicting the risks that nations and companies exposed to the global economy are subjected to. Risk analysis is most often based on Value-at-Risk (VaR) at the 99.9% level (see e.g. Drees, 2003; de Haan et al., 2016) or on a quantile at level 1 − p n where p n is not larger than 1/n. The potential of extreme expectiles for risk assessment is illustrated by Bellini and Di Bernardino (2017) , Daouia et al. (2018) and Padoan and Stupfler (2020) , where it is found that parametric and nonparametric expectile-based forecasts may provide similar outcomes to those obtained with VaR, in suitable settings. We analyse here the joint tail risk in multiple exchange rate returns through our expectile-based multivariate inferential procedures, at the extreme level τ n = 1 − p n = 0.9995312 with p n = 1/n. Point estimates and the 95% confidence intervals of the tail index for the five series are displayed in the middle row of Figure VI in Appendix A. The tails of the individual series seem moderately heavy; estimates are fairly stable for a series-dependent interval of values of k. To select a common range, we plot the trace of the estimated variance-covariance matrix V ,LAWS n (γ, R) relative to the extrapolating estimator ξ τ n (discussed in Section 3.2) that combines together individual information coming from the five exchange rates returns. Figure VII suggests that the trace of V ,LAWS n (γ, R) is stable for k ∈ [50, 150] . In the sequel, we use k = 150 in our inferential procedures. Tail index point estimates of individual exchange rate returns with corresponding 95% confidence intervals are reported in Table 2 . The lower off-diagonal values in Table 1 are pairwise extremal coefficient estimates. Recall that the bivariate extremal coefficient is a tail dependence measure ω ∈ [1, 2], equal to the value at (1, 1) of the stable tail dependence function (Drees and Huang, 1998) , with the lower and upper bounds representing the case of complete dependence and independence (see e.g. Beranger and Padoan, 2015) . For two exchange rates labelled j and , say, their extremal coefficient is estimated with ω n,j, = 2 − R τn,j, (1, 1), where R τn,j, (1, 1) is defined in (7). These suggest that there is a fairly strong dependence in the joint tail of the two-dimensional exchange rate returns (GBP-USD, GBP-CAD) and (GBP-CAD, GBP-AUD), with milder dependence in the other pairs of returns. In addition to tail index estimates, Table 2 reports the expectile point estimates obtained with the extrapolating LAWS estimator ξ τ n and QB estimator ξ τ n with associated marginal confidence intervals I τ n ,j,α and I τ n ,j,α . We have also computed the two-and three-dimensional asymptotic 95% confidence regions for all the pairs and triplets of exchange rate returns, using the LAWS and QB confidence region estimators E τ n ,α and E τ n ,α . Figure 1 displays these estimated regions for the most tail dependent pairs and triplets of exchange rate returns (plots for other pairs and triplets are available in Figure VIII , see Appendix A). These devices are an important tool for the quantification of the potential contamination risk that a certain type of international economy might be subjected to, and therefore could be useful for risk managers. Finally, we complete the analysis by performing our testing procedures to assess the validity of the assumption of equal risk severity among exchange rate returns. 
We did this by applying the two versions of the test described in Section 3.3. The hypothesis of equal expectile risk severity among all exchange rate returns is rejected at the 5% significance level using both versions of the test (see Table 3). Then, we perform the tests again, assuming the same expectile risk severity between pairs of exchange rate returns only. The outcome of the pairwise tests suggests rejecting the null hypothesis at the 5% significance level for pairs involving the GBP-JPY exchange rate (except for the (GBP-JPY, GBP-AUD) pair). This suggests that overall the GBP-JPY exchange rate return seems to carry different extreme risk than the other returns; it is interesting to note that this is not obvious from either marginal tail index confidence intervals or extreme expectile confidence intervals, which strongly overlap across marginals. Leaving out the GBP-JPY exchange rate and testing again for equality of extreme expectiles does not give empirical evidence to reject the null hypothesis, confirming our intuition.

Table 3: Hypothesis testing outcome for the exchange rate returns data, obtained with k = 150 and τ_n = 0.9995312. Starred test statistics indicate rejection at the α = 5% significance level. The null hypothesis tested by Λ^Q_n is obtained by replacing expectiles in the first column by their quantile counterparts.

By way of comparison, we carried out an analogue test on the equality of extreme quantiles, which is built on the joint asymptotic normality of the Weissman quantile estimators across marginals (the proof is identical to that of Theorem 2.4). Neglecting the bias term, setting Z = Z_n = log q̂_{τ_n} and plugging in an estimate of its covariance matrix, we then consider the corresponding test statistic in order to test the hypothesis H_0: q_{τ_n,1} = · · · = q_{τ_n,d} = q_{τ_n}. When Λ^Q_n > χ²_{d−1,1−α}, the test rejects this hypothesis with asymptotic type I error α. Table 3 reports the results of the test applied to the exchange rate returns data. It is readily seen here that this test is much less conclusive than our expectile-based tests, with only the hypothesis q_{τ_n,GBP-JPY} = q_{τ_n,GBP-NOK} being narrowly rejected. As a result, our inferential methodology based on the expectile risk measure appears to be more sensitive than its quantile-based competitor in detecting differences in tail risk, suggesting that the use of expectile-based inference is beneficial in tail risk assessment.
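The quantile analogue relies on the Weissman extrapolating quantile estimator. A base-R sketch is given below, reusing the hypothetical hill_estimate helper from the earlier sketches; the statistic Λ^Q_n is then obtained by plugging the log extrapolated quantiles and an estimate of their covariance matrix into the generic deviance computation sketched in Section 3.3.

```r
weissman_quantile <- function(x, k, tau_prime) {
  # Weissman (1978) extrapolating quantile estimator:
  # q_hat(tau') = ((1 - tau') / (k/n))^(-gamma_hat) * X_{n-k, n}
  n <- length(x)
  gamma_hat <- hill_estimate(x, k)
  ((1 - tau_prime) * n / k)^(-gamma_hat) * sort(x)[n - k]
}
```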
Appendix A
This section contains figures and tables linked to our results in Sections 4 and 5.
• Figure I gives detailed results of our experiments concerning marginal uncertainty about tail expectiles, considered in Section 4.1.
• Figures II and III, as well as Tables I, II and III, give further information about our experiments on joint tail expectile inference in Section 4.2.
• Table IV and Figures IV and V contain additional information related to our experiments on testing for equality of extreme expectiles in Section 4.3.
• Figures VI, VII and VIII in Section 5 give additional results on tail index and extreme expectile estimates related to our real data analysis, as well as certain bivariate and trivariate confidence regions for extreme expectiles.
m = 1.0   6.52 (0.01)   7.10 (0.02)   7.65 (0.00)   8.82 (0.01)
m = 1.5   5.72 (0.00)   6.46 (0.01)   7.32 (0.00)   7.43 (0.00)
m = 2.0   5.58 (0.00)   6.05 (0.00)   6.66 (0.00)   7.45 (0.00)
m = 2.5   5.49 (0.00)   6.11 (0.00)   6.71 (0.00)   6.93 (0.00)
m = 5.0

Table III: Monte Carlo actual non-coverage probability (in %) for the QB confidence region estimator E_{τ_n,α} at the intermediate level, with n = m · 10^3 (left column), τ_n = 1 − 1/√n and 95% nominal level. Between brackets we report the coverage probability obtained assuming independence between the margins.

Table IV: Monte Carlo rejection rate (in %) of the tests of equality of extreme expectiles, with 5% nominal type I error rate, for n = 1,000, τ_n = 0.999 and k = 50.
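To make the kind of Monte Carlo protocol summarised in these captions concrete, the following minimal Python sketch estimates the actual non-coverage probability of a naive 95% plug-in confidence interval for a single intermediate expectile of a heavy-tailed margin. It is an illustration under stated assumptions, not the authors' implementation: a single Pareto margin is used, the tail index γ is treated as known rather than estimated, the asymptotic variance of the univariate LAWS estimator is assumed to be 2γ^3/(1 − 2γ), and the function names (sample_expectile, pareto_expectile, noncoverage) are hypothetical.

```python
# Minimal sketch: Monte Carlo non-coverage of a plug-in interval for an
# intermediate expectile of a Pareto margin (assumptions stated in the text).
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)

def sample_expectile(x, tau, n_iter=100):
    """LAWS sample expectile, via the weighted-mean form of the first-order
    condition u = sum(w_i x_i) / sum(w_i), with w_i = tau if x_i > u,
    and w_i = 1 - tau otherwise."""
    u = x.mean()
    for _ in range(n_iter):
        w = np.where(x > u, tau, 1.0 - tau)
        u = np.sum(w * x) / np.sum(w)
    return u

def pareto_expectile(tau, alpha):
    """True tau-expectile of the Pareto(alpha) distribution on [1, infinity)."""
    mean = alpha / (alpha - 1.0)
    def foc(u):
        # e = E[X] + (2*tau - 1)/(1 - tau) * E[(X - e)_+], with
        # E[(X - u)_+] = u^(1 - alpha) / (alpha - 1) for a Pareto(alpha) margin
        return u - mean - (2.0 * tau - 1.0) / (1.0 - tau) * u ** (1.0 - alpha) / (alpha - 1.0)
    return brentq(foc, 1.0, 1e8)

def noncoverage(n=1000, gamma=0.25, n_rep=2000):
    """Monte Carlo non-coverage (in %) of the 95% plug-in interval at tau_n = 1 - 1/sqrt(n)."""
    alpha = 1.0 / gamma
    tau_n = 1.0 - 1.0 / np.sqrt(n)                 # intermediate level, as in Table III
    e_true = pareto_expectile(tau_n, alpha)
    var = 2.0 * gamma ** 3 / (1.0 - 2.0 * gamma)   # assumed asymptotic variance of LAWS
    half = 1.96 * np.sqrt(var / (n * (1.0 - tau_n)))
    misses = 0
    for _ in range(n_rep):
        x = rng.pareto(alpha, size=n) + 1.0        # Pareto(alpha) sample on [1, infinity)
        e_hat = sample_expectile(x, tau_n)
        if not (e_hat * (1.0 - half) <= e_true <= e_hat * (1.0 + half)):
            misses += 1
    return 100.0 * misses / n_rep

print(noncoverage())   # actual non-coverage in %, to compare with the 5% nominal rate
```

Replacing the known γ by an estimator and extending the marginal interval to a joint region over several margins, with cross-dependence described through the tail copula, would bring this sketch closer to the experiments reported in Tables III and IV, but those steps are omitted here for brevity.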
[Figure panel: τ_n' = 0.999]
Figure VIII: Two- and three-dimensional 95% ...

Part of this research was carried out when the authors were visiting each other at Bocconi University and the University of Nottingham, where G. Stupfler was previously based. Sup-