title: Regression-type analysis for block maxima on block maxima
authors: Miguel de Carvalho, Gonçalo dos Reis, Alina Kumukova
date: 2021-02-18

Abstract. This paper devises a regression-type model for the situation where both the response and covariates are extreme. The proposed approach is designed for the setting where both the response and covariates are themselves block maxima, and thus, contrary to standard regression methods, it takes into account the key fact that the limiting distribution of suitably standardized componentwise maxima is an extreme value copula. An important target in the proposed framework is the regression manifold, which consists of a family of regression lines obeying the latter asymptotic result. To learn about the proposed model from data, we employ a Bernstein polynomial prior on the space of angular densities, which leads to an induced prior on the space of regression manifolds. Numerical studies suggest a good performance of the proposed methods, and a real-data illustration from finance reveals interesting aspects of the comovements of extreme losses between two leading stock markets.

1 Introduction

... covariates via a joint distribution, rather than by specifying a functional relation between response and covariate as, for example, in quantile regression (Koenker and Bassett 1978).

An important result in the field of statistics of extremes, and one that will be fundamental for our developments, is that the properly standardized vector of block maxima converges in distribution to a so-called extreme value copula (Gudendorf and Segers 2010). Thus, a key target in the proposed framework is what we will refer to below as the regression manifold, that is, a family of regression lines that obeys the latter large-sample result. Our methods thus take on board information on the dependence structure between the extreme values so as to assess what effects block maxima covariates can have on a block maxima response. To learn about the proposed model from data, we develop a prior on the space of regression manifolds by resorting to a flexible Bernstein polynomial prior on the space of angular densities, as recently proposed by Hanson et al. (2017).

Our approach contributes to the literature on conditional modeling given large observed values (e.g. Wang and Stoev 2011; Cooley et al. 2012); nonetheless, our focus differs from the latter papers in a number of important ways, as we describe next. The main difference is that, as anticipated above, here the focus is on devising a regression framework for a block maxima response on block maxima covariates, whereas the latter papers focus mainly on using the conditional density as a way to make probabilistic statements about the likelihood of an extreme given the occurrence of another extreme. Since our main target of analysis is regression, our method has some links with statistical approaches for nonstationary extremes (e.g. Katz 2013; Eastoe and Tawn 2009; Wang and Tsai 2009; Yee and Stephenson 2007; Coles 2001, Section 6); the most elementary version of approaches for nonstationary extremes aims to learn about how the limiting law of a suitably standardized block maxima response $S_{\mathbf{x}}$ changes according to a covariate $\mathbf{x} = (x_1, \ldots, x_p)^{\mathsf{T}}$, via the specification

$(S \mid \mathbf{X} = \mathbf{x}) \sim \mathrm{GEV}(\mu_{\mathbf{x}}, \sigma_{\mathbf{x}}, \xi_{\mathbf{x}}).$  (1.1)
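To make the specification (1.1) concrete, here is a minimal sketch of how such a nonstationary GEV model is typically fit by maximum likelihood, with a linear trend in the location parameter only; the synthetic data, trend form, and starting values are illustrative assumptions, not part of any method proposed in this paper (note that scipy parametrizes the GEV shape as $c = -\xi$).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

def neg_log_lik(theta, s, x):
    """Negative log-likelihood of (S | X = x) ~ GEV(mu_x, sigma, xi),
    with mu_x = b0 + b1 * x and constant scale and shape."""
    b0, b1, log_sigma, xi = theta
    mu = b0 + b1 * x
    # scipy's genextreme uses the shape convention c = -xi
    return -genextreme.logpdf(s, c=-xi, loc=mu, scale=np.exp(log_sigma)).sum()

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=500)                        # covariate values
s = genextreme.rvs(c=-0.1, loc=1 + 0.5 * x, scale=1.0,  # synthetic block maxima
                   random_state=rng)
fit = minimize(neg_log_lik, x0=np.array([0.0, 0.0, 0.0, 0.1]),
               args=(s, x), method="Nelder-Mead")
print(fit.x)  # roughly (1, 0.5, 0, 0.1) = (b0, b1, log sigma, xi)
```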
Since the approach in (1.1) is built from the univariate theory of extremes, it is not tailored for conditioning on another variable being extreme, as it fails to take on board information from the dependence structure between the extremes. Additionally, the method proposed in this work is loosely related to quantile regression (Koenker and Bassett 1978), whose original version consists in modeling the conditional quantile of a response Y given a covariate $\mathbf{X} = (X_1, \ldots, X_p)^{\mathsf{T}}$ in a linear fashion, that is,

$F^{-1}(q \mid \mathbf{x}) = \beta_{0,q} + \boldsymbol{\beta}_q^{\mathsf{T}} \mathbf{x}, \qquad q \in (0, 1),$  (1.2)

where $F^{-1}(q \mid \mathbf{x}) = \inf\{y : F(y \mid \mathbf{x}) \geq q\}$ and $F(y \mid \mathbf{x})$ is the distribution function of $Y \mid \mathbf{X} = \mathbf{x}$. Versions of quantile regression that aim to equip (1.2) with the ability to extrapolate into the tail of Y are often known as extremal quantile regression methods (e.g. Chernozhukov 2005). While flexible and sturdy, such quantile regression-based approaches do not take into account the fact that the limiting joint distribution of suitably standardized componentwise maxima is an extreme value copula, and thus fail to be equipped with the ability to extrapolate into the joint tail. The approach proposed in this paper takes such knowledge on the limiting joint distribution into consideration and assumes a conditional law that stems from that knowledge, rather than imposing a linear specification as in (1.2); yet, the proposed approach is not to be seen as a competitor to quantile regression, but rather as a method based on some loosely related principles and specific to the context where we have a block maxima response and block maxima covariates.

The remainder of the paper unfolds as follows. In Section 2 we introduce the proposed model, and Section 3 devises an approach for learning about it from data. Section 4 reports the main findings of a Monte Carlo simulation study. We showcase the proposed methodology in a real data application to stock market data in Section 5. Finally, in Section 6 we present closing remarks. Proofs and derivations can be found in the appendix, and further numerical experiments and other technical details are presented in the supplementary material.

2 Modelling limiting block maxima conditioned on block maxima

Prior to introducing a regression of block maxima on block maxima, we need to lay groundwork on multivariate extremes. Let $\{(Y_i, \mathbf{X}_i)\}_{i=1}^{n}$ be a sequence of independent random vectors with unit Fréchet marginal distribution functions, $\exp(-1/z)$, for $z > 0$. In our setup, $Y_i$ should be understood as a response, whereas $\mathbf{X}_i = (X_{1,i}, \ldots, X_{p,i})$ should be understood as a p-dimensional covariate. Let the componentwise block maxima be $\mathbf{M}_n = (M_{n,y}, M_{n,x_1}, \ldots, M_{n,x_p})$, with $M_{n,y} = \max\{Y_1, \ldots, Y_n\}$ and $M_{n,x_j} = \max\{X_{j,1}, \ldots, X_{j,n}\}$, for $j = 1, \ldots, p$. Under this setup, it is well known that the vector of normalized componentwise maxima $\mathbf{M}_n / n$ converges in distribution to a random vector $(Y, \mathbf{X})$ which follows a multivariate extreme value distribution with joint distribution function

$G(y, \mathbf{x}) = \exp\{-V(y, \mathbf{x})\}, \qquad y, x_1, \ldots, x_p > 0.$  (2.1)

Here,

$V(y, \mathbf{x}) = d \int_{\Delta_d} \max\left(\frac{w_1}{y}, \frac{w_2}{x_1}, \ldots, \frac{w_d}{x_p}\right) \mathrm{d}H(\mathbf{w}), \qquad d = p + 1,$

is the exponent measure; see, for instance, de Haan and Resnick (1977), Pickands (1981), and Coles (2001, Theorem 8.1). In addition, H is a parameter of the multivariate extreme value distribution G known as the angular measure, which controls the dependence between the extreme values; specifically, H is a probability measure on the unit simplex $\Delta_d = \{(w_1, \ldots, w_d) \in [0, 1]^d : w_1 + \cdots + w_d = 1\}$ obeying the moment constraint

$\int_{\Delta_d} \mathbf{w} \, \mathrm{d}H(\mathbf{w}) = d^{-1} \mathbf{1}_d,$  (2.2)

where $\mathbf{1}_d$ is a vector of ones in $\mathbb{R}^d$. If H is absolutely continuous with respect to the Lebesgue measure, then its density is given by the Radon-Nikodym derivative $h = \mathrm{d}H / \mathrm{d}\mathbf{w}$, for $\mathbf{w} \in \Delta_d$.
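The standardization behind (2.1) is easy to illustrate by simulation; the following minimal sketch uses independent unit Fréchet components (so that the limiting dependence structure is the independence copula, chosen purely for simplicity) and checks the Fréchet margin of $\mathbf{M}_n / n$. All names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 2000

# unit Frechet draws: F(z) = exp(-1/z), so Z = -1/log(U) with U ~ Uniform(0,1)
z = -1.0 / np.log(rng.uniform(size=(reps, n, 2)))

# componentwise block maxima, standardized by the block size n
m = z.max(axis=1) / n                      # rows are draws of (Y, X)

# for unit Frechet margins, M_n / n is again exactly unit Frechet,
# so P(M_n / n <= 1) should match exp(-1) ~ 0.368
print((m[:, 0] <= 1.0).mean(), np.exp(-1.0))
```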
We are now ready to introduce our regression method for block maxima on block maxima. We define the regression manifold as the family of regression lines

$\mathcal{L} = \{\mathcal{L}_q : q \in (0, 1)\}, \qquad \mathcal{L}_q = \{y_{q \mid \mathbf{x}} : \mathbf{x} \in (0, \infty)^p\},$  (2.3)

where

$y_{q \mid \mathbf{x}} = G_{Y \mid \mathbf{X}}^{-1}(q \mid \mathbf{x})$  (2.4)

is a conditional quantile of a multivariate extreme value distribution, with $q \in (0, 1)$ and $\mathbf{x} \in (0, \infty)^p$, and $G_{Y \mid \mathbf{X}}(y \mid \mathbf{x}) = \Pr\{Y \leq y \mid \mathbf{X} = \mathbf{x}\}$ is a conditional multivariate extreme value distribution function. In higher dimensions $G_{Y \mid \mathbf{X}}$ can be expressed with the help of a joint multivariate extreme value density $g_{Y, \mathbf{X}}$, whose expression has been derived by Stephenson and Tawn (2005). By repeated differentiation of (2.1) one obtains

$g_{Y, \mathbf{X}}(y, \mathbf{x}) = G(y, \mathbf{x}) \sum_{i=1}^{d} \sum_{j=1}^{n_i} \prod_{\Lambda \in r_{ij}} \{-V_{\Lambda}(y, \mathbf{x})\},$  (2.5)

where $V_{\Lambda}(y, \mathbf{x})$ corresponds to the mixed partial derivative of the exponent measure $V(y, \mathbf{x})$ with respect to the lth components of $(y, \mathbf{x})$ such that $l \in \Lambda$, $n_i$ is the number of partitions of $\{1, \ldots, d\}$ of size $i = 1, \ldots, d$, and $r_{ij}$ is the jth partition of $\{1, \ldots, d\}$ of size i, with $1 \leq j \leq n_i$.

In the particular case where we have a single covariate (p = 1), the regression manifold $\mathcal{L}$ in (2.3) can be derived using properties of bivariate copulas; see Appendix A. Accordingly, for an absolutely continuous angular measure H (with density h), it follows that

$G_{Y \mid X}(y \mid x) = \exp\{x^{-1} - V(y, x)\} \, 2 \int_{\omega(x, y)}^{1} w \, h(w) \, \mathrm{d}w, \qquad x, y > 0,$  (2.6)

where $\omega(x, y) = x / (x + y)$, and $y_{q \mid x}$ is then calculated via (2.4).

We now derive regression manifolds $\mathcal{L}$ in (2.3) for the cases of independent and perfectly dependent extremes, which are depicted in Figure 2.1. When extremes are independent, H assigns equal mass to the vertices of the simplex, which also corresponds to asymptotic independence of $\mathbf{X}$ and Y (Hüsler and Li 2009), resulting in

$\mathcal{L}_q = \{y_{q \mid \mathbf{x}} = -1/\log q : \mathbf{x} \in (0, \infty)^p\}.$  (2.7)

When extremes are perfectly dependent, the angular measure H assigns all its mass to the barycenter of the simplex, $d^{-1}\mathbf{1}_d$, leading to $G(y, \mathbf{x}) = \exp\{-\max(y^{-1}, x_1^{-1}, \ldots, x_p^{-1})\}$. Taking derivatives of $G(y, \mathbf{x})$ in this case is non-trivial, and we replace the maximum function with a soft maximum (Cook 2011) so as to obtain an approximation to the shape of the regression lines for perfectly dependent extremes. The soft maximum approximation for the regression lines for perfectly dependent extremes is

$\tilde{\mathcal{L}}_q = \{\tilde{y}_{q \mid \mathbf{x}} = \min(x_1, \ldots, x_p) : \mathbf{x} \in (0, \infty)^p\}.$  (2.8)

Thus, regression lines for the case of perfectly dependent extremes do not depend on q. See Appendix B for the derivation, and Figure 2.1 for a chart of the corresponding regression manifold.

We end this section with comments on properties of regression manifolds. Trivially, regression lines obey the standard properties of quantile functions (van der Vaart 1998, Chap. 21). Less trivial, however, is the fact that monotone regression dependence of bivariate extremes (Guillem 2000, Theorem 1) implies that the regression lines $y_{q \mid x}$ in (2.4) are non-decreasing in x, for p = 1, under some mild assumptions.

Proposition 1 (Monotonicity of regression manifold) Let $G_{Y \mid X}(y \mid x) = \Pr\{Y \leq y \mid X = x\}$ be a conditional bivariate extreme value distribution function, which we assume to be jointly continuously differentiable and strictly increasing in y for any fixed $x \in (0, \infty)$. Then, the regression lines for bivariate extremes $(0, \infty) \ni x \mapsto y_{q \mid x}$ are non-decreasing for all $q \in (0, 1)$.

Proof See Appendix C.

An example of a bivariate extreme value distribution satisfying the assumptions of Proposition 1 is the Logistic model, whose regression manifold is discussed in Example 1 below.
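In practice, for p = 1, a regression line $y_{q \mid x}$ can be evaluated by numerically inverting (2.6), as prescribed by (2.4), using the bivariate form of the exponent measure given in Appendix A. A minimal sketch follows; the Beta(2, 2) angular density (which satisfies the moment constraint by symmetry), the quadrature, and the bracketing interval are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def h(w):
    """Illustrative angular density: Beta(2, 2). Any density on (0, 1)
    symmetric about 1/2 satisfies the moment constraint (2.2)."""
    return 6.0 * w * (1.0 - w)

def V(y, x):
    """Exponent measure V(y, x) = 2 * int_0^1 max(w/x, (1-w)/y) h(w) dw."""
    omega = x / (x + y)   # kink of the integrand, passed to quad for accuracy
    f = lambda w: 2.0 * max(w / x, (1.0 - w) / y) * h(w)
    return quad(f, 0.0, 1.0, points=[omega])[0]

def G_cond(y, x):
    """Conditional law (2.6): exp{1/x - V(y,x)} * 2 int_{omega}^1 w h(w) dw."""
    omega = x / (x + y)
    tail = quad(lambda w: 2.0 * w * h(w), omega, 1.0)[0]
    return np.exp(1.0 / x - V(y, x)) * tail

def regression_line(q, x):
    """y_{q|x} in (2.4): root of G_cond(y, x) = q, found by bisection."""
    return brentq(lambda y: G_cond(y, x) - q, 1e-6, 1e6)

print([round(regression_line(0.5, x), 3) for x in (1.0, 5.0, 10.0)])
```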
We now consider some parametric instances of regression manifolds as defined in (2.3). Charts of regression manifolds for these parametric examples are depicted in Figure 2.2. In Appendix D, we show that, for sufficiently large x, the following linear approximation holds for the regression manifold, $\mathcal{L}_q = \{y_{q \mid x} : x \in (0, \infty)\}$, of the Logistic model from Example 1:

$y_{q \mid x} = \gamma_q + \beta_q x + o(x), \qquad x \to \infty.$  (2.9)

Here, $\gamma_q$ and $\beta_q$ are functions of both α and q (see (6.5) and (6.4)), and o(x) is little-o of x in Bachmann-Landau notation; the numerical accuracy of this approximation is illustrated in the supplementary material.

Example 1 (Logistic) An instance of the Logistic regression manifold can be found in Figure 2.2 (top). It stems from the Logistic bivariate extreme value distribution function,

$G(y, x) = \exp\{-(y^{-1/\alpha} + x^{-1/\alpha})^{\alpha}\}, \qquad x, y > 0,$

where $\alpha \in (0, 1]$ characterizes the dependence between extremes: the closer α is to 0, the stronger the dependence, with the limit $\alpha \to 0$ corresponding to the case of perfect dependence. The conditional distribution of Y given X is thus

$G_{Y \mid X}(y \mid x) = \exp\{x^{-1} - (y^{-1/\alpha} + x^{-1/\alpha})^{\alpha}\} \{1 + (x/y)^{1/\alpha}\}^{\alpha - 1},$

leading to a family of regression lines $\mathcal{L}_q$ in (2.3) whose elements $y_{q \mid x}$, for $x > 0$, can be written in terms of the Lambert W function; see (2.10) in Appendix D. Here, W is the so-called Lambert W function, that is, the multivalued analytic inverse of $f(z) = z \exp(z)$, with z denoting a real or complex number (Borwein and Lindstrom 2016); see the supplementary material for further details. As can be seen from Figure 2.2 (top), the regression lines obey what is claimed in Proposition 1, in the sense that $(0, \infty) \ni x \mapsto y_{q \mid x}$ is non-decreasing for all $q \in (0, 1)$.

Example 2 (Hüsler-Reiss) An instance of the Hüsler-Reiss regression manifold is depicted in Figure 2.2 (middle). It follows from the Hüsler-Reiss bivariate extreme value distribution function, which has the following form:

$G(y, x) = \exp\left[-\frac{1}{y}\,\Phi\left\{\lambda + \frac{1}{2\lambda}\log\left(\frac{x}{y}\right)\right\} - \frac{1}{x}\,\Phi\left\{\lambda + \frac{1}{2\lambda}\log\left(\frac{y}{x}\right)\right\}\right],$

where Φ is the standard Normal distribution function and $\lambda \in (0, \infty]$ is the parameter regulating the dependence between extremes: $\lambda \to 0$ corresponds to perfect dependence and the limit case $\lambda \to \infty$ corresponds to complete independence. The family of regression lines $\mathcal{L}_q$ in (2.3) for this model does not have an explicit representation and is obtained using (2.4) with

$G_{Y \mid X}(y \mid x) = \exp\{x^{-1} - V(y, x)\}\left\{\Phi(A) + \frac{1}{2\lambda}\,\phi(A) - \frac{x}{2\lambda y}\,\phi(B)\right\},$

where $A = \lambda + \frac{1}{2\lambda}\log(y/x)$, $B = \lambda + \frac{1}{2\lambda}\log(x/y)$, and φ is the standard Normal density function.

Example 3 (Coles-Tawn) An instance of the Coles-Tawn regression manifold is depicted in Figure 2.2 (bottom). It follows from the Coles-Tawn bivariate extreme value distribution function, which has the following form:

$G(y, x) = \exp\left[-\frac{1}{x}\{1 - \mathrm{Be}(q; \alpha + 1, \beta)\} - \frac{1}{y}\,\mathrm{Be}(q; \alpha, \beta + 1)\right],$

where $\mathrm{Be}(q; a, b)$ is the distribution function of a Beta distribution with parameters $a, b > 0$, $q = \alpha y^{-1} / (\alpha y^{-1} + \beta x^{-1})$, and $\alpha, \beta > 0$ are the parameters regulating dependence between extremes; the limit $\alpha = \beta \to 0$ corresponds to complete independence, whereas $\alpha = \beta \to \infty$ corresponds to perfect dependence. For fixed α (β), the strength of dependence increases with β (α). The family of regression lines $\mathcal{L}_q$ in (2.3) for this model does not have an explicit representation and is calculated using (2.4), for $x, y > 0$, with $G_{Y \mid X}$ obtained from (2.6); the resulting expression involves $\mathrm{be}(q; a, b)$, the density function of the Beta distribution with parameters $a, b > 0$, and $\gamma = (\alpha + \beta + 2)(\alpha + \beta + 1)$.

Section 2 introduced our key parameter of interest, the regression manifold for block maxima on block maxima, i.e. $\mathcal{L}$ as in (2.3); it commented on some of its properties and gave examples of parametric instances. Next, we discuss Bayesian inference for $\mathcal{L}$.
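Before turning to inference, note that Example 1 lends itself to a direct numerical check of Proposition 1: the sketch below inverts the closed-form conditional distribution of the Logistic model by root finding. The value α = 0.5, the grid of x values, and the use of brentq are illustrative choices.

```python
import numpy as np
from scipy.optimize import brentq

def G_cond_logistic(y, x, alpha):
    """Conditional CDF of Y | X = x under the bivariate Logistic model of
    Example 1, with G(y, x) = exp{-(y^(-1/a) + x^(-1/a))^a}."""
    V = (y ** (-1.0 / alpha) + x ** (-1.0 / alpha)) ** alpha
    return np.exp(1.0 / x - V) * (1.0 + (x / y) ** (1.0 / alpha)) ** (alpha - 1.0)

def y_q_given_x(q, x, alpha):
    """Regression line (2.4) for the Logistic model, by numerical inversion."""
    return brentq(lambda y: G_cond_logistic(y, x, alpha) - q, 1e-8, 1e8)

# Proposition 1: x -> y_{q|x} is non-decreasing for every q in (0, 1)
for q in (0.25, 0.50, 0.75):
    line = [y_q_given_x(q, x, alpha=0.5) for x in (1.0, 2.0, 5.0, 10.0)]
    assert all(a <= b for a, b in zip(line, line[1:]))
    print(q, np.round(line, 3))
```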
3.1 Induced prior on the space of regression manifolds for p = 1

In this section we discuss how to learn about regression manifolds from data. To achieve this, we resort to the Bayesian paradigm and define an induced prior on the space of regression manifolds by resorting to a flexible prior on the space of all angular measures recently proposed by Hanson et al. (2017). To lay the groundwork, we start by defining the setup of interest. Let

$R_i = X_i + Y_i, \qquad W_i = \frac{X_i}{X_i + Y_i}, \qquad i = 1, \ldots, n,$

be the pseudo-angular decomposition of the observations. In de Haan and Resnick (1977) it is shown that the convergence of normalized componentwise maxima to G is equivalent to the following weak convergence of measures:

$\Pr(W \in \cdot \mid R > r) \xrightarrow{w} H(\cdot), \qquad r \to \infty.$

This means that when the radius R is sufficiently large, the pseudo-angles W are nearly independent of R and follow approximately a distribution associated with the angular measure H. Thus, to learn about $\mathcal{L}_q$ in (2.3), we first learn about H based on $k = |\{W_i : R_i > u, \ i = 1, \ldots, n\}|$ exceedances above a large threshold u, with a methodology we describe next.

Following Hanson et al. (2017), we model the angular density h via a Bernstein polynomial defined on the unit simplex $\Delta_d$, and hence the basis polynomials are Dirichlet densities. More precisely, our specification for the angular density is

$h(\mathbf{w}) = \sum_{|\boldsymbol{\alpha}| = J} \pi_{\boldsymbol{\alpha}} \, \mathrm{dir}(\mathbf{w} \mid \boldsymbol{\alpha}), \qquad \mathrm{dir}(\mathbf{w} \mid \boldsymbol{\alpha}) = \frac{\Gamma(J)}{\prod_{j=1}^{d} \Gamma(\alpha_j)} \prod_{j=1}^{d} w_j^{\alpha_j - 1},$  (3.1)

where $\boldsymbol{\alpha} \in \mathbb{N}^d$, $|\boldsymbol{\alpha}| = \alpha_1 + \cdots + \alpha_d$, and $\Gamma(z) = \int_0^{\infty} x^{z-1} e^{-x} \, \mathrm{d}x$ is the gamma function; finally, in (3.1) the $\pi_{\boldsymbol{\alpha}} > 0$ are weights and $J \in \mathbb{N}$ controls the order of the resulting polynomial. To ensure that the resulting $h(\mathbf{w})$ is a valid angular density (i.e. an actual density satisfying the moment constraint (2.2)), the weights must obey

$\sum_{|\boldsymbol{\alpha}| = J} \pi_{\boldsymbol{\alpha}} = 1, \qquad \sum_{|\boldsymbol{\alpha}| = J} \pi_{\boldsymbol{\alpha}} \frac{\alpha_j}{J} = \frac{1}{d}, \qquad j = 1, \ldots, d.$  (3.2)

The normalization and mean constraints in (3.2) imply that there are $m - d$ free parameters, where $m = \binom{J-1}{d-1}$ is the number of basis functions in (3.1); denote such free weights as $\{\pi_{\boldsymbol{\alpha}} : \boldsymbol{\alpha} \in F\}$, where $F = \{\boldsymbol{\alpha} \in \mathbb{N}^d : |\boldsymbol{\alpha}| = J, \ \boldsymbol{\alpha} \notin \{\mathbf{a}^1, \ldots, \mathbf{a}^d\}\}$, with $\mathbf{a}^j$ being a d-vector of ones except that its jth element is $J - d + 1$. Similarly to Hanson et al. (2017), we parametrize the free weights via a generalized logit transformation that implicitly defines the auxiliary parameters $\tilde{\pi}_{\boldsymbol{\alpha}}$ (3.3).

Now, to induce a prior on the space of regression manifolds, we plug the angular density (3.1) into (2.5); subsequent integration with respect to y and inversion of $G_{Y \mid X}(y \mid x)$ lead to an induced prior on the space of regression lines $\mathcal{L}_q$. In detail, to define a prior on the space of regression manifolds we proceed as follows. The Bernstein polynomial prior in (3.1) induces a prior on the space of regression lines via

$G_{Y \mid X}(y \mid x) = \exp\{x^{-1} - V(y, x)\} \, 2 \int_{\omega(x, y)}^{1} w \, h(w) \, \mathrm{d}w,$  (3.4)

where $\omega(x, y) = x / (x + y)$, for $x, y > 0$, and h is as in (3.1). Finally, to complete the model specification, we set a Dirichlet prior on the free parameters $\{\pi_{\boldsymbol{\alpha}} : \boldsymbol{\alpha} \in F\}$, restricted via the indicator function I to weights obeying (3.2), which accordingly induces a prior on the auxiliary parameters $\tilde{\pi}_{\boldsymbol{\alpha}}$ in (3.3).

3.2 Induced prior on the space of regression manifolds for p > 1

When $p \geq 2$ we proceed as in Section 3.1; that is, our induced prior on the space of regression lines is again induced by the Bernstein polynomial prior for the angular density in (3.1), and it follows by solving $G_{Y \mid \mathbf{X}}(y \mid \mathbf{x}) = q$, with $h(\mathbf{w})$ as in (3.1). The expression for the conditional multivariate extreme value distribution $G_{Y \mid \mathbf{X}}(y \mid \mathbf{x})$ for $p \geq 2$ is, however, not as manageable as the one in (3.4). We thus propose an approach for learning about the regression manifold $\mathcal{L}$, as defined in (2.3), via an approximation to the conditional multivariate GEV density (3.5), in the vein of Cooley et al. (2012). An induced prior for g can be devised by combining the approximation in (3.5) with the specification from Section 3.1, which leads to a prior for the conditional multivariate extreme value distribution function (3.6), whose expression involves the multivariate beta function. Hence, we can learn about the regression manifold $\mathcal{L}$ by estimating $\pi_{\boldsymbol{\alpha}}$ as described in Section 3.1, that is, by plugging the Bernstein polynomial estimates (3.1) into the approximation (3.5) and numerically inverting (3.6) with respect to y. This strategy is illustrated numerically in the supplementary material.
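For d = 2, the basis densities in (3.1) reduce to Beta densities, and the constraints (3.2) are easy to exhibit; the sketch below builds such an angular density for an illustrative symmetric choice of weights and J = 5 (in an actual fit the weights would be sampled by MCMC, as described above).

```python
import numpy as np
from scipy.stats import beta

def bernstein_angular_density(w, pi, J):
    """d = 2 version of (3.1): mixture of Beta(k, J - k) basis densities,
    k = 1, ..., J - 1, with weights pi obeying the constraints (3.2):
    sum_k pi_k = 1 (density) and sum_k pi_k * k / J = 1/2 (moment)."""
    return sum(p * beta.pdf(w, k, J - k) for p, k in zip(pi, range(1, J)))

J = 5
pi = np.full(J - 1, 1.0 / (J - 1))   # symmetric weights satisfy (3.2)
k = np.arange(1, J)
assert np.isclose(pi.sum(), 1.0) and np.isclose((pi * k / J).sum(), 0.5)

w = np.linspace(0.01, 0.99, 5)
print(bernstein_angular_density(w, pi, J))
```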
4 Simulation study

We study the finite sample performance of the proposed methods under three data generating scenarios based on the models introduced in Section 2; see Examples 1-3. Specifically, we simulate data as follows:

- Scenario 1 (strongly dependent extremes): Hüsler-Reiss model with λ = 0.1.
- Scenario 2 (weakly dependent extremes): Logistic model.
- Scenario 3 (asymmetrically dependent extremes): Coles-Tawn model.

4.1 One-shot experiments

For now we focus on illustrating the methods in a single-run experiment; a Monte Carlo simulation study will be reported in Section 4.2. To illustrate how the resulting estimates compare with the true regression lines in a one-shot experiment, in each scenario we generate n = 5000 samples. For the analysis we use observations for which $X_i + Y_i > u$, where u is the 95% quantile of the pseudo-radius, providing k = 250 exceedances to fit the model. To learn about regression lines from data, we employ a single-component adaptive Markov chain Monte Carlo (MCMC) algorithm with a wide Dirichlet prior, $\mathrm{Dirichlet}(0.1 \times \mathbf{1}_k)$, defined on a generalized logit transformation of the weights $\pi_{\boldsymbol{\alpha}}$. The length of each MCMC chain is 10 000, with a burn-in period of 4000. The multivariate effective sample sizes are 903, 1121, and 1808 for Scenarios 1, 2, and 3, respectively.

In Figure 4.1 we plot true and estimated regression manifolds under the three scenarios above over the range $(x, y) \in (0, 20] \times (0, 20]$, where 20 corresponds to the 95% quantile of the unit Fréchet marginal distributions. Figure 4.1 shows that, for these one-shot experiments, the proposed estimator recovers well the shape of $\mathcal{L}$ in all three cases, although, as expected, for q closer to 0 and 1 there is some bias. Figure 4.1 also anticipates a feature that we will revisit in Section 4.2, namely that the case of weakly dependent extremes is more challenging, a consequence of the fact that it is more challenging to learn about U-shaped angular densities from data. To take a closer look at the outputs from these numerical experiments, we depict in Figure 4.2 cross sections of the regression manifold, over q and over x, thus leading to regression lines and conditional quantiles for the Hüsler-Reiss (top), Logistic (central), and Coles-Tawn (bottom) models. Once more, we see that the fits are fairly reasonable overall, although a bit more bias is visible for q closer to 0 and 1. Interestingly, it can also be seen from Figure 4.2 that the regression lines are approximately linear for the Logistic model, and we prove that this is indeed the case for large x; see Appendix D.

4.2 Monte Carlo simulation study

To conduct a simulation study we generate 500 Monte Carlo samples of sizes n = 5000 and n = 10000, resulting in k = 250 and k = 500 exceedances, for the three scenarios described in Section 4.1. We use the MCMC algorithm as described in Section 4.1, with the same prior on the $\pi_{\boldsymbol{\alpha}}$. The performance of our methods is visualized via a comparison of posterior mean estimates of the regression lines with the true regression lines $\mathcal{L}_q$ for a few fixed $q \in (0, 1)$. We focus on the region $x \in (0, 20]$, as the bivariate extreme value distribution concentrates most of its mass (at least 90%) in the set $(0, 20] \times (0, 20]$. The regression lines corresponding to the described scenarios for k = 500 are shown in Figure 4.3; a similar chart for k = 250 is available from the supplementary material (Figure SM.3).
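The exceedance-selection step used throughout these experiments (and later in Section 5) can be sketched as follows; here independent unit Fréchet draws stand in for the scenario models, purely as a placeholder.

```python
import numpy as np

def pseudo_angles(x, y, prob=0.95):
    """Pseudo-angular decomposition R = X + Y, W = X / (X + Y); keep the
    angles whose pseudo-radius exceeds its `prob` empirical quantile."""
    r = x + y
    w = x / r
    u = np.quantile(r, prob)
    return w[r > u], u

rng = np.random.default_rng(2)
x, y = -1.0 / np.log(rng.uniform(size=(2, 5000)))   # unit Frechet margins
w_exc, u = pseudo_angles(x, y)
print(len(w_exc), round(u, 2))   # about k = 250 exceedances for n = 5000
```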
Figure 4.3 outlines that the model fits the data from Scenario 1 reasonably well and that, for weakly dependent extremes (Scenario 2) and asymmetrically dependent extremes (Scenario 3), it provides relatively precise estimates for middle values of q, but, as expected, it presents some bias for q close to 0 and 1. Comparing the different sample sizes (i.e. comparing k = 250 with k = 500) indicates that the fits improve as the sample size increases. Overall, the difference in performance across the considered scenarios is mainly related to the degree of association between extremes. For moderately to strongly dependent extremes with bell-shaped angular densities, as in Scenarios 1 and 3, we observe a reasonably good fit, whereas for weakly dependent extremes with U-shaped angular densities, as in Scenario 2, the estimates tend to be less accurate, as it is more challenging to learn about the latter from data.

5 Application to stock market data

We now apply the proposed method to two of the world's biggest stock markets: the NASDAQ (National Association of Securities Dealers Automated Quotations) and the NYSE (New York Stock Exchange). According to the Statistics Portal of the World Federation of Exchanges (https://statistics.world-exchanges.org), the total equity market capitalizations of NASDAQ and NYSE are respectively 20.99 and 24.67 trillion US$, as of April 2021, thus illustrating well the scale of these players in the worldwide stock-exchange industry. The data were gathered from Yahoo Finance (https://finance.yahoo.com) and consist of daily closing prices of the NASDAQ and NYSE composite indices over the period from February 5, 1971 to June 9, 2021.

A key goal of the analysis is to learn about spillover between extreme losses in these markets through the lenses of our model, and thus we focus on modeling negative log returns, which can be regarded as a proxy for losses, and which consist of first differences of prices on a log scale; the resulting sequence of m componentwise weekly maxima of losses for NASDAQ and NYSE is denoted below as $\{(\mathcal{X}_i, \mathcal{Y}_i)\}_{i=1}^{m}$. The sample period under analysis is sufficiently broad to cover a variety of major downturns and selloffs including, for example, those related to the 2007-2010 subprime mortgage crisis, the ongoing China-US trade war, and the 2020 COVID-19 pandemic. We take weekly maxima of negative log returns and convert them to unit Fréchet margins via the transformation

$(X_i, Y_i) = \left(-\frac{1}{\log F_{\mathcal{X}}(\mathcal{X}_i)}, \ -\frac{1}{\log F_{\mathcal{Y}}(\mathcal{Y}_i)}\right), \qquad i = 1, \ldots, m,$

where $F_{\mathcal{X}}$ and $F_{\mathcal{Y}}$ respectively denote the empirical distribution functions (normalized by m + 1 rather than by m to avoid division by zero) of negative log returns for NASDAQ ($\mathcal{X}$) and NYSE ($\mathcal{Y}$); the supplementary material includes the reverse analysis that swaps the roles of NASDAQ and NYSE (i.e. NASDAQ becomes $\mathcal{Y}$ and NYSE becomes $\mathcal{X}$). The raw data and resulting preprocessed data are depicted in Figure 5.1. As can be seen from the latter figure, the composite indices exhibit similar dynamics, reacting alike to different economic shocks (the 9/11 attacks in 2001, the 2008 financial crisis, and the China-US trade war started in 2018). Also, as can be seen from Figure 5.1, the shape of the scatterplot of the negative log returns brought to unit Fréchet margins, in log-log scale and above the boundary threshold, evidences an intermediate level of extremal dependence between the negative log returns.

We now apply our model so as to learn about how the extreme losses on both exchanges relate. To employ our model we start by fitting the angular density via the Bernstein polynomial-based approach from Section 3.1, using the pseudo-angles obtained by thresholding the pseudo-radius at its 95% quantile.
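The data preparation just described amounts to two steps: weekly block maxima of negative log returns, followed by the rank-based transform to unit Fréchet margins. A sketch under the assumption that nasdaq and nyse are pandas Series of daily closing prices indexed by trading day (the variable names are hypothetical):

```python
import numpy as np
import pandas as pd

def weekly_max_neg_log_returns(prices):
    """Negative log returns (first differences of log prices, negated),
    aggregated to weekly block maxima; requires a DatetimeIndex."""
    neg_lr = -np.log(prices).diff().dropna()
    return neg_lr.resample("W").max().dropna()

def to_unit_frechet(z):
    """Empirical marginal transform of Section 5: rank-based CDF
    normalized by m + 1, then -1 / log(F_hat)."""
    m = len(z)
    f_hat = pd.Series(z).rank(method="average").to_numpy() / (m + 1)
    return -1.0 / np.log(f_hat)

# usage (hypothetical inputs):
# x = to_unit_frechet(weekly_max_neg_log_returns(nasdaq))   # NASDAQ losses
# y = to_unit_frechet(weekly_max_neg_log_returns(nyse))     # NYSE losses
```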
Some comments on prior specification and on posterior inference are in order. For our calculations we used a single-component adaptive MCMC method with a wide Dirichlet prior defined on a generalized logit transformation of the weights $\pi_{\boldsymbol{\alpha}}$. We ran an MCMC chain of length 25 000, with a burn-in period of 10 000, and set the number of basis functions equal to the number of exceedances. The specified chain has a multivariate effective sample size of 1 726 657. The obtained fit for the angular density is reported in Figure 5.1 (right). As illustrated by this plot, most of the observed pseudo-angles lie closer to the middle of the interval (0, 1), and the estimate resembles a bell-shaped, right-skewed density, which suggests an asymmetric intermediate dependence between extremal losses on the NASDAQ and NYSE composite indices.

Next we learn about the regression manifold. Figure 5.2 (a) presents the resulting estimates of the regression manifold, together with cross sections in q and x, respectively in (b) and (d), for negative log returns on the NASDAQ and NYSE composite indices in the original margins. The regression manifold is highly nonlinear, and the regression lines in the middle graph differ substantially from those corresponding to independence, tending to be closer to the identity line. Moreover, the cross sections for different values of x reveal considerable variation in the quantiles of y, supporting the conclusion that there is dependence between the negative log returns. To assess the quality of the fitted regression manifold, we depict in Figure 5.1 (b) a QQ-plot of a version of the Dunn and Smyth (1996) randomized quantile residuals adapted to our model, defined as $\varepsilon_i = \Phi^{-1}(G_H(Y_i \mid X_i))$, for $Y_i + X_i > u$, with u denoting the 95% quantile of the pseudo-radius. The latter chart depicts randomized quantile residuals against the theoretical standard Normal quantiles, and it suggests an acceptably good fit of the proposed model.

Having estimated the regression manifold, we are now ready to examine by how much the NYSE can plummet when the NASDAQ plummets. To examine this, we report in Table 1 predicted 75%, 90%, and 95% quantiles of losses on the NYSE evaluated at 1%, 2%, and 3% weekly maxima losses on the NASDAQ. This table follows from the regression manifold, and its interpretation is as follows. First, from a qualitative viewpoint, Table 1 indicates that whenever the NASDAQ plummets, the NYSE tends to plummet by roughly the same amount. Second, and more interesting from a financial outlook, are the quantitative claims that can be made from the analysis. For example, Table 1 indicates that whenever there is a 1% weekly maximum loss on the NASDAQ, only in 5% of the times do we expect to suffer a loss on the NYSE above 1.71%. As another example, Table 1 indicates that whenever there is a 3% weekly maximum loss on the NASDAQ, only in 5% of the times do we expect to suffer a loss on the NYSE above 3.33%.

6 Closing remarks

We propose a regression-type model for the setup where both the response and the covariates are block maxima. The modeling starting point is the result that the limiting behavior of the vector of properly standardized componentwise maxima is given by a multivariate extreme value distribution. Conceptually, the model is then constructed in a similar fashion as in quantile regression, that is, by assessing how the conditional quantile of the response reacts to changes in the covariate, while taking into account the latter asymptotic result.
An important target in the proposed framework is the regression manifold, which consists of a family of regression lines obeying the proviso of multivariate extreme value theory. A Bernstein polynomial prior on the space of angular densities is used to learn about the model from data, with numerical studies showcasing its flexibility.

One could wonder why not resort to statistical models for nonstationary extremes (e.g. Coles 2001, Section 6) as an alternative to the methods proposed herein, as these can be used for assessing the effect of covariates on an extreme-valued response by indexing the parameters of the GEV distribution with a covariate. Yet, since the latter models are built from the univariate theory of extremes, they are not tailored for conditioning on another variable being extreme, as they fail to take on board information from the dependence structure between the extremes. Other related approaches include extremal quantile regression methods (Chernozhukov 2005), which, similarly to the statistical models for nonstationary extremes, have not been designed for conditioning on another variable being extreme, as they do not take into account the dependence structure between the extremes. While not explored here, the comparison of the fitted models for both $Y \mid X = x$ and $X \mid Y = y$ would be natural in some applied settings of interest, so as to get an idea of cause and effect, and indeed related ideas are analyzed by Mhalla et al. (2020).

Finally, we close the paper with some comments on future research. For regressions with many predictors, it is likely that most covariates will have little effect on the response, and thus one could wonder how to devise a version of the proposed method that shrinks towards zero the effect of such irrelevant covariates; the development of a Lasso (Tibshirani 1996) version of the proposed model would thus seem natural for such a situation, and is left as an open problem for future research. Another natural avenue for future research would be to devise regression-type methods for exceedances on exceedances by resorting to the so-called multivariate generalized Pareto distribution (Kiriliouk et al. 2019), rather than the multivariate extreme value distribution as herein. Finally, the development of a version of the model that could take into account asymptotic independence, by resorting to the hidden angular measure (Ramos and Ledford 2009) rather than the standard angular measure as herein, would seem natural as well.

Appendix A: Conditional bivariate extreme value distribution

Here we derive the expression for the conditional bivariate extreme value distribution function in (2.6). Sklar's theorem (Nelsen 2006, Theorem 2.3.3) implies that a joint bivariate distribution function $G : \mathbb{R}^2 \to [0, 1]$ with continuous marginal distribution functions $G_X : \mathbb{R} \to [0, 1]$ and $G_Y : \mathbb{R} \to [0, 1]$ can be written as $G(x, y) = C\{G_X(x), G_Y(y)\}$, for a unique copula C. Using the following well-known property of copulas, we calculate the conditional distribution of $(Y \mid X)$:

$\Pr(Y \leq y \mid X = x) = \frac{\partial}{\partial u} C(u, v)\Big|_{u = G_X(x), \, v = G_Y(y)}.$

In our setting, $G_X(x) = \exp(-1/x)$ and $G_Y(y) = \exp(-1/y)$, for $x, y > 0$, and, writing w for the angular coordinate associated with x,

$V(y, x) = 2 \int_0^1 \max\left\{\frac{w}{x}, \frac{1 - w}{y}\right\} h(w) \, \mathrm{d}w.$

Assuming H is absolutely continuous with density h, we have

$\frac{\partial}{\partial x} V(y, x) = -\frac{2}{x^2} \int_{\omega(x, y)}^{1} w \, h(w) \, \mathrm{d}w,$

where $\omega(x, y) = x / (x + y)$. Then, the conditional distribution has the following form:

$G_{Y \mid X}(y \mid x) = \exp\{x^{-1} - V(y, x)\} \, 2 \int_{\omega(x, y)}^{1} w \, h(w) \, \mathrm{d}w, \qquad x, y > 0.$

Appendix B: Soft maximum approximation for perfectly dependent extremes

Here we give details on the soft maximum approximation for the regression lines for perfectly dependent extremes claimed in (2.8). We use a smooth approximation of the maximum function called the soft maximum,

$\mathrm{softmax}_N(z_1, \ldots, z_d) = \frac{1}{N} \log\left(\sum_{i=1}^{d} e^{N z_i}\right),$

which is infinitely differentiable everywhere and converges to the maximum function as $N \to \infty$ (Cook 2011).
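A quick numerical illustration of the soft maximum; the log-sum-exp form matches the definition above, and the subtraction of the maximum is only a numerical-stability device.

```python
import numpy as np

def soft_max(z, N):
    """Soft maximum softmax_N(z) = log(sum_i exp(N * z_i)) / N, which
    converges to max(z) from above as N -> infinity (Cook 2011)."""
    z = np.asarray(z, dtype=float)
    m = z.max()                                   # stabilize the exponentials
    return m + np.log(np.exp(N * (z - m)).sum()) / N

z = [1.0, 3.0, 2.5]
for N in (1, 10, 100, 1000):
    print(N, soft_max(z, N))                      # approaches max(z) = 3.0
```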
The approximation of the multivariate GEV distribution function for the case of perfectly dependent extremes, $G(y, \mathbf{x}) = \exp\{-\max(y^{-1}, x_1^{-1}, \ldots, x_p^{-1})\}$, is then

$\tilde{G}_N(y, \mathbf{x}) = \exp\{-\mathrm{softmax}_N(y^{-1}, x_1^{-1}, \ldots, x_p^{-1})\},$

and its partial derivative of order d yields an approximation of the conditional multivariate GEV density for perfectly dependent extremes, and thereby an approximation $\tilde{G}_{Y \mid \mathbf{X}}$ of the corresponding conditional cumulative distribution function. Passing the latter to the limit as $N \to \infty$ provides an ansatz for the true conditional distribution function,

$\tilde{G}_{Y \mid \mathbf{X}}(y \mid \mathbf{x}) = \begin{cases} 1, & y \geq \min(x_1, \ldots, x_p), \\ 0, & y < \min(x_1, \ldots, x_p), \end{cases}$

with $y, x_1, \ldots, x_p > 0$, from where (2.8) follows.

Appendix C: Proof of Proposition 1

Since $y \mapsto G_{Y \mid X}(y \mid x)$ is continuous and strictly increasing for all $x \in (0, \infty)$, $y_{q \mid x}$ given by (2.4) is the solution to $G_{Y \mid X}(y \mid x) = q$ for a fixed $q \in (0, 1)$. Then y satisfying $G_{Y \mid X}(y \mid x) = q$ is an implicit function of x parametrized by q. Under our assumptions we can apply the implicit function theorem and calculate the derivative of $y_{q \mid x}$ with respect to x via

$\frac{\partial y_{q \mid x}}{\partial x} = -\frac{\partial G_{Y \mid X}(y \mid x) / \partial x}{\partial G_{Y \mid X}(y \mid x) / \partial y}.$  (6.1)

Equation (6.1), combined with the monotone regression dependence property, i.e. that $x \mapsto G_{Y \mid X}(y \mid x)$ is non-increasing for all $y \in (0, \infty)$ (Guillem 2000, Theorem 1), and the strict monotonicity of $y \mapsto G_{Y \mid X}(y \mid x)$, implies that $\partial y_{q \mid x} / \partial x \geq 0$. This completes the proof.

Appendix D: Exact and limiting regression manifolds for the Logistic model

Here we give details on how the exact (2.10) and approximate (2.9) regression manifolds for the Logistic model can be derived. The derivations below require the use of the Lambert W function (Borwein and Lindstrom 2016), on which some properties and details can be found in the supplementary material.

D.1. Exact regression manifold. Here we compute the conditional quantiles for the bivariate Logistic extreme value distribution and their linear approximation for large x. Using (2.6), we calculate the conditional distribution function for the Logistic model; for $(x, y) \in (0, \infty)^2$, it follows that

$G_{Y \mid X}(y \mid x) = \exp\{x^{-1} - (y^{-1/\alpha} + x^{-1/\alpha})^{\alpha}\} \{1 + (x/y)^{1/\alpha}\}^{\alpha - 1}.$

The conditional quantiles behave differently depending on the strength of dependence between extremes. The special case of the Logistic model with α = 1 corresponds to independence between extremes, for which the family of regression lines is known to be given by (2.7). We now derive conditional quantiles for bivariate dependent extremes, i.e. when $\alpha \in [0, 1)$. Since $y \mapsto G_{Y \mid X}$ is continuous, a conditional quantile is a solution to

$G_{Y \mid X}(y \mid x) = q,$  (6.2)

which can be written in terms of the Lambert W function. Rewriting (6.2) in the form $z e^z = c$, with z and c depending on x, q, and α, yields the representation (6.3) of the conditional quantile in terms of W, and thereby the exact regression lines (2.10); it also follows that the conditional quantiles tend to infinity as $x \to \infty$.

D.2. Limiting regression manifold. Below we show that (2.9) holds. To find $\gamma_q$ and $\beta_q$ we use the expansion of the principal branch of the Lambert W function, a solution to $z = w e^w$ for $z > 0$, around 0, that is,

$W(z) = z - z^2 + O(z^3), \qquad z \to 0,$

where O(·) is big-O in Bachmann-Landau notation. Substituting this expansion of W in (6.3) and taking $x \to \infty$, we find the linear asymptote of $x \mapsto y_{q \mid x}$, whose slope $\beta_q$ is given in (6.4); the calculation of the intercept $\gamma_q$ is more involved. The derivative of the asymptotic expansion of a function does not necessarily correspond to the asymptotic expansion of the derivative of the function; hence, we use L'Hôpital's rule with (6.3) and only then inject the asymptotic expansions. After some tedious derivations we obtain $\gamma_q$, as given in (6.5). The resulting linear approximation is then as in (2.9), for $q \in (0, 1)$ and $x \gg 1$.
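Both Lambert W facts used in Appendix D can be checked numerically; the sketch assumes scipy.special.lambertw, whose branch k = 0 is the principal branch (the constants are illustrative).

```python
import numpy as np
from scipy.special import lambertw

# the principal branch W solves w * exp(w) = z for z > 0
z = 2.0
w = lambertw(z).real
print(np.isclose(w * np.exp(w), z))      # True

# the limit z * W(a / z) -> a as z -> infinity, used in Appendix D
a = 3.0
for z in (1e2, 1e4, 1e6):
    print(z * lambertw(a / z).real)      # approaches a = 3
```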
Supplementary material

We report on two one-shot numerical experiments aimed at illustrating the approach in Section 3.2 of the paper, which induces a prior on the space of all regression manifolds by resorting to Bernstein polynomials and an approximation of a multivariate GEV density due to Cooley et al. (2012). For the numerical experiments in this supplementary material, we test our model by taking a trivariate Logistic extreme value distribution with dependence parameter α = 0.1 ('strongly' dependent extremes) for the case p = 2, i.e. with the trivariate GEV distribution

$G(y, x_1, x_2) = \exp\{-(y^{-1/\alpha} + x_1^{-1/\alpha} + x_2^{-1/\alpha})^{\alpha}\}.$

We generate two samples of sizes n = 10000 and n = 20000 which, after thresholding at the 95% empirical quantile of the pseudo-radius, yield k = 500 and k = 1000 data points to fit the model. Here, we use a similar prior specification and MCMC setup as in Section 4.1 of the paper. Figure SM.1 indicates that the proposed estimator of the angular density captures reasonably well the dependence between extremes by concentrating around the barycenter of the simplex, though in a less pronounced form than the true density. As can be seen from Figure SM.2, the resulting fitted regression lines resemble the true ones, $\mathcal{L}_q$, and increasing the sample size improves the fit, as the lateral surfaces of the estimates become more slanting, for $q \in \{0.3, 0.5, 0.7\}$.

Further details on the Lambert W function. The Lambert W function is the multivalued analytic inverse of $f(z) = z \exp(z)$, for any complex z. Since we deal only with positive real-valued z, the equation $w e^w = z$ has only one solution $w = W(z)$, with W being the principal branch of the Lambert W function. A useful property of this function is that for any constant $a \in \mathbb{R}$ one has

$\lim_{z \to \infty} z \, W(a/z) = a.$

See Borwein and Lindstrom (2016) for further details.

Reverse analysis. In this section we present the reverse analysis to that presented in Section 5 of the paper; that is, here NASDAQ is the response, whereas NYSE is taken as the covariate. Figure SM.4 is the analogue of Fig. 5.2 in the paper but for the reverse analysis; the same applies to Table 2, which is the reverse-analysis equivalent of Table 1 in the paper. Interpretations follow along the same lines as in Section 5 of the paper.

Exact versus limiting regression manifolds. Here we illustrate how the exact and limiting regression manifolds for the Logistic model compare; see Appendix D for details on their derivation. As can be seen from Figs. SM.5-SM.6, the linear approximation (2.9) is accurate for large x.

References

Beirlant, J., Goegebeur, Y., Segers, J., and Teugels, J. (2004). Statistics of Extremes: Theory and Applications.
Borwein, J. M. and Lindstrom, S. B. (2016). Meetings with Lambert W and other special functions in optimization and analysis.
Chernozhukov, V. (2005). Extremal quantile regression.
Coles, S. (2001). An Introduction to Statistical Modeling of Extreme Values.
Cooley, D., Davis, R. A., and Naveau, P. (2012). Approximating the conditional density given large observed values via a multivariate extremes framework, with application to environmental data.
de Haan, L. and Resnick, S. I. (1977). Limit theory for multivariate sample extremes.
Dunn, P. K. and Smyth, G. K. (1996). Randomized quantile residuals.
Eastoe, E. F. and Tawn, J. A. (2009). Modelling non-stationary extremes with application to surface level ozone.
Embrechts, P., Klüppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance.
Gudendorf, G. and Segers, J. (2010). Extreme-Value Copulas.
Guillem, A. G. (2000). Structure de dépendance des lois de valeurs extrêmes bivariées [Dependence structure of bivariate extreme value distributions].
Hanson, T. E., de Carvalho, M., and Chen, Y. (2017). Bernstein polynomial angular densities of multivariate extreme value distributions.
Hüsler, J. and Li, D. (2009). Testing asymptotic independence in bivariate extremes.
Katz, R. W. (2013). Statistical methods for nonstationary extremes.
Kiriliouk, A., Rootzén, H., Segers, J., and Wadsworth, J. L. (2019). Peaks over thresholds modeling with multivariate generalized Pareto distributions.
Koenker, R. and Bassett, G. (1978). Regression quantiles.
Mhalla, L., Chavez-Demoulin, V., and Dupuis, D. J. (2020). Causal mechanism of extreme river discharges in the upper Danube basin network.
Pickands, J. (1981). Multivariate extreme value distributions.
Ramos, A. and Ledford, A. (2009). A New Class of Models for Bivariate Joint Tails.
Stephenson, A. and Tawn, J. (2005). Exploiting occurrence times in likelihood inference for componentwise maxima.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso.
Wang, Y. and Stoev, S. A. (2011). Conditional sampling for spectrally discrete max-stable random fields.
Yee, T. W. and Stephenson, A. G. (2007). Vector generalized linear and additive extreme value models.

Acknowledgements We thank, without implicating, Johan Segers (Université catholique de Louvain) and Raphaël Huser (King Abdullah University of Science and Technology) for insightful discussions, suggestions, and comments.
M. de Carvalho acknowledges support from the Fundação para a Ciência e a Tecnologia (Portuguese NSF) through the projects PTDC/MAT-STA/28649/2017 and UID/MAT/00006/2020. G. dos Reis acknowledges support from the Fundação para a Ciência e a Tecnologia (Portuguese NSF) through the project UIDB/00297/2020 (Centro de Matemática e Aplicações CMA/FCT/UNL). A. Kumukova was supported by The Maxwell Institute Graduate School in Analysis and its Applications, a Centre for Doctoral Training funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016508/01), the Scottish Funding Council, Heriot-Watt University, and the University of Edinburgh.

Data availability The datasets analysed during the current study are available from Yahoo Finance (https://finance.yahoo.com).