key: cord-0559323-58beus2h
authors: Xiu, Zidi; Tao, Chenyang; Gao, Michael; Davis, Connor; Goldstein, Benjamin A.; Henao, Ricardo
title: Variational Disentanglement for Rare Event Modeling
date: 2020-09-17
journal: nan
DOI: nan
sha: 1479840da81d268f29d434f722074db562ac9af6
doc_id: 559323
cord_uid: 58beus2h

The increasing availability and abundance of healthcare data, combined with current advances in machine learning methods, have created renewed opportunities to improve clinical decision support systems. However, in healthcare risk prediction applications, the proportion of cases with the condition (label) of interest is often very low relative to the available sample size. Though very prevalent in healthcare, such imbalanced classification settings are also common and challenging in many other scenarios. So motivated, we propose a variational disentanglement approach to semi-parametrically learn from rare events in heavily imbalanced classification problems. Specifically, we leverage the imposed extreme-distribution behavior on a latent space to extract information from low-prevalence events, and develop a robust prediction arm that joins the merits of the generalized additive model and isotonic neural nets. Results on synthetic studies and diverse real-world datasets, including mortality prediction on a COVID-19 cohort, demonstrate that the proposed approach outperforms existing alternatives.

Early identification of in-hospital patients who are at imminent risk of life-threatening events, e.g., death, ventilation or intensive care unit (ICU) transfer, is a critical subject in clinical care (Bedoya et al. 2019). Especially during a pandemic like COVID-19, healthcare needs change dramatically. With the ability to accurately predict risk, an automated triage system would be well-positioned to help clinicians better allocate resources and attention to those patients whose adverse outcomes could be averted if early intervention efforts were in place. Despite the great promise it holds, and the richness of modern Electronic Health Record (EHR) repositories, the construction of such a system faces practical challenges. A major obstacle is the scarcity of patients experiencing the adverse outcomes of interest. In the COVID-19 scenario, which we consider in our experiments, the mortality rate among patients who tested positive at the Duke University Health System (DUHS) is slightly below 3%. Further, in another typical EHR dataset we consider, less than 5% of patients are reported to suffer adverse outcomes (ICU transfer or death). In these low-prevalence scenarios, commonly seen in clinical practice, standard classification models such as logistic regression suffer from majority domination, in which models tend to favor the prediction accuracy of majority groups. This is clearly undesirable for critical-care applications, since the resulting high false negative rates (Type-II errors) mean that patients in urgent need of care could be incorrectly categorized. Situations where the distribution of labels is highly skewed and the accuracy on the minority class bears particular significance (Dal Pozzolo et al. 2017; Machado and Lopes 2020) are commonly referred to as imbalanced datasets (He and Garcia 2009), and the methods dealing with such cases have been coined extreme classification (Zong, Huang, and Chen 2013). Under such a setting, the lack of representation of minority cases severely undermines the ability of a standard learner to discriminate, relative to balanced datasets (Mitchell 1999).
Consequently, these solutions do not generalize well on minority classes, where the primary interest is usually focused. To address this dilemma, several remedies have been proposed to account for the imbalance between class representations. One of the most popular strategies is sampling-based adjustment, where during training a model oversamples the minority classes (or undersamples the majority classes) to artificially create balance (Drummond, Holte et al. 2003). To overcome the biases and the loss of information that naive sampling adjustments might induce, variants have been proposed to maximally preserve the clustering structure of the original dataset (Mani and Zhang 2003; Yen and Lee 2009) and to promote diversity in oversampling schemes (Han, Wang, and Mao 2005). Alternatively, cost-sensitive weighting, in which minority losses are assigned larger weights, provides another popular option by tuning the relative importance of minority classes (Elkan 2001; Munro et al. 1996; Zhou and Liu 2005). While these two strategies introduce heuristics to alleviate the issues caused by class imbalance, importance sampling (IS) offers a principled treatment that flexibly combines the merits of the two (Hahn and Jeruchim 1987; Heidelberger 1995): each example is sampled with probability proportional to a pre-specified importance weight, and its contribution to the overall loss is scaled by the weight's inverse. This helps to flexibly tune the representation of rare events during training, without biasing the data distribution (Heidelberger 1995; Shimodaira 2000; Gretton et al. 2009). It is important to note that a poor choice of importance weights may result in uncontrolled variance that destabilizes training (Robert and Casella 2013; Botev and Kroese 2008), calling for adaptive (Rubinstein and Kroese 2013) or variance-reduction schemes (Rubinstein and Kroese 2016) to protect against such degeneracy.

Apart from the above strategies, which fall within the standard empirical risk minimization framework, recent developments explicitly seek better generalization for the minority classes. One such example is one-class classification, which aims to capture one target class from a general population (Tax 2002). Meta-learning and few-shot learning strategies instead try to transfer the knowledge learned from data-rich classes to facilitate the learning of data-scarce classes (Böhning, Mylona, and Kimber 2015; Finn, Abbeel, and Levine 2017). Additionally, losses or penalties not based on cross-entropy have proven useful for imbalanced classification tasks (Weinberger and Saul 2009; Huang et al. 2016). For instance, the Focal loss (Lin et al. 2017) upweights the harder examples, and Cao et al. (2019) introduced a label-distribution-aware margin loss that encourages minority classes to have larger margins.

In this work, we present a novel solution called variational inference for extremals (VIE), capitalizing on the learning of more generalizable representations for the minority classes. Our proposal is motivated by the observation that the statistical features of "rarity" have been largely overlooked in the current literature on rare-event modeling, and that the uncertainties of rare events are often not considered. Framed under the variational inference framework, we formulate our model under the assumption that extreme presentations of (unobserved) latent variables can lead to the occurrence (or the inhibition) of rare events.
This encourages the accurate characterization of the tail distribution of the data representation, which, to the best of our knowledge, has been missed by prior work. Building upon state-of-the-art machine learning techniques, our solution features the following contributions: (i) the model accounts for representation uncertainty based on variational inference; (ii) the adoption of mixed Generalized Pareto priors to promote the learning of heavy-tailed feature representations; and (iii) the integration of additive isotonic regression to disentangle the representation and facilitate generalization. We demonstrate how our framework facilitates both model generalization and interpretation, with strong empirical performance reported across a wide range of benchmarks.

To simplify our presentation, we focus on the problem of rare event classification for binary outcomes. The generalization to the multi-class scenario is simple and presented in the Supplementary Material (SM)¹. Let $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$ be a dataset of interest, where $x_i$ and $y_i$ denote predictors and outcomes, respectively, and $N$ is the sample size. Without loss of generality, we denote $y = 1$ as the minority event label (indicating the occurrence of an event of interest), and $y = 0$ as the majority label.

¹ The SM can be found at https://arxiv.org/abs/2009.08541

In the following, we briefly review the three main techniques used in this work, namely variational inference (VI), extreme value theory (EVT), and additive isotonic regression. VI allows for approximate maximum likelihood inference while accounting for data uncertainty. EVT provides a principled and efficient way to model extreme, heavy-tailed representations. Additive isotonic regression further introduces monotonic constraints to disentangle the contribution of each latent dimension to the outcome.

Consider a latent variable model where $v \in \mathbb{R}^m$ is the observable data, $z \in \mathbb{R}^p$ is the unobservable latent variable, and $\theta$ represents the parameters of the likelihood model $p_\theta(v|z)$. The marginal likelihood $p_\theta(v) = \int p_\theta(v, z)\,dz$ requires integrating out the latent $z$, which, for complex distributions, typically does not enjoy a closed-form expression. This intractability prevents direct maximum likelihood estimation of $\theta$ in the latent variable setup. To overcome this difficulty, Variational Inference (VI) optimizes computationally tractable variational bounds on the marginal log-likelihood (Kingma and Welling 2014; Chen et al. 2018). Concretely, the most popular choice of VI optimizes the following Evidence Lower Bound (ELBO):

$\log p_\theta(v) \geq \mathrm{ELBO}(v; p_\theta(v,z), q_\phi(z|v)) = \mathbb{E}_{z \sim q_\phi(z|v)}\left[\log p_\theta(v, z) - \log q_\phi(z|v)\right]$,  (1)

where $q_\phi(z|v)$ is an approximation to the true (unknown) posterior $p_\theta(z|v)$, and the inequality is a direct result of Jensen's inequality. The variational gap between the ELBO and the true marginal log-likelihood, i.e., $\log p_\theta(v) - \mathrm{ELBO}(v; p_\theta(v,z), q_\phi(z|v))$, is given by the Kullback-Leibler (KL) divergence between posteriors, i.e., $\mathrm{KL}(q_\phi(z|v)\,\|\,p_\theta(z|v))$, which implies that the ELBO tightens as $q_\phi(z|v)$ approaches the true posterior $p_\theta(z|v)$. For estimation, we seek parameters $\theta$ and $\phi$ that maximize the ELBO in (1). Given a set of observations $\{v_i\}_{i=1}^N$ sampled from the data distribution $v \sim p_d(v)$, maximizing the expected ELBO is also equivalent to minimizing the KL divergence $\mathrm{KL}(p_d(v)\,\|\,p_\theta(v))$ between the empirical and model distributions.
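As a concrete illustration of how the bound in (1) is typically estimated in practice when the likelihood and the approximate posterior are parameterized by neural networks (the setting introduced next), the following sketch draws a single reparameterized Monte Carlo sample from a diagonal-Gaussian encoder and scores it under a Gaussian decoder and a standard Gaussian prior. This is our own minimal sketch, not the paper's implementation; the class name, hidden sizes and prior choice are illustrative assumptions.

```python
# Minimal, illustrative Monte Carlo ELBO (single reparameterized sample).
import torch
import torch.nn as nn
from torch.distributions import Normal

class GaussianCoder(nn.Module):
    """Maps an input to the mean and log-std of a diagonal Gaussian."""
    def __init__(self, d_in, d_out, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, d_out)
        self.log_sigma = nn.Linear(hidden, d_out)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_sigma(h)

def elbo(v, encoder, decoder):
    """ELBO(v) = E_{q(z|v)}[log p(v|z) + log p(z) - log q(z|v)]."""
    mu_z, log_sig_z = encoder(v)
    q = Normal(mu_z, log_sig_z.exp())
    z = q.rsample()                                               # reparameterization
    mu_v, log_sig_v = decoder(z)
    log_pv_z = Normal(mu_v, log_sig_v.exp()).log_prob(v).sum(-1)  # log p(v|z)
    log_pz = Normal(0.0, 1.0).log_prob(z).sum(-1)                 # standard Gaussian prior
    log_qz_v = q.log_prob(z).sum(-1)
    return (log_pv_z + log_pz - log_qz_v).mean()                  # batch average

# Usage sketch: encoder = GaussianCoder(m, p); decoder = GaussianCoder(p, m)
# loss = -elbo(v_batch, encoder, decoder); loss.backward()
```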
When $p_\theta(v|z)$ and $q_\phi(z|v)$ are specified as neural networks, the resulting architecture is commonly known as the variational auto-encoder (VAE) (Kingma and Welling 2014), where $q_\phi(z|v)$ and $p_\theta(v|z)$ are known as the encoder and decoder, respectively. Note that $q_\phi(z|v)$ is often used for subsequent inference tasks on new data.

Extreme Value Theory (EVT) provides a principled probabilistic framework for describing events with extremely low probabilities, which we seek to exploit for better rare event modeling. In particular, we focus on exceedance models, which aim to capture the asymptotic statistical behavior of values surpassing an extreme threshold (Davison and Smith 1990; Tao et al. 2017); we briefly review them below following the notation of Coles et al. (2001). Without loss of generality, we consider exceedance to the right, i.e., values greater than a threshold $u$. For a random variable $X$, the conditional cumulative distribution of the exceedance level $x$ beyond $u$ is given by

$F_u(x) = P(X - u \leq x \mid X > u) = \frac{F(u + x) - F(u)}{1 - F(u)}$,

where $x > 0$ and $F(x)$ denotes the cumulative distribution function of $X$. A major result from EVT is that, under some mild regularity conditions, e.g., continuity at the right end of $F(x)$, $F_u(x)$ will converge to the family of Generalized Pareto Distributions (GPD) regardless of $F(x)$, as $u$ approaches the right support boundary of $F(x)$ (Balkema and De Haan 1974; Pickands III et al. 1975; Falk, Hüsler, and Reiss 2010), where $\mathrm{GPD}_{\xi,\sigma,u}(x)$ is of the form

$\mathrm{GPD}_{\xi,\sigma,u}(x) = \begin{cases} 1 - (1 + \xi x/\sigma)^{-1/\xi}, & \xi \neq 0 \\ 1 - \exp(-x/\sigma), & \xi = 0 \end{cases}$,  (2)

where $\sigma$ is a positive scale parameter. When $\xi < 0$ the exceedance $x$ has bounded support $0 \leq x \leq -\sigma/\xi$; otherwise, when $\xi \geq 0$, $x$ is unbounded. A major implication of this asymptotic behavior is that, for modeling extreme values, one only needs to fit the extreme samples to the log-likelihood function of the GPD.

Also known as monotonic regression, isotonic regression is a non-parametric regression model that constrains the relation between predictor and outcome to be monotonic (e.g., et al. 1972). Such a monotonic constraint is a natural and flexible extension of the standard linear relation assumed by many statistical models. To accommodate multi-covariate predictors, additive isotonic regression combines isotonic models for each individual one-dimensional predictor (Bacchetti 1989). Standard implementations often involve specialized algorithms, such as local scoring algorithms (Hastie 2017) and the alternating conditional expectation (ACE) method of Breiman and Friedman (1985). These approaches typically require costly iterative computations and are not scalable to large datasets. Here we consider recent advances in unconstrained monotonic neural networks, which allow for efficient and flexible end-to-end learning of monotonic relations with robust neural nets trained by standard schemes such as stochastic gradient descent (Sill 1998; Wehenkel and Louppe 2019).

The proposed model is based on the hypothesis that extreme events are driven by the extreme values of some latent factors. Specifically, we propose to recast the learning of low-prevalence events as the learning of extreme latent representations, thus amortizing the difficulties associated with directly modeling rare events as outcomes.
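To make the EVT background concrete, the short sketch below (ours, not from the paper) fits a GPD to the exceedances of heavy-tailed simulated data over a high threshold by maximum likelihood, mirroring the statement above that only the extreme samples need to be fit to the GPD log-likelihood. The Student-t toy data and the 99th-percentile threshold are illustrative assumptions (the latter echoes the u = Φ⁻¹(0.99) setting used later).

```python
# Illustrative GPD fit to threshold exceedances (maximum likelihood).
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=100_000)        # heavy-tailed toy data

u = np.quantile(x, 0.99)                      # high threshold (99th percentile)
exceedances = x[x > u] - u                    # excesses over u

# genpareto's shape parameter c plays the role of xi; location fixed at 0.
xi_hat, _, sigma_hat = genpareto.fit(exceedances, floc=0)
print(f"estimated shape xi = {xi_hat:.3f}, scale sigma = {sigma_hat:.3f}")

# Tail probability estimate: P(X > u + t) ~ (1 - F_n(u)) * (1 - GPD(t)).
p_tail = (1 - 0.99) * genpareto.sf(1.0, xi_hat, loc=0, scale=sigma_hat)
print(f"estimated P(X > u + 1) = {p_tail:.2e}")
```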
To allow for more efficient learning from the rare events, we make some further assumptions to regularize the latent representation: (i) effect disentanglement: the contribution of each dimension of the latent representation to the event occurrence is additive; (ii) effect monotonicity: there is a monotonic relation between the outcome likelihood and the value of each dimension of the latent representation. The key to the proposed approach is the use of an additive isotonic neural network to model the one-dimensional, disentangled monotonic relations from a latent representation that is obtained via variational inference. Specifically, we impose an EVT prior to explicitly capture the information from the few minority-group samples in the tail behavior of the extreme representation. Below we provide the rationale for our choices, followed by a description of all model components.

Disentanglement & additive isotonic regression. Consistent with assumptions (i) and (ii), we posit a scenario in which the underlying representation of extreme events is more frequent at the far end of the representation spectrum, for which additive isotonic regression is ideal. The disentanglement consists of modeling each latent dimension individually, thus avoiding the curse of dimensionality that arises when modeling combinatorial effects with few examples. Further, the monotonicity constraint imposed by the isotonic regression model restricts the possible effect relations, thereby improving generalization error by learning with a smaller, yet still sufficiently expressive, class of models (Bacchetti 1989).

EVT & VI. Note that the spread of the representation of extreme events is expected to be more uncertain relative to that of the normal, more abundant events, for a few plausible reasons: (i) extreme events represent the breakdown of system normality and are expected to behave in uncertain ways; (ii) only a small number of examples is available for the extreme events, so the learned feature encoder will tend to be unreliable. As a result, the encoded features associated with the extreme events can be expected to lie outside the effective support of the Gaussian distribution assumed by the standard VI model. In other words, the representation of these events can manifest as a heavy-tailed distribution, which will compromise the validity and generalizability of a prediction model if not dealt with appropriately. So motivated, we explicitly model the distribution of the extreme underlying representations via EVT. Using EVT, we decouple the learning of the tail end of the representation distribution. Since EVT-based estimation requires only very few parameters, it allows for accurate modeling with a small set of tail-end samples. Further, in combination with the variational inference framework, it accounts for representation uncertainty via the use of a stochastic encoder, which further strengthens model robustness.

Benefits of heavy-tailed modeling. A few other considerations further justify modeling the extreme-event representation with a heavy-tailed distribution. One obvious benefit is that it allows better model resolution along the representation axis, i.e., better risk stratification. For light-tailed representations, extreme examples are clustered in a narrow region where the tail vanishes, so a standard (light-tailed) learning model will report the average risk in that region.
However, if the representations are more spread out, then there is a more gradual change in risk, which can be better captured, as shown in Figure 1.

[Figure 1: Left: Distribution of a two-dimensional latent space z (event vs. non-event) where the long tail associates with higher risk. Right: Tail estimates from different schemes (ground truth, Gaussian, GMM, EVT) for the long-tailed data in a one-dimensional space; EVT provides a more accurate characterization than the other mechanisms.]

Another argument for favoring heavy-tailed representations is that heavy-tailed phenomena are very common in nature (Bryson 1974), and these tail samples are often encoded less robustly due to the lack of training examples. Allowing long-tailed representations relieves this burden on the encoder.

Model structure. We consider the latent variable model $p_\theta(y, x, z) = p_\theta(y|z)\,p_\theta(x|z)\,p(z)$, where $v = \{x, y\}$ are the observed variables. Under the VI framework, and similar to (1), we write $\mathrm{ELBO}(v; p_\theta(v,z), q_\phi(z|v))$ as

$\mathrm{ELBO}(v; p_\theta(v,z), q_\phi(z|v)) = \mathbb{E}_{z \sim q_\phi(z|v)}\left[\log p_\theta(y|z) + \log p_\theta(x|z)\right] - \mathrm{KL}(q_\phi(z|v)\,\|\,p(z))$,  (3)

where $p_\theta(y|z)$ is specified as an additive isotonic regression model, $p(z)$ is modeled with EVT, and the approximate posterior $q_\phi(z|v)$ is specified as an inverse autoregressive flow. Note that, unlike in the standard ELBO in (3), we drop the term $\mathbb{E}_{z \sim q_\phi(z|v)}[\log p_\theta(x|z)]$ because we are not interested in modeling the covariates; this coincides with the variational information bottleneck (VIB) formulation (Alemi et al. 2016). Additionally, the posterior $q_\phi(z|v)$ is conditioned only on $x$ and not on $y$, because in practice the labels $y$ are not available at inference time. Specifically, we rewrite the objective in (3) as

$\Psi_\beta(x, y; p_\theta(y|z), q_\phi(z|x)) = \mathbb{E}_{z \sim q_\phi(z|x)}\left[\log p_\theta(y|z)\right] - \beta\,\mathrm{KL}(q_\phi(z|x)\,\|\,p(z))$,  (4)

where $\beta$ is a hyperparameter controlling the relative contribution of the KL term to the objective. Below we provide details for each component of the proposed approach.

First, let us consider the following monotone mapping,

$H(z) = \int_l^z h(s; \theta)\,ds + \gamma$,

consisting of the integration of a non-negative function $h(s; \theta)$, specified as a neural network with one-dimensional input $s$ and parameterized by $\theta$. The choice of the lower end $l$ is arbitrary, and $\gamma$ is a bias term. For a multi-dimensional latent representation $z \in \mathbb{R}^p$, we write the additive monotonic neural network (AMNN) as

$H(z; \theta) = \sum_{j=1}^p \alpha_j \int_l^{z_j} h_j(s; \theta_j)\,ds + \gamma$,  (5)

where $\alpha_j$ serves as a weight controlling the effect direction: when $\alpha_j > 0$, the $j$th dimension can be interpreted as an event stimulator; otherwise it is an event blocker. To ensure $h(s; \theta)$ is non-negative, we apply an exponential activation function to the network's output. The integration over $z$ is carried out numerically in the Riemann-Stieltjes sense (Davis and Rabinowitz 2007). From (5) we obtain $\log p_\theta(y|z) = \mathrm{CLL}(y, H(z; \theta))$, where

$\mathrm{CLL}(y, a) = \log\left\{\mathbb{1}_{y=1}(y)\,(1 - \exp(-\exp(a))) + \mathbb{1}_{y=0}(y)\,\exp(-\exp(a))\right\}$

is the complementary log-log (CLL) link and $\mathbb{1}_{(\cdot)}$ is the indicator function. We prefer the CLL over the standard logistic link since the CLL link is more sensitive at the tail end (Aranda-Ordaz 1981).

To better capture the tail behavior of the latent representation, we assume the random variable $Z \sim p(z)$ is a mixture of a standard Gaussian distribution truncated at $u$ and a GPD modeling the tail end beyond the threshold $u$. Consequently, the CDF of the mixed GPD is given by

$F(z) = \begin{cases} \Phi(z), & z \leq u \\ \Phi(u) + (1 - \Phi(u))\,\mathrm{GPD}_{\xi,\sigma,u}(z - u), & z > u \end{cases}$,  (6)

where $\Phi(\cdot)$ is the standard Gaussian CDF. For simplicity, we denote the set of GPD parameters as $\psi = \{\xi_{\mathrm{GPD}}, \sigma_{\mathrm{GPD}}\}$; the threshold $u$ is a user-defined parameter. In the experiments we set $u$ to $\Phi^{-1}(0.99)$.
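To ground the decoder arm, the sketch below is our own PyTorch rendering (class and argument names are hypothetical) of an AMNN as in (5): each one-dimensional network h_j is made non-negative through an exponential output, its integral is approximated with a Riemann sum over M bins (midpoints are used here for simplicity, whereas the SM samples a random point per bin), and the resulting score is turned into log p_θ(y|z) through the CLL link.

```python
# Illustrative additive monotonic neural network (AMNN) with a CLL likelihood.
import torch
import torch.nn as nn

class AMNN(nn.Module):
    def __init__(self, p, hidden=32, M=100, l=-5.0):
        super().__init__()
        self.hs = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(p)])
        self.alpha = nn.Parameter(torch.ones(p))   # per-dimension effect weights
        self.gamma = nn.Parameter(torch.zeros(1))  # bias term
        self.M, self.l = M, l

    def forward(self, z):                          # z: (batch, p)
        out = self.gamma.expand(z.shape[0])
        for j, h_j in enumerate(self.hs):
            zj = z[:, j:j + 1]                                  # (batch, 1)
            d = (zj - self.l) / self.M                          # bin width
            k = torch.arange(self.M, device=z.device).float()
            s = self.l + (k + 0.5) * d                          # bin midpoints, (batch, M)
            h = torch.exp(h_j(s.unsqueeze(-1))).squeeze(-1)     # non-negative h_j(s)
            out = out + self.alpha[j] * (h * d).sum(-1)         # Riemann sum of the integral
        return out                                              # H(z; theta)

def cll_log_likelihood(y, a):
    """log p(y|z) under the complementary log-log link, a = H(z; theta)."""
    log_p1 = torch.log1p(-torch.exp(-torch.exp(a)))             # log P(y = 1)
    log_p0 = -torch.exp(a)                                      # log P(y = 0) = -exp(a)
    return torch.where(y > 0.5, log_p1, log_p0)
```

Because each h_j is non-negative, H(z; θ) is monotone in every coordinate, with the sign of α_j deciding whether dimension j acts as a stimulator or a blocker.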
Considering that we have adopted a long-tailed GPD prior, we seek a posterior approximation $q_\phi(z|x)$ that: (i) has a flexible parameterization able to approximate arbitrary distributions; and (ii) has a tractable likelihood, so that $\mathrm{KL}(q_\phi(z|x)\,\|\,p(z))$ can be evaluated exactly. We need (i) because the true posterior is likely to exhibit heavy-tailed behavior due to the extended coverage of the GPD prior, and (ii) to ensure accurate and low-variance Monte Carlo estimation of the KL divergence at the tail end of the prior. These requirements invalidate some popular choices: a standard Gaussian posterior is light-tailed, and the implicit neural-sampler-based posterior typical of adversarial variational Bayes (Mescheder, Nowozin, and Geiger 2017) does not have a tractable likelihood. One model family satisfying the above two requirements is known as generative flows (Rezende and Mohamed 2015), where simple invertible transformations with tractable log-Jacobian determinants are stacked together, transforming a simple base distribution into a complex one while retaining a closed-form expression for the likelihood. In this work, we consider the inverse autoregressive flow (IAF) model (Kingma et al. 2016). The flow chain is built as

$z_t = \mu_t(z_{t-1}) + \sigma_t(z_{t-1}) \odot z_{t-1}, \quad t = 1, \dots, T$,  (7)

where $\mu_t \in \mathbb{R}^p$ and $\sigma_t \in \mathbb{R}^p$ are learnable, $\odot$ denotes the element-wise product, and $z_0$ is typically drawn from a $p$-dimensional Gaussian distribution, $z_0 \sim \mathcal{N}(\mu_0, \mathrm{Diag}(\sigma_0^2))$, where $\mu_0$ and $\sigma_0$ are obtained from an initial encoder defined by a neural network with input $x$ and parameters $\phi$. A sample from the posterior $q_\phi(z|x)$ is given by $z_T$, obtained by "flowing" $z_0$ through (7). Provided the Jacobians $d\mu_t/dz_{t-1}$ and $d\sigma_t/dz_{t-1}$ are strictly upper triangular (Papamakarios, Pavlakou, and Murray 2017), we obtain the following closed-form expression for the log posterior:

$\log q_\phi(z_T|x) = -\sum_{j=1}^p \left[\tfrac{1}{2} e_j^2 + \tfrac{1}{2}\log(2\pi) + \sum_{t=0}^T \log \sigma_{t,j}\right]$,  (8)

where $e_j = (z_{0,j} - \mu_{0,j})/\sigma_{0,j}$ for the $j$th dimension.

We consider an additional modification that explicitly encourages matching the aggregated posterior $q_\phi(z) = \int q_\phi(z|x)\,p_d(x)\,dx$ to the prior $p(z)$, which has been reported to be vastly successful at improving VAE learning (Mescheder, Nowozin, and Geiger 2017). In our case, $q_\phi(z)$ does not have a closed-form expression for the likelihood ratio in the KL formulation, which motivates us to use a sample-based estimator. We consider the mini-max KL estimator based on Fenchel duality (Tao et al. 2019; Dai et al. 2018). Concretely, recall that the KL divergence can be expressed in its Fenchel dual form²

$\mathrm{KL}(q_\phi(z)\,\|\,p(z)) = \max_{\nu \in \mathcal{F}} \left\{\mathbb{E}_{z \sim q_\phi(z)}[\nu(z)] - \mathbb{E}_{z \sim p(z)}[\exp(\nu(z))]\right\}$,  (9)

where $\nu(z)$ is commonly known as the critic function in the adversarial learning literature, and we maximize with respect to $\nu(z)$ over the space of all functions $\mathcal{F}$, modeled in practice with a deep neural network. We use (4) and (9) to derive an augmented ELBO that further penalizes the discrepancy between the aggregated posterior and the prior, i.e., $\Psi_\beta(x, y; p_\theta(y|z), q_\phi(z|x)) - \lambda\,\mathrm{KL}(q_\phi(z)\,\|\,p(z))$, where $\lambda$ is a regularization hyperparameter (Chen, Feng, and Lu 2018). Solving for this objective results in the following mini-max game

$\max_{\theta,\phi,\psi} \min_{\nu \in \mathcal{F}} \left\{\Psi_\beta(x, y; p_\theta(y|z), q_\phi(z|x)) - \lambda\left(\mathbb{E}_{z \sim q_\phi(z)}[\nu(z)] - \mathbb{E}_{z \sim p_\psi(z)}[\exp(\nu(z))]\right)\right\}$,  (10)

where $\beta$ and $\lambda$ are regularization hyperparameters. In a similar vein to the $\beta$-VAE and adversarial variational Bayes (AVB), our objective leverages $\beta, \lambda > 0$ to balance the prediction accuracy and the complexity of the latent representation via KL regularization.
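The Fenchel-dual estimator in (9) can be realized with a small critic network trained by gradient ascent on E_q[ν(z)] − E_p[exp(ν(z))]; the value of this bound is then used as the KL penalty on the aggregated posterior. The sketch below is ours and only illustrative; the RMSprop choice for the critic mirrors the optimizer reported in the SM, while the remaining names are hypothetical.

```python
# Illustrative sample-based Fenchel-dual KL estimator (critic nu).
import torch
import torch.nn as nn

class Critic(nn.Module):
    def __init__(self, p, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(p, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, z):
        return self.net(z).squeeze(-1)

def fenchel_kl(critic, z_q, z_p):
    """Lower bound on KL(q || p) from samples (constant term dropped)."""
    return critic(z_q).mean() - torch.exp(critic(z_p)).mean()

critic = Critic(p=4)
opt_critic = torch.optim.RMSprop(critic.parameters(), lr=1e-3)

def critic_step(z_q, z_p):
    """Inner maximization: tighten the KL bound for fixed encoder/prior samples."""
    opt_critic.zero_grad()
    (-fenchel_kl(critic, z_q.detach(), z_p.detach())).backward()
    opt_critic.step()

# In the outer (encoder/decoder/prior) update, the penalty
#   lambda * fenchel_kl(critic, z_q, z_p)
# is subtracted from Psi_beta, yielding the mini-max game in (10).
```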
Further, from $\Psi_\beta(x, y; p_\theta(y|z), q_\phi(z|x))$ in (4), note that the decoder $p_\theta(y|z)$ is obtained from the additive monotonic neural network in (5), $p_\psi(z)$ is the Gaussian-GPD mixture with CDF in (6), $q_\phi(z|x)$ is the autoregressive flow implied by (7), and $\nu(z; \omega)$ is the critic function, specified as a neural network parameterized by $\omega$.

² We have removed the constant term for notational clarity.

To avoid collapsing to suboptimal local minima, we train the encoder arm more frequently to compensate for the detrimental posterior-lagging phenomenon (He et al. 2019). The pseudo-code for the proposed VIE is summarized in Algorithm 1, and the detailed architecture can be found in the SM.

Rare-event modeling with regression. Initiated by King and Zeng (2001), the discussion of how to handle the unique challenges presented by rare-event data in regression models has attracted extensive research attention. The statistical literature has mainly focused on bias correction for sampling (Fithian and Hastie 2014) and estimation (Firth 1993), driven by theoretical considerations in maximum likelihood estimation. However, their assumptions are often violated in the face of modern datasets (Sur and Candès 2019), characterized by high dimensionality and complex interactions. Our proposal approaches a solution from a representation learning perspective (Bengio, Courville, and Vincent 2013), by explicitly exploiting the statistical regularities of extreme values to better capture the extreme representations associated with rare events.

Re-sampling and loss correction. Applying statistical adjustments during model training is a straightforward way to re-establish balance, but it is often associated with obvious caveats. For example, the popular down-sampling and up-sampling schemes (He and Garcia 2009) discard useful information or introduce artificial bias, exacerbating the chances of capturing spurious features that may harm generalization (Drummond, Holte et al. 2003; Cao et al. 2019), and their performance gains may be limited (Byrd and Lipton). Our contribution is orthogonal to these developments and promises additional gains when used in synergy.

Transferring knowledge from the majority classes. Adapting the knowledge learned from data-rich classes to their under-represented counterparts has shown success in few-shot learning, especially in visual recognition (Wang, Ramanan, and Hebert 2017; Chen et al. 2020), and also in the clinical setting (Böhning, Mylona, and Kimber 2015). However, such success often depends critically on strong assumptions, whose violation typically undermines performance severely (Wang et al. 2020). Related are the one-class classification (OCC) models (Tax 2002), which assume stable patterns for the majority over the minority classes. Our assumptions are weaker than those made in these model categories, and empirical results also suggest that the proposed VIE works more favorably in practice (see experiments).

We carefully evaluate the proposed VIE on a diverse set of realistic synthetic data and real-world datasets with different degrees of imbalance. Our implementation is based on PyTorch, and code to replicate our experiments is available at https://github.com/ZidiXiu/VIE/. We provide additional experiments and analyses in the SM.

Baseline Models. We compare the proposed solution against the following baselines: LASSO regression (Tibshirani 1996), an MLP with re-sampling and re-weighting (MLP), an importance-weighting model (IW), the Focal loss (Lin et al. 2017), LDAM (Cao et al. 2019), and the one-class DeepSVDD (Ruff et al. 2018).
We tune the hyper-parameters of the baseline models on the validation dataset, and pick the best-performing hyper-parameters to evaluate test-set performance. For detailed settings please refer to the SM.

Evaluation Metrics. To quantify model performance, we consider AUC and AUPRC. AUC is the area under the Receiver Operating Characteristic (ROC) curve, which provides a threshold-free evaluation metric for classification performance; it summarizes the trade-off between the True Positive Rate (TPR) and the False Positive Rate (FPR). AUPRC evaluates the area under the Precision-Recall (PR) curve, summarizing the trade-off between precision (positive predictive value) and recall (TPR). We discuss other metrics in the SM. In the simulation studies, we repeat each simulation ten times to obtain empirical AUC and AUPRC confidence intervals. For the real-world datasets, we apply bootstrapping to estimate the confidence intervals.

VIE applies a few state-of-the-art techniques in variational inference in order to achieve optimal performance. In this section, we decouple their contributions via an ablation study, to justify the necessity of including those techniques in our final model. To this end, we build a semi-synthetic dataset based on the Framingham study (Mitchell et al. 2010), a long-term cardiovascular survival cohort study. We use a realistic model to synthesize data from the real-world covariates under varying conditions, i.e., different event rates, sample sizes, degrees of non-linearity, etc. More specifically, we use the CoxPH-Weibull model (Bender, Augustin, and Blettner 2005) to simulate the survival times of patients as $T = \{-\log U / (\lambda \exp(g(x)))\}^{1/\nu}$, with $U \sim \mathrm{Unif}(0,1)$, where $g(\cdot)$ is either a linear function or a randomly initialized neural net. Our goal is to predict whether the subject will decease within a pre-specified time frame, i.e., $T < t_0$. By adjusting the cut-off threshold $t_0$, we can simulate different event rates. A detailed description of the simulation strategy is given in the SM.

We experiment with different combinations of advanced VI techniques, as summarized in Table 1. Limited by space, we report results at a 1% event rate with $g(\cdot)$ set to a randomly initialized neural network, under various sample sizes. Additional results on linear models and other synthetic datasets are consistent and can be found in the SM. The IAF-only and GPD-only variants perform poorly, even compared to the vanilla VAE solution, possibly because their priors are mismatched. Explicitly matching the prior via the Fenchel mini-max learning technique improves performance. However, without an encoder with a tractable likelihood, the model cannot directly leverage knowledge from the GPD prior likelihood. Stacked together (mixed GPD + IAF + Fenchel), our full proposal, VIE, consistently outperforms its variants, approaching oracle performance in the large-sample regime.

To extensively evaluate real-world performance, we consider a wide range of real-world datasets, briefly summarized in Table 3. Note that InP, SEER and SLEEP are all survival datasets, among which SEER and SLEEP include censored subjects. We follow the data pre-processing steps in Xiu et al. (2020). To create outcome labels, we set a cut-off time to define an event of interest, in the same way as in the ablation study, and exclude subjects censored before the cut-off time. The excluded samples account for less than 0.2% of the whole population, and are therefore expected to have a very limited impact on our results. Datasets have been randomly split into training, validation, and testing sets with a 6:2:2 ratio.
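The evaluation protocol described above (threshold-free AUC/AUPRC with bootstrap confidence intervals) can be sketched as follows; this is our own illustrative code with a toy score vector, not the repository's evaluation script, and average precision is used as the AUPRC estimate.

```python
# Illustrative bootstrap evaluation of AUC and AUPRC.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def bootstrap_metrics(y_true, y_score, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    aucs, auprcs = [], []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample with replacement
        if y_true[idx].sum() == 0:                # need at least one positive case
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
        auprcs.append(average_precision_score(y_true[idx], y_score[idx]))
    ci = lambda v: (np.percentile(v, 2.5), np.percentile(v, 97.5))
    return {"AUC": (np.mean(aucs), ci(aucs)), "AUPRC": (np.mean(auprcs), ci(auprcs))}

# Toy example with a 1% event rate and noisy-but-informative scores.
rng = np.random.default_rng(1)
y = (rng.random(5000) < 0.01).astype(int)
score = 0.7 * y + 0.3 * rng.random(5000)
print(bootstrap_metrics(y, score))
```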
See the SM for details on data pre-processing. Table 2 compares VIE to its counterparts, where the numbers are averaged over the bootstrap samples. We see that the proposed VIE yields the best performance in almost all cases, and its lead is more significant at low event rates. Note that the one-class-classification-based DeepSVDD performs poorly, which implies that treating rare events as outliers is inappropriate in the scenarios considered here. Re-weighting and re-sampling based methods (IW, Focal) are less stable than the simple baselines (LASSO, MLP). The theoretically optimal LDAM works well in general, second only to VIE in most settings. To further demonstrate the stability of our method, we visualize the bootstrapped evaluation scores for the COVID dataset in Figure 3, and defer the additional cross-validation results to the SM. We see that VIE leads consistently. We also verify empirically that the estimated GPD shape parameters are positive, consistent with long-tailed latent representations (see the SM).

Motivated by the challenges of rare-event prediction in clinical settings, we presented Variational Inference with Extremals (VIE), a novel variational solution to the problem based on extreme representation learning. In this model we leveraged the GPD to learn extreme distributions from few samples, and applied additive monotonic neural networks to disentangle the latent dimensions' effects on the outcome. VIE featured better generalization and interpretability, as evidenced by strong performance on real and synthetic datasets. In future work, we will extend this framework to the context of causal inference, to quantify treatment effects in the label-imbalanced setting.

When the prevalence of an event is extremely low, but the event itself has substantial importance, the methods used to identify such targets are called rare event modeling. Accurate and robust modeling of rare events is significant in many fields, for instance identifying high-risk patients in the hope of preventing adverse outcomes through early intervention. The scarcity of rare cases can cause extreme imbalance in a dataset, so rare event modeling is challenging for most standard statistical approaches. As we discussed in the main text, careful statistical adjustments and new methodologies are required to handle such imbalance; otherwise, classifiers are driven toward the majority side and give misleading results. Also, the lack of representation of the minority class may cause unadjusted models to wrongly capture spurious features that do not generalize well to other observations. The apex of the risk curve, or the mass of the risk density, usually overlaps with the tail of the feature representation distribution, as illustrated in Figure S1; traditional statistical methods (such as Gaussian-based approaches) often perform poorly at the tail end, which can lead to lack-of-fit and poor generalization ability.

[Figure S1: Feature representation mismatch at the tail parts. The heavy-tailed distribution can exploit extreme behavior in the latent space.]

We approach a solution to such challenges with a variational representation learning scheme that models disentangled extreme representations. Further, we design a robust, powerful prediction arm that combines the merits of a generalized additive model and an isotonic neural net.
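Before the formal treatment of the GPD tail below, a small numerical sketch (ours) makes the mismatch in Figure S1 concrete: data whose bulk is Gaussian but whose tail follows a GPD carry far more mass beyond a deep threshold than a single Gaussian fit acknowledges. The threshold and the GPD parameters (ξ = 0.3, σ = 1) are arbitrary illustrative choices.

```python
# Illustrative tail-mass comparison: Gaussian bulk + GPD tail vs. a Gaussian fit.
import numpy as np
from scipy.stats import norm, genpareto

rng = np.random.default_rng(0)
u, xi, sigma = norm.ppf(0.99), 0.3, 1.0            # threshold and assumed GPD tail

def sample_mixed(n):
    z = rng.standard_normal(n)
    tail = z > u
    # replace the Gaussian tail samples with GPD excesses above u
    z[tail] = u + genpareto.ppf(rng.random(tail.sum()), xi, scale=sigma)
    return z

z = sample_mixed(1_000_000)
t = u + 2.0                                        # a point deep in the tail
print("empirical P(Z > t) :", (z > t).mean())
print("single Gaussian fit:", norm.sf(t, loc=z.mean(), scale=z.std()))
print("mixed GPD model    :", (1 - 0.99) * genpareto.sf(t - u, xi, scale=sigma))
```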
An important result in Extreme Value Theory (EVT) shows that, under some mild conditions, the conditional distribution of exceedances over a threshold $u$ follows the Generalized Pareto Distribution, $\mathrm{GPD}(u, \xi, \sigma)$ (McFadden 1978), which has cumulative distribution function (CDF)

$G_{u,\xi,\sigma}(z) = \begin{cases} 1 - (1 + \xi(z-u)/\sigma)^{-1/\xi}, & \xi \neq 0 \\ 1 - \exp(-(z-u)/\sigma), & \xi = 0 \end{cases}$

where $\sigma$ is a positive scale parameter and $u$ is the location parameter. Depending on the shape parameter $\xi$, $z$ has different support: when $\xi < 0$, the exceedance has bounded support $u \leq z \leq u - \sigma/\xi$; otherwise $z$ is bounded by $u$ on the left. The corresponding PDF is

$g_{u,\xi,\sigma}(z) = \frac{1}{\sigma}\left(1 + \xi(z-u)/\sigma\right)^{-1/\xi - 1}$,

and thus the log-likelihood of exceedances $\{z_i\}_{i=1}^n$ is

$\ell(\xi, \sigma) = -n \log \sigma - \left(1 + \tfrac{1}{\xi}\right)\sum_{i=1}^n \log\left(1 + \xi(z_i - u)/\sigma\right)$.

To enable modeling of the extreme representations, we adopt the Generalized Pareto Distribution as the tail part of our new variational prior, and model the regular bulk representations $z \leq u$ with a standard Gaussian distribution. The mixed extreme tail distribution then has the form (McNeil 1997)

$F(z) = \Phi(z)$ for $z \leq u$, and $F(z) = \Phi(u) + (1 - \Phi(u))\,G_{u,\xi,\sigma}(z)$ for $z > u$.

When $z > u$, the tail estimator $\hat{F}(z) = (1 - F_n(u))\,G_{u,\xi,\sigma}(z) + F_n(u)$ is used to approximate $F(z)$. We now show that $\hat{F}(z)$ also has a GPD form with the same shape $\xi$ and the following scale and location parameters:

$\tilde{\sigma} = \sigma\,(1 - F_n(u))^{\xi}, \qquad \tilde{u} = u - \frac{\sigma - \tilde{\sigma}}{\xi}$.

Our main algorithm was written in PyTorch (version 1.3.1) (Paszke et al. 2017). The experiments were conducted with an Intel(R) Xeon(R) CPU and a Tesla P100-PCIE-16GB GPU (except for the COVID dataset). The COVID dataset was stored and analyzed on a protected virtual network space with an Intel(R) Xeon(R) Gold 6152 CPU at 2.10GHz (2 cores).

Model Structure. In VIE, we end up optimizing the mini-max objective in (10), $\max_{\theta,\phi,\psi}\min_{\nu}\{\Psi_\beta(x, y; p_\theta(y|z), q_\phi(z|x)) - \lambda(\mathbb{E}_{z \sim q_\phi(z)}[\nu(z)] - \mathbb{E}_{z \sim p_\psi(z)}[\exp(\nu(z))])\}$. Note that the GPD parameters $(\xi, \sigma)$ are absorbed in $\phi$, and the hyperparameter $u$ is used in the GPD prior $p(z)$; $u$ is set to $F_z^{-1}(0.99)$ in all experiments. When the event rate is $\geq 1\%$, we set $(\lambda, \beta) = (1 \times 10^{-3}, 1 \times 10^{-5})$; otherwise we shrink the parameters to $(\lambda, \beta) = (1 \times 10^{-4}, 1 \times 10^{-6})$. More concretely, the constituent parts $p_\theta(y|z)$, $p_\psi(z)$, $q_\phi(z|x)$ and $\nu(z)$ are specified as described below; pseudo-code for VIE is presented in Algorithm 2.

Algorithm 2: Variational Inference with Extremals.
Data: D = (x, y); x: inputs, y: labels.
Networks and parameters: Init-Encoder(x; φ): initial encoder network; IAF(z; φ): recursive autoregressive neural network; ν(z; ω): critic neural network; AMNN(z; θ): additive monotonic neural net; prior: p_ψ(z) = MixedGPD(z; ψ, u), ψ = {ξ_GPD, σ_GPD}.
Initialize: Init-Encoder, IAF, ν, AMNN, ψ.

In all experiments, AMNN, IAF and ν(z) are specified as two-layer MLPs with 32 hidden units and Rectified Linear Unit (ReLU) activation functions. The initial encoder Init-Encoder is specified as a three-layer MLP with 32 hidden units. We set the minibatch size to m = 200. The critic network ν(z) uses the RMSprop optimizer with learning rate 1 × 10⁻³; the other parts of the algorithm use the Adam optimizer with learning rate 1 × 10⁻⁴. To avoid over-fitting, we set the dimension of the latent space to 4 in all experiments. Note that we use the Complementary Log-Log (CLL) link function for p_θ(y|z) in (13) for the outcome model, as opposed to the standard logistic link 1/(1 + exp(−a)); the CLL link is more sensitive at the tail end, so it is more frequently used in statistical models dealing with vanishing probabilities (Aranda-Ordaz 1981). To avoid collapsing to suboptimal local minima, we train the encoder arm more frequently to compensate for the detrimental posterior-lagging phenomenon (He et al. 2019). Our pseudo-code for VIE is summarized in Algorithm 1.

Numerical Integration. Following (5), we divide the region $[l, z_j]$ evenly into $M$ bins of width $d_j = (z_j - l)/M$, with $z_{j,M} = z_j$.
For each of the $M$ bins, we select a random point $z^{(r)}_{j,k}$ in the $k$th bin. The integral approximation on the support $[z_{j,k}, z_{j,k+1}]$ is the rectangular area $h_j(z^{(r)}_{j,k})\,d_j$. As a result, the integral $\int_l^{z_j} h_j(s; \theta)\,ds$ is approximated with

$\int_l^{z_j} h_j(s; \theta)\,ds \approx \sum_{k=1}^{M} h_j(z^{(r)}_{j,k})\,d_j$.

With this approximation, (5) can be written as

$H(z; \theta) \approx \sum_{j=1}^{p} \alpha_j \sum_{k=1}^{M} h_j(z^{(r)}_{j,k})\,d_j + \gamma$.

We set $M = 100$ and $l = -5$ in all the experiments.

Discussion of Evaluation Metrics. In the main text, we reported AUC and AUPRC instead of single evaluation metrics such as overall accuracy or error rate. Standard statistical metrics like the Brier Score (BS) and binary cross-entropy (BCE) can be deceptive when the event rate is low, e.g., ≤ 10% (Schmid and Griffith 2014). We add the BCE loss and the positive-case BCE loss in the following sections of the simulation study for reference; some poorly performing models can have relatively low BCE scores. In this case, the ground truth (Oracle) is the best reference we have.

We examine model performance under two simulation strategies. The first is a semi-synthetic dataset, which exploits real-world covariate structures. The second is a synthetic dataset based on our extreme representation assumptions.

We synthesize the semi-synthetic dataset based on the Framingham study (Mitchell et al. 2010), a long-term cardiovascular survival cohort study. After quality control, 40,046 subjects with nine covariates (four continuous and five categorical) are included. We use a realistic model to synthesize data from the real-world covariates under varying conditions, i.e., different event rates, sample sizes, non-linearity, etc. More specifically, we use the CoxPH-Weibull model (Bender, Augustin, and Blettner 2005) to simulate the survival time of patients as $T = \{-\log U / (\lambda \exp(g(x)))\}^{1/\nu}$, where $g(x)$ is either a linear function or a randomly initialized neural net. Our goal is to predict whether the subject will decease within a pre-specified time frame, i.e., $T < t_0$. By adjusting the cut-off threshold $t_0$, we can simulate different event rates. The details are shown in Algorithm 3.

Algorithm 3: Generation of semi-synthetic data.
Extract covariates from the Framingham dataset;
Set ν, λ (the parameters of the Cox-Weibull distribution);
Set the time cut-off t_0;
Decide the form of g(x): R^q → R;
for i ∈ {1, …, n} do: sample u_i from Unif(0, 1); compute t_i = {−log u_i / (λ exp(g(x_i)))}^{1/ν} and set y_i = 1(t_i < t_0).

In our experiments, the performance with $g(\cdot)$ set to a randomized neural network and with $g(\cdot)$ set to a linear function does not differ very much. For simplicity, we present the results under the neural-network setting. Apart from the results at a 1% event rate given in the main text, we show the results at 0.5% and 5% event rates here. The oracle results are calculated by plugging the true $g(x_i)$ into Algorithm 3, and the randomness comes from the generating scheme of the survival time $t$.

Additional results for semi-synthetic datasets. For the 1% event rate case presented in the main text, the AUC and AUPRC distributions are summarized in Figure S2, which corresponds to the average and standard deviation values presented in Table 1. We further examine the cases with 0.5% and 5% event rates to evaluate our method's robustness; the results are summarized in Table S1 and Table S2, respectively.

[Figure S2: Box plots of 10 independent 1% event rate semi-synthetic analyses.]

Apart from the threshold-free metrics AUC and AUPRC, we also present the binary cross-entropy (BCE) loss and the BCE loss associated with true events (positive-case losses). Note that for an imbalanced dataset, the BCE loss can be misleading.
For example, VAE-GPD, which is poorly behaved in terms of AUC and AUPRC, can still have a relatively low BCE loss, since the majority group overwhelms the (more important) minority group. We can refer to the BCE loss and positive-case loss of the oracle results for reference. VIE performs consistently close to the oracle results, especially with a low event rate and small training sample size, and Fenchel-GPD is in second place.

We design the long-tailed synthetic datasets based on our proposed method, where the latent variable $z$ enjoys a long-tailed distribution. The pseudo-code for this simulation strategy is shown in Algorithm 4, where $t_0$ is a pre-specified time cut-off and $H(\cdot)$ is a randomly initialized monotone neural network that creates a monotone mapping from $z$ to the risk.

Algorithm 4: Generation of long-tailed data.
Set the sample size n, latent space dimension p, and number of covariates q;
Set μ_p, Σ_p, ξ_p, σ_p (the parameters of a long-tailed distribution);
Set ν, λ (the parameters of the Cox-Weibull distribution);
Set the time cut-off t_0;
Initialize ψ (for the MLP g(·; ψ): R^p → R^q) and θ (for the AMNN H(·; θ): R^p → R);
for i ∈ {1, …, n} do: sample z_i from the mixed GPD(μ_p, Σ_p, ξ_p, σ_p); compute x_i = g(z_i; ψ); sample u_i from Unif(0, 1); compute the survival time t_i from u_i and the risk H(z_i; θ), and set y_i = 1(t_i < t_0).

Table S3 summarizes the findings for the long-tailed latent-space datasets with a 1% event rate. VIE can achieve relatively high AUC and AUPRC even with a small training sample size, which suggests that the proposed method can recover the long-tailed behavior in the feature representation. Among the combinations of different VI techniques, the Fenchel duality mechanism facilitates the distribution matching best among the inference techniques. In summary, we have tested performance in various simulation settings (model assumptions, event rates, sample sizes, non-linearity, etc.), and VIE takes the lead in all cases. The IAF- and GPD-only variants perform poorly, not even comparable to the vanilla VAE model, possibly because the prior is not matched. Explicitly matching the prior via the Fenchel mini-max scheme improves the performance, especially on the long-tailed representation datasets. Stacked together, our full proposal, VIE, consistently outperforms its variants and approaches the oracle performance in the large-sample regime.

We consider 5 real-world datasets, including 3 survival datasets, in this study. Among these datasets, COVID and InP are from the Duke University Health System (DUHS) and are not public at this time. SEER and SLEEP are two public survival datasets. Besides the above clinical datasets, we further evaluate model performance on the Fraud dataset (Dal Pozzolo et al. 2017) in this supplementary material.

Baseline Models. In all experiments, LDAM, FOCAL, IW, DeepSVDD and MLP are specified as three-layer MLPs with 32 hidden units and ReLU activations. When tuning parameters for LASSO, using the notation of the scikit-learn (Pedregosa et al. 2011) function sklearn.linear_model.Lasso, we choose α from [10⁻⁵, 10⁻⁴, 10⁻³, 10⁻², 0.1, 0.2, 0.5, 0.8] based on the best performance on the validation datasets. For the Focal loss, the parameter γ is selected from [0.1, 0.5, 1.0, 1.5, 2.0], also based on the best performance on the validation datasets.

The COVID dataset includes inpatient encounters at DUHS as of January 1, 2020. Vitals, administered medications, lab results, comorbidities, etc. are used as predictors to identify the risk of inpatient death, ventilation, and ICU transfer as adverse outcomes.
A detailed description of the raw data for each group of covariates can be found in Table S4. The mortality rate in this dataset is 2.8%, the ventilation rate 7.8%, and the ICU transfer rate 18%. For rare event modeling purposes, apart from the mortality prediction, we define the group of patients who experienced either death or ventilation as the combined adverse-outcome group, which has an 8% event rate. In Figure S3, we present the comparison of VIE versus the other baseline models for the two outcomes (combined and mortality). VIE shows strong performance under these metrics.

[Figure S3: Bootstrapped AUC (left) and AUPRC (right) distributions for the COVID dataset with different outcomes. Note that comparisons of AUPRC across event-rate groups are not meaningful.]

To further demonstrate the superior performance of VIE, we examine the performance on a 5-fold cross-validation of the COVID dataset for mortality prediction. VIE outperforms the other baselines consistently on each fold. Comparing against the performance of the second-best model, LDAM, with a paired t-test yields a p-value of 0.093 < 0.1 with an effect size of 0.98, which shows that the performance gap is statistically significant at α = 0.1.

[Figure S4: Bootstrapped AUC (left) and AUPRC (right) distributions for the InP dataset with different event rates. Note that comparisons of AUPRC across event-rate groups are not meaningful.]

SEER and SLEEP Datasets. SEER and SLEEP are two public survival datasets that contain censoring (i.e., events that are not reported during the follow-up period of a subject). To create a classification dataset from a survival dataset, we deleted patients censored before the time cut-off. The proportion of subjects excluded is less than 0.1% for SEER and less than 0.2% for SLEEP, which should not affect the overall credibility of the analysis. We follow the pre-processing steps provided in Chapfuwa et al. (2020).

Credit Card Fraud Detection. To evaluate performance on non-clinical data, we examine the VIE model on the fraud detection benchmark dataset (Dal Pozzolo et al. 2017), where fraudulent credit card transactions are coined as rare events (∼0.2%). The dataset includes 284k records with 29 covariates. We split the original dataset into training, validation, and testing sets with a 6:2:2 ratio to ensure a fair and stable comparison. The hyperparameters are selected based on the best performance on the validation dataset. The average and standard deviation of the metrics are presented in Table S7. VIE outperforms the other baselines and achieves an average AUC of over 0.99 on the bootstrapped samples.

Exploration of the feature representation. We visualize the marginal relationship between the latent-space dimensions and the risk on the real-world InP dataset (1% event rate) in Figure S7. For the first dimension (top-left), the extremal behavior contributes significantly and positively to the event risk prediction; the other three dimensions serve as inhibitors of the event risk. Empirically, all the latent dimensions have a long-tailed distribution, with learned shape parameter ξ > 0. We also embed the posterior space z in a 2D plot with t-SNE, with probability contour lines, as shown in Figure S8. The events are concentrated at one end of the latent space.

Our binary classification framework can easily be generalized to multi-class problems. We keep the mixed-GPD distribution on the p-dimensional latent variable z, and increase the number of monotone networks for each dimension of z.
In the binary case, each dimension of z corresponds to one monotone network; here we instead use k networks per dimension, for a total of p × k monotone functions. We then apply an FC layer to the final output, with m categories, as shown in Figure S9. In the learning objective, we replace the binary cross-entropy (BCE) loss with the regular cross-entropy (CE) loss in the reconstruction term.

We generate a toy dataset to illustrate VIE's performance on multi-class classification problems. Based on Algorithm 3, instead of setting a binary time cut-off, we split the generated time t with a sequence of time cut-offs at the [5%, 15%, 30%, 60%] percentiles of t. In this way, we obtain a dataset with 5 categorical outcomes with event rates of [5%, 10%, 15%, 30%, 40%], respectively. To evaluate the performance, in addition to the per-class accuracy, we use the F1 score, the harmonic mean of precision and recall, 2/(recall⁻¹ + precision⁻¹), which ranges from 0 to 1, with higher values indicating better performance. We use the micro-averaged F1 score (micro-F1), computed from the pooled counts of true positives, false positives and false negatives over all classes, to summarize the overall F1 across classes. Compared to the related methods FOCAL and LDAM, the VIE results on this toy example are comparable per class and better in terms of the F1 score. FOCAL and LDAM are specified as 3-layer MLPs with 32 hidden units, and VIE uses the previous settings, except for k = 3.

References
Deep variational information bottleneck
On two families of transformations to additivity for binary response data
Additive isotonic models
Residual life time at great age. The Annals of Probability
Improved variational inference with inverse autoregressive flow
Auto-encoding variational Bayes
Focal loss for dense object detection
Evaluating the causal effects of cellphone distraction on crash risk using propensity score methods
Reconsidering Generative Objectives For Counterfactual Reasoning
Rare and extreme events: the case of COVID-19 pandemic
kNN approach to unbalanced data distributions: a case study involving information extraction
Estimating the tails of loss severity distributions using extreme value theory
Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks
Arterial stiffness and cardiovascular events: the Framingham Heart Study
Machine learning and data mining
Neural network learning of low-probability events
Development, Implementation, and Evaluation of an In-Hospital Optimized Early Warning Score for Patient Deterioration
Masked autoregressive flow for density estimation
Statistical inference using extreme order statistics
The sleep heart health study: design, rationale, and methods
Variational inference with normalizing flows
SEER survival monograph: cancer survival among adults: US SEER program, 1988-2001, patient and tumor characteristics. National Cancer Institute, SEER Program
Monte Carlo statistical methods
The cross-entropy method: a unified approach to combinatorial optimization, Monte-Carlo simulation and machine learning
Simulation and the Monte Carlo method
Deep one-class classification
Improving predictive inference under covariate shift by weighting the log-likelihood function
Monotonic networks
A modern maximum-likelihood theory for high-dimensional logistic regression
On Fenchel Mini-Max Learning
Generalized reduced rank latent factor regression for high dimensional tensor fields, and neuroimaging-genetic applications
One-class classification: Concept learning in the absence of counter-examples
Regression shrinkage and selection via the lasso
Generalizing from a few examples: A survey on few-shot learning
Learning to model the tail
Unconstrained monotonic neural networks
Distance metric learning for large margin nearest neighbor classification
Variational learning of individual survival distributions
Cluster-based under-sampling approaches for imbalanced data distributions
A Class Imbalance Loss for Imbalanced Object Recognition
Training cost-sensitive neural networks with methods addressing the class imbalance problem
Weighted extreme learning machine for imbalance learning

SM References
Survival cluster analysis
Credit card fraud detection: a realistic modeling and a novel learning strategy
Modeling the choice of residential location
Estimating the tails of loss severity distributions using extreme value theory
Automatic differentiation in PyTorch
Scikit-learn: Machine Learning in Python
The sleep heart health study: design, rationale, and methods
SEER survival monograph: cancer survival among adults: US SEER program, 1988-2001, patient and tumor characteristics
Multivariate classification rules: calibration and discrimination

Acknowledgements. The authors would like to thank the Duke Institute for Health Innovation (DIHI) for providing access to curated COVID-19 data and outcomes. This research was supported in part by NIDDK R01-DK123062 and NIH/NIBIB R01-EB025020.