title: Exploration of Dark Chemical Genomics Space via Portal Learning: Applied to Targeting the Undruggable Genome and COVID-19 Anti-Infective Polypharmacology
authors: Cai, Tian; Xie, Li; Chen, Muge; Liu, Yang; He, Di; Zhang, Shuo; Mura, Cameron; Bourne, Philip E.; Xie, Lei
date: 2021-11-23

Advances in biomedicine are largely fueled by exploring uncharted territories of human biology. Machine learning can both enable and accelerate discovery, but it faces a fundamental hurdle when applied to unseen data whose distribution differs from previously observed ones -- a common dilemma in scientific inquiry. We have developed a new deep learning framework, called Portal Learning, to explore dark chemical and biological space. Three key, novel components of our approach are: (i) end-to-end, step-wise transfer learning, in recognition of biology's sequence-structure-function paradigm; (ii) out-of-cluster meta-learning; and (iii) stress model selection. Portal Learning provides a practical solution to the out-of-distribution (OOD) problem in statistical machine learning. Here, we have implemented Portal Learning to predict chemical-protein interactions on a genome-wide scale. Systematic studies demonstrate that Portal Learning can effectively assign ligands to unexplored gene families (unknown functions), versus existing state-of-the-art methods, thereby allowing us to target previously "undruggable" proteins and design novel polypharmacological agents for disrupting interactions between SARS-CoV-2 and human proteins. Portal Learning is general-purpose and can be further applied to other areas of scientific inquiry.

The rapid emergence of SARS-CoV-2 variants has posed a significant challenge to existing vaccine and anti-viral development paradigms. Gordon et al. experimentally identified 332 human proteins that interact with the SARS-CoV-2 virus [17]. This PPI map provides unique opportunities for anti-SARS-CoV-2 drug discovery: targeting the host proteins involved in these PPIs can disrupt human-SARS-CoV-2 interactions, thereby thwarting the onset of COVID-19. By not aiming to directly kill virions, this indirect strategy should lessen the selection pressure on viral genome evolution. A polypharmacological agent that interacts moderately strongly with multiple human proteins could be an effective and safe anti-COVID-19 therapeutic: on the one hand, the normal functions of human proteins should not be significantly perturbed while, on the other hand, the interactions required for successful SARS-CoV-2 infection would be inhibited. Here, we virtually screened compounds in the Drug Repurposing Hub [18] against the 332 human SARS-CoV-2 interactors. Two drugs, Fenebrutinib and NMS-P715, ranked highly; interestingly, both of these anti-tumorigenic compounds inhibit kinases. Their interactions with putative human targets were supported by further (structure-based) analyses of protein-ligand binding poses. In summary, the contributions of this work are three-fold: 1. A novel, generalized training scheme, Portal Learning, is proposed as a way to guide biology-inspired systematic design in order to improve the generalization power of machine learning on OOD problems, such as is found in the dark regions of molecular/functional space. 2.
To concretely illustrate the Portal Learning approach, a specific algorithm, PortalCG, is proposed and implemented. Comprehensive benchmark studies demonstrate the promise of PortalCG when applied to OOD problems, specifically for exploring the dark regions of chemical genomics space.

To enable the exploration of dark regions of chemical and biological space, Portal Learning rests upon a systematic, well-principled training strategy, the underpinnings of which are shown in Figure 1. In Portal Learning, a model architecture together with a data set and a task defines a universe. Each universe has some global optimum with respect to the task, based on a pre-defined loss function. The model-initialized instance in a universe -- which could be a local optimum in the current universe, but which facilitates moving the model to the global optimum in the ultimately targeted universe -- is called a portal. A portal is akin to a catalyst that lowers the energy barrier, via a transition state, for a chemical reaction to occur. The dark chemical genomics space cannot be explored effectively if the learning process is confined only to the observed universe of protein sequences that have known ligands, as the known data are highly sparse and biased (details in the Results section). Hence, it is critical to identify portals into the dark chemical genomics universe starting from the observed protein sequence and structure universes. For clarity and ease of reference, key terms related to Portal Learning are given in the Supplemental Materials. The remainder of this section describes the three key components of the Portal Learning approach, namely end-to-end step-wise transfer learning (STL), out-of-cluster meta-learning (OOC-ML), and stress model selection.

End-to-end step-wise transfer learning (STL). Information flow in biological systems generally involves multiple intermediate steps from a source instance to a target. For example, a discrete genotype (source) ultimately yields a downstream phenotype (target) via many steps of gene expression, in some environmental context. For predicting genotype-phenotype associations, explicit machine learning models that represent information transmission from DNA to RNA to cellular phenotype are more powerful than those that ignore the intermediate steps [19]. In Portal Learning, transcriptomics profiles can be used as a portal to link the source genetic variation (e.g., variants, SNPs, homologs, etc.) and target cellular phenotype (e.g., drug sensitivity). Using deep neural networks, this process can be modeled in an end-to-end fashion.

Out-of-cluster meta-learning (OOC-ML). Even if we can successfully transfer the information needed for the target through intermediate portals from the source universe, we still need additional portals to reach the many sparsely-populated regions of the dark universe that lack labeled data in the target. Inspired by Model-Agnostic Meta-Learning (MAML) [11], we designed a new OOC-ML approach to explore the dark biological space. MAML cannot be directly applied to Portal Learning in the context of the OOD problem because it is designed for few-shot learning under a multi-task formulation: few-shot learning expects a few labeled samples from the test data set to update the trained model during inference for a new task.
This approach cannot be directly applied to predicting gene functions of dark gene families, where the task (e.g., binary classification of ligand binding) is unchanged but there are no labeled data for an unseen distribution that may differ significantly from the training data. In a sense, rather than MAML's "few-shot/multi-task" problem context, mapping dark chemical/biological space is more of a "zero-shot/single-task" learning problem. A key insight of OOC-ML is to define sub-distributions (clusters) for the labeled data in the source instance universe. An example demonstrated in this paper is to define sub-distributions using Pfam families when the source instance is a protein sequence. Intuitively, OOC-ML involves a two-stage learning process. In the first stage, a model is trained on each individual labeled cluster (e.g., a given Pfam ID), thereby learning whatever knowledge is (implicitly) specific to that cluster. In the second stage, all trained models from the first stage are combined and a new ensemble model is trained, using labeled clusters that were not used in the first stage. In this way, we may extract common intrinsic patterns shared by all clusters and apply the learned essential knowledge to dark ones.

Stress model selection. Finally, training should be stopped at a suitable point in order to avoid overfitting. This is achieved by stress model selection, which is designed to recapitulate an OOD scenario by splitting the data into OOD-train, OOD-development, and OOD-test sets as listed in Table 1; in this procedure, the data distribution of the development set differs from that of the training data, and the distribution of the test set differs from both the training and development data. For additional details and perspective, the conceptual and theoretical basis of Portal Learning is further described in the Methods section of the Supplemental Materials.

We implemented the Portal Learning concept as a concrete model, PortalCG, for exploring the dark chemical genomics space. In terms of Portal Learning's three key components (STL, OOC-ML, and stress model selection), PortalCG makes the following design choices (see also Figure 2).

End-to-end sequence-structure-function STL. The function of a protein -- e.g., serving as a target receptor for ligand binding -- stems from its three-dimensional (3D) shape and dynamics which, in turn, are ultimately encoded in its primary amino acid sequence. In general, information about a protein's structure is more powerful than purely sequence-based information for predicting its molecular function, because sequences drift/diverge far more rapidly than do 3D structures on evolutionary timescales. Although the number of experimentally-determined structures continues to increase exponentially, and AlphaFold2 can now reliably predict the 3D structures of most single-domain proteins, it nevertheless remains quite challenging to directly use protein structures as input for predicting the ligand-binding properties of dark proteins. In PortalCG, protein structure information is used as a portal to connect a source protein sequence and a corresponding target protein function (Figure 1A). We begin by performing self-supervised training to map tens of millions of sequences into a universal embedding space, using our recent distilled sequence alignment embedding (DISAE) algorithm [1]. Then, 3D structural information about the ligand-binding site is used to fine-tune the sequence embedding.
Finally, this structure-regularized protein embedding is used as a hidden layer for supervised learning of cross-gene-family CPIs, following an end-to-end sequence-structure-function training process. By encapsulating the role of structure in this way, inaccuracies and uncertainties in structure prediction are 'insulated' and do not propagate to the function prediction.

Out-of-cluster meta-learning. In the OOC-ML framework, Pfam gene families provide natural clusters as sub-distributions. Within each Pfam family, the data are split into a support set and a query set, as shown in Figure 1(B). Specifically, a model is trained on a single Pfam family independently, to reach a local minimum, using the support set of that family (the inner-loop IID optimization in Figure 1(C.1)). Then a query set from the same Pfam family is applied to the locally optimized model to obtain a loss on the local loss landscape (the outer-loop IID meta-optimization in Figure 1(C.1)). Local losses from the query sets of multiple Pfam families are aggregated to calculate the loss on a global loss landscape (the meta-optimization in Figure 1(C.1)). Clusters with very few data points do not have a support set and hence participate only in the optimization on the global loss landscape. There are many possible choices of aggregation; a simple one is to calculate the average loss. The aggregated loss is used to optimize the model on the global loss landscape. Note that the weights learned on each local loss landscape are memorized during the global optimization; in our implementation, this is realized by creating a copy of the model trained in each family's local optimization. In this way, the locally learned knowledge is guaranteed to be passed to the global loss landscape only through the query-set loss.

Stress model selection. The final model was selected using Pfam families that were not used in the training stage (Figure 2, right panel). The Supplemental Materials provide further methodological details, covering data pre-processing, the core algorithm, model configuration, and implementation details.

We inspected the known CPIs between (i) molecules in the manually-curated ChEMBL database, which covers only a small portion of all chemical space, and (ii) proteins annotated in Pfam-A [20], which represents only a narrow slice of the whole protein sequence universe. The ChEMBL26 [21] database supplies 1,950,765 chemicals paired to 13,377 protein targets, constituting 15,996,368 known interaction pairs. Even for just this small portion of chemical genomics space, the unexplored CPIs are enormous, as can be seen in the dark region of Figure 3. Approximately 90% of Pfam-A families do not have any known small-molecule binder. Even in Pfam families with annotated CPIs (e.g., GPCRs), there exist a significant number of 'orphan' receptors with unknown cognate ligands (Figure 3). Fewer than 1% of chemicals bind to more than two proteins, and fewer than 0.4% of chemicals bind to more than five proteins, as shown in Supplemental Figures S1, S2, and S3. Because protein sequences and chemical structures in the dark chemical genomics space could differ significantly from those of the known CPIs, predicting CPIs in the dark space is an archetypal, unaddressed OOD problem.
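Conceptually, the sequence-structure-function STL pipeline amounts to re-using one shared protein encoder across three successive objectives. The following is a minimal PyTorch-style sketch of that idea; the module names and sizes (ProteinEncoder, the head dimensions, etc.) are illustrative assumptions, not the released PortalCG code.

```python
import torch
import torch.nn as nn

class ProteinEncoder(nn.Module):
    """Shared sequence encoder whose weights travel across all three universes."""
    def __init__(self, vocab_size=30, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, tokens):            # (batch, length) -> (batch, length, dim)
        return self.encoder(self.embed(tokens))

encoder = ProteinEncoder()

# Stage 1 -- sequence universe: self-supervised (masked-token) pretraining on
# Pfam sequences; the trained encoder is the first portal.
mlm_head = nn.Linear(256, 30)

# Stage 2 -- structure universe: fine-tune the same encoder with a distogram
# head that predicts 10-way binned binding-site distances; the second portal.
distogram_head = nn.Linear(256, 10)

# Stage 3 -- function universe: the structure-regularized encoder feeds a CPI
# classifier (paired with a chemical graph encoder, omitted here for brevity).
cpi_head = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))
```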
When compared with the state-of-the-art method DISAE [1], which was already shown to outperform other leading methods for predicting the CPIs of orphan receptors, PortalCG demonstrates superior performance in terms of both Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves, as shown in Figure 4(a). Because the ratio of positive to negative cases is imbalanced, the PR curve is more informative than the ROC curve. The PR-AUC of PortalCG and DISAE is 0.714 and 0.603, respectively. In this regard, the performance gain of Portal Learning (18.4%) is significant (p-value < 1e-40). Performance breakdowns for the binding and non-binding classes can be found in Supplemental Figure S4. PortalCG exhibits much higher recall and precision scores for positive cases (i.e., chemical-protein pairs predicted to bind) than for negative ones, as shown in Supplemental Figure S4; this is a highly encouraging result, given that there are many more negative (non-binding) than positive cases. The deployment gap, shown in Figure 4(b), stays steadily around zero for PortalCG; this promising finding means that, when the model is applied to the dark genomics space, we can expect performance similar to that measured on the development data set.

With the advent of high-accuracy protein structural models predicted by AlphaFold2 [5], it becomes feasible to use reverse protein-ligand docking (RPLD) [22] to predict ligand-binding sites and poses on dark proteins on a genome-wide scale. In order to compare our method with the RPLD approach, blind docking to putative targets was performed via Autodock Vina [23]. After removing proteins that failed in the RPLD experiments (mainly due to extended structural loops), docking scores for 28,909 chemical-protein pairs were obtained. The performance of RPLD was compared with that of PortalCG and DISAE. As shown in Figure 4(a), both ROC and PR for RPLD are significantly worse than for PortalCG and DISAE. It is well known that PLD suffers from a high false-positive rate due to poor modeling of protein dynamics, solvation effects, crystallized waters, and other challenges [24]; often, small-molecule ligands will indiscriminately 'stick' to concave, pocket-like patches on protein surfaces. For these reasons, although AlphaFold2 can accurately predict many protein structures, the relatively low reliability of PLD still poses a significant limitation, even with a limitless supply of predicted structures [25]. Thus, the direct application of RPLD remains a challenge for predicting ligand binding to dark proteins. PortalCG's end-to-end sequence-structure-function learning could be a more effective strategy: protein structure information is used not as a fixed input, but as an intermediate layer that can be tuned using various structural and functional information. From this perspective, the role of protein structure in PortalCG is again that of a portal (sequence→function; Figure 1) and a regularizer (Figure 2).

To gauge the contribution of each component of PortalCG to the overall effectiveness in predicting dark CPIs, we systematically compared the four models shown in Table 2. Details of the exact model configurations for these experiments can be found in the Supplemental Materials, Table S10 and Figure S13.
As shown in Table 2, Variant 1, with a higher PR-AUC than the DISAE baseline, reflects the direct gain from transfer learning through 3D binding-site information, all else being equal; yet with transfer learning alone, without OOC-ML as the optimization algorithm in the target universe (i.e., Variant 2 versus Variant 1), the gain is modest: Variant 1 achieves only a 4% improvement, whereas Variant 2 yields a 15% improvement. PortalCG (i.e., full Portal Learning) has the best PR-AUC score of all. With all other factors held constant, the advantage of PortalCG appears to be the synergistic effect of STL and OOC-ML. The performance gain measured by PR-AUC under a shifted evaluation setting is significant (p-value < 1e-40), as shown in Supplemental Figure S5. We find that stress model selection mitigates potential overfitting problems, as expected. Training curves for the stress model selection are in Supplemental Figures S4 and S6. As shown in Supplemental Figure S6, the baseline DISAE approach tends to overfit during training: its IID-dev performance is higher than PortalCG's but deteriorates at OOD-test. Hence, the deployment gap for the baseline is -0.275 and -0.345 on ROC-AUC and PR-AUC, respectively, while PortalCG's deployment gap is around 0.01 and 0.005, respectively.

A production-level model using PortalCG was trained with ensemble methods for deployment; details are in the Supplemental Methods section. The trained PortalCG model was applied to two case studies in order to assess its utility in the exploration of dark space. Given any protein-chemical pair, represented by a protein sequence and a chemical SMILES string, the model produces a prediction along with a corresponding prediction score. To select high-confidence predictions, a histogram of prediction scores was built based on known pairs (Supplemental Figure S7). A threshold of 0.67, corresponding to a false-positive rate of 2.18e-05, was identified for selecting high-confidence positive predictions. Around 6,000 drugs from the Drug Repurposing Hub [26] were used in the screening.
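As a minimal illustration of this kind of threshold calibration, the sketch below picks the score cutoff whose empirical false-positive rate on known negatives does not exceed a target rate; the score distribution here is a toy stand-in, and the exact calibration in Figure S7 (built from known pairs) may differ.

```python
import numpy as np

def threshold_for_fpr(neg_scores: np.ndarray, target_fpr: float) -> float:
    """Smallest cutoff t with P(score >= t | known negative) <= target_fpr."""
    return float(np.quantile(neg_scores, 1.0 - target_fpr))

rng = np.random.default_rng(0)
neg_scores = rng.beta(2.0, 8.0, size=1_000_000)   # toy stand-in for real scores
cutoff = threshold_for_fpr(neg_scores, 2.18e-5)
print(f"calibrated cutoff: {cutoff:.3f}")          # plays the role of the 0.67 cutoff
```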
The remainder of this section describes the two case studies examined with PortalCG, namely (i) COVID-19 polypharmacology and (ii) the 'undruggable' portion of the human genome.

In order to identify lead compounds that may disrupt SARS-CoV-2-human interactions, we screened 5,886 approved and investigational drugs against the 332 human proteins known to interact with SARS-CoV-2. We considered a drug-protein pair a positive hit, and selected it for further analysis, only when all models in the ensemble vote positive and the false-positive rate does not exceed 2.18e-05. Drugs involved in these positive pairs were ranked by the number of proteins to which they are predicted to bind; detailed information is given in Supplemental Table S1. Most of these drugs are protein kinase inhibitors and are already in Phase 2 clinical trials. Among them, Fenebrutinib and NMS-P715 are predicted to bind to seven human SARS-CoV-2 interactors, as shown in Table 3. To elucidate how these drug molecules might associate with a SARS-CoV-2 interactor partner, we performed molecular docking for Fenebrutinib and NMS-P715. Structures of two SARS-CoV-2 interactors were obtained from the Protein Data Bank; the remaining five proteins do not have experimentally solved structures, so their predicted structures (via AlphaFold2) were used for docking. For most of these structures, the binding pockets are unknown. Therefore, blind docking was employed, using Autodock Vina [23], to search the full protein surfaces (the accessible molecular envelope) and identify putative binding sites of Fenebrutinib and NMS-P715 on these interactors. Docking conformations with the best (lowest) predicted binding energies were selected for each protein; the respective binding energies are listed in Table 3.

Components of the exosome complex are predicted targets of both Fenebrutinib and NMS-P715. The exosome complex is a multi-protein, intracellular complex involved in the degradation of many types of RNA molecules (e.g., via 3'→5' exonuclease activities). As shown in Figure 5, the subunits of the exosomal assembly form a central channel; RNA passes through this region as part of degradation/processing. Intriguingly, SARS-CoV-2's genomic RNA has been found to localize in exosomal cargo, suggesting a key mechanistic role for the channel region in SARS-CoV-2 virion infectivity pathways [27]. Fenebrutinib and NMS-P715 were both predicted to bind a specific exonuclease of the exosome complex, RRP43, while NMS-P715 was additionally predicted to bind another exonuclease, RRP46. The predicted binding poses of Fenebrutinib and NMS-P715 with the exosomal complex components are shown in Figure 5, where the physicochemical/interatomic interactions between the two drugs and the exosome components are also schematized as 2D layouts. The favorable hydrogen-bond, pi-alkyl, pi-cation and van der Waals interactions provide additional support that Fenebrutinib and NMS-P715 do indeed bind these components of the exosome complex. The predicted binding poses and 2D interaction maps for Fenebrutinib and NMS-P715 with the other targeted proteins are shown in Supplementary Figures S8, S9, and S10.

Table 3: Docking scores for Fenebrutinib and NMS-P715.

It is well known that only a small subset of the human genome is considered druggable [28]. Many proteins are deemed "undruggable" because there is no information on their ligand-binding properties or other interactions with small-molecule compounds (be they endogenous or exogenous ligands). Here, we built an "undruggable" human disease protein database by removing the druggable proteins in Pharos [29] and Casas's druggable proteins [30] from human disease-associated genes [14], and applied PortalCG to predict the probability of these "undruggable" proteins binding drug-like molecules. A total of 12,475 proteins were included in our disease-associated undruggable human protein list. These proteins were ranked by their probability scores, and 267 of them have a false-positive rate lower than 2.18e-05, as listed in Supplemental Table S2. Table 4 shows the statistically significantly enriched functions of these top-ranked proteins, as determined by DAVID [31]. The most enriched proteins are involved in alternative splicing of mRNA transcripts. Malfunctions in alternative splicing are linked to many diseases, including several cancers [32, 33] and Alzheimer's disease [34]. However, pharmaceutical modulation of the alternative splicing process is a challenging task. Identifying new drug targets and their lead compounds for targeting alternative splicing pathways may open new doors to developing novel therapeutics for complex diseases with few treatment options. Diseases associated with these 267 human proteins are listed in Table 5.
Since one protein is often associated with multiple diseases, these diseases are ranked by the number of their associated proteins; most of the top-ranked diseases are related to cancer development. Twenty-one drugs that are approved or in clinical development are predicted to interact with these proteins, as shown in Table S3. Several of these drugs are highly promiscuous. For example, AI-10-49, a molecule that disrupts the protein-protein interaction between CBFb-SMMHC and the tumor suppressor RUNX1, may bind to more than 60 other proteins. The off-target binding profiles of these drugs may provide invaluable information on potential side effects, as well as opportunities for drug repurposing and polypharmacology. The drug-target interaction network built for the predicted positive proteins associated with Alzheimer's disease is shown in Figure 6. Functional enrichment, disease associations, and top-ranked drugs for the undruggable proteins with well-studied biology (classified as Tbio in Pharos), and for those excluding Tbio, are listed in the Supplemental Materials.

This paper confronts the challenge of exploring dark chemical genomics space by recognizing it as an OOD generalization problem in machine learning, and by developing a new learning framework to treat this type of problem. We propose Portal Learning as a general framework that enables systematic control of the OOD generalization risk. As a concrete algorithmic example and use case, PortalCG was implemented under the Portal Learning framework. Systematic examination of the PortalCG method revealed its superior performance compared to (i) a state-of-the-art deep learning model (DISAE) and (ii) an AlphaFold2-enabled, structure-based reverse docking approach. PortalCG showed significant improvements in terms of both sensitivity and specificity, as well as a close-to-zero deployment performance gap. With this approach, we were able to explore the dark regions of the druggable genome. Applications of PortalCG to COVID-19 polypharmacology and to the targeting of hitherto undruggable human proteins afford novel directions in drug discovery.

PortalCG uses three databases: Pfam [20], the Protein Data Bank (PDB) [35], and ChEMBL [21]. Two applications are demonstrated, COVID-19 polypharmacology and undruggable human proteins, for which known approved drugs are collected from CLUE [26], the 332 human proteins interacting with SARS-CoV-2 are taken from a recent publication [36], and the 12,475 undruggable proteins are obtained by removing the druggable proteins in Pharos [29] and Casas's druggable proteins [30] from human disease-associated genes [14]. A detailed explanation of how each data set is used can be found in the Supplemental Materials Methods section. Major data statistics are shown in Figure 3 and Supplemental Figures S1, S2, and S3. Experiments were first organized to test PortalCG against the baseline models DISAE [1] and AlphaFold2 [5]. DISAE is a protein language model that predicts protein function from sequence information alone; AlphaFold2 uses protein sequence information to predict protein structure and, combined with docking methods, can be used to predict protein function. Main results are shown in Table 2 and Figure 4. Ablation studies were also performed, mainly to test variants of PortalCG components such as binding-site distance prediction, as shown in Supplemental Figure S12. Since Portal Learning is a general framework, there are many interesting variants to pursue in future studies. To enhance application accuracy, a production-level model is built with ensemble learning, and high-confidence predictions are selected as demonstrated in Supplemental Figure S7. The evaluation metrics used are F1, ROC-AUC, and PR-AUC.
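A minimal sketch of this deployment recipe follows: a unanimous vote over the cross-validated ensemble at the calibrated cutoff, scored with the three reported metrics. The `models` list and its `predict_proba` interface are assumed, scikit-learn-style placeholders rather than the released code, and PR-AUC is computed here as average precision.

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score, average_precision_score

def ensemble_vote(models, pairs, cutoff=0.67):
    """A pair is called positive only if every ensemble member votes positive."""
    votes = np.stack([m.predict_proba(pairs)[:, 1] >= cutoff for m in models])
    return votes.all(axis=0).astype(int)

def report(y_true, y_score, y_pred):
    print("F1     :", f1_score(y_true, y_pred))
    print("ROC-AUC:", roc_auc_score(y_true, y_score))
    print("PR-AUC :", average_precision_score(y_true, y_score))
```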
Extensive details can be found in the Supplemental Materials Methods section, and a literature review of related work can be found in the Supplemental Materials section Related Works.

Figure 1: Illustration of two of the three major Portal Learning components for OOD problems, end-to-end step-wise transfer learning (STL) and out-of-cluster meta-learning (OOC-ML), using the prediction of out-of-gene-family chemical-protein interactions (CPIs) as an example. A. STL: the 3D structure of the protein-ligand binding site sits at the center, connecting protein sequences to CPIs. There are two portals: the first travels from the protein sequence universe to the binding-site structure universe, by pre-training a protein language model that is optimal in the protein sequence universe and yields a model initialization closer to the global optimum in the binding-site structure universe. Optimization from this initialization leads to the discovery of the second portal, through which the protein function universe obtains a model initialization closer to its own global optimum. B. Problem formulation of OOC-ML in comparison with MAML: unlike MAML, where training data are grouped by task, the training data in OOC-ML are clustered in the instance space. Instead of decomposing the data in all clusters into support and query sets as in MAML, certain training clusters and all testing clusters in OOC-ML have only a query set, to simulate the OOD scenario. C. Optimization of OOC-ML in comparison with MAML: intuitively, OOC-ML first performs local optimizations on each cluster of training data with the support/query decomposition, then meta-optimizations on the training clusters that have only query sets, by ensembling the knowledge learned from the local optimizations. The optimized model is applied to the test data in a zero-shot learning setting. In contrast, the meta-optimization in MAML requires query sets in a few-shot learning setting.

TC conceived the concept of Portal Learning, implemented the algorithms, performed the experiments, and wrote the manuscript; Li Xie prepared data, performed the experiments, and wrote the manuscript; MC implemented algorithms; YL implemented algorithms; SZ prepared data; CM and PEB refined the concepts and wrote the manuscript; Lei Xie conceived and planned the experiments and wrote the manuscript. Data, a pre-trained PortalCG model, and the PortalCG code can be found at: https://github.com/XieResearchGroup/PortalLearning

To provide common ground for readers of various backgrounds, we list specifications of key terms related to the methods in the Supplemental Materials. The following list provides explanations at an intuitive level, without attempting to establish formal definitions; readers may refer to the cited materials for more formal definitions.

Deep learning specific
1. model architecture: the design of a model as a set of trainable parameters, without specification of the exact weights of those parameters [37].
2. loss landscape: the geometry of the global loss associated with a model architecture [38].
3. model instance: given a model architecture with a certain number of trainable parameters, a set of weights assigned to those parameters defines an instance of the model; during training, each optimization step leads to a new model instance.
4. optimization [37]: a neural network is trained by optimizing an objective function, usually in the form of minimizing a loss function.
5. global/local optimum: under the optimization formalization, the optimum is the end point of the optimization process. As explained in [39], in an ideal world where the complete data distribution is available to train a model, the optimum is global, while any stopping point for a particular sub-distribution is a local optimum.
6. model initialization: optimization always starts with an initialization of a model instance.
7. pretraining [10]: training a model on a pretext task before training on the target task; the trained model instance becomes the initialization for the target task.
8. finetuning [10]: training a model on a target task starting from a pretrained initialization.
9. independent and identically distributed (IID) [40]: given a set of data $x_i$, each observation $x_i$ is an independent draw from a fixed ("stationary") probability distribution.
10. out-of-distribution (OOD) generalization [41]: generalization consists in reducing the gap in performance between training data and testing data. When the data-generating process of the training data is indistinguishable from that of the test data, the setting is "in-distribution"; if not, it is an out-of-distribution generalization problem [41]. As in [42], consider datasets $D_e := \{(x_i^e, y_i^e)\}_{i=1}^{n_e}$ collected under multiple domains $e$, each containing samples drawn IID from a probability distribution $D(X^e, Y^e)$. The goal of OOD generalization is to use these datasets to learn a predictor $Y \approx f(X)$ that performs well across a large set of unseen but related domains $e \in E_{all}$; namely, the goal is to minimize $\max_{e \in E_{all}} R^e(f)$, where $R^e(f) = \mathbb{E}_{(X^e, Y^e)}[\ell(f(X^e), Y^e)]$ is the risk under domain $e$. Here the set $E_{all}$ contains all possible domains.
11. generalization [37]: the most general goal of generalization is to enable the model to make reliable predictions on unseen data; out-of-distribution (OOD) prediction is a more challenging type of generalization problem, which requires the model to generalize to an unseen data distribution.
12. mini-batch [37]: as a common practice, for robustness and memory reasons, no matter how large a data set is available, only a subset of the data is sampled to train the model at each optimization step.
13. representation [37]: as coined by the line of work named "representation learning", the word representation is interchangeable with "embedding", referring to a vector/matrix of learned features.
14. transfer of model parameters: a technique related to pretraining-finetuning; in implementation, simply initializing part of the target model with the pretrained model serves the purpose.

Chemical-protein interaction (CPI) specific
1. CPI prediction: formulated as a binary classification task -- to predict whether or not a protein-chemical pair will bind, given only the protein sequence and the chemical SMILES string.
2. protein descriptor, chemical descriptor: the modules of a CPI prediction model that extract protein/chemical embeddings in a Euclidean space.

Portal learning specific
1. universe: a model architecture, which defines a data transformation space, together with a data set.
2. portal: a model instance in a universe which could be a local optimum in the current universe, but which facilitates moving the model to the global optimum in the ultimately targeted universe.
3. local loss landscape: the loss landscape induced by optimizing a model on a sub-distribution of the complete underlying distribution of the whole data set.
4. global loss landscape: the direction of gradient search towards the global optimum over all sub-distributions.
5. stress test: a technique [43] for evaluating a predictor by observing its outputs on specifically designed inputs; three common types are stratified performance evaluation, shifted evaluation, and contrastive evaluation.
6. shifted evaluation [43]: the stress test employed in this paper, which splits the train/test data sets by Pfam families, i.e., proteins in the testing and training sets come from different Pfam families. This is a simple simulation of dark-space model deployment.
7. deployment gap: the difference between the performance evaluated on the test set and that evaluated on the development set.
8. classic deep learning training scheme: randomly split a whole data set into train/dev/test sets; optimize the model on randomly sampled mini-batches; choose the final trained model instance based on the best test evaluation metrics; usually adopts the empirical risk minimization [44] formulation.

In this section, we present the detailed methodology of Portal Learning in the context of a four-universe configuration. The four-universe configuration is built on four major databases: Pfam [20], the Protein Data Bank (PDB) [35], BioLip [45], and ChEMBL [21]. The data were preprocessed as follows.

• Protein sequence universe. All sequences from Pfam-A families are used to pretrain the protein descriptor, following the same setting as in DISAE [1], which features an MSA-based distillation process.

• Protein structure universe. Our protein structure data set contains 30,593 protein structures, 13,104 ligands, and 91,780 ligand-binding sites. Binding sites were selected according to the annotations in BioLip (updated to the end of 2020); binding sites in contact with DNA/RNA or metal ions were not included. If a protein has more than one ligand, multiple binding pockets were defined for that protein. For each binding pocket, the distances between the Cα atoms of the amino acid residues of the binding pocket were calculated. To obtain the distances between the ligand and its surrounding binding-site residues, the distances between atom i of the ligand and each atom of residue j of the binding pocket were calculated, and the smallest such distance was selected as the distance between atom i and residue j. To map the binding-site residues into the DISAE protein sequence representation [1], binding-site residues obtained from PDB structures (queries) were mapped onto the multiple sequence alignments of their corresponding Pfam families. First, a profile HMM database was built for all Pfam families. hmmscan [46] was applied to search each query sequence against this profile database to decide which Pfam family it belongs to; for proteins with multiple domains, more than one Pfam family was identified. Then the query sequence was aligned to the most similar sequence in the corresponding Pfam family using phmmer, and the aligned residues of the query sequence were mapped onto the multiple sequence alignments of that family according to the alignment between the query sequence and the most similar sequence.

• Chemical universe.
All chemicals in the ChEMBL26 database constitute the chemical universe.

• Protein function universe. The CPI classification data comprise the whole ChEMBL26 [21] database, using the same threshold for defining positive and negative labels as in DISAE [1].

In the four-universe configuration, Portal Learning starts with portal identification in the protein sequence universe, then travels into the protein structure universe for portal calibration, before finally arriving in the target protein function universe, where OOC-ML is invoked for model optimization. Along the way, shifted evaluation, one type of stress model selection, is used to select the "best" model instance; it splits train/test by Pfam families, i.e., the training and testing sets contain proteins from different Pfam families. Each phase is specified in the following sections.

A chemical is represented as a graph, and its embedding is learned using GIN [47]. The protein descriptor is pretrained from scratch on all Pfam families, exactly following DISAE [1], making it a universal protein language model. With standard Adam optimization, shifted evaluation is used to select the "best" instance.

With the protein descriptor pretrained on sequences from the whole of Pfam, chemical descriptors and a distance learner are plugged in to fine-tune the protein representation. The distance learner follows AlphaFold [4], which formulates a multi-way classification on a distogram. Based on the histogram of binding-site distances, histogram equalization was applied to formulate a 10-way classification on our binding-site structure data, as shown in Supplemental Figure S11. Since the protein and chemical descriptors output position-specific embeddings of a distilled protein sequence and of all atoms of a chemical, pairwise interaction features on the binding sites were created with a simple vector operation: a matrix multiplication selects the embedding vectors of each binding residue and atom, and the selected embedding vectors are multiplied and broadcast into a pairwise interaction tensor,

$H^{interaction}_{binding\_site} = (A_{res} H_{res}) \otimes (A_{atom} H_{atom}),$

where $H$ is an embedding matrix of size (number_of_residues, embedding_dimension) or (number_of_atoms, embedding_dimension) and $A$ is a selector matrix [48]. This pairwise interaction feature tensor $H^{interaction}_{binding\_site}$ was fed into an Attentive Pooling [49] layer followed by a feed-forward layer for the final 10-way classification. The detailed model architecture configuration can be found in Table S10 and Figure S13. The intuition behind the simplest form of distance learner is to put all the stress of learning on the shared protein and chemical descriptors, which carry information across universes. Again, with standard Adam optimization, shifted evaluation was used to select the "best" instance. Two versions of distance structure prediction were implemented: one formulated as a binary classification (contact prediction) and one formulated as a multi-way classification (distogram prediction). The performance of the two versions is similar, as shown in Figure S12.

With the fine-tuned protein descriptor in the protein function universe, a binary classifier is attached: a ResNet [50] followed by two linear layers, as shown in Table S10 and Figure S13. What plays the major role in this phase is the optimization algorithm, OOC-ML, as shown in pseudocode Algorithm 1 and Figure 1(B),(C.1) of the main text.
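Since Algorithm 1 itself appears in the supplement, the following is only a minimal, first-order PyTorch sketch of one OOC-ML meta-step as described above: an inner IID optimization per Pfam-family cluster on a model copy, query-set losses averaged on the global loss landscape, and a first-order meta-update. The released implementation may differ in sampling and aggregation details.

```python
import copy
import torch

def ooc_ml_meta_step(model, families, loss_fn, inner_lr=1e-3, meta_lr=1e-3):
    """One meta-step over a mini-batch of Pfam-family clusters.

    Each `fam` is a dict: {"support": iterable of (x, y) batches or None,
                           "query": (xq, yq)}.
    """
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for fam in families:
        local = copy.deepcopy(model)           # memorize local weights on a copy
        if fam["support"] is not None:         # inner loop: local loss landscape
            opt = torch.optim.SGD(local.parameters(), lr=inner_lr)
            for x, y in fam["support"]:
                opt.zero_grad()
                loss_fn(local(x), y).backward()
                opt.step()
        # Outer loop: only the query-set loss reaches the global landscape;
        # clusters too small to have a support set skip local adaptation.
        xq, yq = fam["query"]
        grads = torch.autograd.grad(loss_fn(local(xq), yq),
                                    list(local.parameters()))
        for mg, g in zip(meta_grads, grads):
            mg += g / len(families)            # simple average aggregation
    with torch.no_grad():                      # first-order meta update
        for p, g in zip(model.parameters(), meta_grads):
            p -= meta_lr * g
```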
In Algorithm 1, the local loss landscape exploration is reflected in lines 4-9, and line 10 shows the ensembling on the global loss landscape. Note that more variants could be derived by changing the sampling rule (lines 3 and 5) and the global loss ensembling rule. OOC-ML is built on MAML [11] but differs significantly. Echoing the steps illustrated in Figure 1 of the main text:

2. In each mini-batch, a few sub-distributions are sampled. The whole optimization has two layers, an inner loop and an outer loop. In the inner loop, each sub-distribution has its own local loss landscape, and the support set is used for in-distribution optimization on that local loss landscape.
3. The locally optimized model is then used on the query set to obtain a query-set loss, which is fed to the global loss landscape. Each sub-distribution is independently optimized. This step is the same as in MAML; what differs is that OOC-ML also calculates query-set losses without local in-distribution optimization for the small clusters.
4. The local query-set losses are pooled together, and the model is optimized on the global loss landscape, as in the meta-optimization defined by MAML.
5. After training finishes, the model is deployed.
6. MAML is designed for multi-class classification in few-shot learning: at deployment, it expects to meet new, unseen classes, and it assumes that a few labeled samples are available as a support set (hence the name few-shot learning). For each unseen class, the trained model carries out a fast in-distribution adaptation using the support set before the final prediction on the query set. This is impossible in the context of dark-space illumination: a Portal Learning-trained model has to make robust predictions without any chance of in-distribution adaptation.

In the common practice of the classic training scheme, there are three data splits: a "train set", a "dev set", and a "test set". The train set, as the name suggests, is used to train the model; the test set is used to set an expectation of performance when applying the trained model to unseen data; and the dev set is used to select the preferred model instance. In the OOD setting, the data are split (main text Table 1) such that the dev set is OOD with respect to the train set, and the test set is OOD with respect to both the train and dev sets. The deployment gap is calculated by subtracting the OOD-dev performance from the OOD-test performance.

Portal Learning being a framework, all experiments are based on the four-universe configuration. Four major variants of the model were trained, as shown in main text Table 2, in controlled-factor experiments to verify the contribution of each key component of Portal Learning. In this section we present implementation details. Due to the large total number of samples, all training is carried out under a global-step-based formalization rather than an epoch-based one. Typically, a deep learning model is trained for numerous epochs; in each epoch the model loops over all training data, with evaluation carried out once on the whole test data set at the end of each epoch. In the global-step formalization, a mini-batch is sampled uniformly at random from the pre-split training data, and this mini-batch sampling is repeated for a pre-defined total number of global steps; training is stopped when the loss decrease falls within a pre-defined error margin. To evaluate along the way, every m global steps a subset of test data is sampled uniformly at random from the pre-split test set.
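As a minimal, self-contained illustration of the deployment-gap bookkeeping, the sketch below compares a metric between OOD-dev and OOD-test score sets; the score arrays are toy stand-ins for model outputs on splits with disjoint Pfam families.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(1)

def toy_split(n, signal):
    """Toy labels and scores; `signal` controls how separable the split is."""
    y = rng.integers(0, 2, size=n)
    scores = np.clip(y * signal + rng.normal(0.5, 0.25, size=n), 0.0, 1.0)
    return y, scores

y_dev, s_dev = toy_split(10_000, signal=0.30)    # OOD-dev: unseen families
y_test, s_test = toy_split(10_000, signal=0.28)  # OOD-test: unseen by both

for name, fn in [("ROC-AUC", roc_auc_score), ("PR-AUC", average_precision_score)]:
    gap = fn(y_test, s_test) - fn(y_dev, s_dev)  # deployment gap (test - dev)
    print(f"{name} deployment gap: {gap:+.3f}")
```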
To compute generalization gaps, in addition to evaluating on the test set split according to shifted evaluation, a dev set is held out from the train set for evaluation as well; in this way, the dev set and train set are IID. The performance difference between dev and train is the observed-space generalization gap, while the performance difference between dev and test is the dark-space generalization gap. Distogram prediction is evaluated using the average accuracy over the distogram. CPI binary classification uses F1, ROC-AUC, and PR-AUC for overall evaluation, with per-class breakdowns by F1, recall, and precision scores.

Protein-ligand docking was performed using Autodock Vina [23]. The whole-protein-surface search implemented in Autodock Vina was applied to identify the ligand-binding pocket. The center of each protein was set as the center of the binding pocket. The largest distance from the protein atoms to the center of the protein was calculated for each of the x, y, and z directions to define the extent of the protein, and 10 Angstroms of extra space was added to this extent to set up the search space for docking.

To create a production-level model, three models were trained in PortalCG, differing only in data split. The dev set was OOD with respect to the training set, ensuring no overlapping Pfam families between them. By rotating Pfam families between the training set and the OOD-dev set in the fashion of cross-validation, each of the three models was trained on a different training set in terms of the Pfam families involved. A voting mechanism was then used to make the final prediction.

A neural network classifier is trained by minimizing a loss function with the standard form

$J(\theta) = -\frac{1}{|D_t|} \sum_{(x,y) \in D_t} \log p_\theta(x, y),$

where $p_\theta(x, y)$ is the probability that a sample $x$ belongs to the class $y$ according to the trained neural network with parameters $\theta$, and $D_t$ is the training data set with $|D_t|$ samples. As laid out in the recent framework of [39] for reasoning about generalization in deep learning, the test error of a model $f_t$ can be decomposed as

$TestError(f_t) = TestError(f_t^{ideal}) + \underbrace{[TestError(f_t) - TestError(f_t^{ideal})]}_{\text{real-world generalization gap}}.$

When data are sampled as independent and identically distributed (IID) random variables, the "ideal world" is a scenario where the complete data distribution is available with infinite data and optimization is on a population loss landscape; by contrast, the "real world" has only finite data, and optimization is on an empirical loss landscape. In the dark-space context of the OOD setting, this decomposition needs to be changed to

$TestError(f_t^{OOD}) = TestError(f_t^{iid}) + \underbrace{[TestError(f_t^{OOD}) - TestError(f_t^{iid})]}_{\text{dark-space generalization gap}}.$

This shows that effort can be devoted to decreasing the observed-space error and/or the dark-space generalization gap in order to reduce $TestError(f_t^{OOD})$. When stochastic gradient descent (SGD) is applied to the optimization, it approximately estimates $\nabla_\theta J(\theta)$, the expectation of the gradient, using a small sample of size m, i.e., a mini-batch drawn uniformly from the training set: $g = \frac{1}{m} \sum_{i=1}^{m} \nabla_\theta L(x^{(i)}, y^{(i)}, \theta)$. When all data are IID, this approximation works fine for updating $\theta$ with $g$. However, in the OOD setting with an unknown distribution, this update rule can easily fall into a local minimum based on the m mini-batch samples. The test error of a trained model in the OOD setting thus includes two parts: the test error in the observed IID space and a generalization gap incurred when stepping into the OOD space. Furthermore, as discussed and proved in [41, 42], not all OOD tasks are equal: depending on how different the OOD data set is from the train set, some OOD tasks can be more challenging than others.
This is true for predicting ligand binding to dark proteins: it is impossible for the training data to provide sufficient coverage of the whole distribution of the dark chemical genomics space. The motivation of Portal Learning for exploring the dark space follows: one model architecture defines a functional mapping space and, together with a data set, defines a universe. A model instance initialized in a universe closer to the global optimum of the target universe is a portal transferred from an associated universe. The CPI dark space cannot be explored if learning is confined to the observed protein function (CPI) universe, since the known data are far too sparse, as shown in main text Figure 3. Hence, STL is important for identifying portals. Optimizing the model on a loss function can decrease the IID training error, but will not by itself help with the observed-space generalization gap $TestError(f_t^{iid}) - TrainError(f_t^{iid})$ or the dark-space generalization gap $TestError(f_t^{OOD}) - TestError(f_t^{iid})$.

When we consider the proteins in Tbio, there are 9,545 proteins that are not in Casas's druggable proteins. With 0.67 as the cutoff, 219 proteins are predicted as positive hits. The gene enrichment analysis results for these proteins are listed in Table S4. Diseases associated with these 219 human proteins are listed in Table S5; since one protein is often associated with multiple diseases, the diseases are ranked by the number of their associated proteins, and the top 10 diseases are listed in the table. Most of the top-ranked diseases are related to cancer development. Twenty-one drugs that are approved or in clinical trials are predicted to interact with these proteins, as shown in Table S6. If the proteins in Tbio are removed from the undruggable list, only 2,930 proteins are left; with 0.67 as the cutoff, only 41 proteins are predicted positive, with no significant enrichment in the DAVID gene enrichment analysis. Therefore 0.665 was used as the cutoff, yielding 348 proteins predicted as positive hits. The gene enrichment analysis results for these proteins are listed in Table S7, and the diseases associated with these 348 human proteins are listed in Table S8. Forty-two drugs that are approved or in clinical trials are predicted to interact with these proteins, as shown in the Supplemental Materials.

The recent work on invariant risk minimization (IRM) [42] is an algorithm dedicated to OOD generalization, aiming at a transformative solution via invariant representations. However, despite its theoretical completeness, many experiments [51] report that IRM does not perform well on large real-world data sets. Many deep learning tasks are inherently OOD generalization tasks. Among the related jargon, some terms are known for defining a type of OOD scenario: for example, domain generalization [52] can be taken as the equivalent of OOD generalization, and domain shift [52] rephrases the fact of a distribution change in terms of D(X, Y).
Some jargon defines a type of solution. Domain alignment [53] minimizes the difference between source-domain and target-domain distributions to obtain an invariant representation, where the distance between source and target distributions is measured by a wide variety of statistical metrics, from simple $l_2$ and f-divergence to Wasserstein distance. Domain adaptation [54] leverages a model pretrained on a different domain and is one idea for achieving domain generalization, the more general term that is equivalent to OOD generalization in a practical sense. Causal learning is proved by [41] to be equivalent to OOD generalization when causality makes sense (taking into consideration the existence of cases where causality is meaningless). Robust optimization [55] focuses on worst-group performance instead of the average performance of ERM; although robust optimization has not quite been adapted to modern deep learning, its sub-field of distributionally robust optimization [56] has seen quite a few recent works adapted for deep learning. It is worth clarifying that many works solving the sub-group or sub-population shift problem address something quite different from the OOD generalization problem as discussed in the setting of dark chemical genomics space. Sub-population shift is more of an imbalanced-data problem, where the test set largely resembles the training data except for a shift from a major class to a minor class or vice versa. For example, GroupDRO [57], published in 2018 to address this problem, proposes to incorporate structural assumptions on the distribution, which can be straightforward in data sets that have richer metadata, or in multi-label classification cases where the label structure can serve as the structural assumption.

(Model architecture) Ever since the survey [58] popularized the perspective of representation learning, enormous research energy has been devoted to model architecture design, almost taken as equivalent to deep learning itself and overshadowing other directions. A key idea that echoes the demand for generalization is to learn global representations that help decrease both $TrainError(f_t^{iid})$ and the known-space generalization gap, denoising large data sets. Hence, to solve OOD, good model architecture design is not enough. All existing work on CPI is confined to the known space, and few works have addressed generalization. Generally, proposed CPI deep learning models follow the same pattern: build a model architecture from three key modules -- a protein descriptor, a chemical descriptor, and an interaction learner -- formulating a classification problem, with a few variants formulated as regression problems. Innovation is seen mostly in model architecture, particularly for the chemical descriptor, reflecting all the milestones of recent deep learning advances from CNN and LSTM to Transformer and GNN, as demonstrated in DeepPurpose [59]. Generalization has not featured in any previous work as a main research goal, except for DISAE [1], which demonstrates generalizability to orphan-GPCR drug screening, relying mainly on a general-purpose pretrained protein language model fine-tuned on a GPCR data set with shifted evaluation. Hence, DISAE serves as the baseline model in this work.

(Model initialization) Although it could be categorized as a type of representation learning, transfer learning became an iconic, independent concept through its huge success and breakthroughs on many NLP and CV benchmark tasks.
It features a pretraining-finetuning procedure. An intuitive example is to pretrain a language model on a large, general English vocabulary with a pretext task such as next-word prediction, and then to finetune the language model on a specific downstream task, such as machine translation in the biology domain. The well-renowned Transformer-based pretrained models, starting from human language models, are a combined success of attention-based model architecture design and transfer learning. In computational biology, the most eye-catching equivalent is the protein language model, i.e., the protein descriptor, which inspired several similar works at the same time by different groups: TAPE and ESM showcase that pretraining on a large protein vocabulary can significantly improve downstream tasks such as protein-protein interaction prediction, while the MSA-based Transformer and DISAE incorporate MSAs in pretraining. From the perspective of the target downstream task, the power of transfer learning comes from a better model initialization. This is a major breakthrough that could fill the gap of $TestError(f_t^{OOD}) - TestError(f_t^{iid})$, but not necessarily, depending on how it is incorporated into the whole training scheme at the system level, and particularly on the data fed in. DISAE is used in our work as the pretrained protein descriptor; this choice over other protein language models is due to the fact that DISAE is the smallest in terms of the memory required to use and optimize it, at the same level of performance. STL is a way to leverage transfer learning to find better model initializations. The main difference and innovation is that transfer learning naively relies on the belief that transferring more general knowledge will bring better performance, while STL in Portal Learning actively leverages biology-endorsed bias when transferring general knowledge. Further, by defining the goal "to learn the portal", which will be closer to the global optimum in the target universe's loss landscape, the whole training system is actively steered to solve OOD.

(STL) Sparked by the breakthroughs of AlphaFold1 [4] and AlphaFold2 [5] in protein structure prediction, deep learning has come to be trusted for molecular-interaction distance-map prediction as a way to learn structural information. The inclusion of the CPI-structure portal calibration (i.e., for protein function prediction) is inspired by this recent success, led by AlphaFold1 [4] and AlphaFold2 [5]. Specifically, we pretrain the model to predict residue-residue contacts for proteins whose structures are solved, and chemical-atom-to-protein-residue contacts given a known CPI complex structure. There are three popular formulations of residue-residue pairwise distance matrix prediction, depending on how it is posed as a machine learning task. At one end, it is formulated as a binary classification, where a distance threshold defines whether a pair of residues is in contact or not -- hence the name contact prediction. At the other end, it is formulated as a regression problem, where the exact distance is the regression target -- hence the name exact distance prediction. AlphaFold1 showed a third way, in between the two ends: formulate it as a multi-class classification problem, where the distribution of pairwise residue distances is discretized into multiple class labels according to a histogram -- hence the name distogram prediction.
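Distogram labels of this kind can be constructed in a few lines of numpy; below is a minimal sketch of 10-way binning with histogram equalization (quantile bin edges, so classes are roughly balanced), together with the binary contact-prediction variant. The distance values are toy stand-ins for the BioLip-derived distances, and the 8 Å contact cutoff is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
distances = rng.gamma(shape=4.0, scale=2.0, size=100_000)  # toy distances (Å)

n_bins = 10
# interior quantiles as bin edges -> each class gets ~the same number of pairs
edges = np.quantile(distances, np.linspace(0, 1, n_bins + 1)[1:-1])
labels = np.digitize(distances, edges)        # 10-way distogram class labels

# contact-prediction variant: a single distance cutoff gives binary labels
contact = (distances < 8.0).astype(int)
print(np.bincount(labels))                    # roughly equal counts per class
```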
In this work, we first focus on residue-atom pairwise distances at binding sites, and then experiment with both the contact-prediction and distogram-prediction formulations. In our ablation study, the two formulations yield similar performance on the final CPI prediction task, as shown in Supplemental Figure S12.

(OOC-ML) It has long been recognized that the order in which training data are exposed to a model affects its generalizability. Active learning [60] queries data iteratively so as to expose the model only to data close to the decision boundary. Curriculum learning [61] sorts the training data so that the model faces challenges of increasing difficulty. This element of data logistics is also woven into many optimization algorithms aimed at improving generalizability; for example, the contrastive loss [62] requires a certain ratio of positive to negative samples in each mini-batch. Most relevant to Portal Learning is meta-learning, whose "learn to learn" algorithms can be categorized as metric-based, model-based, or optimizer-based [12], with applications to few-shot and zero-shot learning. Meta-learning originated to address data efficiency rather than generalization or OOD. Although meta-learning is defined very generally, so that many algorithms can appear to be mere variants under its umbrella, in practice, algorithms proposed under the name of meta-learning are defined on multi-class classification data sets, typically image classification, where the main challenge is a huge number of classes with only a few labeled data points per class. Because of this underlying motivation, meta-learning features rather involved data logistics, with multiple layers of optimization, each with its own meta-train/meta-test sets sampled according to the label distribution. These often-unstated facts mean that no existing meta-learning algorithm fits CPI data. The idea of "learning to learn" is nonetheless attractive. MAML [11], an optimization-based meta-learning method, inspired the OOC-ML proposed as a major component of Portal Learning, but the differences are substantial: OOC-ML focuses on the feature distribution of the data instead of the label distribution, encourages active sampling in local neighborhoods (which simplifies the support/query, meta-train/meta-test data logistics), and ensembles several local loss directions to learn a global descent ("gravity") direction (a minimal sketch of this update pattern follows the figure captions below).

Figure S1: Dark-space statistics histogram based on known CPI pairs in ChEMBL26. Fewer than 1% of the proteins in each Pfam family involved in ChEMBL have known binding chemicals.

Figure S2: Dark-space statistics histogram based on known CPI pairs in ChEMBL26. Fewer than 1% of chemicals bind to more than 2 proteins; fewer than 0.4% of chemicals bind to more than 5 proteins.

Figure S3: As shown in Figure S2, there are three main ranges for the number of binding targets in a Pfam family for one chemical: [2, 5], [5, 20], and [20, ∞). For each range, a heatmap is shown with the y axis representing chemicals, the x axis representing Pfam families, and each point representing a known binding pair between one chemical and one Pfam family. As can be seen, the dark space is vast.

Figure S12: Ablation study comparing the two formulations of protein structure distance prediction, contact prediction vs. distogram prediction. The two variants have similar OOD-test performance.

Figure S13: Illustration of the PortalCG architecture: (A) the STL pipeline; (B) the model architecture for predicting the binding-site distance matrix.
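Returning to OOC-ML: the following is a minimal sketch of the MAML-style, two-level update pattern it builds on, adapted so that episodes are sampled by feature cluster and the query (meta-test) data come from outside the support cluster. All names, the episode construction, and the first-order approximation are illustrative assumptions; the actual OOC-ML algorithm may differ in its sampling and ensembling details.

```python
# Sketch of a MAML-style two-level update adapted to out-of-cluster episodes:
# the inner loop adapts to one feature cluster's support set; the outer step
# evaluates on out-of-cluster query data and ensembles the resulting local
# loss directions into a single update of the base model. Illustrative only;
# uses a first-order approximation (no gradients through the inner loop).
import copy
import torch

def ooc_meta_step(model, loss_fn, episodes, meta_opt, inner_lr=1e-3, inner_steps=1):
    """episodes: list of (support_batch, query_batch) pairs; each support batch
    is drawn from one feature cluster, the query batch from outside it."""
    meta_opt.zero_grad()
    total = 0.0
    for support, query in episodes:
        # Inner loop: adapt a temporary copy of the model to one cluster.
        fast = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            inner_opt.zero_grad()
            loss_fn(fast, support).backward()
            inner_opt.step()
        # Outer step: measure how the adapted model fares out-of-cluster.
        fast.zero_grad()
        q_loss = loss_fn(fast, query)
        q_loss.backward()
        # Ensemble the local loss directions onto the base model's gradients
        # (assumes every parameter receives a gradient).
        for p, fp in zip(model.parameters(), fast.parameters()):
            p.grad = fp.grad.clone() if p.grad is None else p.grad + fp.grad
        total += float(q_loss)
    meta_opt.step()
    return total / len(episodes)
```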
Note that Portal Learning is a general framework at the level of the training scheme rather than of the model architecture; OOC-ML, being an optimization algorithm, is used only in the protein function universe and is not a model-architecture component (see Figure 1).

References
MSA-regularized protein sequence transformer toward predicting genome-wide chemical-protein interactions: application to GPCRome deorphanization
Few-shot learning creates predictive models of drug response that translate from high-throughput screens to individual patients
Robust prediction of patient-specific clinical response to unseen drugs from in vitro screens using context-aware deconfounding autoencoder
Improved protein structure refinement guided by deep learning based accuracy estimation
Highly accurate protein structure prediction with AlphaFold
Accurate prediction of protein structures and interactions using a three-track network
Identifying cell types from single-cell data based on similarities and dissimilarities between cells
Toward causal representation learning
Automated synthetic-to-real generalization
ALBERT: a lite BERT for self-supervised learning of language representations
Model-agnostic meta-learning for fast adaptation of deep networks
Meta-learning in neural networks: a survey
Exploring the dark genome: implications for precision medicine
The DisGeNET knowledge platform for disease genomics: 2019 update
DeepAffinity: interpretable deep learning of compound-protein affinity through unified recurrent and convolutional neural networks
DeepDTA: deep drug-target binding affinity prediction
The Drug Repurposing Hub: a next-generation drug library and information resource
A cross-level information transmission network for hierarchical omics data integration and phenotype prediction from a new genotype
Pfam: the protein families database in 2021
Reverse screening methods to search for the protein targets of chemopreventive compounds
AutoDock Vina: improving the speed and accuracy of docking with a new scoring function, efficient optimization and multithreading
Challenges, applications, and recent advances of protein-ligand docking in structure-based drug design
Performance of virtual screening against GPCR homology models: impact of template selection and treatment of binding site plasticity
The Drug Repurposing Hub: a next-generation drug library and information resource
Circulating exosomes are strongly involved in SARS-CoV-2 infection
The druggable genome and support for target identification and validation in drug development
TCRD and Pharos 2021: mining the human proteome for disease biology
The druggable genome and support for target identification and validation in drug development
DAVID-WS: a stateful web service to facilitate gene/protein list analysis
Pharmacology of modulators of alternative splicing
Alternative splicing as a biomarker and potential target for drug discovery
Journal of Parkinson's Disease and Alzheimer's Disease
The Protein Data Bank
A SARS-CoV-2 protein interaction map reveals targets for drug repurposing
Deep Learning
Visualizing the loss landscape of neural nets
The deep bootstrap: good online learners are good offline generalizers
Performance evaluation of computer and communication systems
Out of distribution generalization in machine learning
Invariant risk minimization
Underspecification presents challenges for credibility in modern machine learning
Principles of risk minimization for learning theory
BioLiP: a semi-manually curated database for biologically relevant ligand-protein interactions
HMMER web server: 2018 update
How powerful are graph neural networks?
Introduction to applied linear algebra: vectors, matrices, and least squares
Attentive pooling networks
Deep residual learning for image recognition
The risks of invariant risk minimization
Domain generalization: a survey
Domain generalization via invariant feature representation
What you saw is not what you get: domain adaptation using asymmetric kernel transforms
Robust optimization
Distributionally robust optimization: a review
Does distributionally robust supervised learning give robust classifiers?
Unsupervised feature learning and deep learning: a review and new perspectives
DeepPurpose: a deep learning library for drug-target interaction prediction
Active machine learning helps drug hunters tackle biology
Curriculum learning
Siamese neural networks for one-shot image recognition

Acknowledgments
This project has been funded with federal funds from the National Institute of General Medical Sciences (R01GM122845) and the National Institute on Aging (R01AD057555) of the National Institutes of Health. We thank Hansaim Lim for proofreading and for constructive suggestions.