Towards Causal Representation Learning
Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, Yoshua Bengio
2021-02-22

The two fields of machine learning and graphical causality arose and developed separately. However, there is now cross-pollination and increasing interest in both fields to benefit from the advances of the other. In the present paper, we review fundamental concepts of causal inference and relate them to crucial open problems of machine learning, including transfer and generalization, thereby assaying how causality can contribute to modern machine learning research. This also applies in the opposite direction: we note that most work in causality starts from the premise that the causal variables are given. A central problem for AI and causality is, thus, causal representation learning, the discovery of high-level causal variables from low-level observations. Finally, we delineate some implications of causality for machine learning and propose key research areas at the intersection of both communities.

If we compare what machine learning can do to what animals accomplish, we observe that the former is rather limited at some crucial feats where natural intelligence excels. These include transfer to new problems and any form of generalization that is not from one data point to the next (sampled from the same distribution), but rather from one problem to the next; both have been termed generalization, but the latter is a much harder form thereof, sometimes referred to as horizontal, strong, or out-of-distribution generalization. This shortcoming is not too surprising, given that machine learning often disregards information that animals use heavily: interventions in the world, domain shifts, temporal structure; by and large, we consider these factors a nuisance and try to engineer them away. In accordance with this, the majority of current successes of machine learning boil down to large scale pattern recognition on suitably collected independent and identically distributed (i.i.d.) data. To illustrate the implications of this choice and its relation to causal models, we start by highlighting key research challenges.

a) Issue 1 - Robustness: With the widespread adoption of deep learning approaches in computer vision [101, 140], natural language processing [54], and speech recognition [85], a substantial body of literature explored the robustness of the predictions of state-of-the-art deep neural network architectures. The underlying motivation originates from the fact that in the real world there is often little control over the distribution from which the data comes. In computer vision [75, 228], changes in the test distribution may, for instance, come from aberrations like camera blur, noise or compression quality [106, 129, 170, 206], or from shifts, rotations, or viewpoints [7, 11, 63, 282]. Motivated by this, new benchmarks were proposed to specifically test generalization of classification and detection methods with respect to simple algorithmically generated interventions like spatial shifts, blur, changes in brightness or contrast [106, 170], time consistency [94, 227], control over background and rotation [11], as well as images collected in multiple environments [19].
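The following is a minimal sketch of such a corruption-robustness check, in the spirit of the benchmarks cited above. The `model.predict` interface, `images`, and `labels` are hypothetical placeholders, and the corruption functions are simple stand-ins for algorithmically generated interventions such as noise or brightness changes; it is not any particular benchmark's implementation.

```python
# Minimal corruption-robustness sketch. `model`, `images`, `labels` are
# hypothetical placeholders supplied by the user; corruptions are simple
# stand-ins for algorithmically generated interventions (noise, brightness,
# contrast). Images are assumed to be float arrays in [0, 1].
import numpy as np

def gaussian_noise(x, sigma=0.1):
    # Additive pixel noise, clipped back to the valid [0, 1] range.
    return np.clip(x + np.random.normal(0.0, sigma, size=x.shape), 0.0, 1.0)

def brightness(x, delta=0.2):
    return np.clip(x + delta, 0.0, 1.0)

def contrast(x, factor=0.5):
    mean = x.mean(axis=(-2, -1), keepdims=True)
    return np.clip((x - mean) * factor + mean, 0.0, 1.0)

def accuracy(model, images, labels):
    return float(np.mean(model.predict(images) == labels))

def robustness_report(model, images, labels):
    # Compare clean accuracy against accuracy under each simple intervention.
    corruptions = {"noise": gaussian_noise, "brightness": brightness,
                   "contrast": contrast}
    report = {"clean": accuracy(model, images, labels)}
    for name, fn in corruptions.items():
        report[name] = accuracy(model, fn(images), labels)
    return report
```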
Studying the failure modes of deep neural networks from simple interventions has the potential to lead to insights into the inductive biases of state-of-the-art architectures. So far, there has been no definitive consensus on how to solve these problems, although progress has been made using data augmentation, pre-training, self-supervision, and architectures with suitable inductive biases w.r.t. a perturbation of interest [233, 59, 63, 137, 170, 206]. It has been argued [188] that such fixes may not be sufficient, and generalizing well outside the i.i.d. setting requires learning not mere statistical associations between variables, but an underlying causal model. The latter contains the mechanisms giving rise to the observed statistical dependences, and allows us to model distribution shifts through the notion of interventions [183, 237, 218, 34, 188, 181].

b) Issue 2 - Learning Reusable Mechanisms: Infants' understanding of physics relies upon objects that can be tracked over time and behave consistently [52, 236]. Such a representation allows children to quickly learn new tasks as their knowledge and intuitive understanding of physics can be re-used [15, 52, 144, 250]. Similarly, intelligent agents that robustly solve real-world tasks need to re-use and re-purpose their knowledge and skills in novel scenarios. Machine learning models that incorporate or learn structural knowledge of an environment have been shown to be more efficient and generalize better [14, 10, 16, 84, 197, 212, 8, 274, 26, 76, 83, 141, 157, 177, 211, 245, 258, 272, 57, 182]. In a modular representation of the world where the modules correspond to physical causal mechanisms, many modules can be expected to behave similarly across different tasks and environments. An agent facing a new environment or task may thus only need to adapt a few modules in its internal representation of the world [220, 84]. When learning a causal model, one should thus require fewer examples to adapt, as most knowledge, i.e., modules, can be re-used without further training.

c) A Causality Perspective: Causation is a subtle concept that cannot be fully described using the language of Boolean logic [151] or that of probabilistic inference; it requires the additional notion of intervention [237, 183]. The manipulative definition of causation [237, 183, 118] focuses on the fact that conditional probabilities ("seeing people with open umbrellas suggests that it is raining") cannot reliably predict the outcome of an active intervention ("closing umbrellas does not stop the rain"). Causal relations can also be viewed as the components of reasoning chains [151] that provide predictions for situations that are very far from the observed distribution and may even remain purely hypothetical [163, 183] or require conscious deliberation [128]. In that sense, discovering causal relations means acquiring robust knowledge that holds beyond the support of an observed data distribution and a set of training tasks, and it extends to situations involving forms of reasoning.

Our Contributions: In the present paper, we argue that causality, with its focus on representing structural knowledge about the data generating process that allows interventions and changes, can contribute towards understanding and resolving some limitations of current machine learning methods. This would take the field a step closer to a form of artificial intelligence that involves thinking in the sense of Konrad Lorenz, i.e., acting in an imagined space [163].
Despite its success, statistical learning provides a rather superficial description of reality that only holds when the experimental conditions are fixed. Instead, the field of causal learning seeks to model the effect of interventions and distribution changes with a combination of data-driven learning and assumptions not already included in the statistical description of a system. The present work reviews and synthesizes key contributions that have been made to this end (the present paper expands [221], leading to partial text overlap):

• We describe different levels of modeling in physical systems in Section II and present the differences between causal and statistical models in Section III. We do so not only in terms of modeling abilities but also discuss the assumptions and challenges involved.
• We expand on the Independent Causal Mechanisms (ICM) principle as a key component that enables the estimation of causal relations from data in Section IV. In particular, we state the Sparse Mechanism Shift hypothesis as a consequence of the ICM principle and discuss its implications for learning causal models.
• We review existing approaches to learn causal relations from appropriate descriptors (or features) in Section V. We cover both classical approaches and modern re-interpretations based on deep neural networks, with a focus on the underlying principles that enable causal discovery.
• We discuss how useful models of reality may be learned from data in the form of causal representations, and discuss several current problems of machine learning from a causal point of view in Section VI.
• We assay the implications of causality for practical machine learning in Section VII. Using causal language, we revisit robustness and generalization, as well as existing common practices such as semi-supervised learning, self-supervised learning, data augmentation, and pre-training. We discuss examples at the intersection between causality and machine learning in scientific applications and speculate on the advantages of combining the strengths of both fields to build a more versatile AI.

The gold standard for modeling natural phenomena is a set of coupled differential equations modeling physical mechanisms responsible for the time evolution. This allows us to predict the future behavior of a physical system, reason about the effect of interventions, and predict statistical dependencies between variables that are generated by coupled time evolution. It also offers physical insights, explaining the functioning of the system, and lets us read off its causal structure. To this end, consider the coupled set of differential equations

dx/dt = f(x),   (1)

with initial value x(t_0) = x_0. The Picard-Lindelöf theorem states that at least locally, if f is Lipschitz, there exists a unique solution x(t). This implies in particular that the immediate future of x is implied by its past values. If we formally write this in terms of infinitesimal differentials dt and dx = x(t + dt) − x(t), we get

x(t + dt) = x(t) + dt · f(x(t)).   (2)

From this, we can ascertain which entries of the vector x(t) mathematically determine the future of others x(t + dt). This tells us that if we have a physical system whose physical mechanisms are correctly described using such an ordinary differential equation (1), solved for dx/dt (i.e., the derivative only appears on the left-hand side), then its causal structure can be directly read off. While a differential equation is a rather comprehensive description of a system, a statistical model can be viewed as a much more superficial one.
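Before moving on, here is a minimal sketch of this point for a toy three-variable linear system (the system itself is an arbitrary illustrative choice): entry (i, j) of the Jacobian of f is non-zero exactly when x_j enters the time evolution of x_i, so the sparsity pattern of f encodes the causal structure, while (2) gives the Euler discretization used to simulate the system forward.

```python
# A minimal sketch, for an arbitrary toy three-variable system, of reading the
# causal structure off an ODE dx/dt = f(x) solved for the derivative.
import numpy as np

def f(x):
    # x_0 evolves autonomously, x_1 is driven by x_0, x_2 by x_1.
    return np.array([-x[0],
                     x[0] - x[1],
                     x[1] - 0.5 * x[2]])

def euler_step(x, dt=1e-2):
    # Discretized version of (2): x(t + dt) = x(t) + dt * f(x(t)).
    return x + dt * f(x)

def causal_adjacency(f, x, eps=1e-6):
    # Numerical Jacobian; a non-zero (i, j) entry means "x_j directly
    # influences the time evolution of x_i".
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        J[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
    return (np.abs(J) > 1e-8).astype(int)

x = np.array([1.0, 0.0, 0.0])
for _ in range(1000):            # simulate forward in time
    x = euler_step(x)
print(np.round(x, 3))            # state after integration
print(causal_adjacency(f, np.array([1.0, 2.0, 3.0])))
# [[1 0 0]
#  [1 1 0]
#  [0 1 1]]  -> x_0 -> x_1 -> x_2 (plus self-dependence)
```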
Such a statistical model often does not refer to dynamic processes; instead, it tells us how some of the variables allow prediction of others as long as experimental conditions do not change. E.g., if we drive a differential equation system with certain types of noise, or we average over time, then it may be the case that statistical dependencies between components of x emerge, and those can then be exploited by machine learning. Such a model does not allow us to predict the effect of interventions; however, its strength is that it can often be learned from observational data, while a differential equation usually requires an intelligent human to come up with it.

Causal modeling lies in between these two extremes. Like models in physics, it aims to provide understanding and predict the effect of interventions. However, causal discovery and learning try to arrive at such models in a data-driven way, replacing expert knowledge with weak and generic assumptions. The overall situation is summarized in Table I (adapted from [188]).

Table I. A simple taxonomy of models. The most detailed model (top) is a mechanistic or physical one, usually in terms of differential equations. At the other end of the spectrum (bottom), we have a purely statistical model; this can be learned from data, but it often provides little insight beyond modeling associations between epiphenomena. Causal models can be seen as descriptions that lie in between, abstracting away from physical realism while retaining the power to answer certain interventional questions.

Below, we address some of the tasks listed in Table I in more detail.

A. Predicting in the i.i.d. setting

Statistical models are a superficial description of reality as they are only required to model associations. For a given set of input examples X and target labels Y, we may be interested in approximating P(Y|X) to answer questions like: "what is the probability that this particular image contains a dog?" or "what is the probability of heart failure given certain diagnostic measurements (e.g., blood pressure) carried out on a patient?". Subject to suitable assumptions, these questions can be provably answered by observing a sufficiently large amount of i.i.d. data from P(X, Y) [257]. Despite the impressive advances of machine learning, causality offers an under-explored complement: accurate predictions may not be sufficient to inform decision making. For example, the frequency of storks is a reasonable predictor for human birth rates in Europe [168]. However, as there is no direct causal link between those two variables, a change to the stork population would not affect the birth rates, even though a statistical model may predict so. The predictions of a statistical model are only accurate within identical experimental conditions. Performing an intervention changes the data distribution, which may lead to (arbitrarily) inaccurate predictions [183, 237, 218, 188]. Interventional questions are more challenging than predictions as they involve actions that take us out of the usual i.i.d. setting of statistical learning. Interventions may affect both the value of a subset of causal variables and their relations. For example, "is increasing the number of storks in a country going to boost its human birth rate?" and "would fewer people smoke if cigarettes were more socially stigmatized?". As interventions change the joint distribution of the variables of interest, classical statistical learning guarantees [257] no longer apply.
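A toy numerical illustration of this point (all numbers arbitrary): a hidden common cause Z drives both X and Y; a regression of Y on X predicts well observationally, but its prediction is badly off once we intervene and set X ourselves, because the intervention breaks the X-Z dependence.

```python
# Toy example: observational regression vs. prediction under intervention.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational regime: Z -> X and Z -> Y, no direct X -> Y effect.
Z = rng.normal(size=n)
X = 2.0 * Z + 0.1 * rng.normal(size=n)
Y = 3.0 * Z + 0.1 * rng.normal(size=n)

# Least-squares fit of Y on X (with intercept) on observational data.
slope, intercept = np.polyfit(X, Y, deg=1)
print("observational fit: Y ~ %.2f * X + %.2f" % (slope, intercept))  # ~1.5 * X

# Interventional regime: do(X = 2). Setting X by hand leaves Z untouched,
# so Y is unaffected and stays centered at 0, unlike the regression's ~3.
Z = rng.normal(size=n)
Y_do = 3.0 * Z + 0.1 * rng.normal(size=n)
print("regression predicts %.2f, true E[Y | do(X=2)] is about %.2f"
      % (slope * 2 + intercept, Y_do.mean()))
```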
On the other hand, learning about interventions may allow us to train predictive models that are robust against the changes in distribution that naturally happen in the real world. Here, interventions do not need to be deliberate actions to achieve a goal. Statistical relations may change dynamically over time (e.g., people's preferences and tastes) or there may simply be a mismatch between a carefully controlled training distribution and the test distribution of a model deployed in production. The robustness of deep neural networks has recently been scrutinized and become an active research topic related to causal inference. We argue that predicting under distribution shift should not be reduced to just the accuracy on a test set. If we wish to incorporate learning algorithms into human decision making, we need to trust that the predictions of the algorithm will remain valid if the experimental conditions are changed.

Counterfactual problems involve reasoning about why things happened, imagining the consequences of different actions in hindsight, and determining which actions would have achieved a desired outcome. Answering counterfactual questions can be more difficult than answering interventional questions. However, this may be a key challenge for AI, as an intelligent agent may benefit from imagining the consequences of its actions as well as understanding in retrospect what led to certain outcomes, at least to some degree of approximation. (Note that the two types of questions occupy a continuum: to this end, consider a probability which is both conditional and interventional, P(A|B, do(C)). If B is the empty set, we have a classical intervention; if B contained all (unobserved) noise terms, we have a counterfactual. If B is not identical to the noise terms, but nevertheless informative about them, we get something in between. For instance, reinforcement learning practitioners may refer to Q-functions as providing counterfactuals, even though they model P(return from t | agent state at time t, do(action at time t)), which is closer to an intervention, and which is why they can be estimated from data.) We have above mentioned the example of statistical predictions of heart failure. An interventional question would be "how does the probability of heart failure change if we convince a patient to exercise regularly?" A counterfactual one would be "would a given patient have suffered heart failure if they had started exercising a year earlier?". As we shall discuss below, counterfactuals, or approximations thereof, are especially critical in reinforcement learning. They can enable agents to reflect on their decisions and formulate hypotheses that can be empirically verified in a process akin to the scientific method.

The data format plays a substantial role in which type of relation can be inferred. We can distinguish two axes of data modalities: observational versus interventional, and hand-engineered versus raw (unstructured) perceptual input.

Observational and Interventional Data: an extreme form of data which is often assumed but seldom strictly available is observational i.i.d. data, where each data point is independently sampled from the same distribution. Another extreme is interventional data with known interventions, where we observe data sets sampled from multiple distributions, each of which is the result of a known intervention. In between, we have data with (domain) shifts or unknown interventions.
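A minimal sketch of this "in between" regime under stated assumptions: data recorded from several environments, each generated by the same toy SCM except that one randomly chosen mechanism has been shifted (an intervention whose target is unknown to the learner). The chain X1 -> X2 -> X3 and all coefficients are arbitrary illustrative choices.

```python
# Generate observational data plus several environments produced by unknown
# (soft) interventions on a toy SCM X1 -> X2 -> X3. All numbers arbitrary.
import numpy as np

def sample_environment(n, rng, shift_target=None, shift=2.0):
    # Structural assignments with independent noises; optionally shift the
    # noise of one variable (a soft intervention on that mechanism).
    u = rng.normal(size=(3, n))
    if shift_target is not None:
        u[shift_target] += shift
    x1 = u[0]
    x2 = 0.8 * x1 + u[1]
    x3 = -1.0 * x2 + u[2]
    return np.stack([x1, x2, x3], axis=1)

rng = np.random.default_rng(1)
datasets = {"observational": sample_environment(5000, rng)}
for e in range(3):
    target = rng.integers(0, 3)            # intervention target, unknown to the learner
    datasets[f"env_{e}"] = sample_environment(5000, rng, shift_target=target)

for name, data in datasets.items():
    print(name, np.round(data.mean(axis=0), 2))   # means shift across environments
```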
Such data is observational in the sense that it is only observed passively, but it is also interventional in the sense that interventions or shifts did occur; they are simply unknown to us.

Hand-Engineered Data vs. Raw Data: especially in classical AI, data is often assumed to be structured into high-level and semantically meaningful variables which may partially (modulo some variables being unobserved) correspond to the causal variables of the underlying graph. Raw data, in contrast, is unstructured and does not expose any direct information about causality. While statistical models are weaker than causal models, they can be efficiently learned from observational data alone, on both hand-engineered features and raw perceptual input such as images, videos, speech, etc. On the other hand, although methods for learning causal structure from observations exist [237, 188, 229, 113, 174, 187, 139, 17, 246, 277, 175, 123, 186, 176, 36, 82, 161], learning causal relations frequently requires collecting data from multiple environments, or the ability to perform interventions [251]. In some cases, it is assumed that all common causes of measured variables are also observed (causal sufficiency; there are also algorithms that do not require causal sufficiency [237]). Overall, a significant amount of prior knowledge is encoded in which variables are measured. Moving forward, one would hope to develop methods that replace expert data collection with suitable inductive biases and learning paradigms such as meta-learning and self-supervision. If we wish to learn a causal model that is useful for a particular set of tasks and environments, the appropriate granularity of the high-level variables depends on the tasks of interest and on the type of data we have at our disposal, for example which interventions can be performed and what is known about the domain.

As discussed, reality can be modeled at different levels, from the physical one to statistical associations between epiphenomena. In this section, we expand on the difference between statistical and causal modeling and review a formal language to talk about interventions and distribution changes. The machine learning community has produced impressive successes with machine learning applications to big data problems [148, 171, 223, 231, 53]. In these successes, there are several trends at work [215]: (1) we have massive amounts of data, often from simulations or large scale human labeling, (2) we use high capacity machine learning systems (i.e., complex function classes with many adjustable parameters), (3) we employ high-performance computing systems, and finally (often ignored, but crucial when it comes to causality) (4) the problems are i.i.d. The latter can be guaranteed by the construction of a task including training and test set (e.g., image recognition using benchmark datasets). Alternatively, problems can be made approximately i.i.d., e.g., by carefully collecting the right training set for a given application problem, or by methods such as "experience replay" [171], where a reinforcement learning agent stores observations in order to later permute them for the purpose of re-training. For i.i.d. data, strong universal consistency results from statistical learning theory apply, guaranteeing convergence of a learning algorithm to the lowest achievable risk. Such algorithms do exist, for instance, nearest neighbor classifiers, support vector machines, and neural networks [257, 217, 239, 66].
Seen in this light, it is not surprising that we can indeed match or surpass human performance if given enough data. However, current machine learning methods often perform poorly when faced with problems that violate the i.i.d. assumption, yet seem trivial to humans. Vision systems can be grossly misled if an object that is normally recognized with high accuracy is placed in a context that in the training set may be negatively correlated with the presence of the object. Distribution shifts may also arise from simple corruptions that are common in real-world data collection pipelines [9, 106, 129, 170, 206]. An example of this is the impact of socio-economic factors in clinics in Thailand on the accuracy of a detection system for diabetic retinopathy [18]. More dramatically, the phenomenon of "adversarial vulnerability" [249] highlights how even tiny but targeted violations of the i.i.d. assumption, generated by adding suitably chosen perturbations to images, imperceptible to humans, can lead to dangerous errors such as confusion of traffic signs. Overall, it is fair to say that much of the current practice (of solving i.i.d. benchmark problems) and most theoretical results (about generalization in i.i.d. settings) fail to tackle the hard open challenge of generalization across problems.

To further understand how the i.i.d. assumption is problematic, let us consider a shopping example. Suppose Alice is looking for a laptop rucksack on the internet (i.e., a rucksack with a padded compartment for a laptop). The web shop's recommendation system suggests that she should buy a laptop to go along with the rucksack. This seems odd because she probably already has a laptop, otherwise she would not be looking for the rucksack in the first place. In a way, the laptop is the cause, and the rucksack is an effect. Now suppose we are told whether a customer has bought a laptop. This reduces our uncertainty about whether she also bought a laptop rucksack, and vice versa, and it does so by the same amount (the mutual information), so the directionality of cause and effect is lost. However, the directionality is present in the physical mechanisms generating statistical dependence, for instance the mechanism that makes a customer want to buy a rucksack once she owns a laptop. Recommending an item to buy constitutes an intervention in a system, taking us outside the i.i.d. setting. We no longer work with the observational distribution, but a distribution where certain variables or mechanisms have changed.

Reichenbach [198] clearly articulated the connection between causality and statistical dependence. He postulated the Common Cause Principle: if two observables X and Y are statistically dependent, then there exists a variable Z that causally influences both and explains all the dependence in the sense of making them independent when conditioned on Z. As a special case, this variable can coincide with X or Y. Suppose that X is the frequency of storks and Y the human birth rate. If storks bring the babies, then the correct causal graph is X → Y. If babies attract storks, it is X ← Y. If there is some other variable that causes both (such as economic development), the correct structure is X ← Z → Y. Without additional assumptions, we cannot distinguish these three cases using observational data. The class of observational distributions over X and Y that can be realized by these models is the same in all three cases. A causal model thus contains genuinely more information than a statistical one.
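A toy linear-Gaussian illustration of the three cases above: X → Y, X ← Y, and X ← Z → Y. With suitably chosen coefficients (arbitrary here, picked only to match second moments), all three produce the same observational distribution over (X, Y), so no amount of i.i.d. observational data can tell them apart.

```python
# Three different causal structures, one observational distribution.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def x_causes_y():
    x = rng.normal(size=n)
    y = 0.6 * x + 0.8 * rng.normal(size=n)
    return x, y

def y_causes_x():
    y = rng.normal(size=n)
    x = 0.6 * y + 0.8 * rng.normal(size=n)
    return x, y

def common_cause():
    z = rng.normal(size=n)
    x = np.sqrt(0.6) * z + np.sqrt(0.4) * rng.normal(size=n)
    y = np.sqrt(0.6) * z + np.sqrt(0.4) * rng.normal(size=n)
    return x, y

for name, model in [("X->Y", x_causes_y), ("X<-Y", y_causes_x),
                    ("X<-Z->Y", common_cause)]:
    x, y = model()
    # All three print (approximately) the same covariance matrix
    # [[1.0, 0.6], [0.6, 1.0]]; since the variables are jointly Gaussian,
    # the observational distributions coincide.
    print(name, np.round(np.cov(x, y), 2))
```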
While causal structure discovery is hard if we have only two observables [187], the case of more observables is surprisingly easier, the reason being that in that case, there are nontrivial conditional independence properties [238, 51, 74] implied by causal structure. These generalize the Reichenbach Principle and can be described by using the language of causal graphs or structural causal models (SCMs), merging probabilistic graphical models and the notion of interventions [237, 183]. They are best described using directed functional parent-child relationships rather than conditionals. While conceptually simple in hindsight, this constituted a major step in the understanding of causality.

The SCM viewpoint considers a set of observables (or variables) X_1, ..., X_n associated with the vertices of a directed acyclic graph (DAG). We assume that each observable is the result of an assignment

X_i := f_i(PA_i, U_i),   i = 1, ..., n,   (3)

using a deterministic function f_i depending on X_i's parents in the graph (denoted by PA_i) and on an unexplained random variable U_i. Mathematically, the observables are thus random variables, too. Directed edges in the graph represent direct causation, since the parents are connected to X_i by directed edges and through (3) directly affect the assignment of X_i. The noise U_i ensures that the overall object (3) can represent a general conditional distribution P(X_i|PA_i), and the noises U_1, ..., U_n are assumed to be jointly independent. If they were not, then by the Common Cause Principle there should be another variable that causes their dependence, and thus our model would not be causally sufficient.

If we specify the distributions of U_1, ..., U_n, recursive application of (3) allows us to compute the entailed observational joint distribution P(X_1, ..., X_n). This distribution has structural properties inherited from the graph [147, 183]: it satisfies the causal Markov condition, stating that conditioned on its parents, each X_j is independent of its non-descendants. Intuitively, we can think of the independent noises as "information probes" that spread through the graph (much like independent elements of gossip can spread through a social network). Their information gets entangled, manifesting itself in a footprint of conditional dependencies, making it possible to infer aspects of the graph structure from observational data using independence testing. Like in the gossip analogy, the footprint may not be sufficiently characteristic to pin down a unique causal structure. In particular, it certainly is not if there are only two observables, since any nontrivial conditional independence statement requires at least three variables. The two-variable problem can be addressed by making additional assumptions, as not only the graph topology leaves a footprint in the observational distribution, but the functions f_i do, too. This point is interesting for machine learning, where much attention is devoted to properties of function classes (e.g., priors or capacity measures), and we shall return to it below.
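A minimal sketch of the "information probe" picture: ancestral sampling from a toy linear SCM X1 → X2 → X3 with independent noises, followed by a check of the conditional-independence footprint. Partial correlation is used here as a stand-in for a conditional independence test (adequate in this linear-Gaussian toy, not in general), and the SCM itself is an arbitrary choice.

```python
# Ancestral sampling from a toy SCM and its conditional-independence footprint.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Structural assignments (3): each variable is a function of its parents
# plus an independent noise term.
u1, u2, u3 = rng.normal(size=(3, n))
x1 = u1
x2 = 0.9 * x1 + u2
x3 = 0.7 * x2 + u3

def partial_corr(a, b, given):
    # Correlate the residuals of a and b after regressing out `given`.
    ra = a - np.polyval(np.polyfit(given, a, 1), given)
    rb = b - np.polyval(np.polyfit(given, b, 1), given)
    return np.corrcoef(ra, rb)[0, 1]

print("corr(X1, X3)     :", round(float(np.corrcoef(x1, x3)[0, 1]), 3))  # clearly non-zero
print("corr(X1, X3 | X2):", round(partial_corr(x1, x3, x2), 3))          # ~0: Markov condition
```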
a) Causal Graphical Models: The graph structure, along with the joint independence of the noises, implies a canonical factorization of the joint distribution entailed by (3) into causal conditionals that we refer to as the causal (or disentangled) factorization,

P(X_1, ..., X_n) = ∏_{i=1}^{n} P(X_i | PA_i).   (4)

While many other entangled factorizations are possible, e.g.,

P(X_1, ..., X_n) = ∏_{i=1}^{n} P(X_i | X_{i+1}, ..., X_n),   (5)

the factorization (4) yields practical computational advantages during inference, which is in general hard, even when it comes to non-trivial approximations [210]. But more interestingly, it is the only one that decomposes the joint distribution into conditionals corresponding to the structural assignments (3). We think of these as the causal mechanisms that are responsible for all statistical dependencies among the observables. Accordingly, in contrast to (5), the disentangled factorization represents the joint distribution as a product of causal mechanisms.

b) Latent Variables and Confounders: Variables in a causal graph may be unobserved, which can make causal inference particularly challenging. Unobserved variables may confound two observed variables so that they either appear statistically related while not being causally related (i.e., neither of the variables is an ancestor of the other), or their statistical relation is altered by the presence of the confounder (e.g., one variable is a causal ancestor of the other, but the confounder is a causal ancestor of both). Confounders may or may not be known or observed.

c) Interventions: The SCM language makes it straightforward to formalize interventions as operations that modify a subset of assignments (3), e.g., changing U_i, setting f_i (and thus X_i) to a constant, or changing the functional form of f_i (and thus the dependency of X_i on its parents) [237, 183]. Several types of interventions may be possible [62], which can be categorized as:
• No intervention: only observational data is obtained from the causal model.
• Hard/perfect: the function in the structural assignment (3) of a variable (or, analogously, of multiple variables) is set to a constant (implying that the value of the variable is fixed), and then the entailed distribution for the modified SCM is computed.
• Soft/imperfect: the structural assignment (3) for a variable is modified by changing the function or the noise term (this corresponds to changing the conditional distribution given its parents).
• Uncertain: the learner is not sure which mechanism/variable is affected by the intervention.

One could argue that stating the structural assignments as in (3) is not yet sufficient to formulate a causal model. In addition, one should specify the set of possible interventions on the structural causal model. This may be done implicitly via the functional form of the structural equations by allowing any intervention over the domain of the mechanisms. This becomes relevant when learning a causal model from data, as the SCM depends on the interventions. Pragmatically, we should aim at learning causal models that are useful for specific sets of tasks of interest [207, 267] on appropriate descriptors (in terms of which causal statements they support) that must either be provided or learned. We will return to the assumptions that allow learning causal models and features in Section IV. An example of the difference between a statistical and a causal model is depicted in Figure 1.
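A minimal sketch of hard and soft interventions as modifications of the structural assignments, continuing the toy chain X1 → X2 → X3 used above (coefficients arbitrary): a hard intervention clamps X2 to a constant, a soft one changes its mechanism, and downstream variables are re-sampled through their unchanged assignments.

```python
# Hard vs. soft interventions on a toy SCM, implemented by swapping the
# structural assignment of X2 while leaving the other mechanisms untouched.
import numpy as np

rng = np.random.default_rng(0)

def sample(n, x2_assignment):
    u1, u2, u3 = rng.normal(size=(3, n))
    x1 = u1
    x2 = x2_assignment(x1, u2)
    x3 = 0.7 * x2 + u3
    return x1, x2, x3

observational = lambda x1, u2: 0.9 * x1 + u2          # original mechanism
hard_do       = lambda x1, u2: np.full_like(x1, 5.0)  # do(X2 = 5)
soft          = lambda x1, u2: 0.2 * x1 + u2 + 1.0    # changed conditional

for name, mech in [("observational", observational), ("do(X2=5)", hard_do),
                   ("soft intervention", soft)]:
    x1, x2, x3 = sample(100_000, mech)
    print(f"{name:18s} E[X3] = {x3.mean():5.2f}   cov(X1, X2) = "
          f"{np.cov(x1, x2)[0, 1]:5.2f}")
# Under do(X2=5), X2 is decoupled from X1 (covariance ~0) and E[X3] jumps
# to ~3.5, while the mechanism generating X3 from X2 is left untouched.
```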
A statistical model may be defined for instance through a graphical model, i.e., a probability distribution along with a graph such that the former is Markovian with respect to the latter (in which case it can be factorized as in (4)). However, the edges in a (generic) graphical model do not need to be causal [97]. For instance, the two graphs X_1 → X_2 → X_3 and X_1 ← X_2 ← X_3 imply the same conditional independence(s) (X_1 and X_3 are independent given X_2). They are thus in the same Markov equivalence class, i.e., if a distribution is Markovian w.r.t. one of the graphs, then it also is w.r.t. the other graph. Note that the above serves as an example that the Markov condition is not sufficient for causal discovery. Further assumptions are needed, cf. below and [237, 183, 188]. A graphical model becomes causal if the edges of its graph are causal (in which case the graph is referred to as a "causal graph"), cf. (3). This allows us to compute interventional distributions as depicted in Figure 1. When a variable is intervened upon, we disconnect it from its parents, fix its value, and perform ancestral sampling on its children. A structural causal model is composed of (i) a set of causal variables and (ii) a set of structural equations with a distribution over the noise variables U_i (or a set of causal conditionals). While both causal graphical models and SCMs allow us to compute interventional distributions, only SCMs allow us to compute counterfactuals. To compute counterfactuals, we need to fix the value of the noise variables. Moreover, there are many ways to represent a conditional as a structural assignment (by picking different combinations of functions and noise variables).

a) Causal Learning and Reasoning: The conceptual basis of statistical learning is a joint distribution P(X_1, ..., X_n) (where often one of the X_i is a response variable denoted as Y), and we make assumptions about function classes used to approximate, say, a regression E[Y|X]. Causal learning considers a richer class of assumptions, and seeks to exploit the fact that the joint distribution possesses a causal factorization (4). It involves the causal conditionals P(X_i | PA_i) (e.g., represented by the functions f_i and the distribution of U_i in (3)), how these conditionals relate to each other, and the interventions or changes that they admit. Once a causal model is available, either by external human knowledge or a learning process, causal reasoning allows us to draw conclusions on the effect of interventions, counterfactuals, and potential outcomes. In contrast, statistical models only allow us to reason about the outcome of i.i.d. experiments.

We now return to the disentangled factorization (4) of the joint distribution P(X_1, ..., X_n). This factorization according to the causal graph is always possible when the U_i are independent, but we will now consider an additional notion of independence relating the factors in (4) to one another. Whenever we perceive an object, our brain assumes that the object and the mechanism by which the information contained in its light reaches our brain are independent. We can violate this by looking at the object from an accidental viewpoint, which can give rise to optical illusions [188]. The above independence assumption is useful because, in practice, it holds most of the time, and our brain thus relies on objects being independent of our vantage point and the illumination.
Likewise, there should not be accidental coincidences, such as 3D structures lining up in 2D, or shadow boundaries coinciding with texture boundaries. In vision research, this is called the generic viewpoint assumption. If we move around the object, our vantage point changes, but we assume that the other variables of the overall generative process (e.g., lighting, object position and structure) are unaffected by that. This is an invariance implied by the above independence, allowing us to infer 3D information even without stereo vision ("structure from motion").

For another example, consider a dataset that consists of altitude A and average annual temperature T of weather stations [188]. A and T are correlated, which we believe is due to the fact that the altitude has a causal effect on temperature. Suppose we had two such datasets, one for Austria and one for Switzerland. The two joint distributions P(A, T) may be rather different since the marginal distributions P(A) over altitudes will differ. The conditionals P(T|A), however, may be (close to) invariant, since they characterize the physical mechanisms that generate temperature from altitude. This similarity is lost upon us if we only look at the overall joint distribution, without information about the causal structure A → T. The causal factorization P(A)P(T|A) will contain a component P(T|A) that generalizes across countries, while the entangled factorization P(T)P(A|T) will exhibit no such robustness. Cum grano salis, the same applies when we consider interventions in a system. For a model to correctly predict the effect of interventions, it needs to be robust to generalizing from an observational distribution to certain interventional distributions. One can express the above insights as follows [218, 188]:

Independent Causal Mechanisms (ICM) Principle. The causal generative process of a system's variables is composed of autonomous modules that do not inform or influence each other. In the probabilistic case, this means that the conditional distribution of each variable given its causes (i.e., its mechanism) does not inform or influence the other mechanisms.

This principle entails several notions important to causality, including separate intervenability of causal variables, modularity and autonomy of subsystems, and invariance [183, 188]. If we have only two variables, it reduces to an independence between the cause distribution and the mechanism producing the effect distribution. Applied to the causal factorization (4), the principle tells us that the factors should be independent in the sense that (a) changing (or performing an intervention upon) one mechanism P(X_i|PA_i) does not change any of the other mechanisms P(X_j|PA_j) (i ≠ j) [218], and (b) knowing some other mechanisms P(X_i|PA_i) (i ≠ j) does not give us information about a mechanism P(X_j|PA_j) [120]. This notion of independence thus subsumes two aspects: the former pertaining to influence, and the latter to information.

The notion of invariant, autonomous, and independent mechanisms has appeared in various guises throughout the history of causality research [99, 71, 111, 183, 120, 240, 188]. Early work on this was done by Haavelmo [99], stating the assumption that changing one of the structural assignments leaves the other ones invariant. Hoover [111] attributes to Herb Simon the invariance criterion: the true causal order is the one that is invariant under the right sort of intervention. Aldrich [4] discusses the historical development of these ideas in economics.
He argues that the "most basic question one can ask about a relation should be: How autonomous is it?" [71, preface]. Pearl [183] discusses autonomy in detail, arguing that a causal mechanism remains invariant when other mechanisms are subjected to external influences. He points out that causal discovery methods may best work "in longitudinal studies conducted under slightly varying conditions, where accidental independencies are destroyed and only structural independencies are preserved." Overviews are provided by Aldrich [4], Hoover [111], Pearl [183], and Peters et al. [188, Sec. 2.2]. These seemingly different notions can be unified [120, 240].

We view any real-world distribution as a product of causal mechanisms. A change in such a distribution (e.g., when moving from one setting/domain to a related one) will always be due to changes in at least one of those mechanisms. Consistent with the implication (a) of the ICM Principle, we state the following hypothesis:

Sparse Mechanism Shift (SMS). Small distribution changes tend to manifest themselves in a sparse or local way in the causal/disentangled factorization (4), i.e., they should usually not affect all factors simultaneously.

In contrast, if we consider a non-causal factorization, e.g., (5), then many, if not all, terms will be affected simultaneously as we change one of the physical mechanisms responsible for a system's statistical dependencies. Such a factorization may thus be called entangled, a term that has gained popularity in machine learning [23, 109, 158, 247]. The SMS hypothesis was stated in [181, 24, 221, 115], and in earlier form in [218, 279, 220]. An intellectual ancestor is Simon's invariance criterion, i.e., that the causal structure remains invariant across changing background conditions [235]. The hypothesis is also related to ideas of looking for features that vary slowly [69, 270]. It has recently been used for learning causal models [131], modular architectures [84, 28], and disentangled representations [159].

We have informally talked about the dependence of two mechanisms P(X_i|PA_i) and P(X_j|PA_j) when discussing the ICM Principle and the disentangled factorization (4). Note that the dependence of two such mechanisms does not coincide with the statistical dependence of the random variables X_i and X_j. Indeed, in a causal graph, many of the random variables will be dependent even if all mechanisms are independent. Also, the independence of the noise terms U_i does not translate into the independence of the X_i. Intuitively speaking, the independent noise terms U_i provide and parameterize the uncertainty contained in the fact that a mechanism P(X_i|PA_i) is non-deterministic, and thus ensure that each mechanism adds an independent element of uncertainty. In this sense, the ICM Principle contains the independence of the unexplained noise terms in an SCM (3) as a special case.

In the ICM Principle, we have stated that independence of two mechanisms (formalized as conditional distributions) should mean that the two conditional distributions do not inform or influence each other. The latter can be thought of as requiring that independent interventions are possible. To better understand the former, we next discuss a formalization in terms of algorithmic independence. In a nutshell, we encode each mechanism as a bit string, and require that joint compression of these strings does not save space relative to independent compressions.
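Kolmogorov complexity is uncomputable, so the following is only a crude illustration of the idea: zlib compressed length serves as a stand-in, and each "mechanism" is encoded as a byte string (here, a serialized conditional-probability table with arbitrary values). Joint compression saves essentially nothing for unrelated mechanisms, but a lot when one mechanism is a copy of the other.

```python
# Crude proxy for algorithmic (in)dependence of mechanisms via compression.
import zlib
import numpy as np

def encode(table):
    # Serialize a conditional probability table to bytes (rounded for stability).
    return np.round(np.asarray(table), 3).tobytes()

def clen(s):
    # Compressed length: our stand-in for Kolmogorov complexity.
    return len(zlib.compress(s, 9))

rng = np.random.default_rng(0)
mech_a = rng.dirichlet(np.ones(4), size=256)   # P(X_i | PA_i), arbitrary values
mech_b = rng.dirichlet(np.ones(4), size=256)   # an unrelated mechanism
mech_c = mech_a.copy()                         # a mechanism that copies mech_a

a, b, c = encode(mech_a), encode(mech_b), encode(mech_c)
print("independent:", clen(a) + clen(b) - clen(a + b))  # ~0: joint compression saves nothing
print("dependent:  ", clen(a) + clen(c) - clen(a + c))  # large: shared structure compresses
```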
To make this notion precise, first recall that we have so far discussed links between causal and statistical structures. Of the two, the more fundamental one is the causal structure, since it captures the physical mechanisms that generate statistical dependencies in the first place. The statistical structure is an epiphenomenon that follows if we make the unexplained variables random. It is awkward to talk about the statistical information contained in a mechanism, since deterministic functions in the generic case neither generate nor destroy information. This serves as a motivation to devise an alternative model of causal structures in terms of Kolmogorov complexity [120]. The Kolmogorov complexity (or algorithmic information) of a bit string is essentially the length of its shortest compression on a Turing machine, and thus a measure of its information content. Independence of mechanisms can be defined as vanishing mutual algorithmic information; i.e., two conditionals are considered independent if knowing (the shortest compression of) one does not help us achieve a shorter compression of the other.

Algorithmic information theory provides a natural framework for non-statistical graphical models [120, 126]. Just as the latter are obtained from structural causal models by making the unexplained variables U_i random, we obtain algorithmic graphical models by making the U_i bit strings, jointly independent across nodes, and viewing X_i as the output of a fixed Turing machine running the program U_i on the input PA_i. Similar to the statistical case, one can define a local causal Markov condition, a global one in terms of d-separation, and an additive decomposition of the joint Kolmogorov complexity in analogy to (4), and prove that they are implied by the structural causal model [120]. Interestingly, in this case, independence of noises and independence of mechanisms coincide, since the independent programs play the role of the unexplained noise terms. This approach shows that causality is not intrinsically bound to statistics.

Let us turn to the problem of causal discovery from data. Subject to suitable assumptions such as faithfulness [237], one can sometimes recover aspects of the underlying graph from observational data by performing conditional independence tests. However, there are several problems with this approach. One is that our datasets are always finite in practice, and conditional independence testing is a notoriously difficult problem, especially if conditioning sets are continuous and multi-dimensional. So while, in principle, the conditional independencies implied by the causal Markov condition hold irrespective of the complexity of the functions appearing in an SCM, for finite datasets, conditional independence testing is hard without additional assumptions [225]. Recent progress in (conditional) independence testing heavily relies on kernel function classes to represent probability distributions in reproducing kernel Hilbert spaces [90, 91, 73, 278, 60, 191, 42]. The other problem is that in the case of only two variables, the ternary concept of conditional independence collapses and the Markov condition thus has no nontrivial implications. It turns out that both problems can be addressed by making assumptions on function classes. This is typical for machine learning, where it is well-known that finite-sample generalization without assumptions on function classes is impossible.
Specifically, although there are universally consistent learning algorithms, i.e., algorithms approaching minimal expected error in the infinite sample limit, there are always cases where this convergence is arbitrarily slow. So for a given sample size, it will depend on the problem being learned whether we achieve low expected error, and statistical learning theory provides probabilistic guarantees in terms of measures of complexity of function classes [55, 257]. Returning to causality, we provide an intuition why assumptions on the functions in an SCM should be necessary to learn about them from data. Consider a toy SCM with only two observables X → Y. In this case, (3) turns into

X = U,   Y = f(X, V),   (6)

with U ⊥⊥ V. Now think of V acting as a random selector variable choosing from among a set of functions {f_v(x) := f(x, v) : v ∈ supp(V)}. If f(x, v) depends on v in a non-smooth way, it should be hard to glean information about the SCM from a finite dataset, given that V is not observed and its value randomly selects among arbitrarily different f_v. This motivates restricting the complexity with which f depends on V. A natural restriction is to assume an additive noise model

Y = f(X) + V.   (7)

If f in (6) depends smoothly on V, and if V is relatively well concentrated, this can be motivated by a local Taylor expansion argument. It drastically reduces the effective size of the function class; without such assumptions, the latter could depend exponentially on the cardinality of the support of V.

Restrictions of function classes not only make it easier to learn functions from data, but it turns out that they can break the symmetry between cause and effect in the two-variable case: one can show that given a distribution over X, Y generated by an additive noise model, one cannot fit an additive noise model in the opposite direction (i.e., with the roles of X and Y interchanged) [113, 174, 187, 139, 17], cf. also [246]. This is subject to certain genericity assumptions, and notable exceptions include the case where U, V are Gaussian and f is linear. It generalizes results of Shimizu et al. [229] for linear functions, and it can be generalized to include non-linear rescalings [277], loops [175], confounders [123], and multi-variable settings [186]. Empirically, there are a number of methods that can detect causal direction better than chance [176], some of them building on the above Kolmogorov complexity model [36], some on generative models [82], and some directly learning to classify bivariate distributions into causal vs. anticausal [161].

While restrictions of function classes are one way to make the causal structure identifiable, other assumptions or scenarios are possible. So far, we have discussed that causal models are expected to generalize under certain distribution shifts since they explicitly model interventions. By the SMS hypothesis, much of the causal structure is assumed to remain invariant. Hence distribution shifts, such as observing a system in different "environments / contexts", can significantly help to identify causal structure [251, 188]. These contexts can come from interventions [218, 189, 192], non-stationary time series [117, 100, 193], or multiple views [89, 115]. The contexts can likewise be interpreted as different tasks, which provide a connection to meta-learning [22, 67, 213]. The work of Bengio et al. [24] ties the generalization in meta-learning to invariance properties of causal models, using the idea that a causal model should adapt faster to interventions than purely predictive models.
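Returning to the bivariate additive-noise setting above, here is a minimal sketch of the direction-detection idea: fit a flexible regression in both directions and ask in which direction the residuals look independent of the input. A crude heteroscedasticity check (residual spread across input bins) stands in for a proper independence test such as HSIC, and the data-generating process is an arbitrary nonlinear example with additive noise in the X → Y direction.

```python
# Toy additive-noise-model direction detection.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
X = rng.uniform(-1, 1, size=n)
Y = X ** 3 + 0.05 * rng.uniform(-1, 1, size=n)   # true direction: X -> Y

def residual_dependence(inp, out, degree=7, bins=10):
    # Fit a polynomial regression out ~ inp and measure how strongly the
    # residual spread varies across bins of the input (near 0 means the
    # residuals "look independent", as an additive noise model requires).
    res = out - np.polyval(np.polyfit(inp, out, degree), inp)
    edges = np.quantile(inp, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(inp, edges[1:-1]), 0, bins - 1)
    bin_std = np.array([res[idx == b].std() for b in range(bins)])
    return bin_std.std() / bin_std.mean()

print("X -> Y residual dependence:", round(residual_dependence(X, Y), 2))  # small
print("Y -> X residual dependence:", round(residual_dependence(Y, X), 2))  # much larger
```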
The approach of [24] was extended to multiple variables and unknown interventions in [131], which proposes a framework for causal discovery using neural networks by turning the discrete graph search into a continuous optimization problem. While [24, 131] focus on learning a causal model using neural networks with an unsupervised loss, the work of Dasgupta et al. [50] explores learning a causal model using a reinforcement learning agent. These approaches have in common that semantically meaningful abstract representations are given and do not need to be learned from high-dimensional and low-level (e.g., pixel) data.

Traditional causal discovery and reasoning assume that the units are random variables connected by a causal graph. However, real-world observations are usually not structured into those units to begin with, e.g., objects in images [162]. Hence, the emerging field of causal representation learning strives to learn these variables from data, much like machine learning went beyond symbolic AI in not requiring that the symbols that algorithms manipulate be given a priori (cf. Bonet and Geffner [33]). To this end, we could try to connect causal variables S_1, ..., S_n to observations

X = G(S_1, ..., S_n),   (10)

where G is a non-linear function. An example can be seen in Figure 2, where high-dimensional observations are the result of a view on the state of a causal system that is then processed by a neural network to extract high-level variables that are useful on a variety of tasks. Although causal models in economics, medicine, or psychology often use variables that are abstractions of underlying quantities, it is challenging to state general conditions under which coarse-grained variables admit causal models with well-defined interventions [41, 207]. Defining objects or variables that can be causally related amounts to coarse-graining of more detailed models of the world, including microscopic structural equation models [207], ordinary differential equations [173, 208], and temporally aggregated time series [78]. The task of identifying suitable units that admit causal models is challenging for both human and machine intelligence. Still, it aligns with the general goal of modern machine learning to learn meaningful representations of data, where meaningful can include robust, explainable, or fair [142, 133, 276, 130, 260].

To combine structural causal modeling (3) and representation learning, we should strive to embed an SCM into larger machine learning models whose inputs and outputs may be high-dimensional and unstructured, but whose inner workings are at least partly governed by an SCM (that can be parameterized with a neural network). The result may be a modular architecture, where the different modules can be individually fine-tuned and re-purposed for new tasks [181, 84], and the SMS hypothesis can be used to enforce the appropriate structure. We visualize an example in Figure 3, where changes are sparse for the appropriate causal variables (the position of the finger and the cube changed as a result of moving the finger), but dense in other representations, for example in the pixel space (as finger and cube move, many pixels change their value). At the extreme, all pixels may change as a result of a sparse intervention, for example, if the camera view or the lighting changes. We now discuss three problems of modern machine learning in the light of causal representation learning.
a) Problem 1 - Learning Disentangled Representations: We have earlier discussed the ICM Principle, which implies the independence of the SCM noise terms in (3) and thus the feasibility of the disentangled representation

P(S_1, ..., S_n) = ∏_{i=1}^{n} P(S_i | PA_i),   (11)

as well as the property that the conditionals P(S_i | PA_i) be independently manipulable and largely invariant across related problems.

Fig. 2. Illustration of the causal representation learning problem setting. Perceptual data, such as images or other high-dimensional sensor measurements, can be thought of as entangled views of the state of an unknown causal system as described in (10). With the exception of possible task labels, none of the causal variables generating the system may be known. The goal of causal representation learning is to learn a representation (partially) exposing this unknown causal structure (e.g., which variables describe the system, and their relations). As full recovery may often be unreasonable, neural networks may map the low-level features to some high-level variables supporting causal statements relevant to a set of downstream tasks of interest. For example, if the task is to detect the manipulable objects in a scene, the representation may separate intrinsic object properties from their pose and appearance to achieve robustness to distribution shifts on the latter variables. Usually, we do not get labels for the high-level variables, but the properties of causal models can serve as useful inductive biases for learning (e.g., the SMS hypothesis).

Suppose we seek to reconstruct such a disentangled representation using independent mechanisms (11) from data, but the causal variables S_i are not provided to us a priori. Rather, we are given (possibly high-dimensional) X = (X_1, ..., X_d) (below, we think of X as an image with pixels X_1, ..., X_d), as in (10), from which we should construct causal variables S_1, ..., S_n (n ≪ d) as well as mechanisms, cf. (3), modeling the causal relationships among the S_i. To this end, as a first step, we can use an encoder q : R^d → R^n taking X to a latent "bottleneck" representation comprising the unexplained noise variables U = (U_1, ..., U_n). The next step is the mapping f(U) determined by the structural assignments

S_i := f_i(PA_i, U_i),   (12)

i.e., by f_1, ..., f_n. Finally, we apply a decoder p : R^n → R^d. For suitable n, the system can be trained using reconstruction error to satisfy p ∘ f ∘ q ≈ id on the observed images. If the causal graph is known, the topology of a neural network implementing f can be fixed accordingly; if not, the neural network decoder learns the composition p̃ = p ∘ f. In practice, one may not know f, and thus only learn an autoencoder p̃ ∘ q, where the causal graph effectively becomes an unspecified part of the decoder p̃, possibly aided by a suitable choice of architecture [149].

Much of the existing work on disentanglement [109, 158, 159, 256, 157, 135, 202, 61] focuses on independent factors of variation. This can be viewed as the special case where the causal graph is trivial, i.e., ∀i : PA_i = ∅ in (12). In this case, the factors are functions of the independent exogenous noise variables, and thus independent themselves. However, the ICM Principle is more general and contains statistical independence as a special case. Note that the problem of object-centric representation learning [10, 39, 83, 86, 87, 138, 155, 160, 262, 255] can also be considered a special case of disentangled factorization as discussed here.
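The following is a schematic sketch, not the authors' implementation, of the p ∘ f ∘ q ≈ id pipeline described above, assuming a known causal graph over n latent variables given as a parent list. All sizes, modules, and the training loop are illustrative choices.

```python
# Schematic encoder -> structural assignments -> decoder pipeline.
import torch
import torch.nn as nn

class StructuredAutoencoder(nn.Module):
    def __init__(self, d, n, parents):
        super().__init__()
        self.parents = parents                      # parents[i]: list of indices PA_i
        self.q = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, n))
        # One small network per structural assignment S_i := f_i(PA_i, U_i).
        self.f = nn.ModuleList([
            nn.Sequential(nn.Linear(len(p) + 1, 32), nn.ReLU(), nn.Linear(32, 1))
            for p in parents])
        self.p = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, d))

    def forward(self, x):
        u = self.q(x)                               # "noise" bottleneck U_1..U_n
        s = []
        for i, pa in enumerate(self.parents):       # parents assumed in ancestral order
            inp = torch.cat([u[:, i:i + 1]] + [s[j] for j in pa], dim=1)
            s.append(self.f[i](inp))
        s = torch.cat(s, dim=1)
        return self.p(s), s

# Toy usage: 64-dimensional observations, 3 causal variables, graph S1 -> S2 -> S3.
model = StructuredAutoencoder(d=64, n=3, parents=[[], [0], [1]])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 64)                            # placeholder data
for _ in range(10):
    recon, _ = model(x)
    loss = ((recon - x) ** 2).mean()                # reconstruction error
    opt.zero_grad(); loss.backward(); opt.step()
```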
Objects are constituents of scenes that, in principle, permit separate interventions. A disentangled representation of a scene containing objects should probably use objects as some of the building blocks of an overall causal factorization, complemented by mechanisms such as orientation, viewing direction, and lighting.

The problem of recovering the exogenous noise variables is ill-defined in the i.i.d. case, as there are infinitely many equivalent solutions yielding the same observational distribution [158, 116, 188]. Additional assumptions or biases can help favor certain solutions over others [158, 205]. Leeb et al. [149] propose a structured decoder that embeds an SCM and automatically learns a hierarchy of disentangled factors. To make (12) causal, we can use the ICM Principle, i.e., we should make the U_i statistically independent, and we should make the mechanisms independent. This could be done by ensuring that they are invariant across problems, exhibit sparse changes in response to actions, or that they can be independently intervened upon [221, 21, 29]. Locatello et al. [159] showed that the sparse mechanism shift hypothesis stated above is theoretically sufficient when given suitable training data. Further, the SMS hypothesis can be used as a supervision signal in practice even if PA_i = ∅ [252]. However, which factors of variation can be disentangled depends on which interventions can be observed [230, 159]. As discussed by Schölkopf et al. [220] and Shu et al. [230], different supervision signals may be used to identify subsets of factors. Similarly, when learning causal variables from data, which variables can be extracted and their granularity depends on which distribution shifts, explicit interventions, and other supervision signals are available.

b) Problem 2 - Learning Transferable Mechanisms: An artificial or natural agent in a complex world is faced with limited resources. This concerns training data, i.e., we only have limited data for each task/domain, and thus need to find ways of pooling/re-using data, in stark contrast to the current industry practice of large-scale labeling work done by humans. It also concerns computational resources: animals have constraints on the size of their brains, and evolutionary neuroscience knows many examples where brain regions get re-purposed. Similar constraints on size and energy apply as ML methods get embedded in (small) physical devices that may be battery-powered. Future AI models that robustly solve a range of problems in the real world will thus likely need to re-use components, which requires them to be robust across tasks and environments [220]. An elegant way to do this is to employ a modular structure that mirrors a corresponding modularity in the world. In other words, if the world is indeed modular, in the sense that components/mechanisms of the world play roles across a range of environments, tasks, and settings, then it would be prudent for a model to employ corresponding modules [84]. For instance, if variations of natural lighting (the position of the sun, clouds, etc.) imply that the visual environment can appear in brightness conditions spanning several orders of magnitude, then visual processing algorithms in our nervous system should employ methods that can factor out these variations, rather than building separate sets of face recognizers, say, for every lighting condition.
If, for example, our nervous system were to compensate for the lighting changes by a gain control mechanism, then this mechanism in itself need not have anything to do with the physical mechanisms bringing about brightness differences. However, it would play a role in a modular structure that corresponds to the role that the physical mechanisms play in the world's modular structure. This could produce a bias towards models that exhibit certain forms of structural homomorphism to a world that we cannot directly recognize, which would be rather intriguing, given that ultimately our brains do nothing but turn neuronal signals into other neuronal signals. A sensible inductive bias for learning such models is to look for independent causal mechanisms [180], and competitive training can play a role in this. For pattern recognition tasks, [181, 84] suggest that learning causal models that contain independent mechanisms may help in transferring modules across substantially different domains.

c) Problem 3 - Learning Interventional World Models and Reasoning: Deep learning excels at learning representations of data that preserve relevant statistical properties [23, 148]. However, it does so without taking into account the causal properties of the variables, i.e., it does not care about the interventional properties of the variables it analyzes or reconstructs. Causal representation learning should move beyond the representation of statistical dependence structures towards models that support intervention, planning, and reasoning, realizing Konrad Lorenz' notion of thinking as acting in an imagined space [163]. This ultimately requires the ability to reflect back on one's actions and envision alternative scenarios, possibly necessitating (the illusion of) free will [184]. The biological function of self-consciousness may be related to the need for a variable representing oneself in one's Lorenzian imagined space, and free will may then be a means to communicate about actions taken by that variable, crucial for social and cultural learning, a topic which has not yet entered the stage of machine learning research although it is at the core of human intelligence [107].

All of this discussion calls for a learning paradigm that does not rest on the usual i.i.d. assumption. Instead, we wish to make a weaker assumption: that the data on which the model will be applied comes from a possibly different distribution, but involving (mostly) the same causal mechanisms [188]. This raises serious challenges: (a) in many cases, we need to infer abstract causal variables from the available low-level input features; (b) there is no consensus on which aspects of the data reveal causal relations; (c) the usual experimental protocol of training and test sets may not be sufficient for inferring and evaluating causal relations on existing data sets, and we may need to create new benchmarks, for example with access to environment information and interventions; (d) even in the limited cases we understand, we often lack scalable and numerically sound algorithms. Despite these challenges, we argue that this endeavor has concrete implications for machine learning and may shed light on desiderata and current practices alike.

Suppose our underlying causal graph is X → Y, and at the same time we are trying to learn a mapping X → Y. The causal factorization (4) for this case is

P(X, Y) = P(X) P(Y | X).

The ICM Principle posits that the modules in a joint distribution's causal decomposition do not inform or influence each other.
This means that, in particular, P(X) should contain no information about P(Y|X), which implies that semi-supervised learning (SSL) should be futile, insofar as it uses additional information about P(X) (from unlabelled data) to improve our estimate of P(Y|X = x). In the opposite (anticausal) direction (i.e., when the direction of prediction is opposite to the causal generative process), however, SSL may be possible. To see this, we refer to Daniušis et al. [49], who define a measure of dependence between the input P(X) and the conditional P(Y|X). Assuming that this measure is zero in the causal direction (applying the ICM assumption described in Section IV to the two-variable case), they show that it is strictly positive in the anticausal direction. Applied to SSL in the anticausal direction, this implies that the distribution of the input (now: effect) variable should contain information about the conditional of output (cause) given input, i.e., the quantity that machine learning is usually concerned with. The study [218] empirically corroborated these predictions, thus establishing an intriguing bridge between the structure of learning problems and certain physical properties (cause-effect direction) of real-world data generating processes. It also led to a range of follow-up work [279, 266, 280, 77, 114, 281, 32, 96, 263, 243, 195, 152, 156, 153, 167, 204, 115], complementing the studies of Bareinboim and Pearl [12, 185], and it inspired a thread of work in the statistics community exploiting invariance for causal discovery and other tasks [189, 192, 105, 104, 115]. On the SSL side, subsequent developments include further theoretical analyses [121, 188, Section 5.1.2] and a form of conditional SSL [259].

The view of SSL as exploiting dependencies between a marginal P(X) and a non-causal conditional P(Y|X) is consistent with the common assumptions employed to justify SSL [44]. The cluster assumption asserts that the labeling function (which is a property of P(Y|X)) should not change within clusters of P(X). The low-density separation assumption posits that the area where P(Y|X) takes the value of 0.5 should have small P(X); and the semi-supervised smoothness assumption, applicable also to continuous outputs, states that if two points in a high-density region are close, then so should be the corresponding output values. Note, moreover, that some of the theoretical results in the field use assumptions well known from causal graphs (even if they do not mention causality): the co-training theorem [31] makes a statement about learnability from unlabelled data and relies on an assumption of predictors being conditionally independent given the label, which we would normally expect if the predictors are (only) caused by the label, i.e., an anticausal setting. This is nicely consistent with the above findings.

One can hypothesize that the causal direction should also have an influence on whether classifiers are vulnerable to adversarial attacks. These attacks have recently become popular, and consist of minute changes to inputs, invisible to a human observer yet changing a classifier's output [249]. This is related to causality in several ways. First, these attacks clearly constitute violations of the i.i.d. assumption that underlies statistical machine learning. If all we want to do is prediction in an i.i.d. setting, then statistical learning is fine. In the adversarial setting, however, the modified test examples are not drawn from the same distribution as the training examples.
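To make such attacks concrete, here is a minimal sketch of a standard one-step gradient-sign (FGSM-style) perturbation in PyTorch; the model, data, and budget `eps` are stand-ins, and the sketch is a generic illustration rather than the specific attacks or defenses discussed in this section.

```python
# One-step gradient-sign attack: perturb the input within an l_infinity ball
# of radius eps in the direction that increases the classification loss.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=0.03):
    """Return a copy of x perturbed within an l_infinity ball of radius eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the gradient, then clamp to valid pixel range.
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

# Toy usage with a stand-in linear classifier on 28x28 "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())   # perturbation bounded by eps
```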
The adversarial phenomenon also shows that the kind of robustness current classifiers exhibit is rather different from the one a human exhibits. If we knew both robustness measures, we could try to maximize one while minimizing the other. Current methods can be viewed as crude approximations to this, effectively modeling the human's robustness as a mathematically simple set, say, an ℓ_p ball of radius ε > 0: they often try to find examples which lead to maximal changes in the classifier's output, subject to the constraint that they lie in an ℓ_p ball in the pixel metric. As we think of a classifier as the approximation of a function, the large gradients exploited by these attacks are either a property of this function or a defect of the approximation.

There are different ways of relating this to causal models. As described in [188, Section 1.4], different causal models can generate the same statistical pattern recognition model. In one of those, we might provide a writer with a sequence of class labels y, with the instruction to produce a set of corresponding images x. Clearly, intervening on y will impact x, but intervening on x will not impact y, so this is an anticausal learning problem. In another setting, we might ask the writer to decide for herself which digits to write, and to record the labels alongside the digits (in this case, the classifier would try to predict one effect from another one, a situation which we might call a confounded one). In a last one, we might provide images to a person, and ask the person to generate labels by classifying them. Let us now assume that we are in the causal setting where the causal generative model factorizes into independent components, one of which is (essentially) the classification function. As discussed in Section III, when specifying a causal model, one needs to determine which interventions are allowed, and a structural assignment will then, by definition, be valid under every possible (allowed) intervention. One may thus expect that if the predictor approximates the causal mechanism that is inherently transferable and robust, adversarial examples should be harder to find [216, 134] (although adversarial attacks may still exploit the quality of the (parameterized) approximation of a structural equation). Recent work supports this view: it was shown that a possible defense against adversarial attacks is to solve the anticausal classification problem by modeling the causal generative direction, a method which in vision is referred to as analysis by synthesis [222]. A related defense method proceeds by reconstructing the input using an autoencoder before feeding it to a classifier [95]. We can speculate that structures composed of autonomous modules, such as given by a causal factorization (4), should be relatively robust to swapping out or modifying individual components.

Robustness should also play a role when studying strategic behavior, i.e., decisions or actions that take into account the actions of other agents (including AI agents). Consider a system that tries to predict the probability of successfully paying back a loan, based on a set of features. The set could include, for instance, the current debt of a person, as well as their address. To get a higher credit score, people could thus change their current debt (by paying it off), or they could change their address by moving to a more affluent neighborhood. The former probably has a positive causal impact on the probability of paying back; for the latter, this is less likely.
Thus, we could build a scoring system that is more robust with respect to such strategic behavior by only using causal features as inputs [132].

To formalize this general intuition, one can consider a form of out-of-distribution generalization, which can be optimized by minimizing the empirical risk over a class of distributions induced by a causal model of the data [5, 204, 169, 189, 218]. To describe this notion, we start by recalling the usual empirical risk minimization setup. We have access to data from a distribution P(X, Y) and train a predictor g in a hypothesis space H (e.g., a neural network with a certain architecture predicting Y from X) to minimize the empirical risk

\hat{R}_{P(X,Y)}(g) = \hat{E}_{P(X,Y)}\left[\ell(Y, g(X))\right], (15)

for a loss function \ell. Here, we denote by \hat{E}_{P(X,Y)} the empirical mean computed from a sample drawn from P(X, Y). When we refer to "out-of-distribution generalization" we mean having a small expected risk for a different distribution P†(X, Y):

R^{OOD}_{P^\dagger(X,Y)}(g) = E_{P^\dagger(X,Y)}\left[\ell(Y, g(X))\right]. (16)

Clearly, the gap between the empirical risk (15) and the out-of-distribution risk (16) will depend on how different the test distribution P† is from the training distribution P. To quantify this difference, we call environments the collection of different circumstances that give rise to the distribution shifts, such as locations, times, experimental conditions, etc. Environments can be modeled in a causal factorization (4), as they can be seen as interventions on one or several causal variables or mechanisms. As a motivating example, one environment may correspond to where a measurement is taken (for example, a certain room), and from each environment, we obtain a collection of measurements (images of objects in the same room). It is nontrivial (and in some cases provably hard [20]) to learn statistical models that are stable across training environments and generalize to novel testing environments [189, 204, 167, 5, 2] drawn from the same environment distribution. Using causal language, one could restrict P†(X, Y) to be the result of a certain set of interventions, i.e., P†(X, Y) ∈ P_G, where P_G is a set of interventional distributions over a causal graph G. The worst-case out-of-distribution risk then becomes

R^{OOD}(g) = \max_{P^\dagger \in P_G} E_{P^\dagger(X,Y)}\left[\ell(Y, g(X))\right]. (17)

To learn a robust predictor, we should have available a subset of environment distributions E ⊂ P_G and solve

\min_{g \in H} \max_{P^\dagger \in E} E_{P^\dagger(X,Y)}\left[\ell(Y, g(X))\right]. (18)

In practice, solving (18) requires specifying a causal model with an associated set of interventions. If the set of observed environments E does not coincide with the set of possible environments P_G, we have an additional estimation error that may be arbitrarily large in the worst case [5, 20].

D. Pre-training, Data Augmentation, and Self-Supervision

Learning predictive models by solving the min-max optimization problem of (18) is challenging. We now interpret several common techniques in machine learning as means of approximating (18). The first approach is enriching the distribution of the training set. This does not mean obtaining more examples from P(X, Y), but training on a richer dataset [244, 53], for example, through pre-training on a huge and diverse corpus [196, 54, 112, 137, 59, 35, 45, 253]. Since this strategy is based on standard empirical risk minimization, it can achieve stronger generalization in practice only if the new training distribution is sufficiently diverse to contain information about other distributions in P_G. The second approach, often coupled with the previous one, is to rely on data augmentation to increase the diversity of the data by "augmenting" it through a certain type of artificially generated interventions [9, 234, 140].
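Before turning to concrete augmentations, the following sketch (PyTorch, with stand-in data and names) shows how the objective (18) can be approximated in practice: each observed environment, or each batch of augmented/intervened data, contributes one risk term, and one either takes the worst case as in (18) or relaxes the maximum to an average.

```python
# Minimal sketch of (18) and its relaxation: `envs` is an assumed list of
# (x, y) batches, one per observed environment (or per sampled augmentation).
# Taking the max over per-environment risks approximates the worst case in
# (18); replacing the max by a mean gives the expectation-style relaxation
# commonly used with data augmentation.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

envs = [(torch.randn(32, 16), torch.randint(0, 2, (32,))) for _ in range(3)]  # stand-in data

def robust_step(worst_case=True):
    risks = torch.stack([loss_fn(model(x), y) for x, y in envs])  # one risk per environment
    loss = risks.max() if worst_case else risks.mean()            # (18) vs. its relaxation
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

for _ in range(10):
    robust_step()
```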
For the visual domain, common augmentations include transformations such as rotating the image, translating it by a few pixels, or flipping it horizontally. The high-level idea behind data augmentation is to encourage a system to learn underlying invariances or symmetries present in the augmented data distribution. For example, in a classification task, translating the image by a few pixels does not change the class label. One may view it as specifying a set of interventions E the model should be robust to (e.g., random crops, interpolations, translations, rotations, etc.). Instead of computing the maximum over all distributions in E, one can relax the problem by sampling from the interventional distributions and optimizing an expectation over the different augmented images on a suitably chosen subset [38], using a search algorithm like reinforcement learning [48] or an algorithm based on density matching [154].

The third approach is to rely on self-supervision to learn about P(X). Certain pre-training methods [196, 54, 112, 35, 45, 253] have shown that it is possible to achieve good results using only very few class labels by first pre-training on a large unlabeled dataset and then fine-tuning on few labeled examples. Similarly, pre-training on large unlabeled image datasets can improve performance by learning representations that efficiently transfer to a downstream task, as demonstrated by [179, 110, 102, 46, 92]. These methods fall under the umbrella of self-supervised learning, a family of techniques for converting an unsupervised learning problem into a supervised one by using so-called pretext tasks with artificially generated labels that require no human annotation. The basic idea behind using pretext tasks is to force the learner to learn representations that contain information about P(X) that may be useful for (an unknown) downstream task. Much of the work on methods that use self-supervision relies on carefully constructing pretext tasks. A central challenge here is to extract features that are indeed informative about the data generating distribution. Ideas from the ICM Principle could help develop methods that automate the process of constructing pretext tasks.

Finally, one can explicitly optimize (18), for example, through adversarial training [79]. In that case, P_G would contain a set of attacks an adversary might perform, whereas here we consider a set of natural interventions. An interesting research direction is the combination of all these techniques: large-scale training, data augmentation, self-supervision, and robust fine-tuning on the available data from multiple, potentially simulated environments.

Reinforcement learning (RL) is closer to causality research than the machine learning mainstream in that it sometimes effectively estimates do-probabilities directly. For example, on-policy learning estimates do-probabilities for the interventions specified by the policy (note that these may not be hard interventions if the policy depends on other variables). However, as soon as off-policy learning is considered, in particular in the batch (or observational) setting [146], issues of causality become subtle [164, 81]. An emerging line of work devoted to the intersection of RL and causality includes [13, 21, 164, 37, 50, 275, 1]. Causal learning applied to reinforcement learning can be divided into two aspects: causal induction and causal inference.
Causal induction (discovery) involves learning causal relations from data, for example, an RL agent learning a causal model of its environment. Causal inference, in turn, learns to plan and act based on a causal model. Causal induction in an RL setting poses different challenges than classic causal learning settings, where the causal variables are often given. However, there is accumulating evidence supporting the usefulness of appropriately structured representations of the environment [2, 26, 258].

a) World Models: Model-based RL [248, 67] is related to causality as it aims at modeling the effect of actions (interventions) on the current state of the world. Particularly relevant for causal learning are generative world models that capture some of the causal relations underlying the environment and serve as Lorenzian imagined spaces (see the Introduction) to train RL agents [127, 248, 98, 47, 271, 178, 232, 214, 268]. Structured generative approaches further aim at decomposing an environment into multiple entities with causally correct relations among them, modulo the completeness of the variables and confounding [58, 265, 43, 264, 14, 136]. However, many of the current approaches (regardless of structure) only build partial models of the environment [88]. Since they do not observe the environment at every time step, the environment may become an unobserved confounder affecting both the agent's actions and the reward. To address this issue, a model can use the backdoor criterion, conditioning on its policy [200].

b) Generalization, Robustness, and Fast Transfer: While RL has already achieved impressive results, the sample complexity required to achieve consistently good performance is often prohibitively high. Further, RL agents are often brittle (if data is limited) in the face of even tiny changes to the environment (either visual or mechanistic changes) unseen in the training phase. The question of generalization in RL is essential to the field's future both in theory and in practice. One proposed solution towards the goal of designing machines that can extrapolate experience across environments and tasks is to learn invariances in a causal graph structure. A key requirement for learning invariances from data may be the possibility to perform and learn from interventions. Work in developmental psychology argues that there is a need to experiment in order to discover causal relationships [80]. This can be modelled as an RL environment, where the agent can discover causal factors through interventions and by observing their effects. Further, causal models may allow modeling the environment as a set of underlying independent causal mechanisms such that, if there is a change in distribution, not all the mechanisms need to be re-learned. However, there are still open questions about the right way to think about generalization in RL, the right way to formalize the problem, and the most relevant tasks.

c) Counterfactuals: Counterfactual reasoning has been found to improve the data efficiency of RL algorithms [37, 165] and their performance [50], and it has been applied to communicate about past experiences in the multi-agent setting [68, 241]. These findings are consistent with work in cognitive psychology [64], which argues that counterfactuals allow us to reason about the usefulness of past actions and to transfer these insights to corresponding behavioral intentions in future scenarios [203, 199, 145].
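The idea from b) above, that an agent can discover causal structure by intervening and observing the effects, can be illustrated with a toy simulation. The sketch below assumes a small hypothetical linear SCM; intervening on each variable in turn and checking which other variables shift recovers the ancestral cause-effect relations. It is a deliberately simplified illustration, not a method from the works cited above.

```python
# Toy sketch (NumPy): discover cause-effect relations by intervening on each
# variable and checking which other variables' means shift. Assumes a
# hypothetical 3-variable linear-Gaussian SCM with graph 0 -> 1 -> 2.
import numpy as np

rng = np.random.default_rng(0)
W = np.array([[0.0, 2.0, 0.0],   # W[i, j] != 0 means i -> j
              [0.0, 0.0, 1.5],
              [0.0, 0.0, 0.0]])

def sample(n, do=None):
    """Ancestral sampling; `do` is an optional dict {variable: value}."""
    X = np.zeros((n, 3))
    for j in range(3):                        # indices are topologically ordered
        X[:, j] = X @ W[:, j] + rng.normal(size=n)
        if do is not None and j in do:
            X[:, j] = do[j]                   # hard intervention overrides the mechanism
    return X

n, threshold = 5000, 0.5
baseline = sample(n).mean(axis=0)
edges = set()
for i in range(3):                            # the "agent" intervenes on each variable in turn
    shifted = sample(n, do={i: 3.0}).mean(axis=0)
    for j in range(3):
        if j != i and abs(shifted[j] - baseline[j]) > threshold:
            edges.add((i, j))                 # i has a (possibly indirect) effect on j
print(edges)   # expected: {(0, 1), (0, 2), (1, 2)} -- ancestral relations, incl. indirect effects
```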
We argue that future work in RL should consider counterfactual reasoning as a critical component to enable acting in imagined spaces and formulating hypotheses that can subsequently be tested with suitably chosen interventions.

d) Offline RL: The success of deep learning methods in supervised learning can largely be attributed to the availability of large datasets and to methods that scale to large amounts of data. In reinforcement learning, collecting large amounts of high-fidelity, diverse data from scratch can be expensive and hence becomes a bottleneck. Offline RL [72, 150] tries to address this concern by learning a policy from a fixed dataset of trajectories, without requiring any experimental or interventional data (i.e., without any interaction with the environment). The effective use of observational (or logged) data may make real-world RL more practical by incorporating diverse prior experiences. To succeed, an agent should be able to infer the consequences of sets of actions different from those seen during training (i.e., the actions in the logged data), which essentially makes this a counterfactual inference problem. The distribution mismatch between the current policy and the policy that was used to collect the offline data makes offline RL challenging, as it requires us to move well beyond the assumption of independently and identically distributed data. Incorporating invariances, by factorizing knowledge in terms of independent causal mechanisms, can help make progress in the offline RL setting.

A fundamental question in the application of machine learning in the natural sciences is to what extent we can complement our understanding of a physical system with machine learning. One interesting aspect is physics simulation with neural networks [93], which can substantially increase the efficiency of hand-engineered simulators [103, 143, 269, 211, 264]. Significant out-of-distribution generalization of learned physical simulators may not be necessary if experimental conditions are carefully controlled, although the simulator has to be completely re-trained if the conditions change. On the other hand, the lack of systematic experimental conditions may become problematic in other applications such as healthcare. One example is personalized medicine, where we may wish to build a model of a patient's health state from a multitude of data sources, like electronic health records and genetic information [65, 108]. However, if we train a clinical system on doctors' actions in controlled settings, the system will likely provide little additional insight compared to the doctors' knowledge and may fail in surprising ways when deployed [18]. While it may be useful to automate certain decisions, an understanding of causality may be necessary to recommend treatment options that are personalized and reliable [201, 242, 224, 273, 6, 3, 30, 165]. Causality also has significant potential for helping understand medical phenomena, e.g., in the current Covid-19 pandemic, where causal mediation analysis helps disentangle the different effects contributing towards case fatality rates when a textbook example of Simpson's paradox was observed [261].

Another example of a scientific application is in astronomy, where causal models were used to identify exoplanets under the confounding of the instrument. Exoplanets are often detected as they partially occlude their host star when they transit in front of it, causing a slight decrease in brightness.
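As a toy illustration of how such instrument confounding can be handled, the following sketch applies the half-sibling regression idea of [219] to synthetic data: whatever a regression onto many other stars can predict about the target star's light curve is attributed to the shared instrument and subtracted. All quantities and parameters here are simulated stand-ins.

```python
# Toy sketch (NumPy + scikit-learn) of half-sibling regression on synthetic
# data: the target star's observed light curve is its true signal plus
# systematic instrument noise shared with many other stars. Regressing the
# target on the other stars estimates (mostly) the systematic part, which is
# then removed by subtraction.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
T, n_other = 1000, 50                              # time steps, number of "sibling" stars

systematic = rng.normal(size=T)                    # shared instrument noise
signal = 0.3 * np.sin(np.linspace(0, 20, T))       # the target star's true variability
target = signal + systematic + 0.1 * rng.normal(size=T)
others = (systematic[:, None] * rng.uniform(0.5, 1.5, n_other)
          + 0.3 * rng.normal(size=(T, n_other)))   # siblings carry the systematics, no signal

model = Ridge(alpha=1.0).fit(others, target)       # predict target from the sibling stars
corrected = target - model.predict(others)         # subtract the estimated systematics

print(np.corrcoef(corrected, signal)[0, 1])        # noticeably higher than corr(target, signal)
```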
Shared patterns in measurement noise across stars light-years apart can be removed in order to reduce the instrument's influence on the measurement [219], which is critical especially in the context of partial technical failures as experienced in the Kepler exoplanet search mission. The application of [219] led to the discovery of 36 planet candidates [70], of which 21 were subsequently validated as bona fide exoplanets [172]. Four years later, astronomers found traces of water in the atmosphere of the exoplanet K2-18b - the first such discovery for an exoplanet in the habitable zone, i.e., allowing for liquid water [25, 254]. This planet turned out to be one that had first been detected in [70, exoplanet candidate EPIC 201912552].

State-of-the-art AI is relatively narrow, i.e., trained to perform specific tasks, as opposed to the broad, versatile intelligence allowing humans to adapt to a wide range of environments and develop a rich set of skills. The human ability to discover robust, invariant high-level concepts and abstractions, and to identify causal relationships from observations, appears to be one of the key factors allowing for successful generalization from prior experiences to new, often quite different, "out-of-distribution" settings. Multi-task learning refers to building a system that can solve multiple tasks across different environments [40, 209]. These tasks usually share some common traits. By learning similarities across tasks, a system could utilize knowledge acquired from previous tasks more efficiently when encountering a new task. One possibility for learning such similarities across tasks is to learn a shared underlying data-generating process as a causal generative model whose components satisfy the SMS hypothesis [220]. In certain cases, causal models adapt faster to sparse interventions in distribution [131, 194].

At the same time, we have clearly come a long way already without explicitly treating the multi-task problem as a causal one. Fuelled by abundant data and compute, AI has made remarkable advances in a wide range of applications, from image processing and natural language processing [35] to beating human world champions in games such as chess, poker, and Go [223], improving medical diagnoses [166], and generating music [56]. A critical question thus arises: "Why can't we just train a huge model that learns an environment's dynamics (e.g., in an RL setting), including all possible interventions? After all, distributed representations can generalize to unseen examples, and if we train over a large number of interventions, we may expect that a big neural network will generalize across them." To address this, we make several points. To begin with, if the data is not sufficiently diverse (which is an untestable assumption a priori), the worst-case error under unseen shifts may still be arbitrarily high (see Section VII-C). While in the short term we can often beat "out-of-distribution" benchmarks by training bigger models on bigger datasets, causality offers an important complement. The generalization capabilities of a model are tied to its assumptions (e.g., how the model is structured and how it was trained). The causal approach makes these assumptions more explicit and aligned with our understanding of physics and human cognition, for instance by relying on the Independent Causal Mechanisms principle. When these assumptions are valid, a learner that does not use them should fare worse than one that does.
Further, if we had a model that was successful under all interventions in a certain environment, we might want to use it in different environments that share similar, albeit not necessarily identical, dynamics. The causal approach, and in particular the ICM principle, points to the need to decompose knowledge about the world into independent and recomposable pieces (recomposable depending on the interventions or changes in environment), which suggests more work on modular ML architectures and other ways to enforce the ICM principle in future ML approaches. At its core, i.i.d. pattern recognition is but a mathematical abstraction, and causality may be essential to most forms of animate learning. Until now, machine learning has neglected a full integration of causality, and this paper argues that it would indeed benefit from integrating causal concepts. We argue that combining the strengths of both fields, i.e., current deep learning methods as well as tools and ideas from causality, may be a necessary step on the path towards versatile AI systems.

In this work, we discussed different levels of models, including causal and statistical ones. We argued that this spectrum builds upon a range of assumptions both in terms of modeling and data collection. In an effort to bring together the causality and machine learning research programs, we first presented a discussion of the fundamentals of causal inference. Second, we discussed how the independent mechanism assumptions and related notions such as invariance offer a powerful bias for causal learning. Third, we discussed how causal relations might be learned from observational and interventional data when the causal variables are observed. Fourth, we discussed the open problem of causal representation learning, including its relation to the recent interest in the concept of disentangled representations in deep learning. Finally, we discussed how some open research questions in the machine learning community may be better understood and tackled within the causal framework, including semi-supervised learning, domain generalization, and adversarial robustness. Based on this discussion, we list some critical areas for future research:

a) Learning Non-Linear Causal Relations at Scale: Not all real-world data is unstructured, and the effect of interventions can often be observed, for example, by stratifying the data collection across multiple environments. The approximation abilities of modern machine learning methods may prove useful to model non-linear causal relations among large numbers of variables. For practical applications, classical tools are limited not only by the linearity assumptions often made but also by their scalability. The paradigms of meta- and multi-task learning are close to the assumptions and desiderata of causal modeling, and future work should consider (1) understanding under which conditions non-linear causal relations can be learned, (2) which training frameworks allow to best exploit the scalability of machine learning approaches, and (3) providing compelling evidence of the advantages over (non-causal) statistical representations in terms of generalization, re-purposing, and transfer of causal modules on real-world tasks.

b) Learning Causal Variables: "Disentangled" representations learned by state-of-the-art neural network methods are still distributed in the sense that they are represented in a vector format with an arbitrary ordering of the dimensions.
This fixed format implies that the representation size cannot be dynamically changed; for example, we cannot change the number of objects in a scene. Further, structured and modular representations should also arise when a network is trained for (sets of) specific tasks, not only for autoencoding. Different high-level variables may be extracted depending on the task and affordances at hand. Understanding under which conditions causal variables can be recovered could provide insights into which interventions we are robust to in predictive tasks.

c) Understanding the Biases of Existing Deep Learning Approaches: Scaling to massive data sets, relying on data augmentation, and self-supervision have all been successfully explored to improve the robustness of the predictions of deep learning models. It is nontrivial to disentangle the benefits of the individual components, and it is often unclear which "trick" should be used when dealing with a new task, even if we have an intuition about useful invariances. The notion of strong generalization over a specific set of interventions may be used to probe existing methods, training schemes, and datasets in order to build a taxonomy of inductive biases. In particular, it is desirable to understand how design choices in pre-training (e.g., which datasets/tasks) positively impact both transfer and robustness downstream in a causal sense.

d) Learning Causally Correct Models of the World and the Agent: In many real-world reinforcement learning (RL) settings, abstract state representations are not available. Hence, the ability to derive abstract causal variables from high-dimensional, low-level pixel representations and then recover causal graphs is important for causal induction in real-world reinforcement learning settings. Moreover, building a causal description for both a model of the agent and the environment (world models) should be essential for robust and versatile model-based reinforcement learning.

REFERENCES

Causalworld: A robotic manipulation benchmark for causal structure and transfer learning Solving rubik's cube with a robot hand Limits of estimating heterogeneous treatment effects: Guidelines for practical algorithm design Autonomy Invariant risk minimization Deep-treat: Learning optimal personalized treatments from observational data using neural networks Why do deep convolutional networks generalize so poorly to small image transformations Systematic generalization: what is required and can it be learned? Document image defect models Structured agents for physical construction Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models Transportability from multiple environments with limited experiments: Completeness results Bandits with unobserved confounders: A causal approach Interaction networks for learning about objects, relations and physics Simulation as an engine of physical scene understanding Relational inductive biases, deep learning, and graph networks The arrow of time in multivariate time series A human-centered evaluation of a deep learning system deployed in clinics for the detection of diabetic retinopathy Recognition in terra incognita Impossibility theorems for domain adaptation Doina Precup, and Yoshua Bengio.
Independently controllable features Learning a synaptic learning rule Representation learning: A review and new perspectives A meta-transfer objective for learning to disentangle causal mechanisms Group invariance principles for causal generative models A theory of independent mechanisms for extrapolation in generative models Counterfactuals uncover the modular structure of deep generative models Time series deconfounder: Estimating treatment effects over time in the presence of hidden confounders Combining labeled and unlabeled data with co-training Error asymmetry in causal and anticausal regression Learning first-order symbolic representations for planning from the structure of the state space Counterfactual reasoning and learning systems: The example of computational advertising Language models are few-shot learners Causal inference by compression and Nicolas Heess. Woulda, coulda, shoulda: Counterfactually-guided policy search Improving the accuracy and speed of support vector learning machines Monet: Unsupervised scene decomposition and representation Multitask learning Multi-level cause-effect systems Fast conditional independence test for vector variables with large sample sizes A compositional object-based approach to learning physical dynamics Semi-Supervised Learning Generative pretraining from pixels A simple framework for contrastive learning of visual representations Recurrent environment simulators Autoaugment: Learning augmentation strategies from data Inferring deterministic causal relations Causal reasoning from meta-reinforcement learning Conditional independence in statistical theory How We Learn: Why Brains Learn Better Than Any Machine... for Now. Penguin Imagenet: A large-scale hierarchical image database Bert: Pre-training of deep bidirectional transformers for language understanding A Probabilistic Theory of Pattern Recognition Jukebox: A generative model for music On the transfer of disentangled representations in realistic settings An objectoriented representation for efficient reinforcement learning On robustness and transferability of convolutional neural networks A permutation-based kernel conditional independence test A framework for the quantitative evaluation of disentangled representations Exact Bayesian structure learning from uncertain interventions Exploring the landscape of spatial robustness The functional theory of counterfactual thinking. 
Personality and social psychology review A guide to deep learning in healthcare Strong universal consistency of neural network classifiers Modelagnostic meta-learning for fast adaptation of deep networks Counterfactual multiagent policy gradients Learning invariance from transformation sequences A systematic search for transiting planets in the K2 data Autonomy of economic relations Offpolicy deep reinforcement learning without exploration Kernel measures of conditional dependence Logical and algorithmic properties of independence and their application to Bayesian networks Imagenettrained cnns are biased towards texture; increasing shape bias improves accuracy and robustness On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset Domain adaptation with conditional transferable components Causal discovery from temporally aggregated time series Explaining and harnessing adversarial examples A theory of causal learning in children: causal maps and Bayes nets Evaluating reinforcement learning algorithms in observational health settings Causal generative neural networks Object files and schemata: Factorizing declarative and procedural knowledge in dynamical systems Recurrent independent mechanisms Speech recognition with deep recurrent neural networks Multi-object representation learning with iterative variational inference On the binding problem in artificial neural networks Shaping belief states with generative environment models for rl The incomplete rosetta stone problem: Identifiability results for multi-view nonlinear ica Measuring statistical dependence with Hilbert-Schmidt norms Kernel methods for measuring independence Bootstrap your own latent: A new approach to self-supervised learning Neuroanimator: Fast neural network emulation and control of physics-based models Using videos to evaluate image model robustness Towards deep neural network architectures robust to adversarial examples A survey of learning causality with data: Problems and methods Causality: Objectives and assessment World models The probability approach in econometrics Hidden markov nonlinear ica: Unsupervised learning from nonstationary time series Deep residual learning for image recognition Momentum contrast for unsupervised visual representation learning Learning to predict the cosmological structure formation Conditional variance penalties and domain shift robustness Invariant causal prediction for nonlinear models Benchmarking neural network robustness to common corruptions and perturbations The Secret of our Success A targeted real-time early warning score (trewscore) for septic shock Shakir Mohamed, and Alexander Lerchner. 
beta-vae: Learning basic visual concepts with a constrained variational framework Learning representations by maximizing mutual information across views Causality in economics and econometrics Universal language model fine-tuning for text classification Nonlinear causal discovery with additive noise models Behind distribution shift: Mining driving forces of changes and causal arrows Causal discovery from heterogeneous/nonstationary data Nonlinear independent component analysis: Existence and uniqueness results Nonlinear ica of temporally dependent stationary sources Causal inference in statistics, social, and biomedical sciences Causal regularization Causal inference using the algorithmic Markov condition Semi-supervised interpolation in an anticausal learning scenario Detecting non-causal artifacts in multivariate linear regression models Identifying confounders using additive noise models Telling cause from effect based on high-dimensional observations Informationgeometric approach to inferring causal directions Algorithmic independence of initial condition and dynamical law in thermodynamics and causal inference Reinforcement learning: A survey How image degradations affect deep cnn-based face recognition Algorithmic recourse under imperfect causal knowledge: a probabilistic approach Chris Pal, and Yoshua Bengio. Learning neural causal models from unknown interventions. arXiv preprint 1910.01075v2 Optimal decision making under strategic behavior Avoiding discrimination through causal reasoning Generalization in anti-causal learning Disentangling by factorising Neural relational inference for interacting systems Big transfer (bit): General visual representation learning Sequential attend, infer, repeat: Generative modelling of moving objects Consistency of causal inference under the additive noise model Imagenet classification with deep convolutional neural networks Unsupervised learning of object keypoints for perception and control Counterfactual fairness Data-driven fluid simulations using regression forests Building machines that learn and think like people Missed opportunities: Psychological ramifications of counterfactual thought in midlife women Batch reinforcement learning Graphical Models Deep learning Structural autoencoders improve representations for generation and transfer Offline reinforcement learning: Tutorial, review, and perspectives on open problems Causation. 
The journal of philosophy Domain generalization via conditional invariant representation Deep domain generalization via conditional invariant adversarial networks Fast autoaugment Space: Unsupervised object-oriented scene representation via spatial attention and decomposition Detecting and correcting for label shift with black box predictors On the fairness of disentangled representations Challenging common assumptions in the unsupervised learning of disentangled representations Weaklysupervised disentanglement without compromises Object-centric learning with slot attention Towards a learning theory of cause-effect inference Discovering causal signals in images Die Rückseite des Spiegels Deconfounding reinforcement learning in observational settings Sample-efficient reinforcement learning via counterfactual-based data augmentation An overview of deep learning in medical imaging focusing on MRI Domain adaptation by using causal inference to predict invariant conditional distributions Storks deliver babies (p= 0.008) Causality from a distributional robustness point of view Benchmarking robustness in object detection: Autonomous driving when winter is coming Human-level control through deep reinforcement learning Stellar and planetary properties of K2 campaign 1 candidates and validation of 17 planets, including a planet receiving earth-like insolation From ordinary differential equations to structural causal models: the deterministic case Regression by dependence minimization and its application to causal inference On causal discovery with cyclic additive noise models Distinguishing cause from effect using observational data: methods and benchmarks Flexible neural representation for physics prediction Action-conditional video prediction using deep networks in atari games Representation learning with contrastive predictive coding Learning independent causal mechanisms Learning independent causal mechanisms Learning explanations that are hard to vary Causality: Models, Reasoning, and Inference Giving computers free will External validity: From docalculus to transportability across populations Identifiability of causal graphs using functional models Causal discovery with continuous additive noise models Elements of Causal Inference -Foundations and Learning Algorithms Causal inference by using invariant prediction: identification and confidence intervals Causal models for dynamical systems Kernelbased tests for joint independence Learning stable and predictive structures in kinetic systems Invariant causal prediction for sequential data An analysis of the adaptation speed of causal models Failing loudly: An empirical study of methods for detecting dataset shift Tim Salimans, and Ilya Sutskever. 
Improving language understanding by generative pre-training Spatially structured recurrent modules The Direction of Time Reflective learning: The use of "if only Improving the accuracy of medical diagnosis with causal machine learning Learning deep disentangled embeddings with the f-statistic loss The functional basis of counterfactual thinking Invariant models for causal transfer learning Variational autoencoders pursue PCA directions (by accident) Effects of degradations on deep neural network architectures Causal consistency of structural equation models From deterministic ODEs to dynamic structural causal models An overview of multi-task learning in deep neural networks Artificial intelligence: a modern approach Learning to simulate complex physics with graph networks A simple neural network module for relational reasoning Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-... hook. PhD thesis Curious model-building control systems Artificial intelligence: Learning to see and act Causal learning Learning with Kernels On causal and anticausal learning Modeling confounding by half-sibling regression Causal and statistical learning Causality for machine learning Towards the first adversarially robust neural network model on MNIST Mastering atari, go, chess and shogi by planning with a learned model Reliable decision support using counterfactual models The hardness of conditional independence testing and the generalised covariance measure Telling cause from effect in deterministic linear dynamical systems Do image classifiers generalize across time? Not using the car to see the sidewalk-quantifying and controlling the effects of context in classification and segmentation A linear non-Gaussian acyclic model for causal discovery Weakly supervised disentanglement with guarantees Mastering the game of go with deep neural networks and tree search The predictron: End-to-end learning and planning Tangent prop -a formalism for specifying selected invariances in an adaptive network Best practices for convolutional neural networks applied to visual document analysis Causal ordering and identifiability Principles of object perception Causation, Prediction, and Search Grundlagen der Entscheidungstheorie Support Vector Machines Causal Markov condition for submodular information measures Counterfactual multi-agent reinforcement learning with graph convolution communication Counterfactual normalization: Proactively addressing dataset shift and improving reliability using causal mechanisms Preventing failures due to dataset shift: Learning predictive models that transport Revisiting unreasonable effectiveness of data in deep learning era Stochastic prediction of multi-agent interactions from partial observations Causal inference by choosing graphs with most plausible Markov kernels Robustly disentangled causal mechanisms: Validating deep representations for interventional robustness Introduction to reinforcement learning Intriguing properties of neural networks Pure reasoning in 12-month-old infants as probabilistic inference Causal discovery from changes Is independence all you need? 
on the generalization of representations learned from correlated data Self-supervised learning of video-induced visual invariances Water vapour in the atmosphere of the habitable-zone eight-earth-mass planet K2-18b Relational neural expectation maximization: Unsupervised discovery of objects and their interactions Are disentangled representations helpful for abstract visual reasoning? Statistical Learning Theory Grandmaster level in StarCraft II using multi-agent reinforcement learning Semisupervised learning, causality and the conditional cluster assumption On the fairness of causal algorithmic recourse Simpson's paradox in Covid-19 case fatality rates: a mediation analysis of age-related causal effects Towards causal generative scene models via competition of experts Learning robust representations by projecting superficial statistics out Visual interaction networks: Learning a physics simulator from video Cobra: Data-efficient modelbased rl through unsupervised object discovery and curiositydriven exploration Causal and anti-causal learning in pattern recognition for neuroimaging Pragmatism and Variable Transformations in Causal Modelling Latent space physics: Towards learning the temporal evolution of fluid flow Slow feature analysis: unsupervised learning of invariances Model-based reinforcement learning with parametrized physical models and optimism-driven exploration CLEVRER: Collision events for video representation and reasoning Ganite: Estimation of individualized treatment effects using generative adversarial nets Deep reinforcement learning with relational inductive biases Near-optimal reinforcement learning in dynamic treatment regimes Fairness in decisionmaking -the causal explanation formula On the identifiability of the postnonlinear causal model Kernelbased conditional independence test and application in causal discovery Domain adaptation under target and conditional shift Multi-source domain adaptation: A causal view Causal discovery from nonstationary/heterogeneous data: Skeleton estimation and orientation determination Making convolutional networks shift-invariant again Many thanks to the past and present members of the Tübingen causality team, without whose work and insights this article would not exist, in particular to Dominik Janzing, Chaochao Lu and Julius von Kügelgen who gave helpful comments on [221] . The text has also benefitted from discussions with Elias Bareinboim, Christoph Bohle, Leon Bottou, Isabelle Guyon, Judea Pearl, and Vladimir Vapnik. Thanks to Wouter van Amsterdam for pointing out typos in the first version. We also thank Thomas Kipf, Klaus Greff, and Alexander d'Amour for the useful discussions. Finally, we thank the thorough anonymous reviewers for highly valuable feedback and suggestions.