title: Open Problems in Cooperative AI
authors: Dafoe, Allan; Hughes, Edward; Bachrach, Yoram; Collins, Tantum; McKee, Kevin R.; Leibo, Joel Z.; Larson, Kate; Graepel, Thore
date: 2020-12-15

Problems of cooperation--in which agents seek ways to jointly improve their welfare--are ubiquitous and important. They can be found at scales ranging from our daily routines--such as driving on highways, scheduling meetings, and working collaboratively--to our global challenges--such as peace, commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate. Since machines powered by artificial intelligence are playing an ever greater role in our lives, it will be important to equip them with the capabilities necessary to cooperate and to foster cooperation. We see an opportunity for the field of artificial intelligence to explicitly focus effort on this class of problems, which we term Cooperative AI. The objective of this research would be to study the many aspects of the problems of cooperation and to innovate in AI to contribute to solving these problems. Central goals include building machine agents with the capabilities needed for cooperation, building tools to foster cooperation in populations of (machine and/or human) agents, and otherwise conducting AI research for insight relevant to problems of cooperation. This research integrates ongoing work on multi-agent systems, game theory and social choice, human-machine interaction and alignment, natural-language processing, and the construction of social tools and platforms. However, Cooperative AI is not the union of these existing areas, but rather an independent bet about the productivity of specific kinds of conversations that involve these and other areas. We see opportunity to more explicitly focus on the problem of cooperation, to construct unified theory and vocabulary, and to build bridges with adjacent communities working on cooperation, including in the natural, social, and behavioural sciences.

Problems of cooperation-in which agents have opportunities to improve their joint welfare but are not easily able to do so-are ubiquitous and important. They can be found at all scales ranging from our daily routines-such as driving on highways, scheduling meetings, and working collaboratively-to our global challenges-such as peace, commerce, and pandemic preparedness. Human civilization and the success of the human species depend on our ability to cooperate.

Advances in artificial intelligence create increasing opportunities for AI research to promote human cooperation. AI research enables new tools for facilitating cooperation, such as language translation, human-computer interfaces, social and political platforms, reputation systems, algorithms for group decision-making, and other deployed social mechanisms; it will be valuable to have explicit attention to what tools are needed, and what pitfalls should be avoided, to best promote cooperation. AI agents will play an increasingly important role in our lives, such as in self-driving vehicles, customer assistants, and personal assistants; it is important to equip AI agents with the requisite competencies to cooperate with others (humans and machines).
Beyond the creation of machine tools and agents, the rapid growth of AI research presents other opportunities for advancing cooperation, such as from research insights into social choice theory or the modeling of social systems. The field of artificial intelligence has an opportunity to increase its attention to this class of problems, which we refer to collectively as problems in Cooperative AI. The goal would be to study problems of cooperation through the lens of artificial intelligence and to innovate in artificial intelligence to help solve these problems. Whereas much AI research to date has focused on improving the individual intelligence of agents and algorithms, the time is right to also focus on improving social intelligence: the ability of groups to effectively cooperate to solve the problems they face.

AI research relevant to cooperation has been taking place in many different areas, including in multi-agent systems, game theory and social choice, human-machine interaction and alignment, natural-language processing, and the construction of social tools and platforms. Our recommendation is not merely to construct an umbrella term for these areas, but rather to encourage research conversations that span these areas and are focused on cooperation. We see opportunity to construct more unified theory and vocabulary related to problems of cooperation. Having done so, we think AI research will be in a better position to learn from and contribute to the broader research program on cooperation spanning the natural sciences, social sciences, and behavioural sciences.

Our overview comes from the perspective of authors who are especially impressed by and immersed in the achievements of deep learning [Sej20] and reinforcement learning [SB18]. From that perspective, it will be important to develop training environments, tasks, and domains that can provide suitable feedback for learning and in which cooperative capabilities are crucial to success, non-trivial, learnable, and measurable. Much research in multi-agent systems and human-machine interaction will focus on cooperation problems in contexts of pure common interest. This will need to be complemented by research in mixed-motives contexts, where problems of trust, deception, and commitment arise. Machine agents will often act on behalf of particular humans and will impact other humans; as a consequence, this research will need to consider how machines can adequately understand human preferences, and how best to integrate human norms and ethics into cooperative arrangements. Researchers building social tools and platforms will have other perspectives on how best to make progress on problems of cooperation, including being especially informed by real-world complexities. Areas such as trusted hardware design and cryptography may be relevant for addressing commitment problems. Other aspects of the problem will benefit from expertise from other sciences, such as political science, law, economics, sociology, psychology, and neuroscience. We anticipate much value in explicitly connecting AI research to the broader scientific enterprise studying the problem of cooperation and to the broader effort to solve societal cooperation problems.

We recommend that "Cooperative AI" be given a technically precise, problem-defined scope; otherwise, there is a risk that it acquires an amorphous cloud of meaning, incorporating adjacent (clusters of) concepts such as aligned AI, trustworthy AI, and beneficial AI.
Cooperative AI, as scoped here, refers to AI research trying to help individuals, humans and machines, to find ways to improve their joint welfare. For any given situation and set of agents, this problem is relatively well defined and unambiguous. The Scope section elaborates on the relationship to adjacent areas. Conversations on Cooperative AI can be organized in part in terms of the dimensions of cooperative opportunities. These include the strategic context, the extent of common versus conflicting interest, the kinds of entities who are cooperating, and whether the researchers are focusing on the cooperative competence of individuals or taking the perspective of a social planner. Conversations can also be focused on key capabilities necessary for cooperation, including: 1. Understanding of other agents, their beliefs, incentives, and capabilities. 2. Communication between agents, including building a shared language and overcoming mistrust and deception. 3. Constructing cooperative commitments, so as to overcome incentives to renege on a cooperative arrangement. 4. Institutions, which can provide social structure to promote cooperation, be they decentralized and informal, such as norms, or centralized and formal, such as legal systems. Just as any area of research can have downsides, so is it prudent to investigate the potential downsides of research on Cooperative AI. Cooperative competence can be used to exclude others, some cooperative capabilities are closely related to coercive capabilities, and learning cooperative competence can be hard to disentangle from coercion and competition. An important aspect of this research, then, will be investigating potential downsides and studying how best to anticipate and mitigate them. The remainder of the paper is structured as follows. We offer more motivation in the section Why Cooperative AI? We then discuss several important dimensions of Cooperative Opportunities. The bulk of our discussion is contained in the Cooperative Capabilities section, which we organize in terms of Understanding, Communication, Commitment, and Institutions. We then reflect on The Potential Downsides of Cooperative AI and how to mitigate them. Finally, we conclude. To ground our discussion, we introduce here two vignettes, one based on the near-term cooperation problems facing self-driving vehicles, the other about the global challenge of pandemic preparedness. Self-driving vehicles confront a broad portfolio of cooperation problems with respect to other drivers (human and AI). There are opportunities for joint welfare gains between drivers (or the principals in whose interests they are driving). In order to predict the behaviour of other drivers, an AI, and others on the road, would benefit from it understanding the goals, beliefs, and capabilities of those other drivers. The AI, and others, would benefit from it understanding the (local) conventions of driving, such as what side of the road to drive on. It, and others, benefit from it accurately modeling the beliefs of other agents, such as whether the fast-walking pedestrian looking at their phone is aware of the car. It, and others, benefit from it understanding the intentions of other agents, such as that of a driver attempting to urgently cross several lanes of fast-flowing traffic. An AI, and others, would benefit from improved communication skill and infrastructure. Will the AI understand the wave of a police officer, indicating that the officer wants the AI to go through an intersection? 
Can it express to other vehicles that the reason it is idling in a narrow parking lot is to wait for a car up ahead to pull out? Can it accurately, precisely, and urgently express a sudden observation of dangerous road debris to (AI) drivers behind it? Can it communicate sufficiently with another brand of driver AI to achieve the subtle coordination required to safely convoy in close proximity? Cooperative gains are available to those able to construct credible commitments. Car A, in busy traffic, may wish to cross several lanes; car B may be willing to make space for A to do so only if A commits to moving along and not staying in B's lane. Or, suppose a driver is waiting for another car to pull out of a parking space, blocking other cars looking for parking spots; in principle, the driver would be willing to allow those other cars to pass, but not if one of them is going to "defect" by taking the newly opened parking space. Can those other drivers credibly commit to not doing so? Populations of drivers could be made better off by new institutions, which AI research could help build. What is the optimal congestion pricing to maximize human welfare, and can this be assessed and processed through AI-enabled innovations? Are there joint policies for populations of drivers which are Pareto improving to the typical equilibria from uncoordinated behaviour? If so, could a navigation app achieve this while maintaining incentive compatibility to participate in the mechanism? Can the valuable information from the video feeds of the many smart vehicles be optimally distributed, with fair pricing, safeguards for privacy, and incentive compatibility? While our first example was meant to be grounded in soon-to-be-upon-us technical possibilities, our second is meant to illustrate the global stakes of making progress in solving cooperation problems. There are multiple problems of global cooperation with stakes in the trillions of dollars or millions of lives, including those of disarmament, avoiding nuclear war, climate change, global commerce, and pandemic preparedness. Given its timeliness, we focus on the last one. While these global problems will not immediately be the focal problems of Cooperative AI, they illustrate the magnitude of benefits that could ultimately come from substantial advances in global cooperative competence. The death toll from COVID-19 is in the millions, and the economic damage is estimated in the tens of trillions of dollars. Future pandemic diseases could cause similar, or greater, devastation. And yet, despite these high stakes, the world struggles to prepare adequately, in part due to cooperation problems. The gains are great from more investment in generalized vaccine development, disease surveillance and data sharing, harmonization of policy responses so as to avoid unnecessary breakdowns in supply chains, pooled reserves of supplies to support those who most need them, epidemiological research during the outbreak, and building coordinating institutions to help achieve these common interests. But achieving these gains is not trivial. They may require that decision makers understand each other sufficiently well that they can agree to a fair (enough) division of investment for a problem in which expected harm and ability to pay are unevenly distributed and hard to estimate. They may require decision makers to reliably communicate on sensitive issues like the existence of a disease outbreak or the state of one's medical system. 
They may require overcoming the commitment problem arising from the great pressure for countries to renege on certain agreements in a crisis, such as on sharing of supplies and not interfering in supply chains. Lastly, building global institutions poses difficult problems of institutional design, and the management of difficult political realities, conflicts of interest, demands for legitimacy, and requirements for technical competence. As these vignettes illustrate, problems of cooperation are ubiquitous, important, and diverse, but they also share fundamental commonalities. Problems of cooperation span scale: from inter-personal, to inter-organizational, to inter-state. They can involve two, several, or millions of agents; they exist for small stakes and global stakes; they arise amongst humans, machines, and organizations; they arise in domains with more or less well-defined interests, norms, and institutions. Cooperative competence requires distinct kinds of social intelligence and skills, which were critical for the success of humans. Without a cultural inheritance of tools and skills, and a community to collaborate with, a single human cannot achieve much. Rather, to create our modern technological and cultural wonders required a community of collaborating humans, ever growing in scale, who passed their knowledge down through the generations. Furthermore, this capacity evolved beyond a narrow kind of cooperative intelligence, capable of only solving a limited class of cooperation problems and brittle to changes, to an increasingly general cooperative intelligence capable of solving dynamic and complex problems, including interpersonal disputes, cross-border pollution, and global arms control. Further improvements in humanity's general cooperative intelligence may be critical for solving our increasingly complex global problems. As AI systems are deployed throughout the economy, it becomes important that they be adept at participating cooperatively in our shared global civilization-which is composed of humans, organizations, and, increasingly, machine agents. The problem of cooperation is a fundamental scientific problem for many fields, from biochemistry, to evolutionary biology, to the social sciences. Biologists regard the history of life as the progressive formation of ever larger cooperative structures [SS97] , from coalitions of cooperating genes [SS93] , to multi-cellularity requiring restraint on single-cell egoism [Bus87, FN04] , to the emergence of complex societies featuring division of labour in the lineages of ants, termites, and primates [Rob92, MGA + 05]. In 2005, Science Magazine judged the problem of "How Did Cooperative Behavior Evolve?" to be one of the top 25 questions "facing science over the next 25 years." The problem of cooperation is at least as important to social scientists. Economists are often interested in when people achieve, or fail to achieve, welfare-enhancing arrangements. Political scientists study how people make collective decisions: the very mechanisms by which groups of people cooperate, or fail to do so. Within international relations and comparative politics, some of the most central and rich questions concern the causes of costly conflict, including civil war and interstate war. Prominent in sociology is the study of social order: how it emerges, persists, and evolves. 
Explicitly framing relevant research in AI as addressing this fundamental problem will help with the consilience of science, allowing relevant insights to flow more readily across fields and helping to show these other fields the increasing relevance of AI research to understanding cooperation. A more explicit connection across these fields will help researchers adopt a common vocabulary and learn from their respective advances.

When considering how to build maximally beneficial AI, many researchers emphasize the importance of certain directed strands of AI research, such as on safety [AOS + 16, Rus19], fairness [HPS16], interpretability [OSJ + 18], and robustness [RDT15, QMG + 19]. Each of these points AI research towards achieving AI with a certain set of attributes, which are thought to be on net socially beneficial, and they plausibly make for a coherent research program. We recommend that cooperation and Cooperative AI be added to this list.

These and other areas of AI are defined by their goals: by what they aspire to achieve. Even the field of "artificial intelligence" itself is defined through its aspiration to build a kind of machine intelligence that does not yet exist. Aspirational research programs have the advantage of prominently communicating what the point of the research is and reminding researchers of some of its long-term goals. Consider, by contrast, labeling a cluster of this kind of research as being on "multi-agent AI", "game theory", or "strategic interaction": these don't communicate the social goal of the research and the pro-social bet motivating the field. "Cooperative AI" more clearly points to the social purpose served by pursuing these connected clusters of research.

Cooperative AI will draw together research spanning the field of artificial intelligence, as well as many other disciplines in the natural and social sciences. Accordingly, prior work relevant to Cooperative AI is vast and will be most meaningfully summarized for particular sub-clusters of work. Reflecting the background of the majority of the authors of this paper, we briefly review prior work here with a heavy bias towards multi-agent research.

Looking back decades, multi-agent systems research (and before that, the distributed AI research community) has long been interested in the interactions of intelligent agents and the collective behaviour that arises [Wei99, Fer99]. Kraus [Kra97] argued for the importance of studying mixed-motive cooperation problems and not just pure common-interest problems. She advocated an interdisciplinary approach and organized multi-agent cooperative work along several dimensions, which will recur in our paper. These are (in our words): the degree of common interest; the opportunity for building institutions; the number of agents; the kinds of agents (machines, humans); and the costs and capabilities of both communication and computation. Panait and Luke [PL05] review work on machine learning in multi-agent cooperation, though mostly limited to settings of pure common interest. They discuss challenges from different learning dynamics and illustrate them with simple games and real-world problems. They emphasize the problem of "teammate modeling", which maps to the concepts we discuss in Understanding and Communication.
In the past two decades, research in multi-agent systems has exploded to cover diverse areas including institutions and norms [Rob04, BVDTV06, AJB06]; human-agent interaction [Lew98, GBC07, BFJ11, SBA + 03]; and knowledge representation, reasoning, and planning [Geo88, vdHW02, vHLP08]. Despite the frameworks and methodologies that have been developed over the past decades and the great potential for multi-agent technologies, progress has been slow in some areas due to the inherent complexity of the problems [JSW98, Wei99, Sin94, SLB09, Woo09]. In particular, it has long been noted that open, heterogeneous, and scalable multi-agent systems require learning agents, including agents who learn to adapt and cooperate with others [LMP03]. In light of the recent progress in our ability to analyze and learn from data in various forms-including image processing, recognition, and generation-there has been renewed interest in learning approaches to multi-agent systems [BB20].

Two-player zero-sum games were a productive domain for early multi-agent research as they are especially tractable: the minimax solution coincides with the Nash equilibrium and can be computed in polynomial time through a linear program, their solutions are interchangeable, and they have worst-case guarantees [vNM07]. However, two-player zero-sum games provide no opportunity for the agents to learn how to cooperate, and it is undesirable for research to be overly focused on domains that are inherently rivalrous. The field of multi-agent reinforcement learning seems to be naturally reorienting towards games of pure common interest, such as Hanabi [BFC + 20] and Overcooked [CSH + 19], as well as team games [JCD + 19]. There is also growth in the study of mixed-motives settings [Bak20], such as alliance politics [HAE + 20, AET + 20, PLB + 19] and tax policy [ZTS + 20], which will be critical since some strategic dynamics only arise in these settings, such as issues of trust and commitment problems.

Finally, all else equal, earlier is better in cooperation research, since much of actual cooperation depends on historically established vocabularies, norms, precedents, protocols, and institutions: as more and more AI systems are being deployed throughout society, we do not want to unintentionally lock in sub-optimal equilibria. We want our deployed AIs to be forward compatible with future advances in cooperation.

In this section, we will discuss some of the diversity in cooperative opportunities, which are situations in which agents may be able to achieve joint gains or avoid joint losses. Our discussion will consider four major dimensions that structure the character of cooperative opportunities and the associated research. (1) The degree of common versus conflicting interest between agents. (2) The kinds of agents attempting to cooperate, such as machines, humans, or organizations. (3) The perspective taken on the cooperation problem: either that of an individual trying to cooperate with others or that of a social planner facilitating the cooperative interactions of a population. (4) The scope of Cooperative AI research and, specifically, how it should relate to adjacent fields.

(See also the November 2020 CCC Quadrennial Paper on "Artificial Intelligence and Cooperation" [BDVG + 19]. A recent research agenda from the AI safety community in part emphasizes problems of cooperation [CK20].)

Figure 1 | A: [...] principals. B: AI will enable new tools for promoting cooperation, such as language translation.
C: Especially capable and autonomous AI may be better conceptualized as an agent, such as an email assistant capable of replying to many emails as well as a human assistant. Human principals need their AI agents to be safe and aligned. This relationship can be conceptualized as a cooperation game. The vertical dimension depicts the normative priority of the top agent, as in a principal-agent relationship. When the agent is aligned, it is a relationship of pure common interest. D: Combining these, the future will involve cooperative opportunities between human-AI teams. Advances in AI will enable the nexus of cooperation to move "down" to the AI-AI dyad (increasingly blue arrows), such as with coordination between Level V self-driving cars [S + 18]. E: AI research can take on the "planner perspective". Rather than focus on building Cooperative AI aids or agents for individuals, this perspective seeks to improve social infrastructure (e.g., social media) or improve policy to better cultivate cooperation within a population. F: The structure of interactions can of course be much more complicated, including involving organizations with complex internal structure and nested cooperative opportunities. (Thanks to Marta Garnelo for illustrations.)

Decades of social science research have found that the dynamics of multi-agent interaction are fundamentally shaped by the extent of alignment between agents' payoffs [KP95, Rap66, RG05, Sch80].

1. At one edge of the space are games of pure common interest, in which any increase in one agent's payoffs always corresponds with an increase in the payoffs of others.
2. In the broad middle are games with mixed motives, in which agent interests are, to varying extents, sometimes aligned and sometimes in conflict.
3. At the other edge are games of pure conflicting interest, in which an increase in one agent's payoff is always associated with a decrease in the payoff of others. (Note that with more than two players, it is mathematically impossible to have entirely negatively associated payoffs between all players; there must at least be many indifference relations. It is possible to construct a game with three or more players which would effectively reduce to a series of dyadic pure conflicting-interest games, but only so long as every player has no option to intervene in the "dyadic games" of the others; if there were such an option, then there would be some common interest.)

Figure 2 | A simple class of multi-agent situation is a game with two players, each of which can adopt one of two possible pure strategies [RG05]. By converting each player's payoffs to ranked preferences over outcomes-from the most-preferred to least-preferred outcome-we see that there are 144 distinct games. Even in this simple class of two-player games, there exists some common interest in the overwhelming majority of situations.

Opportunities for cooperation exist, at least in principle, in situations of pure common interest and mixed motives. It is only in situations of pure conflicting interest where cooperation is impossible. The ubiquity of cooperative opportunities can be seen by considering the small size of the space of pure conflicting-interest games. First, they are almost entirely confined to two-player games, since the introduction of a third player will typically offer at least one dyad an opportunity to cooperate, if only against the third player. Even considering only two-player games, most possible arrangements of payoffs will not be consistently inversely related. To formalize this, Figure 2 shows that in the taxonomy popularized by Robinson and Goforth [RG05], the vast majority of games are either pure common interest or mixed motive. Finally, even within the subset of games with purely conflicting interests, if the underlying utilities are not perfectly negatively correlated, then the introduction of a costly action that benefits another player (transfers utility) can introduce common interest. Thus, situations of purely conflicting interest are rare in the space of strategic games.

We believe such situations are also relatively rare in the real world. However, machine learning and reinforcement learning research has focused heavily on conflicting-interest cases-and particularly on two-player, zero-sum environments. Many of the most renowned achievements of multi-agent research, for instance, have focused on pure-conflict games such as backgammon [Tes94] and chess [CHJH02]. Evidence of this weighting towards games of pure conflicting interest extends beyond these prominent studies. To evaluate more systematically how different fields attend to games with or without cooperative opportunities, we analyzed citation patterns. We first used keywords (such as "chess", "social dilemma", and "coalitional") to produce a rough proxy for whether a multi-agent paper was studying a situation of "common interest", "mixed motives", or "conflicting interest". We then examined the proportion of citations from papers in economics, machine learning, reinforcement learning, and multi-agent research which were directed to these different categories of papers. We found that papers in machine learning and reinforcement learning were much more likely to cite work on "conflicting-interest" situations (around 10-15% of their outgoing multi-agent citations) than were economics papers or other multi-agent papers (only 2% and 4% of their outgoing multi-agent citations, respectively). This weighting towards conflicting-interest games suggests that there are underexplored opportunities to study mixed-motive and common-interest environments in reinforcement learning.

Work on settings of purely conflicting interests can, of course, provide useful insights also relevant to work on Cooperative AI. For example, research on poker-playing AI systems led to the development of counterfactual regret minimization [ZJBP07], which has subsequently been leveraged to improve algorithmic performance in mixed-motive settings [SKWPT19]. As mentioned above, two-player zero-sum games were a productive domain for early multi-agent research as they are especially tractable: the minimax solution coincides with the Nash equilibrium and can be computed in polynomial time through a linear program, their solutions are interchangeable, and they have worst-case guarantees [vNM07]. This tractability may explain why such games have received significant research attention, despite being relatively rare in the real world and in the space of possible games. Going forward, we think the study of pure common-interest and mixed-motives games-games permitting cooperation-will be particularly fruitful.

Cooperative opportunities are critically affected by the kind of agents and entities involved in interactions.
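As a brief aside before turning to the kinds of agents involved: the tractability of two-player zero-sum games can be made concrete with a few lines of code. The sketch below is illustrative only (it is not from the paper) and assumes Python with NumPy and SciPy available; the matching-pennies payoff matrix is an arbitrary test case.

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_maximin(payoffs):
    """Maximin mixed strategy and game value for the row player of a
    two-player zero-sum game, where payoffs[i, j] is Row's payoff when
    Row plays i and Column plays j."""
    A = np.asarray(payoffs, dtype=float)
    m, n = A.shape
    # Decision variables: x_1..x_m (Row's mixed strategy) and v (game value).
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # maximize v == minimize -v
    # For every Column action j: v - sum_i x_i * A[i, j] <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Probabilities must sum to one.
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0
    b_eq = np.ones(1)
    bounds = [(0, None)] * m + [(None, None)]     # x_i >= 0, v unrestricted
    result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return result.x[:m], result.x[-1]

# Matching pennies: the maximin strategy mixes 50/50 and the game value is 0.
strategy, value = zero_sum_maximin([[1.0, -1.0], [-1.0, 1.0]])
print(strategy, value)
```

No comparable single linear program recovers equilibria of general-sum or mixed-motive games, which is part of why those settings are harder and, as argued above, comparatively underexplored.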
Separate research communities have coalesced around the topics of cooperation within human groups (e.g., [RN13, Tom09] ), cooperation within communities of artificial agents (e.g., [LZL + 17, OSFM07]), and cooperation between human and artificial partners (e.g., [COIO + 18, Nor94]). Problems of cooperation between states (e.g., [FL15] ), firms (e.g., [Par93] ), and other entities are also prominent themes in the social and policy sciences. Within each of these categories-humans, machines, organizations-there is substantial variation in preferences and cognition. Cooperation amongst children, for example, differs from cooperation amongst adults [War18] ; cooperation between people in the Eastern United States diverges from cooperation between individuals in Peruvian communities [HBB + 04]. Though a deep body of research has studied the mechanisms underlying cooperation within human groups, settings with non-human agents are likely to possess markedly different dynamics. Machinemachine cooperation, for instance, may involve more exotic settings, higher bandwidth communication, greater strategic responsiveness, closer adherence to principles of rational decision-making, hard-to-interpret emergent language [KMLB17] , sensitivity to narrow strategy profiles [CSH + 19], and greater strategic complexity. Machine learning algorithms have a well-documented tendency to find unexpected or undesired solutions to problems they are posed, a critical specification challenge for AI researchers [KUM + ]. Cooperative opportunities involving solely artificial agents may thus generate solutions, both good and bad, that diverge from those observed in human communities. The landscape further shifts when we consider hybrid groups containing both humans and machines. Machines developed to interact with human partners are more likely to succeed if their design incorporates processes and factors related to human behaviour-including cognitive heuristics [TK74] , social cognitive mechanisms [RS11] , cultural context [TKNF00] , legible motion [DLS13] , and personal preferences and expectations [Nor94] . For instance, self-driving cars that performed well when interacting with other autonomous vehicles have struggled to adapt to the assertiveness of human drivers, even when this manifests in subtle ways such as inching forward at intersections [SPAM + 19, RD15] . A different set of challenges arises for AI research focused primarily on cooperation among humans (e.g., [ZTS + 20]). Initial studies of such contexts suggest that the natural dynamics of human groups can be substantially altered by the presence of an artificial agent [SC17, TSJ + 20]. These situations will likely require researchers to adapt a new set of hybrid approaches, drawing heavily from fields including social psychology, sociology, and behavioural economics. A third distinction relates to whose welfare one is most concerned with in a particular cooperation problem. The individual perspective seeks to achieve the goals of an individual in a cooperative setting, which usually involves improving the individual's cooperative capabilities (as covered in our sections on Understanding, Communication, and Commitment). The planner perspective instead seeks to achieve some notion of social welfare for an interacting population, which usually involves intervening on population-level dynamics through policies, institutions, and centralized mechanisms (as covered in our section on Institutions). 
4 The individual perspective tends to be concerned with machine agents, but it could also look at machine aids to humans. The planner perspective tends to look at populations of humans, but it could also look at populations involving machines. Which perspective one takes depends in part on the problem one is trying to solve and the opportunities available. Do we have an opportunity to advise or improve the cooperative capabilities of an individual? Do we have an opportunity to influence behaviour-shaping factors, like norms, policies, mediators, or institutions? To some extent, any cooperative situation benefits from being understood from both these perspectives. A competent social planner should understand the interests (and cooperative capabilities) of the individuals, at the least to know how best to intervene to maximally facilitate cooperation. Similarly, a maximally competent cooperative individual likely needs the ability to think about the population's cooperative opportunity as a whole to identify what group-level changes would best help bring the population (including the individual) to the Pareto frontier. 5 These perspectives are associated with differences in method. The individual perspective tends to involve an agent optimizing over its local environment and considering the strategic response of 4 While these different goals tend to lead to different focuses (on individual capabilities vs institutions), they need not. It is conceivable that the individual perspective leads to the problem of institution design or that the planner perspective leads to the problem of improving individual cooperative competence. 5 Illustrating the value of a cooperative individual being able to "simulate the social planner" and take the perspective of the group trying to cooperate, UN Secretary-General Dag Hammarskjöld famously argued "that every individual involved in international relations, particularly those working with the UN, should speak for the world, rather than from purely national interest." [GRW19] , emphasis ours. other agents. The planner perspective, especially for large populations, more often involves a study of equilibration and emergence, and thus more resembles a problem in statistical physics (as noted by [Kra97] ): what nudges and structures would steer these emergent equilibria in desirable directions? The field of Cooperative AI, as scoped here, involves AI research which can help contribute to solving problems of cooperation. This may seem overly expansive, as it includes many different kinds of cooperation (machine-machine, human-machine, human-human, and more complex constellations) and many disparate areas (multi-agent AI research, human-machine interaction and alignment, mechanism design, and myriad tools such as language translation and collaborative productivity software). However, research fields and programs should not be thought of in this way, as merely expressing a set relationship to each other. For example, it is not especially meaningful to say that biology is just applied chemistry, and chemistry is applied physics. Rather, research fields can be thought of as expressing a bet about where productive conversations lie. 
The bet of a Cooperative AI field is similar to the bet for organizing a Cooperative AI workshop: that there are productive conversations to be had here which are otherwise not happening for want of: (1) an overarching compass aligning the many disparate research threads into solving a large common problem; (2) a unified theory and vocabulary to facilitate the transfer of insights across threads; and (3) a more deliberate construction of conversations which span communities (including non-AI communities). The problem of cooperation has been worked on in biology, game theory, economics, political science, psychology, and other fields, and there has been much productive exchange across these disciplines. The bet behind Cooperative AI is that there is value in connecting the research in AI, and opportunities with AI, to this broader conversation in a theoretically explicit and sustained way. We will elaborate this point further. Cooperative AI emerges in part from multi-agent research, as reflected in our review of prior work. However, it is not equivalent to this field. It will emphasize different subproblems. For example, it will point away from zero-sum games and towards social dilemmas. It will also more consciously strive to develop theory which is compatible with adjacent sciences of cooperation, and to be in conversation with those sciences. Cooperative AI emerges in part from work on human-machine interaction and AI alignment, but it is not equivalent. For example, these areas typically involve a principal-agent relationship, in which the human (principal) has normative priority over the machine (agent). AI alignment researchers are concerned with the problem of aligning the machine agent so that its preferences are as the human intends; this is largely outside of Cooperative AI, which takes the fundamental preferences of agents as given. When alignment succeeds, the human-machine dyad then possesses pure common interest, which is an edge case in the space of cooperation problems. Absent sufficient success at alignment, these fields then invest in mechanisms of control, so that the human's preferences are otherwise dominant; such methods of control are also not the primary focus of the science of cooperation. Human-machine interaction and alignment can be interpreted as working on "vertical" coordination problems, where there is a clear principal; the heart of Cooperative AI concerns "horizontal" coordination problems, where there are multiple principals whose preferences should be taken into account. Accordingly, human-machine interaction and alignment emphasize control and alignment, and relatively underprioritize problems of bargaining, credible communication, trust, and commitment. Conversations about Cooperative AI ought to also include work on tools and infrastructure relevant to human cooperation, though this work is typically more narrowly targeted at a specific product need. Nevertheless, a component of the Cooperative AI bet is that work on those tools would be enhanced through greater connection to the broader science of cooperation, and the science of cooperation would be similarly enhanced by learning from the work on those tools. Table 1 | One-sided assurance game. (C,C) is the mutual best outcome and unique equilibrium. Given uncertainty about the other's payoffs, Row may still choose D. The above described some of the key dimensions of cooperative opportunities. 
We now discuss how specific strands of research can contribute by producing relevant capabilities for promoting cooperation. These are mostly cognitive skills of agents (per the individual perspective), but they also include properties of hardware and the capabilities of institutions. We organize the discussion of cooperative capabilities according to whether they address (1) understanding, (2) communication, (3) commitment, or involve (4) institutions. We illustrate this framework using simple strategic games.

At the most basic level, the decisions of agents are affected by their understanding of the payoffs of the game and of the other player's beliefs, capabilities, and intentions. Consider a one-sided assurance game such as depicted in Table 1, where the players have a mutual best outcome in (C,C), but player Row may choose D out of fear that Column will play D. To make this concrete, imagine that these are researchers choosing what problems to work on. Column strictly prefers to work on problem "C", and would like to collaborate with Row on it. Row is happy working on either problem, so long as it is in collaboration with Column. If Row does not understand Column's preferences, Row might choose D, to both of their loss. If, however, Row knows Column's preferences, then Row will predict that Column will choose C, and thus Row will also choose C, making them both better off.

Table 2 | Stag Hunt.

            Stag        Hare
Stag        1, 1        -1.2, 0
Hare        0, -1.2     0, 0

However, in many cooperative games, such as Stag Hunt (Table 2), understanding of payoffs is not sufficient for cooperation because there are multiple equilibria. Here the players also need some way of coordinating their intentions, actions, and beliefs to arrive at the efficient outcome. Communication offers a solution. If Row can utter and be understood to be saying "Stag", then Column will believe Row intends to play "Stag", and will thus also play "Stag". Row's announcement of "Stag" will be "self-committing", since Column's best response will be to play Stag, which implies that Row should now do so too [Far88].

Table 3 | Chicken.

            Swerve      Straight
Swerve      1, 1        0.5, 1.5
Straight    1.5, 0.5    0, 0

Communication can be complicated by conflicting incentives, as it can produce incentives to misrepresent one's beliefs or intentions [CS82, Fea95]. In the game of Chicken (Table 3), for instance, each player prefers the equilibrium where they drive straight and the other player swerves. And so while one player may clearly express that they intend to drive straight, the other player may feign deafness, disbelief, or confusion. When talk is cheap and incentives conflict, players may simply ignore each other; even with abundant opportunity to talk, two players may still fail to avoid a collision.

Where communication may fail, commitment capabilities may help. In Chicken, for example, if one player can credibly commit to driving straight, then the best response for the other is to swerve, averting disaster. In the Prisoner's Dilemma-often used as the quintessential social dilemma-cooperation can be achieved if one player can make a conditional commitment: to play C if and only if the other plays C. Commitments can be constructed in various ways, such as by a physical constraint, by signing a binding contract [Kat90], by sinking costs [Fea97], by repeated play and the desire to maintain a good relationship, and by the collateral of one's broader reputation [MS06].
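The equilibrium structure of these small games is easy to verify mechanically. The sketch below is purely illustrative (it is not from the paper) and assumes Python with NumPy; it enumerates pure-strategy Nash equilibria directly from the Stag Hunt and Chicken payoffs in Tables 2 and 3.

```python
import numpy as np

def pure_nash_equilibria(row_payoffs, col_payoffs):
    """Return all pure-strategy Nash equilibria (row index, column index)."""
    R, C = np.asarray(row_payoffs), np.asarray(col_payoffs)
    equilibria = []
    for i in range(R.shape[0]):
        for j in range(R.shape[1]):
            row_best = R[i, j] >= R[:, j].max()   # Row cannot gain by deviating
            col_best = C[i, j] >= C[i, :].max()   # Column cannot gain by deviating
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# Stag Hunt (Table 2): actions ordered (Stag, Hare).
stag_hunt_row = [[1.0, -1.2], [0.0, 0.0]]
stag_hunt_col = [[1.0,  0.0], [-1.2, 0.0]]
print(pure_nash_equilibria(stag_hunt_row, stag_hunt_col))   # [(0, 0), (1, 1)]

# Chicken (Table 3): actions ordered (Swerve, Straight).
chicken_row = [[1.0, 0.5], [1.5, 0.0]]
chicken_col = [[1.0, 1.5], [0.5, 0.0]]
print(pure_nash_equilibria(chicken_row, chicken_col))       # [(0, 1), (1, 0)]
```

Stag Hunt yields two equilibria, (Stag, Stag) and (Hare, Hare), which is the equilibrium-selection problem that communication can help resolve; Chicken yields the two asymmetric equilibria, (Swerve, Straight) and (Straight, Swerve), which is where credible commitment becomes decisive.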
Finally, cooperation may not be achievable without supporting social structure, which we inclusively term institutions. Institutions involve "sets of rules" which structure behaviour and vary in the extent to which they are formal, detailed, centralized, and intentionally designed. They may consist of conventions-self-enforcing patterns in beliefs-such as rules about what side of the road to drive on. They may involve norms, which further reinforce pro-social behaviour through informal sanctions. They may involve formal rules, roles, and incentives, such as we see in constitutions and governments.

Table 4 | Prisoner's Dilemma.

            C           D
C           1, 1        -0.5, 1.5
D           1.5, -0.5   0, 0

With respect to strategic games, institutions can be interpreted as characterizing equilibrium selection, or as involving stronger interventions on the game, such as linking the game to adjacent games or inducing changes to the payoffs. For instance, in the Prisoner's Dilemma (Table 4), the one-shot game has no cooperative Nash equilibrium, whereas the infinitely repeated game with discounting contains subgame-perfect equilibria that support cooperation, such as tit-for-tat [AH81]. Effective institutions often have the property that they promote cooperative understanding, communication, and commitments. Fearon [Fea20] conjectures that the problem of designing institutions-of "changing the game"-is the harder, and more important, kind of cooperation problem.
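The effect of repetition with discounting can be illustrated with a small simulation of the Table 4 payoffs. The sketch below is an informal illustration rather than a proof of subgame perfection, and it is not code from the paper; it assumes plain Python and an arbitrarily chosen discount factor of 0.95.

```python
# Prisoner's Dilemma payoffs from Table 4, indexed by (own move, opponent's move).
PAYOFF = {('C', 'C'): 1.0, ('C', 'D'): -0.5, ('D', 'C'): 1.5, ('D', 'D'): 0.0}

def tit_for_tat(history):
    """Cooperate in the first round, then copy the opponent's previous move."""
    return 'C' if not history else history[-1][1]

def always_defect(history):
    return 'D'

def discounted_scores(strategy_a, strategy_b, discount=0.95, rounds=500):
    """Total discounted payoffs when two strategies play the repeated game."""
    history_a, history_b = [], []        # each entry: (own move, opponent's move)
    score_a = score_b = 0.0
    for t in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        score_a += (discount ** t) * PAYOFF[(move_a, move_b)]
        score_b += (discount ** t) * PAYOFF[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print('TFT  vs TFT :', discounted_scores(tit_for_tat, tit_for_tat))     # ~ (20.0, 20.0)
print('AllD vs TFT :', discounted_scores(always_defect, tit_for_tat))   # ~ (1.5, -0.5)
print('AllD vs AllD:', discounted_scores(always_defect, always_defect)) # ~ (0.0, 0.0)
```

At this discount factor, sustained mutual cooperation is worth far more than exploiting a tit-for-tat partner once, which is the sense in which the repeated game "changes the game" relative to the one-shot Prisoner's Dilemma.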
In any setting, an agent would do well to understand the world: to be able to predict the (payoff relevant) consequences of its actions. In strategic settings-where outcomes depend on the actions of multiple individuals-it also helps to be able to predict the behaviour of other agents. This is thus also true of cooperative opportunities: the ability to predict behaviour can be critical for achieving mutually beneficial outcomes. This was illustrated above by the one-sided assurance game (Table 1), where improved understanding of the beliefs, preferences, or strategy of Column could be necessary for the players to maximize their joint welfare. These predictions may be explicit, like in model-based reinforcement learning and search, or they may be implicit in the ways a strategy is adapted to the strategies of others, such as arises from evolutionary adaptation and model-free reinforcement learning. We use the term understanding to refer to when an agent adequately takes into account (1) predictions of the consequences of actions, (2) predictions of another's behaviour, or (3) contributing factors to behaviour such as another's beliefs and preferences. (Note again that though "understanding" connotes deliberate reasoning, we have defined it here to also include when behaviour is adapted to implicit predictions, as would arise from evolutionary selection of policies.) Our discussion of understanding begins from a simple game-theoretic setting. We then complicate it with uncertainty, complexity, and deviations from perfect rationality. We divide our discussion into the problems of understanding the world, behaviour, preferences, and recursive beliefs. Understanding of the mental states (beliefs, goals, intentions) of another agent is also sometimes called theory of mind [BCLF85].

The primary problem confronting an agent who is seeking to craft a payoff-maximizing policy is that of understanding the (payoff relevant) consequences of its actions. This problem might be entirely subsumed (in a model-free way) in learning a policy function (best policy) or a value function (estimate of the value of actions), or it could be complemented with an explicit model of the world (also called an environment model, dynamics function, or transition function; we use these terms interchangeably) and representation of the state of the world [SB18, §1]. The heart of single-agent reinforcement learning is thus the problem of understanding the world.

Introducing other agents enriches and complicates an agent's understanding problem in many ways, as elaborated below. One particular way involves using another agent's actions to infer that agent's private information about the state of the world. This is relevant to the cooperation problem because sometimes the revelation of an agent's private information would be helpful for cooperation. For example, suppose an investor and an entrepreneur are considering whether to start a business; with their initial information they might each be too uncertain to make the necessary investments. However, by credibly sharing their respective private information, they may be able to confirm whether a joint venture would in fact be mutually beneficial. As will be discussed below, given sufficient common interest and a means of communication, this problem is easily overcome. However, if the agents lack a means of communication, then the uninformed agent may need to draw inferences in a more indirect manner. If the agents lack sufficient common interest, then the informed party's utterances cannot be trusted. In such a setting, agents may need to rely on "costly signals"-actions which reveal information because they are too costly to fake-for achieving cooperation. The canonical example is of a student getting a good grade in an arduous course being a costly signal of certain aptitudes ([Spe73, Zah77, Bar13]). (Some behaviours are impossible to fake no matter an agent's effort. Jervis [Jer89] calls these "indices", distinguishing them from "signals". Indices can be conceptualized as at the extreme end of the continuum of costly signals, where their costs are infinite for the wrong types of agents.)

AI tools could help humans jointly learn about the world in ways that would improve cooperation. For example, trusted AI advisors could help humans better understand the consequences of their actions [Har16, 320]. Other examples, such as privacy-preserving machine learning, will be discussed under Communication.

While understanding the environment is the full problem facing an isolated individual, in multi-agent settings an individual also benefits from anticipating the actions and responses of other agents. This is particularly true for cooperation. (We can conceptualize cooperation as a Pareto-superior action profile-a situation where players undertake a mutually beneficial set of actions-when there was some possibility of a Pareto-inferior action profile. If there was no possibility of a Pareto-inferior outcome, then there was no opportunity to cooperate, or to not cooperate.) To sustain a cooperative equilibrium requires some understanding of each other's strategy-the ways that the agent will behave in response to different actions-in order to decide whether cooperative actions will be rewarding and defection unrewarding. This level of mutual understanding of behaviour is implicit in the concept of a Nash equilibrium, which requires that each strategy be a best response to others' strategies; each strategy implicitly takes into account a correct prediction about others' behaviour. Going further to accommodate incomplete information, the solution concept of a Bayesian Nash equilibrium, and refinements like a Perfect Bayesian equilibrium, also explicitly require that the agent's beliefs be consistent with the strategies of other players. These solution concepts thus assume a certain degree of mutual understanding of behaviour. There also exist more radically cooperative solution concepts, which require even greater mutual understanding, of superrationality [Hof08, FRD15] and program equilibrium [Ten04, BCF + 14, Cri19, Gen09]. Likewise, there exist weaker definitions of cooperative outcomes which require greater understanding of others, such as Kaldor-Hicks efficiency [Hic39, Kal39], which only requires that the arrangement be Pareto-improving after a hypothetical transfer from the better off.

To illustrate, consider how the strategy tit-for-tat in the iterated Prisoner's Dilemma takes into account (implicit or explicit) predictions about the other's behaviour. It understands that being nice (playing C initially) may induce cooperative reciprocation; that being forgiving (playing C after C, even given a history of D) may allow the players to recover cooperation after a mistake; and that being provocable is a good deterrent (playing D after D will reduce the chance of the other defecting in the future). Tit-for-tat also has the critical property that it is clear, making it easy for others to, in turn, understand [Axe84, 54]. This understanding may be implicit, such as if the strategy emerges from an evolutionary process, or it could be explicit if it was deployed by a reasoning agent, like a game theorist after reading Axelrod.

Compared to sustaining a cooperative equilibrium, moving to a cooperative equilibrium often poses a greater challenge of understanding. Experience can no longer be relied on for evidence that the cooperative equilibrium is in fact beneficial and robust. Instead, the agents have to jointly imagine or stumble towards this new equilibrium; achieving such a shift in behaviour and expectations can be difficult. Achieving cooperation, rather than just sustaining it, thus may often require a deeper and more theoretical form of understanding.

Understanding of behaviour can be achieved in many ways, which can be arrayed on a spectrum from more empirical to more theoretical; this distinction relates to that between model-free and model-based learning [Day09]. On the most empirical end, we have learning processes that lack any in-built ability to plan, like simple evolutionary processes, classical conditioning, and model-free learning. Some cooperative equilibria are stumbled upon from these kinds of experiential processes, such as a sports team which learns from extensive training to intuitively coordinate like a single organism. There are learning techniques which can help agents to understand each other, such as cross-training, in which each teammate spends some time learning to perform the roles of others [NLRS15]. At the population level, regularities often naturally arise which coordinate intentions and behaviour; these we call conventions. Consider the what-side-of-the-road-to-drive-on game, the solution of which in a particular jurisdiction was emergent before it was codified in law [You96].
At the most theoretical end of the spectrum of understanding, agents may be able to reason through the vast strategy space and identify novel cooperative equilibria. Such theoretical (model-based) understanding has the advantage of permitting larger jumps in the joint strategy space, though dependence on a learned (imperfect) model brings with it additional learning costs and risks of error [HLF + 15]. Sufficient understanding of an agent's strategy is all that is needed for agents to identify and sustain feasible cooperative equilibria, since these are, after all, simply stable strategy profiles. However, achieving such sufficient understanding of strategy from behaviour alone is generally implausible: the strategy space for most games is computationally intractable, and strategy itself is unobservable and subject to incentives to misrepresent. Instead, agents often do well to understand the causes of others' behaviour, such as other's private information, preferences, and (recursive) beliefs themselves. Sometimes the critical information needed to predict behaviour to achieve cooperation is about an agent's preferences, also variously called payoffs, values, goals, desires, utility function, and reward function. This was illustrated in the assurance game in Table 1 , where Row's uncertainty about Column's preferences could lead to the mutually harmful outcome of (D, C). However, eliciting and learning preferences is not an easy task. Even under pure common interest, preferences may be computationally intractable; for example, humans are probably not able to express their preferences about broad phenomena in a complete and satisfactory manner [Bos14, Rus19] . Humans may not even have adequate conscious access to their own preferences [Rus19, 138] . Complicating this process further, in the presence of conflicting incentives, agents may have incentives to misrepresent their preferences. AI research on learning of preferences is growing in prominence, in part because this is regarded as a critical direction for the safety and alignment of advanced AI systems ([RDT15, CLB + 17]). Some of this research seeks to learn directly from an agent's behaviour (see recent survey on such work [AS18] ), where the agent is oblivious or indifferent to the learner (often called inverse reinforcement learning, or IRL) [NR + 00]. It is often critical to inject sufficient prior knowledge [AM18] or control [AJS17, SSSD16] to produce sufficiently useful inferences. Some preference learning research takes place in an explicitly cooperative context, where the observee is disposed to help the observer, and may even learn to be a better teacher; this is sometimes called cooperative IRL [HMRAD16] As Figure 1 illustrated, safety and alignment can be regarded as the complement to Cooperative AI for achieving broad coordination within society and avoiding outcomes that are harmful for humans. Safety and alignment address the "vertical coordination problem" confronting a human principal and a machine agent; Cooperative AI addresses the "horizontal coordination problem" facing two or more principals. In the near future, preference learning is also likely to be critical for the beneficial deployment of AI agents that interact with humans in preference-heterogeneous domains, such as with writing assistants [BMR + 20, LKE + 18] and other kinds of personal assistants. 
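As a toy illustration of inferring preferences from behaviour, far simpler than the IRL and assistance-style methods cited above and not drawn from them, one can score candidate reward functions by the likelihood they assign to observed choices under a Boltzmann-rational choice model. The options, hypotheses, and observations below are invented for illustration; the sketch assumes Python with NumPy.

```python
import numpy as np

def choice_log_likelihood(reward, choices, beta=2.0):
    """Log-likelihood of observed choices for a Boltzmann-rational agent that
    picks option a with probability proportional to exp(beta * reward[a])."""
    reward = np.asarray(reward, dtype=float)
    log_probs = beta * reward - np.log(np.sum(np.exp(beta * reward)))
    return float(sum(log_probs[a] for a in choices))

# Two hypothetical reward functions over three options (e.g. meeting slots).
prefers_slot_0 = [1.0, 0.2, 0.0]
prefers_slot_2 = [0.0, 0.2, 1.0]

observed_choices = [0, 0, 1, 0]   # hypothetical observations of a user's picks

print(choice_log_likelihood(prefers_slot_0, observed_choices))  # higher: better fit
print(choice_log_likelihood(prefers_slot_2, observed_choices))  # lower
```

Real preference learning must additionally contend with the issues raised above: incomplete access to one's own preferences, intractably large hypothesis spaces, and incentives to misrepresent.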
The exposition so far has largely focused on what can be called "first-order understanding", which avoided the recursion of how one agent's beliefs can be about the beliefs of others, which themselves may be about the beliefs of the first, and so on. In strategic equilibria, behaviour and beliefs must be consistent and thus can involve recursive relations. Recursive beliefs are sometimes called recursive mind-reading and are a critical skill for agents in socially complex environments. Humans, for example, have been shown to be capable of at least seven levels of recursive mind-reading [OKSSP15]. In AI research, recursive mind-reading has been explored in negotiation settings [dWVV15, dWVV17] and is believed to be important for games like Hanabi [BFC + 20]. If a proposition is known, is known to be known, and so on to a sufficiently high level, the proposition is said to be common knowledge. 12

12 Many scholars define common knowledge as requiring infinite levels of mutual knowledge, but this is likely an overly strong definition for humans. Chwe [Chw13, 75-77] offers several ways to relax this assumption. Instead of requiring a 100% confident belief ("knowledge") at each level, one can require sufficiently confident belief (say, over 90%). Common knowledge may be achieved through a single recursive step. Perhaps k levels of mutual knowledge is in practice sufficient for humans, where k is 2 or 3. Humans may have various heuristics which are understood to achieve (infinite-level) common knowledge, such as eye contact.

High-order beliefs and common knowledge play important roles in social coordination, from supporting social constructs such as money [Har15, Sea95] to collective action like revolution [Loh94]. Attention to common knowledge has been productive in recent AI research [FdWF + 18]. To be clear, as with most cooperative capabilities, while competence at recursive mind-reading can sometimes be critical for cooperation, at other times it may undermine it. To offer a theoretical example: in a finitely repeated Prisoner's Dilemma, if the agents have mutual knowledge of the length of the game to an order greater than or equal to the length of the game, then the logic of backward induction will lead them to defect immediately; whereas if they only have mutual knowledge to an order less than the length of the game, then backward induction cannot fully unravel a cooperative equilibrium. Empirically, there is also evidence that deliberative reasoning can undermine heuristic cooperative behaviour: participants in public goods games may contribute more when they are forced to make their decision under time pressure [RGN12, RPKT + 14]. This suggests that, for most people, the default intuitive strategy may be cooperation, but with further reasoning they will consider the possibility of defection. Communication can be critical for achieving understanding and coordination. By sharing information explicitly, agents can often more effectively gain insight into one another's behaviour, intentions, and preferences than could be gleaned implicitly from observation and regular interaction. Such an exchange of information may therefore lead to the efficient discovery of, convergence on, and maintenance of Pareto-optimal equilibria. As a simple example, two individuals who enjoy each other's company would do well to compare their calendars to find a time to meet, rather than each trying independently to infer when the other might be free.
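The calendar example can be made concrete with a small sketch contrasting explicit communication with purely implicit inference; the availabilities, observation noise, and decision threshold below are invented for illustration.

```python
import random

# Two colleagues each have private availability over ten meeting slots.
# Explicitly sharing calendars identifies a common slot in one exchange;
# inferring the other's availability from noisy observations of past
# attendance takes many interactions and can still be wrong.
SLOTS = range(10)
rng = random.Random(1)
alice_free = {slot for slot in SLOTS if rng.random() < 0.5}
bob_free = {slot for slot in SLOTS if rng.random() < 0.5}

# Explicit communication: a single exchange of messages suffices.
print("common slots after communicating:", sorted(alice_free & bob_free))

# Implicit inference: Alice schedules ad-hoc meetings and records whether Bob
# shows up; he occasionally misses slots he is actually free for.
attendance = {slot: 0 for slot in SLOTS}
for _ in range(50):
    slot = rng.choice(list(SLOTS))
    if slot in bob_free and rng.random() < 0.8:
        attendance[slot] += 1
inferred = {slot for slot, count in attendance.items() if count >= 2}
print("inferred Bob free:", sorted(inferred), "| actually free:", sorted(bob_free))
```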
Likewise, division of labour benefits from coordination contingent on effective communication: one agent may announce an intention to speak with a client, allowing the other to focus on updating the firm's accounts. The information exchanged may take a usefully compressed form, providing abstractions that aid cooperation. "Let's carry this box into the lounge" is far more efficient and flexible than a series of low-level motor instructions for several individuals. In the simplest setting for communication, we can imagine two individuals with pure common interest and similar background knowledge about the world, with no constraint on data transfer, but each with some private information relevant to cooperation (which may include their intentions). For example, they may be playing a symmetric coordination game. Even here, communication is not trivial, but requires sufficient common ground [CB91] so that an agent can interpret the other's message. With no common ground, all signals will be meaningless. The need for common ground will be greater for more complex messages. In practice, most communication channels have limitations, such as in bandwidth and latency. Cooperation under limited bandwidth calls for efficiently compressed information transfer, and the most suitable type of compression depends on the cooperative context, such as what the agents are trying to achieve and whether the agents are humans, machines, or both. Cooperation under high latency requires that agents are able to communicate effectively despite the temporal delay of messages, which may be challenging in a fast-moving setting. Sometimes one agent has critical knowledge to impart to another agent. This knowledge may be some private information, a skill, or even a protocol for more effective communication. In such settings, the agents need to cooperate through a teacher-student relationship. Building agents who can be, or can learn to be, good teachers and good students is an active area of research. Challenges include providing a suitable curriculum, equipping the teacher with adequate theory of mind, improving opponent modeling and shaping strategies, and achieving greater sample efficiency in student learning. Communication is easiest under pure common interest. When agents have conflicting interests, a host of new challenges can arise. Agents may now have to worry about being manipulated by the other's signals. Can the agents trust each other to communicate honestly, given incentives to misrepresent? Do agents have the ability to detect dishonesty and deception? Can agents protect their communication channel by isolating their common interests, by norms such as honesty, or with other institutional arrangements? Given the prevalence of mixed-motives settings, building artificial agents capable of cooperative communication in that context may be a critical goal. The first component of communication is common ground: presumed background knowledge shared by participants in an interaction [Sta02]. 13 Such background information may take the form of a shared understanding of the physical world, a shared vocabulary for linguistic interaction, or a shared representation of some relevant information. Some degree of common ground is necessary for meaningful communication. If the recipient of a message has no idea what the terms within a message refer to, the recipient will not be able to make sense of the message content, and the message therefore cannot guide action or improve coordination.
One can divide much of the effort involved in communication between the initial work of building these representations and the subsequent use thereof. Humans, for instance, spend their early years learning to ground language such that they can use terms in ways that their family and others in their community will find intelligible [LCT13, BK17]. This large initial investment enables more substantive and less expensive communication later in life. The trade-off between the fixed costs of developing common ground representations and the variable costs of using those representations to communicate makes different levels of complexity more or less optimal in different settings. For instance, scuba divers must agree on a small vocabulary of sign language in which each term has a precise meaning, so that safety-critical messages are unambiguous [Mer11]. By contrast, American Sign Language has all the ambiguity, compositionality, and open-endedness typical of spoken languages, appropriate for its use for cooperation in a much richer set of circumstances [GMB17]. Common ground will vary depending on the kinds of agents and the context of the cooperation problem. In machine-machine communication, the common ground is not likely to be immediately human-interpretable. Hard-coded machine-machine communication [AO18] already underlies the current information revolution. Such systems tend to have fixed protocols, enabling domain-specific cooperation, such as in e-commerce. Future research could enable machine-machine communication in general domains, for instance by employing learning or evolutionary algorithms. The question of how to bootstrap common ground between machines is often referred to as emergent communication [WRUW03, SSF16, FADFW16, CLL + 18]. AI research is also facilitating the finding of common ground between humans. Machine translation is the most prominent such example: by mapping from one set of representations (e.g., German) to another (e.g., Chinese), such systems can enable rich human-human communication without the need for either party to learn an entirely new language. Improved translation, by removing barriers to communication, may lead to increased international trade [BHL19b], higher productivity (through increased competition and efficient reallocation of resources), and a more borderless world [Web20]. The bandwidth of a communication channel measures the amount of information that can be transferred over the channel in a given unit of time. Considering human-human cooperation, the versatility of human vocal cords seems to enable humans to convey more information per second than other primates [GR08, Fit18]. However, spoken language still has far lower bandwidth than our sensory and cognitive experiences, necessitating compression. The open-ended and flexible means by which human language achieves this compression is inextricably linked to our species-unique cooperation skills [SP14]. Considering the low-bandwidth end of the communication spectrum, one finds (long-range) communication methods used by humans such as smoke signals and maritime flags. These work well for transmitting a small set of basic messages, but would be ill-suited to negotiating a complex legal agreement. At the other end of the spectrum, fixed-protocol machine-machine communication, such as inter-server networking within data centres, can have far higher bandwidth than human language.
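A minimal sketch of the referential (Lewis) signaling game often used to study this bootstrapping problem may help fix ideas; the state and symbol counts, learning rule, and parameters below are illustrative choices rather than a reconstruction of any cited system.

```python
import random
from collections import defaultdict

# A speaker observes one of four states and sends one of four symbols over a
# limited channel; the listener maps the symbol to a guess. Both receive the
# same reward (pure common interest), so a shared code can emerge from scratch.
NUM_STATES, NUM_SYMBOLS = 4, 4
speaker_q = defaultdict(float)   # (state, symbol) -> estimated reward
listener_q = defaultdict(float)  # (symbol, guess) -> estimated reward
rng = random.Random(0)

def choose(q, context, options, epsilon=0.1):
    if rng.random() < epsilon:
        return rng.choice(options)
    return max(options, key=lambda option: q[(context, option)])

for _ in range(20000):
    state = rng.randrange(NUM_STATES)
    symbol = choose(speaker_q, state, range(NUM_SYMBOLS))
    guess = choose(listener_q, symbol, range(NUM_STATES))
    reward = 1.0 if guess == state else 0.0
    speaker_q[(state, symbol)] += 0.05 * (reward - speaker_q[(state, symbol)])
    listener_q[(symbol, guess)] += 0.05 * (reward - listener_q[(symbol, guess)])

# Inspect the emergent code: the speaker's preferred symbol for each state.
print({s: max(range(NUM_SYMBOLS), key=lambda m: speaker_q[(s, m)])
       for s in range(NUM_STATES)})
```

Simple independent learners like these often settle on partial or ambiguous codes, which is one reason the work discussed next augments them with richer inductive biases.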
Research in the aforementioned emergent communication paradigm seeks to develop common ground between machines, given a limited-bandwidth communication channel. When augmented with appropriate biases [EBL + 19], such multi-agent systems can cooperate to solve a variety of tasks, from negotiation [CLL + 18] to sequential social dilemmas [JLH + 18], even under conditions where the bandwidth must be minimized [MZX + 20]. However, the languages which emerge between agents typically do not display the classic hallmarks of human language, such as compositionality and Zipf's law [KMLB17, CKDB19, LHTC18]. Therefore, while the approach of emergent communication may work well for machine-machine systems, much work remains to be done on crossing the human-machine barrier. A related and promising research domain is human-computer interface development, with (deep-learning-enabled) advances in brain-computer interfaces representing an especially radical direction of advance [ZYW + 19]. Latency refers to the delay between a message's transmission and reception. It can vary independently of bandwidth, and it shapes the degree of autonomous behaviour required of each agent in a cooperative system, as well as the amount of planning and prediction necessary when crafting a message. Consider communication between NASA's Perseverance rover on Mars and NASA's headquarters in the United States. Because of the latency involved in interplanetary sharing of information (between 3 and 23 minutes, depending on the planets' relative positions), NASA engineers cannot feasibly control the rover's actions in the same way they could if it operated locally [BCA + 17]. Instead, Perseverance has to navigate the Red Planet with some autonomy, with NASA communicating high-level guidance as opposed to low-level motor direction. The connections to cooperation are clear: latency affects actors' ability to coordinate in fast-moving circumstances. The cooperative importance of reducing latency is exemplified by the Moscow-Washington hotline, which was installed to accelerate communication following the near catastrophe of the Cuban Missile Crisis, during which diplomatic messages could take more than 12 hours to process. In general, latency shapes the types of agreements into which actors can enter, and therefore the design of communicative agents [Bla03, CXL + 20]. The problems presented by high latency may be particularly pronounced in situations where one individual seeks to affect the learning of another, for instance by providing reward [YLF + 20, LP20] or imposing taxation [ZTS + 20], since the effects of learning may only be apparent at a later time. A key feature of cooperative intelligence is the ability to teach others. Teaching enables the communication of practically useful knowledge and thereby greatly increases opportunities for joint gains. It can be viewed as an advanced form of social learning [Ban77, HL13], where not only do individuals learn by observing more capable agents, but those agents also modify their behaviour in order to elicit better learning from their students. The evolution of teaching is associated with increased cumulative cultural abilities in many species [TR10], and stands out as an important component of the social intelligence of humans [FSL11], perhaps even tied to the origins of language itself [Lal17]. An important component of learning to teach is learning to be taught.
In the evolutionary biology literature, this often goes under the moniker of observational learning, and it is closely related to what in AI is known as imitation learning. Following the deep learning revolution, the field of AI has demonstrated increasing interest in transferring knowledge from teachers-human and artificial-to student algorithms, particularly given the sizable datasets of human demonstrations (e.g., those available on video sharing platforms). Direct methods of learning from human teachers, such as imitation learning [Pom88, Sch99], have become widespread in real-world robotics research (see for example [RKG + 18, ACN10]). In turn, teachers can learn to select the most useful demonstrations or lessons for their students [CL12]. Distillation of policies between agents can help achieve higher performance across a wide range of environments [SHZ + 18]. A more model-free approach was recently explored [BPMP17, WFH20], whereby a student learned to follow a teacher in a gridworld environment via curriculum-based reinforcement learning (RL). Competence in learning from others can permit faster learning of skills and superior zero-shot transfer performance [NELJ20]. Learning to teach has received increasing attention in recent years [DSGC17, Bec98, OKL + 19, ZVW14]. This is particularly challenging, since teacher performance can only be evaluated after the student has learned. Therefore, the feedback signal for teacher learning may be temporally distant from the start of teaching. There are various nascent methods for addressing this problem, including meta-learning students [ZTS + 20], second-order gradient methods [FCAS + 17, LFB + 18], and inverse reinforcement learning [CL12]. Machines that can teach humans hold much promise for society. Indeed, it may only be once high-performing agents develop the ability to instruct that we realize their full potential: super-human algorithms, such as AlphaZero [SHS + 17] in the domains of chess, Go, and shogi, would generate much more utility if humans could learn directly from their inner workings. This topic has foundations in the rapidly evolving domain of interpretable and explainable AI, which in turn has significance for technical safety. However, effective teaching goes beyond interpretability to include questions of interface design for human-computer interaction [SPC + 16], the evaluation of effective pedagogy [GBL08], summarization methods [NZdS + 16, ZWZ19], and knowledge distillation [YJBK17]. The notion of teaching is also relevant to considerations of equilibrium selection and the construction of welfare-improving equilibria, namely through correlated equilibria [Aum74] and coarse correlated equilibria. Evolutionary models demonstrate that correlated equilibria can solve mixed-motive problems [MA19, LP16]. Furthermore, independent no-regret learning algorithms provably converge to coarse correlated equilibria [HMC00]. Further research is warranted into the nature of communicative equilibria and how they might support cooperation. Communication generally becomes more difficult the more agents' preferences are in conflict. The fundamental problem facing communication under mixed motives is the incentive to deceive, and the consequent risk of being deceived. Under pure conflicting interest, agents have no incentive to communicate: any message that one agent would want to be heard, the other agent will not want to hear.
Conflicting interest can thus destroy much of the potential of communication for achieving joint gains [CCB11]. As agents' preferences become more aligned, cheap-talk communication generally increases in efficacy [CS82]. Alternatively, under mixed motives, credible communication can sometimes be achieved through costly signals, which overcome the incentive problems preventing honest communication; the canonical example, mentioned above, is a student's grades serving as a credible signal of certain aptitudes and interests [Spe73, Zah77, Bar13]. As artificial agents are deployed in society, scientists, policymakers, and the general public will have to grapple with complex questions about what communication norms machines should be expected to abide by, such as perhaps declaring their (machine) identity [O'L19] and committing to avoid deception, and how these norms should be reinforced. Research in mechanism design offers opportunities for building mechanisms to incentivize truthful revelation, a property known as incentive compatibility. AI could play an important role in devising new incentive-compatible mechanisms and in acting as a trustworthy mediator. Automated mechanism design has already achieved noteworthy results in a number of areas, including auctions, voting and matching, and assignment problems [FNP18, CDW12b, CDW12a, NAP16]. In many cases, function approximators can obtain (approximate) incentive compatibility in situations too complex to be tractable under closed-form methods. Given the complexity of many real-world settings, this line of research holds great promise for increasing global cooperation. Advances in cryptography offer other opportunities for promoting cooperative communication under mixed motives. It is increasingly possible to build (cryptographically based) information architectures which permit precise, complex forms of information flow, sometimes called structured transparency [TBG + ], in which, for example, the owner of information can allow that information to be used for some narrow purpose while keeping it otherwise private. Successes in structured transparency can open up new opportunities for mutual gains, such as enabling privacy-preserving medical research which depends on analyzing the health data of many individuals [TBG + ] or privacy-preserving contact tracing for pandemics [SGD19]. Privacy-preserving machine learning, which encompasses methods such as homomorphic encryption [GLN12, Gen09], secure multi-party computation [MZ17], and federated learning [MMR + 17], allows for the training of models on data without the model owner ever having access to the full, unencrypted data or the data owner ever having access to the full, unencrypted model [GLN12, TBG + ]. AI research has an important role to play in advances in this kind of structured transparency. Finally, an important mixed-motive setting for communication is negotiation. In situations lacking a formal contractual agreement protocol, "cheap talk" [CS82] can play an important role, including in human-machine interactions [COIO + 18]. When an agreement structure is available, automated negotiation systems may seek a cooperative outcome [RZ94, JFL + 01, FSJ02]. Such systems aim to reach agreements by reasoning over possible deals or by iteratively making offers and modifying a deal in mutually beneficial ways [Kra97, MSJ98, BKS03]. AI research has investigated how to design protocols and strategies for automated negotiation.
Such protocols define the syntax used for communication during the negotiation, restrictions as to which messages may be sent, and the semantics of the messages [Smi80, CW94] . Researchers have proposed many automated negotiation protocols [APS04, IHK07] , focusing on developing specialized negotiation languages which are expressive enough to capture key preferences of agents, but still allow for computationally efficient dealmaking [Sid94, Mue96, WP00] . The advent of deep learning has opened up profitable avenues, including agents which learn to negotiate based on historical interactions [Oli96, NJ06, LYD + 17]. The above capabilities of Understanding and Communication seek to address cooperation failures from incorrect or insufficient information. However, cooperation can fail even absent information problems. Work in social science has identified "commitment problems"-the inability to make credible threats or promises-as an important cause of cooperation failure. Prominent scholarship even argues that cooperation failure between rational agents requires either informational problems or commitment problems [Fea95, Pow06] . 14 More broadly, a large literature has looked at the many ways that commitment problems undermine cooperation [Sen85, Nor93, Fea95, Bag95, Hov98 , H + 00, Pow06, GS07, JM11]. 15 To illustrate, consider the Prisoner's Dilemma (Table 4 above), often regarded as the canonical, and most difficult, of 2-by-2 game-theoretic cooperation problems. This game involves perfect and complete information, so no amount of improved understanding or communication would help. Though the players could both be better off if they could somehow play (C, C), each has a unilateral incentive to play D, irrespective of what the other player says or intends to do. However, if one player can somehow make a conditional commitment to play C if and only if the other player plays C, then the dilemma would be solved; the other player would now strictly prefer to also play C. Commitment problems are ubiquitous in society. Absent the solutions society has constructed, commerce would be crippled by commitment problems. Every time a buyer and seller would like to transact, each may fear that the transaction will go awry in any number of ways; government-backed currency, credit cards, and consumer protection regulations each address some of these potential commitment problems-from a prospective buyer failing to deliver payment, to a seller delivering faulty (or no) products. Domestic political order depends on the ability of leaders to make credible promises. Liberal polities in particular often depend on constitutions which articulate the fundamental civic promises and social mechanisms to credibly enforce them [AR12, NWW + 09]. When ruling elites can't make such promises, or can't trust the promises of prospective challengers, repression or civil war may be the only recourse [Fea98, Fea04] . Peace among great powers itself may depend on avoiding abrupt power transitions and the commitment problems those produce [Pow99] . Overcoming a commitment problem often requires a commitment device, which is a device that compels one to fulfill a commitment, either through a "soft" change to one's incentives for taking different actions [Sni96, BKN10] , such as a penalty for non-compliance, or a "hard" pruning of one's action space, such as implied by the commitment metaphor of burning one's boats. 
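The way a conditional commitment dissolves the one-shot Prisoner's Dilemma can be checked mechanically. The sketch below assumes standard illustrative payoffs (temptation 5, mutual cooperation 3, mutual defection 1, sucker 0), which need not match Table 4, and assumes the commitment is binding and observable.

```python
# One-shot Prisoner's Dilemma payoffs as (row, column); illustrative values.
PAYOFFS = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

# Row's candidate commitment devices: each maps Column's action to Row's action.
COMMITMENTS = {
    "unconditional C": lambda col: "C",
    "unconditional D": lambda col: "D",
    "C iff Column plays C": lambda col: "C" if col == "C" else "D",
}

for name, device in COMMITMENTS.items():
    # Column observes the binding commitment and best-responds to it.
    col_best = max(["C", "D"], key=lambda col: PAYOFFS[(device(col), col)][1])
    outcome = (device(col_best), col_best)
    print(f"{name:>22}: Column plays {col_best}, outcome {outcome}, payoffs {PAYOFFS[outcome]}")

# Only the conditional commitment yields (C, C); unconditional cooperation is
# exploited, and unconditional defection reproduces the usual (D, D) outcome.
```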
14 A third cause of conflict is issue indivisibility, though this is sometimes said to still depend on the prior inability to commit to an efficient gamble which would be Pareto superior to costly conflict [Pow06].

15 In practice, informational problems and commitment problems are often intertwined, and solutions for them can be substitutes. It can nonetheless be useful theoretically to distinguish them.

16 Thus removing from the choice set the option of retreat for an invading force [Rey59].

Commitment devices may be unilateral, where a single agent is capable of executing the commitment, or multilateral, requiring multiple agents to consent. Commitment devices can involve an unconditional commitment to some action or more sophisticated commitments conditional on the actions of others or on events. Unilateral unconditional commitments may be the most accessible commitment devices, since they only require that an agent have some means of shaping their own incentives or options. Perhaps the most common unilateral unconditional commitment is to simply take a hard-to-reverse action. These commitments are implicit in sequential games, producing less risk of agents simultaneously miscoordinating. These commitments are more common when there is a large or salient first mover who can shift the expectations of many other players; examples include an "anchor tenant" in a development project or a large firm committing to a technical standard. Unilateral conditional commitments are typically much harder to construct, probably because they require binding oneself to a more complex pattern of behaviour. However, conditional commitments can be much more powerful because they can support the precise promises (or threats) that may be needed to support a cooperative venture; this is illustrated by how a conditional commitment is sufficient to overcome the Prisoner's Dilemma, whereas an unconditional commitment is not. Lastly, commitment devices may be multilateral, requiring multiple parties to consent to the commitment before it goes into effect. Legal contracts exemplify multilateral commitments. Most conditional commitment devices in society may be multilateral commitment devices. It is worth noting, though, that a set of unilateral conditional commitments can be equivalent to a multilateral unconditional commitment. This is illustrated by the National Popular Vote Interstate Compact, which aims to replace a state-by-state first-past-the-post system with national popular voting without a change to the US Constitution. The compact's strategy is for US states to unilaterally commit to award all their electoral votes to the presidential candidate who wins the national popular vote, conditional on enough other states similarly committing [Nat20]. AI research can contribute in several ways. Researchers can develop languages for specifying commitment contracts and semantics for the actions taken under them [KMS + 16, LCO + 16, FN16]. This work will benefit from being interdisciplinary, given how the space of commitment mechanisms spans domains such as law, politics, economics, psychology, and even physics. Researchers can improve our ability to reason about the strategic impacts of commitment, for example by developing algorithms for finding the optimal course of action to commit to [CS06] or by predicting how agents are likely to respond when others commit to a certain course of action [KYK + 11, PPM + 08].
One may also examine specific domains, such as disarmament [DC17] , to identify favorable commitment perturbations to the game that increase welfare. Commitment devices come in many forms, including enforcement, automated contracts, and arbitration [BTU91, GMW94, Gre94, WCL97, Ost98, KV00, Roc01, MT04, MS05, MT09, KKLS10, GIM + 18, CH19]. We mentioned above how agents often have some unilateral commitments available to them, if only from their ability to "move first", sink costs, or literally destroy some options available to them. In addition, we see several classes of other commitment devices, each of which depends on some social infrastructure: reputation, a social planner, contracts, and hardware. Each of these has distinct properties and is associated with distinct research problems. We briefly review these here and discuss some at greater length in the following section. First, reputation systems provide a mechanism for commitment by creating a valuable asset (reputation) which can be put up as collateral for cooperative behaviour in transient encounters. Just as a canonical solution to the Prisoner's Dilemma is to iterate the game, which then can give agents an incentive to cooperate today to preserve their reputation for being likely to cooperate tomorrow [NS98] , so in human society does reputation seem to undergird many cooperative achievements such as trade and debt [Gre89, Tom12] . AI research can assist in designing and facilitating effective reputation platforms [RZ02, Kor83, ZM00] , thus enabling verification of agent identity [MGM03, GJA03] , as well as building agents who are skilled at promoting cooperative reputation systems. We discuss reputation further below. Another method of achieving commitment involves agents delegating decision-making power to a social planner-a trusted third party or a central authority. AI research can study such mechanisms of delegation and mediation [Ten04, MT09] and can work to improve the efficacy of a central planner [ZTS + 20]. A central authority can also provide a legal framework and enforcement, within which agents can construct multilateral commitments, namely through contracts. The emergence of increasingly cognitively capable algorithms, cryptographic protocols and authentication, trusted hardware [LTH03, BS13] , and "smart contracts" [CD16, KMS + 16, LCO + 16, BP17, GIM + 18, CH19, WZ18] enable the delegation of increasingly sophisticated commitments to non-human entities, including commitments that are conditional on states of the world. These technologies can enable contractual commitments without requiring a central authority. These tools can also make communication more credible, such as with tamper-proof recordings from the sensors in autonomous vehicles [GLNS20] , which could provide forensic evidence in the event of an accident. Achieving the requisite understanding, communication, and commitment for cooperation often requires additional social structure. Following economics and political science, we refer to this structure abstractly as institutions. Institutions involve a system of beliefs, norms, or rules that determine the "rules of the game" played by the individuals and organizations composing a collective [Gre06] , shaping the actions that can be taken by individuals and the outcomes determined by these actions, resulting in stable patterns of behaviour [S + 08, Ord86, Kni92, ASB98, Hun06, Mas08]. Cooperative institutions are those which support cooperative dynamics. 
They do so primarily by resolving coordination problems and by aligning incentives to resolve social dilemmas. They may also provide structural scaffolding upon which complex inter-agent behaviours can be built [MT95], such as by enabling agents to adopt simplifying assumptions about the behaviour of others. For games of common interest, conventions are patterns of expectations and behaviour which promote coordination. For mixed-motives games, these patterns can be reinforced with social reward and sanctions, which we refer to as norms. Society can go further, allocating roles, responsibilities, power, and resources in ways designed to reproduce a pattern of desired interactions; these thicker and more formal entities are what is most commonly denoted by the term institutions, though we also use the term in an encompassing way. To illustrate, consider the Prisoner's Dilemma, described in the section on Commitment and depicted in Table 4. As noted, if one player is able to conditionally commit to play C if and only if the other player plays C, then the dilemma is overcome. How can such a commitment be constructed? If the game can be made to repeat or be linked to other similar games, then cooperation may become an equilibrium; this linking of games is sometimes understood as an institution, such as is achieved in trade negotiations under the WTO or in global diplomacy within the UN. Given a repeated game, one needs suitable expectations to support cooperation; tit-for-tat is one such self-reinforcing norm. Cooperation can otherwise be achieved if it is possible to allocate external incentives that change the payoffs in the one-shot game or to otherwise make the conditional commitment binding; the mechanisms that achieve this are themselves sometimes called institutions [FL15, Nor93]. Institutions vary in the extent to which they are emergent vs designed, informal vs formal, and decentralized vs centralized. These properties are correlated, but not perfectly. Institutions may initially emerge from trial-and-error processes [Ost98] but then take on a more formal, designed, centralized character. For instance, in the case of file-sharing systems, participants may initially refuse to share their files with others who are not sharing. Such a rule can later be implemented formally in a peer-to-peer network or file-sharing service, where a mechanism can be introduced to limit the rate of service for participants who are not using their resources to provide service to others [GLBML01, LFSC03, LLKZ07]. Groups may later agree on a more formal framework for making joint decisions so as to improve social welfare, such as voting systems, auctions, and matching mechanisms. We divide the following discussion into decentralized institutions and centralized institutions. In decentralized institutions, there is no single central trusted authority which can make and enforce decisions on behalf of a group of agents. Instead, institutional structures will often emerge from the interactions of agents over time [Ost98, ST97], such that agents themselves act in ways that incentivize the desired behaviour in others (for example, through informal social punishments [Wie05]). There is a rich literature on decentralized algorithms arising from the field of distributed computing [TVS07], which supports the design and analysis of decentralized institutions.
Within multi-agent systems, many methods have been proposed which help agents to interact directly with one another, negotiate, make decisions, plan, and act jointly [Smi80, HL04, Fer99, Jen96, OJ96, BG14, Wel93, VSDD05, Cas98]. One prominent way of achieving societal goals without relying on a trusted central authority is through norms. Norms are broadly understood to be informal rules that guide the behaviour of a group or a society [BMS18]. They constrain the behaviour of group members, often by capturing and encoding sanctions or social consequences for transgressions, and are seen as central to supporting societal coordination. One prevalent interpretation of social norms is that they can be represented as equilibria in strategic games and thus may be viewed as stable points among the group's interactions [Bic06, Mor14]. Human groups have remarkable abilities to self-organize around social norms to overcome issues of collective action [Ost00]. Indeed, the emergence of robust social norms is thought to have been a key process in the development of large-scale human civilization [AS19, Hen16, Har15]. It is this importance for human interaction which motivates research into how artificial intelligence can learn to recognize and follow norms. Researchers have argued that norms can be used to organize systems of agents or influence the design of agents themselves (e.g., [CDJT99, Dig99, VSDD05]) and have worked on agents who adhere to social norms, reflecting constraints on the behaviour of agents that ensure that their individual behaviours are compatible (e.g., [ST95, CCD98, CC95, ST92]). These constraints are typically imposed offline to reduce the need for negotiation and the chances of online conflict. Furthermore, work has examined possible social norms for various environments and investigated their computational properties, both in terms of identifying predicted behaviour under various norms (for instance, in terms of the emerging equilibrium behaviour) and in identifying good norms that lead to desired behaviour (e.g., [DKS02, HL04, ASP09, yLLd06, WPN14, KHMHL20]). There has also been much work on the emergence of social norms among groups of agents (e.g., [MMDVP19]), both in the agent-based modelling community, where the tasks are typically abstracted matrix or network games (see for example [YZRL13, VSMS13]), and more recently in the multi-agent reinforcement learning community (e.g., [AHM19, HLP + 18, TS09, HS15, PBGRLS17, KHMHL20]), where the state of the art is temporally and spatially extended gridworld games. This work lays a foundation for addressing important outstanding problems in Cooperative AI. AI research could explore the space of distributed institutions that promote desirable global behaviours [Had20, CAB11] and work to design algorithms which can predict which norms will have the best properties. Such algorithms already have a strong foundation to build on, including languages for expressing societal objectives and solving them through model checking, identifying agents that are critical to achieving the global objective, or dealing with non-compliance [Gro07, ÅvdHW07, ÅvdHTW09, BCM17]. Furthermore, we need to better understand how systems comprising mixtures of humans and machines devise and enforce norms, and develop AI algorithms that are able to generalize social norms to different circumstances and co-players.
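To see how a sanctioning norm can make contribution self-enforcing, consider the following toy public goods calculation; the group size, multiplier, and fine are invented numbers, and the sketch deliberately ignores the cost of punishing.

```python
# A four-person public goods game with a peer-sanctioning norm. Everyone else
# contributes and fines any free-rider; we check the focal agent's incentive.
GROUP_SIZE = 4
MULTIPLIER = 2.0      # total contributions are doubled and shared equally
CONTRIBUTION_COST = 1.0
FINE = 1.5            # sanction each norm-follower imposes on a free-rider

def payoff(my_action):
    others = GROUP_SIZE - 1                      # all of them follow the norm
    contributions = others + (1 if my_action == "contribute" else 0)
    public_share = MULTIPLIER * contributions / GROUP_SIZE
    if my_action == "contribute":
        return public_share - CONTRIBUTION_COST
    return public_share - FINE * others          # sanctioned by each other member

print("contribute:", payoff("contribute"))   # 2.0 - 1.0 = 1.0
print("free-ride :", payoff("free-ride"))    # 1.5 - 4.5 = -3.0
# With the sanction in place, free-riding is strictly worse, so the informal
# rule "contribute, and punish those who do not" is self-enforcing here.
```

Of course, paying to punish raises a second-order free-rider problem of its own, which is part of why the emergence and enforcement of norms is studied as a research problem in its own right.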
A specific decentralized dynamic for which institutions can often help is bargaining, which refers to the methods and protocols through which agents may attempt to negotiate welfare-improving arrangements. One may view a platform that enables such negotiation or bargaining as an institution. Such work includes automated negotiation systems as well as formal protocols and frameworks for multi-agent contracts, bargaining, and argumentation [Smi80, RZ94, SL + 95, Kra97, JFL + 01, MPW02, KM06, LS01, BFG + 13]. Challenges in this space include the creation of formal specifications and protocols enabling interactions, computationally tractable algorithms to be used by agents, and better understanding of how to support interactions so as to yield high social welfare [Kra97, JFL + 01, YKL + 07, LMMZ18]. Centralized institutions involve an authority able to shape the rules and constraints on the actions of the participants. Our understanding of these institutional structures and their properties has been heavily influenced by social choice theory, game theory, and mechanism design. There is often a focus on the rules and axioms that should be satisfied so as to ensure desirable social outcomes are reached or to provide incentives such that agents perceive cooperation to be in their self-interest. Social choice theory studies the aggregation of agents' preferences in support of some collective choice. Voting, for example, is a widely used class of social choice mechanism. Much of the research is axiomatic in nature: a set of desirable properties is proposed and then the question as to whether there exists a set of voting rules that satisfies these properties is explored. For example, Arrow's Theorem, a central result in social choice theory, states that it is generally impossible to have a non-dictatorial voting rule that also satisfies a number of reasonable properties [Arr51] . However, there have been significant advances in relaxing the assumptions of Arrow's Theorem and identifying and characterizing families of voting rules by sets of properties [BCE + 16], including deepening our understanding of the impact of computation on those properties (e.g., [BTT89, BITT92, FP10, FHH10, CS02a, CS02b, ZPR09, ML17] ). The insights and theoretical foundations provided by social choice theory can provide guidance in the construction of cooperative institutions. Another important strand of social choice work seeks to develop notions or protocols for fairness, particularly in the context of resource allocation or reward sharing. A perceived lack of fairness in how resources, awards, or credit is shared across a group may lead to the breakdown of cooperative structures. Such problems are common for humans in settings such as partnership or company dissolutions, divorces, dividing inheritances, or even determining how much effort to contribute towards a group project. Institutions or protocols have been developed to address these concerns, including the famous "cut and choose" protocol for divisible resources [BT96, AM16] , maximum Nash welfare for both divisible and indivisible resource settings [NJ50, CKM + 19], and the Shapley value for group reward division [Sha53] . 
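As a concrete instance of one such protocol, the Shapley value for a small group can be computed directly by averaging each player's marginal contribution over all orders in which the group might assemble; the three-player characteristic function below is invented for illustration.

```python
from itertools import permutations

# Value created by each coalition of players (an illustrative assumption).
PLAYERS = ("A", "B", "C")
VALUE = {
    frozenset(): 0, frozenset("A"): 10, frozenset("B"): 20, frozenset("C"): 30,
    frozenset("AB"): 40, frozenset("AC"): 50, frozenset("BC"): 60,
    frozenset("ABC"): 90,
}

def shapley(player):
    # Average the player's marginal contribution over all join orders.
    orders = list(permutations(PLAYERS))
    total = 0.0
    for order in orders:
        joined_before = frozenset(order[: order.index(player)])
        total += VALUE[joined_before | {player}] - VALUE[joined_before]
    return total / len(orders)

print({player: shapley(player) for player in PLAYERS})
# {'A': 20.0, 'B': 30.0, 'C': 40.0} -- the shares sum to the grand coalition's 90.
```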
The closely related field of mechanism design studies when and how it is possible to design rules or institutional frameworks that align the incentives of individual agents so that a socially desirable outcome can be achieved, ideally in such a way that it is in each agent's best interest to truthfully share or communicate the relevant preference information. The challenge becomes one of communication and incentives; the self-interest of agents may lead them to misreport or miscommunicate relevant information, making it impossible for the institution to select the appropriate outcome. While there are several impossibility results that highlight the boundary of what can be achieved for self-interested agents due to their strategic behaviour [Arr51, Gib73, Sat75, GL77, Rob79], there are islands of possibility. In particular, if there is certain structure in the preferences or utility functions of agents, then mechanisms can be designed to incentivize honesty, resulting in a socially optimal outcome being selected. Examples of such mechanisms include the class of median mechanisms for agents with single-peaked preferences [Bla58] and the class of Groves mechanisms for agents with quasi-linear utility functions [Gro73, Vic61, Cla71]. This latter class of mechanisms has drawn particular attention since quasi-linear utility functions naturally capture any setting where transfers can be made between agents or between agents and the mechanism itself, with the transfers often interpreted as payments. Auctions, widely used to allocate resources efficiently across groups of self-interested agents, are one example of mechanisms for settings with quasi-linear utility. There remain numerous important outstanding problems that require further study as we explore the use of social choice and mechanism design to support Cooperative AI. For example, many of the foundations of social choice theory are axiomatic in nature; are these the right axioms if we consider designing institutions for collectives of humans and agents? Is it possible to combine axiomatic methods with data-driven processes [dL20, AL19], or are there particular characterizations of social choice rules that will prove to be especially useful for supporting cooperation and coordination (e.g., [JMP + 14, CPS13])? How can we apply mechanism design to set incentives that improve social goods such as welfare or fairness [AG18]? Finally, there is a strong interplay between understanding, communication, and social choice which deserves further exploration [BR16]. The multi-agent systems research community has long advocated for the use of mechanism design for solving coordination problems between agents [RZ94]. With the advent of more complex agents, are there novel coordination and cooperation problems for which insights from mechanism design might prove useful, and what is the interplay between incentive structures, computation, and Cooperative AI [NR01, NR07]? For example, it has been shown that multi-robot pathfinding can be modelled as a combinatorial auction problem, where non-colliding paths are allocated to the robots (e.g. [ASS15]). Going forward, might it be possible to extend such ideas to navigation problems involving autonomous vehicles? There is interest in using data-driven approaches to design mechanisms for specific instances [CS02a, San03].
For instance, tools from deep learning and reinforcement learning have recently been used to automatically design auctions (e.g. [DFJ + 12, DFN + 19, TSG + 19, Tan17]). It might be useful to move beyond auctions and explore new institutional structures that best serve the goals of cooperative behaviour. As mentioned above, reputation provides a mechanism for aligning incentives for cooperation and for addressing commitment problems. It does so by creating a valuable asset-the reputation itself-which can then be put up as collateral to encourage cooperative behaviour. Reputation systems can exist in decentralized settings, but can often be improved by a central authority. In typical reputation systems [RZ02, MGM06, SS05, FG12, SNP13, PSM13, LWQL18], agents may rate each other so as to build trust between participants. These systems can be designed to reveal particular information regarding the behaviour of other agents, such as whether they have held up their part of a bargain or agreement in past transactions [FKM + 05, JF09, TPJL06, RHJ04]. Reputation systems are already prevalent and used by humans on e-commerce websites, such as eBay or Amazon, and information websites such as Quora or Stack Exchange [Del03, LAH07]. While prominent and functional reputation systems are already in place in the private sector, more research is needed to understand the relative benefits of different reputation mechanisms [HS13]. The multi-agent research community has long recognized that trust is central for supporting cooperation and coordination [CF98, WS07, Sen13, CSLC19], and has used such models for maintaining, disseminating, and using information regarding the behaviour of agents [JG09, YSL + 13]. This research has recognized the importance of socio-economic models of trust [MDS95], explored the relationship between trust and norms [LBK + 09], and used trust to support community formation [KLC09]. It has also long been argued that trust will be instrumental for supporting the acceptance of robots [Kam13] and, more broadly, acceptance of and cooperation between humans and machines [MMG16, dWVV17]. While we highlighted some promising research directions above, we conclude this section with some overarching directions we believe may advance the Cooperative AI agenda. One promising avenue for machine learning research is institutional design, whereby (human) participants determine desiderata that the institution or mechanism should achieve and leave the design thereof to an AI agent. This could open the door to new, innovative approaches for tackling long-established problems [ZTS + 20]. These methods could also enable institutions that take into account a richer set of features than have typically been considered in the prior literature. For instance, while social choice scholarship on voting systems generally limits preference representation across alternatives to ordinal or cardinal scores, the Polis platform aims to model user preferences by taking into account a broad set of features, including opinions articulated via natural language [Pol19]. It may end up that, as has happened in many domains, machine learning methods applied to mechanism and institutional design offer major performance improvements at the cost of parsimony, interpretability, and closed-form provability-a trade-off that raises interesting questions about what we truly value in social systems.
To what extent does the liberal affinity for democratic mechanisms derive from the accountability and improved outcomes that they empirically create, and to what extent is it due to the simplicity and transparency of a system such as single-member plurality voting? The growing mistrust of election integrity in several countries suggests that the latter category has significant weight. This question is structurally similar to active debates around the implications of interpretability and bias in machine learning systems in scientific domains (when should we be satisfied with a system merely on the grounds of empirical performance and when should we push for an "explanation" of its decision?). All scientific and technological advances can have potential downsides, posing risks or harms. An important part of responsible research is explicit consideration of these possible downsides and exploration of strategies to mitigate the risks. We see possible downsides with Cooperative AI falling into three categories: (1) Cooperative competence itself can cause harms, such as by harming those who are excluded from the cooperating set and by undermining pro-social forms of competition (i.e., collusion). (2) Advances in cooperative capabilities may, as a byproduct, improve coercive capabilities (e.g., deception). (3) Successful cooperation often depends on coercion (e.g., pro-social punishment) and competition (e.g., rivalry as an impetus for improvement), making it hard to pull these apart. Advances in cooperative competence, by definition, should improve the welfare of the cooperating agents. However, enhanced cooperative competence may harm others who are excluded from the cooperating set. For example, mechanisms that promote cooperation among criminals-such as cryptocurrencies, the darkweb, and associated reputation systems-can be socially harmful [PNC17] . Often, individuals collaborate to engage in "rent seeking": working together not to increase productivity but to transfer value from society to themselves [MSV93, Ols82] . Buyers or sellers in an auction can cooperate by colluding to set prices in a self-serving and welfare-harming way [Pes00] . In international politics, greater national cohesion and cooperation in one country can pose risks to its rivals. "Cooperation" can also be harmful when it undermines pro-social competition; this often manifests as collusion. Recent work in the fields of economics and law argues that the use of AI for determining prices may increase the risk of collusion between firms even without explicit communication [CCDP20, ES17] . This would be a concerning development as competition can be a powerful mechanism for producing pro-social outcomes by incentivising effort, revealing private information, and efficiently allocating resources. It is for this reason that productive societies forbid various kinds of anti-social "cooperation", such as students sharing answers during examinations, peer reviewers at a scientific journal soliciting payment from authors for a favorable review, firms coordinating on a strategy for setting prices, and policymakers soliciting personal payments. An open question, then, concerns when particular kinds of increases in cooperative capabilities lead to a net positive or net negative social outcome. What might a social planner do to incentivize the right kinds of cooperation? 
Many specific capabilities that are useful for cooperation may also be useful for coercion, defined as efforts by an actor to get something from another through threats or the use of force. Arbitrary improvements in coercive competence are generally not regarded as socially desirable: they may lead to an (undeserved) transfer in value from others to the more coercively competent actor in a manner that exposes society to harms, threats, and (illegitimate) uses of force. They may also lead to an increase in the use of coercion, which is generally regarded as undesirable. Whereas cooperation at least involves welfare improvements among the cooperative set, coercion can not even guarantee that. There are many examples of capabilities which are useful for coercion as well as cooperation, and of coercive capabilities which may be learned as a byproduct of learning cooperative capabilities. While understanding is often essential for cooperation, so is it for successful coercion; understanding a target's weaknesses and vulnerabilities confers an important advantage. 17 In order to learn cooperative communication in mixed-motives settings, one must learn to be able to send messages interpreted by others as honest and to discern honesty from deception in others' messages; but, coercive communication benefits from the same abilities! Similarly, commitment is often essential for making credible promises, but also threats. Insights into cooperation-oriented institutional design may also be useful for promoting obedience or for manipulating institutions to serve a narrow set of interests. Finally, the mechanisms and welfare implications of cooperation and coercion are often deeply intermixed, with coercive capabilities sometimes playing a critical role in cooperation. Punishment, for instance, is often critical for sustaining cooperation [VLRY14, AHV03, BGB10, KHMHL20]. A prominent example is the use of legal contracts to facilitate cooperation by enabling each party to submit themselves to punishment in the event of breach of contract. Just as cooperative competence is not always socially beneficial, depending on who gains this competence, so is coercive competence not always socially harmful. Many believe it to be beneficial for responsible parents to have "coercive capabilities", such as being able to physically restrain their children from running out on the road. Similarly, it is often regarded as a requisite of a functional state for the state to possess a monopoly of violence over sub-state actors. Finally, one of the greatest drivers of cooperative competence has been inter-group competition. Competition facilitates learning by providing a smooth, motivating, and scalable curriculum [LHLG19] . The major transitions in biological evolution and cultural evolution, which can be understood as achievements of cooperation, have all plausibly been driven by inter-group competition. Thus it may be that a valuable way to learn cooperative skills is to expose agents to strong inter-group competitive pressures. In so doing, it may be hard to differentially train cooperative skill without also training skill in coercion and competition. In summary, there are potential downsides to developments in Cooperative AI. Acknowledging and studying these issues can help guide future research in ways that maximize positive impact and mitigate risks. When are increases in cooperative competence socially beneficial? 
When do the exclusion effects or correlated increases in coercive competence outweigh the benefits of increases in cooperative competence? As a baseline, we offer the Hypothesis that Broad Cooperative Competence is Beneficial: large and broadly distributed increases in cooperative competence tend to be, on net, broadly welfare improving. We offer two theoretical arguments and an empirical argument for this hypothesis. Firstly, the first-order effect of greater cooperation is, by definition, to improve the welfare of those who are cooperating. It is thus only in the second-order effects that exclusionary harms arise. It seems plausible that, with broadly distributed gains in cooperative competence, the positive first-order effects will, in aggregate, often dominate adverse second-order effects. Of course, the strength of this argument will clearly depend on the social context and the nature of the increase in cooperative competence; we regard investigating this as an important open question in the science of cooperation. Secondly, mutual gains in coercive capabilities tend to cancel each other out, much as mutual training in chess tends not to induce a large shift in the balance of skill. To the extent, then, that research on Cooperative AI unavoidably also increases coercive skill, the hope is that those adverse impacts will largely cancel, whereas the increases in cooperative competence will be additive, if not positive complements. This argument applies most clearly to coercive capabilities that are not destructive but merely lead to transfers of wealth between agents. Nevertheless, mutual increases in destructive coercive capabilities will also often cancel each other out through deterrence. The world has not experienced more destruction with the advent of nuclear weapons, because leaders possessing nuclear weapons have greatly moderated their aggression against each other. By contrast, cooperation and cooperative capabilities lead to positive feedback and are reinforcing; it is in one's interests to help others learn to be better cooperators. Finally, plausibly as a consequence of the above, historical examples of larger-scale cooperative structures seem to have been more effective than smaller parasitic or rivalrous ones. The "major transitions" in evolution describe the systematic increase in complexity and functional differentiation in biological evolution: prokaryotes to eukaryotes, protists to multicellular organisms, individuals to colonies, primates to human society. Transitions in cultural evolution show the same trend: tribes to cities, to territorial states, to larger and more cohesive states, to globalization. Thus, plausibly, enhanced cooperative capabilities will on net favor larger-scale, more inclusive, and broadly beneficial cooperation. Are there general insights about when increases in cooperative competence are most likely to have positive impacts on welfare? Might they depend on distributions of power and cooperative competence? By working with the fields of economics, governance, and institutional design, can we develop a general theory of when restrictions on certain kinds of cooperation are most socially beneficial or harmful? Can we identify capabilities which are disproportionately useful for (Pareto-improving) cooperation, as opposed to coercion? For example, the development of skills for cheap-talk communication is plausibly cooperation-biased, since an agent can choose to ignore a cheap-talk channel if it is on net not rewarding [JLH + 19].
Are there general insights into when increases in cooperative competence are most likely to have positive impacts on welfare? Might they depend on distributions of power and cooperative competence? By working with the fields of economics, governance, and institutional design, can we develop a general theory of when restrictions on certain kinds of cooperation are most socially beneficial or harmful? Can we identify capabilities that are disproportionately useful for (Pareto-improving) cooperation, as opposed to coercion? For example, the development of skills for cheap-talk communication is plausibly cooperation-biased, since an agent can choose to ignore a cheap-talk channel if it is on net not rewarding [JLH+19]. Other advances in communication may be especially useful for honest revelation rather than deception, such as trusted mediators, reputation systems, trusted hardware that can verify observations, or norms against lying. For commitment, perhaps multilateral commitment mechanisms, such as legal contracts, are cooperatively biased relative to unilateral commitment mechanisms? Can we build test environments for evaluating the coercive disposition of AI agents, and decide how AIs should behave with respect to deception and threats? Even if competition is useful for learning cooperative capabilities, are there ways of separating the gains in cooperative and coercive capabilities, and of prioritizing instilling the former in our AIs?

Cooperation was important to the major transitions in evolution, has been foundational to the human story, and remains critical for human well-being. Problems of cooperation are also complex and hard, and they seem to scale in difficulty with the magnitude of our cooperative achievements. Cooperation is thus an attractive target for research on intelligence. The field of artificial intelligence has much to contribute to this research frontier. Advances in AI are providing new scientific tools for understanding social systems and for devising novel cooperative structures. Developments in AI are themselves being deployed in society as tools, infrastructure, and agents; it is imperative that this deployment be done in a way that promotes human cooperation.

Because of the wide-ranging and deep implications that cooperation has for the human condition, research on and knowledge about cooperation are dispersed across a great number of disciplines in the natural, engineering, and social sciences. Crucial fields include biology, sociology, social psychology, anthropology, economics, history, international relations, and computer science. As a consequence, many of the open problems raised in this article arise at the intersection of AI with those other fields. For research in Cooperative AI to succeed, it will therefore be necessary to bridge the gaps between disciplines, develop a common vocabulary for problems of cooperation, and agree on goals that can be pursued and achieved cooperatively. As the field of AI takes increasingly confident strides in its ambition to build intelligent machine agents, it is critical to attend to the kinds of intelligence humanity most needs. Necessarily among these is cooperative intelligence.

For valuable input and discussion, the authors would like to thank Markus Anderljung, Asya Bergal, Matt Botvinick, Ryan Carey, Andrew Critch, Owen Cotton-Barratt, Nick Bostrom, Owain Evans, Ulrike Franke, Ben Garfinkel, Gillian Hadfield, Eric Horvitz, Charlotte Jander, Shane Legg, Sören Mindermann, Luke Muehlhauser, Rohin Shah, Toby Shevlane, Peter Stone, Robert Trager, Aaron Tucker, Laura Weidinger, and especially Jan Leike, Chris Summerfield, and Toby Ord. The project also benefited from thoughtful feedback from researchers across DeepMind, specifically the Multi-Agent team, as well as from seminars at the Centre for the Governance of AI, Future of Humanity Institute, University of Oxford. We would also like to thank Alex Lintz for excellent research assistance; Julia Cohen, Charlotte Smith, and Aliya Ahmad at DeepMind for their support; and Wes Cowley for copy editing.

The following statement of the intellectual agenda of a 2020 NeurIPS workshop was published on September 1, 2020, at cooperativeAI.com.
It was circulated to potential co-organizers from June 4, 2020.

Problems of cooperation-in which agents seek ways to jointly improve their welfare-are ubiquitous and important. They can be found at all scales ranging from our daily routines-such as highway driving, communication via shared language, division of labor, and work collaborations-to our global challenges-such as disarmament, climate change, global commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate, in our social intelligence and skills. Since machines powered by artificial intelligence and machine learning are playing an ever greater role in our lives, it will be important to equip them with the skills necessary to cooperate and to foster cooperation.

We see an opportunity for the field of AI, and particularly machine learning, to explicitly focus effort on this class of problems which we term Cooperative AI. The goal of this research would be to study the many aspects of the problem of cooperation, and innovate in AI to contribute to solving these problems. Central questions include how to build machine agents with the capabilities needed for cooperation, and how advances in machine learning can help foster cooperation in populations of agents (of machines and/or humans), such as through improved mechanism design and mediation.

Such research could be organized around key capabilities necessary for cooperation, including: understanding other agents, communicating with other agents, constructing cooperative commitments, and devising and negotiating suitable bargains and institutions. In the context of machine learning, it will be important to develop training environments, tasks, and domains in which cooperative skills are crucial to success, learnable, and non-trivial.

Work on the fundamental question of cooperation is by necessity interdisciplinary and will draw on a range of fields, including reinforcement learning (and inverse RL), multi-agent systems, game theory, mechanism design, social choice, language learning, and interpretability. This research may even touch upon fields like trusted hardware design and cryptography to address problems in commitment and communication. Since artificial agents will often act on behalf of particular humans and in ways that are consequential for humans, this research will need to consider how machines can adequately learn human preferences, and how best to integrate human norms and ethics into cooperative arrangements. Research should also study the potential downsides of cooperative skills-such as exclusion, collusion, and coercion-and how to channel cooperative skills to most improve human welfare.

Overall, this research would connect machine learning research to the broader scientific enterprise, in the natural sciences and social sciences, studying the problem of cooperation, and to the broader social effort to solve coordination problems. We are planning to bring together scholars from diverse backgrounds to discuss how AI research can contribute to the field of cooperation.
Autonomous helicopter aerobatics through apprenticeship learning Learning to play no-press diplomacy with best response policy iteration Mechanism design for social good The evolution of cooperation Understanding the impact of partner choice on cooperation and social norms by means of multi-agent reinforcement learning The carrot or the stick: Rewards, punishments, and cooperation Multi-agent system development based on organizations Repeated inverse reinforcement learning Adaptive agents and multi-agent systems: adaptation and multi-agent learning Machine learning to strengthen democracy A discrete and bounded envy-free cake cutting protocol for any number of agents Occam's razor is insufficient to infer the preferences of irrational agents Machine-to-machine communication: An overview of opportunities Concrete problems in AI safety A formal model of open agent societies Advances in multi-robot systems An extended multi-agent negotiation protocol Why nations fail: The origins of power, prosperity, and poverty. Currency Social choice and individual values Autonomous agents modelling other agents: A comprehensive survey and open problems The evolution of human cooperation Social choice theory, game theory, and positive political theory Continuous adaptation via meta-learning in nonstationary and competitive environments Specifying norm-governed computational societies APRIL: Active preference learningbased reinforcement learning Multi-agent pathfinding as a combinatorial auction Subjectivity and correlation in randomized strategies Power in normative systems Normative system games The evolution of cooperation Commitment and observability in games Emergent reciprocity and team formation from randomized uncertain social preferences Social learning theory Strategies for cooperation in biological markets, especially for humans ReBeL: A general game-playing ai bot that excels at poker and more. blog Dota 2 with large scale deep reinforcement learning A comprehensive survey of multiagent reinforcement learning Extravehicular activity operations concepts under communication latency and bandwidth constraints Computational social choice Benja Fallenstein, Marcello Herreshoff, Patrick LaVictoire, and Eliezer Yudkowsky. Robust cooperation in the prisoner's dilemma: Program equilibrium via provability logic Developing multi-agent systems with JADE Does the autistic child have a "theory of mind Norms and value based reasoning: justifying compliance and violation Multi-Agent Programming Artificial intelligence & cooperation Learning to teach with a reinforcement learning agent The Hanabi challenge: A new frontier for AI research Evaluating practical negotiating agents: Results and analysis of the 2011 international competition Human-agent interaction Readings in Distributed Artificial Intelligence Coordinated punishment of defectors sustains cooperation and can proliferate when rare Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations Learning to understand goal specifications by modelling reward Does machine translation affect international trade? evidence from a large digital platform The grammar of society: The nature and dynamics of social norms How hard is it to control an election? 
Common ground and development Emergent tool use from multi-agent autocurricula Towards a structured design of electronic negotiations The theory of committees and elections Coordinating multiple agents for workflow-oriented process orchestration Collaborative multi-robot exploration Social norms Convergence and no-regret in multiagent learning An empirical analysis of smart contracts: platforms, applications, and design patterns Observational learning by reinforcement learning Incomplete information with communication in voting Handbook on Computational Social Choice, chapter 10 Intention, plans, and practical reason TrustedDB: A trusted hardware-based database with privacy and data confidentiality Superhuman AI for multiplayer poker Fair Division: From cake-cutting to dispute resolution The computational difficulty of manipulating an election Credible commitments, contract enforcement problems and banks: Intermediation as credibility assurance The evolution of individuality Introduction to normative multiagent systems Open issues for normative multi-agent systems Modelling social action for AI agents Grounding in communication Perspectives on Socially Shared Cognition The dynamics of reinforcement learning in cooperative multiagent systems Understanding the functions of norms in social groups through simulation. Artificial societies: The computer simulation of social life Communication, commitment, and deception in social dilemmas: experimental evidence. Quaderni -working paper dse no Autonomous norm acceptance Artificial intelligence, algorithmic pricing, and collusion Blockchains and smart contracts for the internet of things Deliberative normative agents: Principles and architecture An algorithmic characterization of multi-dimensional mechanisms Optimal multi-dimensional mechanism design: Reducing revenue to welfare maximization A short introduction to computational social choice Principles of trust for MAS: Cognitive anatomy, social importance, and quantification Blockchain disruption and smart contracts Deep blue Rational ritual: Culture, coordination, and common knowledge Ai research considerations for human existential safety (ARCHES) The unreasonable fairness of maximum Nash welfare Algorithmic and human teaching of sequential decision tasks Multipart pricing of public goods Deep reinforcement learning from human preferences Cooperating with machines When do noisy votes reveal the truth? 
A parametric, resource-bounded generalization of Löb's theorem, and a robust cooperation criterion for open-source game theory Strategic information transmission Complexity of manipulating elections with few candidates Vote elicitation: Complexity and strategyproofness Computing the optimal strategy to commit to On the utility of learning about humans for human-AI coordination Trusted AI and the contribution of trust modeling in multiagent systems A speech-act-based negotiation protocol: design, implementation, and test use Delay-aware multi-agent reinforcement learning for cooperative and competitive environments Goal-directed control and its antipodes Disarmament games The digitization of word of mouth: Promise and challenges of online feedback mechanisms Payment rules through discriminant-based classifiers Autonomous agents with norms A formal specification of dMARS From desires, obligations and norms to goals Testing axioms against human reward divisions in cooperative games Legibility and predictability of robot motion Multiagent traffic management: A reservation-based intersection control mechanism Simultaneously learning and advising in multiagent reinforcement learning Negotiating with other minds: the role of recursive theory of mind in negotiation with incomplete information Negotiating with other minds: the role of recursive theory of mind in negotiation with incomplete information Biases for emergent communication in multi-agent reinforcement learning When does communication improve coordination? Artificial intelligence & collusion: When computers inhibit competition Learning to communicate with deep multi-agent reinforcement learning Communication, coordination and Nash equilibrium Meaning and credibility in cheap-talk games Learning with opponent-learning awareness Multi-agent common knowledge reinforcement learning Rationalist explanations for war Signaling foreign policy interests: Tying hands versus sinking costs Commitment problems and the spread of ethnic conflict Why do some civil wars last so much longer than others Two kinds of cooperative AI challenges: Game play and game design Multi-agent systems: an introduction to distributed artificial intelligence At what level (and in whom) we trust: Trust across multiple organizational levels Using complexity to protect elections The biology and evolution of speech: a comparative analysis A specification of the agent reputation and trust (ART) testbed: experimentation and competition for trust in agent societies World Politics: Interests, Interactions, Institutions: Third International Student Edition Problems of somatic mutation and cancer From institutions to code: Towards automated generation of smart contracts Deep learning for revenueoptimal auctions with budgets AI's war on manipulation: Are we winning? 
AI Magazine Bayesian action decoder for deep multi-agent reinforcement learning Using similarity criteria to make issue trade-offs in automated negotiations The evolution of teaching When security games go green: Designing defender strategies to prevent poaching and illegal fishing Modeling human-agent interaction with active ontologies Approaches to evaluating teacher effectiveness: A research synthesis A fully homomorphic encryption scheme Communication and interaction in multi-agent planning Manipulation of voting schemes: a general result On legal contracts, imperative and declarative smart contracts, and blockchain systems A reputation system for peer-to-peer networks Characterization of satisfactory mechanisms for the revelation of preferences for public goods Incentives for sharing in peer-to-peer networks ML confidential: Machine learning on encrypted data Proof-of-event recording system for autonomous vehicles: A blockchain-based solution A formal analysis and taxonomy of task allocation in multi-robot systems Gesture, sign, and language: The coming of age of sign language and gesture studies Coordination, commitment, and enforcement: The case of the merchant guild Generative adversarial nets The belief-desire-intention model of agency Evolution of human vocal production Reputation and coalitions in medieval trade: evidence on the Maghribi traders Cultural beliefs and the organization of society: A historical and theoretical reflection on collectivist and individualist societies Institutions and the path to the modern economy: Lessons from medieval trade Incentives in teams Designing invisible handcuffs: Formal investigations in institutions and organizations for multi-agent systems Negotiating at the United Nations: A Practitioner's Guide. Routledge Autonomous bidding agents in the trading agent competition Economics of conflict: An overview. Handbook of Defense Economics Game-theoretic interpretations of commitment The normative infrastructure of cooperation Learning to resolve alliance dilemmas in many-player zero-sum games Sapiens: A Brief History of Humankind Homo Deus: A brief history of tomorrow. Random House Foundations of human sociality: Economic experiments and ethnographic evidence from fifteen small-scale societies The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter The foundations of welfare economics A survey of multi-agent organizational paradigms Social learning: an introduction to mechanisms, methods, and models Interplay of approximate planning strategies Heather Roff, and Thore Graepel. 
Inequity aversion resolves intertemporal social dilemmas Other-play" for zero-shot coordination A simple adaptive procedure leading to correlated equilibrium Cooperative inverse reinforcement learning Metamagical themas: Questing for the essence of mind and pattern Games, threats, and treaties: understanding commitments in international relations Equality of opportunity in supervised learning Macau: A basis for evaluating reputation systems Deep recurrent Q-learning for partially observable MDPs Emergence of language with multi-agent games: Learning to communicate with sequences of symbols JAM: A BDI-theoretic mobile agent architecture Political order in changing societies Multi-issue negotiation protocol for agents: Exploring nonlinear utility spaces Reward learning from human preferences and demonstrations in Atari Human-level performance in 3d multiplayer games with population-based reinforcement learning Coordination techniques for distributed artificial intelligence On agent-based software engineering The logic of images in international relations Mechanisms for making crowds truthful Automated negotiation: prospects, methods and challenges. Group Decision and Negotiation Intrinsic social motivation via causal influence in multi-agent RL Social influence as intrinsic motivation for multi-agent deep reinforcement learning The reasons for wars: an updated survey Reward-rational (implicit) choice: A unifying formalism for reward learning Diverse randomized agents vote to win A roadmap of agent research and development Robocup: The robot world cup initiative Welfare propositions of economics and interpersonal comparisons of utility Curing robot autism: a challenge The strategic structure of offer and acceptance: Game theory and the law of contract formation Silly rules improve the capacity of agents to learn stable enforcement and compliance behaviors A commitment folk theorem Designing and building a negotiating automated agent Review of text-to-speech conversion for English Exchanging reputation information between communities: A payment-function approach Adaptive agent negotiation via argumentation Natural language does not emerge'naturally'in multi-agent dialog Hawk: The blockchain model of cryptography and privacy-preserving smart contracts Institutions and social conflict Reliance, reputation, and breach of contract Interpersonal relations: Mixed-motive interaction Negotiation and cooperation in multi-agent environments. Artificial intelligence Strategic negotiation in multiagent environments ImageNet classification with deep convolutional neural networks Specification gaming: the flip side of ai ingenuity Reliability and learnability of human bandit feedback for sequence-to-sequence reinforcement learning Stackelberg vs. 
Nash in security games: An extended investigation of interchangeability, equivalence, and uniqueness The dynamics of viral marketing Giraffe: Using deep reinforcement learning to play chess The origins of language in teaching Flexible behaviour regulation in agent based systems Making smart contracts smarter Young children's understanding of cultural common ground Designing for human-agent interaction Stable opponent shaping in differentiable games Incentives for cooperation in peer-to-peer networks Um-prs: An implementation of the procedural reasoning system for multirobot applications Improving policies via search in cooperative partially observable games Autocurricula and the emergence of innovation from social interaction: A manifesto for multi-agent intelligence research Emergence of linguistic communication from referential games with symbolic and pixel input Scalable agent alignment via reward modeling: a research direction Clustering and sharing incentives in BitTorrent systems Twentyfive years of group decision and negotiation: a bibliometric overview Agent technology: Enabling next generation computing (A roadmap for agent based computing). AgentLink The dynamics of informational cascades: The Monday demonstrations in Leipzig Learning to coordinate: Co-evolution and correlated equilibrium Gifting in multi-agent reinforcement learning Multi-agent cooperation and the emergence of (natural) language. preprint Bargaining with limited computation: Deliberation equilibrium Implementing an untrusted operating system on trusted hardware BARS: a blockchain-based anonymous reputation system for trust management in VANETs Multi-agent reinforcement learning in sequential social dilemmas Evolution of social norms and correlated equilibria Mechanism design: How to implement social goals The swarm simulation system: A toolkit for building multi-agent simulations Sharad Mehrotra, and Nalini Venkatasubramanian. Multi-agent simulation of disaster response An integrative model of organizational trust Negotiating underwater space: The sensorium, the body and the practice of scuba-diving The evolution of agriculture in insects Identity crisis: anonymity vs reputation in P2P systems Taxonomy of trust: Categorizing P2P reputation systems Human-level control through deep reinforcement learning Computational aspects of strategic behaviour in elections with top-truncated ballots Norm emergence in multiagent systems: a viewpoint paper People do not feel guilty about exploiting machines Communication-efficient learning of deep networks from decentralized data Order within anarchy: The laws of war as an international institution Desiderata for agent argumentation protocols Mediation in situations of conflict and limited commitment Repeated games and reputations: long-run relationships Deepstack: Expertlevel artificial intelligence in heads-up no-limit poker Determining successful negotiation strategies: An evolutionary approach Why is rent-seeking so costly to growth? Artificial social systems K-implementation Strong mediated equilibrium Negotiation principles SecureML: A system for scalable privacy-preserving machine learning Learning agent communication under limited bandwidth by message pruning Automated mechanism design without money via machine learning National Popular Vote Inc. 
Agreement among the states to elect the president by national popular vote Learning social learning A review of deep learning based speech synthesis The bargaining problem Learning to negotiate optimally in nonstationary environments Improved human-robot team performance through cross-training, an approach inspired by human team training practices Institutions and credible commitment How might people interact with agents PRESAGE: A programming environment for the simulation of agent societies Algorithms for inverse reinforcement learning Algorithmic mechanism design Computationally feasible VCG mechanisms Evolution of indirect reciprocity by image scoring Speech recognition using deep neural networks: A systematic review Violence and social orders: A conceptual framework for interpreting recorded human history WaveNet: A generative model for raw audio Learning to teach in cooperative multiagent reinforcement learning The ease and extent of recursive mindreading, across implicit and explicit tasks Google's duplex: Pretending to be human. Intelligent Systems in Accounting A machine-learning approach to automated negotiation and prospects for electronic commerce The Rise and Decline of Nations: Economic Growth, Stagflation, and Social Rigidities Game theory and political theory: An introduction Consensus and cooperation in networked multi-agent systems The building blocks of interpretability. Distill A behavioral approach to the rational choice theory of collective action: Presidential address Collective action and the evolution of social norms Strategic alliance structuring: A game theoretic and transaction cost examination of interfirm cooperation The emergence of altruism as a social norm An adaptive agent bidding strategy based on stochastic modeling A study of collusion in first-price auctions Multi-agent systems in a distributed smart grid: Design and implementation Toward a mechanistic psychology of dialogue Voting in multi-agent systems Cooperative multi-agent learning: The state of the art No press diplomacy: Modeling multi-agent gameplay Order without law: Reputation promotes cooperation in a cryptomarket for illegal drugs Polis: Input crowd, output meaning ALVINN: An autonomous land vehicle in a neural network the shadow of power: States and strategies in international politics War as a commitment problem Playing games for security: An efficient exact algorithm for solving Bayesian Stackelberg games Computational trust and reputation models for open multi-agent systems: a review GloVe: Global vectors for word representation Game theory and decision theory in multiagent systems Developing intelligent agent systems: A practical guide Economic reasoning and artificial intelligence Adversarial robustness through local linearization A taxonomy of 2×2 games Google's driverless cars run into problem: Cars with drivers. 
The New York Times Research priorities for robust and beneficial artificial intelligence The burning ships of Hernán Cortés Flocks, herds and schools: A distributed behavioral model The topology of the 2x2 games: A new periodic table Spontaneous giving and calculated greed Learning complex dexterous manipulation with deep reinforcement learning and demonstrations Human cooperation The characterization of implementable choice rules Regulation of division of labor in insect societies Multi-agent coordination as distributed logic programming Securities regulation as lobster trap: A credible commitment theory of mandatory disclosure Algorithmic game theory Social heuristics shape intuitive cooperation The neuroscience of social decision-making A Computational Theory of Grounding in Natural Language Conversation Human compatible: Artificial intelligence and the problem of control. Penguin Rules of encounter: designing conventions for automated negotiation among computers Trust among strangers in internet transactions: Empirical analysis of eBay's reputation system SAE On-Road Automated Vehicle Standards Committee et al. Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles Limitations of the Vickrey auction in computational multiagent systems Automated mechanism design: A new application area for search algorithms Strategy-proofness and Arrow's conditions: Existence and correspondence theorems for voting procedures and social welfare functions Reinforcement learning: An introduction Human-agent teamwork and adjustable autonomy in practice Locally noisy autonomous agents improve global human coordination in network experiments Dynamic models of segregation The Strategy of Conflict Is imitation learning the route to humanoid robots The construction of social reality The unreasonable effectiveness of deep learning in artificial intelligence Benefits of assistance over reward learning in cooperative AI Contact tracing apps can help stop coronavirus. but they can hurt privacy A value for n-person games Review of speech-to-text recognition technology for enhancing learning A general reinforcement learning algorithm that masters chess, shogi, and go through self-play An artificial discourse language for collaborative negotiation Multiagent systems Finding friend and foe in multi-agent games Issues in automated negotiation and electronic commerce: Extending the contract net framework Multiagent systems: Algorithmic, game-theoretic, and logical foundations The contract net protocol: High-level communication and control in a distributed problem solver The future of disaster response: Humans working with multiagent teams using DEFACTO Learning to summarize from human feedback Speaking our minds: Why human communication is different, and how language evolved to make it special Social behavior for autonomous vehicles Designing the user interface: strategies for effective human-computer interaction Job market signaling If multi-agent learning is the answer, what is the question? The origin of chromosomes I. Selection for linkage The major transitions in evolution Review on computational trust and reputation models Learning multiagent communication with backpropagation Information gathering actions over human internal state On the synthesis of useful social laws for artificial agent societies (preliminary report) On social laws for artificial agent societies: off-line design On the emergence of social conventions: modeling, analysis, and simulations Common ground. 
Linguistics and philosophy Multiagent systems: A survey from a machine learning perspective Security and game theory: algorithms, deployed systems, lessons learned Reinforcement mechanism design Beyond privacy tradeoffs with structured transparency TD-Gammon, a self-teaching backgammon program, achieves master-level play Reward learning from narrated demonstrations Judgment under uncertainty: Heuristics and biases A cultural perspective in social interface Why we cooperate Reputation and international cooperation: Sovereign debt across three centuries Travos: Trust and reputation in the context of inaccurate information sources Identifying teaching in wild animals Transfer learning for reinforcement learning domains: A survey A neural architecture for designing truthful and efficient auctions Vulnerable robots positively shape human conversational dynamics in a humanrobot team Grandmaster level in StarCraft II using multi-agent reinforcement learning Tractable multiagent planning for epistemic goals Handbook of knowledge representation Counterspeculation, auctions, and competitive sealed tenders Reward and punishment in social dilemmas Theory of games and economic behavior Organizing multiagent systems Robust convention emergence in social networks through self-reinforcing structures dissolution How children solve the two challenges of cooperation The origins of credible commitment to the market The 2020s political economy of machine translation Eliza -a computer program for the study of natural language communication between man and machine Adaptation and learning in multi-agent systems: Some remarks and a bibliography Multiagent systems: a modern approach to distributed artificial intelligence A market-oriented programming environment and its application to distributed multicommodity flow problems Learning to interactively learn and assist The 2001 trading agent competition Norm enforcement among the Ju/'hoansi bushmen An introduction to multiagent systems Languages for negotiation Designing collective behavior in a termite-inspired robot construction team Progress in the simulation of emergent communication and language Formal trust model for multiagent systems Design patterns for smart contracts in the ethereum ecosystem Towards playing full MOBA games with deep reinforcement learning A gift from knowledge distillation: Fast optimization, network minimization and transfer learning Autonomous service level agreement negotiation for service composition provision A normative framework for agent-based systems The economics of convention A survey of multiagent trust management systems Emergence of social norms through collective learning in networked agent societies Reliability in communication systems and the evolution of altruism Regret minimization in games with incomplete information Multi-agent learning with policy prediction Trust management through reputation mechanisms Algorithms for the coalitional manipulation problem The AI economist: Improving equality and productivity with AI-driven tax policies Teacher-student framework: a reinforcement learning approach HIBERT: Document level pre-training of hierarchical bidirectional transformers for document summarization A survey on deep learning based brain computer interface: Recent advances and new frontiers