key: cord-0863457-vpwuming authors: Catanzaro, Michele; Buchanan, Mark title: Network opportunity date: 2013-03-01 journal: Nat Phys DOI: 10.1038/nphys2570 sha: 941235d328750ca790cd0482732444b86331be0a doc_id: 863457 cord_uid: vpwuming

Our developing scientific understanding of complex networks is being usefully applied in a wide set of financial systems. What we've learned from the 2008 crisis could be the basis of better management of the economy - and a means to avert future disaster.

In his Foundation series of science fiction books, Isaac Asimov imagined a discipline called psychohistory - a basis for the scientific prediction of the future of a human society. Such a science would have been invaluable 10 years ago, when the economies of most of the developed world began hurtling towards the cliff of financial crisis and the current recession. Psychohistory is a fantasy, of course, but in the last decade alternatives to the mainstream economic and financial models have attracted a great deal of attention 1,2, with ideas streaming in from other disciplines. Physics is one of the main contributors. The field of 'econophysics', born in the 1970s, is now experiencing a second youth through the development and exploration of physics-inspired models for financial markets, banking networks and instabilities arising from leverage and other factors - all relevant to potential economic policy. Theoretical progress in the science of complex systems is being amplified and accelerated by information technology and a flood of quantitative data on human behaviour. The promise certainly isn't a real 'psychohistory', but science-based insight into the origins of the kind of systemic risk that caused the current crisis, and into how it might be controlled or mitigated with intelligent policies.

The "serious limitations" of existing economic and financial models were recognized in November 2010 by Jean-Claude Trichet - then president of the European Central Bank (ECB) - in his opening address 3 at the ECB Central Banking Conference in Frankfurt. "Macro models failed to predict the crisis and seemed incapable of explaining what was happening to the economy in a convincing manner. As a policymaker during the crisis, I found the available models of limited help. In fact, I would go further: in the face of the crisis, we felt abandoned by conventional tools", said Trichet in his speech.

These tools belong to two classes. On the one hand, econometric models rest on the idea that the future will be like the past. Early meteorologists looked through annals of past weather maps to find a map similar to the weather of the day, then forecast the next day's weather according to what was on the next page. Similarly, econometricians find predictive patterns in past data and hope they will provide insight into the future. In contrast, economists using dynamic stochastic general-equilibrium models - today considered the theoretical state of the art - deduce the probable future from a set of assumptions about the behaviour of economic agents (people, companies, institutions, countries and so on). These models assume that agents independently optimize their own interests on the basis of shared information, and that they have 'rational expectations' - that is, an approximately correct view of the distribution of probabilities of future states of the world - so that they always make optimal decisions in pursuit of those interests.
Standard economic thinking also sees markets as reflecting an overall balance or equilibrium. An idea known as the efficient-market hypothesis - still implicitly accepted by many as at least roughly correct - suggests that markets accurately evaluate the risk and profitability of assets, liabilities and portfolios. Hence, significant price changes occur only when new information reaches the market from outside. The more significant the external event, the larger the change in market prices: a linear effect, without feedbacks. Markets, in this view, should never be too far out of balance, and can be trusted to naturally seek and maintain a stable equilibrium.

Unfortunately, financial and economic reality frequently violates these assumptions. Regarding econometrics, the economic future is often wholly unlike the economic past. In 2006, for example, global markets were dominated by new financial products that did not even exist a few years before, including the bundles of mortgage debt known as collateralized debt obligations, and the derivatives linked to default risk called credit default swaps. Moreover, decades of data suggested that housing values would never fall simultaneously across the USA - although this is precisely what happened.

Human reality also routinely violates the assumptions of traditional economic theory: first because people aren't identical, but highly heterogeneous in their behaviour, and second because they often depart from the rational ideal. More importantly, a wealth of empirical evidence points to the strong interdependence of human actions. Companies or individuals don't act on the basis of independent judgement, but observe the actions of others, infer information (or what they take to be information) and act accordingly - often triggering herd-like behaviour.

Prominent empirical 'anomalies' reflect these shortcomings of theory. Great catastrophic events do not seem to be triggered by important external news, which equilibrium theory insists should be the case. For example, the 2008 shock was initiated by the subprime mortgage crisis, yet the amount of money tied up in these mortgages was much smaller than the global damage the crisis eventually caused. Equilibrium and stability don't seem to be useful concepts for describing a system that has experienced at least two bubbles - the dotcom and housing episodes - in only a single decade. On a more general level, mainstream models also fail to reproduce many of the most basic statistical realities, or 'stylized facts', of markets. For example, the probability distribution of returns across many different markets shows a preponderance of large price fluctuations over timescales ranging from milliseconds to weeks. Even in periods that seem relatively calm, their distribution is not Gaussian - as often assumed in standard models - but fat-tailed. Other stylized facts not captured by classic models include arbitrage efficiency and volatility clustering. "Rational expectations theory has brought macroeconomic analysis a long way over the past four decades. But there is a clear need to re-examine this assumption", said Trichet in 2010.
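The stylized facts just mentioned are straightforward to quantify. The following sketch, in Python with NumPy, computes three crude diagnostics for a price series: the frequency of returns beyond three standard deviations, the excess kurtosis, and the lag-one autocorrelation of absolute returns. Everything in it, from the synthetic price series to the three-sigma cut-off, is an arbitrary illustrative choice rather than anything taken from this article or its references; for the Gaussian random walk used here all three numbers stay near their Gaussian benchmarks, whereas real market data typically push them far higher.

```python
import numpy as np

def stylized_fact_checks(prices):
    """Three crude diagnostics of return statistics for a 1-D price series."""
    returns = np.diff(np.log(prices))                  # log-returns
    z = (returns - returns.mean()) / returns.std()     # standardized returns

    # Fat tails: fraction of |returns| beyond 3 standard deviations.
    # A Gaussian series would give roughly 0.27%.
    tail_fraction = np.mean(np.abs(z) > 3.0)

    # Excess kurtosis: 0 for a Gaussian, strongly positive for fat tails.
    excess_kurtosis = np.mean(z ** 4) - 3.0

    # Volatility clustering: lag-1 autocorrelation of absolute returns,
    # roughly 0 for an uncorrelated Gaussian series.
    a = np.abs(z)
    vol_clustering = np.corrcoef(a[:-1], a[1:])[0, 1]

    return tail_fraction, excess_kurtosis, vol_clustering

# Toy usage on a synthetic Gaussian random walk (the 'null' case);
# real market data typically give much larger values for all three numbers.
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(0.01 * rng.standard_normal(5000)))
print(stylized_fact_checks(prices))
```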
With respect to the efficient-market hypothesis, as former Federal Reserve Chairman Alan Greenspan admitted in congressional testimony in October 2008, "the whole intellectual edifice collapsed in the summer of last year".

The crisis brought into grim focus something that financial experts and regulators had not taken properly into account: systemic risk - that is, risk tied to the collective organization of the system as a whole, rather than to the status of any one particular institution. Built on notions of equilibrium and balance, and on independent action rather than interaction, conventional economic models do not naturally capture the mechanisms that can generate systemic risk. Capturing them requires a different way of thinking and a different way of modelling - a commitment to exploring complexity in full detail.

In a complex system of many strongly interacting parts - a solid or liquid, an ant colony, an ecosystem or an economy - collective properties emerge out of myriad feedbacks among interacting elements, which may be atoms, people or institutions. These feedbacks control the overall system dynamics and, in particular, determine stability or instability. The focus of complexity science is on these feedbacks and the architecture of interactions behind them. This is one of the central aspects missing in mainstream economic models. In the study of economics and finance as complex, adaptive systems, two perspectives have become essential: complex networks 4, and an information-centric approach focused on large databases.

In a speech 5 to the Financial Student Association in April 2009, Andy Haldane, executive director for financial stability at the Bank of England, noted that the dynamics of the financial crisis following the failure of Lehman Brothers in September 2008 shared significant similarities with the 2002 epidemic of Severe Acute Respiratory Syndrome (SARS). In both cases, a relatively small event hit a system and fear spread, ultimately resulting in enormous damage. "These similarities are no coincidence", Haldane noted. "Both events were manifestations of the behaviour under stress of a complex, adaptive network. Seizures in the electricity grid, degradation of ecosystems, the spread of epidemics and the disintegration of the financial system - each is essentially a different branch of the same network family tree."

In pursuing this similarity - which seems to be much more than a metaphor - several studies have made significant progress in identifying key elements that underlie the possibility of systemic failure in financial networks. The most basic source of interdependence between firms and financial institutions is the lending and borrowing of money. One example is the interbank market, which banks use to manage demands for cash by shuttling funds among themselves, often overnight. This process creates a dynamic network of banks linked through the interchange of funds. One of the early warnings of the 2008 crisis was the freezing of this market, as the mutual trust on which such lending depends evaporated.

In any such network, an obvious insight is that the largest institutions - those considered 'too big to fail' - present the greatest systemic risk. But the truth behind systemic risk is more subtle. For example, a recent study 6 demonstrated that the position of an institution in the banking network can be just as important as its size, and sometimes more so. To identify the systemically most important institutions, Battiston et al. 6 defined a new measure, DebtRank, inspired by Google's PageRank algorithm, by which the search engine ranks webpages.
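In rough outline, a DebtRank-style calculation seeds one bank with distress and propagates that distress through the web of mutual exposures, weighting each affected bank by its economic importance. The sketch below is a simplified distress-propagation measure in that spirit only, not the exact algorithm of ref. 6, and the three-bank exposure matrix, equities and weights are invented toy numbers.

```python
import numpy as np

def debtrank_style(exposure, equity, weight, seed, psi=1.0):
    """Simplified distress propagation in the spirit of DebtRank (toy version).

    exposure[i, j] : money bank j has lent to bank i, so distress at i means
                     potential losses for j.
    equity[j]      : capital buffer of bank j.
    weight[j]      : economic weight of bank j (for example its share of assets).
    seed, psi      : index and initial distress level (0-1) of the shocked bank.
    Returns the additional distress induced on the rest of the network.
    """
    n = len(equity)
    # Relative impact of i's distress on j, capped at a total loss of j's equity.
    impact = np.minimum(1.0, exposure / equity[np.newaxis, :])

    h = np.zeros(n)                         # distress levels, each in [0, 1]
    h[seed] = psi
    state = np.array(["U"] * n)             # U: untouched, D: distressed, I: inactive
    state[seed] = "D"

    while np.any(state == "D"):
        active = np.where(state == "D")[0]
        # Every distressed bank passes its current distress on exactly once.
        h_new = np.minimum(1.0, h + impact[active, :].T @ h[active])
        state[active] = "I"
        state[(h_new > 0) & (state == "U")] = "D"
        h = h_new

    return float(weight @ h - weight[seed] * psi)

# Invented three-bank example: shock bank 0 and measure the induced distress.
exposure = np.array([[0.0, 2.0, 1.0],
                     [1.0, 0.0, 3.0],
                     [0.5, 0.5, 0.0]])
equity = np.array([4.0, 4.0, 4.0])
weight = np.array([1 / 3, 1 / 3, 1 / 3])
print(debtrank_style(exposure, equity, weight, seed=0))
```

The returned number is large when the seeded bank's trouble spreads widely through the network, which is what can make a well-connected mid-sized bank riskier, in systemic terms, than an isolated giant.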
Battiston and colleagues found that a group of 22 financial institutions, which received most of the loans issued by the US Federal Reserve from 2008 to 2010, had become more central to the network, meaning that the default of any one of them would have a large economic impact on the whole network. Moreover, in November 2008 - at the peak of the financial crisis - DebtRank scores for the largest 20 or so banks showed that sheer bank size isn't as important as we have come to think. Institutions such as Barclays, Bank of America, JPMorgan Chase and the Royal Bank of Scotland presented more systemic risk than did Citigroup or Deutsche Bank, despite being significantly smaller in total assets. Wells Fargo stands out even more: it presented as much systemic risk as Citigroup, despite having only a quarter of the assets.

Another form of connection between companies comes through direct partial ownership, which gives one company a financial stake in another, and the ability to influence its decisions. An analysis of these links among 43,060 transnational companies has shown that three quarters of the total operating revenue of corporations globally in 2007 remained within a core of 1,300 companies that are highly connected to each other 7. A group of 737 companies controlled 80% of the shares: among them Barclays (which alone controlled 4%), JPMorgan Chase and the Goldman Sachs group. This picture makes it clear how the 'virus' of the crisis could quickly spread from one entity to another.

Other studies show how greater diversification of risks between institutions - normally thought to reduce systemic risk - can actually increase it, owing to nonlinear aspects of financial contagion 8. The increasing use of leverage by competing firms can also push a market past a threshold of stability, making violent financial collapse likely 9. Both of these effects essentially arise from interactions and feedbacks, and cannot be studied by models that treat economic agents as independent actors.

There are many other ways in which economic systems can be seen as networks - by measuring correlations among the price movements of companies' stocks, for example. The stock prices of corporations in the same sector usually move in an almost synchronized fashion, and one can draw a network by connecting the companies whose stocks are most strongly correlated (a minimal version of this construction is sketched at the end of this passage). Finally, not only institutions but also countries can be connected, through the web of world trade, in which links are import-export relations.

Another unprecedented opportunity for modelling economic systems comes from the flood of data increasingly available on economics and finance. For example, social networks on the internet are providing a completely new laboratory in which to measure social interactions. In an experiment conducted on Facebook, researchers at the University of California, San Diego, arranged the posting of a message on the walls of 61 million users on the day of the 2010 US congressional elections 10. The message stated "Today is election day"; users could click an "I voted" button and see which of their friends had clicked. By setting up control groups and checking against public voting registers, it was estimated that 300,000 people had gone to vote as a consequence of the social influence of their Facebook friends.
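As a minimal version of the correlation-network construction mentioned above, the following sketch links any two companies whose daily returns are correlated beyond a threshold; the two-sector synthetic return series and the 0.6 cut-off are invented for illustration, and a bare threshold is only the simplest possible filtering choice.

```python
import numpy as np

def correlation_network(returns, names, threshold=0.6):
    """Link pairs of companies whose return series are strongly correlated.

    returns   : array of shape (n_days, n_companies) of daily returns.
    names     : list of n_companies labels.
    threshold : arbitrary cut-off above which two stocks get an edge.
    Returns a list of (name_i, name_j, correlation) edges.
    """
    corr = np.corrcoef(returns, rowvar=False)       # n_companies x n_companies
    edges = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if corr[i, j] >= threshold:
                edges.append((names[i], names[j], round(float(corr[i, j]), 2)))
    return edges

# Toy usage: two synthetic 'sectors' whose members co-move with each other.
rng = np.random.default_rng(1)
factor_a, factor_b = rng.standard_normal(250), rng.standard_normal(250)
returns = np.column_stack([
    factor_a + 0.3 * rng.standard_normal(250),
    factor_a + 0.3 * rng.standard_normal(250),
    factor_b + 0.3 * rng.standard_normal(250),
    factor_b + 0.3 * rng.standard_normal(250),
])
print(correlation_network(returns, ["A1", "A2", "B1", "B2"]))
```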
The Facebook experiment is just one illustration of how information technology can help in quantifying subtle collective behaviours. In the context of economics, such experiments could be enormously valuable. For example, it has been shown 11 that the volume of transactions in a company's stock at a given time is correlated with the volume of internet searches related to that company at the previous time step. The reason is unclear, but it could be that the number of searches is a rough measure of the interest (generated by worry or enthusiasm) in that company. Notably, similar systems have been employed to predict other phenomena, such as flu epidemics 12.

The coincidence of new theoretical tools, a wealth of new data and the will to change paradigms provides a chance for a leap forward in our understanding of financial and economic systems, with an attendant increase in our capacity to manage them and avoid the worst problems. One important challenge, among many others, is to make scientific advances both available to the public and useful for policymakers. No one believes that better science alone will make economic crises a thing of the past, or allow the precise prediction of the economic or financial future. But better models that take into account feedbacks and network dynamics should greatly boost the ability of everyone to foresee the kinds of events to which markets and economies are prone, to understand the conditions that are likely to create them, and to offer some guidance on how to avoid those circumstances. Even a little more knowledge could be of great value. ❐

A Very Short Introduction
Robert May and Joseph Stiglitz

The intrinsic complexity of the financial derivatives market has emerged as both an incentive to engage in it, and a key source of its inherent instability. Regulators now faced with the challenge of taming this beast may find inspiration in the budding science of complex systems.

When Warren Buffett famously described derivatives as 'financial weapons of mass destruction', one might have expected the world at large to sit up and listen - particularly in the wake of subsequent events that led to the financial crisis of 2008. Instead, the derivatives market continues to grow in size and complexity (Fig. 1), spawning a new generation of financial innovations, and raising concerns about its potential impact on the economy as a whole.

A derivative instrument is a financial contract between two parties, in which the value of the payoff is derived from the value of another financial instrument or asset, called the underlying entity. In some cases, this contract acts as a kind of insurance: in a credit default swap, for example, a lender might buy protection from a third party to insure against the default of the borrower. However, unlike conventional insurance, in which a person necessarily owns the house she wants to insure, derivatives can be negotiated on any underlying entity - meaning anyone could take out insurance on the house in question. Speculation therefore emerges as another reason to trade in derivatives. By engaging in a speculative derivatives market, players can potentially amplify their gains, which is arguably the most plausible explanation for the proliferation of derivatives in recent years. Needless to say, losses are also amplified.

Unlike bets on, say, dice - where the chances of the outcome are not affected by the bet itself - the more market players bet on the default of a country, the more likely the default becomes. Eventually the game becomes a self-fulfilling prophecy, as in a bank run: if each party believes that others will withdraw their money from the bank, it pays each of them to do so. More perversely, in some cases parties have incentives (and opportunities) to precipitate these events, by spreading rumours or by manipulating the prices on which the derivatives are contingent - a situation seen most recently in the London Interbank Offered Rate (LIBOR) affair.
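The self-fulfilling dynamic described above can be caricatured with a deliberately stylized toy model, invented here rather than taken from this Commentary or its references: the volume of bets against a borrower grows with its perceived default probability, and that probability in turn grows with the volume of bets. Below a critical feedback strength the outcome stays close to the fundamentals; above it, the same fundamentals converge to near-certain default.

```python
def settle(p0, feedback, steps=100):
    """Iterate a toy feedback loop between default probability and bet volume.

    p0       : default probability implied by fundamentals alone.
    feedback : how strongly the volume of bets raises the actual probability.
    Bets grow with the perceived probability; the probability grows with bets.
    """
    p = p0
    for _ in range(steps):
        bets = p                               # more bets when default looks likely
        p = min(1.0, p0 + feedback * bets)     # bets make default more likely
    return p

# Weak feedback: the outcome stays close to the fundamentals (about 0.07).
print(settle(p0=0.05, feedback=0.3))
# Strong feedback: the same fundamentals converge to near-certain default.
print(settle(p0=0.05, feedback=1.2))
```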
Proponents of derivatives have long argued that these instruments help to stabilize markets by distributing risk, but it has been shown recently that in many situations risk sharing can also lead to instabilities 2,3. Players engaging in the derivatives market can enter into an unlimited number of contracts with other parties, so the market can be seen as a complex financial network, in which interactions between the nodes are nonlinear 4. A derivative contract can itself be made arbitrarily complex - it has been estimated 5 that if one of these contracts …
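The claim that risk sharing can destabilize (refs 2,3 here, and ref. 8 earlier) can be illustrated with a toy Monte Carlo simulation, again an invented sketch rather than the models of those papers. Banks draw losses from a common pool of heavy-tailed assets; spreading each bank's portfolio over more assets makes individual failures rarer, but because the diversified banks end up holding nearly identical portfolios, the failures that do occur hit many of them at once.

```python
import numpy as np

def failure_rates(spread, n_assets=20, n_banks=10, buffer=5.0, trials=20000, seed=2):
    """Toy Monte Carlo of individual versus joint bank failures.

    Each bank holds `spread` assets (equally weighted) drawn from a common
    pool of heavy-tailed asset losses; a bank fails when its average loss
    exceeds a fixed capital buffer. Returns (probability that one given bank
    fails, probability that at least half of the banks fail together).
    """
    rng = np.random.default_rng(seed)
    single = joint = 0
    for _ in range(trials):
        losses = rng.standard_t(2, n_assets)        # heavy-tailed asset shocks
        portfolios = np.array([
            losses[rng.choice(n_assets, spread, replace=False)].mean()
            for _ in range(n_banks)
        ])
        failed = portfolios > buffer
        single += failed[0]
        joint += failed.sum() >= n_banks // 2
    return single / trials, joint / trials

# Undiversified banks: individual failures are fairly common, joint failures rare.
print(failure_rates(spread=1))
# Fully diversified banks hold the same portfolio: individual failures become
# rare, but when an extreme asset loss hits, the banks all fail together.
print(failure_rates(spread=20))
```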