key: cord-0832878-9zuhgl61
authors: Harrison, Colin G.; Williams, Peter R.
title: A systems approach to natural disaster resilience
date: 2016-03-09
journal: Simul Model Pract Theory
DOI: 10.1016/j.simpat.2016.02.008
sha: e8e49e6a3c68d96a17f9777527a7a2e757666297
doc_id: 832878
cord_uid: 9zuhgl61

The frequency of natural disasters and their social and economic impacts show exponential increases in recent decades. Cities and countries around the world have begun to realize that these events are no longer "hundred year" storms, but repeat within a few years. As urbanisation continues throughout this century, more and more people and more economic activity will be concentrated in at-risk areas; especially as new arrivals in cities throughout Asia and Africa are likely to be concentrated in the highest risk districts, much as they often are in North America and Europe today. This article reviews the recent growth of natural disasters and considers how a systems approach can improve the mitigation of and adaptation to these risks and the recovery from such events.

This article considers how the mitigation of and the adaptation to the risks of natural disasters in cities and regions can be strengthened through the application of Systems Science perspectives and Systems Engineering methods, building on the Internet of Things, analytical and modelling techniques from Data Science, and general developments in Information and Communications Technology (ICT). Such work has only recently begun and there are great opportunities to apply techniques of Simulation Modelling, not only for academic interest, but in order to reduce the exposure of lives and property to these risks. The government sector at local, regional, and national levels, as well as the private sector, notably insurance companies, has strong interests in developing such tools.

No human life is free of risk and in the end we all die. How long and how well we live depends in large part on how we recognise risks and mitigate and adapt to them. We do this as individuals and as members of communities. We accept some risks, because, if we succeed in avoiding disaster, we may achieve some desirable outcome. We mitigate or adapt to other risks to avoid disaster, but this entails an opportunity cost. Both as individuals and communities we do a poor job of understanding risks and the associated costs. This article describes new methods to enable urban and regional communities to improve their resilience to such events. The massive expansion of geophysical sensing, which began some 50 years ago with satellite imaging of the Earth, and of our ability to capture such information through the Internet of Things and to analyse it with the algorithms and computational power of Data Science has the potential to transform our approaches to natural disaster risks. Similar expansions of sensing are also underway in cities and these enable understanding of Urban Systems. In this article we point to areas where these technologies, together with new theories based on Systems Science, can aid in the protection of human life and of the massive investments in public and private infrastructure. Natural disasters are not isolated events. They are the repeated, but episodic, outcomes of natural systems that have existed for millennia, although they may be exacerbated through modern human activities, for example, the propagation of contagious diseases through massive, global travel. From this perspective, most natural disasters should not come as a surprise.
In the face of Earth's natural systems, we are powerless to prevent natural disasters. In some cases, for example earthquakes and volcanic eruptions, the exact time of their occurrence may not be usefully predictable. But they should not be a complete surprise. For a given city or region, with moderate effort, it is possible to identify what principal risks exist, where their impacts will lie, and what indicative signs will warn of an impending event. Some of this analysis may come from historical records, some from scientific research, such as local geoscience, and some from contemporary instrumentation. This analysis will also point to the people and infrastructures that are most at risk and will allow assessment of humanitarian and economic costs of likely events. This in turn will permit the development of policy for the mitigation of some or all of these risks and the identification of instances where mitigation and day-to-day activities can support each other -for example, where smart grid technologies also confer disaster resilience or where a flood zone is designated as a park when not under water. The goal of such policy is not to construct a built environment that is unassailable by natural forces, which in general is impractical, but rather to understand the core systems of life in the city or region and to ensure that these can be protected and re-established as quickly as possible following an event. This is resilience to natural disasters. Thus natural disasters should not be considered as unpredictable, transitory events demanding emergency responses, but rather as ongoing risks with lifecycles extending over years or centuries whose mitigation and adaptation should be permanently embedded in urban planning and policy. This framing points to the balance required of policymakers: the need to make large-scale investments or to exclude potential economic developments today for the sake of reducing the impacts of future events or, where possible, to enable the two policies to coincide.

This article considers natural disasters and their impacts on human settlements from the perspective of Systems Science [1]. It views such settlements neither as purely social systems nor as purely infrastructure systems, but rather as a myriad of interactions among the inhabitants, between the inhabitants and the natural and built environments, and between the natural and the built environments. These interactions constitute Urban Systems [2]. The article takes the view that current advances such as the Internet of Things (IoT) and Big Data offer greatly enhanced abilities to study such systems and that these can be applied throughout the lifecycles of natural disaster risks to improve the resilience of such settlements. We do not argue that science and technology are miracle solutions for natural disaster resilience. Indeed there is ample evidence that, for example, social cohesion [3] by itself is a powerful factor that can at the very least reduce the impact of an event. But we live in a time when cities and regions around the world are already highly stressed through social and political instabilities and through urbanisation and migration on unprecedented scales, and in which the frequency of natural disasters is increasing strongly. All of these trends threaten greater impacts from natural disasters. In this context, we should consider not only social approaches to resilience, but also opportunities created through science and technology.
Few countries in the world have invested as much as Japan in conventional infrastructure to defend itself against the natural hazards -earthquakes and tsunamis -to which it is exposed. Japan has 28,000 km of coastline and some 40% of this has a sea wall, typically 6 m high or more, to protect against tsunamis. And so Japan should not be ashamed that an exceptional Magnitude 9 earthquake [4] produced waves that easily surmounted such sea walls. But it was evident in Fukushima and other coastal regions that insufficient thought -systems thinking -had gone into the possibility that such an event could happen and what the related consequences might be. Such failures to challenge assumptions, to assess systemic interdependencies, and to consider the knock-on effects of failures can create secondary disasters that are simply waiting to be triggered by a natural disaster. This article argues that the tools for such systems-based approaches to resilience are emerging. Our central goal in this article is to invite urban planners and policymakers to consider this question: We know how to design and build cities that function well under "normal conditions", but how should we design and build cities that can continue to function in "limp-along" fashion following a disaster? Large-scale engineering systems such as power stations, commercial aircraft, and aircraft carriers are designed and constructed in this way, so how can we extend those methods to improving the resilience of an entire city?

The article begins with a review of current trends in natural disasters. It then describes how systems thinking can help in understanding the life of a community such as a city in terms of its Urban Systems [2] rather than simply its built environment. The article describes the concept of the lifecycle of natural disaster risk in the context of Urban Systems and how this may be applied to analyse and model methods to mitigate or adapt to such risks. With this background we examine how to apply these approaches to the mitigation of and adaptation to natural disaster risks by focusing on the critical components of these Urban Systems. We also examine how concepts of autonomy and decentralisation can improve the performance of critical Urban Systems during the period of disaster recovery. The article concludes with a survey of current work on international standards that illustrate these systems-based approaches to natural disaster risks.

Natural disasters are acute stresses ranging in duration from minutes, for example earthquakes, to several months or even years, for example epidemics or droughts. For this article, we will exclude technological disasters such as urban fires or factory explosions, but include human-related events that perturb natural systems such as the failure of a dam leading to downstream flooding. There are many such stresses:

• Contagious diseases: For example, the outbreak of SARS (Severe Acute Respiratory Syndrome) in Guangdong Province, China and 37 other countries during 2002-2003 caused the loss of 916 lives and a financial cost of USD 13bn. More recently Ebola has presented a similar threat, which has now been contained, but with a cost of over 11,000 lives. Historically contagious diseases, notably the major plagues of the Middle Ages, have presented extreme threats to the resilience of societies, not only in localized cities, but globally as they are rapidly and widely propagated through trade and air travel.
• Extreme weather events: For example Hurricane Katrina, which devastated the city of New Orleans in 2005 and overwhelmed even the abilities of the United States to contain and recover from this disaster. Almost 2,000 people died in this event and the subsequent flooding, which left some 75,000 people homeless. The financial cost is estimated at USD 125bn. Large areas of New Orleans remain destroyed to this day and the population is no more than half of its pre-event level. Other violent weather events include tornadoes, snow and ice storms, and monsoons.

• Flooding and landslides: Extreme weather events are often associated with fluvial flooding as rivers overflow their banks and as levees and dams collapse. Pluvial flooding is the runoff from higher ground resulting from heavy or sustained rainfall. Hurricanes often produce extremely heavy rainfall owing to the moisture they carry inland. Heavy rainfall and associated flooding may also lead to landslides. For example, the disaster in Rio de Janeiro in April 2010 that produced landslides in the favelas on the hills surrounding the main city resulted in the loss of 400 lives and a financial cost estimated at USD 13bn.

• Storm surges: Coastal flooding occurs when high winds from hurricanes and lesser storms drive tides higher onto shore, especially where natural barrier islands and sub-surface reefs, mangroves, and other vegetation have been removed. Such storm surges were experienced during Hurricane Sandy in 2012 in the region of New York City.

• Wildfires: A single severe wildfire event can destroy some USD 4bn of property. As rainfall patterns change, new areas of drought are susceptible to such fires, which are extremely difficult to contain, especially when fanned by strong winds.

• Volcanoes: In April 2010 the Eyjafjallajökull volcano in Iceland erupted, spewing a vast cloud of ash into the skies over the North Atlantic and Western Europe. There were no direct deaths associated with this, although there may be long-term health issues for those who breathed the contaminated air, but the event heavily disrupted air travel for some 10 million passengers in the region with an estimated economic impact of USD 1.7bn.

The frequencies and the humanitarian and economic impacts of natural disasters show an almost exponential increase in recent decades (see Fig. 1). Cities and countries around the world have begun to realize that these events can no longer be considered as "hundred year storms". As massive urbanisation and migration continue throughout this century, more and more people will be concentrated in at-risk areas. New arrivals in large cities throughout Africa, Latin America, and Asia are likely to be concentrated in the districts with the highest risks of flooding, landslides, mudslides, wildfires, or epidemics. Not all of these effects are related to climate change, but many, including coastal and river flooding as well as hurricanes and typhoons, are becoming more violent as ocean temperatures rise. While well-deserved publicity goes to organisations such as the Red Cross and Red Crescent or Doctors Without Borders that respond to natural and other disasters, there are also many national and international, public, civic, and private organisations that monitor such events and assist national and local governments in developing strategies to mitigate and to adapt to these risks. EM-DAT is an online database [5] of international disasters both natural and technological.
Fig. 1 plots the reported natural disasters during the 20th and early 21st centuries, showing that until mid-century there were fewer than 10 reported events per year, but following World War II, this has grown to several hundreds per year. EM-DAT also provides data for fatalities and financial costs. The human costs thankfully are not rising as strongly, but the financial costs show similar, almost exponential growth with a peak at USD 300bn in 2011. The rapid growth of the numbers of these events in the last 60 years is difficult to explain completely. Prior to mid-century there was no doubt some undercounting as the world was less instrumented and populated. However it is difficult to posit physical causes that completely account for the recent growth. The increasing inter-connectedness of the world appears to be responsible for epidemics originating in previously isolated pockets of diseases such as HIV, SARS, avian 'flu, and MERS (Middle East Respiratory Syndrome). The 2012 IPCC reports [6] on climate change concluded that there is no strong relationship between the numbers of extreme weather events and global warming, although there may be an overall increase in their violence due to higher ocean temperatures, and climate change may also lead to droughts or elevated temperatures that degrade agriculture. Climate change leading to droughts may result in more and larger wildfires. Rising sea levels, together with more powerful storms, account for some increases in coastal flooding. But it is difficult to see any connection to geologic events such as earthquakes, tsunamis, and volcanic eruptions.

Some of the financial costs of natural disasters are borne by large re-insurance companies such as Swiss Re and Willis Re, as well as national and regional governments. These organisations conduct extensive research to assess these risks with the aid of historical meteorological databases and satellite imaging and develop large-scale statistical or actuarial models of the geographic distributions of natural disaster and other types of risks [7]. These models correlate historical data about the occurrence of natural disaster events in regions with the impacts created in terms of the loss of life and the destruction of the built environment. As cities expand laterally or vertically (taller buildings) and as the violence of weather-related events increases, the probable impacts also grow. Insurers are naturally concerned that their liabilities in such events are adequately covered in the long-term by premiums paid by individuals or governments. Some governments, such as the Government of New Zealand, operate public insurance programmes to cover such risks. Such insurers are motivated to work with local and regional governments to mitigate the natural disaster risks. These studies have to date been proprietary and confidential, but the organisations share their knowledge to develop strategies for mitigation and adaptation and are developing and supporting open platforms for modelling catastrophes.

The World Economic Forum (WEF) publishes an annual report on global risks that includes natural disaster risk assessment. The report is not a scientific study, but a consensus from a panel of corporate risk officers. The 2015 WEF Global Risks report [8] highlights several types of natural disasters among the 10 major risks:

1. Interstate conflict
2. Extreme weather events
3. Failure of national governance
4. State collapse or crisis
5. Unemployment or underemployment
6. Natural catastrophes
7. Failure of climate change adaptation
8. Water crises
9. Data fraud or theft
10. Cyber attacks

Many of the affected regions and countries are extremely poor, such as Bangladesh, and the costs of natural disaster mitigation are unsupportable without global assistance. The agreement at the 2015 Paris conference on climate change includes an annual budget of USD 100bn to address only those interventions related to climate change. This money must be spent wisely and the ability of science and technology to provide fact-based assessment of risk and of the most effective interventions is essential. However the rising statistics illustrated in Fig. 1 are causing growing concern. David Bresch, the Head of Sustainability at Swiss Re, has suggested that some regions may become uninsurable in the near future. New approaches are urgently needed that can be widely and quickly applied in the cities and regions with the most immediate risks.

Cities exist to enable large numbers of people to live in close proximity. This is manifest in the built environment or physical infrastructure that a city presents. But these physical elements do not represent how the city lives, although they do provide affordances and constraints for life. How the city lives and works emerges when we view the city as a very large collection of systems -Urban Systems [2] -with wide ranges of spatial and temporal scales. By "system" here we do not mean the mechanical systems of the built environments, but rather the objects of study in Systems Science [1]. Systems Science views the world as a vast number of structured interactions among many kinds of entities. These entities may be natural (biological, geological, environmental, and so forth), human (social, economic, political, and so forth) or mechanical (infrastructure, utility services, transportation, and so forth). Human habitation includes all of these systems and they are all susceptible to disruption by natural disasters. Many such systems have evolved natural resilience, so that even if one or more components of the system is impaired or destroyed, the system can continue to function by exploiting redundant elements or by adopting an alternative configuration. Many others have not. In cities these systems are the myriad interactions among the citizens, between the citizens and the built environment, and among the elements of the built environment itself. The city's Urban Systems also interact with the natural environment. Some of these systems are formally defined, such as the integrated elements and management that provide electricity service to residences and businesses. Many other systems are informal or ad hoc, composed by the citizens to enable them to lead their lives. We call all of these Urban Systems. The complexity of a city emerges from the immense numbers of these systems and from their many mutual interactions and inter-dependencies, leading to a System of Systems. The goal of natural disaster resilience, at the local scale, is the long-term survival of these Urban Systems. The concept of Urban Systems is relatively novel, but the systems themselves have deep histories. Until recently, however, they were largely invisible and inaccessible for large-scale analysis. Traditional methods for understanding, say, transportation flows, relied on installing pressure pads on a stretch of roadway for a few weeks and counting the passage of wheels.
The Smart Cities initiatives that began around 2005 recognize that mobile devices, security systems, and, most generally, the Internet of Things, provide insights into these Urban Systems with high spatial and temporal resolutions. By 2018 it is predicted that over 2 billion sensors will be deployed for monitoring Urban Systems [2]. Instead of counting the passages of wheels on a single stretch of road for a few weeks every few years, we can observe traffic flows continuously in real-time on most major roads. Similar insights into how citizens use the services and affordances of the city come from electronic payment systems for transit, for parking, and for purchasing goods. The wealth of data produced by such incidental instrumentation is the object of study for a growing number of universities under the emerging discipline of Urban Informatics. For example, New York University's Center for Urban Science and Progress (CUSP) has a large programme known as the Quantified Community [9] that is beginning to reveal some of the Urban Systems of New York City based on data provided by the City of New York. At the University of Chicago, the Urban Center for Computation and Data (UCCD) is instrumenting the city's streets by deploying an Array of Things [10]. Such virtual or actual instrumentation of a major city such as New York produces large volumes of data, say 1 TB per week, that can be analysed using geospatial tools to observe spatial and temporal patterns of movement, energy and water consumption, waste production, crime, public health, and so forth. Such patterns may have predictive value through statistical methods; for example, sustained high levels of vehicles entering a city may correlate with congestion in specific districts of the city with a defined delay. Other patterns may provide input to physical models, for example rainfall and water consumption data for a hydrological model of the city's water supply. Others may employ video analysis to identify problems or unusual activities in the behaviour of crowds or traffic or water flows and to then bring these to the attention of a person able to make a decision on whether some intervention is required.

While Urban Systems are persistent and sustainable, they are also permanently in disequilibrium. That is, their many interactions are akin to the jostling of citizens on crowded streets -bumping and dodging, but ultimately making progress towards their goals. This constant jostling arises through the propagation of system states as awareness spreads of system stresses such as traffic jams, weather conditions, or sales events at department stores, and through the linear responses of other systems to this state information. Latency in the propagation of system state often leads to dysfunctional adaptations, as systems respond to remote stresses that have since disappeared. Hence Urban Systems are permanently in disequilibrium. From a systems perspective, the central achievements of a smart city are to increase the volume and detail of information about these system states, and to enhance or limit its propagation. However, excessively strong stresses -widespread congestion, a hurricane, the collapse of the stock market -may tip some of the systems beyond their limits of reversible response and into new states from which it may be difficult to recover. That is, when the stress is removed, the systems do not quickly return to their original states.
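As a toy illustration of such tipping behaviour (anticipating the congestion example developed in the next paragraph), the sketch below applies an invented speed-density rule with a crude hysteresis: once vehicle density crosses a jam threshold the link locks up, and it only frees again after demand has fallen well below that threshold. All parameters (thresholds, free-flow speed) are hypothetical and purely illustrative; this is not a calibrated traffic model.

```python
import numpy as np

def speed(density, gridlocked, jam=1.0, release=0.3, free_speed=50.0):
    """Toy speed-density rule with hysteresis (all thresholds are invented).

    Returns (average speed, gridlocked flag). Once density exceeds `jam`, speed
    collapses to zero and stays there until density falls below `release`.
    """
    if gridlocked and density > release:
        return 0.0, True
    if density >= jam:
        return 0.0, True
    return free_speed * (1.0 - density / jam), False

# Demand rises past the jam threshold and then falls back to its starting level.
densities = np.concatenate([np.linspace(0.2, 1.1, 10), np.linspace(1.1, 0.2, 10)])

gridlocked = False
for d in densities:
    v, gridlocked = speed(d, gridlocked)
    print(f"density={d:4.2f}  speed={v:5.1f} km/h  gridlocked={gridlocked}")
```

Sweeping density up and then back down traces two different speed curves for the same density values, which is the signature of a response that does not simply reverse when the stress is removed.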
For example, congestion grows when too many vehicles try to pass through a given district and average speed declines. Beyond some level of stress, average speed declines to zero and we have gridlock. At this point, removing the stress does not (immediately) restore traffic flow.

It is extremely desirable for the study of urban resilience to have models that represent at least the principal Urban Systems, for example: electricity, water, food, public health, transportation, and so forth, and their interactions and dependencies. Several institutions have as research goals the development of abstract models of such Urban Systems and Systems of Systems. This work appears to be progressing piecewise with early studies focusing on, for example, water and transportation. These models are, moreover, location specific, rather than abstract; and they usually make the assumption of linear interaction. An exception is where some traffic flow models predict and attempt to regulate the onset of congestion. Accordingly, even for individual Urban Systems, we are not yet able to generalize about system behaviour or tipping points and hence whether a given city is resilient against a range of risks. The notion of a Science of Cities is relatively new. It is still rather undefined (it would in any case almost certainly be a synthesis of other branches of science and engineering, perhaps building on Services Science) and has only recently begun to get access to the operational data required to understand abstractly how cities operate. While the development of such a science is of great intellectual interest, we see here that it also has strong practical value. While simulation models would already be a great step forward, the development of a theory of cities would further allow the exploration of how a given city functions under conditions of extreme stress, such as following a natural disaster. In the absence of such models of how cities operate, we propose below an approach that at least enables us to understand intra- and inter-system dependencies.

From this systems perspective, the impact of a disaster is not only to cause physical destruction of the built environment, but also to disrupt and possibly destroy these Urban Systems, specifically the mechanisms for capturing and propagating system state information. The terrifying experience of being in a disaster zone is one of chaos, the absence of these structures of society and personal life. In such a zone, no system exists beyond each individual's range of view and action. While a primary role of disaster response is to save lives and to restore the built environment, in effect we are seeking to restore these Urban Systems that sustain the lives of individuals and of the community. When one of the authors toured the Tohoku region of East Japan three weeks after the 11 March 2011 earthquake and tsunami, the surviving residents were suffering extreme deprivation, most living in cardboard pens in school gymnasiums. Travel was difficult. Many of the roads had been cleared of debris, but commerce had come to a standstill. Destruction of the electricity distribution system meant that gas stations could not dispense fuel, even though it was in the tanks underground. Shops and hotels could not operate for lack of electricity to operate lights, refrigerators, and cash registers and the systems of delivering fresh food had broken down. Automatic Teller Machines lacked both electricity and communications, so people could not get money.
Governments and businesses could not pay their workers because they no longer knew who was on the payroll and the banking systems were not accessible. Later, a quarter of a million vehicles that had been destroyed or swept away would need to be de-registered and insurance claims filed. This was a vivid experience of the inter-dependencies that exist among the many Urban Systems, and that make recovery of "normal life" following a disaster so very difficult. It also illustrates the local dependencies we have developed on remote systems, notably electricity and telecommunications. While the Tohoku region will survive in some sense, because the regional and central governments can support the recovery and the reconstruction, in other senses it can never return to its former state. Some infrastructure cannot or should not be re-built, but even more importantly the population has changed. Many young people, who needed to quickly find new jobs, left within days or weeks. Older people were dispersed into shelters and later, if they consented, into apartments in nearby cities until, possibly, their homes could be rebuilt. The old systems can never be restored. The same condition exists in Christchurch, New Zealand, after its long series of earthquakes beginning in 2010, and in New Orleans, which, as previously noted, lost half of its population following Hurricane Katrina. In many senses these cities will never be the same. The most difficult challenge of recovery is not to restore the infrastructure. The discrete tasks of repairing and reconstructing roads, pipes, cables, buildings, and so forth are evident. What cannot be seen, and therefore may be lost forever, are the Urban Systems that both enabled and represented the life of the region. It would be good to have studied these before such events take place.

The above discussion suggests many ways to apply systems thinking to the improvement of urban resilience. From these we consider two:

1. To develop a deep understanding of the Urban Systems that are critical to the life of the city and an analysis of their interdependencies as part of planning disaster mitigation, with the goals of understanding how well the city or region is prepared for predictable disasters and which urban sub-systems and infrastructure elements are essential to maintaining these critical Urban Systems during disaster response and recovery.

2. To design and deploy critical elements or enhancements to critical elements of the built environment, as identified in strategy no. 1, with the goal of enabling distributed and autonomous operation, thereby reducing levels of interdependency, and providing small islands of relative normality that can be progressively annealed together.

The following section lays out principles for resilience and is followed by sections that consider how science and technology can be applied to enhance resilience both before and following a natural disaster.

In discussing the ability of a community or city to overcome disasters, whether natural or technological, we refer to "resilience". As a scientific term, resilience has its origins in studies of ecological systems. Loosely, "resilience" means the ability of a general system to respond in a reversible manner to an acute or chronic stress so that it may return to its initial state. In the context of cities, we broaden this to include a return to a different but viable state. Like the systems themselves, resilience needs to be considered at various spatial and temporal scales.
Resilience at one scale may be in conflict with resilience at another scale. For example, an abundant water supply may increase the resilience of a city to drought, but may also expose individual neighbourhoods to flooding. The need for resilience has deep roots in many civilisations. Confucius (551-479 BC) wrote: The green reed which bends in the wind is stronger than the mighty oak which breaks in a storm. Many older societies and civilisations were extremely careful in their development. They did not see themselves as resilient, since they were (comparatively) poor in resources and critically dependent on nature. They would permit themselves to extend only cautiously, constantly aware that a single failed harvest, a few years of over-grazing, excessive population growth, or a plague could lead to their collapse [11]. The Old Testament contains many references to the need for resilience, such as Pharaoh's dream, interpreted by Joseph, of seven fat cows and seven lean cows [12]. Perhaps only the great empires of China, Egypt, Greece, Rome, and Spain can be said to have had sufficient resources to consider themselves resilient. Today the developed world has been (relatively) rich in resources for two to three centuries and, rightly or wrongly, feels confident of its resilience at the national or regional levels. It has also created institutions to go to the aid of less wealthy communities following natural and other disasters and to help these communities to prepare themselves for future events.

More specifically in the context of human settlements, here is a definition of resilience given by an organisation called 100 Resilient Cities [13] that was established by the Rockefeller Foundation to develop policies and methods for improving resilience in cities and regions: City Resilience describes the capacity of individuals, communities, institutions, businesses, and systems within a city to survive, adapt, and grow no matter what kinds of chronic stresses and acute shocks they experience. Note the emphasis here on people and institutions rather than infrastructure and the built environment. The 100 Resilient Cities initiative takes a very broad view of resilience including social, political, and economic stresses as well as natural disasters. It provides in the City Resilience Framework [14] a very expansive framework for assessing and addressing resilience across this wide spectrum of stresses. It notes that systems-based approaches have been applied to resilience, but that "they mostly examine the resilience of individual sub-systems rather than attempting to consider the resilience of the city as a system itself". A similarly broad perspective is given by Judith Rodin, president of the Rockefeller Foundation, in her 2014 book "The Resilience Dividend" [15], which argues for the long-term economic benefits of investing time, money, and effort in increasing the resilience of cities and regions. This article views a subset of resilience thus defined, namely resilience to natural disasters, and aims to show that systems-based approaches can provide an integrated approach to such stresses.
It is more closely aligned with the definition of resilience [16] developed by the UN International Strategy for Disaster Reduction (UN ISDR) [17]: The ability of a system, community or society exposed to hazards to resist, absorb, accommodate to and recover from the effects of a hazard in a timely and efficient manner, including through the preservation and restoration of its essential basic structures and functions. Hazards or stresses to a system may be endogenous, that is, arising from within the system itself, or exogenous, that is, arising from external events. An example of an endogenous stress would be political change leading to a revolution. Endogenous and exogenous stresses may also combine in disruptive ways. Natural disasters mainly create exogenous stresses. Few, if any, systems are infinitely resilient. Beyond some level of stress, any system will transform to a different state from which it may not ever be possible to recover to the previous state or to some other viable state. The extinction of a species is a limiting case of an irreversible transformation. Resilience is not in itself good or bad. The resilience of an oppressive regime may prevent a transition to a more just form of government. But generally in the context of natural disasters, we seek higher resilience. An exception would be the resilience of the system that leads repeatedly to the reconstruction of housing and businesses in areas that are clearly exposed to flooding, for example. A corollary of resilience is transformability, which is the ability of a system to be deliberately modified. Urban poverty is an example of a system that is highly resistant to attempts to transform it.

Resilience is different from sustainability. Sustainability takes the view that a community must live cautiously so as not to impair its natural environment, social balance, and economic viability under the assumption that all externalities remain constant. Resilience deals with the fact that things do not remain constant. Climate change may slowly bring drought, new technologies may lead to the decline of old industries, and revolutions may change social and political structures. Resilience and sustainability will often interact. Thus deforestation of the hills within or upstream of a city (a sustainability issue) can lead to or worsen flash flooding (a resilience issue). Resilience and sustainability may also, at a given scale, be in conflict. While global sustainability may demand reductions in greenhouse gas emissions through lower energy consumption, a given city or nation may by itself be more resilient if it develops strong industries that rely on high energy consumption. However resilience and sustainability may also be mutually supportive as illustrated in Fig. 3. For example, we will show later that resilience can often be increased by reducing system dependencies and inter-dependencies; thus reducing resource consumption from the natural environment may benefit both resilience and sustainability.

Mitigation and adaptation are non-exclusive alternative ways of responding to any kind of risk. Mitigation implies that means can be found to reduce the level of the risk itself. For example, the danger of river flooding can be reduced by constructing dams or weirs to regulate the flow. Adaptation implies that we must accept the risk as-is and adapt ourselves to withstand it. For example, the danger of earthquakes can be reduced by constructing buildings that can withstand them.
Mitigation of global warming consists primarily in reducing greenhouse gas emissions. Adaptation to global warming for low-lying islands may consist of building seawalls to protect against rising sea levels. So we have the concepts of acute or chronic stresses that have negative impacts on the many systems of the city and of the ability of the city to return to its previous state when the stress is removed or mitigated or when the community learns how to adapt to it. It is the responsibility of individuals, communities, governments, and organisations to determine what risks they face and how to deal with these risks. Unfortunately human reaction to risks is highly irrational. In the immediate aftermath of a disaster, there is great outcry over how such an event could take so many lives and create such destruction. The public and political leaders demand unlimited efforts to compensate losses. Survivors insist that their community and property be re-constructed to the pre-event state, while environmental agencies advise against re-building in what is clearly a high-risk area. Insurance compensation is negotiated, planning committees are set up, and survivors become frustrated at their lack of involvement and at the length of time taken. After several months other events capture the attention of the public and their political leaders and interest in the aftermath of the disaster declines. A number of years later many of the survivors are still displaced and waiting for compensation. Some decades later the collective memory has faded and construction begins anew in high-risk areas. A tragic story from the 2011 East Japan tsunami tells of stone markers several centuries old and elevated along the coastline that warn against building below that level [18]. Studies show that most disaster recoveries go through two cycles of planning, the first being appalling and the second only slightly better [19].

Instead of viewing disasters as isolated, unpredictable events followed by attempts at recovery, it is more effective to consider them as a continual element of urban planning as shown in Fig. 4. This lifecycle consists of a very long-term process of risk identification, risk assessment, planning and implementation of mitigations, evolving to real-time processes as a disaster occurs or becomes imminent, and extending again through recovery processes back to long-term considerations. Most cities exposed to such risks will invest in equipment and training for the disaster management itself, but struggle with the long-term perspectives that influence investments for mitigation and adaptation. These investments, which tend to be large relative to normal Public Works budgets, are a choice like any other investment in that there is always an associated opportunity cost. How to choose between strengthening a levee and building a much-needed school or water system? Fig. 4 shows that resilience needs to be a central concern of urban planning, not only in recovery, but throughout the lifecycle. Often there will be no way to mitigate natural disaster risks, because of their exogenous nature. And sometimes it may be deemed to be undesirable to spend public money on adaptation. For example, the east coast of England is being eroded by the North Sea and this is consuming farm land, homes, and entire villages [20]. The same problem exists in parts of Wales.
After an extensive study, the UK government concluded that the cost of defending the coastline, through the construction of seawalls, was not financially justifiable [21]. The slowly unfolding disaster would be allowed to continue.

With Urban Systems and Resilience now defined, we consider in the following two sections how to apply emerging technical capabilities such as IoT and Data Science to improving the resilience of a city over the lifecycle of natural disaster risks.

Human beings are accustomed to things going wrong. We are highly adaptive creatures and mature adults are often able to recover from the effects of a single failure. This is possible because we can exploit alternative affordances or indeed improvise such alternatives. Accidents occur when several things go wrong simultaneously. For example, a moderately competent driver may well be able to deal with a brake failure or a crossing vehicle that has gone through a red light, but is unlikely to be able to deal with both simultaneously. A highly skilled and alert driver might even be able to deal with two things going wrong simultaneously, but not three or four. In everyday life we do not plan for such multiple events. Indeed we assume that many combinations of events cannot or will not occur. At the scale of individuals, our civilisation accepts that such accidents will happen and that the associated costs are a reasonable trade-off with the benefits of, say, allowing people to drive vehicles. Unfortunately, we often take the same approach when considering risks that can impact the lives of hundreds or thousands of people.

Fig. 4. Disaster planning and recovery should be considered on timescales that evolve according to the immediacy of the risks. Mitigation and adaptation projects, implemented over long periods prior to disaster events, complement disaster management during and immediately after a disaster event and the long journey towards a new period of stability. But the statistics show that these periods of stability are becoming shorter.

The 11 March 2011 earthquake and tsunami caused destruction of much of the electricity transmission and distribution network in Tohoku. To protect itself and the public, the network disconnected regions where power lines had been brought down, producing ground faults. Among other effects, this disrupted power to the cooling systems of the Fukushima nuclear plant. The designers of the plant had considered that such loss of network power might happen and provided backup Diesel-powered generators. But the tsunami produced waves that were able to inundate and damage the backup generators. No one appeared to have considered that both failures could occur simultaneously. It was a "black swan" [22] event that "could not happen". What is surprising about the Fukushima black swan is that the combination of events took place within a single system, the electrical utility system, where one might reasonably have expected its designers to have been aware of the possibility. More typically these unexpected combinations of events occur across different systems and the lack of mitigation results from the lack of coordination among both public and private organisations. We propose that physical models of disaster scenarios and systems analysis of how a city operates under disaster conditions can provide rational and optimal approaches to mitigation and adaptation to natural disaster risks.
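To see why such combinations are so easily overlooked, a back-of-the-envelope comparison is instructive: treating the two failures as independent makes their joint probability look negligible, whereas recognising a shared trigger (a tsunami inundating both the grid connection and the backup generators) makes it orders of magnitude larger. The probabilities below are entirely hypothetical and serve only to illustrate the structure of the argument, not to characterise any real plant.

```python
# Illustrative only (hypothetical probabilities): why independence assumptions
# hide "black swan" combinations. If grid loss and backup-generator loss are
# treated as independent, their joint probability looks negligible; if both can
# be caused by the same tsunami (a common cause), it is far larger.

p_grid_loss = 1e-2          # annual probability of losing external grid power (assumed)
p_backup_fail = 1e-3        # annual probability of backup generators failing (assumed)
p_tsunami = 1e-3            # annual probability of an inundating tsunami (assumed)
p_both_given_tsunami = 0.9  # chance the tsunami takes out grid AND generators (assumed)

p_independent = p_grid_loss * p_backup_fail
p_common_cause = p_tsunami * p_both_given_tsunami

print(f"Naive independent estimate : {p_independent:.1e} per year")
print(f"With a common-cause event  : {p_common_cause:.1e} per year")
```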
The modelling of natural disaster risks concerns the magnitude of possible disaster events and the probability of their occurrence. Historical and present day records provide initial source data for these. Re-insurance companies compile detailed models of such events and have statistical modelling methods for the purpose of estimating the probability of occurrence and the potential damage. CLIMADA is an open-source model, built on MATLAB, for assessing the economics of adaptation to a variety of natural hazards. This was developed by Swiss Re and is now supported by the Swiss Federal Institute of Technology (ETH) [23]. Such models correlate historical events with the resulting destruction to produce estimates of risks based on projections of future events.

Major earthquakes result from the movement of tectonic plates along a fault line and present great difficulties for modelling. Following an event the fault line may be quiescent for decades or centuries until sufficient stress has developed that a new shift is triggered. This is challenging to predict. Historical records and recent methods of geological analysis can provide retrospective periodicity, which informs us about the amount of time since the last occurrence of a given magnitude, but is indicative of a rising probability rather than a specific date for future events. At the time of the 2011 Tohoku event, Japan was actually expecting the next major earthquake to be south of Tokyo. A possible strategy here is simply to plan for the worst case, for example, the most powerful events in the historical record. However the costs of "total mitigation" may be extremely high and difficult to justify if the lifecycle of the event is longer than the lifecycle of the mitigating infrastructure or other investments.

Some cities, in particular a group among the 100 Resilient Cities [13], are attempting to distil from the IPCC's global climate models how local climate conditions may evolve [24]. In Durban they have drawn on the Long-Term Adaptation Scenarios developed by the South Africa Department of Environmental Protection [25], but their experience is discouraging. As they note: "the level of confidence in the data decreases the finer the (spatial) scale". What they have been able to extract are the bounds on possible, local climate changes, which they judge to be sufficient to develop, in the case of, say, Durban, South Africa, a regional climate change strategy [26].

Academic researchers develop models for natural disaster events by estimating the probability of occurrence and the magnitude of the events. For example, the Global Earthquake Model (GEM) [27] provides a library of standardised data about earthquake events derived from historical (1000-1900 CE) records as well as more recent (1900-2000 CE) events for which seismographic data is available. GEM also provides tools and methodologies for estimating risk exposure and impacts and is an example of the open source models and data now becoming available for the study of natural disaster risks. Simulation modelling of earthquake frequencies can also draw on geologic or physical modelling that estimates the build-up of stress in fault lines due to the measurable movement of tectonic plates. Volcanic eruptions are similarly difficult to predict, but often provide warnings in the form of ground movements, specifically elevations, that are measurable by seismic or surveying instruments days or weeks in advance, thereby enabling evacuation to be prepared.
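Stripped of their proprietary detail, such catastrophe models combine an event frequency-severity distribution with an exposure value and a vulnerability curve to estimate expected and extreme annual losses. The sketch below illustrates only this frequency-severity logic; it is not the CLIMADA or GEM API, and the event rate, intensity distribution, exposure value, and vulnerability curve are invented for illustration.

```python
import numpy as np

# Minimal sketch of the frequency-severity logic behind catastrophe models
# (illustrative only; all parameters below are assumed, not calibrated).

rng = np.random.default_rng(42)

exposure_value = 5e9            # total value of the built environment at risk (assumed)
annual_event_rate = 0.2         # expected number of damaging events per year (assumed)

def vulnerability(intensity):
    """Fraction of exposure destroyed as a function of hazard intensity (assumed curve)."""
    return np.clip((intensity - 0.3) / 0.7, 0.0, 1.0)

n_years = 100_000
losses = np.zeros(n_years)
for year in range(n_years):
    n_events = rng.poisson(annual_event_rate)       # how many events strike this simulated year
    intensities = rng.beta(2, 5, size=n_events)     # hazard intensity of each event (assumed)
    losses[year] = np.sum(vulnerability(intensities)) * exposure_value

print(f"Expected annual loss : USD {losses.mean():,.0f}")
print(f"1-in-100-year loss   : USD {np.quantile(losses, 0.99):,.0f}")
```

The same structure underlies the insurers' actuarial models described earlier: the event set and vulnerability curve are where the historical records, satellite imaging, and engineering studies enter.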
With these exceptions of earthquakes and eruptions, most other forms of natural disaster are more amenable to modelling. Their frequencies of occurrence are higher, they are the consequences of observable, predictive phenomena, and the underlying physics is understood at least to some degree. Fluvial and pluvial flooding, for example, are most often caused by severe rainfall and hence can be modelled by hydrological models that predict where and when water from rainfall, snowmelt, or springs will flow and accumulate on land or in river systems, taking into account topography, the nature of the soil and its current degree of saturation, and supportable depths in rivers. From these models it is possible to make accurate real-time predictions of where and when flooding can be expected. An important use of such models is to provide input to the management of water as it descends from the headwaters using dams and weirs. Developed countries have also instrumented flow rates in rivers susceptible to flooding [28]. Many fluvial floods could be avoided by better coordination of such management along the length of the river, although this is complicated when rivers pass through multiple local, regional, or national territories.

In the case of tropical storms, hurricanes, and typhoons, physics (meteorological) codes model the transfer of energy (heat) from the ocean surface into the atmosphere, enabling the prediction of storm intensity and hence wind velocity. Hydrological models have been developed to predict the height of storm surges, caused by the coupling between onshore winds and the ocean surface (as well as seasonal tides) [29]. Similar analyses can be applied to estimating the height and on-shore penetration of tsunamis [30]. These results are then used by municipal and regional administrations to estimate what areas are likely to be flooded [31] and should be evacuated. A new factor for such models is the predicted local rise in sea-level resulting from climate change [32]. Because of ocean currents and persistent wind patterns, sea-level rise is not uniform around the world. Also the elevation of coastal land subject to earthquakes can move vertically as well as laterally; the area around Sendai moved vertically by several meters following the 11 March 2011 event. The elevations of both the land and the ocean surface can be measured to high accuracy using LIDAR (Light Detection and Ranging) [33] from aircraft and radar from low Earth-orbit platforms such as the US Space Shuttle, which had a topography mission.

In cases where reliable forecasts of an imminent disaster are produced, the regional or national government may encourage or require the at-risk population to evacuate. Evacuation of a building or sports stadium in the face of a fire or other threat can often be practiced, can be simulated using agent-based models, and is often successful. But evacuation of an extended, densely populated region is problematic. Regions with known risks of natural or technological disasters (such as nuclear power stations) usually have evacuation plans based on using personal vehicles. However, these are impossible to practice, highly likely to produce traffic deadlocks, and ignore those people without vehicles or who are unable to drive, such as hospital patients. A lesson learned from Hurricane Sandy was that hospitals and other care facilities should not be operated within at-risk areas.
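To give a flavour of the rainfall-runoff reasoning inside such hydrological models, the sketch below uses the standard SCS Curve Number relation to convert a storm depth into a runoff volume and compares it with an assumed channel capacity. The catchment area, curve number, and capacity are hypothetical; an operational flood model would add topography, soil saturation, snowmelt, and river routing.

```python
# Minimal rainfall-runoff sketch using the SCS Curve Number method (illustrative
# parameters only; not an operational flood forecast).

def scs_runoff_mm(rainfall_mm: float, curve_number: float) -> float:
    """Direct runoff depth (mm) for a storm, given a curve number reflecting soil and land use."""
    s = 25400.0 / curve_number - 254.0      # potential maximum retention (mm)
    ia = 0.2 * s                            # initial abstraction before runoff begins
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

catchment_km2 = 40.0          # contributing catchment area (assumed)
channel_capacity_m3 = 2.0e6   # volume the river reach can carry before overtopping (assumed)

for storm_mm in (25, 50, 100, 150):
    runoff_m3 = scs_runoff_mm(storm_mm, curve_number=85) / 1000.0 * catchment_km2 * 1e6
    flooded = runoff_m3 > channel_capacity_m3
    print(f"{storm_mm:3d} mm storm -> runoff {runoff_m3 / 1e6:5.2f} million m3, flooding: {flooded}")
```

Even this crude calculation shows the non-linearity that matters for resilience planning: doubling the storm depth more than doubles the runoff, so a modest change in rainfall patterns can move a district from routine drainage into overtopping.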
These and many other models and simulations serve two functions: first to develop estimates of risk (frequency of occurrence, threats to human life, threats to the built environment) and second to provide real-time warning of imminent events. The first, as shown in Fig. 4, enables the assessment of land use policy and building codes, the value proposition for mitigation investments, and the needs for disaster insurance in combination with the projected development of urban and regional population and its demographics. The second, also shown in Fig. 4, enables the identification of the at-risk districts and the initiation of emergency response plans to protect lives and critical services in these areas. It may also reveal weaknesses in the city's urban planning where critical services are located or have dependencies on infrastructure and services that are at-risk. The major impediment to using these models more widely is the lack of instrumentation in less developed parts of the world. This is a major opportunity for the strategic application of IoT to instrumenting the natural environment, to monitoring the condition and stresses on the built environment, and to understanding how the city or region works (Urban Systems). With such information, we can proceed to apply systems analysis to determine how best to ensure that critical services can continue during and following the predicted events.

The traditional approach to natural disaster risk mitigation has been to invest in infrastructure projects with the aim of achieving blanket protection of a community or region. We argue that this is not only practically unachievable, it is also unaffordable. Our systems approach is to accept that some impact is unavoidable and to ask: what Urban Systems must definitely be kept operational during or following an event? This list will vary from country to country, but hypothetically it might consist of:

• Electric power and telecommunications for those in the impact areas.
• Medical care for those injured by the event and for those already in medical facilities.
• Shelter, water, and food supply for those in the impact areas.
• Evacuation of those in the impact areas.
• Delivery and distribution of clothing, baby products, and so forth in the impact areas.
• Banking services and municipal administrative services in the impact areas, … and so forth.

The analytical method we propose to determine specific needs for mitigation is:

• From the previous modelling exercise, identify natural disaster scenarios to be studied. These may be prioritised by estimated imminence and impact. Scenarios will ideally include combinations of events that, taken individually, might not constitute a disaster, but whose combined impact is disastrous.
• For each scenario and each critical service, perform an analysis of the service, identifying the critical components including people, infrastructure, technology, and process. Components are not generic, but must be spatially and functionally resolved. Thus a valid component is not "a hospital" but rather "a hospital in district X that can treat victims of a disaster of type Y".
• Document this analysis as a directed graph in which each edge represents the interdependency of a desired service on an infrastructure component or on another service. Fig. 5 is a highly simplified example of such a graph, which might be represented by a System Dynamics model.
• Assign a risk to each edge to represent the estimated probability that the required component will not be available or will be impaired by the event.
• Repeat this for all scenarios and critical services and identify the common components.
• Prioritise the critical components in terms of the frequency of occurrence across the services and risk exposure and the absence of alternatives.

This list now shows the prioritised components, e.g. a bridge or a pumping station, that must be kept operational in order for the city to deal with the event.

Consider the following scenario: a substation on a district power circuit provides power to a water treatment plant, traffic signals, streetlights, and a hospital within that district. Ordinarily, the water treatment plant enables the operation of the hospital, the traffic signals enable people to come and go from it, the streetlights provide safety at night -and the hospital attends to district residents' health needs (Fig. 6). This is a mundane picture of a segment of urban life. Now consider a scenario that has the district vulnerable to, say, flooding from a freak rainstorm and river flood. The substation is in the flood zone. Combine this with a random second event such as a strike of diesel delivery drivers, meaning that the hospital's back-up power system is non-operative -only temporarily, but at the critical point in the scenario. The power goes off in the district, with the following immediate consequences:

• Storm water pumps become inoperative, prolonging the flooding.
• Water becomes contaminated, resulting in a need to issue boil notices. Notwithstanding these notices, some people become sick. Untreated water is discharged into the river, perhaps damaging the environment.
• Traffic signals fail, resulting in traffic disruption. This is compounded by flooding of some parts of the district road network. Efforts to help residents and businesses in flooded areas become snarled in the traffic. Evacuation becomes difficult.
• Road accidents and crime increase as a consequence of the loss of street-lighting. This requires additional police time and attention, just when they are trying to help flood victims and hospital patients (see below).

The hospital's power fails, resulting in the need to transfer patients to other hospitals -through the same partially flooded, traffic-snarled streets as just described. Meanwhile, those people poisoned by contaminated water are trying to get to the hospital for treatment. This is a problem of interactions between multiple systems -energy, water treatment, roads, traffic signals, street-lighting, healthcare, law and order, and so on. This is just to focus on the physical systems. The impact on and with social and economic systems -business continuity -is not covered here. But the impacts might then start to cascade. For example:

• The hospital also provided dialysis capabilities for a much wider area than the district itself. The city as a whole starts to run short of capacity. It succeeds in transferring patients to other hospitals, resulting in long lines and a shortage of beds in those other locations.
• Traffic finds alternative routes, transferring congestion to other areas.
• The city's power grid is thrown out of balance by the loss of the substation and needs rapid reconfiguration if energy is to be maintained in other areas.
• Equipment in the water treatment plant suffers damage as a result of the loss of power and now needs repairs before it can be restarted.
• Police and emergency services' attention is diverted from other crimes and accidents elsewhere in the city, potentially leading to looting.

This is the type of complex scenario that might be faced by a city government. Suppose, however, that it had understood the precise connections between the power supply system and the impacts that its failure would have. Suppose also that it had been able to predict the possible complications of a diesel delivery strike and a rainstorm (not normally events that might have been paired together in any explicit scenario). Many of the consequences above might have been averted before they arose, or at least attenuated, potentially saving lives and certainly reducing economic damage. These - today, largely hypothetical - suppositions give the basis for the problem statement:
• Understanding of the intersections between the system-of-systems within an urban area and the consequences of malfunction or loss of any part of any one or more systems within that area.
• Preparation - prioritisation of actions to harden and improve the resilience of those systems.
• Response - prioritisation of actions to restore those systems after the event.

This article proposes to address the emergency planning and management problem just described by enabling the intersection points between the systems of systems within a city to be visualized and explored. The same solution, in meeting emergency needs, would also enable management of such activities as routine maintenance, or activation of "stand-by" responses to external events known to have critical implications, such as a tanker-driver strike. Based on the risk assessments developed using the models described above, a systems model that represents the full complexity of these inter-dependencies (Fig. 6) could then explore, by means of stochastic (Monte Carlo) iteration, the many combinations of failures possible under a wide range of natural (and technological) disasters; a simplified illustrative sketch of such an analysis is given below. While the more sophisticated agencies, for example the US Federal Emergency Management Agency (FEMA), regularly perform critical asset assessments, particularly for disaster readiness, very few consider the inter-dependencies among these assets and their behaviours as systems of systems.
Fig. 6. A visualisation of the inter-system dependencies among the electrical distribution system, the water treatment system, the road system, and a hospital.

The solution can be thought of as a geographically-deployed, interactive version of the "fault tree" concept familiar to engineers. The proposed methodology is analogous in concept to the well-known military engineering methodology of Failure Modes, Effects and Criticality Analysis (FMECA) [34]. It would consist of the following very broad steps:
• Identify and describe all key infrastructure assets. This would include segmentation of linear assets such as power lines, rail lines, roads, and water pipes.
• Identify potential failure modes for each asset. Critically, as well as internal failures (mechanical malfunction, sabotage, etc.), identify potential external causes of such failures: floods, earthquakes, prolonged extreme temperatures, etc. It will probably be necessary to run various scenarios to identify these causes. For example, in the aftermath of Hurricane Sandy, the City of New York ran different climate change scenarios to generate a range of sea level rise options, which were then linked to potential storm surges. The extent of flooding was documented and the assets likely to be affected were identified.
• Identify first-order consequences of failure (impact, severity), and document them.
• Instantiate these first-order consequences as additional external causes for assets that are linked to the asset in question: for example, "if substation x stops working, then water treatment plant y loses its supply", or "if road segment z floods, expect traffic congestion in the following locations". Document these relationships and their severity, noting the originating asset whose failure caused the problem. Continue this process of capturing second-order effects of the original failure until the required level of cascade has been reached. Modelling may be needed to identify second-order effects: in the example just given, modelling of traffic impacts. Second-order impacts would need to have a timescale associated with them; for example, traffic back-up might not be instant, whereas a power black-out would be.
• Identify long-run mitigation actions, for example elevating substation x, and prioritise them by the number and severity of impacts of each asset's failure on the critical services required.
• In the context of an emergency management situation, identify and sequence responses (see below).

The core of this approach is to develop an understanding of the criticality of each system, sub-system, or component by asking a cascading series of "what if" questions: what is the knock-on impact on the dependent systems of the failure of each such item? It should also lead to the re-examination of assumptions, especially hidden assumptions and "black swan" assumptions, about how these Urban Systems work. Through these studies, the process generates resilience knowledge that should enable decision-making to focus on the most critical items. Such processes have been used in the defence and aerospace industries for many years for failure mode analysis of infrastructures as large as aircraft carriers, which are comparable in scale and complexity to cities. Technically it should be feasible for cities if a formal description of the built environment and of the critical Urban Systems based on it is available. But in many cities, particularly older cities, such details, for example where water and sewer pipes or electrical cable ducts run underground, may be lacking or uncertain. Cities that have already invested in the use of Geographic Information Systems (GIS) [35,36] as an information hub will be better positioned and will reap additional value through this process. Software tools to capture, visualize, and identify inter- and intra-system dependencies have been developed for such analysis in the context of aerospace and defence systems and of large plants, such as refineries or power stations, but they will need to be extended to integrate well with the GIS. Some GIS-based products provide explicit support for this kind of formal documentation of the built environment and its interdependencies. In the future, as more infrastructure assets are monitored and controlled remotely via sensors and actuators, more of this geo-spatial dependency information will be generated and maintained as part of the construction and maintenance of the assets. The goal of this process is the generation of Risk Knowledge [37] that can be understood by many kinds of users and can enable decision-making about focused investment in mitigating those risks. This process is almost certain to require collaboration between multiple city and national governmental departments and agencies, as well as energy and communications utilities.
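The flavour of such an analysis can be conveyed with a minimal sketch. The Python fragment below is illustrative only, not the authors' tool: the component names and edge failure probabilities are invented, it uses the networkx library to hold the dependency graph, and it treats every dependency independently, whereas a real model would also distinguish redundant (AND) supplies from single points of failure (OR). It nevertheless shows the two ingredients described above: a directed graph of dependencies with a risk on each edge, and a Monte Carlo iteration over possible failure cascades.

# A minimal sketch, assuming hypothetical components and illustrative
# failure probabilities; not a complete systems-of-systems model.
import random
import networkx as nx

G = nx.DiGraph()
# Edge u -> v means "v depends on u"; p_fail is the estimated probability
# that v is impaired when u fails in the scenario under study.
edges = [
    ("substation_x",    "water_plant_y",   0.30),
    ("substation_x",    "traffic_signals", 0.30),
    ("substation_x",    "hospital_d",      0.25),
    ("diesel_delivery", "hospital_backup", 0.10),
    ("hospital_backup", "hospital_d",      0.05),
    ("water_plant_y",   "hospital_d",      0.15),
]
for upstream, dependent, p in edges:
    G.add_edge(upstream, dependent, p_fail=p)

def simulate_once(scenario_failures):
    """Propagate an initial set of failed components through the graph."""
    failed = set(scenario_failures)
    frontier = list(scenario_failures)
    while frontier:
        broken = frontier.pop()
        for dependent in G.successors(broken):
            if dependent not in failed and random.random() < G[broken][dependent]["p_fail"]:
                failed.add(dependent)
                frontier.append(dependent)
    return failed

# Monte Carlo estimate of how often the hospital loses service in a
# combined flood (substation) plus diesel-strike scenario.
runs = 10_000
hits = sum("hospital_d" in simulate_once({"substation_x", "diesel_delivery"})
           for _ in range(runs))
print(f"Estimated probability hospital is impacted: {hits / runs:.2%}")

Ranking components by how often they appear in the failed sets across many scenarios yields exactly the kind of prioritised list of critical components described earlier in this section.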
The outcomes of this process are quite likely to conflict with the plans and priorities of the individual departments, agencies, and companies, which are more likely to be directed towards other goals such as economic development, traffic congestion, or greenhouse gas emissions. It is also likely to add some cost, although the UN ISDR in its 2015 Global Assessment of Risk [38] estimates that such costs would represent only 0.1 per cent of the USD 6tn to be invested in infrastructure to 2030. In the following section we describe ways in which we may aid in the restoration of Urban Systems following a disaster, notably through the application of principles first advocated by Mori [39] for Autonomous Decentralized Systems (ADS).

The preceding section has mainly focused on how, during the pre-Event phase of the lifecycle, we can assess risks and minimize the impact of inevitable natural disasters in a given city or region. In that section we argue that one of the principal weaknesses in the ways that we plan and construct cities lies in the creation of inter-dependencies among Urban Systems. Although few cities or regions have undertaken rigorous natural disaster risk assessments from this system perspective, it is clear from even casual analysis that our cities have not been planned and constructed with resilience in mind. Even the City of New York only completed such an approach in the aftermath of Hurricane Sandy in 2012.

This tendency towards inter-dependencies among Urban Systems is, in large part, the legacy of the centralized processes of the 19th- and 20th-century industrial eras. An early lesson of the Industrial Revolution was the important gain in efficiency that could be achieved by concentrating workers into factories, which replaced the preceding "cottage industry" methods of processing and manufacturing. As engineering developed, this led to the concentration of expensive processing and manufacturing plant into these factories, where processes could be optimized to produce maximum outputs. There were also in that era relatively few professionals who understood these new technologies or who knew how to organize and manage large numbers of workers. Industrial ecosystems emerged based on the large-scale acquisition of raw materials from up-stream suppliers, their conversion into higher value products or services, and the delivery of these over a distribution system to down-stream consumers. In that era of rapidly emerging prosperity, consumers were pleased to take whatever they could afford of these cheap and innovative products and services, and their diverse, individual needs were of little concern to the manufacturers. The result was an efficient, but highly centralized, system of production that was geographically and mentally disconnected from its customers.

To a striking degree this emphasis on centralised methods for both private and public systems is still reflected in today's Urban Systems for electricity, water, communications, transportation, food, and so forth. For example, notwithstanding the rapid deployment of solar panels at the consumer level, most electricity in the USA is produced by enormous generating stations and sent to consumers over hundreds of miles of transmission and distribution networks. While there is redundancy in those transmission networks, it decreases closer to the consumer, with the last few miles being essentially without redundancy. It is efficient from the perspective of the utility, but far from resilient from the point of view of the consumer.
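The gap between efficiency and resilience can be made concrete with a very simple availability calculation. The sketch below is purely illustrative: the outage probabilities are assumptions, not measured reliability figures for any real network, and the two failure modes are assumed independent. It compares a consumer served only by a single radial feed with the same consumer able to fall back on a local source.

# Illustrative only: assumed outage probabilities, not utility data.
p_feed_out  = 0.02   # assumed chance the single radial feed is down during a storm
p_local_out = 0.10   # assumed chance a local source (e.g. solar plus storage) is unavailable

# On the radial feed alone, supply is lost whenever the feed fails.
p_loss_centralised = p_feed_out

# With an independent local source, supply is lost only if both fail at once.
p_loss_with_local = p_feed_out * p_local_out

print(f"Centralised feed only: {p_loss_centralised:.3f}")
print(f"Feed + local source:   {p_loss_with_local:.4f}")
# Under these assumptions, even an unreliable local source reduces the
# consumer's outage probability by an order of magnitude.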
In the State of Connecticut, an affluent state in the northeast of the USA, outages of a couple of days affecting many thousands of people are annual occurrences, and outages of a week or more affecting hundreds of thousands of people occur every few years. These outages are caused by snow, ice, rain, and wind storms. We might say that this system is not resilient. So what can be done to improve the resilience of some of these basic services to natural disasters? One conceptual approach emerges from the work of Prof. Kinji Mori on Autonomous Decentralised Systems (ADS) [39]: an autonomous subsystem is defined as having autonomous controllability and autonomous coordinability, and a system is understood as the result of the integration of autonomous subsystems. Autonomous subsystems are mutually connected through the Data Field, where all of the [shared] data are broadcast, and each subsystem independently elects to receive only the necessary data and is driven only by the received data. This architecture ensures online expansion, fault tolerance, and online maintenance of the system.

An ADS is thus the inverse of a traditional, centralised system. Instead, capability and the related sensors, actuators, and intelligence are divided into smaller sub-systems that can be geographically dispersed, that can operate collaboratively with their peers under normal conditions, but that can also operate independently when communication (data sharing) is disrupted, and can re-integrate smoothly when communication is restored. Such an approach, the inverse of the earlier policies of centralization, is attractive today for several reasons (a minimal sketch of the Data Field idea follows this list):
• Automation in the manufacture of many of the components of an ADS has strong positive returns to scale. Historically it was argued that large, centralised plant was a more efficient use of capital. Today it is often cheaper to manufacture large volumes of small devices than a single, one-of-a-kind device. We see this already in that the cost per Watt of solar panels has fallen below the cost per Watt of traditional power stations in recent years (even without government subsidies).
• Automation in system management now embodies large amounts of knowledge that were previously only accessible via human operators. Historically it was argued that it took the same amount of labour to manage a large plant as to manage a small plant. Today much of the intelligence required to manage a given capability, at least for relatively short periods, say a few days, can be embedded in the sub-system's own local intelligence. This removes the dependence on communication with a centralised control centre and creates a "virtual utility" spanning the distributed sub-systems.
• If such a sub-system does fail, it affects fewer people and their location is precisely known. When the electricity network in Tohoku shut down following the tsunami, it cut off power to many areas that were essentially unaffected by the disaster. Further, restoring power to those regions then depended on human inspection, which consumed many person-days of effort that could more usefully have been applied in areas that were affected by the disaster. Decentralisation - together with extensive IoT-based instrumentation of infrastructure to detect failures - would allow disaster responders to operate more efficiently in the vicinity of the disaster and to focus their efforts.
• Being closer to the disaster impact zone, such a system can respond more quickly and more appropriately than a remote control centre.
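The Data Field idea can be sketched in a few lines of code. The fragment below is a minimal, in-memory illustration under our own assumptions, not a real ADS implementation: the "PumpingStation" subsystem and the river-level topic are hypothetical, and a real Data Field would be distributed across a network rather than held in a single process. It shows the two properties emphasised above: subsystems subscribe only to the data they need, and they fall back to safe local behaviour when the field is unavailable.

# A minimal sketch of Mori's Data Field idea (illustrative names and values).
from collections import defaultdict

class DataField:
    """Broadcast bus: every published record is offered to every subscriber."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks
        self.online = True                      # False simulates a communications outage

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, value):
        if not self.online:
            return False                        # broadcast lost; subsystems carry on locally
        for callback in self.subscribers[topic]:
            callback(value)
        return True

class PumpingStation:
    """Hypothetical subsystem: follows broadcast river levels, else runs autonomously."""
    def __init__(self, field):
        self.level = None
        field.subscribe("river_level_m", self.on_level)

    def on_level(self, level):
        self.level = level

    def pump_rate(self):
        # Autonomous controllability: a safe local default when no data arrives.
        if self.level is None:
            return "nominal (no field data; local default)"
        return "maximum" if self.level > 3.0 else "nominal"

field = DataField()
station = PumpingStation(field)
field.publish("river_level_m", 3.4)
print(station.pump_rate())      # -> maximum
field.online = False            # simulate loss of communications
print(station.pump_rate())      # still operates on last known / local state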
A challenge with traditional electricity distribution is that the electrical utility has no means to detect which customers have lost power, although this is one of the aims of current Smart Meter [41] initiatives. Sub-dividing large infrastructure projects has another effect that is more related to politics than to resilience. Large projects, such as the construction of a massive dam or a large power station, are inherently political, because of the enormous costs, the displacement of inhabitants, and the environmental impacts they produce. Hence they seek support from very large and affluent customers who can benefit from them. Conversely, small projects, such as small dams or renewable energy sources, have much smaller impacts and their benefits can go to smaller, often poorer customers.

One system in which this approach is beginning to appear is local electricity distribution. As the scenario above illustrates, in a disaster zone the lack of electrical power impacts the water supply, the operation of hospitals and other healthcare facilities, telecommunications, refrigeration of foods, the availability of fuel for transportation and heating, and so forth. As our dependence on a stable electricity supply has grown, service interruptions for any reason have become less acceptable to consumers. In regions where the supply is not dependable, such as India and Connecticut, the inhabitants who can afford to do so will install their own generators, historically powered by hydrocarbons. When the network supply fails, these private systems will disconnect from the network, start up the generator, run independently, and reverse the process when network power is restored. Such independent generation and distribution networks are known as "micro-grids". Today many homes in Europe and North America are installing solar panels with peak generation levels that exceed the home's demand for much of the year. Since these clearly do not generate at night or during heavily cloudy days, they are not complete substitutes for hydrocarbon-powered generators, but there is a prospect of affordable energy storage based either on the recycling of batteries from electric vehicles or on new technologies. In combination, the possibility exists for autonomous, decentralised energy production, known as "island operation", at the scale of a community or even a single house. This has profound implications for the business models of the traditional utility industry.

It is also widely acknowledged by the utility industry that such distributed storage will be needed, on a large scale, to buffer fluctuations in the output of solar and wind systems during normal operation, and to help balance the increased energy flows due to the charging of electric vehicles. Indeed, in May 2015 a bill [42] was introduced in the United States Senate to establish a national target for in-network energy storage; it would require utilities to be able to support 1% of peak demand by 2021 and 2% by 2024. The bill has little chance of becoming law by itself, but it may be adopted into a larger bill on overall energy strategy. One of the main experimental programmes for Smart Grids in the USA, known as the GridWise Alliance [43], is experimenting with micro-grids and island operation, including the ability to achieve frequency and phase synchronisation before cut-over.
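To make the cut-over step concrete, the sketch below checks whether an islanded micro-grid is close enough in frequency, phase, and voltage to the main network to reconnect safely. The tolerance values are illustrative assumptions only; they are not the limits used by any particular utility, standard, or by the GridWise Alliance programme.

# Illustrative resynchronisation check (assumed tolerances, hypothetical values).
from dataclasses import dataclass

@dataclass
class GridState:
    frequency_hz: float
    phase_deg: float      # phase angle relative to a common reference
    voltage_v: float

def ready_to_reconnect(micro: GridState, main: GridState,
                       df_hz=0.1, dphase_deg=10.0, dv_v=5.0):
    """Return True when the islanded micro-grid may close its tie breaker.

    The tolerances (0.1 Hz, 10 degrees, 5 V) are assumptions for illustration,
    not values prescribed by any standard or utility.
    """
    return (abs(micro.frequency_hz - main.frequency_hz) <= df_hz and
            abs(micro.phase_deg - main.phase_deg) <= dphase_deg and
            abs(micro.voltage_v - main.voltage_v) <= dv_v)

main_grid  = GridState(frequency_hz=60.00, phase_deg=0.0, voltage_v=240.0)
micro_grid = GridState(frequency_hz=60.04, phase_deg=6.0, voltage_v=238.0)
print(ready_to_reconnect(micro_grid, main_grid))   # -> True: safe to end island operation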
In Tohoku, the operation of many companies was disrupted following the 2011 tsunami, because the network shut down even though these companies' facilities and the surrounding areas were not damaged. Even after the network was restored, power was limited because all nuclear power stations were shut down; the result was rolling blackouts during the spring and summer of 2011. These companies are now reported to be installing systems for island operation at the scale of an industrial park. So we see here the potential evolution of a more resilient approach to electricity generation and distribution that could reduce the disruptive impact of many natural disasters.

Another system that is progressing in this direction is the mobile telephone system. We are accustomed today to almost continuous communications, and this is of extremely high importance in enabling communication among the many teams of disaster responders. Such teams, many from other regions or countries, often employ specialised communications equipment that is not compatible or even not licensed for use in the disaster area. Hence the availability of telecommunications based on Global System for Mobile (GSM) standards is highly desirable. A growing medium in disaster communication is public texting or social media, which are increasingly monitored and analysed to develop understanding of the situation on the ground.

Fixed-line telephones are easily disrupted by failures of overhead or underground cable systems. They are also less resilient than formerly, when copper twisted pairs ran all the way from a subscriber building back to the central exchange, where there was a large battery and a standby generator (a rare example of a benefit from a centralised system). Today, such copper wires typically run no more than 1-2 km and are then concentrated onto an optical fibre. The concentrators have batteries, but these last for only a few days. However, modern mobile telephone systems can be relatively resilient. In the 2011 event in Tohoku, mobile telephone towers mainly survived the earthquake and many survived the impacts of debris carried back and forth by the tsunami. In that region, the backhaul of the telephone signals to a central exchange outside the disaster zone was generally via landlines, and these landlines were destroyed within the tsunami flood plain, which was some 3-5 km wide. These base stations also had back-up batteries with only a few hours' capacity. Since that event, the battery capacity in Japan has been expanded to provide a few days of back-up power; other countries employ local diesel generators to provide such capacity. The backhaul from the mobile telephone towers to the backbone network is increasingly provided by microwave radio links that have a good chance of surviving earthquakes and tsunamis. A further refinement could be to use the directional antennas of the mobile telephone towers to provide ad hoc network bridges among the cells until a connection can be found to the backhaul network. Similar ADS scenarios could be developed for water supply, transportation, medical services, and so forth.
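The ad hoc bridging idea can be expressed as a simple graph search: each cut-off tower relays via neighbouring towers within radio range until it reaches one that still has a working backhaul link. The sketch below is hypothetical in every detail - tower names, radio reachability, and backhaul states are invented for illustration - but it conveys how such a bridge might be discovered automatically.

# Hypothetical sketch: breadth-first search from an isolated cell tower to any
# tower that still has a working backhaul connection.
from collections import deque

# Adjacency list: towers that can reach each other over directional radio links.
radio_links = {
    "tower_A": ["tower_B"],
    "tower_B": ["tower_A", "tower_C", "tower_D"],
    "tower_C": ["tower_B"],
    "tower_D": ["tower_B", "tower_E"],
    "tower_E": ["tower_D"],
}
has_backhaul = {"tower_A": False, "tower_B": False, "tower_C": False,
                "tower_D": False, "tower_E": True}   # only E kept its backbone link

def bridge_path(start):
    """Return a relay path from `start` to the nearest tower with working backhaul."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        tower = path[-1]
        if has_backhaul[tower]:
            return path
        for neighbour in radio_links[tower]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None   # the cell is completely cut off

print(bridge_path("tower_A"))   # -> ['tower_A', 'tower_B', 'tower_D', 'tower_E']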
While the frequency and severity of natural disasters have been growing, so have our understanding of how to achieve natural disaster resilience in specific locations and our ability to push intelligence and capability closer to grass-roots needs. Technologies based on the principles of Autonomous Decentralised Systems therefore show signs of being able to strongly aid the mitigation of many risks associated with natural disasters. The following section describes new approaches to natural disaster risk assessment that are being adopted by the UN ISDR and that illustrate the power of systems thinking in this area.

The methodology described above, which identifies and assesses the criticality of systems, sub-systems, and their components and thereby enables focused investment in mitigating the natural disaster risks faced by a city or region, is one example of an ascending scale of methods to enhance the capabilities of local and regional governments in the management of their jurisdictions. The world is a great deal more complex today than even 50 years ago, and some 30 years ago work began on the assessment of organisational capabilities in various domains. While many cities have extensive professional teams for emergency response and emergency management, the concept of professionals trained in resilience planning and management is only just emerging. The Rockefeller Foundation's 100 Resilient Cities [13] initiative has the development of such professionals, including Chief Resilience Officers, as one of its chief goals.

One of the first domains to develop such professional methodologies was software development. Software development, then known as programming, began commercially as an artisanal activity in the 1960s and soon reached levels of complexity that demanded more rigorous organisational processes. The Software Engineering Institute (SEI) [44] was established in 1986 at Carnegie Mellon University and produced an appraisal methodology known as the Capability Maturity Model (CMM) [45]. CMM defines five steps towards organisational maturity in software development: Initial, Repeatable, Defined, Managed, and Optimising. Here are extracts from two of the definitions of these levels:
1. At the Initial Level (level 1), the organization typically does not provide a stable environment for developing and maintaining software. Such organizations frequently have difficulty making commitments that the staff can meet with an orderly engineering process, resulting in a series of crises. Success depends entirely on having an exceptional manager and a seasoned and effective software team. Even a strong engineering process cannot overcome the instability created by the absence of sound management practices. In spite of this ad hoc, even chaotic, process, Level 1 organizations frequently develop products that work, even though they may be over budget and schedule. Thus, at Level 1, capability is a characteristic of the individuals, not of the organization.
2. At the Optimizing Level (level 5), the entire organization is focused on continuous process improvement. The organization has the means to identify weaknesses and strengthen the process proactively, with the goal of preventing the occurrence of defects. Software project teams in Level 5 organizations analyse defects to determine their causes. There is chronic waste, in the form of rework, in any system simply due to random variation.
Waste is unacceptable; organized efforts to remove waste result in changing the system, i.e., improving the process by changing "common causes" of inefficiency to prevent the waste from occurring. While this is true of all the maturity levels, it is the focus of Level 5. The software process capability of Level 5 organizations can be characterized as continuously improving, because Level 5 organizations are continuously striving to improve the range of their process capability, thereby improving the process performance of their projects.

The purpose here is not to educate the reader on how to manage software development, but rather to illustrate how the adoption by an organization of systematic methods leads to better outcomes even for such complex and artisanal work. Similar ways to assess organizational maturity in various domains have been developed; the most general are the family of ISO 9000 [46] standards for the management of quality in a broad range of sectors. In this vein, collaboration between IBM and AECOM [47] has led to the development of a capability maturity assessment framework [48] for resilience to natural disasters that was adopted by the UN ISDR in 2014. The framework assesses a city or region against 10 factors, the Ten Essentials [49], that were originally developed by the UN ISDR as a checklist to guide cities seeking to improve their resilience to natural disasters, the last of which is "Expedite recovery and build back better". In version 2.2 of the assessment, 85 evaluation criteria are scored across these Ten Essentials, each over several aspects. For example, within Essential no. 1 there is an assessment of whether "co-ordination of all relevant pre-event planning and preparation activities exists for the city's area, with clarity of roles and accountability across all relevant organizations". Possible responses are:
1. Single point of coordination exists with agreed roles and responsibilities.
2. Single point exists but with some minor exceptions.
3. Single point exists in principle, but with some major omissions, or lack of agreement on some major areas.
4. Initial steps taken to create a single point of coordination - no single point, but plans exist to create one.
5. No single point and no plans to create one.
The purpose is not to rank the city, but rather to identify strengths and weaknesses and thereby develop focused plans for improvement (a minimal illustration of such a tally is sketched below).

These UN ISDR projects are undertaken within 10 or 15 year programmes that are adopted by most of the UN member countries. The most recent such programme was adopted through a marathon session in Sendai, Japan in 2015, and it addresses issues including:
• Substantial reductions in disaster damage to critical infrastructures and disruption of basic services, including health and education.
• Increase in the number of countries with national and local disaster risk reduction strategies.
• Enhanced international cooperation.
Items 4 and 7 of its targets - the reduction of damage to critical infrastructure and the expansion of early warning systems, respectively - illustrate the important roles of systems in the future of natural disaster resilience. Finally, in this area, the UN ISDR has recently launched the RISE initiative with the aim of developing an integrated approach for the application of the Ten Essentials. As part of this it has adopted a modelling system; this elaborate example of systems methods will serve as a decision-support tool for resilience planning by incorporating information specific to a city or region from domains such as natural resources, economics, industry, and geography. Its demonstration in the city of Accra, Ghana was planned for late in 2015.
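As a minimal illustration of how such a scorecard might be tallied, the sketch below scores a handful of criteria on the 1 (strongest) to 5 (weakest) response scale just described and reports the weakest Essentials. The criteria, Essential numbers, and scores are invented for illustration; the real assessment covers 85 criteria across the Ten Essentials.

# Illustrative only: hypothetical criteria and scores.
from collections import defaultdict
from statistics import mean

scores = [
    # (essential_no, criterion, score 1..5, where 1 is strongest)
    (1, "Single point of coordination for pre-event planning", 2),
    (1, "Roles and accountability defined across organisations", 3),
    (2, "Hazard scenarios identified and kept up to date",       4),
    (8, "Critical infrastructure inter-dependencies mapped",     5),
    (9, "Emergency response plans exercised regularly",          1),
]

by_essential = defaultdict(list)
for essential, _criterion, score in scores:
    by_essential[essential].append(score)

# Higher average = weaker area = candidate focus for the improvement plan.
for essential, vals in sorted(by_essential.items(),
                              key=lambda kv: mean(kv[1]), reverse=True):
    print(f"Essential {essential}: average score {mean(vals):.1f}")

The output simply surfaces the weakest areas; as the text notes, the aim is a focused improvement plan rather than a league-table ranking of cities.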
No human life is free of risks and in the end we all die. We live in a period of many and great changes that have profound global impacts. Urbanisation, which is driven in large part by economic transformations, increases the global population living in cities by some 180,000 per week, and many more millions are migrating to escape conflicts and economic hardship. Many of these people are joining the urban poor of large cities around the world and live, at least initially, in the areas of highest risk from coastal and pluvial flooding, landslides, mudslides, fires, and earthquakes. Rising sea levels and stronger storm surges threaten poor nations on small islands and also the richest cities in the world. In consequence we find growing attention to the resilience of cities not only to natural disasters, but also to social factors such as changes in industry, education, healthcare, and the distortion of social equality during a period of rapid economic change. In this article we have focused only on the issues of resilience to natural disasters and on the roles that new science and technology offer in mitigating and adapting to such risks. Science and technology alone will not solve these problems, but we have aimed to show that they can play important roles in enabling societies to make better choices about how to deal with known risks and to improve the ability of critical infrastructure systems to survive disaster events, thereby enabling better humanitarian relief.

Historically, human settlements have grown organically over decades and centuries and have learned through experience what risks they face and how to avoid them. Until two centuries ago, the global population was primarily sparsely distributed. In the 21st century, as population is rapidly concentrating in cities and changes in both the natural environment and the social environment are taking place at rates hitherto unknown, we can no longer simply wait for disasters to strike in order to generate Risk Knowledge. We must think ahead and generate such Risk Knowledge by studying where people are choosing to live, what changes are underway in those environments, what dependencies and exposures exist in the Urban Systems supporting those communities, and hence how to focus efforts on protecting those at greatest risk. The major insurers have long had proprietary tools for risk and impact analysis, but in 2014 PreventionWeb made CLIMADA [23] available. The Global Earthquake Model (GEM) [27] community aims to provide open source datasets and risk models to allow cities and regions to perform risk analysis on one of the most unpredictable hazards.

There appears to be a large opportunity for the Simulation Modelling community to develop software tools to support the type of systems-of-systems analysis that we are advocating. Fault-tree analysis has a long history, but it ignores inter-system and spatial dependencies. GIS systems are excellent at identifying spatial interactions, but do not of themselves consider system functionality. This kind of analysis is routine in global supply-chain management, which studies end-to-end performance under spatially-distributed failure conditions such as storms, strikes, and production failures, but there it is still performed largely by human beings. No commercial tool of which we are aware considers how two or more systems might interfere during normal operation. In the near term, for many cities, the lack of formal descriptions of the built environment and of the systems comprising the Urban Systems, as well as limited sensing capabilities, will constrain the deployment of this approach. But these deficiencies will no doubt be remedied as IoT and Data Science evolve. The resulting implementation plans will have a lot in common with the kinds of ideas produced by planning for a smart city and, just as with planning for sustainability, there are many mutual benefits that flow in both directions. Through smart cities initiatives and the emergence of the Internet of Things, we have more factual knowledge about such Urban Systems and growing scientific insights about how cities in general work. Through these we are gaining the power to understand where the needs for resilience to natural disasters lie and how to prioritise investment. Through technologies related to energy, to communication, and to the autonomous and distributed management of infrastructure systems, we are gaining the power to create resilience in Urban Systems. These are new methods that need to be broadly tested and adopted in cities around the world that are already struggling with many other pressures. A new generation of public administrators is required that can apply these new methods while reducing rather than increasing these existing burdens. Our central goal in this article is to invite urban planners and policymakers to consider this question: we know how to design and build cities that function well under "normal conditions", but how should we design and build cities that can continue to function in "limp-along" fashion following a disaster?
Large-scale engineering systems such as power stations and aircraft are designed and constructed in this way, so how can we extend those methods to improving the resilience of a city? A further question is addressed to those working on simulation modelling techniques: how can we develop such models at the scale of entire cities?

Acknowledgements
We gratefully acknowledge the support of our colleagues at the IBM Corporation and Prof. Kinji Mori of Waseda University.

References
Social Capital, A missing link to disaster recovery
The Great East Japan Earthquake, Available
The International Disaster Database, Available
Managing the risks of extreme weather events and disasters to advance climate change adaptation
Minding the risk: cities under threat from natural disasters
Global Risks
Collapse - How Societies Choose to Fail or Succeed, Penguin Books
Economics of Good and Evil: The Quest for Economic Meaning from Gilgamesh to Wall Street
The Rockefeller Foundation, 100 Resilient Cities
The Rockefeller Foundation, and Arup, City Resilience Framework
The Resilience Dividend, Public Affairs
New York Times, Tsunami Warnings, Written in Stone
Urban disaster recovery: a measurement framework and its application to the 1995 Kobe earthquake
The Guardian
The Black Swan: The Impact of the Highly Improbable, Random House
From Durban to Boulder, a quest for climate resilience at city scale, Available
Long-Term Adaptation Scenarios Flagship Research Programme (LTAS) for South Africa
Climate Change Implication for the Agriculture and Forestry Sectors in South Africa
Durban Climate Change Strategy, Available
Hydrological Instrumentation Facility, Available
A third-generation model for wind waves on slowly varying, unsteady, and inhomogeneous depths and currents
Tsunami Modelling Manual
Stochastic Analysis of Storm-Surge Induced Infrastructure Losses
The impact of future sea-level rise on the European Shelf tides
Vertical accuracy and use of topographic LIDAR data in coastal marshes
FMECA, Failure Modes, Effects, and Criticality Analysis
Geographic Information System
Successful Response Starts With a Map, Mapping Science Committee, National Research Council
Global Assessment of Risk
Autonomous decentralized systems: concept, data field architecture and future trends
Coordinability of dynamic systems
The Edison Institute
National Energy Storage Target, Available
Future_of_the_Grid_web_final_v2.pdf
Capability Maturity Model for Software, v1.1, Software Engineering Institute
Disaster Resilience Scorecard Pilot, Available
Disaster Resilience Scorecard
The Ten Essentials for Making Cities Resilient
The Ecological Sequestration Trust, resilience.io, Available
The Ecological Sequestration Trust, Available