key: cord-0478308-khikeuhf
authors: Lutjens, Bjorn; Leshchinskiy, Brandon; Requena-Mesa, Christian; Chishtie, Farrukh; Díaz-Rodríguez, Natalia; Boulais, Océane; Sankaranarayanan, Aruna; Pina, Aaron; Gal, Yarin; Raissi, Chedy; Lavin, Alexander; Newman, Dava
title: Physically-Consistent Generative Adversarial Networks for Coastal Flood Visualization
date: 2021-04-10
journal: nan
DOI: nan
sha: ce4c9c77b41d95546e34a9544b88349987e4689f
doc_id: 478308
cord_uid: khikeuhf

As climate change increases the intensity of natural disasters, society needs better tools for adaptation. Floods, for example, are the most frequent natural disaster, and better tools for flood risk communication could increase the support for flood-resilient infrastructure development. Our work aims to enable more visual communication of large-scale climate impacts by visualizing the output of coastal flood models as satellite imagery. We propose the first deep learning pipeline to ensure physical-consistency in synthetic visual satellite imagery. We advanced a state-of-the-art GAN called pix2pixHD, such that it produces imagery that is physically-consistent with the output of an expert-validated storm surge model (NOAA SLOSH). By evaluating the imagery relative to physics-based flood maps, we find that our proposed framework outperforms baseline models in both physical-consistency and photorealism. We envision our work to be the first step towards a global visualization of how climate change shapes our landscape. Continuing on this path, we show that the proposed pipeline generalizes to visualize Arctic sea ice melt. We also publish a dataset of over 25k labelled image-pairs to study image-to-image translation in Earth observation.

Fig. 1: The Earth Intelligence Engine generates physically-consistent satellite imagery of future coastal flood events to aid in climate communication. Explore more results in high resolution at trillium.tech/eie.

Our climate is changing, causing natural disasters to become more intense [1]. Floods are the most frequent weather-related disaster [2] and already cost the U.S. 3.7 billion USD per year [3]; this damage is projected to grow over the next decades [1]. Visualizations of climate impacts are widely used by policy and decision makers to raise environmental awareness and facilitate dialogue on long-term climate adaptation decisions [4]. Current visualizations of coastal flood impacts, however, are limited to color-coded flood maps [5] or synthetic street-view imagery [6], which do not convey city-wide flood impacts in a compelling manner, as shown in Fig. 2 and [7]. Our work generates synthetic satellite imagery of future coastal floods, informed by the projections of expert-validated flood models, to enable a more engaging communication of city-wide flood risks to governmental offices. Generative adversarial networks (GANs) have been used to generate highly photorealistic imagery of faces [12], [13], animals [14], [15], or even street-level flood imagery [10]. Recent works have adapted GANs to generate satellite imagery [16], [17], [18], [19], [20]. Synthetic satellite imagery, however, needs to be trustworthy [21]. While many approaches exist to increase the trustworthiness of neural network-based models, including interpretable networks [22], adversarial robustness [23], [24], or ensemble predictions [25], [26], this work focuses on ensuring physical-consistency. Many recent works incorporate domain knowledge from the physical sciences into deep learning [27], [28], [29], [30].
Fig. 2: a) color-coded, b) street view, c) satellite (ours). Physically-consistent satellite imagery (c) could enable more engaging and relatable communication of city-scale flood risks [4]. Most existing visualizations of coastal floods or sea-level rise that are aimed towards the public rely on color-coded geospatial rasters (a), which can be unrelatable or impersonal [5], [8], [9]. Alternative photorealistic visualizations are often limited to local street-level imagery (b) [6], [10] that lacks further spatial context. Image sources: [5], [5], [6], [6], [11], ours.

Our work aims to generate physically-consistent imagery, where we define an image as physically-consistent if it depicts the same flood extent as an expert-validated coastal flood model, as detailed in Section III-C. To achieve physical-consistency, one could adapt the neural network architecture to incorporate physics as: inputs [31], training loss [32], the learned representation [33], [34], [22], hard output constraints [35], or evaluation function [36]. Alternatively, one could embed the neural network in differential equations [37], for example, as: parameters [38], [32], dynamics [39], residual [40], [41], differential operator [27], [42], or solution [32]. To the extent of the authors' knowledge, our work is the first to leverage any of these methods to ensure physical-consistency in synthetic visual satellite imagery. Specifically, our work leverages years of scientific domain knowledge by incorporating physics-based coastal flood model projections as neural network input and evaluation function. Exploring the alternative forms of physical consistency for satellite imagery is an exciting field left for future work.

Our work makes five contributions:
• the first generative vision pipeline to generate physically-consistent visual satellite imagery for hypothetical scenarios, called the Earth Intelligence Engine,
• the first physically-consistent and photorealistic visualization of coastal flood models as satellite imagery,
• a novel metric, the Flood Visualization Plausibility Score (FVPS), to evaluate the photorealism and physical-consistency of generated imagery,
• the demonstration of a climate impact visualization pipeline on coastal floods and melting Arctic sea ice, and
• an open-source dataset with over 25k labelled high-resolution image-pairs to study image-to-image translation in Earth observation.

Our work combines a coastal flood model with a generative vision model in a novel physically-consistent pipeline to create visualizations of coastal floods. We aim to learn the change in satellite imagery from before to after a coastal flood, which is similar to paired image-to-image translation [12]. Among image-to-image translation models, generative adversarial networks (GANs) have generated highly photorealistic imagery. For example, semantic image synthesis models have generated photorealistic street scenery from semantic segmentation masks: DCGAN [43], pix2pixHD [13], DRPAN [44], SPADE [45], or OASIS [46]. In comparison to GANs, normalizing flows [47], [26] or variational autoencoders [48] capture the distribution of possible image-to-image translations more accurately [49], but single samples often look less realistic ([50], [51], Fig. 4).
Because our use case requires photorealism, we focus on GANs and extend the high-resolution semantic image synthesis model pix2pixHD [13] to take in physical information and produce imagery that is both photorealistic and physically-consistent. We leave ensemble predictions that capture the full distribution of images for future work.

Physics-informed deep learning has recently generated significant excitement. It promises to increase the trust, interpretability, and data-efficiency of deep learning models [27], [28], [31]. The Earth Intelligence Engine incorporates a physics-based coastal flood model as input and evaluation function and is the first in the physics-informed deep learning literature to generate physically-consistent satellite imagery [31]. Future work will extend the connections between physics and deep learning-generated satellite imagery. For example, [52] could be used to learn a physically-interpretable latent space, e.g., a "flood neuron", [53] to embed deep learning in atmospheric noise models, or [46] to incorporate physics-based flood maps in the loss function.

Visualizations of climate change are commonly used in policy making and community discussions on climate adaptation [4], [54]. Landscape visualizations are used to raise environmental awareness in the general public or policy [7], [10], because they can convey the impacts of climate change, such as rising sea levels or coastal floods, in a compelling and engaging manner ([7], Fig. 2b). Most landscape visualizations, however, are limited to regional information [6]. Additionally, most landscape visualizations require expensive physics-based renderings and/or high-resolution digital elevation models [6]. Alternative visualization tools of coastal floods or sea-level rise are color-coded maps, such as [55], [5], [9]. Color-coded maps convey the flood extent on a city-wide scale, but are less engaging than a photorealistic image [4]. We generate compelling visualizations of future coastal floods as satellite imagery to aid in policy and community discussions on climate adaptation.

The proposed pipeline uses a generative vision model to generate post-flood images from pre-flood images and a flood extent map, as shown in Fig. 3.

A. Data

Overview. Obtaining ground-truth post-flood images that display standing water is challenging due to cloud cover, time of standing flood, satellite revisit rate, increased atmospheric noise, and the cost of high-resolution imagery. This work leverages the xBD dataset [11], a collection of pre- and post-disaster images from events like Hurricane Harvey or Florence, from which we obtained ∼3k pre- and post-flood image pairs with the following characteristics: ∼0.5 m/px, RGB, 1024×1024 px/img, Maxar DigitalGlobe. The coastal flood model is the Sea, Lake and Overland Surges from Hurricanes (SLOSH) model [56], developed by the National Weather Service (NWS). SLOSH estimates storm surge heights from atmospheric pressure, hurricane size, forward speed, and track data, which parameterize a wind model that drives the storm surge. The SLOSH model consists of shallow water equations, which consider unique geographic locations, features, and geometries. The model is run in deterministic, probabilistic, and composite modes by various agencies for different purposes, including NOAA, the National Hurricane Center (NHC), and the NWS. We use outputs from the composite approach, that is, running the model several thousand times with hypothetical hurricanes under different storm conditions.
As a result, we obtain binary flood hazard maps from [5], as displayed in Fig. 2a, which are storm-surge, height-differentiated flood extents at 30 m/px resolution. The flood hazard maps do not intersect with the locations of existing post-flood imagery. To work around this data limitation, we generate segmentation maps of the post-flood imagery and coarse-grain them to 30 m/px for training and evaluation, and use binarized flood hazard maps at test time. Future work will extend the pipeline to the state-of-the-art ADvanced CIRCulation model (ADCIRC) [57], which is described in [8] and has a stronger physical foundation, better accuracy, and higher resolution than SLOSH.

B. Model architecture.

The central model of our pipeline is a generative vision model that learns the physically-conditioned image-to-image transformation from a pre-flood image to a post-flood image. We leveraged the existing implementation of pix2pixHD [13], a state-of-the-art semantic image synthesis model that uses multi-scale generator and discriminator architectures to generate high-resolution imagery. We extended the input dimensions to 1024×1024×4 to incorporate the flood extent map. The resulting pipeline is modular, such that it can be repurposed to visualize other climate impacts.

C. Physically-consistent image.

We define a physically-consistent model as one that fulfills laws of physics, such as conservation of momentum, mass, and energy [28]. For example, most coastal flood models consist of numerical solvers that resolve the conservation equations to generate flood extent predictions [56]. Here, we consider an image to be physically-consistent if it depicts the predictions of a physically-consistent model. Specifically, we define our generated satellite imagery, I_G ∈ I = [0,1]^(w×h×c), with width w = 1024, height h = 1024, and number of channels c = 3, to be physically-consistent if it depicts the same flood extent as the binary flood map, F ∈ F = {0,1}^(w×h). We implemented a flood segmentation model, m_seg: I → F, to measure the depicted flood extent in the generated image. If the flood extent of a generated image and the coastal flood model match within a margin, ε, the image is in the set of physically-consistent images, i.e., I_phys ∈ I_phys = {I_G ∈ I : IoU(m_seg(I_G), F) ≥ 1 − ε}. The generated image is considered photorealistic if it is contained in the manifold of naturally possible satellite images, I_photo ∈ I_photo ⊂ I. Hence, we are looking for a conditional image generation function, g, that generates an image that is both physically-consistent and photorealistic, i.e., g: I_photo × F → I_photo ∩ I_phys. Here, we condition the GAN on the flood map, F, and use a custom evaluation function to identify the generation function, g.

Evaluating imagery generated by a GAN is difficult [59], [60]. Most evaluation metrics measure photorealism or sample diversity [60], but not physical consistency [61] (see, e.g., SSIM [62], MMD [63], IS [64], MS [65], FID [66], [67], or LPIPS [68]).

Fig. 4: The VAEGAN, BicycleGAN [51] (f), creates glitchy imagery (zoom in). A handcrafted baseline model (g), as used in common visualization tools [9], [55], visualizes the correct flood extent, but is pixelated and lacks photorealism.

To evaluate physical consistency, we propose using the intersection over union (IoU) between water in the generated imagery and water in the flood extent map.
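As a concrete illustration of this consistency check, the minimal sketch below computes the IoU between the segmentation of a generated image and the flood map and tests membership in I_phys; the function names, the `seg_model` callable, and the margin `eps` are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection over union between two binary {0,1} masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    # Two empty masks depict the same (empty) flood extent.
    return float(intersection) / float(union) if union > 0 else 1.0

def is_physically_consistent(generated_img: np.ndarray,
                             flood_map: np.ndarray,
                             seg_model,
                             eps: float = 0.5) -> bool:
    """Check I_G in I_phys: the flood extent depicted in the generated
    image (measured by m_seg) must match the flood model output F
    within a margin. The margin eps is a placeholder value."""
    predicted_mask = seg_model(generated_img)  # m_seg: I -> F, shape (w, h)
    return iou(predicted_mask, flood_map) >= 1.0 - eps
```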
This method relies on flood masks, but because there are no publicly available flood segmentation models for Maxar RGB satellite imagery, we trained our own model on ∼100 hand-labeled flood images (Section IV-B). This segmentation model produced flood masks of the generated and ground-truth flood images, which allowed us to measure the overlap of water between both. When the flood masks overlap perfectly, the IoU is 1; when they are completely disjoint, the IoU is 0. To evaluate photorealism, we used the state-of-the-art perceptual similarity metric, Learned Perceptual Image Patch Similarity (LPIPS) [68]. LPIPS computes the feature vectors (of an ImageNet-pretrained AlexNet CNN) of the generated and ground-truth tiles and returns the mean-squared error between the feature vectors (the best LPIPS is 0, the worst is 1). Because the joint optimization over two metrics poses a challenging hyperparameter optimization problem, we propose to combine the evaluation of physical consistency (IoU) and photorealism (LPIPS) in a new metric, the Flood Visualization Plausibility Score (FVPS). The FVPS is the harmonic mean over the [0,1]-bounded submetrics IoU and (1−LPIPS):

FVPS = 2 · IoU · (1−LPIPS) / (IoU + (1−LPIPS)).   (1)

Due to the properties of the harmonic mean, the FVPS is 0 if any of the submetrics is 0; the best FVPS is 1. In other words, the FVPS is only 1 if the imagery is both photorealistic and physically-consistent.

In terms of both physical-consistency and photorealism, our physics-informed GAN outperforms an unconditioned GAN that does not use physics, as well as a handcrafted baseline model (Fig. 4).

1) A GAN without physics information generates photorealistic but not physically-consistent imagery. The inaccurately modeled flood extent in Fig. 4e illustrates the physical inconsistency, and a low IoU of 0.226 over the test set in Table I further confirms it (see Section A for test set details). Despite the photorealism (LPIPS = 0.293), the physical inconsistency renders the model untrustworthy for critical decision making, as confirmed by the low FVPS of 0.275. The model is the default pix2pixHD [13], which only uses the pre-flood image and no flood mask as input.

2) A handcrafted baseline model generates physically-consistent but not photorealistic imagery. Similar to common flood visualization tools [9], the handcrafted model overlays the flood mask input as a handpicked flood brown (#998d6f) onto the pre-flood image, as shown in Fig. 4g. Because typical storm surge models output flood masks at low resolution (30 m/px [5]), the handcrafted baseline generates pixelated, non-photorealistic imagery. Combining the high IoU of 0.361 and the poor LPIPS of 0.415 yields a low FVPS of 0.359, highlighting the difference to the physics-informed GAN in a single metric.

3) The proposed physics-informed GAN generates physically-consistent and photorealistic imagery. To create the physics-informed GAN, we trained pix2pixHD [13] from scratch on our dataset (200 epochs in ∼7 hrs on 8× V100 Google Cloud GPUs). This model successfully learned how to convert a pre-flood image and a flood mask into a photorealistic post-flood image, as shown in Fig. 1. The model outperformed all other models in IoU (0.553), LPIPS (0.263), and FVPS (0.532) (Table I). The learned image transformation "in-paints" the flood mask in the correct flood colors and displays an average flood height that does not cover structures (e.g., buildings, trees), as shown in 64 randomly sampled test images in Fig. 5.
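For reference, Eq. (1) reduces to a few lines of code. The sketch below is a hypothetical helper rather than the authors' published implementation; note that per-image FVPS values averaged over a test set will generally differ from the harmonic mean of the averaged submetrics.

```python
def fvps(iou_score: float, lpips_score: float) -> float:
    """Flood Visualization Plausibility Score, Eq. (1): the harmonic mean
    of the [0, 1]-bounded submetrics IoU and (1 - LPIPS). It is 0 if
    either submetric is 0, and 1 only for imagery that is both
    physically-consistent (IoU = 1) and photorealistic (LPIPS = 0)."""
    realism = 1.0 - lpips_score
    if iou_score <= 0.0 or realism <= 0.0:
        return 0.0
    return 2.0 * iou_score * realism / (iou_score + realism)

# Illustrative usage with made-up scores for a single image:
print(fvps(iou_score=0.9, lpips_score=0.3))   # -> ~0.79
print(fvps(iou_score=0.0, lpips_score=0.05))  # -> 0.0, despite photorealism
```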
Occasionally, city scenes show scratch patterns, e.g., Fig. 5 (top-left). This could be explained by the unmodeled variance in off-nadir angle, sun inclination, GPS calibration, color calibration, atmospheric noise, dynamic objects (cars), or flood impacts, which is partially addressed in Section IV-C1. While our model also outperforms the VAEGAN (BicycleGAN), the latter has the potential to create ensemble forecasts over the unmodeled flood impacts, such as the probability of destroyed buildings.

B. Flood segmentation model.

The flood segmentation model was a pix2pix segmentation model [12], which uses a vanilla U-Net as generator. The model was trained from scratch to minimize L1 loss, IoU, and adversarial loss, and had its last layers finetuned on L1 loss. We hand-labelled pixel-wise flood maps of 111 post-flood images to train the model. A four-fold cross-validation was performed, leaving 23 images for testing. The segmentation model selected for use in the FVPS has a mean IoU performance of 0.343. The labelled imagery will be made available as part of the dataset.

So far, we have shown that our pipeline can generate post-flood imagery for selected locations, such as Hurricane Harvey in Houston, TX, and for matching remote sensing instruments between train and test, e.g., Maxar satellites. The Earth Intelligence Engine, however, aims to visualize global climate change as seen from space, starting with a visualization of coastal floods and sea-level rise along the full U.S. East Coast. To achieve a visualization along the coast, the pipeline needs to generalize across locations, remote sensing instruments, and climate phenomena.

1) Generalization across location and remote sensing instruments. To generalize across the U.S. East Coast, the current framework would require a Maxar pre-flood image mosaic, which would be costly and challenging to open-source for the full U.S. East Coast. Hence, we assembled a dataset of pre-flood image tiles from the open-access U.S.-wide mosaic of 1.0 m/px visual aerial imagery from the National Agriculture Imagery Program (NAIP) [69]. The pre-flood NAIP image tiles are paired with open-access Maxar post-flood satellite imagery and a generated pixel-wise flood segmentation mask. This creates a dataset of 6500 clean image-triplets that we release as the floods section of our open-source dataset to study image-to-image translation in Earth observation. The translation task from NAIP aerial to Maxar satellite imagery is significantly more challenging than the Maxar→Maxar task, because the learned image transformation needs to account for differing remote sensing instruments, flight altitude, atmospheric noise magnitude, color calibration, and more. To reduce the learning task complexity, we removed the variation within the Maxar data during sourcing. The results show that image-to-image translation across remote sensing instruments is feasible, and demonstrate the potential to leverage the Earth Intelligence Engine to create a visualization of coastal floods along the full U.S. East Coast.

The retreat of Arctic sea ice is one of the most important and imminent consequences of climate change [1]. However, visualizations of melting Arctic sea ice are limited to physics-based renderings, such as [71]. There is also a lack of daily visual satellite imagery of the past due to satellite revisit rate, cloud cover, or polar night. The Earth Intelligence Engine is envisioned to create visualizations of past and future melting Arctic sea ice.
We assembled a dataset with ∼20k 1024×1024 px image-pairs of high-resolution (10 m/px) visual Sentinel-2 imagery, as showcased in Fig. 7. We leveraged ice-free tiles from the temporary Arctic summer (1 June to 31 August 2020) as training data to generate ice-free visual satellite imagery. Each ice-free summer tile is associated with a corresponding winter (1 October to 1 May 2020) tile from the same area. We ensured image diversity across land, ocean, and ice tiles by sampling Arctic coastal regions and removing image duplicates with perceptual hashing. Ice segmentation masks were generated for each summer tile by classifying each pixel with a normalized grayscale value i > 0.7 as ice. The image-triplets are then used to retrain the Earth Intelligence Engine with the same hyperparameters and configuration used for predicting floods. We acknowledge that predictions of Arctic sea ice extent only exist at low resolution (e.g., ∼6 km in the Modèle Atmosphérique Régional, MAR) while our framework leverages high-resolution masks. Future work will leverage coarse-grained masks during training to fully extend the framework to visualize the future melting of Arctic sea ice as projected by MAR.

Although our pipeline outperformed all baselines in the generation of physically-consistent and photorealistic imagery of coastal floods, there are areas for improvement in future work. For example, our flood datasets only contained ∼3k or 6.5k samples and were biased towards vegetation-filled satellite imagery; this data limitation likely contributes to our model rendering human-built structures, such as streets and the out-of-distribution skyscrapers in Fig. 5 (top-left), as smeared. Although we attempted to overcome our data limitations by using several state-of-the-art augmentation techniques, this work would benefit from more public sources of high-resolution satellite imagery (augmentation details in Section B). Apart from the data limitation, smeared features are still a concern in state-of-the-art GAN architectures [46]. Furthermore, the computational intensity of training GANs made it difficult to optimize the models on new data. Improved transfer learning techniques could address this challenge. Lastly, satellite imagery is an internationally trusted source for analyses in deforestation, development, or military domains [72], [73]. With the increased capability of data-generating models, more work is needed on the identification of, and the education around, misinformation and ethical and trustworthy AI [21]. We point out that our satellite imagery is synthetic and should only be used as a communication aid [4], and we take first steps towards guaranteeing trustworthiness in synthetic satellite imagery.

2) Cloud-penetrating satellite imagery. Remote sensing commonly faces the problem of missing frames due to cloud cover, orbital alignment, or the cost of high-resolution imagery [74], [75]. The Earth Intelligence Engine can be seen as a gap-filling model that combines the information from low-resolution flood maps and high-resolution pre-flood image mosaics to infer the missing high-resolution post-flood satellite imagery. For example, after floods, the arrival of the first visual images is often delayed until clouds pass or expensive drone surveys are conducted. Synthetic-aperture radar (SAR) is cloud-penetrating and often returns the first available medium-resolution flood maps (at ∼10 m/px) [76]. The Earth Intelligence Engine could visualize these medium-resolution SAR-derived flood extent maps.
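To illustrate how such a coarse flood map could feed the pipeline, the sketch below upsamples a low-resolution binary mask (e.g., SLOSH- or SAR-derived) and stacks it as the fourth channel of the 1024×1024×4 generator input described in Section III-B; the function and the nearest-neighbor upsampling are our own illustrative choices, not the authors' published preprocessing.

```python
import numpy as np

def stack_flood_channel(pre_flood_rgb: np.ndarray,
                        flood_mask: np.ndarray) -> np.ndarray:
    """Upsample a coarse binary flood mask to the resolution of the
    pre-flood tile (nearest-neighbor repetition) and append it as a
    fourth channel, yielding the 1024x1024x4 generator input.
    Assumes the tile size is an integer multiple of the mask size."""
    h, w, _ = pre_flood_rgb.shape                      # e.g., (1024, 1024, 3)
    fy, fx = h // flood_mask.shape[0], w // flood_mask.shape[1]
    upsampled = np.repeat(np.repeat(flood_mask, fy, axis=0), fx, axis=1)
    return np.concatenate([pre_flood_rgb, upsampled[..., None]], axis=2)

# Illustrative usage: a coarse 64x64 mask paired with a 1024x1024 tile.
tile = np.random.rand(1024, 1024, 3).astype(np.float32)
mask = np.random.randint(0, 2, (64, 64)).astype(np.float32)
model_input = stack_flood_channel(tile, mask)          # shape (1024, 1024, 4)
```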
However, future work will be necessary to extend the trustworthiness of generated flood visualizations in disaster response, for example, via incorporating information on the flood height, building damage, or the raw SAR signal. The current visualizations are aimed at media or policy makers to communicate the possible extent of future floods in a compelling manner [4].

We envision a global visualization tool for climate impacts. By changing the input data, future work can visualize the impacts of other well-modeled, climate-attributed events, including Arctic sea ice melt, hurricanes, wildfires, or droughts. Non-binary climate impacts, such as inundation height or drought strength, could be generated by replacing the binary flood mask with continuous model predictions. Opportunities are abundant for further work in visualizing our changing Earth. This work opens exciting possibilities in generating physically-consistent imagery with potential impact on improving climate mitigation and adaptation.

The research was partially sponsored by the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

The test set excludes imagery from hurricanes Michael and Matthew, because the majority of tiles does not display standing flood.
• We did not use digital elevation maps (DEMs), because the information of low-resolution DEMs is contained in the storm surge model, and high-resolution DEMs for the full U.S. East Coast were not publicly available.

APPENDIX B EXPERIMENTS

A. Data Augmentation. Standard data augmentation (here: rotation, random cropping, and hue and contrast variation) and state-of-the-art augmentation (here: elastic transformations [78]) were applied. Furthermore, spectral normalization [79] was used to stabilize the training of the discriminator. A relativistic loss function was implemented to stabilize adversarial training. We also experimented with training pix2pixHD on an LPIPS loss. Quantitative evaluation of these experiments, however, showed that they did not have a significant impact on performance and, ultimately, the results in the paper were generated by the PyTorch implementation of pix2pixHD [13], extended to 4-channel inputs.

The standard LPIPS did not clearly distinguish between the handcrafted baseline and the physics-informed GAN, contrasting with the opinion of a human evaluator. This is most likely because LPIPS currently leverages a neural network that was trained on object classification on ImageNet. The neural network might not be capable of extracting meaningful high-level features to compare the similarity of satellite images. In preliminary tests, the ImageNet-pretrained network would classify all satellite imagery as a background image, indicating that the network did not learn features to distinguish satellite images from each other.
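For concreteness, the standard LPIPS discussed above can be computed with the reference `lpips` package; the AlexNet backbone matches the paper's setup, while the random tensors below are placeholders for actual image tiles.

```python
import torch
import lpips  # reference implementation by Zhang et al. (pip install lpips)

# LPIPS with the ImageNet-pretrained AlexNet backbone used in the paper.
loss_fn = lpips.LPIPS(net='alex')

# Inputs are RGB tensors scaled to [-1, 1] with shape (N, 3, H, W).
generated = torch.rand(1, 3, 1024, 1024) * 2 - 1
ground_truth = torch.rand(1, 3, 1024, 1024) * 2 - 1

distance = loss_fn(generated, ground_truth)  # 0 = identical, larger = more different
print(distance.item())
```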
Future work will use LPIPS with a network trained to extract satellite-imagery-specific features, e.g., Tile2Vec or a land-use segmentation model [80].

Yarin Gal obtained his PhD from the Cambridge machine learning group, working with Zoubin Ghahramani FRS and funded by the Google Europe Doctoral Fellowship. Yarin made substantial contributions to early work in modern Bayesian deep learning, quantifying uncertainty in deep learning, and developed ML/AI tools that can inform their users when the tools are "guessing at random". These tools have been deployed widely in industry and academia, with the tools used in medical applications, robotics, computer vision, astronomy, in the sciences, and by NASA.

Chedy Raïssi received his PhD in Computer Science from the École des Mines d'Alès in July 2008. After completing his PhD, Chedy worked as a research fellow (post-doctoral researcher) at the National University of Singapore on privacy-preserving data mining, with an emphasis on the anonymization of clinical trial data. In 2010, Chedy was appointed as a permanent research scientist (chargé de recherche) at the French Institute for Research in Computer Science and Automation (INRIA), France, where he joined the Orpailleur team and worked in the field of sequence and graph combinatorics and concept lattices (also known as "Galois lattices"). Since 2019, Chedy has been on sabbatical leave from INRIA, having joined Ubisoft Singapore as Data Science Director, where he leads a new team of researchers and engineers to shape innovative projects in machine learning and video games.

Alexander Lavin has spent much of his career at the intersection of AI and neuroscience. He founded Latent Sciences to develop a patented AI platform (i.e., a probabilistic programming domain-specific language he built) for predictive and causal modeling of neurodegenerative diseases, which was acqui-hired into a stealth enterprise AI company. Prior to Latent Sciences, he worked with Vicarious and Numenta towards artificial general intelligence. Prior to pursuing AI, he was a spacecraft engineer, working with NASA, Blue Origin, Astrobotic, and Technion. He is a technical lead with nasa.ai for various ML projects in climate science and astronaut health, is leading novel initiatives in Systems ML, and is a Forbes 30 Under 30 honoree in Science. He studied computational mechanics and robotics at Carnegie Mellon (under advisors Red Whittaker and Kenji Shimada), engineering management at Duke University, and mechanical and aerospace engineering at Cornell. Away from the computer he is a runner, yogi, outdoors explorer, and dog dad. More information is available at lavin.io.

Dava Newman is the Apollo Program Professor of Astronautics and Director of the MIT-Portugal Program at the Massachusetts Institute of Technology, and a Harvard-MIT Health Sciences and Technology faculty member. Her aerospace biomedical engineering research investigates human performance across the spectrum of gravity, including space suits, life support, and astronaut performance. She has been the PI on 4 spaceflight missions. Her second-skin BioSuit™ planetary spacesuit inventions are now being applied to soft exoskeletons to enhance locomotion on Earth. She has exhibited the BioSuit™ at the Venice Biennale, London's Victoria and Albert Museum, Paris' Cité des Sciences et de l'Industrie, the American Museum of Natural History, and the Metropolitan Museum of Art. Her current research targets climate change and Earth's vital signs from Oceans-to-Space. She has circumnavigated the globe, sailing around the world.
She is the PI on the Earth Intelligence Engine AI platform for weather and climate. Newman is the author of the textbook Interactive Aerospace Engineering and Design and has over 300 publications. Dr. Newman served as NASA Deputy Administrator from 2015 to 2017 and was responsible for articulating NASA's vision, providing leadership and policy direction, spearheading diversity and inclusion, and representing NASA to the White House, Congress, international space agencies, and industry. Dr. Newman was the first female engineer and scientist to serve in this role and was awarded the NASA Distinguished Service Medal.

REFERENCES

Global warming of 1.5°C. An IPCC special report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty
Centre for Research on the Epidemiology of Disasters (CRED) and UN Office for Disaster Risk Reduction (UNISDR)
Visualizing Climate Change: A Guide to Visual Communication of Climate Change and Developing Local Solutions. Taylor and Francis Group
National Storm Surge Hazard Maps, Texas to Maine, Category 5
Surging seas: Sea level rise analysis
Landscape visualisation and climate change: the potential for influencing perceptions and behaviour
First Street Foundation flood model technical methodology document
Sea level rise: predicted sea level rise impacts on major cities from global warming up to 4°C
Visualizing the consequences of climate change using cycle-consistent adversarial networks
Creating xBD: A dataset for assessing building damage from satellite imagery
Image-to-image translation with conditional adversarial networks
High-resolution image synthesis and semantic manipulation with conditional GANs
Unpaired image-to-image translation using cycle-consistent adversarial networks
Large scale GAN training for high fidelity natural image synthesis
Predicting landscapes from environmental conditions using generative networks
TileGAN: Synthesis of large-scale non-homogeneous textures
Generating synthetic multispectral satellite imagery from Sentinel-2
Cloud-GAN: Cloud removal for Sentinel-2 imagery using a cyclic consistent generative adversarial network
Generative adversarial networks for realistic synthesis of hyperspectral samples
Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
Understanding the role of individual units in a deep neural network
Towards deep learning models resistant to adversarial attacks
TrueBranch: Metric learning-based verification of forest conservation projects
Weather forecasting with ensemble methods
SRFlow: Learning the super-resolution space with normalizing flow
Deep hidden physics models: Deep learning of nonlinear partial differential equations
Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control
Deep learning to represent subgrid processes in climate models
Uncertainty-aware physics-informed neural networks for parametrizations in ocean modeling. AGU Fall Meeting, Session on AI in Weather and Climate Modelling
Deep learning and process understanding for data-driven Earth system science
Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
Deep learning for universal linear embeddings of nonlinear dynamics
Hamiltonian neural networks
Embedding hard physical constraints in neural network coarse-graining of 3D turbulence
Deep unsupervised state representation learning with robotic priors: a robustness analysis
Universal differential equations for scientific machine learning
Using neural networks for parameter estimation in ground water
Neural ordinary differential equations
Physics-guided Neural Networks (PGNN): An application in lake temperature modeling
Use of neural networks for stable, accurate and physically consistent parameterization of subgrid atmospheric processes with good performance at reduced precision
PDE-Net 2.0: Learning PDEs from data with a numeric-symbolic hybrid deep network
Unsupervised representation learning with deep convolutional generative adversarial networks
Discriminative region proposal adversarial networks for high-quality image-to-image translation
Semantic image synthesis with spatially-adaptive normalization
You only need adversarial supervision for semantic image synthesis
Variational inference with normalizing flows
Auto-encoding variational Bayes
Gaussian process prior variational autoencoders
Generating images with perceptual similarity metrics based on deep networks
Toward multimodal image-to-image translation
Semantic photo manipulation with a generative image prior
Synthetic satellite imagery for current and future environmental satellites
Downscaling and visioning of mountain snow packs and other climate change implications in North Vancouver, British Columbia
NOAA sea level rise viewer
SLOSH: Sea, Lake, and Overland Surges from Hurricanes
ADCIRC: An advanced three-dimensional circulation model for shelves, coasts and estuaries, report 1: Theory and methodology of ADCIRC-2DDI and ADCIRC-3DL
High-resolution image synthesis and semantic manipulation with conditional GANs
An empirical study on evaluation metrics of generative adversarial networks
Pros and cons of GAN evaluation measures
Adversarial training with cycle consistency for unsupervised super-resolution in endomicroscopy
Image quality assessment: From error visibility to structural similarity
A test of relative similarity for model selection in generative models
Improved techniques for training GANs
Mode regularized generative adversarial networks
GANs trained by a two time-scale update rule converge to a local Nash equilibrium
Establishing an evaluation metric to quantify climate change image realism
The unreasonable effectiveness of deep features as a perceptual metric
National Geospatial Data Asset National Agriculture Imagery Program (NAIP) imagery
Open data program, Hurricane Harvey, 8/31/2017, tileid: 105001000b95e100
Annual Arctic sea ice minimum 1979-2020 with area graph
High-resolution global maps of 21st-century forest cover change
Earth observation in service of the 2030 agenda for sustainable development
Deep learning in remote sensing: A comprehensive review and list of resources
Creating synthetic radar imagery using convolutional neural networks
Near real time satellite imagery to support and verify timely flood modelling
Open data for disaster response
Best practices for convolutional neural networks applied to visual document analysis
Spectral normalization for generative adversarial networks
Large scale high-resolution land cover mapping with multi-resolution data

APPENDIX A

Pre- and post-flood imagery. Post-flood images that display standing water are challenging to acquire due to cloud cover, time of standing flood, satellite revisit rate, and the cost of high-resolution imagery.
To the extent of the authors' knowledge, xBD [11] is the best publicly available data source for preprocessed high-resolution imagery of pre- and post-flood images. More open-source, high-resolution, pre- and post-disaster images can be found in unprocessed collections. Our dataset contains 3284 flood-related RGB image pairs from seven flood events (hurricanes on the East or Gulf Coast, spring floods) at 1024×1024 px and ∼0.5 m/px resolution, of which 30% display a standing flood. Our evaluation test set is composed of 216 images: 108 images from each of hurricanes Harvey and Florence.