key: cord-0890482-kpqxur6g authors: Milazzo, Mario; Buehler, Markus J. title: Designing and fabricating materials from fire using sonification and deep learning date: 2021-07-16 journal: iScience DOI: 10.1016/j.isci.2021.102873 sha: a27517c8f35dc6f9f6ded23b481b99e8f8883b37 doc_id: 890482 cord_uid: kpqxur6g

Fire has fascinated humankind since the prehistoric era. Rooted in the interactions between sound and flames, here we report a method to use fire for a variety of purposes, including sonification, art, and the design and manufacture of nature-inspired materials. We present a method to sonify fire, translating the otherwise silent nature of flames into audible information, and to generate de novo flame images. To realize material specimens derived from fire, we use an autoencoder to generate image stacks that yield continuous 3D geometries, which are then manufactured using 3D printing. This represents the first generation of nature-inspired materials from fire and can serve as a platform for other natural phenomena in the quest for de novo architectures, geometries, and design ideas, creating additional directions in artistic and scientific research through the creative manipulation of data with structural similarities across fields.

Fire has fascinated humankind since the prehistoric era and, according to different traditions, was considered one of the four main elements that "make of all things in Nature" (Macauley, 2010). Science and art have both used fire as a tool and inspiration over the years. From a scientific-technological standpoint, fire is one of the fundamental agents that enable chemical reactions, and its controlled dynamics have been widely studied to optimize and control combustion processes (Drysdale, 2011). Another field of interest concerns fire detection and monitoring for safety purposes, since efficient control of a fire from the moment it ignites has a large impact not only on the protection of people and animals but also on environmental sustainability (Sousa et al., 2020; Zhuang et al., 2017). The complex nature of flames (Figure 1A) offers a deep foundation of shapes and their temporal evolution that can push the boundary of nature-inspired materials, typically focused on learning from biomaterials such as nacre or spider webs, toward other phenomena in nature. Enabled by a variety of deep learning approaches, here we report a palette of tools that create new material and structure designs through this mode of observation. In recent work, Lattimer et al. reviewed the use of artificial intelligence (AI) strategies to discriminate flame features, showing advantages in terms of time savings over computational fluid dynamics approaches (Lattimer et al., 2020). In particular, the authors described two alternative approaches: dimensionality reduction (also known as reduced-order modeling) and deep learning. In the first case, they presented an unsupervised machine learning approach that reduces the design space using an unlabeled data set. The algorithm processes fire images, from which it extracts a subset of data used for detailed analysis of fire features through ordinary differential equations. This method thus allows operators to work in a lower-dimensional analysis subspace, saving computational time and cost.
In contrast, the deep learning approach is used to evaluate the velocity and temperature fields based on the volumetric flow field conditions, the geometry of the targeted environment, and the fire description. The model works well in 2D and is able to detect spatial variations of the targeted fields, achieving good agreement with computational fluid dynamics results at reduced computational cost (Hodges, 2018; Lattimer et al., 2020). In the visual and musical arts, fire has been used as a tool and inspiration for paintings and, more interestingly, has inspired composers to create melodies that evoke its power and perpetual shape mutation without using the dull audible sounds emitted from its interaction with air (ClassicFM, 2020).

Figure 1 caption: From the observation of a fire to an experimental setup to study sound-fire interactions. Fire (panel A) is an important natural phenomenon that yields complex, dynamically changing shapes whose motions are heavily influenced by environmental factors. The structure of a fire as shown in this panel can be viewed as a collection of individual flames (one extracted flame is shown on the right); for the scientific study reported here, we hence focus on individual flames. Panels B and C depict the experimental setup, consisting of a speaker as audio source, a burning candle with flame, and a camera that records images in front of a black background. Panel B: axonometric view. Panel C: top view with dimensions and cross-sectional view (D-D) to clarify the speaker-flame relative positioning. The camera is positioned such that only the flame, not the candle, is visible during data capture.

Changing this paradigm, as demonstrated in previous works, it is possible to re-interpret and use scientific data in a new form, expanding the design palette (Barrass and Kramer, 1999; Hermann et al., 2011; Kramer et al., 2010). An example was given recently by mapping the 3D shape of proteins, the building blocks of life, into music, by exploiting the structural similarities of the domains (Franjou et al., 2019, 2021; Yu et al., 2019). For instance, the spike protein of the virus that causes COVID-19 was mapped into a set of vibrations, akin to a unique timbre for each virus variant, demonstrating how vibrational data can be correlated with epidemiological data in terms of lethality and transmission rate (Hu and Buehler, 2021). The employment of AI may also allow us to design and tune new proteins and structures for specific applications, leading to a new paradigm for designing and developing constructs beyond the bioengineering field, as it allows rapid translation of information across domains and into materialization (Yu and Buehler, 2020). In our own recent work, we used melodies, including those derived from proteins, to deform a thin layer of water, employing sonification and its reverse process to create water patterns. Using a convolutional neural network (CNN), we were able to classify and transform the images collected from the water patterns into "keys" of a new artistic tool that visualizes musical harmonies (Buehler, 2020). A different approach that leads to the sonification of visual models was developed by Zhao et al., who exploited the natural synchronization of visual and audio channels to map images to sounds without manual supervision.
They called their system "PixelPlayer", a tool that is able to recognize, in unlabeled videos, the regions of each frame that show objects capable of producing sounds. The pixels of such regions are then used to create melodies based on their features in the RGB map (Zhao et al., 2018). Their work leveraged previous studies that developed tools to create sounds from silent videos (Owens et al., 2016; Zhou et al., 2018) and to localize sound sources from motion (Izadinia et al., 2012) or semantic cues (Arandjelovic and Zisserman, 2018; Senocak et al., 2018).

In this work, we use the interaction of fire with air as the key element to create a versatile AI-driven tool for manifold uses. Based on a single candle flame as a model for more complex fire shapes (Figure 1A), and using data from physico-chemical phenomena driven by acoustic interaction, we propose a new direction for developing bioinspired materials: an approach that does not follow the traditional avenues but exploits specific dynamic mechanisms to extract structural features that inspire new materials and constructs. Furthermore, from an artistic standpoint, it is possible to create a synthetic analog instrument for new compositions, where a flame takes the place of a vibrating string. It is worth noting that, in contrast to traditional fire-inspired music, we use images of real fire as inputs for our compositions. Additionally, from a visual perspective, artistic pictures can be made by combining the information from fire with other images, creating a new mashup of artistic visualizations that offer semblances of the internal convolutional layers deep inside the neural network, depicted in everyday images. Finally, we demonstrate the use of deep neural nets to generate de novo 3D geometries, to realize nature-inspired material designs that take structural features from fire and include them in hierarchical material patterns that are fabricated using 3D printing.

Using the experimental setup described in Figures 1B and 1C, we first collect a series of images of candles exposed to each of the frequencies in an octave, as well as to no acoustic excitation (silence), to build a data set consisting of flame images and associated labels reflecting the type of audio signal the flame was exposed to during imaging. Figure 2A shows sample images of the flame exposed to different frequencies. Figure 2B provides an overview of the deep neural networks used here, summarizing the two major models (a deep CNN classifier and a deep convolutional variational autoencoder [VAE]) used for the purposes of sonification and materials design. Figure 3 shows details of the deep CNN model, summarizing its layers and hyperparameters; the training performance is shown in Figure 4. The goal of the classifier is to predict, from an image provided to it, the correct audio signal that the flame was exposed to (there are 13 classes: silence, plus each of the 12 notes in an octave). Once trained, the CNN acts as a classifier that determines the original audio source when fed images of a flame. To test model performance, we collected new data that the model had never seen during training and computed the average prediction score. The trained CNN can be used for a variety of purposes.
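To make the classification step concrete, the following is a minimal sketch of a 13-class flame-image classifier in TensorFlow/Keras. The exact layer stack and hyperparameters used by the authors are those summarized in Figure 3; the architecture, image size, and directory layout below are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch of a 13-class flame-image classifier (TensorFlow/Keras).
# IMG_SIZE, the layer stack, and the data directory layout are assumptions.
import tensorflow as tf

IMG_SIZE = (256, 128)   # assumed input resolution (height, width)
NUM_CLASSES = 13        # silence + the 12 notes of the octave

# Assumes flame images are sorted into one folder per class, e.g. data/train/C2/...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/test", image_size=IMG_SIZE, batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=20)
model.save("flame_classifier.h5")
```

Once trained, calling `model.predict` on a single flame image returns a probability over the 13 audio classes, which is the basis for both the classification tests and the sonification described below.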
An artistic application of the model is to use the deep dream algorithm (Szegedy et al., 2015), similar to the earlier application reported in (Buehler, 2020), to elucidate the recurrence of internal patterns that the trained neural network "sees". We applied this to a variety of images, a few examples of which are shown in Figure 6. It is clearly visible how the model is capable of "seeing" flame patterns all over the image canvas, leading to a remarkably changed outcome.

Figure 2 caption: The air pressure waves generated by the audio signals lead to shape changes of the flames, which are recorded by the camera (as shown in Figure 1, the speaker is to the left of the flames). Panel B: flowchart of the two methods used in this paper. Supervised learning on the training set results in a CNN model that is used to classify and sonify, and that also serves as the basis for a fire Inceptionism algorithm; the VAE model is used to generate videos and 3D printing models.

This approach represents an extension of the concept of bio-inspired or nature-inspired materials to bio-inspired art. Another possible application of the CNN model is sonification: rendering images as sound and, by using a sequence of images (or a video), a change in sound or music.

Figure 3 caption: This model is used for supervised learning based on pairs of images and labels (reflecting the audio source) and serves as a classifier that associates an image of a flame with an audio source. It is also used to sonify data, in that images of flames trigger a certain audio to be played.

In other words, we can watch a flame flicker, or move due to external factors such as air movement, and render a soundtrack to it by classifying the audio signals associated with each frame. Figure 7 shows the basic outline of the approach, using a sample audio signal of continuously varying frequency (from pitch C2 upward), applied over 40 s each for the upward and downward sweeps (a total of 80 s). In this example, the fire serves as an analog transformer, which is captured using the same experimental setup as shown in Figure 2. The resulting video with the predicted audio is shown in Video S1. The audio is generated by classifying the images in the temporal sequence and rendering the associated sound with the same temporal evolution, yielding an automated soundtrack, or sonification, that matches the observed flame shape to the instantaneous audio signal. Since the input audio signal used in this experiment differs from the original audio used to train the CNN model (e.g., we traverse a continuous range of pitches rather than the 12 discrete pitches in the octave, and the pitch ranges are different), the fire acts as an analog transformer that can be viewed as a new form of musical instrument. Other modes of excitation of flames could be hand motions, vocalizations by professional singers, or environmental conditions such as wind or thermal movement of air, each inducing certain flame shapes that can be associated with a certain pitch. This can result in interesting sonification methods, where fire becomes a musical instrument, or a translator of data from one form to another. Another example of this method could be to apply the sonification approach to other audio signals, e.g., those that do not consist of pure sine waves (a sketch of the basic frame-classification loop is shown below).
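The following sketch illustrates that frame-by-frame sonification loop: each video frame is classified with the trained CNN and mapped to one MIDI note. The class-to-pitch mapping, frame handling, and file names are illustrative assumptions; the authors rendered the resulting MIDI file in a DAW (Ableton Live), which is not modeled here.

```python
# Sketch of the sonification loop: classify each video frame with the trained
# CNN and emit one MIDI note per frame. Class-to-pitch mapping and file names
# are assumptions, not the authors' exact pipeline.
import cv2
import numpy as np
import pretty_midi
import tensorflow as tf

model = tf.keras.models.load_model("flame_classifier.h5")  # from the sketch above
IMG_SIZE = (256, 128)  # (height, width); must match the classifier input

# Assumed mapping: class 0 = silence; classes 1..12 = C2 (MIDI 36) upward.
CLASS_TO_MIDI = {i: 35 + i for i in range(1, 13)}

cap = cv2.VideoCapture("flame_video.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0

pm = pretty_midi.PrettyMIDI()
inst = pretty_midi.Instrument(program=0)  # placeholder instrument

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    rgb = cv2.resize(rgb, (IMG_SIZE[1], IMG_SIZE[0]))
    probs = model.predict(rgb[np.newaxis, ...], verbose=0)[0]
    cls = int(np.argmax(probs))
    if cls in CLASS_TO_MIDI:  # skip the "silence" class
        t = frame_idx / fps
        inst.notes.append(pretty_midi.Note(velocity=90,
                                           pitch=CLASS_TO_MIDI[cls],
                                           start=t, end=t + 1.0 / fps))
    frame_idx += 1

pm.instruments.append(inst)
pm.write("flame_sonification.mid")
```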
Applying the approach to such signals is left to future work, especially in the creative domain.

With the available data set of flames under different audio excitations (see Figures 2A and 5 for sample flame shapes), it is also possible to develop a generative neural net that allows us to create a continuous range of synthetic flames, using unsupervised learning, as depicted in Figure 8A. Figures 8B and 8C show the results of a deep convolutional VAE model of the flames exposed to different frequencies. Once trained, the model is capable of reproducing different states of deformation (from still to each of the 12 pitches in the octave) and of distinguishing them in latent space. The capacity to distinguish the different flame shapes underscores the earlier results for the classifier model shown in Figure 5 (albeit with a different neural net topology for the autoencoder model, thereby suggesting that the classification properties are general). Figure 8D shows an example of systematic variations in the two-dimensional latent space, revealing the associated flame shapes. The dimensionality of the latent space can, in principle, be adapted to match the type of data of interest. It is also noted that the plot of the latent space (Figure 8) shows that, in agreement with the results in Figures 3 and 4, unique flame shapes are associated with specific audio sources, since there is a clear clustering of the data points in latent space associated with certain audio sources. The significance of the autoencoder model is that it allows us to generate new flame shapes, including those not included in the training set, and to explore the approach as a way to offer nature-inspired design ideas. In this vein, the model has several applications: first, to generate synthetic flickering flames using the VAE model; second, to further explore the latent space for design ideas; and ultimately, to generate 3D material samples inspired by fire.

Figure 9 shows the use of a random walk algorithm to generate pairs of latent space coordinates (Figure 9A), from which the decoder model generates a synthetic flickering flame. This result is shown in Video S2. This approach may also be used to map other variables, such as environmental data (e.g., temperature), into latent space variables and then into flame shapes. In future work, these newly predicted flame shapes could also be sonified, offering yet another way to realize data sonification of complex multidimensional data.

Another perspective concerns the fabrication of nature-inspired structures from fire, integrating the machine learning methods with additive manufacturing (Figure 10). The combination of the autoencoder model and a random walk algorithm can be used to create material designs that can be manufactured using 3D printing. Figure 10A shows the differential random walk data generated (the variables in the latent space are offset by (-4, -4) to reflect the beginning of the walk (red circle) with a stable flame shape, as can be confirmed in Figure 10A). The random walk is used to generate a series of images (with x-y data) that are stacked together (in the z-direction) to create a 3D geometry. The walk distance in each step must be chosen small enough to achieve smooth variations in shape at each step. Figure 10B shows the resulting 3D geometry as seen in a slicer program, rendering a hollow interior. A minimal sketch of this latent-space walk is shown below.
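The sketch below shows one way to perform the differential random walk in the 2D latent space and decode each step into a flame image, producing the slice stack that is later converted into a 3D geometry. The decoder file name, step size, number of slices, and starting point are illustrative assumptions.

```python
# Sketch of the latent-space random walk used to generate a stack of synthetic
# flame images with the trained VAE decoder. Decoder file name, step size,
# slice count, and starting point are assumptions.
import os
import numpy as np
import tensorflow as tf
from PIL import Image

decoder = tf.keras.models.load_model("flame_vae_decoder.h5")
os.makedirs("stack", exist_ok=True)

rng = np.random.default_rng(seed=0)
n_slices = 2000          # slices stacked along z to form the 3D geometry
step = 0.05              # small steps give smooth shape variation slice to slice
z = np.array([-4.0, -4.0])   # start near a stable flame shape (cf. Figure 10A)

for i in range(n_slices):
    z = z + rng.normal(scale=step, size=2)   # differential random walk in 2D latent space
    img = decoder.predict(z[np.newaxis, :], verbose=0)[0]   # assumed (H, W, 1) in [0, 1]
    img8 = (np.squeeze(img) * 255).astype(np.uint8)
    Image.fromarray(img8).save(f"stack/slice_{i:05d}.png")
```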
Figure 10C shows various renderings of the flame shapes, specifically focusing on the internal structures: the "inside of a flame" that is typically invisible, and which provides a novel perspective on fire and its materialization. Figure 10D depicts the result of a 3D print of the same geometry, realized using an SLA (LCD) printing method. Figure 10E shows another example, this time printed using an FDM technique. Figure 10F shows the resulting printed material realizations using a multi-material FDM printer and PVA support material (note that, to generate this geometry, we use the distance matrix of the protein with PDB ID 4RLC, the crystal structure of the N-terminal beta-barrel domain of Pseudomonas aeruginosa, to generate the path in 2D latent space, thereby drawing the resulting flame shape as a material interpretation). This is an example of applying the method to material-to-material translations, where material source data (e.g., distance matrices of proteins) are used to generate flame geometries, which are then rendered into 3D material shapes. Moreover, when flames are printed using more flexible materials such as polyurethane, the resulting materials can be used for mechanical experiments, such as bending (Figure S5) or torsional deformation (Video S3). Such analysis could also be carried out using finite element modeling in future work, offering additional layers of mechanistic analysis and applications to solid mechanics. Additionally, to increase the sensorial experience, we use the sonification method described above to generate Video S4, which overlays the sonification of the same 3D-printed dataset on the original video recording. This video is a culmination of the methods reported here: the autoencoder is used to generate the flame shapes and the 3D structure of the flame composition, and the classifier is used to determine the sound over time. The temporal evolution of the sound reflects a type of musical composition, a rendering of the materialization of the fire shapes shown in Figure 10, made audible, providing yet another perspective on the structure. We note that instead of using a random walk as input, we can also use other data (e.g., climate data, using pairs of temperature and sea level) to realize evolutions of flame shapes and associated audio. These types of explorations are left to future work. By changing the dimensionality of the latent space (two in the examples shown here) and matching it to the dimensionality of the data to be visualized, one can explore a variety of options.

Looking ahead, we envision that in future work the random walk and design creations can be coupled with an optimization algorithm (e.g., a genetic algorithm or Bayesian methods) to achieve certain target material properties. For instance, the use of flame shapes in architecture may require finding a certain volume fraction or stiffness, which can be identified by seeking certain latent space parameters. This, combined with finite element modeling of the resulting shapes, could yield interesting new insights into mechanical performance (deformation of flames, fracture of flames, etc.) and adaptation toward certain required material properties. The manipulation of data from a clean flame may also lead to applications in the materials science field.
As also outlined in previous work (Franjou et al., 2019, 2021), it is possible to exploit and correlate the similarities between the hierarchical structures of music and those of many complex tissues and composites. As already demonstrated in the specific case of proteins, such a mapping may also be used with data coming from an excited fire. The musical instrument described above could be used to bridge the natural fire source with a design palette for materials. Through a two-step data processing workflow, we could extract specific patterns and features from the observed deformation to identify bioinspired structural properties with which to create new materials that can ultimately be 3D printed. A first example of such a direction is the collection of the frequency spectrum, which could be the target of structural optimizations. The systematic collection of the topological features of flames upon an external disturbance over time (e.g., flame height, bending angle) may serve as a model for structural properties to implement in novel bioinspired materials. Mimicking the deformability of flames upon external loads may find applications in so-called soft robotics, an emerging field of research that uses the high deformability and biomimetic functionalities of structural materials to prevent collisions and to ensure, as a meta-material (Zadpoor, 2020), high adaptability in unstructured environments and applications not limited to the bioengineering field (Ilievski et al., 2011; Wang et al., 2019). Finally, following the examples of previous studies (Halder and Dey, 2015; O'Brien et al., 2010), by observing the relationship between excitation and the structural response of the fire (i.e., deformation, delayed response), new bioinspired strategies for controlling deformable structures may be implemented, complementing traditional approaches.

This work has highlighted several features of "fire" at the nexus of physics, engineering, and art, and we reported the first nature-inspired material design from fire, including the materialization of fire using a complement of deep learning and additive manufacturing. The use of a natural phenomenon like fire expands the concepts of design by nature beyond the typical nacre- or silk-inspired materials and offers a new way to use neural nets to translate a variety of phenomena into new designs.

Figure 7 caption: Application of the model in sonification. As input, we use images of a flame exposed to a systematic variation of a sine wave audio signal with a continuous frequency increase from 0 to 120 Hz and a mirrored decrease over the same time frame, for a total of 80 s. The flame images produced by this continuous change in pitch are then classified using the neural network, and a musical score is generated that is used to produce a new audio signal, based on classifying the images and identifying the associated sounds. See Video S1 in the supplementary information.

First, we showed that flame shapes can be closely associated with the source of excitation. While this depends on the details of the parameters (e.g., distance of the audio source, sound amplitude) and may not generalize easily, if the parameters are kept constant such a characterization can be done rigorously, and a relationship between audio source and flame pattern can be learned using a deep neural network.
This model, in turn, allows us to work in the reverse direction and predict the audio source from the flame shape, creating a unique sound association. We further used this new data set of flames to develop a variational autoencoder that enables us to map a two-parameter latent space into a variety of flame shapes. One application was to generate flickering flame models by moving randomly in this latent space. Another application could be to map other data to this latent space, and then provide visualizations of such data in the form of flames and ultimately as sound (since each flame image can be associated with a pitch). The last, but not least important, application is the use of the collected data to create new data sets of properties and features to be exported and translated into the engineering field. For instance, new materials and constructs may be developed by mimicking the deformability of fire or its adaptability to the surrounding environment. In addition to materials science, emerging fields such as soft robotics may take advantage of such an approach to develop devices that can be integrated into complex systems for unstructured scenarios. Some of the preliminary analyses reported here (e.g., Figure S5, Video S3) offer insights into what may be possible toward applications in science and technology.

In conclusion, the methods reported here offer new directions in scientific and artistic research. This transcends the traditional definition of bio-inspiration in materials research and opens a new avenue where human creativity is intermingled with AI to explore interfaces of natural and synthetic worlds, especially via the use of multi-material additive manufacturing of complex geometries. In addition to its use in STEM outreach, this can also provide a powerful toolkit for translation across disciplinary boundaries and elucidate new design paradigms that solicit natural design languages, patterns, and other forms of signals in the engineering process. For instance, augmenting images using the fire Inceptionism algorithm can provide material inspiration for architectural design, to come up with new shapes or geometries for fire-inspired design work in future infrastructure. This can be a powerful tool to mix emotional with material aspects that reflect cultural heritage in novel dimensions.

Figure 9 caption: Generating synthetic "flickering flames" using a random walk algorithm. Using a random walk algorithm to generate pairs of latent space coordinates (panel A), we use the decoder model described in Figure 8A to generate a synthetic "flickering flame", with examples shown in panel B. This approach may also be used to map other variables, such as environmental data (e.g., temperature), into latent space variables and then into flame shapes, which could also be sonified to provide audible renditions of data using fire as a translator. A video of synthetic flickering is provided in the supplementary information, Video S2.

Figure S4 shows an example where we use the distance matrix of a protein to generate 2D trajectory data, which then yields a novel 3D model design as shown in Figure 10F. This type of material-to-material translation using fire as a medium can be applied to numerous other areas and provides a new source of material design, art, and cross-domain transformation.
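The paper does not detail how the protein distance matrix is converted into a 2D latent-space trajectory, so the sketch below shows one plausible, hypothetical mapping for illustration only: each residue's distances to the first and last residues are taken as the two latent coordinates and rescaled to the region of latent space explored by the VAE. It assumes the PDB file 4rlc.pdb has been downloaded locally and uses Biopython.

```python
# One hypothetical way to turn a protein distance matrix into a 2D latent-space
# trajectory (cf. Figure 10F and Figure S4). The coordinate choice and scaling
# below are assumptions, not the authors' exact mapping.
import numpy as np
from Bio.PDB import PDBParser

structure = PDBParser(QUIET=True).get_structure("4RLC", "4rlc.pdb")
ca_coords = np.array([res["CA"].coord
                      for res in structure[0].get_residues() if "CA" in res])

# Pairwise C-alpha distance matrix.
diff = ca_coords[:, None, :] - ca_coords[None, :, :]
dist = np.linalg.norm(diff, axis=-1)

# Hypothetical mapping: distance of each residue to the first and to the last
# residue, rescaled to roughly span the latent-space region used by the VAE.
traj = np.stack([dist[:, 0], dist[:, -1]], axis=1)
traj = 8.0 * (traj - traj.min(0)) / (traj.max(0) - traj.min(0)) - 4.0  # map to [-4, 4]

np.savetxt("latent_path_4rlc.txt", traj)  # feed to the decoder as in the walk sketch above
```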
Figure 10 caption: Manufacturing of nature-inspired materials from fire. The combination of the VAE model and a random walk algorithm can also be used to realize material designs that can be manufactured using 3D printing. Panel A shows the differential random walk data generated (the variables in the latent space are offset by (-4, -4) to reflect the beginning of the walk (red circle) with a stable flame shape, as can be seen in Figure 8C). The random walk is used to generate a series of images that are stacked together to create a 3D geometry. The walk distance must be chosen small enough to achieve smooth variations in shape at each step. Panel B shows the resulting 3D geometry as seen in a slicer program, rendering a hollow interior. Panel C depicts various internal views of the 3D model, the rightmost of which resembles the interior of the gut. Panel D shows the result of a 3D print using an SLA/LCD technique (for detailed views see Figure S3). Panel E shows a section of the same sample printed using FDM with a thermoplastic polymer, and Panel F shows the resulting prints using an FDM printer and PVA support material (left: with support material; right: support material removed). Note that, to generate this geometry, we use the distance matrix of the protein with PDB ID 4RLC (crystal structure of the N-terminal beta-barrel domain of Pseudomonas aeruginosa) to generate the path in 2D latent space, thereby drawing the resulting flame shape as a material interpretation. We have also provided, via Video S4, a sonification of the same printed dataset overlaid on the video, rendering the emergence of de novo photographic and audible data.

Limitations of the study. One limitation of this work is the need for a clean image of a flame; we therefore used a dark background to highlight the sharp fire contour. Moreover, we acknowledge that the flame of a candle is a relatively weak source of light that can easily be affected by external noise. This is why we performed all experiments in a controlled environment, to avoid any bias in the deep learning algorithm. In view of this, future work along this avenue could provide additional data sets with "stronger flames" (e.g., wildfire, log fire, gas fire, and others) and an improved deep learning algorithm able to detect fire features in pictures with environmental noise. Another way to improve the stability of the algorithm is to use image-transforming GAN methods, for example, to map complex flames seen in stronger flame sources into corresponding "unit measures" of fire, i.e., a single flame as studied here.

Detailed methods are provided in the online version of this paper. The authors acknowledge support from CAST via the Mellon Foundation, and the MIT-IBM AI Lab for the development of the machine learning model and the data set, as well as ONR (N000141612333, N000141912375, N000142012189), AFOSR (FATE MURI FA9550-15-1-0514), ARO (W911NF1920098), and NIH (U01EB014976).

To develop a 13-type classifier (still, plus each of the notes in an octave), we use the machine learning model shown in Figure 3 and train it using the dataset described above. The training and testing performance are shown in Figure 4, revealing good convergence. The sonification method uses the trained classifier and takes a time series of images as input, then produces a MIDI file featuring unique notes in the octave at each time point.
The resulting MIDI file was rendered in a digital audio workstation (DAW), Ableton Live (Version 10.1, https://www.ableton.com/en/).

We develop a convolutional variational autoencoder to encode and decode images of the flames exposed to different frequencies. The detailed structure of the neural net is shown in Figure S1. The model is trained on a set of 1,300 unlabeled images (100 images for each audible condition) and uses a 2-dimensional latent space vector. Images are scaled to a size of 1,024 x 512 pixels. Figure S2 depicts sample snapshots of the training performance of the variational autoencoder (VAE) model, from top (early) to bottom (converged). One can see how the model learns, over multiple optimization epochs, to draw flame shapes. Once trained, we validate the model by visualizing the original labels in latent space, revealing that identical labels cluster close together, as shown in Figure 8C. It is noted that, in principle, other dimensionalities of the latent space could be chosen. We found the 2D approach useful for simple visualization of the latent space and for the subsequent random walk analyses.

We use the VAE model to generate stacks of images via movements in latent space. The stacks of images, typically on the order of several thousand, are translated into a 3D geometry using Dragonfly (http://www.theobjects.com/dragonfly/, Version 2021.1.977). An STL file is then rendered, which can be printed after slicing to prepare native print code. In this study we exemplify printing using both fused deposition modeling (FDM) (Monoprice Maker Ultimate with PLA+ filament, https://www.monoprice.com/, as well as the Ultimaker S3 with polyurethane and PLA filaments and PVA support material, https://ultimaker.com/3d-printers/ultimaker-s3) and stereolithography (SLA)/LCD masking (Elegoo Mars, http://elegoo.com). These two layer-by-layer manufacturing techniques have different working principles: in FDM, a filament of raw material is progressively fed through a heated nozzle and melts as each layer is deposited, whereas SLA/LCD printing selectively cures a vat of liquid photopolymer resin using a light source. Despite the differences, both have been broadly employed to fabricate constructs made of thermoplastics or, more recently, biomimetic/bioinspired materials and composites with particular hierarchical structures (e.g., silk-based hydrogels, bone-like tissues) (Valino et al., 2019). For some complex flame geometries (e.g., overhangs or islands) it is necessary to print support material. Generally, the LCD method works better for the complex flame geometries. No statistical analysis was performed in the study.
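The authors converted the decoded image stacks into 3D geometry and STL files using Dragonfly; the sketch below shows a comparable open-source route using scikit-image's marching cubes and numpy-stl. File locations, threshold, and voxel spacing are illustrative assumptions.

```python
# Sketch of an open-source alternative to the Dragonfly step: stack the decoded
# flame slices into a volume, extract an isosurface, and write an STL file.
# Threshold, voxel spacing, and file names are assumptions.
import glob
import numpy as np
from PIL import Image
from skimage import measure
from stl import mesh

# Load the decoded flame slices (from the latent-space walk) into a volume.
files = sorted(glob.glob("stack/slice_*.png"))
volume = np.stack([np.asarray(Image.open(f), dtype=np.float32) / 255.0
                   for f in files], axis=0)   # shape: (z, y, x)

# Extract an isosurface; 0.5 is an assumed threshold separating flame from background.
verts, faces, _, _ = measure.marching_cubes(volume, level=0.5,
                                            spacing=(0.1, 0.05, 0.05))

# Write a binary STL for slicing and 3D printing.
solid = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
solid.vectors[:] = verts[faces]
solid.save("flame_geometry.stl")
```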
References (only titles were recovered in the extracted text; where unambiguous, the corresponding in-text citation is given in parentheses):
- Objects that sound (Arandjelovic and Zisserman, 2018)
- Using sonification (Barrass and Kramer, 1999)
- Liquified protein vibrations, classification and cross-paradigm de novo image generation using deep neural networks (Buehler, 2020)
- Classical Music Inspired by Fire (ClassicFM, 2020)
- An Introduction to Fire Dynamics (Drysdale, 2011)
- A perspective on musical representations of folded protein nanostructures (Franjou et al., 2019)
- Sounds interesting: can sonification help us design new proteins? (Franjou et al., 2021)
- Biomimetic algorithms for coordinated motion: theory and implementation (Halder and Dey, 2015)
- The Sonification Handbook, Logos Verlag Berlin (Hermann et al., 2011)
- Predicting large domain multi-physics fire behavior using artificial neural networks, doctoral dissertation (Hodges, 2018)
- Comparative analysis of nanomechanical features of coronavirus spike proteins and correlation with lethality and infection rate (Hu and Buehler, 2021)
- Soft robotics for chemists (Ilievski et al., 2011)
- Multimodal analysis for identification and segmentation of moving-sounding objects (Izadinia et al., 2012)
- Sonification Report: Status of the Field and Research Agenda (Kramer et al., 2010)
- Using machine learning in physics-based simulation of fire, Fire Safety Journal (Lattimer et al., 2020)
- Elemental Philosophy: Earth, Air, Fire, and Water as Environmental Ideas (Macauley, 2010)
- Additive manufacturing approaches for hydroxyapatite-reinforced composites
- Biomimetic control for DEA arrays (O'Brien et al., 2010)
- Visually indicated sounds (Owens et al., 2016)
- Learning to localize sound source in visual scenes (Senocak et al., 2018)
- Wildfire detection using transfer learning on augmented datasets (Sousa et al., 2020)
- Going deeper with convolutions (Szegedy et al., 2015)
- Advances in 3D printing of thermoplastic polymer composites and nanocomposites (Valino et al., 2019)
- Liquid metal based soft robotics: materials, designs, and applications (Wang et al., 2019)
- Sonification based de novo protein design using artificial intelligence, structure prediction, and analysis using molecular modeling (Yu and Buehler, 2020)
- A self-consistent sonification method to translate amino acid sequences into musical compositions and application in protein design using artificial intelligence (Yu et al., 2019)
- The sound of pixels (Zhao et al., 2018)
- Visual to sound: generating natural sound for videos in the wild (Zhou et al., 2018)
- Total Cost of Fire in the United States, Fire Protection Research Foundation, Quincy (Zhuang et al., 2017)

Further information and requests for resources should be directed to and will be fulfilled by the Lead Contact, Prof. Markus J. Buehler (mbuehler@mit.edu). This study did not generate new unique reagents. This paper does not report original code.

The experimental setup, shown in Figures 1B and 1C, consists of a speaker as audio source (studio monitors ROKIT5, KRK Systems), a burning candle with a clearly distinguishable flame, and a camera that records images in front of a black background. We use a wax candle (https://www.rei.com/product/410128/ucocandle-lantern-candles) as the fire source, to ensure a consistent flame shape and little wax dripping. A Sony A7S III camera with a Canon 105 mm F/2.8 EX DG OS HSM Macro lens, mounted on a tripod in the position described in Figures 1B and 1C, is used to take high-frame-rate videos at 180 fps, at HD resolution, for different frequency excitations ranging from C2 upward for one octave (see Table 1). The audio signals consist of pure sine waves for each frequency, with amplitudes corresponding to a power output of 50 W. Experiments are carried out at room temperature (~23 °C) and atmospheric pressure (~1 bar). The video is recorded in the S-Log3 format for high dynamic range and then converted to REC.709 8-bit images at HD resolution using a Sony LUT in Adobe Premiere Pro (Version 15.1; https://www.adobe.com/). We use 2,000 images for each acoustic excitation signal, which are split into train/test datasets using a randomized 80:20 ratio. We use image augmentation methods to enrich the training data set. We also use an additional 200 images to validate the model after training.
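For completeness, the following is a minimal sketch of how the pure sine-wave stimuli, one per note from C2 upward through one octave (cf. Table 1), could be generated. Equal-tempered frequencies are computed from MIDI note numbers; sample rate, duration, amplitude, and file names are illustrative assumptions, and the 50 W output level is set on the studio monitors rather than in software.

```python
# Sketch of generating the pure sine-wave excitation tones, one per note from
# C2 upward through one octave. Sample rate, duration, and amplitude are assumptions.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100
DURATION = 30.0   # seconds of excitation per note (assumed)

NOTE_NAMES = ["C2", "C#2", "D2", "D#2", "E2", "F2",
              "F#2", "G2", "G#2", "A2", "A#2", "B2"]

for i, name in enumerate(NOTE_NAMES):
    midi = 36 + i                                  # C2 corresponds to MIDI note 36
    freq = 440.0 * 2.0 ** ((midi - 69) / 12.0)     # equal temperament, A4 = 440 Hz
    t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
    tone = 0.9 * np.sin(2.0 * np.pi * freq * t)
    wavfile.write(f"tone_{name.replace('#', 's')}.wav",
                  SAMPLE_RATE, (tone * 32767).astype(np.int16))
```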