key: cord-0057723-ts308ifd authors: Irmisch, Patrick; Ernst, Ines; Baumbach, Dirk; Linkiewicz, Magdalena M.; Unnithan, Vikram; Sohl, Frank; Wohlfeil, Jürgen; Grießbach, Denis title: A Hand-Held Sensor System for Exploration and Thermal Mapping of Volcanic Fumarole Fields date: 2021-03-18 journal: Geometry and Vision DOI: 10.1007/978-3-030-72073-5_6 sha: 015ae4ccd00650a0732b904d6ff843485e2da8fe doc_id: 57723 cord_uid: ts308ifd Research from the field of planetary- and geoscience require investigation of geological features in harsh environments. Sampled data needs to be precisely localized spatially and in time, and to be prepared appropriately for further inspection and evaluation. In the case of thermal image data, the computer vision community has made enormous progress in the past years to combine thermal with spatial information in form of thermal 3D models. In this paper, we propose to use a camera-based hand-held sensor system to capture and georeference thermal images in rugged terrain and to prepare the data for further investigations by visualizing the thermal data in 3D. We use a global localization solution to gain fast 3D impressions of the environment and further Structure from Motion to generate detailed mid-scale models of selected interesting structures. We demonstrate our application based on a challenging dataset that we acquired in the active fumarole fields of Vulcano, Sicily. The exploration of planetary bodies requires the development of new and innovative mapping techniques. The large range of environmental conditions, such as extreme temperatures, harsh or low atmospheric conditions and broad terrain morphologies imposes many challenges in the development of methods and instruments for planetary mapping. One of the most challenging environments is active volcanic fumarole fields, which are generally characterized by extreme ground temperatures and escaping corrosive gases. Remote infrared sensing based on satellites, airborne or ground-based stations are widely used to monitor volcanic activity. Blackett [4] provides a comprehensive historical overview of thermal remote sensing of volcanoes. These methods provide accurate georeferenced data, but due to their limited spatial resolution, they potentially miss smaller structures and subsurface features such as covered fumaroles. To overcome this, e.g. Mannini et al. [25] used satelliteand stationary ground-based surveys in conjunction with measurements of a hand-held thermal camera that is used to directly sample vent temperatures. In this context, hand-held thermal cameras allow the investigation of structures in greater detail with georeferenced data using built-in GPS solutions. However, due to missing camera orientations and absence of depth information for the captured thermal image data, spatial relationships are difficult to derive. This severely limits its further usability and analysis. In this paper, we aim to enrich thermal image data with detailed spatial information to allow in-depth inspection of active fumarole fields. Therefore, we deploy a multi-sensor hand-held localization system, the Integrated Positioning System (IPS) [5] , that is equipped with an infrared imaging camera for thermal inspection. It fuses stereo camera based Visual Odometry (VO) with data from an Inertial Measurement Unit (IMU) and GPS for localization. The light weight of IPS allows fast inspection of interesting structures close-up and from different views. IPS is used in this work to estimate the global position and orientation of each thermal image and to visualize the thermal information using 3D reconstruction, exemplified in Fig. 1 (c) . Data for this paper was collected during the fifth Vulcano Summer School 2019, a two-week summer school at Vulcano, Sicily. The school is aimed at exposing young researchers and students to a broad background on planetary and terrestrial field studies. The study of terrestrial analogs is an important tool for planetary scientists, since geological processes on planetary bodies operate in comparable ways to those shaping the Earth's surface. As a consequence, the program of the summer school is built around topical lectures, experiments and sampling campaigns, covering geology, volcanology, geophysics, biology, oceanography and robotic environmental exploration [33] . Vulcano is the third largest and southernmost island of the Aeolian archipelago. It is also one of the most closely monitored, heavily researched and studied active volcanoes in the world. It hosts the largest unique assemblage of high and low temperature volcanic and hydrothermal minerals. The larger part of the island consists of two main edifices built by strombolian to phreatomagmatic eruptions in the last 200k years. The latest volcanic eruption took place in 1888 and since then the volcano has exhibited 2-3 phases of enhanced activity [2] . From a planetary perspective, the surface morphology of parts of the Fossa Crater on Vulcano are similar to lunar and Martian regions with extremely dry, arid conditions and little or no vegetation cover. Fumarolic activity presents strong temperature gradients, high spatial variability in the surface texture and morphology, and extreme environmental conditions (toxic SO 2 vapors), making this an excellent test case for the IPS. Our contribution is the application of a hand-held system for inspection and large-scale 3D reconstruction, for recording, global localization and 3D visualization of thermal image data in harsh and dynamic environments. We therefore propose a pipeline for automatic rapid processing of large datasets and subsequent refinement of selected scenes to allow detailed analysis. We demonstrate this approach on data acquired on Vulcano s Fossa fields. With this contribution, we closer connect the computer vision field with geo-and planetary exploration. In the following, we first summarize related work of ground-based mobile platforms for thermal 3D reconstruction and their applicability for our subject. Then we describe the used sensor system and introduce the method for localization and the two methods for 3D reconstruction. In the results section, we show excerpts from the georeferenced sensor data and corresponding 3D visualization. Finally, we discuss our results and the proposed approach and describe possible improvements and the future direction. Methods for 3D reconstruction based on ground-based mobile sensor platforms with superimposing of thermal information were frequently proposed in the last decade. A common method is to map thermal information onto previously generated point clouds using a calibrated sensor setup. Light Detection And Ranging systems (LiDAR) allow accurate 3D reconstruction. Contextually, they have been applied by manually collecting several scans from different positions [11] or on mobile platforms to increase the degree of mobility and automation [7] , and in large-scale vehicle-based applications [22] . However, this method is less suited in inaccessible and rugged terrain, such as fumarole fields, where the scene requires to be quickly viewed from different angles for detailed inspection. A more light-weight sensor is the RGB-D sensor, which has been used for fast and accurate 3D reconstruction and thermal mapping in indoor scenarios. A hand-held system is presented by Vidas et al. [34] , which operates in real-time. To avoid misregistration of the thermal data, they use a voxel-based occlusion algorithm and proposed a risk-averse neighborhood weighting mechanism. The latter selects one surface estimate from multiple thermal images based on a confidence value that is cautionary to object edges. Using visual cameras, 3D reconstruction can also be effectively applied for outdoor scenarios using light-weight hand-held devices. In this context, Structure from Motion (SfM) is well suited for accurate 3D reconstruction, but relatively time consuming. For instance, Troung et al. [32] generated two models based on RGB and thermal images separately and aligned them afterwards with scale normalization. Based on the fixed camera transformation in the RGB-thermal stereo rig, they could estimate the metric scale of the model. For real-time applications, Visual Odometry (VO) and Simultaneous Localization and Mapping (vSLAM) is a widely used alternative to SfM. Contextually, Yamaguchi et al. [37] used a monocular RGB Visual Odometry approach for 3D reconstruction and superimposed the thermal image. They could also recover the scale using generated depth images from both domains using Multi-View-Stereo (MVS) and the known camera transformation in the RGB-thermal stereo rig. However, SfM and MVS from thermal images requires a high degree of image overlap, which is not guaranteed in our approach, due to our selective thermal imaging during inspection. Instead, to demonstrate our application, we use a stereo system for 3D reconstruction which has proven itself in similar environments [6] and for large-scale 3D reconstruction in similar configurations [3] , and then map the thermal information. 3D reconstruction with thermal mapping has a wide range of applications, such as in energy-efficiency monitoring of buildings [12, 19] , Object Detection [21] or even fruit tree characterization [38] , but also critical applications such as hotspot and fire detection [15, 28] in buildings. Related to fumarole field monitoring, Lewis et al. [23, 24] use SfM in the visible domain to generate a mid-scale digital elevation model to map and georeference pre-dawn thermal information from tripod-based stationary thermal cameras. This approach helps to minimize the impact of solar heating, though it is restricted to few thermal camera poses. In contrast, we concentrate on rapid large scale exploration and close-up inspection of numerous fumaroles, and use SfM as optional subsequent refinement. Related, a promising future application is in planetary robotic exploration, since rovers are often equipped with a thermal camera for inspection, such as in [36] . A hand-held IPS equipped with an additional thermal imaging camera and protective gear was used for data acquisition (see Fig. 2 (left)) during the campaign on Vulcano, Italy. In this section, we first describe the systems properties, calibration and synchronization. Then, the captured datasets are introduced along with the modification made to the system to protect it from the corrosive fumarolic gases. The IPS demonstrator of [5] is used. The system consists of a stereo camera (AVT GC1380H) and a thermal imaging camera (Optris Pi 450), with parameters listed in Fig. 2 (right). Its relatively wide stereo baseline of 20 cm, high resolution and narrow field-of-view cameras allow detailed 3D reconstruction and inspection. For localization, IPS also has an IMU (ADIS-16488) and an optional low-cost GNSS receiver (u-blox LEA-6). The infrared data is automatically processed using the software Optris PI Connect [26] and the processed pixel-wise temperature values are logged. We selected an operating range of 0 to 250 • C, which covers most of the fumaroles with an estimated mean vent temperature of 185 • C in 2019 [25] . A Field Programmable Gate Array (FPGA) synchronously triggers both visual cameras and handles the timestamp assignment for all sensor data. The data is recorded and processed on an external laptop. The thermal sensor data is registered geometrically and temporally to the basic IPS sensor data. For geometrical co-registration of the trifocal camera setup, the calibration provided by [9] is used. Here, a passive aluminum chessboard with alternating blank and matt-black-printed tiles is used, that leads to a clearly visible chessboard pattern in both visual cameras and the thermal camera image, if a low-temperature background is reflected by the blank tiles. For temporally registration, a hardware trigger is attached to the handle that triggers the thermal camera and a timestamp is logged by the FPGA. As the thermal camera uses its own internal frequency, this timestamp is not precise and is only used to assign the image to the nearest stereo frame. This can lead to inaccurate camera pose assignment during motion and therefore, we hold the system still Background: GoogleEarth © 2020 Google Images © Maxar Technologies while triggering the thermal camera. Further miss-alignment occur due to periodical Non-Uniformity Corrections of the Optris Pi 450 where the image stream is temporarily frozen during radiometric re-calibration. Corresponding thermal images are sorted out manually. During the two survey days (16th and 18th June 2019), four different datasets were acquired. The trajectories are shown in Fig. 3 . The first two datasets (day1 ) aimed at the 3D reconstruction of a large-scale emissive structure within the rim zone. The different zones are characterized in [18] . The other two (day2 ) datasets focused on the exploration of potentially interesting small-scale structures, located in the middle zone. The datasets contain GCPs that were measured using GPS, where IPS is placed for a short time. They are intended to automatically register two different IPS trajectories and to introduce supervised loop closures. The first two recordings have a total length of 42 min and contain 240 thermal images. The other two have a length of 64 min and contain 560 thermal images. Due the presence of corrosive gases in the fumarole field, the system had to be equipped with protective gear, which affected the sensing properties of the thermal camera. The used IPS has a solid aluminum frame with protective glasses in front of the visual cameras, see Fig. 2 . The opening to the hardware on the back of the IPS needed to be closed using plastic utensils. The lens of the thermal camera was protected using a thin plastic foil, which requires an adjustment of the transmission factor. For this setup, a transmissivity of 0.9 was determined on site by comparison with a second thermal camera (Testo 875). For data acquisition in different locations of Vulcano island during the summer school, an emissivity of 0.93 was kept constant, which is in the broad range of the geological materials. [18] obtained an emissivity of 0.97 for surfaces within the Fossa field. Consequently, both fixed radiometric values introduce inaccuracies in the presented dataset. The error can be approximated using a validation set of 48 samples from 4 image pairs of both thermal cameras, in the range of 39 • C to 240 • C. For the (Testo 875), we process the data using the software IRsoft [31], while using an emissivity of 0.97. Based on this dataset, we approximate a mean absolute error of 3.4K to the reference measurement. We refer to [18] for further error consideration in thermal imaging within the Fossa field. In this section we describe the localization and 3D reconstruction components. Figure 4 gives an overview of the individual components, which are explained in the following sections. To summarize, based on the different sensor information, IPS estimates a global trajectory T IP S , that is used during Direct Reconstruction. Here, generated depth maps based on the stereo frames at each pose are used to generate the point cloud, where semantic segmentation is used to ignore specific objects during reconstruction and thermal values are directly mapped onto the point clouds. The result is P C Comb that combines grey and thermal color values, and is split into segments for easier investigation, which are constrained by GCPs or by user-defined temporal intervals. In Indirect Reconstruction, a trajectory segment is first selected T * IP S and refined using SfM with Bundle Adjustment. Here, Direct Reconstruction is used to reference the thermal values in 3D P C * T hrm , while the Dense Reconstruction P C * Grey is done separately by the used photogrammetry tool [27] . Finally, P C * Comb is generated by point cloud based color interpolation. The IPS localization comprises 3 sub-modules, a local (a) and a global (b) navigation module, as well as a switchover module (c) from local to global navigation. The first two modules compute in Cartesian coordinates, whereby (a) refers to a local tangential plane and (b) to the earth-centered, earth-fixed (ECEF) coordinate system. The localization from (a) can be understood as an ad hoc solution, since it is generated purely with measurements of the IMU together with a camera based visual odometry in real time. All four trajectories shown in Fig. 3 start with the same path section represented by a triangle of 3 GCPs. These GCPs are used to register the different runs. On the other hand, the presence of additional infrastructure based information such as GPS coordinates or GCPs the switchover from (a) to (b) can be done with the help of an estimated homogeneous transformation matrix including its associated covariance. This transformation depicts 3 parameters of rotation and translation as well as a scaling factor. In order to estimate these 7 unknowns, at least 3 or more pairs of 3D position vectors in local or global coordinates are required. With a closed form solution of the 7-parameter transformation, based on the well-known Helmert transformation (e.g. see [35] ), the estimation vector of a non-linear system of equations is initialized. This means that only a few iterations are required to determine a solution in the subsequent Gauss-Newton algorithm. Since absolute measurements are not always available at run-time or only generated during the run (e.g. a loop closure due to repeated placement of IPS on an already known GCP), the switchover of the local navigation solution to a global coordinate system is regarded here as a subsequent optimization step. The physical parameters of IPS localization include orientation described as a quaternion, position and speed, each in 3 dimensions. In addition, the axisdependent offsets of the angular rates and acceleration sensors of the IMU are estimated and corrected. A detailed description of the equations of motion of (a) can be found in [16] , that of (b) in [17] . The following sensors and their type of measurand are used in our localization pipeline: The stereo-images of IPS can not only be used for reliable VO estimation (based on sparse 3D sets of points), but also for 3D point cloud generation. For the computationally expensive stereo matching that computes high-density depth maps, a semi-global matching (SGM) GPU-implementation [13] is used. It is implemented in OpenCL, includes preparatory image rectification, and uses a census cost function for the data term. From the depth maps, 3D points are generated for each recorded pair of images and fused with 3D points obtained at subsequent frames by incorporating the camera poses from the IPS trajectory T IP S . The overall frame rate for point cloud generation is dynamically adapted as follows: if the calculated IPS navigation solution shows a substantial difference in pose or time to the previously used image pair, a new local 3D point cloud is extracted from the depth maps and transformed into the global navigation frame. Subsequently, the local point clouds are merged and accumulated into a high-density cloud and filtered into a 1.5 cm large voxel grid. This method builds on the Point Cloud Library [29] . The presence of moving objects, such as tourists in our dataset, can lead to disruptive artifacts during 3D reconstruction, as shown in Fig. 5 (left) . Such artifacts can be removed by using semantic information during offline processing. For this step, we use a pre-trained MobileNetV2 [30] that was trained on the PAS-CAL 2012 dataset [14] . It has shown to give a good trade-off between accuracy and run-time for our datasets. The segmentation is computed in full resolution and is used to generate a mask for the object classes person and bicycle. The mask is additionally dilated by 10 pixel to cover inaccurate object boundaries and is applied directly on the depth map to exclude all depth values that belong to the forbidden object classes. In combination with statistical filters, almost all artifacts are removed without disturbing the static scene, as shown in Fig. 5 (right). IPS can record additional sensor data synchronized with image and IMU data simultaneously. In the case of image data, they can be immediately mapped onto the 3D point cloud, which we use to map the thermal information in color rep- resentation from the additional thermal camera. Here, an exact registration of the thermal camera to the overall IPS is mandatory. By projecting the local 3D points into the valid IR camera image, considering the internal and relative orientations, the color or measured temperature values can be assigned directly to the respective surface points. For instance in Fig. 6 , the points of (b) are mapped into the thermal image of (a) and colored accordingly, resulting in a combined model P C Comb of (c) or only in a thermal point cloud (d). A voxel-based occlusion algorithm is used to prevent incorrect color assignment on occluded 3D points, building on the implementation of [1] in [29] . In a subsequent automatic filter process, voxels and their additional information are removed based on the frequency of the additional camera images, the number of 3D points found per voxel, the proportion of values with additional information and their reliability. Local localization and Direct Reconstruction with thermal mapping can be done simultaneously in real-time using a capable laptop. In this work, we apply it as a post processing step on a global trajectory and add the module for semantic segmentation, which requires an additional GPU. In Indirect Reconstruction we seek to exploit the potential of photogrammetry tools for detailed 3D reconstruction by using the SfM and Dense Reconstruction modules. We use the photogrammetry tool Pix4Dmapper [27], which we experienced to work generally well in combination with IPS. As SfM is highly time-consuming, we only concentrate on relatively small trajectory parts T * IP S with around 800-1200 camera poses, for instance with 1200 greyscale images for the 3D model of Fig. 1 (c) . These parts are defined by segments from Direct Reconstruction or custom time intervals. They are selected by a human operator due to the presence of interesting fumarole structures, found by investigating the direct 3D models and sensor data. To further shorten the processing time, we only use one visual camera and use camera poses from T * IP S as well as the intrinsic camera calibration parameters of IPS as a priori. The scale for T * BA and resulting P C * Grey is defined by the a priori T * IP S . Finally, a manual step is applied to remove occasional disruptive artifacts per hand. For thermal mapping, we apply the Direct Reconstruction pipeline based on the optimized trajectory T * BA , which produces the thermal point cloud P C * T hrm , as shown in Fig. 6 (d) . It is defined in the same coordinate system as P C * Grey . In a manual step, we interpolate color values of P C * T hrm onto P C * Grey in point cloud space using CloudCompare [10] . During thermal mapping on fine details, this method is relatively sensitive to smallest deviations of the surfaces of both point clouds, caused by scale-and calibration inaccuracies. To demonstrate the possibilities in our proposed application, we slightly correct the scale for P C * T hrm (via the stereo baseline) during Indirect Reconstruction to guarantee most detailed thermal mapping. All points that are not assigned a color value keep their grey value, resulting in P C * Comb and shown in Fig. 6 (f) . For better visualization, we transform P C * Comb into polygonal meshes using Poisson Surface Reconstruction [20] in [10]. The overall result is a global position and orientation for all sensory data with visualization in a georeferenced 3D model. In this chapter, we concentrate on excerpts of the results and interesting examples. The global localization results of all four trajectories are illustrated in Fig. 3 . Due to the used filter-based localization method, drifts occur in mid-term, but are limited in long-term by the GPS-aid. The magnitude of the mid-term drift can be exemplified using day1-a. It consists of 5 intermediate mid-scale loops that start and end at the same GCP. They can be used to approximate the Direct Reconstruction was applied on the full datasets and consists each of a large-scale model that was split into segments for simplified investigation. Figure 7 (left) gives an exemplary segment from Direct Reconstruction. It shows an excerpt of the elongated heat field. The heated area is clearly visible, while larger rocks and the surrounding area are at lower temperatures. In this way, the direct model provides a good qualitative 3D impression and scene understanding. Direct Reconstruction can also be used to generate coherent large scale models, as shown in Fig. 7 (right) for the entire two day1 sessions. Due to the applied GCPs, which introduced hard corrections in this dataset, the alignment quality of the segments is restricted by the accuracy of the used GPS and had to be refined manually. Four generated 3D models based on Indirect Reconstruction are shown in Fig. 8 . Each row shows (a) one exemplary thermal image in color representation that is assigned to one greyscale camera image. Based on IPS localization, the corresponding camera pose is referenced globally (b) and plotted in a global context. The trajectory segments, marked in blue, are used for reconstruction. The resulting 3D mesh and the corresponding thermal camera pose are shown in (c). In (3) Using the presented hand-held camera system, we were able to record and prepare sensor data for detailed investigations of the fumaroles fields on Vulcano. Also, we mapped fumaroles and heated regions that would otherwise be hidden from above, such as shown in Fig. 8 (1) . Data acquisition and processing in such harsh environments are often not straightforward. Corrosive gases, dense steam and high surface temperatures complicate data acquisition, damage the sensor system and affect the localization and reconstruction result. The hazards are not always visible, characterized by the many tourists that we encountered to walk on extremely hot areas (>200 • ) and through the steam without protection. Also, the dense steam often obscured large image areas, indicated in the upper part of the greyscale image in Fig. 8 (3a) , and sometimes obscured the view completely. Using IPS, we estimated a reliable global localization solution and detailed 3D models. Due to the absence of reference data for the presented datasets, a quantitative evaluation could not be conducted. In a relative manner, the localization results has shown to be of high quality in short-term, as it led to visibly appealing 3D models using the computational inexpensive Direct Reconstruction. In midterm, they show small trajectory drifts, due to the temporary-relative nature of the chosen navigation solution. The trajectory drift and remaining statistical noise restrict quality and level of detail of the direct models. For the individual segments, Indirect Reconstruction mostly eliminated the trajectory drift using Bundle Adjustment and led to consistent and more detailed 3D models. Though, the details come at the expense of required computationally power and the proportion of manual steps in this method is relatively high. In an absolute manner, the global positioning accuracy is shaped by the quality and accuracy of the GPS sensor in the chosen filter-based implementation. The introduced GCPs helped well for registration of the different sessions, but due to their uncertain/inaccurate measurements, the coherent large-scale model could not be generated completely automatically. In future measurement campaigns, it would be beneficial to measure the GCP positions using differential GPS. Also, a vSLAM extension could help to reduce mid-term inconsistencies and to get closer to the qualitative results of the indirect method. The presented 3D models show the overall well registered thermal data, that we mapped in color representation for demonstration, and give an appealing 3D visualization. Geometrically, inaccuracies can occur on object borders, where the thermal color values are not correctly mapped onto the point cloud. Possible sources of error are calibration inaccuracies and an inaccurate estimation of the thermal camera pose during motion, due to the rough temporary assignment of the thermal data. Radiometrically, the thermal properties changed due to the protective foil, which we reported in comparison with a second thermal camera. For future measurement campaigns, the system could benefit from a radiometric calibration for this setup. Also, the emissivity that was obtained in [18] for Vulcano's fumarole fields can be set. To prevent gross alignment errors on object borders, our method could benefit from the risk-averse neighborhood weighting mechanism of [34] . We may emphasize that this method can augment remote sensing methods, such as in [25] . Alternatively, airborne systems could be used to gain a broad large-scale model beforehand, which is then used to select interesting spots that are investigated in detail with the hand-held system. Overall, the methods that we presented would be extremely well-suited for manned and robotic-based exploration and mapping of planetary bodies. Quick and accurate planetary exploration requires tools which are able to survive extremes in environmental conditions, working both on the surface and subsurface. Sub-surface exploration is critical for example on Mars or the Moon, since signs of extant or extinct life is likely to be found in sub-surface caves [8] . With the recent drive towards space and planetary exploration, this work presents promising, innovative tools for mapping and exploration. The Integrated Positioning System (IPS) was used for acquiring data in the harsh volcanic terrain of Vulcano. Based on the presented methods, we could reach fairly qualitative global localization and detailed mid-scale 3D-models with 3D visualization of the captured thermal data. Using the presented method for direct 3D reconstruction, quick and qualitative impressions of the complete scenes could be achieved. Then, indirect 3D reconstruction allowed to generate 3D models of high quality of selected parts in the expense of required computational power. In this work, we have made progress towards the development of a prototype for planetary exploration in the context of detailed inspection of geological features in extreme environments. In future, an interesting direction will be to automatically detect and classify fumaroles based on the given imaging sensors. Generally, we seek to support geophysical, geochemical and planetary in-situ investigations. A fast voxel traversal algorithm for ray tracing Geology, volcanic history and petrology of Vulcano (central Aeolian archipelago) Mobile solution for positioning, 3D-mapping and inspection in underground mining Review of the utility of infrared remote sensing for detecting and monitoring volcanic activity with the case study of shortwave infrared data for Lascar IPS -a vision aided navigation system Cameras for navigation and 3D modelling on planetary exploration missions Thermal 3D mapping of building façades Mars extant life: what's next? Conference report Automatic calibration and co-registration for a stereo camera system and a thermal imaging sensor using a chessboard Interpreting thermal 3D models of indoor environments for energy efficiency Mutual information based semi-global stereo matching on the GPU The PASCAL Visual Object Classes Challenge Fusion of radar, LiDAR and thermal information for hazard detection in low visibility environments Stereo-vision-aided inertial navigation Principles of GNSS, Inertial, and Multisensor lntegrated Navigation Systems Thermal surveys of the Vulcano Fossa fumarole field 1994-1999: evidence for fumarole migration and sealing Supervised detection of façade openings in 3D point clouds with thermal attributes Screened Poisson surface reconstruction Robotic sensing and object recognition from thermalmapped point clouds Thermographic mobile mapping of urban environment for lighting and energy studies Integrated thermal infrared imaging and structure-from-motion photogrammetry to map apparent temperature and radiant hydrothermal heat flux at Mammoth Mountain High-resolution imaging of hydrothermal heat flux using optical and thermal structure-from-motion photogrammetry Combining ground-and ASTER-based thermal measurements to constrain fumarole field heat budgets: the case of Vulcano Fossa Optris: Pi Connect: Process Imager Software Reconstruction of textured meshes for fire and heat source detection 3D is here: point cloud library (PCL). In: (ICRA) International Conference on Robotics and Automation MobileNetV2: inverted residuals and linear bottlenecks. In: (CVPR) Conference on Computer Vision and Pattern Recognition Registration of RGB and thermal point clouds generated by structure from motion Vulcano summer school Real-time mobile 3D temperature mapping Computing Helmert transformations First results of the ROBEX analogue mission campaign: robotic deployment of seismic networks for future lunar missions Superimposing thermal-infrared data on 3D structure reconstructed by RGB visual odometry LiDAR and thermal images fusion for ground-based 3D characterisation of fruit trees Acknowledgements. The Vulcano Summer Schools have been generously supported by the Helmholtz Alliance Robotic Exploration of Extreme Environments (ROBEX) and the EU H2020 Europlanet program.