key: cord-0058619-3ss0f6fp authors: Saponaro, Mirko; Turso, Adriano; Tarantino, Eufemia title: Parallel Development of Comparable Photogrammetric Workflows Based on UAV Data Inside SW Platforms date: 2020-08-19 journal: Computational Science and Its Applications - ICCSA 2020 DOI: 10.1007/978-3-030-58811-3_50 sha: 01f53faa7c3f5c5a03226ef22f8be079fd973ea1 doc_id: 58619 cord_uid: 3ss0f6fp A wide range of industrial applications benefits from the accessibility of image-based techniques for the three-dimensional modelling of different multi-scale objects. In the last decade, along with the technological progress mainly achieved with the use of Unmanned Aerial Vehicles (UAVs), there has been an exponential growth of software platforms able to return photogrammetric products. On the other hand, the different levels of final product accuracy resulting from the adoption of different processing approaches in various software packages have not yet been fully understood. To date, there is no validation analysis in the literature focusing on the comparability of such products, not even in relation to the use of workflows commonly allowed inside various software platforms. The lack of detailed information about the algorithms implemented in the licensed platforms makes the whole interpretation even more complex. This work therefore aims to provide a comparative evaluation of three photogrammetric software packages commonly used in the industrial field, in order to obtain coherent, if not exactly congruent, results. After structuring the overall processing workflow, the processing pipelines were accurately parameterized to make them comparable in both licensed and open-source software. For the best interpretation of the results derived from the generation of point clouds processed from the same image dataset, the obtainable root-mean-square error (RMSE) values were analyzed, georeferencing the models as the number of Ground Control Points (GCPs) varied. 
The tests carried out aimed at investigating the elements shared by the platforms tested, with the purpose of supporting future studies to define a unique index for the accuracy of final products. The technological trends strongly linked to the use of Information and Communications Technologies (ICT) hardware and software solutions are now an unequivocal source of development for Industry 4.0 and therefore make the development of Technological Systems vital for the determination of new industrial scenarios worldwide [1]. Among them, it is evident how the introduction of remote control technologies has revolutionized different areas of use and, above all, has simplified, in the common vision, the obtainment of various certified products in a short time. One of the fields most influenced by the exponential growth of hardware and software technologies is certainly the geomatics sector, encouraged by the now widespread use of Unmanned Aerial Vehicles (UAVs) equipped with simple low-cost cameras for the generation of two- and three-dimensional photogrammetric products [2]. Many industrial applications benefit from the accessibility of Structure from Motion (SfM) techniques, essentially based on the manipulation of 2D images for the three-dimensional modelling of different objects at different scales [3, 4]. In the last decade, together with the technological evolution of aircraft components and transportable sensors, there has been an exponential growth of software platforms able to return photogrammetric products, and above all a sophistication of the algorithms implemented [5]. However, while any software, used with the necessary precautions, can return products comparable to those obtained from far more expensive technologies (e.g. 
Terrestrial Laser Scanner) [6], the different levels of final product accuracy resulting from the adoption of different processing approaches in various software packages have not yet been fully understood [7]. No validation analysis has been conducted so far in the literature to consider these products as repeatable and reproducible, not even in relation to the use of workflows commonly allowed inside various software platforms. In fact, considering the way products are validated geometrically, as drafted by ASPRS [8] or often by national regulations, the dependence between the evaluations made and the repeatability and reproducibility of the results remains unexpressed [9, 10]. Moreover, the lack of detailed information on the algorithms implemented in the licensed platforms makes the whole interpretation even more complex. This work therefore aims to provide a comparative evaluation of three photogrammetric software packages commonly used in the industrial field, in order to obtain coherent, if not exactly congruent, results. In particular, three processing chains were run in parallel in Agisoft Photoscan, Pix4D Mapper and MicMac. After structuring the overall processing workflow, the processing pipelines were carefully parameterized to make them comparable in both licensed and open-source software. For the best interpretation of the results derived from the generation of point clouds processed from the same set of image data, the influence of the number of Ground Control Points (GCPs) implemented in the georeferencing on the final product accuracy was then analyzed through statistical inference [11]. The tests carried out were aimed at investigating the elements shared by the platforms tested, with the purpose of supporting future studies to define a single index for the accuracy of final products. The tests were carried out on a dataset of images covering the excavation area of a road section in the trench of the Pedemontana Veneta Highway. 
The Pedemontana Veneta Highway is a road infrastructure currently under construction that crosses Veneto (a region in Northern Italy), developed in the context of the European Mediterranean Corridor (ex Corridor n. 5). The intervention concerns the decongestion of the territorial conurbation of the central metropolitan area of the regional territory, with the creation of an overall by-pass and a foothill route for continuity. The dataset consists of 243 images acquired by a low-cost NIKON CORPORATION Coolpix A camera (focal length 18.5 mm, ISO 400, shutter 1/1000, 4928 × 3264 pixels, 16 MP) mounted on board the professional multicopter IA-3 Colibrì by IDS, supplied by SIPAL S.p.A. The latter is a vertical take-off and landing (VTOL) UAV weighing up to 5 kg, propelled by 4 brushless rotors. Being characterized by a maximum flight time of 40 min at optimal payload (0.5 kg), as in the case study, the flight was performed at a height of about 50 m above the ground, obtaining a Ground Sample Distance (GSD) of 1.23 cm/pix and covering an area of about 0.0853 km², with a non-homogeneous longitudinal overlap of 80% over the whole area. The UAV was equipped with a high-precision GNSS rover receiver, which in continuous mode records the coordinates of the antenna phase centre in Real Time Kinematic (RTK) mode, creating a radio bridge with the GNSS master station on the ground in static acquisition. Thus, each image was associated with the geo-tag of the receiver at the time of shooting. The coordinates of the images were recorded and transformed into a Linear Local Reference System, useful during the road construction phase. Finally, twenty 80 × 80 cm targets were distributed throughout the entire scene, as in Fig. 1, so that they could be recognized in the images captured by the UAV. The targets were then measured in a GNSS survey using Leica GS08plus receivers in Real Time Kinematic (RTK) mode, achieving an average accuracy of 0.02 m along the three axes. 
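As a quick plausibility check of the reported GSD, the standard nadir pinhole relation GSD = (sensor width × flight height) / (focal length × image width) can be sketched as follows; the 23.6 mm APS-C sensor width is an assumption, since the text reports only the focal length and image size:

```python
def ground_sample_distance(sensor_width_m, focal_length_m, flight_height_m, image_width_px):
    """Ground footprint of one pixel (m/px) for a nadir-looking pinhole camera."""
    return (sensor_width_m * flight_height_m) / (focal_length_m * image_width_px)

# Survey values: 18.5 mm focal length, ~50 m flight height, 4928 px image width.
# The 23.6 mm sensor width is an assumed APS-C size, not stated in the text.
gsd = ground_sample_distance(23.6e-3, 18.5e-3, 50.0, 4928)
print(f"GSD ≈ {gsd * 100:.2f} cm/px")  # ≈ 1.29 cm/px, consistent with the
                                       # reported 1.23 cm/px at "about 50 m"
```

The small gap with respect to the reported 1.23 cm/px is absorbed by the approximate flight height and assumed sensor width.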
The ground truth coordinates were consequently obtained, to be used as Ground Control Points (GCPs) or Check Points (CPs) in each test performed. Based on extensive literature and on currently validated methodologies for the return of products complying with commonly accepted accuracy standards [9, 12-14], an overall processing workflow was structured (Fig. 2). Parallel processing was performed on a hardware system with ordinary performance, i.e. featuring an Intel(R) Core(TM) i7-5500U CPU @ 2.40 GHz, 8 GB RAM and an Intel(R) HD Graphics 5500 GPU. The first step, which tends to be underestimated and left to a default parameterization, is fundamental and necessary to obtain reliable products. In any software, a reasonable setting of the workspace affects the plausibility of its operations, tailoring the workflow to each case study. In a general view, the choice of a shared reference system, both in consideration of the data acquired in the field and of the factors that will intervene in the resolution of the equations at the base of the photogrammetric algorithms, establishes the coherence of the orientations and scale of the final models; lastly, the arrangement of the calibration parameters of the camera and of its lever-arm optimizes the estimates of the interior orientations, a possible propagation source of a multitude of distortions in the final accuracy values. Such parameters may be derived from rigorous laboratory operations or obtained through self-calibration from the acquired data; the latter is often preferred and returns entirely reliable values compared with the former. On the other hand, the calibration parameters of a low-cost camera cannot be considered constant, since they are subject to variations mainly related to optical-mechanical deterioration and temperature [15]. These parameters are corrected during the processing of the acquired data, optimizing the interior orientation of the cameras from time to time. 
The achievable corrections will depend on the quality of the acquired images, as well as on the characteristics of the surveyed scenario, the inclination of the camera, the overlap between the images and, above all, on the presence of ground-truth points marked on screen. In the second step, the software runs algorithms that search for point features image by image, i.e., based on Scale Invariant Feature Transform (SIFT)-type algorithms [16], distinctive and unambiguous points that are invariant to scale and orientation changes and partially invariant to lighting changes. [Fig. 2. Overall processing workflow - Step 1: workspace setting, enabling camera self-calibration; Step 2: tie-point extraction, image matching, sparse point cloud generation; Step 3: reprojection-error filtering, GCP/CP import and collimation; Step 4: Bundle Block Adjustment, GCP/CP RMSE statistics.] At the end of this search, the software starts the matching algorithms, i.e. the images are compared by searching for homologous points among those already recorded. Once the correspondences between the images have been defined by means of tie points, considering the interior orientation estimates of the camera assisted by the positional information of the images, the geometrical relations between the various images are constructed and a sparse point cloud is computed. In most cases, the implemented algorithms are not in the public domain in commercial software platforms, whereas in fully open-source suites users can even contribute with their own interventions. The lack of this information therefore makes a comparative analysis of the achievable results more complicated. In the next step, the sparse point cloud can preferably be filtered, removing points whose Reprojection Error is above a certain threshold, usually 0.5 pixels. 
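The matching idea described above, nearest-neighbour descriptor matching with an ambiguity check as in SIFT-style pipelines, can be illustrated with a toy NumPy sketch; the descriptors and the 0.75 ratio threshold are purely illustrative, not the platforms' actual implementations:

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.75):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping only matches whose nearest/second-nearest distance ratio is low
    (Lowe's ratio test), which rejects ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]  # nearest and second-nearest neighbours
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches

# Toy 4-D "descriptors": point 0 has one clear match, point 1 is ambiguous.
a = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]])
b = np.array([[1.0, 0.1, 0.0, 0.0], [0.0, 1.0, 0.1, 0.0], [0.0, 1.0, 0.0, 0.1]])
print(ratio_test_match(a, b))  # → [(0, 0)]: only the unambiguous match survives
```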
The Reprojection Error, in pixels, identifies the difference between the estimated position of a point in an image and its projection from the sparse point cloud [14]. The software thus learns the corrections and processes the improved information about the relative orientation of the images to update the estimates made in the first phase of image alignment. The collimation phase is the most complex and time-consuming step. In particular, when the coordinates of the points measured in the field are loaded into the workspace, they must be marked in all the images in which they are visible. GCPs differ from CPs: the former are used for georeferencing the photogrammetric block, while the latter control its accuracy. In the last, but no less important, phase, the sparse point cloud, filtered and assisted by the accurate ground-truth information, undergoes an adjustment of the estimates by means of Bundle Block Adjustment (BBA) algorithms. These adjust and refine the geometry of the scene by minimizing the squared reprojection errors between the points in the images and those in the photogrammetric block. Most software platforms also allow heterogeneous information to be aggregated into the BBA compensations, such as the positional information recorded in each image by the UAV receiver. This information, although of lower accuracy than the ground truth and therefore with a lower weight in the equations, is essential in cases of Direct Georeferencing (DG) and of Indirect Georeferencing (IG) with fewer than 3 GCPs implemented. For the remaining cases, the choice of aggregation could produce different final results, so any discrepancies will be analysed. At the end of the processes, the model obtained can thus be considered valid and consistent for the generation of products useful in any field. The processing pipelines were carefully parameterized for each software package, generating a sparse point cloud in each of them. 
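The quantity whose squared sum the BBA minimizes can be illustrated with a minimal pinhole-camera sketch; all the numbers below are illustrative, not taken from the survey:

```python
import numpy as np

def reproject(X, R, t, f, c):
    """Project a 3-D world point X into pixel coordinates with a pinhole
    camera: rotation R, translation t, focal length f (px), principal point c."""
    Xc = R @ X + t                  # world frame -> camera frame
    return f * Xc[:2] / Xc[2] + c   # perspective division + principal point

def reprojection_error(X, R, t, f, c, observed_px):
    """Distance in pixels between the observed image point and the
    reprojection of its estimated 3-D position."""
    return float(np.linalg.norm(reproject(X, R, t, f, c) - observed_px))

# Illustrative numbers only: identity attitude, nadir-looking camera 50 m up.
X = np.array([2.0, 1.0, 0.0])                  # ground point
R, t = np.eye(3), np.array([0.0, 0.0, 50.0])   # camera pose
f, c = 3944.0, np.array([2464.0, 1632.0])      # focal (px), image centre
err = reprojection_error(X, R, t, f, c, np.array([2622.0, 1711.5]))
print(f"reprojection error: {err:.2f} px")     # well under the 0.5 px threshold
```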
In the relative georeferencing management fields, GCPs were implemented through leave-one-out techniques [17, 18], thus generating a wide case history of 21 models for each workspace. At the end of the processes, an overall representation of any differences, in terms of mean error and root-mean-square error (RMSE) evaluated on GCPs and CPs, was built. For the purposes of this work, the processes for obtaining products beyond the sparse point clouds were not undertaken, these being considered the main ones for any discussion regarding geometric accuracy. Agisoft PhotoScan. Considering the premises described about the workflow structure, the working area of the software platform needed a suitable parameterization in order to guarantee the success of the consecutive operations. The Settings panel shows three areas for configuring the entire workspace. The Reference Settings area acts on the general Coordinate System, setting a local system as described in Sect. 2.1. The other two configuration areas parameterize the field-measurement accuracy of the UAV's on-board receiver and Inertial Measurement Unit (IMU), and the accuracy, in metres, of the GCP coordinates as acquired. A value of 0.05 m was set in the Camera Accuracy (m) option, the Camera Accuracy (deg) option was kept at 10 deg since the UAV did not record IMU orientation data, and 0.02 m was set in the Marker Accuracy (m) parameter. Under the Image Coordinates Accuracy title, the accuracy in pixels of the markers was parameterized, i.e. how carefully the markers could be placed in the software workspace, and a fairly realistic value of 0.5 pixels was assigned. The Tie Point Accuracy option, instead, identifies the weight to be given to the tie points in the block adjustment phases, and this was fixed at about 3 pixels, as suggested in Mayer et al. [14]. 
In Camera Calibration, the corrections of the camera parameters were enabled in every orientation estimation process and, in GPS/INS Offset, the lever-arm vector equal to [0.00, 0.00, 0.40] m was introduced with a relative accuracy of 0.01 m, as measured in the laboratory. Agisoft PhotoScan adopts Brown's model [19] for the parametric description of the camera lens. In the second phase shown in Fig. 2, the Align Photos process was run, setting the High option as execution mode and preferring to disable the automatic pre-filtering by fixing the value 0 in the Key Point and Tie Point Limit options. Considering the interior orientation estimates of the camera assisted by the positional information of the images, the geometrical relations between the various images were constructed and finally a sparse point cloud was computed. The launched process required 5 h and 57 min of processing time in the matching phase and 20 min and 33 s in image alignment. At the end of the processing, a sparse cloud of 2,284,543 points was returned, with a mean Reprojection Error of 0.452339 pixels and an average point density of about 27 points/m². The Agisoft PhotoScan platform makes it possible to manipulate the obtained point cloud, filtering those points characterized by conspicuous Reprojection Errors and thus obtaining a model quite consistent with reality. The Gradual Selection item, which contains the filtering tools for sparse point clouds, was selected. Following the indications proposed in Mayer et al. [14], three filtering operations were performed: • Reconstruction Uncertainty: 10 - points remaining after this first filter: 2,284,136; • Projection Accuracy: 3 - points remaining after this filter: 1,517,282; • Reprojection Error: 0.40 - points remaining after this last filter: 1,517,282. The cloud thus presented an RMSE of the Reprojection Error equal to 0.307533 pixels, so that it can be considered consistent and robust for subsequent processing. 
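The Brown lens model mentioned above can be sketched in its usual radial-plus-tangential form, applied to normalized image coordinates; the coefficients below are purely illustrative, not the calibrated values of the Coolpix A:

```python
def brown_distort(x, y, k1, k2, k3, p1, p2):
    """Apply Brown's distortion model to normalized image coordinates (x, y):
    radial terms k1..k3 and tangential (decentering) terms p1, p2."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# Illustrative coefficients; self-calibration refines such values during
# processing rather than keeping them fixed (k1 < 0 gives barrel distortion,
# pulling the point slightly towards the image centre).
xd, yd = brown_distort(0.2, 0.1, k1=-0.1, k2=0.01, k3=0.0, p1=1e-4, p2=1e-4)
```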
Then the coordinates of the 20 GCPs acquired during the survey, in the same reference system as the images, were imported into the workspace. The GCPs then needed to be collimated, image by image, trying to keep their reprojection value in the model low. The 21 Chunks subsequently generated by duplication represented the set of solutions to be adopted in the georeferencing phase as the number of implemented GCPs varies, down to the 0-GCP solution, i.e. DG. In order to follow a fairly univocal methodology, starting from 20 implemented GCPs, the case history was built using the leave-one-out technique, i.e. removing one GCP at a time and transforming it into a Check Point (CP). The GCPs were removed in such a way as to always maintain a homogeneous distribution, between external and central zones, of both GCPs and CPs. After defining the list of Chunks, each characterized by a number of implemented GCPs from 20 down to 0, for the cases with more than 3 GCPs the positional information of the images was removed in order to use only the GCPs in the adjustment. BBA processing was then programmed for each case in a Batch Process using the Optimize Cameras command. Pix4D Mapper. When started, the software requires the images to be processed to be loaded and the workspace to be set [20]: the Image Properties window is where the Coordinate System of the image geo-tags is set, the source from which to extract the positional information of the images is selected, the relative geolocation accuracy is chosen and, finally, the model of the camera used is specified. The choices already described in the previous paragraph were consequently followed. The camera model was then configured. Pix4D is equipped with an internal database in which the calibration data of several commercial cameras are stored. 
The software recognizes the camera model from the EXIF data and automatically searches for it in its database: if present, as in the case under study, it loads the calibration values. Unlike many other software packages, Pix4D adopts a proprietary camera model, i.e. a different parametric description of the lenses, convertible, however, to any other model. After selecting the Output Coordinate System, the software offered a wide choice of processing methods organized in predefined Templates. These templates are, in short, standardized processing options that facilitate the user's work in the immediate achievement of results, without following the various processes step by step. A model (e.g. 3D Maps) can be chosen and adapted to the needs of the case. Under Advanced, the software enables new subfolders for advanced processing parameterization. In particular, for the Initial Processing, advanced parameters can be selected using: the General panel, which allows the user to modify the processing options and to select what will be displayed in the Quality Report; the Matching panel, which allows the user to modify the processing options related to the matching of the key points for the first step; and the Calibration panel, where it is possible to modify the calibration options of the camera and the desired outputs for this first step. For the purposes of this work, it was chosen to make the software work at the original image resolution with the Full option, indicating the Aerial Grid or Corridor option as the matching strategy between corresponding images, the flight mission having been structured in swaths. While making the processing slower, a geometric verification of the correspondences was enabled, in order to discard geometrically inconsistent correspondences. Afterwards, continuous optimization of the internal and external parameters of the camera orientations was preferred, calibrated through the Alternative option, recommended for images acquired by UAV. 
The launched process took 1 h and 26 min of processing time for the entire first step. At the end of the processing, a sparse cloud of 2,255,469 points was returned, with an RMSE value of the Reprojection Error of 0.195 pixels, a relative difference of 0.42% between the optimized internal camera parameters and the initial ones, and an average point density of about 26 points/m². With GCP/MTP Management, Pix4D provides a worksheet for the management of the Ground Control Points (GCPs) in the workspace. Using the RayCloud Editor, the GCPs were marked on the images. Finally, 21 projects were generated by duplication, representing the same case history as the solutions adopted in the previous case. Once the list of 21 projects had been drawn up, the BBA processing for each of them was started with the Re-Optimize command, considering, in the compensation, the contribution offered by the camera's positional information. MicMac. Unlike the other software packages seen, MicMac tends to be used exclusively from the command line, as a high-performance graphical user interface has not yet been completed [21]. Therefore, it was necessary to parameterize each command every time in order to make the processes comparable to those of the other two platforms. Having available the file in which the positions of the camera at the time of acquisition were recorded, it was useful to build an .xml file to assist the search algorithm and tie-point matching. The OriConvert command generates the .xml file containing the appropriate pairings between the images of the entire dataset according to their position. In the following stage, the Tapioca File command was started. As already done in the previous sections, among the options of the command, the value −1 was used to indicate the full resolution as image size. The software gives the possibility of activating the basic SIFT++ algorithms, an evolution of D. 
Lowe's original SIFT [16], or of choosing, as in this study, the DIGEO algorithms, a much faster and more efficient evolution of the SIFT algorithms. At the end of the feature-search processes, and therefore of the correspondences among the various points of the images, it was necessary to introduce an orientation phase that could set preliminary geometries among all the points, starting from a modelling of the camera and then passing through the relative geometries among the various acquisitions. Tapas is the command used for calculating the purely interior and relative exterior orientation. The camera calibration mode was set to Brown's model [19], which was known to be the one adopted in the Agisoft PhotoScan software. Then the CenterBascule tool was started, as it allows the purely relative orientation, as calculated with Tapas, to be transformed into an absolute one. In particular, CenterBascule assigns a new orientation to an image dataset, characterized by an orientation of its centres, considering the actual positioning of the centres defined by the database processed in OriConvert. At the end of this step, the Campari command was used to perform a least-squares compensation of the model orientation using heterogeneous measurements. Essentially, starting from the orientation obtained in CenterBascule, Campari compensates the measurements by assuming the coordinates of the UAV's GNSS receiver for the images, with the relative planar and altitude accuracies. It then enables the refinement of the estimates of all the camera calibration parameters. Once the compensated orientation database was obtained, the AperiCloud command was executed with it. In AperiCloud, the optional argument SeuilEc=0.4 was introduced to filter out all the points with a high residual value, i.e. those that can be classified as the outliers eliminated in Agisoft PhotoScan through the Reprojection Error filter. 
The entire process returned a sparse cloud of 1,771,128 points, with an RMSE value of 1.238 pixels and an average point density of about 21 points/m². The processes analysed so far required a much longer processing time than those described in the other two reports: the Tapioca command alone took about 50% of the time required by this first step (about 19.26 h of processing in total). In order to import the GCP and CP datasets into the processing, a preliminary management of the positional information was required to obtain an .xml file readable by MicMac. The GCPConvert command was used for this purpose. By means of the SaisieAppuisPredicQT command, a graphical interface was started to collimate the points, first the GCPs and then also the CPs. Considering the orientation obtained by the Campari command and loading the coordinates of the points, the SaisieAppuisPredicQT command is able to hypothesize the position of the GCPs and CPs in the images indicated in the command line, which must then be approved by the operator. The last mandatory argument is the name of an .xml file in which the image coordinates of the points are stored. As in the previous sections, the case history of 21 folders, one for each georeferencing case, was organized, and the GCPBascule command was started for each of them to transfer the absolute orientation inherited from the collimated GCPs to the entire photogrammetric block. Once a robust absolute orientation had been transferred from the GCPs to the sparse point clouds, the BBA processing was started to correct and fix the entire block. In particular, the Campari command was restarted, introducing the database of implemented GCPs as a useful measurement for the compensation. Two cases of compensation were studied, one adhering to the BBAs executed in Agisoft PhotoScan and the other keeping the positional information of the cameras as in Pix4D Mapper. 
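The 21-case leave-one-out structure shared by the three pipelines can be sketched as follows; the removal order here is simplified (last-in list), whereas the paper removes GCPs so as to keep GCPs and CPs homogeneously distributed between central and external zones:

```python
def leave_one_out_cases(point_ids):
    """Build the sequence of georeferencing cases: start with every target
    used as a GCP, then repeatedly demote one GCP to a Check Point (CP),
    down to the Direct Georeferencing case with 0 GCPs."""
    cases = []
    gcps, cps = list(point_ids), []
    while True:
        cases.append({"gcps": list(gcps), "cps": list(cps)})
        if not gcps:
            break
        cps.append(gcps.pop())  # illustrative order; in practice the spatial
                                # distribution of GCPs/CPs is kept homogeneous
    return cases

# 20 hypothetical target names -> 21 cases with 20, 19, ..., 1, 0 GCPs.
cases = leave_one_out_cases([f"T{i:02d}" for i in range(1, 21)])
print(len(cases))  # 21
```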
The method used to verify the accuracy of the georeferencing is to use the CPs to estimate the 3D similarity between the calculated coordinates and those measured in the field. The residuals on the GCPs, in turn, allow the accuracy of the georeferencing result to be qualified. The command introduced, GCPCtrl, allowed these residuals to be quantified and thus the degree of accuracy achieved in the processes seen to be returned. The values obtained from the 21 GCP and CP management cases implemented for each software package, and therefore from the related BBA processes, were analyzed and compared with the geometric standards widely accepted by the scientific community, as updated by ASPRS in 2015. These standards were developed in response to the pressing need of the GIS and mapping community to keep pace with the growing rise of new geospatial technologies. The standard follows metric units of measurement in order to be consistent with international standards and practices, although it does not specify the best methodology needed to meet the set thresholds. It is the data provider's responsibility to establish the control procedures and the final quality of the geospatial product to be returned, in accordance with the commissioned requests. The standard is independent of the technology and addressed to a broad base, while recognizing the existence of application limitations. The ASPRS defines accuracy classes based on root-mean-square error (RMSE) thresholds evaluated on CPs for digital orthoimagery, digital planimetric data and digital elevation data [8]. The RMSE values, taken as cumulative values of systematic errors and any variances, are in fact a measure of the accuracy of the referenced datum. At the same time, the absolute mean errors obtained along the three axes will be analyzed, looking for possible systematic effects. A check of the same values on the GCPs, instead, accredits the robustness and consistency of the georeferencing phases in the photogrammetric blocks. 
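The statistics compared against the ASPRS classes, the per-axis RMSE, the planar RMSE(r) and the vertical RMSE(z), together with the mean errors that expose systematic offsets, can be sketched as follows; the residuals below are toy values, not those of the survey:

```python
import math

def rmse(residuals):
    """Root-mean-square error of a list of residuals along one axis."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

def accuracy_stats(dx, dy, dz):
    """Per-axis RMSE, planar RMSE(r) as in the ASPRS horizontal standard,
    vertical RMSE(z), and mean errors to expose systematic offsets."""
    rx, ry, rz = rmse(dx), rmse(dy), rmse(dz)
    return {
        "RMSE_x": rx, "RMSE_y": ry,
        "RMSE_r": math.sqrt(rx**2 + ry**2),  # planar error, x and y combined
        "RMSE_z": rz,
        "mean_x": sum(dx) / len(dx),
        "mean_y": sum(dy) / len(dy),
        "mean_z": sum(dz) / len(dz),
    }

# Toy CP residuals in metres (illustrative only).
stats = accuracy_stats(dx=[0.01, -0.02, 0.015],
                       dy=[0.02, 0.01, -0.01],
                       dz=[0.03, -0.02, 0.025])
print(f"RMSE_r = {stats['RMSE_r']:.4f} m, RMSE_z = {stats['RMSE_z']:.4f} m")
```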
Given the specific requirements recommended by the document, for the purposes of this work the statistics concerning the Horizontal Accuracy Standards for Geospatial Data and the Vertical Accuracy Standards for Elevation Data were evaluated [20]. In particular, for the former, the ASPRS tabulates the RMSE(r) values, i.e. the planar error resulting from the contributions along the x and y axes, in 24 accuracy classes recommended for Digital Planimetric Data produced from digital source imagery at various ranges of GSD [8]. The vertical standard, instead, is based on 10 accuracy classes using RMSE(z) statistics, i.e. exclusively along the z-axis, differing between non-vegetated and vegetated terrain, for the latter of which the statistics of the 95th percentile are considered. In this case study it was possible to clearly assume the statistics for the first scenario. As pointed out in the previous paragraphs, the comparisons in this study are presented first for the results obtained in Agisoft PhotoScan and MicMac without implementing the geo-tags of the images in the various BBA processes; comparisons are then made with those derived in Pix4D Mapper and MicMac, this time considering the BBA processes complemented by the positional information of the images. Finally, the planar and Z-axis RMSE values are evaluated in an integrated analysis of the three software packages, comparing them with the thresholds established by the ASPRS standards. Figure 3 shows the trends of the RMSE(XYZ) values and mean errors recorded in the various georeferencing cases in the Agisoft PhotoScan and MicMac software. Analyzing the values returned on the CPs, completely equal values are presented for the cases where the number of implemented GCPs is less than 3, these being characterized by the positional information of the geo-tags, which therefore reduces the contribution of the tie points in the BBA processes. 
This is in fact evident in the subsequent cases, where the BBA processes within MicMac do not support a reduced number of GCPs and reveal much larger RMSE(XYZ) values than those derived from Agisoft PhotoScan. While in Agisoft PhotoScan the values can be considered constant over the entire case history, in MicMac there is a downward trend to a minimum value in the georeferencing condition with 19 GCPs. The mean errors follow the same trends, except for the MicMac cases falling in the range of 3-5 implemented GCPs, where the deviation shows systematic errors lower than the accidental ones. On the other hand, in PhotoScan, although there are slight deviations between mean errors and RMSE below 3 implemented GCPs, the values recorded on the GCPs show a reduced robustness of the georeferencing, which then remains constant for all the other cases analyzed. MicMac, in contrast, produces more robust georeferencing than PhotoScan, recording lower RMSE values and mean errors from the 6th implemented GCP onwards. Figure 4a shows a descending step behaviour for the Pix4D Mapper software below 3 implemented GCPs, unlike MicMac where, from the first implementation onwards, RMSE values and mean errors are almost constant. The precariousness of the georeferencing below 3 implemented GCPs is in fact reflected in the error values recorded on the GCPs in Fig. 4b for the Pix4D trend. In MicMac, on the other hand, the variability is negligible, showing a uniform robustness in every georeferencing case, even if with values higher than those obtained in Pix4D. The BBA procedures in Pix4D thus benefit from more than 3 GCPs, giving accuracy values on the CPs better by an average deviation of about 1.5 cm, up to the extreme case of 19 GCPs, where the results of the two software packages converge. As can be seen in Figs. 
3 and 4 about the estimated values on CPs, regardless of the assumptions made and the software used, the maximum accuracy limit achieved of 0.02 m was inherited from the measurements on the GCPs implemented. It can therefore be ascertained that more accurate results could only be achieved by adopting more accurate GNSS measurement modes (e.g. with longer acquisition intervals in relative mode) or post-processing modes (e.g. using precise orbital ephemeris). Figure 5a integrates the planar RMSE values obtained in the three softwares and compares them with the thresholds set by ASPRS for Digital Planimetric Data. Figure 5b shows the comparison between the RMSE values along the Z axis for each georeferencing case and the ASPRS standards for Vertical Data. In this comparison, the RMSE values obtained in MicMac by integrating positional image information into the BBA processes were considered. The results obtained can be considered as in line with those already discussed in previous works [11, 18] , in which it was possible to see a coherent trend of RMSE values for DG and complete IG cases, especially about the elbow point of the statistic curve in a range of GCPs implemented equal to 5-7. Considering a different application scenario and a flight altitude of 120 m, in Agüera-Vega et al. [17] the trends obtained in this work were confirmed by recording a reduction of both planar and vertical RMSE values around the seventh GCP implemented, while the lowest values were recorded in the configuration with 15 GCPs. Instead, observing the results obtained by Benassi et al. [9] , comparing the processes in the three softwares analysed under this study, as in Fig. 5 , MicMac offers a constancy of the planar RMSE values as the implemented GCPs vary, while, at the same time, Pix4D also shows a behaviour comparable to those obtained in PhotoScan even if they are considered as better. 
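The two vertical statistics named by the ASPRS standard can be sketched as follows: RMSE_Z for non-vegetated terrain, and the 95th percentile of the absolute vertical errors for vegetated terrain. The residuals and the linear-interpolation percentile are illustrative choices, not prescribed by the standard's text quoted here.

```python
import math

def rmse_z(dz):
    """Vertical RMSE on check points (ASPRS statistic for non-vegetated terrain)."""
    return math.sqrt(sum(d * d for d in dz) / len(dz))

def percentile_95(dz):
    """95th percentile of absolute vertical errors (ASPRS statistic for
    vegetated terrain), with linear interpolation between sorted values."""
    errs = sorted(abs(d) for d in dz)
    k = 0.95 * (len(errs) - 1)
    lo, hi = math.floor(k), math.ceil(k)
    if lo == hi:
        return errs[lo]
    return errs[lo] + (k - lo) * (errs[hi] - errs[lo])

# Hypothetical z-axis check-point residuals in metres
dz = [0.01, -0.02, 0.015, -0.005, 0.03]
print(round(rmse_z(dz), 4), round(percentile_95(dz), 4))
```

Either value is then compared against the thresholds of the applicable ASPRS vertical accuracy class, exactly as done for the planar case in Fig. 5.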
From a summary analysis of the comparisons generated, it is significant that the processes run in the different software packages, although carried out according to a common workflow, generate results that are not congruent but in most cases coherent. As can be observed, both planar and vertical RMSE values follow comparable trends, maintaining, in most of the implemented georeferencing cases, the same accuracy class set by the ASPRS standards.

In the last decade, while hardware evolution has driven a sweeping modernization of the industrial sector, there has also been an exponential development of software systems, with increasing sophistication of the implemented algorithms, enabling users to define high-performance ICT solutions. One of the sectors most affected by this modernization is geomatics, with digital photogrammetry thoroughly reshaped by the rise of UAVs. Despite the widespread use of a variety of software packages to deliver the same high-quality products, no validation analysis has been reported in the literature, to date, to determine whether such products can be considered repeatable and reproducible regardless of the platform used. In order to obtain consistent results, a comparative evaluation of three photogrammetric software packages commonly used in the industrial field was therefore carried out in this work: in particular, three processing chains were run in parallel in Agisoft PhotoScan, Pix4D Mapper and MicMac. The tests aimed to investigate the elements shared by the tested platforms, in order to support future studies in defining a unique index for the accuracy of the final products. In particular, after setting up a general processing workflow, the related sparse point clouds were processed within each software package and then subjected to 21 georeferencing strategies, varying the number of implemented GCPs with the leave-one-out technique.
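The leave-one-out generation of georeferencing cases can be sketched as below: starting from the full set of surveyed markers used as GCPs, one marker is dropped per step, and markers not used as GCPs serve as independent check points (CPs). The marker IDs and the removal order are illustrative assumptions; the paper does not specify which marker is dropped at each step.

```python
def georeferencing_cases(markers):
    """Leave-one-out sequence of georeferencing configurations:
    each case is a (GCPs, CPs) split of the surveyed markers,
    with one fewer GCP than the previous case."""
    cases = []
    gcps = list(markers)
    while gcps:
        cps = [m for m in markers if m not in gcps]
        cases.append((list(gcps), cps))
        gcps.pop()  # drop one GCP for the next, weaker configuration
    return cases

# 21 surveyed markers yield 21 georeferencing strategies (21 GCPs down to 1)
cases = georeferencing_cases([f"M{i:02d}" for i in range(21)])
print(len(cases))
```

Each configuration is then fed to the BBA of every software package, and the accuracy statistics are computed on the held-out CPs.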
After obtaining the BBA results for each model, a statistical comparison was generated to assess the results obtainable from the different software packages and to verify them against the accuracy thresholds established by the ASPRS. From the analysis of the comparisons generated, it is evident that, despite sharing a common workflow, the processes analyzed generate results that are consistent, yet at the same time not congruent. Both planar and vertical RMSE values follow comparable trends, so that in most georeferencing cases the same accuracy class, as defined by the ASPRS standards, was attained.

References

1. A procedure for automating earthwork computations using UAV photogrammetry and open-source software
2. Unmanned aerial vehicle for remote sensing applications: a review
3. Using low-cost UAVs for environmental monitoring, mapping, and modelling: examples from the coastal zone
4. The use of UAVs for cultural heritage and archaeology
5. The rise of UAVs
6. Comparison of UAV imagery-derived point cloud to terrestrial laser scanner point cloud
7. Generation of 3D surface models from UAV imagery varying flight patterns and processing parameters
8. American Society for Photogrammetry and Remote Sensing (ASPRS): ASPRS positional accuracy standards for digital geospatial data
9. Testing accuracy and repeatability of UAV blocks oriented with GNSS-supported aerial triangulation
10. The reproducibility of SfM algorithms to produce detailed digital surface models: the example of PhotoScan applied to a high-alpine rock glacier
11. Comparative analysis of different UAV-based photogrammetric processes to improve product accuracies
12. Cost-effective non-metric photogrammetry from consumer-grade sUAS: implications for direct georeferencing of structure from motion photogrammetry. Earth Surf
13. Parameter optimization for creating reliable photogrammetric models in emergency scenarios
14. A comprehensive workflow to process UAV images for the efficient production of accurate geo-information
15. UAV cameras: overview and geometric calibration benchmark
16. Distinctive image features from scale-invariant keypoints
17. Assessment of photogrammetric mapping accuracy based on variation ground control points number using unmanned aerial vehicle
18. Assessing the impact of the number of GCPs on the accuracy of photogrammetric mapping from UAV imagery
19. Close-range camera calibration
20. Applying ASPRS accuracy standards to surveys from small unmanned aircraft systems (UAS)
21. MicMac: a free, open-source solution for photogrammetry