key: cord-1017894-dq8xag2m
authors: Vaidyanathan, Akshayaa; Guiot, Julien; Zerka, Fadila; Belmans, Flore; Van Peufflik, Ingrid; Deprez, Louis; Danthine, Denis; Canivet, Gregory; Lambin, Philippe; Walsh, Sean; Occhipinti, Mariaelena; Meunier, Paul; Vos, Wim; Lovinfosse, Pierre; Leijenaar, Ralph T.H.
title: An externally validated fully automated deep learning algorithm to classify COVID-19 and other pneumonias on chest CT
date: 2022-03-24
journal: ERJ Open Res
DOI: 10.1183/23120541.00579-2021
sha: 627497f7d7e876a70a1e0c4db1455e9ffb5a8579
doc_id: 1017894
cord_uid: dq8xag2m

PURPOSE: In this study, we propose an artificial intelligence framework based on a 3D Convolutional Neural Network (CNN) to classify CT scans of patients with COVID-19, Influenza/CAP, and no infection, after automatic segmentation of the lungs and lung abnormalities.

METHODS: The AI classification model is based on the inflated 3D Inception architecture and was trained and validated on retrospective data of CT images of 667 adult patients (No infection: 188, COVID-19: 230, Influenza/CAP: 249) and 210 adult patients (No infection: 70, COVID-19: 70, Influenza/CAP: 70), respectively. The model's performance was independently evaluated on an internal test set of 273 adult patients (No infection: 55, COVID-19: 94, Influenza/CAP: 124) and an external validation set coming from a different center (305 adult patients; COVID-19: 169, No infection: 76, Influenza/CAP: 60).

RESULTS: The model showed excellent performance in the external validation set, with an AUC of 0.90, 0.92 and 0.92 for COVID-19, Influenza/CAP and No infection, respectively. The selection of the input slices based on automatic segmentation of the abnormalities in the lung reduces analysis time (56 s per scan) and the computational burden of the model. The TRIPOD score of the proposed model is 47% (15 out of 32 TRIPOD items).

CONCLUSION: This AI solution provides rapid and accurate diagnosis in patients suspected of COVID-19 infection and influenza.

Imaging with computed tomography (CT) plays a central role in the diagnosis of respiratory diseases [1, 2]. After the outbreak of COVID-19 in 2020, more emphasis has been given to the different types of pneumonia and to the features that distinguish COVID-19 from all others [3, 4]. Viral pneumonias, whether COVID-19 or others, can all present with reticulation, ground glass opacities (GGO) and consolidations on chest CT, creating a challenge for radiologists in their routine differential diagnosis. Previous studies on the performance of radiologists in discriminating between COVID-19 and other pneumonias on chest CT have shown high variability in both sensitivity (73%-94%) and specificity (24%-100%), with on average high sensitivity and moderate specificity [5]. This variability in the interpretation of CT findings of pneumonia still creates a routine challenge for clinicians in their differential diagnosis, which is key to properly treating patients and preventing the spread of infection during pandemics and in the near future. In this context, the development of innovative artificial intelligence (AI) imaging solutions to support radiologists in a swift and precise differential diagnosis would be of invaluable help. Convolutional Neural Networks (CNN) have shown great potential in detection, segmentation and classification tasks in radiological images [6].
A recent study demonstrated the application of CNN for the differentiation among Influenza, COVID-19 and no infection on chest CT scans, with an overall accuracy of 86.7%. The proposed method incorporated training on image patches extracted from CT volumes, where each image patch required manual labelling with its corresponding class [7]. Another study compared the performance of different AI models in classifying COVID-19 from other atypical and viral pneumonias, showing 99.5% accuracy in classifying COVID-19 [8]. However, these approaches all involve manual detection (i.e. drawing boxes around the lesions), labelling of the lesions in all the slices, and training the models on the patches of detected lesions and the manual labels. The time required to perform these manual operations is usually not considered when addressing the real-world application of these models and probably represents one of the major hurdles to widespread clinical adoption. A fully automatic tool running on chest CT images for the differential diagnosis of pneumonias can represent an important step forward in decreasing the variability of interpretation among clinicians and in speeding up the diagnostic process. This will unburden medical staff and in turn provide better and faster diagnosis for patients, reducing the use of hospital resources. Better allocation of both material and human resources can be essential in a time of crisis, as the COVID-19 pandemic demonstrated with dramatic clarity [9]. To attain this goal, we developed and externally validated a fully automated deep learning framework with a 3D CNN, able to classify chest CT scans of patients with COVID-19, Influenza/CAP, or no infection without manual intervention. Individual AI-based whole-lung and lung-abnormality segmentation models were used to pre-process the CT images for training the 3D CNN model; they are an integral part of the workflow, ensuring that only patients presenting abnormalities in the lung volume are processed by the model, saving time and computational power.

The study was approved by the local ethics committee of the CHU-Liège (EC number 116/2020). The institutional review board waived the requirement to obtain written informed consent for this retrospective case series, since all analyses were performed on de-identified (i.e., anonymized) data and there was no potential risk to patients. Three cohorts of patients were included retrospectively in this study for model training, validation and testing. The cohorts came from two University Hospitals (CHU Sart-Tilman and CHU Notre Dame des Bruyères) in Liège, Belgium. The first cohort (label: COVID-19) consisted of all patients with COVID-19 infection confirmed by RT-PCR who underwent chest CT imaging before March 28th, 2020. The second cohort (label: Influenza/CAP) consisted of patients with influenza, parainfluenza or community acquired pneumonia (CAP) infection confirmed by RT-PCR. The third cohort (label: No infection) consisted of patients with confirmed absence of infection in the lungs, disregarding any other lung disease. The three cohorts were pooled together and randomly split between training, validation and testing sets (see Fig. 1). Additionally, the open-source dataset COVID-CT-MD was used as external validation set [10].
The final population consisted of 169 RT-PCR confirmed positive COVID-19 cases (from February 2020 to April 2020), 60 Community Acquired Pneumonia (CAP) cases (from April 2018 to November 2019) and 76 No infection cases (from January 2019 to May 2020): all the patients were treated at Babak Imaging Center in Tehran, Iran, and were labelled by three experienced radiologists.

In this retrospective study, the CT scans of the three included cohorts were acquired on different scanners (Siemens and GE) with diverse reconstruction kernels (soft and sharp). When more than one series was available for a case, all available series were used in training the model (the different reconstruction kernels were considered a form of image augmentation). Slice thickness of the scans ranged between 0.5 mm and 2 mm, while pixel spacing ranged between 1 and 2.5 mm. A complete summary of the imaging parameters of both the training and the external validation set is reported in Table S1. The prevalence of COVID-19 cases in the three datasets was adjusted to avoid class imbalance and bias in classification [11]; COVID-19 cases represented between 35 and 45% of the whole cohort for each dataset.

A fully automated lung segmentation model (see Supplementary Material - Lung Segmentation) was used to filter out the slices not containing lungs from the CT scan series. The presence of abnormalities in each filtered slice was confirmed using a lung abnormalities segmentation model (see Supplementary Material - Abnormalities Segmentation). If no abnormalities were present in the filtered slices, the scan was discarded from model processing. Different sets of 48 consecutive axial slices, with an overlap of 10 slices between consecutive sets, were extracted from the volume of axial slices containing lungs; each set including at least one slice with abnormalities in the lung was used to train the model. The workflow for the pre-processing protocol is depicted in Fig. 2. The entirety of the datasets provided by clinicians was used in model training and validation, without any prior scan quality selection. Each data point containing the 48 consecutive axial slices was processed in three different ways to obtain a three-channel input for the model:
- The first channel (Channel 1) contained slices with intensities clipped at lung window level settings (W: 1500 HU, L: -600 HU), cropped to the lungs and the abnormalities.
- The second channel (Channel 2) contained the slices with the original intensities, cropped to the lungs and the abnormalities.
- The third channel (Channel 3) contained slices with intensities clipped at mediastinal window level settings (W: 350 HU, L: 50 HU) within the bounding rectangle containing the lung and lung abnormality pixels. This operation was performed in order to better assess pleural effusion [12].
Finally, the slices were center-cropped to a size of 448 by 448 pixels. An example of the resulting lung and abnormalities segmentation is reported in Fig. 3. An inflated 3D Inception model [13], pre-trained on the Kinetics dataset [14], was trained on these 48-slice, three-channel inputs. The architecture inflates the filters and pooling kernels of the 2D Inception network [15] into 3D, leading to very deep, naturally spatiotemporal classifiers.
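To illustrate this pre-processing step, the minimal NumPy sketch below builds the three-channel input from a stack of 48 axial slices, using the window settings described above. The function and variable names, and the exact masking and cropping choices, are assumptions for illustration only; the actual implementation described in the Supplementary Material may differ in detail.

```python
import numpy as np

def window_clip(hu, level, width):
    """Clip CT intensities (in HU) to the window [level - width/2, level + width/2]."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return np.clip(hu, lo, hi)

def center_crop(img, size=448):
    """Center-crop a 2D slice to size x size pixels (assumes the slice is at least that large)."""
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def build_three_channel_input(hu_slices, lung_masks, abnormality_masks):
    """Build one model input from 48 consecutive axial slices.

    hu_slices:         (48, H, W) CT slices in Hounsfield units
    lung_masks:        (48, H, W) binary masks from the lung segmentation model
    abnormality_masks: (48, H, W) binary masks from the abnormality segmentation model
    Returns an array of shape (48, 448, 448, 3).
    """
    roi = np.logical_or(lung_masks, abnormality_masks)          # lungs + abnormalities
    ch1 = window_clip(hu_slices, level=-600, width=1500) * roi  # Channel 1: lung window, ROI only
    ch2 = hu_slices * roi                                        # Channel 2: original intensities, ROI only
    # Channel 3: mediastinal window inside the bounding rectangle containing the ROI
    _, yy, xx = np.nonzero(roi)
    box = np.zeros_like(roi)
    box[:, yy.min():yy.max() + 1, xx.min():xx.max() + 1] = True
    ch3 = window_clip(hu_slices, level=50, width=350) * box
    cropped = [np.stack([center_crop(s) for s in ch]) for ch in (ch1, ch2, ch3)]
    return np.stack(cropped, axis=-1)
```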
The model was trained for five epochs, with early stopping applied after the 5th epoch as the validation loss started to increase while the training loss decreased. Categorical cross-entropy was used as the objective function with a batch size of 2, chosen to fit within the 11 GB of available GPU memory. The model was trained on 10,500 data points (different sets of 48 consecutive axial slices obtained from the image volume with an overlap of 10 slices between one set and the other) and validated on 6000 data points. The network weights were updated using the Adam optimizer at a constant learning rate of 1e-4 [16]. The trained model was then applied to the sets of 48 consecutive axial slices of the test datasets. The overall class and the overall class probability were computed: if more than 20% of the predictions correspond to the class COVID-19, the patient is assigned to that class; if the probabilities for the class Influenza/CAP are higher than 20%, the patient is assigned to the class Influenza/CAP; otherwise, the patient is assigned to the class "No infection".

The clinician is presented with an automatically generated report containing the results of the classification algorithm. The report presents basic patient data (patient ID, scan number and scan date) along with the diagnosis (No infection, Influenza/CAP, COVID-19) and the probability calculated by the model for each class. The report also shows the 48 consecutive slices with the corresponding lung and lung abnormality segmentation masks used by the model to make the classification.

Model performance is reported in Fig. 4, which depicts the ROC curves for each class (COVID-19, Influenza/CAP and No infection). The lung abnormalities segmentation model identified 19 cases with no abnormalities in the external validation set. These scans were not processed by the DL architecture; performance metrics are reported in Fig. 5A. The confusion matrix on the external validation set is reported in Fig. 5B. The performance in the external validation set is in good agreement with the internal testing set. A summary of the performance metrics for both the internal test set and the external validation set is presented in Table 2. The TRIPOD score of the proposed model is 47% (15 out of 32 TRIPOD items). The output of the classification workflow is also reported in the clinical summary report. Sample reports for Influenza/CAP and COVID-19 patients are presented in the Supplementary Information (Fig. S4 and S5).

We developed and validated a deep learning AI model for the classification of no infection, COVID-19 or Influenza/CAP cases based on CT imaging. The model achieved an AUC of 0.90, 0.92 and 0.92 for COVID-19, Influenza/CAP and No infection, respectively, in the external validation. The proposed workflow automatically segments and detects both lungs and lung abnormalities, reducing the time and computational burden of the classification task. Moreover, the network produces an automatic clinical summary report that can be used by the clinician to verify the model's decision. The datasets used for this study come from different countries and different centers. The training cohort is from the University Hospital in Liège, while the external validation set is from Babak Imaging Center in Tehran. The training dataset presents a certain homogeneity in imaging acquisition parameters, barring the use of different scanners at the different centers.
However, the validation data present different characteristics, coming from a different country with a different standard of care and thus different image acquisition protocols. This indicates the differences existing among the datasets and provides indirect proof of the generalizability of our model, which also attained good performance in the external validation dataset.

Several deep learning COVID-19 classification networks have been published thus far, both 2D [18] and 3D [19], including some based on automatic segmentation of the lungs [20, 21]. Machine learning [22], deep learning [23, 24] and a combination of both [25] have been explored for this classification task. Model performance is high to very high for all the published approaches (AUC between 0.8 and 0.95), and several authors compared the AI workflow with clinicians' performance [26, 27], reporting comparable if not better performance from the AI models, together with faster and more reproducible diagnosis. Our model achieves an AUC of around 0.9 for all classes, in line with those reported in the literature [21, 28]. The possibility of integrating a fully automatic tool for the evaluation of pneumonia source into the clinical workflow can be instrumental in improving patient management and hospital resource allocation. Automatic identification of COVID-19, Influenza/CAP and No infection patients can reduce diagnostic errors related to human reader experience. The possibility of fast-throughput CT scan analysis will unburden medical staff and free resources to be allocated to more urgent needs. Dubious cases will have to be confirmed by the clinicians upon examination, but the time and effort required to do so will be drastically reduced. A careful evaluation of the real cost-benefit of these tools is sorely needed to promote their application in clinical practice.

However, these automatic tools still have important limitations of applicability in the clinical setting. Overfitting and lack of generalizability and explainability are the most relevant ones for deep learning models [29, 30]. In this study, several techniques were used to prevent overfitting. The model was trained on a multi-vendor (GE, Siemens) dataset with diverse acquisition protocols and on differently reconstructed series of the same case. In this way, the model learnt to generalize across varying image acquisition parameters, which is well reflected by the high sensitivity when evaluated on a held-out internal test set with diverse acquisition protocols and on the external validation set, coming from a different medical center. The ability of the model architecture to generalize to images with diverse imaging parameters is a desired property for real-world clinical applications. Another important aspect of deep learning applied to medical image analysis is explainability, as the "black box" nature of these models hinders the widespread adoption of these methods by clinicians. The production of parsimonious models (i.e. models for which clinicians clearly comprehend and agree with how the result was reached to support a clinical decision) is instrumental in building confidence and acceptance [31, 32]. In the field of AI there are two main explainability approaches: post-hoc systems, which provide an explanation for a single specific decision and make it possible to obtain it on demand, and ante-hoc systems (also known as "glass box"), in which the model is built to be intrinsically explainable, so that it is possible to follow each step the model takes to reach its classification decision [32-34].
Usually, (gradient) class activation maps are used to visualize the region of the scan on which the model based its classification decision [35]: thus, this explainability approach falls under the post-hoc systems category. In the present study, the use of pre-selected and segmented slices containing lung abnormalities can be seen as an ante-hoc explainability system, as the model is specifically looking at the abnormal areas of the lung, segmented by the lung abnormalities segmentation model. In this way the end user can verify on which slices and on which areas of the slice (i.e., the abnormalities) the model based its classification decision. This can be easily confirmed by the clinicians looking at the 48 consecutive slices, along with the lung and lung abnormality segmentation masks used by the model for the classification, reported in the automatic clinical summary report (Fig. S1 and S2 in the Supplementary Information). Indeed, our model selected only those slices containing abnormalities in the lungs, while most deep learning models published in the literature [7, 36] are still based on manual segmentation of the CT scans and use as input all the slices containing lungs, or the whole 3D lung volume when automatic segmentation is implemented. Moreover, in previous studies the identification of the regions of the slice used by the model to make its classification decision is an output of the model, helping with interpretability. In our model the identification of the abnormalities in the lungs, linked to the different kinds of pneumonia, is done a priori, removing irrelevant information (e.g., other pathological presentations in the lung). An additional advantage of our approach is the possibility to select up-front the scans for the model to process. If the selected slices do not present any abnormalities, the model will not process the image, saving time and computational power. This was demonstrated in the external validation dataset: the "No infection" cohort (n = 76) of the COVID-CT-MD dataset also includes healthy patients, and our segmentation model correctly identified all the slices without abnormalities, so that the corresponding scans were not processed by the model (19 out of 76 cases). Furthermore, the pre-selection of the slices to be evaluated by the model allows a reduction of the computational burden, which was also pursued in this study by using the Inception architecture. Indeed, the use of the Inception architecture, compared to other approaches based on ResNet or ResNext, reduces the computational burden of the model while maintaining equivalent performance [37]. This approach can allow shallow networks to achieve results comparable to their deeper and more complex counterparts with shorter training times, enabling good classification performance even when using limited hardware [38]. The computation time (57 s per scan), which can be seen as an indication of the computational burden of the model, was shorter than that of alternatives reported in the literature. Moreover, compared to other studies that used the Inception architecture for similar classification tasks (see Table 3), our network showed comparable performance [39, 40] and was validated on an external testing set. This validation step is very important to verify the generalizability of the model to patients other than those used for model development (i.e., training and testing). Considering the limitations of this study, a relevant point related to the external validation test set is the presence of only CAP cases for the Influenza/CAP class.
This could lead to a misestimation of the model performance for this classification task. However, influenza cases were present in the internal validation and testing cohorts, and the performance of the model was tested there. An additional external validation dataset with direct clinician assessment of the source of pneumonia would strengthen the generalizability and add credibility to our approach. The further distinction between bacterial and viral (non-COVID) pneumonia would represent an additional step forward, allowing the clear identification of the best therapeutic treatment for each patient. This can also result in better therapeutic management, regarding for example the administration of antibiotics. The misuse and abuse of antibiotics is a cause of great concern in the research and clinical communities. The emergence of antimicrobial resistance (AMR) is regarded as one of the top 10 global public health threats for the near future [41]. The timely identification of patients with pneumonias that do not require antibiotics can inform better therapy decisions and procedures, contributing to easing the burden of healthcare-associated infections (HAI) from resistant strains of bacteria [42]. Looking at the dataset used for this study, the provenance of all scans from scanners of only two vendors might somewhat limit the generalizability of our approach, even though the images were acquired with scanners from two of the most widespread manufacturers on the market. Adding more data from different vendors and from different acquisition and reconstruction settings might improve the model's performance. Ideally, these kinds of clinical decision support tools need to be continuously updated with new and heterogeneous data to attain accuracy, specificity and sensitivity comparable to the latest diagnostic and therapeutic state of the art, for example via distributed learning [43, 44]. To verify the real clinical utility of the proposed tool, a prospective clinical validation study should be carried out comparing the performance and time to diagnosis of the AI tool with the current standard of care. Moreover, the clinical use of this tool might need to be updated and modified according to the development of the COVID-19 pandemic. We can expect that pneumonia from COVID-19 infection will become endemic and recurrent in the future. Our approach could be adapted to spot undiagnosed cases or to provide a second, independent verification of the occurrence of the disease, even beyond the emergency phase of this pandemic. COVID-19 associated lung disease can mimic other viral lung diseases such as (para-)influenza or CAP, which may result in misdiagnosis and delayed or improper treatment. In this context, the development of new diagnostic tools based on AI could become critical for deployment in daily practice in the near future. This approach could also be exploited for other types of pulmonary disease, by fine-tuning the abnormalities segmentation model to recognize and select only the slices containing the abnormalities relevant to the investigated disease. To reach this goal, a close collaboration between clinicians and data scientists is essential and will also promote the future application of these decision support tools in the clinic.
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests:
- Akshayaa Vaidyanathan, Fadila Zerka, Flore Belmans, Ingrid Van Peufflik and Mariaelena Occhipinti are salaried employees of Radiomics (Oncoradiomics SA).
- Dr Julien Guiot reports, within and outside the submitted work, research agreements from Radiomics (Oncoradiomics SA). He is in the permanent SAB of Radiomics (Oncoradiomics SA) for the SALMON trial without any specific consultancy fee for this work. He is co-inventor of one issued patent on radiomics licensed to Radiomics (Oncoradiomics SA). He confirms that none of the above entities or funding was involved in the preparation of this work.
- Dr Philippe Lambin reports, within and outside the submitted work, grants/sponsored research agreements from Radiomics (Oncoradiomics SA), ptTheragnostic/DNAmito and Health Innovation Ventures. He received an advisor/presenter fee and/or reimbursement of travel costs/consultancy fee and/or in-kind manpower contribution from Radiomics (Oncoradiomics SA), BHV, Merck, Varian, Elekta, ptTheragnostic, BMS and Convert pharmaceuticals. Dr Lambin has minority shares in the company Radiomics (Oncoradiomics SA), Convert pharmaceuticals, Comunicare Solutions and LivingMed Biotech; he is co-inventor of two issued patents with royalties on radiomics (PCT/NL2014/050248, PCT/NL2014/050728) licensed to Radiomics (Oncoradiomics SA), one issued patent on mtDNA (PCT/EP2014/059089) licensed to ptTheragnostic/DNAmito, one non-issued patent on LSRT (PCT/ P126537PC00) licensed to Varian Medical, three non-patented inventions (software) licensed to ptTheragnostic/DNAmito, Radiomics (Oncoradiomics SA) and Health Innovation Ventures, and three non-issued, non-licensed patents on Deep & handcrafted Radiomics (US P125078US00, PCT/NL/2020/050794, n° N2028271). He confirms that none of the above entities or funding was involved in the preparation of this paper.
The rest of the co-authors have no known competing financial interests or personal relationships to declare.

This model architecture consists of a 3D U-Net (1) with residual blocks (2) in the encoder part of the network. Publicly available data from the cancer imaging archive (3) was used to train and validate the model. The specific dataset contains CT scans of 422 confirmed non-small cell lung cancer cases, along with manual segmentations of the left and right lungs. The segmentations were performed by an experienced radiologist and were used as a reference standard. The data was randomly partitioned into a training set (n = 322), a tuning set (n = 50) and a test set (n = 50). In order to generate homogeneous CT volumes as input for the model, the following pre-processing steps were performed. All the volumes were resized to 160 x 160 x 448 along the x, y and z axes, and image intensities were clipped at a window width of 1500 HU and a window level of -600 HU (i.e., standard lung CT window level settings). The following data augmentations were performed to avoid overfitting (4) on the training dataset:
The model was trained with the pre-processed volumes and their corresponding reference labels, using the Jaccard loss (5) as an objective function. Here, the loss is calculated in a mini-batch of two images per iteration. The network was trained for 10 epochs and at the end of each epoch the Jaccard loss was evaluated on the tuning set. The 2D lung segmentation model architecture is based on a 2D Feature Pyramid Network (6) adapted with ResNext blocks (7) in the encoder.
The model was trained and validated on the following datasets: 1) a publicly available dataset with 888 CT scans and the corresponding reference annotations for the lungs, available from the LUNA16 challenge (8); and 2) publicly available data from the cancer imaging archive (3) containing CT scans of 422 confirmed non-small cell lung cancer cases, along with manual segmentations of the left and right lungs. The segmentations were performed by an experienced radiologist and were used as a reference standard. The network was trained with the 2D axial slices clipped at a window width of 1500 HU and a window level of -600 HU, using the Adam optimizer at an initial learning rate of 1e-5 (9). The model was trained using a customized Jaccard loss (5) as an objective function, where the loss is calculated in a mini-batch of 8 images per iteration. The predicted segmentations of each architecture (i.e., the segmentation output from both the 3D and the 2D segmentation models) were ensembled, and their intersection constitutes the final total lung segmentation, which is used for the extraction of radiomics features. The deep learning-based lung segmentation achieved a mean Dice similarity coefficient of 0.92 across the publicly available datasets, which indicates adequate precision (i.e., no significant over- or under-segmentation).

The segmentation model is based on a 2D U-Net combined with ResNext as encoder and deep supervision, and was trained on axial unenhanced chest CT scans of 199 COVID-19 patients coming from three different centres in three different countries (10). The model's performance was evaluated on an external dataset of 50 COVID-19 patients coming from several different centres in Moscow, Russia (11). All datasets are open source and freely available online.

An automatic in-house lung segmentation model (see above, Lung Segmentation) was used to crop the lung region from the CT volumes. Axial slices with no segmented lung regions were removed from the volumes. Different sets of 48 consecutive axial slices, with an overlap of 10 slices between one set and the other (extracted from the whole volume), were used to train the model. Each set contains at least one slice with lung abnormalities. Each data point containing the consecutive axial slices was pre-processed in the following ways to obtain a three-channel input to the model:
- The first channel contains slices with intensities clipped at lung window level settings (W: 1500 HU, L: -600 HU), cropped to the lungs and the abnormalities.
- The second channel contains the slices with the original intensities, cropped to the lungs and the abnormalities.
- The third channel contains slices with intensities clipped at mediastinal window level settings (W: 350 HU, L: 50 HU), cropped to the region containing the lungs. A rectangular crop was obtained with x_min = minimum x value for which lung or lung abnormality pixels are present, x_max = maximum x value for which lung or lung abnormality pixels are present, y_min = minimum y value for which lung or lung abnormality pixels are present and y_max = maximum y value for which lung or lung abnormality pixels are present.
Fig. 1 reports an example of the input for the three channels. The automatic deep learning segmentation algorithm achieved good performances (mean DSC 0.6 ± 0.1) on the external test set.
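To make the segmentation objective and the ensembling step more concrete, the following minimal NumPy sketch shows a soft Jaccard (IoU) loss and the intersection-based combination of the 3D and 2D lung masks described above. The function names, the threshold and the use of NumPy (rather than the actual deep learning framework, where the loss would be computed on the GPU during training) are assumptions for illustration only.

```python
import numpy as np

def soft_jaccard_loss(y_true, y_pred, eps=1e-7):
    """Soft Jaccard (IoU) loss for binary segmentation.

    y_true: binary reference mask; y_pred: predicted probabilities in [0, 1].
    Returns 1 - |intersection| / |union|, computed with soft (probabilistic) counts.
    """
    y_true = y_true.astype(np.float32).ravel()
    y_pred = y_pred.astype(np.float32).ravel()
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - intersection
    return 1.0 - (intersection + eps) / (union + eps)

def ensemble_lung_masks(mask_3d, mask_2d, threshold=0.5):
    """Final lung mask as the intersection of the 3D and 2D model predictions."""
    return np.logical_and(mask_3d > threshold, mask_2d > threshold)
```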
High-resolution computed tomography findings from adult patients with Influenza A (H1N1) virus-associated pneumonia
Pulmonary High-Resolution Computed Tomography (HRCT) Findings of Patients with Early-Stage Coronavirus Disease 2019 (COVID-19) in Hangzhou
Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study
Clinical Characteristics of Coronavirus Disease 2019 in China
Performance of radiologists in differentiating COVID-19 from viral pneumonia on chest CT
A survey on deep learning in medical image analysis
A Deep Learning System to Screen Novel Coronavirus Disease
Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks
Fair Allocation of Scarce Medical Resources in the Time of Covid-19
COVID-19 computed tomography scan dataset applicable in machine learning and deep learning
A systematic study of the class imbalance problem in convolutional neural networks
Comparative Interpretation of CT and Standard Radiography of the Pleura
The Kinetics Human Action Video Dataset
Going deeper with convolutions
A method for stochastic optimization. 3rd Int. Conf. Learn. Represent. ICLR 2015 - Conf. Track Proc
Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD Statement
Deep-chest: Multi-classification deep learning model for diagnosing COVID-19, pneumonia, and lung cancer chest diseases
Using Artificial Intelligence to Detect COVID-19 and Community-acquired Pneumonia Based on Pulmonary CT: Evaluation of the Diagnostic Accuracy
Deep Learning in the Detection and Diagnosis of COVID-19 Using Radiology Modalities: A Systematic Review
Application of Machine Learning in Diagnosis of COVID-19 Through X-Ray and CT Images: A Scoping Review
Texture feature-based machine learning classifier could assist in the diagnosis of COVID-19
Automatic distinction between COVID-19 and common pneumonia using multi-scale convolutional neural network on chest CT scans
A fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis
Decoding COVID-19 pneumonia: comparison of deep learning and radiomics CT image signatures
CT radiomics facilitates more accurate diagnosis of COVID-19 pneumonia: compared with CO-RADS
Clinically Applicable AI System for Accurate Diagnosis, Quantitative Measurements, and Prognosis of COVID-19 Pneumonia Using Computed Tomography
Uzun Ozsahin D. Review on Diagnosis of COVID-19 from Chest CT Images Using Artificial Intelligence
An Overview of Overfitting and its Solutions
Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping
Explainable Deep Learning Models in Medical Image Analysis
Causability and explainability of artificial intelligence in medicine
What do we need to build explainable AI systems for the medical domain? ArXiv
Development and evaluation of an artificial intelligence system for COVID-19 diagnosis
A deep learning algorithm using CT images to screen for Corona virus disease (COVID-19)
Benchmark Analysis of Representative Deep Neural Network Architectures
Comparing different deep learning architectures for classification of chest radiographs
Automated detection of COVID-19 using ensemble of transfer learning with deep convolutional neural network based on CT scans
Using X-ray images and deep learning for automated detection of coronavirus disease
Strategies to Prevent Healthcare-Associated Infections: A Narrative Overview
Blockchain for Privacy Preserving and Trustworthy Distributed Machine Learning in Multicentric Medical Imaging (C-DistriM)
Systematic Review of Privacy-Preserving Distributed Machine Learning From Federated Databases in Health Care
3D U-net: Learning dense volumetric segmentation from sparse annotation
Deep residual learning for image recognition
The public cancer radiology imaging collections of The Cancer Imaging Archive
Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping
Optimizing the Dice Score and Jaccard Index for Medical Image Segmentation: Theory & Practice
Feature pyramid networks for object detection
Aggregated residual transformations for deep neural networks
Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: the LUNA16 challenge
Adam: A method for stochastic optimization
CT Images in Covid-19 [Data set]
Chest CT Scans With COVID-19 Related Findings Dataset. (2020) Available at

The authors thank Fabio Bottari, PhD, of Radiomics for providing medical writing support in accordance with Good Publication Practice (GPP3) guidelines.