Title: Detecting SARS-CoV-2 From Chest X-Ray Using Artificial Intelligence
Date: 2021-02-23 | Journal: IEEE Access | DOI: 10.1109/access.2021.3061621

Abstract: Chest radiographs (X-rays) combined with deep convolutional neural network (CNN) methods have been demonstrated to detect and diagnose the onset of COVID-19, the disease caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). However, questions remain regarding the accuracy of those methods, as they are often challenged by limited datasets, performance legitimacy on imbalanced data, and results typically reported without proper confidence intervals. To address these issues, in this study we propose and test six modified deep learning models, including VGG16, InceptionResNetV2, ResNet50, MobileNetV2, ResNet101, and VGG19, to detect SARS-CoV-2 infection from chest X-ray images. Results are evaluated in terms of accuracy, precision, recall, and f-score using a small and balanced dataset (Study One) and a larger and imbalanced dataset (Study Two). With 95% confidence intervals, VGG16 and MobileNetV2 show that, on both datasets, the models could identify patients with COVID-19 symptoms with an accuracy of up to 100%. We also present a pilot test of VGG16 models on a multi-class dataset, showing promising results by achieving 91% accuracy in detecting COVID-19, normal, and pneumonia patients. Furthermore, we demonstrate that the models that performed poorly in Study One (ResNet50 and ResNet101) had their accuracy rise from 70% to 93% once trained with the comparatively larger dataset of Study Two. Meanwhile, InceptionResNetV2 and VGG19 demonstrated an accuracy of 97% on both datasets, which supports the effectiveness of our proposed methods, ultimately presenting a reasonable and accessible alternative to identify patients with COVID-19.
nose, sore throat, and diarrhea have also been associated with the disease [2], [3]. Several methods can be followed to detect SARS-CoV-2 infection [4], including:
• Real-time reverse transcription polymerase chain reaction (RT-PCR)-based methods
• Isothermal nucleic acid amplification-based methods
• Microarray-based methods.
Health authorities in most countries have chosen to adopt the RT-PCR method, as it is regarded as the gold standard in diagnosing viral and bacterial infections at the molecular level [5]. However, due to the rapidly increasing number of new cases and limited healthcare infrastructure, rapid detection or mass testing is required to lower the curve of infection. Recent studies claimed that chest Computed Tomography (CT) has the capability to detect the disease promptly. Therefore, in China, to deal with the many new cases, CT scans were used for the initial screening of patients with COVID-19 symptoms [6]-[9]. Similarly, chest radiograph (X-ray) image-based diagnosis may be an even more attractive and readily available method for detecting the onset of the disease due to its low cost and fast image acquisition procedure.

In our study, we investigate recent literature on the topic and take the opportunity to present an effective deep learning-based screening method to detect patients with COVID-19 from chest X-ray images. Developing deep learning models using small image datasets often results in the incorrect identification of regions of interest in those images, an issue not often addressed in the existing literature. Therefore, in the present work, we analyzed our models' performance layer by layer and chose to select only the best-performing ones, based on the correct identification of the infectious regions present on the X-ray images. Also, previous works often do not demonstrate how their proposed models perform on imbalanced datasets, which is often challenging.
Here, we diversify the analysis and consider small, imbalanced, and large datasets while presenting a comprehensive description of our results with statistical measures, including 95% confidence intervals, p-values, and t-values. A summary of our technical contributions is presented below:
• Modification and evaluation of six different deep CNN models (VGG16, InceptionResNetV2, ResNet50, MobileNetV2, ResNet101, VGG19) for the detection of COVID-19 patients using X-ray image data on both balanced and imbalanced datasets; and
• Verification of the possibility of locating affected regions on chest X-rays using heatmaps, including a cross-check with a medical doctor's opinion.

In the recent past, the adoption of Artificial Intelligence (AI) in the field of infectious disease diagnosis has gained notable prominence, which led to the investigation of its potential in the fight against the novel coronavirus [10]-[12]. Current AI-related research efforts on COVID-19 detection using chest CT and X-ray images are discussed below to provide a brief insight into the topic and highlight our motivations to research it further.

To date, several efforts in detecting COVID-19 from CT images have been reported. A recent study by Chua et al. (2020) suggested that the pathological pathway observed from the pneumonic injury leading to respiratory death can be detected early via chest CT, especially when the patient is scanned two or more days after the development of symptoms [13]. Related studies proposed that deep learning techniques could be beneficial for identifying COVID-19 disease from chest CT [12], [14]. For instance, Shi et al. (2020) introduced a machine learning-based method for COVID-19 screening from an online COVID-19 CT dataset [15]. Similarly, Gozes et al. (2020) developed an automated system using artificial intelligence to monitor and detect patients from chest CT [16]. Chua et al.
(2020) focused on the role of chest CT in the detection and management of COVID-19 disease in a high-incidence region (United Kingdom) [13]. Ai et al. (2020) also supported CT-based diagnosis as an efficient approach compared to RT-PCR testing for COVID-19 patient detection, with a 97% sensitivity [17], [18].

Due to data scarcity, most preliminary studies considered minimal datasets [19]-[21]. For example, Chen et al. (2020) used a UNet++ deep learning model and identified 51 COVID-19 patients with a 98.5% accuracy [19]. However, the authors did not mention the number of healthy patients used in the study. Ardakani et al. (2020) used 194 CT images (108 COVID-19 and 86 other patients), implemented ten deep learning methods to observe COVID-19 related infections, and acquired 99.02% accuracy [20]. Moreover, another study considered 453 CT images of confirmed COVID-19 cases, of which 217 images were used as the training set, and obtained 73.1% accuracy using an Inception-based model. The authors, however, did not explain the model network and did not mark the regions of interest of the infections [22]. Similarly, Zheng et al. (2020) introduced a deep learning-based model with 90% accuracy to screen patients using 499 3D CT images [21]. Despite promising results, very high performance on small datasets often raises questions about a model's practical accuracy and reliability. Therefore, a better way to represent model accuracy is to present it with an associated confidence interval [23]. However, none of the work herein referenced expressed its results with confidence intervals, which should be addressed in future studies.

As larger datasets become available, deep-learning-based studies taking advantage of their potential have been proposed to detect and diagnose COVID-19. Xu et al. (2020) investigated a dataset of 618 medical images to detect COVID-19 patients and acquired 86.7% accuracy using ResNet23 [24].
Another group utilized an even larger dataset (a combination of 1296 COVID-19 and 3060 non-COVID-19 patient CT images) and achieved 96% accuracy using ResNet50 [25]. With larger datasets, it is no surprise that deep learning-based models predict patients with COVID-19 symptoms with accuracies ranging from 85% to 96%. However, obtaining a chest CT scan is a notably time-consuming, costly, and complex procedure. Despite allowing for comparatively better image quality, its associated challenges inspired many researchers to propose X-ray-based COVID-19 screening methods as a reliable alternative [26], [27].

Preliminary studies used transfer learning techniques to evaluate COVID-19 and pneumonia cases in the early stages of the COVID-19 pandemic [28]-[31]. However, data insufficiency also hinders the ability of such proposed models to provide reliable COVID-19 screening tools based on chest X-ray [12], [32]-[34]. Sethy & Behera (2020) considered only 50 images and used ResNet50 for COVID-19 patient classification, ultimately reaching 95% accuracy [33]. Also, Narin et al. (2020) used 100 images and achieved 86% accuracy using InceptionResNetV2 [12]. As noted, these studies use relatively small datasets, which does not guarantee that their proposed models would perform equally well on larger datasets. Model overfitting is another concern when large CNN-based networks are trained with small datasets. In view of these issues, recent studies proposed model training with larger datasets and reported better performance compared to smaller ones [35]-[38]. Chandra et al. (2020) developed an automatic COVID-19 screening system to detect infected patients using 2088 (696 normal, 696 pneumonia, and 696 COVID-19) and 258 (86 images of each category) chest X-ray images, and achieved 98% accuracy [39]. Sekeroglu et al.
(2020) developed a deep learning-based method to detect COVID-19 using publicly available X-ray images (1583 healthy, 4292 pneumonia, and 225 confirmed COVID-19), which involved the training of deep learning and machine learning classifiers [40]. Pandit et al. (2020) explored a pre-trained VGG-16 using 1428 chest X-rays with a mix of confirmed COVID-19, common bacterial pneumonia, and healthy cases (no infection). Their results showed accuracies of 96% and 92.5% in the two- and three-output-class cases, respectively [41]. Ghosal & Tucker (2020) used 5941 chest X-ray images and obtained 92.9% accuracy [11]. Another work proposed a modified VGG16 model and achieved 99% accuracy with a dataset of 6505 images; however, it used fairly balanced data with a 1:1.17 ratio (3003 COVID-19 and 3520 other patients). Khan et al. (2020), in contrast, worked with a more imbalanced dataset [38]. Partially as an effect of that imbalance, their reported accuracy was comparatively low, reaching 89.6% [38]. On imbalanced datasets, there is a higher chance that the model becomes biased toward the majority classes, which might affect its overall performance.

We propose three separate studies, wherein three distinct datasets were used, as detailed below:
1) Study One - smaller, balanced dataset: chest X-ray images of 25 patients with COVID-19 symptoms and 25 images of patients with diagnosed pneumonia, obtained from the open-source repository shared by Dr. Joseph Cohen [43].
2) Study Two - larger, imbalanced dataset: chest X-ray images of 262 patients with COVID-19 symptoms and 1583 images of patients with diagnosed pneumonia, obtained from the Kaggle COVID-19 chest X-ray dataset [44].
3) Study Three - multiclass dataset: chest X-ray images of 219 patients with COVID-19 symptoms, 1345 images of patients with diagnosed pneumonia, and 1073 images of normal patients, also obtained from the Kaggle COVID-19 chest X-ray dataset [45].
Figure 1 presents a set of representative chest X-ray images of both COVID-19 and pneumonia patients from the aforementioned datasets.
Table 1 details the overall assignment of data for training and testing of each investigated CNN model. In both studies, six different deep learning approaches were investigated: VGG16 [46], InceptionResNetV2 [47], ResNet50 [48], MobileNetV2 [49], ResNet101 [50], and VGG19 [46]. A pre-trained network is a network previously trained on a larger dataset, which in most cases suffices for it to learn a general feature hierarchy; as a result, it works more effectively on small datasets. A prime example is the VGG16 architecture, developed by Simonyan and Zisserman (2014) [51]. Figure 2 shows a sample architecture of the pre-trained model procedure. All models implemented in this study are available pre-packaged within Keras [51]. Figure 3 demonstrates a fine-tuning sequence on the VGG16 network. The modified architecture follows the steps below:
1) Firstly, the models were initiated with a pre-trained network without a fully connected (FC) layer.
2) Then, an entirely new FC head, consisting of a pooling layer and a ''softmax'' activation function, was appended on top of the VGG16 model.
3) Finally, the convolution weights were frozen during the training phase so that only the FC layer is trained during the experiment.
The same procedure was followed for all other deep learning techniques. In this experiment, the additional modification of the model for all CNN architectures was constructed as follows. As is known, most pre-trained models contain multiple layers which are associated with different parameters (i.e., number of filters, kernel size, number of hidden layers, number of neurons) [52]. However, manually tuning those parameters is considerably time-consuming [53], [54].
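The three-step modification above can be sketched in Keras (the models used in the paper are available pre-packaged within Keras). This is a minimal sketch, not the paper's exact code: it assumes 224×224 RGB inputs and two output classes, and uses `weights=None` so it runs offline, whereas the pre-trained setting described in the text would load `weights="imagenet"`.

```python
# Minimal sketch of the fine-tuning recipe: a frozen VGG16 convolutional
# base (step 1 and step 3) with a new pooling + softmax head (step 2).
# Assumptions: 224x224 RGB inputs, two classes (COVID-19 / pneumonia).
# weights=None keeps this runnable offline; use weights="imagenet" to
# reproduce the pre-trained setting described in the text.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models, optimizers

base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # step 3: freeze the convolution weights

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),        # step 2: pooling layer
    layers.Dense(2, activation="softmax"),  # step 2: softmax classifier
])

# Adam with the learning rate the grid search settled on (0.001)
model.compile(optimizer=optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Because the base is frozen, `model.fit` would update only the new head's weights, matching step 3.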
With that in mind, in our models we optimized three parameters (inspired by [57], [58]): batch size, number of epochs, and learning rate. Batch size characterizes the number of samples to work through before updating the internal model parameters [55]; the number of epochs defines how many times the learning algorithm will work through the entire dataset [55]; and the learning rate is a hyper-parameter that controls how much to change the model each time the model weights are updated [56]. We used the grid search method [59], which is commonly used for parameter tuning. Initially, we selected the following candidate values:
Batch size = [4, 5, 8, 10]
Number of epochs = [10, 20, 30, 40]
Learning rate = [0.001, 0.01, 0.1]
For Study One, using the grid search method, we achieved the best results with the following:
Learning rate = 0.001
Similarly, for Study Two, the best results were achieved with:
Batch size = 50
Number of epochs = 50
Learning rate = 0.001
Finally, during Study Three, the best performance was achieved with:
Batch size = 50
Number of epochs = 100
Learning rate = 0.001
We used the adaptive learning rate optimization algorithm (Adam) as the optimizer for all models due to its robust performance on binary image classification [60], [61]. As commonly adopted in data mining techniques, this study used 80% of the data for training, whereas the remaining 20% was used for testing [62]-[64]. Each study was conducted twice, and the final result was represented as the average of those two experiment outcomes, as suggested by [65]. Performance results were presented as model accuracy, precision, recall, and f-score [66]:
accuracy = (t_p + t_n) / (t_p + t_n + f_p + f_n)   (1)
precision = t_p / (t_p + f_p)   (2)
recall = t_p / (t_p + f_n)   (3)
f-score = 2 × (precision × recall) / (precision + recall)   (4)
where,
• True Positive (t_p) = COVID-19 patient classified as patient
• False Positive (f_p) = healthy person classified as patient
• True Negative (t_n) = healthy person classified as healthy
• False Negative (f_n) = COVID-19 patient classified as healthy.
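The grid search described above is an exhaustive loop over the listed candidate values. The sketch below is illustrative only: `train_and_score` is a hypothetical stand-in for fitting one model with a given configuration and returning its validation accuracy; it is not part of the paper's code.

```python
import itertools

# Candidate values as listed in the text
batch_sizes = [4, 5, 8, 10]
epoch_counts = [10, 20, 30, 40]
learning_rates = [0.001, 0.01, 0.1]

def train_and_score(batch_size, epochs, learning_rate):
    """Hypothetical stand-in: train one model with these hyper-parameters
    and return its validation accuracy."""
    # A dummy objective so the sketch runs end to end; the real version
    # would call model.fit(...) and evaluate on held-out data.
    return 1.0 / (learning_rate * batch_size * epochs)

# Exhaustively evaluate every combination and keep the best one
best_score, best_params = float("-inf"), None
for bs, ep, lr in itertools.product(batch_sizes, epoch_counts, learning_rates):
    score = train_and_score(bs, ep, lr)
    if score > best_score:
        best_score, best_params = score, (bs, ep, lr)

print(best_params)  # the winning (batch size, epochs, learning rate)
```

Grid search is exhaustive, so its cost grows multiplicatively with the number of candidate values per parameter (here 4 × 4 × 3 = 48 training runs), which is why the candidate lists are kept short.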
The overall model performance for all CNN approaches was measured both on the training (40 images) and test (10 images) sets using equations 1, 2, 3, and 4. Table 2 presents the results for the training set. In this case, VGG16 and MobileNetV2 outperformed all other models in terms of accuracy, precision, recall, and f-score. In contrast, the ResNet50 model showed the worst performance across all measures. Confusion matrices were used to better visualize the overall prediction performance. The test set contains 10 samples (5 COVID-19 and 5 other patients). In accordance with the performance results previously presented, Figure 4 shows that the VGG16, InceptionResNetV2, and MobileNetV2 models correctly classified all patients. In contrast, ResNet50 and ResNet101 incorrectly classified 3 non-COVID-19 patients as COVID-19 patients, and VGG19 classified 2 non-COVID-19 patients as COVID-19 patients while also classifying 1 COVID-19 patient as non-COVID-19.

2) MODEL ACCURACY
Figure 5 shows the overall training and validation accuracy during each epoch for all models. VGG16 and MobileNetV2 demonstrated higher accuracy at epochs 25 to 30, while VGG19, ResNet50, and ResNet101 displayed lower accuracy, which fluctuated sporadically over the first 10 epochs. Figure 6 shows that both training loss and validation loss were reduced following each epoch for VGG16, InceptionResNetV2, and MobileNetV2. In contrast, for VGG19, both measures are scattered over time, which is indicative of poor performance.

For Study Two, on the training set, most model accuracies were measured above 90%. Table 4 shows that 100% accuracy, precision, recall, and f-score were achieved using MobileNetV2. Among all other models, ResNet50 showed the worst performance across all measures. Figure 7 shows that the performance of most of the models is satisfactory on the test set.
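Equations 1-4 reduce to simple ratios over the four confusion-matrix counts defined earlier. A pure-Python sketch, evaluated on hypothetical counts for a 10-image test set (the counts are made up for illustration, not the paper's results):

```python
def metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, and f-score (equations 1-4) from
    confusion-matrix counts: tp = COVID-19 classified as COVID-19,
    fp = healthy classified as COVID-19, tn = healthy classified as
    healthy, fn = COVID-19 classified as healthy."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_score

# Hypothetical counts for a 10-image test set (illustrative only):
# 4 true positives, 1 false positive, 4 true negatives, 1 false negative.
acc, prec, rec, f1 = metrics(tp=4, fp=1, tn=4, fn=1)
print(acc, prec, rec, f1)
```

With these counts all four measures come out to 0.8; a perfect run like the VGG16 test result (tp=5, fp=0, tn=5, fn=0) would yield 1.0 across the board.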
In Study Two, classification accuracy for ResNet50 and ResNet101 is significantly better compared to Study One, possibly as an effect of the models being trained with more data and more epochs. In general, MobileNetV2 performed best among all the models and misclassified only 2 images out of 369, while ResNet50 showed lower performance and misclassified 25 images out of 369. Figure 8 suggests that the overall training and validation accuracy were steadier during Study Two than during Study One. The performance of ResNet50 and ResNet101 significantly improved once trained with more data (1845 images) and more epochs (50 epochs). Figure 9 provides evidence that both training and validation losses were minimized following each epoch for all models, potentially as an effect of the increased batch size, number of epochs, and amount of data.

As a means of highlighting the potential of our proposed models with more complex classifications, we executed a small-scale pilot study to assess the performance of the VGG16 model on a multi-class dataset. The performance outcomes for the train and test runs are presented in Table 6. The accuracy remained above 90% on both runs, which suggests a notably high performance of our model on either binary or multi-class datasets.

Table 7 presents 95% confidence intervals for model accuracy on the test sets for Studies One and Two. For instance, in Study One, the average accuracies for VGG16 and MobileNetV2 were found to be 100%; however, the Wilson score and Bayesian intervals show that the estimated accuracies lie between 72.2% and 100% and between 78.3% and 100%, respectively. On the other hand, Study Two reported relatively narrower interval ranges. A paired t-test was conducted to compare model accuracies on both studies, as shown in Table 8. There was no significant difference identified between the scores for Study One (M = 84.50, SD = 15.922) and Study Two (M = 97.39, SD = 2.38); t(5) = -2.251, p = .074.
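The Wilson score interval quoted for Study One (a 100% observed accuracy on the 10-image test set yielding roughly 72.2%-100%) can be reproduced directly from the closed-form expression, and the paired t-statistic is a one-line ratio over the per-model accuracy differences. A pure-Python sketch of both (the sample values fed to `paired_t` below are made up for illustration):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a proportion (z = 1.96)."""
    p = successes / n
    z2 = z * z
    center = (p + z2 / (2 * n)) / (1 + z2 / n)
    margin = (z / (1 + z2 / n)) * math.sqrt(p * (1 - p) / n + z2 / (4 * n * n))
    return max(0.0, center - margin), min(1.0, center + margin)

def paired_t(a, b):
    """Paired t-statistic for two matched samples of equal length:
    mean of the differences over its standard error."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)

# 10/10 correct on Study One's test set -> lower bound of about 72.2%
lo, hi = wilson_interval(10, 10)
print(round(lo, 3))  # 0.722
```

The wide interval despite a perfect score is exactly why the text argues that accuracies on small test sets should be reported with confidence intervals.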
These results suggest that model accuracy is competent on both datasets, with no statistically significant difference (p > 0.05). As a means of comparing our results with those available in the literature, Table 9 contrasts the accuracies of our three best-performing CNN models on small datasets as part of Study One. It is relevant to emphasize that none of the referenced studies presents its results as confidence intervals, which hinders a direct comparison but still allows for a higher-level assessment of the reported performance measures. Using 50 chest X-ray images, we achieved accuracy ranging from 68.1% to 99.8% using InceptionResNetV2, while Narin et al. (2020) used 100 images and obtained 86% accuracy [12]. Hemdan et al. (2020) and Sethy & Behera (2020) used small datasets of 50 images and acquired 90% and 95% accuracy using VGG19 and ResNet50 + SVM, respectively [32], [33].

Additionally, in Study Two, some of our models (VGG16, InceptionResNetV2, MobileNetV2, and VGG19) demonstrated accuracy similar to that of the referenced literature [37], [38] that also used imbalanced datasets, despite considering a more highly imbalanced dataset (Table 10). For the imbalanced dataset, we used 262 COVID-19 and 1583 non-COVID-19 patients' (1:6.04) chest X-ray images. Apostolopoulos and Mpesiana (2020) used 1428 chest X-ray images where the data ratio was 1:5.4 (224 COVID-19 : 1208 others) and achieved 98% accuracy [36]. Similarly, Khan et al. (2020) used 1251 chest X-ray images with a data proportion of 1:3.4 (284 COVID-19 : 967 others) and acquired 89.6% accuracy [38]. In Study Two, the best models we acquired were VGG16, VGG19, InceptionResNetV2, and MobileNetV2, with accuracies lying between 97% and nearly 100%.

Figure 10 highlights features extracted by different CNN layers of the VGG16 model applied to chest X-ray images from Study One.
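One common way to produce per-layer visualizations like those discussed here is to read out intermediate activations and average them across channels. The sketch below is an assumed approach, not the paper's exact pipeline: it uses an untrained VGG16 and a random image so it runs offline, whereas the study would use its fine-tuned weights and a real chest X-ray.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras import models

# Untrained VGG16 and a random stand-in image so the sketch runs offline;
# the study would load its fine-tuned weights and a real X-ray instead.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
layer_names = ["block1_conv1", "block4_conv3"]  # layers discussed in the text
probe = models.Model(inputs=base.input,
                     outputs=[base.get_layer(n).output for n in layer_names])

image = np.random.rand(1, 224, 224, 3).astype("float32")
activations = probe.predict(image, verbose=0)

# Channel-averaged activation map per layer: one coarse "heatmap" each,
# which could then be upsampled and overlaid on the input image.
heatmaps = [a.mean(axis=-1)[0] for a in activations]
for name, h in zip(layer_names, heatmaps):
    print(name, h.shape)
```

The spatial resolution shrinks with depth (224×224 at block1_conv1 versus 28×28 at block4_conv3), which matches the observation that early-layer features look fuzzy while deeper-layer features are coarser but more discriminative.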
For instance, in block1_conv1 and block1_pool1, the extracted features were slightly fuzzy, while in block4_conv3 and block5_pool, those features became more visible/prominent. The heatmap also demonstrates a considerable difference between COVID-19 and other patient images corresponding to each layer. For instance, as shown in Figure 11 (left), two specific regions were highlighted by the heatmap for the COVID-19 patient's X-ray image, whereas for other patients' images, the highlighted areas were haphazard and small. During the experiment, each layer plays a significant role in identifying essential features from the images in the training phase. As a result, it is also possible to see which features are learned and which play a crucial role in differentiating between the two classes. In Figure 11, the left frame represents a chest X-ray image of a COVID-19 patient, and the right one highlights the infectious regions of that same image, as spotted by the VGG16 model during Study One.

The highlighted region on the upper right shoulder, which resulted from an individual layer of the VGG16 model (Study One), can be considered an irrelevant and therefore unnecessary feature identified by the network. The following topics extend the discussion on this issue:
1) The models attained unnecessary details from the images since the dataset is small compared to the model architecture (which contains multiple CNN layers).
2) The models extracted features beyond the center of the images, which might not be essential to differentiate the COVID-19 patients from the non-COVID-19 patients.
3) The average age of COVID-19 patients in the first case study is 55.76 years. Therefore, it is possible that individual patients might have age-related illnesses (i.e., weak/damaged lungs, shoulder disorders), apart from
complications related to COVID-19, which are not necessarily considered in the doctor's notes. Interestingly, these irrelevant regions spotted by our models decreased significantly when the models were trained with a larger dataset (1845 images) and more epochs (50 epochs). For instance, Figure 12 presents the heatmap of the Conv-1 layer of MobileNetV2, acquired during Study Two. The heatmap verifies that the spotted regions are very similar to, and match closely with, the doctor's findings.

We present the following items as limitations of our study, which shall be addressed in future works that consider our choice of tools and methods:
• At the time of writing, the limited availability of data represented a challenge to confidently assessing the performance of our models. Open databases of COVID-19 patient records, especially those containing chest X-ray images, are rapidly expanding and should be considered in ongoing and future studies.
• We did not consider categorical patient data such as age, gender, body temperature, and other associated health conditions that are often available in medical datasets. More robust classification models that use those variables as inputs should be investigated as a means of achieving higher performance levels.
• We were limited to assessing the classification performance of our models against the gold standards of COVID-19 testing. However, those gold standards are themselves imperfect and often present false positives/negatives. It is imperative to ensure that the training sets of AI models like those herein presented are classified to the highest standards.
• Lastly, our study did not explore the compatibility of our proposed models with existing computer-aided diagnosis (CAD) systems. From a translational perspective, future works should explore the opportunity to bridge that gap with higher priority.
Our study proposed and assessed the performance of six different deep learning approaches (VGG16, InceptionResNetV2, ResNet50, MobileNetV2, ResNet101, and VGG19) to detect SARS-CoV-2 infection from chest X-ray images. Our findings suggest that the modified VGG16 and MobileNetV2 models can distinguish patients with COVID-19 symptoms on both balanced and imbalanced datasets with an accuracy of nearly 99%. Our model outputs were cross-checked by healthcare professionals to ensure that the results could be validated. We hope to highlight the potential of artificial-intelligence-based approaches in the fight against the current pandemic using diagnosis methods that work reliably with data that can be easily obtained, such as chest radiographs. Some of the limitations associated with our work can be addressed by conducting experiments with extensively imbalanced big data, comparing the performance of our methods with those using CT scan data and/or other deep learning approaches, and developing models with explainable artificial intelligence on a mixed dataset.

He has published in several Q1 (IEEE TII and Computer Networks) and Q2 (MDPI JSAN) journals, while also attending several IEEE conferences (GLOBECOM, ISAECT, ICAMechS, and ISGT-Asia). His research interests include machine learning, artificial intelligence, cloud networked robotics, optimal decision-making, industrial automation, and the IoT. Besides that, he is the Guest Editor for the MDPI JSAN Special Issue on ''Industrial Sensor Networks'' and an official reviewer for Robotics (MDPI). He has also been part of the TPC for several conferences, such as CCBD, IEEE GreenTech, and IEEE ICAMechS.

ZAHED SIDDIQUE received the Ph.D. degree in mechanical engineering from the Georgia Institute of Technology, in 2000.
He is currently a Professor of mechanical engineering with the School of Aerospace and Mechanical Engineering and also the Associate Dean of research and graduate studies with the Gallogly College of Engineering, The University of Oklahoma. He is the coordinator of the industry-sponsored capstone program at his school and is the advisor of OU's FSAE team. He has published more than 163 research articles in journals, conference proceedings, and book chapters. He has also conducted workshops on developing competencies to support innovation, using experiential learning, for engineering educators. His research interests include product family design, advanced materials, engineering education, the motivation of engineering students, peer-to-peer learning, flat learning environments, technology-assisted engineering education, and experiential learning. As a researcher, his interests include bridging the gap between engineering and modern medicine while investigating the applications of current manufacturing technologies in the field of Tissue Engineering and Regenerative Medicine. His latest research explores the design and fabrication of 3D-printed bioresorbable implants assisting in the regeneration of musculoskeletal tissues such as the knee meniscus and osteochondral tissue.
REFERENCES
• COVID-19 World Meter
• The epidemiology and pathogenesis of coronavirus disease (COVID-19) outbreak
• Clinical characteristics of COVID-19 patients with digestive symptoms in Hubei, China: A descriptive, cross-sectional, multicenter study
• Recent advances and perspectives of nucleic acid detection for coronavirus
• Real-time RT-PCR in COVID-19 detection: Issues affecting the results
• Clinical characteristics of 24 asymptomatic infections with COVID-19 screened among close contacts in Nanjing, China
• Performance of radiologists in differentiating COVID-19 from non-COVID-19 viral pneumonia at chest CT
• Coronavirus disease 2019 (COVID-19): Role of chest CT in diagnosis and management
• Chest CT findings in coronavirus disease-19 (COVID-19): Relationship to duration of infection
• AI-driven tools for coronavirus outbreak: Need of active learning and cross-population train/test models on multitudinal/multimodal data
• Estimating uncertainty and interpretability in deep learning for coronavirus (COVID-19) detection
• Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks
• The role of CT in case ascertainment and management of COVID-19 pneumonia in the UK: Insights from high-incidence regions
• Coronavirus (COVID-19) classification using CT images by machine learning methods
• Large-scale screening of COVID-19 from community acquired pneumonia using infection size-aware classification
• Rapid AI development cycle for the coronavirus (COVID-19) pandemic: Initial results for automated detection & patient monitoring using deep learning CT image analysis
• Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: A report of 1014 cases
• Sensitivity of chest CT for COVID-19: Comparison to RT-PCR
• Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography
• Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks
• A weakly-supervised framework for COVID-19 classification and lesion localization from chest CT
• A deep learning algorithm using CT images to screen for Corona Virus Disease (COVID-19), MedRxiv
• Machine Learning Mastery
• A deep learning system to screen novel coronavirus disease 2019 pneumonia
• Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: Evaluation of the diagnostic accuracy
• Extracting possibly representative COVID-19 biomarkers from X-ray images with deep learning approach and image data related to pulmonary diseases
• Detection of coronavirus (COVID-19) associated pneumonia based on generative adversarial networks and a fine-tuned deep transfer learning model using chest X-ray dataset
• Deep-COVID: Predicting COVID-19 from chest X-ray images using deep transfer learning
• COVID-CAPS: A capsule network-based framework for identification of COVID-19 cases from X-ray images
• COVID-19 symptoms detection based on NasNetMobile with explainable AI using various imaging modalities
• COVIDX-Net: A framework of deep learning classifiers to diagnose COVID-19 in X-ray images
• Detection of coronavirus disease (COVID-19) based on deep features and support vector machine
• Deep MLP-CNN model using mixed-data to distinguish between COVID-19 and non-COVID-19 patients
• An ensemble learning approach for brain cancer detection exploiting radiomic features
• COVID-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks
• Automated detection of COVID-19 cases using deep neural networks with X-ray images
• CoroNet: A deep neural network for detection and diagnosis of COVID-19 from chest X-ray images
• Coronavirus disease (COVID-19) detection in chest X-ray images using majority voting based classifier ensemble
• Detection of COVID-19 from chest X-ray images using convolutional neural networks
• Automatic detection of COVID-19 from chest radiographs using deep learning
• Explainable deep learning for pulmonary disease and coronavirus COVID-19 detection from X-rays
• COVID-19 image data collection: Prospective predictions are the future
• Very deep convolutional networks for large-scale image recognition
• Inception-v4, Inception-ResNet and the impact of residual connections on learning
• Extremely large minibatch SGD: Training ResNet-50 on ImageNet in 15 minutes
• MobileNetV2: Inverted residuals and linear bottlenecks
• Deep residual learning for image recognition
• Deep learning for computer vision
• Predicting parameters in deep learning
• Beyond manual tuning of hyperparameters
• Parameters optimization of deep learning models using particle swarm optimization
• What is the difference between a batch and an epoch in a neural network
• Deep learning with adaptive learning rate using Laplacian score
• A disciplined approach to neural network hyper-parameters: Part 1 - learning rate, batch size, momentum, and weight decay
• Don't decay the learning rate, increase the batch size
• Random search for hyper-parameter optimization
• The effectiveness of data augmentation in image classification using deep learning
• Computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies
• Using deep learning for image-based plant disease detection
• Data mining static code attributes to learn defect predictors
• Cost-based modeling for fraud and intrusion detection: Results from the JAM project
• Viral pneumonia screening on chest X-ray images using confidence-aware anomaly detection
• Face recognition in an unconstrained and real-time environment using novel BMC-LBPH methods incorporated with DJI vision sensor
• Evaluation of scalability and degree of fine-tuning of deep convolutional neural networks for COVID-19 screening on chest X-ray images using explainable deep-learning algorithm
• Viral pneumonia screening on chest X-ray images using confidence-aware anomaly detection