key: cord-0701143-dkqc853s
authors: Muhammad, Khan; Ullah, Hayat; Khan, Zulfiqar Ahmad; Saudagar, Abdul Khader Jilani; AlTameem, Abdullah; AlKhathami, Mohammed; Khan, Muhammad Badruddin; Abul Hasanat, Mozaherul Hoque; Mahmood Malik, Khalid; Hijji, Mohammad; Sajjad, Muhammad
title: WEENet: An Intelligent System for Diagnosing COVID-19 and Lung Cancer in IoMT Environments
date: 2022-02-02
journal: Front Oncol
DOI: 10.3389/fonc.2021.811355
sha: eee737d32416957847519e44b9aade8ebcb7e7d9
doc_id: 701143
cord_uid: dkqc853s

The coronavirus disease 2019 (COVID-19) pandemic has caused a major outbreak around the world with a severe impact on health, human lives, and the global economy. One of the crucial steps in fighting COVID-19 is the ability to detect infected patients at early stages and put them under special care. Detecting COVID-19 from radiography images using computational medical imaging methods is one of the fastest ways to diagnose patients. However, early detection with significant results is a major challenge, given the limited available medical imaging data and conflicting performance metrics. Therefore, this work aims to develop a novel deep learning-based, computationally efficient medical imaging framework for effective modeling and early diagnosis of COVID-19 from chest x-ray and computed tomography images. The proposed work presents "WEENet", which exploits an efficient convolutional neural network to extract high-level features, followed by classification mechanisms for COVID-19 diagnosis in medical image data. The performance of our method is evaluated on three benchmark medical chest x-ray and computed tomography image datasets using eight evaluation metrics, including a novel strategy of cross-corpus evaluation as well as robustness evaluation, and the results surpass state-of-the-art methods. The outcome of this work can assist epidemiologists and healthcare authorities in analyzing infected medical chest x-ray and computed tomography images, managing the COVID-19 pandemic, and bridging the early diagnosis and treatment gap for Internet of Medical Things environments.

At the beginning of December 2019, a novel infectious acute disease called coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), emerged and caused a severe impact on health, human lives, and the global economy. The disease originated in Wuhan city of China, then spread to several other countries and became a global pandemic (1). The virus is easily transmitted between people through tiny droplets produced by coughing, sneezing, and talking during close contact. An infected person usually develops symptoms after about 7 days, including high fever, continuous cough, shortness of breath, and loss of taste. According to the statistical report of the World Health Organization (WHO) (2), COVID-19 had affected around 192 countries with 199 million confirmed active cases and 4.2 million confirmed deaths as of August 4, 2021. Considering its fast spread and highly contagious nature, it is essential to diagnose COVID-19 at an early stage to prevent the outbreak by isolating infected persons, thereby minimizing the possibility of infecting healthy people. To date, the most common and convenient technique for diagnosing COVID-19 is the reverse transcription polymerase chain reaction (RT-PCR).
However, this technique has very low precision, high delay, and low sensitivity, making it less effective in preventing the spread of COVID-19 (3). Besides the RT-PCR testing system, there are several other medical imaging-based COVID-19 diagnosis methods such as computed tomography (CT) (4)(5)(6) and chest radiography (x-ray) (7, 8). Diagnosis of COVID-19 is typically associated with both the symptoms of pneumonia and medical chest x-ray tests (9, 10). Chest x-ray is the first medical imaging-based technique that plays an important role in the diagnosis and detection of COVID-19. Some attempts have been made in the literature to detect COVID-19 from medical chest x-ray images using machine learning and deep learning approaches (11, 12). For instance, Narin et al. (13) evaluated the performance of five pretrained Convolutional Neural Network (CNN)-based models for the detection of coronavirus pneumonia-infected patients using medical chest x-ray images. Ismail et al. (14) utilized deep feature extraction and fine-tuning of pretrained CNNs to classify COVID-19 and normal (healthy) chest x-ray images. Tang et al. (15) used chest x-ray images for effective screening and detection of COVID-19 cases. Furthermore, Jain et al. (16) used transfer learning for COVID-19 detection from medical chest x-ray images and compared the performance of medical imaging-based COVID-19 detection methods. More recently, several other deep learning-based approaches (17) have been presented to overcome the limitations of previous imaging-based COVID-19 detection methods. For instance, Minaee et al. (18) proposed a transfer learning strategy to improve the COVID-19 recognition rate in medical chest x-ray images. They investigated different pretrained CNN architectures on their newly prepared COVID-19 x-ray image dataset and claimed reasonable results. However, their newly created dataset is not balanced and has a smaller number of COVID-19 images compared with non-COVID-19 images. Aniello et al. (19) presented ADECO-CNN to classify infected and noninfected patients via medical CT images. They compared their CNN architecture with pretrained CNNs including VGG19, GoogleNet, and ResNet50. Yujin et al. (20) suggested a patch-based CNN approach for efficient classification and segmentation of COVID-19 chest x-ray images. They first preprocessed medical chest x-ray images and then fed them into their proposed network for infected lung area segmentation and classification. However, their attained performance is relatively low due to the small number of images in the dataset they used. Similarly, Yu-Huan et al. (21) presented a joint classification and segmentation framework called JCS for COVID-19 medical chest CT diagnosis. They trained their JCS system on their newly created COVID-19 classification and segmentation dataset. They claimed real-time and explainable diagnosis of COVID-19 in chest CT images with high efficiency in both classification and segmentation. Afshar et al. (22) proposed a deep uncertainty-aware transfer learning framework for COVID-19 detection in medical x-ray and CT images. They first extracted CNN features from chest x-ray and CT scan images and then evaluated different machine learning classifiers on these features to classify the input image as COVID or non-COVID. The current COVID-19 pandemic greatly overwhelms the health monitoring systems of even developed countries, leading to an upward trend in the number of deaths on a daily basis.
Also, the inaccessibility of healthcare systems and required medication in rural areas has increased the loss of human lives. Therefore, an intelligent AI-driven healthcare system is urgently needed to combat the COVID-19 pandemic and relieve hospitals and medical staff. The Internet of Things (23)(24)(25)(26) and the Internet of Medical Things (IoMT) (27, 28) offer powerful features (i.e., online monitoring, high-speed communication, and remote checkups) that can greatly assist a country's healthcare system against the COVID-19 pandemic (29). Also, Healthcare 5.0 with a 5G-enabled IoMT environment can effectively improve the accessibility of doctors and nurses to their patients in remote areas, enabling COVID-19 patients to manage their health based on daily recommendations from doctors. Undoubtedly, the deployment of 5G-enabled IoMT protocols can greatly enhance the performance of a smart healthcare system by connecting hospitals and patients and transmitting health-related data between both parties. However, such a smart IoMT healthcare environment demands computationally efficient yet accurate AI algorithms (including both machine learning and deep learning algorithms) (30). Most of the existing deep learning approaches use computationally complex CNN architectures that have high network bandwidth and computational requirements and cannot be employed on resource-constrained devices. Thus, the architecture of the AI algorithm (i.e., the CNN architecture) to be deployed must meet the requirements of the execution environment (the device used in the IoMT environment). To alleviate the shortcomings of previous approaches and design an energy-efficient model for an IoMT-enabled environment, we propose a computationally efficient yet accurate CNN architecture called WEENet. The proposed architecture is designed to efficiently detect COVID-19 in medical chest x-ray images while requiring limited computational resources. More precisely, the key contributions of this study are summarized as follows:
1. Deep learning-based models require a huge amount of medical imaging data to train effectively, but COVID-19 benchmarks have a relatively limited number of samples, especially for the COVID class. To increase the number of images for effective training of the proposed WEENet framework, we applied offline data augmentation techniques (such as rotation, flipping, and zooming) to the available medical chest x-ray images, which improved performance as is evident from the results.
2. WEENet is developed to detect COVID-19 in medical chest x-ray images and support the management of IoMT environments. WEENet uses the EfficientNet (31) model as a backbone for feature extraction from chest x-ray images, followed by stacked autoencoding layers that represent the features in a more abstract form before the final classification decision.
3. The performance of several deep learning-based models is evaluated using benchmark medical chest x-ray image datasets and eight evaluation metrics, including a novel strategy of cross-corpus and robustness evaluation for COVID-19 detection in chest x-ray images. Furthermore, we also compared the performance of our WEENet with other state-of-the-art (SOTA) methods, which it surpasses in terms of several evaluation metrics.
The remainder of the article is organized as follows: Section 2 covers the proposed IoMT-based WEENet framework with a discussion on the datasets. In Section 3, we discuss the experimental setup, the experimental results, and their analysis.
Finally, Section 4 concludes this paper and suggests future research directions. This section discusses the overall workflow of our WEENet framework in an IoMT environment for efficient and timely detection of COVID-19 in x-ray images over edge computing platforms. For better understanding, the proposed WEENet framework is divided into three phases: Data Acquisition, Preprocessing, and WEENet. The first phase presents the details of data collection from different sources, followed by the second phase, which performs extensive data augmentation on the data collected in the first phase to prevent underfitting/overfitting problems. The third phase contains the WEENet architecture, which is responsible for COVID-19 detection in x-ray images. The overall graphical overview of our proposed framework with all phases is given in Figure 1 and explained in the following subsections. During the pandemic, hospitals around the world produced image data related to COVID-19 (such as medical x-ray and CT images), and some of these data are publicly available for research purposes in medical imaging. However, the available COVID-19 image datasets are either not well organized or lack balance between positive and negative class samples, which often leads a network to overfit during the training process. Therefore, the research community is working to organize the available COVID-19 image data and make it usable before utilizing it for early diagnosis of COVID-19. To achieve data diversity and balance between positive and negative class samples, we actively used data augmentation approaches, which not only increase the volume of data but also significantly improve the classification performance of deep learning models, as evident from our experiments. In this research, we used three different COVID-19 image datasets, namely chest x-ray images (CXI) (18), x-ray dataset COVID-19 (XDC) (32), and the COVID-19 radiography database (CRD) (33), where each dataset contains medical chest x-ray images of positive and negative patients. To alleviate the chances of model overfitting and class bias, we performed extensive data augmentation by equalizing the number of positive and negative class samples in each of the abovementioned datasets. Considering the number of images per class in each dataset, we performed data augmentation with a different augmentation ratio for each dataset so that we could obtain balanced training data. Following this strategy, we augmented the COVID-19 images of the CXI (18) dataset with an augmentation ratio of 1:15, such that each image is reproduced in 15 different variants. Similarly, for XDC (32), we used a data augmentation ratio of 1:10 for both the positive and negative classes. For the CRD (33) dataset, we only augmented the COVID-19 class, with an augmentation ratio of 1:3, where each image is reproduced in 3 different variants. For the proposed data augmentation strategy, we analyzed different augmentation approaches and then selected the eight most suitable distinct operations applied to each image of the dataset, namely Rotation, Zoom, Width shift, Height shift, Shear, Fill mode, Flip, and Brightness, before forwarding the images to our proposed WEENet for training. The details of the augmentation operations used in our method are listed in Table 1. It can be noticed that the images in the original XDC dataset are insufficient for training a deep learning algorithm.
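As an illustration of how the offline augmentation operations listed in Table 1 could be realized in Keras (the framework used in this work), the following minimal sketch writes a fixed number of augmented variants per image to disk; the numeric parameter ranges and the helper function name are illustrative assumptions, not the authors' exact settings.

```python
import os
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array

# Augmentation operations mirroring Table 1 (Rotation, Zoom, Width/Height shift,
# Shear, Fill mode, Flip, Brightness). The ranges below are illustrative assumptions.
augmenter = ImageDataGenerator(
    rotation_range=15,
    zoom_range=0.1,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.1,
    fill_mode="nearest",
    horizontal_flip=True,
    brightness_range=(0.8, 1.2),
)

def augment_image(path, n_variants, out_dir):
    """Save n_variants augmented copies of one chest x-ray (e.g., 15 for the CXI COVID-19 class)."""
    os.makedirs(out_dir, exist_ok=True)
    image = np.expand_dims(img_to_array(load_img(path)), axis=0)   # add batch dimension
    flow = augmenter.flow(image, batch_size=1, save_to_dir=out_dir,
                          save_prefix="aug", save_format="png")
    for _ in range(n_variants):
        next(flow)   # each call writes one augmented variant to out_dir
```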
Also, the number of positive samples in the CXI dataset is considerably smaller than the number of negative samples prior to the data augmentation process. Similarly, the CRD dataset also has a large difference between the number of positive and negative samples. On the other hand, the augmented versions of the listed datasets are well balanced and rich in terms of data diversity, and thus more suitable for deep learning-based methods. Several CNN architectures that are extensively used in different domains, such as time series prediction, classification, object detection, and crowd estimation, were explored before choosing the appropriate model. These architectures include VGG16 (34), VGG19 (34), ResNet18 (35), ResNet50 (35), and ResNet101 (35), which have been used by researchers for COVID-19 detection in chest x-ray images, but each CNN model has its own pros and cons. Researchers investigate these architectures to boost their accuracy by using different scaling strategies to adjust the network depth, width, or resolution. Most of the networks are based on single-dimension scaling, which scales only one of depth, width, and resolution. Although two or three dimensions can be scaled arbitrarily, this typically yields suboptimal accuracy and efficiency. To this end, we investigate EfficientNet, which scales all the dimensions through a compound scaling technique. This network is developed by leveraging multiobjective architecture search, which optimizes both floating point operations (FLOPs) and accuracy. EfficientNet uses the search space of (36) and ACC(m) × [FLOPS(m)/T]^w as the optimization objective, where ACC(m) and FLOPS(m) represent the accuracy and FLOPs of model m, while T and w are the target FLOPs and a hyperparameter, respectively. These terms control the tradeoff between accuracy and FLOPs. The network comprises several convolutional layers, with kernels of different sizes in each layer. The input frame has three channels (R, G, B) and a size of 224 × 224 × 3. The subsequent layers are scaled down in resolution, which reduces the size of the feature maps, while the width is scaled up to increase accuracy. This ensures that important features are collected from the input frame. For example, the second layer consists of Width = 112 kernels, and the next convolution uses Width = 64 kernels. The maximum number of kernels, Depth = 2,560, is used in the last layer, where the resolution is 7 × 7 and represents the most discriminative features. At the end, we added a max pooling layer, which is followed by encoding layers and a SoftMax layer for the final classification. The proposed WEENet is thus based on the EfficientNet model followed by encoding layers. EfficientNet is used to extract important features from the input data, and its output is fed forward to stacked encoding layers. The stacked encoding layers are based on an autoencoder (37) and are used to compress the data from a high dimension into a low dimension while preserving the salient information of the input data. Autoencoders are a type of deep neural network that maps the data to itself through a process of (nonlinear) dimensionality reduction followed by dimensionality expansion. An autoencoder includes three layers: an input, a hidden, and an output layer, as shown in Figure 2. The encoder part maps the input data into a lower dimension, and the decoding layers then reconstruct it.
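To make the encoder-decoder structure of Figure 2 concrete, a minimal Keras autoencoder sketch is shown below; the layer sizes and activations are illustrative assumptions and do not reflect the exact configuration used in WEENet.

```python
from tensorflow.keras import layers, models

# Minimal autoencoder with an input, a hidden (encoding), and an output (decoding) layer.
# The 2,560/320 sizes echo the feature dimensions discussed in the text; the
# activation choices are illustrative assumptions.
inputs = layers.Input(shape=(2560,))
hidden = layers.Dense(320, activation="relu", name="encoder")(inputs)      # low-dimensional hidden state
outputs = layers.Dense(2560, activation="linear", name="decoder")(hidden)  # reconstruction of the input

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")   # trained to reproduce its own input
encoder = models.Model(inputs, hidden)              # only the encoding part is reused in WEENet
```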
Let us suppose the input data (X_n), n = 1, ..., N, where X_n ∈ R^(m×1), h_n is the low-dimensional map (hidden state) computed from X_n, and O_n is the output of the decoder. The mathematical representation of the encoding layer is shown in Eq. (1):

h_n = F(W_1 X_n + b_1)   (1)

Here, F represents the encoding function, W_1 is the weight matrix, and b_1 is the bias term. The mathematical representation of the decoding layer is shown in Eq. (2):

O_n = G(W_2 h_n + b_2)   (2)

In this equation, G is the decoding function, W_2 is the weight matrix, and b_2 is the bias term of the decoding layer. In our WEENet, we used the encoding part of the autoencoder to represent the features in a more abstract form. In these layers, the high-dimensional EfficientNet features are encoded into low-dimensional features. In the proposed model, stacked encoding layers are incorporated with the EfficientNet architecture. The output of EfficientNet is a 2,560-dimensional feature vector, which is encoded into a 1,280-dimensional feature vector; this is further encoded to 640 and finally to 320 dimensions. The proposed model is trained for 50 epochs using the SGD optimizer with a learning rate of 0.0001, and its performance is tested against SOTA methods as given in Section 3. In this section, we evaluate our WEENet on three publicly available COVID-19 datasets and compare its classification performance with other methods. We first provide the details of the experimental settings of this research study, followed by information about the datasets and metrics used for performance evaluation. Subsequently, we compare the proposed WEENet with other SOTA CNN architectures used for COVID-19 classification. Finally, we close this section by emphasizing the feasibility of our proposed WEENet framework for COVID-19 diagnosis in 5G-enabled IoMT environments. This section provides the details of the experimental settings and the execution environment used for implementing our proposed WEENet framework. The proposed method is implemented purely in Python (version 3.5) using the Visual Studio Code (VSCode) integrated development environment. The WEENet concepts are implemented using the Keras deep learning framework on an Intel Core i7 CPU equipped with an Nvidia GTX GPU with 6 GB of onboard memory. The proposed WEENet architecture is trained on three different datasets, namely CXI, XDC, and CRD, with the same configuration of hyperparameters, i.e., number of epochs, batch size, learning rate, weight decay, etc. The training and validation performance of our WEENet on the CXI, XDC, and CRD datasets is visually depicted in Figures 3-5, respectively. For experimental evaluation, we used three publicly available datasets (18, 32, 33) to validate the performance of our proposed method against other SOTA CNN architectures. These datasets contain chest x-ray images of positive and negative COVID-19 patients with corresponding labels, i.e., COVID-19 and normal. The statistical details of the abovementioned datasets are listed in Table 2. Besides these datasets, there are several other publicly available datasets commonly used for COVID-19 classification. However, most of them are either imbalanced or lack diversity, leading to poor performance. Therefore, we selected the CXI, XDC, and CRD datasets from the publicly available datasets listed in Table 3.
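For concreteness, a minimal Keras sketch of this backbone-plus-encoding-head arrangement is given below. The choice of EfficientNetB7 (whose last convolutional stage yields a 2,560-channel, 7 × 7 feature map), the ReLU activations, and the ImageNet initialization are assumptions on our part; the text above specifies only the feature dimensions, the SGD optimizer, the 0.0001 learning rate, and the 50 training epochs.

```python
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import EfficientNetB7

# Sketch of the WEENet layout: EfficientNet backbone -> max pooling -> stacked
# encoding (dimensionality-reducing) layers -> softmax classifier.
inputs = layers.Input(shape=(224, 224, 3))
x = EfficientNetB7(include_top=False, weights="imagenet")(inputs)
x = layers.GlobalMaxPooling2D()(x)               # 7x7x2560 feature map -> 2,560-dim vector
x = layers.Dense(1280, activation="relu")(x)     # encoding: 2,560 -> 1,280
x = layers.Dense(640, activation="relu")(x)      # encoding: 1,280 -> 640
x = layers.Dense(320, activation="relu")(x)      # encoding: 640 -> 320
outputs = layers.Dense(2, activation="softmax")(x)   # COVID-19 vs. normal

model = models.Model(inputs, outputs, name="WEENet")
model.compile(optimizer=optimizers.SGD(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_data, validation_data=val_data, epochs=50)
```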
The details of each dataset are given in Table 3, including the publication year, name of the dataset, number of samples in the COVID and non-COVID classes, methods of evaluation, and experimental outcomes in terms of sensitivity, specificity, and accuracy. The CXI (18) dataset is one of the most widely used datasets for COVID-19 diagnosis in the medical image analysis community. This dataset contains a total of 184 COVID-19 infected and more than 5,000 normal chest x-ray images. Clearly, the original CXI (18) dataset has imbalanced class samples, which significantly affects a model's performance during training. Considering the chances of overfitting during training, we augmented the dataset and balanced the number of images for both the COVID-19 and normal classes. The number of images per class for both the original and augmented CXI dataset is listed in Table 2. The XDC (32) dataset is created by collecting a small number of chest x-ray images of positive and negative COVID-19 patients. Overall, this dataset is too small, prior to augmentation, to train a deep learning model. The COVID-19 radiography database (CRD) (33) is a large-scale chest x-ray image dataset released in different versions. In the first release, 219 COVID-19 infected and 1,341 normal chest x-ray images were publicly shared. In the second release, the number of COVID-19 infected chest x-ray images was increased to 1,200. Following this, in the third release, the number of COVID-19 infected chest x-ray images was increased to 3,616 and that of normal chest x-ray images to 10,192. In this paper, we used the final release of the CRD (33) dataset, whose statistical details are presented in Table 2. In image classification problems, the performance of a trained CNN model is mostly evaluated through quantitative assessment using commonly used classification performance metrics. These include sensitivity, specificity, accuracy, and the receiver operating characteristic (ROC). Here, sensitivity indicates the number of correctly classified positive samples over the total number of positive samples. Similarly, specificity represents the number of correctly classified negative samples over the total number of negative samples. Accuracy is a generic classification metric that indicates the total number of correct classifications over the total number of samples. Finally, the ROC metric represents the relationship between specificity and sensitivity. The generalization of a system plays an important role especially when dealing with an uncertain computational environment, where the data under observation is semantically different from the data used for training the algorithm. Bearing this in mind, we proposed a new evaluation strategy called cross-corpus evaluation for validating the generalization and robustness of our proposed system in uncertain environments. In this new evaluation strategy, we first evaluated the performance of our method against other SOTA methods on the test sets of the same datasets used for training. In the second round of experiments, we assessed the performance of the proposed approach, compared with the investigated CNNs, on the test sets of datasets other than the training datasets, which is termed cross-corpus evaluation. The obtained quantitative results for both the same-dataset and cross-corpus evaluation strategies are presented in Tables 4, 5. It can be easily perceived that the obtained accuracy scores for cross-corpus evaluation are comparatively lower than those on the original datasets, yet the accuracy scores still indicate good generalization performance.
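As a compact illustration of these metric definitions and the cross-corpus protocol, the following sketch scores a model trained on one corpus against the test split of every other corpus; `train_model` and the `datasets` dictionary of CXI/XDC/CRD splits are hypothetical placeholders, and the metric formulas are the standard definitions rather than code from the paper.

```python
from itertools import permutations
from sklearn.metrics import confusion_matrix

def binary_metrics(y_true, y_pred):
    """Standard sensitivity/specificity/accuracy from a binary confusion matrix."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)                # correctly classified positives / all positives
    specificity = tn / (tn + fp)                # correctly classified negatives / all negatives
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # correct classifications / all samples
    return sensitivity, specificity, accuracy

def cross_corpus_evaluation(datasets, train_model):
    """Train on one corpus and test on each of the others (cross-corpus protocol)."""
    results = {}
    for source, target in permutations(datasets, 2):
        model = train_model(datasets[source]["train"])    # hypothetical training routine
        x_test, y_test = datasets[target]["test"]
        y_pred = model.predict(x_test).argmax(axis=1)     # softmax outputs -> class labels
        results[(source, target)] = binary_metrics(y_test, y_pred)
    return results
```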
Furthermore, the reported quantitative results in Tables 4, 5 verify the strong performance of our method, which obtains the highest accuracy on each dataset in both the same-dataset and cross-corpus evaluations. We also evaluated the qualitative performance of our method against SOTA methods by classifying randomly collected images from the test sets of each experimental dataset. The prediction results for randomly selected images from each dataset are shown in Figure 6, where it can be noticed that our method provides the best prediction results compared with other SOTA methods for COVID-19 classification. This section presents the comparative analysis of our proposed WEENet with other SOTA methods for COVID-19 classification on the test sets of the CXI, XDC, and CRD datasets. For comparative analysis, we evaluated the performance of our method and compared it with SOTA architectures including MobileNet (38), NASNet-Mobile (39), VGG16 (34), ResNet101 (35), ResNet50 (35), VGG19 (34), and EfficientNet (31). To investigate the performance of our method and validate the effectiveness of the proposed data augmentation strategy, we conducted experiments on both the original and the augmented datasets and compared the results with the SOTA methods. The obtained results for the original datasets are given in Table 4, where it can be perceived that our proposed WEENet outperforms all comparative CNNs on the original datasets across each evaluation metric, except for NASNet-Mobile (39), which performs better than our method in terms of TP and FP on the CXI dataset. On the other hand, the results obtained on the augmented datasets are given in Table 5, where it can be noticed that our proposed WEENet achieved the best results, outperforming the SOTA CNNs across each evaluation metric and thus showing its superiority and efficiency for COVID-19 classification in medical chest x-ray images. We also compared our WEENet architecture with other SOTA CNN-based COVID-19 classification approaches and reported the results in Table 6. The reported results reflect the dominance of our WEENet on the CRD dataset across each evaluation metric. Although our method obtained comparatively lower values for sensitivity and specificity on the CXI and XDC datasets, it still attained the best results on these datasets across the other two evaluation metrics. The best results presented in Tables 3-6 are highlighted in bold, while the runner-up scores are underlined. Furthermore, some visual results of the proposed WEENet on the test set of each dataset are given in Figure 7. Considering the requirements of 5G-enabled IoMT environments for rapid and accurate smart healthcare systems (42)(43)(44), it is essential to analyze the feasibility of a system before deploying it in the real world. A feasibility assessment protocol involves different steps to investigate the suitability of a given system for the problem under observation in various aspects, such as the robustness of the decision-making system, automation, real-time response, and deployability on edge-computing platforms. With this in mind, we conducted feasibility analysis experiments and investigated our proposed WEENet in the abovementioned aspects. Based on the quantitative results obtained in the previous section, we estimated the robustness of our WEENet by averaging the attained accuracy scores across all datasets, achieving an average accuracy of 90%.
Next, the proposed method meets automation requirements by providing a fully end-to-end deep learning system. Although our method takes relatively more time to diagnose COVID-19 in a chest x-ray image, it has limited memory storage requirements for deployment on edge devices, making it a suitable approach for early COVID-19 detection in 5G-enabled IoMT environments. The feasibility assessment findings are depicted in Figure 8. In this section, we discuss the effectiveness and reusability of our proposed WEENet framework for early detection of lung cancer in chest CT scan images of infected patients. Deep learning-based early detection of lung cancer (45) can greatly help doctors and other medical staff to eliminate cancer cells at an early stage by providing proper care and treatment to the infected patients. Considering the similarity between the image data (chest CT scan images) used for COVID-19 detection and lung cancer CT scan images (46), the proposed WEENet framework can be used for lung cancer detection by fine-tuning the architecture on lung cancer image data using a transfer learning strategy (47). For efficient retraining of the WEENet architecture, the trained weights (the knowledge already learned during training on COVID-19 image data) can be reused as the starting point. The utilization of trained weights will not only reduce the training effort (in terms of training time) but can also improve the performance of the retrained architecture for lung cancer detection. The reusability workflow of our proposed WEENet for the lung cancer detection task is depicted in Figure 9. The COVID-19 pandemic started in 2019 and has severely affected human life and the world economy, and different actions have been initiated to stop its spread and handle the pandemic efficiently. Such actions include the concept of smart lockdown, the development of new devices for temperature checking, early detection of COVID-19 using medical imaging techniques, and treatment plans for patients with different risk levels. This work supports the necessary action of early COVID-19 detection using medical chest x-ray images in a 5G-enabled IoMT environment, contributing to the management of the COVID-19 pandemic. Considering the limited available medical imaging data and the different conflicting performance metrics for early COVID-19 detection, in this work we investigated deep learning-based frameworks for effective modeling and early diagnosis of COVID-19 from medical chest x-ray images in an IoMT-enabled environment. We proposed "WEENet" for COVID-19 diagnosis using an efficient CNN architecture and evaluated its performance on three benchmark medical chest x-ray and CT image datasets using eight different evaluation metrics, including accuracy, ROC, robustness, specificity, and sensitivity. We also tested the performance of our method using a cross-corpus evaluation strategy. Our results are encouraging against SOTA methods and will support healthcare authorities in analyzing medical chest x-ray images of infected patients and assist in the management of the COVID-19 pandemic in IoMT environments. The reported results are better than those of SOTA methods, but the model size is not the best among all methods under consideration (though it is better than that of the majority of the models). This is due to some of the architectural layers, which are tuned to balance the performance metrics. More investigation is needed to further reduce the model size without affecting the performance, which is one of our future plans.
We also plan to extend this work to a multiclass problem including mild, moderate, and severe cases, as discussed in the COVIDGR dataset (48) from the University of Granada, Spain.

The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding authors.

KM, HU, and ZK contributed to the idea conceptualization, data acquisition, implementation, experimental assessment, manuscript writing, and revision. AS contributed to the data acquisition and manuscript revision. AA, MA, MK, and MHAH contributed to the acquisition and interpretation of data. KMM, MH, and MS contributed to the design of the study and revision of the manuscript.

References:
Novel Coronavirus (2019-nCoV) Pneumonia. Radiology (2020)
WHO Coronavirus (COVID-19) Dashboard (2021)
Chest CT for Typical Coronavirus Disease 2019 (COVID-19) Pneumonia: Relationship to Negative RT-PCR Testing
Clinical Features of Patients Infected With 2019 Novel Coronavirus in Wuhan
Diagnosis of COVID-19 Pneumonia Based on Graph Convolutional Network
COVID-FACT: A Fully-Automated Capsule Network-Based Framework for Identification of COVID-19 Cases From Chest CT Scans
Pneumonia of Unknown Aetiology in Wuhan, China: Potential for International Spread via Commercial Air Travel
Deep Learning-Based Decision-Tree Classifier for COVID-19 Diagnosis From Chest X-Ray Imaging
Radiological Findings From 81 Patients With COVID-19 Pneumonia in Wuhan, China: A Descriptive Study
Application of Machine Learning in Diagnosis
Weakly Supervised Deep Learning for COVID-19 Infection Detection and Classification From CT Images
Explainable AI for COVID-19 CT Classifiers: An Initial Comparison Study
Automatic Detection of Coronavirus Disease (COVID-19) Using X-Ray Images and Deep Convolutional Neural Networks
Deep Learning Approaches for COVID-19 Detection Based on Chest X-Ray Images
EDL-COVID: Ensemble Deep Learning for COVID-19 Cases Detection From Chest X-Ray Images
Deep Learning Based Detection and Analysis of COVID-19 on Chest X-Ray Images
Artificial Intelligence for COVID-19: A Systematic Review
Deep-COVID: Predicting COVID-19 From Chest X-Ray Images Using Deep Transfer Learning
COVID-19: Automatic Detection of the Novel Coronavirus Disease From CT Images Using an Optimized Convolutional Neural Network
Deep Learning COVID-19 Features on CXR Using Limited Training Data Sets
JCS: An Explainable COVID-19 Diagnosis System by Joint Classification and Segmentation
An Uncertainty-Aware Transfer Learning-Based Framework for COVID-19 Diagnosis
Blockchain-Based Federated Learning for Device Failure Detection in Industrial IoT
Computation Offloading With Deep Reinforcement Learning for Internet of Things
Big Data Privacy Preserving in Multi-Access Edge Computing for Heterogeneous Internet of Things
Redundancy Avoidance for Big Data in Data Centers: A Conventional Neural Network Approach
Efficient Security and Authentication for Edge-Based Internet of Medical Things
An Open IoHT-Based Deep Learning Framework for Online Medical Image Recognition
Smart Management of Healthcare Professionals Involved in COVID-19
Dynamic Fusion-Based Federated Learning for COVID-19 Detection
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
Federated Learning for COVID-19 Screening From Chest X-Ray Images
Exploring the Effect of Image Enhancement Techniques on COVID-19 Detection Using Chest X-Ray Images
Very Deep Convolutional Networks for Large-Scale Image Recognition
Deep Residual Learning for Image Recognition. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition
Nonlinear Principal Component Analysis Using Autoassociative Neural Networks
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv preprint
Learning Transferable Architectures for Scalable Image Recognition
SqueezeNet: AlexNet-Level Accuracy With 50x Fewer Parameters and <0.5 MB Model Size. arXiv preprint
A Deep Neural Network for Classification of Thoracic Diseases on Chest Radiography. arXiv preprint
SaYoPillow: Blockchain-Integrated Privacy-Assured IoMT Framework for Stress Management Considering Sleeping Habits
Man in the Middle Attack Mitigation in Internet of Medical Things
PMRSS: Privacy-Preserving Medical Record Searching Scheme for Intelligent Diagnosis in IoT Healthcare
Deep-Chest: Multi-Classification Deep Learning Model for Diagnosing COVID-19, Pneumonia, and Lung Cancer Chest Diseases
Lung Ultrasound in the Diagnosis of COVID-19 Pneumonia: Not Always and Not Only What Is COVID-19 "Glitters"
Transfer Learning by Cascaded Network to Identify and Classify Lung Nodules for Cancer Detection

The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through project number 959.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.