title: An IoT-enabled smart health care system for screening of COVID-19 with multi layers features fusion and selection
authors: Ahmed, Imran; Jeon, Gwanggil; Chehri, Abdellah
date: 2022-01-10
journal: Computing
DOI: 10.1007/s00607-021-00992-0

Advancement of smart medical sensors, devices, cloud computing, and health care technologies is receiving remarkable attention from academia and the health care industry. The Internet of Things (IoT) has been recognized as one of the most promising research topics in the domain of health care, particularly in medical image processing. Researchers have utilized various machine and deep learning techniques, along with artificial intelligence, for analyzing medical images. These techniques are used to detect diseases and might assist medical experts in diagnosing diseases at early stages, providing accurate, consistent, effective, and speedy results, and decreasing the mortality rate. Nowadays, Coronavirus (COVID-19) has grown into one of the most severe infections and is spreading globally. Consequently, an intelligent automated system is required as an active diagnostic option that might be used to prevent the spread of COVID-19. Thus, this work presents an IoT-enabled smart health care system for the automatic screening and classification of contagious diseases (Pneumonia, COVID-19) in Chest X-ray images. The developed system is based on two different deep learning architectures used with a multi-layer feature fusion and feature selection approach to classify X-ray images of infectious diseases. The work comprises the following steps: to enhance the diversity of the data set, data augmentation is performed, while for feature extraction, deep learning architectures, i.e., VGG-19 and Inception-V3, are used along with transfer learning. For the fusion of features extracted by the deep learning architectures, a parallel maximum covariance approach is used, and for feature selection, the multi-logistic regression controlled entropy variance approach is applied. For experimentation, a data set of Chest X-ray images is customized using various publicly available resources. The system provides an overall classification accuracy of 97%.
The Internet of Things plays an essential role in various fields of life, i.e., smart cities, smart health care, and intelligent surveillance [1-8]. The growth of Artificial Intelligence (AI) and the IoT in health care is remarkable due to their widespread applications. In addition to various image processing, machine learning, and deep learning techniques, they provide solutions that assist both service providers and patients in improving health care outcomes in several areas. Some applications of AI and IoT in the health care sector are shown in Fig. 1. These applications include virtual nursing assistants, medical diagnoses, medical data analysis, robotic surgery, telemedicine, DNA and genome sequence analysis, drug discovery, medical data security, medical risk prediction, clinical trials, emergency services, elderly patient care, and medical image analysis (radiology, brain monitoring, computed tomography). Among all these, medical image analysis is recognized as one of the most promising fields of health innovation [9].
Researchers have presented several techniques for investigating different applications in medical imaging, from image acquisition to image analysis and prognostic evaluation [10-12]. These techniques can also be contextualized in the analysis of Chest X-rays, in which significant amounts of quantitative data are extracted and can be used for many diagnostic purposes. Chest X-rays are considered one of the most basic examination tools in many medical practices [13]. They are economical and have significant clinical value in diagnosing various infectious diseases of the lungs [14], such as Pneumonia, Tuberculosis, early Lung cancer, and, nowadays, COVID-19. The novel COVID-19 was initially detected in December 2019, and since then, the world has been suffering from this terrible outbreak. This deadly disease is caused by a respiratory infectious virus that spreads quickly in the lungs, creating a critical threat to people's health. Due to its fast transmission and spread, the World Health Organization (WHO) declared it a pandemic in March 2020. According to WHO reports, the worldwide total of confirmed COVID-19 cases is more than 50 million, as depicted in Fig. 2. The medical industry is exploring innovative methods and technologies to monitor and help control the virus's spread. Typically, the examination of COVID-19 is correlated with symptoms and signs of Pneumonia and with Chest X-ray tests [15]. Chest X-rays play a requisite role in the diagnosis of the COVID-19 virus [16] and carry a substantial amount of useful information regarding a patient's health. Nevertheless, accurate analysis of that information is a challenging task for a physician. The complexity of interpreting Chest X-rays increases significantly with the overlapping structure of tissues. Such analysis is difficult if the contrast between the surrounding tissue and the lesion is low, or if the lesion region overlaps with the ribs and pulmonary blood vessels. Indeed, it is sometimes difficult even for an experienced physician to differentiate between similar lesions or detect very complex nodules. Therefore, lung disease examination in Chest X-rays can result in missed detections. In addition, the white spots associated with the virus in Chest X-rays might create hurdles for the radiologist, who might sometimes mistakenly diagnose other diseases, for example, pulmonary infections and Pneumonia, as COVID-19. Nowadays, researchers have developed various Artificial Intelligence (computer vision, machine, and deep learning) techniques for medical image analysis, e.g., the classification of Chest X-rays into infectious diseases [9, 17]. These techniques automatically diagnose the infection in Chest X-rays, which might assist medical experts or physicians, enhance the treatment and testing process, and reduce their burden. Diagnosing disease at an early stage with these systems and techniques also reduces mortality rates. Inspired by the improved performance of earlier techniques, an IoT-enabled smart health care system is presented to classify infectious diseases, e.g., COVID-19 and Pneumonia, in Chest X-rays. Two deep learning architectures, i.e., VGG-19 [18] and Inception-V3 [19], are applied along with transfer learning. As these architectures are previously trained on the ImageNet data set, the idea of multiple feature fusion and selection [20-22] is applied. In this work, we adopt a similar approach to multiple feature fusion and selection as [23], where the authors used this idea for object classification.
Applying this idea, different feature vectors are combined into a single feature vector or space; this has been used in several object classification and medical imaging applications [24, 25]. Furthermore, a feature selection process is used to remove irrelevant features from the fused vector. The primary contributions of the work are as follows:
- To present an IoT-enabled automated smart health care system for the screening of Chest X-rays into COVID-19 and Pneumonia. Two deep learning architectures, i.e., VGG-19 and Inception-V3, are used along with transfer learning.
- To customize different online available sources into a Chest X-ray data set, used for additional training and testing of the deep learning architectures.
- To apply a feature fusion technique, i.e., parallel maximum covariance, to combine the feature vectors of the deep learning architectures.
- To perform feature selection by employing the multi-logistic regression controlled entropy-variance approach.
- To analyze the system's performance by conducting experiments and comparing its results with other techniques.
The remaining work is organized as follows: the review of various methods used for Chest X-ray classification is discussed in Sect. 2. The data set customized for training and testing is illustrated in Sect. 3. The details of the method presented in this work are provided in Sect. 4. Finally, Sect. 5 presents the experimentation results, while Sect. 6 concludes the overall paper.
Researchers have done significant work in medical image analysis and have provided effective and efficient solutions for the prediction and detection of infectious diseases using Chest X-rays. Yang et al. [26] presented applications of Artificial Intelligence in the health care sector that might support medical experts in health care organizations; such applications also encourage the implementation of quarantine and help in tracking pandemic outbreaks. Artificial Intelligence techniques, e.g., [27-30], are widely utilized and practiced by different researchers to detect and classify COVID-19. Shi et al. [30] surveyed Artificial Intelligence techniques implemented for the segmentation and examination of COVID-19 Chest X-rays. Mostly in the literature, researchers used limited data sets containing CT-scan and X-ray sample images, e.g., [31, 32]. Hemdan et al. [33] used and compared seven deep learning models on a small, limited data set (25 COVID-19 and 25 healthy). Wang et al. [34] proposed a convolutional neural network based system, named COVID-Net, for the classification of X-rays into COVID-19, Pneumonia, and Normal; the authors used a total of 800 X-rays, and the reported average accuracy is 92.4%. The authors in [35] practiced the ResNet-50 paradigm for the classification of X-rays into Healthy, Bacterial, Viral, and Infected Pneumonia; the recorded outcomes are good in contrast to COVID-Net [34], with an accuracy of 96.23%. In another work, [36] used X-ray images and presented an automatic detection design applying transfer learning with neural networks for the classification of Chest X-rays. Pereira et al. [37] performed COVID-19 detection utilizing their own generated data set, including 1,144 X-rays comprising COVID-19, five types of Pneumonia, and Healthy images. Ahmed et al. [17] utilized a Faster-RCNN architecture for the classification of COVID-19 Chest X-rays.
To the best of our knowledge, researchers have utilized various techniques for the classification of Chest X-rays into different infectious diseases, but mostly with limited data sets. In this work, we present a smart health care system that combines Artificial Intelligence with IoT and performs the classification of COVID-19 Chest X-rays.
An IoT-enabled smart health care system is introduced for the classification of Chest X-rays into Normal, COVID-19, and Pneumonia images, as illustrated in Fig. 3. Chest X-ray images are obtained from different online accessible resources. Initially, the images are passed through the pre-processing stage, in which image resizing, shuffling, and normalization are done. After this, data augmentation is applied to enhance the diversity of the data set. The data is then categorized into a training and a testing set. In this work, two pre-trained deep learning architectures are used for classification, i.e., VGG-19 and Inception-V3. As these architectures were trained on the ImageNet data set, we additionally trained both architectures using the customized Chest X-ray (training) data. Next, with the help of transfer learning, the newly trained layers are combined with the previous architectures. Both deep learning architectures extract different features from the input images; we combine the feature vectors of both architectures and perform feature fusion. Finally, the robust features are selected for final classification purposes. At the output, we get Chest X-ray images classified as Normal, COVID-19, and Pneumonia. The deep learning based classification system is then tested using the testing samples, and finally, the system results are evaluated. The details of the overall system are provided in the following subsections.
To maintain consistency, the collected data set is shuffled, resized, and normalized. As both deep learning architectures take a pre-defined input image, e.g., 224 × 224 pixels, each image in the collected data set is resized to maintain correspondence. Moreover, image normalization is performed to reduce variation in image appearance, e.g., brightness and contrast. Data augmentation is performed to increase diversity, expand the existing data set, and achieve robust results. We use the same data augmentation technique as [45], and the images are randomly transformed or modified, as shown in Fig. 4. These augmentation techniques increase the generalization capability of the deep learning architectures. The original images are transformed using different types of augmentation approaches such as "Random Sun Flare", "Random Fog", "Random Brightness", "Random Crop", "Rotate", "RGB Shift", "Random Snow", "Horizontal Flip", "Vertical Flip", "Random Contrast", and "HSV".
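For illustration, a minimal sketch of this pre-processing and augmentation step is given below. The library choice is an assumption: the transform names quoted above match the Albumentations package, but the paper does not state which implementation or probability values were used.

```python
# Hedged pre-processing/augmentation sketch; the library and parameters are assumptions.
import cv2
import albumentations as A

IMG_SIZE = 224  # both architectures are fed 224 x 224 inputs in this work

augment = A.Compose([
    A.Resize(IMG_SIZE, IMG_SIZE),          # match the network input layer
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.2),
    A.Rotate(limit=15, p=0.5),
    A.RandomBrightnessContrast(p=0.3),     # "Random Brightness" / "Random Contrast"
    A.RGBShift(p=0.3),
    A.HueSaturationValue(p=0.3),           # "HSV"
    A.RandomFog(p=0.1),
    A.RandomSnow(p=0.1),
    A.RandomSunFlare(p=0.1),
])

def load_and_augment(path: str):
    """Read an X-ray, apply a random augmentation, and scale pixels to [0, 1]."""
    img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
    img = augment(image=img)["image"]
    return img.astype("float32") / 255.0
```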
In this work, two pre-trained deep learning architectures, namely VGG-19 and Inception-V3, are used for feature extraction and image classification. The details of both architectures are given below.
We first used the VGG-19 [18] architecture for the classification of Chest X-rays into infectious diseases. It is a modification of the VGG architecture and consists of 19 trainable layers, of which 16 are convolutional layers. These convolutional layers extract features from the input image and are also used during transfer learning. In addition, it has three fully connected layers, five max-pool layers, and a softmax layer used as the output layer, as explained in Fig. 5. The architecture shown in Fig. 5 was previously trained on the ImageNet data set; using transfer learning, it is additionally trained for Chest X-ray images. The architecture takes an RGB input image of size 224 × 224 pixels. The only pre-processing performed by the architecture is the subtraction of the mean RGB value, calculated over the whole training set, from each pixel. The outputs of the convolutional layers are the extracted features used for classification purposes. A kernel of size 3 × 3 with a stride of 1 pixel is used to cover the whole image, and spatial padding is applied to preserve the image's spatial resolution. Max-pooling is performed with a stride of 2 over a 2 × 2 pixel window. The local image features for an input image are given as:

$$V_i^{(M)} = B_i^{(M)} + \sum_{k}\phi_{i,k}^{(M)} * h_k^{(M-1)}$$

In the above equation, $V_i^{(M)}$ represents the output of layer $M$, the bias value is represented by $B_i^{(M)}$, $\phi_{i,k}^{(M)}$ indicates the filter mapping of the kth feature value, and $h_k^{(M-1)}$ denotes the output of layer $M-1$. After this, a ReLU unit is introduced to make the architecture perform better classification and improve computational time. Finally, classification is conducted using three fully connected layers.
Inception-V3 is a widely utilized deep learning architecture for image classification, trained on the ImageNet data set. It is composed of asymmetric and symmetric building blocks, including convolutional layers, average pooling, max-pooling, concatenations (Concat), dropouts, and fully connected layers, as depicted in Fig. 6. Batch normalization is applied extensively throughout the architecture to the activation inputs. For computing the loss function, softmax is employed. The architecture consists of 316 layers and 350 connections, including 94 convolutional layers with various filter sizes. Activation is performed at the first convolutional layer, and a weight matrix with a dimension of 149 × 32 is obtained; the ReLU activations are combined with batch normalization. A pooling layer is added between the convolutional layers to retain the most active neurons; max-pooling is applied with filters $m^{(q)}$ and stride $S_q$, which are used for the extraction of feature maps, e.g., of size 2 × 3. The activations are utilized, and at the output a resultant feature map is obtained as a weight matrix with a dimension of 1 × 1 × 2048.
We used transfer learning at the feature extraction step and additionally trained both architectures for Chest X-ray images, as illustrated in Fig. 9. As discussed earlier, all images are resized in accordance with the input layer of both deep learning architectures. After that, input convolutional and output layers are selected for feature mapping. The initial convolutional layer of VGG-19 is selected as the input, and the fully connected layer is chosen as the output layer. Later, activation is performed, and training and testing vectors are obtained. A resultant feature vector with a dimension of 1 × 4096, represented as $\phi_{k1}$, is obtained at the feature layer and is utilized in the fusion process. In the case of Inception-V3, as with VGG-19, the initial convolutional layer is selected as the input, and the average pool layer is utilized for the feature maps. Similar to the previous architecture, transfer learning is performed, the architecture is additionally trained for Chest X-ray images, and activation is applied to the average pool layer. A feature vector of dimension 1 × 2048 is obtained, expressed as $\phi_{k2}$.
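The deep learning framework is not specified in the paper; the following minimal sketch assumes a TensorFlow/Keras implementation and shows where the 1 × 4096 and 1 × 2048 vectors ($\phi_{k1}$, $\phi_{k2}$) come from. In the actual system both networks are first additionally trained on the Chest X-ray data; here the ImageNet weights are used directly for brevity.

```python
# Hedged feature-extraction sketch (framework and layer choices are assumptions).
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG19, InceptionV3
from tensorflow.keras.applications.vgg19 import preprocess_input as vgg_prep
from tensorflow.keras.applications.inception_v3 import preprocess_input as inc_prep

# VGG-19: activations of the last fully connected layer give 1 x 4096 per image.
vgg = VGG19(weights="imagenet")
vgg_features = tf.keras.Model(vgg.input, vgg.get_layer("fc2").output)

# Inception-V3: global average pooling over the last block gives 1 x 2048 per image.
inception = InceptionV3(weights="imagenet", include_top=False,
                        pooling="avg", input_shape=(224, 224, 3))

def extract_features(batch: np.ndarray):
    """batch: (N, 224, 224, 3) RGB images in [0, 255].
    Returns phi_k1 (N x 4096) and phi_k2 (N x 2048) for the fusion step."""
    phi_k1 = vgg_features.predict(vgg_prep(batch.copy()))
    phi_k2 = inception.predict(inc_prep(batch.copy()))
    return phi_k1, phi_k2
```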
Both feature vectors then proceed to the feature fusion method. Feature fusion is widely used in image classification applications, in which multiple feature vectors are combined into one matrix. In this work, after training both architectures for Chest X-ray images, the feature vectors of both architectures are combined using a feature fusion method. The main objective of feature fusion is to obtain a more active and powerful feature vector for classification purposes. Recent research [46] shows that fusion increases the overall accuracy of an architecture or system, but at the cost of computational time (seconds). As our main objective is to enhance classification accuracy, the parallel maximum covariance approach is applied for multiple feature fusion. The lengths of both extracted feature vectors are equalized, and the maximum covariance is calculated for fusion into a separate matrix. In our case, there are two feature vectors with dimensions n × m and n × q, where n represents the total number of Chest X-ray images, and m and q represent the feature vector lengths of VGG-19 and Inception-V3, respectively. To obtain vectors of equal length, the shorter vector is padded using average values, where the average feature is calculated from the higher-length vector. The covariance is maximized as follows [23]:

$$C_{\phi_1,\phi_2} = \frac{1}{n-1}\sum_{t=1}^{n}\big(\phi_i(t)-\bar{\phi}_i\big)\big(\phi_j(t)-\bar{\phi}_j\big)$$

In the above equation, the covariance between $\phi_1$ and $\phi_2$ is represented by $C_{\phi_1,\phi_2}$, and the ith and jth feature values are represented by $\phi_i(t)$ and $\phi_j(t)$. The feature value with maximum covariance $C_{\phi_1,\phi_2}$ is stored in the final fused vector, and the process continues until all values have been considered. The fused vector, expressed as $\phi_{(fu)}$, has dimensions N × K, where N is the number of sample images and K is the feature length, which changes according to the selected features (for more details, readers are referred to [23]).
In this work, we used a feature selection technique, i.e., multi-logistic regression controlled entropy variance. A partial-derivative activation function eliminates unnecessary features from the fused vector, while the prominent and strong features are transferred to an entropy-variance function that produces a new vector with positive values. Lastly, this vector is given to the fitness function, named Ensemble Learning for Data Stream (ESD) Classification [47], and the efficiency of the method is estimated. The final entropy-variance function Ent combines the entropy $H(\beta)$ of the selected feature vector with its variance $\sigma^2$ (Eq. 10 in [23]). Both architectures' features are passed through this function to determine the differences among all the extracted features. The ESD classifier then confirms the chosen features. Finally, the subspace discriminant method is applied for ensemble learning.
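The fusion step can be sketched as follows. This is a simplified interpretation of the parallel maximum covariance fusion in [23] rather than the authors' implementation: the shorter feature matrix is padded with the average feature of the longer one, per-feature covariances are computed over the images, and the columns with the largest covariance are kept; the cut-off `K` is a placeholder, and the subsequent entropy-variance selection is not reproduced here.

```python
# Simplified, hedged sketch of parallel maximum covariance fusion (interpretation of [23]).
import numpy as np

def fuse_max_covariance(phi1: np.ndarray, phi2: np.ndarray, K: int = 1000):
    """phi1: (n, m) VGG-19 features, phi2: (n, q) Inception-V3 features."""
    n, m = phi1.shape
    _, q = phi2.shape
    width = max(m, q)

    def pad(mat, target, longer):
        missing = target - mat.shape[1]
        if missing <= 0:
            return mat
        # Pad with the average feature value taken from the higher-length vector.
        fill = np.tile(longer.mean(axis=1, keepdims=True), (1, missing))
        return np.hstack([mat, fill])

    phi1e = pad(phi1, width, phi2 if q > m else phi1)
    phi2e = pad(phi2, width, phi1 if m > q else phi2)

    # Covariance between corresponding feature columns, computed over the n images.
    c1 = phi1e - phi1e.mean(axis=0)
    c2 = phi2e - phi2e.mean(axis=0)
    cov = (c1 * c2).sum(axis=0) / (n - 1)

    # Keep the K column pairs with the largest covariance and stack both views.
    top = np.argsort(cov)[::-1][:K]
    return np.hstack([phi1e[:, top], phi2e[:, top]])   # fused features, shape (n, 2K)
```

In this sketch the fused width is fixed by `K`; in the paper the final feature length is determined by the selection stage that follows the fusion.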
This section elaborates on the evaluation and testing results of the developed system. The testing is performed using Chest X-ray images, as described in Table 1. We first present the training and testing observations of both architectures on the customized data set; the training and testing accuracy and loss of both architectures are shown in Figs. 7 and 8. Each architecture is trained for 100 epochs on the customized three-class data set after applying the conventional augmentation techniques. It is noted that after the 10th epoch, the loss decreases steadily in both training and testing, as determined in Fig. 7. From Fig. 8, it can also be seen that the training and testing accuracies increase significantly after the first 10 epochs, since we studied particular class data, i.e., Pneumonia, Normal, and COVID-19. The training and testing accuracy of VGG-19 is 81% and 86%, while for Inception-V3 it is 93% and 95%, respectively.
The testing results of the above-discussed classification system can also be seen in Fig. 9. The system is tested for the three different classes of the data set. The first row of Fig. 9 shows the classification results for COVID-19; it can be seen that the system effectively classifies the images of COVID-19. In the three sample images in Fig. 9a-c, the severity of the COVID-19 virus differs. In Fig. 9a, mostly the lower region of the lungs is infected with the virus, while in Fig. 9b, the virus is spread over a large area of the lungs. From Fig. 9c, it might be concluded that the severity of COVID-19 is higher compared to Fig. 9a and b. Similarly, the second row shows the results for Pneumonia images. It can be noted from the images in the first and second rows that the symptoms or effects of both infections are almost similar, but the deep learning based system still gives good results in classifying Chest X-rays of different infectious diseases. The system accurately identifies infections (white regions that include pus and fluid) and Normal Chest X-rays. In some cases, the virus spread in Chest X-rays is severe and cannot be differentiated; in such cases, the deep learning based system might play a significant role in diagnosing the disease together with expert opinion. The system is also tested for Normal Chest X-rays, as shown in Fig. 9.
After testing, the above-discussed system is also evaluated using true-positive, false-positive, true-negative, and false-negative counts. The system's performance is then estimated using Accuracy, Precision, and Recall values, as determined in Fig. 10. The results are good, with detection accuracies of 97%, 97%, and 98% for the three classes of the test set (Fig. 10). The overall performance evaluation of the system is given in Fig. 11: the Precision of the system is 96%, the Recall is 93%, and the accuracy is 97%. In medical diagnosis, sensitivity and specificity values are also used, as they evaluate the system's ability. We also plotted the Area Under the Curve (AUC) using sensitivity and specificity values, as shown in Fig. 12. The sensitivity rate of the system is high, as seen in Fig. 12, which means that the classifier detects more true-positives and true-negatives than false-negatives and false-positives.
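For reference, the reported evaluation metrics (per-class accuracy, precision, recall, and the sensitivity-specificity ROC/AUC analysis) can be computed from the test-set predictions as sketched below. This is not the authors' code: it assumes scikit-learn, and `y_true`, `y_pred`, and `y_score` are placeholders for the test labels, predicted labels, and class probabilities.

```python
# Illustrative evaluation sketch using scikit-learn (placeholder inputs).
import numpy as np
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix, roc_auc_score, roc_curve)

CLASSES = ["COVID-19", "Pneumonia", "Normal"]

def evaluate(y_true, y_pred, y_score):
    print(confusion_matrix(y_true, y_pred))
    print(classification_report(y_true, y_pred, target_names=CLASSES, digits=3))
    print("overall accuracy:", accuracy_score(y_true, y_pred))

    # One-vs-rest ROC per class; fpr/tpr can be plotted to reproduce an AUC curve.
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    for c, name in enumerate(CLASSES):
        y_bin = (y_true == c).astype(int)
        fpr, tpr, _ = roc_curve(y_bin, y_score[:, c])
        print(name, "AUC:", roc_auc_score(y_bin, y_score[:, c]))
```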
The system results are also compared with those of several deep learning techniques, as demonstrated in Fig. 13. It can be observed that the presented method performs better and shows good results in comparison with other deep learning architectures. To make a fair comparison, the same results are presented as produced by the original authors. Most of these techniques were presented using limited data sets. In Table 2, the overall accuracy comparison is given.
An IoT-enabled smart health care system is presented for the automatic classification of infectious diseases, i.e., Pneumonia and COVID-19, in Chest X-ray images. The system is based on deep learning architectures with multi-layer feature fusion and selection approaches, which are applied to classify infectious diseases in Chest X-ray images. For feature extraction, two deep learning architectures, i.e., VGG-19 and Inception-V3, are used along with transfer learning. The variety of the data set is enhanced by applying data augmentation, and the deep learning architectures are trained with various augmented images. The extracted feature vectors of both deep learning architectures are fused together using a parallel maximum covariance approach, and for the selection of features, the multi-logistic regression controlled entropy variance approach is used. For experimentation, we customized a data set of Chest X-rays using various publicly available data sets. The system provides significantly improved results, with an overall classification accuracy of 97% for the three classes of Chest X-ray images. Furthermore, the classification results are compared with other deep learning techniques. Although we used different online accessible data sets, a larger data set is still needed to better train and test the system. We might use different types of Chest X-ray images to classify other infectious diseases in future work. In addition, the work might be extended to classify diseases along with their severity.
References
Top view multiple people tracking by detection using deep SORT and YOLOv3 with transfer learning: within 5G infrastructure
Din S (2021) A deep learning-based social distance monitoring framework for COVID-19
Social distance monitoring framework using deep learning architecture to control infection transmission of COVID-19 pandemic
A framework for pandemic prediction using big data analytics
IoT-based crowd monitoring system: using SSD with transfer learning
Adapting Gaussian YOLOv3 with transfer learning for overhead view human detection in smart cities and societies
A deep-learning-based smart healthcare system for patient's discomfort detection at the edge of internet of things
Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine
AI-enabled remote monitoring of vital signs for COVID-19: methods, prospects and challenges
A comprehensive review of the COVID-19 pandemic and the role of IoT, drones, AI, blockchain, and 5G in managing its impact
Chest X-ray segmentation using Sauvola thresholding and Gaussian derivatives responses
Computer-aided detection in chest radiography based on artificial intelligence: a survey
Radiological findings from patients with COVID-19 pneumonia in Wuhan, China: a descriptive study
An IoT based deep learning framework for early assessment of Covid-19
Rethinking the inception architecture for computer vision
Deep residual learning for image recognition
A survey of deep neural network architectures and their applications
A sustainable deep learning framework for object recognition using multi-layers deep features fusion and selection
A multilevel paradigm for deep convolutional neural network features selection with an application to human gait recognition
Probabilistic feature selection and classification vector machine
Combining point-of-care diagnostics and internet of medical things (IoMT) to combat the COVID-19 pandemic
Prediction models for diagnosis and prognosis of covid-19: systematic review and critical appraisal
A deep learning algorithm using CT images to screen for Corona virus disease (COVID-19)
Review of artificial intelligence techniques in imaging data acquisition, segmentation, and diagnosis for COVID-19
Automatic X-ray COVID-19 lung image classification system based on multi-level thresholding and support vector machine. medRxiv
Development of a machine-learning system to classify lung CT scan images into normal/COVID-19 class
Covidx-net: A framework of deep learning classifiers to diagnose covid-19 in x-ray images
COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images
Covid-resnet: A deep learning framework for screening of covid19 from radiographs
Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks
COVID-19 identification in chest X-ray images on flat and hierarchical classification scenarios
Automated detection and forecasting of COVID-19 using deep learning techniques: a review
Covid-19 image data collection: Prospective predictions are the future
Extracting possibly representative COVID-19 Biomarkers from X-Ray images with Deep Learning approach and image data related to Pulmonary Diseases
Goldgof GM (2020) Finding covid-19 from chest x-rays using deep learning on a small dataset
Classification of COVID-19 from chest X-ray images using deep convolutional neural networks
Automatic detection of coronavirus disease (COVID-19) in X-ray and CT images: a machine learning-based approach
Exploration of interpretability techniques for deep COVID-19 classification using chest X-ray images
A survey on image data augmentation for deep learning
Medical Imaging fusion techniques: a survey benchmark analysis, open challenges and recommendations
A survey on ensemble learning for data stream classification
Automated detection of COVID-19 cases using deep neural networks with X-ray images
Detection of coronavirus disease (covid-19) based on deep features
Deepcovidexplainer: Explainable covid-19 predictions based on chest x-ray images
Covidcaps: A capsule network-based framework for identification of covid-19 cases from x-ray images