key: cord-0573471-oqfcq1ec
authors: Degerli, Aysen; Ahishali, Mete; Yamac, Mehmet; Kiranyaz, Serkan; Chowdhury, Muhammad E. H.; Hameed, Khalid; Hamid, Tahir; Mazhar, Rashid; Gabbouj, Moncef
title: COVID-19 Infection Map Generation and Detection from Chest X-Ray Images
date: 2020-09-26
journal: nan
DOI: nan
sha: ecfcce2994fb8e69a3782d0e11e4e506d7b5cf67
doc_id: 573471
cord_uid: oqfcq1ec

Computer-aided diagnosis has become a necessity for accurate and immediate coronavirus disease 2019 (COVID-19) detection to aid treatment and prevent the spread of the virus. Compared to other diagnosis methodologies, chest X-ray (CXR) imaging is an advantageous tool since it is fast, low-cost, and easily accessible. Thus, CXR has great potential not only to help diagnose COVID-19 but also to track the progression of the disease. Numerous studies have proposed to use Deep Learning techniques for COVID-19 diagnosis. However, they have used very limited CXR image repositories for evaluation, with only a small number (a few hundred) of COVID-19 samples. Moreover, these methods can neither localize nor grade the severity of COVID-19 infection. For this purpose, recent studies proposed to explore the activation maps of deep networks. However, they remain inaccurate for localizing the actual infection, making them unreliable for clinical use. This study proposes a novel method for the joint localization, severity grading, and detection of COVID-19 from CXR images by generating so-called infection maps that can accurately localize and grade the severity of COVID-19 infection. To accomplish this, we have compiled the largest COVID-19 dataset to date with 2951 COVID-19 CXR images, where the ground-truth segmentation masks are annotated on the CXRs by a novel collaborative expert human-machine approach. Furthermore, we publicly release the first CXR dataset with ground-truth segmentation masks of the COVID-19 infected regions. A detailed set of experiments shows that state-of-the-art segmentation networks can learn to localize COVID-19 infection with an F1-score of 85.81%, which is significantly superior to the activation maps created by previous methods. Finally, the proposed approach achieves a COVID-19 detection performance of 98.37% sensitivity and 99.16% specificity.

(Fig. 1: COVID-19 sample CXR images, their corresponding ground-truth segmentation masks annotated by the collaborative human-machine approach, and the infection maps generated by the state-of-the-art segmentation models.)

Coronavirus disease 2019 (COVID-19) was declared a pandemic by the World Health Organization (WHO) in March 2020. The disease may lead to hospitalization, intubation, intensive care, and even death, especially for the elderly [1], [2]. Naturally, reliable detection of the disease is of the utmost importance. However, the diagnosis of COVID-19 is not straightforward, since its symptoms, such as cough, fever, breathlessness, and diarrhea, are generally indistinguishable from those of other viral infections [3], [4]. The diagnostic tools currently used to detect COVID-19 are reverse transcription polymerase chain reaction (RT-PCR) assays and chest imaging techniques, such as Computed Tomography (CT) and X-ray imaging. Primarily, RT-PCR has become the gold standard in the diagnosis of COVID-19 [5], [6]. However, RT-PCR assays have a high false alarm rate, which may be caused by virus mutations in the SARS-CoV-2 genome, sample contamination, or damage to the sample acquired from the patient [7], [8].
In fact, it has been shown in hospitalized patients that RT-PCR sensitivity is low and the test results are highly unstable [6], [9]-[11]. Therefore, it is recommended to perform chest CT imaging initially on suspected COVID-19 cases [12], since it is a more reliable clinical tool with higher sensitivity compared to RT-PCR. Hence, several studies [12]-[14] suggest performing CT on suspected cases with negative RT-PCR findings. However, CT scans have several limitations: their sensitivity is limited in the early phases of COVID-19 [15], they are limited in recognizing specific viruses [16], and they are slow in image acquisition and costly. On the other hand, X-ray imaging is faster, cheaper, and less harmful to the body in terms of radiation exposure compared to CT [17], [18]. Moreover, unlike CT devices, X-ray devices are easily accessible, reducing the risk of COVID-19 contamination during the imaging process [19]. Currently, chest X-ray (CXR) imaging is widely used as an assistive tool in COVID-19 prognosis, and it has been reported to have potential diagnostic capability in recent studies [20]. In order to automate COVID-19 detection/recognition from CXR images, many studies [17], [21]-[27] have proposed to use deep Convolutional Neural Networks (CNNs). However, the main limitation of these studies is that the data is scarce for the target COVID-19 class. Such a limited amount of data degrades the learning performance of deep networks. Two recent studies, [28] and [29], have addressed this drawback with a compact network structure and achieved state-of-the-art detection performance over the benchmark QaTa-COV19 and Early-QaTa-COV19 datasets, which consist of 462 and 175 COVID-19 CXR images, respectively. Despite the fact that these datasets were the largest available at that time, such a limited number of COVID-19 samples raises robustness and reliability issues for the proposed methods in general. Moreover, all these previous machine learning solutions with X-ray imaging remain limited to COVID-19 detection only. However, as stated by Shi et al. [30], COVID-19 pneumonia screening is important for evaluating the status of the patient and the treatment. Therefore, along with detection, COVID-19 related infection localization is another crucial problem. Hence, several studies [31]-[33] produced activation maps generated from different Deep Learning (DL) models trained for the COVID-19 detection (classification) task to localize COVID-19 infection in the lungs. Infection localization has two vital objectives: an accurate assessment of the infection location and of the severity of the disease. However, the results of previous studies show that the activation maps generated inherently by the underlying DL network may fail to accomplish both objectives; that is, irrelevant locations with biased severity grading appeared in many cases. To overcome these problems, two studies [34], [35] proposed to perform lung segmentation as a first step in their approaches. This way, they narrowed the region of interest down to the lung regions to increase the reliability of their methods. Overall, until this study, screening COVID-19 infection from such activation maps produced by classification networks was the only option for localization, due to the absence of ground-truth annotations in the datasets available in the literature.
Many studies [30], [34], [36]-[38] provide COVID-19 infection ground-truths for CT images; however, ground-truth segmentation masks for CXR images are non-existent. In this study, in order to overcome the aforementioned limitations and drawbacks, first, the benchmark dataset proposed by the researchers of Qatar University and Tampere University in [28] and [29] is extended to include 2951 COVID-19 samples. This new dataset is 3-20 times larger than those used in earlier studies. The extended benchmark dataset, QaTa-COVSeg, with around 15,500 CXR images, is not only the largest dataset ever composed, but also the first dataset that has ground-truth segmentation masks for COVID-19 infection regions; some samples are shown in Fig. 1. To obtain the ground-truth, an expert human-machine collaborative approach is introduced to improve the segmentation masks manually drawn by medical doctors (MDs). This is an iterative process, where MDs initiate the segmentation with "manually-drawn" segmentation masks for a subset of CXR images. Then, the segmentation networks trained over this subset generate their own "competing" masks, and the MDs are asked to compare them pair-wise (initial manual segmentation versus automatically segmented mask) for the same patient. The networks also segment the remaining CXR images, which are verified by the expert MDs. Such a verification improves the quality of the generated masks as well as the training. The human-machine collaboration continues until the MDs are fully satisfied, i.e., a satisfactory mask can be found among the masks generated by the networks for all CXR images in the dataset. In this study, we show that even with two stages (iterations), highly accurate infection maps can be obtained, and an excellent COVID-19 detection performance can be achieved. For the infection map generation, we use the following state-of-the-art deep networks: U-Net [39] and UNet++ [40], which provide top performances in biomedical image segmentation, and Deep Layer Aggregation (DLA) [41] encoder-decoder CNN (E-D CNN) type segmentation networks. Moreover, the encoder structure of the E-D CNN architectures is varied: CheXNet [42] (a fine-tuned version of DenseNet-121 [43]), DenseNet-121, Inception-v3 [44], and ResNet-50 [45]. Next, the infection maps are generated from the predictions of the E-D CNN models to visualize/detect COVID-19 infection. The rest of the paper is organized as follows. In Section II-A, we introduce the benchmark QaTa-COVSeg dataset. Our novel human-machine collaborative approach for the ground-truth annotation is explained in Section II-B. Next, the details of COVID-19 infected region segmentation, and the infection map generation and COVID-19 detection, are presented in Sections II-C and II-D, respectively. The experimental setup and results with the benchmark dataset are reported in Sections III-A and III-B, respectively. Finally, we conclude the paper in Section IV. The proposed approach in this study is composed of three main phases: 1) training the state-of-the-art deep models for COVID-19 infected region segmentation using the ground-truth segmentation masks, 2) infection map generation from the trained segmentation networks, and 3) COVID-19 detection, as depicted in Fig. 2. In this section, we first detail the creation of the benchmark QaTa-COVSeg dataset. Then, the proposed approach for collaborative human-machine ground-truth generation is introduced.
The researchers of Qatar University and Tampere University have compiled the largest COVID-19 dataset to date: QaTa-COVSeg, including 2951 COVID-19 and 12,544 normal (control group) CXR images. To create QaTa-COVSeg, we have utilized several publicly available datasets and repositories that are scattered and in different formats. The images collected from these datasets contained some duplicate, over-exposed, and low-quality images that were identified and removed in the pre-processing stage. Consequently, the COVID-19 CXRs come from different publicly available sources, resulting in high intra-class dissimilarity, as depicted in Fig. 3. The image sources of normal and COVID-19 CXRs are detailed as follows:

Normal CXRs: The RSNA pneumonia detection challenge dataset [46] comprises about 29.7K CXR images, of which 8851 images are normal. All CXRs in the dataset are in DICOM format, a popular format for medical imaging. The Padchest dataset [47] consists of 160,868 CXR images from 67,625 patients, of which 37,871 images are from the normal class. The images were evaluated and reported by radiologists at Hospital San Juan in Spain during 2009-2017. The dataset includes six different position views of CXR and additional information regarding image acquisition and patient demography. Paul Mooney [48] has released an X-ray dataset of 5863 CXR images from a total of 5856 patients, of which 1583 images are from the normal class. The data was collected from pediatric patients aged one to five years at Guangzhou Women and Children's Medical Center, Guangzhou. The dataset in [49] consists of 7470 CXR images and the corresponding radiologist reports from the Indiana Network for Patient Care, where a total of 1343 frontal CXR samples are labeled as normal. In [50], there are 80 normal CXRs from the tuberculosis control program of the Department of Health and Human Services of Montgomery County and 326 normal CXRs from Shenzhen Hospital. In this study, a total of 12,544 normal CXRs are gathered from the aforementioned datasets.

COVID-19 CXRs: BIMCV-COVID19+ [51] is the largest publicly available dataset, with 2473 COVID-19 positive CXR images. The CXR images of the BIMCV-COVID19+ dataset were recorded with computed radiography (CR) and digital X-ray (DX) machines. Hannover Medical School and the Institute for Diagnostic and Interventional Radiology [52] released 183 CXR images of COVID-19 patients. A total of 959 CXR images are from public repositories: the Italian Society of Medical and Interventional Radiology (SIRM), GitHub, and Kaggle [35], [53]-[56]. As mentioned earlier, duplicate and low-quality images were removed, since the COVID-19 CXR images are collected from different public datasets and repositories. In this study, a total of 2951 COVID-19 CXRs are gathered from the aforementioned datasets. Therefore, the COVID-19 CXRs are from patients of different age groups, genders, and ethnicities.

(Fig. 4: The two stages of the human-machine collaborative approach. Stage I: A subset of CXR images with manually drawn segmentation masks is used to train three different deep networks in a 5-fold cross-validation scheme. The manually drawn ground-truth (a) and the three predictions (b, c, d) are blindly shown to MDs, and they select the best ground-truth mask. Stage II: Five deep networks are trained over the best segmentation masks selected. Then, they are used to produce segmentation masks for the rest of the CXR dataset (a, b, c, d, e), which are shown to MDs.)
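The paper does not detail how duplicates were identified during pre-processing; as a rough illustration only (not the authors' pipeline), exact byte-level copies across the pooled repositories could be flagged with a file hash, as in the hypothetical sketch below. Note that this only catches exact copies, not re-encoded or rescaled duplicates.

```python
import hashlib
from pathlib import Path

def find_exact_duplicates(image_dir):
    """Group files by MD5 digest; groups with more than one file are
    byte-level duplicates. A hypothetical pre-processing sketch."""
    groups = {}
    for path in Path(image_dir).rglob("*"):
        if path.is_file():
            digest = hashlib.md5(path.read_bytes()).hexdigest()
            groups.setdefault(digest, []).append(path)
    return {d: ps for d, ps in groups.items() if len(ps) > 1}
```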
Recent developments in machine and deep learning techniques have led to state-of-the-art performance in many computer vision (CV) tasks, such as image classification, object detection, and image segmentation. However, supervised DL methods require a huge amount of annotated data; a limited amount of data degrades the performance of deep network structures, since their generalization capability depends on the availability of large datasets. Nevertheless, producing ground-truth segmentation masks by pixel-accurate manual segmentation can be a cumbersome and highly subjective task for human experts, even for moderate-size datasets. In order to overcome this challenge, in this study, we propose a novel collaborative human-machine approach to accurately produce the ground-truth segmentation masks for infected regions directly from the CXR images. The proposed approach is performed in two main stages. First, a group of expert MDs manually segments the infected regions of a subset (500 in our case) of CXR images. Then, several segmentation networks inspired by the U-Net [39] structure are trained over these initial ground-truth masks in a 5-fold cross-validation scheme. For each fold, the segmentation masks of the test samples are predicted by the networks. The network-predicted masks, along with the initial (MD-drawn) ground-truth masks and the original CXR image, are assessed by the MDs, and the best segmentation mask among them is selected. The steps of Stage I are illustrated in Fig. 4 (top). At the end of the first stage, collaboratively annotated ground-truth masks for the subset of CXR images are formed, and they are superior to the initial manually drawn masks, since the MDs selected the best among them. An interesting observation in this stage was that the MDs preferred the machine-generated masks over the manually drawn masks in three out of five cases. In the second stage, five deep networks inspired by the U-Net [39], UNet++ [40], and DLA [41] architectures are trained over the collaborative masks formed in Stage I. The trained segmentation networks are used to predict the segmentation masks of the rest of the data, which is around 2400 unannotated COVID-19 images. Among the five predictions, the expert MDs select the best one as the ground-truth, or reject all of them if none is found satisfactory. In the latter case, the MDs were asked to draw the ground-truth masks manually; however, this was indeed a minority case that covered less than 5% of the unannotated data. The steps of Stage II are shown in Fig. 4 (bottom). As a result, the ground-truth masks for 2951 COVID-19 CXR images are gathered to construct the benchmark QaTa-COVSeg dataset. The proposed approach not only saves valuable human labor time, but also improves the quality and reliability of the masks by reducing subjectivity with the Stage II verification step. Segmentation of COVID-19 infection is the first step of our proposed approach, as depicted in Fig. 2. Once the ground-truth annotation for the QaTa-COVSeg benchmark dataset is formed as explained in the previous section, we perform infected region segmentation extensively with 24 different network configurations. We have used three different segmentation models, U-Net, UNet++, and DLA, with four different encoder structures, CheXNet, DenseNet-121, Inception-v3, and ResNet-50, and with frozen & not frozen encoder weight configurations.
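The exact decoder architectures follow the cited works; as a minimal illustrative sketch only (not the authors' implementation), an encoder-decoder segmenter with an ImageNet-pretrained DenseNet-121 encoder and the frozen/not frozen option could be assembled in Keras as follows. The simplified decoder (skip connections omitted) is an assumption made for brevity.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_segmenter(freeze_encoder=False, input_shape=(224, 224, 3)):
    """Simplified encoder-decoder sketch illustrating the pretrained-encoder
    and freeze/unfreeze configurations; the actual U-Net / UNet++ / DLA
    decoders are described in the cited papers."""
    encoder = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet", input_shape=input_shape)
    encoder.trainable = not freeze_encoder    # frozen vs. not frozen setting

    x = encoder.output                        # 7x7 feature maps for 224x224 input
    for filters in (256, 128, 64, 32, 16):    # upsample back to 224x224
        x = layers.UpSampling2D()(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    mask = layers.Conv2D(1, 1, activation="sigmoid")(x)  # per-pixel probabilities
    return models.Model(encoder.input, mask)
```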
1) Segmentation Models: We have tried distinct segmentation model structures, from shallow to deep, with the following varied configurations:
• U-Net [39] is a top-performing network for medical image segmentation applications, with a U-shaped architecture in which the encoder part is symmetric to the decoder part. This decoder structure, with its many feature channels, allows the network to carry information through to its final layers.
• UNet++ [40] further develops the decoder structure of U-Net by connecting the encoder to the decoder with nested dense convolutional blocks. This way, the bridge between the encoder and decoder parts is more firmly knit; thus, information can be transferred to the final layers more intensively compared to the classic U-Net.
• DLA [41] investigates the connecting bridges between the encoder and decoder, and proposes a way to fuse semantic and spatial information with dense layers, which are progressively aggregated by iterative merging into deeper and larger scales.
In this study, we use several deep CNNs to form the encoder part of the above-mentioned segmentation models:
• DenseNet-121 [43] is a deep network with 121 layers, in which each layer is directly connected to all subsequent layers. Therefore, maximum information flow through the network is ensured.
• CheXNet [42] is based on the architecture of DenseNet-121, trained over the 14-class ChestX-ray14 dataset [57] to detect pneumonia cases from CXR images. In [42], DenseNet-121 is initialized with ImageNet weights and fine-tuned over 100K CXR images, yielding state-of-the-art results on the ChestX-ray14 dataset with a performance exceeding that of practicing radiologists.
• Inception-v3 [44] achieves state-of-the-art results with much lower computational complexity compared to its deep competitors by factorizing the convolutions and pruning the dimensions inside the network. Despite its lower complexity, it maintains high performance.
• ResNet-50 [45] introduces a deep residual learning framework that reformulates the desired mapping of the input as a residual mapping. This is achieved by shortcut connections over the stacked layers. These connections merge the input and output of the stacked layers by addition operations; therefore, the vanishing gradient problem is alleviated.
We perform transfer learning on the encoder side of the segmentation models by initializing the layers with ImageNet weights, except for CheXNet, which is pre-trained on the ChestX-ray14 dataset. We tried two configurations: in the first, we freeze the encoder layers, while in the second, they are allowed to vary. In this study, we train the segmentation networks with a hybrid loss function, combining focal loss [58] with dice loss [59], to achieve a better segmentation performance. We use focal loss since COVID-19 infected region segmentation is an imbalanced problem: the number of background pixels far exceeds the number of foreground pixels. Let the ground-truth segmentation mask be Y, where each pixel class label is defined as y, and let the network prediction be ŷ. We define the pixel class probabilities as P(y = 1) = p for the positive class and P(y = 0) = 1 − p for the negative class. On the other hand, the network prediction probabilities are modeled by the logistic (sigmoid) function as ŷ = 1/(1 + e^(−z)), where z is some function of the input CXR image X.
Then, we define the cross-entropy (CE) loss as follows:

CE(ŷ, y) = −(y log(ŷ) + (1 − y) log(1 − ŷ)).

A common solution to address the class imbalance problem is to add a weighting factor α ∈ [0, 1] for the positive class and 1 − α for the negative class, which defines the balanced cross-entropy (BCE) loss as

BCE(ŷ, y) = −(α y log(ŷ) + (1 − α)(1 − y) log(1 − ŷ)).

In this way, the importance of positive and negative samples is balanced. However, adding the α factor does not solve the issue for the large class imbalance scenario, because the network cannot distinguish outliers (hard samples) from inliers (easy samples) with the BCE loss. To overcome this drawback, the focal loss [58] introduces a focusing parameter γ ≥ 0 in order to down-weight the loss of easy samples that occur with small errors, so that the model is forced to learn hard negative samples. The focal (F) loss is defined as

F(ŷ, y) = −(α y (1 − ŷ)^γ log(ŷ) + (1 − α)(1 − y) ŷ^γ log(1 − ŷ)),

where the F loss is equivalent to the BCE loss when γ = 0. In our experimental setup, we use the default setting of α = 0.25 and γ = 2 for all the networks. To achieve a good segmentation performance, we combine the focal loss with the dice loss, which is based on the dice coefficient (DC) defined as follows:

DC = 2|Y ∩ Ŷ| / (|Y| + |Ŷ|),

where Ŷ is the predicted segmentation mask of the network. Hence, the DC can be turned into a dice (D) loss as follows:

D(Y, Ŷ) = 1 − (2 Σ_{h,w} Y_{h,w} Ŷ_{h,w}) / (Σ_{h,w} Y_{h,w} + Σ_{h,w} Ŷ_{h,w}),

where h and w are the height and width of the ground-truth and prediction masks Y and Ŷ, respectively. Finally, we sum the D and F losses to obtain the so-called hybrid loss function for the segmentation networks. Having formed the training set of COVID-19 CXR images via the collaborative human-machine approach explained in Section II-B, we train the aforementioned segmentation networks to produce infection maps. We train the networks in a 5-fold cross-validation scheme, where in each fold we feed each test CXR sample X into the network. We then obtain the network prediction mask Ŷ, which is used to generate an infection map: a measure of infected region probabilities on the input X. Each pixel in Ŷ is defined as Ŷ_{h,w} ∈ [0, 1], where h and w index the image dimensions. We then apply an RGB-based color transform, i.e., the jet color scale, to obtain the RGB version of the prediction mask, Ŷ_RGB, as shown in Fig. 5, for a pseudo-colored visualization of the probability measure. The infection map is generated as a reflection of the network prediction Ŷ_RGB onto the CXR image X. Hence, for visualization, we form the imposed image by concatenating the hue and saturation components of Ŷ_HSV with the value component of X_HSV. Finally, the imposed image is converted back to the RGB domain. In the infection map, we do not show the pixels/regions with zero probability, for a better visualization effect. This way, the infected regions, where Ŷ > 0, are shown translucent, as in Fig. 5. Along with the infection map generation, which already provides localization and segmentation of COVID-19 infection, COVID-19 detection can easily be performed using the proposed approach. The detection of COVID-19 is based on the predictions of the trained segmentation networks: a test sample is classified as COVID-19 if Ŷ ≥ 0.5 at any pixel location.
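For concreteness, the hybrid loss above maps directly onto a few lines of TensorFlow. The sketch below follows the stated formulas with α = 0.25 and γ = 2; the small smoothing constant in the dice term is our own addition to avoid division by zero, not part of the paper's definition.

```python
import tensorflow as tf

def hybrid_loss(alpha=0.25, gamma=2.0, smooth=1e-6):
    """Focal + dice loss, assembled from the formulas above (a sketch,
    not the authors' code). y_true and y_pred are (batch, H, W, 1)
    tensors of per-pixel probabilities."""
    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        # Focal loss: the (1 - p)^gamma factor down-weights easy samples.
        focal = -(alpha * y_true * (1.0 - y_pred) ** gamma * tf.math.log(y_pred)
                  + (1.0 - alpha) * (1.0 - y_true) * y_pred ** gamma
                  * tf.math.log(1.0 - y_pred))
        focal = tf.reduce_mean(focal)
        # Dice loss: 1 - dice coefficient over the spatial dimensions.
        inter = tf.reduce_sum(y_true * y_pred, axis=(1, 2, 3))
        sums = (tf.reduce_sum(y_true, axis=(1, 2, 3))
                + tf.reduce_sum(y_pred, axis=(1, 2, 3)))
        dice = 1.0 - tf.reduce_mean((2.0 * inter + smooth) / (sums + smooth))
        return focal + dice   # the hybrid loss: D + F
    return loss
```

Similarly, the described infection map composition (jet coloring, hue/saturation taken from the prediction, value taken from the CXR) can be sketched with Matplotlib's color utilities; implementation details may differ from the authors' code.

```python
import numpy as np
from matplotlib import cm
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def infection_map(cxr_gray, y_pred):
    """Impose the jet-colored prediction onto the CXR as described above.
    cxr_gray and y_pred are (H, W) arrays in [0, 1]."""
    pred_rgb = cm.jet(y_pred)[..., :3]            # jet color scale -> RGB
    pred_hsv = rgb_to_hsv(pred_rgb)
    cxr_hsv = rgb_to_hsv(np.stack([cxr_gray] * 3, axis=-1))
    # Hue and saturation from the prediction, value (brightness) from the CXR.
    imposed = np.stack(
        [pred_hsv[..., 0], pred_hsv[..., 1], cxr_hsv[..., 2]], axis=-1)
    out = hsv_to_rgb(imposed)
    # Keep the original CXR where the predicted probability is zero.
    zero = (y_pred <= 0)[..., None]
    return np.where(zero, np.stack([cxr_gray] * 3, axis=-1), out)
```

Given such a prediction, the detection rule just stated reduces to checking `np.any(y_pred >= 0.5)`.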
In this section, first, the experimental setup is presented. Then, both numerical and visual results are reported with an extensive set of comparative evaluations over the benchmark QaTa-COVSeg dataset. Finally, visual comparative evaluations are presented between the infection maps and the activation maps extracted from state-of-the-art deep models.

Quantitative evaluations of the proposed approach are performed for both COVID-19 infected region segmentation and COVID-19 detection. COVID-19 infected region segmentation is evaluated at the pixel level, where we consider the foreground (infected region) as the positive class and the background as the negative class. For COVID-19 detection, the performance is computed per CXR sample, and we consider COVID-19 as the positive class and the control group as the negative class. The evaluation metrics are defined as follows:

Sensitivity = TP / (TP + FN), where sensitivity (or recall) is the rate of correctly detected positive samples among all positive class samples;
Specificity = TN / (TN + FP), where specificity is the ratio of accurately detected negative class samples to all negative class samples;
Precision = TP / (TP + FP), where precision is the rate of correctly classified positive class samples among all samples classified as positive;
Accuracy = (TP + TN) / (TP + TN + FP + FN), where accuracy is the ratio of correctly classified samples among all the data;
F_β = (1 + β²) (Precision × Sensitivity) / (β² Precision + Sensitivity), where the F-score is defined by the weighting parameter β.

The F1-score is calculated with β = 1, which is the harmonic mean of precision and sensitivity. The F2-score is calculated with β = 2, which emphasizes FN minimization over FPs. The main objective of both COVID-19 segmentation and detection is to maximize sensitivity with a reasonable specificity, in order to minimize missed (FN) COVID-19 cases or pixels. Equivalently, a maximized F2-score is targeted with an acceptable F1-score. The performances with their 95% confidence intervals (CIs) for both COVID-19 infected region segmentation and detection are given in Tables I and III, respectively. The range of values can be calculated for each performance metric as

r = z √(metric (1 − metric) / N),

where z is the level of significance, metric is any performance evaluation metric, and N is the number of samples. Accordingly, z is set to 1.96 for a 95% CI. We have evaluated the networks in a stratified 5-fold cross-validation scheme with a ratio of 80% training to 20% test (unseen folds) over the benchmark QaTa-COVSeg dataset. The input CXR images are resized to 224 × 224 pixels. Table II shows the number of CXRs per class in the dataset. Since the two classes are imbalanced, we have applied data augmentation in order to balance them: the COVID-19 samples are augmented up to the same number of samples as the normal class in the training set of each fold. The data augmentation is performed using the ImageDataGenerator class in Keras: the CXR samples are augmented by randomly shifting them both vertically and horizontally by 10% and randomly rotating them in a range of 10 degrees. After shifting and rotating the images, blank sections are filled using the nearest mode. We have implemented the deep networks with the TensorFlow library [60] using Python on an NVIDIA GeForce RTX 2080 Ti GPU card. For training, the Adam optimizer [61] is used with the default momentum parameters β1 = 0.9 and β2 = 0.999, using the aforementioned hybrid loss function. The segmentation networks are trained for 50 epochs with a learning rate of α = 10^−4 and a batch size of 32. For comparison with the computed infection maps, the activation maps are obtained as follows: the encoder structures of the segmentation networks are trained for the classification task, with a modified output layer of 2 neurons corresponding to the number of classes. The activation maps extracted from these classification models are then compared with the infection maps of the segmentation models.
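The stated augmentation settings map directly onto the named Keras class, and the confidence interval formula is equally short. The following sketch mirrors those stated parameters; the authors' exact instantiation may differ.

```python
import math
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation as described in the text: 10% vertical/horizontal shifts,
# rotations within 10 degrees, blank regions filled with the nearest pixels.
augmenter = ImageDataGenerator(
    width_shift_range=0.1,
    height_shift_range=0.1,
    rotation_range=10,
    fill_mode="nearest",
)

def confidence_interval(metric, n, z=1.96):
    """Half-width r = z * sqrt(metric * (1 - metric) / N) from the text;
    z = 1.96 corresponds to a 95% CI."""
    return z * math.sqrt(metric * (1.0 - metric) / n)
```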
The classification networks CheXNet, DenseNet-121, Inception-v3, and ResNet-50 are fine-tuned using categorical cross-entropy as the loss function for 10 epochs with a learning rate of α = 10^−5, which is a sufficient setting to prevent over-fitting based on our previous study [29]. The other settings of the classifiers are kept the same as for the segmentation models. The experiments are carried out for both COVID-19 infected region segmentation and COVID-19 detection. We extensively tested the benchmark QaTa-COVSeg dataset using three different state-of-the-art segmentation networks with four different encoder options. We also investigated the effect of frozen encoder weights on the performance.

1) COVID-19 Infected Region Segmentation: The performances of the segmentation models for COVID-19 infected region segmentation are presented in Table I. Each model structure is evaluated with two configurations: frozen and not frozen encoder layers. We have used transfer learning on the encoder layers with ImageNet weights, except for the CheXNet model, which is pre-trained on the ChestX-ray14 dataset. The evaluation of the models with frozen encoder layers is also of interest, since freezing can lead to better convergence and improved performance. However, as the results show, better performance is obtained when the network continues to learn on the encoder layers as well. For each model, we have observed that two encoders, DenseNet-121 and Inception-v3, are the top-performing ones for the infected region segmentation task. The U-Net model with the DenseNet-121 encoder holds the leading performance with 84% sensitivity, an 85.81% F1-score, and an 84.71% F2-score. DenseNet-121 produces better results compared to the other encoder types since it can preserve the information coming from earlier layers through to the output by concatenating the feature maps from each dense layer. In the other segmentation models, however, Inception-v3 outperforms the other encoder types. The presented segmentation performances are obtained by setting the threshold value to 0.5 to compute the segmentation mask from the network probabilities. The Precision-Recall curves in Fig. 6 are plotted by varying this threshold value.

2) COVID-19 Detection: The performances of the segmentation models for COVID-19 detection are presented in Table III. All the models are evaluated by a stratified 5-fold cross-validation scheme, and the table shows the results averaged over these folds. The most crucial metric here is the sensitivity, since missing any patient with COVID-19 is critical. In fact, the results indicate the robustness of the model, as the proposed approach can achieve a high sensitivity of 98.37% with a 97.08% F2-score. Additionally, the proposed approach achieves an excellent specificity of 99.16%, indicating a significantly low false alarm rate. It can be observed from Table III that the DenseNet-121 encoder with the not frozen encoder layer setting gives the most promising results among the others. The confusion matrices, accumulated over each fold's test set, are presented in Table IV. The highest sensitivity in COVID-19 detection is achieved by the U-Net DenseNet-121 model (Table IVa). Accordingly, the U-Net DenseNet-121 model only misses 48 COVID-19 patients out of 2951. On the other hand, the highest specificity is achieved by the UNet++ DenseNet-121 model (Table IVb), which only misses a minor part of the normal class: 105 samples out of 12,544.

(Fig. 7: Several CXR images with their corresponding ground-truth masks. The activation maps extracted from the classification models are presented in the middle block. The last block shows the infection maps generated by the segmentation models. It is evident that the infection maps yield a superior localization of COVID-19 infection compared to the activation maps.)
Several studies [31]-[33] propose to localize COVID-19 from CXRs by extracting activation maps from deep classification models trained for COVID-19 detection. Despite the simplicity of the idea, this approach has many limitations. First of all, without any infected region segmentation ground-truth masks, the network can only produce a rough localization, and the extracted activation maps may entirely fail to localize the COVID-19 infection. In this study, we check the reliability of our proposed COVID-19 detection approach by comparing it with DL models trained for the classification task. To achieve this objective, we compare the infection maps and activation maps of CXR images, which are generated from the segmentation and classification networks, respectively. To this end, we have trained the encoder structures of the segmentation networks, namely CheXNet, DenseNet-121, Inception-v3, and ResNet-50, to perform the COVID-19 classification task in a stratified 5-fold cross-validation scheme. We have extracted activation maps from these trained models with the Gradient-weighted Class Activation Mapping (Grad-CAM) approach proposed in [62]. The Grad-CAM localization map L^c_Grad-CAM ∈ R^(h×w) of height h and width w for class c is calculated from the gradient of the class score m^c (before the softmax) with respect to the feature maps A^k of a convolutional layer, i.e., ∂m^c/∂A^k. The gradients are global-average-pooled during back-propagation to obtain the weights

α^c_k = (1/Z) Σ_i Σ_j ∂m^c / ∂A^k_{i,j},

where α^c_k indicates the importance of feature map k of A for a target class c, and Z is the number of pixels in the feature map. Then, a linear combination of the feature maps is formed and followed by a ReLU to obtain the Grad-CAM:

L^c_Grad-CAM = ReLU(Σ_k α^c_k A^k).

(A minimal implementation sketch of this computation is given at the end of this section.) Despite their elegant detection performance, activation maps extracted from deep classification networks are not suitable for localizing COVID-19 infection, as depicted in Fig. 7. In fact, the infected regions indicated by the activation maps are highly irrelevant, pointing to false locations outside the lung areas. On the other hand, infection maps provide a highly accurate localization with a reliable severity grading of the COVID-19 infection. The proposed infection maps can conveniently be used by medical experts for an enhanced assessment of the disease. A real-time implementation of the infection maps will obviously speed up the detection process and can also monitor the progression of COVID-19 infection in the lungs. In this section, we present the computational times of the networks and their numbers of trainable & non-trainable parameters. Table V shows the elapsed time in milliseconds (ms) during the inference step for each network used in the experiments. The results in the table represent the running time per sample. It can be observed from the table that the U-Net model is the fastest among the others due to its shallower structure. The fastest network is U-Net Inception-v3 with frozen encoder layers, taking 2.53 ms. On the other hand, the slowest model is the UNet++ structure, since it has the largest number of parameters. The most computationally demanding model is UNet++ ResNet-50 with frozen encoder layers, which takes 5.58 ms. We therefore conclude that all models can be used in real-time clinical applications.
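As referenced above, the Grad-CAM computation can be sketched in TensorFlow as follows. The choice of `layer_name` (typically the last convolutional layer of the encoder) and the use of the pre-softmax class score are assumptions made for illustration, not details taken from the paper.

```python
import tensorflow as tf

def grad_cam(model, image, layer_name, class_idx):
    """Minimal Grad-CAM sketch for a Keras classifier; image is an
    (H, W, C) array already preprocessed for the model."""
    # Model mapping the input to (feature maps A^k, class scores m).
    grad_model = tf.keras.models.Model(
        model.inputs, [model.get_layer(layer_name).output, model.output])
    with tf.GradientTape() as tape:
        fmaps, preds = grad_model(image[None, ...])
        score = preds[:, class_idx]              # m^c (pre-softmax if available)
    grads = tape.gradient(score, fmaps)          # dm^c / dA^k
    alpha = tf.reduce_mean(grads, axis=(1, 2))   # global average pooling -> alpha^c_k
    cam = tf.nn.relu(tf.reduce_sum(alpha[:, None, None, :] * fmaps, axis=-1))
    cam = cam[0] / (tf.reduce_max(cam) + 1e-8)   # normalize to [0, 1]
    return cam.numpy()
```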
CONCLUSIONS

The immediate and accurate detection of the highly infectious COVID-19 plays a vital role in preventing the spread of the virus. In this study, we used CXR images, since X-ray imaging is cheaper, more easily accessible, and faster than commonly used alternatives such as RT-PCR and CT. As a major contribution, the largest CXR dataset, QaTa-COVSeg, which consists of 2951 COVID-19 and 12,544 normal images, has been compiled and will be shared publicly as a benchmark dataset. Moreover, for the first time in the literature, we release the ground-truth segmentation masks of the infected regions along with the introduced benchmark QaTa-COVSeg. Furthermore, we proposed a human-machine collaborative approach, which can be used when fast and accurate ground-truth annotation is desired but manual segmentation is slow, costly, and subjective. Finally, we proposed a joint approach for COVID-19 infection map generation and detection using state-of-the-art segmentation models. Our extensive experiments on QaTa-COVSeg show that a reliable COVID-19 diagnosis can be achieved by generating infection maps, which can locate the infection in the lungs with 84% sensitivity and an 85.81% F1-score. Moreover, the proposed joint approach achieves an excellent COVID-19 detection performance with 98.37% sensitivity and 99.16% specificity. The most important aspect of this study is that the generated infection maps can be valuable from a medical perspective, as they can be used for a better and more objective COVID-19 assessment. It is clear that, compared with the activation maps extracted from deep models, infection maps are highly superior and reliable mappings of COVID-19 infection.

REFERENCES
[1] Severe outcomes among patients with coronavirus disease 2019 (COVID-19) - United States.
[2] World Health Organization.
[3] World Health Organization declares global emergency: A review of the 2019 novel coronavirus (COVID-19).
[4] A review of coronavirus disease-2019 (COVID-19).
[5] A comprehensive literature review on the clinical presentation and management of the pandemic coronavirus disease 2019 (COVID-19).
[6] Stability issues of RT-PCR testing of SARS-CoV-2 for hospitalized patients clinically diagnosed with COVID-19.
[7] Real-time RT-PCR in COVID-19 detection: issues affecting the results.
[8] Evaluation of coronavirus in tears and conjunctival secretions of patients with SARS-CoV-2 infection.
[9] False-negative of RT-PCR and prolonged nucleic acid conversion in COVID-19: rather than recurrence.
[10] Laboratory diagnosis and monitoring the viral shedding of 2019-nCoV infections.
[11] Laboratory testing for coronavirus disease 2019 (COVID-19) in suspected human cases: interim guidance.
[12] Coronavirus disease 2019 (COVID-19): a systematic review of imaging findings in 919 patients.
[13] Sensitivity of chest CT for COVID-19: comparison to RT-PCR.
[14] Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases.
[15] Chest CT findings in coronavirus disease-19 (COVID-19): relationship to duration of infection.
[16] Coronavirus disease 2019 (COVID-19): role of chest CT in diagnosis and management.
[17] Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks.
[18] Computed tomography - an increasing source of radiation exposure.
[19] The role of chest imaging in patient management during the COVID-19 pandemic: a multinational consensus statement from the Fleischner Society.
[20] Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for COVID-19.
[21] Can AI help in screening viral and COVID-19 pneumonia?
[22] COVID-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks.
[23] Finding COVID-19 from chest X-rays using deep learning on a small dataset.
[24] COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images.
[25] Detection of coronavirus disease (COVID-19) based on deep features.
[26] COVID-19 screening on chest X-ray images using deep learning based anomaly detection.
[27] COVID-CAPS: a capsule network-based framework for identification of COVID-19 cases from X-ray images.
[28] Convolutional sparse support estimator based COVID-19 recognition from X-ray images.
[29] Advance warning methodologies for COVID-19 using chest X-ray images.
[30] Large-scale screening of COVID-19 from community acquired pneumonia using infection size-aware classification.
[31] A cascaded learning strategy for robust COVID-19 pneumonia chest X-ray screening.
[32] Deep learning COVID-19 features on CXR using limited training data sets.
[33] Automated detection of COVID-19 cases using deep neural networks with X-ray images.
[34] COVID MTNet: COVID-19 detection with multi-task deep learning approaches.
[35] COVID-CXNet: detecting COVID-19 in frontal chest X-ray images using deep learning.
[36] Lung infection quantification of COVID-19 in CT images with deep learning.
[37] Clinically applicable AI system for accurate diagnosis, quantitative measurements, and prognosis of COVID-19 pneumonia using computed tomography.
[38] MiniSeg: an extremely minimum network for efficient COVID-19 segmentation.
[39] U-Net: convolutional networks for biomedical image segmentation.
[40] UNet++: a nested U-Net architecture for medical image segmentation.
[41] Deep layer aggregation.
[42] CheXNet: radiologist-level pneumonia detection on chest X-rays with deep learning.
[43] Densely connected convolutional networks.
[44] Rethinking the Inception architecture for computer vision.
[45] Deep residual learning for image recognition.
[46] RSNA pneumonia detection challenge.
[47] PadChest: a large chest X-ray image dataset with multi-label annotated reports.
[48] Identifying medical diagnoses and treatable diseases by image-based deep learning.
[49] Preparing a collection of radiology examinations for distribution and retrieval.
[50] Two public chest X-ray datasets for computer-aided screening of pulmonary diseases.
[51] BIMCV COVID-19+: a large annotated dataset of RX and CT images from COVID-19 patients.
[52] COVID-19 image repository.
[53] COVID-19 database.
[54] COVID-19 image data collection.
[55] COVID-19 radiography database.
[56] Chest imaging.
[57] ChestX-ray8: hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases.
[58] Focal loss for dense object detection.
[59] V-Net: fully convolutional neural networks for volumetric medical image segmentation.
[60] TensorFlow: large-scale machine learning on heterogeneous distributed systems.
[61] Adam: a method for stochastic optimization.
[62] Grad-CAM: visual explanations from deep networks via gradient-based localization.