key: cord-0666887-mk3v6vav authors: Qiblawey, Yazan; Tahir, Anas; Chowdhury, Muhammad E. H.; Khandakar, Amith; Kiranyaz, Serkan; Rahman, Tawsifur; Ibtehaz, Nabil; Mahmud, Sakib; Al-Madeed, Somaya; Musharavati, Farayi title: Detection and severity classification of COVID-19 in CT images using deep learning date: 2021-02-15 journal: nan DOI: nan sha: 308aa26b4d9efb8c9c7f6b418e81f48c311e9dac doc_id: 666887 cord_uid: mk3v6vav Since the breakout of coronavirus disease (COVID-19), computer-aided diagnosis has become a necessity to prevent the spread of the virus. Detecting COVID-19 at an early stage is essential to reduce the mortality risk of the patients. In this study, a cascaded system is proposed to segment the lung, detect, localize, and quantify COVID-19 infections from computed tomography (CT) images. Furthermore, the system classifies the severity of COVID-19 as mild, moderate, severe, or critical based on the percentage of infected lungs. An extensive set of experiments was performed using state-of-the-art deep Encoder-Decoder Convolutional Neural Networks (ED-CNNs), namely U-Net and Feature Pyramid Network (FPN), with different backbone (encoder) structures using variants of DenseNet and ResNet. The conducted experiments showed the best performance for lung region segmentation, with a Dice Similarity Coefficient (DSC) of 97.19% and Intersection over Union (IoU) of 95.10%, using the U-Net model with the DenseNet161 encoder. Furthermore, the proposed system achieved excellent performance for COVID-19 infection segmentation, with a DSC of 94.13% and IoU of 91.85%, using the FPN model with the DenseNet201 encoder. The achieved performance is significantly superior to previous methods for COVID-19 lesion localization. Besides, the proposed system can reliably localize infections of various shapes and sizes, especially small infection regions, which are rarely considered in recent studies.
Moreover, the proposed system achieved high COVID-19 detection performance with 99.64% sensitivity and 98.72% specificity. Finally, the system was able to discriminate between different severity levels of COVID-19 infection over a dataset of 1,110 subjects with sensitivity values of 98.3%, 71.2%, 77.8%, and 100% for mild, moderate, severe, and critical infections, respectively. The coronavirus disease 2019 (COVID-19) has become a global pandemic that affects different aspects of human life. As of 11 January 2021, more than 88.8 million confirmed cases and 1.92 million deaths had been recorded, and the infection rate is still rapidly increasing worldwide [1]. Several laboratory identification tools are used for the detection of COVID-19, such as real-time reverse transcription-polymerase chain reaction (RT-PCR) and isothermal nucleic acid amplification technology [2, 3]. Currently, RT-PCR is considered the gold standard to detect COVID-19 [4]. However, a high false alarm rate usually occurs due to sample contamination, damage, or virus mutations in the COVID-19 genome. Medical imaging can be considered a first-line investigation tool [5]. Several studies [6, 7] suggested performing chest computerized tomography (CT) imaging as a secondary test if suspected patients show symptoms after a negative RT-PCR finding. For instance, in Wuhan, China, among 1,014 COVID-19 patients, 59% had positive RT-PCR results but 88% had positive CT scans. Besides, among the positive RT-PCR results, the CT scans achieved 97% sensitivity [8]. Thus, CT scans can detect COVID-19 with higher accuracy than RT-PCR. Moreover, CT images can show early lesions in the lung, and they can be used for diagnosis by radiologists. However, radiologists require significant diagnostic experience to distinguish COVID-19 from other types of pneumonia [9]. Radiologists need to carry out two tasks for COVID-19 patients: identification and severity quantification.
The purpose of identification is to identify COVID-19 patients among other patients so that they can be isolated as early as possible. Severity quantification can help medical personnel to prioritize the patients who will require emergency medical care. Carrying out both tasks demands considerable evaluation time from radiologists. Thus, developing artificial intelligence (AI)-based solutions for COVID-19 identification and severity quantification can offer a fast, efficient, and reliable alternative that supplements conventional medical diagnostic strategies. Recent studies showed that state-of-the-art deep convolutional neural networks (CNNs) can achieve or exceed the performance of medical experts in numerous medical image diagnosis tasks, such as skin lesion classification [10], brain tumor detection [11], breast cancer detection [12], and lung pathology screening [13, 14]. In general, recognizing COVID-19 among other types of pneumonia presents a unique difficulty compared to other lung diseases, such as tuberculosis screening, lung nodule detection, and lung cancer diagnosis. This difficulty arises from the high similarity between different types of pneumonia (especially in the early stage) and large variations within various stages of the same type. Powered by large annotated datasets and modern graphical processing units (GPUs), machine learning, especially deep learning techniques, has achieved breakthrough performance in several computer vision applications, such as image classification, object detection, and image segmentation. Recently, deep learning techniques applied to chest CT scans and chest X-ray (CXR) images have gained increasing popularity for diagnosing different lung diseases, showing promising results in various applications. Several studies have been published on CT-based COVID-19 diagnosis systems using machine learning models [15-19]. Several representative studies are summarised and reviewed below. Harmon et al.
[20] trained and evaluated a series of deep learning networks on a diverse multi-national cohort of 922 COVID-19 cases and 1,695 non-COVID patients to localize lung parenchyma, followed by identification of COVID-19 pneumonia. AH-Net was utilized for lung volume segmentation, achieving a Dice similarity coefficient (DSC) of 95%, while 3D-DenseNet-121 was employed to recognize lung regions as COVID-19 or non-COVID. The average score of multiple lung regions was utilized for the classification scheme, achieving 88.9% accuracy, 85.3% sensitivity, and 90.1% specificity. Wang et al. [21] introduced a deep regression framework for automatic pneumonia identification by jointly learning from CT scan images and clinical information (i.e., age, gender, and clinical complaints). ResNet50 was used as the backbone to extract visual features from CT images. The initial clinical information collected from admitted patients (fever, cough, trouble in breathing, etc.) was analyzed by a long short-term memory (LSTM) network and concatenated with demographic features (age and gender) and the extracted visual features from CT images. Finally, a regression framework was utilized to diagnose the suspected patient as community-acquired pneumonia (CAP) or normal. The proposed framework was evaluated over 900 clinical cases (450 CAP and 450 normal), achieving accuracy, sensitivity, specificity, and F1-score of 0.946, 0.942, 0.949, and 0.944, respectively. In a similar approach, Mei et al. [22] proposed a joint AI algorithm to combine chest CT findings with clinical data (symptoms, exposure history, and laboratory testing) to distinguish COVID-19 from non-COVID patients using a dataset of 905 cases. The joint model achieved high discriminative performance with 0.92 AUC, 84.3% sensitivity, and 82.8% specificity, outperforming a senior radiologist who achieved 0.84 AUC, 74.6% sensitivity, and 93.8% specificity.
The drawback of such combined systems is the availability of clinical information, especially when a large number of suspected patients are waiting to be diagnosed. Furthermore, these studies do not show the infection location in the lung, which can be useful to medical personnel for longitudinal monitoring of the patients. The aforementioned machine learning solutions with CT imaging were limited to COVID-19 detection only. However, COVID-19 pneumonia screening is important for evaluating the status of the patient and the treatment. In particular, COVID-19-related infection localization and the segmentation of pneumonia lesions are crucial tasks for accurate diagnosis and follow-up of pneumonia patients. Zhou et al. [23] proposed a lesion detection system that can quantify COVID-19 infection regions from chest CT scans. Three independent two-dimensional (2D) U-Nets are used for the x-y, y-z, and x-z views of the CT scan, where for each model, five adjacent slices are used as input, and the network outputs an infection prediction mask for the middle slice. The three intermediate binary predictions are aggregated by a simple sum, with a threshold value of 2 to detect infection pixels. Moreover, to alleviate the scarcity of annotated infection masks, a dynamic model was developed for data augmentation by simulating the progression of infection regions using multiple CT scan readings from the same patient. With the augmented data, the proposed system showed a performance of 78.3% DSC and 77.6% sensitivity. Deep learning has a high potential to automate the lesion detection task but requires a large set of high-quality annotations, which are difficult to collect during the current pandemic. Learning from noisy training labels, which are easier to generate, has the potential to alleviate this problem. Wang et al. [24] introduced a novel framework to learn from noisy COVID-19 infection masks. A performance of 68.5% hit rate (HR) was reported.
Authors in [26] proposed a system that carried out lung and lesion segmentation for CT images using DRUNET, which provided a DSC of 95.9% for lung segmentation. On the other hand, lesion segmentation scored a mean DSC (mDSC) of 58.7% using DeepLabv3 [27]; 4,695 CT slices were used for lung and lesion segmentation. The performance indicators reported by previous studies for lesion segmentation are lower than those for lung segmentation, so there is room for further improvement. A large annotated dataset is required to increase the performance, as only 201 scans were used in the results reported in [23]. Similar problems and computationally expensive techniques were reported in research articles [25] and [26]. For severity quantification, several studies suggested that deep learning can help in the quantification of COVID-19 lung opacification. Moreover, it can eliminate the subjectivity in the initial assessment of COVID-19 patients. Chaganti et al. [28] presented a method that automatically segments and quantifies abnormal CT patterns in COVID-19 patients. The proposed system utilized 9,749 chest CT volumes, segmented lesions, lungs, and lobe areas, and used four metrics for severity quantification: percentage of opacity, percentage of high opacity, lung severity score, and lung high opacity score. Despite the good performance, no clear evaluation metric for the segmentation network models was presented. Another work classified the severity into four classes (mild, moderate, severe, and critical) [16]. Lung and lesion segmentation were carried out using the U-Net model via commercial tools, with a median DSC of 0.85 for both models. Shen et al. [29] created a system that combines computer and radiologist evaluations to determine COVID-19 patient severity. The computer approach consisted of four phases: segmentation of the lung and lobes, segmentation of the pulmonary vessels, filtering out pulmonary vessels from the lung region, and detection of infection.
The lesion segmentation was done using thresholds and adaptive region growing. The work showed that the Pearson correlation between computer and radiologist evaluations ranged from 0.7679 to 0.8373; however, the study was carried out using only 44 patients. Pu et al. [30] created an automated system to quantify COVID-19 severity and progression using chest CT images. 120 patients were used to train and evaluate two U-Net models for lung and vessel segmentation. The proposed system achieved 95% and 81% DSC for lung and lesion segmentation, respectively. It is notable that the model failed to deal with pneumonic regions that are very small and near the vessels. Besides, the work used small datasets for training and testing; a total of 192 CT volumes were used. Although most of the reviewed studies showed good performance for both lung and infection segmentation tasks, they mainly used the conventional U-Net architecture or other techniques based on image processing. However, recently, different variants of the U-Net architecture and other encoder-decoder (E-D) CNNs, such as the feature pyramid network (FPN), with residual, dense, or inception blocks have shown state-of-the-art segmentation results in various applications. Therefore, there is still room to investigate the capability of those architectures for lung detection and COVID-19 infection localization tasks. Besides, several studies used a small number of patients and CT images to train, test, and validate the proposed systems. Table 1 summarizes the segmentation and classification results obtained by recent studies in the literature, and the table highlights the dataset size and the main networks used in each study. Although the above studies have demonstrated some promising results using chest CT for the diagnosis of COVID-19, there is room for improvement, particularly in lesion segmentation and severity detection.
Several works addressed lung and lesion segmentation, as shown in the previous section, which can help physicians to diagnose COVID-19 accurately and to assess the treatment response. However, the performance of lesion segmentation models is still low compared to lung segmentation. This work proposes a system to identify COVID-19 patients and classify their severity into four levels: mild, moderate, severe, and critical infection. Besides, the work investigates different deep learning methods for detecting COVID-19-infected slices from CT volumes. For segmentation, U-Net [35] and feature pyramid network (FPN) [36] models were investigated with different encoders to achieve the best performance for lung and lesion segmentation. ResNet18 [37], ResNet50, ResNet152 [37], DenseNet121, DenseNet161, and DenseNet201 [38] were used as backbone encoders for the segmentation models. Additionally, a reliable method was proposed to identify COVID-19 slices from the prediction maps generated by the infection segmentation models. Besides, COVID-19 infection is quantified by computing the percentage of infected lung pixels on the segmented lung CT slices. Finally, a 3D volumetric visualization is developed to show the overall infected area in the lungs. This work uses several datasets from 1,139 patients (51,027 CT slices) for training and validation; thus, the system dealt with images from different devices with varying quality levels. The rest of the paper is organized as follows: Section II describes the methodology adopted for the study. The experimental setup and evaluation metrics are presented in Section III. Section IV presents the results, performs an extensive set of comparative evaluations among the employed networks, and discusses and analyzes the results. Finally, the conclusions are drawn in Section V. Lung region segmentation from CT images is the first step of our proposed system.
Transfer learning was used on the encoder layers with ImageNet weights to train the segmentation networks. The input CT volumes are evaluated slice by slice. First, a binary lung mask is generated for the input CT slice using the 1st E-D CNN. Next, the lung is segmented using the generated mask and fed to the 2nd E-D CNN, which identifies the infected lung regions. To train and evaluate the proposed system, four public datasets from different sources were used in this work (Table 2). A total of 1,139 patients and 51,027 CT slices were used. The datasets are described below. The first dataset [39] consists of CT volumes from 20 patients, including 3,520 CT images with ground-truth lung and lesion masks. These images were labeled by two radiologists and verified by an experienced radiologist, as mentioned in the dataset description. The second dataset, called the "COVID-19 CT segmentation dataset" [40], is based on volumetric CTs from Radiopaedia. It includes 9 patients with 829 slices along with their corresponding ground-truth lung masks, which were created by expert radiologists. Another dataset was found on the Kaggle platform; it consists of 267 CT slices with their corresponding ground-truth lung masks [41]. These images are non-COVID cases, as they were collected in 2017. Additionally, the MosMedData dataset [42] was used. All images were resized to 256×256 for the segmentation tasks. Figure 2 shows sample images from each dataset. In the U-Net model, a final convolution layer is utilized to map the output from the last decoding block to two-channel feature maps, where a pixel-wise SoftMax activation function is applied to map each pixel into a binary class of background or lung for the lung parenchyma segmentation task, and background or lesion for the infection segmentation task. FPN, in contrast, employs the encoder-decoder structure as a pyramidal hierarchy, where a prediction mask is made at each spatial level of the decoder path.
In the final step, the predicted feature maps are up-sampled to the same size, concatenated, convolved with a 3×3 convolutional kernel, and a SoftMax activation is applied to generate the final prediction mask. Transfer learning was utilized on the encoder side of the segmentation networks by initializing the convolutional layers with ImageNet weights. The cross-entropy (CE) loss is used as the cost function for the segmentation networks:

$$\mathcal{L}_{CE} = -\sum_{k}\sum_{c} y_{k,c}\,\log\big(p_c(x_k)\big) \tag{1}$$

where $x_k$ denotes the k-th pixel in the predicted segmentation mask, $p_c(x_k)$ denotes its SoftMax probability for class $c$, $y_{k,c}$ is a binary random variable equal to 1 if the ground-truth class of $x_k$ is $c$ and 0 otherwise, and $c$ denotes the class category, i.e., $c \in \{\text{background}, \text{lung}\}$ for the lung segmentation task and $c \in \{\text{background}, \text{lesion}\}$ for the infection segmentation task. The detection of COVID-19 is performed based on the prediction maps generated by the lesion segmentation networks. Accordingly, a CT slice is classified as COVID-19 positive if at least one pixel is predicted as COVID-19 infection, i.e., $p_{\text{lesion}}(x_k) > 0.5$; otherwise, the image is considered normal. The severity of a COVID-19 patient is classified into four classes based on the percentage of infected lung parenchyma: mild, moderate, severe, and critical infection. The Percentage of Infection (PI) is calculated as the infected area (sum of white pixels) over the lung area for one CT slice. For the entire volume, the average over all slices is considered the patient severity percentage. Based on this percentage, the patient is classified into one of the four classes. Figure 3 demonstrates the calculation of the Percentage of Infection (PI) on one CT slice. An early stopping criterion was used as follows: when no improvement in validation loss is seen for 10 consecutive epochs, training is stopped. Table 3 presents the training and hyper-parameters for the lung and infection segmentation models.
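The slice-level detection rule and the per-slice Percentage of Infection described above can be sketched in pure Python as follows; the helper names, the 4×4 toy masks, and the example probability values are hypothetical, while the 0.5 probability threshold follows the rule stated in the text.

```python
def is_covid_positive(infection_probs, threshold=0.5):
    """A CT slice is classified COVID-19 positive if at least one
    pixel's predicted infection probability exceeds the threshold."""
    return any(p > threshold for row in infection_probs for p in row)

def percentage_of_infection(infection_mask, lung_mask):
    """PI for one slice: infected (white) pixels over lung pixels, in %."""
    infected = sum(p for row in infection_mask for p in row)
    lung = sum(p for row in lung_mask for p in row)
    return 100.0 * infected / lung if lung else 0.0

# Toy 4x4 slice: 8 lung pixels, 2 of them infected -> PI = 25%
lung = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0]]
lesion = [[0, 0, 0, 0],
          [0, 1, 1, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
print(percentage_of_infection(lesion, lung))      # 25.0
print(is_covid_positive([[0.1, 0.7], [0.2, 0.3]]))  # True
```

In the real system these binary masks would come from the lesion and lung segmentation networks rather than hand-written lists.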
The lung segmentation networks were trained using 5-fold cross-validation (CV), with 80% train and 20% test (unseen) folds, where 20% of the training data was used as a validation set to avoid overfitting. For infection segmentation, 10-fold cross-validation was used instead. Class imbalance in the dataset impacts the performance of deep learning models. Thus, data augmentation was used to balance the size of each class in the lung and lesion segmentation datasets and to avoid overfitting [43]. This step is crucial for the training phase to reduce the error associated with the lung segmentation task, which might propagate to the subsequent lesion segmentation task [44]. We performed data augmentation by applying rotations of 90, -90, and 180 degrees to the CT images and ground-truth masks. Table 4 summarizes the number of images per class used for training, validation, and testing at each fold. Independent training and evaluation were provided for both networks: original CT slices were used as input to the lung segmentation models, and lung-segmented CT slices were used as inputs to the lesion segmentation network, with infection masks as ground truth. Besides, a combined evaluation was provided using the best lung segmentation and infection segmentation models to evaluate the overall performance of the proposed cascaded system. The performance of the detection and segmentation tasks was assessed using different evaluation metrics with 95% confidence intervals (CIs). The CI for each evaluation metric was computed as follows:

$$CI = r \pm z\sqrt{\frac{r(1-r)}{N}} \tag{2}$$

where $r$ is the computed evaluation metric, $N$ is the number of test samples, and $z$ is the level of significance, which is 1.96 for a 95% CI. All values were computed over the overall confusion matrix that accumulates all test fold results of the 5-fold or 10-fold cross-validation in the respective experiments.
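The confidence-interval computation described above can be sketched as a small helper; the function name and the example metric value and sample count below are illustrative, not taken from the paper.

```python
import math

def confidence_interval(metric, n, z=1.96):
    """Binomial CI for a rate in [0, 1]: metric +/- z*sqrt(metric*(1-metric)/n).
    z = 1.96 corresponds to a 95% confidence level."""
    half = z * math.sqrt(metric * (1.0 - metric) / n)
    return metric - half, metric + half

# e.g., a DSC of 0.9413 measured over a hypothetical 5,000 test slices
low, high = confidence_interval(0.9413, 5000)
print(f"95% CI: {low:.4f}-{high:.4f}")
```

The interval is symmetric around the metric, and its width shrinks as the number of test samples N grows.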
The performance of the lung and lesion segmentation networks was evaluated using three metrics: accuracy, Intersection over Union (IoU), and Dice Similarity Coefficient (DSC):

$$Accuracy = \frac{TP + TN}{TP + TN + FP + FN} \tag{3}$$

where accuracy is the ratio of correctly classified pixels among all image pixels, and TP, TN, FP, and FN represent the true positives, true negatives, false positives, and false negatives, respectively.

$$IoU = \frac{TP}{TP + FP + FN} \tag{4}$$

$$DSC = \frac{2\,TP}{2\,TP + FP + FN} \tag{5}$$

Both IoU and DSC are statistical measures of the spatial overlap between the binary ground-truth segmentation mask and the predicted segmentation mask; the main difference is that DSC gives double weight to true lung/lesion predictions compared to IoU. Five evaluation metrics were considered for the COVID-19 detection scheme: accuracy, sensitivity, precision, F1-score, and specificity.

$$Precision = \frac{TP}{TP + FP} \tag{6}$$

where precision is the rate of correctly classified positive-class CT samples among all samples classified as positive.

$$Sensitivity = \frac{TP}{TP + FN} \tag{7}$$

where sensitivity is the rate of correctly predicted positive samples among all positive-class samples.

$$F1 = \frac{2 \times Precision \times Sensitivity}{Precision + Sensitivity} \tag{8}$$

where F1 is the harmonic average of precision and sensitivity.

$$Specificity = \frac{TN}{TN + FP} \tag{9}$$

where specificity is the ratio of accurately predicted negative-class samples to all negative-class samples. This section describes the results of the lung and lesion segmentation, COVID-19 detection, and severity classification, along with 3D lung modeling to visualize lung infections. The results of the 5-fold cross-validation are tabulated in Table 5. Segmenting slices with small lung regions is considered a challenging task for deep learners, as shown in rows 2 and 3 of Figure 4, where the network was nevertheless able to generate a mask for the small lung slices. Although the lungs can be severely affected by COVID-19 lesions, the trained model successfully segmented the lung boundaries, as shown in Figure 4. This reflects the robustness of the proposed lung segmentation model. Authors in [22, 45] discarded slices with a small lung area (less than 20% of the body part) during the pre-processing phase.
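The pixel-level metrics defined above can be computed directly from confusion-matrix counts; the function below is a minimal sketch with hypothetical counts. One consequence of the definitions worth noting: on pixel counts, the F1-score coincides with the DSC, since both reduce to 2TP/(2TP+FP+FN).

```python
def segmentation_metrics(tp, tn, fp, fn):
    """Evaluation metrics from confusion-matrix counts,
    matching the definitions in the text."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "iou":         tp / (tp + fp + fn),
        "dsc":         2 * tp / (2 * tp + fp + fn),
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Hypothetical counts for one evaluated mask
m = segmentation_metrics(tp=80, tn=100, fp=10, fn=10)
m["f1"] = 2 * m["precision"] * m["sensitivity"] / (m["precision"] + m["sensitivity"])
print(m["dsc"], m["iou"])  # DSC weights true positives twice, so DSC >= IoU
```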
However, this work included such images in the training and testing sets. Lesion segmentation can assist medical doctors to better diagnose the infection in the lung. The segmentation performance of the different networks is presented in Table 6. The results indicate that the FPN network performs better than U-Net in general. DenseNet201 FPN achieved the best segmentation performance, with an IoU of 91.85% and a DSC of 94.13%. The second- and third-best networks were also FPN models, with very close results and insignificant differences. Figure 6 shows the ability of the top three networks to segment the infected regions even from small lung areas. The COVID-19 detection performance of the lesion segmentation networks applied to the CT lung images is presented in Table 7. Since missing any COVID-19-positive case is critical, sensitivity is the primary metric that we consider for detection. All the networks achieved high sensitivity values (>99%), where both the U-Net and FPN networks with DenseNet201 as the backbone achieved the best performance with 99.64% sensitivity, which indicates that the proposed approach can achieve a high level of robustness. Moreover, the FPN model with DenseNet201 as the backbone achieved a specificity of 98.72%, indicating a significantly low false alarm rate. The MosMedData dataset provided 1,110 patients, which were used to test the performance of the proposed severity classification system. Lung and lesion masks were generated using the best-performing networks obtained in the previous sections. Figure 7 shows three examples of masks predicted by both models, the best lung segmentation network (DenseNet161 U-Net) and the best lesion segmentation network (DenseNet201 FPN), on an entirely independent dataset. It can be seen that the cascaded networks were able to detect the lung borders very accurately and also performed well in detecting the main COVID-19 infection regions.
The infection percentage was calculated for each CT volume, and each volume was classified as healthy (CT0), or with mild (CT1), moderate (CT2), severe (CT3), or critical (CT4) COVID-19 infection using the criteria mentioned in the MosMedData dataset, but quantified using our infection percentage quantification method. It should be noted that the ground truth for the CT0-CT4 classification was provided in the dataset and was produced by professional radiologists visually inspecting the CT slices. The confusion matrix for the classification of the 1,110 patients is shown in Figure 8, and the quantitative evaluation is summarised in Table 8. For the normal (CT0) cases in MosMedData [42], it was said that no viral pneumonia is shown in these cases; however, other types of pneumonia may exist. Figure 8 shows that all COVID-19 cases were detected as CT1, CT2, CT3, or CT4; none of the COVID-19 cases were predicted as normal (CT0). In other words, the system can distinguish COVID-19 patients from healthy cases on an independent test set very reliably. Thus, the severity classification performance matches the results obtained in the detection section. Furthermore, the proposed system showed lower sensitivity values for moderate (CT2) and severe (CT3) cases compared to CT0, CT1, and CT4; this can be related to how the dataset was labeled by radiologists. The dataset was labeled by two radiologists using a visual semi-quantitative approach, and such an approach can lead to weak labeling [46]. A 3D model of the lung with infection segmentation is generated for each patient using the outputs of the lung and lesion segmentation networks. The proposed tool can assist medical doctors to better assess the infection and evaluate its severity. Figure 9 shows 3D lung models from different views, where the COVID-19 infection is presented with red color saturation.
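The volume-level severity assignment described above can be sketched as follows; the helper name and the per-slice PI values are hypothetical, and the 25/50/75% cut-offs reflect the MosMedData CT0-CT4 grading criteria as commonly stated, which the paper adopts but does not spell out in this section.

```python
def volume_severity(slice_pi_values):
    """Average the per-slice Percentage of Infection (PI, in %) over the
    CT volume, then map it to a MosMedData-style severity grade
    (assumed thresholds: <25, 25-50, 50-75, >75)."""
    pi = sum(slice_pi_values) / len(slice_pi_values)
    if pi == 0:
        return "CT0 (healthy)"
    if pi < 25:
        return "CT1 (mild)"
    if pi < 50:
        return "CT2 (moderate)"
    if pi < 75:
        return "CT3 (severe)"
    return "CT4 (critical)"

print(volume_severity([5.0, 12.5, 20.0]))  # CT1 (mild)
```

In practice the per-slice PI values would be produced by the cascaded lung and lesion segmentation networks rather than supplied by hand.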
In this paper, we proposed a systematic approach for COVID-19 detection, lung and lesion segmentation, and severity classification. In summary, computer-aided detection and quantification is an accurate, easy, and feasible method to diagnose COVID-19 cases. The Qatar University COVID-19 Emergency Response Grant (QUERG-CENG-2020-1) provided the support for this work, and the claims made herein are solely the responsibility of the authors. The authors report no declarations of interest.

References

[1] Weekly epidemiological update on COVID-19
[2] Isothermal nucleic acid amplification technologies for point-of-care diagnostics: a critical review
[3] Detection of 2019 novel coronavirus (2019-nCoV) by real-time RT-PCR
[4] A Comprehensive Literature Review on the Clinical Presentation, and Management of the Pandemic Coronavirus Disease 2019 (COVID-19)
[5] The Role of Chest Imaging in Patient Management during the COVID-19 Pandemic: A Multinational Consensus Statement from the Fleischner Society
[6] Coronavirus Disease 2019 (COVID-19): A Systematic Review of Imaging Findings in 919 Patients
[7] Sensitivity of Chest CT for COVID-19: Comparison to RT-PCR
[8] Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases
[9] Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: a descriptive study
[10] Dermatologist-level classification of skin cancer with deep neural networks
[11] Automatic brain tumor detection and segmentation using U-Net based fully convolutional networks
[12] Deep learning to improve breast cancer detection on screening mammography
[13] End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography
[14] Coronavirus: Comparing COVID-19, SARS and MERS in the eyes of AI
[15] A Deep Learning System to Screen Novel Coronavirus Disease 2019 Pneumonia
[16] Deep-Learning Approach
[17] Inf-Net: Automatic COVID-19 Lung Infection Segmentation From CT Images
[18] Diagnosis of Coronavirus Disease 2019 (COVID-19) With Structured Latent Multi-View Representation Learning
[19] A Fully Automatic Deep Learning System for COVID-19 Diagnostic and Prognostic Analysis
[20] Artificial intelligence for the detection of COVID-19 pneumonia on chest CT using multinational datasets
[21] Deep Regression via Multi-Channel Multi-Modal Learning for Pneumonia Screening
[22] Artificial intelligence-enabled rapid diagnosis of patients with COVID-19
[23] A rapid, accurate and machine-agnostic segmentation and quantification method for CT-based COVID-19 diagnosis
[24] A noise-robust framework for automatic segmentation of COVID-19 pneumonia lesions from CT images
[25] A Weakly-supervised Framework for COVID-19 Classification and Lesion Localization from Chest CT
[26] Clinically Applicable AI System for Accurate Diagnosis, Quantitative Measurements, and Prognosis of COVID-19 Pneumonia Using Computed Tomography
[27] Rethinking atrous convolution for semantic image segmentation
[28] Automated Quantification of CT Patterns Associated with COVID-19 from
[29] Quantitative computed tomography analysis for stratifying the severity of Coronavirus Disease
[30] Automated quantification of COVID-19 severity and progression using chest CT images
[31] Dual-branch combination network (DCN): Towards accurate diagnosis and lesion segmentation of COVID-19 using CT images
[32] Residual Attention U-Net for Automated Multi-Class Segmentation of COVID-19 Chest CT Images
[33] Dual-Sampling Attention Network for Diagnosis of COVID-19 From Community Acquired Pneumonia
[34] Development and evaluation of an artificial intelligence system for COVID-19 diagnosis
[35] U-Net: Convolutional networks for biomedical image segmentation
[36] Feature pyramid networks for object detection
[37] Deep residual learning for image recognition
[38] Densely connected convolutional networks
[39] COVID-19 CT Lung and Infection Segmentation Dataset
[40] COVID-19 CT segmentation dataset
[41] Finding and Measuring Lungs in CT Data
[42] MosMedData: Chest CT Scans With COVID-19 Related Findings Dataset
[43] Deep Learning With Python (Data Sciences)
[44] Prior-Attention Residual Learning for More Discriminative COVID-19 Screening in CT Images
[45] A Fully Automated Deep Learning-based Network For Detecting COVID-19 from a New And Large Lung CT Scan Dataset
[46] CT-based COVID-19 Triage: Deep Multitask Learning Improves Joint Identification and Severity Quantification