key: cord-0995316-8864qi95 authors: Liu, Jiannan; Dong, Bo; Wang, Shuai; Cui, Hui; Fan, Dengping; Ma, Jiquan; Chen, Geng title: COVID-19 Lung Infection Segmentation with A Novel Two-Stage Cross-Domain Transfer Learning Framework date: 2021-08-06 journal: Med Image Anal DOI: 10.1016/j.media.2021.102205 sha: 545d053f44c21224bf1b9b3e5c2bb2c3fb292f14 doc_id: 995316 cord_uid: 8864qi95 With the global outbreak of COVID-19 in early 2020, rapid diagnosis of COVID-19 has become an urgent need to control the spread of the epidemic. In clinical settings, lung infection segmentation from computed tomography (CT) images can provide vital information for the quantification and diagnosis of COVID-19. However, accurate infection segmentation is a challenging task due to (i) the low boundary contrast between infections and the surroundings, (ii) large variations of infection regions, and, most importantly, (iii) the shortage of large-scale annotated data. To address these issues, we propose a novel two-stage cross-domain transfer learning framework for accurately segmenting COVID-19 lung infections from CT images. Our framework consists of two major technical innovations, including an effective infection segmentation deep learning model, called nCoVSegNet, and a novel two-stage transfer learning strategy. Specifically, our nCoVSegNet conducts effective infection segmentation by taking advantage of attention-aware feature fusion and large receptive fields, aiming to resolve the issues related to low boundary contrast and large infection variations. To alleviate the data shortage, nCoVSegNet is pre-trained using a two-stage cross-domain transfer learning strategy, which makes full use of the knowledge from natural images (i.e., ImageNet) and medical images (i.e., LIDC-IDRI) to boost the final training on CT images with COVID-19 infections.
Extensive experiments demonstrate that our framework achieves superior segmentation accuracy and outperforms the cutting-edge models, both quantitatively and qualitatively. The outbreak of the 2019 coronavirus disease has triggered a global public health emergency (Lancet, 2020). There were a total of 165,711,111 confirmed cases and 3,528,951 confirmed deaths worldwide as of May 20th, 2021. The COVID-19 pandemic has caused unprecedented hazards to public health, the global economy, and so on (Xiong et al., 2020). In this severe situation, rapid control of the spread of COVID-19 becomes particularly important. Early diagnosis of COVID-19 plays a vital role in controlling the spread of the disease. Currently, reverse transcription-polymerase chain reaction (RT-PCR) is the most widely adopted approach for the diagnosis of COVID-19 (Guan et al., 2020). However, RT-PCR suffers from a number of limitations, including low efficiency, short supply of test kits, and low sensitivity (Xie et al., 2020). Compared with RT-PCR, chest computed tomography (CT) imaging allows effective COVID-19 screening with high sensitivity and is easy to access in clinical settings. Moreover, CT imaging has gained increasing attention from the research community (Phelan et al., 2020), where efforts have been directed to investigate the COVID-19 induced pathological changes from the perspective of radiology. Accurate segmentation of lung infections from CT images is crucial to the quantification and diagnosis of COVID-19 (Shi et al., 2020; Tilborghs et al., 2020). Traditional manual/semi-automatic segmentation techniques are time-consuming and require the intervention of clinical physicians. In addition, the segmentation results tend to be biased by the expert's experience. Therefore, automatic lung infection segmentation is greatly desired in clinical settings. Significant efforts have been directed towards this direction (Vaishya et al., 2020).
In particular, deep learning techniques have been widely employed and have shown great potential. For instance, Shan et al. (2020) proposed a deep learning model, called VB-Net, to segment lung lobes and lung infections from the CT scans of patients. In addition, a human-in-the-loop strategy was employed to refine the annotation of each CT scan. Elharrouss et al. (2020) developed a multitask deep learning framework for lung infection segmentation from CT images. Qiu et al. (2020) proposed a lightweight deep learning model, called MiniSeg, for COVID-19 infection segmentation, aiming to resolve the issues of over-fitting and low computational efficiency. However, accurate COVID-19 lung infection segmentation is still a challenging task due to three key factors: (i) Low boundary contrast. The boundary between the COVID-19 infected regions and surrounding normal tissues suffers from low contrast and is usually blurry (Fan et al., 2020). This induces significant difficulties for accurate lung infection segmentation. (ii) Large variation. The COVID-19 lung infection exhibits a large variety of morphological appearances, e.g., size, shape, etc., which aggravates the difficulty of accurate segmentation. Most importantly, (iii) Shortage of labeled data. Large-scale infection annotations provided by clinical doctors are extremely difficult to obtain, especially at an early stage of the disease outbreak. This is a major issue restricting the performance of deep learning segmentation models, which rely on sufficient training data. To handle the low boundary contrast and large infection variation, a large receptive field is greatly desired since it can provide rich contextual information. In addition, the fusion of multi-level features is another key factor determining the success of infection segmentation. However, existing works usually overlook the importance of these two factors, which can result in unsatisfactory performance.
To tackle the shortage of labeled data, transfer learning has been adopted and has gained increasing interest from the medical image analysis community (Shie et al., 2015; Shin et al., 2016; Cheplygina et al., 2019). In general, there are two widely adopted transfer learning strategies: (i) Network Backbone. A network trained on large-scale datasets (e.g., ImageNet (Deng et al., 2009)) can be embedded into medical image analysis models as a backbone for extracting informative features (Carneiro et al., 2015). (ii) Network Pre-training. Methods in this category pre-train the whole network using large-scale datasets and then perform formal training with the target dataset (Chatfield et al., 2014). Both strategies have shown promising performance in medical image analysis tasks. However, existing works usually focus on only one of these two strategies and are therefore unable to make full use of the power of transfer learning. In this paper, we propose a novel two-stage cross-domain transfer learning framework for the accurate segmentation of COVID-19 lung infections from CT images. Our framework is based on a specially designed deep learning model, called nCoVSegNet, which segments lung infections with attention-aware feature fusion and large receptive fields. Specifically, we first feed the CT images to a backbone network to extract multi-level features. The features are then passed through our global context-aware (GCA) modules, which provide rich features from significantly enlarged receptive fields. Finally, we fuse the features using our dual-attention fusion (DAF) modules, which integrate the multi-level features under the guidance of spatial and channel attention mechanisms. Furthermore, we train our model with a two-stage transfer learning strategy for improved performance.
The first stage takes advantage of a backbone network trained on ImageNet (Deng et al., 2009) and provides valuable cross-domain knowledge from the human perception of natural images. Since a large gap exists between natural images and COVID-19 CT images, we further perform a second-stage transfer learning, where our model is pre-trained using the Lung Image Database Consortium-Image Database Resource Initiative (LIDC-IDRI) Lung Cancer Dataset (Armato III et al., 2011), which is currently the largest CT dataset for pulmonary nodule detection and provides a large number of chest CT images that share similar appearances with COVID-19 CT images. The second-stage transfer learning fills the gap between the two domains and provides vital knowledge from a neighboring domain to improve the segmentation accuracy. Finally, we train our model with COVID-19 CT images from the MosMedData dataset (Morozov et al., 2020). Extensive experiments demonstrate the effectiveness of our nCoVSegNet and the two-stage transfer learning strategy, both quantitatively and qualitatively. Our main contributions are summarized as follows: 1. We develop a novel two-stage transfer learning framework for segmenting COVID-19 lung infections from CT images. Our framework learns valuable knowledge from both natural images and CT images with pulmonary nodules, allowing more effective network training for improved performance. 2. We propose an effective infection segmentation network, nCoVSegNet, which consists of a backbone network along with our GCA and DAF modules. Our nCoVSegNet accurately segments lung infections from CT images by taking advantage of attention-aware feature fusion and large receptive fields. 3. Extensive experiments on two COVID-19 CT datasets demonstrate that our framework is able to segment lung infections accurately and outperforms state-of-the-art methods remarkably.
Our framework, which provides vital information regarding lung infections, has great potential to boost the clinical diagnosis and treatment of COVID-19. Our paper is organized as follows: In Section 2, we introduce related works. Section 3 describes our COVID-19 lung infection segmentation framework in detail. In Section 4, we present the datasets and experimental results. Finally, we conclude our work in Section 5. In this section, we review related works from two categories, including COVID-19 lung infection segmentation and transfer learning in medical imaging. Recently, deep learning has been actively employed for COVID-19 lung infection segmentation (Chen et al., 2020; Zhao et al., 2020; Wu et al., 2021; Yan et al., 2020). A novel noise-robust framework (Wang et al., 2020a) was also proposed for COVID-19 pneumonia lesion segmentation, where a noise-robust Dice loss and a mean absolute error loss were used. The success of deep learning models relies on a large amount of labeled training data, which is, however, hard to guarantee for COVID-19 infection segmentation, especially at the early outbreak of the disease. To this end, effort has been dedicated to semi-supervised (Fan et al., 2020) and weakly-supervised methods. Different from existing works (Bressem et al., 2021; Zhou et al., 2021; Fan et al., 2020), we propose a novel network with consideration of attention-aware feature fusion and enlarged receptive fields. In addition, we address the shortage of training data with an effective two-stage cross-domain transfer learning strategy. Transfer learning is particularly effective in resolving the shortage of training data for deep learning models designed for medical image analysis tasks. In general, we classify the existing works into two categories.
The first category is similar to our first-stage transfer learning, where a convolutional neural network (CNN) (e.g., VGGNet (Simonyan and Zisserman, 2015) and ResNet (He et al., 2016) ) that has been pre-trained on a large-scale dataset (e.g., ImageNet (Deng et al., 2009) ) is utilized as the backbone of a network for feature extraction. This strategy has been widely employed for a variety of medical image analysis tasks, such as lesion detection and classification (Byra et al., 2019; Khan et al., 2019) , normal/abnormal tissue segmentation (Vu et al., 2020; Bressem et al., 2021) , disease identification (Bar et al., 2015; Shie et al., 2015) , etc. The other strategy is to pre-train the network with a large-scale dataset before the formal training using the target dataset with limited data. This strategy has been employed in brain tumor retrieval (Swati et al., 2019) and various disease diagnosis tasks (Tajbakhsh et al., 2016; Liang and Zheng, 2020) . In particular, Li et al. (2017) employed three different transfer learning strategies to conduct diabetic retinopathy fundus image classification, demonstrating that transfer learning is a promising technique for alleviating the shortage of training data. In our work, we address the shortage of training data using a two-stage transfer learning strategy, which takes advantage of both model-level transfer learning (i.e., network backbone) and data-level transfer learning (i.e., network pre-training). In this section, we provide the details of our framework by first presenting the proposed segmentation model, nCoVSegNet, and then explaining the two-stage transfer learning strategy. Our segmentation model mainly consists of a backbone network and two key modules, i.e., the global context-aware (GCA) module and dual-attention fusion (DAF) module. The backbone network extracts multi-level features from the input CT images. 
Then, the GCA modules enhance the features before feeding them to the DAF modules for predicting the segmentation maps. As shown in Fig. 1, the multi-level features are first extracted from the hierarchical layers of the backbone network. Both the low-level and high-level features are then fed to GCA modules for enhancement by enlarging the receptive fields. Note that the low-/high-level features denote the features closer to the beginning/end (i.e., input/output) of the backbone network (Lin et al., 2017; He et al., 2016). We then employ three DAF modules to perform feature fusion for predicting the segmentation maps. Furthermore, we employ a deep supervision strategy to supervise the outputs of the three DAF modules and the output of the last GCA module. We use the first four layers of the pre-trained ResNet50 as the encoder of nCoVSegNet. The enhanced channel attention (ECA) component is embedded in each ResNet block (RB) to preserve as much useful texture information as possible from the original CT image and to filter out interference information, e.g., noise. Note that the size of the feature map is halved and the number of channels is doubled between two neighboring RBs. Inspired by (Liu et al., 2018), we develop the GCA module, which exploits more informative features using enlarged receptive fields. As shown in Fig. 2, the output of the GCA module is computed as f_GCA = Conv_{3×3}(Cat(f_1, f_3, f_5, f_7)) + Conv_{1×1}(f_RB), where f_i denotes the features from the ith branch, with i ∈ {1, 3, 5, 7} indicating the size of the asymmetric convolutional kernel; Cat(·) denotes the concatenation operation; Conv_{1×1}(·) and Conv_{3×3}(·) represent the convolutional layers with kernel sizes of 1 × 1 and 3 × 3, respectively; and f_RB denotes the features extracted from the backbone. For the last GCA module (i.e., GCA_4 in Fig. 1), we remove the branch with 7 × 7 convolutional kernels since the receptive field of high-level features is already large. This adjustment saves memory and has only a marginal influence on the results.
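To make the multi-branch design concrete, the following PyTorch sketch builds a GCA-like block with asymmetric 1 × k / k × 1 convolutions and dilation rates that grow with the kernel size, in the spirit of the cited RFB design (Liu et al., 2018). The channel widths, residual fusion, and exact dilation choices here are illustrative assumptions, not the paper's definition:

```python
import torch
import torch.nn as nn

class GCABranch(nn.Module):
    """One branch: 1x1 reduction, then k x 1 and 1 x k asymmetric convolutions,
    then a 3x3 convolution whose dilation grows with k to enlarge the
    receptive field."""
    def __init__(self, cin, cout, k):
        super().__init__()
        layers = [nn.Conv2d(cin, cout, 1)]
        if k > 1:
            layers += [nn.Conv2d(cout, cout, (1, k), padding=(0, k // 2)),
                       nn.Conv2d(cout, cout, (k, 1), padding=(k // 2, 0))]
        layers += [nn.Conv2d(cout, cout, 3, padding=k, dilation=k)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

class GCASketch(nn.Module):
    """Multi-branch context block: concatenate the branch outputs, fuse them
    with a 3x3 convolution, and add a 1x1 shortcut from the input features."""
    def __init__(self, cin, cout, ks=(1, 3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(GCABranch(cin, cout, k) for k in ks)
        self.fuse = nn.Conv2d(cout * len(ks), cout, 3, padding=1)
        self.shortcut = nn.Conv2d(cin, cout, 1)

    def forward(self, x):
        cat = torch.cat([b(x) for b in self.branches], dim=1)  # Cat(f1, f3, f5, f7)
        return torch.relu(self.fuse(cat) + self.shortcut(x))

f = GCASketch(64, 32)(torch.randn(1, 64, 44, 44))
print(f.shape)  # torch.Size([1, 32, 44, 44])
```

Passing `ks=(1, 3, 5)` would mirror the removal of the 7 × 7 branch in the last GCA module described above.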
To fuse the rich features from the GCA modules, we propose a novel DAF module, which enhances the lower-level features using the attention maps derived from the upper-level features, i.e., f^k_DAF = (f^k_GCA ⊙ W_CA) ⊙ W_SA + DConv(f^{k+1}_GCA), where f^k_GCA and f^{k+1}_GCA represent the features provided by the kth (lower-level) and (k+1)th (upper-level) GCA modules with k = 1, 2, 3, and ⊙ denotes the Hadamard product, i.e., element-wise multiplication. DConv(·) represents the deconvolution operation, which enlarges the size of the feature map. W_CA and W_SA are the CA and SA attention weight matrices, computed from the upsampled upper-level features as W_CA = σ(Pool(DConv(f^{k+1}_GCA))) and W_SA = σ(Conv(Pool(DConv(f^{k+1}_GCA)))), where Pool(·) denotes the pooling operation and σ(·) represents the Sigmoid activation function. We employ a deep supervision strategy (Lee et al., 2015) to design the loss function. Specifically, as shown in Fig. 1, the supervision is added to each DAF module and the last GCA module, allowing better gradient flow and more effective network training. For each supervision, we consider two losses, i.e., the binary cross-entropy (BCE) loss and the Dice loss (Milletari et al., 2016). The overall loss is therefore designed as L_total = Σ_k (L^k_BCE + L^k_Dice), where k indexes the supervised outputs. For the BCE loss, a modified version (Wei et al., 2020) is adopted to alleviate the imbalance problem between positive (lesion) and negative (normal tissue) pixels. For clarity, we omit the superscript k. The definition of L_BCE is as follows: L_BCE = −(Σ_{i=1}^{W} Σ_{j=1}^{H} Σ_{l∈{0,1}} α_{i,j} 1[g_{i,j} = l] log Pr(p_{i,j} = l | Ψ)) / (Σ_{i=1}^{W} Σ_{j=1}^{H} α_{i,j}), where l ∈ {0, 1} indicates the two kinds of labels, p_{i,j} and g_{i,j} are the prediction and ground-truth values at location (i, j) in a CT image of width W and height H, Ψ represents all the parameters of the model, and Pr(p_{i,j} = l) denotes the predicted probability. α_{i,j} ∈ [0, 1] is the weight of each pixel. Following (Wei et al., 2020), we define α as α_{i,j} = |(Σ_{(m,n)∈A_{i,j}} g_{m,n}) / |A_{i,j}| − g_{i,j}|, where A_{i,j} represents the area surrounding the pixel (i, j), and |·| denotes the absolute value for scalars and the cardinality for sets. In addition, the Dice loss is defined as L_Dice = 1 − (2 Σ_{i,j} p_{i,j} g_{i,j}) / (Σ_{i,j} p_{i,j} + Σ_{i,j} g_{i,j}). We train nCoVSegNet using an effective two-stage cross-domain transfer learning strategy. As shown in Fig.
3, at the first stage, the knowledge learned by the backbone network, which is pre-trained on natural images, is transferred to our task at the model level. It is worth noting that this stage provides cross-domain learning, which transfers knowledge from natural images to medical images. As aforementioned, we use modified ResNet blocks pre-trained on ImageNet as the backbone of nCoVSegNet for transfer learning at this stage. At the second stage, the CT images for lung nodule detection are utilized for transfer learning at the data level. Our motivation lies in two aspects. First, there exists a large gap between natural images and COVID-19 CT images, which calls for a procedure to fill the cross-domain gap. Second, as shown in Fig. 4, COVID-19 lung infections share similar appearances with pulmonary nodules, such as ground-glass opacity at the early stage and pulmonary consolidation at the late stage (Zhou et al., 2020b). Therefore, lung nodule segmentation is able to provide useful guidance for COVID-19 lung infection segmentation. Motivated by these observations, we employ the current largest lung nodule CT dataset, LIDC-IDRI, to perform the second-stage transfer learning, which provides vital knowledge from a neighboring domain to fill the cross-domain gap. The LIDC-IDRI dataset is thus regarded as the source-domain data used to pre-train the whole network for data-level transfer learning. After that, the COVID-19 dataset is employed for the final training of our model. In this section, we first provide detailed information on the datasets, followed by the experimental settings and evaluation methods. Finally, we present the experimental results for our nCoVSegNet and the cutting-edge models. To conduct data-level transfer learning, we use the largest dataset for lung nodule detection, i.e., LIDC-IDRI, for pre-training. In addition, the COVID-19 CT images from the MosMedData dataset (Morozov et al., 2020) are used to train our model.
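As a toy illustration (not the authors' code), the two-stage schedule can be sketched in PyTorch as follows. Random tensors stand in for LIDC-IDRI and MosMedData slices, and a tiny conv net stands in for the ImageNet-initialized nCoVSegNet; the learning rates follow the settings reported below:

```python
import torch
import torch.nn as nn

# Toy stand-in for nCoVSegNet: in the real pipeline the backbone would carry
# ImageNet-pretrained weights (stage one, model-level transfer).
backbone = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
head = nn.Conv2d(8, 1, 1)
model = nn.Sequential(backbone, head)
criterion = nn.BCEWithLogitsLoss()

def train_stage(model, lr, data):
    """Run one training stage with Adam at the given learning rate."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for x, y in data:
        opt.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        opt.step()
    return float(loss)

torch.manual_seed(0)
def fake_dataset(n):
    # random images with sparse binary masks, standing in for CT slices
    return [(torch.randn(2, 1, 16, 16), (torch.rand(2, 1, 16, 16) > 0.9).float())
            for _ in range(n)]

# Stage two: pre-train the whole network on the source domain (lung nodules).
train_stage(model, lr=1e-4, data=fake_dataset(5))
# Final training: fine-tune on the target domain (COVID-19) with a smaller lr.
final_loss = train_stage(model, lr=5e-5, data=fake_dataset(5))
print(final_loss >= 0.0)  # True
```

The same weights flow through both stages; only the data source and learning rate change between them.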
In our experiments, 40 cases from MosMedData are used for training, while the remaining 10 subjects are used for testing. The COVID-19 CT images from https://coronacases.org/ with the annotations provided by (Ma et al., 2021) are used to test our trained model for evaluating its generalization performance. This dataset is denoted as "Coronacases". Table 1 shows the statistics of our training and testing datasets. The data processing for the three datasets is detailed as follows. LIDC-IDRI: According to the CSV files provided with the dataset, we select subjects with lung nodules. Next, we generate the nodule ground-truth mask by referring to each patient's XML file. Since each lung nodule is marked by four doctors, we take a 50% consensus to determine the ground-truth masks. MosMedData and Coronacases: We extract the lung regions from CT images before training and testing. For this purpose, we first segment the lung using the unsupervised method in (Liao et al., 2019). Based on the 3D lung mask, we determine a bounding box for the region of the lung. This bounding box is then utilized to crop the 3D region of the lung from a CT scan. Our model is implemented using PyTorch and trained on an NVIDIA GeForce RTX 2080 Ti GPU. We first pre-train the model using LIDC-IDRI with the following settings: 100 epochs, an initial learning rate of 1e-4 with a decay of 0.1 every 50 epochs, a batch size of 4, and an image size of 352 × 352. Note that a quarter of the LIDC-IDRI training data is used for validation to search for the best hyperparameters. The resulting hyperparameters are employed for the formal training with all training data. An Adam optimizer (Kingma and Ba, 2014) is employed for optimization. At this stage, 875 subjects with a total of 13,916 slices are fed into nCoVSegNet to pre-train the model and fill the gap between natural images and medical images. Next, MosMedData, which contains 40 subjects with a total of 1,640 slices, is utilized to train the model for COVID-19 infection segmentation.
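The annotation-consensus and lung-cropping steps described above can be sketched with NumPy as follows. The real pipeline derives the 3D lung mask with the unsupervised method of Liao et al. (2019); the mask and volume here are synthetic:

```python
import numpy as np

def consensus_mask(annotations, threshold=0.5):
    """Ground-truth mask from multiple annotator masks: keep voxels marked
    by at least `threshold` (here 50%) of the annotators."""
    return np.stack(annotations).mean(axis=0) >= threshold

def crop_to_lung_bbox(volume, lung_mask, margin=0):
    """Crop a CT volume to the tightest 3D bounding box around the lung mask,
    optionally padded by `margin` voxels on each side."""
    idx = np.argwhere(lung_mask > 0)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, volume.shape)
    return volume[tuple(slice(a, b) for a, b in zip(lo, hi))]

vol = np.random.rand(10, 64, 64)           # synthetic CT volume
mask = np.zeros_like(vol)
mask[3:7, 20:40, 10:50] = 1                # synthetic 3D lung mask
print(crop_to_lung_bbox(vol, mask).shape)  # (4, 20, 40)
```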
The learning rate is reduced to 5e-5, the number of epochs is set to 50, and an early stopping strategy is employed to prevent over-fitting. Similar to LIDC-IDRI, a validation procedure is employed to determine the best hyperparameters for training with MosMedData. For data augmentation, we use random horizontal flips, random crops, and multi-scale resizing with ratios {0.75, 1, 1.25}. The input images are normalized to (0, 1) before training. Finally, the remaining subjects in MosMedData and the whole Coronacases dataset are used for testing. Six widely adopted metrics are used for measuring the performance of segmentation models. Dice similarity coefficient (DSC): The DSC measures the similarity between the predicted lung infections and the ground truth (GT), i.e., DSC = 2|V_Seg ∩ V_GT| / (|V_Seg| + |V_GT|), where V_Seg and V_GT represent the voxel sets of the infection segmentation and the GT, respectively, and |·| denotes the cardinality of a set, i.e., the number of elements in the set. Sensitivity (SEN): The SEN reflects the percentage of lung infections that are correctly segmented, i.e., SEN = |V_Seg ∩ V_GT| / |V_GT|. Specificity (SPE): The SPE reflects the percentage of non-infection regions that are correctly segmented, i.e., SPE = |(V_I \ V_Seg) ∩ (V_I \ V_GT)| / |V_I \ V_GT|, where V_I denotes the voxel set of the whole CT volume. Positive predictive value (PPV): The PPV reflects the accuracy of the segmented lung infections, i.e., PPV = |V_Seg ∩ V_GT| / |V_Seg|. Volumetric overlap error (VOE): The VOE represents the error rate of the segmentation result, i.e., VOE = 1 − |V_Seg ∩ V_GT| / |V_Seg ∪ V_GT|. VOE equals zero for a perfect segmentation and one in the worst case, where there is no overlap between the segmentation result and the GT (Heimann et al., 2009). Relative volume difference (RVD): The RVD measures the volume difference between the predicted lung infections and the GT, i.e., RVD = (|V_Seg| − |V_GT|) / |V_GT| (Heimann et al., 2009). We take the absolute value of RVD and report the corresponding results in this work.
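For concreteness, the six metrics can be computed from voxel-wise counts, as in the following NumPy sketch (a direct translation of the set-based definitions above, reporting |RVD| as in the text):

```python
import numpy as np

def segmentation_metrics(seg, gt):
    """Compute DSC, SEN, SPE, PPV, VOE, and |RVD| from binary masks,
    following the set-based definitions above."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    tp = np.logical_and(seg, gt).sum()    # |V_Seg ∩ V_GT|
    fp = np.logical_and(seg, ~gt).sum()
    fn = np.logical_and(~seg, gt).sum()
    tn = np.logical_and(~seg, ~gt).sum()
    return {
        "DSC": 2 * tp / (2 * tp + fp + fn),
        "SEN": tp / (tp + fn),
        "SPE": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "VOE": 1 - tp / (tp + fp + fn),   # 1 - |∩| / |∪|
        "RVD": abs(int(seg.sum()) - int(gt.sum())) / gt.sum(),
    }

gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True    # 16 GT voxels
seg = np.zeros((8, 8), bool); seg[2:6, 2:7] = True  # 20 predicted, 16 overlap
m = segmentation_metrics(seg, gt)
print(round(m["DSC"], 3))  # 0.889
```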
Our proposed model is compared with various state-of-the-art deep learning models, including two popular ones, U-Net (Ronneberger et al., 2015) and U-Net++ (Zhou et al., 2018), as well as three recently developed models: CE-Net (Gu et al., 2019), U²-Net (Qin et al., 2020), and U-Net3+. All baseline methods are trained with default settings based on the codes provided in the literature. We show the quantitative results in Table 2. As can be observed, our nCoVSegNet performs the best in terms of DSC, SEN, SPE, VOE, and RVD. Specifically, it outperforms the second-best model, CE-Net, by a large margin of 4.3% in terms of DSC, which is the key evaluation metric in segmentation. In addition, it significantly improves the SEN by a large margin of 12%, while maintaining a high SPE. Furthermore, it provides the best performance in terms of VOE and RVD, with 9.1% and 15.5% improvements in comparison with CE-Net. To further demonstrate the effectiveness of our nCoVSegNet, we performed a paired Student's t-test between our results and those provided by CE-Net. The t-test results show that our improvement is statistically significant, with p-values smaller than 0.05. Our nCoVSegNet performs the best in terms of all evaluation metrics except PPV, where nCoVSegNet ranks third. The underlying reason is that there are many independent and scattered pulmonary nodules in the MosMedData, which pose challenges for segmentation. It is worth noting that our nCoVSegNet consistently outperforms the second-best method, CE-Net, in terms of PPV. To verify the generalization capability of our nCoVSegNet, we further evaluate the segmentation performance on the Coronacases dataset. Table 3: Quantitative segmentation results on Coronacases (Ma et al., 2021). The best, second best, and third best results are marked by red, blue, and green colors, respectively. The visual comparison of segmentation results is shown in Fig. 5.
It can be observed that U-Net and U-Net++ fail to provide complete infection segmentation. The large amount of missing segmentation confirms their low sensitivity in Table 2. The other three baseline methods, CE-Net, U²-Net, and U-Net3+, improve the results but are still not as good as our nCoVSegNet, especially for the regions marked by red arrows. In contrast, nCoVSegNet consistently provides the best performance in the visual comparison, fully demonstrating its effectiveness. We further investigate the effectiveness of the two-stage cross-domain transfer learning strategy on MosMedData by comparing three versions of our method: "w/o TL", without transfer learning; "Single-Stage TL", one-stage transfer learning with the ImageNet pre-trained model; and "Two-Stage TL", the full two-stage transfer learning strategy. Furthermore, we show some representative visual results in Fig. 6. As can be observed, "Two-Stage TL" provides the best visual results, which are close to the ground truth. In contrast, "Single-Stage TL" and "w/o TL" show unsatisfactory performance and are unable to provide segmentation results with complete infection regions. Overall, the results shown in Table 4 and Fig. 6 demonstrate the effectiveness of the two-stage transfer learning strategy. Finally, we perform an extensive ablation study on MosMedData to investigate the effectiveness of each component in the network. In general, we consider the effectiveness of the GCA module, the DAF module, and the deep supervision (DS) strategy. We summarize the quantitative results in Table 5. The comparison baseline is denoted by "(A) Backbone", which is a U-Net-like architecture with the backbone network (ResNet50) as the encoder. Effectiveness of GCA: The ablated version (B) is the backbone with our GCA modules. As shown in Table 5, (B) outperforms (A) in terms of all evaluation metrics, demonstrating that the GCA modules can improve the performance effectively. We then investigate the effectiveness of the DAF module. The ablated version (C) is the backbone with our DAF modules.
Compared with (A), our proposed DAF module improves the performance in terms of all metrics, sufficiently demonstrating its effectiveness. Effectiveness of DS: As shown in Table 5, the ablated version with DS, i.e., (D), shows improved performance over (A) in terms of DSC, SEN, and PPV. In addition, it provides a comparable SPE in comparison with (A). All of these results demonstrate that DS is an effective component capable of improving the performance. Effectiveness of Module Combinations: Finally, we investigate the effectiveness of different module combinations, including (E), (F), and (G). As shown in Table 5, each module combination outperforms the corresponding ablated versions with a single module. The results demonstrate that the module combinations can improve the results. In addition, the full version of our nCoVSegNet, i.e., (H), outperforms (E), (F), and (G), indicating that jointly incorporating GCA, DAF, and DS into the network provides the best performance. In this paper, we have proposed a two-stage cross-domain transfer learning framework for COVID-19 lung infection segmentation from CT images. Our framework includes an effective infection segmentation model, nCoVSegNet, which is based on attention-aware feature fusion and enlarged receptive fields. In addition, we train our model using an effective two-stage transfer learning strategy, which takes advantage of valuable knowledge from both ImageNet and LIDC-IDRI. Extensive experiments on COVID-19 CT datasets indicate that our model achieves promising performance in lung infection segmentation and outperforms cutting-edge segmentation models. The results also demonstrate the effectiveness of the two-stage transfer learning strategy, the generalization capability of our model, and the effectiveness of the proposed modules.
The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans
Deep learning with non-medical training used for chest pathology identification
3D U-Net for segmentation of COVID-19 associated pulmonary infiltrates using transfer learning: State-of-the-art results on affordable hardware
Breast mass classification in sonography with transfer learning using a deep convolutional neural network and color conversion
Unregistered Multiview Mammogram Analysis with Pre-trained Deep Learning Models
Return of the Devil in the Details: Delving Deep into Convolutional Nets
SCA-CNN: Spatial and Channel-wise Attention in Convolutional Networks for Image Captioning
Residual Attention U-Net for Automated Multi-Class Segmentation of COVID-19
Not-so-supervised: A survey of semi-supervised, multi-instance, and transfer learning in medical image analysis
ImageNet: A large-scale hierarchical image database
An encoder-decoder-based method for COVID-19 lung infection segmentation
Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images
Sensitivity of chest CT for COVID-19: Comparison to RT-PCR
CE-Net: Context Encoder Network for 2D Medical Image Segmentation
Clinical characteristics of 2019 novel coronavirus infection in China (medRxiv)
Deep Residual Learning for Image Recognition
Comparison and Evaluation of Methods for Liver Segmentation From CT Datasets
UNet3+: A Full-Scale Connected UNet for Medical Image Segmentation
A Novel Deep Learning based Framework for the Detection and Classification of Breast Cancer Using Transfer Learning
Adam: A method for stochastic optimization
Emerging understandings of 2019-nCoV
Deeply-supervised nets
The clinical and chest CT features associated with severe and critical COVID-19 pneumonia
Convolutional neural networks based transfer learning for diabetic retinopathy fundus image classification
A transfer learning method with deep residual network for pediatric pneumonia diagnosis
Evaluate the Malignancy of Pulmonary Nodules Using the 3D Deep Leaky Noisy-or Network
Feature Pyramid Networks for Object Detection
Receptive Field Block Net for Accurate and Fast Object Detection
Towards data-efficient learning: A benchmark for COVID-19 CT lung and infection segmentation
V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation
MosMedData: Chest CT scans with COVID-19 related findings (medRxiv)
Automated Chest CT Image Segmentation of COVID-19 Lung Infection based on 3D U-Net
The Novel Coronavirus Originating in Wuhan, China: Challenges for Global Health Governance
U²-Net: Going Deeper with Nested U-Structure for Salient Object Detection
MiniSeg: An Extremely Minimum Network for Efficient COVID-19 Segmentation
U-Net: Convolutional Networks for Biomedical Image Segmentation
COVID TV-UNet: Segmenting COVID-19 Chest CT Images Using Connectivity Imposed U-Net
Lung Infection Quantification of COVID-19 in CT Images with Deep Learning
Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation and Diagnosis for COVID-19
Transfer representation learning for medical image analysis
Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning
Very deep convolutional networks for large-scale image recognition
Content-based brain tumor retrieval for MR images using transfer learning
Convolutional neural networks for medical image analysis: Full training or fine tuning?
Comparative study of deep learning methods for the automatic segmentation of lung, lesion and lesion type in CT scans of COVID-19 patients
Artificial intelligence (AI) applications for COVID-19 pandemic
Deep convolutional neural networks for automatic segmentation of thoracic organs-at-risk in radiation oncology: Use of non-domain transfer learning
A Noise-robust Framework for Automatic Segmentation of COVID-19 Pneumonia Lesions from CT Images
ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks
F3Net: Fusion, Feedback and Focus for Salient Object Detection
CBAM: Convolutional Block Attention Module
JCS: An explainable COVID-19 diagnosis system by joint classification and segmentation
Chest CT for typical 2019-nCoV pneumonia: Relationship to negative RT-PCR testing
Impact of COVID-19 pandemic on mental health in the general population: A systematic review
GASNet: Weakly-supervised Framework for COVID-19 Lesion Segmentation
COVID-19 chest CT image segmentation: A deep convolutional neural network solution
Label-Free Segmentation of COVID-19 Lesions in Lung CT
SCOAT-Net: A Novel Network for Segmenting COVID-19 Lung Opacification from CT Images
A Rapid, Accurate and Machine-Agnostic Segmentation and Quantification Method for CT
CT features of coronavirus disease 2019 (COVID-19) pneumonia in 62 patients in Wuhan
Automatic COVID-19 CT segmentation using U-Net integrated spatial and channel attention mechanism
UNet++: A Nested U-Net Architecture for Medical Image Segmentation