Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images

Deng-Ping Fan, Tao Zhou, Ge-Peng Ji, Yi Zhou, Geng Chen, Huazhu Fu, Jianbing Shen, Ling Shao

Date: 2020-04-22

Abstract: Coronavirus Disease 2019 (COVID-19) spread globally in early 2020, causing the world to face an existential health crisis. Automated detection of lung infections from computed tomography (CT) images offers great potential for augmenting the traditional healthcare strategy for tackling COVID-19. However, segmenting infected regions from CT slices faces several challenges, including high variation in infection characteristics and low intensity contrast between infections and normal tissues. Further, collecting a large amount of data is impractical within a short time period, inhibiting the training of a deep model. To address these challenges, a novel COVID-19 Lung Infection Segmentation Deep Network (Inf-Net) is proposed to automatically identify infected regions from chest CT slices. In our Inf-Net, a parallel partial decoder is used to aggregate the high-level features and generate a global map. Then, implicit reverse attention and explicit edge-attention are utilized to model the boundaries and enhance the representations. Moreover, to alleviate the shortage of labeled data, we present a semi-supervised segmentation framework based on a randomly selected propagation strategy, which requires only a few labeled images and leverages primarily unlabeled data. Our semi-supervised framework improves learning ability and achieves higher performance. Extensive experiments on our COVID-SemiSeg dataset and on real CT volumes demonstrate that the proposed Inf-Net outperforms most cutting-edge segmentation models and advances the state-of-the-art performance.
Since December 2019, the world has been facing a global health crisis: the pandemic of a novel Coronavirus Disease (COVID-19) [1], [2]. According to the global case count from the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University (JHU) [3] (updated 1 May, 2020), 3,257,660 cases of COVID-19 have been identified so far, including 233,416 deaths, impacting more than 187 countries/regions. For COVID-19 screening, the reverse-transcription polymerase chain reaction (RT-PCR) has been considered the gold standard. However, the shortage of equipment and strict requirements for testing environments limit the rapid and accurate screening of suspected subjects. Further, RT-PCR testing is also reported to suffer from high false-negative rates [4]. As an important complement to RT-PCR tests, radiological imaging techniques, e.g., X-rays and computed tomography (CT), have also demonstrated effectiveness in both current diagnosis, including follow-up assessment, and evaluation of disease evolution [5], [6]. Moreover, a clinical study of 1,014 patients in Wuhan, China, has shown that chest CT analysis can achieve a sensitivity of 0.97, a specificity of 0.25, and an accuracy of 0.68 for the detection of COVID-19, with RT-PCR results as the reference [4]. Similar observations were also reported in other studies [7], [8], suggesting that radiological imaging may be helpful in supporting early screening of COVID-19.

Fig. 1. Example of COVID-19 infected regions in CT axial slices, where the red and green masks denote the GGO and consolidation, respectively. The images are collected from [9].

TABLE I: Public COVID-19 imaging datasets and their numbers of cases. † denotes the number is from [11].

Dataset | Modality | #COV/#Non-COV | Task
COVID-19 X-ray Collection [11] | X-rays | 229†/0 | Diagnosis
COVID-19 CT Collection [11] | CT volume | 20/0 | Diagnosis
COVID-CT-Dataset [12] | CT image | 288/1,000 | Diagnosis
COVID-19 Patients Lungs [13] | X-rays | 70/28 | Diagnosis
COVID-19 Radiography [14] | X-rays | 219/2,686 | Diagnosis
COVID-19 CT Segmentation [9] | CT image | 110/0 | Segmentation
Compared to X-rays, CT screening is widely preferred due to its merits, including a three-dimensional view of the lung. In recent studies [4], [10], the typical signs of infection can be observed from CT slices, e.g., ground-glass opacity (GGO) in the early stage, and pulmonary consolidation in the late stage, as shown in Fig. 1. The qualitative evaluation of infection and longitudinal changes in CT slices could thus provide useful and important information in the fight against COVID-19. However, the manual delineation of lung infections is tedious and time-consuming work. In addition, infection annotation by radiologists is a highly subjective task, often influenced by individual bias and clinical experience. Recently, deep learning systems have been proposed to detect patients infected with COVID-19 via radiological imaging [6], [15]. For example, COVID-Net was proposed to detect COVID-19 cases from chest radiography images [16]. An anomaly detection model was designed to assist radiologists in analyzing the vast amounts of chest X-ray images [17]. For CT imaging, a location-attention oriented model was employed in [18] to calculate the infection probability of COVID-19. A weakly-supervised deep learning-based software system was developed in [19] using 3D CT volumes to detect COVID-19. A paper list of COVID-19 imaging-based AI works can be found in [20]. Although plenty of AI systems have been proposed to provide assistance in diagnosing COVID-19 in clinical practice, there are only a few works related to infection segmentation in CT slices [21], [22]. COVID-19 infection detection in CT slices remains a challenging task, for several reasons: 1) The high variation in texture, size, and position of infections in CT slices is challenging for detection. For example, consolidations are tiny/small, which easily results in false-negative detection when processing a whole CT slice. 2) The inter-class variance is small.
For example, GGO boundaries often have low contrast and blurred appearances, making them difficult to identify. 3) Due to the emergency of COVID-19, it is difficult to collect sufficient labeled data within a short time for training a deep model. Further, acquiring high-quality pixel-level annotations of lung infections in CT slices is expensive and time-consuming. Table I reports a list of the public COVID-19 imaging datasets, most of which focus on diagnosis, with only one dataset providing segmentation labels. To address the above issues, we propose a novel COVID-19 Lung Infection Segmentation Deep Network (Inf-Net) for CT slices. Our motivation stems from the fact that, during lung infection detection, clinicians first roughly locate an infected region and then accurately extract its contour according to the local appearances. We therefore argue that the area and boundary are two key characteristics that distinguish normal tissues from infection. Thus, our Inf-Net first predicts the coarse areas and then implicitly models the boundaries by means of reverse attention and edge constraint guidance to explicitly enhance the boundary identification. Moreover, to alleviate the shortage of labeled data, we also provide a semi-supervised segmentation system, which only requires a few labeled COVID-19 infection images and then enables the model to leverage unlabeled data. Specifically, our semi-supervised system utilizes a randomly selected propagation of unlabeled data to improve the learning capability and obtain higher performance than some cutting-edge models. In a nutshell, our contributions in this paper are threefold: • We present a novel COVID-19 Lung Infection Segmentation Deep Network (Inf-Net) for CT slices. By aggregating features from high-level layers using a parallel partial decoder (PPD), the combined features take contextual information into account and generate a global map as the initial guidance area for the subsequent steps.
To further mine the boundary cues, we leverage a set of implicitly recurrent reverse attention (RA) modules and explicit edge-attention guidance to establish the relationship between areas and boundary cues. • A semi-supervised segmentation system for COVID-19 infection segmentation is introduced to alleviate the shortage of labeled data. Based on a randomly selected propagation, our semi-supervised system has better learning ability (see § IV). • We also build a semi-supervised COVID-19 infection segmentation (COVID-SemiSeg) dataset, with 100 labeled CT slices from the COVID-19 CT Segmentation dataset [9] and 1,600 unlabeled images from the COVID-19 CT Collection dataset [11]. Extensive experiments on this dataset demonstrate that the proposed Inf-Net and Semi-Inf-Net outperform most cutting-edge segmentation models and advance the state-of-the-art performance. Our code and dataset have been released at: https://github.com/DengPingFan/Inf-Net

In this section, we discuss three types of works that are most related to ours, including segmentation in chest CT, semi-supervised learning, and artificial intelligence for COVID-19. CT imaging is a popular technique for the diagnosis of lung diseases [23], [24]. In practice, segmenting different organs and lesions from chest CT slices can provide crucial information for doctors to diagnose and quantify lung diseases [25]. Recently, many works have been proposed and have obtained promising performance. These algorithms often employ a classifier with extracted features for nodule segmentation in chest CT. For example, Keshani et al. [26] utilized a support vector machine (SVM) classifier to detect lung nodules from CT slices. Shen et al. [27] presented an automated lung segmentation system based on bidirectional chain code to improve the performance. However, the similar visual appearances of nodules and background make it difficult to extract the nodule regions.
To overcome this issue, several deep learning algorithms have been proposed to learn powerful visual representations [28]-[30]. For instance, Wang et al. [28] developed a central focused convolutional neural network to segment lung nodules from heterogeneous CT slices. Jin et al. [29] utilized GAN-synthesized data to improve the training of a discriminative model for pathological lung segmentation. Jiang et al. [30] designed two deep networks to segment lung tumors from CT slices by adding multiple residual streams of varying resolutions. Wu et al. [31] built an explainable COVID-19 diagnosis system by joint classification and segmentation. In our work, we aim to segment the COVID-19 infection regions to quantify and evaluate the disease progression. Unsupervised anomaly detection/segmentation can detect anomalous regions [32]-[34]; however, it cannot identify whether an anomalous region is related to COVID-19. By contrast, given a few labeled data, a semi-supervised model can distinguish the target region from other anomalous regions, which is better suited for the assessment of COVID-19. Moreover, transfer learning is another good choice for dealing with limited data [35], [36]. However, the major issue for segmentation of COVID-19 infection is that, although some public datasets already exist (see [20]), they are short of high-quality pixel-level annotations. This problem will remain even as large-scale COVID-19 datasets are collected, since the annotations are still expensive to acquire. Thus, our target is to utilize the limited annotations efficiently and leverage unlabeled data. Semi-supervised learning provides a suitable solution to address this issue. The main goal of semi-supervised learning (SSL) is to improve model performance using a limited number of labeled data and a large amount of unlabeled data [37].
Currently, there is increasing focus on training deep neural networks using the SSL strategy [38]. These methods often optimize a supervised loss on labeled data along with an unsupervised loss imposed on either the unlabeled data [39] or both the labeled and unlabeled data [40], [41]. Lee et al. [39] proposed utilizing a cross-entropy loss computed on the pseudo labels of unlabeled data, which is considered an additional supervision loss. In summary, some existing deep SSL algorithms regularize the network by enforcing smooth and consistent classification boundaries that are robust to random perturbations [41], while other approaches enrich the supervision signals by exploring the knowledge learned, e.g., based on temporally ensembled predictions [40] and pseudo labels [39]. In addition, semi-supervised learning has been widely applied in medical segmentation tasks, where a frequent issue is the lack of pixel-level labeled data, even when large-scale sets of unlabeled images are available [36], [42]. For example, Nie et al. [43] proposed an attention-based semi-supervised deep network for pelvic organ segmentation, in which a semi-supervised region-attention loss is developed to address the insufficient data issue when training deep learning models. Cui et al. [44] modified a mean teacher framework for the task of stroke lesion segmentation in MR images. Zhao et al. [45] proposed a semi-supervised segmentation method based on a self-ensemble architecture and a random patch-size training strategy. Different from these works, our semi-supervised framework is based on a random sampling strategy for progressively enlarging the training set with unlabeled data. Artificial intelligence technologies have been employed in a large number of applications against COVID-19 [6], [46]-[48]. Joseph et al.
[15] categorized these applications into three scales: the patient scale (e.g., medical imaging for diagnosis [49], [50]), the molecular scale (e.g., protein structure prediction [51]), and the societal scale (e.g., epidemiology [52]). In this work, we focus on patient-scale applications [18], [22], [49], [50], [53]-[55], especially those based on CT slices. For instance, Wang et al. [49] proposed a modified inception neural network [56] for classifying COVID-19 patients and normal controls. Instead of directly training on complete CT images, they trained the network on regions of interest identified by two radiologists based on the features of pneumonia. Chen et al. [50] collected 46,096 CT image slices from COVID-19 patients and control patients with other diseases. The collected CT images were utilized to train a U-Net++ [57] for identifying COVID-19 patients. Their experimental results suggest that the trained model performs comparably with expert radiologists in terms of COVID-19 diagnosis. In addition, other network architectures have also been considered in developing AI-assisted COVID-19 diagnosis systems. Typical examples include ResNet, used in [18], and U-Net [58], used in [53]. Finally, deep learning has been employed to segment the infection regions in lung CT slices so that the resulting quantitative features can be utilized for severity assessment [54], large-scale screening [55], and lung infection quantification [15], [21], [22] of COVID-19.

In this section, we first provide details of our Inf-Net in terms of network architecture, core network components, and loss function. We then present the semi-supervised version of Inf-Net and clarify how to use a semi-supervised learning framework to enlarge the limited number of training samples for improving the segmentation accuracy. We also show an extension of our framework for the multi-class labeling of different types of lung infections.
Finally, we provide the implementation details.

Overview of Network: The architecture of our Inf-Net is shown in Fig. 2. As can be observed, CT images are first fed to two convolutional layers to extract high-resolution, semantically weak (i.e., low-level) features. Herein, we add an edge attention module to explicitly improve the representation of objective region boundaries. Then, the obtained low-level features f_2 are fed to three convolutional layers for extracting the high-level features, which are used for two purposes. First, we utilize a parallel partial decoder (PPD) to aggregate these features and generate a global map S_g for the coarse localization of lung infections. Second, these features, combined with f_2, are fed to multiple reverse attention (RA) modules under the guidance of S_g. It is worth noting that the RA modules are organized in a cascaded fashion. For instance, as shown in Fig. 2, R_4 relies on the output of another RA module, R_5. Finally, the output of the last RA module, i.e., S_3, is fed to a Sigmoid activation function for the final prediction of lung infection regions. We now detail the key components of Inf-Net and our loss function.

Edge Attention Module: Several works have shown that edge information can provide useful constraints to guide feature extraction for segmentation [59]-[61]. Thus, considering that the low-level features (e.g., f_2 in our model) preserve sufficient edge information, we feed the moderate-resolution low-level feature f_2 to the proposed edge attention (EA) module to explicitly learn an edge-attention representation. Specifically, the feature f_2 is fed to a convolutional layer with one filter to produce the edge map.
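As a concrete sketch, the EA module described above reduces to a small convolutional head on f_2. The PyTorch snippet below is a minimal illustration only; the channel count and kernel size are assumptions, not the released implementation:

```python
import torch
import torch.nn as nn

class EdgeAttention(nn.Module):
    """Minimal sketch of the edge attention (EA) head.

    A single-filter convolution maps the low-level feature f2 to a
    one-channel edge map S_e (logits; a Sigmoid gives probabilities).
    """

    def __init__(self, in_channels=64):  # channel count is an assumption
        super().__init__()
        self.edge_conv = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)

    def forward(self, f2):
        return self.edge_conv(f2)  # edge map S_e, shape (B, 1, H, W)
```

During training, the produced edge map is compared against the gradient-derived edge ground truth G_e with a BCE loss, as described next.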
Then, we can measure the dissimilarity between the produced edge map S_e and the edge map G_e derived from the ground-truth (GT), which is constrained by the standard Binary Cross Entropy (BCE) loss function:

L_edge = − Σ_{x=1}^{w} Σ_{y=1}^{h} [ G_e(x, y) log S_e(x, y) + (1 − G_e(x, y)) log(1 − S_e(x, y)) ],   (1)

where (x, y) are the coordinates of each pixel in the predicted edge map S_e and the edge ground-truth map G_e. The G_e is calculated using the gradient of the ground-truth map G_s. Additionally, w and h denote the width and height of the corresponding map, respectively.

Parallel Partial Decoder: Several existing medical image segmentation networks segment organs/lesions of interest using all high- and low-level features in the encoder branch [57], [58], [62]-[65]. However, Wu et al. [66] pointed out that, compared with high-level features, low-level features demand more computational resources due to their larger spatial resolutions, but contribute less to the performance. Inspired by this observation, we propose to only aggregate high-level features with a parallel partial decoder component, illustrated in Fig. 3. Specifically, for an input CT image I, we first extract two sets of low-level features {f_i, i = 1, 2} and three sets of high-level features {f_i, i = 3, 4, 5} using the first five convolutional blocks of Res2Net [67]. We then utilize the partial decoder p_d(·) [66], a novel decoder component, to aggregate the high-level features with a parallel connection. The partial decoder yields a coarse global map S_g = p_d(f_3, f_4, f_5), which then serves as global guidance in our RA modules.

Reverse Attention Module: In clinical practice, clinicians usually segment lung infection regions via a two-step procedure, by roughly localizing the infection regions and then accurately labeling these regions by inspecting the local tissue structures.
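The parallel aggregation of high-level features can be sketched as follows. This is a simplified stand-in for the partial decoder of [66] (the actual component uses richer cross-level connections), with equal channel counts for f_3, f_4, and f_5 assumed for brevity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialDecoderSketch(nn.Module):
    """Simplified parallel aggregation of high-level features f3, f4, f5."""

    def __init__(self, channels=64):  # equal channels per level is an assumption
        super().__init__()
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=3, padding=1)
        self.head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, f3, f4, f5):
        # Up-sample the deeper features to f3's resolution, then fuse in parallel.
        size = f3.shape[2:]
        f4 = F.interpolate(f4, size=size, mode='bilinear', align_corners=False)
        f5 = F.interpolate(f5, size=size, mode='bilinear', align_corners=False)
        fused = self.fuse(torch.cat([f3, f4, f5], dim=1))
        return self.head(fused)  # coarse global map S_g
```

The resulting one-channel map plays the role of S_g, the global guidance for the RA modules.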
Inspired by this procedure, we design Inf-Net with two different network components that act as a rough locator and a fine labeler, respectively. First, the PPD acts as the rough locator and yields a global map S_g, which provides the rough location of lung infection regions, without structural details (see Fig. 2). Second, we propose a progressive framework, acting as the fine labeler, to mine discriminative infection regions in an erasing manner [68], [69]. Specifically, instead of simply aggregating features from all levels [69], we propose to adaptively learn the reverse attention in three parallel high-level features. Our architecture can sequentially exploit complementary regions and details by erasing the estimated infection regions from high-level side-output features, where the existing estimation is up-sampled from the deeper layer. We obtain the output RA features R_i by multiplying (element-wise ⊙) the fusion of high-level side-output features {f_i, i = 3, 4, 5} and edge attention features e_att = f_2 with the RA weights A_i, i.e.,

R_i = C(f_i, Dow(e_att)) ⊙ A_i,   (2)

where Dow(·) denotes the down-sampling operation and C(·) denotes the concatenation operation followed by two 2-D convolutional layers with 64 filters. The RA weight A_i is a de facto component in salient object detection in the computer vision community [69], and it is defined as:

A_i = E(⊖(σ(P(S_{i+1})))),   (3)

where P(·) denotes an up-sampling operation, σ(·) is a Sigmoid activation function, and ⊖(·) is a reverse operation subtracting the input from matrix E, in which all the elements are 1. Symbol E(·) denotes expanding a single-channel feature to 64 repeated tensors, which involves reversing each channel of the candidate tensor in Eq. (2). Details of this procedure are shown in Fig. 4. It is worth noting that the erasing strategy driven by RA can eventually refine the imprecise and coarse estimation into an accurate and complete prediction map.

Loss Function: As mentioned above in Eq. (1), we propose the loss function L_edge for edge supervision.
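The reverse-attention weighting of Eq. (3) (up-sample the deeper prediction, apply a Sigmoid, reverse it against an all-ones map, and expand to the feature channel count, 64 per the text) might be sketched as:

```python
import torch
import torch.nn.functional as F

def reverse_attention_weight(s_next, size, channels=64):
    """Sketch of A_i = E(reverse(sigmoid(upsample(S_{i+1})))) from Eq. (3).

    s_next: deeper side-output map S_{i+1}, shape (B, 1, h, w)
    size:   target (H, W) of the current level's features
    """
    p = torch.sigmoid(F.interpolate(s_next, size=size, mode='bilinear',
                                    align_corners=False))
    a = 1.0 - p                            # reverse: subtract from all-ones matrix E
    return a.expand(-1, channels, -1, -1)  # expand to `channels` repeated maps
```

Multiplying the fused features by this weight suppresses the already-estimated regions, which is the erasing behavior described above.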
Here, we define our loss function L_seg as a combination of a weighted IoU loss L^w_IoU and a weighted binary cross entropy (BCE) loss L^w_BCE for each segmentation supervision, i.e.,

L_seg = L^w_IoU + λ L^w_BCE,   (4)

where λ is a weight, set to 1 in our experiments. The two parts of L_seg provide effective global (image-level) and local (pixel-level) supervision for accurate segmentation. Unlike the standard IoU loss, which has been widely adopted in segmentation tasks, the weighted IoU loss increases the weights of hard pixels to highlight their importance. In addition, compared with the standard BCE loss, L^w_BCE puts more emphasis on hard pixels rather than assigning all pixels equal weights. The definitions of these losses are the same as in [70], [71], and their effectiveness has been validated in the field of salient object detection. Note that Correntropy-induced loss functions [72], [73] could be employed here to improve robustness. Finally, we adopt deep supervision for the three side-outputs (i.e., S_3, S_4, and S_5) and the global map S_g. Each map is up-sampled (e.g., S^up_3) to the same size as the object-level segmentation ground-truth map G_s. Thus, the total loss in Eq. (4) is extended to

L_total = L_seg(G_s, S^up_g) + L_edge + Σ_{i=3}^{5} L_seg(G_s, S^up_i).   (5)

Algorithm 1: Pseudo label generation for unlabeled data
1: Construct the training dataset D_Training using the labeled data D_Labeled
2: Train the model M using D_Training
3: repeat
4:   Perform testing using the trained model M and K CT images randomly selected from D_Unlabeled, which yields network-labeled data D_Net-labeled, consisting of K CT images with pseudo labels
5:   Enlarge the training dataset using D_Net-labeled, i.e., D_Training = D_Training ∪ D_Net-labeled
6:   Remove the K tested CT images from D_Unlabeled
7:   Fine-tune M using D_Training
8: until D_Unlabeled is empty
9: return Trained model M

Currently, there is a very limited number of CT images with segmentation annotations, since manually segmenting lung infection regions is difficult and time-consuming, and the disease is at an early stage of outbreak.
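The weighted BCE + IoU combination is defined as in [70], [71]. A common formulation, sketched here under the assumption that the boundary-aware pixel weights of those works are computed from the deviation of a local average of the mask, is:

```python
import torch
import torch.nn.functional as F

def structure_loss(pred, mask):
    """Sketch of L_seg = L_IoU^w + lambda * L_BCE^w (lambda = 1), as in [70], [71].

    pred: logits, shape (B, 1, H, W); mask: binary ground truth in {0, 1}.
    """
    # Pixel weights: larger near boundaries / hard pixels (local-average deviation).
    weit = 1 + 5 * torch.abs(F.avg_pool2d(mask, 31, stride=1, padding=15) - mask)
    wbce = F.binary_cross_entropy_with_logits(pred, mask, reduction='none')
    wbce = (weit * wbce).sum(dim=(2, 3)) / weit.sum(dim=(2, 3))

    prob = torch.sigmoid(pred)
    inter = ((prob * mask) * weit).sum(dim=(2, 3))
    union = ((prob + mask) * weit).sum(dim=(2, 3))
    wiou = 1 - (inter + 1) / (union - inter + 1)
    return (wbce + wiou).mean()
```

Under deep supervision, this loss would be applied to each up-sampled side-output and to the global map, and the results summed with L_edge.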
To resolve this issue, we improve Inf-Net using a semi-supervised learning strategy, which leverages a large number of unlabeled CT images to effectively augment the training dataset. An overview of our semi-supervised learning framework is shown in Fig. 5. Our framework is mainly inspired by the work in [74], which is based on a random sampling strategy for progressively enlarging the training dataset with unlabeled data. Specifically, we generate pseudo labels for unlabeled CT images using the procedure described in Algorithm 1. The resulting CT images with pseudo labels are then utilized to train our model using a two-step strategy detailed in Section III-D. The advantages of our framework, called Semi-Inf-Net, lie in two aspects. First, the training and selection strategy is simple and easy to implement. It does not require measures to assess the predicted labels, and it is also threshold-free. Second, this strategy can provide more robust performance than other semi-supervised learning methods and prevent over-fitting. This conclusion is confirmed by a recently released study [74]. Our Semi-Inf-Net is a powerful tool that can provide crucial information for evaluating overall lung infections. However, we are aware that, in a clinical setting, in addition to the overall evaluation, clinicians might also be interested in the quantitative evaluation of different kinds of lung infections, e.g., GGO and consolidation. Therefore, we extend Semi-Inf-Net to a multi-class lung infection labeling framework so that it can provide richer information for the further diagnosis and treatment of COVID-19. The extension of Semi-Inf-Net is based on an infection-region-guided multi-class labeling framework, which is illustrated in Fig. 6. Specifically, we utilize the infection segmentation results provided by Semi-Inf-Net to guide the multi-class labeling of different types of lung infections.
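The randomly selected propagation of Algorithm 1 can be sketched as a plain training loop. Here `train_fn` and `pseudo_label_fn` are hypothetical placeholders for the actual training and inference routines, not names from the released code:

```python
import random

def pseudo_label_propagation(model, labeled, unlabeled, K,
                             train_fn, pseudo_label_fn):
    """Sketch of Algorithm 1: progressively enlarge the training set.

    labeled:   list of (image, label) pairs
    unlabeled: list of images without labels
    """
    training = list(labeled)
    train_fn(model, training)                # initial training on labeled data
    pool = list(unlabeled)
    while pool:
        k = min(K, len(pool))
        # Randomly select K unlabeled images and pseudo-label them with M.
        batch = [pool.pop(random.randrange(len(pool))) for _ in range(k)]
        training += [(img, pseudo_label_fn(model, img)) for img in batch]
        train_fn(model, training)            # fine-tune on the enlarged set
    return model
```

Because selection is uniformly random and every pseudo label is kept, the loop needs no confidence measure or threshold, matching the "threshold-free" property noted above.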
For this purpose, we feed both the infection segmentation results and the corresponding CT images to a multi-class segmentation network, e.g., FCN8s [75] or U-Net [58]. This framework can take full advantage of the infection segmentation results provided by Semi-Inf-Net and effectively improve the performance of multi-class infection labeling.

Our model is implemented in PyTorch and is accelerated by an NVIDIA TITAN RTX GPU. We describe the implementation details as follows.

Pseudo label generation: We generate pseudo labels for unlabeled CT images using the protocol described in Algorithm 1. The number of randomly selected CT images is set to 5, i.e., K = 5. For 1,600 unlabeled images, we need to perform 320 iterations with a batch size of 16. The entire procedure takes about 50 hours to complete.

Semi-supervised Inf-Net: Before training, we uniformly resize all the inputs to 352×352. We train Inf-Net using a multi-scale strategy [60]. Specifically, we first re-sample the training images using different scaling ratios, i.e., {0.75, 1, 1.25}, and then train Inf-Net using the re-sampled images, which improves the generalization of our model. The Adam optimizer is employed for training and the learning rate is set to 1e−4. Our training phase consists of two steps: (i) pre-training on 1,600 CT images with pseudo labels, which takes ∼180 minutes to converge over 100 epochs with a batch size of 24; and (ii) fine-tuning on 50 CT images with ground-truth labels, which takes ∼15 minutes to converge over 100 epochs with a batch size of 16. For a fair comparison, the training procedure of Inf-Net follows the setting described in the second step.

Semi-Inf-Net + Multi-class segmentation: For the multi-class segmentation network, we are not constrained to a specific choice of architecture; herein, FCN8s [75] and U-Net [58] are used as two backbones. We resize all the inputs to 512 × 512 before training.
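The multi-scale re-sampling step described above can be sketched as follows; the exact re-sampling scheme used in the released code is an assumption here:

```python
import torch
import torch.nn.functional as F

def multiscale_inputs(images, base=352, ratios=(0.75, 1.0, 1.25)):
    """Re-sample a batch of images at several scales for multi-scale training.

    images: tensor of shape (B, C, H, W)
    Returns one re-sampled batch per scaling ratio.
    """
    batches = []
    for r in ratios:
        s = int(base * r)
        batches.append(F.interpolate(images, size=(s, s), mode='bilinear',
                                     align_corners=False))
    return batches
```

At each training step, one of the three scales (or all three in turn) would be fed to the network, exposing it to infections at varying apparent sizes.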
The network is initialized with uniform Xavier initialization, and trained using an SGD optimizer with a learning rate of 1e−10, weight decay of 5e−4, and momentum of 0.99. The entire training procedure takes about 45 minutes to complete.

As shown in Table I, there is only one segmentation dataset for CT data, i.e., the COVID-19 CT Segmentation dataset [9], which consists of 100 axial CT images from different COVID-19 patients. All the CT images were collected by the Italian Society of Medical and Interventional Radiology and are publicly available. A radiologist segmented the CT images using different labels for identifying lung infections. Although this is the first open-access COVID-19 dataset for lung infection segmentation, it suffers from a small sample size, i.e., only 100 labeled images are available. In this work, we collected a semi-supervised COVID-19 infection segmentation dataset (COVID-SemiSeg) to leverage large-scale unlabeled CT images for augmenting the training dataset. We employ the COVID-19 CT Segmentation dataset [9] as the labeled data D_Labeled, which consists of 45 CT images randomly selected as training samples, 5 CT images for validation, and the remaining 50 images for testing. The unlabeled CT images are extracted from the COVID-19 CT Collection [11] dataset, which consists of 20 CT volumes from different COVID-19 patients. We extracted 1,600 2D CT axial slices from the 3D volumes, removed non-lung regions, and constructed an unlabeled training dataset D_Unlabeled for effective semi-supervised segmentation.

Baselines. For the infection region experiments, we compare the proposed Inf-Net and Semi-Inf-Net with five classical segmentation models from the medical domain, i.e., U-Net [58], U-Net++ [57], Attention-UNet [76], Gated-UNet [77], and Dense-UNet [78]. For the multi-class labeling experiments, we compare our model with three cutting-edge models from the computer vision community: DeepLabV3+ [79], FCN8s [75], and multi-class U-Net [58].
Evaluation Metrics. Following [22], [55], we use four widely adopted metrics, i.e., the Dice similarity coefficient, Sensitivity (Sen.), Specificity (Spec.), and Precision (Prec.). We also introduce three metrics widely used in the object detection field, i.e., the Structure Measure [80], the Enhanced-alignment Measure [81], and the Mean Absolute Error. In our evaluation, we choose S_3 with the Sigmoid function as the final prediction S_p. Thus, we measure the similarity/dissimilarity between the final prediction map and the object-level segmentation ground-truth G, formulated as follows:

1) Structure Measure (S_α): This was proposed to measure the structural similarity between a prediction map and a ground-truth mask, and is more consistent with the human visual system:

S_α = α × S_o + (1 − α) × S_r,

where α is a balance factor between the object-aware similarity S_o and the region-aware similarity S_r. We report S_α using the default setting (α = 0.5) suggested in the original paper [80].

2) Enhanced-alignment Measure (E^mean_φ): This is a recently proposed metric for evaluating both the local and global similarity between two binary maps. Its formulation is as follows:

E_φ = (1 / (w × h)) Σ_{x=1}^{w} Σ_{y=1}^{h} φ(S_p(x, y), G(x, y)),

where w and h are the width and height of the ground-truth G, and (x, y) denotes the coordinate of each pixel in G. Symbol φ is the enhanced alignment matrix. We obtain a set of E_φ values by converting the prediction S_p into binary masks with thresholds from 0 to 255. In our experiments, we report the mean of E_φ computed over all the thresholds.

3) Mean Absolute Error (MAE): This measures the pixel-wise error between S_p and G, defined as:

MAE = (1 / (w × h)) Σ_{x=1}^{w} Σ_{y=1}^{h} |S_p(x, y) − G(x, y)|.

C. Segmentation Results

To compare the infection segmentation performance, we consider the two state-of-the-art models U-Net and U-Net++. Quantitative results are shown in Table II. As can be seen, the proposed Inf-Net outperforms U-Net and U-Net++ in terms of Dice, S_α, E^mean_φ, and MAE by a large margin.
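For reference, the Dice coefficient and MAE used above can be computed as in the following minimal NumPy sketch (the S_α and E_φ measures have dedicated evaluation toolboxes [80], [81] and are omitted here):

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-8):
    """Dice = 2|P ∩ G| / (|P| + |G|) for binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + eps)

def mean_absolute_error(pred, gt):
    """MAE: mean pixel-wise absolute difference between prediction and GT."""
    return np.abs(pred.astype(np.float64) - gt.astype(np.float64)).mean()
```

Note that Dice operates on binarized masks, while MAE is computed directly on the (possibly soft) prediction map against the binary ground truth.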
We attribute this improvement to our implicit reverse attention and explicit edge-attention modeling, which provide robust feature representations. In addition, by introducing the semi-supervised learning strategy into our framework, we can further boost the performance, with a 5.7% improvement in terms of Dice. As an assistant diagnostic tool, the model is expected to provide more detailed information regarding the infected areas. Therefore, we extend our model to multi-class labeling (i.e., GGO and consolidation segmentation). Table III shows the quantitative evaluation on our COVID-SemiSeg dataset, where "Semi-Inf-Net & FCN8s" and "Semi-Inf-Net & MC" denote the combinations of our Semi-Inf-Net with FCN8s [75] and multi-class U-Net [58], respectively. Our "Semi-Inf-Net & MC" pipeline achieves competitive performance on GGO segmentation in most evaluation metrics. For the more challenging consolidation segmentation, the proposed pipeline also achieves the best results. For instance, in terms of Dice, our method outperforms the cutting-edge model, multi-class U-Net [58], by 12% on the average segmentation results. Overall, the proposed pipeline performs better than existing state-of-the-art models on multi-class labeling, on both consolidation segmentation and the average segmentation results, in terms of Dice and S_α.

2) Qualitative Results: The lung infection segmentation results, shown in Fig. 7, indicate that our Semi-Inf-Net and Inf-Net outperform the baseline methods remarkably. Specifically, they yield segmentation results that are close to the ground truth with much less mis-segmented tissue. In contrast, U-Net gives unsatisfactory results, where a large number of mis-segmented tissues exist. U-Net++ improves the results, but the performance is still not promising.
The success of Inf-Net is owed to our coarse-to-fine segmentation strategy, where a parallel partial decoder first roughly locates lung infection regions and then multiple edge attention modules are employed for fine segmentation. This strategy mimics how real clinicians segment lung infection regions from CT slices, and therefore achieves promising performance. In addition, the advantage of our semi-supervised learning strategy is also confirmed by Fig. 7 . As can be observed, compared with Inf-Net, Semi-Inf-Net yields segmentation results with more accurate boundaries. In contrast, Inf-Net gives relatively fuzzy boundaries, especially in the subtle infection regions. We also show the multi-class infection labeling results in Fig. 8 . As can be observed, our model, Semi-Inf-Net & MC, consistently performs the best among all methods. It is worth noting that both GGO and consolidation infections are segmented accurately by our method. D. Ablation Study In this subsection, we conduct several experiments to validate the performance of each key component of our Semi-Inf-Net, including the PPD, RA, and EA modules. 1) Effectiveness of PPD: To explore the contribution of the parallel partial decoder, we derive two baselines; the results are shown in Table IV . E. Evaluation on Real CT Volumes In real applications, each CT volume has multiple slices, most of which may contain no infections. To further validate the effectiveness of the proposed method on real CT volumes, we utilized the recently released COVID-19 infection segmentation dataset [9] , which consists of 638 slices (285 non-infected slices and 353 infected slices) extracted from 9 CT volumes of real COVID-19 patients, as a test set for evaluating our model performance. The results are shown in Table V . Despite the presence of non-infected slices, our method still obtains the best performance. This is because we employed two datasets for semi-supervised learning, i.e., labeled data with 100 infected slices (50 for training, 50 for testing), and unlabeled data with 1,600 CT slices from real volumes.
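Scoring whole volumes that include non-infected slices requires a convention for empty ground-truth masks. A common choice, which we assume here (the paper does not specify its convention), is to score a slice 1.0 when both prediction and ground truth are empty and 0.0 when only one of them is.

```python
def slice_dice(pred, gt):
    """Dice for one flattened binary slice, handling empty masks.
    Convention (an assumption, not from the paper): both empty -> 1.0."""
    inter = sum(p * g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    if total == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2 * inter / total

def volume_dice(pred_slices, gt_slices):
    """Average per-slice Dice over one CT volume."""
    scores = [slice_dice(p, g) for p, g in zip(pred_slices, gt_slices)]
    return sum(scores) / len(scores)

vol_pred = [[0, 0, 0], [1, 1, 0]]  # slice 1 predicted empty, slice 2 infected
vol_gt   = [[0, 0, 0], [1, 0, 0]]
print(volume_dice(vol_pred, vol_gt))  # (1.0 + 2/3) / 2
```

Under this convention, a false-positive pixel on a non-infected slice drops that slice's score from 1.0 straight to 0.0, which is why handling non-infected slices well matters so much for volume-level scores.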
The unlabeled data contains many non-infected slices, which helps guarantee that our model can deal with non-infected slices well. Moreover, our Inf-Net is a general infection segmentation framework, which could easily be applied to other types of infections. Although our Inf-Net achieves promising results in segmenting infected regions, the current model has some limitations. First, Inf-Net focuses on lung infection segmentation for COVID-19 patients. However, in clinical practice, it is often necessary to first classify COVID-19 patients and then segment the infection regions for further treatment. Thus, we will study an automatic AI diagnosis system that integrates COVID-19 detection, lung infection segmentation, and infected region quantification into a unified framework. Second, for our multi-class infection labeling framework, we first apply Inf-Net to obtain the infection regions, which are then used to guide the multi-class labeling of different types of lung infections. This two-step strategy could lead to sub-optimal learning performance. In future work, we will study constructing an end-to-end framework to achieve this task. Besides, due to the limited size of the dataset, we will use a Generative Adversarial Network (GAN) [82] or Conditional Variational Autoencoder (CVAE) [83] to synthesize more samples, which can be regarded as a form of data augmentation to enhance the segmentation performance. Moreover, our method may suffer a slight drop in accuracy when considering non-infected slices. Running an additional slice-wise classifier (e.g., infected vs. non-infected) to select infected slices is an effective solution for avoiding the performance drop on non-infected slices.
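The slice-wise gating idea above can be sketched as a tiny two-stage pipeline; `is_infected` and `segment` are hypothetical stand-ins for a real classifier and for Inf-Net, not parts of the released code.

```python
from typing import Callable, List

def gated_segment(
    slices: List[list],
    is_infected: Callable[[list], bool],  # hypothetical slice classifier
    segment: Callable[[list], list],      # hypothetical segmentation model
) -> List[list]:
    """Run the segmenter only on slices the classifier flags as infected;
    non-infected slices get an all-zero mask, suppressing false positives
    on slices that contain no infection at all."""
    out = []
    for s in slices:
        if is_infected(s):
            out.append(segment(s))
        else:
            out.append([0] * len(s))
    return out

# toy stand-ins: "infected" if any intensity > 0.5; segmenter thresholds at 0.5
demo = gated_segment(
    [[0.9, 0.1], [0.2, 0.3]],
    is_infected=lambda s: max(s) > 0.5,
    segment=lambda s: [1 if v > 0.5 else 0 for v in s],
)
print(demo)  # second slice is gated to an all-zero mask
```

The gate trades a small risk of missed subtle infections (classifier false negatives) for the elimination of spurious masks on clearly healthy slices.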
In this paper, we have proposed a novel COVID-19 lung CT infection segmentation network, named Inf-Net, which utilizes implicit reverse attention and explicit edge-attention to improve the identification of infected regions. Moreover, we have also provided a semi-supervised solution, Semi-Inf-Net, to alleviate the shortage of high-quality labeled data. Extensive experiments on our COVID-SemiSeg dataset and real CT volumes have demonstrated that the proposed Inf-Net and Semi-Inf-Net outperform cutting-edge segmentation models and advance the state-of-the-art performance. Our system has great potential to be applied in assessing the diagnosis of COVID-19, e.g., quantifying the infected regions, monitoring longitudinal disease changes, and mass screening processing. Note that the proposed model is able to detect objects with low intensity contrast between infections and normal tissues, a phenomenon that also often occurs in natural camouflaged objects. In the future, we plan to apply our Inf-Net to other related tasks, such as polyp segmentation [84] , product defect detection, and camouflaged animal detection [85] .
Our code and dataset have been released at: https://github.com/DengPingFan/Inf-Net

REFERENCES

[1] A novel coronavirus outbreak of global health concern
[2] Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China
[3] Coronavirus COVID-19 global cases by the Center for Systems Science and Engineering at Johns Hopkins University
[4] Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: A report of 1014 cases
[5] The role of chest imaging in patient management during the COVID-19 pandemic: A multinational consensus statement from the Fleischner Society
[6] Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for COVID-19
[7] Sensitivity of chest CT for COVID-19: Comparison to RT-PCR
[8] Imaging profile of the COVID-19 infection: Radiologic findings and literature review
[9] COVID-19 CT segmentation dataset
[10] Chest CT manifestations of new coronavirus disease 2019 (COVID-19): A pictorial review
[11] COVID-19 image data collection
[12] COVID-CT-Dataset: A CT scan dataset about COVID-19
[13] COVID-19 Patients Lungs X Ray Images 10000
[14] Can AI help in screening viral and COVID-19 pneumonia
[15] Harmony-search and Otsu based system for coronavirus disease (COVID-19) detection using lung CT scan images
[16] COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest radiography images
[17] COVID-19 screening on chest X-ray images using deep learning based anomaly detection
[18] Deep learning system to screen coronavirus disease 2019 pneumonia
[19] Deep learning-based detection for COVID-19 from chest CT using weak label
[20] COVID-19 imaging-based AI research collection
[21] Quantification of tomographic patterns associated with COVID-19 from chest CT
[22] Lung infection quantification of COVID-19 in CT images with deep learning
[23] Computer analysis of computed tomography scans of the lung: A survey
[24] A review on lung and nodule segmentation techniques
[25] Unsupervised CT lung image segmentation of a mycobacterium tuberculosis infection model
[26] Lung nodule segmentation and recognition using SVM classifier and active contour modeling: A complete intelligent system
[27] An automated lung segmentation approach using bidirectional chain codes to improve nodule detection accuracy
[28] Central focused convolutional neural networks: Developing a data-driven model for lung nodule segmentation
[29] CT-realistic lung nodule simulation from 3D conditional generative adversarial networks for robust lung segmentation
[30] Multiple resolution residually connected feature streams for automatic lung tumor segmentation from CT images
[31] JCS: An explainable COVID-19 diagnosis system by joint classification and segmentation
[32] Unsupervised anomaly detection with generative adversarial networks to guide marker discovery
[33] Deep learning for anomaly detection: A survey
[34] Sparse-GAN: Sparsity-constrained generative adversarial network for anomaly detection in retinal OCT image
[35] Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning
[36] Not-so-supervised: A survey of semi-supervised, multi-instance, and transfer learning in medical image analysis
[37] Collaborative learning of semi-supervised segmentation and classification for medical images
[38] A survey on semi-supervised learning
[39] Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks
[40] Temporal ensembling for semi-supervised learning
[41] Semi-supervised learning with ladder networks
[42] Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation
[43] ASDNet: Attention based semi-supervised deep networks for medical image segmentation
[44] Semi-supervised brain lesion segmentation with an adapted mean teacher model
[45] Multi-view semi-supervised 3D whole brain segmentation with a self-ensemble network
[46] The role of imaging in the detection and management of COVID-19: A review
[47] Diagnosis of coronavirus disease 2019 (COVID-19) with structured latent multi-view representation learning
[48] Deep learning COVID-19 features on CXR using limited training data sets
[49] A deep learning algorithm using CT images to screen for corona virus disease (COVID-19)
[50] Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography: A prospective study
[51] Improved protein structure prediction using potentials from deep learning
[52] Artificial intelligence forecasting of COVID-19 in China
[53] Rapid AI development cycle for the coronavirus (COVID-19) pandemic: Initial results for automated detection & patient monitoring using deep learning CT image analysis
[54] Severity assessment of coronavirus disease 2019 (COVID-19) using quantitative features from chest CT images
[55] Large-scale screening of COVID-19 from community acquired pneumonia using infection size-aware classification
[56] Going deeper with convolutions
[57] UNet++: A nested U-Net architecture for medical image segmentation
[58] U-Net: Convolutional networks for biomedical image segmentation
[59] EGNet: Edge guidance network for salient object detection
[60] Stacked cross refinement network for edge-aware salient object detection
[61] ET-Net: A generic edge-attention guidance network for medical image segmentation
[62] Joint optic disc and cup segmentation based on multi-label deep network and polar transformation
[63] CE-Net: Context encoder network for 2D medical image segmentation
[64] Attention guided network for retinal image segmentation
[65] Automated design of deep learning methods for biomedical image segmentation
[66] Cascaded partial decoder for fast and accurate salient object detection
[67] Res2Net: A new multi-scale backbone architecture
[68] Object region mining with adversarial erasing: A simple classification to semantic segmentation approach
[69] Reverse attention for salient object detection
[70] BASNet: Boundary-aware salient object detection
[71] F3Net: Fusion, feedback and focus for salient object detection
[72] Efficient and robust deep learning with correntropy-induced loss function
[73] Correntropy-based robust multilayer extreme learning machines
[74] Parting with illusions about deep active learning
[75] Fully convolutional networks for semantic segmentation
[76] Attention U-Net: Learning where to look for the pancreas
[77] Attention gated networks: Learning to leverage salient regions in medical images
[78] H-DenseUNet: Hybrid densely connected UNet for liver and tumor segmentation from CT volumes
[79] Encoder-decoder with atrous separable convolution for semantic image segmentation
[80] Structure-measure: A new way to evaluate foreground maps
[81] Enhanced-alignment measure for binary foreground map evaluation
[82] Hi-Net: Hybrid-fusion network for multi-modal MR image synthesis
[83] UC-Net: Uncertainty inspired RGB-D saliency detection via conditional variational autoencoders
[84] PraNet: Parallel reverse attention network for polyp segmentation
[85] Camouflaged object detection