Title: Semi-Supervised Segmentation of Radiation-Induced Pulmonary Fibrosis from Lung CT Scans with Multi-Scale Guided Dense Attention
Authors: Wang, Guotai; Zhai, Shuwei; Lasio, Giovanni; Zhang, Baoshe; Yi, Byong; Chen, Shifeng; Macvittie, Thomas J.; Metaxas, Dimitris; Zhou, Jinghao; Zhang, Shaoting
Date: 2021-09-29

Abstract: Computed Tomography (CT) plays an important role in monitoring radiation-induced Pulmonary Fibrosis (PF), where accurate segmentation of the PF lesions is highly desired for diagnosis and treatment follow-up. However, the task is challenged by ambiguous boundaries, irregular shapes, the various positions and sizes of the lesions, as well as the difficulty of acquiring a large set of annotated volumetric images for training. To overcome these problems, we propose a novel convolutional neural network called PF-Net and incorporate it into a semi-supervised learning framework based on Iterative Confidence-based Refinement And Weighting of pseudo Labels (I-CRAWL). Our PF-Net combines 2D and 3D convolutions to deal with CT volumes with large inter-slice spacing, and uses multi-scale guided dense attention to segment complex PF lesions. For semi-supervised learning, our I-CRAWL employs pixel-level uncertainty-based confidence-aware refinement to improve the accuracy of pseudo labels of unannotated images, and uses image-level uncertainty for confidence-based image weighting to suppress low-quality pseudo labels in an iterative training process. Extensive experiments with CT scans of Rhesus Macaques with radiation-induced PF showed that: 1) PF-Net achieved higher segmentation accuracy than existing 2D, 3D and 2.5D neural networks, and 2) I-CRAWL outperformed state-of-the-art semi-supervised learning methods for the PF lesion segmentation task. Our method has the potential to improve the diagnosis of PF and the clinical assessment of side effects of radiotherapy for lung cancers.

45% from 1990 to 2015 among men, and 19% from 2002 to 2015 among women [3]. However, about half of all cancer patients who receive radiation therapy during their course of illness will suffer from Radiation-Induced Injuries (RII) to the hematopoietic tissue, skin, lung and gastrointestinal (GI) systems [2]. Preventing, mitigating or treating RII therefore plays an important role in improving the quality of radiation therapy. For lung cancer, the most common RII is radiation-induced Pulmonary Fibrosis (PF) [4], i.e., inflammation and subsequent scarring of lung tissues caused by radiation, which can lead to breathing problems due to lung damage and even lung failure [5]. Observation and assessment of PF lesions using Computed Tomography (CT) imaging is critical for diagnosis and treatment follow-up of this disease [6].

For an accurate and quantitative measurement of PF, it is desirable to segment the PF lesions from 3D CT scans. The segmentation results can provide the detailed spatial distribution and accurate volumetric measurement of the lesions, which is important for treatment decision making, PF progress modeling, treatment effect assessment and prognosis prediction. As manual segmentation of lesions from 3D images is time-consuming, labor-intensive and subject to inter- and intra-observer variability, automatic PF lesion segmentation from CT images is highly desirable [7]. However, this is challenging for several reasons.
Firstly, with different severity of the disease, PF lesions have a large variation in size and shape. A small lesion may contain only a few pixels, while a large lesion can occupy a lung segment. The irregular shapes make it difficult to use a statistical shape model for the segmentation task [8]. Secondly, the lesions have a complex spatial distribution and can be scattered across different segments of the lung. Thirdly, PF lesions often adhere to lung structures including vessels, airways and the pleura, and other lesions with similar appearance, such as lung nodules and pneumonia lesions, may also be present. These factors, along with the low contrast of soft tissues in CT images, make it hard to delineate the boundary of PF lesions. Fig. 1 shows two examples of lung CT scans with PF and illustrates the difficulties of accurate segmentation.

Recently, Convolutional Neural Networks (CNNs) have been increasingly used for automatic medical image segmentation [9]. By automatically learning features from a large set of annotated images, they have outperformed most traditional segmentation methods based on hand-crafted features [7], such as for recognition of lung nodules [10], [11], lung lobes [12] and COVID-19 infection lesions [13], [14]. However, to the best of our knowledge, CNNs for PF lesion segmentation have rarely been investigated so far.

[Fig. 1 caption (partial): ...Macaque. The first row shows lung CT images, and the second row shows manual segmentation results of PF lesions. Note that the ambiguous boundary, irregular shape and various size and position make the segmentation task challenging.]

Besides the above challenges, existing CNNs may obtain suboptimal performance for the PF lesion segmentation task for the following reasons. First, lung CT images usually have anisotropic 3D resolutions with high inter-slice spacing and low intra-slice spacing. Existing CNNs using pure 2D convolutions or pure 3D convolutions have limited ability to learn effective 3D features from such images, as 2D CNNs [8], [15]-[17] can only learn intra-slice features, and most 3D networks [18]-[21] are designed with an isotropic receptive field in terms of voxels. When dealing with 3D images with large inter-slice spacing, they have an imbalanced physical receptive field (in terms of mm) along each axis, i.e., the physical receptive field in the through-plane direction is much larger than that in the in-plane directions, which may limit effective learning of 3D features. Second, existing CNNs often use position-invariant convolutions without spatial awareness, which makes it difficult to handle objects with various positions and sizes. Attention mechanisms have recently been proposed to improve spatial awareness [17], [22], [23], but the spatial attention they obtain does not match the target region well, and their performance on PF lesions has not been investigated. What is more, the current success of deep learning methods for segmentation relies heavily on a large set of annotated images for training [9]. For 3D medical images, acquiring pixel-level annotations for segmentation tasks is extremely time-consuming and difficult, as accurate annotations can only be provided by experts with domain knowledge [24]. For the PF lesion segmentation task, annotation of a CT volume can take several hours, and the complex shape and appearance of PF lesions further increase the effort and time needed for annotation, which makes it difficult to annotate a large set of 3D pulmonary CT scans for training.
To deal with these problems, we propose a novel semi-supervised framework with a novel 2.5D CNN based on multi-scale attention for the segmentation of PF lesions from CT scans with large inter-slice spacing. The contribution is three-fold. First, we propose a novel network for PF lesion segmentation (i.e., PF-Net), which employs multi-scale guided dense attention to deal with lesions of various sizes and positions, and combines 2D and 3D convolutions to achieve a balanced physical receptive field along different axes and thus better learn 3D features from medical images with anisotropic resolution. Second, a novel semi-supervised learning framework using Iterative Confidence-based Refinement And Weighting of pseudo Labels (I-CRAWL) is proposed, where uncertainty estimation is employed to assess the quality of pseudo labels of unannotated images at both the pixel level and the image level. We propose Confidence-Aware Refinement (CAR) based on pixel-level uncertainty to refine pseudo labels, and introduce confidence-based image weighting according to image-level uncertainty to suppress low-quality pseudo labels. Third, we apply our proposed method to radiation-induced PF lesion segmentation from CT scans, and extensive experimental results show that our method outperformed state-of-the-art semi-supervised methods and existing 2D, 3D and 2.5D CNNs for segmentation. As far as we know, this is the first work on PF lesion segmentation based on deep learning, and our method has the potential to reduce the annotation burden for large-scale 3D image datasets in the development of high-performance automatic segmentation models.

CNNs have achieved state-of-the-art performance for many medical image segmentation tasks [9]. Most widely used segmentation CNNs are inspired by U-Net [15], which is based on an encoder-decoder structure to learn features at multiple scales. UNet++ [16] extends U-Net with a series of nested, dense skip pathways for higher performance. Attention U-Net [22] introduced an attention gate that uses high-level features to calibrate low-level features. Spatial and channel "Squeeze and Excitation" (scSE) [17] enables a 2D network to focus on the most relevant features for better performance. Typical networks for segmentation of 3D volumes include 3D U-Net [18], V-Net [19] and HighRes3DNet [20]. They assume that the input volume has an isotropic 3D resolution for learning 3D features, and are not suitable for images with large inter-slice spacing. To better deal with such images, a 3D anisotropic hybrid network that uses a pre-trained 2D encoder and a decoder with anisotropic convolutions was proposed in [25]. Jia et al. [26] designed a pyramid anisotropic CNN based on decomposition of 3D convolutions. The nnU-Net [27] automatically configures network structures and training strategies, where different types of convolution kernels can be adaptively combined for a given dataset. In [28], a lightweight CNN combining inter-slice and intra-slice convolutions was proposed for segmentation of CT images. In [29], 2D and 3D convolutions were combined in a single network for Vestibular Schwannoma segmentation from images with anisotropic resolution. However, these networks have limited ability to segment lesions with various sizes and positions.

CNNs have also been widely used for segmentation of lung structures from CT images [7]. In [12], cascaded CNNs with non-local modules [23] were proposed to leverage structured relationships for pulmonary lobe segmentation.
In [30] , a CNN was combined with freeze-and-grow propagation for airway segmentation. For lung lesions, a central focused CNN [11] was proposed to segment lung nodules from heterogeneous CT images, and Fan et al. [13] employed reverse attention and edge attention to segment COVID-19 lung infection. Wang et al. [14] developed a noise-robust framework for automatic segmentation of COVID-19 pneumonia lesions by learning from non-expert annotations. Despite the large amount of works on lung structure and lesion segmentation so far, there is a lack of deep learning models for the challenging task of radiation-induced pulmonary fibrosis segmentation. To reduce the burden for annotation, semi-supervised learning methods have been increasingly employed for medical image segmentation by using a limited number of annotated images and a large amount of unannotated images [24] . Existing semi-supervised methods mainly have two categories. The first category is based on pseudo labels [13] , [31] , [32] , where a model trained with annotated images obtains pseudo labels for unannotaed images that are then used to update the segmentation model. Lee [31] used such a strategy for classification problems, and Bai et al. [32] updated the pseudo segmentation labels and network parameters alternatively and used Conditional Random Field (CRF) to refine the pseudo labels. Fan et al. [13] progressively enlarged the training set with unlabeled data and their pseudo labels for learning. However, this method ignores the quality of pseudo labels, which may limit the performance of the learned model. The second category is to learn from annotated and unannotated images simultaneously, and they often consist of a supervised loss function for annotated images and an unsupervised regularization loss function for all the images. The regularization can be based on teacher-student consistency [33] , [34] , transformation consistency [35] , multi-view consistency [36] and reconstruction-based auxiliary task [37] . Adversarial learning [38] also regularizes the segmentation model by minimizing the distribution difference between segmentation results of annotated images and those of unannotaed images. However, adversarial models are hard to train, and the complex size and shape of PF lesions make it difficult to capture the true distribution of lesion masks when only a small set of annotated images are available. In this section, we first introduce our proposed Pulmonary Fibrosis segmentation Network (PF-Net) with multi-scale guided dense attention, and then describe how it is used in our I-CRAWL framework for semi-supervised learning. Our proposed PF-Net is illustrated in Fig. 2 . It employs an encoder-decoder backbone structure that is commonly used and effective for medical image segmentation [15] , [16] , [19] . The encoder contains five scales, and each is implemented by a convolutional block followed by a max-pooling layer for down-sampling. In each block, we use two convolutional layers each followed by a Batch Normalization (BN) [39] layer and a parametric Rectified Linear Unit (pReLU), and a dropout layer is inserted before the second convolutional layer. The decoder uses the same type of convolutional blocks as the encoder. 
We extend this backbone in the following aspects:

1) 2.5D Network Structure: To deal with the anisotropic 3D resolution with high inter-slice spacing and low intra-slice spacing, we combine 2D (i.e., intra-slice) and 3D convolutions so that the network has an approximately balanced physical receptive field along each axis. Let $S$ denote the number of scales in the network ($S = 5$ in this paper). We use 2D convolutions and 2D max-poolings for the first $M$ scales in the encoder, and employ 3D convolutions and 3D max-poolings for the last $S - M$ scales in the encoder. Each resolution level in the decoder contains the same type of 2D or 3D convolutional blocks as in the encoder. We use trilinear interpolation for upsampling in the decoder. As our lung CT images have a resolution of around 0.3×0.3×1.25 mm, i.e., the in-plane resolution is about four times finer than the through-plane resolution, we set $M = 2$ so that the 2D max-pooling layers in the first two scales give the resulting feature maps a near-isotropic 3D resolution, as shown in Fig. 2.

[Fig. 2 caption: To deal with 3D images with large inter-slice spacing, the first two scales use 2D convolutions while the other scales use 3D convolutions. $\tilde{P}_s$ is the predicted spatial attention at scale $s$, and it is sent to all lower scales with dense connections in the decoder, as shown by green lines.]

2) Multi-Scale Guided Dense Attention: To better deal with PF lesions with various positions and sizes, we use multi-scale attention to improve the network's spatial awareness, and propose dense attention to leverage multi-scale contextual information for the segmentation task. Specifically, at each scale $s$ of the decoder, we use a convolutional layer to obtain a spatial attention map $\tilde{P}_s$, and use $\tilde{P}_s$ as a high-level attention signal to guide the learning at all lower scales of the decoder. The input of the decoder at scale $s$ is a concatenation of three parts: $F^e_s$ from scale $s$ of the encoder, an upsampled version of the decoder feature map $F^d_{s+1}$, and $\tilde{P}^s_{s+1} \oplus \tilde{P}^s_{s+2} \oplus ... \oplus \tilde{P}^s_S$, where $\oplus$ is the concatenation operation and $\tilde{P}^s_{s+1}$ is the upsampled version of $\tilde{P}_{s+1}$ so that it has the same spatial resolution as $F^e_s$. Therefore, a lower scale accepts the attention maps from all the higher scales as input, which is referred to as Multi-Scale Dense Attention (MSDA). The decoder thus takes advantage of multi-scale contextual information, which enables the network to pay more attention to the target region.

To better learn the spatial attention, we propose Multi-Scale Guided Attention (MSGA) to explicitly supervise $\tilde{P}_1, \tilde{P}_2, ..., \tilde{P}_S$ at different scales. Let $P_s$ denote the softmaxed output of $\tilde{P}_s$ and $Y$ denote the ground truth (i.e., one-hot probability map) of a training sample $X$. Unlike a common deep supervision strategy that upsamples $\tilde{P}_s$ or $P_s$ at different scales to the same spatial resolution as $Y$ [40]-[42], we down-sample $Y$ to obtain multi-scale ground truth for the spatial attention, which makes the loss calculation more efficient and directly supervises the spatial attention maps at different scales. Let $Y_s$ denote the down-sampled ground truth at scale $s$. The multi-scale loss function for a single image is:

$\mathcal{L}(P, Y) = \sum_{s=1}^{S} \alpha_s L_s(P_s, Y_s)$   (1)

where $P = \{P_1, P_2, ..., P_S\}$, $L_s()$ is a base loss function for image segmentation, such as the Dice loss [19], and $\alpha_s$ is the weight of $L_s()$ at scale $s$.
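To make Eq. (1) concrete, the following is a minimal PyTorch sketch of the multi-scale guided attention loss, assuming each scale outputs a foreground probability map of shape (N, 1, D, H, W); the helper names and the per-scale weights are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def dice_loss(prob, target, eps=1e-5):
    """Soft Dice loss between a foreground probability map and a binary
    ground truth of the same shape."""
    inter = torch.sum(prob * target)
    return 1.0 - (2.0 * inter + eps) / (torch.sum(prob) + torch.sum(target) + eps)

def multi_scale_guided_loss(probs, label, alphas):
    """Weighted sum of per-scale losses as in Eq. (1).
    probs  : list of per-scale foreground probability maps P_1..P_S
    label  : full-resolution binary ground truth Y, shape (N, 1, D, H, W)
    alphas : per-scale weights alpha_s (illustrative values)
    The ground truth is down-sampled to each scale (rather than up-sampling
    the predictions), as described above."""
    total = 0.0
    for prob, alpha in zip(probs, alphas):
        # down-sample Y to the spatial size of this scale's prediction
        y_s = F.interpolate(label.float(), size=prob.shape[2:], mode="nearest")
        total = total + alpha * dice_loss(prob, y_s)
    return total
```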
Generating pseudo labels for unannotated images has been shown to be effective for semi-supervised segmentation [13], [31], [32]. However, the pseudo labels often contain some incorrect regions, and low-quality pseudo labels can largely limit the performance of the learned segmentation model. To prevent the learning process from being corrupted by inaccurate pseudo labels, we propose an Iterative Confidence-based Refinement And Weighting of pseudo Labels (I-CRAWL) framework for semi-supervised segmentation.

Assume that the entire training set consists of one subset $D_L$ with $N_L$ labeled images and another subset $D_U$ with $N_U$ unlabeled images. For an image $X^L_i \in D_L$, its ground truth label $Y^L_i$ is known, while for an image $X^U_i \in D_U$, its ground truth label is not provided, and we use $Y^U_i$ to denote its estimated pseudo label. As the pseudo labels may have a large range of quality, we introduce an image-level weight $w^U_i \in [0, 1]$ for each pair $(X^U_i, Y^U_i)$ for learning, and define $w^U_i$ based on the confidence (or, inversely, the uncertainty) of $Y^U_i$. The weight for a labeled image $X^L_i$ from $D_L$ can be similarly denoted as $w^L_i$, and we set $w^L_i = 1$ as the corresponding label $Y^L_i$ is clean and reliable. Therefore, the labeled subset can be denoted as $D_L = \{(X^L_i, Y^L_i, w^L_i)\}$ ($i = 1, 2, ..., N_L$), and the unlabeled subset with pseudo labels can be denoted as $D_U = \{(X^U_i, Y^U_i, w^U_i)\}$ ($i = 1, 2, ..., N_U$).

Our I-CRAWL is illustrated in Fig. 3, and it is an iterative learning process with $K$ rounds. Each round has four steps: 1) inference for unannotated images with uncertainty estimation, 2) confidence-aware refinement of pseudo labels, 3) confidence-based image weighting, and 4) network update, where the current pseudo labels and image-level weights are used to train the network. These steps are detailed as follows.

1) Inference for Unannotated Images with Uncertainty Estimation: With the network parameters $\theta_{k-1}$ obtained in the last round, in round $k$ we first use $\theta_{k-1}$ to predict provisional pseudo labels for the unannotated images in $D_U$ together with the associated uncertainty estimation. Note that in the first round (i.e., $k = 1$), the initial network parameters $\theta_0$ are obtained by pre-training with the annotated images in $D_L$. With $\theta_{k-1}$, we employ Monte Carlo (MC) Dropout [43], which has been shown to be an effective method for estimating the epistemic uncertainty caused by the lack of training data [34], [36]. MC Dropout feeds an image $X^U_i$ into the network $R$ times with random dropout, which leads to $R$ predictions, i.e., $R$ foreground probability maps [43]. The average of these $R$ foreground probability maps is taken as the provisional probability map $P^U_i$, a binarization of which gives a provisional pseudo segmentation label $Y^U_i$. At the same time, the statistical variance of the $R$ foreground probability maps is taken as the uncertainty map $V^U_i$, which gives voxel-level uncertainty. As uncertainty information can indicate potentially wrong segmentation results [34], [44], we treat pseudo labels with high uncertainty (i.e., low confidence) values as unreliable for images in $D_U$, and propose a Confidence-Aware Refinement (CAR) method to improve the pseudo labels' quality.
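This inference step can be sketched in PyTorch as follows; this is an illustrative implementation that assumes the model directly outputs foreground probability maps, and the function name is hypothetical.

```python
import torch

@torch.no_grad()
def mc_dropout_inference(model, image, runs=10, threshold=0.5):
    """Monte Carlo Dropout inference for one unannotated volume.
    Dropout layers are kept active at test time; `runs` stochastic forward
    passes give the mean foreground probability (provisional P^U_i), its
    binarization (provisional pseudo label Y^U_i) and the voxel-wise
    variance (uncertainty map V^U_i)."""
    model.eval()
    # re-enable dropout layers while keeping BatchNorm in eval mode
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)):
            m.train()
    probs = torch.stack([model(image) for _ in range(runs)], dim=0)
    prob = probs.mean(dim=0)          # provisional probability map
    uncertainty = probs.var(dim=0)    # voxel-level uncertainty
    pseudo = (prob > threshold).float()
    return prob, pseudo, uncertainty
```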
2) Confidence-Aware Refinement of Pseudo Labels: Given the provisional pseudo label $Y^U_i$ with the uncertainty map $V^U_i$ for an unannotated image $X^U_i \in D_U$, and letting $x$ denote a voxel, we split the voxels in $Y^U_i$ into three sets according to their confidence status: high-confidence foreground voxels and high-confidence background voxels, whose uncertainty values are below a threshold $t$, and undetermined (low-confidence) voxels $\mathcal{U}$ whose uncertainty values are at least $t$, where $t = 0.05$ is a small threshold value. For undetermined voxels in $\mathcal{U}$, we refine their labels using a contextual regularization that considers inter-voxel connections and softened probabilities for these voxels. Our CAR for pseudo label refinement thus has two steps: probability map softening and contextual regularization.

First, we soften the foreground probabilities of the uncertain voxels, which reduces the influence of the network's prediction for these voxels in the following contextual regularization step. Let $p$ denote a foreground probability value and $u$ denote the corresponding uncertainty value. The softening function $f(p, u)$ (Eq. (2)) is defined such that the softened foreground probability gets closer to 0.5 as $u$ becomes larger. Let $\dot{P}^U_{ix}$ denote the softened foreground probability of voxel $x$, which is obtained by:

$\dot{P}^U_{ix} = f(P^U_{ix}, V^U_{ix})$   (3)

Then, we use contextual regularization taking $\dot{P}^U_i$ as input to refine the pseudo labels, which is implemented by a fully connected Conditional Random Field (CRF) [45]. For simplicity, we denote $X^U_{ix}$, $Y^U_{ix}$ and $\dot{P}^U_{ix}$ as $x_x$, $y_x$ and $\dot{p}_x$, respectively. The energy function of the CRF is:

$E(Y^U_i) = \sum_x \phi(y_x) + \sum_{x, y} \psi(y_x, y_y)$   (4)

where the unary term $\phi(y_x) = -y_x \log(\dot{p}_x) - (1 - y_x)\log(1 - \dot{p}_x)$ constrains the output to be consistent with the softened probability map, and this constraint is weak for the low-confidence voxels in $\mathcal{U}$. The second term in Eq. (4) is a pairwise potential (Eq. (5)) that encourages the label's contextual consistency, where $\mu(y_x, y_y) = 1$ if $y_x = y_y$ and 0 otherwise. Minimization of Eq. (4) leads to a refined pseudo label $Y^U_i$ for $X^U_i$.

3) Confidence-Based Image Weighting: With the new pseudo label $Y^U_i$, we further employ the confidence to update its image-level weight $w^U_i$ to suppress low-quality pseudo labels at the image level. We first define an image-level uncertainty $v_i$ based on the uncertainty map $V^U_i$:

$v_i = \frac{\sum_x V^U_{ix}}{\sum_x Y^U_{ix} + \eta}$   (6)

i.e., $v_i$ is the sum of the voxel-level uncertainty normalized by the segmented lesion's volume in the image, and $\eta = 10^{-5}$ is a small number for numerical stability. Let $v_{max}$ and $v_{min}$ denote the maximal and minimal values of $v_i$ among all the unannotated images; we map $v_i$ to the range [0, 1]:

$\hat{v}_i = \frac{v_i - v_{min}}{v_{max} - v_{min}}$   (7)

Finally, the image-level weight $w^U_i$ is defined as:

$w^U_i = 1 - \hat{v}_i^{\gamma}$   (8)

where $\gamma \geq 1.0$ is a hyper-parameter that controls the non-linear mapping between the normalized image-level uncertainty $\hat{v}_i$ and the weight. We do not use $\gamma < 1.0$ as it would make the weights of most samples very small (close to 0.0). In contrast, $\gamma > 1.0$ keeps the weights of most samples close to 1.0, and only samples with a high uncertainty are strongly suppressed. In our experiments, we set $\gamma = 3.0$ according to the best performance on the validation set.
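Below is a minimal sketch of this confidence-based image weighting, assuming the weight takes the form $w = 1 - \hat{v}^{\gamma}$; this reading is consistent with the description above, but the exact expression in the original paper may differ, so the code should be treated as an illustration rather than a verified implementation.

```python
import torch

def image_level_weights(uncertainty_maps, pseudo_labels, gamma=3.0, eta=1e-5):
    """Confidence-based image weighting of pseudo labels.
    uncertainty_maps : list of voxel-wise uncertainty maps V^U_i
    pseudo_labels    : list of binary pseudo labels Y^U_i
    Returns one weight w^U_i in [0, 1] per unannotated image; images with
    higher normalized uncertainty receive smaller weights."""
    # image-level uncertainty: summed voxel uncertainty normalized by the
    # segmented lesion's volume (Eq. (6))
    v = torch.tensor([V.sum().item() / (Y.sum().item() + eta)
                      for V, Y in zip(uncertainty_maps, pseudo_labels)])
    # min-max normalization to [0, 1] across the unannotated set (Eq. (7));
    # eta added to the denominator for numerical stability
    v_hat = (v - v.min()) / (v.max() - v.min() + eta)
    # non-linear mapping controlled by gamma >= 1: most images keep weights
    # close to 1, and only highly uncertain ones are strongly suppressed
    return 1.0 - v_hat.pow(gamma)
```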
4) Model Update with Batch Training: With the refined pseudo labels and image-level weights obtained above, we train the network based on $D_L$ and the current pseudo labels for images in $D_U$, where each image is weighted by $w^L_i$ or $w^U_i$ in the segmentation loss function. The weighted loss for the entire training set is:

$\mathcal{L}_{total} = \sum_{i=1}^{N_L} w^L_i \mathcal{L}(P^L_i, Y^L_i) + \sum_{i=1}^{N_U} w^U_i \mathcal{L}(P^U_i, Y^U_i)$   (9)

where $\mathcal{L}()$ is defined in Eq. (1), and $P^L_i$ and $P^U_i$ are the multi-scale predictions obtained by PF-Net for an annotated image and an unannotated image, respectively.

A. Experimental Setting

1) Data: Thoracic CT scans of 41 male Rhesus Macaques with radiation-induced lung damage were collected with ethical committee approval. Once irradiated, each individual underwent serial CT scans for assessment of PF around every 30 days over 3 to 8 months. 133 CT scans with PF were used for the experiments on the segmentation task. The CT scans have a slice thickness of 1.25 mm, with image size 512 × 512 and pixel size ranging from 0.20 mm × 0.20 mm to 0.38 mm × 0.38 mm. We randomly split the dataset at the individual level into 86 scans from 27 individuals for training, 15 scans from 4 individuals for validation, and 32 scans from 10 individuals for testing. Manual annotations given by an experienced radiologist were used as the segmentation ground truth. In the training set, we used 18 scans with annotations as $D_L$ and the other 68 scans as unannotated images $D_U$ for semi-supervised learning, and also investigated the performance of our method with different ratios of annotated images. For preprocessing, we crop the lung region and normalize the intensity to [0, 1] using a window/level of 1500/-650.

2) Implementation and Evaluation Metrics: Our PF-Net and I-CRAWL framework were implemented in PyTorch with the PyMIC [14] library on a Ubuntu desktop with an NVIDIA GTX 1080 Ti GPU. The channel number parameter N in PF-Net was set to 16. Dropout was only used in the encoder. The dropout rate at the first two scales of the encoder was 0 due to the small number of feature channels, and that for the last three scales was 0.3, 0.4 and 0.5, respectively. We set the base loss function $L_s()$ as the Dice loss [19], and PF-Net was trained with the Adam optimizer. The round number for our I-CRAWL was K = 3, and we use round 0 to refer to the pre-training with annotated images. In each of the following rounds, the pseudo labels of unannotated images obtained by CAR were kept fixed, and we used the Adam optimizer to train the network for tens of thousands of iterations until the performance on the validation set stopped increasing. The learning rate was initialized as $10^{-3}$ and halved every 10k iterations. Uncertainty estimation was based on 10 forward passes of MC dropout. We used the SimpleCRF library to implement the fully connected CRF [45]. Following [46], image intensity was rescaled from [0, 1] to [0, 255] before the image was sent into the CRF, and the CRF parameters were: $w_1$ = 3, $w_2$ = 10, $\sigma_\alpha$ = 10, $\sigma_\beta$ = 20 and $\sigma_\gamma$ = 15, which were tuned on the validation set. $\gamma$ in Eq. (8) was 3.0, and the performance for different $\gamma$ values is shown in Table III. For quantitative evaluation of the segmentation, we used the Dice score, Relative Volume Error (RVE) and the 95-th percentile of the Hausdorff Distance (HD95) between the segmented PF lesions and the ground truth in 3D volumes. A paired t-test was used to assess whether two methods were significantly different.
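For reference, the volumetric Dice score and Relative Volume Error can be computed as in the short NumPy sketch below (an illustrative helper, not the authors' evaluation code; HD95 is omitted because it requires surface-distance computation).

```python
import numpy as np

def dice_and_rve(pred, gt, eps=1e-8):
    """Dice score (%) and Relative Volume Error (%) between two binary 3D
    segmentations given as numpy arrays of 0/1."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    v_pred, v_gt = pred.sum(), gt.sum()
    dice = 100.0 * 2.0 * inter / (v_pred + v_gt + eps)
    rve = 100.0 * abs(float(v_pred) - float(v_gt)) / (v_gt + eps)
    return dice, rve
```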
In this section, we investigate the performance of our PF-Net for pulmonary fibrosis segmentation using only the 18 annotated images for training. The results of semi-supervised learning will be presented in Section IV-C.

1) Comparison with Existing Networks: Our PF-Net was compared with three categories of network structures: 1) 2D CNNs, including the typical 2D U-Net [15] and the more advanced Attention U-Net [22] that leverages spatial attention to focus more on the segmentation target; 2) 3D CNNs, including 3D U-Net [18], 3D V-Net [19] and scSE-Net [17] that combines a 3D U-Net [18] backbone with spatial and channel "squeeze and excitation" modules; and 3) existing networks designed for volumetric images with large inter-slice spacing (i.e., anisotropic resolution): nnU-Net [27] that automatically configures the network structure so that it is adapted to the given dataset, AH-Net [25] that transfers features learned from 2D images to 3D anisotropic volumes, and VS-Net [29] that uses a mixture of 2D and 3D convolutions. The quantitative results are listed in Table I. We found that 3D U-Net [18] achieved a Dice score of 57.19%, which was the lowest among the compared methods. 2D U-Net [15], Attention U-Net [22], 3D V-Net [19] and 3D scSE-Net [17] achieved similar performance, with Dice scores around 60%. nnU-Net [27], AH-Net [25] and VS-Net [29], which were designed to deal with anisotropic resolution, generally performed better than these 2D and 3D networks. Among these existing methods, VS-Net achieved the best Dice score of 67.45%. Our PF-Net achieved Dice, RVE and HD95 of 70.36%, 27.96% and 10.87 mm, respectively, where the Dice and HD95 were significantly better than those of the other compared methods.

Fig. 4 shows a qualitative comparison between these networks, where (a) and (b) are from two individuals, and axial and coronal views are shown for each case. In Fig. 4(a), it can be seen that 2D U-Net [15], 3D V-Net [19] and AH-Net [25] lead to obvious under-segmentation, as highlighted by the blue arrows in the first row. VS-Net [29] performs better than them, but the result of our PF-Net is closer to the ground truth than that of VS-Net. From the coronal view of Fig. 4(b), we can observe that the result of 2D U-Net [15] lacks inter-slice consistency. 3D V-Net [19] achieves better inter-slice consistency, but gives a poor segmentation in the upper and lower regions of the lesion, as indicated by the blue arrow in the third column. AH-Net [25], VS-Net [29] and our PF-Net, which consider the anisotropic resolution, obtain better performance than the above networks that purely use 2D or 3D convolutions. Moreover, Fig. 4 shows that the lesions have complex and irregular sizes and shapes, and the proposed PF-Net achieves more accurate segmentation in these cases than AH-Net [25] and VS-Net [29].

2) Ablation Study: To investigate the effectiveness of each component of our PF-Net, we set the baseline as a naive 2.5D U-Net that extends 2D U-Net [15] by using pReLU, replacing 2D convolutions with 3D convolutions at the three lowest resolution levels, and adding dropout layers to each convolutional block. To justify the choice of using 2D convolutions at the first two resolution levels and 3D convolutions at the other resolution levels, we set M to 0-5, respectively. Note that M = 0 and M = 5 correspond to pure 2D and pure 3D networks, respectively.

[Figure caption: In the uncertainty map, dark green and red colors represent low and high uncertainty values, respectively. Blue arrows highlight some local differences.]

The comparison between these variants is listed in the first section of Table II, which shows that the performance increases as M changes from 0 to 2, and decreases when M is 3 or larger. This is in line with our motivation to set M to 2, as the in-plane resolution is around four times finer than the through-plane resolution. Thus, we use the baseline (M = 2) in the following ablation study. The proposed PF-Net is referred to as baseline + MSDA + MSGA, where MSDA is our proposed multi-scale dense attention and MSGA is the proposed multi-scale guided attention.
We compared the baseline and our PF-Net with: 1) Baseline + non-local [23], where the non-local block is a self-attention block inserted at the bottleneck (scale 5) of the baseline network; it was not used at the lower scales with higher resolution due to memory constraints; 2) Baseline + deep supervision, where the baseline network was only combined with a typical deep supervision strategy as implemented in [40], [41]; 3) Baseline + MSGA, without using MSDA; and 4) Baseline + MSGA + MSDA°, where MSDA° is a variant of MSDA in which $\tilde{P}_s$ is only sent to the next lower scale $s - 1$, rather than to all the lower scales in the decoder of PF-Net.

Quantitative evaluation results of these variants are listed in the second section of Table II. They show that, compared with the baseline (M = 2), deep supervision improved the Dice score from 67.87% to 68.57%, but our MSGA was more effective, with a Dice of 69.32%. Using MSDA° could further improve the performance, but it was less effective than our MSDA. Table II demonstrates that our PF-Net was better than the compared variants, and it significantly outperformed the baseline in terms of Dice and HD95. Fig. 6 presents a visualization of the multi-scale attention maps of PF-Net. It shows that attention maps across scales are consistent with each other and that they change from coarse to fine as the spatial resolution increases.

With PF-Net as the segmentation network structure, we further validate our proposed I-CRAWL for semi-supervised training. In Sections IV-C1 and IV-C2 we used 18 annotated and 68 unannotated images, i.e., the annotation ratio was around 20%. In Section IV-C3, we experimented with different annotation ratios including 10%, 20% and 50%.

[Table III caption: Performance on the validation set based on different γ values in the confidence-based weighting of pseudo labels in round 1 of I-CRAWL. "Initial" refers to the model pre-trained on annotated images (round 0). * denotes significant improvement from it (p-value < 0.05).]

1) Hyper-Parameter Setting: First, to investigate the best value of the hyper-parameter γ in Eq. (8) that controls the weight of unannotated images, we measured the model's performance on the validation set at the first round of I-CRAWL with γ ranging from 1.0 to 5.0. The results are compared with "Initial", which refers to the model pre-trained on the annotated images, and "No weighting", which denotes treating all unannotated images equally without considering the quality of pseudo labels. The quantitative measurements in Table III demonstrate that the best γ value was 3.0, with a corresponding Dice score of 68.75%, which was better than the 66.62% obtained by "Initial" and the 67.83% obtained by "No weighting". Therefore, we set γ = 3.0 in the following experiments.

2) Uncertainty and Confidence-Aware Refinement: Fig. 5 shows a visual comparison between a standard fully connected CRF [45] and our proposed CAR that leverages confidence (uncertainty) for refinement of pseudo labels. The first row of each subfigure shows an unannotated image and the pseudo label with uncertainty obtained by the CNN, and the second row shows the updated pseudo labels. In Fig. 5(a), the initial pseudo label has a large under-segmented region, which is associated with high values in the uncertainty map, i.e., low confidence. The standard CRF only fixed the pseudo label moderately. In contrast, with the help of confidence, our CAR largely improved the pseudo label's quality by recovering the under-segmented region.
In Fig. 5(b), the initial pseudo label has some over-segmentation in the airways, and the uncertainty map indicates the potentially wrong segmentation in the corresponding regions well. We can observe that CAR outperformed the CRF in removing the over-segmented airways.

We also compared MC Dropout [43] with two other uncertainty estimation methods: entropy minimization and a Bayesian network [47]. A visual comparison is shown in Fig. 7. We found that, in spite of the longer runtime compared with the other methods, MC Dropout generated better-calibrated uncertainty estimates. As shown in Fig. 7, the uncertain regions obtained by entropy minimization and the Bayesian network [47] are mostly located around the border of the segmentation output, while the uncertainty map obtained by MC Dropout better indicates under- and over-segmented regions, as highlighted by the red and yellow arrows in Fig. 7. For quantitative comparison, we applied CAR as a post-processing method to the validation set in the first round of I-CRAWL, using each of these uncertainty estimation methods in turn. The results in Table IV show that using MC Dropout for CAR improved the prediction accuracy from 66.62% to 69.05% in terms of Dice, which outperformed the other two uncertainty estimation methods. Fig. 8 shows a visualization of pseudo labels and uncertainty as the round increases. It can be observed that the initial pseudo label at round 0 has a large under-segmented region with high uncertainty. The pseudo label becomes more accurate and confident as the round increases.

3) Ablation Study of I-CRAWL: For the ablation study of I-CRAWL, we set PF-Net trained only with the annotated images as a baseline, and compared it with: 1) IT, which refers to naive iterative training, where in each round the pseudo label of an unannotated image is reset to the prediction given by the network without refinement; 2) IT + CRF, which uses a standard fully connected CRF [45] to refine pseudo labels; 3) IT + CAR, denoting that our confidence-aware refinement is used to update pseudo labels in each round; and 4) our I-CRAWL (IT + CAR + IW), where IW denotes our confidence-based image weighting of pseudo labels. The performance of these methods at different rounds is shown in Fig. 9. Note that round 0 is the baseline, and all the methods based on iterative training performed better than the baseline. However, the improvement obtained by only using IT is slight. Using CRF or CAR to refine the pseudo labels at different rounds achieved a large improvement in Dice, and our CAR, which considers the voxel-level confidence of predictions, outperformed the naive CRF. Weighting of pseudo labels based on image-level confidence helped to obtain more accurate results, and our I-CRAWL outperformed the other variants. Fig. 9 also shows that the improvement from round 0 to round 1 of I-CRAWL is large, but the model's performance does not change much at rounds 2 and 3. Table V shows a quantitative comparison between the baseline and the variants of I-CRAWL at the end of training (round 3). It can be observed that IT's performance was not far from the baseline, with an average Dice of 70.87% compared with 70.36%. Using CRF and CAR improved the average Dice to 72.13% and 72.71%, respectively, showing the superiority of CAR over CRF. I-CRAWL further improved the average Dice to 73.04%, with an average HD95 of 7.92 mm, which was better than the other variants.
4) Comparison with Existing Methods: I-CRAWL was compared with several state-of-the-art semi-supervised methods for medical image segmentation: 1) Fan et al. [13], which uses a randomly selected propagation strategy for semi-supervised COVID-19 lung infection segmentation; 2) Bai et al. [32], which uses a CRF to refine pseudo labels in an iterative training framework, corresponding to "IT + CRF" described previously; 3) Cui et al. [33], which is an adapted mean teacher method; and 4) UA-MT [34], the uncertainty-aware mean teacher. For all these methods, we used our PF-Net as the backbone network. We investigated the performance of these methods with different ratios of labeled data: 10%, 20% and 50%. For each setting, the baseline was learning only from the labeled images, and the upper bound was "full supervision", where 100% of the training images were labeled. The results, including the setting with only 10% annotated images, are shown in Table VI.

Our PF-Net combines 2D and 3D convolutions to deal with anisotropic resolutions, and we set M = 2 as the in-plane resolution is about four times finer than the through-plane resolution in our dataset. M may be set to other values according to the spacing information of different datasets. The multi-scale guided dense attention in PF-Net is important for dealing with PF lesions with various positions, shapes and scales. We noticed that Sinha et al. [48] also proposed a multi-scale attention, but it has key differences from ours. First, Sinha et al. [48] concatenated features at different levels of the encoder to obtain a multi-scale feature, which is used as input for parallel attention modules at different scales, while PF-Net learns multi-scale attention sequentially, where the attention at a lower resolution level is used as input for all the higher resolution levels with dense connections. Second, Sinha et al. [48] used self-attention inspired by the non-local block [23], which is computationally expensive with large memory consumption, while PF-Net uses convolution to obtain the attention coefficients at different spatial positions, which has higher memory and computational efficiency. In addition, for each attention module in [48], the attention maps are calculated in two steps, and consistency between the two steps is imposed via an L2 distance of their encoded representations, which is called guided attention by the authors. In contrast, guided attention in PF-Net refers to supervising the attention maps directly with the resampled segmentation ground truth.

Our I-CRAWL is a pseudo label-based method for semi-supervised learning. Although pseudo labels have been previously investigated [13], [32], I-CRAWL is superior to these works mainly for the following reasons. First, the pseudo labels generated by a model trained with a small set of annotated images inevitably contain many inaccurate predictions. Improving the quality of pseudo labels benefits the final segmentation model. However, the method in [13] does not refine pseudo labels, and Bai et al. [32] refine pseudo labels with a CRF without considering their confidence. Our method employs uncertainty estimation to find uncertain regions that are likely to be mis-segmented, and the confidence-aware refinement is more effective at refining these mis-segmented regions, leading to improved accuracy of pseudo labels.
Second, the quality of pseudo labels varies considerably across images, and it is important to exclude low-quality pseudo labels that may corrupt the segmentation model. However, Bai et al. [32] and Fan et al. [13] ignored this point and treated all the pseudo labels equally. In contrast, I-CRAWL uses image-level uncertainty information to highlight more confident pseudo labels and down-weight uncertain ones that are unreliable. Thus, the model is less affected by low-quality pseudo labels.

Uncertainty estimation plays an important role in our I-CRAWL framework. We found that the simple yet effective MC Dropout performed better than alternatives including entropy minimization and Bayesian networks [47]. Although MC Dropout is slow for uncertainty estimation, it is used offline at the beginning of each round of our method and takes a short time compared with the batch training step. In the scenario with 20% annotated data, our uncertainty estimation takes 8.72 minutes (7.69 s per 3D image), and CAR and image weighting take 12.98 minutes (11.45 s per 3D image). In contrast, the model update with batch training takes around 4 hours for each round, i.e., the first three steps of I-CRAWL account for 8.29% of the entire runtime of each round, which could be further accelerated by multi-thread parallel computing. Therefore, our uncertainty estimation and CAR require little extra time compared with naive iterative training and Bai et al. [32].

In conclusion, we present a novel 2.5D network structure and an uncertainty-based semi-supervised learning method for automatic segmentation of pulmonary fibrosis from anisotropic CT scans. To deal with complex PF lesions with irregular structures and appearance in CT volumes with anisotropic 3D resolution, we propose PF-Net, which combines a 2.5D network baseline with multi-scale guided dense attention. To leverage unannotated images for learning, we propose I-CRAWL, an iterative training framework in which a confidence-aware refinement process is introduced to update pseudo labels and a confidence-based image weighting is proposed to suppress images with low-quality pseudo labels. Experimental results with lung CT scans from Rhesus Macaques showed that our PF-Net outperformed existing 2D, 3D and 2.5D networks for PF lesion segmentation, and that our I-CRAWL could better leverage unannotated images for training than state-of-the-art semi-supervised methods. Our methods can be extended to other structures and to human CT scans in the future.
References (titles recovered from the extracted text)
Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries
Cancer and radiation therapy: Current advances and future directions
Pulmonary fibrosis: Pathogenesis, etiology and regulation
Radiation-induced lung injury (RILI)
Computer-aided diagnosis of pulmonary fibrosis using deep learning and CT images
Automated segmentation of pulmonary structures in thoracic computed tomography scans: A review
A homotopy-based sparse representation for fast and accurate shape prior modeling in liver surgical planning
Deep learning in medical image analysis
Knowledge-based collaborative deep learning for benign-malignant lung nodule classification on chest CT
Central focused convolutional neural networks: Developing a data-driven model for lung nodule segmentation
Relational modeling for robust and efficient pulmonary lobe segmentation in CT scans
Inf-Net: Automatic COVID-19 lung infection segmentation from CT scans
A noise-robust framework for automatic segmentation of COVID-19 pneumonia lesions from CT images
U-Net: Convolutional networks for biomedical image segmentation
UNet++: A nested U-Net architecture for medical image segmentation
Recalibrating fully convolutional networks with spatial and channel 'squeeze and excitation' blocks
3D U-Net: Learning dense volumetric segmentation from sparse annotation
V-Net: Fully convolutional neural networks for volumetric medical image segmentation
On the compactness, efficiency, and representation of 3D convolutional networks: Brain parcellation as a pretext task
3D U2-Net: A 3D universal U-Net for multi-domain medical image segmentation
Attention U-Net: Learning where to look for the pancreas
Non-local neural networks
Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation
3D anisotropic hybrid network: Transferring convolutional features from 2D images to 3D anisotropic volumes
3D APA-Net: 3D adversarial pyramid anisotropic convolutional network for prostate segmentation in MR images
nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation
DeepIGeoS-V2: Deep interactive segmentation of multiple organs from head and neck images with lightweight CNNs
Automatic segmentation of vestibular schwannoma from T2-weighted MRI by deep spatial attention with hardness-weighted loss
A CT-based automated algorithm for airway segmentation using freeze-and-grow propagation and deep learning
Pseudo-Label: The simple and efficient semi-supervised learning method for deep neural networks
Semi-supervised learning for network-based cardiac MR image segmentation
Semi-supervised brain lesion segmentation with an adapted mean teacher model
Uncertainty-aware self-ensembling model for semi-supervised 3D left atrium segmentation
Transformation consistent self-ensembling model for semi-supervised medical image segmentation
Uncertainty-aware multi-view co-training for semi-supervised medical image segmentation and domain adaptation
Multi-task attention-based semi-supervised learning for medical image segmentation
ASDNet: Attention based semi-supervised deep networks for medical image segmentation
Batch Normalization: Accelerating deep network training by reducing internal covariate shift
3D deeply supervised network for automatic liver segmentation from CT volumes
Deep supervision with additional labels for retinal vessel segmentation task
Automatic brain tumor segmentation based on cascaded convolutional neural networks with uncertainty estimation
Dropout as a Bayesian approximation: Representing model uncertainty in deep learning
Aleatoric uncertainty estimation with test-time augmentation for medical image segmentation with convolutional neural networks
Efficient inference in fully connected CRFs with Gaussian edge potentials
Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation
A Bayesian neural net to segment images with uncertainty estimates and good calibration
Multi-scale self-guided attention for medical image segmentation