title: CovSegNet: A Multi Encoder-Decoder Architecture for Improved Lesion Segmentation of COVID-19 Chest CT Scans
authors: Mahmud, Tanvir; Rahman, Md Awsafur; Fattah, Shaikh Anowarul; Kung, Sun-Yuan
date: 2020-12-02

Abstract-Automatic lung lesion segmentation of chest CT scans is considered a pivotal stage towards accurate diagnosis and severity measurement of COVID-19. The traditional U-shaped encoder-decoder architecture and its variants suffer from diminution of contextual information in pooling/upsampling operations, increased semantic gaps between encoded and decoded feature maps, and vanishing gradient problems arising from sequential gradient propagation, all of which result in sub-optimal performance. Moreover, operating on 3D CT volumes poses further limitations due to the exponential increase of computational complexity, which makes optimization difficult. In this paper, an automated COVID-19 lesion segmentation scheme is proposed utilizing a highly efficient neural network architecture, namely CovSegNet, to overcome these limitations. Additionally, a two-phase training scheme is introduced where a deeper 2D network is employed for generating an ROI-enhanced CT volume, followed by a shallower 3D network for further enhancement with more contextual information without increasing the computational burden. Along with the traditional vertical expansion of Unet, we have introduced horizontal expansion with multi-stage encoder-decoder modules for achieving optimum performance. Additionally, multi-scale feature maps are integrated into the scale transition process to overcome the loss of contextual information. Moreover, a multi-scale fusion module is introduced with a pyramid fusion scheme to reduce the semantic gaps between subsequent encoder/decoder modules while facilitating parallel optimization for efficient gradient propagation. Outstanding performances have been achieved on three publicly available datasets, largely outperforming other state-of-the-art approaches. The proposed scheme can be easily extended for achieving optimum segmentation performance in a wide variety of applications.
Impact Statement-With lower sensitivity (60-70%), elongated testing time, and a dire shortage of testing kits, the traditional RT-PCR based COVID-19 diagnostic scheme relies heavily on subsequent CT-based manual inspection for further investigation. Hence, automating the extraction of infected lesions from chest CT volumes would be major progress towards faster, accurate diagnosis of COVID-19. However, in challenging conditions with diffused, blurred, and varying shaped edges of COVID-19 lesions, conventional approaches fail to provide precise segmentation of lesions, which can lead to false estimation and loss of information. The proposed scheme, incorporating an efficient neural network architecture (CovSegNet), overcomes the limitations of traditional approaches and provides a significant improvement of performance (8.4% on the averaged dice measurement scale) over two datasets.

With the recent outbreak of coronavirus disease 2019 (COVID-19), the world has experienced an unprecedented number of deaths with a major collapse of healthcare systems throughout the world [1], [2]. Early diagnosis is the primary concern for controlling this global pandemic at this stage, owing to its extremely infectious nature [3]. Though reverse transcription-polymerase chain reaction (RT-PCR) is considered the gold standard for diagnosing COVID-19, its longer time requirement and lower sensitivity, along with a massive shortage of test kits, have already engendered an extreme urgency for alternative automated diagnostic schemes [4], [5]. Due to the wide applicability of artificial intelligence (AI) tools in numerous clinical diagnostic measures, AI has enormous potential to expedite the diagnostic process of COVID-19 through automated analysis and interpretation of clinical records [6], [7]. Chest radiography has already proven to be an effective source for COVID diagnostics due to its major implications relating to various levels of lung infection [8]. Computed tomography (CT) scans and chest X-rays have been extensively explored in the literature to establish an automated AI-based COVID diagnostic scheme [9]-[11]. Despite the easier access to chest X-rays, CT scans are more widely accepted for their finer details, leveraging accurate diagnosis of COVID infections. Precise segmentation of lung lesions in chest CT scans is one of the most demanding and challenging aspects of faster diagnosis of COVID-19 due to the shortage of annotated data, diverse levels of infection, and the novel types and characteristics of the infections [12]. Processing a 3D CT volume as a whole increases computational complexity exponentially, which makes optimization and convergence more difficult and limits the architectural diversity of the network. The most widely used alternative to 3D processing is to operate separately on 2D slices extracted from the CT volume [12]-[16].
However, such slice-based processing loses inter-slice contextual information, which results in sub-optimal performance. In [17]-[20], smaller sub-volumes are extracted from the original 3D volumes to minimize the computational burden as well as to utilize 3D contextual information. However, such methods suffer from inter-volume contextual information loss by considering only a small portion of the whole set at a time, and they add complexity in merging sub-volume-level predictions into the final result. A wide variety of approaches have been introduced in recent years for segmenting the region-of-interest in diverse applications. In [21], a fully convolutional network (FCN) is introduced that produces multiple scales of encoded feature maps and reconstructs the segmentation mask utilizing these encoded representations. In [22], the Unet architecture is introduced by integrating an inverted decoder module following the encoder module to gradually reconstruct the mask, and it has gained much popularity over the years. However, several architectural limitations of Unet have been identified that lead to suboptimal performance.

• The skip connection introduced in Unet generates a semantic gap between corresponding feature scales of the encoder-decoder modules, which mainly arises from the direct concatenation of two semantically dissimilar feature maps. As the encoder module gradually encodes the input image into a more generalized feature representation, it contains richer details compared to the corresponding decoded feature map, which contains more information for the reconstruction of the final segmentation mask. These semantic gaps between corresponding encoder and decoder feature maps make the optimization process more difficult to converge under such direct concatenations through skip connections.

• Contextual information loss occurs in traditional pooling/strided-convolution-based downsampling operations and becomes more prominent with deeper architectures. Such downsampling operations are mainly carried out to generate a more generalized, sparser feature representation with increased channels and reduced spatial resolution of the feature map. However, these operations also lead to a loss of contextual information that rises greatly with the vertical depth of the network. Similarly, traditional upsampling operations fail to properly incorporate contextual information.

• The vanishing gradient problem arises in deeper structures from the sequential optimization of multi-scale features, mainly due to the difficulty of gradient propagation through a deep stack of convolutional layers. As additional levels are incorporated in the encoder and decoder stacks to make the network deeper, it becomes increasingly difficult to backpropagate the gradients through these levels along longer sequential paths, which makes the optimization of the deeper layers more difficult. Hence, this problem reduces the effective contributions of the deeper layers of the encoder and decoder modules through improper optimization.

• Simplistic sequential convolutional layers are integrated into each level of the encoder/decoder modules, which lack enough architectural diversity to extract features from a broader spectrum; this is mainly caused by the linear propagation of gradients, which reduces the impact of prior convolutional layers at each level through diminishing gradients.
Such stacks also lack the opportunity for proper reuse of extracted features in successive convolutions and lack the parallelism among convolutional layers required for better optimization, which lowers the diversity of features generated at different levels of the network.

Different architectural modifications have been explored in recent years to overcome some of these limitations. To increase the diversity of operations at each scale of feature maps, numerous established network building blocks have been integrated in the encoder/decoder modules, e.g. the residual block [23], dense block [24], inception block [25], dilated residual block [26], and multi-res block [27]. To reduce the semantic gap between a particular scale of encoder and decoder, a residual path is proposed in the MultiResUnet architecture instead of the direct skip connection of Unet [27]. However, the semantic gap generated between multi-scale feature maps of the encoder and decoder modules still persists. In Unet++ [28], a nested stack of convolutional layers is introduced to reduce the semantic gaps, but it increases computational complexity considerably, which makes convergence difficult. In [19], Vnet is proposed, which utilizes residual building blocks in the Unet architecture, while in [20], cascaded-Vnet is presented for performance improvement, utilizing a dual stack of cascaded encoder-decoder modules. Nevertheless, with the numerous existing architectural limitations of the traditional U-shaped architecture in each stage, the additional encoding-decoding stage increases the semantic gaps as well as the vanishing gradient issues with contextual information loss, which opens up opportunities for further optimization.

In this paper, an improved, automated scheme is proposed for precise lesion segmentation of COVID-19 chest CT volumes, overcoming the limitations of traditional approaches with a novel deep neural network architecture, named CovSegNet. The major contributions of this work are summarized below:

Fig. 1. Workflow of the proposed scheme for segmenting lung lesions of COVID-19 in CT volume. In phase-1, the deeper CovSegNet2D is trained and optimized with CT slices. In phase-2, further joint optimization is carried out where the pre-trained CovSegNet2D is fine-tuned for generating the ROI-enhanced CT volume while a shallower form of CovSegNet3D is trained for more precise volumetric segmentation.

1) Along with the opportunity of vertical expansion, a horizontal expansion strategy is introduced in the CovSegNet architecture. In the vertical expansion mechanism, the encoder and decoder modules are deepened, while in horizontal expansion, several encoding-decoding stages are integrated. As discussed earlier, loss of contextual information occurs when the network is vertically expanded through subsequent downsampling operations, though this provides the opportunity for improved generalization by incorporating features from higher levels. The horizontal expansion mechanism, in contrast, assists in integrating more detailed features at each level for finer reconstruction, which helps to recover the lost contextual information. As a result, it provides the opportunity to increase generalization while exploiting the available contextual information through an optimal combination of horizontal and vertical stages.

2) For further replenishing the loss of contextual information in traditional pooling/upsampling operations, a scale
transition scheme is introduced in the encoder/decoder modules by incorporating multi-scale feature maps from preceding levels. This scale transition scheme also improves the gradient flow across different feature scales of a particular encoder/decoder module.

3) For reducing semantic gaps among corresponding feature scales of the encoder-decoder modules, a multi-scale fusion module is introduced between successive encoder-decoder modules. Instead of directly connecting corresponding feature scales as in Unet, this module fuses the multi-scale feature representations generated at preceding encoder/decoder modules through a pyramid fusion scheme to generate representational features with a reduced semantic gap and improved contextual information for the following decoder/encoder module. Moreover, this module establishes parallel linkage among multi-scale feature maps of subsequent encoder-decoder modules, which greatly improves the gradient flow across the network and helps to reduce the vanishing gradient problem.

4) A multi-phase training approach is introduced for integrating the advantages of both the 2D and 3D data processing schemes to reach optimum performance. 2D processing provides faster processing with lower memory consumption while losing inter-slice contextual information, whereas 3D processing exploits both intra-slice and inter-slice contextual information while increasing the computational burden. The proposed multi-phase training solves this problem by integrating a deeper variant of CovSegNet2D followed by a much shallower variant of CovSegNet3D, exploiting all available contextual information while limiting the computational burden.

5) The proposed CovSegNet architecture is designed in a modular and structured way: it can be adapted to a lightweight, shallow form to reduce complexity with considerable performance, or it can be made very deep to increase diversity for incorporating finer details. This generic design provides more flexibility for tuning the design parameters in a wide variety of applications.

6) Extensive experimentation has been carried out to validate the effectiveness of the proposed scheme on two publicly available datasets containing chest CT scans from COVID-19 patients. Moreover, to validate the wide applicability of the proposed architecture, experimental results on a challenging, non-clinical semantic segmentation dataset are also provided.

The proposed scheme splits the segmentation of CT volumes into two subsequent phases to reduce the computational complexity of 3D convolution as well as to take advantage of multi-scale 2D convolutions (Fig. 1). In the first phase of training, a 2D slice-based optimization process is carried out where a 2D variant of the proposed CovSegNet architecture (i.e. CovSegNet2D) is employed to extract the segmentation mask of the infected lesions in CT slices. After optimization, a thresholding scheme is employed to convert the predicted probability mask into a binary mask. Hence, after completion of phase-1 of training and optimization, this network is capable of extracting slice-based lesion masks efficiently and effectively. However, slice-based processing of input CT volumes leads to a loss of inter-slice contextual information, resulting in sub-optimal performance. To introduce further optimization and processing utilizing the inter-slice information, phase-2 of the training stage is incorporated.
Several 2D slices are extracted from the input CT volumes, and the pre-trained CovSegNet2D is utilized to extract the probability masks of the lung lesions. As CovSegNet2D is heavily optimized in phase-1 for 2D slice-based segmentation, it provides an effective probability mask of the region-of-interest (ROI) in the CT slices. These masks are used for enhancing the ROIs of the CT slices while suppressing the redundant parts, and the enhanced slices are later aggregated to generate the ROI-enhanced CT volume in which most of the redundant parts are removed. Afterwards, the 3D variant of the proposed CovSegNet (i.e. CovSegNet3D) is brought into operation for further processing of the ROI-enhanced CT volume, considering both intra-slice and inter-slice contextual features. In phase-2 of training, this CovSegNet3D is trained and optimized for generating the 3D volumetric probability mask, introducing inter-slice processing for improved performance, while the pre-trained CovSegNet2D obtained from phase-1 is fine-tuned for generating ROI-enhanced slices. Both networks pass through a joint optimization process for achieving optimum performance. Moreover, a deeper variant of CovSegNet2D is used to exploit the advantages of less expensive 2D operations, while a shallower variant of CovSegNet3D is used to reduce the computational burden of 3D processing. As considerably precise performance can be achieved from the slice-based operations utilizing CovSegNet2D only, the need for deeper 3D operations in phase-2 of training is minimized. Hence, the proposed hybrid networking scheme is capable of exploiting the advantages of the efficient, lighter 2D convolutions along with the 3D contextual information, which provides optimal performance.

Let us consider the set of CT volumes as $X$ and their corresponding ground truths as $Y$, such that $X_i \in \mathbb{R}^{h\times w\times s\times c}$, $Y_i \in \mathbb{R}^{h\times w\times s\times c}$, and $i = 1, 2, 3, \ldots, N$, where $(h, w, s, c)$ denote the height, width, number of slices, and channels per slice, respectively, of a particular CT volume out of $N$ total CT volumes. Moreover, let $x_{i,j} \in \mathbb{R}^{h\times w\times c}$ be the $i$-th slice out of $S$ total slices of the $j$-th CT volume, and $y_{i,j} \in \mathbb{R}^{h\times w\times c}$ its corresponding mask, such that $i = 1, 2, \ldots, S$ and $j = 1, 2, \ldots, N$. In the first phase of training, the objective function for slice-based optimization of CovSegNet2D is

$$\theta^{*} = \arg\min_{\theta}\, \mathcal{L}\big(y,\, y_{p}\big)$$

where $\theta$ denotes the network parameters of CovSegNet2D, and $x$, $y_p$, $y$ denote the input 2D slice, the predicted probability mask, and the corresponding ground-truth mask, respectively. In phase-2 of training, the pre-trained CovSegNet2D network obtained from phase-1 is employed to generate the ROI-enhanced CT volume $X'$, and thus

$$x' = x \odot y_{p}$$

where $\odot$ denotes element-wise multiplication, $x$ denotes a 2D CT slice, $x'$ denotes the ROI-enhanced CT slice, and $y_p$ denotes the predicted probability mask. Afterwards, optimization of CovSegNet3D is carried out utilizing the ROI-enhanced CT volume, while CovSegNet2D is fine-tuned to generate more accurate probability masks from 2D slices; the joint optimization objective function $\mathcal{F}$ can be formulated as

$$\mathcal{F}:\;\; \big(\Theta_1^{*},\, \Theta_2^{*}\big) = \arg\min_{\Theta_1,\, \Theta_2}\, \mathcal{L}\big(Y,\, Y_{p}\big)$$

where $\Theta_1$ denotes the network parameters of CovSegNet2D, $\Theta_2$ denotes the network parameters of CovSegNet3D, and $X'$, $Y_p$, $Y$ denote the ROI-enhanced CT volume, the predicted 3D mask, and the corresponding 3D ground truth, respectively.
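To make the two-phase data flow concrete, the following is a minimal PyTorch sketch of the hybrid inference path, assuming `net2d` and `net3d` stand in for CovSegNet2D and CovSegNet3D; the slice iteration, sigmoid activations, and tensor layout are illustrative assumptions rather than the authors' implementation.

```python
# A minimal sketch of the two-phase path: slice-wise 2D masks build the
# ROI-enhanced volume x' = x (.) y_p, which the 3D network then processes.
import torch

def roi_enhanced_volume(net2d: torch.nn.Module, volume: torch.Tensor) -> torch.Tensor:
    """volume: (s, c, h, w) stack of CT slices (c = 1 assumed for CT)."""
    masks = []
    for x in volume:                                 # one (c, h, w) slice at a time
        y_p = torch.sigmoid(net2d(x.unsqueeze(0)))   # predicted probability mask
        masks.append(y_p.squeeze(0))
    y_p_vol = torch.stack(masks)                     # (s, c, h, w)
    return volume * y_p_vol                          # element-wise suppression of non-ROI

def hybrid_predict(net2d, net3d, volume):
    x_enh = roi_enhanced_volume(net2d, volume)
    # (s, c, h, w) -> (1, c, s, h, w) so 3D convolutions span the slice axis
    x_3d = x_enh.permute(1, 0, 2, 3).unsqueeze(0)
    return torch.sigmoid(net3d(x_3d))                # volumetric probability mask Y_p
```

In training, the output of `hybrid_predict` would be compared against the 3D ground truth while the 2D network continues to be fine-tuned, mirroring the joint optimization formulated above.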
The proposed CovSegNet architecture is a generic representation of a network with a wide range of flexibility, increasing its applicability in different challenging conditions. This architecture can be designed for efficient operation in both the 2D and 3D domains. Moreover, it can be made deeper or lighter according to the requirements of the application. In the CovSegNet architecture, multiple stages of sequential encoding and decoding operations are carried out, along with a fusion scheme for multi-scale features between subsequent encoder/decoder modules. Each stage of the network consists of an encoder module and a corresponding decoder module. Hence, the network $\mathcal{N}$ can be represented as

$$\mathcal{N} = \big\{\, \big(E_{i}(\theta_{E_i}),\; D_{i}(\theta_{D_i})\big) \,\big\}_{i=1}^{m}$$

where $E_i$, $D_i$ represent the encoder and decoder modules, respectively, of the $i$-th stage out of $m$ total stages, and $\theta_{E_i}$, $\theta_{D_i}$ represent their respective parameters. A two-stage implementation of this architecture is schematically presented in Fig. 2. This network can be extended from level-1 to level-L to produce a deeper variant. The encoder/decoder module consists of several unit cells operating at each level of the network. To generate a deeper network, additional unit cells are integrated in each of the encoder/decoder modules to increase the number of levels. Here, $E_{i,j}$, $D_{i,j}$ represent the $i$-th unit cell of the $j$-th stage of the encoder and decoder, respectively, where $i = 1, 2, \ldots, L$ and $j = 1, 2, \ldots, m$. Hence, $L$ different scales of representative feature maps are obtained from each encoder/decoder module. Moreover, scale transition of feature maps is carried out between succeeding encoder/decoder unit cells, and effective transformations on each scale of feature maps are integrated utilizing the generalized unit cell structure in the encoder/decoder modules.

Between successive encoder/decoder modules, a multi-scale fusion (MSF) module is introduced to reduce the semantic gap with preceding stages as well as to improve gradient propagation through parallel linkage of multi-scale features. Similar to the encoder/decoder modules, each MSF module consists of several operational unit cells operating at different levels. Let $F_i$ represent the $i$-th MSF module and $F_{i,j}$ the $i$-th unit cell of the $j$-th MSF module. Each MSF module takes all scales of feature representations as input from all preceding encoder/decoder stages and generates $L$ different feature maps for the following encoder/decoder stage through deep fusion of the multi-scale features obtained from preceding stages. In each unit cell of the MSF module, multi-scale feature aggregation and a pyramid fusion scheme are employed, which can be represented as

$$F_{i,j} = \mathcal{F}\big(f_{1}, f_{2}, \ldots, f_{L}\big)$$

where $\mathcal{F}(\cdot)$ represents the functional operations in the MSF unit cell considering the $L$ scales of representations, $f_1, \ldots, f_L$, gathered from each of the preceding encoder/decoder modules. From the final level of the sequential decoder modules, several decoded feature representations are obtained, which are processed together in the joint optimizer unit ($\mathcal{J}$) to produce the final segmentation mask, which can be given by

$$f_{mask} = \mathcal{F}_{\mathcal{J}}\big(D^{(1)}, D^{(2)}, \ldots, D^{(m)}\big)$$

where $\mathcal{F}_{\mathcal{J}}(\cdot)$ represents the joint optimizer function and $D^{(j)}$ denotes the final-level decoded feature map of the $j$-th stage. All the basic building blocks of the CovSegNet architecture are generic and can be designed and optimized for both 2D and 3D operations. In the following discussions, the different building blocks of the CovSegNet architecture are presented in detail. For ease of discussion, mainly 2D operational blocks are considered; for 3D operations, all convolutional kernels and pooling/upsampling windows are shifted in dimension to operate on 3D voxels instead of 2D pixels.
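As a structural illustration of the horizontal expansion, the following sketch wires m encoder-decoder stages together with an MSF module between every pair of consecutive modules. `enc_list`, `dec_list`, and `msf_list` are assumed stand-ins whose forward passes exchange lists of L multi-scale feature maps; this is a skeleton under stated assumptions, not the authors' code.

```python
# Structural skeleton: E1 -> MSF -> D1 -> MSF -> E2 -> MSF -> D2 -> ...
import torch.nn as nn

class MultiStageSkeleton(nn.Module):
    def __init__(self, enc_list, dec_list, msf_list):
        super().__init__()
        assert len(msf_list) == 2 * len(enc_list) - 1  # one MSF between modules
        self.encs = nn.ModuleList(enc_list)
        self.decs = nn.ModuleList(dec_list)
        self.msfs = nn.ModuleList(msf_list)

    def forward(self, x):
        history, k = [], 0                  # multi-scale outputs of all modules
        feats = self.encs[0](x)             # stage-1 encoder on the input image
        history.append(feats)
        dec_tops = []
        for s in range(len(self.decs)):
            if s > 0:                       # later stages: fuse history, re-encode
                feats = self.encs[s](self.msfs[k](history))
                k += 1
                history.append(feats)
            feats = self.decs[s](self.msfs[k](history))
            k += 1
            history.append(feats)
            dec_tops.append(feats[0])       # finest-scale decoder output
        return dec_tops                     # consumed by the joint optimizer
```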
Architectural details of the most optimized implementations of CovSegNet2D and CovSegNet3D are presented in Table I and Table II, respectively. The encoder and decoder modules are structurally similar and are used successively in the sequential stages of CovSegNet; they are schematically presented in Fig. 3. These encoder/decoder modules are composed of several operational unit cells with transitional dense interconnections. The operations of the encoder/decoder modules can be divided into two categories: unit cell operations and transitional operations.

1) Encoder/Decoder Unit Cell operation: In Fig. 4, the unit cell structure of the encoder/decoder module is presented. Each unit cell receives two input feature maps, one from the transitional unit and the other from the preceding MSF unit, while its output feature map is passed to the following transitional and multi-scale fusion operations. Moreover, each unit cell consists of four densely interconnected convolutional layers, where each convolutional layer performs two sequential convolutional filtering operations with $(1\times 1)$ and $(3\times 3)$ kernels. Such dense interconnection between convolutional operations has proven effective in numerous applications. No dimensional scaling is carried out in this unit cell, as it is employed to introduce an adequate transformation in the feature space for encoding/decoding an effective representation. Hence, the unit cell operations can be functionally represented as $E, D : \mathbb{R}^{h\times w\times c} \to \mathbb{R}^{h\times w\times c}$, where $(h, w, c)$ represent the height, width, and channels of the feature map.

2) Encoder Down-transitional Operation: During down-transitional operations between subsequent unit cells of the encoder module, the spatial dimension of the feature map is reduced to generalize the feature map, whereas the channel depth is increased to incorporate more filtering operations in subsequent levels for generating sparser features. It can be functionally represented as $f : \mathbb{R}^{h\times w\times c} \to \mathbb{R}^{h/2\times w/2\times 2c}$, where the spatial resolution is downscaled by 2 and the channel depth is increased by 2 relative to the input feature map obtained from the previous level. However, traditional downsampling operations using pooling/strided convolutions result in a loss of contextual information, which becomes more prominent when a deep stack of unit cells is incorporated in the encoder module. To mitigate this loss in the down-transitional operation, a higher level of dense interconnection is proposed among the multi-scale feature maps generated by different unit cells. In Fig. 5a, the structure of such a down-transition unit is schematically presented. In each down-transition unit, the encoded feature representations generated by all higher levels of unit cells are considered for generating the down-scaled feature map. Hence, contextual information lost in each transitional operation can be recovered even in a very deep stack of unit cells, as feature representations from all preceding cells are considered during transition. To converge the multi-scale feature maps from preceding levels, pooling operations with different kernels are first carried out to make their spatial dimensions uniform, and subsequently, channel-wise feature aggregation is carried out. The aggregated feature map $F_{agg,DT}$ generated at the $i$-th level can be represented as

$$F_{agg,DT} = E_{i} \,\oplus\, P_{(2\times 2)}(E_{i-1}) \,\oplus\, P_{(4\times 4)}(E_{i-2}) \,\oplus\, \cdots \,\oplus\, P_{(2^{i-1}\times 2^{i-1})}(E_{1})$$

where $\oplus$ indicates feature concatenation, $P_{(2\times 2)}$ represents a pooling operation with a $(2\times 2)$ window (and correspondingly for larger windows), and $E_i$ represents the output of the $i$-th unit cell of the encoder. Finally, a convolutional operation with a $(2\times 2)$ kernel and a stride of $(2\times 2)$ is carried out to generate the down-scaled feature map by filtering the aggregated feature vector.
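A minimal sketch of the down-transition unit just described, assuming average pooling for the multi-window pooling step and illustrative channel bookkeeping:

```python
# Down-transition: pool all finer encoder outputs to a common size, concatenate,
# then a strided (2x2) convolution halves resolution and rescales channels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DownTransition(nn.Module):
    def __init__(self, agg_channels: int, out_channels: int):
        super().__init__()
        # f: R^{h x w x c} -> R^{h/2 x w/2 x 2c} via strided filtering of F_agg,DT
        self.down = nn.Conv2d(agg_channels, out_channels, kernel_size=2, stride=2)

    def forward(self, enc_outputs: list) -> torch.Tensor:
        """enc_outputs: unit-cell outputs E_1..E_i, finest resolution first."""
        target = enc_outputs[-1].shape[-2:]     # spatial size of the current level
        pooled = [F.adaptive_avg_pool2d(e, target) for e in enc_outputs[:-1]]
        agg = torch.cat(pooled + [enc_outputs[-1]], dim=1)  # channel-wise F_agg,DT
        return self.down(agg)                   # down-scaled, channel-expanded map
```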
3) Decoder Up-transitional Operation: Conversely, up-transitional operations are carried out between successive decoder unit cells to provide the dimensional shifting towards the reconstruction of the final segmentation mask. In each up-transition operation, the spatial resolution is upscaled by 2 while the channel depth is reduced by 2 to get closer to the final reconstruction mask; this can be represented as $f : \mathbb{R}^{h\times w\times c} \to \mathbb{R}^{2h\times 2w\times c/2}$. Similar to the down-transitional operation in the encoder, all preceding multi-scale decoded feature maps generated by different unit cells are taken into consideration in the up-transition operation to gather more contextual information (Fig. 5b). First, spatially uniform feature maps are created through bilinear interpolation upsampling with different windows, and feature aggregation is carried out to generate the aggregated feature vector $F_{agg,UT}$, which is given by

$$F_{agg,UT} = D_{i} \,\oplus\, U_{(2\times 2)}(D_{i-1}) \,\oplus\, U_{(4\times 4)}(D_{i-2}) \,\oplus\, \cdots$$

where $U_{(2\times 2)}$ represents a bilinear upsampling operation with a $(2\times 2)$ window (and correspondingly for larger windows), and $D_i$ represents the output of the $i$-th unit cell of the decoder. Finally, the aggregated feature map is processed using a deconvolution operation with a $(2\times 2)$ kernel to incorporate the necessary dimensional up-scaling for further processing in the following unit cell.

During sequential encoding-decoding operations, a semantic gap is generated between similar scales of encoded and decoded feature maps. Moreover, in a traditional architecture, the gradient has to propagate sequentially, which can give rise to vanishing gradient problems, particularly for deeper encoder/decoder modules. As multiple stages of encoding and decoding operations are integrated into CovSegNet, this problem would be even more prominent if all the encoder and decoder modules were sequentially connected. To overcome these limitations, a multi-scale fusion module is proposed that develops parallel interconnections among different scales of feature maps of the encoder/decoder modules utilizing a pyramid fusion scheme. As shown in Fig. 6, each MSF module consists of several MSF unit cells, where each cell considers the multi-scale feature maps generated at different levels of the preceding encoder/decoder modules and generates the feature map for the unit cell of the following encoder/decoder module. Here, similar scales of feature representations generated at different levels of the preceding encoder/decoder modules are first concatenated to produce $L$ multi-scale feature maps. Afterward, all $L$ scales of feature maps are made spatially equivalent in dimension through pooling and bilinear upsampling with different windows, and channel-wise feature concatenation is carried out to generate the aggregated feature vector. This can be represented as

$$F^{(i,j)}_{agg,MSF} = \bigoplus_{k=1}^{L} \mathcal{S}_{k\to i}\big(f_{k}\big)$$

where $F^{(i,j)}_{agg,MSF}$ is the aggregated feature vector generated at the $i$-th level of the $j$-th MSF module, $f_k$ represents the $k$-th concatenated feature map, and $\mathcal{S}_{k\to i}(\cdot)$ denotes the pooling/bilinear-upsampling operation that brings scale $k$ to scale $i$. Afterward, the aggregated feature vector is passed through a pyramid fusion scheme to generate the output feature vector, which is fed to the corresponding encoder/decoder unit cell of the following module. Hence, the output feature map generated by each MSF unit cell contains information from all preceding modules and thus establishes a parallel flow of optimization for efficient gradient propagation.
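The multi-scale aggregation step of an MSF unit cell might be sketched as follows, assuming adaptive average pooling for the down-scaling direction and bilinear interpolation for the up-scaling direction, consistent with the text:

```python
# MSF aggregation at level i: resize every concatenated scale to level i's
# spatial size, then concatenate channel-wise to form F_agg,MSF.
import torch
import torch.nn.functional as F

def msf_aggregate(scales: list, level: int) -> torch.Tensor:
    """scales[k]: (n, c_k, h_k, w_k) concatenated features of scale k (0-indexed)."""
    target = scales[level].shape[-2:]           # spatial size of level i
    resized = []
    for k, f_k in enumerate(scales):
        if k < level:                            # finer scale -> pool down
            f_k = F.adaptive_avg_pool2d(f_k, target)
        elif k > level:                          # coarser scale -> upsample
            f_k = F.interpolate(f_k, size=target, mode="bilinear",
                                align_corners=False)
        resized.append(f_k)
    return torch.cat(resized, dim=1)             # aggregated vector F_agg,MSF
```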
The pyramid fusion (PF) module applies a pyramid fusion scheme to the aggregated feature map of the MSF unit cell ($F_{agg,MSF}$) utilizing combinations of sequential multi-window pooling and upsampling operations (shown in Fig. 7). First, the depth of the aggregated vector $F_{agg,MSF}$ is reduced through a pointwise convolution (kernel $1\times 1$) to generate the feature vector $f_a$, i.e. $F_{agg,MSF} \to f_a$, where $f_a \in \mathbb{R}^{h\times w\times c}$. Afterwards, the generated vector $f_a$ passes through multiple spatial scaling-vertical scaling-inverse spatial scaling operations in parallel with different scaling factors. Spatial scaling is carried out utilizing pairs of pooling and upsampling operations with different kernel windows, while vertical scaling is employed utilizing convolutional filtering (kernel $3\times 3$) to reduce the channel depth to one-fourth of the initial depth. Initial reduction followed by expansion of the feature map assists in gathering a more general feature representation, while initial expansion followed by reduction of the feature map gathers more detailed information from a sparser domain. These operations pave the way to extract the most generalized representations through analysis over multiple receptive fields:

$$\mathcal{P}_{r}(f_{a}) = U_{(r\times r)}\Big(C_{(3\times 3)}\big(P_{(r\times r)}(f_{a})\big)\Big)$$

where $\mathcal{P}_r$ denotes one of the parallel operational paths in the PF module with a spatial scaling factor of $r$; the expansion-type paths mirror this form with the pooling $P$ and upsampling $U$ operations interchanged, and $C_{(3\times 3)}$ denotes the vertical-scaling convolution. Afterwards, a feature aggregation operation is carried out utilizing the different representations generated along the multiple paths, together with the input representation, to generate the aggregated vector $F_{agg,PF} \in \mathbb{R}^{h\times w\times 2c}$. Finally, a pointwise convolution (kernel $1\times 1$) is carried out to generate the output feature map $f_{out,PF} \in \mathbb{R}^{h\times w\times c}$.

The decoded feature maps generated at the top of the decoder modules are considered for final reconstruction through a joint optimization process, shown schematically in Fig. 8. Initially, an aggregated feature vector $F_{agg,J}$ is created considering all the output feature maps from the different decoder modules, given by

$$F_{agg,J} = \bigoplus_{j=1}^{S} D^{(j)}$$

where $D^{(j)}$ denotes the final-level output feature map of the $j$-th decoder module and $S$ denotes the total number of stages. Afterward, the pyramid fusion scheme is employed on the aggregated vector to obtain a more generalized representation utilizing the multi-scale decoded representations. Finally, another convolutional filtering (kernel $3\times 3$) is carried out to generate the final segmentation mask $f_{mask}$ utilizing a binary activation function, which can be represented as

$$f_{mask} = \sigma\Big(C_{(3\times 3)}\big(PF(F_{agg,J})\big)\Big)$$

where $\sigma(\cdot)$ denotes the non-linear activation.

The Tversky index is introduced in [31] for a better generalization of the dice index by balancing out false positives and false negatives; it is given by

$$TI = \frac{\sum_{i=1}^{P} p_{1i}\, g_{1i} + \epsilon}{\sum_{i=1}^{P} p_{1i}\, g_{1i} \;+\; \alpha \sum_{i=1}^{P} p_{0i}\, g_{1i} \;+\; \beta \sum_{i=1}^{P} p_{1i}\, g_{0i} + \epsilon}$$

where $g_{0i}$, $p_{0i}$ indicate the ground truth and predicted probability of pixel $i$ being in a normal region, $g_{1i}$, $p_{1i}$ indicate the ground truth and predicted probability of pixel $i$ being in an abnormal region, $P$ is the total number of pixels in a given image, $\alpha$, $\beta$ shift the emphasis for balancing class imbalance such that $\alpha + \beta = 1$, and $\epsilon$ ($10^{-8}$) is a safety factor used to avoid division by zero. To put more emphasis on hard training examples, a focal Tversky loss function is introduced in [32] utilizing the Tversky index, given by

$$\mathcal{L}_{FT} = \big(1 - TI\big)^{1/\gamma}$$

where $\gamma$ is used to emphasize the challenging, less accurate predictions. Owing to its better generalization across a large number of datasets according to [32], $\alpha = 0.7$, $\beta = 0.3$, $\gamma = 4/3$ are used for all experimentation in this study.
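A compact PyTorch sketch of the focal Tversky loss with the stated settings (alpha = 0.7, beta = 0.3, gamma = 4/3, epsilon = 1e-8); the flattening strategy is an implementation assumption:

```python
import torch

def focal_tversky_loss(y_p: torch.Tensor, y: torch.Tensor,
                       alpha: float = 0.7, beta: float = 0.3,
                       gamma: float = 4.0 / 3.0, eps: float = 1e-8) -> torch.Tensor:
    """y_p: predicted probabilities in [0, 1]; y: binary ground truth (same shape)."""
    p1, g1 = y_p.reshape(-1), y.reshape(-1)    # abnormal-region probabilities
    p0, g0 = 1.0 - p1, 1.0 - g1                # normal-region probabilities
    tp = (p1 * g1).sum()
    fn = (p0 * g1).sum()                       # weighted by alpha (recall emphasis)
    fp = (p1 * g0).sum()                       # weighted by beta
    ti = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - ti) ** (1.0 / gamma)         # focal exponent 1/gamma = 3/4
```

With gamma = 4/3, the exponent 1/gamma = 3/4 flattens the loss near TI = 1 and steepens it for poorly segmented examples, which is the stated motivation for the focal variant.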
If $y$, $y_p$ denote the slice-wise ground-truth mask and the corresponding probability prediction, respectively, while $Y$, $Y_p$ denote the volumetric ground-truth mask and the corresponding probability prediction, respectively, the objective loss functions for separately optimizing CovSegNet2D and CovSegNet3D can be represented as

$$\mathcal{L}_{2D} = \mathcal{L}(y,\, y_{p}); \qquad y,\, y_{p} \in \mathbb{R}^{h\times w\times c} \tag{16}$$

$$\mathcal{L}_{3D} = \mathcal{L}(Y,\, Y_{p}); \qquad Y,\, Y_{p} \in \mathbb{R}^{h\times w\times s\times c} \tag{17}$$

The joint optimization objective function used in phase-2, combining the slice-wise and volumetric operations, is given by

$$\mathcal{L}_{joint} = \mathcal{L}_{3D} + \frac{\lambda}{s} \sum_{i=1}^{s} \mathcal{L}^{(i)}_{2D} \tag{18}$$

where $\lambda$ denotes the scaling factor of the 2D loss term and $s$ denotes the total number of 2D slices per volume. Here, $\lambda = 0.2$ is used for optimization to put more emphasis on CovSegNet3D in phase-2, as CovSegNet2D is pre-trained in phase-1 and is only fine-tuned in phase-2.

Experimentation has been carried out on three publicly available datasets to validate the effectiveness of the proposed scheme on numerous segmentation tasks. The performances of CovSegNet2D and CovSegNet3D have been studied separately, along with the proposed hybrid scheme of joint optimization combining CovSegNet2D and CovSegNet3D. Dataset-1 contains 20 CT volumes with 1800+ slices annotated by an expert radiologist panel [33]. All slices have annotations for both lung and infection regions. Each slice has a resolution of (630 × 630) and is resized to (512 × 512). Dataset-2 is the "COVID-19 CT Segmentation dataset", which contains 110 axial CT images collected by the Italian Society of Medical and Interventional Radiology from 40 different COVID patients [34]. All images have a resolution of (512 × 512), and each slice contains multi-class annotations of infections. Dataset-3 is the "Semantic Drone Dataset", which focuses on semantic understanding of urban scenes to increase the safety of drone flight and landing procedures [35]. This dataset consists of 400 images with pixel-wise annotations for 20 different classes at a resolution of 6000 × 4000; all of these images are resized to (512 × 512). Experimentation on Dataset-3 is mainly included to investigate the effectiveness of the proposed CovSegNet architecture in other domains with challenging operating conditions.

The different hyper-parameters of the network are chosen through experimentation for better performance. The Adam optimizer is employed for optimization of the network during the training phase with an initial learning rate of $10^{-5}$. The learning rate is decayed every 10 epochs with a decay rate of 0.99. An Intel® Xeon® D-1653N CPU @ 2.80 GHz with 12 MB cache and 8 cores, along with 24 GB RAM, is used for experimentation. For hardware acceleration, 2× NVIDIA RTX 2080 Ti GPUs with 4608 CUDA cores running at 1770 MHz and 24 GB GDDR6 memory are deployed. The network is trained for 1000 epochs on each dataset. The batch size is chosen to be 32 for processing 2D CT slices, while it is chosen to be 2 for processing 3D CT volumes.

A number of traditional evaluation metrics are used for the evaluation of performance; in terms of true positive ($TP$), false positive ($FP$), and false-negative ($FN$) predictions, they are given by

$$\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Precision} = \frac{TP}{TP + FP},$$

$$\text{Dice} = \frac{2\,TP}{2\,TP + FP + FN}, \qquad \text{IoU} = \frac{TP}{TP + FP + FN}$$

A five-fold cross-validation scheme is carried out separately on these databases for evaluation of the proposed scheme. The means and standard deviations of the evaluation metrics obtained on the different test folds are reported. For binary thresholding of the predicted probability mask, a threshold of 0.5 is used in general.
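The reported overlap metrics can be computed from a thresholded probability mask as in the following sketch; the 0.5 threshold follows the text, while the rest is a standard formulation rather than the authors' exact evaluation code:

```python
import numpy as np

def overlap_metrics(prob: np.ndarray, gt: np.ndarray, thr: float = 0.5) -> dict:
    pred = (prob >= thr).astype(np.uint8)       # binary thresholding at 0.5
    gt = gt.astype(np.uint8)
    tp = int(np.sum((pred == 1) & (gt == 1)))
    fp = int(np.sum((pred == 1) & (gt == 0)))
    fn = int(np.sum((pred == 0) & (gt == 1)))
    eps = 1e-8                                   # guards empty-mask division
    return {
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "iou": tp / (tp + fp + fn + eps),
        "sensitivity": tp / (tp + fn + eps),
        "precision": tp / (tp + fp + eps),
    }
```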
The Wilcoxon rank-sum test is used for statistical analysis of the performance improvements obtained with the proposed scheme. The performances of the proposed schemes are statistically analyzed with the significance level set to α = 0.01. The null hypothesis is that no significant improvement in performance is achieved using the proposed scheme over the other existing best-performing approaches.

To analyze the effectiveness of the different modules of the proposed CovSegNet architecture, an ablation study is carried out. The baseline model is defined as the two-stage implementation with encoder and decoder modules only, excluding the down-transition (DT) units, up-transition (UT) units, and multi-scale fusion modules. The statistical significance test is carried out to validate the improvement of dice scores over the baseline model.

1) Effects of the transition units: Results are summarized in Table III for the 2D analysis. The inclusion of the down-transition unit (V2) in the encoder modules provides 1.7% and 1.5% improvements of dice scores on Database-1 and 2, respectively, over the baseline. Moreover, the inclusion of the up-transition unit (V3) in the decoder modules provides 1.3% and 1.2% improvements of dice scores, while the inclusion of both transition units (V4) provides 2.6% and 2.9% improvements of dice scores on Database-1 and 2, respectively. Hence, both the up-transition and down-transition units contribute considerable improvements over the baseline performance. Similar improvements are noticeable for the 3D variants of the transition units (from V2-3D to V4-3D), summarized in Table IV. All the improvements are found to be statistically significant (p < 0.01).

2) Effects of the multi-scale fusion (MSF) module: The MSF modules are proposed in place of the traditional direct skip connection scheme of the Unet architecture to reduce the semantic gaps between subsequent encoder and decoder modules. In the baseline model, direct skip connections are used between succeeding modules instead of the MSF module. In Table III, the change of performance with the inclusion of the MSF module in the 2D baseline model is provided as V6. It should be noticed that a 5.1% improvement of dice score and a 4.3% improvement of IoU score have been achieved on Database-1, while a 7.6% improvement of dice score and an 8.3% improvement of IoU score have been achieved on Database-2. Similar performance improvements can be noticed for the incorporation of the MSF module in the 3D baseline model (V6-3D in Table IV). These improvements are found to be statistically significant (p < 0.01).

3) Effects of the pyramid fusion (PF) scheme: Pyramid fusion modules are integrated into the MSF modules to operate on the aggregated multi-scale feature vector of the MSF module. Instead of the PF module, a pointwise convolution with a (1 × 1) kernel can be performed to reduce and transform the aggregated vector into the output vector. The performance of the 2D baseline model including this simplified version of the MSF module is reported as V5 in Table III. It is to be noted that a 2.3% improvement of dice score is achieved on Database-1 and a 3.4% improvement on Database-2 over the baseline model using these simplified MSF modules, and these improvements are statistically significant (p < 0.01). However, these simplified MSF modules yield 3.2% and 5.3% lower dice scores on Database-1 and 2, respectively, than the original MSF modules incorporating the PF scheme (V6).
Similarly, considerable improvement is also achieved by incorporating the 3D pyramid fusion scheme in the 3D variants of the MSF module, as can be noticed from V5-3D and V6-3D in Table IV. This justifies the effectiveness of the pyramid fusion scheme in the MSF module.

4) Effects of vertical and horizontal expansion: The proposed CovSegNet architecture is designed in a modular way with the opportunity for both vertical and horizontal expansion, integrating more levels and stages, respectively. In Table VII, the performances of the CovSegNet architecture with different numbers of levels and stages are provided. It should be noticed that the optimum dice score of 91.1% is obtained for CovSegNet2D with 5 levels and 2 stages. The best performance of the single-stage implementation is found to be 86.7%, which is 4.4% lower than the best of the 2-stage implementation. Similar analyses have been carried out on CovSegNet3D using volumetric data, where the highest dice score of 92.3% is achieved with the 3-level, 2-stage implementation. Moreover, when more stages are included, comparably higher performances are obtained with a lower number of levels, e.g. the best dice score of 90.8% in the 3-stage setup of CovSegNet2D is achieved with 4 levels. With horizontal expansion, the model gathers a larger amount of contextual information even with a lower number of levels, resulting in higher performance. However, further expansion in both directions starts to increase the complexity, causing a decrease in performance due to overfitting issues.

5) Effects of the hybrid 2D-3D joint optimization scheme with two-phase training: The proposed two-phase training scheme exploits the advantages of both slice-based optimization and volumetric optimization. Quantitative performances obtained using CovSegNet2D, CovSegNet3D, and the hybrid scheme are provided in Tables V and VI. Slice-based processing provides the advantage of employing deeper networks with lighter 2D convolutions, while losing the contextual information along the z-axis. On the other hand, volumetric analysis increases the computational burden of optimization for processing with 3D kernels while providing more contextual information. The best variant of CovSegNet3D provides a 1.2% higher dice score and a 0.8% higher IoU score over the best variant of CovSegNet2D. Thus, the performances of the proposed CovSegNet architectures are quite comparable in both 2D and 3D processing, with minor variations. It should be noted that by combining the advantages of both these schemes in the proposed multi-phase training approach, 3% and 1.8% higher dice scores are achieved compared to the best-performing CovSegNet2D and CovSegNet3D architectures, respectively. Moreover, to reduce the computational burden of 3D data processing in the hybrid scheme, only a 2-level, dual-stage implementation of CovSegNet3D is employed, accompanied by the 4-level, dual-stage implementation of CovSegNet2D, which provides the optimal performance with minimal complexity. This improvement signifies the effectiveness of the hybrid networking scheme with multi-phase training (p < 0.01). Moreover, a qualitative analysis of the performances of the individual networks and the hybrid network is presented in Fig. 10 for different levels of infection. It should be noticed that both the false positive and false negative regions are reduced in the segmented mask of the hybrid scheme compared to the individual networks.
6) Effects of the loss functions: In Table X, the effects of different loss functions on the performance of CovSegNet are summarized. For optimizing the hybrid network, the joint optimization objective function (Eqn. 18) is defined incorporating the losses of the CovSegNet2D and CovSegNet3D networks. It should be noticed that the focal Tversky loss function provides a 0.9% improvement of dice score over the traditional dice loss function and a 0.7% improvement over the aggregated dice loss and binary cross-entropy loss function. Similar improvements are also achieved for CovSegNet3D and the CovSegNet-hybrid network. However, the proposed CovSegNet architecture mostly provides stable performance across different traditional loss functions, though the optimum performance is achieved with the focal Tversky loss for its higher emphasis on hard training examples.

To compare the performance of the proposed CovSegNet architecture, several state-of-the-art networks are considered. For a fair comparison, most of these networks are implemented using their open-source implementations, and similar train-test folds are used for performance evaluation. Infection segmentation performances using slice-based 2D operations and volumetric 3D operations are summarized in Tables V and VI, respectively. CovSegNet2D provides a 4.2% higher dice score on Database-1 and an 8.6% improvement of dice score on Database-2 compared to the second-highest score (Semi-Inf-Net). Hence, consistent improvements in performance have been achieved in the 2D slice-based analysis using CovSegNet2D. Moreover, in the volumetric analysis approach, CovSegNet3D provides an 8.4% higher dice score and a 9.4% higher IoU score compared to the next-best performing model (VNet). Thus, the 3D variant of CovSegNet provides consistent improvements over the 3D counterparts of other existing networks. It should be noticed that the proposed hybrid scheme combining CovSegNet2D and CovSegNet3D provides the most optimal performance, with a dice score of 94.1% and an IoU score of 90.2%. Some qualitative visualizations of performances obtained in different challenging conditions are shown in Fig. 9. Since volumetric information is available for Database-1, the proposed hybrid scheme is employed there, while only 2D slice-based analysis is carried out on Database-2 using CovSegNet2D. It should be noted that the proposed scheme performs consistently better than other networks in segmenting the most challenging diffused, blurred, and varying shaped edges of COVID lesions. Moreover, quantitative performances on challenging multi-class lesion segmentation, with separate ground-glass opacity (GGO) and consolidation regions, are summarized in Table VIII, where an 8.2% improvement of dice score is obtained in GGO segmentation and an 11% improvement in consolidation segmentation using the CovSegNet architecture over the other best-performing approaches. Additionally, from the visual analysis of the performances shown in Fig. 11, it can easily be noted that the proposed network considerably reduces the false predictions even in these challenging conditions compared to other state-of-the-art approaches. Furthermore, the quantitative results obtained on the non-clinical Database-3 are summarized in Table IX, which shows significant performance improvement, with a 22.4% improvement in dice score and a 21.8% improvement in mean IoU compared to the Unet architecture. Weighted mean performances over all 20 classes are taken for better estimation.
In Fig. 12, visual representations of some sample images are shown for different networks on Database-3, which even more conspicuously signifies the better performance of the proposed architecture. Since Database-3 is very complicated, with a huge number of classes, the performance differences between the proposed CovSegNet and other existing networks are more prominent, as this dataset demands effective exploitation of minute, complex, and scattered features of diversified classes. The proposed CovSegNet architecture ensures the proper optimization of all the network parameters through improved parallelization that enhances efficient gradient propagation in the whole network, resulting in effective exploitation of the contextual information with consistently good performance. However, this improved parallelism also poses some computational burden for the effective exploitation of the network parameters. In Table XI, the computational efficiency of the different networks is summarized, where the performances of different variants of CovSegNet are reported based on the number of levels (L) and stages (S). For the analysis with 2D data, it is noticeable that the number of parameters of CovSegNet2D is considerably lower compared to other networks while providing a large improvement of performance. For example, CovSegNet-v2 provides a 94.8% reduction in parameter count relative to the Unet architecture, while providing 1.43× higher inference speed with a 3.5% higher dice score. With increasing levels, more precise estimation is achievable at the cost of speed and memory consumption. Moreover, the GPU memory usage for training with a batch size of 1 is summarized for the different networks. A similar observation can be made for the 3D analysis with CovSegNet3D. It should be noticed that CovSegNet-Hybrid provides the best achievable dice score (94.1%) while having 0.09× the parameters of Unet3D with a 0.08 s reduction in inference time. This significant reduction in parameter count together with the highest obtained performance is mainly achieved by the joint integration of efficient 2D processing with effective inter-slice contextual information exploration using a lighter 3D variant of CovSegNet. Therefore, this hybrid scheme provides considerable advantages over other existing 3D variants in terms of parameters and dice scores, with comparable processing speed.

In summary, numerous architectural renovations assist in achieving state-of-the-art performance on COVID lesion segmentation. The horizontal and vertical expansion mechanisms provide the opportunity to incorporate both more detailed features and more generalized features, which improves the feature quality considerably and is particularly effective in distinguishing multi-class, scattered COVID lesions with widely varied shapes. Moreover, the improved gradient flow throughout the network, achieved with the introduction of the multi-scale fusion and scale transition modules, greatly reduces the contextual information loss in the generalization process and ensures the best optimization of all network parameters, which particularly contributes to recovering and distinguishing the blurry, diffused edges of COVID lesions as well as very minute instances of abnormalities. Furthermore, the integration of the hybrid 2D-3D networking scheme exploits both the intra-slice and inter-slice contextual information without increasing the computational burden, resulting in more precise, finer segmentation performance, mostly in challenging conditions.
Although consistent performances have been achieved on both datasets for COVID lesion segmentation, this study should be carried out on larger datasets consisting of wider variations of subjects. However, under the current conditions of the pandemic, it is difficult to gather a considerably larger amount of data. The proposed study will be extended with the incorporation of diversified datasets, including patient-based studies considering age, sex, health conditions, and geographical locations of the patients. Due to the novel characteristics of COVID infections, it is difficult to predict the risk and vulnerability among diverse subjects, which could be effective for reducing the spread and for better prevention. An in-depth, closer, patient-specific study should be carried out for a better understanding of the nature of the infection. Moreover, generative adversarial network-based optimization can be carried out to generate more realistic synthetic data to overcome the limitations of available data. Additionally, this scheme is expected to be extended to incorporate automated segmentation-classification joint optimization along with a severity prediction scheme for COVID infections.

In this study, an automated scheme is proposed with an efficient neural network architecture (CovSegNet) for very precise lung lesion segmentation of COVID CT scans, providing outstanding performances with an 8.4% average improvement of dice score over two datasets. The introduced scale transition operations are found to be very effective for replenishing contextual information loss through the repeated integration of generated multi-scale features in both the upscaling and downscaling operations. It is found that the horizontal expansion mechanism with multi-stage encoder-decoder modules assists in further improvements by gathering more multi-scale contextual information when coupled with the traditional vertical expansion mechanism. Moreover, the multi-scale fusion module with a pyramid fusion scheme not only substantially reduces the semantic gaps between subsequent encoder-decoder modules but also introduces parallel inter-linking among multi-scale features that greatly mitigates the vanishing gradient issues for better optimization. Furthermore, the two-phase optimization scheme with hybrid 2D-3D processing provides considerable improvement over traditional single-domain approaches by introducing more contextual information to gather finer details. It is shown that the proposed scheme is capable of segmenting infected regions along with multi-class COVID-19 lesions with unprecedented precision, even in challenging conditions with blurred, diffused, and scattered edges. Moreover, it is found that the proposed network is not only effective in COVID lesion segmentation but also provides state-of-the-art performance on a non-clinical, challenging, multi-class semantic segmentation task, which proves the wide applicability of the proposed scheme. Therefore, the proposed scheme can easily be optimized for numerous applications as an effective alternative to other state-of-the-art approaches.
References
[1] The epidemiological characteristics of an outbreak of 2019 novel coronavirus diseases (COVID-19)-China, 2020.
[2] A novel coronavirus outbreak of global health concern.
[3] Clinical characteristics of coronavirus disease 2019 in China.
[4] Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases.
[5] Diagnostic performance of CT and reverse transcriptase-polymerase chain reaction for coronavirus disease 2019: a meta-analysis.
[6] Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT.
[7] Artificial intelligence (AI) applications for COVID-19 pandemic.
[8] Use of chest CT in combination with negative RT-PCR assay for the 2019 novel coronavirus but high clinical suspicion.
[9] CovXNet: A multi-dilation convolutional neural network for automatic COVID-19 and other pneumonia detection from chest X-ray images with transferable multi-receptive feature optimization.
[10] Deep learning COVID-19 features on CXR using limited training data sets.
[11] Diagnosis of coronavirus disease 2019 (COVID-19) with structured latent multi-view representation learning.
[12] Inf-Net: Automatic COVID-19 lung infection segmentation from CT images.
[13] COVID TV-UNet: Segmenting COVID-19 chest CT images using connectivity imposed U-Net.
[14] An automatic COVID-19 CT segmentation network using spatial and channel attention mechanism.
[15] COVID-19 chest CT image segmentation: a deep convolutional neural network solution.
[16] MiniSeg: An extremely minimum network for efficient COVID-19 segmentation.
[17] Towards efficient COVID-19 CT annotation: A benchmark for lung and infection segmentation.
[18] Automated chest CT image segmentation of COVID-19 lung infection based on 3D U-Net.
[19] V-Net: Fully convolutional neural networks for volumetric medical image segmentation.
[20] Block level skip connections across cascaded V-Net for multi-organ segmentation.
[21] Fully convolutional networks for semantic segmentation.
[22] U-Net: Convolutional networks for biomedical image segmentation.
[23] ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data.
[24] DU-Net: Convolutional network for the detection of arterial calcifications in mammograms.
[25] RIC-Unet: An improved neural network based on Unet for nuclei segmentation in histology images.
[26] Dense dilated network with probability regularized walk for vessel detection.
[27] MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation.
[28] UNet++: Redesigning skip connections to exploit multiscale features in image segmentation.
[29] Attention U-Net: Learning where to look for the pancreas.
[30] CPFNet: Context pyramid fusion network for medical image segmentation.
[31] Tversky loss function for image segmentation using 3D fully convolutional deep networks.
[32] A novel focal Tversky loss function with improved attention U-Net for lesion segmentation.
[33] COVID-19 CT lung and infection segmentation dataset.
[34] COVID-19 CT segmentation dataset.
[35] Semantic drone dataset.