title: Multi-Task Classification and Segmentation for Explicable Capsule Endoscopy Diagnostics
authors: Kong, Zishang; He, Min; Luo, Qianjiang; Huang, Xiansong; Wei, Pengxu; Cheng, Yalu; Chen, Luyang; Liang, Yongsheng; Lu, Yanchang; Li, Xi; Chen, Jie
date: 2021-08-19
journal: Front Mol Biosci
DOI: 10.3389/fmolb.2021.614277

Capsule endoscopy is a leading diagnostic tool for small bowel lesions, but it faces challenges such as time-consuming interpretation and the harsh optical environment inside the small intestine. Specialists unavoidably spend a great deal of time searching for high-clearness-degree images to support accurate diagnostics. However, current clearness degree classification methods are based on either traditional attributes or an unexplainable deep neural network. In this paper, we propose a multi-task framework, called the multi-task classification and segmentation network (MTCSN), to achieve joint learning of clearness degree (CD) classification and tissue semantic segmentation (TSS) for the first time. In the MTCSN, the CD helps to generate better-refined TSS, while the TSS provides an explicable semantic map to better classify the CD. In addition, we present a new benchmark, named the Capsule-Endoscopy Crohn's Disease dataset, which introduces the challenges faced in the real world, including motion blur, excreta occlusion, reflection, and the various complex alimentary scenes that are widely acknowledged in endoscopy examination. Extensive experiments and ablation studies report significant performance gains of the MTCSN over state-of-the-art methods.

Deep learning and convolutional neural networks have recently shown outstanding performance in visual recognition and semantic understanding [Krizhevsky et al. (2012); Simonyan and Zisserman (2014); He et al. (2016); Huang et al. (2017); Long et al. (2015)]. The representation learning capacity of convolutional neural networks has also been successfully applied to medical image analysis and recognition in gastrointestinal endoscopy [Ronneberger et al. (2015); Le et al. (2019); Hwang et al. (2020)]. Crohn's disease [Podolsky (1991); Baumgart and Sandborn (2012)] is an inflammatory bowel disease (IBD), and its signs and symptoms range from mild to severe. It usually develops gradually but sometimes comes on suddenly, without warning. While there is no known cure for Crohn's disease, early detection and preventive therapies can greatly reduce its signs and symptoms and even bring about long-term remission. Because the small intestine and colon can be affected by Crohn's disease, capsule endoscopy is the gold standard for examining the midsection of the gastrointestinal tract. A major challenge in capsule endoscopy is that the procedure outputs a video several hours long that suffers from complicated gastrointestinal conditions, such as excreta occlusion, motion blur, and light scattering, forcing professional gastroenterologists to spend considerable time locating lesions [Min et al. (2019)]. Although several software enhancements, including Quick-View (Medtronic, Minneapolis, MN, United States) and Express View (CapsoVision, Inc., Saratoga, CA, United States), attempt to overcome these drawbacks, their performance is insufficient for use in clinical practice because of their limited accuracy and unexplainable output [Hwang et al. (2020)].
To assist gastroenterologists in locating Crohn's lesions explicably and precisely, we introduce the Capsule-Endoscopy Crohn's Disease dataset, a large-scale Crohn's gastrointestinal image dataset for clearness degree (CD) classification and tissue semantic segmentation (TSS) that will greatly help doctors understand the classification results. The proposed dataset covers 467 images from real-world scenarios. Meanwhile, we propose a multi-task learning (MTL) scheme that combines pixel-level segmentation and global image-level category classification. The proposed architecture is based on a fully convolutional image-to-image translation scheme, which enables efficient feature sharing between image regions and fast prediction. A novel cross fusion module is proposed to mitigate the gap between the different foci of the classification and segmentation tasks. We evaluate our model on the proposed dataset on clearness degree classification and eight-class tissue segmentation. We show that, through joint training, the model is able to learn shared representations that are beneficial for both tasks. Our method can be seen as a generalization of approaches that rely on detection annotations to pre-train a deep model for classification purposes. We show that our joint training of classification and segmentation enables better cooperation between the tasks.

Since AlexNet [Krizhevsky et al. (2012)], deep convolutional neural networks have dominated image classification. With this trend, research has shifted from engineering handcrafted features to engineering network architectures. VGG-Net [Simonyan and Zisserman (2014)] proposed a modular network design strategy, stacking the same type of network block repeatedly, which simplifies the workflow of network design and transfer learning for downstream applications. Building on the success of this pioneering work, He et al. (2016) introduced identity skip connections, which alleviate the vanishing-gradient problem in deep neural networks and allow networks to learn deeper feature representations. DenseNet [Huang et al. (2017)] showed that reformulating the connections between network layers further improves the learning and representational properties of deep networks; it has become one of the most successful CNN architectures and has been adopted in various computer vision applications.

With the great success of deep learning in high-level vision tasks, numerous semantic segmentation approaches [Long et al. (2015); Ronneberger et al. (2015); Zhao et al. (2017); Chen et al. (2018)] have benefited from CNNs. Long et al. (2015) first introduced fully convolutional networks (FCNs) for semantic segmentation, which conduct pixel-wise classification in an end-to-end fashion. U-Net, introduced by Ronneberger et al. (2015), concatenates up-sampled feature maps with feature maps skipped from the encoder. Owing to their precise pixel-level representations, deep learning-based semantic segmentation methods have been widely adopted for lesion and tumor segmentation, helping doctors obtain an accurate and explicable diagnosis. Li et al. (2018) proposed H-DenseUNet for liver and liver tumor segmentation. Zhou et al. (2019) proposed UNet++, a modification of U-Net that has been applied to a variety of medical datasets for segmentation tasks.

Multi-task learning [MTL, Caruana (1997)] is often applied when related tasks can be performed simultaneously, and many MTL methods have been proposed along this line [Jalali et al. (2010)].
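As background for the discussion that follows, the sketch below illustrates the common hard-parameter-sharing MTL pattern, in which one shared encoder feeds task-specific heads. It is a minimal, illustrative sketch only: the ResNet-18 backbone, layer sizes, and head designs are our assumptions and do not describe the MTCSN itself (the three clearness classes and eight tissue classes follow the dataset described later).

```python
# Minimal sketch of hard-parameter-sharing multi-task learning:
# one shared encoder, one image-level head, one pixel-level head.
# Illustrative assumptions: ResNet-18 encoder, simple heads.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class SharedBackboneMTL(nn.Module):
    def __init__(self, num_cls_classes=3, num_seg_classes=8):
        super().__init__()
        resnet = models.resnet18(weights=None)  # torchvision >= 0.13
        # Shared encoder: drop the final pooling and fc layers -> [B, 512, H/32, W/32].
        self.encoder = nn.Sequential(*list(resnet.children())[:-2])
        # Image-level classification head.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, num_cls_classes)
        )
        # Pixel-level segmentation head (1x1 conv, then upsample to input size).
        self.seg_head = nn.Conv2d(512, num_seg_classes, kernel_size=1)

    def forward(self, x):
        feats = self.encoder(x)
        cls_logits = self.cls_head(feats)
        seg_logits = F.interpolate(
            self.seg_head(feats), size=x.shape[-2:], mode="bilinear", align_corners=False
        )
        return cls_logits, seg_logits
```

In this pattern, a forward pass on a batch of 240 × 240 images returns a [B, 3] image-level output and a [B, 8, 240, 240] pixel-level output from the same shared features.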
In the recent COVID-19 pandemic, multi-task learning was applied to CT imaging analysis by Amyar et al. (2020). MTL schemes are typically based on the assumption that the classification and segmentation tasks are equally difficult. In real scenes, however, and especially inside the small intestine, classification is much simpler than segmentation. Some pioneering works have proposed weighted loss designs [Kendall et al. (2018)] and attention modules [Liu et al. (2019)] to balance different types of tasks. As shown in Figure 1, the evolution of MTL tends toward more precise control over the fusion between different tasks. We dive into this problem and introduce our solution to it.

To assist gastroenterologists in capsule endoscopy examination, both precision and interpretability are necessary. Following previous methods [Le et al. (2019)], we model precision and interpretability as classification and segmentation tasks, respectively. Our proposed multi-task framework shows that joint training of classification and segmentation enables better cooperation between the tasks. In the following, we first describe the overall framework of our proposed multi-task classification and segmentation network (MTCSN), shown in Figure 2. Specifically, a backbone is adopted to extract representations of the input image, which are further used to generate the class label and the segmentation map. Next, we introduce the cross fusion module, the key element of the MTCSN, which alleviates the misalignment between classification and segmentation. Finally, we dive into the inherent problem in the multi-task learning training strategy and introduce our objective function.

As shown in Figure 2, the proposed multi-task classification and segmentation network first utilizes a backbone to extract local features. The backbones we adopt include ResNet and DenseNet of different depths. Following feature extraction, we design two task branches: a classification branch that measures the image clearness degree, and hence its usability, and a segmentation branch that produces explicable tissue segmentation to help doctors understand the whole image. The classification branch is mainly constructed from fully connected layers, while the segmentation branch follows an image-to-image scheme that computes features for each region efficiently while sharing computation across all regions of the image in a single forward pass. In addition, the network can still process input images of high spatial resolution.

Our network focuses on two tasks, classification and segmentation. In the prevailing pattern of MTL, two branches are trained separately for these tasks on top of a shared backbone for feature extraction (Figure 1). Because the classification and segmentation tasks place different emphases on feature extraction, performance degeneration is foreseeable and needs to be resolved. Instead of designing two parallel backbones [Misra et al. (2016)], we set our sights on efficiently exploiting the interaction between the two task branches. We introduce a novel non-linear cross fusion module that learns the extent of sharing, as illustrated in Figure 3. After global average pooling, the classification branch feature has shape [C_1, 1, 1], where C_1 denotes the number of channels, while the segmentation branch feature has shape [C_2, H, W], and C_2 is usually not the same as C_1. First, we reshape the classification feature to the same shape as the segmentation feature.
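The following minimal PyTorch sketch shows one possible implementation of such a cross fusion module, covering both the reshaping step just described and the shared transformation M introduced next. The 1×1 convolutions, the residual additive fusion rule, and the final pooling of the classification path are assumptions made here for illustration; the paper itself only specifies a shared non-linear transformation M implemented as convolution parameters.

```python
# Minimal sketch of a cross fusion block between a pooled classification feature
# and a spatial segmentation feature. The residual additive fusion and the use of
# 1x1 convolutions to realize M / M^T are illustrative assumptions.
import torch
import torch.nn as nn

class CrossFusion(nn.Module):
    def __init__(self, c_cls, c_seg):
        super().__init__()
        # Reshape step: project the pooled [C_cls, 1, 1] feature to C_seg channels
        # so it can be broadcast over the [C_seg, H, W] segmentation feature.
        self.project = nn.Conv2d(c_cls, c_seg, kernel_size=1)
        # Shared transformation M (segmentation-channel space) and its counterpart
        # M^T mapping back to the classification-channel space.
        self.m = nn.Conv2d(c_seg, c_seg, kernel_size=1)
        self.m_t = nn.Conv2d(c_seg, c_cls, kernel_size=1)
        self.act = nn.ReLU(inplace=True)
        self.pool = nn.AdaptiveAvgPool2d(1)  # GAP, preferred over GMP in the ablation

    def forward(self, x_cls, x_seg):
        # x_cls: [B, C_cls, 1, 1] (after global average pooling)
        # x_seg: [B, C_seg, H, W]
        h, w = x_seg.shape[-2:]
        x_cls_map = self.project(x_cls).expand(-1, -1, h, w)
        fused_seg = x_seg + self.act(self.m(x_cls_map))            # enriched segmentation feature
        fused_cls = x_cls + self.pool(self.act(self.m_t(x_seg)))   # enriched classification feature
        return fused_cls, fused_seg
```

In this sketch, larger weights in M let more of the other branch's representation flow into each task, while weights near zero leave the residual path essentially task-specific, matching the behavior described in the formulation that follows.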
Then, we utilize a shared-parameter non-linear transformation matrix M to learn the joint representation and the extent of fusion automatically. In our experimental setting, M is formulated as the parameter matrix of a convolution layer. More precisely, the process of the cross fusion module can be formulated as

X̂_cls = X_cls + Mᵀ(X_seg),   X̂_seg = X_seg + M(X_cls),

where X_cls and X_seg denote the classification and segmentation features input to cross fusion, M denotes the non-linear transformation matrix, and Mᵀ applies the shared parameters with a different dimension order. The outputs of cross fusion are X̂_cls and X̂_seg. The network can automatically make certain layers task-specific by assigning the matrix a lower weight, or choose a more shared representation by assigning it a higher value.

In general multi-task learning with K tasks, input X, and task-specific labels Y_i, i = 1, 2, ..., K, the loss function is defined as

L_all = Σ_{i=1}^{K} λ_i L_i,

where, with task weightings λ_i, L_all is the linear combination of the task-specific losses L_i. We study the effect of different weighting methods on our multi-task learning approach. The overall objective function of the MTCSN is composed of two parts:

• For the classification task, we apply a class-wise cross-entropy loss to each predicted class label from a softmax classifier, together with a consistency term on the classification features before and after cross fusion:

L_cls = Φ_CE(X′_cls, Y_cls) + α Φ_MSE(X′_i, X_i),

where X′_cls is the predicted classification category, Y_cls is the ground-truth clearness label, X′_i and X_i are the features before and after cross fusion in the classification branch, and Φ_CE and Φ_MSE are the cross-entropy and MSE loss functions, respectively. We empirically set the weight α = 0.1 in network training.

• For the segmentation task, we apply a pixel-wise cross-entropy loss to each predicted class label from a softmax classifier:

L_seg = Φ_CE(X′_seg, Y_seg),

where X′_seg represents the predicted segmentation maps and Y_seg the ground-truth segmentation maps.

Though Crohn's disease can be diagnosed reliably using capsule endoscopy, no open-source image dataset has been available for further study so far. We therefore build the first Capsule-Endoscopy Crohn's Disease dataset, which includes 15 patients and 164 video clips. The dataset will improve the efficiency and accuracy of gastrointestinal endoscopy and help gain a better understanding of this disease.

We divide the annotation process into three stages, and the gastroenterologists are divided into three teams corresponding to these stages, as shown in Figure 4. In the first stage, gastroenterologists collect the source capsule endoscopy videos from the database center of the hospital; all 15 patients' capsule endoscopy videos are recorded by a MOMO Wireless Capsule Endoscopy JS-ME-I device. Then, we invite several gastroenterologists to pick out the video clips of interest from the full examination videos, which normally last 3-4 h. Finally, we take screenshots from these video clips at a fixed frame rate to obtain images for the follow-up stages.

In the second stage, two gastroenterologists label the screenshots at the image level and the pixel level. They first classify each image into one of three clearness degrees according to an overall adequacy assessment [Brotz et al. (2009)] and then segment the scene into the given categories. In the meantime, each gastroenterologist's annotations are independently re-annotated by the other doctor without seeing the first labels, and any divergence is handed over to the chief in the third stage to decide. In the third stage, all revised images are submitted to the chief expert gastroenterologist for final checking. All data are anonymized for privacy protection.
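To make the fixed-frame-rate screenshot step in stage one concrete, the following minimal OpenCV sketch samples every N-th frame from a video clip; the file paths, sampling interval, and output naming are hypothetical and not taken from the paper.

```python
# Minimal sketch: sample frames from a capsule-endoscopy video clip at a fixed
# rate (every_n_frames) and save them as images for annotation.
# Paths and the sampling interval are hypothetical.
import os
import cv2

def extract_frames(video_path, out_dir, every_n_frames=30):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    frame_idx, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:                      # end of clip
            break
        if frame_idx % every_n_frames == 0:
            out_path = os.path.join(out_dir, f"frame_{saved:06d}.png")
            cv2.imwrite(out_path, frame)
            saved += 1
        frame_idx += 1
    cap.release()
    return saved

# Example (hypothetical clip name):
# extract_frames("clip_001.avi", "frames/clip_001", every_n_frames=30)
```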
Two tasks are defined on our dataset:
1) Task 1: clearness degree classification.
2) Task 2: tissue segmentation for a precise understanding of the image.

The statistics of the dataset are as follows. The total number of annotated images is 467, and we split the dataset into training, validation, and testing sets strictly by stratified sampling over the clearness categories. There are 372 images in the training set, 47 images in the validation set, and 47 images in the testing set. The statistics of the basic attributes of our proposed dataset are shown in Tables 1 and 2.

The classification results are evaluated by accuracy, precision, recall, and F1 score. A classic classification problem has four possible outcomes: true positive (TP), false positive (FP), false negative (FN), and true negative (TN). Accuracy is the fraction of predictions the model gets right, precision measures the proportion of positive identifications that are actually correct, and recall measures the proportion of actual positives that are identified correctly. F1 is an overall measure of a model's accuracy that combines precision and recall:

Accuracy = (TP + TN) / (TP + TN + FP + FN), Precision = TP / (TP + FP), Recall = TP / (TP + FN), F1 = 2 · Precision · Recall / (Precision + Recall).

The segmentation results are evaluated using the Jaccard index, also known as Intersection-over-Union (IoU). The IoU measures the overlap between the automatically segmented region A and the manually segmented region B relative to their union, IoU = |A ∩ B| / |A ∪ B|. The value of IoU ranges from 0 to 1, with a higher value implying a better match between the two regions. Pixel-wise accuracy is also used for evaluation.

In this section, we first evaluate several baselines on our Capsule-Endoscopy Crohn's Disease dataset on the classification and segmentation tasks, respectively. Then, we evaluate our proposed method on the two types of tasks. Our method is implemented in PyTorch, and the model is trained on an NVIDIA RTX 2080 Ti GPU with 11 GB of memory. The batch size is set to 8, and all images are resized to 240 × 240 to speed up training.

• Single task, classification. We evaluate two different types of models on our classification problem. Table 3 shows that existing CNN-based classification models already achieve acceptable accuracy, precision, and recall scores. Given the scale of the dataset and the shape of the input images, a simpler and shallower classification model is preferred.

• Single task, segmentation. We evaluate four different models on our segmentation problem. Under the same backbone, Table 4 shows that state-of-the-art segmentation models can achieve competitive results on the CECD dataset. However, as shown in Figure 5, the predictions of DeepLabv3, which performs best among them, still leave considerable room for improvement.

We employ the method described in Section 3.1 and compare it with two widely used multi-task learning methods; the results are shown in Table 5. In addition, we discuss some structural details of the cross fusion module. Table 6 shows that global average pooling (GAP) in the cross fusion module performs better than global max pooling (GMP); the reason is that global max pooling may introduce outliers by emphasizing only the maximum of the cross features. Table 6 also shows that our proposed multi-task classification and segmentation network, described in Section 3, achieves the highest performance on both tasks. Because of the imbalance between the two tasks, simply applying a multi-task framework improves segmentation capacity at the cost of classification performance. Our proposed cross fusion module elegantly fixes this imbalance.
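For reference, the classification and segmentation metrics reported in the tables above can be computed as in the following minimal NumPy sketch; the function names and the per-class treatment are illustrative choices, not code from the paper.

```python
# Minimal sketch of the evaluation metrics: per-class F1 (from TP/FP/FN) and
# Jaccard index / IoU. Labels are assumed to be integer class indices.
import numpy as np

def f1_score(y_true, y_pred, positive_class):
    tp = np.sum((y_pred == positive_class) & (y_true == positive_class))
    fp = np.sum((y_pred == positive_class) & (y_true != positive_class))
    fn = np.sum((y_pred != positive_class) & (y_true == positive_class))
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def iou(mask_true, mask_pred, cls):
    inter = np.sum((mask_true == cls) & (mask_pred == cls))
    union = np.sum((mask_true == cls) | (mask_pred == cls))
    return inter / union if union > 0 else 0.0
```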
The qualitative segmentation results can also be seen in Figure 5, where the proposed method achieves the best performance. To the best of our knowledge, no one has previously attempted to utilize pixel-level segmentation to assist image-level clearness degree classification and provide explicable visual results for specialists in clinical practice. In practice, our proposed method runs inference on every frame of the entire capsule endoscopy output video. High-clearness frames, and frames mostly occupied by tissue or lesions, are marked by our framework. In fact, the marked frames account for only 10% of all frames, which significantly reduces the heavy workload of gastroenterologists. Our pixel-level semantic segmentation results also provide an explicable reference with which gastroenterologists can judge the confidence of the output.

In this work, we propose a multi-task learning framework named the multi-task classification and segmentation network (MTCSN). This framework combines tissue semantic segmentation and clearness degree classification for capsule endoscopy diagnosis. Our MTCSN achieves high performance on both the clearness classification task and explicable tissue segmentation, offering gastroenterologists a visualization for understanding the whole image. With explicable tissue segmentation, our framework significantly reduces the workload of gastroenterologists and takes a step forward for deep learning-based methods assisting gastroenterologists in clinical practice.

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ZK, MH, QL, and YC conceived and planned the experiments. ZK, YC, and LC carried out the experiments. XH, PW, and YLi contributed to sample preparation. YLu, XL, and JC contributed to the interpretation of the results. ZK took the lead in writing the manuscript. All authors provided critical feedback and helped shape the research, analysis, and manuscript.

References:
Multi-Task Deep Learning Based CT Imaging Analysis for COVID-19 Pneumonia: Classification and Segmentation.
Crohn's Disease. The Lancet.
A Validation Study of 3 Grading Systems to Evaluate Small-Bowel Cleansing for Wireless Capsule Endoscopy: a Quantitative Index, a Qualitative Evaluation, and an Overall Adequacy Assessment.
Multitask Learning. Machine Learning.
Encoder-Decoder With Atrous Separable Convolution for Semantic Image Segmentation.
Fine-Grained Recognition in the Wild: A Multi-Task Domain Adaptation Approach.
Deep Residual Learning for Image Recognition.
Densely Connected Convolutional Networks.
An Improved Classification and Localization Approach to Small Bowel Capsule Endoscopy Using Convolutional Neural Network.
A Dirty Model for Multi-Task Learning.
Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics.
ImageNet Classification With Deep Convolutional Neural Networks.
Multitask Classification and Segmentation for Cancer Diagnosis in Mammography.
H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation from CT Volumes.
End-to-End Multi-Task Learning with Attention.
Fully Convolutional Networks for Semantic Segmentation.
Overview of Deep Learning in Gastrointestinal Endoscopy.
Cross-Stitch Networks for Multi-Task Learning.
Inflammatory Bowel Disease.
U-Net: Convolutional Networks for Biomedical Image Segmentation.
Very Deep Convolutional Networks for Large-Scale Image Recognition.
Many Task Learning With Task Routing.
A Novel Multi-Task Deep Learning Model for Skin Lesion Segmentation and Classification.
Pyramid Scene Parsing Network.
UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation.

Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.