title: Distribution-aware Margin Calibration for Medical Image Segmentation
authors: Li, Zhibin; Yu, Litao; Zhang, Jian
date: 2020-11-03

The Jaccard index, also known as Intersection-over-Union (IoU score), is one of the most critical evaluation metrics in medical image segmentation. However, directly optimizing the mean IoU (mIoU) score over multiple objective classes is an open problem. Although some algorithms have been proposed to optimize its surrogates, there is no guarantee provided for their generalization ability. In this paper, we present a novel data-distribution-aware margin calibration method for a better generalization of the mIoU over the whole data distribution, underpinned by a rigid lower bound. This scheme ensures a better segmentation performance in terms of IoU scores in practice. We evaluate the effectiveness of the proposed margin calibration method on two medical image segmentation datasets, showing substantial improvements of IoU scores over other learning schemes using deep segmentation models.

Medical image segmentation is a critical yet challenging learning problem in medical data analysis. The task is to build a computational model that accurately locates and identifies regions of interest, such as lesions and instruments in medical images, which can be used for automatic medical instrument control and related disease diagnosis, followed by proper treatments. Specifically, during the recent global COVID-19 pandemic, the segmentation of infection lesions from Computed Tomography (CT) scans is very important for quantitative measurement of disease progression, accurate diagnosis and follow-up treatment.

Recently, the development of deep convolutional neural networks (CNNs) has led to remarkable progress in image segmentation, due to their powerful feature representation ability to describe local visual properties. For deep learning-based image segmentation, encoder-decoder convolutional segmentation models, such as U-net (Ronneberger, Fischer, and Brox 2015) and its variants (Wang et al. 2020; Zhou et al. 2019), can well handle the visual-semantic consistencies and have achieved very promising results.

To train a reliable deep learning model for medical image segmentation, the learning objective function is one of the most critical ingredients. The most straightforward way is to treat image segmentation as a dense classification task, which examines each pixel in images individually, comparing the class predictions to the one-hot encoded target vector. Thus, categorical cross-entropy becomes the most intuitive loss function. The minimization of the overall cross-entropy is directly related to the maximization of pixel accuracy. In the training process of deep segmentation models, the cross-entropy loss averages over all pixels in images, which essentially assigns equal importance to each pixel in an image batch. This can be problematic in medical image segmentation if the actual classes have an imbalanced representation in the image corpus, as training can be dominated by the most prevalent class, e.g., the small foreground regions of interest are submerged by large background areas.
Although a cost-sensitive reweighting scheme (Wong et al. 2018) can be applied to alleviate the data imbalance and emphasize the "important" pixels, it is unclear how to determine the weights for the best IoU scores. Furthermore, the measure of cross-entropy on the validation set is often a poor indicator of model quality, because minimizing the pixel-wise loss cannot guarantee that the model obtains a higher Jaccard index (or IoU score, Intersection-over-Union) or dice coefficient, which are more commonly used in image segmentation and can better sketch the contours of the regions of interest. To deal with this problem, several loss functions have recently been proposed, e.g., the IoU loss (Rahman and Wang 2016), the dice loss (Eelbode et al. 2020) and the Focal-Tversky loss (Abraham and Khan 2019). However, these loss functions mainly aim to minimize the empirical IoU on the training dataset, which usually leads to over-fitting. The generalized performance, i.e., the expected IoU on the unknown test dataset, has not been investigated and cannot be guaranteed.

A "better" machine learning model should feature a better generalized performance, i.e., the performance measured on the underlying data distribution from which the testing instances are sampled. Clearly, there is a gap between the empirical performance on the training dataset and the generalized performance. This gap is commonly called the error bound. Thus, optimizing the generalized performance can be achieved through (1) optimizing the empirical performance approximated by a surrogate loss associated with the performance metric, e.g., IoU; and (2) controlling the error bound through regularization terms such as the l2-norm or a weight-decay scheme. As such, in medical image segmentation tasks, learning a model towards the optimal Jaccard index, or mIoU, should also consider these two factors. However, to the best of our knowledge, no proper method has been designed for controlling the error bound directly related to mIoU optimization, which is rather critical for a better generalization of the model.

The error bound regarding accuracy can be controlled by the margins among multiple classes, which is well known for its use in Support Vector Machines (SVMs) (Boser, Guyon, and Vapnik 1992). For data-imbalanced learning problems, uneven margins can be applied to calibrate the importance of specific classes (Li et al. 2002; Khan et al. 2019; Cao et al. 2019). In medical image segmentation, class imbalance largely exists in various datasets, which hinders the maximization of the mIoU. Although Li et al. proposed a margin tuning method (Li, Kamnitsas, and Glocker 2019), their method is limited to binary segmentation and the parameter setting is empirical, relying highly on manual trials on different datasets. The power of "uneven" margins inspires us to develop a proper margin calibration scheme to control the error bound for performance improvement in medical image segmentation.

In this paper, we propose a novel distribution-aware margin calibration method to optimize the mIoU in medical image segmentation. The margins across multiple classes are pre-computed based on the label distribution, which can well calibrate the distance between foreground and background pixels in the loss computation.
Our method has the following three compelling advantages over other learning objectives: (1) it provides a lower bound for the mIoU over the data distribution, which means the model has a guaranteed generalization ability; (2) the margin-offsets can be efficiently computed and are readily pluggable into deep segmentation models; (3) the proposed learning objective is directly related to IoU scores, i.e., it is consistent with the evaluation metric. Due to its high discriminative power and stability, it is worth using the proposed margin calibration method as a learning objective in challenging medical image segmentation tasks. We conduct comprehensive experiments on two medical image segmentation datasets, which indicate that our method is able to achieve a considerable improvement compared to other training objectives.

Deep learning-based image segmentation models have achieved significant progress on large-scale benchmark datasets (Zhou et al. 2017; Cordts et al. 2016) in recent years. The deep segmentation methods can be generally divided into two streams: fully-convolutional networks (FCNs) and encoder-decoder structures. FCNs (Long, Shelhamer, and Darrell 2015) are mainly designed for general segmentation tasks, such as scene parsing and instance segmentation. Most FCNs are based on a stem network (e.g., the Inception network (Szegedy et al. 2017)) pre-trained on a large-scale dataset, and dilated convolution is used to enlarge the receptive field for more contextual information. In the encoder-decoder structure (Badrinarayanan, Kendall, and Cipolla 2017; Milletari, Navab, and Ahmadi 2016), the encoder maps the original images into low-resolution feature representations, while the decoder mainly restores the spatial information with skip-connections. Such networks are usually lightweight with fewer parameters, and have been extensively used for medical image segmentation. Combining the encoder-decoder structure and dilated convolution can effectively boost the pixel-wise prediction accuracy (Chen et al. 2017), but is extremely computationally demanding.

As a dense prediction task, medical image segmentation aims to train lightweight models on comparably small datasets, to accurately sketch the contours of regions of interest, such as tumours and body organs. Thus, U-net (Ronneberger, Fischer, and Brox 2015) models and their variants are the best choices. Since the commonly used cross-entropy in classification cannot well reflect the segmentation quality in medical images, a better optimization objective should be carefully designed. A "better" loss function should be consistent with the evaluation metrics and discriminative with respect to the target class labels. In recent years, various loss functions have been proposed specifically for medical image segmentation, and most of them can be used in a plug-and-play way: the distribution-based loss functions (e.g., the weighted cross-entropy loss (Ronneberger, Fischer, and Brox 2015) and the focal loss (Lin et al. 2017)), the region-based loss functions (e.g., the IoU loss (Rahman and Wang 2016), the dice loss (Eelbode et al. 2020) and the Tversky loss (Salehi, Erdogmus, and Gholipour 2017)) and the boundary-based loss functions (e.g., the Hausdorff distance loss (Karimi and Salcudean 2019) and the boundary loss (Kervadec et al. 2019)). These loss functions can also be jointly used in model optimization (Abraham and Khan 2019).
When applying either distribution-based or region-based loss functions, there exists a problem that the continuous class probability of each pixel is only indirectly related to IoU scores. To deal with this problem, Berman et al. proposed to use submodular measures to readily optimize the segmentation model in the continuous setting (Berman, Rannen Triki, and Blaschko 2018). However, the above loss functions specifically designed for image segmentation mainly aim to minimize the empirical risk in the model training procedure, without considering the generalization error over the underlying data distribution. In our work, we design a new margin calibration scheme to overcome this difficulty from the perspective of the data-distribution-related error bound, which provides a better learning objective for medical image segmentation compared to other learning metrics, both theoretically and practically.

Image segmentation can be considered as a dense prediction learning task. Consider an input space X ⊆ R^m and a target space Y = {1, ..., c}^m, where m is the number of image pixels. The function θ ∈ Θ: X → R^{m×c} is a complex non-linear projection from raw images to scores for all pixels regarding all classes. In deep learning-based methods, Θ can be a deep learning model with trainable parameters. Given an image x ∈ X with a corresponding mask y ∈ Y, we denote the output score for the i-th pixel regarding the j-th foreground class by θ_ij(x), and the predicted label is given by ŷ_i = argmax_{j∈[c]} θ_ij(x). Then, given a vector of ground truth y and a predicted label vector ŷ, the empirical IoU for class k over an image of m pixels is defined as

IoU_{k,m}(θ) = Σ_{i=1}^{m} I(y_i = k ∧ ŷ_i = k) / Σ_{i=1}^{m} I(y_i = k ∨ ŷ_i = k),

where I(·) is an indicator function. It gives the ratio in [0, 1] of the intersection between the ground truth and the predicted mask over their union, with the convention that 0/0 = 1 (Berman, Rannen Triki, and Blaschko 2018).

Let p_{k0,m}(θ) be the empirical probability that a foreground class-k pixel is observed but is predicted as the background. Similarly, p_{0k,m}(θ) denotes the empirical probability that a pixel of the background class is observed but is predicted as foreground class k. We use p_{k,m} to denote the empirical probability that a class-k pixel is observed, i.e., p_{k,m} = (1/m) Σ_{i=1}^{m} I(y_i = k). The empirical IoU can then be written as

IoU_{k,m}(θ) = (p_{k,m} − p_{k0,m}(θ)) / (p_{k,m} + p_{0k,m}(θ)).   (2)

When there are c classes, the empirical mean IoU is mIoU_m(θ) = (1/c) Σ_{k=1}^{c} IoU_{k,m}(θ). In the evaluation of the segmentation performance, the IoU or mIoU is computed globally over an image dataset D, which contains n pixels in total. Replacing p_{k,m}, p_{k0,m}(θ) and p_{0k,m}(θ) in Eq. (2) with p_{k,n}, p_{k0,n}(θ) and p_{0k,n}(θ), respectively, we can get the IoU and mIoU on the dataset D.

We denote the output score of the i-th pixel in a dataset D regarding class j by s_ij(θ, D), and denote its label by y_i. We use s_ij to denote s_ij(θ, D) whenever there is no ambiguity. We assume that the images in the dataset D are independently and identically distributed (i.i.d.) according to some unknown distribution D over X × Y, and let D_Y denote the projection of D over Y. Note that we do not assume the pixels in an image are i.i.d. The IoU for class k over the data distribution is defined as

IoU_k(θ) = (p_k − p_k0(θ)) / (p_k + p_0k(θ)),

where p_k0(θ) is the probability that a class-k pixel is observed and predicted as the background class by θ, over the underlying data distribution D, p_0k(θ) is similarly defined, and p_k is the probability that a class-k pixel is observed.
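To make the quantities above concrete, the following NumPy sketch (ours, not the authors' code) computes the per-class empirical IoU in the p_k, p_k0, p_0k form of Eq. (2). It assumes label 0 is the background class and, for each class k, counts any prediction other than k when forming the false-negative term, which matches the indicator form of the IoU.

```python
import numpy as np

def per_class_iou(y_true, y_pred, num_classes):
    """Empirical per-class IoU following the p_k, p_k0, p_0k formulation (Eq. 2).

    y_true, y_pred: flat integer label arrays over the n pixels of a dataset,
    with 0 taken as the background class. Convention: 0/0 = 1.
    """
    ious = {}
    for k in range(1, num_classes):
        p_k = np.mean(y_true == k)                     # class-k pixels observed
        p_k0 = np.mean((y_true == k) & (y_pred != k))  # class-k pixels not predicted as k
        p_0k = np.mean((y_true != k) & (y_pred == k))  # other pixels predicted as k
        denom = p_k + p_0k
        ious[k] = (p_k - p_k0) / denom if denom > 0 else 1.0
    return ious

# Example with hypothetical masks; the mIoU is the mean of the per-class scores.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
print(per_class_iou(y_true, y_pred, num_classes=3))  # {1: 0.666..., 2: 0.5}
```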
We assume the empirical label distribution is an accurate estimation of the global label distribution D_Y, i.e., p_{k,n} = p_k for every class k. Similarly, the mIoU over the data distribution is defined as mIoU(θ) = (1/c) Σ_{k=1}^{c} IoU_k(θ). Ideally, a function θ should produce a high mIoU(θ) to ensure the performance of θ on any data samples. Unfortunately, the data distribution D is usually fixed but unknown. Consequently, we can only optimize the empirical mIoU so that, with a high probability, it leads to a high mIoU(θ). In the next section, we present our method to minimize the error bound between the empirical mIoU and mIoU(θ) with a high probability, so that optimizing the empirical mIoU also indicates a better mIoU(θ).

The mIoU is the average of the IoU scores over all classes, whereas in medical image segmentation tasks the label distributions are usually imbalanced. Treating all pixels equally in training can therefore lead to IoU scores biased towards the majority classes. An intuitive solution is to set different margins for the pixel samples belonging to different classes. Thus, we derive an optimal margin setting for a smaller error bound between the empirical mIoU, given by mIoU_n(θ), and the expected mIoU, given by mIoU(θ). Define the margin of the i-th pixel in the image dataset with regard to class k as

λ_ik = s_ik − max_{j≠k} s_ij.   (4)

Similarly, we can calculate the margins {λ_ij}_{j=1}^{c} for pixel i with regard to every class. If pixel i belongs to class k, then we would prefer a larger λ_ik and a smaller λ_ij, ∀j ≠ k, for a high confidence of prediction on the training dataset. We then combine the margin λ_ij with a ρ-margin loss function φ_ρ(·), defined in (Mohri, Rostamizadeh, and Talwalkar 2018, Definition 5.5), to build the relationship between the IoU score and the margin λ_ij. The ρ-margin loss is defined as

φ_ρ(λ) = min(1, max(0, 1 − λ/ρ)),

which encourages the margin λ to be larger than ρ and provides an upper bound for the 0-1 loss, as illustrated in Figure 1. We call the parameter ρ the margin-offset. We can then bound the empirical probabilities p_{k0,n}(θ) and p_{0k,n}(θ) in Eq. (2) as

p_{k0,n}(θ) ≤ ε_{k0,n}(θ, ρ_k0) = (1/n) Σ_{i∈Y_k} φ_{ρ_k0}(λ_ik),
p_{0k,n}(θ) ≤ ε_{0k,n}(θ, ρ_0k) = (1/n) Σ_{i∈Y\Y_k} φ_{ρ_0k}(−λ_ik),

where we use Y_k to denote the index set of pixels belonging to class k, and i ∈ Y \ Y_k to denote the index set of pixels excluding class k. ρ_0k and ρ_k0 are pre-defined margin-offsets. Then, we can give a lower bound for Eq. (2),

IoU_{k,n}(θ) = (p_{k,n} − ε_{k0,n}(θ, ρ_k0)) / (p_{k,n} + ε_{0k,n}(θ, ρ_0k)),

and the related lower bound for mIoU_n(θ),

mIoU_n(θ) = (1/c) Σ_{k=1}^{c} IoU_{k,n}(θ).

We can then derive a generalization error bound regarding the mIoU with the margin-offsets ρ_0k and ρ_k0, based on the following theorem.

Theorem 1. For any function θ ∈ Θ, define µ_k = ρ_k0 / ρ_0k and F = C(Θ) + σ(1/η), where C(Θ) is some proper complexity measure of the hypothesis class Θ, and σ(1/η), a term of order ρ_max √(log(2c/η) / m) with ρ_max = max{ρ_i0, ρ_0i}_{i=1}^{c}, is typically a low-order term in 1/η. Given a training dataset of n image pixels, including n_k pixels of class k, where each image consists of m pixels, then for any η > 0, with probability at least 1 − η, mIoU(θ) ≥ mIoU_n(θ) − ε, where the error bound ε is determined by F, µ_k, n_k and p_k.

Proof. We first prove that for each class k, the generalization error ε_k regarding the lower bound IoU_{k,n}(θ) satisfies, with probability 1 − η/c,

IoU_k(θ) ≥ IoU_{k,n}(θ) − ε_k.   (10)

Averaging IoU_k(θ), IoU_{k,n}(θ) and ε_k for k = 1, ..., c and taking a union bound, we can get the bound in Theorem 1. With the definition of IoU_k(θ), assume that inequality (11) holds for non-negative ε_0k and ε_k0. Solving this inequality gives an expression (12) for ε_k in terms of ε_0k and ε_k0, where a_k = p_k − p_k0(θ) and b_k = p_k + p_0k(θ).
Next, we need values of ε_0k and ε_k0 that satisfy inequality (13), so that we can simply substitute (13) into (11) to complete the proof. A sufficient condition (14) for (13) can be stated in terms of ε_0k and ε_k0. Following the margin-based generalization bound in (Mohri, Rostamizadeh, and Talwalkar 2018, Theorem 9.2), for the n_k pixels belonging to class k, with probability at least 1 − η/(2c), we obtain inequality (15), where R_{n_k}(Θ) is the Rademacher complexity of the hypothesis class Θ over the n_k pixels belonging to the foreground class k. Note that this inequality is slightly different from (Mohri, Rostamizadeh, and Talwalkar 2018, Theorem 9.2), because the pixels are m-dependent for a dataset that contains m-pixel images. We first apply McDiarmid's inequality for m-dependent data (Liu et al. 2019) to the proof of (Mohri, Rostamizadeh, and Talwalkar 2018, Theorem 3.3) to get a modified version of that theorem, and then use it in the proof of (Mohri, Rostamizadeh, and Talwalkar 2018, Theorem 9.2) to get the formulation of (15). The Rademacher complexity R_{n_k}(Θ) typically scales as C(Θ)/√n_k, with C(Θ) being some proper complexity measure of Θ (Neyshabur et al. 2018), and such a scale has also been used in related work (see (Cao et al. 2019) and the references therein). We can then rewrite (15) as (16), where σ(1/η), a term of order ρ_max √(log(2c/η)/m) with ρ_max = max{ρ_i0, ρ_0i}_{i=1}^{c}, is typically a low-order term in 1/η. Similarly, letting F = C(Θ) + σ(1/η), with probability at least 1 − η/(2c), we obtain the corresponding inequality (17) for the n − n_k pixels that belong to the background class. We then combine (16), (17) and (14) and take a union bound over ε_0k and ε_k0, to get the equations (18) with which (14) holds with probability at least 1 − η/c. We then substitute these equations into (12). Letting µ_k = ρ_k0 / ρ_0k, we obtain (19), so that with probability at least 1 − η/c the inequality (10) holds. In practice, we do not know the values of a_k and b_k, so Eq. (19) has its own limitations. However, we know a_k/b_k ≤ 1 and b_k ≥ p_k, so we can get an even more useful bound (20). Taking a union bound over all classes k, we obtain, with probability at least 1 − η, the inequality stated in Theorem 1, which completes the proof.

This theorem enables us to maximize mIoU(θ) on the data distribution by maximizing, with high probability, the lower bound mIoU_n(θ) on the empirical mIoU over the training dataset. Meanwhile, we would prefer a small error bound ε, so that the lower bound mIoU_n(θ) on the empirical mIoU can be a reliable estimation of mIoU(θ). This scheme guarantees the performance of the associated function θ on unseen data, e.g., the test data. Theorem 1 also indicates that a smaller ε requires more pixels n_k for each class and a simpler fit function (for a smaller C(Θ)). Another important factor is that we can adjust the margin-offsets ρ_0k to minimize the error bound ε. Note that increasing ρ_0i also increases C(Θ) implicitly, because a larger margin-offset may require a more complex hypothesis class Θ; otherwise, mIoU_n(θ) decreases due to under-fitting. Therefore, the scale of the margin-offsets should be tuned carefully. Besides, the direct calculation of the optimal margin-offsets in Theorem 1 is difficult because it involves the complexity measure C(Θ), which is related to the structure of deep neural networks. Nevertheless, we can give the optimal proportions between the ρ_0k's, which are irrelevant to C(Θ), by the following corollary.

Corollary 1.1. Suppose the ratio µ_k = ρ_k0 / ρ_0k is set according to the label distribution, expressed in terms of p_k and a hyper-parameter υ (υ > 0).
Then the minimum of the error bound ε in Theorem 1 is attained when the margin-offsets satisfy ρ_0k ∝ √(n − n_k) / n_k for every class k.

Proof. We substitute µ_k in (22) with µ_k = √n_k / (r(n/n_k − 1) − √(n − n_k)), where r is a hyper-parameter. After the substitution, the class-dependent part of the error bound can be written in terms of quantities x_k and y_k, and according to the Cauchy-Schwarz inequality it is bounded from below by a quantity whose right-hand side is a constant, because r is a given hyper-parameter and we assume Σ_{k=1}^{c} ρ_0k is some constant. The equality holds when √x_1/y_1 = ... = √x_c/y_c, which yields Corollary 1.1. Note that in the proof µ_k = √n_k / (r(n/n_k − 1) − √(n − n_k)), while Corollary 1.1 expresses the condition in terms of p_k and υ. These two conditions are essentially equivalent when r and υ are hyper-parameters: to see this, simply let r = nυ and notice that p_k = n_k/n.

Corollary 1.1 provides theoretical guidance for setting the margin-offsets towards a smaller error bound ε. The margin-offset ρ_0i is proportional to √(n − n_i) / n_i, which indicates that a larger margin-offset is required for a class i with comparably fewer pixels. We introduce τ (τ > 0) as the scale hyper-parameter of the margin-offsets, which can be tuned on the validation dataset. A proper setting of τ and υ can provide a balance between ε and mIoU_n(θ) for the maximization of mIoU(θ).

The task of medical image segmentation is to maximize mIoU(θ) for the best performance. Ideally, we should maximize its lower bound mIoU_n(θ) with a small error bound ε. However, in the training of deep neural networks, the direct optimization of mIoU_n(θ) is impractical because the model is trained in a mini-batch manner. Unlike decomposable evaluation metrics, such as classification accuracy, where the expectation of the metric on a mini-batch sample is equivalent to the metric on the whole dataset, the expectation of the mini-batch IoU is not equal to the overall IoU on the whole dataset. Accordingly, the lower bound mIoU_n(θ) has a similar problem. For a practical implementation, we instead minimize the sum of the ρ-margin losses in mIoU_n(θ), with the optimal margin-offsets given in Corollary 1.1. By doing this, the empirical mIoU on the training dataset may be sub-optimal, but the margin-offsets can provide a guarantee for its generalization. So for a mini-batch of n pixels, the loss L(θ) is calculated by

L(θ) = Σ_{k=1}^{c} ( ε_{k0,n}(θ, ρ_k0) + ε_{0k,n}(θ, ρ_0k) ),   (27)

with λ_ik defined in Eq. (4). In practice, the margin-offsets ρ_0k and ρ_k0 may greatly influence the optimization of the corresponding ρ-margin loss and bring instability to the optimization. Thus, we substitute the ρ-margin loss φ_ρ(λ) used in Eq. (27) with the ρ-calibrated log-loss ϕ_ρ(λ) = log_2(1 + 2^{−λ+ρ}). The relationship between the ρ-margin loss φ_ρ(λ) and the ρ-calibrated log-loss ϕ_ρ(λ) is illustrated in Figure 1. As shown in Figure 1, the gradient of the ρ-margin loss can be prohibitively large when ρ is very small, while the gradient outside the interval (0, ρ) is zero.

Figure 1: The ρ-calibrated log-loss ϕ_ρ(λ) = log_2(1 + 2^{−λ+ρ}) (blue dotted line) and the ρ-margin loss φ_ρ(λ) = min(1, max(0, 1 − λ/ρ)) (orange solid line). The ρ-margin loss is an upper bound for the 0-1 loss. For the ρ-calibrated log-loss, ϕ_ρ(ρ) = 1 and it upper bounds the ρ-margin loss.

The ρ-calibrated log-loss bounds the ρ-margin loss from above and leads to

ε_{k0,n}(θ, ρ_k0) < (1/n) Σ_{i∈Y_k} log_2(1 + 2^{−λ_ik + ρ_k0}) = ε̃_{k0,n}(θ, ρ_k0)   (28)

and

ε_{0k,n}(θ, ρ_0k) < (1/n) Σ_{i∈Y\Y_k} log_2(1 + 2^{λ_ik + ρ_0k}) = ε̃_{0k,n}(θ, ρ_0k).   (29)

Based on the above two inequalities, we simply use ε̃_{k0,n}(θ, ρ_k0) and ε̃_{0k,n}(θ, ρ_0k) to replace ε_{k0,n}(θ, ρ_k0) and ε_{0k,n}(θ, ρ_0k) in Eq. (27) as the final loss function. Given the output scores (s_ij) ∈ R^{n×c} of n pixels, the computation of the margin λ_ij and the subsequent calibrated log-loss incurs O(nc) time complexity.
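To illustrate how the final loss can be implemented, below is a minimal PyTorch sketch. It is our reconstruction, not the released implementation: the class name, the tau/mu arguments and the handling of the background class are assumptions, while the margin λ_ik = s_ik − max_{j≠k} s_ij and the offset rule ρ_0k ∝ √(n − n_k)/n_k follow the formulation given in this section.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

LOG2 = math.log(2.0)

def log2_1p_exp2(x):
    # Numerically stable log2(1 + 2^x), computed via softplus in natural log.
    return F.softplus(x * LOG2) / LOG2

class MarginCalibrationLoss(nn.Module):
    """Sketch of the margin-calibration loss (names and defaults are ours).

    scores:  (N, C, H, W) raw per-pixel class scores from the segmentation network.
    targets: (N, H, W) integer label maps.
    Margin-offsets follow rho_0k = tau * sqrt(n - n_k) / n_k with rho_k0 = mu * rho_0k,
    where class_pixel_counts are the training-set label counts n_k.
    """
    def __init__(self, class_pixel_counts, tau=1.0, mu=1.0):
        super().__init__()
        counts = torch.as_tensor(class_pixel_counts, dtype=torch.float)
        n = counts.sum()
        rho_0 = tau * torch.sqrt(n - counts) / counts.clamp(min=1.0)  # larger offsets for rarer classes
        self.register_buffer("rho_0", rho_0)       # rho_{0k}: offset on non-k pixels
        self.register_buffer("rho_1", mu * rho_0)  # rho_{k0}: offset on class-k pixels

    def forward(self, scores, targets):
        num_classes = scores.shape[1]
        s = scores.permute(0, 2, 3, 1).reshape(-1, num_classes)  # (n_pixels, C)
        y = targets.reshape(-1)
        n_pix = s.shape[0]
        loss = s.new_zeros(())
        for k in range(num_classes):
            # Margin lambda_ik = s_ik - max_{j != k} s_ij (Eq. 4, as reconstructed).
            others = s.clone()
            others[:, k] = float("-inf")
            lam = s[:, k] - others.max(dim=1).values
            in_k = y == k
            # rho-calibrated log-loss terms replacing epsilon_{k0,n} and epsilon_{0k,n}.
            fn_term = log2_1p_exp2(-lam[in_k] + self.rho_1[k]).sum()  # class-k pixels
            fp_term = log2_1p_exp2(lam[~in_k] + self.rho_0[k]).sum()  # remaining pixels
            loss = loss + (fn_term + fp_term) / n_pix
        return loss
```

For a binary lesion-segmentation task the loss could be instantiated as, e.g., MarginCalibrationLoss(class_pixel_counts=[9.0e7, 1.0e6], tau=1.0), where the two counts are hypothetical background/lesion pixel totals from the training set.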
Specifically, compared to the cross-entropy loss, our calibration method requires an extra O(nc) time overhead to compute the margins.

We use the recently proposed COPLE-Net (Wang et al. 2020), a variant of U-net, as the deep image segmentation architecture, and compare the final segmentation performance when applying commonly used learning objectives and our designed margin calibration method, respectively. We demonstrate the method on two publicly available medical image datasets: COVID-19 pneumonia CT scans (the UESTC COVID-19 dataset (Wang et al. 2020)) and Robotic Instrument segmentation (Allan et al. 2019). The COVID-19 dataset is collected from 10 different hospitals, in which the images have a large range of slice thickness/inter-slice spacing from 0.625mm to 8.0mm, and the pixel size ranges from 0.61mm to 0.93mm. The whole dataset contains two subsets, with 70 and 50 patient cases, respectively. The first subset (Part 1) is coarse-labeled while the second one (Part 2) is fine-labeled by experts. In the experiment, we used fixed train/validation/test splits with 40/15/15 and 30/10/10 cases on the two subsets, respectively. The Robotic Instrument dataset provides 8×225-frame robotic surgical videos, where each part and type is manually annotated by a trained team. Here we conduct two segmentation tasks: binary instrument segmentation and multi-class instrument part segmentation. In the first task, each image is separated into the da Vinci Xi instruments and the background class (ultrasound probe, surgical clips and porcine tissues). The second task is to correctly segment each articulating part of the instrument, including the shaft, wrist, claspers and probe. In our experiment, this dataset is sequentially split into 1,200, 200 and 400 images according to the frame index for training, validation and testing, respectively.

We implemented the segmentation model based on PyTorch. In the optimization, we employed the AdamW optimizer (Loshchilov and Hutter 2019) with an initial learning rate of 10^-4. We trained the COPLE-Net model with group normalization (Wu and He 2018), which allows setting a very small batch size to fit models in the limited GPU memory. Our experiments were conducted on a server equipped with an NVIDIA Titan X GPU card, and our implementation is publicly available at https://github.com/XXX.

Convergence study. A very nice property of our proposed margin calibration method for medical image segmentation is the tight correlation between the empirical error and the generalization error. We trained the segmentation model from the very beginning, using categorical cross-entropy (CE) and the proposed margin calibration (MC) in the first 50 optimization epochs, and recorded the loss values and mIoU scores, which are plotted in Figure 2. As we can see, the gap between the training-loss and validation-loss curves of cross-entropy gradually enlarges as the training epochs increase, while the training loss and validation loss of our proposed margin calibration stay generally close to each other. However, the absolute loss values of different loss functions have no direct correlation with the evaluation metric (mIoU in our case). From Figure 2(b) we can see that, using the margin calibration method, the mIoU has a convergence speed comparable to the CE loss under the same optimization settings. Besides, our loss function leads to a higher training mIoU score, which can be attributed to the closer relationship between our loss function and the mIoU score.
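For reference, here is a minimal runnable sketch of the optimization setup described above (AdamW with an initial learning rate of 10^-4, group normalization, a small batch size). The tiny network and the random tensors are stand-ins for COPLE-Net and the CT data, not the actual pipeline.

```python
import torch
import torch.nn as nn
from torch.optim import AdamW

# Tiny stand-in network with GroupNorm (the paper uses COPLE-Net with group
# normalization; this dummy model only illustrates the optimization setup).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.GroupNorm(4, 16), nn.ReLU(),
    nn.Conv2d(16, 2, 1),  # 2 output classes: background / lesion
)
optimizer = AdamW(model.parameters(), lr=1e-4)  # initial learning rate 1e-4
criterion = nn.CrossEntropyLoss()               # cross-entropy stage for coarse training

model.train()
for step in range(10):                      # stand-in for iterating over CT slices
    images = torch.randn(2, 1, 64, 64)      # small batch to fit limited GPU memory
    masks = torch.randint(0, 2, (2, 64, 64))
    optimizer.zero_grad()
    loss = criterion(model(images), masks)
    loss.backward()
    optimizer.step()

# After the coarse cross-entropy stage, the criterion can be swapped for the
# margin-calibration loss sketched earlier to fine-tune towards the IoU objective.
```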
Sensitivity analysis of hyper-parameters τ and υ. The hyper-parameters τ and υ control the scale of the margins between the current foreground and background classes. Proper margin-offsets can well assist the calibration against the pixel-class distribution variance. Thus, we set different values of τ and υ to observe the segmentation performance on the validation dataset. The loss values and mIoU scores are summarized in Table 1. We can observe that the settings of the two parameters do not significantly affect the actual performance of the models, which means the proposed margin calibration method is very robust to the hyper-parameters.

Performance comparison using single learning objectives. We tested the segmentation model with multiple learning objectives as baselines, including the cross-entropy loss, the generalized dice loss (Eelbode et al. 2020), the focal loss (Lin et al. 2017), the Tversky loss (Salehi, Erdogmus, and Gholipour 2017) and the Lovász-softmax loss (Berman, Rannen Triki, and Blaschko 2018). Cross-entropy is the most straightforward loss function in classification, as medical image segmentation can be treated as a dense prediction for each image pixel. Although its learning objective is not so consistent with the evaluation metrics, cross-entropy is still a good loss function for early training due to its simple and fast computation. In our experiment, we used cross-entropy as the basic learning objective to pre-train the segmentation models for 50 epochs. After that, we applied different learning objectives independently to fine-tune the coarsely trained model. For fair comparisons, we did not use CRF post-processing or multi-scale prediction to bring complementary improvements. In model evaluation, we used per-pixel accuracy and IoU scores for the different loss surrogates.

We show the quantitative results on the two medical image datasets in Tables 2 and 3, respectively. Although the two evaluation metrics, pixel accuracy and IoU score, have a very high correlation in terms of absolute values, the best result on one metric cannot guarantee the best on the other. For example, simply using cross-entropy on the coarsely labelled COVID-19 pneumonia lesion segmentation task (Part 1 in Table 2) achieves the best pixel accuracy, but its IoU score is not the best among the models trained with the other loss functions. In fact, the IoU score is usually a better measure to quantify the percentage of overlap between the pixel-label output and the target mask in medical image segmentation. Using a single loss function, our proposed margin calibration method obtains the best IoU or mIoU scores on the two datasets. Specifically, in the COVID-19 pneumonia lesion segmentation tasks, our method beats the second-best ones by 1.3% and 0.4% on the coarse- and fine-labelled CT image sets, respectively. The general performance on the fine-labelled set is much better due to the lower label noise. On the Robotic Instrument dataset, using the proposed margin calibration method as a single learning objective also outperforms the other objective functions, with 1.4% and 3.0% performance boosts in terms of IoU and mIoU scores for binary and multi-class segmentation, respectively. Also, although our method is not specifically designed to optimize pixel accuracy, using the margin calibration can still achieve very promising performance. We illustrate segmentation examples on the two datasets in Figure 3 and Figure 4, respectively.
By observing the results on the COVID-19 pneumonia lesions, using different learning objectives in COPLE-Net obtains very similar results, so obvious differences cannot be seen. In the visualization of the multi-class segmentation on the Robotic Instrument dataset, we can see that with the proposed method the different parts are better segmented, forming smoother contours and yielding more accurate results.

Performance using loss function combinations. Applying multiple loss functions simultaneously in training an image segmentation model is a common practice. Among the baselines, the dice loss and the Tversky loss are region-based loss functions, while the Lovász-softmax and focal losses, as well as our proposed margin calibration, focus more on the data distribution. So we simply used our method in conjunction with the dice loss and the Tversky loss as the learning objectives to train the COPLE-Net models. On the two datasets, the IoU scores can be further boosted in general (see Table 5).

We have presented a versatile margin calibration method as a better learning objective to optimize the Jaccard index in medical image segmentation. With the consideration of both the empirical performance and the error bound regarding the generalization performance, the scheme can increase the discriminative power with a better generalization ability. We gave both theoretical and experimental analyses to demonstrate its effectiveness, substantially improving the IoU scores by inserting it into a deep learning-based medical image segmentation model.

References

Abraham and Khan (2019). A novel focal Tversky loss function with improved attention U-net for lesion segmentation.
Allan et al. (2019). Robotic instrument segmentation challenge.
Badrinarayanan, Kendall, and Cipolla (2017). SegNet: A deep convolutional encoder-decoder architecture for image segmentation.
Berman, Rannen Triki, and Blaschko (2018). The Lovász-softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks.
Boser, Guyon, and Vapnik (1992). A training algorithm for optimal margin classifiers.
Cao et al. (2019). Learning imbalanced datasets with label-distribution-aware margin loss.
Chen et al. (2017). DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs.
Cordts et al. (2016). The Cityscapes dataset for semantic urban scene understanding.
Eelbode et al. (2020). Optimization for medical image segmentation: Theory and practice when evaluating with Dice score or Jaccard index.
Karimi and Salcudean (2019). Reducing the Hausdorff distance in medical image segmentation with convolutional neural networks.
Kervadec et al. (2019). Boundary loss for highly unbalanced segmentation.
Khan et al. (2019). Striking the right balance with uncertainty.
Li et al. (2002). The perceptron algorithm with uneven margins.
Li, Kamnitsas, and Glocker (2019). Overfitting of neural nets under class imbalance: Analysis and improvements for segmentation.
Lin et al. (2017). Focal loss for dense object detection.
Liu et al. (2019). McDiarmid-type inequalities for graph-dependent variables and stability bounds.
Long, Shelhamer, and Darrell (2015). Fully convolutional networks for semantic segmentation.
Loshchilov and Hutter (2019). Decoupled weight decay regularization.
Milletari, Navab, and Ahmadi (2016). V-Net: Fully convolutional neural networks for volumetric medical image segmentation.
Mohri, Rostamizadeh, and Talwalkar (2018). Foundations of Machine Learning.
Neyshabur et al. (2018). The role of over-parametrization in generalization of neural networks.
Rahman and Wang (2016). Optimizing intersection-over-union in deep neural networks for image segmentation.
Ronneberger, Fischer, and Brox (2015). U-Net: Convolutional networks for biomedical image segmentation.
Salehi, Erdogmus, and Gholipour (2017). Tversky loss function for image segmentation using 3D fully convolutional deep networks.
Szegedy et al. (2017). Inception-v4, Inception-ResNet and the impact of residual connections on learning.
Wang et al. (2020). A noise-robust framework for automatic segmentation of COVID-19 pneumonia lesions from CT images.
Wong et al. (2018). 3D segmentation with exponential logarithmic loss for highly unbalanced object sizes.
Wu and He (2018). Group normalization.
Zhou et al. (2017). Scene parsing through ADE20K dataset.
Zhou et al. (2019). UNet++: Redesigning skip connections to exploit multiscale features in image segmentation.