title: Performance or Trust? Why Not Both. Deep AUC Maximization with Self-Supervised Learning for COVID-19 Chest X-ray Classifications
authors: He, Siyuan; Xi, Pengcheng; Ebadi, Ashkan; Tremblay, Stephane; Wong, Alexander
date: 2021-12-14

Effective representation learning is key to improving model performance for medical image analysis. In training deep learning models, a compromise must often be made between performance and trust, both of which are essential for medical applications. Moreover, models optimized with cross-entropy loss tend to suffer from unwarranted overconfidence in the majority class and over-cautiousness in the minority class. In this work, we integrate a new surrogate loss with self-supervised learning for computer-aided screening of COVID-19 patients using radiography images. In addition, we adopt a new quantification score to measure a model's trustworthiness. An ablation study is conducted on both performance and trust across feature learning methods and loss functions. Comparisons show that leveraging the new surrogate loss on self-supervised models can produce label-efficient networks that are both high-performing and trustworthy.

COVID-19 continues to affect our daily lives. In the fight against the pandemic, computer-aided screening of patients using radiography images has served as a complementary approach to the standard polymerase chain reaction (PCR) test. Despite our efforts in improving the performance of deep learning models [1, 2], a compromise must often be made between performance and model trust [3].

Effective representation learning is key to improving model performance. The most common approach is supervised learning on large-scale labeled data sets. It can also be realized through unsupervised training when data labels are missing or expensive to collect. A series of self-supervised models have achieved performance comparable to their supervised counterparts on benchmark data sets [4, 5]. They learn image representations by minimizing the embedding distance between image pairs derived from the same image, while maximizing the distance between pairs derived from different images.

Regarding model trust, a deep classification neural network optimized with cross-entropy loss tends to be over-confident in its incorrect predictions of the majority class, and over-cautious in its correct predictions of the minority class [6].

In this work, we investigate a new surrogate loss function termed deep AUC maximization [7]. We integrate it with self-supervised models pre-trained under the MoCo framework [4] to improve model performance and trust. To the best of our knowledge, this is the first integration of the deep AUC maximization loss with self-supervised learning. In addition, we validate the models through quantitative comparisons to gain insight into their trustworthiness. Our assumption is that, by applying the new surrogate loss function to self-supervised models, we no longer need to sacrifice model trust for performance but can achieve both.
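To make the contrastive learning objective concrete, the following is a minimal PyTorch sketch of an InfoNCE-style loss: embeddings of two augmented views of the same image are pulled together while all other pairings in the batch are pushed apart. This is our own simplification for illustration; MoCo [4] additionally maintains a momentum-updated key encoder and a queue of negative keys, both omitted here.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(queries, keys, temperature=0.07):
    """InfoNCE-style contrastive loss.

    queries, keys: (N, D) embeddings of two augmented views of the
    same N images. Row i of `keys` is the positive match for row i
    of `queries`; all other rows act as negatives.
    """
    q = F.normalize(queries, dim=1)
    k = F.normalize(keys, dim=1)
    # Cosine-similarity logits between every query and every key.
    logits = q @ k.t() / temperature            # shape (N, N)
    # The matching (diagonal) key is the "correct class" for each query.
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)

# Example with random embeddings standing in for encoder outputs.
q = torch.randn(8, 128)
k = torch.randn(8, 128)
loss = info_nce_loss(q, k)
```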
In summary, our contributions are threefold:

• We proposed the use of a new surrogate loss on self-supervised models to improve representation learning and model trust pertinent to the task of screening COVID-19 patients with deep learning;
• We demonstrated with experimental results that the self-supervised models improved performance over supervised ones in screening COVID-19 patients;
• We showed that the new surrogate loss can produce models that are more trustworthy than those optimized with cross-entropy loss.

Self-supervised learning has gained momentum in learning visual representations. It can be categorized into generative and discriminative approaches. As a discriminative method, Momentum Contrast (MoCo) trains a visual representation encoder by matching an encoded query to a dictionary of encoded keys through a contrastive loss [4]. The query encoder is shared with the key encoder, which receives slow updates in order to keep the learned visual representations consistent. In medical AI, contrastive learning has led to improved representation learning. In [8], a model named MoCo-CXR demonstrated that linear classifiers trained on MoCo-CXR-pretrained representations outperform those trained without them. Owing to the scarcity of COVID-19 patient data, the MoCo model has also been applied to predicting patient deterioration from chest X-rays [9].

The area under the Receiver Operating Characteristic curve (AUC) is widely used in medical image analysis for evaluating the performance of a neural network. Recently, Yuan et al. [7] proposed a novel surrogate loss over the standard cross-entropy loss to optimize the AUC metric directly. As the authors claim, directly maximizing AUC can lead to greater performance gains than minimizing cross-entropy. This new surrogate loss was integrated with supervised deep learning models and achieved first place in the Stanford CheXpert competition [7].

Our model architecture is illustrated in Fig. 1. Our approach combines deep AUC maximization [7], a novel surrogate loss proposed for medical image classification, with self-supervised pre-training to maximize label efficiency, performance, and model trust. In our experiments, we compare the loss function against traditional cross-entropy (CE) optimization on both self-supervised and supervised models. The self-supervised model is built on the MoCo framework [4] and pre-trained on the MIMIC-CXR dataset [10]. All models are then fine-tuned on the COVIDx8B dataset [11] for validation. DenseNet-121 is chosen as the backbone architecture throughout our experiments [12].

The MoCo model was pre-trained on the MIMIC-CXR dataset for predicting patient deterioration [9]. The dataset is composed of 377,110 chest radiographs [10]. As the dataset was constructed before the COVID-19 pandemic, it does not contain any COVID-19-positive chest X-ray samples.

We perform end-to-end fine-tuning on the COVIDx8B dataset [11]. The latest version of COVIDx8B consists of 15,952 chest radiographs for training and 400 for testing (Table 1). Each sample is labelled as either COVID-19 positive or negative. Stratified 5-fold cross-validation is conducted on the training split during the fine-tuning stage to evaluate model performance (illustrated in the sketch below). The DenseNet-121 model pre-trained on MIMIC-CXR using the MoCo framework has a projection dimension of 128, whereas the supervised model pre-trained on ImageNet has a projection dimension of 1,000.
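As a concrete illustration of the evaluation protocol above, the sketch below builds a stratified 5-fold split over binary labels using scikit-learn; each fold preserves the positive/negative class ratio. The random labels are placeholders standing in for the COVIDx8B training annotations, not the actual data.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Hypothetical binary labels for the 15,952-image COVIDx8B training
# split (1 = COVID-19 positive, 0 = negative); real labels would be
# loaded from the dataset metadata.
labels = np.random.randint(0, 2, size=15952)
X = np.zeros((len(labels), 1))  # features unused by the splitter

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(skf.split(X, labels)):
    # Every fold keeps roughly the same positive rate as the full set.
    print(f"fold {fold}: {len(train_idx)} train, {len(val_idx)} val, "
          f"{labels[val_idx].mean():.3f} positive rate")
```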
For end-to-end fine-tuning, the last fully connected layer of both pre-trained models is replaced with a randomly initialized single output neuron for binary classification. We apply a sigmoid layer over the raw logits of the model to obtain class probabilities. All input images are resized to 224×224, center-cropped, and normalized. Only random horizontal flipping is used for data augmentation, as further augmentations were noted to provide little improvement for classification [9].

AUC Maximization. We adopt the novel surrogate loss function introduced in [7] to maximize the area under the Receiver Operating Characteristic curve. For end-to-end fine-tuning, we use a learning rate of 0.1 for all layers of the DenseNet model. We optimize the network with the new surrogate loss to maximize the AUC metric, training for 30 epochs while decaying the learning rate by a factor of 10 at the 15th epoch (a sketch of this setup is given at the end of this section).

For standard end-to-end fine-tuning, we set the learning rate to 1e-3 for all layers of the DenseNet model. Following procedures similar to [9], we use cosine annealing to decay the learning rate. We fine-tune the model for 30 epochs with the SGD optimizer on cross-entropy loss, using a momentum of 0.9 and a weight decay of 1e-4.

During each validation fold, we first compute an optimal threshold by maximizing the F1-score on the validation split (sketched at the end of this section). Then, we save the model corresponding to the best validation accuracy. Finally, we evaluate the saved models on the unseen test split.

We first examine the performance difference between traditional supervised pre-training on ImageNet and self-supervised contrastive pre-training on MIMIC-CXR. Tables 2 and 3 show significant improvements in the precision of the negative class and the sensitivity of the positive class for both CE optimization and AUC maximization. In medical image analysis, this improvement is key: maximizing positive sensitivity is essential for reducing false negatives. However, this increase in performance comes at the cost of model trust. We examine the trustworthiness of each model by calculating a trust score for the positive class. Following the method introduced in [3], we compute a score that rewards well-placed confidence and penalizes undeserved overconfidence. In Table 4, we notice that under CE optimization, supervised models are drastically more trustworthy than self-supervised models. Moreover, throughout our CE optimization experiments, we observed that self-supervised models are less confident in their correct predictions (over-cautious) than their supervised counterparts.

Our comparisons of CE optimization against AUC maximization show improvements across standard metrics as well as overall model trustworthiness. Both Table 2 and Table 3 show improvements in the precision and sensitivity of the supervised models. Moreover, Fig. 2 demonstrates an increase in the AUC scores of the supervised models. When examining self-supervised models, AUC maximization still achieves performance comparable to CE optimization. More importantly, we observe significant gains in model trust scores, especially for self-supervised models. Table 4 shows a nearly 1% increase in trust score with supervised pre-training and a nearly 6% increase with self-supervised pre-training. Moreover, when using AUC maximization, we do not see the same drop in model trust between supervised and self-supervised models.
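To make the fine-tuning recipe above concrete, the sketch below replaces the DenseNet-121 head with a single output neuron and optimizes an AUC surrogate under the stated learning-rate schedule. Note that the pairwise squared-hinge loss here is our simplified stand-in conveying the idea of optimizing AUC directly; it is not the min-max margin loss of [7], and the batch of random images is purely illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

# Replace the pre-trained classifier head with one output neuron for
# binary (COVID-19 positive vs. negative) classification.
# (weights="IMAGENET1K_V1" assumes torchvision >= 0.13.)
model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, 1)

def pairwise_auc_loss(logits, targets, margin=1.0):
    """Squared-hinge pairwise AUC surrogate: penalize every
    positive/negative pair whose score gap falls short of `margin`.
    A simplified stand-in for the margin-based loss of [7]."""
    pos = logits[targets == 1]
    neg = logits[targets == 0]
    diffs = pos.view(-1, 1) - neg.view(1, -1)   # all pos-neg score gaps
    return torch.clamp(margin - diffs, min=0).pow(2).mean()

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Decay the learning rate by 10x at the 15th of 30 epochs, as above;
# scheduler.step() would be called once per epoch.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[15], gamma=0.1)

# One hypothetical training step on a random mini-batch.
images = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, 2, (16,)).float()
optimizer.zero_grad()
logits = model(images).squeeze(1)
loss = pairwise_auc_loss(logits, labels)
loss.backward()
optimizer.step()
```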
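The per-fold threshold selection can likewise be sketched: sweep the candidate thresholds over the validation probabilities and keep the one that maximizes F1. This is a hypothetical illustration using scikit-learn, not the authors' exact procedure.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def best_f1_threshold(y_true, y_prob):
    """Return the probability threshold maximizing F1 on a validation split."""
    precision, recall, thresholds = precision_recall_curve(y_true, y_prob)
    # precision/recall have one more entry than thresholds; drop the last.
    f1 = (2 * precision[:-1] * recall[:-1]
          / np.clip(precision[:-1] + recall[:-1], 1e-12, None))
    return thresholds[np.argmax(f1)]

# Hypothetical validation outputs from one cross-validation fold.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.55])
t = best_f1_threshold(y_true, y_prob)
y_pred = (y_prob >= t).astype(int)  # binarize test predictions at the chosen threshold
```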
Therefore, unlike CE optimization, we can freely leverage AUC maximization with self-supervised pre-training to improve performance without sacrificing model trust. As shown in Tables 2, 3 and 4, AUC maximization allows us to achieve top metrics without trading off model trust for performance.

This work demonstrates that we no longer need to sacrifice model trust for performance. Integrating AUC maximization can produce more trustworthy and better-performing models. By extending the AUC maximization paradigm [7] to self-supervised pre-training, we showed that we can significantly improve key metrics while maintaining model trust. We expect that our study on self-supervised learning with AUC maximization will contribute to the classification of both COVID-19 and future illnesses. More often than not, we cannot afford to collect large amounts of labeled samples at the onset of a pandemic. It is therefore important that we exploit existing data, apply effective representation learning to maximize model performance, and attain well-placed model confidence.

References
[1] Covid-19 detection from chest X-ray images using imprinted weights approach.
[2] Covid-19 detection from chest X-ray images using deep convolutional neural networks with weights imprinting approach.
[3] How much can we really trust you? Towards simple, interpretable trust quantification metrics for deep neural networks.
[4] Momentum contrast for unsupervised visual representation learning.
[5] A simple framework for contrastive learning of visual representations.
[6] GHOST: Adjusting the decision threshold to handle imbalanced data in machine learning.
[7] Large-scale robust deep AUC maximization: A new surrogate loss and empirical studies on medical image classification.
[8] MoCo pretraining improves representation and transferability of chest X-ray models.
[9] COVID-19 prognosis via self-supervised representation learning and multi-image prediction.
[10] MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports.
[11] COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images.
[12] Densely connected convolutional networks.

Acknowledgments
We would like to thank Jianxing (Jason) Zhang for his preliminary work and code preparation that led to the completion of this work. We also acknowledge support from the Pandemic Response Challenge Program at the National Research Council of Canada.