key: cord-0953875-9a7fi46y authors: Amyar, Amine; Modzelewski, Romain; Li, Hua; Ruan, Su title: Multi-task Deep Learning Based CT Imaging Analysis For COVID-19 Pneumonia: Classification and Segmentation date: 2020-10-08 journal: Comput Biol Med DOI: 10.1016/j.compbiomed.2020.104037 sha: 71a8974d9cbfabf338ab842e4649362017ace344 doc_id: 953875 cord_uid: 9a7fi46y

This paper presents an automatic classification and segmentation tool to help screen for COVID-19 pneumonia using chest CT imaging. The segmented lesions can help to assess the severity of pneumonia and to follow up patients. In this work, we propose a new multi-task deep learning model to jointly identify COVID-19 patients and segment COVID-19 lesions from chest CT images. Three learning tasks, segmentation, classification and reconstruction, are performed jointly with different datasets. Our motivation is, on the one hand, to leverage useful information contained in multiple related tasks to improve both segmentation and classification performances, and on the other hand, to deal with the problem of small data, because each task can have a relatively small dataset. Our architecture is composed of a common encoder for disentangled feature representation across the three tasks, plus two decoders and a multi-layer perceptron for reconstruction, segmentation and classification respectively. The proposed model is evaluated and compared with other image segmentation techniques using a dataset of 1369 patients, including 449 patients with COVID-19, 425 normal ones, 98 with lung cancer and 397 with different kinds of pathology. The obtained results show very encouraging performance of our method, with a dice coefficient higher than 0.88 for the segmentation and an area under the ROC curve higher than 97% for the classification.

© 2020 Elsevier B.V. All rights reserved.

The novel coronavirus disease (COVID-19) spread rapidly around the world, changing the daily lives of billions of people. The infection can lead to severe pneumonia that can cause death.
Also, COVID-19 is highly contagious, which is why it must be detected quickly, so that infected persons can be isolated as fast as possible to limit the spread of the disease. Today, the gold standard for detecting COVID-19 is the Reverse Transcription Polymerase Chain Reaction (RT-PCR) [42], which consists of detecting viral RNA from sputum or a nasopharyngeal swab. The limitations of the RT-PCR test are the time needed to obtain the results, the availability of the test material, which remains very limited in hospitals [42], and its relatively low sensitivity, which conflicts with the major interest of detecting positive cases as soon as possible in order to isolate them [39]. An alternative solution for rapid screening could be the use of medical imaging such as chest X-ray images or computed tomography (CT) scans [39]. Identifying COVID-19 at an early stage through imaging would indeed allow the isolation of the patient and therefore limit the spread of the disease [39]. However, physicians are very busy fighting this disease, hence the need to create decision support tools based on artificial intelligence that not only detect but also segment the infection at the lung level in the image [42]. Artificial intelligence has seen a major and rapid growth in recent years, with deep neural networks [43] as a first tool to solve different problems such as object detection [37, 29], speech recognition [37], drug interaction [27] and image classification [10]. More specifically, convolutional neural networks [22] have shown astonishing results for image processing [21]. For image segmentation, several works have shown the power and robustness of these methods [5]. CNN architectures have also been used in medical imaging with very good results [14], for both image classification [3, 6] and image segmentation [19]. For the detection of COVID-19 and the segmentation of the infection at the lung level, several deep learning works on chest X-ray images and CT scans have emerged, as reviewed in [34]. In [25], Narin et al. created a deep convolutional neural network to automatically detect COVID-19 on X-ray images. To that end, they used a transfer learning based approach with very deep architectures such as ResNet50, InceptionV3 and Inception-ResNetV2. The algorithms were trained on 100 images (50 COVID vs 50 non-COVID) with 5-fold cross-validation. The authors reported 97% accuracy using InceptionV3 and 87% using Inception-ResNetV2. However, given the very limited size of the dataset and the very deep models, overfitting cannot be ruled out, and these results need to be validated on a larger database. Also, in [15], Hemdan et al. created several deep learning models to classify X-ray images into COVID vs non-COVID classes, reporting best results with an accuracy of 90% using VGG16. Again, the database was very limited, with only 50 cases (25 COVID vs 25 non-COVID). A similar study was conducted by Wang et al. [38], who trained a CNN on the ImageNet database [11] and then fine-tuned it on X-ray images to classify cases into one of four classes: normal, bacterial, non-COVID-19 viral and COVID-19 viral infection, with an overall performance of 83.5%. For CT images, Zhao et al. [45] created a CT scan dataset initially containing 275 COVID-19 CTs, on which they also applied a transfer learning algorithm using ChestX-ray14 [40] with a 169-layer DenseNet [17]. The model achieved an accuracy of 84.7% with an area under the ROC curve of 82.4%.
As of today, the database contains 347 CT images of COVID-19 patients and 397 of non-COVID patients. Instead of using CNNs, other works have used capsule networks, first proposed in [16] to address the limitations of CNN architectures, which need a large amount of data and many parameters. In [2], the authors opted for this approach: they created a capsule network to identify COVID-19 cases in X-ray images. The results were encouraging, with an accuracy of 95.7%, a sensitivity of 90% and a specificity of 95.8%. They compared their results with those of Sethy et al. [33], who created a model based on ResNet50 with an SVM and obtained a performance of 95.38%, a sensitivity of 97.29% and a specificity of 93.47%. In [18], Jin et al. created and deployed an AI tool to analyze CT images of COVID-19 in four weeks. To do this, a multidisciplinary team of 30 people collaborated using a database of 1136 images, including 723 COVID-19 positive images from five hospitals, to achieve a sensitivity of 0.974 and a specificity of 0.922. The system was deployed in 16 hospitals and performed over 1300 screenings per day. They proposed a combined model for classification and segmentation, showing lesion regions in addition to the screening results. The pipeline is divided into two steps, segmentation and classification, using several models including 3D U-NET++, V-NET and FCN-8S for segmentation, and InceptionV3, ResNet50 and others for classification. They achieved a dice coefficient of 0.754 using 3D U-NET++ trained on 732 cases. The combination of 3D U-NET++ and ResNet50 resulted in an area under the ROC curve of 0.991, with a sensitivity of 0.974 and a specificity of 0.922. In practice, the model continued to improve through re-training, and proved very useful to physicians by highlighting lesion regions, which improved diagnosis. What should be noted here is that the two models are independent and therefore cannot help each other improve both classification and segmentation performances. Other works have also emerged recently with interesting results. In [26], Ozturk et al. proposed a model to detect COVID-19 on X-ray images. The model is inspired by DarkNet-19, the classifier that forms the basis of the real-time object detection system YOLO (You Only Look Once) [30]. They implemented a network with 17 convolutional layers, achieving an accuracy of 98.08% for the binary case and 87.02% for the multi-class case. In [27], Pathak et al. used a transfer learning approach to classify COVID-19 infected patients. They introduced a top-2 smooth loss function with cost-sensitive attributes to handle noisy and imbalanced data. The model was trained on a public dataset of chest CT images, then used to classify COVID-19 infected patients, achieving an accuracy of 0.93, a sensitivity of 0.91 and a specificity of 0.94. Other works, such as S. Dilbag et al. [35], applied multi-objective differential evolution-based convolutional neural networks to classify COVID-19 patients from chest CT images. Multi-task learning (MTL) [8] is a type of learning algorithm whose goal is to combine several pieces of information from different tasks in order to improve the performance of the model and its ability to generalize [44]. The basic idea of MTL is that different tasks can share a common feature representation [44] and can therefore be trained jointly.
Using different datasets for different tasks makes it possible to learn an effective feature representation that is common to all tasks, because all datasets contribute to it; even if each task has a small dataset, the performance of each task is improved. Different approaches have been proposed in MTL, such as hard parameter sharing [8] and soft parameter sharing [32]. Hard parameter sharing is the most commonly used approach to MTL in neural networks and greatly reduces the risk of overfitting [32] (see Fig. 4). It is generally applied by sharing the hidden layers between all tasks while keeping several task-specific output layers. Soft parameter sharing defines a model for each task with its own parameters, and the distance between the parameters of the models is regularized in order to encourage them to be similar. In this work, we propose a novel multi-task deep learning model for jointly detecting COVID-19 images and segmenting lesions. The main challenges of this work are: 1) the lack of data and annotated data, since the databases were collected from multiple sources with a huge variation in images, and most of the images are noisy (see Fig. 3); 2) developing a multi-task approach that reduces overfitting and improves results, instead of relying on expensive models like ResNet50 or DenseNet. Facing these challenges, we propose to train our neural network on three tasks, reconstruction, classification and segmentation, in order to both classify COVID / non-COVID images and segment the lesion regions. We add the reconstruction task, often used in unsupervised learning, to better learn the disentangled feature representation with a single common encoder. Based on this feature representation, three neural networks are designed to accomplish the three tasks. The paper is organized as follows. In Section 2, we describe our multi-task model, which is mainly based on the classification and segmentation tasks. Section 3 presents the experimental studies. In Section 4, we describe the validation methodology used in this study. Section 5 presents the results of our work. Sections 6 and 7 are devoted to discussion and conclusion respectively. In this study, three datasets from different hospitals, including 1396 CT images in total, are used. The first one is a publicly available dataset from [45], which includes 347 COVID-19 images and 397 non-COVID images with different types of pathologies. The database was pre-processed and stored in png format. The image height varies from 153 to 1853 pixels with an average of 491, while the width varies from 124 to 1485 with an average of 383. The second dataset comes from [http://medicalsegmentation.com/covid19/] and provides 100 COVID-19 CT scans with ground truth lesion segmentations, defined by the physicians of different hospitals. Three lesion labels are provided: ground glass, consolidation and pleural effusion. As not all lesion labels are given in all images, for the purpose of this study we merged the three labels into one lesion label (see Fig. 2). The third dataset, coming from the Henri Becquerel Cancer Center (HBCC) in Rouen, France, includes 425 CT scans of normal patients and 98 of lung cancer patients. All images from the three datasets were resized to the same size of 256 x 256 and intensity-normalized between 0 and 1 prior to analysis. Table 1 summarizes how the datasets were split for training, validation and testing.
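To make the preprocessing described above concrete, the sketch below shows one plausible way to prepare the images: resizing to 256 x 256, normalizing intensities to [0, 1], and merging the three lesion labels into a single binary mask. The file handling and function names are hypothetical; only the resize, normalization and label-merging steps come from the text.

```python
# Hypothetical preprocessing sketch for the three datasets described above.
# Paths and mask encoding are illustrative; only the resize to 256 x 256,
# the [0, 1] normalization and the label merging follow the paper.
import numpy as np
from PIL import Image

TARGET_SIZE = (256, 256)  # all three datasets are resized to 256 x 256

def preprocess_ct_slice(path):
    """Load a CT slice, resize it and normalize intensities to [0, 1]."""
    img = Image.open(path).convert("L")            # greyscale CT slice
    img = img.resize(TARGET_SIZE, Image.BILINEAR)
    arr = np.asarray(img, dtype=np.float32)
    arr = (arr - arr.min()) / (arr.max() - arr.min() + 1e-8)  # [0, 1]
    return arr[..., np.newaxis]                    # add a channel axis

def merge_lesion_labels(mask_path):
    """Merge the three lesion labels (ground glass, consolidation,
    pleural effusion) into a single binary lesion mask."""
    mask = Image.open(mask_path).convert("L").resize(TARGET_SIZE, Image.NEAREST)
    mask = np.asarray(mask)
    return (mask > 0).astype(np.float32)[..., np.newaxis]
```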
We propose a new MTL architecture based on 3 tasks: 1) COVID vs Normal vs Other Infections classification, 2) COVID lesion segmentation, and 3) image reconstruction. The first two tasks are the essential ones, while the third is added to enhance the extracted feature representation. In this work, we choose hard parameter sharing to share parameters between the different tasks (see Fig. 4). We create a common encoder for the three tasks which takes a CT scan as input; its output is then used for image reconstruction via a first decoder, for segmentation via a second decoder, and for COVID vs Normal vs Other Infections classification via a multi-layer perceptron. Each convolutional layer, denoted $C^{(m)}$, consists of $F^{(m)}$ feature maps, where $m$ is the layer number. For the first layer, $C^{(1)}$, each feature map is obtained by convolving the volume of interest with a weight matrix $W_i^{(1)}$, to which a bias term $b_i^{(1)}$ is added, where $i$ is the feature map number. The output then goes through a non-linear function $f(x)$, where $x$ is the input to a neuron. Each element of a feature map, $c_i^{(1)}$, is thus obtained by convolving the input $x$ with a kernel:

$$c_i^{(1)} = f\left(W_i^{(1)} * x + b_i^{(1)}\right)$$

The $F^{(1)}$ weight matrices (one matrix per feature map) are learned by looking at different positions of the input, leading to the extraction of descriptive features. The weight parameters are thus shared across all lesion or infection input sites, so that the layer has an equivariance property and is invariant to input lesion transformations (such as translation and rotation). It also results in sparse weights, which means that the kernel can detect small but meaningful features. In order to extract high-level features from the low-level ones obtained in the initial layer, further layers are added. Each feature map in these layers is obtained as follows:

$$c_i^{(m)} = f\left(\sum_{j} W_{ij}^{(m)} * c_j^{(m-1)} + b_i^{(m)}\right)$$

The encoder-decoder is based on the U-NET architecture [31] for both the reconstruction and segmentation tasks (Fig. 5). The encoder is used to obtain the disentangled feature representation. It includes convolutional blocks followed by skip connections. In order to maintain spatial information, we use a convolution with stride 2 in place of the pooling operation. Segmenting different regions in an image is likely to require different receptive fields. All convolutions are 3 x 3 and the number of filters increases from 64 to 1024. Each decoder level begins with an upsampling layer followed by a convolution that reduces the number of features by a factor of 2; the upsampled features are then concatenated with the features from the corresponding level of the encoder. We trained the model with a linear activation for the output and the mean squared error as the loss function ($L_{recon}$), using accuracy as the metric:

$$L_{recon} = \frac{1}{n}\sum_{i=1}^{n}\left(y_{true}^{(i)} - y_{predict}^{(i)}\right)^2$$

where $y_{true}$ is the true value and $y_{predict}$ is the predicted value. For segmentation, we used the same architecture as for reconstruction, except for the output activation function, which is a sigmoid. The loss function is based on the dice coefficient loss ($L_{seg}$), which is also used as the metric:

$$L_{seg} = 1 - \frac{2\sum_{i} y_{true}^{(i)}\, y_{predict}^{(i)} + s}{\sum_{i} y_{true}^{(i)} + \sum_{i} y_{predict}^{(i)} + s}$$

where $s$ is the smoothing factor used to avoid division by zero.
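As a direct reading of the two loss formulas above, here is a minimal TensorFlow sketch. The default value of the smoothing factor is an assumption, and y_true / y_pred stand for batched tensors.

```python
# Minimal sketches of the reconstruction (MSE) and segmentation (dice) losses
# defined above. The smoothing factor s defaults to 1.0 here (an assumption).
import tensorflow as tf

def l_recon(y_true, y_pred):
    """Mean squared error between the input slice and its reconstruction."""
    return tf.reduce_mean(tf.square(y_true - y_pred))

def l_seg(y_true, y_pred, s=1.0):
    """Dice coefficient loss; s avoids division by zero."""
    inter = tf.reduce_sum(y_true * y_pred)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
    return 1.0 - (2.0 * inter + s) / (union + s)
```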
The resulting set of feature maps encloses the entire local spatial information, as well as the hierarchical representation of the input. Each feature map is then flattened, and all elements are collected into a single vector $V$ of dimension $K$, providing the input for a fully connected hidden layer $h$ consisting of $H$ units. The activation of the $i$-th unit of the hidden layer $h$ is given by:

$$h_i = f\left(\sum_{k=1}^{K} W_{ik} V_k + b_i\right)$$

In detail, the output of the encoder is a tensor of size mini_batch x 32 x 32 x 1024, to which we add a convolutional layer followed by a max-pooling, and then a flatten operation to convert the data into a one-dimensional tensor for classification. The multi-layer perceptron consists of two Dense layers with 128 and 64 neurons respectively, with a dropout of 0.5 and the elu activation function. The last layer is a Dense layer with three neurons for image classification, using a sigmoid activation; binary cross-entropy is used as the loss function ($L_{class}$):

$$L_{class} = -\frac{1}{n}\sum_{i=1}^{n}\left[y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\right]$$

which is a special case of the multinomial cross-entropy loss function for $m = 2$, where $n$ is the number of patients and $y$ is the class label. The output layer consists of 3 neurons, each outputting a binary value $\hat{y}_{ij} \in \{0,1\}$ with $\sum_j \hat{y}_{ij} = 1\ \forall i$. The first neuron predicts the presence of COVID-19; the second neuron returns 1 if the patient is normal and 0 otherwise; and the third neuron returns 1 if the patient has another infection and 0 otherwise. The patient is considered positive if the COVID-19 neuron is activated. The network was trained with the Adam optimizer [20] with a learning rate of 0.0001. The global loss function ($loss_{glob}$) for the 3 tasks is defined as the combination of the three task losses:

$$L_{glob} = L_{recon} + L_{seg} + L_{class}$$

Our model was trained for 500 epochs with an early stopping of 10. Our method was implemented using the Keras library with a TensorFlow backend, and run on an NVIDIA Quadro P6000 GPU with 24 GB of memory and 128 GB of RAM. Three experiments were conducted to evaluate our model. Experiment 1: The first experiment consisted of tuning the hyperparameters and adding/removing tasks to find the best model, using only the training dataset. Several models were developed by combining the tasks two by two, and all three tasks, with different image resolutions (512 x 512 and 256 x 256). The combination of the first and second tasks (image reconstruction and infection segmentation, T1 and T2) is used to evaluate segmentation only, while the pair T1 and T3 is for classification. Experiment 2: The second experiment compares our model with the state of the art U-NET in order to evaluate performance on the segmentation task. Two U-NETs with different resolutions were trained: 512 x 512 and 256 x 256. Experiment 3: Different state of the art models were compared to ours on the classification task: AlexNet, VGG-16, VGG-19, ResNet50, 169-layer DenseNet, InceptionV3, Inception-ResNet v2 and EfficientNet. We also added an 8-layer deep neural network with 6 convolutional layers, each followed by a max-pooling and a dropout regularization of 25% to prevent the model from overfitting. The number of feature maps goes from 8 to 256, increasing by a factor of 2 between every two layers. We used 3 x 3 filters for the convolutions and 2 x 2 for the max-pooling, then a flatten followed by two Dense layers with 128 and 3 neurons respectively. A dropout of 50% is also applied to the first Dense layer to reduce and prevent overfitting. The activation function is elu for all layers except the last one, which uses a sigmoid. The loss function is binary cross-entropy, the metric is accuracy, and the optimizer is Adam. The CNN was optimized in order to ensure a fair comparison with our proposed model, and was trained for 500 epochs with an early stopping of 10, under the same conditions as our model.
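Putting together the architecture and training settings described above, the sketch below shows one plausible Keras implementation: a shared encoder with stride-2 convolutions and filters growing from 64 to 1024 (output 32 x 32 x 1024 for a 256 x 256 input), two U-NET-style decoders, an MLP head, Adam with learning rate 0.0001, 500 epochs and early stopping with patience 10. Block depths, the equal weighting of the three losses, the batch size and the data variables (x_train, y_mask, y_class and their validation counterparts) are assumptions, and the sketch glosses over the fact that only a subset of training images have lesion masks.

```python
# Sketch of the multi-task model: shared encoder, reconstruction and
# segmentation decoders, and an MLP classification head. Illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = layers.Input((256, 256, 1))

# --- shared encoder: stride-2 convolutions instead of pooling, 64 -> 1024 ---
skips, x = [], inputs
for f in (64, 128, 256):
    x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
    skips.append(x)  # kept for the decoders' skip connections
    x = layers.Conv2D(f, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2D(512, 3, padding="same", activation="relu")(x)
x = layers.Conv2D(1024, 3, padding="same", activation="relu")(x)  # 32x32x1024

def decoder(feat, name, activation):
    """U-NET-style decoder: upsample, halve features, concatenate skip."""
    for skip, f in zip(reversed(skips), (256, 128, 64)):
        feat = layers.UpSampling2D()(feat)
        feat = layers.Conv2D(f, 3, padding="same", activation="relu")(feat)
        feat = layers.Concatenate()([feat, skip])
    return layers.Conv2D(1, 1, activation=activation, name=name)(feat)

recon = decoder(x, "recon", "linear")   # reconstruction head (MSE)
seg = decoder(x, "seg", "sigmoid")      # segmentation head (dice loss)

# --- classification head: conv + maxpool + flatten + MLP(128, 64) + 3 ---
c = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
c = layers.MaxPooling2D()(c)
c = layers.Flatten()(c)
c = layers.Dropout(0.5)(layers.Dense(128, activation="elu")(c))
c = layers.Dropout(0.5)(layers.Dense(64, activation="elu")(c))
clf = layers.Dense(3, activation="sigmoid", name="clf")(c)

def l_seg(y_true, y_pred, s=1.0):
    """Dice loss, as sketched earlier."""
    inter = tf.reduce_sum(y_true * y_pred)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
    return 1.0 - (2.0 * inter + s) / (union + s)

model = Model(inputs, [recon, seg, clf])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss={"recon": "mse",                   # L_recon
          "seg": l_seg,                     # L_seg
          "clf": "binary_crossentropy"},    # L_class
    metrics={"clf": "accuracy"},
)

early_stop = tf.keras.callbacks.EarlyStopping(patience=10,
                                              restore_best_weights=True)
# x_train / y_mask / y_class: hypothetical arrays of slices, lesion masks
# and 3-class labels prepared as in the preprocessing sketch above.
model.fit(x_train,
          {"recon": x_train, "seg": y_mask, "clf": y_class},
          validation_data=(x_val, {"recon": x_val, "seg": y_mask_val,
                                   "clf": y_class_val}),
          epochs=500, batch_size=8, callbacks=[early_stop])
```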
To find the best hyperparameters, the influence of F, the number of feature maps (8 to 64), and of the number of neurons (128 to 4096) was evaluated, as well as different receptive field sizes (3 x 3, 5 x 5) and different mini-batch sizes (2 to 16). Several activation functions f(x) were also evaluated (relu, elu, selu and tanh); relu was finally chosen. For the validation methodology, we split the data into training, validation and test sets as shown in Table 1. Among the 349 COVID cases in the training set, the ground truth for the infection label (segmentation task) was available for 50 CT scans; twenty others were in the validation set and thirty in the test set. For normal patients, 325 were used in training, 50 in validation and 50 in test. For other infections (different kinds of pathology such as lung cancer), cases were selected randomly for training, validation and test. For a fair comparison, the other methods were trained, validated and tested on the same groups of data. The performance of the models was evaluated using the dice coefficient for the segmentation task, the area under the ROC curve (AUC) for the classification, and the accuracy (Acc), sensitivity (Sens) and specificity (Spec) for both [1], such that:

$$Sens = \frac{TP}{TP + FN}$$

where TP is the number of true positives, FN the number of false negatives, and TP + FN the number of patients classified as positive, or the segmented lesion region;

$$Spec = \frac{TN}{TN + FP}$$

where TN is the number of true negatives, FP the number of false positives, and TN + FP the number of patients classified as negative, or the non-segmented region. For each curve, the thresholds were determined using the method proposed by Fawcett [13], and the optimal cut-off point was defined using Youden's index.
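The metrics just defined translate directly into code. The sketch below uses scikit-learn for the ROC analysis and implements the Youden index cut-off (maximizing Sens + Spec - 1) explicitly; variable names are illustrative.

```python
# Evaluation sketch: dice, sensitivity, specificity, AUC and the Youden
# optimal cut-off point, following the definitions in the text.
import numpy as np
from sklearn.metrics import roc_curve, auc

def dice_coefficient(y_true, y_pred, smooth=1.0):
    inter = np.sum(y_true * y_pred)
    return (2.0 * inter + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

def sensitivity(tp, fn):
    return tp / (tp + fn)          # Sens = TP / (TP + FN)

def specificity(tn, fp):
    return tn / (tn + fp)          # Spec = TN / (TN + FP)

def roc_with_youden(y_true, y_score):
    """ROC thresholds (Fawcett) and the Youden's J optimal cut-off."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    j = tpr - fpr                  # Youden's index J = Sens + Spec - 1
    best = np.argmax(j)
    return auc(fpr, tpr), thresholds[best]

# y_true and y_score would be the COVID labels and predicted probabilities
# on the test set.
```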
Table 3. Classification results: Experiment 1 for optimizing hyperparameters and choosing the best combination of tasks; Experiment 3 for classification.

The main results of the three experiments are shown in Tables 2 and 3. The metrics include the dice coefficient, accuracy, sensitivity, specificity and the area under the ROC curve. The neural network was trained for 500 epochs with an early stopping of 10. In Figure 6, the learning curve and the loss curve obtained from the training and validation sets show the stability of our model: the training and validation losses decrease to a point of stability, with a small gap between them. Early stopping provides a robust mechanism to avoid overfitting, consistent with the behavior of our model. Experiment 1: As shown in Table 2 for segmentation and Table 3 for classification, the best dice coefficient (88%), accuracy (Acc = 94.7%) and area under the curve (AUC = 0.97) were obtained with the combination of the three tasks of image reconstruction, infection segmentation and image classification, with all images resized to 256 x 256. The results of 4 other configurations are also shown: multi-task learning with the higher resolution of 512 x 512, and the pairwise combinations of the tasks. The major differences between our best model and the higher-resolution model are in sensitivity (0.96 vs 0.94) and specificity (0.92 vs 0.85). Compared to the pairwise combination of T1 and T2 for segmentation, our model proved more performant, with an improvement of +9% in dice, and it achieved a higher AUC and accuracy than the pair T1 and T3 for classification. On the ROC curve in Figure 8, the advantage of combining all three tasks can be seen: the area under the curve is significantly better than when only two tasks are used. The same was observed for the segmentation-classification pair without reconstruction. These results confirm the usefulness of the reconstruction task for extracting meaningful features and improving the results of the other two tasks. For multi-class classification, as can be observed from the confusion matrix of our model in Figure 10, 47 of 50 COVID cases were classified correctly, while only 3 were misclassified as other infections. Similarly, only 2 normal patients were misclassified as other infections, while no normal patient was misclassified as COVID. Experiment 2: In Table 2, the best result for image segmentation was obtained with our method, with a dice coefficient of 88% versus 77.69% and 76.69% using U-NET at 256 x 256 and 512 x 512 resolutions respectively. The combination of reconstruction, segmentation and classification detects infection regions more accurately than the U-NET model alone. Figure 7 shows a comparison between our model and U-NET for infection segmentation. Experiment 3: The results of experiment 3 are also given in Table 3. We compared our multi-task deep learning model with multiple deep convolutional neural networks. The obtained results show that our model outperformed the CNNs in both accuracy and AUC. The ROC curve for experiment 3 is shown in Figure 9. Finally, we compared our model with state of the art methods on COVID-19 for classification and segmentation. Table 4 shows the results on the classification task, with accuracies varying from 66.67% in [24] to 92.6% in [38].

Method                            Modality   Acc (%)   Sens (%)   Spec (%)
[36]                              CT         86        -          -
Xu et al. [7]                     CT         86.7      -          -
Ozturk et al. (multiclass) [26]   X-ray      87.02     -          -
Li and Zhu [23]                   X-ray      88.9      -          -
Hemdan et al. [15]                X-ray      90        -          -
Zheng et al. [46]                 CT         90.8      -          -
Wang et al. [38]                  X-ray      92.6      -          -
Ours                              CT         94.67     96         92

Table 4. A quantitative comparison between our model and the state of the art for the classification task.

For segmentation, Zhou et al. [47], who performed only the segmentation task, achieved a dice of 61.0% using a modified U-NET and 69.1% using an attention mechanism. Other results reported in [9] reached a dice coefficient of 85.0%, which is lower than our model's 88.0%. The results for the segmentation task are shown in Table 5. We have developed a new multi-task deep learning model to jointly detect COVID-19 in CT images and segment the regions of infection. Our architecture is general, which means that it can be used for other segmentation-classification applications. We have also compared it with several state of the art algorithms such as U-NET and CNNs. To show the performance of our method, we tested the different combinations of tasks two by two, and all three tasks simultaneously, with different image resolutions. Our motivation was to leverage useful information contained in multiple related tasks to improve both segmentation and classification performances. Multi-task learning handles small-data problems well, even though each task may have a relatively small dataset. In our study, we were able to increase the size of the database, 1044 images in total, by combining several databases to learn the disentangled representation. Although we only have a database of 100 images for segmentation, thanks to the learned latent representation we obtained good segmentation results.
The state of the art U-NET has shown impressive results for image segmentation in recent years, as have AlexNet, VGG-16, VGG-19, ResNet50 and DenseNet for classification. However, these segmentation and classification methods usually require a large amount of annotated data to work efficiently. Given the lack of annotated data in the medical imaging field, other mechanisms can be included to improve their generalization ability. In this work, we propose a multi-task learning approach that can jointly improve the U-NET model and classification models by enhancing their shared encoder. Indeed, by using a shared encoder for the classification and segmentation tasks, the network is able to extract more meaningful information from the CT scan relating to the COVID-19 characteristics, which improves both tasks simultaneously with fewer annotated datasets.

Method                                       Dice coefficient
U-Net + DL, Zhou et al. (2020) [47]          61.0%
U-Net + FTL, Zhou et al. (2020) [47]         66.7%
AU-Net + DL, Zhou et al. (2020) [47]         68.5%
AU-Net + FTL, Zhou et al. (2020) [47]        69.1%
Backbone+PPD+RA+EA, Fan et al. (2020) [12]   73.9%
JCS, Wu et al. (2020b) [41]                  77.5%
JCS', Wu et al. (2020b) [41]                 78.3%
U-net, Chen et al. (2020b) [9]               82.0%
M-A, Chen et al. (2020b) [9]                 85.0%
M-R, Chen et al. (2020b) [9]                 84.0%
Ours                                         88.0%

Table 5. A quantitative comparison between our proposed model and the state of the art for the segmentation task.

Furthermore, adding a third task for image reconstruction allows the encoder to refine the image characteristics, bringing a further improvement to both the classification and segmentation tasks. Thus, multi-task learning can be used to improve U-NET and other classification models, especially when annotated data are limited. In addition to the many advantages of using CT images to spot COVID-19 patients early and isolate them, deep learning methods using CT images can serve as a tool to assist physicians fighting this new spreading disease, as they can be used not only to classify and segment images in the medical field, but also to predict the outcome of treatment [4, 28]. In this paper, we proposed a multi-task learning approach to detect COVID-19 from CT images and segment the regions of interest simultaneously. Our method can improve segmentation results even when few segmentation ground truths are available, thanks to the classification ground truths, which are relatively easy to obtain compared to those for segmentation. Our method shows very promising results. It outperformed state of the art methods used alone for image segmentation, such as U-NET, or for image classification, such as CNNs. We have shown that by jointly combining these two tasks, the method improves both segmentation and classification performances. Moreover, by adding a third task, image reconstruction, the encoder can extract a meaningful feature representation which helps the other two tasks (classification and segmentation) improve their performances even further. From experiment 2, we observe a clear improvement when using the multi-task approach, with a dice coefficient of 88% for segmentation, 10% higher than when using the state of the art U-NET alone. With a specificity of 99.7% and a sensitivity of 90.2%, the segmentation results outperformed the models without the multi-task learning approach and those combining only a pair of tasks. For the classification results, with an AUC of 0.97 and an accuracy higher than 94%, our model shows a large improvement compared to other models, whose results range from 56% to 90%.
Our method uses only CT images; other information, such as patient data, is not included in our architecture. In addition, the performance of our method was evaluated on a dataset of 150 patients. In future work, we will study new types of networks to take other useful information into account, and we will test our method on a larger database to confirm its good performance.

References

[1] A hybrid multilayer filtering approach for thyroid nodule segmentation on ultrasound images
[2] Covid-caps: A capsule network-based framework for identification of covid-19 cases from x-ray images
[3] 3-d rpet-net: development of a 3-d pet imaging convolutional neural network for radiomics analysis and outcome prediction
[4] Radiomics-net: Convolutional neural networks on fdg pet images for predicting cancer treatment response
[5] Segnet: A deep convolutional encoder-decoder architecture for image segmentation
[6] Deep learning approach for microarray cancer data classification
[7] Deep learning system to screen coronavirus disease
[8] Multitask learning
[9] Residual attention u-net for automated multi-class segmentation of covid-19 chest ct images
[10] Multi-column deep neural networks for image classification
[11] Imagenet: A large-scale hierarchical image database
[12] Inf-net: Automatic covid-19 lung infection segmentation from ct images
[13] An introduction to roc analysis
[14] Guest editorial deep learning in medical imaging: Overview and future promise of an exciting new technique
[15] Covidx-net: A framework of deep learning classifiers to diagnose covid-19 in x-ray images
[16] Matrix capsules with em routing
[17] Densely connected convolutional networks
[18] Ai-assisted ct imaging analysis for covid-19 screening: Building and deploying a medical ai system in four weeks
[19] Cnn-based segmentation of medical imaging data
[20] Adam: A method for stochastic optimization
[21] Imagenet classification with deep convolutional neural networks, in: Advances in neural information processing systems
[22] Gradient-based learning applied to document recognition
[23] Covid-xpert: An ai powered population screening of covid-19 cases using chest radiography images
[24] Within the lack of chest covid-19 x-ray dataset: A novel detection model based on gan and deep transfer learning
[25] Automatic detection of coronavirus disease (covid-19) using x-ray images and deep convolutional neural networks
[26] Automated detection of covid-19 cases using deep neural networks with x-ray images
[27] Deep transfer learning based classification model for covid-19 disease
[28] Feature selection for outcome prediction in oesophageal cancer using genetic algorithm and random forest classifier
[29] Convolutional neural network based detection and judgement of environmental obstacle in vehicle operation
[30] You only look once: Unified, real-time object detection
[31] U-net: Convolutional networks for biomedical image segmentation, in: International Conference on Medical image computing and computer-assisted intervention
[32] An overview of multi-task learning in deep neural networks
[33] Detection of coronavirus disease (covid-19) based on deep features
[34] Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for covid-19
[35] Classification of covid-19 patients from chest ct images using multi-objective differential evolution-based convolutional neural networks
[36] Deep learning enables accurate diagnosis of novel coronavirus (covid-19) with ct images
[37] Deep neural networks for object detection
[38] Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest radiography images
[39] A deep learning algorithm using ct images to screen for corona virus disease (covid-19)
[40] Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases
[41] Jcs: An explainable covid-19 diagnosis system by joint classification and segmentation
[42] Deep learning system to screen coronavirus disease 2019 pneumonia
[43] How transferable are features in deep neural networks?
[44] A survey on multi-task learning
[45] Covid-ct-dataset: A ct scan dataset about covid-19
[46] Deep learning-based detection for covid-19 from chest ct using weak label
[47] An automatic covid-19 ct segmentation based on u-net with attention mechanism