authors: Hassan, Haseeb; Ren, Zhaoyu; Zhou, Chengmin; Khan, Muazzam A.; Pan, Yi; Zhao, Jian; Huang, Bingding
title: Supervised and Weakly Supervised Deep Learning Models for COVID-19 CT Diagnosis: A Systematic Review
date: 2022-03-05
journal: Comput Methods Programs Biomed
DOI: 10.1016/j.cmpb.2022.106731

Artificial intelligence (AI) and computer vision (CV) methods have become reliable for extracting features from radiological images, aiding COVID-19 diagnosis ahead of pathogenic tests and saving critical time for disease management and control. This review article therefore collates numerous deep learning-based COVID-19 computed tomography (CT) imaging diagnosis studies, providing a baseline for future research. Compared to previous review articles on the topic, this study organizes the collected literature differently, using a multi-level arrangement. For this purpose, 71 relevant studies were found using a variety of trustworthy databases and search engines, including Google Scholar, IEEE Xplore, Web of Science, PubMed, Science Direct, and Scopus. We classify the selected literature into multi-level machine learning groups, namely supervised and weakly supervised learning. Our review reveals that weak supervision has been adopted more extensively for COVID-19 CT diagnosis than supervised learning. Weakly supervised (conventional transfer learning) techniques can be utilized effectively for real-time clinical practice by reusing sophisticated features rather than over-parameterizing the standard models. Few-shot and self-supervised learning are recent trends for addressing data scarcity and improving model efficacy. Deep learning (artificial intelligence) based models are mainly utilized for disease management and control. This review therefore helps readers comprehend the deep learning perspective on in-progress COVID-19 CT diagnosis research.

The 2019 novel coronavirus disease (COVID-19) is a life-threatening and infectious disease with few therapeutic options. Coronavirus sickness can appear in various ways, from minor symptoms to serious illness. Fever, cough, shortness of breath, muscle ache, disorientation, headache, sore throat, rhinorrhea, nausea, and vomiting are common symptoms [1, 2]. Pneumonia in both lungs, organ failure, respiratory failure, heart issues, acute renal injury, and bacterial infections are other consequences [3-7]. Early and accurate diagnosis of this disease is essential [8, 9]. Diagnostic tests can be used to guide coronavirus infection management, quarantine, and self-isolation. It is important to emphasize that in asymptomatic cases, CT imaging diagnosis is not advised [51, 52]. It can be beneficial for symptomatic COVID-19 patients (those with pulmonary symptoms). For instance, an initial study revealed bilateral lung involvement and ground-glass opacities (GGO) in most hospitalized patients [13, 53, 54]. Vasculature enlargement, bilateral abnormalities, lower lobe involvement, and posterior predilection are other CT findings with a high incidence, described in more than 70% of RT-PCR test-proven COVID-19 cases. Consolidation, linear opacity, septal thickening, crazy-paving pattern, air bronchogram, pleural thickening, halo sign, bronchiectasis, nodules, and bronchial wall thickening have all been described in 10%-70% of RT-PCR test-proven COVID-19 patients.
Low-incidence CT manifestations, reported as uncommon in RT-PCR test-proven COVID-19 cases, include pleural effusion, lymphadenopathy, tree-in-bud sign, central lesion distribution, pericardial effusion, and cavitating lung lesions [39, 41, 52, 55, 56]. Deep learning approaches have been widely explored to diagnose COVID-19 based on CT images [57, 58]. Several review articles on the subject had been published prior to our investigation, such as [59-66]. In [59], X-ray and computed tomography (CT) image-based studies were described in terms of image localization, segmentation, registration, and classification for COVID-19 diagnosis. Ghaderzadeh et al. [60] provided an overview of COVID-19 deep diagnosis models based on X-ray and CT modalities. In [61], the authors focused on summarizing and applying state-of-the-art deep learning models for COVID-19 medical image processing. Samuel et al. [62] offered a review of COVID-19 diagnosis, medication, screening, and prediction strategies based on machine learning and artificial intelligence. Nguyen et al. [63] focused on COVID-19 medical image processing, data analytics, text mining, natural language processing, the Internet of Things (IoT), computational biology, and medicine. Hussain et al. [62] produced a list of the most cutting-edge AI applications for COVID-19 administration; that work also classified many AI techniques utilized in clinical data analysis, such as neural systems, traditional SVMs, and edge computing. For the COVID-19 classification task [67], Ozsahin et al. [64] categorized thirty studies, covering COVID-19/normal, COVID-19/non-COVID-19, COVID-19/non-COVID-19 pneumonia, and COVID-19/non-COVID-19 severity. Shao et al. [65] researched the sensitivity and utility of chest CT scans for detecting COVID-19 and its potential surgical uses. The study acknowledged and addressed the sensitivity of CT for the diagnosis of COVID-19 positive cases, both symptomatic and asymptomatic. According to its findings, CT sensitivity ranges from 57% to 100% for symptomatic and 46% to 100% for asymptomatic COVID-19 patients, whereas RT-PCR sensitivity ranges from 39% to 89%. Recently, Islam et al. [66] published a taxonomy of deep learning algorithms for CT and X-ray modalities. Their review highlighted the data partitioning techniques, various performance measures, and well-known datasets developed for COVID-19 diagnosis. Readers can obtain considerable knowledge from these review articles. However, these review articles do not adopt a systematic approach to categorize the COVID-19 CT literature, and numerous technical disciplines are ignored. A recently published review [68] explored the multi-level categorization of supervised and weakly supervised learning methods for medical image segmentation. Inspired by that, this review article aims to arrange the COVID-19 CT-based deep models into multi-level learning groups, i.e., supervised and weakly supervised learning. However, our review differs from that work in many aspects. For instance, our topic is COVID-19 CT diagnosis, and for this purpose, we collect, classify, and analyze 71 primary and current studies. We provide a short description of each selected study and capture the most crucial information, such as dataset details, adopted frameworks, and key results. Fig. 1 depicts the overall structure of our envisioned approach for the collected literature.
Given the large amount of COVID-19 CT diagnostic literature available, we filtered out irrelevant articles and used citation counts as a selection criterion; most of the included techniques are highly cited. Different reliable databases, such as Google Scholar, IEEE Xplore, Web of Science, PubMed, Science Direct, and Scopus, were used to obtain 71 pertinent studies on the given topic. We examine the selected approaches in terms of backbone networks, network blocks, and loss functions for supervised learning. For weakly supervised learning, we analyze the collected literature in terms of transfer learning and data augmentation techniques. Our review is organized in a cascaded manner and aims to make it easier for researchers to find innovations that improve the accuracy of COVID-19 CT-based diagnosis. The remainder of this work is structured as follows. In Section 2, we examine the supervised learning-based literature. In Section 3, we group the collected literature based on weak supervision. Section 4 contains a discussion and future outlook on the topic. Deep learning and machine learning are both subfields of artificial intelligence. Both can be divided into three types: supervised, weakly supervised, and unsupervised learning. Supervised learning enables us to acquire or generate data based on prior knowledge. In a supervised task, the training data should be appropriately selected and labeled. COVID-19 CT diagnosis widely utilizes supervised learning techniques, involving network backbones, network blocks, and the design of loss functions. The following subsections categorize our collected literature accordingly. Backbone networks describe feature extractor networks. These feature extractors compute features from the input image, which are then upsampled by a simple decoder module to create the final feature maps. The previously proposed approaches utilized Convolutional Neural Networks (CNNs) [69], U-Net [70], UNet++ [71], 3D U-Net [72], V-Net [73], and Recurrent Neural Network (RNN) architectures, which are grouped in this category. To detect COVID-19 and community-acquired pneumonia, Li et al. [74] presented a supervised learning COVNet architecture based on ResNet-50 [75]. The proposed model detected COVID-19 with 90% sensitivity and 96% specificity on an independent testing set. Serte et al. [76] also leveraged the power of a deep CNN model, ResNet-50, by fusing image-level predictions to diagnose COVID-19. DeepPneumonia [77] was introduced to distinguish individuals with COVID-19 from those with bacterial pneumonia; ResNet-50 and the Feature Pyramid Network (FPN) [78] served as the foundation of that architecture. Singh et al. [79] proposed a CNN-based multi-objective differential evolution (MODE) framework to classify patients as COVID-19 positive or negative. To calculate the infection probability of COVID-19, Xu et al. [80] used various CNN models; the authors further classified COVID-19, non-COVID cases, and influenza-A viral pneumonia (IVAP). The technique proposed by Sun et al. [81] targeted deep high-level features. In their method, location-specific features were extracted from the chest CT image. After that, a deep forest model [82] was used to learn latent high-level representations of the extracted features. Finally, feature selection and classifier training were integrated adaptively into a cohesive framework for COVID-19 prediction.
Rahimzadeh et al. [83] used ResNet50V2 [84] as a backbone and categorized input CT scans as COVID-19 or normal. Using a feature pyramid network (FPN), the proposed technique explored several resolutions of the input image [74], which greatly enhanced classification performance. In addition, Pu et al. [85] created multiple classifiers based on three-dimensional (3D) CNNs to distinguish COVID-19 from community-acquired pneumonia (CAP). According to the authors, radiologists' interpretation of CT scans has, compared to the proposed method, limited ability to distinguish between COVID-19 and CAP cases. Another supervised technique [86] developed a sequential CNN to detect COVID-19 by analyzing CT images; the model achieved an accuracy of 92.5%. Apart from CNNs, many COVID-19 CT diagnosis works relied on U-Net [70] and its variants, such as 3D U-Net [72] and U-Net++, as given in Table 2. U-Nets are fast to train and generate highly detailed segmentation maps using minimal training samples [88]. Gozes et al. [53] proposed a technique using commercial software with a trained U-Net, including a model pre-trained on extensive CT data. The authors demonstrated that AI-based models could support COVID-19 detection with high accuracy. Amine et al. [89] proposed a multitask deep learning method for COVID-19 classification and segmentation. The proposed method used one encoder and two decoders for image reconstruction and infection segmentation; the final step utilized fully connected layers to classify COVID and non-COVID cases. Pu et al. [90] adopted a U-Net-based framework to segment infected lung regions, followed by an identification process; heatmaps were used to visualize and assess progression. Similarly, another U-Net-based framework [91] was proposed in which lung spaces and COVID-19 anomalies were segmented from chest CT scans. Jun et al. [93] built a COVID-19 detection system on top of U-Net++ [71], with ResNet-50 as the backbone. An external dataset containing 100 patients was used to evaluate the model's performance, and five evaluation metrics were reported: accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Ni et al. [94] presented a COVID-19 detection technique in which lesion detection and segmentation were conducted; the authors claimed that the algorithm's performance is better than that of radiologists. Jin et al. [96] designed a combined segmentation and classification model. Segmentation was used to delineate lung lesion regions, followed by classification of the lesion regions into COVID and non-COVID. For the segmentation task, several models were considered, such as fully convolutional networks (FCN-8) [99], U-Net [70], V-Net [73], and 3D U-Net++ [71]. For the classification task, the authors considered ResNet-50 [75], Inception networks [100-102], DPN-92 [103], and Attention ResNet-50 [104]. Dominik et al. [97] implemented a robust segmentation model for lungs and COVID-19 infected regions based on the 3D U-Net architecture; the proposed method performs comparatively well for segmentation and improves medical image analysis with limited data.
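To make the encoder-decoder idea behind these U-Net-style backbones concrete, the following is a minimal, hedged sketch in PyTorch of a single-level U-Net with one skip connection. It is an illustration of the general architecture, not the exact network of any cited study; the channel counts and input size are arbitrary assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Single-level U-Net sketch: encoder -> bottleneck -> decoder with a skip connection."""
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc = conv_block(in_ch, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec = conv_block(64, 32)            # 64 = 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)                          # high-resolution features
        b = self.bottleneck(self.pool(e))        # coarse, semantic features
        d = self.up(b)                           # upsample back to input scale
        d = self.dec(torch.cat([e, d], dim=1))   # fuse the skip connection
        return self.head(d)                      # per-pixel class logits

if __name__ == "__main__":
    model = TinyUNet()
    logits = model(torch.randn(1, 1, 128, 128))  # one grayscale CT slice
    print(logits.shape)                          # torch.Size([1, 2, 128, 128])
```

The skip connection is what lets such models produce detailed segmentation maps from relatively few training samples, since fine spatial information bypasses the downsampling path.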
As given in Table 3, some works used Recurrent Neural Networks (RNNs) to detect and diagnose COVID-19. An RNN is a class of artificial neural networks that allows temporal dynamic behavior; more simply, it allows previous outputs to be used as inputs while maintaining hidden states. The Long Short-Term Memory (LSTM) network [105] is one popular example. Most RNN-based techniques are utilized for predicting the spread of COVID-19 [106-109]. For instance, Yang et al. [110] incorporated an LSTM model with SEIR [111] to predict COVID-19 in China. Many LSTM-based techniques targeted the X-ray modality [112-114], and some methods are data mining and prediction based [109, 115, 116]. However, limited literature exists that utilizes CT imagery with RNN models. For example, a DeepSense model (a data mining-based hybrid model) [117] was developed to diagnose the medical conditions of COVID-19 patients. The developed model combined a convolutional neural network (CNN) and a recurrent neural network (RNN), which can extract and classify the related features of COVID-19 lesions in the lungs. The technique presented by Hassan et al. [118] extracted relevant features using Q-deformed entropy handcrafted features to diagnose COVID-19; an LSTM network was then used to precisely discriminate between COVID-19, pneumonia, and healthy cases. This section categorizes the collected literature based on dense connections, multi-scale designs, attention mechanisms, and inception. A dense connection is used to design a special convolutional neural network. The Dense Convolutional Network (DenseNet) [122] connects each layer to every other layer in a feed-forward fashion. Such convolutional neural networks have also been extended to COVID-19 detection, as they can simultaneously extract the shallow features and the inner representation of the image. For instance, Yang et al. [123] designed a DenseNet-based model to classify images as COVID-19 or healthy; the proposed model was evaluated in terms of sensitivity, specificity, and accuracy. Liu et al. [124] presented a modified DenseNet-264 model to screen and diagnose COVID-19 infected patients. With the rapid development of deep learning, many architectures have been designed, such as multi-scale information fusion [125, 126]. Such architectures can effectively enhance the context information of networks and extract richer semantic information; however, they cannot restore the loss of detailed information caused by the pooling process. A method [127] used a multi-scale convolutional neural network (MSCNN) to diagnose COVID-19. The proposed model performed well at both the slice level and the scan level and achieved promising COVID-19 detection results. To overcome the loss of detailed information caused by pooling operations, Chen et al. [128] proposed the atrous spatial pyramid pooling (ASPP) module to improve image segmentation results. Such techniques have also been extended to COVID-19 detection and diagnosis. For instance, Qingsen et al. [129] proposed COVIDSegNet to segment COVID-19 infection regions and the entire lung from chest CT images. The authors included a Feature Variation (FV) block to address the difficulty of distinguishing COVID-19 pneumonia from the lung, and introduced Progressive Atrous Spatial Pyramid Pooling (PASPP), which progressively aggregates information and obtains more useful contextual features. Another technique, by Mohamad et al. [130], employed the EfficientNet architecture as the backbone and extracted feature maps at varied scales from CT scans. The obtained multi-scale feature maps were then passed through atrous convolutions at various rates to generate denser features, which aided COVID-19 detection. These function-block methods are summarized in Table 4 (COVID-19 CT diagnosis based on dense connections, multi-scale, attention mechanisms, and inception).
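To illustrate the atrous multi-scale idea discussed above, the hedged sketch below builds a small ASPP-style block in PyTorch: parallel dilated convolutions at different rates whose outputs are concatenated and fused. The dilation rates and channel widths are illustrative assumptions, not the settings of COVIDSegNet [129] or PASPP.

```python
import torch
import torch.nn as nn

class SimpleASPP(nn.Module):
    """ASPP-style block: parallel atrous convolutions at several dilation rates."""
    def __init__(self, in_ch=64, out_ch=64, rates=(1, 6, 12)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        # A 1x1 convolution fuses the concatenated multi-scale responses.
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        multi_scale = [self.act(branch(x)) for branch in self.branches]
        return self.act(self.fuse(torch.cat(multi_scale, dim=1)))

if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)    # feature map from a backbone
    print(SimpleASPP()(feats).shape)      # torch.Size([1, 64, 32, 32])
```

Because dilation enlarges the receptive field without additional pooling, context is aggregated at several scales while the spatial resolution of the feature map is preserved.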
To diagnose COVID-19 from CT images, some recent works adopted the attention mechanism [136, 137]. The attention mechanism is the idea of attaining focus, paying greater attention to certain factors when processing the data. It is one of the most prominent ideas in deep learning, and it has also been adopted for COVID-19 detection. A method [131] introduced a dual-sampling attention network to diagnose COVID-19 versus CAP and utilized the VB-Net toolkit segmentation [138] for pneumonia infection regions to ensure that predictions were based on infected regions. Liu et al. [132] developed a lesion-attention deep neural network (LA-DNN) to predict COVID-19 positive or negative cases. Wang et al. [93] proposed a novel multitask prior-attention residual learning model to screen out COVID-19 and to identify pneumonia types between COVID-19 and interstitial lung disease (ILD); the proposed model coupled two 3D-ResNets into a single model to perform these tasks. Another attention-based work introduced the SCOAT-Net framework [134], a coarse-to-fine attention network for segmenting COVID-19 lung opacification from CT images. The method embedded spatial and channel-wise attention mechanisms, achieving comparatively better performance. Meanwhile, Inception-based [101] methods have also been introduced. Inception modules allow multiple filter sizes within a single block, instead of being restricted to a single filter size; the resulting feature maps are concatenated and passed on to the next layer. Alom et al. [135] proposed an inception-based method that targeted both the X-ray and CT imaging modalities for COVID-19 detection, using an Inception Residual Recurrent Convolutional Neural Network. The proposed method further comprises segmentation of COVID-19 infected regions, inspired by Nabla-Net [139]. Note that most of the COVID-19 diagnostic models explored these networks with a transfer learning strategy. Apart from backbone networks and function blocks, the selection of loss functions is also essential for improving network efficiency. Therefore, some works focused on improving COVID-19 CT diagnosis through loss function design; such designs can also help avoid the class imbalance problem [140]. Li et al. [141] proposed a stacked auto-encoder detector model; four auto-encoders were stacked and further connected to a dense layer and a softmax classifier. A new classification loss is created by superimposing a reconstruction loss, improving the model's detection accuracy. Another method [142] used a U-Net-based segmentation network, incorporating spatial and channel attention into the U-Net architecture to capture rich contextual relationships for better feature representation; the proposed method introduced the focal Tversky loss to cope with small lesion segmentation. Saeedizadeh et al. [143] trained an architecture similar to the U-Net model to detect ground-glass regions. A regularization term added to the loss function promotes connectivity of the segmentation map for COVID-19 pixels; the proposed model was named "TV-UNet" because it uses a 2D-anisotropic total-variation term.
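As a concrete example of such loss design, the sketch below shows a generic (focal) Tversky loss for binary segmentation in PyTorch. This is a standard textbook formulation rather than the exact implementation of [142]; the weights alpha, beta, and gamma are illustrative assumptions.

```python
import torch

def focal_tversky_loss(logits, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
    """Focal Tversky loss for binary segmentation.

    logits: raw network outputs, shape (N, 1, H, W)
    target: binary ground-truth masks, same shape
    alpha/beta weight false negatives/false positives; gamma focuses on hard cases.
    """
    probs = torch.sigmoid(logits)
    p = probs.flatten(1)
    t = target.flatten(1)
    tp = (p * t).sum(dim=1)
    fn = ((1 - p) * t).sum(dim=1)
    fp = (p * (1 - t)).sum(dim=1)
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return ((1 - tversky) ** gamma).mean()

if __name__ == "__main__":
    logits = torch.randn(2, 1, 64, 64)
    masks = (torch.rand(2, 1, 64, 64) > 0.9).float()  # sparse lesion pixels
    print(focal_tversky_loss(logits, masks).item())
```

Setting alpha larger than beta penalizes missed lesion pixels more heavily than false alarms, which is why Tversky-style losses are often preferred over plain Dice when lesions are small and classes are imbalanced.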
Wang et al. [144] developed a noise-robust Dice loss for robustness against label noise, together with a COVID-19 pneumonia lesion segmentation network (COPLE-Net). The noise-robust Dice loss and COPLE-Net were further combined in a teacher model maintained as an Exponential Moving Average (EMA) of a student model. The proposed loss behaved more robustly to noisy labels than standard losses, and the COPLE-Net technique achieved higher segmentation performance. Weakly supervised learning is a variant of supervised learning that lies between fully supervised and unsupervised learning. Supervised learning needs large amounts of annotated data to train and test the learning models; however, it is not easy to collect or generate large-scale, richly annotated data in most cases. By contrast, weakly supervised learning models require limited annotations, and most of the data remain unlabeled. Thus, it has recently been utilized extensively in medical imaging. Apart from its widespread applications in other areas of medical science, it has also been adopted for COVID-19 CT analysis. For instance, an attention-based weakly supervised framework [148] was presented to diagnose COVID-19 and bacterial pneumonia; the proposed method achieved an overall accuracy of 98.6% and an AUC of 98.4%. Similarly, another attempt was made by Han et al. [149] with weak labels to achieve a more accurate and interpretable analysis of COVID-19 CT diagnosis; the proposed approach had a Cohen kappa score of 95.7%, an overall accuracy of 97.9%, and an AUC of 99.0%. In [150], a weakly supervised framework was developed for COVID-19 classification and lesion localization, where a pre-trained U-Net was used for lung region segmentation. The infection probability was predicted based on the segmented 3D lung regions, and the algorithm achieved an ROC AUC of 0.959 and a PR AUC of 0.976. In essence, weak supervision is a branch of machine learning where noisy, limited, or imprecise sources provide supervision signals for labeling large amounts of training data in a supervised manner. Weak supervision can be further decomposed into transfer learning and data augmentation procedures. Thus, in the subsequent sections, we group the COVID-19 CT diagnosis methods that adopted transfer learning and data augmentation techniques. Transfer learning is a method of repurposing a model or its knowledge for another task. In the context of COVID-19 diagnosis, extensive research efforts have been made by employing transfer learning. However, the literature on transfer learning has undergone multiple revisions, and the terms associated with it have been used loosely and frequently interchangeably. There are various types of transfer learning approaches, including, in our case, domain adaptation; they are all linked in a few aspects and attempt to solve similar problems [151]. The remaining COVID-19 CT diagnosis literature is further organized into subsections based on pre-trained models, few-shot learning, and domain adaptation. In computer vision, transfer learning is commonly expressed through pre-trained models. A pre-trained model has been trained on a large benchmark dataset to address a problem comparable to the one at hand. Due to the computational cost and complexity of training new models, it is common to import and reuse such models from the published literature (e.g., VGG, Inception, MobileNet). For example, Yu et al. [152] modified GoogLeNet, used the transfer learning strategy, and proposed a GoogLeNet-COD model.
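The following hedged sketch shows the generic pre-trained-backbone workflow that many of these studies follow: load ImageNet weights, freeze the feature extractor, and retrain a small classification head for COVID-19 versus non-COVID-19 CT slices. It uses torchvision's ResNet-50 as an example backbone and is not the exact pipeline of any cited paper.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_covid_classifier(num_classes=2, freeze_backbone=True):
    """ResNet-50 pre-trained on ImageNet, with a new head for CT classification."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for param in model.parameters():
            param.requires_grad = False       # reuse features, train only the head
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

if __name__ == "__main__":
    model = build_covid_classifier()
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    # Dummy batch standing in for preprocessed CT slices replicated to 3 channels.
    images = torch.randn(4, 3, 224, 224)
    labels = torch.tensor([0, 1, 0, 1])
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(float(loss))
```

In practice, later layers of the backbone are often unfrozen and fine-tuned with a small learning rate once the new head has converged, which is the usual compromise between reusing ImageNet features and adapting them to CT appearance.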
The authors suggested that the dropout layer and transitional layer are necessary for a better computer-aided detection (CAD) system. In [153], a diagnosis method was proposed that initially extracted the region of interest (ROI) as input images for the training and validation cohorts and then trained a modified Inception network on the extracted ROI images for further feature extraction; accuracy, sensitivity, and specificity were reported as the primary evaluation metrics. Wang et al. [154] proposed a two-step transfer learning prognostic model and claimed that their model could benefit medical resource optimization and COVID-19 prevention. The proposed system not only identified COVID-19 but also visualized the suspicious infected lung areas using heat maps. Aayush et al. [155] used pre-trained neural networks to classify COVID-positive and COVID-negative cases. The authors of [133] created a model using the Self-Trans method. To limit the risk of overfitting and to learn dominant yet unbiased characteristics, they adopted self-supervised learning combined with a transfer learning strategy; the suggested framework classified chest CT as COVID-positive or COVID-negative. Furthermore, the authors compiled a publicly accessible dataset containing hundreds of positive COVID-19 CT scans. Kassania et al. [162] used state-of-the-art deep CNN descriptors to extract highly representative features from chest X-ray and CT images to differentiate between COVID-19 and healthy participants. Dilbag et al. [163] constructed a deep transfer learning model based on densely connected convolutional networks (DCCNs), ResNet152V2, and VGG16 to categorize the suspected cases as COVID-19, TB, pneumonia, or healthy. Fu et al. [164] proposed a transfer learning strategy in which they adopted ResNet-50 weights pre-trained on ImageNet and differentiated COVID-19 from other viral pneumonia. Pham et al. [165] focused their research on 16 pre-trained CNNs; in terms of accuracy, sensitivity, specificity, F1-score, and area under the curve, DenseNet-201 performed admirably. Their research showed that using transfer learning directly on the input slice yields better results than data augmentation-based training. Khan et al. [166] proposed an optimized deep learning (DL) CT scheme to distinguish between COVID-19 infected and normal patients. In their proposed method, contrast enhancement was used to improve the quality of the original images, and the pre-trained DenseNet-201 [122] classifier was then trained following the transfer learning methodology. Table 6 summarizes some conventional transfer learning methods proposed for COVID-19 CT analysis. Domain adaptation is a subcategory of transfer and weakly supervised learning. The capacity to apply an algorithm trained in one or more "source domains" to a different (but related) "target domain" is known as domain adaptation. In domain adaptation, the source and target domains have the same feature space (but distinct distributions), whereas transfer learning also includes scenarios where the target domain's feature space differs from the source feature space or spaces [169-171]. The intuition behind this is that deep neural networks have a lot of capacity to learn representations from a single dataset, and some of that information can be reused for future tasks [172]. Such approaches could be adopted when there is a shortage of training data.
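One widely used, generic way to realize unsupervised domain adaptation is adversarial feature alignment with a gradient reversal layer, in the spirit of DANN. The sketch below is a hedged illustration of that concept only; it is not the specific mechanism of COVID-DA [173], the GAN-based alignment in [174], or DASC-Net [175], and the encoder and dimensions are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class DANNModel(nn.Module):
    """Shared encoder, a label classifier, and a domain discriminator."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, 2)     # COVID-19 vs. non-COVID-19
        self.domain_head = nn.Linear(feat_dim, 2)    # source vs. target domain

    def forward(self, x, lamb=1.0):
        feats = self.encoder(x)
        class_logits = self.classifier(feats)
        # Reversed gradients push the encoder toward domain-invariant features.
        domain_logits = self.domain_head(GradReverse.apply(feats, lamb))
        return class_logits, domain_logits

if __name__ == "__main__":
    model = DANNModel()
    x = torch.randn(8, 1, 32, 32)                    # toy stand-in for CT patches
    class_logits, domain_logits = model(x)
    print(class_logits.shape, domain_logits.shape)
```

Training minimizes the label loss on annotated source data and the domain loss on both domains; because the gradient is reversed before reaching the encoder, the encoder learns features the domain head cannot separate.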
Similarly, in the case of COVID-19 CT diagnosis, the scarcity of training data has been addressed through domain adaptation. For instance, COVID-DA is a domain adaptation method introduced by Zhang et al. [173] that requires only a few COVID-19 annotations; the suggested technique effectively diagnoses COVID-19. Chen et al. [174] also adopted a domain adaptation strategy to segment COVID-19 CT lung infections. The authors used limited real data without annotations and more annotated synthetic data to jointly train a U-Net segmentation network, and introduced a conditional GAN for adversarial training to overcome the domain mismatch; the proposed network significantly outperformed several state-of-the-art methods. To address varied infections and domain shift concerns in COVID-19 datasets, Jin et al. [175] presented a domain adaptation-based self-correction model (DASC-Net). DASC-Net contains a novel attention and feature domain enhanced domain adaptation model (AFD-DA) to handle domain shifts and a self-correction learning process to refine segmentation results; compared with other state-of-the-art methods, DASC-Net showed quite promising performance. Li et al. [176] also tackled the insufficiency of COVID-19 CT data by adopting a domain adaptation strategy and successfully detected infected regions; the proposed model achieved better accuracy when compared with state-of-the-art approaches. Few-shot learning (FSL), also known as low-shot learning (LSL), is a machine learning problem in which the training dataset contains only a tiny amount of data. It has been introduced to address the issue of adapting models with a limited number of training samples. Machine learning applications typically need extensive data, and the common practice is to feed as much data as the model can take, since more data generally enables better predictions. Contrary to this, FSL seeks to create accurate machine learning models with less training data. Note that FSL has different variations and cases, such as N-shot, one-shot, and zero-shot learning. Because medical CT datasets for COVID-19 analysis are limited, Yifan et al. [181] proposed a domain adaptation-based COVID-19 CT diagnostic model for few-shot COVID-19 conditions. The authors utilized many synthetic COVID-19 CT images and adapted the networks from the source domain (synthetic data) to the target domain (real data) with a cross-domain training mechanism. Voulodimos et al. [182] explored the efficacy of few-shot learning in U-Net architectures; their experimental results indicated improved segmentation accuracy in identifying COVID-19 infected regions. Abdel et al. [183] suggested another COVID-19 diagnosis technique based on few-shot segmentation. The primary goal of their research was to create accurate segmentations from a small set of annotated lung CT data. The proposed FSS architecture allowed learning from small support samples and improved query sample generalization; furthermore, the Res2Net50-based encoder [184] allowed for better network convergence. Chen et al. [185] developed a few-shot learning approach for COVID-19 CT analysis with minimal training. Initially, an instance discrimination task was carried out to test the model's ability to distinguish whether two images are views of the same instance or not.
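The sketch below implements a generic NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss over paired views of the same batch, the kind of instance-discrimination objective used in SimCLR-style and momentum-contrast pre-training. The temperature and embedding size are illustrative assumptions, and this is not the exact loss of [185] or [186].

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """Contrastive instance-discrimination loss over two views of the same batch.

    z1, z2: embeddings of shape (N, D) for two augmented views of N images.
    Matching rows are positives; all other rows in the batch act as negatives.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                    # (2N, D)
    sim = z @ z.t() / temperature                     # scaled cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float("-inf"))             # exclude self-similarity
    # The positive for row i is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    view1, view2 = torch.randn(16, 128), torch.randn(16, 128)
    print(nt_xent_loss(view1, view2).item())
```

After pre-training an encoder with such an objective on unlabeled CT slices, only a lightweight classifier needs to be fit on the few labeled examples, which is what makes the approach attractive under data scarcity.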
Chen et al. also generated alternative views of the same images to supplement the dataset, rather than relying on conventional data augmentation. Finally, a self-supervised technique [186] based on momentum contrastive training was used to improve performance, and the suggested model's efficacy was tested on two publicly available datasets. Data augmentation is another form of weak supervision, likewise introduced to address the data scarcity problem. Data augmentation techniques can be further categorized into conventional data augmentation and generative adversarial networks (GANs). This section organizes the COVID-19 CT diagnosis literature into data augmentation with pre-trained models and data augmentation with GANs. For COVID-19 CT analysis, several works adopted such approaches with pre-trained models. Silva et al. [188] used data augmentation and transfer learning techniques to overcome data scarcity; image rotation, zooming, and horizontal flipping were used as data augmentation procedures. Horry et al. [189] performed a comparative study adopting the transfer learning strategy with data augmentation. They optimized the VGG-19 model on three different image modalities, including CT, to discriminate COVID-19 from pneumonia or normal cases. In [190], a transfer learning-based DenseNet-121 approach was adopted to identify COVID-19, and a data augmentation procedure was applied to increase the number of training samples. Ko et al. [191] classified chest CT images into COVID-19 pneumonia, other pneumonia, and non-pneumonia. The proposed network applied two distinct forms of data augmentation, image rotation and zoom, and used transfer learning-based pre-trained convolutional neural network (CNN) models as backbones, among which ResNet-50 achieved the best predictions. Ahuja et al. [192] developed a three-phase COVID-19 CT detection model. For data augmentation in Phase 1, stationary wavelet decomposition was used; for binary classification in Phase 2, a trained CNN model was used; finally, abnormalities in CT images were localized in Phase 3. Zheng et al. [193] proposed a model that used 3D CT volumes to detect COVID-19. A pre-trained U-Net was used to segment the lung region of each patient, and the segmented 3D lung region was fed into a 3D deep neural network to predict the probability of COVID-19 infection. Data augmentation with random affine transformations and color jittering was applied to avoid overfitting, and the proposed model identified COVID-19 quickly. Hu et al. [194] applied sixteen data augmentation operations to enrich the training set for the training phase. The authors used a CNN with ShuffleNet-V2 as a backbone to efficiently distinguish COVID-19 patients from those who are non-infected or infected with other pneumonia (bacterial pneumonia or SARS). Hasan et al. [195] applied a newly adapted DenseNet-121 CNN with a data augmentation technique to classify and identify COVID-19 patients; the adapted DenseNet-121 CNN resulted in better COVID-19 predictions. Some authors [196] utilized ensemble transfer learning and fine-tuned a total of 15 pre-trained convolutional neural networks (CNNs) to detect COVID-19, with data augmentation used during training to reduce the overfitting problem of deep CNNs.
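The hedged sketch below shows a typical conventional augmentation pipeline of the kind reported in these studies (rotation, zoom via random resized crop, and horizontal flipping) using torchvision transforms; the specific parameter values are illustrative assumptions rather than the settings of any particular paper.

```python
from torchvision import transforms
from PIL import Image

# Conventional augmentations commonly reported for COVID-19 CT slices:
# small rotations, zoom (random resized crop), and horizontal flipping.
train_transforms = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),     # replicate slice to 3 channels
    transforms.RandomRotation(degrees=10),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
])

if __name__ == "__main__":
    slice_img = Image.new("L", (512, 512))           # stand-in for a CT slice
    augmented = train_transforms(slice_img)
    print(augmented.shape)                           # torch.Size([3, 224, 224])
```

Such a pipeline is usually attached to the training dataset loader only, so that each epoch sees slightly different versions of the same slices while validation data remain untouched.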
A COVID-19 screening strategy based on transfer learning and data augmentation was also applied in [197]. The VGG-16 architecture was fine-tuned to extract features from CT scans; for feature selection, principal component analysis (PCA) was employed, and four distinct classifiers were used for the final classification, with a bagging ensemble using SVM achieving the best classification results. Using less labeled data, Hu et al. [198] developed a weakly supervised learning framework for COVID-19 classification and lesion localization. The suggested network considered data pre- and post-processing for lung segmentation, and for lesion localization the authors used multi-scale learning followed by weakly supervised learning. Bai et al. [199] adopted the pre-trained weights of EfficientNet with a data augmentation technique and classified chest CT slices as COVID-19 or non-COVID-19. The respective information for each selected article is provided in Table 9.
Table 9. COVID-19 CT data augmentation with pre-trained models and their selected information.
- Silva et al. [188]: COVID-19 detection; datasets [120, 200]; EfficientNet [201]; ACC 87.6, F1-score 86.19, AUC 90.5.
- Horry et al. [189]: COVID-19 detection; multi-modal datasets, e.g., X-ray dataset [119], chest CT dataset [202], and ultrasound dataset [203]; pre-trained CNN with data augmentation; the VGG19 model performed well and achieved a precision of up to 84% for CT.
- Li et al. [190]: COVID-19 detection; …
The fundamental reason for introducing GAN-based techniques is the scarcity of COVID-19 benchmark datasets. The primary goal is to collect feasible CT benchmark datasets and use traditional data augmentation together with a conditional GAN (CGAN) to generate new images that help COVID-19 identification. For example, Loey et al. [209] introduced a deep transfer learning (DTL) model to classify COVID-19. The authors composed a small dataset and enriched it using classical data augmentation and a CGAN; after that, a classifier was used to predict COVID or non-COVID outcomes. Song et al. [210] introduced a representation learning technique based on a large-scale bi-directional generative adversarial network (BigBiGAN) architecture. The architecture was mainly designed to extract semantic features from CT images, and the resulting semantic feature matrix was used as input for constructing a linear classifier. Sedik et al. [211] proposed two data augmentation models to detect COVID-19 accurately. Their purpose was to enhance the learnability of the Convolutional Neural Network (CNN) and Convolutional Long Short-Term Memory (ConvLSTM) based deep learning models (DADLMs). The authors also used a data augmentation strategy with a CGAN, and the proposed DADLM outperformed the CGAN-augmented model in terms of COVID-19 detection accuracy. Mobiny et al. [212] created a Detail-Oriented Capsule Network (DECAPS) framework to boost COVID-19 classification accuracy, adopting a conditional GAN-based data augmentation procedure to deal with data scarcity. Goel et al. [213] established a generative adversarial network (GAN) [214] and ResNet-50 based model to classify COVID-19 and non-COVID-19. The Whale Optimization Algorithm (WOA) [215] optimized the GAN parameters and generated more CT images; in the final stage of the proposed model, the newly generated images were fed into ResNet-50 for diagnosis. Ghassemi et al. [216] utilized a cyclic generative adversarial network (CycleGAN) for data augmentation together with a transfer learning strategy; the proposed model achieved a high accuracy of 99.60%. The respective information about each selected article is given in Table 10.
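As a hedged illustration of CGAN-style augmentation, the sketch below defines a small class-conditional generator that maps a noise vector plus a class label (COVID-19 vs. non-COVID-19) to a synthetic CT-slice-sized image. The layer sizes are arbitrary assumptions, only the generator half is shown, and this is not the architecture of any cited study.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Class-conditional generator: noise + label embedding -> synthetic image."""
    def __init__(self, noise_dim=100, num_classes=2, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.label_embed = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_classes, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, img_size * img_size),
            nn.Tanh(),                           # images scaled to [-1, 1]
        )

    def forward(self, noise, labels):
        x = torch.cat([noise, self.label_embed(labels)], dim=1)
        return self.net(x).view(-1, 1, self.img_size, self.img_size)

if __name__ == "__main__":
    gen = ConditionalGenerator()
    noise = torch.randn(4, 100)
    labels = torch.tensor([0, 1, 0, 1])          # 0 = non-COVID, 1 = COVID (assumed coding)
    fake_slices = gen(noise, labels)
    print(fake_slices.shape)                     # torch.Size([4, 1, 64, 64])
```

In a full CGAN, a discriminator receiving the same label would be trained jointly with this generator, and the synthetic slices would then be mixed into the real training set for the downstream classifier.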
After extensive analysis of the COVID-19 CT diagnosis literature, it is evident that both supervised and weakly supervised deep learning models have been adopted extensively. Supervised learning has great benefits, but it is also challenging. For instance, supervised learning helps solve real-world problems, and for that, many good examples from each class are needed to train the classifier. However, it is not easy to have an extensive, high-quality data collection in hand for training, and classifying big data is another real challenge. Moreover, the training data should be representative rather than non-representative so that the model generalizes to new cases and classes. Apart from these challenges, supervised learning requires a lot of computation time. For artificial intelligence models (both machine and deep learning models), data is the fuel: the more data, the more accurate and reliable the model. Many works modified or retrained deep learning models using supervised learning strategies and conducted their experiments accordingly. Similarly, numerous COVID-19 studies implemented pre-trained models and transfer learning techniques (weak supervision). Initially, for COVID-19 diagnosis, these learning models were applied with limited CT datasets. Gradually, relatively large COVID-19 CT datasets [222-226] were introduced; however, these datasets are still insufficient to solve the data scarcity problem. Therefore, weakly supervised learning techniques (especially conventional transfer learning) have been adopted more extensively than supervised learning strategies. Along with these transfer learning techniques, many researchers used synthetic data procedures such as GANs and data augmentation to address data scarcity. Similarly, some works in sections 3.1.2 and 3.1.2.1 adopted transfer learning-based domain adaptation and few-shot learning techniques to make COVID-19 detection models more efficient. Despite their excellent performance, such strategies still do not provide complete, closed-end solutions. A large amount of conventional transfer learning research shows good performance in the test and validation phases but may fail in practical clinical analysis. For instance, most pre-trained weights are borrowed from ImageNet, whose images are common objects such as cars, airplanes, humans, buses, and boats; biomedical imaging data, on the other hand, is quite different. It is also evident from recent research [227] that traditional transfer learning is overhyped and not especially helpful for medical image processing. Therefore, transfer learning can be utilized effectively by reusing sophisticated features rather than over-parameterizing the standard models. Recently, the trend has been shifting towards few-shot and self-supervised learning to address data scarcity and model efficacy. Few-shot learning is a concept for learning a common representation for a wide range of tasks and then training task-specific classifiers on top of it. Compared with few-shot learning, self-supervised learning can perform tasks without labeled data. The self-supervised learning process is multi-layered, like human cognition, and can acquire more knowledge from fewer and simpler data. Self-supervised learning is an emerging research area and remains relatively unexplored in COVID-19 CT diagnosis. As a result, this paradigm has a lot of promise for clinical enterprises.
It can also help with deep learning's most complex challenges, such as data/sample inefficiency and the resulting expensive training. Current medical imaging research [228-231] has demonstrated the feasibility of self-supervised learning. In order to preserve more information, Zhuo et al. [228] presented Preservational Learning. Their study compared an ImageNet pre-trained model with models pre-trained on LUNA [232], BraTS [233], and LiTS [234] and found that self-supervised training on related datasets can improve the performance of segmentation and detection models for medical imaging. In [229], a Swin UNETR structure with a hierarchical encoder was proposed, leveraging self-supervised pre-training on the CT modality, and it outperformed all competitors on the MSD and BTCV datasets. In [230], a self-supervised model for reconstructing and predicting geometric transformations was developed; predictions based on geometric transformations have a greater influence on learning imaging features and showed significant performance in predicting anomaly scores in clinical brain CT data. Azizi et al. [231] introduced a novel Multi-Instance Contrastive Learning (MICLe) technique that combines several images of the underlying pathology of each patient case to create more informative positive pairs for self-supervised learning, increasing top-1 accuracy by 6.7%. Similarly, a few works [235-239] successfully applied self-supervised ideas to COVID-19 diagnosis. Taking the above discussion into account, self-supervised learning has the potential to be applied successfully. For instance, composing data augmentations, introducing a learnable nonlinear transformation, and contrastive learning with larger batch sizes and more training steps can lift the performance of a predictive model [167]. Likewise, the well-known Momentum Contrast (MoCo) can be adapted to facilitate contrastive unsupervised learning [240]. Besides, the masked autoencoder (MAE) introduced by He et al. [241] can be utilized to reconstruct missing pixels and enable the model to learn richer semantic representations. Another factor is evaluating the reliability and efficacy of intelligent diagnostic systems before their deployment into real-time practice; thus, uncertainty quantification [242] becomes essential [243-245]. This additional perspective attempts to improve the overall trustworthiness of such systems so that clinicians and users can understand when and where they may trust the models' predictions [246]. In other words, uncertainty quantification mitigates the poor generalization of a model in real-world clinical practice [247]. Consequently, accurate uncertainty estimates are required to improve a model's efficacy and apply it to the medical domain with trust and reliability. To quantify the uncertainty of traditional deep learning methods, Bayesian Deep Learning (BDL) methods can be a great solution [248]. Modeling uncertainty not only improves predictive performance but is also capable of detecting predictive failures [249]. Despite the current gold standard RT-PCR and other COVID-19 diagnostic tests, CT diagnosis by computer vision and artificial intelligence is an active area of research. Considerable research has been conducted, and many review articles have consolidated this extensive body of work from different perspectives.
Inspired by that, in this review article, we explored, arranged, and classified COVID-19 CT diagnosis research. For this purpose, we collected the relevant literature and categorized the collected techniques according to multi-level supervised and weakly supervised learning. Many works adopted the supervised schemes, such as network backbones and network block-based procedures. On the other hand, due to the limitations of COVID-19 CT datasets, weakly supervised learning approaches gained much attention compared to supervised learning. To predict COVID-19, the pre-trained models with data-augmentation procedures are extensively adopted with traditional transfer learning. It is noted that recently, domain adaptation-based transfer learning methods have been introduced, which is helpful to alleviate data scarcity to some extent. However, limited attention has been given to self-supervised learning, which can do tasks without massive amount of labeled data. Therefore, self-supervised strategies could be another ultimate solution for COVID-19 CT analysis and dataset scarcity. Last but not least, uncertainty quantification procedures are also essential to evaluate the reliability of a diagnostic system before its deployment into clinical practices. The authors declare no conflict of interest. Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases Laboratory diagnosis and monitoring the viral shedding of 2019-nCoV infections A novel coronavirus from patients with pneumonia in China Chest X-ray in new Coronavirus Disease 2019 (COVID-19) infection: findings and correlation with clinical outcome. La radiologia medica Importation and human-to-human transmission of a novel coronavirus in Vietnam A pneumonia outbreak associated with a new coronavirus of probable bat origin Molecular diagnosis of a novel coronavirus (2019-nCoV) causing an outbreak of pneumonia Diagnosis of COVID-19 pneumonia via a novel deep learning architecture Classification of Coronavirus (COVID-19) from X-ray and CT images using shrunken features Diagnosing COVID-19: the disease and tools for detection Molecular and serological tests for COVID-19. A comparative review of SARS-CoV-2 coronavirus laboratory and point-of-care diagnostics Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. The lancet Emergence of a novel coronavirus disease (COVID-19) and the importance of diagnostic testing: why partnership between clinical laboratories, public health agencies, and industry is essential to control the outbreak RT-qPCR testing of SARS-CoV-2: a primer. International journal of molecular sciences Covid-19-navigating the uncharted China medical treatment expert group for Covid-19 Coronavirus infections-more than just the common cold Outbreak of pneumonia of unknown etiology in Wuhan, China: The mystery and the miracle Detection of 2019 novel coronavirus (2019-nCoV) by real-time RT-PCR. Eurosurveillance Determinants of Chest Radiography Sensitivity for COVID-19: A Multi-Institutional Study in the United States Diagnostic testing for the novel coronavirus World Health Organization. Laboratory testing for 2019 novel coronavirus (2019-nCoV) in suspected human cases, Interim guidance Molecular and serological investigation of 2019-nCoV infected patients: implication of multiple shedding routes. Emerging microbes & infections Should RT-PCR be considered a gold standard in the diagnosis of Covid-19? 
Clinical features of COVID-19 in elderly patients: A comparison with young and middle-aged patients Positive rate of RT-PCR detection of SARS-CoV-2 infection in 4880 cases from one hospital in Characteristics of patients with coronavirus disease (COVID-19) confirmed using an IgM-IgG antibody test Diagnostic test evaluation methodology: a systematic review of methods employed to evaluate diagnostic tests in the absence of gold standard-an update Artificial Intelligence (AI) applications for COVID-19 pandemic Artificial Intelligence in Healthcare-A case study of Covid-19 Artificial intelligence for COVID-19 drug discovery and vaccine development. Frontiers in Artificial Intelligence Study: Chest X-rays Highly Predictive of COVID-19 Generalizability of deep learning tuberculosis classifier to COVID-19 chest radiographs: new tricks for an old algorithm? Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images Covid-resnet: A deep learning framework for screening of covid19 from radiographs Refining dataset curation methods for deep learning-based automated tuberculosis screening Chest x-ray findings in 636 ambulatory patients with COVID-19 presenting to an urgent care center: a normal chest x-ray is no guarantee Current role of imaging in COVID-19 infection with recent recommendations of point of care ultrasound in the contagion: a narrative review The role of chest radiography in confirming covid-19 pneumonia. bmj CT imaging features of 2019 novel coronavirus (2019-nCoV) Outbreak of novel coronavirus (COVID-19): What is the role of radiologists Coronavirus disease 2019 (COVID-19): a systematic review of imaging findings in 919 patients Role of computed tomography in COVID-19 A role for CT in COVID-19? What data really tell us so far. The Lancet Contribution of CT Features in the Diagnosis of COVID-19 The role of CT for Covid-19 patient's management remains poorly defined. Annals of translational medicine The role of chest imaging in patient management during the COVID-19 pandemic: a multinational consensus statement from the Fleischner Society The role of imaging in the detection and management of COVID-19: a review Chest CT findings in 2019 novel coronavirus (2019-nCoV) infections from Wuhan, China: key points for the radiologist Chest CT in COVID-19: what the radiologist needs to know Chest CT imaging signature of coronavirus disease 2019 infection: in pursuit of the scientific evidence Rapid ai development cycle for the coronavirus (covid-19) pandemic: Initial results for automated detection & patient monitoring using deep learning ct image analysis CT imaging of the 2019 novel coronavirus (2019-nCoV) pneumonia. Radiology Chest computed tomography findings of coronavirus disease 2019 (COVID-19) pneumonia. European radiology Coronavirus disease 2019 (COVID-19) in Italy: features on chest computed tomography using a structured report system. Scientific Reports Deep learning and its role in COVID-19 medical imaging. Intelligence-Based Medicine Machine Learning and Deep Learning Approaches to Analyze and Detect COVID-19: A Review. SN computer science Medical imaging with deep learning for COVID-19 diagnosis: a comprehensive review Deep learning in the detection and diagnosis of COVID-19 using radiology modalities: a systematic review Deep learning and medical image processing for coronavirus (COVID-19) pandemic: A survey. 
This work was financed by the Project of Educational Commission of Guangdong Province of China (No. 2020KZDXZ1215).