key: cord-0864706-m1nf6j8i authors: Jia, Guangyu; Lam, Hak-Keung; Xu, Yujia title: Classification of COVID-19 Chest X-Ray and CT Images using a Type of Dynamic CNN Modification Method date: 2021-04-29 journal: Comput Biol Med DOI: 10.1016/j.compbiomed.2021.104425 sha: 453c4382b4fa26e3bdf7ef11e93ae1bb482f53d4 doc_id: 864706 cord_uid: m1nf6j8i

Understanding and classifying chest X-ray (CXR) and computed tomography (CT) images are of great significance for COVID-19 diagnosis. The existing research on the classification of COVID-19 cases faces the challenges of data imbalance, insufficient generalisability, the lack of comparative studies, etc. To address these problems, this paper proposes a type of modified MobileNet to classify COVID-19 CXR images and a modified ResNet architecture for CT image classification. In particular, a modification method for convolutional neural networks (CNN) is designed to solve the gradient vanishing problem and improve classification performance by dynamically combining features in different layers of a CNN. The modified MobileNet is applied to the classification of COVID-19, tuberculosis, viral pneumonia (with the exception of COVID-19), bacterial pneumonia and normal controls using CXR images. Also, the proposed modified ResNet is used for the classification of COVID-19, non-COVID-19 infections and normal controls using CT images. The results show that the proposed methods achieve 99.6% test accuracy on the five-category CXR image dataset and 99.3% test accuracy on the CT image dataset. Six advanced CNN architectures and two specific COVID-19 detection models, i.e., COVID-Net and COVIDNet-CT, are used in comparative studies. Two benchmark datasets and a CXR image dataset which combines eight different CXR image sources are employed to evaluate the performance of the above models. The results show that the proposed methods outperform the comparative models in classification accuracy, sensitivity, and precision, demonstrating their potential in computer-aided diagnosis for healthcare applications.

In the fight against COVID-19, the immediate and accurate screening of infected patients is of great significance. As widely used screening approaches, chest X-ray (CXR) and computed tomography (CT) play an important role in the diagnosis of COVID-19 cases, especially when viral testing is in short supply. Studies showed that changes occur in CXR and CT images before the onset of COVID-19 symptoms in some patients [1, 2]. Also, the symptoms of COVID-19 and other lung diseases can be similar in their very early stages [3]. It is crucial to effectively distinguish COVID-19 from other lung diseases during the early stages, otherwise inaccurate diagnoses may expose more people to the coronavirus. The development of deep learning techniques enables end-to-end image classification without manual feature engineering. In the domain of COVID-19 detection, deep learning techniques have been widely adopted for related classification tasks [4, 5, 6]. For example, in [7], Inception net was utilized for COVID-19 outbreak screening with CXRs. [8] proposed a type of patch-based convolutional neural network (CNN) with a small number of trainable parameters for COVID-19 diagnosis. The research in [9] considered flat and hierarchical classification scenarios for COVID-19 identification with more than three classes.
In [10], two algorithms were presented: a deep neural network applied to the fractal features of images and a CNN architecture operating directly on the CXR images. The above research demonstrates that the advantage of deep learning methods for CXR image classification is mainly their capability of capturing pixel-level information that is not readily noticeable to the human eye.

Despite the success of the reported applications, the existing studies on COVID-19 classification also show some limitations and challenges. First, as the amount of available training data is limited, class imbalance is found in much of the literature. Deep learning models are unlikely to be well trained on imbalanced data, and high accuracy in these circumstances cannot guarantee the effectiveness of COVID-19 detection. Furthermore, after a careful comparison of the images from different data sources, it can be found that images in different classes vary in image quality, orientation, brightness, etc. The algorithms might take these factors into account during classification rather than focusing on the disease-related information in the images.

In this research, we aim to design a COVID-19 detection system based on deep learning techniques which can be used in the screening process of COVID-19 CXR and CT images. Motivated by the fact that the viral test requires waiting time to obtain results, the developed end-to-end medical image classification frameworks attempt to accelerate the testing process, alleviate the workload of clinicians in manual image processing, and provide patients with timely results, all of which facilitate effective isolation to further control the spread of the coronavirus. To address the problem of data imbalance, we combined eight different data sources, such that each class in the dataset had a similar number of samples. To establish a more effective and suitable model for COVID-19 classification, we compared the classification results of several widely used CNN architectures, including VGG, ResNet, DenseNet, MobileNet, Inception and SqueezeNet. Among those models, we found that MobileNet achieved the best performance in COVID-19 classification of CXR images, and ResNet achieved high test accuracy with a smaller computational budget in the classification of CT images. Although MobileNet and ResNet possessed satisfactory classification performance, overfitting and gradient vanishing problems still occurred during training. To overcome these obstacles and further improve the classification performance of deep learning models, we proposed a type of dynamic CNN modification method which combines low-level and high-level features in the original model, such that the model is able to converge faster, be more robust and reach higher classification accuracy. The experimental results demonstrate that the proposed method achieves an average test accuracy of 99.7% in three-category classification, 99.9% in four-category classification and 99.6% in five-category classification, which surpass the original MobileNet architecture.

The contributions of this paper are summarised as follows:
• A type of dynamic CNN modification method is proposed in this paper for the detection of COVID-19 cases. The proposed method combines features in different layers of a CNN using weights that are dynamically changed according to the inputs.
The modified MobileNet and the modified ResNet are applied to the classification of CXR images and CT images, respectively, for COVID-19 detection.
• To facilitate effective diagnosis, three classification scenarios are considered in this paper: (a) five-category classification: COVID-19 infection, bacterial pneumonia, viral pneumonia (with the exception of COVID-19), tuberculosis, and normal controls; (b) four-category classification: COVID-19, non-COVID-19 pneumonia, tuberculosis, and normal controls; (c) three-category classification: COVID-19, non-COVID-19 infections, and normal controls.
• Comprehensive comparisons are conducted in this research, which include comparisons between the proposed method and two recently published models, i.e., COVID-Net [11] and COVIDNet-CT [12]. Also, six widely used deep learning methods, VGG16 [13], Inceptionv3 [14], ResNet18_v1 [15], DenseNet121 [16], MobileNetv3_small [17], and SqueezeNet1.0 [18], are employed for model selection and comparison.
• The proposed models obtain test accuracies of 99.6% on the 5-class CXR image dataset, 95.0% on the COVIDx dataset [11], and 99.3% on the COVIDx-CT dataset [12]. The dynamic CNN modification method alleviates the gradient vanishing problem and achieves satisfactory sensitivity, precision, and strong robustness in classification.

The implementation process of this paper is shown in Fig. 1, which will be elaborated in the following sections. The rest of this paper is arranged as follows. Section 2 introduces the background of this research. Data information is introduced in Section 3. Six widely used CNN architectures applied to COVID-19 classification are illustrated in Section 4. Section 5 presents the proposed dynamic modification method; in particular, the modified MobileNet architecture is proposed for the classification of CXR images. The modified ResNet, with its application to COVID-19 CT image classification, is elaborated in Section 6. The robustness test is then analysed in Section 7. Section 8 concludes the paper.

Detecting COVID-19 with deep learning techniques is a trending topic and has attracted extensive attention recently. Promising results using advanced CNNs have been published and continue to emerge in this domain. In the detection of COVID-19, CXR images and CT images are the two main types of datasets used for classification. In addition, wearable sensor signals have recently been used as inputs for COVID-19 detection [19]. A variety of deep learning models have been designed for COVID-19 detection and have achieved success in medical image classification. Researchers in [1] proposed a type of deep learning method named DarkCovidNet, which achieved 98.08% test accuracy in binary classification and 87.02% test accuracy in three-category classification. In [11], COVID-Net, a tailored deep learning model derived through generative synthesis [20], was used for the detection of COVID-19 cases with CXR images. The authors of this research also compiled CXR images from various open sources and made them available to the general public. For chest CT images, a type of COVIDNet-CT model was proposed in [12] to identify COVID-19, non-COVID-19 pneumonia and normal cases using a machine-driven design exploration approach similar to the method in [11]. Another contribution of this research is the introduction of COVIDx-CT, a benchmark CT image dataset derived from CT imaging data, which consists of 104,009 images across 1,489 patient cases.
However, it must be acknowledged that there are also limitations in COVID-19 detection using deep learning techniques. Firstly, to control the spread of COVID-19 and protect people, many countries have issued self-isolation policies, and people who begin to have mild symptoms are quarantined without CXR or CT examinations [21]. Thus, in the current CXR and CT image datasets, the number of images of patients with severe COVID-19 symptoms is far greater than the number of images of patients suffering from only mild symptoms [21]. Secondly, the related research, especially that conducted in 2020, suffered from a lack of COVID-19 images and data imbalance. For example, in studies [4, 22, 8, 23, 21, 24], the number of COVID-19 images ranges from 50 to 300, and in some studies the classes are highly imbalanced. Thirdly, there are no standard criteria for model evaluation, which leads to less effective comparison of performance among different models. Also, with the increasing number of deep learning methods proposed in this domain, it becomes more and more difficult for researchers and health organisations to select the most appropriate classification method for COVID-19 detection [25].

Motivated by these factors, this paper proposes a dynamic CNN modification method which is applied to MobileNetv3 and ResNet18 for the classification of CXR and CT images. The datasets used in this paper are a combination of different data sources, in which the number of samples is balanced and sufficient for model training. In addition, to provide fair and comprehensive comparison with other methods, we used two benchmark datasets, COVIDx [11] and COVIDx-CT [12], to evaluate the proposed method. In particular, we used the same performance indicators as [11] and [12] for result comparison, and the same training and testing datasets in the comparative studies.

The preliminaries of the CNN architectures adopted in this paper are introduced in this section. As one of the most important and prominent families of deep learning models, CNNs have shown advantages in many application areas, such as computer vision, speech recognition and medical diagnosis [26, 27, 28]. In the ImageNet LSVRC-2012 competition, a revolutionary CNN architecture named AlexNet demonstrated that deep CNNs are able to achieve excellent performance on highly challenging datasets with purely supervised learning [29]. Krizhevsky et al. developed a wide range of network settings and training techniques, such as ReLU, dropout, pooling and local response normalization, which made it possible to train deep CNNs more effectively and achieve better performance [29, 30]. In recent years, more advanced networks based on AlexNet were created, such as VGG, GoogLeNet, ResNet, DenseNet, MobileNet and SqueezeNet. In 2013, the Network in Network (NiN) structure was proposed in [31], which introduced 1 × 1 convolutional layers to act as fully connected layers on the channels, such that each basic block in the model is like a complete network. VGG was proposed in [13], which offers a template of repeated blocks for designing new networks. Taking advantage of the repeated convolutional blocks proposed in VGG and the structure of NiN, GoogLeNet combines convolution kernels of different sizes, uses Inception blocks and employs 1 × 1 convolutions to reduce channel dimensionality [32]. In 2015, a novel type of CNN architecture named ResNet was proposed in [33], which has profoundly influenced network structure design ever since.
ResNet realises identity mapping by inserting shortcut connections between layers, which keeps the function complexity well defined when new residual blocks are added and clearly improves CNNs' classification performance [33]. DenseNet extended the architecture of ResNet by using concatenation as the cross-layer connection, making each layer densely connected to the subsequent layers [16]. The above CNN architectures mainly focus on improving model accuracy. Another stream in this field was developed to improve the training efficiency of CNNs, focusing more on reducing the computational budget of CNNs with a reasonable compromise in accuracy. MobileNet is a typical model designed in this context. The first version of MobileNet was proposed in 2017, aiming to achieve a trade-off between resources and accuracy. It uses depthwise separable convolutions to build lightweight deep neural networks [34] and is designed for use on mobile devices. Another commonly used model is SqueezeNet, in which the number of weights is about 50 times smaller than in AlexNet while the accuracy is kept almost the same [18]. Recently published studies [35, 24, 23, 22, 8, 4, 21] have shown that VGG [13], Inception [14], ResNet [15], DenseNet [16], MobileNet [17] and SqueezeNet [18] are effective in COVID-19 detection. Therefore, these models are employed in this paper for model selection and comparative studies.

Four classes of lung diseases are considered in this paper: COVID-19, bacterial pneumonia, viral pneumonia (except for COVID-19) and tuberculosis. In addition, healthy cases are included as the fifth class. Representative samples of each class are shown in Fig. 2.
(Figure 2: Example CXR images from data sources [36, 37, 38, 24, 39].)
The dataset used in this paper is a combination of publicly available data sources (DS), grouped as follows:
• COVID-19 CXR images are from the open-source GitHub repositories ieee8023/covid-chestxray-dataset (DS 1) [36], Actualmed-COVID-chestxray-dataset (DS 2) and Figure1-COVID-chestxray-dataset (DS 3) [37], and the COVID-19 Radiography Database (DS 4) [24]. Data preparation for the COVID-19 images in this paper followed the code provided in the COVIDx dataset GitHub repository contributed by Linda Wang et al. [11].
• The images of tuberculosis-positive cases are from the dataset in [38], which includes CXR databases obtained in Shenzhen, China and Montgomery, USA (DS 5), and from the TB Portals Program, Office of Cyber Infrastructure and Computational Biology (OCICB), National Institute of Allergy and Infectious Diseases (NIAID) (DS 6).
• The bacterial and viral pneumonia CXR images are from the Pneumonia Classification Dataset (DS 7) [39].
• The CXR images of normal controls are from the COVID-19 Radiography Database (DS 4) [24].
The distribution of samples across the 7 data sources and the split into training, validation and test sets are shown in Table 1. The classification of COVID-19 cases using CT images is also investigated in this paper; the details of the CT images used for COVID-19 detection are given in Section 6.

In order to obtain the most suitable model for the classification task, we first employed six widely used CNN architectures which have proven to be successful in computer vision and medical diagnosis.
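As a brief, hedged illustration of how such off-the-shelf backbones can be instantiated for this task (the exact experimental setup is described in the following paragraphs), the sketch below loads ImageNet-pretrained models and replaces the final classification layer with a 5-node output. The use of torchvision, the legacy pretrained flag and the 224 × 224 input size are our assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code): adapting ImageNet-pretrained
# torchvision backbones to a 5-class chest X-ray classification task.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # COVID-19, bacterial pneumonia, viral pneumonia, tuberculosis, normal

def build_mobilenet(num_classes: int = NUM_CLASSES) -> nn.Module:
    # MobileNetV3-Small with ImageNet weights; `pretrained=True` is the legacy
    # torchvision flag (newer releases use the `weights=` argument instead).
    net = models.mobilenet_v3_small(pretrained=True)
    in_features = net.classifier[-1].in_features
    net.classifier[-1] = nn.Linear(in_features, num_classes)  # 5-node output layer
    return net

def build_resnet18(num_classes: int = NUM_CLASSES) -> nn.Module:
    # ResNet-18 with ImageNet weights; replace the fully connected output layer.
    net = models.resnet18(pretrained=True)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

if __name__ == "__main__":
    model = build_mobilenet()
    x = torch.randn(2, 3, 224, 224)   # a dummy batch of two RGB images
    print(model(x).shape)             # torch.Size([2, 5])
```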
As mentioned in Section 2, we employed VGG16 [13], Inceptionv3 [14], ResNet18_v1 [15], DenseNet121 [16], MobileNetv3_small [17] and SqueezeNet1.0 [18] for COVID-19 detection in this paper. The model versions were determined by the trade-off between classification performance and computational cost. For example, in our experiments, ResNet18 and ResNet50 reached similar classification accuracy; in this case we chose ResNet18 rather than ResNet50 due to its greater parameter efficiency and lower computational cost. Also, the models were selected based on the classification performance reported in the related literature. During the experiments, the above models were used in pretrained mode; therefore, they possess the weights and hyper-parameters fine-tuned on ImageNet. As 5 classes are investigated in this paper, the output layer of each of the above models is revised to have 5 nodes. The classification results are depicted in Fig. 3. As the models are pretrained, almost all models in Fig. 3 achieve above 80% test accuracy after 30 epochs. The test accuracy and the number of parameters of each model are listed in Table 2.

According to the test accuracy shown in Table 2, both DenseNet121 and MobileNetv3_small have a test accuracy of 98.8%. However, from the validation accuracy shown in Fig. 3, it can be seen that DenseNet121 oscillates more during training than MobileNet. In addition, the parameter count of DenseNet121 is about four times that of MobileNet, which indicates that MobileNet has greater parameter efficiency. Therefore, MobileNetv3_small proposed in [17] is selected as the backbone model for the CXR image classification in this paper. However, in the process of training MobileNet, we found that an over-fitting problem occurred, which can be seen from the divergence of the training and validation accuracy shown in Fig. 5(a). We also found that the gradients in the top four blocks of MobileNet were very small throughout training. To improve the classification performance of MobileNet and address the over-fitting problem, we proposed a type of modified MobileNet for COVID-19 detection.

Motivated by the design of ResNet and the channel attention mechanism in SENet [40], we proposed a type of modified MobileNet based on the architecture of MobileNetv3_small in [17]. In the original MobileNetv3_small, as mentioned above, vanishing gradient and overfitting problems occurred, which are shown in Fig. 5 and Fig. 6. Furthermore, there is no residual connection in the top three blocks of MobileNetv3_small as the stride of these three layers is 2. Based on the above observations, we designed a type of modified MobileNet to make the model more adaptive to our dataset. The structure of the proposed method is depicted in Fig. 4, and the corresponding configuration of hyper-parameters is shown in Table 3. In the modified MobileNet, the outputs of the top four blocks of the original model are each processed by a pointwise convolution block, which is designed to reduce the channel dimensionality and keep the outputs of different layers summable. The outputs of the five pointwise conv blocks are multiplied by the corresponding weights (w1, ..., w5). As the weights are variable during training, the outputs of the pointwise conv blocks are dynamically weighted and added. The weights form a 1 × 5 vector, which is the output of the first pointwise conv block with five channels, fed by the inputs. The configuration of hyper-parameters can be found in Table 3.
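The dynamic weighting described above can be sketched as a small PyTorch module. The following is an illustrative reading of the idea rather than the exact modified MobileNet; the 64-channel projection width, the ReLU activation and the dropout rate are assumptions standing in for the configuration given in Table 3.

```python
# Illustrative sketch of the dynamic feature-combination idea (not the exact
# modified MobileNet): intermediate feature maps are projected by pointwise
# conv blocks and summed with input-dependent weights before classification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointwiseBlock(nn.Module):
    """1x1 convolution followed by global average pooling: maps a feature map
    with `in_ch` channels to an `out_ch`-dimensional vector."""
    def __init__(self, in_ch: int, out_ch: int = 64):    # 64 is an assumed width
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.relu(self.conv(x))                          # activation choice assumed
        return F.adaptive_avg_pool2d(x, 1).flatten(1)     # (B, out_ch)

class DynamicCombiner(nn.Module):
    """Dynamically weights the projected outputs of several backbone blocks
    with coefficients produced from the input image itself."""
    def __init__(self, branch_channels, num_classes: int = 5, feat_dim: int = 64):
        super().__init__()
        self.branches = nn.ModuleList(PointwiseBlock(c, feat_dim) for c in branch_channels)
        # A pointwise block on the raw input yields one weight per branch (w1, ..., w5).
        self.weight_branch = PointwiseBlock(3, len(branch_channels))
        self.dropout = nn.Dropout(0.2)                    # dropout rate assumed
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, image: torch.Tensor, block_outputs) -> torch.Tensor:
        w = self.weight_branch(image)                                   # (B, n_branches)
        feats = torch.stack([b(o) for b, o in zip(self.branches, block_outputs)], dim=1)
        combined = (w.unsqueeze(-1) * feats).sum(dim=1)                 # dynamic weighted sum
        return self.fc(self.dropout(combined))

# In the modified MobileNet, `block_outputs` would be the feature maps taken from
# the top blocks of MobileNetv3_small and `image` the original network input.
```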
With this modified CNN architecture, we aim to:
• Combine low-level and high-level information of the original network.
• Address the overfitting problem that occurred in the original MobileNet (Fig. 5(a)).
• Solve the gradient vanishing problem during training (Fig. 6).
In essence, the proposed CNN structure is changed dynamically according to the input data. This is because the outputs of the pointwise conv blocks (Fig. 4) are multiplied by the corresponding weights wi, where the weights are obtained from a pointwise conv block. As the weights are variable and determined by the inputs, the outputs of the pointwise conv blocks are dynamically weighted and combined. Therefore, this CNN modification method provides dynamic combinations of pointwise conv blocks which are adaptive to the inputs. In the original MobileNet, the output layer is a convolutional layer with 1024 input channels and 1 × 1 convolution kernels. This design achieved satisfactory classification results on ImageNet with 1000 classes; however, the large number of parameters leads to an extra computational burden and an over-fitting problem for the 5-class COVID-19 classification task. The pointwise conv blocks in the branches first reduce the number of channels through a convolutional layer with 64 channels and 1 × 1 filters, and then reduce the spatial dimension of the inputs by global average pooling.

Also, we computed the gradients of the weights in the convolutional layers which are influenced by the modification structure, and compared them with the gradients in the corresponding layers of the original MobileNet (an illustrative sketch of collecting such gradient statistics is given below). Fig. 6 shows the distribution of the absolute values of the gradients in the first eight convolutional layers. The number of parameters of the corresponding kernels is 7,264. The mean and variance of the absolute gradient values in the original MobileNet are 0.037 and 0.002; for the modified MobileNet, they are 0.547 and 0.551. Fig. 6 demonstrates that this type of modification in MobileNet is able to alleviate the gradient vanishing problem which usually arises when networks are deep.

The training and validation accuracy of the modified MobileNet for the classification of COVID-19, tuberculosis, bacterial pneumonia, viral pneumonia and healthy cases are shown in Fig. 5(b). We adopted 5-fold cross validation for result evaluation. From Table 4, it can be noted that with the modified MobileNet, the test accuracy reaches 99.6%, which is 0.8% higher than that of the original MobileNet. The overfitting problem is alleviated using the modified MobileNet, which can be seen by comparing Fig. 5(a) and Fig. 5(b). The sensitivity, precision, and F1-score of the original and proposed models are shown in Table 6. It can be noted from Table 6 and Table 4 that the proposed model achieves an accuracy of 99.6% in the five-category classification, and obtains sensitivity, precision and F1-score values of 100% in COVID-19 detection. To test the effectiveness of the proposed method, we also applied it to the classification of three categories (COVID-19, healthy, and non-COVID-19 infections) and four categories (COVID-19, non-COVID-19 pneumonia, tuberculosis and normal controls). The results are summarized in Table 5. These two scenarios can be implemented faster and more conveniently in practice when rapid diagnosis is required.
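As referenced above, the gradient statistics reported for Fig. 6 require inspecting the gradients of individual convolutional layers after a backward pass. The sketch below shows one generic way to collect such per-layer statistics in PyTorch; the layer filter and the way the loss is obtained are placeholders, not the authors' exact procedure.

```python
# Generic sketch (not the authors' exact procedure) of collecting per-layer
# gradient statistics, as reported for the first eight convolutional layers in Fig. 6.
import torch
import torch.nn as nn

def conv_gradient_stats(model: nn.Module, loss: torch.Tensor) -> dict:
    """Backpropagate `loss` and return the mean and variance of |gradient|
    for the weights of every Conv2d layer in `model`."""
    model.zero_grad()
    loss.backward()
    stats = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d) and module.weight.grad is not None:
            g = module.weight.grad.abs()
            stats[name] = (g.mean().item(), g.var().item())
    return stats

# Example use (placeholders): loss = criterion(model(images), labels)
# print(conv_gradient_stats(model, loss))
```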
It can be noted from Table 5 that the proposed method obtains 99.7% test accuracy in three-category classification, 99.9% in four-category classification, and 99.6% in five-category classification, which all outperform the original MobileNet. To further observe the rationality and effectiveness of the proposed classifier, the class activation mapping (CAM) proposed in [41] was adopted to localize the discriminative image regions. In Fig. 7, the highlighted regions show the areas that the classifier used to identify each category.
(Figure 7: Class activation maps (CAM) of CXR images in each category. Images in the first row are the original CXR images of each class; the second row lists the corresponding CAMs, in which the highlighted parts represent the discriminative image regions that the classifier uses to identify that category.)

Comparing Fig. 5(a) and Fig. 5(b), it can be found that the modified structure alleviates the over-fitting problem which occurred in the original MobileNet. The main reasons behind this are analysed as follows: (a) the pointwise conv blocks in the modified MobileNet use a 1 × 1 convolutional layer with 64 channels as well as global average pooling to reduce the dimensions of the corresponding outputs, which makes the input fed into the last layer have a size of 64 × 1 × 1 rather than 1024 × 1 × 1 as in the original MobileNet; (b) the introduction of a dropout layer after the summed outputs from the former layers improves the generalisability of the proposed model; (c) this structure is more capable of solving the vanishing gradient problem in backpropagation due to the connections with former layers, as shown in Fig. 6; (d) the proposed structure combines the features obtained from lower layers and higher layers, which provides more comprehensive information for the final classification layer.

To provide a comprehensive evaluation of the proposed method, we compared it with COVID-Net [11], a recently proposed model for the detection of COVID-19 cases using CXR images. To provide fair comparisons, we employed the benchmark dataset COVIDx to evaluate both methods. COVID-Net [11] is a deep convolutional neural network designed for the detection of COVID-19 cases on COVIDx, an open-access benchmark dataset containing 13,975 CXR images. In [11], a machine-driven design exploration strategy was used to create COVID-Net. The test accuracy and parameters of the proposed method and COVID-Net are shown in Table 7, and the comparisons of sensitivity and precision are shown in Tables 8 and 9. From Tables 7, 8, and 9, it can be concluded that the proposed method outperforms COVID-Net in terms of computational memory, accuracy (a 1.7% improvement in test accuracy), sensitivity, and precision in COVID-19 detection. This further demonstrates the effectiveness of the proposed method in diagnosing COVID-19 cases with CXR images.

To further investigate the effectiveness of this type of model modification, we applied the same modification method to ResNet18 [33] and proposed a type of modified ResNet. This version of ResNet is chosen because it is the residual network with the fewest layers and parameters. The experiment was conducted using the benchmark CT image dataset COVIDx-CT, which consists of 104,009 chest CT images across 1,489 patients. Example CT images from this dataset are shown in Fig. 8; the information of the dataset can be found in [12].
(Figure 8: Example CT images of the three classes (COVID-19, non-COVID-19 pneumonia, normal) [12].)
Similarly, comparisons of the six CNN architectures were conducted for model selection. The test accuracy, the parameters of the different models and the training process are shown in Table 10 and Fig. 9. After comparing the performance of VGG16 [13], Inceptionv3 [14], ResNet18_v1 [15], MobileNetv3_small [17], DenseNet121 [16], and SqueezeNet1.0 [18], we found that both Inceptionv3 and ResNet18 achieved satisfactory classification performance. Considering the parameter efficiency of ResNet18 compared with Inceptionv3, we adopted ResNet18 as the CNN backbone and modified it with a method similar to that used in the modified MobileNetv3_small (Section 5). The architecture of the modified ResNet18 is depicted in Fig. 10 and the corresponding configuration of hyper-parameters is shown in Table 11.

In this section, we compared the modified ResNet18 with COVIDNet-CT [12] on the benchmark dataset COVIDx-CT. COVIDNet-CT [12] is a deep convolutional neural network tailored for the detection of COVID-19 cases from chest CT images. The results of COVIDNet-CT in the classification of COVID-19, non-COVID-19 pneumonia, and normal controls are listed in [42] (COVID-Net CT-2 S (2A)). To provide fair comparisons, we used the same training and test datasets as those of COVIDNet-CT [12] in our experiment. The comparative results can be found in Tables 12, 13, and 14. The proposed method achieves 99.3% test accuracy on the COVIDx-CT dataset, which consists of 143,778 training images, 25,486 validation images, and 25,658 test images [12]. The modified ResNet18 possesses higher test accuracy (1.4% higher than COVIDNet-CT) and outperforms COVIDNet-CT in sensitivity and precision in identifying the three categories. However, the advantage of COVIDNet-CT is its higher parameter efficiency, in terms of far fewer parameters in the network.

Robustness analysis aims to evaluate the model's capability to resist input perturbations. To test the robustness of the model, we add different levels of Gaussian noise to the test data and feed the contaminated data into the classifiers. In this section, we adopt the modified MobileNet from Section 5 to analyse the robustness of the proposed method. Assume that the inputs are denoted by a, of size s1 × s2 × 3. To test the robustness of the model, we utilised two forms of perturbations. The first is the additive form:
ā = a + m · randn(s1, s2, 3),   (1)
where ā denotes the contaminated data, m denotes the noise level (a positive value), randn() represents the Gaussian noise function and '·' denotes element-wise multiplication. The noise level is chosen as m = 1, 3, 5, 7, 10, respectively. The second form is the combination of multiplicative and additive noise:
ā = n · a + m · randn(s1, s2, 3),   (2)
where n denotes the multiplicative noise level and the other notation is the same as in equation (1). The following levels of noise are considered: n = 0.5, 0.7, 0.9, 1.1, 1.3; m = 2. Note that through these data perturbations, the pixel values of the input images may exceed [0, 255]; we limited the pixel values to [0, 255] by clipping the values that exceed this range. The robustness results for the two forms of noise on the test dataset are shown in Tables 15 and 16. From Table 15 and Table 16, it can be concluded that the proposed model possesses stronger robustness than the original model, which demonstrates that the proposed method has stronger generalisation ability to accept a wider range of data and is suitable for clinical applications in which data perturbations are inevitable.
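The two perturbation forms above translate directly into a few lines of NumPy; the sketch below follows equations (1) and (2) together with the clipping step, while the evaluation loop, model and image arrays are placeholders for the reader's own pipeline.

```python
# Sketch of the robustness test following equations (1) and (2); `evaluate`,
# `model` and `test_images` are placeholders, not part of the original work.
import numpy as np

def additive_noise(a: np.ndarray, m: float) -> np.ndarray:
    """Equation (1): a_bar = a + m * randn(s1, s2, 3), clipped to the valid pixel range."""
    noisy = a + m * np.random.randn(*a.shape)
    return np.clip(noisy, 0, 255)

def mixed_noise(a: np.ndarray, n: float, m: float = 2.0) -> np.ndarray:
    """Equation (2): a_bar = n * a + m * randn(s1, s2, 3), clipped to [0, 255]."""
    noisy = n * a + m * np.random.randn(*a.shape)
    return np.clip(noisy, 0, 255)

# for m in (1, 3, 5, 7, 10):
#     acc = evaluate(model, [additive_noise(img, m) for img in test_images])
# for n in (0.5, 0.7, 0.9, 1.1, 1.3):
#     acc = evaluate(model, [mixed_noise(img, n) for img in test_images])
```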
In this paper, a type of dynamic CNN modification method is proposed for the classification of two COVID-19 CXR image datasets and a CT image dataset. The proposed method establishes connections between different layers of the original CNN architecture through pointwise convolution blocks, which achieve dynamic combinations of different layers. Six widely used deep learning algorithms, as well as two recently published models specifically designed for COVID-19 detection, are employed and compared with the proposed method. Three scenarios of the classification problem are investigated using the proposed method. The results are analysed through test accuracy, sensitivity, precision, robustness tests, and class activation maps. The modified CNN architecture demonstrates satisfactory classification performance in our comparative study, which shows its potential to be applied in clinical settings for computer-aided diagnosis of COVID-19 positive cases. Future work may investigate the impact of image quality on COVID-19 detection arising from differences between image sources.

No potential conflict of interest was reported by the authors.

References:
[1] Automated detection of COVID-19 cases using deep neural networks with X-ray images
[2] COVIDiagnosis-Net: Deep Bayes-SqueezeNet based diagnostic of the coronavirus disease 2019 (COVID-19) from X-ray images
[3] An efficient framework for identification of tuberculosis and pneumonia in chest X-ray images using neural network
[4] Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network
[5] COVID-19 screening on chest X-ray images using deep learning based anomaly detection
[6] Prediction of respiratory decompensation in COVID-19 patients using machine learning: The READY trial
[7] Truncated Inception Net: COVID-19 outbreak screening using chest X-rays, Physical and Engineering Sciences in Medicine
[8] Deep learning COVID-19 features on CXR using limited training data sets
[9] COVID-19 identification in chest X-ray images on flat and hierarchical classification scenarios
[10] Diagnosis and detection of infected tissue of COVID-19 patients based on lung X-ray image using convolutional neural network approaches
[11] COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images
[12] COVIDNet-CT: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest CT images
[13] Very deep convolutional networks for large-scale image recognition
[14] Rethinking the Inception architecture for computer vision
[15] Identity mappings in deep residual networks
[16] Densely connected convolutional networks, in: Proceedings of the IEEE conference on computer vision and pattern recognition
[17] Searching for MobileNetV3
[18] SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size
[19] CovidDeep: SARS-CoV-2/COVID-19 test based on wearable medical sensors and efficient neural networks
[20] FermiNets: Learning generative machines to generate efficient neural networks via generative synthesis
[21] Extracting possibly representative COVID-19 biomarkers from X-ray images with deep learning approach and image data related to pulmonary diseases
[22] COVIDX-Net: A framework of deep learning classifiers to diagnose COVID-19 in X-ray images
[23] Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks
[24] Can AI help in screening viral and COVID-19 pneumonia?
[25] Systematic review of artificial intelligence techniques in the detection and classification of COVID-19 medical images in terms of evaluation and benchmarking: Taxonomy analysis, challenges, future solutions and methodological aspects
[26] Presentation of a new hybrid approach for forecasting economic growth using artificial intelligence approaches
[27] Application of gene expression programming and sensitivity analyses in analyzing effective parameters in gastric cancer tumor size and location
[28] Presentation of a developed sub-epidemic model for estimation of the COVID-19 pandemic and assessment of travel-related risks in Iran
[29] ImageNet classification with deep convolutional neural networks
[30] Anatomical classification of upper gastrointestinal organs under various image capture conditions using AlexNet
[31] Network in Network
[32] Going deeper with convolutions
[33] Deep residual learning for image recognition
[34] MobileNets: Efficient convolutional neural networks for mobile vision applications
[35] Detection of COVID-19 from chest X-ray images using artificial intelligence: An early review
[36] COVID-19 image data collection: Prospective predictions are the future
[37] Figure 1 COVID-19 chest X-ray dataset initiative
[38] Two public chest X-ray datasets for computer-aided screening of pulmonary diseases
[39] Labeled optical coherence tomography (OCT) and chest X-ray images for classification
[40] Squeeze-and-excitation networks
[41] Learning deep features for discriminative localization

This work was partly supported by King's College London and the China Scholarship Council, and has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service, funded by EPSRC Tier-2 capital grant EP/P020259/1.