title: A computer-aided diagnostic framework for coronavirus diagnosis using texture-based radiomics images
authors: Attallah, Omneya
date: 2022-04-11
journal: Digit Health
DOI: 10.1177/20552076221092543

The accurate and rapid detection of the novel coronavirus infection is very important to prevent the fast spread of the disease and thus to reduce its negative effects on many industrial sectors, especially healthcare. Artificial intelligence techniques, in particular deep learning, could help in the fast and precise diagnosis of coronavirus from computed tomography (CT) images. Most artificial intelligence-based studies used the original CT images to build their models; however, the integration of texture-based radiomics images and deep learning techniques could improve diagnostic accuracy for the novel coronavirus disease. This study proposes a computer-assisted diagnostic framework based on multiple deep learning and texture-based radiomics approaches. It first trains three Residual Network (ResNet) deep learning models with two types of texture-based radiomics images, generated with the discrete wavelet transform and the gray-level co-occurrence matrix, instead of the original CT images. Then, it fuses the texture-based radiomics deep feature sets extracted from each network using the discrete cosine transform. Thereafter, it further combines the fused texture-based radiomics deep features obtained from the three convolutional neural networks. Finally, three support vector machine classifiers are utilized for the classification procedure. The proposed method is validated experimentally on the benchmark severe acute respiratory syndrome coronavirus 2 (SARS-COV-2) CT image dataset. The accuracies attained indicate that using texture-based radiomics images (gray-level co-occurrence matrix, discrete wavelet transform) for training ResNet-18 (83.22%, 74.9%), ResNet-50 (80.94%, 78.39%), and ResNet-101 (80.54%, 77.99%) is better than using the original CT images (70.34%, 76.51%, and 73.42% for ResNet-18, ResNet-50, and ResNet-101, respectively). Furthermore, the sensitivity, specificity, accuracy, precision, and F1-score achieved using the proposed computer-assisted diagnostic framework after the two fusion steps are 99.47%, 99.72%, 99.60%, 99.72%, and 99.60%, which proves that combining the texture-based radiomics deep features obtained from the three ResNets has boosted its performance. Thus, fusing multiple texture-based radiomics deep features mined from several convolutional neural networks is better than using only one type of radiomics approach and a single convolutional neural network. The performance of the proposed computer-assisted diagnostic framework allows it to be used by radiologists in attaining fast and accurate diagnosis.

The novel coronavirus disease is caused by severe acute respiratory syndrome coronavirus 2 (SARS-COV-2), resulting in the current enduring pandemic.1 The infection rate has grown rapidly worldwide, exceeding 190 million cases and over 4 million deaths as of 31 July 2021.2 The fast international propagation of this novel disease has placed an enormous portion of the world's population into quarantine and has overwhelmed numerous industrial sectors, resulting in a worldwide financial crisis.
Currently, there are several vaccine choices; however, these vaccines will take a long time to reach all corners of the globe, especially developing countries. The known symptoms of coronavirus are sore throat, headache, myalgia, fever, chest ache, and dry cough.3 These symptoms may fully appear in an infected person within about 14 days. Nevertheless, in various cases no symptoms are apparent (asymptomatic cases).4,5 Asymptomatic persons might infect other individuals, which raises the risk to healthcare organizations and other industrial sectors. Consequently, it is extremely important to diagnose coronavirus precisely and promptly to control the infection and death rates and avert the threat to public health and industrial sectors. At present, the standard method to diagnose coronavirus is the real-time reverse transcription-polymerase chain reaction (RT-PCR) test; however, it has several drawbacks. The test exposes medical staff to infection risk. Moreover, it is expensive, time-consuming, and sometimes inaccurate.6,7 Thus, alternative approaches are required. The common sign of coronavirus is a lung infection that can be visually inspected using chest imaging modalities, especially computed tomography (CT).8 It has been shown that the CT imaging modality has a greater ability to diagnose coronavirus than the RT-PCR test.9 Radiologists manually analyze these images to recognize visual patterns of coronavirus infection, which is a complex, exhausting, time-consuming, and error-prone process. Therefore, there is a critical need to automate this process to facilitate the coronavirus diagnosis procedure and achieve more accurate, faster, and more effective results. Owing to the ongoing advancement of artificial intelligence (AI), including machine and deep learning, such methods have been extensively used in health and medical applications through computer-aided diagnostic systems.10,11 Recently, deep learning approaches have attracted several researchers in the medical and health informatics fields. In particular, convolutional neural networks (CNNs) have recently demonstrated great capacity for analyzing medical images of several diseases.12-18 Lately, CNNs have supported radiologists in the accurate diagnosis of coronavirus.19,20 Several deep learning-based studies have been conducted for coronavirus diagnosis through CT images. For example, Soares et al.21 constructed an explainable deep learning model achieving an accuracy of 97.38%, an F1-score of 97.31%, a precision of 99%, and a sensitivity of 95.53%. The study22 proposed a deep learning-based framework that fused a bidirectional long short-term memory (Bi-LSTM) network with a mixture density network (MDN) model, reaching an accuracy of 98.37%, a sensitivity of 98.87%, a precision of 98.74%, and an F1-score of 98.14%. The authors of the study23 proposed a deep learning model called CoviDenseNet, based on transfer learning with DenseNet, reaching an accuracy of 86.88%, a sensitivity of 87.41%, a specificity of 85.92%, an F1-score of 89.52%, and a precision of 91.76%. On the other hand, a customized CNN was created reaching 95% accuracy, 96% sensitivity, 95% F1-score, and 95% precision. Similarly, the study24 built a customized CNN model called "CTnet-10," attaining an accuracy of 82.1%. Furthermore, Zhao et al.25 proposed a modified version of the ResNet CNN, achieving an accuracy of 99.2%.
The major limitation of the earlier studies is that they utilized CNNs separately to carry out classification; nonetheless, some research articles have verified that merging the features or predictions of multiple CNNs can improve classification results.14,26-28 Alternatively, other studies used ensemble deep learning models,23 while others33 integrated several handcrafted features with deep features. Some of the previous studies suffer from the huge dimensionality of the features used in the classification step, which raises the complexity and duration of classification. Most of the former studies rely only on the spatial information extracted from the original CT images to accomplish a diagnosis; nonetheless, textural information obtained from radiomics images through texture analysis improves medical diagnosis.34 Other studies used hybrid methods based on deep learning models.35 The study36 constructed a deep denoising convolutional autoencoder (DDCAE) model to diagnose coronavirus from CT images in an unsupervised manner. The authors obtained hidden representations from the CT images to produce a target histogram and then used a distance metric with a threshold to assess a test CT scan. The main advantage of this technique is its low computational complexity; however, it considers only spatial information. Moreover, it is threshold dependent, meaning that the threshold value is conditional on the dataset, and changing it may affect the accuracy. Another study37 proposed a framework for coronavirus diagnosis based on the parallel integration and optimization of deep learning models. The authors extracted deep features from AlexNet and VGG-16, fused them in a parallel manner, and then selected significant features using an entropy-controlled firefly approach. These features were then fed to a support vector machine (SVM) classifier, reaching an accuracy of 98%. Although this framework selected a reduced number of features, the retained features are still redundant, and again the classification process depends only on the spatial information extracted from the CT images. On the other hand, the study38 developed a self-activated CNN with 32 deep layers. The authors employed transfer learning to extract deep features and used several machine learning classifiers to perform diagnosis, attaining a maximum accuracy of 99.4%. Even though the study38 achieved good performance, it used a huge number of features to train the classifiers. The study39 proposed a fully automated and effective deep learning-based approach, called LungINFseg, to segment coronavirus infection regions in lung CT scans. LungINFseg is based on a new module called the receptive-field-aware (RFA) module, which can expand the receptive field of segmentation models and improve their learning capability with no information loss.

Radiomics is a growing area in medical image analysis.40 The incorporation of radiomics and AI methods has enabled the precise diagnosis of several diseases.41 The main advantage of radiomics arises from its ability to obtain textural and other essential components of disease or tumor patterns from medical images.42 This information can assist AI techniques in accurately diagnosing the disease.43 Thus, in this study, radiomics images are utilized instead of the original CT images to diagnose coronavirus.
The proposed computer-assisted diagnostic (CAD) framework uses two types of radiomics images based on texture analysis: the discrete wavelet transform (DWT) and the gray-level co-occurrence matrix (GLCM). These images are initially used separately to train three CNN architectures. Afterward, deep features are extracted from each of these CNNs and fused using the discrete cosine transform (DCT), which also reduces their dimensionality. These reduced features are then further combined altogether (for the three CNNs). Finally, three machine learning classifiers are employed to classify these images into coronavirus and non-coronavirus. Most previous studies used the original CT images to build their models, relying only on the spatial information of such images; in other words, they trained deep learning models with only the original CT images. However, the integration of spatial-frequency information extracted from wavelet techniques could improve the diagnostic accuracy of the novel coronavirus disease.44 Although previous papers used the wavelet transform for coronavirus diagnosis, they used it as a feature extractor. In the proposed method, by contrast, the coefficients of the wavelet decomposition are converted to heatmap images and used as input images to train the CNN deep learning models. Furthermore, most studies that analyzed medical images using radiomics methods extracted radiomics texture features from the images and used them directly to train machine learning classifiers. Nevertheless, in this study, the GLCM is used to obtain textural information from the CT images, and these textural features are likewise converted to heatmap images and employed as input images to train the CNNs instead of the original CT images. In this study, we aimed to investigate whether using the radiomics images (heatmaps of GLCM and DWT) is better than employing the original CT images to train deep learning models for coronavirus diagnosis. Furthermore, we investigate whether fusing the spatial-frequency information of the DWT heatmap images and the textural information of the GLCM heatmap images is capable of boosting the diagnostic accuracy of each ResNet independently. Finally, we examine whether fusing the deep features obtained from the DWT and GLCM heatmap images of the three Residual Networks (ResNets) altogether can improve the performance of the three classifiers. To achieve these aims we used ResNets, as they are commonly utilized in the literature.45-48 In future work, more deep learning models could be employed.

The dataset used in this study is called SARS-COV-2 CT.21 It is a benchmark 2D CT dataset that is commonly used in the literature. This dataset contains a total of 2482 CT images. Among these images, 1252 are labeled as positive for coronavirus infection, and the remaining 1230 scans are labeled as non-coronavirus infection. The dimensions of the CT scans in the dataset vary from 119 × 104 to 416 × 512 pixels. Figure 1 exhibits samples of CT scans from the SARS-COV-2 CT dataset.

Deep learning methods have numerous architectures; among them, the CNN is the most widely used for medical applications, especially those analyzing medical images.49,50 A CNN is a multilayer neural network consisting of several different layers, the major ones being the convolutional, pooling, and fully connected layers.51 The convolutional layer is responsible for extracting attributes from the input images.
The process of extraction is done by convolving each input image with numerous filters followed by a non-linear activation function. The pooling layer lowers the dimensionality of the attributes produced by the convolutional layer, which helps make the representation invariant to small translations of the input. Lastly, the fully connected layer scores the class labels. In this study, three CNN architectures are used: ResNet-18, ResNet-50, and ResNet-101. ResNets are the most common deep learning models utilized in the literature. ResNets are also employed because they are able to converge effectively with an adequate computational load even as the number of layers grows, in contrast to AlexNet and Inception CNNs.52,53 The reason is that He et al.53 introduced a novel CNN module based on residual learning. This module contains shortcut connections (residuals) that skip some convolutional layers, which correspondingly speeds up and smooths CNN convergence as well as enhancing performance.31 In future work, more deep learning models could be employed.

Radiomics, in general, is an image analysis technique that aims to mine large volumes of quantitative information or features from radiological scans utilizing a range of computational approaches.54 These obtained image features involve measurements of shape, intensity, and texture.55 In particular, texture analysis refers to a group of methods that enhance the representation of abnormality heterogeneity by mining texture indicators from several imaging techniques such as magnetic resonance imaging (MRI), mammography, X-ray, and CT.56 The textural information extracted from medical images helps in clinical decision-making and reveals significant information that facilitates the diagnosis of the illness or abnormality. Among textural analysis techniques, the DWT and GLCM are widely adopted in several medical and health domains. These techniques usually boost the performance of the diagnosis process, especially when combined.57-59 Thus, radiomics based on these two textural analysis methods is employed in this study. GLCM is a second-order histogram approach that counts the distribution of gray levels among pairs of pixels. It measures how often each pairwise combination of gray levels occurs between a reference pixel and a neighboring pixel at a given distance and orientation. Consequently, several co-occurrence matrices are created, corresponding to the different pairwise pixel arrangements. Subsequently, each co-occurrence matrix is normalized by the sum of its elements so that its entries represent the relative frequencies of co-occurring gray levels.60 DWT is another textural analysis approach that conveys a time-frequency representation of an image utilizing a collection of orthogonal basis functions. DWT analyzes an image by convolving it with low-pass and high-pass filters, obtaining four groups of coefficients: the approximation coefficients and three groups of detail coefficients (horizontal, vertical, and diagonal).58,61

Proposed CAD framework

The proposed CAD framework consists of four phases: the image preprocessing and radiomics image generation phase, the ResNet training and feature extraction phase, the feature fusion and reduction phase, and finally the classification phase.
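To make the two texture-analysis operations concrete before the phases are detailed, the following is a minimal sketch of how GLCM and DWT heatmap images can be produced, assuming scikit-image, PyWavelets, and matplotlib. The file name, function names, and the averaging of the four orientations are illustrative assumptions, not taken from the paper's code.

```python
# Hypothetical sketch of the radiomics-image generation step.
import numpy as np
import pywt
import matplotlib.pyplot as plt
from skimage.feature import graycomatrix  # 'greycomatrix' in scikit-image < 0.19

def glcm_heatmap(gray_img, levels=8):
    """Quantize to `levels` gray levels, compute GLCMs at 0/45/90/135 degrees,
    and return the normalized co-occurrence matrix (averaged over orientations;
    the paper does not specify how the four angles are combined)."""
    q = np.floor(gray_img.astype(float) / 256 * levels).astype(np.uint8)
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(q, distances=[1], angles=angles,
                        levels=levels, symmetric=True, normed=True)
    return glcm[:, :, 0, :].mean(axis=-1)

def dwt_heatmap(gray_img, wavelet="haar"):
    """Single-level 2-D DWT; keep only the approximation coefficients."""
    cA, (cH, cV, cD) = pywt.dwt2(gray_img.astype(float), wavelet)
    return cA

img = plt.imread("ct_slice.png")     # hypothetical CT slice path
if img.ndim == 3:
    img = img.mean(axis=-1)          # collapse RGB to grayscale
img = (255 * img / img.max()).astype(np.uint8)

# Save the matrices as color heatmap images, later resized for the CNNs.
for name, mat in [("glcm8", glcm_heatmap(img)), ("dwt_haar", dwt_heatmap(img))]:
    plt.imsave(f"{name}_heatmap.png", mat, cmap="jet")
```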
In the first phase, radiomics images are generated using the DWT and GLCM methods. They are then resized and augmented. In the second phase, three ResNet CNNs are constructed and trained using the original images and the two types of radiomics images separately. Then, deep features are extracted from these CNNs. Next, in the third phase, the deep features extracted from each CNN trained with each type of radiomics method are fused using the DCT. These fused features are also reduced using the DCT. Then, these reduced features of the three CNNs are further combined altogether. Finally, three machine learning classifiers are used to perform the classification procedure. Figure 2 shows the flowchart of the proposed CAD framework.

Image preprocessing and radiomics image generation. In this phase, the images are analyzed using the two texture-based radiomics approaches, GLCM and DWT. For the GLCM, four angles are applied: 0°, 45°, 90°, and 135°. Next, heatmaps of the GLCM output and of the DWT approximation coefficients are plotted, representing the radiomics images. Afterward, these images, as well as the original CT images, are resized to 227 × 227 × 3 to match the size of the input layer of the ResNets. Finally, the training data are augmented using shearing (0, 50), scaling (0.85, 1.2), random translation (−35, 35), and rotation (−20, 20). Samples of texture-based radiomics images are shown in Figure 3.

ResNet training and feature extraction. In this step, three pre-trained ResNets, ResNet-18, ResNet-50, and ResNet-101, are constructed using transfer learning.62 Transfer learning is first used to alter the output layer size of the three ResNets previously trained on the ImageNet dataset to 2, matching the number of classes of the SARS-COV-2 CT dataset. Next, some parameters are adjusted, which will be discussed later in the parameter setup section. Then, these ResNets are trained using stochastic gradient descent with momentum. These CNNs are trained independently using the original images and the two types of radiomics images. Afterward, deep features are extracted using transfer learning from the last pooling layer of each CNN. The extracted deep features are of size 2048 for ResNet-50 and ResNet-101 and 512 for ResNet-18.

Feature fusion and reduction. The deep features extracted from each CNN trained with each type of radiomics image are fused using the DCT approach. The fusion process is done in two stages. In the first stage, the deep features extracted from each ResNet trained with the radiomics images (heatmap images of DWT and GLCM) are fused: they are first combined into one feature vector and fed as input to the DCT, which integrates them and reduces their dimensionality. In the second fusion stage, the reduced ResNet deep features generated after the DCT are combined altogether in a concatenated manner. The DCT is a method that decomposes input data into its low- and high-frequency components.63 The DCT does not perform a reduction step on its own; however, it compacts the majority of the input's significant information into a small set of coefficients, so a further reduction step can select a few coefficients to create the feature vectors.64 Thus, the DCT is used to fuse the deep features extracted from each CNN trained with each type of radiomics approach, and then a reduced set of DCT coefficients is chosen using zigzag scanning.
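A minimal sketch of this two-stage fusion follows, assuming NumPy and SciPy; it also includes the cross-network concatenation described next. For a 1-D feature vector the zigzag scan reduces to keeping the leading low-frequency coefficients, which is how it is approximated here; the feature vectors and coefficient counts (500/700) follow the paper, while all names and shapes are illustrative.

```python
# Sketch of the DCT-based fusion and reduction of deep features.
import numpy as np
from scipy.fft import dct

def dct_fuse(feat_dwt, feat_glcm, n_coeffs):
    """Stage 1: concatenate the two deep-feature vectors of one ResNet,
    transform with an orthonormal DCT, and keep the first n_coeffs
    low-frequency coefficients (1-D analogue of a zigzag scan)."""
    fused = np.concatenate([feat_dwt, feat_glcm])
    return dct(fused, type=2, norm="ortho")[:n_coeffs]

# Hypothetical deep-feature vectors for one CT image
# (512-D for ResNet-18, 2048-D for ResNet-50/101, as in the paper).
rng = np.random.default_rng(0)
r18 = dct_fuse(rng.normal(size=512), rng.normal(size=512), 500)
r50 = dct_fuse(rng.normal(size=2048), rng.normal(size=2048), 500)
r101 = dct_fuse(rng.normal(size=2048), rng.normal(size=2048), 700)

# Stage 2: concatenate the reduced vectors of the three ResNets into the
# final representation fed to the SVM classifiers.
final_features = np.concatenate([r18, r50, r101])
print(final_features.shape)  # (1700,)
```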
Finally, the reduced DCT coefficients (the deep features fused using the DCT) obtained for all CNNs are further fused and used to train the three SVM classifiers.

Classification. In the last phase of the proposed CAD framework, three SVM classifiers are constructed to identify coronavirus cases. These classifiers are the linear SVM (L-SVM), cubic SVM (C-SVM), and quadratic SVM (Q-SVM). Fivefold cross-validation is used to validate the results of the proposed approach. The classification phase is accomplished through three experiments. In the first experiment, an end-to-end deep learning classification is done to test the significance of the radiomics images for classification performance compared to the original CT images. In the second experiment, the deep features extracted from the three CNNs trained with either the original or the radiomics images are used individually to train the three SVM classifiers. In the third experiment, the fused deep features (using DCT) extracted from each CNN are used to train the three SVMs. Then, these fused features obtained from the three CNNs are further combined altogether and used to train the three SVM classifiers.

The results of the proposed CAD framework are validated using several statistical validation metrics, including the F1-score, precision, accuracy, specificity, and sensitivity (true positive rate (TPR)). Equations (1) to (5) are used to compute these measures:

Accuracy = (TP + TN) / (TP + TN + FP + FN) (1)
Sensitivity = TP / (TP + FN) (2)
Specificity = TN / (TN + FP) (3)
Precision = TP / (TP + FP) (4)
F1-score = 2 × (Precision × Sensitivity) / (Precision + Sensitivity) (5)

Also, the confusion matrices and the area under the curve (AUC) are employed. Here, TP is the number of coronavirus images correctly recognized, TN is the number of non-coronavirus images correctly identified, FN is the number of coronavirus images wrongly classified as non-coronavirus, and FP is the number of non-coronavirus scans mistakenly detected as coronavirus.

The CNN hyperparameters tuned include the mini-batch size, which is the number of samples involved in each weight update within an epoch. It was noted in 65 that, in practice, using a larger batch size degrades the quality of the CNN model, as measured by its ability to generalize. Large batch sizes tend to converge to sharp minimizers of both the training and testing objectives, and sharp minima result in weaker generalization. Conversely, small batch sizes regularly converge to flat minimizers and usually achieve the best generalization performance.66 Therefore, the mini-batch size is selected to be 10. The learning rate determines the step size taken at each iteration while moving toward a minimum of the loss function. Normally, larger learning rates permit the model to learn more rapidly, at the expense of reaching a suboptimal final set of weights, while smaller learning rates can permit the model to learn a more optimal or even globally optimal set of weights but result in a longer training time. In addition, too large a learning rate leads to large weight updates, and the performance of the model (e.g. the model loss on the training dataset) will fluctuate over the training epochs; such fluctuating performance is a result of diverging weights. Conversely, too small a learning rate may never converge or may become trapped in a suboptimal solution. Thus, in the experiments, the learning rate is chosen to be 0.001, which is neither too small nor too large. The maximum number of epochs is chosen to be 20, as increasing the number of epochs did not improve the performance but only increased the computational load.
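The paper does not name its deep learning framework, so the following is an assumed PyTorch re-implementation of the transfer-learning setup just described: a 2-class output layer, SGD with momentum (the momentum value 0.9 is an assumption; the text does not state it), learning rate 0.001, mini-batch size 10, and 20 epochs, with deep features taken from the last pooling layer.

```python
# Sketch of the transfer-learning and feature-extraction setup (assumed PyTorch).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # pre-trained on ImageNet
model.fc = nn.Linear(model.fc.in_features, 2)      # 2 classes: COVID / non-COVID

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)  # momentum assumed

def train(model, loader, epochs=20):
    """`loader` is assumed to yield (image, label) mini-batches of size 10,
    with the radiomics heatmap images resized to 227 x 227 x 3."""
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()

# Deep features are then taken from the last pooling layer (512-D for
# ResNet-18), e.g. by stripping the classification head:
feature_extractor = nn.Sequential(*list(model.children())[:-1])
with torch.no_grad():
    feats = feature_extractor(torch.randn(1, 3, 227, 227)).flatten(1)  # shape (1, 512)
```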
The validation frequency is chosen to be 173 so that the validation error is calculated once at the end of each epoch. The three ResNets are trained with stochastic gradient descent with momentum, as it can improve the rate of convergence and avoid local minima during convergence.67-69 Some network settings are selected to avoid overfitting, including batch normalization, which can reduce overfitting.70,71 Moreover, augmentation is utilized to enlarge the training dataset and avoid overfitting.72 Furthermore, we used the DCT to reduce the dimensionality of the features, as a large number of features used to train machine learning classifiers may lead to overfitting; thus, feature reduction techniques are essential to avoid overfitting.

This section discusses the results of the end-to-end deep learning classification procedure of the three ResNets, shown in Table 1. The table reports the accuracy of the ResNets trained with radiomics images of the DWT (heatmaps of the approximation coefficients) for several mother wavelets, including Haar (Daubechies-1), Symlets (Sym2), and discrete Meyer (Dmyer). It also includes the accuracy of the ResNets trained with radiomics images of the GLCM (heatmaps of the GLCM values) with several numbers of gray levels: 4, 8, 12, and 16. Table 1 indicates that the accuracies attained using texture-based radiomics images are higher for all three ResNets than those obtained with the original CT images. For ResNet-18, the highest accuracies are achieved using DWT-Haar (80.81%) and GLCM-8 (83.22%), both higher than the 70.34% obtained with the original CT images. Similarly, for ResNet-50, the highest accuracies of 78.39% and 80.94% are attained using DWT-Haar and GLCM-8, greater than the 76.51% achieved using the original CT images. Likewise, for ResNet-101, the highest accuracies are accomplished using DWT-Haar (77.99%) and GLCM-8 (80.54%), better than the 73.42% achieved with the original CT images. It can be concluded from Table 1 that the highest accuracies are attained using the GLCM-8 and DWT-Haar radiomics images.

As mentioned before, Table 1 indicated that the highest performance is achieved when the three ResNets are trained with radiomics images of the GLCM with 8 gray levels (GLCM-8) and the DWT with the Haar mother wavelet (DWT-Haar); therefore, the deep features of these ResNets are extracted and used in this experiment. The results of the three SVM classifiers trained with the deep features extracted from each CNN trained with the GLCM-8 and DWT-Haar radiomics images are illustrated in Table 2. Table 2 shows that the deep features of the DWT-based radiomics images perform better than those of the GLCM-based radiomics images. For the L-SVM classifier trained with the deep features of the DWT-based radiomics images, the accuracies attained are 95.4%, 98.6%, and 98.3%, which are higher than the 94.8%, 98.3%, and 98% obtained with the deep features of the GLCM-based radiomics images extracted from ResNet-18, ResNet-50, and ResNet-101, respectively. Similarly, the accuracies attained using the Q-SVM classifier trained with the deep features of the DWT-based radiomics images are 97.1%, 99.4%, and 98.8%, which are greater than the 95%, 98.9%, and 98.9% obtained with the GLCM-based radiomics images extracted from ResNet-18, ResNet-50, and ResNet-101, respectively, except for ResNet-101, for which the accuracy is almost the same.
Likewise, for the C-SVM classifier, accuracies of 97%, 99.6%, and 99.3% are achieved using the deep features of the DWT-based radiomics images. These accuracies are superior to the 95.2%, 99.1%, and 99% reached with the GLCM-based radiomics images extracted from ResNet-18, ResNet-50, and ResNet-101, respectively. The confusion matrices of the C-SVM classifier trained with the deep features extracted from the ResNet-18, ResNet-50, and ResNet-101 CNNs learned with the DWT-based radiomics images are shown in Figure 4.

After the texture-based radiomics (DWT-Haar and GLCM-8) deep features are extracted from each CNN, they are fused for every CNN using the DCT. Then, these fused deep features are reduced using a zigzag scan. Figure 5 shows the classification accuracy of the three SVM classifiers trained with the fused radiomics-based deep features versus the number of DCT coefficients. This figure shows that the highest accuracy is attained using around 500 DCT coefficients for ResNet-18 and ResNet-50 and 700 DCT coefficients for ResNet-101. Next, a further fusion process is performed to combine all the DCT coefficients obtained from the three CNNs. These fused DCT features are concatenated and then used to train the three SVM classifiers. The results after this second fusion step are also shown in this section. Table 3 shows the results after fusion with the DCT approach and compares them with the results obtained with the deep features extracted from the CNNs trained with the original CT images. Table 3 verifies that fusing the deep features of both the GLCM-8 and DWT-Haar radiomics images using the DCT yields performance superior to that obtained by the same classifiers trained with the deep features of the original CT images. This table also proves that using the fused deep features obtained from CNNs trained with texture-based radiomics images is better than using an individual deep feature set obtained from a single type of texture-based radiomics image (DWT or GLCM), as shown in Table 2.

The performance measures calculated after the second fusion process are shown in Table 4. This table indicates that the second fusion process has improved the classification accuracy of the three SVM classifiers, since they obtained higher accuracies than those achieved with the fused deep features of the two texture-based radiomics images (DWT-Haar + GLCM-8) shown in Table 3. The accuracies obtained after the second fusion procedure have increased to 99.00%, 99.54%, and 99.60% for the L-SVM, Q-SVM, and C-SVM classifiers, respectively. These accuracies are superior to those achieved using the first fusion step, which used the DCT method to combine the deep features obtained from the two texture-based radiomics images for each ResNet, as well as to those obtained with the deep features mined from the original images (Table 3). The receiver operating characteristic (ROC) curves and the corresponding AUCs obtained using the three SVMs after the second fusion step are shown in Figure 6. As noted from Figure 6, the AUCs achieved using the three SVM classifiers are equal to 1. Table 4 also shows that the proposed CAD framework obtained sensitivities of (99.28%, 99.45%, and 99.47%), specificities of (98.74%, 99.63%, and 99.72%), precisions of (98.75%, 99.63%, and 99.72%), and F1-scores of (99.00%, 99.54%, and 99.60%) using the L-SVM, Q-SVM, and C-SVM classifiers, respectively.
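A rough scikit-learn analogue of the classification stage is sketched below, under the assumption that the linear, quadratic, and cubic SVMs correspond to linear and polynomial kernels of degree 2 and 3 (MATLAB-style naming); the synthetic data stands in for the 1700-D fused feature matrix and the COVID / non-COVID labels.

```python
# Sketch of the three SVM variants with fivefold cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder for the fused deep features and labels (illustrative only).
X, y = make_classification(n_samples=200, n_features=1700, random_state=0)

classifiers = {
    "L-SVM": SVC(kernel="linear"),
    "Q-SVM": SVC(kernel="poly", degree=2),
    "C-SVM": SVC(kernel="poly", degree=3),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)   # scale features before the SVM
    scores = cross_val_score(pipe, X, y, cv=5)    # fivefold cross-validation
    print(f"{name}: mean accuracy = {scores.mean():.4f}")
```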
According to the studies 73,74, the results of the proposed CAD framework indicate that it is a reliable system, as the sensitivities are greater than 80% and both the precisions and the specificities are greater than 95%. The final architecture of the proposed framework is shown in Figure 7. The figure shows that, initially, the images are augmented and resized. They are then analyzed using DWT-Haar and GLCM-8. The approximation coefficients of the DWT and the values obtained by the GLCM are transformed into heatmap images. Next, these images are used individually to train the ResNets. Afterward, deep features are extracted from these ResNets and fused using the DCT, where 500 coefficients are chosen for ResNet-18 and ResNet-50 and 700 DCT coefficients for ResNet-101. Finally, these coefficients are fused altogether in a concatenated manner and used as inputs to the C-SVM classifier.

The results of the proposed CAD framework after the second fusion step are compared with those obtained by other related studies based on the SARS-COV-2 CT-Scan dataset. These results are shown in Table 5. The table demonstrates the competitiveness of the proposed CAD framework, as the performance measures obtained using it are superior to those obtained by the other related studies. The outstanding performance of the proposed CAD framework allows it to be used by radiologists to help them provide an accurate and fast diagnosis.

In this study, an automated CAD framework for coronavirus diagnosis was proposed. At first, the CT scans were analyzed using two texture-based radiomics approaches, the GLCM and DWT methods. Afterward, the heatmaps of the DWT and GLCM coefficients were plotted as images. Next, these texture-based radiomics images were resized, augmented, and then used to train three ResNets separately. The original CT images were also used to train the same ResNets. Then, deep features were extracted from the three ResNets trained with the texture-based radiomics images and the original CT images. Thereafter, the deep features obtained from the two types of texture-based radiomics images were fused and reduced using the DCT method for each ResNet. After that, the fused deep features of the three ResNets were further combined altogether by concatenation. Finally, three SVM classifiers were used to perform the classification into coronavirus and non-coronavirus. The classification procedure was done through three experiments. In the first experiment, an end-to-end deep learning classification was accomplished. The results of this experiment showed that training the ResNets using the texture-based radiomics images was better than training them with the original CT images: the accuracies achieved using ResNets trained with the heatmap images of GLCM and DWT are (83.22%, 74.90%) for ResNet-18, (80.94%, 78.39%) for ResNet-50, and (80.54%, 77.99%) for ResNet-101, higher than those obtained using the original CT images (70.34%, 76.51%, and 73.42% for ResNet-18, ResNet-50, and ResNet-101, respectively). In experiment II, the deep features extracted from the ResNets trained using the texture-based radiomics images (DWT or GLCM) were used individually to construct the three SVM classifiers.
The accuracies of this experiment indicated that the L-SVM trained with the deep features of the texture-based radiomics images, (94.8%, 98.3%, and 98%) for GLCM and (95.4%, 98.6%, and 98.3%) for DWT, are higher than those attained using the end-to-end deep learning classification of experiment I. In the last experiment, the deep feature sets obtained from each CNN trained with each type of texture-based radiomics image were fused and reduced using the DCT method. These fused deep feature sets were used individually to train the three SVMs. The results proved that fusing both sets of deep features obtained from each CNN trained using the texture-based radiomics images is superior to using only one type of deep feature set obtained from either DWT or GLCM. The results also indicated that training the SVM classifiers with the fused texture-based radiomics deep features is superior to using the deep features obtained from the original CT images. The accuracies attained for the L-SVM classifier after fusing the radiomics image features (heatmaps of DWT and GLCM) are 96.70%, 98.30%, and 98.32% for ResNet-18, ResNet-50, and ResNet-101, respectively. These accuracies are higher than the 94.98%, 98.20%, and 97.44% obtained using the L-SVM classifiers trained with deep features mined from the same ResNets trained with the original CT images. In the same experiment, the fused texture-based radiomics deep features of each CNN were further combined altogether and employed to train the three SVMs. The accuracies attained using the L-SVM, Q-SVM, and C-SVM classifiers (99.00%, 99.54%, and 99.60%) demonstrated that combining the texture-based radiomics deep features of the three CNNs improved the performance of the proposed CAD framework. The comparison of the proposed CAD framework with other related studies verified its superiority, as it attained an accuracy of 99.60%, higher than the accuracies obtained by the related studies, which range from 86.88% to 99%. Accordingly, it can be used to assist radiologists in the automatic diagnosis of coronavirus accurately and rapidly. Motivated by the promising results, we emphasize the efficiency of the proposed approach: it can be employed as a computer-assisted tool for the automatic clinical diagnosis of several diseases and tumors, such as brain, lung, and breast tumors. It can also be used with other imaging modalities such as X-ray, MRI, and mammography. Noise in real-world CT images is a well-known problem that reduces diagnostic accuracy. This study did not examine this problem; it is one of the limitations of the proposed study and will be addressed in our future work. Furthermore, the study did not consider optimization techniques for feature selection and network hyperparameter selection, which will be addressed in upcoming work. Also, the study did not investigate the problem of discriminating coronavirus from other forms of pneumonia and chest diseases, which will be addressed in future work. Further work will also consider using more deep learning and feature reduction techniques. Upcoming work will also consider studying the effect of using different combinations of radiomics and fusion methods as well as other texture-based analysis methods.

Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Contributorship: This is a single-author paper;
the only contributor is the author, Omneya Attallah.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

Ethical approval: Not applicable, because this article does not contain any studies with human or animal subjects.

Informed consent: Not applicable, because this article does not contain any studies with human or animal subjects.

ORCID iD: Omneya Attallah https://orcid.org/0000-0002-2657-

Trial registration: Not applicable, because this article does not contain any clinical trials.

References
1. COVID-19 diagnosis and management: a comprehensive review
2. A deep learning based approach for automatic detection of COVID-19 cases using chest X-ray images
3. Clinical features of patients infected with 2019 novel coronavirus in Wuhan
4. Epidemiology and clinical features of COVID-19: a review of current literature
5. Estimation of the asymptomatic ratio of novel coronavirus infections (COVID-19)
6. Comparison of nasopharyngeal and oropharyngeal swabs for SARS-CoV-2 detection in 353 patients received tests with both specimens simultaneously
7. ECG-BiCoNet: an ECG-based pipeline for COVID-19 diagnosis using bi-layers of deep features integration
8. CT imaging features of 2019 novel coronavirus (2019-nCoV)
9. Chest X-ray findings and temporal lung changes in patients with COVID-19 pneumonia
10. Bayesian neural network approach for determining the risk of re-intervention after endovascular aortic aneurysm repair. Proceedings of the Institution of Mechanical Engineers
11. Using multiple classifiers for predicting the risk of endovascular aortic aneurysm repair re-intervention through hybrid feature selection. Proceedings of the Institution of Mechanical Engineers
12. GASTRO-CADx: a three stage framework for diagnosing gastrointestinal diseases
13. A framework for breast cancer classification using multi-DCNNs
14. Histo-CADx: duo cascaded fusion stages for breast cancer diagnosis from histopathological images
15. Deep learning techniques for automatic detection of embryonic neurodevelopmental disorders
16. DIAROP: automated deep learning-based diagnostic tool for retinopathy of prematurity
17. MB-AI-His: histopathological diagnosis of pediatric medulloblastoma and its subtypes via AI
18. CoMB-Deep: composite deep learning-based pipeline for classifying childhood medulloblastoma and its classes
19. A fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis
20. COVID-19 diagnosis system by deep learning approaches
21. SARS-CoV-2 CT-scan dataset: a large dataset of real patients CT scans for SARS-CoV-2 identification
22. Deep bidirectional classification model for COVID-19 disease infected patients
23. COVID-Nets: deep CNN architectures for detecting COVID-19 using chest CT scans
24. Diagnosis of COVID-19 using CT scan images and deep learning techniques
25. Deep learning for COVID-19 detection based on CT images
26. Deep learning for EEG motor imagery classification based on multi-layer CNNs feature fusion
27. Multi-feature fusion CNNs for drosophila embryo of interest detection
28. Deep feature fusion for iris and periocular biometrics on mobile devices
29. The ensemble deep learning model for novel COVID-19 on CT images
30. Automated detection of COVID-19 using ensemble of transfer learning with deep convolutional neural network based on CT scans
31. MULTI-DEEP: a novel CAD system for coronavirus (COVID-19) diagnosis from CT images using multiple convolution neural networks
32. A novel hand-crafted with deep learning features based fusion model for COVID-19 diagnosis and classification using chest X-ray images
33. FUSI-CAD: coronavirus (COVID-19) diagnosis based on the fusion of CNNs and handcrafted features
34. Can radiomics features boost the performance of deep learning upon histology images
35. Review on COVID-19 diagnosis models based on machine learning and deep learning approaches
36. A novel unsupervised approach based on the hidden features of deep denoising autoencoders for COVID-19 disease detection
37. COVID-19 case recognition from chest CT images by deep learning, entropy-controlled firefly optimization, and parallel feature fusion
38. A self-activated CNN approach for multi-class chest-related COVID-19 detection
39. LungINFseg: segmenting COVID-19 infected regions in lung CT images based on a receptive-field-aware deep learning framework
40. Risk score generated from CT-based radiomics signatures for overall survival prediction in non-small cell lung cancer
41. Radiomics-based machine learning model for efficiently classifying transcriptome subtypes in glioblastoma patients from MRI
42. Intelligent dermatologist tool for classifying multiple skin cancer subtypes by incorporating manifold radiomics features categories
43. From handcrafted to deep-learning-based cancer radiomics: challenges and opportunities
44. COVID-19 lesion detection and segmentation: a deep learning method
45. Using handpicked features in conjunction with ResNet-50 for improved detection of COVID-19 from chest X-ray images
46. COV19-CNNet and COV19-ResNet: diagnostic inference engines for early detection of COVID-19
47. CO-ResNet: optimized ResNet model for COVID-19 diagnosis from X-ray images
48. COVID-19 detection based on image regrouping and ResNet-SVM using chest X-ray images
49. Medical image analysis using convolutional neural networks: a review
50. Convolutional neural networks in medical image understanding: a survey
51. A survey of convolutional neural networks: analysis, applications, and prospects
52. Application of deep transfer learning for automated brain abnormality classification using MR images
53. Deep residual learning for image recognition
54. Radiomics and texture analysis in laryngeal cancer. Looking for new frontiers in precision medicine through imaging analysis
55. FLT PET radiomics for response prediction to chemoradiation therapy in head and neck squamous cell cancer
56. Prognostic value of textural indices extracted from pretherapeutic 18-F FDG-PET/CT in head and neck squamous cell carcinoma
57. Texture analysis methods for medical image characterisation
58. Hybrid discrete wavelet transform and Gabor filter banks processing for features extraction from biomedical images
59. Fetal brain abnormality classification from MRI images of different gestational age
60. Automated screening of MRI brain scanning using grey level statistics
61. Image processing by using different types of discrete wavelet transform
62. Transfer learning using computational intelligence: a survey
63. An approach for streaming data feature extraction based on discrete cosine transform and particle swarm optimization
64. Feature extraction using discrete cosine transform and discrimination power analysis with a face recognition technology
65. On large-batch training for deep learning: generalization gap and sharp minima
66. Efficient mini-batch training for stochastic optimization
67. Understanding and detecting convergence for stochastic gradient descent with momentum
68. An improved analysis of stochastic gradient descent with momentum
69. Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
70. Dropout vs. batch normalization: an empirical study of their impact to deep learning
71. Batch normalization: accelerating deep network training by reducing internal covariate shift
72. Understanding data augmentation for classification: when to warp?
73. An effective mental stress state detection and evaluation system using minimum number of frontal brain electrodes
74. The essential guide to effect sizes: statistical power, meta-analysis, and the interpretation of research results
75. A deep learning and Grad-CAM based color visualization approach for fast detection of COVID-19 cases using chest X-ray and CT-scan images
76. COVID CT-net: a deep learning framework for COVID-19 prognosis using CT images
77. Prediction of COVID-19 from chest CT images using an ensemble of deep learning models
78. Fuzzy rank-based fusion of CNN models using Gompertz function for screening COVID-19 CT scans
79. An approach to the classification of COVID-19 based on CT scans using convolutional features and genetic algorithms