title: Multi-Channel Based Image Processing Scheme for Pneumonia Identification
authors: Nneji, Grace Ugochi; Cai, Jingye; Deng, Jianhua; Monday, Happy Nkanta; James, Edidiong Christopher; Ukwuoma, Chiagoziem Chima
date: 2022-01-27
journal: Diagnostics (Basel)
DOI: 10.3390/diagnostics12020325

Pneumonia is a prevalent, severe respiratory infection that affects the distal airways and alveoli. Across the globe, it is a serious public health issue that causes high mortality among children below five years of age and among elderly people with pre-existing chronic conditions. Pneumonia can be caused by a wide range of microorganisms, including viruses, fungi, and bacteria, whose prevalence varies greatly across the globe. The spread of the disease has drawn the attention of computer-aided diagnosis (CAD) research. This paper presents a multi-channel-based image processing scheme to automatically extract features and identify pneumonia from chest X-ray images. The proposed approach aims to address the problem of low image quality and to identify pneumonia in CXR images. Three channels of CXR images, namely, the Local Binary Pattern (LBP), Contrast Enhanced Canny Edge Detection (CECED), and Contrast Limited Adaptive Histogram Equalization (CLAHE) CXR images, are processed by deep neural networks. CXR-related features of LBP images are extracted using a shallow CNN, features of the CLAHE CXR images are extracted by a pre-trained Inception-V3, whereas the features of CECED CXR images are extracted using a pre-trained MobileNet-V3. The final feature weights of the three channels are concatenated, and softmax classification is utilized to determine the final identification result. According to the experimental results, the proposed network can accurately classify pneumonia. Tested on publicly available datasets, the proposed method reports an accuracy of 98.3%, a sensitivity of 98.9%, and a specificity of 99.2%. Compared with the single models and the state-of-the-art models, our proposed network achieves comparable performance.

Pneumonia is an infectious lung illness in humans that affects one or both lungs and is caused by fungi, bacteria, and viruses, among other microorganisms. Pneumonia occurs as a result of pathogen-caused inflammation [1], which causes the alveoli in the lungs to fill up with pus or fluid, reducing oxygen (O2) and carbon dioxide (CO2) exchange between the lungs and blood and making it difficult for the infected person to breathe. Other causes of pneumonia are food aspiration and chemical exposure. Furthermore, patients with cancer, HIV/AIDS, hepatic disease, diabetes, cardiovascular diseases, chronic respiratory diseases, and other comorbidities are vulnerable to pneumonia [1-3]. Because this lung illness is often not identified at an early stage, children below the age of five and elderly people are particularly susceptible to it. There are several methods to diagnose pneumonia, including blood tests, pulse oximetry, bronchoscopy, sputum tests, pleural fluid culture, chest X-ray, magnetic resonance imaging, and CT scans. Blood tests are carried out to confirm an infection and to attempt to identify the organism causing it. However, this method does not always allow an accurate identification. A pulse oximetry test determines how much oxygen is in the blood; pneumonia may prevent the lungs from moving enough oxygen into the circulatory system.
Another method of pneumonia diagnosis is the sputum test, in which the patient coughs deeply and a sample of fluid from the lungs (sputum) is obtained and tested to help determine the source of the illness. A culture method called pleural fluid culture requires collecting a fluid sample from the pleural region by inserting a needle between the patient's ribs; the sample is then tested to help diagnose the kind of infection. A chest X-ray is used to diagnose pneumonia and evaluate the extent and location of the illness; however, it cannot tell which microorganism caused the pneumonia. Finally, for a clearer view of the lungs, CT scans are highly recommended by medical practitioners. Citizens older than 65 years with serious symptoms and underlying health conditions are advised to undergo imaging examinations such as CT scans [4,5]. In most cases, the causative organism determines how pneumonia is treated. Antifungal treatments are used to treat fungal pneumonia, antiviral medications are used to treat viral pneumonia, including influenza, SARS, and MERS, and antibiotics are used to treat bacterial pneumonia [6]. Even when pneumonia is present, the diagnosis still depends on the doctor's knowledge and experience. However, as the number of infected patients increases, it becomes difficult for radiologists to complete the screening process within a constrained time; therefore, there is a need for an AI diagnostic system for the classification of pneumonia. Deep learning (DL) algorithms have demonstrated strong performance in the classification and detection of pneumonia-related ailments and provide higher accuracy rates than other methods. Notably, DL can identify hidden image features that might never be detected by medical professionals. Among DL methods, the convolutional neural network (CNN) is the technique most often utilized in medical systems because of its capacity for feature extraction and for learning to differentiate between multiple classes. The transfer learning (TL) technique has made it simpler to retrain deep neural networks quickly and accurately on different datasets [7-9]. Introducing wavelets into CNNs, Happy et al. [10] proposed a CNN model integrated with wavelet multi-resolution analysis to effectively classify COVID-19 pneumonia from radiographs. Interestingly, Happy et al. [11] presented a case study to evaluate the capability of multi-resolution analysis in diagnosing COVID-19 pneumonia. Several applications of computer vision are so complicated that they cannot be achieved using only one algorithm, which has necessitated the creation of models that combine two or more of the methods evaluated. Since models are chosen based on the problem's requirements and the features to be extracted, the weighted fusion technique incorporates more than one model to address this issue. This technique was created to help single models overcome their flaws and solidify their strengths by aggregating the features extracted from each single model in a weighted manner. This method reduces generalization error and minimizes prediction variance [12]. Thus, the goal of this study is to analyze the performance of weighted fusion of deep neural network models for multi-class pneumonia identification. The following are the main contributions of our paper:
• We preprocess our CXR images into three forms, namely: LBP, CLAHE, and CECED.
• The features of each preprocessed image are extracted individually: the LBP CXR features are extracted using the shallow CNN, the CECED CXR features are extracted using MobileNet-V3, and the CLAHE CXR features are extracted using Inception-V3.
• The feature vectors of these preprocessed CXR images are fused in a weighted manner for a robust prediction result.
• We evaluate the performance of each of the single models and our proposed model in this study.
The remainder of this paper is organized as follows: related work on pneumonia disease is discussed in Section 2; Section 3 describes our proposed framework; the experimental results and evaluation of our model are presented in Section 4; our discussion is presented in Section 5; finally, Section 6 presents the conclusion. In recent years, several state-of-the-art publications have used the deep learning (DL) approach for the automatic classification and detection of pneumonia from X-ray images. This section reviews and examines the most up-to-date approaches for detecting and classifying pneumonia with DL. Various investigations and research studies based on artificial intelligence and deep learning have been conducted in the area of disease diagnosis using medical imagery such as chest X-rays (CXR), ultrasound scans, computed tomography (CT) scans, MRI scans, and so on. Deep learning is perhaps the most widely used and reliable medical imaging approach available. It is a quick and accurate way of diagnosing a variety of ailments. Some models have been explicitly trained to classify different categories of a specific ailment based on the disease type. These models have achieved satisfactory results in medical image analysis for the detection of cardiac abnormalities, tumors, cancer, and a variety of other applications. Shah et al. [13] used deep learning to distinguish between CT scan images of COVID-19-infected and non-infected patients with a self-developed framework called CTnet-10, achieving an accuracy of 82.1%. To further enhance the accuracy, different pretrained models were applied to the CT scan dataset, and VGG-19 attained a better performance of 94.5%. Deep neural networks also have the ability to improve the output of instruments utilized in pharmaceutical processes for analytical technology. Maruthamuthu et al. [14] developed a technique for identifying microbial contamination using DL strategies based on Raman spectroscopy. The categorization of numerous samples consisting of microorganisms and bacteria mixed with Chinese Hamster Ovary cells achieved an accuracy of 95% to 100% using a convolutional neural network (CNN). Considering a setting with a very large amount of unlabeled data, Maruthamuthu et al. [15] demonstrated the superiority of a data-driven strategy over data analytics and machine learning techniques. The data analytics strategy, however, has the potential to improve the development of efficient proteins and affinity ligands, affecting the downstream refinement and processing of mAb products, according to the authors. The authors propose a deep neural network approach, often called a black box, for the construction of soft sensors, which uses accumulated past data from process operation that is not present in a mechanistic or phenomenological approach.
The data are incorporated to obtain a polynomial expression that generates the product as a function of inputs such as cell mass, substrate concentration, and initial product (streptokinase). Given enough data collected from various sources under different conditions, an accurate data fit with automated error correction is utilized to compute the major outputs, such as the number of viable cells and the amount of product, from the calibrated input parameters such as inoculum and substrate. Even though the molecular relationship of these variables to cell metabolism is uncertain, the deep learning model accounts for the impact of other conditions such as excitation, individual critical media properties, mixing rate, and variations in metabolic concentrations. An application of the Inception-V3 model is used by Kermany et al. [16] to classify different types of pneumonia infections in pediatric patients. This method extracts features and utilizes the area under the curve (AUC) as a metric to categorize bacterial and viral pneumonia. Santosh et al. [17] suggested a method for detecting pulmonary problems by comparing the symmetrical shape of the CXR lung region. Roux et al. [18] sought to determine the prevalence, intensity, and adverse outcomes of pneumonia in infants during their first year of life in a South African cohort study. They set up pneumonia monitoring systems and recorded outpatient pneumonia and pneumonia that required hospitalization. In addition, combined Poisson regression was used to calculate pneumonia incidence rate ratios. Table 1 gives a summary of the related works, with the following observations:
• The most popular medical imaging mode for the classification and detection of pneumonia-related ailments is the chest X-ray.
• The publications focus on either binary classification or multi-class classification, although just a few considered the multi-class task.
• Evaluation metrics such as accuracy, sensitivity, and specificity were utilized to evaluate the efficacy of the DL approaches, whereas precision, recall, and/or F1-score were used in [25,26] to evaluate model performance.
However, none of these works considered low image quality as a challenge for better pneumonia classification performance. Therefore, we take this into consideration by preprocessing our images into LBP, CECED, and CLAHE channels for a multi-channel weighted fusion technique using neural networks for the identification of pneumonia. Additionally, we carried out a study to compare the performance of each single model and the weighted fusion model. The methodology employed in this study is detailed in this section. Data collection, image preprocessing, and the transfer learning networks for feature extraction and classification are the three phases of the proposed method. The procedure of the proposed method is presented in the subsequent subsections. This study utilized pneumonia datasets from two public domains. The first dataset contains 3029 scans of bacterial pneumonia, 8851 scans of healthy patients, and 2983 scans of viral pneumonia obtained from the Kaggle RSNA database [33]. The second dataset is taken from the authors of [34] and consists of 3616 CXR images of patients diagnosed with COVID-19. In all, we only considered 1000 CXR images each for the four classes (COVID-19, virus, bacteria, and normal), as explicitly illustrated in Table 2.
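As a rough illustration of how the balanced four-class subset in Table 2 could be assembled, the sketch below samples 1000 images per class and applies the 80%/20% train/test split mentioned later in the experimental setup. The folder layout, file format, and the use of scikit-learn's train_test_split are illustrative assumptions, not details taken from the paper.

```python
import random
from pathlib import Path
from sklearn.model_selection import train_test_split

random.seed(42)
ROOT = Path("data")  # hypothetical layout: data/<class_name>/*.png
CLASSES = ["covid19", "viral", "bacterial", "normal"]

paths, labels = [], []
for label, cls in enumerate(CLASSES):
    files = sorted((ROOT / cls).glob("*.png"))
    files = random.sample(files, 1000)  # 1000 CXR images per class, as in Table 2
    paths += files
    labels += [label] * len(files)

# 80/20 train/test split, stratified so every class keeps the same proportion
train_paths, test_paths, train_y, test_y = train_test_split(
    paths, labels, test_size=0.2, stratify=labels, random_state=42)
print(len(train_paths), len(test_paths))  # 3200, 800
```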
It is well known that data quality can be affected by factors such as noise, resolution, and artifacts; hence, using such data directly in an algorithm may result in inaccurate outcomes. A data preprocessing phase could help in eliminating or reducing the noise and increasing the data quality, thereby improving model performance. In this study, we use the Local Binary Pattern (LBP), Contrast-Enhanced Canny Edge Detection (CECED), and Contrast Limited Adaptive Histogram Equalization (CLAHE) for our data preprocessing. The LBP descriptor is often used to collect textural information about a specific object. The LBP code of a pixel is determined by comparing its value to those of its neighboring pixels [13]. The left portion of Figure 1 depicts the pixel values in a local region, whereas the right part depicts the binary LBP coding centered on the middle pixel. The LBP value of a pixel can be computed after it has been efficiently encoded using LBP coding. Equation (1) gives the mathematical expression of LBP:

LBP = ∑_{n=0}^{N−1} S(g_n − g_c) · 2^n, (1)

where S(g_n − g_c) denotes the indicator function applied to the thresholded binary value obtained by subtracting the center pixel value from the neighboring pixel value, N is the total number of pixels surrounding the center pixel for the 3 × 3 window size (since LBP considers 9 pixels at a time), g_c represents the center pixel value, and g_n denotes the neighboring pixel values. After thresholding, a difference greater than or equal to zero is coded as "1" and a difference less than zero is coded as "0". Figure 1 illustrates the calculation of the pneumonia LBP CXR image and Figure 2 presents the LBP preprocessing. CLAHE was used in this study to improve the image's contrast and features by making abnormalities more noticeable. Among the histogram equalization family, Contrast Limited Adaptive Histogram Equalization (CLAHE) is more natural in appearance and useful in reducing noise amplification; we have investigated CLAHE and applied it to our dataset, as shown in Figure 3. A full explanation of the CLAHE approach is given below to demonstrate its effectiveness:
• The first stage of the CLAHE technique is the generation of the image transformation using the bin values of the histogram.
• Following that, using the clip boundary, the contrast is confined to a binary count from 0 to 1. Before the image segment is processed, the clip boundary is added to the image.
• To prevent mapping background areas to gray scale, a specific bin value from the histogram region is used to create the entire image region. The clip boundary is used together with the histogram clip to obtain better mapping.
• Finally, the finished CLAHE image is created by computing the image's regions, then extracting, mapping, and interpolating all of the image pixels to get the most out of the image.
As shown in Figure 4, edges are made up of crucial and relevant information and features. By employing an edge detector on an image, the amount of data that has to be processed can be reduced, and information that is deemed less significant can be filtered out. The CEED-Canny strategy combines local morphological contrast enhancement and the Canny edge detection technique used in [24]; its steps are listed below, after the preprocessing sketch.
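Before listing the CECED steps, the sketch below illustrates the three preprocessing channels described above using OpenCV and scikit-image. The specific parameter values (LBP neighborhood, CLAHE clip limit and tile size, Canny thresholds) and the top-hat/bottom-hat form of the morphological contrast enhancement are not stated in the paper; they are assumptions chosen for illustration.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_channel(gray):
    # 8 neighbors on a radius-1 circle, i.e., the 3x3 window of Equation (1)
    lbp = local_binary_pattern(gray, P=8, R=1, method="default")
    return cv2.normalize(lbp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def clahe_channel(gray, clip_limit=2.0, tile=(8, 8)):
    # Contrast Limited Adaptive Histogram Equalization
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    return clahe.apply(gray)

def ceced_channel(gray, low=50, high=150):
    # One common form of local morphological contrast enhancement (top-hat/bottom-hat),
    # followed by Gaussian smoothing and Canny edge detection (gradient computation,
    # non-maximum suppression, and hysteresis thresholding happen inside cv2.Canny).
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    enhanced = cv2.add(cv2.subtract(gray, blackhat), tophat)
    smoothed = cv2.GaussianBlur(enhanced, (5, 5), 0)
    return cv2.Canny(smoothed, low, high)

gray = cv2.imread("example_cxr.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
channels = {
    "lbp": lbp_channel(gray),
    "clahe": clahe_channel(gray),
    "ceced": ceced_channel(gray),
}
```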
The steps are as follows:
• Collection of the original pixel values, as well as the local minimum and maximum;
• The image's morphological contrast is increased;
• To reduce noise, Gaussian smoothing is applied;
• The intensity gradient of the image is determined;
• A non-maximum suppression method is utilized;
• The hysteresis thresholding technique is adopted.
This article uses three CNN architectures: a shallow CNN, a pretrained MobileNet-V3, and a pretrained Inception-V3 as feature extractors, with their layers trained and/or frozen. To prevent the model from overfitting, dropout and a regularizer are applied. For automatic extraction of pneumonia-related features from LBP CXR images, we develop a shallow CNN model, that is, a lightweight convolutional neural network constructed from scratch to process LBP CXR images. In our study, we constructed a three-layer convolutional neural network which we call the shallow CNN, as illustrated in Figure 5. It has an input layer, three convolution ("C") and subsequent sub-sampling ("S") layers, and a feature vector ("fv") layer, as shown in Figure 5. The first convolution layer ("C1") employs 32 filters with a 7 × 7 convolution kernel that focuses on the detailed information of possible indicators of pneumonia. This layer is followed by a sub-sampling layer ("S1"), which reduces the image to half its original size using max-pooling (with kernel size 2 × 2). To map the previous layer, a new convolution layer ("C2") conducts 64 convolutions with a 3 × 3 kernel, followed by another sub-sampling layer ("S2") with a 2 × 2 kernel. Lastly, a convolution layer ("C3") with 128 filters and a 3 × 3 kernel is applied, followed by max-pooling (with kernel size 2 × 2). Finally, we arrive at a 512-neuron feature vector after the output is passed to the fully connected layer ("fv"). Adding "ReLU" activation after sub-sampling layers "S1" and "S2" guarantees the capacity to handle nonlinear data. Over-fitting can be avoided by utilizing the "dropout" operation [31] between the "S" layers ("S1" and "S2") and the "fv" layer (the parameter was set to 0.5). We utilized the MobileNet-V3 network in this extraction phase because of its efficient performance in image classification and fast convergence. Table 3 depicts the parameters of the network. The model takes an input image of size 224 × 224. It uses depth-wise separable convolutions, which factorize a standard convolution into a depth-wise convolution followed by a point-wise (1 × 1) convolution rather than a single standard 2D convolution. This allows for a smaller and more efficient model with a smaller memory footprint and fewer training parameters. The model is divided into two blocks: the residual block with a stride of 1 and the shrinking block with a stride of 2. Each block is made up of three layers: a 1 × 1 convolution with ReLU6, a depth-wise convolution, and another 1 × 1 convolution with a different non-linearity. A few changes were made to simplify our network by replacing the dense layer with one of dimension 1 × 512, as shown in Figure 6. The high performance of Inception-V3 [25] is due to a variety of network connection approaches, including batch normalization, using MLP convolutional layers to replace linear convolutional layers, and factorizing convolutions with larger kernel sizes. These methods significantly reduce the number of network parameters as well as the computational cost of the network, allowing it to be built considerably deeper and with more nonlinear expressive capacity than typical CNN models.
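Before turning to the Inception-V3 branch, a Keras sketch of the shallow CNN described above is given below. The single grayscale input channel and the exact placement of the ReLU and dropout operations are approximations of the description; the 112 × 112 input size is taken from the experimental setup that follows.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_shallow_cnn(input_shape=(112, 112, 1)):
    """Three C/S blocks followed by a 512-neuron feature vector ("fv"), as in Figure 5."""
    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Conv2D(32, (7, 7), padding="same", activation="relu")(inputs)  # C1
    x = layers.MaxPooling2D((2, 2))(x)                                         # S1
    x = layers.Dropout(0.5)(x)
    x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x)        # C2
    x = layers.MaxPooling2D((2, 2))(x)                                         # S2
    x = layers.Dropout(0.5)(x)
    x = layers.Conv2D(128, (3, 3), padding="same", activation="relu")(x)       # C3
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Flatten()(x)
    fv = layers.Dense(512, activation="relu", name="fv")(x)                    # feature vector
    return models.Model(inputs, fv, name="shallow_cnn")

shallow_cnn = build_shallow_cnn()
shallow_cnn.summary()
```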
The Inception-V3 model used in this paper was pretrained on the huge ImageNet dataset, and the knowledge gained through transfer learning was then transferred to the CXR dataset to perform pneumonia identification. All layers before the fully connected (FC) and softmax layers were frozen. The softmax layer is eliminated and a new dense layer is trained for extracting the deep-level features of the CXR images by continuously modifying the network's parameters, resulting in a feature vector of size 1 × 512, as seen in Figure 7. Similar to the authors in [25], average pooling of 8 × 8 is employed instead of the traditional fully connected layer to flatten the feature vector, as seen in Figure 7. According to the proposed model shown in Figure 8, the shallow CNN is used to extract the feature vector (fv1) from the LBP CXR images, the CLAHE CXR images' feature vector (fv2) is retrieved using the pretrained Inception-V3, and the pretrained MobileNet-V3 is utilized to extract the feature vector (fv3) from the CECED CXR images. After flattening, each feature vector is connected to two dense layers. The first dense layers for fv1, fv2, and fv3 are fc1_1, fc2_1, and fc3_1, respectively, while the second dense layers are fc1_2, fc2_2, and fc3_2, respectively, as shown in Figure 8. Using fc1_2, fc2_2, and fc3_2, the network captures the distances between various lung properties and reveals them. Furthermore, fc1_2, fc2_2, and fc3_2 are fused in a weighted way into f1 to generate a fused vector. Softmax is used to classify pneumonia CXR images based on the fused feature vector. As a cost function, the categorical cross-entropy is utilized, given in Equation (2):

L = −∑_{k=1}^{n} t_k · log(p_k), (2)

where n represents the number of class labels, p_k is the predicted output for the k-th class, and t_k is the corresponding label. The probability of event k occurring is t_k, meaning that the total sum of t_k is 1, which implies that only one event is possible. The negative sign minimizes the loss when the two distributions come closer to each other. In the last layer of our proposed weighted model with the categorical cross-entropy loss function, the softmax function is utilized as a classifier to emphasize the highest values and suppress those that are far below the maximum value, as well as to normalize the outputs to a total sum of 1, as illustrated in Equation (3):

p_k = e^{t_k} / ∑_{j=1}^{n} e^{t_j}, (3)

where e^{t_k} is the exponential term of the input vector, n is the number of class labels, p_k is the output vector of the softmax, and ∑_{j=1}^{n} e^{t_j} is the normalization term, which divides the exponential term to produce a real value greater than 0. This section gives thorough explanations of (1) the experimental setup and configuration; (2) the performance metrics utilized for the evaluation of the proposed framework; (3) the performance of the individual networks; (4) the performance of the weighted fusion technique; and (5) comparisons with other research models. Our proposed model is evaluated using the Keras framework with the Python programming language on an NVIDIA GTX 1080 GPU. A data split ratio of 80% to 20% is used for training and testing, respectively. All the data are resized for each single model: for the shallow CNN, the LBP input images are resized to 112 × 112; for the pretrained MobileNet-V3 network, the CECED CXR input images are resized to 224 × 224; and lastly, for the pretrained Inception-V3 network, the CLAHE CXR input images are resized to 299 × 299.
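Putting the three branches together, the sketch below outlines one possible Keras realization of the weighted fusion model in Figure 8 (the training details follow in the next paragraph). It reuses build_shallow_cnn from the earlier sketch, assumes a TensorFlow version that ships MobileNetV3Large, assumes 512-unit fc*_1/fc*_2 layers, and fuses the branches with equal weights, since the exact fusion weights and layer sizes are not spelled out in the paper; input preprocessing for the pretrained backbones is also omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # COVID-19, viral, bacterial, normal

def branch_head(features, prefix):
    """Two dense layers per channel (fc*_1, fc*_2), each assumed to be 512-d."""
    x = layers.Dense(512, activation="relu", name=f"{prefix}_1")(features)
    return layers.Dense(512, activation="relu", name=f"{prefix}_2")(x)

# LBP branch: the shallow CNN sketched earlier
lbp_in = tf.keras.Input(shape=(112, 112, 1), name="lbp")
fv1 = build_shallow_cnn()(lbp_in)

# CLAHE branch: Inception-V3 pretrained on ImageNet, frozen, with 8x8 average pooling
clahe_in = tf.keras.Input(shape=(299, 299, 3), name="clahe")
inception = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
inception.trainable = False
fv2 = layers.Dense(512, activation="relu")(
    layers.Flatten()(layers.AveragePooling2D((8, 8))(inception(clahe_in))))

# CECED branch: MobileNet-V3 pretrained on ImageNet, frozen, with global average pooling
ceced_in = tf.keras.Input(shape=(224, 224, 3), name="ceced")
mobilenet = tf.keras.applications.MobileNetV3Large(include_top=False, weights="imagenet")
mobilenet.trainable = False
fv3 = layers.Dense(512, activation="relu")(
    layers.GlobalAveragePooling2D()(mobilenet(ceced_in)))

# Per-branch heads, then fusion into f1 (equal weights assumed) and softmax classification
f1 = layers.Average(name="f1")([
    branch_head(fv1, "fc1"),
    branch_head(fv2, "fc2"),
    branch_head(fv3, "fc3"),
])
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(f1)

model = models.Model([lbp_in, clahe_in, ceced_in], outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```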
Using Adam optimization, we utilized a batch size of 32, a learning rate of 0.0001, an L2 regularizer, weight decay, and dropout to reduce over-fitting in the model, and trained for 40 epochs. We trained each of the single models (shallow CNN, pretrained MobileNet-V3, and pretrained Inception-V3) separately, as well as the combination of these single models using the fusion technique, with the same dataset. Moreover, the 1000 output classes of ImageNet were replaced with 4 classes in the final dense layer, representing viral, bacterial, normal, and COVID-19. In evaluating the performance of the single and combined models, this paper used the evaluation metrics of accuracy, sensitivity, specificity, precision, and F1-score. The numerical expression for each metric is presented in Equations (4)-(8):

Accuracy = (TP + TN) / (TP + TN + FP + FN), (4)
Sensitivity = TP / (TP + FN), (5)
Specificity = TN / (TN + FP), (6)
Precision = TP / (TP + FP), (7)
F1-score = 2 × (Precision × Sensitivity) / (Precision + Sensitivity), (8)

where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively. We begin by presenting the accuracy, loss, ROC-AUC, and precision-recall curves produced by the various deep learning frameworks used in this article (Inception-V3, MobileNet-V3, and shallow CNN). Then, using the metrics established in Equations (4)-(8), we evaluated the results of all designs to find the best method for classifying CXR images as bacterial, COVID-19, normal, and viral pneumonia, as shown in Table 4. The ROC-AUC, precision-recall, accuracy, and loss curves, as well as the several models utilized in this study and their interpretation, are shown in the next section. It is worth noting that the shallow CNN achieved considerable performance. The results from Table 4 show that the shallow CNN obtained 90.9% accuracy, 92.3% sensitivity, and 93.1% specificity. In comparison to Inception-V3 and MobileNet-V3, the shallow CNN achieved the lowest score across all the evaluation metrics. Furthermore, the shallow CNN obtained 91.2% precision and a 92.7% F1-score. Considering the evaluation performance metrics for the single MobileNet-V3 model, this model obtained an accuracy of 93.7%, a sensitivity of 95.4%, a specificity of 95.7%, a precision of 94.3%, and an F1-score of 95.5%. These results further show that the CECED preprocessing technique using MobileNet-V3 performs better than the LBP shallow CNN technique. From the experimental results presented in Table 4, Inception-V3 shows satisfactory performance, achieving an accuracy of 95.6%, a sensitivity of 94.9%, a specificity of 96.2%, a precision of 95.8%, and an F1-score of 95.3%. Inception-V3 performs slightly better than the shallow CNN across all the evaluation metrics; it also outperforms MobileNet-V3 on a few metrics, namely accuracy, specificity, and precision. On the pneumonia dataset, we executed various weighted fusion approaches to determine which performs best for the identification of pneumonia. Our first investigation is the weighted fusion of the LBP channel of the shallow CNN and the CECED channel of MobileNet-V3 (LCSC + CCM). Secondly, we examine the fusion of the LBP channel of the shallow CNN and the CLAHE channel of Inception-V3 (LCSC + CCI). The next analysis is the fusion of the CECED channel of MobileNet-V3 and the CLAHE channel of Inception-V3 (CCM + CCI), and the last weighted approach is the fusion of the LBP channel of the shallow CNN, the CLAHE channel of Inception-V3, and the CECED channel of MobileNet-V3 (LCSC + CCI + CCM) for pneumonia identification.
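For reference, the sketch below computes the metrics in Equations (4)-(8) from a four-class confusion matrix; the one-vs-rest treatment of each class and the macro-averaging across classes are assumptions, since the paper does not state how the per-class scores are aggregated.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def classification_metrics(y_true, y_pred, num_classes=4):
    cm = confusion_matrix(y_true, y_pred, labels=list(range(num_classes)))
    scores = []
    for k in range(num_classes):
        tp = cm[k, k]
        fn = cm[k, :].sum() - tp
        fp = cm[:, k].sum() - tp
        tn = cm.sum() - tp - fn - fp
        acc = (tp + tn) / (tp + tn + fp + fn)        # Equation (4), one-vs-rest
        sens = tp / (tp + fn)                        # Equation (5)
        spec = tn / (tn + fp)                        # Equation (6)
        prec = tp / (tp + fp)                        # Equation (7)
        f1 = 2 * prec * sens / (prec + sens)         # Equation (8)
        scores.append((acc, sens, spec, prec, f1))
    return np.mean(scores, axis=0)                   # macro average over the four classes

# Hypothetical labels/predictions just to show the call
y_true = [0, 1, 2, 3, 0, 1, 2, 3]
y_pred = [0, 1, 2, 3, 0, 1, 3, 3]
print(classification_metrics(y_true, y_pred))
```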
The proposed framework of the weighted fusion, as shown in Figure 8, clearly shows that the complementary fusion of LBP, CLAHE, and CECED image features is capable of managing low-quality images in pneumonia diagnosis, attaining improved recognition accuracy on the CXR dataset. Figure 9 shows the classification performance of all the weighted fusion channels, including the single channels, across all the evaluation metrics. It is obvious that our proposed model outperforms the other weighted fusion channels, as well as the single channels, by a considerable margin. Although its computational time is slightly longer than those of the other weighted channels and the single models, it achieves the best classification performance, as depicted in Table 4. Figure 10 presents the test accuracy curves for the proposed three-channel weighted fusion, including the single- and dual-channel models. For the first 10 epochs, the accuracy of the proposed model rapidly increases to about 90% and then progresses gradually. Figure 11 depicts the loss curves, which show the gradual loss reduction of the different models. Figure 12 presents the ROC curves of the different models. The proposed multi-channel weighted fusion model (LCSC + CCI + CCM) outperforms the single-based and dual-based channels on the pneumonia dataset, obtaining a 99.0% AUC. The AUC of the CECED-based channel of MobileNet-V3 is clearly higher than those of the CLAHE-based channel of Inception-V3 and the LBP-based channel of the shallow CNN, implying that CECED CXR images contribute more to pneumonia identification than CLAHE and LBP CXR images. The proposed weighted fusion model was further analyzed in terms of the precision-recall curve, as shown in Figure 13. As the curve approaches the upper right-hand corner of the graph, it is obvious that the proposed multi-channel weighted fusion model outperforms the other schemes, indicating that the model has high precision and high recall. In comparison with the single-based and dual-based channels, the proposed model's performance in diagnosing pneumonia in CXR images drawn from datasets obtained from various sources and combined has been demonstrated, and the identification results are given in Table 4. The proposed model can effectively differentiate distinct types of pneumonia from healthy CXR images, as evidenced by the above-mentioned results. It is worth noting that the weighted fusion of LBP, CLAHE, and CECED CXR images results in a greater generalization ability for our proposed model. We use the following terms to describe the channels: shallow CNN for the channel that employs LBP CXR images, Inception-V3 for the channel that uses CLAHE CXR images, and MobileNet-V3 for the channel that utilizes CECED CXR images. Table 4 shows the identification results for the different types of pneumonia. On all metrics, our proposed model surpasses the single-based and dual-based channels, with 98.3% accuracy, 98.9% sensitivity, 99.2% specificity, 98.8% precision, and a 99.0% F1-score. We conducted a comparison between our proposed strategy and some recently published pneumonia classification methods. According to Table 5, the proposed model obtained the maximum accuracy and sensitivity scores of 98.3% and 98.9%, respectively, while Correa et al. [22] achieved the highest specificity score of 100%, although their accuracy value was not reported. In terms of accuracy, the proposed model achieves the highest score of 98.3%, demonstrating its superiority in the identification of pneumonia.
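Returning to the ROC and precision-recall analysis above (Figures 12 and 13), the sketch below shows one way such curves can be produced from the softmax outputs using scikit-learn; the one-vs-rest binarization and the class ordering are assumptions for illustration.

```python
import matplotlib.pyplot as plt
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc, precision_recall_curve

def plot_roc_pr(y_true, y_score, class_names=("bacteria", "COVID-19", "normal", "viral")):
    """One-vs-rest ROC and precision-recall curves from softmax scores (n_samples x n_classes)."""
    y_bin = label_binarize(y_true, classes=list(range(len(class_names))))
    fig, (ax_roc, ax_pr) = plt.subplots(1, 2, figsize=(10, 4))
    for k, name in enumerate(class_names):
        fpr, tpr, _ = roc_curve(y_bin[:, k], y_score[:, k])
        ax_roc.plot(fpr, tpr, label=f"{name} (AUC={auc(fpr, tpr):.3f})")
        prec, rec, _ = precision_recall_curve(y_bin[:, k], y_score[:, k])
        ax_pr.plot(rec, prec, label=name)
    ax_roc.set_xlabel("False positive rate"); ax_roc.set_ylabel("True positive rate")
    ax_pr.set_xlabel("Recall"); ax_pr.set_ylabel("Precision")
    ax_roc.legend(); ax_pr.legend()
    plt.tight_layout()
    plt.show()

# y_score would come from model.predict([lbp_batch, clahe_batch, ceced_batch]) in the fusion sketch
```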
The complementary integration of several channels of CXR images gives our proposed technique a competitive advantage. It is worth noting that different deep learning models will perform differently depending on the circumstances. Moreover, we selected a few recent state-of-the-art models and conducted a fair comparison using the same dataset, as depicted in Table 6. Figure 14 presents the performance evaluation of the selected state-of-the-art models across different metrics using the same dataset, while Figure 15 depicts the accuracy performance of the selected state-of-the-art models using the same dataset. We conducted an ablation study utilizing several transfer learning models pretrained on the ImageNet dataset in order to find the best-performing pretrained model for our proposed weighted deep learning fusion architecture. The results of the experiments employing the proposed scheme with several pretrained frameworks on the same pneumonia dataset are presented in Table 7. According to the results of the experiments, the shallow CNN outperforms MobileNet-V3 and Inception-V3 in extracting features from LBP CXR images, reaching 90.9% accuracy, 92.3% sensitivity, and 93.1% specificity. In comparison to the shallow CNN, the MobileNet-V3 network performs better in extracting features from CECED CXR images, achieving 93.7% accuracy, 95.4% sensitivity, and 95.7% specificity. When compared to the shallow CNN and MobileNet-V3, Inception-V3 performs significantly better in extracting features from CLAHE CXR images, reaching 95.6% accuracy, 94.9% sensitivity, and 96.2% specificity. It is essential to use the ROC curve to estimate overall accuracy and the precision-recall curve to measure the mean average precision of the proposed model when identifying sensitive conditions like pneumonia. The ROC curves for the single- and dual-based channels, as well as the proposed model, on the pneumonia dataset are shown in Figure 12. Similarly, Figure 13 shows the precision-recall curves for the single- and dual-based channels, as well as the proposed model. Furthermore, many of the CXR images were blurry and lacking in detail, which may have hampered the proposed model's ability to extract and train relevant features. Interestingly, the LBP, CLAHE, and CECED preprocessing approaches help to improve the low quality of the CXR images and to reveal highly representative details with visible trainable features. The proposed multi-channel weighted fusion performed admirably in identifying various kinds of pneumonia. In terms of ROC and precision-recall curves, the proposed multi-channel weighted fusion model appears to outperform the other single- and dual-based channels, particularly when dealing with low-quality CXR images. Our proposed model's curve is closest to the upper left corner of the graph and has the largest area under the curve, indicating that it has higher sensitivity and specificity according to the ROC graph in Figure 12. Similarly, our proposed model's curve is closest to the graph's upper right corner with the largest area in Figure 13, meaning that it has higher precision and sensitivity. More importantly, as explained above, the stated results in terms of the Receiver Operating Characteristic (ROC) and precision-recall can aid expert radiologists in striking a balance between accuracy and precision.
Table 5. Result comparison of our proposed model with state-of-the-art methods for pneumonia classification (Accuracy / Sensitivity / Specificity, %).
Cicero et al. [19]: 91.0 / 91.0 / 91.0
Correa et al. [22]: - / 90.9 / 100.0
Apostolopoulos et al. [27]: 98.0 / 92.9 / 98.8
Xu et al. [28]: 86.7 / 86.9 / -
Habib et al. [29]: 98.93 / - / -
Chouchan et al. [30]: 96.4 / 99.6 / -
Yamaç et al. [35]: 86.5 / 79.2 / 90.7
Wang et al. [36]: 93.3 / 90.7 / 95.5
Li et al. [37]: 96.9 / 97.8 / 94.9
J. K. Singh and A. Singh [38]: 95.
Proposed model: 98.3 / 98.9 / 99.2
Table 7. Results obtained on our dataset using different pretrained models in our proposed scheme.
Nevertheless, other methods have shown better performance, for example, the PneumoniaNet model [41]. The PneumoniaNet model proposed by Alsharif et al. [41] uses CXR images to distinguish normal radiographs from those with features consistent with viral or bacterial pneumonia in the pediatric group aged one to five years, with 99.72% accuracy, 99.74% sensitivity, 99.85% specificity, 99.7% precision, and 98.12% AUC. It achieved satisfactory performance because it combines the extracted features of three consecutive convolution layers separated by ReLU and batch normalization layers, whereas general feature extraction uses only one convolution and batch normalization layer, hence permitting the CNN model to utilize both the overall and the marginal differences in CXR images. The authors of PneumoniaNet seek to optimize the information flow and gradients within the network, hence making the optimization of the deep learning network tractable. This model improves feature propagation, encourages feature re-usability, and reduces the number of parameters. The distinction between this model and ResNet50 is that the latter learns from the reference input layers by using the output of the existing layer as the input to the subsequent layer, whereas, compared with the VGG model, the PneumoniaNet model uses a deeper structure of small receptive 3 × 3 filters. However, the authors did not consider the low quality of CXR images, which is a major concern in real-life applications. It is well known that data quality can be affected by factors such as noise, resolution, and artifacts; hence, using such data directly in an algorithm may affect model adaptability in medical applications. A data preprocessing phase could help in eliminating or reducing the noise and increasing the data quality, thereby improving model performance. The novelty of our methodology lies in taking full advantage of the complementary integration of feature maps from multiple data preprocessing channels in a weighted fusion manner, thereby enhancing the overall performance of our model. We conducted another ablation study to examine whether hyperparameter tuning could improve the performance of our proposed multi-channel model. The results of different optimizers at different learning rates and dropouts are shown in Tables 8-10 (a sketch of such a sweep is given below). Our proposed multi-channel weighted fusion model obtains the best performance utilizing the Adam optimizer with a learning rate of 0.0001 and a dropout of 0.50, attaining 98.3% accuracy, according to Table 8. It is worth mentioning that the Adam optimizer is substantially more robust than the other optimizers (RMSProp and SGD) due to its computational efficiency. A further experiment is conducted to examine and validate the contribution of the LBP, CECED, and CLAHE features to model performance over raw X-ray image features. Figure 16 shows the accuracy results using the raw chest X-ray images, while Figure 17 depicts the performance evaluation of the models across different metrics using the raw chest X-ray images. Table 11 shows the performance of the various pretrained models and the shallow CNN using the preprocessed images, including the raw X-ray images.
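As noted above, Tables 8-10 report the hyperparameter ablation; the sketch below shows one way such a sweep over optimizers, learning rates, and dropout rates could be scripted. The helper build_fusion_model(dropout) is hypothetical (standing in for a dropout-parameterized version of the fusion model sketched earlier), and the candidate values other than the reported best setting (Adam, learning rate 0.0001, dropout 0.50) are illustrative assumptions.

```python
import itertools
import tensorflow as tf

def hyperparameter_sweep(train_data, val_data, build_fusion_model):
    optimizers = {
        "adam": tf.keras.optimizers.Adam,
        "rmsprop": tf.keras.optimizers.RMSprop,
        "sgd": tf.keras.optimizers.SGD,
    }
    results = {}
    for (opt_name, opt_cls), lr, dropout in itertools.product(
            optimizers.items(), [1e-3, 1e-4, 1e-5], [0.25, 0.50]):
        model = build_fusion_model(dropout=dropout)  # hypothetical builder
        model.compile(optimizer=opt_cls(learning_rate=lr),
                      loss="categorical_crossentropy", metrics=["accuracy"])
        hist = model.fit(train_data, validation_data=val_data, epochs=40, verbose=0)
        results[(opt_name, lr, dropout)] = max(hist.history["val_accuracy"])
    return results  # best setting reported in the paper: ("adam", 1e-4, 0.50)
```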
To further validate the contribution of the LBP, CECED, and CLAHE image features to the overall performance of the proposed ensemble model, we conducted another experiment with the same selected pretrained models using the raw chest X-ray images, as depicted in Table 12. By all indications, the ensemble of LBP, CECED, and CLAHE yielded much better results than the ensemble trained on the raw chest X-ray images.
Figure 16. Accuracy results using the raw chest X-ray images.
Figure 17. Performance evaluations using the raw chest X-ray images.
In this study, we proposed a pneumonia identification method based on weighted fusion that is capable of processing LBP, CLAHE, and CECED CXR images simultaneously. The three channels are fused to complementarily capture meaningful details from CXR images and achieve higher identification accuracy. The strategy of weighted fusion is utilized to take complete advantage of the visual features that have been captured from the different channels. The proposed weighted fusion model handles the problem of low-quality CXR images by fusing the weighted features generated from the LBP, CLAHE, and CECED preprocessing stages. The shallow CNN, MobileNet-V3, and Inception-V3 models are employed to extract both healthy and non-healthy (COVID-19, viral, and bacterial pneumonia) features from the LBP, CLAHE, and CECED CXR images. Furthermore, these features are merged by utilizing the weighted fusion strategy in order to take advantage of the complementary lung information. Softmax was introduced as the classifier operating on the fused features. By fusing channels of complementary attributes in a weighted manner, our proposed model outperforms several state-of-the-art methods. The evaluation results show that the proposed model, with an accuracy of 98.3%, a sensitivity of 98.9%, a specificity of 99.2%, a precision of 98.8%, and an F1-score of 99.0%, performs better than the single and dual channels alone. From the comparative results with the other established methods, it is confirmed that the proposed weighted fusion model achieved state-of-the-art identification accuracy, which makes it a robust and efficient identification solution for pneumonia based on low-quality CXR images. These results could effectively assist expert radiologists in diagnosing which type of pneumonia is present in a patient's lungs while saving screening time.
References
Pneumonia classification using deep learning from chest X-ray images during COVID-19
A transfer learning method for pneumonia classification and visualization
An efficient deep learning approach to pneumonia classification in healthcare
A deep learning based approach towards the automatic diagnosis of pneumonia from chest radiographs
Large-scale screening to distinguish between COVID-19 and community-acquired pneumonia using infection size-aware classification
Design ensemble deep learning model for pneumonia disease classification
A Dual Weighted Shared Capsule Network for Diabetic Retinopathy Fundus Classification
Enhancing Low Quality in Radiograph Datasets Using Wavelet Transform Convolutional Neural Network and Generative Adversarial Network for COVID-19 Identification
A Super-Resolution Generative Adversarial Network with Siamese CNN Based on Low Quality for Breast Cancer Identification
Improved Convolutional Neural Multi-Resolution Wavelet Network for COVID-19 Pneumonia Classification
The Capability of Multi Resolution Analysis: A Case Study of COVID-19 Diagnosis
Glaucoma detection using entropy sampling and ensemble learning for automatic optic cup and disc segmentation
Diagnosis of COVID-19 using CT scan images and deep learning techniques
Raman spectra-based deep learning: A tool to identify microbial contamination
Process analytical technologies and data analytics for the manufacture of monoclonal antibodies
Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning
Automated chest X-ray screening: Can lung region symmetry help detect pulmonary abnormalities
Incidence and severity of childhood pneumonia in the first year of life in a South African birth cohort. The Drakenstein Child Health Study
Training and validating a deep convolutional neural network for computer-aided detection and classification of abnormalities on frontal chest radiographs
Learning to recognize abnormalities in chest X-rays with location-aware dense networks
Visualization and interpretation of convolutional neural network predictions in detecting pneumonia in pediatric chest radiographs
Automatic classification of pediatric pneumonia based on lung ultrasound pattern recognition
A neuro-heuristic approach for recognition of lung diseases from X-ray images
Classification of Images of Childhood Pneumonia Using Convolutional Neural Networks
Deep neural network ensemble for pneumonia localization from a large-scale chest X-ray database
A transfer learning method with deep residual network for pediatric pneumonia diagnosis
COVID-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks
Ensemble of CheXNet and VGG-19 feature extractor with random forest classifier for pediatric pneumonia detection
A novel transfer learning based approach for pneumonia detection in chest X-ray images
Automated methods for detection and classification of pneumonia based on X-ray images using deep learning. In Artificial Intelligence and Blockchain for Future Cybersecurity Applications
Using X-ray images and deep learning for automated detection of coronavirus disease
RSNA Pneumonia Detection Challenge | Kaggle
Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images
Convolutional Sparse Support Estimator-Based COVID-19 Recognition From X-Ray Images
Prior-Attention Residual Learning for More Discriminative COVID-19 Screening in CT Images
Multiscale Attention Guided Network for COVID-19 Diagnosis Using Chest X-Ray Images
Diagnosis of COVID-19 from chest X-ray images using wavelets-based depthwise convolution network
Lung Lesion Localization of COVID-19 From Chest CT Image: A Novel Weakly Supervised Learning Method
Joint Learning of 3D Lesion Segmentation and Classification for Explainable COVID-19 Diagnosis
Automated Detection and Classification of Pediatric Pneumonia Using Chest X-ray Images and CNN Approach
Artificial Intelligence Framework for Efficient Detection and Classification of Pneumonia Using Chest Radiography Images
Employing Texture Features of Chest X-Ray Images and Machine Learning in COVID-19 Detection and Classification
A hybrid deep learning approach towards building an intelligent system for pneumonia detection in chest X-ray images
Institutional Review Board Statement: Ethical review and approval were waived for this study, as this study only makes use of publicly available data.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper.
Data Availability Statement: The dataset used in this study can be found in references [33,34].
Conflicts of Interest: The authors declare no conflict of interest.