key: cord-0889666-ebm53xqv
authors: Momeny, Mohammad; Neshat, Ali Asghar; Hussain, Mohammad Arafat; Kia, Solmaz; Marhamati, Mahmoud; Jahanbakhshi, Ahmad; Hamarneh, Ghassan
title: Learning-to-augment strategy using noisy and denoised data: Improving generalizability of deep CNN for the detection of COVID-19 in X-ray images
date: 2021-07-29
journal: Comput Biol Med
DOI: 10.1016/j.compbiomed.2021.104704
sha: d9fa00b3f52bd4e3e8f5648db10d4bdb9c66faf4
doc_id: 889666
cord_uid: ebm53xqv

Chest X-ray images are used in deep convolutional neural networks (CNNs) for the detection of COVID-19, one of the greatest human challenges of the 21st century. Robustness to noise and improved generalization are the major challenges in designing these networks. In this paper, we introduce a data augmentation strategy that determines the type and density of noise to improve the robustness and generalization of deep CNNs for COVID-19 detection. First, we present a learning-to-augment approach that generates new noisy variants of the original image data with optimized noise density. We apply a Bayesian optimization technique to control and choose the optimal noise type and its parameters. Second, we propose a novel data augmentation strategy, based on denoised X-ray images, that uses the distance between denoised and original pixels to generate new data. We develop an autoencoder model to create new data from denoised images that had been corrupted by Gaussian and impulse noise. A database of chest X-ray images, containing COVID-19 positive, healthy, and non-COVID pneumonia cases, is used to fine-tune the pre-trained networks (AlexNet, ShuffleNet, ResNet18, and GoogleNet). The proposed method yields better results than state-of-the-art learning-to-augment strategies in terms of sensitivity (0.808), specificity (0.915), and F-Measure (0.737). The source code of the proposed method is available at https://github.com/mohamadmomeny/Learning-to-augment-strategy.

As a worldwide pandemic, Coronavirus disease 2019 (COVID-19) has brought about a health crisis affecting all aspects of human life, and much effort has been made to contain the virus since its inception. In the beginning, few people had contracted the disease, and it was not considered a great threat, as most cases were treated in a short time. After a while, the World Health Organization (WHO) declared that the virus had an extreme potential to affect millions of individuals all around the world, especially in countries with weaker healthcare systems. The disease is easily transmitted through direct or indirect contact with an affected person [1]. The coronavirus statistics are horrifying [2]. The United States (US) has recorded one of the largest numbers of COVID-19 victims, even though it is one of the leading countries in healthcare facilities. Brazil, India, Russia, South Africa, and 215 other countries around the world follow the US on the list. Due to the ever-increasing number of new cases, many governments and administrative authorities across the globe are still imposing strict lockdown restrictions to ensure social distancing and contain the disease [2,3]. According to the WHO and the US Centers for Disease Control and Prevention (CDC), fever, dry cough, vomiting, diarrhea, and myalgia are the most common symptoms of COVID-19 infection.
To reduce morbidity rates, the general population in all countries has been made aware of the symptoms so that they can seek treatment as soon as possible. Governments have begun to invest in COVID-19 vaccines and related research, and many studies and development activities are being conducted on the COVID-19 pandemic. Chest X-ray imaging has been playing a vital role in the rapid diagnosis and early management of COVID-19 [4,5]. It is reportedly used for COVID-19 detection in countries with a shortage of testing kits [6-8]. Recent studies [9-14] using machine learning (ML) and deep learning (DL) have shown promising results in the diagnosis of COVID-19. For example, convolutional neural networks (CNNs) have been applied to classify X-ray images [9-11] into COVID-19, non-COVID pneumonia (e.g., bacterial and viral pneumonia), and healthy cases.

The quality of a chest X-ray image may deteriorate due to different types of noise generated by malfunctions in X-ray receiver sensors, bit errors in transmission, and faulty memory locations in the hardware [15]. Typically, noise-based data augmentation in DL is performed when there is a possibility of image data being corrupted by noise [16]. Data augmentation by noise addition is a strategy that improves the robustness and generalization of CNNs [17-24]. Moreno-Barea et al. [21] tested injecting noise drawn from a Gaussian distribution into images and showed it to be useful for improving CNN-based classification performance [23,24]. Sezer and Sezer [25] proposed a data augmentation approach in which CNN-based speckle-noise reduction is used for the neonatal hip ultrasound image classification task. Their work introduced a method employing an optimized Bayesian non-local mean filter to reduce speckle noise for data augmentation; this strategy improved the performance of the CNN from 92.29% to 97.70%. Ofori-Oduro and Amer [20] proposed a noise-robust CNN via data augmentation using an Artificial Immune System; their model was tested under noise and shown to improve CNN performance. In our previous work [23], we proposed a noise-based adaptive data augmentation method to increase CNN accuracy.

Unlike conventional data augmentation approaches that use predefined rules and procedures for a specific target task, learning-to-augment strategies dynamically refine augmentation rules based on feedback networks [26-28]. For example, Wang et al. [26] introduced an end-to-end compositional generative adversarial network architecture to generate natural and accurate face images. This model generates images of desired expressions and edits the poses of faces. A reconstruction learning process was employed to re-generate the input data, and the generators of the model encourage the preservation of important facial features. The augmented face images were used to train a robust expression recognition model. Cai et al. [27] introduced a fully data-driven and learnable framework to change the data distribution near reliable samples. This data augmentation method selects efficient learning samples and reduces the impact of ineffective samples. Feng et al. [28] introduced an approach that generates new data from a stationary distribution near the target data and implements a reinforced selector to automatically improve the augmentation strategy.
However, these state-of-the-art methods [24-26] suffer from weak generalization under noisy conditions, which can cause overfitting. This paper focuses on the generation of noisy and denoised chest X-ray images as augmented images to improve the generalization of deep CNNs for COVID-19 detection. We propose a novel noise-based data augmentation approach in which our method optimizes the parameters of different noise types to generate new data. We summarize the contributions of this paper below:

1. We introduce (i) noising- and (ii) denoising-based data augmenters to improve the generalization of a deep CNN. The denoising-based approach further uses an autoencoder to generate new augmented data.
2. We propose a "learning-to-augment" strategy to generate noisy images. The learning-to-augment approach employs a Bayesian optimizer to determine the optimal noise parameters for new augmented images.
3. We show the effectiveness of our proposed approach on the challenging task of COVID-19 detection in chest X-ray images and outperform state-of-the-art data augmentation methods.

We accumulated a dataset of 1248 chest X-ray images of posterior-anterior view (666 images) and anterior-posterior view (582 images) from two public repositories [29-31]. The first repository contains chest X-ray images of 215 COVID-19 patients and 33 non-COVID pneumonia patients [30]. The second repository contains chest X-ray images of 500 healthy subjects and 500 non-COVID pneumonia patients [31]. We carefully eliminated the lateral-view X-ray images and the CT images from the data cohort of the first repository [30]. Table 1 summarizes the patients' demographics (age and sex). All the X-ray images were in either Portable Network Graphics (PNG) or Joint Photographic Experts Group (JPEG) file format, and all were resized to a common input size for the pre-trained CNNs (i.e., AlexNet, GoogleNet, ResNet18, and ShuffleNet). This is a standard preprocessing practice for CNN training, and similar resizing of X-ray data is widely used in the recent literature (e.g., Refs. [12-14]). X-ray images are also stored and transmitted in the form of compressed data [32]. If a deep network is trained only on original images, the image distortion caused by lossy compression could deteriorate the test results. We therefore used both lossless (i.e., PNG) and lossy (i.e., JPEG) compressed images as inputs to the deep convolutional neural networks. Also note that we accumulated our dataset from public repositories, and the data do not contain the associated exposure time parameter. However, as the radiopacity variation is the key to visualizing COVID-19 infection in a chest X-ray image and its quality depends on the exposure time, we assume that these X-ray images were acquired with an exposure of more than 6 milliampere-seconds (mAs) to ensure good image quality.

X-ray images often get degraded by impulse noise [15], Gaussian noise [33-35], speckle noise [36-38], and Poisson noise [39-42] at the time of acquisition, transmission, or storage. Noisy images can be used as inputs to a CNN in two ways [39]: (i) feeding unprocessed noisy images to the network, or (ii) feeding denoised images to the network. When noisy images are fed to the network, data augmentation using noise may improve the robustness of the classifier.
On the other hand, if preprocessing is used to denoise images, then data augmentation using restored (denoised) images in training can improve the generalizability of the network. Fig. 1 illustrates the schematic pipeline of the proposed learning-to-augment strategy using noisy and denoised data.

Adding noise to data is one approach to data augmentation [18]. We present a method for learning-to-augment via noisy input images, which finds optimal noise parameters to generate the new data. The mean and variance are the parameters of the Gaussian and speckle noise types [35,43]; the impulse (salt-and-pepper) noise, on the other hand, is specified by a density parameter [44-46]. The parameters of the different noise models are shown in Table 2. As shown in Fig. 2, the proposed noisy image-based augmenter is composed of a noisy data generator, a controller, an augmenter, and child models. The steps of the noise-based data augmentation strategy are as follows:

Step 1. The noisy data generator adds noise to the original images with specific noise parameters. In the first iteration, the noise parameters (i.e., mean μ, variance v, and density d) are set randomly. Thereafter, the controller determines the parameters of each noise type as a new policy.
Step 2. The augmenter produces new data by applying the noise to the images.
Step 3. The child CNN models are trained on the newly augmented data to evaluate the performance of the data augmentation policies.
Step 4. The controller, a Bayesian optimizer-based search algorithm [47,48], substitutes existing weak policies with new data augmentation policies by exploring the search space of the parameters of each noise type.
Step 5. The above steps are repeated until the best policies, i.e., the parameters of each noise type, are found.

As shown in Fig. 1, to augment the data, the input data pool is first divided into N equal folds. Then a noisy data generator adds noise (e.g., impulse, Gaussian, speckle, or Poisson noise) to each fold separately. The raw samples of the dataset were randomly split into N folds to accelerate finding the best policies: decreasing the number of samples per fold reduces the CNN training time for each fold [49]. The augmenters create new data based on the new parameters found by the Bayesian optimizer. In the next step, each fold is processed by a child CNN model. Using child networks, instead of a very deep CNN, for policy evaluation speeds up the execution of the proposed method. Based on the results of the child CNNs, the controller improves weak policies and maintains strong policies.

The controller uses the Bayesian optimizer to find the optimum set of augmentation policies (the parameters of each noise type) in a search space. Let 𝒢 be the search space and f be the loss function of a classifier; the Bayesian optimizer can then be represented as

y = arg min_{G ∈ 𝒢} f(G)   (1)

The optimization problem in Eq. (1) aims to find the y that minimizes f(G) for G in the bounded domain 𝒢. The loss values of the child CNNs are used to compute the loss function for the Bayesian optimizer. This process continues until the maximum iteration number is reached.
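To make the noise models of Table 2 and the policy search of Eq. (1) concrete, the following minimal Python sketch pairs NumPy-based noise generators with a Gaussian-process optimizer from scikit-optimize. It is an illustrative reconstruction, not the authors' MATLAB implementation: the parameter ranges and the `child_model_loss` stand-in (a toy surrogate for training a child CNN and returning its validation loss) are assumptions.

```python
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

rng = np.random.default_rng(0)

def add_gaussian(img, mu, var):
    # Gaussian noise, parameterized by mean (mu) and variance (var).
    return np.clip(img + rng.normal(mu, np.sqrt(var), img.shape), 0.0, 1.0)

def add_speckle(img, mu, var):
    # Speckle (multiplicative) noise: img + img * n, with n ~ N(mu, var).
    return np.clip(img + img * rng.normal(mu, np.sqrt(var), img.shape), 0.0, 1.0)

def add_impulse(img, d):
    # Salt-and-pepper noise: a fraction d of pixels is flipped to 0 or 1.
    noisy = img.copy()
    mask = rng.random(img.shape) < d
    noisy[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
    return noisy

def add_poisson(img):
    # Poisson noise is signal-dependent; it has no free density parameter.
    return np.clip(rng.poisson(img * 255.0) / 255.0, 0.0, 1.0)

# Example: one dummy image pushed through each generator of Table 2.
img = rng.random((64, 64))
variants = [add_gaussian(img, 0.0, 0.01), add_speckle(img, 0.0, 0.04),
            add_impulse(img, 0.05), add_poisson(img)]

def child_model_loss(params):
    mu, var = params
    # In the real pipeline this would augment one fold with Gaussian noise at
    # (mu, var), train a child CNN, and return its validation loss. A smooth
    # toy surrogate is used here so the snippet runs end to end.
    return float((mu - 0.02) ** 2 + (var - 0.01) ** 2)

# Search space for the Gaussian-noise policy (ranges are assumptions).
space = [Real(-0.1, 0.1, name="mu"), Real(1e-4, 0.05, name="var")]
result = gp_minimize(child_model_loss, space, n_calls=25, random_state=0)
print("best Gaussian policy (mu, var):", result.x)
```

Each call to `gp_minimize` plays the role of the controller in Fig. 2: it proposes a new policy, the child model scores it, and weak policies are replaced until the iteration budget is exhausted.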
The proposed method also provides an automatic augmentation policy search using the generation of restored images that had been corrupted by noise. Noisy images can be restored by enhancement algorithms such as autoencoder networks [50]. However, depending on the noise type and density, the pixel values in the restored image and the original noise-free image are not exactly equal [51]. We aim to leverage this dissimilarity between restored and original pixels as a data augmentation strategy. First, noise of a specific type and density is added to the image. Then, the noise is partially removed from the image by the proposed autoencoder. The denoising autoencoder aims to produce its output from the noisy input, with the original images set as the target. Finally, the restored images are used as augmented data.

In the proposed noise-based data augmentation algorithm, the type and density of the noise are important. Depending on the accuracy of the noise removal algorithm, the restored image could be very similar to the original image, especially when the noise magnitude is low, which would result in an ineffective augmented image. On the other hand, if the noise magnitude is high and the denoising is imperfect (as expected), the pixel values of the restored images will be more dissimilar from the original ones. As shown in Fig. 3, noise with specific parameters is added to the original images by the noise generator (section 3.1). The noisy samples are then fed as inputs to the proposed autoencoder-based denoising model. The decoding weights of the trained autoencoder can be viewed as the analogue of the parameters of a conventional image denoising filter. Once the autoencoder is trained, it produces new augmented image data from the noisy input data during inference. As in our noisy image-based augmentation approach (section 3.1), we feed the new augmented data to the child CNN models, and the Bayesian optimizer finds the optimal noise parameters. After finding the optimal policies, we use the whole dataset and the deep convolutional autoencoder for the noise-based data augmentation (Fig. 4).

The proposed algorithm is trained using the deep learning toolbox of MATLAB 2020b on an Intel(R) Core(TM) i7-7700HQ CPU at 2.81 GHz with 32 GB of RAM and an Nvidia GTX 1070 GPU with 8 GB of VRAM. We used stochastic gradient descent with a learning rate of 0.001 to train our networks (except for the autoencoder module) and used cross-entropy as the loss function. To train the autoencoder, we used the Adam optimizer with a learning rate of 0.001 and mean squared error (MSE) as the loss function. We show the configuration of the autoencoder in Table 3. To cope with the limitations of our computational resources, we split each original image of size 224 × 224 pixels into 64 patches of size 28 × 28 pixels. After training the autoencoder for 250 epochs, we recombined the 64 patches into the restored image. We used N = 2 in this study. We used 80% of the data (998 X-ray images) for training, 10% (125 X-ray images) for validation, and 10% (125 X-ray images) for testing. Fig. 5 shows sample chest X-ray images for COVID-19 positive, healthy, and non-COVID pneumonia cases.

Fig. 5. Sample chest X-ray images for (a) COVID-19, (b) healthy, and (c) non-COVID pneumonia cases [56].
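The autoencoder training recipe described above (28 × 28 patches, Adam with a learning rate of 0.001, MSE loss, noisy patches as input and clean originals as target) can be sketched in PyTorch as a rough analogue of the MATLAB implementation; the exact configuration is given in the paper's Table 3, which is not reproduced in this extraction, so the layer widths below are assumptions.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Convolutional denoising autoencoder for 28x28 grayscale patches.
    Layer widths are illustrative assumptions, not the paper's Table 3."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1),   # 28 -> 14
            nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),  # 14 -> 7
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1),  # 7 -> 14
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1),   # 14 -> 28
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def to_patches(img):
    # Split a 224x224 image tensor into 64 non-overlapping 28x28 patches.
    return img.unfold(0, 28, 28).unfold(1, 28, 28).reshape(-1, 1, 28, 28)

model = DenoisingAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # matches the stated setup
loss_fn = nn.MSELoss()

# One training step: noisy patches in, clean (original) patches as target.
clean = torch.rand(224, 224)
noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)
x, y = to_patches(noisy), to_patches(clean)
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```

At inference time the imperfectly restored patches are recombined into a 224 × 224 image, and that restored image serves as the new augmented sample.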
We used pre-trained CNNs and fine-tuned them on the COVID-19 data for the classification task. We evaluated the performance of four pre-trained models: AlexNet [52], ShuffleNet [53], ResNet18 [54], and GoogleNet [55], all of which were pre-trained on samples from the ImageNet Challenge database. Fine-tuning these models on the COVID-19 dataset converges faster than training them from scratch. Table 4 summarizes the ImageNet pre-trained networks used in the proposed framework.

The noisy data generators separately add impulse noise and Gaussian noise to the chest X-ray images of COVID-19 positive, healthy, and non-COVID pneumonia cases in each fold (Fig. 2). Since chest X-ray images are often corrupted by noise resembling impulse and Gaussian noise, we used these two noise types in our simulations (Fig. 6). Learning-to-augment with the restored (denoised) images starts by dividing the dataset into N folds (Fig. 3). The controllers initialize the noise parameters randomly, the noisy data generators create noisy images, and the proposed convolutional autoencoder takes the noisy chest X-ray images as input with the original images as the target.

We first present qualitative results after adding noise and denoising the images (section 5.1). We then quantify the proposed learning-to-augment strategy after finding the optimal data augmentation policies (section 5.2.1) and after training with the selected policies (section 5.2.2). Finally, we discuss the results (section 5.2.3). It is worth noting that the proposed learning-to-augment strategy using noisy and denoised data significantly increases the diversity of the training data. Unlike conventional data augmentation approaches that use predefined rules and procedures for a specific target task, the proposed strategy dynamically refines augmentation rules based on feedback networks and thus reduces the negative effect of a small training dataset.

The denoised images produced by the proposed convolutional autoencoder for impulse and Gaussian noise-corrupted X-ray images of healthy cases are shown in Fig. 7. We show the noisy and restored chest X-ray images of the COVID-19 cases in Fig. 8 and Fig. 9, respectively. The augmenters use the outputs of the proposed convolutional autoencoder (restored images) to create new data. The data augmentation policies, i.e., the optimal noise parameters, are determined by the Bayesian optimization algorithm. Fig. 10 shows the results of the Bayesian optimizer when evaluating the data augmentation policies using the restored images, which were initially corrupted by impulse noise.

Learning-to-augment strategies that change the brightness, contrast, hue, saturation, and rotation of images were used for comparison with the proposed approach. For a fair comparison, the optimal parameter values of these methods were also chosen by the Bayesian optimizer. The pre-trained networks (AlexNet, ShuffleNet, GoogleNet, and ResNet18) typically take input images of 3 channels (red, green, blue). Therefore, we stack each grayscale X-ray image three times to form a 3-channel input, and then apply the augmentation operations, e.g., changes of hue and saturation. As shown in Table 5, the optimal parameter values of the data augmentation methods were chosen by the Bayesian optimizer for the AlexNet classifier. ShuffleNet with 50 layers, ResNet18 with 18 layers, and GoogleNet with 22 layers were used for the classification of X-ray images. The COVID-19 classification accuracy curves during ResNet18 training and validation with the Gaussian noise-corrupted and restored images are shown in Fig. 11. We also illustrate the ResNet18 training and validation loss curves in Fig. 12. The confusion matrices of the COVID-19 classification by the proposed data augmentation approach using restored images are shown in Fig. 13.

Fig. 13. Confusion matrices for X-ray image classification by the proposed learning-to-augment approach using restored images corrupted by noise. Here, 'Normal' represents the 'Healthy' subjects and 'Other_Pneumonia' represents the 'non-COVID pneumonia' patients.
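The channel-stacking and fine-tuning steps described above can be illustrated with a short PyTorch/torchvision sketch. The paper itself uses MATLAB's pre-trained models, so this is an assumed analogue: the grayscale X-ray is replicated across three channels, and the pre-trained network's final layer is replaced with a 3-way head (COVID-19 / healthy / non-COVID pneumonia).

```python
import torch
import torchvision.models as models

# A single-channel X-ray tensor with values in [0, 1]; shape (1, H, W).
xray = torch.rand(1, 224, 224)

# Replicate the grayscale channel three times so the image matches the
# 3-channel input expected by ImageNet-pretrained backbones.
xray_rgb = xray.repeat(3, 1, 1).unsqueeze(0)  # shape (1, 3, 224, 224)

# Fine-tuning setup: load an ImageNet-pretrained ResNet18 and replace the
# final fully connected layer with a 3-way classification head.
net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
net.fc = torch.nn.Linear(net.fc.in_features, 3)

net.eval()  # inference mode for this single-image sanity check
logits = net(xray_rgb)
print(logits.shape)  # torch.Size([1, 3])
```

In actual fine-tuning, all layers (or only the new head, depending on the budget) would be updated with the SGD settings stated earlier.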
The results show that the generalization of ResNet18 is improved by the proposed data augmentation method based on images restored from Gaussian noise corruption. The results shown in Fig. 12 indicate that the proposed noising- and denoising-based data augmenter performs better overall than the other approaches; ResNet18 trained with images restored from Gaussian noise corruption performed the best among all the techniques.

To evaluate the pre-trained deep CNN models, four metrics are employed to verify the quality of the COVID-19 classification results: accuracy, sensitivity, specificity, and F-Measure [20,21]. We show the quantitative comparison of the different augmentation strategies in Fig. 14: changing the brightness of the image, changing the contrast, adjusting the hue, changing the saturation, rotating the image, adding impulse noise, adding Gaussian noise, restoring images corrupted by impulse noise, and restoring images corrupted by Gaussian noise. The pre-trained models were fine-tuned separately using the optimal parameters of the data augmentation methods (Table 5).

In Fig. 15, we show our three-step strategy for evaluating the X-ray image classification task in terms of sensitivity, specificity, and F-Measure. At each step of the evaluation, one class is considered positive and the others are considered negative. First, we considered 'COVID-19' as the positive class and the other two (i.e., healthy and non-COVID pneumonia) as negative. Table 6 demonstrates the efficacy of the proposed data augmentation strategy using noisy and denoised images against the best performance of the state-of-the-art learning-to-augment methods that modify brightness, hue, contrast, saturation, and rotation. As shown in Table 6, the sensitivity of ShuffleNet for restored images (our approach) improved by 5.7 percentage points, from 54.3% to 60.0%, compared to saturation-based augmentation. Similarly, the specificity of ResNet18 for the restored images improved from 89% to 90.9% over hue-based augmentation, and the F-Measure improved from 54.8% to 62.0% for the restored images compared to contrast-based augmentation.

Next, considering 'healthy' as the positive class, the proposed strategy using restored images corrupted by Gaussian noise outperformed hue-based augmentation with ResNet18 in terms of sensitivity (from 74.1% to 81.1%). The specificity of ShuffleNet increased from 78.3% to 85.7% when Gaussian noise-corrupted images were restored, compared to saturation-based augmentation, and with ResNet18, the F-Measure improved from 75.4% to 76.3% when images corrupted by impulse noise were restored, compared to hue-based augmentation.
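The one-vs-rest computation behind these sensitivity, specificity, and F-Measure numbers can be sketched as follows. This is an illustrative Python reconstruction of the three-step evaluation; the confusion-matrix counts are hypothetical, not taken from the paper.

```python
import numpy as np

def one_vs_rest_metrics(cm, positive):
    """Sensitivity, specificity, and F-measure for one class of a multi-class
    confusion matrix cm, where cm[i, j] counts true class i predicted as j."""
    tp = cm[positive, positive]
    fn = cm[positive].sum() - tp          # positives predicted as another class
    fp = cm[:, positive].sum() - tp       # other classes predicted as positive
    tn = cm.sum() - tp - fn - fp
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f_measure

# Hypothetical 3x3 confusion matrix over (COVID-19, healthy, non-COVID pneumonia).
cm = np.array([[34, 3, 5],
               [2, 38, 4],
               [4, 6, 29]])
for i, name in enumerate(["COVID-19", "healthy", "non-COVID pneumonia"]):
    sens, spec, f1 = one_vs_rest_metrics(cm, i)
    print(f"{name}: sensitivity={sens:.3f} specificity={spec:.3f} F={f1:.3f}")
```

Each pass of the loop corresponds to one step of Fig. 15, with the remaining two classes pooled as the negative class.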
Similar performance trends can be seen in Table 6 when the 'non-COVID pneumonia' cases are considered the positive class in the third step. The proposed learning-to-augment strategy using denoised images corrupted by Gaussian noise yields the best results: compared to contrast-based augmentation, the sensitivity of ShuffleNet improved from 71.2% to 80.8%. Similarly, compared to hue-based augmentation, the specificity of ResNet18 went from 79.7% to 91.5%, and the F-Measure of ShuffleNet improved from 66.7% to 73.7%.

We show the evaluation results in terms of sensitivity, specificity, and F-Measure in Fig. 16. The sensitivity and specificity values for the classification of COVID-19 by the proposed approach are better than those of the other augmentation techniques. Thus, the noising- and denoising-based data augmenter can improve the generalization of deep CNNs for image classification. The performance of the COVID-19 classification task was improved by our method's ability to choose noise parameters. In addition, using images restored by the proposed autoencoder model helps CNNs generalize.

In a state-of-the-art method [29], Nishio et al. applied six data augmentation strategies to the image data: ±15° rotation, ±15% x-axis shift, ±15% y-axis shift, horizontal flipping, 85-115% scaling, and shear transformation. They employed DenseNet201 (201 layers deep, 77 MB in size, 20.0 million parameters) and ResNet50 (50 layers deep, 96 MB in size, 25.6 million parameters) for the detection of COVID-19 in chest X-ray images. The test-set accuracies for DenseNet201 and ResNet50 were 78.24 ± 2.23% and 77.76 ± 1.18%, respectively. On the other hand, according to the results shown in Table 7, ResNet18 (18 layers deep, 44 MB in size, 11.7 million parameters) with the proposed data augmentation approach achieved an accuracy of 77.6 ± 1.20%. With a shallower network and less augmented data, the proposed data augmentation strategy using restored images is thus almost as accurate as the state-of-the-art method.

In this paper, we proposed a learning-to-augment strategy using noisy and restored images to improve the generalizability of deep CNNs. Using a novel noise-based data augmentation approach, we tackled the overfitting problem of deep CNNs for the automatic identification of COVID-19 in chest X-ray images. A noisy data generator, a Bayesian optimizer-based controller, an autoencoder network, child augmenters, and child CNN models are the key components of our proposed noising- and denoising-based data augmenter, which increases the accuracy of the image classification task. Learning-to-augment strategies that change the brightness, contrast, hue, and saturation of the image, and rotate the image, were compared to the proposed method (adding impulse noise to an image, adding Gaussian noise to an image, restoring images corrupted by impulse noise, and restoring images corrupted by Gaussian noise). The proposed data augmenter achieved the best performance in COVID-19, healthy, and non-COVID pneumonia classification in terms of sensitivity, specificity, and F-Measure.
The learning-to-augment strategy using restored Gaussian noise-corrupted images with a pre-trained ResNet18 adapted well to new, previously unseen data (the test set) and showed better classification accuracy than the state-of-the-art data augmentation approach. We therefore conclude that the proposed strategy improves the generalization of deep CNNs.

References

[1] A new approach for classifying coronavirus COVID-19 based on its manifestation on chest X-rays using texture features and neural networks
[2] Deep learning and medical image processing for coronavirus (COVID-19) pandemic: a survey
[3] A deep learning-based social distance monitoring framework for COVID-19
[4] CoroDet: a deep learning based classification for COVID-19 detection using chest X-ray images
[5] COVID-19 pneumonia: what has CT taught us?
[6] Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: a descriptive study
[7] Coronavirus disease 2019 (COVID-19): role of chest CT in diagnosis and management
[8] Chest imaging appearance of COVID-19 infection
[9] Machine learning and image analysis applications in the fight against COVID-19 pandemic: datasets, research directions, challenges and opportunities
[10] A deep learning and grad-CAM based color visualization approach for fast detection of COVID-19 cases using chest X-ray and CT-scan images
[11] Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: results of 10 convolutional neural networks
[12] Deep learning for chest X-ray analysis: a survey
[13] Deep learning approaches for COVID-19 detection based on chest X-ray images
[14] X-ray and CT-scan-based automated detection and classification of COVID-19 using convolutional neural networks (CNN)
[15] Removal of salt-and-pepper noise for X-ray bio-images using pixel-variation gain factors
[16] Biomedical image augmentation using Augmentor
[17] Implicit adversarial data augmentation and robustness with noise-based learning neural networks
[18] A Perlin noise-based augmentation strategy for deep learning with small data samples of HRCT images
[19] A noise robust convolutional neural network for image classification, Results in Engineering
[20] Data augmentation using artificial immune systems for noise-robust CNN models
[21] An integrated approach based on Gaussian noises-based data augmentation method and AdaBoost model to predict faecal coliforms in rivers with small dataset
[22] Ensemble methods and data augmentation by noise addition applied to the analysis of spectroscopic data
[23] A survey on image data augmentation for deep learning
[24] Forward noise adjustment scheme for data augmentation
[25] Deep convolutional neural network-based automatic classification of neonatal hip ultrasound images: a novel data augmentation approach with speckle noise reduction
[26] Learning to augment expressions for few-shot fine-grained facial expression recognition
[27] Data manipulation: towards effective instance learning for neural dialogue generation via learning to augment and reweight
[28] Learning to augment for data-scarce domain BERT knowledge distillation
[29] Automatic classification between COVID-19 pneumonia, non-COVID-19 pneumonia, and the healthy on chest X-ray image: combination of data augmentation methods
[30] COVID-19 image data collection: prospective predictions are the future
[31] RSNA pneumonia detection challenge, Kaggle
[32] Classification accuracies of malaria infected cells using deep convolutional neural networks based on decompressed images
[33] A comparative analysis of various filters to denoise medical X-ray images
[34] A method for modeling noise in medical images
[35] Removal of high density Gaussian noise in compressed sensing MRI reconstruction through modified total variation image denoising method
[36] Noise issues prevailing in various types of medical images
[37] X-ray image enhancement based on the improved adaptive low-pass filtering
[38] A comparative analysis of image denoising problem: noise models, denoising filters and applications
[39] Analysis of quantum noise-reducing filters on chest X-ray images: a review
[40] Fuzzy genetic-based noise removal filter for digital panoramic X-ray images
[41] Poisson denoising under a Bayesian nonlocal approach using geodesic distances with low-dose CT applications
[42] Poisson noise reduction from X-ray images by region classification and response median filtering
[43] A convex variational method for super resolution of SAR image with speckle noise
[44] Removal of sparse noise from sparse signals
[45] Weighted Schatten p-norm minimization for impulse noise removal with TV regularization and its application to medical images
[46] Removal of high density impulse noise using a novel decision based adaptive weighted and trimmed median filter, Iranian Conference on Machine Vision and Image Processing (MVIP)
[47] A tutorial on Bayesian optimization
[48] BOA: the Bayesian optimization algorithm
[50] Noise reduction in the spectral domain of hyperspectral images using denoising autoencoder methods
[51] NERNet: noise estimation and removal network for image denoising
[52] ImageNet classification with deep convolutional neural networks
[53] ShuffleNet: an extremely efficient convolutional neural network for mobile devices
[54] Deep residual learning for image recognition
[55] Going deeper with convolutions
[56] Can AI help in screening viral and COVID-19 pneumonia?

The authors have declared no conflict of interest.