authors: Chang, Jiangeng; Cui, Shaoze; Feng, Mengling
title: DiCOVA-Net: Diagnosing COVID-19 using Acoustics based on Deep Residual Network for the DiCOVA Challenge 2021
date: 2021-07-11

In this paper, we propose a deep residual network-based method, namely the DiCOVA-Net, to identify COVID-19 infected patients based on the acoustic recordings of their coughs. Since there are far more healthy people than infected patients, this classification problem faces the challenge of imbalanced data. To improve the model's ability to recognize the minority class (the infected patients), we introduce data augmentation and cost-sensitive methods into our model. In addition, considering the particularities of this task, we deploy fine-tuning techniques to adjust the pre-trained ResNet50. Furthermore, to improve the model's generalizability, we use ensemble learning to integrate prediction results from multiple base classifiers generated with different random seeds. To evaluate the proposed DiCOVA-Net's performance, we conducted experiments with the DiCOVA challenge dataset. The results show that our method achieved 85.43% in AUC, placing it among the top of all competing teams.

Since the COVID-19 outbreak in 2020, countries all over the world have been shrouded in an eerie haze. People in many countries are still at risk of infection, and governments are actively working to avoid large-scale infections as part of their response to the epidemic. China, the United States, and the United Kingdom have all developed COVID-19 vaccines and have urged people to get vaccinated. However, the vaccine's protection period is limited, and many people are hesitant to be immunized. As a result, effective testing is still required to detect COVID-19 infected individuals. X-rays and computed tomography (CT) were the primary COVID-19 tests during the outbreak's early stages. When medical images of the lungs show symptoms such as shadows, the patient is more likely to have COVID-19, according to [1]. This detection method produced relatively good results at the start of the epidemic and played an important role in controlling the epidemic's spread. With the advancement of COVID-19 research, the swab test has become a reliable method of detecting COVID-19 infection. Currently, most countries use a throat swab or nose swab test. Furthermore, some studies have shown that saliva may be an effective test medium [2]. Other testing methods are being investigated in addition to those mentioned above. Numerous studies have shown that COVID-19 primarily damages lung tissue and affects the respiratory system [3]. As a result, it is hypothesized that COVID-19 could cause significant changes in the acoustic characteristics of infected patients. In this paper, we investigate a non-invasive method for detecting COVID-19 infected individuals by using cough audio data to build a classification model.

Machine learning and deep learning have made remarkable advances in many fields in recent years, including image processing [4], speech recognition [5], and text analysis [6]. In the aftermath of the COVID-19 outbreak, researchers are also attempting to use artificial intelligence technology to combat the epidemic.
Tuli et al. [7], for example, created a model based on cloud computing and machine learning to forecast the growth and trend of the COVID-19 pandemic. Yan et al. [8] developed an XGBoost machine learning-based model capable of predicting the mortality rates of patients who had been hospitalized for more than 10 days. Ardakani et al. [9] used deep learning on CT images to manage COVID-19 in routine clinical practice. Based on these findings, this study aims to identify COVID-19 infected patients using audio recordings of their coughing. We use various data augmentation, transfer learning, cost-sensitive learning, and ensemble learning techniques to improve the recognition ability of our model, dubbed the DiCOVA-Net. We use a novel randomness method to further improve the model's robustness, and the ensemble results outperform the previous methods. The rest of the paper is structured as follows. Section 2 reviews related work. The proposed method is described in detail in Section 3. The experiments and their results are presented in Section 4. This research is summarized in the last section.

In this paper, the task of identifying COVID-19 infected people using acoustic data falls under the category of speech recognition. In the early stages of speech recognition research, scholars used acoustic models based on Gaussian Mixture Models (GMMs) to achieve a certain recognition effect. With recent advancements in deep learning methods, scholars discovered that deep neural networks (DNNs) perform significantly better than GMMs [10]. A spectrogram is a visual representation of a signal's frequency spectrum as it changes over time, which aids deep learning models in modeling audio data [11].

The problem of imbalanced data is common in medical practice. We discovered, through analysis, that the identification of COVID-19 infected people suffers from the same problem of imbalanced data. The presence of imbalanced data will cause problems in model training, as the model will prioritize recognition of majority samples, which is not what medical staff or managers want. To address the imbalance, researchers have devised a number of strategies. For example, Jiang et al. [12] used a data augmentation technique to add noise to the data, with promising results. Furthermore, in addition to processing the data itself, the loss function of the model can be modified in a cost-sensitive manner to achieve the same goal. Lin et al. [13], for example, proposed a new loss function called Focal Loss as an extension of the conventional Cross-Entropy (CE) loss function in the dense object detection task. Experiments confirmed that the Focal Loss function aids in the detection of minority samples in imbalanced data.

Popular deep learning models in image processing and recognition include AlexNet [14], VGG [15], and ResNet [16], among others. Among the various deep learning models, ResNet is one of the most widely used and well-known. As a result, our proposed model is based on the ResNet50 network. With the advancement of deep learning technology, researchers discovered that it is inconvenient to retrain an entire deep neural network every time they encounter a new task, which takes a significant amount of time and computing resources. As a result, researchers devised a technique known as "pre-training" [17]. With pre-training, an already trained deep learning model can be loaded and its network weights partially adjusted (fine-tuning) for specific tasks in the target domain, so as to effectively solve new problems. This "pre-train then tune" paradigm is also the core idea of transfer learning [18].
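To make the "pre-train then tune" paradigm concrete, the sketch below loads an ImageNet-pretrained ResNet50 and swaps its classification head for the two-class (COVID-19 vs. non-COVID-19) task, in the spirit of the fine-tuning setup described later in this paper. It is a minimal sketch assuming a PyTorch/torchvision environment; the paper does not state its framework, and the learning rate and the choice of which layers to freeze are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet50 pre-trained on ImageNet (the "pre-train" step).
model = models.resnet50(pretrained=True)

# Optionally freeze the early layers so that only the last block and the new
# head are fine-tuned (which layers to freeze is an assumption).
for name, param in model.named_parameters():
    if not name.startswith(("layer4", "fc")):
        param.requires_grad = False

# Replace the 1000-class ImageNet head with a 2-class head
# (COVID-19 vs. non-COVID-19).
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the trainable parameters are handed to the optimizer for fine-tuning.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```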
Ensemble learning is another popular machine learning technique. The goal of ensemble learning is to improve the overall model's generalization ability and prediction accuracy by combining the outputs of multiple base learners. Some research has found that combining the outputs of multiple deep neural networks using ensemble learning can help to improve prediction accuracy [19, 20]. The classical methods for generating base models for ensemble learning primarily focus on changing the training data distribution or the model's structure. In this paper, we look at how randomness affects the model's diversity.

In this paper, we propose a model for COVID-19 infection recognition based on deep residual networks, which combines the technologies of imbalanced data processing, transfer learning, and ensemble learning. Figure 1 depicts the proposed DiCOVA-Net method. To begin, the input acoustic data is transformed into spectrogram data. The data is then divided into multiple data subsets for cross-validation, and Gaussian noise is added to the minority samples to form new minority samples on each subset. The augmented data set is fed into the pre-trained ResNet50 model for fine-tuning, with focal loss serving as the loss function. Finally, ensemble learning is used to combine the outputs of the models generated with different random seeds into the final prediction.

To increase the diversity of the training datasets, data augmentation is frequently required. Furthermore, according to [12], data augmentation can reduce the domain mismatch between the enrolled and test set data. The most common data augmentation methods are flip, rotation, scale, crop, translation, and so on, whereas we use Gaussian noise. Gaussian noise, with a mean of zero and a standard deviation of one, can be generated by a pseudorandom number generator. Because DNNs have a large number of parameters, we add Gaussian noise to the data of minority samples (namely, COVID-19 infected people) to generate new synthetic minority samples, which is very beneficial for DNNs and helps to reduce the incidence of overfitting.
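A minimal sketch of this minority-class augmentation step is given below, assuming each recording has already been converted into a fixed-size mel-spectrogram array. It uses skimage.util.random_noise because the paper mentions the skimage toolkit; the number of synthetic copies per positive sample and the [0, 1] normalization are assumptions.

```python
import numpy as np
from skimage.util import random_noise

def augment_minority(spectrograms, labels, minority_label=1, copies=2, seed=0):
    """Add Gaussian-noise copies of the minority-class (COVID-19 positive)
    mel-spectrograms to the training set."""
    new_specs, new_labels = [], []
    for i, (spec, label) in enumerate(zip(spectrograms, labels)):
        # random_noise expects values in [0, 1], so normalize each spectrogram.
        spec01 = (spec - spec.min()) / (spec.max() - spec.min() + 1e-8)
        new_specs.append(spec01)
        new_labels.append(label)
        if label == minority_label:
            for c in range(copies):
                # Zero-mean Gaussian noise creates a synthetic minority sample.
                noisy = random_noise(spec01, mode="gaussian", seed=seed + i + c)
                new_specs.append(noisy)
                new_labels.append(minority_label)
    return np.asarray(new_specs, dtype=np.float32), np.asarray(new_labels)
```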
The residual neural network [16] is a classical architecture that performed best in the ILSVRC 2015 classification task, introducing a new convolutional neural network structure. With the success of the VGG network, the depth of neural networks gained increasing attention, but it was accompanied by the difficult-to-solve problem of gradient vanishing. The residual neural network uses the residual block to solve this problem very well. The residual block can be calculated as Eq. (1):

$$y_l = h(x_l) + F(x_l, W_l), \qquad x_{l+1} = f(y_l), \tag{1}$$

where $x_l$ and $y_l$ are the input and output of the $l$-th residual block, $h(x_l)$ is the skip-connection branch, $f(\cdot)$ is the activation applied after the addition, and $F(x_l, W_l)$ is the residual function of block $l$ with weights $W_l$. In the residual neural network, there are two hypotheses: (1) $h(x_l) = x_l$ is a direct (identity) mapping; (2) $f$ is a direct (identity) mapping, so that $x_{l+1} = y_l$. Under these hypotheses, the output of any deeper block $L$ can be written as $x_L = x_l + \sum_{i=l}^{L-1} F(x_i, W_i)$. Then, the gradient of the loss $\varepsilon$ between the deeper layer $L$ and layer $l$ can be calculated as Eq. (2):

$$\frac{\partial \varepsilon}{\partial x_l} = \frac{\partial \varepsilon}{\partial x_L}\left(1 + \alpha\right), \tag{2}$$

where $\alpha$ can be presented as Eq. (3):

$$\alpha = \frac{\partial \mathcal{F}}{\partial x_l}, \tag{3}$$

where $\mathcal{F}$ can be presented as Eq. (4):

$$\mathcal{F} = \sum_{i=l}^{L-1} F(x_i, W_i). \tag{4}$$

Throughout the training process, $\alpha$ cannot always be $-1$, which means that the problem of gradient vanishing will not occur in the residual network.

By using a model that is trained on annotated source-domain data to predict on unannotated target-domain data, transfer learning [23] reduces the expense of human annotation. Transfer learning has also been shown to be effective in few-shot classification [24]. First, we pre-train the ResNet50 on ImageNet [25], which has 12 subtrees with 5247 synsets and 3.2 million pictures in total. Then we use the DiCOVA dataset to fine-tune the pre-trained network. We found that, with the help of pre-training, the model reaches its best point in only a few epochs.

The loss function is used in machine learning and deep learning models to assess the degree of discrepancy between the predicted and real values. If the model forecast is incorrect, the loss function value will be greater. In general, the better the loss function is designed, the better the model's performance. In DNNs, the loss function acts as a "supervisor," guiding model training to progress in the direction that minimizes the loss, in order to locate the network parameter combination with the smallest loss value. When DNNs are trained, cross entropy (CE) is frequently employed as the loss function. CE can better represent the differences between various models than the classification error rate and the mean square error (MSE). CE is also a convex function, which allows the global optimum to be found when calculating the derivative. For binary classification, CE is defined as Eq. (5):

$$L_{CE} = -\sum_{s}\left[\,y_s \log p_s + (1 - y_s)\log(1 - p_s)\,\right], \tag{5}$$

where $y_s$ is the label of sample $s$ and $p_s$ is the predicted probability that sample $s$ is classified as positive. Cross entropy can produce decent results when the difference between the numbers of samples in each class is small, but it loses effectiveness when the data is imbalanced. Lin et al. [13] introduced a novel loss function called focal loss, defined as Eq. (6), to address the problem of imbalanced data:

$$L_{FL} = -\sum_{s}\alpha_s\,(1 - p_{t,s})^{\gamma}\log(p_{t,s}), \qquad p_{t,s} = \begin{cases} p_s, & y_s = 1,\\ 1 - p_s, & \text{otherwise,}\end{cases} \tag{6}$$

where $\gamma$ ($\gamma \ge 0$) is the focusing parameter used to adjust the weights of hard and easy samples, and $(1 - p_{t,s})^{\gamma}$ is referred to as the modulating factor. Furthermore, $\alpha_s$ is utilized to balance the weights of positive and negative samples.
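A compact PyTorch implementation of the focal loss in Eq. (6) could look as follows. The defaults gamma=2 and alpha=0.25 are the common choices from Lin et al. [13], not values taken from this paper, whose hyperparameters are listed in its Table 1.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss following Eq. (6).

    logits:  model outputs of shape (N, 2), e.g. from a two-class head
    targets: integer labels of shape (N,), where 1 = COVID-19 positive
    """
    probs = F.softmax(logits, dim=1)[:, 1]               # p_s, positive-class probability
    p_t = torch.where(targets == 1, probs, 1.0 - probs)  # p_{t,s}
    alpha_t = torch.where(targets == 1,
                          torch.full_like(probs, alpha),
                          torch.full_like(probs, 1.0 - alpha))
    # The modulating factor (1 - p_t)^gamma down-weights easy samples.
    loss = -alpha_t * (1.0 - p_t) ** gamma * torch.log(p_t.clamp(min=1e-8))
    return loss.mean()
```

During training, this function would simply replace the cross-entropy criterion, e.g. loss = focal_loss(model(batch), labels).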
An ensemble framework aggregates the predictions of multiple base models to obtain a better prediction. Formally, suppose that we have $K$ base models with predictions $\hat{\mathbf{y}}_{t+1} = [\hat{y}^{1}_{t+1}, \cdots, \hat{y}^{K}_{t+1}]$; the ensemble is an aggregation function $\hat{y}_{t+1} = g(\hat{\mathbf{y}}_{t+1}; \Phi)$, which has various implementations such as voting, averaging, and stacking [26]. We use the averaging method to ensemble instances of the same model trained on differently distributed (differently ordered) training data.

In this paper, we suggest a unique and robust technique for achieving excellent performance on this difficult dataset. Deep learning is rife with randomness, which adds uncertainty to the process. Some researchers have investigated randomization in the weight initialization of deep learning models, showing that random weights can help generate realistic pictures without the need for training [21]. Other scholars investigated the randomness in the layers and models selected for ensemble methods [22], and this approach produces cutting-edge outcomes on datasets from several domains. However, none of the preceding techniques investigated the randomness in the incoming data. From our understanding, the neural network gives greater weight to the inputs presented at the start of training, so different orderings of the input batches will make the model start evolving from different places. Through our experiments, we find that varying the order of the input batches on this challenge data improves the final performance. Since exhaustively searching this effectively infinite space of orderings is expensive, we suggest trying four or five different random seeds and ensembling the results together. The base predictions $[f^{1}_{t+1}, \cdots, f^{K}_{t+1}]$ are all produced by the same model but with different data input orders, and their outputs are averaged. The experimental results show that our method improves the model's accuracy and robustness.
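The sketch below illustrates this seed-based ensembling as we read it: the same training routine is repeated with several random seeds (which changes, among other things, the order of the input batches), and the resulting positive-class probabilities are averaged. The train_fn and predict_fn callables are hypothetical placeholders standing in for the paper's training and inference code.

```python
import random
import numpy as np
import torch

def set_seed(seed):
    """Fix the sources of randomness that affect initialization and batch order."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

def ensemble_predict(train_fn, predict_fn, seeds=(0, 1, 2, 3)):
    """Train the same model under several random seeds and average the
    positive-class probabilities, i.e. g(.) with equal weights.

    train_fn:   callable returning a trained model for the current seed
    predict_fn: callable mapping a trained model to an array of probabilities
    """
    all_probs = []
    for seed in seeds:
        set_seed(seed)
        all_probs.append(predict_fn(train_fn()))
    return np.mean(np.stack(all_probs, axis=0), axis=0)
```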
We tested our proposed approach in the DiCOVA 2021 challenge [27]. For two-class classification, this challenge provides a dataset of sound recordings taken from COVID-19 infected and non-COVID-19 individuals. The dataset has a total of 1040 samples, with 965 non-COVID-19 samples and 75 COVID-19 infected samples, resulting in a roughly 13:1 skewed ratio, indicating that this is a highly imbalanced dataset. The average recording duration across subjects is 4.72 seconds (standard error (S.E.) ± 0.07). Furthermore, the challenge organizers use a blind test set with 233 samples, so the results on the test set reported in this paper are provided by the challenge organizers. To be impartial, the organizers of the competition have already divided the dataset into a train set and a validation set. The train set contains 822 data points, 772 of which are non-COVID-19 and 50 of which are COVID-19. The validation set contains 218 data points, 193 of which are non-COVID-19 and 25 of which are COVID-19. The organizers additionally provide a five-fold split of the data to assist participants in obtaining a more general and diverse model.

We use librosa to transform the audio data into mel-spectrograms. In the Gaussian augmentation process, we use skimage and set the random seed between 0 and 5. We list the other parameters in Table 1. In the ResNet50, we use the focal loss as the loss function and Adam as the optimization method. In model selection, we use the area under the curve (AUC) as the evaluation indicator on each fold. In the randomness experiment, we only change the random seeds at the beginning of the pipeline.
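As an illustration of this preprocessing step, the snippet below converts a cough recording into a log-scaled mel-spectrogram with librosa. The sampling rate, number of mel bands, and FFT parameters are assumptions; the settings actually used are those listed in the paper's Table 1.

```python
import librosa
import numpy as np

def wav_to_melspectrogram(path, sr=22050, n_mels=128, n_fft=2048, hop_length=512):
    """Load a cough recording and convert it to a log-scaled mel-spectrogram."""
    y, sr = librosa.load(path, sr=sr)  # resample the audio to a fixed rate
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_mels=n_mels, n_fft=n_fft, hop_length=hop_length
    )
    # Convert the power spectrogram to decibels, a common input for CNNs.
    return librosa.power_to_db(mel, ref=np.max)
```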
For this challenge, the organizers present the performance of three baseline methods: Random Forest (RF), Multi-layer Perceptron (MLP), and Logistic Regression (LR). These three methods cannot take raw audio as input, so the organizers use mel-frequency cepstral coefficients (MFCC) together with the delta and delta-delta coefficients to extract features. The performance of the three baseline methods, as provided by the organizers, is shown in Table 2 [27]. According to the results in Table 2, LR has the worst performance because, compared with RF and MLP, it has the weakest nonlinear fitting ability. In addition, the difference between RF's performance on the validation set and the test set is larger than that of MLP, so MLP is more robust.

The experimental results of the proposed method are shown in Table 3. We compare the performance of the fine-tuned model ("Original") with two different loss functions and two different data augmentation methods. The two loss functions are the cross-entropy (CE) loss and the focal loss (FL). The two data augmentation methods are simple duplication (Dul) and Gaussian noise (Gua). "Ensemble" denotes the performance obtained by integrating four models trained with different random seeds. In Table 3, the performance of the fine-tuned model based on ResNet50 is better than that of traditional machine learning methods such as RF, MLP, and LR. In terms of the loss function, CE performs better than FL on the test set, and overfitting occurs when FL is used for training on the original training set. However, after using "Dul" or "Gua" for data augmentation, FL performs better than CE on the test set. This indicates that a combination of multiple imbalanced-data processing methods achieves better results. In addition, it can also be seen that using Gaussian noise to process imbalanced data is better than directly duplicating minority samples. Finally, using randomness to train multiple models for integration can further improve the prediction performance of the entire model.

In order to use acoustic data to identify patients infected with COVID-19 more accurately, we propose a deep learning method that incorporates multiple image processing techniques. First, we transform the acoustic data into spectrogram data, which better suits the deep learning model. After that, Gaussian noise-based data augmentation and focal loss are introduced to address the problem of imbalanced data. Based on the pre-trained ResNet50 model, we apply the fine-tuning technique from transfer learning to adjust the weights of the deep neural network and make it more suitable for the identification of COVID-19 infected persons. In addition, in order to make the model more robust, we use ensemble learning to build multiple deep learning models. When training these models, we adopt a data extraction method with randomness and uncertainty to build sample subsets. Our experimental results show that the proposed method can effectively identify patients infected with COVID-19 and is superior to other state-of-the-art methods.

References
[1] Chest CT for Typical Coronavirus Disease 2019 (COVID-19) Pneumonia: Relationship to Negative RT-PCR Testing
[2] Saliva is more sensitive than nasopharyngeal or nasal swabs for diagnosis of asymptomatic and mild COVID-19 infection
[3] Respiratory physiology of COVID-19-induced respiratory failure compared to ARDS of other etiologies
[4] Classification of patterns of benignity and malignancy based on CT using topology-based phylogenetic diversity index and convolutional neural network
[5] Audio classification using attention-augmented convolutional neural network
[6] A review of natural language processing techniques for opinion mining systems
[7] Predicting the Growth and Trend of COVID-19 Pandemic using Machine Learning and Cloud Computing
[8] An interpretable mortality prediction model for COVID-19 patients
[9] Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks
[10] New types of deep neural network learning for speech recognition and related applications: An overview
[11] Environment Sound Classification using Multiple Feature Channels and Attention based Deep Convolutional Neural Network
[12] The XMUSPEECH system for short-duration speaker verification challenge 2020
[13] Focal loss for dense object detection
[14] ImageNet Classification with Deep Convolutional Neural Networks
[15] Very deep convolutional networks for large-scale image recognition
[16] Deep residual learning for image recognition
[17] Using Pre-Training Can Improve Model Robustness and Uncertainty
[18] SpotTune: Transfer learning through adaptive fine-tuning
[19] Improving adversarial robustness via promoting ensemble diversity
[20] Deep learning for practical image recognition: Case study on Kaggle competitions
[21] A Powerful Generative Model Using Random Weights for the Deep Image Representation
[22] RMDL: Random Multimodel Deep Learning for Classification
[23] A survey on transfer learning
[24] Meta-Transfer Learning for Few-Shot Learning
[25] ImageNet: A large-scale hierarchical image database
[26] Ensemble methods: foundations and algorithms
[27] DiCOVA Challenge: Dataset, task, and baseline system for COVID-19 diagnosis using acoustics

We are grateful to the DiCOVA 2021 Challenge organizers for their efforts in providing participants with data and a platform for the competition. This research is supported by the China Scholarship Council (No. 202006060162).