key: cord-0773549-o5flp8zb title: Monte Carlo Dropout for Uncertainty Estimation and Motor Imagery Classification authors: Milanés-Hermosilla, Daily; Trujillo Codorniú, Rafael; López-Baracaldo, René; Sagaró-Zamora, Roberto; Delisle-Rodriguez, Denis; Villarejo-Mayor, John Jairo; Núñez-Álvarez, José Ricardo date: 2021-10-30 journal: Sensors (Basel) DOI: 10.3390/s21217241 sha: bb91e0f20244f8a3595a5d351f8c5814532582da doc_id: 773549 cord_uid: o5flp8zb

Motor Imagery (MI)-based Brain-Computer Interfaces (BCIs) have been widely used as an alternative communication channel for patients with severe motor disabilities, achieving high classification accuracy through machine learning techniques. Recently, deep learning techniques have advanced the state of the art of MI-based BCIs. However, these techniques still lack strategies to quantify predictive uncertainty and may produce overconfident predictions. In this work, methods to enhance the performance of existing MI-based BCIs are proposed in order to obtain a more reliable system for real application scenarios. First, the Monte Carlo dropout (MCD) method is applied to MI deep neural models to improve classification and provide uncertainty estimation. This approach was implemented with a Shallow Convolutional Neural Network (SCNN-MCD) and with an ensemble model (E-SCNN-MCD). As another contribution, a threshold approach is introduced and tested for both SCNN-MCD and E-SCNN-MCD to discriminate MI task predictions of high uncertainty. The BCI Competition IV Databases 2a and 2b were used to evaluate the proposed methods for both subject-specific and non-subject-specific strategies, obtaining encouraging results for MI recognition.

Deep neural network (DNN) techniques have gained enormous acceptance in the scientific community with respect to other machine learning techniques. For this reason, DNNs are increasingly attractive in various research areas, such as language processing, computer-assisted systems, medical signal processing, and autonomous vehicles, among others. In particular, Motor Imagery (MI)-based Brain-Computer Interfaces (BCIs) built on DNNs have shown strong potential for MI task classification with good discrimination. BCI systems constitute an alternative communication pathway for patients with severe neural impairments: they acquire and process brain signals, which are then translated into control commands for robotic devices such as exoskeletons and wheelchairs [1]. Despite the impressive accuracy of DNN-based BCIs, these approaches may produce overconfident predictions, and quantifying the uncertainty of their predictions remains a challenge. Overconfident incorrect predictions may be undesirable.

Two popular public datasets from BCI Competition IV, specifically datasets 2a and 2b [29-37], were used. Dataset 2a: This dataset contains EEG signals of four MI tasks (left hand, right hand, tongue, and foot), recorded from nine healthy subjects at 22 locations (Fz, FC1, FC3, FCz, FC2, FC4, C5, C3, C1, Cz, C2, C4, C6, CP3, CP1, CPz, CP2, CP4, P1, Pz, P2, and POz) with a sampling rate of 250 Hz. Two sessions on different days were recorded for each subject, with 288 trials per session. The first session was used here for training and the second session for testing, as done in previous works [29-34]. Dataset 2b: This dataset comprises EEG signals from two MI tasks (left hand and right hand), collected from nine subjects over three bipolar EEG channels (around C3, Cz, and C4) at a sampling rate of 250 Hz. Each subject completed five sessions totaling 720 trials (the first two sessions with 120 trials each and the last three with 160 trials each).
The first three sessions were used for training, while the last two sessions were used for testing, as done in previous works [29,35-37].

Some studies [38-43] reported that real or imagined unilateral movements can attenuate or enhance the amplitude of the mu (8-12 Hz) and beta (13-30 Hz) EEG rhythms over the primary motor cortex in the contralateral and ipsilateral hemispheres, respectively, phenomena known as event-related desynchronization/synchronization (ERD/ERS). For this reason, as shown in Figure 1, the EEG signals are band-pass filtered here in the 4-38 Hz frequency range with a Butterworth filter, aiming to preserve the ERD and ERS rhythms while rejecting undesirable physiological and non-physiological artifacts. Next, each filtered EEG trial x is standardized by applying electrode-wise exponential moving standardization with a decay factor λ = 0.999 [29,31], according to the following equations:

µ(t_i) = λ µ(t_{i−1}) + (1 − λ) x(t_i)

σ²(t_i) = λ σ²(t_{i−1}) + (1 − λ) (x(t_i) − µ(t_i))²

x'(t_i) = (x(t_i) − µ(t_i)) / σ(t_i)

which compute the mean µ(t_i), variance σ²(t_i), and standardized value x'(t_i) on each electrode at sample t_i. The initial values µ(t_0) and σ²(t_0) are the mean and variance calculated over the rest period preceding each trial. To rectify outliers, the EEG amplitudes of each trial are limited to µ(t_i) ± 6σ(t_i). Finally, the trial-crops strategy is employed for data augmentation: for both datasets, crops of 4 s taken every 8 ms in the interval from −0.5 to 4 s (cue onset at 0 s) were extracted over all trials.
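To make the preprocessing concrete, the sketch below applies the band-pass filter, the exponential moving standardization, the ±6σ outlier limiting, and the trial-crops extraction. It is a minimal illustration, not the authors' code: the filter order and all function names are assumptions, and only the 4-38 Hz band, the 0.999 decay, the 6σ limit, and the crop geometry come from the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_4_38(eeg, fs=250.0, order=3):
    """Butterworth band-pass, 4-38 Hz (the order is an assumption)."""
    b, a = butter(order, [4.0, 38.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, eeg, axis=0)  # zero-phase, suitable for offline use

def exp_moving_standardize(x, mu0, var0, decay=0.999, limit=6.0):
    """Electrode-wise exponential moving standardization [29,31].
    x: (samples, channels); mu0/var0: rest-state mean and variance."""
    mu, var = mu0.astype(float).copy(), var0.astype(float).copy()
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        mu = decay * mu + (1.0 - decay) * x[i]
        var = decay * var + (1.0 - decay) * (x[i] - mu) ** 2
        sd = np.sqrt(var) + 1e-8
        xi = np.clip(x[i], mu - limit * sd, mu + limit * sd)  # rectify outliers
        out[i] = (xi - mu) / sd
    return out

def trial_crops(trial, fs=250, crop_s=4.0, step_s=0.008):
    """Data augmentation: 4 s crops taken every 8 ms over the trial window."""
    length, step = int(crop_s * fs), max(1, round(step_s * fs))
    starts = range(0, trial.shape[0] - length + 1, step)
    return np.stack([trial[s:s + length] for s in starts])
```

With a trial covering −0.5 to 4 s at 250 Hz (1125 samples), this geometry yields 63 crops of 1000 samples each.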
A shallow architecture that performs temporal and spatial convolutions is used here. The temporal convolutional layer, with a 45 × 1 filter and 40 channels, has input tensors of size 1000 × 22 × 1 and output tensors of size 478 × 22 × 40 for dataset 2a, and input tensors of size 1000 × 3 × 1 and output tensors of size 478 × 3 × 40 for dataset 2b. A stride of 2 in this layer downsamples the signal from 250 to 125 Hz. The spatial convolutional layer is composed of 40 channels with a 1 × 22 filter for dataset 2a and a 1 × 3 filter for dataset 2b. After the temporal convolution and the spatial filter, a squaring nonlinearity, an average pooling layer with 45 × 1 sliding windows and an 8 × 1 stride, and a logarithmic activation function are applied. Together, these steps are analogous to the trial log-variance computation widely used in Filter Bank Common Spatial Patterns (FBCSPs) [30,44]. The use of quadratic activation functions, or even higher-order polynomials, is not new in neural network research [45], and to the best of our knowledge, it was first used in BCI applications by Schirrmeister [31]. The classification layer is a dense layer with a Softmax activation function that receives a total of 2160 features. To avoid overfitting, batch normalization and dropout layers are used, and "MaxNorm" regularization is further applied in both the convolutional and dense layers. Moreover, early stopping and learning-rate decay are also employed, together with the Adam optimizer [46] and the categorical cross-entropy cost function. As a result, the proposed architecture contains a total of 45,804 weights for dataset 2a and 11,082 weights for dataset 2b. The neural network described so far is fully deterministic and does not permit broader reasoning about uncertainty. To estimate the uncertainty, the Monte Carlo dropout described in the next section was used.
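For orientation, a minimal Keras sketch of the SCNN for dataset 2a is given below (n_channels=3 and n_classes=2 would apply to dataset 2b). The layer sizes and strides follow the description above; the dropout rate, max-norm bounds, batch-normalization placement, and log-clipping constants are assumptions, so the exact feature count feeding the dense layer may differ slightly from the 2160 stated in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, constraints

def build_scnn(n_channels=22, n_samples=1000, n_classes=4, drop_rate=0.5):
    inp = layers.Input(shape=(n_samples, n_channels, 1))
    # Temporal convolution: 40 filters of size 45x1, stride 2 (250 -> 125 Hz)
    x = layers.Conv2D(40, (45, 1), strides=(2, 1),
                      kernel_constraint=constraints.MaxNorm(2.0))(inp)
    # Spatial convolution across all electrodes
    x = layers.Conv2D(40, (1, n_channels),
                      kernel_constraint=constraints.MaxNorm(2.0))(x)
    x = layers.BatchNormalization()(x)
    x = layers.Lambda(lambda t: tf.square(t))(x)                  # squaring
    x = layers.AveragePooling2D(pool_size=(45, 1), strides=(8, 1))(x)
    x = layers.Lambda(lambda t: tf.math.log(tf.clip_by_value(t, 1e-7, 1e4)))(x)
    x = layers.Dropout(drop_rate)(x)                              # near the output
    x = layers.Flatten()(x)
    out = layers.Dense(n_classes, activation="softmax",
                       kernel_constraint=constraints.MaxNorm(0.5))(x)
    model = models.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```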
The dropout technique is commonly used to reduce model complexity and avoid overfitting [24]. During training, a dropout layer multiplies the output of each neuron by a binary mask drawn from a Bernoulli distribution, randomly setting some neurons in the neural network to zero. The non-dropped trained neural network is then used at test time. Gal and Ghahramani [23] demonstrated that dropout applied at test time is an approximation of probabilistic Bayesian models in deep Gaussian processes. Monte Carlo dropout (MCD) quantifies the uncertainty of network outputs from the predictive distribution obtained by sampling T new dropout masks for each forward pass. As a result, instead of one model output, T model outputs {p_t; 1 ≤ t ≤ T} are obtained for each input sample x. The set {p_t} can then be interpreted as samples from the predictive distribution, which is useful for extracting information about the prediction's variability. This information is valuable for making decisions; in fact, quantifying the uncertainty of the model may allow uncertain inputs to be treated differently. The main drawback of MCD is its computational complexity, which can be proportional to the number of forward passes T. As an alternative, the forward passes can run concurrently, resulting in constant running time. Moreover, if the dropout layers are located near the network output, as in the SCNN model (see Figure 2), the input of the first dropout layer can be saved in the first pass and reused in the remaining passes, avoiding redundant computation [26]. Consequently, the computational complexity of MCD can be significantly reduced, enabling real-time applications. The MCD model estimate is computed as the average of the T predictions:

p* = (1/T) Σ_{t=1..T} p_t

According to [23], T = 50 is considered a safe choice to estimate the uncertainty, but this value must also be evaluated with respect to the predictive performance of MCD. In our study employing the SCNN architecture (see Figure 2), the performance of MCD for different numbers of samples T was analyzed for each subject in both datasets 2a and 2b. Figure 3 shows the accuracy improvement (∆ACC) from the baseline T = 1. We observed that the accuracy of SCNN-MCD generally improves as T increases, stabilizing clearly before T = 50. For this reason, T = 50 was adopted in SCNN-MCD.
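As an illustration of the sampling loop (not the authors' implementation), MCD amounts to keeping dropout active at inference and averaging the passes:

```python
import numpy as np

def mc_dropout_predict(model, x, T=50):
    """T stochastic forward passes with dropout active at test time.
    Note: training=True also switches BatchNormalization to training mode,
    so a production version may prefer dropout layers that are always on.
    Returns the (T, N, C) predictive samples and their mean p*."""
    probs = np.stack([np.asarray(model(x, training=True)) for _ in range(T)])
    return probs, probs.mean(axis=0)  # p* = (1/T) sum_t p_t

# usage sketch:
# samples, p_star = mc_dropout_predict(scnn, x_test, T=50)
# y_pred = p_star.argmax(axis=1)
```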
The Monte Carlo dropout can be seen as a particular case of Deep Ensembles (training multiple similar networks and sampling predictions from each), which are another alternative for improving the performance of deep learning models and estimating uncertainty. A brief description of Deep Ensembles follows. Deep ensembles have shown potential to improve accuracy and out-of-distribution robustness, as well as to reduce the uncertainty of deep learning models [27,47]. Ensemble learning combines several models to obtain better generalization; the disadvantages of a single model can thus be offset by the strengths of the others. The averaged predictions are most useful when the models are statistically independent, have different hyperparameters, or are trained with different data. Bagging [27,47], also known as bootstrap aggregating, is the ensemble technique in which a single training algorithm is applied to different subsets of data over the same architecture. Bagging samples may be generated with or without replacement. Given a dataset, an ensemble predictor can be obtained by training the same architecture several times, where each training instance uses one bagging sample as the training set. At prediction time, the same input is evaluated by each network, and the results are averaged. The main drawback of deep ensembles is their high computational cost and complexity for training and implementation. To evaluate the generalization of DNN-based BCI systems, it is common to randomly partition the available data, defining one part for training, another for validation, and the rest for testing. The pre-trained models from each partition can be combined in an ensemble, which corresponds to bagging without replacement [27,47]. This strategy is followed in our work for obtaining the ensemble model.
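A sketch of this partitioning is shown below; the 16 splits and the validation fraction mirror the experiments reported later, and train_scnn is a hypothetical helper wrapping model fitting with early stopping.

```python
import numpy as np

def holdout_splits(n_trials, n_models=16, val_frac=1/6, seed=0):
    """Repeated random train/validation partitions of the same training set
    (bagging without replacement): member i trains on T_i, validates on V_i."""
    rng = np.random.default_rng(seed)
    splits = []
    for _ in range(n_models):
        idx = rng.permutation(n_trials)
        n_val = int(val_frac * n_trials)
        splits.append((idx[n_val:], idx[:n_val]))  # (train indices, val indices)
    return splits

# ensemble = [train_scnn(X[tr], y[tr], X[va], y[va]) for tr, va in splits]
```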
The uncertainty of a neural network measures how reliably the model makes predictions. Several measures can be used to quantify model uncertainty [48,49]. For a better understanding, we first present five well-known metrics: variation ratio (V_R), predictive entropy (H), mutual information (I), total variance (V_tot), and margin of confidence (M). The following descriptions assume the aforementioned predictive distribution obtained from the stochastic forward passes. Let C be the total number of classes and p_t = (p_1t, p_2t, ..., p_Ct) the model output for forward pass t, 1 ≤ t ≤ T. If the last layer of the model is a softmax, the sum of all outputs equals 1. Let p* = (p*_1, ..., p*_C) be the average of the predictions {p_t; 1 ≤ t ≤ T}.

Variation ratio V_R. This measures the dispersion, i.e., how spread the distribution is around the mode:

V_R = 1 − f_{c*}/T

where f_{c*} is the frequency of the mode c* of the discrete distribution {c_t} and c_t = arg max{p_1t, p_2t, ..., p_Ct} is the predicted class in each stochastic forward pass. Notice that V_R ∈ [0, 1 − 1/C], reaching its minimum and maximum values for f_{c*} closest to T and T/C, respectively.

Predictive entropy H. This metric captures the average amount of information contained in the predictive distribution. It attains its maximum value when all classes are predicted with equal probability, and its minimum of zero when one class has probability 1 and all others 0 (that is, when the prediction is certain). The predictive entropy can be estimated as:

H = −Σ_{c=1..C} p*_c log_2 p*_c

The maximum value of H is log_2 C, so it is not fixed across datasets with different numbers of classes. To facilitate comparison across datasets, we normalize the predictive entropy as H_n = H/log_2 C, with H_n ∈ [0, 1].

Mutual information I. This metric captures the model's confidence in its output by measuring the disagreement between the individual stochastic predictions and their average; it can be estimated as:

I = H − (1/T) Σ_{t=1..T} H(p_t), where H(p_t) = −Σ_{c=1..C} p_ct log_2 p_ct

Total variance V_tot. This is the sum of the variances obtained for each class:

V_tot = Σ_{c=1..C} Var({p_ct; 1 ≤ t ≤ T})

Margin of confidence M. The most intuitive way to measure uncertainty is to analyze the difference between the two predictions of highest confidence. Let c = arg max_j p*_j be the class predicted by the MCD approach. Then, for d_t = p_ct − max_{j≠c} p_jt, we compute:

M = (1/T) Σ_{t=1..T} d_t

where M takes values close to zero for points of high uncertainty and increases as the uncertainty decreases. Note that M can be negative.

The prediction's uncertainty can intuitively be expected to correlate with classification performance. For instance, Figure 4 shows histograms of the normalized predictive entropy for correctly and incorrectly classified predictions under subject-specific classification on dataset 2a. For almost all subjects, well-classified predictions were grouped predominantly toward low-entropy values, while incorrectly classified predictions clustered in regions of high entropy. A similar effect occurred for the other uncertainty measures presented here, indicating that the most uncertain predictions also tend to be incorrect. In areas of high uncertainty, the model may classify patterns essentially at random, so it is preferable to reject the associated inputs. The rejection decision can be carried out using an uncertainty metric and, preferably, should be statistically inferred. Next, the most suitable uncertainty measures for this purpose are determined. As a novelty, an approach based on the Bhattacharyya distance is proposed here to compare the ability of the uncertainty measures to discriminate correctly from incorrectly classified predictions, in order to enhance MI task recognition. The Bhattacharyya distance measures the similarity of two probability distributions p and q over the same domain X and can be calculated as:

D_B(p, q) = −ln( Σ_{x∈X} √(p(x) q(x)) )    (10)

Table 1 shows the Bhattacharyya distance computed between the histograms of correctly and incorrectly classified predictions for each of the uncertainty measures above. Notice that the margin of confidence M reached the highest Bhattacharyya distance on dataset 2a and on the mean of both datasets, outperforming the other metrics. Thus, it was used in the classification process to reject those EEG trials that were less certain.
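All five metrics, together with the Bhattacharyya comparison of Equation (10), can be computed from the (T × C) matrix of stochastic predictions for one input. The sketch below is illustrative, not the authors' code; the mutual information line uses the standard MCD estimator, and small epsilon terms are added for numerical stability.

```python
import numpy as np

def uncertainty_metrics(p):
    """p: (T, C) softmax outputs of the T stochastic forward passes."""
    T, C = p.shape
    p_star = p.mean(axis=0)
    # Variation ratio: V_R = 1 - f_c*/T
    f_mode = np.bincount(p.argmax(axis=1), minlength=C).max()
    v_r = 1.0 - f_mode / T
    # Normalized predictive entropy: H_n = H(p*) / log2(C)
    h = -np.sum(p_star * np.log2(p_star + 1e-12))
    h_n = h / np.log2(C)
    # Mutual information: I = H(p*) - mean_t H(p_t)
    h_t = -np.sum(p * np.log2(p + 1e-12), axis=1)
    mi = h - h_t.mean()
    # Total variance: sum of per-class variances
    v_tot = p.var(axis=0).sum()
    # Margin of confidence: M = mean_t (p_ct - max_{j != c} p_jt)
    c = int(p_star.argmax())
    d = p[:, c] - np.delete(p, c, axis=1).max(axis=1)
    return dict(v_r=v_r, h_n=h_n, mi=mi, v_tot=v_tot, m=d.mean(), d=d)

def bhattacharyya(p_hist, q_hist):
    """D_B between two normalized histograms over the same bins (Eq. (10))."""
    return -np.log(np.sum(np.sqrt(p_hist * q_hist)) + 1e-12)
```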
The margin of confidence is a sample mean of T = 50 random values {d_t}; consequently, a normal distribution can be assumed for the random variable M. This allows fixing a threshold M̂ on the values of M to split the predictions into certain (M > M̂) and uncertain (M ≤ M̂). Notice that if the prediction is certain, the value zero must lie outside the confidence interval of M, and therefore M must necessarily be greater than z_{1−α} σ_d/√T, where z_{1−α} = Φ⁻¹(1 − α), Φ is the cumulative distribution function (CDF) of the standard normal distribution, σ_d is the sample standard deviation of {d_t}, and 1 − α is the confidence level. Consequently, the following threshold can be used:

M̂ = z_{1−α} σ_d / √T    (11)

The certainty condition is satisfied if the mean M of the differences {d_t} is "very large" or if the standard deviation σ_d is "very small". As a result, this threshold scheme does not classify as uncertain those predictions on which the model is consistent (σ_d ≈ 0), even when M is close to zero. As a highlight, the proposed threshold does not require prior knowledge of the data, as it depends exclusively on the predictive distribution. Finally, four subsets of predictions can be obtained with the proposed method: incorrect-uncertain (iu), correct-uncertain (cu), correct-certain (cc), and incorrect-certain (ic). Let N_iu, N_cu, N_cc, and N_ic be the number of predictions in each subset, N the total number of predictions, and R_c = (N_cc + N_ic)/N the certain ratio, i.e., the proportion of certain predictions with respect to the total number of predictions. In any recognition system, the correct classification of certain predictions is desirable; the correct-certain ratio R_cc [50] in Equation (12) can be used to measure this expectation:

R_cc = N_cc / (N_cc + N_ic)    (12)

On the other hand, if the model makes an incorrect prediction, it is desirable for it to have high uncertainty, which can be measured by the incorrect-uncertain ratio R_iu [50]:

R_iu = N_iu / (N_iu + N_ic)    (13)

The overall accuracy of the uncertainty estimation can be measured through the Uncertainty Accuracy (UA):

UA = (N_cc + N_iu) / N    (14)

UA penalizes the incorrect-certain and correct-uncertain predictions, aiming to increase the reliability, effectiveness, and feasibility of EEG MI-based recognition systems in practical applications. UA takes higher values for better thresholds; thus, it can be used to compare different thresholds.
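The certainty test and the derived scores can then be sketched as follows (illustrative; d is the vector of margin samples {d_t} returned by the metrics code above):

```python
import numpy as np
from scipy.stats import norm

def is_certain(d, alpha=0.05):
    """Equation (11): certain when M > z_{1-alpha} * sigma_d / sqrt(T),
    i.e., zero lies outside the one-sided confidence interval of M."""
    T = len(d)
    m_hat = norm.ppf(1.0 - alpha) * d.std(ddof=1) / np.sqrt(T)
    return d.mean() > m_hat

def uncertainty_scores(correct, certain):
    """R_c, R_cc, R_iu, and UA (Equations (12)-(14)) from boolean arrays
    marking which predictions are correct and which passed the certainty test."""
    correct, certain = np.asarray(correct), np.asarray(certain)
    n = correct.size
    n_cc = np.sum(correct & certain)
    n_ic = np.sum(~correct & certain)
    n_iu = np.sum(~correct & ~certain)
    r_c = (n_cc + n_ic) / n
    r_cc = n_cc / max(n_cc + n_ic, 1)
    r_iu = n_iu / max(n_iu + n_ic, 1)
    ua = (n_cc + n_iu) / n
    return r_c, r_cc, r_iu, ua
```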
A first experiment was carried out to evaluate the accuracy of the SCNN-MCD approach. In the second experiment, a Monte Carlo dropout of an ensemble (E-SCNN-MCD) was executed to verify its feasibility for discriminating MI tasks. Finally, a third experiment analyzed the uncertainty of the predictions during MI task classification for both the SCNN-MCD and E-SCNN-MCD approaches. All experiments were designed according to the conditions of BCI Competition IV, allowing direct comparison with recent works that employed these datasets. For this, the training set (the first session of dataset 2a and the first three sessions of dataset 2b) was employed to calibrate the recognition system, while the testing set (the second session of dataset 2a and the last two sessions of dataset 2b) was used only for evaluation. A repeated holdout validation over the same testing set was carried out for both subject-specific and non-subject-specific classification strategies to evaluate model generalization: the training set T was split randomly into a new training set T_i and a validation set V_i, repeating this random procedure 16 times (T = T_i ∪ V_i; 1 ≤ i ≤ 16). Once the model was trained for each T_i and V_i, the average accuracy was calculated on the testing set E. Figure 5a,b show the strategy used to select the training and testing sets for the subject-specific and non-subject-specific strategies, respectively. When applying subject-specific classification, the training data are composed of randomly selected trials from the training set of the same subject, as shown in Figure 5a for Subject 5. In contrast, with the non-subject-specific strategy, the training data are selected randomly from the training sets of all subjects, as shown in Figure 5b for Subject 5. An HPC facility with a Dell PowerEdge R720 server (four 2.40 GHz Intel Xeon processors, 48 GB RAM) and an NVIDIA GM107L Tesla M10 GPU with 32 GB of memory was used to train and test the deep learning models, using the Python TensorFlow 2.3.0 framework.

Figure 5. The training and testing set selection for both strategies: (a) Subject-specific; (b) Non-subject-specific.

Prior to the implementation of Monte Carlo dropout in the Shallow Convolutional Neural Network (SCNN) of Figure 2, several experiments were carried out on datasets 2a and 2b using the SCNN of [29] for both subject-specific and non-subject-specific classification. Once the model was trained, the Monte Carlo dropout accuracy was evaluated on the testing set E. For subject-specific session-to-session classification (the subject-specific system), the training set T from dataset 2a (first session) comprised 288 trials per subject, while for dataset 2b a total of 400 trials per subject (first three sessions) was used. The validation set V_i was formed with 1/6 and 1/5 of the former training set T for dataset 2a and dataset 2b, respectively. Tables 2 and 3 show the averaged accuracy obtained by our subject-specific system on datasets 2a and 2b, respectively, together with a comparison against relevant state-of-the-art studies.
Both tables highlight the best accuracies in bold, while the accuracies of SCNN-MCD that are lower than those of SCNN [29] are underlined. Tables 2 and 3 show that the SCNN-MCD method improved the mean ACC on both databases (around 2% for dataset 2a and 0.22% for dataset 2b) with respect to SCNN, surpassing the other analyzed methods except for CWT-SCNN [36] on dataset 2b. SCNN-MCD reached the best results on subjects A03, A04, A05, A06, A07, and B07 compared with the state-of-the-art. Interestingly, applying the SCNN-MCD approach with the subject-specific strategy did not improve the accuracy for some subjects (A09, B01, B02, B05, and B09), which suggests co-adaptation [24,51]. When dropout is applied at test time, the dropped neurons may degrade the model's accuracy, especially if the neurons rely too heavily on each other to make the prediction; co-adaptation in a neural network is defined as the expected performance decrease when dropout is applied at test time. To verify the co-adaptation scenario for these subjects, Monte Carlo dropout with T = 1 was performed, taking into account the 16 pre-trained weight sets of each subject. Figure 6 shows that for subjects A09, B01, B02, B05, and B09, the mean accuracy for T = 1 decreased with respect to SCNN, possibly indicating a co-adaptation scenario. For non-subject-specific session-to-session classification (the non-subject-specific system), as shown in Figure 5b, the training set from dataset 2a comprised 2592 trials (288 trials per subject), while the training set from dataset 2b comprised 3600 trials (400 trials per subject). Tables 4 and 5 show the results obtained for non-subject-specific classification on datasets 2a and 2b, respectively; again, the best accuracies are highlighted in bold, and the accuracies of SCNN-MCD lower than those of [29] are underlined. Tables 4 and 5 show that the SCNN-MCD approach increased the mean ACC on both databases (2.5% for dataset 2a and 1.4% for dataset 2b), generally outperforming the SCNN approach. SCNN-MCD improved the accuracy for most subjects in comparison to SCNN, although the ACC also decreased for some subjects (A06, A07, B03, and B05), again possibly indicating co-adaptation [24,51].
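For illustration only, the co-adaptation check described above (comparing deterministic inference against a single stochastic pass, averaged over the 16 pre-trained weight sets) could be sketched as:

```python
import numpy as np

def coadaptation_gap(models, x_test, y_test):
    """Mean accuracy drop from deterministic inference to T = 1 MCD;
    a positive gap suggests co-adapted neurons [24,51]."""
    det = np.mean([(m.predict(x_test).argmax(1) == y_test).mean()
                   for m in models])
    sto = np.mean([(np.asarray(m(x_test, training=True)).argmax(1) == y_test).mean()
                   for m in models])
    return det - sto
```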
The highest ACC improvements occurred on subjects A01 (8%), A02 (7%), and A05 (4.75%) from dataset 2a, as well as on B01 (6.55%) from dataset 2b. Next, the Monte Carlo dropout of an ensemble of SCNNs, termed E-SCNN-MCD, is evaluated and compared with other methods. From the previous experiments, 16 trained models M_i per subject were obtained, allowing an ensemble of 16 models {M_1, M_2, ..., M_16} to be tested. This ensemble can be seen as a particular case of bootstrap aggregating [27,47], with the 16 random training partitions acting as bagging samples. To accomplish this experiment, Monte Carlo dropout was implemented in the SCNN as described above, where each prediction p_t, 1 ≤ t ≤ 50, of the ensemble was obtained by averaging the corresponding predictions of each model M_i with dropout active at test time. The results obtained on dataset 2a are shown in Table 6 for both subject-specific and non-subject-specific classification. For both strategies, E-SCNN-MCD reached superior results compared to SCNN, with improvements of 4.88% for subject-specific and 4.39% for non-subject-specific classification; with respect to SCNN-MCD, E-SCNN-MCD improved by 3.03% and 1.89%, respectively. Similarly, Table 7 shows the results of E-SCNN-MCD on dataset 2b: for the subject-specific strategy, this approach improved by 1.08% compared to SCNN and by 0.86% with respect to SCNN-MCD, and for non-subject-specific classification it increased the accuracy by 2.87% and 1.43% compared to SCNN and SCNN-MCD, respectively. These ensemble results are promising, as they reuse trained models without any additional dedicated design.
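A sketch of this prediction scheme is given below (illustrative, not the authors' code; each of the T ensemble passes averages the dropout-active outputs of all members M_i, and p* averages the T passes):

```python
import numpy as np

def ensemble_mcd_predict(models, x, T=50):
    passes = []
    for _ in range(T):
        member_probs = [np.asarray(m(x, training=True)) for m in models]
        passes.append(np.mean(member_probs, axis=0))  # one ensemble pass p_t
    probs = np.stack(passes)                          # (T, N, C)
    return probs, probs.mean(axis=0)                  # samples and p*
```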
The third experiment analyzed the uncertainty of the predictions and evaluated the certainty condition based on the proposed threshold (see Equation (11)), splitting the predictions into certain and uncertain groups for both the SCNN-MCD and E-SCNN-MCD approaches to enhance decision making. As in the previous experiments, subject-specific and non-subject-specific classification were considered, and a confidence level (1 − α) of 0.95 was selected. To assess the suitability of the proposed threshold at this confidence level, different thresholds from 0.1 to 10 were tested, and their corresponding R_c, R_cc, R_iu, and UA values were calculated. Figure 7 shows the uncertainty accuracy (UA) achieved for each threshold with both the subject-specific and non-subject-specific strategies. It is worth noting that the UA values for the thresholds corresponding to confidence levels of 0.95 and 0.99 are very close to the optimal value UA* for both strategies (see Figure 7a,b), which is remarkable; in particular, the difference between UA* and the UA corresponding to α = 0.05 is less than 0.1%.

Figure 7. The mean Uncertainty Accuracy (UA) for different thresholds using datasets 2a and 2b: (a) Subject-specific strategy; (b) Non-subject-specific strategy.

Table 8 shows the R_c, R_cc, R_iu, and UA values obtained with α = 0.05 for subject-specific classification on both datasets. The SCNN-MCD approach provided different levels of certainty on datasets 2a and 2b, as shown in Table 8. Although the Uncertainty Accuracy (UA) reached approximately optimal values on both datasets, around 10% of the predictions were labeled as uncertain on dataset 2a, with the highest rejection observed on subjects A02 (18.38%) and A06 (19.69%). On dataset 2b, only 3% of the predictions were rejected, driven largely by subjects B02 (5.6%) and B03 (6.66%). Notice that for datasets 2a and 2b, around 21% and 7% of the misclassified predictions were labeled as uncertain, respectively. A better picture, analyzing two uncertainty metrics together, is given in Figure 8, which shows the bivariate distribution of the mutual information and the margin of confidence over the predictions obtained for subjects A04 and B01, together with the marginal distributions. The average margin of confidence was similar for both subjects; however, we found a large variation in the mutual information: the average mutual information for subject B01 was four times lower than for subject A04. Although only these two subjects are considered here, the averages over datasets 2a and 2b behaved similarly. Given that mutual information captures the model's confidence in its output, this indicates that the SCNN-MCD approach provided more reliable outputs on dataset 2b than on dataset 2a. For non-subject-specific classification, Table 9 shows the results achieved on both datasets, using a confidence level (1 − α) of 0.95.
Table 9 shows that the SCNN-MCD approach again presented the highest uncertainty on dataset 2a, rejecting approximately 14% of its predictions, with a large influence from subjects A02 (18.75%), A05 (18.87%), and A06 (19.03%). On dataset 2b, it rejected approximately 5% of its predictions, mainly influenced by subjects B02 (7.29%) and B03 (7.71%). For dataset 2a, more than 24% of the misclassified predictions were considered uncertain, while this figure was below 10% on dataset 2b. The accuracy achieved on certain predictions (R_cc) for dataset 2a was 71.32%, an enhancement of 3.45% compared to SCNN, while it reached 77.63% for dataset 2b, a marginal improvement of 0.86%. As a highlight, subject A03 in dataset 2a presented the best R_cc (above 85%), whereas subjects B04 and B08 in dataset 2b achieved R_cc higher than 93%. The previous analysis was extended to the ensemble model (E-SCNN-MCD), and the results are shown in Tables 10 and 11 (for subject-specific and non-subject-specific classification, respectively), using both datasets and a confidence level (1 − α) of 0.95. The ACC achieved on certain predictions (R_cc) for dataset 2a was 80.32%, an improvement of 2.89% with respect to SCNN-MCD, whereas for dataset 2b it reached 78.31%, a marginal improvement of 0.59%. As expected, E-SCNN-MCD obtained the predictions of highest certainty, with 97.89% and 99.05% certain predictions on datasets 2a and 2b, respectively, when applying subject-specific classification. Comparing Tables 8 and 10, the UA values improved with the E-SCNN-MCD approach relative to SCNN-MCD. This improvement occurred despite a decrease in the number of uncertain incorrect predictions, reflected in a lower R_iu; the decrease was compensated by a substantial increase in the number of correct certain predictions, expressed in the R_cc indicator. Figure 9 shows the bivariate distribution of the predictions in terms of mutual information and margin of confidence for subject A05, using the SCNN-MCD and E-SCNN-MCD approaches. The mutual information decreased with E-SCNN-MCD compared to SCNN-MCD, while the margin of confidence showed no substantial changes; this effect occurred for all subjects in both datasets. Similar behavior can be observed in Table 11 when applying non-subject-specific classification. The advantages of applying the MCD method to enhance the performance of MI-based BCI schemes with deep neural network models have been demonstrated in this study. The approach was applied to the Shallow Convolutional Neural Network architecture and to an ensemble model, in order to validate its potential to improve subject-specific and non-subject-specific MI classification, provide uncertainty estimation, and consequently increase the reliability of BCIs. A threshold approach using uncertainty measures was also introduced and applied to both the SCNN-MCD and E-SCNN-MCD models to automatically reject EEG trials that produce predictions of high uncertainty, obtaining lower error rates during MI classification.
The proposed threshold does not require prior knowledge of the data, as it depends exclusively on the predictive distribution. In addition, at a confidence level of 0.95, it lands near the expected optimum value for different MI datasets; owing to its statistical basis, the selected threshold can therefore be extended to other datasets. In future work, the proposed threshold can be explored with other uncertainty metrics to better reject EEG data that produce misclassified predictions. For clinical translation, this research has enormous potential given the variability of EEG, mainly in people with severe motor impairments, which increases the uncertainty of BCI predictions.

References

1. Towards a new modality-independent interface for a robotic wheelchair
2. The need for uncertainty quantification in machine-assisted medical decision making
3. Neural network-based uncertainty quantification: A survey of methodologies and applications
4. Exploiting uncertainty of deep neural networks for improving segmentation accuracy in MRI images
5. Alzheimer's Disease Neuroimaging Initiative. Bayesian QuickNAT: Model uncertainty in deep whole-brain segmentation for structure-wise quality control
6. Estimating uncertainty in deep learning for reporting confidence to clinicians in medical image segmentation and diseases detection
7. Uncertainty quantification with statistical guarantees in end-to-end autonomous driving control
8. Towards safe autonomous driving: Capture uncertainty in the deep neural network for lidar 3d vehicle detection
9. The limits and potentials of deep learning for robotics
10. Uncertainty Estimation in Deep Learning with Application to Spoken Language Assessment
11. On Cost-Sensitive Calibrated Uncertainty in Deep Learning: An application on COVID-19 detection
12. Uncertainty assisted robust tuberculosis identification with bayesian convolutional neural networks
13. Automated detection of presymptomatic conditions in Spinocerebellar Ataxia type 2 using Monte Carlo dropout and deep neural network techniques with electrooculogram signals
14. On the effect of inter-observer variability for a reliable estimation of uncertainty of medical image segmentation
15. DR|GRADUATE: Uncertainty-aware deep learning-based diabetic retinopathy grading in eye fundus images
16. Leveraging uncertainty information from deep neural networks for disease detection
17. Uncertainty-Informed Detection of Epileptogenic Brain Malformations Using Bayesian Neural Networks
18. Attention deficits and dual task demands after mild traumatic brain injury
19. Advances in user-training for mental-imagery-based BCI control: Psychological and cognitive factors and their neural correlates. Prog.
20. Efficient neuroplasticity induction in chronic stroke patients by an associative brain-computer interface
21. Bayesian learning via stochastic dynamics
22. Uncertainty in Deep Learning
23. Dropout as a bayesian approximation: Representing model uncertainty in deep learning
24. Dropout: A simple way to prevent neural networks from overfitting
25. The dropout learning algorithm
26. Fast Monte Carlo Dropout and Error Correction for Radio Transmitter Classification
27. Simple and scalable predictive uncertainty estimation using deep ensembles
28. Uncertainty Estimation for Deep Learning-Based Segmentation of Roads in Synthetic Aperture Radar Imagery
29. Shallow Convolutional Network Excel for Classifying Motor Imagery EEG in BCI applications
30. Filter bank common spatial pattern algorithm on BCI competition IV datasets 2a and 2b
31. Deep learning with convolutional neural networks for EEG decoding and visualization. Hum. Brain Mapp.
32. Convolutional neural network-based transfer learning and knowledge distillation using multi-subject data in motor imagery BCI
33. Effects of local and global spatial patterns in EEG motor-imagery classification using convolutional neural network
34. Deep Learning for EEG motor imagery classification based on multi-layer CNNs feature fusion
35. A novel deep learning approach for classification of EEG motor imagery signals
36. A novel simplified convolutional neural network classification algorithm of motor imagery EEG signals based on deep learning
37. Deep learning based inter-subject continuous decoding of motor imagery for practical brain-computer interfaces
38. Event-related synchronization (ERS): An electrophysiological correlate of cortical areas at rest
39. Evaluation of event-related desynchronization (ERD) preceding and following voluntary self-paced movement
40. Mu rhythm (de)synchronization and EEG single-trial classification of different motor imagery tasks
41. Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clin. Neurophysiol.
42. Event-related synchronization of mu rhythm in the EEG over the cortical hand area in man
43. Event-related beta synchronization after wrist, finger and thumb movement. Electroencephalogr.
44. Filter bank common spatial pattern (FBCSP) in brain-computer interface
45. Quadratic Polynomials Learn Better Image Features
46. A method for stochastic optimization
47. Bagging predictors
48. What uncertainties do we need in bayesian deep learning for computer vision?
49. Probabilistic Deep Learning: With Python, Keras and TensorFlow Probability
50. Dropconnect is effective in modeling uncertainty of bayesian deep networks
51. Improving Neural Networks with Dropout

Acknowledgments: This research was conducted with the support of the Supercomputer of the Universidad de Oriente, Santiago de Cuba, which is part of the Cuban Centre for Academic Supercomputing (HPC-Cuba), supported by the VLIR-UOS JOINT project and the Iberoamerican Supercomputing Network (RICAP). The authors would like to thank CNPq (166359/2020-3) and CAPES (finance code 001) from Brazil. The authors declare no conflict of interest.