Adversarial Attacks and Defenses in Physiological Computing: A Systematic Review

Dongrui Wu, Weili Fang, Yi Zhang, Liuqing Yang, Xiaodong Xu, Hanbin Luo, Xiang Yu

February 4, 2021

Abstract: Physiological computing uses human physiological data as system inputs in real time. It includes, or significantly overlaps with, brain-computer interfaces, affective computing, adaptive automation, health informatics, and physiological signal based biometrics. Physiological computing increases the communication bandwidth from the user to the computer, but is also subject to various types of adversarial attacks, in which the attacker deliberately manipulates the training and/or test examples to hijack the machine learning algorithm output, possibly leading to user confusion, frustration, injury, or even death. However, the vulnerability of physiological computing systems has not received enough attention, and no comprehensive review of adversarial attacks on it exists. This paper fills this gap by providing a systematic review of the main research areas of physiological computing, different types of adversarial attacks and their applications to physiological computing, and the corresponding defense strategies. We hope this review will attract more research interest to the vulnerability of physiological computing systems and, more importantly, to defense strategies that make them more secure.

I. INTRODUCTION

Physiological computing [37] is "the use of human physiological data as system inputs in real time." It opens up bandwidth within human-computer interaction by enabling an additional channel of communication from the user to the computer [22], which is necessary in adaptive and collaborative human-computer symbiosis. Common physiological data in physiological computing include the electroencephalogram (EEG), electrocorticogram (ECoG), electrocardiogram (ECG), electrooculogram (EOG), electromyogram (EMG), eye movement, blood pressure (BP), electrodermal activity (EDA), respiration (RSP), skin temperature, etc., which are recordings or measures produced by the physiological processes of human beings. Their typical measurement locations are shown in Fig. 1. These signals have been widely studied in the literature in various applications, including and beyond physiological computing, as shown in Table I.

Physiological signals are usually single-channel or multi-channel time series, as shown in Fig. 2. In many clinical applications, the recording may last hours, days, or even longer. For example, long-term video-EEG monitoring for seizure diagnostics may need 24 hours, and ECG monitoring in intensive care units (ICUs) may last days or weeks. Wearable ECG monitoring devices, e.g., the iRhythm Zio Patch, AliveCor KardiaMobile, Apple Watch, and Huawei Band, are being used by millions of users. A huge amount of physiological signals is collected during these processes. Manually labeling them is very labor-intensive, and even impossible for wearable devices, given the huge number of users. Machine learning [83] has been used to alleviate this problem, by automatically classifying the measured physiological signals.
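As a concrete illustration of this automatic analysis, the sketch below segments a long multi-channel recording into fixed-length trials that a trained classifier can score one by one. It is only a minimal sketch: the sampling rate, window length, array shapes, and the `classifier` object are hypothetical placeholders, not taken from any specific system in this review.

```python
import numpy as np

def segment_recording(recording: np.ndarray, fs: float,
                      win_sec: float = 10.0, step_sec: float = 10.0) -> np.ndarray:
    """Cut a (channels, samples) recording into (n_trials, channels, win_len) trials."""
    win_len, step = int(win_sec * fs), int(step_sec * fs)
    n_trials = 1 + (recording.shape[1] - win_len) // step
    return np.stack([recording[:, i * step: i * step + win_len]
                     for i in range(n_trials)])

# Example: one hour of 19-channel EEG sampled at 256 Hz, scored window by window.
fs = 256.0
eeg = np.random.randn(19, int(3600 * fs)).astype(np.float32)  # placeholder data
trials = segment_recording(eeg, fs)        # shape: (360, 19, 2560)
# labels = classifier.predict(trials)      # any trained model could be plugged in here
```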
Particularly, deep learning has demonstrated outstanding performance [70], e.g., EEGNet [48], DeepCNN [74], ShallowCNN [74] and TIDNet [45] for EEG classification, SeizureNet for EEG-based seizure recognition [5], CNNs for ECG rhythm classification [28], ECGNet for ECG-based mental stress monitoring [36], and so on.

However, recent research has shown that both traditional machine learning and deep learning models are vulnerable to various types of attacks [27], [57], [68], [78]. For example, Chen et al. [14] created a backdoor in the target model by injecting poisoning samples, which contain ordinary sunglasses, into the training set, so that all test images with the sunglasses would be classified into a target class. Eykholt et al. [21] stuck carefully crafted graffiti on road signs, and caused the model to classify 'Stop' as 'Speed Limit 40'. Finlayson et al. [23], [24] successfully performed adversarial attacks on deep learning classifiers across three clinical domains (fundoscopy, chest X-ray, and dermoscopy). Rahman et al. [69] performed adversarial attacks on six COVID-19 related applications, including recognizing whether a subject is wearing a mask, maintaining deep learning based QR codes as immunization certificates, recognizing COVID-19 from CT scan or X-ray images, etc. Ma et al. [51] showed that medical deep learning models can be more vulnerable to adversarial attacks than models for natural images, but, surprisingly and fortunately, medical adversarial attacks may also be easily detected. Kaissis et al. [40] pointed out that various other attacks, in addition to adversarial attacks, also exist in medical imaging, and called for secure, privacy-preserving and federated machine learning to cope with them.

Machine learning models in physiological computing are not exempt from adversarial attacks [42], [43], [90]. However, to the best of our knowledge, there does not exist a systematic review on adversarial attacks in physiological computing. This paper fills this gap, by comprehensively reviewing different types of adversarial attacks, their applications in physiological computing, and possible defense strategies. It will be very important to the security of physiological computing systems in real-world applications. We need to emphasize that this paper focuses on the emerging adversarial attacks and defenses. For other types of attacks and defenses, e.g., cybersecurity, the readers can refer to, e.g., [6].

The remainder of this paper is organized as follows: Section II introduces five relevant research areas in physiological computing. Section III introduces different categorizations of adversarial attacks. Section IV describes various adversarial attacks on physiological computing systems. Section V introduces different approaches to defend against adversarial attacks, and their applications in physiological computing. Finally, Section VI draws conclusions and points out some future research directions.

II. PHYSIOLOGICAL COMPUTING

Physiological computing includes, or significantly overlaps with, brain-computer interfaces (BCIs), affective computing, adaptive automation, health informatics, and physiological signal based biometrics.

A BCI system establishes a direct communication pathway between the brain and an external device, e.g., a computer or a robot [47]. Scalp and intracranial EEGs have been widely used in BCIs [83]. The flowchart of a closed-loop EEG-based BCI system is shown in Fig. 3.
After EEG signal acquisition, signal processing, usually including both temporal filtering and spatial filtering, is used to enhance the signal-to-noise ratio. Machine learning is next performed to understand what the EEG signal means, based on which a control command may be sent to an external device. EEG-based BCI spellers may be the only non-muscular communication devices for Amyotrophic Lateral Sclerosis (ALS) patients to express their opinions [75]. In seizure treatment, responsive neurostimulation (RNS) [26], [31] recognizes ECoG or intracranial EEG patterns prior to ictal onset, and delivers a high-frequency stimulation impulse to stop the seizure, improving the patient's quality of life.

Affective computing is "computing that relates to, arises from, or deliberately influences emotion or other affective phenomena" [65]. In bio-feedback based relaxation training [15], EDA can be used to detect the user's affective state, based on which a relaxation training application can provide the user with explicit feedback to learn how to change his/her physiological activity to improve health and performance. In software adaptation [3], the graphical interface, difficulty level, sound effects and/or content are automatically adapted based on the user's real-time emotion estimated from various physiological signals, to keep the user more engaged.

Adaptive automation keeps the task workload demand within appropriate levels to avoid both underload and overload, and hence enhances the overall performance and safety of the human-machine system [4]. In air traffic management [4], an operator's EEG signal can be used to estimate the mental workload, and trigger specific adaptive automation solutions. This can significantly reduce the operator's workload during highly demanding conditions, and increase the task execution performance. A study [18] also showed that pupil diameter and fixation time, measured from an eye-tracking device, can be indicators of mental workload, and hence be used to trigger adaptive automation.

Health informatics studies information and communication processes and systems in healthcare [16]. A short single-lead ECG recording (9-60 s), collected from the AliveCor personal ECG monitor, can be used by a convolutional neural network (CNN) to classify normal sinus rhythm, atrial fibrillation, an alternative rhythm, or noise, with an average test accuracy of 88% on the first three classes [32]. A very recent study [59] also showed that heart rate data from consumer smart watches, e.g., Apple, Fitbit and Garmin devices, can be used for pre-symptomatic detection of COVID-19, sometimes nine or more days before symptom onset.

Physiological signal based biometrics [76] use physiological signals for biometric applications, e.g., digitally identifying a person to grant access to systems, devices or data. EEG [30], ECG [1], PPG [87] and multimodal physiological signals [8] have been used in user identification and authentication, with the advantages of universality, permanence, liveness detection, continuous authentication, etc.

III. ADVERSARIAL ATTACKS

There are different categorizations of adversarial attacks [57], [68], as shown in Fig. 4. According to the outcome, there are two types of adversarial attacks [57]: targeted attacks and non-targeted (indiscriminate) attacks. Targeted attacks force a model to classify certain examples, or a certain region of the feature space, into a specific (usually wrong) class.
Non-targeted attacks force a model to misclassify certain examples or feature space regions, but do not specify which class they should be misclassified into. For example, in a 3-class classification problem, assume the class labels are A, B and C. Then, a targeted attack may force the input to be classified into Class A, no matter what its true class is. A non-targeted attack forces an input from Class A to be classified into Class B or C, but does not specify which of the two; as long as the output is not A, the non-targeted attack is successful.

According to how much the attacker knows about the target model, there can be three types of attacks [89]:

1) White-box attacks, in which the attacker knows everything about the target model, including its architecture and parameters. This is the easiest attack scenario and could cause the maximum damage. It may correspond to the case that the attacker is an insider, or that the model designer is evaluating the worst-case scenario when the model is under attack. Popular attack approaches include L-BFGS [78], DeepFool [60], the C&W method [13], the fast gradient sign method (FGSM) [27], the basic iterative method (BIM) [46], etc.

2) Black-box attacks, in which the attacker knows neither the architecture nor the parameters of the target model, but can supply inputs to the model and observe its outputs. This is the most realistic and also the most challenging attack scenario. One example is that the attacker purchases a commercial BCI system and tries to attack it. Black-box attacks are possible due to the transferability of adversarial examples [78], i.e., an adversarial example generated from one machine learning model may be used to fool another machine learning model at a high success rate, if the two models solve the same task. So, in black-box attacks [63], the attacker can query the target model many times to construct a training set, train a substitute machine learning model from it, and then generate adversarial examples from the substitute model to attack the original target model.

3) Gray-box attacks, which assume the attacker knows a limited amount of information about the target model, e.g., (part of) the training data that the target model is tuned on. They are frequently used in data poisoning attacks, as introduced next.

According to the stage at which the adversarial attack is performed, there are two types of attacks: poisoning attacks and evasion attacks. Poisoning attacks [84] happen at the training stage, and create backdoors in the machine learning model by adding contaminated examples to the training set. They are usually white-box or gray-box attacks, achieved by data injection, i.e., adding adversarial examples to the training set [54], or data modification, i.e., poisoning the training data by modifying their features or labels [9]. Evasion attacks [27] happen at the test stage, by adding deliberately designed tiny perturbations to benign test samples to mislead the machine learning model. They are usually white-box or black-box attacks. An example of an evasion attack in EEG classification is shown in Fig. 5.
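To make the white-box evasion idea concrete, the following is a minimal sketch of a one-step FGSM attack on a batch of EEG trials, assuming a generic differentiable PyTorch classifier. The `model` and tensor shapes are hypothetical; this is not the exact implementation used in any of the works reviewed below.

```python
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y, eps=0.01):
    """Non-targeted FGSM: perturb benign trials x (batch, channels, time) with
    the sign of the loss gradient so that a trained model misclassifies them."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)      # loss w.r.t. the true labels y
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()  # one signed-gradient step
    return x_adv.detach()
```

A targeted variant would instead descend the loss computed with attacker-chosen target labels; `eps` controls how perceptible the perturbation is.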
IV. ADVERSARIAL ATTACKS TO PHYSIOLOGICAL COMPUTING

Compared with images, there have been much fewer studies on adversarial attacks to time series [41], and even fewer on physiological signals. A summary of them is shown in Table III.

Attacking the machine learning models in BCIs could cause serious damage, ranging from user frustration to serious injury. For example, in seizure treatment, attacks on the seizure recognition algorithm of an RNS device [26] may quickly drain its battery or render it completely ineffective, significantly reducing the patient's quality of life. Adversarial attacks on an EEG-based BCI speller may hijack the user's true input and output wrong letters, leading to user frustration or misunderstanding. In BCI-based driver drowsiness estimation [81], adversarial attacks may make a drowsy driver look alert, increasing the risk of accidents. Although pioneers in BCIs have thought of neurosecurity [38], i.e., "devices getting hacked and, by extension, behavior unwillfully and unknowingly manipulated for nefarious purposes," most BCI research so far has focused on making BCIs faster and more accurate, paying little attention to their security.

In 2019, Zhang and Wu [89] first pointed out that adversarial examples exist in EEG-based BCIs, i.e., deep learning models in BCIs are also vulnerable to adversarial attacks. They successfully performed white-box, gray-box and black-box non-targeted evasion attacks on three CNN classifiers, i.e., EEGNet [48], DeepCNN and ShallowCNN [74], in three different BCI paradigms, i.e., P300 evoked potential detection, feedback error-related negativity detection, and motor imagery classification. The basic idea, shown in Fig. 6, is to add a jamming module between EEG signal processing and machine learning to generate adversarial examples, optimized by unsupervised FGSM. The generated adversarial perturbations are too small to be noticed by human eyes (an example is shown in Fig. 5), but can significantly reduce the classification accuracy.

Fig. 6. The BCI evasion attack approach proposed in [89]. A jamming module is inserted between signal preprocessing and machine learning to generate adversarial examples.

It is important to note that the jamming module is implementable, as research [77] has shown that BtleJuice, a framework to perform Man-in-the-Middle attacks on Bluetooth devices, can be used to intercept the data from a consumer-grade EEG-based BCI system, modify them, and then send them back to the headset.

Jiang et al. [39] focused on black-box non-targeted evasion attacks on deep learning models in BCI classification problems, in which the attacker trains a substitute model to approximate the target model, and then generates adversarial examples from the substitute model to attack the target model. Learning a good substitute model is critical to the success of black-box attacks, but it requires a large number of queries to the target model. Jiang et al. [39] proposed a novel query synthesis based active learning framework to improve the query efficiency, by actively synthesizing EEG trials scattered around the decision boundary of the target model, as shown in Fig. 7. Compared with the original black-box attack approach in [89], the active learning based approach can improve the attack success rate with the same number of queries, or, equivalently, reduce the number of queries needed to achieve a desired attack performance. This is the first work that integrates active learning and adversarial attacks for EEG-based BCIs.
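The generic substitute-model strategy behind such black-box attacks can be sketched as follows. This is only an illustrative baseline with hypothetical names (`target_model_query`, `substitute`, `queries`); it uses plain FGSM on the substitute and does not include the query-synthesis active learning of [39].

```python
import torch
import torch.nn.functional as F

def substitute_black_box_attack(target_model_query, substitute, queries,
                                epochs=20, eps=0.01):
    """Label queried trials with the black-box target model, train a substitute,
    then craft adversarial examples on the substitute and rely on transferability."""
    # 1. Query the target model to build a pseudo-labeled training set.
    with torch.no_grad():
        pseudo_labels = target_model_query(queries).argmax(dim=1)
    # 2. Train the white-box substitute on the queried data (full batch, for brevity).
    opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(substitute(queries), pseudo_labels).backward()
        opt.step()
    # 3. Craft adversarial examples on the substitute; they often transfer.
    x = queries.clone().detach().requires_grad_(True)
    F.cross_entropy(substitute(x), pseudo_labels).backward()
    return (x + eps * x.grad.sign()).detach()
```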
The above two studies considered classification problems, as in most adversarial attack research. Adversarial attacks to regression problems were much less investigated in the literature. Meng et al. [56] were the first to study white-box targeted evasion attacks for BCI regression problems. They proposed two approaches, based on optimization and gradient, respectively, to design small perturbations that change the regression output by a pre-determined amount. Experiments on two BCI regression problems (EEG-based driver fatigue estimation, and EEG-based user reaction time estimation in the psychomotor vigilance task) verified their effectiveness: both approaches can craft adversarial EEG trials that are indistinguishable from the original ones, but significantly change the outputs of the BCI regression model. Moreover, adversarial examples generated from both approaches are also transferable, i.e., adversarial examples generated from one known regression model can also be used to attack an unknown regression model in black-box attacks.

The above three attack strategies are theoretically important, but there are some constraints in applying them to real-world BCIs:

1) Trial-specificity, i.e., the attacker needs to generate different adversarial perturbations for different EEG trials.

2) Channel-specificity, i.e., the attacker needs to generate different adversarial perturbations for different EEG channels.

3) Non-causality, i.e., the complete EEG trial needs to be known in advance to compute the corresponding adversarial perturbation.

4) Synchronization, i.e., the exact starting time of the EEG trial needs to be known for the best attack performance.

Some recent studies tried to overcome these constraints. Zhang et al. [90] performed white-box targeted evasion attacks on P300 and steady-state visual evoked potential (SSVEP) based BCI spellers (Fig. 8), and showed that a tiny perturbation to the EEG trial can mislead the speller to output any character the attacker wants, e.g., change the output from 'Y' to 'N', or vice versa. The most distinguishing characteristic of their approach is that it explicitly considers causality in designing the perturbations, i.e., the perturbation should be generated before or as soon as the target EEG trial starts, so that it can be added to the EEG trial in real time in practice. To achieve this, an adversarial perturbation template is constructed from the training set only and then fixed, so there is no need to know the test EEG trial and compute the perturbation specifically for it (a minimal sketch is given below). Their approach resolves the trial-specificity and non-causality constraints, but different EEG channels still need different perturbations, and it also requires the attacker to know the starting time of an EEG trial in advance to achieve the best attack performance, i.e., there are still channel-specificity and synchronization constraints.

Fig. 8. Workflow of a P300 speller and an SSVEP speller [90]. For each speller, the user watches the stimulation interface, focusing on the character he/she wants to input, while EEG signals are recorded and analyzed by the speller. The P300 speller first identifies the row and the column that elicit the largest P300, and then outputs the letter at their intersection. The SSVEP speller identifies the output letter directly by matching the user's EEG oscillation frequency with the flickering frequency of each candidate letter.
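The real-time use of such a fixed perturbation template can be illustrated as follows. The streaming interface, template shape, and trial-onset callback are hypothetical; the sketch only shows why causality and synchronization matter, not how the template itself is optimized in [90].

```python
import numpy as np

class TemplatePerturber:
    """Adds a pre-computed adversarial template to streaming EEG, sample by sample.
    The template is fixed before the trial starts, so the attack is causal."""

    def __init__(self, template: np.ndarray):    # template shape: (channels, trial_len)
        self.template = template
        self.t = template.shape[1]               # inactive until a trial starts

    def on_trial_start(self):
        self.t = 0                               # requires knowing the trial onset

    def process(self, sample: np.ndarray) -> np.ndarray:   # one (channels,) sample
        if self.t < self.template.shape[1]:
            sample = sample + self.template[:, self.t]
            self.t += 1
        return sample
```

Note that per-channel values are still needed (channel-specificity), and `on_trial_start` encodes the synchronization requirement discussed above.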
Zhang et al. [90] considered targeted attacks on a traditional and most frequently used BCI speller pipeline, which has separate feature extraction and classification steps. Liu et al. [50] considered both targeted and non-targeted white-box evasion attacks on end-to-end deep learning models in EEG-based BCIs, and proposed a total loss minimization (TLM) approach to generate universal adversarial perturbations (UAPs) for them. Experimental results demonstrated its effectiveness on three popular CNN classifiers (EEGNet, ShallowCNN, and DeepCNN) in three BCI paradigms (P300, feedback error-related negativity, and motor imagery). They also verified the transferability of UAPs in non-targeted gray-box evasion attacks. To further simplify the implementation of TLM-UAP, Liu et al. [50] also considered smaller template sizes, i.e., mini TLM-UAPs with a small number of channels and time domain samples, which can be added anywhere to an EEG trial. Mini TLM-UAPs are more practical and flexible, because they do not require the attacker to know the exact number of EEG channels and the exact length and starting time of an EEG trial. Liu et al. [50] showed that, generally, all mini TLM-UAPs were effective; however, their effectiveness decreased when the number of used channels and/or the template length decreased, which is intuitive. This is the first study on UAPs of CNN classifiers in EEG-based BCIs, and also the first on optimization based UAPs for targeted evasion attacks. In summary, the TLM-UAP approach [50] resolves the trial-specificity and non-causality constraints, and mini TLM-UAPs further alleviate the channel-specificity and synchronization constraints.

All of the above studies focused on evasion attacks. Meng et al. [55] were the first to show that poisoning attacks can also be performed on EEG-based BCIs, as shown in Fig. 9. They proposed a practically realizable backdoor key, the narrow period pulse, for EEG signals, which can be inserted into the benign EEG signal during data acquisition, and demonstrated its effectiveness in black-box targeted poisoning attacks, i.e., the attacker does not know any information about the test EEG trial, including its starting time, and wants to classify the test trial into a specific class, regardless of its true class. In other words, it resolves the trial-specificity, channel-specificity, causality and synchronization constraints simultaneously. To our knowledge, this is, to date, the most practical BCI attack approach.

Fig. 9. Poisoning attack in EEG-based BCIs [55]. Narrow period pulses can be added to EEG trials during signal acquisition.

A summary of existing adversarial attack approaches in EEG-based BCIs is shown in Table IV.

Adversarial attacks in health informatics can also cause serious damage, or even death. For example, adversarial attacks on the machine learning algorithms in implantable cardioverter defibrillators could lead to unnecessary painful shocks that damage the cardiac tissue, or, even worse, therapy interruptions and sudden cardiac death [62]. Han et al. [32] proposed both targeted and non-targeted white-box evasion attack approaches to construct smoothed adversarial examples for ECG trials that are invisible to one board-certified medicine specialist and one cardiac electrophysiology specialist, but can successfully fool a CNN classifier for arrhythmia detection. They achieved a 74% attack success rate (74% of the correctly classified test ECGs were assigned a different diagnosis after the adversarial attacks) on atrial fibrillation classification from single-lead ECGs collected from the AliveCor personal ECG monitor.
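The key idea of such smoothed adversarial ECGs can be approximated by an iterative gradient attack that low-pass filters the perturbation at every step, so that it contains no suspicious spikes. The sketch below is only one plausible instantiation (moving-average smoothing, a hypothetical `model`, and a single-lead input of shape (batch, 1, time)); it is not the exact procedure of Han et al. [32].

```python
import torch
import torch.nn.functional as F

def smoothed_ecg_attack(model, ecg, y, eps=0.05, alpha=0.01, steps=40, kernel=15):
    """Iterative non-targeted attack whose perturbation is smoothed at every step,
    so the adversarial ECG (batch, 1, time) has no abrupt, suspicious-looking spikes."""
    smoother = torch.ones(1, 1, kernel) / kernel            # moving-average filter
    delta = torch.zeros_like(ecg)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(ecg + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = delta + alpha * grad.sign()             # ascend the loss
            delta = F.conv1d(delta, smoother, padding=kernel // 2)  # smooth it
            delta = delta.clamp(-eps, eps)                  # keep the amplitude small
    return (ecg + delta).detach()
```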
This study suggests that it is important to check whether ECGs have been altered before using them in medical machine learning models.

Aminifar [2] studied white-box targeted evasion attacks in EEG-based epileptic seizure detection, through UAPs. He computed the UAPs by solving an optimization problem, and showed that they can fool a support vector machine classifier into misclassifying most seizure samples as non-seizure ones, with imperceptible amplitude.

Newaz et al. [61] investigated adversarial attacks on machine learning based smart healthcare systems, which use 10 vital signs, e.g., EEG, ECG, SpO2, respiration, blood pressure, blood glucose, blood hemoglobin, etc. They performed both targeted and non-targeted attacks, and both poisoning and evasion attacks. For evasion attacks, they also considered both white-box and black-box attacks. They showed that adversarial attacks can significantly degrade the performance of four different classifiers in a smart healthcare system in detecting diseases and normal activities, which may lead to erroneous treatment.

Deep learning has been extensively used in health informatics; however, it generally needs a large amount of training data for satisfactory performance. Transfer learning [83] can be used to alleviate this requirement, by making use of data or machine learning models from an auxiliary domain or task. Wang et al. [80] studied targeted backdoor attacks against transfer learning with pre-trained deep learning models on both images and time series (e.g., ECG). Three optimization strategies, i.e., ranking-based neuron selection, autoencoder-powered trigger generation and defense-aware retraining, were used to generate backdoors and retrain deep neural networks, to defeat pruning based, fine-tuning/retraining based and input pre-processing based defenses. They demonstrated the attack's effectiveness in brain MRI image classification and ECG heartbeat type classification.

Physiological signals, e.g., EEG, ECG and PPG, have recently been used in biometrics [76]. However, they are subject to presentation attacks in such applications. In a physiological signal based presentation attack, the attacker tries to spoof the biometric sensors with a fake piece of physiological signal [20], which would be authenticated as coming from a specific victim user.

Maiorana et al. [52] investigated the vulnerability of an EEG-based biometric system to hill-climbing attacks. They assumed that the attacker can access the matching scores of the biometric system, which can then be used to guide the generation of synthetic EEG templates until a successful authentication is achieved. This is essentially a black-box targeted evasion attack in the adversarial attack terminology: the synthetic EEG signal is the adversarial example, and the victim's identity is the target class. It is a black-box attack, because the attacker can only observe the output of the biometric system, but does not know anything else about it.

Eberz et al. [20] proposed an offline ECG biometrics presentation attack approach, illustrated in Fig. 10(a). The basic idea was to find a mapping function that transforms ECG trials recorded from the attacker so that they resemble the morphology of ECG trials from a specific victim. The transformed ECG trials can then be used to fool an ECG biometric system and obtain unauthorized access.
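A minimal sketch of the mapping-function idea is given below, assuming the biometric matcher operates on per-heartbeat feature vectors: a per-feature affine map is fitted so that the attacker's feature distribution matches the victim's. The feature representation and the simple mean/standard-deviation matching are assumptions for illustration; the actual mappings in [20], [42] are more elaborate.

```python
import numpy as np

def fit_affine_mapping(attacker_feats: np.ndarray, victim_feats: np.ndarray):
    """Fit per-feature a*x + b so the attacker's feature statistics (mean, std)
    match the victim's. Both inputs have shape (n_heartbeats, n_features)."""
    a = victim_feats.std(axis=0) / (attacker_feats.std(axis=0) + 1e-12)
    b = victim_feats.mean(axis=0) - a * attacker_feats.mean(axis=0)
    return a, b

def transform(attacker_feats: np.ndarray, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Transformed features are then presented (or resynthesized) to the matcher."""
    return a * attacker_feats + b
```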
Eberz et al. [20] showed that the attacker's ECG trials can be obtained from a device different from the one used to record the victim's ECG trials (i.e., a cross-device attack), and that there could be different approaches to present the transformed ECG trials to the biometric device under attack, the simplest being the playback of ECG trials encoded as .wav files using an off-the-shelf audio player. Unlike [52], the above approach is a gray-box targeted evasion attack in the adversarial attack terminology: the attacker's ECG signal can be viewed as the benign example, the transformed ECG signal is the adversarial example, and the victim's identity is the target class. The mapping function plays the role of the jamming module in Fig. 6. It is a gray-box attack, because the attacker needs to know the feature distributions of the victim's ECGs to design the mapping function.

Karimian et al. [42] proposed an online ECG biometrics presentation attack approach, shown in Fig. 10(b). Its procedure is very similar to that of the offline attack in Fig. 10(a), except that the online approach is simpler, because it requires as few as one victim ECG segment to compute the mapping function, and the mapping function is linear. Karimian [43] also proposed a similar presentation attack approach to attack PPG-based biometrics. Again, these approaches can be viewed as gray-box targeted evasion attacks.

Fig. 10. (a) Offline ECG biometrics presentation attack [20]; (b) online ECG biometrics presentation attack [44].

Although we have not found adversarial attack studies on affective computing and adaptive automation in physiological computing, this does not mean that adversarial attacks cannot be performed in such applications. Machine learning models in affective computing and adaptive automation are not fundamentally different from those in BCIs, so adversarial attacks in BCIs can easily be adapted to affective computing and adaptive automation. Particularly, Meng et al. [56] have shown that it is possible to attack the regression models in EEG-based driver fatigue estimation and EEG-based user reaction time estimation, whereas driver fatigue and user reaction time could be triggers in adaptive automation.

V. ADVERSARIAL DEFENSES

There are different adversarial defense strategies [7], [68]:

1) Data modification, which modifies the training set in the training stage or the input data in the test stage, through adversarial training [78], gradient hiding [79], transferability blocking [34], data compression [17], data randomization [85], etc.

2) Model modification, which modifies the target model directly to increase its robustness. This can be achieved through regularization [9], defensive distillation [64], feature squeezing [86], using a deep contractive network [29] or a mask layer [25], etc.

3) Auxiliary tools, which may be additional auxiliary machine learning models that robustify the primary model, e.g., adversarial detection models [67], defense generative adversarial nets (defense-GAN) [73], a high-level representation guided denoiser [49], etc.

As researchers have only recently started to investigate adversarial attacks in physiological computing, there are even fewer studies on defense strategies against them. A summary of them is shown in Table V.

Adversarial training, which trains a robust machine learning model on normal plus adversarial examples, may be the most popular data modification based adversarial defense approach; a minimal sketch is given below.
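The sketch shows one mini-batch of generic FGSM-based adversarial training with a hypothetical PyTorch `model` and optimizer; it illustrates the general defense, not the specific augmentation procedure of [35].

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, opt, x, y, eps=0.01):
    """One mini-batch of adversarial training: craft FGSM examples on the fly,
    then train on the benign and adversarial trials together."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()   # adversarial copies of the batch

    opt.zero_grad()                                      # discard the crafting gradients
    loss = 0.5 * (F.cross_entropy(model(x), y) +
                  F.cross_entropy(model(x_adv), y))
    loss.backward()
    opt.step()
    return loss.item()
```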
Hussein et al. [35] proposed an approach to augment deep learning models with adversarial training for robust prediction of epilepsy seizures. Though their goal was to overcome some challenges in EEG-based seizure classification, e.g., individual differences and the shortage of pre-ictal labeled data, their approach can also be used to defend against adversarial attacks. They first constructed a deep learning classifier from the limited amount of available labeled EEG data, and then performed white-box attacks on the classifier to obtain adversarial examples, which were next combined with the original labeled data to retrain the deep learning classifier. Experiments on two public seizure datasets demonstrated that adversarial training increased both the classification accuracy and the classifier robustness.

Regularization based model modification to defend against adversarial attacks usually also considers the model security (robustness) in the optimization objective function. Sadeghi et al. [72] proposed an analytical framework for tuning the classifier parameters, to simultaneously ensure its accuracy and security. The optimal classifier parameters were determined by solving an optimization problem, which takes into account both the test accuracy and the robustness against adversarial attacks. For k-nearest neighbor (kNN) classifiers, the two parameters to be optimized are the number of neighbors and the distance metric type. Experiments on EEG-based eye state (open or closed) recognition verified that it is possible to achieve both high classification accuracy and high robustness against black-box targeted evasion attacks.

Adversarial detection uses a separate module to detect whether there is an adversarial attack, and takes actions accordingly, the simplest being to directly discard the detected adversarial examples. Cai and Venkatasubramanian [11] proposed an approach to detect signal injection based morphological alterations (evasion attacks) of ECGs. Because multiple physiological signals based on the same underlying physiological process (e.g., the cardiac process) are inherently related to each other, any adversarial alteration of one of the signals will lead to inconsistency in the other signal(s) in the group. Since both ECG and arterial blood pressure measurements are representations of the cardiac process, the latter can be used to detect morphological alterations in ECGs. They demonstrated over 90% accuracy in detecting even subtle ECG morphological alterations for both healthy subjects and patients. A similar idea [10] was also used to detect temporal alterations of ECGs, by making use of their correlations with arterial blood pressure and respiration measurements.

Karimian et al. [42] proposed two strategies to protect ECG biometric authentication systems from spoofing, by evaluating whether the ECG signal characteristics match the corresponding heart rate variability or PPG features (pulse transit time and pulse arrival time). The idea is actually similar to Cai and Venkatasubramanian's [11]: if there is a mismatch, the system considers the input to be fake, and rejects it.
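The cross-signal consistency idea behind these detectors can be sketched as a simple heart rate comparison between an ECG and a simultaneously recorded PPG (or arterial blood pressure) segment. The peak-detection settings and the tolerance threshold are illustrative assumptions, not the detectors proposed in [10], [11], [42].

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(signal: np.ndarray, fs: float, min_rr_sec: float = 0.4) -> float:
    """Estimate heart rate from any quasi-periodic cardiac signal via peak detection."""
    peaks, _ = find_peaks(signal, distance=int(min_rr_sec * fs),
                          height=np.percentile(signal, 90))
    if len(peaks) < 2:
        return float("nan")
    rr = np.diff(peaks) / fs            # inter-beat intervals in seconds
    return 60.0 / rr.mean()

def looks_tampered(ecg, ppg, fs_ecg, fs_ppg, tol_bpm: float = 8.0) -> bool:
    """Flag the ECG as suspicious if its heart rate disagrees with the PPG's,
    since both signals reflect the same underlying cardiac process."""
    return abs(heart_rate_bpm(ecg, fs_ecg) - heart_rate_bpm(ppg, fs_ppg)) > tol_bpm
```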
VI. CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS

Physiological computing includes, or significantly overlaps with, BCIs, affective computing, adaptive automation, health informatics, and physiological signal based biometrics. It increases the communication bandwidth from the user to the computer, but is also subject to adversarial attacks. This paper has given a comprehensive review of adversarial attacks and the corresponding defense strategies in physiological computing, and hopefully it will bring more attention to the security of physiological computing systems.

Promising future research directions in this area include:

1) Transfer learning has been extensively used in physiological computing [83], to alleviate the training data shortage problem by leveraging data from other subjects [33] or tasks [82], or to warm-start the training of a (deep) learning algorithm by borrowing parameters or knowledge from an existing algorithm [80], as shown in Fig. 11. However, transfer learning is particularly susceptible to poisoning attacks [55], [80]. It is very important to develop strategies to check the integrity of data and models before using them in transfer learning.

Fig. 11. A transfer learning pipeline in motor imagery based BCIs [83].

2) Adversarial attacks on other components in the machine learning pipeline (an example on BCIs is shown in Fig. 12), which includes signal processing, feature engineering, and classification/regression, and the corresponding defense strategies. So far, all adversarial attack approaches in physiological computing have considered the classification or regression model only, but not other components, e.g., signal processing and feature engineering. It has been shown that feature selection is also subject to data poisoning attacks [84], and that adversarial feature selection can be used to defend against evasion attacks [88].

3) Additional types of attacks in physiological computing [7], [12], [19], [66], [71], as shown in Fig. 13, and the corresponding defense strategies. For example, Paoletti et al. [62] performed parameter tampering attacks on Boston Scientific implantable cardioverter defibrillators, which use a discrimination tree to detect tachycardia episodes and then initiate the appropriate therapy. They slightly modified the parameters of the discrimination tree to achieve both attack effectiveness and stealthiness. These attacks are also very dangerous in physiological computing, and hence deserve adequate attention.

4) Adversarial attacks on affective computing and adaptive automation applications, which have not been studied yet, but are also possible and dangerous. Many existing attack approaches in BCIs, health informatics and biometrics can be extended to them, either directly or with slight modifications. However, there could also be unique attack approaches specific to these areas. For example, emotions are frequently represented as continuous numbers in the 3D space of valence, arousal and dominance in affective computing [53], and hence adversarial attacks on regression models in affective computing also deserve attention.

Finally, we need to emphasize that the goal of adversarial attack research in physiological computing should be discovering its vulnerabilities, and then finding solutions to make it more secure, instead of merely causing damage to it.
REFERENCES

Heart biometrics: Theory, methods and applications
Universal adversarial perturbations in epileptic seizure detection
Adapting software with affective computing: a systematic review
Adaptive automation triggered by EEG-based mental workload index: a passive brain-computer interface application in realistic air traffic control environment
SeizureNet: Multi-spectral deep feature learning for seizure type classification
Cyberattacks on miniature brain implants to disrupt spontaneous neural signaling
Security in brain-computer interfaces: State-of-the-art, opportunities, and future challenges
Biometric recognition using multimodal physiological signals
Support vector machines under adversarial label noise
Detecting malicious temporal alterations of ECG signals in body sensor networks
Detecting signal injection attack-based morphological alterations of ECG measurements
Security and privacy issues in implantable medical devices: A comprehensive survey
Towards evaluating the robustness of neural networks
Evolutionary multi-tasking single-objective optimization based on cooperative co-evolutionary memetic algorithm
Affective computing vs. affective placebo: Study of a biofeedback-controlled game for relaxation training
Guide to Health Informatics
Keeping the bad guys out: Protecting and vaccinating deep learning with JPEG compression
Eye movement as indicators of mental workload to trigger adaptive automation
Neurosecurity: security and privacy for neural devices
Broken hearted: How to attack ECG biometrics
Robust physical-world attacks on deep learning visual classification
Fundamentals of physiological computing
Adversarial attacks on medical machine learning
Adversarial attacks against medical deep learning systems
DeepCloak: Masking deep neural network models for robustness against adversarial samples
Responsive neurostimulation: review of clinical trials and insights into focal epilepsy
Explaining and harnessing adversarial examples
Towards understanding ECG rhythm classification using convolutional neural networks and attention mappings (Machine Learning for Healthcare Conf.)
Towards deep neural network architectures robust to adversarial examples
Exploring EEG-based biometrics for user identification and authentication
Expanding brain-computer interfaces for controlling epilepsy networks: novel thalamic responsive neurostimulation in refractory epilepsy
Deep learning models for electrocardiograms are susceptible to adversarial attack
Transfer learning for brain-computer interfaces: A Euclidean space data alignment approach
Blocking transferability of adversarial examples in black-box learning systems
Augmenting DL with adversarial training for robust prediction of epilepsy seizures
Deep ECGNet: An optimal deep learning framework for monitoring mental stress using ultra short-term ECG signals
Physiological computing
The ethics of neurotechnology
Active learning for black-box adversarial attacks in EEG-based brain-computer interfaces
Secure, privacy-preserving and federated machine learning in medical imaging
Adversarial attacks on time series
ECG biometric: Spoofing and countermeasures
How to attack PPG biometric using adversarial machine learning
On the vulnerability of ECG verification to online presentation attacks
Thinker invariance: enabling deep neural networks for BCI across more people
Adversarial examples in the physical world
Brain-computer interface technologies in the coming decades
EEGNet: a compact convolutional neural network for EEG-based brain-computer interfaces
Defense against adversarial attacks using high-level representation guided denoiser
Universal adversarial perturbations for CNN classifiers in EEG-based BCIs
Understanding adversarial attacks on deep learning based medical image analysis systems
On the vulnerability of an EEG-based biometric system to hill-climbing attacks: algorithms' comparison and possible countermeasures
Basic Dimensions for a General Psychological Theory: Implications for Personality, Social, Environmental, and Developmental Studies
Using machine teaching to identify optimal training-set attacks on machine learners
EEG-based brain-computer interfaces are vulnerable to backdoor attacks
White-box target attack for EEG-based BCI regression problems
Adversarial learning targeting deep neural network classification: A comprehensive review of defenses against attacks
The Society of Mind
Pre-symptomatic detection of COVID-19 from smartwatch data
DeepFool: a simple and accurate method to fool deep neural networks
Adversarial attacks to machine learning-based smart healthcare systems
Synthesizing stealthy reprogramming attacks on cardiac devices
Practical black-box attacks against machine learning
Distillation as a defense to adversarial perturbations against deep neural networks
Brainjacking: implant security issues in invasive neuromodulation
Secure and robust machine learning for healthcare: A survey
Review of artificial intelligence adversarial attack and defense technologies
Adversarial examples - security threats to COVID-19 deep learning systems in medical IoT devices
Deep learning in physiological signal data: A survey
SoK: Security and privacy in implantable medical devices and body area networks
An analytical framework for security-tuning of artificial intelligence applications under attack
Defense-GAN: Protecting classifiers against adversarial attacks using generative models
Deep learning with convolutional neural networks for EEG decoding and visualization
A P300-based brain-computer interface: initial tests by ALS patients
Bioelectrical signals as emerging biometrics: Issues and challenges
Privacy and security issues in brain computer interfaces
Intriguing properties of neural networks
Ensemble adversarial training: Attacks and defenses
Backdoor attacks against transfer learning with pre-trained deep learning models
Driver drowsiness estimation from EEG signals using online weighted adaptation regularization for regression (OwARR)
Switching EEG headsets made easy: Reducing offline calibration effort using active weighted adaptation regularization
Transfer learning for EEG-based brain-computer interfaces: A review of progress made since 2016
Is feature selection secure against training data poisoning? (Int'l Conf. on Machine Learning)
Adversarial examples for semantic segmentation and object detection
Feature squeezing: Detecting adversarial examples in deep neural networks
Evaluation of PPG biometrics for authentication in different states
Adversarial feature selection against evasion attacks
On the vulnerability of CNN classifiers in EEG-based BCIs
Tiny noise, big mistakes: Adversarial perturbations induce errors in brain-computer interface spellers