authors: Choi, Tae-jun; Lee, Honggu
title: Using physiological signals to predict temporal defense responses: a multi-modality analysis
date: 2020-12-20
journal: bioRxiv
DOI: 10.1101/2020.12.17.423337

Defense responses are a highly conserved behavioral response set across species. Defense responses motivate organisms to detect and react to threats and potential danger as a precursor to anxiety. Accurate measurement of temporal defense responses is important for understanding clinical anxiety and mood disorders, such as post-traumatic stress disorder, obsessive compulsive disorder, and generalized anxiety disorder. Within these conditions, anxiety is defined as a state of prolonged defense response elicitation to a threat that is ambiguous or unspecific. In this study, we aimed to develop a data-driven approach to capture temporal defense response elicitation through a multi-modality data analysis of physiological signals, including electroencephalogram (EEG), electrocardiogram (ECG), and eye-tracking information. A fear conditioning paradigm was adopted to develop a defense response classification model. From classification models based on 42 feature sets, a higher order crossing feature set-based model was chosen for further analysis, with a cross-validation loss of 0.0462 (SEM: 0.0077). To validate our model, we compared predicted defense response occurrence ratios from a comprehensive defense-response-generating situation (watching movie clips) against fear awareness and threat existence predictability, both of which have been reported to correlate with defense response elicitation in previous studies. We observed that defense response occurrence ratios were correlated with threat existence predictability, but not with fear awareness. These results are similar to those of previous studies using comprehensive situations. Our study provides insight into the measurement of temporal defense responses via a novel approach, which can improve understanding of anxiety and related clinical disorders for neurobiological and clinical researchers.

In animal studies, defense responses are measured by observing behaviors such as freezing and flight [9, 10]. In contrast, human studies measure defense responses explicitly through self-report or implicitly by measuring the fear-potentiated startle response (FPS) or physiological signals such as heartbeat, skin conductance, and pupil dilation [11-15]. Self-report is most commonly used because of its low cost and ease of implementation. However, the decision-making process involved in answering a question can be problematic, as it may yield results that differ from what the participant actually experiences during the experiment. Furthermore, self-report is subject to differences in individual perception and to intentional distortion of responses.

For these reasons, many conditioning studies measure defense responses with FPS. FPS is a skeletal musculature contraction reflex that is elicited during defense responses produced by unexpected or threatening sensory events. It is mediated by the brainstem and amygdala [16-21] and is generally evoked by a loud noise. FPS provides an implicit measure of fear, represented by the magnitude of startle reactivity at a certain point in time, measured with an electrooculogram.
Therefore, it is difficult to measure the frequency of defense responses with FPS in an environment with unpredictable threats. Moreover, a previous study reported that using FPS affects the immersiveness of a study and may itself influence fear acquisition [22]. Physiological signals, such as heartbeat, skin conductance, and pupil dilation, can also be used as measures of defense response. However, these are nonspecific measures of arousal that cannot easily be interpreted as representing emotional reactions. Moreover, these signals are not suited to measuring continuous elicitation of defense responses because of their slow reaction and recovery speeds in response to stimuli.

In this study, the aim was to develop a new way to capture defense responses with a multi-modality data analysis that uses physiological signals from electroencephalogram (EEG), electrocardiogram (ECG), and eye-tracking data. Previous studies have reported the characteristics of EEG during defense responses. In an animal study, a 4 Hz oscillation was observed in the prefrontal cortex during freezing [10]. Furthermore, in humans, an oscillatory power change was observed at different band frequencies in the prefrontal, frontal, and midline cortices [23]. In particular, EEG has the advantage of capturing neural activity underlying defense responses because of its excellent temporal resolution. Therefore, EEG was measured over the prefrontal cortex, which has been reported to play a major role in both fear memory formation and fear expression [13, 24-29].

First, to develop a model to predict defense responses, a model was constructed based on EEG, ECG, and eye-tracking data that distinguishes the following two states during fear conditioning: (i) when a defense response occurs, and (ii) when a safety signal is provided. Because neural activity is high-dimensional and occurs in multiple locations, it is difficult to directly interpret information related to defense responses. This issue can be addressed with machine learning, which was used to analyze hidden relationships between neural activity and human responses. To predict the frequency of defense responses, a learning model was adopted to classify mPFC (medial prefrontal cortex) neural activity and physiological response data for conditioned responses (CRs) versus controlled responses in a fear conditioning experiment. Second, to validate the model, predicted defense response frequencies were obtained while participants watched emotional movie clips, after each of which they answered two self-report questions concerning the fear they consciously felt and their prior knowledge of the watched movie clip's contents. The latter represents the uncertainty of the existence of threat in the movie clip, which would cause a defense response [31].

Predicted defense response frequencies were compared with subjective fear ratings and the predictability/unpredictability of each clip. Subsequently, the results were compared with those of previous studies to assess whether our model makes valid predictions. It was confirmed that our model could be useful for understanding mood and anxiety disorders in clinical studies. Notably, the entire experiment was performed in a head-mounted virtual reality environment, which has recently been used in the neuroscience field to provide a high level of immersion for participants [32-36].

The primary aim of the study was to develop an effective model of temporal defense response elicitation that uses data from physiological signals (EEG, ECG, and eye tracking). This study was approved by the institutional review board of Looxid Labs.
Participants received information about the experiment and provided informed consent prior to participation. Twelve adults were recruited (female, n = 3; male, n = 9; mean age (SD), 30.5 (3.77) years). All participants were right-handed, had no known neuropathology, and were familiar with virtual reality environments.

Before the experiment started, participants were given explicit instructions about the purpose and overall procedures of the experiment, as suggested by guidelines for fear conditioning studies [8]. Participants sat on a chair in a relaxed position, with both hands on the armrests. The temperature of the experiment room was set at 20°C with an air conditioner to provide an immersive environment and to allow the collection of high-quality physiological data.

Physical features such as luminance and spatial frequency of emotion-related image stimuli are known to affect early brain activity [37, 38]. To minimize the effect of physical factors on brain activity, the mean frame pixel intensity of each task scene (mean (SD), 34.67 (3.83)) and the sizes of the stimuli were set at similar levels.

The entire experiment consisted of three main parts. The first and second parts comprised stimulus habituation and conditioned fear acquisition to elicit defense responses represented by CRs. The final part comprised watching movie clips designed to elicit specific emotions (Fig 1). Tasks were conducted in a virtual reality environment within a mobile-based head-mounted headset (LooxidVR, Looxid Labs, Inc., CA, United States; Google Pixel phone, Google, Inc., CA, United States) with noise-cancelling headphones (Bose Q-20, Germany) and were performed sequentially without delay. The experimental setup for the fear conditioning task was constructed with reference to the method of Lau et al. [39].

In the habituation task (Fig 2B), five conditioned stimuli (CS+), five control stimuli (CS-), and 10 auditory startle probes were presented in pseudorandom order without the unconditioned stimulus (US). The CS+ and CS- were both neutral human faces (NimStim [40]; 16 F, 32 M) and were each presented for 6 sec. To reduce the cognitive load, participants were able to easily discriminate between the CS+ (a neutral female face) and the CS- (a neutral male face). Inter-trial intervals lasted 10 to 12 sec between the offset of the prior CS and the onset of the next CS. To elicit a startle reflex, an auditory startle probe was presented in the form of a 50 ms, 105 dB burst of white noise (Fig 2A).

In the fear acquisition task (Fig 2C), the US was presented immediately after CS+ offset with a 100% reinforcement rate but was not presented in the CS- condition. The US was a fearful female face with a loud female scream and was presented for 3 sec. The female face in the US was of the same actress as in the CS+. The US scream was presented at approximately 80 dB for 1 sec at the onset of the US. During the acquisition training task, 10 CS+ with the US and 10 CS- were presented in pseudorandom order, with a single condition presented for no more than two consecutive trials. FPS measurements and self-report questions were used to check whether fear conditioning was successful. Auditory startle probes were provided 4.5 sec after stimulus onset for the middle (5th) and last (10th) stimuli of the CS+ and CS- acquisition trials.
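As an illustration of the acquisition schedule described above, the sketch below generates a pseudorandom CS+/CS- sequence under the stated constraints (100% reinforcement after CS+ offset, 10-12 s inter-trial intervals, startle probes on the 5th and 10th trial of each condition, no more than two consecutive trials of one condition). It is a hypothetical helper, not the authors' stimulus-control code.

```python
# Hypothetical helper illustrating the acquisition-phase constraints described
# above; the authors' actual stimulus-control software is not available.
import random

def make_acquisition_schedule(n_per_condition=10, seed=0):
    rng = random.Random(seed)
    while True:  # rejection sampling until no condition appears 3 times in a row
        trials = ["CS+"] * n_per_condition + ["CS-"] * n_per_condition
        rng.shuffle(trials)
        if all(trials[i] != trials[i - 1] or trials[i] != trials[i - 2]
               for i in range(2, len(trials))):
            break
    schedule, counts = [], {"CS+": 0, "CS-": 0}
    for cs in trials:
        counts[cs] += 1
        schedule.append({
            "cs": cs,
            "us": cs == "CS+",                       # 100% reinforcement after CS+ offset
            "iti_sec": rng.uniform(10, 12),          # inter-trial interval
            "startle_probe": counts[cs] in (5, 10),  # probe 4.5 s after onset, middle and last trial
        })
    return schedule

for trial in make_acquisition_schedule()[:4]:
    print(trial)
```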
Although FPS has been used to evaluate fear conditioning contingencies, this can be made difficult by blinking or frowning, which can affect electrooculography (EOG) signal quality. More frequent insertion of startle probes might be an alternative solution; however, [22] suggested that frequent startle probe presentation could affect the acquisition of conditioned fear. In addition, in our experiment, the FPS was not measured in some trials, usually because participants were blinking just before the stimuli were presented. These data were excluded from the FPS analysis according to published recommendations [41]. To overcome this limitation, after the acquisition training task, an online self-report of fear/anxiety for the CS+/CS-/US and of the CS+/US contingency was performed. Participants were asked to rate on a ten-point Likert scale how fearful or anxious they felt in response to each CS+ or CS- neutral face image and to the US fearful female face image. Additionally, participants were asked, on a scale of 1 to 5, whether they knew when they were going to receive the US.

After the fear acquisition training task, eight movie clips inducing either fearful or tender emotions (FilmStim [30]) were presented to participants in a pseudorandom order (Table 1). Four of the eight clips elicited fear, and the other four elicited tenderness. Each movie clip was presented for 2-5 min. At the end of each clip, participants responded to two questions: 1) how much fear/anxiety did you feel in response to the movie clip; and 2) choose one of the following three statements: "When the movie clip was played, I recognized the genre and the detailed upcoming story because I have seen it before", "When the movie clip was played, I recognized the genre and the upcoming story roughly because I have seen it before but could not remember the detailed story", or "When the movie clip was played, I did not know the genre and the upcoming story at all because I had not seen it before." Participants were asked to rest sufficiently between clips.

Table 1. Movie clips presented after the fear acquisition task.
Index  Movie title                 Duration  Pre-labeled genre
1      Ghost                       121 sec   Tenderness
2      The Shining                 265 sec   Fearful
3      The Exorcist                105 sec   Fearful
4      Benny and Joon              127 sec   Tenderness
5      Chucky II                   68 sec    Fearful
6      When a Man Loves a Woman    101 sec   Tenderness
7      Forrest Gump                125 sec   Tenderness
8      The ...

For each of the 42 feature sets, a binary classification model was trained to distinguish the defense response state from the safety signal state. The L2 regularization parameter (lambda) and kernel size (sigma) were estimated with the Bayesian optimization method [45]. Ten-fold cross-validation was applied, and the average fold loss was used to assess model fitness.
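To make the model-fitting step concrete, here is a minimal sketch (not the authors' code) of a Gaussian-kernel classifier evaluated with 10-fold cross-validation. The mapping of lambda and sigma onto scikit-learn's C and gamma, and the stand-in data, are assumptions, and the simple parameter loop stands in for the Bayesian optimization used in the paper [45].

```python
# Minimal sketch (not the authors' implementation): Gaussian-kernel classifier
# with 10-fold cross-validation. X would hold one feature vector per analysis
# window (e.g., HOC features); y = 1 for defense response windows, 0 for
# safety-signal windows. Here X and y are random stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score

def cv_loss(X, y, lam, sigma, n_folds=10, seed=0):
    """Average fold loss (1 - accuracy); lam ~ L2 regularization (C = 1/lam),
    sigma ~ kernel width (gamma = 1 / (2 * sigma**2))."""
    clf = make_pipeline(
        StandardScaler(),
        SVC(C=1.0 / lam, gamma=1.0 / (2.0 * sigma ** 2), kernel="rbf"),
    )
    cv = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    return 1.0 - acc.mean(), acc.std(ddof=1) / np.sqrt(n_folds)  # loss, SEM

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = rng.integers(0, 2, size=200)

# Coarse log-spaced search over (lambda, sigma); the paper instead used
# Bayesian optimization [45] to estimate these hyperparameters.
best = min((cv_loss(X, y, lam, sigma), lam, sigma)
           for lam in np.logspace(-3, 2, 6) for sigma in np.logspace(-1, 2, 4))
print("best (loss, SEM):", best[0], "lambda:", best[1], "sigma:", best[2])
```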
Before developing the classification model, it was validated whether the sample data properly represented the labeled information. Checking the contingency between the CS+ and US conditions for each participant during acquisition was an important part of data validation. Therefore, FPS and self-report measures were used to validate the CS-US contingency and subjective fear ratings. Specifically, EOG-based FPS following the CS+ and CS- was measured for the first and second halves of acquisition. For all participants, the mean FPS amplitude differed significantly between the CS+ and CS- stimuli (Fig 3A). Additionally, the FPS values for the first and second halves of the CS+ and CS- conditions were compared. The average amplitude for the CS+ in the second half was lower than that during the first half, but still showed a meaningful difference from the CS- values in both halves (Supplementary Fig 1).

Among the 42 feature set-based models, the higher order crossings (HOC) feature set-based model was chosen for further analysis (cross-validation loss 0.0462, SEM 0.0077) based on comparison with other high-ranked feature models (Fig 4). HOC measures the zero-crossing counts of the filtered signal, which represent the oscillation pattern of the signal in the time domain when a specifically designed filter is applied. In the current study, a simple HOC system, which sequentially applies the backward difference operator as a high-pass filter, was adopted [46, 47]. Physiological signals such as heart rate variability (HRV) and pupil diameter change are known to be reliable features for fear conditioning. However, in our study, most physiological signal-based feature sets showed low model fitness. It is possible that this discrepancy is because of the short sample length.

To validate the predicted results, at the end of each movie clip participants were asked to answer the two self-report questions described above, which previous studies have related to defense response elicitation. For the first question, which measured subjective fear, the average rating for all movie scenes was 4.0417 (SEM, 0.8992). The average ratings for horror and non-horror movie clips (as previously categorized by [30]) were also compared.

Based on the two self-report questions, participants' experiences of the movie content could be classified into four emotional categories: 1) conscious fear with a predictable threat context; 2) conscious fear with an unpredictable threat context; 3) no conscious fear with a predictably safe context; and 4) no conscious fear with an unpredictably safe context.

Before comparing the predicted defense response ratios between the emotional categories, movie contents were compared based on three broader criteria. First, the emotional content of the movie clips was classified as non-horror versus horror. Second, the movie clips were divided into fearful (subjective fear rating ≥ 5) versus non-fearful (rating < 5) based on the subjective fear ratings for each clip as rated by the participants. Third, the movie clips were divided into those that participants had seen before and those that they had not, regardless of the subjective fear ratings or pre-identified emotional genre. These criteria represent whether participants felt conscious fear (criteria 1, 2) and the predictability of threat and safety in a context that causes defense responses and anxiety (criterion 3). Subsequently, we compared the predicted defense response occurrence ratio for the movie contents in the groups for each criterion.

For criteria 1 and 2, no statistically significant differences were observed (first criterion, F(1,62) = 0.2079, p = 0.6499; second criterion, F(1,62) = 1.8269, p = 0.1814). However, for criterion 3, there was a statistically significant difference in the average predicted defense response occurrence ratio between participants who had seen the movies before and those who had not (F(1,62) = 15.0593, p < 0.0005) (Fig 5A, 5B, and 5C).
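As a concrete illustration of this comparison, the sketch below assumes that a clip's predicted defense response occurrence ratio is the fraction of analysis windows the classifier labels as defense response (this definition is our assumption), and applies a one-way ANOVA of the kind reported above to two illustrative groups; the numbers are made up and are not study data.

```python
# Illustrative only: per-clip occurrence ratios and a one-way ANOVA between
# "seen before" and "not seen before" clips; all values are invented.
import numpy as np
from scipy.stats import f_oneway

def occurrence_ratio(window_labels):
    """Fraction of analysis windows predicted as defense response (label 1)."""
    return float(np.mean(window_labels))

ratios_seen = [occurrence_ratio(w) for w in ([0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0])]
ratios_unseen = [occurrence_ratio(w) for w in ([1, 1, 0, 1], [1, 0, 1, 1], [0, 1, 1, 1])]

f_stat, p_val = f_oneway(ratios_seen, ratios_unseen)  # two groups -> F(1, N - 2)
print(f"F = {f_stat:.4f}, p = {p_val:.4f}")
```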
Next, we compared the four emotional categories by combining the two self-report questions (Fig 5D). First, when participants felt no conscious fear, there was a significant difference depending on whether they were able to predict the context of the movie clips (F(1,34) = 8.3375, p < 0.01). When participants felt conscious fear (rating ≥ 5), a difference was also observed (F(1,26) = 5.1080, p < 0.05). However, when participants were able to predict the context of the movie clip, no significant difference in defense response ratio was observed between those who felt conscious fear and those who did not (F(1,31) = 0.7169, p = 0.4036). The same trend was observed when participants were not able to predict the context of the movie clip (F(1,29) = 0.1783, p = 0.6758).

Understanding the relationship between defense responses and anxiety [7, 48-52] is essential for studying clinical anxiety disorders such as generalized anxiety disorder (GAD). As such, a method to continuously capture defense responses in the temporal domain is needed. Therefore, in this study, we adopted a model-based approach using fear conditioning to overcome such limitations.

To predict defense responses, we trained a binary classifier on mPFC neural activity and physiological responses for CRs and controlled responses from fear conditioning. Various features that have previously been reported to be related to emotions were extracted from mPFC EEG, ECG, and pupillary response data. As a next step, we validated our model by comparing the predicted defense response occurrence ratios while participants watched eight emotional movie clips that induce fearful and non-fearful emotional states. For each movie clip, we measured two aspects of the emotional situation: fear awareness and threat existence predictability. Because these emotional situations are known to correlate with defense responses, we compared the predicted defense response ratio between the different situations.

In this study, there was no difference in predicted defense response ratios between participants who felt conscious fear and those who did not. In contrast, we observed that when a participant had not seen the movie clip before (representing unpredictability of threat existence), the predicted defense response ratio was much higher than when the participant had seen it before, regardless of genre or awareness of fear. A significant difference in predicted defense response ratios between situations in which threat is predictable and those in which it is unpredictable was observed both when participants felt fear and when they did not. This suggests that the mPFC neural activity patterns underlying defense responses against upcoming threats are related to threat existence predictability but not to fear awareness.

Several previous studies have suggested that unpredictable threat maintains defense responses and thus increases anxiety [51, 53]. [53] reported that contextual fear induced by a US-unpaired CS was associated with a high startle amplitude measured by the FPS not only after CS onset, but also during the inter-trial interval, compared with that measured after a neutral, US-paired CS. Moreover, [51] suggested that unpredictability could lead to a sustained level of anxiety for sufficiently aversive stimuli. Our result is consistent with previous studies showing that defense responses occur more in situations with unpredictable threat than in those with predictable threat.

The existence of an unpredictable future threat arouses anxiety and makes individuals defensive against future situations [31].
In our experiment, without prior experience, participants were unable to know whether a movie clip was part of a love story or a horror movie. As a result, they remained defensive against any auditory, visual, or contextual indicators of potential threat that they had learned to be precursors of danger from experience or evolution [1]. Additionally, they paid attention to the consequences of each indicator to determine whether they were in a safe situation or not. Finally, when the movie ended, they were able to determine whether it had contained an actual threat or not, based on the collected information. Conscious fear would only occur with the actual presence of a threat or a memory of the consequences of specific indicators that they had learned before. For example, when participants watched non-horror movies without prior information, they unconsciously focused on possible indicators, and defense responses were elicited. However, without the actual appearance of a scary scene or a previously learned harmful indicator, no conscious fear would arise. Our study thus also shows a discrepancy between conscious fear awareness and defense response occurrence.

LeDoux [4, 54] proposed a model to explain this discrepancy, claiming that fear can be described by a two-system model that comprises a conscious cognitive circuit and an unconscious defense survival circuit. Cognitive circuits mainly control conscious fearful feelings, whereas the defense survival circuit focuses on controlling unconscious defense responses.

In summary, we report a novel data-driven approach using fear conditioning to predict the temporal occurrence of defense responses. To validate our model, we compared defense response ratios in situations with differing fear awareness and threat existence predictability, which indicated that our model produces results similar to those of previous studies. Moreover, defense response ratios predicted by our model are consistent with previous studies using other methods such as FPS and self-report. Our study provides insight into measuring temporal defense responses in a comprehensive situation with threat and fear. This can help us to understand anxiety and related clinical disorders. To further improve the model, model stabilization could be conducted with a fear conditioning procedure that enables the collection of more trial sample data.

EEG features
The methods used to extract features from the physiological information have been clearly described previously [46, 55-62].

A statistical approach for the analysis of physiological signals was first proposed by Picard et al. [56], and this concept has been adopted for many analyses of EEG recordings. In this experiment, the power, mean, standard deviation, first difference, second difference, and normalized first and second differences of the EEG signal were measured and grouped as the 'Stat' feature set.

Hjorth
Hjorth developed three time-domain EEG features that describe brain activity [57]. In this experiment, the three parameters (activity, mobility, and complexity) were measured and grouped as the 'Hjorth' feature set.

Non-stationary index (NSI)
The NSI represents the local average changes over time, independent of the magnitude fluctuation. Kroupi et al. [63] adapted the NSI to analyze emotional states using EEG. In this experiment, normalized NSIs were measured and grouped as the 'NSI' feature set.
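A minimal sketch of how the 'Stat', 'Hjorth', and 'NSI' feature sets could be computed for a single EEG window; the normalization applied to the differences and the NSI segment count are our assumptions, since those parameter choices are not given here.

```python
# Illustrative implementations of the 'Stat', 'Hjorth', and 'NSI' feature sets
# for one EEG window x (1-D array); parameter choices here are assumptions.
import numpy as np

def stat_features(x):
    d1, d2 = np.diff(x), np.diff(x, n=2)
    return {
        "power": np.mean(x ** 2),
        "mean": np.mean(x),
        "std": np.std(x),
        "mean_abs_d1": np.mean(np.abs(d1)),
        "mean_abs_d2": np.mean(np.abs(d2)),
        # normalized differences: divided by the window's standard deviation
        "norm_d1": np.mean(np.abs(d1)) / np.std(x),
        "norm_d2": np.mean(np.abs(d2)) / np.std(x),
    }

def hjorth_features(x):
    d1, d2 = np.diff(x), np.diff(x, n=2)
    activity = np.var(x)
    mobility = np.sqrt(np.var(d1) / activity)
    complexity = np.sqrt(np.var(d2) / np.var(d1)) / mobility
    return {"activity": activity, "mobility": mobility, "complexity": complexity}

def nsi(x, n_segments=32):
    # Local averages of short segments, computed on the normalized signal so the
    # index reflects drift of the local mean rather than overall magnitude.
    x = (x - np.mean(x)) / np.std(x)
    local_means = np.array([seg.mean() for seg in np.array_split(x, n_segments)])
    return np.std(local_means)

x = np.random.default_rng(0).normal(size=512)
print(stat_features(x), hjorth_features(x), nsi(x), sep="\n")
```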
Fractal dimension (FD)
The fractal dimension provides a statistical index of complexity that can be estimated through several methods. This experiment used Higuchi's method [59], which is widely used and reported to provide more accurate results than other methods; these data were grouped as the 'FD' feature set.

Higher order crossings (HOC)
Petrantonakis and Hadjileontiadis [46] proposed an HOC-based emotion recognition system using EEG; the HOC concept was originally introduced by Kedem. HOC measures the zero-crossing count of each filtered signal. In this experiment, simple HOC, which uses the difference operator as a high-pass filter, was computed for k = 1 to 20 and grouped as the 'HOC' feature set.

Hilbert-Huang spectrum (HHS)
Amplitudes of the delta, theta, alpha, beta, and gamma frequency bands were calculated and grouped as the 'HHS' feature set. In addition, the root mean square and maximum instantaneous amplitude, and the mean and weighted mean instantaneous frequency, of each intrinsic mode function (IMF), levels 1 to 3, were measured and grouped as the 'HHS-IMF-1', 'HHS-IMF-2', and 'HHS-IMF-3' feature sets according to [64].

ECG feature
ECG signals were recorded from the wrist opposite to the subject's dominant hand.

Heart rate variability (HRV)
HRV was calculated as the average time interval between R peaks of the QRS complex and represented as the 'HR' feature set.
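To make the HOC and HRV definitions above concrete, here is a minimal sketch of the simple HOC sequence (zero-crossing counts after repeated application of the difference operator) and of the mean R-R interval used as the 'HR' feature. The indexing convention and the assumption that R-peak times have already been detected are ours, not the paper's.

```python
# Illustrative HOC and HRV feature computation; not the authors' implementation.
import numpy as np

def hoc_features(x, k_max=20):
    """Simple higher order crossings: zero-crossing counts of the mean-removed
    signal after applying the backward difference operator 0..k_max-1 times."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    counts = []
    for _ in range(k_max):
        signs = np.sign(x)
        counts.append(int(np.sum(signs[1:] * signs[:-1] < 0)))  # sign changes
        x = np.diff(x)  # backward difference acts as a high-pass filter
    return counts  # order of filtering (0 .. k_max-1) is an assumed convention

def mean_rr_interval(r_peak_times_sec):
    """'HR' feature as described above: average interval between R peaks."""
    return float(np.mean(np.diff(r_peak_times_sec)))

eeg = np.random.default_rng(1).normal(size=512)
print(hoc_features(eeg)[:5])
print(mean_rr_interval([0.0, 0.82, 1.65, 2.43, 3.27]))  # seconds
```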
References
Defensive behaviors, fear, and anxiety. Handbook of Behavioral Neuroscience.
Defense system psychopharmacology: an ethological approach to the pharmacology of fear and anxiety.
Phasic and sustained fear are pharmacologically dissociable in rats.
Coming to terms with fear.
A contemporary learning theory perspective on the etiology of anxiety disorders: it's not what you thought it was.
Uncontrollability and unpredictability in post-traumatic stress disorder: an animal model.
Fear and anxiety: animal models and human cognitive psychophysiology.
Don't fear 'fear conditioning': methodological considerations for the design and analysis of studies on human fear acquisition, extinction, and return of fear.
Stimulus, environmental, and pharmacological control of defensive behaviors.
4-Hz oscillations synchronize prefrontal-amygdala circuits during fear behavior.
A pupil size response model to assess fear learning.
Neural correlates of pupil dilation during human fear learning.
Neural pattern similarity predicts long-term fear memory.
Electrodermal responses: what happens in the brain.
Affective learning: awareness and aversion.
Emotion and motivation I: defensive and appetitive reactions in picture processing.
Modulation of the acoustic startle response by film-induced fear and sexual arousal.
The startle probe response: a new measure of emotion.
Neural systems involved in fear and anxiety measured with fear-potentiated startle.
Fear-potentiated startle: a neural and pharmacological analysis.
Emotion circuits in the brain.
Don't startle me: interference of startle probe presentations and intermittent ratings with fear acquisition.
Oscillatory EEG activity induced by conditioning stimuli during fear conditioning reflects salience and valence of these stimuli more than expectancy.
Prefrontal cortical regulation of fear learning.
Prefrontal neuronal assemblies temporally control fear behaviour.
Medial prefrontal cortex neuronal circuits in fear behavior.
Dissociable roles of prelimbic and infralimbic cortices, ventral hippocampus, and basolateral amygdala in the expression and extinction of conditioned fear.
Microstimulation reveals opposing influences of prelimbic and infralimbic cortex on the expression of conditioned fear.
Anxiety dissociates dorsal and ventral medial prefrontal cortex functional connectivity with the amygdala at rest.
Assessing the effectiveness of a large database of emotion-eliciting films: a new tool for emotion researchers.
Uncertainty and anticipation in anxiety: an integrated neurobiological and psychological perspective.
Fear in virtual reality (VR): fear elements, coping reactions, immediate and next-day fright responses toward a survival horror zombie virtual reality game.
Social fear conditioning paradigm in virtual reality: social vs.
Contextual fear conditioning in virtual reality is affected by 5HTTLPR and NPSR1 polymorphisms: effects on fear-potentiated startle.
Virtual reality in neuroscience research and therapy.
Contextual-specificity of short-delay extinction in humans: renewal of fear-potentiated startle in a virtual environment.
Low spatial frequency filtering modulates early brain processing of affective complex pictures.
Emotional facial expressions evoke faster orienting responses, but weaker emotional responses at neural and behavioural levels compared to scenes: a simultaneous EEG and facial EMG study.
Distinct neural signatures of threat learning in adolescents and adults.
The NimStim set of facial expressions: judgments from untrained research participants.
Committee report: guidelines for human startle eyeblink electromyographic studies.
A wavelet-based estimator of the degrees of freedom in denoised fMRI time series for probabilistic testing of functional connectivity and brain graphs.
Independent component analysis of electroencephalographic data. In: Advances in Neural Information Processing Systems.
Infinite latent feature selection: a probabilistic latent graph-based ranking approach.
Practical Bayesian optimization of machine learning algorithms.
Emotion recognition from EEG using higher order crossings.
Time series analysis by higher order crossings.
Unraveling the mysteries of anxiety and its disorders from the perspective of emotion theory.
A modern learning theory perspective on the etiology of panic disorder.
Startle reactivity and anxiety disorders: aversive conditioning, context, and neurobiology. Biological Psychiatry.
Anxious responses to predictable and unpredictable aversive events.
Unpredictability and uncertainty in anxiety: a new direction for emotional timing research.
Contextual fear induced by unpredictability in a human fear conditioning preparation is related to the chronic expectation of a threatening US.
Using neuroscience to help understand fear and anxiety: a two-system framework.
Feature extraction and selection for emotion recognition from EEG.
Toward machine emotional intelligence: analysis of affective physiological state.
EEG analysis based on time domain properties.
EEG correlates of different emotional states elicited during watching music videos.
Approach to an irregular time series on the basis of the fractal theory.
The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis.
Classification of human emotion from EEG using discrete wavelet transform.
Direction of information flow in large-scale resting-state networks is frequency-dependent.
Dynamic markers of altered gait rhythm in amyotrophic lateral sclerosis.
Hilbert-Huang transform based physiological signals analysis for emotion recognition.
Phase transfer entropy: a novel phase-based measure for directed connectivity in networks coupled by oscillatory interactions.
Combining eye movements and EEG to enhance emotion recognition.

Acknowledgments
We thank Kibum Choi and Hyeonyoung Choi for their help with developing the software and hardware environment for the experiment.
We also thank Jaeho Bae for designing images for the fear conditioning experiment. This research was supported by Looxid Labs. All data are stored on the Looxid Labs server and are available upon request.