title: Impact of face masks and sunglasses on emotion recognition in South Koreans
authors: Kim, Garam; Seong, So Hyun; Hong, Seok-Sung; Choi, Eunsoo
date: 2022-02-03
journal: PLoS One
DOI: 10.1371/journal.pone.0263466

Due to the prolonged COVID-19 pandemic, wearing masks has become essential for social interaction, disrupting emotion recognition in daily life. In the present study, a total of 39 Korean participants (female = 20, mean age = 24.2 years) inferred seven emotions (happiness, surprise, fear, sadness, disgust, anger, and neutral) from uncovered, mask-covered, and sunglasses-covered faces. Recognition rates were lowest under the mask condition, followed by the sunglasses and uncovered conditions. Different emotion types were associated with different areas of the face: the mouth was the most critical area for recognizing happiness, surprise, sadness, disgust, and anger, whereas fear was recognized mostly from the eyes. By simultaneously comparing faces with different parts covered, we were able to examine more precisely how different facial areas contribute to emotion recognition. We discuss potential cultural differences and ways in which individuals can cope with communication in which facial expressions are paramount.

Without doubt, we have all had trouble identifying others' emotional expressions in recent years as mask wearing became the norm. In the COVID-19 era, wearing masks is a necessity of daily life. As a result, we often encounter faces that are only partially exposed, and our daily social interactions suffer because our ability to recognize facial expressions and their associated emotions is diminished.
Thus, it is particularly important to understand the specific ways in which the ability to correctly infer emotions is restricted when parts of the face are occluded in this unusual time. Overall, researchers agree that facial expression recognition is hindered when parts of the face are covered. The two key areas of the face for reading facial expressions are the mouth and the eyes [1] [2] [3] [4] . There has been debate about which is more important for recognizing facial expressions, the eyes or the mouth. Notably, emotion type is considered to play a critical role in determining the key facial areas for reading emotions. Past research focusing on six basic emotions (i.e., happiness, sadness, fear, anger, disgust, and surprise) has found that some emotions render consistent results, while others do not. Studies on happiness and fear report consistent results: the mouth plays a key role in recognizing happiness [5] [6] [7] [8] and the eyes in recognizing fear [9] [10] [11] [12] [13] . In addition, the effects of masks or sunglasses on reading other people's facial expressions may differ depending on the cultural context (for a review, see [18] ). As an example, consider the findings that a face covered with Islamic headdresses such as the niqāb impacts emotion recognition differently across cultural groups [13, 25] , suggesting that culturally attached meanings of headdress may play a role. East Asians are not only less accustomed to sunglasses than Westerners, but sunglasses are also often considered rude in interpersonal relationships [26, 27] . Given this cultural background, it is necessary to test whether the effects of masks and sunglasses on facial expression recognition documented with Western participants also apply to East Asians. Building on past studies, the present study aims to fill these gaps in previous research.
To do this, we first conducted an integrative investigation that examined the effects of covering both the mouth and eye regions with realistic facial occlusions (i.e., a mask and sunglasses) on six basic emotions. Second, going beyond past studies that have primarily been conducted in the West, we aimed to test whether similar findings are observed in non-Western samples. This is important because many previous studies have demonstrated cultural differences in the areas of the face to which people pay attention. In the present study, we used a within-subject design with Korean undergraduate students as participants. To enhance the realism of the stimuli, we edited images of real-world masks and sunglasses onto the face stimuli rather than simply cutting out regions or covering them with a black box, as in previous studies [8, 13, 14, 28] . In addition to the main research question, we tested the effect of sex on facial expression recognition. According to previous studies, women appear to perform better than men at distinguishing facial expressions in faces covered with masks [29, 30] . In addition, it has been found that individuals better identified (and labeled) emotions from facial expressions when the target was of a different sex [31] . Thus, we examined whether emotion recognition differs according to the sex of the participant and the sex of the face stimuli. Based on a power analysis (G-power software version 3.1), a total of 28 participants were required for the current experiment, which was designed with a repeated-measures ANOVA; given an expected effect size of .25 and α = .05, this yielded an acceptable power of .8 [32] .
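The power analysis above was run in G*Power's repeated-measures ANOVA module. As a rough, illustrative stand-in (not G*Power's exact calculation), power for a single within-subject contrast can be estimated by Monte Carlo simulation; the standardized effect size d below is a hypothetical value, not the paper's f = .25.

```python
import numpy as np
from scipy import stats

def simulated_paired_power(n=28, d=0.5, alpha=0.05, n_sims=2000, seed=0):
    """Monte Carlo power for a one-sample t-test on paired differences.

    A rough stand-in for the G*Power repeated-measures calculation used in
    the paper (f = .25, alpha = .05, target power = .80); d here is a
    hypothetical standardized paired difference, not the paper's f.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        # Draw standardized within-subject differences with mean d, SD 1
        diffs = rng.normal(loc=d, scale=1.0, size=n)
        if stats.ttest_1samp(diffs, 0.0).pvalue < alpha:
            hits += 1
    return hits / n_sims

print(simulated_paired_power())
```

Increasing n, d, or alpha raises the simulated power, which makes this a quick sanity check on a planned sample size.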
However, we recruited more participants than required to allow for exclusions due to computer malfunction or failure to meet the pre-set criterion (participants with less than a 50% correct facial expression recognition rate when presented with fully visible faces, without a face mask or sunglasses, were excluded). This predetermined criterion followed Carbon's method [9] . Through an advertisement posted on a Korean university community website, 40 students voluntarily registered for the study. All participants had normal or corrected-to-normal vision with no abnormal color discrimination. Participants received a 10,000 won (approximately 9 US dollars) beverage gift card for participating in the study. This study was approved by the Institutional Review Board of Korea University (KUIRB-2020-0317-01). During the analysis, participants with less than a 50% correct recognition rate for fully visible faces were excluded. After this exclusion, the final sample consisted of the 39 participants who exceeded the 50% threshold (average accuracy = 0.67; male = 19, female = 20, M age = 24.2 years, SD age = 4.7 years). The data can be accessed at https://osf.io/fcg4d/. The Korean facial expression data used in the study were obtained from the Korea University Facial Expression Collection (KUFEC) [33] . The KUFEC was developed to minimize the cross-race effect by using the faces of Korean models [34] . Through agreement between two researchers, six male and six female models with relatively more accurate facial expressions were selected from a total of 49 models. Each model posed facial expressions depicting happiness, surprise, fear, sadness, disgust, anger, and a neutral emotion. These images were then edited in Adobe Photoshop CC 2020 by adding surgical face masks (mask condition) and sunglasses (sunglasses condition).
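The exclusion step above amounts to a simple filter on per-participant accuracy for uncovered faces. A minimal sketch with pandas; the column names and values are hypothetical, not the study's actual data files:

```python
import pandas as pd

# Hypothetical per-participant accuracy on fully visible (uncovered) faces;
# names and values are illustrative only
df = pd.DataFrame({
    "participant": [1, 2, 3, 4],
    "uncovered_accuracy": [0.72, 0.48, 0.67, 0.81],
})

# Pre-set criterion: keep only participants exceeding 50% correct
# on uncovered faces, as in the paper
kept = df[df["uncovered_accuracy"] > 0.50]
print(kept["participant"].tolist())
```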
In total, there were 252 facial stimuli: 2 (sex) × 6 (individuals) × 7 (emotions) × 3 (uncovered vs. mask vs. sunglasses). All factors were within-subject factors. The experiments were conducted in individually assigned cubicles, where participants were briefly informed about the study and signed an informed consent form. During the experiment, participants were instructed to recognize facial expressions in pictures in which the facial stimuli were presented randomly across all factors (sex, emotion type, and condition). The stimuli were presented using E-Prime 3.0. Each facial expression stimulus was presented on the left side of the monitor with a 3 × 3 table on the right side, in which the seven emotion labels were mapped to the corresponding positions on a numeric keypad. Participants were instructed to respond as quickly and accurately as possible by pressing the numeric key corresponding to the facial expression in the presented picture. An example of this task is shown in Fig 1. The participants completed six practice trials and then performed 252 test trials. After completing the test trials, participants responded to a questionnaire on demographic information, were debriefed, and received their participation rewards. We tested the accuracy of participants' baseline recognition rates by presenting the uncovered faces. Taking into consideration the chance rate of 14.2%, we can safely assume that the average correct answer rate of 68% for fully visible faces was well above chance level (χ² = 21312.458, p < .001; Cramér's V = 0.601, p < .001). We tested whether there were any sex or age effects on facial recognition accuracy. Sex differences in the overall correct answer rate were not statistically significant (t = -1.303, p = .203), nor was there a significant relationship between age and recognition accuracy (F = .032, p = .86).
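The reported χ² and Cramér's V come from the full stimulus-by-response table. A simpler sketch of the same above-chance check is a goodness-of-fit test of correct/incorrect counts against the 1-in-7 chance rate; the trial counts below are reconstructed from the reported figures for illustration:

```python
from scipy import stats

# 39 participants x 84 uncovered-face trials (252 trials / 3 conditions)
n_trials = 39 * 84
observed_correct = round(0.68 * n_trials)  # from the reported 68% accuracy
observed = [observed_correct, n_trials - observed_correct]

# With 7 response options, chance accuracy is 1/7 (~14.3% correct)
expected = [n_trials / 7, n_trials * 6 / 7]

# Goodness-of-fit test: is observed accuracy consistent with guessing?
chi2, p = stats.chisquare(observed, f_exp=expected)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}")
```

This is not the paper's exact analysis, but it illustrates why a 68% hit rate against a 14.2% chance level is overwhelmingly significant.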
Before analyzing the main results, all data were checked for normality. Normality was violated for some variables, but we decided to use repeated-measures ANOVA because ANOVAs are generally considered robust to non-normality when sample sizes are equal (within-subject design) [35] . A 3 (face condition) × 7 (emotion type) within-subjects ANOVA was conducted to test the main and interaction effects of the two variables: the face region that was covered and the type of emotion. We report Greenhouse-Geisser-corrected results whenever Mauchly's test of sphericity was significant. First, we predicted that accuracy would be lower for recognizing faces wearing masks than for uncovered faces. The results showed a significant main effect of face condition, F(2, 76) = 147.34, p < .001, ηp² = .80, with the highest recognition rate observed for the uncovered condition (M = .68, SD = .01), followed by the sunglasses (M = .64, SD = .01) and mask conditions (M = .51, SD = .01). Specifically, pairwise comparisons demonstrated that accuracy under the mask condition was lower than under both the uncovered condition, t(38) = -15.98, p < .001, and the sunglasses condition, t(38) = -11.4, p < .001. Not surprisingly, recognition accuracy under the sunglasses condition was lower than under the uncovered condition, t(38) = -4.74, p < .001. Covering parts of the face, whether the eyes or the mouth, hinders facial expression recognition; in particular, covering the mouth rather than the eyes made it more difficult to identify emotions. There was also a significant main effect of emotion, using the Greenhouse-Geisser correction, F(4.04, 153.44) = 211.96, p < .001, ηp² = .85. Specifically, happiness and neutral emotion had higher recognition rates than all other emotions (ts ≥ 3.45, ps ≤ .012; |ts| ≥ 3.78, ps ≤ .004, respectively), whereas fear showed the lowest recognition rates (|ts| ≥ 12.28, ps < .001).
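The 3 × 7 repeated-measures design can be sketched with statsmodels' `AnovaRM` on long-format data; the accuracy values below are fabricated around the reported condition means and are for illustration only. Note that `AnovaRM` does not apply the Greenhouse-Geisser correction reported in the paper; a tool such as pingouin's `rm_anova` would be needed for that.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)

conditions = {"uncovered": 0.68, "sunglasses": 0.64, "mask": 0.51}
emotions = ["happiness", "surprise", "fear", "sadness",
            "disgust", "anger", "neutral"]

# Fabricated long-format accuracy data: 39 subjects x 3 conditions x 7 emotions
rows = [
    {"subject": s, "condition": c, "emotion": e,
     "accuracy": float(np.clip(base + rng.normal(0, 0.1), 0, 1))}
    for s in range(39)
    for c, base in conditions.items()
    for e in emotions
]
df = pd.DataFrame(rows)

# 3 (face condition) x 7 (emotion) within-subjects ANOVA
res = AnovaRM(df, depvar="accuracy", subject="subject",
              within=["condition", "emotion"]).fit()
print(res)

# Pairwise follow-up, as in the paper: paired t-test on per-subject means
per_subj = df.pivot_table(index="subject", columns="condition",
                          values="accuracy")
t, p = stats.ttest_rel(per_subj["mask"], per_subj["uncovered"])
print(f"mask vs uncovered: t = {t:.2f}, p = {p:.4f}")
```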
In summary, facial expression recognition accuracy was highest for happiness and neutral faces, followed by surprise, then sadness, disgust, and anger, with fear lowest. In the previous analysis, participants' emotion recognition was more impaired when the mouth (and part of the nose) was covered (mask condition) than when the eyes were covered (sunglasses condition). Next, we tested whether the greater impairment from mouth occlusion (vs. eye occlusion) would depend on the specific emotional expression. Based on prior research, we expected that emotions such as happiness and disgust, whose recognition is more influenced by the mouth (vs. the eyes), would be more impaired by wearing a mask than sunglasses; in contrast, we expected that emotions such as fear, for which the eyes have a greater impact on recognition, would be impaired more by sunglasses than by a mask. As shown in Fig 2, the interaction between face condition and emotion was significant, F(7.29, 277.04) = 15.22, p < .001, ηp² = .29. To interpret this interaction, we conducted pairwise comparisons by face condition within each emotion and examined whether the differences in accuracy between the mask, sunglasses, and uncovered conditions depended on the specific emotion. For instance, emotions such as happiness and disgust, which rely primarily on cues from the mouth, should show particularly low accuracy under the mask condition compared with the sunglasses condition. The results are as follows: Happiness. The recognition accuracy of happiness under the mask condition (M = .84, SE = .02) was lower than that under both the sunglasses (M = .97, SE = .01), t(38) = -8.8, p < .001, and uncovered conditions (M = .92, SE = .02), t(38) = -3.58, p = .001. Recognition accuracy for faces with sunglasses was higher than for uncovered faces, t(38) = 2.76, p = .009.
This result is consistent with prior research documenting that the mouth is especially informative for identifying happiness. However, it is noteworthy that accuracy in the sunglasses condition, which covered the eyes and left the mouth visible, was higher than in the uncovered condition. A possible reason why people recognized happy faces with sunglasses better, not worse, than uncovered happy faces is that the sunglasses allowed participants to concentrate on the mouth without being distracted by the eyes. Previous studies have demonstrated that facial recognition performance improves when less important parts of the face are covered [25] . By including both the mask and sunglasses conditions, we were able to show that covering relatively less important areas of a face can actually increase recognition of certain emotions. Specifically, the present findings showed that people can perceive happiness by focusing only on the mouth, and that information from the eyes interferes with, rather than facilitates, its recognition. Disgust. For disgust, participants were less accurate when faces were covered by a mask (M = .34, SE = .03) than when they were uncovered (M = .56, SE = .03), t(38) = -6.31, p < .001, or wearing sunglasses (M = .52, SE = .03), t(38) = -5.97, p < .001. Recognition accuracy for uncovered faces and faces with sunglasses did not differ, t(38) = -1.42, p = .164. Given that there was no significant difference between the sunglasses and uncovered conditions, disgust seems to be recognized primarily from the mouth, with the eyes playing no major role. Sadness. Sad faces with masks (M = .32, SE = .04) showed lower recognition accuracy than uncovered faces (M = .64, SE = .03), t(38) = -9.13, p < .001, and faces with sunglasses (M = .48, SE = .03), t(38) = -4.15, p < .001.
Recognition accuracy under the sunglasses condition was also lower than that under the uncovered condition, t(38) = -6.34, p < .001. That is, it was particularly difficult to recognize sadness when the mouth was covered compared to when the eyes were covered. Anger. As for anger, faces covered by masks showed lower accuracy (M = .42, SE = .03) than uncovered faces (M = .62, SE = .03), t(38) = -5.58, p < .001, and those covered by sunglasses (M = .53, SE = .04), t(38) = -2.68, p = .011. Participants recognized angry faces with sunglasses less accurately than uncovered faces, t(38) = -2.75, p = .009. Thus, as with sadness, facial expression recognition of anger showed a significant decrease in accuracy in both the sunglasses and mask conditions, but especially in the mask condition. In other words, when the eyes, but not the mouth, were covered, people could still make emotional inferences of sad or angry faces from the mouth to some degree. Furthermore, the lower accuracy in the sunglasses condition compared to the uncovered condition further corroborates that people detect sadness and anger from the eyes as well. Surprise. As for the emotion of surprise, participants recognized the emotions in the masked condition less accurately (M = .61, SE = .03) than in the uncovered condition (M = .93, SE = .02), t(38) = -12.05, p < .001, and the sunglasses condition (M = .9, SE = .02), t(38) = -10.35, p < .001. The recognition accuracy of uncovered faces and faces with sunglasses did not differ, t(38) = 1.36, p = .181. That is, people were able to recognize surprise by focusing on the mouth and did not gain much from focusing on the eyes. Fear. Interestingly, recognition of fear did not seem to be affected by the covering of the mouth. The recognition accuracy of fear under the mask condition (M = .14, SE = .03) was similar to the uncovered condition (M = .15, SE = .03), t(38) = -.53, p = .602. 
However, accuracy under the sunglasses condition (M = .1, SE = .02) was lower than under the other two conditions, |ts| ≥ 2.14, ps ≤ .039. This suggests that covering the eyes, but not the mouth, decreased fear recognition. Consistent with previous results, people tend to recognize fearful faces from the eyes (the upper facial area) [9, 12] . We further analyzed how participants incorrectly labeled the emotions of faces covered by a mask. This allows us to better understand how people cope with the lack of cues from masked faces and offers real-world implications for individuals living through a pandemic. The percentages of correct and incorrect answers for all emotions are presented in the confusion matrix of emotions (Fig 3) . The correct answer rate for sadness in uncovered faces was 64.1%, but for sad faces wearing a mask it fell dramatically to 32.3%, with 20.7% of responses incorrectly identifying the expression as disgust. As for disgust, the correct answer rate for uncovered faces was 56.2%, but it dropped substantially to 34% when the face was covered with a mask; in this case, 24.8% of responses mistook it for sadness and 23.7% for anger. This suggests that without the facial configuration of the mouth, sadness and disgust are easily confused. Consistent with the present study, Carbon [9] also demonstrated that sadness, disgust, and anger were confused with each other in masked faces. Additionally, sadness was detected with 64.1% accuracy in the uncovered condition, but this decreased to 47.9% in the sunglasses condition, where 23.9% of responses misidentified sadness as disgust. Furthermore, for anger, the 61.5% accuracy in the uncovered condition decreased to 52.6% in the sunglasses condition, with 25.2% of responses misidentifying it as disgust and 14.3% as neutral. Therefore, sadness, disgust, and anger were likely to be confused with each other under the sunglasses condition as well.
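A confusion matrix like the one in Fig 3 can be built by cross-tabulating the posed emotion against the chosen response. A minimal sketch; the trials below are fabricated to mirror the sad/disgust confusion described for the mask condition, not the study's actual responses:

```python
import pandas as pd

# Fabricated (posed, response) trials mirroring the sad/disgust confusion
# described for the mask condition; illustrative only
trials = pd.DataFrame({
    "posed":    ["sad", "sad", "sad", "disgust", "disgust", "anger"],
    "response": ["sad", "disgust", "disgust", "sad", "anger", "anger"],
})

# Row-normalized confusion matrix: each row shows how one posed emotion
# was classified across its trials (rows sum to 1)
cm = pd.crosstab(trials["posed"], trials["response"], normalize="index")
print(cm.round(2))
```

The diagonal entries are per-emotion recognition rates; off-diagonal entries show which emotions absorb the errors, which is exactly the pattern the text discusses.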
As for surprise, the recognition rate was 61.3% in the mask condition, and 28.8% of participants misrecognized it as a neutral emotion. However, for the neutral faces with masks, the correct answer rate was 89.5%, with very few people mistaking these for other emotions. That is, wearing a mask increased the possibility of misrecognizing surprise as a neutral expression, but the reverse was not the case. The other emotions discussed earlier (i.e., sadness, disgust, and anger) were likely to be confused with each other, but interestingly, the confusion between surprise and neutral emotions was asymmetrical. Interestingly, surprise was not confused with a neutral emotion in the sunglasses condition, as indicated by its high recognition rate of 90%; the misrecognition of surprise as a neutral emotion was only observed when the mouth was covered. Another point to note in our findings is the misinterpretation of fear as surprise. Under the sunglasses condition, 66% of the participants incorrectly recognized fear as surprise. Moreover, 65% of the participants recognized fear as surprise when a face was uncovered, and these were often confused with one another [12, 36, 37] . In a study that used the KUFEC, participants were not able to make a clear distinction between surprise and fear [34]. In this experiment, picture stimuli consisted of six male and six female models. We tested whether there was a difference in the recognition accuracy of facial expressions depending on the sex of the target face. We conducted a repeated-measures ANOVA with the sex of the face as a within-subject factor and the sex of the participant as a between-subject factor. The results showed that the main effect of the sex of the stimuli was statistically significant, F(1, 37) = 5.87, p = .02, η p 2 = .14, indicating that the accuracy was higher when female faces (vs. male faces) were presented, t(37) = 2.42, p = .02. 
However, the main effect of the sex of the participants and the interaction between the sex of the participants and the sex of the face stimuli were not statistically significant, F(1, 37) = 1.74, p = .20, and F(1, 37) = 1.42, p = .24, respectively. Thus, participants were more accurate in recognizing the emotions of female faces, regardless of their own sex. In this experiment, we set out to test whether real-world facial occlusion, such as by face masks and sunglasses, impairs the recognition of emotions in a face, with a particular focus on the effects of mask wearing, which is still mandatory in many parts of the world due to the COVID-19 pandemic. We did this by comparing the recognition of six basic emotions across the mask, sunglasses, and uncovered conditions. We found that recognition rates for faces with masks and sunglasses were lower than for uncovered faces. Wearing a mask particularly harmed the recognition of emotional expressions, as indicated by a greater decrease in recognition under the mask condition than under the sunglasses condition. Specifically, happiness, surprise, sadness, disgust, and anger showed the lowest recognition accuracy when the mouth was covered (i.e., the mask condition), suggesting the important role the mouth plays in these emotions. Fear, on the other hand, showed the lowest recognition accuracy when the eyes were covered (i.e., the sunglasses condition). As for neutral faces, there was no significant difference between the uncovered and masked conditions. Taken together, our findings suggest that face masks cause the most striking decline in the recognition of the majority of basic emotions among Koreans, consistent with prior research on Western participants. In addition to replicating prior studies, the present study has several noteworthy findings.
By simultaneously comparing faces with different parts covered, the present study found that covering certain parts of the facial areas increases, not decreases, recognition of some emotions. For instance, when the eyes were covered, happiness was recognized better than when faces were uncovered, suggesting that covering the upper facial area, which is not critical for perceiving happiness, could actually facilitate emotion recognition. Moreover, with the addition of the sunglasses condition in the present study, the critical role of the mouth in emotion recognition was confirmed in emotions such as surprise and disgust, as similar accuracy rates were observed for faces with sunglasses and uncovered faces. On the other hand, emotions such as sadness and anger in faces with sunglasses were not as recognizable as in uncovered faces, indicating that people attain facial expression information for these emotions from the eyes in addition to the mouth. In addition to examining the overall recognition rate of different emotions when different parts of the face are covered, it is worthwhile to have a closer look at how participants misinterpreted the emotional expressions of these faces. Specifically, negative emotions, including sadness, disgust, and anger, were often confused with one another. In particular, when the mouth was covered (i.e., mask condition), sadness and disgust were often misidentified as one another. That is, without any cues from the mouth, people have trouble distinguishing these emotions. These findings offer practical advice for people who communicate while wearing a face mask. For instance, individuals who are sad may benefit from knowing that people may misrecognize their sadness as disgust. Likewise, it would be helpful for people to know that what seems like a disgusted face may actually be a sad face. 
Finally, to our knowledge, this is the first study to test the effect of mask wearing on emotion recognition with a direct comparison to the effects of sunglasses among East Asians. It is widely known that East Asians focus on the eyes while Westerners focus on the mouth when recognizing facial expressions. Thus, we expected a decrease in recognition accuracy in the sunglasses condition, as the eyes are especially important for East Asians in recognizing facial expressions. The results showed that there was indeed a decrease in the recognition of faces with sunglasses. However, the largest decrease was observed for masked faces, exceeding that for faces with sunglasses, which is consistent with Noyes et al. [17] and suggests that the mouth is the most important source of information for most basic emotions for Koreans as well. Overall facial expression recognition accuracy was similar to that of Westerners [9, 17] , but some emotions showed different patterns. For instance, in our findings, masks had a destructive effect on sad faces but not on fearful faces, consistent with German participants [9] but not with British participants [17] . Specifically, comparing faces covered with masks and faces covered with sunglasses, our participants recognized fearful faces more accurately in the mask condition and sad faces in the sunglasses condition, while British participants showed the opposite pattern. In addition, our participants had more difficulty classifying surprise and neutral expressions in faces wearing sunglasses than in uncovered faces, whereas British participants showed no such difference. This suggests that East Asians retrieve emotion information from the eyes more than Westerners when reading facial expressions of surprise. However, we also observed a pattern in which Westerners, rather than East Asians, appeared to read emotions from the eyes.
For instance, our participants showed higher accuracy for angry faces in the sunglasses condition than in the mask condition, whereas UK participants showed no significant difference between these conditions, suggesting that the mouth area was more informative for Koreans while both the mouth and eye areas were informative for British participants. However, because these samples were not collected at the same time or with the same procedures, the current study cannot make a direct comparison with the British sample, making it difficult to determine the influence of culture. Therefore, future research that directly compares East Asian and Western participants is needed. Finally, participant gender had no effect on recognition rate, nor did it interact with the target's gender. However, the gender of the target stimuli mattered: overall, participants recognized female faces better than male faces. More research is needed to replicate this effect. Some limitations of our study should be considered in future research. First, because the participants in our study were limited to undergraduate students, possible age effects were not tested. Some studies suggest that people have more difficulty recognizing older faces than middle-aged or younger faces [9, 38] , and that middle-aged participants identify facial expressions better than children or older adults [39] . Future research should therefore examine the effects of the age of both the raters and the target faces with realistic facial occlusions. Second, the present study used still photos of facial expressions, which is far from real-world communication of facial expressions. Recently, researchers measured the accuracy of emotion recognition using video stimuli, which add a static background to a dynamic facial expression set [40] . However, since that study included only two emotions, happiness and sadness, it is difficult to gauge the effect of facial occlusion on the recognition of a wider range of emotions in facial expressions.
Moreover, the face is not the only channel through which emotions are recognized. Other information, such as context [41] , body language [42, 43] , and voice [44] , is also used to express and read emotions. Thus, future studies using video stimuli that include various emotions and involve other sources of information would yield a more ecologically valid estimate of the effect of facial occlusion on emotion recognition. Wearing a face mask will not go out of fashion anytime soon, as COVID-19 is unlikely to disappear and may stay with us as an endemic disease, much like seasonal influenza [45] . Given this situation, the present line of research could be developed to find ways in which people can adapt to communication restricted by face masks. An intervention targeting specific groups that are particularly vulnerable to the current situation is a good example. For instance, service employees who are required to recognize customers' facial expressions while masks are worn are at a particular disadvantage. Given that tourism and hospitality staff showed an increase in both the accuracy and speed of facial expression recognition after emotion recognition training [46] , similar training might enhance emotion recognition for masked faces as well. Another group that requires special attention is children. In recent studies, researchers have found that children's emotion recognition is also affected by face masks [15, 16, 47] . Given that childhood is an important developmental stage for the socialization of emotion [48, 49] and that mask wearing may influence the development of necessary social interaction skills [18] , it is important not only to understand the effect of facial occlusion on children but also to develop strategies to help young children learn to read emotions from others. In conclusion, our results suggest that the mouth is more important than the eyes in facial expression recognition.
As mask wearing becomes increasingly prevalent, future studies should investigate effective ways, such as interventions, to minimize miscommunication from incorrectly reading emotions from a face.

Author contributions: Garam Kim, So Hyun Seong, Eunsoo Choi.

References
Happy mouth and sad eyes: scanning emotional facial expressions
Bubbles: a technique to reveal the use of information in recognition tasks
What makes Mona Lisa smile? Vision Research
In the Blink of an Eye: Reading Mental States From Briefly Presented Eye Regions. i-Perception
Emotion Recognition: The Role of Featural and Configural Face Information
The components of conversational facial expressions
Reaction Time Measures of Feature Saliency in Schematic Faces
An analysis of facial expression recognition under partial facial image occlusion. Image and Vision Computing
Wearing Face Masks Strongly Confuses Counterparts in Reading Emotions
Facial expressions of emotion are not culturally universal
"Masking" our emotions: Botulinum toxin, facial expression, and wellbeing in the age of COVID-19
Mapping the emotional face. How individual face parts contribute to successful emotion recognition
Veiled Emotions: The Effect of Covered Faces on Emotion Perception and Attitudes
The effect of emotional dimension and facial expression's presenting areas on facial expression's recognition: A comparison of gender differences
The Impact of Face Masks on the Emotional Reading Abilities of Children-A Lesson From a Joint School-University Project. i-Perception
Children's emotion inferences from masked faces: Implications for social interactions during COVID-19
The effect of face masks and sunglasses on identity and expression recognition with super-recognizers and typical observers
Reading Covered Faces
Universals and cultural differences in the judgments of facial expressions of emotion
American-Japanese cultural differences in intensity ratings of facial expressions of emotion
Are the windows to the soul the same in the East and West? Cultural differences in using the eyes and mouth as cues to recognize emotions in Japan and the United States
Cultural Confusions Show that Facial Expressions Are Not Universal
Culture modulates face scanning during dyadic social interactions
City under siege: authoritarian toleration, mask culture, and the SARS crisis in Hong Kong. Networked Disease: Emerging Infections in the Global City
The Perception of Emotion from Body Movement in Point-Light Displays of Interpersonal Dialogue
The perception of emotions by ear and by eye
Will SARS-CoV-2 Become Just Another Seasonal Coronavirus? Viruses
Development of hospitality and tourism employees' emotional intelligence through developing their emotion recognition abilities
Masking Emotions: Face Masks Impair How We Read Emotions. Frontiers in Psychology
How Preschoolers' Social-Emotional Learning Predicts Their Early School Success: Developing Theory-Promoting, Competency-Based Assessments. Infant and Child Development
Emotion Knowledge as a Predictor of Social Behavior and Academic Competence in Children at Risk

Acknowledgments: We would like to thank Dr. June-Seek Choi for letting us use the KUFEC stimuli set for this study.