Title: My Boss the Computer: A Bayesian analysis of socio-demographic and cross-cultural determinants of attitude toward the Non-Human Resource Management
Authors: Peter Mantello, Manh-Tung Ho, Minh-Hoang Nguyen, Quan-Hoang Vuong
Date: 2021-01-24

Abstract: Human resource management technologies have moved from biometric surveillance to emotional artificial intelligence (AI) tools that monitor employees' engagement and productivity and analyze the video interviews and CVs of job applicants. The rise of the US$20 billion emotional AI industry will transform the future workplace. Yet, beyond the absence of international consensus on principles or standards for such technologies, there is little cross-cultural research on future job seekers' attitudes toward them. This study collects a cross-sectional dataset of 1,015 survey responses from international students from 48 countries and 8 regions worldwide. A majority of the respondents (52%) are concerned about being managed by AI. Following the hypothetico-deductivist philosophy of science, we use the Hamiltonian MCMC approach and conduct a detailed comparison of 10 Bayesian models with the PSIS-LOO method. We consistently find that higher income, being male, majoring in business, and higher self-rated familiarity with AI correlate with a more positive view of emotional AI in the workplace. There are also stark cross-cultural and cross-regional differences. Our analysis shows that people from economically less developed regions (Africa, Oceania, Central Asia) tend to exhibit less concern about AI managers. Among East Asian countries, 64% of Japanese, 56% of South Korean, and 42% of Chinese respondents professed a trusting attitude. In contrast, an overwhelming majority (75%) of European and Northern American respondents hold a worried or neutral attitude toward being managed by AI. Regarding religion, Muslim students show the most concern toward emotional AI in the workplace, and the correlation becomes stronger for Muslim and Buddhist students with higher religiosity.

Emotional AI is already used by companies such as McDonald's to gauge customer sentiment while customers look at digital menus, optimizing the customer experience while at the same time increasing sales. Moreover, emotional AI is increasingly being marketed as a surveillance tool for policing and border security. Yet, as with many "surveillance tools" that rely on facial recognition, such uses have drawn scrutiny: Amazon has stopped providing its Rekognition technology to the government, although it is still sold to commercial customers (Chouinard et al., 2019). Likewise, emotional AI technologies have been used to exploit the emotionality of political communication. A prominent example is Cambridge Analytica, which used micro-targeted ads to exploit and manipulate voter emotions in the Brexit and US election campaigns (Wylie, 2019). The use of emotional AI in workplace settings is growing worldwide. For example, about 25% of the top 131 companies in South Korea have already used, or plan to use, AI in their hiring process (Partner, 2020). Large corporations such as IBM, Unilever, and Softbank already use AI to recruit and to analyze which behaviors or traits expressed by interviewees predict good future performers (Richardson, 2020).
Critically, beyond the recruitment process, emotional AI technologies monitor the stress, alertness, and level of engagement and attention of workers (Larradet et al., 2020; Suni Lopez et al., 2019). While emotional AI technologies are becoming more pervasive in society, their impact on individuals in the workplace is understudied. Leading vendors of emotional AI products, such as Affectiva and Empath, claim accuracy in the 90th percentile in reading emotions from facial expressions and other physiological signals (Heaven, 2020). Both large and small companies are joining the race to develop and sell these AI products, reportedly driving the industry's value to USD 20 billion (Telford, 2019). Yet, beyond the absence of international agreement on standards and principles for emotional AI technologies, the emotion datasets used to train the algorithms are coded and grown by humans on crowdsourcing platforms according to Paul Ekman's (1999) now-contentious eight basic universal emotions (i.e., anger, fear, sadness, disgust, surprise, anticipation, trust, and joy) (Mitchell, 2019; Mohammad & Turney, 2013; Yue et al., 2019). This raises two glaring concerns: the accuracy of such systems and the cultural bias inherent in their universal application. McStay (2018, p. 4) coined the term "machinic verisimilitude" to express sympathy for the technology and business communities, which have yet to deal properly with the social constructionist complexities of ethnocentric, context-dependent views of emotions. In stark contrast, in a recent article in Nature Neuroscience, seven leading researchers proposed several different approaches to defining and investigating fear, a seemingly simple and universal emotion (Mobbs et al., 2019). Recent empirical and theoretical developments in the study of emotion have questioned the validity of the so-called universality thesis of emotion (see Gendron et al. (2018)) as the modus operandi of the empathic media industry. Work on "constructed emotion" by researchers such as Lisa Feldman Barrett and Hoemann and colleagues, based on a review of more than 1,000 academic articles on emotional expression, shows that the communication and inference of anger, fear, disgust, and the other Ekman basic emotions vary significantly with culture and context. Moreover, modes of emoting evolve because cultures are dynamic and unbounded, with constant cultural transmission, learning, and unlearning (Boyd et al., 2011; Henrich, 2020; Vuong et al., 2020; Vuong, 2016). This truism about culture challenges the traditional, normative, and static ways of structuring emotion datasets into Ekman's eight basic types, a valence dimension (i.e., positive, neutral, and negative sentiments), and an arousal dimension (i.e., bored versus excited) favored by tech companies (McStay, 2018). Unlike how current machines read emotions, a vast body of literature shows that the human reading of emotions and their intensity depends on various external factors such as body movement, tone of voice, skin tone, or the background scene (Benitez-Quiroz et al., 2018; Chen et al., 2018), as well as personal experiences and cultural setting (Barrett, 2017). More importantly, the fact that many job seekers are now aware of AI hiring and are starting to game the algorithms by presenting themselves differently, using different words than they naturally would (Borsellino, 2020), makes the concern over accuracy even graver.
This can be witnessed in the plethora of amateur videos on YouTube that teach users how to beat AI recruiting. Accuracy aside, the human-coded, crowdsourced datasets used to train emotional AI raise serious algorithmic bias issues. For example, Rhue (2019) shows that two facial recognition algorithms, Microsoft's and Face++, both exhibit systematic bias in reading emotions such as anger and contempt, especially when interpreting the emotions of different races. Purdy et al. (2019) explain how algorithmic bias can arise from simple demographic facts: for instance, 89% of civil engineers and 81% of first-line police officers in America are male, so an algorithm trained on these professions' datasets will struggle to read female employees' emotions in the field (Purdy et al., 2019). Timnit Gebru, a former AI ethics researcher, was fired from Google over a co-authored paper concerning the risks of training large language models (Singh, 2020). One of the main findings of Gebru and her colleagues was that facial recognition systems are less accurate at identifying women of color (Buolamwini & Gebru, 2018). The lack of accuracy and the embedded bias in current algorithmic decision-making suggest the need for greater accountability of AI technology and for a systematic way of understanding how people perceive the introduction of so-called "AI managers." Hence, in this paper, we survey a large body of students, 1,015 future job seekers, from 48 countries and 8 regions around the world to understand how various socio-demographic, cultural, and economic factors influence perception of and attitude toward three aspects of the AI managers: job-entry gatekeeping, work monitoring, and the threat to autonomy. There is a lack of coherent, empirical, cross-cultural literature on perceptions of AI. Critically, what little exists focuses on perceptions of AI applications as tools to optimize productivity in the modern workplace. This section examines the current findings of the AI-perception literature. From the few studies on the perception of AI in the modern workplace, it is clear that research methods to measure awareness of AI and its effects are still at an early stage of development. Brougham and Haar (2017) designed a measurement instrument called the STARA awareness scale, which stands for Smart Technology, Artificial Intelligence, Robotics, and Algorithms, to measure employees' perceptions of the future workplace. Testing the STARA scale on 120 employees in New Zealand, Brougham and Haar (2017) found that the more employees were aware of STARA and what it meant for their jobs, the lower their organizational commitment and career satisfaction. These findings are consistent with previous studies that have examined the relationship between biometric surveillance and employee trust in the workplace (Ball, 2010; Marciano, 2019). One consistent finding in the literature is that people have little concern over job loss due to AI (Brougham & Haar, 2017; Pinto dos Santos et al., 2019; Sarwar et al., 2019; Shen et al., 2020). Indeed, one survey of 487 pathologists shows that nearly 75% of the respondents express excitement and interest in the prospect of AI integration in their work (Sarwar et al., 2019). Similarly, Pinto dos Santos et al.
(2019) surveyed 263 medical students and found that although most respondents thought AI could revolutionize (77%) and improve (86%) radiology, 83% did not think human physicians would be replaced. Likewise, in a web-based survey of 1,228 Chinese dermatologists from 30 provinces and regions, Shen et al. (2020) found that nearly 96% believed the role of AI is to assist with diagnosis and treatment. On the other hand, a survey of 484 medical students from 19 UK universities shows that while a majority of students (88%) believe AI will play an important role in healthcare, nearly half the respondents express concern about job loss in radiology due to AI (Sit et al., 2020). Overall, however, most surveyed medical students express interest in, and a desire to receive, more AI training in their degrees (Bin Dahmash et al., 2020; Pinto dos Santos et al., 2019; Sit et al., 2020). A mixed-methods study on perceptions of the proliferation of AI chatbots in healthcare showed that most internet users are receptive to this new technology; however, there was a level of hesitancy due to concerns about cyber-security (hacking/theft of personal data), privacy (third-party sharing of information), accuracy, and questions concerning AI's capacity for empathy (Nadarzynski et al., 2019). In the same vein, by tracking 2,315 AI-related posts on the Chinese social media platform Sina Weibo in 2017, Gao et al. (2020) found that an underlying reason for the negative public perception of medical AI was a fundamental distrust of profit-oriented companies and of the nascent stage of the technology. They also found that of 965 posts where attitudes were expressed, nearly 60% were positive and 6.2% were negative about using AI in medical settings. Public perception of AI has been shaped by multiple factors, such as socio-cultural background, fictional representation in popular culture, the public positions of experts in the field, political events and scandals, and perceptions of economic and political rivals. Historically speaking, public perception of AI has been greatly influenced by fictional representations in popular novels, film, and television, many of which depict AI as an innately dystopian form of synthetic intelligence, such as Colossus: The Forbin Project (1970), War Games (1983), Terminator (1984), I, Robot (2004), and Ex Machina (2014). On the other hand, Asian audiences tend to associate AI with more favorable characters, such as Mighty Atom (Astro Boy) and the beloved manga/animation character Doraemon (Robertson, 2017). However, there is contradictory evidence from the empirical literature. For instance, Bartneck et al.'s (2007) study found that among nearly 500 participants, people from the US were the most positive toward robots, while Japanese and Mexican participants were more negative. Mass media can also shape public perception of AI. Neri and Cozman (2020) have shown that cautious opinions offered by technology pundits such as Stephen Hawking or Elon Musk can change AI risk evaluations. Moreover, a study of media discussions of AI in the New York Times over 30 years showed a progressive increase in concern about the human loss of control over AI, ethical concerns about the role of AI in society, and the displacement of human activity in the workforce (Fast & Horvitz, 2017). On the other hand, optimistic views of AI applications in healthcare and education were also found to increase. Using the NexisUni database, Ouchchy et al.
(2020) found that the tone of media coverage of AI's ethical issues was initially optimistic and enthusiastic in 2014 and became more critical and balanced by 2018, with privacy being the most salient aspect of this debate. Controversial political events such as the Cambridge Analytica case or the yet-to-be-passed Algorithmic Accountability Act in the US can also shape public discourse on the risk of AI misuse. The UK-based data broker fell into disrepute when the public learned that its various parent companies, such as SCL Elections Ltd., had executed psychological operations (psy-ops), powered by harvesting massive amounts of social media data and using algorithms to micro-target and change individual behavior, in more than 200 elections around the world, mostly in underdeveloped countries (Kaiser, 2019; Wylie, 2019). As Bakir (2020) notes, Cambridge Analytica is now a non-operational political data analytics company. Since then, in surveys around the world, people who are aware of digital micro-targeting practices have expressed a clear desire for action against technologies that exploit the emotionality of voters in political campaigns (Woolley & Howard, 2018). Yet, many still do not realize that AI-powered micro-targeting tools are deployed behind the political adverts they see online (Bakir, 2020). For example, according to a YouGov survey in 2019, while 58% of the UK national sample were against the tailoring of political adverts, 31% of the UK sample were unaware of these problems (ORG, 2020). In response to growing public concern over the manipulativeness and intrusiveness of AI-powered digital political and marketing campaigns, politicians in advanced democracies have started to push for legislation that increases the transparency and accountability of the companies that build and deploy these AI systems (Badawy et al., 2018). Legislation such as the EU Digital Services Act, the Algorithmic Accountability Act and the Filter Bubble Transparency Act in the US, and the German Medienstaatsvertrag (State Media Treaty) has sparked heated public debate and received support from certain political factions and stakeholder groups (Rieder & Hofmann, 2020). Moreover, crucial data on public debates about AI governance in developing countries are absent. As such, the current literature on the perception of AI shows three major areas of concern. First, there is a clear lack of cross-cultural and cross-regional comparison. Second, empirical studies on the subject indicate a shortage of consistent measuring and testing instruments for the determinants of AI perception. Finally, the absence of studies on the impact of emotion-sensing technologies in the workplace suggests a strong need for further research to fill the existing intellectual vacuum. These are the three areas where this study seeks to contribute. Based on the literature, we ask the following two major research questions: RQ1: How do socio-demographic factors influence self-rated familiarity with AI? RQ2: How do socio-demographic factors (sex, income, religion, religiosity, major, school year, region) and self-rated knowledge influence respondents' perception of the AI managers? The survey was distributed in 14 online classes at Ritsumeikan Asia Pacific University (APU), Beppu, Oita, Japan, and in a public Facebook group of APU students. APU is the most international campus in Japan, with students coming from 91 countries and regions as of the academic year 2019.
The first round of distribution ran from July 15 to August 5, 2020, and the second round from October 10 to December 10, 2020. A research assistant joined the online Zoom classes and explained that the purpose of the study is to measure broad statistical patterns in how students' attitudes toward and knowledge about emotional AI vary with their socio-demographic factors. The students were also told that participation in the survey was voluntary and that all responses would be anonymized. The attitude items include, for example: 2) Do you agree that a company manager should use AI/smart algorithms to screen job applicants? 3) Are you worried about protecting your autonomy at work due to the wider application of AI/smart algorithms? The attitude variable is the average of the four attitude questions. Income is an ordinal/continuous variable of self-perceived household income: low ("1"), middle ("2"), and high ("3"). Major is binary, social studies ("0") vs. business ("1"); students are asked to specify their majors. Religion is binary; respondents are asked to specify their official religion or the lack thereof (there are very few Jewish and Shintoist respondents, so they are not included in our analyses). Religiosity is binary, with "1" for the very religious and "0" for the non-religious or mildly religious; respondents are asked to choose their level of religiosity. Following recent guidelines on conducting Bayesian inference, ten models are first constructed by gradually adding more variables and levels (Aczel et al., 2020; Gelman & Shalizi, 2013; Vuong et al., 2018; Vuong et al., 2020). The models are then fitted to the data using the Bayesian Hamiltonian Monte Carlo approach. The prior distributions of all parameters are set to the defaults, which are 'uninformative' (Andrieu et al., 2003; McElreath, 2020). The equations of the models are presented in Table 2 below. As can be seen, the models are gradually expanded, tested, and compared, which is aligned with the hypothetico-deductivist philosophy of science championed by Gelman & Shalizi (2013). It is worth noting that models with both the religion and religiosity variables are nonlinear to avoid confounding effects. Model 10 is a multi-level model and thus the most complex, with the Region variable functioning as the varying intercept and all other variables included. Multi-level modeling is a natural fit for Bayesian analysis as it assigns probability distributions to the varying regression coefficients (Spiegelhalter, 2019). Moreover, multi-level modeling also helps improve estimates under sampling imbalance and allows explicit study of the variation among groups. Partial pooling (or adaptive pooling) is another advantage of multi-level modeling: it produces estimates that underfit less than complete pooling and overfit less than no pooling (Gelman & Hill, 2006; McElreath, 2020). Finally, to guard against overfitting and to find the model that best fits the data, the models are compared in detail using the Pareto-smoothed importance-sampling leave-one-out cross-validation (PSIS-LOO) approach (Vehtari et al., 2017). We compare the models' weights by computing Pseudo-BMA weights without the Bayesian bootstrap, Pseudo-BMA+ weights with the Bayesian bootstrap, and Bayesian stacking weights (Vehtari & Gabry, 2019; Yao et al., 2018).
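For readers who want a concrete picture of this setup, the following is a minimal sketch, not the authors' code, of how a varying-intercept model of this kind can be specified and fitted with Hamiltonian Monte Carlo in Python using PyMC and ArviZ; the original analysis follows the R/Stan ecosystem of McElreath (2020), and the variable names and simulated data below are illustrative assumptions standing in for the survey variables.

```python
# Illustrative sketch of a varying-intercept ("Model 10"-style) regression of
# attitude on socio-demographic predictors, with Region as the varying intercept.
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(0)
n, n_regions = 1015, 8

# Simulated stand-ins for the survey variables (the real data are not reproduced here).
region_idx = rng.integers(0, n_regions, n)   # region of each respondent
sex = rng.integers(0, 2, n)                  # 0 = female, 1 = male
income = rng.integers(1, 4, n)               # 1 = low, 2 = middle, 3 = high
major = rng.integers(0, 2, n)                # 0 = social studies, 1 = business
familiarity = rng.integers(1, 6, n)          # self-rated familiarity with emotional AI
attitude = rng.normal(2.5, 1.0, n)           # averaged attitude score (outcome)

with pm.Model() as model_10:
    # Weakly informative priors standing in for the paper's default "uninformative" priors.
    a_region = pm.Normal("a_region", mu=0, sigma=1, shape=n_regions)  # varying intercepts
    b_sex = pm.Normal("b_sex", 0, 1)
    b_income = pm.Normal("b_income", 0, 1)
    b_major = pm.Normal("b_major", 0, 1)
    b_fam = pm.Normal("b_fam", 0, 1)
    sigma = pm.Exponential("sigma", 1)

    mu = (a_region[region_idx] + b_sex * sex + b_income * income
          + b_major * major + b_fam * familiarity)
    pm.Normal("attitude", mu=mu, sigma=sigma, observed=attitude)

    # 4 chains, 2,000 warm-up draws and 3,000 retained draws per chain,
    # loosely mirroring the "5,000 iterations, 2,000 warm-ups" setup described above.
    idata_10 = pm.sample(draws=3000, tune=2000, chains=4,
                         idata_kwargs={"log_likelihood": True})

loo_10 = az.loo(idata_10)  # PSIS-LOO estimate with Pareto-k diagnostics
```

Under such weakly informative defaults the posterior is driven almost entirely by the data, mirroring the uninformative prior choice described above.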
First, regarding familiarity with emotional AI, when the students are asked to choose the most appropriate definition of this technology to the best of their knowledge, nearly 79% choose intelligent machines/algorithms that attempt to read (44.7%) or display (34%) the emotional state of humans, which are roughly correct definitions of emotional AI and affective computing (McStay, 2018; Richardson, 2020; Rukavina et al., 2016). Meanwhile, 21.3% of the respondents choose AI that displays human consciousness (Figure 1A). In Figure 1B, when students are asked to rate their level of familiarity with emotional AI, around 23% rate themselves as familiar to very familiar, while around 40% rate themselves as unfamiliar. These numbers show that although nearly 79% of the respondents (Figure 1A) chose a close definition of emotional AI, the topic remains ambiguous to a similarly large share of them (Figure 1B). In terms of ranking the ethical issues concerning AI, we presented students with a list of nine ethical problems with AI proposed by the World Economic Forum (Bossman, 2016) and asked them to choose the top three. Interestingly, the top concern for the international student body is essentially about human-machine interaction, i.e., "Humanity. How do machines affect our behavior and interaction?", with 561 responses (55.3%). The second greatest concern, at 488 responses or 48.1%, is about the security of these smart systems, i.e., how do we keep AI safe from adversaries. Table 3 shows that 52% of the respondents hold a negative view of the AI managers, and 51% rate themselves below average regarding AI knowledge. After running the MCMC analyses for all 10 models (4 chains, 5,000 iterations, 2,000 warm-ups), the two basic standard diagnostic tests return good results: all Rhat values equal one (1). First, after fitting the models, we run the PSIS-LOO test for all of the models and find that all Pareto k estimates are good (k < 0.5) (see more details in the Supplementary Folder). In Bayesian statistics, weights are calculated to distribute the level of plausibility among different models given the data. We compare all the models using four types of weights, as shown in Table 4. Table 4 shows that Model 10 starkly outperforms all other models in all categories of weight among the models with attitude toward the AI managers as the outcome variable. Meanwhile, Table 5 shows that Model 5 fits the data the best among models with self-rated familiarity with emotional AI as the outcome variable. As a robustness check on prior sensitivity, Table 6 shows that tweaking the Bayesian priors results in no real differences in the posterior distributions, suggesting the models are robust. The best performances are shown by Model 10 and Model 5. The results demonstrate that attitude toward the AI managers is a very multi-faceted issue: it is shaped by socio-demographic factors as well as by culturally and politically related factors such as religion, religiosity, and region. Self-rated familiarity with the topic of emotional AI, however, is less complicated; it can be predicted from basic socio-demographic factors such as sex, school year, income, and major, as shown in Model 5. In the next section, we compare the posterior distributions from multiple models to find consistent results across all models. However, as Model 10 and Model 5 fit the data the best, their findings will tend to be more reliable.
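The model-comparison and robustness steps reported in Tables 4-6 can be sketched with the same toolkit. The snippet below is an illustration of PSIS-LOO weighting in general, not the authors' exact procedure; it assumes InferenceData objects for the candidate models are available (here only idata_1 and idata_10, with idata_1 fitted analogously to the previous sketch).

```python
# Hedged sketch of PSIS-LOO diagnostics and model-weight comparison.
import arviz as az

# Assumed to exist: idata_1 (a simpler model) and idata_10 (the varying-intercept model),
# both fitted as in the previous sketch with log-likelihoods stored.
candidates = {"model_1": idata_1, "model_10": idata_10}

# Pareto-k diagnostics for each model (k < 0.5 indicates reliable PSIS-LOO estimates).
for name, idata in candidates.items():
    print(name, az.loo(idata, pointwise=True))

# The three weighting schemes discussed in the paper: stacking, pseudo-BMA,
# and pseudo-BMA+ (pseudo-BMA with the Bayesian bootstrap).
stacking = az.compare(candidates, ic="loo", method="stacking")
pseudo_bma = az.compare(candidates, ic="loo", method="pseudo-BMA")
pseudo_bma_bb = az.compare(candidates, ic="loo", method="BB-pseudo-BMA")
print(stacking)

# Prior-sensitivity check in the spirit of Table 6: refit the same model with
# wider or narrower prior scales (e.g., sigma = 0.5, 1, 2 on the coefficients)
# and confirm that the posterior summaries barely change.
```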
As shown in Figure 5 and Table 7, major and sex are the most predictive of self-rated familiarity with AI, while income and school year are not. Specifically, being male and majoring in business management are predictive of rating oneself as more familiar with AI. Since the male gender has been found to associate positively with perceived technological self-efficacy, i.e., the belief that one is capable of performing a task using technologies (Cai et al., 2017; Huffman et al., 2013; Mackay & Parkinson, 2010; Tømte & Hatlevik, 2011; Vekiri & Chronaki, 2008), this study's finding on the correlation between male gender and higher reported familiarity with AI confirms this well-established trend. The effects of school year and income are more ambiguous, given their posterior distributions. Table 8 below presents the posterior distributions of all variables in Model 10, which shows the best goodness-of-fit among models with attitude as the outcome variable. Regarding sex, income, school year, major, and familiarity: across Model 1, Model 4, Model 9, and Model 10, the results consistently show that students with higher income, male students, business majors, and seniors are likely to have a less worried outlook toward AI managers. Regarding income, one possible explanation is that students with higher income are likely to have higher educational attainment (Aakvik et al., 2005; Blanden & Gregg, 2004) and to end up in high-status occupations (Macmillan et al., 2015); thus they are less worried about being managed by AI algorithms. In all likelihood, they are more likely to become future managers who will use those AI tools to recruit and monitor their employees. Regarding the sex variable, our result aligns with the well-established finding in the literature that male-ness is correlated with higher perceived technological self-efficacy (Huffman et al., 2013; Mackay & Parkinson, 2010; Tømte & Hatlevik, 2011; Vekiri & Chronaki, 2008). That being a business major is correlated with less anxiety toward AI applications in human resource management might be a product of the lack of emphasis on AI's ethical and social implications in business education, or of business students' focus on making their future businesses profitable. It is reasonable to assume that these business majors hope to one day end up in a managerial position. Being a manager would incline a person to adopt the company position, thus seeing things in terms of productivity and performance results rather than questioning them (Ball, 2010). That being a senior is correlated with a less worried outlook toward the AI managers could be interpreted as students near graduation being more concerned with finding a job and less concerned about any potential bias or misuse in future applications of AI in the workplace. Model 10 shows that students who rate themselves as more familiar with emotional AI tend to view the application of this technology in the workplace more positively. This result contradicts a Saudi Arabian study of medical students (Bin Dahmash et al., 2020), which found that anxiety toward using AI was correlated with a higher self-perceived understanding of this technology. This divergence from the literature can be explained by the diversity of the surveyed population, which spans 48 countries and 8 regions. Next, we investigate the influence of religion and religiosity on the perception of AI human resource managers.
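Posterior summaries of the kind reported in Tables 7 and 8 can be produced along the following lines; this sketch reuses the illustrative idata_10 object from the earlier snippet, and the coefficient names are assumptions rather than the authors' exact labels.

```python
# Summarize and visualize posterior distributions for the illustrative Model 10 sketch.
import arviz as az

coef_vars = ["b_sex", "b_income", "b_major", "b_fam", "a_region"]

# Means, standard deviations, HDIs, and r_hat values, analogous to a Table 8-style summary.
print(az.summary(idata_10, var_names=coef_vars, hdi_prob=0.89))

# Forest plot of the posteriors, akin to the coefficient figures discussed in the text.
az.plot_forest(idata_10, var_names=coef_vars, combined=True, hdi_prob=0.89)
```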
Buddhist students are the least likely to have a worried outlook toward the non-human resource management. Higher religiosity among the Muslim and Buddhist students appears to make them more anxious about the use of AI in human resource management. In Table 8, Muslim students show the most concern toward the AI managers, which may reflect the predominantly negative media representation of Muslims and Islam documented by Ahmed and Matthes (2016). They are also subjected to various forms of discrimination, real and perceived, in a post-9/11 world. On the other hand, the finding that Buddhism or Christianity as one's religion is also correlated with higher anxiety toward non-human resource management suggests that having an official religion at all might make respondents more sensitive to the potential misuse of and bias in AI applications in the modern human resource management context. East Asian cultural traditions also conceive of privacy, hierarchy, and moral authority differently from Western traditions (Henrich, 2020; Whitman, 1985). Legitimate studies on the perception of AI tools in "moral governance" in China, for example, are rather positive (Roberts et al., 2020). Such cultural differences could explain why East Asian respondents express the least anxiety toward the prospect of AI managing job-screening and work-monitoring tasks. Another notable result is that students from underdeveloped regions (Africa, Central Asia, Oceania) also tend toward a lower level of anxiety about being managed by AI (Table 8 and Figure 9). This likely reflects a prioritization of job finding and economic development over the concerns with privacy and self-autonomy that predominate in AI discourses in developed, Western countries (Zuboff, 2019). This study suffers from several limitations, as it inherits those of the convenience sampling method. First, as the surveyed population consists of young students who study on a multicultural, bilingual campus, the results should be interpreted in that context. Second, some regions such as Eastern Asia and South-Eastern Asia are over-represented in the sample. However, with a large and diverse sample of 1,015 respondents from 48 countries and 8 regions, the paper still makes a good contribution to the AI-perception literature, which is currently skewed toward country- and profession-specific findings. As this is among the few cross-cultural studies that focus on the application of AI in human resource management, future studies can further explore the causal mechanisms of the correlations established in this study. For example, conducting in-depth interviews and controlled experiments with respondents from diverse cultural backgrounds could explain the influences of religion and religiosity on the perception of AI. This study makes several contributions to the literature. Besides being among the few cross-cultural empirical studies on the perception of the use of AI in human resource management, the paper finds that being managed by AI is the greatest AI risk perceived by the international future job seekers. Moreover, the analytical insights highlight the urgent need for better education and science communication about AI's risks in the workplace. The cross-cultural and socio-demographic discrepancies in concern about, and ignorance of, the AI managers can be bridged. Finally, methodologically, the use of Bayesian multi-level modeling shows a great advantage over traditional frequentist multivariate regression in quantifying the regional and religious correlates of AI perception. The descriptive statistics section has indicated that being managed by AI and interaction with AI are major concerns for the respondents. Table 3 shows that 52% of the future job seekers express a negative view of the AI managers on average.
In contrast, Figure 2 shows that human-AI interaction, i.e., "How do machines affect our behavior and interaction?", is the most prominent ethical concern for the students, with nearly 55% of the total responses. In comparison, job loss to AI ranks only third, with 48%. These insights will prove crucial when communicating about the risks of AI. As workplace surveillance seeks to go beyond the exterior of the physical body and attempts to datafy workers' emotional lives (Ball, 2010; Marciano, 2019; Richardson, 2020), the greatest worry for young job seekers is not AI replacing their jobs but rather AI supervising, evaluating, and making decisions about their performance and career advancement. This is perhaps the most significant concern of a majority of our respondents. In the next section, the implications and nuances of the cross-cultural and socio-demographic correlates of the perception of AI use in human resource management are discussed. As highlighted in the section on regional and religious differences, people from different socio-cultural and economic backgrounds tend to form different perceptions of emerging technologies. Here, it is worth recalling previous studies on workplace surveillance, which show that employees' awareness of the presence of smart surveillance technologies correlates negatively with organizational commitment (Ball, 2010; Brougham & Haar, 2017). These two tendencies, combined with the risk of AI being misunderstood (Wilkens, 2020), are important obstacles to overcome in order to harness the technologies' potential for good. In terms of regional differences, our analysis shows that people from economically less developed regions (Africa, Oceania, Central Asia) exhibit less concern about AI managers (see Figure 9), while those surveyed from more economically prosperous regions (Europe, Northern America) tend to be more cautious and reserved. However, it is interesting to note that an economically prosperous region such as East Asia (including China, Japan, Mongolia, North Korea, South Korea, and Taiwan) correlates with less anxiety toward the AI managers. Our data show opposite trends in perception of the AI managers between Europeans/North Americans and people from three major East Asian countries. In the latter case, more people show an accepting attitude than a worried or neutral one: 64% of Japanese, 56% of South Korean, and 42% of Chinese respondents exhibit an accepting attitude. Among European and Northern American respondents, an overwhelming majority of 75% hold a worried or neutral attitude toward being managed by AI (see Figure 10). In Confucian-influenced East Asian societies, ethical perceptions and attitudes toward individual rights differ from those in the West (Chung et al., 2008; Weatherley, 2002), and there is a stronger emphasis on harmony, duty, and loyalty to the collective will (Vuong et al., 2018; Vuong et al., 2020). Moreover, in Confucian ethics, the ultimate goal of self-development is to transcend one's self-interests and to align with things that are bigger than oneself: one's own community, nation, and nature (Whitman, 1985). There is also much more acceptance of moral intervention, as people in higher positions of the social hierarchy and the government, at their best, are usually thought of as sources of moral guidance (Roberts et al., 2020). The empirical findings on such stark cross-cultural and cross-regional differences could help educators, businesses, and policymakers shape their action programs to address stakeholders' concerns, or lack thereof, about the future of an AI-powered workplace (Condie & Dayton, 2020).
For example, as shown in Table 8 and Figure 7, religious respondents in the sample tend to show a highly negative attitude toward the use of AI in human resource management. Such concern needs to be addressed. As shown in this study, 21% of the respondents still equate emotional AI with machines or algorithms that display human consciousness (Figure 1A), and nearly 40% rate themselves as unfamiliar with emotional AI (Figure 1B). Our analysis indicates that many students are ignorant of their own biases/privileges and of the possibly harmful social and ethical implications of AI managers. For example, while this study shows that being male and being from a higher-income background are correlated with less anxiety regarding non-human resource management (Table 8 and Figure 6), it also reveals that being of a specific religious denomination or being female raises serious concerns about algorithmically driven managers. As past studies have shown, student engagement with ethics is contingent on several factors: first, the type of curriculum adopted by higher education institutions (Culver et al., 2013); second, how the concept of bias is communicated and understood through the course literature; and finally, the fact that technology-oriented courses, in which the potential pitfalls of algorithmic bias found in many current AI applications (Azari et al., 2020; Buolamwini & Gebru, 2018) are problematized, are only available to a privileged class who can attend post-secondary institutions. On the latter point, Table 8 and Figure 10 show that self-rated familiarity with AI is correlated with a more positive outlook on the AI managers (β_Familiarity_Attitude mean = 0.21, sd = 0.04), which implies these students might be unaware of the biases and inaccuracy in the emerging technologies. Even though the problem of algorithmic bias has now moved from the periphery to the center of public discourse, as witnessed in Cathy O'Neil's New York Times bestseller Weapons of Math Destruction (O'Neil, 2016), the proposed US Algorithmic Accountability Act, and the recent media storm surrounding the firing of Google AI's top ethics researcher, Timnit Gebru (Singh, 2020; Hao, 2020), this study indicates a clear lack of knowledge among the cross-cultural, multinational population surveyed (Table 3). To bring these insights into context, there seems to be little push-back against the use of emotional AI technologies in East Asian countries. For example, in South Korea, nearly 25% of the top 131 corporations stated they were planning to facilitate their recruitment with emotional AI tools (Condie & Dayton, 2020). Large corporations such as Softbank, Honda, IBM, and Microsoft are all pushing the production, promotion, and sales of these smart technologies. Under the influence of the COVID-19 pandemic, this trend is only growing. Many researchers have raised concerns regarding the use of AI technologies to surveil the population (Roussi, 2020) and the workforce (Condie & Dayton, 2020), concerns which may linger well after the pandemic. As such, university curricula must include courses on the social and ethical implications of AI in the workplace, especially in the business major, which this paper shows correlates with less concern about AI in HR management (see Figure 10). This is to correct students' misconceptions and enrich their understanding of the positive (optimizing workplace productivity) and negative (algorithmic bias) potential of such technologies.
It is important to remember that emotional AI technologies are designed to be empathic, not sympathetic, managers. Evaluation systems such as STEM and university-level courses in information technology or applied business practices are insufficient tools for preparing future job seekers for the quantified workplace. Rather, ethical training and critical thinking should be integral to the epistemology of institutional higher learning that prepares younger generations for an AI-augmented workforce. Philosopher Toby Ord uses the metaphor of the precipice to help us visualize the existential risks humanity faces with the rise of smart technologies (Ord, 2020). The transformation of the modern workplace through algorithmic decisionism and governance prompts urgent questions that stakeholders, from software engineers and programmers to policymakers and business leaders, need to address about how to live ethically and well with 'machines that feel.' Yet, there are still no global standards or consensus on regulating these new technologies, especially AI systems that can capture subjects' non-conscious data (McStay, 2020). There are already troubling reports of tech companies in more industrialized countries selling bias-prone facial recognition algorithms to governments and the private sector in developing countries, where regulation is more relaxed (Roussi, 2020). Moreover, problems of cultural norms and conventions arise when vendors try to transplant their biometric datasets and other analytic measurements to another culture. For example, US companies such as HireVue are selling their video-interview analytic wares to Japanese companies and schools with claims to predict whether or not a candidate will make a successful future employee. Although the local distributor redesigns HireVue's interview questions to align with Japanese society's norms and conventions, the biometric algorithms are created by Western programmers (Nakamura Toru, personal communication, January 15, 2021). Because culture is not static (Boyd et al., 2011), globalization and the hyper-connectivity of our digital lives foster faster and more complex cultural additivity that generates new values and sub-cultures (Vuong et al., 2018; Vuong, 2016). For example, the #BlackLivesMatter and #MeToo movements have already generated a wealth of new anti-racist and anti-sexist sentiments and vocabularies that are now spreading to the modern workplace (Hao, 2020). This should raise concerns about the sophistication of the current modus operandi of emotional AI technologies: Ekman's universal emotion hypothesis. The meteoric rise of AI-driven micro-targeting, misinformation, and fake news has already revealed the potential Pandora's box of unregulated smart technologies (Woolley & Howard, 2018). Much of the design and use of current artificial intelligence has exposed some of our natural stupidity. This is evident in AI's glaring social sciences deficit across multiple research areas, from data structuring to algorithmic design (Sloane & Moss, 2019). Our study suggests three fundamental concerns for future job seekers who will be governed and assessed, in either small or large ways, by non-human resource management. The first is a privacy concern: the increased accuracy of emotion-sensing biometric technologies relies on a further blurring of personal/employee distinctions and on tapping real-time unconscious data streams. The second is a concern for explainability.
As emotional AI and its machine learning capabilities move toward greater levels of complexity in automated thinking, many technologists believe that it will not be clear, even to the creators of these systems, how decisions are reached (Mitchell, 2019). Finally, at a deeper biopolitical level, emotional AI represents an emerging era of automated governance where Foucauldian strategies and techniques of control are relegated to software systems in which "people willingly and voluntarily subscribe to and desire their logic, trading potential disciplinary effects against benefits gained" (Kitchin & Dodge, 2011, p. 11). Instead of physically monitoring and confining individuals in brick-and-mortar enclosures or enacting forms of control based on the body's exteriority, the 'algorithmic governmentality' of emotion-sensing AI ultimately targets the mind and behavioral processes of workers to encourage their productivity and compliance (Mantello, 2016). Thus, as emotional AI technologies become more pervasive in the business sector, entry-level employees face significant challenges in retaining privacy over their emotional lives and, by extension, their dignity and autonomy. An individual's inability to negotiate the terms of such workplace power relations (Marciano, 2019) is compounded by the current absence of legal or ethical oversight. In conclusion, the empirical cross-cultural and socio-demographic discrepancies observed in this paper are intended to promote awareness and discussion and to serve as a platform for further intercultural research on the ethical and social implications of emotional AI as a preeminent tool in non-human resource management.

References (titles only)
Educational attainment and family background
Discussion points for Bayesian inference
Media representation of Muslims and Islam from 2000 to 2015: A meta-analysis
An introduction to MCMC for machine learning
Comparing supervised and unsupervised approaches to emotion categorization in the human brain, body, and subjective experience
Analyzing the Digital Traces of Political Manipulation: The 2016 Russian Interference Twitter Campaign. IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)
Psychological operations in digital political campaigns: Assessing Cambridge Analytica's psychographic profiling and targeting
Workplace surveillance: An overview. Labor History
How emotions are made: The secret life of the brain
Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements
The influence of people's culture and prior experiences with Aibo on their attitude towards robots
Facial color is an efficient mechanism to visually transmit emotion
Artificial intelligence in radiology: does it impact medical students preference for radiology as their future career
Family Income and Educational Attainment: A Review of Approaches and Evidence for Britain
Beat the robots: How to get your resume past the system & into human hands
Top 9 ethical issues in artificial intelligence
The cultural niche: Why social learning is essential for human adaptation
Smart Technology, Artificial Intelligence, Robotics, and Algorithms (STARA): Employees' perceptions of our future workplace
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
Gender and attitudes toward technology use: A meta-analysis
Distinct facial expressions represent pain and pleasure across cultures
Using automatic face analysis to score infant behaviour from video collected online. Infant Behavior and Development
Ethical perceptions of business students: Differences between East Asia and the USA and among
Four AI technologies that could transform the way we live and work
Comparison of engagement with ethics between an engineering and a business program
Culture changes how we think about thinking: From "Human Inference" to
Basic Emotions
Long-term trends in the public perception of artificial intelligence
Bentham, Deleuze and beyond: An overview of surveillance theories from the panopticon to participation
Public perception of artificial intelligence in medical care: Content analysis of social media
Data analysis using regression and multi-level/hierarchical models
Philosophy and the practice of Bayesian statistics
Universality reconsidered: Diversity in making meaning of facial expressions
We read the paper that forced Timnit Gebru out of Google. Here's what it says
Why faces don't always tell the truth about feelings
The WEIRDest people in the world: How the West became psychologically peculiar and particularly prosperous
Using technology in higher education: The influence of gender roles on technology self-efficacy
Targeted: My inside story of Cambridge Analytica and how Trump, Brexit and Facebook broke democracy
The management century
Code/Space. Massachusetts
Toward emotion recognition from physiological signals in the wild: Approaching the methodological issues in real-life data collection
Gender, self-efficacy and achievement among South African Technology teacher trainees
Who gets the top jobs? The role of family background and networks in recent graduates' access to high-status professions
The machine that ate bad people: The ontopolitics of the precrime assemblage
Reframing biometric surveillance
Statistical rethinking: A Bayesian course with examples in R and Stan
Emotional AI: The rise of empathic media
Emotional AI, soft biometrics and the surveillance of emotional life: An unusual consensus on privacy
Artificial intelligence: A guide for thinking humans
Viewpoints: Approaches to defining and investigating fear
Crowdsourcing a word-emotion association lexicon
Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study. DIGITAL HEALTH
The role of experts in the public perception of risk of artificial intelligence
Internationalization and Its Discontents: Help-Seeking Behaviors of Students in a Multicultural Environment Regarding Acculturative Stress and Depression
People as the roots (of the state): Democratic elements in the politics of traditional Vietnamese Confucianism
Weapons of math destruction: How big data increases inequality and threatens democracy
The Precipice: Existential Risk and the Future of Humanity
Public are Kept in the Dark Over Data Driven Political Campaigning, Poll Finds
AI in the headlines: the portrayal of the ethical issues of artificial intelligence in the media
How to beat A.I. in landing a job
Affective computing. MIT Media Laboratory Perceptual Computing Section
Medical students' attitude towards artificial intelligence: a multicentre survey
The risks of using AI to interpret human emotions
Anchored to bias: How AI-human scoring can induce and reduce bias due to the anchoring effect
Affective computing in the modern workplace
Towards platform observability
The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation
Robo Sapiens Japanicus: Robots, gender, family, and the Japanese nation
Resisting the rise of facial recognition
Affective computing and the impact of gender and age
Physician perspectives on integration of artificial intelligence into diagnostic pathology
Web-based study on Chinese dermatologists' attitudes towards artificial intelligence
Google workers demand reinstatement and apology for fired Black AI ethics researcher
Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: a multicentre survey
AI's social sciences deficit
The art of statistics: How to learn from data
Towards real-time automatic stress detection for office workplaces
'Emotion detection' AI is a $20 billion industry
Gender-differences in Self-efficacy ICT related to various ICT-user profiles in Finland and Norway. How do self-efficacy, gender and ICT-user profiles relate to findings from PISA
Bayesian Stacking and Pseudo-BMA weights using the loo package
Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC
Gender issues in technology use: Perceived social support, computer self-efficacy and value beliefs, and computer use beyond school
Cultural additivity: behavioural insights from the interaction of
On how religions could accidentally incite lies and violence: folktales as a cultural transmitter
Artificial Intelligence vs. Natural Stupidity: Evaluating AI readiness for the Vietnamese medical information system
Global mindset as the integration of emerging socio-cultural values through mindsponge processes: A transition economy perspective
Harmony, hierarchy and duty-based morality: The Confucian antipathy towards rights
Privacy in Confucian and Taoist thought
Artificial intelligence in the workplace - A double-edged sword
Computational propaganda: Political parties, politicians, and political manipulation on social media
Mindf*ck: Inside Cambridge Analytica's plot to break the world
Using Stacking to Average Bayesian Predictive Distributions (with Discussion)
A survey of sentiment analysis in social media
The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power