key: cord-0058403-qq90x27i
authors: Walters, Kristin; Markazi, Daniela M.
title: Insights from People's Experiences with AI: Privacy Management Processes
date: 2021-02-17
journal: Diversity, Divergence, Dialogue
DOI: 10.1007/978-3-030-71292-1_4
sha: 3d5b3512b9b03d6e9f25cdabbf245275eff21383
doc_id: 58403
cord_uid: qq90x27i

Given the lack of interview-based research in the Artificial Intelligence (AI) literature, we conducted 15 semi-structured interviews about people's experiences with AI. From those interviews, many important themes emerged, and we focused this paper on people's experiences with privacy violations. We used Communication Privacy Management Theory (CPM) to analyze distancing behaviors (avoidance or withdrawal) resulting from Boundary Turbulence (privacy violations) with people's voice-activated phones and Google Search. Through this analysis, we found evidence that distancing behaviors may (1) transfer from one device to another and (2) not depend on the trustworthiness of a particular device or application company. This paper concludes with recommendations for applying the CPM framework more intentionally and rigorously to people's experiences with voice-activated systems.

Industries and communities are adopting more and more Artificial Intelligence (AI) technologies as algorithms advance, data sets grow, and computational power and storage become more economical [5]. Much is still unknown about the socio-technical implications of the proliferation of AI technologies in human-facing applications [3]. AI technologies increasingly make news headlines, either for their promise of efficiency and industry disruption or for their perpetuation of culturally rooted social inequalities. However, many Americans are still unaware of how AI does and will affect their daily lives [12]. Researchers have called for more human-centered design processes in AI development to encourage active collaboration between technology decision makers and users [13]. A better understanding of people's imaginaries around, and interactions with, AI technologies will support designers and policy makers as they strive to construct an AI landscape that works for the public.

Current AI research has relied mostly on quantitative methods, lacking the texture and nuance that users' voices add to the conversation. In this paper, we provide data from 15 semi-structured interviews on people's experiences with AI. While many interesting themes emerged, this paper focuses on three participants' experiences with privacy violations and their resulting behaviors.

Privacy management is made up of a person's choices and behaviors around disclosing or concealing information; it is a dynamic and dialectic process [2]. For this paper we used Sandra Petronio's 2002 Communication Privacy Management Theory (CPM) as a framework to help "conceptualize and operationalize privacy management" and Helen Nissenbaum's Contextual Integrity (CI) model to better understand how privacy contexts change in networked systems [8, 10].

Building on the work of Irwin Altman, Sandra Petronio developed Communication Privacy Management Theory as a means to "conceptualize and operationalize the nature of privacy" [11]. The theory states that individuals are owners of their information, and they define that ownership through privacy boundaries.
There are benefits and risks to disclosing and concealing information, and individuals have a set of privacy rules that govern when they share information with other people or entities. Petronio [11] categorizes privacy rules as core and catalyst. Core privacy rules are predictable and stable over time, such as one's cultural conception and valuation of privacy. Catalyst rules are less predictable and depend on context, like someone disclosing important information during an emergency. When an individual shares information with another individual or entity, that party becomes a co-owner. There are explicit and implicit expectations in the co-ownership about how the information will be disclosed or concealed. When those expectations are not met, Boundary Turbulence can occur. Boundary Turbulence can best be understood as a privacy violation; for example, people got angry at Facebook for sharing data with Cambridge Analytica.

Much research has been done on the emotional and behavioral responses to Boundary Turbulence. People felt mostly anger, fear, and sadness when experiencing Boundary Turbulence in the contexts of online hacking and romantic relationships [1, 7]. People's behavioral responses to turbulence vary between integrative and distributive, and those behaviors correlate with certain emotions. For the purposes of this paper, we focus on the distributive behaviors of "distancing," which can take the form of withdrawal from or avoidance of the violating information co-owner [1, 7].

In 2004, Helen Nissenbaum published her privacy management model, Contextual Integrity. Contextual Integrity dives deeper into the "context" variable presented by CPM and aims to explain and predict people's evolving "information norms" (similar to CPM's idea of privacy boundaries) in the face of society's rapidly changing technological landscape [8]. Nissenbaum [9] posits that "contextual norms may be explicitly expressed in rules or laws or implicitly embodied in convention, practice, or merely conceptions of 'normal' behavior. A common thesis in most accounts is that spheres are characterized by distinctive internal structures, ontologies, teleologies, and norms." Contextual Integrity models information norms in terms of the relationships among information types, actors, and transmission principles. The model has been used to draft privacy policy in light of new contexts.

For the purposes of this paper, we use CPM's idea of Boundary Turbulence to describe the participants' privacy violations that emerged in our data. We use the term Boundary Turbulence rather than Contextual Integrity violation because there has been more research on the emotional and behavioral effects of Boundary Turbulence in Communication research; leading up to this paper, we found no studies on the behavioral effects of CI violations in the Computer Science literature.

While many of our privacy findings aligned with current research, we found evidence of Communication Privacy Management Theory's Boundary Turbulence with conversational devices, which has not yet been explored in the research. Emotions and behaviors caused by Boundary Turbulence have been studied in a variety of contexts, namely smartphones, fitness trackers, romantic relationships, and online hacking [1, 4, 7, 14].
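Before turning to our methods, the structure of a Contextual Integrity judgment can be made concrete. The sketch below encodes an information flow in terms of CI's actors, information type, and transmission principle, and checks the flow against the norms expected in a context. It is an illustration only, not part of our analysis: the class, the norm table, and the example flow (modeled loosely on the targeted-ad episodes reported later) are hypothetical.

```python
# Illustrative sketch only: a minimal encoding of Contextual Integrity's
# flow parameters. The names, norms, and example flow are hypothetical and
# are not drawn from the study's data or codebook.
from dataclasses import dataclass

@dataclass(frozen=True)
class InformationFlow:
    sender: str                  # who transmits the information
    recipient: str               # who receives it
    subject: str                 # whom the information is about
    info_type: str               # e.g. "spoken conversation", "search history"
    transmission_principle: str  # e.g. "with explicit consent"

# Hypothetical norms for one context: which transmission principles people
# expect to be acceptable for a given (sender, recipient, info_type).
CONTEXT_NORMS = {
    ("phone microphone", "ad network", "spoken conversation"): {"with explicit consent"},
}

def violates_contextual_integrity(flow: InformationFlow) -> bool:
    """A flow breaches contextual integrity when its transmission principle
    does not match any principle expected for that sender/recipient/info type."""
    allowed = CONTEXT_NORMS.get((flow.sender, flow.recipient, flow.info_type), set())
    return flow.transmission_principle not in allowed

# A flow resembling the targeted-ad episodes participants described.
flow = InformationFlow(
    sender="phone microphone",
    recipient="ad network",
    subject="participant",
    info_type="spoken conversation",
    transmission_principle="used for ad targeting without consent",
)
print(violates_contextual_integrity(flow))  # True -> perceived violation
```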
Given the limitations of our study (time and sample diversity), we cannot say anything definitive about emotional and behavioral responses to Boundary Turbulence with voice-activated devices, but we can provide justification for more substantial research.

We utilized the interview method to explore the interrelatedness of people's history with AI, current use of AI, and perceptions of AI. Due to COVID-19 restrictions, participants were recruited through the researchers' personal Facebook pages, one researcher's Instagram page, the social media site Nextdoor, and the University of Illinois at Urbana-Champaign's Reddit page. These platforms offered access to a diverse age group, but did not yield a sample representative of the population in terms of race. Eight males and seven females (ranging in age from 18 to 61 years old) were interviewed. Nine participants identified as white, non-Latino (20 to 61 years old), four as Asian (18 to 21 years old), one as mixed-race (Black/white, 21 years old), and one as Latina (22 years old). All participants had post-secondary education, ranging from some college (no degree) to a Ph.D.

Fifteen people participated in semi-structured interviews about their experiences with AI technologies. Researchers recorded 13 of the interviews (video or audio) and transcribed them with Otter AI, editing transcriptions as needed. One researcher manually transcribed two interviews with participants who declined to be recorded. Interviews ranged from 22 to 75 minutes.

To start the interview, researchers asked broad questions to determine participants' level of knowledge about and usage of AI technologies. We asked participants for their definition of AI and what words and ideas they associate with the concept. We asked if they were currently using intelligent technologies and what kinds of conversations they had with family and friends about intelligent systems. We provided a definition of AI to all participants partway through the interview and asked if the definition was in line with their understanding of the technology and its use. In addition, we asked our participants to describe their personal history of knowing about and using AI, as well as their concerns about AI, specifically their privacy concerns. We also asked our participants how they imagine AI could be used in the future and how they could see AI helping with the COVID-19 pandemic. Finally, we asked participants to describe desirable policies and designs for AI technologies.

Both researchers coded the interview transcripts separately by applying an inductive thematic analysis as outlined in Guest et al. [6]. Researchers compared their coding systems and ensured alignment. Due to the timeline of the project and the amount of data to be processed, researchers analyzed specific themes individually, altering the coding system as needed for their individual analyses.

Fourteen out of 15 participants described the lack of privacy as inevitable, uncontrollable, and an inherent characteristic of today's society. However, privacy concerns and evidence of privacy management processes were still present in the data. Below are descriptions of three participants experiencing Boundary Turbulence with voice-activated devices; one of those three also experienced Boundary Turbulence with Google Search. We also include the behavioral responses to the turbulent episodes.
Three participants (P1, P2, P8) experienced turbulence when a targeted ad popped up after they had been talking about the featured product in front of (but not to) their own or their conversational partner's phone. The participants believed the device listened to them and then used the overheard information to target them with a specific advertisement. Participants reported these incidents in the context of privacy concerns and violations.

A 35-year-old white female with an MBA said, "And once we were done, I was like on my social media and boom, that was the brand right there trying to advertise. I was like, Oh my god, this is so freaky."

A 24-year-old white male who formerly worked as a Navy photographer and currently studies history said, "It's just a little bit unsettling then be it through, you know, passive artificial intelligence in the background, that these companies and who really just want to sell us something maybe maybe that we don't need, are able to target us to that extent in our, in our personal lives, like, you know, it's it's kind of nice to be able to If I wanted to buy something to look for it myself and and not have Uber freight come up on my timeline and be like, this is creepy. It's just creepy."

A 22-year-old Asian female undergraduate studying New Media said, "I was talking to my mom about like, adopting a pet. And then I don't know how maybe it just like misclicked on my like, on my phone or something. But when I looked at it again, it had like Google search for pet shelters. And I was like, Wait, did I do that? I was like, I don't think I did. Yeah, I accidentally like pushed voice control or something. But I don't know. It just made me think of that like and how like, a lot when people talk like near Alexa or near like Google Home, that like the advertisements are also like, curated for them. So I guess that is a little bit funky. But a little bit concerning." This participant also experienced turbulence when doing an art project that revealed the amount of her personal data collected by Google, YouTube, and Facebook: "Because like, it's good that they know that they know what I like, and they know what to show me. But I don't like that. They know that because like, they know everything about me."

Two of the three participants (P1, P2) experiencing boundary turbulence reported distancing behaviors resulting from the turbulent episodes. The 22-year-old Asian female New Media student experienced turbulence with both Google Search and her Android phone. In response to the turbulent episode on her phone, she reported no behavioral change. In response to the Google Search turbulence, she responded with a distancing behavior, describing it this way: "I actually did switch to DuckDuckGo on this computer. But I kind of don't like it because sometimes it doesn't show me what I want to see."

In response to a turbulent episode with her phone, the 35-year-old white female with an MBA exhibited a distancing behavior towards the Alexa device, describing the situation this way: "I use Siri sometimes when I'm driving. I definitely do. But Alexa, I'm not a big fan because I already am freaked out about like how I told you like when I talk, like I don't even mention the name and next thing I know is on my computer my web browser. I don't know. I don't use that."
The results show that privacy management processes differ from person to person and from context to context, aligning with assertions from Altman [2], Petronio [11], and Nissenbaum [8] that privacy management is dynamic, dialectic, fluid, and contextual. Given CPM research by Aloia [1] and McLaren [7] on behavioral responses to boundary turbulence, it is not surprising that distancing behaviors appeared in the data. However, the distancing behaviors differ from what one might predict and offer pathways for further investigation.

P1 distanced herself from Google Search by using DuckDuckGo after experiencing turbulence. Yet there was no indication of a distancing response (or the inclination for one) with her Android phone after turbulence. Given that both applications are run by the same company, these questions arise: What assumptions are people making about data collection on different devices, and how do those assumptions affect privacy boundaries, their violations, and ensuing behaviors? How do different applications/devices from the same company moderate privacy boundaries between the owner (consumer) and co-owner (company)? What entity do people believe they are in a co-owning relationship with: the device, its company, or the device as a proxy for the company?

P2 did not report distancing herself from her phone after turbulence, but instead reported avoiding buying an Alexa device. This raises further questions: Can distancing behaviors transfer from one device to another? If so, how does device ownership moderate this effect? How does trust in device companies moderate this effect?

Due to COVID-19 restrictions, we were unable to recruit a representative sample of the general public. For future studies we recommend collecting a more diverse sample.

Boundary turbulence occurs when an information co-owner violates a privacy boundary. Research shows that people can react to turbulence in a variety of ways given the dynamic nature of privacy management. Our research shows unexpected distancing behaviors from people who experienced turbulence with their voice-activated phones. We think these findings warrant further research applying the CPM framework to people's privacy management processes with voice-activated applications. Insights into behaviors resulting from boundary turbulence could help companies foster relationships that might mitigate distancing behaviors. Insights could also help privacy advocates educate consumers about their behavior so they are empowered to better manage privacy with devices and institutions.

References

1. Aloia: The emotional, behavioral, and cognitive experience of boundary turbulence
2. Altman, I.: The Environment and Social Behavior: Privacy, Personal Space, Territory and Crowding
3. Guidelines for human-AI interaction
4. Smartphone privacy perceptions and behaviors, generational influence, quantitative analysis: communications privacy management theory
5. What is artificial intelligence? Technical considerations and future perception. The Anatolian
6. Guest, G., et al.: Applied Thematic Analysis
7. McLaren: Emotions, communicative responses, and relational consequences of boundary turbulence
8. Nissenbaum, H.: Privacy in Context: Technology, Policy, and the Integrity of Social Life
9. Nissenbaum, H.: Respecting context to protect privacy: why meaning matters
10. Petronio, S.: Boundaries of Privacy: Dialectics of Disclosure
11. Petronio, S.: Conceptualization and operationalization: utility of communication privacy management theory. Current Opinion in Psychology
12. Brookings survey finds worries over AI impact on jobs and personal privacy, concern U.S. will fall behind China
13. Toward human-centered AI: a perspective from human-computer interaction
14. "There's nothing really they can do with this information": unpacking how users manage privacy boundaries for personal fitness information