Interactive Storytelling for Children: A Case-study of Design and Development Considerations for Ethical Conversational AI

Jennifer Chubb, Sondess Missaoui, Shauna Concannon, Liam Maloney, James Alfred Walker

2021-07-20

Abstract: Conversational Artificial Intelligence (CAI) systems and Intelligent Personal Assistants (IPA), such as Alexa, Cortana, Google Home and Siri, are becoming ubiquitous in our lives, including those of children, and the implications of this are receiving increased attention, specifically with respect to the effects of these systems on children's cognitive, social and linguistic development. Recent advances address the implications of CAI with respect to privacy, safety, security, and access. However, there is a need to connect and embed the ethical and technical aspects in the design. Using a case-study of a research and development project focused on the use of CAI in storytelling for children, this paper reflects on the social context within a specific case of technology development, as substantiated and supported by argumentation from within the literature. It describes the decision-making process behind the recommendations made in this case for their adoption in the creative industries. Further research that engages with developers and stakeholders in the ethics of storytelling through CAI is highlighted as a matter of urgency.

Conversational AI (CAI) agents are ubiquitous in the lives of adults and children across the developed world. Intelligent Personal Assistants (IPA) such as Cortana (Microsoft), Alexa (Amazon), Siri (Apple), and Google Assistant are perhaps the most well-known form of CAI and are at the forefront of technological advancement. CAI has become more effective thanks to advances in automatic speech recognition (ASR) Karpagavalli and Chandra [2016], Natural Language Processing (NLP) Trilla [2009], Vanzo et al. [2019], and Deep Learning (DL) models Abdel-Hamid et al. [2014]. The fast-paced evolution of Artificial Intelligence (AI) has led to the regular use of high-performance CAI systems in day-to-day activities. CAI software enables individuals to communicate with a wide range of applications in natural language via voice, text and video. Researchers have begun to explore how these technologies are embedded within family practices and how interactions differ when involving adults and children (e.g. Sciuto et al. [2018], Druga et al. [2017]).

To the best of our knowledge, few studies have combined considerations of ML, NLP and DL innovation for CAI with a mapping of the ethical implications presented in the literature on the creative industries. Using a pilot case-study, we describe and reflect on the ethical design of a CAI meta-story tool for children's storytelling. By exploring previous research on both technical and ethical aspects, this paper reflects on the design and development decisions we made, supported by argumentation in the literature. In doing so, we propose a deeper and richer analysis of the issues for children's storytelling CAI in the creative industries. This paper begins with an overview of the ethical issues currently discussed with respect to children, both in policy and academic literature.
This is then related to a mapping of the technical advances in the general area of CAI, focusing on acoustic models and data-driven models and the ethical considerations thereof as applied to our case-study: the development of a meta-story chat tool.

Creative industry practitioners are looking to develop innovative and engaging experiences for children. As new forms of storytelling and immersive experience emerge, and virtual, mixed, diminished and extended reality projects become more commonplace, the need to examine the associated risks becomes more pressing. Children may be encountering these technologies while they are still forming how they discern the difference between reality and fantasy (e.g. the use of Sesame Street in Stanford University's Virtual Human Interaction Lab, Virtual Reality 101). While certain aspects of the creative sector, such as the ethics of games and children, are relatively well researched Cano et al. [2015], including a range of work on parental concerns and consent Dixon et al. [2010], Willett [2015], Rode [2009] and on gamified uses even to teach ethics Bagus et al. [2021], what happens with respect to children's data as they interact with voice technologies for entertainment poses deep moral concerns. Recent work suggests that such immersive experiences reveal a range of social issues including social isolation, desensitization, depersonalisation, manipulation, and privacy and data concerns Bailey and Bailenson [2017], Grizzard et al. [2017]. The more widespread these immersive storytelling tools become, the greater the need to reflect deeply on their design, in particular for children.

Long and Magerko [2020] highlight the importance of AI literacy, i.e. the competencies that enable individuals to critically evaluate and collaborate with AI technologies, and demonstrate the variety of factors that influence children's perceptions of AI. This is critical to the ethical design of CAI and a crucial aspect of child-computer interaction. Indeed, there is a need to empower children in the design process through participatory approaches relevant to the child-computer interaction field Kumar et al. [2018], Yip et al. [2019], Piccolo et al. [2021]. For creative sector organisations, many of which are SMEs, simultaneously directing attention towards the development of exciting and engaging experiences and ensuring ethical and safe deployment for children (which, as highlighted, poses a number of unique considerations) can be a daunting endeavour. Furthermore, the over-abundance of ethical guidance documents, coupled with the limited mapping of these high-level principles onto practical implementation strategies, makes this a difficult space to navigate, especially with respect to children. Researchers have highlighted how ethical guidelines often fail to acknowledge the important practical difficulties of implementing AI systems or the additional work required to translate these high-level principles and their various implications into actual workflows Ryan and Stahl [2020], Tomalin et al. [2021]. AI in the creative industries and digital storytelling in its current manifestation presents, at best, an inconsistent approach to responsible innovation of CAI for children, often with a need to join up the ramifications of situating such technologies within the home with the consequential impacts on users (children).
The inherent biases and assumptions underpinning current technical methodologies require the utmost scrutiny when applied to vulnerable groups such as children. Storytelling is a universal way of connecting with others, and in the case of young people these connections are vital to their mental wellbeing, safety, education and enjoyment.

Responsible innovation in science and technology has a long history Bush [1945], but it is also a current issue and one with a newer research focus Owen et al. [2013]. There is also a growing interest in bridging the gap between AI practice and governance Bryson [2020]. This is reflected in the publication of a significant number of ethical guidance documents emerging from both commercial and academic sectors Morley et al. [2019], Hagendorff [2020]. The global political landscape also attends to issues concerning ethical AI, e.g. see the European Commission's White Paper on AI ECW [2020] and the Children's Online Privacy Protection Act (COPPA) in the US. Perhaps the most active bodies in the policy area of online harms and children are UNICEF (2020) and UNESCO, the latter of which embarked on the development of a global legal document on the ethics of AI for children (2021). The recommendations made by UNICEF include the need to closely examine privacy, safety and security by providing identity protection, detecting harmful content, and focusing on location detection and biological/psychological safety. Additionally, UNICEF is clear that inclusion and equitability must be upheld, ensuring that systems are checked to mitigate against historic bias which may stand in the way of children's fair chances in life. In this respect, biases might include health, education, credit, the financial status of the family, etc. Dignity should be upheld with respect to the automation of roles in the future and, finally, the cognitive and psychological implications of technology with respect to mental health and manipulation should be explored. They suggest that a range of actors across the AI community, including scholars and agencies, need to come together to engage with these concerns. The UK Centre for Data Ethics and Innovation called for participatory design of smart speakers and voice assistants, stating that '[u]sers are expected to be active participants in the development of these technologies' Centre for Data Ethics and Innovation [2019]. They suggested that users should actively ask questions of their devices about how their data is used and stored, and even exert market influence to drive up demand for privacy-preserving technologies. Participatory approaches in ethical design which actively consult stakeholders, children and young people are a positive and progressive approach Cortesi et al. [2020], Kumar et al. [2018].

We draw on argumentation from the academic and policy literature to describe four emergent themes which guided the development and design of a meta-story chat tool for children. The themes which guided the co-production of this tool include: the effects of CAI on the cognitive and linguistic development of children; moral care; inclusivity; and regulation. This paper aims to provide a lens through which to consider broader and deeper considerations for the responsible development of CAI for children's storytelling. Seeded by our work with this pilot study, we aim to highlight several themes with accompanying discussion that inform the development of responsible CAI and to promote thought on future research.
The following sections present the findings of the technical and ethical scoping work. The focus of this paper is CAI for children's storytelling, and it reflects on a research and development pilot project to design a meta-story chat tool. We present a pilot case-study of work conducted with a digital agency committed to the responsible innovation of child-friendly CAI technology, centred on a prototype called 'AI Fan Along'. The project was motivated by asking what the guiding ethical questions and principles pertinent to the design and development of CAI for children are, and how they map onto its innovation. In order to investigate and answer these questions, we undertook a pilot study involving background research to understand the most recent developments in the design and development of CAI for children from both technical and social perspectives. This led to the recommendations mapped out in the paper.

The case-study which is the subject of this paper refers to the prototype 'AI Fan Along', a meta-story chat tool to encourage children (ages 9-14) to engage with characters, storylines and issues using voice AI technology. The overarching aim of the platform was to support children's social development, focusing on developing higher levels of social, literary and empathetic understanding through immersive digital storytelling. The tool would allow children to safely engage with their favourite characters on TV shows through voice-assisted technology and was designed so that, when an episode of a TV programme ends, a child is encouraged to speak to the characters to reflect on the events and to participate with suggestions and predictions for the next episode, thereby directing the narrative. Placing children at the heart of the storytelling experience in an immersive way through voice technology was acknowledged by the developers as carrying potential harms, raising a number of ethical considerations such as consent and privacy. Through research and development, the research team worked together to co-develop the technical design and ethical aspects of this prototype. In the following, we explain the process and methodology that was adopted to develop these recommendations.

This pilot project was carried out in 2020, over a three-month duration, with academic and industry partners. The research was funded during a national lockdown in the UK due to the COVID-19 pandemic. Our approach was two-fold: to conduct research on the technical potential of the tool, and research on the ethical implications of these technologies for practice. From the perspective of ensuring ethical design of the tool, and in order to gather a richness of perspectives on its effects, the team's original research plan involved interviews with children and their parents testing the tool and the analysis of transcripts. Due to the pandemic, the design had to be adjusted and gathering qualitative data was not possible. Instead, the methodology was adapted to include research on the ethics of CAI for children. This included a non-exhaustive but thorough review of the current literature which resulted, through thematic analysis Clarke and Braun [2014], in guiding themes which aided the development of principles and ethical reflection for both the company and the researchers.
Keywords were developed to guide the non-exhaustive mapping of the literature on the ethics of CAI for children from recent years (up to five), concurrent with a review of research on technological advances in CAI. The categories included: CAI, ethical implications/ethics, children, young people, generations, safeguarding, impact, ASR, systems for conversational speech, voice assistants, Alexa, Google Home, Nest, and chat. The research team met regularly, which over the three-month period resulted in two working papers covering both the technical and ethical aspects of the work. The ongoing iteration of the findings characterises this case-study as a co-production project, whereby there was ongoing dialogue and ethical reflection between the research and development teams.

It is important to note the limitations of this research and the associated approaches. We aimed to devise a set of recommendations for the industry partner in a very limited time-frame. We do not have user experiences, as a result of the adjustment to our methods within the given time-frame, and acknowledge that further research will deepen our understanding by engaging with children and their parents through ethnographic or semi-structured interviews. The searching of the literature, though thorough, was not fully exhaustive or systematic in nature, again owing to the time and scope of this limited pilot study. As such, findings from this project may be limited in their generalisability.

We aim to show how investigations of other technologies informed our design. We explore this by examining the technological options in CAI design supported by the literature. Even though CAI could be an effective tool to aid children in their cognitive, social, and linguistic development, its didactic potential in storytelling contexts is not well investigated. The effectiveness of voice assistants in storytelling for children could be highly influenced by the technical implementation of the chosen technology. In working on this case-study project it was necessary to review the technical implementations of CAI, as different methods pose distinct ethical challenges and the forms of interaction the system aims to support would require different architectures (e.g. answering questions about a specific book or TV show, through to more open-ended forms of dialogue). For instance, to develop a customized meta-story tool which would engage children with their favourite TV show, we found it important to consider children's linguistic development challenges.

Figure 1: A high-level architecture for voice-based CAI

In particular, 'AI Fan Along' needed to support the child's ability to express and understand feelings through an adapted technology. A mapping of ML, NLP and DL innovation in CAI technology and the implications for the design and deployment of voice cloning systems for children was undertaken, including a review of the most popular tools and frameworks in use by both industry and academia. This included research of current practices and ongoing co-production with the industry partner. Similarly, research on the audio aspects of the tool's development was conducted, with a particular focus on ASR systems and their compatibility with child voices and physiology, and the viability of voice cloning technologies to allow diegetic immersion to be maintained. Regular meetings ensured good dialogue and knowledge exchange at all stages.
We mapped the literature in audio and speech using the keywords: voice cloning, voice modelling, speech synthesis, deep fakes and voice spoofing, and performed searches concerning AI innovation using the keywords: CAI, ASR, ML and voice assistants, neural approaches to conversational AI, DL models, NLP and IPA. We first describe the background to this work before describing the design choices.

We aimed to provide our industry partners with a full picture of the CAI architecture and existing advances that could be easily adapted for AI Fan Along. As CAI requires the coordination and integration of several discrete systems performing pseudo-simultaneous tasks, we started by depicting the high-level architecture of CAI (see Fig. 1), as it is important to understand the potential role of each component in the direct interaction between children and the voice assistant. Typically, CAI systems include Automatic Speech Recognition (ASR), Natural Language Understanding (NLU), Dialogue Management (DM), Natural Language Generation (NLG), and Text-to-Speech (TTS) modules, which together constitute the high-level architecture of CAI. It is important to highlight that the NLU, DM and NLG components collectively comprise the semantic layer and are responsible for inferring meaning from the input, determining an appropriate next action and generating meaningful responses to output in natural language.

Design decisions are typically informed by the type of interaction the system seeks to support. This was particularly the case with our case-study, as the CAI needed to support both task-based and open components. Dialogues are typically classified as task-oriented, i.e. supporting the user in completing a specific task, or open-domain, i.e. able to speak on a range of topics as determined by the user. Different implementations invariably require distinct considerations, may be suited to supporting different types of dialogue, and pose unique challenges. CAI applications for children may encompass task-oriented and/or open-ended dialogues to support functional, educational, or entertainment-related interactions. In the case of 'AI Fan Along', however, the selection from the wide range of existing approaches was motivated by ethical concerns. In the following, we explore approaches used for implementing task-oriented and open-ended dialogue systems to identify their potential adaptation for AI Fan Along.

The NLU is a core component that interprets the meaning the user communicates and classifies it into an appropriate intent. Rule-based approaches (e.g. Yaman et al. [2008], Schapire and Singer [2000]) have been widely used both for classifying the user's intent and for defining the system's action, i.e. what is said. Rule-based approaches often follow an established set of dialogue-flows or handcrafted rules. This enables the system to respond effectively within a specific domain (i.e., task-oriented dialogues), but may be less effective if users pose questions outside that domain. Frame-based approaches use a template model to offer more flexibility. Consequently, the dialogue flow is not pre-determined, but adapts to and incorporates the user's input, and can integrate additional information sources from either the dialogue history or an external database. For example, Question Answering (QA) systems draw on techniques from Information Retrieval (IR) to enable the user to receive a relevant answer to a question asked in natural language, with sufficient context to validate the answer Hirschman and Gaizauskas [2001].
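To make the modular architecture and rule-based NLU discussed above concrete, the following minimal Python sketch wires stubbed ASR and TTS stages around a handcrafted intent classifier, dialogue manager and template-based NLG. It is an illustrative toy under our own assumptions, not the AI Fan Along implementation; every class name, intent label and template is hypothetical.

# Minimal, illustrative sketch of the modular pipeline (ASR -> NLU -> DM -> NLG -> TTS).
# The ASR and TTS stages are stubbed out; all names are hypothetical placeholders.

class StubASR:
    def transcribe(self, audio):
        return audio  # a real system would decode an audio signal into text

class RuleBasedNLU:
    """Maps a transcribed utterance to an intent with handcrafted keyword rules."""
    RULES = {
        "ask_character": ("why did", "what will", "tell me about"),
        "end_session": ("goodbye", "stop", "bye"),
    }
    def parse(self, text):
        lowered = text.lower()
        for intent, cues in self.RULES.items():
            if any(cue in lowered for cue in cues):
                return intent
        return "out_of_domain"  # falls through to a safe clarification strategy

class DialogueManager:
    """Chooses the next system action from the recognised intent."""
    def next_action(self, intent):
        return {
            "ask_character": "answer_from_story_knowledge_base",
            "end_session": "say_goodbye",
        }.get(intent, "ask_for_clarification")

class TemplateNLG:
    """Realises an action as a child-friendly reply using fixed templates."""
    TEMPLATES = {
        "answer_from_story_knowledge_base": "Great question! Here is what happened in the story...",
        "say_goodbye": "Bye for now! See you after the next episode.",
        "ask_for_clarification": "I'm not sure I understood. Can you say that another way?",
    }
    def realise(self, action):
        return self.TEMPLATES[action]

class StubTTS:
    def synthesise(self, text):
        return f"<audio: {text}>"  # a real system would return a synthesized waveform

def one_turn(utterance):
    asr, nlu, dm, nlg, tts = StubASR(), RuleBasedNLU(), DialogueManager(), TemplateNLG(), StubTTS()
    text = asr.transcribe(utterance)
    action = dm.next_action(nlu.parse(text))
    return tts.synthesise(nlg.realise(action))

print(one_turn("Why did the dragon hide the map?"))

The appeal of such a tightly scripted configuration for a children's tool is that every reply the child can hear is enumerated and auditable in advance, at the obvious cost of flexibility when talk drifts out of domain.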
QA agents employ large-scale Knowledge Bases (KB) or a document collection in natural language to retrieve information that then populates 'slots' in the dialogue, to provide concise and externally validated answers. QA systems have been employed for public engagement and entertainment purposes in culture and heritage contexts (e.g. Robinson et al. [2008]), and effectively enable users to navigate the KB through conversational interaction. Such frame-based approaches have also been used in open-domain dialogue contexts, such as the ALICE chat-bot developed using AIML AbuShawar and Atwell [2015]. While designing a task-oriented dialogue system to assist users in performing a specific task (e.g., making a hotel reservation) requires a relatively constrained set of conversational possibilities, as the topic scope increases so does a system's complexity. A drawback of these approaches is that they have limited adaptability, and a challenge can arise when user utterances fall beyond the scope of the dialogue-flow or domain of expertise (i.e., the KB used). Additionally, even when the scope of an agent is clearly communicated, users often persist in confronting it with off-topic or 'out-of-domain' talk Robinson et al. [2008], Ameixa et al. [2014].

In the case of AI Fan Along, a child's speech behaviour is more variable than an adult's. While adults have been observed to modify their speech when interacting with CAI, e.g. using shorter and simpler phrases Mou and Xu [2017], the same cannot be assumed for children. It is highly likely that children will produce unformulated and unthought-out requests to 'AI Fan Along' as if it were human. Approaches for managing off-topic talk, such as changing the topic or integrating a retrieval component that draws additional responses from a corpus of film dialogues Ameixa et al. [2014], could therefore be particularly important to the design of a meta-story tool. The literature reveals several attempts to understand child civility with machines and spoken dialogue systems Burrows [1992], Potamianos and Narayanan [1998], Arunachalam et al. [2001].

On the other hand, recent advances in Deep Learning (DL) and the availability of large conversational datasets have made open-domain dialogue systems, capable of generating content on a wide range of topics, more viable. Open-domain dialogue systems rely mainly on data-driven models and end-to-end (E2E) approaches Roemmele et al. [2011], Sutskever et al. [2014], Vinyals and Le [2015]. These have seen great success due to the availability of benchmarks (e.g., the ConvAI competition) and pre-trained language models such as BERT Devlin et al. [2018]. One of the advantages of data-driven models is the lack of dependencies on external resources such as API calls or KBs. Moreover, these models can be trained entirely from scratch, independently of the NLU, DM and NLG components, which often require extensive domain expertise and offer limited design choices. Consequently, E2E systems demonstrate great promise for generating conversation on a more diverse range of topics, as they require a less sophisticated annotation schema. Overall, data-driven models can be more flexible than rule-based systems, which makes them more suitable for engaging in open-domain and social dialogues. We found that although fully data-driven models are promising, they pose several challenges that are particularly noteworthy with respect to our use-case. Neural response generation has a high likelihood of producing uninformative responses, e.g.
"I'm not sure I understand". According to this issue is due to the training objective or a bias that emerges from the training data itself , Serban et al. [2017] , Zhang et al. [2018a] . Efforts to develop E2E systems capable of generating more naturalistic responses have included the development of datasets addressing more social and human-like aspects of dialogue. This was important to investigate for the use-case of AI Fan Along. We found that the use of personas, as in Zhang et al. Zhang et al. [2018b] and Lin et al. , could be a suitable solution. However, ensuring appropriate responses are generated consistently remains a challenge. For instance, Lin et al. point out that these approaches can still result in the development of morally dubious agents, who do not "have any sense of ethical value due to the lack of training data informing of inappropriate behavior" . By reviewing the state-of-the-art work in CAI design, we were able to highlight the potential of using a hybrid approach, using data-driven models that are tailored to specific personas together with rule-based approaches, which would need to be iteratively tested for safety. This would enable us to design a system that could respond safely and flexibly to children's conversational patterns and adequately parses out-of-domain talk. This choice led us to investigate other important challenges in the field, namely how can a child-friendly CAI relate to childrens' specific speech patterns. Therefore, we investigated the role of the ASR, voice synthesis and voice cloning techniques with a view to enhancing the effectiveness of the chat tool. One of the most distinctive aspects of 'AI Fan Along' was its acoustic features that would enable it to maintain comprehensive and engaging conversation/interaction with children. The literature highlighted the importance of developing a tool that accounts for and understands the highly varied inconsistencies and mutability of children's language. Hence, AI Fan Along required an ASR module built to intentionally learn from the ways children speak. The following goes deeper into features of ASR and voice cloning to distinguish possible challenges to be considered for adaptation of existing technology in our context. Automatic speech recognition is a core element in CAI that has a direct impact on the quality of interaction. ASR is the process that translates user-spoken utterances into text. The performance of an ASR system depends mainly on the robustness of its components, however, its ability to successfully handle the variability in the audio signals play a key criterion. Here we outline the ways in which many CAI designs and systems are more appropriate for adults and do not fully consider the physical and physiological development of children in their design. ASR faces several sources of acoustic variability Yu and Deng [2016] , which is caused by complicated interaction and speaker characteristics. These can be categorized as: firstly, within speaker variables, these concern momentary and longitudinal variations in the voice due to emotional expression and arousal Lee et al. [2004] , illness, age Vipperla et al. [2010] , Morris and Brown [1994] , body mass de Souza and dos Santos [2018] etc. All these factors need to be accounted for by the acoustic model to be representative of all potential speakers in all states. Secondly, between speaker variables (i.e. 
variations in spoken language, vocal tone and speech style) mainly concern different accents, non-native accents, dialects, slang, speech impairments and disorders, gender Swartz [1992], Morris and Brown [1994] and even race Xue and Fucci [2000]. The issue of speech impairment is particularly relevant in the case of children, whose speech and articulation are still developing. Usually, children over-enunciate words, elongate certain syllables, punctuate inconsistently or skip some words entirely. Their speech patterns do not conform to the patterns used to train systems built for adult users. Collectively, these variables impose a significant logistical challenge and necessitate substantially broad training data to provide any sense of accuracy.

Moreover, the audio quality factor (i.e. the quality and clarity of speech received by the ASR device) also creates a possible technological bottleneck. The positioning of microphones within a physical CAI interface/device and the qualities of the space in which a device is placed (in addition to the position of the device within the space) are critical factors that can influence the intelligibility of speech. The microphone directivity (polar pattern), the arrangement of multiple directional microphones in an array, and the frequency response(s) of the microphones employed within the device may necessitate different post-processing of any received speech, as will the method of transduction (dynamic, electret, or boundary-design) Borwick [1990]. Furthermore, the relative distance factors, critical distances, and the reverberation time (RT60) and average absorption of the space will impact the intelligibility of any received speech. Finally, the shape of surrounding surfaces, the absorption coefficients of surrounding materials, and environmental noise within the space present another potential hurdle for ASR systems. Simply expressed: placing a CAI device on a high countertop in a reflective space such as a kitchen may preclude children from interacting with the system simply because of acoustic features and transduction methodologies.

Within many CAI agent interactions a spoken response from the agent to the user is often required, e.g. responding to questions, issuing reminders, or providing timing information. To generate these responses, several common systems are employed. Voice banking and phrase banking have been in use in various systems for several decades, notably in telephony systems and for individuals with vocal disabilities Veaux et al. [2013]. However, these systems have been superseded by synthesis approaches that produce naturalistic intonation and rhythm patterns. These systems can be divided into Text-to-Speech (TTS) components that generate the text-based semantic content of the phrases spoken, and the synthesis components that generate the corresponding audio, i.e. the 'spoken text'. The TTS synthesis procedure and acoustic models are major elements of ASR, and any improvement towards CAI for children needs to consider them. In particular, TTS is a sequential process that produces a speech utterance from an input text, involving a set of high-level modules Reichel and Pfitzinger [2006]. Many advances have been achieved in this area Trilla [2009]. Despite achievements in the TTS and ASR fields, existing systems are not designed for use with children, whose voices and speech behaviour are more complex than those of adult users.
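As a small illustration of the speaker variability discussed above, the sketch below compares the median fundamental frequency of two recordings, for example an adult and a child speaker, using the librosa audio library. It is only an exploratory measurement script; the file names are hypothetical placeholders and the frequency ranges in the comments are rough indicative figures.

# Sketch: comparing fundamental frequency (pitch) statistics of two recordings to
# illustrate between-speaker variability that an acoustic model must absorb.
# File paths are hypothetical; requires numpy and librosa.

import numpy as np
import librosa

def median_f0(path, sr=16000):
    y, sr = librosa.load(path, sr=sr)                       # load and resample the audio
    f0, voiced_flag, _ = librosa.pyin(                      # probabilistic YIN pitch tracker
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    return float(np.nanmedian(f0[voiced_flag]))             # median pitch over voiced frames

adult = median_f0("adult_sample.wav")   # adults typically fall roughly in the 100-250 Hz range
child = median_f0("child_sample.wav")   # children are often higher, roughly 250-400 Hz, and more variable
print(f"adult ~{adult:.0f} Hz, child ~{child:.0f} Hz")

Gaps of this kind, compounded by accent, articulation and recording-condition differences, are exactly what an acoustic model trained predominantly on adult speech struggles to absorb.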
The ability to replicate or otherwise synthesize a range of possible respondents in a CAI system raises important questions and challenges concerning inherent bias, race, gender, disability, nationality, etc. These questions, arguably some of the most pressing considerations when working with children and CAI, are explored in greater depth in the discussion.

We now draw together the themes noted in the academic and policy literature with respect to the ethical design of CAI for children and discuss their implications, both within the context of the use-case and for their broader adoption within the creative industries. Guiding the decisions and recommendations for the responsible innovation of this meta-story tool were four broad themes drawn from a mapping of the literature, shown in Table 1. We discuss these themes with reference to the design choices made for this meta-story tool and their implications.

Fears about the effects on social relationships, where the anthropomorphised voice agent becomes an 'imaginary friend', listening to children and harbouring their secrets, are noted Druga et al. [2017], Biele et al. [2019]. In this regard, speech, and thereby anthropomorphism, can be seen to affect humanisation Schroeder and Epley [2016]. These aspects relate to the inclusion of children with impairments and disabilities. While the benefits for entertainment and accessibility seem clear, much research stresses the developmental aspects of how children acquire and process information and how they then might ultimately translate that into the world. These considerations formed a key part of the audio and technological development of the tool. We found that there is much research on the way in which CAI understands children's speech, with a corpus of work on the analysis of language and developmental aspects Monarca et al. [2020] critical to the responsible design of AI Fan Along. As previously alluded to, children's speech is not yet fully developed and CAI is regularly found to misunderstand it; research has explored whether CAI is able to uncover language discrimination in children Monarca et al. [2020]. The literature suggests there is a need for inclusive solutions. Druga et al.'s study of child-agent interaction (Alexa, Google Home, Cozmo and Julie Chatbot) Druga et al. [2017] provides one such example, posing a series of questions to children (aged 3-10 years) related to trust and their experiences of the interaction. They found child-agent interactions were particularly revealing about children's reflections on their own intelligence in comparison to that of the agents. The same study suggested that 'different modalities of interaction' may change how children perceive their own intelligence in comparison to agents. Agent voice, tone and friendliness are regularly mentioned as important considerations in ensuring interactive engagement and facilitating understanding and interactivity, for instance through expressions of characters' 'happy eyes'. This echoes the literature on social robots, which promotes the importance of tone and voice pitch, humour and empathy. We suggest that much of this could be applicable to voice agents, where voice pitch is seen to have a 'strong influence' on user experience and enjoyment Niculescu et al. [2013].
Further, in order to improve children's understanding of such systems, research indicates that designers ought to consider embedding into the design a transparent mechanism for explaining why an agent can or cannot answer a particular question, to help re-frame it for the child and to support understanding in a way closer to human interaction Moreno and Mayer [2000]. These small design considerations are important for ensuring that agents become more like companions than foes, and link to issues of trust and transparency.

The field has for some time looked into child-robot interaction and its effects on non-verbal immediacy and children's education Chang et al. [2010], Kennedy et al. [2015, 2016], and into how people treat computers, TV and new media like real people Mullen [1999]. Mayer, Sobko and Mautone's proposed Social Agency Theory Mayer et al. [2003] argues that the social cues of a computer (e.g., modulated intonation, human-like appearance) encourage people to interpret the interaction with a computer as being social in nature. Indeed, some users report having emotional attachments to their voice agents Shead [2017], and this is often debated in the literature because it implies 'humanness', when some claim human-like feelings should be reserved for human interaction Porra et al. [2019]. Research suggests that humans are more likely to engage in deep cognitive processing to make sense of what an artificial agent is saying and communicate accordingly. Children are shown to form bonds with robots and react with distress when robots are mistreated Mayer et al. [2003], but they associate mortality with living agents more than with robots and non-living agents, which is seen to relate to them showing less moral care and less involvement in sharing Sommer et al. [2019]. Some suggest interaction with CAI could hinder pro-social behaviour and call for investigation of repeated interaction over time. As such, testing of the tool in this regard was suggested. A further study by Bonfert et al. responds to the media's portrayal of how children 'adapt the consequential, imperious language style when talking to real people' [Bonfert et al., 2018, p.95]. The experiment involved rejection when children made impolite demands, and found they adapted and behaved more outwardly politely, saying please, etc. However, many reported feelings of discontent toward the AI. Our research revealed several attempts to understand child civility with machines and spoken dialogue systems Burrows [1992], Potamianos and Narayanan [1998], Arunachalam et al. [2001]. Finally, from a user-gender perspective, we were curious about considerations across variables. Research suggests no gender differences with respect to politeness, whereas males expressed more frustration Oviatt [2000]. As children are still learning how to formulate speech and infer meaning from interaction, it was noted that designers should accommodate and be responsive to the different languages of child users of varying ages and demographics. Collection of large-scale data on children of different ages and backgrounds to pull out the 'idiosyncratic features' of children's spoken word was also recommended when personalising CAI Oviatt [2000].

Parents express concern about online privacy with respect to internet-connected devices, as well as concerns about the recording and monitoring of child activity and what data is held by companies [McReynolds et al., 2017, p.5201].
Parents are also seen to be concerned over control and supervision, citing a lack of time to go through hundreds of recordings even if they were made available Horned [2020]. Conversely, it is also reported that some parents find it useful to monitor their children using recordings, while research suggests that parents would not wish to share their child's recordings on social media McReynolds et al. [2017]. This is somewhat at odds with the findings from the children in the same study McReynolds et al. [2017]. In this study many children did not know the device was recording, and some were reported to have tricked the system, secretly wanting to speak to the device at a fair distance from their parents (2 out of 4 participants said they would tell a toy/device a secret) McReynolds et al. [2017]. This highlights the need to consult both parent and child about these key issues and shaped our discussions about future qualitative work involving children and parents. Research recommends that, in order to improve security and privacy, designers might: 1) include 'visual recording indicators' to raise transparency and make the recording capability of the device evident; 2) offer parents the opportunity to engage with privacy decisions; and 3) consider trust and consent, since on the one hand providing the ability for parents to monitor their children might safeguard them, but it also poses ethical and trust issues. The literature also emphasises flexibility. For instance, it was important to consider the context and how adaptive the technology is. For example, certain norms such as freedom or privacy might be violated only if it is in the best interest of the user or the greater good, e.g. releasing medical data in the case of an accident [Van Riemsdijk et al., 2015, p.1204]. Flexible systems might 'alleviate ethical concerns' by providing 'contextual integrity' Nissenbaum [2004]. The need to ensure that systems prevent unethical use, e.g. a school using the technology to find out whether a child is skipping school, is also noted. Notwithstanding the limitations of contextual ethics, the importance of considering contextual use and the everyday ethical norms which govern user behaviour remains pertinent.

Issues of trust and transparency regularly emerge with respect to the ethical design of CAI McReynolds et al. [2017]. Transparency has been at the forefront of the AI ethics debate as it is a tool which helps to generate trust in, and ultimately understanding of, technology. The recent focus on transparency has led to some innovative modelling of smart assistants in order to tackle the issue Geeng [2020]. Following our research, we were clear that designers might consider explicit and implicit ways of ensuring transparency in CAI design to build respect and trust. This links to notions of fairness and inclusivity. Fairness is a key concept in the development of CAI technology for children. In AI, and in the ML field in particular, practitioners call for fairness as a solution to promote inclusivity and overcome bias (i.e., algorithmic and data bias) Jain et al. [2020]. Many interesting approaches to fairness in AI have been proposed, such as IBM's AI Fairness toolkit Bellamy et al. [2019] and FATE: the Fairness, Accountability, Transparency, and Ethics in AI toolkit Bird et al. [2020]. Google has also released a version of what they call Fairness Indicators Xu and Doshi [2019], which is mainly a suite of tools that enable regular computation and visualization of 'fairness metrics' for ML models.
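The kind of metric these toolkits report can be illustrated with a few lines of plain Python: below, utterance-level recognition error rates are compared across speaker groups (here, adult versus child) and the gap between groups is reported. The evaluation data is invented purely for illustration, and the script is a hand-rolled sketch rather than the output of any of the toolkits named above.

# Sketch: a group-wise fairness check on a toy ASR evaluation.
# The (speaker_group, recognised_correctly) pairs below are made up for illustration.

from collections import defaultdict

results = [
    ("adult", True), ("adult", True), ("adult", False), ("adult", True),
    ("child", False), ("child", True), ("child", False), ("child", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    errors[group] += 0 if correct else 1

rates = {group: errors[group] / totals[group] for group in totals}
gap = max(rates.values()) - min(rates.values())

for group, rate in rates.items():
    print(f"{group}: error rate {rate:.2f}")
print(f"disparity between groups: {gap:.2f}")  # a persistently large gap flags an inclusivity problem

A persistent gap of this sort would be a concrete, measurable signal that the system is less inclusive of child users than of adults.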
In 2020 Google also presented ML-fairness-gym, a set of components for building simple simulations to explore the long-term impacts of ML models Long and Magerko [2020], but many such attempts by companies have been accused of tokenistic ethics washing. In order to promote inclusion, much of the literature focuses on negative gender stereotypes in IPAs, particularly with respect to women Brahnam and De Angeli [2012], Danielescu [2020]. Key research, including UNESCO's 2019 paper 'I'd blush if I could', set the scene, voicing concern about assigning gender to voice assistants and the 'troubling repercussions' vis-à-vis children's digital skills development [UNESCO, 2019, p.85]. Additionally, much research draws attention to the issue of gender in design: rather than gender being implicit to a voice, the listener assigns gender to the voice Sutton [2020]. It is suggested that until at least mid-2017, agents were evaluated as perpetuating gender stereotypes UNESCO [2019]. There is also interesting work on misuse and abuse of social agents Brahnam and De Angeli [2012]. Gendered aspects of voice are not the only elements to consider: the branding, the appearance, the quality of the voice, specific pronunciations, etc. are also important Sutton [2020]. In the broader literature, Pearson and Borenstein looked into the ethics of designing companion robots for children; they suggest that an unexplored area is that of gender, which has been a focus with respect to CAI in terms of persona and accent Pearson and Borenstein [2014]. For instance, one study found that whether a robot has a male or female tone of voice seriously affects the way we interact with it Siegel et al. [2009]. Similarly, research found that people trust a female voice more and find it to be more persuasive Crowelly et al. [2009]. Coeckelbergh [2011] suggests that this simply reflects our daily feelings and preferences with respect to gender norms and expectations rooted in stereotypes Nass and Brave [2005], and others discuss how humans assign their own gender to robots, suggesting that one should neither gender technology nor racialise it Ogunyale et al. [2018]. Some scholars suggest that males prefer male agents and females prefer female agents. This has paved the way to thinking about the gendering of CAI, e.g. Donald [2019], who notes that the default voice for IPAs is almost always feminine and that their names, such as 'Cortana' and 'Alexa', are also female, indicative of a social signalling of gendered agents embedded in design, from their voice to their language use and content. The 'neutral' Google Home is described as gender-less, but only in name, as its voice is female, which is also the case for Siri Fan [2020, 2021].

There is also increased focus on racial bias and injustice in technology Atanasoski and Vora [2019]. Human-agent (chatbot) interaction is influenced by racial mirroring, affecting interaction with agents with respect to 'personal interpersonal closeness, user satisfaction, disclosure comfort and desire to continue interacting' Liao and He [2020]. The design implications are clear: 'racial mirroring facilitates the interpersonal relationship between client and agent' [Liao and He, 2020, p.430]. This should be borne in mind when customising personas of (in their case) therapeutic agents, and more generally other kinds of agents.
Recent research describes how the white, feminine voice "reflects characteristics of white femininity in voice and cultural configuration for the purposes of white supremacy and capitalistic gain", projecting white supremacy Moran [2020]. Others refer less to vocal cues relating to race and instead look at content and the culturally value-laden positioning of what subjects are deemed appropriate or not Schlesinger et al. [2018]. These findings indicated to the team that, in terms of the meta-story chat tool, it would be important to go beyond the voice when considering gender and racial issues in CAI design, and to consider what is appropriate content for a particular use and what an appropriate response from a user would be. This scoping provided the research team with a clear approach from which to indicate recommendations and suggestions for the design of AI Fan Along. We outline these in the following section.

We now draw together the discussion points into the design recommendations that resulted for the responsible development of the meta-story tool. Informed by the literature and in consultation with industry, we firstly proposed a series of broad ethical considerations for developers of a meta-story chat tool for children:
Q1. What data will be collected?
Q2. How will the collected data be used?
Q3. How far and in relation to which regulations has the AI safeguarded children's safety and privacy?
Q4. How do we develop a child-friendly and engaging CAI and what behaviours should it exhibit?
Q5. How do we reflect on and mitigate against bias?
Q6. How do we ensure inclusive, responsible innovation and use participatory design techniques?
Q7. What technology and approaches should be adapted to provide moral care and direct pro-social behaviour?
Using these broad questions as a baseline, we draw together the discussion to describe how we approached them with respect to (a) regulatory and legal concerns, (b) cognitive and linguistic development, (c) inclusivity and (d) moral care and social behaviour, as identified in the literature.

The ethical considerations of this meta-story chat tool were primarily concerned with data, privacy and user security. Attending first to Q1 and data collection, we were conscious that the meta-story tool would collect voice recordings of the child-agent interaction; as a consequence, designers and developers must consider hosting and the security of the chosen system architecture. We proposed that an intelligent data privacy solution be implemented, including the gathering of consent from parents and carers in line with data protection and privacy, which is particularly important when considering third-party/external industrial collaboration. Additionally, we proposed that particular attention should be given to parental permissions and levels of control. Testing with users and parents would be paramount in its further development. In response to Q2 about the use of the data collected, there are clear concerns about surveillance in CAI, the extent to which AI voice assistants are always listening, and the efficacy of wake words. We recommended that CAI should not run as a background process, but rather should provide parents with the control to turn it on (e.g. directly after a TV show in order to start discussion between CAI and child, as sketched below). Transparency is of course key to this. We therefore suggested that CAI development should be clear about what data is collected and where it will be stored, as well as acting in compliance with GDPR.
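A minimal sketch of the 'no background listening' recommendation might look as follows: audio is only processed inside a short, parent-initiated session, and no session can start unless parental consent is on record. The class and method names are hypothetical and the sketch is illustrative only, not a description of how AI Fan Along was built.

# Sketch: a parent-initiated, time-limited session gate. Audio is processed only while a
# session opened by a parent is active, and only if parental consent is recorded.
# All names and the timeout value are hypothetical.

import time

class ParentalSessionGate:
    def __init__(self, consent_on_file: bool, max_session_seconds: int = 600):
        self.consent_on_file = consent_on_file        # parental consent recorded at sign-up
        self.max_session_seconds = max_session_seconds
        self.session_started_at = None

    def start_session(self):
        """Called only by an explicit parent action (button press, app toggle)."""
        if not self.consent_on_file:
            raise PermissionError("No parental consent recorded; cannot start a session.")
        self.session_started_at = time.monotonic()

    def may_process_audio(self) -> bool:
        """Audio is processed only inside an open, unexpired session."""
        if self.session_started_at is None:
            return False
        return (time.monotonic() - self.session_started_at) < self.max_session_seconds

gate = ParentalSessionGate(consent_on_file=True)
gate.start_session()                 # the parent turns the tool on after the show
print(gate.may_process_audio())      # True only while the session is open

Coupling such a gate to a visible recording indicator on the device would also serve the transparency recommendations discussed earlier.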
Parents should be asked to provide consent for the use of personal data in the development of the technology. With respect to Q3, on how far and in relation to which regulations the AI safeguards children's safety and privacy, there is a need to examine children's privacy, safety and security by providing identity protection, detecting harmful content, and focusing on location detection and biological/psychological safety. UNICEF is clear that another risk for children pertains to inclusion and equitability. Ensuring that systems are checked to mitigate against historic bias which may stand in the way of children's fair chances in life becomes a key point of ethical reflection. Research debates the role that a voice assistant ought to play with respect to safeguarding and violations of the law, for example if a child were to reveal that they are being abused. In the UK, children can consent to information services at age 13, enabling them to engage freely with the internet, which is an important and largely unavoidable tool. Personalisation also raises questions about for whom the system is personalised, i.e. sensitivity to a child's context, and about the level of automation of personalisation. Relatedly, CAI design should consider speaker variability, including age, emotion, etc. This both improves personalisation and broadens the inclusivity of CAI.

As discussed, inclusivity is a key consideration, relating closely to the broad prompts outlined in Q5 and Q6 on bias and participatory design. We noted that many of the adopted practices to ensure fairness are limited to quantitative techniques, e.g., statistical models or tools that mitigate algorithmic and data biases and assess fairness under sampling uncertainty Kallus et al. [2020], or that de-bias gender Sun et al. [2019]. In order to ethically design CAI for children, we proposed that these methods engage with the relevant ethical literature outside of the NLP or AI fields Blodgett et al. [2020]. In order to ensure fairness in CAI design, we called for an inclusive approach in the early stages of the design process: for example, inclusive methods to ideate answers to key questions such as how to develop transparent algorithms and models that mitigate bias, e.g. adopting a task-oriented dialogue system to avoid the pitfalls of algorithmic bias. At all stages, we proposed that designers should consider how bias may have seeped into the development of CAI, which is pertinent with respect to all aspects of CAI, not just the voice.

With respect to Q6 about inclusive design, we suggest that the design of CAI should be participatory Kumar et al. [2018], Yip et al. [2019], Piccolo et al. [2021]. We note how children are so often not included in co-production, though research involving the views of younger people is emerging Hasse et al. [2019]. By involving children and their parents in the design, it would be feasible to explore how far children use agents for entertainment, learning and more, especially with respect to the thematic areas we describe, particularly in the testing phase and for supporting positive child development. This was suggested for further research and development. This kind of user involvement should keep participants as fully informed as possible about the objectives and procedures of the research to improve AI literacy Long and Magerko [2020].
Indeed, deception of participants (deliberately misrepresenting the purposes and aims of the study) must be avoided whenever possible, and any deception should be revealed during debrief interviews with parents/guardians. We noted that it is not out of the question that designers may need to employ some deception during the 'field tests', should there be issues with the proposed prototype and/or AI voice recognition. This should be limited to obfuscating the mechanisms by which children's interactions will be tracked, and in some instances may require responses from the prototype to be selected by researchers rather than the AI. In advocating a participatory approach, designers must ensure that parents/legal representatives understand consent, the objectives, any potential risks and the conditions under which the research is to be conducted. They should be informed of the right to withdraw the child/young person from the work at any time, and have a contact point where further information about the work can be obtained. Further, we advised that designers of CAI should consider the potential vulnerability of children to exploitation in interaction with adults (potential power relationships between adult and child) in any testing, and how this might affect the child's right to withdraw or to decline to participate. We suggest that designers provide information about the task to children in an accessible way, properly explain data gathering and protection, and manage expectations. We recommend that designers approach families in a timely way to ensure that children have time and opportunity to access support in their decision-making about taking part. Where participants are not literate, verbal consent may be obtained and then documented. Every effort should be made to deal with consent through robust dialogue with both children and their parents. Whenever practical and appropriate, a child's assent should be sought before including them in the research. Future research should consider error scenarios in order to anticipate unforeseen risks and ethical concerns Arunachalam et al. [2001].

Finally, addressing Q7, it is pertinent to ask what technology and approaches should be adapted to provide moral care and direct pro-social behaviour. As reflected in this paper, different approaches and architectures pose distinct challenges for developing safe and responsible CAI that attends to aspects of moral care. One key consideration is the level of freedom versus constraint that is required over NLG. For example, rule- and frame-based approaches involve tightly scripted dialogues and require the designer to devise appropriate response strategies for the potential directions the dialogue may take. In retrieval-based and E2E approaches, the quality of the corpus from which responses are selected or generated is evidently important, and compared to rule-based or slot-filling approaches there is less precise control over what response is generated. With retrieval-based systems, the possible range of responses in the corpus can be checked for suitability, but it is possible that seemingly harmless responses, when produced in a different conversational context, could take on a different meaning. As E2E systems are designed to mimic human-to-human conversations, the quality of the training data will impact model predictions. Stringent data preprocessing efforts will be required to develop E2E systems that generate content suitable for younger audiences.
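As one example of such preprocessing, the sketch below applies a naive lexical blocklist both to a training corpus and to generated replies, substituting a neutral in-domain fallback when a candidate reply is flagged. The word list, fallback text and function names are placeholders of our own; as the findings discussed next make clear, filtering of this kind is a necessary layer but is not sufficient on its own.

# Sketch: a naive lexical safety filter applied to training data and to generated replies.
# The blocklist terms and fallback reply are illustrative placeholders only.

import re

BLOCKLIST = {"damn", "stupid", "hate"}   # placeholder terms, not a real safety lexicon

def is_flagged(text: str) -> bool:
    tokens = re.findall(r"[a-z']+", text.lower())
    return any(token in BLOCKLIST for token in tokens)

def safe_reply(candidate: str, fallback: str = "Let's talk about the story instead!") -> str:
    """Replace a flagged model output with a neutral, in-domain fallback."""
    return fallback if is_flagged(candidate) else candidate

# Filtering a (toy) training corpus before fine-tuning:
corpus = ["What a great episode!", "I hate this stupid show."]
clean_corpus = [line for line in corpus if not is_flagged(line)]

print(safe_reply("That was a stupid question."))   # falls back to the neutral reply
print(clean_corpus)                                # ['What a great episode!']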
Furthermore, Gehman et al. [2020] demonstrate that even after implementing profanity filters on training data and fine-tuning on 'appropriate' data, systems can still produce toxic content. Consequently, ensuring the safety of a dialogue system requires more than removing profanities from a dataset. Harmful societal biases, e.g. gender bias Dinan et al. [2019], Liu et al. [2019], are often contained within datasets, and while Dinan et al. [2019] demonstrate that it is possible to reduce the impact of gender bias in dialogue systems, ensuring against all forms of stereotyping and representational harm in E2E systems is a complex and difficult task. Retrieval-based and E2E approaches aim to increase the human-likeness of CAI agents, which affects how users perceive them. Moreover, some argue that CAI agents should emulate human-like behaviour more precisely Ahmad et al. [2018], Paikari and Van Der Hoek [2018]. In the context of child-friendly CAI, this arguably raises many ethical concerns related to trust and child protection. Finally, CAI capable of engaging conversation and designed to utilise relational strategies may influence the child's perception of the humanness of the agent and influence their behaviour Lovato et al. [2019]. We also highlight the importance of such CAI agents identifying themselves as bots, providing specific answers, and making it clear to the user when the context or question is not comprehensible.

The development of CAI in the creative industry for children has been limited, and there is a growing need to connect theory and practice. Indeed, much of the research has been about the impact on children, as opposed to research with and for children Hodge et al. [2017]. The field in its current manifestation presents, at best, an inconsistent approach to the systems explored here, often with a need to join up the ramifications of situating such technologies within the home with the implications for children. As momentum grows in the overall ethics of AI, the inherent biases and assumptions underpinning the technical methodologies require the utmost scrutiny when applied to vulnerable groups such as children. This pilot case-study highlights the unique concerns located within AI storytelling tools for children. The reflections on the design choices made, and the recommendations, provide a starting point from which to extrapolate and build on the field of AI ethics for children. However, further research to provide greater depth and richness of perspectives is recommended, and significant remedial work is required at all levels of the design process across stakeholders, inclusive of developers, content makers, users (including parents and guardians from all backgrounds) and, importantly, educators and regulators.

References
A review on automatic speech recognition architecture and approaches
Natural language processing techniques in text-to-speech synthesis and automatic speech recognition. Departament de Tecnologies Media
Hierarchical multi-task natural language understanding for cross-domain conversational AI: HERMIT NLU
Convolutional neural networks for speech recognition
"Hey Alexa, what's up?": A mixed-methods study of in-home conversational agent usage
"Hey Google, is it OK if I eat you?": Initial explorations in child-agent interaction
The development of CAI in the creative industries for children has been limited and there is a growing need to connect theory and practice. Indeed, much of the research to date has focused on the impact of such systems on children, as opposed to research conducted with and for them Hodge et al. [2017]. The field in its current manifestation presents, at best, an inconsistent approach to the systems explored here, and there remains a need to join up the ramifications of situating such technologies within the home with the implications for children. As momentum grows in the overall ethics of AI, the inherent biases and assumptions underpinning the technical methodologies require the utmost scrutiny when applied to vulnerable groups such as children. This pilot case-study highlights the unique concerns located within AI storytelling tools for children. The reflections on the design choices made and the recommendations provide a starting point from which to extrapolate and build on the field of AI ethics for children. However, further research to provide greater depth and richness of perspectives is recommended, and significant remedial work is required at all levels of the design process across stakeholders, inclusive of developers, content makers, users (including parents and guardians from all backgrounds) and, importantly, educators and regulators.

References

A review on automatic speech recognition architecture and approaches
Natural language processing techniques in text-to-speech synthesis and automatic speech recognition. Departament de Tecnologies Media
Hierarchical multi-task natural language understanding for cross-domain conversational AI: HERMIT NLU
Convolutional neural networks for speech recognition
"hey alexa, what's up?" a mixed-methods studies of in-home conversational agent usage
"hey google is it ok if i eat you?" initial explorations in child-agent interaction
Monitor report: A comprehensive annual report focused on children and young people's media consumption, purchasing habits, attitudes and activities
Risks and safety on the internet: the perspective of european children: full findings and policy implications from the eu kids online survey of 9-16 year olds and their parents in 25 countries
What is ai literacy? competencies and design considerations
Online harms white paper
Mimisbrunnur: Ai-assisted authoring for interactive storytelling
Game ai as storytelling
Broadening the discussion of ethics in the interaction design and children community
Toys that listen: A study of parents, children, and internet-connected toys
Intelligent personal assistants: A systematic literature review
Agile software development process applied to the serious games development for children from 7 to 10 years old
Parents' and children's perceptions of active video games: a focus group study
The discursive construction of 'good parenting' and digital media-the case of children's virtual world games. Media
Digital parenting: designing children's safety. People and Computers XXIII Celebrating People and Technology
Designing serious games to teach ethics to young children
Considering virtual reality in children's lives
Repeated play reduces video games' ability to elicit guilt: Evidence from a longitudinal experiment
Co-designing online privacy-related games and stories with children
Laughing is scary, but farting is cute: A conceptual model of children's perspectives of creepy technologies
Chatbots to support children in coping with online threats: Socio-technical requirements
Artificial intelligence ethics guidelines for developers and users: clarifying their content and normative implications
The practical ethics of bias reduction in machine translation: Why domain adaptation is better than data debiasing
Vannevar Bush. Science, the endless frontier
Responsible innovation: managing the responsible emergence of science and innovation in society
Thematic analysis
Investigation of language understanding impact for reinforcement learning based dialogue systems
An integrative and discriminative technique for spoken utterance classification
Boostexter: A boosting-based system for text categorization
Natural language question answering: the view from here
What would you ask a conversational agent? observations of human-agent dialogues in a museum setting
Alice chatbot: trials and outputs
i am your father: dealing with out-of-domain requests by using movies subtitles
The media inequality: Comparing the initial human-human and human-ai social interactions
Not unles you ask nicely: The interpretative nexus between analysis and information
Spoken dialog systems for children
Politeness and frustration language in child-machine interactions
Choice of plausible alternatives: An evaluation of commonsense causal reasoning
Sequence to sequence learning with neural networks
A neural conversational model
Pre-training of deep bidirectional transformers for language understanding
A diversity-promoting objective function for neural conversation models
A hierarchical latent variable encoder-decoder model for generating dialogues
Generating informative and diverse conversational responses via adversarial information maximization
Personalizing dialogue agents: I have a dog, do you have pets too?
Caire: An end-to-end empathetic chatbot
Emotion recognition based on phoneme classes
Ageing voices: The effect of changes in voice parameters on ASR performance
Age-related differences in speech variability among women
Body mass index and acoustic voice parameters: is there a relationship?
Gender Difference in Voice Onset Time
Effects of race and sex on acoustic features of voice analysis
Microphones: Technology and Technique
Towards personalised synthesised voices for individuals with vocal disabilities: Voice banking and reconstruction
Text preprocessing for speech synthesis
Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio
Tacotron: Towards end-to-end speech synthesis
Deep voice: Real-time neural text-to-speech
Deep voice 2: Multi-speaker neural text-to-speech
Deep voice 3: 2000-speaker neural text-to-speech
Neural voice cloning with a few samples
Young children's reading and learning with conversational agents
Communicative and social consequences of interactions with voice assistants
Hey google, do unicorns exist? conversational agents as a path to answers to children's questions
If you ask nicely, i will answer: Semantic search and today's search engines
Personification of the amazon alexa: Bff or a mindless companion
How might voice assistants raise our children
Engineering UK. Kids keen to meet their rffs (robot friend forever) - engineeringuk | inspiring tomorrow's engineers
Creating "companions" for children: the ethics of designing esthetic features for robots
Mistaking minds and machines: How speech affects dehumanization and anthropomorphism
Why doesn't the conversational agent understand me? a language analysis of children speech
Making social robots more attractive: the effects of voice pitch, humor and empathy
Engaging students in active learning: The case for personalized multimedia messages
The ugly truth about ourselves and our robot creations: the problem of bias and social inequity
Exploring the possibility of using humanoid robots as instructional tools for teaching a second language in primary school
Higher nonverbal immediacy leads to greater learning gains in child-robot tutoring interactions
Heart vs hard drive: children learn more from a human tutor than a social robot
The media equation: How people treat computers, television, and new media like real people and places
Social cues in multimedia learning: Role of speaker's voice
Report: 1 in 4 people have fantasised about alexa, siri, and other ai assistants
"Can computer based human-likeness endanger humanness?" - a philosophical and ethical perspective on digital assistants expressing feelings they can't have
Children's perceptions of the moral worth of live agents, robots, and inanimate objects
If you ask nicely: a digital assistant rebuking impolite voice commands
Talking to thimble jellies: Children's conversational speech with animated characters
18 years of ethics in child-computer interaction research: a systematic literature review
examine the variables influencing the use of artificial intelligent in-home voice assistants
Alexa, siri, cortana, and more: an introduction to voice assistants. Medical reference services quarterly
Conversational agents in a family context: A qualitative study with children and parents investigating their interactions and worries regarding conversational agents
Creating socially adaptive electronic partners: Interaction, reasoning and ethical challenges
Privacy as contextual integrity
Egregor: An eldritch privacy mental model for smart assistants
Imperfect imaganation: Implications of gans exacerbating biases on facial data augmentation and snapchat selfie lenses
Ai fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias
Fairness indicators: Scalable infrastructure for fair ml systems
Gender affordances of conversational agents
Eschewing gender stereotypes in voice assistants to promote inclusion
UNESCO. I'd blush if i could: closing gender divides in digital skills through education - unesco digital library
Gender ambiguous, not genderless: Designing gender in voice user interfaces (vuis) with sensitivity
Persuasive robotics: The influence of robot gender on human behavior
Gendered voice and robot entities: perceptions and reactions of male and female subjects
Humans, animals, and robots: A phenomenological approach to human-robot relations
Wired for speech: How voice activates and advances the human-computer relationship
Does removing stereotype priming remove bias? a pilot human-robot interaction study
Lai-Tze Fan. Unseen hands: On the gendered design of virtual assistants and the limits of creative ai
Is it human or machine?: Symbiotic authorship and the gendered design of ai
Surrogate humanity: Race, robots, and the politics of technological futures
Racial mirroring effects on human-agent interaction in psychotherapeutic conversations
Racial technological bias and the white, feminine voice of ai vas. Communication and Critical/Cultural Studies
Let's talk about race: Identity, chatbots, and ai
Assessing algorithmic fairness with unobserved protected class using data combination
Mitigating gender bias in natural language processing: Literature review
Language (technology) is power: A critical survey of "bias"
Youth and artificial intelligence: Where we stand
Realtoxicityprompts: Evaluating neural toxic degeneration in language models
Queens are powerful too: Mitigating gender bias in dialogue generation
Does gender matter? towards fairness in dialogue systems
Review of chatbots design techniques
A framework for understanding chatbots and their future
Restricted content: Ethical issues with researching minor's video game habits
https://www.gov.uk/government/publications/cdei-publishes-its-first-series-of-three-snapshot-papers-ethical-issues-in-ai/snapshot-paper-smart-speakers-and-voice-assistants, September 2019. (Accessed on 11/27/2020).
Sandra Cortesi, Alexa Hasse, Andres Lombana-Bermudez, Sonia Kim, and Urs Gasser. Youth and digital citizenship+ (plus): Understanding skills for a digital world. Berkman Klein Center Research Publication, 2020(2), 2020.

This work was funded by the XR Stories: Young XR grant, AI Fan Along, and the Digital Creativity Labs, jointly funded by EPSRC/AHRC/Innovate UK, EP/M023265/1 and the Humanities and Social Change International Foundation. We would also like to thank our industry partners.