Designing Conversational Robots with Children during the Pandemic

Thomas Beelen, Ella Velner, Roeland Ordelman, Khiet P. Truong, Vanessa Evers, Theo Huibers

2022-05-23

Abstract: Our research project (CHATTERS) is about designing a conversational robot for children's digital information search. We want to design a robot with a suitable conversation that fosters a responsible trust relationship between child and robot. In this paper, 1) we give a preliminary view on an empirical study of children's trust in robots that provide information, which was conducted via video call due to the COVID-19 pandemic; 2) we give a preliminary analysis of a co-design workshop we conducted, where the pandemic may have impacted children's design choices; and 3) we close by describing the upcoming research activities we are developing.

Children can access digital information in a variety of ways, for example by using a Search Engine (SE) on a computer, or by using a Voice Assistant (VA) on a mobile phone or a dedicated device. As we outlined in [1], many of these tools are flawed and do not support children adequately in finding the information they need. Our project, called CHATTERS¹, focuses on a physically embodied agent (robot) as a search tool for children of 10-12 years old [1]. The project was defined in 2019, before the COVID-19 pandemic. The physical form of this agent provides an engaging interaction, as well as drawing attention in the context of a museum. Similar to voice agents, our robot uses speech to communicate with a child. The project consists of two research topics: conversational search for children, and how the robot can make sure this happens responsibly.
Where commercial Voice Agents use a limited interaction model consisting of a query and a direct response [14], we suggest a spoken conversational interaction. This interaction instead follows the Spoken Conversational Search paradigm, meaning there is a mixed-initiative back-and-forth around information needs [18]. To better support children, we study how we can support search via responsible conversation.

While a robot may help children in their search journey by using speech and conversation, we also need to consider the risks of children using such an embodied agent. Robots are prone to errors. Furthermore, autonomously moderating online information is infeasible. This may lead to a robot presenting false or misleading information. Knowing that children build social bonds with robots and are prone to trust them, this could create precarious situations [2, 5]. To monitor how children assess the robot and the information it provides, we propose using the child's speech during the interaction to measure their trust in the robot. If this trust turns out to be too high (often referred to as overtrust [4, 12]), the robot should intervene by altering its behavior to attempt to lower the trust. This can be seen as a sense-think-act loop, where the robot senses the level of the child's trust in the robot, assesses whether there is overtrust, and acts accordingly with an intervention.

We address the two research tracks through multiple experiments and (participatory) design activities. The next sections describe an experiment on children's perception of robots and online information, as well as a co-design activity that has been conducted.

¹ https://chatters-cri.github.io/

Fig. 1. Setup of experiment 1, where the robot was in a separate room with a camera, and appeared whenever the child was interacting with it. The child had a laptop with the video call on the left side of the screen and the questionnaire on the right.
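The sense-think-act loop described above can be sketched as follows. This is an illustrative sketch only: the speech-feature heuristic, the threshold value, and the intervention labels are hypothetical placeholders, not the project's actual implementation.

```python
# Illustrative sketch of the sense-think-act trust loop described above.
# The speech-feature heuristic, OVERTRUST_THRESHOLD, and the action labels
# are hypothetical placeholders, not the project's actual implementation.

OVERTRUST_THRESHOLD = 0.8  # assumed calibration point on a 0-1 trust scale


def sense(speech_features: dict) -> float:
    """Estimate the child's trust from speech.

    Toy heuristic: fewer hesitations -> higher apparent trust.
    A real system would use a model trained on annotated speech data.
    """
    hesitation_rate = speech_features.get("hesitation_rate", 0.0)
    return max(0.0, 1.0 - hesitation_rate)


def think(trust: float) -> bool:
    """Assess whether the estimated trust level indicates overtrust."""
    return trust > OVERTRUST_THRESHOLD


def act(overtrust: bool) -> str:
    """Choose a behavior: intervene to dampen trust, or carry on."""
    return "dampen_trust_intervention" if overtrust else "continue_normally"


def trust_loop_step(speech_features: dict) -> str:
    """One pass of the sense-think-act loop."""
    return act(think(sense(speech_features)))
```

For example, an utterance with very few hesitations would yield a high trust estimate and trigger the dampening intervention, while a hesitant utterance would let the interaction continue unchanged.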
We focus on the impact of the global COVID-19 pandemic on search technology for children, as well as its impact on research methods.

A first experiment was designed to 1) gain knowledge on how children's trust in a robot influences their attitude and behavior towards the robot and the information it provides, and 2) collect speech data of children interacting with a robot in high- and low-trust conditions. In June 2021, we recorded 30 children (10-12 years old) playing a quiz with a trustworthy and an untrustworthy robot and measured their attitudes toward the robots with questionnaires (trust, likability, knowledgeability, intelligence). Because of COVID-19, the experiment was done via video call (see Fig. 1 for the setup) to minimize face-to-face interactions. After the children had interacted with both robots, we conducted a semi-structured interview. The goal of the interview was to gain insight into how children perceive the robots, how they expect robots to deal with the credibility of online information, and their willingness to use a robot. Additionally, we asked them about their current use of voice assistants, since this is a widespread technology for searching with voice. Although this study revealed many interesting results, we want to highlight the results of the interview, since these are particularly interesting for the community of this workshop. A more detailed description of this study and its other results will be reported in a later publication.

The participants (N=30) were asked about their view on robots, the information they can provide, and their current voice agent usage. Halfway through the study, we realized it would also be valuable to ask children about their view on interacting with a robot over video call compared to face-to-face (N=15). All child answers are translated from Dutch.

2.1.1 Robots and digital information. Children seem to be aware of the existence of "bad information" and "bad websites".
When asked what the robot should do about this, they suggested using specific websites that are known to be knowledgeable on the topic, or comparing multiple websites. One child also suggested that robots should use websites based on search engine ranking. However, not all children think robots are (currently) able to recognize these "bad websites". One child stated that robots are still in development and might be able to do this in the future. Several children mentioned the programmer of the robot, and that they can program the robot to be able to make this distinction, revealing a fairly profound understanding of how robots are made.

Since spoken search is currently done with voice agents such as Google Home or Siri, we wanted to know whether children actually use these agents for this purpose, as previous research suggests [9, 14]. Two children stated that they don't trust voice agents and therefore don't use them, because they might listen in on or record random conversations. Some children do not use voice agents, stating that typing is easier. However, others confirmed previous work suggesting that children may prefer voice agents over traditional ways of searching [8, 10]. They stated it "is easier than typing", and they use it "when I don't know how to write it". Most children that use voice agents use them for fun (e.g., music, jokes, funny interactions). Some also stated that they use them for information, or to call or send messages to people, especially "when you have your hands full".

2.1.2 Video call compared to face-to-face. Since talking to a physical robot in the same room might give a different experience than talking with a robot in front of a camera [13], 15 participants were asked how they think the video-mediated interaction compares to a face-to-face interaction with the robot. Note that this was a hypothetical question for most children, since they had never interacted with a robot face-to-face.
Children's thoughts on these similarities and differences could be especially valuable with possible future lockdowns in mind. Furthermore, performing HRI studies online via video call could give more opportunities for much-needed intercultural studies. Many children think it would be different to talk to a robot face-to-face, as expected. Six children said this would be more fun, but five children also stated that it might make them more nervous. Although nervousness does not necessarily have to be a bad thing, it might be interesting to study this further and investigate where the nervousness comes from. One child explained this as "just like talking to the king over Zoom is different than in real life".

The robot we are designing needs to fit into children's lives. This introduces several design challenges, such as the robot's physical design, social role, personality traits, conversational style, and approach to online information credibility. We invited children to a workshop (February 2022) as design partners to work with us on these challenges [7]. During the workshop, children worked in groups on a worksheet with questions, a robot design, and a storyboard. The worksheet is titled "Design your own homework robot", and we describe how the robot may help the user by finding information. It has a few questions that prompt the children to discuss design aspects (e.g., characteristics and skills) together, a drawing of the robot, and finally a storyboard bringing it all together and demonstrating some of the robot's capabilities. We worked together with a Dutch school to run these workshops, which also included an interactive presentation and a robot programming activity. To fit our activities into a school day, we needed to work with an entire class at once. Two researchers and a teacher walked around the groups to assist where possible. Parents were asked for consent to include their children's worksheets in our analysis.

Mobility.
Most groups designed a robot for at home (13 groups); this may have been influenced by framing the design challenge as a homework robot. Many groups designed robots that are mobile or portable and can thus be used at multiple locations. Many robots have legs or wheels, and in some cases it is explicitly mentioned that the robot is able to ascend stairs, for example. This may reflect changes due to COVID-19, such as increased working from home, where school materials may be taken home. Remarkably, several groups mention or depict the robot charging its batteries. Some even show a robot that can plug itself in to charge. This attention to charging and battery power seems to further emphasize the need for a mobile or portable system that can be used in multiple locations. This focus does not seem present in earlier comparable design sessions [11].

Companionship. More than half of the groups personified their robot by giving it a name, in line with preferences discovered in previous work [17]. Furthermore, many groups emphasized the social character of the robot and focused on companionship. The five most used words to describe the robots' character are (translated from Dutch, in descending order): smart, funny, helpful, kind, and social. One group's storyboard begins with the robot noticing the child is in a bad mood and offering the child a hug, as shown in Figure 2. Another group shows the robot referring to the user as "bestie". Previous work on co-designing search agents with children also shows that children prefer a social and friendly robot [11], though it is possible that the COVID-19 pandemic increased the need for companionship.

The work we describe comes with several limitations. 1) The children that participated (ages 11-13) were older than our target audience (ages 10-12), where Druin [6] found children of ages 7-10 to be the best design partners.
2) Researchers and teachers had limited time to spend with each design group, while past studies suggest inter-generational teams work best [6]. However, the fact that the children in our sample were a bit older may lead to greater independence within teams. Our design partners being a bit older (limitation 1), as well as limited researcher interaction (limitation 2), may be the cause of children's disinterest in some cases. Some worksheets contained memes or jokes that may be indicative of this.

We will continue our investigations in this project in the two previously described directions: conversational design and how we can do this responsibly, keeping in mind children's experiences and challenges in current times. We are currently developing two studies on the design of the conversation between children and a search agent, as well as a study on how to further dampen children's trust in robots. These are outlined below.

4.1.1 Children's search conversations. In this planned study we want to learn about children's behavior and expectations during conversational search. Inspired by Trippas et al.'s study with adults [16], children will search for information in dyads. One of the children takes the role of intermediary, the other of seeker. The seeker is given a search task, and the intermediary has access to a computer with a search engine. The participants communicate via speech and have to work together to solve the search tasks. The participants are seated such that they can see each other, but only the intermediary can see the computer screen. By analysing the resulting conversations, we will learn more about child-specific interaction patterns and expectations of conversational search for children. The interactions will be annotated using the SCoSAS annotation scheme [16].
Then conversational aspects such as type of utterance (both parties), information requests (seeker), and query formulation (intermediary) can be analysed and compared to findings with adults.

4.1.2 Conversational compared to query-response search. The goal of the second study we are developing is to see whether an agent using a simple conversational approach is more successful in helping children during the search process. We want to find out if the simplified approach leads to more semantically rich descriptions of the child's information need, and how children experience the interaction. In the simplified approach, the robot aims to elicit semantically rich descriptions of the child's information need by asking questions that are generated with a rule-based algorithm. This algorithm will identify search terms in the child's speech and ask the child to elaborate. The semantic richness in the conversational setting will be compared to an interaction with an agent using the more traditional query-response interaction commonly found in commercial voice agents. To measure children's experience, we will use questionnaires on engagement and trust, as well as a semi-structured interview.

We are currently analyzing the speech that we collected in the first study (described in Section 2) to search for evidence that trust is reflected in children's speech. Although we tried to manipulate the robot to be either very untrustworthy or very trustworthy, we noticed high trust in both conditions. Although even small behavioral changes in a robot are reported to potentially influence a child's perception of the robot [15], our manipulation might not have been strong enough to achieve the impact we were aiming for. Alongside the behavior and appearance of the robot (which were the focus of the first manipulation), the robot's context is also a prominent factor in the trust people have in robots [3].
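A rule-based elicitation step of the kind described above might be sketched as follows. The stopword list and the question template are illustrative assumptions, not the planned system; a real implementation would work on transcribed child speech and use richer linguistic rules.

```python
# Sketch of a rule-based elicitation step: identify candidate search terms
# in the child's utterance and ask the child to elaborate on one of them.
# The stopword list and question template are illustrative assumptions,
# not the planned system.

STOPWORDS = {"i", "a", "an", "the", "want", "to", "know", "about",
             "do", "does", "how", "what", "is", "are"}


def extract_search_terms(utterance: str) -> list:
    """Keep non-stopword tokens as candidate search terms."""
    tokens = [w.strip(".,?!").lower() for w in utterance.split()]
    return [w for w in tokens if w and w not in STOPWORDS]


def elaboration_question(utterance: str):
    """Build a follow-up question about the first identified search term.

    Returns None when no content word was found; a real dialogue system
    would then fall back to a generic prompt.
    """
    terms = extract_search_terms(utterance)
    if not terms:
        return None
    return f"Can you tell me more about {terms[0]}?"
```

For instance, given the utterance "I want to know about dinosaurs", the sketch would identify "dinosaurs" as a search term and ask the child to elaborate on it.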
To look further into manipulating trust levels, we are considering purposely misaligning the context of the robot and the interaction. This could be implemented in the envisioned sense-think-act loop to dampen cases of overtrust.

In this document we described our ongoing project on robots for children's IR. We described initial results of two recent studies, where our analysis focused mainly on COVID-19. The global pandemic impacted our way of conducting research. The children we asked after the video-mediated robot interaction expected that their experience would differ from a face-to-face interaction. In our design sessions with children, we noticed a strong focus on mobile robots, as well as on companionship. These factors may have been influenced by the pandemic, during which children spent extended periods following school from home. We further outlined the planned research in our project, which was updated with regard to earlier workshop contributions. We thereby continue working towards responsible conversational robots for children's information search.

Acknowledgments. This research is supported by the Dutch SIDN fund (https://www.sidn.nl/) and TKI CLICKNL funding of the Dutch Ministry of Economic Affairs (https://www.clicknl.nl/).

References
[1] Does your robot know? Enhancing children's information retrieval through spoken conversation with responsible robots
[2] Multimodal child-robot interaction: building social bonds
[3] Framing Factors: The Importance of Context and the Individual in Understanding Trust in Human-Robot Interaction
[4] Towards a Theory of Longitudinal Trust Calibration in Human-Robot Teams
[5] Shall I Trust You? From Child-Robot Interaction to Trusting Relationships (2020)
[6] Cooperative inquiry: developing new technologies for children with children
[7] The role of children in the design of new technology
[8] How Children Search the Internet with Keyword Interfaces
[9] He Is Just Like Me: A Study of the Long-Term Use of Smart Speakers by Parents and Children
[10] Children searching information on the Internet: Performance on children's interfaces compared to Google
[11] You've Got a Friend in Me: Children and Search Agents
[12] Trust in automation: Designing for appropriate reliance
[13] The benefit of being physically present: A survey of experimental works comparing copresent robots, telepresent robots and virtual agents
[14] Hey Google, Do Unicorns Exist? Conversational Agents as a Path to Answers to Children's Questions
[15] Robots educate in style: The effect of context and non-verbal behaviour on children's perceptions of warmth and competence
[16] Towards a model for spoken conversational search
[17] Children asking questions: speech interface reformulations and personification preferences
[18] Conversational Information Seeking