key: cord-0456788-tzyc70o0
authors: Visram, Sheena; Leyden, Deirdre; Annesley, Oceiah; Bappa, Dauda; Sebire, Neil J
title: Perceptions and attitudes of Children and Young People to Artificial Intelligence in Medicine
date: 2021-10-10
journal: nan
DOI: nan
sha: 3320b97b24abeec6f58da470b679a286f6525429
doc_id: 456788
cord_uid: tzyc70o0

There is increasing interest in Artificial Intelligence and its application to medicine, but perceptions of it are less well known, notably amongst children and young people. Twenty-one members of a Young Persons Advisory Group for research recommend creating an enabling environment with children and young people through educational workshops, using practical examples in which Artificial Intelligence helps but does not replace humans, in order to address concerns, build trust, and communicate effectively about potential opportunities.

There is growing interest in the application of Artificial Intelligence (AI) to medicine. Although initially described as exotic, expensive, and of no benefit to ordinary people1, the field has since attracted exponentially increasing global interest2. High-quality reviews of AI in healthcare have addressed its use, value, and trustworthiness3-6. In children's healthcare, parents ask for openness during AI development and ask that technical experts consider shared decision making, the human element of care, and social justice as part of the development process7. However, whilst the views of Children and Young People (CYP) can shape healthcare provision8-11, few policy recommendations reflect their views and beliefs12-14. This is particularly the case for CYP with tacit healthcare knowledge.

Great Ormond Street Hospital for Children (GOSH) is the largest paediatric centre in the UK and an international centre of excellence for many clinical specialties. As part of the hospital, the Digital Research, Informatics, and Virtual Environments (DRIVE) unit aims to accelerate research on and deployment of new technology, including by working with patients and families to optimise technologies such as AI. The Young Persons Advisory Group (YPAG) is a patient and public involvement group embedded at the hospital, comprising CYP who are interested in improving health by advising on research, and forms part of a national network (Generation R). Using a workshop entitled AI&me, we address a current deficit in AI healthcare policy and practice by exploring the perspectives of CYP with lived experiences of healthcare, including establishing the priorities of GOSH YPAG in an exploratory PPEI design workshop on healthcare AI.

A single design workshop examined the perceptions and attitudes of CYP towards AI applications in medicine and healthcare. Findings were reported using the COREQ 32-point checklist for focus group reflexivity, design and analysis (included as a supplement)18. Members of GOSH YPAG contributed to an exploratory engagement workshop, run virtually and lasting one hour, to explore their perceptions of AI in medicine and healthcare. They rated levels of comfort with AI-related design scenarios and discussed mechanisms to effectively engage with patients and families on AI's future potential. The virtual workshop opened with a short, broad discussion about AI, after which nine design scenarios were presented, including: virtual reality visits to hospitals, cleaning robots, talking robots, chatbots to diagnose disease, self-driving vehicles, AI-powered nurses, 3D-printed hearts and sensor technology to reduce overcrowding.
These scenarios were developed from a recent survey of 2,000 parents15 and predominantly focused on healthcare applications of technologies intended to delight, inform, predict, automate or diagnose/treat. Quantitative polling of scenarios was undertaken anonymously using a 10-point Likert scale. To collect comments, a virtual chat function and an agile audience response system (Mentimeter AB, Stockholm, Stockholms Län, Sweden) were used, since these are effective for encouraging participation in virtual learning environments16,17. Comments were collected verbatim. Inductive qualitative content analysis identified concepts and emergent themes using NVivo for Windows v.1.4.1 (QSR International, Melbourne, Australia) to shape future research questions, as there is limited existing qualitative research regarding perceptions of AI in medicine and healthcare amongst CYP with lived experiences. This involved data familiarisation, immersion and iterative identification of codes, concepts, phrases and language. Open codes were collated under emerging themes and findings were supported by verbatim quotes.

Twenty-one YPAG members (aged 10-21 years) participated, generating 128 unique comments across platforms. The language used by participants included words describing how AI made them feel (58 generalised occurrences, including affect, care, compassion, consider, experience and fear); AI was commonly referred to as a 'robot' (18 occurrences) and as 'creepy' on six occasions. Patients were commonly mentioned (18 occurrences), and generalised words relating to comfort (assure and reassure) were used 26 times. The comments were conversational, but several were structured as questions (n=28, 22%), suggesting an interest in understanding more about AI (Figure 1b). Of the nine design scenarios presented, sensor technology to reduce overcrowding (M 7.4, SD 2.7), cleaning robots (M 7.9, SD 2.4), virtual reality visits (M 6.5, SD 2.8) and 3D-printed organs (M 6.2, SD 3.5) were the most accepted, whilst AI-powered nurses were the least accepted (M 2.4, SD 2.3; Figure 1c). Three themes emerged from the exploratory engagement workshop: governance, human-centredness and trust (Figure 1a).
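For readers interested in how such descriptive summaries can be produced, the short Python sketch below illustrates computing per-scenario means and standard deviations from 10-point Likert ratings, together with simple keyword and question counts from verbatim comments. This is a minimal illustrative example only, not the authors' actual analysis pipeline; the ratings, comments and keyword list shown are hypothetical.

```python
# Illustrative sketch only: hypothetical data, not the study's actual analysis pipeline.
from statistics import mean, stdev

# Hypothetical 10-point Likert comfort ratings (1 = least comfortable, 10 = most comfortable)
# for three of the design scenarios.
ratings = {
    "Sensor technology to reduce overcrowding": [9, 5, 8, 7, 10, 6],
    "Cleaning robots": [10, 8, 9, 6, 7, 8],
    "AI-powered nurses": [2, 1, 4, 3, 2, 5],
}

# Hypothetical verbatim comments collected via the chat and audience response system.
comments = [
    "What safety measures are in place?",
    "I think we watch too many sci-fi movies",
    "AI is creepy if it acts like a human",
    "Would doctors be able to overrule AI?",
]

# Keywords of interest, mirroring the kind of frequency counts reported in the Results.
keywords = ["robot", "creepy", "patient", "assure"]

# Per-scenario mean and standard deviation of comfort ratings.
for scenario, scores in ratings.items():
    print(f"{scenario}: M {mean(scores):.1f}, SD {stdev(scores):.1f}")

# Keyword occurrence counts across all comments (case-insensitive substring match).
for word in keywords:
    count = sum(comment.lower().count(word) for comment in comments)
    print(f"'{word}': {count} occurrence(s)")

# Proportion of comments phrased as questions.
questions = [c for c in comments if c.strip().endswith("?")]
print(f"Questions: {len(questions)}/{len(comments)} ({100 * len(questions) / len(comments):.0f}%)")
```

In practice, the study's qualitative coding was carried out inductively in NVivo rather than by keyword matching; the sketch covers only the simple frequency and rating summaries.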
Safety and benefits formed the basis of a number of early inquiries about AI. There was interest in ensuring that access to AI-enabled technologies is fair and available to all. Safety, security risks and reliability were of particular interest, one participant asking: "What safety measures are in place?", another: "What happens if the robot make(s) a mistake or the software breaks down?", expanding to ask: "Would the robot get the benefit of the doubt?" More broadly, on the ethical use of AI, one participant asked: "How do you stop people abusing the system?" As members of a YPAG at a specialist paediatric hospital, participants raised a number of questions about the role of AI for rare diseases and its potential benefits for challenges faced in healthcare, one participant asking: "Will it speed up waiting times in A&E" and, on effectiveness: "If a rare disease occurs, how will the robot know what to do as there is no specific treatment", another: "How do you train AI if someone develops a new illness" and: "Is an online chat bot actually more beneficial to patients?"

The role of human-centred care was another emergent theme, with empathy, agency and power dynamics considered important. It was thought that AI would not take emotions into account and that this could have an impact on treatment, especially where mental health and wellbeing are concerned. One participant asked: "How do you teach AI to be empathetic and understand pain?", another: "How would bad news be broken to patients?" Agency and control over the use of AI was a pertinent topic, one participant reflecting: "I like the idea of AI looking at scans and in surgery, but definitely not for decision making or patient interaction", another asking: "Would AI make the decision or be the advisor to the doctor?" and another: "Would doctors be able to overrule AI if they're not happy with the decision/course of action?" Replacing humans was commonly associated with the impact on jobs, one participant expressing: "I don't like the idea of robots taking jobs" and another asking: "What will happen to the doctors who are working now?", expanding to: "will their jobs get replaced?". Another participant reflected on the potential impact on skillsets and on disparities between countries that use AI and those that do not, asking: "Will doctors need to be less qualified if the use of AI is normalised?" A popular view anticipated the role of AI as supporting rather than replacing healthcare staff, one participant stating: "I will find it ok as long as it is just helping and does not replace humans".

The influence of movies, games and science fiction on perceptions of AI was a popular topic. Opinions on AI amongst children and young people are influenced by pop culture and science fiction, which often depict robots as evil, one participant reflecting: "I think we watch too many sci-fi movies", another: "that's why I'm scared of robots". This led to comments about creepiness, one participant stating: "AI is creepy if it acts like a human". Educational workshops offering reassurance and practical examples that use AI to help, but not replace, humans were suggested as a way to address common worries, build trust and communicate effectively about AI. To cultivate trust, it was recommended that healthcare staff be transparent about its use, with clear explanations and examples of its use in everyday life (Figure 1d), one participant recommending: "Being transparent when you are already using it e.g., when AI is used in conjunction with surgeons" with "success stories & when things go wrong & how it was resolved". Ethical considerations about who would make decisions and what might happen should something go wrong were also raised, one participant stating: "Make sure you address common worries instead of avoiding them when explaining AI". Overall, participants were interested in engaging in further discussions about AI, and a generational gap was identified, with young people considered more open to and comfortable with AI in general: "YPAG members are keen to be involved, for our perspective and ideas, especially as AI is our future".

The findings of this exploratory workshop, intended to inform future research, demonstrate that CYP are open-minded about using AI in medicine and healthcare and believe that this technology will change everyday life in fundamental ways, but find it difficult to articulate their views on how AI should be developed. This is partly due to the breadth of applications and their impacts19. CYP need to be educated about AI and encouraged to participate in its development, including by making AI explainable to children and young people and including them in AI policy development cycles12.
Outside of healthcare, UNICEF recommends nine requirements for child-centred AI, including inclusion, safety, privacy, transparency, and the need to create an enabling environment, in order to establish whether AI systems are designed for children and to consider their potential impacts12. Whilst participatory research is described as a key element, such policy guidelines are not accompanied by practical recommendations to enable this digital cooperation. This is the first virtual, group-based workshop to engage CYP with lived experiences of healthcare regarding their perceptions of AI in medicine and healthcare. We demonstrate that, by creating an open and enabling environment and using design scenarios to discuss potential applications, YPAG members were keen to participate, share opinions, outline concerns, and further develop their own understanding of AI. By including and involving CYP in this space, we can optimise AI to enhance future experiences of care20. Achieving this shared aspiration requires collaboration, and where there are areas of disagreement or uncertainty, these need to be clearly identified. This involves creating an enabling environment for CYP-centred AI and involving CYP with lived experiences of healthcare in the process in ways that engage, inspire and empower13,21.

The preliminary findings reported here reflect a PPEI workshop intended to inform future research and spur deliberation on this topic. Whilst content analysis represents an appropriate analytic approach that is unobtrusive, non-reactive and time-efficient compared with methods such as ethnography, this workshop was limited by the range of specific potential AI applications discussed and by the depth of discussion achievable during a virtual one-hour group session. Data saturation was not intended to be achieved; rather, emergent themes were identified to shape future research. Whilst the design of this exploratory workshop allowed for rapid and well-attended virtual participation, future research approaches might also include supplementary in-depth interviews and consensus-building methods.

CYP want to be included in the development of AI in medicine and healthcare. Whilst policy guidelines acknowledge the need to include CYP, they overlook the infrastructure required to support ongoing digital cooperation. For AI in medicine, this requires an enabling environment for human-centred AI that involves CYP with lived experiences of healthcare alongside healthcare and AI professionals. With the publication of the recent UK National AI Strategy, future research should explore the ways CYP can participate in shaping an intelligent, empathetic and inclusive healthcare system of tomorrow, and in the application and development of AI in healthcare.

We thank the 21 members of GOSH YPAG for their engagement in this exploratory workshop. This work is supported by the NIHR GOSH BRC. SV and NJS conceived the project. DL and DB were involved with arranging the YPAG meeting and inviting interested YPAG members to engage. SV and NJS led the facilitation of the YPAG session and data collection. DB collected independent observations to triangulate findings. SV conducted the analysis, reported on findings in compliance with the COREQ 32-point checklist, and created Figure 1. OA represented the viewpoint of a YPAG member to validate emerging themes, add reference to co-creation, and review the manuscript for lay language and style.
All authors contributed to the validation process, checking for accurate representation of findings and completeness, and provided revisions to early drafts of the manuscript.

COREQ checklist (Table 1, continued)
20. Field notes: One independent observer took field notes during the session, which were used to cross-reference codes and emerging themes from the data.
21. Duration: One hour.
22. Data saturation: This was an exploratory workshop, so data saturation was not anticipated.
23. Transcripts returned: Comments made in the chat function were made available to participants, and comments made anonymously on the audience response system were displayed as a rolling grid in real time during the session.
Domain 3: analysis and findings. Data analysis:
24. Number of data coders: 1 (SV).
25. Description of the coding tree: Yes, for each emerging theme (human-centredness, governance and trust); coding tree in NVivo (Figure 2).
26. Derivation of themes: Derived inductively from the data.
27. Software: NVivo for Windows v.1.4.1 (QSR International, Melbourne, Australia).
28. Participant checking: One participant was invited to cross-check emerging codes and themes for accuracy and to co-author the paper presenting preliminary findings.
Reporting:
29. Quotations presented: Yes; participants were not identified as comments were made anonymously, which was intended to encourage open and honest discussion.
30. Data and findings consistent: Yes.
31. Clarity of major themes: Yes, for each emerging theme (human-centredness, governance and trust).
32. Clarity of minor themes: Yes, for divergence in opinions and open coding.

References
The application of artificial intelligence to medicine
Intelligence-Based Medicine: Artificial Intelligence and Human Cognition in Clinical Medicine and Healthcare
Artificial intelligence in medicine
Artificial intelligence in paediatric radiology: Future opportunities
Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence
Parental Attitudes toward Artificial Intelligence-Driven Precision Medicine Technologies in Pediatric Healthcare
WHO | Making health services adolescent friendly
Measuring and improving the quality of NHS care for children and young people
UNICEF policy guidance on AI for children
The Commonwealth Artificial Intelligence for Children: Beijing Principles
Generation AI 2020: Health, Wellness and Technology in a Post-COVID World
The impact of audience response platform Mentimeter on the student and staff learning experience. Res Learn Technol
Mentimeter Smartphone Student Response System: A class above clickers
Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups
Public views of machine learning: Digital Natives
Children and young people's versus parents' responses in an English national inpatient survey
Service design and healthcare innovation: from consumption