title: Leveraging Conversational Technology to Answer Common COVID-19 Questions
authors: McKillop, Mollie; South, Brett R; Preininger, Anita; Mason, Mitch; Jackson, Gretchen Purcell
date: 2020-12-02
journal: J Am Med Inform Assoc
DOI: 10.1093/jamia/ocaa316

The rapidly evolving science about the Coronavirus Disease 2019 (COVID-19) pandemic created unprecedented health information needs and dramatic changes in policies globally. We describe a platform, Watson Assistant™ (WA), which has been used to develop conversational agents to deliver COVID-19-related information. We characterized the diverse use cases and implementations during the early pandemic and measured adoption through the number of users, messages sent, and conversational turns (i.e., pairs of interactions between users and agents). Thirty-seven institutions in nine countries deployed COVID-19 conversational agents with WA between March 30 and August 10, 2020, including 24 governmental agencies, seven employers, five provider organizations, and one health plan. Over 6.8 million messages were delivered through the platform. The mean number of conversational turns per session ranged between 1.9 and 3.5. Our experience demonstrates that conversational technologies can be rapidly deployed for pandemic response and are adopted globally by a wide range of users.
A novel coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has infected millions worldwide with coronavirus disease 2019 (COVID-19) and caused significant mortality. SARS-CoV-2 is a highly contagious pathogen with widely variable clinical manifestations. In March 2020, the World Health Organization (WHO) classified COVID-19 as a pandemic [1]. In the absence of proven therapies or a vaccine, public health departments, governments, employers, and healthcare institutions have taken measures to control the spread of the disease, including providing information and promoting nonpharmaceutical interventions such as social distancing and hand washing. Given the novelty of the disease, information is rapidly evolving, with new evidence often contradicting earlier findings. These inconsistencies create uncertainty and a pressing need for trustworthy health-related information. Timely and accurate public health information related to COVID-19 is universally needed across stakeholders. Organizations have been asked to provide information on COVID-19, its symptoms, how it spreads, strategies for prevention, and how each organization is responding to the pandemic. Trusted sources of health information, like medical practices, have limited in-person visits to focus on treating the sick and reducing disease spread. Staff reductions have further limited their capacity to answer questions.
The pervasiveness of the pandemic has resulted in organizations assuming new roles related to the dissemination of public health information. Given the enormous demand for information about COVID-19, many stakeholders have leveraged emerging conversational technologies to automate responses to common COVID-19-related questions and information needs specific to their organizations. One way to scale dissemination of COVID-19-related information is through technologies that employ natural language conversation. Chatbots, sometimes called conversational agents or virtual assistants, vary in functionality, and consensus on a taxonomy of these conversational technologies is lacking [2, 3]. The simplest chatbots match a predetermined set of topics with predefined answers, whereas more sophisticated conversational agents employ machine learning and natural language processing (NLP) to understand questions in everyday language and engage users in increasingly complex conversations. For example, conversational agents may understand meaning, maintain context in dialogue, and learn over time to improve their performance. Conversational agents that aid users in performing specific tasks are sometimes described as virtual assistants; they often have a characteristic personality expressed through tone, dialect, or conversational style. Such conversational tools have demonstrated promise in clinical applications [4], including chatbots for determining social needs [5] and treating panic disorder [6], as well as conversational agents for irritable bowel syndrome [7] and behavior change [8]. For the COVID-19 pandemic, conversational agents have been deployed to answer questions and to triage symptoms, but studies of their adoption and use to address questions surrounding COVID-19 have been limited to single-institution experiences [9-11].
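The contrast drawn above can be made concrete. The sketch below, a hypothetical example not drawn from any deployed system, shows the simplest class of chatbot described: predetermined topics matched to predefined answers by keyword, with no machine learning or dialogue state.

```python
import re

# Illustrative topics and canned answers (not from any real deployment).
FAQ = {
    ("symptom", "symptoms", "fever", "cough"):
        "Common COVID-19 symptoms include fever, cough, and fatigue.",
    ("spread", "spreads", "transmit", "transmission"):
        "COVID-19 spreads mainly through respiratory droplets.",
    ("prevent", "prevention", "mask", "distancing"):
        "Prevention measures include social distancing and hand washing.",
}

FALLBACK = "Sorry, I don't have an answer for that yet."

def answer(question: str) -> str:
    """Return the first canned answer whose keywords appear in the question."""
    words = set(re.findall(r"[a-z0-9-]+", question.lower()))
    for keywords, response in FAQ.items():
        if words.intersection(keywords):
            return response
    return FALLBACK
```

A more sophisticated conversational agent replaces this exact-keyword lookup with trained intent classification, entity extraction, and dialogue management.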
We sought to characterize the diverse use cases of COVID-19-related conversational agents built using the Watson Assistant™ (WA) platform between March 30, 2020 and August 10, 2020. We measured adoption through the number of users and messages sent. We determined the average number of conversational turns, with one turn representing one question-response pair. WA is a platform for developing, training, and customizing conversational agents. Although not specific to the healthcare domain, this platform has previously been applied to medication prescribing, mental health, and Parkinson's disease [12-14]. Core natural language capabilities include: (1) understanding input; (2) classifying topics; (3) managing state and maintaining a structured dialog (e.g., functions to support dynamically collecting multiple pieces of information, digressions that allow users to change topics without losing their place in the conversation, and disambiguation to clarify when a user says something for which the system has multiple relevant responses [15]); and (4) retrieving information from a knowledge base through search. WA uses NLP and machine learning for intent understanding, entity extraction, query expansion, and answer retrieval based on estimated document relevancy. Its search capabilities also apply natural language understanding both to break down the user's query and to find answers in documents. WA supports building a conversational interface into any application, device, or channel, such as a website or interactive voice recognition system. WA conversational agents allow users to initiate a conversation by entering questions.
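WA's intent-understanding models are proprietary, but the general idea of classifying a free-text utterance against trained intents can be sketched as follows. The intent names, training utterances, and bag-of-words cosine similarity here are illustrative assumptions, not WA's implementation.

```python
import math
from collections import Counter

# Hypothetical intents, each with a few training utterances (illustrative only).
TRAINING = {
    "ask_symptoms": ["what are the symptoms", "do i have a fever or cough"],
    "ask_testing":  ["where can i get tested", "how do i get a covid test"],
    "ask_travel":   ["are there travel restrictions", "can i fly right now"],
}

def _vec(text):
    """Bag-of-words term counts for a lowercased utterance."""
    return Counter(text.lower().split())

def _cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def classify(utterance, threshold=0.3):
    """Return (best_intent, score); intent is None when no example is close enough."""
    scores = {
        intent: max(_cosine(_vec(utterance), _vec(ex)) for ex in examples)
        for intent, examples in TRAINING.items()
    }
    best = max(scores, key=scores.get)
    return (best if scores[best] >= threshold else None), scores[best]
```

A production system would use trained statistical models rather than raw token overlap, but the contract is the same: utterance in, ranked intents (or a fallback) out.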
For example, when a user enters a question about COVID-19, a conversational agent built using WA will interpret the question to identify the intent (the target of the user's query), match it against an internal list of intents and entities (for example, a condition) to answer the question within the dialogue interface, or find the most relevant answer in its knowledge base. WA is built upon a core set of functionalities with three main components that facilitate dialogue with users. The first component is the intent, which defines the type of information sought. The second component is the entity, which is used to provide a precise response for an intent. The final component is the dialogue, which is the actual conversation a user has with the conversational agent. WA's proprietary NLP capabilities facilitate creation and training of conversational agents with a minimal amount of data. Agents can be deployed in any cloud environment, allowing users to maintain ownership and privacy of their data. The technical details of functionalities and implementation are beyond the scope of this brief communication but are provided elsewhere [16]. Beginning in March 2020, IBM® offered a program called Citizen Assistant™ to any organization worldwide, including WA for COVID-19 and assistance with initial setup at no charge for at least 90 days, as part of IBM®'s corporate social responsibility initiatives in response to the pandemic. WA is also free for anyone to use, for up to 10,000 messages and 1,000 users per month. Conversational agents built using WA were trained to understand and respond, through both voice and text, to common COVID-19 questions. WA provides both human-curated, predetermined responses and capabilities to dynamically search for and identify information from unstructured documents or websites on a scheduled basis.
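The three components (intent, entity, dialogue) can be illustrated with a toy dialogue step. The intent name, entity values, and response templates below are hypothetical stand-ins, not WA's actual configuration format.

```python
# Hypothetical entity dictionary and response table (illustrative only).
ENTITIES = {"location": ("new york", "london", "toronto")}

RESPONSES = {
    ("ask_case_counts", "location"): "Looking up case counts for {value}...",
    ("ask_case_counts", None): "Which location would you like case counts for?",
}

def extract_entity(utterance):
    """Return (entity_name, value) found in the utterance, or (None, None)."""
    text = utterance.lower()
    for name, values in ENTITIES.items():
        for value in values:
            if value in text:
                return name, value
    return None, None

def dialogue_step(intent, utterance):
    """One turn: route on the intent, refine the response with any entity found."""
    name, value = extract_entity(utterance)
    template = RESPONSES.get((intent, name), RESPONSES[(intent, None)])
    return template.format(value=value)
```

Note how the entity sharpens the response: with a recognized location the agent can answer directly, and without one the dialogue prompts for the missing slot, which is what produces multi-turn conversations.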
This architecture provides users with access to the most up-to-date information as the science evolves and, through expert validation when needed, ensures some level of quality in the information provided. To dynamically search for and provide up-to-date information, WA treats the user input as a search query; it finds information relevant to the query from an external data source, such as the US Centers for Disease Control and Prevention (CDC), and returns it to the user [17]. Conversational agents built using WA can be customized for a specific use case. For example, conversational agents can be trained to include information related to a specific language, locale, or organization, such as links to local school closings, local news, and state websites. Once the assistant is live and users ask questions, a human typically reviews subsets of conversations for knowledge gaps, and the assistant is retrained to answer any questions it was not initially trained to cover. An initial catalog of COVID-19 intents was created by experts in conversational agent design to cover areas including testing, case counts, travel restrictions, preventative behaviors, symptoms, and contact information. The content was based on current evidence and best practices retrieved from the CDC, the Department of Labor, the World Health Organization (WHO), and USA.gov. Intents were implemented as static responses or dynamic searches, depending on the types and sources of information, as well as how often that information changes. Relatively stable, high-priority COVID-19 knowledge intents were curated by humans, with intents and responses independently reviewed and evaluated by two physicians for face validity related to public health and clinical acceptability. Disagreements were resolved through discussion with a third physician to reach consensus. Biweekly reviews to iteratively refine all intents and responses with clinicians and public health experts were also conducted and are ongoing.
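The dynamic-search behavior described above can be sketched as a retrieval fallback: the user input is treated as a query, and the best-matching passage from an indexed corpus is returned. The documents and the term-overlap scoring below are illustrative stand-ins for WA's crawled sources and relevancy estimation.

```python
# Stand-in corpus for crawled public-health pages (snippets are illustrative).
DOCUMENTS = [
    "Wash your hands often with soap and water for at least 20 seconds.",
    "COVID-19 testing sites are listed on your local health department website.",
    "Travel restrictions vary by country and change frequently.",
]

def search(query):
    """Return the document sharing the most terms with the query, or None."""
    q = set(query.lower().split())

    def score(doc):
        return len(q & set(doc.lower().replace(".", "").split()))

    best = max(DOCUMENTS, key=score)
    return best if score(best) > 0 else None
```

In a live deployment the corpus would be re-crawled on a schedule, which is how answers drawn from sources like case-count pages stay current without manual retraining.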
Intents with reliable sources of information and rapidly changing answers (e.g., case counts) were implemented with dynamic search or lookup functions, with data sources routinely reviewed by experts. Additional intents specific to organizational information needs and use cases were developed, such as intents covering physician and medical center access for providers, intents for testing coverage and premium payments for health plan members, and intents relating to when and how employees may work or return to work for employers (see Supplementary File for a full characterization of intents). We assessed the initial success of the WA platform in delivering information for COVID-19. All institutions achieved end-to-end deployment in approximately three weeks or less; the average time to initial use was five business days. Two implementations were voice-based, requiring users to call the implementing organization's contact number, while the rest were web chat integrations made available on each organization's website. Adoption metrics are summarized in Table 1, with a visualization of these data over time for each organizational type in Figure 2. A total of 6,872,021 messages were sent in conversations about COVID-19 using the conversational platform. Mean conversational turns were highest for provider organizations (mean, 3.5 turns) and lowest for health plans (mean, 1.9 turns). This brief communication describes the rapid and widespread deployment, adoption, and usage of a set of conversational agents to address the overwhelming information needs created by COVID-19. We show that conversational agents built to answer many different types of questions for COVID-19 pandemic response can be deployed quickly and were broadly adopted during the early stages of the pandemic. The COVID-19 pandemic generated an urgent need to provide answers to questions based on rapidly evolving scientific evidence.
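The turn metric reported above (one turn = one question-response pair) can be computed from session logs along the following lines; the log format is an assumption made for illustration.

```python
def mean_turns_per_session(sessions):
    """Mean question-response pairs per session.

    `sessions` is a list of per-session message-role lists, each entry
    'user' or 'agent'; one turn is a user message paired with an agent reply.
    """
    if not sessions:
        return 0.0
    turns = [
        min(roles.count("user"), roles.count("agent"))
        for roles in sessions
    ]
    return sum(turns) / len(turns)
```

Taking the minimum of user and agent message counts pairs each question with at most one reply, so an unanswered final message does not inflate the turn count.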
Citizens continue to want quick access to information in a manner that allows them to make informed decisions about how to protect themselves, their families, and their communities [18]. To address these needs, several institutions have reported leveraging conversational agent technologies. Most focused on symptom self-checking for patient triage or mental health [19, 20], whereas automating answers to common questions was more limited. This manuscript describes a platform used to deploy conversational agents that address a diverse set of information needs for a wide variety of stakeholders, including governments, employers, providers, and health plans. Thirty-seven organizations in nine countries implemented agents and delivered over 6.8 million messages, indicating widespread geographic adoption of these conversational agents and demand for public health information related to the COVID-19 pandemic. Published studies of conversational agents to address COVID-19 have previously been limited to single-institution experiences with a single conversational agent [19]. Our experience demonstrates the ability of a conversational technology platform to support varied COVID-19 information needs across multiple institutions, representing diverse stakeholders and users. Further, we report on conversational turns, which are used to assess the amount of interaction between a user and a system. The mean number of conversational turns per session was two to three, indicating engagement with the agents and suggesting that they can answer most user questions efficiently. The relative number of turns may also underscore the complexity of some user questions, particularly clinical ones, since provider organizations had the most turns per session. Yet, across organizations, the number of conversational turns does not reflect highly complex conversations.
Due to the novel and rapidly evolving context in the early stages of a pandemic, most users probably asked simple, transactional types of questions, such as 'Is the hospital open?' and 'What is COVID-19?' This trend is likely to change as the pandemic evolves. For example, in the later weeks of this study, conversation length among employers spiked (see Figure 2). We hypothesize that as workers returned to work, more complex conversations around workplace safety and reopening policies occurred. This preliminary work has several limitations. This description of initial usage did not measure outcomes such as user satisfaction, frequency of intents, whether user questions were answered, or time and cost savings; these are topics of ongoing research. The manuscript reports adoption and use of a system that is commercially available for enterprise solutions. However, this manuscript reported usage only during the period in which the platform was freely available as part of a philanthropic response to the pandemic, and the platform remains free to anyone for low- to medium-volume applications. We have demonstrated the ability of a wide variety of organizations, including governments, employers, providers, and payers, to use conversational technologies to provide current information related to COVID-19 to their citizens, employees, patients, and beneficiaries. The WA platform enabled rapid implementation of a set of conversational agents for a wide variety of use cases, and usage data show demand for and adoption of these technologies during a rapidly evolving public health crisis.
Our ongoing research aims to examine user conversations and how conversations change over time during the course of a pandemic. We are also investigating user satisfaction and experience with COVID-19 conversational agents.

The authors would like to gratefully acknowledge Yull Arriaga, Rubina Rizvi, and Kristen Summers for their subject matter expertise. The authors of this manuscript are employed by IBM Watson Health.

References
- WHO Declares COVID-19 a Pandemic
- Towards a Taxonomy of Platforms for Conversational Agent Design
- Conversational agents in business: A systematic literature review and future research directions
- Conversational agents in healthcare: a systematic review
- HarborBot: A Chatbot for Social Needs Screening
- Efficacy of mobile app-based interactive cognitive behavioral therapy using a chatbot for panic disorder
- An Exploration Into the Use of a Chatbot for Patients With Inflammatory Bowel Diseases: Retrospective Cohort Study
- Use of the Healthy Lifestyle Coaching Chatbot App to Promote Stair-Climbing Habits Among Office Workers: Exploratory Randomized Controlled Trial
- A Guide to Chatbots for COVID-19 Screening at Pediatric Health Care Facilities
- AI Chatbot Design during an Epidemic like the Novel Coronavirus. Healthcare (Basel) 2020
- Report: Implementation of a Digital Chatbot to Screen Health System Employees during the COVID-19 Pandemic
- Artificial intelligence-based conversational agent to support medication prescribing
- Adherence of the #Here4U App - Military Version to Criteria for the Development of Rigorous Mental Health Apps
- Conversational Agent in mHealth to empower people managing the Parkinson's Disease
- Creating a search skill
- Framework for Managing the COVID-19 Infodemic: Methods and Results of an Online, Crowdsourced WHO Technical Consultation
- Rapid design and implementation of an integrated patient self-triage and self-scheduling tool for COVID-19
- Chatbots in the fight against the COVID-19 pandemic
The data underlying this article will be shared on reasonable request to the corresponding author. This study is funded by IBM Watson Health.