title: Enacting 'more-than-human' care: Clients' and counsellors' views on the multiple affordances of chatbots in alcohol and other drug counselling
authors: Barnett, Anthony; Savic, Michael; Pienaar, Kiran; Carter, Adrian; Warren, Narelle; Sandral, Emma; Manning, Victoria; Lubman, Dan I.
date: 2020-10-12
journal: Int J Drug Policy
DOI: 10.1016/j.drugpo.2020.102910

Forms of artificial intelligence (AI), such as chatbots that provide automated online counselling, promise to revolutionise alcohol and other drug treatment. Although the replacement of human counsellors remains a speculative prospect, chatbots for 'narrow AI' tasks (e.g., assessment and referral) are increasingly being used to augment clinical practice. Little research has addressed the possibilities for care that chatbots may generate in the future, particularly in the context of alcohol and other drug counselling. To explore these issues, we draw on the concept of technological 'affordances' and identify the range of possibilities for care that emerging chatbot interventions may afford and foreclose depending on the contexts in which they are implemented. Our analysis is based on qualitative data from interviews with clients (n=20) and focus group discussions with counsellors (n=8) conducted as part of a larger study of an Australian online alcohol and other drug counselling service. Both clients and counsellors expressed a concern that chatbot interventions lacked a 'human' element, which they valued in empathic care encounters. Most clients reported that they would share less information with a chatbot than a human counsellor, and they viewed this as constraining care. However, clients and counsellors suggested that the use of narrow AI might afford possibilities for performing discrete tasks, such as screening, triage or referral. In the context of what we refer to as 'more-than-human' care, our findings reveal complex views about the types of affordances that chatbots may produce and foreclose in online care encounters. We conclude by discussing implications for the potential 'addiction futures' and care trajectories that AI technologies offer, focussing on how they might inform alcohol and other drug policy, and the design of digital healthcare.

Within healthcare, the possible future applications of 'chatbots' - artificially intelligent computer programs that aim to simulate human conversation - continue to receive significant attention. Recent studies have explored the potential of chatbots to deliver healthcare information and support (Rizzo et al., 2016), detect and prevent certain behaviours (e.g., suicide) (Martínez-Miranda, 2017), and support treatment delivered by physicians in different fields of medicine (e.g., oncology) (Bibault, Chaix, Nectoux, & Brouard, 2019). Furthermore, the novel coronavirus (COVID-19) pandemic has encouraged a rapid move towards telemedicine and online healthcare, including the use of chatbots informed by artificial intelligence (AI) technologies to respond to healthcare needs during the pandemic (e.g., World Health Organisation, 2020). As part of a wider investment in digital health, chatbots continue to be framed as an important part of future 'e-therapies' within psychiatry and mental health services for the treatment of mental health and alcohol and other drug concerns (Gratzer & Goldbloom, 2020).
In the National Health Service Topol Review, which investigated the application of technology within mental healthcare, the use of chatbots to deliver mental health services in the future was specifically identified as an important part of a suite of automated, digital health interventions (Foley & Woollard, 2019). Universities, governments and private software companies are investing at unprecedented rates in chatbot and mobile health ('mHealth') technologies (Silva, Rodrigues, de la Torre Díez, López-Coronado, & Saleem, 2015). These technologies promise to change how care is provided by delivering mental health services to larger audiences at lower cost, beyond the time, space and geographical constraints of traditional, face-to-face healthcare. Although chatbots are framed as a futuristic technological solution to overcome barriers to treatment access, the views of clients and counsellors about how these technologies may impact care have largely remained underexplored. Drawing on concepts from science and technology studies (STS), we critically examine clients' and counsellors' views about the affordances of chatbots for alcohol and other drug care. In doing so, we provide a novel perspective on the potential social implications of digital health technologies and reflect on the possibilities for 'more-than-human' care into the future. The potential use of chatbots to deliver healthcare interventions, such as counselling, has a surprisingly long history. McCorduck (2004) traced the history of chatbots back to Joseph Weizenbaum, who in the early 1960s at MIT produced a system called 'ELIZA'. ELIZA was a computer program based on early forms of natural language processing - the use of automated techniques to analyse and respond to human language - and was designed to imitate the role of a Rogerian psychotherapist by communicating with a human user (or 'patient') on a computer console. At the time, Weizenbaum argued that ELIZA was better viewed as an advance in language processing technology rather than representing a new technology for healthcare. By the mid-1970s, Weizenbaum had launched a critique of the developing field of AI, arguing that chatbots and AI systems designed to replace humans were immoral, unethical and lacked the human capacity for empathy on which effective psychotherapy depends (McCorduck, 2004). The technological capability of chatbots has advanced since ELIZA. Current chatbots aim to employ various forms of AI technologies to enable greater autonomy. These include natural language processing and machine learning, where a computer system 'learns' and improves performance based on past experiences and interactions (Nguyen, 2020). One example is 'SimSensei', a fully automated chatbot that conducts interviews to assess psychological distress (Morency et al., 2015). Within alcohol and other drug service delivery, a number of chatbots of varying technical complexity have been developed. These include 'TalkToFrank' in the United Kingdom, which is designed to provide young people with information about drugs (Home Office, 2013), and 'Bzz' in the Netherlands, which promises to answer adolescents' questions related to sex, drugs and alcohol (Crutzen, Peters, Portugal, Fisser, & Grolleman, 2011). Viewed along a continuum, chatbots operating within 'hybrid' models of care alongside humans to perform simple tasks such as screening or referral have been characterised as examples of 'narrow AI'.
In contrast, future chatbots designed to replace human functions (e.g., to emulate a counsellor) are considered forms of 'artificial superintelligence' (Müller & Bostrom, 2016). Traditional models of care involving face-to-face interactions between clients and clinicians are increasingly being augmented by narrow AI digital health interventions, such as smartphone applications or simple chatbots that can perform basic functions, such as referring clients to a human-led service (Denecke, Tschanz, Dorner, & May, 2019). The promise of, and increasing investment in, digital health interventions raises the question of whether chatbots offer 'hype or hope' for future healthcare. In a recent review, Denecke et al. (2019) suggested that the strengths of chatbots include their capacity to follow a 'conversational tree' and perform simple, specific tasks, such as patient history-taking or patient education. However, they cautioned that chatbots are not without disadvantages. For example, patients may become exhausted or frustrated if a chatbot does not understand their concerns and needs or if too many patient interactions with chatbots are required. On the one hand, chatbots may offer future opportunities for healthcare if they become more 'intelligent' and conversational barriers with patients are minimised. On the other, it is unclear whether and how chatbots could emulate the traditional doctor-patient relationship, which is built on trust and face-to-face communication (Denecke et al., 2019). Empirical research examining the impact of chatbots on the therapeutic relationship, care experiences and outcomes is urgently needed (Laranjo et al., 2018). A recent survey of physicians in the US found that while many viewed chatbots as potentially useful for undertaking simple administrative tasks, such as booking appointments or client coaching, concerns were raised about their inability to comprehend human emotion and to deliver expert medical care. Little research has been conducted to examine the views of other types of providers, including counsellors, who play a critical role in the alcohol and other drug treatment field. Outside of survey-based research, only a few qualitative studies have explored health professionals' and clients' views about the acceptability of chatbots in healthcare more generally (e.g., Laumer, Maier, & Gubler, 2019; Nadarzynski, Miles, Cowie, & Ridge, 2019). Crucially, though, little if any research has explored clients' and counsellors' views about how chatbots used in future alcohol and other drug service delivery may impact their experiences of care. This is surprising because digital health interventions may be especially relevant or useful for people with alcohol and other drug concerns in order to overcome barriers to accessing care, including: (i) drug-related stigma that may discourage treatment-seeking; and, (ii) service delivery barriers such as lack of access to care in remote areas (Budney, Borodovsky, Marsch, & Lord, 2019). Drawing on concepts from STS, including work on technological 'affordances' (Gibson, 1979; Hutchby, 2001; Latour & Venn, 2002; Norman, 1988) and 'more-than-human' approaches (Dennis, 2019), this paper responds to this gap in the literature by examining clients' and counsellors' perceptions of the technological and social effects of chatbots in online alcohol and other drug care.
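To ground the idea of a 'conversational tree' chatbot performing narrow, task-specific work such as screening or referral, the sketch below illustrates one minimal way such a system could be structured. It is an illustrative assumption only: the node names, prompts, and the handoff and logging steps are hypothetical and do not describe any service discussed in this paper.

```python
# Minimal sketch of a rule-based 'conversational tree' chatbot for screening
# and referral, of the kind characterised as 'narrow AI'. All node names,
# prompts and the handoff step are hypothetical illustrations.

SCREENING_TREE = {
    "start": {
        "prompt": "Are you seeking support for an alcohol or other drug concern? (yes/no)",
        "yes": "frequency",
        "no": "refer_elsewhere",
    },
    "frequency": {
        "prompt": "Have you used alcohol or other drugs in the past week? (yes/no)",
        "yes": "handoff",
        "no": "information",
    },
    # Leaf nodes end the automated exchange.
    "refer_elsewhere": {"end": "This service focuses on alcohol and other drug support. "
                               "A general helpline may suit your needs better."},
    "information": {"end": "Here is some information you may find useful in the meantime."},
    "handoff": {"end": "Thanks. Connecting you to a human counsellor now."},
}


def run_screening(answers):
    """Walk the tree using scripted answers; return the transcript and a simple log."""
    node, log, transcript = "start", {}, []
    while "end" not in SCREENING_TREE[node]:
        step = SCREENING_TREE[node]
        transcript.append(step["prompt"])
        reply = answers.pop(0).strip().lower()
        log[node] = reply  # basic interaction log that could be passed to a human counsellor
        node = step["yes"] if reply == "yes" else step["no"]
    transcript.append(SCREENING_TREE[node]["end"])
    return transcript, log


if __name__ == "__main__":
    transcript, log = run_screening(["yes", "yes"])
    print("\n".join(transcript))
    print("Collected for handoff:", log)
```

Even in this toy form, the design choice is visible: the chatbot's possibilities for action are fixed by its tree, which is one reason narrow AI systems of this kind are positioned as augmenting, rather than replacing, human counsellors.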
The findings may have important implications for the design of future digital healthcare interventions and the kinds of 'addiction futures' that these interventions materialise and foreclose in alcohol and other drug care. In exploring clients' and counsellors' accounts of the future technological and social effects of chatbots in online care, our analysis is informed by the concept of 'affordances' (Gibson, 1979; Hutchby, 2001; Latour & Venn, 2002; Norman, 1988) . In his seminal work The Ecological Approach to Visual Perception, Gibson (1979) described 'affordances' in terms of how environments offered or 'furnished' animals different opportunities. A key tenet of Gibson's work is that affordances are not innate, fixed physical properties of an environment, but rather emerge as opportunities or constraints that an environmental feature might provide to a particular subject. For Gibson, different features of an environment have the ability to provide unique affordances for particular subjects (or even to the same subject at different points in time). Norman (1988) extended Gibson's (1979) work by applying the concept of 'affordances' to human-computer interactions. Where Gibson highlighted the role of environments, Norman (1988) foregrounded the agency of technology designers and developers by arguing that affordances are 'designed in' properties that provide indications of how technology could be used. Drawing on STS and posthumanist theory, socio-material approaches have extended thinking around affordances (Hutchby, 2001; Latour & Venn, 2002) . Rather than viewing affordances as predominantly shaped by the environment or determined by humans who design and use technology, socio-material approaches view affordances as emerging through human and non-human actors as they coalesce in encounters with technology. Given that human and non-human actors (e.g., discourses, time, places, objects) may combine differently in specific encounters with a technology, such as a chatbot, the opportunities for action (or 'affordances') that a technology enables or constrains are always contingent and situated. In view of this, a socio-material approach underscores the need to examine the unique relations of human and non-human forces in specific contexts in order to trace how affordances are differentially constituted (Dilkes-Frayne, Savic, Carter, Kokanović, & Lubman, 2019) . Within critical drug studies, this socio-material approach to 'affordances' has been mobilised by Fraser, Treloar, Gendera, and Rance (2017) to critically analyse the design of injecting packs for hepatitis C prevention. Recognising the need to incorporate social relationships in the design of the prevention object, Fraser and colleagues proposed a new injecting pack aimed at couples who inject together. This innovative approach to hepatitis C prevention treats the sexual partnership as the primary unit of intervention and aims to generate new affordances or possibilities for prevention. Socio-material approaches to affordances have since been productively applied to analyse a range of phenomena including the uses and effects of naloxone (an overdose reversal drug) (Farrugia et al., 2019) , supervised drug consumption sites (Boyd et al., 2020) and, related to our own work, the relationship between online counselling platforms and therapeutic outcomes (Dilkes-Frayne, Savic, Carter, Kokanović, & Lubman, 2019) . 
The concept of affordances has also been mobilised in a study of the gender affordances of chatbots, specifically how the gender of a chatbot influences users' engagement with it (Brahnam & De Angeli, 2012) . Brahnam and De Angeli found that users tended to attribute negative stereotypes to female-presenting chatbots more often than male-presenting chatbots, and that the former were more often subject to implicit and explicit sexual attention and swear words. By drawing on a socio-material approach to affordances, we explore clients' and counsellors' views on how chatbots might enable ('afford') or constrain alcohol and other drug online care. While the existing critical drug studies literature has applied the concept of 'affordances' to explore past encounters with technology, we focus on participants' perceptions of future encounters involving chatbots. In doing so, we attend to the role of anticipation and imagination, along with a range of other actors, in shaping the types of realities, possibilities and actions that may emerge when chatbots are encountered in future implementation situations (Groves, 2017) . In this way, we approach participants' accounts as implicated in making alcohol and other drug treatment and care futures. Our analysis is also informed by an emerging body of scholarship applying 'more-than-human' approaches within critical drug studies. Fay Dennis' (2017, 2019) ethnography of injecting drug use is a key example. In keeping with the posthumanist turn, Dennis shifts the focus away from the individual injecting subject by exploring how injecting drug use emerges through the relations of human and non-human actors. In doing so, her account disrupts anthropocentric conceptualisations of drug use as the outcome of human practices and decisions. Instead her work illuminates how drug injecting events materialise via the often fragile coalescence of human and non-human phenomena (e.g., syringes, substances, prohibitionist drug policies, the availability of sterile injecting equipment). By unsettling taken-for-granted distinctions between human/non-human, subject/object and agency/passivity, Dennis' approach encourages a more capacious understanding of agency as distributed along the human/non-human spectrum, rather than being the sole prerogative of individual human subjects. She concludes by advocating for a 'more-than-human' approach to care, one that extends beyond current harm reduction approaches and has the potential to "reconfigure our relationship to drugs and legitimise ways of living with drugs that are currently neglected, undermined, or worse still, punished" (Dennis, 2019, p. 199) . In the context of chatbots and online care, a 'more-than-human' approach invites us to rethink dominant addiction treatment models that tend to frame care as a human-centric, one-directional practice, which is provided to clients by treatment professionals. For example, in the conventional 'doctor-patient relationship', the medical professional is often conceptualised as the care provider, in control of (and responsible for) the clinical encounter. Despite the power asymmetries at play in clinical contexts, care is also typically considered self-evidently beneficial and nurturing, irrespective of the local contexts in which it is enacted. 
Questioning this framing of care as a one-directional form of service provision, our analysis draws on critical scholarship that emphasises care as relational, situated and made in everyday practices, including those associated with help-seeking, diagnosis and treatment (Puig de la Bellacasa, 2017; Mol, 2008). This work considers care as differentially constituted - sometimes as supportive and sometimes as oppressive and coercive - depending on the unique configuration of human and non-human actors at work in specific care practices. Given the potential for care to materialise as coercive, and given that some forms of care (e.g., between people who consume drugs) are routinely obscured, care is political, contested and implicated in the making and maintenance of particular realities (Puig de la Bellacasa, 2017; Mol, 2008; Martin et al., 2015; Murphy, 2015). Recent critical drug studies scholarship has similarly approached care as an ethico-political pursuit and productively traced the social, affective and material practices that bring care into being, or otherwise constrain or foreclose specific care practices, in relation to injecting drug use (Dennis, 2019), naloxone (Farrugia et al., 2019), and drug consumption rooms (Duncan et al., 2019). Inspired by this work, we examine the affordances and limits of AI technologies for online care, paying particular attention to the social, affective and material actors at play. The advent of new technologies such as chatbots holds the potential for a redistribution of care between human and non-human actors. As Donna Haraway (1985; 2006) notes in her influential essay, A Cyborg Manifesto, the distinction between animal and machine is increasingly being blurred: a blurring that unsettles anthropocentric accounts of the unified human subject and illuminates the shift to the hybridised posthuman of technoscience. While Haraway's concept of the cyborg is often cited as a feminist critique of gender, the blurring of the human/non-human in the image of the cyborg has far-reaching implications, including for understandings of treatment and care. In the context of our work, the disruption of a hard-and-fast distinction between the human and the non-human invites a 'more-than-human' approach capable of capturing the dispersal of care across human/non-human (or 'more-than-human') relations. Applying this approach to analysing online care prompts us to (re)consider how human and chatbot hybridised models (where human and non-human actors work together and shape each other) afford multiple possibilities for care with varying implications for alcohol and other drug treatment futures.

The qualitative data presented in this article were part of a broader mixed-methods study that explored clients' and counsellors' experiences of online care. The study was approved by the Eastern Health Human Research Ethics Committee (Reference: HREC/18/EH/38). All participants had experience of an Australian national online, 24-hour, real-time web-chat alcohol and other drug counselling service, Counselling Online. Clients who had used the Counselling Online service were directed, after their counselling session ended, to a site where they could register their interest in participating in this study. After a client had registered their interest, we contacted them to discuss their participation. In total, 10 male and 10 female clients were recruited.
Client participants had a mean age of 38 years (ranging from 22 to 76 years; NB: age not available for one participant) and were located across a range of different states of Australia. Client participants' previous engagements with alcohol and other drug treatment services varied. Many clients had seen a general practitioner in primary health care to discuss their alcohol and other drug concerns, some had experience of specialist addiction treatment services, and for a few, Counselling Online was the only service they had accessed in the past. Clients' primary drugs of concern when accessing online counselling were: alcohol (45%), methamphetamine (30%), cannabis or synthetic cannabis (15%), cocaine (5%) and pharmaceutical medications (5%). We report clients' primary drug of concern in the findings when presenting narrative excerpts.

Counsellors who participated in the study were all currently working at Counselling Online. Counsellors were recruited via Counselling Online team leaders, who referred counsellors to our study based on their availability and desire to participate. In total, 8 counsellor participants were recruited, including 5 males and 3 females. The median duration of counsellors working at the service was 2 years (ranging from 1 to over 10 years). The counsellors came from a range of professional backgrounds, including social work, psychology and counselling, and had received specialised training in telephonic and online alcohol and other drug counselling.

The interviews and focus groups were conducted by AB, MS and ES. Twenty interviews were conducted with clients. The interview schedule covered a range of topics, including: clients' experiences of online care; clients' views about what constitutes quality online care; and, the focus of the current article, clients' views about how chatbots afford or constrain online care. Interviews were conducted over the phone and audio-recorded, and clients received a $30 AUD voucher for their participation. Three focus groups were conducted with counsellors (two focus groups contained three participants and one contained two participants). The focus group schedule covered a range of topics, including: counsellors' experiences of online care; counsellors' views about strategies and techniques to deliver empathic care online; and, as reported in this paper, counsellors' views about how chatbots might afford or constrain online care. Focus groups were conducted in person and audio-recorded.

Interviews and focus group audio recordings were transcribed verbatim. Transcripts were imported into the NVivo qualitative data analysis software and thematic analysis (Braun & Clarke, 2006) was conducted. This involved AB, MS and ES developing an initial coding framework based on readings of a few transcripts, which was then discussed and agreed on by all authors. AB then coded the transcripts. Themes were developed collaboratively in discussions between the authors and were informed by our reading of the data in relation to the theoretical work on affordances and 'more-than-human' approaches.

In conducting our analysis, we begin by discussing how clients and counsellors imagined and encountered chatbots in everyday life. We then present clients' and counsellors' views on the affordances and constraints of chatbots for online care and client interactions. We use pseudonyms to protect participant confidentiality.
Many clients stated that they knew they had communicated with a human counsellor during their online counselling sessions based on the responses received. As one client commented: You could tell that by the way they answered me and asked me questions […] Yeah, you could tell it wasn't a [chatbot] - I've had one of those computer-generated things before. No, no I could tell this was an actual person on the other end. (Jill, Female, 59, alcohol) Even though Jill had engaged with counselling delivered via text through an online interface, she reported feeling that the care she experienced had an affective element which only a human counsellor could provide. That is, the caring came in human form. However, other clients were less sure about the type of counsellor (whether human or chatbot) with which they were communicating. Indeed, some counsellors discussed how at the start of an online counselling session, certain clients would seek to explicitly establish what type of counsellor they were talking to by asking if the counsellor was a robot or human. Some counsellors commented that it was difficult to 'prove' their status as a human counsellor in an online care encounter. Hence, some clients expressed the expectation that chatbots may already be in use. The status of the 'human' counsellor as naturally given and uncontentious could no longer be taken for granted with the advent of new digital technologies, which blur the boundaries between human and non-human. Many participants drew on their previous, often negative or frustrating, everyday life experiences when describing the affordances and constraints of chatbots. For example, some commented that discussing the topic of chatbots reminded them of long wait times and service difficulties when interacting with, for example: "voice recognition where like you ring the ATO (Australian Taxation Office) […] to go through a whole bunch of stuff to get to what you actually physically want" (Kate, Female, 36, alcohol), or communicating with a chatbot at "Telstra (an Australian telecommunications company) […] to be hit with a computer, it's impersonal." (John, Male, 61, alcohol). These comments suggest that previous encounters with task-specific chatbots with limited capacity to emulate human conversation shaped participants' views on the affordances/constraints of chatbots for online care. If their previous encounters with task-specific chatbots were at best underwhelming, or at worst frustrating, it is perhaps no surprise that participants expressed reservations about the potential for chatbots to deliver empathic online care (as we discuss in the next section). However, beyond these mundane and often frustrating everyday interactions, some participants drew on science fiction imaginaries from popular culture when considering the potential use of chatbots in the future. For instance, in a focus group discussion, counsellors humorously discussed whether they had all seen "The Mirror" or "Terminator", referencing familiar science fiction tropes of machines taking over the world. Clients also mentioned science fiction. For example, when the interviewer commented that a participant seemed knowledgeable about chatbots, the participant explained: "I've watched a bit of movies, man, and I do a bit of research" (Bryce, Male, 26, cannabis). In these examples, participants often humorously drew upon imaginaries that depicted a dystopian future where machines/robots replace or conflict with humans.
For some participants, such a future, while still in the realm of the imaginary, left little scope for considering the positive possibilities these technologies may afford. Having described participants' general comments about chatbots, we now present their accounts of the potential affordances and constraints of chatbots for online care.

Chatbots as constraining empathic care

Many participants expressed concerns that if chatbots were to replace human counsellors in online counselling, empathic care afforded by human-delivered counselling would be constrained. As Bryce (Male, 26, cannabis) stated: It's like trying to make a machine understand human life. You can't do it, unless they have autonomous - unless they are aware - like, they have sentience. Let's put it that way. You can't make a computer understand what a human is feeling. They can't exactly give the right answers, whereas a normal human can. Bryce and other participants raised the concern that the affective capacities of care would be diminished by chatbots, which were seen as lacking empathy and emotional intelligence. Related to this, participants expressed concern that chatbots would be unable to respond to clients' desires and concerns. In light of Puig de la Bellacasa's (2017) engagement with care as an ethico-political obligation, chatbots may be seen in this sense as potentially lacking response-ability - the capacity to respond affectively to the concerns, emotions and desires of others, in short, to display empathy and compassion. While not wishing to dismiss participants' concerns, Bryce's view that AI cannot provide the "right answers" could also be interpreted as resting on the assumption that there are correct or incorrect responses in online care encounters that only a human can provide. However, complicating the anthropocentric notion that providing the "right answers" is the sole prerogative of human subjects, critical drug studies scholarship has foregrounded the many possible (and often contested/political) ways of enacting 'problems' and 'responses' in relation to alcohol and other drugs - not necessarily right or wrong in perpetuity but socio-historically situated, and as such, relationally constituted across the human-non-human spectrum with different effects (see for example: Barnett, Dilkes-Frayne, Fraser & Moore, 2011; Lancaster, Seear, Treloar, & Ritter, 2017; Pienaar & Savic, 2016; Savic, Ferguson, Manning, Bathish, & Lubman, 2017). Other participants also questioned the capacity of chatbots to weigh up complex information, read emotions, and respond with empathy and compassion. For example, Rob (Male, 30, alcohol) stated that chatbots may not afford the empathic care that people desire in online care settings: Oh, the human counsellor can obviously show empathy and emotion where the automation can't. A lot of people access these things just to speak to someone. Similarly, Jess (Female, 53, methamphetamine) noted: Yeah I think there needs to be, it needs to have a human face behind it because of the empathy that's needed to support someone on their journey. As these accounts indicate, many client participants expressed the view that human actors are still needed in online counselling to deliver empathic care. That is, the full replacement of human actors with chatbots within online care was viewed as undermining the important empathic qualities desired in care encounters.
In view of recent critical drug studies scholarship on care (see Dennis, 2019; Duncan et al., 2019; Farrugia et al., 2019) , these participants' accounts challenge the notion that chatbots are able to satisfy one element of caring: the affective component of helping people to feel a certain way. Elaborating on the point further, Sarah (Female, 22, promethazine) suggested that the best counsellors are likely to be those with lived experience of mental health concerns themselves (which chatbots could not have): But just the whole empathy and understanding, a robot's not got depression. It doesn't suffer from hormonal imbalances through its life to know what's going on. […] I know a lot of my psych tutors, they've always been, yeah, the best. Usually, some of the best people in the psych, in the history of psychology had mental health issues themselves, so they would understand and that kind of thing. I don't think that that could be properly replicated. Sarah's account suggests that genuine empathy cannot be a property designed and built into a machine nor acquired through AI technologies (for example, machine learning). For Sarah, the capacity of an actor to experience emotional pain was confined to human subjects. Implicit here is an understanding of empathy as the sole prerogative of humans. Importantly, empathy is seen as contingent on the capacity to have experienced emotional pain, and similar kinds of emotional pain in particular, and thus be able to imagine (or empathise with) the pain of others. In turn, the use of chatbots without this capacity in online counselling settings would constrain empathic care, which may undermine the therapeutic relationship. As Rachel (Female, 22, methamphetamine) explained, clients just want to be listened to by "human ears or seen by human eyes." In another example, Joe (Male, 25, methamphetamine and diazepam) emphasised that chatbots may prevent human connections, which he viewed as constituting a potential threat to delivering empathic care in the digital age: Because [a chatbot] stops humans from connecting to other humans and in a world of digital technology we need to hold onto that humanity. Joe's account reiterates the desire for the human not to be lost amidst the proliferation of digital technologies and human-technology relations. His account is consistent with discourses about technologies as social ills, as 'addictive', as eroding social connection and community, and the sorts of dystopian imaginaries described earlier. However, it runs counter to dominant discourses in digital health, where digital health technologies are promoted for their potential to connect people -indeed the importance of digital connections has been thrown into stark relief during the COVID-19 pandemic. Against this view, Joe highlights the central role of human actors in alcohol and other drug treatment futures and online care. Finally, counsellors agreed that chatbots were not sufficiently technologically advanced to be programmed to offer empathic counselling. However, some forecast a not-so-distant future in which these constraints could be overcome through advances in chatbots and AI: Counsellor 5: If the chatbot was perfectly programmed and was able to display levels of… Counsellor 7: Intelligence. 
Counsellor 5: …empathy and yeah, intelligence and interact flexibly with the client, explore with them and then do all that stuff, I still feel like even if the client knew or even if it was a person on the other end who said, 'yes, I'm a chatbot', I think that that on its own would be a significant enough factor to take away from the perceived value of that service because they're not being -I imagine they feel… Counsellor 7: Not being heard. Counsellor 5: …like they're not being heard by a person. That they're not being supported. That they're still in this alone. Facilitator 1: Yeah, okay, that… Counsellor 5: I don't know how you'd humanise a program to the point where it feels like there's actually somebody there. Facilitator 1: Yeah, that's… Counsellor 6: It depends how far in the future we're talking. If we're talking the next five years but if we're talking 20 years maybe that's how we're all comfortable talking like to chatbots and chatbots are our life. Who knows how we'll feel in 20 years time? Counsellor 5: It's certainly not inconceivable that you can develop a relationship with a program. In this focus group interaction, counsellors discuss a future where relationships between humans and non-humans (e.g., chatbots) may further materialise through technological developments. Although chatbots were viewed as not affording empathic care in the present, participants suggested this may be overcome in the future if chatbots were to become further 'humanised' -a vision that retains the centrality of human forms of empathy as measures of quality care. Although many participants expressed the view that if chatbots replaced human counsellors, online care would lack empathy, most agreed that care provided by human and chatbot hybrid models could potentially afford more efficient care. Many participants expressed the view that chatbots could be used as a tool to perform certain tasks and supplement, rather than replace, human-centred care. For example, Joe (Male, 25, methamphetamine and diazepam) stated: [A chatbot] has positives and negatives and if people go down the right path and use it for purposes more based on using it as a tool rather than taking the human aspect out, I think it would have a lot more of a positive input than using it to get rid of people. A number of participants gave specific examples of how chatbots could be used as a tool in online care. For instance, John (Male, 61, alcohol) mentioned chatbots could be helpful "maybe as a referral thing" if they could assess a client's needs and provide phone numbers or referrals to a service. Sarah (Female, 22, promethazine) also viewed chatbots as being able to provide referrals: […] like if [a chatbot] flagged the word 'anxiety', and it went: 'Oh, here's a link to a really good video on how to deal with anxiety.' I think that would be good, if they had that kind of function. Counsellors also identified a number of simple functions that chatbots could perform, highlighting their affordances for online counselling in terms of information provision and triaging: Facilitator 1: Could you see any positive aspects or areas where that could be useful in terms of online sessions? Counsellor 1: Maybe, only for information-giving -literally only for information-giving. Counsellor 2: Yeah, [chatbots could provide] counselling online for somebody to jump on and just the first question is: 'Do you have an alcohol or drug issue? Yes? Okay, stay on the chat!'
Because of [the name of the service being] Counselling Online so many people jump on and they don't actually have an alcohol or drug issues […] Facilitator 2: Sort of filtering? Counsellor 2: Yeah. Elaborating on this concept, Phil (Male, 38, synthetic cannabis) discussed how hybrid human and chatbot systems could effectively deliver online interventions: Phil: I actually think that as long as [chatbots are] well-setup and well-qualified and well-trained, but also that there's a good level of human handoff, where humans can get involved very quickly and easily, I actually can see places where they would be useful in counselling. Things like maybe where you're just -where literally it's just asking for check-ins: 'Have you used [drugs] today, yesterday?' that kind of thing… 'Have you felt these things?' […] This is the domain the chatbots are useful for. Then I can see how they could at least do some of the legwork and do a little bit of the background before it went to a human. I can only see them as being a tool used by human counsellors rather than in any way taking over interactions. Facilitator: It sounds like there's a limit to how much they could provide? Phil: Yeah, but I could see how they could at least shoulder a little bit of the workload of gathering basic information and sticking it into some kind of metadata repository or database and an interaction log, doing a bit of that kind of initial data collection. These accounts point to various possibilities for human/chatbot hybridised systems to deliver more efficient online care. In the context of 'more-than-human' (Dennis, 2019) care networks, participants' accounts suggest that the use of AI technologies could help to distribute counselling workloads by enabling counsellors working in conjunction with chatbots to provide more care to more people. It is also worth reflecting on Phil's reference to chatbots needing to be "well-qualified and well-trained". At face value, describing the need for chatbots to have 'qualifications' is a rather odd formulation. However, perhaps Phil's account can best be explained by the blurring of boundaries between human and non-human actors such as chatbots: indeed, we normally regard obtaining qualifications as a human activity, the outcome of human learning and the development of specific skills and competencies. As Haraway observed, the "leaky distinction" (Haraway, 2006, p. 120) between human and machine is increasingly apparent as modern medicine becomes populated by cyborgs (actors that are simultaneously human and machine - for example, the surgeon with a robot-assisted surgical tool). In our participant accounts, we observe both pessimism and optimism regarding the effects of "complex hybridization" (Haraway, 2006, p. 144) within future healthcare. The conjoint work of humans and chatbots is perceived as having a double-edged potential to constrain empathic care, but also to increase the efficiency and reach of care delivered online. In addition to more efficient care, some participants suggested that chatbots could help to overcome certain limitations of human counsellors by, for example, minimising human error. For Sarah (Female, 22, promethazine), thinking about previous encounters with human health professionals led to a humorous reflection that she wished she was talking to a robot: Sarah: I don't think -yeah, I don't think [chatbots] would work as well... But then you also get really shit counsellors, though, where you wish that you were talking to a robot, so I don't know.
Sarah's quote indicates that, rather than inevitably materialising in particular ways, chatbots (or human counsellors for that matter) may afford different possibilities for care in different situations - sometimes "good", sometimes "bad" - depending on how chatbots or counsellors came together within networks of interacting actors within care encounters. In another account, Lucas (Male, Age not available, alcohol) explained that chatbots in online care may reduce human errors, thus affording a higher standard of care: If you've got a chatbot talking to you which is highly intelligent and well-programmed compared to a human -chatbot, I -for me, maybe there's an advantage in the chatbot. Yeah, maybe, maybe because of human error [laughs] and the like, yeah. Spencer's (Male, 24, cannabis) discussion of chatbots affording increased expertise revealed a sense that individual humans may be limited in their knowledge and ability compared to a future chatbot. A chatbot could be programmed with a greater knowledge or evidence base from which to work. With regard to chatbots, Spencer stated: [Having chatbots delivering care] means the collective knowledge of say 50 counsellors could help one person as opposed to the full knowledge of one. Here, Spencer highlights that a future affordance of chatbots is their potential to draw on multiple knowledges about alcohol and other drug use and addiction to inform counselling practice. A chatbot's ability to draw on multiple ontologies of addiction might provide people experiencing alcohol and other drug concerns with different options for understanding their experiences, as well as offering a range of treatment options. In this way, tailoring care to a client's needs may have clinical benefits. However, whilst this might make for holistic, tailored care, it is unclear whether offering a broad spectrum of interventions (e.g., abstinence, harm reduction services) based on different ontologies of addiction (e.g., whether addiction is a disease, social problem) might contribute to confusion among clients, which in turn could undermine treatment engagement (Barnett, Hall, Fry, Dilkes-Frayne, & Carter, 2018). In addition to reflecting on the benefits of chatbots for online care, clients and counsellors also expressed concerns about how the use of chatbots may constrain counselling interactions. Some participants expressed the view that if they interacted with a chatbot, they would be less likely to be open and honest in comparison to speaking with a human counsellor. Rob (Male, 30, alcohol) suggested he would share "way less" if he was communicating with a chatbot. In another example, Rachel (Female, 22, methamphetamine) described how interacting with a chatbot might also limit the information she disclosed during therapy: Facilitator: Yeah, okay. Let's say, for example, in some futuristic world, I guess, that they could program a machine to actually talk, I guess like a human […] How would that affect the way you interact and share information? Rachel: I wouldn't. Yeah, I just wouldn't. But, okay, let say if in the futuristic world, [chatbots could]. I certainly wouldn't be as open, I certainly -I wouldn't feel comfortable enough to be open, at all. In another account, Nicole (Female, 30, alcohol) expressed the view that while she would not necessarily share less with a chatbot, the nature and content of the information shared would be different.
This was because, for Nicole, chatbots lack the capacity for inferential thinking, meaning that any information she shared would have to be curated and simplified: Facilitator: So would the way that you would potentially interact and share information with a chatbot -would that I guess be different from the way you'd interact and share information with a human counsellor? Nicole: I think so. I think because I would -if I knew I was talking to a chatbot, I would have to make sure I included the details and the phrases that I wanted feedback on. Whereas, the inferential capacity of a human to go, 'I think what you're saying, even though you're not saying it, is that you are really stressed.' Or: 'That you are really frustrated and angry.' Rather than a computer going: 'Oh it sounds like you are thirsty and that's why you are drinking.' So you need [laughs] -I think you would have to like -you would have to know the answers to your questions and just want somebody else to say them. Whereas, yeah, when you're approaching a human, you don't have to know and you don't have to -you can be more honest […] Yeah, you don't have to disclose everything either. Because a human can read between the lines. Similarly, the ability of human counsellors to "read between the lines" was also mentioned by a counsellor in a focus group. In this account, counsellors reflected on sensitive issues raised in online counselling and how chatbots may not be able to address clients' needs: Counsellor 1: [Talking about issues including] grief or domestic violence, that's something that as humans we can probably acknowledge and then possibly contain the conversation. But I'm not sure how a robot would do the same thing. Counsellor 3: Yeah. Counsellor 2: Yeah, [or be] able to read between the lines and those things. While not the only element of care, Nicole and the counsellors' accounts reiterate the importance of the affective dimension of care (see Dennis, 2019; Duncan et al., 2019) . In these examples, this includes the ability of a human counsellor to understand clients' concerns, and moreover, make them feel comfortable and safe to discuss sensitive topics. However, whilst the ability of human counsellors to "read between the lines" in online encounters is described as desirable, in certain situations this could also be interpreted as jumping to conclusions or making assumptions without asking people how they feel about or understand their experiences. Given that people with alcohol and other drug concerns tend to experience stigma in healthcare settings, inferring and making assumptions about their concerns could also reinforce stigma, or relegate a range of other possible explanations for alcohol and drug consumption to the background. While some participants viewed chatbots as potentially having a negative influence on client interactions, others suggested that chatbots may have a positive influence in online counselling contexts. For example, Nicole (Female, 30, alcohol) remarked that she wouldn't have to temper her behaviour in order to not offend a human: Facilitator: Yep. Any positive aspects at all of a chatbot? Nicole: You definitely don't have to worry about offending anybody! In another example, Tim (Male, 76, alcohol) noted the privacy that chatbots may afford could lead to higher rates of treatment engagement in certain cases: Facilitator: Do you think there would be any positives, maybe not for yourself, but for other people about this more computerised response or anything? 
Tim: I can imagine people wanting to try [speaking to chatbots] for reasons of extreme privacy and confidentiality. These accounts illustrate that for different types of clients, with different care needs (e.g., those wanting human contact, or seeking privacy), the affordances of chatbots, and how these may influence client conduct as part of a human-computer interaction (Norman, 1988), were variable. Rather than being a pre-programmed effect designed by humans, affordances were seen to emerge as a result of human and non-human encounters in online care spaces.

When reflecting on the use of chatbots in online care, some participants drew on their everyday experiences of encountering chatbots, while others drew on dystopian, futuristic imaginaries informed by popular cultural depictions of machines and their effect on society. These occasional references to dystopian futures point to a fear of AI technologies supplanting humans among some, but not all, participants. A strong theme in our analysis was the concern that the growing use of chatbots in online settings could lead to the replacement of human counsellors and undermine the kind of empathic care considered especially important for effective online counselling. Moreover, the 'human element' was viewed as an essential part of care that should be maintained. Viewed in light of recent critical work on 'care' (Puig de la Bellacasa, 2017; Dennis, 2019; Duncan et al., 2019; Farrugia et al., 2019; Mol, 2008), our participants' concerns raise important ethico-political questions about the future of online care. Specifically, if the affective practices definitional to care, that is, care characterised by empathy and mutual understanding, are undermined or foreclosed by the use of chatbots in online counselling, exactly what type of 'care' materialises in these settings? If it is a perfunctory form of 'service delivery', is it the type of 'care' that alcohol and other drug digital interventions should afford? 'Service delivery' in this impoverished form risks entrenching, rather than alleviating, the problems encountered in face-to-face settings and thus undermining the goals of digital healthcare to reach a wider audience, and to overcome barriers such as stigma and low rates of treatment-seeking (Silva, Rodrigues, de la Torre Díez, López-Coronado, & Saleem, 2015). Beyond these fears about the application of chatbots in alcohol and other drug treatment futures, most participants were receptive to the possibility of humans and chatbots working in unison to perform simple, basic data-gathering or repetitive tasks so that human intelligence and agency might be freed up to engage in more complex activities. For example, AI technologies could be used to collect basic demographic information, take client histories, and triage and refer clients to appropriate services, resources or health professionals. In this way, participants in this study expressed greater acceptance of hybridised care delivery, where the non-human in the form of the chatbot supported the human to provide care and services. Some participants raised concerns that interacting with a chatbot may constrain open and honest communication between the client and chatbot. Others suggested a number of practical benefits of interacting with a chatbot, such as the increased assurance of clients' privacy and confidentiality.
Situating our work within scholarship on technological affordances (Gibson, 1979; Hutchby, 2001; Latour & Venn, 2002; Norman, 1988), our analysis suggests that chatbots are perceived as 'furnishing' (Gibson, 1979) clients and counsellors different opportunities in different circumstances. Thus, the affordances that chatbots may offer clients who access online care are not fixed, stable or in-built features of the technologies themselves. Rather, depending on the context, desires and actions of humans interacting with chatbots, the human-computer interaction has the potential to emerge in multiple ways (Latour & Venn, 2002; Norman, 1988). Recognising the complexities of delivering online care as evidenced in our participants' accounts, we have proposed a 'more-than-human' model of care, attuned both to the importance of traditional (human) modes of care and to the affordances of AI-driven technologies. We suggest such a model has the potential to disrupt outmoded, anthropocentric framings of care that overly rely on individual treatment providers, to the exclusion of forms of care that emerge at the intersection of human counsellors and non-human technologies. Embracing a 'more-than-human' approach opens the way for care provision to be distributed among human and non-human actors while recognising the continuing need for traditional counselling and also maximising the affordances and agency of technological actors (e.g., AI-driven chatbots). The distribution of care across the more-than-human spectrum has the potential to support counsellors to provide high-quality care to more people in need - an issue that is of particular salience for alcohol and other drug counselling in Australia, where unmet demand persists (Ritter, Chalmers, & Gomez, 2018). Beyond viewing chatbots as a technological solution, it is vital that policymakers formulating and implementing future technological change consider the social effects of digital health technologies. Addressing users' perceptions of the benefits and limits of digital health interventions such as chatbots and apps is vital to inform the design and deployment of new technologies. When designing digital health interventions, focusing only on quantitative outcome measures, such as whether an intervention increases rates of recovery or reduces rates of relapse, precludes consideration of the ways clients and counsellors interact with new technologies, and the effects these interactions generate. For example, in our own work, we see how clients and counsellors have concerns that care delivered online by chatbots may lack the empathy of a traditional, therapeutic relationship, thus potentially limiting the quality of care delivered. A broadening of evaluation and 'evidencing' methods, one that also takes into account the social implications of novel digital health interventions as they emerge in local implementation situations, is vital to inform future digital health development (Murray et al., 2016; Rhodes & Lancaster, 2019). Moreover, rigorous, critical research is needed into the social and political dimensions of 'more-than-human' alcohol and other drug interventions to minimise any damaging or counterproductive effects, and maximise the potential benefits of these new modes of care.

AC receives an NHMRC Career Development Fellowship (No. APP1123311). This project was funded through the Monash University Faculty of Arts and Faculty of Medicine, Nursing and Health Sciences Interdisciplinary Research Scheme. Declarations of interest: None.
References

When the brain leaves the scanner and enters the clinic: The role of neuroscientific discourses in producing the problem of "addiction"
Implications of treatment providers' varying conceptions of the disease model of addiction: A response
Healthcare ex machina: Are conversational agents ready for prime time in oncology? Clinical and Translational Radiation Oncology
"Bed bugs and beyond": An ethnographic analysis of North America's first women-only supervised drug consumption site
Gender affordances of conversational agents
Using thematic analysis in psychology
Technological innovations in addiction treatment
An artificially intelligent chat agent that answers adolescents' questions related to sex, drugs, and alcohol: An exploratory study
Intelligent conversational agents in healthcare: Hype or hope?
The injecting 'event': Harm reduction beyond the human
Injecting Bodies in More-than-Human Worlds: Mediating Drug-Body-World Relations
Going online: The affordances of online counseling for families affected by alcohol and other drug issues
Mapping the spatial and affective composition of care in a drug consumption room in Germany
Conflict and communication: Managing the multiple affordances of take-home naloxone administration events in Australia
The digital future of mental healthcare and its workforce: A report on a mental health stakeholder engagement to inform the Topol Review
Governing through problems: The formulation of policy on amphetamine-type stimulants (ATS) in Australia
'Affording' new approaches to couples who inject drugs: A novel fitpack design for hepatitis C prevention
The ecological approach to visual perception
Therapy and e-therapy - preparing future psychiatrists in the era of apps and chatbots
Emptying the future: On the environmental politics of anticipation
A cyborg manifesto: Science, technology, and socialist-feminism in the late 20th century
FRANK: Free practical drug advice for adults and children
Technologies, texts and affordances
The productive techniques and constitutive effects of 'evidence-based policy' and 'consumer participation' discourses in health policy processes
Conversational agents in healthcare: A systematic review
Morality and technology. Theory
Chatbot acceptance in healthcare: Explaining user adoption of conversational agents for disease diagnosis
Embodied conversational agents for the detection and prevention of suicidal behaviour: Current applications and open challenges
The politics of care in technoscience
Machines who think: A personal inquiry into the history and prospects of artificial intelligence
The logic of care: Health and the problem of patient choice
SimSensei demonstration: A perceptive virtual human interviewer for healthcare applications
Future progress in artificial intelligence: A survey of expert opinion
Unsettling care: Troubling transnational itineraries of care in feminist health
Evaluating digital health interventions: Key questions and approaches
Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study
How artificial intelligence and machine learning produced robots we can talk to
The psychology of everyday things. US: Basic Books
Producing alcohol and other drugs as a policy 'problem': A critical analysis of South Africa's 'National Drug Master Plan'
Matters of care: Speculative ethics in more than human worlds
Evidence-making interventions in health: A conceptual framing
Autonomous virtual human agents for healthcare information support and clinical interviewing
An argument against the implementation of an "overarching universal addiction model" in alcohol and other drug treatment
"What constitutes a 'problem'?" Producing 'alcohol problems' through online counselling encounters
Making multiple 'online counsellings' through policy and practice: An evidence-making intervention approach
Mobile-health: A review of current state in 2015
WHO launches a chatbot on Facebook Messenger to combat COVID-19 misinformation

Acknowledgements

We express our thanks to the clients and counsellors who gave their time to participate in this study, and to the anonymous reviewers and editor(s) for their constructive feedback on our work. We also acknowledge the support of managers from the Counselling Online service, especially Rick Loos, Orson Rapose, and Darryl Jones.