key: cord-0052609-36bi27pr
authors: Blease, C; Locher, C; Leon-Carlyle, M; Doraiswamy, M
title: Artificial intelligence and the future of psychiatry: Qualitative findings from a global physician survey
date: 2020-10-27
journal: Digit Health
DOI: 10.1177/2055207620968355
sha: a64485886900cdc69f3d761e324309b4b2bcdcba
doc_id: 52609
cord_uid: 36bi27pr

BACKGROUND: The potential for machine learning to disrupt the medical profession is the subject of ongoing debate within biomedical informatics.
OBJECTIVE: This study aimed to explore psychiatrists' opinions about the potential impact of innovations in artificial intelligence and machine learning on psychiatric practice.
METHODS: In Spring 2019, we conducted a web-based survey of 791 psychiatrists from 22 countries worldwide. The survey measured opinions about the likelihood that future technology would fully replace physicians in performing ten key psychiatric tasks. This study involved qualitative descriptive analysis of written responses ("comments") to three open-ended questions in the survey.
RESULTS: Comments were classified into four major categories in relation to the impact of future technology on: (1) patient-psychiatrist interactions; (2) the quality of patient medical care; (3) the profession of psychiatry; and (4) health systems. Overwhelmingly, psychiatrists were skeptical that technology could replace human empathy. Many predicted that 'man and machine' would increasingly collaborate in undertaking clinical decisions, with mixed opinions about the benefits and harms of such an arrangement. Participants were optimistic that technology might improve efficiencies and access to care, and reduce costs. Ethical and regulatory considerations received limited attention.
CONCLUSIONS: This study presents timely information on psychiatrists' views about the scope of artificial intelligence and machine learning in psychiatric practice. Psychiatrists expressed divergent views about the value and impact of future technology, with worrying omissions about practice guidelines, and ethical and regulatory issues.

Worldwide it is estimated that 1 in 6 people suffer from a mental health disorder, and the personal and economic fallout is immense. 1 Psychiatric illnesses are among the leading causes of morbidity and mortality; between 2010 and 2030 this burden is estimated to cost the global economy $16 trillion. 2 Among younger people, suicide is the second or third leading cause of death. 2 Older generations are also affected by mental illness: currently, an estimated 50 million people suffer from dementia worldwide, and the World Health Organization (WHO) predicts this will rise to 80 million by 2030. 3 Stigmatization, low funding and lack of resources - including considerable shortages of mental health professionals - pose significant barriers to psychiatric care. 4, 5 According to recent WHO data, the per-capita availability of psychiatrists in some countries is 100 times lower than in affluent countries. 2 Indeed, even in wealthy countries, such as the USA - which has around 28,000 psychiatrists 6 - those living in rural or poverty-stricken urban communities experience inferior access to adequate mental health care. It is anticipated that demographic and societal changes will put even greater pressure on mental health resources in the forthcoming decades. 5
These pressures include: ageing populations; increased urbanization (with associated problems of overcrowding, polluted living conditions, higher levels of violence, illicit drugs, and lower levels of social support); migration, at the highest rate recorded in human history; and the use of electronic communications, which has amplified concerns about the effects of the internet on mental health and sociality. 7-10

Against these myriad challenges, recent debate has centered on the potential of big data, machine learning (ML) and artificial intelligence (AI) to revolutionize the delivery of healthcare. 11-14 According to AI experts, machine learning has the potential to extract novel insights from "big data" - that is, vast accumulated information about individual persons - by yielding precise patterns relevant to patient behavior and health outcomes. 15, 16 An estimated 10,000 or more apps related to mental health are now available for download; the vast majority of these apps have not been subject to randomized controlled trials (RCTs), and many may even provide harmful 'guidance' to users. 17 Mining this information for regularities, informaticians argue, may produce precision in diagnostics, prognostics, and personalized treatment plans. 18

Aside from health information gathered via electronic health records and patient reports, an exponentially increasing volume of data is being accumulated via in situ personal digital devices, especially smartphone usage. Social media posts, apps, purchases, and personal internet history are already being used to support predictions about patient health, behavior, and wellbeing 19; other passively accumulated data from GPS, accelerometer sensors, text and call logs, and screen on/off time can be used to infer mobility, sociability, and other behaviors of smartphone users. Collectively, so-called 'digital phenotyping' provides a novel, indirect, and nontraditional route to yield inferences about patients' health status; it also presents a novel challenge to orthodox boundaries of traditional medical expertise. 20, 21

In light of these advances, some medical informaticians argue that the core functions of physicians - gathering and monitoring patient information, diagnostics, prognostics, and formulating personal treatment plans - are vulnerable to disintermediation. 11, 15, 22-24 However, other AI experts predict that, in the future, physicians will always play a role in medical care, with 'man and machine' working as 'team-players'. 18, 25, 26 When it comes to humanistic elements of medical care, many AI experts also argue that by outsourcing some aspects of medical care to machine learning, physicians will be freed up to invest more time in higher quality face-to-face doctor-patient interactions. 26 Going even further, and drawing on findings in the nascent field of affective computing, some informaticians speculate that in the long term computers may play a critical role in augmenting or replacing human-mediated empathy; for example, emerging studies suggest that under certain conditions, computers can surpass humans when it comes to accurate detection of facial expressions and personality profiling. 27, 28
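To make the notion of 'digital phenotyping' introduced above more concrete, the sketch below shows how passively sensed smartphone signals of the kind described (GPS fixes, screen-on time, call activity) might be summarized into crude daily behavioral features such as mobility and sociability. It is a minimal, hypothetical illustration: the data structure, field names, and features are assumptions made for this example and are not drawn from any of the studies cited.

```python
# Illustrative sketch of 'digital phenotyping' feature extraction from
# passively sensed smartphone data. All data, field names, and features
# are hypothetical; real pipelines are far more involved.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt
from typing import List, Tuple

@dataclass
class DaySample:
    gps_fixes: List[Tuple[float, float]]   # (latitude, longitude) samples over one day
    screen_on_minutes: float               # total screen-on time
    outgoing_calls: int                    # calls initiated by the user

def haversine_km(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def daily_features(day: DaySample) -> dict:
    """Summarize one day of raw signals into crude behavioral features."""
    distance = sum(haversine_km(p, q) for p, q in zip(day.gps_fixes, day.gps_fixes[1:]))
    return {
        "mobility_km": round(distance, 2),          # proxy for physical activity / movement
        "screen_hours": round(day.screen_on_minutes / 60, 2),
        "sociability_calls": day.outgoing_calls,    # proxy for social contact
    }

if __name__ == "__main__":
    day = DaySample(
        gps_fixes=[(42.36, -71.06), (42.37, -71.05), (42.36, -71.06)],
        screen_on_minutes=310,
        outgoing_calls=2,
    )
    print(daily_features(day))
```

In practice, digital phenotyping pipelines draw on far richer sensor streams and require privacy safeguards and validation against clinical outcomes before any inference about health status can be justified.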
What do patients think about these advances? A study by Boeldt and colleagues found that patients were more comfortable than physicians with the use of technology to perform diagnostics. 29 Surveys suggest a high level of interest among patients in using mobile technologies to monitor their mental health. A recent US survey reported that 70 per cent of patients had an interest in using mobile technologies to track their mental health status. 30 Studies also indicate that patients from diverse socioeconomic and geographical regions express willingness to use apps to support symptom tracking and illness self-management. 30, 31 Recent findings also indicate that at least some patients with schizophrenia already use technology to manage their symptoms, or for help-seeking. 31, 32 However, increasing interest has not so far translated into high levels of usage of mHealth, and some surveyed patients express concerns about privacy. 33

Amid the debate, hype, and uncertainties about the impact of AI on the future of medicine, limited attention has been paid to the views of practicing clinicians, including psychiatrists 29 - though there is evidence that this is changing. 34-36 In 2018, a mixed methods survey of over 500 psychiatrists in France investigated attitudes to the use of disruptive new technologies. 37 The authors reported that there was moderate acceptability of connected wristbands for digital phenotyping and of ML-based blood tests and magnetic resonance imaging, but speculated that attitudes were "more the result of the lack of knowledge about these new technologies rather than a strong rejection". 37 In addition, psychiatrists expressed concerns about the impact of technologies on the therapeutic relationship, data security and storage, and patient privacy. In 2019, a focus group study by Bucci and colleagues in the UK found that many mental health clinicians believed that more time and resources should be invested in staff training and resources rather than in the adoption of digital technologies, but some expressed fears that aspects of their job could be disintermediated. 38 However, only 4 psychiatrists participated in this study. Another small-scale survey of 131 mental health clinicians (n = 27 psychiatrists) in the USA, Spain and Switzerland investigated clinicians' intentions to use and recommend e-health applications among patients with postpartum depression. 39 The survey reported that, compared with primary care doctors, midwives and nurses, psychiatrists and clinical psychologists attributed lower utility to e-health applications for assessing, diagnosing, and treating maternal depression.

While current surveys into psychiatrists' attitudes to ML/AI-enabled tools provide some insights into clinicians' attitudes about adoption, and the potential impact of these technologies on aspects of clinical care, our aim in this survey was to build on these findings to focus, more directly, on how psychiatrists envisage the impact of AI/ML technologies on key components of their job. Specifically, we aimed to determine whether psychiatrists believed their profession would be impacted by advances in AI/ML in the short term (25 years from now) and in the long term, and to identify the possible positive and negative effects of any such developments. Finally, our aim was to expand on existing research by widening the sample of psychiatrists in our study by undertaking a global survey. To address these objectives, we adapted a recently published mixed methods survey conducted among UK primary care physicians on the topic of disintermediation. 35, 36 We employed quantitative methods to investigate the global psychiatric community's opinions about the potential of future technologies to replace key physician tasks in mental health care.
However, in light of the limited research into psychiatrists' views about the impact of AI/ML on their profession, and on the potential harms and benefits of AI/ML, we incorporated three open-ended questions into the survey (see Table 1). These open-ended questions were aimed at acquiring more nuanced insights from our study population.

A complete description of the survey methods and quantitative results has been published previously. 40 In summary, we conducted an anonymous global web-based survey of psychiatrists registered with Sermo, a secure digital online social network for physicians that is also used for conducting survey research. 41 Participants were randomly sampled from the membership of Sermo.org [23]. This is one of the largest online medical networks in the world, with 800,000 users from 150 countries across Europe, North and South America, Africa, and Asia, employed in 96 medical specialties. Users are registered and licensed physicians. Invitations were emailed and displayed on the Sermo.org home pages of randomly selected psychiatrists in May 2019, with quasi-stratification. To overcome the limitations of the small, national samples in existing surveys, our aim was to recruit one third of participants from the USA, one third from Europe, and one third from the rest of the world. As this was an exploratory study, we aimed to target a sample size of roughly 750 participants to approximate a previous survey of general practitioners' views, on which the current project was based. 35, 36 The survey was closed with 791 respondents.

Table 1. Open comment questions embedded in the survey.
1. Please briefly describe the way(s) you believe artificial intelligence/machine learning will change psychiatrists' jobs in the next 25 years. (a)
2. Please provide any brief comments you may have about the potential benefits and/or potential harms of artificial intelligence/machine learning in psychiatry.
3. We value your opinion. If you have any other comments about this survey topic or recommendations for other questions we should include, please add them below.
(a) All participants were requested to respond to Questions 2 and 3. However, Question 1 was preceded by the following question: "In 25 years, of the following options, in your opinion what is the likely impact of artificial intelligence/machine learning on the work of psychiatrists?" Options included "No influence (jobs will remain unchanged)", "Minimal influence (jobs will change slightly)", "Moderate influence (jobs will change substantially)" or "Extreme influence (jobs will become obsolete)". Participants who selected the first response ["No influence (jobs will remain unchanged)"] were not invited to respond to Question 1.

This was an anonymous survey, and an analysis of de-identified survey data was deemed exempt research by the Duke University Medical Center Institutional Review Board in April 2019 (reference: Pro00102582). Invited participants were advised that their identity would not be disclosed to the research team, and all respondents gave informed consent before participating.

The study team devised an original survey instrument specifically designed to investigate psychiatrists' opinions about the impact of artificial intelligence on psychiatric practice. As outlined in the survey (online Appendix 1), participants were expressly requested to provide their opinions about "artificial intelligence and the future of psychiatric practice". We avoided terms such as "algorithms" in favor of generic descriptors such as "machines" and "future technology."
This was in part to avoid confusion among physicians unfamiliar with this terminology, and to avert technical debates about the explanatory adequacy or specificity of terms of art such as 'machine learning'. We adapted a survey instrument from a previously published primary care survey 35 to investigate whether psychiatrists believed 10 key aspects of their job could be fully replaced by future technology. To avoid ambiguities in how participants interpreted the question, we focused questions on the possibility of full replacement rather than partial replacement. In addition, Likert scales allowed respondents to provide discretionary views about the likelihood of replacement for each task. The 10 key tasks were: 1) provide documentation (e.g., update records about patients); 2) perform a mental status examination; 3) interview psychiatric patients in a range of settings to obtain medical history; 4) analyze patient information to detect homicidal thoughts; 5) analyze patient information to detect suicidal thoughts; 6) synthesize patient information to reach diagnoses; 7) formulate personalized medication and/or therapy treatment plans for patients; 8) evaluate when to refer patients to outpatient versus inpatient treatment; 9) analyze patient information to predict the course of a mental health condition (prognoses); and 10) provide empathetic care to patients. The survey was developed in consultation with psychiatrists in the USA (n = 2) and was pretested with psychiatrists from other countries (n = 9) to ensure face validity.

Among the results of the closed-ended questions, the majority of the 791 psychiatrists surveyed (75%) believed that future technology could fully replace human psychiatrists in updating medical records, and around 1 in 2 (54%) believed that future technology could fully replace psychiatrists in synthesizing clinical information. 40 Only 17% of respondents believed that human psychiatrists could be fully replaced in the provision of empathic care, and overall only 4% believed that future technology would make their job obsolete. 40

To maximize the response rate for the qualitative component, as noted, the survey instrument included three open-ended questions that allowed participants to provide more nuanced feedback on the topic of disintermediation within the questionnaire (see Table 1). Comments not in English were translated by Sermo; this process was undertaken by experienced medical text translators and was subject to further proofreading and in-house checks. Descriptive content analysis was used to investigate these responses. 42, 43 Responses were collated and imported into QCAmap (coUnity Software Development GmbH) for analysis. The comment transcripts were initially read numerous times by CB, CL, and MLC to achieve familiarization with the participant responses. Afterward, an inductive coding process was employed. This widely used method is considered an efficient methodology for qualitative data. 44-46 A multistage analytic process was conducted. First, we defined the three open-ended questions as our main research questions. Second, we worked through the responses line by line. Brief descriptive labels ("codes") were applied to each comment; multiple codes were applied to comments with multiple meanings. Comments and codes were reviewed by CB, CL and MLC. Third, after working through a significant amount of text, CB, CL and MLC met to discuss coding decisions, and subsequent revisions were made. This process led to a refinement of codes.
Finally, first-order codes were grouped into second-order categories based on the commonality of their meaning to provide a descriptive summary of the responses. 47 We followed the rules of summarizing qualitative content analysis for this step. 42

As outlined in the quantitative survey, 791 psychiatrists responded from 22 countries representing North America, South America, Europe, and Asia-Pacific. 40 Of the participants, 70% were male, and 61% were aged 45 or older (see Table 2). All respondents left comments (26,470 words), which were typically brief (one phrase or two sentences). As a result of the iterative process of content analysis, four major categories were identified in relation to the impact of future technology on: (1) patient-psychiatrist interactions; (2) the quality of patient medical care; (3) the profession of psychiatry; and (4) health systems (see Figure 1). These categories were further subdivided into themes, which are described below with illustrative comments; numbers in parentheses are identifiers ascribing comments to individual participants.

A foremost concern about the impact of future technology on psychiatry was the perceived "loss of empathy" and the absence of a therapeutic interpersonal relationship in the treatment of mental health patients. Although most responses were short - for example, "no empathy", or "lack of empathy and humanity" - a significant number of respondents also perceived limitations in whether technology could ever accurately detect human emotions via verbal or nonverbal cues; for example, AI could be overwhelmed as it tries to sort out body language, affect, lying, and conversational subtleties. Another dominant view concerned the broader implications of technology for the therapeutic relationship, with the majority of comments anticipating communication problems, lack of rapport, and potential harms to patients. Notably, the majority of responses assumed that future technology would incur loss of contact with clinicians and even incur harm.

Taking an opposing view, a few psychiatrists suggested that future technology might improve on human interactions; for example:

People interacting with machines is much easier than with fellow human beings. We are assessing this phenomenon today when children are having "best friends" who they have only met through Facebook. It is very comfortable to have an "avatar" as a friend. Because, we select when to cut them off.

When I accept patients that have seen other providers in my town, I am ever amazed and disappointed in the report of the care they've received. Seems patients don't seem to connect with doctors (or any provider for that matter) any more. So if there is no interpersonal connection/relationship, why not type into a computer?

Telepsychiatry
Only a few participants predicted an increase in the use of telepsychiatry, including the use of "psychotherapy via Skype". Notably, these comments tended to be neutral with respect to the potential benefits or harms of telepsychiatry on doctor-patient interactions.

However, one psychiatrist took an opposing and more optimistic view, responding that patients may exhibit greater trust in technology than in clinicians:

People will have the confidence in bold technology as they'll feel more confident that they can be treated more safely. [Participant 779]

The topic of data safety, misuse of data, and questions of privacy received only a small number of truncated comments; for example:

Can't keep patients' privacy - the data will be hacked.
In contrast to these optimistic responses, however, a considerable number of comments suggested that future technology would lead to an increase in medical error. Many of these comments specifically referred to an increased risk of diagnostic error.

Many participants anticipated that artificial intelligence would be "more objective", "fairer", or "unbiased" compared to human psychiatrists.

Participants expressed a broad range of opinions about the impact of future technology on the profession: from outright replacement of psychiatrists to displacement of key functions of practice, and from skepticism about any change to uncertainty about the future. Responses also indicated a wide array of attitudes about the potential influence on the field, from very negative to very positive, with many psychiatrists displaying neutral perspectives.

A common perspective was that specific aspects of the job would gradually be replaced by artificial intelligence, with some psychiatrists predicting that this would lead to outright elimination; for example:

Jobs will reduce as AI will replace humans. [Participant 15]

I believe psychiatrists (and physicians in general) will continue to be more and more marginalized by AI, and that more treatment decision making will be guided by AI in the future.

Multiple comments predicted that future technology could facilitate the work of psychiatrists. Although most responses were rather short - for example, "facilitation", or "it will make the job easy" - lengthier responses included:

This help could enable the psychiatrist to carry on with his work and to be more effective. [Participant 239]

It will help to relieve the burden on psychiatrists.

A considerable number of comments indicated psychiatrists will need to control and verify technology-based results, since machine recommendations would likely be error-prone; for example:

The problem is with being diagnosed by the machine. I think that the psychiatrist needs to verify the machine anyway. The machine cannot replace the human.

One potential harm is over reliance and not enough critical thinking about results, particularly results that support one's viewpoint. [Participant 381]

Furthermore, multiple comments suggested that psychiatrists and future technology might have a "job sharing" arrangement, with machines and humans complementing and enriching each other; for example:

Assistance and simplification of our work will be possible and will be welcome, freeing us from mechanical and boring jobs and preserving human knowledge in order to use it in an optimal way at crucial times. [Participant 199]

Many comments specified how future technology could facilitate the work activity of psychiatrists. Different aspects of the profession were discussed, and a major theme was the role of technology in improving administrative tasks, especially documentation; some respondents couched this as the only benefit to be accrued to psychiatric practice; for example:

Only benefit would be with some documentation or ordering.

However, not all participants agreed: a few believed that technology would lead to "more bureaucracy" and an increase in "administrative work".

A related, commonly perceived benefit was the provision of greater "consistency" or "standardization" in the application of evidence-based medicine and in clinical decision-making; for example:

AI may help psychiatrists to follow standardized protocols better, or to deviate from these protocols with better reasoning. [Participant 710]
Benefits will be to standardize and minimize inter-psychiatrist variability across diagnoses. [Participant 577]

Many comments indicated a role for "big data", "algorithms", and "data analysis" in augmenting clinical judgments, but responses were limited and typically fell short of explanatory detail.

With regard to decisions about treatment course, many respondents stressed that future technology will influence various areas, such as the formulation of the treatment plan and medication decisions; for example:

AI will strongly influence the technique of taking medical histories and be helpful in the selection of the best treatments. [Participant 291]

AI's ability to provide more complete information regarding patients' history and mental status will facilitate better management in terms of pharmacotherapy. [Participant 68]

In contrast, only a minority of physicians suggested that future technology will assist in determining the "effectiveness of therapy" [Participant 113]. Similarly, the use of brain imaging, genetic testing, and the use of AI in monitoring symptoms received only a small number of comments.

Many responses strongly suggested a risk of "dependence" on artificial intelligence in clinical decisions that would be inherently problematic. A minority of comments also suggested that future technology might result in a reduction of psychiatric skills and that psychiatrists may lose their "critical thinking"; for example:

Decision-making process will be based on low-quality statistical data, and this is not in patient's interests.

Going further, numerous comments were associated with considerable skepticism that future technology might ever replace the "art of medicine" and suggested that technology would "oversimplify" decisions; for example:

Psychiatry is an art. Not a science that you plug in symptoms into an algorithm and pop out a diagnosis and treatment plan and prognosis. [Participant 581]

Medicine is not black-and-white, but it is unlikely that an artificial intelligence will be able to detect that and make appropriate medical decisions on a regular basis without human intervention. [Participant 44]

More strongly, some psychiatrists surveyed stated that they do not expect future technology to impact their general professional status. Finally, multiple comments expressed uncertainties about the impact of technology on the status of the profession, with many psychiatrists admitting they were "unsure" or "don't know".

Comments encompassed a number of themes related to the impact of future technology on psychiatry at a systems level. The majority of these responses tended to be optimistic, with comments focusing on greater access to psychiatric care, lower costs, and improved efficiencies. Many participants described the many ways that technology could increase access to care, particularly in remote or underserviced settings; for example:

There is an already severe deficit for access to care to psychiatrist and this may bridge the gap.

Costs
Some psychiatrists speculated that technology could impact the cost of care. Many of these comments mentioned the potential benefit to health care organizations and insurance companies; for example:

It will be possible to access treatments at lower cost.
Only a few comments highlighted the potential for technology to stimulate scientific advancement, such as the facilitation of knowledge translation, increased knowledge exchange, or, more specifically, the identification of new biological markers or neuroimaging techniques:

Potential benefits: support in the exploratory, diagnostic and treatment process by considering all clinical variables and having scientific information always up to date. Being able to obtain the right information on all accumulated advances and experience in psychiatric treatment. (...) Exchange with colleagues about the development in neuro-imaging techniques and description of these by experts at a distance, making these increasingly affordable and easy to do, as well as at a lower cost.

As data points increase, with the addition of microbiomes, it will be necessary to have AI there to crunch the data into meaningful and interpretable factors guiding approaches toward wellness.

This extensive qualitative study provides cross-cultural insight into the views of practicing psychiatrists about the potential influence of future technology on psychiatric care (see Box 1). A dominant perspective was that machines would never be able to replace relational aspects of psychiatric care, including empathy and the development of a therapeutic alliance with patients. For the majority of psychiatrists these facets of care were viewed as essentially human capacities. Psychiatrists expressed divergent views about the influence of future technology on the status of the profession and the quality of medical care. At one extreme, some psychiatrists considered outright replacement of the profession by AI to be likely; yet others believed technology would incur no changes to psychiatric services. Many speculated that AI would fully undertake administrative tasks such as documentation; the vast majority of participants predicted that 'man and machine' would collaborate to undertake key aspects of psychiatric care such as diagnostics and treatment decisions.

Participants were split over whether AI would ultimately reduce medical error, or improve diagnostic and treatment decisions. Although many believed that AI could augment doctors' roles, they were skeptical that technology would ever be able to fully undertake medical decisions without human input. For many participants, diagnostics and other clinical decisions were quintessentially human skills. Relatedly, the risk of overdependence on technology as a driver of medical error was a common concern. More positively, many respondents felt technology would be fairer and less biased than humans in reaching clinical decisions. Similarly, participants expressed optimism that technology would play a key role in undertaking administrative duties, such as documentation. Other expected benefits from future technology included improved access to psychiatric care, reduced costs, and increased efficiencies in healthcare systems.

Although psychiatrists, like many informaticians, were optimistic that technology would increase access to psychiatric care, particularly among underserved populations, 48 they were cynical that technological advancements could fully replace the provision of human-mediated empathy and relational aspects of care. Interestingly, very few psychiatrists discussed telepsychiatry despite its potential to increase patient access and adherence to care; however, this may have been due to the survey's emphasis on machine learning and artificial intelligence.
Technical quality and issues of privacy and confidentiality remain key drawbacks with this medium (see: Regulation of mHealth and Ethical Issues, below), but patients report high levels of satisfaction, convenience, and comfort with this approach, and evidence indicates that telepsychiatry provides comparable reliability and clinical outcomes to face-to-face consultations. 49-51 Similarly, despite a growing body of research to support digital cognitive behavioral therapy, 52, 53 there was limited discussion among psychiatrists about the role of future technology encroaching on psychological treatments.

Responses revealed that psychiatrists have myriad, often disparate views about the value of artificial intelligence for the future of their profession. Notwithstanding the wide spectrum of opinion, and similar to the views of many experts, a dominant, overarching theme was speculation about a hybrid collaboration between 'man and machine' in undertaking psychiatric care. 18, 25, 26 Like informaticians, in particular, many participants highlighted the potential for AI in risk detection and preventative care. 19 More generally, psychiatrists - like informaticians - were optimistic about the benefits of AI in augmenting patient care, yet ergonomic and human factors remain ongoing issues in the design of technology. Without due attention to "alert fatigue" and clinical workflow, it is unclear whether AI applications will reap their anticipated potential in improving clinical accuracy, strengthening healthcare efficiencies or reducing costs. 54, 55

Although a considerable number of participants conceived of clinical decisions as, essentially and ineffably, a human "art", biomedical informaticians argue that the ability to mine large-scale health data for patterns in diagnosis and behavior is where machine learning presents unprecedented potential to disrupt diagnostic, prognostic, and treatment precision, yielding insights about hitherto undetected subtypes of diseases. 15, 16, 18, 23 Against the promise of pattern detection mediated by machine learning, many informaticians acknowledge that current AI is far from sufficient to fully undertake diagnostic decisions unaided, and significant breakthroughs will be necessary if machines are to avoid pitfalls in reasoning, and demonstrate the causal and counterfactual reasoning capacities necessary to reach accurate medical decisions. 54, 56 Importantly, however, and in contrast to many of the physicians surveyed who considered clinical reasoning to be, in essence, a necessarily human capacity, leading AI experts assume that one cannot rule out, a priori, the possibility that technology may one day be fully capable of fulfilling these key medical tasks.

Box 1. Key questions and findings.
What is already known about this topic?
- Informaticians and experts in artificial intelligence (AI) argue that big data and machine learning (ML) have the potential to revolutionize how psychiatric care is delivered.
- Recent survey evidence suggests that psychiatric patients, including those suffering from severe mental illness, express an interest in using mobile technologies to monitor and manage their condition(s).
- To date, in excess of 10,000 apps related to mental health are available to download; the vast majority have not been subject to RCTs.
- Indirectly, data accumulated from in situ personal digital devices can also be used to support predictions about patient health, behavior, and wellbeing - this is known as 'digital phenotyping'.
What are the new findings?
- 791 psychiatrists from 22 countries responded to an online survey via the physician social networking platform Sermo; 70% were male and 61% were aged 45 or older.
- Overwhelmingly, psychiatrists were skeptical that machines could replace humans in the delivery of empathic care, and in forging therapeutic alliances with patients.
- Many predicted that in the future 'man and machine' would increasingly collaborate on key aspects of psychiatric care, such as diagnostics and treatment decisions; psychiatrists were divided over whether technology would augment or diminish the quality of medical decisions and patient care.
- In contrast to the concerns of AI experts, psychiatrists provided limited or no reflection on issues relating to digital phenotyping, or on regulatory and ethical considerations related to mobile health.

Disparities between psychiatrists and AI experts were apparent in respect of some key developments and debates about the use of technologies in mental health. For example, only a minority of psychiatrists discussed - whether positively or negatively - the role of smartphones in data gathering. So far, however, encouraging evidence demonstrates that utilizing customized smartphone apps with patient health questionnaires can help to capture patients' symptoms in real time, allowing more sensitive diagnostic monitoring. 57, 58 Scarce reflection on the concept of digital phenotyping and the use of diagnostic and triaging apps among respondents contrasts with the predictions of biomedical informaticians, who argue that apps and mobile technologies will play an increasing role in accumulating salient personal health information. Wearable devices, it is argued, will help to facilitate real-time monitoring of signs and symptoms, improving accuracy and precision in information gathering, and helping to avoid barriers associated with routine check-ups, such as missed appointments, personnel shortages, and costs on mental health services. 59, 60

Patients' preferences and mobile health
Some psychiatrists argued that interfacing with technology would not be acceptable to many patients, who would prefer to receive care from doctors. As noted, previous survey research in mobile health (mHealth) undermines the certitude of these claims; for example, in a recent US survey of 457 adults identifying with schizophrenia and schizoaffective disorders, 42% reported "often" or "very often" listening to music or audio files to help block or manage voices; 38% used calendar functions to manage symptoms, or set alarms or reminders; 25% used technology to develop relationships with other individuals who have a lived experience related to mental illness; and 23% used technology to identify coping strategies. 32 Indeed, previously it was assumed that severity of mental health symptoms would pose a barrier to interest in mHealth 61; however, studies show that patients with serious conditions, including psychosis, indicate high levels of interest in the use of mobile applications to manage and track their symptoms and illness. 31, 32, 62 As Torous et al argue, it may be that patients are more comfortable using mobile technology to report and monitor symptoms than earlier methods such as sending text messages to clinicians, and that such a medium reduces stigma. 62
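To illustrate the kind of app-based symptom capture discussed above - for example, a smartphone-administered depression questionnaire such as the PHQ-9 cited earlier (57) - the sketch below shows how an app might score a completed questionnaire and map the total to the standard severity bands. The scoring rule (nine items rated 0-3, total 0-27) reflects the published PHQ-9 cut-offs; the surrounding app workflow is a hypothetical illustration rather than any system described in this study.

```python
# Illustrative sketch: scoring a PHQ-9 depression questionnaire completed in a
# symptom-tracking app. Severity bands follow the standard published cut-offs;
# the app context itself is hypothetical.
from typing import List

SEVERITY_BANDS = [
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def score_phq9(item_responses: List[int]) -> dict:
    """Sum nine items, each rated 0-3, and map the total to a severity band."""
    if len(item_responses) != 9 or any(r not in (0, 1, 2, 3) for r in item_responses):
        raise ValueError("PHQ-9 requires nine responses, each scored 0-3")
    total = sum(item_responses)
    label = next(name for lo, hi, name in SEVERITY_BANDS if lo <= total <= hi)
    return {"total": total, "severity": label}

if __name__ == "__main__":
    # Example: a patient logging symptoms between appointments
    print(score_phq9([1, 2, 1, 2, 1, 1, 1, 1, 0]))  # {'total': 10, 'severity': 'moderate'}
```

A real application would also need to handle missing items, responses indicating possible self-harm (item 9), and the secure transmission of results to clinicians.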
Relatedly, the co-production of medical notes - for example, patients entering information via semi-structured online questions prior to medical appointments - may also play a role in reducing barriers to help-seeking. 63 Although research is ongoing, initial disclosures of symptoms via online patient portals may mitigate stigmatization and feelings of embarrassment in initiating conversations about mental health issues with physicians. 64 Despite patient interest and evidence of high adoption rates for health and wellness apps, there remain well-documented problems with drop-off rates and how to design for continuance - issues that surveyed psychiatrists did not directly discuss. 65, 66

Regulation of mHealth
Conspicuously, participants provided scarce commentary about the regulatory ramifications of artificial intelligence for patient care. As noted, over 10,000 apps related to mental health are available to download, yet most have not been adequately investigated. 17 While recent meta-analyses and systematic reviews indicate that a number of safe, evidence-based apps exist for monitoring symptoms of depression and schizophrenia, and for reducing symptoms of anxiety, patients and clinicians lack adequate guidelines to facilitate recommendations. 67-69

On the other hand, many psychiatrists expressed enthusiasm about the potential of future technology to provide more objective and less biased clinical judgments. This optimism appeared to overlook concerns associated with "algorithmic biases" - the risk of discrimination against patients associated with inferior design and implementation of machine learning algorithms. 70, 71 As AI experts and ethicists warn, bias can become baked into algorithms when demographic groups (for example, along the lines of ethnicity, gender, or age) are underrepresented in the training phases of machine learning. Without adequate regulatory standards for the design and ongoing evaluation of algorithms, medical decisions informed by machine learning may exacerbate rather than diminish discrimination arising in clinical contexts.
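One way to make the algorithmic bias concern above concrete is a simple subgroup audit: comparing a model's error rates across demographic groups, since groups underrepresented in training data can be missed more often. The sketch below is illustrative only; the triage model, data, and group labels are hypothetical and not drawn from any system discussed in this study.

```python
# Illustrative subgroup audit: compare false-negative rates of a hypothetical
# ML triage model across demographic groups. All data here are made up.
from collections import defaultdict
from typing import Iterable, Tuple

def false_negative_rates(
    records: Iterable[Tuple[str, int, int]]  # (group, true_label, predicted_label)
) -> dict:
    """Compute the false-negative rate per demographic group.

    A false negative is a patient who needed follow-up (true_label = 1)
    but whom the model flagged as low risk (predicted_label = 0).
    """
    missed = defaultdict(int)
    positives = defaultdict(int)
    for group, truth, prediction in records:
        if truth == 1:
            positives[group] += 1
            if prediction == 0:
                missed[group] += 1
    return {g: missed[g] / positives[g] for g in positives}

if __name__ == "__main__":
    audit = false_negative_rates([
        ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
        ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
    ])
    print(audit)  # group_b is missed twice as often as group_a in this toy data
```

Audits of this kind are only a starting point: they do not by themselves establish the cause of a disparity or how it should be remedied.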
The US Food and Drug Administration (FDA) has so far adopted a deliberately cautious approach to clarifying medical software regulations. 72 Some tech companies have emerged as "default arbiters and agents responsible for releasing (and on some occasions, withdrawing) applications". 17 As medical legal experts warn, allowing unregulated market forces to determine 'kitemarks' of medical standards is inadequate to protect patient health. 73

Related to regulatory issues, few comments - only nine in total - weighed in on ethical issues related to protections for sensitive personal data. Loss of patient data and privacy remain serious concerns for mobile applications and telepsychiatry. In 2018 the European Union (EU) enacted its General Data Protection Regulation (GDPR), aimed at ensuring citizens have control of their data and provide consent for the use of their sensitive personal information. The US has considerably weaker data privacy rules, and while legislation similar to the GDPR is mooted to come into effect in California in January 2020, no comparable laws have been enacted at a federal level in the USA, nor is there legislative enthusiasm to do so. These issues have prompted much recent media coverage.

Given the gravity of the ethical issues surrounding adequate oversight of patient data gathering from apps and mobile technologies, including how they might impact doctor-patient relationships and adequate patient care, and the media coverage that these issues have prompted, it was conspicuous that privacy and confidentiality considerations received scarce commentary from surveyed psychiatrists. 72, 74 Similarly, while many psychiatrists believed future technology would be a boon to patient access, issues of justice related to the 'digital divide' - between those who have ready access to the internet and mobile devices and those who do not - received no attention.

This survey initiated an original qualitative exploration of psychiatrists' views about how AI/ML will impact their profession. The themes support and expand on the findings of an earlier quantitative survey by providing a more refined perspective on psychiatrists' opinions about AI and the future of their profession. Utilizing the Sermo platform enabled us to gain rapid responses from verified and licensed physicians from across the world, and this survey benefits from a relatively large sample size of participants working in different countries across a broad spectrum of practice settings. The diversity of respondents, combined with the unusually high response rate for questions requesting comments, are major strengths of the survey.

The study has a number of limitations. Comments were often brief, and because of the restrictions of online surveys it was not possible to obtain a more nuanced understanding of participants' views. Therefore, although a rich and diverse range of opinions was gathered, further qualitative work is warranted to obtain a more fine-grained analysis of physicians' views about the impact of AI/ML on the practice of psychiatry and on patient care. Furthermore, we did not gather information on physicians' level of knowledge of, or exposure to, the topic of AI/ML in medicine, limiting inferences about awareness and the depth of participants' reflections. Notably, some participants explicitly expressed uncertainty about whether AI could benefit medical judgment, with some admitting they had limited familiarity with the field. The extent to which participants' views are comparable to laypersons' opinions about AI in psychiatry is unknown. Finally, the coronavirus crisis has witnessed an abrupt adoption of telemedicine and new advances in triaging tools. Conceivably, had the survey been administered after this period, psychiatrists' responses may have been different. 75 We suggest that further in-depth qualitative interviews or focus groups would help to facilitate deeper analysis of psychiatrists' perspectives and their understanding of AI and its impact on psychiatry.

This study provides a foundational exploration of psychiatrists' views about the future of their profession. Perceived benefits and limitations of future technology in psychiatric practice, and the future status of the profession, have been elucidated. A variety of perspectives were expressed, reflecting a wide range of opinions. Overwhelmingly, participants were skeptical about the role of technology in providing empathetic care in patient interactions. Although some participants expressed anxiety about the future of their job, viewing technology as a threat to the status of their profession, the dominant perspective was a prediction that human medics and future technology would work together.
However, participants were divided over whether this collaboration might ultimately improve or harm clinical decisions, including diagnostics and treatment recommendations, and overreliance on machine learning was a recurrent theme. Similar to biomedical informaticians, participants were also hopeful that technology might improve care at a systems level, improving access, increasing efficiencies, and lowering healthcare costs.

While psychiatrists' opinions often mirrored the predictions of AI experts, results also revealed worrying omissions in respondents' comments. In light of high levels of patient interest in mental health apps, the effectiveness, reliability, and safety of machine learning technologies present serious ethical, legal, and regulatory considerations that require the sustained engagement of the psychiatric community. 72, 73, 76, 77 So far, the efficacy and safety of the overwhelming majority of downloadable mHealth apps have yet to be demonstrated. 78 Moreover, in contrast to the views of many leading informaticians, psychiatrists were often enthusiastic that technology would reduce biases in decision-making; however, without further regulatory attention to standards of design within machine learning, it is unclear that algorithms will help to redress rather than deepen healthcare disparities. Against these considerations, steadfast leadership is required from the psychiatric community to help patients navigate mobile health apps, and to advocate for guidelines with respect to digital tools, to ensure that current mHealth as well as emerging technologies do not jeopardize standards of safety and trust in patient care.

Finally, given the sheer breadth of opinion, and the oversights noted, 79 it is conceivable that many practitioners, for understandable reasons including work burdens and time constraints, are disengaged from the literature on healthcare AI. 25, 36 Some respondents admitted that they did not know much about the topic, and with more exposure to this field, psychiatrists' views may have been different. Recent physician surveys suggest medical education on health technology "leaves much room for improvement". 79 For example, an extensive cross-sectional survey of EU medical schools found that fewer than a third (90/302, 30%) offered any kind of health information technology training as part of medical degree courses. Similarly, a recent survey of physicians in South Korea reported that only 6% (40/669) of those surveyed described "good familiarity with AI". 80 While gaps in knowledge are understandable given the volume of medical course curricula and the time pressures of clinical practice, we conclude that the medical community must do more to raise awareness of AI among current and future physicians. Lacking adequate education about machine learning technology and its potential to impact the lives of patients, psychiatrists will be ill-equipped to steer mental health care in the right direction.

Acknowledgements: The authors would like to thank Sermo, especially Peter Kirk and Joanna Molke, for their collaboration, and Kaylee Bodner for help with Figure 1. We would also like to express our gratitude to the doctors who participated in this survey and shared their valuable insights.
Declaration of Conflicting Interests: The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Doraiswamy has received research grants from and/or served as an advisor or board member to government agencies, technology and healthcare businesses, and advocacy groups for other projects in this field.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Blease was supported by an Irish Research Council-Marie Skłodowska-Curie Fellowship. Locher was funded by a Swiss National Science Foundation grant (P400PS_180730).

Guarantor: CB is the guarantor of this article.

Peer review: Dr. George Despotou, WMG, has reviewed this manuscript.

ORCID iD: C Blease https://orcid.org/0000-0002-0205-1165

Supplemental Material: Supplemental material for this article is available online.

References:
Substance use
World Health Organization. Mental Health Atlas
World Health Organization. Dementia. World Health Organization
Empowering 8 Billion Minds: enabling better mental health for all via the ethical adoption of technologies, www.weforum.org/whitepapers/empowering-8-billion-minds-enabling-better-mental-health-for-all-via-the-ethical-adoption-of-technologies
The WPA-Lancet psychiatry commission on the future of psychiatry
Addressing the escalating psychiatrist shortage
Correlates of Facebook usage patterns: the relationship between passive Facebook use, social anxiety symptoms, and brooding
Too many 'friends,' too few 'likes'? Evolutionary psychology and 'Facebook' depression
Is social network site usage related to depression? A meta-analysis of Facebook-depression relations
Association between social media use and depression among US young adults
The fate of medicine in the time of AI
A glimpse of the next 100 years in medicine
Virtual care for improved global health
Can mobile health technologies transform health care?
Deep learning - a technology with the potential to transform health care
The inevitable application of big data to health care
Needed innovation in digital health and smartphone applications for mental health: transparency and trust
Predicting the future - big data, machine learning, and clinical medicine
Relapse prediction in schizophrenia through digital phenotyping: a pilot study
Digital phenotyping: technology for a new science of behavior
Harnessing smartphone-based digital phenotyping to enhance behavioral and mental health
Machine learning and the profession of medicine
The evolution of patient diagnosis: from art to digital data-driven science
Could artificial intelligence make doctors obsolete?
Lost in thought - the limits of the human mind and the future of medicine
Deep medicine: how artificial intelligence can make healthcare human again. London: Hachette UK
Automatic decoding of facial movements reveals deceptive pain expressions
Computer-based personality judgments are more accurate than those made by humans
How consumers and physicians view new medical technology: comparative survey
Patient smartphone ownership and interest in mobile apps to monitor symptoms of mental health conditions: a survey in four geographically distinct psychiatric clinics
Mobile phone ownership and endorsement of "mHealth" among people with psychosis: a meta-analysis of cross-sectional studies
Digital technology use among individuals with schizophrenia: results of an online survey
Mental health mobile phone app usage, concerns, and benefits among psychiatric outpatients: comparative survey study
The role of artificial intelligence in diagnostic radiology: a survey at a single radiology residency training program
Computerization and the future of primary care: a survey of general practitioners in the UK
Artificial intelligence and the future of primary care: exploratory qualitative study of UK general practitioners' views
Psychiatrists' attitudes toward disruptive new technologies: mixed-methods study
"They are not hard to reach clients. We have just got hard to reach services". Staff views of digital health tools in specialist mental health services
Health professionals' perspective on the promotion of e-mental health apps in the context of maternal depression
Artificial intelligence and the future of psychiatry: insights from a global physician survey
Qualitative content analysis: theoretical foundation, basic procedures and software solution
Using thematic analysis in psychology
The health professional-patient-relationship in conventional versus complementary and alternative medicine. A qualitative study comparing the perceived use of medical shared decision-making between two different approaches of medicine
Patients' perspectives on depression case management in general practice - a qualitative study
The importance of social support for people with type 2 diabetes - a qualitative study with general practitioners, practice nurses and patients
Focus group methodology: principle and practice
The global shortage of health workers - an opportunity to transform care
Review of key telepsychiatry outcomes
Community based telepsychiatry service for older adults residing in a rural and remote region - utilization pattern and satisfaction among stakeholders
Telepsychiatry for patients with movement disorders: a feasibility and patient satisfaction study
Effect of digital cognitive behavioral therapy for insomnia on health, psychological well-being, and sleep-related quality of life: a randomized clinical trial
Evaluation of two mobile health apps in the context of smoking cessation: qualitative study of cognitive behavioral therapy (CBT) versus non-CBT-based digital solutions
Framing the challenges of artificial intelligence in medicine
Information overload and missed test results in electronic health record-based settings
Artificial intelligence in healthcare
Utilizing a personal smartphone custom app to assess the patient health questionnaire-9 (PHQ-9) depressive symptoms in patients with major depressive disorder
Realizing the potential of mobile mental health: new methods for new data in psychiatry
Artificial intelligence powers digital medicine
T2DM self-management via smartphone applications: a systematic review and meta-analysis
Can't surf, won't surf: the digital divide in mental health
Smartphone ownership and interest in mobile applications to monitor symptoms of mental health conditions
Patients contributing to their doctors' notes: insights from expert interviews
Empowering patients and reducing inequities: is there potential in sharing clinical notes?
Barriers to and facilitators of engagement with remote measurement technology for managing health: systematic review and content analysis of findings
The continued use of mobile health apps: insights from a longitudinal study
Smartphone apps for schizophrenia: a systematic review
The efficacy of smartphone-based mental health interventions for depressive symptoms: a meta-analysis of randomized controlled trials
Can smartphone mental health interventions reduce symptoms of anxiety? A meta-analysis of randomized controlled trials
Weapons of math destruction: how big data increases inequality and threatens democracy
Genetic misdiagnoses and the potential for health disparities
HIPAA and protecting health information in the 21st century
Promoting trust between patients and physicians in the era of artificial intelligence
Cops, docs, and code: a dialogue between big data in health care and predictive policing
Telemedicine gets a boost from coronavirus pandemic: Medicare patients get more flexibility in seeking remote treatment
A hierarchical framework for evaluation and informed decision making regarding smartphone apps for clinical care
Machine learning in medicine: addressing ethical challenges
Towards a framework for evaluating mobile mental health apps
Mapping the access of future doctors to health information technologies training in the European Union: cross-sectional descriptive study
Physician confidence in artificial intelligence: an online mobile survey