title: Current Challenges and Barriers to Real-World Artificial Intelligence Adoption for the Healthcare System, Provider, and the Patient
authors: Singh, Rishi P.; Hom, Grant L.; Abramoff, Michael D.; Campbell, J. Peter; Chiang, Michael F.
date: 2020-08-11
journal: Transl Vis Sci Technol
DOI: 10.1167/tvst.9.2.45

Artificial intelligence (AI), the use of automated systems able to correctly interpret external data, to learn from those data, and to use what they learn to achieve specific goals, is an emerging technology with myriad implications for changing the way we interact with the world. Although this technology is already being used in many fields such as banking, retail, and education, AI has the potential to transform other fields, including healthcare. Within healthcare, ophthalmology is uniquely positioned to benefit from AI, not only through clinical decision support technology but also through improved image processing innovations such as real-time segmentation, automated image quality improvement, and assisted or autonomous disease screening tools. 1,2 Although there are now Food and Drug Administration-approved technologies within ophthalmology, such as IDx-DR (Coralville, IA, USA) for early diagnosis of diabetic retinopathy and diabetic macular edema, numerous challenges still exist to realizing the potentially transformative impact of these technologies in day-to-day practice.

The need for the ophthalmology community to take a thoughtful approach to AI innovation and implementation is accentuated by the high stakes involved. The impact of misleading patients and clinicians about a health condition is far greater than that of a retail store misjudging the next book you might like to buy. As a result, we need to increase discussion about who, what, when, how, and why we might use AI in practice, including ethical and liability considerations, to determine how best to implement AI for all stakeholders, including practitioners, patients, practices/hospitals, and industry. This article aims to highlight the challenges and barriers to real-world AI adoption that affect the technology's utility. We examine the specific challenges that will face healthcare organizations, providers, and patients.

Health systems can create independent AI for many aspects of their organizations, such as billing. For clinical care, where regulatory approval is needed, health systems will play important research and development roles for new AI technology. Because health systems will be involved in both adopting and creating new AI platforms, organizations should consider the different challenges each option may present. To successfully implement industry-developed AI, collaboration and transparency with vendors will be critical because of the potential liability healthcare systems take on when using AI technology for patient care. Moreover, the AI business is rapidly evolving, and identifying leading AI vendors will be challenging early on. One challenge for individual organizations is determining how to assess different vendors of AI platforms. Notably, the lack of established AI suppliers may leave healthcare vulnerable to companies that exaggerate their offerings while having a limited understanding of how to apply AI's abilities to healthcare needs.
Early AI offerings may lack features such as interoperability and integration with existing electronic infrastructures and electronic health record (EHR) systems. 3 Furthermore, because of regulatory considerations, initial AI products will necessarily have narrow clinical utility (e.g., detection of referable diabetic retinopathy but not other retinal or ophthalmic disease), whereas a broader use case and product might most benefit the organization and society. Therefore, many opportunities exist for health systems and industry to codesign systems that are most clinically useful for providers.

Inherently, some AI algorithms, such as convolutional neural networks, are "black boxes" with respect to which features are used to make decisions. Regulatory agencies focus on the safety and efficacy of these systems for a particular use case, but the healthcare community needs to carefully consider the relative risks and benefits of accepting an "uninterpretable" device compared with the current standard of care. It may be that we are willing to tolerate agnosticism about algorithm features if outcomes are improved in a meaningful way. 4 On the other hand, the art of medicine has always allowed providers to use their judgment to tailor clinical care to an individual patient, which may be difficult to capture in an algorithm. Even as we as a field consider the acceptability of black-box algorithms, 4 advances in computer science and a push for interpretability from a regulatory perspective will likely lead to more explainable AI in the future.

In healthcare, very few technologies become commonplace without favorable financial reimbursement models, and it remains to be seen how this will work for AI. Telemedicine is a perfect example: despite the clear cost-effectiveness of remote care delivery models, it was not until the recent worldwide COVID-19 pandemic that reimbursement for telemedicine services encouraged widespread utilization. In a relative value unit (RVU)-driven reimbursement system, we need to advocate for reimbursements that appropriately incentivize AI technology that leads to cost savings through improved efficiency, outcomes, or access to care. The American Medical Association Digital Medicine Payment Advisory Group and the U.S. Congress have been working on payment models based on the CPT system, but reimbursement solutions remain unclear. This uncertainty regarding financial reimbursement may affect whether organizations choose to be early adopters of these technologies.

Developing organization-specific AI technology has a whole host of advantages, such as customizability to serve an organization's unique needs and better interoperability with the organization's existing infrastructure. However, organizations that choose to develop their own AI systems should be aware of the greater technical complexity of AI compared with previous technological innovations. One key component of successfully developing AI systems independently is having the workforce to build, maintain, and improve them. Knowledgeable personnel are likely to be in short supply during the early development of new systems, with Deloitte reporting that 68% of United States information technology and related-services business leaders in its State of AI in the Enterprise survey are concerned about a moderate to extreme AI skills gap. 5 Consequently, will an AI skills gap limit an organization's adoption and development of AI technology?
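To make this workforce requirement concrete, consider one of the routine technical tasks an in-house team must be able to perform and maintain: estimating a screening algorithm's sensitivity and specificity against clinician reference labels. The short Python sketch below is purely illustrative; the function name, data structure, and values are hypothetical and are not drawn from any specific product, dataset, or validation protocol.

    # Illustrative sketch: sensitivity and specificity of a binary screening output
    # against clinician reference labels. All values are hypothetical.
    from typing import List, Tuple

    def sensitivity_specificity(pairs: List[Tuple[bool, bool]]) -> Tuple[float, float]:
        """pairs: (ai_positive, reference_positive) for each screened patient."""
        tp = sum(ai and ref for ai, ref in pairs)              # true positives
        fn = sum((not ai) and ref for ai, ref in pairs)        # false negatives
        tn = sum((not ai) and (not ref) for ai, ref in pairs)  # true negatives
        fp = sum(ai and (not ref) for ai, ref in pairs)        # false positives
        sens = tp / (tp + fn) if (tp + fn) else float("nan")
        spec = tn / (tn + fp) if (tn + fp) else float("nan")
        return sens, spec

    # Example usage with made-up screening results
    example = [(True, True), (True, False), (False, False), (False, True), (True, True)]
    print(sensitivity_specificity(example))  # approximately (0.67, 0.50)

Even a sketch this simple presupposes curated reference labels and staff who can maintain and interpret it as data sources and clinical workflows change.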
Each organization can develop some implementations to address this skills gap, but a sizable, knowledgeable workforce is needed to develop these systems, to teach non-experts (e.g., clinicians and support staff) how to use the technology, and to quickly address problems clinicians may have. Healthcare organizations will need to consider the human and material resources necessary for the development and implementation of intramural AI systems:
(1) Data infrastructure and storage are complex and expensive.
(2) Data labeling is a laborious process that currently requires significant resources for novel AI development. Standard labeling protocols as part of clinical care may help, but compliance with these labels is often noisy, which may complicate AI training.
(3) Training AI systems and improving their quality takes time. Labeling errors, for example, can impede training, and built-in biases can affect external performance.
(4) Incorporating feedback regarding errors in the system requires time and material effort from both clinicians and programmers, which must be accounted for.
In reality, many practices lack the resources (e.g., financial, time, expertise) to stay up to date on the large systems needed to successfully maintain and operate an independent AI platform. It remains to be seen how adopters of AI technology might play a role in refining and improving AI platforms.

In health care, the datasets used to train AI platforms are often limited in some way, whether by small numbers, biased demographics, or, in the case of radiology for example, institutional scanning parameters. Thus, models trained in one context may fail to generalize to other contexts. In the same way that pharmaceutical companies are required to conduct post-marketing (phase IV) surveillance, AI technologies will likely need ongoing evaluation after approval; the regulatory landscape for such post-approval evaluation remains ambiguous, but there is a significant potential role for health care organizations to play here. The major limitations are regulations regarding patient privacy and data sharing, which will need to be addressed. Setting up a business associate agreement on data liability and ownership requires an individual agreement with each specific vendor involved. High-quality data sampling is the best proxy for data sharing, but challenges exist in collecting datasets representative enough to predict for diverse populations. Data sampling methods may reduce the amount of stored and shared data needed to run models, but methods to monitor data quality will be needed to ensure accurate outcomes. 6

The Role of "Company Culture" in Embracing AI

Some organizations may value technological improvements such as AI more than others. AI is a potential paradigm shift for many aspects of health care delivery, and systems will therefore need to adapt to how the technology disrupts the status quo. Organizations may question AI's value in their daily activities. Business leaders may lack understanding of how AI implementation can create value, or they may have business goals that do not align with an AI implementation strategy. A recent Harvard Business Review article cites the integration of experts/business leaders and coders to build the AI system as the most commonly identified limitation to building a successful AI system. 7 Consequently, building an AI implementation strategy that creates value for the organization requires a strategic approach: setting objectives, identifying key performance indicators, and tracking return on investment.
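As one purely illustrative example of such tracking, the brief Python sketch below summarizes a few post-deployment indicators an organization might monitor for a screening AI system. The record fields, indicator names, and sample values are hypothetical and are not taken from any specific deployment or vendor.

    # Illustrative sketch: summarizing hypothetical post-deployment indicators
    # for a screening AI system.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class ScreeningEncounter:
        ai_referral: bool          # AI recommended specialist referral
        clinician_referral: bool   # clinician's decision on over-read or follow-up
        ungradable: bool           # AI could not return a result (e.g., poor image quality)

    def summarize(encounters: List[ScreeningEncounter]) -> Dict[str, float]:
        """Referral rate, ungradable rate, and AI-clinician agreement."""
        n = len(encounters)
        gradable = [e for e in encounters if not e.ungradable]
        g = len(gradable)
        return {
            "encounters": float(n),
            "ungradable_rate": sum(e.ungradable for e in encounters) / n if n else 0.0,
            "referral_rate": sum(e.ai_referral for e in gradable) / g if g else 0.0,
            "ai_clinician_agreement":
                sum(e.ai_referral == e.clinician_referral for e in gradable) / g if g else 0.0,
        }

    # Example usage with made-up encounters
    sample = [
        ScreeningEncounter(True, True, False),
        ScreeningEncounter(False, False, False),
        ScreeningEncounter(True, False, False),
        ScreeningEncounter(False, False, True),
    ]
    print(summarize(sample))

Indicators like these, trended over time and reported alongside financial and clinical measures, are one way to make the return on an AI investment visible across the organization.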
Notwithstanding, communicating that value to all staff members can be difficult if some feel that the value of AI does not align well with their goals. Understanding AI within the context of an organization's strategy takes time, planning, and clear communication that company culture needs to support. As a field, we lack clarity about what a successful AI implementation looks like. The ophthalmic community should discuss and develop concrete objectives and key results to be met in tracking successful implementation. Individual organizations may focus more on financial and clinical efficacy measures, but ophthalmic practices should also measure and share how new technology affects our staff and our patients.

Generally speaking, it is to the advantage of AI developers to make AI platforms user friendly. 8 However, depending on the intended purpose of the AI, there may be challenges integrating it into daily clinical practice. Will physicians need to develop individual systems to coordinate AI results with EHR charting? Because physicians already have varying levels of technology literacy, frustration may grow as they learn how to incorporate and use AI platforms while still struggling with existing technologies such as the EHR. Furthermore, taking the time to understand how AI algorithms operate may add responsibilities that exacerbate physician burnout. For example, clinicians will need to consider the opportunity costs of using AI technology to guide patient management versus seeing the patient in person. A world with autonomous AI clinical decision-making tools would likely include alert systems to advise the clinician of a problem. However, to minimize risk, these AI systems may take a cautious approach to alerting and err on the side of over-referral. Depending on how such a system is designed, physicians may be at risk for alert fatigue. Consequently, communication systems between AI platforms and providers need to be thoughtfully designed.

Physicians may also have concerns about bias built into AI technology. AI platforms are limited by the principle of "what goes in is what comes out," meaning that an algorithm is only as good as the data used to teach it. Consequently, depending on the condition the AI platform is intended to address, clinicians may worry that the platform does not account for racial, ethnic, gender, and other sociodemographic characteristics that may be important to consider, as has been seen in other domains. 9 In the Framingham Heart Study, for example, cardiovascular event risk predictions for nonwhite individuals were slightly biased. 10 As with any clinical decision-making tool, including our own "clinical judgment," clinicians need to learn to be conscious of hidden biases that may affect clinical decision making and outcomes. This could be particularly important for AI therapeutic models in which the output tells the physician to inject but the physician disagrees.

If the healthcare industry becomes reliant on AI, then our training institutions have a responsibility to prepare current and future physicians and healthcare professionals to be AI competent. However, the teaching approaches and the timing for adding AI-related material to medical school curricula are unclear because medical schools do not know what a physician's job will look like in an AI world. This ambiguity creates uncertainty about what medical schools should embrace.
Considering the current teaching environment, medical schools have different curricula around research and data science. Research and data skill sets will be important for future physicians collaborating with AI developers to successfully implement new technologies, but what opportunity costs are we willing to trade? An article by Paranjape and colleagues 11 suggests that future physicians will need to develop knowledge of mathematical concepts, AI fundamentals, data science, and the corresponding ethical and legal issues. However, current incentives for medical schools are not well aligned with building these skill sets, given the already dense medical school curriculum and the limited number of teaching faculty who are AI competent and capable of teaching how to incorporate AI into clinical practice. AI competency should focus on outcome expectations and risk assessment, with basic literacy developed at the medical school level and built upon during clinical training.

For AI technology to be successfully implemented, patients must consent to the use of this technology in their care. AI has the potential to change the paradigm for how health diagnoses and treatment recommendations are delivered to patients. Will patients be willing to accept a computer diagnosis rather than one from a human because it saves time and money? In an autonomous diagnostic setting, will patients depend on nonexpert device operators for comfort and clarification? Coping with a diagnosis may be challenging when no provider is present to answer questions and explain the context or relevance of the diagnosis to the patient. Above all else, humans can provide gentleness and compassion that machines cannot. Moreover, a limiting factor to patients embracing this technology is trust that data collection is safe and secure. Patients may distrust "impersonal" data collection software such as AI to hold diagnosis and treatment information. A recent survey on AI in the United States indicates that data privacy is considered the most important issue when thinking about this technology. 12 How the ophthalmic community and healthcare industry implement AI may play a significant role in patients' perceptions. For example, implementing AI at a physician's office versus a local drugstore may affect a patient's willingness to use the technology. There may also be new concerns that AI platform developers will have access to patient data. Consequently, what trust and physician-patient confidentiality look like in an AI world merits consideration.

The purpose of this perspective was to sketch the barriers that need to be addressed for AI to become a success for healthcare organizations, providers, and patients. Within the realm of design, AI has been based on maximally reducible characteristics aligned with the scientific knowledge of human clinician cognition, rather than on proxy characteristics. 13,14 With regard to appropriate data usage, AI creators must now collect data in compliance with regulations and legislation, ensure maximum traceability of the data pedigree, and steward the data accordingly. 15,16 To maximize alignment among clinical workflow, evidence-based clinical standards of care, and practice patterns from quality-of-care organizations, professional medical societies and patient organizations are expressing their views and establishing preferred practice patterns. 17
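By way of illustration only, the short Python sketch below shows the kind of per-image provenance record that could support the traceability and data stewardship described above. Every field name and value is hypothetical and does not reflect any particular regulation, registry, or vendor schema.

    # Illustrative sketch: a hypothetical per-image provenance record for an
    # AI training dataset, serialized for audit logs or dataset manifests.
    from dataclasses import dataclass, asdict
    from datetime import date
    import json

    @dataclass
    class ImageProvenance:
        source_site: str        # acquiring institution or clinic
        device_model: str       # camera or scanner used for acquisition
        acquisition_date: date  # when the image was captured
        consent_basis: str      # e.g., research consent, de-identified secondary use
        label_protocol: str     # protocol or reading-center standard used to label
        labeler_id: str         # pseudonymous identifier of the grader

        def to_json(self) -> str:
            """Serialize the record, converting the date to ISO format."""
            record = asdict(self)
            record["acquisition_date"] = self.acquisition_date.isoformat()
            return json.dumps(record)

    # Example usage with made-up values
    record = ImageProvenance(
        source_site="Clinic A",
        device_model="Fundus camera X",
        acquisition_date=date(2019, 6, 3),
        consent_basis="de-identified secondary use",
        label_protocol="reading-center protocol v2",
        labeler_id="grader-017",
    )
    print(record.to_json())

Records of this kind are most useful when created at the point of data collection rather than reconstructed later, which is itself an argument for planning data stewardship before an AI project begins.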
In the realm of validation of safety, efficacy, and equity, reference standards are being validated against clinical outcomes, or against surrogate outcomes where appropriate, to avoid the subjectivity and the intraobserver and interobserver variability associated with physicians and other human experts. 13,16,18,19 Finally, there are encouraging examples of progress in including AI systems in standards of care where appropriate validations of safety, efficacy, and equity exist, such as the inclusion of autonomous AI within the Standards of Medical Care in Diabetes stewarded by the American Diabetes Association. 20 The initial integration difficulties of AI systems are being addressed by connecting them to existing medical record systems through industry standards such as DICOM, HITECH, FHIR, and HL7. 21 Lastly, the assignment of liability and other protections is being defined on the basis of the accountability principle, with responsibility for autonomous AI output commensurate with its indications. 13 Last year, the AMA included in its AI policy the position that autonomous AI creators are responsible and liable if any harm is caused by the diagnostic systems they create. 13,22,23 This is most pertinent for autonomous AI, for which it would be inappropriate to hold a provider liable for a diagnosis outside the scope and comfort level of their usual practice and expertise.

AI is poised to be a technology that dramatically shapes the future of ophthalmic practices in multiple ways. However, there will be both predictable and unforeseen challenges that arise with the implementation of AI in clinical medicine. In this article, we have discussed some of the challenges that the ophthalmic and healthcare communities need to consider as AI technology improves and becomes available for clinical use. Even if ophthalmic practices embrace AI technology, important unknowns for successful AI adoption are how patients respond to this technology and its impact on the physician-patient relationship. As AI technology continues to be developed and adopted, we encourage continued collaboration among all stakeholders, including providers, industry, and patients, to best support each party's respective interests and to ensure optimal outcomes for our patients.
References

Google research shows how AI can make ophthalmologists more effective. EurekAlert! Science News.
Why digital medicine depends on interoperability.
In defense of the black box.
AI investment by country: survey. Deloitte Insights.
An AI boost for clinical trials.
Why AI underperforms and what companies can do about it.
User interface goals, AI opportunities.
Machine Bias. ProPublica.
Race/ethnic differences in the associations of the Framingham risk factors with carotid IMT and cardiovascular events.
Introducing artificial intelligence training in medical education.
Public opinion on the governance of artificial intelligence.
Lessons learned about autonomous AI: finding a safe, efficacious, and ethical path through the development process.
Adversarial attacks on medical machine learning.
Principles of biomedical ethics.
Health Care Annual Meeting.
Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices.
Identifying potential ethical concerns in the conceptualization, development, implementation, and evaluation of machine learning healthcare applications.
The autonomous point of care diabetic retinopathy examination.
Microvascular complications and foot care: standards of medical care in diabetes-2020.
Can an artificial intelligence algorithm be sued for malpractice?
Potential liability for physicians using artificial intelligence.

Acknowledgments

Supported by Grants R01EY19474, K12EY27720, and P30EY10572 from the National Institutes of Health (Bethesda, MD), by grant SCH-1622679 from the National Science Foundation (Arlington, VA), and by unrestricted departmental funding and a Career Development Award from Research to Prevent Blindness for MFC and JPC.