Bauer, Cordula; Thamm, Alexander: Six Areas of Healthcare Where AI Is Effectively Saving Lives Today. In: Digitalization in Healthcare, 2021-03-14. DOI: 10.1007/978-3-030-65896-0_22

Fewer incidents in heart surgeries, shorter time-to-intervention in case of a stroke, lower costs for hospitals due to fewer claim denials, fewer deaths due to earlier detection of breast cancer: these are real-world examples of AI being the catalyst of the next quantum leap in healthcare. Researchers, entrepreneurs, and tech incumbents are active within all subfields of healthcare and are proving the value of data-driven and AI-based initiatives. This paper aims at giving an overview of the broad variety of such engagements within six major fields in healthcare. Each of the six sections provides an overview of the most important AI applications within that field and a zoom-in into exemplary research work and already implemented industry solutions. A broad adoption of AI applications within healthcare institutions has not yet occurred, due to current limitations in data quality and governance, industry standards, and ethical issues.

The applications of AI in healthcare are vast, and the number of scientific papers, research initiatives of large corporates, startups, and blog articles has been exploding for several years. A recent bibliographic analysis of the 100 most cited papers in that field shows that medical informatics is the field which has received the most attention from researchers in the past years. Medical informatics is the management and use of patient healthcare information in applications such as precision medicine and diagnostics. Then follow radiology, oncology, and non-radiological medical image analysis. The authors also find that only 11 of the 100 most cited papers are clinical studies (Sreedharan et al. 2019). Consequently, 89 of the 100 scientific papers in the top 100 are not supported by clinical studies, indicating further potential for the adoption of AI in healthcare. At the same time, there is a vast number of companies with appealing offerings in the field of medical artificial intelligence receiving a lot of attention from venture capitalists. The market analysis platform CB Insights, specialized in tech trends and relying on AI technology itself, shows that funding in healthcare AI has increased eightfold since 2015 (CB Insights 2020a), see Fig. 1.

Despite the large amount of activity and recent advancements in AI-driven healthcare, there are still very few productive AI applications deployed in healthcare institutions. In Germany, the main reasons for the low adoption rate are seen in trust issues, which result from questionable data quality, data protection concerns, and insufficient verifiability and explainability of AI algorithms. The latter is problematic if an algorithm recommends or even takes a decision that is ethically complicated, such as deciding who gets the last available emergency care bed. In addition, IT infrastructure challenges, such as interoperability with other healthcare information systems and the lack of standards for data exchange and the interconnection of multiple AI systems, further hinder adoption by the industry (Frederking et al. 2019). Within various sections of this article, we will come back to some of these hurdles and also see first promising approaches of how to overcome them.
Let us now have a quick look at the main ingredients of AI applications: data and AI methods. What is true for human intelligence is true for artificial intelligence, too: good decisions or meaningful predictions are only possible if relevant, good-quality input data is available. Thus, digitization is a prerequisite for AI to do a good job. The degree of digitization differs between healthcare sectors. The pharma industry leads the way with its large molecule databases, contrasted by small medical care centers or single doctors in private practices where most medical data is still recorded in hand-written paper files. Another issue when it comes to data, especially sensitive personal data as in patient records, are data privacy and data security concerns that need to be taken seriously. Many healthcare institutions fail with their digitization efforts, as there is neither a secure legal basis nor a secure yet affordable way for them to store and process patient data.

Fig. 1 Healthcare AI funding from 2015 to 2020. Source: Authors, data by CB Insights

AI can be defined as the ability of a machine to perform tasks normally requiring human intelligence. In order to be capable of performing those tasks, a variety of AI technologies and algorithms exist. Those algorithms can be distinguished into three different categories: traditional AI, weak AI, and strong AI, see Fig. 2. Traditional AI, also called expert systems, consists of knowledge databases combined with hard-coded if-then rules that have been programmed by humans. In healthcare, expert systems have been known since the 1970s and are used within medical diagnostics for analyzing symptoms or in clinical decision support systems (Davenport and Kalakota 2019). Robotic process automation (RPA) can be considered a sub-field of traditional AI that is quite present at the moment. RPA is used to automate simple and repetitive tasks like updating patient records, filling out claims, scheduling appointments, or billing. Despite its name, RPA does not involve real robots, but is rather a renaissance of scripting and macros to mimic the work of an administrative employee. When rules become more complex and interdependent or are subject to frequent change, expert systems become overstrained and therefore produce less reliable results. This is where machine learning comes into play.

Weak AI or narrow AI represents the most exciting and relevant field of AI nowadays, not just in healthcare but across all industry sectors. In contrast to traditional AI, weak AI leverages machine learning (ML), meaning the machine learns rules and parameters from patterns in the (digital) data we feed it. The most common applications of machine learning are the so-called supervised learning methods (Ng 2019), in which both inputs and outcomes (e.g. onset of disease) are observable in the data. The machine uses (often large) amounts of so-called training data, in which the outcome has already been determined and is labeled accordingly. During this process, called training, the machine comes up with its own mapping from input data to outcomes. In many methods this mapping is a black box, not interpretable by humans (which can raise ethical concerns, as mentioned in Sect. 1.1). After the learning is done, the model is tested and validated against known outcomes to assess its predictive quality.
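To make this training and validation workflow concrete, here is a minimal Python sketch using scikit-learn. The patient features and disease-onset labels are synthetic stand-ins, not data from any study cited in this chapter.

```python
# Minimal supervised-learning sketch: learn a mapping from patient features
# to a labeled outcome, then validate it on held-out data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Hypothetical training data: each row is one patient, each column one feature
# (e.g. age, blood pressure, lab values); y marks the observed onset of a disease.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Hold out part of the labeled data so the model is tested on outcomes
# it never saw during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)               # "training": the model infers its own rules

probs = model.predict_proba(X_test)[:, 1]  # predicted likelihood of disease onset
print("AUC on held-out patients:", round(roc_auc_score(y_test, probs), 3))
```

The held-out test set plays the role of the "known outcomes" mentioned above: only if the model predicts these well can its learned rules be trusted beyond the training data.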
Many forms of ML are being successfully applied in medicine, as the following sections will depict. They range from statistical algorithms like regression and survival analysis (Sarstedt et al. 2010) to random forests and neural networks (NN). Neural networks are derived from the basic functionality of the human brain and have been present within healthcare research for several decades (Sordo 2002). Neural networks that consist of several deeper layers represent what is called deep learning (DL) or deep neural networks (DNN). Deep learning is computationally much more complex than other machine learning methods, but its rise has been catalyzed by faster computing power with graphics processing units (GPUs) and vast capacity within cloud architectures. Deep learning can be considered the most promising field of application for healthcare, as it can process unstructured data like text, sound, and images, often better than humans. Especially in combination with computer vision, DL is used a lot within radiology to fight diseases like cancer (Vial et al. 2018). We will see some examples in Sect. 2.1. The second largest field of DL is natural language processing (NLP), which makes sense of human language either in the form of speech or text. NLP has a huge potential that has only just begun to be tapped, since most medical data is available in the form of medical documentation like patient logs or medical research studies like clinical trial documentation. Another great development in the area of neural networks are generative adversarial nets (GANs), which aim at generating new data. These networks consist of two artificial neural networks that play a zero-sum game. One of them creates candidates (the generator); the second neural network evaluates the candidates (the discriminator). The goal of the generator is to learn to generate results according to a certain distribution. The discriminator, however, is trained to distinguish the results of the generator from the data of the real, given distribution. The goal of the generator is then to produce results that the discriminator cannot distinguish (Goodfellow et al. 2014). GANs find their application in healthcare when it comes to creating new drugs, as we will see in Sect. 4.2. For a further deep dive into machine learning techniques, the data and AI guide is a good reference (Thamm et al. 2020).

Strong AI or artificial general intelligence (AGI) represents the idea of a machine being capable of understanding and learning everything a human being can, and for some researchers this means having a consciousness. This would ultimately mean that doctors get replaced by superhuman robots, but despite the hype about AGI in media and science fiction movies, it is more fantasy than reality (Bradley 2019).

A correct diagnosis is often more than half the battle when it comes to fighting a disease. It is thus no surprise that diagnosing diseases has been one of the first applications of AI in healthcare, dating back to the 1970s. A rule-based expert system developed at Stanford University successfully diagnosed blood-borne bacterial infections (Buchanan and Shortliffe 1984). However, it never found its way into clinical practice, as it did not outperform human doctors and integration with other healthcare information systems was poor. Today we see many AI approaches in the area of diagnostics, most of them relying on machine learning methods rather than expert systems, for two reasons. First, it is an unmanageable task to keep expert systems up-to-date with all the medical research going on and data exploding (Davenport and Kalakota 2019).
Second, machine learning techniques have experienced a quantum leap due to recent breakthroughs in deep learning and can be put to good use in healthcare, too, with the most obvious field being image analysis in radiology. There is a lot of research going on in this area, and both tech giants such as IBM and Google as well as startups frequently present promising results. Still, there is no extensive use of these practices in medical care centers yet. In early 2020, Google Health presented an AI system that surpassed human experts in breast cancer prediction in an experiment. They used two datasets with breast cancer images, one from the UK and one from the USA. They also showed the system's ability to generalize from the UK to the USA (McKinney et al. 2020).

Reducing time-to-treatment is key, also in preventing death and disability from a stroke: over 90% of patients who are not treated in time end up severely disabled. The San Francisco-based tech company viz.ai uses machine learning algorithms to analyze CTA scans and automatically detect large vessel occlusion strokes, triggering alerts if needed. They claim to have found evidence in a study with a healthcare center in New York that with the introduction of viz.ai's system both the time-to-treatment and the outcome of a stroke situation improved: they found the median initial door-to-intervention time to be significantly faster in the post-viz cohort (21.5 min vs. 36 min; p = 0.02) and the measured degree of disability to be significantly lower, too (Morey et al. 2020).

As of today, experts agree that AI will not replace radiologists, but will augment their workflow and take over simple tasks like prioritizing patients for immediate screening or standardizing reporting, so that radiologists can concentrate on the difficult cases and on patient care (Reardon 2019). A major challenge in AI-driven radiology is building datasets which are large enough for results to have statistical significance. There is no common practice of sharing radiology images between clinics, and most commercial AI products are built on proprietary datasets. Still, there are a couple of open-access databases with medical images from CT scans, MRI exams, and other radiographs. Aylward provides a list of such databases on his website (Aylward 2020). Another big challenge is still the integration of an AI-enhanced diagnostic system into the main healthcare information system operating within a medical institution. The University Medical Center Utrecht has taken the initiative and launched its own AI workflow platform, called IMAGR, which integrates diagnostic capabilities from several radiological units. Their infrastructure is vendor-agnostic: first, they do not want to deal with the burden of incorporating multiple vendor-specific AI solutions; second, they want to be able to incorporate every new algorithm from the large research community in this field as fast as possible and not wait for commercial third parties to do so. Furthermore, IMAGR has the potential to become a standard for medical centers and also for commercial vendors of specific AI solutions, who could add their algorithms to the platform via licensing or a pay-per-use model (UMC Utrecht 2020).

When it is not images that serve as input data for detecting diseases, it is other medical data, mostly laboratory findings, that are analyzed and from which the likelihood of a diagnosis is calculated.
The San Francisco-based startup Freenome, with a stunning funding of $500 million as of August 2020 (CB Insights 2020b), engages in early cancer detection through blood samples. Blood plasma is analyzed for fragments of DNA, RNA, proteins, and other biomarkers. A machine learning model is trained on a large number of these samples where the outcome is already known. The model thus learns which biomarker patterns correlate with a cancer's stage and type. The model can then predict the affinity for a cancer type for each new blood sample and thus support physicians in detecting cancer at a very early stage. In primary care, general diagnostic systems based on a description of symptoms by the patient or on easily observable symptoms have never really found their way into practice, as physicians are skeptical about the added value. However, self-service symptom checkers enjoy a rising popularity among patients and can still support physicians. They are described in more detail in Sect. 3.2.

Healthcare as we know it today in our modern world is a system that has been more or less the same for many decades. You are sick, you go and see a doctor, the doctor comes up with a diagnosis and suggests a treatment that is known to cure whatever you have, you go and get it, and you hopefully get better soon. What if the treatment was tailored not only to the diagnosis but to your individual organism, metabolism, and medical history? And what if you did not have to go to a doctor at all, because you could get useful individualized help from home? And would it not be great if you did not get sick in the first place, because you get just the nutrition and fitness advice that your body needs to stay healthy based on its individual characteristics? Individualized healthcare, powered by AI, makes all of this possible.

Food is the engine that keeps our organisms going, and still Western medicine has put surprisingly little effort into looking at the role that nutrition can play in preventing and curing diseases. AI is about to change this and has already helped to reach some mind-blowing conclusions. In their breakthrough study, Zeevi et al. continuously monitored the glucose levels of 800 people for a week and measured blood-glucose levels after approximately 47,000 meals. The machine learning algorithm they used integrates blood parameters, dietary habits, anthropometrics, physical activity, and gut microbiota and accurately predicts personalized glycemic responses to meals. They found high variability in the response to identical meals, suggesting that universal dietary recommendations may have limited utility (Zeevi et al. 2015). Another study found that even identical twins respond differently to the same food; the machine learning model used predicts both triglyceride and glycemic responses to meals, and the authors claim their findings to be useful for developing personalized diet strategies (Berry et al. 2020). They have developed their own program and app, called ZOE™, to provide personal nutrition assistance based on user data. The deal in such applications is: you provide your data (what you eat, what you weigh, your blood-glucose level, data from your last stool sample, etc.). In return, you benefit from all other users providing their data, plus AI algorithms that find relations between nutrition data and medical conditions and provide individualized nutrition advice to you.
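The prediction task behind such services can be illustrated with a small regression sketch. The features (meal macronutrients plus a few personal characteristics) and all numbers below are hypothetical and deliberately far simpler than the hundreds of variables the cited studies actually integrate.

```python
# Sketch: predict a personal post-meal glucose response from meal and
# person features with gradient boosting. All data here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)
n = 2000
# Hypothetical features: carbohydrates and fat per meal, BMI, fasting glucose.
carbs = rng.uniform(10, 120, n)
fat = rng.uniform(0, 60, n)
bmi = rng.uniform(18, 40, n)
fasting_glucose = rng.uniform(70, 130, n)
X = np.column_stack([carbs, fat, bmi, fasting_glucose])
# Synthetic "personalized" response: carbohydrates matter more for people with
# higher fasting glucose, mimicking the person-to-person variability reported above.
y = 0.4 * carbs * (fasting_glucose / 100) - 0.1 * fat + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("Mean absolute error of predicted glucose rise:", round(mean_absolute_error(y_te, pred), 2))
```

The key point the studies make is exactly this interaction between meal and person: the same meal feature can have a different effect depending on individual characteristics, which a flexible model can capture while a one-size-fits-all recommendation cannot.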
Symptoma, Babylon Health, Symptomate, Your.MD, and Ada Health are five prominent examples of apps available today for symptom checking. Officially, they do not replace a diagnosis by a doctor and avoid the word "diagnosis," as they are not allowed to use it by law in many countries. Instead, they provide "symptom evaluation" and derive "probable medical conditions." Still, Ada Health found in a trial run with primary care physicians in the UK that 14% of patients who completed an Ada assessment in the waiting room said that if they had used Ada at home they would not have felt the need to come to see the doctor that day (Lewin 2019). So how do these apps work? The core is a combination of AI methods deriving probabilities of medical conditions, i.e. potential diagnoses, from the checked symptoms. Many of them primarily rely on expert systems with extensive knowledge bases provided by physicians. The community-based ones further use machine learning algorithms that learn from the data of all the users of the app. In addition, natural language processing is used in chatbots that enable interaction with the user via voice. The digitization of the symptom analysis process is debated controversially. On the one hand, 1.5 million people die every year due to a misdiagnosis, according to a statement made by Jama Nateqi, co-founder of Symptoma (2020). On the other hand, a human doctor sees, hears, and somehow "feels" the patient on many additional levels and thus might come to better conclusions. Also, a doctor is a human being who can be held responsible for errors. But what if you live in rural Africa, the next doctor is hours away, and you cannot even afford the visit? Is a probabilistic assessment, despite the risk of a false diagnosis, not better than none at all?

Nutrition advice and symptom checker apps are tools that are mainly used by individuals for self-service health. Precision medicine is the term healthcare professionals use for their efforts to predict the most suitable treatment plan for an individual patient. It is based on the same idea and uses similar data, for example dietary and lifestyle information, but it typically includes more data collected during whatever examinations were conducted; it can even include DNA analysis. As it comes into play in a clinical context, expert knowledge from physicians can optimally add to the AI-based recommendations. Precision medicine also refers to research efforts to tailor treatments to subgroups. The AI framework for precision medicine is depicted in Fig. 3. Lee et al., for example, presented promising findings when it comes to treating acute myeloid leukemia (AML), a malignant blood cancer. Cancer is well known for responding differently to the same drug regimen from patient to patient. Patients can thus highly benefit from methods which match patients to drugs better. The researchers used data from 30 AML patients, including genome-wide gene expression profiles and in vitro sensitivity to 160 chemotherapy drugs. A machine learning method was applied to identify reliable gene expression markers for drug sensitivity. The method outperformed many state-of-the-art approaches in identifying molecular markers and in predicting drug sensitivity. They also identified a particular marker as a main driver of sensitivity to certain common substances in chemotherapy drugs (Lee et al. 2018). Precision medicine can also be a paradigm shift for cardiovascular diseases (CVDs), as they are complex and heterogeneous, according to Krittanawong et al. (2017). CVDs are influenced heavily by both external and internal variables of the patient, for example the lifestyle and diet of a person but also her genes.
AI gives cardiovascular doctors the possibility to extract patterns from the huge amount of available information, which the human brain would never be capable of processing. However, here often lies the risk and limitation of AI, not just in medicine: if the underlying use case or medical challenge is ignored and not represented in the data, even the best AI method cannot do its job properly. Physicians and data scientists need to work closely together and discuss medical and statistical assumptions in order to leverage AI to patients' benefit.

It is an expensive and long process to go from early-stage research to a drug that is ready to be used. AI has tremendous potential to decrease both time and cost by speeding up tasks along every step of the process. AI applications along the stages of the drug discovery and development process are depicted in Fig. 4. The following sections explain the application and benefits of AI within exemplary phases of the drug discovery process. During the lead identification phase, AI technologies can help to speed up the screening process for lead molecules that can potentially be the base for a new drug. We will have a look at how AI can support the so-called target-based virtual screening (TBVS). TBVS attempts to predict the best interaction between a protein target (a molecule associated with a disease) and a ligand (a molecule which binds reversibly to a protein in order to serve a biological purpose, e.g. incapacitate a protein). It involves virtually docking candidate ligands into the target, represented by a 3D structure, followed by applying a scoring function to estimate the likelihood that the ligand will bind to the target with high affinity. Ranks are calculated based on scores and other criteria, and the most highly ranked molecules are typically selected for further experiments. Supervised machine learning methods can support this step by either classifying a ligand as potentially "active" or "inactive" with respect to a target (binary classification) or predicting a degree of binding affinity (numeric prediction). Input data consists of features representing the molecule and the determined class or affinity value with respect to the target. A common and freely available dataset for research purposes is the directory of useful decoys, enhanced (DUD-E), containing 22,886 active ligands and their affinities against 102 targets (Mysinger et al. 2012). This data-driven research approach saves researchers a lot of time and money by narrowing down the huge number of potential ligands to a much smaller one. Only for this subset do they then perform the much more costly in vitro experiments. Machine learning based TBVS depends heavily on the accuracy of the representations generated for the ligand-target complexes and on the quality of the corresponding activity data. Deep learning can help here with the feature extraction.
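As an illustration of such a data-driven pre-screen, the following sketch trains a binary activity classifier on molecular fingerprints. It assumes the open-source RDKit library for featurization; the SMILES strings and activity labels are made up, whereas a real experiment would use per-target data such as DUD-E.

```python
# Sketch of a ligand activity classifier for target-based virtual screening.
# The molecules and labels below are illustrative only.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles: str) -> np.ndarray:
    """Turn a molecule into a fixed-length Morgan fingerprint bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
    arr = np.zeros((1024,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Tiny, made-up training set: (SMILES, active-against-target label).
data = [
    ("CCO", 0), ("CCN", 0), ("c1ccccc1O", 1), ("c1ccccc1N", 1),
    ("CC(=O)Oc1ccccc1C(=O)O", 1), ("CCCC", 0), ("c1ccc2ccccc2c1", 1), ("CCOC", 0),
]
X = np.array([featurize(s) for s, _ in data])
y = np.array([label for _, label in data])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a new candidate ligand: predicted probability of being "active".
candidate = "c1ccccc1CO"
print("Predicted probability of activity:", clf.predict_proba([featurize(candidate)])[0, 1])
```

In practice, candidates are ranked by this predicted activity, and only the top of the list moves on to the expensive in vitro experiments mentioned above.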
The screening method described in the previous section is discriminative: given a ligand, we can forecast whether it will bind or not. But what if we would like to design a molecule that has certain properties? The space of potential molecules is incredibly large, and a different approach is needed. The art of creating novel (previously unknown) chemical entities with desired properties to serve a medical purpose is called de novo drug design. Zhavoronkov et al. have developed a generative model based on the principles of generative adversarial nets, their so-called generative tensorial reinforcement learning (GENTRL) approach. GENTRL optimizes synthetic feasibility, novelty, and biological activity. They used the model to discover potent ligands for the discoidin domain receptor 1 (DDR1), a target implicated in fibrosis and other diseases. They did so in only 21 days. Four molecules were active in biochemical examinations, and two were validated in cell-based assessments. One lead candidate was tested and demonstrated favorable pharmacokinetics in mice (Zhavoronkov et al. 2019).

Insufficient ADMET properties (absorption, distribution, metabolism, excretion, and toxicity characteristics of a drug) are a common source of late-stage failure of drug candidates and have led to the withdrawal of approved drugs. AI techniques can help to determine ADMET properties of a molecular structure early in the drug discovery process. AI algorithms can, for example, predict the solubility of a drug, metabolic reactions, or the concentration of certain substances as a reaction to the consumption of a drug. An example in this field is the prediction of lipophilicity, an important measure of the absorption of a drug by the human body. It is primarily measured by the octanol-water partition coefficient, which has proven to be effectively predictable using neural networks. The ALOGPS program described by Tetko and Tanchuk has been shown to reliably predict the coefficient for low molecular weight structures based on an associative neural network combining elements of the feed-forward network and the kNN approach (Tetko and Tanchuk 2002). According to Yang et al., it has been applied by several drug research groups since then.

Robots in healthcare can be considered old hat. The idea of turning industrial robots into precision machines for assisting human doctors has existed since 1985, and there are indeed many applications today. However, with the help of AI, robots are becoming more autonomous and capable of compensating for human doctors' natural limitations. The real magic, though, can happen when robots learn so much that they can outperform human doctors not just in terms of precision and stamina, but also in knowledge and decision making, by combining all existing knowledge across multiple medical databases. While there is a vast variety (and fantasy) of ways to leverage robotics within healthcare, two major fields of application stand out today: AI robot-assisted surgery and AI care robots. According to Davenport and Kalakota, the most common fields of robotic surgery include gynecologic surgery, prostate surgery, and head and neck surgery (Davenport and Kalakota 2019). It has been shown that robotic surgery can shorten hospital stays, allow surgeons to perform more accurate tasks in comparison to traditional approaches, and thus decrease complication rates of surgeries (Hussain et al. 2014). One of the pioneers in the field is the company Intuitive from Sunnyvale, California, with their da Vinci platform. They were founded 25 years ago, and their platform was one of the first robotic-assisted systems to get FDA approval for general laparoscopic surgery. With its surgical machines featuring robotic arms, cameras, and surgical tools for minimally invasive procedures, da Vinci has assisted in over 5 million operations. A similar, yet newer to the market, company from the USA is Vicarious Surgical. They combine virtual reality with robotics and have raised $31.8 million from tech icons like Bill Gates and Marc Benioff.
Another company from Sunnyvale, California, named Accuray, developed the CyberKnife system to precisely treat cancer tumors with stereotactic radiotherapy. They leverage AI computer vision methods to sense motions of the cancerous cells in real time and spare healthy tissue. The precision of the first stereotactic surgeries was based on a rigid frame which was firmly mounted to the patient's head. Fortunately, this painful procedure is not needed anymore, since AI-based motion sensing can correct the radiation focus in real time. Located in Eindhoven, Netherlands, MicroSure develops "MUSA," a robot with superhuman precision for microsurgery, which is the first of its kind to be CE-certified and clinically available. Surgeons benefit from staying next to their patient, as in conventional microsurgery, while the system scales down their motion seamlessly and filters out tremor or unsteady movements. Mazor X is a spine surgery platform created by Mazor Robotics in Israel, which was acquired by Medtronic at the end of 2018 for a stunning $1.7 billion. The platform visualizes surgical plans like a GPS system and leverages AI to recognize anatomical features of patients' spines. It further provides robotic guidance and live navigation feedback to allow for a more precise spinal operation. A last example that sounds truly like science fiction is Heartlander, a miniature mobile robot developed by Carnegie Mellon University. The mini robot is just a few centimeters in size and can enter the chest through a small incision. It then navigates autonomously on the epicardial surface of the heart to the specific location and administers therapy.

Fortunately, we are living much longer now than previous generations did. However, age-related diseases such as dementia or osteoporosis have also increased in their duration. Furthermore, many elderly people, but also people with disabilities, suffer from a loss of independence as well as from loneliness and a lack of social interaction. According to reuters.com, the USA is set for a severe shortage of paid direct care workers of 155,000 by 2030, increasing to 355,000 by 2040 (Miller 2017). Similar or worse situations are present in other countries like Germany or Japan. New generations of robots will support humans with a variety of partially or fully automated services. Such flexible and mobile service robots cooperate with humans or even act completely independently. A team of Irish scientists spun off the company Akara.com and developed "Stevie," a robot that assists care personnel in elderly care facilities. Stevie watches over residents, runs group-based social activities like bingo nights, and even writes activity reports to reduce manual work for humans (Akara 2020). Similar robots are already in use within elderly care facilities, helping to fight labor shortages, social isolation, and dementia. Another example of AI care robots is "S3," a German government-supported project with the goal of developing a 3D environmental sensor system for service robots that can reliably differentiate between objects and persons, as depicted in Fig. 5. In addition to the sensor hardware, an AI-based computer vision technology is being developed which can recognize objects and people in the environment and evaluate situations, such as a person lying on the ground or spilled liquids. This enables the robot to intervene according to the situation, as the simplified sketch below illustrates.
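The following toy sketch shows the idea of such a situation check. It is not the S3 project's actual method, only a simplified heuristic applied to the kind of output a generic person detector might deliver.

```python
# Toy situation check on top of a generic object/person detector's output.
# Assumption: a detection model supplies labeled bounding boxes; a simple rule
# then flags a person whose box is much wider than tall as possibly lying down.
from typing import List, Dict

def flag_person_on_ground(detections: List[Dict], min_aspect_ratio: float = 1.5) -> List[Dict]:
    """Return all 'person' detections whose bounding box looks horizontal."""
    flagged = []
    for det in detections:
        if det["label"] != "person":
            continue
        x1, y1, x2, y2 = det["box"]          # pixel coordinates
        width, height = x2 - x1, y2 - y1
        if height > 0 and width / height >= min_aspect_ratio:
            flagged.append(det)
    return flagged

# Example output as a pretrained detector might produce it (made-up values):
detections = [
    {"label": "person", "box": (50, 400, 420, 520), "score": 0.91},   # wide box
    {"label": "person", "box": (600, 100, 700, 460), "score": 0.88},  # upright box
    {"label": "chair",  "box": (300, 300, 380, 450), "score": 0.75},
]

for det in flag_person_on_ground(detections):
    print("Possible person on the ground, alerting caregiver:", det)
```

A production system would of course rely on 3D sensor data and learned situation models rather than a single bounding-box rule, but the division of labor is the same: perception first, then an evaluation step that triggers an intervention.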
The Munich-based data and AI consulting company Alexander Thamm is developing the deep learning capabilities of the AI together with industry partners and research institutions (BMBF 2020). In addition to the core functionality of care, robots could in the future also interact with elderly people, talk to them, play games with them, and thus make a valuable contribution to keeping their minds sharp. Robots can potentially support elderly and disabled people in staying independent longer and reduce the need for nursing homes. Ethical implications are of course debatable, with one central question being: is a robot not better than no support at all?

The larger the medical care center, the more likely it is to suffer from inefficiencies like crowded waiting rooms, patients staying longer than necessary, staff being overloaded and unfriendly, costs exploding because they are poorly managed, and, in the worst cases, treatments being delayed or cancelled due to staffing problems. Figure 6 explains some of the most common use cases. The primary data source for prediction efforts in patient care use cases are electronic health records (EHR), with one data point representing one patient or one case and features such as patient age, gender, medical history, etc. In order to build predictive features for AI, thorough data preprocessing is necessary. Since patient data is often unstructured and textual, natural language processing methods are used to extract the relevant information. Medical device monitors in the hospital or wearables worn by the patient can also deliver valuable data. In their retrospective study, Hong et al. used data from all adult ED visits during a period of 3 years from three emergency rooms. 972 variables were extracted per data point, and each visit was labeled as either admission or discharge. Nine binary classifiers were trained using logistic regression (LR), gradient boosting (XGBoost), and deep neural networks (DNN) on three dataset types: one using only triage information, one using only patient history, and one using the full set of variables. The best accuracy was achieved, not surprisingly, by using the full set of variables, with all three methods. This shows that AI can indeed robustly predict hospital admission and help with better staff planning in emergency departments (Hong et al. 2018). One example of a patient flow optimization platform powered by AI and actively used is the one from Qventus, a Silicon Valley based startup partnering with Emory University Hospital. Qventus claims to have reduced the length of stay at its partner hospitals.

The denial of submitted claims to public or private payers such as insurance companies is a major cost driver for medical care centers. Reworking and resubmitting a denied claim is time-consuming and costly. According to Change Healthcare, a technology provider for revenue and payment cycle management in healthcare, AI can calculate the denial probability of a claim and help staff put more effort into these claims, in a two-step approach (Change Healthcare 2019): In step 1, a machine learning algorithm is fed with a medical center's historical remittance data and thus identifies the patterns associated with denied claims. Future claims exhibiting these patterns are then flagged to let staff know there is a potential issue. In a second step, another machine learning algorithm conducts a root cause analysis across subcategories. Staff can then use this information to edit the claim before submission to the payer.
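A minimal sketch of such a two-step check might look as follows. The claim fields, threshold, and data are hypothetical, and the root-cause step is reduced here to simple feature importances rather than the dedicated second model described above.

```python
# Illustrative two-step denial check on synthetic historical remittance data.
# Step 1: flag claims whose predicted denial probability is high.
# Step 2: give a rough root-cause hint from the model's feature importances.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical historical claims: procedure code, payer, amount, prior-auth flag,
# and whether the payer denied the claim.
history = pd.DataFrame({
    "procedure_code": ["A1", "A1", "B2", "B2", "C3", "C3", "A1", "B2"] * 50,
    "payer":          ["X",  "Y",  "X",  "Y",  "X",  "Y",  "Y",  "X"] * 50,
    "amount":         [200, 250, 900, 950, 400, 420, 230, 880] * 50,
    "prior_auth":     [1,   1,   0,   0,   1,   0,   1,   0] * 50,
    "denied":         [0,   0,   1,   1,   0,   1,   0,   1] * 50,
})

X = pd.get_dummies(history.drop(columns="denied"))
y = history["denied"]
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Step 1: score a new claim before submission.
new_claim = pd.DataFrame({"procedure_code": ["B2"], "payer": ["Y"],
                          "amount": [970], "prior_auth": [0]})
new_X = pd.get_dummies(new_claim).reindex(columns=X.columns, fill_value=0)
p_denial = model.predict_proba(new_X)[0, 1]
if p_denial > 0.5:
    print(f"Flag claim for review, predicted denial probability {p_denial:.2f}")
    # Step 2 (rough hint): which claim attributes drive denials overall?
    importances = pd.Series(model.feature_importances_, index=X.columns)
    print("Most influential factors:\n", importances.sort_values(ascending=False).head(3))
```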
Error-free claims lead to fewer denials, which lead to faster payments and more revenue, and of course to a more efficient use of staff's time. Healthcare-specific processes can also benefit from the numerous AI-driven innovations that are available today in general process and service management software. And not only medical care centers but also other entities involved in healthcare can profit from AI-driven process optimization, such as healthcare system providers and insurance companies. Serviceware's industry-agnostic solution for process management, for example, supports agents of service providers, e.g. an insurance company, in handling incoming requests, claims, or questions more efficiently by automatically searching for similar requests from the past and in the knowledge base. It is a combination of several natural language processing methods doing the job, far better than conventional search algorithms relying only on the matching of keywords (Serviceware 2020).

COVID-19 has shown us how vulnerable we are as humankind towards viruses and also how fragile our healthcare systems are. Ever since the early reporting on the virus at the beginning of 2020, there has been a tremendous effort by researchers around the globe to fight the pandemic leveraging medicine, biotechnology, and of course data science and AI. Efforts include initiatives from all the subsections of AI in healthcare mentioned above: early diagnosing of the disease based on symptoms, identifying risk factors for a severe disease course, discovering new drugs or repurposing existing ones to alleviate symptoms, and of course finding an effective vaccine. Furthermore, AI can prevent the spreading of an infectious disease by a location-based prediction of infection rates and setting alarms accordingly.

Better Understanding the Virus

ZOE, the startup we got to know in Sect. 3.1 when talking about personalized nutrition plans, together with some allied research institutes, was among the first to establish the combination of symptoms most likely to predict COVID-19. They did so by collecting data from over 2.6 million people with and without symptoms in the UK and the USA through their COVID Symptom Study app and applying machine learning techniques to the data collected. They also claim to be the first to have identified loss of taste and smell as symptoms of COVID-19 (Menni et al. 2020).

OpenSAFELY is another success story of how leveraging data and statistical methods helps to better understand the virus fast. The team, working on behalf of the National Health Service (NHS) England, created OpenSAFELY, a secure health analytics platform holding patient data within the existing data center of a major vendor of primary care electronic health records. They linked primary care records of approximately 17 million adults pseudonymously to approximately 10,000 COVID-19-related deaths. In addition to high age and the presence of underlying medical conditions, they found that being of Black or South Asian ethnicity was among the main risk factors for mortality (Williamson et al. 2020). Whereas the initiative from ZOE relied on people voluntarily providing their data, the second one used already existing patient data. Williamson et al. state that only two authors accessed OpenSAFELY to run code, that no pseudonymized patient-level data was removed from the vendor's infrastructure, and that only aggregated, anonymous study results were released for publication. Nevertheless, this raises the need for guidelines on data governance and data privacy (Williamson et al. 2020).
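Methodologically, risk-factor analyses of this kind are typically framed as survival models. The sketch below fits a Cox proportional hazards model (using the lifelines library) on entirely synthetic records with hypothetical covariates; it is not based on the OpenSAFELY codebase.

```python
# Sketch of a risk-factor analysis on synthetic linked patient records.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "age": rng.integers(20, 95, n),
    "diabetes": rng.integers(0, 2, n),   # example comorbidity flag
    "group_b": rng.integers(0, 2, n),    # hypothetical subgroup indicator
})
# Synthetic follow-up: higher age and comorbidity shorten the time to event.
baseline = rng.exponential(scale=2000, size=n)
df["follow_up_days"] = np.minimum(
    baseline / np.exp(0.03 * (df["age"] - 50) + 0.5 * df["diabetes"]), 90)
df["died"] = (df["follow_up_days"] < 90).astype(int)  # event within the study window

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_days", event_col="died")
cph.print_summary()  # hazard ratios per covariate, e.g. exp(coef) for age
```

The hazard ratios in the summary are the kind of per-covariate risk estimates that allow statements such as "factor X is associated with higher mortality" on linked health records.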
On December 31, 2019, the Canadian BlueDot AI platform sent an alert to its customers reporting a cluster of "unusual pneumonia" cases around a market in Wuhan. BlueDot had spotted COVID-19 nine days before the World Health Organization released its statement on the matter. BlueDot uses an AI-driven algorithm that screens news reports in multiple languages, animal and plant disease networks, and official announcements. Natural language processing is thus an important AI method they use extensively. The system, for example, flagged articles in Chinese that reported 27 pneumonia cases associated with a live animal market in Wuhan. According to its founder, BlueDot does not rely on social media postings, since they found the data to be too messy. But they do use global airline ticketing data that can help predict where and when infected people are heading next. It correctly predicted that the virus would spread from Wuhan to Bangkok, Seoul, Taipei, and Tokyo in the days following its first appearance (Niiler 2020). There are many more initiatives now where institutions try to forecast new virus hotspots. Munich's university LMU is working together with data scientists of Alexander Thamm GmbH to improve their so-called nowcasting services for the German government. Nowcasting means predicting the actual spread of the virus, corrected for unrecorded cases. According to Goeran Kauermann, dean and professor of statistics, "the aim of the joint project is to provide local authorities and health authorities with statistically processed information and valid predictions about local infections, as well as to automate the flow of information to them" (Tiedemann 2020).

As we have learned before, an AI-based system is only as good as the data it uses. And that is the tricky part with a new virus: as humans we can only feed the AI with what we know and with the data we have. If we know little about a new virus and do not yet have relevant, well-structured data at hand, the machine cannot do its job properly either. If there is one thing to learn from COVID-19 regarding AI for fighting a pandemic, it is to do our homework in terms of digitization, data governance, and data industry standards in healthcare, in order to be better prepared for the next healthcare crisis.

The effectiveness of AI approaches within the healthcare sector has been clearly demonstrated in a large variety of examples throughout multiple applications, ranging from radiology and drug research to self-service healthcare, medical center process efficiency, surgery, and care. Also in less obvious subsectors which were not mentioned here, such as psychology and veterinary medicine, AI can certainly play an important role. AI has the potential to change the healthcare sector as we know it, and most certainly will. The technology is ready, but are we as humans? There is still a way to go, and some fundamental things have to change: First, we need to put effort into digitization, data collection, and data quality assurance, as data is the fuel of AI. We need to set standards, too, in order for institutions to be able to exchange data easily and to do so in a safe legal environment that people support. Second, we need to agree as a society on how to deal with the responsibility question when a machine is taking tough ethical decisions, such as in a triage situation.
Third, we need to embed AI into our healthcare system so that the technology can work hand in hand with health professionals, and we need to train these professionals appropriately. And yes, AI has the potential to replace some healthcare jobs. But it is up to us to shape the future. Freed-up staff can either be let go or spend more time truly caring for patients. When machines take over analytical tasks, we as humankind need to focus even more on the areas where we still outperform machines: empathy and compassion. If we can get more of these human qualities into the healthcare sector, AI can definitely be said to be a real benefit for healthcare.

References

Open-access medical image repositories
Human postprandial responses to food and potential for precision nutrition
S3 Sicherheitssensorik für Serviceroboter in der Produktionslogistik und stationären Pflege. Bundesministerium für Bildung und Forschung
Artificial general intelligence (AGI) is impeding AI machine learning success. Gartner Blog
Rule-based expert systems: The MYCIN experiments of the Stanford heuristic programming project
Healthcare AI in numbers Q1'20: The impact of Covid-19 on global funding, exits, valuations, R&D and more
Expert collection on artificial intelligence in healthcare
Will artificial intelligence and machine learning make denial management a thing of the past? Change Healthcare
The potential for artificial intelligence in healthcare
Exploiting machine learning for end-to-end drug discovery and development
Anwendung künstlicher Intelligenz in der Medizin. Begleitforschung Smart Service Welt II, Institut für Innovation und Technik (iit)
Generative adversarial nets
Predicting hospital admission at emergency department triage using machine learning
The use of robotics in surgery: A review
Artificial intelligence in precision cardiovascular medicine
A machine learning approach to integrate big data for precision medicine in acute myeloid leukemia
After the holy grail of healthcare
International evaluation of an AI system for breast cancer screening
Real-time tracking of self-reported symptoms to predict potential COVID-19
The future of U.S. caregiving: High demand, scarce workers
Impact of Viz LVO on time-to-treatment and clinical outcomes in large vessel occlusion stroke patients presenting to primary stroke centers
Directory of useful decoys, enhanced (DUD-E): Better ligands and decoys for better benchmarking
An AI epidemiologist sent the first warnings of the Wuhan Virus
Boosting docking-based virtual screening with deep learning
How two leading health systems reduced length of stay with a system of action
The rise of robot radiologists
Die Hazard-Raten-Analyse: Methodische Grundlagen
Serviceware introduces AI support for service centers
Introduction to neural networks in healthcare
The top 100 most cited articles in medical artificial intelligence: A bibliometric analysis
Application of associative neural networks for prediction of lipophilicity in ALOGPS 2.1 program
The ultimate data and AI guide: 150 FAQs about artificial intelligence, machine learning and data
Ludwig-Maximilians-Universität und Alexander Thamm GmbH arbeiten an Frühwarnsystem für Corona-Neuinfektionen
IMAGR assisteert radioloog met AI
The role of deep learning and radiomic feature extraction in cancer-specific predictive modelling: A review
Factors associated with COVID-19-related death using OpenSAFELY
Concepts of artificial intelligence for computer-assisted drug discovery
Personalized nutrition by prediction of glycemic responses
Deep learning enables rapid identification of potent DDR1 kinase inhibitors