title: The Risk to Population Health Equity Posed by Automated Decision Systems: A Narrative Review
author: Burger, Mitchell
date: 2020-01-18

Abstract: Artificial intelligence is already ubiquitous, and is increasingly being used to autonomously make ever more consequential decisions. However, there has been relatively little research into the existing and possible consequences for population health equity. A narrative review was undertaken using a hermeneutic approach to explore current and future uses of narrow AI and automated decision systems (ADS) in medicine and public health, issues that have emerged, and implications for equity. Accounts reveal tremendous expectations that AI will transform medical and public health practices. Prominent demonstrations of AI capability - particularly in diagnostic decision making, risk prediction, and surveillance - are stimulating rapid adoption, spurred by COVID-19. Automated decisions being made have significant consequences for individual and population health and wellbeing. Meanwhile, it is evident that hazards including bias, incontestability, and privacy erosion have emerged in sensitive domains such as criminal justice, where narrow AI and ADS are in common use. Reports of issues arising from their use in health are already appearing. As the use of ADS in health expands, it is probable that these hazards will manifest more widely. Bias, incontestability, and privacy erosion give rise to mechanisms by which existing social, economic and health disparities are perpetuated and amplified. Consequently, there is a significant risk that the use of ADS in health will exacerbate existing population health inequities. The industrial scale and rapidity with which ADS can be applied heightens the risk to population health equity.
It is incumbent on health practitioners and policy makers, therefore, to explore the potential implications of using ADS, to ensure the use of artificial intelligence promotes population health and equity. There is tremendous hype surrounding the future of artificial intelligence (AI) in health - particularly in medicine [2] [3] [4] [5], and increasingly in public health [6] [7] [8] [9] [10]. The singular, global impact of AI on population health has been called out by the World Health Organization, which proclaims that "more human lives will be touched by health information technology than any other technology, ever." [11] While predictions vary about the extent to which AI will actually revolutionise medicine and public health practices in the shorter term [as discussed by, for example, 2, 5, 7, 12-14], scholars have identified the longer-term potential of AI to reduce global health inequalities [15] [16] [17]. Even the United Nations Secretary-General has highlighted the potential of AI to advance human welfare, but has also emphasised its potential to widen inequality and increase violence [18]. The hype about the impact of artificial intelligence in medicine and public health comes at a time of unprecedented global interest in AI, wherein the potential future impacts of AI are being extensively analysed and discussed, including on the pages of this journal [e.g., 19-22]. Discussion of the future risks of AI has tended to predominate, especially risks relating to automation [e.g., 23-29], autonomous weapons [e.g., 24, 30, 31], and superintelligence [e.g., 32-36]. However, while the idea of artificial intelligence may still readily conjure science fiction dreams and nightmares in the popular imagination, the reality is that AI is here already [37, 38].
Meredith Whittaker and colleagues at the AI Now Institute have stated: "The rapid deployment of AI and related systems in everyday life is not a concern for the future-it is already here, with no signs of slowing down." [39] Narrow artificial intelligence and predictive algorithms suffuse society - they are woven into the fabric of our daily lives [42] [43] [44], mediating "our social, cultural, economic and political interactions" [38]. In this way, AI is already ubiquitous [37, 45, 46], often in very mundane forms in everyday technologies [43] - smart phones, online advertising, social media, home assistants, recommendation engines for music and video, online dating, autopilots, and customer support chatbots. 'Automated decision systems' are a particular mode of implementing AI which use classifications and predictions produced by expert systems or machine learning algorithms to make decisions autonomously. Such systems are already prevalent in a broad range of sectors, including loan and credit card applications, algorithmic trading, drone warfare, immigration, criminal justice, policing, job applications, education, university entry, utilities network management, and social welfare [37, 38, 47, 50-53]. Their use by governments and corporations is rapidly expanding into ever more consequential and sensitive domains [38, 39]. What is particularly insidious about the expanding use of automated decision making is the invisibility of its proliferation [37]. Crawford and colleagues write: "In many cases, people are unaware that a machine, and not a human process, is making life-defining decisions." [54, p. 23] In addition, people tend to become rapidly habituated to advances in AI performance, leading to creeping normalisation. Contributing to this normalisation is the tendency for AI to have an ever-evolving definition as whatever is not yet possible [55].
Mundanity, invisibility, and habituation are enabling automated decision systems to proliferate unseen. As artificial intelligence takes on more and more responsibility for consequential decisions, fundamental questions of rights, fairness and equity arise [54, 57-59]. There is substantiated concern about the impact AI is already having on equity and justice, with mounting evidence that AI systems can perpetuate, entrench and amplify existing discrimination and inequality [e.g., 37, 54, 60-62]. This is prompting widespread debate about where and when AI and automated decision systems can be used [e.g., 37, 63, 64]. However, while high-level warnings about medium- and long-term risks and societal impacts are widespread, the effects of the unseen proliferation of narrow AI and automated decision systems in sensitive domains have only relatively recently started to be closely scrutinised. In health - compared to domains such as criminal justice, policing and autonomous warfare - there has been less in-depth analysis of the consequences of the use of narrow AI and automated decision systems for population health and population health equity. Acknowledging the conceptual 'fuzziness' [65], for our purposes 'population health' can be taken to mean the 'collective health' of populations, drawing on Rose's conception that "healthiness is a characteristic of the population as a whole and not simply of its individual members." [66, p. 95] Population health equity, on the other hand, is a political concept requiring judgement, based on concepts of social justice, as to whether measurable differences in health (inequalities) are unjust or avoidable [67]. Key in this context is the differential impact of social determinants of health [Marmot, 2005], and how these determinants are accounted for (or not) in data and automated decision systems [68].
While accounts in the medical and public health literature have posed the question about the effect on equity, and identified risks [e.g., 69-71], analysis to date has focused primarily on the ethical and legal implications of medical applications for individuals [e.g., 72]. The potential impact on population health equity of issues known to have emerged widely in other sensitive domains, and the mechanisms by which they are likely to act, therefore require further investigation. This review aims to begin to address this evidence gap regarding the use of automated decision systems in public health by reviewing the current state of adoption of AI in health, drawing together detailed evidence of issues arising in other sensitive domains, and specifically considering the implications for population health equity, including the mechanisms by which population health equity may be affected. The specific research questions are:
1. Broadly, what is the current state of adoption of narrow AI and automated decision systems in medicine and public health? And how are they expected to be used in the future?
2. What key issues of relevance to equity have emerged in the application of narrow AI and automated decision making in other sensitive domains? Is there evidence of these issues emerging in medicine and/or public health applications?
3. What are the possible implications and risks for population health equity?
To address these questions, a narrative review [73] using a hermeneutic approach has been undertaken. A hermeneutic review involves an iterative process of developing understanding through cycles of search and acquisition of literature, together with iterative analysis and interpretation [74]. It is an approach that is suitable for questions requiring clarification and insight which cover diverse and dynamic bodies of scholarly and grey literature [74, 75].
Furthermore, this approach is consistent with Galea and colleagues' [68] call for transdisciplinary synthesis, in that the review draws on a range of literatures, including technical, public health, policy, and sociology. Initially, literature was gathered by searching the Scopus, PubMed, Web of Science, and IEEE Xplore databases, and the arXiv, medRxiv, and bioRxiv pre-print servers, using the search terms 'artificial intelligence', 'machine learning', or 'big data', in combination with 'public health' or 'epidemiology'. Searches were also conducted using the search terms 'artificial intelligence' or 'machine learning', together with 'bias' or 'privacy'. Grey literature was sourced using Google Scholar, Hacker News, WHO IRIS, the United Nations Official Document System, and World Economic Forum Reports. Relevant articles were selected by scanning titles and abstracts, yielding 240 references. Citation tracking was then used to identify additional sources as analysis proceeded, in keeping with the hermeneutic approach. Mapping, classification, and thematic analysis were undertaken iteratively using NVivo 12 Pro qualitative analysis software. While the use of narrow artificial intelligence and automated decision systems is already widespread in sectors such as finance, policing, and criminal justice, the health sector has a comparatively low - but rapidly expanding - level of adoption [41, 76, 77]. In 2017, JASON, an independent American scientific advisory group, described the state of adoption of AI in the health sector generally as being at an exploratory phase: "AI is beginning to play a growing role in transformative changes now underway in both health and health care, in and out of the clinical setting. At present the extent of the opportunities and limitations is just being explored." [79, p.
1] In the ensuing years, the state of adoption of AI in health has advanced markedly, with increasing and sometimes hyperbolic reports that AI is now starting to replace doctors [e.g., 80-82]. The most active areas of application are the use of machine learning for diagnostic support, for example in medical imaging interpretation, and multivariate risk prediction. In medicine, there have been prominent and increasingly frequent demonstrations of the capability of AI - in particular machine learning - to perform diagnostics using medical images with the same performance levels as experienced clinicians [2, 79, 87]. A systematic review published in The Lancet Digital Health found the "diagnostic performance of deep learning models to be equivalent to that of health-care professionals", although concerns were raised about the prevalence of poor reporting in deep learning studies [88]. Since 2018 there has been an acceleration in US Food and Drug Administration (FDA) approvals of AI algorithms [95]. As a further indicator of the accelerating advance of AI adoption, AI-based tools have been widely implemented in response to the COVID-19 pandemic [e.g., 96-103]. However, to date there has been little high-quality evidence establishing the efficacy of these tools in practice [104]. Future applications include "using artificial intelligence and machine learning to support the integration of genomic information into health care systems" [105, p. 22], so as to enable personalised drug protocols, precision prevention [106], and early diagnosis of rare childhood diseases [107]. In public health accounts, AI is typically regarded with cautious optimism as having the potential to re-envisage and transform public health practices [e.g., 6, 8, 9, 76, 108-111]. In this way it is similar to the predicted impact of 'big data' in public health, a good overview of which is provided by Dolley [19].
Zandi and colleagues [77] capture the promissory potential in their call for papers on the ethical challenges of AI in public health: "These technologies promise great benefits to the practice of medicine and to the health of populations. This is especially true in epidemiology and the tracking of outbreaks of infectious diseases, behavioural science, precision medicine and the modelling and treatment of rare and/or chronic diseases." When combined with big data, AI approaches are expected to offer new opportunities to revolutionise epidemiology [10] and to enable measurement of the impact of upstream determinants of health over the lifecourse [106, 112, 113]. Importantly, this would be a way of quantifying and revealing the "structured chances" that "drive population distributions of health, disease, and well-being" [65]. Recent examples of this have been the use of machine learning to quantify the relationship between social determinants and unmet dental care needs [114], prospective risk stratification for health plan payments [115], and prediction of COVID-19 outcomes based on sociodemographics [116]. This opportunity arises because deep learning in particular offers novel capabilities to deal with complex, high-dimensional data and relatively small sample sizes [84, 86] - so-called 'wide' data. The most aspirational accounts predict these new approaches will be able to facilitate action on the social and environmental determinants of health, and thereby reduce health disparities [20, 68, 106, 113, 117]. However, despite this potential being recognised, public health has been comparatively slow to broadly adopt AI in practice [45, 76, 118, 119]. Predicting and tracking infectious disease outbreaks was already an emerging application area for AI, and this has been greatly accelerated by the COVID-19 pandemic, with one AI epidemiological tool claiming to have been the first to sound warnings about the outbreak in Wuhan [120].
Key applications of AI in public health include screening support, risk prediction, automated triage, and resource allocation. An illustrative example is McKinney and colleagues' [146] demonstration of material reductions in the rates of false positives and false negatives using an AI system for breast cancer screening, which highlights AI's potential to improve the efficacy and cost-effectiveness of breast cancer screening programs - while acknowledging that robust evidence is still lacking [150]. There have also been a number of demonstrations of the capability of AI to achieve accurate risk prediction [e.g., 155-168]. Weng and colleagues [169] exemplified this capability by demonstrating that machine learning approaches could use routine clinical data to significantly improve the accuracy of cardiovascular risk prediction, compared to an established algorithm. Risk prediction algorithms are also increasingly being used as a first line of automated triage in advance of primary care appointments [79, p. 23]. For example, UK-based company Babylon Health has a partnership with the UK's National Health Service (NHS) called 'GP at hand' to provide online general practice consultations, with over 35,000 registered members as of January 2019 [170]. Babylon Health uses a digital symptom checker underpinned by AI to triage patients - this is an example of an automated decision system. Although concerns have been raised about the safety of digital symptom checkers [171], in mid-2019 Babylon Health was able to raise an additional US$550m in investment capital to enable the company to expand into the United States and develop the capability of its AI to diagnose more serious conditions [172]. Another application is the automated prescribing of contraceptives.
A small-scale study published in September 2019 in the New England Journal of Medicine evaluated the safety of telecontraception, which involves the automated prescribing of contraceptives with or without clinicians in the loop. The study found that telecontraception may increase the accessibility of contraception, and also promote better adherence to treatment guidelines compared to in-person clinics [173]. AI-based risk stratification is also being used to enable automated, risk-adjusted, per capita funding allocation for health services and primary care. In this application, the amount of money allocated to people for primary care services for a period of time is assigned based on their health status and algorithmic predictions of risk. For example, a commercial algorithm is used by a number of Accountable Care Organisations in America to make healthcare resourcing decisions for over 70 million people [174, 175]. As another example, the Australian Government recently trialled risk-adjusted funding for primary care through the Health Care Homes initiative. The amount of funding provided to participating general practitioners to coordinate the care of individual patients is decided using a predictive risk algorithm. The algorithm - developed by the CSIRO - factors in more than 50 variables, including demographics, a proxy for social determinants (the Australian Bureau of Statistics' SEIFA indices for social and economic status), physiology, medicines, conditions, pathology results, and lifestyle factors [177]. In summary, accounts in the literature and the media reveal tremendous expectations that AI will transform medical and public health practices. There have been prominent demonstrations of successful narrow AI capability in medical and public health applications - particularly in diagnostic decision making, risk prediction and disease surveillance.
These demonstrations reinforce the hype and expectation surrounding AI, and stimulate its rapidly expanding adoption in medicine and public health. This rapid adoption reinforces the need to carefully consider the longer-term hazards specific to public health. As the adoption of narrow AI and automated decision making in sensitive domains expands, this review finds significant evidence of emerging issues, including in health applications. Indeed, Whittaker and colleagues [39, p. 42] contend that the harms and biases in AI systems are now beyond question. "That debate has been settled," they write, "the evidence has mounted beyond doubt". They point to a growing consensus - citing a string of high-profile examples - that AI systems are perpetuating and amplifying inequities [39, 60]. This review will now focus on three key issues which emerged in the analysis phase: 1) bias, 2) opacity and incontestability, and 3) erosion of privacy - as these appear to be materialising in medical and public health applications of AI, and because of their potential to give rise to mechanisms by which existing health inequities are entrenched and amplified, engendering risk for population health equity. Of the issues that have emerged in the application of AI and automated decision making, bias is perhaps the most prominent in the literature reviewed. Defining bias is difficult because the term has specific meanings in fields such as statistics, epidemiology, and psychology, and these are often confusingly contradictory [37]. Whittaker and colleagues [39] distinguish between two types of bias arising from automated decision systems: allocative - where resources or opportunities are unfairly distributed; and representational - where harmful stereotypes and categorisations are reproduced and amplified.
The hope that AI will assist in overcoming biases in human decision making [e.g., 178-180] has been used as a justification for the use of automated decision systems [e.g., 52]. However, there have been glaring examples of racial, gender and socioeconomic biases in AI and automated decision making used in a number of sensitive domains, including:
• criminal justice [50, 61, 181-183];
• policing [184-189];
• hiring practice [190, 191];
• university admissions [192];
• online advertising [37, 193, 194];
• education [37, 195-198];
• immigration [39]; and
• facial recognition [199-204].
There is also emerging evidence of the harmful impact of biases in the context of algorithmic censorship [205, 206]. For example, there is racial bias in how hate speech is moderated [207], gender bias in how nudity is censored on Instagram [208, 209], and censorship of marginalised communities through overly-restrictive automated filtering of LGBTQ content on YouTube, Tumblr and Twitter [210]. Generally, algorithmic biases can arise in two main ways: in the upfront design (specification) of an algorithm, and in the data that are used to train algorithms, for example by being unrepresentative, or by encoding existing systemic biases [24, 54, 211, 212]. Bughin and colleagues [41, p. 37] explain how bias can be caused by data: "Since the real world is racist, sexist, and biased in many other ways, real-world data that feeds algorithms will also have these features-and when AI algorithms learn from biased training data, they internalize the biases, exacerbating those problems." As bias can arise unintentionally from the data used to train algorithms, it can be very difficult to detect and measure [37, 47, 52, 213]. Algorithmic bias - especially undetected bias - can lead to inaccurate and inappropriate generalisation [3, 83, 123, 214].
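The mechanism Bughin and colleagues describe - a model internalising the bias encoded in its training labels - can be sketched in a few lines. The dataset, groups, and majority-label "learner" below are entirely hypothetical and deliberately minimal; they stand in for any supervised model fitted to historical decisions.

```python
from collections import defaultdict

# Hypothetical historical decisions: equally qualified applicants, but
# group "b" was approved far less often than group "a" in the past.
# Each row is (group, qualified, approved).
history = (
    [("a", 1, 1)] * 80 + [("a", 1, 0)] * 20 +
    [("b", 1, 1)] * 40 + [("b", 1, 0)] * 60
)

def train(rows):
    """'Learn' the majority historical outcome for each (group, qualified) cell."""
    counts = defaultdict(lambda: [0, 0])  # cell -> [rejections, approvals]
    for group, qualified, approved in rows:
        counts[(group, qualified)][approved] += 1
    return {cell: (1 if yes > no else 0) for cell, (no, yes) in counts.items()}

model = train(history)

# Equally qualified new applicants now receive different automated decisions,
# purely because the training labels encoded past discrimination.
print(model[("a", 1)])  # 1: approved
print(model[("b", 1)])  # 0: rejected
```

No prejudice is specified anywhere in the "algorithm"; the disparity is inherited entirely from the labels, which is what makes such bias hard to detect by inspecting the model alone.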
Generalisation is a key issue in machine learning theory and practice [43]. The general rigidity and brittleness of machine learning models means that models built for a specific purpose cannot be readily transferred to other applications, nor are they robust to changes over time [83]. Barocas and Selbst [213] make the crucial point that inappropriate generalisation is typically a result of careless reliance on "statistically sound inferences that are nevertheless inaccurate" (p. 688) - rather than deliberate prejudice. Again, the fact that the disparate impact is inadvertent makes it exceedingly difficult to detect. Moreover, inappropriate generalisation can have a performative impact [43, 215], where inaccurate predictions actively contribute to producing discriminatory outcomes. This is especially evident in criminal justice and predictive policing implementations of automated decision systems [50, 183-185, 187]. Inaccurate generalisation also stems from AI's inherent reliance on data, and the axiomatic tension between over-fitting to past data and predictive accuracy. Writing for Computerworld, George Nott quotes Genevieve Bell: "Humans can sometimes fear their choices are being 'prescribed by their past' by these algorithms, which by their nature work on retrospective data" [216]. The reliance on past data is a key reason why there is a risk that automated decision systems will perpetuate inequities, particularly where the systems rely on data that either reflect past systemic inequalities, or do not adequately encode social and environmental determinants [6]. As with other high-stakes domains, bias has been called out as a key issue that will need to be addressed before AI can be trusted and more widely adopted in health [37, 85, 217, 218].
Specific to the health domain, numerous scholars have highlighted the lack of diversity, inclusiveness, and representativeness in health datasets [e.g., 19, 37, 39, 45, 80, 106, 213, 219-225]. And as previously noted, the use of biased data is known to reproduce and amplify discrimination and injustice [54, 213, 226]. For instance, Straw and Callison-Burch [227] demonstrated the existence of significant biases in natural language processing models used in psychiatry, and identified the risk that these biases may widen health inequalities. In public health too, it is well recognised that skewed and unrepresentative data can bias the results of traditional epidemiological and population health analyses such as disease surveillance, leading to inaccurate estimates and inference for diverse populations [65, 128, 133]. Exemplifying how data quality can affect automated decision systems, flawed data was blamed for the failure of Idaho's automated decision system to equitably allocate home care funding [228]. And in a study that has striking similarities to ProPublica's revelatory investigative reporting into racially biased crime risk prediction [50], Obermeyer and colleagues [174] detected significant racial bias in a commercial algorithm used by Accountable Care Organisations in America and applied to an estimated 200 million people each year. Their analysis revealed that White patients were given the same risk score as Black patients who were considerably sicker, inadvertently leading to Black patients having unequal access to care. The authors estimated that resolving this disparity would have more than doubled the proportion of Black patients receiving additional assistance (from 17.7% to 46.5%). The paucity of environmental and social exposure data has also been identified [e.g., 79, 229]; however, the potential for this to lead to biases in narrow AI and automated decision systems needs to be further explored [224].
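The Obermeyer finding can be illustrated with a toy calculation. As widely reported, the algorithm in question predicted healthcare cost as a proxy for health need; where historical spending per unit of illness is lower for one group, equal predicted cost implies unequal sickness. The dollar figures, group labels, and function below are invented purely for illustration and bear no relation to the actual commercial algorithm.

```python
# Toy illustration (invented numbers): a risk score trained on healthcare
# cost rather than illness. If historical spending per unit of illness is
# lower for one group, equal risk scores conceal unequal sickness.

SPEND_PER_CONDITION = {"white": 1000, "black": 700}  # hypothetical dollars/year

def cost_proxy_risk(group, n_conditions):
    """Predicted annual cost, used (problematically) as the 'risk' score."""
    return SPEND_PER_CONDITION[group] * n_conditions

# A Black patient with 10 chronic conditions scores the same as a
# White patient with 7 - equal scores, unequal sickness:
print(cost_proxy_risk("white", 7))   # 7000
print(cost_proxy_risk("black", 10))  # 7000

# Ranking by this score therefore deprioritises equally sick Black patients:
print(cost_proxy_risk("black", 7) < cost_proxy_risk("white", 7))  # True
```

The bias here lives entirely in the choice of label (cost instead of need), not in the model or any explicit use of race - which is why it survived until the scores were audited against actual health outcomes.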
Another key issue is the opacity of artificial intelligence, and the ensuing incontestability of automated decisions. Algorithms and AI are opaque and invisible processes, often characterised as 'black boxes' [24, 38, 51, 230, 231]. Once an AI algorithm has been trained - particularly one based on deep learning - it is not clear how it is making decisions [51, 232]. The lack of explainability of AI when applied in healthcare has been identified as a threat to the core ethical values of medicine [233]. Research into public perceptions reveals confusion amongst the general public about the inner workings of algorithms, and wariness about inscrutable algorithmic processes that have been delegated responsibility for high-stakes decisions [234-236]. A consequence of this opacity is the difficulty of questioning and contesting automated decisions. Whittaker and colleagues at the AI Now Institute observe that when automated decision systems make errors, "the ability to question, contest, and remedy these is often difficult or impossible" [39]. This is exemplified in the United States criminal justice system, where "Defendants rarely have an opportunity to challenge their [algorithmic] assessments" [50]. Furthermore, the ability of humans to intervene, override or even explain decisions is severely limited, rendering frontline workers disempowered intermediaries [39]. Early reports suggest that issues of incontestability have emerged in the use of automated decision systems in health. This powerlessness is evident in Colin Lecher's article for The Verge [52] about the case of a woman with cerebral palsy who had her health services funding cut in half by an automated algorithmic decision. When an attorney began to investigate complaints about the algorithm, he found: "No one seemed able to answer basic questions about the process. The nurses said, 'It's not me; it's the computer'."
For people who are the subjects of automated decisions, there is even more of a sense of powerlessness. Regarding the American Civil Liberties Union (ACLU) legal challenge to Idaho's use of an algorithmic decision system to allocate home care funding [see also 228], Lecher [52] writes: "Most importantly, when Idaho's system went haywire, it was impossible for the average person to understand or challenge". Incontestability can therefore lead to an aggregation of power, limited opportunities for redress, and an unwillingness and inability of vulnerable people to contest their own treatment, thereby perpetuating and exacerbating existing inequities and discriminatory dynamics [54, 61]. The tendency to blindly trust complex statistical methodologies both fortifies the inscrutability of automated decisions and intensifies the performativity of prediction. In the field of public health, concern has been raised about overconfidence in big data and complex statistical techniques [45, 237]. Salathé [126] refers to this as "big-data hubris". Similarly, Krieger [112], quoting the prescient statistician Lancelot Hogben, cautions against hiding "behind an impressive façade of flawless algebra". Artificial intelligence systems, because they are considered 'intelligent technology', are particularly prone to going uncontested [54, pp. 6-7]. Underlying this misplaced trust is a reductionist belief in the neutrality of data - a belief in data being beyond reproach. Sheila Jasanoff [226] captures this eloquently: ...in modernity, information, along with its close correlate data, has been taken for granted as a set of truth claims about the way the world is. Information, as conventionally understood, quite simply is what is: it consists of valid observations about what the world is like. Data represents a specific form of information, a compilation of particular types of facts designed to shed light on identifiable issues or problems.
As representations of reality, both public information and public data were seen until recently as lying to some extent outside the normal domains of political inquiry. (p. 5, emphasis in the original) But data are not neutral [44, 238, 239]. Krieger [65], for one, cautions against treating populations as statistical entities comprised only of data, calling out the "uncritical approach to presenting and interpreting population data, premised on the dominant assumption that population rates are statistical phenomena driven by innate individual characteristics." What follows from overconfidence in complex statistical techniques and belief in the neutrality of data is a misplaced trust in the ability of automated decision systems to make correct, unbiased decisions. Virginia Eubanks is quoted by Lecher [52] as saying that there is a "natural trust" that computer-based systems will produce unbiased, neutral results. Likewise, Campolo and colleagues [37], citing Sandra Mayson's work on algorithmic risk assessment in setting bail, point out the potential of risk assessment to "legitimize and entrench" problematic reliance on statistical correlation, and to "[lend such assessments] the aura of scientific reliability." And similarly, Barocas and Selbst [213] identify the "imprimatur of impartiality" conferred on the decisions taken by algorithmic systems. This is important because it gives rise to a false confidence in the superiority of automated decisions. Through their complexity, invisibility, and inhumanity, algorithmic decisions are achieving incontestability. And thus, when predictions are made, they verge on acts of creation, of magic [240]. Will Knight wrote in 2017: "As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith."
Considering how "indecipherable" algorithmic systems can be [38], and how they can be "beyond the understanding even of the people using them" [52], it should not be surprising that the use of automated decision systems creates legal uncertainty by challenging conventional models of autonomy [241] and creating a growing "accountability gap" [39]. This is perpetuated by trade secrecy and intellectual property provisions that enable proprietary systems to be shielded from scrutiny, even in the face of legal challenge [37-39, 50, 123, 174, 215, 231]. The research of the AI Now Institute has uncovered "black boxes stacked on black boxes: not just at the algorithmic level, but also trade secret law and untraceable supply chains." [60] In this way, trade secrecy reinforces the incontestability of automated decisions [39], heightening the risk that existing biases and disparities are perpetuated and amplified through disproportionate effects on vulnerable and at-risk populations. The use of artificial intelligence is also having a significant impact on human rights to privacy, freedom of expression, and access to information [242-245]. While privacy issues are not limited to AI, three dynamics eroding privacy stand out from the literature in relation to AI: re-identification risk, invasive surveillance, and intrusive data extraction and capitalisation. Firstly, many scholars and institutions have highlighted the risks to individual privacy posed by re-identification, which arise from big data analytics and AI [e.g., 19, 44, 215, 246-248]. Secondly, the expanding use of AI in surveillance - for example, the use of facial recognition in policing [215], and the monitoring of employees' emotional state for performance evaluation and retention decisions [37] - is eroding privacy and amplifying discriminatory dynamics [39, 215, 249]. 
And thirdly, as the economic value of personal data has come to be recognised and exploited using AI [250, 251], systems for data extraction, 'datafication' and capitalisation are becoming increasingly intrusive and pervasive [31, 226, 249, 252-255]. This intrusiveness and exploitation have resulted in growing community wariness of data sharing, and erosion of social licence for use of individual data [234, 256, 257]. Reports of the erosion of privacy are also already prevalent in health applications of narrow AI. As in other domains, the value of personal health data has long been recognised [250, 251, 258-260]. The extraction of data will be driven more and more by the commercialisation and productisation of data as a tradable asset [249, 252-255]. In health, this has seen the emergence of specialist data brokers, such as Explorys, which was purchased by IBM in 2015. One example of such brokerage is Memorial Sloan Kettering (MSK) Cancer Center entering into a licensing agreement with AI start-up Paige.AI to "grant exclusive access to MSK's intellectual property in computational pathology, including access to MSK's 25 million pathology slides." [261] However, the lure of using personal health data in AI and big data applications is driving an erosion of rights to privacy, confidentiality, and data ownership [10, 262-264]. For example, increasing privacy concerns are being expressed in the media and literature about health apps' lack of transparency around data sharing and use [265]. Another example is the data sharing between the UK National Health Service and Google DeepMind that was considered a betrayal of public trust [266-268]. 
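The re-identification dynamic described above can be illustrated with a toy linkage attack, in the spirit of well-known demonstrations that a handful of quasi-identifiers (postcode, birth year, sex) can uniquely identify individuals. All datasets, names, and values below are invented for illustration; they are not drawn from any source cited in this review:

```python
# Hypothetical illustration of re-identification risk: linking a "de-identified"
# health dataset to a public record via quasi-identifiers. All records invented.

deidentified_health = [
    {"postcode": "2000", "birth_year": 1958, "sex": "F", "diagnosis": "diabetes"},
    {"postcode": "2913", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]

public_register = [
    {"name": "Jane Citizen", "postcode": "2000", "birth_year": 1958, "sex": "F"},
    {"name": "John Citizen", "postcode": "2037", "birth_year": 1985, "sex": "M"},
]

def link(health_rows, register_rows):
    """Join on quasi-identifiers; a unique match re-identifies the record."""
    matches = []
    for h in health_rows:
        candidates = [
            r for r in register_rows
            if (r["postcode"], r["birth_year"], r["sex"])
            == (h["postcode"], h["birth_year"], h["sex"])
        ]
        if len(candidates) == 1:  # unique combination -> re-identified
            matches.append((candidates[0]["name"], h["diagnosis"]))
    return matches

print(link(deidentified_health, public_register))
# → [('Jane Citizen', 'diabetes')]
```

Because the combination of quasi-identifiers is unique in both datasets, the 'de-identified' diagnosis is re-attached to a named person. No single field is identifying on its own, which is precisely why this risk is easy to underestimate.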
Google has also faced media criticism for gathering personal health information on millions of people in the United States as part of 'Project Nightingale' [269-271], as has Memorial Sloan Kettering health service for its data sharing arrangement with Paige.AI [272]. Throughout the COVID-19 pandemic there has also been widespread debate regarding the privacy impact of digital tracking tools, including those using AI [e.g., 273]. Campolo and colleagues [37] point out that in domains like health, because of AI's reliance on large amounts of data, the privacy rights of vulnerable populations are particularly at risk due to lack of informed consent, inability to opt out, and poor due process mechanisms. In this way, erosion of privacy and commodification of individuals' data will likely disproportionately affect vulnerable populations. Increasingly frequent and prominent demonstrations of narrow AI capability in medicine and public health are stimulating a rapid expansion in their adoption, reinforced by economic drivers and accelerated by the COVID-19 pandemic. The examples given in section 3.1 of automated decision systems that allocate health services funding illustrate how decisions made automatically on the basis of an algorithm's results can be consequential for population health. In the example of the Health Care Homes program in Australia, the program's scale means that decisions have meaningful consequences for population health and wellbeing, for the livelihood of general practitioners, and for the sustainability of the primary care tier of the Australian health system. Malfunctioning of these systems would therefore be expected to adversely affect population health equity. Meanwhile, it is clear from the results of this review that there is significant evidence of, and concern about, issues that have emerged in the use of narrow AI and automated decision making in sensitive domains where AI is already widely utilised. 
As the use of narrow AI-based automated decision systems and algorithmic prediction in health continues to expand, it is probable that the same issues which have demonstrably emerged in other sensitive domains will also manifest widely in medicine and public health applications. Indeed, there is emerging evidence of this happening already - that it is not more widespread is perhaps due to the comparatively slower adoption of AI in public health. There are two key reasons why it is highly probable that issues such as bias, incontestability and erosion of privacy will manifest widely in health. Firstly, as outlined in the results section, early reports of issues associated with the use of automated decision systems and predictive algorithms in health have already surfaced, and it is reasonable to expect this trend to continue. Secondly, the same circumstances and drivers which have compelled adoption of automated decision systems and given rise to issues in other high-stakes domains also exist in medicine and public health, indicating that the adoption trajectory and consequences are likely to be similar. Key amongst these drivers are the cost and capacity pressures facing health services, and the commercial imperative to use AI to capitalise on growing health data assets. The same imperatives to constrain costs and capitalise on data exist in other sensitive domains - such as education, policing, criminal justice, and immigration - and this has incentivised the adoption of automated decision systems by public agencies and corporations in those domains [39, 47]. A process of learning and emulation akin to policy transfer will likely ensure that the adoption trajectory of automated decision systems is similar in the health domain. This emulation and diffusion of innovation occurs because jurisdictions and agencies "face common problems" and they look to other communities for lessons and solutions [275]. 
The process is accelerated by a futures industry whose purpose is to market ideas and trade on promises and expectation. Hadjiisky and colleagues write: ...an entire global marketplace of ideas and recommendations on 'best practices' has emerged, including international organizations, commissions, donor groups, consultants, think tanks, institutes, networks, partnerships, and various gatherings of the great and the good such as Davos. They may not use the terminology of 'policy transfer' but that, in essence, is what they are debating and selling. Indeed, there is tremendous expectation heaped upon AI and precision health approaches to increase the efficiency of healthcare services and systems as a means of containing costs [39, 52, 219, 220]. Dolley [19] captures this in relation to public health: Precision public health is exciting. Today's public health programs can achieve new levels of speed and accuracy not plausible a decade ago. Adding precision to many parts of public health engagement has led and will lead to tangible benefits. Precision can enable public health programs to maintain the same efficacy while decreasing costs, or hold costs constant while delivering better, smarter, faster, and different education, cures and interventions, saving lives. (p. 6) The drive to rapidly adopt AI in health is given urgency by the oft-cited pressures facing health systems around the world, including population ageing, workforce shortages, increased prevalence and incidence of noncommunicable diseases, and variability in service quality and clinical outcomes [76, 276-278]. These pressures are especially acute in low-income countries, where health resources are particularly scarce [76], contributing to the expectation that AI will benefit global health [15-17]. 
Similarly, long-standing problems with current diagnostic approaches - such as invasiveness, cost, limited accessibility, and low precision - as well as the limitations of traditional analytic approaches, are driving interest in improved AI-enabled methods [3, 79]. The COVID-19 pandemic has been a 'perfect storm' that has exacerbated these pressures, further accelerating the adoption of AI [98, 99, 101-103]. The drive to adopt AI in health also follows closely on the heels of the imperative in medicine and public health to capitalise on big data. Much like AI more recently, 'big data' has commonly been expected to transform medicine and public health practice [e.g., 19, 20, 105, 128, 133, 214, 231, 279, 280]. AI - particularly deep learning - promises the ability to finally exploit the big, complex, noisy, high-dimensional health data that health organisations have been accumulating [8, 45, 117, 128, 231, 281]. For example, an editorial in The Lancet Public Health [76] states: "The ability of artificial intelligence and machine learning algorithms to analyse these multiple and rich data types at a scale not previously possible could bring a step change in public health and epidemiology." There is a convergence between the accumulation of data [254] and hyper-enthusiasm about AI. The drive to make use of and assetize data in health [282, 283] will propel the adoption of AI and automated decision systems. Typifying this drive is a call by the Chief Executive of a new United Kingdom National Health Service agency, NHSX, to capitalise on big data and AI [284]. Underscoring the financial imperative pressuring health services and agencies to adopt AI, the economic opportunity has been identified not only by corporations, but also by governments and public agencies [e.g. in Australia: 278, 285, 286]. 
What the drive to address health system pressures and capitalise on data portends is that the adoption of AI systems will be justified on the basis of health service efficiency and productivity - chasing "commercial vistas of fabulous scale" [287] - and not necessarily on the basis of improving population health or population health equity. Obermeyer and colleagues' [174] analysis already demonstrates the perverse outcomes that result from optimising AI predictions for health service cost as the objective, rather than for population health outcomes. Because of the likely focus on efficiency and the headlong rush toward adoption, issues may well go overlooked or even ignored. So why does it matter that issues such as bias, incontestability, and privacy erosion manifest widely? It matters because these issues give rise to mechanisms by which existing social, economic and health inequities are perpetuated and amplified, and by which new inequities are potentially created. Bias does this by perpetuating and amplifying existing inequities where automated decision systems rely on data that unknowingly echo systemic inequities, or do not adequately take account of the social and environmental determinants of health which drive inequitable health disparities. The opacity and incontestability of automated decision systems disproportionately affect vulnerable populations through the aggregation of power, limited opportunities for redress, and the inability of vulnerable people to contest unjust decisions. Similarly, vulnerable populations are disproportionately at risk of privacy erosion and data exploitation through the use of AI. Together these issues have the potential to perpetuate and exacerbate existing inequities and discriminatory dynamics. This leads to a significant risk that the use of automated decision systems in health will amplify and entrench existing population health inequities. 
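The mechanism identified by Obermeyer and colleagues - predicting healthcare cost as a proxy for health need, which penalises groups with poorer access to care - can be sketched with a minimal, hypothetical simulation. The group labels, access gap, and need distribution below are invented for illustration and are not drawn from the study's data:

```python
import random

random.seed(0)  # deterministic for illustration

def simulate(n=10000):
    """Hypothetical population: two groups with identical health need,
    but group 'B' faces access barriers, so observed cost under-records need."""
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        need = random.gauss(50, 10)            # true health need (same distribution)
        access = 1.0 if group == "A" else 0.6  # assumed access gap (illustrative)
        cost = need * access                   # spending reflects access, not need alone
        rows.append((group, need, cost))
    return rows

def mean(xs):
    return sum(xs) / len(xs)

rows = simulate()
need_a = mean([need for g, need, _ in rows if g == "A"])
need_b = mean([need for g, need, _ in rows if g == "B"])
cost_a = mean([cost for g, _, cost in rows if g == "A"])
cost_b = mean([cost for g, _, cost in rows if g == "B"])

# True need is (statistically) equal across groups, but a cost-trained
# predictor would rank group B as lower "risk" and divert resources from it.
print(f"mean need: A={need_a:.1f} B={need_b:.1f}")
print(f"mean cost: A={cost_a:.1f} B={cost_b:.1f}")
```

Although the two groups have statistically identical need, a system that ranks patients by predicted cost would systematically score group B as lower risk and allocate it fewer resources, reproducing the very access gap present in its training data.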
Examples of population health inequities that may be affected include the persistent life expectancy gap afflicting Aboriginal and Torres Strait Islander peoples in Australia [288, 289], and the stark socio-economic gradient in health outcomes evident in many countries [290-292] - for example, through social and environmental determinants such as intergenerational trauma not being accounted for in algorithmic allocation of health resources. In addition, because of the strong influence of social determinants of health [290], the use of automated decision systems in other domains such as welfare and immigration will undoubtedly also have a downstream effect on population health equity [see for example 293], either because these determinants are missing from the data, or because the data lack diversity, inclusiveness, and representativeness. Previous scholarly warnings that 'precision' health approaches may exacerbate health inequities lend credence to the risk that narrow AI and automated decision systems pose to population health equity. There have been strong warnings from within public health (albeit with little empirical evidence of their manifesting to date) that 'precision medicine', as well as emerging 'precision public health' and 'precision prevention' approaches (which employ narrow AI, automated decision systems and algorithmic risk prediction), have the potential to produce disparate impacts, amplify existing prejudices, and propagate health inequities [7, 76, 106, 118, 220, 226, 294, 295]. Elucidating this, Lavigne and colleagues [45] write: ...particularly when applying these approaches to decision-making or predictions at a population level, attention must be paid to the potential for these approaches to produce health inequities, either through the use of biased data or through uneven access to the technology. 
Predictions and models based on non-representative or biased data can propagate underlying biases and exacerbate health inequities at a population level if sufficient care is not taken to mitigate these issues. (p. 176) Moreover, socioeconomic gradients in access to new precision health tools - for example genomic risk prediction - as well as in the means and resources to best utilise them, have the potential to widen inequalities further [106, 220, 296]. The focus on individual risk factors promoted by precision approaches can also reinforce the notion of individual responsibility, prolonging the use of individualist, behaviourist interventions, which tend to entrench and exacerbate socioeconomic disparities in health outcomes [7, 19, 106, 113, 220, 297]. The focus on individual risk factors also undermines the rationale and societal propensity to act on structural, upstream determinants of health inequities, thereby permitting inequities in population health to persist [6, 106, 118, 294]. Meagher and colleagues [106, p. 11] succinctly capture this idea in relation to genomic data: "genomic explanations for health disparities can distract and even exculpate society from taking responsibility for the structural determinants of those inequities, undermining the political momentum of those seeking justice". This same rationale applies to the use of AI and automated decision systems. Amid unprecedented global interest in artificial intelligence, there are tremendous expectations that AI will transform medicine and public health practice. And while it may go largely unnoticed, narrow AI and predictive algorithms are already ubiquitous, woven into the fabric of our daily lives. Decisions being made automatically about disease detection, diagnosis, treatment and funding allocation have significant consequences for individual and population health and wellbeing. 
The evidence collated in this review makes it clear that issues have emerged in sensitive domains like criminal justice where narrow AI and automated decision systems are already in common use. As their use in health rapidly expands, it is probable that the same issues - bias, incontestability, and privacy erosion - will also manifest widely in medicine and public health applications. Reports of this happening are already appearing. Moreover, the combination of hype, the drive to adopt automated decision systems to address cost pressures - accelerated by the COVID-19 pandemic - and the commercial imperative to capitalise on health data assets may conspire to obscure issues, as has occurred in other domains. Crucially, bias, incontestability, and erosion of privacy give rise to mechanisms by which existing social, economic and health disparities are perpetuated and amplified by automated decision systems. Therefore, there is a significant risk that the use of automated decision systems in health will exacerbate existing population health inequities and potentially create new ones. Medical and public health interventions have, of course, produced disparate outcomes in the past; what makes the risk with narrow AI and automated decision systems different is the industrial scale and rapidity [287] with which they can be applied, combined with the incontestability of their decisions. This means negative consequences can quickly escalate to affect whole populations. While it is too soon to say whether the issues emerging in health applications of narrow AI and automated decision systems have led to worsened population health inequity, it is incumbent on health practitioners and policy makers to explore and be mindful of the potential implications of using automated decision systems, so as to ensure the use of AI promotes population health and equity. 
There is a need to design and implement automated decision systems with care (using equity impact assessments [see for example 298]), to monitor their impact over time (especially longer-term effects on population health), and to take responsibility for responding to issues as they emerge - even if this is long after a system was first introduced. To finish, Obermeyer and colleagues [174] set a very positive example. After uncovering inadvertent racial bias in an automated decision system allocating health assistance funding, they approached the algorithm manufacturer, which was able to independently replicate the results and confirm the existence of the bias. The researchers and the manufacturer are now collaborating on solutions to address it. This is a fine example to emulate. The author gratefully acknowledges the generous and valuable contributions of Niamh House of Lords Select Committee on Artificial Intelligence. AI in the UK: ready, willing and able? Report. United Kingdom Parliament The fate of medicine in the time of AI Questions for Artificial Intelligence in Health Care How Can Artificial Intelligence Make Medicine More Preemptive?' 
In Time to reality check the promises of machine learning-powered precision medicine Precision" Public Health -Between Novelty and Hype Precision public health-the Emperor's new clothes Artificial Intelligence for infectious disease Big Data Analytics The intersection of genomics and big data with public health: Opportunities for precision public health Big data and artificial intelligence for achieving universal health coverage: an international consultation on ethics Meeting report Machine Learning and the Profession of Medicine In: The New Yorker Annals of Medicine Artificial intelligence in healthcare Artificial intelligence in health care: Laying the Foundation for Responsible, sustainable, and inclusive innovation in low-And middle-income countries Transforming Global Health with AI Artificial intelligence and the future of global health UN Secretary-General's Strategy of New Technologies Big Data's Role in Precision Public Health Editorial: Precision Public Health'. In: Frontiers in Public Health 6 Effect in Health Decision Making Involving Artificial Entities: A Psychological Perspective'. In: Frontiers in Public Health Editorial: When Data Science, Humanities and Social Sciences Meet: Cross-Talks and Insights in Public Health Should We Fear the Robot Revolution? (The Correct Answer is Yes) Bits & Atoms [website] The Impact of Artificial Intelligence on Work : An evidence review prepared for the Royal Society and the British Academy Life 3.0 : being human in the age of artificial intelligence Machine behaviour Artificial intelligence : a modern approach. 3rd. Prentice Hall series in artificial intelligence Artificial Intelligence : The Next Digital Frontier? Report. 
McKinsey Global Institute Algorithmic War: Everyday Geographies of the War on Terror The production of prediction: What does machine learning want Ten simple rules for responsible big data research A population health perspective on artificial intelligence Supply-Chain Security and Trust Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability How to Develop and Implement a Computerized Decision Support System Integrated for Antimicrobial Stewardship? Experiences From Two Swiss Hospital Systems Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) -Discussion Paper and Request for Feedback Machine Bias The Dark Secret at the Heart of AI What Happens When an Algorithm Cuts Your Healthcare Artificial Intelligence and Critical Systems: From Hype to Reality The AI Now Report : The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term The singularity is near : when humans transcend biology Balancing risks and benefits of artificial intelligence in the health sector Who and what is a "population"? 
Historical debates, current controversies, and implications for understanding "population health" and rectifying health inequities Rose's strategy of preventive medicine : the complete original text A glossary for health inequalities Social determinants of health, data science, and decision-making: Forging a transdisciplinary synthesis Artificial intelligence, intersectionality, and the future of public health Ethical, Social, and Political Challenges of Artificial Intelligence in Health Four equity considerations for the use of artificial intelligence in public health The ethical, legal and social implications of using artificial intelligence systems in breast cancer care Writing narrative style literature reviews A Hermeneutic Approach for Conducting Literature Reviews and Literature Searches Time to challenge the spurious hierarchy of systematic over narrative reviews Next generation public health: towards precision and fairness New ethical challenges of digital technologies, machine learning and artificial intelligence in public health: A call for papers Federation of American Scientists. JASON Defense Advisory Panel Reports Artificial Intelligence for Health and Health Care AIs that diagnose diseases are starting to assist and replace doctors Robots join the care team: Making healthcare decisions safer with machine learning and robotics The increasing role of artificial intelligence in health care: Will robots replace doctors in the future? Machine Learning Explained'. 
In: Robots, AI, and other stuff Deep learning Artificial intelligence, bias and clinical safety Deep learning and artificial intelligence in radiology: Current applications and future directions A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis COVID-19 pneumonia accurately detected on chest radiographs with artificial intelligence You should see a doctor', said the robot: Reflections on a digital diagnostic device in a pandemic age Digitalisation and COVID-19: The Perfect Storm A systematic review: Role of artificial intelligence during the COVID-19 pandemic in the healthcare system Automated detection of COVID-19 cases using deep neural networks with X-ray images Digital health and care in pandemic times: impact of COVID-19 Medicorobots' As an Emerging Biopower: How COVID-19 Can AI help in the fight against COVID-19? Investigating the use of datadriven artificial intelligence in computerised decision support systems for health and social care: A systematic review The Future of Precision Medicine in Australia Precisely Where Are We Going? Charting the New Terrain of Precision Prevention Paediatric genomics: diagnosing rare disease in children Public health in the twenty-first century: the role of advanced technologies Precision global health for real-time action AI's gonna have an impact on everything in society, so it has to have an impact on public health": a fundamental qualitative descriptive study of the implications of artificial intelligence for public health From high definition precision healthcare to precision public oral health: opportunities and challenges Health Equity and the Fallacy of Treating Causes of Population Health as if They Sum to 100 The promises of big data for public health: Opening or closing possibilities for addressing health inequities?' 
Unpublished Work Exploring the intersection between social determinants of health and unmet dental care needs using deep learning Incorporating machine learning and social determinants of health indicators into prospective risk adjustment for health plan payments How Much Does the (Social) Environment Matter? Using Artificial Intelligence to Predict COVID-19 Outcomes with Socio-demographic Data Big Data Techniques for Public Health: A Case Study Artificial intelligence: opportunities and risks for public health Machinelearned epidemiology: real-time detection of foodborne illness at scale'. In: npj An AI Epidemiologist Sent the First Warnings of the Wuhan Virus Speech Processing for Early Alzheimer Disease Diagnosis: Machine Learning Based Approach Machine learning to refine decision making within a syndromic surveillance service Crowdbreaks: Tracking Health Trends Using Public Social Media Data and Crowdsourcing Analysis of COVID-19 Infections on a CT Image Using DeepSense Model'. In: Frontiers in Public Health Innovations in Population Health Surveillance: Using Electronic Health Records for Chronic Disease Surveillance Digital Pharmacovigilance and Disease Surveillance: Combining Traditional and Big-Data Systems for Better Public Health Domestic violence crisis identification from facebook posts based on deep learning Surveillance as Our Sextant Automatic detection of mycobacterium tuberculosis using artificial intelligence Challenges and opportunities for public health made possible by advances in natural language processing Image Enhancement for Tuberculosis Detection Using Deep Learning Use of Machine Learning and Artificial Intelligence to predict SARS-CoV-2 infection from Full Blood Counts in a population Tracking Disease: Digital Epidemiology Offers New Promise in Predicting Outbreaks An unsupervised machine learning model for discovering latent infectious diseases using social media data Digital epidemiology: Use of digital data collected for 
non-epidemiological purposes in epidemiological studies Big Data and Disease Prevention: From Quantified Self to Quantified Communities Precision nutrition: hype or hope for public health interventions to reduce obesity?' In Artificial Intelligence for Diabetes Management and Decision Support: Literature Review Predictive Modeling for Public Health: Preventing Childhood Lead Poisoning A Hybrid Approach to Identifying Key Factors in Environmental Health Studies Prediction of malaria mosquito species and population age structure using mid-infrared spectroscopy and supervised machine learning'. bioRxiv. Preprint An overview of GeoAI applications in health and healthcare Deep Learning Model to Estimate Air Pollution Using M-BP to Fill in Missing Proxy Urban Data Artificial intelligence in public health prevention of legionelosis in drinking water systems A picture tells a thousand...exposures: Opportunities and challenges of deep learning image analyses in exposure science and environmental epidemiology International evaluation of an AI system for breast cancer screening Development of an Automatic Diagnostic Algorithm for Pediatric Otitis Media Artificial intelligence with deep learning technology looks into diabetic retinopathy screening Deep Neural Networks Improve Radiologists' Performance in Breast Cancer Screening Use of artificial intelligence for image analysis in breast cancer screening programmes: systematic review of test accuracy Social Bots for Online Public Health Interventions A Chatbot-supported Smart Wireless Interactive Healthcare System for Weight Control and Health Promotion Using Artificial Intelligence to Reduce the Risk of Nonadherence in Patients on Anticoagulation Therapy Chatbots as extenders of pediatric obesity intervention: an invited commentary on "Feasibility of Pediatric Obesity & Pre-Diabetes Treatment Support through Tess, the AI Behavioral Coaching Chatbot Applying Deep Learning to Public Health: Using Unbalanced Demographic Data to 
Predict Thyroid Disorder
Using predictive analytics to identify children at high risk of defaulting from a routine immunization program: Feasibility study
Application of machine learning on colonoscopy screening records for predicting colorectal polyp recurrence
Applying Best Machine Learning Algorithms for Breast Cancer Prediction and Classification
An Algorithm Based on Deep Learning for Predicting In-Hospital Cardiac Arrest
Deep Patient: An Unsupervised Representation to Predict the Future of Patients from the Electronic Health Records
Prediction of rapid kidney function decline using machine learning combining blood biomarkers and electronic health record data. bioRxiv (preprint)
The utility of artificial neural networks and classification and regression trees for the prediction of endometrial cancer in postmenopausal women
Machine Learning in Multi-Omics Data to Assess Longitudinal Predictors of Glycaemic Health. bioRxiv (preprint)
Chronic disease risk monitoring based on an innovative predictive modelling framework
Machine Learning Based Models for Cardiovascular Risk Prediction
Predicting Risk of Suicide Attempts Over Time Through Machine Learning
Patient Risk Stratification with Time-Varying Parameters: A Multitask Learning Approach
Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features
Can machine-learning improve cardiovascular risk prediction using routine clinical data?
Babylon GP at hand: Progress to date [Presentation]
Safety of patient-facing digital symptom checkers
Babylon Health confirms $550M raise at $2B+ valuation to expand its AI-based health services
A Study of Telecontraception
Dissecting racial bias in an algorithm used to manage the health of populations
Dissecting Racial Bias in an Algorithm that Guides Health Decisions for 70 Million People
Socio-Economic Indexes for Areas (SEIFA)
Health Care Home Risk Stratification Tool
How AI Can End Bias
Amazon's sexist hiring algorithm could still be better than a human
Towards artificial intelligence-based assessment systems
Algorithms in the Criminal Justice System
Crime-Predicting Algorithms May Not Fare Much Better Than Untrained Humans
To predict and serve?
Algorithmic prediction in policing: assumptions, evaluation, and accountability
Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice
Policing Young People in NSW: A study of the Suspect Targeting Management Plan
New Orleans Program Offers Lessons In Pitfalls Of Predictive Policing
The Crime Machine, Part I
The Crime Machine, Part II
Amazon scraps secret AI recruiting tool that showed bias against women
Hiring Algorithms Are Not Neutral
Untold History of AI: Algorithmic Bias Was Born in the 1980s
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
Algorithmic Bias? An Empirical Study into Apparent Gender-Based Discrimination in the Display of STEM Career Ads
Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems (report)
Flawed Algorithms Are Grading Millions of Students' Essays
Building Better Open-Source Tools to Support Fairness in Automated Scoring
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
Concerned Researchers. On Recent Research Auditing Commercial Facial Analysis Technology. In: Medium
Facial-Recognition Software Might Have a Racial Bias Problem
Facial Recognition Is Accurate, if You're a White Guy
Amazon's Facial Recognition Wrongly Identifies 28 Lawmakers, A.C.L.U. Says (NYT)
AI researchers tell Amazon to stop selling 'flawed' facial recognition to the police
Like Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation
The Risk of Racial Bias in Hate Speech Detection
Instagram's Shadow Ban On Vaguely 'Inappropriate' Content Is Plainly Sexist
Genderless Nipples exposes Instagram's double standard on nudity
Social Media Giants Have a Big LGBT Problem. Can They Solve It?
Bringing Data Out of the Shadows
The Elusive Rentier Rich: Piketty's Data Battles and the Power of Absent Evidence
Big Data's Disparate Impact
Big data meets public health
Artificial Intelligence: Australia's Ethics Framework. A Discussion Paper
Genevieve Bell calls out the creeps and warns of analytics' unintended consequences
Challenges to the Reproducibility of Machine Learning Models in Health Care
For a critical appraisal of artificial intelligence in healthcare: The problem of bias in mHealth
The "inconvenient truth" about AI in healthcare
Precision Medicine Needs a Cure for Inequality
The effective and ethical development of artificial intelligence: An opportunity to improve our wellbeing
Sociodemographic Characteristics of Missing Data in Digital Phenotyping. medRxiv (preprint)
Big data in context: Addressing the twin perils of data absenteeism and chauvinism in the context of health disparities research
The Need for Ethnoracial Equity in Artificial Intelligence for Diabetes Management: Review and Recommendations
Towards Equitable AI Interventions for People Who Use Drugs: Key Areas That Require Ethical Investment
Virtual, visible, and actionable: Data assemblages and the sightlines of justice
Artificial Intelligence in mental health and the biases of language based models
Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case (report)
Australia's health series no. 16. AUS 221. Australian Institute of Health and Welfare
The black box society: the secret algorithms that control money and information
Digital epidemiology: what is it, and where is it going?
What are the limits of deep learning?
Explainability for artificial intelligence in healthcare: a multidisciplinary perspective
7 things we've learned about computer algorithms
In AI we trust? Perceptions about automated decision-making by artificial intelligence
Conditionally positive: a qualitative study of public perceptions about using health data for artificial intelligence research
Data Are Not Enough: Hurray For Causality
Why Data Is Never Raw
Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon
Introduction: Software, a Supersensible Thing
Teasing out Artificial Intelligence in Medicine: An Ethical Critique of Artificial Intelligence and Machine Learning in Medicine
Artificial intelligence and privacy. Issues paper
Privacy International. Algorithms, Intelligence, and Learning Oh My
Privacy and Freedom of Expression in the Age of Artificial Intelligence
Australian Human Rights Commission
Is there a duty to participate in digital epidemiology?
Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization
Estimating the success of re-identifications in incomplete datasets using generative models
The age of surveillance capitalism: the fight for a human future at the new frontier of power
Data from objects to assets
Personal Data: The Emergence of a New Asset Class
Strategic opportunities (and challenges) of algorithmic decision-making: A call for action on the long-term societal effects of 'datification'
Genesis: What is Bioinformation? In: Bioinformation
When data is capital: Datafication, accumulation, and extraction
Data Capitalism: Redefining the Logics of Surveillance and Privacy
The One-Way Mirror: Public attitudes to commercial access to health data
A Day in the Life of Data: Removing the opacity surrounding the data collection, sharing and use environment in Australia
Report prepared for Agency for Healthcare Research and Quality
Precision health data: Requirements, challenges and existing techniques for data security and privacy
Artificial intelligence in health care: value for whom?
The Top 5 AI In Healthcare Startups. In: MedTech Boston
Protecting Your Patients' Interests in the Era of Big Data, Artificial Intelligence, and Predictive Analytics
Your health data was once between you and your doctor
When digital health meets digital capitalism, how many common goods are at stake?
Data sharing practices of medicines related apps and the mobile ecosystem: traffic, content, and network analysis
Google 'betrays patient trust' with DeepMind Health move
The challenge of privacy and security when using technology to track people in times of COVID-19 pandemic
Who Learns What from Whom: a Review of the Policy Transfer Literature
Introduction: traversing the terrain of policy transfer: theory, methods and overview
Australian Commission on Safety and Quality in Health Care
Human: Solving the Global Workforce Crisis in Healthcare
Future of Health: Shifting Australia's focus from illness treatment to health and wellbeing management
A new era for population health: government, academia, and community moving upstream together
The impact of genomics on the future of medicine and health
Insights into Pathogenic Interactions Among Environment, Host, and Tumor at the Crossroads of Molecular Pathology and Epidemiology
From health to wealth: The future of personalized medicine in the making
E-Infrastructures and the divergent assetization of public health data: Expectations, uncertainties, and asymmetries
The thinking of the new chief executive of NHSX, which is charged with digitising the NHS
The Senate Select Committee on Health. Sixth interim report. Big health data: Australia's big potential
How the Enlightenment Ends
Close the Gap: A ten-year review: the Closing the Gap Strategy and Recommendations for Reset. Close the Gap Campaign Steering Committee for Indigenous Health Equality
Closing the gap in a generation: Health equity through action on the social determinants of health. Final Report of the Commission on Social Determinants of Health
Social determinants of health inequalities
U.S. Health in International Perspective: Shorter Lives, Poorer Health. The National Academies Collection: Reports funded by National Institutes of Health
Over 2000 people died after receiving Centrelink robo-debt notice, figures reveal
Will Precision Medicine Improve Population Health?
The "We" in the "Me": Solidarity and Health Care in the Era of Personalized Medicine
Ethical Challenges of Big Data in Public Health
Why behavioural health promotion endures despite its failure to reduce health inequities
A rapid equity focused health impact assessment of a policy implementation plan: An Australian case study and impact evaluation