key: cord-0604899-u9axbksc authors: Ahmad, Kashif; Maabreh, Majdi; Ghaly, Mohamed; Khan, Khalil; Qadir, Junaid; Al-Fuqaha, Ala title: Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges date: 2020-12-14 journal: nan DOI: nan sha: cabe1ad68d400aa189e600e31ab40ef78f5eb2d4 doc_id: 604899 cord_uid: u9axbksc

As the globally increasing population drives rapid urbanisation in various parts of the world, there is a great need to deliberate on the future of cities worth living in. In particular, as modern smart cities embrace more and more data-driven artificial intelligence services, it is worth remembering that technology can facilitate prosperity, wellbeing, urban livability, and social justice, but only when it has the right analog complements (such as well-thought-out policies, mature institutions, and responsible governance); the ultimate objective of these smart cities is to facilitate and enhance human welfare and social flourishing. Researchers have shown that various technological business models and features can in fact contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In the light of these observations, addressing the philosophical and ethical questions involved in ensuring the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities assumes paramount importance. Globally, there are calls for technology to be made more humane and human-centered. In this paper, we analyze and explore key challenges, including security, robustness, interpretability, and ethical (data and algorithmic) challenges, to a successful deployment of AI in human-centric applications, with a particular emphasis on the convergence of these concepts/challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of these challenges may lead to others or help in solving other challenges. The paper also advises on the current limitations, pitfalls, and future directions of research in these domains, and how research can fill the current gaps and lead to better solutions. We believe such rigorous analysis will provide a baseline for future research in the domain.

According to a recent report [1], around 54% of the world's population lives in cities, and the number is expected to reach 66% by 2050. Rapid urbanization is driven by economic incentives, but it also has a significant collateral environmental and social impact. Therefore, environmental, social, and economic sustainability is crucial to maintain a balance between rapid urban expansion and the resources of the cities. Modern technologies strive to improve the environmental, financial, and social aspects of urban life and to mitigate the associated challenges. More recently, the concept of smart cities has been introduced, which aims to make use of modern technologies, including a wide range of Internet of Things (IoT) sensors, to collect and analyze data on different aspects of urban life [2, 3]. A smart city application demands a joint effort of people from different disciplines, such as engineering, architecture, urban design, and economics, to plan, design, implement, and deploy a smart solution for an underlying task.
Artificial Intelligence (AI) techniques have also proved very effective in gaining insights from data collected through different IoT sensors in order to manage and utilize resources more efficiently. In this paper, we use the term AI broadly as an umbrella term covering techniques and algorithms able to learn from data (i.e., data science, statistical learning, machine learning, deep learning) as well as intelligent systems able to perform tasks such as perception, reasoning, and inference (i.e., expert systems, probabilistic graphical models, Bayesian networks). According to Greg Stone [4], "If you know the right questions and understand the risks, data can help build better cities," and AI helps extract such insights from the data. Some key smart city applications where AI has proved very effective include healthcare, transportation, education, environment, agriculture, defense, and public services [5, 6, 7, 8, 9]. We note here that while much of our discussion focuses on data-driven machine learning (ML), our ideas apply more broadly to AI technology for smart cities in general. We also note that most of the recent significant advances have been made possible by advances in ML, and much of the work on AI safety and AI ethics is directly relevant to ML; therefore, we will mostly use the two terms synonymously.

In a smart city application, AI techniques aim to process and identify patterns in data obtained from individual sensors or in collective data generated by several sensors, and to provide useful insights on how to optimize the underlying services. For instance, in transportation, AI could be used to analyze data collected from different parts of a city (e.g., roads, commute modes, and number of passengers) for future planning or for deploying different transportation schemes in the city. However, there are several risks and challenges, such as availability, biases, and privacy of data, to successfully deploying AI in different smart city applications [10, 11, 12]. Various data biases can result in detrimental AI predictions in sensitive human-centric applications; e.g., algorithmic predictions may be biased against certain races and genders, as reported in [13, 14]. Apart from data-oriented challenges, there are other threats to AI in smart city applications. For instance, attackers can launch different types of adversarial attacks on AI models to affect their predictive capabilities. Such attacks in sensitive application domains, such as connected autonomous vehicles, can lead to significant losses in terms of human lives and infrastructure [15]. Another key challenge to the deployment of AI in smart city applications is the lack of interpretability (i.e., humans are unable to understand the cause of an AI model's decision) [16]. Explainability is a key characteristic for AI models to be deployed in critical smart city applications, where the predictive capabilities of the models alone are not enough to solve a problem completely; the reasons behind the predictions also need to be understood [17, 18].
It also helps to ensure that AI decisions in an underlying application are equitable by avoiding decisions based on protected attributes (e.g., race, gender, and age; for instance, Amazon's AI recruiting tool was found to be biased against women [19], and Amazon's Rekognition, used as a gender recognition tool, was found to be 31.4% less accurate in classifying the gender of dark-skinned women compared to light-skinned men [20, 21]), and by ensuring an equal representation of protected attributes in the sample space [22]. In recent years, ever-growing concerns have been raised over the deployment of AI algorithms in human-centric smart city applications, for instance, over privacy issues in surveillance systems, unequal inclusion of citizens in different services, and biases in predictive policing [23, 24, 20]. A breakthrough in one of these challenges may have a knock-on effect. For instance, besides offsetting the problems of interpretability and bias in decisions, explainable AI may also help guard against adversarial attacks; on the other hand, the explanations produced by explainable AI may also help attackers generate more adverse attacks [25, 26].

In a smart city, sensors are deployed at various places to gather data about different aspects of the city (e.g., data related to transportation, healthcare, and the environment), which is then sent to a central server for analysis or processed locally at edge devices to obtain useful insights using AI techniques. Thanks to recent advancements in technology, government authorities can now gather real-time data and, combined with the capabilities of AI, manage public services in cities more efficiently and effectively. For instance, with enough information about road conditions, traffic volume, and people's commuting patterns in a city, authorities can eliminate bottlenecks, which can in turn reduce traffic, crowding, and pollution, leading to more optimized, sustainable, and clean services and environments. Some of the currently available key applications of AI in smart cities are illustrated in Figure 1 and described next.

• Healthcare: The basic motivation for ML applications in healthcare lies in their ability to automatically analyze large volumes of data, identify hidden patterns, and extract meaningful clinical insights, which is beyond the scope of human capabilities. The automatically extracted insights are generally efficient and help medical staff in planning and treatment, ultimately leading to effective and low-cost treatment with increased patient satisfaction [28]. In recent years, AI has been heavily deployed in healthcare and has proved very effective, thanks to recent advancements in deep learning. For instance, a solution proposed by Google [29] outperformed human doctors (i.e., by around 16% in accuracy) in the identification of breast cancer in mammograms. Similarly, the AI solutions proposed in [30, 31] have proved very effective in the diagnosis of skin and lung cancer, respectively.

• Transportation and Autonomous Cars: Transportation can benefit from AI in several ways. For instance, its predictive capabilities can help in traffic volume and congestion estimation for route optimization [32]. AI algorithms can also be used jointly with multimedia processing techniques for road safety [33], driver distraction detection [34], accident event detection [35], and road passability analysis [36, 37].
Moreover, AI can be considered the backbone of autonomous cars, where one of the key responsibilities of the AI module is continuous monitoring of the surrounding environment and the prediction of different events, which generally involves the detection and recognition of various objects such as pedestrians, vehicles, and roadside objects [38].

• Education: AI brings several advantages in education by contributing to several tasks, such as automatic grading and evaluation, students' retention and dropout prediction, personalized learning, and intelligent tutoring systems [8]. AI's predictive capabilities could also help in predicting students' career paths by applying AI techniques to students' data covering different aspects, such as interests and performance in different subjects.

• Crime Detection/Prediction and Tracking: This is another interesting smart city application where AI has shown its potential. AI is transforming the way law enforcement agencies operate to prevent, detect, and deal with crimes. In the modern world, law enforcement agencies rely heavily on predictive analysis to track crimes and identify the most vulnerable areas of a city, where additional forces and patrolling teams could be deployed. One example of such tools is PredPol, which relies on AI techniques to predict "hot spot" crime neighborhoods [39].

• Clean and Sustainable Environment: AI also helps in monitoring and maintaining a clean and sustainable environment. Thanks to recent advancements in deep learning and satellite technologies, environmental monitoring and enforcement are more efficient than ever [40]. AI techniques have been widely deployed in analyzing remotely sensed data for environmental changes. Moreover, AI techniques have also been demonstrated to be very effective in disaster detection [41], water management [42], and waste classification [43].

• Smart Building: A smart building is an automated structure/system that controls the building's operations, such as lighting, heating, ventilation, air conditioning, and security. AI has been widely exploited for various tasks in such smart systems, as elaborated upon in [44].

• Tourism, Culture, Services, and Entertainment: The tourism and entertainment industries have also benefited greatly from AI and social media [45]. For instance, AI-based recommendation systems are widely used by travelers in deciding on their holiday destinations, considering different variables such as transportation and accommodation facilities, cost, food, and historical sites. In addition, AI-based applications could help travelers in fraud detection, cost optimization, and the identification of entertainment venues and transportation facilities at the destination. Apart from recommendation systems, which are one of the main applications of AI in the sector, AI-enabled visual sentiment analysis tools could be used to search for or extract scenes from long TV show videos based on sentiment analysis [46].

Despite these outstanding performances and successes, AI also brings challenges in the form of privacy risks and unintentional bias in public services. For instance, to analyze people's commute patterns, the administration needs to collect and process a lot of people's data, including their movements, risking leakage of personal information. Intentional and unintentional bias in AI decisions is even more dangerous and might endanger citizens' lives in healthcare or law enforcement applications.
For instance, AI-based software used to predict future criminals was found to be biased against Black people [14]. Similarly, a smart system used to predict the healthcare needs of about 70 million patients in the US was assigning lower risk scores to Black patients than to white patients with the same medical conditions [47]. It must be noted that the algorithms do not learn such bias on their own; rather, it comes from the data used to train the algorithms, which reflects the social and institutional biases practiced in society over the years [3]. Moreover, being a product of humans, AI algorithms reflect the beliefs, objectives, priorities, and design choices of humans (i.e., developers). For instance, to make accurate predictions, AI algorithms need the training data to be properly annotated and to contain sufficient representation of each class. An over-representation of a class may develop a tendency towards that class in predictions. The trade-off between false positives and false negatives is also very crucial for AI predictions. These limitations of AI hinder its ability to overcome social and political biases and achieve smart cities' true objectives. According to Green [3], AI algorithms in smart city applications are mostly influenced by the social and political choices of society and the authorities. Therefore, to ensure privacy and reduce the bias of AI algorithms in human-centric applications, we need to discuss the need for, goals of, and potential impact of their decisions on society before deploying them. Moreover, there are several security threats to AI models in smart city applications; for instance, attackers can launch adversarial attacks on AI models to bias their decisions by disturbing their prediction capabilities. For example, an adversarial attacker might turn off an autonomous car on a highway and ask for money to restart it. A more serious situation could be stopping a train on the platform just before the arrival of the next train [48]. Another challenge is the lack of interpretability, which results in humans being unable to understand the causes of an AI model's decisions. To deal with such risks involved in deploying AI in smart city applications, the concepts of explainability and ethics in AI have been introduced. In the next sections, we provide a detailed overview and analysis of the potential security, robustness, interpretability, and ethical (data and algorithmic) challenges to AI in smart city applications.

The paper revolves around the key challenges to a successful deployment of AI in smart city applications, including security, robustness, interpretability, and ethical (data and algorithmic) challenges. Figure 2 visually depicts the scope of the paper. The paper emphasizes these concepts/challenges by exploring how one of these challenges/problems may cause or help in solving others. We also analyze research trends in these domains. The paper also advises on the current limitations, pitfalls, and future directions of research in these domains, and how research can fill the current gaps in the literature and lead to better solutions. Due to the keen interest of the research community in leveraging AI for smart city applications, this has always been a popular area of research [49]. In the literature, several interesting articles analyzing different aspects of AI applications in smart cities have been published [9].
In addition, as these are among the key active research topics, a significant amount of literature can be found on adversarial attacks, explainability, the availability of datasets, and the ethical aspects of AI in human-centric applications. Adversarial and explainable AI are comparatively more explored in the literature, and there are some interesting surveys covering different aspects of the individual topics. However, to the best of our knowledge, there is no survey jointly analyzing these challenges and, more importantly, emphasizing the connection between the four challenges. For instance, Zhang et al. [50] provide a survey of adversarial attacks on deep learning models. Similarly, Serban et al. [51] provide a comprehensive survey on adversarial examples in object recognition. Zhou et al. [52], on the other hand, provide a survey of game-theoretic approaches for adversarial AI. There are also some recent surveys on explainable AI. For instance, in [53, 54, 55, 56], surveys of the existing literature on explainable AI are presented. Some surveys focus on a particular type of technique for explainable AI. For instance, [57] and [58] survey web technologies and reinforcement learning-based approaches for explainability, respectively. Baum et al. [59], on the other hand, provide a survey of AI projects on ethics, risk, and policy. Similarly, Morley et al. [60] provide an overview of the literature on AI ethics in healthcare. In contrast to other surveys, this paper emphasizes the connection between these four challenges and analyzes how a solution to one of the challenges may also help or cause the others.

In this paper, we provide a detailed survey of the literature on the security, safety, robustness, interpretability, and ethical (data and algorithmic) challenges to AI in smart city applications. The paper mainly focuses on the connections among these concepts and analyzes how these concepts and challenges are dependent on each other. The main contributions of the paper are summarized as follows:
• We provide a detailed analysis of how AI is helping to develop our cities, and of the potential challenges, such as salient ethical, interpretability, safety, security, and fairness challenges, hindering its deployment in different smart city applications.
• The paper analyzes the literature on major challenges, including security, safety, robustness, interpretability, and ethical challenges, in deploying AI in human-centric applications.
• The paper provides useful insights into the relationships among these challenges and describes how they may affect each other.
• We also identify the limitations, pitfalls, and open research challenges in these domains.

The rest of the paper is organized as follows. Section 2 provides an overview of different security and robustness challenges to AI in smart city applications. Section 3 details the importance of interpretability, explains how explainable AI can help in extracting more insightful information from AI decisions, and discusses how it can be linked with adversarial attacks. Section 4 details the challenges associated with data collection and sharing. Section 5 focuses on the ethical aspects of deploying AI in human-centric smart city applications. Section 6 summarizes the key insights and lessons learned from the literature. In Section 7, we highlight the open issues and future research directions. Finally, Section 8 provides some concluding remarks. Figure 3 visually depicts the structure of the paper.
Machine learning has tremendous potential in smart cities and can improve the productivity and effectiveness of different city systems. Despite the positive outcomes and the promise of AI in smart cities, security is one of the main concerns that still needs further investigation and experiments. AI models can be vulnerable to different kinds of attacks, such as adversarial examples, model extraction attacks, backdooring attacks, Trojan attacks, membership inference, and model inversion [61]. Attacks on AI models introduce challenges of a rather different nature than those addressed by existing software security systems and approaches [62]. AI has its own unique security issues, where a small modification of the objects (inputs or data consumed by AI algorithms) might change the decision of AI models and cause serious consequences. The following short list of security incidents involving AI applications over the last five years clearly raises the urgent need to intensively study the safety and security aspects of AI while transforming cities to be smart.

• 2016, Tesla's Autopilot system confused the white side of a truck with the sky, leading to a deadly crash 1.
• 2016, a Microsoft chatbot was shut down a few hours after its release. The model was attacked and forced to post offensive tweets 2. Chatbots are not just for businesses, but also for government services. The City of North Charleston in South Carolina, USA, has launched Citibot, a communication tool for citizens and their governments; citizens can ask for information or request repairs 3. These smart systems are vulnerable to hacking as they consume data from citizens to analyze and manage their requests.
• 2016, a Google AV was in autonomous mode when a failure in speed estimation caused a crash. 4
• 2016, a face recognition detection attack was carried out using eyeglass frames [63].
• 2017, Apple's face recognition was fooled by a cheap 3D-printed mask 5.
• 2018, one of Uber's self-driving cars killed a pedestrian; the AV did not stop in time. 6
• 2018, robust physical perturbations could fool the DNN-based classifier of a self-driving car into misclassifying speed limit signs [64].
• 2018, targeted audio adversarial examples successfully attacked DeepSpeech, a deep learning-based speech recognition system. AI works on sound waves, which can carry secret commands to connected devices [65].
• 2019, the Tesla Autopilot AI system was attacked at Tencent's Keen Security Lab with small changes to the lane markings that remained clear to humans. The Tesla Model S swerved into the wrong lane, making Tesla's lane recognition models risky and unreliable under some conditions [66].
• 2019, a neural network model diagnosed a benign mole as malignant because of tiny noise added to the medical image [67].
• 2019, deepfakes: Facebook created a dataset for deepfake detection. 7
• 2019, a smart algorithm guiding care for tens of millions of people was found to be biased against dark-skinned patients in New Jersey, USA, assigning them lower scores than white patients with the same medical conditions. 8
• 2020, a shopping guide robot in Fuzhou Zhongfang Marlboro Mall, China, walked to an escalator by itself, fell off the escalator, and knocked over passengers; the robot was suspended from its duties 9.
• 2020, Starsky Robotics was shut down due to a safety issue in its self-driving software on highways.
The team of Starsky reported that supervised ML alone is not enough to build a safe robot-truck industry 10.
• 2021, Tesla cars crashed due to the Autopilot feature 11.
• 2021, the FBI issued a warning about a rise in AI-based synthetic materials, including deepfake content 12.
• 2021, an AI chatbot was suspended, and the firm behind it is now being sued in South Korea, for making offensive comments and leaking users' information. 13

This list indicates that there are several issues to be handled beyond building AI models with good performance. In the following subsections, we discuss the strategies of attacks on machine learning models in smart city applications, which sheds light on the necessity for safe and robust AI solutions at both the technical and policy levels.

This challenge has been recognized and discussed both in terms of crafting fake data, known as adversarial examples, in different domains (text [68], images [69], audio [65], network signals [70]) and in terms of evaluating and developing solutions against this security threat [71]. Formally, given benign input data X that is classified as class 1 by a model M, the attacker seeks a perturbation function F that generates X′ = F(X) such that X′ is classified as class 2 by the same model M, while the difference between X and X′ cannot be discovered by humans. This definition refers to un-targeted adversarial attacks. Targeted attacks have a target class Y, and the function F tries to find a perturbed version of any benign input such that the model M becomes biased towards Y in its prediction. Another classification of adversarial attacks is based on the amount of knowledge the attackers have about the target model (the victim). The threat model can be white-box, gray-box, or black-box. In white-box attacks, adversaries have full knowledge of the targeted model architecture, which eases the process of crafting poisoned data and thus fooling the system. In gray-box threat models, attackers have some information about the overall structure of the model, while in black-box threat models all they have is access to use the model [72]. A detailed taxonomy of adversarial attacks can be found in [5]. Figure 4 illustrates a common adversarial attack on an image classifier, where an AI model is deceived by adding a tiny perturbation, amplified in the figure for visual depiction, to a legitimate sample in order to disturb the prediction capabilities of the model.

Figure 4: An illustration of a common adversarial attack on an image classification AI model. The shown adversarial perturbation (amplified for illustration) is added to a legitimate sample to force the model to make a wrong prediction with high confidence [73].

Since the main focus of the paper is on smart city applications, without going into further details, in the next subsection we provide an overview of the literature on adversarial attacks on AI models in smart city applications. Adversarial attacks are considered severe security threats to learning-based models due to their possible consequences. In smart cities, with their complex networks and collaborating data-driven applications and devices, the impact of misleading a model, e.g., a classifier, could result in harsh situations and costly damage.
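To make the formal definition above concrete, the following is a minimal, illustrative sketch of an untargeted evasion attack in the style of the fast gradient sign method (FGSM); the tiny stand-in classifier, the random input, and the value of epsilon are hypothetical choices for illustration and are not taken from the works discussed in this paper.

```python
# Minimal sketch of an untargeted FGSM-style evasion attack on a differentiable
# classifier (illustrative only; the stand-in model and epsilon are arbitrary choices).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier M

def fgsm_perturb(model, x, true_label, epsilon=0.05):
    """Return X' = X + epsilon * sign(grad_X loss), a small perturbation
    that pushes the model's prediction away from the true class."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), true_label)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

x = torch.rand(1, 1, 28, 28)   # a legitimate input sample X
y = torch.tensor([3])          # its true label
x_adv = fgsm_perturb(model, x, y)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```

The sketch only illustrates the white-box case: the attacker needs the model's gradients, whereas gray- and black-box attackers must estimate or approximate this information through queries.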
Such misclassifications can occur whether the model is misled intentionally, for example by inputs crafted by attackers, or unintentionally ("accidentally"), for example by a defect in traffic light signals or by varying weather conditions that affect the illumination of the signs consumed by autonomous vehicles [74]. In [75], a perturbation of a regular image of a stop sign forces a deep neural network classifier to see it as a yield sign. This information can lead the vehicle to behave unsafely and might cause severe accidents. This case could be even worse if other neighboring vehicles consume data sent by the attacked vehicle. A DNN-based solution was developed in [76] to detect and then isolate the attacked vehicle from the cooperative vehicles. AI models in autonomous vehicles depend not only on exchanged sensor data but also on consuming street signs to control driving and traffic. The security of these models is crucial, since a slight change in a sign image could be enough to fool the model; for example, one pixel is often enough to attack a classifier in [77].

Similarly, human lives and billions of dollars could fall victim to AI models that misclassify diseases and medical reports. In [67], using slight noise on disease images, or even replacing some words in a disease description with their synonyms, made the AI models change their decisions to the opposite of the true ones. Although medical images are taken in pre-defined settings, so that some manipulations applied to images in other domains (such as rotations) are not valid, and some manipulation methods can easily be detected by specialists' eyes [78], there is still a chance of manipulation by other methods [79]. In [80], a GAN was able to modify breast medical images by adding/removing features and change the AI decision, while radiologists could not discriminate between the original and manipulated images at low resolution rates. Brain medical images have been manipulated by three different methods (generated noise, fast gradient sign, and virtual adversarial training) to produce adversarial examples that mislead a brain tumor classifier [81]. Fortunately, with the help of DNNs, a detector has been developed that showed surprisingly high accuracy in detecting manipulated medical images [82]. Other detectors result from ensembles of CNN networks [83] or from augmenting the training dataset with adversarial examples of modified CT scan images [84]. The literature shows more evaluation of medical image attacks than of text attacks, probably because such attacks arose in the computer vision field. However, texts in natural language are also liable to attacks [50]. This means that prescriptions, medical record classification for insurance decisions, patient histories and allergy information, and medical claims codes that determine reimbursement are all vulnerable to attacks. The sensitive nature of these applications and the resulting harms (economic as well as social) raise concerns about the safety, security, and dependability of AI systems. In the future, extra computational interventions (e.g., adversarial data detectors) may form an integral part of AI-based medical solutions. Other components of smart cities are not far from serious attacks.
In the smart energy sector, attacks come in different forms: denial-of-service attacks, where systems or parts of them become inaccessible [85]; random manipulation of sensor readings; or, when the attacker has some information about the system and its sensors, injection of false data into the system [86, 87, 88]. Several detection solutions have been proposed and evaluated to mitigate attacks on a grid, such as false data injection detection [89] and securing the grid's physical layers against attacks [90]. Adversarial attacks also have a serious impact on food safety and production control [91]. Several AI solutions feed on images, videos, and text in smart agriculture and smart waste management. These two smart sectors may be more vulnerable to unintentional attacks, one reason being the natural conditions in which the sensors and cameras operate. Table 1 provides a summary of some of the works on adversarial AI in smart city applications:

• (White- and black-box attacks; DNNs; self-collected data) Proposes a decentralized framework, DeSVig, to identify and guard against adversarial attacks on an industrial AI system; the biggest advantage of the framework is its ability to reduce the failure rate of identifying, and of being deceived by, adversaries.
• [102] (Smart grids; white-box; GANs; self-collected data) Explores the vulnerabilities in smart grids; to this aim, a data-driven learning-based algorithm is proposed to detect unobservable false data injection attacks in distribution systems; moreover, the method needs fewer training samples and makes use of unlabelled data in a semi-supervised way.
• [88] (Smart grids; black-box; RNN and LSTM) Analyzes and explores the vulnerabilities of smart grids against adversarial attacks; the authors mainly focus on the key functions of smart grids, such as load forecasting algorithms, and analyze the potential impact of adversaries on load shedding and increased dispatch costs using data injection attacks.
• [103] (Smart grids; black-box; RNN and LSTM) Analyzes the vulnerabilities and resilience of AI models in power distribution networks against adversarial attacks on smart meters via a domain-specific deep learning architecture; smart meters are attacked under the assumption that the attacker has full knowledge of both the model and the detector.
• (Black-box attacks; CNNs; Market1501 [105], CUHK03 [106], and DukeMTMC [107]) Explores and analyzes how person re-identification frameworks in CCTV cameras can suffer from adversarial attacks; to this aim, the authors launch black-box attacks using a novel multi-stage network architecture that stacks the features extracted at different levels for the adversarial perturbations.
• [91] (Food safety; white-box; CNNs; UCR [108]) Analyzes the vulnerabilities of deep learning algorithms on time-series data by adding noise to the input samples to decrease a deep learning model's confidence in food safety and quality assurance applications.

In this section, we introduce the reader to some other common strategies for launching attacks on AI, particularly in cloud and edge deployments, such as data poisoning, evasion attacks, exploratory attacks, model extraction, backdooring, Trojan, model-reuse, cyber kill chain-based attacks, membership inference, and model inversion attacks, which are very common in smart city applications.

In a data poisoning attack, as illustrated in Figure 5, attackers intentionally share manipulated data, e.g., incorrectly labeled samples, which the model consumes in any re-training process, with the goal of degrading the AI model's performance. In this case, attackers somehow have control over the training data or can contribute to it [109]. In smart cities, crowdsensing is an integral data source for smart services and is involved in several areas such as transportation, pollution monitoring, and energy management [110]. However, it is highly susceptible to data poisoning attacks [111, 112], and in some settings poisoned contributions gain high degrees of apparent reliability, so that they are hard to identify [113, 114]. In a very sensitive field of study, an experiment on around 17,000 records of healthy and unhealthy (disease-infected) people showed that a poisoning attack on the training data was able to drop the classifier's accuracy by about 28% of its original value by poisoning 30% of the data [100]. This could have severe consequences, for example, for dosage or treatment management.

Compared to data poisoning, evasion attacks take place after model training, as shown in Figure 6. The attackers may have no idea about the data manipulation required to attack the model. A practical evaluation in [115] shows that commonly used classification algorithms, such as SVM and NN, can be easily evaded even with limited knowledge about the system and an artificial dataset. To highlight the risk of using deep learning in the context of malware detection, [116] proposed a novel attack that changes a few bytes in the file header, without injecting any other data, and forces MalConv, a convolutional neural network for malware detection, to misclassify benign and fabricated inputs.

In exploratory attacks, attackers keep querying the AI models in a trial-and-error fashion so that they can learn how to design their inputs to pass the model. This creates overhead on the systems, and a solution for identifying suspicious queries may preserve the availability of the systems and reduce power consumption, especially if the target models run on devices with a limited energy supply.
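As a concrete illustration of the label-flipping form of data poisoning described above, the following minimal sketch (a hypothetical, self-contained scikit-learn example, not one of the surveyed systems) shows how flipping a growing fraction of training labels degrades a classifier's test accuracy.

```python
# Illustrative label-flipping data-poisoning sketch using scikit-learn
# (hypothetical example; a fraction `rate` of training labels is flipped).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def flip_labels(labels, rate, rng):
    """Flip the labels of a random fraction of the training set (binary task)."""
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

rng = np.random.default_rng(0)
for rate in (0.0, 0.1, 0.3):
    clf = LogisticRegression(max_iter=1000).fit(X_tr, flip_labels(y_tr, rate, rng))
    print(f"poison rate {rate:.0%}: test accuracy {clf.score(X_te, y_te):.3f}")
```

The exact accuracy drop depends on the model and data; the point of the sketch is only that the attacker never touches the deployed model, only the data it is (re-)trained on.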
Trojan attacks on AI algorithms are also very common in cloud and edge deployments of AI [117, 118]. In a Trojan attack, the attackers modify the weights of a model in such a way that its structure remains the same. A Trojan-attacked AI model works fine on normal samples; however, it predicts the Trojan target label for an input sample when a Trojan trigger, an infected sample that activates the attack, is present. Figure 7 illustrates how a Trojan attack behaves when it is triggered on a face recognition system; in this case, the victim classifier always predicts the Trojan target label when test samples with the Trojan trigger are used.

Another strategy is model extraction, also called model stealing, as illustrated in Figure 8. As its name implies, the ultimate objective of the adversary is to clone or reconstruct the target model, to re-engineer a black-box model, or to compromise the nature and properties of the training data [119]. This strategy of attacking AI models dates back to 2005, when the authors of [120] developed an effective algorithm for reverse engineering a spam filter model. Compared to the above two attack strategies, model extraction requires knowledge neither of the training data nor of the model properties and architecture; all that the adversaries have is access to the model and its answers to the submitted queries [121, 75]. Machine learning as a service (MLaaS) could be the main target of this attack, since a few dollars may be enough to create a free copy of a paid
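The extraction workflow just described (query the black-box victim, collect its labels, train a surrogate on the query/label pairs) can be sketched as follows; the victim, the surrogate, and the query distribution are hypothetical choices for illustration and are not taken from the cited works.

```python
# Minimal sketch of a model-extraction (stealing) attack: query the victim,
# collect its labels, and train a surrogate on the query/label pairs.
# (Illustrative only; victim and surrogate models are hypothetical choices.)
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=10, random_state=1)
victim = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=1).fit(X, y)

# The attacker only has black-box query access: submit inputs, observe labels.
queries = np.random.default_rng(1).normal(size=(2000, 10))
stolen_labels = victim.predict(queries)

# Train a surrogate ("stolen") model on the collected query/label pairs.
surrogate = DecisionTreeClassifier(random_state=1).fit(queries, stolen_labels)
agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"surrogate agrees with victim on {agreement:.1%} of the original inputs")
```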
model hosted over the cloud [122]. Creating a private copy of the victim model is not only a copyright issue but also exposes the victim model to other attacks of different strategies, since the attackers gain new information for crafting adversarial examples [75, 123].

In membership inference attacks, the attackers do not necessarily need knowledge of the parameters of an AI model; rather, knowledge of the type and architecture of the model and/or of the service used for developing the model is used to launch an attack. Such attacks are very common due to the growing interest in using AI as a service, which allows the attackers to develop and launch membership inference attacks using the same services. For instance, Shokri et al. [124] proposed a membership inference attack technique capable of launching attacks on AI models developed using Amazon and Google services. The severity and risks associated with membership inference attacks largely depend on the application and the type of data used for training an AI model. In certain applications involving complex image and speech classification tasks, the effort involved in generating training data reduces the severity of the attacks. On the other hand, in some human-centric applications, such as education, finance, and healthcare applications with tabular data, which can be easily generated, membership inference attacks may have severe implications.

Figure 8: Demonstration of the model extraction (stealing) attack in machine learning. In such an attack, the adversary queries the classifier with different inputs and collects the labels; the combination of the returned labels and the input data is used to build a training dataset to train another model.

Table 2 provides a summary of some of the works on security attacks on AI in cloud and edge deployments for smart city applications:

• A mechanism for guarding against extraction attacks, in which a framework is first attacked with extraction attacks by measuring the learning rate of the model; a cloud-based extraction monitoring mechanism is then developed to quantify the extraction status of models by analyzing the query and corresponding response streams.
• (CNNs; multiple datasets including AR Face [135], BU3DFE [136], and JAFFE [137]) Analyzes whether a black-box CNN model can be stolen or not; to this aim, a CNN model is queried with unlabeled samples to extract the model's information by analyzing its responses to the unlabeled samples, which are then used to create a fake dataset.
• [138] (Digit classification; CNNs; MNIST [139]) Analyzes the robustness and reliability of one of the commonly used types of evasion attack defense methods, namely watermarking schemes for CNNs, where the authors claim that attackers can evade the verification of original ownership under such schemes.

The concept of safety in AI is not much different from its definition in other engineering sectors; it mainly covers the minimization of risk and uncertainty of damage [140]. An important issue is that the safety evaluation of AI-based systems may need effort beyond the testing dataset, since the real environment can involve a larger degree of uncertainty and risk. Models trained and tested on large datasets could be more robust in production environments [141]. This simply means that the availability of useful and representative datasets is a concern not only for benefiting from AI algorithms but also for building safer and more robust solutions. In subsequent sections, we discuss the challenge of dataset availability. Unsafe machine learning-based solutions can impact lives directly, such as the systems in autonomous vehicles that have killed people, or indirectly, for example by raising racism issues. The serious issue with the Tesla auto-driver incident is that a human driver was killed because of a mistake made after millions of miles of testing the auto-driver system. The Google Photos app also returned racist results after training on thousands of images. This simply means that, even with the availability of massive datasets, AI-based systems still need serious and solid research to mitigate the effects of mistakes and to develop counter-strategies against illegal usage, adversarial attacks for example.

In a typical AI framework, a set of features is fed to an AI algorithm, which learns from the data by identifying hidden patterns and, in return, produces some predictions.
In such frameworks, which are also termed black boxes, the predictions come without any justification or explanation, and the users have no idea of the reasons behind the outcome. In an explainable AI framework, on the other hand, besides predictions/decisions, an AI model also details the causes of the prediction/decision. To this aim, additional functions/interfaces are used to interpret the causes behind an underlying decision [56]. In the literature, interpretability and explainability are generally used interchangeably; however, the terms are related but slightly different. Interpretability refers to the extent to which a cause and effect can be observed within a system, while explainability refers to the extent to which the mechanism of an AI algorithm can be explained/described to a human. There are several factors motivating the need for explanation/justification of the potential causes of an AI model's decisions in general, and in smart city applications in particular, where justification and explanation of an AI model's outcome are very critical for developing users' trust in AI models used to take critical decisions about their lives, such as whether we get a job or not (AI-based recruitment) or whether an individual is guilty of/involved in a crime or not (i.e., predictive policing) [24]. According to Guidotti et al. [142], the justification of the causes of an AI model's predictions can be obtained in two ways: either by developing techniques/methods to describe the potential reasons behind the model's decision, which they term "black-box explanation," or by directly designing and developing transparent AI algorithms. Some key advantages of explainable AI are:

• Explainability of AI models helps in building users' trust in the technology, which will ultimately speed up its adoption by industry.
• Explainability is a must-have characteristic for AI models in some sensitive smart city applications, such as healthcare and banking.
• Explainable AI models are more impactful compared to traditional AI models in decision making.
• Explainability helps in detecting algorithmic biases.

Explainable AI methods can be categorized, at different levels, using different criteria [143, 144, 56]. Figure 9 provides a taxonomy of explainable AI. There are two main categories of explainable AI, namely (i) transparent models and (ii) post-hoc explainability. The former represents methods that restrict the complexity of AI models for the sake of explainability, while the latter represents methods used for analyzing a model's behavior after training.
It is to be noted that there is a trade-off between performance (e.g., accuracy) and explanation. In the literature, lower accuracy has been observed for transparent models, such as fuzzy rule-based predictors, compared to the so-called black-box methods, such as CNNs [145]. However, explanation and interpretability are preferred properties in critical applications, such as healthcare, smart grids, and predictive policing. Thus, there is a particular focus on developing post-hoc explainable methods to keep a better balance between accuracy and transparency. In the next subsections, we provide an analysis of how important explainable AI models are in smart city applications, how explainable AI meets adversarial attacks, and the ethical aspects of explainable AI.

As described earlier, explainability brings several advantages to AI [146, 55, 147]. In smart city applications, its impact is more evident and crucial, especially given the direct impact of the technology on society and its people. Explainable AI is particularly important in some key applications of smart cities, such as healthcare, transportation, banking, and other financial services, where key decisions about humans (such as who should get a particular service, which medicine should be used, or who should get a job) are made [24]. Such decisions in smart city applications require interpretation of the data (i.e., features) to mitigate impurities, if any, for better predictions/decisions [148]. Healthcare is one of the critical smart city applications demanding explainable AI models instead of traditional black-box AI. AI has no doubt proven very effective in healthcare, facilitating health professionals in diagnosis and treatment; however, traditional black-box AI just makes decisions without interpretation. Several factors motivate the need for explainable AI in healthcare, such as the far-reaching consequences and the cost associated with a mistake in prediction [149]. Moreover, understanding the causes of AI predictions/decisions is very critical for building doctors' trust in AI-based diagnosis. Doctors would feel more confident in making decisions given an AI-based diagnosis if the decision of the AI model were understandable/interpretable by humans. Explainable AI models could also be refined with the help of domain experts' knowledge. Moreover, in healthcare, predictive performance alone is not enough to obtain clinical insights for decisions [150]. In [151, 142], seven pillars of explainable AI in healthcare are provided, showing its relationship with transparency, domain sense, consistency, parsimony, generalizability, trust/performance, and fidelity.

Transportation and autonomous cars is another critical smart city application where the consequences and the cost associated with a mistake by an AI model are very high. For instance, an error in differentiating between red and green traffic lights or an error in pedestrian detection may lead to heavy losses in terms of human lives and damage to public property and vehicles. This has already happened when a self-driving Uber killed a woman in Arizona: the object (i.e., the pedestrian) was detected but treated in the same way as a plastic bag or tumbleweed carried on the wind, due to a prediction/classification error [55, 152]. We believe transportation in general, and autonomous cars in particular, will benefit from explainable AI. Some interesting works, such as those proposed in [153, 154, 155, 156], have already been reported in the domain.
AI models also need to be interpretable and explainable to fully explore their potential in the education sector. Despite their outstanding capabilities, it is still risky to blindly follow AI models' predictions when making critical decisions in such a high-stakes domain. How will people allow a machine (i.e., an AI tool) to determine their child's education? In order to trust AI in education, AI models need to ensure that stakeholders (i.e., parents, teachers, and administrators) understand the decision-making processes [157, 158]. There are already some efforts in this direction [159, 160]. The literature also reports some efforts towards explainable AI in defense; the concept of explainable AI was first introduced in a defense project by the Defense Advanced Research Projects Agency (DARPA) [161]. Explainable AI is also a need for modern entertainment and businesses [162]. In order to trust AI predictions, the prediction and decision-making processes of the models should be understandable to all the stakeholders in the business, such as investors, customers, and CEOs. Table 3 summarises some key explainable AI publications in different smart city applications:

• (Healthcare; post-hoc; self-collected data) Provides a real-time prediction of the risk of hypoxemia during surgery; the model is trained on time-series data from a large collection of medical records, and existing model-agnostic prediction explanation methods [164, 165] are employed for the explanation of the model's predictions.
• [166] (Healthcare; post-hoc; multiple online sources [166]) Provides a distributed deep learning-based framework for COVID-19 diagnosis in a distributed environment, ensuring low latency and high bandwidth using edge computing; feature visualization methods are used for the explanation of the model's outcome.
• (Fake news detection; Buzzface [178]) Relies on Extreme Gradient Boosting (XGB) machines [179] for fake news detection; explanations of the outcomes are provided using feature relevance, and it is observed that some features favour detecting certain types of fake news.

The literature also shows a connection between adversarial attacks and explainability [25, 180, 26]. It is believed that explainable AI models are robust against adversarial attacks and can help in the identification of adversarial inputs/samples by generating an anomalous explanation for the perturbed samples [26]. To verify this hypothesis, several efforts have been made in the literature to guard AI models against adversarial attacks via the emerging concept of explainability. For instance, Fidel et al. [26] employed an explainable AI framework, namely SHapley Additive exPlanations (SHAP) [163], which evaluates the relevance/importance of a feature by assigning it an importance value for a particular prediction, to generate 'XAI signatures' from the internal layers of a deep neural network (DNN) classifier in order to differentiate between normal and adversarial inputs. Dhaliwal et al. [181] proposed a gradient-similarity-based approach for differentiating between normal and adversarial inputs. According to them, gradient similarity shows the influence of training data on test samples and behaves differently for genuine and adversarial input samples, enabling the detection of various adversarial attacks with high accuracy. Some other interesting works rely on explainable AI techniques to guard against adversarial attacks [182, 183, 180, 25]. However, the explanations/information regarding the working mechanism of AI algorithms revealed by explainability methods could also be utilized to generate more effective adversarial attacks on the algorithms [56]. Adversarial AI also provides an opportunity to increase the interpretability (i.e., humans' understanding) of AI models [184, 185]. For instance, in [184] an adversarial AI approach is used to identify the relevance of features with respect to the predictions made by an AI model. The adversarial technique aims to find the magnitude of changes required in the features of the input samples to correctly classify a given set of misclassified samples, which is then used as an explanation of the misclassification. In [185], on the other hand, adversarial AI techniques are used for explaining the predictions of a DNN by identifying the relevance/importance of features for the predictions based on the behavior of an adversarial attack on the DNN.
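To illustrate the kind of per-prediction feature attribution that SHAP produces (and on top of which the 'XAI signature' approaches above are built), the following is a minimal sketch using the SHAP library with a simple tree-based regressor; the dataset and model are hypothetical choices for illustration, not the configurations used in the cited works.

```python
# Minimal sketch of SHAP-based per-prediction feature attribution
# (illustrative only; not the exact setup of the works cited above).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(data.data, data.target)

# TreeExplainer assigns each input feature an importance value per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:5])   # shape: (5, n_features)

# Rank the features by their contribution to the first prediction.
ranked = sorted(zip(data.feature_names, shap_values[0]),
                key=lambda kv: abs(kv[1]), reverse=True)
for name, value in ranked[:5]:
    print(f"{name}: {value:+.3f}")
```

Anomaly detectors for adversarial inputs then compare such attribution patterns (rather than the raw inputs) between benign and perturbed samples.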
Several challenges are associated with collecting, storing, and sharing data and with ensuring and maintaining its quality. For instance, a smart city's infrastructure requires physical resources for storing and processing the data. In addition, these resources consume a significant amount of electricity and space, and raise environmental issues due to their carbon emissions. Smart city applications may also make use of cloud and edge deployments to overcome the lack of physical infrastructure for data storage and computing [61, 186, 187, 188]. Thanks to the recent advancement and popularity of cloud storage, the technology meets the data storage and processing requirements of a diversified set of smart city applications. However, cloud and edge deployments are also vulnerable to several adversarial, security, privacy, and ethical challenges [61]. For instance, using third-party services may result in no control over the data; thus, the data's privacy settings are beyond the control of the enterprise/authorities. Such deployments may also lead to potential data leakage by the service provider [189, 190]. Moreover, cloud and edge deployments could also be subject to several types of attacks, such as adversarial attacks, backdoor attacks, cyber kill chain-based attacks, data manipulation attacks, and Trojan attacks [61]. There are also several challenges associated with the heterogeneous nature of the data, collected through several IoT devices from different vendors, in smart city research [191]. Some of the key challenges are:

• Quality of the data: The quality of the data in smart city applications largely depends on the accuracy of the IoT devices/sensors used for collecting the data. Therefore, it should be ensured that the data infrastructure is accurate and error-free [191]. In addition, some external factors, such as temperature and weather, may also affect accurate data collection.
• Diversity/characteristics of the data: In typical smart city applications, data is generally collected through several devices, making it hard to understand the characteristics of the data for removing outliers [192]. Moreover, the data is collected continuously, which may result in scalability issues in the infrastructure.
• Constrained Environment: In smart city applications, the devices, including data collection sensors and data transfer networks, generally have limited resources (i.e., storage, bandwidth, and processing power) [191, 193].
In order to collect and transfer a large amount of data, such systems require a reliable data collection and transmission infrastructure. In the next subsection, we focus on some major challenges and concerns in collecting, developing, and sharing smart city data/datasets.

The performance of AI algorithms is also constrained by the quality of the data; thus, it is important to discuss the major challenges and issues related to dataset collection and sharing. These challenges and concerns arise as a result of data collection, analysis, sharing, and the use of the data in sensitive applications [194]. The main challenges and concerns in dataset collection and sharing include informed consent, in the form of an understanding of how and for what purpose the data will be used, transparency, interpretation, and trust [195, 196]. Though informed consent is one of the key concerns of data collection, considering the fact that future applications are sometimes unspecified and unknown, it may be inconvenient to give prior commitments regarding potential future uses of the data. Moreover, data could be merged with other existing sources, making informed consent even more challenging [197]. In several cases, it is not even possible to ensure the informed consent of all people subject to data collection. For instance, delivery by drone is becoming common these days, where those who opt for free delivery consent to unlimited data collection from their home; in areas where drone delivery is permitted, a whole neighborhood could be subject to such data collection activities [48].

For data collection or annotation, crowdsourcing studies are usually conducted, in which a large population is involved in collecting or annotating training data for AI models in a particular application. During the process, several factors need to be considered. For instance, it is really important to inform the participants about your organization and the purpose for which the data is collected or annotated. The information of the participants should be kept confidential, and they should be allowed to withdraw from the data collection process at any time. More importantly, one should remain neutral and unbiased in conducting a crowdsourcing study, as personal preconceptions or opinions may affect the quality of the data. In the modern world, data is also collected as a by-product of a product/service. For instance, social media platforms can be used to collect users' data for different services. In such cases, several questions arise [48]: Are the users aware of the data collection process and its purpose? Do they have a right to, and access to, the data? Is the company sharing or selling the users' data? Is there any policy for maintaining informed consent if the company is sold to another one? How can the companies ensure the privacy of the users if their data is leaked to bad actors? Data sharing is also subject to several questions, such as the transparency of the data, its interpretation, and how trustworthy the data is in a particular application. According to [198], "data sharing is not simply the sharing of data, it's also the sharing of interpretation." Moreover, the re-identification of individuals or groups, or linking data back to them through data mining and analysis, are also key ethical concerns in data sharing. For instance, the possibility of identification or of linking data to an individual or a particular group may result in gender, race, and religious discrimination [194, 199].
Recently, growing concern has been expressed about the transparency and interpretation of the data used for training AI models [200, 201]. For instance, Bauchner et al. [200] emphasize the importance of data sharing in healthcare, and the ethical concerns regarding data collection and sharing in the domain. Bertino et al. [202] also analyze the importance of transparency and interpretation, which they term providing a 360° view of data, in sensitive applications. The authors link the transparency and interpretation of data with the privacy, trust, compliance, and ethics of data management systems. Figure 10 shows some of the major challenges in data management (collection and sharing) highlighted in the literature, which are summarized as follows:
• Privacy: The biggest challenge in human-centric smart city applications is ensuring the privacy of citizens, which is their fundamental right. An improved data privacy mechanism not only helps in developing citizens' trust in different smart city services and businesses but also ensures individuals' safety, as the leakage of sensitive information may endanger individuals' lives. For instance, though some off-the-shelf encryption, authentication, and anonymity techniques could reduce the chances, intelligent malicious attackers may misuse residents' sensitive information collected from smart home applications and surveillance systems to harm individuals, using side-channel and cold boot attacks [203, 204, 205]. Thus, for the effectiveness of smart city applications, the concerned authorities should ensure that individuals' information is not misused by the authorities or any individual for any sort of personal or financial gain [192]. In recent years, there has been growing concern over citizens' privacy, and several international bodies, such as the European Union (EU), have introduced new privacy regulations. One recent example of the community's concerns over privacy and racial bias is the demand that giant companies, such as Amazon, Microsoft, and IBM, abandon face recognition technology for law enforcement [20]. To address the privacy-related concerns, various privacy-friendly techniques and algorithms have been developed using methods where AI systems' "sight" is "darkened" via cryptography [206]. On the other hand, some believe that the traditional "narrow" understanding of privacy as a moral concept will eventually cease to exist and that there is a need to revise the concept itself in the post-AI age [207]. Although it may entail some challenges, the newly introduced concept of "data philanthropy" can also be of help in this regard [208]. The basic idea that we propose here is to extend the scope of this concept to include certain cases of individuals who would voluntarily "donate" their data for the advancement of science or better-functioning smart cities. Within the discourse of various religious and moral traditions, there is the concept of "charity," where people voluntarily donate something they own and cherish for the benefit of others. Considering the great value that data can have in our modern world, one can argue that data would also fall within the category of valuable objects that can be donated for charitable purposes, under conditions that would vary from case to case. This will be especially applicable within communities where familial or societal interests usually occupy a higher position than individual interests.
Moreover, there are also different technical solutions, such as differential privacy, that help ensure individuals' privacy by withholding an individual's information, or information that could lead to the identification of an individual, in a dataset [209] (a minimal sketch of this idea is given after the Data Ownership item below).
• Informed Consent: Informed consent, the process of informing participants and obtaining their consent for data collection, is a key element of data ethics. In a data collection process, it is important to make sure that the users subject to data collection know about the data collection process, its goal, and the way and purpose of its future use [48]. Informed consent should fulfill four conditions: (i) the participants have information/knowledge about the data collection process; (ii) they understand the information and are fully aware of the goal, future use, and the way the data is collected; (iii) the participants are volunteers and are not manipulated or persuaded in any way; and (iv) the participants have the capability to understand the risks involved with the data and are able to decide whether to participate or not [210].
• Open Data: For transparency and for developing trust, the data and the insights obtained from it should be openly accessible. However, there are also several challenges associated with open data. For instance, it is important to determine which information should be made open, who should have access to the data, and for what purpose the data should be allowed to be made open or used, in order to ensure individuals' privacy [211].
• Data Ownership: Data ownership is another key aspect of smart cities that has raised serious concerns recently [212]. In smart cities, many services are generally deployed by private companies whose ultimate goal and priorities, unlike those of public authorities, are to make a profit, posing serious threats of citizens' data being monetized [213]. Under these circumstances, key questions include: who will have access to, and control over, these data? Will the upper hand be given to private companies, where market logic will dominate, or will the voice of the ordinary citizen count, so that more weight is given to public control? The answer could have been straightforward if the services were initiated and sponsored by public authorities; however, investment from the private sector makes it very complicated. The various choices to be made in this regard will greatly determine the level of (im)morality in big data management [214]. According to Ben Rossi [215], unfortunately, in smart cities public authorities provide private companies with opportunities to monetize smart city data by allowing them to deploy different services, and these companies end up with more information about citizens than the public authorities have. There are also some debates on data nationalization. For instance, Ben Rossi [215] provides hints on how public authorities can regain hold of the data. One of the potential solutions is to encourage joint ventures of the public and private sectors in which public authorities retain control over the data. Some efforts have already been noticed in this direction; for instance, the Chinese Government has initiated several joint smart city projects with big private companies. There are also debates, and some legislative efforts, to give citizens/users ownership of their data [216]. Moreover, there are also some solutions allowing users to retain ownership of their data while obtaining different services. For instance, Bozzelli et al. [217] proposed a user data protection service that allows users to analyze and evaluate their data protection requirements against the terms and conditions of a service, which are normally overlooked by users, before accepting terms and conditions that might compromise their personal data.
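As a concrete illustration of the differential-privacy idea mentioned under the Privacy item above, the following minimal sketch releases a noisy count of households matching some sensitive criterion using the Laplace mechanism; the query, sensitivity, and privacy budget (epsilon) are illustrative assumptions rather than values taken from the surveyed works.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-household flags (1 = matches a sensitive criterion).
households = np.array([0, 1, 0, 0, 1, 1, 0, 1, 0, 0])

def laplace_count(data, epsilon):
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    A counting query changes by at most 1 when one record is added or removed,
    so its sensitivity is 1 and the noise scale is 1 / epsilon.
    """
    true_count = int(data.sum())
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

print("true count:", int(households.sum()))
print("private count (epsilon = 0.5):", round(laplace_count(households, 0.5), 2))

Smaller values of epsilon add more noise and give stronger privacy at the cost of accuracy; choosing epsilon for a given smart city service is itself a policy decision.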
• Interpretation: Interpretation is another key challenge for data shared and used for training AI models. For better results, the data used for training an AI model should be interpretable, as also demanded by explainable AI. For instance, the big data predictive policing solution PredPol, used by police in the USA, collects and analyzes the usefulness of the data before training and making predictions about crimes in an underlying area. A very significant reduction in crime has been observed, mainly because of the useful and interpretable data [218]. However, in smart city applications data is collected through different IoT sensors from various vendors, and managing, interpreting, and picking relevant and useful data from such a heterogeneous and unstructured collection is a very challenging task.
• Data Biases: Datasets generally contain different types of hidden biases, introduced either by the collector or the respondent during the collection phase, which are generally hard to undo and have a direct impact on the analysis [10]. These biases are very risky in human-centric applications and need to be eliminated at the outset. In dataset collection using surveys/questionnaires, two types of biases are generally incorporated, namely (i) response bias and (ii) non-response bias. The former represents intentional bias from respondents giving wrong answers, while the latter is encountered when no response at all is received from a respondent. One possible solution for avoiding bias in such processes is to use closed-ended surveys or restrict the respondents to some pre-defined options [219]. However, in smart city applications data is collected from different services using different IoT sensors, and the problem of data bias goes beyond these typical data collection issues. Therefore, to avoid bias in such applications, a more proactive response from citizens and authorities is needed to help eliminate unintended bias in smart city solutions. For instance, the authorities need to invest more in research before deploying a technology in an application. Moreover, better communication and messaging strategies need to be adopted to inform and educate citizens about the goal, process, importance, and risks involved with the data collected around their city [220].

Table 4: Summary of some key works on challenges, risks, and issues associated with data collection and sharing in smart city applications, in terms of application and issues covered.
Ref.   Application    Challenges/Issues Discussed
[225]  Healthcare     Security and privacy
[226]  Healthcare     Data interpretation and fusion
[227]  Healthcare     Security and privacy
[228]  Healthcare     Informed consent
[229]  Healthcare     Informed consent and confidentiality
[230]  Surveillance   Privacy
[231]  Surveillance   Security and privacy
[232]  Surveillance   Privacy
[233]  Recruitment    Privacy and informed consent
[234]  Generic        Security, privacy, bias, and informed consent
[235]  Generic        Informed consent
[236]  Recruitment    Bias
[237]  Generic        Bias
[238]  Generic        Bias
[239]  Generic        Open data, interpretation, and annotation
• Data Auditing: Data auditing involves assessing whether the available data is suitable for a specific application and what risks are associated with poor data. In smart cities, data is generally collected through several IoT sensors from various vendors, which results in an unstructured collection of data, with the associated challenges detailed earlier. Under such circumstances, data auditing is essential to analyze and assess the quality of the collected data, as the performance of AI algorithms in smart city applications is also constrained by data quality [221]. In the literature, several interesting data auditing techniques have been proposed [221, 222, 223, 224]. For instance, Yu et al. [221] propose a decentralized big data auditing scheme for smart city applications employing blockchain technology. One of the key advantages of the method is the elimination of third-party auditors, which are prone to several security threats. Table 4 lists some key papers on the data-associated challenges in different smart city applications. In the literature, the majority of the efforts made toward explainable AI focus on the design of algorithms to interpret AI predictions/decisions. However, other aspects also contribute to the interpretation of AI decisions, such as the datasets and post-modeling analysis [240]. For instance, a dataset used for training an AI model may contain features that are incomprehensible to the stakeholders, which may result in a lack of trust in the AI predictions. Therefore, to achieve better interpretation/explanation of AI models' decisions, explainability should be considered throughout the process, starting from the data/features and concluding with post-modeling explainability [240, 241]. In this section of the paper, we focus on the explainability aspects of the dataset used for training and validation of AI models. The literature on the explainability of datasets can be divided into four main categories, namely (i) exploratory data analysis of the dataset, (ii) description and standardization of the dataset, (iii) explainable features, and (iv) dataset summarization methods. In the next subsections, we provide the details of these methods. Exploratory analysis of datasets aims to provide a summary of key characteristics of the dataset used for training an AI model, such as dimensionality, mean/average, standard deviation, and missing features. Different data visualization tools are available to visualize the properties of a dataset and extract informative insights that could help in understanding its impact on the decisions of the AI model. For instance, Google's Facets [242], an open-source data visualization library/tool, allows us to visualize and better understand data. Exploratory analysis also helps in understanding the limitations of a dataset. For instance, in the case of an imbalanced dataset, such analysis could provide an early clue to the poor performance of a classifier, which can then be mitigated using different sampling techniques.
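The following minimal sketch, which is ours rather than part of any surveyed tool, shows the kind of early clue such exploratory analysis can give: it summarizes the class distribution of a hypothetical incident-report dataset and flags severe imbalance before any model is trained.

from collections import Counter

# Hypothetical labels from a smart city incident-report dataset.
labels = ["no_incident"] * 950 + ["minor"] * 45 + ["severe"] * 5

counts = Counter(labels)
total = sum(counts.values())

for label, count in counts.most_common():
    print(f"{label:12s} {count:5d}  ({100 * count / total:.1f}%)")

# A crude imbalance flag: the majority class dwarfs the rarest class.
imbalance_ratio = max(counts.values()) / min(counts.values())
if imbalance_ratio > 10:
    print(f"warning: imbalance ratio {imbalance_ratio:.0f}:1 -- "
          "consider resampling or class weighting before training")

Without such a check, a classifier that always predicts "no_incident" would look 95% accurate while missing every severe case.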
AI datasets are usually released without proper documentation and description. In order to fully understand a dataset, a proper description should be provided; in this regard, standardized documentation of the dataset can be really helpful in mitigating the communication gap between the provider and the user of a dataset. To this aim, several schemes, such as datasheets, data statements, and nutrition labels, have been proposed [243]. All of these schemes aim to associate with a dataset a detailed and standardized document describing its creation/collection process, composition, and legal/ethical considerations. Nutrition labeling, a diagnostic framework for datasets, provides a comprehensive overview of a dataset's "ingredients" to help the developers of AI models that will be trained on the dataset [244]. Another important aspect of explainable AI is explainable feature engineering, which aims to identify the features influencing an AI model's decision. Moreover, as one of the key characteristics of a dataset, the features should also be explainable and make sense to the users and developers. Besides improving an AI model's performance, explainable features also help in the model's explainability. Explainable feature engineering can be performed in two different ways, namely (i) domain-specific feature engineering and (ii) model-based feature engineering [241]. The former utilizes a domain expert's knowledge in combination with insights extracted from exploratory data analysis, while the latter makes use of various mathematical models to unlock the underlying structure of a dataset [245, 246]. For instance, Shi et al. [245] used domain knowledge and exploratory data analysis for relevant feature selection for cloud detection in satellite imagery. Dataset summarization is a technique for obtaining a representative subset of a dataset for case-based reasoning. Case-based reasoning is an explainable modeling approach that predicts the label of an underlying sample based on its similarity with training samples, which are both presented to the users for explanation. One of the main limitations of case-based reasoning is the need to keep track of the complete training set for comparison purposes. Dataset summarization is one possible solution: rather than retaining the complete training set, it selects a subset that provides a condensed view of it.
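As a minimal, illustrative sketch of dataset summarization (our own example, not a method from the cited works), the following selects a handful of prototype samples with k-means and explains a new sample by its nearest prototype, in the spirit of case-based reasoning; the data and cluster count are hypothetical.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Hypothetical training set: 300 samples with two features (e.g., normalized
# traffic volume and average speed from road sensors).
X_train = rng.normal(size=(300, 2))

# Summarize the training set with 5 prototypes (cluster centres).
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_train)
prototypes = kmeans.cluster_centers_

# Explain a new sample by pointing to its closest prototype.
x_new = np.array([0.3, -1.2])
distances = np.linalg.norm(prototypes - x_new, axis=1)
nearest = int(np.argmin(distances))
print(f"sample is most similar to prototype {nearest}: {prototypes[nearest]}")

Presenting the matched prototype (and, in practice, the original samples it stands for) gives users a concrete reference point for the model's decision instead of the full training set.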
AI code of ethics is another aspect of smart city applications that has recently received a lot of attention from the community. An AI code of ethics is a formal document/statement from an organization that defines the scope and role of AI in human-focused applications. The three-volume Handbook of Artificial Intelligence published in 1981-1982 [247] hardly paid any attention to ethics [248]. After the lapse of about three decades, the situation has radically changed. The exponential progress in AI systems and their applications in various aspects of life has produced great benefits but has concurrently continued to trigger complex moral questions and challenges. In response, an interdisciplinary AI ethics discourse is emerging. This owes much to scholarly input from cognate disciplines, including data ethics, information ethics, robot ethics, internet ethics, machine ethics, and military ethics. These new developments are reflected in an increasing number of publications that assume more than one form. To provide a systematic overview, the relevant literature will be divided into two main categories, viz., (a) academic publications and (b) policies and guidelines. The ethical and moral discourse on AI systems is usually divided into two main branches. The larger and more mature branch, sometimes named just "AI ethics" or "robot ethics," is premised on a human-centered perspective which focuses on the morality of the humans who deal with AI systems, including developers, manufacturers, operators, consumers, etc. The smaller and younger branch, usually called "machine ethics," is a machine-centered discourse which mainly examines how AI systems, intelligent machines, and robots can themselves behave ethically [249, 250, 251]. The two branches (i.e., human-centered and machine-centered) overlap, as shown in Figure 11. We review the moral questions addressed within the first and the second branch separately in Sections 5.3.2 and 5.3.3. The interdisciplinary character of AI ethics is manifested in the considerably diverse backgrounds and research interests of the academics who have contributed to this emerging field. Due to their close connections with AI ethics, many of the contributing authors came from the cognate fields of (moral) philosophy, engineering, and computer science. Additionally, many important authors came from other fields as well, including nanotechnology, psychology, social sciences, applied ethics, bioethics, and legal studies, along with some researchers who simply identified themselves as AI researchers. It is to be noted that some of the contributing authors already have an interdisciplinary background. This diverse group of researchers contributed to AI ethics in various ways, e.g., writing book chapters, journal articles, and book-length studies, editing volumes, and editing journal special issues. Below, we give representative examples of each type of these publications. Besides individual book chapters [252, 253], important book-length studies have provided rigorous and critical insights on the moral questions of AI systems, AI, and related fields. Examples include Moral machines: Teaching robots right from wrong, published in 2008 [254], Machine ethics, published in 2011 [255], The Machine question: Critical perspectives on AI, robots, and ethics, published in 2012 [256], Robot ethics: The Ethical and social implications of robotics, published in 2012 [257], Superintelligence: Paths, dangers, strategies, published in 2014, Programming machine ethics, published in 2016 [258], and Robot ethics 2.0: From autonomous cars to artificial intelligence, published in 2017 [259]. In addition, many individual journal articles contributed to AI ethics [260, 261, 262, 263, 264, 251, 265, 266, 267], and a number of academic journals dedicated special issues to the topic. For instance, the Journal of Experimental & Theoretical Artificial Intelligence published "Philosophical foundations of artificial intelligence" in 2000 [268], IEEE Intelligent Systems published "Machine Ethics" in 2006 [269], AI & Society: Journal of Knowledge, Culture and Communication published "Ethics and artificial agents" in 2008 [270], Minds and Machines: Journal for Artificial Intelligence, Philosophy and Cognitive Science published "Ethics and artificial intelligence" in 2017, Ethics and Information Technology published "Ethics in artificial intelligence" in 2018 [271], the Proceedings of the IEEE published "Machine ethics: The Design and governance of ethical AI and autonomous systems" in 2019 [249], and The American Journal of Bioethics published "Planning for the known unknown: AI for human healthcare systems" [272].
An important milestone towards the maturation and canonization of AI ethics as a scholarly discipline was the publication of some authoritative reference works. The Cambridge handbook of artificial intelligence, published in 2014, included a distinct chapter on "the ethics of artificial intelligence" [273]. Recently, dedicated handbooks started to appear, including Handbuch Maschinenethik (handbook of machine ethics), published in 2019 [274], and The Oxford Handbook of Ethics of AI, published in 2020 [275], where the last chapter was dedicated to "Smart City Ethics" [214]. These publications addressed a wide range of moral issues that are relevant to the context of smart cities, even if not explicitly stated. Thus, no serious moral discourse on smart cities can be developed without critical engagement with such publications. Additionally, an increasing number of publications started to highlight the AI moral questions within the specific context of smart cities, especially themes like privacy and information transparency. Besides the aforementioned chapter in The Oxford Handbook of Ethics of AI and various journal articles and book chapters [207, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285], one also observes a growing ethics genre with a focus on smart cities. Reference works dedicated to the theme of smart cities also included chapters relevant to ethics, including The Routledge Companion to Smart Cities [286] and the Handbook of Smart Cities, which dedicated a distinct part to "Ethical Challenges" [287]. Representative examples also include book-length studies like Data and the City [288]; The right to the smart city [282]; Citizens in the 'Smart City': Participation, Co-production, Governance [289]; and Technology and the city: Towards a philosophy of urban technologies [290]. Besides academic researchers, AI ethics has proved to be of interest to a wide range of stakeholders. For instance, AI ethics is appealing to managers of tech giants such as Apple, Facebook, and Google, as well as to politicians and policymakers. Rather than the theoretical and philosophical ramifications, which usually dominate the academic discourse, these stakeholders are more interested in applicable policies and practical guidelines that would help in developing morally justified (self-)governance frameworks. For tech giants and multinational companies, having such policies and guidelines in hand usually serves the purpose of calming critical voices and improving the image of these companies among the general public, and particularly among their potential clients and customers. The efforts made by these stakeholders, especially from 2016 onwards, resulted in a great number of AI guidelines, policies, and principles. These documents and reports have been surveyed, sometimes with analytical and critical insights, by some recently published papers [291, 292, 249, 293, 206]. Furthermore, some academic researchers contributed to this debate by providing theoretical foundations and critical views concerning the drafting of AI codes of ethics [294, 295]. In her work, Boddington paid special attention to the Future of Life Institute's "Asilomar AI principles," which was the outcome of an international conference that hosted a large interdisciplinary group, with expertise in various disciplines, including law, philosophy, economics, industry, and social science [294]. From their side, almost all tech giants and multinational companies developed their own guidelines (see Table 5).
After various checks, it seems that Twitter still has no published systematic AI guidelines, but this case represents the exception to the rule [206]. Google has "Artificial Intelligence at Google: Our Principles" [296] and "Perspectives on issues in AI governance" [297]. OpenAI issued its "OpenAI Charter" [298], IBM has "Everyday ethics for artificial intelligence," and Microsoft has "Microsoft AI principles" [299]. Sometimes, the adopted guidelines are the product of joint efforts and collaboration among more than one company. A good example here is the coalition "Partnership on AI," in which large companies like Amazon, Apple, Facebook, Google, IBM, Sony, and Intel collaborate to facilitate and support the responsible use of AI [206]. Table 5 provides an overview of moral principles in the AI codes of tech companies. One recent example of these companies considering the ethical aspects of AI in human-centric applications is their quitting the use of face recognition technology for law enforcement after the community's privacy and racial concerns over it [20]. At the governmental level, many countries drafted guidelines and policy frameworks for AI governance. The two leading AI superpowers, China and the United States, were at the forefront in this regard. For the USA, there are several documents and reports, including "Preparing for the future of artificial intelligence," published in 2016, and "The National artificial intelligence research and development strategic plan: 2019 update," by the National Science and Technology Council [300, 301, 302]. As for China, there are the "Beijing AI Principles," issued in 2019 by the Beijing Academy of Artificial Intelligence and backed by the Chinese Ministry of Science and Technology [303]. At the transnational or global level, there are also important initiatives [304, 305]. The Institute of Electrical and Electronics Engineers (IEEE) produced two versions of the "Ethically Aligned Design"; the first version came out in 2016 and the second in 2019 [306, 307]. After open consultation on a draft made publicly available in December 2018, the European Commission published "Ethics guidelines for trustworthy AI" [308]. The last example to be mentioned here is the intergovernmental Organisation for Economic Co-operation and Development (OECD), which adopted the "OECD Principles on AI" in May 2019. The document is meant to promote innovative and trustworthy AI that respects human rights and democratic values [309]. Inspired by this initiative, the G20 adopted the human-centered AI Principles [309, 310]. Figure 12 provides a taxonomy of the key ethical issues discussed in the literature. In this section, we mainly focus on the algorithmic issues, as a detailed description of data ethics has been provided in Section 4. Before delving into the detailed issues addressed by the above-sketched literature (see the overview below and in Table 6), a methodological note is in order. Due to the popularity of AI and the polarizing debates in the media, some contributors to the field of AI ethics stress the need to distinguish between genuine and pretentious moral problems and argue that this field should focus on the former rather than the latter type of problems [250, 311]. The publicity of certain "exotic" anecdotes and their wide circulation in the media can make people mistakenly think that they raise genuine ethical issues.
This holds true for the public unveiling of the Japanese roboticist Hiroshi Ishiguro's Geminoids, androids that closely resemble his own appearance and perform human-like movements, such as blinking and fidgeting with their hands. Another example here is the robot "Sophia," which received "citizenship" status from Saudi Arabia after her speech at a United Nations meeting [311]. It is to be noted that it is quite difficult to obtain Saudi citizenship, even for people who were born in the country and spent a great deal of their life there but had no Saudi parents. Such incidents make some people imagine or create fearful scenarios whose moral ramifications, they argue, ethicists and policymakers should urgently address, as if they were part of an already existing dilemma. However, Ishiguro's robot is a remotely controlled android, not an autonomous agent, and the speech given by Sophia was not her own work but was prerecorded by an organic human female.

Table 6: Overview of the key issues that (should) deserve attention in the moral discourse on AI. The significance of some issues is agreed upon (Serious Issues), while other issues are viewed as less important or simply non-issues (Pretentious Issues). The (in)significance of some other issues, mainly represented by the singularity hypothesis, is still a point of controversy and disagreement.

Thus, the fears and concerns promoted after such incidents are more pretentious in nature and are usually viewed as non-issues from a moral perspective. They come close to analogous claims made about earlier technologies, e.g., that writing would destroy memory, trains were too fast for souls, telephones would destroy personal communication, video cassettes would make going out redundant, etc. [250]. Moral philosophers argue that such "non-issues" should not be part of mainstream AI ethics [312, 311, 250]. However, it sometimes proves difficult to agree whether some AI-related questions and challenges should be considered genuine or pretentious issues. The main example here is the so-called "singularity hypothesis," which will be discussed in a distinct section below. Unlike the usual concern linked with most technological advances, viz., undermining people's health or wellbeing, advances in AI systems (sometimes together with the related field of neurology) are believed by some to pose an existential threat to the human species altogether. This concern is usually couched in terms of the so-called "singularity hypothesis." The basic idea of this hypothesis is that once AI systems are able to produce machines or robots with a human level of intelligence, these machines will also be able to act autonomously and create their own "superintelligent" machines that will eventually surpass the human level of intelligence. With such a shift-making sequence of developments, a point of "singularity," analogous to the one in physics, will be a natural outcome. After this point, the superintelligent machine will be the last invention made by man, because humans will no longer be able to keep things under control, including their own destiny. Consequently, human affairs and basic values in life (including even what it means to be human), as we understand them today, will collapse [312, 250]. For those who believe in the singularity hypothesis, one of the possible post-singularity scenarios is that humans will be replaced by superintelligent machines and thus mankind will become obsolete.
The proponents of a more optimistic scenario do not speak of human extinction but of transformation into superhuman intelligent beings. Owing to mutual hybridization between humans and machines, humans will be able to exponentially increase their levels of intelligence, all other capacities, and their lifespan, up to the possibility of achieving immortality [312, 250]. On the other hand, some voices consider the singularity hypothesis dubious, untenable, and an overestimation of AI risks. Thus, some wonder whether this hypothesis deserves to be viewed as a real moral issue at all, or whether it should actually be seen as something imaginary whose rightful place is science fiction rather than moral discourse [312, 311, 250]. The critics of the singularity hypothesis sometimes even accuse its proponents of lacking work experience in the AI field [295, 206]. Such reservations about the singularity hypothesis, and the questioning of whether it is even a serious issue to be addressed, may explain the silence of many of the above-reviewed policies and guidelines on this issue [302]. Even the report released in 2017 by the US Center for a New American Security (CNAS), which had the term singularity in its title, did not provide a serious analysis of the singularity hypothesis [313]. When the aforementioned "Preparing for the future of artificial intelligence" specifically touched upon the singularity hypothesis, it stated that it should have little impact on current policy and that it should not be the main driver of AI public policy [302]. The same attitude was adopted by the first version of the IEEE's "Ethically aligned design," where an implicit reference was made to the singularity hypothesis, warning against adopting "dystopian assumptions concerning autonomous machines threatening human autonomy" [306]. Data-related concerns (e.g., privacy, transparency, explainability, adversarial attacks). Broadly speaking, the efficiency of AI systems heavily depends on the quality of the training data. Thus, a great deal of the AI moral issues and dilemmas revolve around the central question of how such big data should be managed in an ethical way. While trying to collect and process as much data as possible, AI systems can actually be seen as performing a modernized form of the conventional state surveillance carried out by secret services. Various techniques that can be used in smart cities, such as face recognition and device fingerprinting, in combination with "smart" phones and TVs, "smart governance," and the "Internet of Things," are tools for a huge data-gathering machinery. As some observers have stated, the resulting data will not only include "private" or "confidential" information about us; these tools will even know more about us than we know about ourselves. Consequently, the data gathered can be used to manipulate one's behavior. Besides the possibility of deploying it to infringe upon people's privacy and the confidentiality of their information, this massive data-gathering machinery can also make money from our collected data without our consent or even our knowledge. This is sometimes called the "surveillance economy" or "surveillance capitalism" [314, 315, 250]. A more detailed discussion of the data-related ethical issues and concerns, such as privacy, bias, ownership, data openness, interpretation, and informed consent, has been provided in Section 4. Explainability, which is closely related to key moral concepts such as fairness, bias, accountability, and trust, is another significant aspect of big data management.
The minimum level of required explainability intersects with the concept of transparency, which simply means developing an easily understood overview of system functionality. In other words, AI systems should at least maintain precise accounts of when, how, by whom, and with what motivation these systems have been constructed, and these accounts should be explainable and understandable. Moreover, the very tools used to build AI systems can be set to capture and store such information [316]. On the other hand, explainability, as a technical term, has further moral requirements. It means that the causes behind an AI model's decision should be explainable and understandable for humans, so that stakeholders can be aware of the AI model's biases, the potential causes of the bias, etc. [317]. The lack of explainability and transparency, which is perceived as opacity, continues to trigger public and scholarly debates about possible moral violations related to discrimination, manipulation, bias, injustice, etc. An AI algorithm developed by Goldman Sachs was said to be discriminating against women [318]. Also, the Google Health study, published in Nature, which argued that an AI system can outperform radiologists at predicting cancer, was said to violate transparency and reproducibility [319, 320, 321]. To address such concerns, the AI field has been developing techniques to facilitate so-called "explainable AI" and "discrimination-aware data mining" [206]. In parallel, governmental efforts continue to put pressure on the AI industry to produce more explainable applications. For instance, the EU General Data Protection Regulation (GDPR) underlined the "right to explanations" [317]. Furthermore, the aforementioned EU "Ethics guidelines for trustworthy AI" included the principle of explicability as one of the four core ethical principles in the context of AI systems [308]. Another major concern related to data governance has to do with ensuring its security and developing protective measures against adversarial attacks, which can have a serious impact on AI systems. AI algorithms, whose behavior substantially shapes life in smart cities, mainly feed on data collected from every participating device in order to deliver fully integrated, complex smart solutions. However, AI algorithms are not safe by nature, since adversarial attacks have been demonstrated in different smart domains. This creates a deep ethical responsibility, shared by all stakeholders, to ensure data safety for both assets and people, to the extent that some consider it a human-rights issue [322]. Several defense techniques have been developed to mitigate or minimize the risk of adversarial attacks [323, 324, 325]. Also, Generative Adversarial Networks (GANs) support decision systems in several smart areas by generating realistic examples to enrich the available dataset (data augmentation) and thus improve the efficiency of AI models [326, 327, 328]. It is to be noted that commissioned cyberattacks, originally meant to test the immunity of AI systems to the threat of adversarial AI or offensive AI, can also help address some of the aforementioned concerns. For instance, they can step in to secure fairness in AI solutions so that classifiers do not judge based on protected attributes related to gender, religion, wealth, etc. [329]; this is called adversarial fairness. They can also be used to preserve a level of privacy for some sensitive data; this is called adversarial representation [330, 331].
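To make the notion of an adversarial attack concrete, the following minimal sketch (ours, with hypothetical weights and inputs rather than any model from the surveyed works) applies an FGSM-style perturbation to a toy logistic-regression model and flips its prediction with a small change to the input.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical, fixed logistic-regression "model" (weights chosen for illustration).
w = np.array([2.0, -3.0, 1.0])
b = 0.0

def predict_proba(x):
    return sigmoid(np.dot(w, x) + b)

# A benign input that the model classifies as class 0 (probability < 0.5).
x = np.array([0.5, 0.3, -0.2])
true_label = 0

# FGSM-style untargeted attack: move the input a small step in the direction
# that increases the loss for the true label. For logistic regression the
# gradient of the loss w.r.t. the input is (p - y) * w.
p = predict_proba(x)
grad = (p - true_label) * w
epsilon = 0.1
x_adv = x + epsilon * np.sign(grad)

print(f"original input  -> p(class 1) = {predict_proba(x):.3f}")
print(f"perturbed input -> p(class 1) = {predict_proba(x_adv):.3f}")

Deep models deployed in smart city services are vulnerable to the same basic mechanism, which is why the defenses and commissioned attacks discussed above matter.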
These benefits of commissioned adversarial testing explain the clear trend in the literature toward exposing all possible adversarial attacks on different systems. This can be viewed as part of typical ethical hacking, where AI specialists look for every possible form of attack in order to improve the development of defenses. Social concerns (e.g., discrimination, unemployment). In addition to the problems highlighted above, big data misgovernance can also create social problems. For instance, the absence of explainability can pose a serious threat to democracy: the so-called "threat of algocracy" [332]. This threat is likely to arise by standardizing dependence on "intelligent" systems whose rationale, or mode of reasoning, for the decisions they make is inaccessible to individual citizens and sometimes even to experts [250]. Additionally, the aforementioned example of Goldman Sachs and similar stories show that data-driven algorithms can contribute to sexism, racism, or the reproduction of other negative stereotypes that we have collectively agreed to judge as bad, even if they sometimes reflect part of our current reality. Unregulated usage of AI applications like automated facial analysis has been shown to exhibit systematic biases by skin type and gender [248, 316]. Instead of helping us reform the existing inequalities in societies, mathematical models and algorithms often reinforce them [24]. To address such biases and discriminatory stereotyping, more carefully programmed AI systems are being developed. For instance, some discrimination-sensitive programs can be used in the early stages of human resources processes to help shortlist diverse CVs [316]. What is also important in this regard is that the AI field itself should be more inclusive and diverse when it comes to the cultural and ethnic background and gender of AI teams [271]. Another socio-economic concern is that AI, with its increasing ability to take over skilled and unskilled jobs, will disrupt the labor market. The pessimistic view sometimes goes as far as to warn of a dystopian climax, where a handful of AI giants take jobs away from millions of people who end up with nothing to do except "entertain" themselves with whatever the AI industry allows them to access. At the other end of the spectrum, there is an optimistic view whose advocates promise an AI utopia where AI systems will generate wealth, create more jobs, and improve overall economic growth. One of the key challenges in properly navigating these concerns is that there is little economics research in this area, and the available predictions are premised on past technologies. This state of academic research makes it difficult for policymakers to prepare well for the prospective AI impact on the labor market and the economy in general [333, 334, 335]. Beyond AI's positive or negative economic impact, some researchers have expressed specific concerns about certain applications, like the so-called "carebots," which are meant to offload caregiving to a machine. Even if this automation of caregiving does not result in job cuts, replacing human care will still have social costs, e.g., the exchange of feelings and emotions among humans will cease to be part of caregiving [336]. The machine-centered branch of AI ethics, or "machine ethics," approaches machines as subjects or agents, rather than as objects or tools used by humans.
Despite some vagueness about the exact scope and subject of this branch, the basic idea is that the "machine ethics" discourse focuses on questions related to the morality of the machine itself, e.g., can a machine behave ethically towards humans or other machines? And if so, which moral standards should apply to judge this behavior? Would the machine in such a case be held accountable, be morally responsible, or be a holder of rights and obligations? [249, 250]. Available research shows a variety of approaches, already applied in experimental demonstrations with robots, that explore how a machine can be trained to recognize and correctly respond to morally challenging situations. It is to be noted that the outcome of these trials is still far from producing even a human-like being whose acts can be judged in the same way we judge human moral agents. Researchers speak only of "robots with very limited ethics in constrained laboratory settings" [249]. In order to accommodate the restricted moral autonomy of some (future) AI systems, some researchers have proposed multi-layered typologies for ethical agents. In these typologies, the highest category of full ethical agents is (for now) exclusive to an average adult human, whereas machines trained to behave ethically fall under lower categories [337]. Whatever one's conviction about the nature of morality that can be assigned to certain AI systems, and how far we can regard them as "artificial moral agents," the very idea itself has raised complex questions about key concepts like moral responsibility, accountability, and liability. This holds particularly true for two famous AI applications, namely autonomous vehicles and autonomous weapons. In principle, such applications challenge the conventional idea that whenever there is a victim, there should be an identifiable culprit. The victims of violations committed by autonomous cars or weapons will face the difficulty of allocating punishment, sometimes called the "retribution gap," because there will be no human driver or shooter who can be held accountable [338, 250]. In response to these difficulties, proposals have been made to forgo the idea of accountability assigned to a specific individual (e.g., the motorist or the shooter) and to assign it to a pool of involved stakeholders (e.g., programmers, manufacturers, and operators of the AI systems, besides the bodies responsible for infrastructure, policy, and legal decisions) [255, 250, 339]. In this section, we present the insights and lessons learned from the literature on each challenge to AI in smart cities. Adversarial AI is not a new topic; however, it has become a crucial one in the era of smart cities, and extra effort is needed to reach an acceptable level of trust in AI-based products. Trust might be defined with respect to the possible attacks, the defense mechanisms, and the expected effect on the overall system. This may create a trade-off between safety and performance which needs further exploration. AI safety strategies come in four categories based on four general safety strategies in engineering [141]. We highlight the basics of each of them with possible examples related to the discussion in this section. • Safe Design Strategy: The main idea in this strategy is to study the data and any potential bias or harm before building AI solutions. For example, carelessly training a model on a mix of animal and human images could lead to harmful results.
Using a dataset that is biased towards specific classes, for example one in which lighter-skin examples overwhelmingly outnumber darker-skin examples, can likewise yield a solution biased towards specific classes [21]. The imbalance of the examples in the dataset pushes classifiers to perform better, in terms of accuracy, on specific classes, e.g., male over female, and lighter skin color over darker skin color. IBM stopped offering its general-purpose face recognition technology because of concerns over its use for racial profiling, and the MIT Technology Review showed that such software performs better on lighter-skinned females than on darker-skinned females.
• Safety Reserves: The feature set can be partitioned into protected attributes, such as gender and race, and unprotected ones, where the risk ratio of harm for a protected group relative to an unprotected group should not exceed a predefined threshold (a minimal sketch of this check, together with the Safe Fail rejection option below, follows the lessons-learned list).
• Safe Fail: If a decision cannot be given with confidence, rejection is the preferred option, and a human steps in to make the decision manually.
• Procedural Safeguards: The availability of open-source machine learning algorithms can improve testing and auditing. Moreover, since data plays a major role in any AI-based solution, open, freely available datasets can help in developing safer applications.
Although the above strategies can improve the safety of AI-based solutions, several defense methods have also been developed against security attacks to maintain the safety of AI-based applications. Some key lessons learned from this section are summarized as follows:
• Adversarial attacks have been demonstrated in several smart city applications, and they have serious consequences for people's lives, privacy, opportunities, and assets. They could also significantly impact the economy and the environment of countries.
• All stakeholders in developing smart city applications are ethically responsible for following good technical practices and extensively evaluating the impact of any AI application on fairness, privacy, and lives.
• Anti-adversarial-attack solutions are not magic, and all authorities and organizations share the responsibility of risk prevention and mitigation.
• Adversarial data does not always mean "harm"; it can be utilized as a data augmentation technique and to build more robust AI-based solutions.
• Due to their high severity, adversarial attacks should be made an integral part of education on the model building and deployment process of AI applications.
• The transferability of adversarial examples across models enables an attacker to target even a black-box model. No effective defense mechanism currently exists, which sheds light on the importance of substitution-model techniques.
• Organizations may need to invest more not only in their collected data but also in securing the models they develop. This probably requires more budget for security, training, and tools.
• AI models that show high accuracy at testing time may not be good choices once the robustness of the model against attacks becomes part of the evaluation process.
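Returning to the Safety Reserves and Safe Fail strategies above, the following minimal sketch (ours; the data, thresholds, and group labels are hypothetical) checks a disparate-impact-style risk ratio between a protected and an unprotected group and routes low-confidence predictions to a human.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model outputs: predicted probability of a "high risk" label,
# plus a protected attribute (1 = protected group, 0 = unprotected group).
scores = rng.uniform(size=200)
protected = rng.integers(0, 2, size=200)

# Safe Fail: only act automatically on confident predictions; defer the rest.
confident = (scores < 0.2) | (scores > 0.8)
auto = scores[confident] > 0.5
print(f"automatic decisions: {len(auto)}, deferred to a human: {int((~confident).sum())}")

# Safety Reserves: compare the rate of adverse (high-risk) decisions between
# the protected and unprotected groups and flag violations of a threshold.
adverse = scores > 0.5
rate_protected = adverse[protected == 1].mean()
rate_unprotected = adverse[protected == 0].mean()
risk_ratio = rate_protected / rate_unprotected
print(f"risk ratio (protected / unprotected): {risk_ratio:.2f}")

THRESHOLD = 1.25  # hypothetical policy limit
if risk_ratio > THRESHOLD:
    print("warning: risk ratio exceeds the predefined threshold -- review the model")

In a deployed system the confidence bands, the protected attributes, and the acceptable ratio would all be set with domain experts and the relevant authorities, not hard-coded as they are here.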
Despite their outstanding capabilities, the decisions/predictions made by traditional black-box AI algorithms are not straightforward, and in fact not understandable, for the different stakeholders, such as government authorities and citizens, involved in a smart city application. Even the data scientists and developers who created a model may have trouble explaining why it made a particular decision, or may not be fully aware of the causes of that decision. One way to achieve better model transparency is to adopt models from a specific family that is considered explainable. Understanding the causes of a model's decision, in general, and in smart city applications in particular, is critical for developing users' trust in the system. For instance, in healthcare, understanding the causes of AI predictions/decisions is critical for doctors to consider AI-based clinical insights. Doctors would feel more confident in taking decisions based on an AI-based diagnosis if the decision of the AI model were understandable/interpretable by a human. Explainability also provides an opportunity for AI models/developers to benefit from domain experts' knowledge in dealing with impurities in the data and the structure of the models. Some key lessons learned from this section are summarized as follows:
• A lot of interest in and demand for explainable AI has been observed over the last few years.
• Explainability helps in building stakeholders' trust in AI models' predictions, which will ultimately speed up the adoption of AI in critical smart city applications, such as healthcare.
• Explainability also plays a vital role in ensuring fair AI decisions by identifying and eliminating decisions based on protected attributes such as race, gender, and age.
• There is a trade-off between explanation and performance. Transparent models are good for explanation; however, their performance is lower compared to black-box models, such as deep learning models.
• There is a deep connection between explainability and other emerging concepts in AI, namely adversarial attacks and ethics.
• Explainability helps AI models guard against adversarial attacks by differentiating between genuine samples and adversaries.
• Explainability and ethics also link to and cross-fertilize each other in AI.
The literature reviewed in this section demonstrates a growing interest in, and concern over, the ethical aspects of AI systems and their applications. A diverse group has contributed to the emerging field of AI ethics, including not only academics and researchers but also governments and tech giants, such as Apple, Facebook, and Google. They have all realized the growing impact of AI technology on society and believe that ethical deliberations, guidelines, and governing policies are necessary to make a rigorous trade-off between potential benefits and possible harms. The key lessons learned can be summarized as follows:
• AI ethics is increasingly moving towards being a distinct scholarly field of inquiry with a strong interdisciplinary character. Besides the two main involved groups, namely philosophers and engineers, this young field is also benefiting from insights provided by an interdisciplinary group of scholars, researchers, and practitioners.
• In their attempt to canonize the young field of AI ethics and to theorize and standardize its scope, questions, and methodology, various academic journals and publishers have been actively producing books, edited volumes, journal special issues, and recently also handbooks.
• The key players in the AI industry, including multinational companies alongside national and transnational governmental bodies, have drafted various policies and guidelines meant to demonstrate their commitment to ethical governance of their activities in the AI industry.
• The wide range of moral issues addressed by academic publications and/or guidelines shows disagreement on certain issues (such as the singularity hypothesis) and on whether they should be regarded as real problems. On the other hand, a great number of issues are consensually viewed as serious challenges, including those with relevance to smart city applications. Representative examples were discussed under broad themes, including big data management (e.g., privacy, explainability, transparency, opacity, bias) and social problems (e.g., facilitating discrimination and disrupting the labor market).
Google Scholar shows growth in the number and scope of adversarial attack research over the last decade [61]. The collaboration of multidisciplinary teams, including data scientists, cybersecurity engineers, and domain-specific professionals, is needed for adversarial attack research and development. Future research is expected to set policies that accurately describe ethical outlines, and how and when AI should be part of an organization's ecosystem [79, 67]. Some possible research opportunities and open issues are:
• Performance and Accuracy vs. Security. The classical trade-off between response time and safety procedures is the first concern raised in deploying AI in smart cities, where decisions are supposed to be taken in a timely manner. Applying detection algorithms against adversarial attacks must be carefully evaluated in different fields, especially those that depend mainly on fast decisions, such as autonomous vehicles (AVs). Another concern related to performance is the accuracy of AI models when they are trained on both benign and adversarial data, i.e., the false positive and true negative rates. Different parameter optimization methods of learning-based algorithms share the same objective, i.e., maximizing the overall accuracy of the model [340]. However, the interesting question is: do those parameters have any impact on the model's immunity against adversarial attacks?
• Estimating attack implications (the ripple effect). The ripple effect of attacks must be considered in future work. Given the complexity of a smart city's ecosystems, attacking one model may have a series of consequences for the whole city and may also unintentionally attack other models. Estimating the loss and effect of attacking a model, and functional dependency evaluation, could become integral parts of the development life-cycle of future AI-based systems. We can expect more interest in simulation work in this area soon.
• Real-Time Adversarial Attacks. This is another challenge for AI safety teams. There is a need to evaluate current techniques for generating poisoning data when only part of the benign data is available, i.e., in streaming settings. What should the structure of defenses look like in real-time environments? [341]
• Future work may devote more effort to defining the rules for operating smart cyber-systems and the accountability of service providers and operators [342].
• Unintentional attacks in smart waste management and agriculture. Smart waste management and agriculture mainly depend on networks of sensors that work in harsher conditions compared to some other fields, such as transportation. In such scenarios, the environments might be wet, humid, or dirty, have varying temperatures, and may suffer from pollution.
For example, the sensors attached to animals on large farms, sensors on trash bins, electrochemical sensors for soil nutrients, etc., may convey noise along with the required data due to environmental effects. This could be an important source of unintentional attacks that should be evaluated and taken into account in future work.
• AI model detection and isolation techniques. In [76], a technique for abnormal vehicle behavior detection and isolation is applied at the object level (i.e., the vehicle), which may run several models to control driving tasks and traffic management. Evaluating the approach at a lower level, i.e., the model level, to detect and isolate possibly attacked models might add value to the overall safety. Developing guidelines for replacing suspected models, or defining alternatives in AI models' maintenance plans, could improve consumers' trust.
• Robustness and safety metrics in the evaluation process of AI models. The current metrics used to evaluate the performance of AI models could take into account the factor of safety and the robustness of the model against different types of attacks. AI models with high accuracy at testing time might be among the worst once a little noise is added by attackers in the production environment [343]. This points to the possible need to revisit, before deployment, the AI model evaluation policy agreed between the stakeholders.
Although a lot of effort has been devoted to the interpretation/explainability of AI algorithms since the concept of explainable AI was introduced, there are still many aspects of explainable AI that need to be analyzed. In this section, we provide some of the open issues and future research directions in the domain. Despite all the benefits explainability brings to stakeholders in different application domains, there are some concerns about its impact on performance and the development process. It is believed that the efforts toward explainability will not only slow down the development process but also put constraints on it, which might also hurt the performance (i.e., accuracy) of the models [344]. For better interpretability, AI models should be kept simple, as the simpler the model, the more explainable the causes of an underlying decision are. However, the literature shows that more complex AI algorithms (e.g., deep learning) usually tend to be more accurate. The trade-off between explainability and performance is believed to be optimizable with better explainability methods, which is one of the key research challenges in the domain [56, 345]. The literature still lacks a common ground, structure, and unified concept of explainability [56]. However, several efforts have been made in this regard. For instance, Arrieta et al. [56] attempted to provide a common ground or reference point: according to them, the explainability of an AI model refers to its ability to make its functioning (i.e., the causes of its decisions) clearer to an audience. The authors also emphasize the need to define an evaluation metric, or set of metrics, for the evaluation and comparison of AI models in terms of their explainability and interpretation capabilities. Despite the sincere efforts made toward explainable AI, there are still several challenges hindering its success and adoption. One of the key challenges is the interpretability of deep learning. In this regard, efforts are ongoing to develop explainable deep learning techniques and applications. To this aim, different visualization techniques are used to explain their reasoning steps, which is expected to make them more explainable and trustworthy.
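As a minimal illustration of such gradient-based visualization (our own sketch using a random, untrained toy network rather than any model from the surveyed works), the following computes an input saliency ranking by backpropagating the output to the input features.

import numpy as np

rng = np.random.default_rng(7)

# A tiny fixed two-layer network with random weights; in practice this would
# be a trained model, but random weights are enough to show the mechanics.
W1 = rng.normal(size=(4, 6))   # hidden layer: 6 inputs -> 4 hidden units
b1 = rng.normal(size=4)
w2 = rng.normal(size=4)        # output layer: 4 hidden units -> 1 score
b2 = 0.0

def forward(x):
    h = np.tanh(W1 @ x + b1)
    y = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))
    return y, h

# Hypothetical input features (e.g., six sensor readings).
x = rng.normal(size=6)
y, h = forward(x)

# Gradient of the output w.r.t. the input (manual backpropagation):
# dy/dx = y(1 - y) * W1^T (w2 * (1 - h^2))
grad = y * (1 - y) * (W1.T @ (w2 * (1 - h ** 2)))

# Saliency: features with the largest absolute gradient influence the output most.
ranking = np.argsort(-np.abs(grad))
for i in ranking:
    print(f"feature {i}: saliency {abs(grad[i]):.4f}")

For image models, the same gradients, reshaped to the input resolution, give the familiar saliency heatmaps used to explain deep networks.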
As detailed earlier, explainability and adversarial AI are directly connected. On the one hand, explainability can guard against adversarial attacks by helping to differentiate between a genuine sample and an adversarial one; on the other hand, the information revealed by explainability techniques can be used to generate more effective adversarial attacks on AI algorithms [56]. One of the interesting directions of research on explainable AI is to analyze how effectively it can be used to guard against adversarial attacks; efforts in this direction are already ongoing, as detailed in Section 3.2.
Despite the significant progress AI ethics has made in a short period, many issues remain open and various challenges still need to be addressed by future research. Below, we summarize the key points in this regard.
• Due to its strongly interdisciplinary character and relatively young age, AI ethics suffers from serious conceptual ambiguity. Many of the key terms have fundamentally different, and sometimes even incompatible, meanings for different people. For example, key terms like agent, autonomy, and intelligence do not mean the same thing to moral philosophers and to AI engineers. For engineers, cars or weapons are "autonomous" when they can behave without direct human intervention. Moral philosophers, however, would use the term "autonomous" exclusively for an entity that can define its own laws or rules of behavior by itself [311]. To improve the AI moral discourse and make it more efficient, there is a dire need for future research that enhances its conceptual clarity and standardizes the primary and secondary meanings of its key terms.
• There is a need to explore innovative ways of bridging the existing gaps between academic research and policymaking on the one hand, and between policymaking and the AI reality on the other hand. The questions raised and addressed by academics are sometimes too abstract and theoretical to be of relevance for policymakers and those engaged in the AI business. Instead of broad philosophical questions like "Will this contribute to human flourishing or put the human species at risk?," policymakers are more interested in practical questions like "Which harms should we expect if we are going to do this, and how can we mitigate or minimize these harms?" Despite some good but still seemingly exceptional instances, various researchers also warn that ethics in general, and policies and guidelines in particular, have had hardly any tangible impact on the reality of the AI industry. Most of the time, large companies are driven by economic logic and incentives rather than by value- or principle-based ethics [206, 346].
• The moral discourse on AI systems is almost exclusively "Western" in nature. In other words, ethical deliberations and academic publications are produced by institutions based in Western Europe and the United States and are thus imbued with secular-oriented moral thought. With the expected growth of the AI industry and the adoption of its technologies by other communities worldwide, there is a need to diversify and enrich the current AI moral discourse by incorporating insights from other cultural and religious traditions. Available research shows that people's cultural norms do influence their understanding of what makes AI systems ethical [347].
Moreover, reports coming from Muslim-majority countries such as Qatar show that their interest in adopting AI technologies is coupled with a parallel interest in developing a religio-culturally sensitive discourse and compliant policies, where Arabic language processing will also be a national priority [348].
In this paper, we have reviewed the key challenges in the successful deployment of AI in smart city applications. In particular, we focused on four key challenges, namely security, robustness, interpretability, and ethical (data and algorithmic) challenges, in the deployment of AI in human-centric applications. We paid particular attention to the connections between these challenges and discussed how they are linked. Based on our analysis of the existing literature and our experience in the domain, we identified the current limitations and pitfalls of the solutions proposed for tackling these challenges, and we also identified open research issues in the domain. We believe such a rigorous analysis of the domain will provide a baseline for future research.
References
Secure, sustainable smart cities and the IoT Smart cities: A survey on data management, security, and enabling technologies The smart enough city: putting technology in its place to reclaim our urban future Arup: If you know the right questions and understand the risks, data can help build better cities Secure and robust machine learning for healthcare: A survey Deep learning for intelligent transportation systems: A survey of emerging trends A survey of blockchain technology applied to smart cities: Research issues and challenges Applications of artificial intelligence and machine learning in smart cities Caveat emptor: the risks of using big data for human development Big data, bigger dilemmas: A critical review There is a blind spot in AI research Machine bias: There's software used across the country to predict future criminals. and it's biased against blacks Artificial intelligence's white guy problem Securing connected & autonomous vehicles: Challenges posed by adversarial machine learning and the way forward Explainable artificial intelligence via Bayesian teaching Explainable machine-learning predictions for the prevention of hypoxaemia during surgery Grad-cam: Visual explanations from deep networks via gradient-based localization Amazon scraps secret AI recruiting tool that showed bias against women The two-year fight to stop amazon from selling face recognition to the police Gender shades: Intersectional accuracy disparities in commercial gender classification The measure and mismeasure of fairness: A critical review of fair machine learning The ethics of smart cities and urban science Weapons of math destruction: How big data increases inequality and threatens democracy On relating explanations and adversarial examples When explainability meets adversarial learning: Detecting adversarial examples using shap signatures How AI is transforming the smart cities IoT?
The real-world benefits of machine learning in healthcare Keratinocytic skin cancer detection on the face using region-based convolutional neural network Deep-learning framework to detect lung abnormality-a study with chest X-Ray and lung CT scan images Intelligent traffic control for autonomous vehicle systems based on machine learning Vehicle re-identification with learned representation and spatial verification and abnormality detection with multi-adaptive vehicle detectors for traffic video analysis A temporal-spatial deep learning approach for driver distraction detection based on eeg signals Traffic anomaly detection via perspective map based on spatial-temporal information matrix How deep features have improved event recognition in multimedia: a survey Automatic detection of passable roads after floods in remote sensed and social media data A survey of deep learning applications to autonomous vehicle control Server and protect: Predictive policing firm PredPol promises to map crime before it happens Stanford scholars show how machine learning can help environmental monitoring and enforcement Natural disasters detection in social media and satellite imagery: a survey Real-time monitoring and prediction of water quality parameters and algae concentrations using microbial potentiometric sensor signals and machine learning tools Intelligent fusion of deep features for improved waste classification Leveraging machine learning and big data for smart buildings: A comprehensive survey Machine learning of robots in tourism and hospitality: interactive technology acceptance model (iTAM)-cutting edge Deriving emotions and sentiments from visual content: A disaster analysis use case Dissecting racial bias in an algorithm that guides health decisions for 70 million people The first two decades of smart-city research: A bibliometric analysis Adversarial attacks on deep-learning models in natural language processing: A survey Adversarial examples on object recognition: A comprehensive survey A survey of game theoretic approach for adversarial machine learning Explainable machine learning for scientific insights and discoveries A survey on explainable artificial intelligence (XAI): towards medical XAI Peeking inside the black-box: A survey on explainable artificial intelligence (XAI) Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI Semantic web technologies for explainable machine learning models: A literature review Explainable reinforcement learning: A survey A survey of artificial general intelligence projects for ethics, risk, and policy, Global Catastrophic Risk Institute Working Paper The ethics of AI in health care: A mapping review Securing machine learning (ML) in the cloud: A systematic review of cloud ML security Machine learning in IoT security: current solutions and future challenges Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition Robust physical-world attacks on deep learning visual classification Audio adversarial examples: Targeted attacks on speech-to-text Three small stickers in intersection can cause Tesla autopilot to swerve into wrong lane Adversarial attacks on medical machine learning Interpretable adversarial perturbation in input embedding space for text The limitations of deep learning in adversarial settings Adversarial attacks against intrusion detection systems: Taxonomy, solutions and open issues On evaluating adversarial robustness Adversarial attacks and defenses 
in deep learning, Engineering Cross-resolution face recognition adversarial attacks computer: Benchmarking machine learning algorithms for traffic sign recognition Practical black-box attacks against machine learning Learning-based adversarial agent detection and identification in cyber physical systems applied to autonomous vehicular platoon One pixel attack for fooling deep neural networks Vulnerability analysis of chest X-ray image classification against adversarial attacks Adversarial attacks against medical deep learning systems Injecting and removing suspicious features in breast imaging with CycleGAN: A pilot study of automated adversarial attacks using neural networks on small images Risk susceptibility of brain tumor classification to adversarial attacks Understanding adversarial attacks on deep learning based medical image analysis systems Mitigating adversarial attacks on medical image understanding systems No surprises: Training robust lung nodule detection for low-dose CT scans by augmenting with adversarial attacks DoS attack energy management against remote state estimation Detection of faults and attacks including false data injection attack in smart grid using Kalman filter Adversarial deep learning for energy management in buildings Exploiting vulnerabilities of load forecasting through adversarial attacks Detection of false-data injection attacks in cyber-physical DC microgrids Physical layer security for the smart grid: vulnerabilities, threats, and countermeasures Adversarial attacks on deep neural networks for time series classification Darts: Deceiving autonomous cars with toxic signs Detection of traffic signs in real-world images: The German traffic sign detection benchmark Adversarial sensor attack on LiDAR-based perception in autonomous driving Vision meets robotics: The KITTI dataset Adaptive square attack: Fooling autonomous cars with adversarial traffic signs Hospital-scale chest X-ray database and benchmarks on weaklysupervised classification and localization of common thorax diseases On the vulnerability of data-driven structural health monitoring models to adversarial attack Structural health monitoring at Los Alamos national laboratory Adversarial attacks to machine learning-based smart healthcare systems Desvig: Decentralized swift vigilance against adversarial attacks in industrial artificial intelligence systems Detecting false data injection attacks in smart grids: A semi-supervised deep learning approach Koutsoukos, Evaluating resilience of grid load predictions under stealthy adversarial attacks Transferable, controllable, and inconspicuous adversarial attacks on person re-identification with deep mis-ranking Scalable person re-identification: A benchmark DeepReID: Deep filter pairing neural network for person re-identification Performance measures and a data set for multi-target, multi-camera tracking The UCR time series archive Robustness evaluations of sustainable machine learning models against data poisoning attacks in the internet of things Crowdsensing in smart cities: Overview, platforms, and environment sensing issues Deep reinforcement learning for partially observable data poisoning attack in crowdsensing systems Robust truth discovery against data poisoning in mobile crowdsensing Towards data poisoning attacks in crowd sensing systems Attack under disguise: An intelligent data poisoning attack mechanism in crowdsourcing Evasion attacks against machine learning at test time Explaining vulnerabilities of deep learning to adversarial malware 
binaries Proceedings of the 35th Annual Computer Security Applications Conference Prada: protecting against dnn model stealing attacks Adversarial learning Knockoff nets: Stealing functionality of black-box models Thieves on sesame street! model extraction of BERT-based APIs Prediction poisoning: Towards defenses against DNN model stealing attacks Membership inference attacks against machine learning models Sin 2: Stealth infection on neural network-a low-cost agile neural trojan attack methodology Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Trojan attacks on wireless signal classification with adversarial machine learning Radio machine learning dataset generation with gnu radio MILCOM 2018-2018 IEEE Military Communications Conference (MILCOM) Generative adversarial learning for spectrum sensing Certified defenses for data poisoning attacks Deep cnn-lstm with combined kernels from multiple branches for imdb review sentiment analysis Model extraction warning in mlaas paradigm Copycat cnn: Stealing knowledge by persuading confession with random non-labeled data The ar face database, CVC Technical Report24 A 3d facial expression database for facial behavior research Coding facial expressions with gabor wavelets Have you stolen my model? evasion attacks against deep neural network watermarking techniques Cuda implementation of deformable pattern recognition and its application to mnist handwritten digit database Practical solutions for machine learning safety in autonomous vehicles On the safety of machine learning: Cyber-physical systems, decision sciences, and data products Pedreschi, A survey of methods for explaining black box models An introduction to machine learning interpretability Explainability fact sheets: a framework for systematic assessment of explainable approaches On the accuracy versus transparency trade-off of data-mining models for fast-response pmu-based catastrophe predictors Building cognitive cities with explainable artificial intelligent systems AI fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias Visual analytics for explainable deep learning Interpretable machine learning in healthcare Towards a rigorous science of interpretable machine learning ACM webinar Uber shuts down self-driving operations in arizona: CNN Interpretable learning for self-driving cars by visualizing causal attention Explainable density-based approach for self-driving actions classification Explanations and expectations: Trust building in automated vehicles Textual explanations for self-driving vehicles AI in education needs interpretable machine learning: Lessons from open learner modelling Exploring the need for explainable artificial intelligence (XAI) in intelligent tutoring systems (ITS) Explainability in autonomous pedagogical agents Explainable AI for designers: A human-centered perspective on mixed-initiative cocreation A unified approach to interpreting model predictions Qlime-a quadratic local interpretable model-agnostic explanation approach Explaining prediction models and individual predictions with feature contributions B5g and explainable deep learning assisted healthcare vertical at the edge: COVID-19 perspective Meteorology-driven variability of air pollution (PM1) revealed with explainable machine learning SIRTA, a groundbased atmospheric observatory for cloud and aerosol research What lies beneath: A note on the explainability of black-box machine learning models for road traffic 
forecasting Real traffic flow data was retrieved from the madrid open data portal Analyzing the impact of traffic congestion mitigation: From an explainable neural network learning framework to marginal effect analyses Hierarchical travel demand estimation using multiple data sources: A forward and backward propagation algorithmic framework on a layered computational graph Reinforcement learning with explainability for traffic signal control An explainable deep machine vision framework for plant stress phenotyping Plant disease identification using explainable 3d deep learning on hyperspectral images Estimating and understanding crop yields with explainable deep learning in the indian wheat belt Explainable machine learning for fake news detection Buzzface: A news veracity dataset with Facebook user commentary and egos Xgboost: A scalable tree boosting system Explainable adversarial learning: Implicit generative modeling of random noise during training for adversarial robustness Gradient similarity: An explainable approach to detect adversarial attacks against deep learning Do gradient-based explanations tell anything about adversarial robustness to android malware? Explainability and adversarial robustness for rnns An adversarial approach for explainable AI in intrusion detection systems An adversarial approach for explaining the predictions of deep neural networks Smart cities and cloud computing: lessons from the storm clouds experiment Smartedge: An end-to-end encryption framework for an edge-enabled smart city application Trustbased cloud machine learning model selection for industrial IoT and smart city services security risks of enterprises using cloud storage and file sharing apps Security and privacy challenges in smart cities Smart city: The state of the art, datasets, and evaluation platforms Big data for development: applications and techniques Hierarchical classification for constrained IoT devices: A case study on human activity recognition What is data ethics?, in: Phil Reusing data: Technical and ethical challenges Ethical issues in research using datasets of illicit origin Aspects of data ethics in a changing world: Where are we now? The ethics of data sharing: A guide to best practices and governance Group privacy: New challenges of data technologies Data sharing: an ethical and scientific imperative Ethics in educational technology research: Informing participants on data sharing risks Data transparency with blockchain and AI ethics Smart community: an internet of things application Security and privacy in smart city applications: Challenges and solutions The pursuit of citizens' privacy: a privacy-aware smart city is possible The ethics of AI ethics: An evaluation of guidelines Smart cities, big data, and the resilience of privacy Data philanthropy Differential privacy techniques for cyber physical systems: a survey Data science ethics in government Own data? ethical reflections on data ownership Who owns the smart city's data? 
Smart city ethics: The challenge to democratic governance Why it's so hard for users to control their data An integrated vr/ar framework for user-centric interactive experience of cultural heritage: The arkaevision project Common challenges with interpreting big data (and how to fix them Response vs non response bias in surveys Gaddressing unintended bias in smart cities Decentralized big data auditing for smart city environments leveraging blockchain technology Improved dynamic remote data auditing protocol for smart city security A lightweight and privacy-preserving public cloud auditing scheme without bilinear pairings in smart cities Data auditing for the internet of things environments leveraging smart contract Big data security and privacy issues in healthcare Issues in data fusion for healthcare monitoring Security and privacy issues with health care information technology In defence of informed consent for health record research-why arguments from 'easy rescue','no harm'and 'consent bias' fail Graduate students reported practices regarding the issue of informed consent and maintaining of data confidentiality in a developing country Duplicitous social media and data surveillance: An evaluation of privacy risk A study on the security threats and privacy policy of intelligent video surveillance system considering 5g network architecture The necessity of the implementation of privacy by design in sectors where data protection concerns arise Collecting survey and smartphone sensor data with an app: Opportunities and challenges around privacy and informed consent Normative challenges of identification in the internet of things: Privacy, profiling, discrimination, and the gdpr Improving informed consent: Stakeholder views Mitigating bias in algorithmic hiring: Evaluating claims and practices Notes from the AI frontier: Tackling bias in AI (and in humans) Bias in data-driven artificial intelligence systems-an introductory survey A survey on data collection for machine learning: a big data-AI integration perspective Creation of user friendly datasets: Insights from a case study concerning explanations of loan denials The how of explainable AI: Pre-modelling explainability Facets: An open source visualization tool for machine learning training data MT-adapted datasheets for datasets: Template and repository The dataset nutrition label, Data Protection and Privacy: Data Protection and Democracy Daytime arctic cloud detection based on multi-angle satellite data with case studies Interpretable machine learning: definitions, methods, and applications The handbook of artificial intelligence Oxford handbook on AI ethics book chapter on race and gender Machine ethics: the design and governance of ethical AI and autonomous systems Ethics of artificial intelligence and robotics Emerging challenges in ai and the need for ai ethics education Moral enhancement and artificial intelligence: Moral AI?, in: Beyond Artificial Intelligence Emotion, artificial intelligence, and ethics Moral machines: Teaching robots right from wrong Machine ethics The machine question: Critical perspectives on AI, robots, and ethics Robot ethics: the ethical and social implications of robotics, Intelligent Robotics and Autonomous Agents series Superintelligence: Paths, dangers, strategies Robot ethics 2.0: From autonomous cars to artificial intelligence Universal empathy and ethical bias for artificial general intelligence Limitations and risks of machine ethics Ethical guidelines for a superintelligence Ethics of artificial 
intelligence Bench-Capon, Ethical approaches and autonomous systems You can't sit with us: Exclusionary pedagogy in ai ethics education Ethics as a service: a pragmatic operationalisation of ai ethics Lessons learned from ai ethics principles for future actions Introduction to the special issue on philosophical foundations of artificial intelligence Guest editors' introduction: Machine ethics Planning for the known unknown: Machine learning for human healthcare systems The ethics of artificial intelligence, The Cambridge handbook of artificial intelligence The oxford handbook of ethics of AI Smart cities, transparency, civic technology and reinventing government Semantic technologies in egovernment: Toward openness and transparency, in: Smart Technologies for Smart Governments Security, privacy and risks within smart cities: Literature review and development of a smart city interaction framework Ethics aware object oriented smart city architecture Docile smart city architecture: Moving toward an ethical smart city Ethics of using smart city ai and big data: The case of four large european cities The right to the smart city The ethics of smart city (eosc): moral implications of hyperconnectivity, algorithmization and the datafication of urban digital society Towards ethical legibility: An inclusive view of waste technologies A neuro fuzzy system for incorporating ethics in the internet of things The Routledge Companion to Smart Cities Handbook of smart cities Citizens in the 'Smart City': Participation, Co-production Technology and the city: Towards a philosophy of urban technologies Linking artificial intelligence principles The global landscape of AI ethics guidelines Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for ai Towards a code of ethics for artificial intelligence Artificial intelligence policy: a primer and roadmap Creation of the national artificial intelligence research and development strategic plan Preparing for the future of artificial intelligence The national artificial intelligence research and development strategic plan: 2019 update, National Science and Technology Council (US), Select Committee on Artificial Beijing Academy of Artificial Intelligence Towards intellectual freedom in an ai ethics global community Ai ethics in the public, private, and ngo sectors: a review of a global document collection The IEEE global initiative on ethics of autonomous and intelligent systems IEEE standard review-ethically aligned design: A vision for prioritizing human wellbeing with artificial intelligence and autonomous systems G20 adopted human-centred AI principles The ethics of the ethics of ai The technological singularity Battlefield singularity: artificial intelligence, military revolution, and China's future military power Can machines read our minds? 
The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power: Barack Obama's Books of The artificial intelligence of the ethics of artificial intelligence: An introductory overview for law and regulation Explainable goal-driven agents and robots-a comprehensive review and new framework Apple card algorithm sparks gender bias allegations against goldman sachs International evaluation of an AI system for breast cancer screening Reply to: Transparency and reproducibility in artificial intelligence Transparency and reproducibility in artificial intelligence AI governance by human rightscentred design, deliberation and oversight: An end to ethics washing, The Oxford Handbook of AI Ethics Protecting classifiers against adversarial attacks using generative models Efficient defenses against adversarial attacks Defense against adversarial attacks using feature scattering-based adversarial training Generative adversarial networks for data augmentation in machine fault diagnosis Data augmentation generative adversarial networks Data augmentation in fault diagnosis based on the Wasserstein generative adversarial network with gradient penalty Ethical adversaries: Towards mitigating unfairness with adversarial machine learning Adversarial representation learning for synthetic replacement of private attributes Adversarial representation learning for private speech generation The threat of algocracy: Reality, resistance and accommodation The Economics of Artificial Intelligence: An Agenda Normative modes, in: The Oxford Handbook of Ethics of AI The future of work in the age of ai: Displacement or risk-shifting? Ethical issues in our relationship with artificial entities, in: The Oxford Handbook of Ethics of AI The nature, importance, and difficulty of machine ethics Robots, law and the retribution gap Accountability in computer systems Parameters optimization of deep learning models using particle swarm optimization Real-time adversarial attacks Algorithmic decision-making in AVs: Understanding ethical and technical concerns for smart cities Wild patterns: Ten years after the rise of adversarial machine learning Explaining explanations: An overview of interpretability of machine learning Explainable artificial intelligence (XAI), Defense Advanced Research Projects Agency (DARPA), and Web Integrating ethical values and economic value to steer progress in artificial intelligence Bonnefon, I. Rahwan, The moral machine experiment Qatar's national artificial intelligence strategy launched