title: Explainable AI for B5G/6G: Technical Aspects, Use Cases, and Research Challenges
authors: Wang, Shen; Qureshi, M. Atif; Miralles-Pechuán, Luis; Huynh-The, Thien; Gadekallu, Thippa Reddy; Liyanage, Madhusanka
date: 2021-12-09
When 5G began its commercialisation journey around 2020, the discussion on the vision of 6G also surfaced. Researchers expect 6G to have higher bandwidth, coverage, reliability, energy efficiency, lower latency, and, more importantly, an integrated "human-centric" network system powered by artificial intelligence (AI). Such a 6G network will lead to an excessive number of automated decisions made every second. These decisions can range widely, from network resource allocation to collision avoidance for self-driving cars. However, the risk of losing control over decision-making may increase due to high-speed, data-intensive AI decision-making beyond designers' and users' comprehension. The promising explainable AI (XAI) methods can mitigate such risks by enhancing the transparency of the black-box AI decision-making process. This survey paper highlights the need for XAI towards the upcoming 6G age in every aspect, including 6G technologies (e.g., intelligent radio, zero-touch network management) and 6G use cases (e.g., industry 5.0). Moreover, we summarise the lessons learned from recent attempts and outline important research challenges in applying XAI to building 6G systems. This research aligns with goals 9, 11, 16, and 17 of the United Nations Sustainable Development Goals (UN-SDG), promoting innovation and building infrastructure, sustainable and inclusive human settlements, advancing justice and strong institutions, and fostering partnership at the global level. The mobile network has been drastically revolutionised in the last few decades. The first-generation mobile network (1G) started in the 1980s and allowed people to make calls while on the move rather than from a fixed location. The second generation (2G) changed the transmitted signal from analogue to digital. It enabled services such as the Short Messaging Service (SMS), so that callers and receivers did not have to be "online" at the same time. The third-generation (3G) mobile network increased the data rate to the Mbps level, which accelerated access to essential Internet services such as web browsing. The fourth generation (4G) provides a much higher data rate, up to 1 Gbps, by fully integrating with all-IP packet-switched networks, so that mobile users can easily access data-intensive services such as sharing video through TikTok wherever connected. The ongoing fifth-generation mobile network (5G) technology supports services such as Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC), and Massive Machine-Type Communications (mMTC). 5G realises the Internet of Things (IoT) by significantly increasing the density (100x) of various connected devices with much higher data rates (10x) and lower latency (10x) than 4G. The 5G network is being commercialised and deployed across the world. Many organisations have started planning beyond 5G (B5G) to develop the next generation of wireless cellular networks (6G). Throughout this paper, B5G and 6G are used interchangeably as in the current literature [1]-[3], but 6G is used more often than B5G for simplicity.
6G will further extend connection coverage by achieving space-air-ground-sea integrated networks [4] to facilitate the Internet of Everything (IoE). The increased data rate of 6G will also support more data-intensive applications such as full-sensory digital reality. The super-reliable, low-latency communication that 6G provides is well-suited to mission-critical scenarios such as autonomous driving and smart healthcare. 5G has a highly softwarised network infrastructure thanks to the software-defined network (SDN) and network function virtualisation (NFV). Building on top of this 5G feature, fully automated network management will become feasible with the power of Artificial Intelligence (AI) in the 6G era, increasing the efficiency of network maintenance. More 6G features can be found in [5]-[8]. Above all, the key characteristic of 6G is that it is "human-centric" [6] rather than "machine-centric", as all the previous network generations mainly focused on improving network performance technically. This implies that all the connected "things" in the 6G network will be working intelligently for humans as a giant but smart black box. Needless to say, AI is the cornerstone of 6G networks, while eXplainable AI (XAI) is crucial to enhance trust and transparency for people using 6G networks. AI will play a critical role in realising 6G networks and their applications. There are several ways in which AI can be used in 6G, including the conventional use of AI for prescriptive, predictive, diagnostic, and descriptive analytics. Prescriptive analytics can be used for making decisions or predictions related to edge AI, such as cache placement, AI model migration, dynamically scaling network slices and adapting their service function chains, and the optimal automatic allocation of resources (e.g., spectrum, cloud, and backhaul). AI-based predictive analytics help predict future events, such as resource availability, user preferences, user behaviour, user locations, and traffic patterns, from data acquired in real time, and then proactively change the network. Proactive actions include fine-tuning resource allocation, deploying proactive security solutions, and pre-migrating edge services and edge AI models. Diagnostic analytics concerns detecting network anomalies, service impairments, and network faults, as well as the root causes of these faults, which ultimately helps enhance network security and reliability. Due to the high scalability of the 6G network in terms of users, devices, and services, AI-enabled automatic services are essential for 6G. Descriptive analytics heavily relies on historical data to enhance the service provider's and network operator's situational awareness. The applications include user perspectives, channel conditions, traffic profiles, network performance, and so on. Furthermore, handling, generating, and processing large volumes of data in real time and in a collaborative way is yet another complicated task that requires scalable AI. AI will also play a vital role in controlling and orchestrating the 6G network. For instance, the novel B5G/6G network control and orchestration concepts of Intent-Based Networking (IBN) and Zero-touch network and Service Management (ZSM) are primarily dependent on AI technologies. XAI is a promising set of technologies that increases the transparency of black-box AI models to explain why certain decisions were made.
This is especially true for the high-stake decisions made for 6G stakeholders, such as service providers, end-users, and legal auditors. XAI is the key to implementing the "human-centric" AI-powered 6G network. Fig. 1 shows that AI permeates all four layers of the AI-powered 6G network architecture proposed in [9]. From the bottom up, the first layer is the intelligent sensing layer. It is designed to collect data using various sensors (e.g., phones, watches, drones, or vehicles) in multiple scenarios (e.g., space, sea, road, sky, or factory). AI technology can make massive data collection a real-time, robust, and scalable process, for instance, by smartly utilising scarce spectrum resources and automatically reporting unreliable data events such as broken sensors. XAI can ensure that the whole process works as expected by providing additional information regarding the AI black-box model. For example, legal auditors may use XAI to check for privacy violations in the whole data collection process, which empowers the downstream AI system. The second layer of the AI-powered 6G architecture is the data mining layer. Due to the broad and diverse coverage of 6G networks, a massive amount of data will be collected from the intelligent sensing layer under a stringent latency requirement. Therefore, the objective of the data mining layer is to perform automatic feature engineering tasks, such as dimension reduction, so that only the most relevant part of the data is kept for follow-up processing in the intelligent control layer. This third layer utilises the filtered data for making decisions, such as resource allocation and network management, to ensure a level of system performance that meets the application requirements. For both the data mining and intelligent control layers, XAI is particularly helpful for service providers to diagnose the root cause of incorrect decisions by AI systems. As the top layer of the 6G architecture, the smart application layer interacts with end-users who are not technical experts in the various scenarios. For example, in the autonomous driving use case, when the AI system suggests turning right, XAI will provide more user-friendly information explaining that the right turn will save five minutes of journey time but involves more winding roads ahead. Instead of having decisions executed straight away, XAI will enhance the trust between stakeholders and the AI-powered 6G networks for prescriptive, predictive, diagnostic, and descriptive analytics.
Fig. 1. An illustration of the benefits (i.e., question-and-answer interactions) of introducing XAI to three typical stakeholders (i.e., end-users, legal auditors, and service providers) across all four layers [9] of the AI-powered 6G network.
Although there are many surveys on XAI and B5G/6G separately, there is a lack of a comprehensive survey that jointly explores the potential of XAI for implementing AI-enabled, human-centric B5G/6G networks. XAI has attracted the attention of many researchers since the Defense Advanced Research Projects Agency (DARPA) launched its XAI program in 2017 [10]. Das and Rad [11] compared and analysed commonly used XAI techniques in terms of their algorithmic mechanisms, taxonomies, and successful applications. Their paper proposed several promising future directions and challenges of XAI. However, the great potential of XAI in realising the "human-centric" 6G network is missing from existing XAI surveys. Saad et al.
[7] have broadly described the vision of 6G, which goes far beyond utilising more spectrum by including more technological trends and driving applications. Morocho-Cayamcela et al. [12] focus on the applications of AI to each main aspect of implementing B5G, ranging from wireless communications to e-health. Their work also mentioned the trade-off between interpretability and the accuracy of AI algorithms, but did not extend the discussion to how XAI can enhance trust in using 6G cellular systems. Porambage et al. [13] review the recent progress of 6G in the security and privacy areas, which will likely involve many high-stake decisions by AI systems. However, their contribution lacks a discussion of the importance and challenges of XAI for managing the risk of such high-stake decisions. Guo [14] carefully discussed the potential of XAI in the key enabling technologies for 6G, such as radio resource management, at the physical layer and the MAC layer. Besides, that work also proposed some initial plans for measuring the level of explainability, later formalised as the quality of trust (QoT) [16], offered to the users of 6G networks, especially for deep-learning-based 6G autonomy. Their paper lacked broader discussions on 6G, especially about the new use cases and the technical aspects that need XAI to demystify the decision-making process. As mentioned in the earlier subsections, there is a pressing need to introduce XAI into AI-powered 6G. Therefore, a comprehensive survey of state-of-the-art XAI and its potential in building future B5G or 6G networks with a holistic view will be helpful to guide researchers and practitioners. In Table II, we concisely summarise the comparison of important related survey papers. The gap in existing surveys is highlighted, namely the lack of a comprehensive analysis of XAI for developing a trustworthy, responsible, and transparent AI-powered 6G network. The main contributions of this paper are summarised as follows:
• Bridging the gap between XAI and 6G. Most of the existing surveys on XAI [11], [17], [18] are about pure AI applications such as natural language processing (NLP) and computer vision (CV). Discussions of XAI for 6G, which is the enabling infrastructure of future AI applications, are unfortunately missing. Similarly, many recent surveys on 6G [7], [12], [13] attempt to cover all possible enabling technologies and applications extensively, without a particular focus on the interactions between humans and 6G networks, where XAI can play an important role. This survey paper bridges this gap by comprehensively overviewing both XAI and 6G and their connections.
• Extending the scope of XAI for 6G. Compared with [14], [16], this paper extends the scope of 6G in which XAI can help. Specifically, this paper goes beyond smart radio resource management in the physical and MAC layers. Every key 6G technical aspect (e.g., network automation, security and privacy) and 6G use case (e.g., industry 5.0 and extended reality) is examined to investigate how XAI can help in enhancing transparency and trustworthiness for all 6G stakeholders. The relevant 6G and XAI standards, legal frameworks, and research projects are also reviewed. Moreover, this paper discusses several implementation challenges and possible solutions in applying XAI to 6G.
II. BACKGROUND
This section overviews the background of AI and XAI, which are prerequisites for understanding the potential of XAI for 6G.
This background information contains a brief history of the evolution of AI and XAI, the main algorithms, taxonomies, and successful applications. The background of AI is introduced first, showing that AI models are becoming complicated black boxes in the pursuit of higher classification/prediction performance. XAI is then highly desirable for AI system stakeholders to enable more trustworthy interactions.
A. The background of AI
AI tries to reproduce the qualities that we consider intelligent in a person using algorithms and rules. It is related to all the disciplines involved in that process. The main domains in AI are reasoning, planning, learning, communicating, perception, interaction, services, and ethics and philosophy [19]. This paper focuses on an AI branch called machine learning (ML), since this is where black-box models need to be explained. This section in particular will review the history, typical algorithms, and successful applications of AI. At the end, we briefly describe the high-stake decisions of AI applications in 6G that call for XAI.
1) History of AI: Alan Mathison Turing is considered by many to be the father of AI due to his substantial contributions at the field's beginning. His Turing test asks whether a machine can make itself indistinguishable from a human to a person interacting with it from another room. In his report "Intelligent Machinery", he also introduced some of the ideas by which computers could learn by themselves from their own experience [20]. The term AI was coined by John McCarthy during the eight-week founding conference of AI held in the summer of 1956 [21] at Dartmouth College, New Hampshire, United States. At that time, Marvin Minsky was so optimistic about the immediate future of AI that he predicted machines would be as efficient as humans in many tasks just a few years after the conference. In 1973, the prestigious British mathematician James Lighthill stated that, in games, machines would at best only reach the performance level of an experienced amateur. In those years, the UK government withdrew public funding from AI, and so did the US government, since they did not see short-term advantages in financing AI projects. This crisis around the mid-70s is known as the "AI winter" [22]. AI research restarted in the 80s; at that time, the government of Japan invested heavily in AI, and money started to be spent on the area again when companies realised that they could save millions of dollars. For example, the company Digital Equipment Corporation developed an expert system that ordered the right parts for clients' computer equipment based on a series of questions. This system processed more than 80 thousand orders, saving the company around 25 million dollars per year. From that moment, companies realised the potential AI had for their business and started investing again, although AI also suffered a smaller crisis in the 90s [23]. There has always been competition between machines and humans in games to see who is more 'intelligent'. Time has proven that the statements in James Lighthill's report were wrong, and nowadays machines are much better than humans at the most difficult games, e.g., chess, Go, or backgammon. Deep Blue, created by IBM, beat Garry Kasparov in 1997, an epic milestone for AI. Watson beat the previous champions of the famous TV show Jeopardy! in 2011. The program AlphaGo, based on reinforcement learning (RL), was able to defeat the Go master Lee Sedol four games to one in March 2016 [24].
The game of Go has more board combinations than the number of atoms in the known universe and, because of that, it was considered too difficult for AI for a long time. Nowadays, machines can beat humans in almost any online game, ranging from the simplest games like Atari titles to more recent and sophisticated video games such as DOTA 2 [25]. The exponential increase in computation power and memory since the 50s, together with the expansion of the Internet, has catapulted AI to a new level. It is also worth highlighting the development of Artificial Neural Networks (ANN) by Warren McCulloch and Walter Pitts in 1943, because they are the predecessors of deep learning (DL) and explainability. ANNs mimic the connections in the brain, and their extension, DL networks, has been a breakthrough applied to a vast number of areas. For example, the DL architecture VGG-16 is better than humans at recognising images [26], and AlphaGo is better at the game of Go than any professional player [27]. Nevertheless, at this point, machines only exceed human performance in particular areas. The ultimate goal of AI, however, is to create machines able to execute multiple tasks and to connect one area with another in the same way humans do. Thanks to DNNs, machines can surpass human experts' performance in those areas in which there is enough data and machines can execute millions of simulations to learn by themselves [27]. However, there is still a long way to go before reaching general AI. We expect that the future of AI will require joint efforts in connecting the different areas (e.g., connecting image recognition with language processing) to produce machines with a more powerful intelligence [28].
2) Typical AI algorithms: In the next lines, we discuss some of the typical ML algorithms, as they need XAI to demystify their black-box nature more than AI algorithms in other areas such as planning and reasoning. As shown in Figure 3, there are three general branches of ML:
• Supervised learning: Supervised learning learns a function to map the input (one or more variables) to the output (typically, one variable) using labelled data. It is mainly applied to two problems: regression, for predicting a real number, and classification, for predicting the category a given data point should belong to. Some typical supervised learning algorithms are introduced as follows. Linear regression tries to find the best line to estimate the relationship between the inputs and the output. Logistic regression uses a curve rather than a line and is used for binary classification [29], as its output lies in the range between 0 and 1. CART is a decision tree in which each node is a feature and the end nodes (leaves) are classes; instances follow one branch or another, starting at the root, depending on their feature values [30]. Random Forests are a collection of decision trees that belong to a category called Ensemble Methods; other examples of Ensemble Methods are Gradient Boosting Machines and AdaBoost [31]. ANN is based on connected units called neurons that emulate the functioning of the human brain. Each neuron is connected to other neurons by edges: a neuron receives a signal, processes it, and transmits it to other neurons. Each edge has an associated weight, and the process of learning involves calibrating those weights. DL methods are an evolution of ANNs; very similar, but with more hidden layers [32]. The K-Nearest Neighbour (KNN) model is an example of instance-based learning.
It does not build a model during training but simply compares the instance to be predicted with the other instances of the dataset based on a distance metric [33]. Naive Bayes is an archetypal example of statistical learning algorithms, like Linear Discriminant Analysis (LDA), in which a probability distribution model is created during the training phase [33]. Support Vector Machine (SVM) methods are based on finding the optimal hyperplane, i.e., the one that separates the instances (represented as points) with the highest margin. SVM has different kernels, such as Polynomial or Gaussian, to transform the data so that such a hyperplane can be formed [33].
• Unsupervised learning: Unsupervised learning discovers patterns in data that have not been labelled. There are three main unsupervised learning tasks. Clustering consists of creating groups of non-labelled data based on their similarities or differences. K-means clustering is a very typical algorithm in which k represents the number of groups, and all the instances are assigned to one of these groups based on the distance to each group's centroid, which is calculated as the average of its points. There are different kinds of distances, such as Euclidean, Manhattan, or Dynamic Time Warping [34]. Association rules are very useful in marketing; they are used to understand the relationships between products and to make recommendations. For example, if most customers that bought product X also bought product Y, the company will recommend product Y to the customers that bought X. Given a dataset, the Apriori algorithm [35] generates a list of the most frequent items and a list of the association rules by applying a general-to-specific search [36]. Dimensionality reduction is a popular technique to decrease the dimension of a dataset while maintaining as much information as possible. Principal Component Analysis (PCA) is a well-known algorithm using geometrical projections to convert correlated variables into new columns called Principal Components [34].
• Reinforcement learning: RL optimises an agent's actions in an environment by maximising the long-term expected reward. The agent gets positive feedback when the taken action leads to a reward, and negative feedback when the action leads to a penalisation. By doing this, the agent learns the optimal actions in an environment. The environment is generally represented as a Markov Decision Process (MDP), a mathematical framework for decision-making problems. RL is used in games such as chess or Go and can be used to model the human brain or optimise energy systems [37]. RL models can be classified into model-based (finding the optimal policy from a model that represents the environment) and model-free (which does not create any model and finds the optimal policy by trial and error). Alternatively, RL algorithms can be classified depending on whether they try to calculate the value function (a measure of how good each state is based on the expected rewards) or the policy (a mapping from states to actions). Lastly, RL has two kinds of methods: on-policy (they learn the value of the policy they use for taking actions) and off-policy (the behaviour they follow is not necessarily related to the policy they learn) [38]. Some of the popular RL algorithms are introduced as follows. Q-Learning is a model-free, value-based, off-policy algorithm that tries to find the best action for each state of the MDP. The "Q" stands for quality and represents the expected future reward of an action; a minimal tabular sketch of this idea is given below.
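To make the tabular idea concrete, the following Python sketch (our own toy illustration, not code from any cited work; the five-state chain environment and all hyperparameter values are invented for demonstration) applies the core Q-learning update, Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)), until the agent learns to walk towards the rewarding state:

import numpy as np

# Toy chain environment: states 0..4, actions 0 (left) and 1 (right).
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, N_ACTIONS = 5, 2
alpha, gamma, epsilon = 0.1, 0.9, 0.3   # learning rate, discount, exploration

Q = np.zeros((N_STATES, N_ACTIONS))     # Q-table: expected future reward per (s, a)
rng = np.random.default_rng(0)

def step(state, action):
    """Move left or right along the chain; reward 1 only at the goal state."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if rng.random() < epsilon:
            action = int(rng.integers(N_ACTIONS))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Core update: move Q(s, a) towards r + gamma * max_a' Q(s', a').
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state

print(np.round(Q, 2))  # the greedy policy (argmax per row) now prefers "right"

The learned table itself is inspectable, which is why tabular Q-learning is relatively transparent; the explainability problem arises when the table is replaced by a deep network, as discussed next.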
The "Q" stands for quality and represents the future reward expected from an action. Deep Q-Learning is a similar algorithm but it replaces the Q- It is a policy gradient method that does not use a model but rather, tries to learn the optimal policy directly. It uses an advantage function to calculate how good an action is, given a state and to mitigate the high variability in the gradient which helps to stabilize the algorithm [39] . Advantage Actor-Critic (A2C) is an approach combining policy-based and value-based RL models. It consists of two networks (actor and critic) that interact with each other. The actor selects actions and the critic evaluates how good they are [40] . 3) Deep Learning methods: It is worth introducing DL separately as it leaps many areas that traditional ML cannot improve further even with more data fed on. DL is also difficult to explain due to the high complexity of this learning model. Specifically, DL can learn representations of the data in an automated fashion without human intervention. DL algorithms seek to exploit the unknown structure in the input distribution to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features without using humans to craft those features [41] . According to Y. Bengio [42] , DL can extract representations from the data using multiple levels, where the higher levels are located in the defined features from the lower features defined in lower levels. For example, the first layers could distinguish between a cat and a dog and the last layers could distinguish which breed the dog is. Some of the most commonly applied DL models are described as follows: • Convolutional Neural Networks (CNN): They were developed in 1988 by Yann LeCun and are especially good for image recognition, object detection, and also NLP. CNNs are an improved version of the multilayer perceptron composed of an input layer (entry), hidden layers (inbetween), and an output layer (end). The hidden layers consist of a sequence of different kinds of layers such as convolutional, pooling, normalization, and full-connected layers [32] . [44] . There are four prominent subfields of AI-based on the nature of data and need. • Automated Speech Recognition (ASR): ASR is the interdisciplinary scientific field within AI that develops techniques to recognise and translate spoken language into text by computers. ASR and NLP share the common goal of understanding the human language, but both differ as to the input, where ASR input is the raw human voice. Some of the well-known use cases of ASR are voice search, automated transcription, auto voice translator, virtual assistant, and home automation. • Natural Language Processing (NLP): NLP is the sub-field of AI that concerns interactions between computers and human language to develop the computational capability to understand and derive the meaning of the contents of documents. NLP is closely related to automated speech recognition, whereas ASR can be seen as having an additional step of converting voice data into textual format. Some typical use cases for NLP are chatbots, topic identification, text summarization, machine translation, text classification, sentiment analysis, and question answering. • Computer Vision (CV): CV is the interdisciplinary scientific field within AI that seeks to develop techniques that enable the machine to gain a high-level understanding from digital images and videos. 
There are four prominent subfields of AI based on the nature of the data and the need.
• Automated Speech Recognition (ASR): ASR is the interdisciplinary scientific field within AI that develops techniques for computers to recognise and translate spoken language into text. ASR and NLP share the common goal of understanding human language, but they differ as to the input: the input of ASR is the raw human voice. Some of the well-known use cases of ASR are voice search, automated transcription, automatic voice translation, virtual assistants, and home automation.
• Natural Language Processing (NLP): NLP is the subfield of AI that concerns interactions between computers and human language, developing the computational capability to understand and derive the meaning of the contents of documents. NLP is closely related to ASR, which can be seen as adding the step of converting voice data into textual format. Some typical use cases for NLP are chatbots, topic identification, text summarisation, machine translation, text classification, sentiment analysis, and question answering.
• Computer Vision (CV): CV is the interdisciplinary scientific field within AI that seeks to develop techniques that enable the machine to gain a high-level understanding from digital images and videos. In comparison to NLP, the input data are in the form of pixels instead of words. The popular use cases of CV are autonomous vehicles, facial recognition, real-time sports tracking, image segmentation of scans in healthcare, and detection of crop rust via image analysis.
• Recommender Engine: A recommender engine/system is designed to suggest items such as products, videos, or services by predicting the rating or preference a user would give to an item. The prediction of preference can be made based on several approaches, out of which three are well known. The first approach, called content-based filtering, exploits the similarity of the content describing the items to generate recommendations (a toy sketch of this approach is given after this list). The second approach, called collaborative filtering, exploits user data, i.e., users who liked similar items in the past will like similar items in the future. The third approach, simply called a hybrid recommender, combines the other approaches. Personalised merchandising, personalised content, and job recommendations are some of the popular use cases.
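The following minimal Python sketch (our own toy illustration; the films and the binary genre features are invented) shows the content-based filtering idea: items and the user profile are represented in the same feature space, and unseen items are ranked by cosine similarity to the user profile:

import numpy as np

# Item profiles over simple binary content features: [action, comedy, sci-fi, romance].
items = {
    "Film A": np.array([1.0, 0.0, 1.0, 0.0]),
    "Film B": np.array([1.0, 0.0, 0.0, 0.0]),
    "Film C": np.array([0.0, 1.0, 0.0, 1.0]),
}

# A user profile built from the items the user liked before (an action/sci-fi fan).
user_profile = np.array([1.0, 0.0, 1.0, 0.0])

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rank items by the similarity between their content and the user profile.
scores = {name: cosine(user_profile, vec) for name, vec in items.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")   # Film A ranks first, Film C last

A collaborative-filtering variant would replace the hand-crafted genre features with similarities mined from other users' ratings; hybrid recommenders combine both signals.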
B. The background of XAI
1) Motivations of XAI: The main problem with the existing AI algorithms, especially the most accurate ones, is their black-box nature. Their high internal complexity justifies the recent interest in XAI, which develops new methods to illustrate how ML models work. XAI will also help users adopt and trust ML models and incorporate them into their work [10], [18]. The difference between the AI and XAI pipelines further highlights the motivation for XAI. Both pipelines begin with training data as the input. However, the difference emerges when each model starts learning from the training data: it is here where the two separate. The XAI pipeline trains a model that is not agnostic of how the model learns, unlike the AI pipeline. In other words, training an AI model is based on the black-box approach, whose internal workings are hidden and not understood. In contrast, the motivation of the XAI model is to produce a white-box model, which contains information on the inner workings of the algorithm. After the training process, the AI-based model can only deliver automated decisions, without any support in the model for explaining how the algorithm made the decision. The XAI model contains information on its inner workings, but cannot explain a decision well to an end-user without incorporating an explainable interface that supports the generation of human-understandable explanations. In theory, all AI models can be converted into XAI models by adding layers that support explanations both inside the trained model and in the interface part. Based on its interface part, an XAI algorithm can support both interpretability and explainability, or just one of them, depending on the objective.
2) Explainability, Interpretability, and Accountability: In the following lines, we explain the difference between these three popular concepts:
• Explainability: Explainability is the ability to provide extra details to clarify the internal functioning of an AI/ML model for a given audience. It is the characteristic of providing information in the form of statements that clarify, give context to, or justify a particular prediction for a given audience. An example of explainability is a complex model that can nevertheless provide statements of why it has made a particular decision. Note that the target audience is key for explainability, since the type of explanation and how easy something is to understand largely depend on the person who is receiving the information. What may seem obvious to a group of doctors can be completely incomprehensible to a layperson.
• Interpretability (Transparency): Interpretability, also known as transparency, is basically how easily the AI/ML model per se can be understood. This property purely depends on how the model is inherently designed. For example, a simple rule-based model like a decision tree, which a human can easily follow, has good interpretability. Explainability requires an extra layer of skill to provide customised knowledge that a specific user can understand, whereas interpretability focuses on the essence of the model internally. As to the differences between explainability and interpretability, some researchers claim that interpretable ML (better interpretability) rather than XAI (better explainability) is the preferable option [45], [46]. They argue that using models to explain black-box models may lead to errors. However, XAI and interpretable ML do not exclude each other, and sometimes one approach is more suitable than the other. As a suite of algorithms, XAI clarifies and simplifies the internal logic of black boxes and is a great instrument to know whether or not they can be trusted [47].
• Accountability: The term accountability is essential in data protection, and it spans many disciplines such as finance and accounting. To restore trust in financial institutions after the high-profile scandals of the 90s, institutions had to become more accountable. Accountability refers to making companies and individuals responsible for their actions. For example, if an accountant does not detect a clear anomaly in a company, the accountant can be held responsible and face legal consequences. Furthermore, companies have to be examined periodically by external auditors. For XAI in particular, accountability is related to providing explanations to a given audience to justify certain actions for which someone is responsible. If a person is discriminated against by an algorithm, that person deserves an explanation. To provide such justification, the people in charge of the system need to be accountable. This means companies and organisations have to make an effort to align their technology with the principles of the different regulations, such as GDPR [48].
3) Taxonomy of XAI algorithms and applications: Based on different criteria, XAI methods can be classified into several arrangements [49]-[51], which can be overlapping or otherwise. In this section, we discuss the taxonomy of XAI inspired by [52] as well as the popular XAI methods.
a) Model-Agnostic vs Model-Specific: Model-agnostic methods are the ones that do not consider the internal components of the model (i.e., model weights and structural parameters). Therefore, they can be applied to any black-box approach. In contrast, model-specific methods are defined using the parameters of the individual model, such as interpreting the weights of a linear regression or using rules inferred from a decision tree that are specific to the trained model [53]. Model-agnostic methods have some advantages [54], such as the flexibility for developers to choose, for generating interpretations, any ML model different from the actual black-box model that generates the decisions.
b) Local vs Global: Based on the scope of the explanations provided, methods can be classified into two classes: local and global.
Local interpretable methods use a single outcome, or the particular prediction or classification results of the model [47], to generate explanations. In contrast, global interpretable methods use the entire inferential ability of the model, or the overall model behaviour [55], to generate explanations. In local interpretable methods, only specific features and characteristics are essential. For global methods, feature importance can be used to explain the general behaviour of the model.
c) Surrogate vs Visual Aid: A popular way to explain a black-box model is to apply an interpretable approximate model that stands in for the black-box model when explaining decisions. This interpretable approximate model is called the surrogate model; it is trained to approximate the predictions of the black box and is later used to draw explanations interpreting the black-box model's decisions. An example of a black-box model can be a deep neural network (DNN), whereas any interpretable model, such as a decision tree or a linear model, can be a surrogate. Besides surrogate models, visual aids help to generate explanations in a more presentable way, showing the inner workings of many models.
d) Pre-model vs In-model vs Post-model: XAI can be applied throughout the entire development pipeline of the model. The goal of pre-modelling explainability is to describe the dataset to gain better insights into the data used to build a model. The general objectives of the pre-model stage are to perform data summarisation, dataset description, explainable feature engineering, and exploratory data analysis. Google Facets is an example of pre-model explanations that enable learning patterns from large amounts of data. In contrast, the goal of in-model explainability is to develop inherently explainable models instead of generating black-box models. Methodologically, there are different strategies to construct in-model explanations. The most obvious is to adopt an inherently explainable model, such as linear models, decision trees, and rule sets. However, some effort is needed to generate explanations using these methods, like picking important features. Other approaches go beyond inherently explainable models, such as hybrid models, joint prediction and explanation, and explainability through architectural adjustments. In the hybrid approach, complex black-box methods are coupled with inherently explainable models to devise a high-performance yet explainable model, such as combining the deep hidden layers of a neural network with a KNN model [56]. Also, the model can be trained to provide a prediction and the corresponding explanation jointly [57]; the idea here is to produce a training dataset where each decision is supplemented with the user's rationale for the decision. Lastly, explanations through architecture adjustments focus on the deep network architecture to enhance explainability, such as pushing higher-layer filters to represent an object part, as opposed to a mixture of patterns [58]. These in-model XAI approaches have two main shortcomings: first, they assume the availability of explanations in the training dataset, which is often not the case. Second, the explanations generated by these methods are not necessarily reflective of how the model predictions were made, but rather of what humans would like to see as an explanation. Post-model explainability methods extract explanations from a pre-developed model that is not inherently explainable; a minimal surrogate-model sketch in this spirit is given below.
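To illustrate the surrogate idea introduced under c) above, the following Python sketch (our own toy illustration; the synthetic dataset and the gradient-boosting black box are assumptions, not from the paper) trains a shallow decision tree to mimic a black-box model's predictions and reports how faithfully it does so:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# 1) Train the "black box" whose decisions we want to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# 2) Train an interpretable surrogate on the black box's *predictions*,
#    not on the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3) Measure fidelity (how often the surrogate agrees with the black box),
#    then read off its rules as a global explanation.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Fidelity to the black box: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))

The fidelity score matters: the tree's rules only count as an explanation of the black box to the extent that the tree actually reproduces its behaviour.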
These popular post-hoc XAI methods generally operate over four key characteristics: the target, what is to be explained concerning the model; the drivers, what is causing the decision to be explained; the explanation family, how an explanation is going to be presented to a user; and the estimator, the computational process generating the explanation [18]. The following popular XAI techniques are summarised in Table III in terms of their corresponding XAI types under the different taxonomies, along with the types of AI algorithms they can explain. Among model-agnostic XAI strategies, LIME [47] is one of the most popular post-hoc algorithms; it explains an instance prediction (the target) with input features as the drivers, importance scores as the explanation family, and local perturbations of the model input as the estimator. LIME fits a linear interpretable model, a surrogate model, in the local area as a local approximation to explain a prediction. Because of the local approximation, LIME works on all black-box models and data types (e.g., text, tabular data, images, graphs). SHAP [59] fundamentally differs from LIME in terms of how the importance scores are obtained and generally performs better than LIME. In SHAP, the contribution of a feature towards a decision is estimated by Shapley values, a classic method for estimating marginal contributions. Moreover, aggregated local explanations can be used to generate global explanations. There are proposed optimisations of SHAP's computational complexity, such as TreeSHAP [60] and Deep SHAP [59], but, as their names imply, they are not model-agnostic. Counterfactual explanation is another method that is available in both model-agnostic [61] and model-specific [62] variants. It explains the prediction of the predictor algorithm by finding the slightest change in the input feature values that flips the original prediction. For instance, if changing the BMI of a person flips the original prediction from ill to healthy, then the BMI value is indicative of the original prediction, which yields a human-friendly counterfactual explanation. The explanation is straightforward, but there can be multiple possible explanations; thus, it becomes tricky to know which is the most suitable, the simplest one or a more complex one (i.e., a combination of several features). Layer-wise Relevance Propagation (LRP) [63] is an algorithm designed to explain a DNN under the assumption that the classifier can be decomposed into different layers, making it a model-specific method. LRP is designed with the intuition that certain layers of inputs are relevant for the prediction. Furthermore, to gain insight into which neurons are significant, the activation scores of each neuron are considered through a backward pass, eventually attributing relevance to the input data. It is generally applied to image data so as to highlight the meaningful pixels that enable a certain prediction.
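As a concrete taste of such post-hoc, importance-score explanations, the short sketch below (our own toy illustration; it assumes the open-source shap package and a synthetic dataset, neither of which comes from the paper) computes Shapley-value attributions for a single prediction of a tree-based black box:

import shap  # pip install shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer implements the TreeSHAP optimisation for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # explain one instance (local scope)

# One score per feature: positive values push the model towards the predicted
# class, negative values push against it.
print(shap_values)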
Explainable Reinforcement Learning (XRL) is a promising but challenging research branch of XAI [64], as an RL model often contains an enormous number of states and actions with a complex reward system. Nevertheless, XRL could accelerate the RL design process by helping developers debug RL systems. Heuillet et al. [64] present a rigorous classification of XRL methods based on the type of explanation (text or images), the level of explanation (local if only predictions are explained, global if the whole model is explained), and the algorithm being explained. Specifically, we present the following three recent XRL algorithms:
• Programmatically Interpretable Reinforcement Learning (PIRL) is an example of a global-intrinsic method [65]. The idea of PIRL is to use a programming language much closer to the human way of thinking to emulate the behaviour of the DRL model. PIRL uses a framework called Neurally Directed Program Search (NDPS) to learn the behaviour of the DRL model by imitation learning. Thus, there are two steps: the construction of the DRL model and the extraction of knowledge from it to create a sequence of instructions. The PIRL instructions are not as accurate as those of the neural network, but they can be quite close and much more understandable. The PIRL model has been applied successfully to The Open Racing Car Simulator (TORCS) [66].
• Hierarchical and interpretable skill acquisition in multi-task RL [67] is an example of the intrinsic-local category. This approach consists of presenting a policy with high-level actions as a sequence of simpler actions, because this is more familiar to humans. It implements a two-layer hierarchical policy based on the actor-critic algorithm and was used to explain a multi-task RL model playing the game Minecraft. The model also uses a stochastic temporal grammar to capture the relationships between the actions when creating the hierarchical policy. Humans just say "park the car" instead of defining all the actions related to the steering wheel, clutch, accelerator, and brake; in the same way, before you move an object, you need to find it. This method presents high-level instructions such as "Stack blue", where "Stack blue" is composed of "Find/Get/Put blue" and, at the same time, "Find blue" is composed of multiple lower-level actions such as "Go left", "Move forward", and "Turn right".
• Structural Causal Models (SCM) are a very clear way to represent cause-effect relationships between events. Madumal et al. [68] proposed a framework, falling into the post-hoc local category, based on SCMs to explain the behaviour of model-free RL agents. They use a directed acyclic graph (DAG) in which the nodes represent the states and the edges the actions; by traversing the graph, it can be observed which actions lead from one state to another (a toy illustration is given after this list). The process has three main steps: creating the DAG; using multivariate regression models to approximate the relationships using the minimum number of variables; and generating the explanations by analysing the variables of the DAG to answer the questions "Why action A?" and "Why not action B?". The latter question is answered by simulating the counterfactual in the DAG. In their research, they created a model based on causal structures to evaluate six domains using six different RL algorithms, including playing Starcraft II. They also conducted a study in which a group of 120 people evaluated the quality of the explanations, and the participants preferred the explanations based on causal models to the other baselines.
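The following Python sketch (our own toy illustration, not the authors' implementation; the states and actions are invented, and it assumes the open-source networkx package) shows the underlying state-action DAG idea: once states are nodes and actions are labelled edges, "Why action A?" can be answered by tracing the chain that the action opens towards a goal state:

import networkx as nx  # pip install networkx

# States as nodes, actions as labelled edges of a DAG.
G = nx.DiGraph()
G.add_edge("no_resources", "has_wood", action="collect_wood")
G.add_edge("has_wood", "has_barracks", action="build_barracks")
G.add_edge("has_barracks", "has_army", action="train_soldiers")

def why(start, goal):
    """Explain the next action by the causal chain it starts towards the goal."""
    path = nx.shortest_path(G, start, goal)
    steps = [G.edges[u, v]["action"] for u, v in zip(path, path[1:])]
    return (f"Why {steps[0]}? Because it leads, via {' -> '.join(steps)}, "
            f"from '{start}' to '{goal}'.")

print(why("no_resources", "has_army"))

A "Why not action B?" answer would simulate the counterfactual, i.e., check which goal states remain reachable (or become unreachable) when the corresponding edge is removed from the graph.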
4) XAI stakeholders in 6G: Nearly every sector has a demand for automated algorithmic decision-making, and this demand is evolving towards supplementing decisions with explanations generated by the XAI model. With the upcoming 6G making Internet bandwidth higher and available to almost every device, the demand for AI within the ecosystem will be enhanced by XAI. However, the question remains of who needs XAI and what level of explanation is reasonable. It is also vital to note that different stakeholders have different expectations of the explanations [69]; based on the user requirements of XAI [70], stakeholders' demands can be broadly classified into three categories.
• The demand of the service providers, who need to identify problems or bugs within the system that produces a decision and to improve performance by troubleshooting the decision-making process. Service providers can be system designers, data scientists, AI/XAI researchers, software developers/testers, etc.
• The demand of the end-users, who would be interested in understanding the decision for usage and application purposes [71]. For the end-user, the interface of the explanations is essential: it should explain the decision in the form of a story that the end-user can easily understand [72]. End-users can be businesses, non-technical people, consumers of technology, and policymakers.
• The demand of the legal auditors, who would be interested in auditing the legal compliance of the automated decision-making algorithm. Here, the legal auditors would look for confirmation that ensures compliance, such as that no racial discrimination or gender bias occurred while approving a loan application. These stakeholders can be auditors and other legal professionals.
For instance, when an image is classified as a cat instead of a dog, the end-user will prefer a high-level explanation, such as the difference in the tail. In contrast, the service providers would be interested in a more thorough explanation, such as pixel colours and other numerical differences between the images of cats and dogs in the training data. Likewise, the legal auditors would ensure compliance during the decision-making process, such as verifying that racial profiling or skin tone did not contribute to decision-making during a legal process. We also summarise the XAI requirements for the different 6G stakeholders in Table IV.
III. XAI FOR 6G TECHNICAL ASPECTS
In this section, we cover the following primary technical aspects of 6G networks: intelligent radio, security, privacy, resource management, edge network, and network automation. For each technical aspect, we first introduce the background and motivation of its importance in 6G. Then, besides the technical requirements, the prospective challenges of developing regular AI/ML algorithms in wireless networks are analysed. Finally, we explain how XAI can build trust between humans and AI-enabled machines based on its capability to support human-understandable explanations.
A. Intelligent Radio
1) Introduction: Intelligent radio, at the intersection of AI and cognitive radio, has recently attracted significant attention for solving spectrum problems, including access, monitoring, and management. The rise of modern communication systems with 5G, B5G, and towards 6G has extended radio services to various industrial domains, exposing several challenging issues and complicated problems in wireless communications. Undeniably, the unprecedented advance of AI algorithms, including data mining and ML, has been exploited to address many complex problems of intelligent radio in wireless networks. It is feasible to use AI algorithms with automatic learning models to effectively handle channel modelling, intelligent spectrum access, physical layer design, and other network management issues in wireless communications [9].
The emergence of ML, and especially DL in the era of big data, has enabled the discovery of essential and previously unexplored radio characteristics and has boosted the progress of wireless and networking technologies with new architectures and novel analyses of pyramid structures.
2) Requirements and challenges: The extremely high data rates and low latency of massive machine-type wireless communications are recognised as key requirements of 6G. They can be achieved with an advanced physical layer design, wherein several fundamental signal processing and analysis tasks (e.g., source coding, modulation, orthogonal frequency-division multiplexing (OFDM) modulation, and multi-input multi-output (MIMO) precoding) are powered by AI algorithms [8]. These tasks are typically deployed in the appropriate modules, which follow a forward procedure at the transmitter and an inverse procedure at the receiver. Previously, numerous intelligent radio signal processing approaches were studied with traditional ML algorithms, where expert knowledge of the relevant domains is needed to fine-tune radio features and learning models. Being superior to conventional ML, DL has recently been applied in the intelligent radio area to improve performance significantly, thanks to its great ability to deal with large, noisy, and confusing raw datasets of radio signals [73]. For example, the accuracy of automatic modulation classification in 5G was improved with CNN architectures while keeping a reasonable complexity [74]. Although DL can extract the underlying features from raw radio signals at multi-scale representations to learn complex discrimination patterns, it usually yields black-box models with insufficient interpretability and weak explainability [75]. Consequently, understanding why and how an AI model predicts outcomes is the key to helping network engineers and communication system designers improve system performance and manage operational risks sustainably.
Fig. 4. An example of XAI for intelligent radio: XAI can explain abnormal phenomena in wireless networks to end-users through ordinary explanations and to system engineers through specialised analysis.
3) How XAI can help: Integrating AI/ML algorithms with wireless communication systems has been found to address many challenging tasks in 6G networks [76], such as signal identification, modulation classification, channel modelling, and radio resource allocation, which accordingly helps improve the performance of intelligent radio. Radio signal identification and modulation classification are usually formulated as a multi-class classification problem, where numerous ML-based approaches have been studied using feature engineering techniques (including handcrafted feature extraction and feature selection) and traditional classification algorithms (e.g., decision trees, random forest, KNN, ANN, SVM). With this signal analysis strategy, it is possible to examine the impact of every feature, such as higher-order cumulants, on the overall system performance, which helps in fine-tuning the features to improve the classification. Humans can comprehend interpretable ML algorithms like decision trees and linear regression. However, these models become complex when the considered number of features increases, making them less understandable by humans. The challenge here is the interpretability-accuracy trade-off: the greater the interpretability, the less likely the model is to be highly accurate, and vice versa. For instance, a decision tree can improve the overall performance of radio waveform recognition in wireless communications by using more descriptive time and frequency features, but its complexity increases as the tree grows more branches (the sketch below illustrates how such a tree stays readable only while it is shallow).
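The following Python sketch (our own toy illustration, not from the paper; the synthetic "waveforms" and the two statistical features are invented assumptions) trains a shallow decision tree on simple time-domain features of two signal classes and prints its rules, which remain human-readable precisely because the tree is kept small:

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)

def features(signal):
    """Two simple descriptive features: envelope variance and fourth moment."""
    return [np.var(np.abs(signal)), np.mean(signal ** 4)]

X, y = [], []
for _ in range(200):
    carrier = np.cos(2 * np.pi * 20 * t + rng.uniform(0, 2 * np.pi))
    am = (1 + 0.5 * np.cos(2 * np.pi * 3 * t)) * carrier   # amplitude-modulated
    noise = 0.1 * rng.standard_normal(t.size)
    X.append(features(carrier + noise)); y.append(0)        # constant envelope
    X.append(features(am + noise)); y.append(1)             # varying envelope

# A shallow tree stays interpretable; deeper trees gain accuracy but lose readability.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["envelope_var", "fourth_moment"]))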
Despite being superior to conventional ML algorithms, DL has a black-box nature that exposes a lack of explainability. For example, with the multi-class modulation classification using a CNN in [77], many ambiguous points remain unexplained, such as which modules in the architecture help to improve the accuracy and why some modulations show better performance than others under the same conditions. In this context, XAI, which can provide explanations concerning automated decisions and predictions, is recommended because it offers profound insights [78]. Those insights can help system operators and network designers solve unexpected problems, which in turn guarantees the reliability of the network and enhances the overall performance of 6G services in communication systems [79].
B. Trust and Security
1) Introduction: Since complex 6G networks may contain several heterogeneous, dense sub-networks intelligently connected with cloud-based infrastructures, they may expose trust and security problems at multiple network connection levels. In this context, 6G communication systems should proactively and automatically detect threats, with intelligent risk mitigation and self-sustaining operation. To this end, AI-based trust and security become promising solutions to accurately identify and quickly respond to potential threats in an automatic manner [80]. Besides some new threats, 6G networks must cope with existing security issues of former-generation networks, e.g., in SDN, multi-access edge computing (MEC), and NFV. A distributed network that relies on the expansion of device-to-device communication with mesh networks and multi-connectivity is vulnerable and sensitive to the attacks of malicious parties. A hierarchical security protocol can be suitable for wide-area network security and sub-network communication security. Multiple radio units can be attacked through user- and control-plane microservices at the edge in coexistence scenarios of a centralised radio access network (RAN) and distributed core functions. From the perspective of AI-enabled 6G achieving full automation, ML systems may become the target of several data-based and model-based security threats [81], such as data injection, data manipulation, model evasion, and model modification.
2) Requirements and challenges: Along with the vast expansion and innovation of 5G and 6G, including vertical and horizontal dimensions with deeper integration of modern techniques, future services and applications will pose strict requirements concerning the trust and security of communication systems. To guarantee high-quality services, the latency criteria of security mechanisms should be taken into consideration in enhanced ultra-reliable, low-latency communications (eURLLC) [82]. Moreover, high reliability will demand several extraordinary security solutions to maintain the availability of service operation effectively. Extreme data rate transmission can disclose some traffic processing security issues, e.g., in traffic analysis, AI/ML-related processing flows, and pervasive encryption.
These existing issues can be partly handled with distributed security solutions, where raw data and information should be processed locally and on the fly in decentralised systems, or even in partitioned parts of a network. Distributed ledger technology (with some distinctive features such as transparency, redundancy, and security) and ultra-massive machine-type communication (umMTC) can be applied to satisfy security requirements. However, implementing and integrating AI/ML algorithms on resource-constrained devices is still challenging, as is multi-threat analysis on big datasets. Some other issues may arise from AI-driven security solutions: the responsibility for mistakes made by AI, the scalability and feasibility of AI models in diversified storage and computing infrastructures, and the vulnerability of AI models in distributed systems [83]. For example, uploading local parameter sets from edge devices to a federated centre and broadcasting an updated global model to the devices can become the target of poisoning attacks.
3) How XAI can help: AI/ML algorithms have been applied to detect security threats, classify attack types, and make some responses to ensure trust and security, with self-configuration covering self-monitoring, self-protecting, and self-healing in communication networks. To handle the massive amount of raw data generated in Internet of Everything (IoE) networks, distributed AI/ML techniques can accelerate security control and analytics. Numerous security models have been proposed by exploiting AI/ML models with supervised learning, unsupervised learning, and RL; their advantages are higher accuracy and diversity. Primarily, advanced ML algorithms and DL models can deal with different cybersecurity problems [81]: poisoning attacks that tamper with the training data using malicious samples, evasion attacks that inject disorder and outliers into the testing data to circumvent the learned model, API-based attacks that pilfer prediction outcomes, and physical infrastructure attacks and communication tampering that interfere with communication-computing connections and shut AI systems down [84]. However, most existing ML-powered security mechanisms are unable to explain their final decisions and response actions. Concretely, they fail to indicate how a system achieves threat detection, let alone more precise attack classification, and how an action accordingly responds in time to protect networks from cyberattacks. Besides, the usage of AI/ML algorithms to address security problems raises a problematic concern about the trustworthiness of the components inside the ML models [85].
Fig. 5. An example of XAI for data privacy: XAI-based data analytics can provide specialised analysis and professional explanations to a cloud-based service provider regarding undesirable data privacy issues (e.g., theft, inversion, adversarial manipulation, and membership inference attacks) reported by end-users.
Transparency and visibility are essential to allow system supervisors and security experts to thoroughly understand the XAI model, instead of a black-box model, to make decision-making accountable and to conduct self-protecting strategies accordingly.
Because security threats and cyberattacks can happen suddenly and without any pre-warning signs, explainable RL models that can identify the right time of incidents offer an effective way to counteract threats actively.

C. Privacy

1) Introduction: In this era, the digital data generated and collected by a multitude of digital devices is valuable and regarded as the key to improving service performance. Data leakage can violate users' privacy; this can be prevented through comprehensive privacy-preserving algorithms. As the number of end devices progressively increases along with high data variety, transmitting data over wireless networks, storing data in storage infrastructures, and processing data in computing infrastructures become burdensome once privacy protection mechanisms are inserted [86]. The number of wireless connections in 6G may be up to 1,000 times greater than in 5G. Therefore, ensuring high data privacy without service performance degradation is challenging (i.e., obtaining a good trade-off between the enhancement of data privacy and the preservation of service performance in terms of accuracy and processing speed). Moreover, the massive amount of data serving the learning process of statistical AI/ML models poses a significant challenge for user privacy, which has attracted growing attention from industrial and academic communities. 2) Requirements and challenges: With the increasing number of smart devices that allow massive sensitive data to be collected effortlessly, data privacy concerns in 6G would be significant due to the lack of data transparency [1]. Intelligent applications running on mobile devices and at the edge of the network can become vulnerable targets of privacy attacks. Many problematic privacy concerns will become more severe in the era of big data with its 5V features: volume, velocity, variety, veracity, and value. Adding privacy protection mechanisms will increase the communication and computational cost and, unless addressed, may not ensure the high quality of 6G-based applications and services. Therefore, privacy protection mechanisms should be designed to be cost-efficient besides detecting potential privacy threats automatically and effectively [7]. Due to the diversity and variety of applications and services, addressing privacy concerns is a significant challenge. First, easier data acquisition and accessibility can cause regulatory difficulties around data storage, permission, and utilisation. Second, developing and integrating AI/ML algorithms into advanced privacy protection mechanisms may overload resource-constrained devices. Finally, the balance between the high accuracy of services and the robust protection of user privacy is noteworthy, especially from the perspective of data ownership, access authorisation, and other regulations. 3) How XAI can help: Intelligent wireless communication with AI-enabled big data analytics for 6G can be attained comprehensively with ML that analyses massive amounts of data automatically. However, some privacy issues still remain due to the lack of transparency of regular ML models. When adopting ML models, data privacy violations may occur in different ways: the output of ML models can be combined with freely accessible supplementary information to reconstruct personal information, and personal data can be retrieved from public information via a model inversion process [86].
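One standard mitigation against such leakage is the differential privacy mechanism taken up in the next paragraph. The sketch below is a hedged, textbook-level illustration of its core primitive, the Laplace mechanism, which perturbs an aggregate query with noise calibrated to the query's sensitivity; the epsilon values and user flags are illustrative only.

```python
# Sketch: the Laplace mechanism, the textbook differential-privacy primitive.
# Noise scale = sensitivity / epsilon; smaller epsilon = stronger privacy.
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private version of a numeric query result."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release how many of 10,000 users enabled location sharing.
location_sharing = rng.integers(0, 2, size=10_000)   # illustrative user flags
true_count = int(location_sharing.sum())

# A counting query changes by at most 1 if one user is added or removed,
# so its sensitivity is 1.
for epsilon in (0.1, 1.0, 10.0):
    noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=epsilon)
    print(f"epsilon={epsilon:>4}: true={true_count}, released={noisy:.1f}")
```

The same accuracy-privacy trade-off appears when noise is injected into training data or gradients, as discussed next.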
As a concrete example, the daily activities, routines, and passwords of users can be retrieved by applying the model inversion mechanism to the sensory data of smartphones. In this context, a differential privacy mechanism can be studied by adding noise to the training dataset; however, reduced model accuracy may be the trade-off of this solution. In addition to data, ML models can become prone to privacy attacks at different stages of model learning [87], through which attackers can retrieve training data, extract trained models, and modify models to cause mispredictions. Poisoning attacks modify the model by changing training data in the training stage. Meanwhile, reverse and adversarial attacks focus on retrieving the data from the trained model and inducing prediction errors with adversarial examples. In [88], a transfer protocol was combined with a distributed system to protect the learned model and test data from privacy attacks, which in turn prevents privacy attacks based on model similarity measurement. In another work [89], a privacy-preserving linear regression-based collaborative approach was developed to detect intrusions early in vehicular ad-hoc networks (VANETs). Since many statistical ML models are black boxes with little interpretability, analysing their internal components to find privacy weak points is problematic. Besides, several privacy protection methods have improved accuracy with advanced ML algorithms and DL architectures for different applications, such as Android malware detection and biometric template protection. However, there usually exists a trade-off between the accuracy and the transparency of ML models [90]. For example, despite detecting malware more accurately than decision trees, a DL model with LSTM has low interpretability and is barely intelligible to a human.

D. Resource Management

1) Introduction: Resource management is challenging due to the inherent scarceness of radio resources in wireless communications. It becomes even more difficult in advanced and complicated networks like 6G, wherein the number of smart devices increases rapidly and the devices are involved in different IoT networks (such as cellular IoT, cognitive IoT, and mobile IoT). The overall system performance of a wireless network certainly depends on how effectively it monitors and manages hyper-dimensional resources (e.g., time slots, frequency bands, modulation types, and orthogonal codes). Besides, the incorporation of wireless channel variations and traffic load attributes in the network design phase impacts the connectivity among users having diverse quality-of-service (QoS) requirements. Given that new IoT applications demand high data rates, low latency, and efficient spectral utilisation, and that personalised IoT services keep expanding, the problems of resource monitoring and management are crucial. In the last decades, AI/ML algorithms and DL architectures have been widely used to tackle several challenging resource allocation and management issues in 6G-related areas [91]. Concretely, they have contributed to many aspects, including massive channel access, power allocation and interference management, user association and hand-off management, energy management, ultra-reliable and low-latency communication, and heterogeneous QoS. Traditional mechanisms cannot optimise the non-convex problem of resource allocation and are time-consuming when managing resources in a crowded and complicated network.
This drawback motivated several data-driven, ML-based resource allocation and management solutions, in which the high learning capability of AI/ML models suits the dynamic nature of 6G-enabled IoT networks [92]. 2) Requirements and challenges: Cellular IoT networks in B5G/6G are characterised by extremely high data rates and solid connectivity between heterogeneous devices/users and access points/base stations. The diverse requirements of various IoT services and applications can be met by carefully selecting various network parameters (e.g., channel state information and traffic characteristics) and communication parameters (e.g., angle of arrival and modulation types). These parameters can now be identified remarkably well by ML and DL, in terms of high estimation accuracy and a good ability to deal with big raw data. Some smart devices demand autonomous access to the available spectrum and adaptive tuning of transmission power to mitigate interference and save energy [93]. Related system parameters, such as the position and velocity of high-mobility users, propagation conditions, and interference patterns, should be considered when designing an effective ML-based resource allocation solution. Notably, several traditional optimisation schemes cannot deal with diversified contexts for integration and cannot respond to varying environments. In numerous IoT applications, ubiquitous and heterogeneous devices have diversified QoS requirements and randomly varying access to network resources, demanding that traditional ML algorithms be upgraded with RL to fully adapt to the diversity of network requirements and the variation of network conditions [94]. Additionally, the high computational complexity of heuristics-based resource allocation schemes should be considered for implementation on resource-constrained devices. 3) How XAI can help: In the last decades, many ML/DL-based solutions have been introduced for various resource management tasks in 6G-related areas, including scheduling and duty cycling, resource allocation, power allocation and interference management, resource discovery and cell selection, and mobility estimation. In long-range wide area networks, the packet transmission of IoT devices to a central aggregator can cause traffic collisions. Uplink duty-cycle optimisation methods applying RL models have been recommended to avoid such traffic collisions. It is worth noting that ML-based traffic scheduling methods should adapt to different traffic types (e.g., real-time traffic and delay-tolerant traffic) and varying traffic loads in real-world scenarios. In centralised wireless networks, resource allocation should be performed by the base station, relying on the channel state information (CSI) and QoS requirements of users. Some ML algorithms can improve resource allocation performance to optimise the quality of experience (QoE) of end-users. For example, a multi-agent Q-learning algorithm and a decision tree classification model were studied to tackle the interference and power management issues in device-to-device (D2D) wireless networks [95]. In another work [96], interference and power wastage were reduced by applying a linear regression model to estimate the interference level and transmit power level from CSI in radio channels. In [97], a standard deep RL model was deployed for dynamic power management to minimise power consumption.
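As a toy counterpart to the Q-learning and deep-RL schemes cited above [95], [97] (our own simplified sketch, not the cited algorithms), the code below learns a transmit-power policy for a single link. Higher power improves throughput but wastes energy and causes interference, and the illustrative reward encodes that tension; all state, action, and reward definitions are assumptions made for the example.

```python
# Sketch: tabular Q-learning for transmit-power selection on one link.
# States: coarse channel quality (bad/fair/good); actions: power levels.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 3, 4            # channel quality x power levels
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1     # learning rate, discount, exploration

def reward(state: int, action: int) -> float:
    # Illustrative reward: throughput grows with channel quality and power,
    # minus a penalty for energy use / interference at high power.
    throughput = np.log2(1.0 + (state + 1) * (action + 1))
    return throughput - 0.5 * action

state = rng.integers(n_states)
for step in range(20_000):
    # epsilon-greedy action selection
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    r = reward(state, action)
    next_state = rng.integers(n_states)        # channel evolves randomly here
    # Q-learning update rule
    Q[state, action] += alpha * (r + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

for s, name in enumerate(["bad", "fair", "good"]):
    print(f"channel {name:>4}: best power level = {int(Q[s].argmax())}")
```

An XAI layer on top of such a learned policy would then have to justify, for instance, why a high power level was selected on a fading channel, which is exactly the confidence gap discussed next.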
The efficiency of spectrum mobility monitoring and management can be improved with transfer learning, enabling secondary users to switch spectrum bands within a cell and among different cells. The authors in [98] found that SVM performed network traffic classification more accurately than backpropagation neural networks, besides being more flexible with various real-time network application flows. Although the overall system performance can be improved by leveraging advanced ML algorithms for resource management, most existing ML-enabled solutions expose a critical problem regarding the confidence level of decisions. Indeed, the weak transparency and poor explainability of advanced ML and DL models can lead to risky operational failures, where stakeholders cannot fully understand AI black-box models. To this end, XAI should be recommended for resource allocation and management to provide ethics-related validation and derive actionable insights.

E. Edge AI

1) Introduction: Edge AI is one of the essential components missing in existing 5G communication networks. Edge AI is a framework that focuses on the seamless integration of mobile edge computing, communication networks, and AI [99] and is considered to be one of the most critical enabling technologies for futuristic 6G cellular networks [100]. Recently, many researchers have been working to make the 6G cellular network a fully autonomous and highly intelligent system. Edge AI plays a vital role in realising human-like intelligence in all aspects of 6G network systems to improve the quality of experience of the users [101]. AI-enabled decentralised mobile edge servers are deployed at massive scale to perform processing and decision-making in close proximity to service requests and data generation, making edge AI a vital component that provides accelerated and ubiquitous integration of AI into future 6G networks [102]. 2) Requirements and challenges: Several resources are needed to execute AI models in 6G, such as coordination of data, training of the model, caching, and computing [103]. One essential requirement for edge AI in 6G networks is the automatic creation of labels from the massive amount of raw data generated by the wireless devices in 6G cellular networks. Distributed AI, which performs computation jointly at cloud data centres and distributed edge servers, is another key requirement for edge AI (a minimal sketch of this pattern is given below). A further essential requirement of edge AI in 6G is personalised AI, through which the decision-making of AI algorithms can be improved by understanding the preferences of the human users [104]. Security is also a crucial requirement for edge infrastructure. Some security threats for edge infrastructure are resource or service manipulation, denial-of-service attacks, man-in-the-middle attacks, and privacy leakage. AI/ML algorithms can play a significant role in monitoring and predicting security attacks [105], [106]. Overall, edge AI can vastly improve automation and lower 6G cellular networks' dependency on human intelligence. In some mission-critical situations, humans have to be involved in decision-making. However, humans may not understand the reasons for the predictions of edge AI-based 6G applications, making it very difficult to take those decisions. One of the key goals of edge AI is self-evolution and self-adaptation, so that human efforts can be reduced when processing data and making decisions.
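The distributed edge AI pattern mentioned above is often realised with federated learning. The sketch below is a bare-bones illustration with synthetic local datasets (not a production protocol): it shows one round of federated averaging, where only model weights, never raw data, leave each edge device.

```python
# Sketch: one round of federated averaging (FedAvg) across edge devices.
# Each device fits a local linear model; the server averages the weights.
import numpy as np

rng = np.random.default_rng(2)
true_w = np.array([2.0, -1.0, 0.5])          # ground truth shared across devices

def local_update(n_samples: int) -> tuple:
    """Train on-device via least squares on private local data."""
    X = rng.normal(size=(n_samples, 3))
    y = X @ true_w + 0.1 * rng.normal(size=n_samples)   # private measurements
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, n_samples

# Devices train locally; only (weights, sample count) are uploaded.
updates = [local_update(n) for n in (50, 200, 120)]

# Server: weighted average of local models (the FedAvg aggregation step).
total = sum(n for _, n in updates)
global_w = sum(w * (n / total) for w, n in updates)

print("global model:", np.round(global_w, 3))
```

It is exactly this aggregation step that the poisoning attacks discussed earlier target, which is one reason explainable monitoring of uploaded updates matters at the edge.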
Another key goal of edge AI is the development of models that dynamically adapt to unknown events based on the environment and the features of the data. However, due to the black-box nature of some AI algorithms, it will be challenging to evaluate/audit the effectiveness of AI models for the challenges mentioned earlier. 3) How XAI can help: XAI can play a massive role in guaranteeing the performance of edge AI-enabled 6G networks, where several network components are integrated for different requirements, through justifiable results. Verification, validation, and auditing of decisions at the edge for addressing the above challenges will become simpler thanks to the justification of decisions provided by XAI algorithms. Also, humans may have to be involved at some stage of decision-making in mission-critical edge AI-enabled 6G applications. In those situations, their job becomes simpler, as they can easily understand the reasons behind the decisions/predictions of the AI algorithms at the edge, as depicted in Fig. 6, and hence they can take better decisions. For instance, the authors of [107] propose using XAI to help healthcare professionals understand the findings of DL algorithms when controlling COVID-19-like pandemics using edge-enabled 5G and beyond networks. The proposed XAI-based solution ensures that the findings of the ML-based algorithms are justifiable, which helps the deep neural network model gain ethical acceptance from healthcare professionals for combating pandemic situations.

F. Network Automation and ZSM

1) Introduction: New business models will be unlocked using technologies like SDN, MEC, network slicing, and NFV in 5G and beyond cellular networks. The increase in flexibility, cost efficiency, and performance, along with inter-domain cooperation and agility, results in a massive increase in the complexity of managing and operating 5G and beyond networks. Hence, the solutions provided by conventional methods may be inefficient for network and service management, and closed-loop automation of management operations becomes inevitable. Management automation through self-managing capabilities will improve the efficiency and flexibility of delivering services and reduce operating expenses [108]. Zero Touch Network and Service Management (ZSM) was established by ETSI to achieve self-managing capabilities. The ZSM reference architecture aims to specify E2E network and service management that is fully automated, without human intervention [109]. 2) Requirements and challenges: AI/ML and big data analytics are key enablers for 100% automation in cellular networks. AI algorithms can learn from the vast amount of data generated in the 6G cellular network. They can play a vital role in the self-management of the network (self-configuration, self-protection, self-optimisation, and self-healing), resulting in reduced human errors, accelerated time-to-value, and lower operational costs. AI/ML algorithms can be applied to raw data to filter the important events from the large volume of events, identify problems in the network, and then send the most vital information to the upper layers [110]. However, the successful integration of AI/ML techniques in ZSM for full automation depends on the interpretability and transparency of the AI/ML models, which ensure reliability, trustworthiness, and accountability in AI-enabled ZSM [2].
The type of ML algorithms used, and the input data used at the source to train them, must be understood in order to provide reliable decisions for any automated task in ZSM. Due to the increased complexity of the ML algorithms used, the end-user finds it challenging to explain the approach those algorithms follow to arrive at the end results. This situation arises especially when a series of updated models is applied to analyse the large volumes of data generated in 5G and beyond networks. 3) How XAI can help: With its ability to provide justification and interpretability, XAI has vast potential to address the above challenges in integrating AI with ZSM. To facilitate XAI for ZSM, the authors in [111] have proposed adding a dictionary that acts as a repository to capture the AI models used in the system. They proposed using factor graphs or directed acyclic graphs to represent the taxonomy of AI models. To record the input/output variables and attributes of the specific AI/ML algorithms used, they also proposed an algorithm instance repository. Analysts can use this metadata for reverse engineering and to derive an explanation of the results. Providing end-to-end virtual networks that cater to the diverse and customized needs of heterogeneous applications is one of the key technical requirements in 6G, which can be realized by network slicing (NS). Managing resources and functions in NS is a challenging and crucial task, where efficient decisions are required at all levels of the network in real-time. Hence, AI can play a vital role in automating the decision-making for these key tasks in NS [112]. As these decisions involve complex management of resources with financial and service-quality implications, making the human-in-the-loop understand the rationale behind them is of paramount importance. Integrating XAI algorithms in 6G improves the credibility and accountability of resource allocation in NS [113].

IV. XAI FOR 6G USE CASES

This section comprehensively overviews the potential of XAI for typical 6G use cases [8], [9], [12], [15], [114], including intelligent health and wearables, Industry 5.0, collaborative robots, digital twins, CAVs and UAVs, smart grid 2.0, holographic telepresence, smart governance, and online advertising in 6G. Specifically, for each use case, we first introduce the motivation, which explains why the use case is important and urgently required. Then, we analyse the required enabling technologies, which normally include 6G communication technologies and AI algorithms. We also introduce some important existing work in the literature under each use case. Last but not least, we explain why XAI can improve the transparency of the high-stakes decision-making process to enhance the trust between humans and machines. The XAI requirements of these 6G use cases are summarised in Table V.

A. Intelligent Health and Wearable, Body Area Networks

1) Motivation: The advancement of 6G is expected to drive innovative development of eHealth systems and improve the performance of medical and healthcare services with advanced AI/ML algorithms [115]. The upcoming 6G communication can provide eHealth applications and services with ultra-high data rates and ultra-low latency for a huge number of connected devices.
In this context, an eHealth system can take responsibility for real-time monitoring and tracking, recording health information, and storing eHealth records in cloud-based computing infrastructures. Furthermore, it can exchange medical records and health reports and provide remote diagnosis by connecting different health service providers in a network [116]. Nowadays, 6G eHealth solutions can be expanded to various scenarios, such as hospital, sport, home, and pharmacy, in which the QoS for all eHealth applications and services should be ensured in both indoor and outdoor environments. More importantly, 6G-enabled Internet of Medical Things (IoMT) networks promise precise medical services by applying AI/ML algorithms to process healthcare and medical data on top of very high-quality connectivity. 2) Requirements: As the healthcare data acquired from various multimodal sources has large volume, high velocity, and diversified variety, exploiting ML algorithms and DL architectures to develop data-driven solutions has been attracting growing attention from healthcare and medical communities via academic research and industrial products. An AI-aided eHealth system should process different data types (e.g., sequential versus high-dimensional data and structured versus unstructured data) and fulfil high-quality services with very high accuracy (e.g., image-based cancer detection and recognition) and very low latency (e.g., online surgery via video streaming) [117]. In several healthcare and wellness services, the raw sensory data collected from wearable devices usually contain noise and outliers; therefore, AI-based solutions should learn complicated patterns from messy datasets effectively. Compared with data in other sectors, healthcare and medical data are more sensitive regarding security and privacy; hence ML-based solutions can be studied to automatically detect privacy threats and protect data from cyberattacks in distributed cloud-based systems. Recently, federated learning (FL) was introduced to overcome such data security and privacy issues by sharing the information of trained local models instead of the raw data of edge users [118]. 3) Existing solutions: The last few years have witnessed ubiquitous utilization of ML algorithms and DL architectures for a variety of healthcare and medical applications [119], such as physical activity recognition with time-series sensory data and diabetic retinopathy recognition with multimodal images. The authors in [120] proposed an intermediate fusion framework for human activity recognition using sensory data from wearable devices, in which the deep local features extracted by a deep convolutional network were combined with descriptive statistical features to improve the recognition rate. Subsequently, the fused feature vectors were passed into an SVM classifier to predict activities. A comprehensive diabetic retinopathy recognition method [121] leveraged DL technology with CNN architectures to learn the amalgamation of fundus images and wide-field swept-source optical coherence tomography angiography. In this method, a twofold feature augmentation mechanism was introduced to enrich the generalization capacity at the feature level and protect the CNN from the vanishing gradient problem. In another work [122], a two-stage learning model with CNN architecture was presented in a lung nodule detection method to overcome the heterogeneity of lung nodules and the complex patterns of the noisy tomography image dataset.
The proposed deep model not only improved the accuracy of early lung cancer detection but also accommodated small-scale datasets with a random mask-based data augmentation scheme. Recently, ML and DL have been applied to many other healthcare and medical applications using different data types, such as sleep analysis with electroencephalography signals collected by wearable in-ear devices [123] and retinopathy risk progression monitoring with electronic medical records [124]. 4) How XAI can help: Wearable-based physical activity recognition can support physicians and health-wellness experts in giving early risk warnings of heart disease by leveraging advanced AI technologies. In [120], despite the high accuracy of daily activity recognition, the reliability of the proposed deep fusion model is questionable because the classifier was trained on a dataset covering a limited number of individuals in a few specific conditions. Physicians may derive wrong guidelines and recommendations from misclassification results caused by changing conditions. Currently, DL with CNN architectures has been widely used for medical image analysis and has demonstrated high accuracy in various fundamental tasks, including detection, classification, and recognition, in many applications [125], such as lung cancer detection and diabetic retinopathy recognition. As a black box, a deep CNN model with a low level of accountability and transparency fails to explain and articulate how its decisions are reached. Consequently, understanding the operating mechanism of black-box models is nearly impossible for stakeholders, including end-users and service providers. In this context, developing inherently interpretable models is urgently necessary to render traceable explanations of AI outcomes. For example, the combination of decision trees and deep networks can offer a promising XAI solution to pursue both great explainability and high prediction performance [126]. Some other supplementary techniques (such as visual explanation, LIME, and LRP [127]) are useful for obtaining understandability and explainability in the healthcare and medical domains. Besides the ability to access data and come to conclusions, XAI can provide doctors and specialists with decision-routine information to understand how those conclusions are reached, while some conclusions still require the hints of human interpretation [128]. As a result, with XAI, doctors are able to explain why a certain patient has a high risk of health problems, when the patient should be admitted to the hospital for supervision, and what treatment would be most suitable. An example of applying XAI with a visual explanation technique to recognize image-based diabetic retinopathy is illustrated in Fig. 7.

B. Industry 5.0, Collaborative Robots, Digital Twin

1) Motivation: Different from Industry 4.0, which spearheads the explosion of IoT, cognitive computing, big data, and AI through technical interconnectivity and decentralization, Industry 5.0 brings the human touch back into business development, production, and intelligent systems. The primary mission of Industry 5.0 is to create a significant revolution in industrial processes, manufacturing, and business, where problem-solving and creativity are the superior objectives, rather than merely replacing people's repetitive jobs with automated robots [129].
In this context, the combination of increasingly powerful machines and better-trained experts enables effective, safe, and sustainable production, in which highly skilled operators and automated robots can work safely and effectively side-by-side on the same manufacturing floor to produce personalized and customized products. Such robots are known as collaborative robots and should be designed to accomplish different heavy, precise tasks with a high consistency guarantee. Digital twins, recognized as virtual models of a process, product, or service enabling data analysis, system monitoring, and operation and performance assessment via simulations, are promising solutions to optimize business and manufacturing outcomes over the entire life cycle of a product. Relying on the comprehensive predictive and descriptive capabilities of digital twins, customers can fully comprehend product functionalities along with operational optimisation, while manufacturers provide maintenance services that keep digital twins manageable and profitable [130]. 2) Requirements: In the Industry 5.0 era, we expect to see an intensive upgrade from cyber-physical systems (i.e., using digital technologies to operate factories and reduce human participation) to human-cyber-physical systems. Interestingly, in the foreseeable evolution of collaborative robots and digital twins, AI plays a vital role in processing raw sensor data, analyzing high-level information, and providing decisions and recommendations automatically. At the centre stage of this new revolution, humans should work alongside collaborative robots with support from digital twin systems, teach them to do tedious, repetitive, and dangerous tasks, and correct them when they make operational mistakes or wrong decisions [131]. Besides the requirement of faster and smarter decision-making in such manufacturing tasks and processes, we desire collaborative robots and digital twins for Industry 5.0 to be more understandable and interpretable, meaning they are able to explain actions and decisions derived from AI/ML models. The complexity and sophistication of AI-powered automated manufacturing systems are increasing so rapidly that humans cannot understand the ambiguous mechanisms of AI systems, especially when they deliver unpredictable and unexpected decisions [132]. 3) Existing solutions: Nowadays, collaborative robots are being developed to support automatic inspection and corrective action in high-precision control systems [133], in which an AI-enabled intelligent module designed with DRL effectively learns and adaptively acts based on inspection results. The approach shows two primary advantages: the AI model learns continuously without shutting systems down, and the approach extends to different real-world scenarios. In [134], a dual-input DL model was developed to improve the performance of human-robot collaboration and allow robots to learn from human demonstrations effectively. This model synthesized the assembly contexts of multiple human demonstration processes and tasks to accomplish suitable assistant actions. Compared with traditional feature-based approaches, which are more complex and require time-consuming labelling of huge amounts of data, the proposed method can annotate data labels automatically by perceiving human demonstrations.
In another work [135], RL with a CNN architecture was leveraged to optimize the working sequence of human-robot collaborative assembly, which in turn increases the working performance of smart manufacturing systems. Some complicated learning use cases, such as random robot failures and human behaviour uncertainties, were further taken into consideration to satisfy real-world conditions. To promote fully smart manufacturing, cognitive digital twins [130] were presented by incorporating different modern digital technologies, including industrial IoT, big data, ML, and virtual reality, aiming to analyze and simulate operation modules, assets, systems, and processes. In [136], a secure industrial automation system was built based on a digital twin replication model to identify and verify multilevel design-driving security requirements using sophisticated simulations and optimizations. Digital twin technology was also exploited in some heavy industry sectors [137], such as shipbuilding, steel production, and oil and gas, to enhance safety and productivity while reducing operational costs and minimizing health risks and accidents. 4) How XAI can help: Industry 5.0 focuses on the cooperation between humans and machines to push past the physical boundaries of design and enhance user experiences via the two principal spearheads of collaborative robots and digital twins, which in turn enhances production efficiency and reduces manufacturing costs accordingly. AI-based decisions can help module operators and system experts make judgments accurately and promptly when their activities involve complicated processes and systems. However, verification and certification should be required for any decisions relating to products and processes. Concretely, AI models embedded in collaborative robots and digital twins should provide decision-making explanations to demonstrate that their outcomes are traceable and reproducible. To this end, XAI enables humans to understand certain aspects of AI-aided processes or systems by answering questions such as: why is the prediction reliable, what are the stable working conditions of an AI model, and when is it likely to fail. In [138], an XAI technique was developed to identify various damage conditions of bearings in motors by analysing vibration signals with the short-time Fourier transform, in which a feature visualization-based approach was applied to explain the results (a simplified sketch of such a pipeline is given below). In the experiments, the outcomes of a decision tree and an adaptive network-based fuzzy inference model were compared with those of a DL model to verify the effectiveness of XAI-based explanations. Intelligent robots without explanatory capability may unexpectedly cause failures and dangerous actions; hence interpretability and explainability, including post-hoc rationalization and introspection, should be provided to ensure that there is no conflict between supplementary explanations and the regular requirements of applications. In the effort to reach human-like communication in human-robot interactions, a novel XAI framework [139] was developed for collaborative robots by building a hierarchical AI model learned from humans with a graph-based explanation scheme. The proposed XAI allows the robots to explain their own pedagogic behaviours by referencing the outcomes of the trained model against the mental state of the end-user.
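The sketch below is a minimal pipeline in the spirit of [138], not the authors' implementation: a synthetic vibration signal is converted to a short-time Fourier transform representation, summarised into band energies, and fed to a shallow decision tree whose rules are directly readable. The signal model, band splits, and fault frequency are all hypothetical.

```python
# Sketch: interpretable bearing-fault detection from vibration signals.
# STFT band energies -> shallow decision tree with human-readable rules.
import numpy as np
from scipy.signal import stft
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
fs = 2000  # Hz, illustrative sampling rate

def vibration(faulty: bool) -> np.ndarray:
    t = np.arange(fs) / fs                     # one second of signal
    sig = np.sin(2 * np.pi * 30 * t)           # shaft rotation component
    if faulty:
        sig += 0.6 * np.sin(2 * np.pi * 450 * t)   # synthetic defect harmonic
    return sig + 0.3 * rng.normal(size=t.size)

def band_energies(sig: np.ndarray) -> np.ndarray:
    f, _, Z = stft(sig, fs=fs, nperseg=256)
    power = np.abs(Z) ** 2
    # Average spectral power in low/mid/high bands (hypothetical split).
    bands = [(0, 100), (100, 300), (300, 1000)]
    return np.array([power[(f >= lo) & (f < hi)].mean() for lo, hi in bands])

X = np.array([band_energies(vibration(faulty=i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["low_band", "mid_band", "high_band"]))
```

The printed rules (e.g., a single threshold on high-band energy) are exactly the kind of traceable justification a maintenance engineer can verify against domain knowledge.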
In the next industrial evolution, digital twins using explainable models and explanation interfaces can become a promising solution to enhance the trustworthiness of AI [140]. Interestingly, as an exact virtual representation, or virtual clone/mirror, of a real-world asset, a digital twin can help to heighten the explanatory capability with augmented reality and virtual reality technologies.

C. CAVs and UAVs

1) Motivation: Emerging vehicle technologies promise to positively change the manner in which vehicles move and the approach by which travellers obtain mobility. The connected autonomous vehicle (CAV) is recognized as one of the important vertical industries in 6G, with its various quality levels of on-demand services [141]. CAVs are autonomous vehicles that have the capability of connecting to other vehicles/infrastructures over wireless communications and sensing the driving environment to achieve safe transportation with little or no human involvement. By incorporating advanced technologies, autonomous vehicles collaborate with each other directly or over an intermediate infrastructure to improve the performance and efficiency of smart transportation systems, compared with individual autonomous vehicles without collaborative mechanisms. Indeed, connected vehicle and automated vehicle technologies should be developed in parallel and in close cooperation to deliver completely smart transportation in the future [142]. To this end, besides 6G-enabled wireless communications, AI serves as one of the most important core technologies to process the massive amount of sensory data collected by multiple sensors, which helps autonomous vehicles understand the surrounding environments and accordingly execute driving activities. In addition to CAVs, the emergence of flying platforms such as unmanned aerial vehicles (UAVs), a.k.a. drones, enables several key potential 6G applications and services in a broad spectrum of domains thanks to their mobility, flexibility, and adaptive altitude. 2) Requirements: Besides key connectivity requirements to achieve high-speed, real-time, and reliable data transmission in different communication scenarios, including vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-cloud (V2C), vehicle-to-pedestrian (V2P), and vehicle-to-everything (V2X), CAVs impose strict demands on the QoS performance of autonomous driving systems. The requirements may vary according to the automation level of vehicles: no driving automation, driving assistance, partial automation, conditional automation, high automation, and full automation. To achieve the baseline autonomous requirement, a vehicle needs to be aware of its surroundings during the driving period by first perceiving information and subsequently acting through vehicle control. To fully understand the driving environment, a massive amount of data recorded by multimodal sensors should be processed automatically and accurately via a driving computer system with advanced AI/ML models. Due to the critical demand for safety, a self-driving AI-powered system is expected to operate flawlessly regardless of weather conditions, visibility, road surface quality, and other situational conditions. In this context, advanced ML algorithms and DL architectures have recently been exploited to cover all possible driving scenarios and surrounding environments [143]; however, the trustworthiness of AI systems embedded in CAVs is questionable.
Besides immediately providing useful driving recommendations and accurate driving actions to drivers to minimize the probability of traffic accidents, a revolutionary AI system should be understandable and explainable to make sure that the driver feels confident about its decisions [144]. 3) Existing solutions: For safe and efficient operation on roads, CAVs should understand the current state of nearby vehicles and surroundings to proactively predict future driving behaviours, which in turn allows AI systems to react automatically and immediately. In [141], a comprehensive survey on AI-aided driving behaviour prediction and potential risk analysis was presented, in which the advances of DL with different architectures, compared with conventional ML algorithms, were discussed in terms of prediction performance. This survey showed that DL represents a promising solution to effectively deal with different sensors (e.g., cameras, LiDAR, and radar) in complicated driving scenarios. The survey also categorized the state-of-the-art driving behaviour prediction approaches into three classes: input representation (track history of target and surrounding vehicles, bird's eye view of the environment, and raw sensory data), output type (manoeuvre intention, unimodal trajectory, multimodal trajectory, and occupancy map), and prediction method (RNN, LSTM, and CNN). In [145], an innovative road condition supervision system was developed by learning a deep network to reduce accidents caused by poor road quality, in which the sensory data of an accelerometer and gyroscope were processed in coordination with GPS data. Sudden lane changes and braking by the leading vehicle, caused by driver distraction, misjudgement, and misoperation, increase the risk of accidents. An autonomous braking decision-making strategy was proposed with deep reinforcement learning in [146] to facilitate low-level control of CAVs in emergency situations. Despite their promise as the core of next-generation intelligent transportation systems, CAVs will face hidden security problems relating to the complicated AI configurations/settings of autopilot systems [147], where end-users with limited AI experience and knowledge may cause accidents and dangerous situations. ML and DL have recently been leveraged for various UAV-based applications in wireless communication systems (e.g., interference management, caching optimization, and resource allocation [148]) and intelligent transportation systems (e.g., trajectory planning, traffic flow monitoring, and navigation [149]). In cell-free and aerial-assisted vehicular networks [150], UAVs were used to assist CAVs in making decisions in driver assistance systems (for example, path planning and crash warning on highways), in which a supervised learning model was designed with CNN architectures and optimized with particle swarm optimization to make timely inferences and facilitate online decisions in real-world scenarios. 4) How XAI can help: The important role of AI in driver assistance systems is undeniable for moving towards the next generation of intelligent transportation systems; however, whether drivers feel completely confident and secure with AI-based decisions remains an open issue.
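In the same vein as the if-then-rule explanations discussed below, the following sketch (an illustrative stand-in we construct here, not the method of [151] or [153]) trains a small decision tree to decide brake/no-brake from the gap to the lead vehicle and the own speed, then prints the learned rules in a form a driver or auditor can read. The ground-truth rule and feature choices are assumptions for the example.

```python
# Sketch: human-readable braking rules extracted from a small decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(4)

# Synthetic driving situations: gap to lead vehicle (m) and own speed (m/s).
gap = rng.uniform(2, 80, size=3000)
speed = rng.uniform(5, 35, size=3000)
X = np.column_stack([gap, speed])

# Illustrative ground truth: brake when the time-to-collision-like margin
# (gap divided by speed) falls below 1.5 seconds.
brake = (gap / speed < 1.5).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, brake)
print(export_text(tree, feature_names=["gap_m", "speed_mps"]))
```

Such rule surfaces do not replace the underlying controller; they give stakeholders a checkable account of the conditions under which it brakes.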
Several advanced assistance systems developed by big companies like Tesla are very complicated, with numerous AI-based functionalities (e.g., automatic parking, adaptive cruise control, automotive navigation, collision avoidance, driver drowsiness detection, and electronic stability control), which require end-users to tune multiple settings to ensure that the systems operate smoothly and properly. The transparency of system operation and the explainability of decision-making are still very immature in the field of driverless cars. For example, a self-organizing neuro-fuzzy model coupled with a density-based feature selection technique was leveraged in [151] to explain automated reacting decisions (e.g., braking, speeding up, and changing lane to the left), in which the outputs of an AI model could be interpreted through human-understandable if-then rules. In [152], rule-based XAI was also investigated for different flying events of UAVs, in which the decision to change the flying path can be explained with regard to the weather conditions, surrounding environments, and relative enemy locations, based on if-then rules derived from a fuzzy inference model. In another work [153], XAI with a random forest algorithm was proposed to improve the self-confidence level of autonomous navigation in an autopilot system; the intermediate information extracted from the trees was helpful for explaining navigation autonomy and AI-driven decisions. Most existing works on XAI for CAVs have focused on explaining the decision of a single system instead of decisions derived from multiple interactive systems, where changeable environments and conditions can affect the outputs of prediction systems and consequently lead to potential threats. Indeed, explaining why a vehicle changes a lane or hits a brake is a more complicated task. For an early risk-aware system, a hierarchical AI model [154] was studied to predict uncertainty and collision risks under the constraints of perception, intention recognition, and tracking error. At the highest level of automation (i.e., a vehicle is free from geofencing, with the capability of reacting like an experienced human driver), explainability is one of the highest-priority requirements not only for system developers and manufacturers but also for customers, passengers, and society. Fig. 9 shows some exemplary questions that CAV stakeholders may raise from each intelligent 6G layer and that XAI can help to answer.

D. Smart Grid 2.0

1) Motivation: In a traditional power system, electricity is fed to the consumers in a centralized manner. This system is not reliable, as the delivery of power/electricity may be interrupted across the entire network in case of a trip/power failure along the network path [155]. To overcome these issues, several utilities try to use a hybrid network topology or a loop in case of a fault in the network [156]. Electricity generation affects the environment immensely, as electricity is mostly generated from fossil fuels such as petroleum, natural gas, and coal, or from biomass that comes from plants, or from industrial and municipal waste, which may result in increased poisonous gas emissions and rising temperatures. Some studies have estimated that almost 40% of carbon dioxide emissions are due to power generation [157].
Modernization of the traditional power grid is inevitable due to factors such as frequent power outages, the ever-increasing global demand for power, security issues, theft of electricity, the demand for building huge electricity infrastructure, etc. Recent blackouts throughout the globe underline the need for a smart grid that adds intelligence to the traditional electric/power grid system, making the grid more robust, responsive, and resilient [158], [159]. A smart grid enables two-way communication between the consumers and the utility, and also allows sensing throughout the transmission lines. The smart grid is composed of computers, controls, automation, and new equipment and technologies that work collaboratively so that the electrical grid can respond digitally to the dynamic demands of consumers [160]. The core infrastructure in the smart grid is provided by IoT. The smart grid, characterized by automation, informatisation, and interaction, provides a diverse and high-quality power supply for customers efficiently and securely. To provide reliable asset supervision, real-time information interaction, peer-to-peer energy trading, load management, and other electrical services, the smart grid needs a communication infrastructure that is flexible and highly reliable [161]. One of the challenges in making the smart grid more sustainable is managing the remote communication among the various systems connected by smart meters [162]. 6G can help in realizing intelligent smart grid applications like remote monitoring and control of distributed energy resources, automation of demand response, etc. [163]. A smart meter needs the deployment of a distributed network, and wide coverage is essential for preventing blackouts and ensuring the smart grid's self-healing capabilities. 6G can also help in realizing applications of the smart grid that require high-speed connectivity, such as predictive maintenance, video surveillance in real-time or during natural calamities, proactive recovery during emergencies, etc. [164], [165]. 2) Requirements: To deal with the massive volume of data generated due to constant communication and connectivity in the smart grid, sophisticated techniques that can analyze the data and assist in the decision-making process are required [166]. ML can solve the problems arising from the large volumes of data generated by smart grids and can assist smart grids in collecting data, analysing the patterns in the data, and making decisions to run the grid. An ML-enabled 6G network can help the smart grid solve issues in real-time, such as detecting intruders in the network, forecasting the price of electricity consumption, electricity theft, line maintenance, demand-based power generation, optimal scheduling, fault detection, demand response, and prediction of the stability of a smart grid [167]. 3) Existing Solutions: The incorporation of IoT sensors and communication technologies in the smart grid revolutionizes the distribution, production, control, and monitoring of electricity. Addressing the security issues in the smart grid is of immense importance. To this end, Babar et al. [168] proposed a secured demand-side management engine for the smart grid using Naive Bayes, a machine learning algorithm, to efficiently preserve energy utilization in the smart grid.
Due to the increased data generated at a rapid pace in the 5G-and-beyond-enabled smart grid, data acquisition and processing by a smart meter is vital. The redundant data present in the acquired data can be reduced by using event-driven sampling. To address this issue, Qaisar et al. [169] employed the SVM algorithm to identify the relevant features for analyzing the consumption patterns of appliances. Providing on-demand services for electric vehicles via vehicle-to-grid systems is very important because of the high manoeuvrability of electric vehicles. In this regard, Shen et al. [170] proposed a hybrid architecture based on cloud and fog computing with applications in 5G-enabled vehicle-to-grid networks. The proposed architecture allows the bi-directional flow of information and power between smart grids and schedulable electric vehicles to improve the cost-effectiveness and QoS of the energy service providers. For proper scheduling, the selection of suitable electric vehicles is very important, which requires finding the categories of target electric vehicle users. To accurately identify the target electric vehicles, the authors proposed an artificial intelligence method based on the electric vehicles' charging behaviour. In a similar work, Sun et al. [171] proposed a novel architecture for the 5G-enabled smart grid based on edge computing and network slicing for providing on-demand services for electric vehicles. This architecture collects the bidirectional traffic information between the electric vehicles and the smart grids to decrease the cost for the energy providers and improve the charging experience of electric vehicles. To improve the scheduling efficiency of electric vehicles, the authors proposed LSTM-based charging behaviour prediction, KNN-based classification of electric vehicle charging, and k-means-based clustering of electric vehicle charging. 4) How XAI can help: With traditional AI/ML approaches, there is a lack of transparency in the predictions they make with regard to the applications of 5G-enabled smart grids mentioned above. XAI can play a very important role in making the actions of smart grids accountable and transparent, which will take both the customers and prosumers into confidence. Kuzlu et al. [172] proposed a forecasting approach based on an XAI methodology to predict the generation of PV power, to increase the trustability of AI models and hence improve the acceptance of AI in smart grid applications. Zhang et al. [173] proposed a Shapley additive explanations (SHAP) based backpropagation deep explainer, termed the Deep-SHAP method, that produces an interpretable model for emergency control applications in smart grids (a minimal SHAP-style sketch is given below). Some of the use cases of XAI-enabled 6G for the smart grid are discussed below and summarized in Fig. 10.

Use Case 1: Stability predictions of a smart grid

Maintaining the stability of a smart grid is of paramount importance. There are two criteria concerning the stability of a smart grid. The first criterion is to have a reserve of battery storage to meet the dynamic demand for electricity. The second is to provide enough capacity for voltage stability at every location. The instability of a smart grid may lead to power outages and blackouts, which may cause huge revenue losses in several industries [174], [175].
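Echoing the SHAP-based explainer idea above, the sketch below is our illustrative example, not the Deep-SHAP method itself: it attributes a gradient-boosting stability prediction to its input features with SHAP values, the kind of per-decision justification the stability use case calls for. The data are synthetic and the feature names hypothetical; the `shap` package is assumed to be installed, and its API details can vary across versions.

```python
# Sketch: SHAP attributions for a smart-grid stability classifier.
# Requires the `shap` package; API details may vary across shap versions.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["reaction_time", "power_balance", "price_elasticity",
                 "voltage_dev", "storage_reserve", "line_load"]

# Synthetic "stable vs. unstable" grid snapshots with hypothetical features.
X, y = make_classification(n_samples=1500, n_features=len(feature_names),
                           n_informative=4, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])      # explain one grid snapshot

for name, value in zip(feature_names, np.ravel(shap_values)):
    print(f"{name:>16}: {value:+.3f}")
```

An operator reading such a breakdown can see, for instance, that a low storage reserve rather than line load pushed the model towards "unstable", and act accordingly.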
Fig. 10. An illustration of XAI for 6G-enabled smart grid 2.0: line maintenance (identifying the exact location and reason for deterioration, enabling quick actions to limit/avoid the damage), stability prediction (justification for the prediction, preventive measures, quick action from operators), and energy prediction for theft detection (anti-tapping).

The data from several sensors in the 6G-enabled smart grid can be analyzed by AI/ML algorithms to predict the stability of a smart grid. The AI/ML algorithms should be able to accurately predict the stability of a smart grid early so that the operators can take necessary actions [176]. Since traditional AI/ML algorithms cannot justify their predictions, the operators of a smart grid might be reluctant to take preventive or corrective measures early to avoid losses due to instability. XAI can address this issue by providing the reasons why the smart grid is unstable, which may lead to quick actions from the smart grid operators without spending more time investigating the reasons.

Use Case 2: Detection of Energy Theft in Smart Grid

It is estimated that approximately 96 billion dollars are lost every year by utilities due to energy theft, which leads to increased energy prices for consumers [177]. Energy thieves use several methods, such as tapping a line between a house and the transformer, hacking into neighbours' meters or their own meter, and tampering with the meters [178]. To minimize electricity theft, we have to identify the most likely cases of theft so that they can be investigated further. By training ML models on the data from smart meters together with external factors like geographic risk in a particular area and weather in a 6G-enabled smart grid, we can generate such a list in real-time, enabling operators to take appropriate measures immediately [179]. There is no fixed solution regarding the action taken by the investigators/operators of the smart grid: for example, a meter that has been reversed may have to be disconnected; if a meter has been intruded upon, the household has to be alerted; a meter that has been altered has to be replaced; etc. XAI can be helpful in this scenario to explain what kind of theft may be happening, so that the operators of the smart grid can take the relevant action to address the issue.

Use Case 3: Line maintenance in Smart Grid

The reliability of a smart grid depends on the proper maintenance of the infrastructure. The deterioration/ageing of a transformer or power lines due to several factors, such as weather conditions, has to be detected at an early stage to prevent the failure of equipment that may cause power outages and blackouts [180]. Conventional AI/ML methods in the 6G-enabled smart grid can predict the ageing/deterioration of the lines in real-time based on data generated from sensors, such as humidity, weather, and so on. XAI can be used efficiently in these scenarios to identify the exact location of and reason for the deterioration of the power lines, so that the maintenance crew can be sent to the exact location with a heads-up on the reasons and can take appropriate actions to limit/avoid the damage.

E. Multi-sensory XR Applications, Holographic Telepresence

1) Motivation: Extended Reality (XR) is a combination of all the immersive technologies, such as virtual reality (VR), augmented reality (AR), mixed reality (MR), etc.
These immersive technologies extend the reality we experience either by creating a fully immersive experience or by amalgamating the real and virtual worlds using multiple sensors [181], [182]. In AR, the real world is overlaid with virtual objects and information. Digital details such as animation, text, and images enhance the real-world experience of the users through smartphones, tablets, screens, or AR glasses [183]. Some examples of AR are filters in the Snapchat app that can place glasses or hats on our heads, the overlaying of digital creatures onto the real world in the Pokemon GO game, etc. In VR, the users are fully immersed in a simulated digital world, using a head-mounted display or a VR headset to experience a 360-degree view of the artificial world. VR fools the brain into believing that the person is actually present in the created VR world, for instance, swimming in the ocean, walking on the moon, or participating in a race. VR has been adopted in the entertainment and gaming industries; other industries such as defence, healthcare, engineering, manufacturing, and construction can also benefit from VR [184]. In MR, objects of the real and digital worlds exist together and can interact with each other in real-time. MR requires more processing than VR and AR, and a headset is required to experience MR [185]. One of the best examples of MR is HoloLens from Microsoft, which places digital objects in our rooms that we can interact with and spin around. XR has many real-time, practical applications in many sectors where the travel cost and time of customers can be saved, such as entertainment, retail, healthcare, real estate, marketing, remote working, disaster handling, etc. [186], [187]. Holographic Telepresence is a technology whereby systems can project real-time, full-motion, realistic 3D images of people located in distant places into a room, with real-time audio communication, making users feel as if they are communicating with people in person. Unlike in AR or VR, users do not require any devices, sensors, or headsets to experience holographic telepresence. In holographic telepresence, the captured images of people at remote locations, along with their surrounding objects, are compressed and then transmitted through a broadband network. These images are decompressed at the users' end and then projected through laser beams. Holographic Telepresence has the potential to revolutionize traditional communication through mobile phones by giving immersive experiences to users. It has huge potential in many other communication applications, such as telemedicine, enhanced television and movie experiences, gaming, advertising, robot control, aerospace navigation, 3D mapping, and other simulations [188], [189]. 2) Requirements: XR and holographic telepresence technologies require a communication network with near-zero latency and fast processing of information from sensors. 6G, through its attributes like connection density, user-experienced data rate, scalability, mobility, reliability, and traffic volume density, can play an efficient role in realizing the true benefits of XR [114], [190]-[192]. 3) Existing solutions: The use cases of 6G networks in multi-sensory XR and holographic telepresence technology are too complex for humans to operate.
AI/ML algorithms play a crucial role in automating the learning, reasoning, and decision-making needed to manage 6G networks for these applications. AI can be used for network and device management in these resource- and communication-intensive applications. For example, AI algorithms can be effectively used at the network layer for traffic classification, delay mitigation, anomaly detection, caching, resource management, network slicing, and routing. AI can also be effectively used for the self-management of the participating devices in 6G-enabled multi-sensory XR and holographic telepresence. Potential applications of AI in the end devices include understanding the environment by applying computer vision to analyse the multidimensional knowledge in the images captured by the devices, and reducing network traffic volume by enabling AI-based applications on mobile devices. Some of the potential applications of AI/ML in multi-sensory XR applications and holographic telepresence are [193]:
• Estimation of the Position of Objects: The position of objects (such as fingers and hands) can be inferred to control the content of XR [194].
• Labelling of Scenes and Images: Triggering XR labels with image classification [195].
• Semantic Segmentation and Occlusion: Segmentation and occlusion of specified objects [196].
• Object Detection: The extent and position of an object in a scene can be estimated to form colliders and hitboxes that enable interactions between virtual and physical objects [197].
• Recognition of Audio: Triggering AR effects through the recognition of keywords [198].
• Recognition and Translation of Text: XR application interfaces can overlay text detected in an image onto the 3D world [199].
• Content Generation: Designing environments, characters, and other graphical objects [200].
• Virtual Humans: Training animations so that they can respond in real time [201].
• Virtual Assistants for Dynamic Customer Experiences: Training virtual assistants that answer customers' queries to provide a virtual experience of the latest trends [202].

4) How XAI can help: For the above-mentioned scenarios, 6G can provide the communication infrastructure required to handle the bandwidth demands of the large number of sensors involved in XR/holographic telepresence applications. Due to their black-box nature, the classifications/predictions of AI/ML algorithms may not instil confidence in decision-makers in mission-critical XR/holographic telepresence applications. XAI can fill this gap by providing valuable reasoning and justification for the predictions/classifications, which can convince the providers of XR/holographic telepresence applications to make decisions based on the outcomes of the AI/ML algorithms. For instance, the accurate estimation of the position of objects such as human fingers and hands is very important in controlling XR content. AI/ML algorithms can be used to estimate the position of these objects, but acting on their recommendations in real time may sometimes lead to inaccurate XR content due to false positives. If 6G is enabled with XAI, application providers will understand the reasoning behind the predictions/classifications of the AI/ML algorithms, which can help them make accurate decisions in real time, as the sketch below illustrates.
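As a concrete illustration, the following minimal sketch applies post-hoc feature attribution to a hand-position estimator. The wearable-sensor feature names, the synthetic data, and the choice of a random forest with scikit-learn's permutation importance are all our own illustrative assumptions rather than components of any cited XR system.

```python
# A minimal sketch: post-hoc attribution for a hypothetical hand-position
# estimator. Sensor names and synthetic data are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
sensors = ["imu_accel", "imu_gyro", "flex_index", "flex_thumb", "emg_forearm"]
X = rng.normal(size=(2000, len(sensors)))
# Synthetic ground truth: hand position depends mostly on the flex sensors.
y = 2.0 * X[:, 2] + 1.5 * X[:, 3] + 0.1 * rng.normal(size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much does shuffling each sensor's readings
# degrade the estimator? Large drops mark the sensors driving the output.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(sensors, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:12s} importance: {score:.3f}")
```

An XR provider could use such a ranking to check whether a low-confidence position estimate was driven by a faulty or noisy sensor before accepting the rendered content.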
Similarly, if 6G is enabled with XAI, virtual assistants can provide accurate information to customers. For instance, [203] proposed guidelines for using XAI techniques, together with simulations in XR, for secure human-robot interactions.

F. Smart Governance

1) Motivation: Smart governance is perceived as the intelligent use of ICT and innovation to facilitate and support enhanced decision-making, planning, and the citizens' role through collaborative decision-making [204]. The motivation for smart governance is similar to the ideals of good governance [205] in modern-day democracies, with an additional focus on ICT to uphold those ideals and ensure the development and welfare of the public and public resources [206]. The fundamental challenges that remain relevant in existing governance are corruption [207] and unfair policies, along with methods to improve education, security, transport, resource management, and economic infrastructure, which is where smart governance is envisioned to offer better solutions.

2) Requirements: To fully realise the vision of smart governance, we need ICT services with high-bandwidth connectivity, support for many more connected and communicating devices, and a stronger focus on understanding decision-making. Understanding decision-making means that the network supports XAI, empowering users with both the decision and an accompanying explanation that keeps them informed. This decision-making and explanation would demand real-time operation, for example while driving under cooperative traffic management; therefore, high latency would be catastrophic. Complex traffic management would require big data operations for faster and safer commutes when many devices and sensors produce data simultaneously. Besides this, by the time 6G goes into effect, XAI will have advanced further, meaning smart governance applications will require infrastructure support to consume XAI to its full potential. It will empower users with transparency, meaning 6G should support applications in delivering explanations rather than focusing only on improving hardware and signal processing.

3) Existing solutions: Smart infrastructure [208] is one of the vital aspects of smart governance, where infrastructure is equipped with sensors connected to a network to collect, analyse, and communicate structural data for maintenance purposes [209], a practice called smart monitoring. Here, the analysis is performed using AI techniques that conclude damage detection, classification, localisation, condition assessment, and remaining-life prediction, and therefore aims to save maintenance costs while minimising disruption [210]. Some of the areas where smart monitoring has been applied are bridge monitoring [211], flood monitoring [212], dam monitoring [213], subsea valve maintenance [214], and wind turbine monitoring [215].

4) How XAI can help: Current smart monitoring services make limited use of transparency in decision-making [216], which is one reason engineers don't fully trust automated decision-making and human oversight plays a crucial role in the monitoring. In the future, this lack of trust will be addressed by further improvement of algorithmic decision-making. Also, these decisions will be complemented with explanations, bringing more transparency and therefore increasing the usage of XAI. Besides this, 6G will maximise the use of sensors through the growth of IoT, making explanations much more relevant for high-stake decisions.
It is essential to note that human oversight cannot be entirely eliminated in smart monitoring, but an interface that explains decision-making as clearly as possible would be necessary for smart governance. To ensure meaningful public engagement with governance and governmental decision-making, from the day-to-day functioning of government to referendums, it is in the best interest of an ideal governance system to make all offices and functions transparent before the public. At present, governments opt for limited use of dashboards when it comes to smart cities and smart governance [217]. As the technology develops further, XAI will enable governments to make themselves more accountable before the public, where the public can demand transparency into decision-making and verify claimed performance, such as during election campaigns [218] and movements of public concern and social causes [219].

G. On-line Advertising in 6G

1) Motivation: The online advertising market started in 1994, and by 2019 worldwide digital advertising spending was around $333 billion, surpassing spending on traditional media (TV, radio, newspapers) in the US for the first time [220]. Big tech companies such as Google, Facebook, Microsoft, Amazon, and Twitter make billions from online advertising. 6G promises 99.9% coverage and connection speeds of 1 Tbps (around 142 hours of Netflix films in one second), enabling new applications such as holograms, vehicular communications, and intelligent IoT. Additionally, 6G will allow people to work with all kinds of gadgets and VR. There is no question that 6G will create new opportunities and possibilities for companies to advertise their products. Nowadays, many of the most powerful companies in the world have online advertising as their main source of income, and to increase their business volume, these companies develop very complex algorithms to keep users on their products. Online platforms can be addictive for many users [221]. Search engine results differ depending on the person, their viewed pages, and their location. We know very little about how these algorithms are shaped, and big companies claim that they cannot reveal their algorithms because they are their intellectual property. For example, in 2006 Google was accused of ranking its own products above Foundem (an online engine for finding the lowest online prices) [222]. Search engines have the power to filter the information reaching citizens and to promote certain values, making a profound impact on society. For example, Frances Haugen, a former Facebook employee, claimed before the US Congress on 5 October 2021 that Facebook puts profits over people and that the company repeatedly ignored the negative effects of its algorithms on the body image and mental health of its users in order to maintain its profits [223]. Countries need to create new legislation to protect citizens, and XAI is a promising instrument to address these threats. Another important aspect of 6G is that it enables the "Naked Approach" for devices [224]. The "Naked Approach" is a new strategy based on the Nordic perspective, in which humans no longer need to carry devices or gadgets. The devices are in the environment, appear only when they are needed, and authenticate users using biometrics. This approach is less intrusive and generates a more human-centred environment [224].
2) Requirements and challenges: Online adverts allow advertisers to filter out users who are unlikely to be interested in their products, making campaigns more effective. 6G will allow even more data to be collected from users. However, targeting based on certain characteristics can be used to discriminate against certain groups. For example, some advertisers intentionally hid housing adverts on Facebook from groups based on their race or gender; because of this, the US Department of Housing and Urban Development (HUD) charged Facebook with violating the Fair Housing Act [225]. Besides, online advertising can be used to influence the way people think. For example, in the famous Cambridge Analytica scandal, which came to light in 2018, the Facebook profiles of 87 million people were illegally collected and used for political advertising to influence the results of democratic elections [226]. Lastly, big tech companies with market shares above 90% hold monopolies in many niches. Google has been accused by the US Department of Justice of using anticompetitive practices to preserve a monopoly in search engine services. In particular, Google was paying Apple around $10 billion per year to keep Google as the default search engine on iPhones, iPads, and Mac computers [227]. Similarly, Apple is currently facing a lawsuit following its blocking, in August 2020, of the app Fortnite by the company Epic Games, and Google has also been taken to court by Epic Games for similar reasons concerning the Google Play Store. Both Google and Apple take a 30% cut of revenue for all the apps sold on their platforms, a cut that Tim Sweeney, the founder of Epic Games, considers excessive since there are no real alternatives [228].

From our perspective, 6G faces three important challenges related to online advertising: protecting individuals from developing addictive behaviours, stopping advertisers from discriminating against vulnerable groups, and creating an ecosystem beneficial for all companies, not only big tech. These challenges are addressed by creating new legal frameworks in which XAI plays a major role. Some legal frameworks already exist to prevent unethical behaviour by companies and to protect the rights of individuals. 6G will enable a much more realistic experience of certain products through VR, and therefore more data will be available to companies. The GDPR [229], introduced in Europe on 25 May 2018, is concerned with three aspects: data permission (people must give consent to receive promotional material, e.g., marketing emails), data access (people are entitled to request that companies remove the data collected about them), and data focus (companies need to justify the data they are collecting). GDPR is legally binding and companies cannot ignore it; fines can go up to €20 million or 4% of turnover, whichever is greater. Under GDPR, companies need users' consent before installing cookies, which are used to track user behaviour, on their computers [230]. "The Algorithmic Accountability Act of 2019" in the US has a similar purpose to GDPR and rebalances the power between big companies and users. Under this legislation, companies have to give a detailed description of how their automated decision systems work in a comprehensible way. In other words, companies need to justify to users the adverts they are displaying, to avoid unfair biases or other kinds of discrimination.
There are also other law drafts, like the California Privacy Rights Act [231] and the Colorado Privacy Act [232], that aim at other important facets of these algorithms, such as preventing companies from using manipulative design patterns that undermine individuals' ability to make free choices and encourage compulsive buying. This is closely related to the mental health problems, in the form of addictions, that users can develop on online platforms. In short, legal frameworks provide the general lines for protecting citizens from being abused by companies. As shown in Fig. 11, explainability is a key component in bridging the gap between the complex algorithms companies use to track users' online behaviour and the respect of citizens' rights. There are some important publications aimed at developing better algorithms to explain how adverts influence users. For example, E. Park et al. propose a method called Significant Feature Lasso (SF-Lasso) to find out which are the most important features of a video or a banner for a given audience [233]. Similarly, D. Li et al. developed a capsule-network-based method able to shed some light on how users' behaviour history influences the number of clicks and conversions for adverts [234].

Fig. 11. Representation of the typical workflow of users asking online platforms for explanations about the displayed adverts: 6G users connect to the online servers, the servers collect data, the data is used to generate models, the models recommend adverts, and, according to GDPR, users are entitled to ask for explanations.

3) How XAI can help: 6G connection speeds will pave the way for technologies that require broad bandwidth, like VR. VR started in the 1960s, became popular in some games in the 1990s, and was used by Ford in 1999 for producing its vehicles, but it is in the last decade that it has become more widespread [235]. To experience VR and 3D images, we need a helmet and, optionally, gloves or devices to interact with our hands [235]. VR is a great new technology with enough potential to make a profound shift in our daily lives, since it allows experimenting before buying. Head-mounted displays are becoming cheaper and can be used with a smartphone, which makes VR available to most people. For example, McDonald's created kids' meal boxes called "Happy Goggles" that could be transformed into VR headsets [236]. Volvo launched an app to promote its new XC90 car in which users could experience what it was like to drive one. Nissan created an app where consumers could design their own car. Honda launched an app to experience the superfast Dallara car while the Indianapolis race was taking place [235]. To conclude, 6G will propel technologies such as VR, and companies will have access to an incredible stream of user movements across all the senses. Companies with the ability to process this information will be able to target specific users with very powerful messages that engage all the senses. To make the technology beneficial for all parties, legal frameworks based on XAI are an important way in which society could take action to prevent abusive behaviour by big companies. Companies need to be accountable for how their algorithms impact the world and for how they deal with the data collected from users. We, as a society, need to know and understand how the personal information of each individual is treated and whether the way it is handled is fair according to our ethical standards.
Society also needs to be aware of the negative consequences of online platforms for citizens' mental health.

This section has reviewed the attempts and potential of XAI for a wide range of 6G applications. Unlike 6G technical aspects, 6G applications involve more types of stakeholders than just engineers, such as end-users and legal auditors. Therefore, the requirements of XAI for 6G applications have to be analysed case by case. In Table V, for each 6G use case, we describe the typical high-stake AI-powered decisions that need XAI the most. We then identify the level of demand for XAI for each stakeholder and for each XAI challenge that needs to be addressed in the future. For instance, collision avoidance for CAVs or UAVs is a typical high-stake AI decision, as an incorrect decision can lead to loss of life; thus, all stakeholders would need the highest level of explainability for such decisions. Moreover, collision avoidance requires both high model accuracy and explainability to give evidence for legal experts to judge responsibilities on various occasions. Another example from Table V is the quality inspection of Industry 5.0. If a product is mistakenly passed as qualified, the risk falls mainly on service providers, as it affects their reputation. As most Industry 5.0 activities take place within the factory, the demand for wider legal engagement and higher privacy protection is low. In Table VI, we also summarise the key references discussed in this section and the previous section on 6G applications and technical aspects. Much of the existing work focuses on the security, privacy, and edge AI technical aspects, as these contain the most high-stake decision-making processes. In addition, these technical aspects already have AI solutions deployed in many existing systems. However, lower-level intelligent radio and backhaul ZSM lack attention, partly because these technologies are still at an early research stage. The existing research interest is roughly evenly distributed across all the 6G applications discussed in this paper. CAVs and UAVs attract the most interest so far due to their large existing research communities, while intelligent health still demands more work as it requires close interdisciplinary collaboration.

TABLE VI (excerpt). Key references on XAI for 6G applications and technical aspects:
• XAI-based explanation is verified by decision tree and fuzzy inference model.
• [139] Developed an XAI framework using a human-learn hierarchical AI model for collaborative robots to obtain human-like pedagogic behaviours.
• [151] Incorporated a self-organizing neuro-fuzzy model and density-based feature selection to explain automated reacting decisions in autopilot systems.
• [152] Proposed a rule-based XAI technique to explain the flying-path decisions of UAVs regarding weather conditions, surrounding environments, and enemy locations.
• [154] Developed a hierarchical AI model to explain automated driving decisions in multiple interactive autopilot systems for CAVs.
• [172] Proposed an XAI methodology based on a random forest regressor to predict solar PV power generation.
• [173] Proposed a SHAP-based backpropagation deep explainer method that produces an interpretable model for emergency control applications in smart grids.
• [203] Proposed guidelines for using XAI techniques and simulations using XR for secure human-robot interactions.
• [233] Proposed SF-Lasso with selective inference to maximise explainability for video and picture ads.
• [234] Proposed an attentive capsule network for click-through rate and conversion rate prediction in online advertising.
• [212] Proposed an XAI solution using DL and Semantic Web technologies for flood monitoring.
• [208] Proposed a SHAP-based method to interpret the outputs of a multilayer perceptron for building-damage assessment.

This section presents the important standardization activities, legal frameworks, and research projects related to XAI for 6G.

IEEE has initiated several standardization activities related to XAI. In 2020, the IEEE Computer Society / Artificial Intelligence Standards Committee (C/AISC) approved a project to specify an architectural framework for XAI [237]. This framework facilitates the adoption of XAI by defining standards and application guidelines for the following areas:
• the definition of XAI;
• the taxonomy of XAI;
• the application scenarios of XAI;
• the performance evaluation methods for XAI systems.
In 2021, the IEEE Computational Intelligence Society Standards Committee (CIS/SC) approved a new project to develop a standard for XAI [238]. This group focuses on defining the mandatory and optional requirements that AI algorithms or systems need to satisfy to be recognized as explainable. In addition, the National Institute of Standards and Technology (NIST) has published a report on a set of principles that can be used to judge the explainability of AI decisions [239]. This report defines four principles of XAI:
• Explanation: the ability to provide reasons for the outcomes of the system.
• Meaningful: the provided explanation should be understandable and meaningful to the users.
• Explanation Accuracy: the provided explanation should accurately describe the process of generating the outcome.
• Knowledge Limits: the system should identify the cases it was not designed or approved to operate in, or in which it cannot operate reliably.
However, none of these XAI standardization activities focuses on 6G or communication networks. Thus, more focused XAI standardization activities for the B5G and 6G domains are yet to be initiated.

As explained in Section II, XAI is important for auditors in evolving legal frameworks that protect consumer rights under technology usage. Currently, there is no unified law that protects the consumer rights surrounding XAI technology. Nevertheless, different regions have started reacting to the evolution of AI and XAI. In future, with the advancement towards 6G and of XAI, we can expect mutually approved international regulations. For now, we list the legal frameworks emanating from different regions of the world concerning user privacy and the rights that ensure fairness.
• EU/EEA: The GDPR [240] is a regulation in EU law on data protection and privacy in the European Union (EU) and the European Economic Area (EEA), and it came into effect on 25 May 2018. The GDPR law sets obligations for businesses and grants rights to citizens. Under GDPR, businesses require data protection compliance to ensure data protection concerning users and privacy. Failure to comply can cost up to €20 million or 4% of their global revenue. Under GDPR compliance, users have the "right to explanation" in algorithmic decision-making [71], primarily in AI systems. In addition, the regulation protects the fair usage of data collection, processing, and application, while maintaining an up-to-date and accurate reflection of the data.
Finally, it allows users to demand a copy of their personal data from a business. This regulation comes closest to realising and facilitating the XAI goals of transparency and explanation.
• USA: The US has taken a different approach when it comes to data protection. Instead of having a general data protection regulation, the US has chosen to implement sector-specific privacy and data protection policies that work with state laws to protect American citizens' interests. Some of the key sectors are healthcare under HIPAA [241], the finance sector and consumer rights under GLBA [242], federal agencies under FISMA [243], and the protection of Controlled Unclassified Information in non-federal information systems and organisations under NIST 800-171 [244]. Overall, the US is concerned with data integrity as a commercial asset, whereas GDPR gives more weight to the individual rather than viewing data from the perspective of business interests. However, this diversity of legal frameworks will benefit the adoption of XAI across the full spectrum of the 6G world, in which businesses will communicate with each other through devices and with the individuals who are the end-users.
• Other regions: frameworks such as [247] and Japan's Act on Protection of Personal Information [248] are steps in similar directions to those discussed in 2016 for GDPR, which came into effect in 2018.

C. Ongoing reputable research projects for 6G using XAI

1) European Union (EU) Funded Projects: Due to the popularity of XAI topics, several funding organizations have offered funding for XAI-related topics. The European Union (EU) is one such leading funding organization, having funded several projects in XAI. Horizon 2020 (H2020) is one of the biggest funding programs supported by the EU; it is a seven-year program that operated from 2014 to 2020 and offered an estimated €80 billion of funding [249]. Under direct H2020 funding, several XAI-related projects have been funded, as listed below. The FeatureCloud project is focusing on designing secure and trusted medical health systems to reduce the impact of cybercrime and fuel cross-border collaborative data-mining efforts; to realize this objective, it integrates XAI with blockchain and federated learning techniques. The XMANAI project [252] is focusing on the use of XAI in manufacturing to increase trust in AI-based manufacturing processes; moreover, the practical utilization of XAI is demonstrated by the XMANAI project in four industrial plants. The DEEPCUBE project [253] is focusing on utilizing XAI pipelines for extensive data analysis; primarily, it analyses the Copernicus data collected by the European Union's Copernicus Space Programme [256]. The SPATIAL project [254] is focusing on the development of accountable, resilient, and trustworthy AI-based security and privacy solutions for future networks and ICT systems. Thus, the SPATIAL project focuses on using XAI to ensure the security and privacy of 5G and 6G networks; several B5G and 6G use cases, such as healthcare and IoT services, are considered in this project. The STAR project [255] is studying the use of XAI techniques to increase the transparency of AI-based manufacturing processes and to improve user trust in AI systems. In addition, H2020 has an element called the Marie Skłodowska-Curie Actions (MSCA) [257], which offer grants for all stages of researchers' careers. Under H2020 MSCA funding, there are two projects for training early-stage researchers (ESRs) in the domain of XAI applications:
• NL4XAI: Interactive Natural Language Technology for Explainable Artificial Intelligence [258]
• GECKO: Building Greener and more sustainable societies by filling the Knowledge gap in social science and engineering to enable responsible artificial intelligence co-creation [259]
The NL4XAI project [258] focuses on developing self-explanatory XAI systems by utilizing natural language generation and processing, argumentation technology, and interactive technology. The GECKO project [259] is exploring the development of interpretable XAI models to mitigate unintentionally harmful and poorly designed AI models.

2) United States Government Funded Projects: The Defense Advanced Research Projects Agency (DARPA) Information Innovation Office (I2O) in the United States has started a funding program called Explainable Artificial Intelligence (XAI) [260]. Under this program, DARPA has funded several projects focusing on different aspects of XAI, including:
• Driving-X: studies the use of XAI for self-driving vehicles.
In addition, an XAI center [261] focuses on the research and development of XAI technologies; it has supported several research activities, mainly in the medical and financial sectors. In 2019, the Europe-based CHIST-ERA organization funded 12 XAI projects under the "Explainable Machine Learning-based Artificial Intelligence" funding call. Some of these projects focus on B5G and 6G applications such as digital medicine, robotics, and predictive maintenance. Although many global-level research activities have been initiated, most of them treat 6G, 6G technologies, and 6G applications as only a minor focus.

TABLE VII (excerpt). Ongoing research projects using XAI, with their funders and programmes: XMANAI [252] (EU H2020), DEEPCUBE [253] (EU H2020), STAR [255] (EU H2020), NL4XAI [258] (EU MSCA), and GECKO [259] (EU MSCA), each with a low-to-medium level of direct 6G focus.

This section summarises the major limitations of the recent research on XAI for 6G described in the earlier sections. The corresponding challenges and future research directions to move XAI for 6G forward are also elaborated. As previously mentioned, XAI can increase the trust and transparency of AI-powered "human-centric" 6G networks for all stakeholders. However, several well-known limitations [18] of existing XAI methods could delay the successful deployment of XAI in the future B5G/6G infrastructure.
• Lack of in-model XAI methods: There is widespread concern [10] that the performance of AI models degrades as their level of explainability grows, due to the ever-increasing level of AI model complexity. In-model XAI methods are likely to be satisfactory in both AI performance and explainability, because they are designed to be self-explanatory rather than adding an XAI method after the AI decision is made (i.e., post-hoc XAI methods). However, most existing XAI methods are still post-hoc (e.g., LIME, SHAP), as they are more straightforward [18] to develop and to plug into existing AI algorithms than in-model methods.
• Lack of quantifiable explainability metrics: Visual and textual explanations are two commonly observed output formats of XAI methods. These explanations are intuitive for human beings but difficult to measure objectively using quantifiable metrics. Therefore, it is challenging for XAI system designers to achieve standard/unified systems that are simple to deploy and use for all stakeholders.
• Lack of engagement of stakeholders and legal experts: A strong motivation for introducing XAI technologies is to address legal requirements.
A typical example is the "right to explanation" in the EU GDPR, which requires machine algorithms to be capable of giving explanations for their outputs. In the first few years of XAI research, many new technologies have been proposed and applied by computer scientists. However, two important points are missing. Firstly, meaningful engagement of legal experts is required to ensure that XAI complies with legal requirements. Secondly, deep engagement of stakeholders is needed to ensure that the explanations provided by new XAI methods make good sense to them. Apart from the three limitations mentioned above, possible privacy leakage [262] is also a significant concern that exists inherently, without any assured solution. This possible privacy leakage refers to the fact that when XAI is applied, more information about the AI decision-making process is inevitably exposed externally, which likely leads to the leakage of users' data. Anonymisation might be a possible way to protect private information; however, if such protection can easily be violated, the risk of privacy leakage remains high.

This subsection details the future research challenges arising from the previously mentioned limitations of using XAI for 6G. These challenges mainly include: devising quantifiable metrics to measure the effectiveness of explainability, proposing new XAI methods that achieve a better trade-off between explainability and model performance, and improving societal and economic engagement through the application of XAI for 6G.

1) More quantifiable metrics for the measurement of explainability in 6G: In 2017, when DARPA launched its XAI program [10], it was clearly stated that the effectiveness of the explainability generated by XAI tools has to be measurable. Therefore, a quantifiable evaluation framework is highly required. To achieve such a framework, the engagement of stakeholders with the AI system is necessary, as they set the explainability goals when designing the AI system and can confirm when the realised XAI methods meet the objectives. Several subsequent attempts were made to address the challenge of measuring the effectiveness of the explainability of XAI systems. Doshi-Velez and Kim [263] proposed a taxonomy of XAI assessment methodology containing three classes: application-grounded (i.e., measured for specific applications), human-grounded (i.e., measured for specific stakeholders), and functionality-grounded (i.e., measured for specific AI algorithms). Hoffman et al. [264] discussed the evaluation of XAI in depth from both psychometric and AI perspectives. They proposed an evaluation process measuring: the goodness of explanations, users' satisfaction, users' understanding of the AI system, users' motivation for explanations, users' trust in and reliance on the AI, and the performance of the human-XAI work system. Islam et al. [265] proposed a way to quantify the explainability of XAI methods that considers: the number of input cognitive chunks, the number of output cognitive chunks, the interactions among these cognitive chunks, and the weights of these three factors according to the specific application. Inspired by the success of the system usability scale (SUS), which has been widely used for assessing the usability of system-human interfaces for over three decades, Holzinger et al. [266] proposed a system causability scale (SCS). The SCS has ten questions to measure whether XAI-generated explanations quickly meet the users' intentions.
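To make the idea of quantification concrete, the sketch below scores an explanation along the lines suggested by Islam et al. [265], combining counts of input chunks, output chunks, and their interactions with application-specific weights. The weighting scheme and the normalisation are our own illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch (not the exact formulation of [265]): score an
# explanation by its cognitive load, where fewer chunks and interactions
# mean a more digestible, hence "more explainable", output.
def explainability_score(n_input_chunks: int,
                         n_output_chunks: int,
                         n_interactions: int,
                         weights: tuple = (0.4, 0.4, 0.2)) -> float:
    """Return a score in (0, 1]; higher means easier to digest."""
    w_in, w_out, w_int = weights
    load = (w_in * n_input_chunks
            + w_out * n_output_chunks
            + w_int * n_interactions)
    return 1.0 / (1.0 + load)  # assumed normalisation: monotone and bounded

# Example: a 3-feature rule with one interaction scores as easier to
# digest than a 10-feature rule with five interactions.
print(explainability_score(3, 1, 1))   # ~0.357
print(explainability_score(10, 1, 5))  # ~0.156
```

Such a score is only meaningful relative to a stated application and stakeholder, which is precisely why the weights are left as free parameters.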
The last two years have seen more concrete work focusing on measuring XAI in specific domains. Similar to the well-known indicators quality of service (QoS) and quality of experience (QoE) in wireless multimedia transmission, Li et al. [16] proposed a metric called quality of trust (QoT) to quantify the level of trust when a particular XAI model is applied to 6G applications. QoT comprises physical and emotional trust, representing the objective and subjective assessment of explainability, respectively. For analysing photoplethysmography using deep learning methods, Zhang et al. [267] proposed two metrics (i.e., congruence and annotation classification) to measure the quality of XAI explanations against human experts' explanations. Similarly, Kaur et al. [268] proposed another metric called "Trustworthy Explainability Acceptance", which measures the Euclidean distance between XAI explanations and domain experts' reasoning in predicting Ductal Carcinoma in Situ (DCIS) recurrence using AI. Focusing on the objective assessment of post-hoc XAI explainability, Rosenfeld [269] proposed a new method considering the number of data features used in the XAI explanation, the number of rules in the generated explanation, the performance difference between the original and the explained (approximated) model, and the stability of the explanation. The key attempts of the research mentioned in this section are summarised in Table VIII. Clearly, despite much ongoing research into the measurement of XAI methods, there is still a lack of widely recognised quantifiable metrics, especially for 6G. We believe the possible solution to fill this gap is to actively engage with stakeholders starting from the design of AI-enabled 6G systems. As shown in Table VIII, the existing measurement frameworks [10], [263], [264] for general XAI can also guide the whole process of integrating XAI with the future 6G infrastructure. Although XAI methods are not considered in QoT [16], it is still an ideal example to follow when proposing and validating new metrics for measuring explainability in 6G-specific scenarios. It is worth noting that universally applicable metrics may not be possible, as XAI goals should be made specific to each domain and each stakeholder.

2) Performance and Explainability Trade-off in Large-scale 6G Infrastructure: During the last decades, researchers mainly focused on the performance of AI, paying little or no attention to interpretability [270]. However, GDPR, a regulation recognising users' "right to explanation" and holding automated algorithms accountable, shifted this focus; the resulting shift is now driving the trend towards the interpretability of models. In AI, a system's performance is commonly measured by two popular metrics that capture the effectiveness of its predictions. For classification, the metric used is accuracy, the percentage of correct predictions over the total predictions. For regression, the metric used is the mean square error (MSE), which captures the average of the squared differences between the predicted and the actual values. It is important to note that interpretability can be subjective, which explains why it lacks a quantification metric and is still seen as a quality [270]. On the other hand, interpretable models are those whose inference process can be understood thanks to the simplicity with which they were designed [271].
For example, in a simple decision tree, the nodes represent tests on specific attribute values, the edges are rules, and the leaf nodes represent the classes. With a simple decision tree, it is manageable to understand the overall decision by following the rules through edges and nodes down to the leaf node's final decision. In simple words, the model behaves like a set of if-else rules concluding in a decision, which makes it interpretable. These rules grow with the depth of the tree, and while the model remains interpretable, following it becomes more laborious for human cognition as the depth increases significantly. Simple decision trees are interpretable but unlikely to be the best performers; to overcome this performance limitation, committees of multiple trees, such as the Random Forest, were proposed, though with a compromise on interpretability. A Random Forest calculates the final decision by voting among multiple trees. The strategy of exploiting different ML models to predict a decision is called an ensemble method, and in doing so it undermines interpretability. Other methods, like the non-linear Support Vector Machine, are not interpretable because of the complexity of their non-linear decision boundaries [271]. Examples of interpretable models are Bayesian classifiers, linear models, and rule-based models; these models can be defined by several rules that retain interpretability at the expense of performance [272].

Fig. 12. Representation of interpretability versus accuracy according to [271].

As seen in Fig. 12, the least interpretable model is DL, a clear example of this paradigm; conversely, the more interpretable models tend to be less accurate. DL is an evolution of the ANN with an increased number of intermediate layers, which allows it to learn complex relations from huge datasets. DL models can be composed of 30 layers with thousands of nodes, where each node captures a distinct feature value. For example, the largest deep neural network as of February 2020, Turing-NLG (from Natural Language Generation), released by Microsoft, has around 17 billion parameters [273]. Such models are called black boxes because it is hard to see the inference behind predictions emanating from the combination of all the nodes across the multiple layers. It is evident from the research [271] that there is a clear trade-off between the performance (accuracy) and the simplicity (interpretability) of a model. The more complex a machine learning model is (e.g., a higher number of nodes, more rules, branches, or layers), the less likely it is to be interpretable; at the same time, adding complexity allows the model to fit complex decision boundaries, making its predictions more accurate. The fundamental challenge is to ensure higher accuracy without compromising interpretability. In some domains, interpretable methods can provide similar levels of accuracy, and therefore they are recommended [46]. That is why, when selecting the algorithms to build the large-scale 6G infrastructure, we need clear knowledge of which of the two (accuracy or interpretability) is more important in each domain. Some methods are more accurate, and others are more interpretable [271]. In the same way, the simpler the model is, the faster it will be at prediction time. Selecting a suitable method depending on the domain and application is essential.
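The trade-off described above can be reproduced in a few lines: a shallow decision tree yields human-readable rules, while a random forest trained on the same data typically scores higher but offers no single rule path to follow. The dataset and hyperparameters below are arbitrary illustrative choices.

```python
# A minimal sketch of the interpretability/accuracy trade-off:
# a shallow tree we can read as if-else rules vs. an opaque forest.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(f"tree accuracy:   {tree.score(X_te, y_te):.3f}")
print(f"forest accuracy: {forest.score(X_te, y_te):.3f}")

# The shallow tree's entire decision logic fits on one screen;
# the forest's 200 trees do not.
print(export_text(tree, feature_names=list(X.columns)))
```

On typical runs the forest edges out the depth-3 tree in accuracy, while only the tree's decision can be audited rule by rule, which is exactly the tension an XAI-enabled 6G system designer must resolve per module.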
In some domains, such as redirecting IP packets over a network, it is not crucial to understand the model's internal reasoning as long as it is very efficient. On the contrary, there are domains, such as medicine or high-stake decisions concerning an organisation, in which understanding the model's internal reasoning is paramount. Explainability is different from interpretability because it can explain through additional strategies that may have no direct link with the inference of the true AI model (see Section II-B2). Explainability patches black-box models without making them fully interpretable, aiming to provide users with a justifying explanation. Suppose we want both high performance and a justification of why a decision has been taken by the model. In that case, we can use different strategies, such as model-agnostic approaches or post-hoc explanations, to translate the complex reasoning of black-box models into a human-understandable form for a specific type of audience. It is always vital to consider the needs of the target audience to which explanations are presented; something readily understandable by one specialised group, such as cardiologists, might not work for financial experts. One of the core challenges in incorporating ML models into businesses is that business executives do not fully accept or trust ML models unless satisfactory explanations behind the decisions are provided. Therefore, it is essential to develop more reliable, explainability-based techniques [51]. Another position is to avoid black-box models entirely because they hide the actual inference process, which increases the chance of making blunders without knowing why. Similarly, DL methods that automate feature selection prevent developers from distinguishing important features from redundant or superfluous ones [45], [46], which significantly hinders overcoming black-box challenges. There are two possible approaches to overcoming the black-box limitation: first, promote the usage of simple models with limited but acceptable performance [45]; second, develop better and more effective explainability techniques, which is the growing trend in the DL community [51]. It is vital to decide which algorithmic design should be preferred in each of the modules that compose 6G. In some modules, like redirecting IP packets efficiently or selecting the best antenna for a given user, we will be more interested in accuracy than in interpretability. Nevertheless, other modules will interact with humans more directly, such as route suggestions for cars, for which we need to prioritise the interpretability of the models [5].

3) Societal and Economical Engagement in Integrating XAI with 6G Infrastructure: The third challenge of adopting XAI for 6G concerns societal and economic engagement. Specifically, we first examine trends concerning laws and ethics; we then discuss commercial challenges concerning technology producers and intellectual property; finally, we present the need for new laws and regulations to evolve and support XAI for 6G.
• Laws & Ethics: With the growing demand for and development of XAI and IoT, the challenges concerning ethics will continue to evolve [274], empowering the user with more choices and preferences regarding transparency of automated decision-making. Moreover, with the growth of IoT into IoE within 6G, more devices with decision-making capability will be connected, raising security and privacy concerns. Several devices will likely interact directly without requiring a human in the loop.
However, the endpoint will serve human demands. Here, additional laws and a shared understanding of ethics will be in demand to ensure that humans control all activities. Laws like GDPR deserve mention, as they will lead and enable innovation with an optimal level of governance over automated decision-making. However, GDPR alone does not encompass the full spectrum of rules around the globe. In addition, the Chinese PIPL and other regional laws would collectively undertake the future challenges of ethical practices concerning automated-decision algorithms and devices.
• Commercial Side: Making decision-making algorithms explainable, whether through law or practice, can cause concern for technology producers. A producer may find making algorithms transparent a risky business for protecting the intellectual property of the technology. This concern will grow further when 6G starts to converge on an application-centric approach to automated decision-making, with the vision of IoE. Here, the approach taken by the US, which views data protection and data integrity as a commercial asset, unlike GDPR, is vital to consider. It is important to note that explaining decision-making can leak critical information to competitors, which can quickly compromise commercial assets and undermine the freedom of ownership. The right approach would balance GDPR and US laws regarding data privacy and protection while safeguarding both consumer rights and businesses.
• Compliance of New Laws: With the boom of 6G, a continuous evolution will take place from IoT to IoE, focusing on applications that can explain automated decision-making. However, unlike hardware infrastructure, software applications are upgraded quickly, and many new types of applications keep appearing, unlike new hardware development. At this pace, future applications required to explain decisions could be held back by strict laws that do not permit the flexibility to accommodate all types of growing needs. Here, the laws should tolerate flexibility in asserting the law's spirit of encouraging best practices. These laws should enable the protection of consumer rights as well as adequate protection of commercial assets, ensuring both personal freedom and the freedom of ownership. Finally, future laws should promote globally accepted rules while guaranteeing regional and transnational freedoms that provide some adjustment and flexibility. Otherwise, international acceptance of a law that confirms the right to explanation in exact detail may be too idealistic to agree upon before 2030. Common future data privacy laws should strike a balance that protects businesses and users from exploitation while respecting national policies globally. A possible solution is to evolve current laws with a balanced approach that recognises consumer rights in terms of ethical considerations and addresses technology producers' intellectual property concerns. In addition, these laws should be flexible enough to allow rapid application development that matches the pace of 6G adoption. Since the upcoming 6G is a global phenomenon, future products would greatly benefit from a common international law. Such a law would better connect the world, safeguarding consumers' rights and technology producers while ensuring the rapid growth and adoption of 6G products internationally.

At the beginning of this section, as XAI is still in its infancy, we briefly discussed three main limitations of using XAI for 6G systems.
Then, we described three promising research challenges to address these limitations. Firstly, more in-model XAI methods should be proposed to achieve a better trade-off between explainability and AI model performance. Secondly, more quantifiable metrics are required to verify the XAI goals. Thirdly, societal and economic engagement should be boosted to meet the XAI requirements set in local regulations. Even when XAI becomes mature, it will still be a double-edged sword, like every other technology. When applying XAI for 6G, XAI's explanations help build trust for stakeholders; however, these explanations also expose extra information about the AI decision-making process. Such a risk of privacy leakage could be alleviated by working closely with stakeholders to perform privacy violation checks when developing XAI solutions.

This section briefly summarises the lessons learned from the topics discussed in the previous sections, which include the background of AI and XAI, the impact of XAI on typical 6G technical aspects and use cases, the limitations and challenges of developing XAI for 6G, and the major research projects and standardisation activities on XAI and 6G. The corresponding future research directions for each of these topics are also discussed.

A. Background

1) Lessons learned: One of the most prominent lessons learned during several decades of research, development, and commercialisation of AI is that the performance of AI, and especially of ML systems, relies heavily on the quality of the training dataset [275]. In many cases, an incorrect AI prediction or classification will lead to massive losses for 6G stakeholders in both economic and non-economic (e.g., health and life) terms. This means that the robustness of AI systems is of particular importance. Most existing solutions [275] focus on improving AI robustness by carefully designing the data pipeline, including data collection, pre-processing, augmentation, and dimensionality reduction. This helps reduce the error rate, but the AI black-box mechanisms remain unaddressed. XAI is a promising set of technologies that provide a level of transparency on the decision-making process behind the AI black box. Though XAI is still in its early stages, the most important lesson learned so far is the trade-off between explainability and performance: when stakeholders need a higher degree of explainability, AI system designers may have to compromise on the quality of the prediction/classification results. In general, before 2010, although AI scientists made great efforts, the focus was mainly on improving model accuracy. As a result, the complexity of AI models increased sharply (i.e., from simple rule-based models like decision trees to DL now), while the non-technical impact of complicated AI models was overlooked. After 2010, the focus shifted to concerns about the transparency of AI decisions, such as decisions made on the basis of people's ethnic background. These concerns involve more questions, such as why the system decides in a particular way. In the imminent 6G age, when various devices can talk to each other, more AI decisions will be made at a much faster pace. The transparency and trustworthiness of responsible AI have to be considered formally so that experts from academia and industry can improve the overall technology ecosystem for the future user experience.

2) Future Directions: The wide and deep convergence of XAI with existing AI systems is foreseen to be increasingly important.
Some promising future research directions for this convergence include: how to measure the level of explainability, how to satisfy the explainability demands of multiple stakeholders, how to ensure high performance while still providing a high level of explainability, and how to work collaboratively in a multi-disciplinary team (typically, ICT researchers with legal experts).

B. XAI for 6G Technical Aspects

1) Lessons learned: Conventional AI/ML algorithms and innovative DL architectures have been applied to different tasks across the technical aspects of 6G networks. The objectives include accuracy improvement in intelligent radio and edge networks, reliability enhancement in network security and data privacy, and optimisation in resource management. While the system performance and automated decisions of communication networks mostly depend on AI models, these models do not usually provide descriptions or explanations of their results, especially from the how-why-when perspective. XAI, owing to its three principal features of explainability, interpretability, and accountability, represents a promising technique to help not only end-users but also AI stakeholders understand how an AI model processes data and produces outcomes automatically, which in turn allows end-users to be confident in its decisions and engineers to comprehend their systems. Some ML models present good interpretability, but their performance in terms of accuracy is unacceptable; therefore, the balance between interpretability and accuracy should be considered in XAI-based system design. For example, some XAI approaches have exploited the greater interpretability of AI models such as rule-based models and linear regression to generate explanations, but their accuracy cannot satisfy the baseline QoS in 6G. On the other hand, DL has shown high performance in dealing with many fundamental tasks, such as detection, classification, and recognition, but offers little or no interpretability. Furthermore, depending on the input data type, storage infrastructure, computing platform, and communication infrastructure, the XAI method for explanation generation should be chosen appropriately to deal with the specific technical problem while ensuring reasonable performance in terms of accuracy and complexity. Besides, the explanation should be simple for end-users with less domain knowledge and advanced for AI stakeholders with high expertise; it can take the form of numerical results, text, graphs, images, or simulations. An explanation can contain details of how a statistical AI model derives a prediction from a feature set, a decision path from a decision tree, a rule from a simple model, or a visual operation graph of information flow.

2) Future Direction: Despite the clear benefits of interpretability and explainability, the utilisation and development of XAI for the different technical aspects of 6G networks are still limited. In this context, future work can focus on incorporating DL with several explanation techniques for XAI at multiple levels, from processing units to operation modules and systems. For example, a visual explanation technique can be applied to capture and then visualise the feature activation maps of a trained CNN, which are helpful for explaining the labels output by a DL-based automatic modulation classifier in intelligent radio, as the sketch below illustrates.
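As one possible realisation, the following sketch computes a class activation map (CAM) in PyTorch for a small CNN of the kind used for automatic modulation classification. The toy network, the input shape (an I/Q signal treated as a 2-channel 1D sequence), and the random example are hypothetical stand-ins; none of the cited works' architectures is reproduced here.

```python
# A hedged sketch: class activation mapping (CAM) for a hypothetical
# 1D-CNN modulation classifier operating on I/Q samples.
import torch
import torch.nn as nn

class TinyAMC(nn.Module):
    """Toy classifier: conv features -> global average pool -> linear head."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, 2, time)
        f = self.features(x)                  # (batch, 64, time)
        return self.head(f.mean(dim=-1)), f   # logits and feature maps

model = TinyAMC().eval()
iq = torch.randn(1, 2, 1024)                  # one random, untrained example
with torch.no_grad():
    logits, fmaps = model(iq)
pred = logits.argmax(dim=-1).item()

# CAM: weight each feature map by the head weights of the predicted class,
# yielding a per-time-step relevance curve over the received signal.
cam = torch.einsum("c,bct->bt", model.head.weight[pred], fmaps)
cam = torch.relu(cam)
cam = cam / (cam.max() + 1e-8)                # normalise to [0, 1]
print(f"predicted class {pred}; most relevant sample: {cam.argmax().item()}")
```

A radio engineer could overlay such a relevance curve on the raw I/Q trace to verify that the classifier reacted to genuine signal structure rather than, say, a burst of interference.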
Besides visual explanation techniques (e.g., class activation mapping, peak response maps, and class-enhanced attentive response), other textual explanation (e.g., question-answering and semantic information retrieval) and numerical explanation (e.g., concept activation vectors and local interpretable model-agnostic explanations) approaches should be leveraged for the broad spectrum of 6G technical aspects. The combination of different explanation techniques (such as numerical and visual explanations) can be helpful for a sequence of interactive decisions produced by hierarchical AI models in complex systems. Many existing XAI works have concentrated on explainability for ML at different stages of developing AI systems to address several technical tasks in 6G. However, there remain gaps in data explanation methods, which are helpful for selecting and exploring a better-suited model later. Additionally, XAI methods should be designed to incorporate domain knowledge to explain useful inferences under both clear and uncertain circumstances.

C. XAI for 6G use cases

1) Lessons learned: Existing AI methodologies can provide predictions/classifications for future 6G-based applications such as healthcare, Industry 5.0, CAVs, the smart grid, multi-sensory XR applications, smart governance, and online advertising to help make decisions in real time. Decision-making in mission-critical applications such as healthcare, the smart grid, and smart governance should be done very carefully, as mistakes may result in the loss of property and lives and cause significant danger. However, the black-box nature of AI-based algorithms makes it very difficult for decision-makers to trust the results of these algorithms, as they lack justification/explanation. The explanation should be technologically aware and thoroughly address the ethical, legal, and societal questions. To address these issues, XAI will be essential in future 6G-based applications (especially healthcare, autonomous driving, and smart governance) to trust, understand, and improve the accountability of the decisions made by AI-based algorithms. It will help instil confidence in end-users, as they can understand the decision-making process of these algorithms. However, several key challenges and open issues need to be addressed to realise the full potential of XAI in 6G-based applications, as discussed below. Improved interpretability may result in reduced performance of AI algorithms in terms of real-time decision-making and prediction accuracy, which is unacceptable in mission-critical applications such as smart healthcare, autonomous vehicles, and the smart grid. Hence, the trade-off between explainability and performance is an open issue that needs addressing. Another important challenge is the high dimensionality of the data generated in real time by IoT-based applications. Furthermore, generating labels in real time to make the data suitable for classification in the big data era is a tedious and demanding task. In the case of 6G-enabled applications that use heterogeneous networks, providing explainable and customised decisions is another open issue that needs to be addressed in future research. Another critical issue is the privacy preservation of the sensitive data generated by applications such as healthcare, connected and autonomous vehicles, and the smart grid.

2) Future Direction: Some of the potential research directions that can address the aforementioned challenges and open issues are as follows.
C. XAI for 6G Use Cases

1) Lessons learned: Existing AI methodologies can provide predictions and classifications for future 6G-based applications such as healthcare, Industry 5.0, CAVs, smart grid, multi-sensory XR applications, smart governance, and online advertising, helping to make decisions in real-time. Decision-making in mission-critical applications such as healthcare, smart grid, and smart governance must be handled very carefully, as errors may result in the loss of property and lives. However, the black-box nature of AI-based algorithms makes it very difficult for decision-makers to trust their results, as they lack justification or explanation. The explanation should be technology-aware and thoroughly address the ethical, legal, and societal questions. To address these issues, XAI will be essential in future 6G-based applications (especially healthcare, autonomous driving, and smart governance) to trust, understand, and improve the accountability of the decisions made by AI-based algorithms. It will help instil confidence in end-users, as they can understand the decision-making process of these algorithms. However, several key challenges and open issues, discussed below, need to be addressed to realise the full potential of XAI in 6G-based applications. Improved interpretability may reduce the performance of AI algorithms in terms of real-time decision-making and prediction accuracy, which is unacceptable in mission-critical applications such as smart healthcare, autonomous vehicles, and the smart grid; hence, the trade-off between explainability and performance is an open issue that needs addressing. Another important challenge is the high dimensionality of the data generated in real-time by IoT-based applications. Furthermore, generating labels in real-time to make big data suitable for classification is a tedious and demanding task. For 6G-enabled applications that use heterogeneous networks, providing explainable and customised decisions is another open issue for future research. A further critical issue is the privacy preservation of sensitive data generated by applications such as healthcare, connected and autonomous vehicles, and the smart grid.

2) Future Direction: Some potential research directions that can address the aforementioned challenges and open issues are as follows. Researchers should focus on developing XAI algorithms that balance explainability against the performance of the AI/ML algorithms, for example by using techno-economic analysis [276], [277]. Several soft computing techniques, such as meta-heuristic algorithms, principal component analysis, and fuzzy systems, can be considered for addressing the challenge of high dimensionality through dimensionality reduction [278]. Unsupervised learning algorithms such as clustering, which do not require labels for prediction or classification, can address the issue of generating labels in real-time for 6G-based applications [103]. Federated learning (FL), a recent development in ML, can be adopted in XAI-enabled 6G applications to provide customised decisions over heterogeneous networks [279], and it can be integrated with XAI-enabled 6G applications to address privacy preservation [280]. A sketch combining two of these directions, dimensionality reduction and label-free clustering, follows.
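The minimal sketch below pairs principal component analysis with k-means clustering to produce pseudo-labels for unlabelled, high-dimensional telemetry. The synthetic data, the number of components, and the choice of three clusters are toy assumptions for illustration only.

```python
# Sketch: PCA to tame high-dimensional IoT telemetry, then k-means to group
# traffic without manual labels; cluster ids can serve as weak pseudo-labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# 600 devices x 50 raw telemetry features (synthetic placeholder data)
X = np.vstack([rng.normal(loc=m, size=(200, 50)) for m in (0.0, 2.0, -2.0)])

pipeline = make_pipeline(PCA(n_components=5), KMeans(n_clusters=3, n_init=10))
pseudo_labels = pipeline.fit_predict(X)
print("cluster sizes:", np.bincount(pseudo_labels))
```

Because both PCA loadings and cluster centroids are directly inspectable, this kind of pipeline is also easier to explain to stakeholders than an end-to-end deep model, at the cost of modelling power.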
D. Limitations and Challenges of XAI for 6G

1) Lessons learned: Recent studies of XAI methods in 6G have three limitations. First, few in-model XAI methods have been proposed so far: most existing XAI methods can only explain the AI black box after the 6G AI decision-making results are given, which prevents reaching a better trade-off between explainability and model performance in 6G. Second, although many research studies emphasise the importance of XAI measurement, there are no widely recognised quantifiable metrics for explainability in typical AI applications in 6G. Third, there is a lack of multi-disciplinary collaboration between experts in AI and the legal community.

2) Future Direction: In the upcoming years, we expect to see more applications of XAI filling the gaps within existing AI-driven 6G use cases and technical aspects. In particular, it is promising to develop more in-model XAI technologies, which can achieve both a higher level of explainability and high decision-making performance. Moreover, well-recognised metrics to evaluate explainability will likely be proposed in a domain-specific manner for specific groups of stakeholders. The engagement of legal experts is highly desirable at all major stages, from the design to the evaluation of any new XAI method in 6G. The upcoming 6G also brings new challenges to emerging technologies such as quantum computing and blockchain 3.0, which may need XAI intensively in the future. Quantum computing has evolved quickly in recent years, after a long silence since it was first discussed theoretically in the early 1980s [281]. Quantum computers have great potential to solve particular types of computing tasks that used to be considered infeasible for classical computers. As one type of quantum computing, the power of adiabatic quantum computation was validated in a 6G smart transportation pilot project for assigning optimal bus routes in the city of Lisbon, Portugal [282]; a toy illustration of the underlying optimisation task appears at the end of this subsection. Such quantum-computing-powered AI decision-making will become exponentially faster in the 6G age as more input data arrives every second, and it will require XAI technology to explain high-stakes decisions under strict performance pressure where data flows are of extremely high volume. Blockchain 3.0 [283] generally refers to all non-cryptocurrency blockchain applications, such as electronic voting and supply chain management. In the 6G era, many heterogeneous blockchain systems will need to be interconnected, which poses a great challenge to balancing network performance against system security and privacy demands. Moreover, the high heterogeneity of blockchain 3.0 also implies that more diverse stakeholders from different organisations will be involved in 6G AI-assisted decision-making. Ensuring that all types of stakeholders are satisfied with the outcomes of XAI, and keeping XAI methods compliant with local regulations, will remain vital.
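For intuition on the quantum-annealing workloads mentioned above: adiabatic annealers of the kind used in such pilots typically minimise quadratic unconstrained binary optimisation (QUBO) objectives. The toy sketch below minimises a random QUBO with classical simulated annealing as a stand-in; the matrix, problem size, and cooling schedule are arbitrary assumptions, and no quantum hardware or vendor API is involved.

```python
# Classical stand-in for the QUBO problems solved by quantum annealers
# (e.g. route assignment): minimise x^T Q x over binary x.
import numpy as np

rng = np.random.default_rng(7)
n = 12
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2                                 # symmetric toy QUBO matrix

def energy(x):
    return float(x @ Q @ x)

x = rng.integers(0, 2, size=n)                    # random binary start
best, best_e = x.copy(), energy(x)
for temp in np.geomspace(2.0, 0.01, 5000):        # simple cooling schedule
    i = rng.integers(n)
    cand = x.copy()
    cand[i] ^= 1                                  # flip one bit
    delta = energy(cand) - energy(x)
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        x = cand
        if energy(x) < best_e:
            best, best_e = x.copy(), energy(x)
print("best assignment:", best, "energy:", round(best_e, 3))
```

Explaining why a particular binary assignment was chosen, under the latency budgets of 6G, is precisely the kind of high-stakes, high-volume XAI challenge this subsection highlights.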
E. Standardisation and Research Projects

1) Lessons learned: XAI is becoming an exciting research area under 6G. At the moment, IEEE is leading the standardisation activities related to XAI; in particular, the IEEE Computer Society Artificial Intelligence Standards Committee (C/AISC) and the IEEE Computational Intelligence Society Standards Committee (CIS/SC) are leading these tasks. However, the current 6G SDOs, such as ETSI and 3GPP, have not yet focused fully on the XAI domain. Legal frameworks relevant to XAI have already been developed at the global level, including in the EU and the USA. Several reputable research projects on 6G using XAI have already started: the EU H2020 funding programme, the EU CHIST-ERA funding programme, the EU MSCA programme, and the US DARPA programme have funded many projects in the XAI domain. Many of these projects are not directly related to 6G; however, most of them focus on technologies and applications associated with B5G and 6G networks.

2) Future Direction: Since AI will play a significant role in 6G networks, XAI must be considered a necessity for 6G applications. Initially, research projects can build new knowledge on utilising XAI for B5G and 6G networks. EU funding programmes such as Horizon Europe and the Eureka programmes can be ideal venues for funding research on XAI and 6G integration. In addition, global 6G programmes such as Japan's 6G/B5G Promotion Strategy and South Korea's MSIT 6G research programme can also be possible venues for obtaining research funding for XAI and 6G integration. 6G standardisation is essential to define the technological requirements of 6G networks and to select suitable technologies for deploying them. XAI will be considered one of the critical technologies to utilise in 6G networks; in particular, XAI can be coupled with standardisation activities related to Zero-touch Service and Network Management (ZSM). The leading telecom SDOs, such as the European Telecommunications Standards Institute (ETSI), Next Generation Mobile Networks (NGMN), the 3rd Generation Partnership Project (3GPP), and the International Telecommunication Union - Telecommunication Standardization Sector (ITU-T), will include XAI as a potential technology in their 6G standardisation activities.

Conclusion

This paper comprehensively reviews and analyses the potential of applying XAI methods to make a future AI-based 6G system more transparent and trustworthy. The existing ideas for designing 6G networks are overviewed at the beginning of this paper, along with an exhaustive survey of state-of-the-art AI and XAI methods. Later in the paper, several representative 6G technical aspects and use cases are carefully analysed with respect to their existing AI-based solutions and the trend of applying XAI to enhance the trustworthiness of 6G network systems. At the end of the paper, the lessons learned about the limitations of existing work are summarised to remind researchers and practitioners that XAI cannot solve all problems. Accordingly, research challenges that promise to overcome or alleviate the potential limitations of XAI are also highlighted. We hope this survey can guide future 6G developments in a more sustainable direction. Overall, this research aligns with goals 9, 11, 16, and 17 of the UN-SDG. In terms of goal 9, this research promotes resilient infrastructure and sustainable industrialisation, fostering innovation through the combination of XAI and 6G; this paper also highlights important ongoing projects in this direction. In terms of goal 11, the discussed use cases, such as Industry 5.0 and intelligent health, promote sustainable cities and communities, making cities and human settlements inclusive, safe, resilient, and sustainable. In terms of goal 16, the discussion of legal aspects, ethics, consumer rights, and commercial protection aligns with the promotion of peace, justice, and strong institutions, ensuring access to justice for all and building effective, accountable, and inclusive institutions at all levels. Finally, in terms of goal 17, XAI and 6G would enable partnerships among multiple stakeholders: technology producers need to partner with domain and legal experts, and global coordination will be required to adopt and define a framework for 6G and XAI to unleash their full potential.

Acknowledgment

This work is partly supported by the European Commission under the SPATIAL project (Grant No. 101021808), the Academy of Finland under the 6Genesis project (Grant No. 318927), and Science Foundation Ireland under the CONNECT Phase 2 (Grant No. 13/RC/2077 P2) and ADAPT Centre Phase 2 (Grant No. 13/RC/2106 P2) projects.

References

- 6G vision and requirements: Is there any need for beyond 5G?
- AI-driven zero touch network and service management in 5G and beyond: Challenges and research directions
- Millimeter-wave communications with non-orthogonal multiple access for B5G/6G
- Optimizing space-air-ground integrated networks by artificial intelligence
- The roadmap to 6G: AI empowered wireless networks
- What should 6G be?
- A vision of 6G wireless systems: Applications, trends, technologies, and open research problems
- 6G wireless networks: Vision, requirements, architecture, and key technologies
- Artificial-intelligence-enabled intelligent 6G networks
- DARPA's explainable artificial intelligence (XAI) program
- Opportunities and challenges in explainable artificial intelligence (XAI): A survey
- Machine learning for 5G/B5G mobile and wireless communications: Potential, limitations, and future directions
- The roadmap to 6G security and privacy
- Explainable artificial intelligence for 6G: Improving trust between human and machine
- A vision of 6G wireless systems: Applications, trends, technologies, and open research problems
- Trustworthy deep learning in 6G-enabled mass autonomy: From concept to quality-of-trust key performance indicators
- Peeking inside the black-box: A survey on explainable artificial intelligence (XAI)
- Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
- AI Watch: Defining artificial intelligence. Towards an operational definition and taxonomy of artificial intelligence
- Alan Turing and the development of artificial intelligence
- The summer of 1956 at Dartmouth
- A brief history of artificial intelligence: On the past, present, and future of artificial intelligence
- Introduction to artificial intelligence
- Google AI algorithm masters ancient game of Go
- Mastering complex control in MOBA games with deep reinforcement learning
- Comparing deep neural networks against humans: Object recognition when the signal gets weaker
- Mastering the game of Go without human knowledge
- Brain intelligence: Go beyond artificial intelligence
- Linear and logistic regression analysis
- Supervised machine learning techniques: An overview with applications to banking
- Random forest classifier for remote sensing classification
- Deep learning in neural networks: An overview
- Supervised machine learning: A review of classification techniques
- Unsupervised learning
- Fast algorithms for mining association rules
- Unsupervised learning
- Reinforcement learning: An introduction
- Taxonomy of reinforcement learning algorithms
- Proximal policy optimization algorithms
- Actor-critic algorithms
- Deep learning of representations for unsupervised and transfer learning
- Learning deep architectures for AI
- Long short-term memory recurrent neural network architectures for large scale acoustic modeling
- Human-level control through deep reinforcement learning
- Interpretable machine learning: Fundamental principles and 10 grand challenges
- Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
- "Why should I trust you?" Explaining the predictions of any classifier
- Accountability for the use of algorithms in a big data environment
- Interpretability of machine learning-based prediction models in healthcare
- One explanation does not fit all: A toolkit and taxonomy of AI explainability techniques
- Explainable artificial intelligence: A systematic review
- Explainable deep learning models in medical image analysis
- Evaluating recurrent neural network explanations
- Model-agnostic interpretability of machine learning
- Towards explainable artificial intelligence
- Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning
- TED: Teaching AI to explain its decisions (AAAI/ACM Conference on AI)
- Interpretable convolutional neural networks
- A unified approach to interpreting model predictions
- Consistent individualized feature attribution for tree ensembles
- CERTIFAI: Counterfactual explanations for robustness, transparency, interpretability, and fairness of artificial intelligence models
- Counterfactual explanations without opening the black box: Automated decisions and the GDPR
- Explainable AI: Interpreting, explaining and visualizing deep learning
- Explainability in deep reinforcement learning
- Programmatically interpretable reinforcement learning
- TORCS, the open racing car simulator
- Hierarchical and interpretable skill acquisition in multi-task reinforcement learning
- Explainable reinforcement learning through a causal lens
- Can we do better explanations? A proposal of user-centered explainable AI
- Reconstructive explanation: Explanation as complex problem solving
- EVE: Explainable vector based embedding technique using Wikipedia
- LIT@EVE: Explainable recommendation based on Wikipedia concept vectors
- Intelligent radio signal processing: A survey
- MCNet: An efficient CNN architecture for robust automatic modulation classification
- Automatic modulation classification: A deep architecture survey
- Artificial intelligence enabled wireless networking for 5G and beyond: Recent advances and future challenges
- Sparsely connected CNN for efficient automatic modulation recognition
- Explainable neural network-based modulation classification via concept bottleneck models
- An interpretable neural network for configuring programmable wireless environments
- Physical-layer security in 6G networks
- 6G security challenges and potential solutions (EuCNC/6G Summit)
- The roadmap to 6G security and privacy
- Future intelligent and secure vehicular network toward 6G: Machine-learning approaches
- AI and 6G security: Opportunities and challenges
- Machine learning threatens 5G security
- When machine learning meets privacy in 6G: A survey
- Side-channel analysis for intelligent and connected vehicle security: A new perspective
- Preserving model privacy for machine learning in distributed systems
- Distributed privacy-preserving collaborative intrusion detection systems for VANETs
- Significant permission identification for machine-learning-based Android malware detection
- Machine learning for resource management in cellular and IoT networks: Potentials, current solutions, and open challenges
- When deep reinforcement learning meets federated learning: Intelligent multi-timescale resource management for multi-access edge computing in 5G ultra-dense network
- Deep learning based radio resource management in NOMA networks: User association, subchannel and power allocation
- Artificial intelligence-empowered resource management for future wireless communications: A survey
- D2D power control based on supervised and unsupervised learning
- Using machine learning for adaptive interference suppression in wireless sensor networks
- A hierarchical framework of cloud resource allocation and power management using deep reinforcement learning
- Traffic graph convolutional recurrent neural network: A deep learning framework for network-scale traffic learning and forecasting
- In-edge AI: Intelligentizing mobile edge computing, caching and communication by federated learning
- Toward self-learning edge intelligence in 6G
- Edge artificial intelligence in 6G systems: Theory, key techniques, and applications
- Toward the 6G network era: Opportunities and challenges
- Machine learning techniques for 5G and beyond
- A survey of 6G wireless communications: Emerging technologies
- Sec-EdgeAI: AI for edge security vs security for edge AI
- Security and privacy for edge intelligence in 5G and beyond networks: Challenges and solutions
- Explainable AI and mass surveillance system-based healthcare framework to combat COVID-19 like pandemics
- ZSM security: Threat surface and best practices
- INSPIRE-5Gplus: Intelligent security and pervasive trust for 5G and beyond networks
- AIOps (Artificial Intelligence for IT Operations)
- The challenge of zero touch and explainable AI
- Network slicing meets artificial intelligence: An AI-based framework for slice management
- Resource reservation in sliced networks: An explainable artificial intelligence (XAI) approach
- Toward 6G networks: Use cases and technologies
- Survey on 6G frontiers: Trends, applications, requirements, technologies and future research
- When eHealth meets IoT: A smart wireless system for post-stroke home rehabilitation
- Role of machine learning in medical research: A survey
- The future of digital health with federated learning
- Secure and robust machine learning for healthcare: A survey
- Physical activity recognition with statistical-deep fusion model using multiple sensory data for smart health
- Convolutional network with twofold feature augmentation for diabetic retinopathy recognition from multi-modal images
- A two-stage convolutional neural network for lung nodule detection
- Hearables: Automatic overnight sleep monitoring with standardized in-ear EEG sensor
- Bimodal learning via trilogy of skip-connection deep networks for diabetic retinopathy risk progression identification
- GroupINN: Grouping-based interpretable neural network for classification of limited, noisy brain data
- Explainability for artificial intelligence in healthcare: A multidisciplinary perspective
- CheXplain: Enabling physicians to explore and understand data-driven, AI-enabled medical imaging analysis
- A survey on explainable artificial intelligence (XAI): Toward medical XAI
- Robots in industry: The past, present, and future of a growing collaboration with humans
- Cognitive digital twins for smart manufacturing
- Industry 5.0: A survey on enabling technologies and potential applications
- Industry 5.0 and human-robot co-working
- A machine learning approach for collaborative robot smart manufacturing inspection for quality control systems
- Learn how to assist humans through human teaching and robot learning in human-robot collaborative assembly
- Mastering the working sequence in human-robot collaborative assembly based on reinforcement learning
- A digital twin based industrial automation and control system security architecture
- Digital twin for the oil and gas industry: Overview, research trends, opportunities, and challenges
- Vibration signals analysis by explainable artificial intelligence (XAI) approach: Application on bearing faults diagnosis
- Joint mind modeling for explanation generation in complex human-robot collaborative tasks
- Digital twins: Universal interoperability for the digital age
- 6G cellular networks and connected autonomous vehicles
- Deep learning based intelligent inter-vehicle distance control for 6G-enabled cooperative autonomous driving
- Reshaping autonomous driving for the 6G era
- Trusting autonomous vehicles: An interdisciplinary approach
- A machine learning approach to road surface anomaly assessment using smartphone sensors
- A decision-making strategy for vehicle autonomous braking in emergency via deep reinforcement learning
- Deep learning-based vehicle behavior prediction for autonomous driving applications: A review
- Deep reinforcement learning based resource allocation in cooperative UAV-assisted wireless networks
- Age of information aware trajectory planning of UAVs in intelligent transportation systems: A deep learning approach
- Optimal UAV caching and trajectory in aerial-assisted vehicular networks: A learning-based approach
- Explainable density-based approach for self-driving actions classification
- Evolving rule-based explainable artificial intelligence for unmanned aerial vehicles
- Autonomous navigation assurance with explainable AI and security monitoring
- A risk-aware architecture for autonomous vehicle operation under uncertainty
- Toward bulk power system resilience: Approaches for regional transmission operators
- 'Practical recognition' as a suitable pathway for researching just energy futures: Seeing like a 'modern' electricity user in Ghana
- Assessing concurrent effects of climate change on hydropower supply, electricity demand, and greenhouse gas emissions in the upper Yangtze River basin of China
- A survey on smart grid technologies and applications
- Fault-tolerant multisubset aggregation scheme for smart grid
- IoT-enabled smart energy grid: Applications and challenges
- Smarter grid in the 5G era: Integrating power internet of things with cyber physical system
- Future generation 5G wireless networks for smart grid: A comprehensive review
- 5G network-based internet of things for demand response in smart grid: A survey on application potential
- Vulnerability assessment of 6G-enabled smart grid cyber-physical systems
- Smart grid evolution and mobile communications - scenarios on the Finnish power grid
- Adapting big data standards, maturity models to smart grid distributed generation: Critical review
- A multidirectional LSTM model for predicting the stability of a smart grid
- Secure and resilient demand side management engine using machine learning for IoT-enabled smart grid
- Appliance identification based on smart meter data and event-driven processing in the 5G framework
- EV charging behavior analysis using hybrid intelligence for 5G smart grid
- Integrated human-machine intelligence for EV charging prediction in 5G smart grid
- Gaining insight into solar photovoltaic power generation forecasting utilizing explainable artificial intelligence tools
- Explainable AI in deep reinforcement learning models for power system emergency control
- Stability analysis of networked control in smart grids
- Real time voltage stability prediction of smart grid areas using smart meters data and improved Thevenin estimates
- Comparative analysis of machine learning algorithms for prediction of smart grid stability
- Detection of energy theft in smart grids using electricity consumption patterns
- Electricity theft detection based on extreme gradient boosting in AMI
- Deep learning detection of electricity theft cyber-attacks in renewable distributed generation
- Adaptive feature boosting of multi-sourced deep autoencoders for smart grid intrusion detection
- From BIM to extended reality in AEC industry
- Developing multisensory augmented reality as a medium for computational artists
- A survey of industrial augmented reality
- Augmented reality and virtual reality displays: Perspectives and challenges
- A review on mixed reality: Current trends, challenges and prospects
- Extended reality in IoT scenarios: Concepts, applications and future trends
- Future networks 2030: Architecture & requirements
- Adventures in hologram space: Exploring the design space of eye-to-eye volumetric telepresence
- Toward truly immersive holographic-type communication: Challenges and solutions
- 6G networks: Beyond Shannon towards semantic and goal-oriented communications
- 6G enabled smart infrastructure for sustainable society: Opportunities, challenges, and research roadmap
- Learning-driven wireless communications, towards 6G
- The Power of AI Combined with AR/VR Technology
- An automated training of deep learning networks by 3D virtual models for object recognition
- Snap2cad: 3D indoor environment reconstruction for AR/VR applications using a smartphone device
- Semantic segmentation on Swiss3DCities: A benchmark study on aerial photogrammetric 3D pointcloud dataset
- AR/VR-based live manual for user-centric smart factory services
- Exploiting sensing devices availability in AR/VR deployments to foster engagement
- Machine learning at Facebook: Understanding inference at the edge
- DXR: A toolkit for building immersive data visualizations
- The future of interpersonal skills development: Immersive virtual reality training with virtual humans
- Hey Alexa... examine the variables influencing the use of artificial intelligent in-home voice assistants
- Trustworthy human-centered automation through explainable AI and high-fidelity simulation
- Smart governance in the context of smart cities: A literature review
- Good governance and its relationship to democracy and economic development
- Democracy, visibility and public good provision
- Corruption - the challenge to good governance: A South African perspective
- Earthquake-induced building-damage mapping using explainable AI (XAI)
- Everything you wanted to know about smart cities: The internet of things is the backbone
- Proceedings of the Institution of Civil Engineers - Smart Infrastructure and Construction
- Railroad bridge monitoring using wireless smart sensors
- Explainable artificial intelligence for developing smart cities solutions
- Dam monitoring data analysis methods: A literature review
- Valve health identification using sensors and machine learning methods
- Wind turbine condition monitoring: Technical and commercial challenges
- Algorithmic transparency for the smart city
- Data science empowering the public: Data-driven dashboards for transparent and accountable decision-making in smart cities
- Twittercracy: Exploratory monitoring of Twitter streams for the 2016 US presidential election cycle
- Topy: Real-time story tracking via social tags
- Digital advertising: Present and future prospects
- Facebook and YouTube addiction: The usage pattern of Malaysian students
- Does Google favour its own platforms in search visibility? (FINIZ 2020 - People in the focus of process automation)
- Facebook's business model set to withstand exposé
- Towards gadget-free internet services: A roadmap of the naked world
- Contemporary housing discrimination: Facebook, targeted advertising, and the Fair Housing Act
- User data privacy: Facebook, Cambridge Analytica, and privacy protection
- US accuses Google of illegally protecting monopoly
- Can David really beat Goliath? A look into the anticompetitive restrictions of Apple Inc. and Google, LLC
- A study on subject data access in online advertising after the GDPR
- After GDPR, still tracking or not? Understanding opt-out states for online behavioral advertising
- The California Privacy Rights Act of 2020: A broad and complex data processing regulation that applies to businesses worldwide
- Understanding the new Colorado Privacy Act
- Maximizing explainability with SF-LASSO and selective inference for video and picture ads
- Attentive capsule network for click-through rate and conversion rate prediction in online advertising
- Understanding current and future issues in collaborative consumption: A four-stage Delphi study
- Considering virtual reality in children's lives
- C/AISC - Artificial Intelligence Standards Committee
- Standard for XAI - eXplainable AI Working Group
- Four Principles of Explainable Artificial Intelligence (Draft)
- General Data Protection Regulation (GDPR) Compliance
- Summary of the HIPAA Security Rule
- Gramm-Leach-Bliley Act
- 2521 - Federal Information Security Modernization Act of 2014
- Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations
- China Introduces First Comprehensive Legislation on Personal Information Protection
- What Changes with the LGPD
- Notifiable data breaches
- Personal Information Protection Commission
- Horizon
- Explainable Manufacturing Artificial Intelligence (XMANAI)
- Explainable AI Pipelines for Big Copernicus Data (DEEPCUBE)
- Security and Privacy Accountable Technology Innovations, Algorithms, and machine Learning (SPATIAL)
- Safe and Trusted Human Centric Artificial Intelligence in Future Manufacturing Lines (STAR)
- Copernicus: Europe's eyes on Earth
- Marie Skłodowska-Curie actions
- Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI)
- Building Greener and more sustainable societies by filling the Knowledge gap in social science and engineering to enable responsible artificial intelligence co-creation (GECKO)
- Explainable Artificial Intelligence (XAI)
- Explainable Artificial Intelligence (XAI) Center
- Show us the data: Privacy, explainability, and why the law can't have both
- Towards a rigorous science of interpretable machine learning
- Metrics for explainable AI: Challenges and prospects
- Towards quantification of explainability in explainable artificial intelligence methods
- Measuring the quality of explanations: The system causability scale (SCS)
- Explainability metrics of deep convolutional networks for photoplethysmography quality assessment
- Trustworthy explainability acceptance: A new metric to measure the trustworthiness of interpretable AI medical diagnostic systems
- Better metrics for evaluating explainable artificial intelligence
- Interpretability of linguistic fuzzy rule-based systems: An overview of interpretability measures
- XAI - Explainable artificial intelligence
- Explainable AI: From black box to glass box
- DeepSpeed: System optimizations enable training deep learning models with over 100 billion parameters
- IoT security, privacy, safety and ethics
- A Berkeley view of systems challenges for AI
- The AI techno-economic complex system: Worldwide landscape, thematic subdomains and technological collaborations
- The AI techno-economic segment analysis
- Analysis of dimensionality reduction techniques on big data
- Federated learning for big data: A survey on opportunities, applications, and future directions
- Privacy-preserving traffic flow prediction: A federated learning approach
- Simulating physics with computers
- Quantum shuttle: Traffic navigation with quantum computing
- Blockchain 3.0 applications survey

Luis Miralles-Pechuán is currently an Assistant Lecturer at TU Dublin. He also worked as a full-time researcher and lecturer at Universidad Panamericana in Mexico for more than three years. He started a PhD in 2012 on creating new approaches within the online advertising world. During his PhD, he became familiar with ML and published a good number of papers on applying ML to online advertising. After finishing his PhD, he worked at postdoctoral levels I and II in CeADAR, UCD, where he published at the Digital Forensics conference and won the prize for the best student paper. Currently, his favourite topic is applying reinforcement learning to fight the COVID-19 pandemic and to plan containment levels considering both public health and the economy. Lastly, he has expertise in human activity recognition, in generalised zero-shot learning (GZSL), and in applying machine learning to improve the accessibility of websites.

Wang is a member of the IEEE and a reviewer of its major conferences and journals in intelligent transportation systems. His research interests include trajectory data mining and processing.

His research interests include natural language processing, machine learning, disinformation space, and explainable artificial intelligence.