key: cord-0505487-bis91rlc authors: Senevirathna, Thulitha; Salazar, Zujany; La, Vinh Hoa; Marchal, Samuel; Siniarski, Bartlomiej; Liyanage, Madhusanka; Wang, Shen title: A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions date: 2022-04-27 journal: nan DOI: nan sha: cd65bd489d32cc64b97072c73014e39a3aa954f8 doc_id: 505487 cord_uid: bis91rlc With the advent of 5G commercialization, the need for more reliable, faster, and intelligent telecommunication systems is envisaged for the next generation of beyond-5G (B5G) radio access technologies. Artificial Intelligence (AI) and Machine Learning (ML) are not just immensely popular in service-layer applications but have also been proposed as essential enablers in many aspects of B5G networks, from IoT devices and edge computing to cloud-based infrastructures. However, most of the existing surveys in B5G security focus on the performance and accuracy of AI/ML models while overlooking the accountability and trustworthiness of the models' decisions. Explainable AI (XAI) methods are promising techniques that allow system developers to identify the internal workings of AI/ML black-box models. The goal of using XAI in the security domain of B5G is to make the decision-making processes of security systems transparent and comprehensible to stakeholders, thereby making the systems accountable for automated actions. This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including B5G technologies such as the RAN, zero-touch network management, and E2E slicing, as well as the use cases that general users will ultimately enjoy. Furthermore, we present the lessons learned from recent efforts and future research directions on top of currently conducted projects involving XAI. The wireless communication industry can be considered one of the most rapidly developing sectors in technology. The innovations thriving in the telecommunication sector have laid the infrastructure for sustained development and an exponential growth in living standards. The first generation of cellular networks started the evolution of wireless communication technology in the 1980s. 5G wireless technology, primarily based on softwarization, is expected to complete the transition with significant coverage by 2025. The most noticeable feature of 5G is the cloudification of networks via a microservices-based architecture. With the start of commercial 5G deployment, experts predict that 6G mobile communication will become widely available in the following years [1]. Meanwhile, the academic community is focusing on new lines of study in advance of beyond-5G or 6G standardization. Edge Intelligence (EI), beyond-6 GHz to THz communication, Non-Orthogonal Multiple Access (NOMA), Large Intelligent Surfaces (LIS), and Zero-touch Networks have risen to prominence in recent years [2]-[4]. These concepts are being developed into the technology that will power the next generation of communication networks. There is still a long way to go in terms of 5G network capabilities to meet the needs of these applications, which require high-speed data transfer rates and real-time access to vital computing resources. The Internet of Everything (IoE), enabled by 5G, seeks to connect vast numbers of devices and Cyber-Physical Systems (CPS), surpassing 5G's capabilities into the B5G era.
For example, 6G is expected to connect millions of devices and provide instant access to massive amounts of compute and storage power. For B5G wireless networks, the scientific community expects entirely intelligent network orchestration and management [2], [5]. It will be distinct from previous generations in various aspects, including network infrastructures, radio access methods, processing and storage capacities, and application types. New applications will need to intelligently use communication, compute, control, and storage resources. Moreover, wireless networks are producing large amounts of data, and this paradigm shift enables data-driven, real-time network design and operation in B5G. Physical attacks, eavesdropping, and authentication and authorization issues plagued wireless communication technologies from 1G to 3G. The threat landscape now includes more sophisticated attacks and more capable adversaries. 4G networks' most prominent security and privacy threats come from malware programs and common MAC layer security vulnerabilities, such as viruses, tampering, Denial of Service (DoS), replay attacks, and eavesdropping. In 5G, these attacks have migrated to Software Defined Networks (SDN), Network Function Virtualization (NFV), and cloud computing. Insecure SDN features include OpenFlow, centralized network administration (prone to DoS attacks), core and backhaul vulnerabilities, edge device vulnerabilities, and open APIs [6], [7]. Research communities are starting to focus on security vulnerabilities in B5G communication arising from the advanced networking, AI/ML, and connected-intelligence technologies that power the B5G vision. On top of the unsolved security issues carried over from previous generations, these new technologies expose B5G networks to an unprecedented threat surface. Nevertheless, the overall success of B5G ultimately depends on how well AI and 6G cooperate in the future [8]. The malicious use of AI is changing the threat landscape and preventing many potential applications from seeing the light of day. With the advent of 6G technologies, misuse of AI might endanger increasingly complex systems, such as smart CPSs (SCPSs). SCPSs are advanced CPS systems that are increasingly linked through technologies like the Internet of Things (IoT), Artificial Intelligence (AI), wireless sensor networks (WSNs), and cloud computing to enable a variety of unique services and applications [9]. Since SCPSs are interwoven with various domains, a single weakness can cause catastrophic failures (the butterfly effect). Aside from its applications in services, AI can also be used with malicious intent, enabling larger-scale attacks unlike anything seen before. As a consequence, all interconnected devices and users stand at risk. Even though research on AI to protect against cyber threats has been ongoing for many years [10], [11], it is still unclear how to ensure the security of networks with AI integrated into their core operations. A significant drawback in AI security derives from the black-box nature of these systems. Therefore, maintaining accountable and trustworthy AI in this regard is highly important. The Defense Advanced Research Projects Agency (DARPA) started the Explainable Artificial Intelligence (XAI) initiative in May 2017 to develop a set of new AI methodologies that would allow end-users to comprehend, adequately trust, and successfully manage the next generation of AI systems [12].
To further elaborate, XAI can be considered a joint initiative of computer science and the social sciences, including the human psychology of explanations. The overall success of B5G will ultimately depend on how resilient and trustworthy the AI used in its implementation proves to be for the general public [8]. Extending research on techniques such as XAI in this regard is a crucial step that needs to be taken promptly. At the time of writing, 5G is being commercially rolled out, with many researchers focusing on B5G. Its applications, architecture, and enabling technologies are the subject of a large number of recently published studies, as shown in Table II. In addition, studies such as [2], [4], [13]-[20] have mainly focused on the vision, potential applications, and requirements of B5G wireless communication technologies, such as terabit-per-second FeMBB speeds, connected intelligence, and EDuRLLC, among others that would facilitate upcoming applications such as autonomous vehicles, telemedicine, and extended reality. Among the key enablers of B5G/6G mobile communication, such as THz communication, edge computing, swarm networks, full automation, and blockchain, AI takes a prominent place. AI techniques are well suited to solving complex problems due to their generalization capabilities and thus fit many novel B5G-era applications. Studies including [13], [21]-[26] elaborate on the importance of AI and its trends in B5G, and the challenges it brings to future communication technologies. Previous surveys such as [6], [27]-[31] highlight the dynamics of security aspects in a range of B5G enabling technologies such as IoT, RAN, and edge computing, while [8], [29], [32], [33] focus entirely on the security threats and potential defenses that would improve trust in the AI/ML methods used in B5G. Although XAI shows promising results, only a few publications ([34]-[36]) have covered XAI applications in the context of security or XAI research projects and standardization methods. Opportunities, challenges, and standardization in XAI are still in their infancy and need more collaborative work with experts from fields such as human psychology and sociology to move toward more concrete real-world applications. Table II summarizes contemporary research and surveys conducted on the advancements of B5G, AI, and XAI. We found that each paper presents applications in disconnected contexts. On the contrary, implementing B5G technologies calls for a holistic review of AI and XAI in security, given that accountability and resilience are core and essential characteristics of any mobile network generation. Many researchers focus on B5G, XAI, and AI techniques separately, but there has not yet been a unified approach in which the viability of XAI techniques is reviewed in the context of B5G use cases. As a response, this survey reports a comprehensive overview of XAI and security technical aspects, applications, requirements, limitations, challenges/issues, current projects, standardization initiatives, and lessons learned for beyond-5G applications. To the best of the authors' knowledge, this paper is the first of its kind that attempts to explore the capacity of XAI in a wide range of B5G security aspects. Table II depicts some of the relevant but dissociated studies carried out in this regard.
However, none of them has been able to convey a holistic image of the role of XAI in B5G security. Therefore, our main contributions from this survey are listed below:
Fig. 1. The overview of using XAI to improve the security of the B5G technologies and use cases. The left part of this figure shows how virtualization leads to the 5G enabling technologies compared with the traditional network layered stack and how AI evolves from 5G to B5G. The right part of this figure shows that XAI can improve the AI-based security solution for system stakeholders.
• Highlight the importance of XAI to the security of B5G: This paper elaborates on the potential of XAI in the path to realizing accountability for AI/ML models used in network security and improving the resilience of B5G telecommunications. Although many studies of B5G XAI security involve data-driven ML solutions, little focus is given to interpreting their decisions. Serious doubts and questions regarding accountability can arise among stakeholders when black-box AI is used to secure critical applications. The ability of XAI methods to interpret the black-box nature of AI/ML-based security systems is required to fill this research gap. • Comprehensively analyze XAI for commonly discussed B5G technical aspects and use cases: Here, we explore the role of XAI in a range of B5G enabling technologies such as IoT/devices, the Radio Access Network (RAN), the edge network, the core and backhaul network, E2E slicing, and network automation. This list of enablers is carefully selected to cover most of the ground in the B5G telecommunication architecture and provide a holistic view of the impact of XAI on B5G security. The study extends to discussing possible security issues and the impact of XAI on a popular set of use cases, including smart cities, smart healthcare, Industry 4.0/5.0, Smart Grid 2.0, and Extended Reality (XR). • Survey of important, relevant research projects and standardizations: Unlike many other survey papers, here we explore the research projects that are underway to realize B5G implementations and standardizations incorporating AI/ML/XAI. A detailed discussion of current projects and initiatives involving academic and industry partners provides clarity on ongoing work and the research gaps currently being explored. AI security standardizations in B5G are discussed here to determine the requirements for future B5G networks and their respective technologies. • Provide promising research directions as guidance: Existing limitations and challenges with current XAI methods in security are exhaustively discussed, along with possible research directions. Proposed research directions include security and isolation between network slices, computationally efficient explainable Edge-AI, and understanding the level of vulnerability of ML models to adversarial attacks in white-box and black-box contexts. This section introduces the motivation and contributions of this survey paper. The second section gives the background of the technical aspects in this paper, namely B5G, XAI, and XAI's potential for improving B5G security. Then, these technical aspects are discussed in detail in Section III, Section IV, and Section VI. Section III elaborates on the taxonomy, threat modeling, and landscape of security aspects for developing B5G networks. Section IV analyses the impact of introducing XAI on the existing AI-powered B5G security solutions.
Section VI highlights potential new security issues introduced by XAI. Moreover, for the use cases enabled by B5G, Section V analyses the impact of XAI on the security aspects of these B5G use cases. Section VII strengthens the importance of this survey paper by listing the ongoing research projects and standardizations related to B5G security and XAI. Section VIII summarises Sections III, IV, V, VI, and VII with the lessons learned and future research directions. Finally, Section IX concludes the whole paper. This section briefly introduces the background of the related technologies discussed in this paper. In particular, the B5G technologies and XAI concepts are discussed, followed by the growing need for XAI for B5G security. The rapid growth of the communication industry in the last decade has enabled 5G technologies to be widely commercialized recently. Following the success of 5G, 6G/B5G is becoming the focal point of academic and industrial research and implementation. 5G has addressed many of the prevalent problems [39] with high-data-rate enhanced mobile broadband (eMBB) and introduced new functionalities, such as laying the foundation for the Internet of Things (IoT). Meanwhile, new IoT services are being developed rapidly in applications such as virtual, augmented, and mixed reality services (which fall under XR services), autonomous vehicle systems, brain-computer interfaces (BCI), telemedicine, haptic systems, and blockchain-based systems [4]. To implement these services, ultra-reliable low-latency communications (URLLC) with short-packet support and high data rates in both uplink and downlink need to be maintained in a secure and privacy-protected wireless system [39]. A key factor affecting the dynamics of this revolution will be the profound number of human-type and machine-type devices connecting to the network. Massive Machine-Type Communication (mMTC) is expected to be fully deployed alongside URLLC to address those devices' requirements and achieve end-to-end latency reduction once 5G is fully deployed. Catering to those heterogeneous devices will take 6G data rates into the terabit realm (up to 1 Terabit/second) in order to perform effectively [40]. In other words, this is nearly a 1000x increase over the last generation of wireless technologies [15], bringing in massive amounts of data each day. A cohort of technologies like AI, Symbiotic Radio (SR), cell-free massive MIMO (CFmMM), intelligent communication surfaces, index modulation (IM), simultaneous wireless information and power transfer (SWIPT), and network-in-a-box [2], [5], [13], [16], [18], [22], [28], [41] will be used in handling the services mentioned above. AI takes a prominent place among them due to its proven capabilities and ubiquitous applications. Following the massive success of AI in computer vision, natural language processing, speech recognition, bioinformatics, social intelligence, and numerous other fields, the technology has proved to be ubiquitous [42]. Naturally, a profound amount of data is expected to be generated at high rates by the vast and varied set of applications connecting billions of devices in the B5G ecosystem, making it the perfect ground for AI to exploit its capability of efficiently solving problems involving large amounts of unstructured data.
Thus, there is potential to apply AI in numerous aspects of B5G, ranging from network architectures to security, privacy, signal processing solutions, and system-level optimizations. B. Explainable AI 1) Motivations of XAI: While early AI systems were simple to understand, opaque decision methods such as Deep Neural Networks (DNNs) have gained popularity in recent years. Deep Learning (DL) models are experimentally successful due to a combination of efficient learning algorithms and their large parametric space. DNNs are considered sophisticated black-box models since they have hundreds of layers and millions of parameters [43]. Transparency, the polar opposite of black-box-ness, is the pursuit of an understanding of how a model functions. The need for explainability among AI stakeholders is growing as black-box Machine Learning (ML) algorithms are increasingly used to make significant predictions in critical settings [44]. The risk lies in making and implementing decisions that are not reasonable or lawful, or that do not allow for comprehensive explanations of their actions [12]. Explanations that back up a model's output are critical. For example, in medical applications, specialists need to uncover what causes the model identified to arrive at its forecast, which would reinforce their confidence in the diagnosis [45]. Telecommunication systems, B5G-backed autonomous cars, security, and finance are just a few other examples. Moreover, a better knowledge of a system may lead to its shortcomings being corrected. According to [35], interpretability as an extra design driver may enhance the implementability of a machine learning model for three main reasons. First, it aids in guaranteeing objectivity in decision-making by rectifying bias in the training datasets. Second, it improves resilience by identifying possible adversarial events that may cause the forecast to alter. Finally, it guarantees that only relevant variables are used to predict the outcome; in other words, that the model's reasoning is based on actual causation. The literature clearly distinguishes between models that are interpretable by design and those that can be explained using external XAI techniques. XAI creates a suite of machine learning techniques that enables human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners [35]. This dichotomy may be thought of as the distinction between interpretable models and model interpretability methods; a more generally recognized classification is transparent models versus post-hoc explainability [46]. The following section explains these categories in more detail. 2) Terminologies of XAI: Transparency: A model is deemed transparent if it is understandable on its own. Transparent models, by themselves, provide some degree of interpretability. Models in this domain may also be categorized according to the context in which they are interpretable, notably algorithmic transparency, decomposability, and simulatability. Decomposability refers to the capacity of a model to be explained in terms of its constituent components. Simulatability refers to a model's ability to be simulated or thought about rigorously by a person. When the model is sufficiently self-contained for a person to think and reason about it in its entirety, it can be referred to as a decomposable model with simulatability. Algorithmic transparency may be interpreted in a variety of ways.
Prominently, it refers to the user's capacity to comprehend how the model generates any given result from its input data. The primary restriction on algorithmically transparent models is that they must be completely explorable using mathematical techniques and analysis [47]. Each of these classes includes its antecedents; for example, a simulatable model is both decomposable and algorithmically transparent. Some popular models that fall under transparent models are linear/logistic regression, decision trees, k-nearest neighbors, rule-based models, Generalized Additive Models (GAMs), and Bayesian models. These models are deemed expressive enough to be human-understandable [35].
Fig. 3. XAI Taxonomy. Pre-model XAI explains the training data used for building the AI model (e.g., Principal Component Analysis (PCA), t-Distributed Stochastic Neighbor Embedding (t-SNE)). In-model XAI refers to transparent AI models that are self-explanatory (e.g., decision trees, random forests). Post-hoc XAI models explain the results given by trained AI models (e.g., LIME, SHAP).
3) Taxonomy of XAI: The XAI methods can be divided into multiple categories based on various criteria [35], [48]. The most common XAI taxonomies are discussed below. XAI methods that fall into those categories are not necessarily exclusive to each group; according to the taxonomy, a method can belong to two or more categories. (a) Model-agnostic vs model-specific: Model-agnostic XAI methods are those that are not constrained by the core parts of an AI algorithm when making a prediction. They are helpful in decoding black-box models' decision processes and provide good flexibility for developers to apply them to a wide variety of ML models. On the other hand, model-specific methods are bespoke to specific models and make use of the core components of an ML model to interpret the outcomes. This characteristic makes model-specific methods more suitable for identifying granular aspects of ML models, but they lack flexibility. (b) Surrogate vs visualization: Surrogate methods build a simpler, interpretable representation that approximates the black-box model, whereas other methods apply visual techniques (heatmaps, graphs, etc.) on the original black-box model to explore its internal workings without using a representation. These latter XAI methods fall under the visualization category. 4) XAI Methods: There are numerous methods studied in the literature to explain black-box AI/ML models. Here we discuss a selected set of popular XAI methods that are well established in the academic and industrial communities, as shown in Fig. 4. Local Interpretable Model-Agnostic Explanations (LIME) [49] can be considered one of the most popular model-agnostic XAI methods used to interpret model outputs in various applications. LIME provides a locally faithful explanation based on the feature importances that contributed to the output. This is achieved through a surrogate dataset obtained by sampling perturbations in the proximity of the original inputs. LIME then creates a simpler interpretable model that can be used to identify the important features behind a given output. Because of this local fidelity, LIME is fast and can be applied to all types of black-box models. Stemming from its popularity in the research community, application-specific variations of LIME such as OptiLIME [50] and CBR-LIME [51] have been proposed in several studies.
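To make this concrete, the following minimal sketch shows how LIME might be applied to a generic black-box classifier; the dataset, feature names, and class labels are synthetic placeholders assumed purely for illustration (the open-source lime and scikit-learn packages are presumed installed), not an implementation of any surveyed system.

```python
# Hypothetical sketch: explaining one prediction of a black-box
# classifier with LIME (assumes `pip install lime scikit-learn`).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for network-security features (e.g., flow statistics).
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# The "black-box" model whose individual decisions we want to interpret.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME perturbs the instance locally and fits a simple surrogate model.
explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["benign", "malicious"], mode="classification")
explanation = explainer.explain_instance(
    X[0], model.predict_proba, num_features=4)

# Feature contributions for this single (local) prediction.
for feature_rule, weight in explanation.as_list():
    print(f"{feature_rule}: {weight:+.3f}")
```

Because `explain_instance` only needs a prediction function, the same call would work unchanged for any classifier exposing probability outputs, which is precisely the model-agnostic property discussed above.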
Shapley Additive Explanations (SHAP) [52] is a model-agnostic interpreter that can be used with black-box models to interpret their outcomes. SHAP uses the concept of Shapley values derived from cooperative game theory. Shapley values represent the average marginal contribution of a feature, calculated over all subsets of features with and without the said feature. The feature importance values generated here can be coalesced to obtain a global (unlike LIME) explanation over the model's outcome space, which paves the way for better explanations. TreeSHAP [53] and DeepSHAP (DeepLift [54] + Shapley values) [52] are a few derivatives of SHAP developed to fit specific models and improve computational efficiency.
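As a hedged illustration of this local-plus-global property, the sketch below estimates Shapley values for the same kind of synthetic black-box classifier using the open-source shap package; the model, data, and background set are assumptions made for the example only.

```python
# Hypothetical sketch: local and global feature attributions with SHAP
# (assumes `pip install shap scikit-learn`).
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Model-agnostic explainer: Shapley values are estimated by masking
# features with samples drawn from a background dataset.
background = X[:100]
predict_malicious = lambda X_: model.predict_proba(X_)[:, 1]
explainer = shap.Explainer(predict_malicious, background)
shap_values = explainer(X[:20])

# Local view: contribution of each feature to one prediction.
print("Instance 0 attributions:", np.round(shap_values.values[0], 3))

# Global view: mean |SHAP value| per feature across many instances,
# which is what gives SHAP its global (unlike LIME) perspective.
global_importance = np.abs(shap_values.values).mean(axis=0)
print("Global importance ranking:", np.argsort(global_importance)[::-1])
```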
Layer-wise Relevance Propagation (LRP) [55] is popularly used to interpret models such as neural networks that are structured in a layered manner, making it a model-specific XAI method. LRP operates by propagating the prediction backward through the layers until it reaches the inputs, assigning relevance scores to each functional unit in the model. The contribution of one node to making the consequent node relevant to the output is quantified and aggregated to obtain the relevance of each layer. It is applied in a wide range of applications such as identifying model biases [56], extracting points of interest in the prevention of side-channel attacks [57], audio source localization [58], and EEG pattern recognition in brain-computer interfaces [59]. Counterfactual Explanations (CFE) belong to the subcategory of local explainers known as example-based explainers. The intuition here is to understand what would happen if a slightly different data point were given to the model in place of the original data point and how this would affect the prediction. For example, if a model classifies a person as not suitable to receive a loan, a CFE would state that he/she needs savings of 10,000 euros for the model to classify him/her as suitable to receive the loan. Unlike many other interpreters, these explanations are closer to human nature and provide actionable and precise recommendations. The basic idea of CFEs is model-agnostic, but several variations have been developed for model-specific applications [60]. XAI for RL has been explored for agent-based Reinforcement Learning (RL) AI systems from as early as 1994 [61]. It is not easy to deliver XAI for RL because it generally involves several judgments made over time, often seeking to offer the next action in real time. Unlike conventional ML, RL explanations must cover a collection of actions spread across a plethora of different states that are connected in some manner. Also, the absence of an explicit training dataset can contribute to the difficulty of applying XAI techniques [62]. Some of the widely known XAI techniques for RL are discussed below [63]. (a) The Programmatically Interpretable Reinforcement Learning (PIRL) framework is a global, in-model method used in place of Deep Reinforcement Learning (DRL) [64]. In DRL, neural networks represent the policies and are difficult to comprehend. PIRL policies, on the other hand, are expressed in a high-level, human-readable programming language. However, unlike standard RL, PIRL limits the space of target policies by using a (policy) sketch and employs a framework based on imitation learning, called Neurally Directed Program Search (NDPS), to uncover these policies. (b) The hierarchical policies technique [65] is another in-model XAI technique used to interpret, locally, the decision process of complex multi-task RL systems. The core tenet of this approach is to decompose a complicated task into smaller subtasks. These smaller tasks are accomplished with already-learned policies, or by learning a new skill. This model also takes temporal connections and task priorities into account to improve efficiency and accuracy. The technique builds on multi-task RL with a modular policy design and a two-layer hierarchical policy based on minimal assumptions and limits. In [65], this technique is assessed on object manipulation tasks in the Minecraft game. (c) Linear Model U-Trees (LMUT) [66] is a post-hoc explanation method, unlike the methods mentioned above. LMUTs are flexible enough to generate both local and global approximations of an RL model's Q-predictions [63]. They are an extension of Continuous U-Trees, with the contrast of using linear models at each leaf node instead of constants, making them more interpretable and comprehensible. Because of the inherently interpretable nature of trees, it becomes easy to generate explanations from LMUTs as they mimic the original Q-function. Today, there is a substantial community working on the issue of explainable AI, with some attempting to enhance the latest technology, others attempting to evaluate, criticize, or regulate the technology (policymakers), and the rest seeking to manipulate or utilize AI in a broad range of applications (business partners). Previous studies have considered the level of explainability and interpretability on the grounds of various stakeholder groups. These categories can include system creators, system operators, executors making decisions based on system outputs, decision subjects affected by an executor's decision, data subjects whose personal data is used to train a system, and system examiners, such as auditors or testers [67]. In the envisioned B5G application security, it is important to identify the parties involved in the full lifecycle of an intelligent system to improve accountability. This paper defines five main stakeholder communities: system creators, system operators, theorists, ethicists, and end-users [44], [67]. • Creators: We identify creators as those involved in creating secure, high-fidelity AI-based applications for the B5G era. This group is a superset of implementers (developers, testers, security experts, data scientists, etc.) and owners (agents, business owners, etc.) contributing to bringing AI/ML applications to reality. Many members of this community work in industry (multinational companies and local firms) or in the public sector, though some are academics or scientists who develop systems for various reasons, including assisting them with their work. The highest level of explainability is required to make the systems unbiased and resilient, and their influence on XAI aspects is strong as well. • System Operators: System operators maintain the systems and ensure smooth operation after an AI/ML-based system is deployed. Although they might not require explanations as granular as developers do, they still require high enough explainability to detect and verify anomalies in the system and provide runtime solutions. Similarly, their influence on the system and data can be considered moderately high. • Theorists: Theorists are those interested in comprehending and expanding AI theory, especially as it relates to DNNs. Members of this group are often associated with university or industry research institutions. This community requires a high level of explainability, and their influence on XAI, in general, can be considered high.
• Ethicists: Ethicists can be policymakers, commentators, and critics concerned with the fairness, accountability, and transparency of AI systems. While many computer engineers and scientists are part of this group, it can be considered interdisciplinary, including social scientists, attorneys, journalists, economists, and politicians. For the ethicist community, explanations must go beyond technical software quality to ensure fairness, unbiased conduct, and comprehensible disclosure for accountability and auditability. Furthermore, legal compliance with frameworks such as the European Union's GDPR [68] or United States DARPA regulations [12] falls under this category of stakeholders. Overall, their influence on AI systems can be considered very high. • End Users: Finally, users need explanations to assist them in deciding whether or how to act in response to the systems' outputs and in justifying their actions. In addition to 'hands-on' end users, this community comprises everyone engaged in processes affected by an artificial intelligence system. The explainability requirement for end-users is similar to that of ethicists; however, their influence on the system is only strong under particular circumstances (e.g., a community/group approach). In light of the preceding discussion, the most logical approach may be to provide different explanations tailored to the various stakeholders. Nevertheless, it is also possible to envision a composite explanation object containing all of the required information to satisfy multiple stakeholders at once. Human-centric, AI-powered telecommunication in the B5G era will attract the attention of various contingent parties that need assurance before trusting these systems. Giving convincing evidence of the decision-making process inside an AI model will be challenging, owing to the technical knowledge gap and the obfuscated nature of the internals of widely used AI/ML models. In particular, the security of this new technology in telecommunication has gained much attention from both malevolent and benevolent agents across all layers of the network. Network softwarization (NS) and network function virtualization (NFV), introduced with 5G, are expected to be significantly enhanced as we enter the B5G era. Figure 1 shows that, going further, the conventional networking design will evolve into a new layered architecture built on complex black-box AI/ML models in all facets of communication, including security. In the first layer of the B5G architecture, data is gathered through IoT devices such as smartwatches, phones, drones, etc., to enable real-time services in higher layers. Although data collection is one of the essential operations, the convictions behind it and the associated security/privacy issues are highly influenced by demography and the underlying regulations. Using XAI, such differences can be addressed evenly by giving more details about how collected data is used inside the AI models in the rest of the pipeline. Further, it enables system operators to identify the performance of each device more closely with respect to the overall AI system. The RAN, edge, core, and backhaul layers provide the infrastructure to reach higher speeds and quality of service based on enhanced virtualization techniques. The security of these layers is envisioned to be addressed through AI/ML-based methods to accommodate the massive volumes of data.
Getting automated feedback on the performance of those AI/ML systems is paramount to ensuring maximum resilience by identifying false predictions and diagnosing any system issues. This will benefit system operators and stakeholders who are not technically versed in AI/ML. In E2E slicing and ZSM [3], [69], AI/ML components are used in integral parts of the system architecture, making their security critical. For example, the ZSM's E2E service intelligence enables decision-making based on data collected in the domain and standard data services. An attacker may craft inputs that cause the ML model to make incorrect judgments and threaten performance, financial standing, SLA fulfillment, and security assurances. XAI would be highly useful during the response process to estimate the overall effect and trace back to the most basic module responsible for the anomaly. Finally, the application layer would require the most high-level explanations, relevant to the end-users in B5G. Techniques like counterfactual explanations are ideal for instilling trust and confidence in the users at the application layer. When designing a system with explainable security, one must evaluate the 6W questions (Why, Who, What, Where, When, and How) to generate security explanations. Inspired by Vigano et al. [70], Figure 6 depicts the flow of identifying the basic building blocks for designing an explainable security system. First, the reason why the system needs XAI must be identified. Then, it must be decided for whom and by whom the explanations are created, along with the level of granularity of the content broadcast to each group of actors. Identifying the needs of each actor early on helps decide what aspects of the system need to be explained. Here, the system designers must consider the layer of the B5G architecture and fit the explanation to meet its requirements. Although an explanation is generated in one layer, where it is made accessible may differ; whether it is a separate service or embedded in the system/output must be decided. It is also essential to decide when the explanations are needed during the process, i.e., during design, installation or maintenance, defense, etc. Finally, the nature of the explanation is decided by answering the question of how to interpret the AI/ML model. This lays the groundwork for choosing the correct XAI methods for high-quality explanations.
Fig. 6. 6W analysis for explainable security in B5G. The procedure shown can be used as a framework to initiate laying the groundwork when designing the security aspects of explainable intelligent systems built in/on the B5G network.
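Purely as an illustration of how a design team might record the outcome of this 6W analysis, the sketch below encodes the answers as a lightweight specification object consumed before an XAI method is selected; all field names and example values are hypothetical and not drawn from any standard or from the surveyed works.

```python
# Hypothetical sketch: capturing the 6W explainable-security analysis as a
# design-time specification. Field names and values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ExplanationSpec:
    why: str                 # purpose of the explanation (e.g., audit, trust)
    who: list[str]           # producers and consumers of the explanation
    what: str                # system aspect that must be explained
    where: str               # delivery point: separate service or embedded
    when: str                # lifecycle phase: design, maintenance, defense...
    how: list[str] = field(default_factory=list)  # candidate XAI methods

# Example: explaining an AI-based intrusion detector in the RAN layer.
ran_ids_spec = ExplanationSpec(
    why="justify blocked traffic and support incident response",
    who=["system operators", "security auditors"],
    what="per-alert feature attributions of the RAN intrusion detector",
    where="embedded in the alert payload",
    when="defense (runtime)",
    how=["SHAP", "counterfactual explanations"],
)
print(ran_ids_spec)
```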
III. B5G THREAT MODELLING AND TAXONOMY
After the stage of vision solicitation, 6G is currently in the early stage of determining the key performance indicators of the system. Nothing is fixed yet, but 6G will surely be based on the development and evolution of 5G. 6G will inherit the advantages provided by 5G, improve on the remaining shortcomings, and leverage new technologies and applications. In this section, we present the B5G threats, dividing them into three categories (Figure 7): B5G threats inherited from 5G networks, 6G technology threats, and 6G application threats. This section aims to synthesize the 5G security threats and cyber-attacks that attackers can reuse to target B5G networks. For the categorization, we focus the analysis on the technologies that are believed to play important roles in both existing 5G and future 6G networks: Software Defined Networks (SDN), Network Function Virtualization (NFV), Virtual Network Functions (VNF), and virtualization/containerization technologies. 1) SDN, NFV and VNF: SDN and NFV are the key 5G technologies facilitating the establishment of networks, including their deployment, management, and operation. They are expected to be among the key enabling technologies for 6G. However, in the absence of proper security mechanisms, for example the lack of TLS adoption, malicious actors could perform a man-in-the-middle attack and launch various other attacks by impersonating or gaining unauthorized access to the controller, or by modifying the channels [71]-[74]. Moreover, as the policy enforcement process is distributed across physical switches, security threats and new information disclosure risks are introduced. A malicious actor can identify the action applied to a packet type by performing packet-processing timing analysis [72], [73], [75]. Also, as the SDN controller manipulates flow rules in the data forwarding elements, it is vulnerable to DoS attacks [71]-[76]. Besides, SDN facilitates third-party applications but, on the other hand, brings their vulnerabilities into the system [71], [75], [76]. Simple faults in network applications might lead to the breakdown of the control plane and the failure of network functionality. Moreover, traffic hijacking and re-routing are possible: an illegitimate appropriation of routing group addresses by corrupting the routing tables. Finally, network security policies and protocols can have vulnerabilities that affect the layers and interfaces of the SDN framework [74]-[77]. Meanwhile, NFV enables placing various network functions in different network components based on their performance requirements and eliminates the necessity for function- or service-specific hardware. This technology is vulnerable to authentication threats due to the spoofing of information parameters in different VNFs, unauthorized use of VNF predefined accounts or attributes (e.g., guest, ctxsys), weaknesses in password policies, and traffic spikes. Furthermore, if authorizations for accounts and applications in the NFV are not reduced to the minimum required for the tasks they have to perform, elevation of privilege via incorrect verification of access tokens is possible [78]. In addition, NFVs are exposed to exploitation and abuse threats, such as the exploitation of third-party hosted network functions, a lawful interception function, a weakly designed or configured API with inaccurate access control rules, or poorly configured systems/networks, as well as unauthorized access to a function hosted outside the operator's network. Moreover, accessing the personal data stored in log files can lead to remote access exploitation and compromise system integrity [71]. Finally, NFVs can be attacked by exploiting vulnerabilities in their native protocols, e.g., GTP, Diameter, NGAP, NAS-5GS, and JSON [79]. A VNF is a network service that NFV allows to be placed, in a virtualized form, on dedicated hardware. Although VNFs are part of the NFV architecture, their threats result from the exploitation of code vulnerabilities, while NFV threats are generally related to weaknesses in network protocols. For example, improper input validation, buffer overflows and underflows during read or write operations, dynamic memory deallocation errors, poorly defined restriction of operations within the bounds of a memory buffer, integer overflows, path traversal, or vulnerable software components (i.e.,
libraries, frameworks, and other software modules) are software-related vulnerabilities that can lead to attacks against a VNF and compromise the rest of the 5G architecture [80]. Moreover, academic research [81], [82] and standardization groups [83] report authentication threats that may allow malicious actors to access data or perform unauthorized actions, and consequently cause a range of issues, including information exposure, DoS attacks, and arbitrary code execution. 2) Virtualization/Containerization platform: 5G and B5G networks are deeply based on virtualization technologies, allowing VNFs to run in virtual machines. Virtualization platforms face different threats, depending on the different virtualization approaches followed in the network. In this subsection, we focus mainly on server virtualization software security threats and container-based threats. Research divides server virtualization security threats into three main categories: hypervisor-based attacks, VM-based attacks, and VM image attacks [84]. A hypervisor-based attack is an exploit in which a malicious actor takes advantage of vulnerabilities in the program that allows multiple operating systems to share a single hardware processor. If attackers gain command of the hypervisor, all the VMs and the data they access fall under the attackers' full control. Furthermore, such an attack could compromise control of the underlying physical system and the hosted applications. Some well-known attacks (e.g., Bluepill, Hyperjacking) insert VM-based rootkits that can install a rogue hypervisor or modify the existing one to completely control the environment. Since the hypervisor runs underneath the host OS, it is difficult to detect these attacks using regular security measures. VM-based threats include VM escape, where malicious actors break the isolation boundaries of the VM and start communicating with the operating system directly, bypassing the virtual machine manager (VMM) layer; such an exploit opens the door for attackers to gain access to the host machine and launch further attacks. VM sprawl occurs when a large number of VMs exist in the environment without proper management or control; since they retain system resources (i.e., memory, disks, network channels, etc.) during this period, these resources cannot be assigned to other VMs. In cross-VM side-channel attacks, a malicious VM penetrates the isolation between VMs and then accesses shared hardware and cache locations to extract confidential information from the target VM. VM image threats comprise inside-VM attacks, where a VM image is infected with malware or OS rootkits at run-time, and outdated software packages in VMs, which can pose serious security threats in the virtualized environment; for example, a machine rollback operation may re-expose a software bug that has already been fixed [85]. Regarding container management security threats, the two major types of risks we examined are the compromise of an image or container, and the misuse of a container to attack other containers, the host OS, or other hosts. A container image that is missing critical security updates, has an improper configuration, or contains embedded malware or clear-text secrets can be the target of exploitation that compromises the security of the rest of the system. Likewise, images often contain sensitive components such as an organization's proprietary software and embedded secrets.
If connections to registries are performed over insecure channels, the contents of images are subject to the same confidentiality risks as any other data transmitted in the clear [86]. By default, in most container runtimes, individual containers can access each other and the host OS over the network. If a container is compromised and acting maliciously, allowing this network traffic may put other resources in the environment at risk. Moreover, a container running in privileged mode has access to all the devices on the host, essentially allowing it to act as part of the host OS and impact all other containers running on it. 6G networks are expected to provide ultra-high data rates, low latency, high reliability, and improved localization precision in three dimensions. This section discusses the key trending technologies allowing 6G to achieve those goals and their corresponding security threats. • Terahertz communications (THz): THz communication technology may support ultra-high data rates (100 Gbps or greater) with low power consumption while effectively restraining eavesdropping. However, to adopt such technology, 6G cells must change from "small" to "tiny", meaning that much more complicated hardware needs to be built. This requirement also brings new privacy and security concerns, especially regarding eavesdropping and authorization. Indeed, eavesdropping is believed to be difficult but still feasible. According to [87], eavesdroppers can capture THz signals using narrow beams and by intercepting signals in line-of-sight transmissions. A countermeasure against these narrow-beam attacks has been introduced and has been proved to detect some, although not all, eavesdroppers. Moreover, authorization is another security concern, investigated in [88]. The authors proved that an unauthorized entity could capture communications by strategically placing objects in the transmission path and scattering the radiation toward the user. In [89], the electromagnetic signature of THz frequencies was mentioned as a potentially useful authentication method. • Visible light communications (VLC): VLC is a promising technology that allows higher bandwidths and resistance to electromagnetic interference. It is believed to be a solution to the growing demand for wireless connectivity. However, due to the physical characteristics of the light communication medium, VLC is exposed to sniffing/eavesdropping, jamming, and data modification attacks [90]-[92]. To address these issues, the authors of [93] propose a protocol called SecVLC to protect the confidentiality and integrity of data transmitted over vehicular networks. Besides, the authors of [94] present a VLC precoding technique to secure the physical layer and improve the confidentiality of VLC links. • Molecular communication: The principal idea behind molecular communication technology is to transfer information using biological signals. It is a promising interdisciplinary technique that enables communication among moving nodes [95]. Nevertheless, it can be the target of various attacks [96], including transport layer attacks (e.g., desynchronization, unfairness), link layer attacks (e.g., flooding, packet storage exhaustion), network layer attacks (e.g., unfairness, collision), and physical layer attacks (e.g., jamming, tampering). 2) AI/ML based technologies: AI and ML have been considered essential components of 6G technologies, making their security solutions more accurate, autonomous, and predictive.
On the other hand, AI and ML bring additional associated security issues in multiple network layers, with an unknown number of potential vulnerabilities. Some of the most common attacks, illustrated in the sketch after this list, include: • Evasion attacks [97] bypass the learned model during the test stage by injecting falsified test data. • Model inversion attacks [97] attempt to recover the private dataset used to train a supervised learning model. • AI middleware layer attacks [98] aim at data tampering and malicious interruptions. • Adversarial attacks [98] fool or misguide the learning model by feeding it deceptive data to make the network system unstable, malfunctioning, or unavailable. • Poisoning attacks [99] pollute an ML model's training data.
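As a concrete illustration of an evasion attack, the minimal sketch below crafts an adversarial input against a simple differentiable classifier using the well-known Fast Gradient Sign Method (FGSM); the toy model, its weights, and the parameter values are synthetic placeholders assumed for illustration, not a 6G-specific implementation.

```python
# Hypothetical sketch of an evasion attack: the Fast Gradient Sign Method
# (FGSM) perturbs an input in the direction that increases the model's loss.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy differentiable "traffic classifier": logistic regression whose
# weights we pretend were learned on benign/malicious traffic features.
w = rng.normal(size=10)
b = 0.1

x = rng.normal(size=10)   # a legitimate input the attacker perturbs
y = 1.0                   # its true label ("malicious")

# Gradient of the cross-entropy loss w.r.t. the input x:
# for p = sigmoid(w.x + b), dL/dx = (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM step: a small, bounded perturbation along the sign of the gradient.
epsilon = 0.3
x_adv = x + epsilon * np.sign(grad_x)

print(f"score before attack: {sigmoid(w @ x + b):.3f}")     # likely flagged
print(f"score after attack:  {sigmoid(w @ x_adv + b):.3f}")  # pushed toward benign
```

The same gradient-sign principle scales to deep models, which is why evasion attacks are a first-order concern for the AI/ML-based 6G security solutions discussed here.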
3) Distributed Ledger technologies: Blockchain and smart contracts are expected to see explosive adoption in 6G networks. Nonetheless, several attacks may occur due to network connection security flaws, software development errors, and language restrictions [100], [101]. Firstly, blockchain-based systems must be capable of avoiding double-spending attacks, in which the same single digital token is duplicated or falsified and spent more than once [102]. Secondly, most majority-voting blockchain systems are vulnerable to 51% attacks [102]. This vulnerability occurs when a malicious user controls 51 percent or more of the blockchain nodes. Thirdly, an attacker or a group of attackers may create fake identities to capture peer-to-peer communications in the blockchain network. This is called a Sybil attack [102], and it can target blockchain systems that use automated member-addition methods. Fourthly, a re-entrancy attack may happen when two smart contracts repeatedly call each other while one of them has not yet updated its state, which may lead to unexpected behavior. Lastly, blockchains and smart contracts must avoid privacy issues like transaction data leakage, user privacy leakage, and smart contract logic leakage during execution. 4) Quantum computing: Quantum computing is believed to become available in the 6G era, rendering many crypto algorithms no longer secure due to its extremely rapid computational capacity. Quantum mechanical properties [103] like superposition and entanglement can rapidly solve heavy problems such as the prime factorization of very large numbers and the discrete logarithm problem. On the other hand, adversaries gain quantum abilities to perform quantum-based attacks [104]. It is challenging to integrate post-quantum cryptography (PQC) solutions into resource-constrained IoT devices to keep them resistant to quantum attacks. In addition, although perfect quantum cloning is forbidden by the laws of quantum mechanics, cloning methods of excellent precision may still create a near-identical copy of a random, unknown quantum state without changing the original state. This is called a quantum cloning attack [104]. PQC is thus an active, trending research topic, and an appropriate algorithm is expected to be selected between 2022 and 2024. Although 6G is still at the stage of vision solicitation and has not yet been developed, several emerging ideas are already considered key applications empowered by 6G. This section summarizes a list of the most discussed applications and the potential corresponding security threats. • Smart Cities: 5G now and 6G in the future promise to enhance the productivity and efficiency of Smart Cities. However, the connected "things" deployed in Smart Cities expose a wide range of security vulnerabilities [105] and serious risks that malicious actors can exploit. As mentioned in the previous sections, many authentication and encryption mechanisms used for these devices are no longer secure when 6G technologies empower the attacker. The attacker may hijack and take control of "weak" devices or target personal information for further fraudulent transactions and identity theft. Due to 6G capabilities, it is also easier for an attacker to perform DoS/DDoS attacks, e.g., flooding parking meters with superfluous requests to prevent legitimate ones from being served. • Healthcare: 6G will likely become the central communication platform for future digital healthcare services [106]. However, access control, device authentication, and secure communication for billions of tiny health devices will be a challenge to overcome. The confidentiality and ethical usage of patients' electronic records will also be critical concerns. • Industry 5.0: 6G is crucial for the deployment and operation of Industry 5.0, which is believed to be very scalable and highly automated. The main related security concerns include access control and authentication for restricting access to sensitive resources such as robots or intellectual property connected with Industry 5.0 [107]. In addition, security monitoring solutions must be prepared to deal with a huge volume of multi-dimensional captured monitoring data [108]. • Smart Grid 2.0: In comparison with Smart Grid 1.0, Smart Grid 2.0 provides self-healing and self-organizing capabilities. However, the most relevant security concerns consist of physical attacks, AI/ML-based attacks, software-related attacks, and threats against control components (e.g., SCADA) [108], [109]. Another critical issue is a trust management mechanism to control the peer-to-peer trading of energy, which is among the main characteristics of Smart Grid 2.0. • Extended Reality: 5G technologies have elevated the Virtual Reality (VR) experience thanks to increased bandwidth and lowered latency. 6G promises to bring VR to yet another level and make it available for various applications, including online education, virtual tourism, online gaming and entertainment, robot control, and healthcare. These applications will collect a remarkable volume of sensitive personal information and thus raise the need for solutions for gathering, storing, sharing, and protecting data. Traditional cryptography may help but brings a tradeoff between latency, reliability, and confidentiality. The characteristics of 6G promise potential physical-layer solutions that can solve these problems while incurring less latency [110]. This section discusses the key enablers and network domains envisaged in the B5G era, elaborating on how each technology demands XAI-based security to improve the accountability of its constituent intelligent systems. The added cost of introducing XAI is also analyzed for enabling technologies and network domains. AI/ML-based security applications under each subsection are summarized in Table III. A. Security of B5G Devices/IoT 1) Possible Security Threats, Challenges, Issues: The IoT represents a network of interconnected devices in nearly every environment, collecting and exchanging data over the Internet and enabling many services and applications that raise the standard of living. Although the Internet of Things has many advantages, it also poses many problems, particularly in security.
Taking care of these problems and guaranteeing the security of IoT devices and services must be a top priority. The ever-changing and heterogeneous nature of IoT systems can make this issue even more challenging [111]. As shown in Fig. 8, four main types of attacks (network, software, physical, and encryption attacks) can be anticipated in an IoT system. The Distributed Denial of Service (DDoS) attack is one of the common attacks seen in the connectivity or network layer, and it is especially severe in the IoT because of the magnitude of the damage it can inflict upon the whole network [112]. In addition, traffic analysis attacks, RFID spoofing/cloning/unauthorized access, MITM, sinkhole attacks, and routing information attacks fall under network attacks [113]-[116]. Viruses, worms, spyware, phishing attacks, malicious scripts, and DoS attacks can be considered the software attacks possible in the IoT [117], [118]. Under physical attacks, node tampering, malicious node/code injection, and sleep deprivation of sensors are some of the plausible attacks [111]; however, these attacks are rather difficult to achieve [118]. Encryption attacks target the communication channels of the IoT; side-channel attacks, cryptanalysis attacks, and MITM attacks are a few examples [111]. Certain security challenges come along with IoT systems. Resource limitations in processing and storage are a major challenge, as they can rule out computationally intensive security measures (e.g., cryptography). Added to this is the challenge of handling the high volume of data that arrives with high velocity, veracity, and variety. Amidst all this, security stands out as one of the major challenges [119]. Extensive research has been done on using ML/AI techniques to mitigate security issues in the IoT; a few examples are as follows (Fig. 8). In [120], access control techniques were implemented using naive Bayes and SVM algorithms to mitigate intrusions. In [121], reinforcement learning techniques (Q-learning and Dyna-Q) were used for authentication to prevent spoofing attacks, while in [122] and [123] the authors used SVM and DNN, respectively. To enable secure IoT offloading against jamming attacks, the authors of [124], [125] used Q-learning, an RL technique, while in [126] the authors used a Deep Q-Network (DQN). However, in all those algorithms, the ability to explain the outputs is lacking. Due to the nature of these applications and their gravity, the outputs of these algorithms must be reliable. Hence, these black-box models need to be wrapped with an explainable layer to make the system more accountable; one simple way to do so is to approximate the black box with an interpretable surrogate, as sketched below.
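The following minimal sketch illustrates this surrogate (mimic learning) idea: an interpretable decision tree is trained to imitate the predictions of a black-box IoT intrusion detector. The models, data, and feature names are synthetic placeholders assumed for illustration, not the method of any cited work.

```python
# Hypothetical sketch: wrapping a black-box IoT intrusion detector with an
# interpretable surrogate (mimic learning). Assumes scikit-learn only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Synthetic stand-in for IoT traffic features (benign vs. attack).
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = ["pkt_rate", "pkt_size", "dst_entropy",
                 "conn_dur", "retries", "ttl_var"]  # illustrative names

# The black-box detector whose decisions we want to make accountable.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Mimic learning: train a shallow tree on the black box's own predictions,
# so the tree approximates (and explains) the black box, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity to black box: {fidelity:.2%}")

# Human-readable rules that operators can audit.
print(export_text(surrogate, feature_names=feature_names))
```

Keeping the surrogate shallow trades some fidelity for rules short enough for an operator to audit, which mirrors the resource-constrained IoT setting discussed below.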
In addition to the usual statistics of model training, it enables developers to verify proper training through visualization of epoch-level training, episode-level training, and, most importantly, the segment-level interpretation that reveals what the agent sees. Such visualization is integral for a high-stakes security system in the IoT. Similar research explaining the workings of DQNs can be found in [129]. Given the abundance of RL methods for security in the IoT, there also remains the possibility of using mimic learning [130] to match the output of a Q-function neural network. In [66], a linear model U-tree (LMUT) is used to achieve optimal performance with high interpretability in DRL. 3) Added cost of Using XAI: Completing computationally complex and latency-sensitive security tasks on IoT devices with limited processor, memory, radio bandwidth, and battery resources is often very difficult. Low-cost sensors with few security measures are more susceptible to attack than computer systems [127]. Therefore, deploying XAI techniques on resource-constrained devices would not be ideal. Service providers will either use more expensive devices with high memory and computational power or generate explanations on edge/cloud servers, which would require more bandwidth. Even if the explanations are generated locally, the frequency of explanations will in most cases be lower than that of inference; therefore, this will not be a problem except in critical scenarios such as military applications and medical requirements. 4) Summary: The IoT is growing rapidly. Its diverse applicability has drawn attackers' attention, opening the possibility of a wide variety of new attacks and attack vectors across a large pool of devices. Detection of these attacks is increasingly shifting towards AI/ML-based systems. With IoT devices being the layer closest to end-users in network architectures, the accountability and trustworthiness of these AI systems become important facets that need to be addressed at once. Using XAI, the outputs of those systems can be translated into explanations more comprehensible to end-users. B. Security of Radio Access Network 1) Possible Security Threats, Challenges, Issues: RANs are the components of a telecommunications system that link mobile devices/User Equipment (UE) to public and private core networks via an existing network backbone. LTE and 5G RANs are capable of offering ultra-reliable (deterministic) wireless performance [131]. A RAN can consist of a baseband unit (BBU), a radio unit or remote radio unit, antennas, and software interfaces. One of the earliest RAN installments was the Global System for Mobile Communications (GSM) RAN. From there onwards, different types of RANs such as the Enhanced Data Rates for GSM Evolution RAN (GERAN), the Universal Mobile Telecommunications System RAN (UTRAN), and the Evolved UTRAN (E-UTRAN) have been deployed with the advancement of 2G, 3G, and 4G radio access technologies, respectively. However, the most recent additions are the Centralized/Cloud Radio Access Network (CRAN), the Virtualized Radio Access Network (VRAN), and the Open Radio Access Network (ORAN), which are expected to be associated with 5G and beyond, incorporating other contemporary technologies such as SDN and NFV [132]. We will mostly focus on threats and challenges in CRAN, ORAN, and VRAN. According to the authors of [31], the C-RAN architecture can be affected by a whole range of security threats. Some of the threats faced by RANs are common to any wireless network.
Eavesdropping, Man-in-the-Middle (MITM) attacks, MAC spoofing, identity theft attacks, jamming attacks, and TCP/UDP flooding are a few examples. However, some threats are inherited from the predecessor of CRAN, Cognitive Radio Networks (CRN). For example, Primary User Emulation Attacks (PUEA), Spectrum Sensing Data Falsification (SSDF) attacks, Common Control Channel (CCC) attacks, Beacon Falsification (BF) attacks, cross-layer attacks targeted at several layers, and SDR (Software Defined Radio) attacks are a few such attacks that can be seen in CRANs. Here the authors also emphasize the challenges native to CRAN regarding the security and trust of the virtualized BBU pool. The virtualized BS, where multi-point processing algorithms, terminal device data transmission, and dynamic traffic capacity allocation are carried out, needs to be secured. Machine learning-based IDSs are the most promising anomaly-based IDSs because they can gradually improve their performance by learning over time while performing a given task; for instance, Support Vector Machines (SVM) enabled with the kernel trick (KSVM) have been applied to classify and detect multi-stage jamming attacks in the CRAN BBU pool. O-RANs are envisaged to be the future of RAN technologies in B5G. When it comes to O-RANs, self-organization and intelligence-based technologies will be extensively used in the deployment process [134]. Therefore, the heavy automation and self-organization needed on the operators' side to reduce costs can increase the necessity for reliable and more secure intelligence-based methods (i.e., AI/ML). Explainable AI has the potential to mitigate security issues that arise in those intelligent systems. 2) How XAI can help to mitigate these attacks/issues: The O-RAN Alliance has brought all the latest C/V/O-RAN technologies together to realize the B5G RAN.
Fig. 9. XAI in the ORAN architecture: a modified ORAN architecture to accommodate XAI in its security. Non-real-time intelligent controllers in ORAN would require pre-model/in-model/post-hoc XAI methods to continuously improve ML models' resilience. A real-time intelligent RAN controller will benefit most from in-model/post-hoc explainers, as model training is not usually done in real time. xApps (third-party applications) will be required to follow specific standards to meet the system's explainability policies.
As shown in Fig. 9, the architecture can be fairly easily modified to embed XAI techniques and improve its resilience. IDSs in RANs are one of the most abundantly researched areas in RAN security using data-driven methods such as AI/ML. These systems must stay transparent to the operators, developers, and engineers working on them. Testing those models in the real world can result in some false classifications. However, the cost of such a misclassification, which could result in a breach, can be too high in RAN systems. Using explainable AI methods makes it possible to interpret why a model behaved unexpectedly. The first step in making amends is to understand the reason for the misclassification; once the cause is known, similar future attacks can be prevented. The work in [11] is one such example, using an adversarial approach to explain linear and MLP classifiers based on the minimum modifications required in the input features for a misclassified sample to be correctly classified (a minimal sketch of this idea follows below).
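To make the preceding idea concrete, the following is a hypothetical sketch and not the method of [11] itself: for a simple linear classifier over synthetic features (the feature layout, data, and step size are our own illustrative assumptions), the most influential feature of a misclassified sample is nudged greedily until the classifier assigns the correct label, and the accumulated changes serve as the explanation.

```python
# Hypothetical sketch of the minimum-modification idea in [11] for a linear
# classifier; data, features, and step size are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))                      # stand-in RAN traffic features
y = (X[:, 1] + X[:, 4] > 0).astype(int)            # stand-in benign/attack label
clf = LogisticRegression().fit(X, y)

def minimal_fix(x, true_label, step=0.05, max_iter=500):
    """Return a minimally modified copy of x that clf assigns to true_label."""
    x = x.copy()
    # for a linear model, the coefficient vector gives the corrective direction
    direction = clf.coef_[0] if true_label == 1 else -clf.coef_[0]
    for _ in range(max_iter):
        if clf.predict(x.reshape(1, -1))[0] == true_label:
            break
        j = np.argmax(np.abs(direction))           # most influential feature
        x[j] += step * np.sign(direction[j])       # smallest corrective nudge
    return x

# The explanation is the difference between the fixed and original sample:
# fixed = minimal_fix(x_mis, y_true); print(fixed - x_mis)
```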
3) Added cost of Using XAI: AI/ML-powered radio resource allocation, resource scheduling, and power allocation are integral functions of ORAN (Fig. 9). To ensure accountability, open distributed units that host those models will also require pipelines built to generate and communicate explanations. This requirement calls for more computation power and resources [135]. ML and AI models use real-time data from the RAN to monitor the RAN's health and performance. As O-RAN's security and management capabilities are enhanced because of the obtained results, the added costs are justifiable. XAI techniques applied to those ML techniques will require additional time, effort, and resources. Near-real-time and non-real-time RAN Intelligent Controllers (RIC) will require additional computation power to host interpreters, incurring further costs. However, explanations are typically not a real-time necessity; therefore, a certain leeway in power is possible. The RIC also offers an open platform to host third-party applications (x/rApps) by specialist software providers, and thus additional XAI-based measures will be needed. Third-party application vendors should use fitting XAI methods to complement the requirements on the RIC during the deployment process, ensuring software isolation, secure and standardized interfaces, and access controls to guarantee that x/rApps cannot bring vulnerabilities into the RAN [136]-[138]. 4) Summary: RAN commercialization is headed toward an alliance between CRAN, VRAN, and ORAN (xRAN) technologies. Each of these technologies is closely coupled with intelligent systems in operations such as resource allocation and optimization. AI/ML-powered zero-trust architecture will revolutionize security in RAN technologies, from automating user access control policies to auditing. Backing up such integral tasks with a canopy of user-comprehensible explanations would increase the accountability of the intelligent systems used under the hood. C. Security of B5G Edge Network 1) Possible Security Threats, Challenges, Issues: Simply put, edge computing means performing computations as near as feasible to the resource-constrained devices where data is generated, rather than at much greater distances [139]. Edge layers preprocess data acquired from many sources using caching and processing modules to deliver near-real-time replies to mobile consumers. Edge networks are becoming more popular [140] due to their cost-effectiveness across different fields. Clear advantages of edge computation can be seen in the cost-effectiveness of data usage [141], privacy improvement, and bandwidth usage [142], [143], which in turn enable the implementation of novel ML applications [144]. The authors of [25] have stated that AI security in B5G edge networks can be considered under two headings: "AI for edge security" and "security for edge AI." The former refers to AI techniques used to secure edge systems, while the latter refers to the security of AI systems deployed in edge networks. The authors also identify DoS attacks, service or resource manipulation, privacy leakage, and man-in-the-middle attacks as the most prevalent security concerns on edge infrastructure. Current research describes the use of artificial intelligence as a facilitator of edge security in various contexts, including more general applications and complete architectures that rely on AI. One such instance can be seen in [145], where the authors propose a secure architecture for IoT, namely AI4SAFE-IoT. The three-layer (network, application, and edge) architecture uses an AI engine for security across all three layers.
The network-layer IDS is claimed to mitigate sinkhole, DoS, rank, and local repair attacks in the proposed architecture. The authors of [146] have emphasized the possibility of utilizing edge AI recommender systems to send suggestions to service customers via an app in the e-tourism domain. Although edge AI has other inherent challenges, the security risks associated with artificial intelligence may be reduced by providing the AI modules with more interpretable and fault-tolerant methods that make the models more transparent. 2) How XAI can help to mitigate these attacks/issues: Some designs, such as AI4SAFE-IoT, may contain a variety of AI models and topologies, which can result in a large number of complex computations being performed. The basic issue that explanations of such processes must deal with is finding methods to make all of this complexity more manageable. Making a proxy model that behaves similarly to the actual model, but more understandably, may accomplish this goal. Local Interpretable Model-Agnostic Explanations (LIME) [49] is a prime example of a linear proxy model. With LIME, a black-box system is described by probing its behavior on perturbations of an input, and the data from that probing is then used to build a local linear model that acts as a simplified proxy for the original model in the vicinity of the input (a minimal sketch of this probing idea is given at the end of this subsection). Since this method is model-agnostic and comparatively fast, the authors emphasize that the technique may be used in a wide range of models and problem domains to find the areas of the input that are most important for a decision. Research is ongoing on explainable recommender systems [147] that would be more resilient replacements for edge AI recommender systems such as the one given in [146]. 3) Added cost of Using XAI: Edge intelligence is a key approach in B5G networks that uses various resources, including storage, caching, computation power, and so on. For improved performance, the edge server is constantly loaded with powerful multidisciplinary algorithms, including ML, data mining, NLP, and deep reinforcement learning [148]. XAI broadens the horizon of edge intelligence (edge caching, training, inference, and offloading [149]) by adding a fifth dimension: edge explanations. Access to edge caching for generating and storing pre-model explanations is necessary to ensure security in the edge and IoT layers; thus, additional storage space will be required. Similarly, when generating in-model and post-hoc explanations during the edge training and inference stages, supplementary computational costs may be incurred depending on whether the situation requires new devices or edge/cloud servers. In case XAI methods are deemed too computationally expensive, operations will be extended and distributed using edge offloading. Each of the above scenarios has inherent costs associated with it. 4) Summary: Edge computing thrives on ways to reduce costs, latency, and bandwidth usage. However, it opens a new threat surface exposed to attacks such as MITM, DoS, and privacy leakage. AI/ML is increasingly used to mitigate those attacks, and those systems must be reinforced with a concrete, interpretable data flow. Local and global XAI methods will be ever more important for improving users' trust in the services, despite the added resource utilization in the long run.
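As a concrete illustration of the probing idea behind LIME described in this subsection, here is a minimal from-scratch sketch; the black-box model, synthetic features, and kernel width are illustrative assumptions rather than the setup of any cited work.

```python
# Minimal LIME-style sketch: perturb the input, query the black-box model,
# and fit a distance-weighted linear surrogate that is valid near the input.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))                     # stand-in edge security features
y = (X[:, 0] * X[:, 1] > 0).astype(int)
black_box = RandomForestClassifier(n_estimators=50, random_state=1).fit(X, y)

def local_explain(x, n_samples=1000, width=0.75):
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))   # probe around x
    preds = black_box.predict_proba(Z)[:, 1]                  # black-box outputs
    w = np.exp(-(np.linalg.norm(Z - x, axis=1) ** 2) / width ** 2)
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=w)
    return surrogate.coef_                                    # local importances

print(local_explain(X[0]))
```

The surrogate's coefficients then indicate which features most influenced the black-box decision in the neighborhood of the probed input.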
D. Security of Core and Backhaul Networks 1) Possible Security Threats, Challenges, Issues: The core network refers to the highly functional communication facilities that link primary nodes, providing routes for communication between subnetworks. In other words, it is the central part of a telecommunication network that delivers services to users who are linked via the access network. The backhaul network links BSs to network controllers within a coverage region, which interconnect to the core network through the core transport network. The backhaul network is also known as the first and last mile (first mile from a wireline perspective, last mile from a mobile perspective) [150]. A set of threats known to any network is seen in the backhaul and core networks. Eavesdropping and DoS in backhaul networks, along with possible solutions such as mutual authentication, key exchange, and perfect forward secrecy, are discussed in [151]. The authors of [152] have proposed IPsec tunnel mode and IPsec bound end-2-end tunnel (BEET) mode-based solutions to LTE-backhaul-related security issues such as DoS, distribution of viruses, and unwanted communication via VoIP. In [153], the authors have proposed security architectures for backhaul networks in an SDN environment using the host identity protocol (HIP) and policy-based communication and synchronized networks against spoofing and DoS attacks. TCP reset attacks, DoS, and DDoS were considered by the authors of [154], where they propose a VPN-based architecture for backhaul security. A similar VPN-based architecture was proposed later in [155], considering attacks such as IP-based attacks, replay attacks, eavesdropping, spoofing, and DoS attacks. In [156], the authors have emphasized DoS and MITM attacks, proposing IPsec and firewalls as possible solutions. One of the major security challenges faced here is the decentralized and distributed data caching that can open up many attack possibilities. Therefore, secure link management, communication, and handover security are some of the challenges that need to be addressed. In addition, network optimization, architectural enhancement, and performance metrics are considered in the literature [157]. There is a growing trend of applying reinforcement learning and machine learning methods to backhaul and core networks. For instance, in [158], a Q-learning method is proposed for increasing the dependability of a millimeter-wave (mmW) non-line-of-sight small-cell backhaul system. Also, the authors of [159] have addressed the issue of adaptive call admission control using a Q-learning algorithm. The authors of [160] have emphasized the usage of ML in an SDN environment; they used ANN methods on top of IP routing to estimate and reallocate available network resources to newly added slices using Traffic Engineering (TE) logic. Like many other ML applications, this study did not consider the interpretability and transparency of the model in its evaluations. In the real world, the accountability and fault tolerance of these techniques are critical. A general XAI system may be utilized to address this research need; the following section explores some of these options. 2) How XAI can help to mitigate these attacks/issues: The lack of explainability in the above applications casts doubt on their credibility and feasibility for industrial implementation. However, there is hope for these systems when they are coupled with interpretations of their internal workings.
The MLP used in [160] for network optimization can be improved with various post-hoc explainability techniques, such as case-based reasoning (CBR) [161] coupled into the MLP network. With B5G around the corner, low latency and high throughput are closely associated with backhaul and core networks. Although ML is fully capable in this regard, the models need to be explainable enough without adding a burden on latency and throughput. Explaining RL models with LMUTs, or replacing them with Programmatically Interpretable Reinforcement Learning (PIRL), is worth considering since these approaches are efficient. PIRL policies are expressed using a high-level, human-readable programming language, making them understandable to humans [64]. Here, identifying a policy that maximizes long-term reward is accomplished through Neurally Directed Program Search (NDPS), an imitation-learning-inspired technique [162], [163]. Notably, in the experiments conducted in [64], the NDPS model surpassed the DRL model in performance. 3) Added cost of Using XAI: Dynamic resource management seems to be one of the most pressing issues for wireless backhauling in using limited resources efficiently. For this purpose, AI/ML-based systems have been widely adopted in recent studies. Consequently, XAI techniques become a requirement to increase their resilience and accountability. Deploying XAI methods in energy-efficient small-cell backhauling techniques on Unmanned Aerial Vehicles (UAVs), high-altitude platform stations, and satellites [164] will be highly challenging in terms of costs. These costs might be incurred on additional computation power, caching, and bandwidth to generate and communicate explanations. Although cost constraints could be a damper for XAI in core and backhaul networks, the system insights gained through explanations are important: they indicate whether wireless backhaul can be used in the field without performance losses, and they highlight the adjustments needed for optimal field use and for the robustness of the AI/ML methods used. 4) Summary: Security is an emerging trend for SDN and NFV-based backhaul traffic monitoring. Better network optimization, architectural enhancements, and security enhancements are envisaged in future research on B5G networks. AI/ML-based systems to identify common attacks such as viruses, MITM, replay attacks, and DoS attacks will also be applied in the core and backhaul parts of the networks. To avoid backhaul bottlenecks, balance the load, and measure the overall resilience of backhaul networks, interpretable AI/ML models are more beneficial than black-box models. E. Security of B5G E2E Slicing 1) Possible Security Threats, Challenges, Issues: Network slicing means partitioning network architectures into virtual elements. This allows operators to meet customized client needs [165]. It is highly analogous to dynamically allocating computer resources to enable concurrent execution of threads in a complex software system, a notion known as program slicing. Program slicing divides (disaggregates) software routines into many threads and configures computing resources to create virtual computing environments for parallel processing. Similarly, through the segmentation of network designs into virtual components, SDN and NFV provide much more network flexibility than previously possible.
In its most basic form, network slicing allows for the construction of numerous virtual networks on top of a single physical infrastructure, letting network operators customize the deployment of B5G resources and functions to serve specific consumers and market segments. The authors of [166] have reported on both classical (well-researched) security threats and non-trivial, less-researched threats affecting network slicing. Classical security threats include traffic injection into interfaces, network slice manager impersonation, host platform impersonation, and monitoring of interfaces. Among the non-trivial threats that require further research, passive side-channel attacks, active side-channel attacks, compromise of functions, and end-device vulnerabilities are prominent. It is worth noting that these security threats violate at least one of the leading security principles (confidentiality, authentication, authorization, availability, and integrity). Numerous ML-based techniques are available in the existing literature to prevent such security vulnerabilities. For mitigating in-band and out-of-band jamming and external polarization attacks, three distinct models were used in the literature [167]: (i) ANNs, (ii) a semi-supervised one-class support vector machine (OCSVM), and (iii) unsupervised density-based spatial clustering of applications (DBSCAN). [168] and [169] proposed RL for admission control designs that would interact with the dynamics of RAN operation. While the authors in [168] used a semi-Markovian Decision Process (SMDP) model to represent the RL agent, the authors in [169] used a stochastic artificial neural network (S-ANN). In [170], Q-learning was used to solve the issue of slice admission control for revenue maximization. Although this gives adaptability to the surroundings while still attaining near-optimal performance, due to the scalability issues inherent to Q-learning, the authors extended the work in [168], proposing an analytical model for the admissibility region in a sliced network that provides formal service guarantees to network slices. They also proposed an online machine learning-based admission control algorithm that maximizes the infrastructure provider's monetization. Such models can draw the attention of stakeholders once they are commercially applied, and thus accountability and resilience are of great importance. 2) How XAI can help to mitigate these attacks/issues: As shown above, clustering-based solutions are prominently used in many AI attack mitigation algorithms in E2E slicing. Incidentally, explainable clustering is an up-and-coming area of research that can be used to improve ML-based clustering models' completeness, maintainability, resilience, sensitivity, consistency, accuracy, and robustness. For example, in [171] the authors have proposed a general interpretability framework for any clustering/classification model, called the single feature introduction test (SFIT), to explain the clusters. Although these techniques were developed for different use cases, there is potential for them to be applied anywhere (e.g., B5G security) as long as the underlying algorithms are compatible. In addition, the authors of [172] have proposed using decision trees to interpret the clusters provided by k-means and k-median clustering algorithms. The ExKMC technique introduced in [173] is another approach to adding interpretability to k-means clustering; it follows a similar approach to [172], combining small decision trees with a modified k-means algorithm (a minimal surrogate-tree sketch in this spirit is given below).
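The following minimal sketch illustrates the surrogate-tree idea of [172] referenced above (ExKMC [173] goes further by adapting k-means jointly with the tree): synthetic per-slice KPI data is clustered with k-means, and a small decision tree fitted on the cluster labels then describes each cluster with a few auditable threshold rules. The feature names and data are illustrative assumptions.

```python
# Surrogate-tree sketch in the spirit of [172]: cluster with k-means, then fit
# a shallow decision tree on the cluster labels to expose readable rules.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 4))                      # stand-in per-slice KPI features
labels = KMeans(n_clusters=3, n_init=10, random_state=2).fit_predict(X)

tree = DecisionTreeClassifier(max_depth=3, random_state=2).fit(X, labels)
# each printed path is a human-auditable description of one cluster
print(export_text(tree, feature_names=["throughput", "latency", "pkt_loss", "jitter"]))
```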
Since slicing is focused on delivering customized services to consumers, transparency is a growing concern in the community. Scenarios where machine learning is used for security in E2E slicing should be wrapped in a mantle of explainability to instill trustworthiness in the services consumers receive. 3) Added cost of Using XAI: The primary goal of E2E slicing is to ensure that specified services meet their required performance criteria. The slices should adapt to traffic changes, detect potential security issues, and take countermeasures autonomously [167] in a real-time, trustworthy network environment. To generate more targeted explanations and enable accountability, telemetry data on the present condition of the network's resources and services must be communicated between the data plane and the control plane. Additional communication protocols must be used to abstract and communicate domain-specific information for explanations alongside interpretations of ML models. Data from service providers must also be infused with model outputs to provide bespoke explanations. The XAI methods will also require some computational power in the control and orchestration layers for generating in-model explanations, which will add to the final costs. F. Security of Network Automation (ZSM) 1) Possible Security Threats, Challenges, Issues: In ZSM, AI-driven closed-loop automation comes together to achieve full network automation. The ultimate automation goal in B5G is to create fully autonomous networks that can self-configure, self-monitor, self-heal, and self-optimize without human involvement. These characteristics need a novel horizontal and vertical end-to-end architecture suited for data-driven machine learning and AI algorithms. For self-managing AI functions, the ZSM framework depends on SDN and NFV technologies as well [174]. For example, ZSM plans to use DL to provide intelligent network management and operation capabilities such as traffic categorization, mobility prediction, traffic forecasting, resource allocation, and network security [175]. This introduces a new threat surface that needs to be addressed separately. In [3], a range of possible attacks on the ZSM threat surface across various network aspects is discussed. The E2E service intelligence offered by ZSM enables decision-making and forecasting capabilities. Consequently, an attacker may design inputs to cause the machine learning models in E2E service intelligence services to make incorrect choices or predictions, possibly resulting in performance degradation and financial loss. This can in turn jeopardize SLA fulfillment and security assurances. Furthermore, API-based attacks such as parameter attacks, identity attacks, MITM, and DDoS attacks; intent-based interface threats like information exposure, undesirable configuration, and abnormal behavior; threats on closed-loop automation control systems such as deception attacks; attacks targeting AI/ML systems such as poisoning and evasion attacks; and threats on programmable network technologies such as DoS, privilege escalation, malformed control message injection, eavesdropping, flooding, and introspection attacks are some of the attack vectors emphasized in the ZSM threat surface. The authors of [3] have proposed a range of solutions for these attacks, such as adversarial training, input validation, defensive distillation, defense Generative Adversarial Networks (GANs), and concept drift detection.
They further elaborate on the efficacy of defense GANs against white-box and black-box attacks. This is one of the occasions where XAI would shine. Nevertheless, XAI can assist in numerous other areas of ML implementation across different sectors of network automation to make it more resilient and accountable, and it has high potential for addressing the research gap around black-box attacks. In [176], the authors emphasize challenges such as the need for AI/ML security and how AI model interpretation will guarantee accountability, reliability, and transparency by improving the trustworthiness of AI-enabled systems. However, they also note that the research gap in ML security for network and service management limits the field to only a few contributions (i.e., [177], [178]). 2) How XAI can help to mitigate these attacks/issues: The use of AI/ML methods in ZSM opens the door to many new possible scenarios for XAI applications. The authors of [176] have shown that a variety of ML techniques, such as RNNs (LSTMs), support vector data descriptions (an SVM-inspired technique), Q-learning, and Gaussian models, would be necessary to enable full network automation. These algorithms should be backed with an expandable set of interpretability techniques in said applications, for example, Partial Dependence Plots [52], [179], to name a few. Generating adversarial samples, a fundamental step for identifying security vulnerabilities, is still not fully clear in the service and network management fields [176]. Incidentally, IDS development and implementation is a salient concern for full automation. The use of XAI in this area is still in its infancy, but it is showing signs of promise. For example, [180] gives an evaluation of perturbation-based post-hoc XAI tools in the intrusion detection field with network traffic data. In this case, we can observe that all of these tools work quite well, with LIME and SHAP providing exceptional results (a minimal sketch of this kind of post-hoc inspection is given at the end of this subsection). Furthermore, the work in [181], [182] proposes to use SHAP with their Multimodal DL-based Mobile TraffIc Classification system (MIMETIC) to evaluate input importance. This shows that XAI will play a key role in fostering the trustworthiness and transparency of AI applications in security for ZSM. 3) Added cost of Using XAI: There are several management tasks bundled together in the ZSM Management Domain (MD), such as the domain data collection services, domain analytics services, domain intelligence services, domain orchestration services, and domain control services [176]. Additional channels to communicate explanations generated about domain intelligence services and data collection services must be looped into the domain analytics of each MD so that any changes required in domain control and orchestration are properly executed; this will require additional computation power. Furthermore, generated domain-specific explanations will have to be stored in each domain data service, while cross-domain explanations will be stored in common data services, calling for additional storage and caching space. However, the existing ZSM system can be conveniently adapted to explanation-based analytics with minimal compromises. 4) Summary: ZSM, or network automation, can simply be identified as the future of telecommunication systems. In full automation, AI/ML is integral to the closed-loop management of a network. In closed-loop management, an undesirable configuration or an attack on the AI/ML-based systems can spiral malicious behavior into a whirl of abnormalities across the network domains. XAI is a viable candidate to uncover underlying vulnerabilities in AI/ML systems and to shed light on attack data obscured in black-box models.
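As a hedged illustration of the perturbation-based post-hoc inspection evaluated in [180]-[182] (and not their actual pipelines), the sketch below trains a tree-based traffic classifier on synthetic flow features and ranks inputs by mean absolute SHAP value; all feature names and labels are our own assumptions, and MIMETIC itself is a multimodal deep model rather than a tree ensemble.

```python
# Hedged sketch: rank IDS/traffic-classifier inputs by mean |SHAP| value.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.uniform(size=(600, 5))                     # stand-in flow features
y = (X[:, 1] * X[:, 2] > 0.25).astype(int)         # stand-in benign/malicious label
clf = RandomForestClassifier(n_estimators=100, random_state=3).fit(X, y)

sv = shap.TreeExplainer(clf).shap_values(X[:100])
# older shap versions return a per-class list, newer ones a 3-D array:
vals = sv[1] if isinstance(sv, list) else sv[..., 1]
print("mean |SHAP| per feature:", np.abs(vals).mean(axis=0))
```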
1) Intent-Based Network (IBN): An IBN is a network that operates autonomously under the intent of a predetermined set of directives. In other words, an IBN is a closed-loop, self-operating system. Compared to an imperative policy, an intent-based policy is a set of objectives that must be fulfilled throughout network operation to achieve collective performance goals. At the interface level, intent is a means of abstracting complexity. Given contextual awareness and appropriate data availability from multiple networks and intent functional blocks, AI/ML is the cornerstone of realizing intent-level inference. Service orchestration optimization, resource monitoring, context- and behavior-based intent-to-service mapping, and extracting service primitives from intents are some of the operations where ML might be required [183], [184]. Such functional modules would be open targets for malicious agents looking to penetrate the security of IBNs. With the help of XAI techniques, for example, an anomaly in intent recognition can be quickly identified and rectified. The above-mentioned enabling technologies will spur the development of a variety of new applications, previously impossible due to the lack of accountability in AI/ML-based systems, shaping human society in the 5G and beyond era. Some of the emerging use cases that will rely on future B5G network capabilities are discussed in this section, with emphasis on the impact made by the advent of XAI in security. A. Smart Cities 1) Introduction: With fast-growing urbanization and resource depletion [185], [186], there comes the need to handle their drastic impacts on cities. As a result, smart cities were proposed to manage and optimize resource and energy usage. To efficiently achieve the requirements of smart cities, suitable communication technologies that come with the IoT have become an integral constituent. However, the immense amount of data collected by IoT devices needs to be communicated appropriately, analyzed and computed precisely, and transformed into the envisaged services necessary to improve the standard of living in cities. Thus, AI and big data analytics come into play. More precisely, AI-based techniques will be found in many sectors of smart cities, like intelligent transportation, cybersecurity, electric and water systems, waste management, public safety, UAV-assisted next-generation communication (5G and B5G), etc. [39], [187], [188]. With the usage of AI in integral parts of everyday life such as energy and security [127], [189], [190], the reliability, accountability, and resilience of systems using such AI, ML, and DRL algorithms become paramount. On the other hand, the number of interconnected devices that become part of a smart city can only increase with time.
Fig. 10. Smart cities in the B5G era will involve collecting massive amounts of personal data, requiring further accountability from the service providers' end. From the collection of data to the communication of decisions taken by AI models, end users will require extra reassurance about system security as the scope of services widens.
This inter-connectivity and the sheer number of devices have given rise to cybersecurity concerns. Security in smart cities has been a growing field of research in the recent past, and numerous AI/ML-driven approaches contribute considerably to it. For example, the authors of [191] emphasize the ability of ML to provide solutions for security threats such as DoS attacks (using MLP-based protocols), eavesdropping (Q-learning, Dyna-Q, and Bayesian techniques), and spoofing (Dyna-Q, Q-learning, SVM, DNN, incremental aggregated gradient (IAG), and distributed Frank-Wolfe (dFW) techniques). They go on to show solutions to other challenges, such as privacy leakage and digital fingerprinting (SVMs, ANNs), where ML has become a viable candidate in the IoT and smart city domains. 2) Impact of XAI on the security of smart cities: With XAI, it is possible to overcome implementation challenges while also explaining decision-making processes and supplementary information. Fig. 10 gives a holistic view of the effect of XAI in smart cities. This approach will lead to more understandable machine learning algorithms in smart city applications. However, when developing XAI-based machine learning algorithms, it is necessary to consider various levels of explanation, ranging from "comprehensive explanation" in the case of complex black-box ML algorithms to "no explanation" in the case of transparent ML algorithms. In addition, allowances must be made for the knowledge and competence of the stakeholders addressed by the explanations, from farmers in smart agriculture programs to technicians and computer scientists using machine learning algorithms in engineering practices that integrate data analysis into smart (traffic, water, energy, etc.) monitoring systems. As discussed earlier, the preponderance of AI in critical applications such as smart grids, security, and intelligent traffic monitoring systems of smart cities underscores the need for transparency in those black-box systems, with relevant explanations in some comprehensible form. The most important factor that can affect other sectors in a smart city is the climate. Heavy rains (floods) or heavy droughts can equally affect a city's water management, socioeconomic, ecological, and environmental aspects in more than one way. Nevertheless, with intelligent monitoring systems, these problems can be mitigated to a great extent. Forecasting and being ready for any climate change is the first step. Traditional rule-based climate and weather forecasting systems were the preferred techniques in heritage cities. However, data-driven AI and ML model-based forecasting systems are emerging with promising results (even exceeding physics-based models [192]). Such systems employ powerful but complicated black-box machine learning models such as neural networks. The main impediment to using such systems in a real-world setting is the absence of accountability frameworks due to their lack of transparency, explainability, and trust. Thus, it is vital to have an explainable wrapping to back the outputs of these models in different situations.
For example, during a security breach or a malfunction of a meteorological system (e.g., cyclone prediction), all the components should be transparent enough to be evaluated by operators and state inspectors to make amends and find the responsible parties. The general public and government/private institutes affected by such situations have the right to know the reasons behind the damages incurred upon them. Therefore, an explainable and interpretable layer for AI-based models is of utmost importance. In [193], the authors have used SHAP-based interpretations for CNN- and LSTM-based spatial drought prediction systems. Such work is a step towards seeing sophisticated and precise AI/ML-based techniques used in real-world scenarios and ensuring their sustainability. 3) Related Work: Numerous works on XAI usage can be found in various sectors in the realization of smart cities. The authors of [194] emphasize the importance of XAI in the transition stage from heritage cities to smart cities. Major entities like governing bodies, investors, and researchers must make deliberate decisions on policy standardization and applications for intelligent urban development. They further elaborate on the need to have XAI in all layers from infrastructure to end-user (i.e., infrastructure layer, application layer, cloud data, service layer, and end-user). In [195], the authors identify the necessity of using XAI techniques in smart monitoring systems that autonomously collect, analyze, and communicate structural data from wireless sensor networks, and note the usefulness of LIME and LRP (layer-wise relevance propagation). With the intent of evaluating data fusion techniques in smart city applications, the authors of [196] have elaborated on the importance of having explainable outcomes from the data fusion algorithms used alongside AI and ML-based automation systems. In [197], the authors have surveyed the performance of 10 ML algorithms for smart medical waste management in Morocco, showing that ANN and SVM are the most accurate in an IoT environment. They have also emphasized ways to improve current AI-based waste management systems by including explanations in the architecture of each layer. 4) Summary: Optimizing the usage of depleting energy and resource management plays a vital role in smart cities. AI-based techniques will be found in many sectors of smart cities, like intelligent transportation, cybersecurity, smart grids, smart agriculture, and so on, where sensitive data will be processed to improve the standard of living. These systems should be held accountable for any abnormal behaviors while being continuously updated against security vulnerabilities. With their capability to disentangle ambiguity in decision-making processes, XAI techniques can surface the relevant information to everyone from pedestrians to smart city service providers [198]. B. Smart Healthcare 1) Introduction: Smart health can be regarded as an intelligent and context-aware development of mobile-health services, expected to reduce future hospitalizations and provide remote healthcare services by incorporating the latest wireless technologies fused with other enablers such as AI and ML. Accordingly, there is a growing use of ML in integral parts of healthcare such as medical image classification [199]-[201], diagnostic science [202]-[206], tumour analysis [207]-[209], personalized healthcare [210]-[212], etc.
With the significant increase in the population above the age of 60 [213], the rise of health-related emergencies and illnesses is inevitable. That is why smart health must utilize cutting-edge technologies like AI and big data analysis in its solutions, alongside advanced biotechnology and micro-electronics. Different stakeholders will come across intelligent technologies at different levels of the process. At the patient level, smart monitoring wearables, mobile devices, sensors, and actuators will intelligently gather data for diagnosis, virtual/remote support, emergency care, and monitoring in the household. From the doctors' and healthcare workers' perspective, sophisticated algorithms (ML/AI, etc.) should accommodate the analysis of large amounts of data in making decisions related to diagnostics and clinical guidance. In addition, health administrators and research centers can leverage data-driven techniques in clinical management, prognosis, health decision-making, medical studies, and pandemic control [214]-[216]. However, the aptitude for processing large amounts of data with ML techniques comes at the price of black-box-level interpretability concerns: however well these models perform accuracy-wise, less transparency means lower trust and weaker maintainability. For example, a false positive in diagnostics can lead to life-changing decisions for a patient. Therefore, the current architectures of the IoMT demand more robust and resilient security techniques, such as XAI, that wrap AI algorithms with fault-traceable explainability layers so that imperfections can be recognized as early as possible, striving towards the goal of accountability. 2) Impact of XAI on the security of healthcare: In smart healthcare, the use of AI has become essential in processing data obtained from personalized and real-time health monitoring services. As shown in Fig. 11, user-level information collected [217] from wearables, smartphones, and healthcare applications is processed with AI during health screenings and treatment plan selections, diagnostics, and emergency responses. But the lack of trust in AI systems is due to their black-box nature and the lack of fail-safe mechanisms in case of security breaches. Moreover, with proper transparency, XAI can be used to identify adversarial samples before a breach. In [218], the authors have used the SHAP DeepExplainer to create signatures for input images, which are then identified as either adversarial or not. Those signatures buffer the original AI model, preventing misclassifications due to adversarial inputs (a hedged sketch of this signature idea follows below). Such an explanation-based security approach can provide a layer of trust in a variety of image-based analysis systems using AI and ML [199], [200], [204] in smart healthcare, such as endoscopic image analysis, breast cancer analysis, skin lesion analysis, MRI brain tumor analysis, lung image analysis, and much more. With XAI, these systems would be accountable for their decisions without losing convenience and efficiency. (Fig. 11 illustrates example explanations for black-box AI systems: contributing factors for the diagnosis of a certain disease (e.g., blood pressure level for heart problem diagnostics), changes in feature importance after attacks on a diagnostic engine, reasons for false emergency detection in the elderly, and troubleshooting details for issues in telesurgery systems.)
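The following is a hedged sketch of the signature idea in [218], not its implementation: per-input SHAP attribution "signatures" are computed, and inputs whose signature deviates strongly from the class-typical one are flagged. The original work applies shap.DeepExplainer to image models; here, a KernelExplainer over a toy tabular model, with an assumed deviation threshold, stands in for it.

```python
# Hedged sketch of SHAP-signature screening for adversarial inputs.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 8))                      # stand-in diagnostic features
y = (X[:, 0] - X[:, 2] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

explainer = shap.KernelExplainer(clf.decision_function, shap.sample(X, 50))
signatures = explainer.shap_values(X[:100], nsamples=100)   # one vector per input
center = signatures.mean(axis=0)                            # "typical" signature

def looks_adversarial(x, thresh=3.0):
    s = explainer.shap_values(x.reshape(1, -1), nsamples=100)[0]
    return np.linalg.norm(s - center) > thresh              # large deviation -> flag

print(looks_adversarial(X[0]))
```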
One of the more outstanding examples where XAI makes a difference is diagnostics. The authors of [219] have pointed out that the impact of XAI in healthcare is threefold: increased transparency, result tracking, and model improvement. If, from the data collected in the systems, AI/ML models categorize a user as having a specific condition, say a high blood sugar level, the clinicians are sent a report listing the features used in reaching that decision, such as heart rate, body temperature, and calorie intake. With XAI, beyond the list of features, it is also possible to show the attribute most responsible for the outcome, e.g., calorie consumption. This makes it convenient for medical practitioners to quickly examine the characteristics and provide recommendations about suitable medications or activities accordingly. It also adds a certain level of resilience, since the reason for a prediction can be traced if the outcome deviates anomalously. 3) Related Work: Numerous works have been carried out within the domain of smart healthcare; however, research showing where XAI has been used in the security of smart health is lacking. In [220], the authors have emphasized the possible security threats in IoT-based smart healthcare systems, such as DoS attacks, fingerprint and timing-based snooping, router attacks, select-and-forwarding attacks, sensor attacks, and replay attacks. In [221], the author presents a secure framework for hospital environments using IoT and AI. This system is expected to overcome frustrating queues, overwhelming paperwork, and work overload for doctors, and to help identify critical illnesses in time. The authors have also emphasized the security attacks that can potentially affect the hospital IoT environment, such as interruption, interception, modification, fabrication, replay, protocol compromise, stack attacks, etc. In such an environment, the usage of XAI can be of utmost importance to enable trustworthy AI-based defense mechanisms. Security aspects of IoT using AI are explained in [222], which reports on the AI/ML techniques used in the security of the healthcare sector. Although research on the use of AI in smart health has become prominent in the last several years, the security aspects of those AI techniques need to be addressed further due to the dynamic nature of the attacks and the quick evolution of attacking methods. XAI usage in this area is still limited to applications other than security, such as the recent COVID-19 epidemic control [223]. Currently, there is a significant research deficit in XAI security in healthcare. Because of the critical nature of the application, the use of XAI in healthcare security is essential, and the issue must be addressed as quickly as feasible. 4) Summary: AI/ML-based systems have been proposed in many previous studies in the smart healthcare field, but only a small proportion of those systems see the light of day in actual applications. Lack of causality, and thereby accountability, causes medical practitioners, insurance companies, etc., to lose confidence in intelligent healthcare systems. By making AI systems and their security more transparent and convincing regarding their true capabilities through XAI, the dark cloud of untrustworthy perception among stakeholders can be mitigated to a certain extent. C. Industry 4.0/5.0 1) Introduction: Vehicles, clothes, buildings, and weapons have all been designed and constructed by humans for several hundred years.
The introduction of Industry 1.0 in 1784 marked the beginning of substantial change in industrial output, with mechanical energy becoming involved. It was followed by Industry 2.0 (1870) and Industry 3.0 (1969), which marked the introduction of electrical energy and electronic/IT systems, respectively [224]. Just over 40 years later (2011), Industry 4.0 was introduced with the main objectives of increasing operational efficiency and productivity as well as automation [225], encompassing a variety of enablers, such as AI [226], the Internet of Things (IoT), cloud computing, CPS, and cognitive computing, and completely transforming manufacturing processes [224]. Characteristics closely associated with Internet technologies and sophisticated algorithms (ML, AI, etc.) were introduced, such as digitization, optimization, and customization of production; automation and adaptation; human-machine interaction (HMI); value-added services and businesses; and automatic data exchange and communication [225], [227], [228]. To summarize, the basic premise underlying Industry 4.0 is to make manufacturing "smart" and automated, facilitating mass productivity with minimal human intervention. In contrast, its successor, Industry 5.0, is envisaged to capitalize on the principle of collaboration between human brainpower and creativity and intelligent systems in production workflows, enabling mass personalization. That is to say, Industry 5.0 is expected to harness the synergy between humans and autonomous machines [229], [230], in addition to the full machine-based automation that predominates in Industry 4.0. Since the highest priority in Industry 4.0 is automation, the security and accountability of deployed AI and ML algorithms are equally as important as their accuracy and performance. In industrial-level applications, the black-box nature of ML models gives rise to questions about trust and maintainability when they are deployed. A defect in intelligent manufacturing equipment that goes unnoticed due to the opaque nature of the algorithm might lead to financial losses and wastage of resources in substantial proportions. The effect could grow exponentially when AI is used in the context of security in the IIoT (Industrial IoT) or any other infrastructure underlying Industry 4.0. In light of industrial applications of AI/ML, a technique like explainable AI will be of utmost importance in bringing bespoke solutions to its security issues. In Industry 5.0, XAI will be even more important, as it plans to increase human involvement with autonomous systems. To create an amicable understanding between humans and intelligent machines, the internal operations of the AI and ML models must be transparent to stakeholders, especially in critical contexts such as security and human safety. A potential solution is to use XAI as an interfacing medium between the two entities. 2) Impact of XAI on the security of Industry 4.0/5.0: When it comes to Industry 5.0 applications, the usage of XAI may make a significant contribution in a variety of fields due to its primary focus on human-centric, personalized manufacturing processes, as shown in Fig. 12. The deeper involvement of people in industrial environments alongside intelligent machines makes the safety and well-being of those users a crucial aspect of the whole process. The sophisticated intelligent machines used in this environment can employ AI/ML techniques that come with their own issues and security vulnerabilities.
A malfunction or security breach in such a system can cause property damage and put at risk the people working alongside those machines. For example, the trustworthiness of a robot working with humans in a steel utensil manufacturing company is a high priority due to the high temperatures and extreme forces used in heavy machinery; a security attack on the robot could potentially harm the humans surrounding it. By using XAI, trust in the AI models used in intelligent machines like robots can be enhanced. Mapping the internal mechanics and providing clear explanations of the cause and effect of those algorithms could help users understand the operations and take proper precautions. More importantly, XAI helps trace back the cause in case of a security attack or unexpected behavior, so that measures to prevent future malfunctions can be taken. This will reduce financial losses for manufacturing organizations and enable smoother production, removing obstacles that would otherwise impede the goal of mass production in the first place. 3) Related Work: The use of XAI in industrial settings has started to increase in the past couple of years [231]-[233]. However, XAI security in the Industry 4.0/5.0 field is still in its infancy, with only a few related studies coming to light. Nevertheless, all these works demonstrate promising results in facilitating AI/ML models in real-world applications. Zolanvari et al. [234] propose a model-agnostic and high-performing XAI model named TRUST (Transparency Relying Upon Statistical Theory), which they test in an Industrial Internet of Things (IIoT) environment with different cybersecurity datasets (WUSTL-IIoT, NSL-KDD, and UNSW). They rank the most crucial input features for each class based on mutual information after converting them to latent variables using factor analysis (a minimal sketch of this ranking step is given at the end of this subsection). The likelihood of any new sample falling into the classes is calculated using multimodal Gaussian distributions. The authors argue that this system beats LIME in terms of speed, performance, and explainability. Federated Learning (FL) has become a prominent research topic in IIoT systems due to its perks such as privacy preservation and resource and data management in the edge devices [226].
Fig. 12. In case of a security breach or other anomalous behavior, XAI may assist in determining the root cause so that preventative measures can be taken.
With FL's rapid rise in popularity, it is natural that it would draw the attention of cybercriminals. One such devastating attack on FL systems is the backdoor attack [235], [236]. In order to identify backdoor inputs, Hou et al. [237] offer a filter system based on a mix of classifiers and XAI models. Here, the models are trained on the server side and then sent to each IIoT application to identify backdoor input data, which is then cleaned using an appropriate method. As a result of combining this technique with XAI, the authors claim to have obtained very high rates of backdoor recognition. 4) Summary: The next chapter of industrialization is driving towards highly personalized, automated mass production built on the IIoT and AI, inspiring granular customization woven together with human creativity and machinery. Cloud-enabled super data storage, digital twins, and the augmented presence of employees will all depend on the high-speed, ultra-reliable connectivity provided by B5G networks. The inclusion of XAI in security can be considered the adhesive that helps service providers inculcate confidence in their products and sustain their customer base.
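As a hedged sketch of the feature-ranking step described for TRUST [234] (not the TRUST implementation itself), the snippet below projects inputs to latent variables with factor analysis and ranks those latents by mutual information with the class label; the data shapes and toy labels are illustrative assumptions.

```python
# Hedged sketch of TRUST-style ranking: factor-analysis latents ranked by
# mutual information with the attack/benign label.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 12))                     # stand-in IIoT telemetry
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)      # stand-in attack/benign label

Z = FactorAnalysis(n_components=5, random_state=5).fit_transform(X)  # latents
scores = mutual_info_classif(Z, y, random_state=5)
order = np.argsort(scores)[::-1]
print("latent variables ranked by mutual information:", order, scores[order])
```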
D. Smart Grid 2.0 1) Introduction: A smart grid is a self-healing system that unifies various power generation options and enables consumers to manage their energy consumption while reducing costs. This is achieved through the integration of ICT infrastructure with the circuit topology, where various distributed subsystems and complementary components are intelligently controlled through a distributed command and control system [238]. In the future, appliances in houses will be able to connect with smart meters to guarantee effective use of infrastructure, demand response, and energy management [239]. Smart Grid 1.0 was introduced as the initial step of meter installation and integration. It added one-way automated meter reading (AMR), followed by two-way automated metering infrastructure (AMI), which opened avenues toward Smart Grid 2.0. Smart Grid 2.0 is intended to be implemented with additional functions that use meter data, such as line failure analysis, load management, revenue prediction, etc. The authors of [240] show that realizing smart grids brings the challenge of optimally controlling various facets such as ecology, glassy dynamics, information theory, cloud microphysics, human cognition, and more. Optimizing all these aspects of a complex system is an inherently hard problem. In the recent past, smart grids have been increasingly adopted and expanded worldwide. With this increase in scale, interconnections, renewable energy integration, widespread usage of DC power transmission technologies, and the deregulation of electricity markets have multiplied and evolved. The smart grid's capacity to maintain a stable state has thus become a more complex problem: traditional stability analysis and control approaches have become lengthy/time-consuming, inefficient, and expensive because of these new developments. However, the authors also emphasize the possibility of overcoming this issue by embedding intelligent systems in CPS. The interoperability of AI/ML systems with the underlying infrastructure of smart grids becomes crucial in the context of CPS security. Security compromises of the CPSs used in smart grids can potentially cause devastating results in many aspects of life, given how deeply energy is rooted in people's daily operations. In [241], the authors elaborate that a smart grid could be open to various DoS attacks (jamming in substations, spoofing attacks, traffic/buffer flooding) in different layers of the system. They further emphasize attacks that could potentially compromise the integrity and confidentiality of a smart grid (e.g., false-data injection). With the advent of B5G services in energy distribution systems, we can expect security concerns similar to those in the numerous other cases discussed earlier in this paper. However, the consequences of a malicious intrusion could be far more severe in smart grids than in a typical application; these problems could range from privacy leakage of users' meter data to cascading blackouts or catastrophic infrastructure failures [242]. 2) Impact of XAI on the security of Smart Grid 2.0: As shown in Fig. 13, XAI could provide useful insight into the AI-based control functions in the smart grids of the future. In [243], the authors bring to light the potential to use AI in some of the critical operations in smart grids.
Most of these applications are critical to the performance of the whole system, e.g., static/dynamic security assessments, stability control/assessment, and fault diagnosis. It is worth noting that these functions hold equally paramount importance to the grid and to the livelihoods of the thousands who depend on the energy distribution system. The most important function of a smart grid is to control and maintain stability. During operation, the system goes through various states of power demand and other disturbances that continuously drag it away from its normal state. Corresponding control systems such as generation controls, damping controls, voltage controls, and frequency controls need to be in place to handle each operative state. Lately, AI has been increasingly used in such control systems. For example, smart generation control systems have been studied using RL (DQN [244] and Q-learning [245] ). Among many other previous studies, frequency control systems have been implemented using LSTMs [246] , RL, and stacked denoising AEs [247] . Although these studies have shown promising results in terms of accuracy, limitations exist in practice: interpretability of the AI models, robustness to adversarial attacks, robustness to noise/data loss/time delays, and imbalanced data sets, to name a few. Plausible solutions to these issues can be drawn from XAI. In [248] , the authors show that with more interpretable methods such as DTs, security assessments of smart grids can be implemented accurately and transparently. Also, the authors of [249] show that using a local linear interpreter on top of a deep belief network enables users to identify crucial factors that contribute to system instability. Furthermore, these outcomes can be used in emergency control scenarios. 3) Related Work: XAI in smart grids is an emerging field of research, and securing smart grids using XAI remains a gap that needs to be addressed in the future. The following are some of the few existing XAI applications in smart grids. In [250] , XAI tools (i.e., LIME, SHAP, and ELI5) are proposed for anticipating the amount of solar photovoltaic energy that will be generated in future smart grids. The authors used a random forest machine learning model trained on an open-source data set to predict solar power output, followed by the said XAI tools to comprehend the reasons for the predictions. Overall, LIME, SHAP, and ELI5 were shown to enhance model outcomes by providing precise information that can be easily understood. They also pointed out how aspects like computational cost, local/global explanation, and feature weights vary amongst XAI tools/packages in their particular use case (a hedged sketch of this forecast-then-explain workflow is given below). Finally, the study concludes by discussing the usefulness of XAI-based photovoltaic forecasting for building next-generation control centers with visualization and business analytics tools that support new technologies like AI and XR. Here, the authors stress the importance of using XAI to enable less tech-savvy users to understand the workings of such complex technologies. The study [251] elaborates on the interpretability of the outcomes of DRL methods used in emergency control of power systems. The authors propose Deep-SHAP to interpret the DRL model, which calculates the relevance of input features via a backpropagation mechanism.
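The following is a minimal sketch of the forecast-then-explain pattern used in [250]: a random forest predicts photovoltaic output, and SHAP attributes each forecast to its input features. The data, feature names, and toy generation model are illustrative assumptions, not the open-source data set used by the authors.

import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "irradiance": rng.uniform(0, 1000, 500),   # W/m^2 (assumed feature)
    "temperature": rng.uniform(-5, 35, 500),   # deg C (assumed feature)
    "cloud_cover": rng.uniform(0, 1, 500),     # fraction (assumed feature)
})
# Toy generation model: output rises with irradiance, falls with cloud cover.
y = 0.2 * X["irradiance"] * (1 - 0.7 * X["cloud_cover"]) + rng.normal(0, 5, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# Per-sample feature attributions: positive values push the forecast up.
print(pd.DataFrame(shap_values, columns=X.columns).round(2))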
Deep-SHAP is expected to help operators understand the outputs generated by the models and make amendments during troubleshooting. The XAI pipeline proved better than a human operator at comprehending the model's decisions, allowing errors in the data to be detected while improving overall model performance. 4) Summary: Next-generation smart grids are expected to rely heavily on IoT-enabled intelligent CPSs to provide their core services to general users. Real-time system stabilization, monitoring and surveillance, demand response, maintaining distributed resources, and responding to natural disasters are a few operations that should run reliably and remain resilient to external influence. The AI/ML used in these critical aspects should be readily available for scrutiny. The availability of explanations makes handling black-box AI/ML models accountable and comprehensible, making the systems more robust towards malicious agents.
Fig. 13. Stability management and maintenance are two of a smart grid's most critical responsibilities. A growing number of these control systems now make use of AI. Security evaluations of smart grids may be correctly done using more interpretable methodologies that give holistic explanations about the AI systems. These findings may be put to good use in an emergency.
E. XR 1) Introduction: Immersive technologies that are used today and the futuristic technologies that are up and coming are collectively known as XR. Here, the X stands as a variable for the letters of all the other subsets of computer-altered realities in the spectrum of the reality-virtuality continuum [252] . These can include Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR). New gadgets (wearables) and computer-generated graphics have made it possible to superimpose digital information over the real environment or incorporate real-world elements into virtual settings. These hybrids of the digital and physical worlds are increasingly being adopted by various businesses such as medicine [253] , manufacturing [254] , entertainment (games, cinema, etc.), tourism [255] , marketing [256] , and construction [257] , and the list goes on. It is fair to say that the applications are ubiquitous. AR is defined as an overlay of computer-generated material over the actual world that may interact with the environment in real time. With AR, the actual environment and computer-generated imagery are seamlessly integrated. However, in most cases, the occlusion between computer-generated content and the real world is limited in AR; thus, its applications remain comparatively limited to this day. Google Glass was a great example of AR technology in the recent past. AR was also adopted in Microsoft's HoloLens 2 and exhibited its usefulness in working environments. Some recent smart glass implementations include Spectacles smart glasses from Snap, the Lenovo ThinkReality A3, and Vuzix's Next-Gen Smart Glass [258] . The term VR refers to a broader category of immersive media. Real-world 360-degree videos, computer-generated material, or a combination of the two may be used to make these media. The user might use a VR headset or surrounding VR displays to visualize the virtual world. There are many applications for VR in fields such as entertainment (games), many branches of engineering, healthcare, and manufacturing [259] . In MR, by contrast, the computer-generated content is overlaid and anchored in the real world, where it can also interact with real objects.
Unlike in AR (or VR), computer-generated objects are expected to have occlusive effects with real-world objects in MR. The Pokemon GO mobile game is one recent successful example of MR in the entertainment industry. Microsoft has also demonstrated MR in its HoloLens 2 smart glasses. Many industries have increasingly studied AI as a complement to XR technologies in the recent past [260] . For example, MR headsets are used in the healthcare industry to detect visual field defects of patients. There are even proposals to improve the vision of people wearing those headsets using applications that run vision augmentation algorithms [261] . Businesses have also been implementing AI-based AR/VR methods to enhance the recruitment and training of their employees [262] . Some businesses also offer MR platforms for virtual event venues by making digital twins of the presentation spaces. These systems are popularly provided in the form of "Metaverse as a Service (MaaS)" [263] , [264] . With the increasing applications and popularity of XR, it can attract attackers' attention, making users question the security of those applications. 2) Impact of XAI on the security of XR: B5G needs to provide the necessary infrastructure to make XR possible, and AI is going to be an integral part of this whole process. Previous examples show that various forms of XR will be used in a wide range of applications, from smart cities to health care. For XR-based applications, B5G networks need to maintain sophisticated operations such as resource management, network slicing, security, and traffic management, which require AI support in their implementations. On top of that, computer vision (CV)-based models will be used to extract spatial features when rendering virtual objects [265] . AI-based data-efficient image compression methods are complementary applications that would help reduce network volume [266] . However, in many of these applications, the security of the AI and ML models needs more meticulous attention. Collaborative MR could cause more problems when under attack due to the many entities involved. MR systems gather data from various sources and a plethora of users, which could add a fuzzy nature to the states of the system. Constantly acquiring a complete and comprehensible picture of massive-scale collaborative MR to verify the secure and correct behavior of the environment could be extremely difficult even with AI/ML techniques. Attacks like DDoS/DoS on the wearables, collaboration models, and central servers could cause access limitations, data leaks, and collaboration issues among the users. In some instances, these attacks may cause drops in frame rates, resulting in physical symptoms such as nausea and headaches, and discourage users from utilizing the devices. Furthermore, spoofing, deep fakes, and social engineering could cause psychological harm and damage to the reputation of their victims [267] . Although defenses against these attacks are still to be explored, proactive measures such as training and educating the users are extremely important. To create comprehensible user explanations, the fuzzy layer of those systems might need to be understood. The fuzzy layer could consist of computer vision models, natural language models, etc., that integrate with many sensors and devices acting as data I/O.
XAI techniques would be instrumental in generating a simplified version of the data that can be used to create comprehensible explanations for the users. XAI is not only suitable for user-side precautions; it is also handy in fortifying anomaly detection systems to enable malfunction tracing during a breakdown. 3) Related Work: Although the security of XR applications has been studied for a long time, the use of XAI to realize trustworthy AI-backed XR systems is still in its infancy. It is important to note that effective and open communication builds user confidence in autonomous techniques, especially in the transportation sector, due to the inherent safety concerns. In [268] , VR is used to test the importance of proactive rather than reactive communication for autonomous wheelchairs. There are also situations where VR systems are used to provide training environments for robots in various conditions and to generate visuals for analysis [269] , [270] . In these scenarios, XAI methods could help generate additional data, which might lead scientists to understand behaviors and vulnerabilities of the models in those virtual environments that could otherwise remain hidden. Explainable and comprehensible security of AI methods will be even more critical when it comes to the Metaverse involving Generation Z. Unlike legacy games such as "Second Life," the novel Metaverse is envisaged to integrate AI-enhanced social interactions, rendering a deeper and more immersive social meaning. This system is further reinforced with virtual currencies and improved high-speed, always-on connectivity to the Internet through B5G networks [271] . Due to the similarities the Metaverse would share with reality, security and trustworthiness must be ensured for the users before entering. Thus, XAI will be more important than ever to bridge the gap of understanding between the users and the underlying ML methods and to avoid any unpleasant surprises inside the Metaverse. 4) Summary: XR is firmly on a trajectory to be applied in major lifestyle-altering applications (e.g., the Metaverse). Security and privacy-related vulnerabilities could dampen the use of AI/ML given their gravity and widespread effect. XAI could be used to bridge the gap of misapprehension between B5G service providers, XR services, and end-users regarding the reasoning behind AI/ML decisions used in attack detection and system response. The above-discussed use cases represent a broader view of the applications where XAI is functional in the B5G era. They can be further broken down into narrower technical applications, a few of which are briefly discussed below. Holographic telepresence (HT) enables real-time 3D projections of faraway persons and objects. With this application, 3D video conferencing and news broadcasting will move to a new paradigm. Media is captured by HT and transferred via a broadband network in compressed form. AI/ML-based compression techniques here could introduce a new attack vector [2] ; thus, XAI is recommended to identify any anomalies. In smart governance, intelligent and innovative ICT systems are used to promote and support enhanced decision-making, planning, and the involvement of citizens via collaborative decision-making. Smart governance has an added emphasis on ICT to sustain these ideals, assuring public welfare and development [272] .
Corruption and unjust policies, along with techniques for improving education, security, transportation, resource management, and economic infrastructure, remain the most pressing issues in current governance, and smart governance is expected to provide better answers to these problems. Intelligent decision processes here will require concrete backing. Using XAI would be more of a necessity in this situation due to the solid justifications required for civil activities. Although providing explainability to AI and ML solutions brings many benefits to B5G, it can be detrimental to the security of ML models and the systems that embed them. XAI may increase their vulnerability to ML attacks (i.e., attackers learn how the black-box model works), complicate their design (i.e., explainability must be considered in the trade-off between model performance and security), and open new attack vectors (i.e., the explanation itself can be falsified). A. Increased vulnerability to adversarial ML attacks 1) Introduction: Many existing attacks target ML models: adversarial ML attacks [273] . Membership inference and model extraction [274] attacks compromise the confidentiality of the training data and of the ML model, respectively. Model poisoning and model evasion attacks (a.k.a. adversarial examples) compromise the integrity of the ML model and its predictions. A common characteristic of adversarial ML attacks is that their effectiveness increases as the attacker's knowledge about the ML model and its decision process increases. Consequently, obfuscating an ML model's decision process, by making it a black box, is an effective defense to mitigate adversarial ML attacks [275] . Explainability deobfuscates the decision process of black-box ML models, thereby revealing helpful information to an attacker, as depicted in Figure 15 . It has been shown that the information produced by explainable ML techniques can be leveraged to design more effective black-box attacks [276] . The effectiveness of membership inference, model extraction, poisoning, and evasion attacks increases against black-box ML models augmented with explainability. The explanation provided for an ML model prediction enables an attacker to manually modify a sample that they want to get misclassified. The explanation reveals the features of a sample that are the most significant in the prediction provided by the model. Attackers can iteratively modify these features, using the feedback from the explanation, to eventually change the ML model's prediction for the sample (a toy version of this loop is sketched below). 2) Impact on B5G: Defending against adversarial ML attacks is still an open issue, and there is no foolproof defense against any of these attacks. Some vulnerabilities exploited by adversarial ML attacks are even claimed to be necessary features of ML models [277] . Most defenses, like model obfuscation, only increase the effort required for a successful attack; they do not fully protect from attacks. Explainability denies the usage of obfuscation to make ML models more resilient to attacks.
Fig. 15. Explainability reveals new information that "white-boxes" black-box ML models and facilitates adversarial ML attacks against them.
This increased exposure decreases the security of all ML-based systems used in B5G. For instance, ML-based security measures are deployed on the perimeter of 6G sub-networks to monitor anomalous behavior within the sub-network or coming from other sub-networks [24] .
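The evasion threat described above can be made concrete with a toy sketch of an explanation-guided attack loop: the attacker repeatedly queries the model and its explainer, then perturbs whichever feature the explanation ranks as most influential. The model, data, and perturbation rule are illustrative assumptions only, not an attack from the surveyed literature.

import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(clf)

x = X[0].copy()
original = int(clf.predict(x.reshape(1, -1))[0])
target_mean = X[y != original].mean(axis=0)    # feature means of the other class

for step in range(200):                        # adversarial attacks are iterative
    sv = explainer.shap_values(x.reshape(1, -1))[0]
    j = int(np.argmax(np.abs(sv)))             # feature the explanation ranks first
    x[j] += 0.1 * (target_mean[j] - x[j])      # nudge it toward the other class
    if clf.predict(x.reshape(1, -1))[0] != original:
        print(f"prediction flipped after {step + 1} explanation queries")
        break
else:
    print("no flip within the query budget (the heuristic is not guaranteed)")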
DT, Random Forest, DNN, clustering, ensemble methods, Gradient Boosting Machines (GBM), etc., are used to detect common network attacks, like DDoS attacks, from traffic data [24] , [278] - [280] . The evasion and poisoning of these ML-based anomaly detectors are made easier if they are explainable [276] . They can cause malicious traffic to bypass the system defenses and exhaust network resources, constraining the availability of system resources to serve legitimate users. As a result, many critical applications (e.g., telesurgery, smart grid stability control systems) that depend on the service layer of B5G could be affected by the exhaustion of resources. When the outermost security layer of a network fails, it leaves the internal modules exposed to a higher attack risk. ML-based decisions are also used for intelligence services in closed-loop E2E service management. Adversarial examples against these ML models can be generated more easily if the models are explainable. These adversarial examples can lead to inaccurate predictions and choices, such as falsely predicting the future resource requirements of an E2E service or reconfiguring the management policies [176] . Likely results range from performance deterioration and financial loss to the loss of security guarantees. 3) Possible solutions: Revealing information about ML models currently corresponds to an increased vulnerability to adversarial ML attacks. This means that explainability comes at a security price; we can only decrease this impact, not cancel it. Nevertheless, a solution to this issue lies in controlling the information provided by explainability. First, one must define the minimum requirement and granularity of the explanation needed to achieve an intended goal. The selected XAI method should only meet this minimum requirement without revealing more information than necessary. This limits the information an attacker can use in an adversarial ML attack to the required minimum. Second, one must control access to the explanation, i.e., restrict it to only the necessary parties. The explanation can also be sealed, encrypted, and only revealed if there is a need to investigate a decision of the model, e.g., auditing by entitled parties. The default access to explanations must be as restricted as possible rather than wide open. This restriction limits the opportunity for an attacker to access this information. Finally, delaying the availability of the explanation (by a few hours or days) relative to the availability of the ML model's decision can slow down attacks. In many ML use cases, the decision from the ML model must be obtained quickly, while the explanation is not time-sensitive. Adversarial ML attacks are typically iterative, involving hundreds of steps, and each new step relies on information from the previous step(s). By delaying the availability of the explanation, the utility of the ML model is not impacted, while an adversarial attack can be drastically slowed down or even completely prevented (a sketch of such a gated-explanation policy is given below). There is no perfect solution to the vulnerabilities that XAI creates for ML models. Currently, the most sensible mitigations must be applied at the system level through, e.g., access control, encryption of explanations, and delayed responses. This may change in the future as defenses against adversarial ML attacks become effective and a foolproof defense against some of these attacks is developed.
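A minimal sketch of the access-control and delayed-release mitigations above follows. The class name, the auditor-only policy, and the one-hour delay are assumptions for illustration, not a standardized mechanism.

import time
from dataclasses import dataclass, field

RELEASE_DELAY_S = 3600                 # assumed one-hour delay before release

@dataclass
class GatedExplanation:
    payload: dict                      # e.g., feature attributions for a decision
    created_at: float = field(default_factory=time.time)

    def read(self, role: str) -> dict:
        if role != "auditor":          # default access is as restricted as possible
            raise PermissionError("explanations are restricted to entitled parties")
        if time.time() - self.created_at < RELEASE_DELAY_S:
            raise TimeoutError("explanation not yet released")
        return self.payload

# The decision itself is served to the caller immediately; only the
# explanation is gated, so iterative explanation-guided attacks are throttled.
decision = "block_flow"
explanation = GatedExplanation({"top_feature": "syn_ratio", "weight": 0.8})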
The issue raised here is that current adversarial ML attacks are more effective in a white-box than in a black-box setting, and explainability has a "white-boxing" side effect on black-box models. If black-box adversarial ML attacks progress to the point where they are as effective as white-box attacks, the information revealed by XAI will no longer impact the ML model's security. There is already work showing that, e.g., membership inference attacks can be run as effectively against black-box as against white-box models [281] . In such cases, a black-box model explained using XAI (white-boxed) would not be more vulnerable than its non-explained counterpart, and the impact of XAI on its security would be canceled. B. Difficulty of designing secure ML applications 1) Introduction: The design and implementation of ML-based systems have long been guided by the sole requirement of maximizing performance, i.e., high accuracy, high generalizability, and low response time. Adding security requirements to ML-based systems introduced the first trade-off between antagonistic properties: performance vs. security. It has been shown that effective defenses against adversarial examples, like adversarial training, degrade the accuracy [282] and the generalizability [283] of protected ML models. There also exist trade-offs between security properties. For instance, increasing the resilience of ML models against evasion attacks makes them more vulnerable to privacy attacks like membership inference [284] . Explainability is a new requirement adding to the existing trade-offs. Three properties partly detrimental to each other now need to be fulfilled by ML systems, as illustrated in Figure 16 : performance, security, and explainability. When explainability is provided through transparency, all three requirements apply to the ML model and its training algorithm. Providing explanations through transparency reduces the choice of training algorithms and models during the design of the ML system, potentially leading to discarding the solution providing the best accuracy, security, or privacy in order to meet the explainability requirement.
Fig. 16. New trade-off required between performance, security, and explainability of ML systems.
2) Impact on B5G: The new requirement of achieving a performance-security-explainability trade-off makes it challenging to design well-balanced ML systems for B5G networks. B5G networks are massive-scale heterogeneous networks where small form factor devices are used in many applications that collect information from the environment. This information is currently transmitted to centralized cloud-based servers for intelligent processing and decision-making. However, with the advent of IoE in B5G, there is a shift towards edge intelligence. Deploying ML models on-device enables training using federated learning and local decision-making, making communication more efficient. On the other hand, device resource limitations make it challenging to run ML models on-device. Performance thus becomes a primary requirement constrained by device resources, relegating security and explainability to secondary places. For example, body sensors/fitbits collecting vital signs to provide dietary and physical recommendations struggle to squeeze out the computational power necessary to run sophisticated cryptographic techniques on top of ML models, and they fail to provide sufficient security [24] . In such a case, running post-hoc explanation techniques would further burden the already exhausted computational power of such devices.
Nevertheless, these are end-devices dealing with highly sensitive health data, and it is essential to include some form of explanation to make their operations trustworthy for customers. These constraints require developers to use transparent or in-model explanations, which might not be the ideal model selection for the particular use case in terms of accuracy, robustness, or privacy. 3) Possible solutions: Transparency puts the explainability requirement on the ML model itself, causing the trade-off between the three properties. By favoring post-hoc explainability, the choice of ML model can be dictated by performance and security considerations alone: explainability is removed from the equation and provided by an external post-hoc solution. Nevertheless, this solution has two drawbacks. First, the explanation from post-hoc methods sometimes has a lower correlation to the actual decision of the model, so it offers a lower-quality explanation. Second, as discussed later, post-hoc solutions create new attack vectors and targets against the whole system, including the ML component. Thus, post-hoc explainability only moves the introduced security vulnerability from the ML model to the larger system that includes it. A second solution is the careful analysis and prioritization of the ML system's requirements. Evaluating and quantifying the performance-security-explainability trade-off leads to an informed choice about which requirement(s) to meet and which other(s) to neglect. Requirements neglected during the ML model design may be addressed later at the system level. For instance, the security of ML models can be increased through system security, e.g., by detecting adversarial queries to the model at inference time [274] , [285] . 4) Summary: As ML and AI become increasingly used in critical and high-risk applications, the consequences of incorrect decisions from these systems worry people. The trustworthy AI concept aims to ease these worries by enforcing a large number of desired properties that make AI and ML applications trustworthy [286] . Among the first requirements were accuracy, performance, security, and privacy. Many more requirements were added, such as explainability, transparency, accountability, and fairness. This list grows over time, complicating both the design of trustworthy AI applications and the fulfillment of security requirements by design. How these properties interact and how they impact each other, positively or negatively, is not yet well understood. More studies are necessary to understand the several trade-offs involved in designing trustworthy AI systems. Only under this condition can trustworthy AI systems be simultaneously secure, explainable, and more. C. New attack vector and target 1) Introduction: Post-hoc XAI methods are new components added to ML-based systems. This new component can complement the prediction of ML models, weighing heavily on the actions of the systems and humans that depend on the ML model. In some cases, the explanation itself is more important than the prediction. This is the case for AI used in applications with a societal impact, where predictions must be fair and unbiased. It is also the case for security applications like detection and response (D&R), where the explanation is used to counter and recover from detected attacks using appropriate measures. Due to the importance of the explanation, the XAI component can become the main target of an attack, as depicted in Figure 17 .
Directly attacking post-hoc XAI methods can change the explanation while the prediction of the ML model remains the same, as demonstrated in [287] , [288] . The ML model makes the right decision, but the dependent system or human takes a wrong course of action based on the incorrect explanation. There is also the possibility of concealing the unfair outcomes of an ML model with deceptive justifications that use XAI to veil the underlying problems. This is defined as fairwashing [289] : the misapprehension that an ML model adheres to specific standards although its actual behavior significantly deviates from its explanations. Both model explanations and outcome explanations are vulnerable to this issue. It has further been demonstrated that post-hoc explanatory approaches that depend on input perturbations, like LIME and SHAP, are unreliable and do not give definitive information regarding fairness [290] . An interpreter-only attack technique known as scaffolding is built on this observation: an attacker can generate desired explanations for a given unfair ML model (interpreted with LIME/SHAP) by masking the biases in the model. Through this hack, a compromised XAI method enables hiding biased/unfair outcomes and presenting them as harmless/unbiased (a conceptual sketch of the scaffolding idea is given below). 2) Impact on B5G: This threat exists for every ML application in B5G where the explanation weighs equally or more than the prediction in the action it triggers. For instance, in D&R, where an explanation is used to counter and recover from an attack, modifying the explanation for a prediction leads to fixing a non-existent or irrelevant issue. It will not block the detected attack or prevent it from happening again in the future. A reliable explanation also increases trust and improves the user experience for end-users of B5G networks, fostering the adoption and usage of services provided over B5G networks. Certain decisions require user data to ensure the security and safety of the services provided, and it is essential to provide proper explanations of how the data is used in decision-making. Critical applications such as autonomous driving are envisaged to rely on B5G networks [26] . When a system fails or crashes, the explanation for the incorrect prediction that led to it will be paramount for handling the lawsuits and other legalities that follow. Although an accurate prediction would have prevented a crash, an explanation during an exception is of critical importance; a compromised explanation can divert attention from the real issue and protect the responsible party from any consequence. 3) Possible solutions: The main reason for this new attack target is that the explanation of post-hoc methods can sometimes be disconnected from the prediction of the ML model they interpret. This is a known weakness of post-hoc explanation methods. Using explainability through transparency, the explanation comes directly from the ML model itself, and it is usually well linked to its actual decision process. Both the ML model and the explanation process must then be fooled for an attack to succeed. Even though this is possible [287] , it is more complicated. Moreover, by using XAI methods based on transparency, the explanation is partly protected by existing defenses against adversarial ML attacks that already protect the ML model. The state of security in preventing adversarial ML attacks is more advanced than it is in protecting XAI methods against attacks.
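To make the scaffolding attack described above concrete, the following is a heavily simplified conceptual sketch. All names, the off-manifold test, and the threshold are assumptions; the actual attack in [290] trains a classifier to recognize explainer-generated perturbations.

import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, (500, 3))           # column 0 acts as a "sensitive" feature

def biased_model(x):
    return int(x[0] > 0)                       # the unfair rule actually deployed

def innocuous_model(x):
    return int(x[1] + x[2] > 0)                # fair-looking surrogate shown to explainers

def looks_like_perturbation(x, k=10, threshold=2.0):
    """Heuristic off-manifold test: mean distance to the k nearest training points.
    LIME/SHAP perturbations tend to fall far from the data manifold."""
    d = np.sort(np.linalg.norm(X_train - x, axis=1))[:k].mean()
    return d > threshold                       # threshold is an illustrative assumption

def scaffolded_model(x):
    # Real inputs get the biased rule; explainer-like queries get the surrogate,
    # so a perturbation-based explainer never observes the bias.
    return innocuous_model(x) if looks_like_perturbation(x) else biased_model(x)

print(scaffolded_model(X_train[0]))                   # in-distribution: biased rule answers
print(scaffolded_model(np.array([9.0, 9.0, -9.0])))   # off-manifold: surrogate answers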
However, if explainability through transparency is not possible, selecting among post-hoc explanation methods can increase the resilience against attacks. For instance, empirical experiments [290] show that SHAP is more resilient than LIME against attempts to hide biased and unfair outcomes. The addition of new functions and components to large systems always increases system complexity and vulnerabilities, exposing new attack vectors. XAI, primarily through post-hoc explainability, is such a new component, exposing new attack vectors against ML-based systems. Given that the security of XAI methods and their resilience to attacks are not currently well known, XAI methods represent one of the weakest components in ML-based systems, which makes them a primary target for attackers. Attacks against XAI methods must be further researched, and defenses must be developed to make XAI secure. While the security of XAI has not reached sufficient maturity, explanations should only be used as additional information rather than directly in critical decision-making. Numerous B5G research initiatives are underway, bringing together academic and industry partners worldwide. This section summarizes several of those initiatives and their primary objectives. 1) SPATIAL [291] - an EU-funded project addressing identified gaps in data and black-box AI through the design and development of resilient accountable metrics, privacy-preserving methods, verification tools, and system solutions that will serve as critical building blocks for trustworthy AI in ICT systems and cybersecurity. The project addresses the uncertainties inherent in artificial intelligence that directly impact privacy, resilience, and accountability. The SPATIAL project identifies possible XAI attacks and potential XAI technique misjudgments. As a result, it seeks to propose robust accountability metrics and integrate them into existing "black-box" AI algorithms. Another objective of the SPATIAL project is to develop mechanisms for detecting data biases and to conduct descriptive studies on the various data quality trade-offs associated with AI-based systems. 2) XMANAI [292] - an EU-funded project focusing on explainable AI. The XMANAI project's researchers intend to carve out a 'human-centric,' trusting approach that will be tested in real-world manufacturing scenarios. XMANAI intends to demonstrate (through four real-world manufacturing cases) how it will assist the manufacturing value chain in transitioning to the amplifying AI era by combining (hybrid and graph) AI "glass box" models that are explainable to a "human-in-the-loop" and produce value-based explanations, with complex AI asset management-sharing-security technologies to multiply the latent data value in a trusted manner, and targeted manufacturing apps to solve concrete manufacturing problems. XMANAI pilots are being conducted in collaboration with CNHi of Italy (creating a virtual representation (digital twin) of the plant based on 3D-2D models and production, logistics, and maintenance data of the lines), Ford (real-time representation of production and traceability), UNIMETRIK (intelligent measurement software that warns if the point sets defined for the measurement strategy are adequate), and Whirlpool (a platform capable of ensuring reliable sales forecasting for the D2C channel).
3) STAR [293] - a collaborative effort between experts in artificial intelligence and digital manufacturing to enable the deployment of standards-based, secure, safe, reliable, and trusted human-centric AI systems in manufacturing environments. STAR will investigate how artificial intelligence systems can acquire knowledge to make timely and safe decisions in dynamic and unpredictable environments. Additionally, it will research technologies that enable AI systems to deal with sophisticated adversaries and remain resilient to security attacks. Participants in this project consider a variety of AI-powered scenarios and systems, including active learning systems, simulated reality systems that accelerate RL in human-robot collaboration, XAI systems, human-centric digital twins, advanced RL techniques for optimal mobile robot navigation and detection of safety zones in industrial plants, and cyber-defense mechanisms against sophisticated poisoning and evasion. These technologies will be validated in challenging manufacturing scenarios in quality management, human-robot collaboration, and AI-powered agile manufacturing. STAR aims to remove the security and safety barriers that currently prevent sophisticated AI systems from being deployed in production lines. 4) SPARTA [294] - founded to establish a long-term community capable of collaboratively defining, developing, sharing, and evolving solutions that help practitioners prevent cybercrime and enhance cybersecurity. The SPARTA project is divided into four major components. SPARTA T-SHARK: established to develop and validate methodological, organizational, and technological solutions that extend cybersecurity towards the comprehensive organization of security functions, enabling threat prediction and full-spectrum cybersecurity awareness, providing high situational awareness, informing decision- and policy-makers on broad or long-term issues, and providing timely warning of threats. The SPARTA T-SHARK program aims to expand the reach of threat understanding, from the current investigative-level definition up to strategic considerations on current and future threats, and down to real-time event handling and prevention. SPARTA CAPE: this program addresses the assessment of cybersecurity properties of software, focusing on two specific areas: cyber-physical systems and complex systems. For cyber-physical systems, the objective of CAPE is to propose a method to jointly specify security and safety properties. SPARTA HAII-T: this program aims to develop an integrated framework and a toolkit supporting the design, development, and verification of security-critical, large-scale distributed systems. This will allow for the specification and enforcement of crucial security policies, including the confidentiality, integrity, resilience, and privacy of the exchanged data. The challenge will be tackled from multiple perspectives, including hardened legacy components, secure operating system software, and resilience- and privacy-by-design. SPARTA SAFAIR: the SAFAIR program aims to conduct a thorough analysis of the threats and risks of AI, followed by providing mechanisms and tools to counter the deteriorating effects of the recognized dangers in various critical AI applications, making them safe and secure from the possibility of being compromised.
The program's impact on explainability will help AI users understand how the algorithms perform their tasks, which is particularly useful in domains where AI has already exceeded human performance. Finally, the work on fairness provides mechanisms and tools to ensure that the models created with AI methods do not rely on a skewed or prejudiced view of the situations they deal with. 5) 6G Flagship [295] - a research project funded by the Academy of Finland that aims to commercialize 5G networks and develop a new 6G standard for future digital societies. 6G Flagship's primary objective is to develop the fundamental techniques required to enable 6G. The 6G Flagship research program recently published the world's first 6G white paper [296] , paving the way for the definition of the wireless era of 2030. The authors of that paper identified several intriguing security challenges and research questions, including how to improve information security, privacy, and reliability via physical layer technologies and whether this can be accomplished using quantum key distribution. Additionally, the 6G Flagship project will focus on key technology components of 6G mobile networks, including wireless connectivity, distributed intelligent computing, and privacy. Finally, with the support of industry and academia, the 6G Flagship project will conduct large-scale pilots with a test network. 6) AI4EU [297] - through open calls and other actions, the project aims to create a comprehensive European AI-on-demand platform to lower barriers to innovation, boost technology transfer, and catalyze the growth of start-ups and SMEs in all sectors. The AI4EU platform serves as a broker, developer, and one-stop shop for services, expertise, algorithms, software frameworks, development tools, components, modules, data, computing resources, prototyping functions, and access to funding. Different user communities, such as engineers and civic leaders, can also receive training to gain skills and certifications. The AI4EU platform aims to become a global standard built on existing AI, data components, and platforms. 7) SANCUS [298] - SANCUS aims to combine cutting-edge technologies for automated security validation and verification, dynamic risk assessment, AI/ML processing, security emulation and testing, and unique optimization modeling on the most recent containerized 5G system network platform. The project will develop several new engines and mechanisms to create a secure environment for product development and security posture assessment in a safe, multi-sectoral setting. All engines will be prepared and elaborated using virtual machines (VMs), allowing them to be integrated on the same platform simultaneously, to conduct joint testing, to formulate their instruction sets flexibly and on demand, to be maintained quickly, and to be shared with research groups. 8) INSPIRE-5Gplus [299] - the project aims to advance the security and privacy of 5G and beyond networks. Grounded in an integrated network management system and relevant frameworks, INSPIRE-5Gplus is devoted to improving security along various dimensions, i.e., overall vision, use cases, architecture, integration with network management, assets, and models. INSPIRE-5Gplus addresses key security challenges through vertical applications ranging from autonomous and connected cars to critical Industry 4.0.
INSPIRE-5Gplus will devise and implement a fully automated end-to-end smart network and service security management framework that empowers protection, trustworthiness, and liability in managing 5G network infrastructures across multiple domains. The conceptual architecture of INSPIRE-5Gplus is split into security management domains (SMDs) to support the separation of security management concerns. Each SMD is responsible for intelligent security automation of resources and services within its scope. The end-to-end (E2E) service SMD is a special SMD that manages the security of end-to-end services, coordinating between domains using orchestration. Each SMD, including the E2E service SMD, comprises a set of functional modules that operate in an intelligent closed loop to provide software-defined security orchestration and management that enforces and controls the security policies of network resources and services in real time. Standardization is critical for defining the technological requirements of B5G networks and should be utilized to determine the most appropriate technologies for 6G network deployment. Thus, standards shape the global telecommunications marketplace. Numerous Standards Developing Organizations (SDOs) are tasked with standardizing 6G. Table V summarizes standardization activities in the field of artificial intelligence security. This section discusses the lessons learned and, based on these lessons, synthesizes future research directions that industrial and academic researchers can follow. A. B5G Threats and landscape 1) Lessons Learned: Any 5G technology can be inherited and improved in a 6G environment. In the context of this work, we concentrate our analysis on SDN-, NFV-, VNF-, and virtualization-related security threats. SDN technologies utilize a wide variety of protocols and policies that could be exploited to disturb the layers and interfaces of the SDN framework through man-in-the-middle and DoS attacks, hijacking and re-routing data, impersonating users, etc. NFV technology is also exposed to malicious activity: in particular, an attacker could exploit vulnerabilities in its authentication process, protocols, third-party hosted network functions, and APIs. VNFs are pieces of running software that might fall victim to the abuse of typical software weaknesses, such as buffer overflows, dynamic memory deallocation, and open APIs. By employing virtualization/containerization technology, network operators gain significant benefits in terms of capital expenditure (CAPEX) and operating expenditure (OPEX). Nevertheless, there is an evident risk that vulnerabilities can spread across all VMs or containers unless the hypervisor is deployed correctly to manage VMs and OSs securely. Additionally, new 6G technologies and applications directly affect security and privacy and bring new threats. They empower legitimate users with low-latency, reliable, and efficient communication services, and malicious ones with more powerful means to do harm. Researchers worldwide are intensively investigating and discovering more threats related to trending technologies and applications, as well as developing new solutions to make 6G secure and privacy-preserving once it is commercialized in 2030 and beyond. 2) Remaining Research Questions: Firstly, open-source software in specific 6G components will likely bring zero-day threats. The number of vulnerabilities discovered in open source increases rapidly year by year.
As the source code is disclosed, an attacker has time to investigate it, select an attack, and analyze the target's operation. In particular, open-source software that is no longer developed and upgraded is even more vulnerable. Moreover, security and isolation between network slices are fundamental but remain challenging, as isolation is a complex task: physical isolation is not always possible, and the VNFs of network slice instances are implemented on common cloud-based infrastructures. Furthermore, an optimal security approach should protect the services, hardware resources, information, and data of IoT platforms, both in transmission and in storage. However, this is very challenging since IoT devices are designed to be deployed at massive scale, creating a network of nearly identical appliances with similar characteristics. This similarity amplifies the magnitude of any security vulnerability, which may significantly affect many of them at once. Likewise, the expected number of links interconnecting low-performance devices (e.g., drones, home appliances, and smart sensors) is unprecedented, and many of these devices can establish connections and communicate with other devices automatically. This interconnection of IoT devices implies that a poorly secured, connected device can affect the security and resilience of the whole network. Finally, DoS/DDoS attacks will benefit from the larger amounts of network traffic sent per device and the fact that many more devices can be simultaneously connected to the network thanks to IoT technology. Much more powerful botnets could be created to carry out DoS and DDoS attacks effectively. The main challenge arising from these aspects is the effective detection of traditional DDoS attacks (e.g., flooding attacks) and of more advanced stealthy DDoS attacks (e.g., Slow DoS attacks). 3) Possible Future Directions/Solutions: In 6G networks, there will be a constant need to perform malware and attack detection and identification. Research on Intrusion Detection Systems (IDS) is a well-established field that, especially with the recent increase in the value of the data exchanged daily over the Internet, is expected to grow much more with the proliferation of IoT devices. Depending on the IDS type, different strategies can be deployed to detect a potential attack within network traffic: rule-based detection and anomaly detection. Rule-based IDSs are exact but not flexible and scalable enough for dynamic network environments such as those found in 6G, where new types and variations of classical cyberattacks are expected to be numerous. Moreover, the variety and dynamicity of traffic malware pose a significant challenge to traffic monitoring tools in terms of the flexibility and generalization of their algorithms. 6G traffic analysis tools need to be adaptable to ever-evolving attacks, new protocols and policies, and potentially to breakthrough technologies that are constantly being developed. Finally, 6G infrastructure and users will be continuously protected by encryption. Therefore, anomaly detection leveraging AI and XAI techniques has gained popularity as a solution for detecting malicious activity in 5G and B5G networks; a hedged sketch of such an explainable anomaly detector is given below.
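For illustration, the following minimal sketch pairs an unsupervised anomaly detector with a post-hoc explanation of each alert. The flow features, the injected flood-like sample, and all parameter values are assumptions for the example, not drawn from a 5G/B5G dataset.

import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
flows = pd.DataFrame({
    "pkts_per_s": rng.lognormal(3, 0.3, 1000),     # assumed flow features
    "bytes_per_pkt": rng.normal(800, 50, 1000),
    "syn_ratio": rng.beta(1, 20, 1000),
})
flows.iloc[-1] = [5000, 60, 0.95]                  # injected flood-like flow

iforest = IsolationForest(random_state=0).fit(flows)
scores = iforest.decision_function(flows)          # lower score => more anomalous
alert = flows.iloc[[int(np.argmin(scores))]]

# Explain the alert: which features drove the anomaly score down?
explainer = shap.TreeExplainer(iforest)
print(pd.Series(explainer.shap_values(alert)[0], index=flows.columns))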
Currently, more research on encrypted traffic analysis is required to improve the precision of the results. While high analysis accuracy is expected, the system simultaneously cannot be prone to a high number of false positives. Additionally, to the best of our knowledge, there is a lack of sufficiently large, publicly available datasets of 5G and B5G malicious traffic to support this research. In addition, detecting attacks by analyzing network traffic alone may not always be possible, especially with the emergence of stealthy application-layer DDoS attacks, which aim at exhausting a server's resources while generating traffic that mimics legitimate traffic. Thus, using multiple sources of information, such as resource usage and the performance of the service under attack, to feed AI algorithms is vital to distinguish malicious behavior. Independent of the particular use case, most applications and security functions require the ability to react directly, on the fly, to what is happening in the network, especially when it involves the detection of malware. B. Role of XAI for B5G Security 1) Lessons Learned: ML is expected to become an integral feature of many aspects of the B5G telecommunication infrastructure. The security of the applied AI/ML techniques has long been the focus of many previous studies, but their real-world applications are still limited. IoT is a dynamic and rapidly evolving enabler of B5G that requires thorough security against network, software, and encryption attacks. Some supervised learning techniques (SVM) and RL (Q-learning) methods are popularly used to prevent said attacks. CRAN, VLANs, and ORAN technologies are the future of radio infrastructure in B5G. The fast-paced communication enabled through the RAN in B5G should be trustworthy and transparent at the same time. However, most studies in the UE layer of telecommunication investigate AI/ML-based solutions with only little focus on accountability and robustness. Edge AI is becoming increasingly prominent in a variety of industries due to its cost-effectiveness and data privacy. Moreover, AI/ML is preferred for securing the edge architectures as well. Edge devices carry out functions from data gathering to inference; thus, identifying compromises in the pipeline is extremely important at each stage that is susceptible to attacks. The use of reinforcement and machine learning approaches in backhaul and core networks is becoming more popular. Some of the problems that must be addressed include secure connection management, communication, and handover security. Explainable techniques required to secure the core network must be lightweight and should not put a damper on latency and throughput. Virtualization and SDN have opened the telecommunication sector to new dimensions. End-to-end slicing and fully automated network management (ZSM) are envisaged with the advent of SDN and NFV alongside AI/ML. The ubiquitous use of AI/ML in automation and slicing requires a fully comprehensible security architecture covering the whole data pipeline from training to inference. A single point of attack that loops inside the system would be immensely detrimental in closed-loop automation. For this, inherently transparent models (LMUT, DTs) and post-hoc explainable techniques (LIME/SHAP) would be required, depending on the available computational power. 2) Remaining Research Questions: Computational power in user devices is increasing dramatically.
New processing units are designed mainly to run AI/ML systems. However, extra computational power is required for encryption and decryption tasks and for anti-virus/malware software running additional security measures on those devices. This further limits the computational power available to improve service accountability and trustworthiness; thus, any implementation of XAI techniques requires additional processing power on the device. Given the high velocity, veracity, and variety of incoming data in IoT and edge devices, the principal tasks would have to compete for the needed computational power, bringing down overall customer satisfaction. Therefore, a proper balance between performance and interpretability/accountability must be struck without compromising the latency and throughput of the communication channels from UEs to core networks. The currently available metrics to quantify interpretability are insufficient. Although there have been multiple attempts to introduce metrics for interpretability, it remains a heavily subjective matter and an open research question. Metrics are even more critical for closed-loop automation, since all functions are expected to execute with minimum human intervention. Interpreters used on ML models can expose sensitive information alongside information helpful to the stakeholders. Such a challenge would make XAI techniques less appealing for businesses using AI/ML on the B5G infrastructure. At the same time, it is crucial to convey the proper explanations to the right stakeholders while encapsulating sensitive information. 3) Possible Future Directions/Solutions: Developing computationally efficient XAI techniques is a primary requirement at the moment. The metrics must also adhere to computational efficiency criteria. Fast computations are required to handle the massive amounts of streaming data expected to enter B5G networks. Extensive research needs to be carried out to create proper metrics that quantify explanations and detect any anomalies in them; this would be imperative to realize fully automated network management (ZSM) with XAI. Some studies propose metrics to mathematically quantify the quality of ML model interpretations. However, real-world applications desperately require the sociological and psychological aspects of humans to be taken into account; therefore, more collaborative studies are needed. Interpreters must be carefully adjusted to filter out any sensitive information from the generated explanations, to avoid violating privacy and intellectual property laws, before conveying them to the stakeholders. Not only must the information be communicated to the appropriate party at the right time, it must also be easily accessible to the users. Bespoke explanations for different user groups would require automated classifiers to identify those user groups. Creative methods are also needed to make the explanations clear to the general public. Explanations generated at one level of the telecommunication structure should be adequately communicated to the other related levels. Protocols and dedicated communication channels might need to be developed to achieve real-time network maintenance. C. New XAI security issues in B5G 1) Lessons Learned: Explainability, be it through transparency or post-hoc methods, can compromise the security of ML models and ML-based systems. It simplifies adversarial ML attacks against models by deobfuscating their black-box decision process.
It adds a new requirement that is detrimental to security and complicates the design of secure ML systems. Finally, it adds a new function and component vulnerable to attacks in ML systems, which can be used as a vector to compromise the whole system. The new security issues introduced by XAI have several detrimental effects on B5G networks. First, XAI can compromise the security of B5G network services relying on ML. For instance, network attacks can more easily evade anomaly detection systems, which leads to the compromise of B5G system components. Second, XAI can hinder the automated management of B5G networks. Management systems based on ML can be poisoned and evaded more easily, e.g., to exhaust system and network resources through bad management decisions. Third, XAI constrains the design of ML applications meant to run on resource-limited edge devices, reducing their performance and security and making them more vulnerable to attacks. Finally, if XAI methods are not well secured, they can have the reverse effect of what is intended: a compromised explanation can enable biased and unfair decisions in critical scenarios and reduce trust in ML and AI systems. This would hinder the adoption of B5G services that rely heavily on ML and AI. 2) Remaining Research Questions: Addressing a few research questions would help improve the security of XAI methods and their dependent services in B5G networks. First, we need a definite answer to whether ML models have a different level of vulnerability to adversarial ML attacks depending on the knowledge available about them: are ML models more vulnerable to adversarial ML attacks in a white-box than in a black-box setting? If not, the white-boxing of ML models by explainability would not increase their vulnerability to attacks. Second, we need to quantify the exact trade-offs between the several ML properties required by trustworthy AI: what is the impact (positive or negative) of different ML properties on each other? The answer must primarily address the explainability vs. security trade-off but also explore additional properties: performance, fairness, accountability, etc. Finally, more research is needed to identify the vulnerabilities of different XAI methods to attacks: how vulnerable are XAI methods to attacks? By knowing the level of security and the weaknesses of different XAI methods, we can make them more secure by designing defenses against the discovered attacks. 3) Possible Future Directions/Solutions: Improving security often comes through an offensive approach against vulnerable systems. The development of new attacks against XAI methods and black-box ML models would be the basis for answering the first and third questions identified above. Furthermore, providing theoretical guarantees for the robustness of XAI methods and white-box ML models against attacks would set an upper bound on the vulnerability of these components. An empirical study of the trade-offs between ML model properties would be a starting point for increasing knowledge on this issue. Then, developing equations based on theoretical analysis to quantify these trade-offs would be very helpful in supporting the design of trustworthy ML systems. 1) Lessons Learned: In the B5G world, the general public is expected to experience numerous futuristic applications. Smart cities, healthcare, Industry 4.0/5.0, smart grid 2.0, and XR are some areas that B5G would heavily influence.
In the B5G world, the general public is expected to experience numerous futuristic applications. Smart cities, healthcare, Industry 4.0/5.0, smart grid 2.0, and XR are some of the areas that B5G will heavily influence.

1) Lessons Learned: Smart cities comprise numerous services that depend on ultra-low latency, high bandwidth, and ultra-reliable communication across intelligent applications such as intelligent transportation, waste disposal, energy/water distribution, construction, UAV-assisted communication, home automation, and many other public services. AI-enabled solutions in many of these services need to be accountable and trustworthy to protect users from damage to their lives, property, and finances. Remote healthcare services have been gaining popularity in the recent past: IoMT/IoHT-based big data analysis helps realize emergency support services, diagnosis, and care through B5G networks. These services require additional transparency and trustworthy AI/ML models to protect the integrity and confidentiality of the core services. Data leaks and adversarial attacks on AI/ML-based healthcare systems can be tracked down with the help of XAI, and proper amendments can be made to prevent future attacks (a minimal sketch of such attribution-based tracking follows this subsection). In Industry 5.0, XAI would facilitate smooth collaboration between humans and smart production facilities. Attacks on manufacturing plants can be detrimental both to finances and to the safety of workers; AI/ML-based attack surfaces in automated manufacturing processes can be made more resilient through more comprehensible and explainable use of AI/ML models. AI/ML in smart grids enables real-time monitoring against attacks. In such instances, the resulting decisions need to be accountable, and users should be informed with detailed explanations to avoid any false accusations; for that, model outcomes need to be presented to users in a trustworthy and reasonable manner. Immersive technologies such as XR engulf users in virtual reality, and intelligent decision-making in these platforms is susceptible to attacks such as DDoS, social engineering, and deep fakes. Explainable AI methods will be required to identify the impacts on those systems and to educate users on the necessary precautions.
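The following minimal sketch illustrates one way such attribution-based tracking could work, assuming an attribution function (e.g., SHAP-style values) is available for the deployed model: inputs whose attribution profile deviates strongly from a baseline learned on clean data are flagged for review. The z-score test and the threshold are illustrative choices, not an established method.

```python
import numpy as np

def fit_attribution_baseline(clean_attributions):
    """Learn a per-feature baseline (mean and spread) of attributions on clean data.
    `clean_attributions` is assumed to be an (n_samples, n_features) array."""
    mu = clean_attributions.mean(axis=0)
    sigma = clean_attributions.std(axis=0) + 1e-9   # avoid division by zero
    return mu, sigma

def is_suspicious(attribution, mu, sigma, threshold=4.0):
    """Flag an input whose attribution profile is far from the clean baseline,
    a possible sign of an adversarial or tampered sample."""
    z = np.abs(attribution - mu) / sigma            # per-feature z-scores
    return float(z.max()) > threshold
```

In a healthcare deployment, flagged samples could be logged together with their explanations, giving auditors a concrete trail when investigating a suspected data leak or adversarial attack.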
2) Remaining Research Questions: Autonomous vehicles are going to revolutionize travel and transportation in smart cities. In a critical situation such as an accident, the cause could lie anywhere on a range of possibilities, from a device fault to an external attack. AI/ML in autonomous driving therefore requires further accountability for its ultra-low-latency decision-making systems, and building interpreters with sufficient computation power while maintaining very fast communication with central servers will be a design-side challenge. In many other IoT-based (IIoT/IoHT/IoMT) ML applications, the performance vs. interpretability trade-off will be a challenge many designers have to face. Methods to measure how successful XAI-based decisions are remain in their infancy. In AI/ML applications in smart grids, the losses caused by attacks or tapping are easily quantifiable, but the effectiveness of XAI methods is not; some general metrics are currently available, but application-specific metrics need more focus from the research communities. Manufacturing plants use many proprietary designs and concepts that must be kept confidential, and if proper measures are not taken, such proprietary information used in AI/ML systems could be exposed through the interpreters that generate explanations. In smart healthcare, many users would divulge sensitive information to receive remote diagnostics. Information gathering would be done across a massively heterogeneous set of devices, making it complicated to access and understand the explanations, so service providers will have to allocate resources to actively make explanations more accessible to their users. Similarly, in large collaborative XR applications, conveying proper non-technical explanations will play a key role in building trust and resolving ambiguities that could otherwise undermine the right to explanation mandated by regulations such as the EU General Data Protection Regulation (GDPR) [310].

3) Possible Future Directions/Solutions: It is vital to maintain a delicate balance between performance and interpretability. Researchers must focus on computationally simpler yet effective explainability methods for the more performance-demanding AI techniques such as DNNs and DRL. More definitive metrics should be in place to measure the effectiveness of XAI outputs and to establish a balance between performance and explainability. General metrics for XAI outputs will not be sufficient to capture the exact picture; use-case-specific metrics need to be developed in collaboration with experts for each particular use case. For example, input from experts in smart-grid stability control, data scientists, and sociology/psychology experts should be considered when developing metrics for smart grid stability. New and creative communication techniques need to be implemented to make the public aware of correct and narrowed-down explanations, which will also require additional resources allocated by the service providers. At the same time, measures must be taken to identify direct or indirect exposure of proprietary information in the AI/ML models used by service providers (a sketch of such explanation sanitization follows).
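As a concrete illustration of that last point, the sketch below sanitizes an attribution-style explanation before it leaves the operator's domain: features marked as sensitive are dropped, and the remainder is truncated and rounded according to the audience. The feature names and the audience policy are hypothetical examples, not a prescribed scheme.

```python
SENSITIVE = {"proprietary_alloy_temp", "patient_id_hash"}      # hypothetical feature names

def sanitize_explanation(feature_names, attributions, audience="end_user", top_n=5):
    """Drop sensitive features from an attribution-style explanation and
    truncate it to the most relevant entries before sharing it externally."""
    items = [(name, float(score)) for name, score in zip(feature_names, attributions)
             if name not in SENSITIVE]
    items.sort(key=lambda kv: abs(kv[1]), reverse=True)        # most influential first
    if audience == "end_user":                                 # coarser view for non-experts
        return [(name, round(score, 2)) for name, score in items[:top_n]]
    return items                                               # full non-sensitive view for auditors
```

Such a filter would sit between the interpreter and the stakeholder-facing interface, so that bespoke explanations for different user groups can be produced from a single underlying attribution without leaking proprietary or personal details.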
1) Lessons Learned: According to our research, several EU-funded research projects have already started to address the challenges on the path toward 6G, and many major ICT companies are issuing announcements about internal programs focusing on 6G security. Outside the EU, e.g., in the USA, the Next G Alliance has started to work on 6G security and privacy through private-sector-led efforts. Most of the projects listed in Section VII aim to guarantee the next-generation network's trustworthiness and security. It is, moreover, encouraging to see approaches beyond the classical ones, for example XAI-based techniques for securing future networks, playing a significant role in most of the research projects reviewed in this paper. Undoubtedly, global standards and new regulations will play a key role in developing and deploying 6G networks, and effective, timely standardization is key to the fast and seamless adoption of new technologies, including 6G. Several Standards Developing Organizations (SDOs), e.g., ETSI, IETF, IEEE, 3GPP, NIST, and ISO, already work, or are expected to work in the near future, on 6G security and privacy in a much tighter way than they did for 5G, as 6G aims to merge different technologies already standardized by those SDOs. AI/ML mechanisms will have to become main elements of 6G to achieve superior security, e.g., by automating decision-making processes and accomplishing a zero-touch approach.

2) Remaining Research Opportunities: The analysis of recently released standards (2019-2021) in B5G security shows that most SDOs acknowledge the importance of AI/ML-based security solutions for B5G networks. However, only a few standardization documents mention the role of XAI, which we consider very significant, as the current lack of explainability raises doubts about the credibility and feasibility of AI/ML-based implementations built to combat security threats. There are, however, working groups, such as the IEEE XAI WG (Standard for XAI) [311], that aim to standardize the mandatory and optional requirements and constraints that an AI method, algorithm, application, or system must satisfy to be recognized as explainable.

3) Possible Future Directions: The European Partnership on Smart Networks and Services (SNS) established Europe's strategic research and innovation roadmap. The initiative is based on an EU contribution of €900 million over the next seven years. The objective is to enable European players to develop R&I capabilities for 6G systems and lead markets for 5G and 6G infrastructure, which will serve as the foundation for the digital and green transformation. The SNS work program will be the basis for calls for proposals to be launched in early 2022. Concerning standards, we believe that projects under calls such as ICT-52-2020 are expected to provide valuable input to standardization bodies, fostering the development of advanced 6G solutions. From the perspective of 3GPP, there are features and capabilities of existing 5G solutions that require full specification and are expected to be released at the end of 2023. The migration from legacy and existing proprietary radio protocols toward 3GPP protocols will take 5-10 years. AI/ML-assisted security still needs further development to respond to the new security threats introduced by the dynamicity of 6G services and networks.

This survey examines and evaluates the potential of using XAI to improve the accountability and resilience of AI-based security in communication beyond the 5G era. The study begins by laying out the background of current XAI technical concepts and their potential in the B5G era. The paper then provided an exhaustive assessment of the most cutting-edge AI, XAI, and B5G technologies and their security aspects, including a threat model and taxonomy. Technical aspects of the role of XAI in B5G security issues were thoroughly examined across the major enablers of B5G, including IoT, RAN, edge, core, backhaul, E2E slicing, and network automation. This was followed by a detailed discussion of trending AI-based use cases of B5G and the potential of XAI for ensuring the trustworthiness of those networks. Apart from the favorable prospects of XAI, we also bring to light the new security issues that AI explanations introduce to future network infrastructure. Later in the paper, we focus on the active research initiatives to build and standardize B5G-specific technologies, involving both researchers and industry practitioners. Finally, the paper highlights the challenges and limitations in B5G AI security and the future research directions to fill those gaps. In conclusion, this survey acts as a stepping stone for researchers, industry partners, and other stakeholders to gain a holistic understanding of the potential of XAI to improve accountability and resilience in the security applications of the B5G era.

Thulitha Senevirathna received the B.Sc. degree in electrical and information engineering from the University of Ruhuna, Sri Lanka. He is currently pursuing the Ph.D. degree with the School of Computer Science, University College Dublin.
His research interests include machine learning, explainable AI, and AI security in B5G applications.

Zujany Salazar is a CIFRE Ph.D. student at Université Paris-Saclay and Montimage, France. She received her M.Sc. in Computer Science for Communication Networks from Telecom SudParis in 2020. Her research covers the simulation and emulation of network traffic patterns and cyberattacks, risk assessment, and monitoring techniques for 5G networks.

Vinh Hoa La is currently an R&D engineer and a project manager for the EU H2020 SPATIAL project at Montimage, an innovative company located in Paris. He received his engineering degree in Information and Communication Systems from Hanoi University of Science and Technology (Vietnam) in 2012 and his Master's degree from UPMC-Paris 6 in 2013. In 2016, he was awarded a doctorate by Telecom SudParis/University Paris-Saclay. His research interests include security monitoring, 5G/IoT/sensor network security, and root-cause analysis.

Samuel Marchal is a Senior Data Scientist and Team Lead in the Artificial Intelligence Center of Excellence (AICE) at WithSecure Corporation, where he leads the research and development on adversarial machine learning. He is also a Research Fellow in the Secure Systems Research Group at Aalto University. He received the engineer's and M.Sc. degrees from TELECOM Nancy, France, and the Ph.D. degree from the University of Luxembourg. His research focuses on discovering security vulnerabilities and improving the security of machine learning-based systems. He also designs novel solutions that leverage machine learning to improve system and network security.

Bartlomiej Siniarski (Member, IEEE) is currently a postdoctoral researcher and a project manager for the EU H2020 SPATIAL project at University College Dublin. He completed his undergraduate studies in Computer Science at University College Dublin (Ireland) and the University of New South Wales (Australia). He was awarded a doctoral degree in 2018. He has a particular interest and experience in the design of IoT networks, in particular collecting, storing, and analysing data gathered from intelligent sensors. Furthermore, he was actively involved in MSCA-ITN-ETN, ICT-52-2020, and H2020-SU-DS-2020 projects, which are focused on solving problems in the areas of network security, performance, and management in 5G and B5G networks.

Dr. Wang is a member of the IEEE and a reviewer for its major conferences and journals in intelligent transportation systems. His research interests include trajectory data mining and processing, connected autonomous vehicles, and explainable artificial intelligence.