key: cord-0528321-90xqcx82
authors: Khowaja, Sunder Ali; Lee, Ik Hyun; Dev, Kapal; Jarwar, Muhammad Aslam; Qureshi, Nawab Muhammad Faseeh
title: Get your Foes Fooled: Proximal Gradient Split Learning for Defense against Model Inversion Attacks on IoMT data
date: 2022-01-12
journal: nan
DOI: nan
sha: 84286d3d55122c4afde0fd5a3d6f60abfb4079eb
doc_id: 528321
cord_uid: 90xqcx82

Abstract-The past decade has seen a rapid adoption of Artificial Intelligence (AI), specifically deep learning networks, in the Internet of Medical Things (IoMT) ecosystem. However, it has recently been shown that deep learning networks can be exploited by adversarial attacks that make IoMT vulnerable not only to data theft but also to the manipulation of medical diagnoses. Existing studies consider adding noise to the raw IoMT data or to the model parameters, which not only reduces the overall performance concerning medical inferences but is also ineffective against methods such as deep leakage from gradients. In this work, we propose the proximal gradient split learning (PGSL) method for defense against model inversion attacks. The proposed method intentionally attacks the IoMT data before it undergoes the deep neural network training process at the client side. We propose the use of the proximal gradient method to recover gradient maps and a decision-level fusion strategy to improve the recognition performance. Extensive analysis shows that PGSL not only provides an effective defense mechanism against model inversion attacks but also helps to improve the recognition performance on publicly available datasets. We report 17.9% and 36.9% gains in accuracy over reconstructed and adversarially attacked images, respectively.

Index Terms-Model inversion attacks, IoMT data, adversarial attacks, deep learning, split learning

Rapid growth in information, communication, and computing technologies has given rise to deep learning (DL) and the Internet of Things (IoT). When combined, the two computing paradigms cater to a vast array of business requirements, technological benefits, and critical domain applications spanning the industry, energy, transport, and healthcare sectors.
The IoT covers the spectrum of data generation and collection from ubiquitous devices, while the underlying intelligence and automation rest on the shoulders of DL techniques. Over the years, the use of DL in the IoT ecosystem has recorded unprecedented achievements by deriving automated inferences that were too complicated for conventional paradigms [1]. The amalgamation of DL and IoT has lately been gaining a lot of interest in the healthcare field, specifically among associated practitioners and researchers. Medical data comprise various modalities such as pathology test results, COVID-19 results, biomedical images, and electronic health records. The corresponding medical data acquired from IoT devices is often referred to as Internet of Medical Things (IoMT) data. Popular systems in use in the medical field include, but are not limited to: 1) DL-based breast cancer risk prediction from mammograms [2]; 2) detection of macular edema and diabetic retinopathy using DL and retinal fundus images [3]; and 3) pattern detection from electronic health records using DL to determine risk factors and health trends [4].

Although the analytical results show drastic improvements, the issues concerning the privacy of IoMT data remain largely unresolved. Medical institutions and IoMT data intrinsically hold a great deal of an individual's private information, such as age, gender, home address, drug usage patterns, medical history, medical test results, and so forth. This huge amount of sensitive information has attracted many black hats seeking monetary, commercial, and political gains through the exploitation of IoMT data. Such information can be either leaked or intercepted when passed to the DL model for training or inference, respectively [5].

The last couple of years have witnessed a drastic increase in attacks concerning IoMT data and DL networks. The two most common attacks that exploit personal information from IoMT data are attribute inference and model inversion attacks. The former uses partial data and a trained DL model to infer the missing piece of information, while the latter attacks intermediate layers of the trained DL model and uses the feature maps to recover the data itself [5], [6]. The emergence of such attacks has hindered hospitals' and patients' willingness to share IoMT data and to use DL for automated healthcare services. The lack of trust and data availability has slowed research progress accordingly. Therefore, it is essential to mitigate the attacks on IoMT data and develop the necessary defenses against model inversion attacks.

An example of adversarial attacks performed on DL-based IoMT data at different layers is shown in Figure 1. The data can face adversarial attacks at the edge device layer, the aggregation layer, the cloud storage layer, or the cloud analytics layer. However, each attack imparts different characteristics on the raw data or the derived inference. Existing defenses consider adding label perturbations, model perturbations, or noise at the data input level. Label perturbations affect the class probabilities, which not only degrades the performance of decision analysis in general but also provides no protection to the raw data itself. Model perturbation adds noise to the model parameters, which does little against inversion attacks; thus, the raw data can still be recovered by employing simple pre-processing attacks.
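As a toy illustration of the model-perturbation defenses mentioned above (and of how simple they are to apply), the following is a minimal sketch that injects Gaussian noise into a model's parameters; the classifier, noise scale, and usage are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn

def perturb_model_parameters(model: nn.Module, sigma: float = 0.01) -> None:
    """Add zero-mean Gaussian noise to every trainable parameter in-place.

    A toy illustration of the 'model perturbation' defenses discussed above;
    sigma is a hypothetical noise scale, not a value from the paper.
    """
    with torch.no_grad():
        for param in model.parameters():
            param.add_(torch.randn_like(param) * sigma)

# Usage: perturb a small classifier before exposing its parameters.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 3))
perturb_model_parameters(model, sigma=0.01)
```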
Split networks [7] were proposed to preserve the privacy of raw data, but they are still susceptible to model inversion and attribute inference attacks. In this work, we propose proximal gradient split learning (PGSL) to defend against model inversion attacks. The network initiates an intentional one- or few-pixel attack on the input data, followed by a split deep neural network. We employ proximal gradients to reconstruct the data into its original form on the server side of the split network. To the best of our knowledge, proximal gradients have not been explored for preserving data privacy within the training process. The contributions of this work are summarized below:
• Initiation of an adversarial attack on IoMT data for improving resilience.
• Proximal gradient split learning for training the network with adversarial samples.
• A late fusion strategy for improving the predictive performance on adversarial samples.
• Experimental analysis validating the effectiveness of the PGSL network.

The rest of the paper is structured as follows: Section 2 provides a brief review of existing works. Section 3 presents the threat model. Section 4 provides the details of the proposed PGSL. Section 5 presents the experimental setup and analysis. Section 6 presents insights and discusses the implications and limitations of the proposed work. Section 7 concludes the study along with potential future works.

Over the years, the adoption of DL techniques for IoMT-based critical and real-world services has increased manifold. However, in recent years, research studies have exposed several vulnerabilities associated with DL in the form of adversarial perturbations. The adversarial attack was first pioneered in [8], which used the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method, searching a minimally distorted space to generate adversarial samples. Similarly, Goodfellow et al. [9] generated adversarial examples by using the fast gradient sign method to perform a one-step update of each pixel in the direction of the gradients. Several other studies, including DeepFool [10] and universal adversarial perturbations (UAP) [11], have exploited the susceptibility of DL techniques to adversarial perturbations.

The domain of IoMT is mainly threatened by privacy attacks that target the inference and training data. The most common types of attack on IoMT data include the adversarial noise attack, model encoding attack, attribute inference attack, and model inversion attack. An adversarial noise attack can be performed on a single pixel or on multiple pixels. Su et al. [12] proposed an adversarial noise attack that corrupts a single pixel within a specific window size to degrade DL performance. That study showed that 67.97% of the CIFAR-10 and 16.04% of the ImageNet test images can be attacked by manipulating a single pixel value, causing the DL model to infer the wrong label. Existing studies have tried to counter this attack through a patch selection denoiser [13], image reconstruction [14], and adversarial detection networks [15], but these are either too computationally complex or add blur artifacts to the original image. These studies make it apparent that, once corrupted by a one- or few-pixel attack, it is difficult to recover not only the image but also the information needed for inference.
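To make the one/few-pixel corruption discussed above concrete, the sketch below simply overwrites a handful of pixels in an input image; it is a hypothetical illustration of how small the perturbation is, not the search procedure of [12], which optimizes the pixel location and value (e.g., via differential evolution) so that the classifier's prediction flips.

```python
import numpy as np

def few_pixel_attack(image: np.ndarray, coords, value: float = 1.0) -> np.ndarray:
    """Return a copy of `image` with the listed (row, col) pixels overwritten.

    `coords` and `value` are hypothetical choices; a real one-pixel attack
    searches for the location/value that changes the model's prediction.
    """
    attacked = image.copy()
    for (r, c) in coords:
        attacked[r, c] = value
    return attacked

# Toy usage on a 28x28 grayscale image in [0, 1].
clean = np.random.rand(28, 28).astype(np.float32)
adv_one = few_pixel_attack(clean, coords=[(10, 14)])                      # one-pixel attack
adv_few = few_pixel_attack(clean, coords=[(5, 5), (20, 7), (13, 22)])     # few-pixel attack
```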
An attribute inference attack refers to attacking sensitive or prominent attributes that could either help in reconstructing the raw data or degrade the predictive performance of the DL model. The attribute inference attack and the adversarial noise attack can be similar in the case of medical images, as both strive to corrupt the raw data itself. The study in [16] demonstrates the use of attribute inference attacks and the resulting risk of data theft and privacy leakage concerning CNN models.

Model inversion attacks are mostly focused on reconstructing the input data from compromised model parameters or intermediary outputs of DL methods. The study in [17] proposed a model inversion attack for recovering input images from intermediate outputs using the model's softmax confidence scores. The study in [18] proposed the use of generative adversarial networks (GANs) and a collaborative training system to recover the input image from intermediary layers of the DL architecture. It has also been suggested in [5], [19] that the reconstruction of intermediary outputs from DL architectures works better when they are extracted from the initial layers, as those tend to have structural similarity with the input data. In this regard, NoPeekNN [20] limited the distance correlation between the intermediate tensors and the input data during the training process of SplitNN. The method was specifically designed for autoencoders to limit the reconstruction of the input data but has not been applied or tested against model inversion attacks. The works [5], [7], [21] proposed adding noise to the intermediate tensors, which helps to cope with model inversion attacks but fails to preserve the model's accuracy. Titcombe et al. [21] used noise to corrupt the intermediate data and used NoPeekNN for defense against model inversion attacks. However, that work ignores attacks that could be initiated at the input on the client side of SplitNN.

In this work, we intentionally initiate the one/few-pixel attack in order to keep the input data safe on the client side; the intermediate output of the attacked image is then sent to the server side. We use the proximal gradient method to recover the image on the server side and a late fusion technique not only to deal with model inversion attacks but also to improve the model's accuracy.

As depicted in Figure 1, attacks on medical data can have severe implications not only for the inference system but for the user/patient as well. The threat model in this work considers an arbitrary number of clients that are responsible for training a part of the network and a computation server that carries on the training process on the server side. We presume that one party intends to fetch the data from other clients using a model inversion attack. The process of the attack is defined as follows: 1) the attacker gets hold of the data sent from the database to the DL network and of the intermediate feature maps from the model segment at the client side; and 2) the attacker trains a model to reconstruct the raw data from the intermediary feature maps obtained from the client side. The aforementioned process is mainly categorized as a black-box attack [21]. This study also assumes that there is only a single computation server and that a third party orchestrates the model training process. This study only considers the intermediate data fetched from the client side or from the server's input side for model inversion attacks.
This study does not take into account data collection during training or the susceptibility of split neural networks to Sybil attacks or membership inference attacks. Furthermore, this study does not cover the spectrum of white-box model inversion attacks.

The workflow of the PGSL method is illustrated in Figure 2. As mentioned earlier, we intentionally initiate the pixel attack on the data, which then undergoes a sub-sampling layer that divides the image into patches. These patches, in the form of a tensor, are sent to the convolutional neural network (CNN) for partially training the network at the client side. The network is segmented at the split/cut layer, where the outputs are extracted. These outputs are then sent to the server side and processed through the proximal gradient method to remove the pixel attacks. We branch out three streams from this point: the first retains the up-sampled pixel-attacked data, the second performs a convolution-sum fusion of the pixel-attacked and proximal-gradient data, and the third uses only the proximal-gradient data. The streams are trained at the server side using forward propagation. The gradients from the last layer of the second stream (combining both the pixel-attacked and proximal-gradient data) at the server side are backpropagated to the last split layer. Only these gradients are sent back to the client side to fine-tune the training process. Once the network is trained, we employ an adaptive-weighted-averaging decision-level fusion method to derive the desired label. The details of each PGSL building block are provided in the subsequent sub-sections.

In this study, we mainly consider image data for designing the privacy-preserving machine learning (PPML) method. We use the Jacobian-based saliency map attack (JSMA) method [22] to initiate the attack. Let us consider that an image-label pair is represented as $(x, y)$. The saliency map observes the influence of each pixel of the image $x$ on the prediction of the class $y$. The assumption is that a pixel $x_i$ correlates positively with the corresponding class, $\mathrm{Corr}_p = \frac{\partial f(x)_y}{\partial x_i} > 0$, and negatively with the contradicting classes, $\mathrm{Corr}_n = \sum_{y' \neq y} \frac{\partial f(x)_{y'}}{\partial x_i} < 0$, where $f(x)$ refers to the softmax probabilities and $y'$ corresponds to the contradicting classes. Based on the aforementioned assumptions, the map can be formulated as shown in equation 1:

$$\mathrm{Map}(x_i) = \begin{cases} \text{True}, & \text{if } \mathrm{Corr}_p > 0 \text{ and } \mathrm{Corr}_n < 0, \\ \text{False}, & \text{otherwise.} \end{cases} \quad (1)$$

The "True" in equation 1 refers to the satisfaction of the $\mathrm{Corr}_p$ and $\mathrm{Corr}_n$ conditions. The attacker can target the saliency map such that the pixels are modified to increase the correlation with a contradictory label. One simple way is to invert the correlation conditions, i.e., $\mathrm{Corr}_p = \frac{\partial f(x)_y}{\partial x_i} < 0$ and $\mathrm{Corr}_n = \sum_{y' \neq y} \frac{\partial f(x)_{y'}}{\partial x_i} > 0$. Generating the adversarial sample in this way is best suited to this study, as we do not intend to initiate a targeted attack but rather a non-targeted one. Let us denote the adversarial saliency map as $\mathrm{Map}'$; the resultant attacked image can then be given by $\hat{x} = x + \mathrm{Map}'$.

After the attack is initiated, we employ a reversible down-sampling and up-sampling layer [23]. The down-sampling is applied at the client side while the up-sampling is performed at the server side. The reason for using this layer is threefold: first, the attacker needs prior information to design the sub-pixel convolution required to up-sample the image; second, it increases the receptive field while retaining the depth; and third, it reduces artifacts that could affect the visual quality [23].
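One way such a reversible down-sampling layer could be realized, sketched here as an assumption since the paper defers the operator details to [23], is the pixel_unshuffle/pixel_shuffle pair in PyTorch, which splits the attacked image into four half-resolution sub-images on the client side and losslessly restores it on the server side:

```python
import torch
import torch.nn.functional as F

# Attacked image batch: (N, ch, i, j); here a toy grayscale 28x28 batch.
x_hat = torch.rand(8, 1, 28, 28)

# Client side: reversible down-sampling into four half-resolution sub-images,
# i.e. (N, ch, i, j) -> (N, 4*ch, i/2, j/2).
x_down = F.pixel_unshuffle(x_hat, downscale_factor=2)
print(x_down.shape)                      # torch.Size([8, 4, 14, 14])

# Server side: the matching up-sampling exactly inverts the rearrangement.
x_up = F.pixel_shuffle(x_down, upscale_factor=2)
print(torch.equal(x_up, x_hat))          # True: no information is lost
```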
Assuming that the size of the adversarial image $\hat{x}$ is $i \times j$, it is down-sampled into four images and concatenated with the corresponding saliency map $\mathrm{Map}'$ to form an input tensor of size $\frac{i}{2} \times \frac{j}{2} \times (4ch + 1)$, where $ch$ corresponds to the number of channels, i.e., 1 for grayscale and 3 for color images. Let us denote this tensor as $\tilde{x}$. Let us further represent the feature maps derived from $\tilde{x}$ as $\bar{x}$, i.e., $\tilde{x} \xrightarrow{h} \bar{x}$, where $h$ represents the convolution function used to extract the activations. These activations are passed as an input to the proximal gradient method to recover the attacked IoMT data.

Recalling the initiated attack, the IoMT data can be recovered by $x = \hat{x} - \mathrm{Map}'$; however, the information regarding $\mathrm{Map}'$ is not available in the activations. Therefore, we approximate the recovery using the inverted saliency map concatenated in the tensor activations, as formulated in equation 2. Equation 2 is considered to be a convex optimization problem [24]. The notation $\|\cdot\|_*$ refers to the nuclear norm of a matrix and $\lambda$ represents the weighting parameter. This formulation is also referred to as robust principal component analysis [25], [26], which is commonly used for image recovery. Equation 2 is also a special case of the general optimization problem shown in equation 3, where the notations $g$, $\mathcal{H}$, $A$, and $b$ refer to a convex function, a real Hilbert space, a linear map, and an observation, respectively. An efficient way to solve equation 3 is to relax the equality constraint and represent the problem in the form shown in equation 4. In the context of this study, the quadratic penalty term is responsible for penalizing violations of the equality constraint, $g$ is the convex function subject to $\bar{x} \rightarrow \mathrm{Map}'$, and $\mu$ corresponds to the relaxation parameter, subject to $\mu > 0$. The assumption is that the solution of the relaxed problem approaches that of the constrained one as $\mu$ approaches 0. The function in equation 5 is assumed to be smooth and convex; thus, it can be solved using a Lipschitz continuous gradient function [25], [26], shown in equation 6. The Fréchet derivative, represented by $\nabla f$, is an element of the Hilbert space. The use of the Lipschitz condition has been proven to make the solution more efficient in terms of computational complexity. The optimization problem shown in equation 4 belongs to the family of proximal gradient methods, which we use to recover the attacked image in this study.

As illustrated in Figure 2, we fuse the output activation maps from the split layer with those from the proximal gradient method. There are various ways to fuse activation maps, but the most common are the sum, convolution, and convolution-sum fusion strategies. Existing studies have shown that convolution-sum fusion yields better results than the former two. In this regard, we adopt the convolution-sum fusion strategy proposed in [27] to fuse the activation maps. The fusion comprises four ordered steps: concatenation, convolution, dimension reduction, and summation. The first step concatenates the activation maps at the same spatial locations across the channels. A bank of filters and biases is used to perform the convolution in the second step. The third step performs dimensionality reduction within the convolution process by generating a weighted combination of the activation maps, and the last step performs a linear summation of the corresponding maps to be fused.
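To make the proximal-gradient recovery step described earlier concrete, the sketch below solves a simplified, unconstrained surrogate, $\min_X \frac{1}{2}\|X - Y\|_F^2 + \lambda\|X\|_*$, by proximal gradient iterations whose proximal operator is singular-value thresholding; the surrogate objective, step size, and $\lambda$ are illustrative assumptions rather than the exact PGSL formulation.

```python
import numpy as np

def svt(M: np.ndarray, tau: float) -> np.ndarray:
    """Singular-value thresholding: the proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def proximal_gradient_recover(Y: np.ndarray, lam: float = 0.5,
                              step: float = 0.5, iters: int = 50) -> np.ndarray:
    """Iterate X <- prox_{step*lam*||.||_*}( X - step * (X - Y) ).

    For this simple data-fit term a single full step already solves the problem;
    a smaller step is used only to show the general iterative scheme.
    """
    X = Y.copy()
    for _ in range(iters):
        grad = X - Y                         # gradient of the smooth data-fit term
        X = svt(X - step * grad, step * lam)
    return X

# Toy usage: corrupt a rank-2 "image" with a few-pixel attack, then recover it.
rng = np.random.default_rng(0)
clean = rng.standard_normal((32, 2)) @ rng.standard_normal((2, 32))
attacked = clean.copy()
attacked[5, 7] += 5.0                        # sparse, attack-like perturbations
attacked[20, 11] -= 5.0
recovered = proximal_gradient_recover(attacked)
print(np.mean((recovered - clean) ** 2))     # MSE of the recovered map
```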
From this point forward, three streams are trained using the attacked image, the fused image, and the recovered image. Existing studies have shown that defense measures against adversarial and model inversion attacks heavily affect the recognition performance of the system. In this regard, PGSL employs a decision-level fusion strategy that combines the classification results from the three streams. Generally, three kinds of fusion strategies, i.e., weighted averaging, adaptive weighted averaging, and meta-learning, are employed to improve the recognition performance [27]. On one hand, meta-learning is considered the most effective but is computationally complex; at the opposite end of the spectrum, weighted averaging is simple and has the fewest computational constraints. Adaptive weighted averaging provides an efficient trade-off between effectiveness and computational complexity [27]; thus, in this work, we use adaptive weighted averaging for decision-level fusion. Let us denote the classification scores from the attacked-image stream, the fused-activation-maps stream, and the recovered-image stream as $S_a$, $S_f$, and $S_r$, respectively, and the corresponding weights of the three streams as $\gamma$, $\rho$, and $\beta$. The adaptive weighted average for combining the scores from the three streams is defined in equation 7, where $\gamma$ and $\rho$ represent the weights for the scores from the recovered and fused activation maps, respectively. We first initialize the fixed weights as described in the experiments section and compute the values of $\gamma$ and $\rho$ as shown in equations 8 and 9, where $S^{\max}$ represents the maximum average score of a particular class label for the corresponding stream, e.g., $S_a^{\max} = \max_{L}[S_a(L)]$, and the notation $L$ represents the class label.

As the scope of this work is to demonstrate the effectiveness of the defense against model inversion attacks and of the data recovery for improving the recognition performance, we adopt a simple convolutional neural network (CNN) with 7 convolutional (conv) and 2 fully connected (FC) layers that can be used for the employed datasets. Each conv block comprises a conv, ReLU, and batch normalization (BN) layer with a 3×3 kernel size and 64 channels. A dropout layer is used after every three conv layers. We place the split layer after the 2nd conv layer. The details regarding the hyperparameters and the distribution of the datasets are given in the experiments section.

This section provides the experimental setup, results, and analysis to show the effectiveness of the approach on two fronts: the first is the defense mechanism against the reconstruction of images from activation maps and their recovery, and the second is the recognition performance. We present an extensive experimental analysis to show the effectiveness of the proposed approach. The corresponding results for each component are shown in the subsequent subsections.

As the study is centered around IoMT data, we employ two datasets to prove the efficacy of PGSL. The first is the publicly available mammogram dataset MIAS [28] and the second is the MNIST dataset [29]. The reasons for choosing the MNIST dataset are the fairness of comparison with existing approaches and the clarity of the visual results. There are a total of 330 images in the MIAS dataset. We crop the images to 1024×1024 and divide them into training and testing sets. The training set comprises 42 malignant, 57 benign, and 181 normal images, while the testing set contains 12, 12, and 26 images of those classes, respectively.
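Returning to the backbone described above (7 conv blocks of 3×3 kernels with 64 channels, BN and ReLU, dropout after every three conv layers, a split point after the 2nd block, and 2 FC layers), a minimal PyTorch sketch is given below; the absence of pooling, the FC width, and the spatial sizes are assumptions where the text does not specify them, while the 0.25 dropout ratio is taken from the experiments section.

```python
import torch
import torch.nn as nn

def conv_block(c_in: int, c_out: int = 64) -> nn.Sequential:
    # conv + ReLU + BN, 3x3 kernel, 64 channels, as described in the text
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.ReLU(inplace=True),
                         nn.BatchNorm2d(c_out))

class ClientNet(nn.Module):
    """Client-side segment: conv blocks 1-2, up to the split/cut layer."""
    def __init__(self, in_ch: int = 5):            # 4*ch + 1 channels for grayscale input
        super().__init__()
        self.blocks = nn.Sequential(conv_block(in_ch), conv_block(64))
    def forward(self, x):
        return self.blocks(x)                       # activations sent to the server

class ServerNet(nn.Module):
    """Server-side segment: conv blocks 3-7, dropout after blocks 3 and 6, 2 FC layers."""
    def __init__(self, num_classes: int = 3, feat_hw: int = 14):
        super().__init__()
        self.blocks = nn.Sequential(
            conv_block(64), nn.Dropout(0.25),       # dropout after conv block 3
            conv_block(64), conv_block(64),
            conv_block(64), nn.Dropout(0.25),       # dropout after conv block 6
            conv_block(64),
        )
        self.fc = nn.Sequential(nn.Flatten(),
                                nn.Linear(64 * feat_hw * feat_hw, 128),  # FC width assumed
                                nn.ReLU(inplace=True),
                                nn.Linear(128, num_classes))
    def forward(self, x):
        return self.fc(self.blocks(x))

# Toy forward pass with MNIST-sized inputs after the x2 down-sampling (14x14 grid).
client, server = ClientNet(), ServerNet(num_classes=10, feat_hw=14)
logits = server(client(torch.rand(4, 5, 14, 14)))
print(logits.shape)   # torch.Size([4, 10])
```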
For MNIST, the training and testing sets comprise 60,000 and 10,000 images, respectively. The networks for both datasets employ the same set of parameters. We use a learning rate of 0.001 with a decay rate of 0.0003, a dropout ratio of 0.25, and the Adam optimizer. All experiments are performed in Python on an Intel Core i9 PC clocked at 3.5 GHz with 64 GB of RAM and an NVIDIA GeForce RTX 3090 GPU.

The proximal gradient method in this study relies on a hyperparameter, $\mu$, for the optimal recovery of the attacked image. The mean squared error (MSE) is computed between the recovered image and the original image (before attack initiation) to select the optimal $\mu$ value. Since $\mu = 0$ would yield the same result as equation 2, we sweep the value from 0.05 to 1.0 with a step size of 0.1. We conducted these experiments on the attacked image before giving it as an input to the down-sampling layer or the CNN. The reason for not conducting them on the activation maps is that the input is an attacked image; therefore, comparing the corresponding activation maps would not be meaningful. We measure the MSE against the $\mu$ values on both datasets. The results of this experiment are shown in Figure 3. For the sake of generality, we choose a single value of $\mu$ for both datasets. The analysis indicates that $\mu = 0.55$ yields the lowest MSE for both datasets; therefore, we use this value for the subsequent experiments.

To show the effectiveness of the proposed approach in terms of the reconstruction of attacked data, we use state-of-the-art methods that recover images from activation maps, i.e., deep leakage from gradients (DLG) [30] and the DCGAN-based method [18], to reconstruct the images from their gradients. The DLG method already provides a pre-trained network for the MNIST dataset. Similarly, the DCGAN was trained from random initialization in order to recover the images from gradients. We first consider the gradients from the split layer and then the gradients from the proximal gradient method to recover the images, in order to prove the efficacy of the proposed approach. We evaluate the methods using MSE so that the comparison with existing approaches is fair. The visual results for DLG and DCGAN on MNIST, without and with the proximal gradient, are shown in Figure 4. We also present the quantitative results in terms of MSE on both datasets in Table 1. It can be deduced from the results that the JSMA attack method used in this study makes images more difficult to recover from gradient/activation maps; thus, we infer that the PGSL framework provides a better defense against model inversion attacks. For the recovery experiment, we corrupted the images with Laplacian noise [21] and used DLG and DCGAN to recover them. The results show that the proposed PGSL method is able to improve the MSE of the recovered images.

We also present an experimental analysis to show the effectiveness of PGSL in terms of recognition performance. For this experiment, we test the recognition accuracy on images attacked using the adopted JSMA method [22], the Carlini and Wagner (CW) method [31], the basic iterative method (BIM) [32], and the fast gradient sign method (FGSM) [9], respectively. The images attacked using the respective methods are shown in Figure 5. These attacks have proven to be effective in making the end predictions completely or partially wrong, as illustrated in the visual results.
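Of the attack baselines listed above, FGSM [9] is the simplest to state; the sketch below is a generic implementation of its one-step sign-of-gradient update, with a hypothetical model, loss, and ε rather than the exact configuration used in these experiments.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.1) -> torch.Tensor:
    """One-step FGSM: x_adv = clip(x + epsilon * sign(grad_x loss(model(x), y)))."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage with a hypothetical small classifier on 28x28 grayscale inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
images, labels = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))
adv_images = fgsm_attack(model, images, labels, epsilon=0.1)
```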
In Table 2, we report the results obtained when training directly with the attacked images, with the images reconstructed using DLG and DCGAN from the maps acquired at the split layer, and with the proposed PGSL method. The results indicate that JSMA is a highly effective attack method when it comes to MIAS, while the CW attack yields the lowest accuracy on MNIST. The BIM and FGSM methods are weaker attacks relative to JSMA and CW; however, they do affect the overall accuracy of the recognition system. DLG performs relatively better than DCGAN. The stream computed on images recovered with the proximal gradient method yields the best accuracy, better than DLG, while the other two streams yield lower results. Considering that the attack is incorporated, directly or indirectly, in the other two streams, the degradation in performance is expected. In this regard, we applied the decision-level fusion strategy based on adaptive weighted averaging. We initialized the weights of the three streams to 0.5, 0.3, and 0.2, respectively, based on the results of the individual streams, and applied the fusion to derive the final label. The purpose of fusing the decisions from multiple streams is not only to improve the recognition performance but also to make the recognition network attack resilient, which is supported by the highest recognition accuracy being achieved on both datasets. It should be noted that the accuracies may vary from those reported in existing works, as we trained and tested the images using the proposed CNN.

Table 2. Recognition accuracy (MIAS / MNIST):
JSMA [22]   46.3%   65.4%
CW [31]     51.2%   58.6%
BIM [32]    56.8%   81.2%
FGSM [9]    61.9%   83.8%
DLG [30]    65.7%   88.3%
DCGAN [18]  62

Referring to Figure 1, we illustrated different types of attacks that can be applied to IoMT data and that can result in severe consequences such as data theft, wrong diagnoses, financial losses, and more. The first attack targets the raw IoMT data while it is being sent to a hospital in a use-case scenario. The results in Table 2 clearly indicate that any of the attacks is capable of reducing the accuracy to a level only slightly better than a random decision. Model inversion attacks are also severe, as they are able to reconstruct the raw data from gradient/activation maps, which violates the privacy of a patient/user. The PGSL framework shows that it can provide some defense concerning the information associated with the raw data. DLG is considered to be the state-of-the-art method for the reconstruction of images from gradient maps, but the results show that the proposed work helps to reduce the recognition performance achievable from the reconstructed images and thus, by extension, reduces the degree of recovery from gradient maps.

The strength of the proposed work lies in its applicability to several emerging domains such as Spatial Computing, Virtual Medicine, Digital Twins, and the Metaverse. All of the aforementioned domains are concerned with simulating humans in the digital world. The proposed work could be greatly helpful for preserving users' data if any of the aforementioned technologies is realized for the IoMT ecosystem. Although the PGSL method serves its purpose, it assumes that Map′ is available from the data aggregator stage; however, with the current progress in GANs, the defense mechanism has the capability to evolve in this direction. Furthermore, considering that IoMT data is highly sensitive and a slight perturbation can cause a wrong diagnosis, the achieved accuracy still has room for improvement when it comes to IoMT data.
Nevertheless, PGSL reports approximately 36.9% and 17.5% gains in comparison to the JSMA and DLG methods, respectively. Moreover, this study considers mammogram images as IoMT data, but there are other homogeneous and heterogeneous medical modalities that can be explored for such adversarial effects, such as X-ray images, fundus images, CT scans, medical reports, electronic health records, and so forth.

In this study, we have proposed the PGSL framework for defense against data theft and model inversion attacks within an IoMT ecosystem. The underlying idea of PGSL shows that it not only helps to mislead the attacker into obtaining wrong or partial information from the IoMT data but also helps to defend the information against model inversion attacks. We also show through our analysis that the PGSL method can be used with other techniques to improve both the recovery and the recognition performance. The method has been tested on the MIAS and MNIST datasets, which demonstrates the effectiveness of the proposed approach. The implications of PGSL can easily be realized in any IoMT ecosystem, ranging from e-health to spatial computing domains. There are several directions in which the current work can be extended. One possible direction is its use in a Private AI framework that can help in securing both the data and the model. Another future work is to observe the effect of data privacy preservation in the adoption of virtual worlds such as the Metaverse, spatial computing, and Digital Twins. Lastly, the proposed work can be extended to observe its effect on multiple client or server nodes with split learning, in connection with domain adaptation strategies.

References
[1] Deep learning in the industrial internet of things: Potentials, challenges, and emerging applications.
[2] A deep learning mammography-based model for improved breast cancer risk prediction.
[3] Applications of deep learning in fundus images: A review.
[4] Deep representation learning of electronic health records to unlock patient stratification at scale.
[5] Evaluation of inference attack models for deep learning on medical data.
[6] Adversarial examples: Attacks and defenses for deep learning.
[7] Split learning for health: Distributed deep learning without sharing raw patient data.
[8] Intriguing properties of neural networks.
[9] Explaining and harnessing adversarial examples.
[10] DeepFool: A simple and accurate method to fool deep neural networks.
[11] Universal adversarial perturbations.
[12] One pixel attack for fooling deep neural networks.
[13] Patch selection denoiser: An effective approach defending against one-pixel attacks.
[14] Defense against n-pixel attacks based on image reconstruction.
[15] Efficient detection of pixel-level adversarial attacks.
[16] Image and attribute based convolutional neural network inference attacks in social networks.
[17] Model inversion attacks that exploit confidence information and basic countermeasures.
[18] Deep models under the GAN: Information leakage from collaborative deep learning.
[19] Model inversion attacks against collaborative inference.
[20] Reducing leakage in distributed deep learning for sensitive health data.
[21] Practical defenses against model inversion attacks for split neural networks.
[22] Maximal Jacobian-based saliency map attack.
[23] Cascaded and recursive convnets (CRCNN): An effective and flexible approach for image denoising.
[24] Tensor robust principal component analysis: Exact recovery of corrupted low-rank tensors via convex optimization.
[25] Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix.
[26] Second-order optimality conditions for non-convex set-constrained optimization problems.
[27] Hybrid and hierarchical fusion networks: A deep cross-modal learning architecture for action recognition.
[28] The mammographic image analysis society digital mammogram database.
[29] Gradient-based learning applied to document recognition.
[30] Deep leakage from gradients.
[31] Towards evaluating the robustness of neural networks.
[32] Adversarial examples in the physical world.