key: cord-0069238-yiy9jp77 authors: Wang, Jinjin; Yang, Jiadi title: Culture shaping and value realization of digital media art under Internet+ date: 2021-11-03 journal: Int J Syst Assur Eng Manag DOI: 10.1007/s13198-021-01463-7 sha: 9c948bb7581aeede38dd4ed7edd30741a8fea7e2 doc_id: 69238 cord_uid: yiy9jp77 This exploration aims to better realize the cultural shaping and the value of digital media art under the background of Internet+. The security of digital media art information dissemination is discussed first. Then, the carrier image generation algorithm under the steganography process and deep learning is analyzed. After the improvement by edge computing (EC) and image steganography technology, the mean square error of carrier image generation algorithm is about 0.2 smaller than the three comparison algorithms, indicating that the optimized steganography technology is more stable. Meanwhile, the peak signal-to-noise ratio of the improved algorithm is between 0.06 and 0.2, and the structural similarity index measure is close to 1. Compared with traditional image steganography algorithm, multi-objective optimization based on genetic algorithm (MO-GA) algorithm improves the invisibility and security of steganography. Furthermore, the genetic algorithm is used to iteratively detect individuals with higher fitness of filtering residuals, to obtain the optimal solution of evolutionary multi-objective optimization problem. Finally, it is concluded that the MO-GA image steganography technology based on EC has advantages in the above three indicators, which improves the information security in the process of culture shaping and value realization of digital media. Digital media art is a novel art based on digital technology and modern media, which integrates rational thinking and artistic thinking of humans. Digital media is also the latest information carrier after language, text and electronic technology (Kenza 2021) . 
With the development and popularization of digital media art, information transmission security is particularly important to the subsequent culture shaping and value realization of digital media art. A novel image steganography algorithm based on DL (deep learning) is proposed here to generate new carrier images and embed information according to the steganographic capacity. Finally, EC (edge computing) technology is used for MO (multi-objective optimization) based on GA (genetic algorithm) of the image steganography algorithm (Taleb et al. 2017), to realize the safe transmission of information and the culture shaping and value realization of digital media art under safe development (Ariaji et al. 2020; Zhu 2020). Facing the problem that teachers and students could not return to school during the COVID-19 epidemic, under the organization of the Academic Affairs Office of the school, an online teaching plan based on Teaching Applysquare and live video broadcast was used in teaching the illustration design curriculum of the digital media art specialty. Teaching Applysquare is a platform for process teaching and academic evaluation independently developed by Nanjing University; it is an intelligent, comprehensive, cross-platform, hybrid teaching support system driven by data for normal teaching in universities. WeChat Official Accounts on mobile phones can be used for assisted instruction, and students and teachers have different PC interface systems. The rapid development of Internet+ technology has greatly affected the human lifestyle. During the culture shaping and value realization of digital media art, there are increasing demands on individual information interaction, the amount of transmitted information, and the safety and reliability of information transmission (Degand 2019).
Information encryption technology achieves only a limited effect in ensuring information transmission security: it merely resists the attacks of non-authenticated users without hiding the communication traces. Thus, encrypted information is poor in imperceptibility and vulnerable to interception and attack. As an important research direction in the field of information security, information steganography technology aims to conceal the existence of messages, which can effectively cover the shortage of information encryption technology (Ding et al. 2019; Sajedi 2018). Steganography is mainly used to hide message transmission. Covert communication can greatly reduce the probability of interception of and attacks on communication, especially when the communication is monitored. Under monitoring, the direct transmission of encrypted information becomes unsafe, and the receiver and sender are easily exposed. Steganography keeps the communication and both communicating parties safe to the greatest extent by embedding the information into a carrier (Lu et al. 2018). Considering the problems existing in the culture shaping and value realization of digital media art, EC and steganography are combined to implement MO-GA (multi-objective optimization based on genetic algorithm). This algorithm has great significance to the culture shaping and value realization of digital media under the background of Internet+. Data analysis on Teaching Applysquare shows that students take a long time on average to read the courseware, indicating that most students need to spend some time understanding and absorbing these contents. Therefore, teachers need to focus on explaining these key and difficult points in class. An online learning platform with a DL algorithm can record the whole learning process of students, analyze their learning characteristics, and facilitate personalized training programs designed by teachers for students.

2 Literature review

Centobelli et al.
(2021) stated that DL was a promising and disruptive technology with potential in many fields, and that the number of its applications had increased exponentially in recent years. Moreover, with the prosperity of DL multimedia technology and the promotion and application of wireless Internet technology, digital media art, a new art form, continued to create new miracles in the art field (Centobelli et al. 2021). Kim (2020) showed that with the emergence of DL, stakeholders and experts ceded policy decisions in human affairs to computer algorithms under algorithmic governance. They mainly studied the cultural shaping and value realization of digital media art and analyzed the positive and negative values of digital media art from the perspective of culture. Besides, combined with a large number of art cases, they furnished the methods and ways of cultural shaping of digital media art, as well as strategies to promote the smooth realization of its cultural value (Kim 2020). Karnouskos (2020) stated that deepfakes were realistic fabricated digital products, a plethora of which had emerged in social media over the last two years. In particular, the low technical expertise and equipment required to create deepfakes meant that such content could be easily produced by anyone and distributed online. Therefore, they investigated deepfakes from multiple perspectives, including media and society, media production, media representations, media audiences, gender, law, and regulation, as well as politics, and identified and critically discussed some key implications of these viewpoints (Karnouskos 2020). Robinson (2020) discussed how public institutions could support and ensure the high levels of trust, transparency, and openness in Nordic culture and extend these concepts of "digital trust" to artificial intelligence, where many AI processes were technologies hidden from the view of the citizen.
They found that one solution was to formulate national policies to safeguard cultural values and individual rights, and finally strengthen these values in their society (Robinson 2020). González-Zamar et al. (2020) reported that the continuous development of digital technology enabled people to live in a connection-based digital environment and also changed the background of the educational process. Besides, experience showed that digital technologies had influenced the way of learning and, consequently, the way of teaching; learning in the digital age was a complex process because it was a multifaceted and diversified action. The authors applied bibliometric techniques to identify global trends in digital education and its link with the learning of artistic and visual education in higher education settings during the period 2000-2019, and the data showed increasing relevance, particularly in the last three years. They detected lines of research related to the Internet, education, visuals, computer programs, learning, digital media literacy, and educational technology, contributing to the academic, scientific, and institutional debate to enhance decision-making based on existing information (González-Zamar et al. 2020). Leguina et al. (2021) studied the interaction between cultural capital and digital capital in public libraries. They explored a comparative profile of contemporary library users by analyzing a wide range of data sets from the UK participation survey (2016-17) using two-step cluster analysis and multiple regression models. They identified four distinct user groups: Traditional, Active, Family, and Tech Access, which possessed different degrees of cultural and digital capital, had different demographic profiles, and benefited from digitalized libraries in different ways (Leguina et al. 2021).
Neural networks, among DL algorithms, are a kind of artificial intelligence technology developed by simulating the relationships within the neural networks of the human brain. The neural network in the human brain is an extremely complex network organization, with constant information transmission between neurons. DL regards each neuron node in the network as an independent weight parameter, and numerous neuron nodes form a large network of weight parameters. Training these networks and optimizing the weight parameters of each neuron is an essential procedure of DL. The simplest neural network can be roughly divided into three layers, namely the input layer, hidden layer, and output layer. The input layer represents the data to be input, including images, voices, or other processed data. The hidden layer is connected with the input layer. Each neuron of the input layer can transmit the input information to the hidden layer. The hidden layer generally contains multiple neural networks. This layer is the most complicated layer with the largest number of neurons. An output layer is usually connected to the end of the hidden layer. The information of the output layer is provided by the preceding hidden layer, and the output layer is responsible for outputting the final training results. Figure 1 illustrates the structure of the neural network, where the hidden layer may include a multi-layer network structure. All neurons between the middle layers of the network are connected; this connection method is called a full connection. With various media as carriers, steganography has been widely used with the popularity of digital media. Common carrier media include image, audio, text, video, IP (Internet Protocol) message, and compressed files, among which images are the most popular (Hui 2019; Arivazhagan and Amrutha 2020). Steganalysis is developing synchronously with steganography.
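As a sketch of the three-layer structure described above, a minimal fully connected forward pass might look like the following (the layer sizes here are illustrative, not taken from the paper):

```python
import numpy as np

def relu(x):
    """ReLU activation applied between layers."""
    return np.maximum(0.0, x)

def forward(x, W1, b1, W2, b2):
    """One pass: input layer -> fully connected hidden layer (ReLU)
    -> fully connected output layer."""
    h = relu(x @ W1 + b1)   # every input neuron feeds every hidden neuron
    return h @ W2 + b2      # hidden layer feeds the output layer

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 4))                     # input layer: 4 features
W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)   # hidden layer: 8 neurons
W2, b2 = rng.standard_normal((8, 2)), np.zeros(2)   # output layer: 2 neurons
y = forward(x, W1, b1, W2, b2)
```

Training would then adjust W1, b1, W2, and b2, which is the weight optimization the text describes.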
As the opposite technology to steganography, steganalysis aims to discover the fact of covert communication. Images, even with a high compression ratio, often contain some redundant information. Steganographic embedding often modifies some specific pixels or coefficients, resulting in changes in the correlation of pixels or coefficients. Although some steganography algorithms try to make minimal changes to the first-order and second-order features of the modified pixels or coefficients, they cannot completely eliminate the correlation changes caused by embedding (Saeed et al. 2020). Figure 2 shows the classifications of steganography. Image steganography is an important branch of information steganography technology, which guarantees the secure transmission of secret information carried by digital images in the network. With the help of EC, characterized by its low latency, high bandwidth, and high performance, the edge server can calculate the undetectable embedding positions to which the human perception system is insensitive. Then secret information is quickly embedded into the digital image at the terminal to realize encrypted communication during transmission (Shi and Tingting 2020; Shaw 2020). Figure 3 indicates the three parts of image steganography. Regarding the side close to the data source as the edge side of the network, EC provides services to users on the edge side of the network, based on the integration of network, computing, storage, and so on. It practically suits real-time business with massive real-time data, security services, and privacy protection. EC processes such data close to its source. A spatial digital image steganography algorithm usually modifies some pixels of the image to realize the information embedding. Under an adaptive steganography algorithm, the probability of different pixels being modified is generally different. The pixel modification cost can describe the distortion introduced by the modified pixels.
In general, the modification cost affects the modification probability of pixels. Specifically, the modification cost of a pixel is negatively correlated with its modification probability. The modification probability directly affects the length of the embedded information; with a given length of embedded information, the modification probability is uniquely determined. The information embedding process often consists of an embedding-only scenario and an embedding-and-extraction scenario. In the embedding-only scenario, when a pixel is modified with a certain probability, the information is considered to be embedded in the image, but the embedded information cannot be extracted. The embedding process of the embedding-only scenario mainly contains modification cost calculation, modification probability calculation, and random embedding, as shown in Fig. 4. The embedding-only scenario is mostly used to simulate the performance of steganography algorithms and calculate the theoretical limit of minimum distortion (Johnvictor et al. 2018; Pei and Shunquan 2018). An image generation network is proposed, which pays little attention to the main content of the image but focuses on the generation of image details, and then embeds the details into the original image. As the input to the image generation network, the original image determines the content of the detail information generated by the network. Besides, the network is trained mainly to make the image more suitable as a carrier image. Although the image generation network needs to cooperate with other networks in training, it can be used alone in image generation, enriching the methods of carrier image generation. The following experimental results demonstrate that the regenerated image is more likely to be misclassified by the detector after the information embedding, which further improves its security.
The image generation network shows the advantages of a wide application range and strong practicability after being tested on multiple data sets. The framework uses four sub-networks for collaborative training: image generation (to generate a new image), modification probability calculation (to calculate the modification probability of pixels), information embedding (to embed pixel modifications), and steganalysis (to conduct steganographic analysis). There is an obvious cascade relationship between the sub-networks, for the output of each sub-network is the input of the next. Such a structure needs to be trained in order according to the function and dependence of the sub-networks. Firstly, the information embedding sub-network is fixed after training. Then the modification probability calculation sub-network and steganalysis sub-network are alternately trained. Finally, the image generation sub-network is trained to generate images independently (Chutani and Goyal 2019). The information embedding sub-network randomly embeds information according to the modification probability of each pixel, just by increasing or decreasing the pixel value by 1. The input of the information embedding sub-network consists of the modification probability of each image pixel and random noise equal in size to the image. The information embedding sub-network compares n (the value of the random noise variable) with p (the value of the pixel modification probability). If n < p/2, the pixel value decreases by 1. If n > (1 − p/2), the pixel value increases by 1. In other cases, the pixel value does not change. As shown in Eq. (1), such modifications ensure that the modification probability of each pixel is p, the probability of remaining unchanged is (1 − p), and the probabilities that the pixel is decreased by 1 or increased by 1 are both p/2.
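The ±1 embedding rule of Eq. (1) can be sketched directly; this is a simplified simulation of the rule, not the sub-network implementation itself:

```python
import numpy as np

def embed(pixels, p, rng):
    """Ternary embedding per Eq. (1): with uniform noise n in [0, 1),
    decrease the pixel by 1 if n < p/2, increase it by 1 if
    n > 1 - p/2, and leave it unchanged otherwise."""
    n = rng.random(pixels.shape)
    out = pixels.copy()
    out[n < p / 2] -= 1
    out[n > 1 - p / 2] += 1
    return out

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8))
prob = np.full((8, 8), 0.2)   # modification probability per pixel
stego = embed(img, prob, rng)
```

Each pixel is thus modified with total probability p, split evenly between +1 and −1, matching the description in the text.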
The operation of information embedding does not change the image size, but allocates the corresponding random noise for each modification probability. Therefore, the pixel modification probabilities and the random noise are reshaped to [-1, 1] during the information embedding, where -1 denotes the dimension inferred by rearranging all elements according to the constraint of the other dimension. They are rearranged back to the original image size after the information embedding (Ghasemzadeh and Arjmandi 2017). The information embedding sub-network is composed of two branches. One branch can add 1 or 0 (no embedding) to the pixel value, and the other can subtract 1 or 0. Each branch is a fully connected network with three layers: the input layer contains two neurons, the hidden layer includes 10 neurons, and the output layer has one neuron with a Sigmoid function. In the whole training process, supervised labels should be provided. The MSE (mean square error) between the output of the network and the labels is used as the loss function, as shown in Eq. (2). In Eq. (2), ŷ is the output of the network, and y is the label. The loss function is calculated according to Eq. (2). The training process is very fast since the framework has a relatively simple structure and clear functions. Once the training is completed, the information embedding sub-network is fixed. Because information embedding is a deterministic nonlinear process, calculating it directly with equations may cause the gradient transmission to be blocked. Thus, a neural network is used to simulate the information embedding process. The introduction of information embedding achieves the application of image generation in the steganography algorithm, naturally connecting the modification probability calculation with the steganalysis. The modification probability calculation sub-network and steganalysis sub-network are trained subsequently.
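The MSE loss of Eq. (2) is simply the mean of the squared differences between the network output and the labels:

```python
import numpy as np

def mse_loss(y_hat, y):
    """Mean square error between network output y_hat and label y (Eq. 2)."""
    return float(np.mean((y_hat - y) ** 2))
```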
With the preset image, the output of the modification probability calculation sub-network is input into the information embedding sub-network. Then, the steganographic image generated by the information embedding sub-network is input into the steganalysis sub-network. The label of the image is manually set when the image pair consisting of the steganographic image and input image is input into the steganalysis sub-network. The embedding load of steganography should remain stable during the alternate training of the modification probability calculation sub-network and the steganalysis sub-network. Generally, the modification probability and the number of modified pixels both grow with the increase of the embedding load, bringing fast convergence and a low error rate to the steganalysis. The low error rate of the steganalysis sub-network is also conducive to the convergence of the image generation sub-network. The output of the modification probability calculation sub-network can affect the information capacity of the image. Thus, the modification probability calculation sub-network is expected to simulate a high-load scenario in the information embedding, which benefits the learning of the two sub-networks. A convolution network with 24 layers is used by the image generation sub-network and the modification probability calculation sub-network. Each convolution layer contains 12 channels with convolution kernels of size 7*7, and padding is used to keep the image size unchanged. After regularization, each convolution layer (except the last one) is activated by the ReLU (Rectified Linear Unit) function before outputting data. The last layer is activated by the Sigmoid function. Cross-layer connections are added in the convolution network, through which the output of each even layer is merged into the next even layer for activation. The steganalysis sub-network uses a five-layer convolution network.
In the steganalysis sub-network, the first layer contains 8 channels with convolution kernels of size 5*5, with a convolution sliding step of 1, and the absolute value is calculated after output. Besides, the first layer is modified by regularization, activated by the TanH (hyperbolic tangent) function, and averaged by mean pooling. The second layer is similar to the first layer, except for containing 16 channels with convolution kernels of size 5*5. The TanH function used in the first two layers not only retains negative values, but also has an obvious gradient near 0. The third, fourth, and fifth convolution layers respectively use 32, 64, and 128 channels with convolution kernels of size 1*1, with regularization, the ReLU function, and mean pooling. Finally, the result is output through a fully connected network. The steganalysis sub-network takes the cross-entropy function as the loss function, as expressed in Eq. (3) (Fan and Sun 2019). In Eq. (3), y_i is the ith dimension of the label, and p_i is the ith dimension of the network output probability. The loss function of the modification probability calculation sub-network contains two parts, as shown in Eq. (4). One part is the loss of the steganalysis sub-network, and the other part is the load loss of the information embedding sub-network. The load loss is the MSE between the calculated channel capacity and the specified channel capacity, as shown in Eq. (5). The modification probability of the whole image is calculated according to the channel capacity of a single pixel, as shown in Eq. (6). In Eq. (4), loss_payload represents the load loss, and k_1 and k_2 are factors adjusting the ratio of the two losses. In Eq. (5), m represents the width of the image, while l is the length. The specified load is represented as payload. The load loss is trained by mini-batching; the load loss of a mini-batch is the average of the load losses of all images in the batch.
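The cross-entropy loss of Eq. (3) between a label y and the network's output probabilities p can be written as follows; the small epsilon guarding against log(0) is an implementation detail not stated in the paper:

```python
import numpy as np

def cross_entropy(y, p, eps=1e-12):
    """Cross-entropy loss per Eq. (3): -sum_i y_i * log(p_i)."""
    return float(-np.sum(y * np.log(p + eps)))
```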
In Eq. (6), m represents the width of the image, while l denotes the length. The embedding capacity of the whole image is the summation of the channel capacities of each pixel. As the last sub-network to be trained, the image generation sub-network is the only one used for generating images. The modification probability calculation sub-network, information embedding sub-network, and steganalysis sub-network are all prepared for training the image generation sub-network. The image generation sub-network is then trained alternately with the already-trained steganalysis sub-network. Meanwhile, the steganalysis sub-network still takes the image pair composed of the input image and the corresponding steganographic image as the input for learning. At this time, the steganalysis sub-network is not fixed, so that the learning process of image generation is universal, rather than generating images only for a detector with fixed weights (Abdullah et al. 2021). Like the modification probability calculation sub-network, the loss function of the image generation sub-network also includes the loss of the steganalysis sub-network and the load loss. On the one hand, the loss of the steganalysis sub-network helps to make the generated image difficult to detect under steganalysis. On the other hand, the load loss restricts the image generation sub-network to outputting images within the suitable scope of the modification probability calculation sub-network. The image generation sub-network adopts a seven-layer convolution network. Except for the last layer, which uses a single channel with convolution kernels of size 1*1, each layer contains 12 channels with convolution kernels of size 7*7. After regularization, each layer is activated by the ReLU function to output data to the next layer. Figure 6 illustrates the detailed structure of the image generation sub-network.
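The load loss of Eqs. (5) and (6) can be sketched as follows, assuming the standard ternary-entropy form for the per-pixel channel capacity of ±1 embedding; the paper does not spell out Eq. (6), so this exact form is an assumption:

```python
import numpy as np

def ternary_entropy(p):
    """Assumed per-pixel capacity (in bits) of +/-1 embedding with total
    modification probability p, split as p/2 in each direction."""
    p = np.clip(p, 1e-12, 1 - 1e-12)   # guard the log at p = 0 or 1
    return -p * np.log2(p / 2) - (1 - p) * np.log2(1 - p)

def payload_loss(prob_map, payload):
    """Load loss (Eq. 5, sketch): squared error between the image's
    average per-pixel capacity and the specified payload."""
    m, l = prob_map.shape   # width m and length l, as in the text
    capacity = ternary_entropy(prob_map).sum() / (m * l)
    return float((capacity - payload) ** 2)
```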
The images activated in every even layer of the image generation sub-network are overlaid on the images regularized in the next even layer, on which the ReLU function is performed simultaneously. The cross-layer connection effectively transmits the image details into deeper layers, and also quickly transfers the gradient from the back layers to the front layers during the back-propagation of the loss gradient. The last layer of the sub-network uses a single channel with convolution kernels of size 1*1. These convolution kernels can scale the output of the front layer and fuse the results of the front multi-channel layers into a single-channel image. Subsequently, regularization and the TanH function are carried out. The TanH function ensures the boundedness of the output. The output, multiplied by a certain strength, is overlaid on the image to obtain a new carrier image. The pixel values of the obtained images are artificially controlled in the range of 0 to 255 to ensure the actual meaning of the pixel values. The sub-network does not use a sliding step to generate a smaller image by convolution, nor does it use fully connected layers. Therefore, although the training of the image generation sub-network is carried out on images of 512*512, the trained sub-network can adapt to input of any size and generate carrier images of any size. Image steganography makes changes to images that are undetectable to the human perception system. A series of indicators of digital images can evaluate the degree of image distortion, including mean, standard deviation, mean gradient, information fidelity, visual information fidelity, PSNR (peak signal-to-noise ratio), MSE, and SSIM (structural similarity index measure) (Johansen 2021). Among them, PSNR and MSE are the most common, and SSIM is one objective of the MO. In the simulation experiment, MSE, PSNR, and SSIM are used to evaluate the influence of the four algorithms on image quality.
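The final overlay-and-clip step described above can be sketched as follows; the scaling strength is an illustrative parameter:

```python
import numpy as np

def overlay_details(image, generated, strength=1.0):
    """Overlay the TanH-bounded detail output, scaled by `strength`,
    onto the original image, then clip to the valid 0-255 pixel range."""
    carrier = image.astype(float) + strength * generated
    return np.clip(carrier, 0, 255).astype(np.uint8)
```

Because the operation is purely per-pixel, it works for inputs of any size, as the text notes for the trained sub-network.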
Table 1 illustrates the average values over 100 sample pairs (Niankara and Niankara 2020; Olasina 2020; Karampidis and Kavallieratou 2020), comparing MO-GA with TCA (texture complexity algorithm), PDA (pixel difference algorithm), and AIA (adaptive image algorithm). PSNR and MSE are indicators determined based on this simple and direct idea. MSE, as the name suggests, needs no further explanation. PSNR is the ratio of the energy of the peak signal to the average energy of the noise, usually expressed in dB (decibel) after the logarithm operation. Since MSE is the average energy of the difference between the real image and the noisy image, and the difference between the two is the noise, PSNR is the ratio of the peak signal energy to MSE. The definition can be written as Eq. (7):

PSNR = 10 log10(MaxValue^2 / MSE) (7)

(Fig. 6 shows a partial structure of the image generation sub-network.)

Since the image pixel values are saved in a quantized manner, bits denotes the number of bits each pixel stores; therefore, MaxValue is 2^bits − 1. The basic idea of SSIM is to evaluate the similarity of two images through three aspects: luminance, contrast, and structure. The basic process is as follows. First, calculate the luminance measurement of the inputs x and y, and compare the results to obtain the first evaluation related to similarity. After subtracting the influence of luminance, calculate the contrast measurement and compare the results to get the second evaluation. Then, remove the contrast from the result of the previous step and compare the structures. Finally, combine the results to get the final evaluation. In terms of implementation, luminance is represented by the mean, contrast by the variance after mean normalization, and structure by the correlation coefficient (the r of statistics, that is, the ratio of the covariance to the product of the standard deviations).
The specific calculation equations are as follows, where μ denotes the mean, σ the standard deviation, and σ_xy the covariance of x and y:

l(x, y) = (2 μ_x μ_y + C_1) / (μ_x^2 + μ_y^2 + C_1)
c(x, y) = (2 σ_x σ_y + C_2) / (σ_x^2 + σ_y^2 + C_2)
s(x, y) = (σ_xy + C_3) / (σ_x σ_y + C_3)

Among these, definitions like l(x, y) unify the expressions and numbers across all equations and satisfy Weber's law. Weber's law states that for the human visual system, the minimum perceptible amplitude of a brightness change is directly proportional to the background brightness. In other words, the human visual system is sensitive to relative brightness changes rather than absolute values. For instance, if the two means μ_x and μ_y are proportional with scale coefficient (1 + R), substituting them into the equation shows that l(x, y) is related only to R and has nothing to do with the absolute values of the two means, indicating that the evaluation resembles the subjective nature of human vision. In addition, the constant term C_1 aims to avoid fluctuation when the mean value is close to 0. The definition of contrast is similar to that of luminance. Moreover, the structural similarity is expressed as a correlation coefficient, with a constant term added to prevent division by 0. Finally, the weighted product of the three terms is taken, generally simplified with α = β = γ = 1. The general equation then comes as

S(x, y) = [l(x, y)]^α [c(x, y)]^β [s(x, y)]^γ

where S stands for similarity, and this operator should satisfy the basic properties of a measure. On the one hand, MSE and PSNR perform statistics and averaging based on pixel gray values, completely ignoring the impact of image content on human eyes, so they cannot fully reflect image quality. On the other hand, the SSIM method simulates the influence of the human visual system on the evaluation results and truly reflects people's subjective feelings. However, this method ignores the edge distortion of the image, yielding results that differ from subjective perception when evaluating seriously distorted, fuzzy, or noisy images.
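Both measures can be computed directly. The sketch below follows Eq. (7) for PSNR and the simplified (α = β = γ = 1) global form of SSIM; practical SSIM implementations use a sliding window, which is omitted here, and the constants follow the common (0.01·255)^2 and (0.03·255)^2 choice, an assumption not stated in the paper:

```python
import numpy as np

def psnr(original, noisy, bits=8):
    """PSNR in dB per Eq. (7): 10*log10(MaxValue^2 / MSE),
    with MaxValue = 2^bits - 1."""
    mse = np.mean((original.astype(float) - noisy.astype(float)) ** 2)
    max_value = 2 ** bits - 1
    return 10 * np.log10(max_value ** 2 / mse)

def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global SSIM: luminance from the means, contrast from the
    variances, structure from the covariance."""
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give SSIM = 1, consistent with the near-1 SSIM values reported for MO-GA in Table 1.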
In contrast, the DL algorithm proposed here considers the influence of edge changes on image quality in the evaluation process, which ensures the precision of the evaluation results. As shown in Table 1, MO-GA is superior to the comparison algorithms in terms of MSE, PSNR, and SSIM. As one objective function of the evolutionary MO, SSIM is optimized by population evolution to find the best steganographic position (Kim and Park 2020). In Table 1, the SSIM of MO-GA is almost 1, indicating that the steganography algorithm has little effect on image quality. The SPA (sample pair analysis) steganalysis tool is employed to analyze images in the experimental sample library. The security of MO-GA is evaluated by the error detection rate P_ERR = (P_FA + P_MD) / 2. Here, P_FA is the false alarm rate, the ratio of the number of original images detected as encrypted images to the number of original images, and P_MD is the omission factor, the ratio of the number of encrypted images detected as original images to the number of encrypted images. Figure 7 reveals the mean values of P_ERR of the four algorithms over 100 sample pairs. MO-GA is robust against the SPA steganalysis tool: the smaller the embedding capacity, the stronger the anti-SPA ability of MO-GA (Tan et al. 2020; Jia and Wang 2020). MO-GA is a typical evolutionary MO problem. GA is used to conduct a global search in the noise and texture regions of the image to find the most suitable steganographic position (Chhikara and Kumar 2020). In the search process, complex gene operations and population iterations increase the algorithm complexity and computational cost. Therefore, EC is used to iteratively find the steganographic position on the edge server, to realize the information embedding and encrypted communication based on image steganography in the terminal with limited performance (Nam and Lee 2020).
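The error detection rate used here is the average of the two error types; from raw counts it might be computed as follows (the counting interface is illustrative):

```python
def error_detection_rate(n_cover, n_cover_flagged, n_stego, n_stego_missed):
    """P_ERR = (P_FA + P_MD) / 2, where P_FA is the fraction of original
    (cover) images wrongly flagged as encrypted, and P_MD is the fraction
    of encrypted (stego) images missed by the detector."""
    p_fa = n_cover_flagged / n_cover
    p_md = n_stego_missed / n_stego
    return (p_fa + p_md) / 2
```

A P_ERR near 0.5 means the detector does no better than random guessing, so a higher P_ERR indicates stronger steganographic security against SPA.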
The computational cost of MO-GA is lower than those of TCA, PDA, and AIA, because the entire execution process of the three comparison algorithms is completed on the terminal. The embedding time of MO-GA stays within milliseconds when the embedding capacity is less than 0.1. Compared with traditional image steganography technologies, MO-GA improves the imperceptibility and security of steganography. The embedding of secret information is restricted to the noise and texture regions of the image through the filter residuals obtained by high-pass filter banks. Meanwhile, GA is used to iteratively detect individuals with high fitness in the filtering residuals to obtain the optimal solution of the evolutionary MO problem. Considering that a terminal with limited computing resources cannot run complex applications, the edge server is utilized to locate the optimal steganographic positions in the image. Furthermore, the terminal only needs to embed the secret information according to the returned positions, without providing the information itself to the server, which improves both the security of the information and the efficiency of information embedding. The proposed evolutionary MO-GA superimposes the filtering residuals onto the solution space to form the MO problem, regards the embedding capacity as the constraint condition, and constructs the fitness function by jointly optimizing the imperceptibility and security of steganography. Moreover, GA searches for individuals with high fitness in texture and noise regions to obtain better steganographic embedding positions. Besides, an encrypted communication method based on EC is proposed, considering that the terminal cannot run complex applications due to limited computing resources: MO-GA is deployed on the edge server to locate the optimal steganographic positions of the image, according to which the terminal embeds the secret information.
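A heavily simplified, single-objective stand-in for this search can be sketched as follows, assuming the high-pass filter residual is given as a NumPy array. The real MO-GA jointly optimizes imperceptibility and security under the capacity constraint; this sketch only maximizes residual magnitude at the chosen positions, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(positions, flat_residual):
    """Total residual magnitude at the chosen pixels; large values
    correspond to noisy/textured regions that hide changes well."""
    return float(np.abs(flat_residual)[positions].sum())

def ga_select_positions(residual, capacity, pop_size=30, generations=50):
    """Evolve sets of `capacity` pixel indices toward high-residual regions."""
    flat = residual.ravel()
    n = flat.size
    # Each individual is a set of candidate embedding positions.
    pop = [rng.choice(n, capacity, replace=False) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, flat), reverse=True)
        elite = pop[: pop_size // 2]                    # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.choice(len(elite), 2, replace=False)
            pool = np.union1d(elite[a], elite[b])       # crossover
            # Greedily keep the `capacity` strongest positions in the pool.
            child = pool[np.argsort(np.abs(flat)[pool])[-capacity:]]
            if rng.random() < 0.2:                      # mutation
                new = int(rng.integers(n))
                if new not in child:
                    child = child.copy()
                    child[rng.integers(capacity)] = new
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda ind: fitness(ind, flat))
```

In the paper's scheme this loop would run on the edge server, which returns only the position set; the terminal then embeds the secret bits locally, so the secret never leaves the terminal.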
Compared with traditional image steganography technology, MO-GA improves the invisibility and security of steganography. Meanwhile, GA is used to iteratively detect individuals with higher fitness in the filtering residuals to attain the optimal solution of the evolutionary MO problem. This deployment does not provide the secret information to the server, ensuring its security during the culture shaping and value realization of digital media. Due to limited time and knowledge, only a small range of creative cases was selected for integrated analysis. Besides, the research on digital media art remains at a theoretical stage, without large-scale empirical study or detailed and accurate data investigation. Consequently, some arguments inevitably lack persuasiveness and need further research. Digital media art is a new research field, and new terms and statements emerge one after another. Although a large number of domestic and foreign literature resources have been cited, due to limitations of time and the authors' theoretical level, the application and definition of some concepts need further consideration.

Funding This research received no external funding.

Conflict of interest The authors declare that they have no conflict of interest.

Ethical approval This article does not contain any studies with human participants or animals performed by any of the authors.

References
Recent advances in collaborative scheduling of computing tasks in an edge computing paradigm
Learning materials based on digital art student creativity in Universitas Muhammadiyah Tapanuli Selatan
Digital image steganalysis: a survey on paradigm shift from machine learning to deep learning based techniques
Surfing blockchain wave, or drowning? Shaping the future of distributed ledgers and decentralized technologies
MI-LFGOA: multi-island levy-flight based grasshopper optimization for spatial image steganalysis
A review of forensic approaches to digital image steganalysis
Stereotypes vs. strategies for digital media artists: the case for culturally relevant media production
Image inpainting using nonlocal texture matching and nonlinear filtering
Image steganalysis via random subspace fisher linear discriminant vector functional link network and feature mapping
Universal audio steganalysis based on calibration and reversed frequency resolution of human auditory system
Comprehensive survey of 3D image steganography techniques
Digital education and artistic-visual learning in flexible university environments: research analysis
To promote the education of digital media art through design contest
Learning selection channels for image steganalysis in spatial domain
Public intellectuals on new platforms: constructing critical authority in a digital media culture. Rethinking cultural criticism
Unsupervised optimization for universal spatial image steganalysis
A dilated convolutional neural network as feature selector for spatial image steganalysis: a hybrid classification scheme
Artificial intelligence in digital media: the era of deepfakes
The poetry of Suheir Hammad: transnational interventions in the age of islamophobia and digital media
Deep learning and principal-agent problems of algorithmic governance: the new materialism perspective
CNN-based image steganalysis using additional data embedding
Public libraries as reserves of cultural and digital capital: addressing inequality through digitalization
Binary image steganalysis based on local texture pattern
BitMix: data augmentation for image steganalysis
The role of digital media in shaping youth planetary health interests in the global economy
Cultural expression using digital media by students
Feature selection for image steganalysis using levy flight-based grey wolf optimization
Application of quantisation-based deep-learning model compression in JPEG image steganalysis
Trust, transparency, and openness: how inclusion of cultural values shapes Nordic national public policy strategies for artificial intelligence (AI)
An accurate texture complexity estimation for quality-enhanced and secure image steganography
Adaptive image steganalysis
ACM SIGGRAPH distinguished artist award for lifetime achievement in digital art
AVG comprehensive practice reform of digital media art based on fuzzy theory
On multi-access edge computing: a survey of the emerging 5G network edge cloud architecture and orchestration
An adaptive image calibration algorithm for steganalysis
Study of creative thinking in digital media art design education

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Acknowledgements The authors acknowledge the help from the university colleagues.

Author contributions All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.