key: cord-0146587-h9pwjwd9
authors: Afifi, Mahmoud; Abuolaim, Abdullah; Hussien, Mostafa; Brubaker, Marcus A.; Brown, Michael S.
title: CAMS: Color-Aware Multi-Style Transfer
date: 2021-06-26
journal: nan
DOI: nan
sha: 09c0b45d1cf96959797466bfa3a12eb523ed4a72
doc_id: 146587
cord_uid: h9pwjwd9

Image style transfer aims to manipulate the appearance of a source image, or "content" image, to share similar texture and colors with a target "style" image. Ideally, the style transfer manipulation should also preserve the semantic content of the source image. A commonly used approach to assist in transferring styles is based on Gram matrix optimization. One problem of Gram matrix-based optimization is that it does not consider the correlation between colors and their styles. Specifically, certain textures or structures should be associated with specific colors. This is particularly challenging when the target style image exhibits multiple style types. In this work, we propose a color-aware multi-style transfer method that generates aesthetically pleasing results while preserving the style-color correlation between style and generated images. We achieve this desired outcome by introducing a simple but efficient modification to classic Gram matrix-based style transfer optimization. A nice feature of our method is that it enables the users to manually select the color associations between the target style and content image for more transfer flexibility. We validated our method with several qualitative comparisons, including a user study conducted with 30 participants. In comparison with prior work, our method is simple, easy to implement, and achieves visually appealing results when targeting images that have multiple styles. Source code is available at https://github.com/mahmoudnafifi/color-aware-style-transfer.

Figure 1: We propose a color-aware multi-style transfer method that generates aesthetically pleasing results while preserving the correlation between the styles and colors in the reference image (i.e., style image) and the generated one. Each shown style image (top) has different styles. We highlight two example styles in each style image in yellow and blue. We also highlight the corresponding transferred styles in our results (bottom).

Deep Dream [19] visualizes the patterns learned by a convolutional neural network (CNN) trained for image classification via image-based optimization. The process begins with a noise image, which is iteratively updated through an optimization process to make the CNN predict a certain output class. Inspired by Deep Dream, Gatys et al. [4] proposed a neural style transfer (NST) method that minimizes the statistical differences of deep features, extracted from intermediate layers of a pre-trained CNN (e.g., the VGG net [26]), of the content and style images. After the impressive results achieved by the NST work in [4], many methods have been proposed to perform style transfer leveraging the power of CNNs (e.g., [1, 5, 6, 12, 15, 17, 18, 20, 22, 24, 27, 28, 29, 30, 31, 32, 33, 34]). The work presented in this paper extends the idea of image-optimization NST to achieve multi-style transfer through a color-aware optimization loss (see Figure 1). We begin with a brief review of image-optimization NST in Section 2, and then elaborate on our method in Section 3.

NST-based methods use similarity measures between CNN latent features at different layers to transfer the style statistics from the style image to the content image. In particular, the methods of [2, 4] utilize the feature space provided by the 16 convolutional and 5 pooling layers of the 19-layer VGG network [26].
The max-pooling layers of the original VGG were replaced by average pooling, as this has been found to be useful for the NST task. For a pre-trained VGG network with fixed weights and a given content image, the goal of NST is to optimize a generated image so that the difference of feature map responses between the generated and content images is minimized. Formally, let $I_c$ and $I_g$ be the content and generated images, respectively. Both $I_c$ and $I_g$ share the same image dimensions, and each pixel in $I_g$ is initialized randomly. Let $F^l_c$ and $F^l_g$ be the feature map responses at VGG-layer $l$ for $I_c$ and $I_g$, respectively. Then $I_g$ is optimized by minimizing the content loss $\mathcal{L}_{content}(I_c, I_g)$ as follows:

$$\mathcal{L}_{content}(I_c, I_g) = \sum_{l} \left\| F^l_c - F^l_g \right\|^2_2, \quad (1)$$

where $\|\cdot\|^2_2$ is the squared Frobenius norm.

Figure 2: Traditional Gram matrix optimization does not consider the correlation between the style image's colors and their styles. Therefore, if the style image has more than a single style, as in the case shown in (A), this optimization often results in a mixed style, as shown in (B). Our method, in contrast, considers the color-style correlation in both images, as shown in (C). Dashed lines in purple refer to the goal of the optimization process.

Gatys et al. [4] leveraged the Gram matrix to calculate the correlations between the different filter responses to build a style representation of a given feature map response. The Gram matrix is computed by taking the inner product between the feature maps:

$$G^l_{ij} = \frac{1}{m_l} \left\langle F^l_{(i:)}, F^l_{(j:)} \right\rangle, \quad (2)$$

where $F^l_{(i:)}$ and $F^l_{(j:)}$ are the $i$th and $j$th vectorized feature maps of VGG-layer $l$, $m_l$ is the number of elements in each map of that layer, and $\langle \cdot, \cdot \rangle$ denotes the inner product. To make the style of the generated image $I_g$ match the style of a given style image $I_s$, the difference between the Gram matrices of $I_g$ and $I_s$ is minimized as follows:

$$\mathcal{L}_{style}(I_s, I_g) = \sum_{l} w_l \left\| G^l_s - G^l_g \right\|^2_2, \quad (3)$$

where $\mathcal{L}_{style}$ is the style loss, $G^l_s$ and $G^l_g$ are the Gram matrices of $I_s$ and $I_g$, respectively, and $w_l$ are scalar weighting parameters that determine the contribution of each layer to $\mathcal{L}_{style}$. To generate an image $I_g$ such that the general image content is preserved from $I_c$ and the texture style statistics are transferred from $I_s$, the optimization process is jointly performed to minimize the final loss function:

$$\mathcal{L} = \alpha \mathcal{L}_{content} + \beta \mathcal{L}_{style}, \quad (4)$$

where $\alpha$ and $\beta$ are scale factors that control the strength of content reconstruction and style transfer, respectively.

From Equation 2, it is clear that the Gram matrix measures the correlation of feature channels over the entire image. As a result, the Gram-based loss in Equation 3 transfers the averaged global image style statistics to the generated image. That means if the style image has multiple styles, traditional Gram-based optimization often fails to sufficiently convey all styles from the style image to the generated image; instead, it generates images with mixed styles. Figure 2 illustrates this limitation. As shown in Figure 2-(A), the style image has more than a single style, which results in an unpleasing mixed style when transferring using traditional Gram-based NST optimization (Figure 2-(B)). Our style transfer result is shown in Figure 2-(C).
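For illustration, the following is a minimal PyTorch sketch of the classic Gram-based formulation in Equations 1-4. It is not the authors' released implementation: the helper names (extract_features, gram, content_loss, style_loss), the layer indices, and the layer weights are our own assumptions; only the VGG-19 backbone with average pooling follows the description above.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Pre-trained VGG-19 backbone with frozen weights; max pooling swapped for
# average pooling, as described above for the NST task.
vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)
for i in range(len(vgg)):
    if isinstance(vgg[i], torch.nn.MaxPool2d):
        vgg[i] = torch.nn.AvgPool2d(kernel_size=2, stride=2)

def extract_features(img, layer_ids):
    """Feature maps of img (1x3xHxW, normalized) at the given (illustrative) layer indices."""
    feats, x = {}, img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layer_ids:
            feats[i] = x
    return feats

def gram(feat):
    """Equation 2: inner products of vectorized feature maps, normalized by m_l = H*W."""
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)              # each row is one vectorized feature map
    return f @ f.t() / (h * w)

def content_loss(feats_c, feats_g):
    """Equation 1: squared Frobenius distance between content and generated features."""
    return sum(F.mse_loss(feats_g[l], feats_c[l], reduction='sum') for l in feats_c)

def style_loss(feats_s, feats_g, layer_weights):
    """Equation 3: weighted squared Frobenius distance between Gram matrices."""
    return sum(w * ((gram(feats_g[l]) - gram(feats_s[l])) ** 2).sum()
               for l, w in layer_weights.items())
```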
Figure 3: Our method extracts a color palette from both the content and style images. This color palette is then used to generate color weighting masks. These masks are used to weight the extracted deep features of both the style and input images. This color-aware separation results in multiple Gram matrices, which are then used to compute our color-aware style loss. This loss, along with the content loss, is used for optimization.

Several image-optimization NST methods follow this Gram-based formulation [2, 4, 5, 6, 14]. However, these methods are restricted to a single set of average style statistics per content and style image pair, and they lack artistic controls. While the method of [21] proposed a procedure to control the style of the output image artistically, it requires tedious human effort, asking users to annotate semantic segmentation masks and correspondences in both the style and content images. Unlike existing methods, we introduce the first Color-Aware Multi-Style (CAMS) transfer method that enables style transfer locally based on nearest colors, where multiple styles can be transferred from the style image to the generated one. Our proposed method extracts a color palette from both the content and style images, and automatically constructs the region/color associations. The CAMS method then performs style transfer in which the texture of a specific color in the style image is transferred to the region that has the nearest color in the content image. Figure 1 shows multiple examples of the generated images (bottom row) from a single input content image with different style images (top row). The regions highlighted in yellow and blue indicate two example styles that were transferred from the style image to regions in the generated image based on the nearest color in the content image.

Our proposed framework allows multi-style transfer to be applied in a meaningful way. In particular, styles are transferred in association with colors. By correlating styles and colors, we offer another artistic dimension, preserving the content color statistics together with the transferred texture. To further allow artistic controls, we show how our method allows users to manually select the color associations between the reference style and content image for more transfer options. We believe that our proposed framework and the interactive tool are useful for the research community and enable more aesthetically pleasing outputs. Our source code will be publicly released upon acceptance.

Figure 4: In many scenarios, the style image may include multiple styles. Traditional Gram matrix-based optimization (e.g., Gatys et al. [4]) cannot capture these styles and, as a result, may produce noisy images. In contrast, our color-aware optimization produces more pleasing results while preserving the style-color matching.

Our method begins by extracting two color palettes from the content and style images, respectively. We merge them to generate a single input color palette, C, which is then used to generate a set of color masks. We use these masks to weight deep features of both the input and style images from different layers of a pre-trained CNN. This color-aware separation of deep features results in multiple Gram matrices used to compute our style loss during image optimization. In this section, we elaborate on each step of our algorithm.

Given an image, $I \in \mathbb{R}^{n \times m \times 3}$, and a target palette, $C$, our goal is to compute a color mask, $M \in \mathbb{R}^{n \times m}$, for each color $t$ in our color palette $C$, such that the final mask reflects the similarity of each pixel in $I$ to color $t$. We generate $M$ by computing a radial basis function (RBF) between each pixel in $I$ and our target color $t$ as follows:

$$M_j = \exp\!\left(-\frac{\left\| I_j - t \right\|^2_2}{2\sigma^2}\right), \quad (5)$$

where $\sigma$ is the RBF fall-off factor, $I_j$ is the $j$th pixel in $I$, and $t$ is the target color. Next, we blur the generated mask $M$ by applying a 15×15 Gaussian blur kernel with a standard deviation of 5 pixels. This smoothing step is optional but was empirically found to improve the final results in most cases.
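A small sketch of this mask-generation step is given below, assuming PyTorch tensors with pixel values scaled to [0, 1]. The Gaussian form of the RBF and the helper names are our assumptions; the 15×15 kernel and 5-pixel standard deviation follow the text above.

```python
import torch

def color_mask(img, target_color, sigma=0.3):
    """RBF similarity mask (Equation 5). img: HxWx3 tensor in [0, 1];
    target_color: a 3-vector drawn from the merged palette C."""
    d2 = ((img - target_color.view(1, 1, 3)) ** 2).sum(dim=-1)   # per-pixel squared distance
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def gaussian_blur(mask, ksize=15, std=5.0):
    """Optional smoothing with a 15x15 Gaussian kernel (std = 5 pixels)."""
    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2.0
    g1d = torch.exp(-(ax ** 2) / (2.0 * std ** 2))
    kernel = g1d[:, None] * g1d[None, :]
    kernel = (kernel / kernel.sum()).view(1, 1, ksize, ksize)
    m = mask[None, None]                                          # 1x1xHxW
    return torch.nn.functional.conv2d(m, kernel, padding=ksize // 2)[0, 0]
```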
For each of $I_s$ and $I_g$, we generate a set of masks, each of which is computed for a color in our color palette, $C$. Here, $I_g$ refers to the current value of the image we optimize, not the final generated image. Note that computing our color masks is performed through differentiable operations and, thus, can be easily integrated into our optimization. After computing the color mask sets $\{M_s^{(t)}\}_{t \in C}$ and $\{M_g^{(t)}\}_{t \in C}$ for $I_s$ and $I_g$, respectively, we compute two sets of weighted Gram matrices for $I_s$ and $I_g$. According to the given mask weights, each set of weighted Gram matrices captures the correlation between deep features (extracted from selected layers of the network) of the pixels of interest in the image. These weighted Gram matrices help our method focus only on transferring styles between corresponding pixels of interest in the style image and our generated image during optimization. For a color $t$ in our color palette $C$, the weighted Gram matrix, $G^{l(t)}$, is computed as follows:

$$G^{l(t)}_{ij} = \frac{1}{m_l} \left\langle \tilde{F}^l_{(i:)}, \tilde{F}^l_{(j:)} \right\rangle, \quad \tilde{F}^l = F^l \odot W_t, \quad (7)$$

where $\tilde{F}^l_{(i:)}$ and $\tilde{F}^l_{(j:)}$ are the $i$th and $j$th vectorized feature maps of network layer $l$ after weighting, $m_l$ is the number of elements in each map of that layer, $\odot$ is the Hadamard product, and $W_t$ represents the computed mask for $t$ after the following processing. First, we linearly interpolate the width and height of our computed mask, for color $t$, to match the width and height of the original feature map, $F^l$, before vectorization. Second, we duplicate the computed mask to have the same number of channels as $F^l$.

For each layer $l$ in the pre-trained classification network and based on Equation 7, we compute $\{G^{l(t)}_s\}_{t \in C}$ and $\{G^{l(t)}_g\}_{t \in C}$ for our style and generated image, respectively. Finally, our color-aware style loss is computed as follows:

$$\mathcal{L}_{CA}(I_s, I_g) = \sum_{t \in C} \sum_{l} w_l \left\| G^{l(t)}_s - G^{l(t)}_g \right\|^2_2. \quad (8)$$

By generating different weighted Gram matrices, our method is able to convey the different styles present in the style image, which is not feasible using classic Gram matrix optimization. As shown in Figure 4, the style image includes different styles and textures. NST using the Gram matrix (e.g., Gatys et al. [4]) fails to capture these multiple styles in the reference style image and produces an unpleasing result, as shown in the third column of Figure 4. In contrast, our color-aware loss considers these styles and effectively transfers them to the generated image, as shown in the last column of Figure 4. For example, the text is transferred from the letter (white background) in the style image to the man's white beard in the generated image.

The flow of our method is shown in Algorithm 1 and proceeds as follows. First, we initialize each pixel in $I_g$ with the corresponding one in $I_c$. Afterward, we generate two color palettes for $I_g$ and $I_s$, respectively. We use the algorithm proposed in [3] to extract the color palette of each image. The number of colors per palette is a hyperparameter that can be changed to get different results. In our experiments, we extracted color palettes of five colors for each of our content and style images. Then, we merge them to generate the final color palette, $C$. After merging, the final color palette has at most ten colors, as we exclude redundant colors after merging. After constructing $C$, we generate the color masks $\{M_s^{(t)}\}_{t \in C}$ and $\{M_g^{(t)}\}_{t \in C}$ for $I_s$ and $I_g$, respectively. Then, we extract deep features from $I_s$ and $I_c$, which represent our target style latent representation and content latent representation, respectively.
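Below is a minimal sketch of the weighted Gram matrix and color-aware style loss in Equations 7 and 8, continuing the PyTorch conventions of the earlier sketches. The function names are our assumptions, and broadcasting is used in place of explicitly duplicating the mask across channels; bilinear interpolation implements the mask resizing described above.

```python
import torch

def weighted_gram(feat, mask):
    """Equation 7: Gram matrix of features weighted by a color mask.
    feat: 1xCxHxW feature map; mask: HxW mask for one palette color t."""
    _, c, h, w = feat.shape
    # Resize the mask to the feature map's spatial size; broadcasting over the
    # channel dimension plays the role of duplicating the mask per channel.
    m = torch.nn.functional.interpolate(mask[None, None], size=(h, w),
                                        mode='bilinear', align_corners=False)
    weighted = feat * m                      # Hadamard product with W_t
    f = weighted.view(c, h * w)
    return f @ f.t() / (h * w)

def color_aware_style_loss(feats_s, feats_g, masks_s, masks_g, layer_weights):
    """Equation 8: sum weighted-Gram differences over palette colors and layers.
    masks_s / masks_g map each palette color t to its mask for the style /
    generated image; layer_weights maps layer index to its scalar weight w_l."""
    loss = 0.0
    for t in masks_s:                        # palette colors shared by both mask sets
        for l, w in layer_weights.items():
            g_s = weighted_gram(feats_s[l], masks_s[t])
            g_g = weighted_gram(feats_g[l], masks_g[t])
            loss = loss + w * ((g_g - g_s) ** 2).sum()
    return loss
```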
We adopted the VGG-19 net [26] as our backbone to extract such deep features, where we used the 4th and 5th conv layers to extract deep features for the content loss, and the first 5 conv layers to extract deep features for our color-aware style loss. We then construct the weighted Gram matrices, as described in Section 3.2, using the deep features of the style and generated images. The weighted Gram matrices of the generated and style images, and the deep features of the generated and content images, are used to compute our color-aware style loss (Equation 8) and the content loss (Equation 1), respectively. Then, the final loss is computed as:

$$\mathcal{L} = \alpha \mathcal{L}_{content} + \beta \mathcal{L}_{CA}, \quad (9)$$

where $\alpha$ and $\beta$ are set to 1.0 and $10^4$, respectively. After each iteration, we update the color masks of our generated image to track changes in $I_g$ during optimization. To minimize Equation 9, we adopted the L-BFGS algorithm [16] for 300 iterations with a learning rate of 0.5. To generate our masks, we have a hyperparameter, $\sigma$, that can be used interactively to control the RBF fall-off, which consequently affects the final result of the optimization. Our experiments found that $\sigma \in [0.25, 0.3]$ works well in most cases.

Algorithm 1: Color-aware optimization.
Input: Style image $I_s \in \mathbb{R}^{n \times m \times 3}$, content image $I_c \in \mathbb{R}^{n \times m \times 3}$, a pre-trained network, $f$, for image classification, layer indices $L_s$ and $L_c$ for style features and content features, respectively, and loss term weighting factors, $\alpha$ and $\beta$.
Result: Generated image $I_g \in \mathbb{R}^{n \times m \times 3}$ that shares the styles of $I_s$ and the content of $I_c$.

A nice feature of our method is that it allows more transfer flexibility by enabling the user to determine the color associations between the style and content images. To that end, we follow the same procedure explained in Algorithm 1 with one exception: we do not update the color masks of the generated image, $I_g$, so that the user's selection is respected throughout the optimization. Figure 5 shows two user cases that reflect the benefit of our manual user-selection tool. As shown in the first case (left), the user associates the reddish color in the content image's color palette with different colors in the style image's color palette. Based on this selection, the styles in our generated image are transferred according to this color-style correlation. In particular, the change happens only to the style of pixels associated with reddish pixels in the face region. As can also be seen in the second case (right), the transferred style is constrained to those styles associated with the selected colors in the style image's color palette. For the second user case in Figure 5 (bottom row), the auto mode struggled to transfer multiple styles because the given style image has a limited range of colors (i.e., only gray-color variations). Such style images, which have limited style options to offer, may result in less appealing outputs. Nevertheless, our manual color-association tool gives the user the flexibility to modify the generated image for more aesthetically pleasing outputs by associating the colors and restricting the modified region, as shown in Figure 5 (bottom row).

Figure 5: Our method allows artistic controls, where the user can manually select color associations or discard some colors from the generated palettes. In this figure, we present our results for the auto and user-selection modes with two different reference style images.
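For completeness, here is a compact sketch of the overall optimization loop of Algorithm 1 and Equation 9, reusing the helpers sketched earlier (extract_features, color_mask, content_loss, color_aware_style_loss). It follows the settings quoted above (initialization from $I_c$, $\alpha = 1$, $\beta = 10^4$, L-BFGS for 300 iterations with a learning rate of 0.5, and masks of $I_g$ refreshed at every step); the function signature, the palette representation, and the omission of the optional mask blurring are our assumptions, not the authors' released code.

```python
import torch

def cams_optimize(img_c, img_s, palette, layer_weights_style, layers_content,
                  alpha=1.0, beta=1e4, sigma=0.3, iters=300, lr=0.5):
    """Sketch of Algorithm 1 / Equation 9.
    img_c, img_s: 1x3xHxW tensors; palette: list of RGB 3-vectors (merged palette C)."""
    img_g = img_c.clone().requires_grad_(True)            # initialize I_g from I_c
    optimizer = torch.optim.LBFGS([img_g], lr=lr, max_iter=iters)

    feats_c = extract_features(img_c, layers_content)
    feats_s = extract_features(img_s, layer_weights_style.keys())
    # Style-image masks stay fixed during optimization.
    masks_s = {tuple(t.tolist()): color_mask(img_s[0].permute(1, 2, 0), t, sigma)
               for t in palette}

    def closure():
        optimizer.zero_grad()
        # Masks of the generated image are recomputed every iteration (auto mode).
        masks_g = {tuple(t.tolist()): color_mask(img_g[0].permute(1, 2, 0), t, sigma)
                   for t in palette}
        feats_g = extract_features(img_g, set(layers_content) | set(layer_weights_style))
        loss = (alpha * content_loss(feats_c, feats_g)
                + beta * color_aware_style_loss(feats_s, feats_g, masks_s, masks_g,
                                                layer_weights_style))
        loss.backward()
        return loss

    optimizer.step(closure)                                # L-BFGS runs its own iterations
    return img_g.detach()
```

For the user-selection mode described above, one would simply compute masks_g once from the user-provided associations and keep it fixed inside the closure instead of refreshing it each iteration.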
Evaluating NST techniques is a challenging problem facing the NST community, as indicated in [9, 15]. With that said, user studies have been widely adopted in the literature to evaluate subjective results. For the sake of completeness, we conducted an online user study to evaluate our proposed color-aware NST against other relevant techniques.* For each image pair (style/content), we compared six NST methods: neural style transfer (NST) by Gatys et al. [4], adaptive instance normalization (AdaIN) [8], avatar net [25], linear style transfer (LST) [13], relaxed optimal transport (ROT) [11], and our CAMS. The results of these methods were shown anonymously to each subject on a single online webpage, such that the result of each method occupies ∼25% of the screen. For our method, we used the auto-style-transfer mode, where the color matching and optimization were performed automatically, as described in Section 3. We collected answers from 30 subjects: 18 female and 12 male. The subjects were asked to anonymously give a score from one to five for the result of each method; a higher score indicates a more appealing result. We evaluated these methods on eight different image pairs. The images were randomly collected from Flickr, such that each style image includes more than one style, in order to evaluate our hypothesis of transferring multiple styles from the style image. The image pairs and results, along with the color-aware loss (Equation 8), style loss (Equation 3), and content loss (Equation 1), are shown in Figure 6. Though Figure 6 shows that our method does not always outperform other methods in all loss terms, our method has a consistent trade-off between the color-aware, style, and content losses in most cases, which is confirmed by the user study results (shown in Table 1). As can be seen, 38% of the subjects rated the results of our method as highly appealing (a score of five). On the other hand, the second-best method (i.e., NST [4]) received a score of five from only 16% of the votes. The study emphasizes the superiority of our proposed method compared with other methods, especially in capturing multiple styles from the style images. Table 2 shows the mean values of the color-aware, style, and content losses on our Flickr test set. We also report the average processing time required by each method on a single NVIDIA GeForce GTX 1080 graphics card in Table 2.

In addition to the user-study evaluation, we collected another test set that includes 200 content and style pairs from real portrait face images (randomly selected from the FFHQ dataset [10]) and painting portrait face images (randomly selected from the face set of the WikiArt dataset [1, 23]). We report the color-aware, style, and content losses, in addition to the FID score [7], achieved by our method and other NST methods [12, 13, 15, 20, 25, 34] in Table 3. As shown in Table 3, our method achieves competitive results compared to other NST methods across all error metrics; see Figure 7 for qualitative comparisons.

*We acknowledge that an in-person user study within a controlled display environment is preferred; however, due to the COVID-19 pandemic, we were limited to an online study.

We have shown that Gram matrix-based optimization methods often fail to produce pleasing results when the target style image has multiple styles. To address this limitation, we have presented a color-aware multi-style loss that captures the correlations between different styles and colors in both the style and generated images.
Our method is efficient, simple, and easy to implement, achieving pleasing results while capturing the different styles in the given reference image. We have also illustrated how our method can be used interactively by enabling users to manually control how styles are transferred from the given style image. Finally, through a user study, we showed that our method achieves the most visually appealing results compared to other alternatives for style transfer.

Figure 6: Qualitative comparisons between our method and other style transfer methods: neural style transfer (NST) [4], adaptive instance normalization (AdaIN) [8], avatar net [25], linear style transfer (LST) [13], and relaxed optimal transport (ROT) [11].

Table 1: User-study results. Each row reports the percentage of votes each method received for scores one (leftmost) to five (rightmost).

Method           | 1   | 2   | 3   | 4   | 5
NST [4]          | 34% | 13% | 23% | 14% | 16%
AdaIN [8]        | 16% | 27% | 32% | 19% | 6%
Avatar net [25]  | 50% | 27% | 17% | 4%  | 2%
LST [13]         | 23% | 35% | 22% | 15% | 5%
ROT [11]         | 35% | 23% | 20% | 16% | 6%
CAMS (ours)      | 10% | 12% | 24% | 16% | 38%

Caption fragment (methods compared in Table 3): [15], adaptive instance normalization (AdaIN) [8], avatar net [25], linear style transfer (LST) [13], arbitrary style transfer with style-attentional networks (SANET) [20], style transfer via wavelet transforms (WCT2) [34], and rethinking style transfer (RST) [12].

References
[1] HistoGAN: Controlling colors of GAN-generated and real images via color histograms
[2] Incorporating long-range consistency in CNN-based texture generation
[3] Palette-based photo recoloring
[4] A neural algorithm of artistic style
[5] Preserving color in neural artistic style transfer
[6] Characterizing and improving stability in neural style transfer
[7] GANs trained by a two time-scale update rule converge to a local Nash equilibrium
[8] Arbitrary style transfer in real-time with adaptive instance normalization
[9] Neural style transfer: A review
[10] A style-based generator architecture for generative adversarial networks
[11] Style transfer by relaxed optimal transport and self-similarity
[12] Rethinking style transfer: From pixels to parameterized brushstrokes
[13] Learning linear transformations for fast arbitrary style transfer
[14] Demystifying neural style transfer
[15] Universal style transfer via feature transforms
[16] On the limited memory BFGS method for large scale optimization
[17] Learning to warp for style transfer
[18] Deep photo style transfer
[19] Inceptionism: Going deeper into neural networks
[20] Arbitrary style transfer with style-attentional networks
[21] Stable and controllable neural texture synthesis and style transfer using histogram losses
[22] Artistic style transfer for videos and spherical images
[23] Large-scale classification of fine-art paintings: Learning the right metric on the right feature
[24] Neural style transfer via meta networks
[25] Avatar-Net: Multi-scale zero-shot style transfer by feature decoration
[26] Very deep convolutional networks for large-scale image recognition
[27] High-resolution multi-scale neural texture synthesis
[28] Two-stage peer-regularized feature recombination for arbitrary image style transfer
[29] Two-stream convolutional networks for dynamic texture synthesis
[30] Collaborative distillation for ultra-resolution universal style transfer
[31] Diversified arbitrary style transfer via deep feature perturbation
[32] Controllable artistic text style transfer via shape-matching GAN
[33] Filter style transfer between photos
[34] Photorealistic style transfer via wavelet transforms