key: cord-0025418-o6n6sacx
authors: Wang, Qianqian; Liu, Fang’ai; Zhao, Xiaohui; Tan, Qiaoqiao
title: Session interest model for CTR prediction based on self-attention mechanism
date: 2022-01-07
journal: Sci Rep
DOI: 10.1038/s41598-021-03871-y
sha: b0ccbfceb75dee404f062143848210d736a7af29
doc_id: 25418
cord_uid: o6n6sacx

Click-through rate (CTR) prediction, which aims to predict the probability of a user clicking on an item, is critical to online advertising. How to capture the user's evolving interests from the user behavior sequence is an important issue in CTR prediction. However, most existing models ignore the fact that the sequence is composed of sessions: user behavior can be divided into different sessions according to occurrence time, and behaviors are highly correlated within each session but largely unrelated across sessions. We propose an effective model for CTR prediction, named Session Interest Model via Self-Attention (SISA). First, we divide the user's sequential behavior into sessions. A self-attention mechanism with bias encoding is used to model each session. Since different session interests may be related to each other or follow a sequential pattern, we then utilize a gated recurrent unit (GRU) to capture the interaction and evolution of the user's different historical session interests in the session interest interacting module. Then, we use local activation and a GRU to aggregate the session interests with the target ad to form the final representation of the behavior sequence. Experimental results show that the SISA model performs better than other models.

It is a critical problem to predict the probability of users clicking on ads or items for many applications such as online advertising and recommender systems1,2. The cost per click (CPC)3 model is often used in advertising systems, and the accuracy of click-through rate (CTR) prediction directly influences the final revenue under the CPC model. At the same time, displaying suitable advertisements to users enhances their experience. Therefore, both academia and industry are concerned with how to design CTR prediction models. Modeling feature interaction is very critical in CTR prediction tasks, but many recent methods that model it effectively neglect user interest, which has an important influence on CTR prediction. In fields with rich internet-scale user behavior data, such as online advertising, a user's sequential behaviors reflect their evolving interests. However, many models based on user interest overlook the intrinsic structure of the sequences: multiple sessions make up a sequence, and a session is a list of user behaviors that occur within a given time frame. The user behavior within each session is highly homogeneous, while behavior across sessions is heterogeneous. Grbovic et al.4 proposed the session division principle that a new session begins after a time interval of more than 30 min. For example, a user mainly browses T-shirts in the first half hour (session 1) and sneakers in the second half hour (session 2), showing different interests in the two sessions. A user has a clear and unique intent within a session, but that interest usually changes when a new session starts. Based on these observations, we propose the Session Interest Model via Self-Attention (SISA) for CTR prediction, which uses multiple historical sessions to model the user's sequential behavior in the CTR prediction task.
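To make the 30-min session division principle concrete, the sketch below splits a time-ordered behavior log into sessions whenever two adjacent behaviors are more than 30 min apart. This is a minimal illustration, not the authors' preprocessing code; the (item_id, timestamp) input format is our assumption.

```python
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=30)  # session division principle (Grbovic et al.)

def split_into_sessions(behaviors):
    """Split a time-ordered list of (item_id, timestamp) pairs into sessions.

    A new session starts whenever two adjacent behaviors are separated
    by more than 30 minutes.
    """
    sessions, current = [], []
    last_time = None
    for item, ts in behaviors:
        if last_time is not None and ts - last_time > SESSION_GAP:
            sessions.append(current)  # close the previous session
            current = []
        current.append(item)
        last_time = ts
    if current:
        sessions.append(current)
    return sessions

# Example: T-shirts browsed first, sneakers about an hour later -> two sessions
t0 = datetime(2021, 6, 1, 10, 0)
log = [("tshirt_1", t0), ("tshirt_2", t0 + timedelta(minutes=5)),
       ("sneaker_1", t0 + timedelta(minutes=65)), ("sneaker_2", t0 + timedelta(minutes=70))]
print(split_into_sessions(log))  # [['tshirt_1', 'tshirt_2'], ['sneaker_1', 'sneaker_2']]
```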
In the session division module, we naturally divide the user's sequential behavior into sessions. In the session interest extractor module, a self-attention mechanism with bias encoding is applied to model each session: self-attention captures the internal relationships among the behaviors in each session and then extracts the user's interest in that session. Since different session interests may be related to each other or follow a sequential pattern, we use a Gated Recurrent Unit (GRU) to capture the interaction and evolution of the user's different historical session interests in the session interest interacting module. Because different session interests have different effects on the target item, we utilize an attention mechanism to achieve local activation and use a GRU to aggregate the session interests with the target ad to obtain the final representation of the behavior sequence.

The main contributions of this paper are as follows:
1. The user behavior in each session is highly homogeneous, while the behavior of a user in different sessions is heterogeneous. We focus on the user's multiple session interests and propose a novel Session Interest Model via Self-Attention (SISA), which yields richer expressions of interest and more accurate prediction results.
2. We divide the user's sequential behavior into sessions and design a session interest extractor module, employing a self-attention network with bias encoding to obtain an accurate expression of interest for each session. We then use a GRU to capture the interaction of different session interests and exploit a GRU with attentional update gate (AUGRU) to aggregate the session interests with the target ad, identifying the influence of each session interest in the session interest interacting module.
3. Experimental results show that our proposed model achieves substantial improvements over other models. The influence of key parameters and different variants is also explored, which confirms the validity of the SISA model.

The rest of the paper is organized as follows. We discuss the related work in "Related work" and introduce the detailed architecture of the proposed SISA model in "Material and methods". Section "Experiments" verifies the prediction effectiveness of the proposed model through experiments and analyses the results. Finally, in "Conclusion", we summarize the model presented in this paper and outline directions for future work.

The CTR prediction problem is normally formulated as a binary classification problem. Logistic regression (LR)5 is a linear model that is widely used in industry. Kumar et al.6 used logistic regression to build a model for CTR prediction. McMahan et al.7 proposed a method for Google's ad click prediction problem and obtained better performance, using multiple features such as ad information and keywords as input. Chapelle et al.8 used LR to address Yahoo's prediction problem. The linear model is simple, but it cannot capture feature interactions. To overcome this limitation, the Factorization Machine9 (FM) and its variants10 are used to capture feature interactions and achieve better results. Field-aware factorization machines (FFM)10 introduce field-aware latent vectors to capture feature interactions. However, the factorization machine model is relatively weak at capturing high-order feature interactions. He et al.11 combined decision trees with LR to improve the results. However, these shallow models have limited power to represent feature interactions.
Recently, deep neural networks have achieved great success in many research fields such as computer vision12,13, image processing14,15 and natural language processing16,17. Therefore, researchers have proposed many CTR prediction models based on deep learning, in which how to effectively model feature interactions is a central problem. Zhang et al.18 proposed the Factorization Machine based Neural Network (FNN), which uses FM to pre-train the embedding layer of a feedforward neural network; FNN performs well at capturing high-order feature interactions. Cheng et al.19 combined a linear model and a deep neural network to capture feature interactions, but the wide part of the model still requires feature engineering, meaning that feature interactions must be designed manually. To solve this problem, the DeepFM model20 uses FM to replace the wide part and shares the same input; DeepFM is considered one of the more advanced models in the field of CTR estimation. The Product-based Neural Network (PNN) model21 is used for user response prediction and obtains feature interactions through a product layer. Lian et al.22 proposed the CIN model, which captures feature interactions at the vector-wise level. The Deep and Cross Network (DCN)23 efficiently learns feature crossing and requires no manual feature engineering. Shan et al.24 uncovered the relationships behind user behavior based on residual neural networks and proposed the Deep Crossing model25. In addition, some models based on Convolutional Neural Networks (CNN) have been proposed. Kim et al.26 designed a multi-array CNN model for ad CTR prediction that captures local feature information. Wang et al.27 used an attention-based CNN to identify distinctive features. Ouyang28 considered each target ad independently and proposed the MA-DNN model, which achieves better results.

In practical applications, different predictors usually have different predictive capabilities, and features that contribute more to the prediction results should be given greater weights. The attention mechanism29 is powerful at distinguishing the importance of features. Wang et al.30 improved FM with the attention mechanism to identify the different importance of different features. Cao et al.31 proposed a Meta-Wrapper model that utilizes the attention mechanism to capture the items a user is interested in from historical behaviors. Xiao et al.32 built the Attentional Factorization Machine (AFM) model, which mines feature interactions with a neural attention network; however, it ignores the importance of user behavior for CTR prediction. In summary, the high-order expression and interaction of features significantly improve the expressive ability of features and the generalization ability of the models. However, in the process of capturing feature interactions, the influence of user interest is often ignored. Constructing a model that captures the user's dynamic and evolving interests from the user's sequential behavior has been widely proven effective in CTR prediction tasks. Relatedly, dynamic Quality of Service (QoS) prediction for services is currently a hot and challenging research topic in service recommendation and composition; Jin et al.33 addressed this problem with a Time-aWare service Quality Prediction method (TWQP).
The Deep Interest Network (DIN)34 introduced the influence of user interests and extracts them from user behaviors; DIN can capture the diversity of user interests and improves the performance of CTR prediction. To capture the dynamic evolution of user interests, the Deep Interest Evolution Network (DIEN)35 was proposed; DIEN extracts interest features and models the interest evolving process. Wang et al.36 presented a Trust-based Collaborative Filtering (TbCF) algorithm that performs basic rating prediction in a manner consistent with existing CF methods; the algorithm employs multiple perspectives to extract proper services and achieves a good trade-off between the robustness, accuracy, and diversity of the recommendation. Liu et al.37 proposed an attention-based bidirectional gated recurrent unit (GRU) model for point-of-interest (POI) category prediction (ABG_poic). They regard the user's POI category as the user's interest preference because the fuzzy POI category reflects user interest more readily than the POI itself. By modeling the user's sequential behavior, the feature representation is enriched and the prediction accuracy is significantly improved. The concept of a session often appears in sequential recommendation but is rarely seen in CTR prediction tasks. Session-based recommendation achieves good results by modeling the user's evolving dynamic interest. The Neural Attentive Recommendation Machine (NARM)38,39 used an attention mechanism to capture the user's purpose in the current session. Zhang et al.40 analyzed the current session information from multiple aspects and improved user satisfaction. However, most existing studies on CTR prediction ignore that the sequences are composed of sessions. Motivated by these observations, we introduce a novel session interest model via self-attention (SISA) to obtain better CTR prediction results.

We propose the Session Interest Model via Self-Attention (SISA) for CTR prediction, which uses multiple historical sessions to model the user's sequential behavior in the CTR prediction task. The SISA model includes five modules, which we describe in this section. We first introduce feature representation and embedding in "Feature representation and embedding". Next, "Session division module" illustrates the session division module. Then, we describe the session interest extractor module in "Session interest extractor module" and the session interest interacting module in "Session interest interacting module". Finally, "The overall architecture of SISA model" presents the structure of the SISA model.

Session interest extractor module. On the one hand, the behaviors in the same session are closely related to each other; on the other hand, random behaviors within a session deviate from the original expression of session interest. To capture the inner relationship between behaviors in the same session and reduce the impact of irrelevant behaviors, a multi-head self-attention mechanism41 is used within each session. Because self-attention is itself order-agnostic, we add a bias encoding (a form of positional encoding) to the input embeddings so that the model can exploit the order relationship of the sequence; the encoding also captures the order of sessions and the bias in different representation subspaces.
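Before the formal definitions that follow, here is a minimal PyTorch sketch of one plausible reading of this module: learned bias vectors over the session index, the position within a session, and the embedding unit are broadcast-added to the behavior embeddings, each session is then processed by multi-head self-attention, and the outputs are average-pooled into one interest vector per session. The tensor layout, module names, and the choice of average pooling are our assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class SessionInterestExtractor(nn.Module):
    """Bias-encoded multi-head self-attention over each session (a sketch)."""

    def __init__(self, num_sessions_K, session_len_T, d_model=64, num_heads=4):
        super().__init__()
        # Bias encoding: one learned vector per session index (K), per
        # position in a session (T), and per unit of the embedding (d_model).
        self.w_K = nn.Parameter(torch.zeros(num_sessions_K, 1, 1))
        self.w_T = nn.Parameter(torch.zeros(1, session_len_T, 1))
        self.w_U = nn.Parameter(torch.zeros(1, 1, d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                 nn.Linear(d_model, d_model))

    def forward(self, sessions):
        # sessions: (batch, K, T, d_model) behavior embeddings
        B, K, T, d = sessions.shape
        x = sessions + self.w_K + self.w_T + self.w_U  # BE_(k,t,u) = w_K + w_T + w_U
        x = x.view(B * K, T, d)                        # attend within each session
        out, _ = self.attn(x, x, x)
        out = self.ffn(out)
        interests = out.mean(dim=1)                    # average pool over positions
        return interests.view(B, K, d)                 # one interest vector per session

# Usage: batch of 2 users, K=3 sessions of T=5 behaviors, d_model=64
extractor = SessionInterestExtractor(num_sessions_K=3, session_len_T=5)
emb = torch.randn(2, 3, 5, 64)
print(extractor(emb).shape)  # torch.Size([2, 3, 64])
```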
Formally, we define the bias encoding $\mathbf{BE} \in \mathbb{R}^{K \times T \times d_{model}}$ as follows:

$$\mathbf{BE}_{(k,t,u)} = \mathbf{w}^K_k + \mathbf{w}^T_t + \mathbf{w}^U_u,$$

where $\mathbf{w}^U \in \mathbb{R}^{d_{model}}$ is the bias vector of the unit position in the behavior embedding and $u$ is the index of the unit in the behavior embedding, $\mathbf{w}^K \in \mathbb{R}^K$ is the bias vector of sessions and $k$ is the index of the session, and $\mathbf{w}^T \in \mathbb{R}^T$ is the bias vector of the position in the session and $t$ is the index of the position. The user behavior sessions $\mathbf{Q}$ can then be represented via bias encoding as:

$$\mathbf{Q} = \mathbf{Q} + \mathbf{BE}.$$

As we all know, a user's click behavior is influenced by many factors, such as color, shape, and price. Multi-head self-attention can capture the relationships in different representation subspaces. We write the $k$-th session as $\mathbf{S}_k = [\mathbf{S}_{k1}; \ldots; \mathbf{S}_{kn}; \ldots; \mathbf{S}_{kN}]$, where $\mathbf{S}_{kn} \in \mathbb{R}^{T \times d_n}$ is the $n$-th head of $\mathbf{S}_k$, $N$ is the number of heads, and $d_n = \frac{1}{N} d_{model}$. With these representations, the output of head $n$ is calculated as:

$$\text{head}_n = \text{Attention}(\mathbf{S}_{kn}\mathbf{W}^Q, \mathbf{S}_{kn}\mathbf{W}^K, \mathbf{S}_{kn}\mathbf{W}^V),$$

where $\mathbf{W}^Q$, $\mathbf{W}^K$, $\mathbf{W}^V$ are weight matrices. A feedforward neural network then further improves the nonlinear ability:

$$\mathbf{I}_k = \text{FFN}(\text{Concat}(\text{head}_1, \ldots, \text{head}_N)\mathbf{W}^O).$$

Session interest interacting module. Different session interests may be related to each other or follow a sequential pattern. A GRU performs well at capturing sequential relationships, so we use a GRU42,43 to capture the interaction and evolution of the user's different historical session interests. The activation $\mathbf{h}_t$ of the GRU at time $t$ is a linear interpolation between the previous activation $\mathbf{h}_{t-1}$ and the candidate activation $\tilde{\mathbf{h}}_t$:

$$\mathbf{h}_t = (1 - u_t) \odot \mathbf{h}_{t-1} + u_t \odot \tilde{\mathbf{h}}_t.$$

The update gate is given by:

$$u_t = \sigma(\mathbf{W}^u \mathbf{I}_t + \mathbf{U}^u \mathbf{h}_{t-1} + \mathbf{b}^u).$$

The candidate activation $\tilde{\mathbf{h}}_t$ is calculated as follows:

$$\tilde{\mathbf{h}}_t = \tanh(\mathbf{W}^h \mathbf{I}_t + r_t \odot \mathbf{U}^h \mathbf{h}_{t-1} + \mathbf{b}^h),$$

where $r_t$ is a set of reset gates and $\odot$ is element-wise multiplication. When the reset gate is off ($r_t$ close to 0), it makes the unit forget the previously computed state. The reset gate is given by:

$$r_t = \sigma(\mathbf{W}^r \mathbf{I}_t + \mathbf{U}^r \mathbf{h}_{t-1} + \mathbf{b}^r),$$

where $\sigma$ is the sigmoid function, $\mathbf{I}_t$ is the input of the GRU, and the $\mathbf{W}$, $\mathbf{U}$ and $\mathbf{b}$ terms are trained parameters.

The hidden state $\mathbf{h}_t$ captures the dependency between session interests. However, the session interests related to the target ad have a greater impact on whether the user will click on it, so the weights of the session interests need to be reassigned with respect to the target ad. We use an attention mechanism for local activation and a second GRU to model the representation of session interests and the target ad. Let $\mathbf{I}'_t$ and $\mathbf{h}'_t$ denote the input and hidden state of this second GRU; its input is the corresponding state from the interaction-capturing GRU, $\mathbf{I}'_t = \mathbf{h}_t$. The attention function we use can be formulated as:

$$a_t = \frac{\exp(\mathbf{h}_t \mathbf{W}^I \mathbf{X}^I)}{\sum_{j} \exp(\mathbf{h}_j \mathbf{W}^I \mathbf{X}^I)},$$

where $\mathbf{W}^I$ has the corresponding shape; the attention score reflects the relationship between the target ad $\mathbf{X}^I$ and the input $\mathbf{h}_t$, and more relevant states receive larger attention weights. We combine the attention mechanism with the GRU and use a GRU with attentional update gate (AUGRU) to consider the influence between session interests and the target ad:

$$\tilde{u}'_t = a_t \times u'_t,$$
$$\mathbf{h}'_t = (1 - \tilde{u}'_t) \odot \mathbf{h}'_{t-1} + \tilde{u}'_t \odot \tilde{\mathbf{h}}'_t,$$

where $u'_t$ is the original update gate of the AUGRU, $\tilde{u}'_t$ is the attentional update gate we use in the AUGRU, and $\mathbf{h}'_{t-1}$ and $\mathbf{h}'_t$ are hidden states of the AUGRU. Figure 1 shows the framework of the GRU with attentional update gate (AUGRU). By using the AUGRU we retain the original dimensional information of the update gate, scale all of its dimensions with the attention score, and thereby account for the impact of different session interests on the target ad.
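The sketch below implements the AUGRU cell described by the equations above: a standard GRU cell computes the update gate $u'_t$, which is rescaled by the attention score $a_t$ before interpolating between the previous and candidate hidden states. It is a minimal PyTorch sketch; the layer shapes and the rollout loop are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AUGRUCell(nn.Module):
    """GRU cell with attentional update gate (AUGRU): the update gate u'_t
    is scaled by the attention score a_t before interpolating the state."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.lin_r = nn.Linear(input_size + hidden_size, hidden_size)
        self.lin_u = nn.Linear(input_size + hidden_size, hidden_size)
        self.lin_h = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x, h_prev, att_score):
        # x: (batch, input_size); h_prev: (batch, hidden); att_score: (batch, 1)
        xh = torch.cat([x, h_prev], dim=-1)
        r = torch.sigmoid(self.lin_r(xh))                        # reset gate r_t
        u = torch.sigmoid(self.lin_u(xh))                        # original update gate u'_t
        h_tilde = torch.tanh(self.lin_h(torch.cat([x, r * h_prev], dim=-1)))
        u_att = att_score * u                                    # attentional update gate
        return (1.0 - u_att) * h_prev + u_att * h_tilde          # h'_t

# Usage: roll the AUGRU over K session-interest states with attention scores
cell = AUGRUCell(input_size=64, hidden_size=64)
states = torch.randn(2, 3, 64)                     # hidden states h_t from the first GRU
scores = torch.softmax(torch.randn(2, 3), dim=-1)  # a_t w.r.t. the target ad
h = torch.zeros(2, 64)
for t in range(states.size(1)):
    h = cell(states[:, t], h, scores[:, t:t + 1])
# h is the final representation fed to the prediction MLP
```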
The overall architecture of SISA model. The SISA model includes five modules, and the structure is shown in Fig. 2. In the feature representation and embedding module, informative features such as the user profile, scene profile and target ad are transformed into dense vectors by an embedding layer. In the session division module, we divide the user's behavior sequence into sessions and obtain the user's session sequence. In the session interest extractor module, we capture the inner relationship between behaviors in the same session, employing multi-head self-attention to reduce the influence of unrelated behaviors and applying bias encoding to the input embeddings to retain the order relations of the sequence. In the session interest interacting module, we use a GRU to capture the interaction and evolution of the user's different historical session interests; since session interests related to the target ad have a greater impact on whether the user will click, we use the AUGRU to model the representation of session interests and the target ad. In the prediction module, the embeddings of sparse features and the captured session interests are concatenated and fed into an MLP. Finally, the softmax function is used to obtain the probability that the user clicks on the ad. The loss function is the negative log-likelihood, usually expressed as:

$$L = -\frac{1}{N} \sum_{(x,y) \in \mathcal{D}} \big( y \log p(x) + (1 - y) \log(1 - p(x)) \big),$$

where $\mathcal{D}$ denotes the training set of size $N$, $y \in \{0, 1\}$ indicates whether the user clicked, and $p(x)$ denotes the predicted probability that the user clicks on the ad.

Experiments setting. Datasets. We evaluate the models on several public datasets, including Amazon and Avazu. Evaluation metrics. AUC: the area under the ROC curve is a commonly used indicator for evaluating classification problems such as CTR prediction45; the larger the AUC value, the better the result. Logloss: logloss measures the distance between predicted probabilities and labels in a binary classification problem; the smaller the logloss, the better the model performs. RMSE: RMSE46 is defined as:

$$\text{RMSE} = \sqrt{\frac{1}{|T|} \sum_{i \in T} (y_i - \hat{y}_i)^2},$$

where $y_i$ is the observed score, $\hat{y}_i$ is the predicted value, and $T$ is the testing set. The smaller the RMSE, the better the result.

Parameter settings. We employ dropout to prevent over-fitting in the neural networks, with a dropout rate of 0.4, and set the size of the hidden state in the GRU to 56. We test learning rates of $10^{-4}$, $10^{-3}$, $10^{-2}$ and $10^{-1}$, and vary the number of neurons per layer from 100 to 800.

Comparisons with different models. To verify the effectiveness of the proposed SISA, we compare it with several mainstream CTR prediction models. Figure 3 shows the AUC results of the different models, and Tables 2 and 3 show the logloss and RMSE values, respectively. Several observations can be made from the comparison. Models that capture feature interactions with neural networks outperform shallow ones. DIN, which uses the attention mechanism to represent the user behavior with respect to the target ad, performs better than DeepFM. DIEN35 captures the interest evolving process from user behaviors and achieves higher prediction accuracy. However, the SISA model performs better than all of them: by partitioning user behavior sequences into multiple sessions, user session interests follow a sequential pattern and are more suitable for modeling. The SISA model based on user session interest improves accuracy on all datasets; on the Avazu dataset, SISA improves by 1.8% compared with the other models.
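For reference, the three metrics reported in this comparison can be computed as in the short sketch below, using scikit-learn for AUC and logloss and NumPy for RMSE; the labels and predictions shown are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, log_loss

y_true = np.array([1, 0, 1, 1, 0])            # hypothetical click labels
y_pred = np.array([0.9, 0.2, 0.7, 0.6, 0.4])  # hypothetical predicted CTRs

auc = roc_auc_score(y_true, y_pred)              # larger is better
ll = log_loss(y_true, y_pred)                    # smaller is better
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))  # smaller is better
print(f"AUC={auc:.4f}  Logloss={ll:.4f}  RMSE={rmse:.4f}")
```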
Sensitivity analysis of the model parameters. We explore the influence of different parameters on the results of the SISA model, such as the number of epochs, the number of neurons per layer, and the dropout rate β, where β is the probability of a neuron remaining in the network. First, we set β to 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, and 0.8. As shown in Fig. 4, a well-chosen β helps SISA learn powerful features: when β is properly set (from 0.5 to 0.8), the SISA model reaches its best performance on all datasets, but as β continues to increase, the performance of SISA shows a downward trend. We therefore choose β = 0.6 in the following experiments. Keeping the other factors the same, we study the effect of different numbers of neurons. As Fig. 5 shows, a higher number of neurons does not necessarily produce better results: when the number of neurons is 500, 600 or 700, the performance of SISA stagnates and even worsens on all datasets because the model overfits. We therefore select 400 neurons in the experiments. As shown in Fig. 6, the model performs poorly when the number of epochs is between 0 and 5, because too few iterations cannot determine appropriate parameters. As the number of epochs increases, the RMSE becomes smaller. Compared with the other datasets, the RMSE on the Amazon dataset fluctuates more strongly, because the model needs different numbers of features to train on different datasets and the diversity of the data introduces some error; if the number of epochs is not suitable, the performance of the model fluctuates greatly. The model performs best when the number of epochs is between 10 and 20 in Fig. 6, so we set the number of epochs to 16 in the experiments.

Comparison among SISA variants. Although we have demonstrated strong empirical results, they do not isolate the specific contributions of each component of SISA, so we conduct ablation experiments. In Table 4, SN stands for a network like the one used in FNN, IN stands for the network that captures user session interest, AVG represents the average pooling strategy, and MAX represents the maximum pooling strategy; in the AVG and MAX variants, the self-attention module is replaced by the corresponding pooling, and IN-ATT stands for the full SISA model. In Table 4, the IN-ATT model has the highest AUC value. The FNN model uses neural networks to automatically capture feature interactions. The SN-AVG, SN-MAX and SN-ATT models ignore the impact of interest on click-through rate prediction and are not as good as models based on session interest. At the same time, some models do not distinguish the contributions of different features to the prediction results, so their prediction performance is poor. The SISA model achieves better results by combining user session interest with the self-attention mechanism.

In this paper, we propose a new model, SISA, which models user session interest for CTR prediction. First, we divide the user's sequential behavior into sessions and design a session interest extractor module, employing a self-attention network with bias encoding to obtain an accurate expression of interest for each session. Then, we use a GRU to capture the interaction of different session interests and exploit a GRU with attentional update gate (AUGRU) to aggregate the session interests with the target ad, identifying the influence of each session interest in the session interest interacting module. Next, in the prediction module, the embeddings of sparse features and the captured session interests are concatenated and fed into an MLP. Experimental results demonstrate the effectiveness of SISA on different datasets.
In the future, we will investigate using knowledge graphs to capture user interests for click-through rate prediction.

References
1. Wide and deep learning for recommender systems
2. Web-scale Bayesian click-through rate prediction for sponsored search advertising in Microsoft's Bing search engine
3. Cost-per-click pricing for display advertising
4. Real-time personalization using embeddings for search ranking at Airbnb
5. Simple and scalable response prediction for display advertising
6. Predicting clicks: CTR estimation of advertisements using logistic regression classifier
7. Ad click prediction: A view from the trenches
8. Modeling delayed feedback in display advertising
9. Factorization machines
10. Field-aware factorization machines for CTR prediction
11. Practical lessons from predicting clicks on ads at Facebook
12. Computer vision techniques in construction: A critical review
13. Deep learning-enabled medical computer vision
14. Deep learning and medical image processing for coronavirus (COVID-19) pandemic: A survey
15. A review of convolutional neural network applied to fruit image processing
16. Transformers: State-of-the-art natural language processing
17. Attention in natural language processing
18. Deep learning over multi-field categorical data
19. Wide and deep learning for recommender systems
20. DeepFM: An end-to-end wide and deep learning framework for CTR prediction
21. Product-based neural networks for user response prediction
22. Combining explicit and implicit feature interactions for recommender systems
23. Deep and cross network for ad click predictions
24. Why do deep residual networks generalize better than deep feedforward networks? A neural tangent kernel perspective
25. Deep crossing: Web-scale modeling without manually crafted combinatorial features
26. Design of a multi-array CNN model for improving CTR prediction
27. Towards accurate and interpretable sequential prediction: A CNN & attention-based feature extractor
28. Click-through rate prediction with the user memory network
29. A novel deep learning method based on attention mechanism for bearing remaining useful life prediction
30. A new approach for advertising CTR prediction based on deep neural network via attention mechanism
31. Meta-Wrapper: Differentiable wrapping operator for user interest selection in CTR prediction
32. Attentional factorization machines: Learning the weight of feature interactions via attention networks
33. A time-aware dynamic service quality prediction approach for services
34. Deep interest network for click-through rate prediction
35. Deep interest evolution network for click-through rate prediction
36. Robust collaborative filtering recommendation with user-item-trust records
37. Bidirectional GRU networks-based next POI category prediction for healthcare
38. Neural attentive session-based recommendation
39. Graph contextualized self-attention network for session-based recommendation
40. Multi-aspect aware session-based recommendation for intelligent transportation services
41. Neural news recommendation with multi-head self-attention
42. Gate-variants of gated recurrent unit (GRU) neural networks
43. Enhanced deep gated recurrent unit and complex wavelet packet energy moment entropy for early fault prognosis of bearing
44. Image-based recommendations on styles and substitutes
45. Partial AUC optimization based deep speaker embeddings with class-center learning for text-independent speaker verification
46. A new design of evolutionary hybrid optimization of SVR model in predicting the blast-induced ground vibration

The authors declare no competing interests.
Correspondence and requests for materials should be addressed to F.L.