key: cord-0017267-bm6iujes authors: Shen, Guojiang; Yu, Kaifeng; Zhang, Meiyu; Kong, Xiangjie title: ST-AFN: a spatial-temporal attention based fusion network for lane-level traffic flow prediction date: 2021-04-22 journal: PeerJ Comput Sci DOI: 10.7717/peerj-cs.470 sha: 88cca3247419b5f938a6f9e7d8db8a46e33195f3 doc_id: 17267 cord_uid: bm6iujes

Traffic flow prediction is the foundation of many applications in smart cities, and finer-grained prediction is required as applications become more refined. However, most existing studies cannot meet these requirements. In this paper, we propose a spatial-temporal attention based fusion network (ST-AFN) for precise lane-level prediction. This seq2seq model consists of three parts, namely a speed processing network, a spatial encoder, and a temporal decoder. In order to exploit the dynamic dependencies among lanes, attention mechanism blocks are embedded in these networks. The application of a deep spatial-temporal information matrix improves reliability. Furthermore, a specific ground-road lane selection method is also proposed for ST-AFN. To evaluate the proposed model, four months of real-world traffic data are collected in Xiaoshan District, Hangzhou, China. Experimental results demonstrate that ST-AFN achieves more accurate and stable results than the benchmark models. To the best of our knowledge, this is the first time that a deep learning method has been applied to forecast traffic flow at the lane level on urban ground roads instead of expressways or elevated roads.

Short-term traffic flow forecasting analyzes historical data to predict future traffic conditions. It can fulfill various requirements in a smart city (Kong et al., 2020b; Kong et al., 2020a), such as travel route guidance, congestion relief, and road planning (Do et al., 2019). With the rapid development of AI and connected vehicle technology, smart cities impose higher-precision and finer-grained requirements (Kong et al., 2020b; Kong et al., 2020a) on traffic flow prediction. Lane-level traffic forecasting develops from traditional prediction by using lane parameters instead of whole road segments (Shen, Zhao & Kong, 2021). It not only makes the prediction more refined, but also lays the foundation for subsequent advanced applications such as high-precision navigation, unmanned vehicle technologies, and cooperative vehicle-infrastructure systems (Liao et al., 2018). Even though it is one of the key issues of Intelligent Transportation Systems (ITS), it is often overlooked (Liu, Zhou & Li, 2019). According to statistics, lane-level prediction studies account for less than 10% of the total (Gu et al., 2019). Apart from the difficulty of obtaining lane-based data, the main reason is the mistaken assumption that different lanes have similar traffic patterns. Recently, some studies have demonstrated the independence of lanes (Daganzo, 2002). The real-world data we collected strongly support this finding. As shown in Fig. 1A, lane 2 and lane 3 are the two straight lanes under the same junction, and the difference in average traffic volume between them can be up to 21.34%. Figure 1B shows that the lane with the highest traffic volume (lane 3) carries 2.63 times the volume of the lane with the lowest volume (lane 4). Compared with road sections or road networks, lanes are more susceptible to interference and tend to show different states when affected by accidents, bad weather, and other factors.
Recently, several research groups have studied lane-level traffic models and prediction with deep learning methods. Gu et al. (2019) applied entropy-based gray correlation analysis to the dependency between lanes and employed a network combining Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU). Ke et al. (2020) expanded the data of each lane into a separate channel matrix as the input of a Convolutional Neural Network (CNN), and then obtained the final result after fusion. Xie et al. (2019) utilized Deep Belief Networks (DBN) and LSTM to build a vehicle lane-change model that includes lane-change decision and lane-change implementation. These studies still have some limitations. Firstly, compared with highways, the traffic volume on low-speed ground roads is smaller and vehicle speeds are relatively lower; because of the complex road structures and various traffic signal schemes, the traffic patterns are more diversified (Kamal, Hayakawa & Imura, 2020). Secondly, in forecasting, the state of a traffic node is related both to its own history and to its neighboring nodes; temporal and spatial sequences cannot be processed through simple parallel analysis or linear fusion, and how to handle them jointly remains a thorny issue.

In this paper, we present a spatial-temporal attention mechanism based fusion network (ST-AFN) to address these issues. The contributions of this study mainly involve the following three aspects:

• ST-AFN uses the bidirectional LSTM as its basic unit and is designed with a seq2seq structure. Spatial attention blocks and temporal attention blocks are embedded in the encoder network and decoder network, respectively. These blocks solve the problem of long-distance dependence efficiently in parallel and capture deep-level characteristics. Specifically, the output matrix of the speed processing network is fused with the result of the spatial encoder to construct the information matrix, and further analysis of this matrix yields the final volume prediction.

• Based on traffic volume, vehicle speed, and complex road structures, we consider the relationships between lanes in detail and adopt a novel ground-road lane selection strategy.

• Real-world traffic data verify that ST-AFN outperforms the baseline methods (including the state-of-the-art lane-level forecasting method FDL) in both accuracy and stability.

Throughout the history of traffic flow prediction, the methods can be roughly divided into three categories: parametric, non-parametric, and deep learning (Xiang, Lu & Liu, 2011; Lint & Hinsbergen, 2012). Parametric methods (e.g., the Autoregressive Integrated Moving Average model and linear regression models) are based on the assumption that traffic runs smoothly: a deterministic model structure is established, and the parameters in the structure are determined from real data (Lippi, Bertini & Frasconi, 2013; Chan, Dillon & Singh, 2012). However, these models depend on a stationarity assumption and cannot reflect the nonlinearity and uncertainty of traffic data. Non-parametric methods (e.g., Support Vector Machines and Markov models) are more flexible: their variable structures and parameters can cope with all kinds of unexpected situations in traffic (Hinsbergen et al., 2011; Wu, Wei & Su, 2003).
As a branch of machine learning, deep learning has promoted many research fields with its strong learning ability and excellent portability (LeCun, Bengio & Hinton, 2015; Kong et al., 2019). Wu et al. (2018) trained a fully connected neural network on traffic data organized into hourly, daily, and weekly categories in turn, and then fused the outputs to obtain the final result. Liu et al. (2019a) and Liu et al. (2019b) utilized a deep residual network and proposed a Traffic State Index to measure the congestion of a region. Li et al. (2018) modeled vehicle movement as a diffusion process on a directed graph, introduced a diffusion convolutional recurrent neural network, and achieved accurate prediction results. Zhao et al. (2019) combined GRU and the Graph Convolutional Network (GCN), using the GRU to capture temporal dependence and the GCN to capture spatial dependence, and completed sub-region traffic flow prediction. In addition to traffic flow forecasting, deep learning is also widely used in other areas of urban computing. Liu et al. (2019a) and Liu et al. (2019b) presented a graph-processing-based traffic estimation system; the system decomposed the numerous computations involved in non-linear models and used crowd density and vehicle data to predict city-scale traffic patterns. Zhang, Zhang & Qi (2017) designed a residual network to model temporal proximity and periodicity; the outputs of the two units were given different weights and aggregated. Wang et al. (2017) proposed an automobile demand model consisting of three residual sub-networks, which were used to analyze weather data, traffic data, and order data. Tong et al. (2017) used taxi trajectory data to forecast taxi demand. Shen et al. (2019) used a Siamese CNN for multi-pedestrian tracking.

The attention mechanism was first applied in natural language processing (NLP) (Bahdanau, Cho & Bengio, 2015). It emphasizes the reasonable allocation of limited computing power when facing a problem (Chaudhari et al., 2019). Owing to its excellent performance, the mechanism has led to breakthroughs in NLP (Maharjan et al., 2018) and computer vision (CV). At the same time, attention mechanisms have been introduced into transportation research. Guo et al. (2019) added attention blocks to a GCN and fused the outputs of three subnetworks to obtain the final prediction. Liu et al. (2019a) and Liu et al. (2019b) expanded a variety of machine learning result vectors into a multi-channel spatio-temporal matrix and used attention when calculating the weights between channels. Hao, Lee & Zhao (2019) used the attention mechanism twice, in the encoder and the decoder, and used an embedding layer to incorporate external factors such as weather and emergencies. Zhang et al. (2019) used a GCN to extract spatial features and then fed them into a seq2seq model in which attention blocks were embedded in the encoder.

Before ST-AFN is introduced in detail, this section describes the specific ground-road lane selection strategy and the traffic flow prediction problem.

Definition 1: Lane Selection. In previous lane-level traffic flow prediction research, the experimental data came from highways or elevated freeways. Under this road condition, the typical solution is to divide the long, straight roads into multiple sub-sections based on the sensors in the main line and on the ramps, as shown in Fig. 2.
Such strategies have some shortcomings. Firstly, ramps are narrow and easily become bottlenecks during the dense morning and evening peaks; the existing strategy does not consider the large number of vehicles flowing in and out through the ramps (the red arrows in Fig. 2), which may lead to inaccurate results. Secondly, limited by this research setting, the experiments can only take a single straight lane as the research object, which gives poor scalability and a limited range of practical applications. In comparison, urban ground roads account for a higher proportion of total road mileage, cover a wider area, and exhibit more complex traffic patterns (Kamal, Hayakawa & Imura, 2020). However, there is still no mature and efficient lane selection strategy for ground roads. Considering the above strategies and the characteristics of the ground road network, this paper proposes the selection strategy shown in Fig. 3. On urban ground roads, intersections are highly correlated, so selecting lanes by intersection is sufficient to support the subsequent prediction. This method not only considers adjacent lanes under the same intersection, but also selects lanes at the upstream and downstream intersections, including straight lanes and the often overlooked left-turn and right-turn lanes. The strategy is based on the physical connection structure of the road and captures the lane-level information well. In this way, it can not only improve prediction accuracy but also broaden the application range.

Definition 2: Flow Prediction. After the lane traffic data are collected, we divide the training data set into n parts at fixed time intervals. For convenience, this article denotes the total number of lanes under k intersections as m, and each lane has h-dimensional attributes. The traffic characteristic parameters can then be denoted by an n×h×m matrix T^±, where T^+_q denotes the forward volume and T^+_s denotes the forward speed. The prediction goal can be described as estimating the lane volume values X^{q+} over the next τ time steps (the forecast length) through the composition of three functions: an encoding function g, a speed processing function p, and a final decoding and output function f.

ST-AFN is mainly composed of three parts: a deep speed processing network, a spatial encoder, and a temporal decoder. Deep bidirectional LSTMs are connected in series to form the speed processing network. The encoder with spatial attention blocks is utilized to analyze the spatial characteristics of the traffic parameters. The outputs of these two networks are then merged to build the information matrix, which is the input of the decoder. After the decoder extracts temporal features, a fully connected (FC) layer completes the final prediction, as shown in Fig. 4. The following describes each network in order.

The current traffic situation of a road section is closely related to its upstream and downstream sections: upstream traffic flows toward the downstream section, and the state of the downstream section (such as congestion) gradually accumulates, which in turn affects the upstream traffic flow. Along the time dimension, the bidirectional LSTM superimposes and fuses a forward LSTM and a backward LSTM.
While overcoming the problems of vanishing and exploding gradients, it also considers both the forward and backward propagation of sequential data. In this network, h^u_ve is the result of splicing h^u_{l,ve} (l = 1, 2, ..., m), and it is then transformed linearly to obtain the final hidden state H^u_ve, where W^u_ve is the weight matrix and b^u_ve is the bias term. The final cell state C^u_ve is calculated at the same time. C^u_ve and H^u_ve are then used to initialize the next unit and are collected to build the context vector.

When predicting traffic flow, the way temporal and spatial dependence are processed directly affects the accuracy of the experimental results. The attention mechanism helps the network analyze the dependencies between lanes accurately at each moment in real time. After the target lane is selected, it pays more attention to lanes with high correlation, reduces the weights of irrelevant lanes, and allocates weights dynamically and efficiently in a parallel manner. The proposed encoder network is shown in Fig. 6. The encoder, with embedded spatial attention blocks, is composed of another bidirectional LSTM network. The spatial block in each unit can be summarized as follows: first, the query Q^i_e, key K^i_e, and value V^i_e are linearly transformed to obtain Q̃^i_e, K̃^i_e, and Ṽ^i_e; second, the dot product between query and key is calculated and normalized with the Softmax function to obtain the attention weights; finally, the weights are applied to the value to produce the re-weighted data, which becomes the input of the i-th spatial unit.

The temporal attention blocks are embedded in the decoder network; they are employed to distinguish the importance of each period and to assign the temporal weights. The context vector θ is the input of the decoder network. As shown in Fig. 7, the speed encoding consists of the hidden states of each subunit in the speed processing network; similarly, the volume encoding consists of the hidden states of each subunit in the encoder network. In the figure, batch_size is the number of samples per training batch, num_layer is the number of stacked recurrent layers, T is the number of time steps used for prediction, and input_size is the number of lanes. H^n_qe and H^n_ve are the final hidden states of each unit of the speed processing network and the encoder network, respectively; H_ve and H_qe are the matrices formed by these final hidden states; and W_qv and b_qv are the weight and bias terms of the linear transform.

As shown in Fig. 8, the input of the i-th temporal unit is θ_i. We denote the final hidden state and final cell state as H^{i-1}_qd and C^{i-1}_qd, respectively. In the temporal attention computation, W^i_ad is the weight matrix and b^i_ad is the bias term, both of which are learnable parameters. After obtaining the score d^i_k for each time step, we apply the Softmax function to normalize it and obtain the attention weight β^i_k. We then weight the original data θ_i with β^i_k to obtain θ̃_i. Finally, θ̃_i is combined with the historical traffic flow of the target lane, y^his_i, to obtain ŷ^his_i. Following this procedure, each unit in the decoder network iteratively allocates temporal weights up to the N-th unit, where N is the size of the dimension of θ excluding the temporal dimension.
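To make the encoder description above concrete, the following is a minimal, self-contained PyTorch sketch of a bidirectional LSTM encoder preceded by a softmax-normalized dot-product attention step over the lane dimension. It is not the authors' released implementation; the class name, layer sizes, and tensor shapes (e.g., SpatialAttentionEncoder, hidden_size=64) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttentionEncoder(nn.Module):
    """Sketch of an encoder that re-weights lanes with dot-product
    attention and then runs a bidirectional LSTM over time."""

    def __init__(self, num_lanes: int, hidden_size: int = 64, num_layers: int = 2):
        super().__init__()
        # Linear maps producing query / key / value from the lane vector.
        self.w_q = nn.Linear(num_lanes, num_lanes)
        self.w_k = nn.Linear(num_lanes, num_lanes)
        self.w_v = nn.Linear(num_lanes, num_lanes)
        # One value per lane at each time step (e.g., volume or speed).
        self.lstm = nn.LSTM(
            input_size=num_lanes,
            hidden_size=hidden_size,
            num_layers=num_layers,
            batch_first=True,
            bidirectional=True,
        )

    def spatial_attention(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, lanes)
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)
        # Lane-to-lane similarity scores, softmax-normalized so that
        # highly correlated lanes receive larger weights.
        scores = torch.matmul(q.transpose(1, 2), k) / q.size(-1) ** 0.5
        weights = F.softmax(scores, dim=-1)          # (batch, lanes, lanes)
        return torch.matmul(v, weights)              # re-weighted lane values

    def forward(self, x: torch.Tensor):
        x = self.spatial_attention(x)
        outputs, (h_n, c_n) = self.lstm(x)           # (batch, time, 2 * hidden)
        return outputs, (h_n, c_n)


if __name__ == "__main__":
    batch, steps, lanes = 128, 12, 8                  # illustrative sizes only
    encoder = SpatialAttentionEncoder(num_lanes=lanes)
    demo_input = torch.randn(batch, steps, lanes)     # random stand-in data
    out, _ = encoder(demo_input)
    print(out.shape)                                  # torch.Size([128, 12, 128])
```

In this sketch the attention weights are recomputed from the current input at every call, which mirrors the idea of dynamically re-weighting correlated lanes rather than fixing lane weights in advance.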
Finally, we concatenate H^N_qd and θ̃_N and feed the resulting matrix into a fully connected (FC) layer to obtain the predicted traffic volume of the target lane, q^{i+τ}_tag.

The training process

In the training process of ST-AFN, the training data are normalized and cleaned to form each batch, and the learning rate is adjusted using ReduceLROnPlateau: a maximum tolerance threshold is set, and the learning rate is dynamically reduced using the loss value of each epoch as the indicator.

In this section, we use real-world traffic data to evaluate the proposed ST-AFN and the benchmark algorithms. The experiments run on a 64 GB Ubuntu 18.04 system equipped with an Intel Xeon Silver CPU and an NVIDIA Quadro M4000 GPU. The hyperparameters in each experimental group are set uniformly: the learning rate is 0.001, the batch size is 128, and the number of epochs is 100.

The real traffic data required for the experiments were collected in Xiaoshan District, Hangzhou, China from July 1, 2017 to October 1, 2017, for a total of 123 days. These data were collected by surveillance cameras at various intersections. The main format of the original data is shown in Table 1. Because the original data are relatively scattered, we filter the required vehicle records based on the vehicle's departure time and the camera location, and eliminate erroneous records (e.g., the departure time is earlier than the entry time, or the license plate number is empty). The distance between cameras is a fixed value, and the vehicle travel time can be obtained from the entry and departure times, from which the average speed is then calculated. As shown in Fig. 9A, each lane under the intersection of Tonghui Road-Changyuan Road and the intersection of Shixin Road-Boxue Road is used as the target lane in the two experiments, and the remaining lanes of the same roads participate as candidate lanes. Data are aggregated at five-minute intervals (the flow is summed and the speed averaged over each five minutes), so each detector generates 288 records per day. We then plot the obtained characteristic parameters as curves to identify noise points and make the series as smooth as possible. The large amount of data is sufficient to support the training and testing of the model. To reduce the interference caused by noisy data, abnormal points are repaired with adjacent normal records, and the StandardScaler method is then applied to the repaired data.

The data of Xiaoshan District, Hangzhou are used as the main experimental data; to prove ST-AFN's portability, we also collect real traffic data from Qingdao for verification. These data are provided by a traffic big data competition held in Qingdao in October 2019. The original data mainly consist of laneID, speed, cameraPos, timestamp, and other fields, from which we select an arterial road with different directions and adjacencies as the data set. The processing strategy is similar to the above, with data aggregated at five-minute intervals. As shown in Fig. 9B, each lane under the intersection of Jinshui Road-Dongchuan Road is used as the target lane. The lanes selected for each of the three experimental datasets are presented in Table 2, arranged from top to bottom in the order in which they connect each road in the north-south direction; in the Jinshui Road experiment set, they are arranged from left to right.
The intersections where the target lanes are located in each experiment are marked by '*' (e.g., *Changyuan Road-N), and the N, E, and W suffixes are directional characters (e.g., -N for the north intersection, -N&E&W for the north, east, and west intersections).

For comparison with our model, several parametric, non-parametric, and neural network models are selected as baseline methods (including a state-of-the-art lane-level forecasting method).

• SVR: Support Vector Regression is a common method in traffic flow prediction. As an important branch of support vector machines, it differs from SVM mainly in the optimization criterion: the optimal hyperplane in SVM separates the categories as far apart as possible, while the optimal hyperplane in SVR minimizes the sum of deviations of all sample points.

• ARIMA: The Autoregressive Integrated Moving Average model is a classic time series forecasting method. It contains an autoregressive component and a moving average component.

• LSTM: The LSTM, composed of input gates, output gates, and forget gates, is an improved version of the Recurrent Neural Network (RNN).

• TM-CNN: TM-CNN (Ke et al., 2020) is a two-stream multi-channel convolutional neural network. The authors first convert traffic speed and volume data into matrices, then expand the data of each lane into a separate channel matrix as the input of a CNN, and obtain the final result after fusion.

• FDL: FDL (Gu et al., 2019) is a fusion deep learning model. Entropy-based gray relational analysis is first used to judge the dependency between lanes, and an LSTM-GRU network is then used to complete the lane-level speed forecast. It is one of the state-of-the-art models in lane-level prediction.

In this study, we evaluate the prediction results with the Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE), computed over the N forecast targets, where y_i is the observed value and ŷ_i is the forecast result.

The difference in traffic flow between weekdays and weekends is obvious: for example, morning and evening peaks are pronounced on weekdays, while the traffic flow is high throughout the day on weekends. We divide the data set into a training set and a test set at a ratio of 7:3. The test set has a total of 37 days, including 27 weekdays and 10 weekend days. Table 4 shows ST-AFN's performance. As the results indicate, although the difference is small, ST-AFN achieves better prediction results on weekdays, when the morning and evening peaks are obvious.

We use the data set from Jinshui Road, Qingdao to prove ST-AFN's portability. On this data set, ST-AFN again outperforms the state-of-the-art lane-level forecasting method FDL, consistent with the earlier results, which demonstrates ST-AFN's portability. In our opinion, there are three main reasons. Firstly, we have developed a processing strategy that puts traffic data into a standard format. Secondly, the specific lane selection strategy is based on traffic volume, vehicle speed, and complex road structures. Lastly, ST-AFN has a strong self-learning ability and can achieve high accuracy after training on temporal and spatial series data. The performance of ST-AFN under sparse data conditions remains to be discussed (see Table 5).
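For reference, the three evaluation metrics above follow their standard definitions. The short NumPy sketch below computes them from an array of observed values y and forecasts ŷ; the function names and toy values are ours, not taken from the paper's code.

```python
import numpy as np


def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean Absolute Error."""
    return float(np.mean(np.abs(y_true - y_pred)))


def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root Mean Square Error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))


def mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean Absolute Percentage Error (in percent); assumes no zero observations."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)


if __name__ == "__main__":
    y = np.array([30.0, 42.0, 55.0, 61.0])       # toy observed lane volumes
    y_hat = np.array([28.0, 45.0, 52.0, 64.0])   # toy forecasts
    print(mae(y, y_hat), rmse(y, y_hat), mape(y, y_hat))
```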
In the field of natural language processing, the seq2seq architecture is often used for real-time translation tasks because of its speed (Maharjan et al., 2018). The attention mechanism emphasizes that, when facing a series of problems, it is more reasonable to allocate limited computing power carefully, filter out high-value information, and improve processing efficiency (Han et al., 2020). It can obtain a relatively large accuracy improvement at a relatively small cost in time. Figure 10 shows the loss at each epoch during training. It can be seen that the three algorithms have completed training after 50 epochs. TM-CNN (in blue) improves quickly at the beginning, but its training finishes relatively early and its final loss value is higher. ST-AFN's training requires more epochs and achieves the best results. Figures 10A and 10B represent the results on Shixin Road and Tonghui Road, respectively.

Figure 11 and Table 6 show the training times of the three models. Owing to its seq2seq structure and attention modules, ST-AFN trains slightly faster than FDL, which is based on the gray correlation mechanism. TM-CNN takes longer because of its frequent convolution operations. Figures 12 and 13 show that ST-AFN outperforms the state-of-the-art model FDL in predicting the traffic flow of each lane on a given day; this superiority is more obvious during the morning and evening peaks.

The attention results of ST-AFN are shown in Figs. 14 and 15. In these figures, the y-axis represents the time intervals and the points on the x-axis represent the lanes. In detail, we use the following lane ordering rule: lanes closer to the target lane are assigned smaller indices, and when distances are equal, upstream lanes precede downstream lanes. A darker point in the figure indicates a greater weight score. From the results we can summarize that the closer in time and the closer in distance, the greater the weight value. For the same distance, the upstream lane has a greater influence than the downstream lane. Moreover, in this experiment, temporal dependency plays a more important role than spatial dependency.

In this paper, we integrate the attention mechanism with deep learning techniques to build the spatial-temporal attention based fusion network (ST-AFN). Furthermore, a specific ground-road lane selection method is proposed for ST-AFN. The experimental results show that our model outperforms the previous state-of-the-art algorithms in lane-level traffic prediction.

No additional external funding was received for this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Neural machine translation by jointly learning to align and translate
Neural-network-based models for short-term flow forecasting using a hybrid exponential smoothing and Levenberg-Marquardt algorithm
An attentive survey of attention models
A behavioral theory of multi-lane traffic flow
An effective spatial-temporal attention based neural network for traffic flow prediction
Short-term prediction of lane-level traffic speeds: a fusion deep learning model
Attention based spatial-temporal graph convolutional networks for traffic flow forecasting
Recognition for hybrid urban road systems via digraph convolutional network
Sequence to sequence learning with attention mechanism for short term passenger flow prediction in large-scale metro system
Localized extended Kalman filter for scalable real-time traffic state estimation
Development and evaluation of an adaptive traffic signal control scheme under a mixed-automated traffic scenario
Two-stream multi-channel convolution neural network for multi-lane speed prediction considering traffic volume impact
Mobile crowdsourcing and pervasive computing for smart cities
Skill ranking of researchers via hypergraph
Mobile edge cooperation optimization for wearable internet of things: a network representation-based framework
Realtime mask identification for COVID-19: an edge computing-based deep learning framework
Deep learning
Attention based glaucoma detection: a large-scale database and CNN model
Diffusion convolutional recurrent neural network: data-driven traffic forecasting
Deep sequence learning with auxiliary information for traffic prediction
Short-term traffic and travel time prediction models
Short-term traffic flow forecasting: an experimental comparison of time-series analysis and supervised learning
A deep spatio-temporal residual network for regional traffic prediction
Attention-based deep ensemble net for large-scale online taxi-hailing demand prediction
Think like a graph: real-time traffic estimation at city-scale
A genre-aware attention model to improve the likability prediction of books
GCN2CDD: a commercial district discovery framework via embedding space clustering on graph convolution networks
Infrared multi-pedestrian tracking in vertical view via siamese convolution network
The simpler the better: a unified approach to predicting original taxi demands based on large-scale online platforms
DeepSD: supply-demand prediction for online car-hailing services using deep neural networks
Travel-time prediction with support vector regression
A hybrid deep learning based traffic flow prediction method and its understanding
A Bayesian dynamic linear model approach for real-time short-term freeway travel time prediction
A data-driven lane-changing model based on deep learning
Deep spatio-temporal residual networks for citywide crowd flows prediction
Multistep speed prediction on traffic networks: a deep learning approach considering spatio-temporal dependencies
A temporal graph convolutional network for traffic prediction

The authors thank Prof. Xun Yu from New York Institute of Technology for his help with the writing.

The following grant information was disclosed by the authors: the National Natural Science Foundation of China (62073295, 62072409, 61672463), the Zhejiang Provincial Natural Science Foundation (LR21F020003), and the Fundamental Research Funds for the Provincial Universities of Zhejiang (RF-B2020001).

Xiangjie Kong is an Academic Editor for PeerJ Computer Science.
• Guojiang Shen conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.
• Kaifeng Yu conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
• Meiyu Zhang performed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
• Xiangjie Kong performed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.

The following information was supplied regarding data availability: The data and code are available at GitHub: https://github.com/MCyutou/ST-AFN.