title: Dynamic Gesture Recognition Algorithm Based on 3D Convolutional Neural Network
authors: Liu, Yuting; Jiang, Du; Duan, Haojie; Sun, Ying; Li, Gongfa; Tao, Bo; Yun, Juntong; Liu, Ying; Chen, Baojia
date: 2021-08-16
journal: Comput Intell Neurosci
DOI: 10.1155/2021/4828102

Gesture recognition is one of the important ways of human-computer interaction and is mainly performed with visual technology. Temporal and spatial features are extracted by convolving the video that contains the gesture. However, compared with the convolution of a single image, the multiframe images of a dynamic gesture require more computation and more complex feature extraction and introduce more network parameters, which affects the recognition efficiency and real-time performance of the model. To solve these problems, a dynamic gesture recognition model based on CBAM-C3D is proposed. Key frame extraction, multimodal joint training, and network optimization with a BN layer are used to improve network performance. Experiments show that the recognition accuracy of the proposed 3D convolutional neural network combined with the attention mechanism reaches 72.4% on the EgoGesture dataset, a large improvement over current mainstream dynamic gesture recognition methods, which verifies the effectiveness of the proposed algorithm.

As a common means of communication in daily life, gestures also have great application in human-computer interaction. Compared with expressions, actions, and other interactive means, gestures are more intuitive, natural, and comfortable. Therefore, gesture communication is the most used means of human-computer interaction besides language [1, 2]. Gestures have a wide range of applications in human-computer interaction technology, such as intelligent driving, virtual reality, augmented reality [3, 4], medical assistance [5], and so on [6-8]. In intelligent driving, gestures are captured by the vehicle's intelligent control system and analyzed by its intelligence center, which issues instructions so that the driver can control vehicle navigation and entertainment functions [9, 10]. In virtual and augmented reality, Microsoft's HoloLens already lets users interact with a virtual environment through both hands [11]. In medical assistance, gesture recognition can support hearing-impaired groups and enable normal communication for deaf-mute people. Gestures can be divided into dynamic and static. Static gestures concern hand posture and shape at a single point in time, such as the gesture "OK"; only the spatial features of the gesture are considered in recognition. Dynamic gesture recognition must consider not only hand postures and shapes but also the spatial displacement and spatiotemporal correlation of the whole gesture [12-14]. Compared with static gestures, dynamic gestures are closer to people's expression habits and carry richer information [15-17], so they have more practical significance. Researchers have proposed a variety of dynamic gesture recognition algorithms, including feature extraction algorithms such as MEI, HOG, and HOF [18] and classification algorithms such as the hidden Markov model [19].
With the development of deep learning, many video classification algorithms, for example, C3D [20], dual-stream convolution networks, and LSTM [21], have been applied to dynamic gesture recognition [22-24], achieving high recognition rates. However, the network input is large because both spatial and temporal information must be extracted from the video, which results in a huge number of parameters and a large amount of computation. Such networks have complex structures and poor real-time performance. The effect of dynamic gesture recognition can be increased by optimizing the input and improving the existing feature extraction methods. In this manuscript, we propose a dynamic gesture recognition algorithm based on a 3D convolutional neural network with an attention mechanism, which makes several contributions: (a) The interframe difference method is optimized: input video data are processed to reduce data redundancy and format inconsistency. (b) A 3D convolutional neural network is combined with an attention mechanism: CBAM is used to optimize the structure of the 3D convolutional neural network, reduce the transmission loss of input information, and realize feature extraction in both the spatial and temporal dimensions. (c) Multimodal joint training is applied: to improve the effect of gesture recognition, dual-mode feature inputs are fused so that the features of the two modes complement each other.

As deep learning develops by leaps and bounds, computer vision has advanced with it, and many excellent image analysis and recognition algorithms have been proposed. For example, AlexNet, trained with the dropout regularization of Srivastava et al. [25], achieved the best recognition performance in the ImageNet image recognition competition [26, 27]. Different from traditional methods with hand-designed features, deep learning automatically extracts features through a convolutional neural network. By training and debugging the feature extraction network, deep learning can extract more critical and representative spatiotemporal features for video classification and action recognition [28, 29]. Dynamic gesture recognition networks based on deep learning are mainly divided into three types: two-stream networks, long short-term memory (LSTM) networks, and three-dimensional convolutional neural networks (3D-CNN). The concept of the two-stream network was first proposed by Simonyan and Zisserman [30] in 2014 and obtained the best recognition results on the behaviour recognition benchmarks UCF-101 [31] and HMDB-51 [32]. The two-stream algorithm also has some defects: it obtains spatial information only through a single image, so it is difficult to handle large changes in behaviour, and optical flow images are suitable only for capturing small changes in motion. Aiming at these problems, Wang et al. [33] designed the temporal segment network (TSN), which sparsely samples long image sequences and obtains more robust spatiotemporal features through Inception v2 to improve action recognition. Based on the work of Wang et al., Feichtenhofer et al. [34] studied methods of fusing spatial and temporal information and found that feature fusion in the higher convolution layers of the network gives a better recognition effect. In two-stream convolution networks, computing optical flow information can occupy a lot of memory and limit the recognition rate.
Zhu et al. [35] designed a convolution network that estimates optical flow instead of computing it explicitly, cascaded the temporal information network with the spatial information network, and used stacks of multiple images as input to complete action recognition. Dynamic gesture recognition [36] is similar to action recognition: it also uses an algorithm to obtain the spatial and temporal information expressed by the object in the video in order to understand the action.

LSTM is a type of recurrent neural network (RNN). In an RNN, the input of each layer consists of the output of the previous layer together with the output of the same layer at the previous time step; that is, a neuron's output is fed back as input to its own layer. Therefore, an RNN can effectively deal with temporal feature extraction. However, due to its structure, the network can only handle short time series. LSTM is designed to solve the problems of long time series and the loss of historical information during iteration. To counter information loss over long sequences, LSTM controls the information processing of neurons with three structures: the input gate, the forget gate, and the output gate. In dynamic gesture recognition, LSTM-based models use an ordinary convolutional network to extract features, serialize the spatial features extracted by that network through the LSTM, and then classify them in the fully connected layer.

3D-CNN improves the convolution kernel and pooling method of the traditional 2D convolutional neural network. Continuous motions contain unique temporal information; however, a 2D convolution kernel can only extract spatial information from an image [37, 38], whereas a 3D convolution kernel is designed to extract features from continuous images and so obtain temporal information (see the shape sketch at the end of this section). Pooling likewise covers both the spatial scale and the channel scale. Many scholars have studied 3D convolutional neural networks. For example, Tran et al. [39] proposed the C3D network to realize dynamic gesture recognition, and the I3D network, which builds on previous successful models, was proposed by Carreira and Zisserman [40]. To reduce the computing cost, Qiu et al. [41] proposed the P3D network, which optimizes the convolution model. Moreover, some algorithms use feature fusion to integrate image information and optical flow information, such as the dual-stream algorithm and the MFFs-net algorithm [42]. The accuracy of MFFs-net on the Jester dataset is 96.28%, but calculating optical flow requires a lot of computing resources. The Molchanov team has also done a lot of work in dynamic gesture recognition. In paper [25], the team proposed integrating RGB images, depth images, and radar data to realize dynamic gesture recognition. In paper [43], the team proposed training two networks of different resolutions with a 3D-CNN and fusing the recognition results to improve accuracy. In paper [44], the team also used a residual neural network to optimize the 3D-CNN and verified the effectiveness of its model on the SKIG and ChaLearn2014 datasets. Since sign language datasets also contain a large number of dynamic gestures, most dynamic gesture recognition models also use sign language as the dataset for training and testing. In paper [45], a dual-stream 3D-CNN is designed to realize dynamic gesture recognition based on effective fusion of multimodal data.
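To make the dimensional difference between 2D and 3D kernels concrete, the following minimal PyTorch sketch (illustrative only; the channel count, clip length, and kernel size are arbitrary choices rather than values from any of the cited networks) shows that a 3D convolution preserves a temporal axis in its output while a 2D convolution does not:

```python
import torch
import torch.nn as nn

# A clip of 16 RGB frames at 112x112: (batch, channels, time, height, width).
clip = torch.randn(1, 3, 16, 112, 112)

# A 2D convolution sees one image at a time and produces a purely spatial map.
conv2d = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)
frame_features = conv2d(clip[:, :, 0])  # one frame -> (1, 64, 112, 112)

# A 3D convolution slides a 3x3x3 kernel over time as well as space,
# so its output keeps a temporal dimension carrying motion information.
conv3d = nn.Conv3d(in_channels=3, out_channels=64, kernel_size=3, padding=1)
clip_features = conv3d(clip)            # whole clip -> (1, 64, 16, 112, 112)

print(frame_features.shape)  # torch.Size([1, 64, 112, 112])
print(clip_features.shape)   # torch.Size([1, 64, 16, 112, 112])
```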
Based on the analysis of the current research above, dynamic gesture recognition is still a research hotspot in the field of computer vision. Although the technology is still at an early stage, the recognition of simple gestures has achieved good results in daily applications, and some problems remain to be studied and optimized. Three aspects should be considered in motion recognition of a video containing a single gesture: (a) the appearance and texture features of the gestures; (b) the changes of gesture features, namely, the gesture space features; (c) the time-domain information between images, that is, the spatiotemporal characteristics of continuously changing gestures.

In view of these three aspects, this paper takes RGB images and depth images as input and designs a dynamic recognition model combining a three-dimensional convolutional neural network with a convolutional block attention module (CBAM-C3D). The process (shown in Figure 1) is as follows: first, a key frame extraction method based on the interframe difference method is designed to process the original input video; in this way, redundant frames in the network input are reduced and the input data are aligned to a common scale. Then, the processed depth images and RGB images are input into CBAM-C3D. Finally, the two sets of data features are fused in series in the feature layer to complete dynamic gesture recognition.

The proposed CBAM-C3D network shown in Figure 2 is an optimization of the C3D structure proposed by Du et al. Batch normalization and ReLU layers are added to the 3D convolution layers, and the fully connected layer and maximum pooling layer are connected to the CBAM network to refine the features. This fusion network not only reduces the transmission loss of input information but also automatically learns the important spatiotemporal information contained in the images.

Key Frame Extraction of Dynamic Gesture. Due to the disunity of action standards and personal physical factors, the duration of the same action and its gesture changes may differ greatly, which is one of the difficulties of gesture recognition; there are two cases of inconsistency in action duration. For these problems, an interframe difference method is proposed to unify the scale of the video data and simplify data processing, so that the whole video is represented by several representative images. The traditional interframe difference method is mainly used for moving target monitoring; in this manuscript, it is optimized to obtain more accurate key frame images. Figure 3 shows the calculation process of the interframe difference method. First, the RGB-D image is used to segment the gesture area and obtain the hand image with the background removed. Then, the standard deviation of the pixel difference between adjacent images in the sequence is calculated as the interframe difference. Finally, the interframe differences are sorted by size to complete key frame extraction.

The standard deviation of the interframe difference, L_n, is the evaluation criterion for key frames: the number of key frames K is preset, and the standard deviation of the gray-value change of frame n is calculated. Suppose f_n and f_{n+1} are consecutive images of the input video sequence, the pixels on the images are (x, y), and the gray values of the corresponding images are f_n(x, y) and f_{n+1}(x, y). With f_n^i denoting the gray value of pixel i of image n, and N the number of pixels per image,

L_n = sqrt( (1/N) * Σ_{i=1}^{N} ( |f_{n+1}^i − f_n^i| − μ_n )^2 ),  μ_n = (1/N) * Σ_{i=1}^{N} |f_{n+1}^i − f_n^i|.  (1)

The maximum and minimum values of the sequence frame differences are counted, and the intermediate value mid(L) is calculated according to formula (2):

mid(L) = (max(L) + min(L)) / 2.  (2)

All local extrema less than mid(L) are removed; suppose m frames remain. The extracted extremum points are then sorted, and the frames corresponding to the first K extremum points are taken as the key frames. If m ≤ K, the last image in the sequence is copied to fill the set based on the m images.
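A sketch of this selection procedure is given below (assuming OpenCV-readable input and differencing grayscale frames directly; the RGB-D hand segmentation step described above is omitted, and thresholding all frame scores against mid(L), rather than only the local extrema, is a simplification):

```python
import cv2
import numpy as np

def extract_key_frames(video_path, k):
    """Pick the k frames whose interframe difference has the largest
    standard deviation, padding with the last key frame if fewer survive."""
    cap = cv2.VideoCapture(video_path)
    frames, scores = [], []
    ok, prev = cap.read()
    while ok:
        ok, curr = cap.read()
        if not ok:
            break
        # Absolute gray-level difference between adjacent frames.
        diff = cv2.absdiff(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY),
                           cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY))
        # L_n: standard deviation of the difference image (formula (1)).
        scores.append(float(np.std(diff)))
        frames.append(curr)
        prev = curr
    cap.release()

    scores = np.asarray(scores)
    # Discard frames scoring below mid(L) = (max(L) + min(L)) / 2 (formula (2)).
    mid = (scores.max() + scores.min()) / 2
    keep = np.flatnonzero(scores >= mid)
    # Take the k largest remaining scores and restore temporal order.
    top = keep[np.argsort(scores[keep])[::-1][:k]]
    key_frames = [frames[i] for i in sorted(top)]
    # If m <= K, copy the last key frame to fill the set.
    while len(key_frames) < k:
        key_frames.append(key_frames[-1])
    return key_frames
```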
The convolutional block attention module (CBAM) combines spatial information and channel information in feature extraction, the latter corresponding to temporal features here. In this paper, CBAM is used to optimize the structure of the three-dimensional convolutional neural network, complete the extraction of important features in the spatial and temporal dimensions, and strengthen the network's feature extraction. Given an input feature map I ∈ R^(C×H×W), CBAM applies a one-dimensional channel attention map A_c ∈ R^(C×1×1) followed by a two-dimensional spatial attention map A_s ∈ R^(1×H×W):

I′ = A_c(I) ⊗ I,  I″ = A_s(I′) ⊗ I′,  (3)

where ⊗ denotes elementwise multiplication. For channel attention, two vectors with only the channel dimension are obtained by maximum pooling and average pooling over the spatial dimensions; both pass through a shared two-layer neural network, the results are added, and a sigmoid is applied:

A_c(I) = σ(MLP(AvgPool(I)) + MLP(MaxPool(I))) = σ(W_1(W_0(AvgPool(I))) + W_1(W_0(MaxPool(I)))),  (4)

where σ is the sigmoid activation function, AvgPool(·) and MaxPool(·) represent average pooling and maximum pooling, respectively, and W_0 ∈ R^(C/r×C) and W_1 ∈ R^(C×C/r) are the weights of the MLP. This module is equivalent to a filter: important channels receive larger weights and unimportant channels smaller ones, so it realizes the attention mechanism in the feature dimension. To compute spatial attention, which concerns the position of useful information, maximum pooling and average pooling along the channel dimension yield two single-channel maps; these are concatenated and fed into a convolution layer whose weights are optimized during training, generating a spatial attention filter:

A_s(I) = σ(f([AvgPool(I); MaxPool(I)])),  (5)

where f is the convolution operation. The channel attention feature is input to the spatial attention network, which completes the extraction of both spatial and temporal features. Channel attention attends more to global information, while spatial attention focuses on local information; therefore, the combination of the two attention modules effectively extracts salient features and enhances their expression.
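A compact PyTorch sketch of such a block is given below (a 3D variant consistent with C3D feature maps; the reduction ratio r = 16 and the 7×7×7 spatial kernel follow the original CBAM design and are assumptions with respect to this paper):

```python
import torch
import torch.nn as nn

class CBAM3D(nn.Module):
    """Channel attention followed by spatial attention on a 5D feature
    map (N, C, T, H, W); a sketch after Woo et al.'s CBAM, adapted to 3D."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Shared two-layer MLP (W_0, W_1) for channel attention, formula (4).
        self.mlp = nn.Sequential(
            nn.Conv3d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, 1, bias=False),
        )
        # Single conv f over the pooled channel descriptors, formula (5).
        self.spatial = nn.Conv3d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        # Channel attention: average- and max-pool over (T, H, W).
        avg = self.mlp(nn.functional.adaptive_avg_pool3d(x, 1))
        mx = self.mlp(nn.functional.adaptive_max_pool3d(x, 1))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: average- and max-pool over the channel axis.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial(s))

# Example: refine a C3D-style feature map of shape (batch, 256, 4, 14, 14).
features = torch.randn(2, 256, 4, 14, 14)
print(CBAM3D(256)(features).shape)  # torch.Size([2, 256, 4, 14, 14])
```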
Because the data distribution tends to polarize and become uneven, gradients may vanish or explode during training. To solve this problem, a BN (batch normalization) layer is added to keep the input data of each layer consistent. For a mini-batch {x_1, ..., x_m}, the algorithm is as follows:

μ = (1/m) Σ_{i=1}^{m} x_i,  (6)
σ² = (1/m) Σ_{i=1}^{m} (x_i − μ)²,  (7)
x̂_i = (x_i − μ) / sqrt(σ² + ε),  (8)
y_i = γ x̂_i + β.  (9)

Formulas (6) and (7) calculate the mean and variance of the data, respectively, formula (8) standardizes the data, and formula (9) scales and offsets the data. After the data are processed by the BN layer, their distribution is more uniform, which also improves the generalization performance of the network.

In the field of behaviour recognition, many researchers have tried to use multimodal data as input to improve the recognition effect, such as RGB images, depth images, and optical flow data. All kinds of modal data contain important information for gesture recognition, but the results of single-modal recognition are not good enough. Therefore, this paper improves the recognition of dynamic gestures by fusing multiple modal data. Considering that the dataset includes 50 kinds of actions with significant differences in gesture, RGB images are fused with depth images for gesture recognition. RGB images and depth images are used as input data, the two kinds of modal information are fed into the CBAM-C3D model and trained separately, and fusion is carried out after the respective features are obtained. Assume that, after feature extraction, the output feature vectors of the two modalities are F_RGB and F_Depth and the final fused feature vector is F_u:

(a) Average fusion: F_u = (F_RGB + F_Depth) / 2.
(b) Series fusion: F_u = F_RGB ⊗ F_Depth, where ⊗ represents the tandem (concatenation) operation.

Different countries have different definitions of gestures, so there is no universally recognized dynamic gesture dataset in this field. To facilitate multimodal data fusion training, this paper selects the EgoGesture [46] first-person gesture database released by the State Key Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences in 2018 (Figure 4). The dataset contains 2081 RGB-D videos, 24161 gesture samples, and 2953224 frames from six different themes. Each video sample is taken with an Intel RealSense SR300 camera in RGB-D format; each frame has a resolution of 640 × 480 pixels, recorded at 30 fps. There are 33 kinds of static and 50 kinds of dynamic gestures, collected from 50 people in six different indoor and outdoor scenes. Figure 5 is a schematic diagram of some gesture categories in the EgoGesture database. As this paper focuses on dynamic gesture recognition, we select the 50 dynamic gestures and their labels as the RGB and depth data input, filtering and adjusting the EgoGesture database accordingly; some samples of the RGB and depth video data are taken as examples.

All the experiments are carried out on a Windows 10 system. The graphics card is an NVIDIA RTX 3060 Ti with 8 GB of memory. The software environment is Python 3.6, PyTorch 1.3.0 with torchvision 0.5.0, OpenCV-Python 4.5.0, and other auxiliary Python libraries. The input is the EgoGesture dataset, and the training, test, and validation sets are divided in the ratio 3 : 1 : 1. When training the model, it is validated and adjusted every 20 steps. Before network training, key frames are extracted from the RGB images, and RGB images and depth images are selected according to the number of frames. To increase the generalization of the model, images are randomly cropped: the initial 240 × 240 input image is randomly cropped to 112 × 112. In model training, mini-batch stochastic gradient descent with momentum is used to optimize the 3D convolutional neural network. The number of training steps is 101, the batch size is 16, the initial learning rate is 0.01, and the learning rate decays by a factor of 0.1 every 3000 iterations, roughly as in the sketch below.
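Under the stated hyperparameters, the optimizer configuration can be sketched as follows (the momentum and weight decay values are assumptions not given in this paper, as are the `CBAM_C3D` and `train_loader` names, which stand in for the network defined above and the data pipeline):

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import StepLR
from torchvision import transforms

augment = transforms.RandomCrop(112)        # random 240x240 -> 112x112 crop

model = CBAM_C3D(num_classes=50)            # hypothetical stand-in for the network
criterion = nn.CrossEntropyLoss()

# Mini-batch SGD with momentum; lr decays by 0.1 every 3000 iterations.
# Momentum 0.9 and weight decay 5e-4 are assumed, common defaults.
optimizer = optim.SGD(model.parameters(), lr=0.01,
                      momentum=0.9, weight_decay=5e-4)
scheduler = StepLR(optimizer, step_size=3000, gamma=0.1)

for clips, labels in train_loader:          # hypothetical loader of 16-clip batches
    optimizer.zero_grad()
    loss = criterion(model(clips), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()                        # stepped per iteration, not per epoch
```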
Comparison of Different Input Strategies. Single-mode and dual-mode inputs are compared experimentally. RGB images, depth images, optical flow images, and RGB-depth images are selected as input modes; for the dual-mode input, average fusion and series fusion are compared in the feature fusion layer. The input for training is a set of 16 frames, and the accuracy of the final dynamic gesture recognition is shown in Figure 6.

Figure 6: Accuracy comparison of input data in different modes.

Analyzing the experimental results on the training set: among the single-mode input models, RGB images give the highest recognition accuracy, 52.5%. After fusing the depth image input, the recognition accuracy improves by 9.16%. It can be seen that the multimodal input model performs better than the single-modal input model. Furthermore, multimode fusion based on series connection in the feature layer works better still, with an accuracy 5.07% higher than average fusion. What can be inferred is that both the input modes and the fusion method influence the performance of the neural network: when the type of input data mode is fixed, an appropriate data fusion method makes the characteristics of the object more prominent and trains a better-performing network.

Analysis of the Multimodal Fusion Strategy. To better reflect the effectiveness of the multimodal feature fusion strategy, confusion matrices are used to show the effect of the two fusion methods. Figures 7 and 8 show the confusion matrices of the two fusion methods over the 50 kinds of gestures in the EgoGesture dataset: the horizontal axis represents the gesture categories predicted by the model, the vertical axis represents the true labels of the dynamic gestures, and the colour bar on the right maps prediction accuracy values to colours. Comparing the two figures, the average fusion model easily confuses the recognition of some gestures; for example, confusions exist among gestures 1, 3, and 6 and among gestures 2, 4, and 5. These gestures do have similar motion trajectories and hand postures, which leads to misjudgments between them. With the series fusion model, the false detection probability between these similar gestures decreases to a certain extent and the recognition accuracy improves accordingly. Take gesture 2 as an example: in the average fusion confusion matrix, the probability that gesture 4 is mistaken for gesture 2 is greater than 0.4 (the grid colour in the matrix can be compared with the colour bar on the right), whereas in the series fusion mode it is less than 0.2. In general, the recognition effect and average recognition rate of the series fusion model are better than those of the average fusion model. These experiments show that, in dynamic gesture recognition, the model based on series-fused features achieves better results mainly because series fusion preserves the features of each part during fusion: it avoids the masking of features caused by direct fusion and provides the classifier with more complete feature information, improving the performance of the whole model.
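At the feature layer, the two fusion rules compared above reduce to elementwise averaging versus concatenation, as the following sketch illustrates (the feature dimension of 4096 is illustrative, not taken from the paper):

```python
import torch

f_rgb = torch.randn(16, 4096)    # F_RGB: batch of features from the RGB stream
f_depth = torch.randn(16, 4096)  # F_Depth: batch of features from the depth stream

# Average fusion: both streams share one feature space, so overlapping
# information can mask stream-specific cues.
f_avg = (f_rgb + f_depth) / 2               # shape (16, 4096)

# Series (concatenation) fusion: both feature vectors are kept intact,
# giving the classifier the complete information of each modality.
f_cat = torch.cat([f_rgb, f_depth], dim=1)  # shape (16, 8192)
```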
Most of the action clips contain 20 to 50 frames, so 8, 12, 16, and 20 frames are selected for experimental comparison. The interframe difference method optimized in this paper is compared with the traditional interframe difference method, with accuracy as the criterion, as shown in Figure 9.

Figure 9: Comparison of accuracy of two key frame extraction methods.

As can be seen from Figure 9, the accuracy of the optimized method when extracting 8, 12, 16, and 20 key frames is significantly improved compared with the traditional method, and the maximum recognition rate increases by 2.82%, from 60.45% to 63.27%. As the number of frames increases, the recognition effect also improves; however, when the number of frames reaches 20, the recognition rate decreases. Presumably, some gestures contain fewer than 20 frames, so the key frame extraction process must pad the sequence, producing redundant frames that negatively affect gesture recognition.

The proposed dynamic gesture recognition method and other representative algorithms are tested on the EgoGesture dataset. As shown in Table 1, the accuracy of the proposed method is 72.4%, a large improvement over traditional dynamic gesture recognition methods.

A dynamic gesture recognition algorithm based on a 3D convolutional neural network with an attention mechanism is proposed in this manuscript. The optimized interframe difference method deals with video data redundancy and format disunity. Combined with the CBAM network, important spatiotemporal features are enhanced and invalid features are suppressed, so that salient features are expressed prominently. Finally, the dual-mode feature input fusion method realizes the complementarity of the two modes and improves the effect of gesture recognition. The model designed in this paper is also compared with other mainstream methods on the EgoGesture dataset to verify its effectiveness; its recognition accuracy of 72.4% is better than that of the other networks. The method still has some defects, such as a large number of network parameters and slow prediction, resulting in poor real-time performance, and it only recognizes gestures in videos containing a single action. Moreover, although the dual-modal fusion strategy proposed in this paper notes that the input RGB and depth images are related to each other, this paper only fuses them at the feature layer. In future work, the dual-modal data could be fused in the preprocessing stage, and the input modal information could be increased, for example by adding optical flow data, to improve recognition accuracy.

The data used to support the findings of this study have not been made available because the relevant data involve legal issues and related confidentiality.

The authors declare that there are no conflicts of interest regarding the publication of this article.
References

[1] Review of constraints on vision-based gesture recognition for human-computer interaction
[2] Grip strength forecast and rehabilitative guidance based on adaptive neural fuzzy inference system using sEMG
[3] Holoscopic 3D microgesture recognition by deep neural network model based on viewpoint images and decision fusion
[4] Survey and evaluation of monocular visual-inertial SLAM algorithms for augmented reality
[5] Decomposition algorithm for depth image of human health posture based on brain health
[6] Manipulator grabbing position detection with information fusion of color image and depth image using deep learning
[7] Occlusion gesture recognition based on improved SSD
[8] Analysis of user needs on downloading behavior of English vocabulary APPs based on data mining for online comments
[9] SLAM based on adaptive SVSF for cooperative unmanned vehicles in dynamic environment
[10] Research on strong agile response task scheduling optimization enhancement with optimal resource usage in green cloud computing
[11] A continuous hand gestures recognition technique for human-machine interaction using accelerometer and gyroscope sensors
[12] Smart gloves for hand gesture recognition: sign language to speech conversion system
[13] Gesture recognition based on multi-modal feature weight
[14] Analysis of social effects on employment promotion policies for college graduates based on data mining for online user review in China during the COVID-19 pandemic
[15] Dynamic gesture recognition based on feature fusion network and variant ConvLSTM
[16] Research on gesture recognition of smart data fusion features in the IoT
[17] Monitoring and recognizing enterprise public opinion from high-risk users based on user portrait and random forest algorithm
[18] Human action recognition based on HOIRM feature fusion and AP clustering BOW
[19] Gesture recognition based on skeletonization algorithm and CNN with ASL database
[20] Gesture recognition based on deep deformable 3D convolutional neural networks
[21] An analysis of convolutional long short-term memory recurrent neural networks for gesture recognition
[22] Tiny hand gesture recognition without localization via a deep convolutional network
[23] Gesture recognition based on binocular vision
[24] An effective and unified method to derive the inverse kinematics formulas of general six-DOF manipulator with simple geometry
[25] Dropout: a simple way to prevent neural networks from overfitting
[26] Semantic segmentation for multiscale target based on object recognition using the improved faster-RCNN model
[27] Gesture recognition based on surface electromyography-feature image
[28] Surface-electromyography-based gesture recognition by multi-view deep learning
[29] Two-phase multidocument summarization through content attention-based subtopic detection
[30] Two-stream convolutional networks for action recognition in videos
[31] Struck: structured output tracking with kernels
[32] Machine learning for high-speed corner detection
[33] Temporal segment networks: towards good practices for deep action recognition
[34] Convolutional two-stream network fusion for video action recognition
[35] Hidden two-stream convolutional networks for action recognition
[36] Dynamic gesture recognition in the internet of things
[37] Enhancement of real-time grasp detection by cascaded deep convolutional neural networks
[38] Gesture recognition based on an improved local sparse representation classification algorithm
[39] Learning spatiotemporal features with 3D convolutional networks
[40] Quo vadis, action recognition? A new model and the kinetics dataset
[41] Learning spatio-temporal representation with pseudo-3D residual networks
[42] Deep learning
[43] Online detection and classification of dynamic hand gestures with recurrent 3D convolutional neural network
[44] Hand gesture recognition with 3D convolutional neural networks
[45] Online detection and classification of dynamic hand gestures with recurrent 3D convolutional neural networks
[46] EgoGesture: a new dataset and benchmark for egocentric hand gesture recognition