title: A Novel Evaluation Strategy to Artificial Neural Network Model Based on Bionics authors: Tian, Sen; Zhang, Jin; Shu, Xuanyu; Chen, Lingyu; Niu, Xin; Wang, You date: 2021-12-16 journal: J Bionic Eng DOI: 10.1007/s42235-021-00136-2 With the continuous deepening of Artificial Neural Network (ANN) research, the structure and function of ANN models are improving towards diversification and intelligence. However, models are mostly evaluated by the quality of their problem-solving results; evaluation from the biomimetic perspective, i.e., how faithfully a model imitates biological neural networks, is largely absent, which makes such assessments incomplete. Hence, in response to this problem, this paper proposes a new evaluation strategy for ANN models from the perspective of bionics. Firstly, four classical neural network models are illustrated: the Back Propagation (BP) network, the Deep Belief Network (DBN), the LeNet5 network, and the olfactory bionic model (KIII model), and the neuron transmission mode and equation, network structure, and weight-updating principle of the models are analyzed qualitatively. The analysis results show that the KIII model comes closer to the actual biological nervous system than the other models, while the LeNet5 network simulates the nervous system in depth. Secondly, evaluation indexes of ANNs are constructed from the perspective of bionics: small-world, synchronization, and chaotic characteristics. Finally, the network models are quantitatively analyzed using these evaluation indexes. The experimental results show that the DBN network, LeNet5 network, and BP network have synchronization characteristics, and the DBN network and LeNet5 network have certain chaotic characteristics, but there is still a certain distance between these three classical neural networks and actual biological neural networks. The KIII model has certain small-world characteristics in structure, and its network also exhibits synchronization and chaotic characteristics. Compared with the DBN network, LeNet5 network, and BP network, the KIII model is closer to the real biological neural network. The ANN is an abstract mathematical model proposed and developed on the basis of contemporary neuroscience, intended to reflect the structure and function of the human brain. In terms of anatomy, C. Golgi, an Italian anatomist of the late nineteenth century, invented the silver staining method, which can fully display neuronal cells. The Spanish scientist S. R. Cajal did a large number of experiments and observations based on this result and published "Histology of the Human and Vertebrate Nervous System" in 1904. In psychology, the psychologist W. S. McCulloch and the mathematical logician W. Pitts established a neural network and its mathematical model, called the MP model [1]. This model put forward a formal mathematical description of neurons and a method of network construction, and proved that a single neuron can perform logical functions, thus opening the era of artificial neural network research. The ANN is an intelligent information network built by simulating the genuine biological nervous system. It simulates some rudimentary functions of the biological nervous system by using the electrophysiological characteristics of biological neurons and the synaptic connections between neurons. In essence, it is a simple mapping of the biological nervous system.
With the in-depth study of ANNs, researchers have made breakthroughs in fields such as pattern recognition [2], natural language processing [3], intelligent control [4, 5], expert systems [6], and predictive analysis [7, 8], and network structure and function are developing towards complexity and intelligence. The development of the ANN can be divided into three stages [9]: the first-generation neural network is a very simple model composed of threshold neurons, such as the multi-layer perceptron [10] and the Hopfield neural network [11]. The neurons of the second-generation neural network no longer use a threshold function to compute their output signals; to better simulate the output, they use nonlinear activation functions that can approximate arbitrary functions. The second-generation neural network is also further enhanced in its learning algorithm: by iteratively updating the weights, it can learn independently from the characteristics of the data, as in the BP network [12], the DBN network [13], and the Convolutional Neural Network (CNN) [14]. In particular, the CNN simulates the real nervous system in depth, with depths of up to 1202 layers [15], which allows it to further learn the latent characteristics of data. The third-generation neural network is the spiking (pulse) neural network. This is motivated by the fact that, in a real biological neural network, both the input and output of neurons have impulsive characteristics. Data are encoded as spike trains, which is closer to the real biological neural network. Commonly used neuron models include the Hodgkin-Huxley (H-H) model [16], the LIF model [17], and the Izhikevich model [18]. Morita et al. [19] used a spiking neural network to extract common words from Japanese speech data. Kim et al. [20] proposed a deep neural network with weighted spikes, which accelerated the coding rate compared with traditional spiking networks. The construction of artificial neural networks draws on the ideas of bionics. Bionics is an interdisciplinary subject that combines biology, mathematics, and engineering, established and studied to better serve human learning, production, and life. The compound eyes of insects provide a powerful field of vision without angular distortion, which inspired the development of bionic eyes. Song et al. [21] designed a hemispherical fly-eye camera that combines an elastomeric compound optical element with a deformable thin silicon photodetector array, forming a hemispherical, wide-field-of-view imaging system. Zhang et al. [22] proposed a new type of bionic neural network to control a fish robot according to the neural structure of fish; the neuron connection mode in the model is the same as that of the central nervous system of fish. The experimental results show that the fish robot can accomplish basic activities such as starting, stopping, and forward and backward swimming. With the continuous development of the biological sciences, researchers have gained a certain understanding of the structure and function of the nervous system and use the topology of the nervous system or its corresponding functions as a guide to construct bionic artificial neural networks. Wei et al. [23] developed a new type of bionic in vitro bioelectronic tongue based on cardiomyocytes and microelectrode arrays, which can be used for bitterness and umami taste detection. As a bionic system, an electronic tongue can be used to obtain taste information about different types of food.
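To make the impulsive character of third-generation networks concrete, the following is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the model cited above as [17]; the time constant, threshold, and input current here are illustrative assumptions, not values from any cited work.

```python
import numpy as np

def lif_spike_train(current, dt=1e-3, tau=0.02, v_rest=0.0,
                    v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    current: array of input current values, one per time step.
    Membrane dynamics: tau * dv/dt = -(v - v_rest) + I(t).
    Returns a binary spike train of the same length as the input.
    """
    v = v_rest
    spikes = np.zeros_like(current)
    for t, i_t in enumerate(current):
        v += dt / tau * (-(v - v_rest) + i_t)  # Euler integration step
        if v >= v_thresh:                      # threshold crossing
            spikes[t] = 1.0                    # emit a spike
            v = v_reset                        # reset membrane potential
    return spikes

# Constant suprathreshold input produces a regular spike train.
spikes = lif_spike_train(np.full(1000, 1.5))
print("spike count:", int(spikes.sum()))
```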
The Liljenstrom model [24] constructs an olfactory model from network units that are as simple as possible, according to the structure of the olfactory cortex and olfactory bulb. Freeman [25] proposed a nonlinear neural network model, namely the KIII model, based on anatomical structure and electrophysiological experiments on the olfactory system. Chen et al. [26] used a bionic neural network to recognize license plates, invoking the signal-lag (hysteresis) characteristic of biological neural networks as the activation function of the neurons. Experiments show that the proposed algorithm can improve classification accuracy and generalization ability. In the development and application of artificial neural networks, researchers all target the real nervous system while establishing neural networks that meet actual needs and solve practical problems. However, with the establishment and development of various types of neural networks, researchers have tended to evaluate artificial neural networks by how well they solve the target problem. For example, in the field of pattern recognition, researchers tend to evaluate models by recognition accuracy. This evaluation method is simple, direct, and easy to understand, and it can address the problem well. However, it offers no clear and accurate interpretation of the process by which real nervous systems solve authentic problems, and it is not sufficient to evaluate a model only by the quality of its problem-solving results. How to establish evaluation indexes for the similarity between an artificial neural network model and the actual biological nervous system is therefore not only the problem addressed in this article but also an unsolved problem of the field. In view of the above problems, this paper constructs a preliminary evaluation index of ANNs from the bionics perspective and analyzes the networks in depth to understand their operating mechanisms. In this work, we first introduce four classical neural network models, the BP network, DBN network, LeNet5 network, and KIII model, and qualitatively analyze their neuron transmission modes and equations, network structures, and weight-updating principles. Then, evaluation indexes of ANNs are constructed from the perspective of bionics: small-world, synchronization, and chaotic characteristics. Finally, we conduct experiments on the ORL data set to evaluate the network models from the perspective of bionics. In summary, this paper makes the following contributions:
1. It provides a basis for the selection of models from the perspective of bionics.
2. For the first time, it provides a preliminary measurement scale for evaluating various artificial neural network models from the perspective of bionics, together with an in-depth analysis of neural networks to understand their operating mechanisms.
3. For the first time, it evaluates model performance from the perspective of bionics and of the characteristics of the model itself, and provides suggestions for improving the models.
In the following sections, we first introduce the four classical neural networks, the BP network, DBN network, LeNet5 network, and the KIII model, and qualitatively analyze them in terms of structure, neuron transmission mode and equation, and self-learning mode.
Besides, the evaluation indexes and calculation methods for the neural network models are introduced in detail, and the models are compared and analyzed quantitatively through experiments. Finally, the paper summarizes the whole text, draws conclusions, and puts forward new thoughts and prospects. This chapter introduces the four classic neural networks, the BP network, DBN network, LeNet5 network, and KIII model, and then carries out a qualitative analysis of the network structure, neuron transmission mode and equation, weight-updating principle, and other aspects of each model. The BP network has become one of the most popular neural network models because it is structurally simple, generalizes well, and is easy to understand and implement. The structure of the model is shown in Fig. 1. The network generally comprises three parts: the input layer for the initial input data, the hidden middle layer that processes the data, and the output layer for the final results. In general, a BP network has 1-2 hidden layers, and each neuron in a hidden layer is connected to every neuron in the preceding and following layers. A neuron multiplies each input by its corresponding synaptic weight and sums the results. The computational equation of the neuron is shown in Eq. (1):

y = \varphi\left(\sum_{i} w_i x_i + b\right), \quad (1)

where x_i is the input data, w_i is the weight, b is the bias term of the neural network, and \varphi is the activation function. The BP network updates the weights through error backpropagation. It propagates the error between the actual output and the expected output backward, layer by layer, through the hidden layers toward the input layer, and allocates the error to all units in each layer. The error calculated for each layer is then used as the basis for updating the weights and thresholds of its units. To minimize the error between the network output and the target output, the adjustment direction of the weights is determined by the gradient descent method, and the weights are updated along the negative gradient direction. In 2006 and before, BP networks with many hidden layers always suffered from low training efficiency. Therefore, Hinton proposed a layer-by-layer greedy pre-training method based on the Restricted Boltzmann Machine to solve this problem, which greatly improved training efficiency and alleviated the local-optimum problem, thus opening a new era in the development of deep neural networks. Hinton called this structure, pre-trained with Boltzmann machines, the Deep Belief Network (DBN). The neuron is the most basic and smallest unit in any neural network. Likewise, the DBN network is composed of many neurons, and a typical network structure is illustrated in Fig. 2. From Fig. 2, it can be observed that the elementary unit of the DBN network is the Restricted Boltzmann Machine (RBM). The DBN network generally updates its weights through the Contrastive Divergence learning algorithm. The trained weights of a DBN network are often used as the initial weights of another neural network, such as a BP network, which is then trained to update the weights in reverse, achieving better experimental results. Normally, an RBM is made up of two layers of neurons: the visible (explicit) layer and the hidden layer. The explicit layer is composed of visible neurons for the input of training data. The hidden layer, correspondingly, consists of hidden neurons and serves as a feature detector, as shown in Fig. 3.
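As a concrete illustration of Eq. (1) and the negative-gradient weight update described above, here is a minimal sketch of a single neuron's forward pass and one gradient-descent step; the sigmoid activation, learning rate, and toy data are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron_forward(x, w, b):
    """Eq. (1): y = phi(sum_i w_i * x_i + b), with phi = sigmoid."""
    return sigmoid(np.dot(w, x) + b)

def bp_step(x, target, w, b, lr=0.1):
    """One gradient-descent step on squared error for a single neuron."""
    y = neuron_forward(x, w, b)
    err = y - target                 # output error
    grad_z = err * y * (1.0 - y)     # chain rule through the sigmoid
    w -= lr * grad_z * x             # update along the negative gradient
    b -= lr * grad_z
    return w, b, 0.5 * err**2

rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.0
for _ in range(200):
    w, b, loss = bp_step(np.array([0.5, -1.0, 2.0]), 1.0, w, b)
print("final loss:", loss)  # shrinks as the output approaches the target
```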
Connections exist only between the neurons of the visible layer and the neurons of the hidden layer; there are no connections among neurons within the visible layer or among neurons within the hidden layer. The LeNet5 neural network also belongs to the hierarchical networks. Compared with other feedforward neural networks, the form and function of its layers are slightly changed. Generally, it is composed of the input layer, convolutional layers, pooling layers, fully connected layers, and the output layer. The convolutional and pooling layers can be placed alternately in the network framework; therefore, the network is deeper than an ordinary fully connected feedforward neural network. As shown in Fig. 4, LeNet5 has eight layers (including the input layer). The LeNet5 network also updates its weights through the backpropagation of errors. In computation, the main function of convolution is to extract the features of the data, and the core function of pooling is to reduce the dimensionality of the features. Based on physiological anatomy and electrophysiological experiments on the olfactory system, Freeman [25] proposed a bionic olfactory model, namely the KIII model, whose topology is shown in Fig. 5. The KIII model is divided into the R layer (olfactory sensory neurons), PG layer (periglomerular cells), OB layer (olfactory bulb), AON layer (anterior olfactory nucleus), PC layer (piriform cortex), C unit (pyramidal cells in EC), and Di units (delay feedback units). In the KIII model, a unified second-order differential equation describes the dynamic behavior of each neural ensemble, as shown in Eqs. (2) and (3) [27]:

\frac{1}{ab}\left[\ddot{x}_i(t) + (a+b)\,\dot{x}_i(t) + ab\,x_i(t)\right] = \sum_{j=1,\, j \neq i}^{N} W_{ij}\, Q(x_j(t), q_j) + I_i(t), \quad (2)

Q(x_j(t), q_j) = q_j\left(1 - e^{-(e^{x_j(t)} - 1)/q_j}\right), \quad (3)

where a and b represent the two time constants of neural electrophysiological activity; x_i(t) and x_j(t) are the potential state variables of the i-th and j-th nerve clusters, respectively; N is the number of parallel units in the model; W_ij denotes the strength of the synaptic connection between the i-th and j-th nerve masses; Q(x_j(t), q_j) is a nonlinear input-output function derived from the Hodgkin-Huxley (H-H) equation, where the value of q differs at different positions; and I_i(t) is the external input received by the i-th nerve mass. For example, q = 1.824 in the periglomerular cell layer, and q = 5 in the olfactory bulb layer, anterior olfactory nucleus, and piriform cortex. The KIII model updates the weights of its synaptic connections through Hebbian learning rules and adaptive learning rules [27], mainly updating the connection weights of the OB layer. Hebbian learning rules are designed to enhance the desired stimulus pattern, while adaptive learning rules are designed to reduce the impact of unwanted factors such as background noise. Although the KIII model has been around for a long time, it has unfortunately not received much attention or discussion from researchers. The KIII model is innovative in terms of recognition; therefore, this paper hopes to present the KIII model to researchers as a different kind of model. All four classical networks simulate the authentic biological nervous system to some extent, and all of them are hierarchical networks. The BP network is a feedforward neural network whose layers are fully connected, which greatly increases the number of learnable parameters.
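To illustrate Eqs. (2) and (3), the following is a minimal sketch that integrates the second-order dynamics of a few coupled KIII-style nodes with a simple Euler scheme; the coupling matrix, input schedule, and step size are illustrative assumptions rather than Freeman's published network configuration, although a = 0.220 and b = 0.720 are the commonly quoted time constants.

```python
import numpy as np

def Q(x, q):
    """Freeman's asymmetric sigmoid, Eq. (3)."""
    return q * (1.0 - np.exp(-(np.exp(x) - 1.0) / q))

def kiii_step(x, v, W, q, I, a=0.22, b=0.72, dt=1e-3):
    """One Euler step of Eq. (2), rewritten as two first-order ODEs.

    x: node potentials; v: their first derivatives;
    W: synaptic coupling matrix (zero diagonal); I: external input.
    From Eq. (2): x'' = ab*(W@Q(x,q) + I) - (a+b)*x' - ab*x.
    """
    drive = W @ Q(x, q) + I
    accel = a * b * drive - (a + b) * v - a * b * x
    return x + dt * v, v + dt * accel

n = 4
rng = np.random.default_rng(1)
W = rng.uniform(-0.5, 0.5, (n, n))
np.fill_diagonal(W, 0.0)                  # no self-connections
x, v = np.zeros(n), np.zeros(n)
for t in range(5000):
    I = 0.5 if t < 2500 else 0.0          # stimulus, then silence
    x, v = kiii_step(x, v, W, q=5.0, I=I)
print("final potentials:", np.round(x, 3))
```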
The DBN network, like the BP network, is a feedforward neural network, and its connection method is full connection between layers. When the number of layers reaches a certain "depth", neural networks begin to embody the concept of depth, and the DBN's layer-wise pre-training improves the training efficiency of such networks. The LeNet5 network is also a feedforward neural network with inter-layer connections, but it is deeper than the BP network, DBN network, and KIII model. A deeper network can better learn the latent characteristics of data, but better results cannot be achieved simply by deepening the network. For example, a typical BP network contains only one intermediate layer; if the network is deepened by adding a hidden layer, the number of learnable parameters increases, making the network difficult and time-consuming to train. The LeNet5 network is deeper yet adds few extra parameters, mainly because of its special structure: the convolutional and pooling layers greatly reduce the number of learnable parameters. For example, in the image recognition task, its two convolutional layers and two pooling layers, four layers in total, need to learn only 552 parameters, while the three-layer structure of the BP network needs 10,400 parameters. The DBN network is composed of several RBMs, and its structure is designed to solve the problem of low training efficiency caused by many hidden layers. Structurally, the KIII model can increase its network size according to the dimensionality of the features, mainly by adding parallel OB-layer channels [25]. In the network, each connection edge of the KIII model is a parameter to be learned, and the network does not connect its neurons in a fully connected manner; therefore, it requires the fewest learnable parameters among these networks. Compared with the BP network, DBN network, and LeNet5 network, the KIII model simulates one specific kind of nervous system, the olfactory nervous system, and is thus more specialized, whereas the BP network, DBN network, and LeNet5 network largely simulate the deep characteristics of the general nervous system and are more generalized. In terms of neuron transmission mode, signals in the BP network, DBN network, and LeNet5 network are transmitted unidirectionally, while the KIII model has pronounced feedback between different layers. The KIII model introduces a feedback mechanism so that it can truly simulate the response of neurons to stimulation, while the BP network, DBN network, and LeNet5 network extract sample features layer by layer from the whole. The feedback mechanism gives the KIII model a control function, while the deep BP network, DBN network, and LeNet5 network are oriented more toward judgment and decision-making. This difference in transmission mode also reflects the different orientations of model construction: the KIII model simulates the working principle of the nervous system from the bottom up, while the BP network, DBN network, and LeNet5 network simulate it from the top down in terms of function and performance.
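The parameter-count contrast above follows from the standard counting formulas for convolutional and fully connected layers, sketched below; the layer shapes in the example are illustrative assumptions, not the exact configurations behind the 552 and 10,400 figures reported above.

```python
def conv_params(k_h, k_w, c_in, c_out, bias=True):
    """Parameters of a convolutional layer: one (k_h x k_w x c_in) kernel
    plus an optional bias per output channel, shared over all positions."""
    return c_out * (k_h * k_w * c_in + (1 if bias else 0))

def dense_params(n_in, n_out, bias=True):
    """Parameters of a fully connected layer: one weight per input-output
    pair plus an optional bias per output neuron."""
    return n_out * (n_in + (1 if bias else 0))

# Illustrative comparison: two small conv layers vs. one dense layer.
# Pooling layers add no learnable parameters at all.
conv_total = conv_params(5, 5, 1, 4) + conv_params(5, 5, 4, 8)
dense_total = dense_params(100, 100)
print("conv layers:", conv_total, "parameters")   # 104 + 808 = 912
print("dense layer:", dense_total, "parameters")  # 10100
```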
In terms of the neuron equation, the BP network and DBN network adopt a simplified version of the working principle of biological neurons: a weighted sum over fully connected inputs followed by a nonlinear activation output. The LeNet5 network convolves each kernel with the corresponding area of the feature map to extract effective features in the convolutional layers, applies a pooling operation to reduce dimensionality in the pooling layers, and performs the same operation as the BP and DBN networks in the fully connected layers. The KIII model adopts pulse neurons, namely the Hodgkin-Huxley (H-H) model. Such neurons have biological characteristics, such as pulses, and can better learn the pattern by which data are generated. The BP network and LeNet5 network use error backpropagation to update their weights, the DBN network uses the contrastive divergence learning algorithm, and the KIII model uses Hebbian learning rules and adaptive learning rules. The comparison of the KIII model, LeNet5 network, and BP network in terms of structure, neuron transmission mode and equation, and self-learning mode is given in Table 1. From the qualitative analysis, it can be observed that the KIII model is closer to actual biological neural networks, while the LeNet5 network simulates the nervous system in depth and can better learn the characteristics of data. As an early deep network, the DBN network has better training efficiency and performance than the BP network, whose relatively simple structure makes its performance comparatively poor. According to the structural and functional characteristics of the actual nervous system, the evaluation indexes of ANNs are constructed from the perspective of bionics in this chapter: small-world characteristics, synchronization characteristics, and chaotic characteristics. The characteristics of the models are then evaluated by these three indexes. In terms of model structure, the small-world characteristic measures the structural properties of a neural network model, while the chaotic and synchronization characteristics measure the inherent dynamic properties of the network at the neuron level. In 1998, Watts et al. [28] first described in Nature a new type of network: a class of networks with short characteristic path lengths and large clustering coefficients. Different from traditional regular networks and random networks, such networks are called small-world networks, and the properties they satisfy are called small-world characteristics. In real life, many networks have gradually been found to have small-world characteristics, such as brain neural networks, protein networks, and social influence networks. Xie et al. [29] studied the network characteristics of the retinal nervous system and the lamprey spinal nervous system. By analyzing the clustering coefficients and characteristic path lengths of these two nervous systems, they found that information transmission between neurons in the nervous system has the small-world characteristic. Through electroencephalography, magnetoencephalography, and fMRI, researchers have found that the anatomical connection map of the brain has the small-world characteristic [30, 31], and have studied the brain structural network maps of healthy people and schizophrenic patients.
Experimental results showed that healthy people have an effective small-world organization, which is disrupted in the structural maps of patients [32]. Andreas et al. [33] found that the metabolic network of Escherichia coli also has the small-world characteristic, its structure being a small-world network. Networks with small-world characteristics have the following advantages. First, they speed up the transmission efficiency of the network; especially in the face of epidemics or infectious diseases, this helps in understanding how an outbreak can be suppressed or controlled. Lin et al. [34] used a small-world network model to study the spread of the SARS virus, and their research showed that the simulation results agreed well with the actual progress of SARS in Beijing. Second, small-world characteristics can be used to optimize network structure. Luo et al. [35] proposed a wireless network model with small-world characteristics, which significantly improves network transmission performance and greatly improves the energy-saving ratio of network nodes. There is a great deal of synchronization in the biological nervous system. Yuan et al. studied the collective swimming behavior of Caenorhabditis elegans; their measurements and simulations indicate that steric hindrance is the main factor in movement synchronization [36]. Riehle et al. found a synchronous firing pattern in the sensory cortex of monkeys [37]. Subsequently, neuroscientists carried out a series of neurobiological experiments whose results show that in humans and other mammals there are synchronous oscillations among neurons [38, 39]. Synchronization in a nonlinear network is a type of common motion or behavior achieved through the interaction of the nodes in the network. The neuron equation of an ANN first computes a linear sum and then passes it through a nonlinear activation function, so an ANN is a nonlinear network. To explore the synchronization characteristic of ANNs, this paper considers whether synchronization exists between neurons, and then judges whether the ANN has the same synchronization characteristics as the real nervous system. Chaos is an unpredictable, random-like motion in deterministic dynamical systems that is sensitive to initial values. In the biological nervous system, researchers have conducted a large number of electrophysiological experiments, and the results show obvious chaotic phenomena from microscopic neurons to macroscopic brain waves and even the whole nervous system [40]. Baysal et al. [41] studied, through numerical simulation, the influence of external chaotic signals on the weak-signal detection performance of Hodgkin-Huxley neurons. From the governing equations, researchers have found that the neural membrane exhibits nonlinear phenomena such as oscillation and chaos. Doungmo et al. [42] investigated the possible existence of a chaotic pole of attraction in the dynamics of Hindmarsh-Rose neurons with external current input. Therefore, researchers began to combine chaos theory with ANNs to construct chaotic neural networks.
This kind of ANN with chaotic characteristics simulates the actual neural network, making the ANN further reproduce the dynamic characteristics of biological neural networks, so its degree of bionics is higher. To evaluate the small-world, synchronization, and chaotic characteristics of the neural networks, the ORL face recognition data set, created by AT&T Laboratories Cambridge, was chosen for the experiments. It contains 400 facial images of 40 people, with variations in posture, expression, and facial accessories. Each image is a 92 × 112 grayscale image with a black background, as shown in Fig. 6. There are two key indicators of a small-world network: the average path length L, which reflects global connectivity, and the clustering coefficient C, which measures the degree of local connectivity. In this section, the average path length and clustering coefficient are used to judge whether the BP network, DBN network, LeNet5 network, and KIII model have small-world characteristics in their structure. The definitions of these two indicators are as follows. Given a graph G with N nodes, the minimum number of edges that must be traversed from one node to another is the shortest path length d between the two nodes, and the average path length L is the average of the shortest path lengths over all node pairs. The average path length of node i is shown in Eq. (4), and the average path length of the entire network is shown in Eq. (5):

L_i = \frac{1}{N-1} \sum_{j \neq i} d_{ij}, \quad (4)

L = \frac{1}{N} \sum_{i=1}^{N} L_i. \quad (5)

A node with k neighbors can have at most k(k − 1)/2 edges among them; the ratio of E, the actual number of edges among a node's neighbors, to k(k − 1)/2 is computed, and the average of this ratio over all nodes is the clustering coefficient C. The clustering coefficient of node i is shown in Eq. (6), and that of the entire network in Eq. (7):

C_i = \frac{2E_i}{k_i(k_i - 1)}, \quad (6)

C = \frac{1}{N} \sum_{i=1}^{N} C_i. \quad (7)

In the BP network and KIII model, each neuron is regarded as a node and each connection as an edge. In the LeNet5 network, each feature map is defined as a node, and each convolution or pooling operation is defined as an edge. Whether a network has small-world characteristics cannot be judged directly from the sizes of L and C. Usually, an equivalent random graph with the same number of nodes, number of edges, and average degree ⟨k⟩ as the network is constructed first, and L_rand and C_rand are calculated as in Eqs. (8) and (9):

L_{rand} \approx \frac{\ln N}{\ln \langle k \rangle}, \quad (8)

C_{rand} \approx \frac{\langle k \rangle}{N}. \quad (9)

If L ≥ L_rand and C ≫ C_rand, the network has small-world characteristics. This judgment method is called the semi-quantitative categorical definition of the small-world network [28]. To judge small-world networks more accurately, Humphries et al. [43] gave a quantitative categorical definition. The ratio γ of the network's C to the equivalent random graph's C_rand, Eq. (10), the ratio λ of the network's L to the equivalent random graph's L_rand, Eq. (11), and the ratio S of γ to λ, Eq. (12), are calculated:

\gamma = \frac{C}{C_{rand}}, \quad (10)

\lambda = \frac{L}{L_{rand}}, \quad (11)

S = \frac{\gamma}{\lambda}. \quad (12)

If S > 1, the network is considered a small-world network. In the experiment, the BP network is set to a three-layer structure with 7, 100, and 40 neurons per layer; when a depth of 4 is evaluated, a four-layer BP network with 7, 100, 50, and 40 neurons per layer is used. The structure of the DBN network contains 2 RBMs, i.e., a four-layer neural network with 944, 100, 100, and 40 neurons per layer.
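A minimal sketch of the quantitative test in Eqs. (10) to (12), using networkx to compute C, L, and the small-world index S against an equivalent random graph; the Watts-Strogatz example graph and the single random reference (rather than an ensemble average) are illustrative assumptions.

```python
import networkx as nx

def small_world_index(G, seed=0):
    """Compute S = gamma / lambda from Eqs. (10)-(12).

    gamma = C / C_rand and lambda = L / L_rand, where the random
    reference graph has the same number of nodes and edges as G.
    """
    n, m = G.number_of_nodes(), G.number_of_edges()
    C = nx.average_clustering(G)               # Eq. (7)
    L = nx.average_shortest_path_length(G)     # Eq. (5)
    R = nx.gnm_random_graph(n, m, seed=seed)   # equivalent random graph
    if not nx.is_connected(R):                 # L needs a connected graph
        R = R.subgraph(max(nx.connected_components(R), key=len))
    C_rand = nx.average_clustering(R)
    L_rand = nx.average_shortest_path_length(R)
    gamma = C / max(C_rand, 1e-12)             # guard against C_rand = 0
    lam = L / L_rand
    return gamma / lam                         # S > 1 => small world

G = nx.watts_strogatz_graph(200, k=6, p=0.1, seed=0)
print("S =", round(small_world_index(G), 2))   # typically well above 1
```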
The size parameters of the convolutional, pooling, and fully connected layers in the LeNet5 network structure are given in Table 2. The channel number of the KIII model is set to 20, 30, 50, 80, and 100, respectively. The experimental results for the small-world characteristic are given in Table 3. As shown in Table 3, the C of the BP network and DBN network is 0, the C of the LeNet5 network is close to 0, and the C of the KIII model is greater than 0. This is because the first three networks are feedforward networks: their structures are highly regular, and the neurons or feature maps within the same layer have no operations between them, i.e., there are no intra-layer edges, so no triangular loops are formed and C is 0 or close to 0. By contrast, the KIII model has connections within its layers. As the number of channels increases, the C of the KIII model gradually decreases, while the rate of change slows and the value stabilizes. As the network depth and the number of channels increase, the L and ⟨k⟩ of the BP network, DBN network, and KIII model increase, and ⟨k⟩ is positively correlated with L. Among them, the BP network and DBN network show a relatively large increase in ⟨k⟩ with a small L, whereas the KIII model shows a relatively large increase in L with a small ⟨k⟩. This is because the BP and DBN networks are fully connected networks, so increasing the depth of the network greatly increases the number of connections. In the KIII model, as the number of channels increases, the number of neurons increases, which enhances internal connectivity. The L_rand and C_rand of the equivalent random graph of each model are estimated, and the small-world statistics are calculated. Table 3 shows that S < 1 for the BP network, DBN network, and LeNet5 network, and S > 1 for the KIII model. According to the quantitative categorical definition of the small-world network, the BP network, DBN network, and LeNet5 network do not have small-world characteristics, but the KIII model does. Phase locking is one way to measure the degree of synchronization between two signals [44]. The output of a neuron is a direct reflection of the inner mechanism of an ANN. Therefore, to study neural network synchronization, we can take the output values of neurons and use the phase-locking method to measure whether synchronization exists between pairs of neurons. The specific calculation steps are as follows. Step 1. Suppose there are two signal time series x_i(t) and x_j(t). A phase-locking value is meaningful only for two signals in the same frequency band, so the signals are first band-pass filtered to remove noise and select the range of the target frequency band. In the experiment, we assume that the two signals already lie in the same frequency band, so no filtering is performed. Step 2. Calculate the instantaneous phases of the two signals. After step 1, the filtered signals are available, and the instantaneous phase of each is obtained.
In this paper, the Hilbert transform is used to obtain the instantaneous phase of each signal at every sampling point. The Hilbert transform is defined as in Eq. (13):

\tilde{x}(t) = \frac{1}{\pi}\, \mathrm{P.V.} \int_{-\infty}^{\infty} \frac{x(\tau)}{t - \tau}\, d\tau, \quad (13)

and the instantaneous phase of the signal at any time is given by Eq. (14):

\phi(t) = \arctan \frac{\tilde{x}(t)}{x(t)}. \quad (14)

Step 3. Calculate the phase-locking value from the instantaneous phases of the two signals. Let the phase difference between the two signals at any time be \Delta\phi(t) = \phi_i(t) - \phi_j(t); the phase-locking value between the two signals is then calculated as in Eq. (15):

\mathrm{PLV} = \left|\frac{1}{T} \sum_{t=1}^{T} e^{\mathrm{i}\,\Delta\phi(t)}\right|. \quad (15)

The calculated synchronization value PLV lies between 0 and 1. The larger the PLV, the stronger the synchronization, and vice versa. When PLV = 0, the two signals are completely out of sync, and when PLV = 1, the two signals are completely synchronized. To explore whether the BP network, DBN network, LeNet5 network, and KIII model have the synchronization characteristic, this section examines their neuron output values. In the experiment, the BP network uses a three-layer structure with 7, 100, and 40 neurons per layer, including one hidden layer. The DBN network uses a four-layer structure with 944, 100, 100, and 40 neurons per layer, including 2 RBMs, i.e., two hidden layers. The number of channels of the KIII model is 20. The size parameters of the convolutional, pooling, and fully connected layers of the LeNet5 network are shown in Table 2. The BP network and KIII model directly provide neuron outputs. The BP network iterates 400 times, and the values of the first, second, and fifth neurons in its hidden layer and the first neuron in the output layer are recorded at each iteration, so each neuron outputs 400 values. The KIII model is run for the same duration, i.e., 400 iteration steps, and the outputs of the first, second, and fifth neurons of the M1 layer and the first neuron of the M2 layer are selected. The phase-locking method is used to calculate the degree of synchronization between the time series formed by the neuron outputs. The results are presented in Table 4, where the symbol 1-2 denotes the PLV calculated from the outputs of the first and second neurons, and the other sequence numbers carry the same meaning. Likewise, 1(1)-2(1) denotes the synchronization between the first neuron in the first layer and the first neuron in the second layer. Table 4 shows that the BP network, DBN network, LeNet5 network, and KIII model all have synchronization characteristics. The PLV between neurons of the BP network, DBN network, and LeNet5 network is close to 1, i.e., these networks exhibit high synchronization. The reason is that weight correction in these networks proceeds along the negative gradient direction, which is itself synchronized, so the updated neuron outputs also change with a synchronous trend. As shown in Table 4, the BP network has strong temporal synchronization between neurons in the same layer, and the synchronization between layers is slightly weaker. The PLV of the DBN network remains relatively stable, indicating stable synchronization both within and between layers.
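A minimal sketch of steps 1 to 3, Eqs. (13) to (15): scipy's analytic-signal Hilbert transform yields the instantaneous phase, and the phase-locking value is the magnitude of the mean phase-difference phasor; the toy sine signals are illustrative assumptions, standing in for the neuron output series.

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two equal-length time series.

    The instantaneous phase comes from the analytic signal
    (Hilbert transform, Eqs. (13)-(14)); the PLV is
    |mean(exp(i * delta_phi))|, Eq. (15).
    """
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

t = np.linspace(0, 10, 400)              # 400 samples, matching the
s1 = np.sin(2 * np.pi * 1.0 * t)         #   400-iteration series above
s2 = np.sin(2 * np.pi * 1.0 * t + 0.8)   # constant phase lag: PLV ~ 1
s3 = np.sin(2 * np.pi * 1.7 * t)         # different frequency: PLV < 1
print("locked pair:  ", round(plv(s1, s2), 3))
print("unlocked pair:", round(plv(s1, s3), 3))
```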
When the LeNet5 model has iterated 200 times, the neuron values converge, so the subsequent outputs change little and the measured synchronization increases. The KIII model is also synchronous, with the highest degree of synchronization between adjacent nodes; when neurons are not adjacent, the synchronization between them is poorer. In particular, the output trends of the M1 and M2 nodes differ markedly, and their synchronization is low. Whether a neural network or dynamical system has chaotic characteristics can be judged qualitatively or quantitatively. Qualitatively, the phase diagram and power spectrum can be analyzed; quantitatively, Lyapunov exponents can be calculated. In this paper, the quantitative method is used: the Lyapunov exponent is calculated for analysis. If the Lyapunov exponent is greater than 0, the neural network has chaotic characteristics; if it is less than 0, it does not. The steps for calculating the Lyapunov exponent using the Wolf method [45] are as follows. Step 1. Determine the embedding dimension m and the time delay τ of the time series using the C-C method, and use the fast Fourier transform to calculate the average period of the series. Step 2. Reconstruct the phase space of the time series. Given the time series x_1, x_2, …, x_l, the embedding dimension m, and the time delay τ, the reconstructed phase space is obtained from Eq. (16), yielding M m-dimensional vectors:

Y_i = (x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau}), \quad i = 1, 2, \ldots, M, \quad M = l - (m-1)\tau. \quad (16)

Phase-space reconstruction expands the time series into a multi-dimensional phase space, fully displaying the information contained in the series. Step 3. Calculate the largest Lyapunov exponent of the time series. 1. Take Y(t_0) from the reconstructed vectors, record the distance L_0 between it and its nearest neighbor Y'(t_0), and track the evolution of these two points over time. 2. When the distance first exceeds a specified value ε > 0 at some time t_1, record the distance L_0' at that moment, retain Y(t_1), find a new nearest neighbor, and repeat this tracking-and-replacement procedure until the end of the series; the largest Lyapunov exponent is then the time average of the logarithmic divergence rates, λ_1 = (1/(t_M − t_0)) Σ_k ln(L_k'/L_k). To explore whether the BP network, DBN network, LeNet5 network, and KIII model have chaotic characteristics, this section takes the output time series of the first, second, and fifth neurons of the hidden layers of each network and of the M1 node of the KIII model. In the experiment, the BP network uses a three-layer structure with 7, 100, and 40 neurons per layer, including one hidden layer. The DBN network uses a four-layer structure with 944, 100, 100, and 40 neurons per layer, including 2 RBMs, i.e., two hidden layers. The number of channels of the KIII model is 20. To obtain the best embedding dimension m and time delay τ, this paper increases the number of iterations of the neural networks to obtain longer time series. The DBN network and LeNet5 network are prone to overfitting when the number of iterations is raised: the changes in neuron output values become negligible and the outputs easily settle into a stable state. Therefore, the chaotic characteristic of the LeNet5 network is not discussed here, and the chaotic characteristics of deep networks are discussed with the DBN network as a representative. In the experiment, the output time series length of each neuron of the BP network and KIII model is 3000, and that of the DBN network is 280.
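A minimal sketch of steps 2 and 3: time-delay embedding per Eq. (16) followed by a simplified Wolf-style estimate of the largest Lyapunov exponent that tracks nearest-neighbor divergence over a fixed horizon; the logistic-map test series and all parameter choices are illustrative assumptions, and a full analysis would pick m and τ with the C-C method as described above.

```python
import numpy as np

def embed(x, m, tau):
    """Eq. (16): delay embedding of series x into m dimensions."""
    M = len(x) - (m - 1) * tau
    return np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(M)])

def largest_lyapunov(x, m=3, tau=1, horizon=5):
    """Rough largest-Lyapunov estimate from nearest-neighbor divergence."""
    Y = embed(x, m, tau)
    n = len(Y) - horizon
    logs = []
    for i in range(n):
        d = np.linalg.norm(Y[:n] - Y[i], axis=1)
        d[max(0, i - tau):i + tau + 1] = np.inf  # skip temporal neighbors
        j = int(np.argmin(d))                    # nearest neighbor in phase space
        d0 = d[j]
        d1 = np.linalg.norm(Y[i + horizon] - Y[j + horizon])
        if d0 > 0 and d1 > 0:
            logs.append(np.log(d1 / d0) / horizon)
    return float(np.mean(logs))                  # > 0 suggests chaos

# Logistic map at r = 4 is chaotic; its true exponent is ln 2 ~ 0.693.
x = np.empty(3000)
x[0] = 0.4
for k in range(2999):
    x[k + 1] = 4.0 * x[k] * (1.0 - x[k])
print("estimated exponent:", round(largest_lyapunov(x), 2))
```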
For the KIII model, this section sets the duration to 3000 s, applies the stimulus at 1000 s, and removes it at 2000 s. The Lyapunov exponent of the KIII model is therefore computed in three stages. The BP network and DBN network iterate 3000 times, and the Lyapunov exponent of each sequence is calculated directly. The experimental results for the chaotic characteristic are shown in Table 5. In Table 5, the Lyapunov exponents of BP-1, BP-2, and BP-5 in the BP network are all far less than 0, so this neural network does not possess the chaotic characteristic. In the DBN network, the Lyapunov exponent of DBN-1 in the hidden layer is slightly less than 0, that of DBN-2 is much less than 0, and that of DBN-5 is slightly greater than 0; thus some neurons of this network have the chaotic characteristic, and a deep network may possess a certain degree of chaos. The Lyapunov exponents of the KIII-M1-1, KIII-M1-2, and KIII-M1-5 neurons in the interval [1-1000] are all greater than 0, indicating that the model has chaotic characteristics before receiving stimulation. The Lyapunov exponents are less than 0 in the interval [1001-2000], indicating that the system quickly enters a stable state from chaos when stimulated. In the interval [2001-3000], the Lyapunov exponent slowly approaches 0, and the system gradually returns to its original chaotic state. In the preceding parts of this paper, the bionic properties of four kinds of artificial neural networks (BP network, DBN network, LeNet5 network, and KIII model) were analyzed and researched both quantitatively and qualitatively. From this research, we can draw the following conclusions. (1) In terms of qualitative analysis, compared with the other three neural networks, the KIII model comes closer to the real neural network and has a higher degree of bionics. This is because the KIII model is a structural simplification of the anatomical and physiological structure of the olfactory system: unlike the purely layer-to-layer connections of the other three networks, its connections are not restricted to adjacent layers, which is closer to the actual nervous system. In terms of neuron equations and transmission, the KIII model has obvious feedback and control functions, and its neurons are pulse neurons, i.e., the Hodgkin-Huxley (H-H) model, which have biological characteristics such as pulses. In its self-learning method, the KIII model uses Hebbian learning rules and adaptive learning rules to update the weights. (2) In terms of quantitative analysis, by examining the small-world, synchronization, and chaotic characteristics of the four artificial neural networks, the KIII model is found to be closer to the real biological neural network than the other three. This is because the KIII model has small-world characteristics in structure as well as synchronization and chaos within the network. The BP network, DBN network, and LeNet5 network have synchronization characteristics, and the DBN network and LeNet5 network additionally have certain chaotic characteristics. These four artificial neural networks (BP network, DBN network, LeNet5 network, and KIII model) were selected as the research objects for the following reasons. (1) The BP network is one of the most widely used and classical artificial neural network models.
Its simplicity and ease of implementation have led to its wide application by many scholars and practitioners. (4) The KIII model is a bionic olfactory model proposed on the basis of physiological anatomy and electrophysiological experiments on the olfactory system, and research on it has followed a relatively complete path from theory to implementation and from modeling to application. The reason why this paper does not choose complex neural network models and large data sets, such as ResNet [46], is that starting from a complex network would undoubtedly increase the difficulty of the research without benefiting its development. Starting from simple neural network models makes it easier to study how closely ANNs mimic real neural networks, and proceeding from simple to complex is more conducive to advancing the study of the bionic degree of ANNs. Moreover, this paper provides, for the first time, a preliminary evaluation scale for various ANN models from the perspective of bionics, as a first attempt. Small-world, synchronization, and chaos characteristics were selected as the evaluation indicators for the following reasons. (1) In real life, many networks are found to have small-world characteristics [47]. (2) There are many synchronization phenomena in the biological nervous system [48]. (3) In the biological nervous system, researchers have conducted a large number of electrophysiological experiments, and the results show obvious chaotic phenomena from microscopic neurons to macroscopic brain waves and even the whole nervous system [40]. Therefore, researching the small-world characteristic of an ANN and the synchronization and chaos characteristics between its neurons is an important basis for studying its biomimetic degree, and it also provides a reference for future research on complex ANNs. As this paper is a preliminary attempt to assess the biomimetic degree of ANNs relative to real neural networks, some work remains to be improved, such as further refining the evaluation indexes, testing their versatility on different data sets, researching the bionic degree of complex ANNs, and using the evaluation indicators to evaluate the performance of the medical knowledge graph [49]. With the development of various types of neural networks, researchers have tended to evaluate artificial neural networks by how well they solve the target problem. However, this is not comprehensive. Therefore, from the perspective of bionics, this paper provides a measurement scale for the evaluation of various artificial neural network models for the first time and initially studies the degree of bionics of artificial neural networks relative to real neural networks. (1) This paper explores the bionic degree of four artificial neural network models (BP network, DBN network, LeNet5 network, and KIII model) in terms of network structure, neuron transmission and equations, and learning principles. From the comparative analysis, it can be concluded that the KIII model is closer to the actual biological neural network and has the relatively highest degree of bionics, while the DBN network and LeNet5 network deeply simulate the structural characteristics of the real nervous system.
(2) In this paper, the four neural network models are quantitatively analyzed using the established evaluation indexes. The analysis results show that the BP network, DBN network, and LeNet5 network exhibit synchronization characteristics during learning, and the DBN network and LeNet5 network additionally have certain internal chaotic characteristics; however, they still remain at a certain distance from the actual biological neural network. The KIII model has small-world characteristics in structure and exhibits synchronization and chaos within the network; compared with the other three networks, it is closer to the actual biological neural network. In the future, we will conduct further research on the evaluation indicators.

References
[1] A logical calculus of the ideas immanent in nervous activity
[2] Artificial neural network models for prediction of net radiation over a tropical region
[3] Low testosterone on social media: Application of natural language processing to understand patients' perceptions of hypogonadism and its treatment
[4] A probability density function generator based on neural networks
[5] SOAR improved artificial neural network for multistep decision-making tasks
[6] Action-specialized expert ensemble trading system with extended discrete action space using deep reinforcement learning
[7] Application of BP neural network to the prediction of coal ash melting characteristic temperature
[8] Forecasting of bioaerosol concentration by a Back Propagation neural network model
[9] An overview of biomimetic robots with animal behaviors
[10] Predicting risk of antenatal depression and anxiety using multi-layer perceptrons and support vector machines
[11] Energy based logic mining analysis with Hopfield neural network for recruitment evaluation
[12] Martial arts competitive decision-making algorithm based on improved BP neural network
[13] A multi-level output-based DBN model for fine classification of complex geo-environments area using Ziyuan-3 TMS imagery
[14] Introspective analysis of convolutional neural networks for improving discrimination performance and feature visualization
[15] Deep networks with stochastic depth
[16] Statistical field theory of the transmission of nerve impulses
[17] A network model of the barrel cortex combined with a differentiator detector reproduces features of the behavioral response to single-neuron stimulation
[18] Constraint-induced intervention as an emergent phenomenon from synaptic competition in biological systems
[19] Spiking neural network to extract frequent words from Japanese speech data
[20] Deep neural networks with weighted spikes
[21] Digital cameras with designs inspired by the arthropod eye
[22] Design of an artificial bionic neural network to control fish-robot's locomotion
[23] A novel bionic in vitro bioelectronic tongue based on cardiomyocytes and microelectrode array for bitter and umami detection
[24] Modelling ECT effects by connectivity changes in cortical neural networks
[25] Model of biological pattern recognition with spatially chaotic dynamics
[26] Bioinspired neural network with application to license plate recognition: Hysteretic ELM approach
[27] Image recognition technology based on deep learning
[28] Collective dynamics of 'small-world' networks
[29] Coding characteristics and small world characteristics of biological nervous system
[30] Reduction in interhemispheric functional connectivity in the dorsal visual pathway in unilateral acute open globe injury patients: A resting-state fMRI study
[31] Altered intrinsic functional connectivity of the primary visual cortex in patients with retinal vein occlusion: A resting-state fMRI study
[32] Disrupted small-world networks in schizophrenia
[33] The small world inside large metabolic networks
[34] Study on the spread of SARS virus using small-world network model
[35] Topology optimization of wireless sensor networks based on small world features
[36] Gait synchronization in Caenorhabditis elegans
[37] Attention modulates synchronized neuronal firing in primate somatosensory cortex
[38] Energy estimation and coupling synchronization between biophysical neurons
[39] Synchronization of human autonomic nervous system rhythms with geomagnetic activity in human subjects
[40] A chaotic outlook on biological systems
[41] Chaotic resonance in Hodgkin-Huxley neuron
[42] On the chaotic pole of attraction for Hindmarsh-Rose neuron dynamics with external current input
[43] Network 'small-worldness': A quantitative method for determining canonical network equivalence
[44] The role of phase synchronization in memory processes
[45] Determining Lyapunov exponents from a time series
[46] A crowd counting framework combining with crowd location
[47] Searching for small-world and scale-free behaviour in long-term historical data of a real-world power grid
[48] Various firing activities and finite-time synchronization of an improved Hindmarsh-Rose neuron model under electric field effect
[49] Research on medical knowledge graph for stroke

The authors declare that they have no competing interests.