key: cord-0063065-eao58t6f authors: Jia, Heming; Sun, Kangjian title: Improved barnacles mating optimizer algorithm for feature selection and support vector machine optimization date: 2021-05-13 journal: Pattern Anal Appl DOI: 10.1007/s10044-021-00985-x sha: 2b02956a12dbf32400bea954c09bed8ca5e1c647 doc_id: 63065 cord_uid: eao58t6f With the rapid development of computer technology, data collection becomes easier, and data object presents more complex. Data analysis method based on machine learning is an important, active, and multi-disciplinarily research field. Support vector machine (SVM) is one of the most powerful and fast classification models. The main challenges SVM faces are the selection of feature subset and the setting of kernel parameters. To improve the performance of SVM, a metaheuristic algorithm is used to optimize them simultaneously. This paper first proposes a novel classification model called IBMO-SVM, which hybridizes an improved barnacle mating optimizer (IBMO) with SVM. Three strategies, including Gaussian mutation, logistic model, and refraction-learning, are used to improve the performance of BMO from different perspectives. Through 23 classical benchmark functions, the impact of control parameters and the effectiveness of introduced strategies are analyzed. The convergence accuracy and stability are the main gains, and exploration and exploitation phases are more properly balanced. We apply IBMO-SVM to 20 real-world datasets, including 4 extremely high-dimensional datasets. Experimental results are compared with 6 state-of-the-art methods in the literature. The final statistical results show that the proposed IBMO-SVM achieves a better performance than the standard BMO-SVM and other compared methods, especially on high-dimensional datasets. In addition, the proposed model also shows significant superiority compared with 4 other classifiers. Due to rapid technology advancement, an enormous amount of data is stored in databases. It becomes hard to make decisions for industrial intelligence by analyzing the stored data. Data mining is a process of acquiring information and knowledge from such huge data [1] . Feature selection (FS) is an important preprocessing step in the field of data mining and machine learning [2] . Its purpose is to eliminate the redundant and irrelevant features to compress the original data into a low-dimensional space, reduce the computational complexity, and increase the classification accuracy [3] [4] [5] . In essence, the process of FS is to select the optimal feature subset from the original dataset. In other words, it can be regarded as a combinatorial optimization task [6] . FS methods explicitly or implicitly combine some subset search mechanism and subset evaluation mechanism, which can be divided into three categories: filter, wrapper, and embedding [7] . The filter method performs FS on the dataset based on correlation statistics and then trains the learning model. There is no interaction between the process of FS and the process of training the learning model [8] . The wrapper method evaluates the selected feature subset based on the performance of the learning model. In other words, the purpose of the wrapper method is to select the optimal feature subset for a given learning model [9] . Therefore, the wrapper method usually achieves better results than the filter method. 
However, since the learning model needs to be trained many times in the FS process, the computational overhead of the wrapper method is usually much higher than that of the filter method [10] . For the embedding method, its idea is to embed the FS process into the construction of the learning model. Because of the complexity of the concepts, it is not easy to construct such models. In addition, it is also hard to improve the learning model to get better results [11] . After comparison and consideration, the wrapper-based FS is used in this paper. In general, learning tasks are divided into two categories: unsupervised learning and supervised learning. The unsupervised learning does not know the label of each training sample (i.e., the class of each training sample) in advance. For supervised learning, the training samples include inputs and outputs (i.e., features and class labels), which results in a better result than unsupervised learning in most cases [12] . The supervised algorithm commonly used includes decision tree (DT) [13] , naïve Bayes (NB) [14] , k-nearest neighbor (kNN) [15] [16] [17] , neural networks (NNs) [18, 19] , and support vector machine (SVM) [20] [21] [22] . Among them, SVM was first formally proposed by Cortes and Vapnik in 1995. Based on the statistical learning theory, SVM minimizes the structural risk to design the learning model. In addition, SVM has been used to solve the various artificial intelligence enabled applications due to excellent learning ability and generalization ability [23] , such as face recognition [24] , text classification [25] , handwriting character recognition [26] , and bioinformatics [27] . Although SVM has many advantages, it also has some limitations. For instance, it is sensitive to the initial values of parameters. These parameters include the penalty factor and the kernel parameters. The setting of these parameters can affect the generalization performance of SVM. The details of the SVM classifier will be shown in Sect. 3 of this paper. It is worth noting that the performance of SVM, like many other wrapper methods, also depends on the selected feature subset. The better feature subset can be obtained by an excellent search mechanism, which is crucial to improve the computational efficiency and classification accuracy [28, 29] . The curse of dimensionality (CoD) is the main obstacle to big data classification [30] . If a dataset contains N features, the number of available solutions increases exponentially with the number of features, resulting in 2 N solutions being generated and evaluated. This requires high computational cost, making researchers spend too much time to get a result [31] . Traditional dimension reduction methods cannot solve this problem well because of some limitations in hardware. Based on published high-quality papers, a new trend to solve this problem is developed. Researchers introduce metaheuristic algorithms (MAs) to solve the FS problem in classification tasks. MAs do not provide an exact solution but only an estimated result in a feasible time. According to the number of solutions, MAs can be divided into single-point search and population-based methods [32] . The single-point search method describes the search trajectory of a solution in the search space, such as Tabu search and simulated annealing [33] . Meanwhile, the population-based method describes the evolution process of a set of points in the search space, such as swarm intelligence (SI) algorithm and evolutionary algorithm (EA) [34] . 
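To make the scale of the subset search space mentioned above concrete: a dataset with N = 20 features already admits 2^20 = 1,048,576 candidate subsets, and one with N = 60 features admits 2^60 ≈ 1.15 × 10^18, so exhaustively training and evaluating a classifier on every subset is computationally infeasible. This combinatorial explosion is exactly what motivates the metaheuristic search methods discussed next.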
So far, many MAs have been proposed. Barnacle mating optimizer (BMO) is a newly proposed bio-inspired EA, originally designed by Sulaiman in 2020 [35] . BMO has the features of fewer parameters and can search promising regions of the search space. However, in the field of machine learning, the no free lunch (NFL) theorem logically proves: there is no algorithm for solving all optimization problems [36] . In other words, it is pointless to discuss which algorithm is better without the specific problem. This is the motivation of this research, as well as the NFL theorem, whereby we use Gaussian mutation, logistic model, and refraction-learning to improve the performance of BMO for the first time. Generally, an improved algorithm can help evaluate the potential features from the pool of features of a given machine learning problem. It can improve the performance and computation speed of the given machine learning models. Or, it is used to resolve the parameters tuning problem with most machine learning models. To realize a simultaneous optimization process, the proposed IBMO finally helps the SVM classifier find the optimal feature subset and parameters at the same time. In terms of experiments, a set of 23 classical benchmark functions are used to verify the impact of control parameters and introduced strategies. In addition, IBMO-SVM is also applied to 20 real-world datasets, including 4 high-dimensional datasets, and compared with other 6 state-of-the-art methods. They are particle swarm optimization (PSO) [37] , grasshopper optimization algorithm (GOA) [38] , slap swarm algorithm (SSA) [39] , Harris hawks optimization (HHO) [40], teaching-learning-based optimization (TLBO) [41] , and hypergraph-based genetic algorithm (HG-GA) [42] . The effectiveness and superiority of IBMO-SVM are evaluated by classification accuracy, selection size, fitness value, running time, Wilcoxon rank-sum test, and Friedman's test. Finally, the experimental results are more comprehensive and convincing through comparison with other 4 classifiers. They are logistic regression (LR), decision tree (DT), feedforward neural network (FNN), and k-nearest neighbor (kNN). The rest of this paper is organized as follows: Sect. 2 presents the previous related works. Section 3 introduces some preliminary knowledge, including a brief overview of BMO and SVM. Section 4 highlights the details of the proposed method. Experiments are implemented, and results are analyzed in Sect. 5. Finally, in Sect. 6, conclusions and future works are given. The learning algorithms combining with the machine learning techniques are currently used for classification tasks. Wan et al. proposed a novel manifold learning algorithm based on local structure, namely two-dimensional maximum embedding difference (2DMED). This method directly extracted the optimal projective vectors from 2D image matrices. In addition, it successfully avoided computing inverse matrices by virtue of difference trace. Experimental results showed that 2DMED got better recognition rates on face database and handwriting digital database [43] . Fuzzy 2D discriminant locality preserving projections (F2DDLPP) is a novel combination of 2D discriminant locality preserving projections (2DDLPP) and fuzzy set theory. This method enhanced the discriminant power in mapping into a low-dimensional space. Through comparison and analysis, F2DDLPP can select the most useful features for classification [44] . 
In 2017, the maximum margin criterion and fuzzy set theory were used to extend the development of locally graph embedding algorithms. It was an effective face recognition technique [45] . For other supervised learning problems, there are also many learning algorithms. SVM has some parameters to control different aspects of algorithm performance. Generally, there are three basic methods for tuning these parameters. Some researchers try different values to tune these parameters by orthogonal experiments. The manual selection method needs to know the influence of parameters on model capacity in advance. When there are three or fewer parameters, another common method is grid search. This method is very slow due to a large number of parameter combinations. The third method is to use MAs. The parameter search problem can be transformed into an optimization problem. In this case, decision variables are parameters, and the cost of optimization is the fitness value of the fitness function. To build an efficient classification model, FS can help improve the accuracy of the model. Some distinguished lines of researches perform FS and simultaneously consider parameters of SVM. Such examples are presented as follows. In [37] , Huang et al. combined discrete PSO with continuous PSO to simultaneously perform the feature subset selection and SVM parameter setting. Additionally, PSO-SVM was implemented with a distributed parallel architecture to reduce the computational time. A hybrid method based on the GOA was presented by Aljarah et al. [38] to achieve the same goal in 2018. The experimental results revealed that GOA was superior to grid search, PSO, genetic algorithm (GA), multi-verse optimizer (MVO), gray wolf optimizer (GWO), firefly algorithm (FF), bat algorithm (BA), and cuckoo search (CS) on improving the SVM classifier accuracy. In 2020, Al-Zoubi et al. applied the SSA-SVM method to 3 widespread medical cases. Compared with other methods, this model had better performance in accuracy, recall, and precision, and was an effective method to solve popular diagnosis problems [39] . Recently, Houssein et al. have hybridized HHO with SVM and kNN for chemical descriptor selection and compound activities. Compared with competitor methods, HHO-SVM had higher performance. In addition, when the number of iterations increases, HHO-SVM obtained better results than HHO-kNN [40] . Examples of such native MAs which are applied for this optimization field are also GA [46] , ant colony algorithm optimization (ACO) [47] , teaching-learning-based optimization (TLBO) [41] , brain storm optimization (BSO) [48] , etc. A hypergraph framework was added to GA (called HG-GA) by Gauthama Raman et al. [42] . By using the hyperclique property of hypergraph to generate the initial population, the search for the optimal solution was accelerated, and trapping at the local optimum was prevented. To deal with an intrusion detection system (IDS), the HG-GA-SVM model was used and compared with GA-SVM, PSO-SVM, BGSA-SVM, random forest, and Bayes net. In terms of classifier accuracy (approximately increase 2%), detection rate, false alarm rate, and runtime, HG-GA-SVM achieved overwhelming performance. Baliarsingh et al. [49] proposed a method known as memetic algorithm-based SVM (M-SVM), which was inspired by embedding social engineering optimizer (SEO) in emperor penguin optimizer (EPO). SEO was considered a local search strategy, and EPO was used as a global optimization framework. 
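To make the wrapper formulation described above concrete, the short sketch below frames SVM parameter tuning as minimization of a cross-validated error. It is an illustration only: scikit-learn's SVC stands in for the SVM implementations used in the cited works, the dataset and grid values are placeholders, and the helper name svm_cv_error is ours.

```python
# Minimal sketch: casting SVM hyperparameter tuning as an optimization problem.
# Assumption: scikit-learn's SVC stands in for LIBSVM; the fitness is simply
# the 10-fold cross-validated error rate for a candidate (C, gamma) pair.
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
X = MinMaxScaler().fit_transform(X)           # scale features to [0, 1]

def svm_cv_error(c, gamma):
    """Fitness of a candidate (C, gamma): 1 - mean 10-fold CV accuracy."""
    model = SVC(C=c, gamma=gamma, kernel="rbf")
    return 1.0 - cross_val_score(model, X, y, cv=10).mean()

# Grid search enumerates a fixed lattice of decision variables ...
grid = [(c, g) for c in (0.1, 1, 10, 100) for g in (0.001, 0.01, 0.1, 1)]
best = min(grid, key=lambda cg: svm_cv_error(*cg))
print("grid-search choice (C, gamma):", best)
# ... whereas a metaheuristic treats (C, gamma) as continuous decision variables
# and iteratively proposes new candidates guided by this same fitness value.
```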
Returning to the M-SVM study above, the experiment was analyzed from two aspects, binary-class datasets and multi-class datasets. The statistical results show that the proposed method outperforms other competent methods for gene selection and classification of microarray data.

Based on the literature review, it can be found that researchers have never stopped exploring. Motivated by the NFL theorem, we propose a novel method to better tackle this problem.

Barnacles are micro-organisms that attach themselves to objects in the water. Their exceptionally long penis is their main anatomical feature, and their mating group includes all neighbors and competitors within its reach. The barnacles mating optimizer is inspired by this mating process. By simulating three processes (initialization, selection, and reproduction), a practical optimization problem is solved. Details are described as follows [35]:

Firstly, the candidate solutions are assumed to be barnacles, and the population can be expressed as a matrix using Eq. (1). The population is evaluated and sorted so that the best solution found so far is located at the top of X. Then, the parents to be mated are selected by Eqs. (2) and (3), where N is the size of the barnacle population, n is the number of control variables, and barnacle_d and barnacle_m denote the Dad and Mum parents to be mated. Since there are no specific equations that describe the reproduction of barnacles, BMO produces offspring from the genotype frequencies of the parents based on the Hardy-Weinberg principle [50, 51]. It is worth highlighting that the penis length (pl) plays an important role in switching between the exploitation and exploration processes. Assuming pl = 7, it can be seen from Fig. 1 that barnacle #1 can only mate with one of the barnacles #2-#7 (Fig. 1: selection of the mating process of BMO [35]; image of barnacles adopted from [52]). If the barnacle to be mated lies within the range pl of the Dad barnacle, the exploitation process occurs, and Eq. (4) produces the new offspring variables from the barnacle parents:

x_new^N = p · x^N_barnacle_d + q · x^N_barnacle_m, (4)

where p is the normally distributed random number between [0, 1], q = (1 − p), and x^N_barnacle_d and x^N_barnacle_m represent the variables of the Dad and Mum barnacles selected in Eqs. (2) and (3). Here p and q represent the genotype frequencies of the Dad and Mum barnacles in the new offspring. If barnacle #1 selects barnacle #8, the pl limit is exceeded, so the normal mating process does not occur. In this case, the offspring is produced by the sperm-cast process, which BMO regards as the exploration process:

x_new^N = rand() × x^N_barnacle_m, (5)

where rand() is a random number between [0, 1]. It can be seen from Eq. (5) that the new offspring is produced by the Mum barnacle alone, since she receives the sperm released into the water by other barnacles elsewhere. During the iterations, the position of each barnacle is updated according to Eq. (4) or Eq. (5). In this way, BMO approximates the global optimum of an optimization problem.
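The following snippet sketches one BMO generation written directly from the description of Eqs. (2)-(5) above. It is an illustrative reading of the operators, not the authors' reference implementation; the array layout, per-offspring parent sampling, and clipping step are our assumptions.

```python
# Minimal sketch of one BMO generation, following Eqs. (2)-(5) as described above.
import numpy as np

rng = np.random.default_rng(0)

def bmo_step(pop, fitness, pl, lb, ub):
    """pop: (N, n) population; fitness: (N,) values (lower is better)."""
    order = np.argsort(fitness)                  # evaluation and sorting (best first)
    pop = pop[order]
    N, n = pop.shape
    offspring = np.empty_like(pop)
    for i in range(N):
        dad, mum = rng.integers(0, N, size=2)    # Eqs. (2)-(3): parent selection
        if abs(dad - mum) <= pl:                 # within penis length: normal mating
            p = rng.random()                     # genotype frequency of Dad, Eq. (4)
            q = 1.0 - p
            offspring[i] = p * pop[dad] + q * pop[mum]
        else:                                    # out of range: sperm cast, Eq. (5)
            offspring[i] = rng.random(n) * pop[mum]
    return np.clip(offspring, lb, ub)            # keep barnacles inside the search space
```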
For linearly separable problems, the core idea of SVM is to find an optimal hyperplane that maximizes the margin between the two classes. In this case, the generalization ability of the model is the strongest and the classification result is the most robust. Some concepts of SVM are shown in Fig. 2. Given the dataset D = {(x_i, y_i)}, i = 1, ..., N, x ∈ R^d, y ∈ {±1}, the separating hyperplane is w·x + b = 0. Further, maximizing the margin is equivalent to minimizing ‖w‖². Introducing the slack variables ξ_i, where ξ_i > 0 indicates that a small number of outliers are tolerated, the penalty factor c is one of the critical parameters and controls the tolerance to outliers. The standard (soft-margin) SVM model is

min_{w,b,ξ} (1/2)‖w‖² + c Σ_i ξ_i, subject to y_i(w·x_i + b) ≥ 1 − ξ_i, ξ_i ≥ 0,

where w is the weight vector defining the orientation of the hyperplane and b is a constant bias term. For the nonlinear case, SVM maps the data from the input space to a high-dimensional feature space; this idea is vividly shown in Fig. 3. The inner product of feature vectors must be computed under this nonlinear transformation. To avoid this obstacle, the kernel function k(⋅, ⋅) is introduced to express the result of the inner product directly. The SVM model can then be transformed into the following dual problem:

max_α Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j k(x_i, x_j), subject to Σ_i α_i y_i = 0, 0 ≤ α_i ≤ c,

where α_i represents a Lagrange multiplier. In this paper, the widely applicable radial basis function (RBF) kernel is adopted, k(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²)), where σ represents the width of the RBF kernel. The penalty factor c and the kernel parameter σ directly affect the generalization ability and the complexity of SVM.

In this section, the proposed model, which uses IBMO for FS and SVM optimization, is described in detail. Firstly, two formulation issues are addressed: the representation of the solution and the definition of the fitness function. Secondly, the improvement ideas of IBMO are elaborated, and the pseudocode and flowchart of IBMO are presented. Finally, the flowchart of the proposed application model is given.

In FS tasks, the solution is represented in binary form. Each variable is limited to [0, 1]. If its value lies in (0.5, 1], it is mapped to bit "1", meaning the corresponding feature is retained; if it lies in [0, 0.5], it is mapped to bit "0", meaning the corresponding feature is rejected. Fig. 4 shows a solution with 8 variables (i.e., 8 features) in which the 1st, 5th, and 6th features are selected. In this paper, the first two variables of the solution are defined as the penalty factor c and the kernel parameter σ, and the remaining variables correspond to the selected features. In other words, whereas each solution has n variables in Eq. (1), each redefined solution, as shown in Eq. (10), has n + 2 variables.

A fitness function is required to evaluate each solution. FS is a multi-objective optimization problem that must achieve both fewer selected features and higher classification accuracy. To balance the two, the fitness function in Eq. (11) is defined as

Fitness = α · R(D) + β · |R| / |N|, (11)

where R(D) is the error rate of the SVM classifier, |R| is the number of selected features, |N| is the total number of original features, and α and β are two weights reflecting the impact of classification performance and feature subset size, with α ∈ [0, 1] and β = (1 − α).
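The sketch below makes the encoding of Eq. (10) and the fitness of Eq. (11) concrete. The search ranges for the two SVM parameters, the use of scikit-learn's SVC (with its gamma parameterization of the RBF width) in place of LIBSVM, and the helper names are our illustrative assumptions.

```python
# Sketch of the solution representation (Eq. (10)) and fitness (Eq. (11)).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

ALPHA, BETA = 0.99, 0.01          # weights on error rate and feature ratio (Sect. 5)
C_RANGE, GAMMA_RANGE = (0.01, 100.0), (0.001, 10.0)   # assumed search ranges

def decode_solution(sol):
    """First two variables -> (C, gamma); remaining n variables -> feature mask."""
    c = C_RANGE[0] + sol[0] * (C_RANGE[1] - C_RANGE[0])
    gamma = GAMMA_RANGE[0] + sol[1] * (GAMMA_RANGE[1] - GAMMA_RANGE[0])
    mask = sol[2:] > 0.5           # bit "1": feature kept, bit "0": feature rejected
    return c, gamma, mask

def fitness(sol, X, y):
    c, gamma, mask = decode_solution(np.asarray(sol))
    if not mask.any():             # guard: an empty subset cannot be evaluated
        return 1.0
    acc = cross_val_score(SVC(C=c, gamma=gamma), X[:, mask], y, cv=10).mean()
    return ALPHA * (1.0 - acc) + BETA * mask.sum() / mask.size   # Eq. (11)
```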
A well-designed optimizer should make full use of randomized operators in the early phase. In this way, the diversity of the population is enhanced and the solutions can deeply explore each region of the feature space. At the same time, the tail of the Gaussian distribution is narrow, so the mutation has a higher probability of generating a new solution in the vicinity of the original position; hence, the search proceeds in small steps around each position in the solution space. The Gaussian density function is defined as follows [53]:

f(x) = (1 / (√(2π) σ)) · exp(−(x − μ)² / (2σ²)), (12)

where μ represents the expected value and σ² represents the variance. Assuming μ = 0 and σ² = 1, this density reduces to the standard normal distribution used to generate the random step. The mutant position of a barnacle can then be expressed by Eq. (13), where G(σ) corresponds to the Gaussian step vector created by Eq. (12) and σ is the Gaussian random value in [0, 1].

A well-organized optimizer should achieve a high level of exploration at the beginning of the search and more exploitation in the last phase. In BMO, the value of pl plays an important role in determining the exploitation and exploration processes. The original paper concluded through experiments that a small value of pl leads to too much exploration, whereas a large value leads to too much exploitation, and suggested setting pl between 50% and 70% of the total population size. In the original paper, pl is a constant. We introduce a mathematical model that adjusts the value of pl dynamically as the iterations progress. The logistic model is adopted for this purpose [54]:

d pl(t)/dt = δ · pl(t) · (1 − pl(t)/pl_max), pl(0) = pl_min, (14)

where pl_max and pl_min represent the maximum and minimum values of pl, respectively, t represents the iteration number, and δ represents the initial decay rate. Solving Eq. (14) by separation of variables yields

pl(t) = (pl_min · pl_max) / (pl_min + (pl_max − pl_min) · e^(−δt)). (15)

It can be seen from Eq. (15) that the conversion parameter satisfies pl(t) = pl_min when t = 0, while pl(t) → pl_max as t → ∞. The influence of this conversion parameter on the optimization process is analyzed as follows. As mentioned above, a high level of exploration is required in the early phase, and a small value of pl helps the exploration process occur; therefore, pl(t) = pl_min when t = 0. As the search progresses, the exploitation phase normally follows the exploration phase; as the number of iterations increases, the value of pl also increases according to Eq. (15), and a larger value of pl benefits the exploitation process. Through this dynamic conversion parameter, a reasonable and fine balance between exploration and exploitation is achieved.

Some concepts of refraction are illustrated in Fig. 5 [55], where the refraction index η relates the incident and refracted rays (Eq. (16)). Letting the rate k = h/h′, Eq. (16) can be transformed into the following form:

x′ = (a + b)/2 + (a + b)/(2kη) − x/(kη), (17)

where a represents the upper bound and b represents the lower bound; x′ is called the opposite solution of x based on refraction-learning. Eq. (17) can be extended to n-dimensional space:

x′_j = (a_j + b_j)/2 + (a_j + b_j)/(2kη) − x_j/(kη), (18)

where a_j and b_j represent the jth dimensions of the upper and lower bounds, and x_j and x′_j are the jth dimensions of x and x′, respectively. More exploitation is often required in the last phase, but this brings the risk of trapping in a local optimum. In the last phase of BMO, the refraction-learning strategy is therefore introduced to overcome this drawback: the opposite solution of the global optimal solution is generated by Eq. (18), and the solutions are then evaluated and updated.

The native BMO has drawbacks such as low search accuracy and a tendency to become trapped in local optima. In this paper, three strategies are introduced to improve the performance of the algorithm. Firstly, Gaussian mutation is applied to the initial barnacles to enhance the diversity of the population. Secondly, the logistic model is adopted to realize the dynamic conversion of the important parameter pl, so as to achieve a fine balance between exploration and exploitation. Finally, the refraction-learning strategy is applied to the global optimal solution to generate its opposite solution; by evaluating and updating these solutions, the algorithm has a higher probability of escaping local optima. A sketch of these three operators is given below.
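The snippet sketches the three strategies as standalone operators: a Gaussian mutation around the current position, the logistic schedule of Eq. (15) for pl, and the refraction-learning opposite solution of Eq. (18). The multiplicative mutation form and the default constants are our assumptions for illustration, not the authors' exact implementation.

```python
# Illustrative sketches of the three IBMO strategies (Eqs. (13), (15), (18)).
import numpy as np

rng = np.random.default_rng(0)

def gaussian_mutation(x):
    """Eq. (13), one common form (assumed): perturb x with a standard-normal step."""
    return x + x * rng.normal(0.0, 1.0, size=x.shape)

def logistic_pl(t, pl_min, pl_max, delta=0.05):
    """Eq. (15): pl grows from pl_min toward pl_max as the iteration t increases."""
    return pl_min * pl_max / (pl_min + (pl_max - pl_min) * np.exp(-delta * t))

def refraction_opposite(x, lb, ub, eta=100.0, k=1000.0):
    """Eq. (18): opposite solution of x; eta = k = 1 reduces it to lb + ub - x."""
    return (lb + ub) / 2.0 + (lb + ub) / (2.0 * k * eta) - x / (k * eta)
```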
These strategies act at different levels of the algorithm, and a more detailed analysis has been given above. The pseudocode of IBMO is described in Algorithm 1 (excerpt: select the parents using Eqs. (2) and (3); check whether any barnacle goes beyond the search space and amend it; update the offspring generation using Eq. (4) or Eq. (5); perform refraction-learning for the global optimal solution using Eq. (18); calculate the fitness of each barnacle; sort and update the best solution; t = t + 1; return the best solution). The intuitive and detailed process of IBMO is shown in Fig. 6.

The computational complexity of IBMO is mainly related to the dimension (D), the population size (N), the maximum number of iterations (T), and the cost of the fitness function (F). The analysis therefore focuses on four components: initialization, fitness evaluation, sorting, and barnacle updating. The initialization is performed once, whereas fitness evaluation, sorting, and barnacle updating are repeated in each of the T iterations; the overall computational complexity of IBMO is the sum of these costs, as expressed in Eq. (19).

The proposed method commences by dividing the preprocessed dataset into training and testing sets. After that, the optimal model is obtained using tenfold cross-validation. IBMO starts by executing the random vector generated by Eq. (10). Then, SVM begins its training process on the training set restricted to the selected features. During this phase, inner cross-validation is carried out to produce a more robust model and avoid overfitting. IBMO receives the fitness value at the end of the training process. All the previous steps are repeated until the termination criterion (i.e., the maximum number of iterations) is met. Finally, the proposed method reports the optimal individual, and the selected individuals are applied in the testing phase. Figure 7 shows the framework of the proposed method.

To validate the efficiency of the proposed method, 20 standard datasets from the UCI repository are utilized [56]. Table 1 reports the details of the selected datasets, such as the number of features, instances, and classes. As can be seen, some datasets are considered high-dimensional because they have thousands of features, which makes our work more challenging and the results more comprehensive. Before using the datasets, it is essential to preprocess them in two steps. Firstly, all features are converted into numeric form; for example, in the Hepatitis dataset, males and females can be converted into 0 and 1, respectively. Then, min-max normalization is used to scale the features to [0, 1], which alleviates the effect of numeric magnitude on feature weights:

F_norm = (F − F_min) / (F_max − F_min), (21)

where F_norm represents the normalized feature, and F_min and F_max are the minimum and maximum values of the targeted feature F, respectively. LIBSVM is used for the SVM classifier [57]. Tenfold cross-validation is used to obtain unbiased classification results: each dataset is divided into ten equal parts, nine folds are used for training and the remaining fold for testing, and the process is repeated ten times so that each part serves once as the testing set. Figure 8 shows the diagram of tenfold cross-validation for a single run.
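The evaluation protocol above (min-max normalization of Eq. (21) followed by tenfold cross-validation) can be reproduced with a few lines; the sketch below uses scikit-learn for illustration, whereas the paper itself relies on LIBSVM [57], and the dataset is a stand-in.

```python
# Sketch of the preprocessing and tenfold cross-validation protocol described above.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)     # stand-in for a UCI dataset
X = MinMaxScaler().fit_transform(X)             # Eq. (21): scale each feature to [0, 1]

accs = []
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):     # nine folds train, one fold tests
    clf = SVC(C=1.0, gamma="scale").fit(X[train_idx], y[train_idx])
    accs.append(clf.score(X[test_idx], y[test_idx]))
print(f"tenfold CV accuracy: {np.mean(accs):.4f} +/- {np.std(accs):.4f}")
```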
The proposed method is compared with 6 state-of-the-art methods, including PSO [37], GOA [38], SSA [39], HHO [40], TLBO [41], and HG-GA [42], based on several evaluation metrics. The maximum number of iterations for all algorithms is 100, and the population size is 30. We follow the same parameter values as in the original papers; the parameter settings of the algorithms are shown in Table 2. Moreover, the parameter α in the fitness function is set to 0.99 and the parameter β to 0.01 according to domain-specific knowledge [58, 59]. Under the same experimental conditions, the fairness of the comparison is guaranteed; Table 3 shows these details. To counter the random nature of the test results, each experiment is run 10 times independently. The evaluation metrics are:
• Classification accuracy: evaluates how accurately the classifier predicts the correct class using the selected feature subset.
• Selection size: evaluates the size of the optimal feature subset obtained by the search algorithm.
• Fitness value: combines the above two factors through the fitness function of the FS optimization problem.
• Running time: reflects the execution speed of the method.
• P-value: used to detect significant differences between two methods based on two nonparametric statistical tests (the Wilcoxon rank-sum test and Friedman's test).

As discussed in Sect. 4.2, the conversion parameter strategy based on the logistic model allows IBMO to transition smoothly between exploration and exploitation, and the refraction-learning strategy is effective in enhancing exploitation during the evolution. However, some control parameters are crucial to the performance of the algorithm. The purpose of this subsection is to analyze the sensitivity of these control parameters and to provide the basis for the following experiments. In Eq. (15), the parameter δ controls the changing trend of the pl value. For an intuitive comparison, Fig. 9 (comparison of the control parameter pl) shows the fixed pl value used in BMO and the different pl schedules in IBMO with δ = 0.1, 0.05, 0.03. In BMO, the original paper suggests setting the pl value to 70% of the population size. In IBMO, the pl_min and pl_max values are set to 50% and 70% of the population size, respectively. Table 7 shows that there is no regular increase or decrease in the average fitness as δ changes. IBMO with δ = 0.05 obtains better results on all functions except F8, because the conversion parameter strategy based on the logistic model with δ = 0.05 makes IBMO more effective in the transition between global and local search.
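As a quick illustration of this schedule, the short sketch below tabulates pl(t) from Eq. (15) for the three δ values compared in Fig. 9; the 50%/70% bounds of a population of 30 are the setting stated above.

```python
# Tabulate the dynamic pl schedule of Eq. (15) for the three delta values of Fig. 9.
# pl_min/pl_max follow the 50%/70%-of-population setting described above (N = 30).
import numpy as np

pl_min, pl_max = 0.5 * 30, 0.7 * 30          # 15 and 21 barnacles

def pl(t, delta):
    return pl_min * pl_max / (pl_min + (pl_max - pl_min) * np.exp(-delta * t))

for delta in (0.1, 0.05, 0.03):
    values = [round(pl(t, delta), 1) for t in (0, 25, 50, 75, 100)]
    print(f"delta={delta}: pl at t = 0, 25, 50, 75, 100 ->", values)
```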
In Eq. (18), the refraction index η and the rate k affect the position of the opposite solution in the search space. The refraction index η is studied using 4 different values (η = 1, 10, 100, 1000), and the rate k is set to the same values (k = 1, 10, 100, 1000). Different types of functions are tested to find the optimal combination of η and k. Table 8 gives the average fitness values. As can be inferred from Table 8, IBMO with η = 1 and k = 1 obtains relatively weak results, while the other combinations give broadly similar results. Figure 10 explains the impact of the parameter combination on the refraction-learning strategy by showing the current solution, the opposite solution, and the optimal solution. When η = 1 and k = 1, Eq. (18) simplifies to x′_j = a_j + b_j − x_j, and the opposite solution corresponding to the current solution x is x′_1. By tuning the parameters η and k, the opposite solution x′_2 can be brought closer to the optimal solution. The proper combination of parameters increases the probability of escaping the local optimum. In addition, larger η and k values leave the performance of the algorithm essentially unchanged. We finally use the values of 100 and 1000 for η and k, respectively.

The purpose of this subsection is to study the impact of each improvement strategy. Five algorithm variants are shown in Table 9: if the corresponding strategy is used in BMO, it is marked "1", otherwise "0". 23 classical benchmark functions are used to evaluate performance, and the average (avg) and standard deviation (std) of the fitness values are reported in Table 10, with the best results in bold. Referring to Table 10, IBMO's avg and std are the smallest in most cases, and BMO-1, BMO-2, and BMO-3 also achieve smaller values than the native BMO. These promising results show that each strategy improves the performance of the native algorithm and that their combination works even better; convergence accuracy and stability are the main gains. To visualize the data, Fig. 11 shows the trend of the fitness values on F1, F10, and F14. The gain of each strategy, analyzed in Sect. 4.2, is further confirmed by the convergence curves. To sum up, IBMO achieves excellent performance on almost all benchmark functions, from which it can be concluded that the results are not accidental and the improvement is significant.

Sixteen low-dimensional datasets are used in this subsection to compare the performance of the proposed IBMO-SVM with the compared state-of-the-art algorithms. The quantitative and qualitative analyses are as follows. Table 11 shows the average and standard deviation of classification accuracy. Inspecting this table, IBMO-SVM performs better than the others: in terms of average, IBMO obtains the highest results on 68.75% of the datasets, while SSA, HHO, and HG-GA outperform IBMO on 12.5%, 12.5%, and 6.25% of the datasets, respectively. In terms of standard deviation, IBMO-SVM obtains the smallest results on 62.5% of the datasets, and both optimizers obtain the same std value on one dataset (i.e., ILPD). Figure 12 exhibits box charts of the eight algorithms on Iris, Wine, Parkinsons, and Sonar. In these figures, IBMO achieves higher and more concentrated accuracy values with few outliers. The classification accuracy metric demonstrates the stability of IBMO and its capability to search the promising regions of the search space. The number of selected features is another important metric for wrapper FS methods. Table 12 compares the average number of selected features on all datasets. IBMO selects the most significant features on 11 out of 16 datasets, and on the Breast Cancer dataset our method ranks second. Based on these results, IBMO significantly outperforms the others in minimizing the number of selected features. The fitness function involves two metrics: classification accuracy and feature selection ratio. Table 13 presents the best, worst, avg, and std of the fitness values of the eight algorithms. IBMO contributes the best fitness values on 56.25% of the datasets, the lowest avg values on 68.75% of the datasets, and the lowest std values on 75% of the datasets; thus, IBMO achieves the most consistent results. Figure 13 compares the convergence behavior of the different algorithms.
As can be seen from Fig. 13 , IBMO provides the lowest position curves compared with other state-of-the-art algorithms, and occasionally escapes from the local optimum to continue searching effective spaces. Overall, IBMO-SVM shows the best convergence behavior on real-world datasets. This also indicates the substantial impact of the proposed improvements on the native BMO. Running time metric indicates the execution speed of an algorithm. The average running time (in second) is given in Table 14 . Taking Zoo dataset as an example, the running time is sorted as follows: TLBO > SSA > GOA > BMO > IB MO > PSO > HG-GA > HHO. Table 14 shows that for almost all datasets, the running time by the proposed method is ranked in the middle of the eight algorithms. In addition, the running time of IBMO is slightly higher than that of BMO. We have analyzed the time complexity in Sect. 4.2.5, and the combination of three strategies leads to these slight changes. To improve the overall performance of BMO, it cannot guarantee to obtain all optimal parameters on all cases. So the running time of IBMO is acceptable. To detect significant differences between proposed IBMO-SVM versus compared algorithms, we apply a statistical test based on the Wilcoxon rank-sum test. The null hypothesis H 0 represents the statement of no difference, whereas the alternative hypothesis H 1 represents the presence of significant differences. A p-value represents the probability of observing given results at the 0.05 significance level. The p-value less than 0.05 represents a strong evidence against H 0 [60, 61] . Table 15 exhibits the results, where the p-value greater than 0.05 is bold. According to this table, the superiority of IBMO-SVM is statistically significant on most the datasets because most of the p-values are less than 0.05. On the whole, it is observed from the above study that the overall performance of IBMO-SVM is better than other compared algorithms for all evaluation metrics on the low-dimensional datasets. After analyzing the above results, four high-dimensional datasets are implemented to further evaluate the overall performance of the proposed algorithm. It is a challenging task that can make the experiments more comprehensive and the results more convincing. For high-dimensional datasets, the dimension of feature vectors is often larger than the capacity of available training samples. In the classification task, it often leads to the curse of dimensionality or empty space phenomenon [30] . Only a few of the thousands of features are important. Many classification methods with good performance become poor or even fail on testing high-dimensional datasets. This is the motivation and design purpose of this subsection. Further, the brief description of four high-dimensional datasets is shown in Table 16 . Table 17 compares the average and standard deviation of classification accuracy based on four high-dimensional datasets. Figure 14 also shows the feature selection ratio. Observing the results in Table 17 and Fig. 14 , it can be seen that IBMO is far superior to other competitors in dealing with high-dimensional datasets. Taking the Gastrointestinal lesions dataset as an example, the accuracy of IBMO is improved by 3.59% based on the native algorithm. Compared with PSO, IBMO is no less than 10% higher. Analyzing the number of features, for the Arcene dataset, the feature selection ratio of IBMO is 0.51 and is ranked first. Generally, HHO is also a good FS method with strong competitiveness. 
The fitness function is a comprehensive measure of the above two metrics. These results are shown in Table 18. It is not hard to see that the results are consistent and significant, and IBMO is still the champion algorithm. (Table 16 notes on the high-dimensional datasets: DBWorld e-mails — 64 e-mails from the DBWorld newsletter, used to train the algorithms to classify between "announces of conferences" and "everything else"; Arcene — obtained by merging three mass spectrometry datasets, where the original features give the abundance of proteins in human sera at a given mass value and the task is to separate cancer patients from healthy patients; Amazon reviews — derived from reviews on the Amazon Commerce Website for authorship identification of the 50 most active users, with 30 reviews collected per author.)

Friedman's test is a nonparametric statistical inference technique. It first ranks the data and then tests whether k (k ≥ 3) samples are significantly different. Equation (22) is used to compute the Friedman statistic S for k samples of size m, where R represents the obtained ranks. S approximately follows a χ² distribution with k − 1 degrees of freedom; when S ≥ χ²_{0.05}(k − 1), the null hypothesis H0 can be rejected at the 0.05 significance level [61]. Using the data obtained above as input, Table 19 provides the additional statistics, and Table 20 shows the ranking obtained by Friedman's test. With 7 degrees of freedom and a significance level of 0.05, the critical value of the test statistic is 14.067. The calculated Chi-square statistic is greater than 14.067, so the null hypothesis H0 can be rejected; moreover, the small p-values cast further doubt on the validity of H0. In terms of the obtained ranking, IBMO attains the highest rank and consistently shows excellent performance.

To comprehensively verify the effectiveness of the proposed model, it is further compared with 4 other classifiers: logistic regression (LR) [62], decision tree (DT) [13], feedforward neural network (FNN) [18], and k-nearest neighbor (kNN) [16]. To achieve a fair comparison, IBMO is also used with the other classifiers (with default parameter values) to find the feature subsets, and k = 5 is used for kNN in this work. For each method, accuracy, sensitivity, and specificity are used to evaluate performance. Sensitivity describes the proportion of correctly identified positive samples among all positive samples and is therefore also called the true positive rate. Specificity describes the proportion of correctly identified negative samples among all negative samples and is therefore also called the true negative rate. They are defined in Eqs. (23) and (24):

Sensitivity = TP / (TP + FN), (23)
Specificity = TN / (TN + FP), (24)

where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively. In terms of sensitivity, our proposed method achieves the higher results on 70% of the datasets. On the Ionosphere dataset, although our method does not surpass kNN, it ranks second. Looking at the specificity, our proposed method outperforms the others on 90% of the datasets and achieves a perfect value of 1.000 on the DBWorld e-mails dataset. To sum up, our proposed method delivers highly competitive results and can more accurately identify positives and negatives.
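As a pointer to how the two nonparametric tests used above can be reproduced, the sketch below relies on SciPy; the accuracy arrays are made-up placeholders, not results reported in the paper.

```python
# Illustrative use of the Wilcoxon rank-sum and Friedman tests discussed above.
import numpy as np
from scipy.stats import friedmanchisquare, ranksums

# Ten independent runs per method on one dataset (placeholder values only).
ibmo = np.array([0.97, 0.96, 0.98, 0.97, 0.96, 0.98, 0.97, 0.96, 0.97, 0.98])
pso  = np.array([0.94, 0.95, 0.93, 0.94, 0.95, 0.94, 0.93, 0.95, 0.94, 0.93])
hho  = np.array([0.95, 0.96, 0.95, 0.94, 0.96, 0.95, 0.94, 0.96, 0.95, 0.94])

# Wilcoxon rank-sum test: pairwise comparison at the 0.05 significance level.
stat, p = ranksums(ibmo, pso)
print(f"Wilcoxon rank-sum IBMO vs PSO: p = {p:.4g} -> "
      f"{'significant' if p < 0.05 else 'not significant'}")

# Friedman's test: are the k (>= 3) methods significantly different overall?
chi2, p_f = friedmanchisquare(ibmo, pso, hho)
print(f"Friedman chi-square = {chi2:.3f}, p = {p_f:.4g}")
```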
This paper proposes a novel classification model using IBMO for FS and parameter setting in SVM. The Gaussian mutation strategy is used to enhance population diversity, the conversion parameter strategy based on the logistic model is used to achieve a fine balance between exploration and exploitation, and the refraction-learning strategy helps the algorithm escape local optima. Thus, different strategies are designed for different evolution phases. To verify the impact of the control parameters and the introduced strategies, experiments are conducted on 23 classical benchmark functions. In addition, the proposed method is compared with 6 state-of-the-art methods, namely PSO, GOA, SSA, HHO, TLBO, and HG-GA, on 20 datasets, 4 of which are high-dimensional. The comparisons and extensive results reveal that IBMO-SVM outperforms the other wrapper methods on the different evaluation metrics. According to accuracy, sensitivity, and specificity, the proposed IBMO-SVM also achieves superiority over the competitor classifiers. Several directions for future work are suggested: other real-world datasets, such as coronavirus disease (COVID-19) data, can be tested further, and IBMO can be explored in other optimization domains; the Internet of Things, computer vision, and cloud computing are all of interest.

References (titles as recovered; full bibliographic details were not preserved):
Data Preprocessing
Hybrid whale optimization algorithm with simulated annealing for feature selection
Binary grasshopper optimization algorithm approaches for feature selection problems
Feature selection for high-dimensional classification using a competitive swarm optimizer
Genetic algorithm with aggressive mutation for feature selection in BCI feature space
Feature selection in mixed data: A method using a novel fuzzy rough set-based information entropy
Toward integrating feature selection algorithms for classification and clustering
Hybrid particle swarm optimization with spiral-shaped mechanism for feature selection
mr2PSO: a maximum relevance minimum redundancy feature selection method based on swarm intelligence for support vector machine classification
Feature subset selection in large dimensionality domains
A review of feature selection techniques in bioinformatics
A feature selection approach for hyperspectral image based on modified ant lion optimizer
DTBC: decision tree based binary classification using with feature selection and optimization for malaria infected erythrocyte detection
Feature subset selection using naive Bayes for text classification
Granger Causality Driven AHP for Feature Weighted kNN
k-NN binary classification of heart failures using myocardial current density distribution maps
Towards an energy-efficient anomaly-based intrusion detection engine for embedded systems
Training feedforward neural networks using multi-verse optimizer for binary classification problems
Hybrid hidden Markov models and artificial neural networks for handwritten music recognition in mensural notation
Feature selection for linear SVM with provable guarantees
SVMQA: support-vector machine-based protein single-model quality assessment
A method for multiclass sentiment classification based on an improved one-vs-one (OVO) strategy and the support vector machine (SVM) algorithm
The nature of statistical learning theory
A SVM face recognition method based on Gabor-featured key points
Web page classification based on a support vector machine using a weighted vote schema
Online handwriting recognition with support vector machines - a kernel approach
Support vector machine applications in bioinformatics
Optimal feature selection for support vector machines
Feature selection for SVMs
Training neural networks on high-dimensional data using random projection
An introduction to variable and feature selection
Metaheuristics in combinatorial optimization: overview and conceptual comparison
A simulated annealing algorithm for the clustering problem
Evolutionary algorithm based techniques to handle big data
Barnacles Mating Optimizer: A new bio-inspired algorithm for solving engineering optimization problems
No free lunch theorems for optimization
A distributed PSO-SVM hybrid system with feature selection and parameter optimization
Simultaneous feature selection and support vector machine optimization using the grasshopper optimization algorithm
Salp Chain-Based Optimization of Support Vector Machines and Feature Weighting for Medical Diagnostic Information Systems
A novel hybrid model using teaching-learning-based optimization and a support vector machine for commodity futures index forecasting
An efficient intrusion detection system based on hypergraph - genetic algorithm for parameter optimization and feature selection in support vector machine
Feature extraction using two-dimensional maximum embedding difference
Two-dimensional discriminant locality preserving projections (2DDLPP) and its application to feature extraction via fuzzy set
Local graph embedding based on maximum margin criterion via fuzzy set
Feature selection and parameter optimization for support vector machines: a new approach based on genetic algorithm with feature chromosomes
Intelligent fault diagnosis of rotating machinery using support vector machine with ant colony algorithm for synchronous feature selection and parameter optimization
Classification and feature selection method for medical datasets by brain storm optimization algorithm and support vector machine
A memetic algorithm using emperor penguin and social engineering optimization for medical data classification
Performing the exact test of hardy-weinberg proportion for multiple alleles
Hardy Weinberg and language impediments
Enhanced Moth-flame optimizer with mutation strategy for global optimization
Logistic regression-based pattern classifiers for symbolic interval data
Refraction-learning-based whale optimization algorithm for high-dimensional problems and parameter estimation of PV model
Machine Learning Repository
LIBSVM: a library for support vector machines
Binary ant lion approaches for feature selection
Evolutionary population dynamics and grasshopper optimization approaches for feature selection problems
An evolutionary gravitational search-based feature selection
A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms
Linear support vector machine and logistic regression for cerebral infarction classification

Acknowledgements: This work was supported by the Sanming University research start-up funding project for introduced high-level talents (20YG14), the Guiding Science and Technology Projects of Sanming City (2020-G-61), the Educational Research Projects for Young and Middle-aged Teachers in Fujian Province (JAT200618), and the Scientific Research and Development Fund of Sanming University (B202009).
Conflicts of interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.