key: cord-0047488-q5u41gzp authors: Tungom, Chia Emmanuel; Gulan, Maja; Niu, Ben title: A Performance Class-Based Particle Swarm Optimizer date: 2020-06-22 journal: Advances in Swarm Intelligence DOI: 10.1007/978-3-030-53956-6_16 sha: 2cbb4c5c50adf1868107fca2db121de60ced1352 doc_id: 47488 cord_uid: q5u41gzp

One of the main concerns with Particle Swarm Optimization (PSO) is to increase or maintain diversity during search in order to avoid premature convergence. In this study, a Performance Class-Based learning PSO (PCB-PSO) algorithm is proposed that not only increases and maintains swarm diversity but also improves exploration and exploitation while simultaneously speeding up convergence. In the PCB-PSO algorithm, each particle belongs to a class based on its fitness value, and a particle may change class at any evolutionary stage or search step based on its updated position. The particles are divided into an upper, a middle and a lower class. The upper class contains the particles with the top fitness values, the middle class those with average fitness, while particles in the lower class are the worst performing in the swarm. The number of particles in each group is predetermined. Each class has a unique learning strategy designed for a specific task: the upper class is designed to converge towards the best solution found, middle class particles exploit the search space, while lower class particles explore it. The algorithm's strength is its flexibility and robustness, as the population of each class allows us to prioritize a desired swarm behavior. The algorithm is tested on a set of 8 benchmark functions which have generally proven difficult to optimize. It is on par with some cutting-edge PSO variants and outperforms other swarm and evolutionary algorithms on a number of functions. On complex multimodal functions, it is able to outperform other PSO variants, showing its ability to escape local optima.

Optimization is one of the key factors in obtaining good performance in systems. In fact, optimization problems can be found everywhere in real life, from transportation to dieting. PSO is an intelligent optimization algorithm designed in the mid-1990s by Kennedy and Eberhart [1, 2]. The algorithm's working principle is based on simulating the collective behavior of bird flocks and fish schools. The original PSO algorithm has a fully connected network topology in which all particles learn from their personal best historical search position and from the global best particle in the swarm. This learning structure is the main reason why the original PSO algorithm is inefficient and can easily be trapped in a local minimum, as all particles are guided by one global leader. Several other topological structures have been introduced to enhance performance, e.g. the ring topology, the von Neumann topology and the pyramid topology [3]. These topologies use different ways to update the velocity and position of particles in the swarm. Fully Informed PSO (FIPS) determines the velocity of a particle by looking at its neighborhood topology [4]. Comprehensive Learning PSO (CLPSO) tries to solve the problem of premature convergence by using different learning topologies on different dimensions to ensure that diversity is maintained [5]. Exploration, the ability of the swarm to search its entire environment (global search), and exploitation, the ability of particles to thoroughly search their neighborhood (local search), are two important features of any PSO or search algorithm.
To ensure swarm stability, the stability-based adaptive inertia weight (SAIW) uses a performance-based approach to determine each particle's inertia weight [6]. Mixed Swarm Cooperative PSO (MCPSO) achieves exploration and exploitation by dividing particles into exploration and exploitation groups [7]. By leveraging a comprehensive learning strategy, Heterogeneous Comprehensive Learning PSO (HCLPSO) enhances its exploration and exploitation [8]. There are several other balanced algorithms that are specifically designed to ensure both exploration and exploitation [8-10].

In this study, a novel PSO algorithm called PCB-PSO is proposed, with a new learning topology that ensures exploration and exploitation while also ensuring a high convergence speed, thereby avoiding premature convergence. In PCB-PSO, particles are divided into three groups, the upper class, middle class and lower class, based on their fitness values, and a particle's group might change from iteration to iteration. The upper class consists of particles with superior performance while the lower class consists of the poorest performing members. The middle class is made up of members that are neither performing poorly nor showing superior performance. Particles in the same group have a common learning strategy or topology, which is different from those in the other groups. Lower class particles are designed to enhance exploration, middle class particles are designed for exploitation, and upper class particles are designed for fast convergence. The intuition here is that if a particle is performing poorly, it has to do more exploration; if its performance is good, it focuses on converging faster; and if it is performing neither poorly nor well, it should exploit its neighborhood. The main contributions of this study can be listed as follows:

1. It introduces a new paradigm of PSO learning that enhances exploration, exploitation and convergence speed simultaneously.
2. It deals with the problem of premature convergence by making some particles continue exploration and exploitation while others converge to a given minimum or optimum.
3. The problem of swarm diversity is dealt with by continuous exploration of the search space by lower class particles.
4. It introduces flexibility by allowing a given behavior to be prioritized by simply assigning more or all particles to a given class.

The rest of the paper is organized as follows. The original PSO algorithm and some learning topologies are reviewed in Sect. 2. The proposed PCB-PSO algorithm is discussed in Sect. 3, and in Sect. 4 the results are analyzed and compared with other PSO variants on several benchmark functions. Finally, in Sect. 5 we draw our conclusions and propose future research directions.

PSO is a population-based stochastic swarm and evolutionary computational algorithm. In PSO, a population of particles, each having a position and a velocity component, is used to find the solution to an optimization problem. Each particle is a solution and the search space is the set of all solutions to the given problem. The particles, or solutions, are evolved by updating the velocity and position after every iteration. A particle's update is done using its personal best experience and the best experience of the entire swarm. This update is designed to guide the particles towards the global best solution and eventually towards the optimal solution. The velocity and position of each particle are updated using Eqs. 1 and 2, respectively:

$V_i^{t+1} = wV_i^t + c_1 r_1 (X_{Pb}^t - X_i^t) + c_2 r_2 (X_{Gb}^t - X_i^t)$ (1)

$X_i^{t+1} = X_i^t + V_i^{t+1}$ (2)

The particles continue to evolve until a termination criterion is met, usually a pre-determined maximum number of iterations. Here $w$ is the inertia weight and $wV_i^t$ the inertia or momentum component. The inertia component directs a particle's trajectory in the search space, for both exploration and exploitation, towards unvisited search areas; its value usually depends on the search dimension and varies in the range [0, 1]. Particles' trajectories are maintained by the inertia component, which forces particles to navigate through areas independent of previously successful searches. For a given particle $i$ in the swarm, $X_i^t$ is its current position, $X_{Pb}^t$ its personal best position at iteration $t$, and $X_{Gb}^t$ the best position of the entire swarm, referred to as the global best. The social component $c_2 r_2 (X_{Gb}^t - X_i^t)$ and the cognitive component $c_1 r_1 (X_{Pb}^t - X_i^t)$ guide the particle towards the swarm's best solution and the particle's personal best solution, respectively. $c_1$ and $c_2$ are the cognitive and social acceleration coefficients, usually set to 2 by default in the basic PSO algorithm, and $r_1$ and $r_2$ are uniform random numbers generated between 0 and 1. $w$, $c_1$ and $c_2$ play an influential role in determining the swarm's behavior. The choice of each value is problem dependent and determines the convergence speed, exploitation and exploration ability of the swarm; therefore, the values should be chosen harmoniously. In a simple PSO algorithm, $w$ can be between 0.4 and 0.9 and $c_1$ and $c_2$ can be set to 2.
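To make the canonical update concrete, the following is a minimal, illustrative Python sketch of Eqs. 1 and 2, not the authors' implementation; the function name `pso_step`, the sphere objective and the parameter values in the example are our own assumptions for demonstration (minimization is assumed).

```python
import numpy as np

def pso_step(X, V, Pb, Pb_fit, Gb, fitness, w=0.7, c1=2.0, c2=2.0):
    """One iteration of the basic PSO update (Eqs. 1 and 2), minimization.

    X, V    : (N, D) current positions and velocities
    Pb      : (N, D) personal best positions, Pb_fit their fitness values
    Gb      : (D,) global best position
    fitness : callable mapping an (N, D) array to N fitness values
    """
    N, D = X.shape
    r1, r2 = np.random.rand(N, D), np.random.rand(N, D)
    V = w * V + c1 * r1 * (Pb - X) + c2 * r2 * (Gb - X)   # Eq. 1
    X = X + V                                             # Eq. 2
    fit = fitness(X)
    improved = fit < Pb_fit                               # update personal bests
    Pb[improved], Pb_fit[improved] = X[improved], fit[improved]
    Gb = Pb[np.argmin(Pb_fit)].copy()                     # update global best
    return X, V, Pb, Pb_fit, Gb

# Example usage on a sphere function (illustrative only)
sphere = lambda X: np.sum(X ** 2, axis=1)
N, D = 50, 50
X = np.random.uniform(-100, 100, (N, D))
V = np.zeros((N, D))
Pb, Pb_fit = X.copy(), sphere(X)
Gb = Pb[np.argmin(Pb_fit)].copy()
for t in range(1000):
    X, V, Pb, Pb_fit, Gb = pso_step(X, V, Pb, Pb_fit, Gb, sphere)
```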
In this section, an efficient variant of PSO called PCB-PSO is proposed to simultaneously tackle the problems of premature convergence, exploration, exploitation and diversity while also converging faster to the global minimum. The algorithm introduces a new learning topology, and the major differences from the base PSO are as follows: (1) particles in the swarm belong to one of three groups based on their fitness values; (2) each group has a learning strategy or update mechanism unique to it. In PCB-PSO, the best or top-performing particles fall into the Upper Class (UC), average-performing particles into the Middle Class (MC), and poorly performing particles into the Lower Class (LC). The population size of each class is predetermined and will be discussed in a later section. The intuition here is that classifying particles into three groups allows us to design a learning strategy for each group so that the swarm can simultaneously explore, exploit and converge while maintaining diversity throughout the search. This is opposed to the base PSO, where all particles learn from a global leader, making them move towards one region of the search space. UC particles, which have good performance and are most likely to find the optimal solution, are designed to enhance convergence speed. LC particles, the poorest performers in the swarm, roam the search area, exploring new solutions and maintaining the diversity of the swarm. MC particles, which are performing neither poorly nor well, are designed for exploitation, since they move in areas influenced by both UC and LC particles (Fig. 1).

We want to sort the particles into the three classes UC, MC and LC with population sizes $N_{uc}$, $N_{mc}$ and $N_{lc}$, respectively. The population of each class is determined in three simple steps. First, a class is chosen and the proportion of the total population $N$ that should belong to it is assigned, which determines the population of that class. Secondly, a proportion of the remaining population is assigned to the next class. Finally, the remainder of the population falls into the last class. Note that we can start with any class; for convenience, in this paper we start with UC, then MC, then LC. Let us say the proportion of particles we want in UC is 1/3 of $N$; then $S_{uc} = 3$. Let us assume the proportion of the remaining particles we want for MC is 1/2; then $S_{mc} = 2$. $N$ is used to determine $N_{uc}$ while $N - N_{uc}$ is used to determine $N_{mc}$, as shown in Eqs. 3 and 4, respectively. The remaining particles after selecting $N_{uc}$ and $N_{mc}$ make up $N_{lc}$ (Fig. 2).

UC and LC particles have probabilities associated with them. This probability is a measure of how likely a particle is to be chosen to be learned from by another particle in the learning strategy of a given class, which will be discussed later. For UC particles, the probability is proportional to the fitness value, meaning the higher the value, the higher the probability, while for LC particles the probability is inversely proportional to the fitness value. MC particles learn from both UC and LC members, and so we want the best particles in UC to be less likely to be learned from and the best particles in LC to be more likely to be learned from, so as to keep MC particles exploiting regions between MC and LC. The probability of a UC particle is calculated using Eq. 6 while that of an LC particle is calculated using Eq. 7, where $UC_i$ is the $i$-th particle of UC and $P_{UC_i}$ its probability, and $LC_i$ is the $i$-th particle of LC and $P_{LC_i}$ its probability. $f$ is calculated such that the appropriate probabilities are derived for LC particles, i.e. the fitness values are flipped for all members, with the poorest-fit particle becoming the fittest and vice versa.
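As an illustration of the class sizing and selection probabilities just described, here is a small Python sketch. Because Eqs. 3, 4, 6 and 7 are not reproduced in this text, the integer rounding of the class sizes and the exact probability forms (fitness-proportional for UC, "flipped" fitness for LC) are our assumptions for a minimization problem, not necessarily the authors' exact formulas; all function names are ours.

```python
import numpy as np

def class_populations(N, S_uc=3, S_mc=2):
    """Split a swarm of N particles into UC, MC and LC sizes.

    S_uc = 3 means 1/3 of N goes to UC; S_mc = 2 means 1/2 of the remaining
    particles go to MC; LC takes what is left (integer rounding assumed).
    """
    N_uc = N // S_uc
    N_mc = (N - N_uc) // S_mc
    N_lc = N - N_uc - N_mc
    return N_uc, N_mc, N_lc

def sort_into_classes(fitness, N_uc, N_mc):
    """Sort particle indices by fitness (minimization) into UC, MC and LC."""
    order = np.argsort(fitness)            # best (lowest) fitness first
    return order[:N_uc], order[N_uc:N_uc + N_mc], order[N_uc + N_mc:]

def selection_probabilities(fitness, uc_idx, lc_idx):
    """Probabilities of being selected as a learning exemplar.

    UC: proportional to the fitness value, so worse UC members are picked
    more often. LC: the fitness values are 'flipped' so that the best
    (lowest-fitness) LC member receives the largest weight, following the
    stated design goal.
    """
    f_uc = fitness[uc_idx]
    p_uc = f_uc / f_uc.sum()                       # assumed form of Eq. 6
    f_lc = fitness[lc_idx]
    flipped = f_lc.max() + f_lc.min() - f_lc       # swap best and worst weights
    p_lc = flipped / flipped.sum()                 # assumed form of Eq. 7
    return p_uc, p_lc

# Example: 50 particles, 1/3 assigned to UC, 1/2 of the remainder to MC
fitness = np.random.rand(50)
N_uc, N_mc, N_lc = class_populations(50)
uc, mc, lc = sort_into_classes(fitness, N_uc, N_mc)
p_uc, p_lc = selection_probabilities(fitness, uc, lc)
```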
In PSO, particles are updated by directing them, with a velocity, towards the global best solution of the swarm and the personal best solution of the given particle. This can be problematic: if the solution found is not actually the best solution in the search space, particles will fail to explore other regions and miss potentially better solutions. This problem has been addressed by introducing different learning strategies and parameter modifications. The simplest way is to vary the inertia weight and acceleration coefficients throughout the search to enhance exploration in the early stages and exploitation in the late stages. We leverage this idea but ensure that both exploration and exploitation are carried out throughout the search by introducing a new topology structure. This ensures a thorough search of the solution space, making it less likely to miss a global best solution. The velocity component is not used in this algorithm because, in higher dimensions, the inertia weight is required to be less than 0.1 to achieve reasonable results. Instead of using such a small inertia weight, we focus on tuning the acceleration of the social and cognitive components, which further simplifies the position update equation by eliminating the inertia weight and velocity. Hence, in this algorithm, we do not need to calculate the velocity component of a particle. In PCB-PSO, the update mechanism of each class is designed for a given property of exploration, exploitation or convergence of the particles. Three different update mechanisms are designed for the three classes, and each update strategy is unique to a given class.
UC Update Mechanism. These are the best-performing particles in the swarm and so are close to the best solution found at any point during the search. The update equation for UC is as shown in Eq. 8; the parameters are the same as discussed in Sect. 2 (Fig. 3).

MC Update Mechanism. Particles in this class are designed to exploit the region outside the best solution found, which ensures diversity and thorough exploitation of the space. The position update of this class is as shown in Eq. 9, where $SX_{uc}^t$ and $SX_{lc}^t$ are stochastically selected UC and LC particles at time $t$, respectively. Taking the difference between a UC and an LC particle allows the particle to fall somewhere outside the best solution already found, and using the particle's previous position instead of its personal best boosts the stochasticity of the search.

LC Update Mechanism. The update mechanism here is designed so that particles explore the entire search region. The movement of particles in this class is chaotic, which helps them roam the search space looking for better solutions. The position update is as shown in Eq. 10, where $X_R^t$ is a randomly generated position in the search space at iteration $t$, aiding the chaotic movement of particles in this group across the search space.

To further speed up convergence, at certain short intervals during the search the population of the entire swarm is set to $N_{uc}$ for UC learning, as shown in Algorithm 1.

There are two groups of parameters to be set in this algorithm: the class populations and the acceleration coefficients for each class. The recommended settings for the proposed algorithm are outlined in Table 1. Note that the population size of any class cannot be greater than that of the entire swarm, and the sum of all class populations must equal the population of the entire swarm. As shown in Algorithm 1, the class populations can be varied to push for quicker convergence at certain periods of the search. Because of the classification and behavioral setting of the particles, a higher swarm population will always yield better results, whereas in other algorithms increasing the swarm size beyond a certain threshold might not help performance; in PCB-PSO, this threshold is very high, especially for problems of higher complexity.
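The sketch below illustrates how the three class update rules and the class-wise learning could fit together in one iteration. Since Eqs. 8-10 are not reproduced in this text, the concrete update forms (UC attracted to its personal best and the global best, MC moved along the difference of stochastically selected UC and LC particles, LC moved towards a random point $X_R$), the acceleration values and the function names are our assumptions inferred from the descriptions above, not the authors' exact equations.

```python
import numpy as np

def pcb_pso_step(X, Pb, Gb, uc, mc, lc, p_uc, p_lc, bounds, c1=2.0, c2=2.0):
    """One velocity-free PCB-PSO iteration (assumed forms of Eqs. 8-10).

    X  : (N, D) current positions; Pb : (N, D) personal bests; Gb : (D,) global best
    uc, mc, lc : integer index arrays for the three classes
    p_uc, p_lc : selection probabilities over uc and lc (see previous sketch)
    bounds     : (low, high) limits of the search space
    """
    D = X.shape[1]
    low, high = bounds
    X_new = X.copy()

    # UC (assumed Eq. 8): converge using cognitive and social attraction only,
    # with the inertia/velocity term removed as described in the text.
    r1 = np.random.rand(len(uc), D)
    r2 = np.random.rand(len(uc), D)
    X_new[uc] = X[uc] + c1 * r1 * (Pb[uc] - X[uc]) + c2 * r2 * (Gb - X[uc])

    # MC (assumed Eq. 9): exploit regions outside the best solution found by
    # moving the previous position along the difference between a
    # stochastically selected UC particle and a stochastically selected LC one.
    s_uc = X[np.random.choice(uc, size=len(mc), p=p_uc)]
    s_lc = X[np.random.choice(lc, size=len(mc), p=p_lc)]
    X_new[mc] = X[mc] + np.random.rand(len(mc), D) * (s_uc - s_lc)

    # LC (assumed Eq. 10): explore by moving towards a randomly generated
    # position X_R, giving the chaotic roaming behavior described above.
    X_R = np.random.uniform(low, high, (len(lc), D))
    X_new[lc] = X[lc] + np.random.rand(len(lc), D) * (X_R - X[lc])

    return np.clip(X_new, low, high)
```

Under these assumptions, the periodic convergence boost of Algorithm 1 would correspond to temporarily passing every particle index in `uc` (with `mc` and `lc` empty) for a few iterations.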
To evaluate and ascertain the capability of the proposed algorithm, it is compared with other existing swarm and evolutionary algorithms on a set of benchmark functions. The PSO variants we compare with have been adopted for real-world engineering applications and so can be regarded as state of the art. The algorithms used for this comparison include PSO, Harmony Search (HS), the Genetic Algorithm (GA), Artificial Bee Colony (ABC), the Cultural Algorithm (CA) and other advanced PSO variants. The benchmark functions used for evaluation have been widely used to evaluate swarm intelligence and evolutionary algorithms [11-13]. They include a set of 8 unimodal and multimodal benchmark functions, as shown in Table 3. These functions have varied difficulty and have proven difficult to optimize in high dimensions (>30). The unimodal functions (F1-F5) have a single optimal solution while the multimodal functions (F6-F8) have multiple. These functions are challenging and are often used to evaluate the local exploitation and global exploration ability of a search algorithm. For each of the algorithms, the swarm size is set to 50 and the search dimension for each optimization function is set to 50. The maximum number of iterations is 1000 and each algorithm undertakes 30 independent runs. The parameter settings for each of the algorithms are as shown in Table 2.

The proposed algorithm shows superior performance in comparison to the other variants on the multimodal functions F6-F8. This demonstrates the agility of the algorithm and its ability to escape local optima while still managing to push for faster convergence. On the unimodal functions F1-F5, its performance is on par with other PSO variants, and it outperforms the other swarm and evolutionary algorithms. In unimodal functions there is only one minimum, but our algorithm was still set to continuously explore and exploit other regions; if we set the whole population to UC, the algorithm will work towards faster convergence and we would expect to achieve better quality results. In all, we can see the flexibility and robustness of the algorithm, given the characteristics and swarm behavior it was built with (Table 4).

In this study, a novel learning paradigm for PSO is introduced to balance exploration, exploitation and convergence while maintaining the diversity of the swarm. The algorithm uses only the social and cognitive components of PSO for learning, which further simplifies the algorithm. Particles learn according to the class they fall in, and this gives flexibility and robustness to the algorithm as different learning methods are incorporated at the same time. The algorithm further introduces dynamism by varying the class populations at given intervals during the search. The algorithm is tested on a number of benchmark functions and compared with other swarm and evolutionary optimization algorithms, including other advanced PSO techniques, showing the superiority of the proposed algorithm. In future work, this algorithm will be tested on more functions, such as the CEC 2013 or CEC 2017 suites, and will also be used to solve a real-world problem.

References
[1] Particle swarm optimization
[2] A new optimizer using particle swarm theory
[3] Population structure and particle swarm performance
[4] The fully informed particle swarm: simpler, maybe better
[5] Comprehensive learning particle swarm optimizer for global optimization of multimodal functions
[6] A novel stability-based adaptive inertia weight for particle swarm optimization
[7] Formalized model and analysis of mixed swarm based cooperative particle swarm optimization
[8] Clustering and pattern search for enhancing particle swarm optimization with Euclidean spatial neighborhood search
[9] Accelerating particle swarm optimization using crisscross search
[10] Multiple learning particle swarm optimization with space transformation perturbation and its application in ethylene cracking furnace optimization. Knowl.-Based Syst.
[11] Classifier ensemble reduction using a modified firefly algorithm: an empirical evaluation
[12] Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems
[13] Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl.-Based Syst.

Acknowledgement. This work is partially supported by Shenzhen Philosophy and Social Sciences Plan Project (SZ2019D018) and Guangdong Provincial Soft Science Project (2019A101002075).