title: Salp Swarm Optimization: a Critical Review
authors: Castelli, Mauro; Manzoni, Luca; Mariot, Luca; Nobile, Marco S.; Tangherloni, Andrea
date: 2021-06-03
DOI: 10.1016/j.eswa.2021.116029

Abstract: In the crowded environment of bio-inspired population-based metaheuristics, the Salp Swarm Optimization (SSO) algorithm recently appeared and immediately gained a lot of momentum. Inspired by the peculiar spatial arrangement of salp colonies, which are displaced in long chains following a leader, this algorithm seems to provide an interesting optimization performance. However, the original work was characterized by some conceptual and mathematical flaws, which influenced all ensuing papers on the subject. In this manuscript, we perform a critical review of SSO, highlighting all the issues present in the literature and their negative effects on the optimization process carried out by this algorithm. We also propose a mathematically correct version of SSO, named Amended Salp Swarm Optimizer (ASSO), which fixes all the discussed problems. We benchmarked the performance of ASSO on a set of tailored experiments, showing that it is able to achieve better results than the original SSO. Finally, we performed an extensive study aimed at understanding whether SSO and its variants provide advantages compared to other metaheuristics. The experimental results, in which SSO cannot outperform simple well-known metaheuristics, suggest that the scientific community can safely abandon SSO.

and well-known metaheuristics on a set of benchmark functions. This analysis aims at understanding whether the scientific community could abandon the use of SSO and ASSO, or if these algorithms provide some advantages with respect to simpler metaheuristics widely established in this research field.
As a side note, SSO has changed name several times in the literature: indeed, we can find it as Salp Swarm Algorithm [56], Salp Swarm Optimizer [22, 58], Salp Swarm Optimization [11], Salp Optimization Algorithm [64], and so forth. Incidentally, in the highlights of the original paper [43], the authors state that the official name of the algorithm is Salp Swarm Optimizer, although the acronym SSA is used in the rest of the article. In this paper, we use the name Salp Swarm Optimization and the acronym SSO to prevent confusion. We wish to emphasize that this work aims at discussing the limitations and issues of the original SSO algorithm, showing how even a simple Random Search (RS) algorithm can outperform the original, incorrect SSO algorithm under specific circumstances. Therefore, we think that our results and findings raise an alert on the existing SSO literature. This paper is structured as follows. Section 2 reviews the related works, highlighting the flaws and limitations of some recently defined metaheuristics. Section 3 provides a critical review of SSO, in particular concerning the methodological issues with the updating rule for the position of the leader salp and the physically-inspired motivation for the updating rule of the positions of the follower salps. Section 4 presents an experimental evaluation of the improved version of SSO that fixes some of the raised issues, and compares the results with the original version on a set of standard benchmark functions. Subsequently, we analyze the performance of SSO and its variants on the CEC 2017 benchmark function suite against two commonly used metaheuristics, namely the Covariance Matrix Adaptation Evolution Strategy (CMAES) [30] and Differential Evolution (DE) [60]. Finally, Section 5 summarises the main issues of the original SSO algorithm and concludes with a broader remark on the nature of the salp metaphor.
Recent years have witnessed the definition of a significant number of metaheuristics inspired by some natural phenomenon [24]. The common idea behind the definition of these metaheuristics is to consider a natural process and, subsequently, to design the underlying metaheuristic by exploiting the observed natural metaphor. After the publication of a given metaheuristic, it is also common to see a significant number of scientific papers that use the new metaheuristic to address complex real-world problems, claiming its superior performance compared to existing metaheuristics. Fortunately, a research strand has started to criticize the definition of these metaheuristics, observing that, in most cases, their performance cannot be better than that of commonly used evolution strategies [69]. Beyond the lack of novelty that characterizes these metaheuristics (i.e., the change concerns only the underlying natural metaphor), a fundamental issue is that some of the results achieved through them are not reliable. In this sense, one of the clearest examples is the paper by Weyland [69], in which the author demonstrated that Harmony Search (HS) [39] cannot be used to successfully solve Sudoku puzzles, thus contradicting the results obtained by Geem [27]. More precisely, Weyland first proved that HS is a special case of evolution strategies, thus highlighting the lack of novelty of the metaheuristic. As a consequence, the performance of HS is always bounded by the performance that can be obtained by evolution strategies. Finally, Weyland demonstrated that the results achieved in [27] are flawed from both the theoretical and the practical point of view, concluding that there is no reason for the existence of HS as a novel metaheuristic. Although Weyland's work clearly demonstrated the lack of novelty of HS, the algorithm is still widely used nowadays.
Further, even though Weyland proved beyond any doubt that HS cannot perform better than evolution strategies, several papers still claim its presumed superior performance [5]. Thus, it seems that practitioners are still deceived by metaheuristics whose novelty is based only on the use of some natural metaphor. The truth is that HS, and other related metaheuristics, simply use a different terminology with respect to classic evolution strategies. While the lack of novelty did not prevent metaheuristics based on natural metaphors from being published in well-renowned scientific venues, some scientific journals are seriously tackling the problem. For instance, Marco Dorigo, the editor-in-chief of Swarm Intelligence, published an editorial note [16] stating that he observed a new trend consisting in "taking a natural system/process and use it as a metaphor to generate an algorithm whose components have names taken from the natural system/process used as metaphor". Dorigo also highlighted that "this approach has become so common that there are now hundreds of so-called new algorithms that are submitted (and unfortunately often also published) to journals and conferences every year", and concluded his editorial stating that "it is difficult to understand what is new and what is the same as the old with just a new name, and whether the proposed algorithm is just a small incremental improvement of a known algorithm or a radically new idea". A similar analysis of this trend appeared in the work by Cruz et al. [14]. There, the authors first highlighted the vast number of swarm intelligence algorithms developed by taking inspiration from the behavior of insects, other animals, and natural phenomena. Subsequently, they showed that most of these algorithms share common macro-processes, despite being inspired by different metaphors.
In other words, the considered metaheuristics are characterized by common issues and features occurring at the individual level, promoting very similar emergent phenomena [14]. Thus, it is difficult (if not impossible) to claim that such metaheuristics are really novel. Focusing on specific metaheuristics, some contributions that analyze the behavior of a given algorithm have started to appear [68, 45, 67]. In [68], Villalón et al. thoroughly investigated the Intelligent Water Drops (IWD) algorithm [33], a metaheuristic proposed to address discrete optimization problems. The authors demonstrated that the main steps of the IWD algorithm are special cases of Ant Colony Optimization (ACO) [18]. Thus, the performance of IWD cannot be better than that of the best ACO algorithm. Moreover, the authors analyzed the metaphor used for the definition of IWD, concluding that it is based on "unconvincing assumptions of river dynamics and soil erosion that lack a real scientific rationale". Finally, they pointed out that the improvements proposed for the IWD algorithm are based on ideas and concepts already investigated in the literature many years before in the context of ACO. Niu et al. [45] analyzed the Grey Wolf Optimization (GWO) algorithm [44] and demonstrated that, despite its popularity, GWO is flawed. In particular, GWO shows good performance on optimization problems whose optimal solution is 0, while the same performance cannot be obtained if the optimal solution is shifted: the farther a function's optimal solution is from 0, the worse the performance of GWO. Interestingly, GWO was proposed by the same author as the SSO algorithm analyzed in this paper, and it shares some of SSO's flaws. Villalón et al. [67] analyzed the popular Cuckoo Search (CS) algorithm [75], a metaheuristic introduced in 2009.
The authors analyzed CS from a theoretical standpoint and showed that it is based on the same concepts as the (µ + λ) evolution strategy proposed in 1981 [57]. Further, the authors evaluated the algorithm and the metaphor used for its definition based on four criteria (i.e., usefulness, novelty, dispensability, and sound motivation), and they concluded that CS does not comply with any of them. Finally, they pointed out that the original CS algorithm does not match the publicly available implementation provided by its authors. This analysis is quite surprising, given the popularity of CS, and it highlights the need for a thorough investigation of the existing metaheuristics, with the goal of understanding which of them should be abandoned by the scientific community. Indeed, the impressive number of metaheuristics published in the literature makes it difficult to determine whether they really contribute to the advancement of the field. This problem was pointed out by Martínez et al. [26]. Specifically, the authors investigated whether the increasing number of publications is correlated with real progress in the field of heuristic-based optimization. To answer this research question, the authors took five heuristics proposed in some of the most reputed journals in the area and compared their performance to that of the winner of the IEEE Congress on Evolutionary Computation 2005 competition. The results showed that the considered methods could not achieve the result of the competition winner, which had been published several years before. Moreover, a comparison with state-of-the-art algorithms is often missing, thus making it impossible to understand the real advantage provided by a new method. In the same vein, Piotrowski and Napiorkowski [49] highlighted the risk associated with the definition of new and increasingly complex optimization metaheuristics and the introduction of structural bias [38].
In particular, the authors focused on two winners of the CEC 2016 competition, and found out that each of them includes a procedure that introduces a structural bias by attracting the population towards the origin. As a final message, the authors highlighted that some metaheuristics have to be simplified because they contain operators that structurally bias their search, while other metaheuristics should be simplified (or abandoned) because they use unnecessary operators.

This section outlines the design issues we found in the original definition and implementation of the SSO algorithm. More precisely, the following issues are discussed here and in Section 4:

• the update rule of the leader salp does not work correctly when one of the dimensions has a lower bound different from zero (Section 3.1). In the experimental part, we show that simply shifting a 2D sphere function from the origin makes SSO perform worse than a simple RS (Section 4);

• the physical motivations for the updating rule of the follower salps are incorrectly derived from Newton's laws of motion (Section 3.2);

• there is a clear divergence between the algorithm described in [43] and the available SSO implementation. This makes it difficult, or even impossible, to compare results from different papers (Section 3.3);

• finally, we experimentally show that the original SSO algorithm has a bias towards the origin. Since many of the considered benchmark problems have the optimum in the origin, the results are biased in favor of SSO (Section 4).

In what follows, we will refer to the equations that were introduced in the original SSO paper [43]. We assume here that a chain of N different salps moves within a bounded D-dimensional search space, aiming at identifying the optimal solution. The first issue is related to the definition of the updating rule for the position of the leader salp. In Equation 3.1 of the original paper [43], the update for the leader salp along the j-th dimension, with j = 1, . . . , D, is given as follows:

x^1_j = F_j + c_1 (c_2 (ub_j − lb_j) + lb_j)   if c_3 ≥ 0,
x^1_j = F_j − c_1 (c_2 (ub_j − lb_j) + lb_j)   if c_3 < 0,    (1)

where the upper and lower bounds of the search space in the j-th dimension are ub_j and lb_j, respectively. The value F_j ∈ [lb_j, ub_j] is the position of the best solution found so far in the j-th dimension, which corresponds to the best food source. On the other hand, c_1 decreases exponentially with the number of iterations according to the following rule:

c_1 = 2 e^(−(4l/L)^2),

where l is the current iteration number and L is the total number of iterations. Finally, both c_2 and c_3 are random numbers selected uniformly in the range [0, 1]. Here, we can find the first definitional issue, although it did not propagate to the original source code. Indeed, c_3 ∈ [0, 1] implies that the second case of Equation (1) is never verified. However, in the source code the threshold for c_3 is set to 0.5 instead of 0, which means that the two cases of the updating rule occur with equal probability. This kind of oversight is usually not a problem, but several subsequent papers do not correct the issue (see, e.g., [4, 8, 56, 35, 72, 34, 65, 73, 21]). Further, if a new implementation following the specifications of [43] is used, instead of the original one released by the authors, the results might not be comparable. It is currently unknown how many papers on SSO employ the same implementation as the original paper. The main issue with the updating rule is, however, more significant, and it is still present in the source code and widespread across all existing SSO variants [4, 8, 56, 35, 72, 22, 1, 7, 34, 65, 3, 73, 54, 21]. Let us consider the variation with respect to F_j, which is:

Δ_j = ± c_1 (c_2 (ub_j − lb_j) + lb_j) = ± (c_1 c_2 (ub_j − lb_j) + c_1 lb_j).

Notice that while c_1 c_2 (ub_j − lb_j) gives a value in [0, 2(ub_j − lb_j)] (i.e., F_j ± c_1 c_2 (ub_j − lb_j) might remain inside the search space), the term c_1 lb_j is also added, and it can be arbitrarily large in absolute value.
Although this is not the case in the experiments performed in the original paper, simply shifting the search space by a suitably large constant might significantly hamper the search process of SSO. The issue could also potentially affect research in applied disciplines where SSO has been used (e.g., COVID-19 related research, as proposed by [4]). As an example, let us suppose that the lower and the upper bounds are of the same order of magnitude for all the D dimensions, and in particular that lb_j = 10^k and ub_j = 10^k + 1, with j = 1, . . . , D. In other words, the search space is the hypercube [10^k, 10^k + 1]^D. By using the updating rule given in Equation (1), and since ub_j − lb_j = 1, the position of the leader salp is updated as

x^1_j = F_j ± (c_1 c_2 + c_1 10^k).    (2)

Since c_1 c_2 ≤ 2 (recall that c_2 ∈ [0, 1]), the position update in Equation (2) is dominated by the c_1 10^k term. Consequently, this update will move the leader salp out of the admissible bounds for most of the values taken by c_1, forcing its position to be clipped on the borders of the search space most of the times. This implies that SSO is not invariant with respect to translations of the search space. Indeed, given that c_1 = 2 e^(−(4l/L)^2), the salp remains inside the search space [10^k, 10^k + 1]^D only if c_1 10^k ≤ 1, namely:

2 e^(−(4l/L)^2) 10^k ≤ 1.

The effect is that, depending on the search space, the majority of the updates of the leader salp can force it onto the boundary of the space (due to clipping), with only the later iterations (with c_1 small enough) resulting in the leader salp moving without clipping. In particular, e^(−(4l/L)^2) yields its smallest value when l = L, i.e., at the last iteration, and in this case we obtain that k ≤ log_10 (e^16 / 2) ≈ 6.648. Therefore, k > 6.648 will result in a search space where the position of the leader salp will always be forced outside of the search space or, equivalently, the leader salp will continue to "bounce" on the boundaries of the search space.
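The dominance of the c_1 lb_j term can be illustrated with a short simulation. The sketch below is our own minimal illustration (not the authors' code): it applies the leader update of Equation (1) on the shifted interval [10^9, 10^9 + 1] and counts how often the new position falls outside the bounds and would therefore be clipped.

```python
import math
import random

random.seed(42)

def c1(l, L):
    # Exponentially decreasing coefficient from the original SSO paper
    return 2.0 * math.exp(-((4.0 * l / L) ** 2))

def leader_update(F, lb, ub, l, L):
    # Leader salp update rule (Equation (1)), using the 0.5 threshold
    # for c3 adopted in the reference implementation
    c2, c3 = random.random(), random.random()
    step = c1(l, L) * (c2 * (ub - lb) + lb)
    return F + step if c3 < 0.5 else F - step

L_total = 100
lb, ub = 1e9, 1e9 + 1          # search space shifted far from the origin (k = 9)
F = (lb + ub) / 2.0            # best food source in the middle of the space

clipped = 0
for l in range(1, L_total + 1):
    x = leader_update(F, lb, ub, l, L_total)
    if x < lb or x > ub:
        clipped += 1

print(f"{clipped}/{L_total} leader updates fell outside the bounds")
```

Since k = 9 > 6.648, every single update leaves the admissible hypercube, matching the analysis above.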
Similar pathological examples can be found by tweaking the values of the upper and lower bounds. This observation reveals that the initial value of c_1 must be carefully chosen with respect to the size and the shift of the search space. In other words, SSO is also not invariant with respect to rescalings of the search space. A further issue is that, for any dimension j, the quantity c_2 (ub_j − lb_j) + lb_j has an expected value of (ub_j + lb_j)/2. When the search space is centered in zero, the expected value is then zero. As we will see in the experimental part, this gives an unfair advantage on problems where the search space is symmetric (with respect to 0) and the global optimum is in 0.

The original paper claimed that the definition of the updating rule for the follower salps is based on the principles of classical mechanics (Newton's laws). However, there are important issues concerning the formulation of this rule, as well as the correct use of Newton's laws of motion. The equation for the update of the follower salps in SSO is:

x^i_j = (1/2) a t^2 + v_0 t,

where it can be assumed that a = a^i_j and v_0 = v^i_j at t = 0, for each dimension j (with j = 1, . . . , D) and follower salp i (with i = 2, . . . , N). In the original paper, the acceleration a is calculated as:

a = v_final / v_0,

which is incorrect, since the average acceleration over a time interval ∆t is:

a = (v(t + ∆t) − v(t)) / ∆t.

Notice that the incorrect formula also gives a dimensionless quantity, instead of a length divided by a squared time. Notably, this issue is only sometimes corrected in the subsequent literature. The formula v = (x − x_0)/t, with x and x_0 being the final and initial positions, respectively, and t being the time interval, is correct for the average speed, which is however not the quantity needed in the derivations of the paper: for computing the average acceleration, the instantaneous speed must be used instead. Furthermore, the aim of the derivation is to obtain the final position x of the salp, which therefore cannot be used to compute the average speed.
Since the salps are initially still, the original definition of SSO [43] explicitly uses v_0 = 0 which, when substituted in the previous equations, gives an infinite acceleration that would lead the salps outside any boundary of the search space. Regardless of these issues, the authors eventually point out that t, in the equations, corresponds to the iteration number, so that ∆t = 1 and the time term can be cancelled out from the equations. They conclude that this step leads to the final formula:

x^i_j(t + 1) = (1/2) (x^i_j(t) + x^{i−1}_j(t)).    (5)

Unfortunately, this equation cannot be derived from the previous ones. In fact, the position of the next salp in the chain never appears before this point, and it is not taken into account in any of the previous derivations. A correct way to derive the previous formula would be the following. Assume that the i-th salp is moving toward the current position of the (i − 1)-th salp (in the j-th dimension) with a starting speed of 0 and a constant acceleration of x^{i−1}_j(t) − x^i_j(t) units per time step squared (so that its final speed after one time step is x^{i−1}_j(t) − x^i_j(t) units per time step). Hence, the new position after one time unit can be computed as follows:

x^i_j(t + 1) = x^i_j(t) + v_0 · 1 + (1/2) (x^{i−1}_j(t) − x^i_j(t)) · 1^2 = (1/2) (x^i_j(t) + x^{i−1}_j(t)).

Notice that this is only one of the possible ways to correctly derive the updating rule for the follower salps, but it is completely unrelated to the biological metaphor exploited by the authors. This critique only concerns the physical motivations for the definition of the updating rule, not the updating rule itself. However, the flawed explanation presented in the original paper is restated in multiple papers [35, 72, 34, 65], without any significant correction.

It is worth mentioning that both the MATLAB® and Python implementations do not correctly implement the pseudo-code and, unfortunately, do not follow the explanations provided by the authors in the original paper [43]. As a matter of fact, in both implementations, the authors update the first half of the population using Equation (1) and the second half using Equation (5).
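The corrected kinematic derivation above can be checked numerically. The following sketch (our own illustration, not code from [43]) verifies that moving with zero initial speed and constant acceleration x^{i−1}_j(t) − x^i_j(t) for one time unit reproduces the averaging rule of Equation (5).

```python
def follower_update_kinematic(x_i, x_prev):
    # Position after one time unit: x + v0*t + 0.5*a*t^2,
    # with v0 = 0, t = 1 and a = x_prev - x_i
    v0 = 0.0
    a = x_prev - x_i
    return x_i + v0 * 1.0 + 0.5 * a * 1.0 ** 2

def follower_update_sso(x_i, x_prev):
    # Averaging rule actually used by SSO (Equation (5))
    return 0.5 * (x_i + x_prev)

# The two formulations coincide for arbitrary positions
for x_i, x_prev in [(0.0, 1.0), (-3.5, 2.25), (10.0, 10.0)]:
    assert abs(follower_update_kinematic(x_i, x_prev)
               - follower_update_sso(x_i, x_prev)) < 1e-12

print("kinematic derivation matches Equation (5)")
```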
Considering that the salp chain is composed of N different individuals, the first N/2 − 1 salps perform the exploration process for food sources, attracted by the best food source found so far (updated by Equation (1)). The (N/2)-th individual is the leader salp that drags the follower salps, which exploit the area surrounding the leader (updated by Equation (5)). In what follows, we will refer to this implementation as SSO-code. We modified the implementation of SSO-code by removing the term c_1 lb_j from Equation (1), for each dimension j (with j = 1, . . . , D). In such a way, we avoid as much as possible the clipping step of the salp positions due to the wrong update proposed in the original Equation (1), which sends the salps out of the admissible bounds. We will refer to this version as ASSO. As a first batch of tests, we compared the performance of a simple Random Search (RS), SSO, SSO-code, and ASSO using 2-D standard benchmark functions (i.e., Ackley, Alpine, Rosenbrock, and Sphere). Then, RS, SSO, SSO-code, and ASSO were compared against basic versions of CMAES [30] and DE [60]. We used the standard versions of CMAES and DE implemented in the Pymoo library (Multi-objective Optimization in Python) [10], which allows for easily using both single- and multi-objective algorithms, exploiting the default parameters proposed by the authors of the library. Specifically, concerning CMAES, the initial standard deviation was set to 0.5 for each coordinate, and no restart strategy was applied. Regarding DE, according to the classic DE taxonomy, the DE/rand/1/bin variant was used with a differential weight F = 0.3 and a crossover rate equal to 0.5. To show that both SSO and SSO-code are not shift-invariant, the search spaces of the tested benchmark functions were shifted by a large constant (i.e., 10^9). To collect statistically sound results, for each function, we ran the tested techniques 30 times.
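The correction that turns SSO-code into ASSO amounts to removing a single term from the leader update. The sketch below (our own illustration, with hypothetical function names) contrasts the magnitude of the two update steps on a search space shifted far from the origin.

```python
import random

random.seed(0)

def sso_leader_step(lb, ub, c1):
    # Original rule: the c1*lb term can be arbitrarily large in magnitude
    c2 = random.random()
    return c1 * (c2 * (ub - lb) + lb)

def asso_leader_step(lb, ub, c1):
    # Amended rule: the c1*lb term is removed, so the step is
    # always proportional to the size of the search space
    c2 = random.random()
    return c1 * c2 * (ub - lb)

lb, ub = 1e9, 1e9 + 1
c1 = 1.0  # a typical mid-run value of c1

sso_steps = [abs(sso_leader_step(lb, ub, c1)) for _ in range(1000)]
asso_steps = [abs(asso_leader_step(lb, ub, c1)) for _ in range(1000)]

print(f"max |step| SSO:  {max(sso_steps):.3e}")   # on the order of lb
print(f"max |step| ASSO: {max(asso_steps):.3e}")  # bounded by c1*(ub - lb)
```

With the amended rule the step never exceeds c_1 (ub_j − lb_j), so the salp can actually explore the neighborhood of the food source instead of being clipped to the borders.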
For each iteration, we kept track of the fitness value of the best individual over the 30 repetitions to calculate the Average Best Fitness (ABF). For a completely fair comparison among the different techniques, we fixed a budget of 100 iterations using 50 individuals. Note that the implemented RS randomly generates 50 particles at each iteration, without taking into account any information from the previous iterations. Figure 1a clearly shows that both SSO and SSO-code are not shift-invariant. Indeed, shifting the 2-D standard benchmark functions by a large constant hampered the optimization abilities of both SSO and SSO-code. Across all the tested functions, even RS obtained better results than SSO and SSO-code. On the contrary, our proposed algorithm ASSO, in which we simply removed the term c_1 lb_j, was able to outperform the other techniques. It is worth stressing that ASSO is not a novel algorithm, but an amended version of SSO in which the mathematical errors have been corrected. In order to evaluate whether the achieved results were also different from a statistical point of view, we applied the Mann-Whitney U test with the Bonferroni correction [42, 70, 19]. Specifically, we applied this statistical test to independently compare the results obtained by the techniques on each benchmark function. Thus, for each benchmark function and for each technique, we built a distribution by considering the fitness value of the best individual at the end of the last iteration over the 30 repetitions. The boxplots in Figure 1b show the distributions of the best fitness values achieved at the end of the executions of the tested techniques. Figure 1b also reports the results of the statistical tests by using the asterisk convention. These results indicate that, generally, there is no statistical difference between SSO and SSO-code; only on the Rosenbrock function is there a strong statistical difference between the two.
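This statistical protocol can be reproduced with a few lines of SciPy. The sketch below is a minimal illustration under the same assumptions (30 final best-fitness values per technique); the arrays are synthetic placeholders, not our experimental data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Synthetic stand-ins for the 30 final best-fitness values of two techniques
fitness_a = rng.normal(loc=0.0, scale=1.0, size=30)   # e.g., a corrected variant
fitness_b = rng.normal(loc=5.0, scale=1.0, size=30)   # e.g., the original variant

n_comparisons = 4  # number of pairwise tests performed on the same function
stat, p = mannwhitneyu(fitness_a, fitness_b, alternative="two-sided")
p_corrected = min(1.0, p * n_comparisons)  # Bonferroni correction

print(f"U = {stat:.1f}, corrected p-value = {p_corrected:.2e}")
```

The Bonferroni correction simply multiplies each raw p-value by the number of simultaneous comparisons, keeping the family-wise error rate under the chosen significance level.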
The simple RS obtained similar or even better results than SSO and SSO-code, while there is always a strong statistical difference between the results achieved by ASSO and those achieved by the other techniques, demonstrating the effectiveness of our correction. Concerning the problem described in Section 3.1, we show that, on a search space symmetric with respect to zero, the SSO algorithm has a bias toward the origin. In order to show this behavior, we use a fitness function that returns a random number with uniform distribution in [0, 1). For a swarm intelligence algorithm, we would expect a uniform distribution of particles across the entire search space when using such a fitness function. Stated otherwise, with a random fitness function, the salps should not converge anywhere and should randomly wander across the search space. However, we show that, following the non-amended equations provided in the original SSO paper [43], the swarm converges to the origin, providing an unfair advantage in the case of optimization problems whose global optimum lies in x = 0 (and possibly leading to sub-optimal performance in the case of real-world functions). Figure 2 shows the result of this test, performed on both SSO and SSO-code with and without the food attractor. The figure reports the positions of all salps during the first 10 iterations of the optimization; the position of the leader salp is highlighted by an orange circle, and the initial position is denoted by the text "Start". According to our results, the leader salp gets attracted toward zero. The same happens for SSO-code: the leader salp is inevitably attracted towards the center of the search space. The attraction towards the origin is even more evident in the case of SSO and SSO-code without food attraction: the swarm perfectly converges to the origin and no longer moves.
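The zero expected value underlying this bias (see Section 3.1) is easy to verify empirically. The following sketch, our own illustration, estimates E[c_2 (ub − lb) + lb] by Monte Carlo sampling on a search space symmetric with respect to the origin.

```python
import random

random.seed(7)

lb, ub = -1.0, 1.0   # search space symmetric with respect to the origin
n_samples = 100_000

# Monte Carlo estimate of E[c2*(ub - lb) + lb], with c2 ~ U[0, 1]
mean_step = sum(random.random() * (ub - lb) + lb
                for _ in range(n_samples)) / n_samples

print(f"empirical mean: {mean_step:+.4f} (theory: {(ub + lb) / 2:+.4f})")
```

The empirical mean is indistinguishable from (ub + lb)/2 = 0, so the leader update has no net drift away from the origin on symmetric spaces, which is exactly what makes benchmarks with the optimum in 0 unfairly easy for SSO.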
During the last years, different benchmark function suites have been proposed to test and compare existing and novel global optimization techniques [63]. The benchmark functions they contain try to mimic the behavior of real-world problems, which often show complex features that basic optimization algorithms might not be able to grasp [25, 63]. Regarding real-parameter numerical optimization, the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation Conference (GECCO) include competitions where complex benchmark function suites have been designed to test and compare global optimization techniques [13, 28, 29, 15]. Here, we evaluated the performance of the considered optimization techniques using the CEC 2017 benchmark problems for single-objective real-parameter numerical optimization [9], which were previously used to compare the performance of different metaheuristics [62, 47, 63]. Table 1 reports the tested benchmark problems, which are based on shifted, rotated, non-separable, highly ill-conditioned, and complex optimization benchmark functions [9]. We optimized each function f_k (with k = 1, . . . , 30) considering the dimensions D = {10, 30} and the search space boundaries specified by the benchmark suite [9]. For each technique, we executed 30 independent runs to collect statistically sound results. Figure 3 depicts the Average Best Fitness (ABF) obtained by the analyzed techniques for each function f_k (with k = 1, . . . , 30) and D = 10, showing that DE was able to obtain better results than all the SSO-based strategies, including ASSO, and the simple RS tested here. As one can see, DE outperformed the SSO-based strategies in 27 out of 30 functions. CMAES also obtained better results than the SSO-based strategies in more than half of the tested functions.
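For reference, the DE/rand/1/bin scheme used in these comparisons (with F = 0.3 and CR = 0.5, as described in Section 4) can be sketched in a few dozen lines. The following is our own minimal self-contained implementation, not the Pymoo code used in the experiments, shown here on a sphere function whose optimum is shifted away from the origin.

```python
import random

random.seed(3)

def shifted_sphere(x, shift=10.0):
    # Sphere function with the optimum moved away from the origin
    return sum((xi - shift) ** 2 for xi in x)

def de_rand_1_bin(f, dim, lb, ub, pop_size=50, iters=100, F=0.3, CR=0.5):
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(ind) for ind in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # DE/rand/1: three distinct random individuals, none equal to i
            r1, r2, r3 = random.sample([k for k in range(pop_size) if k != i], 3)
            j_rand = random.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for j in range(dim):
                if random.random() < CR or j == j_rand:
                    v = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                    trial.append(min(max(v, lb), ub))  # clip to the bounds
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return min(fit)

best = de_rand_1_bin(shifted_sphere, dim=2, lb=-100.0, ub=100.0)
print(f"best fitness found: {best:.6f}")
```

Note that, unlike the original SSO leader update, nothing in this scheme depends on the absolute position of the bounds: the difference vector F · (x_{r2} − x_{r3}) is translation-invariant by construction.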
As we did for the standard benchmark functions, we evaluated whether the achieved results were also different from a statistical point of view by using the Mann-Whitney U test with the Bonferroni correction [42, 70, 19]. Thus, we independently compared the results obtained by the six techniques on each benchmark function f_k (with k = 1, . . . , 30), considering the distributions built using the value of the best individual at the end of the last iteration over the 30 repetitions. The heatmaps shown in Figure 4 clearly point out that there is almost always a strong statistical difference between the results achieved by all the SSO-based strategies, including ASSO, and those obtained by DE. Specifically, the SSO-based strategies were able to obtain comparable or better results than those reached by DE only for the functions f_21, f_25, and f_28. Comparing the results achieved by CMAES with those obtained by the SSO-based strategies, there is a strong statistical difference in more than half of the tested functions. These results are coherent with the probabilistic re-formulation of the No Free Lunch (NFL) theorem [40], an extension of the original version of the NFL theorem [71, 41], which proves the validity of the theorem in continuous domains. Thus, according to the NFL theorem, no algorithm outperforms all the competitors on every optimization problem. Increasing the dimension of the tested benchmark functions from 10 to 30 does not allow the SSO-based approaches to obtain better results than DE; as a matter of fact, Figures 5 and 6 confirm this trend for D = 30.

This paper shows systematic and deep issues in the definition of a widely-cited optimization algorithm, namely Salp Swarm Optimization.
In particular, the multiple issues concerning the updating rules, the physically-inspired motivations, and the inconsistency between the code and the description in the original paper that proposed Salp Swarm Optimization raise concerns about all ensuing related literature, since the erroneous derivations and rules are present in most of the published papers on the topic. Furthermore, it is currently problematic to discern which results can be trusted, which ones are based on an incorrect implementation, and which papers have an incorrect description but use a correct implementation. The most serious issue analyzed in this paper is perhaps the presence of the lower bound lb_j in the updating rule of the leader salp (see Equation (1)). In particular, this term makes the algorithm not shift-invariant, introducing a severe search bias that depends on the distance between the lower bound and the origin. As shown in our experiments, under some specific circumstances, this factor can significantly affect the search capabilities of SSO, which was outperformed by a simple random search. If the term lb_j was inadvertently introduced, all the current literature on SSO contains results not reflecting the intended definition of the algorithm. On the other hand, if the authors meant to insert the term, the SSO algorithm cannot work in spaces whose lower bounds are too far from 0 in all dimensions. We also compared the described SSO-based approaches against DE and CMAES for the optimization of the CEC 2017 benchmark functions [9], showing that all the SSO-based versions were outperformed by a simple DE version on almost all functions. These results highlight, once more, that SSO and similar algorithms do not give any particular advantage with respect to widespread and common metaheuristics.
Considering all the results discussed in this work, we expect that more sophisticated metaheuristics, such as the Success-History based Adaptive DE with Linear population size reduction (L-SHADE) [61, 51, 50, 48], which have already shown superior performance compared to DE, should outperform all the SSO-based approaches. Thus, based on the evidence presented in this work, we discourage the use of SSO by the scientific community. In particular, there is no theory supporting the convergence properties of SSO or its (supposed) superiority with respect to existing metaheuristics. On the contrary, SSO is defined and implemented based on a wrong mathematical formulation, as discussed in the first part of this paper.

[Figure caption: The confidence interval was divided into 4 levels indicating a strong statistical difference, statistical difference, weak statistical difference, and no statistical difference, respectively.]

We conclude this paper with a general remark about the metaphor-based approach for metaheuristics. As mentioned in the Introduction, a lot of metaheuristic optimization algorithms have been proposed in recent years, most of them based on a particular natural process or animal behavior as a metaphor for the exploration and exploitation phases of the search space. As noted by [59], one of the likely causes of this phenomenon is the excessive focus on the novelty of such methods in part of the metaheuristics research community. This research approach, however, has the downside of shadowing the true search components of an optimization algorithm with terms and concepts borrowed from the considered metaphor. Therefore, it can happen that a "novel" metaheuristic optimization technique turns out to be just another well-known algorithm under a heavy disguise.
This is the case, for example, of three other recent Swarm Intelligence algorithms, namely the Grey Wolf Optimizer, the Firefly Optimization Algorithm, and the Bat Algorithm, which were shown by [17] to have strong similarities with Particle Swarm Optimization. While in this manuscript we showed and fixed the methodological and implementation flaws of SSO, we believe that a closer inspection of the algorithm's underlying metaphor would also highlight a strong resemblance to other established swarm algorithms.

All source code used for the tests is available on GitLab at the following address: https://gitlab.com/andrea-tango/asso.

References
- An efficient salp swarm-inspired algorithm for parameters identification of photovoltaic cell models
- Salp swarm algorithm: a comprehensive survey
- Feature selection using salp swarm algorithm with chaos
- Optimization method for forecasting confirmed cases of COVID-19 in China
- Comprehensive review of the development of the harmony search algorithm and its applications
- Asynchronous accelerating multi-leader salp chains for feature selection
- Improved multiobjective salp swarm optimization for virtual machine placement in cloud computing (Hum.-centric Comput.)
- Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective real-parameter numerical optimization
- pymoo: Multi-objective optimization in Python
- A new combined model based on multi-objective salp swarm optimization for wind speed forecasting
- Solving multiobjective optimization problems using an artificial immune system
- Special Session & Competitions on Real-Parameter Single Objective Optimization
- A critical discussion into the core of swarm intelligence algorithms
- Towards a theory-guided benchmarking suite for discrete black-box optimization heuristics: profiling (1+λ) EA variants on OneMax and LeadingOnes
- Swarm intelligence: A few things you need to know if you want to publish in this journal
- Grey wolf, firefly and bat algorithms: Three widespread algorithms that do not contain any novelty
- Ant colony optimization
- Multiple comparisons among means
- A new optimizer using particle swarm theory
- Parameter optimization of power system stabilizer via salp swarm algorithm
- Extracting optimal parameters of PEM fuel cells using salp swarm optimizer
- An efficient binary salp swarm algorithm with crossover scheme for feature selection problems
- Meta-zoo-heuristic algorithms
- Towards improved benchmarking of black-box optimization algorithms using clustering problems
- Since CEC 2005 competition on real-parameter optimisation: a decade of research, progress and comparative analysis's weakness
- Harmony search algorithm for solving Sudoku
- GECCO Workshop on Real-Parameter Black-Box Optimization Benchmarking (BBOB). Online; accessed on
- COCO: A platform for comparing continuous optimizers in a black-box setting
- Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation
- Completely derandomized self-adaptation in evolution strategies
- Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence
- Problem solving by intelligent water drops
- Swarming behaviour of salps algorithm for predicting chemical compound activities
- Improved salp swarm algorithm based on particle swarm optimization for feature selection
- Emended salp swarm algorithm for multiobjective electric power dispatch problem
- A comprehensive survey: artificial bee colony (ABC) algorithm and applications
- Structural bias in population-based algorithms
- A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice
- A probabilistic reformulation of no free lunch: Continuous lunches are not free
- What makes an optimization problem hard?
- On a test of whether one of two random variables is stochastically larger than the other
- Salp swarm algorithm: A bio-inspired optimizer for engineering design problems
- Grey wolf optimizer
- The defect of the grey wolf optimization algorithm and its verification method (Knowledge-Based Systems)
- Fuzzy Self-Tuning PSO: A settings-free algorithm for global optimization
- Computational intelligence for parameter estimation of biochemical systems
- L-SHADE optimization algorithms with population-wide inertia
- Some metaheuristics should be simplified
- Step-by-step improvement of JADE and SHADE-based algorithms: Success or failure?
- L-SHADE with competing strategies applied to constrained optimization
- Particle swarm optimization
- Differential evolution: a practical approach to global optimization
- Enhanced salp swarm algorithm: Application to variable speed wind generators
- GSA: a gravitational search algorithm
- A novel chaotic salp swarm algorithm for global optimization and feature selection
- Numerical optimization of computer models
- A multiobjective salp optimization algorithm for techno-economic-based performance enhancement of distribution networks
- Metaheuristics - the metaphor exposed
- Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces
- Improving the search performance of SHADE using linear population size reduction
- Proactive particles in swarm optimization: A settings-free algorithm for real-parameter single objective optimization problems
- Biochemical parameter estimation vs. benchmark functions: a comparative study of optimization performance and representation design
- An improved salp optimization algorithm inspired by quantum computing
- A novel robust methodology based salp swarm algorithm for allocation and capacity of renewable distributed generators on distribution grids
- DISH-XX solving CEC2020 single objective bound constrained numerical optimization benchmark
- Cuckoo search ≡ (µ+λ)-evolution strategy
- The intelligent water drops algorithm: why it cannot be considered a novel algorithm
- A critical analysis of the harmony search algorithm - how not to solve Sudoku
- Individual comparisons by ranking methods
- No free lunch theorems for optimization
- Novel bio-inspired memetic salp swarm algorithm and application to MPPT for PV systems considering partial shading condition
- Hybrid wind energy forecasting and analysis system based on divide and conquer scheme: A case study in China
- A new metaheuristic bat-inspired algorithm
- Engineering optimisation by cuckoo search
- Firefly algorithm: recent advances and applications

Acknowledgements
This work was supported by national funds through the FCT (Fundação para a Ciência e a Tecnologia), project GADgET (DSAIPA/DS/0022/2018), and by the Slovenian Research Agency (research core funding no. P5-0410).