key: cord-0045753-a3jz98xp
authors: Osaba, Eneko; Del Ser, Javier; Yang, Xin-She; Iglesias, Andres; Galvez, Akemi
title: COEBA: A Coevolutionary Bat Algorithm for Discrete Evolutionary Multitasking
date: 2020-05-25
journal: Computational Science - ICCS 2020
DOI: 10.1007/978-3-030-50426-7_19
sha: 654a2012ec3ac733198879306758a2897b9b209a
doc_id: 45753
cord_uid: a3jz98xp

Multitasking optimization is an emerging research field which has attracted a lot of attention in the scientific community. The main purpose of this paradigm is to solve multiple optimization problems or tasks simultaneously by conducting a single search process. The main catalyst for reaching this objective is the exploitation of possible synergies and complementarities among the tasks to be optimized, which help each other by virtue of the transfer of knowledge among them (hence the term Transfer Optimization). In this context, Evolutionary Multitasking addresses Transfer Optimization problems by resorting to concepts from Evolutionary Computation for simultaneously solving the tasks at hand. This work contributes to this trend by proposing a novel algorithmic scheme for dealing with multitasking environments. The proposed approach, coined as Coevolutionary Bat Algorithm, finds its inspiration in concepts from both co-evolutionary strategies and the metaheuristic Bat Algorithm. We compare the performance of our proposed method with that of its Multifactorial Evolutionary Algorithm counterpart over 15 different multitasking setups, composed of eight reference instances of the discrete Traveling Salesman Problem. The experimentation and the results stemming therefrom support the main hypothesis of this study: the proposed Coevolutionary Bat Algorithm is a promising meta-heuristic for solving Evolutionary Multitasking scenarios.

Transfer Optimization is an emerging research field that has congregated an active scientific community in recent years [28]. The principal idea behind this field is to exploit what has been learned through the optimization of one specific problem or task when tackling another related or unrelated optimization task. Due to its relative youth, Transfer Optimization has not been studied as deeply as other research areas. It has not been until recent years that the transferability of knowledge among tasks has become a priority for researchers in the Evolutionary Computation arena.

Within the Transfer Optimization paradigm, three separate categories can be identified: sequential transfer, multitasking and multiform optimization. The first of these classes refers to situations in which tasks are faced sequentially, assuming that, when solving a new problem or instance, the knowledge collected from previous tasks is used as external information [11]. The second category (multitasking) deals with different optimization tasks simultaneously by dynamically scrutinizing existing complementarities and synergies among them [16, 39]. Finally, multiform optimization aims at solving a single problem by resorting to different alternative problem formulations, which are optimized simultaneously. In all these categories, there is a clear consensus in the community on the capital importance of the correlation among the tasks to be solved for capitalizing positively on the transfer of knowledge over the search [17]. Among the three divisions pointed out above, multitasking is the one that has arguably attracted the most attention from the community. The study presented in this manuscript is focused on this specific category.
Specifically, we focus on multitasking optimization through the perspective of Evolutionary Multitasking (EM, [27]). In short, EM tackles the simultaneous optimization of several optimization tasks by relying on concepts and methods from Evolutionary Computation [1, 8]. In recent years, a particular flavor of EM grounded on the so-called Multifactorial Optimization strategy (MFO, [17]) has shown superior efficiency when dealing with environments involving continuous, discrete, single-objective and multi-objective optimization problems and tasks [14, 18, 38, 43]. The majority of the literature related to this area is focused on a solver belonging to this flavor: the Multifactorial Evolutionary Algorithm (MFEA, [17]). Unfortunately, alternatives to MFEA remain scarce to date.

Bearing this in mind, the research work presented in what follows revolves around a novel EM meta-heuristic algorithm that adopts the Bat Algorithm (BA, [41]) at its core. Specifically, we present a Coevolutionary Bat Algorithm (COEBA) for discrete evolutionary multitasking. Through this proposal, we take a step beyond the state of the art in two different directions. On the one hand, we contribute to the EM area by introducing a new efficient meta-heuristic scheme. It is important to point out that, unlike most articles published so far around EM, COEBA does not find its inspirational source in the MFO paradigm. On the other hand, COEBA is the first attempt at using BA for Transfer Optimization. It is also relevant to underscore that the experimentation carried out in this paper considers a less studied discrete environment comprising different instances of the Traveling Salesman Problem (TSP, [21]). Concretely, we assess the performance of the proposed COEBA by comparing it to that obtained by MFEA. Our main purpose with this comparison is to show that COEBA embodies a promising alternative for dealing with EM scenarios. To this end, we have chosen 8 different TSP instances, giving rise to 15 multitasking environments with varying degrees of phenotypical relationship.

The rest of the paper is organized as follows. Section 2 introduces the background related to both Evolutionary Multitasking and the Bat Algorithm. Next, Sect. 3 exposes in detail the main features of the proposed COEBA. The experimental setup, analysis and discussion of the results are given in Sect. 4. The study ends in Sect. 5 with conclusions and future research directions.

This section is dedicated to providing a brief background on Evolutionary Multitasking (Sect. 2.1) and the Bat Algorithm (Sect. 2.2).

In recent years, EM has arisen as a promising paradigm for facing simultaneous optimization tasks. Two main features motivated the first formulation of EM. The first one is the parallelism inherent to a population of individuals, which eases the management of several concurrent optimization tasks faced simultaneously. Thanks to this feature, latent synergies between tasks can be automatically harnessed during the solving process [28]. The second feature is the continuous transfer of genetic material between individuals, which allows all tasks to benefit from each other, even those that are not strongly correlated with the rest of the pool [17]. It is widely accepted that, until late 2017, the concept of EM was only materialized through the vision of MFO [6].
Today, this nascent research stream is receiving interesting contributions in terms of new algorithmic schemes, such as the coevolutionary multitasking scheme proposed in [5] or the multitasking multi-swarm optimization described in [37]. Additional alternatives to MFEA have also been proposed, such as the multifactorial brain storm optimization algorithm presented in [45], the multifactorial differential evolution in [10] or the hybrid particle swarm optimization-firefly algorithm introduced in [40]. Despite these recently proposed methods, MFO and its related MFEA have monopolized the research activity in this field since its inception. In fact, the authors of MFEA have recently introduced an adaptive variant, coined as MFEA-II, which evidences the momentum gathered by this algorithm in the field [2].

Going into mathematical details, we can formulate EM as an environment in which K tasks or problems should be optimized simultaneously. This environment is characterized by the existence of as many search spaces as tasks. Thus, the k-th task $T_k$ is characterized by an objective function $f_k: \Omega_k \to \mathbb{R}$, where $\Omega_k$ represents the search space of $T_k$. Let us assume that all tasks are minimization problems, so that the main objective of EM is to find a set of solutions $\{x_1, \ldots, x_K\}$ such that $x_k = \arg\min_{x \in \Omega_k} f_k(x)$. A crucial aspect to properly understand EM is that all individuals $x_p$ in the population $P$ to be evolved belong to a unified search space $\Omega_U$, which relates to $\Omega_1, \ldots, \Omega_K$ by means of encoding/decoding mapping functions $\xi_k: \Omega_U \to \Omega_k$, so that each individual $x_p \in \Omega_U$ can be decoded into a task-specific solution $x_{p,k}$ for each of the K tasks. Shifting our attention to MFO and MFEA, four different definitions are associated with each individual $x_p$ of the population $P$: factorial cost, factorial rank, scalar fitness and skill factor. In order to keep this work within its length requirements, we refer interested readers to [2, 17] for deeper details on how these definitions are exploited during the search over the unified space $\Omega_U$.

Several significant works have been recently published around EM and MFO. In [44], the authors present an influential application of the MFEA to different discrete problems. That paper also introduces the discrete unified encoding, used as a reference in subsequent works. A related study is [47], where MFEA was applied to the Vehicle Routing Problem. Gong et al. presented in [14] an improved version of the MFEA, endowing the algorithm with a dynamic resource allocation strategy. An interesting discrete MFEA has also been developed in [38] for the composition of semantic web services. Gupta et al. presented in [18] a multi-objective variant of MFEA, giving evidence of its efficiency on a real-world manufacturing process design problem. Finally, the work in [43] follows a similar strategy by enhancing MFEA with the incorporation of opposition-based learning. Further theoretical studies on EM and MFO can be found in [22, 46].

BA is a nature-inspired metaheuristic based on the echolocation system of bats. In nature, bats emit ultrasonic pulses to the surrounding environment for navigation and hunting purposes. After the emission of these pulses, bats listen to the echoes and, based on them, they can locate themselves and also identify and locate prey and obstacles.
Besides that, each bat is able to find the most "nutritious" areas by performing an individual search, or by moving towards a "nutritious" location previously found by any other member of the swarm. It is important to mention that some rules have to be established beforehand in order to make an appropriate adaptation [41]:

1. All bats use echolocation to detect the distance, and they are assumed to be able to distinguish between an obstacle and a prey.
2. All bats fly randomly at speed $v_i$ and position $x_i$, emitting pulses with a fixed frequency $f_{min}$, varying wavelength $\lambda$ and loudness $A_i$ to search for prey. In this idealized rule, it is assumed that every bat can automatically adjust the frequency (or wavelength) of the pulses, emitted at a rate $r \in [0, 1]$. This automatic adjustment depends on the proximity of the targeted prey.
3. In the real world, the loudness of the bats' emissions can vary in many different ways. Nevertheless, we assume that this loudness varies from a large positive value $A_0$ to a minimum constant value $A_{min}$.

Since its proposal, BA has emerged as one of the most successful metaheuristic solvers. It has been applied to a manifold of problems in fields such as logistics [32], industry [24] or medicine [19]. The literature behind BA is huge and diverse, as manifested by comprehensive surveys on practical applications of BA [12, 42].

Following concepts previously embraced by other alternatives in the literature [5], one of the main characteristics of the designed COEBA is its multi-population nature. By this we mean that COEBA is a method composed of a fixed number of populations, or demes [25], each comprising the same number of individuals. More specifically, the number of groups is equal to K, i.e. the number of tasks to be optimized. Additionally, each of the K subpopulations concentrates on solving a specific task $T_k$. This means that bats corresponding to the k-th deme are only evaluated on task $T_k$. As in MFEA, a unified representation $\Omega_U$ is used for encoding individuals. However, the most innovative aspect of COEBA is that each subpopulation has its own search space. This involves a slight size readjustment when different demes exchange individuals among them.

We will hereafter use the TSP to illustrate this size readjustment (an illustrative code-level sketch is also provided at the end of this section). Hence, we denote the size of each problem $T_k$ (i.e. the number of cities) as $D_k$, and we assume that each individual is encoded as a permutation of the integer set $\{1, 2, \ldots, D_k\}$. In this way, when $x_p^k \in \Omega_k$ is migrated to a subpopulation in which the size of the task $T_{k'}$ to be solved is $D_{k'} < D_k$, only the integers up to $D_{k'}$ are kept, thus reducing the phenotype of the individual. These integers maintain the same order as in $x_p^k$. The reverse procedure applies if $D_{k'} > D_k$. In that case, and considering that each time an individual $x_p^k$ is transferred to a deme it replaces an alternative bat $x_{p'}^{k'}$, all integers between $D_k$ and $D_{k'}$ are inserted into $x_p^k$ at the same positions they occupy in $x_{p'}^{k'}$. This multiple-search-space strategy enhances the exploitation of the search over the demes, making the movement operators more effective.

With all these considerations in mind, Algorithm 1 shows the pseudocode of the designed COEBA. As can be seen, in the initialization process a number X of individuals are randomly generated. After initialization, each individual is evaluated over all K tasks.
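To make this initialization and multi-task evaluation step more tangible, the minimal Python sketch below (not the authors' original code) shows one way it could be implemented. We assume every task is a TSP instance described by a distance matrix, that the unified space $\Omega_U$ is the permutation space of the largest instance, and that decoding keeps only the cities existing in each task; cities are 0-indexed here, whereas the paper labels them $1, \ldots, D_k$. The names tour_length, decode and initialize are illustrative only.

    import random

    def tour_length(perm, dist):
        # Total length of a closed TSP tour given as a permutation of city indices.
        n = len(perm)
        return sum(dist[perm[i]][perm[(i + 1) % n]] for i in range(n))

    def decode(unified_perm, task_size):
        # Decode a unified-space permutation into a task-specific tour by keeping
        # only the cities that exist in that task, preserving their relative order.
        return [c for c in unified_perm if c < task_size]

    def initialize(num_bats, dist_matrices):
        # Randomly create X bats in the unified space and evaluate them on all K tasks.
        unified_size = max(len(d) for d in dist_matrices)  # largest instance defines Omega_U
        bats = []
        for _ in range(num_bats):
            perm = random.sample(range(unified_size), unified_size)
            fitness = [tour_length(decode(perm, len(d)), d) for d in dist_matrices]
            bats.append({"perm": perm, "fitness": fitness})
        return bats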
Then, within an iterative process, each subpopulation is built by choosing the top X/K individuals for the corresponding task (the same bat can be selected by different tasks). Once the demes are composed, each one is evolved independently by following the concepts of the discrete version of the BA [32]. More concretely, the distance between two different bats is measured by means of the Hamming distance, namely, the number of positions at which the corresponding elements of the two sequences differ. Furthermore, the inclination mechanism is also used [31]. Thanks to this feature (lines 10-14 in Algorithm 1), the method intelligently selects the movement function best suited for each bat at every iteration, depending on its specific situation with respect to the leading bat of the swarm. As shown in Algorithm 1, 2-opt and insertion are used as movement functions.

[Algorithm 1, the pseudocode of COEBA, is not fully recoverable from this extraction. The surviving fragment (around lines 14-17) describes a step in which, depending on a comparison between the bat's velocity $v_i^t$ and its pulse rate $r_i$, one solution is selected among the best ones, a new bat is generated by selecting the best neighbor of the chosen bat, and a final random acceptance check ("if rand ...") is applied.]
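To give a more tangible flavor of the movement-selection step described above, the sketch below shows one possible, deliberately simplified Python implementation. It is not the exact inclination mechanism of [31], whose details are governed by Algorithm 1 of the original paper; we only assume the stated intent, namely that the Hamming distance to the leading bat decides between a short 2-opt move (exploitation, for bats close to the leader) and an insertion move (exploration, for bats far from it), with threshold being a hypothetical parameter.

    import random

    def hamming(a, b):
        # Number of positions at which two equally long sequences differ.
        return sum(x != y for x, y in zip(a, b))

    def two_opt(perm):
        # 2-opt move: reverse a randomly chosen segment of the tour.
        i, j = sorted(random.sample(range(len(perm)), 2))
        return perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]

    def insertion(perm):
        # Insertion move: extract one city and reinsert it at another position.
        p = list(perm)
        city = p.pop(random.randrange(len(p)))
        p.insert(random.randrange(len(p) + 1), city)
        return p

    def move(bat, leader, threshold):
        # Illustrative operator selection only; Algorithm 1 governs the real choice.
        if hamming(bat, leader) <= threshold:
            return two_opt(bat)   # close to the leading bat: fine-grained refinement
        return insertion(bat)     # far from the leading bat: larger exploratory jump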
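Likewise, the size readjustment applied when an individual migrates between demes of different sizes (described earlier in this section) can be sketched as follows. This is our own minimal reading of the procedure, using 0-indexed city labels and hypothetical helper names shrink and expand; the incoming bat is assumed to replace an existing bat of the target deme, whose positions guide the reinsertion of the missing cities.

    def shrink(perm, target_size):
        # Migration to a smaller task: keep only the cities that exist in the
        # target task (labels < target_size), preserving their relative order.
        return [c for c in perm if c < target_size]

    def expand(perm, replaced_bat):
        # Migration to a larger task: every city missing from the incoming tour
        # is inserted at the position it occupies in the bat being replaced.
        new = list(perm)
        missing = set(replaced_bat) - set(perm)
        for pos, city in enumerate(replaced_bat):
            if city in missing:
                new.insert(pos, city)
        return new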