title: Single Screw Extrusion Optimization Using the Tchebycheff Scalarization Method
authors: Rocha, Ana Maria A. C.; Matos, Marina A.; Costa, M. Fernanda P.; Gaspar-Cunha, A.; Fernandes, Edite M. G. P.
date: 2020-08-20
journal: Computational Science and Its Applications - ICCSA 2020
DOI: 10.1007/978-3-030-58808-3_48

The optimal design of a single screw extrusion (SSE) is a very difficult task, since it deals with several conflicting performance indices. Past research on the optimal SSE design has been successfully conducted by optimization procedures, in particular by multi-objective optimization. Problems with two or more objectives have been addressed by multi-objective evolutionary algorithms that search for the whole set of promising solutions in a single run. Our approach is guided by bi-objective optimization problems, using a methodology based on the weighted Tchebycheff scalarization function. The numerical results show that the proposed methodology is able to produce satisfactory results with physical meaning.

In the context of polymer extrusion, the optimal design of a single screw extrusion (SSE) is a very difficult task, since it deals with several performance indices (the objective functions) that are conflicting [1-4]. That is, the improvement of one performance index leads to the degradation of another. The SSE design is concerned with the definition of the optimal screw geometry/configuration for obtaining the best performance indices while manufacturing a certain product. Frequently, the screw geometry is established based on empirical knowledge, combined with a trial-and-error approach, until the desirable performance indices are achieved.
However, a more efficient approach is to handle the SSE design as an optimization problem, where the several conflicting objective functions are optimized simultaneously. Previous studies concerning this SSE process have addressed the multi-objective optimization (MOO) problem by adopting a methodology based on multi-objective evolutionary algorithms (MOEA), namely the reduced Pareto set genetic algorithm (RPSGA) [2, 3, 5]. Most MOEA treat the MOO problem as a whole and find the entire set of promising and desirable solutions in a single run of the algorithm. They are mainly population-based stochastic algorithms that rely on well-known and established meta-heuristics, e.g., genetic algorithms, particle swarm optimization, differential evolution and ant colony optimization, to name a few [6-11]. Simulated annealing and tabu search are point-to-point based stochastic algorithms that have also been used in this context [12, 13]. Multi-objective evolutionary algorithms based on decomposition (recognized in the scientific literature as MOEA/D) decompose a MOO problem into a number of scalar optimization subproblems (using a weight-dependent scalar aggregation function) and optimize them simultaneously [14-16]. The subproblems are solved simultaneously by handling a population of solutions that comprises the best solutions found so far for each subproblem. Methods that construct a scalar aggregation function, combining the multiple objective functions into a weighted scalar objective function that is used in a single objective optimization (SOO) context, thus producing a single optimal solution, are the easiest to understand and implement. Furthermore, well-established deterministic or stochastic SOO algorithms can be used to find one optimal solution [6, 7]. However, to obtain a set of promising and desirable solutions of the MOO problem, the SOO method must be run as many times as the desired number of points, using different sets of weights.
The most popular scalar aggregation functions include the weighted sum and the weighted Tchebycheff approaches. The contribution of this paper is to show that a methodology based on the weighted Tchebycheff scalarization function, when used to optimize bi-objective optimization problems with an evenly distributed set of weight vectors, provides acceptable and reasonably good approximations to the solutions of the SSE optimization problem. This study clearly shows that the methodology is an effective tool for SSE design optimization. The achieved optimal solutions are meaningful in physical terms. This paper is organized as follows. Section 2 describes the single screw extrusion problem, exhibiting the objective functions to be optimized and the decision variables of the problem. Section 3 presents some basic concepts concerning MOO and the proposed methodology based on the weighted Tchebycheff approach. Finally, Sect. 4 reports the numerical experiments carried out, and we conclude the paper in Sect. 5.

The most relevant performance indices in the SSE design are the mass output (Q), the length of the screw required for melting the polymer (Zt), the melt temperature at die entrance (Tmelt), the mechanical power consumption (Power), the weighted-average total strain (WATS) and the viscous dissipation (Visco). These objective functions vary and depend on the values of two sets of parameters: the geometric and the operating parameters. Their values are obtained using numerical modelling routines that describe the plasticizing SSE process [1]. The geometric (and/or the operating) parameters are the inputs of the computerized simulator, and the objective values Q, Zt, Tmelt, Power, WATS and Visco are the outputs. Usually, the best design is attained by maximizing the objectives Q and WATS, and minimizing Zt, Tmelt, Power and Visco.
The geometric parameters are related to the internal screw diameter of the feed zone (D1) and of the metering zone (D3), the axial lengths of the feed (L1), compression (L2) and metering (L3) zones, the flight thickness (e) and the screw pitch (p); see Fig. 1. The operating parameters, which correspond to the operating conditions of the extruder, are the screw speed (N) and the temperature profile of the heater bands in the barrel (Tb1, Tb2, Tb3). In this paper, it is assumed that the operating parameters are previously fixed. The aim is to find the optimal values for the geometric parameters - the decision variables of the MOO problem - while optimizing the objectives. This is a crucial issue since, for example, if the compression zone is too short, the rate of decrease of the channel depth downstream could become higher than the melting rate, resulting in material clogging. Furthermore, since the shallower the screw channel the higher the melting rate, a very long compression zone will result in an unnecessarily long melting stage. The SSE MOO problem can be formulated as follows:

Find a set of values for the vector (L1, L2, D1, D3, p, e) ∈ Ω ⊂ R^6 such that the vector (Q, Zt, Tmelt, Power, WATS, Visco) is optimized,   (1)

where the set Ω of feasible solutions is defined by simple bounds (box constraints) on the decision variables. However, due to the complexity of problem (1), which involves a vector of six objective functions, the design of the SSE process is optimized considering only two objective functions simultaneously. Since the mass output Q is the most relevant performance index of the polymer extruder, Q is present in all five of the alternative bi-objective optimization problems, while the geometric parameters are optimized. The analyzed bi-objective optimization problems are formulated as:

Find values for the vector (L1, L2, D1, D3, p, e) ∈ Ω such that the pair (Q, f) is optimized, where f is one of Zt, Tmelt, Power, WATS or Visco.   (2)

Thus, in this study, we deal with bi-objective optimization problems.
They are easier to solve, and it is easier to check whether the solutions produced by the optimization algorithm are suitable for the extrusion process. Moreover, it is simpler to identify and visualize the trade-offs between the solutions.

Nowadays, many design problems faced by decision-makers are tackled by a multi-objective approach. This means that more than one objective may be measured and is expected to be optimized. Frequently, there is a conflict while optimizing more than one objective, i.e., there does not exist one single solution that optimizes all the objectives simultaneously. However, a compromise solution can be selected, among a set of promising and desirable solutions - further ahead denoted by Pareto optimal set - according to the preferences of the decision-maker. In general terms, the MOO problem can be formally defined as:

minimize f(x) = (f1(x), ..., fr(x)) subject to x ∈ Ω,   (3)

where f is the vector of the r objective functions and Ω is the set of feasible solutions. Thus, when two solutions are compared, either one dominates the other or neither dominates. Two other important definitions in the MOO context are the following. Assuming that the optimization problem involves the minimization of the functions in f, Definition 1 says that x1 is Pareto optimal if there is no other feasible solution x2 which would decrease some objective fi without causing a simultaneous increase in at least one other objective. That is, there does not exist a single solution, but a set of solutions, called the Pareto optimal set (in the space of the decision variables). Their corresponding function vectors are said to be non-dominated. Given a MOO problem with objective function vector f ∈ R^r and the Pareto optimal set X*, the Pareto optimal front (PF*) is defined as:

PF* = { f(x) : x ∈ X* }.   (4)

The algorithms for MOO aim to find a good and balanced approximation to the Pareto optimal set (and the Pareto front PF*). That is, the goal is to find a manageable number of Pareto optimal (function) vectors which are evenly distributed along the Pareto optimal front [14]. The goal is to support the decision-maker in formulating his/her preferences and identifying the best (or compromise) solutions.
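The dominance relation and the selection of non-dominated vectors described above can be sketched as a small routine (a minimal illustration, assuming all objectives are minimized; the helper names are ours, not from the paper):

```python
def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimization):
    no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def non_dominated(front):
    """Keep only the non-dominated objective vectors of a list."""
    return [f for f in front
            if not any(dominates(g, f) for g in front if g is not f)]

# toy bi-objective vectors: (2.0, 3.0) dominates (3.0, 3.5)
points = [(1.0, 4.0), (2.0, 3.0), (3.0, 3.5), (4.0, 1.0)]
```

Applying `non_dominated(points)` removes only (3.0, 3.5); the remaining three vectors form the (toy) Pareto front.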
The most popular methods to solve a MOO problem are based on: i) the aggregation of the objectives; ii) the ε-constraint strategy; iii) producing an approximation to the PF* directly. The aggregation of the objectives, which requires weighting coefficients, is used as an a priori approach, since a total order is defined on the objective space (by defining a scalar function) and a statement of additional preferences, e.g., a weights vector for the objectives, should be provided by the decision-maker prior to the optimization. The aggregation method combines the objective functions into a scalar objective function that is used in a SOO context, thus producing one single compromise solution. To obtain an approximation to the PF*, the SOO method must be run as many times as the desired number of points, using different weight vectors [16]. In the ε-constraint method, one objective is selected to be minimized and all the other objective functions are converted into inequality constraints by setting upper bound values ε1, ..., εr−1 on each one [6]. Again, by varying and properly choosing the values of the bounds ε1, ..., εr−1, one can obtain different points on the Pareto optimal front [7]. Further, the method is able to find Pareto optimal solutions in convex and non-convex regions of the Pareto optimal front. Methods that compute an approximation to the PF* in a single run are in general stochastic population-based search techniques. They belong to the class of a posteriori approaches. Popular MOEA rely on meta-heuristics and work reasonably well on difficult problems. In general, they are based on iteratively generating a population of solutions which are manipulated through operations like crossover, mutation and selection. Population-based meta-heuristics are naturally prepared to produce many solutions, from which the set of Pareto optimal solutions can be extracted.
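The ε-constraint strategy can be illustrated with a tiny sketch over a finite candidate set (a hypothetical helper for two objectives under minimization, not part of the paper's method):

```python
def epsilon_constraint(f1, f2, eps, candidates):
    """ε-constraint sketch: minimize f1 over the candidate points x
    whose second objective satisfies the bound f2(x) <= eps."""
    feasible = [x for x in candidates if f2(x) <= eps]
    return min(feasible, key=f1) if feasible else None

# toy objectives: minimize f1(x) = x^2 subject to f2(x) = (x - 2)^2 <= eps
f1 = lambda x: x * x
f2 = lambda x: (x - 2.0) ** 2
grid = [i / 10 for i in range(31)]            # candidate points in [0, 3]
best = epsilon_constraint(f1, f2, eps=1.0, candidates=grid)
```

Varying `eps` traces different points of the Pareto front, as the text describes.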
Known examples with industrial applications are NSGA-II [8], SPEA-2 [17], MOEA/D [14] and RPSGA [3]. The reader is referred to [6, 7, 18] for more details. Our proposal to solve the SSE problem, as formulated in (1), is to use the weighted Tchebycheff approach, a rather efficient aggregation method [19]. The aggregation method combines the objective functions into a scalar objective function that is used in a SOO context, thus producing one single compromise solution. However, defining the scalar objective function requires specific knowledge about the search domain and the function ranges, which is in general not available. The most used aggregation method is the weighted sum approach, which assigns to each objective function fi of the vector f a non-negative weight wi, in a way that w1 + ... + wr = 1, and minimizes the weighted sum of the objectives. This approach may not be able to find certain Pareto optimal solutions in non-convex regions of the Pareto optimal front. The weighted Tchebycheff approach also assigns a weights vector to the objectives to form a single objective [15, 19]. As opposed to the linear aggregation of the weighted sum approach, the Tchebycheff approach relies on a nonlinear weighted aggregation of the functions fi, which is why it can deal with non-convex Pareto fronts [7]. In the minimization context, the SOO problem of the Tchebycheff approach has the form

minimize g(x; w) = max{ wi |fi(x) − zi*| : i = 1, ..., r } subject to x ∈ Ω,   (5)

where w = (w1, ..., wr) is the vector of non-negative weights and z* = (z1*, ..., zr*) is the ideal point in the objective space, i.e., zi* = min{ fi(x) : x ∈ Ω } for i = 1, ..., r. The Tchebycheff approach guarantees finding all Pareto optimal solutions when the ideal point z* is used. In practice, this approach requires finding z* by independently optimizing each objective function.
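The scalar objective of problem (5) translates almost directly into code (a minimal illustration; the function name is ours):

```python
def tchebycheff(f_vals, w, z_star):
    """Weighted Tchebycheff aggregation of problem (5):
    g(x; w) = max_i  w_i * |f_i(x) - z_i*|."""
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f_vals, z_star))

# e.g. objectives f(x) = (5.0, 2.0), ideal point z* = (1.0, 0.0),
# weights (0.5, 0.5): g = max(0.5 * 4.0, 0.5 * 2.0) = 2.0
g_value = tchebycheff((5.0, 2.0), (0.5, 0.5), (1.0, 0.0))
```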
We also note that problem (5) is non-differentiable (the function g(x; w) is not smooth at some points), although this disadvantage can be easily overcome by implementing a derivative-free optimization method. We remark the following:

Remark 1. Under some mild conditions, for each Pareto optimal x* ∈ X* there exists a weight vector w such that x* is the optimal solution of problem (5), and each optimal solution of problem (5) (associated with a weights vector w) is a Pareto optimal solution to problem (3) [6, 14, 15].

It is advisable to rescale or normalize the objective functions so that their objective values are approximately of the same magnitude. Based on the ideal objective vector z* and on the nadir objective vector z^nad, each objective fi is replaced by

f̄i(x) = (fi(x) − zi*) / (zi^nad − zi*),   (6)

and, consequently, the range of the normalized function is [0, 1]. The nadir point is constructed with the worst objective function values over the complete Pareto optimal set X*, i.e., zi^nad = max{ fi(x) : x ∈ X* } for i = 1, ..., r, which makes the accurate estimation of the nadir objective values a difficult task [20]. An estimate of z^nad, herein denoted by the vector f^max and obtained from a payoff table, may be used instead. The function g(x; w) in the Tchebycheff approach is then replaced by

g(x; w) = max{ wi |(fi(x) − zi*) / (fi^max − zi*)| : i = 1, ..., r }.   (7)

Remark 2. For normalized objectives, the maximization of fi(x) can be reformulated as a minimization objective by minimizing 1 − f̄i(x).

Nowadays, a large number of complex and difficult-to-solve real-world optimization problems are solved in an approximate way by meta-heuristics. These algorithmic frameworks combine heuristics with local search and population-based strategies to explore the feasible region and escape from local optimal solutions. Frequently, they are greedy methods. Meta-heuristics provide a good quality solution to the problem in a reasonable amount of time (instead of guaranteeing convergence to an optimal solution, as the exact methods do).
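The normalization (6) and the reformulation of Remark 2 can be sketched as follows (illustrative helpers, assuming f^max is a given estimate of the nadir point):

```python
def normalize(f_vals, z_star, f_max):
    """Normalization (6): rescale each objective to [0, 1] using the
    ideal point z* and the nadir estimate f^max:
    (f_i - z_i*) / (f_i^max - z_i*)."""
    return [(f - z) / (fm - z) for f, z, fm in zip(f_vals, z_star, f_max)]

def as_minimization(f_norm, maximize_flags):
    """Remark 2: for normalized objectives, maximizing f_i is handled
    by minimizing 1 - f_i instead."""
    return [1.0 - f if mx else f for f, mx in zip(f_norm, maximize_flags)]

# toy values: f(x) = (3.0, 7.0), z* = (1.0, 2.0), f^max = (5.0, 12.0)
f_bar = normalize((3.0, 7.0), (1.0, 2.0), (5.0, 12.0))   # both map to 0.5
```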
Meta-heuristics are general-purpose optimizers and mostly problem-independent, making them well suited to real-world optimization problems [21]. The simulated annealing (SA) method is a single solution-based meta-heuristic with origins in statistical mechanics. The method models the physical process of heating a material and, to minimize the system energy, it carefully controls the reduction of the temperature - the cooling process - in order to reduce defects [22]. This process is known as annealing. It is adequate for solving unconstrained and bound constrained global optimization problems. Since the feasible region Ω of each formulated bi-objective problem (2) is defined by box constraints alone, our proposal to solve the resulting problem (5) - with the objective function (7) (with r = 2) - in the Tchebycheff approach context, is to use a simulated annealing algorithm [23]. This is a well-known meta-heuristic for global optimization. At each iteration of the SA algorithm, a new point is randomly generated. The distance between this new point and the current point, or the extent of the search, is based on a probability distribution with a scale proportional to the temperature. The algorithm accepts a new point if it improves the objective function, but it also accepts, with a certain probability, a new point that deteriorates the objective. Using the temperature parameter, the SA algorithm controls the search for the global solution; e.g., a higher temperature allows more new points to be accepted, leading to the exploration of different regions of the search space. On the other hand, a lower temperature favors the acceptance of improving new points, resulting in the local exploitation of a promising region. Along the iterative process, the temperature is systematically decreased through a cooling schedule [24]. The main steps of the SA algorithm are presented in Algorithm 1.
For the sake of brevity, details of the SA algorithm are not included and the reader is referred to the literature.

Algorithm 1. Simulated annealing
Require: T0, Itmax
1: Set a starting point x and evaluate the objective using (7)
2: Set T = T0 and a cooling rate 0 < κ < 1
3: Set Nt, the number of trials per temperature
4: Set It = 0
5: repeat
6:   for j = 1 to Nt do
7:     Generate a new point x̂ (in the neighborhood of x and according to a generating probability) and evaluate the objective using (7)
8:     if x̂ is accepted according to the probability P(x, x̂; T) then
9:       Set x = x̂
10:    end if
11:  end for
12:  Set T = κT
13:  Set It = It + 1
14: until It ≥ Itmax

In the sequence of the strategy referred to in Sect. 2, Table 1 exposes the objective functions and their f^min and f^max values to be used in each of the five previously mentioned bi-objective optimization problems. Let w^1, ..., w^P be the set of evenly spread weight vectors, where P represents the chosen number of tested weight vectors. According to Remark 1 above, for each set of weights w^i (i = 1, ..., P), the computed approximation to the optimal solution of problem (5), x*(w^i), is an approximation to a Pareto optimal solution (of the set X*) of problem (3). Thus, our methodology to obtain an approximation to the PF* is as follows. For each weights vector w^i, Algorithm 1 is used to compute an approximation to x*(w^i) and the corresponding function vector, an approximation to f(x*(w^i)). This process is repeated Ntimes independent times. From the Ntimes sets of function vectors (approximations to the Pareto optimal front), the non-dominated function vectors are selected to better represent the trade-off between the objectives. From there on, the decision-maker may identify a set of compromise solutions. Algorithm 2 describes the main steps of the methodology. The weighted Tchebycheff scalarization algorithm was coded in MATLAB (MATLAB is a registered trademark of The MathWorks, Inc.).
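Algorithm 1 can be sketched for a bound-constrained scalar objective as follows. This is only an illustration, not the Toolbox's simulannealbnd; the Gaussian neighbourhood scaled by the temperature and the Metropolis acceptance rule are common choices assumed here:

```python
import math
import random

def simulated_annealing(g, lb, ub, T0=1.0, kappa=0.9,
                        n_trials=20, it_max=50, seed=0):
    """SA sketch for minimizing g over the box [lb, ub] (Algorithm 1)."""
    rng = random.Random(seed)
    x = [rng.uniform(l, u) for l, u in zip(lb, ub)]   # starting point
    gx, T = g(x), T0
    best_x, best_g = x[:], gx
    for _ in range(it_max):
        for _ in range(n_trials):                     # Nt trials per temperature
            # candidate in a temperature-scaled neighbourhood, clipped to bounds
            y = [min(max(xi + T * (u - l) * rng.gauss(0.0, 1.0), l), u)
                 for xi, l, u in zip(x, lb, ub)]
            gy = g(y)
            # Metropolis rule: accept improvements, sometimes accept worse points
            if gy < gx or rng.random() < math.exp(-(gy - gx) / max(T, 1e-12)):
                x, gx = y, gy
                if gx < best_g:
                    best_x, best_g = x[:], gx
        T *= kappa                                    # cooling schedule
    return best_x, best_g

# toy run: minimize the sphere function on [-5, 5]^2
x_best, g_best = simulated_annealing(lambda v: v[0] ** 2 + v[1] ** 2,
                                     [-5.0, -5.0], [5.0, 5.0])
```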
The code invokes the simulated annealing algorithm from the Global Optimization Toolbox (the simulannealbnd function) as the optimization solver to compute the optimal solution of each MOO problem under study, through the minimization of the objective in (7). The solver simulannealbnd resorts to the computerized simulator of the SSE process, which provides the objective function values (output) given a set of values of the decision variables (input). This simulator is a computer program that simulates the SSE process. It comprises a dynamic model of the extrusion process.

Algorithm 2. Weighted Tchebycheff scalarization methodology
Require: Ntimes, P
1: Set step = 1/(P − 1)
2: Randomly generate y ∈ Ω
3: for N = 1 to Ntimes do
4:   Set w^1_1 = 0
5:   for i = 1 to P do
6:     Set w^i_2 = 1 − w^i_1
7:     Based on y, use Algorithm 1 to provide x(w^i), an approximation to x*(w^i)
8:     Set PF_{N,i} = f(x(w^i))
9:     Set w^{i+1}_1 = w^i_1 + step
10:  end for
11: end for

Problem 1. With the objectives Q and Zt optimized simultaneously, the algorithm converges to a curve, the Pareto front, that defines the trade-offs between the objectives; see Fig. 2. The (blue) small full circles represent the solutions obtained for all the sets of weight vectors, over 5 runs, and the (red) large circles are the non-dominated solutions among the whole set. It is possible to see that the higher the mass output Q, the higher the length of screw required for melting. Table 2 shows the decision variables and the corresponding objective function values for four selected solutions from the Pareto front. The solution D provides the maximum value of the mass output, while the solution A gives the minimum value of the length of the screw required for melting. Two different trade-offs between the objectives are shown in the table, corresponding to points B and C in Fig. 2. We note that from solution A to B, a small increase in Q leads to a large degradation in Zt. While Q improves from solution C to D, a large degradation is noted in Zt.
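The weight sweep of Algorithm 2 - evenly spread weights with repeated independent solves - can be sketched as follows. This is an illustrative skeleton for r = 2 objectives; `g_builder` and `solve` are hypothetical stand-ins for the scalarization (7) and the SA solver:

```python
def weight_sweep(g_builder, solve, P=11, n_times=3):
    """Sketch of Algorithm 2: for weights w1 = 0, step, ..., 1
    (with w2 = 1 - w1), solve the scalarized problem and collect the
    resulting objective vectors.  g_builder(w) returns the scalar
    objective for weight vector w; solve(g) returns (x, f(x))."""
    step = 1.0 / (P - 1)
    front = []
    for _ in range(n_times):          # Ntimes independent repetitions
        w1 = 0.0
        for _ in range(P):
            w = (w1, 1.0 - w1)
            x, f_vals = solve(g_builder(w))
            front.append(f_vals)
            w1 += step
    return front                      # filter the non-dominated points afterwards

# toy setting: f1(x) = x^2, f2(x) = (x - 1)^2 on [0, 1], ideal point (0, 0)
def g_builder(w):
    return lambda x: max(w[0] * x * x, w[1] * (x - 1.0) ** 2)

def solve(g):                         # crude grid "solver" for the sketch
    xs = [i / 100 for i in range(101)]
    x = min(xs, key=g)
    return x, (x * x, (x - 1.0) ** 2)

front = weight_sweep(g_builder, solve, P=5, n_times=2)
```

The non-dominated subset of `front` approximates the Pareto front, from which the decision-maker picks compromise solutions.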
Problem 2. Table 3 reports the values of the decision variables, Q and Tmelt, and the other objective values obtained at the non-dominated points A and B. Solutions A and B are the optimal solutions in terms of Tmelt and Q, respectively.

Problem 3. Figure 4 shows the Pareto front computed by our proposed approach. Table 4 reports four non-dominated solutions. At solution A, we note that Q has the worst value but Power attains the best value, while at solution D, Q achieves the best value and Power has its worst value. The remaining solutions are compromise solutions between the two objectives. When comparing solutions A and B, B is clearly preferable, since the loss in Power is negligible but the gain in Q is reasonable. When solution C is analyzed and compared to D, we observe that a significant reduction in Power leads to only a small degradation in Q.

Problem 4. Figure 5 shows the Pareto front computed by the proposed approach. The maximum value of WATS is attained at solution A and the best solution in terms of Q is solution D. The other solutions show the trade-offs between the objectives; see Table 5. An emphasis is placed on solutions B and C. C is clearly preferred over B, since a very small degradation in WATS implies a large improvement in Q.

Problem 5. Figure 6 shows the Pareto front obtained when maximizing Q and minimizing Visco. From the overall 5 non-dominated solutions, we stress points A and B. The best value in terms of Visco is attained at solution A and the maximum value of Q is reported at solution B. The reader is referred to Table 6 for details concerning the values of the other objectives and the optimized parameter values.

After these experimental studies, we may conclude that the weighted Tchebycheff scalarization approach to solving the bi-objective problems formulated for the SSE optimal design has supplied a valuable procedure to identify good trade-offs between conflicting objectives.
Fig. 6. Pareto front for Problem 5.

The weighted Tchebycheff scalarization is a simple and easy-to-understand methodology that provides a viable approach to solving the MOO problems that emerge from the SSE design optimization. In particular, the direct visualization of the trade-offs through the solutions of the approximate Pareto front assists the decision-maker in the selection of crucial SSE performance indices and geometric parameter values. From the experimental studies on the five bi-objective problems, good trade-offs between conflicting objectives have been identified. We may also conclude that the proposed methodology provides optimal solutions that are meaningful in physical terms.

References

[1] An optimization approach to practical problems in plasticating single screw extrusion
[2] Optimisation-based design of extruders
[3] RPSGAe - reduced Pareto set genetic algorithm: application to polymer extrusion
[4] Polymer extrusion - setting the operating conditions and defining the screw geometry
[5] Optimization of single screw extrusion
[6] A posteriori methods
[7] A tutorial on multiobjective optimization: fundamentals and evolutionary methods
[8] A fast and elitist multiobjective genetic algorithm: NSGA-II
[9] MOPSO: a proposal for multiple objective particle swarm optimization
[10] An algorithm based on differential evolution for multi-objective problems
[11] On ant colony optimization algorithms for multiobjective problems
[12] A simulated annealing-based multiobjective optimization algorithm: AMOSA
[13] SSPMO: a scatter tabu search procedure for non-linear multiobjective optimization
[14] MOEA/D: a multiobjective evolutionary algorithm based on decomposition
[15] Expensive multiobjective optimization by MOEA/D with Gaussian process model
[16] A multiobjective optimization based framework to balance the global and local exploitation in expensive optimization
[17] SPEA2: improving the strength Pareto evolutionary algorithm
[18] Multi-Objective Optimization Using Evolutionary Algorithms
[19] An interactive weighted Tchebycheff procedure for multiple objective programming
[20] Toward an estimation of nadir objective vector using a hybrid of evolutionary and local search approaches
[21] Metaheuristics: review and application
[22] Optimization by simulated annealing
[23] Adaptive simulated annealing (ASA): lessons learned
[24] Simulated annealing with asymptotic convergence for nonlinear constrained optimization

Acknowledgments. The authors wish to thank two anonymous referees for their comments and suggestions to improve the paper. This work has been supported by FCT - Fundação para a Ciência e Tecnologia within the R&D Units Project Scope: UIDB/00319/2020, UIDB/05256/2020 and UIDP/05256/2020, UIDB/00013/2020 and UIDP/00013/2020 of CMAT-UM, and the European project MSCA-RISE-2015, NEWEX, Reference 734205.