
To address shortcomings of the standard particle swarm optimization algorithm, such as premature convergence and slow local search, a double-population particle swarm optimization algorithm based on the Lorenz equation and a dynamic self-adaptive strategy is proposed. Chaotic sequences produced by the Lorenz equation tune the acceleration coefficients to balance exploration and exploitation, a dynamic self-adaptive inertia weight factor accelerates convergence, and the double population improves convergence accuracy. Experiments were carried out on four multi-objective test functions against two classical multi-objective algorithms, the non-dominated sorting genetic algorithm and the multi-objective particle swarm optimization algorithm. The results show that the proposed algorithm performs well, with a faster convergence rate and a strong ability to escape local optima, and can be applied to many optimization problems.

The particle swarm optimization (PSO) algorithm is an intelligent optimization algorithm with a simple structure, few parameters, easy description and implementation, strong global search ability, and no need for gradient information. It is widely used in fields such as function optimization, multi-objective problem solving, and pattern recognition, and is especially suited to nonlinear, multi-extremum, non-differentiable, and multivariable complex optimization problems [

To remedy the shortcomings of the PSO algorithm, researchers have proposed many improvement strategies [

Although these improved PSO algorithms raise both performance and efficiency, it remains difficult to improve the local search ability of the algorithm while avoiding premature convergence. Many academic and industrial researchers have therefore been exploring new approaches that provide better performance, higher efficiency, and lower cost for particle swarm optimization [

PSO is an evolutionary algorithm proposed in 1995 by Eberhart and Kennedy [

v_{i,d}^{k+1} = ω·v_{i,d}^{k} + c_{1}(p_{i,d}^{k} − x_{i,d}^{k}) + c_{2}(p_{g,d}^{k} − x_{i,d}^{k}) (1)

x_{i,d}^{k+1} = x_{i,d}^{k} + v_{i,d}^{k+1} (2)

where i = 1, ⋯, M; ω is the inertial weight factor, which keeps the particles in motion and expands the search space; c_{1} and c_{2} are the learning factors, which weight the acceleration terms pulling each particle toward the extreme positions; v_{i,d}^{k} and x_{i,d}^{k} are, respectively, the velocity and position in dimension d of particle i at iteration k; p_{i,d}^{k} is the position of the individual extremum of particle i in dimension d; and p_{g,d}^{k} is the position of the global extremum of the population in dimension d.
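As a minimal sketch, the update rules (1)-(2) for a single particle can be written as follows; the function name and parameter defaults are illustrative, not from the paper:

```python
def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update per Eqs. (1)-(2) for a single particle.

    x, v, pbest are length-d lists for particle i; gbest is the global best
    position. Note: this follows the paper's form of Eq. (1); many PSO
    variants additionally multiply the two attraction terms by uniform
    random numbers r1, r2.
    """
    new_v = [w * v[d] + c1 * (pbest[d] - x[d]) + c2 * (gbest[d] - x[d])
             for d in range(len(x))]
    new_x = [x[d] + new_v[d] for d in range(len(x))]
    return new_x, new_v
```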

Three factors determine the change in particle velocity in the PSO algorithm. 1) The inertial weight factor, which carries the velocity information of the previous moment into the current update. 2) The cognitive factor, the development-capacity coefficient, which reflects the particle's deviation from its own best position and its local exploitation ability. 3) The exploration factor, the social-sharing coefficient, which reflects the particle's deviation from the global best and how particles share information and cooperate. The inertia coefficient thus determines the search step length: a larger value favors global search and a smaller one favors local exploration. The cognitive and exploration factors are collectively called learning factors and represent the relative weight of the particle's own optimum and the global optimum. By tuning the learning factors properly, the global and local search steps can be balanced, and the exploration factor can be changed to help the algorithm escape a local optimum when it enters premature convergence.

Based on the above, the authors developed the LSA-DPSO algorithm by combining the Lorenz equation and a dynamic adaptive strategy in a dual-population PSO algorithm. LSA-DPSO uses an adaptive inertia-weight adjustment strategy to improve convergence speed, optimizes the learning factors c_{1} and c_{2} with the chaotic sequence produced by the Lorenz equation so as to jump out of local optima when premature convergence sets in, and uses a dual population to improve diversity. The inertial weight factor is dynamically adjusted by formula (3).

ω = ω_{max} − P_{gbest}(k)/P_{lbest_ave} − (ω_{max} − ω_{min}) × k/k_{max} (3)

Here, ω_{max} and ω_{min} denote the maximum and minimum values of the inertial weight; P_{gbest}(k) is the global optimum at the k-th iteration; P_{lbest_ave} is the average of the local optima of all particles; k_{max} is the maximum number of iterations and k is the current iteration number.
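A direct transcription of formula (3); the function name and the default weight bounds (the values 0.9 and 0.3 used in the experiment section) are the only assumptions:

```python
def inertia_weight(k, k_max, p_gbest_k, p_lbest_ave, w_max=0.9, w_min=0.3):
    """Dynamic self-adaptive inertia weight, Eq. (3).

    p_gbest_k: global best fitness at iteration k;
    p_lbest_ave: average of all particles' personal-best fitness values.
    """
    return w_max - p_gbest_k / p_lbest_ave - (w_max - w_min) * k / k_max
```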

The learning factors c_{1} and c_{2} are produced as chaotic sequences through the typical Lorenz equation, shown in formula (4).

dx/dt = −a(x − y)
dy/dt = rx − y − xz
dz/dt = xy − bz (4)

In the formula, the parameters a, b and r are positive control parameters, set to 10, 8/3 and 28 respectively; in this regime the system is chaotic, and the learning factors (c_{1}, c_{2}) are defined as:

c_{1} = x(t), c_{2} = y(t) (5)

Because chaotic variables combine randomness with underlying regularity, their variation allows the algorithm to maintain population diversity, mitigating premature convergence and improving global search performance.
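One way to realize Eqs. (4)-(5) is to integrate the Lorenz system numerically. The paper does not specify the integrator, step size or initial point, so the forward-Euler step, dt and starting values below are assumptions; in practice the raw x(t), y(t) values (roughly ±20 on the attractor) would also need to be scaled into a sensible learning-factor range:

```python
def lorenz_sequence(n, x0=0.1, y0=0.1, z0=0.1,
                    a=10.0, b=8.0 / 3.0, r=28.0, dt=0.01):
    """Generate n (x, y) pairs from the Lorenz system of Eq. (4) with a
    simple forward-Euler step; per Eq. (5), c1 = x(t) and c2 = y(t)."""
    x, y, z = x0, y0, z0
    pairs = []
    for _ in range(n):
        dx = -a * (x - y)
        dy = r * x - y - x * z
        dz = x * y - b * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        pairs.append((x, y))
    return pairs
```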

The steps of LSA-DPSO algorithm are as follows:

1) Initialize the particle group

The particles are initialized into two subpopulations A and B, and the initial positions and velocities of all particles in A and B are generated randomly. The current position of each particle is taken as its individual best, and the best value within each subpopulation is selected as the global best of subpopulations A and B respectively.

2) Calculate the fitness value of each particle in the population.

3) Compare each particle's fitness value with the fitness value of its own best position, and take the better current position as the particle's best position.

4) Compare each particle's fitness value with the fitness value of the global best position, and take the better current position as the global best position.

5) Compare the global best position of subpopulation B with the global best position of subpopulation A, and use the better position as the global extremum of subpopulation A; likewise, take the better position as the global extremum of subpopulation B.

6) Obtain the learning factors c_{1}, c_{2} and the inertial weight, and update the velocities and positions of the particles.

7) If the termination condition of the algorithm is satisfied, the global best position is the optimal solution; save the result and stop. Otherwise return to step 2).
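The seven steps can be sketched as follows. This is a simplified illustration, not the authors' implementation: the sphere objective, parameter values and fixed learning factors (standing in for the chaotic c_{1}, c_{2}) are assumptions, and the velocity update uses the common stochastic PSO variant with uniform random multipliers:

```python
import random

def sphere(x):                      # illustrative objective (not from the paper)
    return sum(xi * xi for xi in x)

def init_swarm(m, dim, lo, hi):
    # Step 1: random positions, zero velocities, pbest = initial positions.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(m)]
    vel = [[0.0] * dim for _ in range(m)]
    return pos, vel, [p[:] for p in pos]

def lsa_dpso(f, dim=2, m=15, k_max=100, lo=-5.0, hi=5.0,
             w=0.7, c1=1.5, c2=1.5):
    swarms = [init_swarm(m, dim, lo, hi) for _ in range(2)]  # subpopulations A, B
    gbest = [min(pbest, key=f) for _, _, pbest in swarms]
    for _ in range(k_max):
        # Step 5: both subgroups adopt the better of the two global bests.
        best = min(gbest, key=f)
        gbest = [best[:], best[:]]
        for s in range(2):
            pos, vel, pbest = swarms[s]
            for i in range(m):
                # Step 6: velocity/position update (stochastic variant).
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * random.random() * (gbest[s][d] - pos[i][d]))
                    pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                # Steps 2-4: evaluate fitness and update personal/global bests.
                if f(pos[i]) < f(pbest[i]):
                    pbest[i] = pos[i][:]
                    if f(pos[i]) < f(gbest[s]):
                        gbest[s] = pos[i][:]
    return min(gbest, key=f)        # Step 7: best position found
```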

Multi-objective optimization is a typical class of optimization problems. Because the objectives are often contradictory and mutually constrained, it is difficult to optimize all of them simultaneously: one objective can usually be improved only at the expense of the others. The solution to such a problem is therefore usually not unique; instead there exists a series of optimal trade-offs, known as non-inferior solutions and commonly referred to as Pareto optimal solutions. Because swarm intelligence algorithms can search multiple solutions in parallel, classical multi-objective test problems are well suited to verifying the performance of the algorithm. In this paper, the multi-objective optimization test functions proposed by Schaffer [
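The non-inferior (Pareto) solution set described above can be computed with a straightforward dominance check; this generic sketch assumes minimization of all objectives:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

def non_dominated(points):
    """Filter a list of objective vectors down to the non-inferior set."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```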

The quality of the non-inferior solutions is evaluated by means of convergence and distribution indicators:

Convergence index (GD) [

GD = √(∑_{i=1}^{N} d_{i}^{2}) / N (6)

Here, N is the number of non-dominated solutions found by the algorithm, and d_{i} is the shortest Euclidean distance between the i-th obtained Pareto solution and the solutions on the true Pareto-optimal front.
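A direct implementation of Eq. (6), assuming both fronts are given as points in objective space:

```python
import math

def gd(front, true_front):
    """Generational distance, Eq. (6): d_i is the shortest Euclidean
    distance from the i-th obtained solution to the true Pareto front."""
    n = len(front)
    total = 0.0
    for p in front:
        d = min(math.dist(p, q) for q in true_front)
        total += d * d
    return math.sqrt(total) / n
```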

Distribution index (SP) [

SP = (1/d̄)·[(1/n)·∑_{i=1}^{n} (d̄ − d_{i})^{2}]^{1/2} (7)

d̄ = (1/n)·∑_{i=1}^{n} d_{i} (8)

Here, n is the number of non-dominated solutions, and d_{i} is the shortest distance between the i-th non-inferior solution in the objective space and the solutions on the true Pareto-optimal front.
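Formulas (7)-(8) computed from a list of the d_{i} values defined above:

```python
import math

def sp(distances):
    """Spacing metric, Eqs. (7)-(8): distances[i] is the d_i defined in
    the text (shortest distance from the i-th non-inferior solution to
    the true Pareto front)."""
    n = len(distances)
    d_bar = sum(distances) / n          # Eq. (8)
    var = sum((d_bar - d) ** 2 for d in distances) / n
    return math.sqrt(var) / d_bar       # Eq. (7)
```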

Function | Definition | Feature |
---|---|---|
SCH1 | min f(x) = (f_{1}(x), f_{2}(x)), s.t. x ∈ [−5, 7]; f_{1}(x) = x^{2}, f_{2}(x) = (x − 2)^{2} | Convex type |
SCH2 | min f(x) = (f_{1}(x), f_{2}(x)), s.t. x ∈ [−5, 10]; f_{1}(x) = −x (x ≤ 1), x − 2 (1 < x ≤ 3), 4 − x (3 < x ≤ 4), x − 4 (x > 4); f_{2}(x) = (x − 5)^{2} | Discontinuous |
ZDT2 | min f(x) = (f_{1}(x), f_{2}(x)), s.t. x_{i} ∈ [0, 1]; f_{1}(x) = x_{1}, f_{2}(x) = g(x)[1 − (x_{1}/g(x))^{2}], g(x) = 1 + 9(∑_{i=2}^{n} x_{i})/(n − 1) | Concave type |
ZDT3 | min f(x) = (f_{1}(x), f_{2}(x)), s.t. x_{i} ∈ [0, 1]; f_{1}(x) = x_{1}, f_{2}(x) = g(x)[1 − √(x_{1}/g(x)) − (x_{1}/g(x))·sin(10πx_{1})], g(x) = 1 + 9(∑_{i=2}^{n} x_{i})/(n − 1) | Discrete type |
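For reference, the four test functions can be implemented as below; the ZDT2 and ZDT3 definitions follow their standard forms, since the extracted table is partly garbled:

```python
import math

def sch1(x):
    """SCH1 (convex front), scalar x in [-5, 7]."""
    return (x * x, (x - 2.0) ** 2)

def sch2(x):
    """SCH2 (discontinuous front), scalar x in [-5, 10]."""
    if x <= 1:
        f1 = -x
    elif x <= 3:
        f1 = x - 2.0
    elif x <= 4:
        f1 = 4.0 - x
    else:
        f1 = x - 4.0
    return (f1, (x - 5.0) ** 2)

def zdt2(x):
    """ZDT2 (concave front), vector x with each x_i in [0, 1]."""
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    return (x[0], g * (1.0 - (x[0] / g) ** 2))

def zdt3(x):
    """ZDT3 (discrete front), vector x with each x_i in [0, 1]."""
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - math.sqrt(x[0] / g)
              - (x[0] / g) * math.sin(10.0 * math.pi * x[0]))
    return (x[0], f2)
```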

The LSA-DPSO algorithm was applied to the multi-objective test functions SCH1, SCH2, ZDT2 and ZDT3. The population size is 60; the maximum number of iterations is 500; the maximum and minimum inertia weights are 0.9 and 0.3 respectively. The Pareto non-inferior solutions obtained by the algorithm, and their distribution for each function, are plotted in Figures 1-4.

For a multi-objective function, the optimal target domain is the boundary of the fitness-value region, called the effective interface. As can be seen from the plotted Pareto non-inferior solutions, the algorithm accurately recovers the effective interface for all four test functions and obtains the complete Pareto non-inferior solution set. The distribution of the solutions of each function is also even.

Even for complex multi-objective functions with discrete or discontinuous fronts, such as SCH2 and ZDT3, the algorithm still produces an accurate non-inferior solution set.

The above results show that the LSA-DPSO algorithm performs well on multi-objective problems: it obtains non-inferior solutions that are very close to the true Pareto front, and a large number of well-distributed non-inferior solutions, which verifies the feasibility and accuracy of the algorithm.

To further verify the robustness and stability of the algorithm, the LSA-DPSO algorithm was run 5 times on each test function, and the convergence index GD, the distribution index SP and the average computation time CT were recorded for each of the four test functions. The results are shown in the table below.

The data show that the LSA-DPSO algorithm performs well in solving multi-objective problems, confirming its feasibility, accuracy and efficiency. The GD values indicate that the non-inferior solutions are very close to the true Pareto front, and the SP values show that they are well distributed. Meanwhile, the variation of the CT index between functions is within acceptable limits.

To compare the algorithms, we contrast the LSA-DPSO algorithm with the NSGA-II and MOPSO algorithms, running each algorithm five times on the various target test functions. The statistical results are shown in the comparison table below.

In terms of convergence accuracy and distribution, the LSA-DPSO algorithm is superior. On GD and SP, the front obtained by the LSA-DPSO algorithm is closer to the optimal front than those of the other two algorithms, indicating better convergence and accuracy. On CT, the execution time of the LSA-DPSO algorithm is lower than that of NSGA-II but higher than that of MOPSO. The reason is that LSA-DPSO adjusts its parameters dynamically during the particles' flight, which inevitably consumes more time than the standard PSO's long, single-direction flight search. In return, the convergence and distribution of the non-inferior solutions obtained by the LSA-DPSO algorithm are better than those of the other two algorithms.

Numerical experiments with the LSA-DPSO algorithm on four multi-objective optimization functions show that the algorithm has better overall performance.

Performance indicators | SCH1 | SCH2 | ZDT2 | ZDT3 | Average |
---|---|---|---|---|---|
GD | 0.000335 | 0.000336 | 0.000353 | 0.000340 | 0.000341 |
SP | 0.00338 | 0.00335 | 0.00336 | 0.00325 | 0.00334 |
CT^{a} | 17.7 | 17.6 | 18.4 | 19.8 | 18.4 |

^{a}Computation time.

Performance indicators | NSGA II^{a} | MOPSO^{b} | LSA-DPSO |
---|---|---|---|
GD | 0.000366 | 0.000465 | 0.000341 |
SP | 0.00465 | 0.00521 | 0.00334 |
CT | 22.5 | 17.5 | 18.4 |

^{a}Non-dominated sorting genetic algorithm II; ^{b}multi-objective particle swarm optimization algorithm.

Introducing the chaotic sequence of the Lorenz equation alleviates the premature convergence problem of the PSO algorithm, and the dynamic adaptive mechanism of the two populations improves the algorithm's convergence speed and accuracy.

This paper presents a dynamic self-adaptive double-population particle swarm algorithm based on the Lorenz equation. The chaotic sequence generated by the Lorenz equation optimizes the learning factors, and a dynamic self-adaptive strategy adjusts the inertia weight, which improves convergence speed and accuracy and alleviates the premature convergence of the PSO algorithm. In the test-function experiments, the proposed algorithm was used to solve multi-objective problems; the obtained non-inferior solutions approach the Pareto optimal solution set very well and are evenly distributed, indicating the better performance of the LSA-DPSO algorithm. The algorithm can be applied to complex optimization problems such as nonlinear, multi-extremum and multivariable problems, and can provide a reference for many practical engineering applications.

The authors gratefully acknowledge the support from the National Natural Science Foundation of China (Grant Numbers: 51663001, 51463015, 51377025) and the science and technology research project of the education department of Jiangxi province (Grant Numbers: GJJ150983, GJJ151012).

Wu, Y., Sun, G.Q., Su, K.M., Liu, L., Zhang, H.J., Chen, B.S. and Li, M.S. (2017) Dynamic Self-Adaptive Double Population Particle Swarm Optimization Algorithm Based on Lorenz Equation. Journal of Computer and Communications, 5, 9-20. https://doi.org/10.4236/jcc.2017.513002