Applied Mathematics and Computation 189 (2007) 1205–1213 www.elsevier.com/locate/amc
A modified particle swarm optimizer with dynamic adaptation

Xueming Yang a,*, Jinsha Yuan a, Jiangye Yuan a, Huina Mao b

a Department of Electronic and Communication Engineering, North China Electric Power University, Baoding 071003, China
b School of Informatics, University of Manchester, Manchester M60 1QD, United Kingdom
Abstract

This paper proposes a modified particle swarm optimization algorithm with dynamic adaptation. The algorithm uses a modified velocity updating formula in which the randomness in the course of updating particle velocities is reduced and the inertia weight differs from particle to particle. Moreover, the algorithm introduces two parameters describing the evolving state of the algorithm: the evolution speed factor and the aggregation degree factor. By analyzing the influence of these two parameters on the search ability of PSO, a new strategy is presented in which the inertia weight changes dynamically with the run and evolution state: the inertia weight is given as a function of the evolution speed factor and the aggregation degree factor, and its value is adjusted dynamically according to the evolution speed and aggregation degree. The features of the proposed algorithm are analyzed, and several test functions are used in a simulation study. Experimental results show that the proposed algorithm remarkably improves the ability of PSO to escape local optima and significantly enhances its convergence precision.

© 2006 Elsevier Inc. All rights reserved.

Keywords: Particle swarm optimization; Inertia weight; Swarm intelligence
1. Introduction

Particle swarm optimization (PSO) is a stochastic, population-based optimization algorithm, first introduced by Kennedy and Eberhart in 1995 [1,2]. In the PSO algorithm, each member of the population is called a "particle", and each particle "flies" around the multidimensional search space with a velocity that is constantly updated according to the particle's own experience and the experience of its neighbours or of the whole swarm. PSO has already been applied in many areas, such as function optimization, artificial neural network training, pattern classification and fuzzy system control. Its advantages are that it converges rapidly towards an optimum, is simple to compute, easy to implement, and free from the complex operations of genetic algorithms (e.g., coding/decoding, crossover and mutation). However, PSO does exhibit some disadvantages: it is sometimes easily trapped in local optima, and its convergence rate decreases considerably in the later period of evolution; when a near-optimal solution is reached, the algorithm stops optimizing, and thus the accuracy the algorithm can achieve is limited.
* Corresponding author.
E-mail address: [email protected] (X. Yang).

0096-3003/$ - see front matter © 2006 Elsevier Inc. All rights reserved.
doi:10.1016/j.amc.2006.12.045
Various attempts have been made to overcome these problems. Many of the proposed approaches enhance the performance of PSO by adjusting the inertia weight, such as fuzzy adaptive particle swarm optimization [3], linearly decreasing weight (LDW) [4], increasing inertia weight [5], and randomized inertia weight [6,7]. The strategy of linearly decreasing weight is the most commonly used; it improves the performance of PSO to some extent, but it may still be trapped in local optima and fail to attain high search accuracy.

In this paper, a modified particle swarm optimizer with dynamic adaptation (DAPSO) is proposed. Firstly, a modified velocity updating equation is used, in which the randomness in the course of updating particle velocities is reasonably decreased and the inertia weight differs from particle to particle. Then, a novel dynamically changing inertia weight strategy based on the search state of the particle swarm is proposed. Finally, the superiority of the proposed algorithm in both robustness and efficiency is verified by numerical simulation.

The rest of this paper is organized as follows. Section 2 briefly introduces the standard PSO. In Section 3, the traditional convergence criterion of PSO is formulated and a new convergence definition of PSO is given. Section 4 presents the proposed PSO, which has a different particle velocity updating rule and a novel dynamically changing inertia weight strategy based on the search state of the particle swarm. Computational experiences with a number of test problems from the literature are shown in Section 5. Finally, Section 6 summarizes and draws conclusions.

2. Standard PSO

In the original PSO, proposed by Kennedy and Eberhart, the velocity and position updating rules are given by

v_{id}^{t+1} = v_{id}^t + c_1 r_1 (pbest_{id}^t - x_{id}^t) + c_2 r_2 (gbest_d^t - x_{id}^t),   (1)

x_{id}^{t+1} = x_{id}^t + v_{id}^{t+1},   i = 1, 2, ..., n,   (2)
where c_1 and c_2 are constants named acceleration coefficients; r_1 and r_2 are two independent random numbers uniformly distributed in the range [0, 1]; v_i ∈ [-V_max, V_max], where V_max is a problem-dependent constant defined in order to clamp the excessive roaming of particles. pbest_{id}^t is the best previous position along the dth dimension of particle i up to iteration t (memorized by every particle); gbest_d^t is the best previous position among all the particles along the dth dimension up to iteration t (memorized in a common repository). The original PSO is improved in [9] by modifying (1) to

v_{id}^{t+1} = ω v_{id}^t + c_1 r_1 (pbest_{id}^t - x_{id}^t) + c_2 r_2 (gbest_d^t - x_{id}^t),   (3)
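As a concrete reference point, the standard update of Eqs. (1)-(3) can be sketched in Python. This is a minimal sketch, not the paper's code; the coefficient values (w = 0.729, c_1 = c_2 = 1.49445) are common defaults from the PSO literature, not values stated in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445, vmax=1.0):
    """One iteration of the standard PSO update, Eqs. (1)-(3).

    x, v, pbest have shape (n_particles, n_dims); gbest has shape (n_dims,).
    Note that r1 and r2 are drawn independently for every dimension of every
    particle; this is precisely what the modified update in Section 4 changes.
    """
    n, d = x.shape
    r1 = rng.random((n, d))
    r2 = rng.random((n, d))
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v_new = np.clip(v_new, -vmax, vmax)   # clamp to [-Vmax, Vmax]
    return x + v_new, v_new
```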
where ω ≥ 0 is defined as the inertia weight factor. Empirical studies of PSO with inertia weight have shown that a relatively large ω yields stronger global search ability, while a relatively small ω results in faster convergence.

3. The convergence criterion of PSO

Generally, convergence means that a system or process reaches a stable state. For a population-based optimization algorithm, convergence can be defined in terms of either the individual or the whole swarm; for instance, there are two such convergence definitions for genetic algorithms. van den Bergh gave the following convergence definition of PSO in [10].

Definition 1. Given a particle position x(t) and an arbitrary position p in the search space, convergence is defined as

lim_{t→∞} x(t) = p.   (4)
This definition implies that particle convergence means each particle ultimately stops at a certain position p in the search space. By analyzing the trajectories of particles, van den Bergh concludes that all particles converge to the positions of the global best solutions. This conclusion is significant, since it reveals
an important feature of PSO, i.e., gbest is the attractor of the whole swarm. Of course, gbest itself changes as the algorithm runs. If all particles achieve convergence, the swarm no longer changes, which is the stable state; thus it can be stated that the PSO algorithm has converged, and gbest will accordingly not change. So another convergence definition of PSO can be given.

Definition 2. Let gbest(t) be the best position found by PSO at time t (the tth generation), and let gbest* be a fixed position in the search space. The convergence definition is written as

lim_{t→∞} gbest(t) = gbest*.   (5)
Definition 2 implies that if the gbest output by PSO no longer changes, then convergence is achieved. If gbest is the global best solution, the algorithm attains global best convergence; otherwise, the algorithm is stuck in a local optimum.

4. The proposed PSO with dynamic adaptation

The original PSO algorithm and its improved variants are typically based on Definition 1. By analyzing (1) and (2), it can be seen that each particle follows two "best" values: the current global best value and the best solution it has achieved so far. The velocity of the particles rapidly approaches zero, which causes the particles to be stuck in local optima. In [11] this phenomenon is called the "similarity" of the particle swarm, and it can be observed through experiments. The "similarity" constricts the search area of the particles. Enlarging the search area necessitates either increasing the number of particles or weakening the ability of particles to track the present global best value. However, the former entails increased computational complexity, and the latter leads to slow convergence.

It is noticeable that a convergence criterion based on Definition 2 does not require all particles to end up at one or several fixed positions p, and thus the particle velocities do not need to approach zero at convergence. This gives PSO a freer choice, and this paper therefore proposes a novel adaptive PSO. The core of PSO is the particle updating formulae. In the proposed PSO, the velocity and position updating rules are given by

v_i^{t+1} = ω_i^t v_i^t + c_1 r_1 (pbest_i^t - x_i^t) + c_2 r_2 (gbest^t - x_i^t),   (6)
x_i^{t+1} = x_i^t + v_i^{t+1},   i = 1, 2, ..., n.   (7)
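A minimal sketch of the modified update of Eqs. (6)-(7) is given below, highlighting the two differences from the standard update: one shared random value per particle (not per dimension) and a per-particle inertia weight. The acceleration coefficients c_1 = c_2 = 2.0 are an assumed setting; the paper does not state its values.

```python
import numpy as np

rng = np.random.default_rng(1)

def dapso_step(x, v, w, pbest, gbest, c1=2.0, c2=2.0, vmax=1.0):
    """One iteration of the modified update, Eqs. (6)-(7).

    Differences from the standard update: (1) r1 and r2 are scalars shared by
    all dimensions of a given particle; (2) each particle i carries its own
    inertia weight w[i]. c1 = c2 = 2.0 is an assumption, not a value from
    the paper.
    """
    n, d = x.shape
    r1 = rng.random((n, 1))   # one random number per particle, not per dimension
    r2 = rng.random((n, 1))
    v_new = w[:, None] * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v_new = np.clip(v_new, -vmax, vmax)
    return x + v_new, v_new
```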
Compared with the conventional PSO, the velocity updating formula (6) has two different characteristics: (1) the values of r_1 and r_2 vary stochastically only with the particle index and the iteration; in other words, in the (t + 1)th iteration every dimension of the ith particle shares the same random value. (2) The inertia weight also varies with the particle index and the iteration. Since the search process of PSO is non-linear and highly complicated, a linearly decreasing inertia weight cannot truly reflect the actual search process. In the algorithm of this paper, the inertia weight is affected by the evolving state of the algorithm and is determined by the evolution speed factor of each particle and the aggregation degree factor of the swarm. Zhang et al. also introduced these two factors in [8], but their definitions of the factors and their strategy are quite different from this paper's; due to the limitations of that strategy and concept, their algorithm does not show a remarkable improvement in convergence accuracy, and its ability to jump out of local optima is even lower than that of the LDW strategy.

4.1. The evolution speed factor and aggregation degree factor

Differently from [8], this paper defines the evolution speed and aggregation degree using the following formulae:
(1) Evolution speed factor

h_i^t = min(F(pbest_i^{t-1}), F(pbest_i^t)) / max(F(pbest_i^{t-1}), F(pbest_i^t)),   (8)
where F(pbest_i^t) is the fitness value of pbest_i^t. Under the assumptions and definitions above, 0 < h ≤ 1. This parameter takes the run history of each particle into account and reflects the evolutionary speed of each particle: the smaller the value of h, the faster the speed.

(2) Aggregation degree

s = min(F_tbest, F̄_t) / max(F_tbest, F̄_t),   (9)

where F̄_t is the mean fitness of all particles in the swarm at the tth iteration and F_tbest is the optimal value found in that iteration. Note that F(gbest^t) cannot be substituted for F_tbest, since F_tbest represents the optimal value found in this iteration, while F(gbest^t) denotes the optimal value that the whole swarm has found up to the tth iteration. Compared with [8], the aggregation degree factor formula in this paper responds faster to the evolving state.
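The two factors of Eqs. (8) and (9) can be sketched as follows. This is a minimal sketch that assumes strictly positive fitness values, so that both ratios stay within (0, 1]; a real implementation would need to shift or guard fitness values that can be zero or negative.

```python
import numpy as np

def evolution_speed(f_pbest_prev, f_pbest):
    """Evolution speed factor h of Eq. (8), per particle.

    f_pbest_prev, f_pbest: arrays of the fitness of each particle's personal
    best at iterations t-1 and t. Assumes positive fitness, so 0 < h <= 1;
    a smaller h means faster improvement of the personal best.
    """
    return (np.minimum(f_pbest_prev, f_pbest)
            / np.maximum(f_pbest_prev, f_pbest))

def aggregation_degree(f_tbest, f_mean):
    """Aggregation degree factor s of Eq. (9): the ratio of the best fitness
    found in this iteration (F_tbest) to the swarm's mean fitness, again
    assuming positive fitness values."""
    return min(f_tbest, f_mean) / max(f_tbest, f_mean)
```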
4.2. The proposed strategy with dynamic inertia weight

The evolution speed factor and the aggregation degree factor of the swarm are the two typical characteristic parameters of the search course of PSO. The value of the inertia weight should vary with the evolution speed and aggregation degree of the swarm, so ω can be written as a function of the parameters h and s:

ω_i^t = f(h_i^t, s).   (10)
A core problem is how ω should vary with the evolution speed factor and the aggregation degree factor of the swarm. The purpose of the variation is to give the algorithm a better ability to search rapidly and to move out of local optima. The standard PSO is motivated by simulating the social behavior of bird flocking, while each "bird" of the algorithm in this paper conforms better than the standard PSO to the food-searching habit of birds in nature: in the search course of an individual (i.e., a particle), if the possibility of finding the object increases (i.e., the convergence rate is relatively large), the individual does not rush at the next position with acceleration, but rather decelerates (i.e., decreases its inertia weight) while flying towards the optimal value, which increases the search intensity in the current small search area; otherwise, it increases the search velocity and the search intensity over a large area. Meanwhile, in order to prevent the similarity of the swarm, the ability to jump out of local optima should be enhanced; that is, when the aggregation degree factor becomes larger, the inertia weight should increase proportionally. In this paper, the inertia weight is given by

ω_i^t = ω_ini - a(1 - h_i^t) + b s,   (11)

where ω_ini is the initial value of ω, set to ω_ini = 1 in this paper. Since 0 < h ≤ 1 and 0 ≤ s ≤ 1, it follows that 1 - a ≤ ω ≤ 1 + b. The values of a and b are typically chosen within the range [0, 1]. There is thus a large difference between the algorithm in this paper and that in [8]: the basic view of the former is that, as the evolution speed increases, the inertia weight decays proportionally, whereas the latter takes the reverse view.

5. Experiment and evaluation

The performance of the proposed PSO model is tested on a number of analytical benchmark functions which have been extensively used to compare both PSO-type and non-PSO-type metaheuristic algorithms. This paper uses the benchmark function set shown below.
Rosenbrock function:

f_1(x) = Σ_{i=1}^{n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2].
Generalized Rastrigrin function:

f_2(x) = Σ_{i=1}^{n} [x_i^2 - 10 cos(2π x_i) + 10].
Generalized Griewank function:

f_3(x) = (1/4000) Σ_{i=1}^{n} x_i^2 - Π_{i=1}^{n} cos(x_i / √i) + 1.

Schaffer's f6 function:

f_4(x) = 0.5 + [sin^2(√(x_1^2 + x_2^2)) - 0.5] / [1 + 0.001 (x_1^2 + x_2^2)]^2.

Sphere function:

f_5(x) = Σ_{i=1}^{n} x_i^2.
Ackley function:

f_6(x) = -20 exp(-0.2 √((1/n) Σ_{i=1}^{n} x_i^2)) - exp((1/n) Σ_{i=1}^{n} cos(2π x_i)) + 20 + e.
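A few of the benchmarks above can be sketched directly from their definitions; this is an illustrative sketch, with all functions attaining their minimum of 0 at the origin, consistent with Table 1.

```python
import numpy as np

def rastrigin(x):
    """Generalized Rastrigrin function f2."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def griewank(x):
    """Generalized Griewank function f3."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return float(np.sum(x**2) / 4000.0
                 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)

def ackley(x):
    """Ackley function f6."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
                 + 20.0 + np.e)
```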
5.1. Characteristics of the proposed PSO

The Griewank function is used to show the characteristics of DAPSO. The parameters are set as 20 particles, 30 dimensions, and a maximum of 2000 iterations (a = 0.4, b = 0.8). Fig. 1 illustrates the variation of parameters through the course of a DAPSO run, including the velocity of a particle in a certain dimension, the aggregation degree, the evolution speed, the inertia weight and the best fitness. The result shown in Fig. 1 is typical. It can be seen that the aggregation degree and the evolution speed have different variation characteristics, and that, unlike in the standard PSO, the particle velocity in DAPSO does not approach zero but ranges within [-Vmax, Vmax]. The evolution speed factor h and the inertia weight ω are two-dimensional parameters that vary with both particle index and iteration. In the later stage of evolution in the standard PSO, the velocity of particles decreases and approaches zero while the aggregation degree increases and approaches 1; in the proposed algorithm, the inertia weight stays at a small level in the later period of iterations.

5.2. Convergence precision and convergence rate of DAPSO

The parameter settings of the benchmark functions are shown in Table 1. As in [3], for all the test functions except f4, three different dimension sizes are tested: 10, 20 and 30. The maximum number of generations is set to 1000, 1500 and 2000 for dimensions 10, 20 and 30, respectively. In order to eliminate stochastic discrepancy, a total of 50 runs were conducted for each experimental setting. Table 1 also gives the convergence threshold of each test function. In this paper, if the algorithm fails to converge to a given threshold, that run is regarded as a failure, but its result is still counted in the mean best fitness and success rate.
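An end-to-end sketch of a DAPSO run (Eqs. (6)-(11)) for minimisation is given below. The values a = 0.4, b = 0.8, ω_ini = 1, a population of 20 and Vmax = Xmax follow Section 5; c_1 = c_2 = 2.0 and the random seed are assumptions. Fitness values are shifted to be strictly positive before computing h and s, an assumption needed for Eqs. (8)-(9) to stay in (0, 1] for objectives whose values can reach zero.

```python
import numpy as np

def dapso(f, n_particles=20, dim=10, iters=200, xmax=100.0, vmax=100.0,
          w_ini=1.0, a=0.4, b=0.8, c1=2.0, c2=2.0, seed=0):
    """Sketch of a full DAPSO run; returns (gbest position, gbest fitness)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-xmax, xmax, (n_particles, dim))
    v = rng.uniform(-vmax, vmax, (n_particles, dim))
    fit = np.array([f(p) for p in x])
    pbest, f_pbest = x.copy(), fit.copy()
    g = int(np.argmin(fit))
    gbest, f_gbest = x[g].copy(), fit[g]
    w = np.full(n_particles, w_ini)
    for _ in range(iters):
        # Eqs. (6)-(7): scalar r1, r2 per particle; per-particle inertia weight.
        r1 = rng.random((n_particles, 1))
        r2 = rng.random((n_particles, 1))
        v = w[:, None] * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -vmax, vmax)
        x = np.clip(x + v, -xmax, xmax)
        f_prev = f_pbest.copy()
        fit = np.array([f(p) for p in x])
        better = fit < f_pbest
        pbest[better], f_pbest[better] = x[better], fit[better]
        t = int(np.argmin(fit))
        if fit[t] < f_gbest:
            gbest, f_gbest = x[t].copy(), fit[t]
        # Eqs. (8)-(9): shift fitnesses to be strictly positive (assumption).
        shift = 1.0 - min(f_prev.min(), f_pbest.min(), fit.min())
        h = ((np.minimum(f_prev, f_pbest) + shift)
             / (np.maximum(f_prev, f_pbest) + shift))
        s = (fit.min() + shift) / (fit.mean() + shift)
        w = w_ini - a * (1.0 - h) + b * s          # Eq. (11)
    return gbest, f_gbest
```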
Fig. 1. The typical variation of parameters in the evolution through the course of a DAPSO run.
Table 2 lists the testing results for the functions f2, f3, f4 and f5. The algorithm in this paper finds the global optimal solution for all four functions, and the success rate attains 100% over 50 runs. M_iteration1 denotes the mean number of iterations before finding the global optimal value.
Table 1
Parameter settings of the test functions

Function  Name         Trait       Domain            Asymmetric init. range  Xmax  Vmax  Minimum  Threshold
f1        Rosenbrock   Unimodal    [-30, 30]^n       (15, 30)^n              100   100   0        500
f2        Rastrigrin   Multimodal  [-5.12, 5.12]^n   (2.56, 5.12)^n          10    10    0        70
f3        Griewank     Multimodal  [-600, 600]^n     (300, 600)^n            600   600   0        0.15
f4        Schaffer f6  Multimodal  [-100, 100]^2     (50, 100)^2             100   100   0        10^-5
f5        Sphere       Unimodal    [-100, 100]^n     (50, 100)^n             100   100   0        0.01
f6        Ackley       Multimodal  [-30, 30]^n       (15, 30)^n              30    30    0        0.1
Table 2
Testing results for the test functions f2, f3, f4 and f5

Function  Name         Pop. size  Dim.  Gene.  a    b    Mean best fitness  M_iteration1  Success ratio (%)
f2        Rastrigrin   20         10    1000   0.4  0.8  0                  25.62         100
f2        Rastrigrin   20         20    1500   0.4  0.8  0                  54.58         100
f2        Rastrigrin   20         30    2000   0.4  0.8  0                  73.12         100
f3        Griewank     20         10    1000   0.4  0.8  0                  59.5          100
f3        Griewank     20         20    1500   0.4  0.8  0                  114.08        100
f3        Griewank     20         30    2000   0.4  0.8  0                  161.56        100
f4        Schaffer f6  20         2     1000   0.4  0.8  0                  8.10          100
f5        Sphere       20         10    1000   0.4  0.8  0                  47.66         100
f5        Sphere       20         20    1500   0.4  0.8  0                  124.42        100
f5        Sphere       20         30    2000   0.4  0.8  0                  171.14        100
Table 3
Testing results for the test functions f1 and f6

Function  Name        Pop. size  Dim.  Gene.  a    b    Mean best fitness  M_iteration2  Success ratio (%)
f1        Rosenbrock  20         10    1000   0.4  0.8  0.4588             114.82        100
f1        Rosenbrock  20         20    1500   0.4  0.8  0.5206             207.26        100
f1        Rosenbrock  20         30    2000   0.4  0.8  0.0724             221.82        100
f6        Ackley      20         10    1000   1.0  0.1  1.138              49.28         94
f6        Ackley      20         20    1500   1.0  0.1  0.295              124.76        98
f6        Ackley      20         30    2000   1.0  0.1  0.581              138.22        96
The testing results for the functions f1 and f6 are given in Table 3. For function f1, the proposed algorithm demonstrates far better average convergence precision than the standard PSO and the improved PSO algorithms. For function f6, the proposed algorithm also yields a compelling result in terms of average convergence precision and success rate. M_iteration2 in the table denotes the mean number of iterations before success (i.e., the number of iterations to converge to the given threshold).
Fig. 2. Mean best fitness trendlines on different benchmarks: (a) f1 (Rosenbrock); (b) f2 (Rastrigrin); (c) f3 (Griewank); (d) f4 (Schaffer f6); (e) f5 (Sphere) and (f) f6 (Ackley).
Fig. 2 illustrates the mean best fitness trendlines, averaged over 50 runs, for the six benchmark functions computed by the proposed algorithm. The results are consistent with those shown in Tables 2 and 3. In Fig. 2, the vertical coordinates in (a), (b), (c) and (e) show the mean best fitness on a logarithmic scale; since the mean best fitness may be zero, a displacement disp = 10^-20 is added to it before taking the logarithm. It is apparent from the testing results on the six functions that DAPSO outperforms the LDW strategy and the other improved PSO algorithms in [3,11-15]. The proposed PSO has a strong ability to move out of local optima, and it can effectively prevent premature convergence and significantly enhance the convergence rate and accuracy in the evolutionary process.

In fact, the two parameters a and b in formula (11) have some impact on the algorithm's performance. However, it is difficult to make a theoretical case for particular values of these two parameters, so their role in the algorithm is investigated experimentally. A large number of experiments on the six benchmark functions above were conducted to study the impact of the two parameters on algorithm performance. The results illustrate that the algorithm performs well when a and b are set within [0, 1]. Take the Rastrigrin and Schaffer f6 functions as examples, with the parameters set as Pop. size = 20, Dim = 10, and max iterations = 1000. The experiments show that when the parameters a and b in formula (11) range within [0, 1], the proposed algorithm always finds the global optimal solution, and the success rate attains 100%. Tables 4 and 5 show the average number of iterations needed to reach the global optimal solution for the Schaffer f6 and Rastrigrin functions, respectively. As in the experiments above, a total of 50 runs were conducted for each parameter setting of a and b.
The experiments show that the performance of DAPSO is not strongly dependent on the parameters a and b, so in the design of an algorithm it is not necessary to pay much attention to how these parameters are chosen. Such robustness is an advantage of the proposed algorithm.
Table 4
Mean number of iterations the Schaffer f6 function requires to converge to the global optimum

         a=0.1  a=0.2  a=0.3  a=0.4  a=0.5  a=0.6  a=0.7  a=0.8  a=0.9  a=1.0
b=0.1    9.30   9.48   10.12  8.42   10.24  9.64   12.12  11.5   12.36  10.72
b=0.2    8.88   11.16  7.12   8.52   7.70   9.10   9.16   8.72   9.28   10.24
b=0.3    7.98   9.50   9.16   9.40   8.30   9.04   10.10  9.16   8.20   8.50
b=0.4    8.66   9.32   8.70   8.52   9.50   8.16   8.84   8.10   8.20   9.08
b=0.5    8.92   9.50   8.68   7.50   7.64   8.60   8.36   9.30   8.42   8.88
b=0.6    9.70   8.42   10.42  9.04   8.10   8.58   8.20   8.40   7.90   8.00
b=0.7    8.90   8.28   7.12   7.74   7.16   8.64   8.50   7.84   8.40   8.48
b=0.8    10.00  10.28  8.04   8.44   8.78   9.00   8.40   8.60   8.56   8.88
b=0.9    9.16   9.40   8.96   8.36   9.44   9.36   8.10   8.64   7.80   9.00
b=1.0    8.76   8.30   8.70   8.36   8.90   8.44   9.16   8.84   9.16   8.60
Table 5
Mean number of iterations the generalized Rastrigrin function requires to converge to the global optimum

         a=0.1   a=0.2   a=0.3   a=0.4   a=0.5   a=0.6   a=0.7   a=0.8   a=0.9   a=1.0
b=0.1    56.56   85.68   89.12   80.50   139.08  77.36   111.84  111.24  149.08  162.64
b=0.2    54.30   51.50   52.08   85.00   72.44   60.80   65.18   82.60   99.14   119.80
b=0.3    47.30   65.68   44.30   63.36   49.72   76.30   81.52   57.52   74.72   113.94
b=0.4    28.50   45.10   43.32   47.54   37.00   42.30   41.68   68.04   56.12   76.12
b=0.5    30.88   25.20   25.60   42.72   59.74   49.88   72.60   59.68   53.08   50.56
b=0.6    25.64   33.30   29.90   46.44   52.64   30.76   42.70   29.50   57.48   49.96
b=0.7    31.20   27.80   25.72   23.92   26.56   34.00   44.12   44.72   37.14   58.34
b=0.8    21.30   23.64   36.40   24.48   24.56   29.98   36.76   31.78   39.90   41.32
b=0.9    22.60   24.68   26.66   22.12   25.44   30.76   30.42   29.04   27.28   34.10
b=1.0    24.64   23.00   24.68   26.92   31.34   29.24   27.08   28.10   29.98   42.48
6. Conclusion

The improved PSO algorithm proposed in this paper is easy to implement and adds no significant computational complexity. In experiments the algorithm gives quite promising results: the ability to jump out of local optima is considerably improved, and the convergence precision and speed in the later stage are remarkably enhanced, so that high precision and efficiency are achieved. Compared with the standard PSO and other improved PSO algorithms, DAPSO demonstrates superiority in computational complexity, success rate and solution quality.

Acknowledgements

This work is partially supported by the Natural Science Foundation of China (NSFC) under Grant 60372035. The authors would like to thank the anonymous reviewers and the editor for constructive comments and suggestions.

References

[1] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proc. of IEEE Int. Conf. on Neural Networks, Perth, Australia, 1995, pp. 1942-1948.
[2] J. Kennedy, R.C. Eberhart, A new optimizer using particle swarm theory, in: Proc. of the Sixth Int. Symp. on Micro Machine and Human Science (MHS'95), Nagoya, Japan, 1995, pp. 39-43.
[3] Y.H. Shi, R.C. Eberhart, Fuzzy adaptive particle swarm optimization, in: Proc. of the IEEE Congress on Evolutionary Computation, vol. 1, Seoul, Korea, 2001, pp. 101-106.
[4] Y.H. Shi, R.C. Eberhart, A modified particle swarm optimizer, in: Proc. of the IEEE Congress on Evolutionary Computation, IEEE Service Center, USA, 1998, pp. 69-73.
[5] Y.L. Zhang, L.H. Ma, L.Y. Zhang, J.X. Qian, On the convergence analysis and parameter selection in particle swarm optimization, in: Proc. Int. Conf. on Machine Learning and Cybernetics, Hangzhou, China, 2003, pp. 1802-1807.
[6] R.C. Eberhart, Y.H. Shi, Tracking and optimizing dynamic systems with particle swarms, in: Proc. of the IEEE Congress on Evolutionary Computation, San Francisco, USA, 2001, pp. 94-100.
[7] L.P. Zhang, H.J. Yu, D.Z. Chen, S.X. Hu, Analysis and improvement of particle swarm optimization algorithm, Inform. Control 33 (2004) 513-517.
[8] X.P. Zhang, Y.P. Du, G.Q. Qin, Adaptive particle swarm algorithm with dynamically changing inertia weight, J. Xi'an Jiaotong Univ. 39 (2005) 1039-1042.
[9] Y.H. Shi, R.C. Eberhart, in: Proc. of Int. Conf. on Evolutionary Computation, Washington, USA, 1999, pp. 1945-1950.
[10] F. van den Bergh, An Analysis of Particle Swarm Optimizers, Ph.D. thesis, Department of Computer Science, University of Pretoria, Pretoria, South Africa, 2002.
[11] A.G. Li, Particle swarms cooperative optimizer, J. Fudan Univ. (Natural Science) 43 (2004) 923-925.
[12] G.M. Chen, J.Y. Jia, Q. Han, Study on the strategy of decreasing inertia weight in particle swarm optimization algorithm, J. Xi'an Jiaotong Univ. 40 (2006) 1039-1042.
[13] Z.S. Lu, Z.R. Hou, Particle swarm optimization with adaptive mutation, Acta Electron. Sinica 32 (2004) 416-420.
[14] F. Pan, X.Y. Tu, J. Chen, J.W. Fu, Harmonious particle swarm optimizer - HPSO, Comput. Eng. 31 (2005) 169-171.
[15] Yu Liu, Zheng Qin, Zhewen Shi, Jiang Lu, Center particle swarm optimization, Neurocomputing, doi:10.1016/j.neucom.2006.10.002.