A Novel Hybrid Particle Swarm Optimization for Multi-Objective Problems Siwei Jiang and Zhihua Cai School of Computer Science, China University of Geosciences, Wuhan 430074, China [email protected],[email protected]

Abstract. To solve multi-objective problems, a novel hybrid particle swarm optimization algorithm (called HPSODE) is proposed. The new algorithm includes three major improvements: (I) the population is initialized by the statistical method Uniform Design; (II) the regeneration method has two phases: in the first, the particles are updated by an adaptive PSO model with constriction factor χ, and in the second, a Differential Evolution operator is applied to the archive; (III) a new accept rule, called the Distance/Volume fitness, is designed to update the archive. Experiments on the ZDTx and DTLZx problems using jMetal 2.1 show that the new hybrid algorithm significantly outperforms OMOPSO and SMPSO in terms of additive Epsilon, Hypervolume, Generational Distance, and Inverted Generational Distance.

Keywords: Multi-Objective Optimization, Uniform Design, Particle Swarm Optimization, Differential Evolution, Minimum Reduce Hypervolume, Spread.

1 Introduction

Multi-objective Evolutionary Algorithms (MOEAs) are powerful tools for solving multi-objective optimization problems (MOPs) and have gained popularity in recent years [1]. Several elitist algorithms have been proposed, such as NSGA-II [2], SPEA2 [3], GDE3 [4], and MOPSO [5, 6, 7]. NSGA-II adopts a fast non-dominated sorting approach to reduce the computational burden, using Ranking and Crowding Distance to choose the candidate solutions [2]. SPEA2 proposes a fitness assignment with a clustering technique and designs a truncation operator based on the nearest Neighbor Density Estimation metric [3]. GDE3 is a developed version of Differential Evolution, suited for global optimization with an arbitrary number of objectives and constraints [4]. MOPSO, proposed by Coello et al., adopts swarm intelligence to optimize MOPs and uses the Pareto-optimal set to guide the particles' flight [5]. Sierra adopts Crowding Distance to filter the leaders, with different mutation methods applied to subdivisions of the particles [6]; Mostaghim introduces a new Sigma-method to find local best information to guide each particle [7].

This project was supported by the Research Foundation for Outstanding Young Teachers, China University of Geosciences (Wuhan) (No. CUGQNL0911).

H. Deng et al. (Eds.): AICI 2009, LNAI 5855, pp. 28–37, 2009. © Springer-Verlag Berlin Heidelberg 2009


Population initialization is an important part of EAs, but it has long been ignored. Orthogonal Design and Uniform Design belong to a sophisticated branch of statistics [9, 10]. Zeng and Cai adopt orthogonal design to solve MOPs; results show that it is more powerful than Random Design [11, 12]. Leung utilizes uniform design to initialize the population and designs a new crossover operator, which can find Pareto-optimal solutions scattered uniformly [13]. Zhang proposes a hybrid PSO with a differential evolution operator for single-objective problems, whose bell-shaped mutation preserves the diversity of the evolving population [8].

Interested in hybridizing PSO with the differential evolution operator, we propose a hybrid particle swarm optimization called HPSODE: the first population is constructed by uniform design; the offspring regeneration method has two phases, where in the first the particles are updated according to their own experience (pbest) and social information (gbest) with constriction factor χ, and in the second the archive is operated on by Differential Evolution; and a new accept rule, the Distance/Volume fitness, is designed to update the archive.

The rest of the paper is organized as follows. Section 2 briefly introduces the uniform design. Section 3 describes the two phases of the regeneration method. Section 4 designs the new archive update rule. Section 5 presents experiments on bi-objective and tri-objective problems, showing that the new algorithm is more powerful than OMOPSO and SMPSO in terms of additive Epsilon, Hypervolume, Generational Distance, and Inverted Generational Distance. Section 6 concludes and discusses future research on MOEAs.

2 Population Initialization: Uniform Design

Population initialization has long been ignored in MOEAs, although it is a very important component. Orthogonal Design and Uniform Design are experimental design methods belonging to a sophisticated branch of statistics. Both obtain a better-distributed population in the feasible search space than Random Design, and this statistical population provides more information for generating the next offspring. In this section, we briefly describe the experimental design method called uniform design. We define the uniform array as U_R(C), where Q is the number of levels (a prime), and R and C are the row and column counts of the uniform array, which must satisfy

    R = Q > n,  C = n    (1)

where n is the number of variables. After selecting proper parameters Q and σ from Table 1, the uniform array is created by Equation (2):

    U_{i,j} = (i · σ^{j-1} mod Q) + 1    (2)

For one decision variable X_j with boundary [l_j, u_j], the quantization technique divides the domain into Q levels α_{j1}, α_{j2}, ..., α_{jQ}, where the design parameter Q is prime and α_{jk} is given by

    α_{jk} = l_j + (k − 1) · (u_j − l_j) / (Q − 1),   1 ≤ k ≤ Q    (3)

In other words, the domain [l_j, u_j] is quantized into Q − 1 equal fractions, so any two successive levels are the same distance apart.

Table 1. Parameter σ for different numbers of factors and levels

Q (levels per factor)  number of factors      σ
5                      2–4                    2
7                      2–6                    3
11                     2–10                   7
13                     2                      5
13                     3                      4
13                     4–12                   6
17                     2–16                   10
19                     2–3                    8
19                     4–18                   14
23                     2, 13–14, 20–22        7
23                     8–12                   15
23                     3–7, 15–19             17
29                     2                      12
29                     3                      9
29                     4–7                    16
29                     8–12, 16–24            8
29                     13–15                  14
29                     25–28                  18
31                     2, 5–12, 20–30         12
31                     3–4, 13–19             22
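As an illustration, the construction behind Equations (1)–(3) can be sketched as follows. This is our own minimal Python sketch, not code from the paper; the function names (`uniform_array`, `quantize`, `uniform_init`) are hypothetical.

```python
def uniform_array(Q, sigma, n):
    # Equation (2): U[i][j] = (i * sigma^(j-1) mod Q) + 1, with R = Q rows
    # and C = n columns. Row index i runs 1..Q; column index j is 0-based,
    # so sigma**j plays the role of sigma^(j-1).
    return [[(i * sigma ** j) % Q + 1 for j in range(n)] for i in range(1, Q + 1)]

def quantize(level, lower, upper, Q):
    # Equation (3): alpha_k = l + (k - 1) * (u - l) / (Q - 1), for 1 <= k <= Q.
    return lower + (level - 1) * (upper - lower) / (Q - 1)

def uniform_init(bounds, Q, sigma):
    # Map each row of the uniform array onto one initial individual.
    U = uniform_array(Q, sigma, len(bounds))
    return [[quantize(row[j], *bounds[j], Q) for j in range(len(bounds))]
            for row in U]
```

For example, with Q = 5 and σ = 2 (the Table 1 entry for 2–4 factors), `uniform_array(5, 2, 2)` yields five rows in which every level 1..5 appears exactly once in each column, giving the evenly scattered initial population the uniform design is intended to produce.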

3 Offspring Regeneration Method

In this paper, we propose a hybrid Particle Swarm Optimization algorithm called HPSODE, which maintains three populations: the first is the original particles; the second holds the local best particles, storing only one evolutionary step for each original particle; the third is the global best particles, used as the archive. The regeneration method has two phases: the original particles are updated by an adaptive PSO model, and the archive is updated by a differential evolution operator.

In the first phase, a particle's new position is updated from its own experience (pbest) and social information (gbest) with constriction factor χ:

    V_id(t+1) = χ[ω·V_id(t) + C1·φ1·(pb_id(t) − X_id(t)) + C2·φ2·(gb_id(t) − X_id(t))]
    X_id(t+1) = X_id(t) + V_id(t+1)    (4)

where χ = 2 / |2 − φ − sqrt(φ² − 4φ)| with φ = C1 + C2, and χ = 1.0 when φ ≤ 4. We set ω = 0.1, C1, C2 ∈ rand[1.5, 2.5], and φ1, φ2 ∈ rand[0.0, 1.0].

The global best particle is chosen from the archive by binary tournament, favoring the larger crowding distance; this forces the original particles to explore the sparse


space. If the new particle position is non-dominated with respect to the original particle, it replaces both the original particle and the local best particle, X_i(t) = X_i(t+1), pb_i(t) = X_i(t+1), and the new particle is added to the archive; otherwise the new particle is discarded and the local best is left unchanged.

In the second phase, the third population (the archive) is updated by the differential evolution operator:

    X_id(t+1) = X_{r1,d}(t) + F·(X_{r2,d}(t) − X_{r3,d}(t))   if rnd_d(0,1) < CR or d == d_rnd
    X_id(t+1) = X_{r1,d}(t)                                    otherwise    (5)

where d is the dimension index of a solution, and the three random solutions are chosen from the archive with r1 ≠ r2 ≠ r3 ≠ i. We set F = 0.5 and CR = 0.1. If the new solution is non-dominated with respect to the archive's solutions, it is added to the archive.

When the archive is small, guide information is scarce: the original particles assemble around a few best positions, and the final Pareto-optimal set loses diversity. The differential evolution operation is simple to implement and has a powerful capacity to scatter across the feasible space. Applying DE to the archive enhances the diversity of the archive and provides good guide information to the original particles.
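The two regeneration phases above can be sketched as follows. This is our own illustrative Python, not the authors' implementation; the function names are hypothetical, and the parameter settings (ω = 0.1, C1, C2 ∈ [1.5, 2.5], F = 0.5, CR = 0.1) are those stated in the text.

```python
import math
import random

def constriction(c1, c2):
    # chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)| with phi = c1 + c2,
    # and chi = 1.0 when phi <= 4 (Equation 4).
    phi = c1 + c2
    if phi <= 4:
        return 1.0
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def pso_update(x, v, pbest, gbest, omega=0.1):
    # Phase 1: adaptive PSO step (Equation 4). C1, C2 and phi1, phi2 are
    # resampled on each call, as the paper draws them from uniform ranges.
    c1, c2 = random.uniform(1.5, 2.5), random.uniform(1.5, 2.5)
    chi = constriction(c1, c2)
    new_v = [chi * (omega * v[d]
                    + c1 * random.random() * (pbest[d] - x[d])
                    + c2 * random.random() * (gbest[d] - x[d]))
             for d in range(len(x))]
    return [x[d] + new_v[d] for d in range(len(x))], new_v

def de_trial(archive, i, F=0.5, CR=0.1):
    # Phase 2: DE operator on the archive (Equation 5), with r1 != r2 != r3 != i.
    # Note the base vector is X_r1, exactly as written in Equation (5).
    r1, r2, r3 = random.sample([k for k in range(len(archive)) if k != i], 3)
    n = len(archive[i])
    d_rnd = random.randrange(n)
    return [archive[r1][d] + F * (archive[r2][d] - archive[r3][d])
            if random.random() < CR or d == d_rnd
            else archive[r1][d]
            for d in range(n)]
```

Note that with C1, C2 ∈ [1.5, 2.5], φ = C1 + C2 can fall on either side of 4, so the constriction factor switches between the Clerc formula and the plain value 1.0 from one update to the next.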

4 Archive Update Rule

When MOEAs obtain a set of equally good solutions that fills the configured archive (usually archiveSize = 100), an accept rule must be designed to decide which solution should be cut from the archive. This is a critical issue in MOEAs, directly influencing the convergence and spread of the final optimal set. Popular accept rules include the Ranking and Crowding Distance metric of NSGA-II and the nearest Neighbor Density Estimation metric of SPEA2. In this paper, we design a new accept rule called Minimum Reduce Hypervolume (MRV).

Hypervolume is a quality indicator proposed by Zitzler et al. and adopted in jMetal 2.1 [14]. It calculates the volume covered by the members of a non-dominated set of solutions (in Figure 1, the region with respect to the worst point W enclosed by the dashed line ADBECW). If two solutions D and E are non-dominated with each other, NSGA-II keeps solution D in the archive when CD(D) > CD(E), which maintains the spread along the Pareto front:

    CD(D) = AD' + D'B
    CD(E) = BE' + E'C    (6)

If a solution is deleted, the hypervolume decreases; since a higher hypervolume means a better-quality optimal set, we delete the solution whose removal reduces the hypervolume the least. MRV keeps solution E in the archive when hv(D) < hv(E) (the resulting hypervolume is then ABECW rather



Fig. 1. Comparison of Crowding Distance (left figure) and MRV (right figure): CD(D) = AD' + D'B and CD(E) = BE' + E'C; hv(D) = DD1 · DD2 and hv(E) = EE1 · EE2

than ABDCW), which maintains the hypervolume along the Pareto front:

    hv(D) = DD1 · DD2
    hv(E) = EE1 · EE2    (7)

Crowding Distance maintains the spread and expands the solutions across the feasible search space, while Minimum Reduce Hypervolume maintains the hypervolume and pushes the solutions toward the Pareto front. Combining the two properties, a new fitness assignment for a solution s is designed as follows (called the Distance/Volume fitness):

    DV(s) = CD(s) + scale · hv(s),    scale = ( Σ_{i=1}^{n} CD(s_i) ) / ( Σ_{i=1}^{n} hv(s_i) )    (8)

The factor scale is designed to equalize the influence of Crowding Distance and MRV; to keep it from growing too large, it is capped, with scale = 1000 whenever scale > 1000.
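A minimal sketch of the Distance/Volume idea, assuming a bi-objective minimization front sorted by the first objective; the rectangle-based hypervolume contribution is our reading of Figure 1, and all names are hypothetical:

```python
def hv_rect(points):
    # Exclusive rectangle contribution hv(s) (our reading of Figure 1):
    # for an interior point, the side lengths to its two sorted neighbours.
    # Extreme points get an infinite contribution so they are always kept.
    pts = sorted(points)
    contrib = {}
    for i, p in enumerate(pts):
        if i == 0 or i == len(pts) - 1:
            contrib[p] = float("inf")
        else:
            contrib[p] = (pts[i + 1][0] - p[0]) * (pts[i - 1][1] - p[1])
    return contrib

def dv_fitness(cd, hv):
    # Equation (8): DV(s) = CD(s) + scale * hv(s), where scale equalises the
    # magnitudes of the two terms and is capped at 1000.
    scale = min(sum(cd) / sum(hv), 1000.0)
    return [c + scale * h for c, h in zip(cd, hv)]
```

In practice `dv_fitness` would be applied to the interior points only, since the extreme points receive an infinite contribution and should never be cut from the archive.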

5 Experiment Results

The experiments are based on jMetal 2.1 [14], a Java-based framework aimed at facilitating the development of metaheuristics for solving MOPs; it provides large blocks of reusable code and a fair comparison between different MOEAs. This paper selects OMOPSO [6] and SMPSO [14] as comparison algorithms. Each algorithm is run independently 100 times, and the maximum number of evaluations is 25,000. The test problems are chosen from the ZDTx and DTLZx problem families. The performance metrics fall into five categories:

Unary additive epsilon indicator (I¹_ε+). This indicator for an approximation set A, I¹_ε+(A), gives the minimum factor ε by which each point in the


real front can be translated such that the resulting transformed approximation set is weakly dominated by A:

    I¹_ε+(A) = inf_{ε ∈ R} { ∀z² ∈ R, ∃z¹ ∈ A : z¹_i ≤ z²_i + ε, ∀i }    (9)

Hypervolume (HV). This quality indicator calculates the volume (in objective space) covered by the members of a non-dominated set of solutions Q with respect to a reference point:

    HV = volume( ∪_{i=1}^{|Q|} v_i )    (10)

Inverted Generational Distance (IGD). This metric measures how far the elements of the Pareto-optimal set are from the set of non-dominated vectors found:

    IGD = sqrt( Σ_{i=1}^{N} d_i² ) / N    (11)

Generational Distance (GD). This metric measures how far the elements of the set of non-dominated vectors found are from the Pareto-optimal set:

    GD = sqrt( Σ_{i=1}^{n} d_i² ) / n    (12)

Spread (Δ). The Spread indicator is a diversity metric that measures the extent of spread achieved among the obtained solutions:

    Δ = ( d_f + d_l + Σ_{i=1}^{n−1} |d_i − d̄| ) / ( d_f + d_l + (n − 1)·d̄ )    (13)

A higher Hypervolume and lower I¹_ε+, GD, IGD, and Spread indicate a better algorithm. The results are compared by median and interquartile range (IQR), which measure location (central tendency) and statistical dispersion, respectively; the best result is shown with a grey background.

From Table 2 (median and IQR of the additive epsilon metric), HPSODE gets the best results on all 12 MOPs. From Table 3 (Hypervolume), HPSODE again gets the best results on all 12 MOPs. From Table 4 (Generational Distance), HPSODE is best on 11 MOPs and worse only on DTLZ6, the one MOP where OMOPSO is best. From Table 5 (Inverted Generational Distance), HPSODE is best on 10 MOPs and worse only on ZDT2 and DTLZ6; OMOPSO is best on ZDT2 and SMPSO on DTLZ6.
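The GD and IGD formulas, Equations (11) and (12), differ only in the direction of measurement; a minimal sketch (ours; `math.dist` requires Python 3.8+):

```python
import math

def _nearest(p, front):
    # Euclidean distance from point p to the closest point of front.
    return min(math.dist(p, q) for q in front)

def gd(found, pareto):
    # Equation (12): sqrt of the sum of squared nearest distances, divided
    # by n, measured from the found set to the Pareto-optimal set.
    return math.sqrt(sum(_nearest(p, pareto) ** 2 for p in found)) / len(found)

def igd(found, pareto):
    # Equation (11): the same form, measured from the Pareto-optimal set
    # to the found set, so it also penalises gaps in coverage.
    return math.sqrt(sum(_nearest(p, found) ** 2 for p in pareto)) / len(pareto)
```

A set that sits on the front but covers only part of it can have GD near zero yet a large IGD, which is why both metrics are reported.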

Table 2. Unary additive epsilon indicator (I¹_ε+): median (IQR)

Problem  OMOPSO               SMPSO                HPSODE
ZDT1     6.15e-03 (3.7e-04)   5.63e-03 (3.4e-04)   5.33e-03 (2.0e-04)
ZDT2     5.93e-03 (4.7e-04)   5.49e-03 (3.0e-04)   5.27e-03 (2.2e-04)
ZDT3     6.87e-03 (2.1e-03)   5.85e-03 (1.0e-03)   4.56e-03 (6.4e-04)
ZDT4     5.39e+00 (4.7e+00)   6.43e-03 (5.7e-04)   5.40e-03 (2.2e-04)
ZDT6     4.92e-03 (5.9e-04)   4.72e-03 (3.9e-04)   4.45e-03 (3.1e-04)
DTLZ1    1.09e+01 (3.2e+00)   5.65e-02 (7.8e-03)   4.68e-02 (6.5e-03)
DTLZ2    1.29e-01 (2.0e-02)   1.40e-01 (1.9e-02)   1.22e-01 (2.1e-02)
DTLZ3    1.03e+02 (3.6e+01)   1.61e-01 (1.3e-01)   1.19e-01 (1.9e-02)
DTLZ4    1.12e-01 (2.3e-02)   1.20e-01 (3.8e-02)   1.03e-01 (2.2e-02)
DTLZ5    5.14e-03 (6.6e-04)   4.87e-03 (5.5e-04)   4.36e-03 (2.8e-04)
DTLZ6    4.35e-03 (5.2e-04)   4.35e-03 (3.9e-04)   4.26e-03 (3.9e-04)
DTLZ7    1.95e-01 (7.6e-02)   1.71e-01 (6.9e-02)   1.25e-01 (3.6e-02)

Table 3. Hypervolume: median (IQR)

Problem  OMOPSO               SMPSO                HPSODE
ZDT1     6.611e-01 (4.4e-04)  6.618e-01 (1.2e-04)  6.621e-01 (1.7e-05)
ZDT2     3.280e-01 (4.0e-04)  3.285e-01 (1.1e-04)  3.288e-01 (1.7e-05)
ZDT3     5.140e-01 (1.1e-03)  5.154e-01 (4.2e-04)  5.160e-01 (1.2e-05)
ZDT4     0.000e+00 (0.0e+00)  6.613e-01 (2.3e-04)  6.621e-01 (2.3e-05)
ZDT6     4.013e-01 (1.1e-04)  4.012e-01 (1.1e-04)  4.015e-01 (1.5e-05)
DTLZ1    0.000e+00 (0.0e+00)  7.372e-01 (9.0e-03)  7.688e-01 (7.5e-03)
DTLZ2    3.537e-01 (7.1e-03)  3.463e-01 (1.0e-02)  3.821e-01 (5.1e-03)
DTLZ3    0.000e+00 (0.0e+00)  3.406e-01 (4.3e-02)  3.864e-01 (5.8e-03)
DTLZ4    3.591e-01 (9.7e-03)  3.577e-01 (1.6e-02)  3.839e-01 (6.3e-03)
DTLZ5    9.344e-02 (1.6e-04)  9.366e-02 (1.6e-04)  9.407e-02 (4.4e-05)
DTLZ6    9.493e-02 (5.5e-05)  9.488e-02 (6.4e-05)  9.496e-02 (4.1e-05)
DTLZ7    2.610e-01 (9.9e-03)  2.746e-01 (7.4e-03)  2.916e-01 (3.7e-03)

From Table 6 (median and IQR of the Spread metric), HPSODE gets the best results on 6 MOPs, OMOPSO on 2, and SMPSO on 4.

The comparison of the three algorithms across five categories (epsilon indicator, Hypervolume, Generational Distance, Inverted Generational Distance, Spread) shows that the new algorithm solves MOPs more efficiently. We summarize the highlights as follows:

1. HPSODE adopts the statistical method Uniform Design to construct the first population, which yields well-distributed solutions in the feasible space.
2. HPSODE combines PSO and the Differential Evolution operation to generate the next population. The DE operation enhances the diversity of the global guide population.

Table 4. Generational Distance: median (IQR)

Problem  OMOPSO               SMPSO                HPSODE
ZDT1     1.43e-04 (3.8e-05)   1.24e-04 (4.9e-05)   7.41e-05 (3.5e-05)
ZDT2     7.68e-05 (2.6e-05)   5.27e-05 (5.4e-06)   4.66e-05 (2.8e-06)
ZDT3     2.25e-04 (3.3e-05)   1.88e-04 (2.3e-05)   1.76e-04 (1.4e-05)
ZDT4     6.57e-01 (5.8e-01)   1.35e-04 (4.6e-05)   7.67e-05 (4.2e-05)
ZDT6     2.19e-02 (4.4e-02)   2.18e-03 (4.3e-02)   5.37e-04 (2.0e-05)
DTLZ1    1.00e+01 (1.3e+00)   5.06e-03 (1.4e-03)   7.00e-04 (1.9e-04)
DTLZ2    3.32e-03 (4.1e-04)   3.94e-03 (9.2e-04)   1.25e-03 (4.6e-04)
DTLZ3    3.51e+01 (6.0e+00)   5.29e-03 (5.7e-02)   1.17e-03 (1.8e-04)
DTLZ4    5.83e-03 (5.0e-04)   5.98e-03 (6.4e-04)   4.95e-03 (3.9e-04)
DTLZ5    3.00e-04 (4.3e-05)   2.85e-04 (5.6e-05)   2.50e-04 (3.3e-05)
DTLZ6    5.62e-04 (3.0e-05)   5.69e-04 (2.6e-05)   5.67e-04 (2.6e-05)
DTLZ7    5.26e-03 (1.2e-03)   4.84e-03 (1.3e-03)   3.57e-03 (1.1e-03)

Table 5. Inverted Generational Distance: median (IQR)

Problem  OMOPSO               SMPSO                HPSODE
ZDT1     1.37e-04 (2.3e-06)   1.35e-04 (1.0e-06)   1.34e-04 (1.2e-06)
ZDT2     1.43e-04 (3.2e-06)   1.40e-04 (1.9e-06)   1.44e-04 (2.9e-06)
ZDT3     2.14e-04 (1.4e-05)   2.00e-04 (7.2e-06)   1.90e-04 (2.7e-06)
ZDT4     1.61e-01 (1.5e-01)   1.38e-04 (1.4e-06)   1.34e-04 (1.5e-06)
ZDT6     1.20e-04 (7.9e-06)   1.18e-04 (7.9e-06)   1.10e-04 (7.0e-06)
DTLZ1    2.71e-01 (1.0e-01)   6.38e-04 (4.5e-05)   5.27e-04 (3.5e-05)
DTLZ2    7.58e-04 (4.6e-05)   8.03e-04 (5.1e-05)   7.04e-04 (4.1e-05)
DTLZ3    2.03e+00 (6.7e-01)   1.43e-03 (7.8e-04)   1.11e-03 (6.8e-05)
DTLZ4    1.23e-03 (1.6e-04)   1.17e-03 (2.1e-04)   1.17e-03 (1.5e-04)
DTLZ5    1.49e-05 (5.0e-07)   1.44e-05 (6.8e-07)   1.41e-05 (3.8e-07)
DTLZ6    3.38e-05 (1.1e-06)   3.43e-05 (1.3e-06)   3.40e-05 (1.4e-06)
DTLZ7    2.66e-03 (3.1e-04)   2.72e-03 (4.8e-04)   2.43e-03 (3.0e-04)

Table 6. Spread: median (IQR)

Problem  OMOPSO               SMPSO                HPSODE
ZDT1     8.29e-02 (1.5e-02)   7.90e-02 (1.3e-02)   9.49e-02 (1.6e-02)
ZDT2     7.99e-02 (1.5e-02)   7.21e-02 (1.8e-02)   9.48e-02 (1.4e-02)
ZDT3     7.13e-01 (1.2e-02)   7.11e-01 (9.8e-03)   7.05e-01 (3.8e-03)
ZDT4     8.89e-01 (8.8e-02)   9.84e-02 (1.6e-02)   1.01e-01 (2.1e-02)
ZDT6     1.07e+00 (9.9e-01)   3.17e-01 (1.2e+00)   6.76e-02 (1.2e-02)
DTLZ1    6.90e-01 (1.0e-01)   6.78e-01 (4.2e-02)   7.28e-01 (5.5e-02)
DTLZ2    6.22e-01 (4.9e-02)   6.38e-01 (5.1e-02)   6.32e-01 (5.7e-02)
DTLZ3    7.28e-01 (1.4e-01)   6.65e-01 (2.0e-01)   6.43e-01 (5.5e-02)
DTLZ4    6.36e-01 (6.9e-02)   6.59e-01 (1.3e-01)   6.41e-01 (5.3e-02)
DTLZ5    2.02e-01 (6.8e-02)   1.70e-01 (8.5e-02)   1.24e-01 (2.3e-02)
DTLZ6    1.24e-01 (4.0e-02)   1.39e-01 (3.9e-02)   1.19e-01 (2.4e-02)
DTLZ7    7.07e-01 (5.6e-02)   7.08e-01 (7.0e-02)   6.99e-01 (5.8e-02)


3. HPSODE designs a new accept rule, the Distance/Volume fitness, which expands the solutions across the feasible search space and pushes them toward the Pareto front.
4. HPSODE significantly outperforms OMOPSO and SMPSO in terms of additive Epsilon, Hypervolume, Generational Distance, and Inverted Generational Distance.
5. HPSODE obtains competitive results against OMOPSO and SMPSO on the Spread metric.

6 Conclusion and Future Research

Population initialization, the regeneration method, and the archive update rule are three critical issues in MOEAs. In this paper, we propose a hybrid PSO algorithm called HPSODE, which includes uniform design initialization, two-phase regeneration with a DE operator, and the new accept rule Distance/Volume fitness. Experimental results show that HPSODE achieves a higher Hypervolume, lower additive epsilon, GD, and IGD, and a competitive Spread. Our future research will focus on enhancing the Quantum PSO algorithm to solve MOPs.

References

1. Coello, C.A.C.: Evolutionary multi-objective optimization: A historical view of the field. IEEE Computational Intelligence Magazine 1(1), 28–36 (2006)
2. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6(2), 182–197 (2002)
3. Zitzler, E., Laumanns, M., Thiele, L.: SPEA2: Improving the strength Pareto evolutionary algorithm. Technical Report 103, Computer Engineering and Networks Laboratory (2001)
4. Kukkonen, S., Lampinen, J.: GDE3: The third evolution step of generalized differential evolution. In: Proceedings of the 2005 IEEE Congress on Evolutionary Computation (1), pp. 443–450 (2005)
5. Coello Coello, C.A., Toscano Pulido, G., Salazar Lechuga, M.: Handling multiple objectives with particle swarm optimization. IEEE Transactions on Evolutionary Computation 8, 256–279 (2004)
6. Sierra, M.R., Coello, C.A.C.: Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance. In: Coello Coello, C.A., Hernández Aguirre, A., Zitzler, E. (eds.) EMO 2005. LNCS, vol. 3410, pp. 505–519. Springer, Heidelberg (2005)
7. Mostaghim, S., Teich, J.: Strategies for finding good local guides in multi-objective particle swarm optimization (MOPSO). In: 2003 IEEE Swarm Intelligence Symposium Proceedings, Indianapolis, Indiana, USA, pp. 26–33. IEEE Service Center (2003)
8. Zhang, W.J., Xie, X.F.: DEPSO: Hybrid particle swarm with differential evolution operator. In: IEEE International Conference on Systems, Man and Cybernetics, pp. 3816–3821 (2003)
9. Fang, K.T., Ma, C.X.: Orthogonal and Uniform Design. Science Press (2001) (in Chinese)


10. Leung, Y.W., Wang, Y.: An orthogonal genetic algorithm with quantization for global numerical optimization. IEEE Transactions on Evolutionary Computation 5(1), 41–53 (2001)
11. Zeng, S.Y., Kang, L.S., Ding, L.X.: An orthogonal multiobjective evolutionary algorithm for multi-objective optimization problems with constraints. Evolutionary Computation 12, 77–98 (2004)
12. Cai, Z.H., Gong, W.Y., Huang, Y.Q.: A novel differential evolution algorithm based on ε-domination and orthogonal design method for multiobjective optimization. In: Obayashi, S., Deb, K., Poloni, C., Hiroyasu, T., Murata, T. (eds.) EMO 2007. LNCS, vol. 4403, pp. 286–301. Springer, Heidelberg (2007)
13. Leung, Y.-W., Wang, Y.: Multiobjective programming using uniform design and genetic algorithm. IEEE Transactions on Systems, Man, and Cybernetics, Part C 30(3), 293 (2000)
14. Durillo, J.J., Nebro, A.J., Luna, F., Dorronsoro, B., Alba, E.: jMetal: A Java framework for developing multi-objective optimization metaheuristics. Departamento de Lenguajes y Ciencias de la Computación, University of Málaga, E.T.S.I. Informática, Campus de Teatinos, ITI-2006-10 (December 2006), http://jmetal.sourceforge.net