An Improved Discrete Particle Swarm Optimization Algorithm for TSP

2007 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology - Workshops

Changsheng Zhang, Jigui Sun, Yan Wang
College of Computer Science and Technology, Jilin University
[email protected]

Qingyun Yang
Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences
[email protected]

Abstract

An improved discrete particle swarm optimization (DPSO) based algorithm for the traveling salesman problem (TSP) is proposed. To overcome the problem of premature convergence, a novel depressor is proposed, and a diversity measure used to control the swarm is also introduced, which serves to switch between the attractor and the depressor. The proposed algorithm has been applied to a set of benchmark problems and compared with existing swarm intelligence algorithms for the TSP. The results show that it prevents premature convergence to a high degree while still keeping the rapid convergence of the basic DPSO.

1. Introduction

The TSP is a well-known and extensively studied benchmark for many new developments in combinatorial optimization [1,2]. Its objective is to determine a minimum-distance tour visiting each city exactly once. Although easy to define, it is difficult to solve. It underlies many logistic and distribution problems, and many other permutation problems, such as those arising in job scheduling and wallpaper cutting, can be reduced to it. Many evolutionary algorithms have been proposed to deal with it, such as simulated annealing (SA), tabu search (TS), ant colony systems, GA and others [3-6]. Compared to these algorithms, PSO has a richer intelligent background and can easily be applied to a wider range of problems than the TSP. At present, PSO has attracted broad attention in evolutionary computing, optimization and many other fields [7]. However, like many other evolutionary algorithms, PSO tends to suffer from premature convergence. This is mainly due to a decrease of diversity in the search space that leads to a total implosion and, ultimately, fitness stagnation of the swarm. An accepted hypothesis is that maintaining high diversity is crucial for preventing premature convergence. Although the goals of maintaining high diversity and obtaining fast convergence partially contradict each other, it makes good sense to improve the optimization algorithm so that sub-optimal solutions are avoided more often [8]. With respect to PSO, only a few papers have addressed this subject, and even fewer have accomplished the goal of dealing with premature convergence [9-11]. Furthermore, the proposed methods all concentrate on continuous PSO algorithms; few target DPSO algorithms. In this paper, the operations used for computing particle velocities and updating particle positions are redefined, and a permutation-based DPSO called PDPSO is constructed for the TSP. A diversity measure is proposed that quantifies the diversity index of the current swarm. To overcome premature convergence, a novel repulsive procedure called the depressor is devised and introduced into the PDPSO algorithm, and the diversity index is used to switch between the attractive procedure, called the attractor, and the depressor. When the diversity index of the current swarm falls below a lower threshold, the depressor starts to run; once the diversity index rises above an upper threshold, the attractor phase begins again.


2. The improved DPSO for TSP

Clerc proposed a swap-based DPSO algorithm and used it to solve the TSP [12]. Based on this, a permutation DPSO algorithm was put forward by Rameshkumar et al. to solve the permutation flowshop scheduling problem [13]. Compared with the swap DPSO, it does not introduce new elements and only swaps the order of elements at the corresponding locations. Inspired by it, we introduce the permutation concept into our proposed DPSO algorithm for the TSP.

For an m-ordered TSP problem, the ith particle's position is denoted as X_i = {x_i1, x_i2, …, x_im}, which represents the traveling path x_i1 ⇒ x_i2 ⇒ … ⇒ x_im ⇒ x_i1. To express the PDPSO algorithm clearly, we use P_k,best to denote the personal best position of the kth particle, P_k,worst the personal worst position of the kth particle, P_k,current the current position of the kth particle, G_best the current global best position, and V_k^t the velocity of the kth particle during the tth iteration. Some problem-related definitions are given as follows.
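Before the formal definitions, the per-particle state implied by this notation can be summarized in a minimal sketch; the Particle class and its field names are illustrative choices only, not part of the algorithm's notation.

```python
from dataclasses import dataclass, field

@dataclass
class Particle:
    # A position is a permutation of city labels; a velocity is a set of
    # transposition pairs (see the rules later in this section).
    current: list                               # P_k,current
    best: list                                  # P_k,best
    worst: list                                 # P_k,worst
    velocity: set = field(default_factory=set)  # V_k^t
```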

Definition 1. For an m-ordered sequence X = {x_1, x_2, …, x_m}, the slide operator acting on X is:

$$\mathrm{SL}(X, k) = \{x_{(1+k)\%m}, x_{(2+k)\%m}, \ldots, x_{(m+k)\%m}\}$$

Definition 2. For an m-ordered sequence X = {x_1, x_2, …, x_m}, the reverse operator acting on X is:

$$\mathrm{RE}(X) = \{x_m, x_{m-1}, \ldots, x_1\}$$

Definition 3. Given two particle positions X_i = {x_i1, x_i2, …, x_im} and X_j = {x_j1, x_j2, …, x_jm}, define their subtraction as:

$$\mathrm{SUB}(X_i, X_j) = X_i - X_j = \bigcup_{k=1}^{m} P(x_{ik}, x_{jk}), \qquad P(x_{ik}, x_{jk}) = \begin{cases} \{(x_{ik}, x_{jk})\}, & \text{if } x_{ik} \neq x_{jk} \\ \Phi, & \text{if } x_{ik} = x_{jk} \end{cases}$$

Definition 4. Given two particle positions X_i = {x_i1, x_i2, …, x_im} and X_j = {x_j1, x_j2, …, x_jm}, define the operator ⊖ acting on them as follows:

$$X_i \ominus X_j = \mathrm{SUB}(X_i, \mathrm{SL}(\mathrm{RE}(X_j), l)) \;\text{ or }\; \mathrm{SUB}(X_i, \mathrm{RE}(\mathrm{SL}(X_j, l)))$$

where l takes a value between 0 and m−1 which can minimize |X_i ⊖ X_j|.
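For concreteness, Definitions 1-4 can be sketched in Python as follows; the function names (slide, rev, sub, minus_op) and the 0-based indexing are illustrative choices, not part of the notation above.

```python
def slide(x, k):
    """SL(X, k): cyclic shift of the sequence by k positions."""
    m = len(x)
    return [x[(i + k) % m] for i in range(m)]

def rev(x):
    """RE(X): the sequence in reverse order."""
    return list(reversed(x))

def sub(xi, xj):
    """SUB(Xi, Xj): the pairs (x_ik, x_jk) at the locations where the two tours differ."""
    return {(a, b) for a, b in zip(xi, xj) if a != b}

def minus_op(xi, xj):
    """Definition 4: try every slide amount l in 0..m-1 and both orders of
    slide/reverse, and keep the smallest resulting pair set."""
    m = len(xi)
    candidates = []
    for l in range(m):
        candidates.append(sub(xi, slide(rev(xj), l)))
        candidates.append(sub(xi, rev(slide(xj, l))))
    return min(candidates, key=len)
```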

Definition 5. Given two particle positions X_i = {x_i1, x_i2, …, x_im} and X_j = {x_j1, x_j2, …, x_jm}, define their addition as:

$$\mathrm{ADD}(X_i, X_j) = X_i + X_j = \bigcup_{k=1}^{m} E(x_{ik}, x_{jk}), \qquad E(x_{ik}, x_{jk}) = \begin{cases} \{x_{ik}\}, & \text{if } x_{ik} = x_{jk} \\ \Phi, & \text{if } x_{ik} \neq x_{jk} \end{cases}$$

Definition 6. For a given set S = {e_1, e_2, …, e_s}, the result of the operation G(S) is to randomly generate a set R as follows:

$$G(S) = R = \{r_1, r_2, \ldots\}$$

where r_i = (r_il, r_ik) with r_ik ∈ S, r_il ∈ S and r_ik ≠ r_il, and there does not exist r_j = (r_ik, r_il).

Definition 7. Given two particle positions X_i = {x_i1, x_i2, …, x_im} and X_j = {x_j1, x_j2, …, x_jm}, define the operator ⊕ acting on them as follows:

$$X_i \oplus X_j = G(\mathrm{ADD}(X_i, \mathrm{SL}(\mathrm{RE}(X_j), l))) \;\text{ or }\; G(\mathrm{ADD}(X_i, \mathrm{RE}(\mathrm{SL}(X_j, l))))$$

where l takes a value between 0 and m−1 which can maximize |R|.
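These operators can also be sketched in Python, reusing slide and rev from the previous sketch; reading G(S) as a random pairing-up of the elements of S is one plausible interpretation of Definition 6, and the names add, g and plus_op are illustrative.

```python
import random

def add(xi, xj):
    """ADD(Xi, Xj): the cities occupying the same location in both tours."""
    return {a for a, b in zip(xi, xj) if a == b}

def g(s):
    """G(S): randomly pair up the elements of S; a pair and its reverse never
    both occur (one reading of Definition 6)."""
    items = list(s)
    random.shuffle(items)
    return {(items[i], items[i + 1]) for i in range(0, len(items) - 1, 2)}

def plus_op(xi, xj):
    """Definition 7: pick the slide amount l in 0..m-1 (and the slide/reverse
    order) giving the largest matching set, then turn it into pairs with G."""
    m = len(xi)
    candidates = []
    for l in range(m):
        candidates.append(add(xi, slide(rev(xj), l)))
        candidates.append(add(xi, rev(slide(xj, l))))
    return g(max(candidates, key=len))
```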

Definition 8. For a given swarm S, its diversity coefficient div is defined as follows:

$$div = D(S) = \frac{2}{1 + e^{-\frac{1}{N|S|}\sum_{j=1}^{N}\left|\bigcup_{i=1}^{|S|}\{p_{ij}\}\right|}} - 1$$

where |S| is the swarm size, N is the dimensionality of the problem, and p_ij denotes the jth component of the ith particle's current position. Note that this diversity measure is independent of the swarm size and of the dimensionality of the problem.
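A direct transcription of this measure in Python (the function name diversity is illustrative):

```python
import math

def diversity(swarm):
    """Definition 8: per tour position, count the distinct cities appearing
    across the swarm, normalise by N*|S|, and squash the result into (0, 1)."""
    size = len(swarm)            # |S|, the swarm size
    n = len(swarm[0])            # N, the problem dimensionality
    distinct = sum(len({pos[j] for pos in swarm}) for j in range(n))
    x = distinct / (n * size)
    return 2.0 / (1.0 + math.exp(-x)) - 1.0
```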

Based on the above definitions, we propose the PDPSO algorithm. The velocity of each particle V_k is updated using the following attractor and depressor equations:

$$V_k^{t+1} = w \times V_k^{t} + \varphi_1 (P_{k,best} \ominus P_{k,current}) + \varphi_2 (G_{best} \ominus P_{k,current})$$

when div > c_h or Iter_max − Iter_curr < num (the attractor), and

$$V_k^{t+1} = w \times V_k^{t} + \varphi_1 (P_{k,worst} \oplus P_{k,current}) + \varphi_2 (G_{best} \oplus P_{k,current})$$

when div ≤ c_l and Iter_max − Iter_curr ≥ num (the depressor), where w is the inertia weight, φ_1 and φ_2 are learning coefficients, c_l and c_h are the lower and upper diversity bounds respectively, num is a constant related to the concrete problem, and Iter_max and Iter_curr are the maximum and the current iteration numbers. Each particle position is updated using the equation P_k^{t+1} = P_k^t + V_k^{t+1}, that is, new position = current position + particle velocity. Obviously, in the permutation-based DPSO algorithm for the TSP, the velocity of each particle is a set of transposition pairs.
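The switching between the two equations can be sketched as follows; the phase labels and the treatment of the region between c_l and c_h, where the current phase is simply kept as described in the introduction, reflect one reading of the conditions above.

```python
def choose_phase(prev_phase, div, cl, ch, iter_max, iter_curr, num):
    """Attractor while diversity is high (or the run is about to end), depressor
    once diversity has dropped to cl, otherwise keep the current phase."""
    if div > ch or iter_max - iter_curr < num:
        return "attractor"
    if div <= cl and iter_max - iter_curr >= num:
        return "depressor"
    return prev_phase
```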

The remaining operations used for computing particle velocities and updating positions are defined by the following rules (a code sketch follows the three rules).

Rule 1: For a given particle position X_i = {x_i1, x_i2, …, x_im} and a velocity V_i = {{k_i1, l_i1}, {k_i2, l_i2}, …, {k_iq, l_iq}} containing q transposition pairs, X_i + V_i generates a new particle position X_new by applying the first transposition of V_i to X_i, then the second to the result, and so on.

Rule 2: Let φ be the learning coefficient and V_i = {{k_1, l_1}, {k_2, l_2}, …, {k_q, l_q}} be the velocity; φ × V_i results in a new velocity and is used to find out the number of velocity components to be applied to the position. For example, if the coefficient value is 0.5, then 50 percent of the velocity components are randomly selected from the velocity list and applied to the position.

Rule 3: Let V_i = {{k_i1, l_i1}, {k_i2, l_i2}, …, {k_iq, l_iq}} and V_j = {{k_j1, l_j1}, {k_j2, l_j2}, …, {k_jq, l_jq}}; then V_i + V_j = V_i ∪ V_j.
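A sketch of the three rules in Python; reading each transposition pair as two city labels whose locations in the tour are swapped is an interpretation, and the function names are illustrative.

```python
import random

def apply_velocity(position, velocity):
    """Rule 1: apply the transpositions one after another; each pair (a, b) names
    two cities whose locations in the tour are swapped."""
    new_pos = list(position)
    for a, b in velocity:
        i, j = new_pos.index(a), new_pos.index(b)
        new_pos[i], new_pos[j] = new_pos[j], new_pos[i]
    return new_pos

def scale(phi, velocity):
    """Rule 2: keep a randomly chosen fraction phi of the transposition pairs."""
    pairs = list(velocity)
    keep = min(len(pairs), int(round(phi * len(pairs))))
    return set(random.sample(pairs, keep))

def add_velocities(vi, vj):
    """Rule 3: velocity addition is set union."""
    return set(vi) | set(vj)
```

With these helpers, the position update P_k^{t+1} = P_k^t + V_k^{t+1} amounts to apply_velocity(current_position, velocity).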

3. Numerical results

To verify the validity of the PDPSO algorithm, three instances of different scale, burma14 [14], Oliver30 [15] and eil51 [14], are selected for testing, and the algorithm is compared with the DPSO and ACO algorithms. The results are shown in Table 1. The Average column is the mean of twenty runs of each algorithm.

Table 1. Results of TSP problems

Problem    Algorithm   Best       Average    Opt
burma14    PDPSO       30.8785    30.8785    30.8785
           DPSO        30.8785    31.5551
           ACO         30.8785    31.4075
Oliver30   PDPSO       423.7410   424.8267   423.7406
           DPSO        453.4200   15.4413
           ACO         434.2214   447.9351
eil51      PDPSO       436.7730   440.7810   426.0000
           DPSO        476.9910   523.5421
           ACO         449.6437   454.3333

Because Oliver30 is the most widely used of these instances, we made many tests on it. The optimal path we obtained is shown in Fig. 1. The evolution of the best fitness values over the iterations for DPSO and PDPSO, averaged over twenty runs, is shown in Fig. 2.

Fig. 1. The optimal path of the Oliver30

Fig. 2. Mean relative performance

It can be seen from the above experiments that the PDPSO algorithm not only converges quickly but also has high robustness. It has a high probability of finding the optimal solution for burma14 and Oliver30.

4. Conclusions

The TSP is a typical combinatorial optimization problem. In this paper, a DPSO-based algorithm for it is proposed. A diversity measure used to gauge the diversity index of the current swarm is proposed and a novel depressor is defined. The algorithm's performance is evaluated on publicly available TSP instances, and the experiments show its validity. Although not as dominant as the Lin-Kernighan algorithm, which is well known for solving the TSP, the proposed algorithm is a breakthrough attempt at solving combinatorial optimization problems. In addition to the TSP, it may also be applied to common routing problems and other combinatorial optimization problems once the necessary adjustments are in place.

References

[1] R.E. Bellman, Dynamic programming treatment of the traveling salesman problem, Journal of the ACM 9 (1962) 61-63.
[2] L. Zambito, The traveling salesman problem: a comprehensive survey, Project for CSE 4080, 2006.
[3] T. Munakata, Y. Nakamura, Temperature control for simulated annealing, Physical Review E 64 (2001) 046127.
[4] L. Huang, C.G. Zhou, K.P. Wang, Hybrid ant colony algorithm for traveling salesman problem, Progress in Natural Science 4 (13) (2003) 295-299.
[5] A. Misevičius, J. Smolinskas, A. Tomkevičius, Iterated tabu search for the traveling salesman problem: new results, Information Technology and Control, Vol. 34, No. 4, 2005, 327-337.
[6] E. Carter, C.T. Ragsdale, A new approach to solving the multiple traveling salesperson problem using genetic algorithms, European Journal of Operational Research, Volume 175, Issue 1, 16 November 2006, 246-257.
[7] X.H. Shi, Y.C. Liang, H.P. Lee, Particle swarm optimization-based algorithms for TSP and generalized TSP, Information Processing Letters, Volume 103, Issue 5, 31 August 2007, 169-176.
[8] J. Riget, J.S. Vesterstrøm, A diversity-guided particle swarm optimizer - the ARPSO, Technical Report no. 2002-02, Department of Computer Science, University of Aarhus, 2002.
[9] M. Løvbjerg, T.K. Rasmussen, T. Krink, Hybrid particle swarm optimiser with breeding and subpopulations, in: Proceedings of the Third Genetic and Evolutionary Computation Conference (GECCO-2001), 2001.
[10] T. Krink, J. Vesterstrøm, J. Riget, Particle swarm optimization with spatial particle extension, in: Proceedings of the Congress on Evolutionary Computation 2002 (CEC-2002).
[11] C.K. Monson, K.D. Seppi, Adaptive diversity in PSO, in: Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, Seattle, Washington, USA, 2006.
[12] M. Clerc, Discrete particle swarm optimization, illustrated by the traveling salesman problem, in: New Optimization Techniques in Engineering, Heidelberg, Germany, 2004, 219-239.
[13] K. Rameshkumar, R.K. Suresh, K.M. Mohanasundaram, Discrete particle swarm optimization (DPSO) algorithm for permutation flowshop scheduling to minimize makespan, in: Proc. ICNC 2005, LNCS 3612, 2005, 572-581.
[14] G. Reinelt, TSPLIB, http://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/, 2007.
[15] A. Colorni, M. Dorigo, V. Maniezzo, An investigation of some properties of an ant algorithm, in: Proc. PPSN'92, London, 1992, 509-520.
