Applied Soft Computing 8 (2008) 849–857 www.elsevier.com/locate/asoc

A hybrid genetic algorithm and particle swarm optimization for multimodal functions

Yi-Tung Kao a, Erwie Zahara b,*

a Department of Computer Science and Engineering, Tatung University, Taipei City 104, Taiwan, ROC
b Department of Industrial Engineering and Management, St. John's University, Tamsui 251, Taiwan, ROC

Received 1 December 2006; received in revised form 25 June 2007; accepted 1 July 2007 Available online 7 July 2007

Abstract

Heuristic optimization provides a robust and efficient approach for solving complex real-world problems. This research focuses on a hybrid of two heuristic optimization techniques, genetic algorithms (GA) and particle swarm optimization (PSO), for the global optimization of multimodal functions. Denoted GA-PSO, this hybrid technique incorporates concepts from GA and PSO and creates individuals in a new generation not only by the crossover and mutation operations of GA but also by the mechanisms of PSO. Experimental studies on a suite of 17 multimodal test functions taken from the literature demonstrate the superiority of the hybrid GA-PSO approach over four other search techniques in terms of solution quality and convergence rate.
© 2007 Published by Elsevier B.V.

Keywords: Heuristic optimization; Multimodal functions; Genetic algorithms; Particle swarm optimization

1. Introduction

In the last two decades there has been a growing interest in evolutionary computing, which has inspired new ways of solving optimization problems. In contrast to traditional optimization methods, which emphasize accurate and exact computation but may fail to reach the global optimum, evolutionary computation provides a more robust and efficient approach for solving complex real-world problems [1,2]. Among existing evolutionary algorithms, the best-known branch is the genetic algorithm (GA). GA is a stochastic search procedure based on the mechanics of natural selection, genetics and evolution [3]. Since this type of algorithm simultaneously evaluates many points in the search space, it is more likely to find the global solution of a given problem. In addition, it uses only a simple scalar performance measure that requires no derivative information, so GA methods are easy to use and implement. More recently, based on the interaction of individual entities called "particles," Kennedy and Eberhart [4,5] proposed a new heuristic algorithm called "particle swarm optimization"

* Corresponding author. E-mail address: [email protected] (E. Zahara).
1568-4946/$ – see front matter © 2007 Published by Elsevier B.V.
doi:10.1016/j.asoc.2007.07.002

(denoted PSO). The development of this algorithm follows from observations of the social behavior of animals, such as bird flocking and fish schooling. PSO describes a solution process in which each particle flies through the multidimensional search space while its velocity and position are constantly updated according to the particle's own best previous performance, that of its neighbors, and the best performance of the entire population. Compared with GA, PSO has some attractive characteristics. It has memory: knowledge of good solutions is retained by all particles, whereas in GA previous knowledge of the problem is discarded once the population changes. It also features constructive cooperation between particles: particles in the swarm share information among themselves. To date, PSO has been successfully applied to optimizing various continuous nonlinear functions in practice [6]. Hybridization of evolutionary algorithms with local search has been investigated in many studies [7–9]; such a hybrid is often referred to as a memetic algorithm. In the case at hand, we combine two global optimization algorithms, GA and PSO, since both work with an initial population of solutions and combining their searching abilities seems a reasonable approach. Originally, PSO functions according to knowledge of social interaction, and all individuals are taken into account in each generation. On the


contrary, GA simulates evolution: some individuals are selected while others are eliminated from generation to generation. Taking advantage of the complementary properties of GA and PSO, we propose a new algorithm that combines the evolutionary natures of both (denoted GA-PSO). The robustness of GA-PSO is tested against a set of benchmark multimodal test functions collected by Siarry and Berthiau [10], and the results are compared extensively with those obtained by the continuous genetic algorithm, the continuous hybrid algorithm, the hybrid Nelder–Mead simplex search and particle swarm optimization, and the hybrid continuous tabu search and Nelder–Mead simplex algorithm.

2. Genetic algorithms and particle swarm optimization

2.1. Genetic algorithms (GA)

Genetic algorithms (GA) date back to the 1960s work of Holland and were further described by Goldberg [3]. GA is a randomized global search technique that solves problems by imitating processes observed in natural evolution. Based on survival and reproduction of the fittest, GA continually exploits new and better solutions without any pre-assumptions such as continuity or unimodality. GA has been successfully adopted in many complex optimization problems and shows its merits over traditional optimization methods, especially when the system under study has multiple local optima. GA evolves a population of candidate solutions. Each solution is usually coded as a binary string called a chromosome. The fitness of each chromosome is evaluated by a performance function after the chromosome has been decoded. Upon completion of the evaluation, a biased roulette wheel is used to randomly select pairs of better chromosomes to undergo genetic operations such as crossover and mutation that mimic nature. Should the newly produced chromosomes turn out to be stronger than the weaker ones of the previous generation, they replace those weaker chromosomes.
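The biased roulette-wheel selection mentioned above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; it assumes maximization with non-negative fitness values, and the function name is hypothetical.

```python
import random

def roulette_select(population, fitnesses):
    """Biased roulette wheel: pick one individual with probability
    proportional to its fitness (non-negative fitness, maximization)."""
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off
```

Calling this twice yields a pair of parents; fitter chromosomes occupy a larger slice of the wheel and are therefore selected more often, without ever excluding weaker ones outright.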
This evolution process continues until the stopping criteria are met. A real-coded GA uses a vector of floating-point numbers instead of 0's and 1's to encode a chromosome. The crossover operator of a real-coded GA is constructed by borrowing the concept of the linear combination of vectors from convex set theory. The random mutation operator proposed for the real-coded GA perturbs a gene by replacing it with a random number drawn from the variable's domain. With some modifications of the genetic operators, the real-coded GA has achieved better performance than the binary-coded GA on continuous problems [11].
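The two real-coded operators can be sketched as below. This is an illustrative sketch under the description above, not the paper's exact operators: the crossover takes a random convex combination of two parents, and the mutation replaces each gene, with some probability, by a random value from that variable's domain. Function names and the per-gene mutation formulation are assumptions.

```python
import random

def crossover(p1, p2):
    """Arithmetic (linear-combination) crossover of two real-coded parents."""
    w = random.random()  # random mixing weight in [0, 1]
    c1 = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
    c2 = [(1 - w) * a + w * b for a, b in zip(p1, p2)]
    return c1, c2

def mutate(ind, lower, upper, rate=0.2):
    """Replace each gene, with probability `rate`, by a random value
    drawn uniformly from that variable's domain [lower, upper]."""
    return [random.uniform(lo, hi) if random.random() < rate else g
            for g, lo, hi in zip(ind, lower, upper)]
```

Because the crossover is a convex combination, offspring always lie on the line segment between their parents, which keeps them inside any convex feasible region.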

2.2. Particle swarm optimization (PSO)

Particle swarm optimization (PSO) is one of the latest evolutionary optimization techniques, developed by Eberhart and Kennedy [4]. The PSO concept is based on a metaphor of social interaction such as bird flocking and fish schooling. The particles, which are potential solutions in the PSO algorithm, fly around in the multidimensional search space, and the position of each particle is adjusted according to its own previous best position and the neighborhood best or global best. Since all particles in PSO are kept as members of the population throughout the search process, PSO is the only evolutionary algorithm that does not implement survival of the fittest. Simple and economical in both concept and computational cost, PSO has been shown to successfully optimize a wide range of continuous optimization problems [12,13].

3. Hybrid genetic algorithm and particle swarm optimization

This section discusses the structure and rationale of the hybrid algorithm. Fig. 1 depicts a schematic representation of the proposed hybrid GA-PSO. As can be seen, GA and PSO work with the same initial population. When solving an N-dimensional problem, the hybrid approach randomly generates 4N individuals. These individuals may be regarded as chromosomes in the case of GA, or as particles in the case of PSO. The 4N individuals are sorted by fitness, and the top 2N individuals are fed into the real-coded GA to create 2N new individuals by crossover and mutation operations, as shown in Fig. 1. The crossover operator of the real-coded GA is implemented as a linear combination of two vectors, which represent two individuals in our algorithm, with a 100% crossover probability. The random mutation operator of the real-coded GA modifies an individual with a random number in the problem's domain with a 20% probability. The effect of the mutation rate is discussed in Section 4.1. The 2N new individuals created by the real-coded GA are used to adjust the remaining 2N particles by the PSO method. Adjusting the 2N particles involves selecting the global best particle, selecting the neighborhood best particles, and finally updating velocities. The global best particle of the population is determined from the sorted fitness values. The neighborhood best particles are selected by first evenly dividing

Fig. 1. Schematic representation of the GA-PSO hybrid. (→) Associated with the GA operations. (--→) Associated with the PSO operations.
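One generation of the hybrid can be sketched as below. This is a minimal sketch, not the authors' implementation: the PSO step uses the standard inertia-weight velocity update with illustrative coefficients (w, c1, c2) standing in for the paper's Eqs. (5) and (6), and it uses only the global and personal bests, omitting the neighborhood-best mechanism described above. The function name and signature are hypothetical.

```python
import random

def gapso_generation(pop, velocities, pbest, fitness, lower, upper,
                     w=0.7, c1=1.5, c2=1.5):
    """One generation of a GA-PSO-style hybrid on a 4N-member
    population for an N-dimensional minimization problem."""
    n = len(pop[0])  # problem dimension N; population size is 4N
    order = sorted(range(len(pop)), key=lambda i: fitness(pop[i]))
    top, rest = order[:2 * n], order[2 * n:]

    # GA half: breed 2N offspring from the 2N fittest individuals
    # (100% arithmetic crossover, 20% per-gene random mutation).
    offspring = []
    while len(offspring) < 2 * n:
        i, j = random.sample(top, 2)
        a = random.random()
        child = [a * x + (1 - a) * y for x, y in zip(pop[i], pop[j])]
        child = [random.uniform(lo, hi) if random.random() < 0.2 else g
                 for g, lo, hi in zip(child, lower, upper)]
        offspring.append(child)

    # PSO half: move the remaining 2N particles toward personal/global bests.
    gbest = pop[order[0]][:]
    for i in rest:
        for d in range(n):
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * random.random() * (pbest[i][d] - pop[i][d])
                                + c2 * random.random() * (gbest[d] - pop[i][d]))
            pop[i][d] += velocities[i][d]
        if fitness(pop[i]) < fitness(pbest[i]):
            pbest[i] = pop[i][:]

    for idx, child in zip(top, offspring):  # offspring replace the top half
        pop[idx] = child
    return pop
```

The key design point is the division of labor: the fittest half of the population is refined by recombination, while the other half keeps exploring via swarm dynamics, so the two mechanisms operate on the same population each generation.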


Fig. 2. The GA-PSO hybrid algorithm.


Table 1
Effect of mutation rate on the solution quality and performance of GA-PSO. The success rate was 100% for every test function at every mutation rate; the remaining performance measures are tabulated below.

Test function | Performance              | 1%       | 5%       | 10%      | 15%      | 20%      | 25%
RC            | Average error            | 0.00061  | 0.00149  | 0.00053  | 0.00083  | 0.00009  | 0.00049
              | Average time (s)         | 2.844    | 2.978    | 2.908    | 2.563    | 2.117    | 2.933
              | Average func. evaluations| 11429    | 12098    | 12734    | 10102    | 8254     | 11298
ES            | Average error            | 0.00006  | 0.00015  | 0.00008  | 0.00015  | 0.00003  | 0.00006
              | Average time (s)         | 0.198    | 0.178    | 0.186    | 0.192    | 0.195    | 0.225
              | Average func. evaluations| 880      | 760      | 787      | 804      | 809      | 918
GP            | Average error            | 0.07092  | 0.00585  | 0.00097  | 0.00100  | 0.00012  | 0.00051
              | Average time (s)         | 9.400    | 6.692    | 6.689    | 7.485    | 6.686    | 6.642
              | Average func. evaluations| 38127    | 26659    | 26074    | 29108    | 25706    | 25402
B2            | Average error            | 0.00013  | 0.00003  | 0.00005  | 0.00002  | 0.00001  | 0.00009
              | Average time (s)         | 0.037    | 0.036    | 0.042    | 0.047    | 0.044    | 0.0468
              | Average func. evaluations| 147      | 140      | 163      | 179      | 174      | 162
SH            | Average error            | 0.00017  | 0.00009  | 0.00023  | 0.00021  | 0.00007  | 0.00018
              | Average time (s)         | 54.658   | 36.892   | 18.271   | 22.616   | 25.984   | 27.915
              | Average func. evaluations| 215910   | 143154   | 70526    | 86509    | 96211    | 104588
R2            | Average error            | 0.42739  | 0.22121  | 0.00517  | 0.00238  | 0.00064  | 0.00056
              | Average time (s)         | 5.447    | 14.164   | 33.744   | 32.172   | 36.822   | 25.219
              | Average func. evaluations| 21783    | 55802    | 132191   | 124650   | 140894   | 95950
Z2            | Average error            | 0.00013  | 0.00030  | 0.00006  | 0.00027  | 0.00005  | 0.00006
              | Average time (s)         | 0.027    | 0.030    | 0.030    | 0.025    | 0.025    | 0.030
              | Average func. evaluations| 100      | 110      | 112      | 96       | 95       | 108
DJ            | Average error            | 0.00003  | 0.00002  | 0.00001  | 0.00002  | 0.00004  | 0.00003
              | Average time (s)         | 0.049    | 0.042    | 0.047    | 0.053    | 0.056    | 0.055
              | Average func. evaluations| 169      | 156      | 170      | 185      | 206      | 182
H3,4          | Average error            | 0.00020  | 0.00016  | 0.00015  | 0.00015  | 0.00020  | 0.00024
              | Average time (s)         | 0.497    | 0.506    | 0.444    | 0.555    | 0.577    | 0.519
              | Average func. evaluations| 1914     | 1920     | 1688     | 2032     | 2117     | 1908
S4,5          | Average error            | 0.00024  | 0.00025  | 0.00021  | 0.00018  | 0.00014  | 0.00029
              | Average time (s)         | 161.322  | 125.283  | 150.151  | 126.225  | 154.556  | 152.067
              | Average func. evaluations| 571912   | 440400   | 524484   | 433624   | 529344   | 515792
S4,7          | Average error            | 0.00018  | 0.00030  | 0.00017  | 0.00027  | 0.00015  | 0.00025
              | Average time (s)         | 13.488   | 15.372   | 12.358   | 14.924   | 18.119   | 18.388
              | Average func. evaluations| 41611    | 45451    | 33979    | 43611    | 56825    | 57314
S4,10         | Average error            | 0.00019  | 0.00022  | 0.00024  | 0.00014  | 0.00012  | 0.00019
              | Average time (s)         | 17.973   | 13.231   | 16.389   | 17.245   | 17.472   | 23.416
              | Average func. evaluations| 46030    | 33597    | 41234    | 43158    | 43314    | 57989
R5            | Average error            | 0.33370  | 0.00261  | 0.00053  | 0.00024  | 0.00013  | 0.00009
              | Average time (s)         | 1122.213 | 835.414  | 393.870  | 289.845  | 213.922  | 173.464
              | Average func. evaluations| 8240526  | 6086084  | 2840790  | 2070242  | 1527953  | 1209662
Z5            | Average error            | 0.00000  | 0.00001  | 0.00001  | 0.00002  | 0.00000  | 0.00001
              | Average time (s)         | 0.078    | 0.094    | 0.094    | 0.103    | 0.105    | 0.094
              | Average func. evaluations| 326      | 358      | 358      | 392      | 398      | 372
H6,4          | Average error            | 0.00032  | 0.00028  | 0.00028  | 0.00031  | 0.00024  | 0.00024
              | Average time (s)         | 3.139    | 3.633    | 3.622    | 3.583    | 3.326    | 3.189
              | Average func. evaluations| 11256    | 13976    | 11405    | 13627    | 12568    | 11882
R10           | Average error            | 1.08786  | 0.00603  | 0.00043  | 0.00010  | 0.00005  | 0.00005
              | Average time (s)         | 2471.923 | 2471.923 | 2448.219 | 2027.30  | 1358.753 | 1079.558
              | Average func. evaluations| 10000040 | 10000040 | 9904148  | 8101236  | 5319160  | 4243428
Z10           | Average error            | 0.00000  | 0.00000  | 0.00000  | 0.00000  | 0.00000  | 0.00000
              | Average time (s)         | 0.184    | 0.189    | 0.199    | 0.211    | 0.227    | 0.242
              | Average func. evaluations| 784      | 764      | 804      | 848      | 872      | 924

the 2N particles into N neighborhoods and then designating the particle with the best fitness value in each neighborhood as the neighborhood best particle. Velocity and position updates for each of the 2N particles are then carried out by Eqs. (5) and (6). The result is sorted in preparation for the next iteration. The hybrid algorithm, described in Fig. 2, terminates when it satisfies a convergence criterion based on the standard deviation of the objective function values of the N + 1 best individuals of the population, defined as follows:

S_f = [ Σ_{i=1}^{N+1} ( f(x_i) − f̄ )² / (N + 1) ]^{1/2}
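The stopping test based on S_f can be sketched as follows. This is a minimal sketch of the criterion above; the tolerance value eps is an assumption, since the threshold used by the authors is not stated in this excerpt.

```python
import math

def converged(best_values, eps=1e-4):
    """Return True when the standard deviation S_f of the N+1 best
    objective values falls below the tolerance eps."""
    mean = sum(best_values) / len(best_values)
    s_f = math.sqrt(sum((f - mean) ** 2 for f in best_values)
                    / len(best_values))
    return s_f < eps
```

Intuitively, the run stops once the best N + 1 individuals have nearly identical objective values, i.e., the population has collapsed onto a single optimum.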