Modified Differential Evolution with Local Search Algorithm for Real World Optimization

Ankush Mandal, Aveek Kumar Das, Prithwijit Mukherjee, Swagatam Das
Dept. of Electronics and Telecomm. Engg., Jadavpur University, Kolkata 700032, India
E-mails: [email protected], [email protected], [email protected], [email protected]

Ponnuthurai Nagaratnam Suganthan
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
E-mail: [email protected]

Abstract—Real world optimization problems are used to judge the performance of any Evolutionary Algorithm (EA) on real world applications, so the performance of an EA on such problems is an important measure of its efficiency. In this work, we present a multipopulation-based memetic algorithm, CDELS. It is a hybridization of a competitive variant of Differential Evolution (DE) and a Local Search (LS) method. As the number of optima is large in these problems, we have also incorporated a distant search method to hop from one optimum to another. However, it is well known that DE has a fast but less reliable convergence property. To overcome this limitation, a hybrid mutation strategy is developed to balance exploration against thorough search. In addition, a proximity checking method is applied to distribute the subpopulations over a larger portion of the search space, which further enhances the searching ability of the algorithm. The performance of the CDELS algorithm is evaluated on the test suite provided for the Competition on Testing Evolutionary Algorithms on Real-world Numerical Optimization Problems at the 2011 IEEE Congress on Evolutionary Computation, and the simulation results are shown in this paper.

Keywords—Memetic Algorithm, Differential Evolution, Real World Numerical Optimization Problems.
I. INTRODUCTION
In recent years, Evolutionary Algorithms (EAs) have been applied to many real world optimization problems [1], [2], [3], [4]. EAs with proper enhancement are a good choice for real world optimization problems because they are inspired by natural phenomena that interface with the real world. However, real world optimization problems are very challenging due to the presence of an enormous number of local optima within the search space. The Differential Evolution (DE) algorithm [5, 6] belongs to the family of EAs and is nowadays considered one of its most powerful tools. It can be interpreted as a discrete dynamical system governing the movements of a set of vectors in the search space. The behavior of the individuals greatly influences the progress of the search process and therefore the convergence of the algorithm. EAs are good for global optimization, where exploration of the entire search space is required within a relatively small number of iterations, but they are not good at producing precise solutions.
978-1-4244-7835-4/11/$26.00 ©2011 IEEE
Local Search (LS) algorithms [7] are used to explore the region around the current solutions with high intensity, so highly accurate solutions can be obtained with an LS method. EAs hybridized with an LS method are commonly called Memetic Algorithms (MAs), and MAs [8-10] have proven to be more efficient than EAs themselves. The reason is the combination of global exploration and local exploitation. In this paper, we propose an MA to address some real world optimization problems. In our proposed algorithm, a competitive variant of DE is used for global exploration and the classic Solis and Wets algorithm [11] is used as one part of the local search. The other part of the LS method consists of generating several points within a sphere of specific radius centered on the current best solution found by the Solis and Wets algorithm and checking whether the best of them improves on the current best solution. A distant search is also used for hopping from one optimum to another. For the DE algorithm, we have developed a hybrid mutation strategy by hybridizing a modified "DE/current-to-best/2" mutation strategy with a modified "DE/rand/1" mutation strategy; this strategy is discussed later in sufficient detail. The proposed MA, denoted CDELS, is multi-population based, and a proximity checking mechanism is applied to ensure the distribution of subpopulations over a larger portion of the search space, which encourages the search process to find new optima. We have also incorporated differential evolution between the best individuals of the subpopulations at regular intervals. A worst performance checking method is developed to reinitialize individuals that consistently perform worst in their subpopulations.
We have tested the performance of our algorithm on the test suite proposed in the Competition on Testing Evolutionary Algorithms on Real-world Numerical Optimization Problems at the 2011 IEEE Congress on Evolutionary Computation. For details, see the technical report for this competition [12].

II. RELATED WORK
A. Classical DE
Differential evolution (DE), proposed by Storn and Price [5], is a simple but effective population-based
stochastic search technique for solving global optimization problems.

1) Initialization of the Population: Let S ⊂ ℜ^D be the D-dimensional search space of the problem under consideration. DE evolves a population of NP D-dimensional individual vectors X_{i,G} = {x_i^1, x_i^2, ..., x_i^D}, i = 1, 2, ..., NP, from one generation to the next. The initial population should ideally cover the entire search space by randomly distributing each parameter of an individual vector between the prescribed upper and lower bounds x_j^u and x_j^l, j ∈ [1, D].

For every generation G, DE employs a mutation and a crossover operation for every target vector X_{i,G} to produce the trial vector U_{i,G}.

2) Mutation operation: For every target vector X_{i,G} in any generation G, a mutant vector V_{i,G} is generated according to a certain mutation scheme. The most common mutation policies are:

a) DE/rand/1/bin:
    V_{i,G} = X_{r1,G} + F · (X_{r2,G} − X_{r3,G}),    (1)

b) DE/best/1/bin:
    V_{i,G} = X_{best,G} + F · (X_{r1,G} − X_{r2,G}),    (2)

c) DE/current-to-best/1/bin:
    V_{i,G} = X_{i,G} + F1 · (X_{best,G} − X_{i,G}) + F2 · (X_{r1,G} − X_{r2,G}),    (3)

where r1, r2, and r3 are random, mutually exclusive integers generated in the range [1, NP], all of which are also different from the target vector's current index i, F is a factor for scaling the difference vectors, and X_{best,G} is the individual vector with the best fitness value in the population at generation G.

3) Crossover operation: This operation involves a binary crossover between the target vector X_{i,G} and the mutant vector V_{i,G} produced in the previous step, which yields the trial vector U_{i,G} = {u_{i,1,G}, u_{i,2,G}, ..., u_{i,D,G}}. The crossover operation is done as follows:

    u_{i,j,G} = v_{i,j,G}  if rand[0,1) ≤ CR or j = j_rand,
                x_{i,j,G}  otherwise,    (4)

where CR is a user-specified crossover constant in the range [0, 1) and j_rand is a randomly chosen integer in the range [1, D] that ensures the trial vector U_{i,G} differs from its corresponding target vector X_{i,G} in at least one component.

4) Selection Operation: If the values of some parameters of a newly generated trial vector exceed the corresponding upper or lower bounds, they are randomly and uniformly reinitialized within the search range. Then the fitness values of all trial vectors are evaluated, and a selection operation is performed. The fitness value of each trial vector f(U_{i,G}) is compared with that of its corresponding target vector f(X_{i,G}) in the current population, and the population for generation (G+1) is formed as follows (for a maximization problem):

    X_{i,G+1} = U_{i,G}  if f(U_{i,G}) ≥ f(X_{i,G}),
                X_{i,G}  otherwise,    (5)

where f(·) is the objective function to be maximized.

The above steps (mutation, crossover, and selection) are repeated until some stopping criterion is satisfied.

B. Previous Works on Real World Optimization
In recent years, many algorithms have been developed to solve real world optimization problems; some of them are discussed here in brief. Jeong et al. proposed a hybrid algorithm [1] combining a Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) to address real world optimization problems. They applied it to a diesel engine combustion chamber design problem, and the results show that this hybrid algorithm is more efficient than GA or PSO separately. Yang and Liu proposed a bi-criteria optimization algorithm [2] for scheduling in a real-world flow shop with setup times; in this algorithm, a modified genetic local search method and two new neighborhood structures are used to improve efficiency. Abido proposed a multi-objective evolutionary algorithm [3] for the electric power dispatch problem; in this work, a Pareto-based genetic algorithm and a feasibility check procedure are applied to increase the potential and effectiveness of the algorithm. Ahn et al. proposed a Memetic Algorithm implemented with GA and Mesh Adaptive Direct Search [4] for the design of electromagnetic systems; this hybrid algorithm was developed to effectively reduce the computation time for the optimal design of a PM wind generator.

III. REAL WORLD OPTIMIZATION PROBLEMS

The technical report of Das and Suganthan [12] presents a comprehensive account of some real world optimization problems. The technical report and the codes are available at http://www3.ntu.edu.sg/home/epnsugan/index_files/CEC11RWP/CEC11-RWP.htm.
The optimization problems are listed below:
• Problem 1: Parameter Estimation for Frequency-Modulated (FM) Sound Waves.
• Problem 2: Lennard-Jones Potential Problem.
• Problem 3: The Bifunctional Catalyst Blend Optimal Control Problem.
• Problem 4: Optimal Control of a Non-Linear Stirred Tank Reactor.
• Problems 5 & 6: Tersoff Potential Problems.
• Problem 7: Spread Spectrum Radar Polyphase Code Design.
• Problem 8: Transmission Network Expansion Planning (TNEP) Problem.
• Problem 9: Large Scale Transmission Pricing Problem.
• Problem 10: Circular Antenna Array Design Problem.
• Problem 11: The ELD Problems (DED, ELD, Hydrothermal Scheduling Instances).
• Problem 12: Messenger Spacecraft Trajectory Optimization Problem.
• Problem 13: Cassini 2 Spacecraft Trajectory Optimization Problem.

IV. PROPOSED ALGORITHM

In this section we discuss the proposed CDELS algorithm in detail. In this algorithm, we have developed a competitive variant of DE that is accompanied by a local search method. Furthermore, the algorithm employs a hybrid mutation strategy for DE, a proximity checking method, and a worst performance checking mechanism. The pseudo code of the proposed CDELS algorithm is shown in Figure 1.

A. The Modified DE Algorithm
1) Competitive variant of DE: For a heuristic search process, it is useful to exploit the neighborhood, because doing so resembles information exchange between neighbors, which leads to better solutions. So, here we have incorporated a competition between neighbors. The success rate is measured at each generation, which helps determine the individual generation process for the next generation: depending on the success rate, either the current individual or its nearest neighbor is used for mutant vector formation. If the corresponding trial vector is chosen for the next generation, the corresponding success rate is increased by 1; if it is not chosen, the corresponding success rate is decreased by 1. If the success rates for the current individual and its nearest neighbor are equal, the current individual is used. At initialization, all success rates are set to 0. Using this method, we can get far better solutions with fewer function evaluations, and the population does not converge to a local minimum too quickly, because the competition is set up with the nearest neighbor. Here, the nearest neighbor is selected on the basis of the Euclidean distance between the current individual and the other individuals in the corresponding subpopulation.

2) Hybrid mutation strategy: In DE, greedy strategies like DE/current-to-best/n and DE/best/n benefit from their fast convergence property by guiding the search process with the best solution discovered so far, thereby converging faster to that point. But, as a result of such fast convergence, the population may in many cases lose its diversity and global exploration abilities within a relatively small number of generations, thereafter getting trapped at some locally optimal point in the search space. Taking these facts into consideration, and to overcome the limitation of fast but less reliable convergence, we have developed a hybrid mutation strategy: the final mutant vector is obtained by combining, with a weight factor ω, two mutant vectors generated by two different mutation strategies. In this way the hybrid mutation strategy prevents the algorithm from converging too quickly while still exploring the whole search space to produce high quality results.

For the first mutation strategy, we use a modified "DE/current-to-best/2" mutation strategy. For this modified strategy, the best individual of each subpopulation is stored in a memory archive; this archive is updated at each generation to store the new best individuals and delete the previous ones. During the mutation process, the nearest memory individual is used instead of the global best individual. As mentioned earlier, depending on the success rate, either the current individual or its nearest neighbor is used in the mutant vector generation process; let the chosen individual be X_{c,G}. The mutation process can be expressed as follows:

    V_{mut,1} = X_{c,G} + F_best · (X_{m,G} − X_{c,G}) + F · (X_{r1,G} − X_{r2,G})    (6)

where X_{m,G} is the nearest best individual as mentioned above, and X_{r1,G} and X_{r2,G} are two distinct vectors randomly chosen from the subpopulation.

For the second mutation strategy, we use the "DE/current/1" mutation strategy. The mutation process can be expressed as follows:

    V_{mut,2} = X_{c,G} + F · (X_{r1,G} − X_{r2,G})    (7)

where X_{r1,G} and X_{r2,G} are two distinct vectors randomly chosen from the subpopulation, independently of the first mutation process.

Now, the final mutant vector is a weighted sum of the two mutant vectors mentioned above. If the weight factor for V_{mut,1} is ω, then the final mutant vector is:

    V_{mut} = ω · V_{mut,1} + (1 − ω) · V_{mut,2}    (8)
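The weighted combination of Eqs. (6)-(8) can be sketched as below. This is a minimal NumPy illustration (the paper's implementation was in Matlab); the function and variable names are ours, not the authors':

```python
import numpy as np

def hybrid_mutant(x_c, x_m, subpop, F_best, F, omega, rng):
    """Illustrative sketch of the hybrid mutation of Eqs. (6)-(8).

    x_c    : chosen individual (the current one or its nearest neighbor)
    x_m    : nearest archived subpopulation-best individual
    subpop : 2-D array, one row per individual of the subpopulation
    """
    # Eq. (6): modified DE/current-to-best/2, pulled toward the nearest best
    r1, r2 = rng.choice(len(subpop), size=2, replace=False)
    v1 = x_c + F_best * (x_m - x_c) + F * (subpop[r1] - subpop[r2])

    # Eq. (7): DE/current/1, with difference indices drawn independently
    r3, r4 = rng.choice(len(subpop), size=2, replace=False)
    v2 = x_c + F * (subpop[r3] - subpop[r4])

    # Eq. (8): weighted sum of the two mutant vectors
    return omega * v1 + (1.0 - omega) * v2
```

With ω = 1 the strategy reduces to the greedy current-to-best move, and with ω = 0 to the purely exploratory one; intermediate values trade the two off.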
B. Local Search
1) Local Search method: As mentioned earlier, we use the Solis and Wets algorithm as one part of the LS method. The algorithm is a randomized hill climber. It starts from an initial point i; then a deviation d is chosen from a normal distribution whose mean is 0 and whose variance is a parameter σ that is adaptive in nature. If i+d or i-d is better than the current individual in terms of fitness value, the current individual is replaced by the better one and a success is recorded; otherwise a failure is recorded. After some number of consecutive successes (we set this value to 1), σ is increased (we multiply it by 1.5) so the search progresses more quickly. Similarly, after some number of consecutive failures (we set this value to 1), σ is decreased (we multiply it by 1/3) to focus the search. A bias term is also included to guide the search in the direction that gives success. As σ is adaptive, the step size of the search method is also adaptive, which makes the algorithm very effective at exploiting the region near a solution.

In our algorithm, the solutions obtained after executing the local search are also recorded. If the number of successes exceeds the number of failures by at least 50, the solution is recorded as a good solution and the corresponding σ value is also recorded; otherwise the solution is recorded as a bad solution. When the local search algorithm runs again, the chosen individuals are compared with the previously recorded good and bad solutions. If the current individual is a previous good solution, the search process starts with the previous value of σ; if it is a previous bad solution, local search is not applied to the individual. We have incorporated this memory system to avoid unnecessary function evaluations, which in turn increases the efficiency of the algorithm.

For the second stage of the LS method, we developed a method of generating several points around the best solution found by the Solis and Wets algorithm. This method generates 3 types of individuals. For the 1st type, we generate a vector (Sph_mut) whose components are randomly selected from a normal distribution with mean 0 and variance 1.
For the 2nd and 3rd types, we generate a vector (Sph_mut) whose components are randomly chosen from a uniform distribution within the range [0,1]. The radius within which the points are generated is a parameter rLS. The vector rLS · Sph_mut is added to the current best individual for the 1st and 2nd types of individuals, and subtracted from the current best individual for the 3rd type. If the best of all newly generated individuals has a better fitness value than the current best individual, the current best individual is replaced by that individual. In our algorithm, the whole local search method is applied over the best individuals of all subpopulations.
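The adaptive hill climber of the first LS stage can be sketched as follows. This is a minimal NumPy sketch for a minimization problem; the bias term and the good/bad-solution memory described above are omitted for brevity, and all names are illustrative:

```python
import numpy as np

def solis_wets(f, x, sigma, max_evals, rng):
    """Minimal Solis-and-Wets-style randomized hill climber (bias omitted).

    f: objective to minimize; x: starting point; sigma: initial variance
    of the Gaussian deviation. Returns the point, its fitness, and sigma.
    """
    fx = f(x)
    evals = 1
    while evals + 2 <= max_evals:
        # Deviation drawn from N(0, sigma) in every dimension
        d = rng.normal(0.0, np.sqrt(sigma), size=x.shape)
        f_plus, f_minus = f(x + d), f(x - d)
        evals += 2
        if f_plus < fx:              # success along +d
            x, fx = x + d, f_plus
            sigma *= 1.5             # expand the step after a success
        elif f_minus < fx:           # success along -d
            x, fx = x - d, f_minus
            sigma *= 1.5
        else:                        # failure: contract the step size
            sigma /= 3.0
    return x, fx, sigma
```

Because σ grows after successes and shrinks after failures, the step size self-adapts to the local landscape, which is what makes the method effective at exploiting the region near a solution.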
2) Population Re-initialization & Mutation towards the Global Best Position: After executing the LS algorithm, only the best individuals of the respective subpopulations are kept unaltered; the other individuals in the subpopulations are reinitialized. The re-initialization method generates individuals within a specific radius R_LS of the best individual. Each of the newly generated individuals then produces a mutant vector according to the "DE/current-to-best/1" mutation strategy, where the best individual is the global best one. If any component of the mutant vector crosses the given bounds, it is re-mutated with a smaller scaling factor (the range is reduced to half). If the mutant vector is better than the current individual, the current individual is replaced by the mutant vector.
C. Distant Search
As we are dealing with real world optimization problems, which are often very difficult to solve due to an enormous number of local optima, it is very common for the population to get trapped in some local optimum. So, we use a method to hop from one optimum to another. In this method, a large deviation D_dis is added to the current best solutions of the subpopulations, and we check whether the modified solution is better. If it is better than the current best solution, the current best solution is replaced by the modified solution. To construct D_dis, we take the difference between the upper bound and the lower bound of the search space to get a difference vector and divide it by 2; then some dimensions are randomly chosen, and the components of the vector along these dimensions are negated. This vector is taken as D_dis.
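The distant-search hop can be sketched as below (a NumPy sketch assuming minimization; the 0.5 probability of negating a dimension and the clipping to the bounds are our assumptions, as the paper does not specify them):

```python
import numpy as np

def distant_search(f, best, lower, upper, rng):
    """Illustrative sketch of the distant-search hop (minimization assumed).

    A large deviation D_dis of magnitude (upper - lower) / 2 per dimension,
    with a random subset of dimensions sign-flipped, is added to the best
    solution; the move is kept only if it improves the fitness.
    """
    d_dis = (upper - lower) / 2.0                 # half the search range
    flip = rng.random(best.shape) < 0.5           # dimensions to negate (assumed prob.)
    d_dis = np.where(flip, -d_dis, d_dis)
    candidate = np.clip(best + d_dis, lower, upper)  # keep inside the bounds (assumed)
    return candidate if f(candidate) < f(best) else best
```

The large, sign-randomized step lets a stagnant subpopulation jump clear across a basin of attraction instead of crawling out of it.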
D. Proximity Checking Method
In the proximity checking method, the distance between every two best individuals of the corresponding subpopulations is measured and checked against a prespecified value. If it falls below the prespecified value, the subpopulation whose best individual has the lower fitness value is reinitialized. This method ensures that the subpopulations are located in different regions of the search space, which in turn increases the searching efficiency of the algorithm.

E. Bad Performance Checking Method
Each time an individual shows the worst fitness value within a subpopulation, its bad performance index is increased by 1. If the bad performance index of any individual reaches 15, the individual is reinitialized and its bad performance index is reset to 0. In this way, individuals with consistently bad performance are eliminated.

F. Differential Evolution between Best Individuals of the Subpopulations
After every 40000 iterations, DE is applied between the best individuals of the subpopulations: all the best individuals are taken as a population and differential evolution is applied to them. This method further increases the quality of the solutions. The mutation strategy used for this purpose is the second mutation strategy of the hybrid mutation strategy mentioned earlier.

V. EXPERIMENTAL SETTINGS

In this section we give the details of the parameter values used in the evaluation of the algorithm's performance on the test suite.
A. All Parameters to be Adjusted
The population size (NP), the scaling factor (F), the crossover probability (CR), the weight factor (ω) used in the hybrid mutation strategy, the variance (σ) of the normal distribution from which the deviation is randomly chosen in the 1st stage of Local Search, the radius (rLS) used in the 2nd stage of Local Search, the radius (R_LS) used in population re-initialization after Local Search, and the boundary value of distance (ρ) between best individuals of different subpopulations used for proximity checking.
B. Parameter Values and Settings Used in the Experiment
1) Population size: NP was kept fixed at 60 throughout the search process. We divided the whole population into 6 subpopulations, each containing 10 individuals.
2) Scaling factor: In this algorithm, the scaling factor is generated randomly and independently for each dimension of the difference vector, depending on the value of the difference vector along the corresponding dimension:
    F_j^d = rand(0,1) · e^(−x_j^d / x_R^d)    (9)

where d ∈ [1, D] (D is the dimension of the search space) and the scaling factor is generated for the d-th dimension of the j-th individual; x_j^d is the value of the difference vector along the d-th dimension and x_R^d is the search range along that dimension.
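Eq. (9) can be sketched as below in NumPy. Note one assumption on our part: we take the magnitude of the difference component in the exponent so that the factor stays in [0, 1) even for negative components (the printed formula does not show an absolute value):

```python
import numpy as np

def scaling_factors(diff, search_range, rng):
    """Per-dimension scaling factors in the spirit of Eq. (9).

    diff: difference vector (e.g. x_r1 - x_r2); search_range: per-dimension
    search range x_R. The absolute value of the difference component is an
    assumption here, made so that larger differences always yield smaller
    expected scaling factors.
    """
    return rng.random(diff.shape) * np.exp(-np.abs(diff) / search_range)
```

The effect is that dimensions where the two parents already differ widely receive small steps, while nearly identical dimensions can receive steps up to the full random factor.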
3) Weight factor for the hybrid mutation strategy: The weight factor ω for the first mutation scheme in the hybrid mutation strategy was set to 0.7.
4) Variance (σ): The variance (σ) of the normal distribution from which the deviation is randomly chosen in the 1st stage of Local Search was updated at each generation as follows:
    σ_d = 0.02 · (1 − 0.2 · (FEs / 150000)) · x_R^d    (10)

where σ_d is the value of the variance for dimension d and FEs is the number of function evaluations.
5) Radius rLS: The radius rLS used in the 2nd stage of the Local Search method was updated at each generation as follows:

    rLS_d = (x_R^d / 20) · (1 − 0.2 · (FEs / 150000))    (11)

where rLS_d is the value of the radius vector rLS along dimension d.
6) Radius R_LS: The radius R_LS used in population re-initialization after Local Search was updated at each generation as follows:

    R_LS_d = (x_R^d / 10) · (1 − 0.2 · (FEs / 150000))    (12)

where R_LS_d is the value of the radius vector R_LS along dimension d.
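The three schedules of Eqs. (10)-(12) share the same linear decay and can be computed together, as in this illustrative sketch (names are ours):

```python
import numpy as np

def ls_parameters(fes, search_range, max_fes=150000):
    """Deterministic schedules of Eqs. (10)-(12) for the local-search radii.

    All three parameters shrink linearly from 100% to 80% of their initial
    value as the function-evaluation budget max_fes is consumed.
    """
    decay = 1.0 - 0.2 * (fes / max_fes)
    sigma = 0.02 * decay * search_range    # Eq. (10): stage-1 variance
    r_ls = (search_range / 20.0) * decay   # Eq. (11): stage-2 sampling radius
    R_LS = (search_range / 10.0) * decay   # Eq. (12): re-initialization radius
    return sigma, r_ls, R_LS
```

All three are proportional to the per-dimension search range x_R, so wider dimensions are always searched with proportionally wider steps.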
7) Boundary value of distance between best individuals of different subpopulations: The boundary value of distance (ρ) between best individuals of different subpopulations used for the proximity checking method was set to:

    ρ_d = x_R^d / (6 · dim)    (13)

where ρ_d is the value of vector ρ along dimension d and dim is the dimension of the search space.

1.  Initialize the population (number of subpopulations = NS, number of individuals in each subpopulation = IND)
2.  Set Success1 & Success2 = 0; CR = 0.9; FEs = 0; Gen = 0
3.  Evaluate the fitness of the individuals & update FEs
4.  For FEs < Max_FEs
5.-7. …
8.  if Success1 > Success2
9.      use the nearest neighbor of the individual to generate the mutant vector
10. else if Success1 = Success2
11.     then if rand > 0.85, use the nearest neighbor of the individual to generate the mutant vector
12.     else use the current individual to generate the mutant vector
13. end if
14. else (Success1 < Success2)
15.     use the current individual to generate the mutant vector
16. end if
17. Construct the trial vector; check its fitness value & update FEs
18. If the fitness value of the trial vector ≥ the fitness value of the current individual, replace the current individual with the trial vector & increase the corresponding Success by 1; else decrease the corresponding Success by 1
19. end For
20. if remainder(Gen, 200) = 0
21.     run the Distant Search & update FEs
22. end if
23. if remainder(Gen, 50) = 0
24.     run the Local Search (σ & rLS are used in this method) & reinitialize all individuals except the best ones; new individuals are generated within the radius R_LS of the best individual. Update FEs during the process.
25. end if
26. if remainder(Gen, 40000) = 0
27.     apply DE between the best individuals of the subpopulations; update the subpopulations and FEs
28. end if
29. Apply the proximity checking method and reinitialize any subpopulation if required
30. Apply the bad performance checking method; if the bad performance index of any individual becomes ≥ 15, reinitialize the individual
31. Gen = Gen + 1
32. end For

Figure 1. Pseudo code for the proposed CDELS algorithm.
VI. EXPERIMENTAL RESULTS

In this section, we present the simulation results in tabular form. We measured the best, average, and worst objective function values after 50000, 100000, and 150000 function evaluations, executing our algorithm 25 times for each problem.
C. Experimental Environment
• Language: Matlab
• Algorithm: CDELS
• Runs/problem: 25
• CPU: 3.2 GHz Intel Core i5
• RAM: 2 GB DDR3
TABLE 1. EXPERIMENTAL RESULTS PART 1 (OBJECTIVE FUNCTION VALUES)

FES      Stat    Problem 1   Problem 2    Problem 3   Problem 4   Problem 5    Problem 6    Problem 7
50000    Mean    0.008734    -11.578171   0.000287    13.894459   -31.796216   -20.988602   1.069861
         Best    0.005844    -12.498144   0.000287    13.770762   -33.400500   -22.428184   0.935573
         Worst   0.015078    -10.789927   0.000287    14.286552   -28.262336   -19.388167   1.332351
100000   Mean    0.003708    -11.834426   0.000287    13.779116   -33.640953   -21.651074   0.987677
         Best    0.001655    -12.590218   0.000287    13.770762   -33.990500   -22.553102   0.926147
         Worst   0.015078    -11.161209   0.000287    13.934459   -31.856689   -19.388167   1.302351
150000   Mean    0.000026    -21.411557   0.000287    13.770824   -34.308068   -25.826496   0.832774
         Best    0.000003    -22.598335   0.000287    13.770762   -34.599020   -27.417080   0.723051
         Worst   0.000111    -19.495569   0.000287    13.820414   -34.097061   -21.246350   0.908693
TABLE 2. EXPERIMENTAL RESULTS PART 2 (OBJECTIVE FUNCTION VALUES)

FES      Stat    Problem 8   Problem 9    Problem 10   Problem 11.1   Problem 11.2   Problem 11.3   Problem 11.4
50000    Mean    220.0000    263008.42    -20.780065   532135.14      5014707.22     15449.213      19182.85
         Best    220.0000    242491.68    -21.249358   447226.92      5314704.81     15447.000      19138.14
         Worst   220.0000    314865.89    -20.054793   685073.53      4842002.43     15458.709      19204.44
100000   Mean    220.0000    169005.68    -20.958112   170343.17      3541922.46     15447.005      19159.45
         Best    220.0000    146768.19    -21.741048   136480.68      3475908.72     15447.000      19007.73
         Worst   220.0000    206963.50    -20.556494   207071.98      3903673.82     15449.635      19204.42
150000   Mean    220.0000    104833.14    -21.089715   73806.30       2446175.74     15446.999      19120.01
         Best    220.0000    94457.80     -21.769134   68146.40       2422839.04     15446.550      19007.72
         Worst   220.0000    143008.78    -20.841390   80638.85       2566265.39     15449.634      19185.83
TABLE 3. EXPERIMENTAL RESULTS PART 3 (OBJECTIVE FUNCTION VALUES)

FES      Stat    Problem 11.5   Problem 11.6   Problem 11.7   Problem 11.8   Problem 11.9   Problem 11.10   Problem 12
50000    Mean    33030.65       139804.41      1957937.80     1599518.87     2306703.22     1531825.64      18.411089
         Best    33008.60       135965.38      1920904.71     1352823.30     2648685.58     1352823.31      16.513068
         Worst   33074.18       141771.96      2015860.68     1700257.23     2023710.35     1747220.76      20.874999
100000   Mean    33010.60       138824.41      1951230.35     1511648.83     2030833.94     1421154.53      16.273977
         Best    32956.88       135935.37      1920904.67     1352823.27     1889174.13     1352823.27      14.957941
         Worst   33060.96       141394.66      1978511.43     1699537.13     2231175.22     1699537.13      19.100023
150000   Mean    32958.29       137710.68      1946440.35     1018834.48     1537495.73     1029531.22      13.420167
         Best    32928.03       135906.94      1920904.66     983017.54      1459303.42     1002619.80      10.439291
         Worst   33031.15       141184.62      1957357.47     1275596.75     1872274.21     1275596.75      16.490167
TABLE 4. EXPERIMENTAL RESULTS PART 4 (OBJECTIVE FUNCTION VALUES)

FES      Stat    Problem 13
50000    Mean    20.466793
         Best    19.469068
         Worst   23.934499
100000   Mean    19.597972
         Best    18.715588
         Worst   22.294781
150000   Mean    16.961617
         Best    10.294767
         Worst   19.437542
Figure 2. Radiation pattern of circular antenna array in optimal setting (Problem Number 10)
The simulation results are presented in Tables 1, 2, 3, and 4. A representative radiation pattern plot for the circular antenna array design problem is also shown above.

VII. CONCLUSION

In this paper, a competitive variant of Differential Evolution with a Local Search algorithm is proposed to address real world optimization problems. These optimization problems are very hard to optimize due to their large number of local minima, so a distant search method is also included to further ensure that no subpopulation gets trapped in some local optimum. We have also developed a hybrid mutation strategy to overcome DE's fast but less reliable convergence. As this algorithm is multi-population based, it is important that the subpopulations are located in different portions of the search space, because this greatly increases the searching efficiency of the algorithm; for that purpose, a proximity checking method is applied between the best individuals of the subpopulations. Finally, we have tested our algorithm on the 13 real world optimization problems provided for the Competition on Testing Evolutionary Algorithms on Real-world Numerical Optimization Problems at the 2011 IEEE Congress on Evolutionary Computation. We have presented the simulation results in tabular form, along with a sample radiation pattern for the circular antenna array design problem, to show how efficient our algorithm is at optimizing real world problems.
REFERENCES
[1] S. Jeong, S. Hasegawa, K. Shimoyama, and S. Obayashi, "Development and investigation of efficient GA/PSO-hybrid algorithm applicable to real-world design optimization", IEEE Computational Intelligence Magazine, vol. 4, pp. 36-44, 2009.
[2] K. Yang and X. Liu, "A bi-criteria optimization model and algorithm for scheduling in a real-world flow shop with setup times", Proc. International Conference on Intelligent Computation Technology and Automation (ICICTA), 2008, pp. 535-539.
[3] M. A. Abido, "Multiobjective evolutionary algorithms for electric power dispatch problem", IEEE Transactions on Evolutionary Computation, vol. 10, pp. 315-329, 2006.
[4] Y. Ahn, J. Park, C. G. Lee, J. W. Kim, and S. Y. Jung, "Novel memetic algorithm implemented with GA (genetic algorithm) and MADS (mesh adaptive direct search) for optimal design of electromagnetic system", IEEE Transactions on Magnetics, vol. 46, pp. 1982-1985, 2010.
[5] R. Storn and K. Price, "Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces", Journal of Global Optimization, vol. 11, pp. 341-359, 1997.
[6] S. Das and P. N. Suganthan, "Differential evolution - a survey of the state-of-the-art", IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 4-31, Feb. 2011.
[7] H. H. Hoos and T. Stützle, Stochastic Local Search: Foundations and Applications, Morgan Kaufmann, 2005.
[8] P. Moscato, "On evolution, search, optimization, genetic algorithms and martial arts: towards memetic algorithms", Caltech Concurrent Computation Program, report 826, 1989.
[9] Y. S. Ong, M. H. Lim, and X. S. Chen, "Research frontier: memetic computation - past, present & future", IEEE Computational Intelligence Magazine, vol. 5, no. 2, pp. 24-36, 2010.
[10] D. Molina, M. Lozano, C. García-Martínez, and F. Herrera, "Memetic algorithms for continuous optimisation based on local search chains", Evolutionary Computation, vol. 18, no. 1, pp. 27-63, 2010.
[11] F. J. Solis and R. J. Wets, "Minimization by random search techniques", Mathematics of Operations Research, vol. 6, pp. 19-30, 1981.
[12] S. Das and P. N. Suganthan, "Problem definitions and evaluation criteria for CEC 2011 competition on testing evolutionary algorithms on real world optimization problems", Technical Report, Jadavpur University / Nanyang Technological University, December 2010.