2011 Seventh International Conference on Natural Computation
Global Artificial Bee Colony Search Algorithm for Numerical Function Optimization

Guo Peng1, Cheng Wenming1, Liang Jian2
1 College of Mechanical Engineering, Southwest Jiaotong University, Chengdu 610031, China
2 School of Mechanical Engineering & Automation, Xihua University, Chengdu 610039, China
[email protected], [email protected], [email protected]
Abstract—The standard artificial bee colony (ABC) algorithm, a relatively new swarm optimization method, is often trapped in local optima in global optimization. In this paper, a novel search strategy for the three main procedures of the ABC algorithm is presented. In the ABC algorithm, the solutions of the whole swarm are exploited by the employed bees and onlookers based on neighbor information. By incorporating the historical best food-source position found by all employed bees into the solution search equation, the improved algorithm, called the global artificial bee colony search algorithm, achieves better convergence and solution quality. Experiments on a set of benchmark problems demonstrate that the proposed algorithm is more effective than other population-based optimization algorithms.

Keywords—Artificial bee colony, Search strategy, Particle swarm optimization, Function optimization
I. Introduction

Recently, swarm intelligence-based algorithms have drawn more and more attention from researchers in related fields. Several nature-inspired optimization techniques have been introduced, such as the genetic algorithm (GA), ant colony optimization (ACO) and particle swarm optimization (PSO). They have been used to find the minimum or maximum of the function under consideration in combinatorial and continuous optimization problems. The artificial bee colony (ABC) algorithm, introduced by D. Karaboga [1] and based on the foraging behavior of honey bees, is a relatively new population-based optimization technique that has been used to find optimal solutions in numerical optimization problems. On many benchmark functions the algorithm is competitive with other existing algorithms, including GA, PSO, the differential evolution algorithm (DE), evolution strategies and the particle swarm inspired evolutionary algorithm (PS-EA) [2-4]. Since its introduction in 2005, the ABC algorithm has been applied to many kinds of problems besides numerical function optimization. In [5], the ABC algorithm is applied to the generalized assignment problem (GAP), and the experimental results show that the proposed bee algorithm is very effective for small to medium sized GAP instances. Afterwards, a novel discrete artificial bee colony algorithm was proposed to solve the lot-streaming flow shop scheduling problem [6]. In addition, the ABC algorithm has also been applied to the design of infinite impulse response (IIR) filters [7]. Although the standard ABC algorithm provides a powerful method for optimization problems, it still has shortcomings such as premature convergence and stagnation. Consequently, some modified versions of the algorithm have been presented.
B. Akay and D. Karaboga modified the standard version of ABC by introducing two elements, a control parameter and a scaling factor [8], and used the modified ABC algorithm to solve constrained optimization problems [9]. In order to improve the convergence characteristics and prevent ABC from getting stuck in local solutions, different chaotic maps have been embedded to adapt the parameters of the ABC algorithm [10]. Similarly, the information of the global best solution was incorporated into the solution search equation, and a gbest-guided ABC (GABC) algorithm was proposed [11]. Furthermore, some hybrid swarm intelligence approaches have been developed by integrating ABC with other population-based algorithms such as GA and PSO [12, 13]. Numerical results show that these hybrid methods are effective and perform better than the original algorithms. However, compared with other swarm optimization algorithms [14, 15], the ABC algorithm does not make use of the information of the global best individual and the current best individual. Inspired by PSO, in this paper the search equations for the neighbor solution and the random solution are both modified by embedding the position information of the best food source in order to improve the search ability. The proposed algorithm is called the global artificial bee colony search algorithm (GABCS). Its performance is investigated on a set of benchmark functions for numerical function optimization. The rest of this paper is organized as follows. Section II describes the standard ABC algorithm. Section III presents the modified ABC algorithm. Section IV presents and discusses the experimental results.

II. Artificial Bee Colony Algorithm

The ABC algorithm is a nature-inspired algorithm based on the intelligent foraging behavior of a honey bee swarm. The colony of artificial bees consists of three groups of bees that search for food: employed bees, onlookers and scouts. The first half of the colony consists of the employed bees and the second half of the onlookers. The employed bees search for food around the food sources in their memory. Upon returning to the hive they perform a waggle dance to pass their food information to the rest of the colony (the onlookers). The onlookers wait around the dance floor and choose an employed bee to follow based on the nectar amount information shared by the employed bees. An employed bee whose food source has been exhausted becomes a scout that must find a new food source. For every food source there is only one employed bee; in other words, the number of employed bees is equal to the number of food sources around the hive. In the ABC algorithm, the position of a food source represents a possible solution to the
optimization problem and the nectar amount of a food source corresponds to the quality of the associated solution (fitness value).
The aim of the algorithm is to find the optimum values of the decision variables and objective function for the considered problem.
The detailed implementation procedures of the algorithm are given below:
Step 1: Randomly generate an initial population of N food sources within the boundaries of the variables:

x_{ij} = x_j^{min} + rand(0,1)(x_j^{max} - x_j^{min}),    (1)

where i = 1, ..., N, j = 1, ..., D; N is the number of food sources and D is the number of optimization variables.

Step 2: Evaluate the fitness of each food source (i.e. calculate the nectar amount) according to Eq. (2):

fitness_i = 1/(1 + f_i)  if f_i >= 0;   fitness_i = 1 + abs(f_i)  if f_i < 0,    (2)

where f_i is the cost value of solution i. For maximization problems, the cost function can be used directly as the fitness function.

Step 3: Each employed bee searches for a candidate food source v_i according to Eq. (3), evaluates the candidate and applies greedy selection to keep the better of v_i and x_i as the new food source:

v_{ij} = x_{ij} + phi_{ij}(x_{ij} - x_{kj}),    (3)

where j is a random integer in the range [1, D], k ∈ {1, 2, ..., N} is a randomly chosen index different from i, and phi_{ij} is a uniformly distributed real random number in the range [-1, 1].

Step 4: Calculate probability values based on the fitness values of the solutions in the population. Each onlooker selects a food source according to Eq. (4) by roulette-wheel selection and generates a candidate solution according to Eq. (3):

p_i = fitness_i / Σ_{i=1}^{N} fitness_i.    (4)

Step 5: If a food source cannot be improved within a predetermined number of trials ("limit"), it is abandoned and its employed bee becomes a scout, which generates a new food source randomly according to Eq. (1).

Step 6: Memorize the best solution found so far and repeat Steps 3-6 until the maximum number of iterations is reached.

III. Global Artificial Bee Colony Search Algorithm

In the standard ABC algorithm, the employed bees and onlookers exploit their solutions based only on the neighbor information of each individual. However, exploitation should refer to the ability to apply the knowledge of the good solutions already found in order to find better ones. In a bee swarm, experienced foragers can use previous information about the position and nectar amount of food sources to adjust their movement directions in the search space. For this reason, and inspired by particle swarm optimization with inertia weight, a global artificial bee colony search algorithm (GABCS) is proposed. Taking advantage of the position information of the best food source (i.e. the best solution), the solution search equation used by the employed bees and onlookers is modified by replacing Eq. (3) with Eq. (5):

v_{ij} = x_{ij} + phi_{ij}(x_{ij} - x_{kj}) + c_1 rand(0,1)(x_j^{best} - x_{ij}) + c_2 rand(0,1)(y_j^{best} - x_{ij}),    (5)

where c_1 and c_2 are two positive constants (in this paper c_1 = c_2 = 1.5), x_j^{best} is the jth element of the global best solution, y_j^{best} is the jth element of the best solution of the current iteration, and phi_{ij} is a uniformly distributed real random number in the range [-1, 1].

The second modification concerns the solution generated by the scout. First, a random solution x_i^{rand} is produced according to Eq. (6.a); meanwhile, the selected solution x_i is mutated into x_i^{mutation} using the information of the global best solution according to Eqs. (6.b) and (6.c). The fitness values of the randomly generated solution and the mutated solution are then compared, and the better one is chosen as the new food source:

x_{ij}^{rand} = x_j^{min} + rand(0,1)(x_j^{max} - x_j^{min}),    (6.a)

if rand(0,1) < 0.5:  x_{ij}^{mutation} = x_{ij} + rand(0,1)(1 - ...),    (6.b), (6.c)

where b is a scaling parameter which is generally a positive integer within the range [2, 5]; in this paper, b = 4.
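To make the modified search strategy concrete, the sketch below illustrates how a candidate solution could be generated with Eq. (5), how the roulette-wheel probabilities of Eq. (4) and the fitness of Eq. (2) could be computed, and how the modified scout step might look. It is only an illustrative sketch, not the authors' implementation; the function names are ours, and the exact mutation used for Eqs. (6.b) and (6.c) is an assumption, since only the leading part of those equations is legible above.

```python
import numpy as np

def fitness(cost):
    """Fitness mapping of Eq. (2): 1/(1+f) for f >= 0, 1+|f| otherwise."""
    return 1.0 / (1.0 + cost) if cost >= 0 else 1.0 + abs(cost)

def selection_probabilities(fitness_values):
    """Roulette-wheel probabilities of Eq. (4)."""
    f = np.asarray(fitness_values, dtype=float)
    return f / f.sum()

def gabcs_candidate(x_i, x_k, x_gbest, y_ibest, c1=1.5, c2=1.5, rng=None):
    """Candidate food source from the modified search equation, Eq. (5).

    x_i     -- current solution (1-D array of length D)
    x_k     -- randomly chosen neighbor solution, k != i
    x_gbest -- best solution found so far (global best)
    y_ibest -- best solution of the current iteration
    """
    rng = np.random.default_rng() if rng is None else rng
    v = np.array(x_i, dtype=float)
    j = rng.integers(v.size)          # a single randomly chosen dimension is perturbed
    phi = rng.uniform(-1.0, 1.0)      # phi_ij in [-1, 1]
    v[j] = (x_i[j]
            + phi * (x_i[j] - x_k[j])
            + c1 * rng.random() * (x_gbest[j] - x_i[j])
            + c2 * rng.random() * (y_ibest[j] - x_i[j]))
    return v

def scout_candidates(x_i, x_gbest, lower, upper, iteration, max_iterations,
                     b=4, rng=None):
    """Modified scout step: a random solution (Eq. 6.a) and a mutated solution.

    The mutation moves x_i toward the global best with a step scaled by
    (1 - iteration/max_iterations)**b; this form is an ASSUMPTION made for
    illustration, since Eqs. (6.b)/(6.c) are only partially legible.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = len(x_i)
    x_rand = lower + rng.random(d) * (upper - lower)          # Eq. (6.a)
    step = rng.random(d) * (1.0 - iteration / max_iterations) ** b
    x_mut = x_i + step * (x_gbest - x_i)                      # assumed mutation
    return x_rand, x_mut
```

As in the standard ABC, a candidate produced this way would replace x_i only if its fitness computed with Eq. (2) is higher (greedy selection), and the scout keeps whichever of the random and mutated solutions has the better fitness.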
IV. Experiments
A. Benchmark Functions

In order to compare the performance of the GABCS algorithm with the standard ABC, GABC, PSO [14] and the comprehensive learning particle swarm optimizer (CLPSO) [15], we choose two unimodal functions (f1, f2) and five multimodal benchmark functions (f3-f7). All functions are tested in 10 and 30 dimensions. Table 1 shows the benchmark functions.

B. Parameter Settings

The algorithms PSO, CLPSO, ABC, GABC and GABCS all have two parameters in common: the population size and the maximum number of iterations. When solving the 10-D problems, the population size is set to 50 and the maximum number of iterations (itermax) is set to 1200; when solving the 30-D problems, the population size is set to 80 and itermax is set to 2000. The other parameters of the PSO and CLPSO algorithms are the same as in references [14] and [15], respectively. For ABC, GABC and GABCS, the employed bees and the onlookers each make up 50% of the colony, the number of scouts is set to 1, and the parameter "limit" is set to 50. All experiments were repeated 30 times, and the mean values and standard deviations of the results are reported.
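For convenience, the settings just described can be collected in a small configuration structure. The sketch below merely restates the values given in this subsection; the key names are ours, not from the paper.

```python
# Illustrative summary of the experimental settings stated above (names are ours).
EXPERIMENT_SETTINGS = {
    "runs": 30,                # each experiment repeated 30 times
    "limit": 50,               # abandonment counter for ABC, GABC and GABCS
    "scouts": 1,               # one scout bee
    "employed_fraction": 0.5,  # employed bees = onlookers = 50% of the colony
    "10D": {"population": 50, "max_iterations": 1200},
    "30D": {"population": 80, "max_iterations": 2000},
}
```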
Table 1: The seven benchmark functions for experimental study.
f | Formula | Search range | f(x*)
Sphere | f1(x) = Σ_{i=1}^{D} x_i^2 | [-100, 100]^D | f(0) = 0
Rosenbrock | f2(x) = Σ_{i=1}^{D-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2] | [-2.048, 2.048]^D | f(1) = 0
Ackley | f3(x) = -20 exp(-0.2 √((1/D) Σ_{i=1}^{D} x_i^2)) - exp((1/D) Σ_{i=1}^{D} cos(2πx_i)) + 20 + e | [-32, 32]^D | f(0) = 0
Griewank | f4(x) = (1/4000) Σ_{i=1}^{D} x_i^2 - Π_{i=1}^{D} cos(x_i/√i) + 1 | [-600, 600]^D | f(0) = 0
Rastrigin | f5(x) = Σ_{i=1}^{D} [x_i^2 - 10 cos(2πx_i) + 10] | [-5.12, 5.12]^D | f(0) = 0
Schwefel | f6(x) = D·418.9829 - Σ_{i=1}^{D} x_i sin(√|x_i|) | [-500, 500]^D | f(420.96) = 0
Schaffer | f7(x) = 0.5 + [sin^2(√(Σ_{i=1}^{D} x_i^2)) - 0.5] / [1 + 0.001 Σ_{i=1}^{D} x_i^2]^2 | [-100, 100]^D | f(0) = 0
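To make the benchmark definitions of Table 1 concrete, they can be transcribed as vectorized NumPy functions as shown below. This is our own transcription for illustration, not code released with the paper; the function names are ours.

```python
import numpy as np

def sphere(x):                                  # f1, minimum f(0) = 0
    return np.sum(x ** 2)

def rosenbrock(x):                              # f2, minimum f(1) = 0
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def ackley(x):                                  # f3, minimum f(0) = 0
    d = len(x)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)

def griewank(x):                                # f4, minimum f(0) = 0
    i = np.arange(1, len(x) + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def rastrigin(x):                               # f5, minimum f(0) = 0
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def schwefel(x):                                # f6, minimum f(420.96, ...) = 0
    return 418.9829 * len(x) - np.sum(x * np.sin(np.sqrt(np.abs(x))))

def schaffer(x):                                # f7, minimum f(0) = 0
    s = np.sum(x ** 2)
    return 0.5 + (np.sin(np.sqrt(s)) ** 2 - 0.5) / (1.0 + 0.001 * s) ** 2
```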
C. Experimental Results

In this section, the proposed algorithm is compared with the other four algorithms on the seven benchmark functions. The means and standard deviations of the function values obtained by these algorithms under the same conditions are presented in Table 2; the best results among the five algorithms are shown in bold. As seen from the results, all artificial bee colony algorithms have a clear advantage in convergence and robustness over the two PSO algorithms. For the unimodal functions, the proposed algorithm performs better only on the Rosenbrock function with dimension 30.
Since GABCS considers the effect of the good solutions found so far, it has a larger potential search space and therefore cannot converge as fast as ABC and GABC. However, GABCS obtained better results than the other algorithms on all five multimodal functions. The GABCS algorithm performs better on more complex problems on which the other algorithms miss the global optimum. The Schwefel function is a good example: it traps all of the other algorithms in local optima on the 30-D problem, whereas GABCS avoids falling into the deep local optimum and obtains the global optimum. Owing to space limitations, only two representative convergence curves are shown in Figure 1.
Figure 1. Convergence curves of the five algorithms (PSO, CLPSO, ABC, GABC, GABCS) for the Ackley function with D = 10 and D = 30: mean of best function values versus number of iterations.
Table 2: Mean of best results obtained through 30 runs for each function by five algorithms. For GABC, C = 1.5.

f  | D  | PSO                 | CLPSO               | ABC                 | GABC                | GABCS
f1 | 10 | 4.55e-031±1.47e-030 | 1.97e-014±1.83e-014 | 1.21e-016±5.51e-017 | 6.31e-017±1.14e-017 | 5.96e-017±1.48e-017
f1 | 30 | 1.41e-013±1.66e-013 | 2.09e-014±1.98e-014 | 1.18e-015±2.53e-015 | 5.81e-016±1.15e-016 | 5.21e-016±1.71e-017
f2 | 10 | 3.15e-000±1.15e-000 | 3.41e-000±7.34e-001 | 5.48e-001±4.29e-001 | 5.20e-003±4.40e-003 | 2.08e-002±3.21e-002
f2 | 30 | 2.46e+001±1.96e+001 | 2.51e+001±1.33e+000 | 1.23e+001±4.02e+000 | 8.23e+000±7.28e+000 | 1.24e+001±2.14e+000
f3 | 10 | 6.63e-015±1.25e-015 | 8.57e-008±3.30e-008 | 9.71e-015±3.41e-015 | 6.63e-015±1.25e-015 | 3.79e-015±9.17e-016
f3 | 30 | 2.28e-007±2.82e-007 | 8.87e-004±1.84e-004 | 2.56e-013±4.99e-014 | 4.50e-014±2.05e-015 | 2.96e-014±2.05e-015
f4 | 10 | 5.84e-002±2.15e-002 | 1.17e-005±9.89e-006 | 2.22e-012±8.09e-012 | 5.92e-017±5.73e-017 | 1.48e-017±3.91e-017
f4 | 30 | 1.33e-002±8.80e-003 | 1.09e-004±4.86e-005 | 7.99e-011±1.37e-010 | 7.77e-016±3.84e-016 | 7.40e-017±6.41e-017
f5 | 10 | 9.10e-003±3.53e-002 | 5.11e-012±8.47e-012 | 0±0                 | 0±0                 | 0±0
f5 | 30 | 4.05e-002±1.11e-001 | 3.13e-002±1.05e-002 | 5.92e-014±7.56e-014 | 0±0                 | 0±0
f6 | 10 | 3.85e+000±1.59e+000 | 1.02e-006±1.13e-006 | 1.18e-016±4.58e-016 | 0±0                 | 0±0
f6 | 30 | 4.08e+001±1.15e+001 | 2.72e-000±7.81e-001 | 2.14e-002±3.71e-002 | 2.79e-011±7.35e-012 | 0±0
f7 | 10 | 1.52e-002±1.16e-002 | 9.71e-003±1.33e-007 | 7.41e-002±1.29e-002 | 2.62e-002±1.42e-002 | 4.27e-003±9.78e-005
f7 | 30 | 1.22e-001±2.82e-002 | 8.12e-002±3.90e-003 | 4.91e-001±4.70e-003 | 4.46e-001±2.74e-002 | 3.32e-002±9.70e-003
V. Conclusion

In this paper, a modified version of the artificial bee colony (ABC) algorithm was presented. Motivated by the foraging behavior of experienced bees, the standard ABC algorithm was improved by incorporating into the solution search equation the global best food-source position found so far and the best individual of the current iteration. To demonstrate the performance of the proposed algorithm, the PSO, CLPSO, ABC, GABC and GABCS algorithms were tested on two unimodal and five multimodal benchmark functions. The simulation results show that the GABCS algorithm outperforms the other algorithms in almost all of these experiments, especially on the multimodal benchmark functions.

Acknowledgment

This work is partially supported by the Fundamental Research Funds for the Central Universities (Grant No. 2010ZT03) and the Science and Technology Planning Program of the General Administration of Quality Supervision, Inspection and Quarantine of the PRC (Grant No. 2010QK302). The authors also thank Professor Suganthan for the code of CLPSO.

References
[1] D. Karaboga, An idea based on honey bee swarm for numerical optimization, Technical Report TR06, Computer Engineering Department, Erciyes University, Kayseri, Turkey, 2005.
[2] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, Journal of Global Optimization, vol. 39, 2007, pp. 459-471.
[3] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Applied Soft Computing, vol. 8, 2008, pp. 687-697.
[4] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Applied Mathematics and Computation, vol. 214, 2009, pp. 108-132.
[5] A. Baykasoglu, L. Ozbakir, P. Tapkan, Artificial bee colony algorithm and its application to generalized assignment problem, in: Swarm Intelligence: Focus on Ant and Particle Swarm Optimization, 2007, pp. 113-144.
[6] Q.-K. Pan, M. Fatih Tasgetiren, P.N. Suganthan, T.J. Chua, A discrete artificial bee colony algorithm for the lot-streaming flow shop scheduling problem, Information Sciences, vol. 181, 2011, pp. 2455-2468.
[7] N. Karaboga, A new design method based on artificial bee colony algorithm for digital IIR filters, Journal of the Franklin Institute, vol. 346, 2009, pp. 328-348.
[8] B. Akay, D. Karaboga, A modified Artificial Bee Colony algorithm for real-parameter optimization, Information Sciences, 2010, in press.
[9] D. Karaboga, B. Akay, A modified Artificial Bee Colony (ABC) algorithm for constrained optimization problems, Applied Soft Computing Journal, vol. 11, 2011, pp. 3021-3031.
[10] B. Alatas, Chaotic bee colony algorithms for global numerical optimization, Expert Systems with Applications, vol. 37, 2010, pp. 5682-5687.
[11] G. Zhu, S. Kwong, Gbest-guided artificial bee colony algorithm for numerical function optimization, Applied Mathematics and Computation, vol. 217, 2010, pp. 3166-3173.
[12] H. Zhao, Z. Pei, J. Jiang, R. Guan, C. Wang, X. Shi, A hybrid swarm intelligent method based on genetic algorithm and artificial bee colony, in: First International Conference on Advances in Swarm Intelligence, Springer Verlag, Beijing, China, 2010, pp. 558-565.
[13] X. Shi, Y. Li, H. Li, R. Guan, L. Wang, Y. Liang, An integrated algorithm based on artificial bee colony and particle swarm optimization, in: Sixth International Conference on Natural Computation, IEEE, Yantai, China, 2010, pp. 2586-2590.
[14] Y. Shi, R.C. Eberhart, Empirical study of particle swarm optimization, in: International Conference on Evolutionary Computation, IEEE, Washington, USA, 1999, pp. 1945-1950.
[15] J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Transactions on Evolutionary Computation, vol. 10, 2006, pp. 281-295.