JOURNAL OF COMPUTERS, VOL. 6, NO. 9, SEPTEMBER 2011
1935
Parameters Optimization of Least Squares Support Vector Machines and Its Application

Chunli Xie 1,2
1. Dalian University of Technology, School of Electronic and Information Engineering, Dalian, 116024, China
2. Dalian Nationalities University, College of Electromechanical and Information Engineering, Dalian, 116024, China
Email: [email protected]

Cheng Shao 1 and Dandan Zhao 3
3. Dalian Nationalities University, School of Computer Science and Engineering, Dalian, 116024, China
Email: [email protected], [email protected]

Abstract—Parameter optimization plays an important role in the performance of least squares support vector machines (LS-SVM). In this paper, a novel parameter optimization method for LS-SVM is presented based on the chaotic ant swarm (CAS) algorithm. Using this method, an optimization model is established in which the fitness function is the mean square error (MSE) index and the constraints are the ranges of the design parameters. After its effectiveness is validated on an artificial-data experiment, the proposed method is applied to the identification of the inverse model of a nonlinear under-actuated system. Finally, real-data simulation results are given to show its efficiency.

Index Terms—Least Squares Support Vector Machines, Parameters Optimization, Chaotic Ant Swarm Algorithm
I. INTRODUCTION
A novel statistical learning method called Support Vector Machines (SVM) was presented by Vapnik in 1995. Owing to advantages such as its complete statistical learning theory foundation and excellent learning ability, SVM has become quite an active research field in machine learning and is broadly used in many areas such as pattern recognition and regression estimation [1, 2]. The classical training algorithm of SVM is equivalent to solving a quadratic program with linear inequality constraints. Least squares support vector machines (LS-SVM) were introduced by Suykens et al. as a reformulation of the standard SVM [3, 4], which simplifies the training process of the standard SVM to a great extent by replacing the inequality constraints with equality ones. The simplicity of LS-SVM has promoted the application of SVM, and many pattern recognition and function approximation problems have been tackled with LS-SVM in the last decade [5-9]. The parameters in the regularization term and the kernel function, jointly referred to as the LS-SVM parameters, play an important role in the algorithm's performance. Corresponding author: Chunli Xie.
© 2011 ACADEMY PUBLISHER doi:10.4304/jcp.6.9.1935-1941
The existing techniques for tuning the LS-SVM parameters can be summarized into two kinds: one is based on analytical techniques, the other on heuristic searches. The first kind determines the parameters using gradients of some generalized error measure [10]. The second kind determines the parameters using modern heuristic algorithms including genetic algorithms (GA), simulated annealing (SA), particle swarm optimization (PSO) and other evolutionary algorithms [11-15]. Iterative gradient-based algorithms rely on a smoothed approximation of a function, so they do not ensure that the search direction points exactly to an optimum of the generalization performance measure, which is often discontinuous. Grid search [16] is one of the conventional approaches for discontinuous problems; however, it requires an exhaustive search over the parameter space, which is time-consuming, and it also requires locating the interval of feasible solutions and a suitable sampling step. In this paper, a novel parameter optimization algorithm is presented based on the principles of the chaotic ant swarm (CAS) algorithm. Inspired by the chaotic and self-organizing behavior of ants in nature, the CAS algorithm [17] was developed in 2006; it combines the chaotic behavior of the individual ant with the intelligent foraging actions of the ant colony via an organization variable in order to solve optimization problems. Similar to GA, EA and PSO, the CAS algorithm is a population-based optimization tool that searches for optima by updating generations. However, unlike GA and EA, the CAS algorithm does not need evolutionary operators such as crossover and mutation. Compared to GA and EA, the advantages of the CAS algorithm are that it possesses the capability to escape from local optima, is easy to implement, and has fewer parameters to tune.
Compared to PSO, the advantage of the CAS algorithm is its higher convergence precision. The CAS algorithm has been successfully applied to parameter estimation, artificial network training, fuzzy system control, etc. [18-26]. In this paper, the CAS algorithm is used for the parameter optimization of LS-SVM, and the feasibility of this approach is examined on a test function and a nonlinear under-actuated system.

This paper is organized as follows. The LS-SVM regression algorithm is briefly reviewed in Section 2. The parameter optimization algorithm based on the CAS algorithm is addressed in Section 3. Testing and simulation results demonstrating the effectiveness of the proposed method are presented in Section 4. The application of LS-SVM based on the CAS algorithm is given in Section 5. Finally, the paper is concluded in Section 6.

II. LS-SVM REGRESSION
The LS-SVM, evolved from the SVM, changes the inequality constraints of an SVM into equality constraints and takes the sum of squared errors (SSE) loss function as the empirical loss over the training set. The problem then reduces to solving a set of linear equations. This can be described as follows [4]. Given the following training sample set D:
    D = {(x_k, y_k) | k = 1, 2, ..., N}

where N is the total number of training data pairs, x_k ∈ R^n is the regression vector and y_k ∈ R is the output. According to SVM theory, the input space R^n is mapped into a feature space, and the linear equation in the feature space can be defined as:

    f(x) = w^T ϕ(x) + b                                              (1)

where the nonlinear mapping ϕ : R^n → R^{n_h} maps the input data into a so-called high-dimensional feature space (which can be of infinite dimension) and w ∈ R^{n_h} is the weight vector. The regularized cost function of the LS-SVM is given as:

    min J(w, e) = (1/2) w^T w + (γ/2) Σ_{k=1}^{N} e_k²
    s.t.  y_k = w^T ϕ(x_k) + b + e_k,  k = 1, 2, ..., N              (2)

where e_k ∈ R are slack variables, b ∈ R is a bias term and γ ∈ R is the regularization parameter. The Lagrangian corresponding to Eq. (2) can be defined as follows:

    L(w, b, e; α) = J(w, e) − Σ_{k=1}^{N} α_k {w^T ϕ(x_k) + b + e_k − y_k}   (3)

where α_k ∈ R (k = 1, 2, ..., N) are the Lagrange multipliers. The KKT conditions can be expressed by

    w = Σ_{k=1}^{N} α_k ϕ(x_k)                                       (4)
    Σ_{k=1}^{N} α_k = 0                                              (5)
    α_k = γ e_k,  k = 1, 2, ..., N                                   (6)
    w^T ϕ(x_k) + b + e_k − y_k = 0,  k = 1, 2, ..., N                (7)

After elimination of w and e_k, the solution of the optimization problem can be obtained by solving the following set of linear equations:

    [ 0     1_v^T        ] [ b ]   [ 0 ]
    [ 1_v   Ω + γ⁻¹ I    ] [ α ] = [ y ]                             (8)

with y = [y_1, ..., y_N]^T ∈ R^N, 1_v = [1, ..., 1]^T ∈ R^N, α = [α_1, ..., α_N]^T ∈ R^N, and Ω an N × N kernel matrix. By using the kernel trick [2], one obtains

    Ω_kl = ϕ(x_k)^T ϕ(x_l) = K(x_k, x_l),  ∀ k, l = 1, 2, ..., N

and the resulting LS-SVM regression model becomes

    f(x) = Σ_{k=1}^{N} α_k K(x, x_k) + b                             (9)

where α_k and b are the solution of Eq. (8).

Note that the dot product ϕ(·)^T ϕ(·) in the feature space is replaced by a prechosen kernel function K(·,·) thanks to the kernel trick. Thus, there is no need to construct the weight vector w or to know the nonlinear mapping ϕ(·) explicitly. Given a training set, training an LS-SVM reduces to solving the set of linear equations (8), which greatly simplifies the regression problem. The chosen kernel function must satisfy Mercer's condition [2]. Possible kernel functions are, e.g.:

l Linear kernel: K(x_k, x_l) = x_k · x_l
l Polynomial kernel: K(x_k, x_l) = (x_k · x_l + 1)^d
l Gaussian RBF kernel: K(x_k, x_l) = exp(−||x_k − x_l||² / (2σ²))

where d denotes the polynomial degree and σ is the kernel (bandwidth) parameter. It is well known that the generalization performance of the LS-SVM depends on a good setting of the regularization parameter and the kernel parameter. To achieve better generalization performance, it is necessary to select and optimize these parameters.
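Training an LS-SVM with the RBF kernel then amounts to assembling and solving the linear system of Eq. (8). A minimal NumPy sketch follows; the function names (`lssvm_fit`, `lssvm_predict`) are illustrative, not from the paper:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    """Gaussian RBF kernel matrix: K(x_k, x_l) = exp(-||x_k - x_l||^2 / (2 sigma^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """Solve the (N+1) x (N+1) linear system of Eq. (8) for (b, alpha)."""
    N = len(y)
    Omega = rbf_kernel(X, X, sigma)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0                      # 1_v^T
    A[1:, 0] = 1.0                      # 1_v
    A[1:, 1:] = Omega + np.eye(N) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]              # b, alpha

def lssvm_predict(Xtest, Xtrain, alpha, b, sigma):
    """Evaluate the model of Eq. (9): f(x) = sum_k alpha_k K(x, x_k) + b."""
    return rbf_kernel(Xtest, Xtrain, sigma) @ alpha + b
```

As the text notes, the whole fit is one dense linear solve, which is what makes LS-SVM training so much simpler than the quadratic program of the standard SVM.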
III. PARAMETERS OPTIMIZATION OF LS-SVM BASED ON THE CAS ALGORITHM
A. Overview of the CAS Algorithm
Ants have attracted significant interest from scientists because their colonies achieve self-organizing behavior and a high level of structure. Most existing ant-inspired optimization algorithms are based on random meta-heuristics from nondeterministic probability theory. However, Cole suggested that an ant colony exhibits periodic behavior while a single ant shows low-dimensional deterministic chaotic activity patterns [27]. From the viewpoint of dynamics, the chaotic behavior of a single ant is related to the self-organizing and foraging behaviors of the ant colony. The chaotic behavior of the individual ant and the intelligent organization actions of the colony are adaptations to the environment; these behaviors help the ants find food and survive. Based on this theory, a novel optimization algorithm, called the CAS algorithm, was presented. In the CAS algorithm, the chaotic system θ_{n+1} = θ_n exp(μ(1 − θ_n)) [28] is introduced into the heuristic equation to obtain the chaotic search initially. The chaotic behavior of the individual ant is adjusted by a successive decrease of the organization variable μ_i, which eventually leads the individual to move to the new site with the best fitness value. The term (p_id − θ_id) is introduced to achieve the information exchange between individuals and the movement toward the site with the best fitness value; p_id is selected based on fitness, a notion widely used in optimization methods such as genetic algorithms and tabu search, and θ_id is the state of the d-th dimension of ant i. The CAS algorithm is an iterative optimization algorithm, first employed in the optimization of sequential space. In sequential space coordinates, the mathematical description [17] of the CAS algorithm is as follows:

    μ_i(n) = μ_i(n−1)^(1+r_i)

    θ_id(n) = (θ_id(n−1) + (7.5/ψ_id)·V_i) · exp[(1 − e^(−a·μ_i(n))) · (3 − ψ_id·(θ_id(n−1) + (7.5/ψ_id)·V_i))]
              − (7.5/ψ_id)·V_i + e^(−2a·μ_i(n)+δ) · (p_id(n−1) − θ_id(n−1))            (10)
where i = 1, 2, ..., N and N is the size of the ant swarm; d = 1, 2, ..., L, where L is the dimension of the optimization space; n denotes the current iteration and n−1 the previous one; μ_i is the current state of the i-th ant's organization variable, with μ_i(0) = 0.999; r_i is termed the organization factor of ant i; ψ_id determines the search range of the d-th element of the variable in the search space; V_i determines the search region of ant i and offers the advantage that ants can search diverse regions of the problem space — the value of V_i should be suitably selected according to the concrete optimization problem; a is a sufficiently large positive constant and can be selected as a = 200; δ (0 ≤ δ ≤ 2/3) is a constant; p_id(n−1) is the best position found by the i-th ant and its neighbors within n−1 steps; θ_id is the current state of the d-th dimension of ant i, with θ_id(0) = (7.5/ψ_id)(1 − V_i)R, where R is a uniformly distributed random number in [0, 1].
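The update law of Eq. (10) can be sketched numerically. Below is a vectorized NumPy sketch under the parameter choices stated above; the function and variable names are illustrative:

```python
import numpy as np

def cas_step(theta, mu, p_best, r, psi, V, a=200.0, delta=2.0 / 3.0):
    """One CAS iteration per Eq. (10).
    theta, p_best: (N, L) ant positions and best-known neighborhood positions;
    mu, r: (N,) organization variables and organization factors;
    psi: (N, L) search-range parameters; V: scalar search-region parameter."""
    mu_new = mu ** (1.0 + r)                       # mu_i(n) = mu_i(n-1)^(1+r_i)
    shift = 7.5 / psi * V                          # (7.5 / psi_id) * V_i
    z = theta + shift
    # chaotic term: z * exp[(1 - e^(-a mu))(3 - psi z)] - shift
    chaotic = z * np.exp((1.0 - np.exp(-a * mu_new[:, None])) * (3.0 - psi * z)) - shift
    # convergence term pulling each ant toward its neighborhood best
    theta_new = chaotic + np.exp(-2.0 * a * mu_new[:, None] + delta) * (p_best - theta)
    return theta_new, mu_new
```

Early on, μ_i is close to 1, the pull term e^(−2aμ+δ) is negligible and the ants move chaotically; as μ_i decays, the chaotic factor vanishes and the pull toward p_id dominates.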
r_i and ψ_id are two important parameters. r_i is the organization factor of ant i, which directly affects the convergence speed of the CAS algorithm. If r_i is very large, the iteration step of the "chaotic" search is small; the system then converges quickly, but the desired optima or near-optima cannot be achieved. If r_i is very small, the iteration step of the "chaotic" search is large; the system then converges slowly and the runtime is longer. Since small changes are desired as the iteration evolves, the value of r_i is typically chosen as 0 < r_i ≤ 0.5. The form of r_i can be designed according to the concrete problem and runtime, and each ant can have a different r_i, e.g. r_i = 0.3 + 0.02 · rand(1). ψ_id affects the search range of the CAS algorithm: if the interval of the search is [−ω_id/2, ω_id/2], then we can obtain the approximate formula

    ω_id = 7.5 / ψ_id .
In principle, a neighborhood can be any ordered finite set. The neighbors are not necessarily individuals near each other in the parameter space, but rather near each other in a topological space; in fact, the CAS algorithm does not impose any limitation on the definition of the distance between two ants. In order to simulate the behavior of ants, we use the Euclidean distance. Supposing there are two ants at positions (θ_i1, ..., θ_iL) and (θ_j1, ..., θ_jL), respectively, where i, j = 1, 2, ..., N (N is the size of the ant swarm) and i ≠ j, the distance between the two ants is

    d_ij = √((θ_i1 − θ_j1)² + ... + (θ_iL − θ_jL)²)
In the CAS algorithm, the neighbors can be selected in two ways. The first is a fixed number of nearest neighbors: the nearest m ants are selected as the neighbors of a single ant. The second lets the number of neighbors increase with the iterative steps, due to the influence of the self-organization behavior of ant i: the impact of organization becomes stronger over time and the neighborhood of the ant grows. That is to say, the number of nearest neighbors changes dynamically as time evolves or the iterative steps increase; the number q of neighbors of a single ant is defined to increase every T iterative steps.
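The first neighbor-selection scheme and the choice of p_id can be sketched as follows; the function names are illustrative, and fitness is minimized:

```python
import numpy as np

def nearest_neighbors(positions, i, m):
    """Indices of the m ants nearest to ant i in Euclidean distance
    (the first neighbor-selection scheme), excluding ant i itself."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    order = np.argsort(d)
    return [int(j) for j in order if j != i][:m]

def neighborhood_best(positions, fitness, i, m):
    """p_i: best position found among ant i and its m nearest neighbors
    (minimization: lower fitness is better)."""
    idx = nearest_neighbors(positions, i, m) + [i]
    best = min(idx, key=lambda j: fitness[j])
    return positions[best]
```

The second scheme would simply pass a value of m that grows every T iterations.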
B. Parameters Optimization of LS-SVM Based on the CAS Algorithm
As stated above, the CAS algorithm has a powerful global search ability to find exact or approximate solutions to optimization and search problems. Thus, a parameter selection approach for LS-SVM using the CAS algorithm is presented in this paper. Two key factors determine the optimized parameters: one is how to represent the parameters as an ant's position, namely how to encode; the other is how to define the fitness function that evaluates the goodness of an ant. These two key factors are given as follows.

Encoding parameters: the optimized parameters for the LS-SVM comprise the kernel parameter and the regularization parameter. When solving the parameter selection problem with the CAS algorithm, each ant represents a potential solution, namely a parameter combination; an m-parameter combination is thus encoded as a vector of dimension m. For example, if the Gaussian radial basis function (RBF) is chosen as the kernel function, we denote the vector as v = (γ, σ).

Fitness function: the fitness function is a generalization performance measure, of which several descriptions exist, and the corresponding fitness can be determined accordingly. Here the fitness of an ant is evaluated by the mean square error (MSE) index, defined as the error between the function estimate of the LS-SVM and the reference model:

    MSE = (1/N) Σ_{i=1}^{N} (y_i − f(x_i))²

where N denotes the number of training data, y_i is the reference model output, and f(x_i) is the function estimate of the LS-SVM. The CAS algorithm aims at minimizing the MSE by choosing the optimal parameter combination, that is,

    min f(z_1, z_2) = min MSE                                        (11)

subject to the inequality constraints

    g_i ≤ z_i ≤ h_i,  i = 1, 2

where the optimization variables z_1 and z_2 are γ and σ, respectively, and [g_i, h_i] denotes the value range of each variable, which differs with the reference model and training data. The flowchart of the CAS-based parameter selection algorithm for the LS-SVM is shown in Fig. 1.
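The outer optimization loop of this section can be sketched as follows. For brevity, a simple population-based random search with a shrinking step stands in for the full CAS dynamics of Eq. (10); the function names and the toy fitness in the usage note are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def search_parameters(fitness, bounds, n_ants=20, n_iter=200):
    """Minimize fitness(z_1, z_2) subject to box constraints g_i <= z_i <= h_i.
    A clipped random-walk population search stands in here for the CAS
    update rule; the interface mirrors the paper's setup."""
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    ants = lo + (hi - lo) * rng.random((n_ants, len(bounds)))
    best, best_f = None, np.inf
    for z in ants:                                 # evaluate initial swarm
        f = fitness(*z)
        if f < best_f:
            best, best_f = z.copy(), f
    for n in range(n_iter):
        step = (hi - lo) * 0.5 / (1 + n)           # shrinking search radius
        ants = np.clip(ants + rng.normal(size=ants.shape) * step, lo, hi)
        for z in ants:
            f = fitness(*z)
            if f < best_f:
                best, best_f = z.copy(), f
    return best, best_f
```

For the LS-SVM case, `fitness(gamma, sigma)` would train the model on the training set and return the MSE of Eq. (11) against the reference output.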
IV. SIMULATION RESEARCH
An experiment on a typical test-function estimation is performed to evaluate the performance of the proposed parameter selection method. All experiments are run on a PC with a Pentium IV 2.93 GHz processor, 512 MB of main memory and the Matlab 6.5 simulation software. Consider the one-dimensional Sinc function

    f(x) = sinc(x) + v,  x ∈ [−3, 3]                                 (12)

where v is Gaussian noise with zero mean and standard deviation 0.1. We select 100 data pairs from the input range as the training set. The aim is to minimize the MSE via the CAS algorithm so as to select the optimal kernel parameter σ of the Gaussian RBF kernel and the regularization parameter γ. The search ranges are set as γ ∈ [0, 30] and σ ∈ [0, 5]. The CAS algorithm parameters are chosen as follows: N = 20, maximum number of iterations 200, δ = 2/3, a = 200, r_i = 0.05 + 0.02 × rand(), ψ_i1 = 0.25, ψ_i2 = 1.5. In the simulation, the first way is used to select the neighbours of a single ant. The resulting parameters are γ = 7.7379 and σ = 0.8851, respectively. The training result of the LS-SVM with these parameters is shown in Fig. 2, from which it can be seen that the LS-SVM realizes very good function approximation; the CAS algorithm thus successfully performs parameter selection for the test function.
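The training set for Eq. (12) can be generated as follows. This sketch assumes numpy's normalized sinc convention, sin(πx)/(πx) (the paper may intend the unnormalized sin(x)/x), and uses an illustrative seed:

```python
import numpy as np

rng = np.random.default_rng(42)  # illustrative seed, not from the paper

def make_sinc_data(n=100, noise_std=0.1):
    """Training data for Eq. (12): y = sinc(x) + v, x in [-3, 3],
    with v ~ N(0, 0.1^2) Gaussian noise."""
    x = np.linspace(-3.0, 3.0, n)
    y = np.sinc(x) + rng.normal(0.0, noise_std, size=n)
    return x.reshape(-1, 1), y
```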
Figure 2. Simulation result of the Sinc function (curves: simulation and real; dots: training data)
To demonstrate the effectiveness of this method, we also apply a genetic algorithm (GA: crossover rate 0.8, mutation rate 0.2%, population size 30, maximum 200 iterations) and particle swarm optimization (PSO: population size and maximum number of iterations the same as for the GA) in repeated experiments. The LS-SVM model is tested on a testing set of about 50 randomly generated data points, and the average results are recorded in Table 1, which shows that the testing MSE of the proposed method is the smallest.
TABLE I. AVERAGE RESULTS OF PARAMETERS OPTIMIZATION OF LS-SVM BY DIFFERENT METHODS

    Selection method    Testing error
    GA                  7.9057×10⁻⁴
    PSO                 5.5782×10⁻⁴
    CAS                 3.1550×10⁻⁴

V. APPLICATION OF LS-SVM BASED ON THE CAS ALGORITHM
The inverted pendulum is an artificially created complex nonlinear system used to study the control of nonlinear, high-order, under-actuated systems. Characterized as a typical nonlinear, high-order, unstable and under-actuated system, it is very difficult to model precisely, so model identification research for the inverted pendulum system is very important. The GPIP2003 single planar inverted pendulum is considered as the plant in this paper, and its inverse model is identified by LS-SVM. We adopt the example provided by the inverted pendulum toolbox, in which the pendulum is swung from the lower position to the upper one. After the pendulum reaches the upper position, a disturbance is applied by plucking the pendulum. The experimental data from the process of overcoming the disturbance up to stabilization are sampled into the Matlab workspace through the communication interface and stored as a text document with the "save" command. The data include seven items: the sampling period, the control variable, the angle of the pendulum, the position of the cart, the angular rate of the pendulum, the velocity of the cart and the displacement of the objective. The angle of the pendulum, the angular rate of the pendulum, the position of the cart, the velocity of the cart and the control variable are selected as the multi-input, single-output model for the LS-SVM. 100 data pairs are chosen as the training sample set, and 40 pairs are selected as the testing sample set. With the minimum MSE as the fitness function, we use the CAS algorithm to optimize the regularization parameter γ and the kernel parameter σ. The resulting parameters are γ = 7.7379 and σ = 0.8851, and the testing error is 0.0022. The inverse model is estimated using these results; the simulation result is shown in Fig. 3, which shows that the estimated values approach the real sampled values. The simulation results show that the LS-SVM model, once optimized by the CAS algorithm, has good generalization performance and strong robustness.
Figure 3. Simulation result of inverse modeling for the inverted pendulum (curves: simulation and real)
VI. CONCLUSION
Appropriate parameters are crucial to the learning results and generalization ability of least squares support vector machines (LS-SVM). This paper presents a novel parameter selection method for LS-SVM based on the chaotic ant swarm (CAS) algorithm. The selection of the LS-SVM parameters is treated as a swarm intelligence optimization problem, and the CAS optimization algorithm is employed to search for the optimum of the objective function. The CAS algorithm is a global search method that does not need to consider the dimensionality and complexity of the LS-SVM. Simulation and experimental results show that the proposed method is an effective approach for parameter optimization.
ACKNOWLEDGMENT
The authors are grateful to the anonymous referees for their valuable remarks and helpful suggestions, which have significantly improved the paper. This work was supported in part by the Key Project of Chinese National Programs for Fundamental Research and Development (973 Program) (2007CB714006), the National Nature Science Foundation of China (61074020) and the Fundamental Research Funds for the Central Universities (DC10040101).

REFERENCES
[1] V. N. Vapnik, The Nature of Statistical Learning Theory, chapter 6, New York: Springer-Verlag, 1995.
[2] V. N. Vapnik, "An overview of statistical learning theory," IEEE Trans. on Neural Networks, vol. 10, pp. 988-999, 1999.
[3] J. A. K. Suykens, "Support vector machines: a nonlinear modeling and control perspective," European Journal of Control, vol. 7, pp. 311-327, 2001.
[4] J. A. K. Suykens, "Nonlinear modeling and support vector machines," Proc. of 18th Annu. IEEE Conference on Instrumentation and Measurement Technology, Budapest, pp. 287-294, 2001.
[5] K. S. Chua, "Efficient computations for large least square support vector machine classifiers," Pattern Recognition Letters, vol. 24, pp. 75-80, 2003.
[6] I. Goethals, K. Pelckmans, J. A. K. Suykens and B. De Moor, "Identification of MIMO Hammerstein models using least squares support vector machines," Automatica, vol. 41, pp. 1263-1272, 2005.
[7] L. Bako, G. Mercere, S. Lecoeuche and M. Lovera, "Recursive subspace identification of Hammerstein models based on least squares support vector machines," IET Control Theory and Applications, vol. 3, pp. 1209-1216, 2009.
[8] L. K. Hou, Q. X. Yang and J. L. An, "Modeling of SRM based on XS-LSSVR optimized by GDS," IEEE Transactions on Applied Superconductivity, vol. 20, pp. 1102-1105, 2010.
[9] Z. J. Li, Y. N. Zhang and Y. P. Yang, "Support vector machine optimal control for mobile wheeled inverted pendulums with unmodelled dynamics," Neurocomputing, vol. 73, pp. 2773-2782, 2010.
[10] N. E. Ayat, M. Cheriet and C. Y. Suen, "Automatic model selection for the optimization of SVM kernels," Pattern Recognition, vol. 38, pp. 1733-1745, 2005.
[11] Y. W. Kang, J. Li, G. Y. Cao, H. Y. Tu, J. Li and J. Yang, "Dynamic temperature modeling of an SOFC using least squares support vector machines," Journal of Power Sources, vol. 179, pp. 683-692, 2008.
[12] P. F. Pai and W. C. Hong, "Support vector machines with simulated annealing algorithms in electricity load forecasting," Energy Conversion and Management, vol. 46, pp. 2669-2688, 2005.
[13] X. L. Tang, L. Zhang, J. Cai and C. B. Li, "Multi-fault classification based on support vector machine trained by chaos particle swarm optimization," Knowledge-Based Systems, vol. 23, pp. 486-490, 2010.
[14] S. J. An, W. Q. Liu and S. Venkatesh, "Fast cross-validation algorithms for least squares support vector machines and kernel ridge regression," Pattern Recognition, vol. 40, pp. 2154-2162, 2007.
[15] W. T. Mao, G. R. Yan, L. L. Dong and D. Hu, "Model selection for least squares support vector regression based on small-world strategy," Expert Systems with Applications, vol. 38, pp. 3227-3237, 2011.
[16] T. V. Gestel, J. A. K. Suykens, B. Baesens, S. Viaene, J. Vanthienen, G. Dedene, et al., "Benchmarking least squares support vector machine classifiers," Machine Learning, vol. 54, pp. 5-32, 2004.
[17] L. X. Li, Y. X. Yang, H. P. Peng and X. D. Wang, "Parameters identification of chaotic systems via chaotic ant swarm," Chaos, Solitons and Fractals, vol. 28, pp. 1204-1211, 2006.
[18] L. X. Li, Y. X. Yang, H. P. Peng and X. D. Wang, "An optimization method inspired by chaotic ant behavior," International Journal of Bifurcation and Chaos, vol. 16, pp. 2351-2364, 2006.
[19] J. J. Cai, X. Q. Ma, L. X. Li, Y. X. Yang, H. P. Peng and X. D. Wang, "Chaotic ant swarm optimization to economic dispatch," Electric Power Systems Research, vol. 77, pp. 1373-1380, 2007.
[20] L. X. Li, Y. X. Yang and H. P. Peng, "Computation of multiple global optima through chaotic ant swarm," Chaos, Solitons and Fractals, vol. 40, pp. 1399-1407, 2009.
[21] Y. G. Tang, M. Y. Cui, L. X. Li, H. P. Peng and X. P. Guan, "Parameter identification of time-delay chaotic system using chaotic ant swarm," Chaos, Solitons and Fractals, vol. 41, pp. 2097-2102, 2009.
[22] L. X. Li, Y. X. Yang and H. P. Peng, "Fuzzy system identification through chaotic ant swarm," Chaos, Solitons and Fractals, vol. 41, pp. 401-408, 2009.
[23] H. Zhu, L. X. Li, Y. Zhao, Y. Guo and Y. X. Yang, "CAS algorithm-based optimum design of PID controller in AVR system," Chaos, Solitons and Fractals, vol. 42, pp. 792-800, 2009.
[24] Y. Y. Li, Q. Y. Wen, L. X. Li and H. P. Peng, "Hybrid chaotic ant swarm optimization," Chaos, Solitons and Fractals, vol. 42, pp. 880-889, 2009.
[25] W. C. Hong, "Application of chaotic ant swarm optimization in electric load forecasting," Energy Policy, vol. 38, pp. 5830-5839, 2010.
[26] A. Chatterjee, S. P. Ghoshal and V. Mukherjee, "Chaotic ant swarm optimization for fuzzy-based tuning of power system stabilizer," Electrical Power and Energy Systems, in press.
[27] B. J. Cole, "Is animal behavior chaotic? Evidence from the activity of ants," Proc. R. Soc. Lond. Ser. B Biol. Sci., vol. 244, pp. 253-259, 1991.
[28] R. V. Solé, O. Miramontes and B. C. Goodwin, "Oscillations and chaos in ant societies," Journal of Theoretical Biology, vol. 161, pp. 343-357, 1993.

Chunli Xie received his B.Sc. and M.Sc. degrees from Fushun Petroleum Institute and Liaoning Shihua University, Fushun, China, in 1995 and 2003, respectively. He is currently working toward the Ph.D. degree at Dalian University of Technology, Dalian, China. His research interests include adaptive control, robust control, machine learning, nonlinear systems, artificial intelligence and applications.

Cheng Shao was born in Shenyang, P. R. China, on June 7, 1958. He received his B.Sc. degree from Liaoning University, Shenyang, China, in 1982, and the M.Sc. and Ph.D. degrees from Northeastern University, Shenyang, China, in 1987 and 1992. He is currently a full-time Professor and Ph.D. Advisor with the School of Electronic and Information Engineering, Dalian
University of Technology, China. His research interests cover complex system modeling and intelligent control.

Dandan Zhao was born in Fuxin, P. R. China, on March 4, 1975. She received her B.Sc. and M.Sc. degrees from Fushun Petroleum Institute and Liaoning Shihua University, Fushun, China, in 1997 and 2003, respectively. She is currently a full-time lecturer in the School of Computer Science and Engineering, Dalian Nationalities University, Dalian, China. Her research interests include electronic commerce, semantic networks, swarm intelligence and information processing.