Predication based immune network for multimodal function optimization

Engineering Applications of Artificial Intelligence 23 (2010) 495–504

Qingzheng Xu a,b,*, Lei Wang a, Jing Si a

a School of Computer Science and Engineering, Xi'an University of Technology, Xi'an 710048, China
b Xi'an Communication Institute, Xi'an 710106, China
* Corresponding author at: School of Computer Science and Engineering, Xi'an University of Technology, Xi'an 710048, China. Tel.: +86 2982312220. E-mail address: [email protected] (Q. Xu).

Article history: Received 22 December 2008; received in revised form 10 August 2009; accepted 11 January 2010; available online 9 February 2010.

Keywords: Natural computing; Artificial immune network; Immune predication; Predication based immune network; Multimodal function optimization

Abstract

In the original optimization version of the artificial immune network (opt-aiNet), the direction of local search is indeterminate, an efficient regulation mechanism between local search and global search is lacking, and new antibodies are regenerated purely at random. To address these problems, this paper puts forward a novel predication based immune network (PiNet) that solves multimodal function optimization more efficiently, accurately and reliably. The algorithm mimics natural phenomena of the immune system such as clonal selection, affinity maturation, the immune network, immune memory and immune predication. The proposed algorithm differs from opt-aiNet in two main features: the information of antibodies in consecutive generations is utilized to point out the direction of local search and to adjust the balance between local and global search, and memory cells are employed to generate new antibodies with high affinities. Theoretical analysis and experiments on 10 widely used benchmark problems show that, compared with the opt-aiNet method, the PiNet algorithm improves search performance significantly in success rate, convergence speed, search ability, solution quality and algorithm stability.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

A large number of real-world problems in many fields turn out to be multimodal function optimization problems. An objective function may have several global optima, i.e. several points at which the value of the objective function equals the global optimum value. Furthermore, it may have some local optima at which the value of the objective function is close to the global optimum value. Since the mathematical formulation of a real-world problem often involves several simplifications, finding all global and even these local optima provides decision makers with multiple options to choose from (Ahrari et al., 2009).

Recently, several methods have been proposed for the solution of the multimodal optimization problem. These methods can be divided into two main categories: deterministic and stochastic. When facing complex multimodal optimization problems, methods in the first category, such as gradient descent, the quasi-Newton method and the Nelder-Mead simplex method, may exploit local information ineffectively, are easily trapped in a local optimum, fail to provide reliable results, and depend too heavily on a priori information about the objective function.

In the last two decades, there has been ever-increasing interest in natural computing and its applications, which develop new computational tools inspired by nature for the solution of complex optimization problems. In contrast to traditional optimization methods, which emphasize accurate and exact computation but may fail to reach all global and local optima, computing inspired by nature provides a more robust and efficient approach to solving complex real-world problems. Among the existing nature-inspired approaches, the most well-known are genetic algorithms (GA), artificial neural networks (ANN), evolutionary algorithms (EA), particle swarm optimization (PSO) and artificial immune systems (AIS) (De Castro, 2007).

The remainder of the paper is organized as follows. Section 2 briefly reviews the relevant literature and identifies the contributions and deficiencies, if any, of each prominent approach. Motivated by these findings, Section 3 describes the predication based immune network in detail. Section 4 presents the convergence and complexity analysis and the features of the algorithm. Section 5 provides several experimental results and tests demonstrating the performance of PiNet on benchmark functions. Finally, Section 6 concludes the paper and outlines future directions of the study.

2. Literature survey

GA, a heuristic optimization technique inspired by the natural selection process and population genetics theory, provides a general architecture for solving complex optimization problems.

Various concepts have been introduced to realize multimodal function optimization in GA, such as the sharing function mechanism (Goldberg and Richardson, 1987; Miller and Shaw, 1996), deterministic crowding (Mahfoud, 1995), niching methods (David et al., 1993; Sareni and Krähenbühl, 1998; Dilettoso and Salerno, 2006), restricted competition selection (Lee et al., 1999; Kim et al., 2002), and others (Qi, 1999; Li et al., 2002; Ling et al., 2008; Tsoulos and Lagaris, 2008). Basically, however, GA-based algorithms do not guarantee convergence to global optima because of their poor exploitation capability. GA also suffers from premature convergence, which occurs because of the loss of diversity in the population and is commonly encountered when the search continues for many generations. These drawbacks (Gudla and Ganguli, 2005; Wei and Zhao, 2005) prevent the GA from being of real practical interest for many applications.

PSO is a stochastic optimization algorithm proposed by Kennedy and Eberhart (1995), originally inspired by the emergent motion of a flock of birds searching for food. In recent years, there have been several attempts to apply PSO to multimodal function optimization (Liang et al., 2006; Parrott and Li, 2006; Seo et al., 2006; Ho et al., 2007; Seo et al., 2008; Chen and Zhao, 2009). However, as a newly emerging method, PSO is still in its infancy compared with its well-developed counterparts, and many issues require further study. For example, PSO has difficulty striking a balance between exploration (global investigation of the solution space) and exploitation (the refinement of searches around a local optimum). The algorithm also tends to converge to undesired local solutions because the diversity of the population decreases in the later period of evolution.

Recently, the artificial immune system, which attempts to mimic algorithmically the behavior of the natural immune system of humans and animals, has attracted considerable attention from researchers for solving engineering problems (Klarreich, 2002). AIS has been proven to work in various engineering applications; a useful summary can be found in Hart and Timmis (2008), which classifies application areas roughly under 12 headings, including clustering/classification, anomaly detection, computer security, numeric function optimization, combinatorial optimization and learning.

To a great extent, a mature theory depends on a precise and rigorous mathematical description. Timmis and coworkers have reviewed the recent theoretical advances in artificial immune systems (Timmis et al., 2008). They focus their discussion on clonal selection based AIS and negative selection based AIS, primarily because theoretical studies have so far concentrated on these aspects of AIS. Despite notable empirical analysis of immune networks, very little has been done on their theoretical aspects.

The clonal selection principle (Burnet, 1959) is used to explain the basic features of an adaptive immune response to an antigenic stimulus; it can be interpreted as a remarkable microcosm of Charles Darwin's theory of evolution, with the three major features of repertoire diversity, genetic variation and natural selection.
It establishes the idea that only those cells that recognize the antigens are selected to proliferate. The selected cells are subject to an affinity maturation process, which improves their affinity to the selective antigens. The clonal selection algorithm (CLONALG) (De Castro and Von Zuben, 2002) was derived primarily to perform machine-learning and pattern-recognition tasks and was then adapted to solve multimodal and combinatorial optimization problems.

The artificial immune network (aiNet) algorithm is a discrete immune network algorithm developed for data compression and clustering by means of the clonal selection and affinity maturation principles as well as the immune network theory of the biological immune system (De Castro and Von Zuben, 2000). It was later extended into an optimization version (opt-aiNet), which has been applied to multimodal optimization problems (De Castro and Timmis, 2002). Opt-aiNet is capable of both unimodal and multimodal optimization and presents several interesting and powerful features, such as a dynamically adjustable population size, combined exploitation and exploration of the search space, location of multiple optima, the capability of maintaining many local optima solutions, and a defined automatic stopping criterion.

Tang and Qiu (2006) developed a dynamic population immune algorithm (DPIA) for the multimodal optimization problem. From the initial population, n sub-populations are produced and undergo parallel search in all subspaces by performing mutation and selection in each sub-population. Similar antibodies are eliminated, retaining those with better affinity; hence the resulting population has better fitness than the initial population. Newcomers are then introduced to yield a broader exploration of the search space. The mutation operates only on the low bits of the gene, keeping the higher bits unchanged, in order to prevent the obtained local solutions from being destroyed and to find precise solutions in the subspace. The algorithm is designed such that the number of newcomers decreases gradually as the generations increase. In Lv (2007), the author presents a chaos immune network (CIN) algorithm for multimodal function optimization. The proposed algorithm uses the properties of a chaos variable to search for peaks around the memory cells, thereby deepening the search precision to optimize these antibodies. In Liu and Xu (2008), a cooperative artificial immune network (CoAIN) model is proposed for multimodal function optimization. To explore and exploit the search space efficiently and effectively, CoAIN uses a cooperative strategy inspired by particle swarm behavior: each network cell cooperates like a particle flying around a multidimensional search space, adjusting its position according to its own experience and the experience of its neighbors, making use of the best positions encountered by itself and the other particles.

However, in our opinion, there are some inherent drawbacks in the original opt-aiNet. First of all, without conjunction and cooperation among homologous network cells, opt-aiNet performs many redundant explorations by individuals. Large amounts of computational and storage resources are wasted, which slows down the search and subsequently affects the quality of solutions and the convergence speed of the algorithm. Another limitation of current artificial immune systems, including opt-aiNet, is the lack of an efficient regulation mechanism between local and global search, which impairs convergence towards the global optimum. As a result, the best global optimum found through local search stagnates, and the global search ability of opt-aiNet is not fully utilized. Finally, opt-aiNet generally relies on the substitution of weak antibodies with randomly generated antibodies, which can land in already exploited regions of the landscape.
Repeatedly exploring the same regions is thus a drawback, and there is no guarantee that the whole solution space will be explored. Hence, the aim of this paper is to solve these existing problems and to describe a new method, the predication based immune network, designed specifically for the multimodal function optimization problem. To explore and exploit the search space efficiently and effectively, PiNet uses a cooperative strategy inspired by homologous antibodies.


That is to say, the next position of an antibody is related not only to its own position but also to that of its homologous antibody. In addition, two modifications of the optimization mechanism are proposed. (1) After local search over several iterations, we determine when to switch to global search based on the information of antibodies in consecutive generations. (2) The selection probability of a feasible solution changes dynamically according to the sum of the affinities between the feasible solution and the memory cells. The experiments show that, compared with the opt-aiNet method, the new algorithm improves convergence speed and solution quality significantly and can be used in many practical problems.

3. Predication based immune network

Since PiNet is extended from opt-aiNet, two entities are involved in the population-based PiNet: the antigen and the antibodies. The problem to be tackled, i.e. the function to be optimized, is seen as the antigen, whereas the candidate solutions, each a member of the set of possible solutions to the given problem, are seen as antibodies. The interaction between an antibody and the antigen is denoted by fitness, and the interaction between antibodies by affinity. The main process of the new algorithm can be explained in detail as follows.

Step 1. Randomly initialize an antibody population A_1^N = a_1 + a_2 + ... + a_N.

Step 2. Determine the fitness of each antibody a_i (i = 1, 2, ..., N_k) in the population A_k^{N_k}, and normalize the vector of fitness values, that is, f*(a_i) = (f(a_i) − f_min)/(f_max − f_min) ∈ [0, 1].

Step 3. Proliferate a number N_c of clones of each network cell and form the population B_k^{N_c N_k} = T_c(A_k^{N_k}) = B_1^{N_c} + B_2^{N_c} + ... + B_{N_k}^{N_c} = {b_1^1 + b_1^2 + ... + b_1^{N_c}} + {b_2^1 + b_2^2 + ... + b_2^{N_c}} + ... + {b_{N_k}^1 + b_{N_k}^2 + ... + b_{N_k}^{N_c}}, where b_i^j = a_i (i = 1, 2, ..., N_k; j = 1, 2, ..., N_c).

The clone operation T_c, inspired by the asexual propagation of the immune system, keeps the former population and reproduces every individual N_c times, which expands the search space N_c times accordingly.

Step 4. Mutate each clone and form the population C_k^{N_c N_k} = T_m(B_k^{N_c N_k}) = C_1^{N_c} + C_2^{N_c} + ... + C_{N_k}^{N_c} = {c_1^1 + c_1^2 + ... + c_1^{N_c}} + {c_2^1 + c_2^2 + ... + c_2^{N_c}} + ... + {c_{N_k}^1 + c_{N_k}^2 + ... + c_{N_k}^{N_c}}. The mutation operation T_m is defined by

c_i^j = b_i^j + (b_i^j − b̄_i^j)/2 + (1/β)·exp(Δf(b_i^j) − f*(b_i^j))·N(0, 1)   (1)

where c_i^j is the mutated cell, b_i^j is the clone cell, b̄_i^j is the homologous antibody of b_i^j in B_{k−1}^{N_c N_{k−1}}, β is the mutation strength that controls the decay of the inverse exponential function, N(0, 1) is a Gaussian random variable with zero mean and standard deviation σ = 1, Δf(b_i^j) = f(b_i^j) − f(b̄_i^j) is the fitness variation between the parent and the son cell in consecutive antibody populations, and f*(b_i^j) = f*(a_i) is the fitness of the antibody normalized to the interval [0, 1]. Note that a mutated cell c_i^j is accepted only if it lies within the range of the domain. From Eq. (1) we can see that a new solution c_i^j is obtained in the neighborhood of b_i^j, and that the mutation strength is inversely proportional to the normalized fitness f*.

Step 5. Determine the fitness of all individuals of the population C_k^{N_c N_k}, and select the cell with the highest fitness in each subpopulation to form the population D_k^{N_k} = T_s(C_k^{N_c N_k}) = d_1 + d_2 + ... + d_{N_k}, where d_i = c_i^m, i = 1, 2, ..., N_k, m ∈ {1, 2, ..., N_c}, f(c_i^m) = max{f(c_i^j) | j = 1, 2, ..., N_c}. The selection operation T_s is performed in each subpopulation simultaneously, and has quite the opposite effect of the clone operation T_c.
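To make Eq. (1) concrete, here is a minimal NumPy sketch of the mutation operation for a single clone. It follows the sign convention reconstructed above; the function name, the bounds handling and the fallback of keeping the parent when the mutant leaves the domain (which the text does not specify) are our assumptions, not the authors' code.

```python
import numpy as np

def mutate_clone(b, b_prev, f_b, f_b_prev, f_norm, beta, bounds, rng):
    """Sketch of the mutation operation T_m of Eq. (1) for one clone.

    b        -- clone cell b_i^j (1-D array)
    b_prev   -- homologous antibody of b in the previous generation
    f_b      -- fitness f(b); f_b_prev -- fitness f(b_prev)
    f_norm   -- normalized fitness f*(b) in [0, 1]
    beta     -- mutation strength
    bounds   -- (low, high) arrays delimiting the search domain
    """
    delta_f = f_b - f_b_prev                    # fitness variation Delta f(b_i^j)
    predicted = (b - b_prev) / 2.0              # predicated local-search direction
    noise = (1.0 / beta) * np.exp(delta_f - f_norm) * rng.standard_normal(b.shape)
    c = b + predicted + noise
    low, high = bounds
    if np.all(c >= low) and np.all(c <= high):  # accept only inside the domain
        return c
    return b.copy()                             # fallback (unspecified in the paper)
```

Note that when b_prev = b (as at the first generation, under the initialization used in the loop skeleton below), the predicated step and Δf vanish and the operator degenerates to an opt-aiNet-style mutation around the parent.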

Step 6. Determine the average fitness of the population D_k^{N_k}. If the average of the fitness errors between the current and previous generations is less than the stabilization error ε, or Δf(D_k^{N_k}) < λ_f Δf(A_k^{N_k}) is met, the network is said to have stabilized and we continue. Otherwise, let A_{k+1}^{N_{k+1}} = D_k^{N_k}, N_{k+1} = N_k, k = k + 1, and return to Step 2. Here λ_f is the fitness improvement threshold, Δf(D_k^{N_k}) = f(D_k^{N_k}) − f(A_k^{N_k}) and Δf(A_k^{N_k}) = f(A_k^{N_k}) − f(A_{k−1}^{N_{k−1}}) = f(D_{k−1}^{N_{k−1}}) − f(A_{k−1}^{N_{k−1}}), where the average fitness of a population Z_n^m is f(Z_n^m) = (1/n)(f(z_1) + f(z_2) + ... + f(z_n)) for Z_n^m = D_k^{N_k}, A_k^{N_k}, D_{k−1}^{N_{k−1}}, A_{k−1}^{N_{k−1}}.

Step 7. Suppress the similar antibodies in population D_k^{N_k} and form the memory cells population E_k^{N_{1k}} = T_sup(D_k^{N_k}) = e_1 + e_2 + ... + e_{N_{1k}}. The suppress operation T_sup comprises three steps: first, sort the antibodies in the population D_k^{N_k} according to their fitness; second, determine the affinity of all cells in the network, where affinity is defined as the Euclidean distance between two cells, i.e. aff(d_i, d_j) = ||d_i − d_j||_2 (i ≠ j; i, j = 1, 2, ..., N_k); third, among cells whose mutual affinities are less than the affinity suppression threshold σ_s, suppress all but the one with the highest fitness. Obviously, the role of T_sup is to maintain the diversity of the antibody population.

Step 8. Update the population E_k^{N_{1k}} and form the population F_k^{N_{1k}+dN_k} = T_upd(E_k^{N_{1k}}) = e_1 + e_2 + ... + e_{N_{1k}} + f_1 + f_2 + ... + f_{dN_k}. The update operation T_upd comprises five steps: first, determine the affinity aff(x_m, e_i), defined as above, between a candidate solution x_m in the solution space and each memory cell e_i (i = 1, 2, ..., N_{1k}); second, define the selection probability of the candidate solution x_m as p(x_m) = Σ_{i=1}^{N_{1k}} aff(x_m, e_i); third, normalize the selection probability, i.e. p*(x_m) = (p(x_m) − p_min)/(p_max − p_min) ∈ [0, 1]; fourth, generate d·N_k new antibodies using roulette wheel selection, where d is the update rate; finally, form the new antibody population F_k^{N_{1k}+dN_k} by combining the new antibodies with the memory cells.

Step 9. If a termination condition is met, the memory cells in population E_k^{N_{1k}} are the solutions of the multimodal function optimization and the antibody with the highest fitness is the global optimum. Otherwise, let A_{k+1}^{N_{k+1}} = F_k^{N_{1k}+dN_k}, N_{k+1} = N_{1k} + d·N_k, k = k + 1, and return to Step 2.

Two termination conditions are adopted for the algorithm: (i) the network scale does not change over two consecutive iterations; (ii) the maximum search space is reached. In general, if the fitness and affinity of the memory cells no longer vary, the remaining cells are memory cells corresponding to the solutions of the given problem. However, for multimodal functions with infinitely many local optima, the algorithm may run endlessly if only termination condition (i) is considered. A pre-defined maximum search space can therefore be adopted as an alternative when computational complexity and computational precision are taken into account.
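The suppress operation of Step 7 admits a straightforward greedy implementation; the sketch below is our reading of the three sub-steps (names are illustrative, not from the paper):

```python
import numpy as np

def suppress(population, fitness, sigma_s):
    """Sketch of the suppress operation T_sup (Step 7): among cells closer than
    sigma_s in Euclidean distance, keep only the one with the highest fitness."""
    order = np.argsort(-fitness)                  # sub-step 1: sort by fitness
    kept = []
    for idx in order:
        cand = population[idx]
        # sub-steps 2-3: keep cand only if no fitter kept cell lies within sigma_s
        if all(np.linalg.norm(cand - population[j]) >= sigma_s for j in kept):
            kept.append(idx)
    return population[kept], fitness[kept]
```

Because cells are visited in decreasing order of fitness, every discarded cell is within σ_s of a fitter kept cell, which matches the elitist reading used in the convergence argument of Section 4.1.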

4. Algorithm analysis


The implementation of PiNet is given in Fig. 1 below. There are two search methods in PiNet: local search and global search. In every iteration, antibodies are locally optimized through clonal selection, comprising the clone, mutation and selection operations, to exploit the search space. Then, in the suppress operation, antibodies that are too similar to one another are eliminated to avoid clustering on a single peak. Finally, a number of newly generated antibodies with higher affinities are added to the current population to explore the full solution space, and the process of local optimization restarts if the termination condition is not met.
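For orientation, the skeleton below sketches one plausible arrangement of this flow in Python. It reuses mutate_clone and suppress from the sketches above and a regenerate helper for the Step 8 update (sketched in Section 4.3 below); parameter defaults follow Table 5, termination is reduced to the evaluation budget alone, and all names are ours rather than the authors'.

```python
import numpy as np

def pinet(f, low, high, n_init=20, n_clones=10, beta=100.0, lambda_f=0.1,
          sigma_s=0.2, d=0.4, max_evals=550_000, seed=0):
    """Illustrative skeleton of the PiNet flow of Fig. 1. f maps an (n, dim)
    array of antibodies to an (n,) array of fitness values."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(low, float), np.asarray(high, float)
    pop = rng.uniform(low, high, (n_init, low.size))
    fit = f(pop)
    prev_pop, prev_fit = pop.copy(), fit.copy()   # homologous antibodies (gen k-1)
    evals = fit.size
    prev_gain = None                              # Delta f(A_k) of the last generation
    while evals < max_evals:
        span = max(fit.max() - fit.min(), 1e-12)
        f_norm = (fit - fit.min()) / span         # Step 2: normalized fitness
        # Steps 3-5: clone, mutate by Eq. (1), keep the best clone per antibody
        next_pop = pop.copy()
        for i in range(len(pop)):
            clones = np.array([mutate_clone(pop[i], prev_pop[i], fit[i],
                                            prev_fit[i], f_norm[i], beta,
                                            (low, high), rng)
                               for _ in range(n_clones)])
            cfit = f(clones)
            evals += cfit.size
            next_pop[i] = clones[np.argmax(cfit)]
        next_fit = f(next_pop)
        evals += next_fit.size
        gain = next_fit.mean() - fit.mean()       # Delta f(D_k)
        prev_pop, prev_fit = pop, fit
        pop, fit = next_pop, next_fit
        # Step 6 (simplified): switch to global search once the gain stalls
        if prev_gain is not None and gain < lambda_f * prev_gain:
            n_k = len(pop)
            pop, fit = suppress(pop, fit, sigma_s)           # Step 7
            newcomers = regenerate(pop, max(1, int(d * n_k)),
                                   low, high, rng)           # Step 8
            new_fit = f(newcomers)
            evals += new_fit.size
            pop = np.vstack([pop, newcomers])
            fit = np.concatenate([fit, new_fit])
            prev_pop, prev_fit = pop.copy(), fit.copy()      # reset homologues
        prev_gain = gain
    return suppress(pop, fit, sigma_s)            # Step 9: memory cells = solutions
```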


Fig. 1. Computational procedure of PiNet.

4.1. Convergence analysis of the algorithm

Referring to the probability analysis methods of Du et al. (2005) and Gong et al. (2006), we prove the global convergence of PiNet as follows. For convenience of exposition, let I denote the antibody population space, f a real function on I, M(k) the antibody population, and N(k) the memory cells population.

Definition 1. Define the set of global optima as

R = {X ∈ I | f(X) = max{f(Y), Y ∈ I}}.   (2)

Definition 2. For a random initial state M1, the algorithm converges to the global optimum with probability 1 if and only if

lim_{k→∞} P{(M(k+1) ∪ N(k)) ∩ R ≠ ∅ | M(1) = M1, N(0) = ∅} = 1.   (3)

This leads to:

Theorem 1. The PiNet algorithm for multimodal function optimization converges to the global optimum with probability 1.

Proof. Let P0(k) = P{(M(k+1) ∪ N(k)) ∩ R = ∅}. According to the total probability formula,

P0(k+1) = P{(M(k+2) ∪ N(k+1)) ∩ R = ∅}
  = P{(M(k+2) ∪ N(k+1)) ∩ R = ∅ | (M(k+1) ∪ N(k)) ∩ R ≠ ∅} · P{(M(k+1) ∪ N(k)) ∩ R ≠ ∅}
  + P{(M(k+2) ∪ N(k+1)) ∩ R = ∅ | (M(k+1) ∪ N(k)) ∩ R = ∅} · P{(M(k+1) ∪ N(k)) ∩ R = ∅}.   (4)

In the process of local search, the memory population does not vary, namely N(k+1) = N(k). From the property of the selection operation T_s, the global optimum of population A_{k+2}^{N_{k+2}} (M(k+2)) is superior to that of population A_{k+1}^{N_{k+1}} (M(k+1)). So

P{(M(k+2) ∪ N(k+1)) ∩ R = ∅ | (M(k+1) ∪ N(k)) ∩ R ≠ ∅} = 0.   (5)

In the process of global search, the global optimum is saved in the memory cells population N(k+1) by the elitist selection mechanism of the suppress operation. So, again,

P{(M(k+2) ∪ N(k+1)) ∩ R = ∅ | (M(k+1) ∪ N(k)) ∩ R ≠ ∅} = 0.   (6)

Therefore

P0(k+1) = P{(M(k+2) ∪ N(k+1)) ∩ R = ∅ | (M(k+1) ∪ N(k)) ∩ R = ∅} · P{(M(k+1) ∪ N(k)) ∩ R = ∅}
  = P{(M(k+2) ∪ N(k+1)) ∩ R = ∅ | (M(k+1) ∪ N(k)) ∩ R = ∅} · P0(k).   (7)

Let

ζ = min_{k = 0, 1, 2, ...} P{(M(k+2) ∪ N(k+1)) ∩ R ≠ ∅ | (M(k+1) ∪ N(k)) ∩ R = ∅}.

Obviously

P{(M(k+2) ∪ N(k+1)) ∩ R ≠ ∅ | (M(k+1) ∪ N(k)) ∩ R = ∅} ≥ ζ > 0.   (8)

Consequently

P{(M(k+2) ∪ N(k+1)) ∩ R = ∅ | (M(k+1) ∪ N(k)) ∩ R = ∅}
  = 1 − P{(M(k+2) ∪ N(k+1)) ∩ R ≠ ∅ | (M(k+1) ∪ N(k)) ∩ R = ∅} ≤ 1 − ζ < 1.   (9)

Therefore

0 ≤ P0(k+1) ≤ (1 − ζ)·P0(k) ≤ (1 − ζ)²·P0(k−1) ≤ ... ≤ (1 − ζ)^{k+1}·P0(0).   (10)

Note that lim_{k→∞} (1 − ζ)^k = 0 and 0 ≤ P0(0) ≤ 1; thus

0 ≤ lim_{k→∞} P0(k) ≤ lim_{k→∞} (1 − ζ)^k P0(0) = 0.   (11)

Then lim_{k→∞} P0(k) = 0, so

lim_{k→∞} P{(M(k+1) ∪ N(k)) ∩ R ≠ ∅ | M(1) = M1, N(0) = ∅} = 1 − lim_{k→∞} P0(k) = 1.   (12)

Thus, the PiNet algorithm for multimodal function optimization converges to the global optimum with probability 1. □

4.2. Complexity analysis of the algorithm

The "complexity of an algorithm" refers to the amount of time and space required to execute the algorithm in the worst case (Horowitz and Sahni, 1978). Determining the performance of a computer program is a difficult task that depends on a number of factors, such as the computer being used, the way the data are represented, and how and in which programming language the code is implemented. Here we present a general evaluation of the complexity of PiNet, taking into account the total computational cost and its memory requirements. The execution time comprises three main parts:

(1) Initialization phase: initialize a population A_1^N of antibodies and determine the fitness of each antibody a_i. This can be performed in O(N) time.


(2) Local search phase: the computational time required by Steps 3, 4 and 5 (the T_c, T_m and T_s operations) is O(N_c N_k) for each (Fischetti and Martello, 1988).

(3) Global search phase: the suppress operation demands a computational time of the order O(N_{1k}²), and regenerating new antibodies demands O(m N_{1k} + d N_k) in the worst case, where m is the number of candidate solutions.

By summing the computational time required for each of these phases, the total computational time of the algorithm can be determined. Let g_c and g_g be the numbers of local search and global search iterations, respectively; the computational time of the whole process is then

O(N) + O(N_c N_k g_c) + O(N_c N_k g_c) + O(N_c N_k g_c) + O(N_{1k}² g_g) + O((m N_{1k} + d N_k) g_g)
  = O(N + 3 N_c N_k g_c + (N_{1k}² + m N_{1k} + d N_k) g_g)
  = O(N_c N_k g_c + (N_{1k}² + m N_{1k}) g_g).   (13)

It is clear that the computational time of the algorithm is determined by factors such as the antibody population scale N_k, the antibody clone scale N_c, the memory cells population scale N_{1k}, the numbers of local and global search iterations g_c and g_g, and the number of candidate solutions m.
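As a rough worked example, using the default parameter values of Table 5 (N = N_k = 20, N_c = 10, d = 40%), assuming N_{1k} ≈ 20 memory cells and, purely for illustration, a candidate pool of m = 200 (m is not fixed by the paper), the per-iteration terms of Eq. (13) become:

3 N_c N_k = 3 · 10 · 20 = 600 operations per local-search iteration, dominated by the N_c N_k = 200 fitness evaluations;
N_{1k}² + m N_{1k} + d N_k = 20² + 200 · 20 + 0.4 · 20 = 400 + 4000 + 8 = 4408 affinity and distance computations per global-search step.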


The required memory to run the algorithm is proportional to the number of antibodies N_k, plus the number of generated clones N_c N_k, plus the number of memory cells N_{1k}.

4.3. Features of the algorithm

From Fig. 1, we can see the following differences between PiNet and opt-aiNet.

(1) In opt-aiNet, the position of an antibody in the next generation is the result of local optimization around the antibody alone. In PiNet, the next position of an antibody is decided not only by its own position but also by that of its homologous antibody. In Eq. (1), the position variation between the homologous antibodies in consecutive generations is denoted by b_i^j − b̄_i^j. Accordingly, the antibody is optimized locally around b_i^j + (b_i^j − b̄_i^j)/2, which means that the best solution is predicated to lie along the direction b_i^j − b̄_i^j. Moreover, the fitness variation between them is denoted by Δf(b_i^j) = f(b_i^j) − f(b̄_i^j). Therefore, antibodies with a higher fitness variation change heavily and so have many chances to be better than their parents; on the contrary, antibodies with a lower variation mutate relatively slightly to preserve their priority. The basic features of the antibody c_i^j in the next generation can thus be roughly predicated using the data collected from the previous generation, which points out the mutation direction of antibody b_i^j and improves the speed of local search.

(2) While opt-aiNet adjusts the balance between local and global search at any moment during the run using only the average of the fitness errors between the current and previous generations, PiNet relies not only on the average fitness error but also on the improvement relative to the previous one. If Δf(D_k^{N_k}) < λ_f Δf(A_k^{N_k}) is satisfied, the average fitness is deemed not to be improving significantly anymore and we switch from local to global search. In this way the number of iterations spent on local search is reduced and a dramatic expansion of the population scale is avoided.

(3) Since the memory cells are the best solutions found so far, the update operation can be guided by their information. While in opt-aiNet new candidate solutions are generated uniformly at random over the whole feasible space, in PiNet the selection probability of a candidate solution is based on the sum of the Euclidean distances between it and all memory cells: the farther a candidate solution is from all memory cells, the greater its chance of being generated as a new antibody. As a result, new antibodies with higher affinities do not land repeatedly in already exploited regions, and the global search ability is significantly improved.
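A minimal sketch of this predication-based update (feature (3), Step 8): candidates are drawn uniformly, then kept by roulette wheel selection with probability proportional to their normalized summed distance from the memory cells, so newcomers tend to land in unexplored regions. Drawing a finite pool of m candidates is our assumption; the paper leaves the candidate set unspecified.

```python
import numpy as np

def regenerate(memory, n_new, low, high, rng, m_candidates=200):
    """Step 8 (T_upd): roulette-wheel selection of new antibodies, biased
    towards candidates far (in summed Euclidean distance) from the memory cells."""
    cand = rng.uniform(low, high, (m_candidates, memory.shape[1]))
    # p(x_m) = sum_i aff(x_m, e_i): summed distance to every memory cell
    p = np.linalg.norm(cand[:, None, :] - memory[None, :, :], axis=2).sum(axis=1)
    span = max(p.max() - p.min(), 1e-12)
    p_star = (p - p.min()) / span                  # normalized to [0, 1]
    weights = p_star / max(p_star.sum(), 1e-12)    # roulette-wheel probabilities
    picks = rng.choice(m_candidates, size=n_new, replace=False, p=weights)
    return cand[picks]
```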

Table 1
Specification of benchmark functions.

| Notation | Name | Function | Interval | Source | Local optima | Global optima |
| F1 | Multi function | f1 = x1 sin(4πx1) − x2 sin(4πx2 + π) + 1 | x1, x2 ∈ [−2, 2] | De Castro and Timmis (2002) | 100 | 4 |
| F2 | Schaffer function | f2 = 0.5 − (sin²(√(x1² + x2²)) − 0.5)/(1 + 0.001(x1² + x2²))² | x1, x2 ∈ [−10, 10] | De Castro and Timmis (2002) | inf | 1 |
| F3 | Roots function | f3 = 1/(1 + |z⁶ − 1|), z ∈ C, z = x1 + i·x2 | x1, x2 ∈ [−2, 2] | De Castro and Timmis (2002) | 6 | 6 |
| F4 | – | f4 = 900 − Σ_{i=1}^{2} [(xi − 5)² − 10 cos(2π(xi − 5))] | x1, x2 ∈ [−10, 10] | Im et al. (2004) | 441 | 1 |
| F5 | Sinc function | f5 = sin(|x1 − 10| + |x2 − 10|)/(|x1 − 10| + |x2 − 10|) | x1, x2 ∈ [−20, 20] | Im et al. (2004) | inf | 1 |
| F6 | Alex function | f6 = −(x1² + x2 − 11)² − (x1 + x2² − 7)² − sin(x1² + x2²) | x1, x2 ∈ [−4, 4] | Lee et al. (2007) | 4 | 1 |
| F7 | Rastrigin function | f7 = 20 + x1² + x2² − 10(cos(2πx1) + cos(2πx2)) | x1, x2 ∈ [−2.5, 2.5] | Lee et al. (2007) | 36 | 4 |
| F8 | Shubert function | f8 = Σ_{i=1}^{5} i cos[(i+1)x1 + i] · Σ_{j=1}^{5} j cos[(j+1)x2 + j] | x1, x2 ∈ [−10, 10] | Ahrari et al. (2009) | 761 | 9 |
| F9 | Camel function | f9 = −4x1² + 2.1x1⁴ − (1/3)x1⁶ − x1x2 + 4x2² − 4x2⁴ | x1, x2 ∈ [−3, 3] | Tsoulos and Lagaris (2008) | 6 | 2 |
| F10 | Rastrigin function | f10 = cos(18x1) + cos(18x2) − x1² − x2² | x1, x2 ∈ [−2, 2] | Tsoulos and Lagaris (2008) | 169 | 1 |
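For concreteness, here are vectorized Python definitions of two benchmarks from Table 1 whose reconstructed forms are unambiguous (F1 and F10); they accept the (n, 2) antibody arrays assumed by the loop skeleton in Section 3:

```python
import numpy as np

def f1_multi(x):
    """F1, Multi function (Table 1), on x1, x2 in [-2, 2]; x has shape (n, 2)."""
    x1, x2 = x[:, 0], x[:, 1]
    return x1 * np.sin(4 * np.pi * x1) - x2 * np.sin(4 * np.pi * x2 + np.pi) + 1

def f10(x):
    """F10 (Table 1), on x1, x2 in [-2, 2]; global maximum 2 at the origin."""
    x1, x2 = x[:, 0], x[:, 1]
    return np.cos(18 * x1) + np.cos(18 * x2) - x1**2 - x2**2

# e.g. evaluate a random antibody population:
# pop = np.random.default_rng(0).uniform(-2, 2, (20, 2)); f1_multi(pop)
```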

Table 2
Results of PiNet for different values of mutation strength β. Each cell gives avg (Sig).

| Mutation strength β | Success rate (%) | Generations of convergence | Computation time (s) | Function evaluations | Local optima found | Global optima found | Best global optimum |
| 25 | 100 | 142 (0.000) | 0.860 (0.000) | 113,647 (0.000) | 95.5 (0.346) | 4 (1.000) | 4.253878810 (0.000) |
| 50 | 100 | 152 (0.000) | 0.884 (0.000) | 123,141 (0.000) | 94.1 (0.261) | 4 (1.000) | 4.253886059 (0.000) |
| 100 | 100 | 219 (–) | 1.263 (–) | 187,374 (–) | 94.9 (–) | 4 (–) | 4.253888169 (–) |
| 200 | 100 | 354 (0.000) | 1.610 (0.000) | 293,604 (0.000) | 94.9 (1.000) | 4 (1.000) | 4.253888262 (0.372) |
| 400 | 86 | 482* (0.000*) | 2.097* (0.000*) | 411,256* (0.000*) | 93.3* (0.041*) | 4* (1.000*) | 4.253888428* (0.000*) |

Note: N1 = 20, Nc = 10, λf = 0.1, σs = 0.2, S = 550,000, d = 40%. avg is the abbreviation for average, SD for standard deviation, and Sig for significance at a two-sided 5% significance level; the better result for each criterion appears in boldface in the original; '*' marks a result obtained only over runs in which PiNet converged; '+' marks a result obtained only over runs in which PiNet found a global optimum. The same conventions apply below.


Table 3
Results of PiNet for different values of affinity suppression threshold σs. Each cell gives avg (Sig).

| σs | Success rate (%) | Generations of convergence | Computation time (s) | Function evaluations | Local optima found | Global optima found | Best global optimum |
| 0.05 | 100 | 218 (0.930) | 1.121 (0.008) | 179,808 (0.383) | 95.0 (0.828) | 4 (1.000) | 4.253887624 (0.022) |
| 0.1 | 100 | 219 (0.980) | 1.130 (0.016) | 188,577 (0.882) | 94.8 (0.914) | 4 (1.000) | 4.253887912 (0.009) |
| 0.2 | 100 | 219 (–) | 1.263 (–) | 187,374 (–) | 94.9 (–) | 4 (–) | 4.253888169 (–) |
| 0.3 | 100 | 226 (0.268) | 1.218 (0.402) | 196,145 (0.249) | 95.4 (0.440) | 4 (1.000) | 4.253887897 (0.117) |
| 0.4 | 100 | 143 (0.000) | 0.667 (0.000) | 70,121 (0.000) | 46.9 (0.000) | 3.98 (0.322) | 4.253887441 (0.000) |

Note: N1 = 20, Nc = 10, β = 100, λf = 0.1, S = 550,000, d = 40%.

Table 4
Results of PiNet for different values of update rate d. Each cell gives avg (Sig).

| Update rate d | Success rate (%) | Generations of convergence | Computation time (s) | Function evaluations | Local optima found | Global optima found | Best global optimum |
| 10% | 100 | 87 (0.000) | 0.438 (0.000) | 26,968 (0.000) | 33.4 (0.000) | 2.9 (0.000) | 4.253874951 (0.013) |
| 20% | 100 | 254 (0.000) | 1.147 (0.045) | 143,995 (0.000) | 83.1 (0.000) | 4 (1.000) | 4.253887924 (0.001) |
| 40% | 100 | 219 (–) | 1.263 (–) | 187,374 (–) | 94.9 (–) | 4 (–) | 4.253888169 (–) |
| 60% | 100 | 184 (0.000) | 1.197 (0.190) | 219,622 (0.000) | 98.2 (0.000) | 4 (1.000) | 4.253888003 (0.013) |
| 80% | 100 | 158 (0.000) | 1.408 (0.019) | 256,184 (0.000) | 99.2 (0.000) | 4 (1.000) | 4.253888057 (0.106) |

Note: N1 = 20, Nc = 10, β = 100, λf = 0.1, σs = 0.2, S = 550,000.

Table 5
Parameters used for optimization of the benchmark functions in Table 1.

| Function | Initial population scale N1 | Clone scale Nc | Mutation strength β | Fitness improvement threshold λf (only for PiNet) | Affinity suppression threshold σs | Search space S | Update rate d |
| F1 | 20 | 10 | 100 | 0.1 | 0.2 | 550,000 | 40% |
| F2 | 20 | 10 | 100 | 0.1 | 0.2 | 550,000 | 40% |
| F3 | 20 | 10 | 100 | 0.01 | 0.2 | 550,000 | 40% |
| F4 | 20 | 10 | 100 | 0.1 | 0.6 | 500,000 | 40% |
| F5 | 20 | 10 | 100 | 0.1 | 0.2 | 1,000,000 | 40% |
| F6 | 20 | 10 | 100 | 0.1 | 0.2 | 200,000 | 40% |
| F7 | 20 | 10 | 200 | 0.1 | 0.1 | 400,000 | 40% |
| F8 | 20 | 10 | 100 | 0.1 | 0.2 | 800,000 | 40% |
| F9 | 20 | 10 | 100 | 0.1 | 0.5 | 100,000 | 40% |
| F10 | 20 | 10 | 100 | 0.3 | 0.2 | 800,000 | 40% |

Table 6
Results from optimization of the benchmark functions in Table 1 using opt-aiNet and PiNet. Each cell gives avg (SD).

| Function | Algorithm | Success rate (%) | Generations of convergence | Computation time (s) | Function evaluations | Local optima found | Global optima found | Best global optimum |
| F1 | opt-aiNet | 96 | 441* (72*) | 11.523* (3.168*) | 370,657* (122,270*) | 91.1* (5.8*) | 4* (0*) | 4.253880080* (0.000016974*) |
| F1 | PiNet | 100 | 219 (27) | 1.263 (0.300) | 187,374 (34,881) | 94.9 (3.4) | 4 (0) | 4.253888169 (0.000000287) |
| F2 | opt-aiNet | 0 | 682 (37) | 17.204 (0.125) | 550,000+ (580) | 219.3 (10.3) | 0.96 (0.20) | 0.999666834+ (0.001587408+) |
| F2 | PiNet | 0 | 317 (17) | 1.934 (0.135) | 550,000+ (1320) | 279.5 (9.5) | 1 (0) | 0.999611350 (0.001923249) |
| F3 | opt-aiNet | 100 | 542 (326) | 1.005 (0.543) | 79,705 (43,179) | 6.1 (0.54) | 5.88 (0.33) | 0.999714360 (0.000118384) |
| F3 | PiNet | 100 | 191 (136) | 0.564 (0.382) | 35,051 (20,512) | 6 (0) | 6 (0) | 0.999760464 (0.000130920) |
| F4 | opt-aiNet | 38 | 100* (14*) | 12.159* (3.477*) | 292,066* (103,997*) | 357* (5.24*) | 0.79* (0.42*) | 917.438041043*+ (1.030000187*+) |
| F4 | PiNet | 100 | 67 (14) | 6.333 (3.123) | 207,481 (83,875) | 433 (11.2) | 1 (0) | 919.917513602 (0.272526243) |
| F5 | opt-aiNet | 0 | 1040 (62) | 28.457 (0.306) | 1,000,000+ (947) | 287.2 (38.3) | 0.94 (0.24) | 0.999930768+ (0.000338786+) |
| F5 | PiNet | 0 | 458 (42) | 2.528 (0.107) | 1,000,000+ (1992) | 444.9 (27.3) | 1 (0) | 0.999999997 (0.000000002) |
| F6 | opt-aiNet | 100 | 620 (70) | 3.549 (0.343) | 99,990 (16,326) | 4 (0) | 1 (0) | 0.960797709 (0.000315611) |
| F6 | PiNet | 100 | 175 (31) | 0.541 (0.099) | 28,517 (4659) | 4 (0) | 1 (0) | 0.960853602 (0.000000835) |
| F7 | opt-aiNet | 66 | 674 (165) | 9.791 (6.298) | 229,247 (88,901) | 35.97 (3.522) | 3.03 (0.88) | 52.499946990 (0.000093624) |
| F7 | PiNet | 100 | 208 (52) | 0.753 (0.188) | 77,396 (28,126) | 34.76 (1.836) | 3.74 (0.49) | 52.499954521 (0.000038556) |
| F8 | opt-aiNet | 0 | 485 (20) | 48.107 (0.282) | 800,000+ (1832) | 538.4 (32.480) | 6.80 (1.34) | 210.482279261 (0.000014169) |
| F8 | PiNet | 0 | 239 (12) | 5.432 (0.220) | 800,000+ (2361) | 596.94 (17.875) | 8.5 (0.70) | 210.482285829 (0.000007931) |
| F9 | opt-aiNet | 100 | 491 (68) | 3.242 (0.381) | 82,933 (8074) | 5.94 (0.240) | 2 (0) | 1.031268380 (0.000000071) |
| F9 | PiNet | 100 | 194 (111) | 0.673 (0.341) | 31,955 (16,225) | 5.98 (0.141) | 2 (0) | 1.031628389 (0.000000080) |
| F10 | opt-aiNet | 100 | 492 (54) | 15.283 (2.674) | 561,508 (101,582) | 120.14 (2.382) | 1 (0) | 1.999980333 (0.000032958) |
| F10 | PiNet | 100 | 253 (37) | 1.848 (0.328) | 383,938 (81,899) | 157.42 (4.343) | 1 (0) | 1.999995222 (0.000006101) |


5. Experiments

In this section we examine the search performance of the proposed PiNet using 10 benchmark functions of different complexities, listed in Table 1. To avoid attributing the optimization results to the choice of a particular initial population and to conduct fair comparisons, we perform each test 50 times, starting from various randomly selected points in the search domain given in the literature. The algorithm is coded in Matlab 7.0 and the simulations are run on a Pentium IV 2.4 GHz with 512 MB of memory. The data are analyzed with SPSS (Statistical Product and Service Solutions) version 14.0, and significance is assumed at the 0.05 level. To evaluate the algorithm's efficiency and effectiveness, we apply the following criteria to each test function: the rate of successful maximization, the average number of generations to convergence, the average computation time, the average number of objective function evaluations, the average numbers of local and global optima found, and the average best global optimum found.

5.1. Effects of different parameters

Unsuitable values of the algorithm parameters may result in a low convergence rate, more computation time, a large number of function evaluations, convergence to a local maximum, or unreliable solutions. Tables 2-4 show the statistical optimization results for F1 under various parameter settings. The results illustrate the role of the various parameters of PiNet and show how to set and tune them to achieve good performance. For every parameter setting, we performed the test 50 times.

To evaluate the effect of the mutation strength β on the performance of PiNet, we monitored its performance for diverse values of β. The experimental results are presented in Table 2. Obviously, the average number of generations to convergence, the average computation time, the average number of function evaluations and the best global optimum increase remarkably as β increases. This means that the quality of solutions is enhanced, while the computational cost of PiNet deteriorates, for higher mutation strength and a smaller exploitation scope. Table 2 also shows that when β equals 400, PiNet does not converge within the restricted number of function evaluations: too high a mutation strength and overly intensive local search easily lead to a slow convergence rate and a poor number of local optima, even though the quality of the best solution keeps improving.

Table 3 demonstrates the effect of the affinity suppression threshold σs on the performance of PiNet. When σs varies within a reasonable range, it has little effect on all criteria except the average computation time and the best global optimum. On the contrary, the performance of PiNet deteriorates sharply when σs falls outside that range. In general, the range is problem-dependent, extending from zero to the shortest distance between any two local optima. For example, as Table 3 shows, with the setting σs = 0.4 the average number of generations to convergence is 143 and the average number of local optima found is only 46.9. When σs approaches or exceeds this range, extensive suppression is likely: many rather poor local optima grow into memory cells, and it becomes difficult for neighboring cells to become additional memory cells.

Table 4 shows the effect of varying the update rate d on the performance of PiNet. It may be observed from Table 4 that the average numbers of local and global optima found increase as the update rate increases. Of course, the average number of generations to convergence, the average computation time and the average number of objective function evaluations increase accordingly. PiNet explores more candidate solutions in each generation and may find more local optima and a better global optimum as d increases; however, this improved performance comes at the cost of computation time and objective function evaluations. Furthermore, Table 4 also shows that too low an update rate easily leads to premature convergence and is difficult to use directly in engineering.

Fig. 2. Comparison of max fitness and average fitness of antibody population of two algorithms on the multi function. (a) opt-aiNet. (b) PiNet.


5.2. Comparison with opt-aiNet

The parameters used for the maximization of all benchmark functions are listed in Table 5. The performance of PiNet compared with opt-aiNet is presented in Table 6. The max fitness and average fitness of the antibody population on F1 over the iterations can be seen in Fig. 2. The typical distribution maps of the local optima of F1 obtained by the two algorithms are shown in Fig. 3. Fig. 4 illustrates the statistical results for generations of convergence, computation time, function evaluation number and the best global optimum through box plots.

The probabilities of finding a stable state of the memory cells population within the limited number of evaluations are listed in Table 6. PiNet shows the better accuracy, with a 100% success rate in all independent experiments, excepting F2, F5 and F8, each of which has infinitely many local optima. By contrast, the lowest success rate of opt-aiNet is 38%. This indicates that the more numerous and closer the local optima of a benchmark function are, the more easily opt-aiNet suffers serious degeneracy and premature convergence, while PiNet is not influenced by the distribution of local optima.

In our study, we investigate convergence speed in three respects. The first criterion is the average number of generations to convergence, counted until the memory cells population is in a stable state or the maximum number of function evaluations is met. With the help of the homologous antibody in consecutive generations, PiNet ensures that each antibody improves as much as possible at each iteration. It is therefore not surprising that the number of iterations required by PiNet is less than that of opt-aiNet; on average, PiNet needs only 44% of the generations of opt-aiNet. For example, Fig. 2 illustrates how the max fitness and average fitness of the antibody population vary with the iterative steps of the two algorithms. We can observe that PiNet finds the global optimum and converges faster than opt-aiNet: when the number of iterative steps reaches about 210, all local and global optima have been found by PiNet, whereas opt-aiNet does not converge until the count reaches about 430.

Fig. 3. Typical results of the maximization of the multi function. (a) 3D graph of multi function. (b) local optima found by opt-aiNet. (c) local optima found by PiNet.


Fig. 4. The box plot for opt-aiNet and PiNet on the multi function. (a) Generations of convergence. (b) Computation times. (c) Function evaluation numbers. (d) Best global optimum.

Computation time, also called running time, is the length of time required to perform a computational process. It is well known that it depends on many factors, such as the data structures, the programming style, the performance of the computer being used, and the randomly initialized antibody population; however, it also reflects the convergence speed of an algorithm in a sense. Results depicting the supremacy of PiNet over opt-aiNet in this respect are provided in Table 6; most interestingly, the computation time of PiNet is as low as 21% of that of opt-aiNet on average, which implies that PiNet saves much time in finding the optima. The last criterion used to estimate convergence speed is the number of function evaluations required to obtain as many local optima as possible. In PiNet, the information of antibodies in consecutive generations is used for reference, which helps to increase the local search speed and to regulate the balance between local and global search. As a result, the number of function evaluations of PiNet is much smaller than that of opt-aiNet. Overall, PiNet achieves a quicker convergence speed and better satisfies engineering demands.

As mentioned before, the ability to find multiple global optima with high accuracy, where available, as well as local optima, can be counted as a basic advantage of an optimization algorithm for multimodal function problems. The results in Table 6 also illustrate that, compared with opt-aiNet, PiNet finds more local optima and global optima, excepting function F7, and the global optima are found with high precision. This means that the quality of the solutions found is superior and would meet with the engineer's approval. The typical distribution maps of the local optima of F1 obtained by opt-aiNet and PiNet are shown in Fig. 3. Comparing the two maps clearly shows that PiNet has identified many local optima with lower fitness that opt-aiNet missed. It is worth pointing out that the number of local optima found may sometimes exceed the theoretical number, simply because it is taken to be the number of memory cells, which may still include some immature antibodies. For example, function F3 theoretically has 6 local optima; however, opt-aiNet reports 6.1 local optima on average because of serious premature phenomena, the surplus being false optima, which are invalid and harmful in engineering applications.

The functions with varied features used in this paper are widely used to evaluate algorithms for multimodal function optimization. We eliminate the influence of subjective and objective conditions through large-scale independent experiments. As can be seen from Table 6 and Fig. 4, the standard deviation of almost every criterion obtained by PiNet is smaller than that obtained by opt-aiNet, which indicates that PiNet is reliable and robust and is not easily affected by the function selected or by the initial population.


6. Conclusion

The multimodal function optimization problem, a kind of NP-hard problem, is used to evaluate the performance of the new algorithm. In this paper, an improved algorithm named PiNet is proposed for efficient and reliable multimodal function optimization. The information of the antibodies and the memory cells is effectively utilized. Theoretical analysis and experimental results show that, compared with the opt-aiNet method, the new algorithm improves search performance significantly in success rate, convergence speed and other criteria. However, the "no free lunch" theorems (Wolpert and Macready, 1997) indicate that, for any algorithm, elevated performance over one class of problems is offset by performance over another class. When new antibodies are generated, we only consider the update rate d, neglecting the current memory cell scale and solution ability. In future work, we will study how to control the antibody update dynamically and how to reduce the expansion of the population scale. Dynamic parameter tuning can also be utilized to improve the proposed algorithm. Finally, we will investigate which features of multimodal functions are well suited to PiNet.

Acknowledgment

We are grateful to the anonymous referees for their invaluable suggestions to improve the paper. This work was supported by the National Natural Science Foundation of China (Grant nos. 60603026 and 60802056).

References

Ahrari, A., Shariat Panahi, M., Atai, A.A., 2009. GEM: a novel evolutionary optimization method with improved neighborhood search. Applied Mathematics and Computation 210 (2), 376–386.
Burnet, F.M., 1959. The Clonal Selection Theory of Acquired Immunity. Cambridge University Press.
Chen, D.B., Zhao, C.X., 2009. Particle swarm optimization with adaptive population size and its application. Applied Soft Computing 9 (1), 39–48.
David, B., David, R.B., Ralph, R.M., 1993. A sequential niche technique for multimodal function optimization. Evolutionary Computation 1 (2), 101–125.
De Castro, L.N., 2007. Fundamentals of natural computing: an overview. Physics of Life Reviews 4 (1), 1–36.
De Castro, L.N., Timmis, J., 2002. An artificial immune network for multimodal function optimization. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 699–704.
De Castro, L.N., Von Zuben, F.J., 2000. aiNet: an evolutionary immune network for data clustering. In: Proceedings of the 6th Brazilian Symposium on Neural Networks, pp. 231–259.
De Castro, L.N., Von Zuben, F.J., 2002. Learning and optimization using the clonal selection principle. IEEE Transactions on Evolutionary Computation 6 (3), 239–251.
Dilettoso, E., Salerno, N., 2006. A self-adaptive niching genetic algorithm for multimodal optimization of electromagnetic devices. IEEE Transactions on Magnetics 42 (4), 1203–1206.
Du, H.F., Gong, M.G., Liu, R.C., et al., 2005. Adaptive chaos clonal evolutionary programming algorithm. Science in China Series F: Information Sciences 48 (5), 579–595.
Fischetti, M., Martello, S., 1988. A hybrid algorithm for finding the kth smallest of n elements in O(n) time. Annals of Operations Research 13 (1), 401–419.
Goldberg, D.E., Richardson, J., 1987. Genetic algorithms with sharing for multimodal function optimization. In: Proceedings of the 2nd International Conference on Genetic Algorithms and their Application, pp. 41–49.
Gong, M.G., Du, H.F., Jiao, L.C., 2006. Optimal approximation of linear systems by artificial immune response. Science in China Series F: Information Sciences 49 (1), 63–79.
Gudla, P.K., Ganguli, R., 2005. An automated hybrid genetic-conjugate gradient algorithm for multimodal optimization problems. Applied Mathematics and Computation 167 (2), 1457–1474.
Hart, E., Timmis, J., 2008. Application areas of AIS: the past, the present and the future. Applied Soft Computing 8 (1), 191–201.
Ho, S.L., Yang, S.Y., Ni, G.Z., et al., 2007. An improved PSO method with application to multimodal functions of inverse problems. IEEE Transactions on Magnetics 43 (4), 1597–1600.
Horowitz, E., Sahni, S., 1978. Fundamentals of Computer Algorithms. Pitman, New York.
Im, C.H., Kim, H.K., Jung, H.K., et al., 2004. A novel algorithm for multimodal function optimization based on evolution strategy. IEEE Transactions on Magnetics 40 (2), 1224–1227.
Kennedy, J., Eberhart, R., 1995. Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948.
Kim, J.K., Cho, D.H., Jung, H.K., et al., 2002. Niching genetic algorithm adopting restricted competition selection combined with pattern search method. IEEE Transactions on Magnetics 38 (2), 1001–1004.
Klarreich, E., 2002. Inspired by immunity. Nature 415 (31), 468–470.
Lee, C.G., Cho, D.H., Jung, H.K., 1999. Niching genetic algorithm with restricted competition selection for multimodal function optimization. IEEE Transactions on Magnetics 35 (3), 1722–1725.
Lee, J.H., Song, S.H., Yang, Y.J., et al., 2007. Multimodal function optimization based on the survival of the fittest kind of the evolution strategy. In: Proceedings of the 29th Annual International Conference of the IEEE EMBS, pp. 3164–3167.
Li, J.P., Marton, E.B., Geoffrey, T.P., et al., 2002. A species conserving genetic algorithm for multimodal function optimization. Evolutionary Computation 10 (3), 207–234.
Liang, J.J., Qin, A.K., Suganthan, P.N., et al., 2006. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Transactions on Evolutionary Computation 10 (3), 281–295.
Ling, Q., Wu, G., Yang, Z.Y., et al., 2008. Crowding clustering genetic algorithm for multimodal function optimization. Applied Soft Computing 8 (1), 88–95.
Liu, L., Xu, W.B., 2008. A cooperative artificial immune network with particle swarm behavior for multimodal function optimization. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1550–1555.
Lv, J., 2007. Study on chaos immune network algorithm for multimodal function optimization. In: Proceedings of the 4th International Conference on Fuzzy Systems and Knowledge Discovery, pp. 684–689.
Mahfoud, S.W., 1995. Niching methods for genetic algorithms. Ph.D. dissertation, Illinois Genetic Algorithms Laboratory, University of Illinois, Urbana, IL.
Miller, B.L., Shaw, M.J., 1996. Genetic algorithms with dynamic niche sharing for multimodal function optimization. In: Proceedings of the 3rd IEEE Conference on Evolutionary Computation, pp. 786–791.
Parrott, D., Li, X.D., 2006. Locating and tracking multiple dynamic optima by a particle swarm model using speciation. IEEE Transactions on Evolutionary Computation 10 (4), 440–458.
Qi, H., 1999. A genetic algorithm with tabu list and sharing scheme for optimal design of electrical machines. Electric Power Components and Systems 27 (5), 543–552.
Sareni, B., Krähenbühl, L., 1998. Fitness sharing and niching methods revisited. IEEE Transactions on Evolutionary Computation 2 (3), 97–106.
Seo, J.H., Im, C.H., Heo, C.G., et al., 2006. Multimodal function optimization based on particle swarm optimization. IEEE Transactions on Magnetics 42 (4), 1095–1098.
Seo, J.H., Im, C.H., Kwak, S.Y., et al., 2008. An improved particle swarm optimization algorithm mimicking territorial dispute between groups for multimodal function optimization problems. IEEE Transactions on Magnetics 44 (6), 1046–1049.
Tang, T.Y., Qiu, J.J., 2006. An improved multimodal artificial immune algorithm and its convergence analysis. In: Proceedings of the 6th World Congress on Intelligent Control and Automation, pp. 3335–3339.
Timmis, J., Hone, A., Stibor, T., et al., 2008. Theoretical advances in artificial immune systems. Theoretical Computer Science 403 (1), 11–32.
Tsoulos, I.G., Lagaris, I.E., 2008. GenMin: an enhanced genetic algorithm for global optimization. Computer Physics Communications 178 (11), 843–851.
Wei, L.Y., Zhao, M., 2005. A niche hybrid genetic algorithm for global optimization of continuous multimodal functions. Applied Mathematics and Computation 160 (3), 649–661.
Wolpert, D.H., Macready, W.G., 1997. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1 (1), 67–82.
IEEE Transactions on Evolutionary Computation 2 (3), 97–106. Seo, J.H., Im, C.H., Heo, C.G., et al., 2006. Multimodal function optimization based on particle swarm optimization. IEEE Transactions on Magnetics. 42 (4), 1095–1098. Seo, J.H., Im, C.H., Kwak, S.Y., et al., 2008. An improved particle swarm optimization algorithm mimicking territorial dispute between groups for multimodal function optimization problems. IEEE Transactions on Magnetics 44 (6), 1046–1049. Tang, T.Y., Qiu, J.J. An improved multimodal artificial immune algorithm and its convergence analysis. In: Proceedings of the 6th World Congress on Intelligent Control and Automation, 2006, pp. 3335–3339. Timmis, J., Hone, A., Stibor, T., et al., 2008. Theoretical advances in artificial immune systems. Theoretical Computer Science 403 (1), 11–32. Tsoulos, I.G., Lagaris, I.E., 2008. GenMin: an enhanced genetic algorithm for global optimization. Computer Physics Communications 178 (11), 843–851. Wei, L.Y., Zhao, M., 2005. A niche hybrid genetic algorithm for global optimization of continuous multimodal functions. Applied Mathematics and Computation 160 (3), 649–661. Wolpert, D.H., Macready, W.G., 1997. No free lunchtheorems for optimization. IEEE Transactions on Evolutionary Computation 1 (1), 67–82.