
Multiple Solution Search Based on Hybridization of Real-Coded Evolutionary Algorithm and Quasi-Newton Method Satoshi Ono, Member, IEEE, Yusuke Hirotani, and Shigeru Nakayama, Non-Members, IEEE

Abstract— In recent years, many evolutionary computation methods have been proposed and applied to real-world problems; gradient methods, meanwhile, are regarded as promising for their capacity to solve problems involving real-coded parameters. Addressing real-world problems should involve not only the search for a single optimal solution but also for a set of several quasi-optimal solutions. Although some methods aimed at finding multiple solutions have been proposed (e.g., Genetic Algorithm with Sharing and Immune Algorithm), they could not produce highly optimized solutions to real-coded problems. This paper proposes hybrid algorithms combining real-coded evolutionary computation algorithms and gradient search methods for multiple-solution search in multimodal optimization problems. Furthermore, a new evaluation function of solution candidates incorporating the gradient is presented and discussed in order to find quasi-optimal solutions. Two hybrid algorithms are proposed: a hybridization between Immune Algorithm and the Quasi-Newton method (IA+QN), and a hybridization between Genetic Algorithm with Sharing and the Quasi-Newton method (GAS+QN). Experimental results have shown that the proposed methods can find optimal and quasi-optimal solutions with high accuracy and efficiency even in high-dimensional multimodal benchmark functions. The results have also shown that GAS+QN has better performance and higher robustness in terms of parameter configuration than IA+QN.

I. INTRODUCTION

In real-world optimization problems such as architectural structure design, optical lens design, or protein structure prediction, the availability of several feasible solutions can be more important than a single optimal solution, since it allows designers or engineers to choose the most desirable approach among the solution candidates. Various algorithms such as Simulated Annealing, Tabu Search, and Genetic Algorithm (GA) have been proposed to solve large-scale combinatorial optimization problems in the real world. Nevertheless, these probabilistic search methods are inadequate for finding multiple optimal or promising quasi-optimal solutions simultaneously, because they were primarily designed to find a single optimal or quasi-optimal solution. Genetic Algorithm with Sharing (GAS) [1], Immune Algorithm (IA) [2]–[6], and other iterated genetic algorithms [7] have been proposed for the simultaneous search of multiple solutions. It is well known that hybridizing evolutionary computation algorithms (which are appropriate for global search) with local search algorithms enhances search performance. Although many studies have been conducted on the hybridization of evolutionary computation algorithms for a single solution and

S. Ono, Y. Hirotani, and S. Nakayama are with the Department of Information and Computer Science, Faculty of Engineering, Kagoshima University, Japan (phone: +81-99-285-8453; fax: +81-99-285-8464; email: {ono, sc101064, shignaka}@ics.kagoshima-u.ac.jp).

local search algorithms [8]–[11], as far as we know little attention has been given to the hybridization of multiple-solution search algorithms with local search algorithms. In this paper, two methods for the simultaneous search of multiple optimal and quasi-optimal solutions are proposed, namely hybridizations between the Quasi-Newton (QN) method and real-coded evolutionary computation algorithms (i.e., real-coded IA and GAS). These methods also calculate the fitness function with the gradient, which allows finding multiple solutions with high accuracy and efficiency even in high-dimensional multimodal functions. Experimental results on problems of up to 15 dimensions have shown that the proposed methods can find optimal and quasi-optimal solutions with high accuracy and efficiency even in high-dimensional multimodal benchmark functions. The results have also shown that GAS+QN has better performance and less dependency on search parameters than IA+QN.

II. PRINCIPLES OF THE PROPOSED METHODS

We propose two hybrid algorithms, IA+QN and GAS+QN, for multiple-solution search, based on three principles as follows:

1) Utilizing real-coded genes. In recent years, various genetic operations such as crossover and mutation for real-coded GA have been proposed, including linear crossover (LX) [12], blend crossover (BLX-α) [13], Simplex Crossover (SPX) [14], and Unimodal Normal Distribution Crossover (UNDX) [15]. Few or no real-coded algorithms, however, have been proposed for multiple-solution search. The proposed methods are based on evolutionary algorithms, which aim at the simultaneous search for optimal and semi-optimal solutions, and utilize real-coded genes so that solutions of great precision in various real-world problems can be found.

2) Applying the Quasi-Newton method to promising solution candidates. In general, most niching algorithms work effectively only in low-dimensional problems [5], [7], [16], [17]. The effectiveness of QN also decreases in higher-dimensional problems.
The proposed methods use GAS or IA for a global search that balances intensification and diversification, and QN for a careful local search. The hybridization enables multiple-solution search even in high-dimensional problems on which the
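The interplay of these principles can be sketched as follows. This is a minimal Python illustration, not the authors' implementation: the names blx_alpha, local_refine, and hybrid_search are hypothetical, and a simple forward-difference gradient ascent stands in for the Quasi-Newton step.

```python
import random

def blx_alpha(p1, p2, alpha=0.5):
    """BLX-alpha crossover [13]: each child gene is sampled uniformly from
    the interval spanned by the parent genes, extended by alpha on each side."""
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        d = hi - lo
        child.append(random.uniform(lo - alpha * d, hi + alpha * d))
    return child

def local_refine(x, f, steps=50, h=1e-4, lr=0.01):
    """Careful local search. NOTE: plain gradient ascent with
    forward-difference gradients stands in for the Quasi-Newton method."""
    x = list(x)
    for _ in range(steps):
        fx = f(x)
        grad = []
        for i in range(len(x)):
            y = list(x)
            y[i] += h
            grad.append((f(y) - fx) / h)
        x = [xi + lr * g for xi, g in zip(x, grad)]
    return x

def hybrid_search(f, dim, pop_size=20, generations=30, bounds=(-1.0, 1.0)):
    """Hybrid loop: global evolutionary search, with local refinement
    applied only to the current elite (the 'Elite Only' policy)."""
    pop = [[random.uniform(*bounds) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f, reverse=True)      # maximization
        elite = local_refine(pop[0], f)    # QN stand-in on the elite only
        children = [blx_alpha(*random.sample(pop[:10], 2))
                    for _ in range(pop_size - 1)]
        pop = [elite] + children
    return max(pop, key=f)
```

For example, hybrid_search(lambda v: -sum(x * x for x in v), dim=2) drives the population toward the maximum of a concave quadratic at the origin.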

1-4244-1340-0/07/$25.00 ©2007 IEEE

Fig. 1. [Plots of affinity against v under different weight settings; panel (a): W1 = 0, W2 = 0. The remaining panels were lost in extraction.]

f⁺(v) = ( f(v) + (W1 × fmax) / (1 + |f′(v)|) + W2 × u0(f′(v)) ) / ( (1 + W1) × fmax ).   (1)

f′(v) is approximated by the forward difference f′(v) ≈ (f(v + dv) − f(v)) / dv. ua(t) is a step function defined as follows: ua(t) = 1 if t ≥ a, and ua(t) = 0 if t < a. We evaluate the algorithms by measuring SAll [%] (the rate of runs in which the algorithm succeeds in finding all solutions), FOpt [%] (the mean discovery rate of optimal solutions), FSemi [%] (the mean discovery rate of quasi-optimal solutions), EOpt (the mean number of function calls until an optimal solution is found), and EAll (the mean number of function calls until all optimal and quasi-optimal solutions are found). We tested the QN application rate PQN at 40%, 80%, and 'Elite Only' (EO); EO makes IA+QN and GAS+QN apply QN only to promising solution candidates: memory cell candidates in IA+QN and new elite individuals in GAS+QN. In the case PQN = EO, if there is no promising candidate in a generation, an antibody (or an individual) is selected at random and QN is applied to it. The proposed affinity calculation equation (1) involves two weight parameters, W1 and W2. W2 does not require detailed adjustment: a value much higher than 1.0, such as 1.0 × 10^2 or 1.0 × 10^3, is sufficient for W2 to work properly, as shown in Figures 1(c) and 1(e). We therefore focus on the effectiveness of W1, varying it over 0.0, 1.0, and 2.0.

C. Experimental results in F1, F2, and F3

Table III shows the results of IQN, IA, GAS, IGA+QN, IA+QN, and GAS+QN on functions F1 (n = 3), F2 (n = 3), and F3 (n = 2). A brief look at Table III shows that IQN, IA+QN, and GAS+QN could find all solutions simultaneously, whereas IA, GAS, and IGA+QN could not. IA and GAS could find all optimal solutions to an accuracy of about 1.0 × 10^−4, but could not find the solutions to an accuracy of 1.0 × 10^−8. IGA+QN succeeded in finding more than half of the solutions, but could not find all of them. IQN found all solutions in 100% of runs, and its EAll was lower than that of IA+QN and GAS+QN. IA+QN and GAS+QN also succeeded in finding all solutions in 100% of runs.
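Equation (1) and its helper definitions can be evaluated numerically as in the following sketch. This is a one-dimensional Python illustration only; the grouping of terms, which is ambiguous in the extracted layout, and all function names are assumptions rather than the authors' code.

```python
def u(t, a=0.0):
    """Step function u_a(t): 1 if t >= a, else 0."""
    return 1.0 if t >= a else 0.0

def forward_diff(f, v, dv=1e-6):
    """Forward-difference approximation f'(v) = (f(v + dv) - f(v)) / dv."""
    return (f(v + dv) - f(v)) / dv

def affinity(f, v, f_max, w1=0.0, w2=100.0, dv=1e-6):
    """One possible grouping of the terms in Eq. (1): the raw fitness f(v)
    is augmented by a W1 term rewarding small |f'(v)| and a W2 step term,
    then normalized by (1 + W1) * f_max."""
    g = forward_diff(f, v, dv)
    return (f(v) + w1 * f_max / (1.0 + abs(g)) + w2 * u(g)) / ((1.0 + w1) * f_max)
```

With w1 = w2 = 0 the expression reduces to plain fitness normalization, f(v) / f_max, which is consistent with the role of W1 and W2 as optional gradient-based bonuses.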
In F3, IA+QN found all solutions faster than IQN, particularly in the case N = 250, W1 = 0, and PQN = EO. In F1, GAS+QN found all solutions faster than IQN in the case N = 250, W1 = 0, and PQN = EO. In addition, the results on F2 suggest that GAS+QN was more robust against parameter configuration than IA+QN.

D. Experimental results in F4

Table IV shows the results, on function F4, of the algorithms that showed good search performance in F1, F2, and F3: IQN, IA+QN, and GAS+QN. A brief observation of Table IV

2007 IEEE Congress on Evolutionary Computation (CEC 2007)


TABLE III. EXPERIMENTAL RESULTS ON F1, F2, AND F3.

[Table III data could not be recovered from the extracted layout. Columns: Method, N, W1, PQN [%]; for each of F1 (n = 3), F2 (n = 3), and F3 (n = 2): SAll [%], FOpt [%] (plus FSemi [%] for F2), EOpt, and EAll. Rows: the previous methods IQN, IA, GAS, and IGA+QN and the proposed methods IA+QN and GAS+QN, with N ∈ {250, 500}, W1 ∈ {0, 1, 2}, and PQN ∈ {EO, 40, 80}.]

shows that IQN could not find all solutions in the case n ≥ 10, whereas IA+QN and GAS+QN could even in the case n = 15. QN could find all solutions in F4 with n = 5, but numerous local optimal solutions disturbed QN in the case


[Remaining EAll columns of Table III; data not recoverable from the extracted layout.]

n ≥ 10; as a result, QN failed to find even one optimal solution in the case n = 15. Although both IA+QN and GAS+QN could find all optimal solutions even when n = 15, it is obvious that


TABLE IV. EXPERIMENTAL RESULTS ON F4.

[Table IV data could not be recovered from the extracted layout. Columns: Method, N, W1, PQN [%]; for F4 with n = 5, n = 10, and n = 15: SAll [%], FOpt [%], EOpt, and EAll. Rows: the previous method IQN and the proposed methods IA+QN and GAS+QN, with N ∈ {250, 500}, W1 ∈ {0, 1, 2}, and PQN ∈ {EO, 40, 80}.]

GAS+QN was more robust against parameter configuration than IA+QN. In F4 with n = 15, GAS+QN succeeded in finding all solutions mainly when PQN = 40, W1 = 0, or N = 250, whereas IA+QN succeeded only when PQN = EO and W1 = 0. Focusing on EAll reveals that IA+QN with PQN = EO and W1 = 0 was almost the fastest of the three algorithms.

VI. DISCUSSION

A. Parameter W1

Parameter W1 influences the performance of multiple-solution search. As Tables III and IV show, all tested algorithms worked better with W1 = 0.0 than with W1 > 0.0 in problems whose objective is to find plural optimal solutions, such as F1, F3, and F4. Setting W1 higher than 0.0 increased the affinity (or fitness) of local optimal solutions and reduced the hillside areas, as shown in Figure 1; consequently, search performance deteriorated.

[Columns of Table IV for F4 with n = 10 and n = 15; data not recoverable from the extracted layout.]

In contrast to F1, F3, and F4, setting W1 to 1.0 or 2.0 helped the tested algorithms find quasi-optimal solutions in F2, whose objective is to find the optimal solution and all quasi-optimal solutions. W1 should therefore be determined depending on the purpose: if quasi-optimal solutions are necessary, W1 should be set to 1.0 or higher.

B. Parameter PQN

Parameter PQN controls the balance of search cost between IA (or GAS) and QN. In problems whose objective is to find multiple optimal solutions, such as F1, F3, and F4, the proposed IA+QN and GAS+QN found all solutions quickly when PQN = EO, in which QN is applied only to promising solution candidates. In F2, in which quasi-optimal solutions must also be found, IA+QN and GAS+QN succeeded in finding all quasi-optimal solutions when PQN ≥ 40; this shows that applying QN to randomly chosen solution candidates aids in finding quasi-optimal solutions. GAS+QN with lower PQN



could find all solutions more quickly than with higher PQN, because the evolution of GAS can search for solutions better, without falling into local optima, than QN in high-dimensional problems such as n ≥ 10.

C. Comparison between IA+QN and GAS+QN

One difference between the behaviors of IA and GAS is whether the density of solution candidates is considered explicitly (in IA) or implicitly (in GAS): IA obtains promising candidates from clusters of gathered antibodies, whereas GAS simply regards individuals with high fitness as the candidates. The other difference between them is whether promising solution candidates are used for reproduction (in GAS) or not (in IA): elite individuals are subject to crossover and mutation, whereas memory cells are not. These two differences yield only a small difference in search performance between IA and GAS in functions F1, F2, and F3. Combining QN emphasizes the former difference. Applying QN to a randomly chosen antibody (or individual) may improve it into a promising candidate. IA+QN does not admit an isolated antibody as a memory cell even if the antibody has the highest affinity, whereas GAS+QN treats an isolated promising individual as an elite immediately. This causes the difference in robustness against the configuration of PQN and W1 between IA+QN and GAS+QN in F2 and in F4 with n ≥ 10.

VII. CONCLUSION

In this paper, we proposed hybrid algorithms of evolutionary computation and gradient search, together with an evaluation function for solution candidates that uses the gradient of the fitness landscape. We performed experiments comparing the proposed IA+QN and GAS+QN with other conventional algorithms on high-dimensional functions. Experimental results showed how the hybridization enables finding all solutions in high-dimensional functions. In addition, the results showed that the evaluation function of solution candidates with gradient helps all tested algorithms find quasi-optimal solutions.
In the future, we plan to apply the proposed methods to real-world problems such as building structure design and optical lens design.

REFERENCES

[1] D. E. Goldberg, "An investigation of niche and species formation in genetic function optimization," in Proceedings of the Third International Conference on Genetic Algorithms, pp. 42–50, 1989.
[2] K. Mori, M. Tsukiyama, and T. Fukuda, "Application of an immune algorithm to multi-optimization problems," IEEJ Transactions on Electronics, Information and Systems, no. 5, pp. 593–598, 1997.
[3] T. Fukuda, K. Mori, and M. Tsukiyama, "Parallel search for multi-modal function optimization with diversity and learning of immune algorithm," in Artificial Immune Systems and Their Applications, Springer, pp. 210–220, 1998.
[4] C. A. C. Coello and N. C. Cortes, "Use of emulations of the immune system to handle constraints in evolutionary algorithms," in Intelligent Engineering Systems through Artificial Neural Networks, vol. 11, pp. 141–146, 2001.
[5] K. Yasuda and T. Hamada, "A study of the similarity between immune algorithms and genetic algorithms from the viewpoint of optimization methods," IEEJ Transactions on Electronics, Information and Systems, vol. 123, no. 3, pp. 576–584, 2003.


[6] K. Mitsui, J. Osaki, H. Omori, H. Tagawa, and T. Honnma, Heuristic Methods for Optimization of Structural Systems. Corona Publishing Co. Ltd., 2004.
[7] D. Beasley, D. R. Bull, and R. R. Martin, "A sequential niche technique for multimodal function optimization," Evolutionary Computation, vol. 1, no. 2, pp. 101–125, 1993.
[8] J. Finckenor, "Genetic algorithms, with inheritance, versus gradient optimizers, and GA/gradient hybrids," pp. 257–264, 1997.
[9] J. He, J. Xu, and X. Yao, "Solving equations by hybrid evolutionary computation techniques," IEEE Transactions on Evolutionary Computation, vol. 4, no. 3, 2000.
[10] T. Hiroyasu, M. Miki, Y. Minami, and Y. Tanimura, "Global optimal-point search of hybrid genetic algorithms using gradient method," in Proceedings of the 6th Modeling and Problem Solving Symposium, Information Processing Society of Japan, pp. 57–64, 2000.
[11] G. Zhang and H. Lu, "Hybrid real-coded genetic algorithm with quasi-simplex technique," International Journal of Computer Science and Network Security, vol. 6, no. 10, pp. 246–253, 2006.
[12] A. H. Wright, "Genetic algorithms for real parameter optimization," in Foundations of Genetic Algorithms, G. J. Rawlins, Ed. San Mateo, CA: Morgan Kaufmann, pp. 205–218, 1991.
[13] L. J. Eshelman and J. D. Schaffer, "Real-coded genetic algorithms and interval-schemata," in Foundations of Genetic Algorithms, vol. 2, pp. 187–202, 1993.
[14] S. Tsutsui and A. Ghosh, "A study on the effect of multi-parent recombination in real coded genetic algorithms," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC-98), pp. 828–833, 1998.
[15] I. Ono, H. Kita, and S. Kobayashi, "A robust real-coded genetic algorithm using unimodal normal distribution crossover augmented by uniform crossover: Effects of self-adaptation of crossover probabilities," pp. 496–503, 1999.
[16] O. Takahashi, S. Kimura, and S.
Kobayashi, "An adaptive neighboring search using crossover-like mutation for deceptive multimodal function optimization," Journal of the Japanese Society for Artificial Intelligence, vol. 16, no. 2, pp. 175–184, 2001.
[17] S. W. Mahfoud, "Crowding and preselection revisited," Parallel Problem Solving from Nature, vol. 2, pp. 27–36, 1992.
[18] D. Dasgupta, Ed., Artificial Immune Systems and Their Applications. Springer, 1998.
[19] N. Toma, S. Endo, and K. Yamada, "The proposal and evaluation of an adaptive memorizing immune algorithm with two memory mechanisms," Journal of the Japanese Society for Artificial Intelligence, vol. 15, no. 6, pp. 1097–1106, 2000.
[20] N. Toma, S. Endo, K. Yamada, and H. Miyagi, "An immune optimization inspired by biological immune cell-cooperation for division-and-labor problem," in Proceedings of the 4th International Conference on Computational Intelligence and Multimedia Applications (ICCIMA'01), pp. 153–157, 2001.
[21] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, 1989.
[22] M. Himeno and R. Himeno, "The niching method for obtaining global optima and local optima in multimodal functions," IEICE Transactions on Information and Systems, vol. J85-D-I, no. 11, pp. 1015–1027, 2002.
[23] A.-R. Hedar and M. Fukushima, "Minimizing multimodal functions by simplex coding genetic algorithm," Optimization Methods and Software, vol. 18, pp. 265–282, 2003.
[24] Y. Hirotani, S. Ono, and S. Nakayama, "A fundamental study of multiple-solution search based on a hybrid algorithm of real-valued immune algorithm and quasi-Newton method," IEICE Transactions on Information and Systems, vol. J89-D, no. 1, pp. 121–128, 2006.
