
A Constrained Optimization Evolutionary Algorithm Based on Multiobjective Optimization Techniques Yong Wang

Zixing Cai

College of Information Science and Engineering Central South University Changsha, 410083, P.R.China. [email protected]

College of Information Science and Engineering Central South University Changsha, 410083, P.R.China. [email protected]

Abstract—This paper presents a novel evolutionary algorithm for constrained optimization. During the evolutionary process, the algorithm relies on multiobjective optimization techniques: an individual in the parent population may be replaced if it is dominated by a nondominated individual chosen from the offspring population. In addition, a model of a population-based algorithm-generator and an infeasible solutions archiving and replacement mechanism are introduced. Furthermore, the simplex crossover is used as the recombination operator to enrich the exploration and exploitation abilities of the proposed approach. The new approach is tested on thirteen well-known benchmark functions, and the empirical evidence suggests that it is robust, efficient, and generic when handling linear/nonlinear equality/inequality constraints. Compared with some other state-of-the-art algorithms, our algorithm remarkably outperforms them in terms of the best, median, mean, and worst objective function values and the standard deviations.

1 Introduction
In many science and engineering disciplines, it is not uncommon to face a large number of constrained optimization problems (COPs). Without loss of generality, the general nonlinear programming (NLP) problem that we are interested in can be formulated as:

Find $\vec{x} = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n$ which optimizes $f(\vec{x})$,

where $\vec{x} \in \Omega \subseteq S$, and $S$ is an $n$-dimensional rectangular space in $\mathbb{R}^n$ defined by the parametric constraints

$$l(i) \le x_i \le u(i), \quad 1 \le i \le n.$$

The feasible region $\Omega \subseteq S$ is defined by a set of $m$ additional linear or nonlinear constraints ($m \ge 0$):

$$g_j(\vec{x}) \le 0, \ j = 1, \ldots, q \quad \text{and} \quad h_j(\vec{x}) = 0, \ j = q+1, \ldots, m,$$

where $q$ is the number of inequality constraints and $m - q$ is the number of equality constraints. An inequality constraint that satisfies $g_j(\vec{x}) = 0$ at a point $\vec{x} \in \Omega$ is said to be active at $\vec{x}$. Apparently, all equality constraints $h_j(\vec{x})$ ($j = q+1, \ldots, m$) are considered active at all points of $\Omega$.

During the past decade, evolutionary algorithms (EAs), as a nature-inspired search and optimization technique, have been broadly applied to solve COPs, and a considerable number of constrained optimization evolutionary algorithms (COEAs) have been proposed ([1], [2]). Penalty function methods are among the most common methods for solving COPs. The principal idea of this kind of method is to transform a constrained optimization problem into an unconstrained one by introducing a penalty term into the original objective function to penalize constraint violations. In general, the penalty term is based on the distance of an individual from the boundaries of the feasible set. The distance of an individual $\vec{x}$ from the $j$-th constraint can be constructed as:

$$G_j(\vec{x}) = \begin{cases} \max\{0, g_j(\vec{x})\}, & 1 \le j \le q \\ |h_j(\vec{x})|, & q+1 \le j \le m. \end{cases}$$

Let $G(\vec{x}) = \sum_{j=1}^{m} G_j(\vec{x})$ denote the distance of the individual $\vec{x}$ from the boundaries of the feasible set, which also reflects its degree of constraint violation. However, Yu et al. [3] pointed out that even the most dynamic penalty-setting methods, which start with a low parameter value and end up with a high one, are unlikely to work well for problems where the unconstrained global optimum is far away from the constrained one. Recently, emphasis has been increasingly placed on COEAs based on multiobjective optimization concepts [4], because they require no penalty function. Motivated by this fact, the current study proposes a novel evolutionary algorithm based on multiobjective optimization techniques for COPs. In the new approach, a given COP is converted into a bi-objective optimization problem, and a nondominated individuals replacement scheme is devised in terms of the converted fitness. A model of a population-based algorithm-generator is introduced, whose aim is to guide the search toward the global optima of COPs. To enhance the convergence speed, an infeasible solutions archiving and replacement mechanism is added that helps the population rapidly approach or land in the feasible region of the search space from different directions. Simplex crossover is used to improve the exploration and exploitation capabilities of our algorithm. With these combined elements, we show that the proposed algorithm has advantages over the compared algorithms on many indicators of the experimental results.

The remainder of this paper is organized as follows. Section 2 reviews the related work on handling COPs via EAs. Section 3 presents a detailed description of the proposed algorithm. In Section 4, our algorithm and two state-of-the-art algorithms are used to produce experimental results for the chosen test functions, and several performance metrics are examined to compare their performance. Finally, Section 5 provides some concluding remarks.
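The constraint-violation measure $G_j$ and its sum $G(\vec{x})$ introduced above translate directly into code. The following is a minimal sketch; the function and argument names are ours, not the authors':

```python
def constraint_violation(x, ineq_cons, eq_cons):
    """Degree of constraint violation G(x) = sum_j G_j(x).

    ineq_cons: list of functions g_j with g_j(x) <= 0 when satisfied.
    eq_cons:   list of functions h_j with h_j(x) == 0 when satisfied.
    """
    # G_j(x) = max{0, g_j(x)} for inequality constraints
    g_part = sum(max(0.0, g(x)) for g in ineq_cons)
    # G_j(x) = |h_j(x)| for equality constraints
    h_part = sum(abs(h(x)) for h in eq_cons)
    return g_part + h_part
```

A point is feasible exactly when this function returns zero.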

2 Previous Work
The main idea of COEAs based on multiobjective optimization techniques is to convert COPs into multiobjective optimization problems (MOPs) that have $m + 1$ objectives, where $m$ is the number of constraints. Then, any multiobjective optimization technique can be applied to the new vector $\vec{v} = (f(\vec{x}), f_1(\vec{x}), \ldots, f_m(\vec{x}))$, where $f_1(\vec{x}), \ldots, f_m(\vec{x})$ are the constraints of the given problem. Several representative approaches in this area are reviewed next.

Based on Fonseca and Fleming's Pareto ranking process [5], Coello et al. [6] proposed an alternative approach in which each individual in the population is associated with a rank, and feasible individuals have a higher rank value than infeasible ones. Coello et al. [7] implemented a version of the Niched-Pareto Genetic Algorithm (NPGA) [8]. Nevertheless, unlike NPGA, this approach does not require niching; instead, it uses an additional parameter called $S_r$ to control the diversity of the population. A ratio $S_r$ of individuals is selected by dominance-based tournament selection, and the remaining $(1 - S_r)$ individuals are selected by a purely probabilistic approach. Aguirre et al. [9] proposed an alternative approach, which is an extension of the Pareto Archived Evolution Strategy (PAES) [10] and uses Pareto dominance as the selection criterion. In this method, two crucial features are introduced: inverted "ownership" and shrinking of the objective space. During evolution, the search space is shrunk continuously by exploiting information from the individuals surrounding the feasible region, so that the size of the search space eventually becomes very small and the solutions obtained are competitive. Zhou et al. [11] proposed a method which uses Pareto dominance to assign each individual's Pareto strength. Based on Pareto strength, the ranking of individuals in the population is conducted as follows: 1) compare each individual's strength; the one with the higher strength wins; 2) if the strengths are equal, the one with the lower degree of constraint violation is better. Besides, a method [12] based on the Vector Evaluated Genetic Algorithm (VEGA) [13] and a method [14] based on the combination of VEGA with Pareto ranking have also been developed to handle COPs. In order to evaluate the capabilities of a set of constraint-handling methods based on multiobjective optimization techniques, Mezura et al. [4] presented a comprehensive experimental study focusing on four such methods. An important conclusion in [4] is that additional mechanisms have to be used to improve the effectiveness of these approaches.

3 The Proposed Algorithm
As a matter of fact, COEAs have two definite objectives: 1) landing in or approaching the feasible domain promptly, and 2) reaching the global optimal solution in the end. However, according to empirical evidence, these ultimate goals are far from being accomplished by existing COEAs. One of the main deficiencies of existing COEAs is the lack of a suitable scheme to compare and select individuals, which is vital for achieving the second objective. In addition, unlike in unconstrained optimization problems, the search space of COPs is composed of feasible and infeasible regions. As a result, effectively utilizing infeasible solutions becomes very important and has a notable impact on achieving the first objective, especially when the optimum is located on the boundaries of the feasible region or when the feasible region is very small compared to the entire search space. Motivated by these considerations, we present a novel approach based on multiobjective optimization techniques. The new approach can find the true feasible optima of thirteen well-known benchmark functions chosen from [15]. It converts a COP into a bi-objective optimization problem that minimizes the original objective function $f(\vec{x})$ and the degree of constraint violation $G(\vec{x})$. For the sake of clarity, let $\vec{f}(\vec{x}) = (f(\vec{x}), G(\vec{x}))$.

3.1 Nondominated Individuals Replacement Scheme
Without loss of generality, minimization problems are assumed in this paper. We next introduce four essential definitions related to multiobjective optimization in the context of our approach.

Definition 1 (Pareto Dominance). A vector $\vec{u} = (u_1, \ldots, u_k)$ is said to Pareto dominate another vector $\vec{v} = (v_1, \ldots, v_k)$, denoted $\vec{u} \prec \vec{v}$, if $\forall i \in \{1, \ldots, k\},\ u_i \le v_i$ and $\exists j \in \{1, \ldots, k\},\ u_j < v_j$.
Definition 2 (Pareto Optimality). $\vec{x}_u \in S$ is said to be Pareto optimal (in $S$) if and only if $\neg\exists \vec{x}_v \in S:\ \vec{v} \prec \vec{u}$, where $\vec{v} = \vec{f}(\vec{x}_v) = (v_1, v_2)$ and $\vec{u} = \vec{f}(\vec{x}_u) = (u_1, u_2)$.

Definition 3 (Pareto Optimal Set). The Pareto optimal set, denoted $\rho^*$, is defined as
$$\rho^* = \{\vec{x}_u \in S \mid \neg\exists \vec{x}_v \in S,\ \vec{v} \prec \vec{u}\}.$$
The vectors included in the Pareto optimal set are called nondominated individuals.

Definition 4 (Pareto Front). According to the Pareto optimal set, the Pareto front, denoted $\rho f^*$, is defined as
$$\rho f^* = \{\vec{u} = \vec{f}(\vec{x}_u) \mid \vec{x}_u \in \rho^*\}.$$

Apparently, the Pareto front is the image of the Pareto optimal set in the objective space. Since COPs are transformed into MOPs, the definition of Pareto optimality

is considered with respect to the decision variable space $S$ instead of the feasible region $\Omega$.

As nondominated individuals carry the most important information of the population they belong to, our concern in this paper is only with nondominated individuals. The complexity of identifying all nondominated individuals is $O(N^2)$, where $N$ is the population size. Note, however, that although a COP has been converted into an MOP, its solution still differs in nature from that of a general MOP: the main goal of the former is to reach the global optimum, while the latter aims to find a set of trade-off solutions. Therefore, for COPs it is unnecessary to care about a uniform distribution of points along the Pareto front. After all nondominated individuals are selected from the offspring population, they are used to replace the individuals in the parent population dominated by them. From the characteristics of nondominated individuals, three important conclusions can be drawn. First, the nondominated individuals may consist of feasible solutions, infeasible solutions, or a mixture of feasible and infeasible solutions. Second, at most one feasible solution exists among the nondominated individuals. Third, feasible solutions among the nondominated individuals in the offspring population can dominate either feasible or infeasible solutions in the parent population, whereas infeasible solutions among them can only dominate infeasible solutions in the parent population.

3.2 A Model of Population-based Algorithm-generator
The real-coded genetic algorithm (RCGA) was proposed to overcome the drawbacks of the binary string representation in the traditional GA. In RCGA, crossover is the principal search operator; it utilizes the information of several parents to generate new offspring adaptively according to the distribution of the parents.
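The Pareto-dominance test of Definition 1 and the $O(N^2)$ nondominated filtering used by the replacement scheme of Section 3.1 can be sketched as follows, over objective vectors $(f(\vec{x}), G(\vec{x}))$; this is a minimal illustration, not the authors' code:

```python
def dominates(u, v):
    """u Pareto dominates v (minimization): no worse in every
    objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

def nondominated(points):
    """O(N^2) filter returning the nondominated members of `points`,
    where each point is an objective vector such as (f(x), G(x))."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

With the bi-objective conversion, a feasible point has $G(\vec{x}) = 0$, so at most one feasible point survives this filter (ties in $f$ aside), matching the second conclusion above.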
Recently, Deb [16] proposed a population-based algorithm-generator for real-parameter optimization, which divides the task of searching for the optimum into four independent plans: 1) a selection plan, 2) a generation plan, 3) a replacement plan, and 4) an update plan. The primary advantage of this algorithm-generator is the functional decomposition into these four plans. It is described as follows:

Step 1: Choose $\mu$ solutions (the set $Q$) from $B$ using a selection plan (SP).
Step 2: Create $\lambda$ solutions (the set $C$) from $Q$ using a generation plan (GP).
Step 3: Choose $\gamma$ solutions (the set $R$) from $B$ for replacement using a replacement plan (RP).
Step 4: Update these $\gamma$ solutions by $\gamma$ solutions chosen from a comparison set of $R$, $Q$ and $C$, using an update plan (UP).
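Deb's four-plan decomposition can be sketched as a generic one-iteration driver. The callable names and signatures below are our own illustration of the decomposition, not Deb's API:

```python
def algorithm_generator(B, select, generate, replace, update):
    """One iteration of the four-plan decomposition:
    SP chooses Q from B, GP creates C from Q, RP chooses R from B,
    and UP overwrites R in B with winners drawn from R, Q and C."""
    Q = select(B)        # selection plan (SP)
    C = generate(Q)      # generation plan (GP)
    R = replace(B)       # replacement plan (RP)
    update(B, R, Q, C)   # update plan (UP), mutates B in place
    return B
```

Any concrete algorithm is obtained by plugging specific plans into the four slots, which is exactly how the model of the next subsection is built.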

Inspired by Deb's algorithm-generator, we propose a computation model on the basis of the nondominated individuals replacement scheme of Section 3.1, as follows:

Step 1: Choose $\mu$ solutions (the set $Q$) from the set $B$ at random.
Step 2: $B = B - Q$.
Step 3: Create $\lambda$ offspring solutions (the set $C$) from the set $Q$ using simplex crossover (SPX; see Section 3.4 for details).
Step 4: Choose all nondominated individuals (the set $R$) from the set $C$, then randomly select one individual from the set $R$, denoted $\vec{x}_1$.
Step 5: Suppose that $n'$ individuals of the set $Q$ are dominated by $\vec{x}_1$.
    if $n' = 0$ then no replacement happens;
    else if $n' = 1$ then the corresponding dominated individual is replaced by $\vec{x}_1$;
    else /* $n' > 1$ */ one of the dominated individuals, chosen at random, is replaced by $\vec{x}_1$;
    end if
Step 6: $B = B \cup Q$.
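Steps 1 to 6 can be sketched as follows, assuming individuals are represented directly by their objective pairs $(f(\vec{x}), G(\vec{x}))$ and that `spx` and `dominates` are supplied externally. This is a hypothetical decomposition for illustration, not the authors' code:

```python
import random

def one_generation(B, mu, spx, dominates):
    """One cycle of the proposed model: draw mu parents Q from B,
    create offspring C by simplex crossover, pick one nondominated
    offspring x1, and let it replace a dominated parent."""
    Q = random.sample(B, mu)             # Step 1
    for q in Q:                          # Step 2: B = B - Q
        B.remove(q)
    C = spx(Q)                           # Step 3: lambda offspring via SPX
    R = [c for c in C                    # Step 4: nondominated offspring
         if not any(dominates(d, c) for d in C if d is not c)]
    x1 = random.choice(R)
    dominated = [q for q in Q if dominates(x1, q)]
    if dominated:                        # Step 5: replace one dominated parent
        Q[Q.index(random.choice(dominated))] = x1
    B.extend(Q)                          # Step 6: B = B U Q
    return B
```

Note that only individuals dominated by the chosen offspring can be displaced, so the best parent information is never lost.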

3.3 Infeasible Solutions Archiving and Replacement Mechanism
Researchers have gradually realized the effect of infeasible solutions on finding the global optimum in the feasible region. Farmani et al. [17] formulated a method to ensure that slightly infeasible solutions with a low objective function value remain fit. Mezura et al. [18] and Coello et al. [7] exploited infeasible solutions through a diversity mechanism. Unlike the above approaches, an alternative method, called the infeasible solutions archiving and replacement mechanism, is introduced here. It first aims to direct the population to approach or land in the feasible region from different directions, and then combines feasible solutions with infeasible ones to reach the global optimum. The main principle of our mechanism is that the "best" of the infeasible nondominated individuals, i.e., the one with the lowest degree of constraint violation in the current offspring population, is stored in a predefined archive $A$. Then, after a fixed interval of generations, some randomly selected individuals of the population $P$ (the population $P$ corresponds to the set $B$ in Section 3.2) are replaced by the same number of randomly selected infeasible individuals from the archive $A$. This process is described in Fig. 1. For COPs with a large proportion of feasible region, the proposed mechanism plays only a minor role in finding the optimal solution: the initial population consists of a large number of feasible solutions and the acceptance of a feasible solution may take place at an early stage, so the archiving of the "best" of the infeasible individuals does not happen unless the set $R$ is composed entirely of infeasible individuals, and the probability of such a case occurring is very small. However, the proposed mechanism is very useful for COPs with a small proportion of feasible region.
Because in this case the population $P$ and the set $R$ are always composed of infeasible solutions at the early stage, the proposed mechanism drives the population to approach or land in the feasible region constantly, since the replacement of infeasible individuals occurs frequently. Thereafter, in the general case: 1) if the optimum is located on the boundaries of the feasible region, then, according to the nondominated individuals replacement scheme, the search proceeds synchronously from both sides of the boundaries of the feasible region; 2) if the optimum lies within the feasible region, then feasible individuals rapidly accumulate from approximately the middle stage onward and finally close in on the optimum. In addition, this mechanism can also serve as a diversity operator that balances the proportion of feasible and infeasible individuals in the population.

R: set of nondominated individuals in the offspring population
x̂: "best" of the infeasible individuals, i.e., the infeasible individual in R (if one exists) with the lowest degree of constraint violation
A: archive used to store the "best" of the infeasible individuals
m'' and n'': user-defined parameters

if ∃ a feasible individual that has been accepted in a former generation then
    /* some individuals in the population P are already feasible */
    if ¬∃ feasible individual in R then
        A := A ∪ {x̂}    /* preserve the "best" of the infeasible individuals */
    end if
else /* no feasible individual is contained in the population P yet */
    if ∃ infeasible individual in R then
        A := A ∪ {x̂}    /* preserve the "best" of the infeasible individuals */
    end if
end if
if mod(gen, m'') = 0 then
    randomly choose at most n'' individuals from the archive A to replace the same number of individuals randomly selected from the population P;
    A := ∅;
end if

Fig. 1. Pseudocode of the infeasible solutions archiving and replacement mechanism. It also embodies a self-adaptive process, because the number of infeasible individuals archived and replaced changes dynamically.

3.4 Recombination
In this paper, our algorithm adopts SPX [19] as the recombination operator; it generates offspring based on a uniform probability distribution and does not need any fitness information. Note that no mutation operator is used in this paper. In $\mathbb{R}^n$, $n + 1$ mutually independent parent vectors $\vec{x}_i$, $i = 1, \ldots, n+1$, form a simplex. Producing an offspring consists in: 1) employing a certain ratio to expand the original simplex in each direction $\vec{x}_i - \vec{o}$, where $\vec{o} = \frac{1}{n+1}\sum_{i=1}^{n+1} \vec{x}_i$ is the center of the $n + 1$ vectors, thereby forming a new simplex, and 2) choosing one point from the new simplex as an offspring. For simplicity, we consider this process for a 3-parent SPX in two-dimensional space, where $\vec{x}_1$, $\vec{x}_2$ and $\vec{x}_3$ are the three parent vectors; they form a simplex. We expand this simplex in each direction by a factor of $(1 + \varepsilon)$, where $\varepsilon \ge 0$ is the expanding rate. Let $\vec{o} = \frac{1}{3}\sum_{i=1}^{3} \vec{x}_i$ and $\vec{y}_i = (1 + \varepsilon)(\vec{x}_i - \vec{o})$; then $\vec{y}_1$, $\vec{y}_2$ and $\vec{y}_3$ form a new simplex. We then randomly choose a point $\vec{z}$ from the new simplex, i.e., $\vec{z} = k_1\vec{y}_1 + k_2\vec{y}_2 + k_3\vec{y}_3 + \vec{o}$, where $k_1$, $k_2$ and $k_3$ are randomly selected within the range $[0, 1]$ and satisfy $k_1 + k_2 + k_3 = 1$. Fig. 2 illustrates the density of the offspring produced with 3-parent SPX.

Fig. 2. Density of the offspring produced with 3-parent SPX.
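The SPX sampling described above can be sketched for general $n+1$ parents as follows. This is a minimal sketch that follows the paper's description; drawing the convex weights $k_i$ by normalizing uniform draws is our assumption about how "randomly selected within [0, 1] with sum 1" is realized:

```python
import random

def spx(parents, epsilon):
    """Simplex crossover (SPX) for n+1 parents in R^n (sketch).
    Expands the parent simplex about its centroid o by (1 + epsilon)
    and returns one offspring as a random convex combination."""
    m = len(parents)          # m = n + 1 parents
    dim = len(parents[0])
    # centroid o = (1/(n+1)) * sum_i x_i
    o = [sum(p[d] for p in parents) / m for d in range(dim)]
    # expanded offsets y_i = (1 + eps)(x_i - o)
    y = [[(1 + epsilon) * (p[d] - o[d]) for d in range(dim)] for p in parents]
    # convex weights k_i >= 0 with sum k_i = 1
    raw = [random.random() for _ in range(m)]
    s = sum(raw)
    k = [r / s for r in raw]
    # z = sum_i k_i y_i + o
    return [sum(k[i] * y[i][d] for i in range(m)) + o[d] for d in range(dim)]
```

With $\varepsilon = 0$ the offspring always lies inside the parent simplex; larger $\varepsilon$ allows sampling outside it, which controls the exploration range.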

3.5 Framework
We combine the individual components described in detail above and present the framework of our algorithm as follows:

Begin
    Select appropriate parameters and generate the initial main population P randomly;
    Repeat
        Execute the model of the population-based algorithm-generator;
        Implement the infeasible solutions archiving and replacement mechanism;
    Until either an acceptable solution is found or a predetermined number of fitness function evaluations (FFEs) is reached
End.
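The framework can be sketched as a driver loop. This is a sketch under the assumptions that the two operators are passed in as callables, individuals are $(f, G)$ pairs, and a feasibility-first rule picks the incumbent best (the tie-break is ours, not specified by the paper):

```python
def evolve(P, generation_step, archive_step, max_ffe, ffe_per_gen, accept):
    """Alternate the algorithm-generator step and the archiving /
    replacement step until an acceptable solution appears or the
    FFE budget is exhausted. `accept` tests a candidate (f, G) pair."""
    ffe = 0
    gen = 0
    while ffe < max_ffe:
        gen += 1
        P = generation_step(P)        # population-based algorithm-generator
        P = archive_step(P, gen)      # archiving and replacement mechanism
        ffe += ffe_per_gen
        # feasibility first (smaller G), then objective value f
        best = min(P, key=lambda x: (x[1], x[0]))
        if accept(best):
            break
    return P, ffe
```

The two operators remain detached, matching the framework's structure: each can be swapped out without touching the loop.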

4 Experimental Studies
We evaluate the performance of our algorithm on thirteen well-known benchmark functions taken from [15]. These test cases include various types (linear, nonlinear and quadratic) of objective functions with different numbers of decision variables ($n$) and a range of constraint types (linear inequalities (LI), nonlinear equalities (NE), and nonlinear inequalities (NI)) and numbers of constraints. For each test case, 50 independent trials are executed in MATLAB (the source code may be obtained from the authors upon request). The population size of each test case is set as follows:

$$N = \begin{cases} 50, & 0 < n < 5 \\ 100, & 5 \le n \le 15 \\ 150, & n > 15. \end{cases}$$

Table I. Comparison of our algorithm (WC) with the algorithms in [15] (RY) and [17] (SAFF). "n/a" denotes a value not reported; no SAFF results are available for g13.

function | optimal     | method | best       | median     | mean       | st.dev  | worst      | FFEs
---------|-------------|--------|------------|------------|------------|---------|------------|--------
g01      | -15.000000  | WC     | -15.000    | -15.000    | -15.000    | 1.2E-14 | -15.000    | 350000
         |             | RY     | -15.000    | -15.000    | -15.000    | 0.0E+00 | -15.000    | 350000
         |             | SAFF   | -15.000    | n/a        | -15.000    | 0       | -15.000    | 1400000
g02      | -0.803619   | WC     | -0.80362   | -0.80362   | -0.80322   | 2.0E-03 | -0.79261   | 350000
         |             | RY     | -0.80352   | -0.78580   | -0.78198   | 2.0E-02 | -0.72629   | 350000
         |             | SAFF   | -0.80297   | n/a        | -0.79010   | 1.2E-02 | -0.76043   | 1400000
g03      | -1.000000   | WC     | -1.000     | -1.000     | -1.000     | 0.0E+00 | -1.000     | 350000
         |             | RY     | -1.000     | -1.000     | -1.000     | 1.9E-04 | -1.000     | 350000
         |             | SAFF   | -1.000     | n/a        | -1.000     | 7.5E-05 | -1.000     | 1400000
g04      | -30665.539  | WC     | -30665.539 | -30665.539 | -30665.539 | 2.6E-11 | -30665.539 | 300000
         |             | RY     | -30665.539 | -30665.539 | -30665.539 | 2.0E-05 | -30665.539 | 350000
         |             | SAFF   | -30665.50  | n/a        | -30665.20  | 4.9E-01 | -30663.30  | 1400000
g05      | 5126.4981   | WC     | 5126.4981  | 5126.4981  | 5126.4981  | 4.4E-12 | 5126.4981  | 200000
         |             | RY     | 5126.497   | 5127.372   | 5128.881   | 3.5E+00 | 5142.472   | 350000
         |             | SAFF   | 5126.9890  | n/a        | 5432.0800  | 3.9E+03 | 6089.4300  | 1400000
g06      | -6961.81388 | WC     | -6961.814  | -6961.814  | -6961.814  | 1.6E-11 | -6961.814  | 200000
         |             | RY     | -6961.814  | -6961.814  | -6875.940  | 1.6E+02 | -6350.262  | 350000
         |             | SAFF   | -6961.800  | n/a        | -6961.800  | 0       | -6961.800  | 1400000
g07      | 24.3062091  | WC     | 24.306     | 24.306     | 24.306     | 2.8E-11 | 24.306     | 350000
         |             | RY     | 24.307     | 24.357     | 24.374     | 6.6E-02 | 24.642     | 350000
         |             | SAFF   | 24.48      | n/a        | 26.58      | 1.1E+00 | 28.40      | 1400000
g08      | -0.095825   | WC     | -0.095825  | -0.095825  | -0.095825  | 3.5E-17 | -0.095825  | 200000
         |             | RY     | -0.095825  | -0.095825  | -0.095825  | 2.6E-17 | -0.095825  | 350000
         |             | SAFF   | -0.095825  | n/a        | -0.095825  | 0       | -0.095825  | 1400000
g09      | 680.6300573 | WC     | 680.630    | 680.630    | 680.630    | 6.5E-13 | 680.630    | 350000
         |             | RY     | 680.630    | 680.641    | 680.656    | 3.4E-02 | 680.763    | 350000
         |             | SAFF   | 680.64     | n/a        | 680.72     | 6.0E-02 | 680.87     | 1400000
g10      | 7049.3307   | WC     | 7049.248   | 7049.248   | 7049.248   | 4.4E-07 | 7049.248   | 350000
         |             | RY     | 7054.316   | 7372.613   | 7559.192   | 5.3E+02 | 8835.655   | 350000
         |             | SAFF   | 7061.34    | n/a        | 7627.89    | 3.8E+02 | 8288.79    | 1400000
g11      | 0.750000    | WC     | 0.750      | 0.750      | 0.750      | 0.0E+00 | 0.750      | 200000
         |             | RY     | 0.750      | 0.750      | 0.750      | 8.0E-05 | 0.750      | 350000
         |             | SAFF   | 0.750      | n/a        | 0.750      | 0       | 0.750      | 1400000
g12      | -1.000000   | WC     | -1.000     | -1.000     | -1.000     | 0.0E+00 | -1.000     | 200000
         |             | RY     | -1.000     | -1.000     | -1.000     | 0.0E+00 | -1.000     | 350000
         |             | SAFF   | -1.000     | n/a        | -1.000     | 0       | -1.000     | 1400000
g13      | 0.0539498   | WC     | 0.053950   | 0.053950   | 0.053950   | 5.1E-17 | 0.053950   | 350000
         |             | RY     | 0.053957   | 0.057006   | 0.0675430  | 3.1E-02 | 0.216915   | 350000
         |             | SAFF   | n/a        | n/a        | n/a        | n/a     | n/a        | n/a

Also, we use $\mu = n + 1$, $\lambda = 10$, $m'' = 10$, and $n'' = 2$. As a rule of thumb, if $2 \le n \le 10$, $\varepsilon$ can be an integer between 3 and 6, and if $10 < n \le 20$, $\varepsilon$ can be an integer between 8 and 11. Because of the different characteristics of the test functions, the computational cost (measured by the number of FFEs) is problem-dependent and set as follows:

$$\text{FFE} = \begin{cases} 200\,000, & \text{problems g05, g06, g08, g11 and g12} \\ 300\,000, & \text{problem g04} \\ 350\,000, & \text{the remaining problems.} \end{cases}$$
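The experimental settings above can be collected into two small helpers (function names are ours):

```python
def population_size(n):
    """Population size N as a function of the number of
    decision variables n, per the experimental setup."""
    if n < 5:
        return 50
    if n <= 15:
        return 100
    return 150

def ffe_budget(problem):
    """Per-problem FFE budget used in the experiments."""
    if problem in {"g05", "g06", "g08", "g11", "g12"}:
        return 200_000
    if problem == "g04":
        return 300_000
    return 350_000
```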

4.1 General Performance of the Proposed Algorithm
We summarize the experimental results obtained with the above parameters in Table I. For each test case, we list the "known" optimal solution and the best, median, mean and worst objective function values and the standard deviations over 50 independent runs of our algorithm. As shown in Table I, our algorithm performs very well: it consistently finds the global optima of all test cases for all 50 runs, except for problem g02. Moreover, our algorithm finds a "new" best known solution for problem g10. For instance, a typical solution found by our algorithm for problem g10 is $\vec{x}$ = (579.30663371658, 1359.97068227017, 5109.97070454191, 182.01769534597, 295.60117181832, 217.98230465403, 286.1652352765, 395.60117181832) with $f(\vec{x})$ = 7049.248020528672, which is the "best" result reported so far. For problem g02, a typical solution found by our algorithm is $\vec{x}$ = (3.16246065061821, 3.12833143125259, 3.09479210698838, 3.06145063097946, 3.02792912724552, 2.99382607867366, 2.95866874241085, 2.92184233357046, 0.49482516067620, 0.48835710726434, 0.48231645861281, 0.47664473053803, 0.47129550866925, 0.46623098401343, 0.46142001695710, 0.45683664685665, 0.45245869011821, 0.44826764426227, 0.44424698415295, 0.44038291462275) with $f(\vec{x})$ = -0.80361910412559, which is also the "best" result reported so far. Furthermore, for problem g02, the resulting objective function values are below -0.803619 in 48 out of the 50 runs. As already analyzed, for COPs with a large ratio of feasible set, such as problem g02, the role of the infeasible solutions archiving and replacement mechanism is not very evident, but the empirical results indicate that the model of the population-based algorithm-generator has a powerful ability to find the global optimum in these cases. In addition, it is clear that the standard deviations in Table I are very small, which reflects that our approach performs a robust and stable search. The results in Table I also reveal that our algorithm has substantial potential for coping with various COPs without any complex operators.

4.2 Comparison of Different Constrained Optimization Approaches
We use two metrics, solution quality and computational effort, to compare our algorithm against the following two approaches:
1. The Stochastic Ranking (RY) method [15].
2. The Self-Adaptive Fitness Formulation (SAFF) method [17].
The experimental results are also listed in Table I. It is noteworthy that each test case is run for 50 trials with our approach, whereas the numbers of trials are 20 and 30 for algorithms 1 and 2, respectively. We conclude that our algorithm has the best performance in terms of the best, median, mean, and worst objective function values and the standard deviations. Although algorithms 1 and 2 offer competitive results, most of their results are worse than, or at best match, the solutions found by our algorithm. In addition, our algorithm consistently converges to the optimum for all 50 runs, except for problem g02, where premature convergence occurs in 2 out of 50 runs, whereas algorithms 1 and 2 seem to have a strong tendency to converge to local optima, especially on the complicated functions, e.g., g02, g05, g07, g10 and g13. For the large-feasible-region problem g02, algorithms 1 and 2 are unable to reach the true optimum. For the highly constrained problem g10, the "best" objective function values provided by these two algorithms are still far from the true optimum. As far as computational cost (FFEs) is concerned, our algorithm has the minimum computational cost for most test functions, while algorithm 2 incurs a considerable computational cost for all test functions (1 400 000 FFEs).

5 Conclusion and Future Work
Based on multiobjective optimization techniques, a novel constrained optimization evolutionary algorithm is presented in this paper. In our algorithm, an individual in the parent population may be replaced if it is dominated by a nondominated individual from the offspring population. In addition, two detached but interacting operators are introduced: one is the model of the population-based algorithm-generator, and the other is the infeasible solutions archiving and replacement mechanism. COEAs have two definite goals, and these two operators achieve them as expected. From the experimental results, it is evident that the approach presented in this paper has substantial potential for handling various COPs and that it remarkably outperforms other algorithms in many respects, even though the results of the other algorithms are statistically competitive. Meanwhile, our algorithm reaches the feasible optima of all test cases. The standard deviations also reveal the strong robustness of our algorithm. Since the test functions used in this paper are still far from embodying a complete COP test suite, a more profound study into designing a more representative test function set in this field is necessary future work. Another direction of future work is to apply the method to other types of COPs, such as combinatorial constrained optimization problems and multiobjective optimization with constraints.

Acknowledgments The authors appreciate the support from the National Natural Science Foundation of China (NSFC, No. 60234030).

Bibliography
[1] Z. Michalewicz and M. Schoenauer. Evolutionary algorithms for constrained parameter optimization problems. Evolutionary Computation, 4(1):1-32, 1996.
[2] C. A. C. Coello. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art. Computer Methods in Applied Mechanics and Engineering, 191(11-12):1245-1287, 2002.
[3] J. X. Yu, X. Yao, C. Choi and G. Gou. Materialized view selection as constrained evolutionary optimization. IEEE Transactions on Systems, Man and Cybernetics, Part C, 33(4):458-467, 2003.

[4] E. Mezura-Montes and C. A. C. Coello. A numerical comparison of some multiobjective-based techniques to handle constraints in genetic algorithms. Tech. Rep. EVOCINV-03-2002, Evolutionary Computation Group at CINVESTAV, Sección de Computación, Departamento de Ingeniería Eléctrica, CINVESTAV-IPN, México D.F., México.
[5] C. M. Fonseca and P. J. Fleming. Multiobjective optimization and multiple constraint handling with evolutionary algorithms, part I: a unified formulation. IEEE Transactions on Systems, Man and Cybernetics, Part A, 28(1):26-37, 1998.
[6] C. A. C. Coello. Constraint handling using an evolutionary multiobjective optimization technique. Civil Engineering and Environmental Systems, 17:319-346, 2000.
[7] C. A. Coello Coello and E. Mezura-Montes. Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Advanced Engineering Informatics, 16(3):193-203, 2002.
[8] J. Horn, N. Nafpliotis and D. E. Goldberg. A niched Pareto genetic algorithm for multiobjective optimization. In Proceedings of the First IEEE Conference on Evolutionary Computation, volume 1, pages 82-87, Piscataway, New Jersey, June 1994.
[9] A. H. Aguirre, S. B. Rionda, C. A. C. Coello, G. L. Lizárraga and E. Mezura-Montes. Handling constraints using multiobjective optimization concepts. International Journal for Numerical Methods in Engineering, 59(15):1989-2017, Apr 2004.
[10] J. D. Knowles and D. W. Corne. Approximating the nondominated front using the Pareto archived evolution strategy. Evolutionary Computation, 8(2):149-172, 2000.
[11] Y. Zhou, Y. Li, J. He and L. Kang. Multi-objective and MGG evolutionary algorithm for constrained optimization. In Proceedings of the Congress on Evolutionary Computation 2003 (CEC 2003), volume 1, pages 1-5, Canberra, Australia, December 2003. IEEE Service Center, Piscataway, New Jersey.
[12] C. A. C. Coello. Treating constraints as objectives for single-objective evolutionary optimization. Engineering Optimization, 32(3):275-308, 2000.
[13] J. D. Schaffer. Multiple objective optimization with vector evaluated genetic algorithms. In Proceedings of the First International Conference on Genetic Algorithms, pages 99-100, 1985.
[14] P. D. Surry and N. J. Radcliffe. The COMOGA method: constrained optimization by multiobjective genetic algorithm. Control and Cybernetics, 26(3):391-412, 1997.
[15] T. P. Runarsson and X. Yao. Stochastic ranking for constrained evolutionary optimization. IEEE Transactions on Evolutionary Computation, 4(3):284-294, Sept 2000.
[16] K. Deb. A population-based algorithm-generator for real-parameter optimization. KanGAL Report No. 2003003.
[17] R. Farmani and J. A. Wright. Self-adaptive fitness formulation for constrained optimization. IEEE Transactions on Evolutionary Computation, 7(5):445-455, 2003.
[18] E. Mezura-Montes and C. A. C. Coello. A simple multimembered evolution strategy to solve constrained optimization problems. IEEE Transactions on Evolutionary Computation, 9(1):1-17, 2005.
[19] S. Tsutsui, M. Yamamura and T. Higuchi. Multi-parent recombination with simplex crossover in real coded genetic algorithms. In W. Banzhaf, J. Daida and E. Eiben, eds., GECCO'99: Proceedings of the Genetic and Evolutionary Computation Conference, pages 657-664. Morgan Kaufmann, San Mateo, 1999.