
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 37, NO. 3, JUNE 2007

Multiobjective Optimization and Hybrid Evolutionary Algorithm to Solve Constrained Optimization Problems Yong Wang, Zixing Cai, Senior Member, IEEE, Guanqi Guo, and Yuren Zhou

Abstract—This paper presents a novel evolutionary algorithm (EA) for constrained optimization problems, i.e., the hybrid constrained optimization EA (HCOEA). The algorithm effectively combines multiobjective optimization with global and local search models. For the global search, a niching genetic algorithm based on tournament selection is proposed. HCOEA also adopts a parallel local search operator that implements a clustering partition of the population and multiparent crossover to generate the offspring population. Nondominated individuals in the offspring population are then used to replace dominated individuals in the parent population. Meanwhile, a best infeasible individual replacement scheme is devised to rapidly guide the population toward the feasible region of the search space. During the evolutionary process, the global search model effectively promotes high population diversity, and the local search model remarkably accelerates the convergence speed. HCOEA is tested on 13 well-known benchmark functions, and the experimental results suggest that it is more robust and efficient than other state-of-the-art algorithms from the literature in terms of the selected performance metrics, such as the best, median, mean, and worst objective function values and the standard deviations.

Index Terms—Constrained optimization, evolutionary algorithm (EA), global search, local search, multiobjective optimization.

I. INTRODUCTION

IN MANY science and engineering disciplines, it is common to encounter a large number of constrained optimization problems (COPs). During the past decade, as a search and optimization technique inspired by nature, evolutionary algorithms (EAs), including the genetic algorithm (GA), evolution strategy (ES), and evolutionary programming (EP), have been broadly applied to solve COPs, and a large number of constrained optimization EAs (COEAs) have been proposed [1]–[4].

Manuscript received September 8, 2005; revised February 22, 2006. This work was supported in part by the National Natural Science Foundation of China under Grants 60234030, 60404021, and 60673062, by the National Basic Scientific Research Funds under Grant A1420060159, by the Hunan S&T Funds under Grant 06IJY3035, and by the Key Project of the Chinese Ministry of Education under Grant 206101. This paper was recommended by Associate Editor B. Sick.

Y. Wang and Z. Cai are with the School of Information Science and Engineering, Central South University, Changsha 410083, China (e-mail: [email protected]; [email protected]).

G. Guo is with the Department of Computer Science and Information Engineering, Hunan Institute of Science and Technology, Yueyang 414000, China.

Y. Zhou is with the School of Computer Science and Engineering, South China University of Technology, Guangzhou 510640, China.

Digital Object Identifier 10.1109/TSMCB.2006.886164

As a matter of fact, COEAs have two definite objectives, namely: 1) landing in or approaching the feasible region promptly (the search space of a COP is composed of feasible and infeasible regions) and 2) reaching the global optimal solution in the end. However, these goals are far from being accomplished by existing COEAs, according to empirical evidence on the benchmark test functions reported in [5].

In essence, COEAs can be characterized as constraint-handling techniques plus EAs, i.e., a proper constraint-handling technique needs to be considered in conjunction with an appropriate search algorithm, and these two aspects are therefore mainly responsible for the performance of a COEA. Michalewicz and Schoenauer [1] and Coello [2] provided comprehensive surveys of the most popular constraint-handling techniques used with EAs and grouped them into four and five categories, respectively.

Penalty function methods are among the most common methods for solving COPs. The principal idea of this kind of method is to transform a COP into an unconstrained problem by introducing a penalty term into the original objective function to penalize constraint violations. However, as pointed out in [6], even dynamic penalty settings, which start with a low parameter value and end with a high value, are unlikely to work well for problems where the unconstrained global optimum is far away from the constrained global optimum.

Recently, emphasis has increasingly been placed on COEAs based on multiobjective optimization techniques [7], which require no penalty function. In view of this fact, our approach adopts multiobjective optimization techniques to handle constraints. In addition, a novel hybrid EA is developed to improve the search capability.
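As an aside, the penalty transformation described above can be sketched in a few lines. The following Python fragment is illustrative only; the function names and the fixed weight `mu` are our own choices, not taken from the paper or from [6].

```python
# Minimal sketch of the penalty-function transformation: a COP
# min f(x) s.t. g_j(x) <= 0, h_j(x) = 0 becomes an unconstrained problem
# by adding a weighted measure of constraint violation to f(x).
# The fixed weight `mu` is an illustrative (static) choice; dynamic
# schemes vary it over the run.

def penalized_objective(f, inequalities, equalities, mu=1e3):
    """Return an unconstrained objective F(x) = f(x) + mu * violation(x)."""
    def F(x):
        violation = sum(max(0.0, g(x)) for g in inequalities)
        violation += sum(abs(h(x)) for h in equalities)
        return f(x) + mu * violation
    return F

# Example: minimize f(x) = x^2 subject to x >= 1, i.e., g(x) = 1 - x <= 0.
F = penalized_objective(lambda x: x * x, [lambda x: 1.0 - x], [])
print(F(2.0))   # feasible point: no penalty, F = 4.0
print(F(0.0))   # infeasible point: F = 0.0 + 1000 * 1.0 = 1000.0
```

Such a static weight illustrates the pitfall noted in [6]: if the unconstrained optimum lies far outside the feasible region, no single value of the weight balances the two terms well over the whole run.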
Combining these two elements, i.e., the two important aspects of COEAs discussed above, this paper presents a hybrid constrained optimization EA (HCOEA) that consists of two parts, i.e., a global and a local search model. In HCOEA, a given COP is converted into a biobjective optimization problem. During the evolutionary process, on the one hand, a niching GA (NGA) based on tournament selection is designed as the global search model. The idea is to accept, based on Pareto dominance, the winner between similar parent and offspring as a member of the next population, where similarity is measured by the Euclidean distance between individuals in the solution space. This model is intended to preserve the diversity and refine the overall quality of the population. On the other hand, the local search model implements a parallel search by partitioning the population into a group of subpopulations and



using multiparent crossover to produce the offspring population. After that, to ensure the monotone decline of an individual's fitness, nondominated individuals in the offspring population are used to replace the dominated individuals in the parent population. Besides, the best infeasible individual replacement scheme (BIIRS) is used to address the first objective of COEAs, i.e., guiding the population to land within or approach the feasible region quickly. Thus, HCOEA can not only reach the feasible region promptly but also attain competitive results. The efficiency and effectiveness of the proposed HCOEA are evaluated on 13 benchmark test functions. The experimental results indicate that HCOEA is superior to the other state-of-the-art COEAs referred to in this paper when measured by the selected performance indicators.

The remainder of this paper is organized as follows. Section II defines the general nonlinear programming (NLP) problem of interest. Section III introduces some basic concepts of multiobjective optimization that establish the foundation of our approach to comparing and selecting individuals. In Section IV, we review previous work in this area. Section V presents a detailed description of the proposed algorithm. In Section VI, HCOEA is applied to the given test cases to produce experimental results. In Section VII, the effects of the algorithm parameters on performance are demonstrated by various experiments. Finally, we conclude in Section VIII.

II. STATEMENT OF THE PROBLEM

Without loss of generality, a general NLP problem is stated as follows (in the minimization sense): find x = (x1, x2, . . . , xn) ∈ ℝⁿ such that f(x) is minimized, where x ∈ Ω ⊆ S, and S is an n-dimensional rectangular space in ℝⁿ defined by the parametric constraints

l(i) ≤ xi ≤ u(i),  1 ≤ i ≤ n.   (1)

The feasible region Ω ⊆ S is defined by a set of p additional linear or nonlinear constraints (p ≥ 0), i.e.,

gj(x) ≤ 0,  j = 1, . . . , l   (2)

hj(x) = 0,  j = l + 1, . . . , p   (3)

where l is the number of inequality constraints, and p − l is the number of equality constraints. Any point x ∈ Ω is called a feasible solution; otherwise, x is an infeasible solution. An inequality constraint that satisfies gj(x) = 0 (j ∈ {1, . . . , l}) at a point x ∈ Ω is said to be "active" at x. All equality constraints hj(x) (j = l + 1, . . . , p) are considered active at all points of Ω.

III. BASIC CONCEPTS

Since the constraint-handling method in this paper is based on multiobjective optimization techniques, we first describe multiobjective optimization problems (MOPs) and the basic definitions related to multiobjective optimization.

Without loss of generality, a MOP with n decision variables and m objective functions has the form

min y = f(x) = (f1(x), f2(x), . . . , fm(x))   (4)

where x = (x1, x2, . . . , xn) ∈ X ⊂ ℝⁿ is the decision vector, X is the decision space, y ∈ Y ⊂ ℝᵐ is the objective vector, and Y is the objective space.

Definition 1 (Pareto Dominance): A vector u = (u1, . . . , um) is said to Pareto dominate another vector v = (v1, . . . , vm), denoted as u ≺ v, if

∀i ∈ {1, . . . , m}, ui ≤ vi  ∧  ∃j ∈ {1, . . . , m}, uj < vj.   (5)

In this case, we also say that the vector v is dominated by the vector u. Note that if Pareto dominance holds in neither direction between two vectors, they are considered nondominated with respect to each other.

Definition 2 (Pareto Optimality): xu ∈ X is said to be Pareto optimal in X if and only if ¬∃xv ∈ X, v ≺ u, where u = f(xu) = (u1, u2, . . . , um) and v = f(xv) = (v1, v2, . . . , vm).

Definition 3 (Pareto Optimal Set): For a given MOP f(x), the Pareto optimal set, denoted as ρ∗, is defined as

ρ∗ = {xu ∈ X | ¬∃xv ∈ X, v ≺ u}.   (6)

The vectors included in the Pareto optimal set are called nondominated individuals.

Definition 4 (Pareto Front): For a given MOP f(x), the Pareto front, denoted as ρf∗, is defined as

ρf∗ = {u = f(xu) | xu ∈ ρ∗}.   (7)

Clearly, the Pareto front is the image of the Pareto optimal set in the objective space.
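Definition 1 translates directly into code. The following Python sketch is illustrative only and is not part of the paper's algorithms; it checks Pareto dominance between two objective vectors in the minimization sense.

```python
# Direct transcription of Definition 1 (Pareto dominance): u dominates v
# iff u is no worse than v in every objective and strictly better in at
# least one. Vectors are plain sequences of objective values
# (minimization sense).

def dominates(u, v):
    """Return True iff u Pareto dominates v."""
    assert len(u) == len(v)
    no_worse = all(ui <= vi for ui, vi in zip(u, v))
    strictly_better = any(ui < vi for ui, vi in zip(u, v))
    return no_worse and strictly_better

print(dominates((1.0, 2.0), (1.0, 3.0)))  # True: equal in f1, better in f2
print(dominates((1.0, 3.0), (2.0, 2.0)))  # False: the two are nondominated
```

Note that `dominates(u, u)` is False: a vector never dominates itself, which is why nondominated vectors (neither direction holds) form a well-defined set.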

IV. PREVIOUS WORK

Exploiting multiobjective optimization techniques to handle constraints is a relatively new idea that has arisen in recent years. The main characteristics of this type of method are twofold, namely: 1) a COP is redefined as a MOP and 2) multiobjective optimization techniques are applied to the converted problem. In this setting, the constraints can be regarded as one or more objectives. If the constraints are treated as a single objective, the original problem is transformed into a MOP with two objectives: in general, one objective accounts for the original objective function f(x), and the other considers the degree of constraint violation of an individual x, i.e., G(x), where

G(x) = Σ_{j=1}^{p} Gj(x)   (8)

Gj(x) = max{0, gj(x)},  1 ≤ j ≤ l
Gj(x) = |hj(x)|,  l + 1 ≤ j ≤ p.   (9)
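Equations (8) and (9) can be transcribed as follows. In this Python sketch, the constraint lists are hypothetical callables used only for illustration; they are not taken from the paper's test suite.

```python
# Degree of constraint violation G(x) per (8)-(9): each inequality
# constraint g_j(x) <= 0 contributes max{0, g_j(x)}, each equality
# constraint h_j(x) = 0 contributes |h_j(x)|, and G(x) sums all
# contributions. G(x) = 0 iff x is feasible.

def violation(x, inequalities, equalities):
    """G(x) = sum of G_j(x) over all constraints, as in (8)-(9)."""
    G = sum(max(0.0, g(x)) for g in inequalities)
    G += sum(abs(h(x)) for h in equalities)
    return G

# Example constraints: g1(x) = x[0] - 1 <= 0 and h1(x) = x[0] + x[1] - 2 = 0.
ineqs = [lambda x: x[0] - 1.0]
eqs = [lambda x: x[0] + x[1] - 2.0]
print(violation((0.5, 1.5), ineqs, eqs))  # 0.0: both constraints satisfied
print(violation((2.0, 1.0), ineqs, eqs))  # 2.0: 1.0 from g1 plus 1.0 from h1
```

In practice, equality constraints are usually relaxed to |h_j(x)| − δ ≤ 0 with a small tolerance δ; the sketch omits this for brevity.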


Alternatively, each constraint can be treated as an objective, in which case the original problem is transformed into a MOP with p + 1 objectives. We then obtain a new vector v = (f(x), f1(x), . . . , fp(x)) to be optimized, where f1(x), . . . , fp(x) are the constraints of the given problem. Next, we discuss several representative approaches in this area.

Camponogara and Talukdar [8] proposed the Pareto scheme to tackle COPs, in which a COP is transformed into a biobjective optimization problem. The main feature of this scheme is to calculate improvement directions from Pareto sets defined by the objective function and constraint violations. For instance, consider two Pareto sets Si and Sj, where i < j, and two solutions xi ∈ Si and xj ∈ Sj, where xi ≺ xj. With these two solutions, a search direction is defined as

d = (xi − xj)/|xi − xj|.   (10)

A line search is performed along this improvement direction so that a solution x can be found that dominates xi and xj. Although this scheme outperforms a GA based on penalty functions, it still has evident difficulty in maintaining diversity [2].

By combining the vector evaluated GA (VEGA) [9] with Pareto ranking, Surry and Radcliffe [10] proposed a method called constrained optimization by multiobjective GA (COMOGA). COMOGA uses a single population but randomly decides whether to consider the original problem as a constraint satisfaction problem or as an unconstrained optimization problem. The relative likelihood of adopting each view is adjusted by a simple adaptive mechanism that tries to maintain a target proportion (e.g., 0.1) of feasible solutions in the population.

Coello [11] used a population-based multiobjective technique, namely VEGA [9], to treat each constraint as an objective. In each generation, the population is split into p + 1 subpopulations of equal size, where p refers to the number of constraints (one more subpopulation accounts for the objective function). In this approach, some individuals in the population are selected using the objective function as their fitness, and the remaining individuals are selected by applying the corresponding constraints as their fitness. This method is able to achieve good results without any extra parameters for the GA. However, how to determine the appropriate size of each subpopulation remains an open issue.

Based on Fonseca and Fleming's Pareto ranking process [12], Coello [13] proposed an alternative approach in which each individual in the population is associated with a rank, where feasible individuals have a higher rank value than infeasible ones. This rank assignment makes feasible individuals more likely to survive into the next population than infeasible ones. The main highlight of this method is that it does not require additional procedures to maintain diversity.
Coello and Mezura-Montes [14] implemented a version of the niched Pareto GA (NPGA) [15]. Unlike NPGA, however, this approach does not require niching. Instead, it uses an additional parameter Sr to control the diversity of the population: a ratio Sr of the individuals is selected by dominance-based tournament selection, and the remaining 1 − Sr individuals are selected by a probabilistic approach. The main advantage of this method is its computational efficiency.

By emulating society behavior, Ray and Liew [16] made use of intra- and intersociety interactions within a formal society and civilization model to solve COPs. At each generation, the population, corresponding to a civilization, is split into several subpopulations, each of which corresponds to a society. During the search, intrasociety information exchange between the leader and the other individuals of a society drives the migration of points toward better performing points, whereas intersociety information exchange among leaders helps the better performing societies to expand and flourish. In addition, that paper presents an adaptive mechanism to identify leaders. This method is capable of arriving at comparable solutions using significantly fewer function evaluations.

Nevertheless, one common drawback of the three methods above is that an artificial preference is imposed on feasible individuals, i.e., feasible individuals are always considered better than infeasible ones. Once such a preference exists, the balance between feasible and infeasible solutions in the population is difficult to maintain, and consequently, premature convergence tends to occur.

As an extension of the Pareto archived evolution strategy (PAES) [17], inverted-shrinkable PAES (IS-PAES) is a notable algorithm that uses Pareto dominance as its selection criterion [18]. In this method, besides using a secondary population, the search space is shrunk continuously by exploiting the information of the individuals surrounding the feasible region. Thus, the size of the search space eventually becomes very small, and the solutions obtained are competitive.
According to its authors, this method is the first constraint-handling technique based on multiobjective optimization concepts that provides results competitive with state-of-the-art approaches. However, if the shrinking of the search space is directed in the wrong direction, the algorithm may become trapped in a local optimum.

Zhou et al. [19] proposed a novel method that uses Pareto dominance to assign each individual's Pareto strength.

Definition 5 (Pareto Strength): Each individual xi in a population Pt is assigned an integer s(xi), called the Pareto strength of xi, which is the number of individuals in the population Pt dominated by xi, i.e.,

s(xi) = #{xj | xj ∈ Pt ∧ xi ≺ xj}   (11)

where # denotes the cardinality of the set. Based on Pareto strength, individuals are ranked by comparing their strengths: 1) the individual with the higher strength wins, and 2) if the strengths are equal, the individual with the lower degree of constraint violation is better.

Venkatraman and Yen [20] presented a two-phase framework for solving COPs. In the first phase, the COP is treated as a constraint satisfaction problem, as in [10], and the genetic search is directed toward minimizing the constraint violations of the solutions. In the second phase, the COP is treated as a biobjective optimization problem, and nondominated sorting, as in [21], is adopted to rank the individuals. This algorithm has the advantage of being problem independent and does not rely on any parameter tuning.

To evaluate the capabilities of a set of constraint-handling methods based on multiobjective optimization techniques, Mezura-Montes and Coello [7] presented an experimental study focusing on four such methods. The experimental results indicate that the selection criterion of Pareto dominance gives better results than both Pareto ranking and population-based approaches. An important further conclusion in [7] is that additional mechanisms must be used to improve the effectiveness of these approaches.

V. MULTIOBJECTIVE OPTIMIZATION AND HYBRID EA (HCOEA)

From the literature review, it is clear that most existing COEAs based on multiobjective optimization techniques can be considered simple adaptations of multiobjective EAs (MOEAs). Although a COP is converted into a MOP, its solution differs essentially from that of a general MOP: the main goal of the former is to reach the global optimum, whereas the latter aims to find a set of so-called Pareto optimal solutions. The deficiency of constraint-handling techniques based on multiobjective optimization lies mainly in designing a suitable scheme to compare and select individuals. Besides, the search space of COPs consists of both feasible and infeasible regions. As a consequence, infeasible individuals play a very important role in searching for the optimum, especially when the proportion of the feasible region is very small compared to the entire search space or when the optimum is located precisely on the boundary of the feasible region. Under these circumstances, however, this kind of constraint-handling technique cannot effectively utilize the information provided by infeasible individuals.
On the other hand, simple EAs have limited ability to converge to the global optimum and to maintain a fast convergence speed when the fitness landscape and the structure of the feasible region are very complex. Although ES has been gaining wide attention for solving COPs [22], an essential shortcoming of ES should be recognized: in ES, crossover of the decision variables is usually neglected. This operation, however, is very useful for constrained optimization, especially when it occurs between feasible and infeasible individuals close to the boundaries of the feasible region. Even if discrete and intermediate crossover operators are employed [23], their abilities are limited.

For unconstrained optimization problems, memetic algorithms, which integrate EAs with local exploration mechanisms, have been viewed as a successful improvement over classic EAs [24]–[30]. During the evolutionary process, the global search operator maintains the diversity of the population, and the local search operator speeds up the convergence. An interesting question is whether this technique can be combined with COEAs in a reasonable way to obtain improved performance. As we know, the evaluation of an individual in unconstrained optimization problems is usually based only on its objective function value. With respect to COPs, however, both the objective function and the constraint

Fig. 1. Graph representation of f(x). The Pareto optimal set ρ∗ is mapped to the Pareto front ρf∗. The feasible region Ω is mapped to the solid segment. The global optimum x∗ is mapped to the intersection of the Pareto front and the solid segment. The search space S is mapped to points on and above the Pareto front.

Fig. 2. Pseudocode of iterative search procedure.

violation should be taken into account in assessing an individual. Hence, the key to achieving this combination is the rule used to compare individuals.

Inspired by [25], we developed a novel hybrid EA, which is, to the authors' best knowledge, the first attempt to combine multiobjective optimization with global and local search models to solve COPs. The new approach converts a COP into a biobjective optimization problem that minimizes the original objective function f(x) and the degree of constraint violation G(x) simultaneously. For clarity, let f(x) = (f(x), G(x)).

Definition 6 (Global Minimum): x∗ ∈ S is called the global minimum if and only if G(x∗) = 0 and ¬∃x ∈ S such that G(x) = 0 and f(x) < f(x∗).

Based on the definitions of multiobjective optimization given in Section III, a further illustration of f(x) is shown in Fig. 1.

A. General Framework

This section describes the general framework of HCOEA. An abstract depiction of a generic iterative search procedure is given in Fig. 2, where the integer t represents the iteration count, A(0) represents the initialized main population, and r represents the reference point arbitrarily chosen from the search space. Note that the reference point is used only in the local search procedure.

Fig. 3. Pseudocode of the global search model. The function distance(a, b) denotes the Euclidean distance between individuals a and b in the solution space.

The purpose of the function global_search is to generate a new population A′ by applying the global search to the population A. Similarly, the purpose of the function local_search is to produce another new population A″ by applying the local search to the population A′. The population sizes of A, A′, and A″ are all equal to N.
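The iterative framework of Fig. 2 amounts to alternating the two search models until termination. The following Python sketch is schematic only: the initializer, the two search functions, and the termination test are placeholder callables standing in for NGATS and the clustering-based local search, not the paper's actual operators.

```python
# Schematic rendering of the iterative framework in Fig. 2: each
# generation applies the global search model and then the local search
# model to a population of fixed size N.

def hcoea_loop(initialize, global_search, local_search, terminate):
    A = initialize()            # A(0): the initialized main population
    t = 0                       # iteration counter
    while not terminate(A, t):
        A1 = global_search(A)   # A': same size N, from the global model
        A2 = local_search(A1)   # A'': same size N, from the local model
        A = A2
        t += 1
    return A

# Toy usage: a "population" is a list of numbers, and the two search
# steps merely sort and shrink it.
result = hcoea_loop(
    initialize=lambda: [5.0, 3.0, 8.0],
    global_search=lambda A: sorted(A),
    local_search=lambda A: [a * 0.5 for a in A],
    terminate=lambda A, t: t >= 3,
)
print(result)  # [0.375, 0.625, 1.0] after three halvings
```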

Fig. 4. Pseudocode of the function choose.

Fig. 5. Schematic diagram illustrating the local search model.

B. Global Search Model

An NGA based on tournament selection (NGATS) is adopted as the global search model in this paper. First, the main population A of size N is paired at random into N/2 couples. Each pair of individuals is then sequentially chosen and used to produce a pair of offspring through crossover and mutation operations.

1) Tournament Selection Criterion: It is noteworthy that, after a COP is converted into a biobjective problem, the task of finding the global optimum becomes the minimization of a 2-D vector f(x). However, the comparison of vectors is essentially different from that of real numbers, since the dominance relation is only a partial order: two arbitrary individuals in the population are related in one of two possible ways, i.e., either one dominates the other or neither dominates. Based on this feature, the tournament selection criterion is devised as follows: if the offspring dominates its most similar parent, then replacement occurs with probability 1.0. Herein, similarity is measured by the Euclidean distance between individuals in the solution space. Figs. 3 and 4 specify the global search model, where Nk denotes the number of individuals in the population A after k iterations, N1 = N, rand ∈ U(0, 1), and U(0, 1) is a uniform random number generator between 0 and 1.

2) Advantages of the Global Search Model: The NGATS operator can alleviate selection pressure and maintain the diversity of the population. Compared with fitness sharing, another niching mechanism, whose complexity is O(N²), NGATS has a complexity of only O(N), where N is the population size. Furthermore, the structure of NGATS is very simple and requires no a priori knowledge.

C. Local Search Model

Although NGATS offers a low selection pressure, its convergence speed is slow. In particular, if the proportion of the feasible region is very small, a similar offspring and parent are likely to be nondominated with respect to each other, since both are often infeasible; thus, replacement occurs with a low probability. To address this situation, a local search model based on clustering partition and multiparent crossover is incorporated into the algorithm. A schematic illustration of the local search model is shown in Fig. 5. It works as follows. First, the population A′ of size N is split into a group of disjoint subpopulations according to the location of the individuals in the solution space, each of which comprises q individuals with adjacent locations. Then, the q individuals of each subpopulation (i.e., the parent population, denoted as M) are used to generate the same number of offspring (i.e., the offspring population, denoted as M′) by the crossover operation.

1) Nondominated Individuals Replacement Mechanism: In MOEAs, each individual in the population is associated with a rank, and all nondominated individuals in the population share the same rank value. For instance, the rank value of nondominated individuals is 1 in [12] but 0 in [31]. In contrast to MOEAs, since nondominated individuals represent the most important feature of the population they belong to, our concern in this paper is only the nondominated individuals.


Fig. 6. (a) Nondominated individuals in a population consist of one feasible and two infeasible solutions. (b) Nondominated individuals in a population consist of one feasible solution. (c) Nondominated individuals in a population consist of three infeasible solutions.

Thus, we can avoid assigning a rank value to each individual in the population, which makes the new algorithm more efficient. The complexity of identifying the nondominated individuals in a population is O(N·N̄), where N is the population size, and N̄ is the number of nondominated individuals in the population.

Fig. 6(a) shows an example illustrating the importance of nondominated individuals. There are three nondominated individuals, denoted "e," "f," and "g," in a population. Individual "e" is the best feasible solution, individual "f" is the infeasible solution with the lowest degree of constraint violation, and individual "g" is the infeasible solution with the minimum objective function value; thus, the most important information in a population is clearly represented by its nondominated individuals. The properties of nondominated individuals and their corresponding advantages can be summarized as follows.

Theorem 1: There exists at most one feasible solution among the nondominated individuals in a population.

Proof: Assume that there are k (k > 1) feasible solutions among the nondominated individuals in a population. Then, by Pareto dominance, the feasible solution with the minimum f(x) among these k individuals must dominate the other feasible solutions among them. This is a contradiction. □

Theorem 1 implies that, in terms of the whole search space, a feasible individual among the nondominated individuals is precisely the global optimum (see Fig. 1).

Property 1: The nondominated individuals in a population may consist of one feasible solution [Fig. 6(b)], infeasible solutions only [Fig. 6(c)], or a combination of feasible and infeasible solutions [Fig. 6(a)].

Property 1 enables the population to approach the global optimal solution from both sides of the feasible/infeasible border, since the nondominated individuals can comprise a combination of feasible and infeasible solutions.
Property 2: Feasible solutions among the nondominated individuals of the offspring population can dominate either feasible or infeasible solutions in the parent population; infeasible solutions among the nondominated individuals of the offspring population, however, can only dominate infeasible solutions in the parent population.

Property 2 follows from the definition of Pareto dominance. After the nondominated individuals are selected from the offspring population, they are used to replace the individuals in the parent population that they dominate.
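Assuming individuals are represented directly by their biobjective values (f, G), the replacement step above can be sketched as follows; this Python fragment is illustrative and does not reproduce the paper's full operator.

```python
# Sketch of the nondominated-individuals replacement mechanism: extract
# the nondominated set of the offspring population (a pairwise
# O(N * N_bar) scan), then let its members replace parents they dominate.

def dominates(u, v):
    return all(ui <= vi for ui, vi in zip(u, v)) and any(
        ui < vi for ui, vi in zip(u, v))

def nondominated(pop):
    """Return the individuals of pop not dominated by any other member."""
    return [p for p in pop if not any(dominates(q, p) for q in pop if q != p)]

def replace_dominated(parents, offspring):
    parents = list(parents)
    for child in nondominated(offspring):
        for i, p in enumerate(parents):
            if dominates(child, p):
                parents[i] = child
                break             # each child replaces at most one parent
    return parents

pop = [(1.0, 0.0), (2.0, 0.0), (0.5, 3.0)]
print(nondominated(pop))  # (2.0, 0.0) is dominated by (1.0, 0.0)
```

Consistent with Property 2, a feasible child (G = 0) can displace feasible or infeasible parents here, whereas an infeasible child (G > 0) can never dominate a feasible parent.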

The proposed nondominated-individuals replacement mechanism is quite different from previous approaches. For instance, in Fig. 6(c), Zhou et al. [19] would regard individual "g" as the best individual because of its largest Pareto strength, whereas individual "e" would be considered the elite individual to be archived because of its minimum degree of constraint violation according to [20]. Note, however, that individual "g" or "e" is only one part of the nondominated individuals of a population; extracting all nondominated individuals from the population therefore ensures a more thorough exploitation of the evolutionary information. In addition, unlike [16], where nondominated individuals are considered only when the population is infeasible and their identification is based only on the constraint violations, our method selects nondominated individuals based on the combination of the objective function and the constraint violations, regardless of the state of the population. More importantly, our method uses the nondominated individuals to eliminate the dominated individuals.

2) BIIRS: As discussed previously, how infeasible individuals are utilized is very important for COPs, and researchers have gradually recognized the effect of infeasible solutions on finding the optimum. Farmani and Wright [32] allowed slightly infeasible solutions with low objective function values to be selected for reproduction. Coello and Mezura-Montes [14] and Mezura-Montes and Coello [23] made use of infeasible solutions through a diversity mechanism. Binh and Korn [33] exploited niche infeasible individuals to explore new areas of the feasible region. Unlike the above approaches, this paper presents the BIIRS, which aims to guide the population toward feasibility from various directions efficiently.
In this scheme, the best infeasible individual refers to the infeasible individual with the lowest degree of constraint violation in the current offspring population. After selecting the best infeasible individual, it will replace an individual in the parent population picked up randomly, provided the current offspring population is composed of only infeasible individuals. Theorem 2: If all nondominated individuals in a population are infeasible, then this population is also infeasible. Proof: Assume that all nondominated individuals in population K are infeasible and some individuals in K are feasible. Since infeasible solutions cannot Pareto dominate feasible solutions, there must exist one feasible individual in nondominated


individuals under this circumstance by Theorem 1. This is a contradiction. □ Theorem 3: If a population is infeasible, then the individual with the lowest degree of constraint violation among the nondominated individuals is also the individual with the lowest degree of constraint violation in the population. Proof: Let the individual with the lowest degree of constraint violation among the nondominated individuals of population K be denoted as x′, and let the individual with the lowest degree of constraint violation in population K be denoted as x″. Assume that population K is infeasible and that x′ ≠ x″. Then, x″ either dominates x′ or is nondominated with respect to x′, because G(x″) < G(x′); this implies that the nondominated individuals contain x″, resulting in a contradiction. □ Theorem 2 indicates that, to judge whether a population is infeasible, we merely need to inspect whether all nondominated individuals in this population are infeasible. In addition, Theorem 3 indicates that, to find the individual with the lowest degree of constraint violation in an infeasible population, we only need to find the individual with the lowest degree of constraint violation among the nondominated individuals. These two theorems simplify the computation of the proposed replacement scheme once the nondominated individuals have been selected from a population since, in general, the number of nondominated individuals is much smaller than the size of the population they belong to. These two theorems are applied in Fig. 7 (steps 10-12). This replacement scheme is very beneficial for COPs with a small proportion of feasible region. At the early stage, replacement will frequently take place under this condition because the offspring population consists only of infeasible individuals. As a result, the subpopulations will approach the feasible region from different directions over time.
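Assuming individuals are represented as (f, G) pairs with G = 0 meaning feasible, the replacement step that exploits Theorems 2 and 3 might be sketched as follows; only the (already extracted) nondominated set of the offspring population needs to be inspected, and the names below are illustrative rather than taken from Fig. 7:

```python
import random

def biirs_step(parent_pop, offspring_front, rng=random.Random(0)):
    """Best infeasible individual replacement scheme (BIIRS) sketch.
    offspring_front: nondominated individuals of the offspring population,
    each an (f, G) pair with G == 0 meaning feasible."""
    # Theorem 2: the whole offspring population is infeasible
    # iff every nondominated individual is infeasible (G > 0).
    if all(g > 0 for _, g in offspring_front):
        # Theorem 3: the lowest-violation member of the nondominated set
        # is the lowest-violation member of the whole population.
        best_infeasible = min(offspring_front, key=lambda ind: ind[1])
        # Replace a randomly chosen individual of the parent population.
        parent_pop[rng.randrange(len(parent_pop))] = best_infeasible
    return parent_pop

# Entirely infeasible offspring front: a replacement takes place.
parents = [(5.0, 3.0), (6.0, 1.0)]
front = [(2.0, 0.4), (1.0, 0.9)]
parents = biirs_step(parents, front)  # now contains (2.0, 0.4)
```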
In addition, at the later stage, since most of the chromosomes in the population are highly likely to be feasible, the offspring population may always be composed of a combination of feasible and infeasible solutions, or of only feasible solutions, at each generation. Consequently, the probability that the replacement takes place decreases significantly, which helps the population to converge. The discussion above also indicates that the proposed replacement scheme adapts itself during the different stages of the evolutionary search. However, for COPs with a relatively large proportion of feasible region, this mechanism plays a minor role throughout the search. This is not difficult to understand because, in this case, the offspring population may consistently comprise some or only feasible individuals at each generation, and the replacement will seldom happen. 3) Discussion About the Setting of the Subpopulation Size: Regarding the subpopulation size q, Guo and Yu [24] argued that a smaller q is superior to a bigger one for unconstrained optimization problems. However, with respect to COPs, a bigger q seems more suitable. There are two reasons to support this point of view: 1) a smaller q means that each subpopulation occupies a smaller search space and, consequently, the capability of each disjoint subpopulation to rapidly approach or land within the feasible region is quite limited, especially when the proportion of the feasible region

Fig. 7. Pseudocode of the local search model.

is very small and 2) the probability that a subpopulation with a smaller q contains both feasible and infeasible individuals is very low. Nevertheless, recombining feasible solutions with infeasible solutions in promising areas is very effective for finding global optimum solutions located precisely on the boundaries of the feasible region of the search space (which are known to be the most difficult solutions to reach). Note, however, that the value of the parameter q should not be construed as "the bigger the better," since if q is too big, the number of subpopulations in a given population is dramatically reduced, and thus each subpopulation cannot exert its strength very well. Therefore, a reasonable subpopulation size should be used. 4) Advantages of the Local Search Model: Fig. 7 provides a detailed illustration of the above procedure, where Nk denotes the number of individuals in the population A after k iterations, and N1 = N. Two advantages of the local search model deserve attention. On one hand, exploiting the nondominated individuals to eliminate the dominated individuals ensures a monotone decline of the fitness of the population. This


behavior can serve as a downhill operator. On the other hand, the parallel subpopulation search operator can rapidly drive the population into the feasible region from different directions by means of the BIIRS.

D. Genetic Operators

Recognizing the drawbacks of the binary string representation [34], we adopt the real-coded representation in this paper.

1) Crossover Operator in the Global Search Model: In the global search model, we use the two-point crossover operator. Consider two different individuals of the given population, i.e.,

xi = (xi1, . . . , xir, . . . , xis, . . . , xin)    (12)
xj = (xj1, . . . , xjr, . . . , xjs, . . . , xjn)    (13)

where r and s denote two randomly selected crossover points, and r ≤ s. After the crossover operation, the produced individuals are

x′i = (xi1, . . . , xjr, . . . , xjs, . . . , xin)    (14)
x′j = (xj1, . . . , xir, . . . , xis, . . . , xjn).    (15)
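The segment exchange in (12)-(15) can be sketched as follows; this is a minimal illustration under the assumption that the crossover points r and s are drawn uniformly at random inside the operator (the paper does not specify the implementation beyond the equations):

```python
import random

def two_point_crossover(xi, xj, rng=random.Random(1)):
    """Two-point crossover as in (12)-(15): swap the segment between
    crossover points r and s (r < s, both inclusive) of two parents."""
    n = len(xi)
    r, s = sorted(rng.sample(range(n), 2))  # two distinct positions
    ci = xi[:r] + xj[r:s + 1] + xi[s + 1:]
    cj = xj[:r] + xi[r:s + 1] + xj[s + 1:]
    return ci, cj

# At every position, the children hold the two parental genes, swapped
# exactly on the segment [r, s].
ci, cj = two_point_crossover([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])
```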

It is hard to present a strict theory to justify our choice, but it seems intuitively that two-point crossover supports the building block hypothesis and the genetic repair hypothesis [35].

2) Mutation Operator in the Global Search Model: An improved version of the breeder GA mutation operator [36] is proposed in this paper for the global search. Suppose x = (x1, x2, . . . , xn) is a chromosome to be mutated. A new chromosome x′ = (x′1, x′2, . . . , x′n) is generated as

x′i = xi ± rangi · Σ_{k=0}^{15} αk · 2^{−k},  if U(0, 1) < 1/n;  x′i = xi, otherwise,  i = 1, . . . , n    (16)

where U(0, 1) is a uniform random number generator between 0 and 1, and rangi defines the mutation range and is set to (u(i) − l(i)) · (1 − current_gen/total_gen)^6. The plus or minus sign is chosen with a probability of 0.5, and αk ∈ {0, 1} is randomly generated with P(αk = 1) = 1/16. This operator generates values in the interval [xi − rangi, xi + rangi], with a very high probability of generating values in a neighborhood of xi. The improvement in this paper is the dynamic setting of the parameter rangi, which gradually decreases by a nonlinear function as the number of iterations increases; this is intended to produce a highly explorative behavior in the early stage and to ensure global convergence in the later stage.

3) Crossover Operator in the Local Search Model: The simplex crossover (SPX) [37], which generates offspring based on a uniform probability distribution and does not need any fitness information, is adopted in the local search model. In R^n, n + 1 mutually independent parent vectors xi, i = 1, . . . , n + 1, form a simplex. The procedure to produce an offspring consists of the following: 1) employing a certain ratio to expand the original simplex in each direction (xi − o), where o is the center of the n + 1 vectors, i.e., o = (1/(n + 1)) Σ_{i=1}^{n+1} xi,

Fig. 8. Three-parent SPX in 2-D space.

thereby forming a new simplex and 2) choosing one point from the new simplex as an offspring. For simplicity, we consider this process with x1, x2, and x3 in 2-D space, where x1, x2, and x3 are three parent vectors. These vectors constitute a simplex. We expand this simplex in each direction by (1 + ε) times, where ε is a control parameter that defines the expanding rate, and ε ≥ 0. Let o = (1/3) Σ_{i=1}^{3} xi and yi = (1 + ε)(xi − o). Thus, y1, y2, and y3 constitute a new simplex. We then randomly choose a point z from the new simplex, i.e., z = k1y1 + k2y2 + k3y3 + o, where k1, k2, and k3 are randomly selected within the range [0, 1] and satisfy the condition k1 + k2 + k3 = 1. The reason for using SPX is that it is very simple, and the computational complexity of creating an offspring is only O(n). Fig. 8 illustrates the procedure to produce offspring with three-parent SPX in 2-D space.

E. Computational Time Complexity of HCOEA

From the pseudocodes provided, we can easily obtain the computational time complexity of HCOEA. The global search model requires 2N distance measures among individuals in the parent and offspring populations. In the local search model, about N(N + q − 1)/q distance measures are first required to generate disjoint subpopulations of size q; the identification of nondominated individuals demands 2N·q̄ comparisons among individuals in the offspring population, where q̄ is the number of nondominated individuals in the offspring population; using nondominated individuals to replace the dominated individuals requires 2N·q̄ comparisons; and the number of comparisons for the BIIRS is negligible. Therefore, the computational complexity of HCOEA is O(N²).

VI. EXPERIMENTAL STUDY

A. Test Problems and Experimental Conditions

To examine the performance of the selected COEAs and the proposed algorithm, 13 well-known benchmark functions are used. All of the test functions are taken from [38].
These test cases include various types (i.e., linear, nonlinear, and quadratic) of objective functions with different numbers of


TABLE I SUMMARY OF 13 BENCHMARK FUNCTIONS

decision variables (n) and a range of types (i.e., linear inequalities, nonlinear equalities, and nonlinear inequalities) and numbers of constraints. The main characteristics of the test cases are reported in Table I, where a is the number of constraints active at the optimal solution. In addition, ρ is the ratio between the size of the feasible search space and that of the entire search space, i.e.,

ρ = |Ω|/|S|    (17)

where |S| is the number of solutions randomly generated from S, and |Ω| is the number of feasible solutions among these |S| solutions. In our experimental setup, |S| = 1 000 000. In all runs, we use N = 250. The probabilities of the crossover and mutation operations in the global search model (denoted as pgc and pgm) are 0.8 and 0.1, respectively. The probability of the crossover operation in the local search model (denoted as plc) is 1.0. The reference point r is randomly chosen from the search space at each generation. In this paper, we perform 240 000 fitness function evaluations (FFEs) for each test function. In addition, all equality constraints are converted into inequality constraints as |h(x)| − δ ≤ 0, where δ = 1E−07. Regarding the implementation of the SPX, if n < q, one should randomly select n + 1 individuals to generate n + 1 offspring and then randomly select n + 1 individuals again to generate another n + 1 offspring; this process continues until the total number of offspring created reaches q. If n ≥ q, q individuals are used to produce q offspring directly. In conclusion, we recommend that ε be an integer between 5 and 8 if 2 ≤ n ≤ 10 and an integer between 8 and 11 if 10 < n ≤ 20.
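The SPX procedure described in Section V-D can be sketched as follows. This is a simplified illustration: normalizing independent uniform weights, as done here, satisfies the requirement that the weights lie in [0, 1] and sum to 1, but does not sample the expanded simplex exactly uniformly; the function name and interface are ours:

```python
import random

def spx(parents, eps, rng=random.Random(2)):
    """Simplex crossover sketch: expand the simplex of the parent
    vectors about their centroid o by (1 + eps), then return one
    offspring z = sum_i k_i * y_i + o with random convex weights k_i."""
    m = len(parents)          # number of parents (n + 1 in the paper)
    n = len(parents[0])       # dimensionality
    center = [sum(p[d] for p in parents) / m for d in range(n)]
    # Expanded directions y_i = (1 + eps) * (x_i - o).
    ys = [[(1 + eps) * (p[d] - center[d]) for d in range(n)]
          for p in parents]
    # Random weights k_i in [0, 1] with sum 1 (normalized uniforms:
    # a simplification, not an exactly uniform simplex sample).
    w = [rng.random() for _ in range(m)]
    total = sum(w)
    k = [v / total for v in w]
    # Offspring z = sum_i k_i * y_i + o.
    return [sum(k[i] * ys[i][d] for i in range(m)) + center[d]
            for d in range(n)]

# With eps = 0, the offspring is a convex combination of the parents.
child = spx([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]], eps=0.0)
```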

B. Influence of the Subpopulation Size

The impact of the subpopulation size q on the behavior of our algorithm has been studied empirically. We have used five different values of q (i.e., 6, 8, 10, 12, and 14) to optimize the four complicated functions g02, g05, g07, and g10 over 30 independent runs, with the aim of determining the most robust value for this parameter. The runs terminate when the predefined number of FFEs is reached.

The results are presented in Fig. 9. It is clear that the global convergence reliability of HCOEA with q = 12 is higher than that of the other settings. More specifically, smaller sizes (i.e., 6 and 8) reduce the global convergence reliability of HCOEA as expected, especially on problems g02, g05, and g10. A similar performance degradation occurs on problem g07 for HCOEA with a large subpopulation size (i.e., 14). Although it is hard to provide a conclusive observation about the optimal subpopulation size q for HCOEA, a visual inspection of Fig. 9 allows one to conclude that q = 12 is an appropriate choice among all the q values analyzed. Observations from additional experiments (the related results are not listed here due to space limitations) consistently justified this decision. Hence, in the remaining experiments, the value of q is fixed to 12.

C. General Performance of HCOEA

Another experiment is implemented to investigate the general performance of HCOEA. For each test case, 30 independent trials are executed in MATLAB (source code may be available from the authors upon request). It is worth noting that, with N = 250 and q = 12, ten chromosomes would not be evaluated in the local search model at each generation. To overcome this defect, we change the population size to 252, which implies that a population consists of 21 subpopulations, while executing the same number of FFEs. Table II summarizes the experimental results using the above parameters, showing the known "optimal" solution, the "best," "median," "mean," and "worst" objective function values, the standard deviations, and the average percentage of feasible solutions in the final population over 30 independent runs for each test problem.

As described in Table II, in terms of the selected performance metrics, the results obtained by HCOEA are very competitive under a low computational effort. Indeed, the global optima are consistently found by HCOEA over 30 independent runs in ten test cases (i.e., g01, g03, g04, g05, g06, g08, g09, g11, g12, and g13). For the remaining test cases, the resulting solutions are very close to the global optima, as exhibited in Fig. 10. From Table II, it is evident that the standard deviations are very small, which reflects the fact that HCOEA is capable of performing a robust and stable search. Furthermore, feasible solutions are consistently found for all test problems. The above observations validate that HCOEA has substantial potential in coping with various COPs.

D. Comparison of Different COEAs

Since HCOEA is compared with four other state-of-the-art COEAs in the following experiments, we first give a brief description of them.
• IS-PAES method [18]. This method has been discussed in Section IV.
• Simple multimembered evolution strategy (SMES) method [23]. Based on the self-adaptation mechanism of an ES and three simple comparison criteria, this method also


Fig. 9. Box plots of the experimental results with different subpopulation sizes on problems (a) g02, (b) g05, (c) g07, and (d) g10 over 30 independent runs. The box stretches from the lower hinge (defined as the 25th percentile) to the upper hinge (the 75th percentile) and therefore contains the middle half of the scores in the distribution. The median is shown as a line across the box.

TABLE II EXPERIMENTAL RESULTS WITH 30 INDEPENDENT RUNS ON 13 BENCHMARK FUNCTIONS. A RESULT IN BOLDFACE INDICATES THAT THE GLOBAL OPTIMUM (OR BEST KNOWN SOLUTION) WAS REACHED

introduces three key components: 1) a diversity mechanism, the aim of which is to add some infeasible solutions into the next population; 2) a combined recombination; and 3) a reduction of the initial step size of the ES.
• Self-adaptive fitness formulation (SAFF) method [32]. This is a modified version of the method in [39], where the penalty is divided into two stages. The improved approach eliminates the fixed penalty factor for the second penalty stage proposed in [39] by assigning the penalized objective

function value of the worst individual to be equal to that of the individual with the maximum objective function value in the current population. This makes the method self-adaptive and more dynamic, considering that the individual with the maximum objective function value might vary from one generation to another.
• Stochastic ranking (RY) method [38]. Runarsson and Yao introduced a stochastic ranking method to balance two objectives, namely: 1) f(x) and 2) G(x). A probability


Fig. 10. Plots show the number of trials versus the quality of the resulting solutions achieved. Results are derived from 30 independent trials on problems (a) g02, (b) g07, and (c) g10.

parameter pf is involved to compare individuals as follows. Given pairwise adjacent individuals: 1) if both individuals are feasible, the one with the better objective function value wins; otherwise, 2) if a uniformly generated random number u between 0 and 1 is less than pf, the one with the better objective function value wins; otherwise, the one with the smaller degree of constraint violation wins.

The numerical results obtained from HCOEA and methods 1-4 are analyzed according to the following aspects.
• Robustness and search quality: the capability of HCOEA and methods 1-4 to perform well on different problems and to provide high-quality feasible solutions.
• Computational cost: the computational effort needed by HCOEA and methods 1-4.

1) Robustness and Search Quality: As shown in Table III, the performance of HCOEA is compared in detail with that of methods 1-4 by examining the selected performance metrics. It is clear that HCOEA performs significantly better than methods 1-4 in terms of the "best," "mean," and "worst" objective function values and the standard deviations. Methods 1-4 provide competitive results; however, most of their results are worse than, or merely match, the solutions resulting from HCOEA. In addition, from Table III, we can see that methods 1-4 have a strong tendency to converge to local optima, especially on some intricate test functions. For instance, the resulting solutions of methods 1-4 are still far from the global optima for problems g02, g05, g07, g09, g10, and g13. Moreover, the standard deviations provided by methods 1-4 are relatively large, indicating a lack of robustness. In addition, to find feasible solutions, an equality constraint is often substituted by a pair of inequality constraints, i.e., h(x) − δ ≤ 0 and −h(x) − δ ≤ 0, where δ is a small positive value. It is worth noting that this conversion directly influences the capability of the approach and the accuracy of the solution found.
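The handling of equality constraints can be sketched as follows. This paper adopts the single-inequality form |h(x)| − δ ≤ 0 with δ = 1E−07; summing the individual violations into one degree G(x), as done below, is a common convention assumed here for illustration, and the function names are ours:

```python
def violation(x, ineq, eq, delta=1e-7):
    """Degree of constraint violation G(x) >= 0, with G(x) == 0
    meaning feasible. Each inequality g(x) <= 0 contributes
    max(0, g(x)); each equality h(x) = 0 is converted to the
    inequality |h(x)| - delta <= 0 and contributes its excess."""
    g_terms = [max(0.0, g(x)) for g in ineq]
    h_terms = [max(0.0, abs(h(x)) - delta) for h in eq]
    return sum(g_terms) + sum(h_terms)

# Example constraints: g(x) = x0 - 1 <= 0 and h(x) = x1 = 0.
ineq = [lambda x: x[0] - 1.0]
eq = [lambda x: x[1]]
violation([0.5, 0.0], ineq, eq)  # 0.0: both constraints satisfied
```

The smaller δ is, the narrower the relaxed band around each equality constraint, which is why the δ = 1E−07 used here makes the problems harder than the looser settings of the compared methods.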
Moreover, the extent of this effect is related to the value of the parameter δ. As a result of such a transformation, for problems g03, g04, g06, and g11, the best results provided by method 2 [40] are better than the "known" optima, and for problem g05, the best result provided by method 4 is also better than the known optimum; however, these behaviors do not mean that these methods really find "new" optima. For these cases, the best

solutions reported by HCOEA are very close to the known optimal solutions, since the parameter δ is set to 1E−07 in this paper, which is remarkably smaller than the values used by methods 1-4 and makes the problems more difficult to solve. In all trials, HCOEA consistently finds feasible solutions for all the test cases. However, on highly constrained problems (i.e., problems with a very small proportion of feasible region), methods 1-4 are less robust in finding feasible solutions. For problem g05, feasible solutions are found in only 9 out of 20 runs by method 3. For problem g10, method 4 finds only 6 feasible solutions in 30 runs, and method 3 finds 17 feasible solutions in 20 runs. For problems g03, g11, and g13, feasible solutions are not reported by method 1 because it applies no transformation of equality constraints.

2) Computational Cost: As far as computational costs are concerned (measured by the number of FFEs), HCOEA and method 2 have the lowest cost (240 000 FFEs) among the compared methods. The computational efforts of methods 1 and 4 are 350 000 FFEs. In particular, for method 3, the number of FFEs is 1 400 000, which is nearly six times that of HCOEA and SMES.

Conclusion From the Experiment: HCOEA outperforms methods 1-4 in the quality of the resulting solutions, as well as in the reliability and efficiency of global convergence.

E. Effectiveness of the BIIRS

We have also demonstrated the effectiveness of the BIIRS proposed in Section V-C. We conduct two different experiments on problems g02, g05, g10, and g13, i.e., we execute HCOEA with and without this scheme. The results are shown in Table IV. The following is a summary of the results obtained.

1) Test Function g02: In this case, the difference in the results between the two versions is marginal.
This phenomenon means that the BIIRS plays a minor role on problems with a relatively large proportion of feasible region, as analyzed previously.

2) Test Functions g05 and g13: These cases have a very small proportion of feasible region. The version


TABLE III COMPARISON OF OUR ALGORITHM (INDICATED BY HCOEA) WITH RESPECT TO IS-PAES [18], SMES [23], SAFF [32], AND RY [38] ON 13 BENCHMARK FUNCTIONS. A RESULT IN BOLDFACE INDICATES A BETTER RESULT OR THAT THE GLOBAL OPTIMUM (OR BEST KNOWN SOLUTION) WAS REACHED; NA = NOT AVAILABLE

without BIIRS is unable to reach the feasible region. Clearly, the use of the proposed scheme significantly improves the results by directing the population toward feasibility from different directions.

3) Test Function g10: Although feasible solutions can be found consistently, the mean result reported is still far away from the known optimum, which means the

robustness of the algorithm without BIIRS is insufficient. This is due to the fact that the population enters the feasible region from only a few directions; thus, the diversity of the population is not good enough.

Conclusion From the Experiment: From the results of this experiment, we can see that the BIIRS is very suitable


TABLE IV COMPARISON OF HCOEA WITH AND WITHOUT THE BIIRS ON PROBLEMS g02, g05, g10, AND g13 OVER 30 INDEPENDENT RUNS. (#) DENOTES THE NUMBER OF TRIALS (OUT OF 30) IN WHICH NO FEASIBLE SOLUTION IS FOUND IN THE FINAL POPULATION

TABLE V EXPERIMENTAL RESULTS ON 13 BENCHMARK FUNCTIONS WITH VARYING MUTATION PROBABILITY IN THE GLOBAL SEARCH MODEL; 30 INDEPENDENT RUNS ARE PERFORMED

for problems in which the proportion of feasible region is very small compared to the whole search space. Moreover, for this kind of problem, the proposed scheme not only guides the population toward feasibility but also helps the population reach competitive results.

VII. D ISCUSSION In this section, the effect of algorithm parameters on the performance will be discussed by various experiments.

A. Effect of Mutation in the Global Search

Mutation adjusts the diversity of the population and the convergence speed of the search process. If the mutation rate is too small, the convergence speed is very high, but the diversity becomes low, and the search often converges to a local optimum. Conversely, mutation with a high rate may lead to the loss of good solutions and hence tends to slow down the algorithm in some cases. Therefore, a suitable mutation rate must be selected. Table V summarizes the mean objective function values when only the mutation rate is changed, to 0.0, 0.05, 0.1, 0.15, and 0.2. The better cases are in boldface. In the case of pgm = 0.0, the results for problems g06 and g10 are much worse than the other results. Also, in the case of pgm = 0.2, the result for problem g13 is worse than the other results. These results highlight the necessity of mutation

and show that a mutation rate between 0.05 and 0.15 is an appropriate setting for HCOEA.

B. Effect of Crossover in the Global Search

As one of the major genetic operators, crossover produces offspring in the hope of achieving better fitness by exchanging partial genetic information (gene segments of the chromosomes) of two parents. In this experiment, we test the global search model with five different crossover rates, i.e., 0.6, 0.7, 0.8, 0.9, and 1.0. We summarize the mean objective function values in Table VI. It is clear that the impact of the crossover rate is mainly observed on problems g01 and g02. In the case of pgc = 0.6, the results for problems g01 and g02 are much worse than the other results. This behavior reveals that, in the global search model, a crossover rate between 0.7 and 1.0 is an appropriate setting for many problems.

C. Effect of Crossover in the Local Search

As indicated before, the crossover operator plays a critical role in the search process. In this experiment, we also test the local search model with five different crossover rates, i.e., 0.6, 0.7, 0.8, 0.9, and 1.0, to see the effect of this operator on the performance of HCOEA. A crossover rate less than 1.0 signifies that only a part of the subpopulations evolves at each generation.


TABLE VI EXPERIMENTAL RESULTS ON 13 BENCHMARK FUNCTIONS WITH VARYING CROSSOVER PROBABILITY IN THE GLOBAL SEARCH MODEL; 30 INDEPENDENT RUNS ARE PERFORMED

TABLE VII EXPERIMENTAL RESULTS ON 13 BENCHMARK FUNCTIONS WITH VARYING CROSSOVER PROBABILITY IN THE LOCAL SEARCH MODEL; 30 INDEPENDENT RUNS ARE PERFORMED. (#) DENOTES THE NUMBER OF TRIALS (OUT OF 30) IN WHICH NO FEASIBLE SOLUTION IS FOUND IN THE FINAL POPULATION

We summarize the mean objective function values in Table VII. In the cases of plc = 0.6, 0.7, and 0.8, the algorithm cannot consistently find feasible solutions in 30 trials for problem g13. In the cases of plc = 0.9 and 1.0, the algorithm reaches more competitive results. We conclude that a value between 0.9 and 1.0 is suitable for this parameter of our local search model.

Conclusion From the Experiment: As shown in Tables V-VII, although different combinations of the parameters pgm, pgc, and plc might yield the best results for different problems, HCOEA can find high-quality solutions when these parameters are varied within appropriately large ranges. This behavior indicates that HCOEA is not sensitive to these parameters.

VIII. CONCLUSION AND FUTURE WORK

In this paper, a novel HCOEA is proposed. HCOEA can be characterized as follows: 1) combining multiobjective optimization with global and local search models; 2) maintaining the diversity of the population through an NGA, in which a new individual selection criterion based on tournament selection is presented; 3) speeding up the convergence by taking advantage of a parallel subpopulation search mechanism; 4) the BIIRS; and 5) eliminating individuals on the basis of nondominated individuals.

From the comparative study, HCOEA has shown its potential to handle various COPs, and its performance is much better than that of all the other state-of-the-art COEAs referred to in this paper in terms of the selected performance metrics. An important subject of ongoing work is to try different global and local search models, since a suitable search model can improve the capability of the algorithm remarkably. Another direction of further investigation is to apply our method to engineering design problems in the real world, as well as to other types of COPs, e.g., multiobjective optimization with constraints.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers and the Associate Editor for their valuable and constructive comments and suggestions, which greatly improved this paper, as well as J. Cai and Q. Cai for improving the presentation of this paper.


REFERENCES
[1] Z. Michalewicz and M. Schoenauer, "Evolutionary algorithms for constrained parameter optimization problems," Evol. Comput., vol. 4, no. 1, pp. 1–32, 1996.
[2] C. A. C. Coello, "Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art," Comput. Methods Appl. Mech. Eng., vol. 191, no. 11/12, pp. 1245–1287, Jan. 2002.
[3] Z. Michalewicz, K. Deb, M. Schmidt, and T. Stidsen, "Test-case generator for constrained parameter optimization techniques," IEEE Trans. Evol. Comput., vol. 4, no. 3, pp. 197–215, Aug. 2000.
[4] S. Koziel and Z. Michalewicz, "Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization," Evol. Comput., vol. 7, no. 1, pp. 19–44, 1999.
[5] E. Mezura-Montes, "Alternative techniques to handle constraints in evolutionary optimization," Ph.D. dissertation, Comput. Sci., CINVESTAV-IPN, Mexico City, Mexico, Dec. 2004.
[6] J. X. Yu, X. Yao, C. Choi, and G. Gou, "Materialized view selection as constrained evolutionary optimization," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 33, no. 4, pp. 458–467, Nov. 2003.
[7] E. Mezura-Montes and C. A. C. Coello, "A numerical comparison of some multiobjective-based techniques to handle constraints in genetic algorithms," Evol. Comput. Group, Dept. de Ingeniería Eléctrica, Sección de Computación, CINVESTAV-IPN, México D.F., México, Tech. Rep. EVOCINV-03-2002, 2002. [Online]. Available: http://www.cs.cinvestav.mx/constraint/
[8] E. Camponogara and S. N. Talukdar, "A genetic algorithm for constrained and multiobjective optimization," in Proc. 3rd NWGA, J. T. Alander, Ed., Vaasa, Finland, Aug. 1997, pp. 49–62.
[9] J. D. Schaffer, "Multiple objective optimization with vector evaluated genetic algorithms," in Proc. 1st Int. Conf. Genetic Algorithms and Their Appl., J. J. Grefenstette, Ed., Hillsdale, NJ, 1985, pp. 93–100.
[10] P. D. Surry and N. J. Radcliffe, "The COMOGA method: Constrained optimization by multiobjective genetic algorithm," Control Cybern., vol. 26, no. 3, pp. 391–412, 1997.
[11] C. A. C. Coello, "Treating constraints as objectives for single-objective evolutionary optimization," Eng. Optim., vol. 32, no. 3, pp. 275–308, 2000.
[12] C. M. Fonseca and P. J. Fleming, "Multiobjective optimization and multiple constraint handling with evolutionary algorithms—Part I: A unified formulation," IEEE Trans. Syst., Man, Cybern. A, Syst. Humans, vol. 28, no. 1, pp. 26–37, Jan. 1998.
[13] C. A. C. Coello, "Constraint handling using an evolutionary multiobjective optimization technique," Civ. Eng. Environ. Syst., vol. 17, pp. 319–346, 2000.
[14] C. A. C. Coello and E. Mezura-Montes, "Constraint-handling in genetic algorithms through the use of dominance-based tournament selection," Adv. Eng. Inf., vol. 16, no. 3, pp. 193–203, 2002.
[15] J. Horn, N. Nafpliotis, and D. Goldberg, "A niched Pareto genetic algorithm for multiobjective optimization," in Proc. 1st IEEE Conf. Evol. Comput., IEEE World Congr. Comput. Intell., Jun. 1994, vol. 1, pp. 82–87.
[16] T. Ray and K. M. Liew, "Society and civilization: An optimization algorithm based on the simulation of social behavior," IEEE Trans. Evol. Comput., vol. 7, no. 4, pp. 386–396, Aug. 2003.
[17] J. D. Knowles and D. W. Corne, "Approximating the nondominated front using the Pareto archived evolution strategy," Evol. Comput., vol. 8, no. 2, pp. 149–172, 2000.
[18] A. H. Aguirre, S. B. Rionda, C. A. C. Coello, G. Lizárraga, and E. Mezura-Montes, "Handling constraints using multiobjective optimization concepts," Int. J. Numer. Methods Eng., vol. 59, no. 15, pp. 1989–2017, Apr. 2004.
[19] Y. Zhou, Y. Li, J. He, and L. Kang, "Multiobjective and MGG evolutionary algorithm for constrained optimization," in Proc. CEC, Dec. 2003, vol. 1, pp. 1–5.
[20] S. Venkatraman and G. G. Yen, "A generic framework for constrained optimization using genetic algorithms," IEEE Trans. Evol. Comput., vol. 9, no. 4, pp. 424–435, Aug. 2005.
[21] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Trans. Evol. Comput., vol. 6, no. 2, pp. 182–197, Apr. 2002.
[22] E. Mezura-Montes and C. A. C. Coello, "On the usefulness of the evolution strategies self-adaptation mechanism to handle constraints in global optimization," Evol. Comput. Group, Dept. de Ingeniería Eléctrica, Sección de Computación, CINVESTAV-IPN, México D.F., México, Tech. Rep. EVOCINV-01-2003, 2003. [Online]. Available: http://www.cs.cinvestav.mx/constraint/
[23] ——, "A simple multimembered evolution strategy to solve constrained optimization problems," IEEE Trans. Evol. Comput., vol. 9, no. 1, pp. 1–17, Feb. 2005.
[24] G. Guo and S. Yu, "Evolutionary parallel local search for function optimization," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 33, no. 6, pp. 864–876, Dec. 2003.
[25] G. Guo, "Analysis and suppressing methods of genetic drift in evolutionary computation," Ph.D. dissertation, Control Theory Control Eng., Central South Univ., Changsha, China, 2003.
[26] H. Ishibuchi and T. Murata, "Multiobjective genetic local search algorithm and its application to flowshop scheduling," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 28, no. 3, pp. 392–403, Aug. 1998.
[27] M. Lozano, F. Herrera, N. Krasnogor, and D. Molina, "Real-coded memetic algorithms with crossover hill-climbing," Evol. Comput., vol. 12, no. 3, pp. 273–302, Sep. 2004.
[28] J. D. Knowles and D. W. Corne, "M-PAES: A memetic algorithm for multiobjective optimization," in Proc. CEC, 2000, pp. 325–332.
[29] A. Jaszkiewicz, "Genetic local search for multiple objective combinatorial optimization," Eur. J. Oper. Res., vol. 137, no. 1, pp. 50–71, 2002.
[30] Memetic algorithms' home page. [Online]. Available: http://www.densis.fee.unicamp.br/~moscato/memetic_home.html/
[31] E. Zitzler, M. Laumanns, and L. Thiele, "SPEA2: Improving the strength Pareto evolutionary algorithm for multiobjective optimization," in Proc. EUROGEN-Evol. Methods Des., Optim. and Control With Appl. Ind. Problems, K. C. Giannakoglou et al., Eds., 2001, pp. 95–100.
[32] R. Farmani and J. A. Wright, "Self-adaptive fitness formulation for constrained optimization," IEEE Trans. Evol. Comput., vol. 7, no. 5, pp. 445–455, Oct. 2003.
[33] T. Binh and U. Korn, "MOBES: A multiobjective evolution strategy for constrained optimization problems," in Proc. 3rd Int. Conf. Genetic Algorithms (Mendel), Brno, Czech Republic, 1997, pp. 176–182.
[34] F. Herrera, M. Lozano, and J. L. Verdegay, "Tackling real-coded genetic algorithms: Operators and tools for behavioral analysis," Artif. Intell. Rev., vol. 12, no. 4, pp. 265–319, 1998.
[35] J. H. Holland, "Building blocks, cohort genetic algorithms, and hyperplane-defined functions," Evol. Comput., vol. 8, no. 4, pp. 373–391, Winter 2000.
[36] H. Mühlenbein and D. Schlierkamp-Voosen, "Predictive models for the breeder genetic algorithm I: Continuous parameter optimization," Evol. Comput., vol. 1, no. 1, pp. 25–49, 1993.
[37] S. Tsutsui, M. Yamamura, and T. Higuchi, "Multi-parent recombination with simplex crossover in real coded genetic algorithms," in Proc. GECCO, 1999, pp. 657–664.
[38] T. P. Runarsson and X. Yao, "Stochastic ranking for constrained evolutionary optimization," IEEE Trans. Evol. Comput., vol. 4, no. 3, pp. 284–294, Sep. 2000.
[39] J. A. Wright and R. Farmani, "Genetic algorithm: A fitness formulation for constrained minimization," in Proc. Genetic and Evol. Comput. Conf., San Francisco, CA, 2001, pp. 725–732.
[40] E. Mezura-Montes and C. A. C. Coello, "A simple multimembered evolution strategy to solve constrained optimization problems," Evol. Comput. Group, Dept. de Ingeniería Eléctrica, Sección de Computación, CINVESTAV-IPN, México D.F., México, Tech. Rep. EVOCINV-04-2003, 2003. [Online]. Available: http://www.cs.cinvestav.mx/constraint/

Yong Wang was born in Hubei, China, on December 26, 1980. He is currently working toward the Ph.D. degree at Central South University, Changsha, China. He has published more than ten academic papers since 2001. His current research interests include the theory of evolutionary computation, constrained optimization, multiobjective optimization, and their applications to multiple mobile robots.


Zixing Cai (SM’98) received the Diploma degree from Jiao Tong University, Xi’an, China, in 1962. He has been teaching and conducting research with the School of Information Science and Engineering, Central South University, Changsha, China, since 1962. From May to December 1983, he visited the Center of Robotics, Department of Electrical Engineering and Computer Science, University of Nevada, Reno. He then visited the Advanced Automation Research Laboratory, School of Electrical Engineering, Purdue University, West Lafayette, IN, from December 1983 to June 1985. From October 1988 to September 1990, he was a Senior Research Scientist with the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Science, Beijing, and with the National Laboratory of Machine Perception, Center of Information, Beijing University, Beijing. From September 1992 to March 1993, he was a Visiting Professor with the Center for Intelligent Robotic Systems for Space Exploration, Department of Electrical, Computer and System Engineering, Rensselaer Polytechnic Institute, Troy, NY. From April to July 2004, he visited the Institute of Informatics and Automation, Russian Academy of Sciences, Moscow, Russia. In February 1989, he was appointed an Expert of the United Nations by UNIDO. He has published more than 500 papers and 28 books/textbooks. His research interests include intelligent systems, artificial intelligence, intelligent computation, and robotics. Prof. Cai has received more than 30 state, province, and university awards in science, technology, and teaching. In August 2006, he received the Outstanding Robotics Education Prize from the Chinese Association for Artificial Intelligence. He also received the State Eminent Professor Prize of China from the Ministry of Education, China.


Guanqi Guo received the B.Sc. degree in electronics from Huazhong Institute of Science and Technology, Wuhan, China, in 1983 and the Ph.D. degree from Central South University, Changsha, China, in 2003. He is currently a Professor at the Department of Computer Science and Information Engineering, Hunan Institute of Science and Technology, Yueyang, China. His research interests include self-adaptive niching techniques in evolutionary computation, hybrid evolutionary algorithms, artificial intelligence, and adaptive control systems. He has published more than 25 research papers in journals (such as the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, the Chinese Journal of Computers, the Journal of Software, and Control Theory and Applications) and international conferences (such as WCICA, ACC, and DCABES).

Yuren Zhou received the B.Sc. degree in mathematics from Peking University, Beijing, China, in 1988, and the M.Sc. degree in mathematics and Ph.D. degree in computer science from Wuhan University, Wuhan, China, in 1991 and 2003, respectively. He is currently an Associate Professor in the School of Computer Science and Engineering, South China University of Technology, Guangzhou. His current research interests are focused on evolutionary computation and data mining.