Mathematical and Computer Modelling 48 (2008) 232–248

Using the technique of scalarization to solve the multiobjective programming problems with fuzzy coefficients

Hsien-Chung Wu
Department of Mathematics, National Kaohsiung Normal University, Kaohsiung 802, Taiwan

Received 21 June 2007; received in revised form 31 July 2007; accepted 22 August 2007
Abstract

Scalarization of multiobjective programming problems with fuzzy coefficients is proposed in this paper, using an embedding theorem and the concept of convex cone (ordering cone). Since the set of all fuzzy numbers can be embedded into a normed space, it is natural to invoke the scalarization techniques of vector optimization to solve multiobjective programming problems with fuzzy coefficients. Two solution concepts are proposed by considering different convex cones.
Keywords: Fuzzy number; Convex cone; Partial ordering; Minimal element; Pareto optimal solution
1. Introduction

In conventional optimization problems, the coefficients are all assumed to be real numbers. However, uncertainty always occurs in the real world, so efficient methods are needed to solve optimization problems that involve uncertainty. Two kinds of uncertainty in optimization problems have been widely studied: stochastic optimization and fuzzy optimization. When the coefficients are modelled as random variables, the methodology developed in the field of stochastic optimization should be invoked to solve the optimization problem with random coefficients; we refer to Birge and Louveaux [3], Kall [9], Prékopa [19], Stancu-Minasian [27] and Vajda [28] for this topic. When the coefficients are taken as fuzzy numbers, the optimization problems with fuzzy coefficients should be solved by invoking the methodology of fuzzy optimization.

Bellman and Zadeh [2] inspired the development of fuzzy optimization by providing aggregation operators to combine fuzzy goals and the fuzzy decision space. Following this motivation, a large body of literature on fuzzy optimization problems has appeared. The collections of papers on fuzzy optimization edited by Słowiński [26] and Delgado et al. [5] represent the main stream of this topic, and the books by Zimmermann [33] and Lai and Hwang [11,12] give insightful surveys. Zimmermann [32] first used fuzzified constraint and objective functions to solve multiobjective linear programming problems. Chanas [4] used the parametric programming technique to solve fuzzy multiobjective linear programming problems.
Li and Lee [13–15] proposed a two-phase approach to solve the (crisp) multicriteria de Novo programming problems, and then used the same technique to solve fuzzy multiobjective programming problems with fuzzy coefficients by considering the (α, β)-level problem, since the (α, β)-level problem is a conventional multiobjective problem (i.e. the two-phase approach is applicable to the (α, β)-level problem). Sakawa [21] introduced the concept of α-Pareto optimal solution to solve multiobjective programming problems with fuzzy coefficients using the interactive method. Sakawa and his team (Kato, Sawada and Yano) [22–25] also solved large-scale multiobjective block-angular linear programming problems with fuzzy parameters by considering the concept of α-Pareto optimality and using the interactive method. Mohan and Nguyen [17] incorporated a reference direction into the interactive method proposed by Sakawa and Yano to solve fuzzy multiobjective programming problems.

There are many other interesting articles concerning fuzzy multiobjective programming problems. Esogbue [6] used fuzzy dynamic programming algorithms to solve fuzzy multistage decision processes. Fatma [7] used a differential equation approach to solve fuzzy vector optimization problems in which the fuzzy parameters were characterized as fuzzy numbers, and the concept of α-Pareto optimality was also introduced. Nishizaki and Sakawa [18] considered a two-person nonzero-sum bimatrix game with single and multiple payoffs, and then examined equilibrium solutions in terms of the degree of attainment of a fuzzy goal for games in fuzzy and multiobjective environments, assuming that a player tries to maximize the degree of attainment of the fuzzy goal, where a fuzzy goal is introduced for a payoff in order to incorporate the ambiguity of human judgments. We also refer to the book by Bector and Chandra [1] for the topic of fuzzy matrix games.

A technique for solving fuzzy optimization problems using an embedding theorem was proposed by Wu [30], and a solution concept for fuzzy multiobjective programming problems based on convex cones was proposed by Wu [29]. The purpose of this paper is to consider another viewpoint, namely the scalarization of multiobjective programming problems with fuzzy coefficients based on the concept of convex cone and an embedding theorem simultaneously, where the embedding theorem used in this paper is different from that of Wu [30]. The set of all fuzzy numbers is not a vector space in general. However, Puri and Ralescu [20] and Kaleva [9] proved that the set of all fuzzy numbers can be embedded into a normed space. Under this motivation, the scalarization technique of vector optimization becomes a useful tool for solving the vector optimization problem obtained from the original multiobjective programming problem with fuzzy coefficients via the embedding theorem and a suitable linear defuzzification function. The notions of convex cone and partial ordering on a vector space are essentially equivalent. This inspires us to define optimality for multiobjective programming problems with fuzzy coefficients in terms of convex cones.
In this paper, we introduce the concept of Pareto optimal solution of a multiobjective programming problem with fuzzy coefficients by considering the notion of minimal (maximal) element introduced by Jahn [8] for vector optimization problems in partially ordered vector spaces. In Section 2, we present the embedding theorem and prove an order preserving property under the embedding function; that is, the order does not change direction under the embedding function. In Section 3, we formulate multiobjective programming problems with fuzzy coefficients using convex cones and introduce different notions of optimality. In Section 4, the scalarization methodology for multiobjective programming problems with fuzzy coefficients is developed by following the essence of the scalarization technique in vector optimization problems.

2. Embedding and order preserving

Let $A$ be a subset of $\mathbb{R}$. The corresponding indicator function of $A$ is given by $\chi_A(x) = 1$ if $x \in A$ and $\chi_A(x) = 0$ if $x \notin A$. A fuzzy subset $\tilde{a}$ of $\mathbb{R}$ is defined by a function $\xi_{\tilde{a}} : \mathbb{R} \to [0, 1]$, which is an extension of the indicator function and is called the membership function of $\tilde{a}$. The $\alpha$-level set of $\tilde{a}$, denoted by $\tilde{a}_\alpha$, is defined by $\tilde{a}_\alpha = \{x \in \mathbb{R} : \xi_{\tilde{a}}(x) \geq \alpha\}$ for all $\alpha \in (0, 1]$. Suppose that $\mathbb{R}$ is endowed with the usual topology. The 0-level set $\tilde{a}_0$ is defined as the closure of the set $\{x \in \mathbb{R} : \xi_{\tilde{a}}(x) > 0\}$, i.e. $\tilde{a}_0 = \mathrm{cl}(\{x \in \mathbb{R} : \xi_{\tilde{a}}(x) > 0\})$.

Definition 2.1. The fuzzy subset $\tilde{a}$ of $\mathbb{R}$ is said to be a fuzzy number if the following conditions are satisfied:
(i) $\tilde{a}$ is normal, i.e. there exists an $x \in \mathbb{R}$ such that $\xi_{\tilde{a}}(x) = 1$;
(ii) $\xi_{\tilde{a}}$ is quasi-concave, i.e. $\xi_{\tilde{a}}(tx + (1-t)y) \geq \min\{\xi_{\tilde{a}}(x), \xi_{\tilde{a}}(y)\}$ for $t \in [0, 1]$;
(iii) $\xi_{\tilde{a}}$ is upper semicontinuous, i.e. $\{x \in \mathbb{R} : \xi_{\tilde{a}}(x) \geq \alpha\}$ is a closed subset of $\mathbb{R}$ for each $\alpha \in (0, 1]$;
(iv) the 0-level set $\tilde{a}_0$ is a closed and bounded subset of $\mathbb{R}$.
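The $\alpha$-level description in Definition 2.1 is easy to make concrete. The following Python sketch is not taken from the paper: it implements a triangular membership function (the shape anticipates the triangular fuzzy numbers of Section 3) together with its $\alpha$-level sets $[\tilde{a}_\alpha^L, \tilde{a}_\alpha^U]$.

```python
# A minimal sketch (not from the paper): a triangular membership function and its
# alpha-level sets, illustrating Definition 2.1 and the interval notation [a_alpha^L, a_alpha^U].
from dataclasses import dataclass


@dataclass
class TriangularFuzzyNumber:
    l: float  # left endpoint of the 0-level set
    m: float  # the point where the membership value equals 1 (normality)
    u: float  # right endpoint of the 0-level set

    def membership(self, x: float) -> float:
        # piecewise linear, quasi-concave and upper semicontinuous
        if self.l <= x <= self.m and self.m > self.l:
            return (x - self.l) / (self.m - self.l)
        if self.m <= x <= self.u and self.u > self.m:
            return (self.u - x) / (self.u - self.m)
        return 1.0 if x == self.m else 0.0

    def level_set(self, alpha: float) -> tuple:
        # the closed interval [a_alpha^L, a_alpha^U]; alpha = 0 gives the closure of the support
        return ((1 - alpha) * self.l + alpha * self.m,
                (1 - alpha) * self.u + alpha * self.m)


a = TriangularFuzzyNumber(0.0, 1.0, 2.0)      # written (0, 1, 2) in Section 3 below
print(a.membership(0.5), a.level_set(0.5))    # 0.5 and (0.5, 1.5)
```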
Since $\tilde{a}_\alpha \subset \tilde{a}_0$ for each $\alpha \in (0, 1]$, the $\alpha$-level sets $\tilde{a}_\alpha$ are bounded subsets of $\mathbb{R}$ for all $\alpha \in (0, 1]$. We denote by $\mathcal{F}(\mathbb{R})$ the set of all fuzzy numbers. It is well known that if $\tilde{a} \in \mathcal{F}(\mathbb{R})$, then each $\alpha$-level set of $\tilde{a}$ is a closed, bounded and convex subset of $\mathbb{R}$, i.e. a closed interval in $\mathbb{R}$. Therefore, the closed interval $\tilde{a}_\alpha$ is denoted by $\tilde{a}_\alpha = [\tilde{a}_\alpha^L, \tilde{a}_\alpha^U]$.

We say that $\tilde{a}$ is a crisp number with value $m$ if its membership function is

$$ \xi_{\tilde{a}}(r) = \begin{cases} 1 & \text{if } r = m \\ 0 & \text{otherwise.} \end{cases} $$

We also use the notation $\tilde{1}_{\{m\}}$ to represent the crisp number with value $m$. It is easy to see that $\tilde{1}_{\{m\}} \in \mathcal{F}(\mathbb{R})$ and $(\tilde{1}_{\{m\}})_\alpha^L = (\tilde{1}_{\{m\}})_\alpha^U = m$ for all $\alpha \in [0, 1]$. For convenience, we write $\tilde{0} = \tilde{1}_{\{0\}}$; therefore $\tilde{0}_\alpha^L = 0 = \tilde{0}_\alpha^U$ for all $\alpha \in [0, 1]$.

Let $\tilde{a}$ and $\tilde{b}$ be two fuzzy numbers. Using the extension principle of Zadeh [31] and referring to Puri and Ralescu [20], the membership function of $\tilde{a} \circledast \tilde{b}$ is defined by

$$ \xi_{\tilde{a} \circledast \tilde{b}}(z) = \sup_{x \circ y = z} \min\{\xi_{\tilde{a}}(x), \xi_{\tilde{b}}(y)\}, \qquad (1) $$

where the operations $\circledast = \oplus$ and $\otimes$ correspond to the operations $\circ = +$ and $\times$, respectively. The membership function of the scalar multiplication $\lambda\tilde{a}$, $\lambda \in \mathbb{R}$, is defined by

$$ \xi_{\lambda\tilde{a}}(z) = \begin{cases} \xi_{\tilde{a}}(z/\lambda) & \text{if } \lambda \neq 0 \\ 0 & \text{if } \lambda = 0 \text{ and } z \neq 0 \\ \sup_{y \in \mathbb{R}} \xi_{\tilde{a}}(y) & \text{if } \lambda = 0 = z \end{cases} \;=\; \begin{cases} \xi_{\tilde{a}}(z/\lambda) & \text{if } \lambda \neq 0 \\ 0 & \text{if } \lambda = 0 \text{ and } z \neq 0 \\ 1 & \text{if } \lambda = 0 = z, \end{cases} \qquad (2) $$
since $\tilde{a}$ is normal. This also means that $\lambda\tilde{a} = \tilde{0}$ if $\lambda = 0$. For $\lambda \neq 0$, we see that $\lambda\tilde{a} = \tilde{1}_{\{\lambda\}} \otimes \tilde{a}$.

Proposition 2.1. Let $\tilde{a}, \tilde{b} \in \mathcal{F}(\mathbb{R})$. Then $\tilde{a} \oplus \tilde{b} \in \mathcal{F}(\mathbb{R})$ and $\tilde{a} \otimes \tilde{b} \in \mathcal{F}(\mathbb{R})$. Moreover, we have the following useful results:
(i) $(\tilde{a} \oplus \tilde{b})_\alpha^L = \tilde{a}_\alpha^L + \tilde{b}_\alpha^L$ and $(\tilde{a} \oplus \tilde{b})_\alpha^U = \tilde{a}_\alpha^U + \tilde{b}_\alpha^U$ for $\alpha \in [0, 1]$.
(ii) $(\tilde{a} \otimes \tilde{b})_\alpha^L = \min\{\tilde{a}_\alpha^L\tilde{b}_\alpha^L, \tilde{a}_\alpha^L\tilde{b}_\alpha^U, \tilde{a}_\alpha^U\tilde{b}_\alpha^L, \tilde{a}_\alpha^U\tilde{b}_\alpha^U\}$ and $(\tilde{a} \otimes \tilde{b})_\alpha^U = \max\{\tilde{a}_\alpha^L\tilde{b}_\alpha^L, \tilde{a}_\alpha^L\tilde{b}_\alpha^U, \tilde{a}_\alpha^U\tilde{b}_\alpha^L, \tilde{a}_\alpha^U\tilde{b}_\alpha^U\}$ for $\alpha \in [0, 1]$.
(iii) $(\lambda\tilde{a})_\alpha^L = \lambda \cdot \tilde{a}_\alpha^L$ and $(\lambda\tilde{a})_\alpha^U = \lambda \cdot \tilde{a}_\alpha^U$ for $\lambda > 0$ and $\alpha \in [0, 1]$.
(iv) $(\lambda\tilde{a})_\alpha^L = \lambda \cdot \tilde{a}_\alpha^U$ and $(\lambda\tilde{a})_\alpha^U = \lambda \cdot \tilde{a}_\alpha^L$ for $\lambda < 0$ and $\alpha \in [0, 1]$.

In general, $\mathcal{F}(\mathbb{R})$ is not a vector space under the addition and scalar multiplication described in (1) and (2), respectively. However, Puri and Ralescu [20] and Kaleva [9] proved that $\mathcal{F}(\mathbb{R})$ can be embedded into a normed space $(N, \|\cdot\|)$ isometrically and isomorphically. In other words, if $\pi$ is the embedding function $\pi : \mathcal{F}(\mathbb{R}) \to N$, then
(i) $\pi(\tilde{a} \oplus \tilde{b}) = \pi(\tilde{a}) + \pi(\tilde{b})$;
(ii) $\pi(\lambda\tilde{a}) = \lambda\pi(\tilde{a})$ for $\lambda \geq 0$;
(iii) $d(\tilde{a}, \tilde{b}) = \|\pi(\tilde{a}) - \pi(\tilde{b})\|$,
where the metric $d(\cdot, \cdot)$ on $\mathcal{F}(\mathbb{R})$ is defined by
$$ d(\tilde{a}, \tilde{b}) = \sup_{0 < \alpha \leq 1} d_H(\tilde{a}_\alpha, \tilde{b}_\alpha), $$
and $d_H$ denotes the Hausdorff metric on the closed intervals of $\mathbb{R}$.

For $\tilde{a}, \tilde{b} \in \mathcal{F}(\mathbb{R})$, the Hukuhara difference $\tilde{b} \ominus_H \tilde{a}$ is the fuzzy number $\tilde{c}$, if it exists, such that $\tilde{b} = \tilde{a} \oplus \tilde{c}$. A linear defuzzification function is a function $\eta : \mathcal{F}(\mathbb{R}) \to \mathbb{R}$ satisfying $\eta(\tilde{a} \oplus \tilde{b}) = \eta(\tilde{a}) + \eta(\tilde{b})$ and $\eta(\lambda\tilde{a}) = \lambda \cdot \eta(\tilde{a})$ for $\lambda \in \mathbb{R}$; in particular $\eta(\tilde{0}) = 0$. We write $\mathcal{F}^n(\mathbb{R})$ for the set of all $n$-tuples $\tilde{\mathbf{u}} = (\tilde{u}^1, \ldots, \tilde{u}^n)$ of fuzzy numbers and consider the sets
$$ C^1 = \{\tilde{\mathbf{u}} \in \mathcal{F}^n(\mathbb{R}) : \eta(\tilde{u}^j) \geq 0 \text{ for all } j = 1, \ldots, n\} \quad \text{and} \quad C^2 = \{\tilde{\mathbf{u}} \in \mathcal{F}^n(\mathbb{R}) : \tilde{u}^j \text{ is nonnegative for all } j = 1, \ldots, n\}. $$
For $\tilde{\mathbf{u}}, \tilde{\mathbf{v}} \in \mathcal{F}^n(\mathbb{R})$ we write $\tilde{\mathbf{u}} \preceq^1 \tilde{\mathbf{v}}$ (resp. $\tilde{\mathbf{u}} \preceq^2 \tilde{\mathbf{v}}$) if the Hukuhara differences $\tilde{v}^j \ominus_H \tilde{u}^j$ exist for all $j = 1, \ldots, n$ and $\tilde{\mathbf{v}} \ominus_H \tilde{\mathbf{u}} \in C^1$ (resp. $\tilde{\mathbf{v}} \ominus_H \tilde{\mathbf{u}} \in C^2$).

These binary relations are compatible with scalar multiplication. Suppose that $\tilde{\mathbf{u}} \preceq^1 \tilde{\mathbf{v}}$ and $\lambda > 0$. Then $\tilde{p}^j = \tilde{v}^j \ominus_H \tilde{u}^j$ exists and $\eta(\tilde{v}^j) \geq \eta(\tilde{u}^j)$; therefore $\tilde{v}^j = \tilde{p}^j \oplus \tilde{u}^j$. Using Proposition 2.1, we have $\lambda\tilde{v}^j = \lambda\tilde{p}^j \oplus \lambda\tilde{u}^j$, i.e. $\lambda\tilde{p}^j = \lambda\tilde{v}^j \ominus_H \lambda\tilde{u}^j$ exists for all $j = 1, \ldots, n$. Now $\eta(\lambda\tilde{v}^j \ominus_H \lambda\tilde{u}^j) = \lambda\eta(\tilde{v}^j) - \lambda\eta(\tilde{u}^j) \geq 0$ for all $j = 1, \ldots, n$, i.e. $\lambda\tilde{\mathbf{v}} \ominus_H \lambda\tilde{\mathbf{u}} \in C^1$. This shows that $\lambda\tilde{\mathbf{u}} \preceq^1 \lambda\tilde{\mathbf{v}}$. For the binary relation "$\preceq^2$", the same result follows from Propositions 2.1–2.3 by similar arguments.

Remark 2.4. Although the binary relations "$\preceq^1$" and "$\preceq^2$" satisfy axioms (1)–(4) of Definition 2.4, we cannot say that "$\preceq^1$" and "$\preceq^2$" are partial orderings on $\mathcal{F}^n(\mathbb{R})$, since $\mathcal{F}^n(\mathbb{R})$ is not a real vector space in general. However, if we regard $\mathcal{F}^n(\mathbb{R})$ as a set, then "$\preceq^1$" and "$\preceq^2$" are partial orderings on $\mathcal{F}^n(\mathbb{R})$. In other words, if the real vector space $V$ in Definition 2.4 is relaxed to a set (with some defined addition and scalar multiplication), then "$\preceq^1$" and "$\preceq^2$" are partial orderings on $\mathcal{F}^n(\mathbb{R})$ under this relaxed definition of partial ordering. Sometimes, if $V$ is a set, we say that "$\preceq$" is a partial ordering on $V$ if conditions (1) and (2) of Definition 2.4 are satisfied; we do not have to check conditions (3) and (4), since $V$ is not a vector space.

Proposition 2.5. The following statements hold true:
(i) Let $\tilde{\mathbf{u}}, \tilde{\mathbf{v}} \in C^1$. Then $\lambda\tilde{\mathbf{u}} \in C^1$ for $\lambda > 0$ and $\lambda\tilde{\mathbf{u}} \oplus (1-\lambda)\tilde{\mathbf{v}} \in C^1$ for $\lambda \in (0, 1)$.
(ii) Let $\tilde{\mathbf{u}}, \tilde{\mathbf{v}} \in C^2$. Then $\lambda\tilde{\mathbf{u}} \in C^2$ for $\lambda > 0$ and $\lambda\tilde{\mathbf{u}} \oplus (1-\lambda)\tilde{\mathbf{v}} \in C^2$ for $\lambda \in (0, 1)$.

Proof. It is easy to see that $\lambda\tilde{u}^j$ and $\lambda\tilde{u}^j \oplus (1-\lambda)\tilde{v}^j$ belong to $\mathcal{F}(\mathbb{R})$ for all $j = 1, \ldots, n$. Since $\eta$ is a linear defuzzification function, we have $\eta(\lambda\tilde{u}^j) = \lambda \cdot \eta(\tilde{u}^j) \geq 0$ and $\eta(\lambda\tilde{u}^j \oplus (1-\lambda)\tilde{v}^j) = \lambda \cdot \eta(\tilde{u}^j) + (1-\lambda) \cdot \eta(\tilde{v}^j) \geq 0$ for all $j = 1, \ldots, n$. This proves (i). The results of (ii) follow from Proposition 2.1 and Remark 2.3 immediately.

Remark 2.5. Proposition 2.5 shows that $C^1$ and $C^2$ have the structure of a convex cone in some sense. However, we cannot say that $C^1$ and $C^2$ are convex cones, since $\mathcal{F}^n(\mathbb{R})$ is not a vector space. Of course, we may say that $C^1$ and $C^2$ are convex cones in $\mathcal{F}^n(\mathbb{R})$ if the definition of convex cone is taken in a set instead of a real vector space.

Now we consider the product vector space $N^n = N \times \cdots \times N$ ($n$ times).
Then, from Kreyszig [10, p. 71], $N^n$ is a normed space with norm given by $\|s\| = \max\{\|s^1\|, \ldots, \|s^n\|\}$, where $s = (s^1, \ldots, s^n) \in N^n$. Let $\pi$ be the embedding function given in (4). We define a function $\Pi : \mathcal{F}^n(\mathbb{R}) \to N^n$ by
$$ \Pi(\tilde{\mathbf{u}}) = \big(\pi(\tilde{u}^1), \ldots, \pi(\tilde{u}^n)\big) \qquad (6) $$
for $\tilde{\mathbf{u}} \in \mathcal{F}^n(\mathbb{R})$.

Proposition 2.6. The sets $\Pi(C^1)$ and $\Pi(C^2)$ are convex cones in $N^n$.

Proof. Let $s, t \in \Pi(C^1)$. Then there exist $\tilde{\mathbf{u}}, \tilde{\mathbf{v}} \in C^1$ such that $\pi(\tilde{u}^j) = s^j$ and $\pi(\tilde{v}^j) = t^j$ for all $j = 1, \ldots, n$. We have $\lambda s^j + (1-\lambda)t^j = \lambda \cdot \pi(\tilde{u}^j) + (1-\lambda) \cdot \pi(\tilde{v}^j) = \pi(\lambda\tilde{u}^j \oplus (1-\lambda)\tilde{v}^j)$. From Proposition 2.5, we see that $\lambda s + (1-\lambda)t \in \Pi(C^1)$. This shows that $\Pi(C^1)$ is a convex subset of $N^n$. We also see that $\lambda s \in \Pi(C^1)$ for $\lambda > 0$. Therefore $\Pi(C^1)$ is a convex cone in $N^n$. Similarly, from Proposition 2.5, we see that $\Pi(C^2)$ is a convex cone in $N^n$.

Using Proposition 2.6 and Remark 2.2, we can induce two partial orderings "$\leq^1$" and "$\leq^2$" on $N^n$ from $\Pi(C^1)$ and $\Pi(C^2)$, respectively. Now we present an order preserving property under the function $\Pi$.

Proposition 2.7 (Order Preserving). Let $\tilde{\mathbf{u}}, \tilde{\mathbf{v}} \in \mathcal{F}^n(\mathbb{R})$. Then $\tilde{\mathbf{u}} \preceq^1 \tilde{\mathbf{v}}$ if and only if $\Pi(\tilde{\mathbf{u}}) \leq^1 \Pi(\tilde{\mathbf{v}})$, and $\tilde{\mathbf{u}} \preceq^2 \tilde{\mathbf{v}}$ if and only if $\Pi(\tilde{\mathbf{u}}) \leq^2 \Pi(\tilde{\mathbf{v}})$.
Proof. By Proposition 2.3, we see that $\pi(\tilde{v}^j) - \pi(\tilde{u}^j) = \pi(\tilde{v}^j \ominus_H \tilde{u}^j)$ for all $j = 1, \ldots, n$. It follows that $\Pi(\tilde{\mathbf{v}}) - \Pi(\tilde{\mathbf{u}}) = \Pi(\tilde{\mathbf{v}} \ominus_H \tilde{\mathbf{u}}) \in \Pi(C^1)$, i.e. $\Pi(\tilde{\mathbf{u}}) \leq^1 \Pi(\tilde{\mathbf{v}})$. Conversely, if $\Pi(\tilde{\mathbf{u}}) \leq^1 \Pi(\tilde{\mathbf{v}})$, i.e. $\Pi(\tilde{\mathbf{v}}) - \Pi(\tilde{\mathbf{u}}) \in \Pi(C^1)$, then there exists $\tilde{\mathbf{w}} \in C^1$ such that $\Pi(\tilde{\mathbf{v}}) - \Pi(\tilde{\mathbf{u}}) = \Pi(\tilde{\mathbf{w}})$. It follows that $\pi(\tilde{v}^j) = \pi(\tilde{u}^j) + \pi(\tilde{w}^j) = \pi(\tilde{u}^j \oplus \tilde{w}^j)$ for all $j = 1, \ldots, n$. Since $\pi$ is one-to-one, we have $\tilde{v}^j = \tilde{u}^j \oplus \tilde{w}^j$. This shows that $\tilde{w}^j = \tilde{v}^j \ominus_H \tilde{u}^j$ exists for all $j = 1, \ldots, n$, i.e. $\tilde{\mathbf{v}} \ominus_H \tilde{\mathbf{u}} = \tilde{\mathbf{w}} \in C^1$. It also means that $\tilde{\mathbf{u}} \preceq^1 \tilde{\mathbf{v}}$. For the case of "$\leq^2$", we obtain the results similarly. This completes the proof.

In order to interpret the ordering concept for fuzzy constraint function values, we consider the following two sets:
$$ C_\pi^1 = \{\tilde{a} \in \mathcal{F}(\mathbb{R}) : \eta(\tilde{a}) \geq 0\} \quad \text{and} \quad C_\pi^2 = \{\tilde{a} \in \mathcal{F}(\mathbb{R}) : \tilde{a} \text{ is nonnegative}\}. $$

Remark 2.6. Let $s \in N^n$. Then $s \in \Pi(C^1)$ if and only if $s^j \in \pi(C_\pi^1)$ for all $j = 1, \ldots, n$, and $s \in \Pi(C^2)$ if and only if $s^j \in \pi(C_\pi^2)$ for all $j = 1, \ldots, n$, where $\pi$ is the embedding function given in (4).

Using arguments similar to those in Proposition 2.6, we can show that $\pi(C_\pi^1)$ and $\pi(C_\pi^2)$ are convex cones in $N$. Therefore, we can induce two partial orderings "$\leq_\pi^1$" and "$\leq_\pi^2$" on $N$ from $\pi(C_\pi^1)$ and $\pi(C_\pi^2)$, respectively. According to Definition 2.6, for $\tilde{a}, \tilde{b} \in \mathcal{F}(\mathbb{R})$, we can define $\tilde{a} \preceq_\pi^1 \tilde{b}$ (resp. $\tilde{a} \preceq_\pi^2 \tilde{b}$) if the Hukuhara difference $\tilde{b} \ominus_H \tilde{a}$ exists and $\tilde{b} \ominus_H \tilde{a} \in C_\pi^1$ (resp. $\tilde{b} \ominus_H \tilde{a} \in C_\pi^2$). We also have an order preserving property under the function $\pi$.

Proposition 2.8 (Order Preserving). Let $\tilde{a}, \tilde{b} \in \mathcal{F}(\mathbb{R})$. Then $\tilde{a} \preceq_\pi^1 \tilde{b}$ if and only if $\pi(\tilde{a}) \leq_\pi^1 \pi(\tilde{b})$, and $\tilde{a} \preceq_\pi^2 \tilde{b}$ if and only if $\pi(\tilde{a}) \leq_\pi^2 \pi(\tilde{b})$.

Proof. Using arguments similar to those in Proposition 2.7, we complete the proof.
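The orderings $\preceq_\pi^1$ and $\preceq_\pi^2$ only involve $\alpha$-level endpoints, a Hukuhara difference and the value of $\eta$. The sketch below restricts attention to triangular fuzzy numbers and uses $\eta(\tilde{a}) = \int_0^1 \tfrac{1}{2}(\tilde{a}_\alpha^L + \tilde{a}_\alpha^U)\,\mathrm{d}\alpha$ as one plausible linear defuzzification; this particular $\eta$, the triple representation, and the reading of "nonnegative" as "0-level set contained in $[0, \infty)$" are illustrative assumptions, not the paper's own construction.

```python
# Hedged sketch: triangular fuzzy numbers as triples (l, m, u) with l <= m <= u,
# a Hukuhara difference, a sample linear defuzzification eta, and the two orderings.
from typing import Optional, Tuple

Tri = Tuple[float, float, float]


def hukuhara_diff(b: Tri, a: Tri) -> Optional[Tri]:
    # candidate difference has alpha-cuts [b_alpha^L - a_alpha^L, b_alpha^U - a_alpha^U];
    # it is a genuine (triangular) fuzzy number only if the endpoints remain ordered
    c = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    return c if c[0] <= c[1] <= c[2] else None


def eta(a: Tri) -> float:
    # sample linear defuzzification: integral over alpha of the alpha-cut midpoints
    return (a[0] + 2 * a[1] + a[2]) / 4.0


def leq1(a: Tri, b: Tri) -> bool:
    # first ordering: b (-)_H a exists and lies in C_pi^1, i.e. eta(b (-)_H a) >= 0
    c = hukuhara_diff(b, a)
    return c is not None and eta(c) >= 0


def leq2(a: Tri, b: Tri) -> bool:
    # second ordering: b (-)_H a exists and is nonnegative (0-level set in [0, infinity))
    c = hukuhara_diff(b, a)
    return c is not None and c[0] >= 0


one, two = (0.0, 1.0, 2.0), (1.0, 2.0, 3.0)
print(leq1(one, two), leq2(one, two))   # True True, since the difference is (1, 1, 1)
```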
3. Multiobjective programming problems with fuzzy coefficients

Now we consider the following two multiobjective programming problems with fuzzy coefficients:

(FMOP1)  min  $\tilde{\mathbf{f}}(\mathbf{x}) = (\tilde{f}_1(\mathbf{x}), \ldots, \tilde{f}_n(\mathbf{x}))$
         subject to  $\tilde{g}_i(\mathbf{x}) \preceq_\pi^1 \tilde{k}_i$, $i = 1, 2, \ldots, m$, $\mathbf{x} \in \mathbb{R}_+^n$,

and

(FMOP2)  min  $\tilde{\mathbf{f}}(\mathbf{x}) = (\tilde{f}_1(\mathbf{x}), \ldots, \tilde{f}_n(\mathbf{x}))$
         subject to  $\tilde{g}_i(\mathbf{x}) \preceq_\pi^2 \tilde{k}_i$, $i = 1, 2, \ldots, m$, $\mathbf{x} \in \mathbb{R}_+^n$,

where the decision variables $x_i$, $i = 1, \ldots, n$, are assumed to be nonnegative real variables. The maximization versions of these problems can be defined and discussed similarly.

We next introduce the so-called triangular fuzzy numbers. The membership function of a triangular fuzzy number $\tilde{a} = (a^L, a, a^U)$ is defined by
$$ \xi_{\tilde{a}}(r) = \begin{cases} (r - a^L)/(a - a^L) & \text{if } a^L \leq r \leq a \\ (a^U - r)/(a^U - a) & \text{if } a < r \leq a^U \\ 0 & \text{otherwise.} \end{cases} $$
The $\alpha$-level set of $\tilde{a}$ is then
$$ \tilde{a}_\alpha = \big[(1-\alpha)a^L + \alpha a, \; (1-\alpha)a^U + \alpha a\big]; $$
that is,
$$ \tilde{a}_\alpha^L = (1-\alpha)a^L + \alpha a \quad \text{and} \quad \tilde{a}_\alpha^U = (1-\alpha)a^U + \alpha a. $$
We also see that $-\tilde{a} = (-a^U, -a, -a^L)$.

The fuzzy objective functions $\tilde{f}_j$, $j = 1, \ldots, n$, and the fuzzy constraint functions $\tilde{g}_i$, $i = 1, \ldots, m$, are functions with fuzzy coefficients. A function with fuzzy coefficients looks like
$$ \tilde{5}x_1x_2^2 \oplus \tilde{8}x_2x_3 \oplus \tilde{7}x_2^5x_3x_5^2 \oplus \tilde{4}x_1^3x_3^3x_4 \oplus \tilde{6}x_2^2x_4^2x_5, \qquad (7) $$
which is interpreted as
$$ \tilde{5} \otimes \tilde{1}_{\{x_1x_2^2\}} \oplus \tilde{8} \otimes \tilde{1}_{\{x_2x_3\}} \oplus \tilde{7} \otimes \tilde{1}_{\{x_2^5x_3x_5^2\}} \oplus \tilde{4} \otimes \tilde{1}_{\{x_1^3x_3^3x_4\}} \oplus \tilde{6} \otimes \tilde{1}_{\{x_2^2x_4^2x_5\}} $$
by looking at (1) and (2) and Proposition 2.1.

Example 3.1. We consider the following bi-objective programming problem with fuzzy coefficients:

min  $\tilde{\mathbf{f}}(x_1, x_2) = (\tilde{f}_1(x_1, x_2), \tilde{f}_2(x_1, x_2)) = (\tilde{1}x_1^2 \oplus \tilde{1}x_2^2 \oplus \tilde{1}, \; \tilde{1}x_1^2 \oplus \tilde{1}x_2^2 \oplus \tilde{2})$
subject to
$(-\tilde{1})x_1 \oplus (-\tilde{1})x_2 \preceq_\pi^2 (-\tilde{1})$
$(-\tilde{6})x_1 \oplus (-\tilde{2})x_2 \preceq_\pi^2 (-\tilde{12})$
$x_1, x_2 \geq 0$,
where $\tilde{1} = (0, 1, 2)$, $\tilde{2} = (1, 2, 3)$, $\tilde{6} = (5, 6, 7)$ and $\tilde{12} = (11, 12, 13)$ are triangular fuzzy numbers.

Each problem will be solved with respect to its corresponding solution concept. The partial orderings $\preceq^1$ and $\preceq^2$ in Definition 2.6 will be used to handle the fuzzy multiobjective function values $(\tilde{f}_1(\mathbf{x}), \ldots, \tilde{f}_n(\mathbf{x}))$ in problems (FMOP1) and (FMOP2), respectively. Let us write $\tilde{\mathbf{f}}(\mathbf{x}) = (\tilde{f}_1(\mathbf{x}), \ldots, \tilde{f}_n(\mathbf{x}))$. Then
$$ (\Pi \circ \tilde{\mathbf{f}})(\mathbf{x}) = \Pi(\tilde{\mathbf{f}}(\mathbf{x})) = \big(\pi(\tilde{f}_1(\mathbf{x})), \ldots, \pi(\tilde{f}_n(\mathbf{x}))\big) = \big((\pi \circ \tilde{f}_1)(\mathbf{x}), \ldots, (\pi \circ \tilde{f}_n)(\mathbf{x})\big). $$
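To make Example 3.1 concrete, the sketch below evaluates the fuzzy objective values $\tilde{f}_1(x_1, x_2)$ and $\tilde{f}_2(x_1, x_2)$ at the point $(x_1, x_2) = (1, 2)$ using the $\alpha$-level arithmetic of Proposition 2.1, which is componentwise for triangular numbers; the triple representation and the helper functions are illustrative assumptions.

```python
# Hedged sketch: the fuzzy objective values of Example 3.1 at (x1, x2) = (1, 2),
# computed with componentwise triangular arithmetic (Proposition 2.1).
def add(a, b):        # alpha-cut endpoints add, so triangular numbers add componentwise
    return tuple(ai + bi for ai, bi in zip(a, b))


def scale(c, a):      # multiplication by a nonnegative crisp value c
    return tuple(c * ai for ai in a)


one, two = (0, 1, 2), (1, 2, 3)
x1, x2 = 1.0, 2.0
f1 = add(add(scale(x1 ** 2, one), scale(x2 ** 2, one)), one)   # 1~ x1^2 (+) 1~ x2^2 (+) 1~
f2 = add(add(scale(x1 ** 2, one), scale(x2 ** 2, one)), two)   # 1~ x1^2 (+) 1~ x2^2 (+) 2~
print(f1, f2)   # (0.0, 6.0, 12.0) and (1.0, 7.0, 13.0)
```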
Proposition 2.7 will be used to handle the fuzzy multiobjective function values and Proposition 2.8 will be used to handle the fuzzy constraint function values. Therefore, applying the embedding function $\pi$ to problems (FMOP1) and (FMOP2), it is reasonable to consider the following two corresponding multiobjective programming problems (MOP1) and (MOP2):

(MOP1)  min  $(\Pi \circ \tilde{\mathbf{f}})(\mathbf{x}) = \big((\pi \circ \tilde{f}_1)(\mathbf{x}), \ldots, (\pi \circ \tilde{f}_n)(\mathbf{x})\big)$
        subject to  $(\pi \circ \tilde{g}_i)(\mathbf{x}) \leq_\pi^1 \pi(\tilde{k}_i)$, $i = 1, 2, \ldots, m$, $\mathbf{x} \in \mathbb{R}_+^n$,

and

(MOP2)  min  $(\Pi \circ \tilde{\mathbf{f}})(\mathbf{x}) = \big((\pi \circ \tilde{f}_1)(\mathbf{x}), \ldots, (\pi \circ \tilde{f}_n)(\mathbf{x})\big)$
        subject to  $(\pi \circ \tilde{g}_i)(\mathbf{x}) \leq_\pi^2 \pi(\tilde{k}_i)$, $i = 1, 2, \ldots, m$, $\mathbf{x} \in \mathbb{R}_+^n$.

Since $(\Pi \circ \tilde{\mathbf{f}})(\mathbf{x}) \in N^n$, the partial orderings "$\leq^1$" and "$\leq^2$" induced from the convex cones $\Pi(C^1)$ and $\Pi(C^2)$, respectively, will be used to handle the multiobjective function values in problems (MOP1) and (MOP2).

Example 3.2. Continuing Example 3.1, since $\pi(\tilde{a} \oplus \tilde{b}) = \pi(\tilde{a}) + \pi(\tilde{b})$ and $\pi(\lambda\tilde{a}) = \lambda\pi(\tilde{a})$ for $\lambda \geq 0$, the corresponding bi-objective programming problem is given by

min  $(\Pi \circ \tilde{\mathbf{f}})(x_1, x_2) = \big((\pi \circ \tilde{f}_1)(x_1, x_2), (\pi \circ \tilde{f}_2)(x_1, x_2)\big) = \big(\pi(\tilde{1})x_1^2 + \pi(\tilde{1})x_2^2 + \pi(\tilde{1}), \; \pi(\tilde{1})x_1^2 + \pi(\tilde{1})x_2^2 + \pi(\tilde{2})\big)$
subject to
$\pi(-\tilde{1})x_1 + \pi(-\tilde{1})x_2 \leq_\pi^2 \pi(-\tilde{1})$
$\pi(-\tilde{6})x_1 + \pi(-\tilde{2})x_2 \leq_\pi^2 \pi(-\tilde{12})$
$x_1, x_2 \geq 0$.
We remark that $\pi(-\tilde{12}) \neq -\pi(\tilde{12})$ in general.

Let us recall some solution concepts. A convex cone $C_V$ defining a partial ordering as described before in the real vector space $V$ is also called an ordering cone. Let $S$ be any subset of $V$ equipped with a partial ordering "$\leq$". Referring to Jahn [8], an element $x^* \in S$ is called a minimal element of $S$ if $x \leq x^*$ for $x \in S$ implies $x^* \leq x$. If the partial ordering "$\leq$" is regarded as an ordering cone $C_V$, then an element $x^* \in S$ is a minimal element of the set $S$ if $(\{x^*\} + (-C_V)) \cap S \subseteq \{x^*\} + C_V$. Similarly, an element $x^* \in S$ is called a maximal element of $S$ if $x^* \leq x$ for $x \in S$ implies $x \leq x^*$; equivalently, $x^* \in S$ is a maximal element of the set $S$ if $(\{x^*\} + C_V) \cap S \subseteq \{x^*\} + (-C_V)$.

Definition 3.1. Let $\eta$ be a linear defuzzification function. We say that $\eta$ is a canonical linear defuzzification function if $\eta(\tilde{a}) = 0$ implies $\tilde{a} = \tilde{0}$.

Proposition 3.1. Let $\Pi$ be the function given in (6). Then the following statements hold true.
(i) If $\eta$ is a canonical linear defuzzification function, then the set $\Pi(C^1)$ is a pointed convex cone in $N^n$.
(ii) The set $\Pi(C^2)$ is a pointed convex cone in $N^n$.

Proof. From Proposition 2.6, it is enough to show that
$$ \Pi(C^1) \cap \big(-\Pi(C^1)\big) = \{(\pi(\tilde{0}), \ldots, \pi(\tilde{0}))\} = \Pi(C^2) \cap \big(-\Pi(C^2)\big), $$
where $(\pi(\tilde{0}), \ldots, \pi(\tilde{0}))$ is the zero element of the normed space $N^n$.
(i) Let $s \in \Pi(C^1) \cap (-\Pi(C^1))$. Then $s, -s \in \Pi(C^1)$. Therefore, there exist $\tilde{\mathbf{u}}, \tilde{\mathbf{v}} \in C^1$ such that $\Pi(\tilde{\mathbf{u}}) = s$ and $\Pi(\tilde{\mathbf{v}}) = -s$, i.e. $\pi(\tilde{u}^j) = s^j$ and $\pi(\tilde{v}^j) = -s^j$ for all $j = 1, \ldots, n$. By adding them together, we have $\pi(\tilde{u}^j \oplus \tilde{v}^j) = \pi(\tilde{u}^j) + \pi(\tilde{v}^j) = \pi(\tilde{0})$ (note that $\pi(\tilde{0})$ is the zero element of the normed space $N$). Since $\pi$ is one-to-one, we see that $\tilde{u}^j \oplus \tilde{v}^j = \tilde{0}$. Then we have $0 = \eta(\tilde{0}) = \eta(\tilde{u}^j \oplus \tilde{v}^j) = \eta(\tilde{u}^j) + \eta(\tilde{v}^j)$. We also have $\eta(\tilde{u}^j) \geq 0$ and $\eta(\tilde{v}^j) \geq 0$, since $\tilde{\mathbf{u}}, \tilde{\mathbf{v}} \in C^1$. Therefore we obtain $\eta(\tilde{u}^j) = 0 = \eta(\tilde{v}^j)$, which shows that $\tilde{u}^j = \tilde{0} = \tilde{v}^j$ for all $j = 1, \ldots, n$, since $\eta$ is a canonical linear defuzzification function on $\mathcal{F}(\mathbb{R})$. We conclude that $s = (\pi(\tilde{0}), \ldots, \pi(\tilde{0}))$.
(ii) For the case of $\Pi(C^2)$, from the proof of (i) we also obtain $\tilde{u}^j \oplus \tilde{v}^j = \tilde{0}$ for all $j = 1, \ldots, n$. By Proposition 2.1, we have $0 = (\tilde{u}^j)_\alpha^L + (\tilde{v}^j)_\alpha^L = (\tilde{u}^j)_\alpha^U + (\tilde{v}^j)_\alpha^U$. Since $\tilde{\mathbf{u}}, \tilde{\mathbf{v}} \in C^2$, we also have $(\tilde{u}^j)_\alpha^L \geq 0$, $(\tilde{v}^j)_\alpha^L \geq 0$, $(\tilde{u}^j)_\alpha^U \geq 0$ and $(\tilde{v}^j)_\alpha^U \geq 0$ by Remark 2.3. Therefore we obtain $0 = (\tilde{u}^j)_\alpha^L = (\tilde{v}^j)_\alpha^L = (\tilde{u}^j)_\alpha^U = (\tilde{v}^j)_\alpha^U$ for all $\alpha \in [0, 1]$ and all $j = 1, \ldots, n$. This completes the proof.

Now we let
$$ X^1 = \{\mathbf{x} \in \mathbb{R}_+^n : (\pi \circ \tilde{g}_i)(\mathbf{x}) \leq_\pi^1 \pi(\tilde{k}_i), \; i = 1, \ldots, m\} $$
and
$$ S^1 = \{(\Pi \circ \tilde{\mathbf{f}})(\mathbf{x}) : \mathbf{x} \in X^1\}, \qquad (8) $$
and
$$ X^2 = \{\mathbf{x} \in \mathbb{R}_+^n : (\pi \circ \tilde{g}_i)(\mathbf{x}) \leq_\pi^2 \pi(\tilde{k}_i), \; i = 1, \ldots, m\} $$
and
$$ S^2 = \{(\Pi \circ \tilde{\mathbf{f}})(\mathbf{x}) : \mathbf{x} \in X^2\}. \qquad (9) $$

Proposition 2.8 says that problems (FMOP1) and (MOP1) have identical feasible sets; similarly, problems (FMOP2) and (MOP2) have identical feasible sets. Since $\pi$ is one-to-one, we propose the following definition.

Definition 3.2. Consider the convex cones $\Pi(C^1)$ and $\Pi(C^2)$. We say that $\mathbf{x}^*$ is a Pareto $C^1$-optimal solution (resp. Pareto $C^2$-optimal solution) of problem (FMOP1) (resp. (FMOP2)) if $(\Pi \circ \tilde{\mathbf{f}})(\mathbf{x}^*)$ is a minimal element of the set $S^1$ (resp. $S^2$) under the convex cone $\Pi(C^1)$ (resp. $\Pi(C^2)$).

In the sequel, we apply the technique of scalarization to obtain the Pareto optimal solutions of problems (FMOP1) and (FMOP2).
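Definition 3.2 reduces Pareto $C^1$- and $C^2$-optimality to the minimal-element notion recalled above. For a finite set of already-embedded objective vectors, and with the nonnegative orthant of $\mathbb{R}^n$ standing in for the ordering cone, the set-inclusion test $(\{x^*\} + (-C_V)) \cap S \subseteq \{x^*\} + C_V$ takes only a few lines; the sketch is purely illustrative, since the cones $\Pi(C^1)$ and $\Pi(C^2)$ of the paper live in the normed space $N^n$.

```python
# Hedged sketch: minimal elements of a finite set S with respect to an ordering cone C,
# specialised to C = the nonnegative orthant of R^n (the usual Pareto cone).
def in_cone(v):
    return all(component >= 0 for component in v)


def is_minimal(x_star, S):
    # x* is minimal if every x in S with x <= x* (i.e. x* - x in C) also satisfies x* <= x
    for x in S:
        if in_cone([a - b for a, b in zip(x_star, x)]) and \
           not in_cone([b - a for a, b in zip(x_star, x)]):
            return False
    return True


S = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
print([p for p in S if is_minimal(p, S)])   # the first three points are minimal (Pareto optimal)
```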
4. Scalarization

We denote by $(N^n)'$ the set of all linear functionals from $N^n$ to $\mathbb{R}$. Then the set
$$ C^1_{(N^n)'} = \{\phi \in (N^n)' : \phi(s) \geq 0 \text{ for all } s \in \Pi(C^1)\} $$
is also a convex cone and is called the dual cone of $\Pi(C^1)$. The set defined by
$$ (C^1)^\circ_{(N^n)'} = \{\phi \in (N^n)' : \phi(s) > 0 \text{ for all } s \in \Pi(C^1) \setminus \{\Pi(\tilde{0}, \ldots, \tilde{0})\}\} $$
is called the quasi-interior of the dual cone of $\Pi(C^1)$, where $\Pi(\tilde{0}, \ldots, \tilde{0})$ is the zero element of the normed space $N^n$. Similarly, we can also define
$$ C^2_{(N^n)'} = \{\phi \in (N^n)' : \phi(s) \geq 0 \text{ for all } s \in \Pi(C^2)\} $$
and
$$ (C^2)^\circ_{(N^n)'} = \{\phi \in (N^n)' : \phi(s) > 0 \text{ for all } s \in \Pi(C^2) \setminus \{\Pi(\tilde{0}, \ldots, \tilde{0})\}\}. $$
The linear functional $\phi$ defined above will be used to handle the multiobjective function values in problems (MOP1) and (MOP2). In order to handle the constraint function values in problems (MOP1) and (MOP2), we need to consider the set $N'$ of all linear functionals from $N$ to $\mathbb{R}$. Therefore, we similarly adopt the following notation:
$$ C^1_{N'} = \{\phi_\pi \in N' : \phi_\pi(s) \geq 0 \text{ for all } s \in \pi(C_\pi^1)\}, \qquad (C^1)^\circ_{N'} = \{\phi_\pi \in N' : \phi_\pi(s) > 0 \text{ for all } s \in \pi(C_\pi^1) \setminus \{\pi(\tilde{0})\}\}, $$
$$ C^2_{N'} = \{\phi_\pi \in N' : \phi_\pi(s) \geq 0 \text{ for all } s \in \pi(C_\pi^2)\}, \qquad (C^2)^\circ_{N'} = \{\phi_\pi \in N' : \phi_\pi(s) > 0 \text{ for all } s \in \pi(C_\pi^2) \setminus \{\pi(\tilde{0})\}\}. $$
Then we have the following interesting results.

Proposition 4.1. We consider problems (MOP1) and (MOP2). If $\phi_\pi \in C^1_{N'}$ (resp. $\phi_\pi \in C^2_{N'}$), then $(\pi \circ \tilde{g}_i)(\mathbf{x}) \leq_\pi^1 \pi(\tilde{k}_i)$ (resp. $(\pi \circ \tilde{g}_i)(\mathbf{x}) \leq_\pi^2 \pi(\tilde{k}_i)$) if and only if $\phi_\pi((\pi \circ \tilde{g}_i)(\mathbf{x})) \leq \phi_\pi(\pi(\tilde{k}_i))$ for $i = 1, 2, \ldots, m$.

Proof. We see that $\pi(\tilde{k}_i) - (\pi \circ \tilde{g}_i)(\mathbf{x}) \in \pi(C_\pi^1)$ if and only if $\phi_\pi(\pi(\tilde{k}_i) - (\pi \circ \tilde{g}_i)(\mathbf{x})) \geq 0$, i.e. $\phi_\pi(\pi(\tilde{k}_i)) \geq \phi_\pi((\pi \circ \tilde{g}_i)(\mathbf{x}))$. For the case of "$\leq_\pi^2$", we obtain the same result similarly. This completes the proof.

In order to transform problems (FMOP1) and (FMOP2) into conventional optimization problems, we need to determine more specific linear functionals $\phi$ and $\phi_\pi$. As described above, the set $\mathcal{F}(\mathbb{R})$ can be embedded into the normed space $N$ that consists of the equivalence classes $[[\tilde{a}, \tilde{b}]]$ for $\tilde{a}, \tilde{b} \in \mathcal{F}(\mathbb{R})$. Let $s \in N^n$. Then $s = (s^1, \ldots, s^n) = ([[\tilde{u}^1, \tilde{v}^1]], \ldots, [[\tilde{u}^n, \tilde{v}^n]])$ for some $\tilde{u}^j, \tilde{v}^j \in \mathcal{F}(\mathbb{R})$, $j = 1, \ldots, n$. Let $\mathbf{w} = (w^1, \ldots, w^n) \in \mathbb{R}^n$ be a positive vector with $w^j > 0$ for all $j = 1, \ldots, n$ and $\sum_{j=1}^n w^j = 1$, and let $\eta$ be a linear defuzzification function. We define a functional $\phi : N^n \to \mathbb{R}$ by
$$ \phi(s) = \phi\big([[\tilde{u}^1, \tilde{v}^1]], \ldots, [[\tilde{u}^n, \tilde{v}^n]]\big) = \sum_{j=1}^n w^j \eta(\tilde{u}^j) - \sum_{j=1}^n w^j \eta(\tilde{v}^j). \qquad (10) $$
We need to check that this functional is well defined. If $(\tilde{a}^j, \tilde{b}^j) \in [[\tilde{u}^j, \tilde{v}^j]]$ for $\tilde{a}^j, \tilde{b}^j \in \mathcal{F}(\mathbb{R})$ and $j = 1, \ldots, n$, then $[[\tilde{u}^j, \tilde{v}^j]] = [[\tilde{a}^j, \tilde{b}^j]]$ for $j = 1, \ldots, n$. In this case, we have to show that $\phi([[\tilde{u}^1, \tilde{v}^1]], \ldots, [[\tilde{u}^n, \tilde{v}^n]]) = \phi([[\tilde{a}^1, \tilde{b}^1]], \ldots, [[\tilde{a}^n, \tilde{b}^n]])$. By definition, we have $(\tilde{u}^j, \tilde{v}^j) \sim (\tilde{a}^j, \tilde{b}^j)$, i.e. $\tilde{u}^j \oplus \tilde{b}^j = \tilde{v}^j \oplus \tilde{a}^j$, which also says
that $\eta(\tilde{u}^j) + \eta(\tilde{b}^j) = \eta(\tilde{v}^j) + \eta(\tilde{a}^j)$ for $j = 1, \ldots, n$. Therefore, we have
$$ \phi\big([[\tilde{u}^1, \tilde{v}^1]], \ldots, [[\tilde{u}^n, \tilde{v}^n]]\big) = \sum_{j=1}^n w^j\big[\eta(\tilde{u}^j) - \eta(\tilde{v}^j)\big] = \sum_{j=1}^n w^j\big[\eta(\tilde{a}^j) - \eta(\tilde{b}^j)\big] = \phi\big([[\tilde{a}^1, \tilde{b}^1]], \ldots, [[\tilde{a}^n, \tilde{b}^n]]\big). $$
This shows that the functional $\phi$ is well defined.

Proposition 4.2. Let $\phi$ be the functional defined in (10). Then the following properties hold true:
(i) $\phi$ is a linear functional on $N^n$, and $\phi \in C^1_{(N^n)'}$.
(ii) If $\eta$ is a canonical linear defuzzification function, then $\phi \in (C^1)^\circ_{(N^n)'}$.
(iii) If $\eta(\tilde{a}) \geq 0$ for nonnegative $\tilde{a}$, then $\phi \in C^2_{(N^n)'}$.
(iv) For nonnegative $\tilde{a}$, if $\eta(\tilde{a}) = 0$ implies $\tilde{a} = \tilde{0}$, then $\phi \in (C^2)^\circ_{(N^n)'}$.

Proof. (i) We have
$$ \phi\big(([[\tilde{u}^1, \tilde{v}^1]], \ldots, [[\tilde{u}^n, \tilde{v}^n]]) + ([[\tilde{a}^1, \tilde{b}^1]], \ldots, [[\tilde{a}^n, \tilde{b}^n]])\big) = \phi\big([[\tilde{u}^1 \oplus \tilde{a}^1, \tilde{v}^1 \oplus \tilde{b}^1]], \ldots, [[\tilde{u}^n \oplus \tilde{a}^n, \tilde{v}^n \oplus \tilde{b}^n]]\big) $$
$$ = \sum_{j=1}^n w^j \eta(\tilde{u}^j \oplus \tilde{a}^j) - \sum_{j=1}^n w^j \eta(\tilde{v}^j \oplus \tilde{b}^j) = \sum_{j=1}^n w^j \eta(\tilde{u}^j) + \sum_{j=1}^n w^j \eta(\tilde{a}^j) - \sum_{j=1}^n w^j \eta(\tilde{v}^j) - \sum_{j=1}^n w^j \eta(\tilde{b}^j) $$
$$ = \phi\big([[\tilde{u}^1, \tilde{v}^1]], \ldots, [[\tilde{u}^n, \tilde{v}^n]]\big) + \phi\big([[\tilde{a}^1, \tilde{b}^1]], \ldots, [[\tilde{a}^n, \tilde{b}^n]]\big) $$
and
$$ \phi\big(\lambda([[\tilde{u}^1, \tilde{v}^1]], \ldots, [[\tilde{u}^n, \tilde{v}^n]])\big) = \begin{cases} \phi\big([[\lambda\tilde{u}^1, \lambda\tilde{v}^1]], \ldots, [[\lambda\tilde{u}^n, \lambda\tilde{v}^n]]\big) & \text{if } \lambda \geq 0 \\ \phi\big([[(-\lambda)\tilde{v}^1, (-\lambda)\tilde{u}^1]], \ldots, [[(-\lambda)\tilde{v}^n, (-\lambda)\tilde{u}^n]]\big) & \text{if } \lambda < 0 \end{cases} $$
$$ = \begin{cases} \sum_{j=1}^n w^j \eta(\lambda\tilde{u}^j) - \sum_{j=1}^n w^j \eta(\lambda\tilde{v}^j) & \text{if } \lambda \geq 0 \\ \sum_{j=1}^n w^j \eta((-\lambda)\tilde{v}^j) - \sum_{j=1}^n w^j \eta((-\lambda)\tilde{u}^j) & \text{if } \lambda < 0 \end{cases} \;=\; \begin{cases} \lambda \cdot \sum_{j=1}^n w^j \eta(\tilde{u}^j) - \lambda \cdot \sum_{j=1}^n w^j \eta(\tilde{v}^j) & \text{if } \lambda \geq 0 \\ (-\lambda) \cdot \sum_{j=1}^n w^j \eta(\tilde{v}^j) - (-\lambda) \cdot \sum_{j=1}^n w^j \eta(\tilde{u}^j) & \text{if } \lambda < 0 \end{cases} $$
$$ = \lambda\,\phi\big([[\tilde{u}^1, \tilde{v}^1]], \ldots, [[\tilde{u}^n, \tilde{v}^n]]\big). $$
This shows that $\phi$ is a linear functional on $N^n$. On the other hand, if $s \in \Pi(C^1)$, then $s^j = \pi(\tilde{u}^j) = [[\tilde{u}^j, \tilde{0}]]$ for some $\tilde{u}^j$ with $\eta(\tilde{u}^j) \geq 0$ for all $j = 1, \ldots, n$. Therefore, by Remark 2.1, we have
$$ \phi(s) = \phi\big([[\tilde{u}^1, \tilde{0}]], \ldots, [[\tilde{u}^n, \tilde{0}]]\big) = \sum_{j=1}^n w^j \eta(\tilde{u}^j) - \sum_{j=1}^n w^j \eta(\tilde{0}) = \sum_{j=1}^n w^j \eta(\tilde{u}^j) \geq 0. $$
This shows that $\phi \in C^1_{(N^n)'}$.
(ii) From the proof of (i), we see that $\phi(s) = \sum_{j=1}^n w^j \eta(\tilde{u}^j)$. Therefore, if $\phi(s) = 0$, then $\eta(\tilde{u}^j) = 0$ for all $j = 1, \ldots, n$, since $\eta(\tilde{u}^j) \geq 0$ and $w^j > 0$ for all $j = 1, \ldots, n$. Therefore we have $\tilde{u}^j = \tilde{0}$ for all $j = 1, \ldots, n$, which shows that $s = ([[\tilde{u}^1, \tilde{0}]], \ldots, [[\tilde{u}^n, \tilde{0}]]) = ([[\tilde{0}, \tilde{0}]], \ldots, [[\tilde{0}, \tilde{0}]])$ is the zero element of $N^n$. In other words, if $s \in \Pi(C^1)$ and $s$ is not the zero element, then $\phi(s) > 0$. This shows that $\phi \in (C^1)^\circ_{(N^n)'}$.
(iii) Using the proof of (i), we also obtain that, for $s \in \Pi(C^2)$, $\phi(s) = \sum_{j=1}^n w^j \eta(\tilde{u}^j)$, where the $\tilde{u}^j$ are nonnegative for all $j = 1, \ldots, n$. From the hypotheses, we see that $\phi(s) \geq 0$, which shows that $\phi \in C^2_{(N^n)'}$.
(iv) Using the proof of (ii), we also obtain $\eta(\tilde{u}^j) = 0$ for nonnegative $\tilde{u}^j$, $j = 1, \ldots, n$, which implies that $\tilde{u}^j = \tilde{0}$, $j = 1, \ldots, n$, by the hypothesis. This shows that $\phi \in (C^2)^\circ_{(N^n)'}$.

Let us define another functional $\phi_\pi : N \to \mathbb{R}$ by
$$ \phi_\pi([[\tilde{a}, \tilde{b}]]) = \eta(\tilde{a}) - \eta(\tilde{b}) \qquad (11) $$
for describing the constraint function values. Then we can also show that $\phi_\pi$ is linear. Moreover, we have
$$ \phi_\pi(\pi(\tilde{a})) = \phi_\pi([[\tilde{a}, \tilde{0}]]) = \eta(\tilde{a}) - \eta(\tilde{0}) = \eta(\tilde{a}). \qquad (12) $$

Example 4.1. Continuing Example 3.2, we can apply $\phi_\pi$ to the objective functions
$$ \big(\phi_\pi((\pi \circ \tilde{f}_1)(x_1, x_2)), \; \phi_\pi((\pi \circ \tilde{f}_2)(x_1, x_2))\big). $$
Since $\phi_\pi$ is linear, using (12), we obtain the corresponding objective functions
$$ \big(\phi_\pi((\pi \circ \tilde{f}_1)(x_1, x_2)), \; \phi_\pi((\pi \circ \tilde{f}_2)(x_1, x_2))\big) = \big(\eta(\tilde{1})x_1^2 + \eta(\tilde{1})x_2^2 + \eta(\tilde{1}), \; \eta(\tilde{1})x_1^2 + \eta(\tilde{1})x_2^2 + \eta(\tilde{2})\big) \equiv (f_1(x_1, x_2), f_2(x_1, x_2)), $$
where $f_1$ and $f_2$ are real-valued functions whose real coefficients are defuzzified from the corresponding fuzzy coefficients.

Now we apply the linear functional $\phi_\pi$ to the objective functions in problems (MOP1) and (MOP2). Inspired by Example 4.1, the linearity of $\phi_\pi$ implies that
$$ \phi_\pi((\pi \circ \tilde{f}_j)(\mathbf{x})) = f_j(\mathbf{x}) \quad \text{for } j = 1, \ldots, n, \qquad (13) $$
where each $f_j(\mathbf{x})$ is a real-valued function with coefficients $\eta(\tilde{a})$ corresponding to the fuzzy coefficients $\tilde{a}$ of $\tilde{f}_j(\mathbf{x})$. Similarly, if we apply the linear functional $\phi_\pi$ to the constraint functions in problems (MOP1) and (MOP2), we also obtain the corresponding real-valued constraint functions
$$ g_i(\mathbf{x}) = \phi_\pi((\pi \circ \tilde{g}_i)(\mathbf{x})) \quad \text{for } i = 1, \ldots, m. \qquad (14) $$
Then we have the following useful results.

Proposition 4.3. If $\phi_\pi \in C^1_{N'}$ (resp. $\phi_\pi \in C^2_{N'}$), then $\tilde{g}_i(\mathbf{x}) \preceq_\pi^1 \tilde{k}_i$ (resp. $\tilde{g}_i(\mathbf{x}) \preceq_\pi^2 \tilde{k}_i$) if and only if $g_i(\mathbf{x}) \leq \eta(\tilde{k}_i)$ for $i = 1, \ldots, m$.

Proof. From Propositions 2.8 and 4.1, we have $\tilde{g}_i(\mathbf{x}) \preceq_\pi^1 \tilde{k}_i$ if and only if $\pi(\tilde{g}_i(\mathbf{x})) \leq_\pi^1 \pi(\tilde{k}_i)$, if and only if $\phi_\pi(\pi(\tilde{g}_i(\mathbf{x}))) \leq \phi_\pi(\pi(\tilde{k}_i))$. From (14) and (12), we see that $g_i(\mathbf{x}) = \phi_\pi(\pi(\tilde{g}_i(\mathbf{x}))) \leq \phi_\pi(\pi(\tilde{k}_i)) = \eta(\tilde{k}_i)$. For the case of "$\preceq_\pi^2$", we obtain the results similarly. This completes the proof.

From Proposition 4.3, the corresponding multiobjective programming problem of (FMOP1) or (FMOP2) is formulated as follows:

(MOP3)  min  $(f_1(\mathbf{x}), \ldots, f_n(\mathbf{x}))$
        subject to  $g_i(\mathbf{x}) \leq \eta(\tilde{k}_i)$, $i = 1, 2, \ldots, m$, $\mathbf{x} \in \mathbb{R}_+^n$.
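As a concrete illustration of (13), (14) and (MOP3), the sketch below defuzzifies the triangular coefficients of Example 3.1 with the sample choice $\eta((l, m, u)) = (l + 2m + u)/4$ and assembles the resulting real-valued objectives and constraint functions; the paper leaves $\eta$ generic, so this particular $\eta$ is an assumption.

```python
# Hedged sketch: defuzzifying the triangular coefficients of Example 3.1 with a sample eta
# to obtain the real-valued objectives f_j and constraint functions g_i of problem (MOP3).
def eta(t):                       # one possible linear defuzzification of a triangular number
    l, m, u = t
    return (l + 2 * m + u) / 4.0


def neg(t):                       # -(l, m, u) = (-u, -m, -l)
    return (-t[2], -t[1], -t[0])


one, two, six, twelve = (0, 1, 2), (1, 2, 3), (5, 6, 7), (11, 12, 13)


def f1(x): return eta(one) * x[0] ** 2 + eta(one) * x[1] ** 2 + eta(one)
def f2(x): return eta(one) * x[0] ** 2 + eta(one) * x[1] ** 2 + eta(two)


# constraints of (MOP3) written as g_i(x) - eta(k_i) <= 0; note that eta(neg(one)) == -eta(one)
def g1(x): return eta(neg(one)) * x[0] + eta(neg(one)) * x[1] - eta(neg(one))
def g2(x): return eta(neg(six)) * x[0] + eta(neg(two)) * x[1] - eta(neg(twelve))


print(f1((1.0, 2.0)), f2((1.0, 2.0)), g1((1.0, 2.0)), g2((1.0, 2.0)))
```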
Example 4.2. Continuing Example 3.2, if we apply $\phi_\pi$ to the objective and constraint functions and use Proposition 4.3, then the corresponding bi-objective programming problem is given by

min  $(f_1(x_1, x_2), f_2(x_1, x_2)) = \big(\eta(\tilde{1})x_1^2 + \eta(\tilde{1})x_2^2 + \eta(\tilde{1}), \; \eta(\tilde{1})x_1^2 + \eta(\tilde{1})x_2^2 + \eta(\tilde{2})\big)$
subject to
$\eta(-\tilde{1})x_1 + \eta(-\tilde{1})x_2 \leq \eta(-\tilde{1})$
$\eta(-\tilde{6})x_1 + \eta(-\tilde{2})x_2 \leq \eta(-\tilde{12})$
$x_1, x_2 \geq 0$.
Since $\eta$ is linear, we have $\eta(-\tilde{1}) = -\eta(\tilde{1})$, $\eta(-\tilde{2}) = -\eta(\tilde{2})$, $\eta(-\tilde{6}) = -\eta(\tilde{6})$ and $\eta(-\tilde{12}) = -\eta(\tilde{12})$.

We see that the vector of objective functions in problem (MOP3) is obtained by applying $\phi_\pi$ to the components of the vector of objective functions in problems (MOP1) and (MOP2). Now we apply the linear functional $\phi$ in (10) to the whole vector of objective functions in problems (MOP1) and (MOP2), which is termed scalarization. Let $\phi$ be the linear functional defined in (10). From (11), we see that
$$ \phi\big([[\tilde{u}^1, \tilde{v}^1]], \ldots, [[\tilde{u}^n, \tilde{v}^n]]\big) = \sum_{j=1}^n w^j\big[\eta(\tilde{u}^j) - \eta(\tilde{v}^j)\big] = \sum_{j=1}^n w^j \phi_\pi([[\tilde{u}^j, \tilde{v}^j]]). \qquad (15) $$
From (15), we see that
$$ \phi\big((\Pi \circ \tilde{\mathbf{f}})(\mathbf{x})\big) = \phi\big((\pi \circ \tilde{f}_1)(\mathbf{x}), \ldots, (\pi \circ \tilde{f}_n)(\mathbf{x})\big) = \sum_{j=1}^n w^j \phi_\pi\big((\pi \circ \tilde{f}_j)(\mathbf{x})\big) = \sum_{j=1}^n w^j f_j(\mathbf{x}). \qquad (16) $$
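Putting (16) to work, the sketch below forms the weighted scalar objective for the defuzzified data of Example 4.2 and hands it to a standard nonlinear programming solver; the equal weights $w = (1/2, 1/2)$, the sample $\eta((l, m, u)) = (l + 2m + u)/4$ and the use of SciPy's SLSQP method are all illustrative assumptions.

```python
# Hedged sketch: the weighting problem (MP1) for the defuzzified Example 4.2 data,
# solved with SciPy's SLSQP method (any NLP solver would do).  Requires numpy and scipy.
import numpy as np
from scipy.optimize import minimize


def eta(t):
    l, m, u = t
    return (l + 2 * m + u) / 4.0


one, two = (0, 1, 2), (1, 2, 3)
w = (0.5, 0.5)                                    # positive weights summing to one


def objective(x):                                 # w1*f1(x) + w2*f2(x), cf. (16)
    f1 = eta(one) * x[0] ** 2 + eta(one) * x[1] ** 2 + eta(one)
    f2 = eta(one) * x[0] ** 2 + eta(one) * x[1] ** 2 + eta(two)
    return w[0] * f1 + w[1] * f2


constraints = [
    # SLSQP expects g(x) >= 0; these encode -x1 - x2 <= -1 and -6*x1 - 2*x2 <= -12
    {"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0},
    {"type": "ineq", "fun": lambda x: 6.0 * x[0] + 2.0 * x[1] - 12.0},
]
result = minimize(objective, x0=np.array([2.0, 2.0]), method="SLSQP",
                  bounds=[(0, None), (0, None)], constraints=constraints)
print(result.x)   # roughly (1.8, 0.6) for this particular eta and these weights
```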
Therefore, from Proposition 4.3 again, the corresponding scalar optimization problem of (FMOP1) or (FMOP2) is formulated as follows:

(MP1)  min  $\sum_{j=1}^n w^j f_j(\mathbf{x})$
       subject to  $g_i(\mathbf{x}) \leq \eta(\tilde{k}_i)$, $i = 1, 2, \ldots, m$, $\mathbf{x} \in \mathbb{R}_+^n$.

In this case, the scalar optimization problem (MP1) is the weighting problem of the multiobjective programming problem (MOP3).

Example 4.3. Continuing Example 3.2, since we consider a bi-objective programming problem, from (10) we can define the functional $\phi : N^2 \to \mathbb{R}$ by
$$ \phi\big([[\tilde{u}^1, \tilde{v}^1]], [[\tilde{u}^2, \tilde{v}^2]]\big) = \tfrac{1}{2}\big[\eta(\tilde{u}^1) - \eta(\tilde{v}^1)\big] + \tfrac{1}{2}\big[\eta(\tilde{u}^2) - \eta(\tilde{v}^2)\big], $$
where $w^1 = w^2 = 1/2$. Therefore, the corresponding scalar optimization problem is given by

min  $\tfrac{1}{2}f_1(x_1, x_2) + \tfrac{1}{2}f_2(x_1, x_2) = \eta(\tilde{1})x_1^2 + \eta(\tilde{1})x_2^2 + \tfrac{1}{2}\eta(\tilde{1}) + \tfrac{1}{2}\eta(\tilde{2})$
subject to
$\eta(-\tilde{1})x_1 + \eta(-\tilde{1})x_2 \leq \eta(-\tilde{1})$
$\eta(-\tilde{6})x_1 + \eta(-\tilde{2})x_2 \leq \eta(-\tilde{12})$
$x_1, x_2 \geq 0$.

Let us recall some well-known results on multiobjective programming problems.

Proposition 4.4 (Miettinen [16]). The following properties hold true:
(i) The optimal solution of the weighting problem (MP1) is a Pareto optimal solution of problem (MOP3) if the weighting coefficients satisfy $w^j > 0$ for all $j = 1, \ldots, n$.
(ii) Suppose that the multiobjective programming problem (MOP3) is convex. If $\mathbf{x}^*$ is a Pareto optimal solution of problem (MOP3), then there exists a weighting vector $\mathbf{w} \in \mathbb{R}_+^n$, i.e. $w^j \geq 0$ for all $j = 1, \ldots, n$, with $\sum_{j=1}^n w^j = 1$, such that $\mathbf{x}^*$ is an optimal solution of the weighting problem (MP1).

Now we are in a position to present the scalarization results.

Theorem 4.1 (Scalarization). Suppose that problem (FMOP1) is feasible and $\eta$ is a canonical linear defuzzification function. Then the following statements hold true.
(i) If $\mathbf{x}^* \in \mathbb{R}_+^n$ is a unique optimal solution of the corresponding scalar optimization problem (MP1), then $\mathbf{x}^*$ is a Pareto $C^1$-optimal solution of the original problem (FMOP1), and is also a Pareto optimal solution of problem (MOP3).
(ii) If $\mathbf{x}^* \in \mathbb{R}_+^n$ is an optimal solution of the corresponding scalar optimization problem (MP1), then $\mathbf{x}^*$ is a Pareto $C^1$-optimal solution of the original problem (FMOP1), and is also a Pareto optimal solution of problem (MOP3).
Note that (ii) is more useful than (i). However, we still present (i) because we need it for comparison below.

Proof. (i) Let $X_{\mathrm{FMOP1}}$ and $X_{\mathrm{MP1}}$ be the feasible sets of problems (FMOP1) and (MP1), respectively. Then, from Proposition 4.3, we see that $X_{\mathrm{FMOP1}} = X_{\mathrm{MP1}}$. Now, from Jahn [8, p. 128, Theorem 5.18], if $\Pi(C^1)$ is a pointed convex cone, and if there exists a linear functional $\phi \in C^1_{(N^n)'}$ and an element $y^* \in S^1$ with $\phi(y^*) < \phi(y)$ for all $y \in S^1 \setminus \{y^*\}$, then $y^*$ is a minimal element of $S^1$. We are going to use this fact to prove the result. Let $\phi$ be a linear functional defined in (10). Since, from (16),
$$ \phi\big((\Pi \circ \tilde{\mathbf{f}})(\mathbf{x}^*)\big) = \sum_{j=1}^n w^j f_j(\mathbf{x}^*) $$
0 for all j = 1, . . . , n. Now we can define φ in (10) by taking the nonnegative weights w j ≥ 0 for all j = 1, . . . , n. In this case, Proposition 4.2 (ii) and (iv) will not hold true, but (i) and (iii) still hold true. Therefore, for problem (FMOP1), we can use Theorem 4.1 (i). Suppose that problem (FMOP1) is convex and feasible, and η is a canonical linear defuzzification function. From Proposition 5.1, we see that the corresponding multiobjective programming problem (MOP3) is also convex. If x∗ ∈ Rn+ is a Pareto optimal solution of problem (MOP3), then we expect to conclude that x∗ is a Pareto C 1 -optimal solution of the original problem (FMOP1). This will be true by applying Proposition 4.4(ii) and Theorem 4.1(i) if x∗ happens to be a unique optimal solution of problem (MP1). However, Proposition 4.4(ii) does not guarantee this uniqueness. Therefore, we encounter a difficulty. Now we can avoid the uniqueness by applying Theorem 4.1(ii). However, in this case, Proposition 4.2(ii) should be true. In other words, the weights w j , j = 1, . . . , n, should be taken as all positive instead of nonnegative, since the proof of Theorem 4.1(ii) uses Proposition 4.2 (ii). Therefore we encounter another difficulty, since Proposition 4.4(ii) can just guarantee the nonnegative weights w j for all j = 1, . . . , n, If the above difficulties could be overcome, we just need to obtain the Pareto optimal solutions of problem (MOP3) in order to obtain the C 1 -optimal and C 2 -optimal solutions of problems (FMOP1) and (FMOP2). There are many methods, except for the weighting method described above, in the literature of multiobjective programming problems which can be invoked to obtain the Pareto optimal solutions of problem (MOP3). References [1] C.R. Bector, S. Chandra, Fuzzy mathematical programming and fuzzy matrix games, in: Studies in Fuzziness and Soft Computing, vol. 169, Springer-Verlag, Berlin, 2005. [2] R.E. Bellman, L.A. Zadeh, Decision making in a fuzzy environment, Management Science 17 (1970) 141–164. [3] J.R. Birge, F. Louveaux, Introduction to Stochastic Programming, Springer-Verlag, 1997. [4] S. Chanas, Fuzzy programming in multiobjective linear programming — A parametric approach, Fuzzy Sets and Systems 29 (1989) 303–313. [5] M. Delgado, J. Kacprzyk, J.-L. Verdegay, M.A. Vila (Eds.), Fuzzy Optimization: Recent Advances, Physica-Verlag, 1994. [6] A.O Esogube, Computational aspects and applications of a branch and bound algorithm for fuzzy multistage decision processes, Computers and Mathematics with Applications 21 (11–12) (1991) 117–127. [7] M.A. Fatma, A differential equation approach to fuzzy vector optimization problems and sensitivity analysis, Fuzzy Sets and Systems 119 (2001) 87–95. [8] J. Jahn, Mathematical Vector Optimization in Partially Ordered Linear Spaces, Verlag Peter Lang GmbH, Frankfurt am Main, 1986. [9] O. Kaleva, The calculus of fuzzy valued functions, Applied Mathematics Letters 3 (1990) 55–59. [10] E. Kreyszig, Introductory Functional Analysis with Applications, John Wiley & Sons, 1978. [11] Y.-J. Lai, C.-L. Hwang, Fuzzy mathematical programming: Methods and applications, in: Lecture Notes in Economics and Mathematical Systems, vol. 394, Springer-Verlag, 1992. [12] Y.-J. Lai, C.-L. Hwang, Fuzzy multiple objective decision making: Methods and applications, in: Lecture Notes in Economics and Mathematical Systems, vol. 404, Springer-Verlag, 1994. [13] R.J. Li, E.S. 
[13] R.J. Li, E.S. Lee, Fuzzy approaches to multicriteria de Novo programs, Journal of Mathematical Analysis and Applications 153 (1990) 97–111.
[14] R.J. Li, E.S. Lee, De novo programming with fuzzy coefficients and multiple fuzzy goals, Journal of Mathematical Analysis and Applications 123 (1993) 212–220.
[15] E.S. Lee, R.J. Li, Fuzzy multiple objective programming and compromise programming with Pareto optimum, Fuzzy Sets and Systems 53 (1993) 275–288.
[16] K.M. Miettinen, Nonlinear Multiobjective Optimization, Kluwer Academic Publishers, Boston, 1998.
[17] C. Mohan, H.T. Nguyen, Reference direction interactive method for solving multiobjective fuzzy programming problems, European Journal of Operational Research 107 (1998) 599–613.
[18] I. Nishizaki, M. Sakawa, Equilibrium solutions for multiobjective bimatrix games incorporating fuzzy goals, Journal of Optimization Theory and Applications 86 (1995) 433–457.
[19] A. Prékopa, Stochastic Programming, Kluwer Academic Publishers, 1995.
[20] M.L. Puri, D.A. Ralescu, Differentials of fuzzy functions, Journal of Mathematical Analysis and Applications 91 (1983) 552–558.
[21] M. Sakawa, Fuzzy Sets and Interactive Multiobjective Optimization, Plenum Press, 1993.
[22] M. Sakawa, K. Kato, Interactive decision making for large-scale multiobjective linear programs with fuzzy numbers, Fuzzy Sets and Systems 88 (1997) 161–172.
[23] M. Sakawa, K. Kato, An interactive fuzzy satisficing method for multiobjective block angular linear programming problems with fuzzy parameters, Fuzzy Sets and Systems 111 (2000) 55–69.
[24] M. Sakawa, K. Sawada, An interactive fuzzy satisficing method for large-scale multiobjective linear programming problems with block angular structure, Fuzzy Sets and Systems 67 (1994) 5–17.
[25] M. Sakawa, H. Yano, A fuzzy dual decomposition method for large-scale multiobjective nonlinear programming problems, Fuzzy Sets and Systems 67 (1994) 19–27.
[26] R. Słowiński (Ed.), Fuzzy Sets in Decision Analysis, Operations Research and Statistics, Kluwer Academic Publishers, 1998.
[27] I.M. Stancu-Minasian, Stochastic Programming with Multiple Objective Functions, D. Reidel Publishing Company, 1984.
[28] S. Vajda, Probabilistic Programming, Academic Press, 1972.
[29] H.-C. Wu, A solution concept for fuzzy multiobjective programming problems based on convex cones, Journal of Optimization Theory and Applications 121 (2004) 397–417.
[30] H.-C. Wu, An (α, β)-optimal solution concept in fuzzy optimization problems, Optimization 53 (2004) 203–221.
[31] L.A. Zadeh, The concept of linguistic variable and its application to approximate reasoning I, II and III, Information Sciences 8 (1975) 199–249; 8 (1975) 301–357; 9 (1975) 43–80.
[32] H.-J. Zimmermann, Fuzzy programming and linear programming with several objective functions, Fuzzy Sets and Systems 1 (1978) 45–55.
[33] H.-J. Zimmermann, Fuzzy Set Theory—And its Applications, 3rd ed., Kluwer Academic Publishers, 1996.