Accepted Manuscript

A combined scalarizing method for multiobjective programming problems
Narges Rastegar, Esmaile Khorram

PII: S0377-2217(13)00937-5
DOI: http://dx.doi.org/10.1016/j.ejor.2013.11.020
Reference: EOR 12006
To appear in: European Journal of Operational Research
Received Date: 20 April 2013
Accepted Date: 16 November 2013
Please cite this article as: Rastegar, N., Khorram, E., A combined scalarizing method for multiobjective programming problems, European Journal of Operational Research (2013), doi: http://dx.doi.org/10.1016/j.ejor.2013.11.020
A combined scalarizing method for multiobjective programming problems

Narges Rastegar (a), Esmaile Khorram (a,*)

(a) Department of Applied Mathematics, Faculty of Mathematics and Computer Science, Amirkabir University of Technology, No. 424, Hafez Ave., Tehran, Iran

December 4, 2013
Abstract

In this paper, a new general scalarization technique for solving multiobjective optimization problems is presented. After studying the properties of this formulation, two problems are considered as special cases of this general formula. It is shown that some well-known methods, such as the weighted sum method, the ε-constraint method, the Benson method, the hybrid method and the elastic ε-constraint method, can be subsumed under these two problems. Then, considering approximate solutions, some relationships are established between ε-(weakly, properly) efficient points of a general multiobjective optimization problem (without any convexity assumption) and ϵ-optimal solutions of the introduced scalarized problem.

Keywords: Multiple objective programming, Scalarization method, Approximate solutions, Properly efficient solutions, ε-properly efficient solutions.
1 Introduction

One part of mathematical programming is multiobjective optimization, in which conflicting objective functions must be minimized over a feasible set of decisions. In many areas of engineering, economics, and science, new developments are only possible through the application of multiobjective optimization problems (MOPs) and related methods. There are many recent publications on applications of MOPs ([9, 12, 24, 27, 28, 40] and many others). Various monographs collected many results in theory and methodology [8, 13, 39], or provided
* Corresponding author. E-mail addresses: [email protected] (E. Khorram), [email protected] (N. Rastegar).
a comprehensive review of methods [34]. For solving MOPs, there are a number of methods and algorithms, which are classified according to the participation of the decision maker in the solution process [25]. The traditional and common approach for solving MOPs is a reformulation as a parametric scalar optimization problem. In other words, MOPs are most commonly solved indirectly by conventional (single-objective) optimization techniques with the aid of scalarization. In general, scalarization means the replacement of a vector optimization problem by a suitable scalar optimization problem, i.e., an optimization problem with a real-valued objective function. Since scalar optimization theory has been widely developed, scalarization turns out to be of great importance for vector optimization theory, as in the well-known weighted sum method [35, 17], the ε-constraint method [36, 5], the hybrid method [20, 26], the Benson method [3], the normal boundary intersection method [6], and so on. For a survey of scalarizing techniques, the reader is referred to [10].

Our focus in this paper is based on the main idea of the elastic ε-constraint method introduced by Ehrgott and Ruzika in [11]. Since the ε-constraint method has no result about properly efficient solutions, Ehrgott and Ruzika presented two modifications of the ε-constraint method to remedy this weakness. We use their strategy to constitute a general form. We show that the weighted sum method, the ε-constraint method, the Benson method, the hybrid method and the elastic ε-constraint method can be seen as special cases of our problem. Then, we prove some necessary and sufficient conditions for (weakly, properly) efficient points of a general MOP via optimal solutions of the presented scalarized problem.

Researchers have tried to present general formulations for multiobjective optimization problems. For example, Luque et al. [33, 37, 38] introduced a general formulation for several interactive methods. Their general formulation can accommodate some well-known interactive methods. Our formulation in this paper is not for interactive methods and is therefore different from the formulation in [33, 38]. It should be mentioned that there exist several publications about properly efficient solutions ([26, 5] and many others) which use the stability of the scalarized problem or the K.K.T. multipliers. However, our results on proper efficiency are more direct.

On the other hand, the importance of approximate solutions for MOPs in recent decades motivated us to investigate ε-efficient solutions. The first notion of approximation was suggested by Kutateladze [29] and extended by Loridan [32]. White [41] investigated six kinds of ε-approximate efficient solutions. Many authors have studied the properties of this kind of solution. Some necessary and sufficient conditions for ε-(weak) efficiency can be found in [7, 21, 22] and others. Engau and Wiecek [14] investigated scalarization approaches to generate ε-efficient solutions of MOPs. Since our presented problems are extensions of the methods in [14], the results in the current paper are extensions of those for the special cases in [14].

Also, one of the most important notions in this context is ε-proper efficiency, introduced by Li and Wang [30]. Liu [31] derived some necessary and sufficient conditions for ε-properly efficient solutions of convex MOPs; see also [1, 15, 16]. The methods considered in [14] have no result on ε-proper efficiency. So, Ghaznavi and Khorram [18] and Ghaznavi et al. [19], using the elastic ε-constraint method, provided some necessary and sufficient conditions for ε-(weak, proper) efficiency. Since our problem is a general form and the elastic ε-constraint method is a special case of it, the obtained results extend those obtained in [18, 19, 14]. It is worth mentioning that the obtained results are general and we do not make any convexity assumption.

The outline of this article is as follows: in Section 2, we provide preliminaries and basic definitions. In Section 3, we present the general formulation and study some of its properties. In Sections 4 and 5, two problems are presented which are special cases of the general formula of Section 3. Section 6 is devoted to the necessary and sufficient conditions for ε-(weakly, properly) efficient solutions, in three subsections. The conclusions are drawn in Section 7.
2 Preliminaries and basic definitions

In this paper, optimization of the following multiple objective problem is studied:

    min  f(x) = (f_1(x), f_2(x), ..., f_p(x))                         (2.1)
    s.t. g_i(x) ≤ 0,  i = 1, 2, ..., m,
         h_k(x) = 0,  k = 1, 2, ..., m′,

where f_j, g_i, h_k : Ω ⊆ R^n → R for all j, i, k, and Ω ≠ ∅. We denote the set of all feasible points by X; in other words, X = {x ∈ Ω | g_i(x) ≤ 0, h_k(x) = 0, ∀ i, k}. Now, the following definitions determine efficient solutions of the MOP.

Definition 2.1. A feasible solution x* ∈ X of the MOP is called:
(1) an efficient solution if there does not exist another x ∈ X such that f_j(x) ≤ f_j(x*) for all j = 1, 2, ..., p and f(x) ≠ f(x*);
(2) a weakly efficient solution if there is no x ∈ X such that f_j(x) < f_j(x*), j = 1, 2, ..., p;
(3) a strictly efficient solution if there does not exist another feasible solution x ≠ x* such that f_j(x) ≤ f_j(x*), j = 1, 2, ..., p.

Let X_E (X_wE, X_sE) be the set of efficient (weakly, strictly efficient) solutions. If x* is an efficient (weakly efficient) solution, f(x*) is called a nondominated (weakly nondominated) point. The set of nondominated (weakly nondominated) points is denoted by Y_N (Y_wN); in other words, Y_N := f(X_E) (Y_wN := f(X_wE)). We assume throughout this paper that Y = f(X) is bounded and that X_E is nonempty. This is guaranteed, e.g., if X is compact and the f_i are continuous (see [8]).

Throughout this paper, we use the following notations:
• R^p_> := {y ∈ R^p | y_i > 0, i = 1, 2, ..., p}.
• R^p_≥ := {y ∈ R^p | y_i ≥ 0, i = 1, 2, ..., p} \ {0}.
• R^p_≧ := {y ∈ R^p | y_i ≥ 0, i = 1, 2, ..., p}.
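To make Definition 2.1 concrete, the following sketch (ours, not from the paper) checks efficiency and weak efficiency over a finite set of objective vectors; the function names are illustrative only.

```python
# Illustrative sketch (not from the paper): Definition 2.1 on a finite set of
# objective vectors. Function names are ours.

def dominates(fa, fb):
    """fa dominates fb: fa <= fb componentwise and fa != fb (efficiency)."""
    return all(a <= b for a, b in zip(fa, fb)) and fa != fb

def strictly_dominates(fa, fb):
    """fa < fb in every component (weak efficiency)."""
    return all(a < b for a, b in zip(fa, fb))

def efficient(points):
    """Objective vectors not dominated by any other listed vector."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

def weakly_efficient(points):
    return [p for p in points if not any(strictly_dominates(q, p) for q in points)]

pts = [(1, 4), (2, 2), (4, 1), (3, 3)]
print(efficient(pts))          # (3, 3) is dominated by (2, 2)
print(weakly_efficient(pts))
```

On this toy set, (3, 3) is excluded from both X_E and X_wE, while the three trade-off points remain.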
On the other hand, there exists a well-known kind of efficient points, named properly efficient solutions. Properly efficient points are those efficient solutions that have bounded trade-offs between the objectives. There are several definitions of proper efficiency, given by Borwein [4], Hartley [23], Benson [2] and others. Here we use the definition of proper efficiency in the sense of Geoffrion [17].

Definition 2.2. A feasible solution x̂ ∈ X is called properly efficient in Geoffrion's sense if it is efficient and if there is a real number M > 0 such that for all i and x ∈ X satisfying f_i(x) < f_i(x̂) there exists an index j such that f_j(x̂) < f_j(x) and

    (f_i(x̂) − f_i(x)) / (f_j(x) − f_j(x̂)) < M.

The set of properly efficient solutions is denoted by X_pE.

ε-(weakly) efficient solutions of MOP (2.1) are defined as follows [32]:

Definition 2.3. Consider MOP (2.1) and let ε ∈ R^p_≧. A feasible point x̂ ∈ X is called:
(1) ε-weakly efficient if there is no other x ∈ X such that f(x) < f(x̂) − ε;
(2) ε-efficient if there is no other x ∈ X such that f(x) ≤ f(x̂) − ε.
Definition 2.4. [30] A feasible point x̂ ∈ X is called an ε-properly efficient point of problem (2.1) if it is ε-efficient and there is a real number M > 0 such that for all i ∈ {1, 2, ..., p} and x ∈ X satisfying f_i(x) < f_i(x̂) − ε_i, there exists an index j ∈ {1, 2, ..., p} such that f_j(x̂) − ε_j < f_j(x) and

    (f_i(x̂) − f_i(x) − ε_i) / (f_j(x) − f_j(x̂) + ε_j) < M.

The sets of all ε-weakly efficient, ε-efficient and ε-properly efficient solutions of an MOP will be denoted by X_εWE, X_εE and X_εPE, respectively. Notice that for ε = 0, ε-weak efficiency, ε-efficiency and ε-proper efficiency collapse into the usual definitions of weak efficiency and efficiency (Definition 2.1) and proper efficiency (Definition 2.2).

Remark 2.5. Obviously, X_εPE ⊆ X_εE ⊆ X_εWE.

The customary approach to solve a given MOP is to formulate a single objective program (SOP) associated with it. Let us consider an SOP as follows:

    min_{x ∈ X} g(x),

where g : X → R. The notions of optimality, ϵ-optimality and strict ϵ-optimality for a given SOP are defined as follows.

Definition 2.6. Let ϵ ≥ 0. For the SOP, a feasible solution x̂ ∈ X is called:
(1) an optimal solution if g(x̂) ≤ g(x) for all x ∈ X;
(2) an ϵ-optimal solution if g(x̂) ≤ g(x) + ϵ for all x ∈ X;
(3) a strictly ϵ-optimal solution if g(x̂) < g(x) + ϵ for all x ∈ X.
3 A general scalarized problem

In this section, using slack and surplus variables, we consider the following formulation, which is a general scalarizing method for solving MOP (2.1). The objective function equals the positive weighted sum of the objectives, plus the positive weighted sum of the surplus variables, minus the weighted sum of the slack variables. This general form can be formulated as follows:

    min  Σ_{i=1}^p λ_i f_i(x) − Σ_{i=1}^p γ_i s_i^+ + Σ_{i=1}^p μ_i s_i^-          (3.1)
    s.t. f_i(x) + s_i^+ − s_i^- ≤ α_i,  1 ≤ i ≤ p,
         x ∈ X,  s^+, s^- ≧ 0,

where λ_i, μ_i and γ_i, for all i, are nonnegative weights, and the α_i (∀ i) are given upper bounds.

In SOP (3.1), the slack and surplus variables s_i^+ and s_i^-, for all i, might be changed simultaneously by an amount β_i ∈ R without affecting the feasibility of the constraints. This has been discussed completely in [11], and we refer the reader there for more details. The first property of SOP (3.1) is stated in the following lemma.

Lemma 3.1. Let γ ≧ 0 and let the set of optimal solutions of SOP (3.1) be nonempty. Then SOP (3.1) has an optimal solution such that all the α-constraints are active. If γ > 0, then all the α-constraints are active in every optimal solution of SOP (3.1).

Proof. Assume (x̂, ŝ^+, ŝ^-) is an optimal solution of SOP (3.1) and there is some j such that f_j(x̂) + ŝ_j^+ − ŝ_j^- < α_j. Define I(x̂, ŝ^+, ŝ^-) = {j : f_j(x̂) + ŝ_j^+ − ŝ_j^- < α_j} and δ_j = α_j − f_j(x̂) − ŝ_j^+ + ŝ_j^- > 0 for all j ∈ I(x̂, ŝ^+, ŝ^-). Now, define s̃_i^+ = ŝ_i^+ if i ∉ I(x̂, ŝ^+, ŝ^-) and s̃_i^+ = ŝ_i^+ + δ_i if i ∈ I(x̂, ŝ^+, ŝ^-). Clearly, (x̂, s̃^+, ŝ^-) is feasible for SOP (3.1) and all the constraints are active. On the other hand, since s̃^+ ≥ ŝ^+, we have

    Σ_{i=1}^p λ_i f_i(x̂) − Σ_{i=1}^p γ_i s̃_i^+ + Σ_{i=1}^p μ_i ŝ_i^- ≤ Σ_{i=1}^p λ_i f_i(x̂) − Σ_{i=1}^p γ_i ŝ_i^+ + Σ_{i=1}^p μ_i ŝ_i^-.

This means that (x̂, s̃^+, ŝ^-) yields a better objective value for SOP (3.1) than (x̂, ŝ^+, ŝ^-) (if γ_j > 0 for some j ∈ I(x̂, ŝ^+, ŝ^-)) or the same value (if γ_j = 0 for all j ∈ I(x̂, ŝ^+, ŝ^-)); in the former case this contradicts the optimality of (x̂, ŝ^+, ŝ^-).

Now, we start analyzing SOP (3.1) theoretically.

Theorem 3.2. Let (x̂, ŝ^+, ŝ^-) be an optimal solution of SOP (3.1).
(i) If λ + γ ≥ 0 or (λ + μ ≥ 0 and ŝ^- > 0), then x̂ is a weakly efficient solution of MOP (2.1).
(ii) If λ + γ ≥ 0 or (λ + μ ≥ 0 and ŝ^- > 0) and x̂ is unique, then x̂ is a strictly efficient solution of MOP (2.1).
(iii) If λ + γ > 0 or (λ + μ > 0 and ŝ^- > 0), then x̂ is an efficient solution of MOP (2.1).

Proof. Here, we give the proof of part (iii). The proofs of parts (i) and (ii) are similar and are omitted.
(iii) Assume x̂ is not an efficient solution. Then there exists a feasible point x ∈ X such that f_i(x) ≤ f_i(x̂) for all i, with strict inequality for some j. We have:

    f_i(x) + ŝ_i^+ − ŝ_i^- ≤ α_i,  ∀ i ≠ j,
    f_j(x) + ŝ_j^+ − ŝ_j^- < α_j.

We consider two cases.

Case 1) Let λ + γ > 0. Clearly, there is some ν > 0 such that f_j(x) + ŝ_j^+ − ŝ_j^- + ν ≤ α_j. Set s_i^+ = ŝ_i^+ for i ≠ j and s_j^+ = ŝ_j^+ + ν. Obviously, (x, s^+, ŝ^-) is feasible for SOP (3.1) and yields a better objective value than (x̂, ŝ^+, ŝ^-), since λ_j + γ_j > 0.

Case 2) Let λ + μ > 0 and ŝ^- > 0. Then there is some ν > 0 such that ŝ_j^- − ν > 0 and f_j(x) + ŝ_j^+ − ŝ_j^- + ν ≤ α_j. In this case define s_i^- = ŝ_i^- for i ≠ j and s_j^- = ŝ_j^- − ν. Therefore, (x, ŝ^+, s^-) is feasible for SOP (3.1) and yields a better objective value than (x̂, ŝ^+, ŝ^-), since λ_j + μ_j > 0.

In both cases this contradicts the optimality of (x̂, ŝ^+, ŝ^-).
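As a toy numerical illustration of SOP (3.1) (ours, not from the paper): for fixed x with μ_i > γ_i ≥ 0, an optimal slack/surplus pair makes the α-constraint active (cf. Lemma 3.1), so s_i^+ = max{0, α_i − f_i(x)} and s_i^- = max{0, f_i(x) − α_i}, and a grid search over x then minimizes the resulting scalar objective. The biobjective instance and all parameter values below are invented for illustration.

```python
# Toy illustration of SOP (3.1) on the MOP  min (f1, f2) = (x, 1 - x),  X = [0, 1].
# For fixed x and mu_i > gamma_i >= 0, an optimal slack/surplus choice makes the
# alpha-constraints active (Lemma 3.1):
#   s_i^+ = max(0, alpha_i - f_i(x)),   s_i^- = max(0, f_i(x) - alpha_i).

def sop31_value(fvals, lam, gam, mu, alpha):
    total = 0.0
    for f, l, g, m, a in zip(fvals, lam, gam, mu, alpha):
        s_plus = max(0.0, a - f)    # slack, rewarded with weight gamma
        s_minus = max(0.0, f - a)   # violation, penalized with weight mu
        total += l * f - g * s_plus + m * s_minus
    return total

lam, gam, mu, alpha = (1, 1), (0.5, 0.5), (2, 2), (0.6, 0.6)
grid = [i / 1000 for i in range(1001)]
best = min(grid, key=lambda x: sop31_value((x, 1 - x), lam, gam, mu, alpha))
# Every x in [0.4, 0.6] attains the minimum value 0.9; each such x is efficient,
# consistent with Theorem 3.2(iii), since lambda + gamma > 0 here.
print(best, sop31_value((best, 1 - best), lam, gam, mu, alpha))
```

The sketch only evaluates the scalarized objective on a grid; a real implementation would hand (3.1) to an NLP or LP solver.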
Next, we state an easy-to-check sufficient condition for identifying properly efficient solutions among the solutions of SOP (3.1). For the proof we need a technical lemma relating properly efficient solutions of the MOP with the feasible set of SOP (3.1) and with the set X, respectively. This lemma is very similar to an idea of Ehrgott and Ruzika in [11].

Lemma 3.3. Let x̂ be a properly efficient solution of the MOP with the feasible set of SOP (3.1), and let there be a partition I ∪ Ī of {1, 2, ..., p} such that f_i(x̂) > α_i for all i ∈ I and f_i(x̂) < α_i for all i ∈ Ī. Then x̂ is a properly efficient solution of the MOP with feasible set X.

Proof. The proof is similar to that of Lemma 3.2 in [11] and is omitted here.

Theorem 3.4. Let λ + μ > 0 and λ + γ > 0. If (x̂, ŝ^+, ŝ^-) is an optimal solution of SOP (3.1) and there is a partition I ∪ Ī of {1, 2, ..., p} such that ŝ_i^+ = 0, ŝ_i^- > 0 for i ∈ I and ŝ_i^- = 0, ŝ_i^+ > 0 for i ∈ Ī, then x̂ is a properly efficient solution of the MOP.

Proof. By part (iii) of Theorem 3.2, x̂ is efficient. On the other hand, we
have:

    Σ_{i=1}^p λ_i f_i(x̂) − Σ_{i=1}^p γ_i ŝ_i^+ + Σ_{i=1}^p μ_i ŝ_i^- = Σ_{i=1}^p λ_i f_i(x̂) − Σ_{i∈Ī} γ_i ŝ_i^+ + Σ_{i∈I} μ_i ŝ_i^-,

and, since (x̂, ŝ^+, ŝ^-) is an optimal solution of SOP (3.1), without loss of generality we can write:

    = Σ_{i=1}^p λ_i f_i(x̂) − Σ_{i∈Ī} γ_i (α_i − f_i(x̂)) + Σ_{i∈I} μ_i (f_i(x̂) − α_i).

Therefore x̂ is an optimal solution of the weighted sum problem

    min { Σ_{i∈Ī} (λ_i + γ_i) f_i(x) + Σ_{i∈I} (λ_i + μ_i) f_i(x) : f_i(x) < α_i, i ∈ Ī,  f_i(x) > α_i, i ∈ I }.

By Geoffrion's theorem [17], x̂ is a properly efficient solution of the MOP with the additional constraints. Using Lemma 3.3, x̂ is properly efficient for the MOP with feasible set X, and the proof is complete.

Remark 3.5. If λ = e_k, γ_k = 0 and μ_k = 0, Theorem 3.4 reduces to Theorem 5.2 in [11].

In the following theorem, we present a necessary condition for properly efficient solutions of the MOP. We show how properly efficient solutions can be obtained by appropriate choices of the parameters.

Theorem 3.6. Let x̂ be properly efficient for the MOP. Then there exist λ, μ, γ ≧ 0, α < ∞ and ŝ^+, ŝ^- such that (x̂, ŝ^+, ŝ^-) is an optimal solution of SOP (3.1).

Proof. Set γ = 0 and ŝ^+ = 0, and set α_i := f_i(x̂), i = 1, 2, ..., p, so that we can choose ŝ^- = 0. Also, let λ ≥ 0 be such that Σ_{i=1}^p λ_i = 1. Since x̂ is properly efficient, there is M > 0 such that, for all x ∈ X and all i with f_i(x) < f_i(x̂), there exists j ≠ i such that f_j(x̂) < f_j(x) and

    (f_i(x̂) − f_i(x)) / (f_j(x) − f_j(x̂)) < M.

Now define μ̂_i = M for all i. Let (x, s^+, s^-) be a feasible point of SOP (3.1). Since γ = 0, we can put s^+ = 0 and take

    s_i^- = max{0, f_i(x) − α_i} = max{0, f_i(x) − f_i(x̂)},
that is, the smallest possible value it can take. We need to show that

    Σ_{i=1}^p λ_i f_i(x) + Σ_{i=1}^p μ̂_i s_i^- ≥ Σ_{i=1}^p λ_i f_i(x̂) + Σ_{i=1}^p μ̂_i ŝ_i^- = Σ_{i=1}^p λ_i f_i(x̂).
Let k ∈ {1, 2, ..., p}. We have two possible cases.

Case 1) If f_k(x) ≥ f_k(x̂), then

    f_k(x) + Σ_{i=1}^p μ̂_i s_i^- ≥ f_k(x̂).        (a)

Case 2) If f_k(x) < f_k(x̂), set I* = {i : f_i(x) > f_i(x̂)}. The set I* ≠ ∅ because x̂ ∈ X_pE. We can write:

    f_k(x) + Σ_{i=1}^p μ̂_i s_i^- = f_k(x) + Σ_{i=1}^p μ̂_i max{0, f_i(x) − f_i(x̂)} = f_k(x) + Σ_{i∈I*} μ̂_i (f_i(x) − f_i(x̂)).

Since x̂ ∈ X_pE and f_k(x) < f_k(x̂), there is k* ∈ I* such that (f_k(x̂) − f_k(x)) / (f_{k*}(x) − f_{k*}(x̂)) < M. So we have

    f_k(x) + Σ_{i∈I*} μ̂_i (f_i(x) − f_i(x̂)) ≥ f_k(x) + μ̂_{k*} (f_{k*}(x) − f_{k*}(x̂))
        > f_k(x) + ((f_k(x̂) − f_k(x)) / (f_{k*}(x) − f_{k*}(x̂))) (f_{k*}(x) − f_{k*}(x̂)) = f_k(x̂).        (b)
Applying (a) and (b) we have

    Σ_{k=1}^p λ_k f_k(x) + Σ_{k=1}^p λ_k ( Σ_{i=1}^p μ̂_i s_i^- ) ≥ Σ_{k=1}^p λ_k f_k(x̂),

and since Σ_{i=1}^p λ_i = 1, we have

    Σ_{i=1}^p λ_i f_i(x) + Σ_{i=1}^p μ̂_i s_i^- ≥ Σ_{i=1}^p λ_i f_i(x̂).

The above inequality is true for all μ ≧ μ̂, and the proof is complete.
In Theorem 3.6, the boundedness of f(X) cannot be omitted; Examples 3.2 and 4.2 in [11] show that if f(X) is unbounded the result is no longer true. Additionally, we can obtain a necessary condition for efficient solutions as follows.

Theorem 3.7. Let x̂ be efficient for the MOP. Then there exist λ, μ, γ ≧ 0, α < ∞ and ŝ^+, ŝ^- such that (x̂, ŝ^+, ŝ^-) is an optimal solution of SOP (3.1).

Proof. It is sufficient to set α_i = f_i(x̂) for all i, γ = 0, μ = 0, ŝ^+ = 0 and ŝ^- = 0. Then, by Theorem 4.7 in [8], there exists λ > 0 such that x̂ is an optimal solution of SOP (3.1).

In Sections 4 and 5, we investigate two special cases of SOP (3.1). We study these two problems in more detail, and we show that some well-known scalarizing methods for solving the MOP can be seen as special cases of our problems.
4 A problem with slack variables

In this section, we consider problem (3.1) with slack variables only. The objective function equals the positive weighted sum of the objectives minus the weighted sum of the slack variables. In other words, in SOP (3.1) we put μ = 0 and s^- = 0, so we have:

    min  Σ_{i=1}^p λ_i f_i(x) − Σ_{i=1}^p γ_i s_i                    (4.1)
    s.t. f_i(x) + s_i ≤ α_i,  1 ≤ i ≤ p,
         x ∈ X,  s ≧ 0,

where λ_i and γ_i, for all i, are nonnegative weights and the α_i (∀ i) are given upper bounds. We suppose that in SOP (4.1) the α_i (∀ i) are selected such that the problem remains feasible. SOP (4.1) is an extended form of some well-known scalarizing methods; Table 1 shows these relations.

Table 1: The scalarized methods obtained by conditions on the parameters of problem (4.1)

    Conditions on parameters                                    | Resulting scalarized method
    ------------------------------------------------------------+-----------------------------
    γ = 0, α = ∞                                                | weighted sum method
    γ = 0, λ_k = 1, λ_{i≠k} = 0, α_k = ∞                        | ε-constraint method
    γ = 0, α = f(x_0), x_0 an arbitrary feasible point          | hybrid method
    λ = 0, γ = 1, α = f(x_0), x_0 an arbitrary feasible point   | Benson method
    λ_k = 1, λ_{i≠k} = 0, α_k = ∞                               | elastic constraint method
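The specializations in Table 1 can be mimicked numerically. The sketch below (ours; the candidate set and all parameter values are invented) evaluates the SOP (4.1) objective over a finite list of objective vectors, taking the maximal slack s_i = α_i − f_i(x) when γ_i > 0 (cf. Lemma 3.1) and treating f_i(x) > α_i as infeasibility.

```python
# Hypothetical illustration: recovering rows of Table 1 from SOP (4.1)
# on a finite candidate set of objective vectors (data are ours).
from math import inf

def sop41_value(f, lam, gam, alpha):
    """SOP (4.1) objective with slacks at their maximal value s_i = alpha_i - f_i
    (Lemma 3.1); returns None when some f_i > alpha_i (infeasible)."""
    if any(fi > ai for fi, ai in zip(f, alpha)):
        return None
    total = sum(li * fi for li, fi in zip(lam, f))
    for fi, gi, ai in zip(f, gam, alpha):
        if gi > 0:                      # gamma_i > 0 presumes a finite alpha_i
            total -= gi * (ai - fi)
    return total

Y = [(1, 4), (2, 2), (4, 1), (3, 3)]

def argmin(lam, gam, alpha):
    feas = [f for f in Y if sop41_value(f, lam, gam, alpha) is not None]
    return min(feas, key=lambda f: sop41_value(f, lam, gam, alpha))

# Row 1 of Table 1: gamma = 0, alpha = infinity -> weighted sum method.
print(argmin((0.5, 0.5), (0, 0), (inf, inf)))
# Row 2: lambda = e_1, alpha_2 = 2.5 -> epsilon-constraint (minimize f1, f2 <= 2.5).
print(argmin((1, 0), (0, 0), (inf, 2.5)))
```

Both parameter settings select the same candidate here, which is consistent with that point being efficient.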
SOP (4.1) has some properties. The first one is that, similar to SOP (3.1), there are always optimal solutions at which the additional constraints are active; in other words, SOP (4.1) has the property presented in Lemma 3.1. Depending on the choice of the weight vectors, different results can be derived for SOP (4.1). The next theorem shows some of them.

Theorem 4.1.
(1) Let (x̂, ŝ) be an optimal solution of SOP (4.1) with λ + γ > 0. Then x̂ is an efficient solution of MOP (2.1).
(2) Let (x̂, ŝ) be an optimal solution of SOP (4.1) with λ + γ ≥ 0. Then x̂ is a weakly efficient solution of MOP (2.1).
(3) Let (x̂, ŝ) be an optimal solution of SOP (4.1) with λ + γ ≥ 0. If x̂ is unique, then x̂ is a strictly efficient solution of MOP (2.1).

Proof. Putting μ = 0 and s^- = 0, this theorem is a special case of Theorem 3.2 and the proof is obvious.

The results of Theorem 4.1 hold for the special cases presented in Table 1. In other words, the corresponding properties of the weighted sum method, the ε-constraint method, the Benson method, the hybrid method and the elastic constraint method can be obtained as special cases of Theorem 4.1. The next theorem states an easy-to-check sufficient condition for identifying properly efficient solutions of the MOP among the solutions of SOP (4.1).

Theorem 4.2. If (x̂, ŝ) is an optimal solution of SOP (4.1) with λ + γ > 0 and ŝ > 0, then x̂ is a properly efficient solution of MOP (2.1).

Proof. Since in SOP (4.1) ŝ^- = 0, we can assume that μ > 0. Hence, the result follows from Theorem 3.4.

Remark 4.3. Putting λ_k = 1, λ_{i≠k} = 0 and γ_k = 0, Theorem 3.2 in [11] can be seen as a special case of Theorem 4.2.

Similar to Theorem 3.6, any efficient solution can be considered as an optimal solution of SOP (4.1) with positive weights. So, we have:

Theorem 4.4. Let x̂ be an efficient solution of the MOP. Then there exist α < ∞, ŝ, λ and γ such that (x̂, ŝ) is an optimal solution of SOP (4.1).

Notice that in Theorem 4.4 the parameters α_i (∀ i) are finite. So, for a scalarizing technique with infinite values for some α_i (see Table 1), we cannot use this theorem; for this kind of scalarized problem more assumptions are needed, e.g., convexity for the weighted sum method or proper efficiency for the elastic ε-constraint method. In the following section, we allow the added constraints to be violated and then penalize these violations in the objective function.
5 A problem with flexible constraints

In this section, we study problem (3.1) in the special case γ = 0 and s^+ = 0. So, consider the following problem:

    min  Σ_{i=1}^p λ_i f_i(x) + Σ_{i=1}^p μ_i s_i                    (5.1)
    s.t. f_i(x) − s_i ≤ α_i,  1 ≤ i ≤ p,
         x ∈ X,  s ≧ 0,

where λ_i and μ_i, for all i, are nonnegative weights and the α_i (∀ i) are given upper bounds. We suppose that in SOP (5.1) the α_i (∀ i) are selected such that the problem remains feasible. Note that if (x̂, ŝ) is an optimal solution, then we may assume without loss of generality that ŝ_i = max{0, f_i(x̂) − α_i}.

It should be mentioned that, although SOP (5.1) is a special case of SOP (3.1), since s^+ = 0 it does not have the property of Lemma 3.1; in other words, the α-constraints are not always active at optimality. Hence, some properties of problem (3.1), and also of (4.1), do not hold for SOP (5.1).

Remark 5.1. If λ_k = 1, λ_{i≠k} = 0 and μ_k = 0, SOP (5.1) reduces to the second modification of the ε-constraint method introduced by Ehrgott and Ruzika [11].

The following results, obtained from Theorem 3.2 and Theorem 3.4, are extensions of the results in [11].

Theorem 5.2. Let (x̂, ŝ) be an optimal solution of SOP (5.1).
(1) If λ ≥ 0 or (λ + μ ≥ 0 and ŝ > 0), then x̂ is a weakly efficient solution.
(2) If λ ≥ 0 or (λ + μ ≥ 0 and ŝ > 0) and x̂ is a unique solution, then x̂ is a strictly efficient solution.
(3) (a) If λ > 0, then x̂ is an efficient solution.
    (b) If λ + μ > 0 and ŝ > 0, then x̂ is an efficient solution.
Theorem 5.3. If (x̂, ŝ) is an optimal solution of SOP (5.1) with λ + μ > 0 and ŝ > 0, then x̂ is a properly efficient solution of the MOP.

Remark 5.4. Theorem 5.3 extends the result obtained in Theorem 4.1 of [11].

Using Theorem 3.6, we now turn to showing that properly efficient solutions of the MOP are optimal solutions of SOP (5.1) for appropriate choices of α and μ.

Theorem 5.5. Let x̂ be a properly efficient solution of the MOP. Then there are α < ∞, ŝ, λ and μ̂ with μ̂_i < ∞ for all i, such that (x̂, ŝ) is an optimal solution of SOP (5.1) for all μ ∈ R^p with μ ≧ μ̂.

In SOP (4.1) we use slack variables; inserting these variables yields some information about proper efficiency. The negative sign of the weight coefficients of the slack variables s in the objective function allows s to become as large as possible, while the constraints limit the magnitude of the slack variables. Moreover, the additional constraints in SOP (4.1) are inflexible; thus, we decided to address this inflexibility. In SOP (5.1), the constraints are allowed to be violated through the variables s, and these violations are penalized in the objective function with positive weight coefficients μ_i. The idea of flexible constraints is that, in some problems, a small deviation in the constraints may result in the attainment of a better solution. SOP (3.1) combines the advantages of the two SOPs (4.1) and (5.1). In the next section we investigate approximate solutions and obtain some necessary and sufficient conditions for ε-(weakly, properly) efficient solutions via approximate solutions of SOPs (3.1), (4.1) and (5.1).
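The flexibility of SOP (5.1) can be seen on the same kind of toy data (ours, invented): with a bound α_2 that an ordinary ε-constraint would enforce strictly, the elastic problem may accept a small, penalized violation in exchange for a better value of f_1. Here s_i = max{0, f_i(x) − α_i} is the smallest feasible slack.

```python
# Hypothetical illustration of SOP (5.1): the constraint f2 <= 1.5 may be
# violated, with the violation penalized at rate mu_2 (data are ours;
# lambda + mu > 0 holds componentwise).
from math import inf

def sop51_value(f, lam, mu, alpha):
    """SOP (5.1) objective with the smallest feasible slack
    s_i = max(0, f_i - alpha_i), i.e. the penalized violation."""
    return sum(li * fi + mi * max(0.0, fi - ai)
               for fi, li, mi, ai in zip(f, lam, mu, alpha))

Y = [(1, 4), (2, 2), (4, 1), (3, 3)]
lam, mu, alpha = (1, 0), (0, 3), (inf, 1.5)

best = min(Y, key=lambda f: sop51_value(f, lam, mu, alpha))
# A strict constraint f2 <= 1.5 would leave only (4, 1); the elastic version
# prefers (2, 2), trading the violation 0.5 (cost 1.5) for a smaller f1.
print(best, sop51_value(best, lam, mu, alpha))
```

This mirrors the motivation above: a small constraint deviation buys a better solution, at a price controlled by μ.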
6 ε-(weakly, properly) efficient solutions

In this section, we characterize the approximate (weakly, properly) efficient solutions of the general multiobjective optimization problem (2.1) using SOPs (3.1), (4.1) and (5.1).
6.1 Some necessary/sufficient conditions for problem (3.1)

In this subsection, we consider SOP (3.1) and provide some necessary/sufficient conditions for characterizing ε-(weakly, properly) efficient solutions of the MOP through SOP (3.1). The following theorem provides sufficient conditions for ε-efficiency.
Theorem 6.1. Assume ε ∈ R^p_≧.
(i) Let ϵ ≤ Σ_{i=1}^p (λ_i + γ_i) ε_i and λ + γ > 0. If (x̂, ŝ^+, ŝ^-) is an ϵ-optimal solution of SOP (3.1), then x̂ is an ε-efficient solution of the MOP.
(ii) If (x̂, ŝ^+, ŝ^-) is an ϵ-optimal solution of SOP (3.1), where 0 ≦ ε < ŝ^-, λ + μ > 0 and ϵ ≤ Σ_{i=1}^p (λ_i + μ_i) ε_i, then x̂ is an ε-efficient solution of the MOP.

Proof. (i) Let x̂ ∉ X_εE. Then there exists x ∈ X such that f(x) ≤ f(x̂) − ε; in other words, f_i(x) ≤ f_i(x̂) − ε_i for all i and, for some index j, the inequality is strict. So

    f_i(x) + ε_i + ŝ_i^+ − ŝ_i^- ≤ f_i(x̂) + ŝ_i^+ − ŝ_i^- ≤ α_i,  ∀ i ≠ j.

Also, we have

    f_j(x) + ε_j + ŝ_j^+ − ŝ_j^- + ν ≤ f_j(x̂) + ŝ_j^+ − ŝ_j^- ≤ α_j

for some ν > 0. Therefore, if we define s_i^+ = ŝ_i^+ + ε_i for all i ≠ j and s_j^+ = ŝ_j^+ + ε_j + ν, then (x, s^+, ŝ^-) is feasible for SOP (3.1), and

    Σ_{i=1}^p λ_i f_i(x) − Σ_{i=1}^p γ_i s_i^+ + Σ_{i=1}^p μ_i ŝ_i^-
        = Σ_{i=1}^p λ_i f_i(x) − Σ_{i=1}^p γ_i (ŝ_i^+ + ε_i) − γ_j ν + Σ_{i=1}^p μ_i ŝ_i^-
        = Σ_{i=1}^p λ_i f_i(x) − Σ_{i=1}^p γ_i ŝ_i^+ + Σ_{i=1}^p μ_i ŝ_i^- − Σ_{i=1}^p γ_i ε_i − γ_j ν,

and since λ_j + γ_j > 0 and ϵ ≤ Σ_{i=1}^p (λ_i + γ_i) ε_i, the objective value of (x, s^+, ŝ^-) is strictly less than that of (x̂, ŝ^+, ŝ^-) minus ϵ; this contradicts the ϵ-optimality of (x̂, ŝ^+, ŝ^-).

(ii) As in part (i), there is some ν > 0 such that f_j(x) + ε_j + ŝ_j^+ − ŝ_j^- + ν ≤ α_j and ŝ_j^- − ε_j − ν ≥ 0. Now define s_i^- = ŝ_i^- − ε_i for all i ≠ j and s_j^- = ŝ_j^- − ε_j − ν. So (x, ŝ^+, s^-) is feasible for SOP (3.1). With a calculation similar to that of part (i) we obtain

    Σ_{i=1}^p λ_i f_i(x) − Σ_{i=1}^p γ_i ŝ_i^+ + Σ_{i=1}^p μ_i s_i^-,

which, since λ_j + μ_j > 0 and ϵ ≤ Σ_{i=1}^p (λ_i + μ_i) ε_i, is strictly less than the objective value of (x̂, ŝ^+, ŝ^-) minus ϵ; this again contradicts the ϵ-optimality of (x̂, ŝ^+, ŝ^-).
Theorem 6.3. Let ε ∈ R^p_≧ and Σ_{i=1}^p λ_i = 1. If (x̂, ŝ^+, ŝ^-) is an ϵ-optimal solution of SOP (3.1) with f_i(x̂) + ŝ_i^+ − ŝ_i^- < α_i for all i, then x̂ is an ε-properly efficient solution of the MOP.

Proof. From Theorem 6.1 it follows that x̂ ∈ X_εE. Now we prove that x̂ is ε-properly efficient. By contradiction, assume x̂ ∉ X_εPE. Then there exists a sequence {M_β} of positive scalars such that lim_{β→∞} M_β = ∞ and, for each M_β, there is an x_β ∈ X and an index i ∈ {1, 2, ..., p} with f_i(x_β) < f_i(x̂) − ε_i and

    (f_i(x̂) − f_i(x_β) − ε_i) / (f_j(x_β) − f_j(x̂) + ε_j) > M_β          (6.1)

for each j ≠ i with f_j(x̂) − ε_j < f_j(x_β). Without loss of generality, we can consider an unbounded subsequence of {M_β} such that the index i and the set Q = {j : f_j(x_β) > f_j(x̂) − ε_j} are constant for each β. Now choose j ∈ {1, 2, ..., p}. We have two possible cases.

Case 1: If j ∉ Q, then

    f_j(x_β) ≤ f_j(x̂) − ε_j < α_j − ŝ_j^+ + ŝ_j^- − ε_j  ⇒  f_j(x_β) + ŝ_j^+ − ŝ_j^- + ε_j < α_j.

So there is some ν_{βj} > 0 such that f_j(x_β) + ŝ_j^+ − ŝ_j^- + ε_j + ν_{βj} ≤ α_j.

Case 2: If j ∈ Q, then f_j(x_β) > f_j(x̂) − ε_j. Since f(X) is bounded, by inequality (6.1) we have

    lim_{β→∞} f_j(x_β) = f_j(x̂) − ε_j < α_j − ŝ_j^+ + ŝ_j^- − ε_j.

Hence, there exists β_0 > 0 such that f_j(x_β) + ŝ_j^+ − ŝ_j^- + ε_j < α_j for all β ≥ β_0, and, for all β ≥ β_0, there is ν_{βj} > 0 such that f_j(x_β) + ŝ_j^+ − ŝ_j^- + ε_j + ν_{βj} ≤ α_j.

Now define s^+_{βj} = ŝ_j^+ + ε_j + ν_{βj} for all 1 ≤ j ≤ p and β ≥ β_0, so that (x_β, s^+_β, ŝ^-) is feasible for SOP (3.1) for all β ≥ β_0. On the other hand, when j ∈ Q and β ≥ β_0, we have

    lim_{β→∞} f_j(x_β) = f_j(x̂) − ε_j  ⇒  lim_{β→∞} ( −f_j(x_β) + f_j(x̂) − ε_j + γ_1 ν_{β1} ) = γ_1 ν_{β1} > 0.

So there exists some β′_0 > β_0 such that, for all β ≥ β′_0,

    f_j(x_β) < f_j(x̂) − ε_j + γ_1 ν_{β1}.          (a)

If j ∉ Q, we also have

    f_j(x_β) ≤ f_j(x̂) − ε_j.          (b)

Now select some β* > β′_0 and put (x̄, s̄^+, ŝ^-) = (x_{β*}, s^+_{β*}, ŝ^-). Applying (a) and (b), we can write:

    Σ_{i=1}^p λ_i f_i(x̄) − Σ_{i=1}^p γ_i s̄_i^+ + Σ_{i=1}^p μ_i ŝ_i^-
        = Σ_{i=1}^p λ_i f_i(x̄) − Σ_{i=1}^p γ_i (ŝ_i^+ + ε_i) − Σ_{i=1}^p γ_i ν_{β*i} + Σ_{i=1}^p μ_i ŝ_i^-
        = Σ_{i=1}^p λ_i f_i(x̄) − Σ_{i=1}^p γ_i ŝ_i^+ − Σ_{i=1}^p γ_i ε_i − Σ_{i=1}^p γ_i ν_{β*i} + Σ_{i=1}^p μ_i ŝ_i^-
        ≤ Σ_{i∉Q} λ_i (f_i(x̂) − ε_i) + Σ_{i∈Q} λ_i (f_i(x̂) − ε_i + γ_1 ν_{β*1}) − Σ_{i=1}^p γ_i ŝ_i^+ − Σ_{i=1}^p γ_i ε_i − Σ_{i=1}^p γ_i ν_{β*i} + Σ_{i=1}^p μ_i ŝ_i^-
        = Σ_{i=1}^p λ_i f_i(x̂) − Σ_{i=1}^p γ_i ŝ_i^+ + Σ_{i=1}^p μ_i ŝ_i^- − Σ_{i=1}^p (λ_i + γ_i) ε_i + γ_1 ν_{β*1} ( Σ_{i∈Q} λ_i ) − Σ_{i=1}^p γ_i ν_{β*i},

and since Σ_{i=1}^p λ_i = 1, this contradicts the ϵ-optimality of (x̂, ŝ^+, ŝ^-). The proof is complete.

6.2 Some necessary/sufficient conditions for problem (4.1)

Theorem. Let λ_i > 0 for all i and Σ_{i=1}^p λ_i = 1. If (x̂, ŝ) is an ϵ-optimal solution of SOP (4.1) with f_i(x̂) + ŝ_i < α_i for all i, then x̂ is an ε-properly efficient solution of the MOP.

In the next subsection, SOP (5.1) will be investigated.
6.3 Some necessary/sufficient conditions for problem (5.1)

In this subsection, we restrict our attention to SOP (5.1). We provide some necessary/sufficient conditions for characterizing ε-(weakly, properly) efficient points of the MOP via SOP (5.1). The following theorem provides a sufficient condition for ε-weak efficiency.
Theorem 6.7. Suppose ε ∈ R^p_≧ and ϵ ≤ Σ_{i=1}^p λ_i ε_i. If (x̂, ŝ) is an ϵ-optimal solution of SOP (5.1), then x̂ ∈ X_εWE.

Proof. The proof is easily obtained and is omitted here.

Remark 6.8. Theorem 6.7 extends Theorem 3.2 in [18], which is obtained with λ = e_k and μ_k = 0.

SOP (5.1) satisfies part (ii) of Theorem 6.1 and parts 2 and 4 of Theorem 6.2. Additionally, in the following theorem, we present further necessary conditions for ε-efficiency of the MOP.

Theorem 6.9. Given ε ∈ R^p_≧:
(1) If (x̂, ŝ) is a strictly ϵ-optimal solution of SOP (5.1) and ϵ ≤ Σ_{i=1}^p λ_i ε_i, then x̂ ∈ X_εE.
(2) If (x̂, ŝ) is an ϵ-optimal solution of SOP (5.1) and ϵ < Σ_{i=1}^p λ_i ε_i, then x̂ ∈ X_εE.

Proof. (1) Suppose x̂ is not ε-efficient. Then there exists some x ∈ X with f(x) ≤ f(x̂) − ε. It is a simple matter to show that (x, ŝ) is feasible for SOP (5.1). So

    Σ_{i=1}^p λ_i f_i(x) + Σ_{i=1}^p μ_i ŝ_i + Σ_{i=1}^p λ_i ε_i ≤ Σ_{i=1}^p λ_i f_i(x̂) + Σ_{i=1}^p μ_i ŝ_i
    ⇒ Σ_{i=1}^p λ_i f_i(x) + Σ_{i=1}^p μ_i ŝ_i + ϵ ≤ Σ_{i=1}^p λ_i f_i(x̂) + Σ_{i=1}^p μ_i ŝ_i,

a contradiction.
(2) Similar to part (1).

Remark 6.10. By letting λ = e_k and μ_k = 0, Theorem 6.9 reduces to Theorem 3.7 in [18].

The following theorem utilizes SOP (5.1) to provide a sufficient condition for ε-proper efficiency.

Theorem 6.11. Suppose ε ∈ R^p_≧ and ϵ ≤ Σ_{i=1}^p λ_i ε_i. If (x̂, ŝ) is an ϵ-optimal point of SOP (5.1) with λ + μ > 0 and ŝ > 0, then x̂ ∈ X_εPE.

Proof. Let us first prove that x̂ is ε-efficient. Without loss of generality, assume ŝ_i = max{0, f_i(x̂) − α_i}. Since ŝ > 0, it follows that ŝ_i = f_i(x̂) − α_i > 0 for all i. Suppose x̂ is not an ε-efficient solution. Then there exists x ∈ X such that f_i(x) ≤ f_i(x̂) − ε_i for all i and, for at least one index
j, fj (x) < fj (ˆ x) − εj . Define si = max{0, fi (x) − αi } for all i = j. Since fj (x) − sˆj < αj , there is some ν > 0 such that fj (x) − sˆj + ν ≤ αj and sj = sˆj − ν > 0. Put si = sˆi , ∀i = j and sj = sˆj + ν. Obviously, (x, s) is feasible for SOP (5.1) and s ≤ sˆ, specially, sj < sˆj . Because λj + μj > 0, x) + μj sˆj , λj fj (x) + λj εj + μj sj < λj fj (ˆ and for all i = j, λi fi (x) + λi εi + μi si ≤ λi fi (ˆ x) + μi sˆi . So, p
$$\sum_{i=1}^{p} \lambda_i f_i(x) + \sum_{i=1}^{p} \mu_i s_i + \sum_{i=1}^{p} \lambda_i \varepsilon_i < \sum_{i=1}^{p} \lambda_i f_i(\hat{x}) + \sum_{i=1}^{p} \mu_i \hat{s}_i$$
$$\Rightarrow \quad \sum_{i=1}^{p} \lambda_i f_i(x) + \sum_{i=1}^{p} \mu_i s_i + \epsilon < \sum_{i=1}^{p} \lambda_i f_i(\hat{x}) + \sum_{i=1}^{p} \mu_i \hat{s}_i,$$
which contradicts the $\epsilon$-optimality of $(\hat{x}, \hat{s})$. Hence, $\hat{x}$ is ε-efficient. Furthermore, $\hat{x}$ is an $\epsilon$-optimal solution of the problem
$$\min \ \sum_{i=1}^{p} (\lambda_i + \mu_i) f_i(x) \quad \text{s.t.} \quad f_i(x) > \alpha_i, \ \forall i, \ x \in X.$$
Now, since $\lambda + \mu > 0$, employing Theorem 2 in [31] yields $\hat{x}$ as an ε-properly efficient point of the problem with added constraints $f_i(x) > \alpha_i$, $\forall i$. Then, from a result similar to Lemma 3.3, we conclude that $\hat{x} \in X_{\varepsilon PE}$.

Remark 6.12. Letting $\lambda = e_k$ and $\mu_k = 0$, Theorem 6.11 reduces to Theorem 3.14 in [18].

In the following theorem, a necessary condition for ε-properly efficient solutions of MOP (2.1) is obtained. This theorem extends Theorem 3.21 in [18].

Theorem 6.13. Suppose $\hat{x} \in X_{\varepsilon PE}$ and $\sum_{i=1}^{p} \lambda_i = 1$. Then, there are
$\alpha$ with $\alpha_i < \infty$, $\hat{s}$, and $\hat{\mu}$ with $\hat{\mu}_i < \infty$, such that $(\hat{x}, \hat{s})$ is an $\epsilon$-optimal point of SOP (5.1) with $\epsilon = \sum_{i=1}^{p} (\lambda_i + \mu_i) \varepsilon_i$, for all $\mu \ge \hat{\mu}$.

Proof. Let $\alpha_i = f_i(\hat{x}) - \varepsilon_i$ and $\hat{s}_i = \varepsilon_i$ for all $i$. Since $\hat{x} \in X_{\varepsilon PE}$, there is $M > 0$ such that, for all $x \in X$ and all $i$ with $f_i(x) < f_i(\hat{x}) - \varepsilon_i$, there exists $j \ne i$ such that $f_j(\hat{x}) - \varepsilon_j < f_j(x)$ and
$$\frac{f_i(\hat{x}) - \varepsilon_i - f_i(x)}{f_j(x) - f_j(\hat{x}) + \varepsilon_j} < M.$$
We define $\hat{\mu}_i = M$, $\forall i$. Let $x \in X$ and let $s$ be such that
$$s_i = \max\{0, f_i(x) - \alpha_i\} = \max\{0, f_i(x) - f_i(\hat{x}) + \varepsilon_i\},$$
the smallest possible value it can take. We need to show that
$$\sum_{i=1}^{p} \lambda_i f_i(x) + \sum_{i=1}^{p} \mu_i s_i \ge \sum_{i=1}^{p} \lambda_i f_i(\hat{x}) + \sum_{i=1}^{p} \mu_i \hat{s}_i - \epsilon.$$
Let $k \in \{1, 2, \ldots, p\}$. We have two possible cases:

Case 1) If $f_k(x) \ge f_k(\hat{x}) - \varepsilon_k$, then
$$f_k(x) + \sum_{i=1}^{p} \hat{\mu}_i s_i \ge f_k(\hat{x}) - \varepsilon_k. \qquad (a)$$
Case 2) If $f_k(x) < f_k(\hat{x}) - \varepsilon_k$, set $I^* = \{i : f_i(x) > f_i(\hat{x}) - \varepsilon_i\}$. The set $I^* \ne \emptyset$ because $\hat{x} \in X_{\varepsilon PE}$. We can write:
$$f_k(x) + \sum_{i=1}^{p} \hat{\mu}_i s_i = f_k(x) + \sum_{i=1}^{p} \hat{\mu}_i \max\{0, f_i(x) - f_i(\hat{x}) + \varepsilon_i\} = f_k(x) + \sum_{i \in I^*} \hat{\mu}_i \big(f_i(x) - f_i(\hat{x}) + \varepsilon_i\big).$$
Since $\hat{x} \in X_{\varepsilon PE}$ and $f_k(x) < f_k(\hat{x}) - \varepsilon_k$, there is $k^* \in I^*$ such that
$$\frac{f_k(\hat{x}) - f_k(x) - \varepsilon_k}{f_{k^*}(x) - f_{k^*}(\hat{x}) + \varepsilon_{k^*}} < M.$$
So, we have
$$f_k(x) + \sum_{i \in I^*} \hat{\mu}_i \big(f_i(x) - f_i(\hat{x}) + \varepsilon_i\big) \ge f_k(x) + \hat{\mu}_{k^*} \big(f_{k^*}(x) - f_{k^*}(\hat{x}) + \varepsilon_{k^*}\big)$$
$$> f_k(x) + \frac{f_k(\hat{x}) - f_k(x) - \varepsilon_k}{f_{k^*}(x) - f_{k^*}(\hat{x}) + \varepsilon_{k^*}} \big(f_{k^*}(x) - f_{k^*}(\hat{x}) + \varepsilon_{k^*}\big) = f_k(\hat{x}) - \varepsilon_k. \qquad (b)$$
Applying (a) and (b), we have:
$$\sum_{i=1}^{p} \lambda_i f_i(x) + \sum_{i=1}^{p} \lambda_i \Big( \sum_{j=1}^{p} \hat{\mu}_j s_j \Big) \ge \sum_{i=1}^{p} \lambda_i f_i(\hat{x}) - \sum_{i=1}^{p} \lambda_i \varepsilon_i,$$
and since $\sum_{i=1}^{p} \lambda_i = 1$, we have
$$\sum_{i=1}^{p} \lambda_i f_i(x) + \sum_{i=1}^{p} \hat{\mu}_i s_i \ge \sum_{i=1}^{p} \lambda_i f_i(\hat{x}) + \sum_{i=1}^{p} \mu_i \varepsilon_i - \epsilon = \sum_{i=1}^{p} \lambda_i f_i(\hat{x}) + \sum_{i=1}^{p} \mu_i \hat{s}_i - \epsilon.$$
The above inequality is true for all $\mu \ge \hat{\mu}$, and the proof is completed.
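The construction in the proof of Theorem 6.13 can be spot-checked numerically. The sketch below is our own illustration: the toy biobjective problem, the point $\hat{x}$, the tolerance $\varepsilon$, and the constant $M = 10$ are all assumptions chosen only to exercise the recipe $\alpha_i = f_i(\hat{x}) - \varepsilon_i$, $\hat{s}_i = \varepsilon_i$, $\hat{\mu}_i = M$, $\epsilon = \sum_i (\lambda_i + \hat{\mu}_i)\varepsilon_i$.

```python
# Numerical spot-check of the construction in Theorem 6.13 on a toy
# biobjective problem (all problem data below are our own illustrative
# choices).  With alpha_i = f_i(x_hat) - eps_i, s_hat_i = eps_i,
# mu_hat_i = M and eps_scalar = sum_i (lambda_i + mu_hat_i) * eps_i,
# every (x, s) with the minimal feasible slack should satisfy
#   sum_i lambda_i f_i(x) + sum_i mu_hat_i s_i
#     >= sum_i lambda_i f_i(x_hat) + sum_i mu_hat_i s_hat_i - eps_scalar.

f = [lambda x: x, lambda x: (x - 2.0) ** 2]      # f1, f2 on X = [0, 2]
xs = [i / 500.0 for i in range(1001)]
lam = [0.5, 0.5]
eps = [0.1, 0.1]
x_hat = 1.0                                       # an efficient point of this toy MOP
f_hat = [g(x_hat) for g in f]
alpha = [fh - e for fh, e in zip(f_hat, eps)]
s_hat = eps[:]
M = 10.0                                          # large enough for this instance
mu_hat = [M, M]
eps_scalar = sum((l + m) * e for l, m, e in zip(lam, mu_hat, eps))

# Left-hand side, minimized over the grid with minimal slacks.
lhs_min = min(
    sum(l * g(x) for l, g in zip(lam, f))
    + sum(m * max(0.0, g(x) - a) for m, g, a in zip(mu_hat, f, alpha))
    for x in xs
)
rhs = (sum(l * fh for l, fh in zip(lam, f_hat))
       + sum(m * sh for m, sh in zip(mu_hat, s_hat)) - eps_scalar)
print(lhs_min >= rhs)
```

On this instance the inequality holds with a comfortable margin, matching the theorem's assertion that $(\hat{x}, \hat{s})$ is $\epsilon$-optimal for any sufficiently large slack penalty.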
7 Conclusion

In this paper, a general scalarization technique for solving multiobjective optimization problems has been proposed. It is shown that some well-known methods, such as the weighted sum method, the ε-constraint method, the Benson method, the hybrid method and the elastic ε-constraint method, can be seen as special cases and subsumed under this general problem. With this problem, we are able to prove some results on the (weak, proper) efficiency of optimal solutions. Additionally, we relax the constraints and obtain some further results. On the other hand, by considering the relationships between $\varepsilon \in \mathbb{R}^p_{+}$ and $\epsilon \in \mathbb{R}$ for the multiobjective and the presented problems, we derived necessary and/or sufficient conditions for ε-(weakly, properly) efficient solutions of the MOP (2.1). Our results extend some results obtained by Ehrgott and Ruzika [11], Engau and Wiecek [14], Ghaznavi and Khorram [18] and Ghaznavi et al. [19]. In Tables 2 and 3, we summarize some of the results obtained for these problems.
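The subsumption claim can be made concrete with a small sketch. The parameter names below ($\lambda$, $\mu$, $\alpha$) echo the general formulation, but the concrete functional form, the toy problem, and all numeric values are our own assumptions (the exact definitions of SOPs (3.1)-(5.1) lie outside this excerpt): with the slack penalties removed, the scalarized objective collapses to the weighted sum, while $\lambda = e_1$ with a very large penalty on the second slack approximates the ε-constraint method.

```python
# A schematic of how classical methods arise as parameter choices of a
# general scalarization  min sum_i lambda_i f_i(x) + penalty(slacks).
# Everything here (names, the toy problem, values) is an illustrative sketch.

def general_scalarization(x, objs, lam, mu, alpha):
    """Objective value of an elastic scalarization at x."""
    f = [g(x) for g in objs]
    s = [max(0.0, fi - ai) for fi, ai in zip(f, alpha)]
    return (sum(l * fi for l, fi in zip(lam, f))
            + sum(m * si for m, si in zip(mu, s)))

objs = [lambda x: x, lambda x: (x - 2.0) ** 2]
xs = [i / 1000.0 for i in range(2001)]            # X = [0, 2] discretized

# Weighted-sum method: mu = 0, so no penalty term survives.
ws = min(xs, key=lambda x: general_scalarization(
    x, objs, [0.3, 0.7], [0.0, 0.0], [0.0, 0.0]))

# epsilon-constraint method (approximated elastically): lam = e_1, and a
# large penalty mu_2 enforces f_2(x) <= alpha_2 almost exactly.
ec = min(xs, key=lambda x: general_scalarization(
    x, objs, [1.0, 0.0], [0.0, 1e6], [float("inf"), 0.25]))

print(ws, ec)
```

For these data, the weighted-sum choice returns the minimizer of $0.3 f_1 + 0.7 f_2$, and the elastic variant returns (up to grid resolution) the smallest $f_1$ subject to $f_2(x) \le 0.25$, i.e. $x = 1.5$.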
Table 2: Summary of results obtained for SOPs (3.1), (4.1) and (5.1)

| Scalarization method | Parameters | Implication for $\hat{x}$ | Reference |
| --- | --- | --- | --- |
| SOP (3.1) | $\lambda + \gamma \ge 0$ or ($\lambda + \mu \ge 0$, $\hat{s}^- > 0$) | $\hat{x} \in X_{wE}$ | Theorem 3.2 (i) |
| SOP (3.1) | $\lambda + \gamma \ge 0$ or ($\lambda + \mu \ge 0$, $\hat{s}^- > 0$), and $\hat{x}$ is a unique solution | $\hat{x} \in X_{sE}$ | Theorem 3.2 (ii) |
| SOP (3.1) | $\lambda + \gamma \ge 0$ or ($\lambda + \mu \ge 0$, $\hat{s}^- > 0$) | $\hat{x} \in X_{E}$ | Theorem 3.2 (iii) |
| SOP (4.1) | $\lambda + \gamma > 0$ | $\hat{x} \in X_{E}$ | Theorem 4.2 (1) |
| SOP (4.1) | $\lambda + \gamma \ge 0$ | $\hat{x} \in X_{wE}$ | Theorem 4.2 (2) |
| SOP (4.1) | $\lambda + \gamma \ge 0$ and $\hat{x}$ is a unique solution | $\hat{x} \in X_{E}$ | Theorem 4.2 (3) |
| SOP (4.1) | $\lambda + \gamma > 0$, $\hat{s} > 0$ and all constraints are active | $\hat{x} \in X_{pE}$ | Theorem 4.4 |
| SOP (5.1) | $\lambda \ne 0$ or ($\lambda + \mu \ge 0$ and $\hat{s} > 0$) | $\hat{x} \in X_{wE}$ | Theorem 5.2 (1) |
| SOP (5.1) | $\lambda \ne 0$ or ($\lambda + \mu \ge 0$ and $\hat{s} > 0$), and $\hat{x}$ is a unique solution | $\hat{x} \in X_{sE}$ | Theorem 5.2 (2) |
| SOP (5.1) | $\lambda > 0$ | $\hat{x} \in X_{E}$ | Theorem 5.2 (3,a) |
| SOP (5.1) | $\lambda + \mu > 0$, $\hat{s} > 0$ | $\hat{x} \in X_{pE}$ | Theorem 5.3 |
Table 3: Sufficient conditions for generating ε-(weakly, properly) efficient points via SOPs (3.1), (4.1) and (5.1)

| Scalarization method | Parameters | Results | Reference |
| --- | --- | --- | --- |
| SOP (3.1) | $\lambda + \gamma > 0$, $\epsilon \le \sum_{i=1}^{p} (\lambda_i + \gamma_i)\varepsilon_i$ | $\epsilon$-opt. → ε-eff. | Theorem 6.1 (i) |
| SOP (3.1) | $\lambda + \mu > 0$, $\epsilon \le \sum_{i=1}^{p} (\lambda_i + \gamma_i)\varepsilon_i$, $\varepsilon < \hat{s}^-$ | $\epsilon$-opt. → ε-eff. | Theorem 6.1 (ii) |
| SOP (3.1) | $\lambda + \gamma \ge 0$, $\epsilon \le \sum_{i=1}^{p} (\lambda_i + \gamma_i)\varepsilon_i$ | $\epsilon$-opt. → ε-weak eff. | Theorem 6.2 (1) |
| SOP (3.1) | $\lambda + \mu \ge 0$, $\epsilon \le \sum_{i=1}^{p} (\lambda_i + \gamma_i)\varepsilon_i$, $\varepsilon < \hat{s}^-$ | strict $\epsilon$-opt. → ε-weak eff. | Theorem 6.2 (2) |
| SOP (3.1) | $\lambda + \gamma \ge 0$, $\epsilon < \sum_{i=1}^{p} (\lambda_i + \gamma_i)\varepsilon_i$ | $\epsilon$-opt. → ε-eff. | Theorem 6.2 (3) |
| SOP (3.1) | $\lambda + \mu \ge 0$, $\epsilon < \sum_{i=1}^{p} (\lambda_i + \gamma_i)\varepsilon_i$, $\varepsilon < \hat{s}^-$ | $\epsilon$-opt. → ε-eff. | Theorem 6.2 (4) |
| SOP (3.1) | $\gamma > 0$, $\epsilon \le \sum_{i=1}^{p} (\lambda_i + \gamma_i)\varepsilon_i$, $\sum_{i=1}^{p} \lambda_i = 1$ | $\epsilon$-opt. → ε-pro. eff. | Theorem 6.3 |
| SOP (4.1) | $\gamma > 0$, $\epsilon \le \sum_{i=1}^{p} (\lambda_i + \gamma_i)\varepsilon_i$, all the constraints are inactive | $\epsilon$-opt. → ε-pro. eff. | Theorem 6.6 |
| SOP (5.1) | $\epsilon \le \sum_{i=1}^{p} \lambda_i \varepsilon_i$ | $\epsilon$-opt. → ε-weak eff. | Theorem 6.7 |
| SOP (5.1) | $\epsilon \le \sum_{i=1}^{p} \lambda_i \varepsilon_i$ | strict $\epsilon$-opt. → ε-eff. | Theorem 6.9 (1) |
| SOP (5.1) | $\epsilon < \sum_{i=1}^{p} \lambda_i \varepsilon_i$ | $\epsilon$-opt. → ε-eff. | Theorem 6.9 (2) |
| SOP (5.1) | $\epsilon \le \sum_{i=1}^{p} \lambda_i \varepsilon_i$, $\lambda + \mu > 0$, $\hat{s} > 0$ | $\epsilon$-opt. → ε-pro. eff. | Theorem 6.11 |
References

[1] Beldiman, M., Panaitescu, E., Dogaru, L., Approximate quasi efficient solutions in multiobjective optimization, Bull. Math. Soc. Sci. Math. Roumanie, 99, (2008) 109-121.
[2] Benson, H., An improved definition of proper efficiency for vector maximization with respect to cones, J. Math. Anal. Appl., 71, (1979) 232-241.
[3] Benson, H., Hybrid approach for solving multiple objective programs in outcome space, J. Optim. Theory Appl., 98, (1998) 17-35.
[4] Borwein, J., Proper efficient points for maximization with respect to cones, SIAM J. Control Optim., 15, (1977) 57-63.
[5] Chankong, V., Haimes, Y., Multiobjective Decision Making: Theory and Methodology, Elsevier Science Publishing Co., New York, (1983).
[6] Das, I., Dennis, J. E., Normal boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems, SIAM J. Optim., 8, (1998) 631-657.
[7] Dutta, J., Vetrivel, V., On approximate minima in vector optimization, Numer. Funct. Anal. Optim., 22, (2001) 845-859.
[8] Ehrgott, M., Multicriteria Optimization, Springer, Berlin, (2000).
[9] Ehrgott, M., Klamroth, K., Schwehm, C., An MCDM approach to portfolio optimization, Eur. J. Oper. Res., 155, (2004) 752-770.
[10] Ehrgott, M., Wiecek, M., Multiobjective programming, In: Figueira, J., Greco, S., Ehrgott, M. (Eds.), Multicriteria Decision Analysis: State of the Art Surveys, Springer, (2005) 667-722.
[11] Ehrgott, M., Ruzika, S., Improved ε-constraint method for multiobjective programming, J. Optim. Theory Appl., 138, (2008) 375-396.
[12] Ehrgott, M., Klamroth, K., Schwehm, C., An MCDM approach to portfolio optimization, Eur. J. Oper. Res., 155, (2004) 752-770.
[13] Eichfelder, G., Scalarizations for adaptively solving multi-objective optimization problems, Comput. Optim. Appl., 44, (2009) 249-273.
[14] Engau, A., Wiecek, M. M., Generating epsilon-efficient solutions in multiobjective programming, Eur. J. Oper. Res., 177, (2007) 1566-1579.
[15] Gao, Y., Yang, X., Lee, H. W. J., Optimality conditions for approximate solutions in multiobjective optimization problems, J. Inequal. Appl., (2010) Article No. 620928.
[16] Gao, Y., Yang, X., Teo, K. L., Optimality conditions for approximate solutions in vector optimization problems, J. Ind. Manag. Optim., 7, (2011) 483-496.
[17] Geoffrion, A. M., Proper efficiency and the theory of vector maximization, J. Math. Anal. Appl., 22, (1968) 618-630.
[18] Ghaznavi, B. A., Khorram, E., On approximating weakly/properly efficient solutions in multi-objective programming, Mathematical and Computer Modelling, 54, (2011) 3172-3181.
[19] Ghaznavi, B. A., Khorram, E., Soleimani-Damaneh, M., Scalarization for characterization of approximate strong/weak/proper efficiency in multiobjective optimization, Optimization, (2012) 1-18.
[20] Guddat, J., Guerra, F., Tammer, K., Wendler, K., Multiobjective and Stochastic Optimization Based on Parametric Optimization, Akademie-Verlag, Berlin, (1985).
[21] Gutierrez, C., Jimenez, B., Novo, V., On approximate solutions in vector optimization problems via scalarization, Comput. Optim. Appl., 35, (2006) 305-324.
[22] Gutierrez, C., Jimenez, B., Novo, V., Optimality conditions for metrically consistent approximate solutions in vector optimization, J. Optim. Theory Appl., 133, (2007) 49-64.
[23] Hartley, R., On cone-efficiency, cone-convexity and cone-compactness, SIAM J. Appl. Math., 34, (1978) 211-222.
[24] Hillermeier, C., Jahn, J., Multiobjective optimization: survey of methods and industrial applications, Surv. Math. Ind., 11, (2005) 1-42.
[25] Hwang, C. L., Masud, A. S. M., Multiple Objective Decision Making: Methods and Applications, Lecture Notes in Economics and Mathematical Systems, 164, (1979).
[26] Huang, X. X., Yang, X. Q., On characterization of proper efficiency for nonconvex multiobjective optimization, J. Glob. Optim., 23, (2002) 213-231.
[27] Hutterer, A., Jahn, J., On the location of antennas for treatment planning in hyperthermia, OR Spectrum, 25, (2003) 397-412.
[28] Jahn, J., Vector Optimization: Theory, Applications and Extensions, Springer, Berlin, (2004).
[29] Kutateladze, S. S., Convex ε-programming, Soviet Mathematics-Doklady, 20, (1979) 391-393.
[30] Li, Z., Wang, S., ε-efficient solutions in multiobjective optimization, Optimization, 44, (1998) 161-174.
[31] Liu, J. C., ε-properly efficient solutions to nondifferentiable multiobjective programming problems, Applied Mathematics Letters, 12, (1999) 109-113.
[32] Loridan, P., ε-solutions in vector minimization problems, J. Optim. Theory Appl., 43, (1984) 265-276.
[33] Luque, M., Ruiz, F., Miettinen, K., Global formulation for interactive multiobjective optimization, OR Spectrum, 33, (2011) 27-48.
[34] Marler, R. T., Arora, J. S., Review of multiobjective optimization concepts and methods for engineering, Technical Report ODL-01.01, Optimal Design Laboratory, University of Iowa, (2003).
[35] Marler, R. T., Arora, J. S., The weighted sum method for multiobjective optimization: new insights, Struct. Multidisc. Optim., 41, (2010) 853-862.
[36] Mavrotas, G., Effective implementation of the ε-constraint method in multi-objective mathematical programming problems, Applied Mathematics and Computation, 213, (2009) 455-465.
[37] Romero, C., Extended lexicographic goal programming: a unified approach, Omega, 29, (2001) 63-71.
[38] Ruiz, F., Luque, M., Miettinen, K., Improving the computational efficiency in a global formulation (GLIDE) for interactive multiobjective optimization, Annals of Operations Research, 197(1), (2012) 47-70.
[39] Ruzika, S., Wiecek, M. M., Approximation methods in multiobjective programming, J. Optim. Theory Appl., 126(3), (2005) 473-501.
[40] Steuer, R. E., Na, P., Multiple criteria decision making combined with finance: a categorized bibliographic study, Eur. J. Oper. Res., 150, (2003) 496-515.
[41] White, D. J., Epsilon efficiency, J. Optim. Theory Appl., 49, (1986) 319-337.
Research highlights
• A new scalarization technique for multiobjective programming is presented.
• It is shown that some well-known scalarization methods can be seen as special cases of it.
• We prove some results on (weakly, properly) efficient solutions.
• We deal with approximate solutions and derive some necessary/sufficient conditions.
• We summarize the obtained results in two tables.