Mathematical and Computer Modelling 48 (2008) 69–82 www.elsevier.com/locate/mcm
A study on optimality and duality theorems of nonlinear generalized disjunctive fractional programming

E.E. Ammar
Department of Mathematics, Faculty of Science, Tanta University, Egypt

Received 10 November 2006; accepted 19 September 2007

Abstract

This paper studies necessary and sufficient optimality conditions for convex–concave generalized fractional disjunctive programming problems, in which the decision set is the union of a family of convex sets. The Lagrangian function for such problems is defined, and the Kuhn–Tucker saddle and stationary points are characterized. In addition, some important theorems related to the Kuhn–Tucker problem for saddle and stationary points are established. Moreover, a general dual problem is formulated, and weak, strong and converse duality theorems are proved. Illustrative examples are given throughout the paper to clarify and implement the developed theory.

© 2007 Elsevier Ltd. All rights reserved.
Keywords: Generalized fractional programming; Disjunctive programming; Convexity; Concavity; Optimality; Duality
E-mail address: [email protected].
0895-7177/$ - see front matter © 2007 Elsevier Ltd. All rights reserved. doi:10.1016/j.mcm.2007.09.002

1. Introduction

Fractional programming models have become a subject of wide interest, since they provide a universal apparatus for a wide class of models in corporate planning, agricultural planning, public policy decision making, financial analysis of a firm, marine transportation, health care, educational planning, and bank balance sheet management. However, considering only one criterion at a time usually does not represent real-life problems well, because almost always two or more objectives are associated with a problem. Generally, the objectives conflict with each other, so one cannot optimize all of them simultaneously. Nondifferentiable fractional programming problems play a very important role in formulating the set of most preferred solutions, from which a decision maker can select the optimal solution. Chang [8] gave an approximate approach for solving fractional programs with absolute-value functions. Chen [9] introduced higher-order symmetric duality in nondifferentiable multiobjective programming problems. Benson [6] studied two global optimization problems, each of which involves maximizing a ratio of two convex functions, where at least one of the two convex functions is of quadratic form. Frenk [11] gave some general results on the above problem of Benson. The Karush–Kuhn–Tucker conditions for an optimization problem with an interval-valued objective function were derived by Wu [28].

Balas introduced disjunctive programs in [3,4]. The convex hull of the feasible points has been characterized for these programs, for a class of problems that subsumes pure and mixed integer programs and many other nonconvex programming problems, in [5]. Helbig presented in [16,17] optimality criteria for disjunctive optimization problems together with some of their applications. Gugat studied in [14,15] an optimization problem with convex objective functions whose solution set is the union of a family of convex sets. Grossmann proposed in [13] a convex nonlinear relaxation of the nonlinear convex disjunctive programming problem. Some topics in optimizing disjunctive constraint functions were introduced by Sherali in [27]. In [7], Ceria studied the problem of finding the minimum of a convex function on the closure of the convex hull of the union of a finite number of closed convex sets. The dual of the disjunctive linear fractional programming problem was studied by Patkar in [24]. Eremin introduced in [10] the disjunctive Lagrangian function and gave sufficient conditions for optimality in terms of its saddle points. A duality theory for disjunctive linear programming problems of a special type was suggested by Gonçalves in [12]. Liang [20] gave sufficient optimality conditions for generalized convex fractional programming. Yang introduced in [29] two dual models for a generalized fractional programming problem. Optimality conditions and duality were considered in [23] for nondifferentiable multiobjective programming problems, and in [19,21] for nondifferentiable nonlinear fractional programming problems. Jain et al. [18] studied the solution of a generalized fractional programming problem. Optimality conditions for generalized fractional programming involving nonsmooth Lipschitz functions were established by Liu in [22]. Roubi [25] proposed an algorithm to solve the generalized fractional programming problem. Xu [30] presented two duality models for generalized fractional programming and established the corresponding duality theorems.
Necessary and sufficient optimality conditions for nonlinear fractional disjunctive programming problems, for which the decision set is the union of a family of convex sets, were introduced in [1]. Optimality conditions and duality for nonlinear fractional disjunctive minmax programming problems were considered in [2]. In this paper we define the Lagrangian function for the nonlinear generalized disjunctive fractional programming problem and investigate optimality conditions. For this class of problems, duals of Mond–Weir and Schaible type are proposed, and weak, strong and converse duality theorems are established for each dual problem.

2. Problem statement

Assume that N = \{1, 2, \ldots, p\} and K = \{1, 2, \ldots, q\} are arbitrary nonempty index sets. For i \in N, let g^i_j : R^n \to R, 1 \le j \le m, be convex functions defining the constraints g^i_j(x) \le 0. Suppose that f^{ik}, h^{ik} : R^n \to R are convex and concave functions, respectively, for i \in N, k \in K, with h^{ik}(x) > 0. We consider the generalized disjunctive convex–concave fractional programming problem in the following form:

    \mathrm{GDF}(i): \quad \inf_{x \in Z_i} \max_{k \in K} \frac{f^{ik}(x)}{h^{ik}(x)}, \qquad i \in N,    (1)

subject to

    x \in Z_i = \{x \in R^n : g^i_j(x) \le 0,\ j = 1, \ldots, m\}.    (2)

Assume that Z_i \ne \emptyset for i \in N.
Lemma 1 ([7]). Let \alpha^k, \beta^k, k \in K, be real numbers with \alpha^k > 0 for each k \in K. Then

    \max_{k \in K} \frac{\beta^k}{\alpha^k} \;\ge\; \frac{\sum_{k \in K} \beta^k}{\sum_{k \in K} \alpha^k}.    (3)
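As a quick sanity check of Lemma 1 (a standalone numerical sketch, not part of the original paper), the inequality can be tested on random data:

```python
# Numerical sanity check of Lemma 1: for alpha^k > 0,
#   max_k beta^k / alpha^k >= (sum_k beta^k) / (sum_k alpha^k).
import random

def lemma1_holds(alpha, beta):
    lhs = max(b / a for a, b in zip(alpha, beta))
    rhs = sum(beta) / sum(alpha)
    return lhs >= rhs - 1e-12  # small tolerance for floating point

random.seed(0)
for _ in range(1000):
    q = random.randint(1, 6)
    alpha = [random.uniform(0.1, 5.0) for _ in range(q)]
    beta = [random.uniform(-5.0, 5.0) for _ in range(q)]
    assert lemma1_holds(alpha, beta)
print("Lemma 1 verified on 1000 random instances")
```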
By using Lemma 1, the generalized fractional problem GDF(i) may be reformulated as the parametric problem

    \mathrm{GDF}(i,t): \quad \inf_{i \in N} \inf_{x \in Z_i} \frac{\sum_{k=1}^{K} t^k f^{ik}(x)}{\sum_{k=1}^{K} t^k h^{ik}(x)},    (4)

where t = (t^1, \ldots, t^q) \in R^q_+. Denote by

    M_i = \inf_{x \in Z_i} \left\{ \frac{\sum_{k=1}^{K} t^k f^{ik}(x)}{\sum_{k=1}^{K} t^k h^{ik}(x)} : i \in N \right\}

the minimal value of GDF(i,t), and let

    P_i = \left\{ x \in Z_i : \frac{\sum_{k=1}^{K} t^k f^{ik}(x)}{\sum_{k=1}^{K} t^k h^{ik}(x)} = M_i,\ i \in N \right\}

be the set of solutions of GDF(i,t).
The generalized disjunctive fractional programming problem is formulated as:

    \mathrm{GDF}(t): \quad \inf_{i \in N} \inf_{x \in Z} \frac{\sum_{k=1}^{K} t^k f^{ik}(x)}{\sum_{k=1}^{K} t^k h^{ik}(x)},    (5)

where t^k \in R^q_+, k \in K, and Z = \bigcup_{i \in N} Z_i is the feasible solution set of problem GDF(t). For problem GDF(t), we define the following sets:

(I) M = \inf_{i \in N} M_i is the minimal value of GDF(t).
(II) Z^* = \left\{ x \in Z : \exists\, i \in I(x),\ \frac{\sum_{k=1}^{K} t^k f^{ik}(x)}{\sum_{k=1}^{K} t^k h^{ik}(x)} = M \right\} is the set of solutions of GDF(t),

where I(x) = \{i \in I : x \in S\}, I = \{i \in N : Z^* \ne \emptyset\} and I = \{1, 2, \ldots, a\} \subseteq N.

Problem GDF(t) may be reformulated in the following parametric form:

    \mathrm{GDF}(t,d): \quad \inf_{i \in I} \inf_{x \in Z} F^i(x,t,d^i) = \sum_{k=1}^{K} t^k f^{ik}(x) - d^i \left( \sum_{k=1}^{K} t^k h^{ik}(x) \right),    (6)

where d^i = \frac{\sum_{k=1}^{K} t^k f^{ik}(x)}{\sum_{k=1}^{K} t^k h^{ik}(x)} > 0.
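The parametric form GDF(t,d) is exactly the objective used in Dinkelbach-type methods for fractional programs: minimize F(x,d) = f(x) - d h(x) and update d to the current ratio until F vanishes. The following sketch illustrates the iteration on a hypothetical single-ratio toy instance (not from the paper), with f convex and h concave and positive on the feasible interval:

```python
# Dinkelbach-type iteration for the parametric problem GDF(t, d): repeatedly
# minimize F(x, d) = f(x) - d * h(x) over Z and update d to the ratio at the
# minimizer, until F(x, d) = 0. Toy single-ratio instance (hypothetical):
#   f(x) = x^2 + 1 (convex), h(x) = 3 - x (concave, positive on Z = [0, 1]).

def f(x): return x * x + 1.0
def h(x): return 3.0 - x

def argmin_on_interval(phi, lo, hi, steps=20000):
    # dense grid search; for the convex F(., d) a ternary search would also do
    best_x, best_v = lo, phi(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        v = phi(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

def dinkelbach(lo=0.0, hi=1.0, tol=1e-10, max_iter=50):
    x = lo
    d = f(x) / h(x)
    for _ in range(max_iter):
        x = argmin_on_interval(lambda z: f(z) - d * h(z), lo, hi)
        if abs(f(x) - d * h(x)) < tol:  # parametric value zero: d is optimal
            break
        d = f(x) / h(x)
    return x, d

x_star, d_star = dinkelbach()
# The ratio (x^2 + 1) / (3 - x) is increasing on [0, 1], so the minimum is
# attained at x = 0, where the ratio equals 1/3.
print(x_star, d_star)
```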
We define the Lagrangian functions of problems GDF(t,d) and GDF(t) [20,23,24] in the following forms:

    GL^i(x, \mu^i) = F^i(x,t,d^i) + \sum_{j=1}^{m} \mu^i_j g^i_j(x),    (7)

and

    L^i(x, u, \mu) = \frac{u^i \sum_{k=1}^{K} t^k f^{ik}(x) + \sum_{j=1}^{m} \mu^i_j g^i_j(x)}{u^i \sum_{k=1}^{K} t^k h^{ik}(x)},    (8)

where \mu^i_j \ge 0, j = 1, \ldots, m, and u^i \in R^a_+ are Lagrangian multipliers. Then the Lagrangian functions GL(x, \mu) and L(x, u, \mu) of GDF(t,d) and GDF(t) are defined by:

    GL(x, \mu) = \inf_{i \in I} GL^i(x, \mu^i) = \inf_{i \in I} \left\{ F^i(x,t,d^i) + \sum_{r=1}^{m} \mu^i_r g^i_r(x) \right\},    (9)

and

    L(x, u, \mu) = \inf_{i \in I} L^i(x, u, \mu) = \inf_{i \in I} \frac{u^i \sum_{k=1}^{K} t^k f^{ik}(x) + \sum_{j=1}^{m} \mu^i_j g^i_j(x)}{u^i \sum_{k=1}^{K} t^k h^{ik}(x)},    (10)

where x \in Z, t^k \in R^q_+, and u \in R^a_+, \mu \in R^m_+ are Lagrangian multipliers, respectively.
3. Kuhn–Tucker saddle point problem

Definition 3.1. A point (x^o, \mu^o) \in R^{n+p+m} with \mu^o \ge 0 is said to be a GL-saddle point of problem GDF(t,d) if and only if

    GL(x^o, \mu) \le GL(x^o, \mu^o) \le GL(x, \mu^o)    (11)

for all x \in R^{n+p} and \mu \in R^m_+.

Definition 3.2. A point (x^o, u^o, \mu^o) \in R^{n+p+m} with u^o \ge 0 and \mu^o \ge 0 is said to be an L-saddle point of problem GDF(t) if and only if

    L(x^o, u, \mu) \le L(x^o, u^o, \mu^o) \le L(x, u^o, \mu^o)    (12)

for all x \in R^{n+p}, u \in R^a_+ and \mu \in R^m_+.
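The saddle-point inequalities can be illustrated numerically. The toy problem below is hypothetical and chosen only for illustration: minimize (x - 1)^2 subject to x - 0.5 <= 0, whose Lagrangian has the saddle point (x^o, mu^o) = (0.5, 1):

```python
# Numerical illustration (hypothetical toy problem, not from the paper) of the
# saddle-point inequalities L(x^o, mu) <= L(x^o, mu^o) <= L(x, mu^o):
# minimize (x - 1)^2 subject to g(x) = x - 0.5 <= 0.
# The Kuhn-Tucker conditions give x^o = 0.5 and mu^o = 1.

def L(x, mu):
    # Lagrangian of the toy problem
    return (x - 1.0) ** 2 + mu * (x - 0.5)

x_o, mu_o = 0.5, 1.0

# Left inequality: mu -> L(x^o, mu) is maximized over mu >= 0 at mu^o
# (here it is constant in mu, since g(x^o) = 0).
assert all(L(x_o, mu) <= L(x_o, mu_o) + 1e-12 for mu in [0.0, 0.5, 1.0, 2.0, 10.0])

# Right inequality: x -> L(x, mu^o) is minimized at x^o.
grid = [i / 100.0 for i in range(-100, 201)]
assert all(L(x_o, mu_o) <= L(x, mu_o) + 1e-12 for x in grid)
print("saddle-point inequalities verified")
```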
Theorem 3.1 (Sufficient Optimality Criteria). If, for d^{oi} \ge 0, the point (x^o, \mu^o) is a saddle point of GL(x, \mu), and F^i(x,t,d^i) and g^i_j(x) are bounded convex functions, then x^o is a minimal solution of problem GDF(t,d).

Proof. Let (x^o, \mu^o) be a saddle point of GL(x, \mu). Then for all \mu \ge 0 in R^m and all x \in Z, it follows from (5) and (11) that

    \inf_{i \in I} GL^i(x^o, \mu^i) \le \inf_{i \in I} GL^i(x^o, \mu^{oi}) \le \inf_{i \in I} GL^i(x, \mu^{oi}).    (13)

Then

    \inf_{i \in I} F^i(x^o,t,d^{oi}) + \inf_{i \in I} \sum_{j=1}^{m} \mu^i_j g^i_j(x^o) \le \inf_{i \in I} F^i(x^o,t,d^{oi}) + \inf_{i \in I} \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x^o).

Hence

    \inf_{i \in I} \mu^i_j g^i_j(x^o) \le \inf_{i \in I} \mu^{oi}_j g^i_j(x^o) \le \mu^{oi}_j g^i_j(x).    (14)

Since \inf_{i \in I} g^i_j(x^o) \le g^i_j(x^o) for all i = 1, \ldots, a, we have

    \sum_{j=1}^{m} \mu^i_j \inf_{i \in I} g^i_j(x^o) \le \sum_{j=1}^{m} \mu^i_j g^i_j(x^o), \qquad i = 1, \ldots, a.

Thus

    \inf_{i \in I} \sum_{j=1}^{m} \mu^i_j \inf_{i \in I} g^i_j(x^o) \le \inf_{i \in I} \sum_{j=1}^{m} \mu^i_j g^i_j(x^o).    (15)

Now assume that

    g^s_j(x^o) = \inf_{i \in I} g^i_j(x^o) \quad \text{and} \quad \sum_{j=1}^{m} \mu^b_j g^s_j(x^o) = \inf_{i \in I} \sum_{j=1}^{m} \mu^i_j g^s_j(x^o).

From (14) and (15), we get

    \sum_{j=1}^{m} \mu^b_j g^s_j(x^o) = \inf_{i \in I} \sum_{j=1}^{m} \mu^i_j g^s_j(x^o) \le \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x^o), \qquad i = 1, \ldots, a.    (16)

Thus \sum_{j=1}^{m} \mu^b_j g^s_j(x^o) \le \sum_{j=1}^{m} \mu^{os}_j g^s_j(x^o), and hence

    \sum_{j=1}^{m} (\mu^b_j - \mu^{os}_j) g^s_j(x^o) \le 0.    (17)

Let \mu^b_j = \mu^{os}_j for j \ne c and \mu^b_c = \mu^{os}_c + 1. From (17) we get g^s_c(x^o) \le 0, and since this holds for each c = 1, \ldots, m, we get x^o \in Z_s \subseteq Z; i.e., x^o is a feasible point of GDF(t,d). Since \mu^{oi} \ge 0, it follows that

    \inf_{i \in I} \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x^o) \le 0.    (18)

By setting \mu^i_j = 0 in the first inequality of (13), we get

    \inf_{i \in I} \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x^o) \ge 0.    (19)

From (18) and (19), it follows that

    \inf_{i \in I} \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x^o) = 0.    (20)

By substituting (20) in the second inequality of (13), we get

    \inf_{i \in I} F^i(x^o,t^o,d^{oi}) \le \inf_{i \in I} F^i(x,t^o,d^{oi}) + \inf_{i \in I} \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x) \qquad \forall x \in Z.

Then

    \inf_{i \in I} F^i(x^o,t^o,d^{oi}) \le \inf_{i \in I} F^i(x,t^o,d^{oi}) \qquad \forall x \in Z.

Hence x^o is an optimal solution of GDF(t,d).
Corollary 3.1. If the point (x^o, u^o, \mu^o) is a saddle point of L(x, u, \mu), and F^i(x,t,d^i) and g^i_j(x) are bounded convex functions, then x^o is a minimal solution of problem GDF(t). The proof follows as in the proof of Theorem 3.1.

Assumption 3.1. Let q^i(x, y, d^i) be a convex function on Conv Z (Z = \bigcup_{i \in I} Z_i). If, for all x \in Conv Z, the functions F^i(x,t^o,d^{oi}) - F^i(x^o,t^o,d^{oi}), with x^o \in Conv Z, i \in I and t^o \in R^q_+, are bounded, then \inf_{i \in I} \{F^i(x,t^o,d^{oi}) - F^i(x^o,t^o,d^{oi})\} is a convex function on Conv Z.

Proposition 3.1. Under Assumption 3.1, if the system

    \inf_{i \in I} \{F^i(x,t^o,d^{oi}) - F^i(x^o,t^o,d^{oi})\} < 0,
    g^i_j(x) \le 0 \ \text{for at least one } i \in I,

has no solution on Conv Z, then there exist \mu^o \in R_+ and \mu^{oi} \in R^m_+, (\mu^o, \mu^{oi}) \ge 0, and t^o \in R^q_+ such that

    \mu^o \inf_{i \in I} \{F^i(x,t^o,d^{oi}) - F^i(x^o,t^o,d^{oi})\} + \inf_{i \in I} \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x) \ge 0 \qquad \forall x \in Conv Z.

Proof. Since the functions F^i(x,t,d^i) and g^i_j(x) are convex on Conv Z for each i, it follows from Assumption 3.1 that \inf_{i \in I} \{F^i(x,t^o,d^{oi}) - F^i(x^o,t^o,d^{oi})\}, for t^o \in R^q_+ and d^{oi} > 0, i \in I, is convex. Since the system

    \inf_{i \in I} \{F^i(x,t^o,d^{oi}) - F^i(x^o,t^o,d^{oi})\} < 0,
    g^i_j(x) \le 0 \ \text{for at least one } i \in I,

has no solution on Conv Z, there exist \mu^o \in R_+ and \mu^{oi} \in R^m_+, (\mu^o, \mu^{oi}) \ge 0, and t^o \in R^q_+ such that

    \mu^o \inf_{i \in I} \{F^i(x,t^o,d^{oi}) - F^i(x^o,t^o,d^{oi})\} + \inf_{i \in I} \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x) \ge 0 \qquad \forall x \in Conv Z \text{ and } i \in I.
Definition 3.3 (Constraint Qualification (CQ)). For all i \in I, we say that g^i_j(x) satisfies the constraint qualification iff there exists a point x \in Z such that g^i_j(x) < 0 for all j = 1, \ldots, m.

Theorem 3.2 (Necessary Optimality Criteria). Suppose that Assumption 3.1 holds, g^i_j(x), i \in I, j = 1, \ldots, m, satisfy the CQ, and x^o is an optimal solution of problem GDF(t,d). Then there exists \mu^o \ge 0 such that (x^o, \mu^o) is a saddle point of GL(x, \mu).

Proof. Since x^o is an optimal solution of GDF(t,d), then for t^o \in R^q_+ and d^{oi} > 0 the system

    \inf_{i \in I} F^i(x,t^o,d^{oi}) - \inf_{i \in I} F^i(x^o,t^o,d^{oi}) < 0,
    g^i_j(x) \le 0 \ \text{for at least one } i \in I = \{1, \ldots, a\},

has no solution on Conv Z. This implies that the system

    \inf_{i \in I} \{F^i(x,t^o,d^{oi}) - F^i(x^o,t^o,d^{oi})\} < 0,
    g^i_j(x) \le 0 \ \text{for at least one } i \in I = \{1, \ldots, a\},

has no solution on Conv Z. Then there exist \mu^o \in R and \mu^{oi} \in R^{am}, (\mu^o, \mu^{oi}) \ge 0, (\mu^o, \mu^{oi}) \ne 0, t^o \in R^q_+ and d^{oi} > 0 such that

    \mu^o \inf_{i \in I} \{F^i(x,t^o,d^{oi}) - F^i(x^o,t^o,d^{oi})\} + \inf_{i \in I} \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x) \ge 0 \qquad \forall x \in Conv Z,\ i \in I.    (21)

For x = x^o, we get \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x^o) \ge 0, i \in I, j = 1, \ldots, m. But from feasibility it follows that \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x^o) \le 0, i \in I. Thus inequality (21) takes the form

    \mu^o \{F^i(x,t^o,d^{oi}) - F^i(x^o,t^o,d^{oi})\} + \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x) \ge 0 \qquad \forall x \in Conv Z,\ i \in I.

Then, for \alpha^{oi}_j = \mu^{oi}_j / \mu^o, it follows that

    GL^i(x, \alpha^{oi}_j) = F^i(x,t^o,d^{oi}) + \sum_{j=1}^{m} \alpha^{oi}_j g^i_j(x) \ge GL^i(x^o, \alpha^{oi}_j) = F^i(x^o,t^o,d^{oi}) + \sum_{j=1}^{m} \alpha^{oi}_j g^i_j(x^o) \qquad \forall x \in Conv Z,\ i \in I.    (22)

For each x \in Conv Z, inequality (22) implies

    \inf_{i \in I} GL^i(x^o, \alpha^{oi}_j) \le \inf_{i \in I} GL^i(x, \alpha^{oi}_j); \quad \text{i.e.,} \quad GL(x^o, \alpha^o) \le GL(x, \alpha^o), \qquad \alpha^o = (\alpha^{o1}_1, \ldots, \alpha^{oa}_j).    (23)

Since \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x^o) = 0 and, for \mu^i_j \ge 0, \sum_{j=1}^{m} \mu^i_j g^i_j(x^o) \le 0, we get

    \sum_{j=1}^{m} \mu^i_j g^i_j(x^o) \le \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x^o), \qquad i \in I.

Adding \mu^o F^i(x^o,t^o,d^{oi}) to both sides, we get

    \mu^o F^i(x^o,t^o,d^{oi}) + \sum_{j=1}^{m} \mu^i_j g^i_j(x^o) \le \mu^o F^i(x^o,t^o,d^{oi}) + \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x^o), \qquad i \in I.

If \mu^o = 0, then from inequality (22) we would get \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x) \ge 0 for i \in I, x \in Conv Z, which contradicts the CQ condition. Thus \mu^o > 0, and consequently

    F^i(x^o,t^o,d^{oi}) + \sum_{j=1}^{m} \frac{\mu^i_j}{\mu^o} g^i_j(x^o) \le F^i(x^o,t^o,d^{oi}) + \sum_{j=1}^{m} \frac{\mu^{oi}_j}{\mu^o} g^i_j(x^o), \qquad i \in I,

and

    \inf_{i \in I} \left\{ F^i(x^o,t^o,d^{oi}) + \sum_{j=1}^{m} \alpha^i_j g^i_j(x^o) \right\} \le \inf_{i \in I} \left\{ F^i(x^o,t^o,d^{oi}) + \sum_{j=1}^{m} \alpha^{oi}_j g^i_j(x^o) \right\}.    (24)

From inequalities (23) and (24) we get

    \inf_{i \in I} \left\{ F^i(x^o,t^o,d^{oi}) + \sum_{j=1}^{m} \alpha^i_j g^i_j(x^o) \right\} \le \inf_{i \in I} \left\{ F^i(x^o,t^o,d^{oi}) + \sum_{j=1}^{m} \alpha^{oi}_j g^i_j(x^o) \right\} \le \inf_{i \in I} \left\{ F^i(x,t^o,d^{oi}) + \sum_{j=1}^{m} \alpha^{oi}_j g^i_j(x) \right\}, \qquad i \in I.

Hence GL(x^o, \mu) \le GL(x^o, \mu^o) \le GL(x, \mu^o).
Corollary 3.2. Suppose that Assumption 3.1 holds, g^i_j(x), i \in I, j = 1, \ldots, m, satisfy the CQ, and x^o is an optimal solution of problem GDF(t). Then there exist u^o \ge 0 and \mu^o \ge 0 such that (x^o, u^o, \mu^o) is a saddle point of L(x, u, \mu). The proof follows as in the proof of Theorem 3.2.

4. Kuhn–Tucker stationary point problem

Definition 4.1. A point (x^o, \mu^o), with x^o \in R^{n+p} and \mu^o \in R^m_+, if it exists, such that

    \nabla_x GL(x^o, \mu^o) \ge 0, \qquad x^o \nabla_x GL(x^o, \mu^o) = 0,    (25)
    \nabla_\mu GL(x^o, \mu^o) \ge 0, \qquad \mu^o \nabla_\mu GL(x^o, \mu^o) = 0,    (26)
    \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x^o) = 0, \qquad \mu^i_j \ge 0,\ i \in I,\ j = 1, \ldots, m,    (27)

is called a Kuhn–Tucker stationary point of problem GDF(t^0, d^0). Equivalently,

    \nabla_x \inf_{i \in I} \left\{ F^i(x^o,t^0,d^{0i}) + \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x^o) \right\} = 0, \qquad i \in I,    (28)
    g^i_j(x^o) \le 0, \qquad i \in I,\ j = 1, \ldots, m,    (29)
    \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x^o) = 0, \qquad \mu^i_j \ge 0,\ i \in I,\ j = 1, \ldots, m.    (30)
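Conditions (28)–(30) are the familiar stationarity, feasibility and complementary-slackness requirements. On a hypothetical toy instance (not from the paper) they can be checked with finite differences:

```python
# Finite-difference check of the Kuhn-Tucker stationary-point conditions
# (28)-(30) on a hypothetical toy instance (not from the paper):
# F(x) = (x - 1)^2, one constraint g(x) = x - 0.5 <= 0,
# candidate point x^o = 0.5 with multiplier mu^o = 1.

def F(x): return (x - 1.0) ** 2
def g(x): return x - 0.5

def grad(phi, x, eps=1e-6):
    # central finite-difference derivative
    return (phi(x + eps) - phi(x - eps)) / (2.0 * eps)

x_o, mu_o = 0.5, 1.0

stationarity = grad(lambda z: F(z) + mu_o * g(z), x_o)  # (28): should be 0
feasibility = g(x_o)                                    # (29): should be <= 0
complementarity = mu_o * g(x_o)                         # (30): should be 0

assert abs(stationarity) < 1e-6
assert feasibility <= 0.0
assert complementarity == 0.0
print("conditions (28)-(30) hold at (x^o, mu^o)")
```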
Definition 4.2. A point (x^o, u^o, \mu^o), with x^o \in R^{n+p}, u^o \in R^a_+ and \mu^o \in R^m_+, if it exists, such that

    \nabla_x L(x^o, u^o, \mu^o) \ge 0, \qquad x^o \nabla_x L(x^o, u^o, \mu^o) = 0,    (31)
    \nabla_u L(x^o, u^o, \mu^o) \ge 0, \qquad u^o \nabla_u L(x^o, u^o, \mu^o) = 0,    (32)
    \nabla_\mu L(x^o, u^o, \mu^o) \ge 0, \qquad \mu^o \nabla_\mu L(x^o, u^o, \mu^o) = 0,    (33)
    \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x^o) = 0, \qquad \mu^i_j \ge 0,\ i \in I,\ j = 1, \ldots, m,    (34)

is called a Kuhn–Tucker stationary point of problem GDF(t^0). Equivalently,

    \nabla_x \inf_{i \in I} \frac{u^{oi} \sum_{k=1}^{K} t^{0k} f^{ik}(x^o) + \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x^o)}{u^{oi} \sum_{k=1}^{K} t^{0k} h^{ik}(x^o)} = 0,    (35)
    g^i_j(x^o) \le 0, \qquad i \in I,\ j = 1, \ldots, m,    (36)
    \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x^o) = 0, \qquad \mu^i_j \ge 0,\ i \in I,\ j = 1, \ldots, m.    (37)
Theorem 4.1. Assume that F^i(x,t,d^i) and g^i_j(x), i \in I, j = 1, \ldots, m, are convex differentiable functions on Conv S, that F^i(x,t,d^i) and g^i_j(x) are bounded for each x \in Conv S, and that g^i_j(x) satisfy the CQ for i \in I. Then x^o is an optimal solution of GDF(t,d) if and only if there are Lagrange multipliers \mu^o \in R^{p+m}, \mu^o \ge 0, such that (25)–(27) are satisfied.

Proof (Necessity). Assume that x^o is an optimal solution of problem GDF(t,d). Then, from Proposition 3.1, there exists \mu^o \in R^{p+m}_+ such that (x^o, \mu^o) is a saddle point of GL(x, \mu); i.e., (x^o, \mu^o) satisfies inequality (11). Suppose that some component of \nabla_x GL(x^o, \mu^o) were negative. Then there would exist a vector x with components x_\ell = x^o_\ell, \ell \ne k, and x_k > x^o_k such that GL(x, \mu^o) < GL(x^o, \mu^o), contradicting the saddle-point inequality (11); hence \nabla_x GL(x^o, \mu^o) \ge 0. Since x^o \ge 0, all of the summands x^o_k \nabla_{x_k} GL(x^o, \mu^o) in the product x^o \nabla_x GL(x^o, \mu^o) are nonnegative. Now, if there existed k such that x^o_k \nabla_{x_k} GL(x^o, \mu^o) > 0 and x^o_k > 0, there would also exist a vector x with components x_\ell = x^o_\ell, \ell \ne k, and 0 \le x_k \le x^o_k such that GL(x, \mu^o) < GL(x^o, \mu^o), again contradicting the saddle-point inequality (11). This implies that x^o \nabla_x GL(x^o, \mu^o) = 0.

Since GL(x^o, \mu) is affine in \mu,

    GL(x^o, \mu) = GL(x^o, \mu^o) + (\mu - \mu^o) \nabla_\mu GL(x^o, \mu^o) \qquad \text{for } \mu \ge 0.

Since (x^o, \mu^o) \in R^{n+p+m} with x^o \ge 0 and \mu^o \ge 0 is a saddle point of GL(x, \mu), we have GL(x^o, \mu) \le GL(x^o, \mu^o); i.e.,

    (\mu - \mu^o) \nabla_\mu GL(x^o, \mu^o) \le 0 \qquad \forall \mu \in R^m.

For \mu > \mu^o we obtain \nabla_\mu GL(x^o, \mu^o) \le 0, and for \mu < \mu^o we obtain \nabla_\mu GL(x^o, \mu^o) \ge 0. Then \nabla_\mu GL(x^o, \mu^o) = 0; hence \mu^o \nabla_\mu GL(x^o, \mu^o) = 0.

(Sufficiency). Conversely, let (x^o, \mu^o) satisfy (25)–(27), with x^o \in S and \mu^o \in R^{p+m}. From the convexity and differentiability of \inf_{i \in I} F^i(x,t^*,d^{*i}), d^* > 0, and Assumption 3.1, we have

    \inf_{i \in I} F^i(x,t^*,d^{*i}) - \inf_{i \in I} F^i(x^o,t^*,d^{*i}) \ge \nabla_x \inf_{i \in I} F^i(x^o,t^*,d^{*i}) (x - x^o) \qquad \forall x \in Conv S,\ i \in I.

By the stationarity condition (28), \nabla_x \inf_{i \in I} F^i(x^o,t^*,d^{*i}) = -\inf_{i \in I} \nabla_x \left( \sum_{j=1}^{m} \mu^{oi}_j g^i_j(x^o) \right), so

    \nabla_x \inf_{i \in I} F^i(x^o,t^*,d^{*i}) (x - x^o) \ge -\sum_{j=1}^{m} \mu^{oi}_j g^i_j(x) \ge 0

for feasible x (from the convexity of g^i_j and (30)). Hence, for all x \in Conv S, \inf_{i \in I} F^i(x,t^*,d^{*i}) \ge \inf_{i \in I} F^i(x^o,t^*,d^{*i}); i.e., x^o is an optimal solution of problem GDF(t,d).

Corollary 4.1. Suppose that F^i(x,t,d^i) and g^i_j(x), i \in I, j = 1, \ldots, m, are convex and differentiable on Conv S, that F^i(x,t,d^i) and g^i_j(x) are bounded for each x \in Conv S, and that g^i_j(x) satisfy the CQ for i \in I. Then x^o is an optimal solution of GDF(t) if and only if there are Lagrange multipliers u^o \in R^a_+, u^o \ge 0, and \mu^o \in R^{p+m}, \mu^o \ge 0, such that (31)–(34) are satisfied. The proof follows as in the proof of Theorem 4.1.

Theorem 4.2. Assume that F^i(x,t,d^i), i \in I, are pseudoconvex functions at x \in Conv S and that \sum_{j=1}^{m} \mu^i_j g^i_j(x) are quasiconvex functions. If F^i(x,t,d^i) and g^i_j(x) are bounded for each x \in Conv S, and if Eqs. (28)–(30) are satisfied for t \in R^k_+ and \mu^o \in R^{p+m}_+, then x^o is an optimal solution of GDF(t,d).
Proof. Suppose that x^o is not an optimal solution of GDF(t,d); then there exists a point x^* \in S such that

    \inf_{i \in I} F^i(x^*,t,d^i) < \inf_{i \in I} F^i(x^o,t,d^i).    (38)

Then, for t \in R^k_+ with \sum_{k=1}^{K} t^k > 0 and d^{oi} > 0, inequality (38) gives

    F^i(x^*,t^o,d^{oi}) < F^i(x^o,t^o,d^{oi}).    (39)

Using the pseudoconvexity of F^i(x,t,d^i), we get

    (x^* - x^o)^T \nabla_x F^i(x^o,t^0,d^{0i}) < 0.    (40)

Consequently, (28) and (40) give

    \inf_{i \in I} (x^* - x^o)^T \sum_{j=1}^{m} \mu^i_j \nabla_x g^i_j(x^o) > 0.    (41)

Quasiconvexity of \sum_{j=1}^{m} \mu^i_j g^i_j(x) at x^o then implies that

    \inf_{i \in I} \sum_{j=1}^{m} \mu^i_j g^i_j(x^*) > \inf_{i \in I} \sum_{j=1}^{m} \mu^i_j g^i_j(x^o).    (42)

From (30), (41) and (42) it follows that

    \inf_{i \in I} \sum_{j=1}^{m} \mu^i_j g^i_j(x^*) > 0.    (43)

But for x^* \in S and \mu^o \in R^{p+m}_+ we have \inf_{i \in I} \sum_{j=1}^{m} \mu^i_j g^i_j(x^*) \le 0. This contradicts (43). Hence x^o is an optimal solution of GDF(t,d).

Corollary 4.2. Assume that \sum_{k=1}^{K} t^k f^{ik}(x), i \in I, are pseudoconvex functions at x \in Conv S and that \sum_{j=1}^{m} \mu^i_j g^i_j(x), i \in I, are quasiconvex functions. If \sum_{k=1}^{K} t^k f^{ik}(x) and g^i_j(x) are bounded for each x \in Conv S, and if Eqs. (28)–(30) are satisfied for t \in R^k_+, u^i \in R^a_+ and \mu^o \in R^{p+m}_+, then x^o is an optimal solution of GDF(t). The proof follows as in the proof of Theorem 4.2.
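Theorem 4.2 and Corollary 4.2 rest on pseudoconvexity of the objective. A ratio of a nonnegative convex function to a positive concave function is pseudoconvex, and the defining implication, (y - x) r'(x) >= 0 implies r(y) >= r(x), can be spot-checked numerically on a toy ratio (a hypothetical instance, not from the paper):

```python
# Numerical spot-check of pseudoconvexity for a convex/concave ratio
# (hypothetical instance, not from the paper): r(x) = (x^2 + 1) / (3 - x),
# which is pseudoconvex on (-inf, 3). Defining property of pseudoconvexity:
#   (y - x) * r'(x) >= 0  implies  r(y) >= r(x).

def r(x):
    return (x * x + 1.0) / (3.0 - x)

def dr(x, eps=1e-6):
    # central finite-difference derivative
    return (r(x + eps) - r(x - eps)) / (2.0 * eps)

pts = [i / 10.0 for i in range(-20, 25)]  # sample grid in [-2.0, 2.4]
for x in pts:
    for y in pts:
        if (y - x) * dr(x) >= 0.0:
            assert r(y) >= r(x) - 1e-9
print("pseudoconvexity property verified on the sample grid")
```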
5. Duality using Mond–Weir type

According to optimality Theorems 4.1 and 4.2, we can formulate the Mond–Weir type dual (M–WDGF) of the disjunctive fractional minmax problem GDF(t,d) as follows:

    \text{(M–WDGF)}: \quad \max_{y \in R^n} \sup_{i \in I} H^i(y,t,D) = \sum_{k=1}^{K} t^k f^{ik}(y) - D^i \left( \sum_{k=1}^{K} t^k h^{ik}(y) \right),    (44)

where D^i = \frac{\sum_{k=1}^{K} t^k f^{ik}(y)}{\sum_{k=1}^{K} t^k h^{ik}(y)} > 0, subject to the following conditions:

    \sup_{i \in I} \nabla_y \left\{ H^i(y,t,D) + \sum_{j=1}^{m} \mu^i_j g^i_j(y) \right\} = 0,    (45)

    \sum_{j=1}^{m} \mu^i_j g^i_j(y) = 0, \qquad \mu^i_j \ge 0,\ i \in I,\ j = 1, \ldots, m,    (46)

and

    \sum_{k=1}^{K} t^k f^{ik}(y) - D^i \left( \sum_{k=1}^{K} t^k h^{ik}(y) \right) \ge 0, \qquad i \in I,\ D > 0.    (47)

Theorem 5.1 (Weak Duality). Let x be feasible for GDF(t,d) and (y, \mu, t) be feasible for (M–WDGF). If, for all feasible (y, \mu, t), H^i(y,t,D) are pseudoconvex for each i \in I and \sum_{j=1}^{m} \mu^i_j g^i_j(y) are quasiconvex for i \in I, then \inf(\mathrm{GDF}(t,d)) \ge \sup(\text{M–WDGF}).

Proof. Assume to the contrary that

    \inf_{i \in N} \inf_{x \in Z} \left\{ \sum_{k=1}^{K} t^k f^{ik}(x) - D^i \sum_{k=1}^{K} t^k h^{ik}(x) \right\} < \sup_{i \in I} \sup_{y \in R^n} \left\{ \sum_{k=1}^{K} t^k f^{ik}(y) - D^i \sum_{k=1}^{K} t^k h^{ik}(y) \right\}.

Hence, for i \in I, we get

    \sum_{k=1}^{K} t^k f^{ik}(x) - d^i \sum_{k=1}^{K} t^k h^{ik}(x) < \sum_{k=1}^{K} t^k f^{ik}(y) - D^i \sum_{k=1}^{K} t^k h^{ik}(y).    (48)

By the pseudoconvexity of H^i(y,t,D), inequality (48) implies

    \sup_{i \in I} (x - y)^T \nabla_y H^i(y,t,D) < 0.    (49)

Consequently, condition (45) gives

    \sup_{i \in I} (x - y)^T \sum_{j=1}^{m} \mu^i_j \nabla_y g^i_j(y) > 0,    (50)

and the quasiconvexity of \sum_{j=1}^{m} \mu^i_j g^i_j(y) at y then implies

    \sup_{i \in I} \sum_{j=1}^{m} \mu^i_j g^i_j(x) > \sup_{i \in I} \sum_{j=1}^{m} \mu^i_j g^i_j(y) \ge 0.    (51)

Then \sum_{j=1}^{m} \mu^i_j g^i_j(x) > 0, which contradicts the assumption that x is feasible for GDF(t,d).
Theorem 5.2 (Strong Duality). If x^o is an optimal solution of GDF(t,d) and the CQ is satisfied, then there exists (y^o, \mu^o) \in R^{n+m} feasible for (M–WDGF), and the corresponding values satisfy \inf(\mathrm{GDF}(t,d)) = \sup(\text{M–WDGF}).

Proof. Since x^o is an optimal solution of GDF(t^0,d^0) and g^i_j(x) satisfy the CQ, there exist multipliers \mu^*_{ij} \ge 0, i \in I, j = 1, \ldots, m, such that the Kuhn–Tucker conditions (45)–(47) are satisfied. Set \mu^o = \tau^{-1} \mu^* in the Kuhn–Tucker stationary point conditions. It follows that (y^o, \mu^o) is feasible for (M–WDGF). Hence

    \inf_{i \in I} \frac{\sum_{k=1}^{K} t^{ok} f^{ik}(x^o)}{\sum_{k=1}^{K} t^{ok} h^{ik}(x^o)} = \sup_{i \in I} \frac{\sum_{k=1}^{K} t^{ok} f^{ik}(y^o)}{\sum_{k=1}^{K} t^{ok} h^{ik}(y^o)}.

Theorem 5.3 (Converse Duality). Let x^o be an optimal solution of GDF(t^0,d^0) for which the CQ is satisfied. If (y^*, \mu^*) is an optimal solution of (M–WDGF) and H^i(y^*,t^*,D^*) is strictly pseudoconvex at y^*, then y^* = x^o is an optimal solution of GDF(t,d).

Proof. Let x^o be an optimal solution of GDF(t^0,d^0) for which the CQ is satisfied, and assume that y^* \ne x^o. Since (y^*, \mu^*) is an optimal solution of (M–WDGF),

    \sup_{i \in I} \sup_{k \in K} F^i(x^o,t^0,d^{0i}) = \sup_{i \in I} \sup_{k \in K} H^i(y^*,t^*,D^{*i}).    (52)

Because (y^*, \mu^*) is feasible for (M–WDGF), it follows that

    \sum_{j=1}^{m} \mu^{*i}_j g^i_j(x^o) \le \sum_{j=1}^{m} \mu^{*i}_j g^i_j(y^*).

Quasiconvexity of \sum_{j=1}^{m} \mu^{*i}_j g^i_j(x) implies that

    \sup_{i \in I} (x^o - y^*) \nabla_x \sum_{j=1}^{m} \mu^{*i}_j g^i_j(y^*) \le 0.    (53)

From (45) and (53), it follows that

    \sup_{i \in I} (x^o - y^*) \nabla_y H^i(y^*,t^*,D^{*i}) \ge 0.    (54)

From (54) and the strict pseudoconvexity of H^i(y,t^*,D^{*i}) at y^*, it follows that

    \sup_{i \in I} F^i(x^o,t^0,d^{0i}) > \sup_{i \in I} H^i(y^*,t^*,D^{*i}).

This contradicts (52); hence y^* = x^o is an optimal solution of GDF(t^0,d^0).
6. Duality using the Schaible formula

The Schaible dual of GDF(t,d) is formulated, following [26], as:

    \text{(SGD)}: \quad \max_{(y,\mu) \in R^{n+m}} D,

where (y, \mu) \in R^n \times R^m_+ satisfies

    \sup_{i \in I} \nabla_x \left\{ \sum_{k=1}^{K} t^k f^{ik}(y) - D^i \sum_{k=1}^{K} t^k h^{ik}(y) + \sum_{j=1}^{m} \mu^i_j g^i_j(y) \right\} = 0, \qquad i \in I,    (55)

    \sum_{j=1}^{m} \mu^i_j g^i_j(y) \ge 0,    (56)

    \sum_{k=1}^{K} t^k f^{ik}(y) - D^i \sum_{k=1}^{K} t^k h^{ik}(y) \ge 0, \qquad i \in I,    (57)

and

    D^i \ge 0 \quad \text{and} \quad \mu^i_j \ge 0, \qquad i \in I,\ j = 1, \ldots, m.    (58)
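For a concrete feel for the Schaible dual, consider the hypothetical toy instance (not from the paper) of minimizing (x^2 + 1) / (3 - x) over Z = [0, 1], written with the constraints g_1(x) = x - 1 <= 0 and g_2(x) = -x <= 0. The point y = 0 with \mu = (0, 1/3) and D = 1/3 satisfies conditions (55)–(58), and weak duality in the sense of Theorem 6.1 can be verified numerically:

```python
# Feasibility of a candidate point for the Schaible dual (SGD), plus a weak
# duality check, on a hypothetical toy instance (not from the paper):
#   minimize (x^2 + 1) / (3 - x) over Z = [0, 1],
#   constraints g1(x) = x - 1 <= 0 and g2(x) = -x <= 0.
# Candidate dual point: y = 0, mu = (mu1, mu2) = (0, 1/3), D = 1/3.

def f(y): return y * y + 1.0
def h(y): return 3.0 - y

def grad_lagrangian(y, D, mu1, mu2, eps=1e-6):
    # central finite-difference derivative of f - D*h + mu1*g1 + mu2*g2
    phi = lambda z: f(z) - D * h(z) + mu1 * (z - 1.0) + mu2 * (-z)
    return (phi(y + eps) - phi(y - eps)) / (2.0 * eps)

y, mu1, mu2, D = 0.0, 0.0, 1.0 / 3.0, 1.0 / 3.0

# Dual feasibility, conditions (55)-(58):
assert abs(grad_lagrangian(y, D, mu1, mu2)) < 1e-6    # (55) stationarity
assert mu1 * (y - 1.0) + mu2 * (-y) >= -1e-12         # (56)
assert f(y) - D * h(y) >= -1e-12                      # (57)
assert D >= 0.0 and mu1 >= 0.0 and mu2 >= 0.0         # (58)

# Weak duality (Theorem 6.1): the primal ratio dominates D at every feasible x.
for i in range(101):
    x = i / 100.0
    assert f(x) / h(x) >= D - 1e-9
print("dual point feasible; weak duality holds")
```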
Theorem 6.1 (Weak Duality). Let x be feasible for GDF(t,d). If, for all feasible (y, \mu), \sup_{i \in I} H^i(y,t,D) is pseudoconvex at y and \sup_{i \in I} \sum_{j=1}^{m} \mu^i_j g^i_j(y) is quasiconvex, then \inf \mathrm{GDF}(t,d) \ge \sup(\text{SGD}).

Proof. For each i \in I, suppose to the contrary that

    \frac{\sum_{k=1}^{K} t^k f^{ik}(x)}{\sum_{k=1}^{K} t^k h^{ik}(x)} < D^i.

Hence, for such feasible x and each i \in I, we get

    \sum_{k=1}^{K} t^k f^{ik}(x) - D^i \sum_{k=1}^{K} t^k h^{ik}(x) < 0,

and therefore

    \sup_{i \in I} \left( \sum_{k=1}^{K} t^k f^{ik}(x) - D^i \sum_{k=1}^{K} t^k h^{ik}(x) \right) < 0.    (59)

From (57) and (59) with t \ne 0, we have

    \sum_{k=1}^{K} t^k f^{ik}(x) - D^i \sum_{k=1}^{K} t^k h^{ik}(x) < \sum_{k=1}^{K} t^k f^{ik}(y) - D^i \sum_{k=1}^{K} t^k h^{ik}(y).

By the pseudoconvexity of H^i(y,t,D) at y, it follows that

    (x - y)^T \nabla_y \left( \sum_{k=1}^{K} t^k f^{ik}(y) - D^i \sum_{k=1}^{K} t^k h^{ik}(y) \right) < 0.    (60)

Consequently, (55) and (60) yield

    (x - y)^T \sum_{j=1}^{m} \mu^i_j \nabla_y g^i_j(y) > 0,    (61)

and, by the quasiconvexity of \sum_{j=1}^{m} \mu^i_j g^i_j(y), inequality (61) implies

    \sum_{j=1}^{m} \mu^i_j g^i_j(x) > \sum_{j=1}^{m} \mu^i_j g^i_j(y).    (62)

From inequalities (56) and (62) it follows that

    \sum_{j=1}^{m} \mu^i_j g^i_j(x) > 0.    (63)

But, from the feasibility of x \in S and \mu^i_j \ge 0, i \in I, j = 1, \ldots, m, (2) implies

    \sum_{j=1}^{m} \mu^i_j g^i_j(x) \le 0,

which contradicts (63). Hence \frac{\sum_{k=1}^{K} t^k f^{ik}(x)}{\sum_{k=1}^{K} t^k h^{ik}(x)} \ge D^i; i.e., \inf \mathrm{GDF}(t,d) \ge \sup(\text{SGD}).
Theorem 6.2 (Strong Duality). Let x^o be an optimal solution of GDF(t,d) for which the CQ is satisfied. Then there exists (y^o, \mu^o) feasible for (SGD), and the corresponding values satisfy \inf \mathrm{GDF}(t,d) = \sup(\text{SGD}). If, in addition, the hypotheses of Theorem 6.1 are satisfied, then (y^o, \mu^o) is an optimal solution of (SGD).

Proof. The proof is similar to that of Theorem 5.2.
Theorem 6.3 (Converse Duality). Suppose that x^o is an optimal solution of GDF(t,d), g^i_j(x) satisfy the CQ, and the hypotheses of Theorem 6.1 hold. If (y^*, \mu^*) is an optimal solution of (SGD) and H^i(y,t^*,D^*) is strictly pseudoconvex at y^*, then y^* = x^o is an optimal solution of GDF(t^o,d^o).

Proof. Assume that y^* \ne x^o, where x^o is an optimal solution of GDF(t^o,d^o), and seek a contradiction. From Theorem 4.2, for each i \in I, it follows that

    \frac{\sum_{k=1}^{K} t^{0k} f^{ik}(x^0)}{\sum_{k=1}^{K} t^{0k} h^{ik}(x^0)} = d^{0i}.    (64)

Using (2) with (56), we get \sum_{j=1}^{m} \mu^{*i}_j g^i_j(x^o) \le \sum_{j=1}^{m} \mu^{*i}_j g^i_j(y^*). By the quasiconvexity of \sum_{j=1}^{m} \mu^{*i}_j g^i_j(x), for each i \in I it follows that

    (x^o - y^*) \nabla_x \sum_{j=1}^{m} \mu^{*i}_j g^i_j(y^*) \le 0.    (65)

From (55) and (65) it follows that

    (x^0 - y^*) \nabla_x \left\{ \sum_{k=1}^{K} t^{*k} f^{ik}(y^*) - D^{*i} \sum_{k=1}^{K} t^{*k} h^{ik}(y^*) \right\} \ge 0.    (66)

From (57) and (66) and the strict pseudoconvexity of \sum_{k=1}^{K} t^{*k} f^{ik}(y) - D^{*i} \sum_{k=1}^{K} t^{*k} h^{ik}(y) for each i \in I at y^*, it follows that

    \sum_{k=1}^{K} t^{0k} f^{ik}(x^0) - d^{0i} \sum_{k=1}^{K} t^{0k} h^{ik}(x^0) > \sum_{k=1}^{K} t^{*k} f^{ik}(y^*) - D^{*i} \sum_{k=1}^{K} t^{*k} h^{ik}(y^*).    (67)

Inequality (67) implies that

    \sum_{k=1}^{K} t^{0k} f^{ik}(x) - d^{0i} \sum_{k=1}^{K} t^{0k} h^{ik}(x) > 0, \qquad i \in I;    (68)

i.e., for each i \in I,

    \frac{\sum_{k=1}^{K} t^{0k} f^{ik}(x)}{\sum_{k=1}^{K} t^{0k} h^{ik}(x)} > d^{0i}.

Hence

    \frac{\sum_{k=1}^{K} t^{0k} f^{ik}(x^0)}{\sum_{k=1}^{K} t^{0k} h^{ik}(x^0)} \ge \frac{\sum_{k=1}^{K} t^{0k} f^{ik}(x)}{\sum_{k=1}^{K} t^{0k} h^{ik}(x)} > d^{0i}.

This contradicts (64), so y^* = x^o is an optimal solution of GDF(t^o,d^o).
7. Conclusion

This paper addresses the solution of generalized disjunctive programming problems, which correspond to minmax continuous optimization problems involving disjunctions with convex–concave nonlinear fractional objective functions. We use Dinkelbach's global approach for solving this problem. We first describe the Kuhn–Tucker saddle point of nonlinear disjunctive fractional minmax programming problems, using a decision set that is the union of a family of convex sets. We also discuss necessary and sufficient optimality conditions for generalized nonlinear disjunctive fractional minmax programming problems. For this class of problems we study two dual problems, and we propose and prove weak, strong and converse duality theorems for each.

References

[1] E. Ammar, On optimality and duality theorems of nonlinear disjunctive fractional minmax programs, European J. Oper. Res. 180 (2007) 971–982.
[2] E.E. Ammar, On optimality of nonlinear fractional disjunctive programming problems, Comput. Math. Appl. 53 (2007) 1527–1537.
[3] E. Balas, Disjunctive programming, Ann. Discrete Math. 5 (1979) 3–51.
[4] E. Balas, Disjunctive programming and a hierarchy of relaxations for discrete optimization problems, SIAM J. Alg. Discrete Meth. 6 (1985) 466–486.
[5] E. Balas, Disjunctive programming: Properties of the convex hull of feasible points, Discrete Appl. Math. 89 (1–3) (1998) 3–44.
[6] H.P. Benson, Fractional programming with convex quadratic forms and functions, European J. Oper. Res. 173 (2) (2006) 351–369.
[7] S. Ceria, J. Soares, Convex programming for disjunctive convex optimization, Math. Program. 86A (3) (1999) 595–614.
[8] Ching-Ter Chang, An approximate approach for fractional programming with absolute-value functions, J. Appl. Math. Comput. 161 (1) (2005) 171–179.
[9] Xiuhong Chen, Higher-order symmetric duality in non-differentiable multiobjective programming problems, J. Math. Anal. Appl. 290 (2) (2004) 423–435.
[10] Ivan I. Eremin, About the problem of disjunctive programming, Yugosl. J. Oper. Res. 10 (2) (2000) 149–161.
[11] J.B.G. Frenk, A note on the paper "Fractional programming with convex quadratic forms and functions" by H.P. Benson, European J. Oper. Res. 176 (2007) 641–642.
[12] Amilcar S. Gonçalves, Symmetric duality for disjunctive programming with absolute value functionals, European J. Oper. Res. 26 (1986) 301–306.
[13] I.E. Grossmann, S. Lee, Generalized convex disjunctive programming: nonlinear convex hull relaxation, Comput. Optim. Appl. 26 (1) (2003) 83–100.
[14] M. Gugat, One-sided derivatives for the value function in convex parametric programming, Optimization 28 (1994) 301–314.
[15] M. Gugat, Convex parametric programming optimization: one-sided differentiability of the value function, J. Optim. Theory Appl. 92 (1997) 285–310.
[16] S. Helbig, An algorithm for vector optimization problems over a disjunctive feasible set, Oper. Res. (1989).
[17] S. Helbig, Duality in disjunctive programming via vector optimization, Math. Program. 65A (1) (1994) 21–41.
[18] S. Jain, A. Magal, Solution of a generalized fractional programming problem, J. Indian Acad. Math. 26 (1) (2004) 15–21.
[19] D.S. Kim, S.J. Kim, M.H. Kim, Optimality and duality for a class of nondifferentiable multiobjective fractional programming problems, J. Optim. Theory Appl. 129 (2006) 131–146.
[20] Zhi-an Liang, Zhen-wei Shi, Optimality conditions and duality for a minimax fractional programming with generalized convexity, J. Math. Anal. Appl. 277 (2) (2003) 474–488.
[21] S. Liu, E. Feng, Optimality conditions and duality for a class of nondifferentiable nonlinear fractional programming problems, Int. J. Appl. Math. 13 (4) (2003) 345–358.
[22] J.C. Liu, Y. Kimura, K. Tanaka, Generalized fractional programming, RIMS Kokyuroku 1031 (1998) 1–13.
[23] S.K. Mishra, S.Y. Wang, K.K. Lai, Higher-order duality for a class of non-differentiable multiobjective programming problems, Int. J. Pure Appl. Math. 2 (2004) 221–232.
[24] Vivek Patkar, I.M. Stancu-Minasian, Duality in disjunctive linear fractional programming, European J. Oper. Res. 21 (1985) 101–105.
[25] A. Roubi, Method of centers for generalized fractional programming, J. Optim. Theory Appl. 107 (1) (2000) 123–143.
[26] S. Schaible, Fractional programming, I: Duality, Manage. Sci. 22 (1976) 858–867.
[27] H.D. Sherali, C.M. Shetty, Optimization with Disjunctive Constraints, Springer-Verlag, Berlin, Heidelberg, New York, 1980.
[28] H.-Ch. Wu, The Karush–Kuhn–Tucker optimality conditions in an optimization problem with interval-valued objective function, European J. Oper. Res. 176 (2007) 46–59.
[29] X.M. Yang, X.Q. Yang, K.L. Teo, Duality and saddle-point type optimality for generalized nonlinear fractional programming, J. Math. Anal. Appl. 289 (1) (2004) 100–109.
[30] Z.K. Xu, Duality in generalized nonlinear fractional programming, J. Math. Anal. Appl. 169 (1992) 1–9.