J Glob Optim (2011) 49:237–263 DOI 10.1007/s10898-010-9542-8
Using the Dinkelbach-type algorithm to solve the continuous-time linear fractional programming problems Ching-Feng Wen · Hsien-Chung Wu
Received: 11 August 2009 / Accepted: 8 March 2010 / Published online: 23 March 2010 © Springer Science+Business Media, LLC. 2010
Abstract A Dinkelbach-type algorithm is proposed in this paper to solve a class of continuous-time linear fractional programming problems. We transform this original problem into a continuous-time non-fractional programming problem, which unfortunately happens to be a continuous-time nonlinear programming problem. In order to tackle this nonlinear problem, we propose an auxiliary problem that is formulated as a parametric continuous-time linear programming problem. We also introduce a dual of this parametric continuous-time linear programming problem for which the weak duality theorem holds true. We introduce the discrete approximation method to solve the primal and dual pair of parametric continuous-time linear programming problems by using the recurrence method. Finally, we provide two numerical examples to demonstrate the usefulness of this practical algorithm.

Keywords Approximate solutions · Continuous-time linear fractional programming problems · Dinkelbach-type algorithm · Weak duality · Weak-star convergence

1 Introduction

The theory of continuous-time linear programming has received considerable attention for a long time. Tyndall [38,39] treated rigorously a continuous-time linear programming problem with constant matrices, which originated from the "bottleneck problem"
Research of Ching-Feng Wen is partially supported by a grant from the Kaohsiung Medical University Research Foundation (KMU-M098008).

C.-F. Wen, Center for General Education, Kaohsiung Medical University, Kaohsiung 807, Taiwan; e-mail: [email protected]

H.-C. Wu (B), Department of Mathematics, National Kaohsiung Normal University, Kaohsiung 802, Taiwan; e-mail: [email protected]
proposed by Bellman [4]. Levinson [13] generalized the results of Tyndall by considering time-dependent matrices, in which the functions appearing in the objective and constraints were assumed to be continuous on the time interval [0, T]. Meidan and Perold [16], Papageorgiou [19] and Schechter [30] have also obtained some interesting results for the continuous-time linear programming problem. Anderson et al. [1-3], Fleischer and Sethuraman [7] and Pullan [21-25] investigated a subclass of continuous-time linear programming problems, called separated continuous-time linear programming problems, which can be used to model job-shop scheduling problems. Recently, Weiss [41] proposed a simplex-like algorithm to solve the separated continuous-time linear programming problem. On the other hand, nonlinear types of continuous-time optimization problems were studied by Farr and Hanson [5,6], Grinold [10,11], Hanson and Mond [12], Reiland [26,27], Reiland and Hanson [28] and Singh [34]. Nonsmooth continuous-time optimization problems were studied by Rojas-Medar et al. [29] and Singh and Farr [35]. Nonsmooth continuous-time multiobjective programming problems were studied by Nobakhtian and Pouryayevali [17,18].

An optimization problem in which the objective function appears as a ratio of two real-valued functions is known as a fractional programming problem. Owing to its significance in information theory, stochastic programming and decomposition algorithms for large linear systems, its various theoretical and computational issues have received particular attention in the last decades. For more details on this topic, we refer to Liang et al. [14], Pardalos and Phillips [20], Stancu-Minasian [36] and Schaible et al. [9,31-33]. On the other hand, Zalmai [42-45] investigated continuous-time fractional programming problems. Moreover, Stancu-Minasian and Tigan [37] studied the stochastic continuous-time linear fractional programming problem. Under some positivity conditions, using the minimum-risk approach, the stochastic continuous-time linear fractional programming problem can be shown to be equivalent to a deterministic continuous-time linear fractional programming problem. Wen et al. [40] proposed a recurrence method to numerically solve a special class of continuous-time linear programming problems. Unfortunately, to the best of the authors' knowledge, there does not yet exist a practical algorithm to solve the deterministic continuous-time linear fractional programming problem. The main purpose of this paper is to develop a Dinkelbach-type algorithm to solve a class of continuous-time linear fractional programming problems. Two numerical examples will be provided to clarify the discussions of this paper.

This paper is organized as follows. In Sect. 2, a continuous-time linear fractional programming problem is formulated, and it is transformed into a continuous-time nonlinear programming problem. In order to tackle this nonlinear problem, we propose an auxiliary problem formulated as a parametric continuous-time linear programming problem. We also introduce a dual of this parametric continuous-time linear programming problem for which the weak duality theorem is established. In Sect. 3, we introduce the discrete approximation method to solve the primal and dual pair of parametric continuous-time linear programming problems by using the recurrence method developed by Wen et al. [40]. In order to solve the original problem numerically, in Sect. 4 we establish the relations between the transformed continuous-time nonlinear programming problem and the parametric continuous-time linear programming problem. Finally, in Sect. 5, the Dinkelbach-type computational procedure is proposed and two numerical examples are provided to demonstrate the usefulness of this practical algorithm.
2 Continuous-time linear fractional programming problems

Given any optimization problem (P), we denote by V(P) the optimal objective value of problem (P); that is, V(P) is obtained by taking the supremum or infimum. The superscript "⊤" denotes the transpose operation. Let L^∞_+[0, T] be the space of all measurable, nonnegative and essentially bounded real-valued functions defined on the time interval [0, T]. We consider the continuous-time linear fractional programming problem (CLFP) formulated as follows:

\[
(\text{CLFP}): \quad \max \ \frac{\mu + \sum_{j=1}^{q} \int_0^T f_j(t)\, x_j(t)\, dt}{\nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, x_j(t)\, dt}
\]
\[
\text{subject to} \quad \sum_{j=1}^{q} \beta_j x_j(t) \le g(t) + \sum_{j=1}^{q} \int_0^t \gamma_j x_j(s)\, ds \ \text{ for all } t \in [0, T],
\qquad x_j \in L^\infty_+[0, T] \ \text{ for } j = 1, \ldots, q,
\]

where ν > 0, β_j > 0 and γ_j ≥ 0 are given constants for j = 1, ..., q, and f_j, h_j and g are continuous real-valued functions defined on [0, T] for j = 1, ..., q. We also assume that g and h_j are nonnegative for j = 1, ..., q. We see that problem (CLFP) is feasible with the trivial feasible solution x(t) = (x_1(t), ..., x_q(t))⊤ = 0 for all t ∈ [0, T]. Let us write

\[
\lambda = \frac{\mu + \sum_{j=1}^{q} \int_0^T f_j(t)\, x_j(t)\, dt}{\nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, x_j(t)\, dt}. \tag{1}
\]

Then problem (CLFP) is equivalent to the following continuous-time optimization problem (CP):
\[
(\text{CP}): \quad \max \ \lambda
\]
\[
\text{subject to} \quad \mu + \sum_{j=1}^{q} \int_0^T f_j(t)\, x_j(t)\, dt
= \lambda \cdot \Bigl( \nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, x_j(t)\, dt \Bigr),
\]
\[
\sum_{j=1}^{q} \beta_j x_j(t) \le g(t) + \sum_{j=1}^{q} \int_0^t \gamma_j x_j(s)\, ds \ \text{ for all } t \in [0, T],
\qquad x_j \in L^\infty_+[0, T] \ \text{ for } j = 1, \ldots, q \ \text{ and } \lambda \in \mathbb{R}.
\]

Remark 2.1 When we say that (x*(t), λ*) is an optimal solution of problem (CP), we see that the optimal objective value of problem (CP) is λ*. However, when we say that the optimal objective value of problem (CP) is λ*, this does not necessarily mean that problem (CP) has an optimal solution (x*(t), λ*); it just means that the optimal objective value λ* is obtained by taking the supremum.

Proposition 2.1 Problem (CP) is feasible with V(CP) ≥ μ/ν.
Proof Let λ0 = μ/ν. Then, we see that (x(t), λ) = (0, λ0 ) is a feasible solution of problem (CP). This completes the proof.
Since problem (CP) is not linear, in order to design the Dinkelbach-type algorithm we consider an auxiliary problem formulated as a parametric continuous-time linear programming problem. For any λ ∈ R, we consider the following parametric continuous-time linear programming problem:

\[
(\text{CLP}_\lambda): \quad \max \ \mu - \lambda\nu + \sum_{j=1}^{q} \int_0^T \bigl[ f_j(t) - \lambda h_j(t) \bigr] x_j(t)\, dt \tag{2}
\]
\[
\text{subject to} \quad \sum_{j=1}^{q} \beta_j x_j(t) \le g(t) + \sum_{j=1}^{q} \int_0^t \gamma_j x_j(s)\, ds \ \text{ for all } t \in [0, T],
\qquad x_j \in L^\infty_+[0, T] \ \text{ for } j = 1, \ldots, q. \tag{3}
\]
We also remark that, for all λ, the problems (CLP_λ) have the same feasible region.

Proposition 2.2 The problem (CLP_λ) is feasible for all λ ∈ R. If λ ≤ (μ/ν), then V(CLP_λ) ≥ 0.

Proof It is obvious that the trivial solution x(t) = 0 is feasible for (CLP_λ), with objective value μ − λν. If λ ≤ (μ/ν), then μ ≥ λν, which says that V(CLP_λ) ≥ μ − λν ≥ 0. We complete the proof.

According to Wen et al. [40], the dual problem (DCLP_λ) is defined as follows:

\[
(\text{DCLP}_\lambda): \quad \min \ \mu - \lambda\nu + \int_0^T g(t)\, y(t)\, dt
\]
\[
\text{subject to} \quad \beta_j y(t) - \int_t^T \gamma_j y(s)\, ds \ge f_j(t) - \lambda h_j(t) \ \text{ for } j = 1, \ldots, q \text{ and } t \in [0, T], \tag{4}
\]
\[
y \in L^\infty_+[0, T].
\]

The weak duality for the primal and dual pair of problems (CLP_λ) and (DCLP_λ) is given below.

Theorem 2.1 (Weak duality between (CLP_λ) and (DCLP_λ)) Considering the primal and dual pair of problems (CLP_λ) and (DCLP_λ), for any feasible solutions x(t) = (x_1(t), ..., x_q(t))⊤ and y(t) of problems (CLP_λ) and (DCLP_λ), respectively, we have

\[
\mu - \lambda\nu + \sum_{j=1}^{q} \int_0^T \bigl[ f_j(t) - \lambda h_j(t) \bigr] x_j(t)\, dt
\le \mu - \lambda\nu + \int_0^T g(t)\, y(t)\, dt;
\]

that is, V(CLP_λ) ≤ V(DCLP_λ).

3 Discrete approximation method

We are going to propose the discrete approximation method to solve the parametric problem (CLP_λ). This method parallels the one proposed in [40]. Following [40], for each n ∈ N, we take

\[
P_n = \Bigl\{ 0, \frac{T}{n}, \frac{2T}{n}, \ldots, \frac{(n-1)T}{n}, T \Bigr\}
\]
as a partition of [0, T], which divides [0, T] into n subintervals of equal length T/n. For l = 1, ..., n, we define

\[
b_l^{(n)} = \min \Bigl\{ g(t) : t \in \Bigl[ \frac{(l-1)T}{n}, \frac{lT}{n} \Bigr] \Bigr\} \tag{5}
\]

and

\[
a_{jl}^{(n,\lambda)} = \min \Bigl\{ f_j(t) - \lambda h_j(t) : t \in \Bigl[ \frac{(l-1)T}{n}, \frac{lT}{n} \Bigr] \Bigr\}
\ \text{ for } j = 1, \ldots, q, \ l = 1, \ldots, n. \tag{6}
\]

For convenience, we define \(\sum_{l=1}^{0} x_l\) and \(\sum_{l=n+1}^{n} x_l\) to be zero. According to the continuous-time linear programming problem (CLP_λ), its discrete version can be defined as the following finite-dimensional linear programming problem:

\[
(P_n^{(\lambda)}): \quad \text{maximize} \ \ \mu - \lambda\nu + \frac{T}{n} \sum_{j=1}^{q} \sum_{l=1}^{n} a_{jl}^{(n,\lambda)} x_{jl}
\]
\[
\text{subject to} \quad \sum_{j=1}^{q} \beta_j x_{jl} \le b_l^{(n)} + \frac{T}{n} \sum_{\omega=1}^{l-1} \sum_{j=1}^{q} \gamma_j x_{j\omega} \ \text{ for } l = 1, \ldots, n, \tag{7}
\]
\[
x_{jl} \ge 0 \ \text{ for } j = 1, \ldots, q, \ l = 1, \ldots, n.
\]
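To make the discretization concrete, the following minimal Python sketch assembles the data of Eqs. (5)-(7). It is an illustration only, not part of the paper: the function name discretize and the finite sampling of each subinterval (the true coefficients are exact minima of continuous functions) are our own assumptions.

```python
import numpy as np

def discretize(f, h, g, lam, T, n, samples=64):
    """Approximate b_l^(n) of Eq. (5) and a_jl^(n,lam) of Eq. (6).

    f, h are lists of q callables f_j, h_j; g is a callable.
    Each subinterval [(l-1)T/n, lT/n] is sampled at `samples` points,
    so the returned minima are approximations of the exact ones.
    """
    q = len(f)
    a = np.empty((q, n))
    b = np.empty(n)
    for l in range(n):
        ts = np.linspace(l * T / n, (l + 1) * T / n, samples)
        b[l] = min(g(t) for t in ts)
        for j in range(q):
            a[j, l] = min(f[j](t) - lam * h[j](t) for t in ts)
    return a, b
```

The arrays a and b are exactly the coefficients appearing in the linear program (P_n^{(λ)}) of (7).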
The dual problem (D_n^{(λ)}) of (P_n^{(λ)}) is defined by

\[
(D_n^{(\lambda)}): \quad \text{minimize} \ \ \mu - \lambda\nu + \frac{T}{n} \sum_{l=1}^{n} b_l^{(n)} y_l
\]
\[
\text{subject to} \quad \beta_j y_l \ge a_{jl}^{(n,\lambda)} + \frac{T}{n}\, \gamma_j \sum_{\omega=l+1}^{n} y_\omega \ \text{ for } j = 1, \ldots, q, \ l = 1, \ldots, n, \tag{8}
\]
\[
y_l \ge 0 \ \text{ for } l = 1, \ldots, n.
\]
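Since (P_n^{(λ)}) in (7) is an ordinary finite-dimensional linear program, it can also be solved by a generic LP solver, which is useful for cross-checking the recurrence method below. The following sketch using scipy is our own construction under that reading:

```python
import numpy as np
from scipy.optimize import linprog

def solve_Pn_lp(a, b, beta, gamma, mu, nu, lam, T):
    """Solve (P_n^(lambda)) of (7) with a generic LP solver.

    Variables are ordered as x_{jl} with j varying fastest;
    returns the optimal value V_n(lam)."""
    q, n = a.shape
    c = -(T / n) * a.T.flatten()          # maximize => minimize the negative
    A = np.zeros((n, n * q))
    for l in range(n):
        A[l, l * q:(l + 1) * q] = beta    # sum_j beta_j x_{jl}
        for w in range(l):                # -(T/n) sum_{w<l} sum_j gamma_j x_{jw}
            A[l, w * q:(w + 1) * q] = -(T / n) * np.asarray(gamma)
    res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None), method="highs")
    return mu - lam * nu - res.fun
```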
Remark 3.1 We have the following observations.

(i) For each n ∈ N and λ ∈ R, the problem (P_n^{(λ)}) is feasible, since (x_1, x_2, ..., x_n) with x_l = 0 ∈ R^q_+ for all l = 1, ..., n is a feasible solution of problem (P_n^{(λ)}).
(ii) The feasibility of problem (D_n^{(λ)}) can be realized by Proposition 3.1 below.
(iii) From (i) and (ii), we see that the strong duality theorem holds true for the primal and dual pair of problems (P_n^{(λ)}) and (D_n^{(λ)}), i.e., −∞ < V(P_n^{(λ)}) = V(D_n^{(λ)}) < ∞.

The problems (P_n^{(λ)}) and (D_n^{(λ)}) can be solved efficiently using the recurrence method developed by Wen et al. [40]. We first adopt the following notations:

\[
\sigma = \min_{j=1,\ldots,q} \{\beta_j\}, \qquad \kappa = \max_{j=1,\ldots,q} \{\gamma_j\}, \tag{9}
\]
\[
\rho = \frac{\kappa}{\sigma}, \tag{10}
\]
\[
\tau(\lambda) = \max_{j=1,\ldots,q} \ \max \Bigl\{ \max_{t \in [0,T]} \bigl[ f_j(t) - \lambda h_j(t) \bigr], \ 0 \Bigr\}. \tag{11}
\]

These constants can be evaluated as in the sketch below; the recursion for obtaining the optimal solution of problem (D_n^{(λ)}) then follows in Proposition 3.1.
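A small Python sketch of the constants above; here ρ = κ/σ is an assumption of ours, inferred from the bound (12) and the factor exp(ρT) used in Sect. 5, and τ(λ) is approximated by sampling [0, T].

```python
import numpy as np

def constants(beta, gamma, f, h, lam, T, samples=1000):
    """sigma and kappa of Eq. (9), rho of Eq. (10) (assumed kappa/sigma),
    and a sampled approximation of tau(lam) of Eq. (11)."""
    sigma, kappa = min(beta), max(gamma)
    ts = np.linspace(0.0, T, samples)
    tau = max(max(max(fj(t) - lam * hj(t) for t in ts), 0.0)
              for fj, hj in zip(f, h))
    return sigma, kappa, kappa / sigma, tau
```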
Proposition 3.1 We define

\[
\bar{y}_n^{(n,\lambda)} = \max_{j=1,\ldots,q} \max \Bigl\{ \frac{a_{jn}^{(n,\lambda)}}{\beta_j}, \ 0 \Bigr\}
\]

and

\[
\bar{y}_l^{(n,\lambda)} = \max_{j=1,\ldots,q} \max \Bigl\{
\frac{a_{jl}^{(n,\lambda)} + \frac{T}{n}\, \gamma_j \sum_{i=l+1}^{n} \bar{y}_i^{(n,\lambda)}}{\beta_j}, \ 0 \Bigr\}
\ \text{ for } l = n-1, n-2, \ldots, 1.
\]

Then \(\bar{y}^{(n,\lambda)} = (\bar{y}_1^{(n,\lambda)}, \ldots, \bar{y}_n^{(n,\lambda)})^\top\) forms an optimal solution of problem (D_n^{(λ)}). Moreover, we have

\[
0 \le \bar{y}_l^{(n,\lambda)} \le \frac{\tau(\lambda)}{\sigma} \cdot \exp\Bigl( \frac{\kappa T}{\sigma} \Bigr)
\ \text{ for all } l = 1, \ldots, n. \tag{12}
\]
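The backward recursion of Proposition 3.1 translates directly into code. The following minimal sketch (the naming is ours) computes the optimal dual vector from the array a of coefficients a_{jl}^{(n,λ)} produced, for instance, by the discretize sketch above.

```python
import numpy as np

def dual_recursion(a, beta, gamma, T):
    """Backward recursion of Proposition 3.1 for (D_n^(lambda)).

    a[j, l] holds a_jl^(n,lambda) (0-indexed); returns y_bar of length n.
    """
    q, n = a.shape
    y = np.zeros(n)
    tail = 0.0  # running value of y_{l+1} + ... + y_n
    for l in range(n - 1, -1, -1):
        y[l] = max(max((a[j, l] + (T / n) * gamma[j] * tail) / beta[j]
                       for j in range(q)), 0.0)
        tail += y[l]
    return y
```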
Before presenting the recurrence method for solving problem (P_n^{(λ)}), we also adopt some notations. Let

\[
\Omega^{(n,\lambda)} = \bigl\{ l : 1 \le l \le n \ \text{and} \ \bar{y}_l^{(n,\lambda)} > 0 \bigr\}
\]

and, for l ∈ Ω^{(n,λ)}, let

\[
\Omega_l = \bigl\{ i : 1 \le i \le l-1 \ \text{and} \ i \in \Omega^{(n,\lambda)} \bigr\}.
\]

If 1 ∈ Ω^{(n,λ)}, we assume Ω_1 = ∅. For 1 ≤ l ≤ n and 1 ≤ j ≤ q, we define

\[
d_{jl} = \frac{1}{\beta_j} \Bigl( a_{jl}^{(n,\lambda)} + \frac{T}{n}\, \gamma_j \sum_{i=l+1}^{n} \bar{y}_i^{(n,\lambda)} \Bigr).
\]

For l ∈ Ω^{(n,λ)}, we also define

\[
J_l = \bigl\{ j^* : 1 \le j^* \le q \ \text{such that} \ d_{j^*l} \ge d_{jl} > 0 \ \text{for all} \ 1 \le j \le q \bigr\}
\]

and j_l^* = min J_l, i.e., j_l^* is the smallest index such that d_{j_l^* l} ≥ d_{jl} > 0 for all 1 ≤ j ≤ q. The recursion for obtaining the optimal solution of problem (P_n^{(λ)}) is given below. Let

\[
\zeta := \max_{t \in [0,T]} \{ g(t) \}; \tag{13}
\]

then, by the same argument as in [40], we have the following result.

Proposition 3.2 For 1 ≤ l ≤ n and 1 ≤ j ≤ q, the following \(\bar{x}_{jl}^{(n,\lambda)}\):

\[
\bar{x}_{jl}^{(n,\lambda)} =
\begin{cases}
0, & \text{if } l \notin \Omega^{(n,\lambda)} \\
0, & \text{if } l \in \Omega^{(n,\lambda)} \text{ and } j \ne j_l^* \\
\dfrac{1}{\beta_{j_l^*}} \Bigl( b_l^{(n)} + \dfrac{T}{n} \sum_{i \in \Omega_l} \gamma_{j_i^*}\, \bar{x}_{j_i^* i}^{(n,\lambda)} \Bigr), & \text{if } l \in \Omega^{(n,\lambda)} \text{ and } j = j_l^*
\end{cases}
\]

forms an optimal solution of problem (P_n^{(λ)}). Moreover, we have

\[
0 \le \bar{x}_{jl}^{(n,\lambda)} \le \frac{\zeta}{\sigma} \cdot \exp\Bigl( \frac{q \kappa T}{\sigma} \Bigr)
\ \text{ for all } j = 1, \ldots, q, \ l = 1, \ldots, n \ \text{and} \ n \in \mathbb{N}.
\]

We denote by V_n(λ) = V(P_n^{(λ)}) = V(D_n^{(λ)}) the common optimal objective value of problems (P_n^{(λ)}) and (D_n^{(λ)}). Then, we see that V_n(λ) < ∞ for each n. In order to implement an algorithm, we need to show that V_n(λ) is a continuous function of λ.
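Before turning to continuity, we note that the forward recursion of Proposition 3.2, together with the objective of (7), can be sketched as follows. The handling of J_l (skip an interval whenever max_j d_{jl} ≤ 0 and pick the smallest maximizer otherwise) is our reading of the definitions above, not a statement of the paper.

```python
import numpy as np

def primal_recursion(a, b, beta, gamma, y, T):
    """Forward recursion of Proposition 3.2 for (P_n^(lambda)),
    given the dual vector y from Proposition 3.1."""
    q, n = a.shape
    x = np.zeros((q, n))
    acc = 0.0  # running sum of gamma_{j_i*} * x_bar_{j_i*, i} over earlier i
    for l in range(n):
        if y[l] <= 0.0:            # l lies outside the set of positive duals
            continue
        tail = y[l + 1:].sum()
        d = [(a[j, l] + (T / n) * gamma[j] * tail) / beta[j] for j in range(q)]
        jstar = int(np.argmax(d))  # argmax returns the smallest maximizing index
        if d[jstar] <= 0.0:
            continue
        x[jstar, l] = (b[l] + (T / n) * acc) / beta[jstar]
        acc += gamma[jstar] * x[jstar, l]
    return x

def V_n(a, x, mu, nu, lam, T):
    """Optimal value of (P_n^(lambda)): the objective of (7) at x."""
    q, n = a.shape
    return mu - lam * nu + (T / n) * float(np.sum(a * x))
```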
Proposition 3.3 For each n, V_n(λ) is a strictly decreasing and continuous function of λ.

Proof Recall that the objective function of problem (D_n^{(λ)}) is given by

\[
\mu - \lambda\nu + \frac{T}{n} \sum_{l=1}^{n} b_l^{(n)} y_l,
\]

where ν > 0 and each b_l^{(n)} is fixed for l = 1, ..., n. Since ν > 0 and by Proposition 3.1, we see that V_n(λ) will be a strictly decreasing and continuous function of λ if we can show that a_{jl}^{(n,λ)} defined in (6) is a decreasing and continuous function of λ for each n, j and l. Since h_j ≥ 0, for λ_1 ≤ λ_2 we have f_j(t) − λ_2 h_j(t) ≤ f_j(t) − λ_1 h_j(t) for all t, which shows that a_{jl}^{(n,λ)} is decreasing in λ; it remains to establish the continuity. We write

\[
I_l^{(n)} = \Bigl[ \frac{(l-1)T}{n}, \frac{lT}{n} \Bigr].
\]

Fixing n, j and l, we consider a sequence {λ_m} such that λ_m → λ as m → ∞. We are going to show that a_{jl}^{(n,λ_m)} → a_{jl}^{(n,λ)} as m → ∞. According to Marsden [15, p. 98, Ex. 6], it is equivalent to show that every subsequence of {a_{jl}^{(n,λ_m)}} has a further subsequence which converges to a_{jl}^{(n,λ)}. First of all, we are going to show that there exists a subsequence {a_{jl}^{(n,λ_{m_k})}} of {a_{jl}^{(n,λ_m)}} that converges to a_{jl}^{(n,λ)}. Since the functions f_j and h_j are continuous on I_l^{(n)}, for each m there exist t_{jl}^{(m)}, t_{jl}^* ∈ I_l^{(n)} such that

\[
a_{jl}^{(n,\lambda_m)} = f_j(t_{jl}^{(m)}) - \lambda_m h_j(t_{jl}^{(m)})
\quad \text{and} \quad
a_{jl}^{(n,\lambda)} = f_j(t_{jl}^*) - \lambda h_j(t_{jl}^*).
\]

Since I_l^{(n)} is compact, there exists a convergent subsequence {t_{jl}^{(m_k)}} of {t_{jl}^{(m)}} such that t_{jl}^{(m_k)} → t_{jl}^{(0)} as k → ∞, where t_{jl}^{(0)} ∈ I_l^{(n)}. We are going to show that a_{jl}^{(n,λ_{m_k})} → a_{jl}^{(n,λ)} as k → ∞. Now, by the continuity of f_j and h_j, we have

\[
\lim_{k \to \infty} a_{jl}^{(n,\lambda_{m_k})}
= \lim_{k \to \infty} \bigl[ f_j(t_{jl}^{(m_k)}) - \lambda_{m_k} h_j(t_{jl}^{(m_k)}) \bigr]
= f_j(t_{jl}^{(0)}) - \lambda h_j(t_{jl}^{(0)}) \ge a_{jl}^{(n,\lambda)}. \tag{14}
\]

By the definition of a_{jl}^{(n,λ_m)} in (6), for each m we have

\[
a_{jl}^{(n,\lambda_m)} = f_j(t_{jl}^{(m)}) - \lambda_m h_j(t_{jl}^{(m)}) \le f_j(t_{jl}^*) - \lambda_m h_j(t_{jl}^*).
\]

Therefore, we obtain

\[
\lim_{k \to \infty} a_{jl}^{(n,\lambda_{m_k})}
\le \lim_{k \to \infty} \bigl[ f_j(t_{jl}^*) - \lambda_{m_k} h_j(t_{jl}^*) \bigr]
= f_j(t_{jl}^*) - \lambda h_j(t_{jl}^*) = a_{jl}^{(n,\lambda)}. \tag{15}
\]

From (14) and (15), we obtain a_{jl}^{(n,λ_{m_k})} → a_{jl}^{(n,λ)} as k → ∞. Therefore, using the same arguments as described above, we can also show that any subsequence of {a_{jl}^{(n,λ_m)}}_{m=1}^∞ has a further subsequence which converges to a_{jl}^{(n,λ)}. This shows that a_{jl}^{(n,λ_m)} → a_{jl}^{(n,λ)} as m → ∞. Thus, a_{jl}^{(n,λ)} is a continuous function of λ for all n, j and l. We complete the proof.
Next, we are going to construct feasible solutions of problems (CLP_λ) and (DCLP_λ) by virtue of the optimal solutions of problems (P_n^{(λ)}) and (D_n^{(λ)}), respectively. Let \((\bar{x}_1^{(n,\lambda)}, \bar{x}_2^{(n,\lambda)}, \ldots, \bar{x}_n^{(n,\lambda)})\), where \(\bar{x}_l^{(n,\lambda)} = (\bar{x}_{1l}^{(n,\lambda)}, \ldots, \bar{x}_{ql}^{(n,\lambda)})^\top\), be an optimal solution of problem (P_n^{(λ)}). For j = 1, ..., q, we define the step functions \(\bar{x}_j^{(n,\lambda)} : [0, T] \to \mathbb{R}\) as follows:

\[
\bar{x}_j^{(n,\lambda)}(t) =
\begin{cases}
\bar{x}_{jl}^{(n,\lambda)}, & \text{if } \dfrac{(l-1)T}{n} \le t < \dfrac{lT}{n}, \ l = 1, \ldots, n \\
\bar{x}_{jn}^{(n,\lambda)}, & \text{if } t = T.
\end{cases} \tag{16}
\]
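In code, the natural step-function solution of Eq. (16) can be realized as a closure; again, this is only an illustrative sketch with our own naming.

```python
def natural_solution(x, T):
    """Step functions of Eq. (16): x[j][l] holds x_bar_{jl}^(n,lambda).

    Returns a callable t -> [x_1(t), ..., x_q(t)], constant on each
    subinterval [(l-1)T/n, lT/n) and taking the last piece's value at t = T.
    """
    n = len(x[0])
    def x_bar(t):
        l = min(int(t * n / T), n - 1)  # t = T maps to the last subinterval
        return [row[l] for row in x]
    return x_bar
```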
Proposition 4.1 If λ_1 < λ_2, then Q(λ_1) > Q(λ_2); that is, the real-valued function Q(·) is strictly decreasing.

Proof Let δ = ν(λ_2 − λ_1) > 0. Given ε > 0 with δ > ε, by the definition of supremum, there exists a feasible solution x(t) of (CLP_{λ_2}) such that

\[
0 < Q(\lambda_2) - \Bigl\{ \mu - \lambda_2\nu + \sum_{j=1}^{q} \int_0^T \bigl[ f_j(t) - \lambda_2 h_j(t) \bigr] x_j(t)\, dt \Bigr\} \le \varepsilon,
\]

which implies

\[
\mu - \lambda_1\nu + \sum_{j=1}^{q} \int_0^T \bigl[ f_j(t) - \lambda_1 h_j(t) \bigr] x_j(t)\, dt
+ (\lambda_1 - \lambda_2) \cdot \Bigl( \nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, x_j(t)\, dt \Bigr) \ge Q(\lambda_2) - \varepsilon. \tag{35}
\]

Since

\[
\sum_{j=1}^{q} \int_0^T h_j(t)\, x_j(t)\, dt \ge 0,
\]

it follows that

\[
\delta \le (\lambda_2 - \lambda_1) \cdot \Bigl( \nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, x_j(t)\, dt \Bigr). \tag{36}
\]
Since x(t) is also a feasible solution of problem (CLP_{λ_1}), using (35) and (36), we obtain

\[
\begin{aligned}
Q(\lambda_1) &\ge \mu - \lambda_1\nu + \sum_{j=1}^{q} \int_0^T \bigl[ f_j(t) - \lambda_1 h_j(t) \bigr] x_j(t)\, dt \\
&\ge Q(\lambda_2) - \varepsilon + (\lambda_2 - \lambda_1) \cdot \Bigl( \nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, x_j(t)\, dt \Bigr) \\
&\ge Q(\lambda_2) + \delta - \varepsilon > Q(\lambda_2) \quad (\text{since } \delta - \varepsilon > 0).
\end{aligned}
\]

This completes the proof.
Proposition 4.2 Given any λ ∈ R, we have that Q(λ) > 0 if and only if λ < V(CP). Equivalently, we also have that Q(λ) ≤ 0 if and only if λ ≥ V(CP).

Proof Suppose that Q(λ) > 0. By Proposition 4.1 and the continuity of Q(λ) in Theorem 4.1, there exist ε > 0 and λ̄ such that Q(λ) − ε > 0 and Q(λ̄) = Q(λ) − ε. Since the feasible regions of the problems (CLP_λ) are identical for each λ, there exists a feasible solution x(t) = (x_1(t), ..., x_q(t))⊤ of problems (CLP_λ) and (CLP_λ̄) such that

\[
\mu - \lambda\nu + \sum_{j=1}^{q} \int_0^T \bigl[ f_j(t) - \lambda h_j(t) \bigr] x_j(t)\, dt = Q(\bar{\lambda}) = Q(\lambda) - \varepsilon > 0,
\]

which implies

\[
\mu + \sum_{j=1}^{q} \int_0^T f_j(t)\, x_j(t)\, dt
= \lambda \Bigl( \nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, x_j(t)\, dt \Bigr) + Q(\lambda) - \varepsilon.
\]

Hence,

\[
\mu + \sum_{j=1}^{q} \int_0^T f_j(t)\, x_j(t)\, dt
= (\lambda + \hat{\lambda}) \Bigl( \nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, x_j(t)\, dt \Bigr),
\]

where

\[
\hat{\lambda} = \frac{Q(\lambda) - \varepsilon}{\nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, x_j(t)\, dt} > 0.
\]

This shows that (x(t), λ + λ̂) is a feasible solution of problem (CP). Therefore, we obtain λ < λ + λ̂ ≤ V(CP).

For the converse, we assume that λ < V(CP). If all feasible solutions (x^{(0)}(t), λ_0) of problem (CP) satisfied λ_0 ≤ λ, then we would have V(CP) ≤ λ, which contradicts λ < V(CP). Therefore, there exists a feasible solution (x^{(0)}(t), λ_0) of problem (CP) such that λ < λ_0. Using the feasibility of (x^{(0)}(t), λ_0), where x^{(0)}(t) = (x_1^{(0)}(t), ..., x_q^{(0)}(t))⊤, we have

\[
\mu + \sum_{j=1}^{q} \int_0^T f_j(t)\, x_j^{(0)}(t)\, dt
= \lambda_0 \Bigl( \nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, x_j^{(0)}(t)\, dt \Bigr) \tag{37}
\]

and

\[
\sum_{j=1}^{q} \beta_j x_j^{(0)}(t) \le g(t) + \sum_{j=1}^{q} \int_0^t \gamma_j x_j^{(0)}(s)\, ds \ \text{ for all } t \in [0, T]. \tag{38}
\]

Therefore, by (38), we see that x^{(0)}(t) is a feasible solution of problem (CLP_{λ_0}), and, by (37), we have

\[
\mu - \lambda_0\nu + \sum_{j=1}^{q} \int_0^T \bigl( f_j(t) - \lambda_0 h_j(t) \bigr) x_j^{(0)}(t)\, dt = 0. \tag{39}
\]
From (39), we have Q(λ_0) ≥ 0. Since λ < λ_0, using Proposition 4.1, we have 0 ≤ Q(λ_0) < Q(λ). This completes the proof.

The existence of optimal solutions of problem (CLP_λ) is guaranteed by Theorem 3.2. The following results are very useful for solving problem (CP).

Proposition 4.3 The following statements hold true.

(i) Suppose that (x*_{λ*}(t), λ*) is an optimal solution of problem (CP) such that x*_{λ*}(t) is an optimal solution of the problem (CLP_{λ*}). Then, we have Q(λ*) = 0.
(ii) If problem (CLP_λ) has an optimal solution x*_λ(t) such that Q(λ) = 0, then problem (CP) has an optimal solution (x*_λ(t), λ) with V(CP) = λ. In this case, x*_λ(t) is also an optimal solution of (CLFP).

Proof (i) Since (x*_{λ*}(t), λ*) satisfies the constraints of problem (CP), we see that Q(λ*) ≥ 0. Suppose that Q(λ*) > 0; we are going to derive a contradiction. Since Q(λ*) > 0 and ν > 0, we define

\[
\delta = \frac{Q(\lambda^*)}{\nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, x_{j\lambda^*}^*(t)\, dt} > 0. \tag{40}
\]

Since

\[
\mu - \lambda^*\nu + \sum_{j=1}^{q} \int_0^T \bigl( f_j(t) - \lambda^* h_j(t) \bigr) x_{j\lambda^*}^*(t)\, dt = Q(\lambda^*),
\]

we have

\[
\mu + \sum_{j=1}^{q} \int_0^T f_j(t)\, x_{j\lambda^*}^*(t)\, dt
- (\lambda^* + \delta) \cdot \Bigl( \nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, x_{j\lambda^*}^*(t)\, dt \Bigr)
= Q(\lambda^*) - \delta \Bigl( \nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, x_{j\lambda^*}^*(t)\, dt \Bigr) = 0 \quad [\text{by } (40)].
\]

Therefore, we obtain

\[
\mu + \sum_{j=1}^{q} \int_0^T f_j(t)\, x_{j\lambda^*}^*(t)\, dt
= (\lambda^* + \delta) \cdot \Bigl( \nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, x_{j\lambda^*}^*(t)\, dt \Bigr),
\]
which says that (x*_{λ*}(t), λ* + δ) is a feasible solution of problem (CP). Therefore, we have λ* = V(CP) ≥ λ* + δ > λ*, a contradiction. Therefore, we conclude that Q(λ*) = 0.

(ii) Since Q(λ) = 0, from Proposition 4.2, we have λ ≥ V(CP). Since

\[
\mu - \lambda\nu + \sum_{j=1}^{q} \int_0^T \bigl[ f_j(t) - \lambda h_j(t) \bigr] x_{j\lambda}^*(t)\, dt = 0,
\]

we have

\[
\mu + \sum_{j=1}^{q} \int_0^T f_j(t)\, x_{j\lambda}^*(t)\, dt
= \lambda \Bigl( \nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, x_{j\lambda}^*(t)\, dt \Bigr).
\]

On the other hand, since

\[
\sum_{j=1}^{q} \beta_j x_{j\lambda}^*(t) \le g(t) + \sum_{j=1}^{q} \int_0^t \gamma_j x_{j\lambda}^*(s)\, ds \ \text{ for all } t \in [0, T],
\]

we see that (x*_λ(t), λ) is a feasible solution of problem (CP), i.e., λ ≤ V(CP). Therefore, we conclude that λ = V(CP). This completes the proof.
5 Dinkelbach-type computational procedure

In the sequel, we provide a Dinkelbach-type computational procedure to obtain the approximate solution of (CLFP). The conventional Dinkelbach-type algorithm for solving the fractional programming problem (FP) is briefly described below (see also the sketch after this list):

• Step 1. Transform the original fractional programming problem (FP) into a parametric programming problem (P_λ).
• Step 2. Define a real-valued function F(λ) which represents the optimal value of problem (P_λ). The purpose is to find the zeros of F(λ).
• Step 3. If F(λ_k) = 0, then λ_k is the optimal value of the original fractional programming problem (FP). In this case, the optimal solution of problem (P_{λ_k}) is also the optimal solution of problem (FP).
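For orientation, the conventional finite-dimensional Dinkelbach iteration in Steps 1-3 may be sketched as follows; solve_parametric, N and D are placeholders for a parametric solver and the numerator and denominator of (FP), not anything defined in this paper.

```python
def dinkelbach(solve_parametric, N, D, lam=0.0, tol=1e-9, max_iter=100):
    """Conventional Dinkelbach algorithm for max N(x)/D(x) with D > 0.

    solve_parametric(lam) returns a maximizer of N(x) - lam * D(x),
    whose optimal value is F(lam); we stop when F(lam) is numerically zero.
    """
    for _ in range(max_iter):
        x = solve_parametric(lam)
        F = N(x) - lam * D(x)     # optimal value of the parametric problem
        if abs(F) <= tol:         # F(lam) = 0: lam is the optimal ratio
            return lam, x
        lam = N(x) / D(x)         # Dinkelbach update of the parameter
    return lam, x
```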
Now, in order to solve the continuous-time linear fractional programming problem (CLFP), we formulate the parametric programming problem (CLP_λ), which corresponds to the parametric programming problem (P_λ) mentioned in Step 1. The real-valued function Q(λ), the optimal value of problem (CLP_λ), then corresponds to the function F(λ) mentioned in Step 2. Finally, we need to find the zeros of Q(λ), whose existence has been guaranteed in Proposition 4.3. From the viewpoint of numerical analysis, we are going to obtain a sequence {λ*_n} such that Q(λ*_n) → 0 when n is sufficiently large, which will be shown below in Theorem 5.1. To see this, we need some useful results.

Lemma 5.1 The function ε_n(λ) defined in (28) is a continuous function of λ.
Proof We recall

\[
\varepsilon_n(\lambda) = \bar{\epsilon}_n \cdot \delta_n(\lambda) \cdot \bigl( n + \exp(\rho T) - 1 \bigr)
+ \bigl( \eta_n(\lambda) + \delta_n(\lambda) \bigr) \int_0^T \rho \cdot \exp\bigl( \rho(T - t) \bigr) g(t)\, dt.
\]

By Proposition 3.1 and the proof of Proposition 3.3, we see that the optimal solution ȳ^{(n,λ)} of (D_n^{(λ)}) is a continuous function of λ. Therefore, by (25), δ_n(λ) is also a continuous function of λ. Hence, it remains to show that η_n(λ) is a continuous function of λ. Let

\[
J_l^{(n)} := \Bigl[ \frac{l-1}{n} T, \frac{l}{n} T \Bigr) \quad \text{and} \quad
I_l^{(n)} := \Bigl[ \frac{l-1}{n} T, \frac{l}{n} T \Bigr].
\]

From (23), we have

\[
\begin{aligned}
\eta_n(\lambda) &= \max_{j=1,\ldots,q} \ \sup_{t \in [0,T]} \Bigl( f_j(t) - \lambda h_j(t) - \varphi_j^{(n,\lambda)}(t) \Bigr) \\
&= \max_{j=1,\ldots,q} \max \Bigl\{ \max_{l=1,\ldots,n} \ \sup_{t \in J_l^{(n)}} \Bigl( f_j(t) - \lambda h_j(t) - \varphi_j^{(n,\lambda)}(t) \Bigr),
\ f_j(T) - \lambda h_j(T) - \varphi_j^{(n,\lambda)}(T) \Bigr\} \\
&= \max_{j=1,\ldots,q} \max \Bigl\{ \max_{l=1,\ldots,n} \ \sup_{t \in J_l^{(n)}} \Bigl( f_j(t) - \lambda h_j(t) - a_{jl}^{(n,\lambda)} \Bigr),
\ f_j(T) - \lambda h_j(T) - a_{jn}^{(n,\lambda)} \Bigr\} \\
&= \max_{j=1,\ldots,q} \max \Bigl\{ \max_{l=1,\ldots,n} \ \max_{t \in I_l^{(n)}} \Bigl( f_j(t) - \lambda h_j(t) - a_{jl}^{(n,\lambda)} \Bigr),
\ f_j(T) - \lambda h_j(T) - a_{jn}^{(n,\lambda)} \Bigr\} \quad (\text{by the continuity}) \\
&= \max_{j=1,\ldots,q} \ \max_{l=1,\ldots,n} \ \max_{t \in I_l^{(n)}} \Bigl( f_j(t) - \lambda h_j(t) - a_{jl}^{(n,\lambda)} \Bigr). \tag{41}
\end{aligned}
\]

From Proposition 3.3, we see that

\[
a_{jl}^{(n,\lambda)} = \min_{s \in I_l^{(n)}} \bigl( f_j(s) - \lambda h_j(s) \bigr)
\]

is a continuous function of λ. Therefore, using the arguments of Proposition 3.3, we can similarly show that

\[
\max_{t \in I_l^{(n)}} \bigl( f_j(t) - \lambda h_j(t) \bigr)
\]

is a continuous function of λ, which says that η_n(λ) is a continuous function of λ. Finally, we conclude that ε_n(λ) is a continuous function of λ.

We recall that V_n(λ) denotes the common optimal objective value of problems (P_n^{(λ)}) and (D_n^{(λ)}).

Lemma 5.2 The following statements hold true.

(i) If λ ≤ (μ/ν), then V_n(λ) ≥ 0. Therefore, if λ ≤ (μ/ν), then V_n(λ) + ε_n(λ) ≥ 0.
(ii) Suppose that each function h_j satisfies the Lipschitz condition for 1 ≤ j ≤ q. For sufficiently large n, there exists λ_n ≥ 0 such that V_n(λ_n) + ε_n(λ_n) ≤ 0. Moreover, there exists λ°_n with (μ/ν) ≤ λ°_n ≤ λ_n such that V_n(λ°_n) + ε_n(λ°_n) = 0, ε_n(λ°_n) → 0 and ε_n(λ_n) → 0 as n → ∞.
Proof For λ ≤ (μ/ν), by observing the objective value of problem (D_n^{(λ)}), we see that V_n(λ) ≥ 0, which also implies V_n(λ) + ε_n(λ) ≥ 0, since ε_n(λ) ≥ 0. This proves (i).

For the proof of (ii), we define the constant

\[
c_1 = \max_{j=1,\ldots,q} \max \Bigl\{ \max_{t \in [0,T]} f_j(t), \ 0 \Bigr\} \ge 0.
\]

Therefore, for λ ≥ 0, since h_j ≥ 0 for all j = 1, ..., q, from (11) we have

\[
\tau(\lambda) = \max_{j=1,\ldots,q} \max \Bigl\{ \max_{t \in [0,T]} \bigl[ f_j(t) - \lambda h_j(t) \bigr], \ 0 \Bigr\}
\le \max_{j=1,\ldots,q} \max \Bigl\{ \max_{t \in [0,T]} f_j(t), \ 0 \Bigr\} = c_1.
\]

Now, we also define the constant

\[
c_2 = T \cdot \zeta \cdot \frac{c_1}{\sigma} \cdot \exp\Bigl( \frac{\kappa T}{\sigma} \Bigr).
\]

Let ȳ^{(n,λ)} be the optimal solution of problem (D_n^{(λ)}) derived by the recursion in Proposition 3.1. Then, by (12) and the fact that b_l^{(n)} ≤ ζ for all l = 1, ..., n, we have

\[
V_n(\lambda) = \mu - \lambda\nu + \frac{T}{n} \sum_{l=1}^{n} b_l^{(n)}\, \bar{y}_l^{(n,\lambda)}
\le \mu - \lambda\nu + T \cdot \zeta \cdot \frac{\tau(\lambda)}{\sigma} \cdot \exp\Bigl( \frac{\kappa T}{\sigma} \Bigr)
\le \mu - \lambda\nu + c_2 \tag{42}
\]

and

\[
\delta_n(\lambda) = \max_{l=1,\ldots,n} \frac{T}{n}\, \bar{y}_l^{(n,\lambda)}
\le \frac{T}{n} \cdot \frac{\tau(\lambda)}{\sigma} \cdot \exp\Bigl( \frac{\kappa T}{\sigma} \Bigr)
\le \frac{T}{n} \cdot \frac{c_1}{\sigma} \cdot \exp\Bigl( \frac{\kappa T}{\sigma} \Bigr). \tag{43}
\]

On the other hand, from (41), we have

\[
\eta_n(\lambda) = \max_{j=1,\ldots,q} \ \max_{l=1,\ldots,n}
\Bigl[ \max_{t \in I_l^{(n)}} \bigl( f_j(t) - \lambda h_j(t) \bigr) - \min_{s \in I_l^{(n)}} \bigl( f_j(s) - \lambda h_j(s) \bigr) \Bigr]
= \bigl[ f_{j_0}(t_1) - \lambda h_{j_0}(t_1) \bigr] - \bigl[ f_{j_0}(t_2) - \lambda h_{j_0}(t_2) \bigr] \tag{44}
\]

for some j_0 ∈ {1, ..., q} and t_1, t_2 ∈ I_l^{(n)}. Since h_j satisfies the Lipschitz condition for all j = 1, ..., q, there exists a constant c_3 such that

\[
\bigl| h_j(t_1) - h_j(t_2) \bigr| \le c_3 \, |t_1 - t_2| \tag{45}
\]

for all t_1, t_2 ∈ [0, T]. Therefore, from (44), we have

\[
\eta_n(\lambda) = \bigl[ f_{j_0}(t_1) - f_{j_0}(t_2) \bigr] - \lambda \bigl[ h_{j_0}(t_1) - h_{j_0}(t_2) \bigr]
\le \bigl| f_{j_0}(t_1) - f_{j_0}(t_2) \bigr| + \lambda c_3 |t_1 - t_2|
\le \bigl| f_{j_0}(t_1) - f_{j_0}(t_2) \bigr| + \lambda c_3 \cdot \frac{T}{n}. \tag{46}
\]

Now, we define

\[
c_4 = \int_0^T \rho \cdot \exp\bigl( \rho(T - t) \bigr) g(t)\, dt
\]

and

\[
r_n = \bar{\epsilon}_n \cdot \frac{c_1 T}{\sigma} \cdot \exp\Bigl( \frac{\kappa T}{\sigma} \Bigr)
+ \frac{T}{n} \cdot \frac{c_1}{\sigma} \cdot \exp\Bigl( \frac{\kappa T}{\sigma} \Bigr)
\cdot \bigl[ \bar{\epsilon}_n \bigl( \exp(\rho T) - 1 \bigr) + c_4 \bigr]. \tag{47}
\]

Since ε̄_n → 0 as n → ∞, we see that r_n → 0 as n → ∞. Now, we have

\[
\begin{aligned}
\varepsilon_n(\lambda) &= \bar{\epsilon}_n \cdot \delta_n(\lambda) \cdot \bigl( n + \exp(\rho T) - 1 \bigr)
+ \bigl( \eta_n(\lambda) + \delta_n(\lambda) \bigr) \int_0^T \rho \cdot \exp\bigl( \rho(T - t) \bigr) g(t)\, dt \\
&= n\, \bar{\epsilon}_n \cdot \delta_n(\lambda)
+ \delta_n(\lambda) \bigl[ \bar{\epsilon}_n \bigl( \exp(\rho T) - 1 \bigr) + c_4 \bigr] + \eta_n(\lambda)\, c_4
\le r_n + \eta_n(\lambda)\, c_4 \quad [\text{by (43) and (47)}].
\end{aligned} \tag{48}
\]

Therefore, from (42), (48) and (46), we also obtain

\[
\begin{aligned}
V_n(\lambda) + \varepsilon_n(\lambda)
&\le \mu - \lambda\nu + c_2 + r_n + c_4 \Bigl[ \bigl| f_{j_0}(t_1) - f_{j_0}(t_2) \bigr| + \lambda c_3 \cdot \frac{T}{n} \Bigr] \\
&= \mu + c_2 + r_n + c_4 \bigl| f_{j_0}(t_1) - f_{j_0}(t_2) \bigr|
+ \lambda \Bigl( c_3 c_4 \cdot \frac{T}{n} - \nu \Bigr).
\end{aligned} \tag{49}
\]

We define

\[
n_0 = \frac{c_3 c_4 T}{\nu} \tag{50}
\]

and

\[
c_5 = \max_{j=1,\ldots,q} \Bigl[ \max_{t \in [0,T]} f_j(t) - \min_{t \in [0,T]} f_j(t) \Bigr]
\ge \bigl| f_{j_0}(t_1) - f_{j_0}(t_2) \bigr| \ge 0.
\]

For n ∈ N with n > n_0, we also define

\[
\lambda_n = \frac{\mu + c_2 + c_4 c_5 + r_n}{\nu - c_3 c_4 \frac{T}{n}}.
\]

Then, we have

\[
\lambda_n \ge \frac{\mu + c_2 + c_4 \bigl| f_{j_0}(t_1) - f_{j_0}(t_2) \bigr| + r_n}{\nu - c_3 c_4 \frac{T}{n}}. \tag{51}
\]

Since c_3 c_4 · (T/n) − ν < 0 for n > n_0, combining (49) and (51) yields V_n(λ_n) + ε_n(λ_n) ≤ 0. Since V_n(λ) + ε_n(λ) is a continuous function of λ with V_n(μ/ν) + ε_n(μ/ν) ≥ 0 by (i), the intermediate value theorem yields λ°_n with (μ/ν) ≤ λ°_n ≤ λ_n such that V_n(λ°_n) + ε_n(λ°_n) = 0. Finally, one can show that ε_n(λ°_n) → 0 and ε_n(λ_n) → 0 as n → ∞. This completes the proof.

Theorem 5.1 Suppose that each function h_j satisfies the Lipschitz condition for 1 ≤ j ≤ q. For n > n_0, let λ_n denote the zero of V_n(λ) and let λ°_n denote the zero of V_n(λ) + ε_n(λ). Then the following statements hold true.

(i) We have −ε_n(λ°_n) ≤ Q(λ°_n) ≤ 0 ≤ Q(λ_n) ≤ ε_n(λ_n).
(ii) There exists λ* such that λ_n ≤ λ* ≤ λ°_n and Q(λ*) = 0, i.e., V(CP) = V(CLFP) = λ*. Therefore, the optimal solution of (CLP_{λ*}) is also an optimal solution of (CLFP).
(iii) We define the sequence {λ*_n} by

\[
\lambda_n^* = \frac{1}{2} \bigl( \lambda_n^\circ + \lambda_n \bigr). \tag{55}
\]

Then, we have

\[
\bigl| \lambda_n^* - \lambda^* \bigr| \le \frac{1}{2} \bigl( \lambda_n^\circ - \lambda_n \bigr) \to 0 \ \text{ as } n \to \infty,
\]

i.e., λ*_n → λ* as n → ∞.
(iv) Let x̄^{(n,λ*_n)}(t) be the natural solution of problem (CLP_{λ*_n}) constructed from the optimal solution of problem (P_n^{(λ*_n)}). Then x̄^{(n,λ*_n)}(t) is also a feasible solution of (CLFP) and

\[
0 \le V(\mathrm{CLFP}) - \xi\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr) \le Er\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr), \tag{56}
\]

where

\[
\xi\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr)
= \frac{\mu + \sum_{j=1}^{q} \int_0^T f_j(t)\, \bar{x}_j^{(n,\lambda_n^*)}(t)\, dt}
{\nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, \bar{x}_j^{(n,\lambda_n^*)}(t)\, dt}
\]

is the objective value of problem (CLFP) at the feasible solution x̄^{(n,λ*_n)}(t) and

\[
Er\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr)
= \frac{1}{2} \bigl( \lambda_n^\circ - \lambda_n \bigr)
+ \frac{\bigl| \theta\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr) \bigr|}
{\nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, \bar{x}_j^{(n,\lambda_n^*)}(t)\, dt} \tag{57}
\]

with

\[
\theta\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr)
= \mu - \lambda_n^* \nu + \sum_{j=1}^{q} \int_0^T \bigl[ f_j(t) - \lambda_n^* h_j(t) \bigr] \bar{x}_j^{(n,\lambda_n^*)}(t)\, dt.
\]
Moreover, we have Er(x̄^{(n,λ*_n)}(t)) → 0 as n → ∞. This says that the constructed solution x̄^{(n,λ*_n)}(t) is an approximate optimal solution of (CLFP) with error bound Er(x̄^{(n,λ*_n)}(t)).

Proof Let n_0 be defined in (50) and let n > n_0 be a given natural number. Since V_n(λ_n) = 0, by Theorem 3.1, we have

\[
0 \le Q(\lambda_n) - V_n(\lambda_n) = Q(\lambda_n) \le \varepsilon_n(\lambda_n). \tag{58}
\]

From the proof of Lemma 5.2 (ii), there exists λ°_n with λ°_n ≥ (μ/ν) such that

\[
V_n(\lambda_n^\circ) + \varepsilon_n(\lambda_n^\circ) = 0. \tag{59}
\]

Now, using Theorem 3.1 again, we have 0 ≤ Q(λ°_n) − V_n(λ°_n) ≤ ε_n(λ°_n), which implies −ε_n(λ°_n) ≤ Q(λ°_n) ≤ 0 by (59). Therefore, from (58), we obtain

\[
-\varepsilon_n(\lambda_n^\circ) \le Q(\lambda_n^\circ) \le 0 \le Q(\lambda_n) \le \varepsilon_n(\lambda_n). \tag{60}
\]

This proves (i). Since Q(λ) is continuous by Theorem 4.1, there exists λ* such that λ_n ≤ λ* ≤ λ°_n and Q(λ*) = 0, and this implies V(CP) = V(CLFP) = λ* by Proposition 4.3. This proves (ii).

For proving (iii), we consider the sequence {λ*_n} defined in (55). Since ε_n(λ°_n) → 0 and ε_n(λ_n) → 0 as n → ∞ by Lemma 5.2, from (60) we conclude that Q(λ_n) − Q(λ°_n) → 0 as n → ∞. By Proposition 4.1 and Theorem 4.1, we see that the inverse function Q^{-1}(·) is also continuous, which implies

\[
\lambda_n^\circ - \lambda_n = Q^{-1}\bigl( Q(\lambda_n^\circ) \bigr) - Q^{-1}\bigl( Q(\lambda_n) \bigr) \to 0 \ \text{ as } n \to \infty.
\]

Hence,

\[
\bigl| \lambda_n^* - \lambda^* \bigr| \le \frac{1}{2} \bigl( \lambda_n^\circ - \lambda_n \bigr) \to 0 \ \text{ as } n \to \infty.
\]

Finally, for proving (iv), it is obvious that x̄^{(n,λ*_n)}(t) is also a feasible solution of (CLFP). Since

\[
\theta\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr)
= \mu - \lambda_n^* \nu + \sum_{j=1}^{q} \int_0^T \bigl[ f_j(t) - \lambda_n^* h_j(t) \bigr] \bar{x}_j^{(n,\lambda_n^*)}(t)\, dt,
\]

we have

\[
\mu + \sum_{j=1}^{q} \int_0^T f_j(t)\, \bar{x}_j^{(n,\lambda_n^*)}(t)\, dt
= \lambda_n^* \Bigl( \nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, \bar{x}_j^{(n,\lambda_n^*)}(t)\, dt \Bigr)
+ \theta\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr),
\]

which implies

\[
\frac{\mu + \sum_{j=1}^{q} \int_0^T f_j(t)\, \bar{x}_j^{(n,\lambda_n^*)}(t)\, dt}
{\nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, \bar{x}_j^{(n,\lambda_n^*)}(t)\, dt}
= \lambda_n^* + \frac{\theta\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr)}
{\nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, \bar{x}_j^{(n,\lambda_n^*)}(t)\, dt}. \tag{61}
\]

We denote by ξ(x̄^{(n,λ*_n)}(t)) the objective value of problem (CLFP) at the feasible solution x̄^{(n,λ*_n)}(t). Then, from (61), we have

\[
\xi\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr)
= \lambda_n^* + \frac{\theta\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr)}
{\nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, \bar{x}_j^{(n,\lambda_n^*)}(t)\, dt}. \tag{62}
\]

Therefore, by (ii), (iii) and (62), we have

\[
\begin{aligned}
0 &\le V(\mathrm{CLFP}) - \xi\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr)
= \lambda^* - \xi\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr) \\
&\le \bigl| \lambda^* - \lambda_n^* \bigr| + \bigl| \lambda_n^* - \xi\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr) \bigr| \\
&\le \frac{1}{2} \bigl( \lambda_n^\circ - \lambda_n \bigr)
+ \frac{\bigl| \theta\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr) \bigr|}
{\nu + \sum_{j=1}^{q} \int_0^T h_j(t)\, \bar{x}_j^{(n,\lambda_n^*)}(t)\, dt}
\equiv Er\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr).
\end{aligned}
\]

It remains to show that Er(x̄^{(n,λ*_n)}(t)) → 0 as n → ∞. Note that θ(x̄^{(n,λ*_n)}(t)) is the objective value of problem (CLP_{λ*_n}) at the feasible solution x̄^{(n,λ*_n)}(t). Therefore, by Theorem 3.1 (iv), we have

\[
0 \le Q(\lambda_n^*) - \theta\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr) \le \varepsilon_n(\lambda_n^*),
\]

which also implies

\[
Q(\lambda_n^*) - \varepsilon_n(\lambda_n^*) \le \theta\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr) \le Q(\lambda_n^*). \tag{63}
\]

Using the above results (ii) and (iii) and the continuity of Q(λ) given in Theorem 4.1, we have

\[
\lim_{n \to \infty} Q(\lambda_n^*) = Q(\lambda^*) = 0. \tag{64}
\]

According to the same arguments of Lemma 5.2, by referring to (54), we also see that ε_n(λ*_n) → 0 as n → ∞. Therefore, by (63) and (64), we obtain |θ(x̄^{(n,λ*_n)}(t))| → 0 as n → ∞. From (57) and the above result (iii), we conclude that

\[
0 \le Er\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr)
\le \frac{1}{2} \bigl( \lambda_n^\circ - \lambda_n \bigr)
+ \frac{\bigl| \theta\bigl( \bar{x}^{(n,\lambda_n^*)}(t) \bigr) \bigr|}{\nu} \to 0 \ \text{ as } n \to \infty.
\]

This completes the proof.
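The error bound (57) is straightforward to evaluate numerically. The following sketch is our own helper, with the integrals approximated by the trapezoidal rule; it is not part of the paper's procedure.

```python
import numpy as np

def error_bound(lam_circ, lam_n, theta, h, x_bar, nu, T, samples=1000):
    """Er of Eq. (57): (lam_circ - lam_n)/2 + |theta| / (nu + sum_j
    int_0^T h_j(t) x_bar_j(t) dt), with h and x_bar lists of callables."""
    ts = np.linspace(0.0, T, samples)
    denom = nu + sum(np.trapz([hj(t) * xj(t) for t in ts], ts)
                     for hj, xj in zip(h, x_bar))
    return 0.5 * (lam_circ - lam_n) + abs(theta) / denom
```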
According to Theorem 5.1, we are now in a position to propose the computational procedure. First of all, we need to provide two subroutines. The computational procedure of the subroutine for evaluating V_n(λ) is given below.

Subroutine for evaluating V_n(λ):

• Step 1. Evaluate the values shown in Eqs. (5) and (6).
• Step 2. Formulate the finite-dimensional primal and dual pair of linear programming problems (P_n^{(λ)}) and (D_n^{(λ)}) using the values obtained in Step 1.
• Step 3. Use the recurrence method as shown in Proposition 3.2 to obtain the optimal objective value V_n(λ) of problem (P_n^{(λ)}).

Given a continuous function f, there are many efficient numerical methods that can be used to obtain its zeros. We write ZERO(f) to denote a subroutine for obtaining the zeros of the function f. Of course, for obtaining the zeros of V_n(λ) by calling the subroutine ZERO(V_n(λ)), we also need to call the above subroutine many times to evaluate the function values of V_n(λ) for different values of λ. Now, the main computational procedure is given below (a sketch in code follows this list).

Main Program

• Step 1. Set the tolerance ε and the initial number n such that n > n_0, where n_0 is defined in (50).
• Step 2. Since V_n(λ) is a continuous function of λ by Proposition 3.3, find the zero λ_n of V_n(λ) by invoking the subroutine ZERO(V_n(λ)).
• Step 3. Since V_n(λ) + ε_n(λ) is a continuous function of λ by Proposition 3.3 and Lemma 5.1, find the zero λ°_n of V_n(λ) + ε_n(λ) by invoking the subroutine ZERO(V_n(λ) + ε_n(λ)).
• Step 4. Set

\[
\lambda_n^* = \frac{1}{2} \bigl( \lambda_n^\circ + \lambda_n \bigr).
\]

Call the subroutine which evaluates V_n(λ*_n) to obtain the optimal solution of problem (P_n^{(λ*_n)}) and use this optimal solution to construct the natural solution x̄^{(n,λ*_n)}(t) from (16). Evaluate the error bound Er(x̄^{(n,λ*_n)}(t)) defined in (57).
• Step 5. If Er(x̄^{(n,λ*_n)}(t)) < ε, then STOP and return x̄^{(n,λ*_n)}(t) as an approximate optimal solution of the original problem (CLFP) by Theorem 5.1 (iv); otherwise, set n ← n + 1 and go to Step 2.

Now, two numerical examples are provided to clarify the above discussions.
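A compact sketch of the Main Program, under our own assumptions: Vn(n, lam), eps_n(n, lam) and evaluate_Er(n, lam_star) are assumed callable, [lam_lo, lam_hi] is assumed to bracket the zeros, and scipy's brentq stands in for the subroutine ZERO(·).

```python
from scipy.optimize import brentq  # standard bracketing root finder: ZERO(.)

def main_program(Vn, eps_n, evaluate_Er, n0, tol, lam_lo, lam_hi, n_max=2**20):
    """Steps 1-5 of the Main Program (a sketch, not the paper's code)."""
    n = n0 + 1
    while n <= n_max:
        lam_n = brentq(lambda lam: Vn(n, lam), lam_lo, lam_hi)      # Step 2
        lam_c = brentq(lambda lam: Vn(n, lam) + eps_n(n, lam),
                       lam_lo, lam_hi)                              # Step 3
        lam_star = 0.5 * (lam_c + lam_n)                            # Step 4
        Er = evaluate_Er(n, lam_star)   # error bound (57) at the natural solution
        if Er < tol:                                                # Step 5
            return lam_star, n
        n += 1                          # refine the partition and repeat
    return lam_star, n
```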
Example 5.1 We consider the following problem:

\[
\text{maximize} \quad
\frac{\frac{1}{5} + \int_0^1 \ln\bigl( t + \frac{1}{2} \bigr)\, x(t)\, dt}
{1 + \int_0^1 \cos(t)\, x(t)\, dt}
\]
\[
\text{subject to} \quad 2 x(t) \le e^t - 1 + 7 \int_0^t x(s)\, ds \ \text{ for all } t \in [0, 1],
\qquad x \in L^\infty_+[0, 1].
\]

Using the proposed computational procedure, the numerical results are shown in the following table. The approximate optimal solution x̄^{(n,λ*_n)}(t) of the above problem can be obtained by (16), with approximate objective value ξ(x̄^{(n,λ*_n)}(t)) and error bound Er(x̄^{(n,λ*_n)}(t)).
n      λn            λ°n           λ*n           ξ(x̄^(n,λ*n)(t))   Er(x̄^(n,λ*n)(t))
2^10   0.273257571   0.286472746   0.279865158   0.273424902       0.013047844
2^11   0.273547446   0.280041832   0.276794639   0.273663255       0.006378578
2^12   0.273692869   0.276911421   0.275302145   0.273758381       0.003153040
2^13   0.273765704   0.275367788   0.274566746   0.273800397       0.001567392
2^14   0.273802153   0.274601395   0.274201774   0.273819998       0.000781397
2^15   0.273820384   0.274219556   0.274019970   0.273829425       0.000390131
2^16   0.273829502   0.274028976   0.273929239   0.273834052       0.000194924
2^17   0.273834062   0.273933770   0.273883916   0.273836344       0.000097426
2^18   0.273836342   0.273886189   0.273861265   0.273837485       0.000048704
2^19   0.273837482   0.273862403   0.273849942   0.273838054       0.000024350
2^20   0.273838051   0.273850512   0.273844282   0.273838338       0.000012174
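For instance, the data of Example 5.1 underlying this table can be encoded as follows and fed to the sketches above; this instantiation is ours, following the problem statement (μ = 1/5, ν = 1, f_1(t) = ln(t + 1/2), h_1(t) = cos t, g(t) = e^t − 1, β_1 = 2, γ_1 = 7, T = 1).

```python
import numpy as np

# Data of Example 5.1 in the notation of (CLFP):
mu, nu, T = 1.0 / 5.0, 1.0, 1.0
f = [lambda t: np.log(t + 0.5)]      # f_1(t) = ln(t + 1/2)
h = [lambda t: np.cos(t)]            # h_1(t) = cos(t), nonnegative on [0, 1]
g = lambda t: np.exp(t) - 1.0        # g(t) = e^t - 1, nonnegative on [0, 1]
beta, gamma = [2.0], [7.0]           # beta_1 = 2, gamma_1 = 7
```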
Example 5.2 We consider the following problem:

\[
\text{maximize} \quad
\frac{\frac{1}{3} + \int_0^1 \bigl[ \ln\bigl( t + \frac{1}{2} \bigr)\, x_1(t) + t^2\, x_2(t) \bigr] dt}
{1 + \int_0^1 \bigl[ \cos(t)\, x_1(t) + \sin(1 - t)\, x_2(t) \bigr] dt}
\]
\[
\text{subject to} \quad x_1(t) + 3 x_2(t) \le t + \int_0^t \bigl[ 4 x_1(s) + 2 x_2(s) \bigr] ds \ \text{ for all } t \in [0, 1],
\qquad x_j \in L^\infty_+[0, 1] \ \text{ for } j = 1, 2.
\]

Using the proposed computational procedure, the numerical results are shown in the following table. The approximate optimal solution x̄^{(n,λ*_n)}(t) = (x̄_1^{(n,λ*_n)}(t), x̄_2^{(n,λ*_n)}(t))⊤ of the above problem can be obtained by (16), with approximate objective value ξ(x̄^{(n,λ*_n)}(t)) and error bound Er(x̄^{(n,λ*_n)}(t)).
n      λn            λ°n           λ*n           ξ(x̄^(n,λ*n)(t))   Er(x̄^(n,λ*n)(t))
2^10   0.803853375   0.877700885   0.840777130   0.804124884       0.073576001
2^11   0.804077369   0.840538873   0.822308121   0.804227103       0.036311770
2^12   0.804189438   0.822306635   0.813248037   0.804266908       0.018039727
2^13   0.804245494   0.813275949   0.808760722   0.804284977       0.008990972
2^14   0.804273528   0.808781750   0.806527639   0.804293430       0.004488320
2^15   0.804287546   0.806539909   0.805413728   0.804297550       0.002242359
2^16   0.804294555   0.805420301   0.804857428   0.804299567       0.001120733
2^17   0.804298060   0.804860824   0.804579442   0.804300569       0.000560254
2^18   0.804299813   0.804581167   0.804440490   0.804301068       0.000280099
2^19   0.804300689   0.804441359   0.804371024   0.804301317       0.000140043
2^20   0.804301127   0.804371460   0.804336294   0.804301441       0.000070020
6 Conclusions

The Dinkelbach-type algorithm has been widely used for solving fractional programming problems. In this paper, a similar type of Dinkelbach algorithm has been successfully used to solve the continuous-time linear fractional programming problem.
We first transform the continuous-time linear fractional programming problem into a continuous-time nonlinear programming problem. In order to follow the Dinkelbach-type algorithm, we propose the auxiliary problem (CP), which is formulated as a parametric continuous-time linear programming problem (CLP_λ). We also introduce a dual problem (DCLP_λ) of this parametric continuous-time linear programming problem for which the weak duality theorem has been established. In order to obtain approximate solutions of (CLP_λ), we also introduce the discrete approximation method to solve the primal and dual pair of parametric continuous-time linear programming problems (CLP_λ) and (DCLP_λ). The Dinkelbach-type computational procedure to obtain the approximate solution of (CLFP) is briefly summarized below:

• Step 1. Transform the original continuous-time linear fractional programming problem (CLFP) into a parametric programming problem (CLP_λ).
• Step 2. Define a real-valued function Q(λ) which represents the optimal value of problem (CLP_λ). The purpose is to find the zeros of Q(λ).
• Step 3. If Q(λ_k) = 0, then λ_k is the optimal value of the original continuous-time linear fractional programming problem (CLFP). In this case, the optimal solution of problem (CLP_{λ_k}) is also the optimal solution of problem (CLFP).

Two numerical examples have been provided to demonstrate the usefulness of this practical algorithm. Of course, many types of conventional fractional programming problems can also be extended to continuous-time problems. However, not all of these formulations can be solved readily once they are extended to the continuous-time setting. In future research, we are going to explore popular types of continuous-time fractional programming problems and to develop efficient computational procedures to solve them.
References

1. Anderson, E.J., Nash, P., Perold, A.F.: Some properties of a class of continuous linear programs. SIAM J. Control Optim. 21, 758–765 (1983)
2. Anderson, E.J., Philpott, A.B.: On the solutions of a class of continuous linear programs. SIAM J. Control Optim. 32, 1289–1296 (1994)
3. Anderson, E.J., Pullan, M.C.: Purification for separated continuous linear programs. Math. Methods Oper. Res. 43, 9–33 (1996)
4. Bellman, R.E.: Dynamic Programming. Princeton University Press, Princeton (1957)
5. Farr, W.H., Hanson, M.A.: Continuous time programming with nonlinear constraints. J. Math. Anal. Appl. 45, 96–115 (1974)
6. Farr, W.H., Hanson, M.A.: Continuous time programming with nonlinear time-delayed constraints. J. Math. Anal. Appl. 46, 41–61 (1974)
7. Fleischer, L., Sethuraman, J.: Efficient algorithms for separated continuous linear programs: the multicommodity flow problem with holding costs and extensions. Math. Oper. Res. 30, 916–938 (2005)
8. Friedman, A.: Foundations of Modern Analysis. Dover Publications, New York (1982)
9. Frenk, H., Schaible, S.: Fractional programming. In: Floudas, C.A., Pardalos, P.M. (eds.) Encyclopedia of Optimization, vol. 2, pp. 162–172. Kluwer, Dordrecht (2001)
10. Grinold, R.C.: Continuous programming part one: linear objectives. J. Math. Anal. Appl. 28, 32–51 (1969)
11. Grinold, R.C.: Continuous programming part two: nonlinear objectives. J. Math. Anal. Appl. 27, 639–655 (1969)
12. Hanson, M.A., Mond, B.: A class of continuous convex programming problems. J. Math. Anal. Appl. 22, 427–437 (1968)
13. Levinson, N.: A class of continuous linear programming problems. J. Math. Anal. Appl. 16, 73–83 (1966)
14. Liang, Z.A., Pardalos, P.M., Huang, H.X.: Optimality conditions and duality for a class of nonlinear fractional programming problems. J. Optim. Theory Appl. 110, 611–619 (2001)
15. Marsden, J.E.: Elementary Classical Analysis. W. H. Freeman, New York (1973)
16. Meidan, R., Perold, A.F.: Optimality conditions and strong duality in abstract and continuous-time linear programming. J. Optim. Theory Appl. 40, 61–77 (1983)
17. Nobakhtian, S., Pouryayevali, M.R.: Optimality criteria for nonsmooth continuous-time problems of multiobjective optimization. J. Optim. Theory Appl. 136, 69–76 (2008)
18. Nobakhtian, S., Pouryayevali, M.R.: Duality for nonsmooth continuous-time problems of vector optimization. J. Optim. Theory Appl. 136, 77–85 (2008)
19. Papageorgiou, N.S.: A class of infinite dimensional linear programming problems. J. Math. Anal. Appl. 87, 228–245 (1982)
20. Pardalos, P.M., Phillips, A.: Global optimization of fractional programming. J. Glob. Optim. 1, 173–182 (1991)
21. Pullan, M.C.: An algorithm for a class of continuous linear programs. SIAM J. Control Optim. 31, 1558–1577 (1993)
22. Pullan, M.C.: Forms of optimal solutions for separated continuous linear programs. SIAM J. Control Optim. 33, 1952–1977 (1995)
23. Pullan, M.C.: A duality theory for separated continuous linear programs. SIAM J. Control Optim. 34, 931–965 (1996)
24. Pullan, M.C.: Convergence of a general class of algorithms for separated continuous linear programs. SIAM J. Optim. 10, 722–731 (2000)
25. Pullan, M.C.: An extended algorithm for separated continuous linear programs. Math. Program. Ser. A 93, 415–451 (2002)
26. Reiland, T.W.: Optimality conditions and duality in continuous programming I: convex programs and a theorem of the alternative. J. Math. Anal. Appl. 77, 297–325 (1980)
27. Reiland, T.W.: Optimality conditions and duality in continuous programming II: the linear problem revisited. J. Math. Anal. Appl. 77, 329–343 (1980)
28. Reiland, T.W., Hanson, M.A.: Generalized Kuhn-Tucker conditions and duality for continuous nonlinear programming problems. J. Math. Anal. Appl. 74, 578–598 (1980)
29. Rojas-Medar, M.A., Brandao, J.V., Silva, G.N.: Nonsmooth continuous-time optimization problems: sufficient conditions. J. Math. Anal. Appl. 227, 305–318 (1998)
30. Schechter, M.: Duality in continuous linear programming. J. Math. Anal. Appl. 37, 130–141 (1972)
31. Schaible, S.: Fractional programming: applications and algorithms. Eur. J. Oper. Res. 7, 111–120 (1981)
32. Schaible, S.: Fractional programming. Zeitschrift für Oper. Res. 27, 39–54 (1983)
33. Schaible, S., Shi, J.: Recent developments in fractional programming: single-ratio and max-min case. In: Takahashi, W., Tanaka, T. (eds.) Nonlinear Analysis and Convex Analysis, pp. 493–506. Yokohama Publishers, Yokohama (2004)
34. Singh, C.: A sufficient optimality criterion in continuous time programming for generalized convex functions. J. Math. Anal. Appl. 62, 506–511 (1978)
35. Singh, C., Farr, W.H.: Saddle-point optimality criteria of continuous time programming without differentiability. J. Math. Anal. Appl. 59, 442–453 (1977)
36. Stancu-Minasian, I.M.: Fractional Programming: Theory, Methods and Applications. Kluwer, Dordrecht (1997)
37. Stancu-Minasian, I.M., Tigan, S.: Continuous time linear-fractional programming: the minimum-risk approach. RAIRO Oper. Res. 34, 397–409 (2000)
38. Tyndall, W.F.: A duality theorem for a class of continuous linear programming problems. SIAM J. Appl. Math. 15, 644–666 (1965)
39. Tyndall, W.F.: An extended duality theorem for continuous linear programming problems. SIAM J. Appl. Math. 15, 1294–1298 (1967)
40. Wen, C.-F., Lur, Y.-Y., Wu, Y.-K.: A recurrence method for a special class of continuous time linear programming problems. J. Glob. Optim. doi:10.1007/s10898-009-9459-2
41. Weiss, G.: A simplex based algorithm to solve separated continuous linear programs. Math. Program. Ser. A 115, 151–198 (2008)
42. Zalmai, G.J.: Duality for a class of continuous-time homogeneous fractional programming problems. Z. Oper. Res. Ser. A-B 30, 43–48 (1986)
43. Zalmai, G.J.: Duality for a class of continuous-time fractional programming problems. Utilitas Math. 31, 209–218 (1987)
44. Zalmai, G.J.: Optimality conditions and duality for a class of continuous-time generalized fractional programming problems. J. Math. Anal. Appl. 153, 365–371 (1990)
45. Zalmai, G.J.: Optimality conditions and duality models for a class of nonsmooth constrained fractional optimal control problems. J. Math. Anal. Appl. 210, 114–149 (1997)