J Optim Theory Appl (2011) 148: 237–256 DOI 10.1007/s10957-010-9753-7
Semi-Infinite Optimization under Convex Function Perturbations: Lipschitz Stability N.Q. Huy · J.-C. Yao
Published online: 19 October 2010 © Springer Science+Business Media, LLC 2010
Abstract This paper is devoted to the study of the stability of the solution map for the parametric convex semi-infinite optimization problem under convex function perturbations (PCSI, for short). We establish sufficient conditions for the pseudo-Lipschitz property of the solution map of PCSI under perturbations of both the objective function and the constraint set. The main result obtained is new even when the problems under consideration reduce to linear semi-infinite optimization. Examples are given to illustrate the obtained results. Keywords Convex programming · Semi-infinite optimization · Solution map · Lipschitz stability · Slater constraint qualification
1 Introduction We will denote by CO(R^n) the set of all finite convex functions on R^n. Let T be a nonempty compact subset of a metric space, and let C1(R^n × T) be the set of all continuous functions g : R^n × T → R such that g_t(·) := g(·, t) is convex for all t ∈ T. Consider the parametric convex semi-infinite optimization problem (PCSI) under functional perturbations of both the objective function and the constraint set on the parameter
This work was supported by Grant NSC 99-2221-E-110-038-MY3 and by Vietnam's National Foundation for Science and Technology Development (NAFOSTED), respectively. N.Q. Huy Department of Mathematics, Hanoi Pedagogical University No. 2, Xuan Hoa, Phuc Yen, Vinh Phuc Province, Vietnam e-mail:
[email protected] J.-C. Yao () Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung 804, Taiwan e-mail:
[email protected]
space P := CO(R^n) × C1(R^n × T), formulated as follows: for each parameter p := (f, g), we have the convex semi-infinite optimization problem (CSI)_p:
min f (x)
subject to x ∈ C(p),
where C(p) = {x ∈ Ω | g_t(x) := g(x, t) ≤ 0, ∀t ∈ T} is the set of feasible points of (CSI)_p and Ω is a closed convex subset of R^n. This paper contributes to the recent development of parametric convex semi-infinite optimization: namely, we investigate the pseudo-Lipschitz property of the solution map of PCSI. It is well known that semi-infinite optimization problems arise naturally in approximation theory, optimal control, and numerous engineering problems. Many papers are published every year on the theory, methods and applications of semi-infinite programming and its extensions; see, e.g., [1–22] and the references therein for more comments and discussions. In particular, solution stability (lower semicontinuity, upper semicontinuity, the pseudo-Lipschitz property, metric regularity) has attracted much attention of researchers in recent years (see [1–10, 12–17, 21, 22]). In parametric linear semi-infinite optimization, sufficient conditions for the lower and upper semicontinuity of the minimal value function under perturbations of both the objective function and the constraint set were given by B. Brosowski in [2]. M.J. Cánovas et al. [3] derived other characterizations of the same properties of the solution map and the optimal value function, together with the Lipschitz property of the optimal value function. Furthermore, the pseudo-Lipschitz property of the solution map was investigated in [8] for parametric linear vector semi-infinite optimization problems under linear perturbations of the objective function and continuous perturbations of the right-hand side of the constraints. In the framework of parametric convex semi-infinite optimization problems, sufficient conditions for the metric regularity of the inverse of the solution map (or, equivalently, for the pseudo-Lipschitz property of the solution map) under continuous perturbations of the right-hand side of the constraints and linear perturbations of the objective function were presented in [5].
The main goal of this paper is to establish sufficient conditions for the pseudo-Lipschitz property of the solution map of PCSI,

S(p) := {x* ∈ C(p) | f(x*) ≤ f(x), ∀x ∈ C(p)},

under convex functional perturbations of both the objective function and the constraint set. The main result obtained in the paper is new even when the problems under consideration reduce to linear semi-infinite optimization. Our result extends the corresponding result in [5] for convex semi-infinite optimization problems under canonical perturbations. The paper is organized as follows. In Sect. 2, we recall some basic definitions and preliminaries from convex analysis and set-valued analysis, and give some auxiliary
results, which will be useful in the next section. In Sect. 3, we present sufficient conditions for the pseudo-Lipschitz property of the solution map of PCSI under convex functional perturbations of both the objective function and the constraint set. Moreover, examples are given to illustrate the obtained results. An application to parametric linear semi-infinite optimization problems is presented in Sect. 4.
2 Preliminaries and Auxiliary Results Let us first recall some standard notions from convex analysis and set-valued analysis; see, e.g., [19, 23–25]. Let (X, d) be a metric space. Given M ⊂ X, the closure and the interior of M are denoted by cl M and int M, respectively. We will use N(x) to denote the set of all neighborhoods of x ∈ X. The distance from x ∈ X to M is defined by d(x, M) := inf{dist(x, y) | y ∈ M}, where dist(x, y) denotes the distance between the points x and y, and d(x, ∅) := +∞. In particular, when X = R^n (n ∈ N), we will denote by co M and cone M the convex hull and the convex conical hull of M, respectively. Write co ∅ = ∅ and cone ∅ = {0_n}, where 0_n is the zero vector of R^n. B stands for the open unit ball in R^n, and B(x, ρ) := x + ρB. Let F : X ⇒ Y be a multifunction between metric spaces. The effective domain and the graph of F are given, respectively, by

dom F := {x ∈ X | F(x) ≠ ∅},  gph F := {(x, y) ∈ X × Y | y ∈ F(x)}.
Definition 2.1
(i) F is said to be closed at the point x^0 ∈ X iff for all sequences {x^i} in X and {y^i} in Y satisfying x^i → x^0, y^i → y^0 for some y^0 ∈ Y, and y^i ∈ F(x^i) for all i ∈ N, one has y^0 ∈ F(x^0).
(ii) F is upper semicontinuous (usc, for brevity) at x^0 ∈ X iff for every open set V containing F(x^0) there exists U_0 ∈ N(x^0) such that F(x) ⊂ V for all x ∈ U_0.
(iii) F is said to be lower semicontinuous (lsc, for brevity) at x^0 ∈ dom F iff for any open set V ⊂ Y satisfying V ∩ F(x^0) ≠ ∅ there exists U_0 ∈ N(x^0) such that V ∩ F(x) ≠ ∅ for all x ∈ U_0.
(iv) F is said to be continuous at x^0 ∈ dom F iff it is both upper semicontinuous and lower semicontinuous at x^0.
(v) F is pseudo-Lipschitz (also called Aubin continuous or Lipschitz-like) at (x^0, y^0) ∈ gph F iff there exist U ∈ N(x^0), V ∈ N(y^0) and a constant ℓ > 0 such that

d(y^2, F(x^1)) ≤ ℓ d(x^1, x^2)  for all x^1, x^2 ∈ U and all y^2 ∈ V ∩ F(x^2).
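The estimate in (v) can be tried out on simple one-dimensional instances. The sketch below is our own illustration (the multifunctions F and G are not taken from the paper): it verifies the pseudo-Lipschitz inequality with constant ℓ = 1 for F(x) = [x, +∞[ and shows that no finite constant works for G(x) = {√|x|} at (0, 0).

```python
import numpy as np

rng = np.random.default_rng(0)

def dist_to_F(y, x):
    # d(y, F(x)) for F(x) = [x, +inf): zero if y already lies in F(x)
    return max(0.0, x - y)

# F is pseudo-Lipschitz with l = 1: for y2 in F(x2), i.e. y2 >= x2,
# d(y2, F(x1)) = max(0, x1 - y2) <= max(0, x1 - x2) <= |x1 - x2|.
for _ in range(1000):
    x1, x2 = rng.uniform(-1.0, 1.0, size=2)
    y2 = x2 + rng.uniform(0.0, 1.0)      # an arbitrary point of F(x2)
    assert dist_to_F(y2, x1) <= abs(x1 - x2) + 1e-12

# G(x) = {sqrt(|x|)} fails the property at (0, 0): the ratio
# d(G(x), G(0)) / |x - 0| = sqrt(x) / x blows up as x -> 0+.
xs = 10.0 ** -np.arange(1, 8)
ratios = np.sqrt(xs) / xs
assert ratios[-1] > 1.0e3                # no finite constant l can work
```

The first check exercises the definition with U = V = ]-1, 1[; the second shows why single-valued maps with unbounded difference quotients cannot be Lipschitz-like.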
Now we define, as in [20], a distance between convex functions in CO(R^n). Let {K_m}_{m=1}^∞ be a sequence of compact sets in R^n such that

K_m ⊂ int K_{m+1}  and  R^n = ∪_{m=1}^∞ K_m. (1)

In particular, we could have considered K_m = m(cl B). Let f, g ∈ CO(R^n). First, for each m = 1, 2, . . ., we define

δ_m(f, g) := sup_{x∈K_m} |f(x) − g(x)|.

Then we define a metric σ on CO(R^n) by

σ(f, g) := Σ_{m=1}^∞ (1/2^m) δ_m(f, g)/(1 + δ_m(f, g))  ∀f, g ∈ CO(R^n), (2)
which describes the topology of uniform convergence of convex functions on compact sets.

Lemma 2.1 [20, Proposition 3.1] Let f ∈ CO(R^n) and {f^k}_{k=1}^∞ ⊂ CO(R^n). Then σ(f^k, f) → 0 as k → ∞ if and only if f^k converges uniformly to f on every compact subset of R^n.
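For intuition, the metric (2) can be approximated by truncating the series. The helper below is our own illustrative code (not from the paper); it uses K_m = [−m, m] in R^1 in place of m·cl B and approximates the supremum δ_m on a grid.

```python
import numpy as np

def delta_m(f, g, m, pts=2001):
    # delta_m(f, g) = sup over K_m of |f - g|; here K_m = [-m, m] in R^1,
    # with the sup approximated on a finite grid
    xs = np.linspace(-m, m, pts)
    return float(np.max(np.abs(f(xs) - g(xs))))

def sigma(f, g, terms=40):
    # truncation of the series (2); the neglected tail is below 2**(-terms)
    total = 0.0
    for m in range(1, terms + 1):
        d = delta_m(f, g, m)
        total += 2.0 ** -m * d / (1.0 + d)
    return total

f = lambda x: x ** 2
assert sigma(f, f) == 0.0
# a uniformly small perturbation is sigma-close, as in Lemma 2.1
assert sigma(f, lambda x: x ** 2 + 1e-4) < sigma(f, lambda x: x ** 2 + 1.0)
assert sigma(f, lambda x: x ** 2 + 1.0) < 1.0   # sigma is always < sum 2^-m = 1
```

Note that each term of (2) is bounded by 2^{−m}, which is why σ is finite even when δ_m grows with m.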
Let h : R^n → R ∪ {+∞} be a proper, lower semicontinuous, convex function. Denote by ∂h(x̄) the subdifferential of h at x̄ ∈ dom h,

∂h(x̄) := {u* ∈ R^n | h(x) − h(x̄) ≥ ⟨u*, x − x̄⟩, ∀x ∈ R^n}.

We denote by N_Ω(x̄) the normal cone to Ω ⊂ R^n at x̄ ∈ cl Ω,

N_Ω(x̄) := {u* ∈ R^n | ⟨u*, x − x̄⟩ ≤ 0, ∀x ∈ Ω}.

The set of active constraints at x ∈ C(p) is defined by T_p(x) = {t ∈ T | g_t(x) := g(x, t) = 0}.

Lemma 2.2 The multifunction T : P × R^n ⇒ T such that T(p, x) = T_p(x) is usc at every (p^0, x^0) ∈ P × R^n.

Proof The proof is straightforward and so is omitted.
Let p ∈ P. We say, as in [12], that p satisfies the Slater constraint qualification iff there exists x̂ ∈ Ω such that g(x̂, t) < 0 for all t ∈ T.
Lemma 2.3 Let p = (f, g) ∈ P. Then p satisfies the Slater constraint qualification if and only if 0_n ∉ co({∂g_t(x̄) | t ∈ T_p(x̄)}) + N_Ω(x̄) for all x̄ ∈ C(p) with T_p(x̄) ≠ ∅.

Proof Define the function h : R^n → R by h(x) := max_{t∈T} g_t(x). Obviously, h is convex on R^n and C(p) = {x ∈ Ω | h(x) ≤ 0}. Since h is a finite-valued convex function, it is continuous. Moreover, for each x̄ ∈ C(p) with T_p(x̄) ≠ ∅, we have h(x̄) := max_{t∈T} g_t(x̄) = 0. Then p satisfies the Slater constraint qualification if and only if there is no x̄ ∈ C(p) with T_p(x̄) ≠ ∅ that is a minimizer of h on Ω. This is equivalent to

0_n ∉ ∂h(x̄) + N_Ω(x̄). (3)

It follows from [19, Theorem VI.4.4.2] that

∂h(x̄) = co({∂g_t(x̄) | t ∈ T_p(x̄)}). (4)

Combining (3) and (4), we obtain 0_n ∉ co({∂g_t(x̄) | t ∈ T_p(x̄)}) + N_Ω(x̄).
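Since the Slater condition for p (with Ω = R^n) amounts to inf_x max_{t∈T} g(x, t) < 0, it can be probed numerically by discretization. A rough grid-based sketch, entirely our own illustration (the grids and the second test function are not from the paper):

```python
import numpy as np

T = np.linspace(1.0, 2.0, 201)       # a compact index set T = [1, 2], discretized
X = np.linspace(-5.0, 5.0, 2001)     # search box for a Slater point

def min_of_h(g):
    # h(x) = max_{t in T} g(x, t); Slater holds iff h dips below 0 somewhere
    H = np.max(g(X[:, None], T[None, :]), axis=1)
    return float(H.min())

g_ex31 = lambda x, t: -x - t               # the constraints of Example 3.1 below
g_bad  = lambda x, t: np.abs(x) + 0.0 * t  # |x| <= 0: feasible only at x = 0

assert min_of_h(g_ex31) < 0.0   # e.g. x = 3 gives max_t(-3 - t) = -4 < 0
assert min_of_h(g_bad) >= 0.0   # no strictly feasible point: Slater fails
```

Compactness of T is what makes the max over the t-grid a faithful approximation of h.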
Next, we define the distance between functions in C1(R^n × T):

σ_∞(g, ḡ) := sup_{t∈T} σ(g_t, ḡ_t)  for all g, ḡ ∈ C1(R^n × T), (5)

where g_t(x) := g(x, t) for all x ∈ R^n and t ∈ T. Then (C1(R^n × T), σ_∞) is a complete metrizable space. It is well known that convergence of a sequence {g^k}_{k=1}^∞ ⊂ C1(R^n × T) to g ∈ C1(R^n × T) amounts to the uniform convergence, on T, of the functions g^k(x, ·) to g(x, ·) for every x ∈ R^n. From now on, ‖·‖ denotes the Euclidean norm in R^n. The distance between two elements p = (f, g) and p̄ = (f̄, ḡ) of the parameter space P := CO(R^n) × C1(R^n × T) is given by

d_∞(p, p̄) := max{σ(f, f̄), σ_∞(g, ḡ)}, (6)

where σ and σ_∞ are defined as in (2) and (5), respectively.

Lemma 2.4 Let p^0 = (f^0, g^0) ∈ P. If p^0 satisfies the Slater constraint qualification, then C : P ⇒ R^n is lower semicontinuous at p^0.

Proof Let p^0 = (f^0, g^0) ∈ P. Let V be an open convex set such that V ∩ C(p^0) ≠ ∅. Since p^0 satisfies the Slater constraint qualification and T is compact, there exist an element x̂ ∈ C(p^0) and ρ > 0 such that

g^0(x̂, t) ≤ −ρ  ∀t ∈ T. (7)
Take x ∈ V ∩ C(p^0) and choose a number r̄ ∈ ]0, 1] such that x_r̄ := x + r̄(x̂ − x) ∈ V. By the convexity of C(p^0) and V, x_r := x + r(x̂ − x) ∈ V ∩ C(p^0) for all r ∈ [0, r̄]. It follows from (7) that

g^0(x_r, t) ≤ (1 − r)g^0(x, t) + r g^0(x̂, t) ≤ −rρ  ∀t ∈ T, ∀r ∈ [0, r̄]. (8)

It remains to prove that there exists r_1 > 0 such that, for every p̄ := (f̄, ḡ) ∈ P satisfying d_∞(p̄, p^0) < r_1 ρ,

V ∩ C(p̄) ≠ ∅. (9)

Take an arbitrary ε > 0 and let

B(p^0, ρε) := {p ∈ P | d_∞(p, p^0) < ρε}. (10)
Take any p̄ := (f̄, ḡ) ∈ B(p^0, ρε). Let {K_m} be the sequence of compact subsets of R^n defined as in (1). Then there exists j ∈ N such that x_r ∈ K_j for all r ∈ [0, 1]. Let t ∈ T. Since the function τ ↦ τ/(1 + τ) is increasing on [0, +∞[, it follows from (1) that

0 ≤ (1/2^j) δ_j(ḡ_t, g_t^0)/(1 + δ_j(ḡ_t, g_t^0)) ≤ Σ_{i=1}^∞ (1/2^i) δ_i(ḡ_t, g_t^0)/(1 + δ_i(ḡ_t, g_t^0)) = σ(ḡ_t, g_t^0) ≤ σ_∞(ḡ, g^0).

Obviously, σ_∞(ḡ, g^0) ≤ d_∞(p̄, p^0). Hence,

0 ≤ (1/ρ)(1/2^j) δ_j(ḡ_t, g_t^0)/(1 + δ_j(ḡ_t, g_t^0)) ≤ d_∞(p̄, p^0)/ρ < ε.

This implies δ_j(ḡ_t, g_t^0)/ρ → 0 as ε → 0, for all t ∈ T. Therefore, there exist r_1 > 0 and r_0 ∈ ]0, r̄[ such that δ_j(ḡ_t, g_t^0) ≤ r_0 ρ for all t ∈ T whenever d_∞(p̄, p^0) ≤ r_1 ρ. Hence,

δ_j(ḡ_t, g_t^0) ≤ r_0 ρ < r̄ρ  ∀t ∈ T. (11)

Let x_s := x + s(x̂ − x), s ∈ [r_0, r̄]. Clearly, x_s ∈ V ∩ C(p^0). We claim that x_s ∈ C(p̄). Indeed, it follows from (11) that, for every t ∈ T,

ḡ(x_s, t) − g^0(x_s, t) ≤ sup_{x∈K_j} |ḡ(x, t) − g^0(x, t)| = δ_j(ḡ_t, g_t^0) ≤ r_0 ρ.

Then, by (8),

ḡ(x_s, t) ≤ g^0(x_s, t) + r_0 ρ ≤ −(s − r_0)ρ ≤ 0  ∀t ∈ T.

This implies x_s ∈ C(p̄) whenever d_∞(p̄, p^0) ≤ r_1 ρ, which proves (9). Thus C is lower semicontinuous at p^0. The proof is complete.
Lemma 2.5 Let p^0 = (f^0, g^0) ∈ P. Suppose that the following conditions hold: (i) p^0 satisfies the Slater constraint qualification; (ii) S(p^0) = {x^0}. Then S is lower semicontinuous at p^0.

Proof We first show that C is closed at p^0. Let {p^k = (f^k, g^k)}_{k=1}^∞ ⊂ P and {x^k}_{k=1}^∞ ⊂ R^n be sequences such that x^k ∈ C(p^k), x^k → x^0 and p^k → p^0 as k → ∞. It suffices to show that x^0 ∈ C(p^0). From Lemma 2.1 and the compactness of T, it follows that, for any ε > 0, there exist ρ > 0 and a positive integer k_0 such that |g^k(x, t) − g^0(x^0, t)| < ε for all x ∈ B(x^0, ρ), all t ∈ T and all k ≥ k_0. In particular, since x^k ∈ B(x^0, ρ) for k large enough and g^k(x^k, t) ≤ 0, we get g^0(x^0, t) < ε for all t ∈ T; as ε > 0 was arbitrary and x^0 ∈ Ω, it follows that x^0 ∈ C(p^0).

Lemma 2.6 Let ϕ ∈ CO(R^n), let {ϕ^k}_{k=1}^∞ ⊂ CO(R^n) satisfy σ(ϕ^k, ϕ) → 0 as k → ∞, and let {x^k} be a sequence converging to x in R^n. Then, for any ε > 0, there exists an index k_0 ∈ N such that

∂ϕ^k(x^k) ⊂ ∂ϕ(x) + εB  for all k ≥ k_0.
3 Lipschitz Stability of the Solution Map In this section, we present sufficient conditions for the pseudo-Lipschitz property of the solution map S of PCSI at the nominal parameter. We first recall an optimality condition for (CSI)_p at a given point.

Lemma 3.1 Let p^0 = (f^0, g^0) ∈ P and suppose that p^0 satisfies the Slater constraint qualification. Then x^0 is a solution of (CSI)_{p^0} if and only if

−∂f^0(x^0) ∩ [cone(∪_{t∈T_{p^0}(x^0)} ∂g_t^0(x^0)) + N_Ω(x^0)] ≠ ∅.

Proof It is immediate from Theorem 4.1 in [22].
From the above optimality condition, we can establish the following result, which is useful in the sequel. |T_0| denotes the cardinality of the set T_0.

Proposition 3.1 Let p^0 = (f^0, g^0) ∈ P and x^0 ∈ S(p^0). Suppose that the following conditions hold:
(i) p^0 satisfies the Slater constraint qualification;
(ii) There is no T_0 ⊂ T_{p^0}(x^0) with |T_0| < n satisfying

−∂f^0(x^0) ∩ [cone(∪_{t∈T_0} ∂g_t^0(x^0)) + N_Ω(x^0)] ≠ ∅.

Then the following statements are valid:
(a) for any sequence {(p^k, x^k) = (f^k, g^k, x^k)}_{k=1}^∞ ⊂ gph S which converges to (p^0, x^0) = (f^0, g^0, x^0) in gph S, there exist u^k ∈ ∂f^k(x^k), t_i^k ∈ T_{p^k}(x^k), u_i^k ∈ ∂g^k_{t_i^k}(x^k) and λ_i^k > 0 for i ∈ {1, 2, . . . , n} such that

−u^k − Σ_{i=1}^n λ_i^k u_i^k ∈ N_Ω(x^k)  for k large enough

and {u_1^k, . . . , u_n^k} forms a basis of R^n;
(b) S(p^0) = {x^0}.
Proof (a) It is easily seen that p satisfies the Slater constraint qualification in some neighborhood of p^0. Let {(p^k, x^k) = (f^k, g^k, x^k)}_{k=1}^∞ be a sequence in gph S such that {(p^k, x^k)} converges to (p^0, x^0) = (f^0, g^0, x^0) ∈ gph S. Since (p^k, x^k) → (p^0, x^0) as k → ∞, it follows that p^k satisfies the Slater constraint qualification for k large enough. Applying Lemma 3.1, we can assert from Carathéodory's theorem that, for k large enough, there exist q ∈ N, u^k ∈ ∂f^k(x^k), t_i^k ∈ T_{p^k}(x^k), u_i^k ∈ ∂g^k_{t_i^k}(x^k) and λ_i^k > 0 for i ∈ {1, 2, . . . , q} such that q ≤ n,

−u^k − Σ_{i=1}^q λ_i^k u_i^k ∈ N_Ω(x^k), (17)
and {u_i^k | i = 1, . . . , q} is a linearly independent system. It remains to prove that q = n. Conversely, suppose that q < n. By the compactness of T, we can assume, by taking a subsequence if necessary, that {t_i^k} converges to some t_i ∈ T for each i ∈ {1, . . . , q}. Since t_i^k ∈ T_{p^k}(x^k), it follows from Lemma 2.2 that t_i ∈ T_{p^0}(x^0). We claim that, for each i ∈ {1, . . . , q}, there exists λ_i ≥ 0 such that

lim_{k→∞} λ_i^k = λ_i. (18)

Indeed, if our claim were false then, by taking a subsequence if necessary, we can assume that there exists i_0 ∈ {1, . . . , q} such that lim_{k→∞} λ_{i_0}^k = +∞. Put μ_k := Σ_{i=1}^q λ_i^k, k ≥ 1. Then lim_{k→∞} μ_k = +∞, and there is no loss of generality in assuming that the sequence {λ_i^k/μ_k}_{k≥k_0} converges to some μ_i ≥ 0 for each i ∈ {1, . . . , q}. Dividing (17) by μ_k and letting k → ∞, we deduce from assumption (i), the robustness of the normal cone [23, p. 58], Lemma 2.6 and the compactness of the subdifferential of a finite-valued convex function that

−Σ_{i=1}^q μ_i u_i ∈ N_Ω(x^0)  with  Σ_{i=1}^q μ_i = 1

and u_i ∈ ∂g^0_{t_i}(x^0). This means that 0_n ∈ co({∂g^0_t(x^0) | t ∈ T_{p^0}(x^0)}) + N_Ω(x^0), which contradicts the assertion of Lemma 2.3, and (18) follows. Letting k → ∞ in (17), we get u ∈ ∂f^0(x^0), u_i ∈ ∂g^0_{t_i}(x^0) (i = 1, 2, . . . , q) and

−u − Σ_{i=1}^q λ_i u_i ∈ N_Ω(x^0)  with {t_1, . . . , t_q} ⊂ T_{p^0}(x^0) and q < n, (19)

which contradicts assumption (ii). Thus q = n.
(b) Since x^0 ∈ S(p^0), it follows from Carathéodory's theorem and assumption (ii) that there exist u ∈ ∂f^0(x^0), t_i ∈ T_{p^0}(x^0), u_i ∈ ∂g^0_{t_i}(x^0) and λ_i > 0 for i ∈ {1, . . . , n} such that {u_1, . . . , u_n} is a basis of R^n and

−u − Σ_{i=1}^n λ_i u_i ∈ N_Ω(x^0). (20)

Take any y ∈ S(p^0). Then, for each i ∈ {1, . . . , n}, g^0_{t_i}(x^0) = 0 and

⟨u_i, y − x^0⟩ ≤ g^0_{t_i}(y) − g^0_{t_i}(x^0) = g^0_{t_i}(y) ≤ 0.

Hence,

⟨u_i, y − x^0⟩ ≤ 0. (21)

Besides, ⟨−u − Σ_{i=1}^n λ_i u_i, y − x^0⟩ ≤ 0. Therefore,

0 = f^0(y) − f^0(x^0) ≥ ⟨u, y − x^0⟩ ≥ −Σ_{i=1}^n λ_i ⟨u_i, y − x^0⟩ ≥ 0.

This implies that Σ_{i=1}^n λ_i ⟨u_i, y − x^0⟩ = 0 and so ⟨u_i, y − x^0⟩ = 0 for every i ∈ {1, . . . , n}. Since {u_1, . . . , u_n} is a basis of R^n, there exist real numbers β_i, i ∈ {1, . . . , n}, such that y − x^0 = Σ_{i=1}^n β_i u_i. Hence,

‖y − x^0‖² = ⟨y − x^0, y − x^0⟩ = Σ_{i=1}^n β_i ⟨u_i, y − x^0⟩ = 0.

Thus y = x^0. The proof is complete.
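The final step of the proof is pure linear algebra: if {u_1, . . . , u_n} is a basis of R^n and v := y − x^0 is orthogonal to every u_i, then v = 0. A small numpy check of this fact (our own illustration, with a random basis):

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.standard_normal((3, 3))          # rows play the role of u_1, u_2, u_3
assert abs(np.linalg.det(U)) > 1e-9      # they form a basis of R^3

# <u_i, v> = 0 for all i means U @ v = 0; invertibility of U forces v = 0
v = np.linalg.solve(U, np.zeros(3))
assert np.allclose(v, 0.0)
```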
Note that in [6] the combination of both conditions (i) and (ii) of Proposition 3.1 is referred to as the extended Nürnberger condition; for more details and discussions, we refer the reader to [5–7].

Let {K_m}_{m=1}^∞ be a sequence of compact sets of R^n defined as in (1). The following lemma is useful in the sequel.

Lemma 3.2 Let Θ be a compact subset of R^n and let {x^k}_{k=1}^∞ be a sequence belonging to Θ \ {0}. Let {f^k}_{k=1}^∞ ⊂ CO(R^n) and f ∈ CO(R^n). If lim_{k→∞} σ(f^k, f)/‖x^k‖ = 0, then, for each m ∈ N such that Θ ⊂ K_m, one has lim_{k→∞} δ_m(f^k, f)/‖x^k‖ = 0.
Proof Let m ∈ N be sufficiently large such that Θ ⊂ K_m. Since the function γ(r) := r/(1 + r) is increasing on R_+ := [0, +∞[, we have

δ_m(f^k, f)/(1 + δ_m(f^k, f)) ≤ δ_{m+k}(f^k, f)/(1 + δ_{m+k}(f^k, f))  for every k ≥ 1.

Obviously,

Σ_{i=m+1}^∞ (1/2^i) δ_m(f^k, f)/(1 + δ_m(f^k, f)) ≤ Σ_{j=1}^∞ (1/2^j) δ_j(f^k, f)/(1 + δ_j(f^k, f)) =: σ(f^k, f).

Hence,

0 ≤ (1/2^m) δ_m(f^k, f)/(1 + δ_m(f^k, f)) ≤ σ(f^k, f),

and so,

0 ≤ (1/2^m) δ_m(f^k, f)/((1 + δ_m(f^k, f))‖x^k‖) ≤ σ(f^k, f)/‖x^k‖.

Combining this with Lemma 2.1, we obtain the desired conclusion.
We now state the main result.

Theorem 3.1 Let p^0 = (f^0, g^0) ∈ P and x^0 ∈ S(p^0). Suppose that the following conditions hold:
(i) p^0 satisfies the Slater constraint qualification;
(ii) There is no T_0 ⊂ T_{p^0}(x^0) with |T_0| < n satisfying

−∂f^0(x^0) ∩ [cone(∪_{t∈T_0} ∂g_t^0(x^0)) + N_Ω(x^0)] ≠ ∅.

Then S is pseudo-Lipschitz at (p^0, x^0).

Proof Let p^0 = (f^0, g^0) ∈ P and x^0 ∈ S(p^0). Contrary to our claim, suppose that there exist a sequence {x^k}_{k=1}^∞ ⊂ R^n converging to x^0 and sequences {p^k = (f^k, g^k)}_{k=1}^∞ and {p̄^k = (f̄^k, ḡ^k)}_{k=1}^∞ in P, both converging to p^0, such that x^k ∈ S(p^k) and

d(x^k, S(p̄^k)) > k d_∞(p^k, p̄^k)  for all k ≥ 1. (22)

Since S is lower semicontinuous at p^0 by Lemma 2.5 (whose assumption (ii) holds by Proposition 3.1(b)), there exists x̄^k ∈ S(p̄^k) satisfying x̄^k → x^0 as k → +∞. Hence, by (22), for each k ≥ 1 with x^k ≠ x̄^k, we have

max{σ(f^k, f̄^k)/‖x^k − x̄^k‖, σ_∞(g^k, ḡ^k)/‖x^k − x̄^k‖} = d_∞(p^k, p̄^k)/‖x^k − x̄^k‖ < 1/k. (23)
From Proposition 3.1, it follows that, for k large enough, there exist u^k ∈ ∂f^k(x^k), ū^k ∈ ∂f̄^k(x̄^k), t_i^k ∈ T_{p^k}(x^k), t̄_i^k ∈ T_{p̄^k}(x̄^k), u_i^k ∈ ∂g^k_{t_i^k}(x^k), ū_i^k ∈ ∂ḡ^k_{t̄_i^k}(x̄^k), λ_i^k > 0 and λ̄_i^k > 0 for all i = 1, 2, . . . , n, such that both {u_1^k, . . . , u_n^k} and {ū_1^k, . . . , ū_n^k} form bases of R^n and

−u^k − Σ_{i=1}^n λ_i^k u_i^k ∈ N_Ω(x^k),  −ū^k − Σ_{i=1}^n λ̄_i^k ū_i^k ∈ N_Ω(x̄^k). (24)

By taking a subsequence if necessary, we can assume that, for each i ∈ {1, . . . , n}, the sequences {t_i^k}_{k≥k_0} and {t̄_i^k}_{k≥k_0} converge to some t_i and t̄_i, respectively. Then, by Lemma 2.2, we obtain t_i, t̄_i ∈ T_{p^0}(x^0), i ∈ {1, 2, . . . , n}. As in the proof of Proposition 3.1, we can assume that {λ_i^k}_{k≥k_0} and {λ̄_i^k}_{k≥k_0} converge to some λ_i and λ̄_i, respectively. Letting k → ∞ in (24), we deduce from assumption (i), the robustness of the normal cone [23, p. 58], Lemma 2.6, and the compactness of the subdifferential of any finite-valued convex function that, by taking a subsequence if necessary, lim_{k→∞} u^k = u ∈ ∂f^0(x^0), lim_{k→∞} ū^k = ū ∈ ∂f^0(x^0), lim_{k→∞} u_i^k = u_i ∈ ∂g^0_{t_i}(x^0) and lim_{k→∞} ū_i^k = ū_i ∈ ∂g^0_{t̄_i}(x^0) for i ∈ {1, 2, . . . , n}, and

−u − Σ_{i=1}^n λ_i u_i ∈ N_Ω(x^0),  −ū − Σ_{i=1}^n λ̄_i ū_i ∈ N_Ω(x^0). (25)

From Carathéodory's theorem and assumption (ii), it follows that λ_i > 0 and λ̄_i > 0 for all i = 1, . . . , n, and both {u_1, . . . , u_n} and {ū_1, . . . , ū_n} must be bases of R^n. Let {K_m}_{m=1}^∞ be the sequence of compact subsets of R^n defined as in (1). Then there must exist m ∈ N such that

{x̄^k − x^k}_{k=1}^∞, {x^k}_{k=1}^∞, {x̄^k}_{k=1}^∞ ⊂ K_m.
On the one hand, since t_i^k ∈ T_{p^k}(x^k) and x̄^k ∈ S(p̄^k) ⊂ C(p̄^k) for every i ∈ {1, . . . , n}, it follows that

g^k_{t_i^k}(x^k) = 0,  ḡ^k_{t_i^k}(x̄^k) ≤ 0.

Hence,

⟨u_i^k, x̄^k − x^k⟩ ≤ g^k_{t_i^k}(x̄^k) − g^k_{t_i^k}(x^k) ≤ g^k_{t_i^k}(x̄^k) − ḡ^k_{t_i^k}(x̄^k) ≤ δ_m(g^k_{t_i^k}, ḡ^k_{t_i^k}).

Therefore,

⟨u_i^k, (x̄^k − x^k)/‖x^k − x̄^k‖⟩ ≤ δ_m(g^k_{t_i^k}, ḡ^k_{t_i^k})/‖x^k − x̄^k‖. (26)
Furthermore,

σ(g_t^k, ḡ_t^k) ≤ sup_{t∈T} σ(g_t^k, ḡ_t^k) =: σ_∞(g^k, ḡ^k) ≤ d_∞(p^k, p̄^k)  ∀t ∈ T.
This and (23) imply

lim_{k→∞} σ(g^k_{t_i^k}, ḡ^k_{t_i^k})/‖x^k − x̄^k‖ = 0  ∀i ∈ {1, 2, . . . , n}. (27)
By taking a subsequence if necessary, we can assume that {(x̄^k − x^k)/‖x^k − x̄^k‖}_{k≥k_0} converges to some z ∈ R^n with ‖z‖ = 1. Letting k → ∞ in (26), we obtain from (27) and Lemma 3.2 that

⟨u_i, z⟩ ≤ 0  ∀i ∈ {1, . . . , n}. (28)

Besides, the first inclusion in (24) implies

⟨−u^k − Σ_{i=1}^n λ_i^k u_i^k, x̄^k − x^k⟩ ≤ 0. (29)

Dividing both sides of (29) by ‖x^k − x̄^k‖ and letting k → ∞, we can assert from (28) that

−⟨u, z⟩ ≤ Σ_{i=1}^n λ_i ⟨u_i, z⟩ ≤ 0. (30)
On the other hand, since t̄_i^k ∈ T_{p̄^k}(x̄^k) and x^k ∈ S(p^k) ⊂ C(p^k) for every i ∈ {1, . . . , n}, it follows that

ḡ^k_{t̄_i^k}(x̄^k) = 0,  g^k_{t̄_i^k}(x^k) ≤ 0.

This implies that

⟨ū_i^k, (x^k − x̄^k)/‖x̄^k − x^k‖⟩ ≤ δ_m(ḡ^k_{t̄_i^k}, g^k_{t̄_i^k})/‖x̄^k − x^k‖.

In the same manner we see that, for every i ∈ {1, . . . , n},

⟨ū_i, −z⟩ ≤ 0,  ⟨−ū, −z⟩ ≤ Σ_{i=1}^n λ̄_i ⟨ū_i, −z⟩.

Hence,

⟨ū_i, z⟩ ≥ 0,  ⟨−ū, z⟩ ≥ Σ_{i=1}^n λ̄_i ⟨ū_i, z⟩ ≥ 0. (31)

By the convexity of f^k and f̄^k, we have ⟨u^k, x̄^k − x^k⟩ ≤ f^k(x̄^k) − f^k(x^k) and ⟨ū^k, x^k − x̄^k⟩ ≤ f̄^k(x^k) − f̄^k(x̄^k).
Therefore,

⟨u^k, x̄^k − x^k⟩ + ⟨ū^k, x^k − x̄^k⟩ ≤ f^k(x̄^k) − f̄^k(x̄^k) + f̄^k(x^k) − f^k(x^k) ≤ 2δ_m(f^k, f̄^k). (32)

Dividing both sides of (32) by ‖x^k − x̄^k‖ and letting k → ∞, we can assert from (23) and Lemma 3.2 that ⟨u, z⟩ + ⟨ū, −z⟩ ≤ 0, and so,

⟨−ū, z⟩ ≤ −⟨u, z⟩. (33)
Combining (28), (30), (31), and (33), we conclude that, for every i ∈ {1, . . . , n}, ⟨ū_i, z⟩ = ⟨u_i, z⟩ = 0. Since both {u_1, . . . , u_n} and {ū_1, . . . , ū_n} are bases of R^n, it follows that z = 0, a contradiction. The proof is complete.

The next example illustrates Theorem 3.1.

Example 3.1 For PCSI, let T = [1, 2], P := CO(R) × C1(R × T) and Ω := R. Let p^0 = (f^0, g^0) ∈ P be defined by

f^0(x) = x² + 3x + 2  ∀x ∈ R,  and  g_t^0(x) = −x − t  ∀t ∈ T, x ∈ R.

Then C(p^0) = [−1, +∞[. Let x^0 = −1 ∈ S(p^0). We now check the assumptions of Theorem 3.1. Clearly, x̂ = 3 is a Slater element for p^0, and so assumption (i) is fulfilled. Let us examine assumption (ii). It is easy to show that ∂f^0(x^0) = {1}, T_{p^0}(x^0) = {1} and N_Ω(x^0) = {0}. If there exists T_0 ⊂ T_{p^0}(x^0) such that |T_0| < 1, then T_0 = ∅. Hence,

−∂f^0(x^0) ∩ [cone(∪_{t∈T_0} ∂g_t^0(x^0)) + N_Ω(x^0)] = ∅,

and assumption (ii) is fulfilled. Applying Theorem 3.1, we conclude that S is pseudo-Lipschitz at (p^0, x^0).

The following examples show that the assertion of Theorem 3.1 may fail if one of the assumptions (i) and (ii) is violated.

Example 3.2 For PCSI, let T = {1, 2, 3} ∪ [4, 5] ⊂ R, P := CO(R²) × C1(R² × T) and Ω := R²_+. Let p^0 = (f^0, g^0) ∈ P be given by

f^0(x) = (x_1)²  ∀x = (x_1, x_2) ∈ R²,
g_t^0(x) = −x_1 if t = 1, 2;  g_t^0(x) = −x_2 if t = 3;  g_t^0(x) = x_1 + x_2 − 1 if t ∈ [4, 5];  for all x = (x_1, x_2) ∈ R².

Let x^0 := (0, 0) ∈ S(p^0). We now check the assumptions of Theorem 3.1. Clearly, x̂ = (1/4, 1/4) is a Slater element for p^0. However, assumption (ii) is violated. Indeed, we have T_{p^0}(x^0) = {1, 2, 3}; take T_0 = {2} ⊂ T_{p^0}(x^0), so that |T_0| = 1 < 2. It is easily seen that ∂f^0(x^0) = {(0, 0)}, ∂g_2^0(x^0) = {(−1, 0)} and N_Ω(x^0) = −R²_+. It follows that cone(∪_{t∈T_0} ∂g_t^0(x^0)) = −R_+ × {0} and

−∂f^0(x^0) ∩ [cone(∪_{t∈T_0} ∂g_t^0(x^0)) + N_Ω(x^0)] = {0_2} ≠ ∅.
We next examine the pseudo-Lipschitz property of S at (p^0, x^0). Let {p^k = (f^0, g^k)}_{k=1}^∞ ⊂ P be the sequence defined by

g_t^k(x) = (1/2k)x_2 − x_1 if t = 1;  g_t^k(x) = −x_1 if t = 2;  g_t^k(x) = −x_2 if t = 3;  g_t^k(x) = x_1 + x_2 − 1 if t ∈ [4, 5].

We claim that p^k = (f^0, g^k) → p^0 = (f^0, g^0) as k → ∞. Indeed, it is sufficient to show that g^k → g^0 as k → ∞. Let {K_m}_{m=1}^∞ be the sequence of compact subsets of R² given by K_m := m(cl B), where B stands for the open unit ball of R². Then R² = ∪_{m=1}^∞ K_m. Clearly,

δ_m(g_t^k, g_t^0) := max_{x∈K_m} |g_t^k(x) − g_t^0(x)| = m/(2k) if t = 1, and = 0 if t ∈ {2, 3} ∪ [4, 5].

Hence, σ(g_t^k, g_t^0) = 0 for t ∈ {2, 3} ∪ [4, 5], and for t = 1 we have

σ(g_1^k, g_1^0) := Σ_{m=1}^∞ (1/2^m) δ_m(g_1^k, g_1^0)/(1 + δ_m(g_1^k, g_1^0)) < (1/2k) Σ_{m=1}^∞ m/2^m = 1/k.
1 = d(x¯ k , S(p k )) >
1 ≥ kd(p k , p¯ k ) ∀k ≥ 1. 2
Thus, S is not pseudo-Lipschitz at (p 0 , x 0 ).
Example 3.3 For PCSI, let T = [0, 1] ∪ {2}, P := CO(R) × C1(R × T) and Ω := R. Let p^0 = (f^0, g^0) ∈ P be defined by

f^0(x) = x + 1  ∀x ∈ R;  g_t^0(x) = −tx − t if t ∈ [0, 1], and g_t^0(x) = 0 if t = 2,  ∀x ∈ R.
and so, assumption (ii) is fulfilled. We next check the pseudo-Lipschitz property of S at (p 0 , x 0 ). We have S(p 0 ) = {−1}. Let p k = (f 0 , g k ) ∈ P , 1 1 −tx − k+1 k k t + k , if t ∈ [ k+1 , 1], gt (x) = 1 [ ∪ {2}. 0, if t ∈ [0, k+1 We claim that p k → p 0 as k → ∞. Indeed, it is sufficient to show that g k → g 0 as k → ∞. Let {Km }∞ m=1 be a sequence of compact subset of R such that Km := m[−1, 1]. Then R = ∞ m=1 Km . Clearly, ⎧ ⎪ if t ∈ {0, 2}, ⎨0, 1 δm (gtk , gt0 ) := max |gtk (x) − gt0 (x)| = t (m + 1), if t ∈ ]0, k+1 [, ⎪ x∈Km ⎩ 1 1 1 − k t + k , if t ∈ [ k+1 , 1]. 1 , 1] we have Hence, σ (gtk , gt0 ) = 0 for t ∈ {0, 2}. For t ∈ [ k+1
σ (gtk , gt0 ) =
∞ ∞ 1 1 1 δm (gtk , gt0 ) 1 . = − t+ 2m 1 + δm (gtk , gt0 ) k k 2m
m=1
This implies σ (gtk , gt0 ) = − k1 t + σ (gtk , gt0 ) =
m=1
1 k
1 1 for all t ∈ [ k+1 , 1]. For t ∈ ]0, k+1 [ we have
∞ ∞ 1 t (m + 1) 1 = t 2m 1 + t (m + 1) 2m
m=1
m=1
(m + 1) 1 t
+ (m + 1)
Therefore, σ∞ (g k , g 0 ) := max σ (gtk , gt0 ) ≤ t∈T
1 , k+1
≤t
∞ 1 = t. 2m
m=1
and so g^k → g^0 as k → ∞. This implies that p^k → p^0 as k → ∞. It is a simple matter to check that S(p^k) = {0} for every k ≥ 1. Hence d(x^0, S(p^k)) = 1 for every k, while d_∞(p^k, p^0) → 0, so S is not pseudo-Lipschitz at (p^0, x^0). We now consider a special case of PCSI, which has the form (CSI)_{(c,b)}:
min f (x) + cT x
subject to x ∈ Rn , gt (x) ≤ bt , t ∈ T ,
where c ∈ R^n, c^T denotes the transpose of c, f : R^n → R and g_t : R^n → R (t ∈ T) are given convex functions such that (x, t) ↦ g_t(x) is continuous on R^n × T, and b ∈ C_0(T), the set of all continuous functions on T. The set of feasible points of (CSI)_{(c,b)} is denoted by C(c, b), and S(c, b) stands for the set of all solutions of (CSI)_{(c,b)}. The set of active constraints at x ∈ C(c, b) is given by T_{(c,b)}(x) := {t ∈ T | g_t(x) = b_t}. The following corollary is an immediate consequence of Theorem 3.1 by taking f̃(x) = f(x) + c^T x,
g̃(x, t) = g_t(x) − b_t, and Ω = R^n.
Corollary 3.1 [5, Theorem 10] For (CSI)_{(c,b)}, let c^0 ∈ R^n, b^0 ∈ C_0(T) and x^0 ∈ S(c^0, b^0). Suppose that the following conditions hold:
(i) (c^0, b^0) satisfies the Slater constraint qualification;
(ii) There is no T_0 ⊂ T_{(c^0,b^0)}(x^0) with |T_0| < n satisfying

−(c^0 + ∂f(x^0)) ∩ cone(∪_{t∈T_0} ∂g_t(x^0)) ≠ ∅.

Then S is pseudo-Lipschitz at ((c^0, b^0), x^0).
4 Application to Linear Semi-Infinite Optimization Problems In this section, we establish sufficient conditions for the pseudo-Lipschitz property of the solution map of parametric linear semi-infinite optimization problems at the nominal parameter. Let T be a nonempty compact metric space, and let C_0(T) and C_0(T, R^n) be, respectively, the sets of all continuous mappings b : T → R and a : T → R^n, normed by

‖b‖_∞ := sup_{t∈T} |b(t)|  and  ‖a‖_∞ := sup_{t∈T} ‖a(t)‖,

where ‖·‖ denotes the Euclidean norm in R^n.
255
For every triple of parameters p = (c, a, b) ∈ P := Rn × C0 (T , Rn ) × C0 (T ) we consider the linear semi-infinite problem (LSI)p
minc, x subject to x ∈ C(p),
where C(p) := {x ∈ |a(t), x ≤ b(t) for every t ∈ T } and is a nonempty closed convex subset of Rn . Theorem 4.1 Let p 0 = (c0 , a0 , b0 ) ∈ P and x 0 ∈ S(p 0 ). Suppose that the following conditions hold: (i) p 0 satisfies the Slater constraint qualification; (ii) There is no T0 ⊂ Tp0 (x 0 ) with |T0 | < n satisfying −c0 ∈ cone({a0 (t) | t ∈ T0 }) + N (x 0 ). Then S is pseudo-Lipschitz at (p 0 , x 0 ). Proof The assertion of the theorem follows by the same method as in the √ proof of Theorem 3.1 with d∞ (·) replaced by · ∞ as well noting that vt − v¯t , x ≤ nv − v ¯ ∞ x for every v, v¯ ∈ C0 (T , Rn ), x ∈ Rn and for every t ∈ T . Finally, in the case that = Rn and p just depends on the parameters c and b, i.e., the problems under consideration become the linear semi-infinite optimization problems under continuous perturbations of the right-hand side of the constraints and linear perturbations of the objective function. For this class of problems, the pseudoLipschitz property holds, if and only if both conditions (i) and (ii) of Theorem 4.1 hold at the nominal parameter, as shown in [5, Theorem 16]. However, we do not known at present how to proceed with the case that p depends on the triple of parameters c, a and b.
5 Concluding Remarks We have established sufficient conditions for the pseudo-Lipschitz property of the solution map of the parametric convex semi-infinite optimization problem (PCSI) under convex functional perturbations of both the objective function and the constraint set. Moreover, examples are provided to illustrate the obtained results. An application to parametric linear semi-infinite optimization problems is also given.
References 1. Bonnans, J.F., Shapiro, A.: Perturbation Analysis of Optimization Problems. Springer, New York (2000) 2. Brosowski, B.: Parametric semi-infinite linear programming: I. Continuity of the feasible set and of the optimal value. Sensitivity, stability and parametric analysis. Math. Program. Stud. 21, 18–42 (1984)
3. Cánovas, M.J., López, M.A., Parra, J., Todorov, M.I.: Stability and well-posedness in linear semiinfinite programming. SIAM J. Optim. 10, 82–98 (1999) 4. Cánovas, M.J., López, M.A., Parra, J., Toledo, F.J.: Lipschitz continuity of the optimal value via bounds on the optimal set in linear semi-infinite optimization. Math. Oper. Res. 31(3), 478–489 (2006) 5. Cánovas, M.J., Klatte, D., López, M.A., Parra, J.: Metric regularity in convex semi-infinite optimization under canonical perturbations. SIAM J. Optim. 18, 717–732 (2007) 6. Cánovas, M.J., Hantoute, A., López, M.A., Parra, J.: Stability of indices in the KKT conditions and metric regularity in convex semi-infinite optimization. J. Optim. Theory Appl. 139, 485–500 (2008) 7. Cánovas, M.J., Hantoute, A., López, M.A., Parra, J.: A note on the compactness of the index set in convex optimization. Application to metric regularity. Optimization 59, 477–483 (2010) 8. Chuong, T.D., Huy, N.Q., Yao, J.-C.: Pseudo-Lipschitz property of linear semi-infinite vector optimization problems. Eur. J. Oper. Res. 200, 639–644 (2010) 9. Chuong, T.D., Huy, N.Q., Yao, J.-C.: Stability of semi-infinite vector optimization problems under functional perturbations. J. Glob. Optim. 45, 583–595 (2009) 10. Colgen, R., Schnatz, K.: Continuity properties in semi-infinite parametric linear optimization. Numer. Funct. Anal. Optim. 3, 451–460 (1981) 11. Dinh, N., Goberna, M.A., López, M.A., Son, T.Q.: New Farkas-type constraint qualifications in convex infinite programming. ESAIM Control Optim. Calc. Var. 13, 580–597 (2007) 12. Gayá, V.E., López, M.A., Vera de Serio, V.N.: Stability in convex semi-infinite programming and rates of convergence of optimal solutions of discretized finite subproblems. Optimization 52, 693– 713 (2003) 13. Goberna, M.A.: Linear semi-infinite optimization: recent advances. Continuous optimization. In: Appl. Optim., vol. 99, pp. 3–22. Springer, New York (2005) 14. 
Goberna, M.A., López, M.A., Todorov, M.: Stability theory for linear inequality systems. SIAM J. Matrix Anal. Appl. 17(4), 730–743 (1996) 15. Goberna, M.A., López, M.A., Todorov, M.: Stability theory for linear inequality systems. II. Upper semicontinuity of the solution set mapping. SIAM J. Optim. 7, 1138–1151 (1997) 16. Goberna, M.A., Lopez, M.A.: Linear Semi-Infinite Optimization. Wiley, Chichester (1998) 17. Goberna, M.A., Gómez, S., Guerra, F., Todorov, M.I.: Sensitivity analysis in linear semi-infinite programming: perturbing cost and right-hand-side coefficients. Eur. J. Oper. Res. 181(3), 1069–1085 (2007) 18. Hettich, R. (ed.): Semi-Infinite Programming. Lecture Notes in Control and Information Sciences, vol. 15. Springer, Berlin (1979) 19. Hiriart-Urruty, J.-B., Lemaréchal, C.: Convex Analysis and Minimization Algorithms I. Springer, Berlin (1993) 20. López, M.A., Vera de Serio, V.N.: Stability of the feasible set mapping in convex semi-infinite programming. In: Nonconvex Optim. Appl., vol. 57, pp. 101–120. Kluwer Academic, Dordrecht (2001) 21. Reemtsen, R., Rückmann, J.-J. (eds.): Semi-Infinite Programming. Nonconvex Optimization and Its Applications, vol. 25. Kluwer Academic, Boston (1998) 22. Son, T.Q., Dinh, N.: Characterizations of optimal solution sets of convex infinite programs. Top 16, 147–163 (2008) 23. Clarke, F.H.: Optimization and Nonsmooth Analysis, 2nd edn. Classics in Applied Mathematics, vol. 5. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (1990) 24. Rockafellar, R.T.: Convex Analysis. Princeton Mathematical Series, vol. 28. Princeton University Press, Princeton (1970) 25. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998)