Hamiltonian Necessary Conditions for a Multiobjective Optimal Control Problem with Endpoint Constraints

Qiji J. Zhu¹

Department of Mathematics and Statistics, Western Michigan University, Kalamazoo, MI 49008

Abstract. This paper discusses Hamiltonian necessary conditions for a nonsmooth multiobjective optimal control problem with endpoint constraints related to a general preference. The transversality condition in our necessary conditions is stated in terms of a normal cone to the level sets of the preference. Examples for a number of useful preferences are discussed.

Keywords. Multiobjective optimal control, Hamiltonian necessary conditions, differential inclusions, preference, utility function, subdifferential calculus and variational principles.

Abbreviated Title. Multiobjective Optimal Control Problems.

AMS Classification. 49K24, 90C29.

Acknowledgement. I thank Yu. S. Ledyaev and R. B. Vinter for helpful discussions.

¹ Research was supported by the National Science Foundation under grant DMS-9704203.

1 Introduction

Practical decision problems often involve many factors and, therefore, result in a vector-valued decision function describing several competing objectives. The comparison between different values of the decision function is determined by a preference of the decision maker. The main purpose of this paper is to derive Hamiltonian necessary conditions for a nonsmooth multiobjective optimal control problem with the dynamics governed by a differential inclusion.

Historically, the concept of preference appeared in value theory in economics. In early studies of value theory a preference is often defined by a utility function. One of the central issues in value theory was: given a preference, is it always possible to define a utility function (with good analytic properties) that determines the preference? In terms of multiobjective optimal control problems this amounts to asking whether it is possible to reduce a multiobjective optimal control problem to a single objective optimal control problem with a reasonable decision function. In [13], Debreu proved a celebrated theorem which asserts that a preference $\prec$ is determined by a continuous utility function if and only if $\prec$ is continuous in the sense that, for any $x$, the sets $\{y : x \prec y\}$ and $\{y : y \prec x\}$ are closed. While this theorem plays a central role in value theory, it is not much help to us, for the following reasons. Firstly, Debreu's theorem is an existence theorem; it does not provide methods for determining the utility function for a given preference. Secondly, even if one can find a continuous utility function that determines the preference, an optimal control problem with a continuous decision function and endpoint constraints is not a convenient form to analyze. Finally, some useful preference relations (e.g. the preference determined by the lexicographical order of the vectors) are not continuous. For these reasons we pursue the multiobjective optimal control problem directly.
In the area of multiobjective optimization and optimal control much research has been devoted to the weak Pareto solution and its generalizations. The preference relation for two vectors $x, y \in R^m$ in the weak Pareto sense is defined by $x \prec y$ if and only if $x_i \le y_i$, $i = 1, \dots, m$, and at least one of the inequalities is strict. In other words, $x \prec y$ if and only if $x - y \in K := \{z \in R^m : z \text{ has nonpositive components}\}$ and $x \ne y$. More generally, one can use other cones $K$ in the definition of the preference relation. Necessary optimality conditions for (generalized) weak Pareto solutions were derived for optimization problems in [1, 8, 11, 25, 29, 35, 36, 39]

(see also the survey paper [14] for more information), for linear-quadratic and $H^\infty$ optimal control problems in [16] and for an abstract optimal control problem in [6]. A common key step in deriving necessary conditions for generalized Pareto solutions is to apply a separation theorem to a tangent cone of the attainable set and a shift of the cone $-K$, where $K$ is the cone that generates the preference. In this paper we take a different approach. We use a normal cone condition similar to that in the extremal principle [24, 26, 28] at the optimal point, in terms of the normal cones to the attainable set and a level set of the preference. This approach enables us to handle more general preference relations: they are not necessarily defined by a cone and are not even necessarily continuous. Necessary optimality conditions for the weak Pareto solution and its generalizations can be derived and refined by using our necessary conditions. The technical implementation of our proof relies on recent progress in nonsmooth analysis, in particular, calculus for smooth subdifferentials of lower semicontinuous functions [2, 4, 5, 10, 19], the methods for proving the extremal principle [24, 26, 28] and techniques for handling the Hamiltonians of a differential inclusion [10, 19]. To avoid technical distractions we prove here Hamiltonian necessary conditions that extend the classical Hamiltonian necessary conditions for optimal control problems derived by Clarke (see [8]). Recently there have been several significant refinements of the Hamiltonian necessary conditions for single objective optimal control problems [18, 19, 20, 34]. To what extent the methods in this paper can be used to generalize these refined Hamiltonian necessary conditions to multiobjective optimal control problems is an interesting problem. Moreover, there are many other types of necessary conditions for optimal control problems, in particular, those that refine and generalize the maximum principle (see e.g.
[21, 22, 23, 27, 30, 31, 33, 37, 38, 41]). Whether those necessary conditions can be generalized to multiobjective optimal control problems in our general setting is perhaps a more challenging open problem. The remainder of the paper is arranged as follows. Section 2 contains definitions and preliminary results in subdifferential calculus. We state our main result in Section 3 along with some examples and discussions. The technical proofs are given in Section 4.


2 Preliminaries

Let $X$ be a real reflexive Banach space with closed unit ball $B_X$ and with topological real dual $X^*$. Note that $X$ has an equivalent Fréchet smooth norm and we will use this norm as the norm of $X$ unless otherwise stated. Let $f : X \to \bar R := R \cup \{+\infty\}$ be an extended-valued function. We denote by $\mathrm{dom}\, f := \{x \in X : f(x) \in R\}$ the effective domain of $f$. We assume all our functions are proper in that they take some finite values: $\mathrm{dom}\, f \ne \emptyset$. Let us now recall the definitions of subdifferentials and normal cones (see [5] for greater detail and historical comments).

Definition 2.1 Let $f : X \to \bar R$ be a lower semicontinuous function and $C$ a closed subset of $X$. We say $f$ is Fréchet-subdifferentiable and $x^*$ is a Fréchet-subderivative of $f$ at $x$ if there exists a concave $C^1$ function $g$ such that $g'(x) = x^*$ and $f - g$ attains a local minimum at $x$. We denote the set of all Fréchet-subderivatives of $f$ at $x$ by $D_F f(x)$. We define the Fréchet normal cone of $C$ at $x$ to be $N_F(C, x) := D_F \delta_C(x)$, where $\delta_C$ is the indicator function of $C$ defined by $\delta_C(x) := 0$ for $x \in C$ and $+\infty$ otherwise.

Definition 2.2 Firstly, let $f : X \to \bar R$ be a lower semicontinuous function. Define

$$\partial f(x) := \{ w\text{-}\lim_{i \to \infty} v_i : v_i \in D_F f(x_i),\ (x_i, f(x_i)) \to (x, f(x)) \}$$

and

$$\partial^\infty f(x) := \{ w\text{-}\lim_{i \to \infty} t_i v_i : v_i \in D_F f(x_i),\ t_i \to 0^+,\ (x_i, f(x_i)) \to (x, f(x)) \}$$

and call $\partial f(x)$ and $\partial^\infty f(x)$ the limiting subdifferential and singular subdifferential of $f$ at $x$, respectively. Secondly, let $C$ be a closed subset of $X$. Define

$$N(C, x) := \{ w\text{-}\lim_{i \to \infty} v_i : v_i \in N_F(C, x_i),\ C \ni x_i \to x \}$$

and call $N(C, x)$ the limiting normal cone of $C$ at $x$.

We will also need the Clarke subdifferential $\partial_C$, which is derived by taking the weak-star closed convex hull of the limiting and singular subdifferentials, i.e.,

$$\partial_C f(x) := \overline{\mathrm{co}}\,[\partial f(x) + \partial^\infty f(x)].$$
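Definition 2.1 can be made concrete in one dimension. The following sketch (not from the paper; the helper name is invented) numerically tests whether a slope $c$ is a Fréchet subderivative of $f(x) = |x|$ at $0$ by checking that the linear, hence concave $C^1$, minorant $g(x) = cx$ leaves $f - g$ with a local minimum at $0$; this recovers $D_F|\cdot|(0) = [-1, 1]$.

```python
# Numerical illustration of Definition 2.1 (invented example, not from
# the paper): c is a Frechet subderivative of f(x) = |x| at 0 iff the
# concave C^1 function g(x) = c*x makes f - g attain a local minimum
# at 0, i.e. |x| - c*x >= 0 near 0, which happens exactly when |c| <= 1.
import numpy as np

def is_frechet_subderivative_at_zero(f, c, radius=1e-3, samples=10001):
    """Check (on a grid) that x -> f(x) - c*x has a local minimum at 0."""
    xs = np.linspace(-radius, radius, samples)
    return bool(np.all(f(xs) - c * xs >= f(0.0)))

print([c for c in (-1.5, -1.0, 0.0, 0.5, 1.0, 1.5)
       if is_frechet_subderivative_at_zero(np.abs, c)])
# -> [-1.0, 0.0, 0.5, 1.0], matching D_F|.|(0) = [-1, 1]
```

The grid test is only a finite check, but for this piecewise linear $f$ it is exact.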

We conclude this section with a sum rule and a chain rule for the Fréchet subdifferential. They can be viewed as nonsmooth versions of the corresponding calculus rules for derivatives. We start with the sum rule. The prototypes of this result first appeared in [17]. We use the following version derived in [4], which refines similar results in [2, 19].

Definition 2.3 (Uniform Lower Semicontinuity) Let $f_1, \dots, f_N : X \to \bar R$ be lower semicontinuous functions and $E$ a closed subset of $X$. We say that $(f_1, \dots, f_N)$ is uniformly lower semicontinuous on $E$ if

$$\inf_{x \in E} \sum_{n=1}^N f_n(x) \le \lim_{\eta \to 0} \inf\Big\{ \sum_{n=1}^N f_n(x_n) : \|x_n - x_m\| \le \eta,\ x_n, x_m \in E,\ n, m = 1, \dots, N \Big\}.$$

We say that $(f_1, \dots, f_N)$ is locally uniformly lower semicontinuous at $x \in \cap_{n=1}^N \mathrm{dom}(f_n)$ if $(f_1, \dots, f_N)$ is uniformly lower semicontinuous on a closed ball centered at $x$.

Theorem 2.4 (Sum Rule) Let $f_1, \dots, f_N : X \to \bar R$ be lower semicontinuous functions. Suppose that $(f_1, \dots, f_N)$ is locally uniformly lower semicontinuous at $x$ and $\sum_{n=1}^N f_n$ attains a local minimum at $x$. Then, for any $\varepsilon > 0$, there exist $x_n \in x + \varepsilon B_X$ and $x_n^* \in D_F f_n(x_n)$, $n = 1, \dots, N$, such that $|f_n(x_n) - f_n(x)| < \varepsilon$, $n = 1, 2, \dots, N$, $\mathrm{diam}(x_1, \dots, x_N) \cdot \max(\|x_1^*\|, \dots, \|x_N^*\|) < \varepsilon$ and

$$\Big\| \sum_{n=1}^N x_n^* \Big\| < \varepsilon.$$

Following the argument of [10, Theorem 9.1] one can deduce the following Fréchet subdifferential chain rule for Lipschitz functions from the sum rule of Theorem 2.4.

Theorem 2.5 (Chain Rule) Let $X$ and $Y$ be reflexive Banach spaces. Let $\phi : X \to Y$ and $f : Y \to R$ be locally Lipschitz mappings. Then for any $x^* \in D_F(f \circ \phi)(x)$ and any $\varepsilon > 0$ there exist $x' \in x + \varepsilon B_X$, $y \in \phi(x) + \varepsilon B_Y$ and $y^* \in D_F f(y)$ such that $\|\phi(x') - \phi(x)\| < \varepsilon$ and $x^* \in D_F \langle y^*, \phi \rangle(x') + \varepsilon B_{X^*}$.


3 The main results

Let $\prec$ be a (nonreflexive) preference for vectors in $R^m$. We consider the following multiobjective optimization problem with endpoint constraints:

$\mathcal{M}$: Minimize $\phi(y(1))$ subject to

$$\dot y(t) \in F(y(t)) \ a.e. \text{ in } [0,1], \quad y(0) = x_0, \qquad (1)$$
$$y(1) \in E. \qquad (2)$$

Here $\phi = (\phi_1, \dots, \phi_m)$ is a Lipschitz vector function on $R^n$, $E$ is a closed subset of $R^n$, and $F$ is a multifunction from $R^n$ to $R^n$ satisfying the following conditions:

(H1) for every $x$, $F(x)$ is a nonempty compact convex set;

(H2) $F$ is Lipschitz with rank $L_F$, i.e., for any $x, y$,
$$F(x) \subset F(y) + L_F \|x - y\| B_{R^n}.$$

We say that $y$ is a feasible trajectory for problem $\mathcal{M}$ if $y$ is absolutely continuous and satisfies relations (1) and (2). We say $x$ is a solution to problem $\mathcal{M}$ provided that it is a feasible trajectory for $\mathcal{M}$ and there exists no other feasible trajectory $y$ such that $\phi(y(1)) \prec \phi(x(1))$. For any $r \in R^m$ we denote $l(r) := \{s \in R^m : s \preceq r\}$. We will need the following regularity assumptions on the preference.

Definition 3.1 We say that a preference $\prec$ is regular at $r \in R^m$ provided that

(A1) for any $r \in R^m$, $r \in l(r)$;

(A2) for any $r \prec s$, $t \in l(r)$ implies that $t \prec s$;

(A3) for any sequences $r_k, \theta_k \to r$ in $R^m$,
$$\limsup_{k \to \infty} N(l(r_k), \theta_k) \subset N(l(r), r).$$
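A feasible trajectory of the dynamics (1) can be approximated numerically. The sketch below is an invented example, not part of the paper's argument: it runs an Euler scheme for $\dot y \in F(y)$, $y(0) = y_0$, choosing at each step a point of $F(y_k)$; here $F(y) = [-1 - |y|, 1 + |y|]$ and the selection is the right endpoint, so the polygonal arc tracks the single ODE $\dot y = 1 + y$.

```python
# Euler approximation of a trajectory of the differential inclusion
# y'(t) in F(y(t)) on [0, 1] (invented example, not from the paper).
import numpy as np

def euler_inclusion(select, y0, n_steps=1000, T=1.0):
    """Polygonal arc y_{k+1} = y_k + h * v_k with v_k = select(y_k) in F(y_k)."""
    h = T / n_steps
    y = np.empty(n_steps + 1)
    y[0] = y0
    for k in range(n_steps):
        y[k + 1] = y[k] + h * select(y[k])
    return y

# F(y) = [-1 - |y|, 1 + |y|]; selecting the right endpoint v = 1 + |y|
# gives, for y >= 0, the ODE y' = 1 + y with y(0) = 0
traj = euler_inclusion(lambda y: 1.0 + abs(y), y0=0.0)
print(traj[-1])  # close to y(1) = e - 1 for the selected trajectory
```

Different selections yield different feasible arcs; the optimal control problem $\mathcal{M}$ asks which endpoints $y(1)$ are preferred among all of them.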

Our main result is:

Theorem 3.2 Let $x$ be a solution to the multiobjective optimal control problem $\mathcal{M}$. Suppose that the preference $\prec$ is regular at $\phi(x(1))$. Then there exist an absolutely continuous mapping $p : [0,1] \to R^n$, a vector $\lambda \in N(l(\phi(x(1))), \phi(x(1)))$ with $\|\lambda\| = 1$, and a scalar $\lambda_0 = 0$ or $1$ satisfying $\lambda_0 + \|p(t)\| \ne 0$ for all $t \in [0,1]$, such that

$$(-\dot p(t), \dot x(t)) \in \partial_C H(x(t), p(t)) \ a.e. \text{ in } [0,1],$$
$$-p(1) \in \lambda_0 \partial \langle \lambda, \phi \rangle(x(1)) + N(E, x(1)).$$

Moreover, one can always choose $\lambda_0 = 1$ when $x(1) \in \mathrm{int}\, E$. Here $H$ is the Hamiltonian corresponding to $F$ defined by

$$H(x, p) := \max\{\langle p, v \rangle : v \in F(x)\}.$$

Remark 3.3 Observing that $H$ is positively homogeneous in $p$, we can scale $p$ or, alternatively, $\lambda$ in Theorem 3.2 by a positive constant. In the remainder of this section we examine a few examples. The proof of Theorem 3.2 is postponed to the next section.

Example 3.4 (A Single Objective Problem) When $m = 1$ and $r \prec s \iff r < s$, Theorem 3.2 reduces to the classical Hamiltonian necessary conditions for an optimal control problem [8]. Thus, the necessary conditions in Theorem 3.2 are true generalizations of the Hamiltonian necessary conditions for single objective optimal control problems.

Example 3.5 (The Weak Pareto Optimum) In a weak Pareto optimal control problem we define the preference by $r \prec s$ if and only if $r_i \le s_i$, $i = 1, \dots, m$, and at least one of the inequalities is strict. It is easy to check that $\prec$ defined this way satisfies assumptions (A1) and (A2) in Definition 3.1 at any $r \in R^m$. Moreover, for any $r \in R^m$, $l(r) = r + R^m_-$, where $R^m_- := \{s \in R^m : s_i \le 0,\ i = 1, \dots, m\}$. It follows that, for any $r, \theta \in R^m$, $N(l(r), \theta) \subset R^m_+ := -R^m_-$. Since $N(l(r), r) = R^m_+$, we can see that $\prec$ also satisfies assumption (A3) at $r$ and, therefore, is regular at any $r \in R^m$. Combining Theorem 3.2 and Remark 3.3 we obtain the following corollary.

Corollary 3.6 Let $x$ be a weak Pareto solution to the multiobjective optimal control problem $\mathcal{M}$. Then there exist an absolutely continuous mapping $p : [0,1] \to R^n$, a vector $\lambda \in R^m_+$ with $\sum_{i=1}^m \lambda_i = 1$, and a scalar $\lambda_0 = 0$ or $1$ satisfying $\lambda_0 + \|p(t)\| \ne 0$ for all $t \in [0,1]$, such that

$$(-\dot p(t), \dot x(t)) \in \partial_C H(x(t), p(t)) \ a.e. \text{ in } [0,1],$$
$$-p(1) \in \lambda_0 \partial \langle \lambda, \phi \rangle(x(1)) + N(E, x(1)).$$

Moreover, one can always choose $\lambda_0 = 1$ when $x(1) \in \mathrm{int}\, E$.
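When the velocity sets are polytopes, the Hamiltonian of Theorem 3.2 is easy to evaluate: the maximum of the linear functional $\langle p, \cdot \rangle$ over $F(x) = \mathrm{conv}\{v_1, \dots, v_N\}$ is attained at a vertex. A small sketch (the velocity set below is invented purely for illustration):

```python
# Evaluating H(x, p) = max{<p, v> : v in F(x)} when F(x) is the convex
# hull of finitely many velocities (invented data, for illustration only).
import numpy as np

def hamiltonian(vertices, p):
    """Return H and a maximizing vertex; the max of a linear functional
    over a polytope is attained at one of its vertices."""
    values = [float(np.dot(p, v)) for v in vertices]
    i = int(np.argmax(values))
    return values[i], vertices[i]

F_x = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, -1.0])]
H, v_star = hamiltonian(F_x, np.array([2.0, 1.0]))
print(H, v_star)  # 2.0 [1. 0.]
```

The positive homogeneity in $p$ used in Remark 3.3 is visible here: scaling $p$ scales $H$ without changing the maximizing vertex.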


We point out that if $x$ is a weak Pareto optimal solution to problem $\mathcal{M}$ then it is a solution to the following single objective optimal control problem: minimize $\max(\phi_1(y(1)), \dots, \phi_m(y(1)))$ subject to constraints (1) and (2). Then we can deduce Corollary 3.6 by combining the Hamiltonian necessary conditions for a single objective problem and the subdifferential chain rule for the max function. However, this method does not apply to the following generalized weak Pareto optimal solution without additional assumptions [1].

Example 3.7 (A Generalized Weak Pareto Optimum) Let $K \subset R^m$ be a closed cone. We now define the preference by $r \prec s$ if and only if $r - s \in K$ and $r \ne s$. A multiobjective optimal control problem with this preference is called a generalized weak Pareto optimal control problem. When $K = R^m_-$ we get the weak Pareto problem. Note that we do not assume any convexity on $K$. Similarly to the last example we can check that the preference $\prec$ defined here is regular at any $r \in R^m$. Moreover, $N(l(r), r) = K^\circ := \{s \in R^m : \langle s, t \rangle \le 0,\ t \in K\}$. In particular, $N(l(\phi(x(1))), \phi(x(1))) = K^\circ$. Thus, we have

Corollary 3.8 Let $x$ be a solution to the generalized weak Pareto multiobjective optimal control problem $\mathcal{M}$ with the preference defined by a closed cone $K$. Then there exist an absolutely continuous mapping $p : [0,1] \to R^n$, a vector $\lambda \in K^\circ$ with $\|\lambda\| = 1$, and a scalar $\lambda_0 = 0$ or $1$ satisfying $\lambda_0 + \|p(t)\| \ne 0$ for all $t \in [0,1]$, such that

$$(-\dot p(t), \dot x(t)) \in \partial_C H(x(t), p(t)) \ a.e. \text{ in } [0,1],$$
$$-p(1) \in \lambda_0 \partial \langle \lambda, \phi \rangle(x(1)) + N(E, x(1)).$$

Moreover, one can always choose $\lambda_0 = 1$ when $x(1) \in \mathrm{int}\, E$.

Example 3.9 (A Preference Determined by a Utility Function) Let $u$ be a continuous utility function that determines the preference, i.e., $s \prec r$ if and only if $u(s) < u(r)$. We need an additional assumption to ensure the regularity of $\prec$, which we summarize in the following lemma. We will use $d(S, r) := \inf\{\|s - r\| : s \in S\}$ to denote the distance between a set $S$ and a point $r$.
Lemma 3.10 Let $u$ be a continuous utility function determining the preference $\prec$. Suppose that

$$\liminf_{s \to r} d(D_F u(s), 0) > 0. \qquad (3)$$

Then $\prec$ is regular at $r$ and

$$N(l(r), r) = \partial^\infty u(r) \cup \Big( \bigcup_{a > 0} a\, \partial u(r) \Big).$$

Proof. It follows from (3) that $l(r)$ is nonempty. Then conditions (A1) and (A2) in Definition 3.1 follow from the continuity of $u$. It remains to show that $\prec$ satisfies assumption (A3). First we observe that, for $r'$ sufficiently close to $r$, $l(r') = \{s \in R^m : u(s) - u(r') \le 0\}$. Thus, $D_F u(r') \subset N_F(l(r'), r')$. Taking limits we have

$$\partial^\infty u(r) \cup \Big( \bigcup_{a > 0} a\, \partial u(r) \Big) \subset N(l(r), r). \qquad (4)$$

Let $r_k$, $\theta_k$ and $\xi_k$ be sequences satisfying $r_k, \theta_k \to r$, $\xi_k \in N(l(r_k), \theta_k)$ and $\xi_k \to \xi$. We need to show that $\xi \in N(l(r), r)$. By the definition of the limiting normal cone, without loss of generality, we may assume that $\xi_k \in N_F(l(r_k), \theta_k)$. Since $N(l(r), r)$ always contains $0$, we consider the interesting case when $\xi \ne 0$. Then, when $k$ is sufficiently large, $\xi_k \ne 0$. Since $N_F(l(r_k), \theta_k)$ is empty when $u(\theta_k) > u(r_k)$ and is $\{0\}$ when $u(\theta_k) < u(r_k)$, we must have $u(\theta_k) = u(r_k)$, i.e., $N_F(l(r_k), \theta_k) = N_F(l(\theta_k), \theta_k) = N_F(\{s : u(s) - u(\theta_k) \le 0\}, \theta_k)$. Applying [5, Theorem 3.4] (see also [3, 32]) we conclude that there exist $a_k > 0$ and $v_k \in D_F u(\theta_k)$ such that $\|a_k v_k - \xi_k\| < 1/k$. It follows that

$$\lim_{k \to \infty} a_k v_k = \xi.$$

Since $v_k$ is bounded away from $0$, $a_k$ is bounded. Passing to a subsequence if necessary we may assume that $a_k \to a$. If $a \ne 0$ then $v_k$ converges to an element of $\partial u(r)$ and, therefore, $\xi \in a\, \partial u(r)$. If $a = 0$, then by definition $\xi \in \partial^\infty u(r)$. In view of (4) we have shown that $\prec$ is regular at $r$. The formula for $N(l(r), r)$ also follows.

Using Lemma 3.10 and Remark 3.3 we have the following corollary of Theorem 3.2.

Corollary 3.11 Let $\prec$ be a preference determined by a utility function $u$. Suppose that $u$ satisfies the condition of Lemma 3.10. Let $x$ be a solution to the multiobjective optimal control problem $\mathcal{M}$. Then there exist an absolutely continuous mapping $p : [0,1] \to R^n$, a nonzero vector $\lambda \in \partial^\infty u(\phi(x(1))) \cup \partial u(\phi(x(1)))$, and a scalar $\lambda_0 = 0$ or $1$ satisfying $\lambda_0 + \|p(t)\| \ne 0$ for all $t \in [0,1]$, such that

$$(-\dot p(t), \dot x(t)) \in \partial_C H(x(t), p(t)) \ a.e. \text{ in } [0,1],$$
$$-p(1) \in \lambda_0 \partial \langle \lambda, \phi \rangle(x(1)) + N(E, x(1)).$$

Moreover, one can always choose $\lambda_0 = 1$ when $x(1) \in \mathrm{int}\, E$.

Here we have derived necessary conditions for an optimal control problem with a continuous decision function. This example also shows that under favorable conditions the necessary optimality conditions in terms of a preference and in terms of its utility function are the same. However, the condition in terms of the normal cone to the level sets of the preference is intrinsic. In fact, if $u$ is a (smooth) utility function corresponding to the preference $\prec$ then so is $v(r) = (u(r) - u(\phi(x(1))))^3$. But $v$ has derivative $0$ at $\phi(x(1))$. Thus, using $v$ as a decision function, the necessary optimality conditions in Corollary 3.11 yield no useful information. Our next example considers the preference determined by the lexicographical order. This preference does not correspond to any real utility function [12, page 72].
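The cubing remark above can be checked numerically. In this invented example $u$ is a smooth utility on $R^2$ and `r_opt` stands in for $\phi(x(1))$: both $u$ and $v(r) = (u(r) - u(\phi(x(1))))^3$ induce the same preference, yet the gradient of $v$ vanishes at the optimal value, so a transversality condition built from $v$ degenerates.

```python
# Numerical check (invented example) that cubing a utility preserves the
# preference it induces but kills its derivative at the optimal value.
import numpy as np

u = lambda r: r[0] + 2.0 * r[1]        # a smooth utility on R^2
r_opt = np.array([1.0, 1.0])           # stand-in for phi(x(1))
v = lambda r: (u(r) - u(r_opt)) ** 3   # s < r iff v(s) < v(r), same as for u

def grad(f, r, h=1e-6):
    """Central-difference gradient of f at r."""
    e = np.eye(len(r))
    return np.array([(f(r + h * e[i]) - f(r - h * e[i])) / (2.0 * h)
                     for i in range(len(r))])

print(grad(u, r_opt))  # approximately [1, 2]: a usable normal direction
print(grad(v, r_opt))  # approximately [0, 0]: no information at the optimum
```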

Example 3.12 (The Preference Determined by the Lexicographical Order) Define $r \prec s$ if there exists an integer $q \in \{0, 1, \dots, m-1\}$ such that $r_i = s_i$, $i = 1, \dots, q$, and $r_{q+1} < s_{q+1}$. It is easy to check that $\prec$ satisfies assumptions (A1) and (A2) in Definition 3.1. A straightforward calculation yields $l(r) = \{s = (s_1, \dots, s_m) \in R^m : s_1 \le r_1\}$. It follows that, for any $r, \theta \in R^m$, $N(l(r), \theta) \subset \{a e_1 : a \ge 0\}$. Here $e_1 = (1, 0, \dots, 0) \in R^m$. Thus, $\prec$ is regular at any $r \in R^m$. Moreover, $N(l(\phi(x(1))), \phi(x(1))) = \{a e_1 : a \ge 0\}$. Combining Theorem 3.2 and Remark 3.3 we have

Corollary 3.13 Let $x$ be a solution to the multiobjective optimal control problem $\mathcal{M}$ with the lexicographical preference. Then there exist an absolutely continuous mapping $p : [0,1] \to R^n$ and a scalar $\lambda_0 = 0$ or $1$ satisfying $\lambda_0 + \|p(t)\| \ne 0$ for all $t \in [0,1]$, such that

$$(-\dot p(t), \dot x(t)) \in \partial_C H(x(t), p(t)) \ a.e. \text{ in } [0,1],$$
$$-p(1) \in \lambda_0 \partial \phi_1(x(1)) + N(E, x(1)).$$

Moreover, one can always choose $\lambda_0 = 1$ when $x(1) \in \mathrm{int}\, E$.
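The lexicographical preference of Example 3.12 can be sketched directly; Python's tuple comparison is lexicographic, which, for tuples of equal length, matches the definition with $q$ the number of leading equal coordinates. The vectors below are invented for illustration.

```python
# Lexicographical preference of Example 3.12 (illustrative sketch).
def lex_prec(r, s):
    """r strictly precedes s: at the first differing coordinate, r_i < s_i."""
    return tuple(r) < tuple(s)

r = (1.0, 5.0, 0.0)
print(lex_prec((1.0, 4.0, 9.0), r))    # True: first coordinates tie, 4 < 5
print(lex_prec((0.0, 99.0, 99.0), r))  # True: 0 < 1 decides immediately
print(lex_prec(r, r))                  # False: the preference is nonreflexive
```

The first coordinate dominates the comparison, which reflects why only $\phi_1$ enters the transversality condition of Corollary 3.13.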


Intuitively this tells us that, since objective $\phi_1$ is much more important than the other objectives, the necessary conditions for a multiobjective optimal control problem with the lexicographical preference are the same as the necessary conditions for an optimal control problem with the single objective function $\phi_1$. To get further information one can add an additional endpoint constraint $D = \{y : \phi_1(y) = \phi_1(x(1))\}$ to obtain necessary conditions: there exist an absolutely continuous mapping $p : [0,1] \to R^n$ and a scalar $\lambda_0 = 0$ or $1$ satisfying $\lambda_0 + \|p(t)\| \ne 0$ for all $t \in [0,1]$ such that

$$(-\dot p(t), \dot x(t)) \in \partial_C H(x(t), p(t)) \ a.e. \text{ in } [0,1],$$
$$-p(1) \in \lambda_0 \partial \phi_2(x(1)) + N(E \cap D, x(1)).$$

This process can be continued. Note that the adjoint arcs $p$ and scalars $\lambda_0$ in this sequence of necessary conditions are not necessarily the same.

4 Proof of Theorem 3.2

We divide the proof into several steps.

Step 1. Converting the multiobjective optimal control problem into an abstract optimization problem. The method we use here develops a similar conversion for the single objective problem that can be traced back to [7] (see also [8, 10]). The way we handle the multiobjective preference is stimulated by the proof of the extremal principle in [24, 26, 28].

Let $A := \{y(1) : y \text{ is a solution to } (1)\}$ and let $\Gamma := \{\phi(y) : y \in A \cap E\}$. Let $\varepsilon$ be an arbitrary positive number and let $\Delta \in (0, \min(1, \varepsilon)/8L_\phi)$, where $L_\phi$ is the Lipschitz rank of $\phi$. Choose $\bar\xi \prec \phi(x(1))$ such that $\|\bar\xi - \phi(x(1))\| < \Delta^2$ and define $\Lambda := l(\bar\xi)$. We observe that both $\Gamma$ and $\Lambda$ are closed. Moreover, it follows from condition (A2) on $\prec$ that

$$\Gamma \cap \Lambda = \emptyset.$$

Define

$$f(\gamma, \eta) := \|\gamma - \eta\| + \delta_\Gamma(\gamma) + \delta_\Lambda(\eta).$$

Then, for any $(\gamma, \eta) \in R^{2m}$, $f(\gamma, \eta) > 0$, and $f(\phi(x(1)), \bar\xi) = \|\phi(x(1)) - \bar\xi\| < \Delta^2$. Let

$$C := \{(u, v) \in L^2([0,1]; R^n) \times L^2([0,1]; R^n) : v(t) \in F(u(t)) \ a.e. \text{ in } [0,1]\}$$

and

$$K := \Big\{(\zeta, u, v) \in R^n \times L^2([0,1]; R^n) \times L^2([0,1]; R^n) : \zeta = x_0 + \int_0^1 v(s)\,ds \text{ and } u(t) = x_0 + \int_0^t v(s)\,ds \Big\}.$$

Then for any $\gamma = (\gamma_1, \dots, \gamma_m)$, $\zeta$, $u$ and $v$ we have

$$\delta_\Gamma(\gamma) \le \sum_{i=1}^m \delta_{\mathrm{graph}\,\phi_i}(\zeta, \gamma_i) + \delta_E(\zeta) + \delta_K(\zeta, u, v) + \delta_C(u, v)$$

and

$$0 = \delta_\Gamma(\phi(x(1))) = \sum_{i=1}^m \delta_{\mathrm{graph}\,\phi_i}(x(1), \phi_i(x(1))) + \delta_E(x(1)) + \delta_K(x(1), x, \dot x) + \delta_C(x, \dot x).$$

Define

$$\Phi(\gamma, \eta, \zeta, u, v) := \|\gamma - \eta\| + \delta_\Lambda(\eta) + \sum_{i=1}^m \delta_{\mathrm{graph}\,\phi_i}(\zeta, \gamma_i) + \delta_E(\zeta) + \delta_K(\zeta, u, v) + \delta_C(u, v).$$

Then $\Phi > 0$ and $\Phi(\phi(x(1)), \bar\xi, x(1), x, \dot x) = \|\bar\xi - \phi(x(1))\| < \Delta^2$. By virtue of the Ekeland variational principle [15], there exist $\bar\gamma \in \phi(x(1)) + \Delta B_{R^m}$, $\bar\eta \in (\bar\xi + \Delta B_{R^m}) \cap \Lambda \subset (\phi(x(1)) + 2\Delta B_{R^m}) \cap \Lambda$, $\bar\zeta \in (x(1) + \Delta B_{R^n}) \cap E$, $\bar u \in x + \Delta B_{L^2([0,1];R^n)}$ and $\bar v \in \dot x + \Delta B_{L^2([0,1];R^n)}$ such that

$$\Phi(\gamma, \eta, \zeta, u, v) + \Delta \|(\gamma, \eta, \zeta, u, v) - (\bar\gamma, \bar\eta, \bar\zeta, \bar u, \bar v)\|$$

attains a minimum at $(\gamma, \eta, \zeta, u, v) = (\bar\gamma, \bar\eta, \bar\zeta, \bar u, \bar v)$. To simplify notation we denote $z := (\gamma, \eta, \zeta, u, v)$ and $Z := R^{2m+n} \times L^2([0,1]; R^n) \times L^2([0,1]; R^n)$. Define functions

$$f_1(z) := \|\gamma - \eta\| + \delta_\Lambda(\eta) + \sum_{i=1}^m \delta_{\mathrm{graph}\,\phi_i}(\zeta, \gamma_i) + \delta_E(\zeta),$$
$$f_2(z) := \delta_K(\zeta, u, v), \qquad f_3(z) := \delta_C(u, v)$$

and

$$f_4(z) := \Delta \|(\gamma, \eta, \zeta, u, v) - (\bar\gamma, \bar\eta, \bar\zeta, \bar u, \bar v)\|.$$

Then $f_1, f_2, f_3, f_4$ are lower semicontinuous and $f_1 + f_2 + f_3 + f_4$ attains a minimum at $\bar z$ over a closed neighborhood $U$ of $\bar z$.

Step 2. Applying the sum rule. To do so, we need to check that $(f_1, f_2, f_3, f_4)$ is uniformly lower semicontinuous around $\bar z$. The argument is similar to that of [19] but somewhat simpler because of the weaker condition required in the sum rule of Theorem 2.4 (see [42] for the case of single objective problems). Let $z_1^k, z_2^k, z_3^k, z_4^k \in U$ be four sequences satisfying

$$\mathrm{diam}(z_1^k, z_2^k, z_3^k, z_4^k) \to 0 \text{ as } k \to \infty \qquad (5)$$

such that

$$\lim_{k \to \infty} \sum_{i=1}^4 f_i(z_i^k) = \lim_{h \to 0} \inf\Big\{\sum_{i=1}^4 f_i(z_i) : \mathrm{diam}(z_1, z_2, z_3, z_4) \le h,\ z_i \in U\Big\}.$$

We may assume that the limit on the left is finite, so that $(\zeta_2^k, u_2^k, v_2^k) \in K$ and $(u_3^k, v_3^k) \in C$, i.e.,

$$v_3^k(t) \in F(u_3^k(t)) \ a.e. \text{ in } [0,1]. \qquad (6)$$

Since $z_3^k \in U$, $u_3^k$ is a bounded sequence in $L^2([0,1]; R^n)$. Since $F$ is Lipschitz with compact values, $v_3^k$ is also a bounded sequence in $L^2([0,1]; R^n)$. Without loss of generality we may assume that $v_3^k$ converges weakly to $v$ in $L^2([0,1]; R^n)$. Then it follows from relation (5) that $v_2^k$ also converges weakly to $v$ in $L^2([0,1]; R^n)$. Since $(\zeta_2^k, u_2^k, v_2^k) \in K$ we have $\zeta_2^k = x_0 + \int_0^1 v_2^k(s)\,ds$ and $u_2^k(t) = x_0 + \int_0^t v_2^k(s)\,ds$. Thus, we may assume that $\zeta_2^k$ converges to $\zeta = x_0 + \int_0^1 v(s)\,ds$ and that $u_2^k$ converges to $u(t) = x_0 + \int_0^t v(s)\,ds$ in $L^2([0,1]; R^n)$. By (5), $u_3^k$ also converges to $u$ in $L^2([0,1]; R^n)$. It follows from (6) that, for almost all $t \in [0,1]$,

$$\langle v^*, v_3^k(t) \rangle \le \sup\{\langle v^*, w \rangle : w \in F(u_3^k(t))\} \quad \text{for all } v^* \in R^n.$$

Taking limits as $k \to \infty$ yields, for almost all $t \in [0,1]$,

$$\langle v^*, v(t) \rangle \le \sup\{\langle v^*, w \rangle : w \in F(u(t))\} \quad \text{for all } v^* \in R^n.$$

Thus, $v(t) \in F(u(t))$ a.e. in $[0,1]$; that is, $(u, v) \in C$ and $(\zeta, u, v) \in K$. Combining the convergence of $\zeta_2^k$ to $\zeta$ with (5), we conclude that $\zeta_1^k$ also converges to $\zeta$. Passing to a subsequence if necessary, we may assume that the bounded sequences $\gamma_1^k$ and $\eta_1^k$ converge to $\gamma$ and $\eta$, respectively; write $z := (\gamma, \eta, \zeta, u, v)$. Since $f_2(z_2^k) = f_3(z_3^k) = 0$ for sufficiently large $k$, $f_1$ is lower semicontinuous and $f_4$ is weakly lower semicontinuous, we have

$$\lim_{k \to \infty} \sum_{i=1}^4 f_i(z_i^k) \ge f_1(z) + f_4(z) = \sum_{i=1}^4 f_i(z) \ge \sum_{i=1}^4 f_i(\bar z).$$

This verifies that $(f_1, f_2, f_3, f_4)$ is uniformly lower semicontinuous at $\bar z$. Now we can apply the fuzzy sum rule of Theorem 2.4 to $\sum_{i=1}^4 f_i$ at $\bar z$. Noticing that $f_4$ is Lipschitzian with rank $\Delta$, we find that there exist $z_1, z_2, z_3 \in \bar z + \Delta B_Z$ and $z_i^* \in D_F f_i(z_i)$, $i = 1, 2, 3$, such that

$$\|z_1^* + z_2^* + z_3^*\| < 2\Delta. \qquad (7)$$

Since $f_1$ does not depend on $u$ and $v$ we have $z_1^* = (\gamma^*, \eta^*, \zeta^*, 0, 0)$. Similarly, we may write $z_2^* = (0, 0, \zeta_2^*, u^*, v^*)$ and $z_3^* = (0, 0, 0, q, p)$. Our next task is to calculate $z_i^*$, $i = 1, 2, 3$.

Step 3. Calculating $z_1^*$. Let $z_1 = (\gamma, \eta, \zeta, u_1, v_1)$ and let $g$ be a concave $C^1$ function on $R^m \times R^m \times R^n$ such that $g'(\gamma, \eta, \zeta) = (\gamma^*, \eta^*, \zeta^*)$ and $f_1 - g$ attains a minimum at $(\gamma, \eta, \zeta)$. In particular, setting $\gamma' = \phi(\zeta')$, we see that

$$\|\phi(\zeta') - \eta'\| + \delta_\Lambda(\eta') - g(\phi(\zeta'), \eta', \zeta') + \delta_E(\zeta') \qquad (8)$$

attains a minimum at $(\eta', \zeta') = (\eta, \zeta)$. Applying the sum rule of Theorem 2.4 and the chain rule of Theorem 2.5 to the functions in (8), with some straightforward (yet somewhat tedious) calculation we conclude that there exist $\eta_0 \in (\eta + \Delta B_{R^m}) \cap \Lambda$, $\zeta_1 \in \zeta + \Delta B_{R^n}$, $\zeta_2 \in (\zeta + \Delta B_{R^n}) \cap E$, and $\lambda \in N_F(\Lambda, \eta_0)$ with $\|\lambda\| \in (1 - 3\Delta, 1 + 3\Delta)$ such that

$$\zeta^* \in D_F \langle \lambda, \phi \rangle(\zeta_1) + N_F(E, \zeta_2) + \Delta B_{R^n}. \qquad (9)$$

Step 4. Calculating $z_2^*$. Write $z_2 = (\gamma_2, \eta_2, \zeta, u, v)$ and $z_2^* = (0, 0, \zeta^*, u^*, v^*)$. Then $(\zeta^*, u^*, v^*) \in N_F(K, (\zeta, u, v))$. The useful information for us is summarized in the following lemma.

Lemma 4.1 Let $(\zeta^*, u^*, v^*) \in N_F(K, (\zeta, u, v))$. Then

$$\zeta^* + v^*(t) + \int_t^1 u^*(s)\,ds = 0 \ a.e. \text{ in } [0,1].$$

Proof. Since $K$ is convex, the Fréchet normal cone of $K$ coincides with the normal cone of convex analysis. Thus,

$$\langle \zeta' - \zeta, \zeta^* \rangle + \langle u' - u, u^* \rangle + \langle v' - v, v^* \rangle \le 0 \quad \text{for all } (\zeta', u', v') \in K.$$

By the definition of $K$ we have that, for any $v' \in L^2([0,1]; R^n)$,

$$\Big\langle \zeta^*, \int_0^1 (v'(t) - v(t))\,dt \Big\rangle + \int_0^1 \Big\langle u^*(t), \int_0^t (v'(s) - v(s))\,ds \Big\rangle dt + \int_0^1 \langle v^*(t), v'(t) - v(t) \rangle\, dt \le 0.$$

Integrating by parts yields that, for any $v' \in L^2([0,1]; R^n)$,

$$\int_0^1 \Big\langle \zeta^* + \int_t^1 u^*(s)\,ds + v^*(t),\ v'(t) - v(t) \Big\rangle dt \le 0.$$

Thus, $\zeta^* + \int_t^1 u^*(s)\,ds + v^*(t) = 0$ a.e. in $[0,1]$.

Step 5. Calculating $N_F(C, (u, v))$. Combining (7), (9) and Lemma 4.1 we conclude that, for any $\varepsilon > 0$, there exist $\bar\xi \in \phi(x(1)) + \varepsilon B_{R^m}$, $\eta_0 \in (\eta + \Delta B_{R^m}) \cap \Lambda \subset (\phi(x(1)) + \varepsilon B_{R^m}) \cap \Lambda$, $\zeta_1 \in \zeta + \Delta B_{R^n} \subset x(1) + \varepsilon B_{R^n}$, $\zeta_2 \in (\zeta + \Delta B_{R^n}) \cap E \subset (x(1) + \varepsilon B_{R^n}) \cap E$, $\lambda \in N_F(l(\bar\xi), \eta_0)$ with $\|\lambda\| \in (1 - \varepsilon, 1 + \varepsilon)$, and $\zeta_2^* \in N_F(E, \zeta_2)$ with

$$\zeta^* \in D_F \langle \lambda, \phi \rangle(\zeta_1) + \zeta_2^* + \varepsilon B_{R^n}, \qquad (10)$$

and such that there exist $(u, v) \in (x, \dot x) + \varepsilon B_{L^2([0,1];R^n) \times L^2([0,1];R^n)}$, $(q, p) \in N_F(C, (u, v))$ and $u^* \in L^2([0,1]; R^n)$ satisfying

$$\Big\| \Big( u^*,\ -\zeta^* - \int_{\,\cdot}^1 u^*(s)\,ds \Big) - (q, p) \Big\| < \varepsilon. \qquad (11)$$

Now we need to calculate the normal cone $N_F(C, (u, v))$. This calculation is similar to [10, Lemma 9.4].

Lemma 4.2 Let $(q, p) \in N_F(C, (u, v))$. Then

$$(-q(t), v(t)) \in \partial_C H(u(t), p(t)) \ a.e. \text{ in } [0,1].$$

Proof. Since $(q, p) \in N_F(C, (u, v))$, there exists a $C^1$ concave function $w$ on $L^2([0,1]; R^n) \times L^2([0,1]; R^n)$ with $w(u, v) = 0$ and $w'(u, v) = 0$ such that, for any $(u', v') \in C$,

$$\langle q, u' - u \rangle + \langle p, v' - v \rangle + w(u', v') \le 0. \qquad (12)$$

Let $B := (0,1) \cap \{\text{the Lebesgue points of } u \text{ and } v\}$. Then $B$ has measure $1$. For any $t \in B$, $(\alpha, \beta) \in \mathrm{Graph}\, F$ and $h > 0$, define

$$u_h'(s) := \begin{cases} \alpha, & s \in [t-h, t+h], \\ u(s), & \text{otherwise} \end{cases}$$

and

$$v_h'(s) := \begin{cases} \beta, & s \in [t-h, t+h], \\ v(s), & \text{otherwise.} \end{cases}$$

Then $(u_h', v_h') \in C$, $\|u_h' - u\| = O(h)$, $\|v_h' - v\| = O(h)$ and $w(u_h', v_h') = o(h)$. Setting $(u', v') = (u_h', v_h')$ in (12), dividing by $2h$ and taking limits yields

$$\langle q(t), \alpha - u(t) \rangle + \langle p(t), \beta - v(t) \rangle \le 0 \quad \text{for all } (\alpha, \beta) \in \mathrm{Graph}\, F. \qquad (13)$$

In particular, setting $\alpha = u(t)$ we have

$$\langle p(t), \beta - v(t) \rangle \le 0 \quad \text{for all } \beta \in F(u(t)). \qquad (14)$$

That is to say,

$$\langle p(t), v(t) \rangle = \sup_{\beta \in F(u(t))} \langle p(t), \beta \rangle = H(u(t), p(t)). \qquad (15)$$

Define

$$g(x, p') := \langle p(t) - p', v(t) \rangle + \|p(t) - p'\|^2 + \langle q(t), x - u(t) \rangle + H(x, p').$$

Then $g$ is Lipschitz and strictly convex in $p'$ for each $x$. Let $U_0$ be a ball around $u(t)$ and let $M$ be a uniform bound for $F(x)$ over $U_0$. Then $|H(x, p')| \le M \|p'\|$ for $x \in U_0$. Thus, for all $x \in U_0$, the function $p' \to g(x, p')$ attains a unique minimum at $p' = p(x)$ and $\|p(x)\| \le c$ for some constant $c$. We claim that (i) $p' \to g(u(t), p')$ attains a local minimum at $p' = p(t)$, and (ii) $x \to \min_{p'} g(x, p')$ attains a local maximum at $x = u(t)$. It then follows from [10, Lemma 9.5] that $(0, 0) \in \partial_C g(u(t), p(t))$, i.e.,

$$(-q(t), v(t)) \in \partial_C H(u(t), p(t)).$$

It remains to verify claims (i) and (ii). By the minimax theorem we have

$$\begin{aligned}
\min_{p'} g(x, p') &= \min_{p'}\{\langle p(t) - p', v(t) \rangle + \|p(t) - p'\|^2 + \langle q(t), x - u(t) \rangle + H(x, p')\} \\
&= \min_{p'} \max_{\beta \in F(x)}\{\langle p(t) - p', v(t) \rangle + \|p(t) - p'\|^2 + \langle q(t), x - u(t) \rangle + \langle p', \beta \rangle\} \\
&= \max_{\beta \in F(x)} \min_{p'}\{\langle p', \beta - v(t) \rangle + \|p(t) - p'\|^2 + \langle q(t), x - u(t) \rangle + \langle p(t), v(t) \rangle\} \\
&= \max_{\beta \in F(x)}\{\langle p(t), \beta - v(t) \rangle - \|\beta - v(t)\|^2/4 + \langle q(t), x - u(t) \rangle + \langle p(t), v(t) \rangle\}.
\end{aligned} \qquad (16)$$

In particular, when $x = u(t)$ we have, by (14) and (15),

$$\min_{p'} g(u(t), p') = \max_{\beta \in F(u(t))}\{\langle p(t), \beta - v(t) \rangle - \|\beta - v(t)\|^2/4 + \langle p(t), v(t) \rangle\} = \langle p(t), v(t) \rangle = g(u(t), p(t)).$$

This verifies (i). On the other hand, combining (13) and (16) we have

$$\min_{p'} g(x, p') \le \langle p(t), v(t) \rangle = g(u(t), p(t)) = \min_{p'} g(u(t), p'),$$

which verifies (ii).

Step 6. Taking limits. Let $\varepsilon = 1/k$ for $k = 1, 2, \dots$. By (10) and (11) there exist sequences

$\xi_k, \eta_k \to \phi(x(1))$, $\zeta_1^k, \zeta_2^k \to x(1)$, $\lambda_k \in N_F(l(\xi_k), \eta_k)$ with $\|\lambda_k\| \to 1$, and $\zeta_{2,k}^* \in N_F(E, \zeta_2^k)$ with

$$\zeta_k^* \in D_F \langle \lambda_k, \phi \rangle(\zeta_1^k) + \zeta_{2,k}^* + (1/k) B_{R^n}, \qquad (17)$$

together with $(u_k, v_k) \to (x, \dot x)$ in $L^2([0,1]; R^n) \times L^2([0,1]; R^n)$, $(q_k, p_k) \in N_F(C, (u_k, v_k))$ and $u_k^* \in L^2([0,1]; R^n)$ such that

$$\Big\| \Big( u_k^*,\ -\zeta_k^* - \int_{\,\cdot}^1 u_k^*(s)\,ds \Big) - (q_k, p_k) \Big\| < 1/k. \qquad (18)$$

We consider the limiting process in the following two cases.

The Good Case: $\|\zeta_{2,k}^*\|$ is bounded. Passing to a subsequence we may assume that $\zeta_{2,k}^*$ converges to some $\zeta_2^* \in N(E, x(1))$. Since $\phi$ is Lipschitzian and $\|\lambda_k\| \to 1$, taking subsequences if necessary we may assume that $\zeta_k^*$ converges to

$$\zeta^* \in \partial \langle \lambda, \phi \rangle(x(1)) + N(E, x(1)),$$

where $\lambda \in N(l(\phi(x(1))), \phi(x(1)))$ and $\|\lambda\| = 1$ by (A3). By Lemma 4.2, $(q_k, p_k) \in N_F(C, (u_k, v_k))$ implies that

$$(-q_k(t), v_k(t)) \in \partial_C H(u_k(t), p_k(t)) \ a.e. \text{ in } [0,1]. \qquad (19)$$

Since $F$ is Lipschitz of rank $L_F$, $H(u, p)$ is Lipschitz in $u$ with rank $L_F \|p\|$. It follows from (19) that

$$\|q_k(t)\| \le L_F \|p_k(t)\|. \qquad (20)$$

Combining (18) and (20) we have

$$\|u_k^*(t)\| \le L_F \Big( \int_t^1 \|u_k^*(s)\|\,ds + \|\zeta_k^*\| + 2/k \Big).$$

Invoking Gronwall's inequality we conclude that $u_k^*(t)$ is uniformly bounded on $[0,1]$ and, therefore, $u_k^*$ is a bounded sequence in $L^2([0,1]; R^n)$. Again, taking a subsequence if necessary, we may assume that $u_k^*$ converges weakly to, say, $q$ in $L^2([0,1]; R^n)$. Then, by (18), $q_k$ converges weakly to $q$ and $p_k$ converges strongly to $p := -\zeta^* - \int_{\,\cdot}^1 q(s)\,ds$ in $L^2([0,1]; R^n)$. Taking limits in (19) as $k \to \infty$ yields

$$(-\dot p(t), \dot x(t)) \in \partial_C H(x(t), p(t)) \ a.e. \text{ in } [0,1].$$

It is obvious that $-p(1) = \zeta^* \in \partial \langle \lambda, \phi \rangle(x(1)) + N(E, x(1))$. Thus, we have derived the necessary condition of Theorem 3.2 corresponding to the case $\lambda_0 = 1$.

The Bad Case: $\|\zeta_{2,k}^*\|$ is unbounded. Without loss of generality we may assume that $\|\zeta_{2,k}^*\| \to \infty$. Dividing the sequences $\zeta_k^*, \zeta_{2,k}^*, u_k^*, q_k$ and $p_k$ by $\|\zeta_{2,k}^*\|$ and taking limits as before yields an absolutely continuous function $p$ satisfying

$$(-\dot p(t), \dot x(t)) \in \partial_C H(x(t), p(t)) \ a.e. \text{ in } [0,1]$$

with $-p(1) = \zeta^* = \lim_{k \to \infty} \zeta_k^*/\|\zeta_{2,k}^*\| = \lim_{k \to \infty} \zeta_{2,k}^*/\|\zeta_{2,k}^*\| \in N(E, x(1))$. Observing that $\|\zeta^*\| = 1$, we have $\|p(t)\| > 0$ for all $t \in [0,1]$. This corresponds to the necessary condition of Theorem 3.2 with $\lambda_0 = 0$.

Finally, we observe that if $x(1) \in \mathrm{int}\, E$ then, for $k$ sufficiently large, $\zeta_{2,k}^* = 0$, so the good case always applies.

References

[1] J. M. Borwein, Proper efficient points for maximization with respect to cones, SIAM J. Contr. & Optim. 15 (1977), 57-63.

[2] J. M. Borwein & A. Ioffe, Proximal analysis in smooth spaces, CECM Research Report 93-04 (1993); Set-Valued Analysis 4 (1996), 1-24.

[3] J. M. Borwein, J. S. Treiman & Q. J. Zhu, Necessary conditions for constrained optimization problems with semicontinuous and continuous data, CECM Research Report 95-051 (1995); Trans. Amer. Math. Soc. 350 (1998), 2409-2429.

[4] J. M. Borwein and Q. J. Zhu, Viscosity solutions and viscosity subderivatives in smooth Banach spaces with applications to metric regularity, CECM Research Report 94-12 (1994); SIAM J. Control and Optimization 34 (1996), 1568-1591.

[5] J. M. Borwein and Q. J. Zhu, A survey of subdifferential calculus with applications, Nonlinear Analysis, TMA, to appear.

[6] W. W. Breckner, Derived sets for weak multiobjective optimization problems with state and control variables, J. Optim. Theo. Appli. 93 (1997), 73-102.

[7] F. H. Clarke, Necessary Conditions for Nonsmooth Problems in Optimal Control and the Calculus of Variations, Ph.D. thesis, Univ. of Washington, 1973.

[8] F. H. Clarke, Optimization and Nonsmooth Analysis, John Wiley & Sons, New York, 1983; Russian edition MIR, Moscow, 1988. Reprinted as Vol. 5 of the series Classics in Applied Mathematics, SIAM, Philadelphia, 1990.

[9] F. H. Clarke, Methods of Dynamic and Nonsmooth Optimization, CBMS-NSF Regional Conference Series in Applied Mathematics, Vol. 57, SIAM, Philadelphia, 1989.

[10] F. H. Clarke, Yu. S. Ledyaev, R. J. Stern & P. R. Wolenski, Nonsmooth Analysis and Control Theory, Graduate Texts in Mathematics Vol. 178, Springer-Verlag, New York, 1998.

[11] B. D. Craven, Nonsmooth multiobjective programming, Numer. Funct. Anal. and Optimization 10 (1989), 49-64.

[12] G. Debreu, Theory of Value, John Wiley and Sons, New York, 1959.

[13] G. Debreu, Mathematical Economics: Twenty Papers of Gerard Debreu, Cambridge University Press, 163-172, 1983.

[14] J. Dong, Nondifferentiable multiobjective optimization (Chinese), Adv. in Math. 23 (1994), 517-528.

[15] I. Ekeland, On the variational principle, J. Math. Anal. Appl. 47 (1974), 324-353.

[16] Z. Hu, S. E. Salcudean and P. D. Loewen, Multiple objective control problems via nonsmooth analysis, The 13th World Congress of IFAC, San Francisco, CA, USA, June 30-July 5, 1996.

[17] A. D. Ioffe, Calculus of Dini subdifferentials of functions and contingent derivatives of set-valued maps, Nonlinear Analysis, TMA 8 (1984), 517-539.

[18] A. D. Ioffe, Euler-Lagrange and Hamiltonian formalisms in dynamic optimization, Trans. Amer. Math. Soc. 349 (1997), 2871-2900.

[19] A. D. Ioffe and R. T. Rockafellar, The Euler and Weierstrass conditions for nonsmooth variational problems, Calc. Var. 4 (1996), 59-87.

[20] P. D. Loewen and R. T. Rockafellar, New necessary conditions for the generalized problem of Bolza, SIAM J. Control Optim. 34 (1996), 1496-1511.

[21] B. Kaskosz, A maximum principle in relaxed controls, Nonlinear Analysis, TMA 14 (1990), 357-367.

[22] B. Kaskosz and S. Lojasiewicz, Jr., A maximum principle for generalized control systems, Nonlinear Analysis, TMA 9 (1985), 109-130.

[23] B. Kaskosz and S. Lojasiewicz, Jr., Lagrange-type extremal trajectories in differential inclusions, Systems and Control Letters 19 (1992), 241-247.

[24] A. Y. Kruger and B. S. Mordukhovich, Extremal points and Euler equations in nonsmooth optimization (Russian), Dokl. Akad. Nauk BSSR 24 (1980), 684-687.

[25] M. Minami, Weak Pareto-optimal necessary conditions in a nondifferentiable multiobjective program on a Banach space, J. Optim. Theory Appl. 41 (1983), 451-461.

[26] B. S. Mordukhovich, Maximum principle in problems of time optimal control with nonsmooth constraints, J. Appl. Math. Mech. 40 (1976), 960-969.

[27] B. S. Mordukhovich, Approximation Methods in Problems of Optimization and Control (Russian), Nauka, Moscow, 1988. (English transl. to appear in Wiley-Interscience.)

[28] B. S. Mordukhovich, Generalized differential calculus for nonsmooth and set-valued mappings, J. Math. Anal. Appl. 183 (1994), 250-288.

[29] C. Singh, Optimality conditions in multiobjective differentiable programming, J. Optim. Theory Appl. 53 (1987), 115-123.

[30] H. J. Sussmann, A strong maximum principle for systems of differential inclusions, Proceedings of the 35th IEEE Conference on Decision and Control, Kobe, Japan, December 1996.

[31] H. J. Sussmann, Transversality conditions and a strong maximum principle for systems of differential inclusions, Proceedings of the 37th IEEE Conference on Decision and Control, Tampa, FL, December 1998.

[32] J. S. Treiman, Lagrange multipliers for nonconvex generalized gradients with equality, inequality and set constraints, SIAM J. Control Optim., to appear.

[33] H. D. Tuan, On controllability and extremality in nonconvex differential inclusions, J. Optim. Theory Appl. 85 (1995), 435-472.

[34] R. B. Vinter and H. Zheng, Necessary conditions for optimal control problems with state constraints, Trans. Amer. Math. Soc. 350 (1998), 1181-1204.

[35] L. Wang, J. Dong and Q. Liu, Optimality conditions in nonsmooth multiobjective programming, System Sci. and Math. Sci. 7 (1994), 250-255.

[36] L. Wang, J. Dong and Q. Liu, Nonsmooth multiobjective programming, System Sci. and Math. Sci. 7 (1994), 362-366.

[37] J. Warga, Optimization and controllability without differentiability assumptions, SIAM J. Control Optim. 21 (1983), 837-855.

[38] J. Warga, An extension of the Kaskosz maximum principle, Applied Math. Optim. 22 (1990), 61-74.

[39] X. Q. Yang and V. Jeyakumar, First and second-order optimality conditions for convex composite multiobjective optimization, J. Optim. Theory Appl. 95 (1997), 209-224.

[40] M. Ying, The nondominated solution and the proper efficiency of nonsmooth multiobjective programming, J. Sys. Sci. and Math. Sci. 5 (1985), 269-278.

[41] Q. J. Zhu, Necessary optimality conditions for nonconvex differential inclusions with endpoint constraints, J. Diff. Equs. 124 (1996), 186-204.

[42] Q. J. Zhu, Optimal control problem and nonsmooth analysis, Proceedings of the 38th IEEE Conference on Decision and Control, 15-18 Dec. 1998, Tampa, FL.
