Uniqueness of lower semicontinuous viscosity solutions for the minimum time problem

Olivier Alvarez
UPRESA 60-85, Site Colbert, Université de Rouen, 76821 Mont-Saint-Aignan Cedex, France

Shigeaki Koike and Isao Nakayama
Department of Mathematics, Saitama University, 255 Shimo-Okubo, Urawa, Saitama 338-8570, Japan

Abstract

We obtain the uniqueness of lower semicontinuous (lsc for short) viscosity solutions of the transformed minimum time problem assuming that they converge to zero on a "reachable" part of the target in appropriate directions. We present a counter-example which shows that the uniqueness does not hold without this convergence assumption. It was shown by Soravia that the uniqueness of lsc viscosity solutions having a "subsolution property" on the target holds. In order to verify this subsolution property, we show that the Dynamic Programming Principle (DPP for short) holds inside for any lsc viscosity solution. In order to obtain the DPP, we prepare appropriate approximate PDEs derived through Barles' inf-convolution and its variant.


1 Introduction

In this manuscript, we discuss the minimum time problem of deterministic optimal control, which has been studied via the viscosity solution approach by many authors. As the first result, we refer to Bardi [Ba]. See also [EJ], [BF], [BS] and [BSo], which also treated the minimum time problem of differential games. In those works, they characterized the value function of the minimum time problem to reach a given target as the unique viscosity solution of a first-order PDE. However, they only treated the case when the resulting value functions are continuous, since those uniqueness results imply the continuity of solutions. We note that discontinuous value functions often appear in practical minimum time problems. The break-through in treating semicontinuous solutions of first-order PDEs was made by Barron and Jensen [BJ1]. Indeed, they introduced a new definition of semicontinuous viscosity solutions for Cauchy problems with convex Hamiltonians, which arise when we deal with optimal control problems. Under their setting, it was shown in [BJ2] that the semicontinuous value function is the unique solution of the associated PDE. We note that, if we restrict ourselves to continuous viscosity solutions, then their definition is equivalent to the standard one. Afterwards, Barles [B1] discussed semicontinuous solutions for stationary problems utilizing Barles' convolution. With this idea, Soravia [S1] studied Dirichlet type problems. More precisely, he imposed a "subsolution" property on the boundary of the target, under which the uniqueness of lsc viscosity solutions for the (transformed) minimum time problem was obtained. See also [K] and [BL] for related topics. Recently, Cârja et al. in [CMP] (see also [C]) studied lsc viscosity solutions of the minimum time problem assuming that they converge to the Dirichlet data from inside. For the viscosity solution theory of first-order Hamilton-Jacobi equations, we refer to the book by Bardi and Capuzzo Dolcetta [BC]. On the other hand, in non-smooth analysis, lsc solutions have been studied in optimal control theory. For the first result, we refer to Frankowska [F]. More recently, Wolenski and Zhuang [WZ] have proved the uniqueness of lsc solutions of the minimum time problem assuming the subsolution property on the target as in [S1], which the value function satisfies. We note that their definition of solutions is slightly different from that of viscosity solutions.

It is worth mentioning that, in [WZ], to show the uniqueness, they compared the other lsc solution (if it exists) with the value function by the so-called invariance theory, while, in the literature of the viscosity solution theory, this has been shown via a comparison principle for a boundary value problem of PDEs. Our aim here is to obtain a uniqueness result without assuming the subsolution property on the target. In fact, we will derive such a property from the definition of solutions under some continuity assumption on a "reachable" part of the boundary. Then, we will be able to apply Soravia's argument in [S1] to get the uniqueness. Moreover, we will show that our continuity condition is equivalent to Soravia's. In fact, to show that Soravia's condition implies the continuity one, we give a direct proof, though we could also prove it using the uniqueness of solutions. In an example, we will see that this continuity assumption is necessary to obtain the uniqueness result. Here, we shall recall the original minimum time problem. Consider the state equation associated with controls

$$\alpha \in \mathcal{A} \equiv \{\alpha : [0,\infty) \to A \ \text{measurable}\},$$

where $A$ is a compact set in $\mathbb{R}^m$ (for some $m \in \mathbb{N}$). For $x \in \mathbb{R}^n$,

$$\begin{cases} \dfrac{dX}{dt}(t) = g(X(t), \alpha(t)) & \text{for } t > 0,\\[1mm] X(0) = x, \end{cases} \tag{1.1}$$

where $g : \mathbb{R}^n \times A \to \mathbb{R}^n$ is a given function and $x \in \mathbb{R}^n$ is fixed. We shall denote, under appropriate hypotheses, by $X(\cdot; x, \alpha)$ the (unique) solution of (1.1). We will also denote by $X(\cdot; x, \xi(\cdot))$ the unique solution, for a vector field $\xi \in W^{1,\infty}(\mathbb{R}^n; \mathbb{R}^n)$, of

$$\begin{cases} \dfrac{dX}{dt}(t) = \xi(X(t)) & \text{for } t > 0,\\[1mm] X(0) = x. \end{cases}$$

For simplicity, we shall suppose that

(A0) $\mathcal{T} \subset \mathbb{R}^n$ is compact.

With these notations, we recall the value function of the minimum time problem:

$$V(x) = \inf_{\alpha \in \mathcal{A}} T^{\alpha}_x,$$

where $T^{\alpha}_x = \inf\{t \geq 0 \mid X(t; x, \alpha) \in \mathcal{T}\}$. Since $V(x)$ might be infinity in a subregion of $\Omega \equiv \mathbb{R}^n \setminus \mathcal{T}$, we would have to study the free boundary problem:

$$\max_{a \in A}\{-\langle g(x,a), DV(x)\rangle - 1\} = 0 \quad \text{in } \mathcal{R} \equiv \{x \in \Omega \mid V(x) < \infty\}. \tag{1.2}$$
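For orientation, the following elementary one-dimensional computation (an illustration only, not one of the examples treated later) shows how (1.2) arises:

$$n = 1,\quad \mathcal{T} = \{0\},\quad A = [-1,1],\quad g(x,a) = a \;\Longrightarrow\; V(x) = |x|,$$
$$\max_{a \in [-1,1]}\{-a\,V'(x) - 1\} = |V'(x)| - 1 = 0 \quad \text{for } x \neq 0, \qquad \mathcal{R} = \Omega = \mathbb{R}\setminus\{0\}.$$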

Since we cannot expect that $\mathcal{R}$ is open in general, as we will see, we meet some difficulty if we treat (1.2) directly. Therefore, in this paper, following the previous works, we shall consider the value function transformed by the Kruzkov transformation:

$$u(x) = \inf_{\alpha \in \mathcal{A}} \left\{ 1 - e^{-T^{\alpha}_x} \right\}.$$

Then, we can expect $u$ to be a solution of

$$u(x) + \max_{a \in A}\{-\langle g(x,a), Du(x)\rangle\} = 1 \quad \text{in } \Omega. \tag{1.3}$$
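The relation between (1.2) and (1.3) can be seen, at least formally (the value functions need not be differentiable), from the identity $u = 1 - e^{-V}$ on $\mathcal{R}$: wherever $V$ is differentiable,

$$Du = e^{-V}DV, \qquad u + \max_{a \in A}\{-\langle g, Du\rangle\} = 1 - e^{-V} + e^{-V}\max_{a \in A}\{-\langle g, DV\rangle\} = 1 - e^{-V} + e^{-V} = 1,$$

using (1.2), while $u \equiv 1$ where $V = \infty$; thus (1.3) makes sense on all of $\Omega$ without the free boundary.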

Thus, once we verify that $u$ is the unique solution of (1.3), we will be able to recover the reachable set as $\mathcal{R} = \{x \in \Omega \mid u(x) < 1\}$. This paper is organized as follows: Section 2 is devoted to our definition of solutions for the minimum time problem and to the DPP, which implies the subsolution property. We present our uniqueness result and examples in Section 3. We also discuss the equivalence of the boundary conditions in Section 3. In the final section, we prove the DPP stated in Section 2.

Acknowledgement: We wish to thank Professor E. N. Barron for informing us of the manuscript [WZ]. We also wish to thank Professor M. Bardi for letting us know an interesting example (see Example 3.4 below) due to P. Soravia. We finally wish to thank the referees for their suggestions on the first draft. The second author (SK) was supported by Grant-in-Aid for Scientific Research No. 09640242 and No. 09440067, the Ministry of Education, Science and Culture, Japan.

2 Dynamic Programming Principle

Our hypothesis on the regularity of the given functions is as follows:

(A1) $g \in C(\mathbb{R}^n \times A; \mathbb{R}^n)$ and $\displaystyle\sup_{a \in A} \|g(\cdot, a)\|_{W^{1,\infty}(\mathbb{R}^n;\mathbb{R}^n)} < \infty$.

For later convenience, we shall consider the following general first-order PDE in a set $\Omega \subset \mathbb{R}^n$:

$$u(x) + \max_{a \in A}\{-\langle g(x,a), Du(x)\rangle - f(x,a)\} = 0 \quad \text{in } \Omega, \tag{2.1}$$

where $f : \mathbb{R}^n \times A \to \mathbb{R}$ is a given continuous function. For simplicity, we shall use the notation

$$H(x, r, p) \equiv r + \max_{a \in A}\{-\langle g(x,a), p\rangle - f(x,a)\}.$$

We will suppose the following regularity of the given functions in (2.1):

(A1') $g \in C(\mathbb{R}^n \times A; \mathbb{R}^n)$, $f \in C(\mathbb{R}^n \times A; \mathbb{R})$, and $\displaystyle\sup_{a \in A}\left\{\|g(\cdot,a)\|_{W^{1,\infty}(\mathbb{R}^n;\mathbb{R}^n)} + \|f(\cdot,a)\|_{W^{1,\infty}(\mathbb{R}^n;\mathbb{R})}\right\} < \infty$.

Following [BJ1] (also [B1]), we present our definition of solutions of (2.1).

Definition: For a function $u : \overline{\Omega} \to \mathbb{R}$, we call it a subsolution (resp., supersolution) of (2.1) if $u$ is lsc in $\overline{\Omega}$ and

$$H(x, u(x), p) \leq \ (\text{resp., } \geq)\ 0 \quad \text{for } x \in \Omega \text{ and } p \in D^-u(x),$$

where $D^-u(x)$ denotes the standard subdifferential of $u$ at $x$:

$$D^-u(x) = \{p \in \mathbb{R}^n \mid u(y) \geq u(x) + \langle p, y - x\rangle + o(|y-x|) \ \text{as } y \to x\}.$$

For a function $u : \overline{\Omega} \to \mathbb{R}$, we also call it a solution of (2.1) if $u$ is both a sub- and supersolution of (2.1).

We characterize the set of reachable controls in the following way: for $x \in \partial\mathcal{T}$,

$$A(x) \equiv \left\{a \in A \;\middle|\; \text{there exists } t > 0 \text{ such that } X(s; x, -g(\cdot,a)) \in \Omega \ \text{for } s \in (0,t)\right\}.$$
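Two standard elementary computations (illustrations only) make the one-sided nature of $D^-u$ concrete:

$$u(x) = |x| \ \text{on } \mathbb{R}: \quad D^-u(0) = [-1,1]; \qquad u(x) = -|x| \ \text{on } \mathbb{R}: \quad D^-u(0) = \emptyset.$$

For lsc functions, $D^-u(x)$ is nonempty on a dense set of points, which is consistent with testing both the sub- and the supersolution inequalities on $D^-u$ in this framework.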


We shall derive the "subsolution" properties on $\partial\mathcal{T}$ for solutions through the following propositions. The first one is

Lemma 2.1. Assume that (A1) holds. Let $u$ be a solution of (1.3). Assume also that $u = 0$ on $\mathcal{T}$. Then, for $x \in \partial\mathcal{T}$ and $p \in D^-u(x)$, we have

$$-\langle g(x,a), p\rangle \leq 0 \quad \text{provided } a \in A \setminus A(x).$$

Proof. Choose $\phi \in C^1$ such that $u - \phi$ attains its minimum over $\mathbb{R}^n$ at $x \in \partial\mathcal{T}$, $u(x) = \phi(x) = 0$, and $D\phi(x) = p$. Set $X(\cdot) \equiv X(\cdot; x, -g(\cdot,a))$. Since $a \in A \setminus A(x)$, there exists $\{t_k > 0\}_{k=1}^{\infty}$ such that $\lim_{k\to\infty} t_k = 0$ and $X(t_k) \in \mathcal{T}$ ($k = 1, 2, \ldots$). Hence,

$$\phi(X(t_k)) - \phi(x) \leq u(X(t_k)) = 0.$$

Therefore, dividing by $t_k$ and then sending $k \to \infty$, we conclude the assertion. QED

For simplicity, we shall suppose that

(A0') $\Omega$ is open, and $\partial\Omega$ is compact.

For $\mu > 0$, we define an open subset

$$\Omega_\mu \equiv \{x \in \Omega \mid \mathrm{dist}(x, \partial\Omega) > \mu\}.$$

Also, for an open subset $O \subset \Omega$ and $x \in O$, we use the notation

$$\tau^{\alpha}_{x,O} = \inf\{t \geq 0 \mid X(t; x, \alpha) \notin O\},$$

and we abbreviate $\tau^{\alpha}_{x,\mu} \equiv \tau^{\alpha}_{x,\Omega_\mu}$. We now present the DPP for (2.1), whose proof will be given in the final section since it is rather complicated.

Theorem 2.2. (cf. [L]) Assume that (A0') and (A1') hold. Let $u : \overline{\Omega} \to \mathbb{R}$ be a bounded solution of

$$H(x, u, Du) = 0 \quad \text{in } \Omega.$$

Then, for $\mu > 0$ and $x \in \Omega_\mu$,

$$u(x) = \inf_{\alpha \in \mathcal{A}} \left\{ \int_0^{\tau^{\alpha}_{x,\mu}} e^{-s} f\big(X(s; x, \alpha), \alpha(s)\big)\, ds + e^{-\tau^{\alpha}_{x,\mu}}\, u\big(X(\tau^{\alpha}_{x,\mu}; x, \alpha)\big) \right\}.$$

Corollary 2.3. Assume that (A0) and (A1) hold. Fix $x \in \partial\mathcal{T}$. Let $u : \overline{\Omega} \to \mathbb{R}$ be a bounded solution of

$$u(x) + \max_{a \in A}\{-\langle g(x,a), Du(x)\rangle\} - 1 = 0 \quad \text{in } \Omega.$$

Assume also that $u = 0$ on $\mathcal{T}$ and that, for any $x \in \partial\mathcal{T}$ and $a \in A(x)$,

$$\liminf_{s \to t} u(X(s; x_t, a)) = u(x) \quad \text{holds},$$

where $x_t \equiv X(t; x, -g(\cdot,a))$ for $t \geq 0$. Then,

$$u(x) - \langle g(x,a), p\rangle \leq 1 \quad \text{for } a \in A(x) \text{ and } p \in D^-u(x).$$

Proof of Corollary 2.3. Let $x \in \partial\mathcal{T}$, $p \in D^-u(x)$ and $a \in A(x)$ be as in the hypothesis. We then set $x_t = X(t; x, -g(\cdot,a)) \in \Omega$ for small $t > 0$. Choose $\phi \in C^1$ such that $u(x) = \phi(x)$, $u \geq \phi$ in $\mathbb{R}^n$, and $D\phi(x) = p$. Fix small $t > 0$ and choose $\mu(t) > 0$ such that $x_t \in \Omega_\mu$ for $\mu \in (0, \mu(t))$. By Theorem 2.2, we have

$$u(x_t) \leq 1 - e^{-\tau^{a}_{x_t,\mu}} + e^{-\tau^{a}_{x_t,\mu}}\, u\big(X(\tau^{a}_{x_t,\mu}; x_t, a)\big),$$

where $a$ stands for the constant control $\alpha(\cdot) \equiv a$. We note that $\lim_{\mu \to 0} \tau^{a}_{x_t,\mu} = t$. We also note that the uniqueness of solutions of (1.1) yields $X(\tau^{a}_{x_t,\mu}; x_t, a) = x_{t - \tau^{a}_{x_t,\mu}}$. Taking the limit infimum as $\mu \to 0$, together with these facts, in the above inequality, we get

$$\phi(x_t) - e^{-t}\phi(x) \leq u(x_t) - e^{-t}u(x) \leq 1 - e^{-t}.$$

Dividing by $t > 0$ and then sending $t \to 0$, we conclude the proof. QED
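To spell out the last limit in the proof above: dividing by $t > 0$ gives

$$\frac{\phi(x_t) - \phi(x)}{t} + \frac{1 - e^{-t}}{t}\,\phi(x) \;\leq\; \frac{1 - e^{-t}}{t},$$

and letting $t \to 0$, with $\frac{d}{dt}x_t\big|_{t=0} = -g(x,a)$, $\phi(x) = u(x)$ and $D\phi(x) = p$, this becomes $u(x) - \langle g(x,a), p\rangle \leq 1$, as claimed.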

3 Main results

In order to obtain the uniqueness result, we will suppose the following continuity assumption: letting $u$ be a lsc function in $\overline{\Omega}$, we will suppose that, for any $x \in \partial\mathcal{T}$ and $a \in A(x)$,

(A2) $\displaystyle \liminf_{s \to t} u(X(s; x_t, a)) = 0 \quad \text{for small } t > 0,$

where $x_t = X(t; x, -g(\cdot,a))$. Notice that we do not suppose that $A(x) \neq \emptyset$ in this hypothesis. Our uniqueness result for (1.3) is as follows.

Theorem 3.1. Assume that (A0) and (A1) hold. Let $u$ and $v : \mathbb{R}^n \to \mathbb{R}$ be bounded solutions of (1.3) satisfying (A2). Assume also that $u = v = 0$ on $\mathcal{T}$. Then, $u = v$ in $\mathbb{R}^n$.

Proof of Theorem 3.1. In view of Lemma 2.1 and Corollary 2.3, we see that

$$u(x) + \max_{a \in A}\{-\langle g(x,a), p\rangle\} \leq 1 \quad \text{for } x \in \partial\mathcal{T} \text{ and } p \in D^-u(x). \tag{3.1}$$

This property enables us to apply Soravia's result (Theorem 3.1 in [S1]) to conclude the proof. QED

Thanks to the above theorem, it is easy to show that the relaxed value function is the unique bounded viscosity solution of (1.3) satisfying (A2). To this end, let us introduce the unique solution $\hat{X}(\cdot; x, \eta)$ of the associated state equation:

$$\hat{X}(t) = x + \int_0^t \int_A g(\hat{X}(s), a)\, d\eta[s](a)\, ds, \tag{3.2}$$

where $s \in [0,\infty) \to \eta[s] \in M(A)$ is measurable. Here, $M(A)$ is the set of all Radon probability measures on $A$. We shall denote by $\hat{\mathcal{A}}$ the set of such maps $\eta$. The relaxed value function is as follows:

$$\hat{V}(x) = \inf_{\eta \in \hat{\mathcal{A}}} \left\{ 1 - e^{-\hat{T}^{\eta}_x} \right\},$$

where $\hat{T}^{\eta}_x = \inf\{t \geq 0 \mid \hat{X}(t; x, \eta) \in \mathcal{T}\}$.
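We note in passing (a standard remark) that ordinary controls embed into relaxed ones via Dirac measures: for $\alpha \in \mathcal{A}$, taking $\eta[s] = \delta_{\alpha(s)}$ in (3.2) recovers (1.1), so that

$$\hat{T}^{\eta}_x = T^{\alpha}_x \quad\text{and hence}\quad \hat{V}(x) \leq \inf_{\alpha \in \mathcal{A}}\{1 - e^{-T^{\alpha}_x}\},$$

the transformed value function of Section 1.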

Theorem 3.2. Assume that (A0) and (A1) hold. Then, $\hat{V}$ is the unique bounded solution of (1.3) satisfying (A2).

Proof of Theorem 3.2. Following the argument in [BJ2], we see that $\hat{V}$ is lsc and satisfies (1.3) in our sense in $\Omega$. To check that (A2) holds for $\hat{V}$, we observe that, for $x \in \partial\mathcal{T}$ and $a \in A(x)$,

$$\hat{V}\big(X(s; X(t; x, -g(\cdot,a)), a)\big) \leq 1 - e^{-(t-s)}.$$

Hence, since $\hat{V}(x) = 0$, sending $s \to t$, we obtain (A2) for $\hat{V}$. We remark here that the nonnegativity of $\hat{V}$ indeed yields

$$\lim_{s \to t} \hat{V}\big(X(s; X(t; x, -g(\cdot,a)), a)\big) = 0.$$

Therefore, Theorem 3.1 immediately implies the assertion. QED

Remark. If we have a lsc solution $u$ of (1.3) satisfying (3.1), then $u(x) = \hat{V}(x)$ in $\mathbb{R}^n$ by Soravia's argument in [S1], since $\hat{V}$ also satisfies (3.1). Hence, (A2) holds true for $u$. Thus, through the above theorem, the condition (3.1) is equivalent to (A2). See [WZ] for the same argument. Now, we shall show that condition (3.1) implies a slightly stronger assertion than (A2).

Theorem 3.3. Assume that (A0) and (A1) hold. Let $u : \mathbb{R}^n \to [0,\infty)$ be a bounded subsolution of (1.3) satisfying (3.1) and $u = 0$ on $\mathcal{T}$. Then, for each $x \in \partial\mathcal{T}$, $a \in A(x)$ and small $t > 0$, we have

$$\lim_{s \to t} u\big(X(s; X(t; x, -g(\cdot,a)), a)\big) = 0.$$

Proof of Theorem 3.3. Fix $x \in \partial\mathcal{T}$ and $a \in A(x)$. As usual, we may suppose $x = 0$ and $g(0,a) = -e_n$, where $e_n = (0, \ldots, 0, 1)$. Furthermore, we may suppose that $g(\cdot,a) = -e_n$ near the origin. Indeed, setting $v(y) = v(y_1, \ldots, y_n) = u\big(X(y_n; (y_1, \ldots, y_{n-1}, 0), -g(\cdot,a))\big)$, we have

$$v(y) + \frac{\partial v}{\partial y_n}(y) = u(X) - \langle g(X,a), Du(X)\rangle.$$

Define $Q_h^\delta = \{x = (x', x_n) \mid -1 < x_n < h,\ |x'| < \delta\}$ for small $h, \delta \in (0,1)$, and $\varphi(x', x_n) = 2(x_n - |x'|^2/\delta^2)$. Since $\min_{\overline{Q_h^\delta}}(u - \varphi) \leq (u - \varphi)(0) = 0$, the minimum point $\hat{x} \in \overline{Q_h^\delta}$ can be attained at some $\hat{x} = (\hat{x}', h)$. Indeed, otherwise, we have four possibilities:

(1) In case $\hat{x}_n = -1$, we immediately see $(u - \varphi)(\hat{x}) \geq 2 > (u - \varphi)(0)$.

(2) In case $|\hat{x}'| = \delta$, we also have $(u - \varphi)(\hat{x}) > 0 = (u - \varphi)(0)$.

(3) In case $\hat{x} = (\hat{x}', \hat{x}_n) \in Q_h^\delta \setminus \overline{\Omega}$, there is $\sigma > 0$ such that $(u - \varphi)(\hat{x} + \sigma e_n) < (u - \varphi)(\hat{x})$.

In the above three cases, we get a contradiction to the choice of the minimum point $\hat{x}$. The remaining case is as follows:

(4) In case $\hat{x} \in \overline{\Omega} \cap Q_h^\delta$, the definition of solutions (together with (3.1) when $\hat{x} \in \partial\mathcal{T}$) yields

$$1 \geq u(\hat{x}) + \frac{\partial\varphi}{\partial x_n}(\hat{x}) \geq 2,$$

which is a contradiction.

Therefore, taking $\delta \to 0$ along a subsequence if necessary, by the lower semicontinuity of $u$, we find $u(0, \ldots, 0, h) \leq 2h$, which concludes the assertion. QED

Remark. Recently, P. Soravia kindly let us know that we can easily obtain the above assertion using the optimality principle in [S2].

Now, we give an example due to P. Soravia.

Example 3.4. For $\mathcal{T} \equiv [-1,1]$, consider the PDE:



$$u + \left|\frac{\partial u}{\partial x}\right| = 1 \quad \text{in } \Omega \equiv \mathbb{R} \setminus \mathcal{T}. \tag{3.3}$$

It is easy to show that the unique (continuous) solution is given by

$$V(x) = \begin{cases} 1 - e^{-|x|+1} & \text{for } |x| \geq 1,\\ 0 & \text{for } |x| < 1. \end{cases}$$

On the other hand, we observe that the following function also satisfies (3.3) in $\Omega$:

$$\hat{V}(x) = \begin{cases} 1 - e^{x+1} & \text{for } x < -1,\\ 0 & \text{for } |x| \leq 1,\\ 1 & \text{for } x > 1. \end{cases}$$

Notice that $\hat{V}$ does not satisfy (A2) at $x = 1$. Thus, this example indicates that it is necessary for the uniqueness result to suppose (A2). We also note that, by Theorem 3.1, $\hat{V}$ is the unique lsc solution of

$$u - \frac{\partial u}{\partial x} = 1 \quad \text{in } \Omega.$$
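Both formulas can be checked by direct differentiation on each component of $\Omega$ (here we take the underlying dynamics to be $\dot{X} = \alpha$ with $\alpha(t) \in [-1,1]$, the natural system behind (3.3)):

$$x > 1:\quad V + |V'| = (1 - e^{-x+1}) + e^{-x+1} = 1, \qquad \hat{V} + |\hat{V}'| = 1 + 0 = 1;$$
$$x < -1:\quad V = \hat{V} = 1 - e^{x+1}, \quad V' = -e^{x+1}, \quad V + |V'| = 1.$$

Moreover, at $x = 1$ the constant control $a = -1$ belongs to $A(1)$ (the backward trajectory $1 + s$ stays in $\Omega$), while the corresponding forward trajectories approach $1$ through $\{x > 1\}$, where $\hat{V} \equiv 1$; hence the $\liminf$ in (A2) equals $1 \neq 0$.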
We next give an example in which the reachable set is not open and the discontinuity appears in $\Omega$. The dynamics involve a coefficient $a(x_2)$ that vanishes for $x_2 \leq -2$. We can verify that the reachable set $\mathcal{R} = \{x \in \Omega \mid V(x) < 1\}$ and the value function $V$ can be computed explicitly; in particular,

$$V(x) = 1 \quad \text{for } x_2 > 1 \text{ or } x_2 \leq -2, \qquad V(x) = 1 - (x_2 + 2)\,e^{-|x_1| - 1} \quad \text{for } x_2 \in (-2,-1],$$

and $V$ is given by similar exponential expressions for $x_2 \in (-1,0]$ and $x_2 \in (0,1]$. Notice that the discontinuity of $V$ occurs at $(x_1, 1) \in \Omega$.


4 Proof of Theorem 2.2

The basic idea of our proof was obtained by P.-L. Lions in [L] for second-order PDEs. We also refer to [EI] and [BSo]. But, in their argument, some regularity of solutions is needed. Hence, we will adapt some approximation techniques.

Let $u$ be a solution of (2.1). We shall extend $u$ (with the same notation) to the whole space by setting $u(x) = \infty$ for $x \notin \overline{\Omega}$. We fix any $T > 0$. We first approximate $u$ by locally Lipschitz continuous functions: for $\varepsilon > 0$ and $(x,t) \in \mathbb{R}^n \times (0,T)$, we define

$$u_\varepsilon(x,t) = \inf_{y \in \mathbb{R}^n}\left\{ u(y) + e^{-\lambda t}\,\frac{|x-y|^2}{2\varepsilon} \right\}$$

and

$$u^\varepsilon(x,t) = \inf_{y \in \mathbb{R}^n}\left\{ u(y) + e^{\lambda t}\,\frac{|x-y|^2}{2\varepsilon} \right\}.$$

Here, we fix $\lambda > 2 + 2\max_{a \in A}\|Dg(\cdot,a)\|_\infty$. Notice that the first one is Barles' convolution, but the second one has the opposite sign of the power on the exponential. It is immediate to see that $u_\varepsilon \leq u^\varepsilon$ in $\mathbb{R}^n \times [0,T]$. We can easily show the properties:

$$\begin{cases} |x - y| \leq e^{\lambda t/2}(2\varepsilon\|u\|_\infty)^{1/2} & \text{if } u_\varepsilon(x,t) = u(y) + e^{-\lambda t}\,\dfrac{|x-y|^2}{2\varepsilon},\\[2mm] |x - y| \leq e^{-\lambda t/2}(2\varepsilon\|u\|_\infty)^{1/2} & \text{if } u^\varepsilon(x,t) = u(y) + e^{\lambda t}\,\dfrac{|x-y|^2}{2\varepsilon}. \end{cases} \tag{4.1}$$

In view of these facts, we define the constant $\hat{c} = \hat{c}(T) \equiv e^{\lambda T/2}(2\|u\|_\infty)^{1/2}$, so that the minimizing points $y$ in (4.1) satisfy $|x - y| \leq \hat{c}\sqrt{\varepsilon}$. We claim that the following properties hold: for some $C_1, C_1' > 0$ independent of $\varepsilon$ and $\mu > 0$,

$$0 \geq u_\varepsilon(x,t) + q + \max_{a \in A}\{-\langle g(x,a), p\rangle - f(x,a)\} - C_1\sqrt{\varepsilon}\,e^{\lambda t} \quad \text{for } (x,t) \in \Omega_{\hat{c}\sqrt{\varepsilon}} \times (0,T),\ (p,q) \in D^+u_\varepsilon(x,t), \tag{4.2}$$

and

$$0 \leq u^\varepsilon(x,t) + q + \max_{a \in A}\{-\langle g(x,a), p\rangle - f(x,a)\} + C_1'\sqrt{\varepsilon}\,e^{-\lambda t} \quad \text{for } (x,t) \in \Omega_{\hat{c}\sqrt{\varepsilon}} \times (0,T),\ (p,q) \in D^-u^\varepsilon(x,t). \tag{4.3}$$
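As a computational illustration of the two convolutions defined above (a minimal numerical sketch under our own assumptions; the grid, the sample function and all names are illustrative and not part of the argument), one can evaluate $u_\varepsilon$ and $u^\varepsilon$ on a one-dimensional grid as follows.

```python
import numpy as np

def barles_convolutions(u_vals, ys, x, t, eps, lam):
    """Evaluate the two inf-convolutions at a point (x, t).

    u_vals : samples of a bounded lsc function u on the grid ys.
    Returns (lower, upper) where
      lower = min_y { u(y) + exp(-lam*t) * |x - y|**2 / (2*eps) }
      upper = min_y { u(y) + exp(+lam*t) * |x - y|**2 / (2*eps) }
    """
    quad = (x - ys) ** 2 / (2.0 * eps)
    lower = np.min(u_vals + np.exp(-lam * t) * quad)
    upper = np.min(u_vals + np.exp(+lam * t) * quad)
    return lower, upper

# Example: u = 0 on a "target" [-1, 1] and u = 1 outside (a bounded lsc function).
ys = np.linspace(-3.0, 3.0, 2001)
u_vals = np.where(np.abs(ys) <= 1.0, 0.0, 1.0)

for eps in (1e-1, 1e-2, 1e-3):
    lo, up = barles_convolutions(u_vals, ys, x=1.05, t=0.5, eps=eps, lam=3.0)
    print(eps, lo, up)  # both increase towards u(1.05) = 1 as eps -> 0, and lo <= up
```

The printed values illustrate the inequality $u_\varepsilon \leq u^\varepsilon$ and the convergence of both convolutions to $u$ as $\varepsilon \to 0$.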

Here, $D^+u(x,t) = -D^-(-u)(x,t)$. We note that (4.3) holds in a larger set than $\Omega_{\hat{c}\sqrt{\varepsilon}}$, but this is sufficient to conclude the proof. Although it is not hard to show (4.2) and (4.3) by the argument in [B1] together with (4.1), we give a brief proof for the reader's convenience. Since (4.3) can be obtained easily by noting the sign of the power on the exponential, we shall only show (4.2). Let us recall the Barron-Jensen lemma, which will be needed also for checking the sign of $q$ in (4.2):

Lemma 4.1. (See [BJ1] or [K]) Fix $(x,t) \in \mathbb{R}^n \times (0,T)$ and $(p,q) \in D^+u_\varepsilon(x,t)$. For any $\gamma > 0$, there exist $(x_\gamma^k, t_\gamma^k) \in \mathbb{R}^n \times (0,T)$, $(p_\gamma^k, q_\gamma^k) \in D^-u_\varepsilon(x_\gamma^k, t_\gamma^k)$ for $k \in \{1, 2, \ldots, n(\gamma)\}$ (with some $n(\gamma) \in \mathbb{N}$), $(x_\gamma, t_\gamma) \in \mathbb{R}^n \times (0,T)$, $C_0 > 0$, and $\{\lambda_\gamma^k \in [0,1]\}_{k=1}^{n(\gamma)}$ such that

$$\begin{cases}
\text{(i)} & \displaystyle\lim_{\gamma \to 0} |x_\gamma^k - x_\gamma| = 0,\\
\text{(ii)} & \displaystyle\lim_{\gamma \to 0} (x_\gamma, t_\gamma^k) = (x,t) \quad (\forall k = 1, \ldots, n(\gamma)),\\
\text{(iii)} & |p_\gamma^k| \leq C_0 \quad (\forall \gamma > 0,\ k = 1, \ldots, n(\gamma)),\\
\text{(iv)} & \displaystyle\sum_{k=1}^{n(\gamma)} \lambda_\gamma^k = 1 \quad (\forall \gamma > 0),\\
\text{(v)} & \displaystyle\lim_{\gamma \to 0} \sum_{k=1}^{n(\gamma)} \lambda_\gamma^k\, (p_\gamma^k, q_\gamma^k) = (p,q).
\end{cases} \tag{4.4}$$

For $(x,t) \in \Omega_{\hat{c}\sqrt{\varepsilon}} \times (0,T)$ and $(p,q) \in D^+u_\varepsilon(x,t)$ in (4.2), we choose $(x_\gamma^k, t_\gamma^k)$ etc. as in Lemma 4.1. Since we may suppose $x_\gamma^k \in \Omega_{\hat{c}\sqrt{\varepsilon}}$ for small $\gamma > 0$, in view of (4.1), we can choose $y_k \in \Omega$ such that

$$u_\varepsilon(x_\gamma^k, t_\gamma^k) = u(y_k) + e^{-\lambda t_\gamma^k}\,\frac{|x_\gamma^k - y_k|^2}{2\varepsilon}.$$

Since $p_\gamma^k \in D^-u(y_k)$, the definition yields

$$0 = u(y_k) + \max_{a \in A}\{-\langle g(y_k,a), p_\gamma^k\rangle - f(y_k,a)\}. \tag{4.5}$$

Noting $p_\gamma^k = e^{-\lambda t_\gamma^k}\,\dfrac{x_\gamma^k - y_k}{\varepsilon}$, we calculate in the following way:

$$\begin{aligned}
0 &\geq u(y_k) + \max_{a \in A}\{-\langle g(x_\gamma^k,a), p_\gamma^k\rangle - f(y_k,a)\} - 2\max_{a \in A}\|Dg(\cdot,a)\|_\infty\, e^{-\lambda t_\gamma^k}\,\frac{|x_\gamma^k - y_k|^2}{2\varepsilon}\\
&\geq u(y_k) + \max_{a \in A}\{-\langle g(x_\gamma,a), p_\gamma^k\rangle - f(x_\gamma^k,a)\} - \max_{a \in A}\|Dg(\cdot,a)\|_\infty\left(2 e^{-\lambda t_\gamma^k}\,\frac{|x_\gamma^k - y_k|^2}{2\varepsilon} + |x_\gamma^k - x_\gamma|\,|p_\gamma^k|\right)\\
&\qquad - \max_{a \in A}\|Df(\cdot,a)\|_\infty\,|x_\gamma^k - y_k|.
\end{aligned}$$

Since we may also suppose $\lambda e^{-\lambda t_\gamma^k}\,\dfrac{|x_\gamma^k - y_k|^2}{2\varepsilon} + q_\gamma^k = 0$ for small $\gamma > 0$, by (4.5) and (iii) of (4.4), we can find $C_1 > 0$ such that

$$\begin{aligned}
0 &\geq u_\varepsilon(x_\gamma^k, t_\gamma^k) + q_\gamma^k + \max_{a \in A}\{-\langle g(x_\gamma,a), p_\gamma^k\rangle - f(x_\gamma^k,a)\} - C_0 \max_{a \in A}\|Dg(\cdot,a)\|_\infty\,|x_\gamma^k - x_\gamma| - C_1\sqrt{\varepsilon}\,e^{\lambda t_\gamma^k}\\
&\qquad + \Big(\lambda - 2\max_{a \in A}\|Dg(\cdot,a)\|_\infty - 2\Big)\, e^{-\lambda t_\gamma^k}\,\frac{|x_\gamma^k - y_k|^2}{2\varepsilon}.
\end{aligned}$$

From the choice of $\lambda$, we see that the last term on the right-hand side above is nonnegative. Taking the convex combination with $\{\lambda_\gamma^k\}_{k=1}^{n(\gamma)}$ and then sending $\gamma \to 0$ with (i), (ii), (iv), (v) of (4.4), we have

$$0 \geq u_\varepsilon(x,t) + q + \max_{a \in A}\{-\langle g(x,a), p\rangle - f(x,a)\} - C_1\sqrt{\varepsilon}\,e^{\lambda t}.$$

Now, for $\mu > 0$, we choose $\zeta_\mu \in C^1(\mathbb{R}^n)$ such that

$$0 \leq \zeta_\mu \leq 1 \ \text{in } \mathbb{R}^n, \qquad \zeta_\mu = 1 \ \text{in } \Omega_\mu, \qquad \zeta_\mu = 0 \ \text{in } (\Omega_{\mu/2})^c.$$

We set the functions

$$\begin{cases} g_\mu(x,a) = \zeta_\mu(x)\,g(x,a),\\ f_{\varepsilon,\mu}(x,t,a) = \zeta_\mu(x)\big(f(x,a) + C_1\sqrt{\varepsilon}\,e^{\lambda t}\big) + (1 - \zeta_\mu(x))\,u_\varepsilon(x,t),\\ f^{\varepsilon,\mu}(x,t,a) = \zeta_\mu(x)\big(f(x,a) - C_1'\sqrt{\varepsilon}\,e^{-\lambda t}\big) + (1 - \zeta_\mu(x))\,u^\varepsilon(x,t). \end{cases}$$

We then consider the problems: for $(x,t,p,q) \in \mathbb{R}^n \times (0,T) \times \mathbb{R}^n \times \mathbb{R}$,

$$u + u_t + H_{\varepsilon,\mu}(x,t,Du) = 0 \tag{4.6}$$

and

$$u + u_t + H^{\varepsilon,\mu}(x,t,Du) = 0, \tag{4.7}$$

where

$$H_{\varepsilon,\mu}(x,t,p) = \max_{a \in A}\{-\langle g_\mu(x,a), p\rangle - f_{\varepsilon,\mu}(x,t,a)\}, \qquad H^{\varepsilon,\mu}(x,t,p) = \max_{a \in A}\{-\langle g_\mu(x,a), p\rangle - f^{\varepsilon,\mu}(x,t,a)\}.$$

In what follows, we suppose that $\mu > 2\hat{c}\sqrt{\varepsilon}$. We claim that $u_\varepsilon$ and $u^\varepsilon$, respectively, are standard viscosity sub- and supersolutions of $u + u_t + H_{\varepsilon,\mu} = 0$ and $u + u_t + H^{\varepsilon,\mu} = 0$ in $\mathbb{R}^n \times (0,T)$: for $(x,t) \in \mathbb{R}^n \times (0,T)$,

$$u_\varepsilon(x,t) + q + H_{\varepsilon,\mu}(x,t,p) \leq 0 \quad \text{provided } (p,q) \in D^+u_\varepsilon(x,t), \tag{4.8}$$

and

$$u^\varepsilon(x,t) + q + H^{\varepsilon,\mu}(x,t,p) \geq 0 \quad \text{provided } (p,q) \in D^-u^\varepsilon(x,t). \tag{4.9}$$

Indeed, it is immediate to check that $u_\varepsilon$ and $u^\varepsilon$, respectively, satisfy, for $(x,t) \in \mathbb{R}^n \times (0,T)$,

$$u_\varepsilon(x,t) + \zeta_\mu(x)\,q + H_{\varepsilon,\mu}(x,t,p) \leq 0 \quad \text{provided } (p,q) \in D^+u_\varepsilon(x,t),$$

and

$$u^\varepsilon(x,t) + \zeta_\mu(x)\,q + H^{\varepsilon,\mu}(x,t,p) \geq 0 \quad \text{provided } (p,q) \in D^-u^\varepsilon(x,t).$$

Here, we have used the fact that $\zeta_\mu(x) = 0$ for $x \notin \Omega_{\hat{c}\sqrt{\varepsilon}}$. We first show (4.9). Since $(p,q) \in D^-u^\varepsilon(x,t)$, from the definition, we have $q = \lambda e^{\lambda t}\,\dfrac{|x-y|^2}{2\varepsilon} \geq 0$ for some $y$. Hence, we conclude our claim because $0 \leq \zeta_\mu \leq 1$. Thus, for (4.8), it is sufficient to show that $q \leq 0$ provided $(p,q) \in D^+u_\varepsilon(x,t)$. This is not straightforward, unlike (4.9). However, in view of (iv) and (v) of Lemma 4.1, $q$ can be approximated by $\sum_{k=1}^{n(\gamma)} \lambda_\gamma^k\, q_\gamma^k$ (as $\gamma \to 0$) for $(p_\gamma^k, q_\gamma^k) \in D^-u_\varepsilon(x_\gamma^k, t_\gamma^k)$ with appropriate $(x_\gamma^k, t_\gamma^k)$. Hence, we can see that $q_\gamma^k = -\lambda e^{-\lambda t_\gamma^k}\,\dfrac{|x_\gamma^k - y|^2}{2\varepsilon} \leq 0$ for some $y$. Therefore, $q \leq 0$.

Now, we shall give the value functions $u_{\varepsilon,\mu}$ and $u^{\varepsilon,\mu}$, respectively, for (4.6) and (4.7) with initial conditions $u_\varepsilon(\cdot,0)$ and $u^\varepsilon(\cdot,0)$:

$$u_{\varepsilon,\mu}(x,t) = \inf_{\alpha \in \mathcal{A}} \left\{ \int_0^t e^{-s} f_{\varepsilon,\mu}\big(X(s; x, \alpha), t-s, \alpha(s)\big)\, ds + e^{-t}\, u_\varepsilon\big(X(t; x, \alpha), 0\big) \right\}$$

and

$$u^{\varepsilon,\mu}(x,t) = \inf_{\alpha \in \mathcal{A}} \left\{ \int_0^t e^{-s} f^{\varepsilon,\mu}\big(X(s; x, \alpha), t-s, \alpha(s)\big)\, ds + e^{-t}\, u^\varepsilon\big(X(t; x, \alpha), 0\big) \right\}.$$

Since $f_{\varepsilon,\mu}(x,t,a) \leq f^{\varepsilon,\mu}(x,t,a) + \sqrt{\varepsilon}\,(C_1 e^{\lambda t} + C_1' e^{-\lambda t})$ and $u_\varepsilon \leq u^\varepsilon$, there exists $C_2 > 0$ such that

$$u_{\varepsilon,\mu}(x,t) \leq u^{\varepsilon,\mu}(x,t) + C_2\sqrt{\varepsilon}\,e^{\lambda t} \quad \text{in } \mathbb{R}^n \times [0,T]. \tag{4.10}$$

We also remark that $u_{\varepsilon,\mu}$ and $u^{\varepsilon,\mu}$ are bounded and continuous. Hence, the standard comparison principle yields

$$u_\varepsilon(x,t) \leq u_{\varepsilon,\mu}(x,t) \quad \text{and} \quad u^{\varepsilon,\mu}(x,t) \leq u^\varepsilon(x,t) \quad \text{in } \mathbb{R}^n \times [0,T]. \tag{4.11}$$

Fix $x \in \Omega$ and choose $\mu > 0$ so that $x \in \Omega_\mu$. Then, the DPP for $u_{\varepsilon,\mu}$ at $(x,T)$, together with (4.10) and (4.11), implies that

$$\begin{aligned}
u_\varepsilon(x,T) &\leq \inf_{\alpha \in \mathcal{A}} \left\{ e^{-\tau^{\alpha}_{x,\mu} \wedge T}\, u_{\varepsilon,\mu}\big(X(\tau^{\alpha}_{x,\mu} \wedge T; x, \alpha), (T - \tau^{\alpha}_{x,\mu})_+\big) + \int_0^{\tau^{\alpha}_{x,\mu} \wedge T} e^{-s} f\big(X(s;x,\alpha), \alpha(s)\big)\, ds \right\}\\
&\leq \inf_{\alpha \in \mathcal{A}} \left\{ e^{-\tau^{\alpha}_{x,\mu} \wedge T}\, u^{\varepsilon,\mu}\big(X(\tau^{\alpha}_{x,\mu} \wedge T; x, \alpha), (T - \tau^{\alpha}_{x,\mu})_+\big) + C_2\sqrt{\varepsilon}\, e^{\lambda T} + \int_0^{\tau^{\alpha}_{x,\mu} \wedge T} e^{-s} f\big(X(s;x,\alpha), \alpha(s)\big)\, ds \right\}\\
&\leq u^\varepsilon(x,T) + C_2\sqrt{\varepsilon}\, e^{\lambda T}.
\end{aligned} \tag{4.12}$$

We note that, for each $\alpha \in \mathcal{A}$, (4.10) and (4.11) imply

$$u\big(X(\tau^{\alpha}_{x,\mu} \wedge T; x, \alpha)\big) = \lim_{\varepsilon \to 0} u_{\varepsilon,\mu}\big(X(\tau^{\alpha}_{x,\mu} \wedge T; x, \alpha), (T - \tau^{\alpha}_{x,\mu})_+\big) = \lim_{\varepsilon \to 0} u^{\varepsilon,\mu}\big(X(\tau^{\alpha}_{x,\mu} \wedge T; x, \alpha), (T - \tau^{\alpha}_{x,\mu})_+\big). \tag{4.13}$$

Therefore, sending $\varepsilon \to 0$ with (4.13) in (4.12), we have

$$u(x) = \inf_{\alpha \in \mathcal{A}} \left\{ e^{-\tau^{\alpha}_{x,\mu} \wedge T}\, u\big(X(\tau^{\alpha}_{x,\mu} \wedge T; x, \alpha)\big) + \int_0^{\tau^{\alpha}_{x,\mu} \wedge T} e^{-s} f\big(X(s;x,\alpha), \alpha(s)\big)\, ds \right\}.$$

Finally, sending $T \to \infty$, we conclude the proof. QED

References [Ba] [BC] [BF] [BS] [BSo]

[B1] [BJ1] [BJ2] [BL] [C]

M. Bardi, A boundary value problem for the minimum time

function, SIAM J. Control, 27 (1989), 776-785. M. Bardi & I. Capuzzo Dolcetta, Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations, Birkhauser, 1996. M. Bardi & M. Falcone, An approximation scheme for the minimum time function, SIAM J. Control Optim., 28 (1990), 950-965. M. Bardi & V. Staicu, The Bellman equation for timeoptimal control of noncontrollable nonlinear systems, Acta Applicanda Mathematicae, 31 (1993), 201-223. M. Bardi & P. Soravia, Hamilton-Jacobi equations with singular boundary conditions on a free boundary and applications to di erential games, Trans. Amer. Math. Soc., 325 (1991), 205-229. G. Barles, Discontinuous viscosity solutions of rst-order Hamilton-Jacobi equations: A guided visit, Nonlinear Anal., 20 (1993), 1123-1134. E. N. Barron & R. Jensen, Semicontinuous viscosity solutions of Hamilton-Jacobi equations with convex Hamiltonians, Comm. Partial Di erential Equation, 15 (1990), 1713-1742. E. N. Barron & R. Jensen, Optimal control and semicontinuous viscosity solutions, Proc. Amer. Math. Soc., 111 (1991), 397-402. E. N. Barron & W. Liu, Semicontinuous and continuous blowup and minimal time functions, unpublished. O. Ca^rja, Lower semicontinuous solutions for a class of Hamilton-Jacobi-Bellman equations, J. Optim. Theory Appl., 89 (1996), 637-657. 17

[CMP] [EI] [EJ] [F] [K] [L]

[S1]

[S2]

[WZ]

O. Ca^rja, F. Mignanego & G. Pieri, Lower semicontin-

uous solutions of the Bellman equations for the minimum time problem, J. Optim. Theory Appl., 85 (1995), 563-574. L. C. Evans & H. Ishii, Di erential games and nonlinear rst order PDE on bounded domains, Manuscripta Math., 49 (1984), 109-139. L. C. Evans & M. R. James, The Hamilton-Jacobi-Bellman equation for time-optimal control, SIAM J. Control Optim., 27 (1989), 1477-1489. H. Frankowska, Lower semicontinuous solutions of Hamilton-Jacobi equations, SIAM J. Control Optim., 31 (1993), 257-272. S. Koike, Semicontinuous viscosity solutions for HamiltonJacobi equations with a degenerate coecient, Di erential Integral Equation., 10 (1997), 455-472. P.-L. Lions, Optimal control of di usion processes and Hamilton-Jacobi-Bellman equations: Part 2: Viscosity solutions and uniqueness, Comm. Partial Di erential Equation, 8 (1983), 1229-1276. P. Soravia, Discontinuous viscosity solutions to Dirichlet problems for Hamilton-Jacobi equations with convex Hamiltonians, Comm. Partial Di erential Equation, 19 (1993), 14931514. P. Soravia, Optimality principles and representation formulas for viscosity solutions of Hamilton-Jacobi equations, I: Equations of unbounded and degenerate control problems without uniqueness, II: Equations of control problems with state constraints, to appear. P. R. Wolenski & Y. Zhuang, Proximal analysis and the minimal time function, preprint. 18