© 2001 Society for Industrial and Applied Mathematics

SIAM J. CONTROL OPTIM. Vol. 40, No. 3, pp. 824–852

STATIONARY HAMILTON–JACOBI EQUATIONS IN HILBERT SPACES AND APPLICATIONS TO A STOCHASTIC OPTIMAL CONTROL PROBLEM∗

SANDRA CERRAI†

Abstract. We study an infinite horizon stochastic control problem associated with a class of stochastic reaction-diffusion systems with coefficients having polynomial growth. The hamiltonian is assumed to be only locally Lipschitz continuous, so that the quadratic case can be covered. We prove that the value function V corresponding to the control problem is given by the solution of the stationary Hamilton–Jacobi equation associated with the state system. To this purpose we write the Hamilton–Jacobi equation in integral form and, by using the smoothing properties of the transition semigroup relative to the state system and the theory of m-dissipative operators, we show that it admits a unique solution. Moreover, the value function V is obtained as the limit of minima for some approximating control problems which admit unique optimal controls and states.

Key words. stochastic reaction-diffusion systems, stationary Hamilton–Jacobi–Bellman equations in infinite dimension, infinite horizon stochastic control problems

AMS subject classifications. 60H15, 60J35, 93C20, 93E20

PII. S0363012999359949

1. Introduction. In the present paper we are concerned with an infinite horizon stochastic control problem associated with the following reaction-diffusion system perturbed by a random term:

$$
(1.1)\qquad
\begin{cases}
\dfrac{\partial y_k}{\partial t}(t,\xi) = A_k y_k(t,\xi) + f_k(\xi, y_1(t,\xi),\ldots,y_r(t,\xi)) + z_k(t,\xi) + Q_k\,\dfrac{\partial^2 w_k}{\partial t\,\partial\xi}(t,\xi), & t \ge 0,\ \xi \in O,\\[1.5ex]
y_k(0,\xi) = x_k(\xi), & \xi \in O,\\[0.5ex]
B_k y_k(t,\xi) = 0, & \xi \in \partial O,
\end{cases}
\qquad k = 1,\ldots,r.
$$

Here O is a bounded open set in R^d, d ≤ 3, with regular boundary. The second order differential operators A_k are strictly elliptic, have regular coefficients, and are endowed with some boundary conditions B_k. The function f = (f_1,…,f_r) : O × R^r → R^r is twice differentiable, has polynomial growth together with its derivatives, and verifies suitable dissipativity conditions. The linear operators Q_k are bounded and self-adjoint from L^2(O) into itself and are not assumed to be Hilbert–Schmidt in general. Finally, the random fields ∂²w_k/∂t∂ξ are mutually independent white noises in space and in time, defined on the same stochastic basis (Ω, F, F_t, P), and z_k are square integrable processes adapted to the filtration F_t. Such a class of systems is of interest in applications and, especially in chemistry and in the present setting, has been widely studied by several authors (see, for example, Freidlin [18] and Da Prato and Zabczyk [14]).

∗Received by the editors August 4, 1999; accepted for publication (in revised form) February 8, 2001; published electronically September 28, 2001. http://www.siam.org/journals/sicon/40-3/35994.html
†Dipartimento di Matematica per le Decisioni, Università di Firenze, Via C. Lombroso 6/17, I-50134 Firenze, Italy ([email protected]fi.it).

We recall that in [14] it


is proved that for any initial datum x in the Hilbert space H = L^2(O; R^r) and for any adapted control z ∈ L^2(Ω; L^2(0,+∞;H)) the system (1.1) admits a unique solution y(t;x,z) in a generalized sense that we will specify later. Moreover, if x ∈ C(O; R^r) and z ∈ L^2(Ω; L^p(0,+∞;H)) with p > 4/(4−d), such a solution is a mild solution. In correspondence with the system (1.1) we study the following stochastic control problem: minimizing the cost functional

$$ (1.2)\qquad J(x,z) = \mathbb{E}\int_0^{+\infty} e^{-\lambda t}\,\big[\,g(y(t;x,z)) + k(z(t))\,\big]\,dt, $$

among all controls z ∈ L^2(Ω; L^2(0,+∞;H)) adapted to the filtration F_t. Here y(t;x,z) is the unique solution of (1.1), and g : H → R is Lipschitz continuous and bounded. Moreover, k : H → (−∞,+∞] is a measurable mapping such that its Legendre transform K, which is defined by

$$ K(x) = \sup_{y\in H}\,\big\{ -\langle x, y\rangle_H - k(y) \big\},\qquad x\in H, $$

is Fréchet differentiable and locally Lipschitz continuous together with its derivative. Our aim here is to study the value function corresponding to the functional (1.2),

$$ V(x) = \inf\big\{\, J(x,z)\ :\ z \in L^2(\Omega; L^2(0,+\infty;H)),\ \text{adapted}\,\big\}. $$

Namely, we show that, if A is the realization in H of the differential operator A = (A_1,…,A_r), endowed with the boundary conditions B = (B_1,…,B_r), and if F is the Nemytskii operator associated with the function f = (f_1,…,f_r), then, under the assumption of Lipschitz continuity for K, for any λ > 0 and g ∈ C_b(H)¹ the infinite dimensional second order nonlinear elliptic problem

$$ (1.3)\qquad \lambda\varphi(x) - \frac12\,\mathrm{Tr}\,\big[Q^2 D^2\varphi(x)\big] - \langle Ax + F(x),\, D\varphi(x)\rangle_H + K(D\varphi(x)) = g(x) $$

admits a unique differentiable mild solution ϕ. This means that there exists a unique solution ϕ ∈ C_b^1(H) to the integral problem

$$ \varphi(x) = \mathbb{E}\int_0^{+\infty} e^{-\lambda t}\,\big[\, g(y(t;x)) - K(D\varphi(y(t;x)))\,\big]\,dt, $$

where y(t;x) is the solution of the system (1.1) corresponding to z = 0. Moreover, for any x ∈ H the solution ϕ(x) coincides with the function V(x). When K is only locally Lipschitz continuous, there exists µ₀ > 0 such that the same result holds for any λ > µ₀ and g ∈ C_b^1(H). It is important to remark that, even if we assume f(ξ,·) to be more than once differentiable, we are nevertheless able to prove only C¹-regularity in H for the transition semigroup P_t associated with the system (1.1) (see [8]). Then the solution ϕ of the problem (1.3) is only C¹, and we cannot prove the existence of an optimal state and an optimal control for our control problem. Actually, by following a dynamic

¹We shall denote by B_b(H) the Banach space of all bounded Borel functions ϕ : H → R and by C_b(H) the subspace of uniformly continuous functions. Moreover, we denote by C_b^k(H), k ∈ N, the subspace of all k-times Fréchet differentiable functions having uniformly continuous and bounded derivatives up to the kth order.


programming approach, the optimal state and the optimal control would be given, respectively, by the solution ŷ(t) of the so-called closed loop equation

$$ (1.4)\qquad dy(t) = \big[\,Ay(t) + F(y(t)) - DK(D\varphi(y(t)))\,\big]\,dt + Q\,dw(t),\qquad y(0) = x, $$

and by the control ẑ(t) = −DK(Dϕ(ŷ(t))). On the other hand, as Dϕ is only continuous, the mapping x ↦ −DK(Dϕ(x)) is only continuous. Thus we are only able to prove the existence of martingale solutions for the problem (1.4), which in general are not defined on the original stochastic basis (Ω, F, F_t, P), so that the control ẑ(t) is not admissible for our original problem. However, in the case of space dimension d = 1 it is possible to prove the existence and uniqueness of solutions for the closed loop equation and then the existence and uniqueness of an optimal control. In this respect, it could be interesting to see whether, by introducing the notion of relaxed controls (see [17] for the definition and some interesting results in finite dimension), it is possible to prove the existence of an optimal control. Nevertheless, even if we are not able to prove in general the existence of an optimal control, we can show that the value function V is obtained as the limit of minima of suitable approximating cost functionals J_α, α ≥ 0, which admit unique optimal controls and unique optimal states and whose value functions coincide with the solutions of suitable approximating Hamilton–Jacobi problems. Several authors have studied second order Hamilton–Jacobi equations by the approach of viscosity solutions. For the finite dimensional case we refer to the paper by Crandall, Ishii, and Lions [11] and to the book by Fleming and Soner [17], and for the infinite dimensional case we refer to the papers by Lions [25, 26] and to the thesis of Swiech [28]. Other authors have studied regular solutions of second order Hamilton–Jacobi equations; as far as the infinite dimensional case is concerned, we refer to the works by Barbu and Da Prato [1], Cannarsa and Da Prato [2, 3], Gozzi [20, 21], and Haverneau [23] for the evolution case, and by Gozzi and Rouy [22] and Chow and Menaldi [10] for the stationary case.
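Before proceeding, it may help to compute the Legendre transform K explicitly in the simplest quadratic case, the case the locally Lipschitz setting is designed to cover (a standard illustration of ours, not taken from the paper):

$$ k(y) = \tfrac12\,|y|_H^2 \;\Longrightarrow\; K(x) = \sup_{y\in H}\big\{ -\langle x,y\rangle_H - \tfrac12\,|y|_H^2 \big\} = \tfrac12\,|x|_H^2, $$

the supremum being attained at y = −x. Here DK(x) = x is Lipschitz continuous, while K itself is only locally Lipschitz, its Lipschitz constant on a ball of radius R growing like R; this is precisely the regime treated for λ > µ₀.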
More recently, infinite dimensional Hamilton–Jacobi equations have been studied in connection with some ergodic control problems (see, for example, [19] and [16]). The main novelty here lies in the fact that we can prove the existence and uniqueness of regular solutions for (1.3) when the nonlinear coefficient F in the state equation has polynomial growth and is not even well defined in the Hilbert space H. Moreover, we can treat both the case of a Lipschitz continuous hamiltonian and the case of a locally Lipschitz hamiltonian, so that the quadratic case can be covered. Due to the difficulties arising from coefficients which are not Lipschitz continuous, the study of mild solutions for the problem (1.3) is quite delicate, and we have to proceed in several steps. We first consider the case of a Lipschitz hamiltonian K, and we prove the existence and uniqueness result for λ large enough. To this purpose, we apply a fixed point argument in the space C_b^1(H), and we use the regularizing properties of the semigroup P_t which have been studied in detail in [7] and [8]. Namely, it has been proved that

$$ \varphi \in B_b(H) \;\Longrightarrow\; P_t\varphi \in C_b^1(H),\qquad t > 0, $$

and

$$ \sup_{x\in H} |D(P_t\varphi)(x)|_H \le c\,(t\wedge 1)^{-\frac{1+\varrho}{2}}\,\sup_{x\in H}|\varphi(x)| $$


for some constant ϱ < 1 depending on Q. Then, if we denote by L the weak generator of P_t (see [4] for the definition and main properties), by proceeding with suitable approximations we show that the operator N(ϕ) = Lϕ − K(Dϕ) is m-dissipative. This yields the existence and uniqueness of solutions for any λ > 0. Then we consider a locally Lipschitz hamiltonian K. We approximate it by a sequence of Lipschitz functions, we consider the problems associated with the approximating hamiltonians, and, by a suitable a priori estimate, we get our result, even if in a less general case. We remark that throughout the paper we have to proceed by several approximations, because of the intrinsic difficulties in the study of the system (1.1) and of the corresponding transition semigroup P_t. Actually, first we have to approximate the reaction term F by Lipschitz continuous functionals F_α in order to get C² regularity for the semigroup P_t^α associated with the system

$$ (1.5)\qquad dy(t) = \big[\,Ay(t) + F_\alpha(y(t))\,\big]\,dt + Q\,dw(t),\qquad y(0) = x, $$

and then we have to approximate P_t^α by the semigroups P_t^{α,n} associated with the finite dimensional version of (1.5) in order to apply the usual Itô calculus. Unfortunately, the direct approximation of the semigroup P_t by the semigroups P_t^{α,n} does not work.

2. Assumptions. We denote by H the Hilbert space L^2(O; R^r), where O is a bounded open set of R^d, d ≤ 3, with sufficiently regular boundary. The norm and the scalar product in H are denoted by |·|_H and ⟨·,·⟩_H, respectively. Moreover, we denote by E the Banach space C(O; R^r), endowed with the usual sup-norm |·|_E. B_b(H) is the Banach space of bounded Borel functions ϕ : H → R, endowed with the sup-norm

$$ \|\varphi\|_0 = \sup_{x\in H} |\varphi(x)|. $$

C_b(H) is the subspace of uniformly continuous functions. Moreover, Lip_b(H) denotes the subspace of functions ϕ such that

$$ [\varphi]_{\mathrm{Lip}} = \sup_{\substack{x,y\in H\\ x\neq y}} \frac{|\varphi(x)-\varphi(y)|}{|x-y|_H} < \infty. $$

Lip_b(H) is a Banach space endowed with the norm ‖ϕ‖_Lip = ‖ϕ‖₀ + [ϕ]_Lip. For each k ∈ N, we denote by C_b^k(H) the Banach space of k-times Fréchet differentiable functions, endowed with the norm

$$ \|\varphi\|_k = \|\varphi\|_0 + \sum_{h=1}^{k}\,\sup_{x\in H} |D^h\varphi(x)|_{\mathcal{L}^h(H)}. $$

(Here and in what follows, L^h(H) = L(H; L^{h−1}(H)), h ≥ 1, and L^0(H) = R.) Finally, for any k ∈ N and θ ∈ (0,1), we denote by C_b^{k+θ}(H) the subspace of all functions ϕ ∈ C_b^k(H) such that

$$ [\varphi]_{k+\theta} = \sup_{\substack{x,y\in H\\ x\neq y}} \frac{\big|D^k\varphi(x) - D^k\varphi(y)\big|_{\mathcal{L}^k(H)}}{|x-y|_H^\theta} < \infty. $$


C_b^{k+θ}(H) is a Banach space endowed with the norm ‖ϕ‖_{k+θ} = ‖ϕ‖_k + [ϕ]_{k+θ}.

In what follows we shall assume that for any ξ ∈ O and σ = (σ₁,…,σ_r) ∈ R^r

$$ f_k(\xi,\sigma_1,\ldots,\sigma_r) = g_k(\xi,\sigma_k) + h_k(\xi,\sigma_1,\ldots,\sigma_r),\qquad k = 1,\ldots,r. $$

The functions g_k : O × R → R and h_k : O × R^r → R are continuous. Moreover, they are assumed to fulfill the following conditions.

Hypothesis 1.
1. For any ξ ∈ O, the function h_k(ξ,·) is of class C² and has bounded derivatives, uniformly with respect to ξ ∈ O. Moreover, the mappings D_σ^j h_k : O × R^r → R are continuous for j = 1, 2.
2. For any ξ ∈ O, the function g_k(ξ,·) is of class C², and there exists m ≥ 0 such that

$$ \sup_{\xi\in O}\,\sup_{t\in\mathbb R}\, \frac{|D_t^j g_k(\xi,t)|}{1 + |t|^{2m+1-j}} < \infty. $$

Moreover, the mappings D_t^j g_k : O × R → R are continuous for j = 1, 2.
3. If m ≥ 1, there exist a > 0 and c ∈ R such that

$$ (2.1)\qquad \sup_{\xi\in O} D_t g_k(\xi,t) \le -a\,t^{2m} + c,\qquad t \in \mathbb R. $$
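As a quick check of how (2.1) constrains the leading behavior of g_k (our computation, not taken from the paper): if g(t) = −c t^{2m+1} + p(t) with c > 0 and deg p ≤ 2m, then

$$ D_t g(t) = -(2m+1)\,c\,t^{2m} + p'(t) \le -a\,t^{2m} + c' \qquad\text{for every } 0 < a < (2m+1)\,c $$

and a suitable c', since |p'(t)| ≤ ε t^{2m} + C_ε for every ε > 0. A positive leading coefficient would instead make D_t g(t) → +∞ as |t| → ∞, violating (2.1).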

Notice that if c_k and c_{kj} are continuous functions from O into R for k = 1,…,r and j = 1,…,2m, and

$$ \inf_{\xi\in O} c_k(\xi) > 0, $$

then, for any k = 1,…,r, the function

$$ g_k(\xi,t) = -c_k(\xi)\,t^{2m+1} + \sum_{j=1}^{2m} c_{kj}(\xi)\,t^j $$

fulfills the conditions of Hypothesis 1 (note that the leading coefficient must be negative for the dissipativity condition (2.1) to hold). Now we define the operator F by setting, for any function x : O → R^r,

$$ F(x)(\xi) = f(\xi, x(\xi)),\qquad \xi\in O. $$

If we set p = 2m+2 and q = (2m+2)/(2m+1), then F is continuous from L^p(O; R^r) into L^q(O; R^r), and if m ≥ 1, it is twice Fréchet differentiable. In particular, from (2.1) and the mean-value theorem, for x, y ∈ L^p(O; R^r) it holds that

$$ (2.2)\qquad \langle F(x) - F(y),\, x - y\rangle_H \le c\,|x-y|_H^2. $$

In the same way, we have that the functional F is twice differentiable and dissipative from E into itself. (For more details on the properties of F we refer to [7] and [9].) Notice that due to the growth conditions on f , the functional F is not even well defined in H.


As in [9], we can construct a sequence of functionals {F_α}_α which are Lipschitz continuous both in H and in E and such that, for any x, y ∈ H,

$$ (2.3)\qquad \langle F_\alpha(x) - F_\alpha(y),\, x - y\rangle_H \le c\,|x-y|_H^2 $$

for a suitable constant c independent of α > 0. Moreover, they are twice Fréchet differentiable in E, and for each j ≤ 2 and R > 0

$$ \lim_{\alpha\to 0}\,\sup_{|x|_E\le R} |D^j F_\alpha(x) - D^j F(x)|_{\mathcal{L}^j(E)} = 0. $$

Concerning the differential operator A = (A₁,…,A_r), we assume that for any k = 1,…,r

$$ A_k(\xi,D) = \sum_{i,j=1}^{d} a_{ij}^k(\xi)\,\frac{\partial^2}{\partial\xi_i\,\partial\xi_j} + \sum_{i=1}^{d} b_i^k(\xi)\,\frac{\partial}{\partial\xi_i},\qquad \xi\in O. $$

The coefficients a_{ij}^k and b_i^k are of class C¹(O), and for any ξ ∈ O the matrix [a_{ij}^k(ξ)] is symmetric and strictly positive, uniformly with respect to ξ ∈ O. The boundary operators B_k are given by

$$ B_k(\xi,D) = I \qquad\text{or}\qquad B_k(\xi,D) = \sum_{i,j=1}^{d} a_{ij}^k(\xi)\,\nu_j(\xi)\,\frac{\partial}{\partial\xi_i},\qquad \xi\in \partial O, $$

where ν is the exterior normal to the boundary of O. We denote by A the realization in H of the differential operator A equipped with the boundary conditions B. The unbounded operator A : D(A) ⊂ H → H generates an analytic semigroup e^{tA}, which it is not restrictive to assume of negative type. Thus we have

$$ (2.4)\qquad \langle Ax, x\rangle_H \le 0,\qquad x\in D(A). $$

Notice that each L^p(O; R^r), p ∈ [1,+∞], is invariant for the semigroup e^{tA}, and if p > 1, then e^{tA} is analytic in L^p(O; R^r). Moreover, E is invariant for e^{tA} as well, and e^{tA} generates an analytic semigroup in E which is not strongly continuous. (For the proofs of these facts we refer to [15] and [27].) Now, for any k = 1,…,r we define

$$ G_k(\xi,D) = \sum_{i=1}^{d}\Big( b_i^k(\xi) - \sum_{j=1}^{d}\frac{\partial a_{ij}^k}{\partial\xi_j}(\xi)\Big)\,\frac{\partial}{\partial\xi_i},\qquad \xi\in O, $$

and by difference we set C_k = A_k − G_k. The realization C of the operator C = (C₁,…,C_r) with the boundary conditions B generates in H a self-adjoint analytic semigroup e^{tC}. In what follows we denote by Q the bounded linear operator of components Q₁,…,Q_r.

Hypothesis 2.
1. There exists a complete orthonormal basis {e_k} in H which diagonalizes C and such that sup_{k∈N} |e_k|_E < ∞. The corresponding set of eigenvalues is denoted by {−α_k}.


2. The bounded linear operator Q : H → H is nonnegative and diagonal with respect to the complete orthonormal basis {e_k} which diagonalizes C. Moreover, if {λ_k} is the corresponding set of eigenvalues, we have

$$ \sum_{k=1}^{\infty} \frac{\lambda_k^2}{\alpha_k^{1-\gamma}} < +\infty $$

for some γ > 0.
3. There exists ϱ < 1 such that

$$ D\big((-C)^{\varrho/2}\big) \subset D(Q^{-1}). $$

If the operator A with the boundary conditions B is smooth enough, then α_k ∼ k^{2/d}. Thus, if we assume that λ_k ∼ α_k^{−ρ}, when d ≤ 3 it is possible to find some ρ such that the conditions of Hypothesis 2 are verified. (For more details see [7] and [8].) In what follows we shall denote by P_n the projection operator of H onto H_n, the subspace generated by the eigenfunctions {e₁,…,e_n}. Then for any x ∈ H we define A_n x = P_n A P_n x and F_{α,n}(x) = P_n(F_α(P_n x)). It is immediate to check that there exists a constant c independent of α > 0 and n ∈ N such that

$$ (2.5)\qquad \langle F_{\alpha,n}(x) - F_{\alpha,n}(y),\, x - y\rangle_H \le c\,|x-y|_H^2. $$
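To see concretely why d ≤ 3 leaves room for such a ρ, take α_k ∼ k^{2/d} and λ_k ∼ α_k^{−ρ} (a back-of-the-envelope computation of ours, not from the paper). Hypothesis 2(2) asks

$$ \sum_{k=1}^{\infty} \frac{\lambda_k^2}{\alpha_k^{1-\gamma}} \sim \sum_{k=1}^{\infty} k^{-\frac{2}{d}(2\rho+1-\gamma)} < \infty \;\Longleftrightarrow\; \rho > \frac d4 - \frac{1-\gamma}{2}, $$

while Hypothesis 2(3) asks roughly that λ_k^{-1} α_k^{-ϱ/2} stay bounded, i.e. ρ ≤ ϱ/2 < 1/2. The window (d/4 − (1−γ)/2, 1/2) is nonempty for small γ exactly when d/4 < 1, i.e. d ≤ 3.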

Next, let {w_k(t)} be a sequence of mutually independent real-valued Brownian motions defined on a stochastic basis (Ω, F, F_t, P). The cylindrical Wiener process w(t) is formally defined as

$$ w(t) = \sum_{k=1}^{\infty} e_k\, w_k(t), $$

where {e_k} is the orthonormal basis of H introduced in Hypothesis 2(1). Under Hypotheses 2(1) and 2(2) it is possible to show that the linear problem associated with the system (1.1),

$$ (2.6)\qquad dz(t) = Az(t)\,dt + Q\,dw(t),\qquad z(0) = 0, $$

admits a unique solution w^A(t), which is the mean-square Gaussian process with values in H given by

$$ w^A(t) = \int_0^t e^{(t-s)A}\,Q\,dw(s). $$

As shown, for example, in [14], the process w^A belongs to C([0,+∞) × O), P-almost surely (a.s.), and for any p ≥ 1 and T > 0 it holds that

$$ (2.7)\qquad \mathbb{E}\sup_{t\in[0,T]} |w^A(t)|_E^p < \infty. $$
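The eigenfunction expansion makes w^A easy to simulate mode by mode: if (for illustration only) A is assumed diagonal on {e_k} with eigenvalues −α_k and Q diagonal with eigenvalues λ_k, each coordinate of w^A is an independent Ornstein–Uhlenbeck process. The following sketch is ours, with all numerical choices illustrative, and advances each mode by its exact Gaussian transition:

```python
import numpy as np

def stochastic_convolution(alpha, lam, T, n_steps, rng):
    """Simulate one path of the first len(alpha) modes of
    w_A(t) = int_0^t e^{(t-s)A} Q dw(s), assuming A e_k = -alpha_k e_k
    and Q e_k = lam_k e_k. Mode k solves dX_k = -alpha_k X_k dt + lam_k dw_k,
    X_k(0) = 0, advanced here by its exact one-step Gaussian transition."""
    alpha = np.asarray(alpha, dtype=float)
    lam = np.asarray(lam, dtype=float)
    dt = T / n_steps
    decay = np.exp(-alpha * dt)
    # Exact one-step noise variance: lam^2 (1 - e^{-2 alpha dt}) / (2 alpha).
    step_var = lam**2 * (1.0 - np.exp(-2.0 * alpha * dt)) / (2.0 * alpha)
    x = np.zeros((n_steps + 1, alpha.size))
    for i in range(n_steps):
        x[i + 1] = decay * x[i] + np.sqrt(step_var) * rng.standard_normal(alpha.size)
    return x

# Spectral data mimicking Hypothesis 2 in d = 3: alpha_k ~ k^{2/d}, lam_k ~ alpha_k^{-rho}.
d, rho = 3, 0.3
k = np.arange(1, 51)
alpha = k ** (2.0 / d)
lam = alpha ** (-rho)
paths = stochastic_convolution(alpha, lam, T=1.0, n_steps=200,
                               rng=np.random.default_rng(0))
```

Summing x[i, k]·e_k(ξ) over the retained modes gives a spatial sample of w^A(t_i); using the exact transition avoids the stability constraint an explicit Euler step would impose for large α_k.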

3. The transition semigroup. By using the notations introduced in the previous section, the system (1.1) can be rewritten as

$$ (3.1)\qquad dy(t) = \big[\,Ay(t) + F(y(t)) + z(t)\,\big]\,dt + Q\,dw(t),\qquad y(0) = x. $$

The following theorem is proved in [14] in the uncontrolled case. The proof in the controlled case is analogous; thus we omit it. (For more details we refer also to [7] and [8].)


Theorem 3.1. Assume Hypotheses 1 and 2.
1. For any x ∈ E and for any adapted process z ∈ L²(Ω; L^p(0,+∞;H)) with p > 4/(4−d), there exists a unique mild solution y(·;x,z) for the problem (3.1) which belongs to L²(Ω; C((0,T]; E) ∩ L^∞(0,T;E)) for any T > 0. This means that

$$ (3.2)\qquad y(t;x,z) = e^{tA}x + \int_0^t e^{(t-s)A}\,\big[F(y(s;x,z)) + z(s)\big]\,ds + w^A(t), $$

where w^A(t) is the solution of the linear system (2.6). Moreover, it holds that

$$ (3.3)\qquad |y(t;x,z)|_E \le c(t)\Big( |x|_E + |z|^{2m+1}_{L^p(0,+\infty;H)} + \sup_{s\in[0,t]} |w^A(s)|_E^{2m+1}\Big), $$

P-a.s., for a suitable continuous increasing function c(t).
2. For any x ∈ H and for any adapted process z ∈ L²(Ω; L²(0,+∞;H)), there exists a unique generalized solution y(·;x,z) ∈ L²(Ω; C([0,+∞);H)) for the problem (3.1). This means that for any sequence {z_n} ⊂ L²(Ω; L^p(0,+∞;H)) converging to z in L²(Ω; L²(0,+∞;H)) and for any sequence {x_n} ⊂ E converging to x in H, the corresponding sequence of mild solutions {y(·;x_n,z_n)} converges to the process y(·;x,z) in C([0,T];H), P-a.s., for any fixed T > 0. Moreover, it holds that

$$ |y(t;x,z)|_H \le c(t)\Big( |x|_H + |z|^{2m+1}_{L^2(0,+\infty;H)} + \sup_{s\in[0,t]} |w^A(s)|_E^{2m+1}\Big), $$

P-a.s., for a suitable continuous increasing function c(t).
3. The generalized solution y(·;x,z) belongs to L^{2m+2}(0,+∞; L^{2m+2}(O; R^r)), P-a.s., and fulfills the integral equation (3.2).
4. For any x₁, x₂ ∈ H and z₁, z₂ ∈ L²(Ω; L²(0,+∞;H)) it holds that

$$ (3.4)\qquad |y(t;x_1,z_1) - y(t;x_2,z_2)|_H \le c(t)\big( |x_1-x_2|_H + |z_1-z_2|_{L^2(0,t;H)}\big), $$

P-a.s., for a suitable continuous increasing function c(t).

Next, for any α > 0, we introduce the approximating problem

$$ (3.5)\qquad dy(t) = \big[\,Ay(t) + F_\alpha(y(t)) + z(t)\,\big]\,dt + Q\,dw(t),\qquad y(0) = x. $$

If x ∈ H and z ∈ L²(Ω; L²(0,+∞;H)), the system (3.5) admits a unique mild solution y_α(·;x,z) ∈ L²(Ω; C([0,+∞);H)). If x ∈ E and z ∈ L²(Ω; L^p(0,+∞;H)) with p > 4/(4−d), then y_α(·;x,z) ∈ L²(Ω; C((0,T];E) ∩ L^∞(0,T;E)) for any T > 0. Moreover, an estimate analogous to (3.3) holds, uniformly with respect to α > 0. Namely, there exists an increasing continuous function c(t) independent of α such that

$$ (3.6)\qquad |y_\alpha(t;x,z)|_E \le c(t)\Big( |x|_E + |z|^{2m+1}_{L^p(0,+\infty;H)} + \sup_{s\in[0,t]} |w^A(s)|_E^{2m+1}\Big), $$

P-a.s. The following approximation result has been proved already in [9].

Proposition 3.2. Under Hypotheses 1 and 2, for any q ≥ 1 there exists p ≥ 1 such that if x ∈ E and z ∈ L^p(Ω; L^∞(0,+∞;H)), then it holds that

$$ (3.7)\qquad \lim_{\alpha\to 0}\, \mathbb{E}\,|y(t;x,z) - y_\alpha(t;x,z)|_E^q = 0,\qquad \text{P-a.s.}, $$


uniformly with respect to (t,x) in bounded sets of [0,+∞) × E and z in the set

$$ (3.8)\qquad M_R^2 = \Big\{ z \in L^2(\Omega; L^2(0,+\infty;H))\ :\ \sup_{t\ge 0} |z(t)|_H \le R,\ \text{P-a.s.}\Big\} $$

for any R ≥ 0. For any α > 0 and n ∈ N, we denote by y_{α,n}(·;x,z) the unique strong solution in L²(Ω; C([0,+∞);H)) of the approximating problem

$$ (3.9)\qquad dy(t) = \big(A_n y(t) + F_{\alpha,n}(y(t)) + P_n z(t)\big)\,dt + Q_n\,dw(t),\qquad y(0) = P_n x, $$

with x ∈ H and z ∈ L²(Ω; L²(0,+∞;H)) adapted. In [9, Lemma 3.4] we have shown that for any fixed R, T > 0

$$ (3.10)\qquad \lim_{n\to+\infty}\,\sup_{|x|_H\le R} |y_{\alpha,n}(\cdot;x,z) - y_\alpha(\cdot;x,z)|_{L^2(\Omega;\,C([0,T];H))} = 0. $$

Moreover, we have

$$ (3.11)\qquad |y_{\alpha,n}(t;x,z)|_H \le c(t)\Big( |x|_H + |z|^{2m+1}_{L^2(0,+\infty;H)} + \sup_{s\in[0,t]} |w^A(s)|_E^{2m+1}\Big),\qquad\text{P-a.s.} $$

In what follows we shall denote by y(t;x) the solution of (3.1) with z = 0. In [7, Theorem 7.4] we have shown that if f(ξ,·) is k-times differentiable, then for any t ≥ 0 the mapping

$$ E \to L^2(\Omega; E),\qquad x \mapsto y(t;x), $$

is k-times Fréchet differentiable. In particular, the first derivative Dy(t;x)h is the unique solution of the linearized problem

$$ \frac{dv}{dt}(t) = A v(t) + DF(y(t;x))\,v(t),\qquad v(0) = h, $$

and it holds that

$$ \sup_{x\in E} |Dy(t;x)h|_H \le e^{ct}\,|h|_H,\qquad \text{P-a.s.} $$

If x, h ∈ H, then, as shown in [8], the problem above admits a unique generalized solution v(t;x,h), which is not intended to be the mean-square derivative of y(t;x) in general. In [6] we have proved that, since F_α and F_{α,n} are Lipschitz continuous, y_α(t;x) and y_{α,n}(t;x) are twice mean-square differentiable with respect to x ∈ H along any direction h ∈ H. In addition, their derivatives belong to D((−A)^{1/2}) ⊂ D(Q^{−1}), and for any T > 0

$$ (3.12)\qquad \sup_{x\in H} |Dy_\alpha(\cdot;x)h|_{L^\infty(0,T;H)\cap L^2(0,T;D((-A)^{1/2}))} \le c_T\,|h|_H,\qquad\text{P-a.s.}, $$

$$ \phantom{(3.12)\qquad} \sup_{x\in H} |Dy_{\alpha,n}(\cdot;x)h|_{L^\infty(0,T;H)\cap L^2(0,T;D((-A)^{1/2}))} \le c_T\,|h|_H,\qquad\text{P-a.s.}, $$


for a constant c_T which is independent of α > 0 and n ∈ N. In [9, Lemma 4.1] we have also proved that

$$ \lim_{\alpha\to 0}\, \mathbb{E}\sup_{|h|_H\le 1} |Dy(\cdot;x)h - Dy_\alpha(\cdot;x)h|^2_{L^\infty(0,T;H)\cap L^2(0,T;D((-A)^{1/2}))} = 0, $$

uniformly with respect to x in bounded sets of E, and in [9, Lemma 4.2] we have proved that

$$ \lim_{n\to+\infty}\, \mathbb{E}\sup_{|h|_H\le 1} |Dy_\alpha(\cdot;x)h - Dy_{\alpha,n}(\cdot;x)h|^2_{L^\infty(0,T;H)\cap L^2(0,T;D((-A)^{1/2}))} = 0, $$

uniformly with respect to x in bounded sets of H. Next we define the transition semigroup P_t corresponding to the system (3.1) by setting, for any ϕ ∈ B_b(H) and x ∈ H,

$$ P_t\varphi(x) = \mathbb{E}\,\varphi(y(t;x)),\qquad t \ge 0. $$
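In finite dimensions the definition P_tϕ(x) = Eϕ(y(t;x)) can be realized directly by Monte Carlo. A minimal sketch of ours for a one-dimensional toy version of (3.1) with z = 0 and a dissipative cubic drift (all choices below are illustrative, not the system of the paper):

```python
import numpy as np

def euler_paths(x0, t, n_steps, n_paths, q, rng):
    """Euler-Maruyama paths of the toy SDE dy = (-y - y^3) dt + q dW, y(0) = x0."""
    dt = t / n_steps
    y = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        y = y + (-y - y**3) * dt + q * np.sqrt(dt) * rng.standard_normal(n_paths)
    return y

def P_t(phi, x0, t, q=1.0, n_steps=200, n_paths=20000, seed=0):
    """Monte Carlo estimate of the transition semigroup P_t phi(x0) = E phi(y(t;x0))."""
    rng = np.random.default_rng(seed)
    return phi(euler_paths(x0, t, n_steps, n_paths, q, rng)).mean()

val = P_t(np.cos, x0=1.0, t=0.5)
```

Since P_t is a contraction on C_b(H), the estimate val must lie within the bounds of ϕ; the smoothing properties (3.17), by contrast, are invisible at this crude level and require the derivative formula used below in the proof of Lemma 4.5.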

In an analogous way, we define the semigroups P_t^α and P_t^{α,n} associated, respectively, with the systems (3.5) and (3.9). Due to (3.7), for any ϕ ∈ C_b(H) and R > 0

$$ (3.13)\qquad \lim_{\alpha\to 0}\,\sup_{|x|_E\le R} |P_t^\alpha\varphi(x) - P_t\varphi(x)| = 0, $$

uniformly for t in bounded sets of [0,+∞). Moreover, due to (3.10) we have that

$$ (3.14)\qquad \lim_{n\to+\infty}\,\sup_{|x|_H\le R} |P_t^{\alpha,n}\varphi(x) - P_t^\alpha\varphi(x)| = 0, $$

uniformly for t in bounded sets of [0,+∞). It is important to notice that all the properties of the semigroup P_t which we are going to describe are fulfilled by the semigroups P_t^α and P_t^{α,n} as well. From (3.4) it easily follows that P_t maps C_b(H) into itself as a contraction. In general P_t is not strongly continuous in C_b(H). (See [4] for a counterexample even in finite dimension.) Nevertheless, as y(·;x) ∈ L²(Ω; C([0,+∞);H)) for any fixed x ∈ H, by the dominated convergence theorem we have that if ϕ ∈ C_b(H), then the mapping

$$ [0,+\infty) \to \mathbb R,\qquad t \mapsto P_t\varphi(x), $$

is continuous. Thus, by proceeding as in [4], we define the generator L of P_t as the unique closed operator L : D(L) ⊂ C_b(H) → C_b(H) such that

$$ R(\lambda, L)\varphi(x) = \int_0^{+\infty} e^{-\lambda t}\, P_t\varphi(x)\,dt,\qquad \lambda > 0, $$

for any fixed ϕ ∈ C_b(H) and x ∈ H. In a similar way we define the generators L_α and L_{α,n} corresponding, respectively, to the semigroups P_t^α and P_t^{α,n}. In [4] it is shown that for any ϕ ∈ D(L) and x ∈ H the mapping

$$ [0,+\infty) \to \mathbb R,\qquad t \mapsto P_t\varphi(x), $$

is differentiable and

$$ \frac{d}{dt}\,P_t\varphi(x) = L(P_t\varphi)(x) = P_t(L\varphi)(x). $$


The same holds for L_α and L_{α,n}. In particular, if ϕ ∈ C_b²(H), we have that P_s^{α,n}ϕ ∈ D(L_{α,n}) for any α > 0, n ∈ N, and s ≥ 0, and

$$ (3.15)\qquad L_{\alpha,n}\big(P_s^{\alpha,n}\varphi\big) = \mathcal{L}_{\alpha,n}\big(P_s^{\alpha,n}\varphi\big), $$

where the differential operator 𝓛_{α,n} is defined by

$$ (3.16)\qquad \mathcal{L}_{\alpha,n}\varphi(x) = \frac12\,\mathrm{Tr}\,\big[Q_n^2\, D^2\varphi(x)\big] + \langle A_n x + F_{\alpha,n}(x),\, D\varphi(x)\rangle_H,\qquad x\in H. $$

Actually, if we define ψ = λP_s^{α,n}ϕ − 𝓛_{α,n}(P_s^{α,n}ϕ) for some λ > 0, we have that

$$ R(\lambda, L_{\alpha,n})\psi(x) = \int_0^{+\infty} e^{-\lambda t}\,\big[\lambda P_{t+s}^{\alpha,n}\varphi(x) - P_t^{\alpha,n}\mathcal L_{\alpha,n}\big(P_s^{\alpha,n}\varphi\big)(x)\big]\,dt. $$

It is not difficult to prove that, in general, if ϕ is twice differentiable, then P_t^{α,n}(𝓛_{α,n}ϕ)(x) = 𝓛_{α,n}(P_t^{α,n}ϕ)(x). Thus, as P_s^{α,n}ϕ ∈ C_b²(H), from the Itô formula we have

$$ P_t^{\alpha,n}\mathcal L_{\alpha,n}\big(P_s^{\alpha,n}\varphi\big)(x) = \mathcal L_{\alpha,n}\big(P_{t+s}^{\alpha,n}\varphi\big)(x) = \frac{d}{dt}\big(P_{t+s}^{\alpha,n}\varphi(x)\big). $$

This allows us to conclude that

$$ R(\lambda, L_{\alpha,n})\psi(x) = -\int_0^{+\infty} \frac{d}{dt}\Big(e^{-\lambda t}\,P_{t+s}^{\alpha,n}\varphi(x)\Big)\,dt = P_s^{\alpha,n}\varphi(x), $$

so that P_s^{α,n}ϕ ∈ D(L_{α,n}) and (3.15) holds.

In [8] we have proved that the semigroup P_t has a smoothing effect. Namely, it maps B_b(H) into C_b¹(H) for any t > 0, and for i ≤ j = 0, 1 it holds that

$$ (3.17)\qquad \|P_t\varphi\|_j \le c\,(t\wedge 1)^{-\frac{(j-i)(1+\varrho)}{2}}\,\|\varphi\|_i, $$

where ϱ is the constant introduced in Hypothesis 2(3). As far as the semigroups P_t^α and P_t^{α,n} are concerned, in [6] it is proved that they map B_b(H) into C_b²(H) for any t > 0, and

$$ (3.18)\qquad \|P_t^\alpha\varphi\|_j + \|P_t^{\alpha,n}\varphi\|_j \le c_\alpha\,(t\wedge 1)^{-\frac{(j-i)(1+\varrho)}{2}}\,\|\varphi\|_i $$

for any i ≤ j ≤ 2, for some constant c_α independent of n. Moreover, if i ≤ j ≤ 1, the constant c_α is independent of α as well. We conclude, recalling that in [9] it has been proved that if ϕ ∈ C_b(H), then for any R > 0

$$ (3.19)\qquad \lim_{\alpha\to 0}\,\sup_{|x|_E\le R} |D(P_t^\alpha\varphi)(x) - D(P_t\varphi)(x)|_H = 0, $$

uniformly for t in bounded sets of [δ,+∞) with δ > 0. Moreover, it has been proved that

$$ (3.20)\qquad \lim_{n\to+\infty}\,\sup_{|x|_H\le R} |D(P_t^{\alpha,n}\varphi)(x) - D(P_t^\alpha\varphi)(x)|_H = 0, $$

uniformly for t in bounded sets of [δ,+∞) with δ > 0.


4. The Hamilton–Jacobi equation. We are here concerned with the stationary Hamilton–Jacobi equation

$$ (4.1)\qquad \lambda\varphi(x) - L\varphi(x) + K(D\varphi(x)) = g(x),\qquad x\in H. $$

Our aim is to show that such an equation admits a unique solution ϕ(λ,g) for any λ > 0 and g ∈ C_b(H). To this purpose we first prove a regularity result for the elements of D(L).

Lemma 4.1. Assume Hypotheses 1 and 2. Then D(L) ⊂ C_b¹(H), and for any λ > 0 and g ∈ C_b(H) it holds that

$$ (4.2)\qquad \|R(\lambda,L)g\|_1 \le \rho(\lambda)\,\|g\|_0, $$

where ρ(λ) = c(λ^{−(1−ϱ)/2} + λ^{−1}).

Proof. We recall that if ϕ ∈ C_b(H), then P_tϕ ∈ C_b¹(H) for any t > 0. Thus for any x, h ∈ H and λ > 0 we have

$$ R(\lambda,L)g(x+h) - R(\lambda,L)g(x) = \int_0^{+\infty} e^{-\lambda t}\,\big(P_t g(x+h) - P_t g(x)\big)\,dt = \int_0^{+\infty} e^{-\lambda t}\,\langle D(P_t g)(x),\, h\rangle_H\,dt + E(x,h), $$

where

$$ E(x,h) = \int_0^{+\infty} e^{-\lambda t} \int_0^1 \langle D(P_t g)(x+\theta h) - D(P_t g)(x),\, h\rangle_H\,d\theta\,dt. $$

Due to (3.17) we have

$$ \Big| \int_0^{+\infty} e^{-\lambda t}\,\langle D(P_t g)(x),\, h\rangle_H\,dt \Big| \le c \int_0^{+\infty} e^{-\lambda t}\,(t\wedge 1)^{-\frac{1+\varrho}{2}}\,dt\; |h|_H\,\|g\|_0 = c\big(\lambda^{-\frac{1-\varrho}{2}} + \lambda^{-1}\big)\,|h|_H\,\|g\|_0. $$

Moreover, as D(P_t g) is continuous in H, by the dominated convergence theorem we easily have that

$$ \lim_{|h|_H\to 0} \frac{|E(x,h)|}{|h|_H} = 0. $$

This implies that R(λ,L)g ∈ C_b¹(H), and for any x, h ∈ H

$$ (4.3)\qquad \langle D(R(\lambda,L)g)(x),\, h\rangle_H = \int_0^{+\infty} e^{-\lambda t}\,\langle D(P_t g)(x),\, h\rangle_H\,dt, $$

so that the estimate (4.2) holds true.

Remark 4.2. Notice that due to (3.18) we can repeat the arguments used above, and we can show that both D(L_α) and D(L_{α,n}) are contained in C_b¹(H), and a formula analogous to (4.3) holds for the derivatives of R(λ,L_α)g and R(λ,L_{α,n})g when g ∈ C_b(H). In particular, it holds that

$$ (4.4)\qquad \|R(\lambda,L_\alpha)g\|_1 + \|R(\lambda,L_{\alpha,n})g\|_1 \le \rho(\lambda)\,\|g\|_0. $$
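The explicit form of ρ(λ) comes from splitting the Laplace-type integral above at t = 1 (a standard computation of ours; the constant Γ((1−ϱ)/2) is not from the paper):

$$ \int_0^{+\infty} e^{-\lambda t}\,(t\wedge 1)^{-\frac{1+\varrho}{2}}\,dt = \int_0^1 e^{-\lambda t}\,t^{-\frac{1+\varrho}{2}}\,dt + \frac{e^{-\lambda}}{\lambda} \le \Gamma\Big(\frac{1-\varrho}{2}\Big)\,\lambda^{-\frac{1-\varrho}{2}} + \lambda^{-1}, $$

where the first bound follows from the substitution u = λt, and ϱ < 1 guarantees integrability at t = 0.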


Moreover, as

$$ \|P_t^\alpha\varphi\|_i + \|P_t^{\alpha,n}\varphi\|_i \le c_\alpha\,(t\wedge 1)^{-\frac{(i-j)(1+\varrho)}{2}}\,\|\varphi\|_j,\qquad j \le i \le 2, $$

for a constant c_α independent of n ∈ N, by interpolation we have that for any θ₁, θ₂ ∈ [0,1]

$$ \|P_t^\alpha\varphi\|_{1+\theta_1} + \|P_t^{\alpha,n}\varphi\|_{1+\theta_1} \le c_\alpha\,(t\wedge 1)^{-\frac{(\theta_1-\theta_2+1)(1+\varrho)}{2}}\,\|\varphi\|_{\theta_2}. $$

By proceeding as in the proof of the previous lemma, this implies that if ϕ ∈ C_b^{θ₂}(H), then R(λ,L_α)ϕ and R(λ,L_{α,n})ϕ are in C_b^{1+θ₁}(H) for any θ₁ < θ₂ + (1−ϱ)/(1+ϱ), and

$$ (4.5)\qquad \|R(\lambda,L_\alpha)\varphi\|_{1+\theta_1} + \|R(\lambda,L_{\alpha,n})\varphi\|_{1+\theta_1} \le c_\alpha\Big(\lambda^{-1+\frac{(\theta_1-\theta_2+1)(1+\varrho)}{2}} + \lambda^{-1}\Big)\,\|\varphi\|_{\theta_2}. $$

In particular, we have that D(L_α) and D(L_{α,n}) are contained in C_b^{1+θ}(H) for any θ < (1−ϱ)/(1+ϱ).

4.1. Lipschitz hamiltonian K. In the proof of the existence and uniqueness of solutions for the problem (4.1) we proceed in several steps. First we assume the Lipschitz continuity of the hamiltonian K.

Hypothesis 3. The mapping K : H → R is Fréchet differentiable and Lipschitz continuous together with its derivative. Moreover, K(0) = 0.

Notice that the condition K(0) = 0 is not restrictive, as we can substitute g by g − K(0). By using Lemma 4.1 we get the following result.

Proposition 4.3. Under Hypotheses 1, 2, and 3, there exists λ₀ > 0 such that (4.1) admits a unique solution ϕ(λ,g) ∈ C_b¹(H) for any λ > λ₀ and for any g ∈ C_b(H).

Proof. The equation (4.1) is equivalent to the equation

$$ \varphi = R(\lambda,L)\,\big(g - K(D\varphi)\big) = \Gamma(\lambda,g)(\varphi). $$

Due to Lemma 4.1, if ϕ ∈ C_b¹(H) and g ∈ C_b(H), then Γ(λ,g)(ϕ) ∈ C_b¹(H). Thus, if we show that for some λ₀ > 0 the mapping Γ(λ,g) is a contraction in C_b¹(H) for any λ > λ₀, our thesis follows. As K is Lipschitz continuous, for any ϕ₁, ϕ₂ ∈ C_b¹(H) we have

$$ \|R(\lambda,L)\big(K(D\varphi_1) - K(D\varphi_2)\big)\|_1 \le c\,\rho(\lambda)\,\|\varphi_1 - \varphi_2\|_1. $$

Thus, if we choose λ₀ such that cρ(λ₀) = 1, we have that Γ(λ,g) is a contraction in C_b¹(H) for any λ > λ₀ (recall that ρ is decreasing). This implies that it admits a unique fixed point ϕ ∈ C_b¹(H), which is the unique solution of (4.1) in C_b¹(H).

Remark 4.4. By using (4.4) it is possible to prove that there exists λ₀ > 0 sufficiently large such that the mappings

$$ \Gamma_\alpha(\lambda,g)(\varphi) = R(\lambda,L_\alpha)\big(g - K(D\varphi)\big),\qquad \alpha > 0, $$

are contractions in C_b¹(H) for any λ > λ₀ and for any g ∈ C_b(H), and the approximating Hamilton–Jacobi equations

$$ (4.6)\qquad \lambda\varphi - L_\alpha\varphi + K(D\varphi) = g $$


admit a unique solution ϕ_α(λ,g) ∈ C_b¹(H). Moreover, as the function ρ(λ) in (4.4) does not depend on α > 0, the constant λ₀ does not depend on α either.

Lemma 4.5. Under Hypotheses 1, 2, and 3, for any λ > 0 and g ∈ C_b(H) we have

$$ (4.7)\qquad \lim_{\alpha\to 0}\,\sup_{|x|_E\le R}\, \big| D^j\big(\Gamma^k_\alpha(\lambda,g)(0) - \Gamma^k(\lambda,g)(0)\big)(x)\big|_{\mathcal L^j(H)} = 0,\qquad j = 0,1, $$

for any k ∈ N and R > 0.

Proof. We proceed by induction. For k = 1 the limit (4.7) is trivially verified. Assume that (4.7) holds for some k ≥ 1. We show that this implies that (4.7) holds for k + 1. We have

$$ D^j\big(\Gamma^{k+1}_\alpha(\lambda,g)(0) - \Gamma^{k+1}(\lambda,g)(0)\big) = D^j\Big[ R(\lambda,L_\alpha)\big(g - K\big(D(\Gamma^k_\alpha(\lambda,g)(0))\big)\big) - R(\lambda,L)\big(g - K\big(D(\Gamma^k(\lambda,g)(0))\big)\big)\Big]. $$

In general, if f ∈ C_b(H) and {f_α} is any bounded generalized sequence of C_b(H) such that for any R > 0

$$ (4.8)\qquad \lim_{\alpha\to 0}\,\sup_{|x|_E\le R} |f_\alpha(x) - f(x)| = 0, $$

then for any R > 0 and j = 0, 1 we have

$$ (4.9)\qquad \lim_{\alpha\to 0}\,\sup_{|x|_E\le R} \big|D^j\big(R(\lambda,L_\alpha)(f_\alpha - f)\big)(x)\big|_H = 0. $$

Indeed, as the formula (4.3) holds for the derivative of R(λ,L_α) as well, for any x ∈ H we have

$$ D^j\big(R(\lambda,L_\alpha)(f_\alpha - f)\big)(x) = \int_0^{+\infty} e^{-\lambda t}\, D^j\big(P_t^\alpha(f_\alpha - f)\big)(x)\,dt. $$

If x lies in a bounded set of E, due to (2.7) and (3.6) the solution y_α(t;x)(ω) lies in a bounded set of E for P-almost all ω ∈ Ω. Therefore, by (4.8), for any R > 0 this yields

$$ (4.10)\qquad \lim_{\alpha\to 0}\,\sup_{|x|_E\le R} |(f_\alpha - f)(y_\alpha(t;x))| = 0,\qquad\text{P-a.s.}, $$

and by applying the dominated convergence theorem we get (4.9) for j = 0. As proved in [5], for any t > 0 we have

$$ \langle D\big(P_t^\alpha(f_\alpha - f)\big)(x),\, h\rangle_H = \frac1t\, \mathbb{E}\,\Big[ (f_\alpha - f)(y_\alpha(t;x)) \int_0^t \big\langle Q^{-1} D y_\alpha(s;x)h,\, dw(s)\big\rangle_H \Big], $$

where Dy_α(t;x)h is the mean-square derivative of y_α(t;x) along the direction h ∈ H. Hence, thanks to (3.12), by interpolation we easily get

$$ \big|D\big(P_t^\alpha(f_\alpha - f)\big)(x)\big|_H \le c\,(t\wedge 1)^{-\frac{1+\varrho}{2}}\, \Big( \mathbb{E}\,|(f_\alpha - f)(y_\alpha(t;x))|^2 \Big)^{1/2}, $$

and, thanks to (4.10), this implies (4.9) for j = 1.


Thus, since from the inductive hypothesis and the Lipschitz continuity of K the sequence {K(D[Γ^k_α(λ,g)(0)])} and K(D[Γ^k(λ,g)(0)]) fulfill (4.8), we can conclude that for any R > 0

$$ (4.11)\qquad \lim_{\alpha\to 0}\,\sup_{|x|_E\le R}\, \Big| D^j\Big[ R(\lambda,L_\alpha)\big( K(D[\Gamma^k_\alpha(\lambda,g)(0)]) - K(D[\Gamma^k(\lambda,g)(0)]) \big)\Big](x)\Big| = 0. $$

Now, if f ∈ Cb1 (H), for any x ∈ H we have Dj [(R(λ, Lα ) − R(λ, L)) (g − K(Df ))] (x)  =

+∞

0

e−λ t Dj [(Ptα − Pt )(g − K(Df ))] (x) dt.

Then, by using the estimates (3.17) and (3.18) and the limits (3.13) and (3.19), we get that lim sup Dj [(R(λ, Lα ) − R(λ, L)) (g − K(Df ))] (x) H = 0 α→0 |x| ≤R E

for any R > 0. As Γk (λ, L)(0) ∈ Cb1 (H), this implies that  lim sup Dj (R(λ, Lα ) − R(λ, L)) (g − Γk (λ, g)(0)) (x) H = 0, α→0 |x| ≤R E

and recalling (4.11) we can conclude that   k+1 lim sup Dj Γk+1 (λ, g)(0) (x) = 0. α (λ, g)(0) − Γ α→0 |x| ≤R E

By induction this yields (4.7).

In the next proposition we show that the solution ϕ(λ, g) of the problem (4.1) can be approximated by the solutions ϕ_α(λ, g) of the problems (4.6).

Proposition 4.6. Assume Hypotheses 1, 2, and 3. Then, if λ_0 is the constant introduced in Proposition 4.3, for any λ > λ_0 and g ∈ C_b(H) it holds that

(4.12)    lim_{α→0} sup_{|x|_E ≤ R} ‖D^j (ϕ(λ, g) − ϕ_α(λ, g))(x)‖_{L^j(H)} = 0,    j = 0, 1,

for any R > 0. In particular, for any λ > 0 we have

(4.13)    lim_{α→0} sup_{|x|_E ≤ R} ‖D^j [ϕ(λ, g) − ϕ_α(λ + λ_0, g + λ_0 ϕ(λ, g))](x)‖_{L^j(H)} = 0,    j = 0, 1.

Proof. Let us fix λ_0 as in Proposition 4.3. We have seen that ϕ = ϕ(λ, g) and ϕ_α = ϕ_α(λ, g) are, respectively, the unique fixed points of the mappings Γ(λ, g) and Γ_α(λ, g). Since for any λ > λ_0 and g ∈ C_b(H) the contraction constants of Γ_α(λ, g) are the same for all α > 0, for any ε > 0 there exists k_ε ∈ N such that

‖Γ^{k_ε}(λ, g)(0) − ϕ‖_1 + sup_{α>0} ‖Γ_α^{k_ε}(λ, g)(0) − ϕ_α‖_1 ≤ ε.

Thus for j = 0, 1 and x ∈ H we have

‖D^j (ϕ − ϕ_α)(x)‖ ≤ ε + ‖D^j [Γ^{k_ε}(λ, g)(0) − Γ_α^{k_ε}(λ, g)(0)](x)‖,


and due to (4.7) this implies (4.12). Now, since ϕ(λ, g) = ϕ(λ + λ_0, g + λ_0 ϕ(λ, g)), by using (4.12) we can conclude that (4.13) holds true.

Remark 4.7. For any α > 0 and n ∈ N, consider the problem

(4.14)    λϕ − L_{α,n} ϕ + K_n(Dϕ) = g_n,

where K_n(x) = K(P_n x) and g_n(x) = g(P_n x) for each n ∈ N and x ∈ H. By proceeding as for the problems (4.1) and (4.6), it is possible to show that there exists λ_0 large enough such that for any g ∈ C_b(H) and λ > λ_0 there exists a unique solution ϕ_{α,n}(λ, g) ∈ C_b^1(H). Such a solution is given by the unique fixed point of the mapping

Γ_{α,n}(λ, g)(ϕ) = R(λ, L_{α,n})(g_n − K_n(Dϕ)).

By arguments analogous to those used in Lemma 4.5, due to the estimates (3.11) and (3.18) and the limits (3.14) and (3.20), there exists λ_0 > 0 such that for λ > λ_0 and g ∈ C_b(H) it holds that

lim_{n→+∞} sup_{|x|_H ≤ R} ‖D^j [Γ_{α,n}^k(λ, g)(0) − Γ_α^k(λ, g)(0)](x)‖_{L^j(H)} = 0,    j = 0, 1,

for any α > 0, k ∈ N, and R > 0. Thus, by proceeding as in the proof of Proposition 4.6, due to (3.14) and (3.20) it is possible to verify that there exists λ_0 > 0 such that if λ > λ_0, then for any α > 0 and R > 0 it holds that

(4.15)    lim_{n→+∞} sup_{|x|_H ≤ R} ‖D^j [ϕ_α(λ, g) − ϕ_{α,n}(λ, g)](x)‖_{L^j(H)} = 0.
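The fixed-point scheme behind (4.1), (4.6), and (4.14), namely iterating Γ(λ, g)(ϕ) = R(λ, L)(g − K(Dϕ)) from the initial guess 0, can be caricatured in one scalar dimension. The sketch below is illustrative only: the resolvent and the gradient are dropped, `K` is a hypothetical Lipschitz nonlinearity standing in for the hamiltonian, and `gamma` plays the role of Γ(λ, g). It checks that the iterates from 0 converge geometrically to the unique fixed point, which is the mechanism exploited in Lemma 4.5 and Proposition 4.6.

```python
import math

def K(p):
    # Hypothetical Lipschitz "hamiltonian": |K'| <= 1 and K(0) = 0.
    return math.tanh(p)

def gamma(lam, g, phi):
    # Scalar stand-in for Gamma(lam, g)(phi) = R(lam, L)(g - K(D phi));
    # the resolvent is replaced by division by lam, so the map is a
    # contraction with constant 1/lam as soon as lam > 1.
    return (g - K(phi)) / lam

lam, g = 4.0, 2.0
phi, increments = 0.0, []
for _ in range(30):
    nxt = gamma(lam, g, phi)
    increments.append(abs(nxt - phi))
    phi = nxt

# The limit solves the scalar analogue lam*phi + K(phi) = g of (4.1).
residual = lam * phi + K(phi) - g
assert abs(residual) < 1e-10
# Successive increments contract at least by the factor 1/lam.
assert all(b <= a / lam + 1e-15 for a, b in zip(increments, increments[1:]))
```

The same geometric rate for every α is what makes the approximation uniform in Proposition 4.6: the contraction constant of the map, not the map itself, controls how many iterations are needed.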

In the next lemma we show that if the datum g belongs to C_b^1(H), then the approximating problems (4.6) and (4.14) have a solution of class C^2.

Lemma 4.8. Under Hypotheses 1, 2, and 3, if g ∈ C_b^1(H) and λ > 0, then the solutions ϕ_α(λ, g) and ϕ_{α,n}(λ, g) of the problems (4.6) and (4.14) belong to C_b^2(H). Moreover, for any R > 0 and λ > 0

(4.16)    sup_{‖g‖_1 ≤ R} ‖ϕ_α(λ, g)‖_2 < ∞.

Proof. We prove the lemma only for the problem (4.6), as the proof for the problem (4.14) is identical. As shown in Remark 4.2, D(L_α) ⊂ C_b^{1+θ}(H) for any θ < (1 − ε)/(1 + ε). Thus, if ϕ_α(λ, g) is the solution of the problem (4.6), we have that ϕ_α(λ, g) ∈ C_b^{1+θ_0}(H) for some 0 < θ_0 < (1 − ε)/(1 + ε). As we have

ϕ_α(λ, g) = R(λ, L_α)(g − K(Dϕ_α(λ, g))),

by using again Remark 4.2 it follows that ϕ_α(λ, g) ∈ C_b^{1+2θ_0}(H). Therefore, by repeating this argument a finite number of steps we get that ϕ_α(λ, g) ∈ C_b^2(H). The estimate (4.16) follows as above by applying (4.5) a finite number of times.

Due to (3.15), the previous lemma implies that if g ∈ C_b^1(H), then ϕ_{α,n} = ϕ_{α,n}(λ, g) is a strict solution of the problem (4.14); that is,

λ ϕ_{α,n} − L_{α,n} ϕ_{α,n} + K_n(Dϕ_{α,n}) = g_n,


where L_{α,n} is the differential operator introduced in (3.16).

Now, for any ϕ ∈ D(L) we define

(4.17)    N(ϕ) = Lϕ − K(Dϕ).

In the same way, for any α > 0 and n ∈ N we define N_α(ϕ) = L_α ϕ − K(Dϕ) and N_{α,n}(ϕ) = L_{α,n} ϕ − K_n(Dϕ).

Theorem 4.9. Under Hypotheses 1, 2, and 3, the operator N defined by (4.17) is m-dissipative. Thus for any λ > 0 and for any g ∈ C_b(H) there exists a unique solution ϕ(λ, g) ∈ D(L) for the problem (4.1).

Thanks to Proposition 4.3, in order to show that N is m-dissipative, it suffices to show that N is dissipative. To this purpose, we first give the following preliminary result.

Lemma 4.10. Assume that Hypotheses 1, 2, and 3 hold. Then there exists λ_0 > 0 such that for any λ > λ_0 and ϕ_1, ϕ_2 ∈ D(L_α)

‖ϕ_1 − ϕ_2‖_0 ≤ (1/λ) ‖λ(ϕ_1 − ϕ_2) − (N_α(ϕ_1) − N_α(ϕ_2))‖_0.

Proof. We set g_1 = λϕ_1 − N_α(ϕ_1) and g_2 = λϕ_2 − N_α(ϕ_2), and for any n ∈ N we set g_{1,n}(x) = g_1(P_n x) and g_{2,n}(x) = g_2(P_n x), x ∈ H. Then for λ large enough there exist ϕ_{1,n} and ϕ_{2,n} in D(L_{α,n}) such that

λ ϕ_{1,n} − N_{α,n}(ϕ_{1,n}) = g_{1,n},    λ ϕ_{2,n} − N_{α,n}(ϕ_{2,n}) = g_{2,n}.

If we show that

(4.18)    ‖ϕ_{1,n} − ϕ_{2,n}‖_0 ≤ (1/λ) ‖g_{1,n} − g_{2,n}‖_0,

we are done. Actually, for any x ∈ H this implies that

|ϕ_{1,n}(x) − ϕ_{2,n}(x)| ≤ (1/λ) ‖g_{1,n} − g_{2,n}‖_0 ≤ (1/λ) ‖g_1 − g_2‖_0,

and due to (4.15) we can take the limit as n → +∞, and we get

|ϕ_1(x) − ϕ_2(x)| ≤ (1/λ) ‖g_1 − g_2‖_0.

By taking the supremum for x ∈ H, we can conclude.

Thus in order to conclude the proof we have to show that the operator N_{α,n} fulfills (4.18). The operator L_{α,n} satisfies the same conditions as the operator L studied in [5]; thus, thanks to [5, Proposition 7.5],

D(L_{α,n}) = { ϕ ∈ ⋂_{p≥1} W_loc^{2,p}(R^n) ∩ C_b(R^n) ; L_{α,n} ϕ ∈ C_b(R^n) },

and on D(L_{α,n}) the abstract operator L_{α,n} coincides with the differential operator introduced in (3.16). Now we remark that

K_n(Dϕ_{1,n}(x)) − K_n(Dϕ_{2,n}(x)) = ⟨ ∫_0^1 DK_n(λ Dϕ_{1,n}(x) + (1 − λ) Dϕ_{2,n}(x)) dλ, Dϕ_{1,n}(x) − Dϕ_{2,n}(x) ⟩;


thus, if we set

U_{α,n}(x) = ∫_0^1 DK_n(λ Dϕ_{1,n}(x) + (1 − λ) Dϕ_{2,n}(x)) dλ,

we have

λ(ϕ_{1,n} − ϕ_{2,n})(x) − L_{α,n}(ϕ_{1,n} − ϕ_{2,n})(x) + ⟨U_{α,n}(x), D(ϕ_{1,n} − ϕ_{2,n})(x)⟩ = g_{1,n}(x) − g_{2,n}(x).

Since the function U_{α,n} is uniformly continuous, as ϕ_{1,n} and ϕ_{2,n} belong to C_b^1(H), the operator N_{α,n} defined by

N_{α,n} ψ(x) = L_{α,n} ψ(x) − ⟨U_{α,n}(x), Dψ(x)⟩

is of the same type as the operator L studied in [5]. Therefore, we can adapt the proof of [5, Lemma 7.4] to the present situation, and we obtain

‖ϕ_{1,n} − ϕ_{2,n}‖_0 ≤ (1/λ) ‖λ(ϕ_{1,n} − ϕ_{2,n}) − N_{α,n}(ϕ_{1,n} − ϕ_{2,n})‖_0 = (1/λ) ‖g_{1,n} − g_{2,n}‖_0.

Proof of Theorem 4.9. Let us fix λ > 0 and ϕ_1, ϕ_2 ∈ D(L), and let us define g_1 = λϕ_1 − N(ϕ_1) and g_2 = λϕ_2 − N(ϕ_2). If λ_0 is the maximum between the constant introduced in Remark 4.4 and the constant introduced in Lemma 4.10, for any α > 0 there exist ϕ_{1,α}, ϕ_{2,α} ∈ D(L_α) such that

(λ + λ_0) ϕ_{1,α} − N_α(ϕ_{1,α}) = g_1 + λ_0 ϕ_1,    (λ + λ_0) ϕ_{2,α} − N_α(ϕ_{2,α}) = g_2 + λ_0 ϕ_2,

and

‖ϕ_{1,α} − ϕ_{2,α}‖_0 ≤ (1/(λ + λ_0)) ‖(g_1 − g_2) + λ_0(ϕ_1 − ϕ_2)‖_0.

Thus for any x ∈ H we have

|ϕ_{1,α}(x) − ϕ_{2,α}(x)| ≤ (1/(λ + λ_0)) ‖g_1 − g_2‖_0 + (λ_0/(λ + λ_0)) ‖ϕ_1 − ϕ_2‖_0.

Now, if x ∈ E, due to (4.13) we can take the limit in the left-hand side as α goes to zero, and we get

|ϕ_1(x) − ϕ_2(x)| ≤ (1/(λ + λ_0)) ‖g_1 − g_2‖_0 + (λ_0/(λ + λ_0)) ‖ϕ_1 − ϕ_2‖_0.

As ϕ_1 and ϕ_2 are continuous in H, the estimate above holds also for x ∈ H, and by taking the supremum for x ∈ H it follows that

‖ϕ_1 − ϕ_2‖_0 − (λ_0/(λ + λ_0)) ‖ϕ_1 − ϕ_2‖_0 ≤ (1/(λ + λ_0)) ‖g_1 − g_2‖_0,

so that

‖ϕ_1 − ϕ_2‖_0 ≤ (1/λ) ‖g_1 − g_2‖_0.
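The inequality just obtained is the defining property of dissipativity, and it is instructive to see it in the simplest possible setting. The computation below is a finite-dimensional caricature, not the operator N of the theorem: it takes the hypothetical monotone nonlinearity N(ϕ) = −ϕ^3 on R, which is dissipative because (ϕ_1^3 − ϕ_2^3)(ϕ_1 − ϕ_2) ≥ 0, and checks the resolvent-type bound |ϕ_1 − ϕ_2| ≤ (1/λ)|λ(ϕ_1 − ϕ_2) − (N(ϕ_1) − N(ϕ_2))| on random pairs.

```python
import random

def N(phi):
    # Hypothetical dissipative operator on R: minus an increasing function.
    return -phi ** 3

random.seed(0)
lam = 0.7
worst = 0.0
for _ in range(1000):
    p1, p2 = random.uniform(-5, 5), random.uniform(-5, 5)
    lhs = abs(p1 - p2)
    rhs = abs(lam * (p1 - p2) - (N(p1) - N(p2))) / lam
    # lam*(p1 - p2) + (p1**3 - p2**3): both terms share the sign of p1 - p2,
    # so the absolute value is at least lam*|p1 - p2|.
    worst = max(worst, lhs - rhs)
assert worst <= 1e-12
```

The infinite-dimensional proof above follows the same pattern, with the elementary sign argument replaced by the finite-dimensional approximation through N_{α,n} and the maximum-principle estimate of [5, Lemma 7.4].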


4.2. Locally Lipschitz hamiltonian K. We first prove an a priori estimate which is crucial in order to prove the m-dissipativity of the operator N in the case of a locally Lipschitz hamiltonian K.

Proposition 4.11. Assume that Hypotheses 1, 2, and 3 hold. Then there exists some µ_0 > 0, which does not depend on K, such that if g ∈ C_b^1(H) and λ > µ_0, then

(4.19)    ‖Dϕ(λ, g)‖_0 ≤ ‖Dg‖_0.

Proof. Let us fix λ, µ > 0 and g ∈ C_b^1(H), and let us consider ϕ_α = ϕ_α(λ + µ, g + µ ϕ(λ, g)) and ϕ_{α,n} = ϕ_{α,n}(λ + µ, g + µ ϕ(λ, g)). Since g ∈ C_b^1(H), ϕ_{α,n} belongs to C_b^2(H), and it is a strict solution of the problem

(λ + µ) ϕ − N_{α,n}(ϕ) = g_n + µ ϕ_n(λ, g),

where ϕ_n(λ, g)(x) = ϕ(λ, g)(P_n x). The problem above can be rewritten as

(λ + µ) ϕ(x) − (1/2) Σ_{h=1}^n λ_h^2 D_h^2 ϕ(x) − Σ_{h,k=1}^n a_{hk} x_k D_h ϕ(x) − ⟨F_α(P_n x), Dϕ(x)⟩_H + K(P_n Dϕ(x)) = g(P_n x) + µ ϕ(λ, g)(P_n x),

where D_h ϕ(x) = ⟨Dϕ(x), e_h⟩_H and a_{hk} = ⟨A e_k, e_h⟩_H. By differentiating with respect to x_j, by setting ψ_h = D_h ϕ for h = 1, …, n, and by multiplying each side by ψ_j, we get

(λ + µ) ψ_j^2 − (1/2) Σ_{h=1}^n λ_h^2 ψ_j D_h^2 ψ_j − Σ_{h,k=1}^n a_{hk} x_k ψ_j D_h ψ_j − Σ_{h=1}^n a_{hj} ψ_h ψ_j − Σ_{h=1}^n ⟨F_{α,n}, e_h⟩ ψ_j ⟨Dψ_j, e_h⟩ − Σ_{h=1}^n ⟨DF_{α,n} e_j ψ_j, e_h⟩ ψ_h + Σ_{h=1}^n D_h K(P_n Dϕ_{α,n}) ψ_j D_h ψ_j = ⟨Dg_n, e_j⟩ ψ_j + µ ⟨Dϕ_n(λ, g), e_j⟩ ψ_j.

Then we sum up over j, and by setting z(x) = |Dϕ_{α,n}(x)|_H^2 and by taking into account that

D_h^2 ψ_j · ψ_j = (1/2) D_h^2(ψ_j^2) − (D_h ψ_j)^2,

we have

2(λ + µ) z(x) − (1/2) Tr[Q_n^2 D^2 z(x)] + Σ_{h,j=1}^n λ_h^2 (D_h ψ_j)^2(x) − ⟨A_n x, Dz(x)⟩ − 2 ⟨A_n Dϕ_{α,n}(x), Dϕ_{α,n}(x)⟩ + ⟨DK(Dϕ_{α,n}(x)), Dz(x)⟩ − ⟨F_α(P_n x), Dz(x)⟩ − 2 ⟨DF_α(P_n x) Dϕ_{α,n}(x), Dϕ_{α,n}(x)⟩ = 2 ⟨Dg(P_n x) + µ Dϕ(λ, g)(P_n x), Dϕ_{α,n}(x)⟩.


Therefore, by using (2.3) and (2.4) it follows that

2(λ + µ) z(x) − (1/2) Tr[Q_n^2 D^2 z(x)] − ⟨A_n x, Dz(x)⟩ − ⟨F_α(P_n x), Dz(x)⟩ + ⟨DK(Dϕ_{α,n}(x)), Dz(x)⟩
    ≤ 2 ⟨Dg(P_n x) + µ Dϕ(λ, g)(P_n x), Dϕ_{α,n}(x)⟩ + γ z(x)
    ≤ 2 (‖Dg‖_0 + µ ‖Dϕ(λ, g)‖_0) |Dϕ_{α,n}(x)|_H + γ z(x)

for a suitable constant γ ∈ R depending only on F and A. Now let us consider the equation

(4.20)    dy(t) = [A_n y(t) + F_{α,n}(y(t)) + U_{α,n}(y(t))] dt + Q_n dw(t),    y(0) = P_n x,

where U_{α,n}(x) = −DK(Dϕ_{α,n}(x)) for any x ∈ H. If g ∈ C_b^1(H), then ϕ_{α,n} ∈ C_b^2(H), and then the mapping U_{α,n} : H → H is Lipschitz continuous. This implies that there exists a unique strong solution y_{α,n}(·; x) ∈ L^2(Ω; C([0, +∞); H)) for (4.20). If we denote by R_t^{α,n} the corresponding transition semigroup, it is possible to show that for any λ > γ the solution of the problem

(2(λ + µ) − γ) ψ(x) − (1/2) Tr[Q_n^2 D^2 ψ(x)] − ⟨A_n x, Dψ(x)⟩ − ⟨F_α(P_n x), Dψ(x)⟩ + ⟨DK(Dϕ_{α,n}(x)), Dψ(x)⟩ = 2 (‖Dg‖_0 + µ ‖Dϕ(λ, g)‖_0) |Dϕ_{α,n}(x)|_H

is given by

ψ(x) = 2 (‖Dg‖_0 + µ ‖Dϕ(λ, g)‖_0) ∫_0^{+∞} e^{−(2(λ+µ)−γ) t} R_t^{α,n}(|Dϕ_{α,n}|_H)(x) dt.

(See [5] for a proof.) Thus by a comparison argument we have that

|Dϕ_{α,n}(x)|_H^2 ≤ (2 (‖Dg‖_0 + µ ‖Dϕ(λ, g)‖_0) / (2(λ + µ) − γ)) |Dϕ_{α,n}(x)|_H,

and if we take λ > 1 + γ/2 = µ_0, it follows that

|Dϕ_{α,n}(λ + µ, g + µ ϕ(λ, g))(x)|_H ≤ (1/(1 + µ)) (‖Dg‖_0 + µ ‖Dϕ(λ, g)‖_0).

Due to (4.13) and (4.15), if µ is large enough, we can take first the limit as n goes to infinity and then the limit as α goes to zero, and for any x ∈ E we get

|Dϕ(x)|_H ≤ (1/(1 + µ)) (‖Dg‖_0 + µ ‖Dϕ(λ, g)‖_0).

As ϕ(λ, g) ∈ C_b^1(H), the same estimate holds for x ∈ H, and then, by taking the supremum for x ∈ H, we get

‖Dϕ(λ, g)‖_0 ≤ (1/(1 + µ)) (‖Dg‖_0 + µ ‖Dϕ(λ, g)‖_0),

which immediately yields (4.19).


Remark 4.12. It is immediate to check that the proof of the previous proposition adapts to the problem (4.6). Thus there exists λ_0 > 0, which is clearly independent of α > 0, such that for any λ > λ_0 and g ∈ C_b^1(H)

‖Dϕ_α(λ, g)‖_0 ≤ ‖Dg‖_0.

From now on we shall assume that K fulfills the following assumption.

Hypothesis 4. The hamiltonian K : H → R is Fréchet differentiable and is locally Lipschitz continuous, together with its derivative. Moreover, K(0) = 0.

We want to show that under the hypotheses above the problem (4.1) admits a unique solution for any λ > µ_0 and g ∈ C_b^1(H). To this purpose, for any r > 0 let K_r be a Fréchet differentiable function such that

(4.21)    K_r(x) = K(x) if |x|_H ≤ r,    K_r(x) = K((r + 1) x / |x|_H) if |x|_H > r + 1.

It is immediate to check that K_r is Lipschitz continuous, together with its derivative, for each r > 0, and K_r(x) = K(x) if |x|_H ≤ r.

Theorem 4.13. Under Hypotheses 1, 2, and 4 there exists µ_0 > 0 such that for any λ > µ_0 and g ∈ C_b^1(H) there exists a unique solution ϕ(λ, g) ∈ D(L) for the problem (4.1).

Proof. For any r > 0 and g ∈ C_b^1(H) we define ϕ_r(λ, g) as the solution of the problem

λϕ − Lϕ + K_r(Dϕ) = g.

Due to Proposition 4.11 there exists µ_0 > 0 such that for any λ > µ_0

sup_{r>0} ‖Dϕ_r(λ, g)‖_0 ≤ ‖Dg‖_0.

Thus, if we fix r > ‖g‖_1, we have that K_r(Dϕ_r(λ, g)) = K(Dϕ_r(λ, g)), and then

λ ϕ_r(λ, g) − L ϕ_r(λ, g) + K(Dϕ_r(λ, g)) = g.

Remark 4.14. The operator N is dissipative. Actually, fix λ > 0 and ϕ_1, ϕ_2 ∈ D(L), and define g_i = λϕ_i − N(ϕ_i) for i = 1, 2. If we take r ≥ max(‖ϕ_1‖_1, ‖ϕ_2‖_1), we have

g_i = λϕ_i − Lϕ_i + K_r(Dϕ_i),    i = 1, 2.

Thus we can apply Theorem 4.9 to the hamiltonian K_r, and we get

‖ϕ_1 − ϕ_2‖_0 ≤ (1/λ) ‖g_1 − g_2‖_0,

so that N is dissipative. In particular, N is closable, and its closure N̄ is m-dissipative, so that for any λ > 0 and g ∈ C_b(H) there exists a unique solution to the problem

λϕ − N̄(ϕ) = g.
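The truncation K_r of (4.21) is easy to realize concretely in finite dimensions. The sketch below is illustrative only: it takes H = R^d, a hypothetical quadratic hamiltonian K(x) = |x|^2, and fills the annulus r < |x| ≤ r + 1 (which (4.21) leaves free) by radial projection onto the ball of radius r + 1. It then checks that K_r agrees with K on the ball of radius r, is globally bounded, and satisfies a global Lipschitz bound with constant 2(r + 1) on random samples.

```python
import math, random

def K(x):
    # Hypothetical quadratic hamiltonian (locally Lipschitz, K(0) = 0).
    return sum(c * c for c in x)

def K_r(x, r):
    # Radial truncation in the spirit of (4.21): K_r = K inside the ball of
    # radius r; outside radius r + 1 the argument is rescaled to length r + 1.
    # On the annulus we use radial projection, one admissible interpolation.
    nrm = math.sqrt(sum(c * c for c in x))
    if nrm <= r + 1:
        return K(x)
    scale = (r + 1) / nrm
    return K([scale * c for c in x])

random.seed(1)
r, d = 2.0, 3
pts = [[random.uniform(-6, 6) for _ in range(d)] for _ in range(500)]
lip = 2 * (r + 1)  # Lipschitz constant of K on the ball of radius r + 1
for x in pts:
    if math.sqrt(sum(c * c for c in x)) <= r:
        assert K_r(x, r) == K(x)                # agreement inside the ball
    assert K_r(x, r) <= (r + 1) ** 2 + 1e-9     # global bound after truncation
for x, y in zip(pts[::2], pts[1::2]):
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    assert abs(K_r(x, r) - K_r(y, r)) <= lip * dist + 1e-9
```

This is exactly how the a priori gradient bound of Proposition 4.11 is exploited: once ‖Dϕ_r‖_0 stays inside the region where K_r = K, the truncated problem and the original one have the same solution.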


5. Application to the control problem. Let k : H → (−∞, +∞] be a measurable mapping such that its Legendre transform

K(x) = sup { −⟨x, y⟩_H − k(y) ; y ∈ H },    x ∈ H,

fulfills Hypothesis 4. It is possible to show that if k is strictly convex and continuously Fréchet differentiable, if

lim_{|y|_H → +∞} k(y)/|y|_H = +∞,

and if Dk : H → H has a continuous inverse which is Lipschitz continuous on bounded subsets of H, then Hypothesis 4 is verified. An easy example is given by k(y) = |y|_H^2. For any λ > 0 and g ∈ C_b(H) we consider the cost functional

(5.1)    J(x; z) = E ∫_0^{+∞} e^{−λt} [g(y(t)) + k(z(t))] dt,

where y(t) = y(t; x, z) is the unique solution of the system (3.1). The corresponding value function is defined as

V(x) = inf { J(x; z) ; z ∈ L^2(Ω; L^2(0, +∞; H)) adapted }.

Our aim is to prove that if ϕ is the unique solution of the Hamilton–Jacobi equation (4.1), then V(x) = ϕ(x) for any x ∈ H. To this purpose we first prove the following preliminary result.

Lemma 5.1. Assume Hypotheses 1, 2, and 4. If ϕ = ϕ(λ, g) is the solution of the problem (4.1) in C_b^1(H) and if y(t) = y(t; x, z) is the solution of the controlled system (3.1), we have

(5.2)    J(x; z) = ϕ(x) + E ∫_0^{+∞} e^{−λt} [K(Dϕ(y(t))) + ⟨z(t), Dϕ(y(t))⟩_H + k(z(t))] dt.
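Before the proof, it may help to make the quadratic example k(y) = |y|_H^2 from the beginning of this section explicit: there the Legendre transform is K(x) = sup_y {−⟨x, y⟩ − |y|^2} = |x|^2/4, attained at y* = −x/2, so DK(x) = x/2 and the pointwise minimizer of the integrand in (5.2) is z(t) = −Dϕ(y(t))/2. A minimal numeric check of this closed form, illustrative only and with H replaced by R^d:

```python
import random

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def k(y):
    # Quadratic running cost k(y) = |y|^2.
    return inner(y, y)

def legendre_objective(x, y):
    # The function maximized in the Legendre transform K(x) = sup_y {-<x,y> - k(y)}.
    return -inner(x, y) - k(y)

random.seed(2)
d = 4
x = [random.uniform(-3, 3) for _ in range(d)]
y_star = [-c / 2 for c in x]           # analytic maximizer y* = -x/2
K_x = legendre_objective(x, y_star)    # = |x|^2 / 4

assert abs(K_x - inner(x, x) / 4) < 1e-12
# No random competitor beats the analytic maximizer (the objective is concave).
for _ in range(2000):
    y = [random.uniform(-5, 5) for _ in range(d)]
    assert legendre_objective(x, y) <= K_x + 1e-12
```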

Proof. If r ≥ ‖Dϕ(λ, g)‖_0 and if K_r is defined as in (4.21), then we have K(Dϕ(x)) = K_r(Dϕ(x)) for any x ∈ H, and the problem (4.1) can be rewritten as

λϕ − Lϕ + K_r(Dϕ) = g.

Now we fix a sequence {g_k} ⊂ C_b^1(H) converging to g in C_b(H), and for any k, n ∈ N and α > 0 we denote by ϕ_{α,n}^k = ϕ_{α,n}^k(λ + µ, g_k + µ ϕ(λ, g)) the solution of the problem

(5.3)    (λ + µ) ϕ − L_{α,n} ϕ + K_{r,n}(Dϕ) = g_{k,n} + µ ϕ_n(λ, g),

where K_{r,n}(x) = K_r(P_n x), g_{k,n}(x) = g_k(P_n x), and ϕ_n(λ, g)(x) = ϕ(λ, g)(P_n x), and µ is some positive constant to be determined later. Since g_k and ϕ(λ, g) are continuously differentiable, due to Lemma 4.8 we have that ϕ_{α,n}^k belongs to C_b^2(H). Then, since y_{α,n}(t; x, z) is a strong solution of the problem (3.9), we can apply the Itô formula to the mapping t ↦ e^{−λt} ϕ_{α,n}^k(y_{α,n}(t)), and we get

d[e^{−λt} ϕ_{α,n}^k(y_{α,n}(t))] = e^{−λt} ⟨Dϕ_{α,n}^k(y_{α,n}(t)), Q_n dw(t)⟩_H + e^{−λt} [((L_{α,n} − λ) ϕ_{α,n}^k)(y_{α,n}(t)) + ⟨P_n z(t), Dϕ_{α,n}^k(y_{α,n}(t))⟩_H] dt.


Recalling that ϕ_{α,n}^k is the solution of (5.3) and that (3.15) holds, we have

(L_{α,n} − λ) ϕ_{α,n}^k = µ ϕ_{α,n}^k + K_{r,n}(Dϕ_{α,n}^k) − g_{k,n} − µ ϕ_n(λ, g).

Then, by integrating with respect to t ∈ [0, T] and by taking the expectation, we get

e^{−λT} P_T^{α,n} ϕ_{α,n}^k(x) − ϕ_{α,n}^k(x) = µ E ∫_0^T e^{−λt} (ϕ_{α,n}^k − ϕ(λ, g))(y_{α,n}(t)) dt + E ∫_0^T e^{−λt} [K_r(Dϕ_{α,n}^k(y_{α,n}(t))) − g_k(y_{α,n}(t)) + ⟨z(t), Dϕ_{α,n}^k(y_{α,n}(t))⟩_H] dt.

Due to (3.10) and (4.15), if µ is large enough, we can take the limit as n goes to infinity, and we get

e^{−λT} P_T^α ϕ_α^k(x) − ϕ_α^k(x) = µ E ∫_0^T e^{−λt} (ϕ_α^k − ϕ(λ, g))(y_α(t)) dt + E ∫_0^T e^{−λt} [K_r(Dϕ_α^k(y_α(t))) − g_k(y_α(t)) + ⟨z(t), Dϕ_α^k(y_α(t))⟩_H] dt,

where ϕ_α^k = ϕ_α^k(λ + µ, g_k + µ ϕ(λ, g)) is the solution of the problem

(λ + µ) ϕ − L_α ϕ + K_r(Dϕ) = g_k + µ ϕ(λ, g).

By taking the limit as T goes to infinity, this yields

(5.4)    −ϕ_α^k(x) = µ E ∫_0^{+∞} e^{−λt} (ϕ_α^k − ϕ(λ, g))(y_α(t)) dt + E ∫_0^{+∞} e^{−λt} [K_r(Dϕ_α^k(y_α(t))) − g_k(y_α(t)) + ⟨z(t), Dϕ_α^k(y_α(t))⟩_H] dt.

We remark that for any h, k ∈ N and α > 0 we have

ϕ_α^k − ϕ_α^h = R(λ + µ, L_α)( g_k − g_h − (K_r(Dϕ_α^k) − K_r(Dϕ_α^h)) ),

and then, due to (4.2),

‖ϕ_α^k − ϕ_α^h‖_1 ≤ ρ(λ + µ) ( ‖g_k − g_h‖_0 + c_r ‖Dϕ_α^k − Dϕ_α^h‖_0 ),

where c_r is the Lipschitz constant of K_r. Therefore, if µ is sufficiently large, we have ρ(λ + µ) c_r < 1, so that

(5.5)    ‖ϕ_α^k − ϕ_α^h‖_1 ≤ (ρ(λ + µ) / (1 − ρ(λ + µ) c_r)) ‖g_k − g_h‖_0.

This means that the sequence {ϕ_α^k} converges to some ϕ_α in C_b^1(H). It is immediate to check that ϕ_α coincides with ϕ_α(λ + µ, g + µ ϕ(λ, g)), and then, by taking the limit as k goes to infinity in (5.4), due to the dominated convergence theorem we can conclude that

(5.6)    −ϕ_α(x) = µ E ∫_0^{+∞} e^{−λt} (ϕ_α − ϕ(λ, g))(y_α(t)) dt + E ∫_0^{+∞} e^{−λt} [K_r(Dϕ_α(y_α(t))) − g(y_α(t)) + ⟨z(t), Dϕ_α(y_α(t))⟩_H] dt.


If x ∈ E and z ∈ L^p(Ω; L^∞(0, +∞; H)) with p as in Proposition 3.2, we can use (3.7) and (4.13), and by taking the limit as α goes to zero we have

ϕ(x) + E ∫_0^{+∞} e^{−λt} [K(Dϕ(y(t))) − g(y(t)) + ⟨z(t), Dϕ(y(t))⟩_H] dt = 0.

Notice that here we have replaced K_r by K, as we fixed r ≥ ‖Dϕ(λ, g)‖_0. Since ϕ ∈ C_b^1(H) and y(t; x, z) depends continuously on x ∈ H and z ∈ L^2(Ω; L^2(0, +∞; H)), the same identity holds for x ∈ H and z ∈ L^2(Ω; L^2(0, +∞; H)). Then, recalling how J(x; z) is defined, if we rearrange all terms, we get (5.2).

Theorem 5.2. Assume that Hypotheses 1, 2, and 4 hold. Then there exists µ_0 such that for any λ > µ_0 and g ∈ C_b^1(H) the value function V corresponding to the cost functional (5.1) coincides with the solution ϕ(λ, g) of the Hamilton–Jacobi equation (4.1). Moreover, for any x ∈ E we have

V(x) = lim_{α→0} min { J_α(x; z) ; z ∈ L^2(Ω; L^2(0, +∞; H)) adapted },

where {J_α(x, z)} is a sequence of cost functionals which admit unique optimal controls and states and whose value functions V_α coincide with the solutions of the problems

(λ + λ_0) ϕ − L_α ϕ + K_r(Dϕ) = g + λ_0 ϕ(λ, g)

for some λ_0 > 0 large enough and r ≥ ‖Dϕ(λ, g)‖_0.

Proof. In Theorem 4.13 we have seen that, if λ > µ_0 and g ∈ C_b^1(H), there exists a unique solution ϕ(λ, g) ∈ C_b^1(H) for (4.1). Due to (5.2) and to the definition of K, we have that V(x) ≥ ϕ(λ, g)(x) for any x ∈ H. Now we prove the opposite inequality. To this purpose we proceed by approximation. We fix r ≥ ‖Dϕ(λ, g)‖_0, and for any α > 0 we define the cost functional

J_α(x; z) = E ∫_0^{+∞} e^{−λt} [g(y_α(t; x, z)) + k(z(t))] dt + λ_0 E ∫_0^{+∞} e^{−λt} (ϕ(λ, g) − ϕ_α)(y_α(t; x, z)) dt + E ∫_0^{+∞} e^{−λt} [K(Dϕ_α(y_α(t; x, z))) − K_r(Dϕ_α(y_α(t; x, z)))] dt,

where ϕ_α = ϕ_α(λ + λ_0, g + λ_0 ϕ(λ, g)) is the solution of the problem

(λ + λ_0) ϕ − L_α ϕ + K_r(Dϕ) = g + λ_0 ϕ(λ, g),

and λ_0 is the constant introduced in Proposition 4.3 corresponding to the hamiltonian K_r. We denote by V_α(x) the corresponding value function. Thanks to (5.6) we easily have that V_α(x) ≥ ϕ_α(x) for any x ∈ H. In fact, it is possible to show that V_α(x) = ϕ_α(x). Indeed, for each x ∈ H the function

H → R,    z ↦ −⟨z, Dϕ_α(x)⟩_H − k(z)

attains its maximum at z = −DK(Dϕ_α(x)). Then, if we show that the closed loop equation

(5.7)    dy(t) = [A y(t) + F_α(y(t)) − DK(Dϕ_α(y(t)))] dt + Q dw(t),    y(0) = x,


has a unique adapted solution y_α(t), we have that for the control z_α(t) = −DK(Dϕ_α(y_α(t))) it holds that J_α(x, z_α) = ϕ_α(x). This means that V_α(x) = ϕ_α(x), and there exists a unique optimal control and a unique optimal state for the minimizing problem corresponding to the cost functional J_α(x; z). If g ∈ C_b^1(H), then due to Lemma 4.8 ϕ_α ∈ C_b^2(H), so that the mapping

U_α : H → H,    x ↦ −DK(Dϕ_α(x))

is Lipschitz continuous. This implies that the closed loop equation admits a unique solution. For any α > 0 the optimal control relative to the functional J_α(x; z) is z_α(t) = −DK(Dϕ_α(y_α(t))). According to Proposition 4.11 we have

‖Dϕ_α‖_0 ≤ ‖Dg‖_0 + λ_0 ‖Dϕ(λ, g)‖_0,

and then, since DK is bounded on bounded sets, there exists R > 0 such that

sup_{α>0} sup_{t≥0} |z_α(t)|_H ≤ R,    P-a.s.

This implies that

V_α(x) = inf { J_α(x; z) : z ∈ M_R^2 },

where M_R^2 is the subset of admissible controls introduced in (3.8). Now, recalling Proposition 4.6, we have that for any x ∈ E

lim_{α→0} V_α(x) = lim_{α→0} ϕ_α(λ + λ_0, g + λ_0 ϕ(λ, g))(x) = ϕ(λ, g)(x).

Thus, if we show that

(5.8)    lim_{α→0} sup_{z ∈ M_R^2} |J_α(x; z) − J(x; z)| = 0,

it immediately follows that V(x) = ϕ(λ, g)(x) for x ∈ E. Due to Proposition 3.2, we have that

lim_{α→0} E |g(y_α(t; x, z)) − g(y(t; x, z))| = 0,

uniformly for (t, x) in bounded sets of [0, +∞) × E and z ∈ M_R^2. Hence, if we fix ε > 0 and M > 0 such that

∫_M^{+∞} e^{−λt} dt ≤ ε / (2 ‖g‖_0),

we have

| E ∫_0^{+∞} e^{−λt} [g(y_α(t; x, z)) − g(y(t; x, z))] dt | ≤ ε + ∫_0^M e^{−λt} E |g(y_α(t; x, z)) − g(y(t; x, z))| dt,


so that, due to the arbitrariness of ε > 0,

lim_{α→0} sup_{z ∈ M_R^2} | E ∫_0^{+∞} e^{−λt} [g(y_α(t; x, z)) − g(y(t; x, z))] dt | = 0.

Thanks to Lemma 4.8, we have that for j = 0, 1

(5.9)    lim_{α→0} sup_{|x|_E ≤ R} ‖D^j (ϕ_α − ϕ)(x)‖_{L^j(H)} = 0.

Moreover, thanks to (3.6),

sup_{z ∈ M_R^2} sup_{t ∈ [0,T]} |y_α(t; x, z)|_E < +∞,    P-a.s.,

for any T > 0. Thus, by using the same arguments as above, we have

lim_{α→0} sup_{z ∈ M_R^2} E ∫_0^{+∞} e^{−λt} (ϕ_α − ϕ)(y_α(t; x, z)) dt = 0.

Finally, since the sequence {ϕ_α} is bounded in C_b^1(H), recalling that K and K_r are bounded on bounded sets and K(Dϕ(x)) = K_r(Dϕ(x)) for any x ∈ H, by using (5.9) and by arguing as above, we have

lim_{α→0} sup_{z ∈ M_R^2} E ∫_0^{+∞} e^{−λt} [K(Dϕ_α(y_α(t; x, z))) − K_r(Dϕ_α(y_α(t; x, z)))] dt = 0.

Therefore, we can conclude that (5.8) holds for any x ∈ E, and then V(x) = ϕ(x) for x ∈ E.

Now assume that x ∈ H. We fix a sequence {x_n} ⊂ E converging to x in H. For each n ∈ N we have V(x_n) = ϕ(x_n) and

J(x_n; z) − J(x; z) = E ∫_0^{+∞} e^{−λt} [g(y(t; x_n, z)) − g(y(t; x, z))] dt.

Then, due to (3.4), if ϕ ∈ C_b^1(H), we easily get

lim_{n→+∞} sup_{z ∈ M_R^2} |J(x_n; z) − J(x; z)| = 0,

so that we can conclude that V(x) = ϕ(x) for any x ∈ H.

We have seen that if we assume the hamiltonian K to be Lipschitz continuous, then for any λ > 0 and g ∈ C_b(H) there exists a unique solution ϕ(λ, g) in D(L) ⊂ C_b^1(H) to the problem (4.1). This allows us to obtain a stronger version of the previous theorem in the case of Lipschitz K.

Theorem 5.3. Assume that Hypotheses 1, 2, and 3 hold. Then for any λ > 0 and g ∈ Lip_b(H) the value function V corresponding to the cost functional (5.1) coincides with the solution ϕ(λ, g) of the Hamilton–Jacobi equation (4.1). Moreover, for any x ∈ E we have

V(x) = lim_{α→0} min { J_α(x; z) ; z ∈ L^2(Ω; L^2(0, +∞; H)) adapted },


where {J_α(x, z)} is a sequence of cost functionals which admit unique optimal controls and states and whose value functions V_α coincide with the solutions of the problems

(λ + λ_0) ϕ − L_α ϕ + K(Dϕ) = g + λ_0 ϕ(λ, g)

for some λ_0 > 0.

Proof. By arguing as in the proof of the previous theorem, we get the conclusion for any g ∈ C_b^1(H) and λ > 0. Thus, in order to conclude, we have to show that for any g ∈ Lip_b(H) the approximating closed loop equation

du(t) = [A u(t) + F_α(u(t)) − DK(Dϕ_α(u(t)))] dt + Q dw(t),    u(0) = x,

admits a unique adapted solution u_α(t). If g ∈ Lip_b(H), we can find a bounded sequence {g_k} ⊂ C_b^1(H) converging to g in C_b(H). For each k ∈ N there exists a unique solution ϕ_{α,k} for the Hamilton–Jacobi problem

(λ + λ_0) ϕ − L_α ϕ + K_r(Dϕ) = g_k + λ_0 ϕ(λ, g).

Then, since g_k + λ_0 ϕ(λ, g) ∈ C_b^1(H), as proved above, the corresponding closed loop equation has a unique solution y_{α,k}(t). If we show that for any T > 0 the sequence {y_{α,k}} converges to some y_α in C([0, T]; H), P-a.s. and in mean-square, then we easily have that y_α is the solution of the closed loop equation (5.7).



For k, h ∈ N we define v_α^{k,h}(t) = y_{α,k}(t) − y_{α,h}(t). We have that v_α^{k,h} is the solution of the problem

dv(t) = [A v(t) + F_α(y_{α,k}(t)) − F_α(y_{α,h}(t)) − DK(Dϕ_{α,k}(y_{α,k}(t))) + DK(Dϕ_{α,h}(y_{α,h}(t)))] dt,    v(0) = 0.

By multiplying each side by v_α^{k,h}(t) and recalling (2.3) and (2.4), we have

(1/2) (d/dt) |v_α^{k,h}(t)|_H^2 ≤ c |v_α^{k,h}(t)|_H^2 + |DK(Dϕ_{α,k}(y_{α,k}(t))) − DK(Dϕ_{α,h}(y_{α,h}(t)))|_H |v_α^{k,h}(t)|_H.

Since DK is locally Lipschitz continuous and, according to Proposition 4.11, we have

sup_{k ∈ N} ‖Dϕ_{α,k}‖_0 ≤ sup_{k ∈ N} (‖Dg_k‖_0 + λ_0 ‖Dϕ‖_0) < ∞,

we obtain

|DK(Dϕ_{α,k}(y_{α,k}(t))) − DK(Dϕ_{α,h}(y_{α,h}(t)))|_H ≤ c |Dϕ_{α,k}(y_{α,k}(t)) − Dϕ_{α,h}(y_{α,h}(t))|_H.

For each x, y ∈ H we have

|Dϕ_{α,k}(x) − Dϕ_{α,k}(y)|_H ≤ c ‖ϕ_{α,k}‖_2 |x − y|_H,

and then, since the sequence {g_k} is bounded in C_b^1(H), from (4.16) it follows that

|Dϕ_{α,k}(y_{α,k}(t)) − Dϕ_{α,k}(y_{α,h}(t))|_H ≤ c_α |v_α^{k,h}(t)|_H.


Moreover, if λ_0 is large enough, due to (5.5) we have

‖Dϕ_{α,k} − Dϕ_{α,h}‖_0 ≤ c_α ‖g_k − g_h‖_0.

Therefore, we can conclude that

|DK(Dϕ_{α,k}(y_{α,k}(t))) − DK(Dϕ_{α,h}(y_{α,h}(t)))|_H ≤ c_α |v_α^{k,h}(t)|_H + c_α ‖g_k − g_h‖_0,

so that from the Young inequality we have

(1/2) (d/dt) |v_α^{k,h}(t)|_H^2 ≤ c_α |v_α^{k,h}(t)|_H^2 + c_α ‖g_k − g_h‖_0^2.

By the Gronwall lemma this yields

sup_{t ∈ [0,T]} |y_{α,k}(t) − y_{α,h}(t)|_H ≤ c_T ‖g_k − g_h‖_0,    P-a.s.,

for some constant c_T. Thus {y_{α,k}} converges to some y_α in C([0, T]; H), P-a.s. and in mean-square, and it is not difficult to check that y_α is the solution of the closed loop equation corresponding to the datum g.

By proceeding as in [9, Theorem 7.3] it is possible to show that when the space dimension d equals 1, under suitable assumptions there exist an optimal control and a corresponding optimal state.

Theorem 5.4. Assume that the space dimension d equals 1.
1. If the constant m in Hypothesis 1 is less than or equal to 1, then there exists a unique optimal control for the minimizing problem associated with the functional J. Furthermore, the optimal control z* is related to the corresponding optimal state y* by the feedback formula

z*(t) = −DK[DV(y*(t))],    t ∈ [0, T].

2. If DK can be extended as a Lipschitz continuous mapping from E into itself, then the same conclusion holds for any x ∈ E.

REFERENCES

[1] V. Barbu and G. Da Prato, Hamilton-Jacobi Equations in Hilbert Spaces, Research Notes in Mathematics 86, Pitman, Boston, 1983.
[2] P. Cannarsa and G. Da Prato, Second-order Hamilton–Jacobi equations in infinite dimensions, SIAM J. Control Optim., 29 (1991), pp. 474–492.
[3] P. Cannarsa and G. Da Prato, Direct solution of a second-order Hamilton-Jacobi equation in Hilbert spaces, in Stochastic Partial Differential Equations and Applications, Pitman Res. Notes Math. Ser. 268, G. Da Prato and L. Tubaro, eds., Longman, Harlow, UK, 1992, pp. 72–85.
[4] S. Cerrai, A Hille-Yosida theorem for weakly continuous semigroups, Semigroup Forum, 49 (1994), pp. 349–367.
[5] S. Cerrai, Elliptic and parabolic equations in R^n with coefficients having polynomial growth, Comm. Partial Differential Equations, 21 (1996), pp. 281–317.
[6] S. Cerrai, Differentiability with respect to initial datum for solutions of SPDE's with no Fréchet differentiable drift term, Comm. Appl. Anal., 2 (1998), pp. 249–270.
[7] S. Cerrai, Smoothing properties of transition semigroups relative to SDE's with values in Banach spaces, Probab. Theory Related Fields, 113 (1999), pp. 85–114.
[8] S. Cerrai, Differentiability of Markov semigroups for stochastic reaction-diffusion equations and applications to control, Stochastic Process. Appl., 83 (1999), pp. 15–37.
[9] S. Cerrai, Optimal control problems for stochastic reaction-diffusion systems with non-Lipschitz coefficients, SIAM J. Control Optim., 39 (2001), pp. 1779–1816.


[10] P. L. Chow and J. L. Menaldi, Infinite dimensional Hamilton-Jacobi equations in Gauss-Sobolev spaces, Nonlinear Anal., 29 (1997), pp. 415–426.
[11] M. G. Crandall, H. Ishii, and P. L. Lions, User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. (N.S.), 27 (1992), pp. 1–67.
[12] G. Da Prato and A. Debussche, Control of the stochastic Burgers model of turbulence, SIAM J. Control Optim., 37 (1999), pp. 1123–1149.
[13] G. Da Prato and A. Debussche, Dynamic programming for the stochastic Burgers equation, Ann. Mat. Pura Appl. (4), 178 (2000), pp. 143–174.
[14] G. Da Prato and J. Zabczyk, Stochastic Equations in Infinite Dimensions, Cambridge University Press, Cambridge, UK, 1992.
[15] E. B. Davies, Heat Kernels and Spectral Theory, Cambridge University Press, Cambridge, UK, 1989.
[16] T. E. Duncan, B. Maslowski, and B. Pasik-Duncan, Ergodic boundary point control of stochastic semilinear systems, SIAM J. Control Optim., 36 (1998), pp. 1020–1047.
[17] W. H. Fleming and H. M. Soner, Controlled Markov Processes and Viscosity Solutions, Springer-Verlag, New York, 1993.
[18] M. Freidlin, Markov Processes and Differential Equations: Asymptotic Problems, Lectures Math. ETH Zürich, Birkhäuser-Verlag, Basel, 1996.
[19] B. Goldys and B. Maslowski, Ergodic control of semilinear stochastic equations and the Hamilton-Jacobi equation, J. Math. Anal. Appl., 234 (1999), pp. 592–631.
[20] F. Gozzi, Regularity of solutions of a second order Hamilton-Jacobi equation and application to a control problem, Comm. Partial Differential Equations, 20 (1995), pp. 775–826.
[21] F. Gozzi, Global regular solutions of second order Hamilton-Jacobi equations in Hilbert spaces with locally Lipschitz nonlinearities, J. Math. Anal. Appl., 198 (1996), pp. 399–443.
[22] F. Gozzi and E. Rouy, Regular solutions of second-order stationary Hamilton-Jacobi equations, J. Differential Equations, 130 (1996), pp. 201–234.
[23] T. Havârneanu, Existence for the dynamic programming equation of control diffusion processes in Hilbert spaces, Nonlinear Anal., 9 (1985), pp. 619–629.
[24] P. L. Lions, Viscosity solutions of fully nonlinear second order equations and optimal control in infinite dimensions I: The case of bounded stochastic evolutions, Acta Math., 161 (1988), pp. 243–278.
[25] P. L. Lions, Viscosity solutions of fully nonlinear second order equations and optimal control in infinite dimensions II: Optimal control of Zakai's equation, in Stochastic Partial Differential Equations and Applications, Lecture Notes in Math. 1390, G. Da Prato and L. Tubaro, eds., Springer-Verlag, Berlin, 1989, pp. 147–170.
[26] P. L. Lions, Viscosity solutions of fully nonlinear second order equations and optimal control in infinite dimensions III: Uniqueness of viscosity solutions for general second order equations, J. Funct. Anal., 86 (1989), pp. 1–18.
[27] A. Lunardi, Analytic Semigroups and Optimal Regularity in Parabolic Problems, Birkhäuser-Verlag, Basel, 1995.
[28] A. Swiech, Viscosity Solutions of Fully Nonlinear Partial Differential Equations with Unbounded Terms in Infinite Dimensions, Ph.D. thesis, University of California Santa Barbara, Santa Barbara, CA, 1993.