Mean-Field Backward Stochastic Volterra Integral Equations∗
Yufeng Shi†, Tianxiao Wang‡, and Jiongmin Yong§ January 20, 2013
Abstract. Mean-field backward stochastic Volterra integral equations (MF-BSVIEs, for short) are introduced and studied. Well-posedness of MF-BSVIEs in the sense of the introduced adapted M-solutions is established. Two duality principles between linear mean-field (forward) stochastic Volterra integral equations (MF-FSVIEs, for short) and MF-BSVIEs are obtained. Several comparison theorems for MF-FSVIEs and MF-BSVIEs are proved. A Pontryagin type maximum principle is established for an optimal control problem of MF-FSVIEs.
Keywords. Mean-field stochastic Volterra integral equation, mean-field backward stochastic Volterra integral equation, duality principle, comparison theorem, maximum principle. AMS Mathematics subject classification. 60H20, 93E20, 35Q83.
1 Introduction.
Throughout this paper, we let (Ω, F, lF, lP) be a complete filtered probability space on which a one-dimensional standard Brownian motion W (·) is defined with lF = {Ft }t≥0 being its natural filtration augmented by all the lP-null sets. Let us begin with the following stochastic differential equation (SDE, for short) in lR:
  dX(t) = b(X(t), µ(t)) dt + dW(t),   t ∈ [0, T],   X(0) = x,   (1.1)

where

  b(X(t), µ(t)) ≡ ∫_Ω b(X(t, ω), X(t; ω′)) lP(dω′) = ∫_lR b(ξ, y) µ(t; dy) |_{ξ=X(t)} ≡ lE[b(ξ, X(t))] |_{ξ=X(t)},   (1.2)
This work is supported in part by National Natural Science Foundation of China (Grants 10771122 and 11071145), Natural Science Foundation of Shandong Province of China (Grant Y2006A08), Foundation for Innovative Research Groups of National Natural Science Foundation of China (Grant 10921101), National Basic Research Program of China (973 Program, No. 2007CB814900), Independent Innovation Foundation of Shandong University (Grant 2010JQ010), Graduate Independent Innovation Foundation of Shandong University (GIIFSDU), and the NSF grant DMS-1007514. † School of Mathematics, Shandong University, Jinan 250100, China ‡ School of Mathematics, Shandong University, Jinan 250100, China § Department of Mathematics, University of Central Florida, Orlando, FL 32816, USA.
where b : lR × lR → lR is a (locally) bounded Borel measurable function and µ(t; ·) is the probability distribution of the unknown process X(t):

  µ(t; A) = lP(X(t) ∈ A),   ∀A ∈ B(lR).   (1.3)
Here B(lR^n) is the Borel σ-field of lR^n (n ≥ 1). Equation (1.1) is called a McKean–Vlasov SDE. Such an equation was suggested by Kac [20] as a stochastic toy model for the Vlasov kinetic equation of plasma, and its study was initiated by McKean [25]. Since then, many authors have made contributions to McKean–Vlasov type SDEs and their applications; see, for example, Dawson [14], Dawson–Gärtner [15], Gärtner [16], Scheutzow [32], Sznitman [33], Graham [17], Chan [11], Chiang [12], Ahmed–Ding [2]. In recent years, related topics and problems have attracted more and more attention; see, for example, Veretennikov [38], Huang–Malhamé–Caines [19], Ahmed [1], Mahmudov–McKibben [24], Lasry–Lions [22], Borkar–Kumar [7], Crisan–Xiong [13], Kotelenez–Kurtz [21], Park–Balasubramaniam–Kang [27], Andersson–Djehiche [4], Meyer-Brandis–Øksendal–Zhou [26], and so on. Inspired by (1.1), one can consider the following more general SDE:

  dX(t) = b(t, X(t), lE[θ^b(t, ξ, X(t))]_{ξ=X(t)}) dt + σ(t, X(t), lE[θ^σ(t, ξ, X(t))]_{ξ=X(t)}) dW(t),   t ∈ [0, T],
  X(0) = x,   (1.4)
where θ^b and θ^σ are some suitable maps. We call the above a mean-field (forward) stochastic differential equation (MF-FSDE, for short). From (1.2) and (1.4), we see that (1.1) is a special case of (1.4). Note also that (1.4) is an extension of classical Itô type SDEs. Due to the dependence of b and σ on lE[θ^b(t, ξ, X(t))]_{ξ=X(t)} and lE[θ^σ(t, ξ, X(t))]_{ξ=X(t)}, respectively, MF-FSDE (1.4) is nonlocal with respect to the event ω ∈ Ω. It is easy to see that the equivalent integral form of (1.4) is as follows:

  X(t) = x + ∫_0^t b(s, X(s), lE[θ^b(s, ξ, X(s))]_{ξ=X(s)}) ds + ∫_0^t σ(s, X(s), lE[θ^σ(s, ξ, X(s))]_{ξ=X(s)}) dW(s),   t ∈ [0, T].   (1.5)
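For readers who prefer a computational picture, the nonlocal coefficients lE[θ^b(t, ξ, X(t))]_{ξ=X(t)} and lE[θ^σ(t, ξ, X(t))]_{ξ=X(t)} in (1.4)-(1.5) can be approximated by replacing the law of X(t) with the empirical measure of a system of interacting particles. The following minimal sketch (added here for illustration and not part of the original text; the particle count, step size and the concrete coefficients are assumptions only) uses an Euler-Maruyama discretization in Python.

```python
import numpy as np

def simulate_mf_sde(b, sigma, theta_b, theta_s, x0=0.0, T=1.0, N=1000, M=200, seed=0):
    """Particle (empirical-measure) approximation of the MF-FSDE (1.4).

    The mean-field term E[theta(t, xi, X(t))]|_{xi=X_i(t)} is replaced by the
    average of theta(t, X_i(t), X_j(t)) over the N simulated particles X_j.
    """
    rng = np.random.default_rng(seed)
    dt = T / M
    X = np.full(N, x0)
    for m in range(M):
        t = m * dt
        # empirical versions of E[theta^b(t, xi, X(t))] and E[theta^sigma(t, xi, X(t))]
        mb = theta_b(t, X[:, None], X[None, :]).mean(axis=1)
        ms = theta_s(t, X[:, None], X[None, :]).mean(axis=1)
        dW = rng.normal(0.0, np.sqrt(dt), size=N)
        X = X + b(t, X, mb) * dt + sigma(t, X, ms) * dW
    return X  # samples approximating the law of X(T)

# illustrative coefficients (assumptions, not taken from the paper)
X_T = simulate_mf_sde(
    b=lambda t, x, m: -x + m,              # drift pulled toward the mean-field term
    sigma=lambda t, x, m: 0.5 + 0.0 * m,   # constant diffusion
    theta_b=lambda t, xi, x: 0 * xi + x,   # theta^b(t, xi, x') = x', so m = E[X(t)]
    theta_s=lambda t, xi, x: 0 * xi + 0 * x,
)
print(X_T.mean(), X_T.std())
```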
This suggests a natural extension of the above to the following:

  X(t) = ϕ(t) + ∫_0^t b(t, s, X(s), lE[θ^b(t, s, ξ, X(s))]_{ξ=X(s)}) ds + ∫_0^t σ(t, s, X(s), lE[θ^σ(t, s, ξ, X(s))]_{ξ=X(s)}) dW(s),   t ≥ 0.   (1.6)
We call the above a mean-field (forward) stochastic Volterra integral equation (MF-FSVIE, for short). It is worth pointing out that when the drift b and the diffusion σ in (1.6) are independent of the nonlocal terms lE[θ^b(t, s, ξ, X(s))]_{ξ=X(s)} and lE[θ^σ(t, s, ξ, X(s))]_{ξ=X(s)}, respectively, (1.6) reduces to a so-called (forward) stochastic Volterra integral equation (FSVIE, for short):

  X(t) = ϕ(t) + ∫_0^t b(t, s, X(s)) ds + ∫_0^t σ(t, s, X(s)) dW(s),   t ≥ 0.   (1.7)
Such equations have been studied by a number of researchers; see, for example, Berger–Mizel [6], Protter [30], Pardoux–Protter [29], Tudor [34], Zhang [43], and so on. Needless to say, the theory for (1.6) is very rich and has great application potential in various areas. On the other hand, a general (nonlinear) backward stochastic differential equation (BSDE, for short), introduced in Pardoux–Peng [28], is equivalent to the following:

  Y(t) = ξ + ∫_t^T g(s, Y(s), Z(s)) ds − ∫_t^T Z(s) dW(s),   t ∈ [0, T].   (1.8)
Extending the above, the following general stochastic integral equation was introduced and studied in Yong [39, 40, 41]:

  Y(t) = ψ(t) + ∫_t^T g(t, s, Y(s), Z(t, s), Z(s, t)) ds − ∫_t^T Z(t, s) dW(s),   t ∈ [0, T].   (1.9)
Such an equation is called a backward stochastic Volterra integral equation (BSVIE, for short). A special case of (1.9), with g(·) independent of Z(s, t) and ψ(t) ≡ ξ, was studied by Lin [23] and Aman–N'zi [3] a little earlier. Some relevant studies of (1.9) can be found in Wang–Zhang [37], Wang–Shi [36], Ren [31], and Anh–Grecksch–Yong [5]. Inspired by BSVIEs, it is very natural for us to introduce the following stochastic integral equation:

  Y(t) = ψ(t) + ∫_t^T g(t, s, Y(s), Z(t, s), Z(s, t), Γ(t, s, Y(s), Z(t, s), Z(s, t))) ds − ∫_t^T Z(t, s) dW(s),   t ∈ [0, T],   (1.10)

where (Y(·), Z(·,·)) is the pair of unknown processes, ψ(·) is a given free term which is F_T-measurable (not necessarily lF-adapted), g(·) is a given mapping, called the generator, and

  Γ(t, s, Y, Z, Ẑ) = lE[θ(t, s, y, z, ẑ, Y, Z, Ẑ)]_{(y,z,ẑ)=(Y,Z,Ẑ)},   (1.11)

for some mapping θ(·), with (Y, Z, Ẑ) being some random variables (see the next section for the precise meaning of the above). We call (1.10) a mean-field backward stochastic Volterra integral equation (MF-BSVIE, for short). Relevant to the current paper, let us mention that in Buckdahn–Djehiche–Li–Peng [9], mean-field backward stochastic differential equations (MF-BSDEs, for short) were introduced, and in Buckdahn–Li–Peng [10] a class of nonlocal PDEs was studied with the help of an MF-BSDE and a McKean–Vlasov forward equation.
We see that MF-BSVIE (1.10) not only includes MF-BSDEs (which, of course, also include standard BSDEs) introduced in [9, 10], but also generalizes BSVIEs studied in [39, 41, 36], etc., in a natural way. Besides, investigating MF-BSVIEs allows us to meet the needs arising in the study of optimal control for MF-FSVIEs. As a matter of fact, in the statement of a Pontryagin type maximum principle for optimal control of a forward (deterministic or stochastic) control system, the adjoint equation of the variational state equation is a corresponding (deterministic or stochastic) backward system; see [42] for the case of classical optimal control problems, [4, 8, 26] for the case of MF-FSDEs, and [39, 41] for the case of FSVIEs. When the state equation is an MF-FSVIE, the adjoint equation will naturally be an MF-BSVIE. Hence the study of well-posedness for MF-BSVIEs is unavoidable when we want to study optimal control problems for MF-FSVIEs.
The novelty of this paper mainly consists of the following. First, well-posedness of general MF-BSVIEs will be established. In doing that, we discover that the growth of the generator and the nonlocal term with respect to Z(s, t) plays a crucial role; a better understanding of this enables us to find a neat way of treating the term Z(s, t). Even for BSVIEs, our new method significantly simplifies the proof of well-posedness of the equation (compared with [41]). Second, we establish two slightly different duality principles, one starting from linear MF-FSVIEs, and the other starting from linear MF-BSVIEs. We find that "the twice adjoint of a linear MF-FSVIE is itself", whereas "the twice adjoint of a linear MF-BSVIE is not necessarily itself". Third, some comparison theorems will be established for MF-FSVIEs and MF-BSVIEs. It turns out that the situation is surprisingly different from the differential equation cases. Some mistakes found in [39, 40] will be corrected. Finally, as an application of the duality principle for MF-FSVIEs, we establish a Pontryagin type maximum principle for an optimal control problem of MF-FSVIEs. The rest of the paper is organized as follows. Section 2 is devoted to presenting some preliminary results. In Section 3, we prove the existence and uniqueness of adapted M-solutions to MF-BSVIE (1.10). In Section 4 we obtain duality principles. Comparison theorems will be presented in Section 5. In Section 6, we deduce a maximum principle for optimal controls of MF-FSVIEs.
2 Preliminary Results.

In this section, we present some preliminary results.
2.1 Formulation of MF-BSVIEs.
Let us first introduce some spaces. For H = lR^n, etc., p > 1 and t ∈ [0, T], let

  L^p(0, T; H) = { x : [0, T] → H | ∫_0^T |x(s)|^p ds < ∞ },
  L^p_{F_t}(Ω; H) = { ξ : Ω → H | ξ is F_t-measurable, lE|ξ|^p < ∞ },
  L^p_{F_t}(0, T; H) = { X : [0, T] × Ω → H | X(·) is F_t-measurable, lE ∫_0^T |X(s)|^p ds < ∞ },
  L^p_lF(0, T; H) = { X : [0, T] × Ω → H | X(·) is lF-adapted, lE ∫_0^T |X(s)|^p ds < ∞ },
  L^p_lF(Ω; L^2(0, T; H)) = { X : [0, T] × Ω → H | X(·) is lF-adapted, lE( ∫_0^T |X(s)|^2 ds )^{p/2} < ∞ }.

Also, let (with q ≥ 1)

  L^p(0, T; L^q_lF(0, T; H)) = { Z : [0, T]^2 × Ω → H | Z(t, ·) is lF-adapted for almost all t ∈ [0, T], lE ∫_0^T ( ∫_0^T |Z(t, s)|^q ds )^{p/q} dt < ∞ },
  C^p_lF([0, T]; H) = { X : [0, T] × Ω → H | X(·) is lF-adapted, t ↦ X(t) is continuous from [0, T] to L^p_{F_T}(Ω; H), sup_{t∈[0,T]} lE[|X(t)|^p] < ∞ }.
We denote

  H^p[0, T] = L^p_lF(0, T; H) × L^p(0, T; L^2_lF(0, T; H)),
  lH^p[0, T] = C^p_lF([0, T]; H) × L^p_lF(Ω; L^2(0, T; H)).

Next, let (Ω^2, F^2, lP^2) = (Ω × Ω, F ⊗ F, lP ⊗ lP) be the completion of the product probability space of the original (Ω, F, lP) with itself, where we define the filtration lF^2 = {F_t ⊗ F_t, t ∈ [0, T]} with F_t ⊗ F_t being the completion of F_t × F_t. It is worth noting that any random variable ξ = ξ(ω) defined on Ω can be extended naturally to Ω^2 as ξ′(ω, ω′) = ξ(ω), with (ω, ω′) ∈ Ω^2. Similar to the above, we define

  L^1(Ω^2, F^2, lP^2; H) = { ξ : Ω^2 → H | ξ is F^2-measurable, lE^2|ξ| ≡ ∫_{Ω^2} |ξ(ω′, ω)| lP(dω′) lP(dω) < ∞ }.

For any η ∈ L^1(Ω^2, F^2, lP^2; H), we denote

  lE′η(ω, ·) = ∫_Ω η(ω, ω′) lP(dω′) ∈ L^1(Ω, F, lP).

Note that if η(ω, ω′) = η(ω′), then

  lE′η = ∫_Ω η(ω′) lP(dω′) = ∫_Ω η(ω) lP(dω) = lEη.
In what follows, lE′ will be used when we need to distinguish ω′ from ω, which is the case when both ω and ω′ appear at the same time. Finally, we denote

  ∆ = { (t, s) ∈ [0, T]^2 | t ≤ s },   ∆^* = { (t, s) ∈ [0, T]^2 | t ≥ s } ≡ ∆^c.

Let

  g : ∆ × Ω × lR^{3n} × lR^m → lR^n,   θ : ∆ × Ω^2 × lR^{6n} → lR^m,   (2.1)

be some suitable maps (see below for precise conditions) and define

  Γ(t, s, Y, Z, Ẑ) = lE′[θ(t, s, y, z, ẑ, Y, Z, Ẑ)]_{(y,z,ẑ)=(Y,Z,Ẑ)}
    = ∫_Ω θ(t, s, ω, ω′, Y(ω), Z(ω), Ẑ(ω), Y(ω′), Z(ω′), Ẑ(ω′)) lP(dω′),   (2.2)

for all reasonable random variables (Y, Z, Ẑ). This gives the precise meaning of (1.11). Hereafter, when we talk about MF-BSVIE (1.10), the mapping Γ is defined by (2.2). With such a mapping, we have

  Γ(t, s, Y(s), Z(t, s), Z(s, t)) ≡ Γ(t, s, ω, Y(s, ω), Z(t, s, ω), Z(s, t, ω))
    = ∫_Ω θ(t, s, ω, ω′, Y(s, ω), Z(t, s, ω), Z(s, t, ω), Y(s, ω′), Z(t, s, ω′), Z(s, t, ω′)) lP(dω′).

Clearly, the operator Γ is nonlocal in the sense that the value Γ(t, s, ω, Y(s, ω), Z(t, s, ω), Z(s, t, ω)) of Γ(t, s, Y(s), Z(t, s), Z(s, t)) at ω depends on the whole set {(Y(s, ω′), Z(t, s, ω′), Z(s, t, ω′)) | ω′ ∈ Ω}, not just on (Y(s, ω), Z(t, s, ω), Z(s, t, ω)). To get some feeling for such an operator, let us look at a simple but nontrivial special case.
Example 2.1. Let

  θ(t, s, ω, ω′, y, z, ẑ, y′, z′, ẑ′) = θ_0(t, s, ω) + A_0(t, s, ω)y + B_0(t, s, ω)z + C_0(t, s, ω)ẑ + A_1(t, s, ω, ω′)y′ + B_1(t, s, ω, ω′)z′ + C_1(t, s, ω, ω′)ẑ′.

We should carefully distinguish ω′ and ω in the above. Then (suppressing ω)

  Γ(t, s, Y(s), Z(t, s), Z(s, t)) = θ_0(t, s) + A_0(t, s)Y(s) + B_0(t, s)Z(t, s) + C_0(t, s)Z(s, t)
    + lE′[A_1(t, s)Y(s)] + lE′[B_1(t, s)Z(t, s)] + lE′[C_1(t, s)Z(s, t)],

where, for example,

  lE′[B_1(t, s)Z(t, s)] = ∫_Ω B_1(t, s, ω, ω′) Z(t, s, ω′) lP(dω′).

For such a case, (Y(·), Z(·,·)) ↦ Γ(·,·, Y(·), Z(·,·), Z(·,·)) is affine.
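Concretely, the nonlocal terms in Example 2.1 are plain integrals over ω′ with ω frozen, so they can be approximated by averaging over an i.i.d. sample of scenarios. The following small sketch (an illustration added here, not part of the paper; the array shapes and sample data are assumptions) evaluates lE′[B_1(t, s)Z(t, s)] at every sampled ω from samples of B_1(t, s, ω, ω′) and Z(t, s, ω′).

```python
import numpy as np

def nonlocal_term(B1, Z):
    """Monte Carlo version of E'[B1(t,s) Z(t,s)](omega) = ∫ B1(t,s,omega,omega') Z(t,s,omega') P(domega').

    B1 : array of shape (N, N), B1[i, j] ~ B1(t, s, omega_i, omega'_j) for fixed (t, s)
    Z  : array of shape (N,),   Z[j]    ~ Z(t, s, omega'_j)
    Returns an array of shape (N,): the value of the nonlocal term at each omega_i.
    """
    return (B1 * Z[None, :]).mean(axis=1)

# toy data on N sampled scenarios (purely illustrative)
rng = np.random.default_rng(1)
N = 500
B1 = rng.normal(size=(N, N))
Z = rng.normal(size=N)
print(nonlocal_term(B1, Z)[:5])
```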
Having gained some feeling for the operator Γ from the above, let us look at some useful properties of the operator Γ in general. To this end, we make the following assumption.

(H0)_q The map θ : ∆ × Ω^2 × lR^{6n} → lR^m is measurable and, for all (t, y, z, ẑ, y′, z′, ẑ′) ∈ [0, T] × lR^{6n}, the map (s, ω, ω′) ↦ θ(t, s, ω, ω′, y, z, ẑ, y′, z′, ẑ′) is lF^2-progressively measurable on [t, T]. Moreover, there exist constants L > 0 and q ≥ 2 such that

  |θ(t, s, ω, ω′, y_1, z_1, ẑ_1, y′_1, z′_1, ẑ′_1) − θ(t, s, ω, ω′, y_2, z_2, ẑ_2, y′_2, z′_2, ẑ′_2)|
    ≤ L( |y_1 − y_2| + |z_1 − z_2| + |ẑ_1 − ẑ_2| + |y′_1 − y′_2| + |z′_1 − z′_2| + |ẑ′_1 − ẑ′_2| ),
    ∀(t, s, ω, ω′) ∈ ∆ × Ω^2, (y_i, z_i, ẑ_i, y′_i, z′_i, ẑ′_i) ∈ lR^{6n}, i = 1, 2,   (2.3)

and

  |θ(t, s, ω, ω′, y, z, ẑ, y′, z′, ẑ′)| ≤ L( 1 + |y| + |z| + |ẑ|^{2/q} + |y′| + |z′| + |ẑ′|^{2/q} ),
    ∀(t, s, ω, ω′) ∈ ∆ × Ω^2, (y, z, ẑ, y′, z′, ẑ′) ∈ lR^{6n}.   (2.4)

In the above, we may replace the constant L by some function L(t, s) with suitable integrability (similar to [41]). However, for simplicity of presentation, we prefer to take a constant L. Also, we note that (ẑ, ẑ′) ↦ θ(t, s, ω, ω′, y, z, ẑ, y′, z′, ẑ′) is assumed to grow no faster than |ẑ|^{2/q} + |ẑ′|^{2/q}. If q = 2, the growth is linear, and if q > 2, the growth is sublinear. This condition is very subtle in showing that the solution (Y(·), Z(·,·)) of an MF-BSVIE belongs to H^q[0, T]. We would like to mention that (H0)_∞ is understood to mean that (2.4) is replaced by the following:

  |θ(t, s, ω, ω′, y, z, ẑ, y′, z′, ẑ′)| ≤ L( 1 + |y| + |z| + |y′| + |z′| ),   ∀(t, s, ω, ω′) ∈ ∆ × Ω^2, (y, z, ẑ, y′, z′, ẑ′) ∈ lR^{6n}.

Under (H0)_q, for any (Y(·), Z(·,·)) ∈ H^q[0, T], we see that for each t ∈ [0, T], the map

  (s, ω) ↦ Γ(t, s, ω, Y(s), Z(t, s), Z(s, t)) ≡ ∫_Ω θ(t, s, ω, ω′, Y(s, ω), Z(t, s, ω), Z(s, t, ω), Y(s, ω′), Z(t, s, ω′), Z(s, t, ω′)) lP(dω′)   (2.5)

is lF-progressively measurable on [t, T]. Also,

  |θ(t, s, ω, ω′, Y(s, ω), Z(t, s, ω), Z(s, t, ω), y, z, ẑ)| ≤ L( 1 + |Y(s, ω)| + |Z(t, s, ω)| + |Z(s, t, ω)|^{2/q} + |y| + |z| + |ẑ|^{2/q} ).   (2.6)

Consequently,

  |Γ(t, s, Y(s), Z(t, s), Z(s, t))| ≤ L( 1 + |Y(s)| + |Z(t, s)| + |Z(s, t)|^{2/q} + lE|Y(s)| + lE|Z(t, s)| + lE|Z(s, t)|^{2/q} ).   (2.7)

Likewise, for any (Y_1(·), Z_1(·,·)), (Y_2(·), Z_2(·,·)) ∈ H^q[0, T], we have

  |Γ(t, s, Y_1(s), Z_1(t, s), Z_1(s, t)) − Γ(t, s, Y_2(s), Z_2(t, s), Z_2(s, t))|
    ≤ L( |Y_1(s) − Y_2(s)| + |Z_1(t, s) − Z_2(t, s)| + |Z_1(s, t) − Z_2(s, t)|
      + lE|Y_1(s) − Y_2(s)| + lE|Z_1(t, s) − Z_2(t, s)| + lE|Z_1(s, t) − Z_2(s, t)| ).   (2.8)
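For instance (a remark added here for illustration), in the affine setting of Example 2.1 with θ_0 and all the coefficients bounded, assumption (H0)_q holds with q = 2, and (2.7) specializes to

  |Γ(t, s, Y(s), Z(t, s), Z(s, t))| ≤ L( 1 + |Y(s)| + |Z(t, s)| + |Z(s, t)| + lE|Y(s)| + lE|Z(t, s)| + lE|Z(s, t)| ).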
The above two estimates will play an interesting role later. We now introduce the following definition.

Definition 2.2. A pair (Y(·), Z(·,·)) ∈ H^p[0, T] is called an adapted M-solution of MF-BSVIE (1.10) if (1.10) is satisfied in the Itô sense and the following holds:

  Y(t) = lEY(t) + ∫_0^t Z(t, s) dW(s),   t ∈ [0, T].   (2.9)

It is clear that (2.9) implies

  Y(t) = lE[Y(t) | F_S] + ∫_S^t Z(t, s) dW(s),   0 ≤ S ≤ t ≤ T.   (2.10)

This suggests that we define M^p[0, T] as the set of all elements (y(·), z(·,·)) ∈ H^p[0, T] satisfying

  y(t) = lE[y(t) | F_S] + ∫_S^t z(t, s) dW(s),   t ∈ [S, T],   S ∈ [0, T).   (2.11)

Obviously, M^p[0, T] is a closed subspace of H^p[0, T]. Note that for any (y(·), z(·,·)) ∈ M^2[0, T],

  lE|y(t)|^2 = lE| lE[y(t) | F_S] |^2 + lE ∫_S^t |z(t, s)|^2 ds ≥ lE ∫_S^t |z(t, s)|^2 ds.   (2.12)
Relation (2.12) can be generalized a little further. To see this, let us present the following lemma.

Lemma 2.3. Let 0 ≤ S < t ≤ T, η ∈ L^p_{F_S}(Ω; lR^n) and ζ(·) ∈ L^p_lF(Ω; L^2(S, t; lR^n)). Then

  lE[ |η|^p + ( ∫_S^t |ζ(s)|^2 ds )^{p/2} ] ≤ K lE| η + ∫_S^t ζ(s) dW(s) |^p.   (2.13)

Hereafter, K > 0 stands for a generic constant which can be different from line to line.

Proof. For fixed (S, t) ∈ ∆ (which means 0 ≤ S ≤ t ≤ T) with S < t, let

  ξ = η + ∫_S^t ζ(s) dW(s),

which is F_t-measurable. Let (Y(·), Z(·)) be the adapted solution to the following BSDE:

  Y(r) = ξ − ∫_r^t Z(s) dW(s),   r ∈ [S, t].

Then it is standard that

  lE[ sup_{r∈[S,t]} |Y(r)|^p + ( ∫_S^t |Z(s)|^2 ds )^{p/2} ] ≤ K lE|ξ|^p.   (2.14)

Now,

  Y(S) + ∫_S^t Z(s) dW(s) = ξ = η + ∫_S^t ζ(s) dW(s).

By taking the conditional expectation lE[· | F_S], we see that Y(S) = η. Consequently,

  ∫_S^t [ Z(s) − ζ(s) ] dW(s) = 0,

which leads to

  Z(s) = ζ(s),   s ∈ [S, t], a.s.
Then (2.13) follows from (2.14). We have the following interesting corollary for elements of M^p[0, T] (compare with (2.12)).

Corollary 2.4. For any (y(·), z(·,·)) ∈ M^p[0, T], the following holds:

  lE( ∫_S^t |z(t, s)|^2 ds )^{p/2} ≤ K lE|y(t)|^p,   ∀S ∈ [0, t].   (2.15)

Proof. Applying (2.13) to (2.11), we have

  lE( ∫_S^t |z(t, s)|^2 ds )^{p/2} ≤ lE[ | lE[y(t) | F_S] |^p + ( ∫_S^t |z(t, s)|^2 ds )^{p/2} ] ≤ K lE|y(t)|^p.

This proves the corollary.
From the above, we see that for any (y(·), z(·,·)) ∈ M^p[0, T] and any β > 0,

  K lE ∫_0^T e^{βt} |y(t)|^p dt ≥ lE ∫_0^T e^{βt} ( |lEy(t)|^2 + ∫_0^t |z(t, s)|^2 ds )^{p/2} dt ≥ lE ∫_0^T e^{βt} ( ∫_0^t |z(t, s)|^2 ds )^{p/2} dt.   (2.16)

Hence,

  lE{ ∫_0^T e^{βt} |y(t)|^p dt + ∫_0^T e^{βt} ( ∫_t^T |z(t, s)|^2 ds )^{p/2} dt }
    ≤ K lE{ ∫_0^T |y(t)|^p dt + ∫_0^T ( ∫_t^T |z(t, s)|^2 ds )^{p/2} dt }
    ≤ K lE{ ∫_0^T |y(t)|^p dt + ∫_0^T ( ∫_0^T |z(t, s)|^2 ds )^{p/2} dt } ≤ K ‖(y(·), z(·,·))‖^p_{H^p[0,T]},

and, conversely, by (2.16),

  ‖(y(·), z(·,·))‖^p_{H^p[0,T]} ≡ lE{ ∫_0^T |y(t)|^p dt + ∫_0^T ( ∫_0^T |z(t, s)|^2 ds )^{p/2} dt }
    ≤ K lE{ ∫_0^T |y(t)|^p dt + ∫_0^T ( ∫_0^t |z(t, s)|^2 ds )^{p/2} dt + ∫_0^T ( ∫_t^T |z(t, s)|^2 ds )^{p/2} dt }
    ≤ K lE{ ∫_0^T e^{βt} |y(t)|^p dt + ∫_0^T e^{βt} ( ∫_t^T |z(t, s)|^2 ds )^{p/2} dt }.
This means that we can use the following as an equivalent norm on M^p[0, T]:

  ‖(y(·), z(·,·))‖_{M^p[0,T]} ≡ { lE ∫_0^T e^{βt} |y(t)|^p dt + lE ∫_0^T e^{βt} ( ∫_t^T |z(t, s)|^2 ds )^{p/2} dt }^{1/p}.
Sometimes we write M^p_β[0, T] for M^p[0, T] to emphasize the parameter β involved. To conclude this subsection, we state the following corollary of Lemma 2.3, relevant to BSVIEs, whose proof is straightforward.

Corollary 2.5. Suppose (η(·), ζ(·,·)) is an adapted M-solution to the following BSVIE:

  η(t) = ξ(t) + ∫_t^T g(t, s) ds − ∫_t^T ζ(t, s) dW(s),   t ∈ [0, T],   (2.17)

for ξ(·) ∈ L^p_{F_T}(0, T; lR^n) and g(·,·) ∈ L^p(0, T; L^1_lF(0, T; lR^n)). Then

  lE[ |η(t)|^p + ( ∫_t^T |ζ(t, s)|^2 ds )^{p/2} ] ≤ K lE[ |ξ(t)|^p + ( ∫_t^T |g(t, s)| ds )^p ],   ∀t ∈ [0, T].   (2.18)

2.2 Mean-field forward stochastic Volterra integral equations.
In this subsection, we study the following MF-FSVIE:

  X(t) = ϕ(t) + ∫_0^t b(t, s, X(s), Γ^b(t, s, X(s))) ds + ∫_0^t σ(t, s, X(s), Γ^σ(t, s, X(s))) dW(s),   t ∈ [0, T],   (2.19)

where

  Γ^b(t, s, X) = lE′[θ^b(t, s, ξ, X)]_{ξ=X} ≡ ∫_Ω θ^b(t, s, ω, ω′, X(ω), X(ω′)) lP(dω′),
  Γ^σ(t, s, X) = lE′[θ^σ(t, s, ξ, X)]_{ξ=X} ≡ ∫_Ω θ^σ(t, s, ω, ω′, X(ω), X(ω′)) lP(dω′).   (2.20)
We see that MF-FSVIE (2.19) is slightly more general than MF-FSVIE (1.6) because of the above definition (2.20) of the operators Γ^b and Γ^σ. An lF-adapted process X(·) is called a solution to (2.19) if (2.19) is satisfied in the usual Itô sense. To guarantee the well-posedness of (2.19), let us make the following hypotheses.

(H1) The maps b : ∆^* × Ω × lR^n × lR^{m_1} → lR^n and σ : ∆^* × Ω × lR^n × lR^{m_2} → lR^n are measurable, and for all (t, x, γ, γ′) ∈ [0, T] × lR^n × lR^{m_1} × lR^{m_2}, the map (s, ω) ↦ (b(t, s, ω, x, γ), σ(t, s, ω, x, γ′)) is lF-progressively measurable on [0, t]. Moreover, there exists a constant L > 0 such that

  |b(t, s, ω, x_1, γ_1) − b(t, s, ω, x_2, γ_2)| + |σ(t, s, ω, x_1, γ′_1) − σ(t, s, ω, x_2, γ′_2)| ≤ L( |x_1 − x_2| + |γ_1 − γ_2| + |γ′_1 − γ′_2| ),
    (t, s, ω) ∈ ∆^* × Ω, (x_i, γ_i, γ′_i) ∈ lR^n × lR^{m_1} × lR^{m_2}, i = 1, 2.   (2.21)

Moreover,

  |b(t, s, ω, x, γ)| + |σ(t, s, ω, x, γ′)| ≤ L( 1 + |x| + |γ| + |γ′| ),   (t, s, ω, x, γ, γ′) ∈ ∆^* × Ω × lR^n × lR^{m_1} × lR^{m_2}.   (2.22)

(H2) The maps θ^b : ∆^* × Ω^2 × lR^{2n} → lR^{m_1} and θ^σ : ∆^* × Ω^2 × lR^{2n} → lR^{m_2} are measurable, and for all (t, x, x′) ∈ [0, T] × lR^{2n}, the map (s, ω, ω′) ↦ (θ^b(t, s, ω, ω′, x, x′), θ^σ(t, s, ω, ω′, x, x′)) is lF^2-progressively measurable on [0, t]. Moreover, there exists a constant L > 0 such that

  |θ^b(t, s, ω, ω′, x_1, x′_1) − θ^b(t, s, ω, ω′, x_2, x′_2)| + |θ^σ(t, s, ω, ω′, x_1, x′_1) − θ^σ(t, s, ω, ω′, x_2, x′_2)| ≤ L( |x_1 − x_2| + |x′_1 − x′_2| ),
    (t, s, ω, ω′) ∈ ∆^* × Ω^2, (x_i, x′_i) ∈ lR^{2n}, i = 1, 2,   (2.23)

and

  |θ^b(t, s, ω, ω′, x, x′)| + |θ^σ(t, s, ω, ω′, x, x′)| ≤ L( 1 + |x| + |x′| ),   (t, s, ω, ω′) ∈ ∆^* × Ω^2, x, x′ ∈ lR^n.   (2.24)

We will also need the following assumptions.

(H1)′ In addition to (H1), the map t ↦ (b(t, s, ω, x, γ), σ(t, s, ω, x, γ′)) is continuous on [s, T].

(H2)′ In addition to (H2), the map t ↦ (θ^b(t, s, ω, ω′, x, x′), θ^σ(t, s, ω, ω′, x, x′)) is continuous on [s, T].

Now, let us state and prove the following result concerning MF-FSVIE (2.19).

Theorem 2.6. Let (H1)–(H2) hold. Then for any p ≥ 2 and ϕ(·) ∈ L^p_lF(0, T; lR^n), MF-FSVIE (2.19) admits a unique solution X(·) ∈ L^p_lF(0, T; lR^n), and the following estimate holds:

  lE ∫_0^T |X(t)|^p dt ≤ K( 1 + lE ∫_0^T |ϕ(t)|^p dt ).   (2.25)
Further, for i = 1, 2, let X_i(·) ∈ L^p_lF(0, T; lR^n) be the solutions of (2.19) corresponding to ϕ_i(·) ∈ L^p_lF(0, T; lR^n) and b_i(·), σ_i(·), θ^b_i(·), θ^σ_i(·) satisfying (H1)–(H2). Let

  Γ^b_i(t, s, X) = lE′[θ^b_i(t, s, ξ, X)]_{ξ=X} ≡ ∫_Ω θ^b_i(t, s, ω, ω′, X(ω), X(ω′)) lP(dω′),
  Γ^σ_i(t, s, X) = lE′[θ^σ_i(t, s, ξ, X)]_{ξ=X} ≡ ∫_Ω θ^σ_i(t, s, ω, ω′, X(ω), X(ω′)) lP(dω′),   i = 1, 2.

Then the following stability estimate holds:

  lE ∫_0^T |X_1(t) − X_2(t)|^p dt ≤ K{ lE ∫_0^T |ϕ_1(t) − ϕ_2(t)|^p dt
    + lE ∫_0^T ( ∫_0^t |b_1(t, s, X_1(s), Γ^b_1(t, s, X_1(s))) − b_2(t, s, X_1(s), Γ^b_2(t, s, X_1(s)))| ds )^p dt
    + lE ∫_0^T ( ∫_0^t |σ_1(t, s, X_1(s), Γ^σ_1(t, s, X_1(s))) − σ_2(t, s, X_1(s), Γ^σ_2(t, s, X_1(s)))|^2 ds )^{p/2} dt }.   (2.26)
Moreover, let (H1)′–(H2)′ hold. Then for any p ≥ 2 and any ϕ(·) ∈ C^p_lF([0, T]; lR^n), the unique solution satisfies X(·) ∈ C^p_lF([0, T]; lR^n), and estimate (2.25) is replaced by the following:

  sup_{t∈[0,T]} lE|X(t)|^p ≤ K( 1 + sup_{t∈[0,T]} lE|ϕ(t)|^p ).   (2.27)
Also, for i = 1, 2, let X_i(·) ∈ L^p_lF(0, T; lR^n) be the solutions of (2.19) corresponding to ϕ_i(·) ∈ L^p_lF(0, T; lR^n) and b_i(·), σ_i(·), θ^b_i(·), θ^σ_i(·) satisfying (H1)′–(H2)′. Then (2.26) is replaced by the following:

  sup_{t∈[0,T]} lE|X_1(t) − X_2(t)|^p ≤ K{ sup_{t∈[0,T]} lE|ϕ_1(t) − ϕ_2(t)|^p
    + sup_{t∈[0,T]} lE( ∫_0^t |b_1(t, s, X_1(s), Γ^b_1(t, s, X_1(s))) − b_2(t, s, X_1(s), Γ^b_2(t, s, X_1(s)))| ds )^p
    + sup_{t∈[0,T]} lE( ∫_0^t |σ_1(t, s, X_1(s), Γ^σ_1(t, s, X_1(s))) − σ_2(t, s, X_1(s), Γ^σ_2(t, s, X_1(s)))|^2 ds )^{p/2} }.   (2.28)
Proof. By (H2), similar to (2.7)–(2.8), making use of (2.24), for any X(·) ∈ L^p_lF(0, T; lR^n) we have

  |Γ^b(t, s, X(s))| + |Γ^σ(t, s, X(s))| ≤ L( 1 + lE|X(s)| + |X(s)| ).   (2.29)

Thus, if X(·) ∈ L^p_lF(0, T; lR^n) is a solution to (2.19) with ϕ(·) ∈ L^p_lF(0, T; lR^n), then by (2.22),

  lE|X(t)|^p ≤ 3^{p−1} lE{ |ϕ(t)|^p + ( ∫_0^t |b(t, s, X(s), Γ^b(t, s, X(s)))| ds )^p + | ∫_0^t σ(t, s, X(s), Γ^σ(t, s, X(s))) dW(s) |^p }
    ≤ 3^{p−1}{ lE|ϕ(t)|^p + lE( ∫_0^t L[ 1 + |X(s)| + |Γ^b(t, s, X(s))| ] ds )^p + lE( ∫_0^t L^2[ 1 + |X(s)| + |Γ^σ(t, s, X(s))| ]^2 ds )^{p/2} }
    ≤ K{ 1 + lE|ϕ(t)|^p + ∫_0^t lE|X(s)|^p ds }.

Consequently,

  ∫_0^t lE|X(r)|^p dr ≤ K{ 1 + ∫_0^t lE|ϕ(r)|^p dr + ∫_0^t [ ∫_0^r lE|X(s)|^p ds ] dr },   0 ≤ t ≤ T.
Using Gronwall's inequality, we obtain (2.25). Now, let δ > 0 be undetermined. For any x(·) ∈ L^p_lF(0, δ; lR^n), define

  G(x(·))(t) = ϕ(t) + ∫_0^t b(t, s, x(s), Γ^b(t, s, x(s))) ds + ∫_0^t σ(t, s, x(s), Γ^σ(t, s, x(s))) dW(s),   t ∈ [0, δ].   (2.30)
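The contraction argument below is constructive: starting from any x(·), iterating the map G produces the solution on [0, δ]. As an aside, the sketch below (an added illustration under simplifying assumptions: scalar state, deterministic free term, left-point Euler sums for both integrals, and the mean-field terms computed as sample averages as in (2.20)) makes this fixed-point construction concrete on a time grid.

```python
import numpy as np

def picard_solve(phi, b, sigma, M=100, N=400, delta=1.0, iters=8, seed=0):
    """Picard iteration x_{k+1} = G(x_k) for the MF-FSVIE (2.19) on [0, delta].

    phi      : callable t -> float (deterministic free term, for simplicity)
    b, sigma : callables (t, s, x, gamma) -> value, applied elementwise;
               gamma is the empirical mean of x over the N simulated paths.
    Returns an (N, M+1) array of approximate paths.
    """
    rng = np.random.default_rng(seed)
    dt = delta / M
    t = np.linspace(0.0, delta, M + 1)
    dW = rng.normal(0.0, np.sqrt(dt), size=(N, M))   # one Brownian increment sequence per path
    X = np.tile(phi(t), (N, 1))                      # initial guess x_0(t) = phi(t)
    for _ in range(iters):
        Xn = np.tile(phi(t), (N, 1))
        for i in range(1, M + 1):
            for j in range(i):                       # left-point sums over s = t_j < t_i
                gamma = X[:, j].mean()               # empirical version of the nonlocal term
                Xn[:, i] += b(t[i], t[j], X[:, j], gamma) * dt
                Xn[:, i] += sigma(t[i], t[j], X[:, j], gamma) * dW[:, j]
        X = Xn
    return X

# illustrative linear coefficients (assumptions only)
paths = picard_solve(phi=lambda t: 1.0 + 0.0 * t,
                     b=lambda t, s, x, g: -0.5 * x + 0.2 * g,
                     sigma=lambda t, s, x, g: 0.3,
                     M=50, N=200, iters=5)
print(paths[:, -1].mean())
```

We now return to the well-posedness argument.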
Then we have

  lE ∫_0^δ |G(x(·))(t)|^p dt ≤ K lE{ ∫_0^δ |ϕ(t)|^p dt + ∫_0^δ ( ∫_0^t [ 1 + |x(s)| + |Γ^b(t, s, x(s))| ] ds )^p dt + ∫_0^δ | ∫_0^t σ(t, s, x(s), Γ^σ(t, s, x(s))) dW(s) |^p dt }
    ≤ K{ lE ∫_0^δ |ϕ(t)|^p dt + lE ∫_0^δ |x(t)|^p dt }.
Thus, G maps L^p_lF(0, δ; lR^n) into L^p_lF(0, δ; lR^n). Next, for any x_1(·), x_2(·) ∈ L^p_lF(0, δ; lR^n), we have (making use of (2.21) and (2.23))

  lE ∫_0^δ |G(x_1(·))(t) − G(x_2(·))(t)|^p dt
    ≤ 2^{p−1}{ lE ∫_0^δ [ ∫_0^t L( |x_1(s) − x_2(s)| + |Γ^b(t, s, x_1(s)) − Γ^b(t, s, x_2(s))| ) ds ]^p dt
      + lE ∫_0^δ [ ∫_0^t L^2( |x_1(s) − x_2(s)|^2 + |Γ^σ(t, s, x_1(s)) − Γ^σ(t, s, x_2(s))|^2 ) ds ]^{p/2} dt }
    ≤ K_0 δ lE ∫_0^δ |x_1(t) − x_2(t)|^p dt,

with K_0 > 0 an absolute constant (depending only on L and p). Then, letting δ = 1/(2K_0), we see that G : L^p_lF(0, δ; lR^n) → L^p_lF(0, δ; lR^n) is a contraction. Hence, MF-FSVIE (2.19) admits a unique solution X(·) ∈ L^p_lF(0, δ; lR^n).
Next, for t ∈ [δ, 2δ], we write (2.19) as

  X(t) = ϕ̂(t) + ∫_δ^t b(t, s, X(s), Γ^b(t, s, X(s))) ds + ∫_δ^t σ(t, s, X(s), Γ^σ(t, s, X(s))) dW(s),   (2.31)

with

  ϕ̂(t) = ϕ(t) + ∫_0^δ b(t, s, X(s), Γ^b(t, s, X(s))) ds + ∫_0^δ σ(t, s, X(s), Γ^σ(t, s, X(s))) dW(s).

Then a similar argument as above applies to obtain a unique solution of (2.31) on [δ, 2δ]. It is important to note that the step-length δ > 0 is uniform. Hence, by induction, we obtain the unique solvability of (2.19) on [0, T].

Now, for i = 1, 2, let X_i(·) ∈ L^p_lF(0, T; lR^n) be the solutions of (2.19) corresponding to ϕ_i(·) ∈ L^p_lF(0, T; lR^n) and b_i(·), σ_i(·), θ^b_i(·), θ^σ_i(·) (satisfying (H1)–(H2)). Then
  lE|X_1(t) − X_2(t)|^p ≤ 3^{p−1} lE{ |ϕ_1(t) − ϕ_2(t)|^p
    + ( ∫_0^t |b_1(t, s, X_1(s), Γ^b_1(t, s, X_1(s))) − b_2(t, s, X_2(s), Γ^b_2(t, s, X_2(s)))| ds )^p
    + | ∫_0^t [ σ_1(t, s, X_1(s), Γ^σ_1(t, s, X_1(s))) − σ_2(t, s, X_2(s), Γ^σ_2(t, s, X_2(s))) ] dW(s) |^p }
  ≤ K{ lE|ϕ_1(t) − ϕ_2(t)|^p + lE ∫_0^t |X_1(s) − X_2(s)|^p ds
    + lE( ∫_0^t |b_1(t, s, X_1(s), Γ^b_1(t, s, X_1(s))) − b_2(t, s, X_1(s), Γ^b_2(t, s, X_1(s)))| ds )^p
    + lE| ∫_0^t [ σ_1(t, s, X_1(s), Γ^σ_1(t, s, X_1(s))) − σ_2(t, s, X_1(s), Γ^σ_2(t, s, X_1(s))) ] dW(s) |^p }.

Then we can obtain estimate (2.26).
The conclusions under (H1)′ –(H2)′ are easy to obtain.
2.3 Linear MF-FSVIEs and MF-BSVIEs.
Let us now look at linear MF-FSVIEs, by which we mean the following:

  X(t) = ϕ(t) + ∫_0^t { A_0(t, s)X(s) + lE′[C_0(t, s)X(s)] } ds + ∫_0^t { A_1(t, s)X(s) + lE′[C_1(t, s)X(s)] } dW(s),   t ∈ [0, T].   (2.32)
For such an equation, we introduce the following hypotheses.

(L1) The maps A_0, A_1 : ∆^* × Ω → lR^{n×n} and C_0, C_1 : ∆^* × Ω^2 → lR^{n×n} are measurable and uniformly bounded. For any t ∈ [0, T], s ↦ (A_0(t, s), A_1(t, s)) is lF-progressively measurable on [0, t], and s ↦ (C_0(t, s), C_1(t, s)) is lF^2-progressively measurable on [0, t].

(L1)′ In addition to (L1), the map t ↦ (A_0(t, s, ω), A_1(t, s, ω), C_0(t, s, ω, ω′), C_1(t, s, ω, ω′)) is continuous on [s, T].

Clearly, by defining

  b(t, s, ω, x, γ) = A_0(t, s, ω)x + γ,   θ^b(t, s, ω, ω′, x, x′) = C_0(t, s, ω, ω′)x′,
  σ(t, s, ω, x, γ′) = A_1(t, s, ω)x + γ′,   θ^σ(t, s, ω, ω′, x, x′) = C_1(t, s, ω, ω′)x′,

we see that (2.32) is a special case of (2.19). Moreover, (L1) implies (H1)–(H2), and (L1)′ implies (H1)′–(H2)′. Hence, we have the following corollary of Theorem 2.6.
Corollary 2.7. Let (L1) hold, and p ≥ 2. Then for any ϕ(·) ∈ L^p_lF(0, T; lR^n), (2.32) admits a unique solution X(·) ∈ L^p_lF(0, T; lR^n), and estimate (2.25) holds. Further, let p > 2. If, for i = 1, 2, A^i_0(·), A^i_1(·), C^i_0(·), C^i_1(·) satisfy (L1), ϕ_i(·) ∈ L^p_lF(0, T; lR^n), and X_i(·) ∈ L^p_lF(0, T; lR^n) are the corresponding solutions to (2.32), then for any r ∈ (2, p),

  lE ∫_0^T |X_1(t) − X_2(t)|^r dt ≤ K lE ∫_0^T |ϕ_1(t) − ϕ_2(t)|^r dt
    + K( 1 + lE ∫_0^T |ϕ_1(t)|^p dt )^{r/p} ∫_0^T { [ lE( ∫_0^t |A^1_0(t, s) − A^2_0(t, s)|^{r/(r−1)} ds )^{(r−1)p/(p−r)} ]^{(p−r)/p}
    + [ lE( ∫_0^t |A^1_1(t, s) − A^2_1(t, s)|^{2r/(r−2)} ds )^{(r−2)p/(2(p−r))} ]^{(p−r)/p}
    + [ lE^2( ∫_0^t |C^1_0(t, s) − C^2_0(t, s)|^{r/(r−1)} ds )^{(r−1)p/(p−r)} ]^{(p−r)/p}
    + [ lE^2( ∫_0^t |C^1_1(t, s) − C^2_1(t, s)|^{2r/(r−2)} ds )^{(r−2)p/(2(p−r))} ]^{(p−r)/p} } dt.   (2.33)
Moreover, let (L1)′ hold. Then for any ϕ(·) ∈ C^p_lF([0, T]; lR^n), (2.32) admits a unique solution X(·) ∈ C^p_lF([0, T]; lR^n), and estimate (2.27) holds. Now, for i = 1, 2, let A^i_0(·), A^i_1(·), C^i_0(·), C^i_1(·) satisfy (L1)′, ϕ_i(·) ∈ C^p_lF([0, T]; lR^n), and X_i(·) ∈ C^p_lF([0, T]; lR^n) be the corresponding solutions to (2.32). Then, for any 2 < r < p,

  sup_{t∈[0,T]} lE|X_1(t) − X_2(t)|^r ≤ K sup_{t∈[0,T]} lE|ϕ_1(t) − ϕ_2(t)|^r
    + K( 1 + sup_{t∈[0,T]} lE|ϕ_1(t)|^p )^{r/p} { sup_{t∈[0,T]} [ lE( ∫_0^t |A^1_0(t, s) − A^2_0(t, s)|^{r/(r−1)} ds )^{(r−1)p/(p−r)} ]^{(p−r)/p}
    + sup_{t∈[0,T]} [ lE( ∫_0^t |A^1_1(t, s) − A^2_1(t, s)|^{2r/(r−2)} ds )^{(r−2)p/(2(p−r))} ]^{(p−r)/p}
    + sup_{t∈[0,T]} [ lE^2( ∫_0^t |C^1_0(t, s) − C^2_0(t, s)|^{r/(r−1)} ds )^{(r−1)p/(p−r)} ]^{(p−r)/p}
    + sup_{t∈[0,T]} [ lE^2( ∫_0^t |C^1_1(t, s) − C^2_1(t, s)|^{2r/(r−2)} ds )^{(r−2)p/(2(p−r))} ]^{(p−r)/p} }.   (2.34)
Proof. We need only to prove the stability estimate. Let X_i(·) ∈ L^p_lF(0, T; lR^n) be the solutions to the linear MF-FSVIEs corresponding to the coefficients (A^i_0(·), C^i_0(·), A^i_1(·), C^i_1(·)) satisfying (L1) and free terms ϕ_i(·) ∈ L^p_lF(0, T; lR^n). Then we have

  lE ∫_0^T |X_i(s)|^p ds ≤ K( 1 + lE ∫_0^T |ϕ_i(s)|^p ds ).

Now, for any 2 < r < p,

  lE ∫_0^T |X_1(t) − X_2(t)|^r dt
    ≤ K{ lE ∫_0^T |ϕ_1(t) − ϕ_2(t)|^r dt
      + lE ∫_0^T ( ∫_0^t [ |A^1_0(t, s) − A^2_0(t, s)||X_1(s)| + lE′[ |C^1_0(t, s) − C^2_0(t, s)||X_1(s)| ] ] ds )^r dt
      + lE ∫_0^T ( ∫_0^t [ |A^1_1(t, s) − A^2_1(t, s)|^2 |X_1(s)|^2 + lE′[ |C^1_1(t, s) − C^2_1(t, s)|^2 |X_1(s)|^2 ] ] ds )^{r/2} dt }
    ≤ K{ lE ∫_0^T |ϕ_1(t) − ϕ_2(t)|^r dt + lE ∫_0^T ( ∫_0^t |A^1_0(t, s) − A^2_0(t, s)||X_1(s)| ds )^r dt
      + lE^2 ∫_0^T ( ∫_0^t |C^1_0(t, s) − C^2_0(t, s)||X_1(s)| ds )^r dt
      + lE ∫_0^T ( ∫_0^t |A^1_1(t, s) − A^2_1(t, s)|^2 |X_1(s)|^2 ds )^{r/2} dt
      + lE^2 ∫_0^T ( ∫_0^t |C^1_1(t, s) − C^2_1(t, s)|^2 |X_1(s)|^2 ds )^{r/2} dt }
    ≤ K{ lE ∫_0^T |ϕ_1(t) − ϕ_2(t)|^r dt
      + lE ∫_0^T ( ∫_0^t |A^1_0(t, s) − A^2_0(t, s)|^{r/(r−1)} ds )^{r−1} ( ∫_0^t |X_1(s)|^r ds ) dt
      + lE^2 ∫_0^T ( ∫_0^t |C^1_0(t, s) − C^2_0(t, s)|^{r/(r−1)} ds )^{r−1} ( ∫_0^t |X_1(s)|^r ds ) dt
      + lE ∫_0^T ( ∫_0^t |A^1_1(t, s) − A^2_1(t, s)|^{2r/(r−2)} ds )^{(r−2)/2} ( ∫_0^t |X_1(s)|^r ds ) dt
      + lE^2 ∫_0^T ( ∫_0^t |C^1_1(t, s) − C^2_1(t, s)|^{2r/(r−2)} ds )^{(r−2)/2} ( ∫_0^t |X_1(s)|^r ds ) dt }
    ≤ K lE ∫_0^T |ϕ_1(t) − ϕ_2(t)|^r dt + K( 1 + lE ∫_0^T |ϕ_1(t)|^p dt )^{r/p} ∫_0^T { [ lE( ∫_0^t |A^1_0(t, s) − A^2_0(t, s)|^{r/(r−1)} ds )^{(r−1)p/(p−r)} ]^{(p−r)/p}
      + [ lE( ∫_0^t |A^1_1(t, s) − A^2_1(t, s)|^{2r/(r−2)} ds )^{(r−2)p/(2(p−r))} ]^{(p−r)/p}
      + [ lE^2( ∫_0^t |C^1_0(t, s) − C^2_0(t, s)|^{r/(r−1)} ds )^{(r−1)p/(p−r)} ]^{(p−r)/p}
      + [ lE^2( ∫_0^t |C^1_1(t, s) − C^2_1(t, s)|^{2r/(r−2)} ds )^{(r−2)p/(2(p−r))} ]^{(p−r)/p} } dt,

where the third inequality follows from Hölder's inequality in s and the last one from Hölder's inequality in ω together with the estimate on lE∫_0^T |X_1(s)|^p ds.

Then (2.33) follows. The case where (L1)′ holds can be proved similarly.
We point out that linear MF-FSVIE (2.32) is general enough in some sense. To see this, let us formally look at the variational equation of (2.19). More precisely, let X^δ(·) be the unique solution of (2.19) with ϕ(·) replaced by ϕ(·) + δϕ̄(·). We formally let

  X̄(t) = lim_{δ→0} (X^δ(t) − X(t))/δ.

Then X̄(·) should satisfy the following linear MF-FSVIE:

  X̄(t) = ϕ̄(t) + ∫_0^t { b_x(t, s)X̄(s) + b_γ(t, s) lE′[ θ^b_x(t, s)X̄(s, ω) + θ^b_{x′}(t, s)X̄(s, ω′) ] } ds
    + ∫_0^t { σ_x(t, s)X̄(s) + σ_γ(t, s) lE′[ θ^σ_x(t, s)X̄(s, ω) + θ^σ_{x′}(t, s)X̄(s, ω′) ] } dW(s),   (2.35)

where (with a slight misuse of γ)

  b_x(t, s) = b_x(t, s, ω, X(s, ω), Γ^b(t, s, X(s, ω))),   θ^b_x(t, s) = θ^b_x(t, s, ω, ω′, X(s, ω), X(s, ω′)),
  b_γ(t, s) = b_γ(t, s, ω, X(s, ω), Γ^b(t, s, X(s, ω))),   θ^b_{x′}(t, s) = θ^b_{x′}(t, s, ω, ω′, X(s, ω), X(s, ω′)),
  σ_x(t, s) = σ_x(t, s, ω, X(s, ω), Γ^σ(t, s, X(s, ω))),   θ^σ_x(t, s) = θ^σ_x(t, s, ω, ω′, X(s, ω), X(s, ω′)),
  σ_γ(t, s) = σ_γ(t, s, ω, X(s, ω), Γ^σ(t, s, X(s, ω))),   θ^σ_{x′}(t, s) = θ^σ_{x′}(t, s, ω, ω′, X(s, ω), X(s, ω′)).   (2.36)
¯ X(t) = ϕ(t) ¯ + +
0
Z t n
io
h
¯ ¯ bx (t, s) + bγ (t, s)lE′ [θxb (t, s)] X(s) + lE′ bγ (t, s)θxb ′ (t, s)X(s)
h
ds
io
¯ ω) + lE′ σγ (t, s)θxσ′ (t, s)X(s, ¯ ω′ ) σx (t, s) + σγ (t, s)lE′ [θxσ (t, s)] X(s,
0
dW (s), (2.37)
which is a special case of (2.32). Mimicking the above, we see that general linear MF-BSVIE should take the following form: Y (t) = ψ(t) + h
Z
t
T
¯0 (t, s)Z(t, s) + C¯0 (t, s)Z(s, t) A¯0 (t, s)Y (s) + B i
¯1 (t, s)Z(t, s) + C¯1 (t, s)Z(s, t) ds − +lE A¯1 (t, s)Y (s) + B ′
For the coefficients, we should adopt the following hypothesis.
Z
(2.38)
T
Z(t, s)dW (s).
t
(L2) The maps ¯0 , C¯0 : ∆ × Ω → lRn×n , A¯0 , B
¯1 , C¯1 : ∆ × Ω2 → lRn×n A¯1 , B
¯0 (t, s), C¯0 (t, s)) are measurable and uniformly bounded. Moreover, for any t ∈ [0, T ], s 7→ (A¯0 (t, s), B ¯1 (t, s), C¯1 (t, s)) is lF2 -progressively is lF-progressively measurable on [t, T ], and s 7→ (A¯1 (t, s), B measurable on [t, T ]. We expect that under (L2), for reasonable ψ(·), the above (2.38) will have a unique adapted M-solution. Such a result will be a consequence of the main result of the next section.
3
Well-posedness of MF-BSVIEs.
In this section, we are going to establish the well-posedness of our MF-BSVIEs. To begin with, let us introduce the following hypothesis. (H3)q The map θ : ∆ × Ω2 × lR6n → lRm satisfies (H0)q . The map g : ∆ × Ω × lR3n × lRm → lRn is measurable and for all (t, y, z, zb, γ) ∈ [0, T ] × lR3n × lRm , the map (s, ω) 7→ g(t, s, ω, y, z, zb, γ) is lF-progressively measurable. Moreover, there exist constants L > 0 and q ≥ 2 such that |g(t, s, ω, y1 , z1 , zb1 , γ1 ) − g(t, s, ω, y2 , z2 , zb2 , γ2 )|
(3.1)
≤ L |y1 − y2 | + |z1 − z2 | + |zb1 − zb2 | + |γ1 − γ2 | , and
∀(t, s, ω) ∈ ∆ × Ω, (yi , zi , zbi , γi ) ∈ lR3n × lRm , i = 1, 2,
2
|g(t, s, ω, y, z, zb, γ)| ≤ L 1 + |y| + |z| + |zb| q + |γ| ,
∀(t, s, ω) ∈ ∆ × Ω, (y, z, zb, γ) ∈ lR3n × lRm .
(3.2)
Similar to (H0)∞ in the previous section, (H3)∞ is understood that (2.5) holds and (3.2) is replaced by the following:
|g(t, s, ω, y, z, zb, γ)| ≤ L 1 + |y| + |z| + |γ| ,
∀(t, s, ω) ∈ ∆ × Ω, (y, z, zb, γ) ∈ lR3n × lRm . 16
(3.3)
3.1
A special MF-BSVIE.
In this subsection, we firstly consider the following special MF-BSVIE: Y (t) = ψ(t) +
Z
T
e s, Z(t, s)))ds − ge(t, s, Z(t, s), Γ(t,
t
where
Z
T
Z(t, s)dW (s),
t ∈ [0, T ],
(3.4)
t
ge(t, s, Z, γ) = g(t, s, y(s), Z, z(s, t), γ),
h i e s, Z) = Γ(t, s, y(s), Z, z(s, t)) ≡ lE′ θ(t, s, y(s), Z, z(s, t), y ′ , z ′ , zb′ ) Γ(t,
(y ′ ,z ′ ,b z ′ )=(y(s),Z,z(s,t))
for some given (y(·), z(· , ·)) ∈ Mp [0, T ]. Therefore,
,
e s, Z(t, s))) = g(t, s, y(s), Z(t, s), z(s, t), Γ(t, s, y(s), Z(t, s), z(s, t))). ge(t, s, Z(t, s), Γ(t,
e Note that we may take much more general ge(·) and Γ(·). But the above is sufficient for our purpose, and by restricting such a case, we avoid stating a lengthy assumption similar to (H3)q . We now state and prove the following result concerning MF-BSVIE (3.4).
Proposition 3.1. Let (H3)q hold. Then for any p > 1 and ψ(·) ∈ LpFT (0, T ; lRn ), MF-BSVIE (3.4) admits a unique M-solution (Y (·), Z(· , ·)) ∈ Mp [0, T ]. Moreover, the following estimate holds: h
Z
p
lE |Y (t)| +
T
2
|Z(t, s)| ds
t
p i 2
h
p
≤ KlE |ψ(t)| +
Z
T
t
e s, 0))|ds |ge(t, s, 0, Γ(t,
Further, for i = 1, 2, let ψi (·) ∈ LpFT (0, T ; lRn ), (yi (·), zi (· , ·)) ∈ Mp [0, T ], and
p i
(3.5)
.
e i (t, s, Z(t, s)) = gi (t, s, yi (s), Z(t, s), zi (s, t), Γi (t, s, yi (s), Z(t, s), zi (s, t)), gei (t, s, Z(t, s), Γ h i b = lE′ θi (t, s, yi (s), Z, zi (s, t), y ′ , z ′ , zb′ ) Γi (t, s, Y, Z, Z) ′ ′ ′ (y ,z ,ˆ z )=(yi (s),Z,zi (s,t))
with gi (·) and θi (·) satisfying (H3)q . Then the corresponding M-solutions (Yi (·), Zi (·)) satisfy the following stability estimate: h
Z
lE |Y1 (t) − Y2 (t)|p + +
Z
t
T
T
|Z1 (t, s) − Z2 (t, s)|2 ds
t
p i 2
h
≤ KlE |ψ1 (t) − ψ2 (t)|p
e 1 (t, s, Z1 (t, s)) − ge2 (t, s, Z1 (t, s), Γ e 2 (t, s, Z1 (t, s))|ds |ge1 (t, s, Z1 (t, s), Γ
p i
(3.6)
.
Proof. Fix t ∈ [0, T ). Consider the following MF-BSDE (parameterized by t): Z
η(r) = ψ(t) +
T
r
e s, ζ(s)))ds − ge(t, s, ζ(s), Γ(t,
Z
T
ζ(s)dW (s),
r ∈ [t, T ].
r
If p ∈ (1, 2], it follows from (H3)q that lE
Z
0
T
Z
≤ KlE
Z
T
t T
0
≤ KlE
Z
0
T
e s, 0))|ds |ge(t, s, 0, Γ(t,
Z
Z
t
t T
T
p
dt
(|y(s)| + |z(s, t)| + lE|y(s)| + lE|z(s, t)|)ds
|y(s)|p dsdt + KlE
Z
T 0
17
Z
s 0
|z(s, t)|2 dt
p 2
p
dt + K
ds + K < ∞,
(3.7)
As to the case of p = q > 2, similarly we have lE
Z
0
T
Z
≤ KlE
Z
T
t T
0
≤ KlE
Z
0
T
q
e s, 0))|ds dt |ge(t, s, 0, Γ(t,
Z
T
2
t T
Z
q
2
(|y(s)| + |z(s, t)| q + lE|y(s)| + lE|z(s, t)| q )ds dt + K q
|y(s)| dsdt + KlE
Z
T
0
t
Z
T
|z(s, t)|2 dsdt + K < ∞.
t
Similar to a standard argument for BSDEs, making use of contraction mapping theorem, we can show that the above MF-BSDE admits a unique adapted solution
(η(·), ζ(·)) ≡ η(· ; t, ψ(t)), ζ(· ; t, ψ(t)) . Moreover, the following estimate holds: lE
h
p
sup |η(r; t, ψ(t))| + r∈[t,T ]
h
≤ KlE |ψ(t)|p +
Z
Z
T
|ζ(s; t, ψ(t))|2 ds
t
T
t
e s, 0))|ds |ge(t, s, 0, Γ(t,
p i
p i 2
(3.8)
.
Further, for i = 1, 2, let ψi (·) ∈ LpFT (0, T ; lRn ), (yi (·), zi (· , ·)) ∈ Mp [0, T ], and e i (t, s, Z(t, s)) = gi (t, s, yi (s), Z(t, s), zi (s, t), Γi (t, s, yi (s), Z(t, s), zi (s, t)), gei (t, s, Z(t, s), Γ h i b = lE′ θi (t, s, yi (s), Z, zi (s, t), y ′ , z ′ , zb′ ) Γi (t, s, Y, Z, Z) (y ′ ,z ′ ,ˆ z ′ )=(yi (s),Z,zi (s,t))
with gi (·) and θi (·) satisfying (H3)q . Then let (ηi (·), ζi (·)) be the adapted solutions of the corresponding BSDE. It follows that lE
h
p
sup |η1 (r) − η2 (r)| r∈[t,T ]
h
+ lE
Z
≤KlE |ψ1 (t) − ψ2 (t)|p+ Now, we define
i
t
Z
t
T
T
|ζ1 (s) − ζ2 (s)|2 ds
p 2
e 1 (t, s, ζ1 (s)) − ge2 (t, s, ζ1 (s), Γ e 2 (t, s, ζ1 (s))|ds |ge1 (t, s, ζ1 (s), Γ
Y (t) = η(t; t, ψ(t)),
Z(t, s) = ζ(s; t, ψ(t)),
p i
(3.9)
.
(t, s) ∈ ∆,
and Z(t, s) on ∆c through the martingale representation: Y (t) = lEY (t) +
Z
t
Z(t, s)dW (s),
t ∈ [0, T ].
0
Then (Y (·), Z(· , ·)) ∈ Mp [0, T ] is the unique M-solution to (3.4). Estimates (3.5) and (3.6) follows easily from (3.8) and (3.9), respectively. Note that the cases that we are interested in are p = 2, q. We will use them below.
18
3.2
The general case.
Now, we consider our MF-BSVIEs. For convenience, let us rewrite (1.10) here: Y (t) = ψ(t) +
Z
−
T
g(t, s, Y (s), Z(t, s), Z(s, t), Γ(t, s, Y (s), Z(t, s), Z(s, t)))ds t
Z
(3.10)
T
Z(t, s)dW (s),
t ∈ [0, T ],
t
with h
i
Γ(t, s, Y (s), Z(t, s), Z(s, t)) = lE′ θ(t, s, Y (s), Z(t, s), Z(s, t)) =
Z
θ(t, s, ω ′ , ω, Y (s, ω ′ ), Z(t, s, ω ′ ), Z(s, t, ω ′ ), Y (s, ω), Z(t, s, ω), Z(s, t, ω))lP(dω ′ ).
(3.11)
Ω
Our main result of this section is the following. Theorem 3.2. Let (H3)q hold with 2 ≤ q < ∞. Then for any ψ(·) ∈ LqFT (0, T ; lRn ), MFBSVIE (3.10) admits a unique adapted M-solution (Y (·), Z(· , ·)) ∈ Mq [0, T ], and the following estimate holds: k(Y (·), Z(· , ·))kMq [0,T ] ≤ K 1 + kψ(·)kLq (0,T ;lRn ) . (3.12) FT
Moreover, for i = 1, 2, let gi (·) and θi (·) satisfy (H3)q , and ψi (·) ∈ LqFT (0, T ; lRn ). Let (Yi (·), Zi (· , ·)) ∈ Mq [0, T ] be the corresponding adapted M-solutions. Then k(Y1 (·), Z1 (· , ·)) − (Y2 (·), Z2 (· , ·))k2M2 [0,T ] ≤ KlE
nZ
T
|ψ1 (t) − ψ2 (t)|2 dt +
0
Z
0
T
Z
T
t
2
o
(3.13)
|(g1 − g2 )(t, s)|ds dt ,
where (g1 − g2 )(t, s) = g1 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t), Γ1 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t))) −g2 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t), Γ2 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t))), with Γi (t, s, Yi (s), Zi (t, s), Zi (s, t)) h
i
= lE′ θi (t, s, Yi (s), Zi (t, s), Zi (s, t), y, z, zb)
≡
Z
Ω
(y,z,ˆ z )=(Yi (s),Zi (t,s),Zi (s,t))
θi (t, s, ω ′ , ω, Yi (s, ω ′ ), Zi (t, s, ω ′ ), Zi (s, t, ω ′ ), Yi (s, ω), Zi (t, s, ω), Zi (s, t, ω))lP(dω ′ ).
Proof. We split the proof into several steps. Step 1. Existence and uniqueness of M-solutions of (3.10) in Mp [0, T ] with p ∈ (1, 2]. Let ψ(·) ∈ LpFT (0, T ; lRn ) be given. For any (y(·), z(· , ·)) ∈ Mp [0, T ], we consider the following MF-BSVIE Y (t) = ψ(t) +
Z
T
g(t, s, y(s), Z(t, s), z(s, t), Γ(t, s, y(s), Z(t, s), z(s, t)))ds
t
−
Z
(3.14)
T
Z(t, s)dW (s),
t ∈ [0, T ].
t
19
According to Proposition 3.1, there exists a unique adapted M-solution (Y (·), Z(· , ·)) ∈ Mp [0, T ]. Moreover, the following estimate holds: (making use of (2.7 ) and (2.8 )) n
p
lE |Y (t)| + n
Z
T
t
p
≤ KlE |ψ(t)| + n
|Z(t, s)|2 ds
Z
t
p
≤ KlE 1 + |ψ(t)| + n
T
≤ KlE 1+|ψ(t)| +
hZ
T
n
p
≤ KlE 1 + |ψ(t)| +
ip o
|y(s)| + |z(s, t)| + lE|y(s)| + lE|z(s, t)| + |y(s)| + |z(s, t)| ds
t
n
po
|y(s)| + |z(s, t)| + |Γ(t, s, y(s), 0, z(s, t))| ds
h Z T
≤ KlE 1 + |ψ(t)|p +
2
|g(t, s, y(s), 0, z(s, t), Γ(t, s, y(s), 0, z(s, t))|ds
t
p
p o
hZ Z
T
|y(s)| + |z(s, t)| ds
t T
Z
p
|y(s)| ds +
t
T
ip o
ip o
(3.15)
o
|z(s, t)|p ds .
t
Consequently, (making use of (2.15) for (y(·), z(· , ·)) ∈ Mp [0, T ]) k(Y (·), Z(· , ·))kpHp [0,T ] ≡ lE n
≤ KlE 1 + n
≤ KlE 1 + n
≤ KlE 1 + n
≤ KlE 1 + n
Z
T
|ψ(t)|p dt +
T
|ψ(t)|p dt +
0
Z
Z
T
|Y (t)|p dt +
0 T Z T
|ψ(t)|p dt +
Z
|y(t)|p dt + |y(t)|p dt +
0
T
|ψ(t)|p dt +
Z
T
|y(t)|p dt
0
0
≤ K 1 + kψ(·)kpLp
FT
n
(0,T ;lR
Z
T
0
T
T
|y(s)|p dsdt +
t
T
Z
0
0
T
0
Z
Z
0
0
Z
nZ
o
Z
0
Z
Z Z
0
t
T
|Z(t, s)|2 ds
0 T Z T t
|z(t, s)|p dsdt
Z
t
|z(t, s)|2 ds
0
2
dt
|z(s, t)|p dsdt
0
T
p
o
p 2
dt
o
o
o
o
+ k(y(·), z(· , ·))kMp [0,T ] . )
Hence, if we define Θ(y(·), z(· , ·)) = (Y (·), Z(· , ·)), then Θ maps from Mp [0, T ] to itself. We now show that the mapping Θ is contractive. To this end, take any (yi (·), zi (· , ·)) ∈ Mp [0, T ] (i = 1, 2), and let (Yi (·), Zi (· , ·)) = Θ(yi (·), zi (· , ·)).
20
Then by Proposition 3.1, we have (note (2.8)) h
lE |Y1 (t) − Y2 (t)|p + Z
T
≤ KlE
Z
t
T
|Z1 (t, s) − Z2 (t, s)|2 ds
p i 2
|g(t, s, y1 (s), Z1 (t, s), z1 (s, t), Γ(t, s, y1 (s), Z1 (t, s), z1 (s, t)))
t
−g(t, s, y2 (s), Z1 (t, s), z2 (s, t), Γ(t, s, y2 (s), Z1 (t, s), z2 (s, t)))|ds hZ
T
≤ KlE
t
|y1 (s) − y2 (s)| + |z1 (s, t) − z2 (s, t)|
|y1 (s) − y2 (s)| + |z1 (s, t) − z2 (s, t)|
|y1 (s) − y2 (s)| + |z1 (s, t) − z2 (s, t)| ds
p
+|Γ(t, s, y1 (s), Z1 (t, s), z1 (s, t)) − Γ(t, s, y2 (s), Z1 (t, s), z2 (s, t))| ds hZ
T
≤ KlE
t
ip
+|y1 (s) − y2 (s)| + |z1 (s, t)) − z2 (s, t)| + lE|y1 (s) − y2 (s)| + lE|z1 (s, t) − z2 (s, t)| ds hZ
T
≤ KlE
t
hZ
T
Z
p
|y1 (s) − y2 (s)| ds +
≤ KlE
t
T
ip
|z1 (s, t) − z2 (s, t)|ds
t
ip
p i
.
Hence, kΘ(y1 (·), z1 (· , ·)) − Θ(y2 (·), z2 (· , ·))kpMp [0,T ] ≡
Z
0
≤K
T
h
βt
p
e lE |Y1 (t) − Y2 (t)| + Z
T
eβt lE
0
hZ TZ
t
s
= KlE
0
0
h 1Z
≤ KlE
β Z
0 T
T
hZ
T
Z
β
T
|Z1 (t, s) − Z2 (t, s)|2 ds
t
|y1 (s) − y2 (s)|p ds +
Z
eβt dt |y1 (s)−y2 (s)|p ds+
βt
p
Z
T
βt
e |y1 (t)−y2 (t)| dt+ e 0
Z
T
t
T
eβt
0
Z
t
e
2
dt
|z1 (s, t) − z2 (s, t)|ds Z
T
t
T
p i
−qβs p
Z
β
β
p i
dt p
e− p s e p s |z1 (s, t)−z2 (s, t)|ds dt
ds
p Z q
Z
t
T
i
i
eβs |z1 (s, t) − z2 (s, t)|p ds dt
(3.16)
p p T s K q lE eβs |z1 (s, t) − z2 (s, t)|p dtds lE eβt |y1 (t) − y2 (t)|p dt + K β βq 0 0 0 Z T 1 p Z T Z s p K q βt p βs ≤ lE e |y1 (t) − y2 (t)| dt + K lE e |z1 (s, t) − z2 (s, t)|2 dt 2 ds β β 0 0 0 1 p Z T K q +K lE eβt |y1 (t) − y2 (t)|p dt ≤ β β 0 K 1 p q k(y1 (·), z1 (· , ·)) − (y2 (·), z2 (· , ·))kpMp [0,T ] . ≤ +K β β β
≤
Since the constant K > 0 appears in the right hand side of the above is independent of β, by choosing β > 0 large, we obtain that Θ is a contraction. Hence, there exists a unique fixed point (Y (·), Z(· , ·)) ∈ Mp [0, T ], which is the unique adapted M-solution of (1.10). Step 2. The adapted M-solution (Y (·), Z(· , ·)) ∈ Mq [0, T ] if ψ(·) ∈ LqFT (0, T ; lRn ). Let ψ(·) ∈ LqFT (0, T ; lRn ) ⊆ L2FT (0, T ; lRn ). According to Step 1, there exists a unique adapted M-solution (Y (·), Z(· , ·)) ∈ M2 [0, T ]. We want to show that in the current case, (Y (·), Z(· , ·)) 21
is actually in Mq [0, T ]. To show this, for the obtained adapted M-solution (Y (·), Z(· , ·)), let us consider the following MF-BSVIE: Ye (t) = ψ(t) +
Z
T
t
Z
−
e s), Z(s, t), Γ(t, s, Ye (s), Z(t, e s), Z(s, t))ds g(t, s, Ye (s), Z(t,
T
t
e s)dW (s), Z(t,
(3.17)
t ∈ [0, T ].
For any (y(·), z(· , ·)) ∈ Mp [0, T ], by Proposition 3.1 (with p = q), the following MF-BSVIE admits e , ·)) ∈ Mq [0, T ]: a unique adapted M-solution (Ye (·), Z(· Ye (t) = ψ(t) +
Z
−
T
t
Z
e s), Z(s, t), Γ(t, s, y(s), Z(t, e s), Z(s, t))ds g(t, s, y(s), Z(t,
T
t
e s)dW (s), Z(t,
(3.18)
t ∈ [0, T ].
e e , ·)), then Θ e : Mq [0, T ] → Mq [0, T ]. We now show Thus, if we define Θ(y(·), z(· , ·)) = (Ye (·), Z(· q e is a contraction on M [0, T ] (Compare that Θ in Step 1 is a contraction on M2 [0, T ]). To that Θ this end, let (yi (·), zi (· , ·)) ∈ Mq [0, T ] and let e i (·), zi (· , ·)), (Yei (·), Zei (· , ·)) = Θ(y
i = 1, 2.
Then by Proposition 3.1 (with p = q), we have h
lE |Ye1 (t) − Ye2 (t)|q + Z T
Z
t
T
|Ze1 (t, s) − Ze2 (t, s)|2 ds
q i 2
|g(t, s, y1 (s), Ze1 (t, s), Z(s, t), Γ(t, s, y1 (s), Ze1 (t, s), Z(s, t)))
≤ KlE
t
−g(t, s, y2 (s), Ze1 (t, s), Z(s, t), Γ(t, s, y2 (s), Ze1 (t, s), Z(s, t)))|ds hZ
T
≤ KlE
t
hZ
≤ KlE
t
T
|y1 (s) − y2 (s)| + lE|y1 (s) − y2 (s)| ds
lE
|y1 (s) − y2 (s)| + |Γ(t, s, y1 (s), Ze1 (t, s), Z(s, t)) − Γ(t, s, y2 (s), Ze1 (t, s), Z(s, t))| ds
Then
q
hZ
T
0
βt
e
≤ KlE K ≤ β
e |Ye1 (t) − Ye2 (t)|q dt + Z T Z T βt
Z
0
0 T
t
Z
T
βt
e
0
iq
Z
t
q
≤ KlE
T
Z
T
t
|y1 (s) − y2 (s)|q ds.
|Ze1 (t, s) − Ze2 (t, s)|2 ds
|y1 (s) − y2 (s)| dsdt = KlE
Z
0
T
Z
0
s
q
2
dt
iq
i
eβt |y1 (s) − y2 (s)|q dtds
eβt |y1 (t) − y2 (t)|q dt.
e is a contraction on Mq [0, T ] (with the equivalent norm). Hence, (3.18) admits a unique Hence, Θ e , ·)) ∈ Mq [0, T ] ⊆ M2 [0, T ]. Then by the uniqueness of adapted adapted M-solution (Ye (·), Z(· solutions in M2 [0, T ] of (3.18), it is necessary that e , ·)) ∈ Mq [0, T ]. (Y (·), Z(· , ·)) = (Ye (·), Z(·
22
Step 3. Some estimates. According to Proposition 3.1, we have n
lE |Y (t)|q + n
Z
T t
|Z(t, s)|2 ds
Z
q
≤ KlE |ψ(t)| +
T
q
≤ KlE 1 + |ψ(t)| +
hZ
T
t
n
q
≤ KlE 1 + |ψ(t)| + n
2
|g(t, s, Y (s), 0, Z(s, t), Γ(t, s, Y (s), 0, Z(s, t))|ds
t
n
q o
q
≤ KlE 1 + |ψ(t)| +
hZ
Z
T
t T
q o
2
|Y (s)| + |Z(s, t)| q + |Γ(t, s, Y (s), 0, Z(s, t))| ds 2
|Y (s)| + |Z(s, t)| q ds Z
|Y (s)|q ds +
T
iq o 2
|Z(s, t)| q ds
t
t
q o
iq o
.
Then lE
nZ
T
0
≤ KlE
Z
eβt |Y (t)|q dt +
nZ
T
T
Z
eβt
0
T t
Z
eβt 1 + |ψ(t)|q dt +
0
|Z(t, s)|2 ds T
eβt
0
Z
T
q
2
dt
o
|Y (s)|q dsdt +
Z
T
eβt
0
t
Z
T
2
q
t
Note that lE
Z
T
0
≤ lE
eβt
Z
q
2
|Z(s, t)| q ds dt = lE
t
Z
T
eβt
0
= lE
T
Z
T
eβt
Z
T
β s − q−1
e
ds
T
eβt
0
q−1 Z
T
Z
T
− βq s
e
t
e
βT − q−1
−βt q−1
−e
β
2
s
iq−1 Z
T
q
e q |Z(s, t)| q ds dt
eβs |Z(s, t)|2 ds dt
t
t
hq − 1
Z
eβs |Z(s, t)|2 ds dt
β 0 t Z TZ s Z T (q − 1)q−1 (q − 1)q−1 βs 2 e |Z(s, t)| dtds ≤ eβt |Y (t)|2 dt. ≤ lE lE β q−1 β q−1 0 0 0 Hence, lE
nZ
T
βt
q
e |Y (t)| dt +
0
nZ
≤ KlE
0
T
βt
e
Z
T
βt
e
0
Z
T
|Z(t, s)|2 ds
t
1 1 + |ψ(t)| dt + β q
Z
T
βt
q
2
dt
q
o
e |Y (t)| dt +
0
1 β q−1
Z
T
o
eβt |Y (t)|2 dt .
0
By choosing β > 0 large, we obtain lE
nZ
T
eβt |Y (t)|q dt +
0
Z
0
T
eβt
Z
T
|Z(t, s)|2 ds
t
Then (3.12) follows.
23
q
2
o
dt ≤ KlE 1 +
Z
o
|Z(s, t)| q ds dt .
T 0
eβt |ψ(t)|q dt .
Finally, let ψi (·) ∈ LqFT (0, T ; lRn ), gi (·) and θi (·) satisfy (H3)q . Observe that Y1 (t) = ψ1 (t) + − = ψ1 (t) +
Z
T
T
g2 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t), Γ2 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t)))ds T
g2 (t, s, Y2 (s), Z1 (t, s), Z2 (s, t), Γ2 (t, s, Y2 (s), Z1 (t, s), Z2 (s, t)))ds
t
Z
T
g2 (t, s, Y2 (s), Z1 (t, s), Z2 (s, t), Γ2 (t, s, Y2 (s), Z1 (t, s), Z2 (s, t)))ds
t
− ≡ ψb1 (t) +
Z
Z1 (t, s)dW (s)
g2 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t), Γ2 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t)))ds
t
Z
+
T
T
t
−
g1 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t), Γ1 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t)))ds
t
Z
g1 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t), Γ1 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t)))ds
t
Z
+
T
t
Z
−
Z
Z
T
Z1 (t, s)dW (s)
t
T
g2 (t, s, Y2 (s), Z1 (t, s), Z2 (s, t), Γ2 (t, s, Y2 (s), Z1 (t, s), Z2 (s, t)))ds
t
−
Z
t
T
Z1 (t, s)dW (s),
with ψb1 (·) defined in an obvious way. Then by Proposition 3.1 (with p = 2), we obtain h
2
lE |Y1 (t) − Y2 (t)| + n
Z
t 2
≤ KlE |ψ1 (t) − ψ2 (t)| +
Z
+
|g1 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t), Γ1 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t)))
−g2 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t), Γ2 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t)))|ds T
|g2 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t), Γ2 (t, s, Y1 (s), Z1 (t, s), Z1 (s, t)))
t
−g2 (t, s, Y2 (s), Z1 (t, s), Z2 (s, t), Γ2 (t, s, Y2 (s), Z1 (t, s), Z2 (s, t)))|ds
n
≤ KlE |ψ1 (t) − ψ2 (t)|2 + +
hZ
i
|Z1 (t, s) − Z2 (t, s)|2 ds ≤ KlE|ψb1 (t) − ψ2 (t)|2
T
t
Z
T
t
T
Z
t
T
|(g1 − g2 )(t, s)|ds
2
|Y1 (s) − Y2 (s)| + |Z1 (s, t) − Z2 (s, t)| ds
i2 o
2 2 o
.
Then similar to the proof of the contraction for Θ, we can obtain our stability estimate (3.13). Let us make some remarks on the above result, together with its proof. First of all, we have seen that the growth of the maps zˆ 7→ g(t, s, y, z, zˆ),
(ˆ z , zˆ′ ) 7→ θ(t, s, y, z, zˆ, y ′ , z ′ , zˆ′ ) 24
(3.19)
plays an important role in proving the well-posedness of MF-BSVIEs, especially for the case p > 2. When p ∈ (1, 2], adapted M-solutions of BSVIEs were discussed in [35]. It is possible to adopt the idea of [35] to treat MF-BSVIEs for p ∈ (1, 2). If (H0)∞ holds, then for any p > 1, as long as ψ(·) ∈ L^p_{F_T}(0, T; lR^n), (3.10) admits a unique adapted M-solution (Y(·), Z(·,·)) ∈ M^p[0, T]. On the other hand, if the maps in (3.19) grow linearly, the adapted M-solution (Y(·), Z(·,·)) of (3.10) may not be in M^p[0, T] for p > 2, even if ψ(·) ∈ L^p_{F_T}(0, T; lR^n). This can be seen from the following example.
Z
Z
T
Z(s, t)ds −
ψ(t) ≡ with ψ1 (·) being deterministic and
Z(t, s)dW (s),
t ∈ [0, T ].
(3.20)
t
t
Let
T
Z
T 0
ψ1 (s)dW (s),
ψ1 (·) ∈ L2 (0, T ) \
[
∀t ∈ [0, T ],
Lp (0, T − δ; lR),
p>2
for some fixed δ ∈ (0, T ; lR). Thus, for any p > 1, lE
Z
T 0
Z
|ψ(t)|p ds = T lE
p
T
0
ψ1 (s)dW (s) ≤ C
which means ψ(·) ∈ LpFT (0, T ; lR) for any p > 1. If we define
Z
0
Z t Y (t) = ψ1 (s)dW (s) + ψ1 (t)(T − t), 0 2
Z(t, s) = ψ1 (s),
then
Y (t) =
Z
0
T
|ψ1 (s)|2 ds
p 2
,
t ∈ [0, T ],
(s, t) ∈ [0, T ] ,
t
ψ1 (s)dW (s) + ψ1 (t)(T − t)
= ψ(t) − = ψ(t) −
Z
Z
T
ψ1 (s)dW (s) +
t T
Z(s, t)ds +
Z
Z
T
ψ1 (t)ds
t
T
Z(t, s)dW (s).
t
t
This means that (Y (·), Z(· , ·)) is the adapted M-solution of (3.20). We claim that Y (·) ∈ / LplF (0, T ; lR), for any p > 2. In fact, if Y (·) ∈ LplF (0, T ; lR) for some p > 2, then δp lE
Z
T −δ
0
|ψ1 (t)|p dt ≤
Z
T
0
≤K This is a contradiction.
(T − t)p |ψ1 (t)|p dt ≤ 2p−1 lE
nZ
T
lE|Y (t)|p dt +
0
Z
0
T
Z
0
T
Z
|Y (t)|p +
|ψ(s)|2 ds
p 2
< ∞.
0
t
Next, if gi (t, s, y, z, zˆ) = gi (t, s, y, z),
θi (t, s, y, z, zˆ, y ′ , z ′ , zˆ′ ) = θi (t, s, y, z, y ′ , z ′ ), 25
p
ψ1 (s)dW (s) dt
then the stability estimate (3.13) can be improved to k(Y1 (·), Z1 (· , ·)) − (Y2 (·), Z2 (· , ·))kqMq [0,T ] nZ
T
≤ KlE
0
for any q > 2.
|ψ1 (t) − ψ2 (t)|q dt +
Z
T
0
Z
q
T
(3.21)
o
|(g1 − g2 )(t, s)|ds dt ,
t
We point out that even for the special case of BSVIEs, the proof we provided here significantly simplifies that given in [41]. The key is that we have a better understanding of the term Z(s, t) in the drift, and find a new way to treat it (see (3.16)). Now, let us look at linear MF-BSVIE (2.38). It is not hard to see that under (L2), we have (H3)q with q = 2. Hence, we have the following corollary. Corollary 3.4. Let (L2) hold. Then for any ψ(·) ∈ L2FT (0, T ; lRn ), (2.38) admits a unique adapted M-solution (Y (·), Z(· , ·)) ∈ M2 [0, T ].
4 Duality Principles.
In this section, we are going to establish two duality principles between linear MF-FSVIEs and linear MF-BSVIEs. Let us first consider the following linear MF-FSVIE (2.32) which is rewritten below (for convenience): X(t) = ϕ(t) + +
Z t
Z t0 0
h
i
A0 (t, s)X(s) + lE′ C0 (t, s)X(s) ds ′
h
(4.1)
i
A1 (t, s)X(s) + lE C1 (t, s)X(s) dW (s),
t ∈ [0, T ].
Let (L1) hold and ϕ(·) ∈ L2lF (0, T ; lRn ). Then by Corollary 2.7, (4.1) admits a unique solution X(·) ∈ L2lF (0, T ; lRn ). Now, let (Y (·), Z(· , ·)) ∈ M2 [0, T ] be undetermined, and we observe the following: lE
Z
Z
T
h Y (t), ϕ(t) i dt = lE
0
−lE
Z
T
h Y (t),
≡ lE
T
h Y (t), X(t) − 0
Z t 0
0
Z
T
Z t 0
A1 (t, s)X(s) + lE′ [C1 (t, s)X(s)] dW (s) i dt
h Y (t), X(t) i dt −
0
4 X
A0 (t, s)X(s) + lE′ [C0 (t, s)X(s)] ds i dt
Ii .
i=1
We now look at each term Ii . First, for I1 , we have I1 = lE
Z
0
T
Z
0
t
h Y (t), A0 (t, s)X(s) i dsdt = lE
Z
Next, for I2 , let us pay some extra attention on ω and I2 = lE
Z TZ
= lElE
0
Z
∗
0
t
′
′
h Y (t), lE [C0 (t, s)X(s)] i dsdt = lE lE
0 TZ T s
∗
T
∗
h X(t),
0
T t
A0 (s, t)T Y (s)ds i dt.
s
T
h C0 (t, s, ω, ω ′ )T Y (t, ω), X(s, ω ′ ) i dtds Z
h C0 (t, s, ω , ω) Y (t, ω ), X(s, ω) i dtds = lE 26
Z
ω′,
Z TZ 0
T
0
T
h X(t),
Z
T t
lE∗ [C0 (s, t)T Y (s)]ds i dt.
Here, we have introduced the notation lE∗ , whose definition is obvious from the above, to distinguish lE (and lE′ ). For I3 , we have I3 = lE = lE
Z
Z
T
h Y (t),
0 T
= lE
t 0
h lEY (t) +
0
Z
Z
T
0
Z
t
0
A1 (t, s)X(s)dW (s) i dt Z
t
Z(t, s)dW (s), 0
Z
t 0
A1 (t, s)X(s)dW (s) i dt Z
h Z(t, s), A1 (t, s)X(s) i dsdt = lE
T
h X(t),
0
Z
T t
A1 (s, t)T Z(s, t)ds i dt.
Finally, we look at I4 . I4 = lE = lE
Z
T
h Y (t),
0 Z T 0
= lE′ lE = lE
Z
Z
0
Z
T
t
Z
t 0
lE′ [C1 (t, s)X(s)]dW (s) i dt
h Z(t, s), lE′ [C1 (t, s)X(s)] i dsdt
0 T Z T s
h Z(t, s, ω), C1 (t, s, ω, ω ′ )X(s, ω ′ ) i dtds
h X(t),
0
Z
T t
lE∗ [C1 (s, t)T Z(s, t)]ds i dt.
Hence, we obtain lE
Z
T
h Y (t), ϕ(t) i dt = lE
Z
T
h X(t), Y (t) −
0
0
h
Z
T t
A0 (s, t)T Y (s) + A1 (s, t)T Z(s, t) i
+lE∗ C0 (s, t)T Y (s) + C1 (s, t)T Z(s, t) ds i dt.
On the other hand, suppose (L1)′ holds and ϕ(·) ∈ ClFp ([0, T ]; lRn ). Then X(·) ∈ ClFp ([0, T ]; lRn ). Consequently, we obtain the following duality principle for MF-FSVIEs whose proof is clear from the above. Theorem 4.1. Let (L1) hold, and ϕ(·), ψ(·) ∈ L2lF (0, T ; lRn ). Let X(·) ∈ L2lF (0, T ; lRn ) be the solution to the linear MF-FSVIE (4.1), and (Y (·), Z(· , ·)) ∈ M2 [0, T ] be the adapted M-solution to the following linear MF-BSVIE: Y (t) = ψ(t) +
Z
T
A0 (s, t)T Y (s) + A1 (s, t)T Z(s, t)
t
∗
h
T
i
T
+lE C0 (s, t) Y (s) + C1 (s, t) Z(s, t) ds − Then lE
Z
T
h X(t), ψ(t) i dt = lE
0
Z
Z
(4.2)
T
Z(t, s)dW (s).
t
T
h Y (t), ϕ(t) i dt.
(4.3)
0
We call (4.2) the adjoint equation of (4.1). The above duality principle will be used in establishing a Pontryagin type maximum principle for optimal controls of MF-FSVIEs. Next, different from the above, we want to start from the following linear MF-BSVIE: Y(t) = ψ(t) +
Z
t
T
A¯0 (t, s)Y (s) + C¯0 (t, s)Z(s, t) + lE′ [A¯1 (t, s)Y (s) + C¯1 (t, s)Z(s, t)] ds
−
Z
T
Z(t, s)dW (s),
t ∈ [0, T ].
t
27
(4.4)
This is a special case of (2.38) in which ¯0 (t, s) = 0, B
¯1 (t, s) = 0. B
Under (L2), by Corollary 3.4, for any ψ(·) ∈ L2lF (0, T ; lRn ), (4.4) admits a unique adapted M-solution (Y (·), Z(· , ·)) ∈ M2 [0, T ]. We point out here that for each t ∈ [0, T ), the maps s 7→ C¯0 (t, s),
s 7→ C¯1 (t, s)
are lF-progressively measurable and lF2 -progressively measurable on [t, T ], respectively. Now, we let a process X(·) ∈ L2lF (0, T ; lRn ) be undetermined, and make the following calculation: Z
lE
T
h X(t), ψ(t) i dt = lE
Z
T
h X(t), Y (t) −
0
0
Z
t
+lE [A¯1 (t, s)Y (s) + C¯1 (t, s)Z(s, t)] ds − ′
= lE
Z
T
h X(t), Y (t) i dt − lE
0
Z
−lE
0
Z
−lE
0
Z
T
T
Z
0
T
Z
t
T
Z
T
A¯0 (t, s)Y (s) + C¯0 (t, s)Z(s, t) T
Z(t, s)dW (s) i dt
t
h X(t), A¯0 (t, s)Y (s) i dsdt
h X(t), C¯0 (t, s)Z(s, t) i dsdt − lE
t
T
Z
T
Z
Z
T
0
t
h X(t), lE′ [C¯1 (t, s)Z(s, t)] i dsdt ≡ lE
t
T
Z
h X(t), lE′ [A¯1 (t, s)Y (s)] i dsdt
T
h X(t), Y (t) i dt −
0
4 X
Ii .
i=1
Similar to the above, we now look at the terms Ii (i = 1, 2, 3, 4) one by one. First, we look at I1 : I1 = lE
Z
T
0
= lE
Z
T
t T
Z
h
Next, for I2 , one has Z
Z
=
T
0
T
lE
Z
t Z t
T
lE h
Z
T
lE h
0
Z
= lE Now, for I3 , Z
I3 = lE = lE = lE
T
0
′
Z
Z
T
Z
h
0 T
h
≡ lE
0
T
h
Z
0
t
0 t
0
Z
Z
t
Z
s
0
0
h A¯0 (t, s)T X(t), Y (s) i dtds
A¯0 (s, t) X(s)ds, Y (t) i dt.
h X(t), C¯0 (t, s)Z(s, t) i dsdt = lE
t
t
0
t
lE[C¯0 (s, t)T | Fs ]X(s)dW (s),
Z
T
0
Z
0
s
h C¯0 (t, s)T X(t), Z(s, t) i dtds
Z
t
Z(t, s)dW (s) i dt 0
lE[C¯0 (s, t)T | Fs ]X(s)dW (s), Y (t) − lEY (t) i dt lE[C¯0 (s, t)T | Fs ]X(s)dW (s), Y (t) i dt.
h X(t), lE [A¯1 (t, s)Y (s)] i dsdt = lElE′
t
Z
h
T
T
′
0
0
Z
T
Z
Z
0
T
t
Z
h C¯0 (s, t)T X(s), Z(t, s) i dsdt
0
0
=
T
0
0
=
Z
Z
0
0
I2 = lE
h X(t), A¯0 (t, s)Y (s) i dsdt = lE
Z
0
T
Z
0
lE[A¯1 (s, t, ω, ω ′ )T X(s, ω)]ds, Y (t, ω ′ ) i dt
lE∗ [A¯1 (s, t, ω ∗ , ω)T X(s, ω ∗ )]ds, Y (t, ω) i dt lE∗ [A¯1 (s, t)T X(s)]ds, Y (t) i dt. 28
s
h A¯1 (t, s, ω, ω ′ )T X(t, ω), Y (s, ω ′ ) i dtds
Finally, similar to the above, one has Z
I4 = lE
0
= lElE = lE = lE
Z
=
Z
=
Z
=
′
Z
Z
T
h X(t), lE′ [C¯1 (t, s)Z(s, t)] i dsdt
t T Z s
0 0 T Z t
Z
′
Z
T
0 T
Z
0 t
0
0
T
lE
Z
t
0
0 T
lE h 0
h lE[C¯1 (s, t, ω, ω ′ )T X(s, ω)], Z(t, s, ω ′ ) i dsdt
h lE∗ [C¯1 (s, t)T X(s)], Z(t, s) i dsdt h
lE h 0
Z
Z
t
= lE
Z
h
0
lE
0
= lE
i
t
h
0
T
h Y (t), X(t) −
−
Z t 0
Z(t, s)dW (s) i dt 0
i
h X(t), ψ(t) i dt = lE
0
t
lE∗ lE[C¯1 (s, t)T | Fs ]X(s) dW (s), Y (t) i dt.
T
Z
Z
lE∗ lE[C¯1 (s, t)T | Fs ]X(s) dW (s), Y (t) − lEY (t) i dt
Combining the above, we obtain Z
i
h
t
Z
h
lE lE[C¯1 (s, t)T | Fs ]X(s) dW (s), ∗
0
T
i
h lE lE∗ [C¯1 (s, t)T X(s)] | Fs , Z(t, s) i dsdt
0
T
h C¯1 (t, s, ω, ω ′ )T X(t, ω), Z(s, t, ω ′ ) i dtds
Z
Z t 0
T
h X(t), Y (t) i dt −
0
4 X
Ii
i=1
(4.5)
A¯0 (s, t)T X(s) + lE∗ [A¯1 (s, t)T X(s)] ds h
i
lE[C¯0 (s, t) | Fs ]X(s) + lE∗ lE[C¯1 (s, t)T | Fs ]X(s) dW (s) i dt. T
Now, we are at the position to state and prove the following duality principle for MF-BSVIEs. Theorem 4.2. Let (L2) hold and ψ(·) ∈ L2FT (0, T ; lRn ). Let (Y (·), Z(· , ·)) ∈ M2 [0, T ] be the unique adapted M-solution of linear MF-BSVIE (4.4). Further, let ϕ(·) ∈ L2lF (0, T ; lRn ) and X(·) ∈ L2lF (0, T ; lRn ) be the solution to the following linear MF-FSVIE: X(t) = ϕ(t) + +
Z t 0 0
Then
Z t
A¯0 (s, t)T X(s) + lE∗ [A¯1 (s, t)T X(s)] ds h
i
lE[C¯0 (s, t)T | Fs ]X(s) + lE∗ lE[C¯1 (s, t)T | Fs ]X(s) dW (s),
lE
Z
T
h Y (t), ϕ(t) i dt = lE
Z
(4.6) t ∈ [0, T ].
T
h X(t), ψ(t) i dt.
(4.7)
0
0
Proof. For linear MF-FSVIE (4.6), when (L2) holds, we have (L1). Hence, for any ϕ(·) ∈ (4.6) admits a unique solution X(·) ∈ L2lF (0, T ; lRn ). Then (4.7) follows from (4.5) immediately.
L2lF (0, T ; lRn ),
We call MF-FSVIE (4.6) the adjoint equation of MF-BSVIE (4.4). Such a duality principle will be used to establish comparison theorems for MF-BSVIEs. Note that since for s < t, C¯0 (s, t)T is Ft -measurable and not necessarily Fs -measurable, we have lE[C¯0 (s, t)T | Fs ] 6= C¯0 (s, t), 29
t ∈ (s, T ],
(4.8)
in general. Likewise, in general, lE[C¯1 (s, t)T | Fs ] 6= C¯1 (s, t),
t ∈ (s, T ].
(4.9)
We now make some comparison between Theorems 4.1 and 4.2. First, we begin with linear MF-FSVIE (4.1) which is rewritten here for convenience: X(t) = ϕ(t) + +
Z t
Z t0 0
A0 (t, s)X(s) + lE′ [C0 (t, s)X(s)] ds
′
A1 (t, s)X(s) + lE [C1 (t, s)X(s)] dW (s),
(4.10) t ∈ [0, T ].
According to Theorem 4.1, the adjoint equation of (4.10) is MF-BSVIE (4.2). Now, we want to use Theorem 4.2 to find the adjoint equation of (4.2) which is regarded as (4.4) with A ¯0 (t, s) = A0 (s, t)T , C ¯ (t, s) = A (s, t)T , 0
1
A¯1 (t, s, ω, ω ′ ) = C0 (s, t, ω ′ , ω)T , C¯1 (t, s, ω, ω ′ ) = C1 (s, t, ω ′ , ω)T .
Then, by Theorem 4.2, we obtain the adjoint equation (4.6) with the coefficients: T ¯ A0 (s, t) = A0 (t, s),
A¯1 (s, t, ω ′ , ω)T = C0 (t, s, ω, ω ′ ),
lE[C¯0 (s, t)T | Fs ] = lE[A1 (t, s) | Fs ] = A1 (t, s), lE[C ¯1 (s, t, ω ′ , ω)T | Fs ] = lE[C1 (t, s, ω, ω ′ ) | Fs ] = C1 (t, s, ω, ω ′ ).
Hence, (4.10) is the adjoint equation of (4.2). Thus, we have the following conclusion: Twice adjoint equation of a linear MF-FSVIE is itself. Next, we begin with linear MF-BSVIE (4.4). From Theorem 4.2, we know that the adjoint equation is linear MF-FSVIE (4.6). Now, we want to use Theorem 4.1 to find the adjoint equation of (4.6) which is regarded as (4.10) with A (t, s) = A ¯0 (s, t)T , C0 (t, s, ω, ω ′ ) = A¯1 (s, t, ω ′ , ω)T , 0 A (t, s) = lE[C ¯0 (s, t)T | Fs ], C1 (t, s, ω, ω ′ ) = lE[C¯1 (s, t, ω ′ , ω)T | Fs ]. 1
Then by Theorem 4.2, the adjoint equation is given by (4.2) with coefficients:
A (s, t)T = A ¯0 (t, s), C0 (s, t, ω ′ , ω)T = A¯1 (t, s, ω, ω ′ ), 0 A (s, t)T = lE[C ¯0 (t, s) | Ft ], C1 (s, t, ω ′ , ω) = lE[C¯1 (t, s, ω, ω ′ ) | Ft ]. 1
In another word, the twice adjoint equation of linear MF-BSVIE (4.4) is the following: Y (t) = ψ(t) + h
Z
T
t
A¯0 (t, s)Y (s) + lE[C¯0 (t, s) | Ft ]Z(s, t) i
+lE A¯1 (t, s)Y (s) + lE[C¯1 (t, s) | Ft ]Z(s, t) ds − ′
Z
(4.11)
T
Z(t, s)dW (s),
t ∈ [0, T ],
t
which is different from (4.4), unless C¯0 (t, s) and C¯1 (t, s) are Ft -measurable for all (t, s) ∈ ∆. Thus, we have the following conclusion: Twice adjoint of a linear MF-BSVIE is not necessarily itself. 30
5
Comparison Theorems.
In this section, we are going to establish some comparison theorems for MF-FSVIEs and MFBSVIEs, allowing the dimension to be larger than 1. Let n
o
lRn+ = (x1 , · · · , xn ) ∈ lRn | xi ≥ 0, 1 ≤ i ≤ n . When x ∈ lRn+ , we also denote it by x ≥ 0, and say that x is nonnegative. By x ≤ 0 and x ≥ y (if x, y ∈ lRn ), we mean −x ≥ 0 and x − y ≥ 0, respectively. Moreover, if X(·) is a process, then by X(·) ≥ 0, we mean X(t) ≥ 0, t ∈ [0, T ], a.s. Also, X(·) is said to be nondecreasing if it is componentwise nondecreasing. Likewise, we may define X(·) ≤ 0 and X(·) ≥ Y (·) (if both X(·) and Y (·) are lRn -valued processes), and so on. In what follows, we let ei ∈ lRn be the vector that the i-th entry is 1 and all other entries are zero. Also, we let n o n o n n×n n×n lM = A = (a ) ∈ lR | a ≥ 0, i = 6 j ≡ A ∈ lR | h Ae , e i ≥ 0, i = 6 j , ij ij i j + n o c n×m = A = (a ) ∈ lRn×m | a ≥ 0, 1 ≤ i ≤ n, 1 ≤ j ≤ m , lM ij ij + n o n o lMn = A = (a ) ∈ lRn×n | a = 0, i 6= j ≡ A ∈ lRn×n | h Ae , e i = 0, i = 6 j . ij ij i j 0 n×m
c is the set of all (n × m) matrices with all the entries being nonnegative, lMn+ is the Note that lM + set of all (n × n) matrices with all the off-diagonal entries being nonnegative, and lMn0 is actually c n×m are closed convex cones of lRn×n the set of all (n × n) diagonal matrices. Clearly, lMn+ and lM + and lRn×m , respectively, and lMn0 is a proper subspace of lRn×n . Whereas, for n = m = 1, one has
lM1+ = lM10 = lR,
1×1
c lM +
= lR+ ≡ [0, ∞).
(5.1)
We have the following simple result which will be useful below and whose proof is obvious. n×m
c Proposition 5.1. Let A ∈ lRn×m . Then A ∈ lM +
if and only if
∀x ∈ lRm +.
Ax ≥ 0,
(5.2)
c n = lM c n×n . In what follows, we will denote lM + +
5.1
Comparison of solutions to MF-FSVIEs.
In this subsection, we would like to discuss comparison of solutions to linear MF-FSVIEs. There are some positive and also negative results. To begin with, let us first present an example of MF-FSDEs. Example 5.2. Consider the following one-dimensional linear MF-FSDE, written in the integral form: Z t
lEX(s)dW (s),
X(t) = 1 +
0
31
t ∈ [0, T ].
Taking expectation, we have lEX(t) = 1,
∀t ∈ [0, T ].
Consequently, the solution X(·) is given by X(t) = 1 +
Z
t
dW (s) = 1 + W (t),
t ∈ [0, T ].
0
Thus, although X(0) = 1 > 0, the following fails: X(t) ≥ 0,
t ∈ [0, T ],
a.s.
The above example shows that if the diffusion contains a nonlocal term in an MF-FSDE, we could not get an expected comparison of solutions, in general. Therefore, for linear MF-FSDEs, one had better only look at the following: X(t) = x +
Z t 0
′
A0 (s)X(s) + lE [C0 (s)X(s)] ds +
Z
t
0
A1 (s)X(s)dW (s),
t ∈ [0, T ],
(5.3)
with the diffusion does not contain a nonlocal term. For the above, we make the following assumption. (C1) The maps A0 , A1 : [0, T ] × Ω → lRn×n ,
C0 : [0, T ] × Ω2 → lRn×n ,
are uniformly bounded, and they are lF-progressively measurable, and lF2 -progressively measurable, respectively. Note that, due to (5.1), the above (C1) is always true if n = 1. We now present the following comparison theorem for linear MF-FSDEs. Proposition 5.3. Let (C1) hold and moreover, A0 (s, ω) ∈ lMn+ ,
n
c , C0 (s, ω, ω ′ ) ∈ lM +
A1 (s, ω) ∈ lMn0 ,
s ∈ [0, T ], a.s. ω, ω ′ ∈ Ω.
(5.4)
Let X(·) ∈ L2lF (0, T ; lRn ) be the solution of linear MF-FSDE (5.3) with x ≥ 0. Then X(t) ≥ 0,
∀t ∈ [0, T ],
a.s.
(5.5)
Proof. It is known from Theorem 2.6 that as a special case of MF-FSVIE, the linear MF-FSDE (5.3) admits a unique solution X(·) ∈ LplF (0, T ; lRn ) for any x ∈ lRn , and any p ≥ 2. Further, it is not hard to see that X(·) has continuous paths. Since the equation is linear, it suffices to show that x ≤ 0 implies X(t) ≤ 0, t ∈ [0, T ], a.s. (5.6) To prove (5.6), we define a convex function f (x) =
n X
2 (x+ i ) ,
∀x = (x1 , x2 , · · · , xn ) ∈ lRn ,
i=1
32
where a+ = max{a, 0} for any a ∈ lR. Applying Itˆo’s formula to f (X(t)), we get f (X(t)) − f (x) =
Z th 0
h fx (X(s)), A0 (s)X(s) + lE′ [C0 (s)X(s)] i
i 1 + h fxx (X(s))A1 (s)X(s), A1 (s)X(s) i ds + 2
Z
0
t
h fx (X(s)), A1 (s)X(s) i dW (s).
We observe the following: (noting A0 (s) ∈ lMn+ ) h fx (X(s)), A0 (s)X(s) i =
n X
2Xi (s)+ h ei , A0 (s)ej i Xj (s)
i,j=1
=
n X
X
2Xi (s)+ h ei , A0 (s)ei i Xi (s) +
i6=j
i=1
≤
n X
2Xi (s)+ h ei , A0 (s)ej i Xj (s)
2[Xi (s)+ ]2 h ei , A0 (s)ei i +
X
2 h ei , A0 (s)ej i Xi (s)+ Xj (s)+ ≤ Kf (X(s)).
i6=j
i=1
n
c ) Also, one has (making use of C0 (s) ∈ lM +
lE h fx (X(s)), lE′ [C0 (s)X(s)] i =2 ≤2
Z
Ω2
Z
n X
i,j=1 n X
Ω2 i,j=1 n hX
≤ K lE
Xi (s, ω)+ h ei , C0 (s, ω, ω ′ )ej i Xj (s, ω ′ )lP(dω)lP(dω ′ ) Xi (s, ω)+ h ei , C0 (s, ω, ω ′ )ej i Xj (s, ω ′ )+ lP(dω)lP(dω ′ ) Xi (s)+
i=1
i2
≤ KlEf (X(s)).
Next, we have (noting A1 (·) and fxx (·) are diagonal) n 2 1 X 1 lE h fxx (X(s))A1 (s)X(s), A1 (s)X(s) i = lE I(Xi (s)≥0) h A1 (s)ei , ei i Xi (s) 2 2 i=1
n 1 X = lE h A1 (s)ei , ei i 2 [Xi (s)+ ]2 ≤ Kf (X(s)). 2 i=1
Consequently, lEf (X(t)) ≤ f (x) + K
Z
t
lEf (X(s))ds,
t ∈ [0, T ].
0
Hence, by Gronwall’s inequality, we obtain n X
lE|Xi (t)+ |2 ≤ K
i=1
n X
2 |x+ i | ,
t ∈ [0, T ].
i=1
Therefore, if x ≤ 0 (component-wise), then n X
lE|Xi (t)+ |2 = 0,
i=1
33
∀t ∈ [0, T ].
This leads to (5.6). We now make some observations on condition (5.4). 1. Let C0 (·) = 0, A1 (·) = 0, and A0 (·) be continuous and for some i 6= j, h A0 (0)ei , ej i < 0, i.e., at least one off-diagonal entry of A0 (0) is negative. Then by letting x = ei , we have Xj (t) = h X(t), ej i =
Z
t
0
h A0 (s)X(s), ej i ds = h A0 (0)ei , ej i t + o(t) < 0,
for t > 0 small. Thus, X(0) ≥ 0 does not imply X(t) ≥ 0. 2. Let A0 (·) = 0, A1 (·) = 0, and C0 (·) be continuous and for some i 6= j, h C0 (0)ei , ej i < 0, i.e., at least one off-diagonal entry of C0 (0) is negative. Then by a similar argument as above, we have that X(0) ≥ 0 does not imply X(t) ≥ 0. 3. Let A0 (·) = 0, C0 (·) = 0 and for some i 6= j, Z
0
T
lP h A1 (s)ei , ej i 6= 0 ds > 0,
i.e., at least one off-diagonal entry of A1 (·) is not identically zero. Then by letting x = ei , we have Xj (t) =
Z
0
t
h A1 (s)X(s), ej i dW (s) 6≡ 0,
t ∈ [0, T ].
On the other hand, lEXj (t) = 0,
t ∈ [0, T ].
Hence, Xj (t) ≥ 0,
∀t ∈ [0, T ], a.s.
must fail. 4. Let n = 1, A0 (·) = A1 (·) = 0 and C0 (·) bounded, lF-adapted with C0 (s) 6= 0,
lEC0 (s) = 0,
s ∈ [0, T ].
This means that “C0 (s) ≥ 0, ∀s ∈ [0, T ], a.s. ” fails (or diagonal elements of C0 (·) are not all nonnegative). Consider the following MF-FSDE: X(t) = 1 +
Z
t
0
C0 (s)lEX(s)ds,
t ∈ [0, T ].
Then lEX(t) = 1, Hence, X(t) = 1 +
Z
t ∈ [0, T ].
t 0
C0 (s)ds, 34
t ∈ [0, T ].
It is easy to choose a C0 (·) such that X(t) ≥ 0,
∀t ∈ [0, T ], a.s.
is violated. The above observations show that, in some sense, conditions assumed in (5.4) are sharp for Proposition 5.3. Based on the above, let us now consider the following linear MF-FSVIE: X(t) = ϕ(t) +
Z t 0
h
i
A0 (t, s)X(s) + lE′ C0 (t, s)X(s) ds
+
Z
0
t
A1 (s)X(s)dW (s),
(5.7)
t ∈ [0, T ].
Note that A1 (·) is independent of t here. According to [34], we know that for (linear) FSVIEs (without the nonlocal term, i.e., C0 (· , ·) = 0 in (5.7)), if the diffusion depends on both (t, s) and X(·), i.e., A1 (t, s) really depends on (t, s), a comparison theorem will fail in general. Next, let us look at an example which is concerned with the free term ϕ(·). Example 5.4. Consider the following one-dimensional FSVIE: X(t) = T − t +
Z
t
bX(s)ds +
Z
t
σX(s)dW (s),
t ∈ [0, T ],
0
0
for some b, σ ∈ lR. The above is equivalent to the following: dX(t) = [bX(t) − 1]dt + σX(t)dW (t), X(0) = T.
t ∈ [0, T ],
The solution to the above is explicitly given by the following: X(t) = e(b−
σ2 )t+σW (t) 2
h
T−
Z
t
e−(b−
σ2 )s−σW (s) 2
0
i
ds ,
t ∈ [0, T ].
We know that as long as σ 6= 0, for any t > 0 small and any K > 0, lP
Z
t
e−(b−
σ2 )s−σW (s) 2
ds ≥ K > 0.
0
Therefore, we must have lP(X(t) < 0) > 0,
∀t > 0 (small).
On the other hand, if σ = 0, then h
X(t) = ebt T −
Z
t
i
e−bs ds ,
0
t ∈ [0, T ].
Thus, when b = 0, one has X(t) = T − t,
t ∈ [0, T ],
and when b 6= 0, ebt −bt 1 e − 1 + bT , X(t) = ebt T + (1 − ebt ) = b b 35
t ∈ [0, T ].
Since eλ − 1 − λ > 0,
∀λ 6= 0,
we have that b < 0 implies X(T ) < 0. The above example tells us that when σ 6= 0, or σ = 0 and b < 0, although the free term ϕ(t) = T − t is nonnegative on [0, T ], the solution X(·) of the FSVIE (5.7) does not necessarily remain nonnegative on [0, T ]. Consequently, nonnegativity of the free term is not enough for the solution of the MF-FSVIE to be nonnegative. Thus, besides the nonnegativity of the free term, some additional conditions are needed. To present positive results, we introduce the following assumption. (C2) The maps A0 : ∆∗ × Ω → lRn×n ,
A1 : [0, T ] × Ω → lRn×n ,
C0 : ∆∗ × Ω2 → lRn×n ,
are measurable and uniformly bounded. For any t ∈ [0, T ], s 7→ (A0 (t, s), A1 (s)) is lF-progressively measurable on [0, t], and s 7→ C0 (t, s) is lF2 -progressively measurable on [0, t]. We now present the following result which is simple but will be useful later. Proposition 5.5. Let (C2) hold. Further, n
c , A0 (t, s, ω), C0 (t, s, ω, ω ′ ) ∈ lM +
A1 (s, ω) = 0,
a.e. (t, s) ∈ ∆∗ , a.s. ω, ω ′ ∈ Ω.
(5.8)
Let X(·) be the solution to (5.7), with ϕ(·) ∈ L2lF (0, T ; lRn ) and ϕ(·) ≥ 0. Then X(t) ≥ ϕ(t) ≥ 0,
t ∈ [0, T ].
(5.9)
Proof. Define (AX)(t) =
Z t 0
A0 (t, s)X(s) + lE′ [C0 (t, s)X(s)] ds,
t ∈ [0, T ].
By our condition, we see that (AX)(·) ≥ 0,
∀X(·) ∈ L2lF (0, T ; lRn ), X(·) ≥ 0.
Now, we define the following sequence X (·) = ϕ(·), 0 X (·) = ϕ(·) + (AX
k−1 )(·),
k
It is easy to see that
Xk (·) ≥ ϕ(·),
k ≥ 1.
∀k ≥ 0,
and lim kXk (·) − X(·)kL2 (0,T ;lRn ) = 0, lF
k→∞
with X(·) being the solution to (5.7). Then it is easy to see that (5.9) holds. 36
For the case that the diffusion is nonzero in the equation, we have the following result. Proposition 5.6. Let (C2) hold. Suppose n
c , C0 (t, s, ω, ω ′ ) ∈ lM +
A0 (t, s, ω) ∈ lMn+ ,
A1 (s, ω) ∈ lMn0 ,
a.e. (t, s) ∈ ∆∗ , a.s. ω, ω ′ ∈ Ω.
(5.10)
Moreover, let t 7→ (ϕ(t), A0 (t, s), C0 (t, s)) be continuous, and ϕ(·) ∈ LplF (0, T ; lRn ) for some p > 2. Further, ϕ(t1 ) ≥ ϕ(t0 ) ≥ 0,
b ≥ A0 (t0 , s)x b, A0 (t1 , s)x
b ≥ C0 (t0 , s)x b, C0 (t1 , s)x
b ∈ lRn ∀ s ≤ t0 < t1 ≤ T, s ∈ [0, T ], x + , a.s.
(5.11)
Let X(·) be the solution of linear MF-FSVIE (5.7). Then X(t) ≥ 0,
t ∈ [0, T ], a.s.
(5.12)
Proof. Let Π = {τk , 0 ≤ k ≤ N } be an arbitrary set of finitely many lF-stopping times with 0 = τ0 < τ1 < · · · < τN = T , and we define its mesh size by kΠk = esssup max |τk − τk−1 |. 1≤k≤N
ω∈Ω
Let
N −1 X Π A (t, s) = A0 (τk , s)I[τk ,τk+1 ) (t), 0 Π ϕ (t) =
C0Π (t, s) =
k=0 N −1 X
N −1 X
C0 (τk , s)I[τk ,τk+1 ) (t),
k=0
ϕ(τk )I[τk ,τk+1 ) (t).
k=0
Clearly, each A0 (τk , ·) is an lF-adapted bounded process, each C0 (τk , ·) is an lF2 -adapted bounded process, and each ϕ(τk ) is an Fτk -measurable random variable. Moreover, for each k ≥ 0, A0 (τk , s) ∈ lMn+ , and
n
c , C0 (τk , s) ∈ lM +
s ∈ [τk , τk+1 ], a.s. ,
0 ≤ ϕ(τk ) ≤ ϕ(τk+1 ),
a.s.
(5.13)
(5.14)
Now, we let X Π (·) be the solution to the following MF-FSVIE: X Π (t) = ϕΠ (t) +
Z t 0
h
i
Π ′ Π Π AΠ 0 (t, s)X (s) + lE C0 (t, s)X (s) ds
+
Z
t
0
Π
A1 (s)X (s)dW (s),
t ∈ [0, T ].
Then on interval [0, τ1 ), we have X Π (t) = ϕ(0) +
Z t 0
h
i
A0 (0, s)X Π (s) + lE′ C0 (0, s)X Π (s) ds +
Z
0
t
A1 (s)X Π (s)dW (s),
which is an MF-FSDE, and X Π (·) has continuous paths. From Proposition 5.3, we have X Π (t) ≥ 0,
t ∈ [0, τ1 ), a.s. 37
(5.15)
In particular, X Π (τ1 − 0) = ϕ(0) +
Z
τ1
0
+
h
i
A0 (0, s)X Π (s) + lE′ C0 (0, s)X Π (s) ds
Z
τ1
0
Π
A1 (s)X (s)dW (s) ≥ 0.
(5.16)
Next, on [τ1 , τ2 ), we have (making use the above) Z
Π
X (t) = ϕ(τ1 ) + +
Z
t
τ1
τ1 0
Π
′
+ +
i
Π
A0 (τ1 , s)X (s) + lE C0 (τ1 , s)X (s) ds + Π
′
h
i
Π
A0 (τ1 , s)X (s) + lE C0 (τ1 , s)X (s) ds +
= ϕ(τ1 ) − ϕ(0) + X Π (τ1 − 0) Z
h
n
τ1
A0 (τ1 , s) − A0 (0, s) X Π (s) + lE′
Z0t
h
h
e 1) + ≡ X(τ
Z
t
τ1
Π
′
t
τ1
0
τ1
A1 (s)X Π (s)dW (s)
A1 (s)X Π (s)dW (s)
io
C0 (τ1 , s) − C0 (0, s) X Π (s)
i
A0 (τ1 , s)X Π (s) + lE′ C0 (τ1 , s)X Π (s) ds +
τ1
Z
Z
h
Π
i
Z
t
τ1
ds
A1 (s)X Π (s)dW (s)
A0 (τ1 , s)X (s) + lE C0 (τ1 , s)X (s) ds +
Z
t
τ1
A1 (s)X Π (s)dW (s),
where, by our conditions assumed in (5.11), and noting (5.16), e 1 ) ≡ ϕ(τ1 ) − ϕ(0) + X Π (τ1 − 0) X(τ
+
Z
τ1
0
n
A0 (τ1 , s) − A0 (0, s) X Π (s) + lE′
h
io
C0 (τ1 , s) − C0 (0, s) X Π (s)
ds ≥ 0.
Hence, by Proposition 5.3 again, one obtains X Π (t) ≥ 0,
t ∈ [τ1 , τ2 ).
By induction, we see that X Π (t) ≥ 0,
t ∈ [0, T ], a.s.
On the other hand, it is ready to show that lim kX Π (·) − X(·)kL2 (0,T ;lRn ) = 0, lF
kΠk→0
Then (5.12) follows from the stability estimate in Corollary 2.7. We now look at the following (nonlinear) MF-FSVIEs with i = 0, 1: Xi (t) = ϕi (t) +
Z
0
t
bi (t, s, Xi (t), Γbi (t, s, Xi (s)))ds
where Γbi (t, s, Xi (s)) =
Z
Ω
+
Z
0
t
σ(s, Xi (s))dW (s),
θib (t, s, ω, ω ′ , Xi (s, ω), Xi (s, ω ′ ))lP(dω ′ ).
t ∈ [0, T ],
(5.17)
(5.18)
Note that σ(·) does not contain a nonlocal term, and it is independent of t ∈ [0, T ], as well as i = 0, 1. The following result can be regarded as an extension of [34] from FSVIEs to MF-FSVIEs. 38
Theorem 5.7. For i = 0, 1, let bi (·), σ(·), θib (·) appeared in (5.17) satisfy (H1)–(H2) and m1 b ∈ lRn ϕi (·) ∈ L2lF (0, T ; lRn ). Further, for all x, x ¯, x′ ∈ lRn , x , almost all (t, s) ∈ ∆∗ and + , γ ∈ lR ′ almost sure ω, ω ∈ Ω, n×m1
c (b0 )γ (t, s, ω, x, γ) ∈ lM +
and maps
,
σx (s, ω, x) ∈ lMn0 ,
(5.19)
b, t 7→ (b0 )x (t, s, ω, x, γ)x
b, t 7→ (b0 )γ (t, s, ω, x, γ)(θ0b )x (t, s, ω, x ¯, x′ )x
b, t 7→ (b0 )γ (t, s, ω, x, γ)(θ0b )x′ (t, s, ω, x¯, x′ )x
(5.20)
t 7→ b1 (t, s, ω, x, γ) − b0 (t, s, ω, x, γ), t 7→
θ1b (t, s, ω, ω ′ , x, x′ ) − θ0b (t, s, ω, ω ′ , x, x′ ), h
i
t 7→ (b0 )γ (t, s, ω, x, γ) θ1b (t, s, ω, ω ′ , x, x′ ) − θ0b (t, s, ω, ω ′ , x, x′ ) , t 7→ ϕ1 (t) − ϕ0 (t)
are continuous, nonnegative and nondecreasing on [s, T ]. Let Xi (·) ∈ LplF (0, T ; lRn ) be the solutions to the corresponding equation (5.17). Then X0 (t) ≤ X1 (t),
∀t ∈ [0, T ], a.s.
(5.21)
Proof. From the equations satisfied by X0 (·) and X1 (·), we have the following: X1 (t) − X0 (t) = ϕ1 (t) − ϕ0 (t) + +
Z th
Z0t h 0
+
i
σ(s, X1 (s)) − σ(s, X0 (s)) dW (s)
= ϕb1 (t) − ϕb0 (t) +
i
b1 (t, s, X1 (s), Γb1 (t, s, X1 (s))) − b0 (t, s, X0 (s), Γb0 (t, s, X0 (s))) ds
Z th
i
b0 (t, s, X1 (s), Γb0 (t, s, X1 (s))) − b0 (t, s, X0 (s), Γb0 (t, s, X0 (s))) ds
Z0t h 0
i
σ(s, X1 (s)) − σ(s, X0 (s)) dW (s),
where (making use of Proposition 5.1 and (5.19)–(5.20)) b0 (t) = ϕ1 (t) − ϕ0 (t) ϕb1 (t) − ϕ
+
Z th 0
= ϕ1 (t) − ϕ0 (t) + +
Z thZ 0
i
b1 (t, s, X1 (s), Γb1 (t, s, X1 (s))) − b0 (t, s, X1 (s), Γb0 (t, s, X1 (s))) ds
0
1
Z th 0
i
b1 (t, s, X1 (s), Γb1 (t, s, X1 (s))) − b0 (t, s, X1 (s), Γb1 (t, s, X1 (s))) ds i
e b (t, s))dλ Γb (t, s, X1 (s))) − Γb (t, s, X1 (s))) ds ≥ 0, (b0 )γ (t, s, X1 (s), Γ λ 1 0
and nondecreasing in t, where
e b (t, s) = (1 − λ)Γb (t, s, X1 (s)) + λΓb (t, s, X1 (s)). Γ λ 0 1
39
Now, we look at the following: b0 (t, s, X1 (s), Γb0 (t, s, X1 (s))) − b0 (t, s, X0 (s), Γb0 (t, s, X0 (s))) hZ
=
1
0
+
i
(b0 )x (t, s, Xλ (s), Γbλ (t, s))dλ X1 (s) − X0 (s)
hZ
1
0
i
(b0 )γ (t, s, Xλ (s), Γbλ (t, s))dλ Γb0 (t, s, X1 (s)) − Γb0 (t, s, X0 (s))
≡ (b0 )x (t, s) X1 (s) − X0 (s) + (b0 )γ (t, s) Γb0 (t, s, X1 (s)) − Γb0 (t, s, X0 (s)) , where
X (s) = (1 − λ)X (s) + λX (s), 0 1 λ Γb (t, s) = (1 − λ)Γb (t, s, X (s)) + λΓb (t, s, X (s)). 0 1 0 λ 0
and
(5.22)
Z 1 (b0 )x (t, s) = (b0 )x (t, s, Xλ (s), Γbλ (t, s))dλ, 0 Z 1 (b0 )γ (t, s) = (b0 )γ (t, s, Xλ (s), Γbλ (t, s))dλ. 0
Moreover,
Γb0 (t, s, X1 (s)) − Γb0 (t, s, X0 (s)) =
Z h Ω
=
nZ hZ Ω
+ = lE where
′
i
θ0b (t, s, ω, ω ′ , X1 (s, ω), X1 (s, ω ′ )) − θ0b (t, s, ω, ω ′ , X0 (s, ω), X0 (s, ω ′ )) lP(dω ′ ) 0
1
i
Z hZ
1
i
X1 (s, ω) − X0 (s, ω)
(θ0b )x′ (t, s, ω, ω ′ , Xλ (s, ω), Xλ (s, ω ′ ))dλ X1 (s, ω ′ ) − X0 (s, ω ′ ) lP(dω ′ )
0 i Ω b (θ0 )x (t, s) X1 (s) −
h
o
(θ0b )x (t, s, ω, ω ′ , Xλ (s, ω), Xλ (s, ω ′ ))dλ lP(dω ′ )
h
i
X0 (s) + lE′ (θ0b )x′ (t, s) X1 (s, ω ′ ) − X0 (s, ω ′ ) ,
Z 1 (θ0b )x (t, s) = (θ0b )x (t, s, ω, ω ′ , Xλ (s, ω), Xλ (s, ω ′ ))dλ, Z0 1 b (θ0 )x′ (t, s) = (θ0b )x′ (t, s, ω, ω ′ , Xλ (s, ω), Xλ (s, ω ′ ))dλ,
(5.23)
0
and Xλ (·) is defined as (5.22). Thus,
b0 (t, s, X1 (s), Γb0 (t, s, X1 (s))) − b0 (t, s, X0 (s), Γb0 (t, s, X0 (s))) n
h
io
= (b0 )x (t, s) + lE′ (b0 )γ (t, s)(θ0b )x (t, s) h
i
+lE′ (b0 )γ (t, s)(θ0b )x′ (t, s) X1 (s, ω ′ ) − X0 (s, ω ′ )
h
X1 (s) − X0 (s)
i
≡ A0 (t, s) X1 (s) − X0 (s) + lE′ C0 (t, s) X1 (s) − X0 (s) , where
h i A (t, s) = (b ) (t, s) + lE′ (b ) (t, s)(θ b ) (t, s) ∈ lMn , 0 0 x 0 γ 0 x + n b c
C0 (t, s) = (b0 )γ (t, s)(θ0 )x′ (t, s) ∈ lM+ ,
40
(t, s) ∈ ∆∗ , a.s.
(5.24)
Similarly,
σ(s, X1 (s)) − σ(s, X0 (s)) ≡ A1 (s) X1 (s) − X0 (s) , where A1 (s) ≡ Then we have
Z
1
0
X1 (t) − X0 (t) = ϕb1 (t) − ϕb0 (t) + ′
h
(t, s) ∈ ∆∗ , a.s.
σx (s, Xλ (s))dλ ∈ lMn0 ,
Z tn 0
(5.25)
A0 (t, s) X1 (s) − X0 (s) Z
io
+lE C0 (t, s) X1 (s) − X0 (s)
ds +
t
A1 (s) X1 (s) − X0 (s) dW (s).
0
From (5.19)–(5.20), we see that the coefficients of the above linear MF-FSVIE satisfy (C2), and ϕb1 (·) − ϕb0 (·) is nonnegative and nondecreasing. Then (5.21) follows from Proposition 5.6.
From the above proof, we see that one may replace b0 (·) in conditions (5.19) by b1 (·). Also, by an approximation argument, we may replace the derivatives in (5.19) of b0 (·) and σ(·) by the corresponding difference quotients.
5.2
Comparison theorems for MF-BSVIEs.
In this subsection, we discuss comparison property for MF-BSVIEs. First, we consider the following linear MF-BSVIE: Y (t) = ψ(t) +
Z
T
t
h
i
A¯0 (t, s)Y (s) + C¯0 (t)Z(s, t) + lE′ A¯1 (t, s)Y (s) ds
−
Z
T
Z(t, s)dW (s),
(5.26)
t ∈ [0, T ].
t
Note that Z(t, s) does not appear in the whole drift term, and Z(s, t) does not appear in the nonlocal term. Further, the coefficient of Z(s, t) is independent of s. Let us introduce the following assumption. (C3) The maps A¯0 : ∆ × Ω → lRn×n ,
C¯0 : [0, T ] × Ω → lRn×n ,
A¯1 : ∆ × Ω2 → lRn×n
are uniformly bounded, C¯0 (·) is lF-progressively measurable, and for each t ∈ [0, T ], s 7→ A¯0 (t, s) and s 7→ A¯1 (t, s) are lF-progressively measurable and lF2 -progressively measurable on [t, T ], respectively. We have the following result. Theorem 5.8. Let (C3) hold. In addition, suppose A¯0 (t, s, ω) ∈ lMn+ ,
cn , A¯1 (t, s, ω, ω ′ ) ∈ lM +
C¯0 (s, ω) ∈ lMn0 ,
a.e. (t, s) ∈ ∆∗ , a.s. ω, ω ′ ∈ Ω.
(5.27)
Moreover, let t 7→ (A¯0 (s, t), C¯0 (s, t)) be continuous, and A¯0 (s, t1 )T x ≥ A¯0 (s, t0 )T x,
A¯1 (s, t1 )T x ≥ A¯1 (s, t0 )T x,
∀ s ≤ t0 < t1 ≤ T, s ∈ [0, T ], x ∈ lRn+ , a.s. 41
(5.28)
Let (Y (·), Z(· , ·)) be the adapted M-solution to (5.26) with ψ(·) ∈ L2FT (0, T ; lRn ), ψ(·) ≥ 0. Then lE
hZ
i
T
Y (s)ds | Ft ≥ 0,
t
∀t ∈ [0, T ], a.s.
(5.29)
Proof. We consider the following linear MF-FSVIE: X(t) = ϕ(t) +
Z t 0
A¯0 (s, t)T X(s) + lE∗ [A¯1 (s, t)T X(s)] ds +
Z t 0
where
Z
ϕ(t) =
C¯0 (s)T X(s) dW (s),
(5.30)
t ∈ [0, T ],
t
η(s)ds,
t ∈ [0, T ],
0
for some η(·) ∈ L2lF (0, T ; lRn ) with η(·) ≥ 0. By our conditions on A¯0 (· , ·) and A¯1 (· , ·), using Proposition 5.6, we have X(·) ≥ 0. Then by Theorem 4.2, one obtains 0 ≤ lE
Z
= lE
T
h ψ(t), X(t) i dt = lE 0
Z
0
T
Z
Z
T
h ϕ(t), Y (t) i dt
0
t
h η(s), Y (t) i dsdt = lE
0
Z
T
h η(s),
0
Z
T
Y (t)dt i ds. s
This proves (5.29). Since the conditions assumed in Proposition 5.6 are very close to necessary conditions, we feel that it is very difficult (if not impossible) to get better comparison results for general MF-BSVIEs. However, if the drift term does not contain Z(· , ·), we are able to get a much better looking result. Let us now make it precise. For i = 0, 1, we consider the following (nonlinear) MF-BSVIEs: Yi (t) = ψi (t) +
Z
t
T
gi (t, s, Yi (s), Γi (t, s, Yi (s)))ds −
where
h
Z
t
T
Zi (t, s)dW (s),
t ∈ [0, T ],
i
Γi (t, s, Yi (s)) = lE′ θi (t, s, Yi (s), Yi (s, ω ′ )) ≡
Z
Ω
θi (t, s, ω, ω ′ , Yi (s, ω), Yi (s, ω ′ ))lP(dω ′ ).
(5.31)
(5.32)
Note that in the above, Zi (· , ·) does not appear in the drift term. Theorem 5.9. Let gi : ∆ × Ω × lRn × lRm → lRn and θi : ∆ × Ω2 × lRn × lRn → lRm satisfy (H3)q for some q ≥ 2. Moreover, for all y, y ′ ∈ lRn , γ ∈ lRm , almost all (t, s) ∈ ∆, and almost surely ω, ω ′ ∈ Ω, the following hold: (g ) (t, s, ω, y, γ) ∈ lM c m×n , c n×m , (θ0 )y′ (t, s, ω, ω ′ , y, y ′ ) ∈ lM 0 γ + + m×n n ′ ′ c c , (θ0 )y (t, s, ω, ω , y, y ) ∈ lM , (g0 )y (t, s, ω, y, γ) ∈ M + +
42
(5.33)
and
g (t, s, ω, y, γ) ≥ g (t, s, ω, y, γ), 1 0 θ (t, s, ω, ω ′ , y, y ′ ) ≥ θ (t, s, ω, ω ′ , y, y ′ ). 1
(5.34)
0
Let ψi (·) ∈ L2FT (0, T ; lRn ) with
ψ0 (t) ≤ ψ1 (t),
∀t ∈ [0, T ], a.s. ,
(5.35)
and (Yi (·), Zi (· , ·)) be the adapted M-solutions to the corresponding MF-BSVIEs (5.31). Then Y0 (t) ≤ Y1 (t),
t ∈ [0, T ], a.s.
(5.36)
Proof. From the MF-BSVIEs satisfied by (Yi (·), Zi (· , ·)), we have Y1 (t) − Y0 (t) = ψ1 (t) − ψ0 (t) +
Z
t
T
h
g1 (t, s, Y1 (s), Γ1 (t, s, Y1 (s))) i
−g0 (t, s, Y0 (s), Γ0 (t, s, Y0 (s))) ds − = ψb1 (t) − ψb0 (t) + −
Z
Z
T
t
T
t
h
h
Z
T
h
i
Z1 (t, s) − Z0 (t, s) dW (s)
t
i
g0 (t, s, Y1 (s), Γ0 (t, s, Y1 (s))) − g0 (t, s, Y0 (s), Γ0 (t, s, Y0 (s))) ds i
Z1 (t, s) − Z0 (t, s) dW (s),
where (making use of our condition) ψb1 (t) − ψb0 (t) = ψ1 (t) − ψ0 (t) +
= ψ1 (t) − ψ0 (t) + +
Z
T
t
+
T
t
with
T
t
T
t
g1 (t, s, Y1 (s), Γ1 (t, s, Y1 (s))) − g0 (t, s, Y1 (s), Γ0 (t, s, Y1 (s))) ds
g1 (t, s, Y1 (s), Γ1 (t, s, Y1 (s))) − g0 (t, s, Y1 (s), Γ1 (t, s, Y1 (s))) ds
g0 (t, s, Y1 (s), Γ1 (t, s, Y1 (s))) − g0 (t, s, Y1 (s), Γ0 (t, s, Y1 (s))) ds
= ψ1 (t) − ψ0 (t) + Z
Z
Z
Z
T
t
g1 (t, s, Y1 (s), Γ1 (t, s, Y1 (s))) − g0 (t, s, Y1 (s), Γ1 (t, s, Y1 (s))) ds
(ge0 )γ (t, s) Γ1 (t, s, Y1 (s)) − Γ0 (t, s, Y1 (s)) ds ≥ 0, (ge0 )γ (t, s) =
Z
0
1
n×m
c (g0 )γ (t, s, Y1 (s), Γλ (t, s, Y1 (s)))dλ ∈ lM +
,
Γλ (t, s, Y1 (s)) = (1 − λ)Γ0 (t, s, Y1 (s)) + λΓ1 (t, s, Y1 (s)).
Next, we note that g0 (t, s, Y1 (s), Γ0 (t, s, Y1 (s))) − g0 (t, s, Y0 (s), Γ0 (t, s, Y0 (s))) =
Z
0
1n
h
i
(g0 )y (t, s, Yλ (s), Γλ (t, s)) Y1 (s) − Y0 (s) h
io
+(g0 )γ (t, s, Yλ (s), Γλ (t, s)) Γ0 (t, s, Y1 (s)) − Γ0 (t, s, Y0 (s)) h
i
h
dλ
i
≡ (g0 )y (t, s) Y1 (s) − Y0 (s) + (g0 )γ (t, s) Γ0 (t, s, Y1 (s)) − Γ0 (t, s, Y0 (s)) , 43
where
Y (s) = (1 − λ)Y (s) + λY (s), 0 1 λ Γ (t, s) = (1 − λ)Γ (t, s, Y (s)) + λΓ (t, s, Y (s)), 0 0 0 1 λ
and
Z 1 (g0 )y (t, s) = (g0 )y (t, s, Yλ (s), Γλ (t, s))dλ ∈ lMn+ , Z0 1 c n×m . (g0 )γ (t, s) = (g0 )γ (t, s, Yλ (s), Γλ (t, s))dλ ∈ lM + 0
Also,
Γ0 (t, s, Y1 (s)) − Γ0 (t, s, Y0 (s)) h
i
= lE′ θ0 (t, s, Y1 (s), Y1 (s, ω ′ )) − θ0 (t, s, Y0 (s), Y0 (s, ω ′ )) = lE′
Z
0
1n
(θ0 )y (t, s, Yλ (s), Yλ (s, ω ′ )) Y1 (s) − Y0 (s)
o
+(θ0 )y′ (t, s, Yλ (s), Yλ (s, ω ′ )) Y1 (s, ω ′ ) − Y0 (s, ω ′ )
h
i
h
dλ
i
≡ lE′ (θ0 )y (t, s) Y1 (s) − Y0 (s) + lE′ (θ0 )y′ (t, s) Y1 (s, ω ′ ) − Y0 (s, ω ′ ) , with
Z 1 (θ0 )y (t, s) = (θ0 )y (t, s, Yλ (s), Yλ (s, ω ′ ))dλ, 0 Z 1 (θ0 )y′ (t, s) = (θ0 )y′ (t, s, Yλ (s), Yλ (s, ω ′ ))dλ. 0
Thus,
Y1 (t) − Y0 (t) = ψb1 (t) − ψb0 (t) +
h
Z
n
A¯0 (t, s) Y1 (s) − Y0 (s)
io
+lE A¯1 (t, s) Y1 (s) − Y0 (s) ′
t
T
ds −
Z
T
t
(5.37)
Z1 (t, s) − Z0 (s, t) dW (s),
t ∈ [0, T ],
with h i A cn , ¯0 (t, s) = (g0 )y (t, s) + lE′ (g0 )γ (t, s)(θ0 )y (t, s) ∈ lM + n ¯ c
(t, s) ∈ ∆, a.s.
(5.38)
A1 (t, s) = (g0 )γ (t, s)(θ0 )y′ (t, s) ∈ lM+ ,
Now, for any ϕ(·) ∈ L2lF (0, T ; lRn ), let X(·) be the solution to the following linear MF-FSVIE: X(t) = ϕ(t) +
Z t 0
i
h
A¯0 (s, t)T X(s) + lE∗ A¯1 (s, t)T X(s) ds,
t ∈ [0, T ].
By Proposition 5.5, we know that X(·) ≥ 0. Then by Theorem 4.2, we have 0 ≤ lE
Z
0
Hence, (5.36) follows.
T
h ψb1 (t) − ψb0 (t), X(t) i dt = lE
44
Z
0
T
h ϕ(t), Y1 (t) − Y0 (t) i dt.
(5.39)
Combining the above two results, we are able to get a comparison theorem for the following MF-BSVIE: Yi (t) = ψi (t) + −
Z
T
Z
t
T
gi (t, s, Yi (s), Γi (t, s, Yi (s))) + C¯0 (t)Zi (s, t) ds
Zi (t, s)dW (s),
t
(5.40)
t ∈ [0, T ],
where Γi (·) is as that in (5.31). Under proper conditions, we will have the following comparison: lE
hZ
t
i
T
Y0 (s)ds | Ft ≤ lE
hZ
t
i
T
Y1 (s)ds | Ft ,
∀t ∈ [0, T ], a.s.
(5.41)
We omit the details here. We note that in Proposition 5.6, monotonicity conditions for ϕ(·), A0 (· , ·) and C0 (· , ·) play a crucial role. These kind of conditions were overlooked in [39, 40, 41]. The following example shows that in general (5.36) might be false. Example 5.10. Consider Y0 (t) = −
Z
and Y1 (t) = t −
T
Y0 (s)ds,
t
Z
t ∈ [0, T ],
T
t
Y1 (s)ds,
t ∈ [0, T ].
Then Y0 (t) = 0,
t ∈ [0, T ],
and the equation for Y1 (·) is equivalent to the following: Y˙ 1 (t) = Y1 (t) + 1,
Y1 (T ) = T,
whose solution is given by Y1 (t) = et−T (T + 1) − 1,
t ∈ [0, T ].
It is easy to see that Y1 (t) < 0 = Y0 (t),
∀t ∈ [0, T − ln(T + 1)).
Hence, (5.36) fails. To conclude this section, we would like to pose the following open question: For general MFBSVIEs, under what conditions on the coefficients, one has a nice-looking comparison theorem? We hope to be able to report some results concerning the above question before long.
45
6
An Optimal Control Problem for MF-SVIEs.
In this section, we will briefly discuss a simple optimal control problem for MF-FSVIEs. This can be regarded as an application of Theorem 4.1, a duality principle for MF-FSVIEs. The main clue is similar to the relevant results presented in [39, 41]. We will omit some detailed derivations. General optimal control problems for MF-FSVIEs will be much more involved and we will present systematic results for that in our forthcoming publications. Let U be a non-empty bounded convex set in lRm , and let U be the set of all lF-adapted processes m u : [0, T ] × Ω → U . Since U is bounded, we see that U ⊆ L∞ lF (0, T ; lR ). For any u(·) ∈ U , consider the following controlled MF-FSVIE: X(t) = ϕ(t) +
Z
t
b(t, s, X(s), u(s), Γb (t, s, X(s), u(s)))ds
0
+
Z
t
(6.1) σ
σ(t, s, X(s), u(s), Γ (t, s, X(s), u(s)))dW (s),
t ∈ [0, T ],
0
where
and
b : ∆∗ × Ω × lRn × U × lRm1 → lRn , σ : ∆∗ × Ω × lRn × U × lRm2 → lRn ,
with
Z b Γ (t, s, X(s), u(s)) = θ b (t, s, ω, ω ′ , X(s, ω), u(s, ω), X(s, ω ′ ), u(s, ω ′ ))lP(dω ′ ) Ω h i ≡ lE′ θ b (t, s, X(s), u(s), x′ , u′ ) ′ ′ , (x ,u )=(X(s),u(s)) Z Γσ (t, s, X(s), u(s)) = θ σ (t, s, ω, ω ′ , X(s, ω), u(s, ω), X(s, ω ′ ), u(s, ω ′ ))lP(dω ′ ) Ωh i , ≡ lE′ θ σ (t, s, X(s), u(s), x′ , u′ ) (x′ ,u′ )=(X(s),u(s))
θ b : ∆∗ × Ω2 × lRn × U × lRn × U → lRm1 , θ σ : ∆∗ × Ω2 × lRn × U × lRn × U → lRm2 .
In the above, X(·) is referred to as the state process and u(·) as the control process. We introduce the following assumptions for the state equation (Comparing with (H1)–(H2)): (H1)′′ The maps
b : ∆∗ × Ω × lRn × U × lRm1 → lRn , σ : ∆∗ × Ω × lRn × U × lRm2 → lRn ,
are measurable, and for all (t, x, u, γ, γ ′ ) ∈ [0, T ] × lRn × U × lRm1 × lRm2 , the map (s, ω) 7→ (b(t, s, ω, x, u, γ), σ(t, s, ω, x, u, γ ′ )) is lF-progressively measurable on [0, t]. Moreover, for all (t, s, ω, ω ′ ) ∈ ∆c × Ω, the map (x, u, γ, γ ′ ) 7→ (b(t, s, ω, x, u, γ), σ(t, s, x, u, γ ′ )) 46
is continuously differentiable and there exists some constant L > 0 such that |bx (t, s, ω, x, u, γ)| + |bu (t, s, ω, x, u, γ)| + |bγ (t, s, ω, x, u, γ)| +|σx (t, s, ω, x, u, γ ′ )| + |σu (t, s, ω, x, u, γ ′ )| + |σγ ′ (t, s, ω, x, u, γ ′ )| ≤ L,
(6.2)
(t, s, ω, x, u, γ, γ ′ ) ∈ ∆∗ × Ω × lRn × U × lRm1 × lRm2 . Further, |b(t, s, ω, x, u, γ)| + |σ(t, s, ω, x, u, γ ′ )| ≤ L(1 + |x| + |γ| + |γ ′ |), (t, s, ω, x, u, γ, γ ′ ) ∈ ∆∗ × Ω × lRn × U × lRm1 × lRm2 . (H2)′′ The maps
(6.3)
θ b : ∆∗ × Ω2 × lRn × U × lRn × U → lRm1 , θ σ : ∆∗ × Ω2 × lRn × U × lRn × U → lRm2 ,
are measurable, and for all (t, x, u, x′ , u′ ) ∈ [0, T ] × lRn × U × lRn × U , the map (s, ω, ω ′ ) 7→ (θ b (t, s, ω, ω ′ , x, u, x′ , u′ ), θ σ (t, s, ω, ω ′ , x, u, x′ , u′ )) is lF2 -progressively measurable on [0, t]. Moreover, for any (t, s, ω, ω ′ ) ∈ ∆∗ × Ω2 , (x, u, γ, γ ′ ) 7→ (θ b (t, s, ω, ω ′ , x, u, x′ , u′ ), θ σ (t, s, ω, ω ′ , x, u, x′ , u′ )) is continuously differentiable and there exists some constant L > 0 such that |θxb (t, s, ω, ω ′ , x, u, x′ , u′ )| + |θub (t, s, ω, ω ′ , x, u, x′ , u′ )| +|θxb ′ (t, s, ω, ω ′ , x, u, x′ , u′ )| + |θub ′ (t, s, ω, ω ′ , x, u, x′ , u′ )| +|θxσ (t, s, ω, ω ′ , x, u, x′ , u′ )| + |θuσ (t, s, ω, ω ′ , x, u, x′ , u′ )|
(6.4)
+|θxσ′ (t, s, ω, ω ′ , x, u, x′ , u′ )| + |θuσ′ (t, s, ω, ω ′ , x, u, x′ , u′ )| ≤ L, (t, s, ω, ω ′ , x, u, x′ , u′ ) ∈ ∆∗ × Ω2 × lRn × U × lRn × U. Further, |θ b (t, s, ω, ω ′ , x, u, x′ , u′ )| + |θ σ (t, s, ω, ω ′ , x, u, x′ , u′ )| ≤ L(1 + |x| + |x′ |), (t, s, ω, ω ′ , x, u, x′ , u′ ) ∈ ∆∗ × Ω2 × lRn × U × lRn × U.
(6.5)
It is easy to see that under (H1)′′ –(H2)′′ , for any given u(·) ∈ U , the state equation (6.1) satisfies (H1)–(H2). Hence, for any ϕ(·) ∈ LplF (0, T ; lRn ), (6.1) admits a unique solution X(·) ∈ Lp (0, T ; lRn ). To measure the performance of the control process u(·), the following (Lagrange type) cost functional is defined: J(u(·)) = lE
Z
T
g(s, X(s), u(s), Γg (s, X(s), u(s)))ds,
0
where g : [0, T ] × Ω × lRn × U × lRℓ → lR,
47
(6.6)
and g
Γ (s, X(s), u(s)) =
Z
θ g (s, ω, ω ′ , X(s, ω), u(s, ω), X(s, ω ′ ), u(s, ω ′ ))lP(dω ′ )
Ωh ′ g
i
≡ lE θ (s, X(s), u(s), x′ , u′ ) with
(x′ ,u′ )=(X(s),u(s))
,
θ g : [0, T ] × Ω2 × lRn × U × lRn × U → lRℓ . For convenience, we make the following assumptions for the functions involved in the cost functional. (H1)′′′ The map g : [0, T ] × Ω × lRn × U × lRℓ → lR is measurable, and for all (x, u, γ) ∈ lRn × U × lRℓ , the map (t, ω) 7→ g(t, ω, x, u, γ) is lF-progressively measurable. Moreover, for almost all (t, ω) ∈ ∆∗ × Ω, the map (x, u, γ) 7→ g(t, ω, x, u, γ) is continuously differentiable and there exists some constant L > 0 such that |gx (t, ω, x, u, γ)| + |gu (t, ω, x, u, γ)| + |gγ (t, ω, x, u, γ)| ≤ L, (t, ω, x, u, γ) ∈ [0, T ] × Ω × lRn × U × lRℓ .
(6.7)
Further, |g(t, ω, x, u, γ)| ≤ L(1 + |x| + |γ|), (t, ω, x, u, γ) ∈ [0, T ] × Ω × lRn × U × lRℓ .
(6.8)
(H2)′′′ The map θ g : [0, T ]×Ω2 ×lRn ×U ×lRn ×U → lRℓ is measurable, and for all (x, u, x′ , u′ ) ∈ lR × U × lRn × U , the map (s, ω, ω ′ ) 7→ (θ g (t, ω, ω ′ , x, u, x′ , u′ ) is lF2 -progressively measurable. Moreover, for almost all (t, ω, ω ′ ) ∈ [0, T ] × Ω2 , the map (x, u, x′ , u′ ) 7→ θ g (t, s, ω, ω ′ , x, u, x′ , u′ ) is continuously differentiable and there exists some constant L > 0 such that n
|θxg (t, ω, ω ′ , x, u, x′ , u′ )| + |θug (t, ω, ω ′ , x, u, x′ , u′ )| +|θxg ′ (t, ω, ω ′ , x, u, x′ , u′ )| + |θug ′ (t, ω, ω ′ , x, u, x′ , u′ )| ≤ L,
(6.9)
(t, ω, ω ′ , x, u, x′ , u′ ) ∈ [0, T ] × Ω2 × lRn × U × lRn × U. Further, |θ g (t, ω, ω ′ , x, u, x′ , u′ )| ≤ L(1 + |x| + |x′ |), (t, ω, ω ′ , x, u, x′ , u′ ) ∈ [0, T ] × Ω2 × lRn × U × lRn × U.
(6.10)
Under (H1)′′ –(H2)′′ and (H1)′′′ –(H2)′′′ , the cost functional J(u(·)) is well-defined. Then we can state our optimal control problem as follows. Problem (C). For given ϕ(·) ∈ LplF (0, T ; lRn ), find u ¯(·) ∈ U such that J(¯ u(·)) = inf J(u(·)). u(·)∈U
(6.11)
Any u ¯(·) ∈ U satisfying (6.11) is called an optimal control of Problem (C), and the corresponding ¯ ¯ state process X(·) is called an optimal state process. In this case, we refer to (X(·), u¯(·)) as an optimal pair. ¯ We now briefly derive the Pontryagin type maximum principle for any optimal pair (X(·), u¯(·)). To this end, we take any u(·) ∈ U , let uε (·) = u ¯(·) + ε[u(·) − u ¯(·)] = (1 − ε)¯ u(·) + εu(·) ∈ U . 48
Let X ε (·) be the corresponding state process. Then ¯ X ε (·) − X(·) ε→0 ε
X1 (·) = lim satisfies the following: X1 (t) =
Z tn
bx (t, s)X1 (s) + bu (t, s)[u(s) − u ¯(s)]
0
h
+bγ (t, s)lE′ θxb (t, s)X1 (s, ω) + θub (t, s)[u(s, ω) − u ¯(s, ω)] io
+θxb ′ (t, s)X1 (s, ω ′ ) + θub ′ (t, s)[u(s, ω ′ ) − u ¯(s, ω ′ )]
+
Z tn
σx (t, s)X1 (s) + σu (t, s)[u(s) − u ¯(s)]
0
h
ds
+σγ (t, s)lE′ θxσ (t, s)X1 (s, ω) + θuσ (t, s)[u(s, ω) − u ¯(s, ω)]
=
io
+θxσ′ (t, s)X1 (s, ω ′ ) + θuσ′ (t, s)[u(s, ω ′ ) − u ¯(s, ω ′ )]
Z t nh
i
bx (t, s) + bγ (t, s)lE′ θxb (t, s) X1 (s)
0
h
dW (s)
i
+ bu (t, s) + bγ (t, s)lE′ θub (t, s) [u(s) − u ¯(s)] h
io
+lE′ bγ (t, s)θxb ′ (t, s)X1 (s) + bγ (t, s)θub ′ (t, s)[u(s) − u ¯(s)]
+
Z t nh 0
h
i
σx (t, s) + σγ (t, s)lE′ θxσ (t, s) X1 (s) i
+ σu (t, s) + σγ (t, s)lE′ θuσ (t, s) [u(s) − u ¯(s)]
≡
Z tn
h
Z tn
+
0
h
io
A1 (t, s)X1 (s)+B1 (t, s)[u(s)− u ¯(s)]+lE′ C1 (t, s)X1 (s)+D1 (t, s)[u(s)− u ¯(s)]
b + ≡ ϕ(t)
Z tn
h
io
A0 (t, s)X1 (s) + lE′ C0 (t, s)X1 (s)
0
+
Z tn 0
io
h
io
A1 (t, s)X1 (s) + lE′ C1 (t, s)X1 (s)
dW (s),
¯ ¯ bξ (t, s) = bξ (t, s, X(s), u¯(s), Γb (t, s, X(s), u ¯(s))), ξ = x, u, γ, θ b (t, s) = θ b (t, s, ω, ω ′ , X(s, ′ ¯ ω), u¯(s, ω), X(s, ¯ ω ), u¯(s, ω ′ )), ξ = x, u, x′ , u′ , ξ ξ ¯ ¯ σξ (t, s) = σξ (t, s, X(s), u ¯(s), Γσ (t, s, X(s), u¯(s))), ξ = x, u, γ, b ′ ¯ ′ ′ ′ ′ σ ¯
ξ = x, u, x , u ,
A0 (t, s) = bx (t, s) + bγ (t, s)lE′ θxb (t, s), B0 (t, s) = bu (t, s) + bγ (t, s)lE′ θub (t, s), C (t, s) = b (t, s)θ b ′ (t, s), D0 (t, s) = bγ (t, s)θub ′ (t, s), 0 γ x A1 (t, s) = σx (t, s) + σγ (t, s)lE′ θxσ (t, s), B1 (t, s) = σu (t, s) + σγ (t, s)lE′ θuσ (t, s), σ σ
C1 (t, s) = σγ (t, s)θx′ (t, s),
D1 (t, s) = σγ (t, s)θu′ (t, s). 49
ds
dW (s)
ds
θξ (t, s) = θξ (t, s, ω, ω , X(s, ω), u¯(s, ω), X(s, ω ), u ¯(s, ω )),
and
dW (s)
A0 (t, s)X1 (s) + B0 (t, s)[u(s) − u ¯(s)] + lE′ C0 (t, s)X1 (s) + D0 (t, s)[u(s) − u ¯(s)]
0
where
io
+lE′ σγ (t, s)θxσ′ (t, s)X1 (s) + σγ (t, s)θuσ′ (t, s)[u(s) − u ¯(s)] h
ds
Also, b ϕ(t) =
Z tn
h
io
B0 (t, s)[u(s) − u ¯(s)] + lE′ D0 (t, s)[u(s) − u ¯(s)]
0
Z tn
+
0
h
ds io
B1 (t, s)[u(s) − u ¯(s)] + lE′ D1 (t, s)[u(s) − u ¯(s)]
¯ On the other hand, by the optimality of (X(·), u¯(·)), we have
dW (s).
J(uε (·)) − J(¯ u(·)) ε→0 ε
0 ≤ lim = lE
Z
T
gx (s)X1 (s) + gu (s)[u(s) − u ¯(s)]
0
= lE
Z
= lE
0
n
io
+θxg ′ (s)X1 (s, ω ′ ) + θug ′ (s)[u(s, ω ′ ) − u ¯(s, ω ′ )] T
nh
i
h
ds
h
io
+lE′ gγ (s)θxg ′ (s)X1 (s) + gγ (s)θug ′ (s)[u(s) − u ¯(s)] T
i
gx (s) + gγ (s)lE′ θxg (s) X1 (s) + gu (s) + gγ (s)lE′ θug (s) [u(s) − u ¯(s)]
n
a0 (s)T X1 (s) + b0 (s)T [u(s) − u ¯(s)] h
io
+lE′ c0 (s)T X1 (s) + d0 (s)T [u(s) − u ¯(s)]
= lE ϕb0 +
where
h
+gγ (s)lE′ θxg (s)X1 (s, ω) + θug (s)[u(s, ω) − u ¯(s, ω)]
0
Z
n
Z
T
0
h
ds
ds
i
o
a0 (s)T X1 (s) + lE′ c0 (s)T X1 (s) ds ,
g (s) = g (s, X(s), ¯ ¯ u¯(s), Γg (s, X(s), u ¯(s))), ξ = x, u, γ, ξ ξ ′ θ g (s) = θ g (s, ω, ω ′ , X(s, ¯ ω), u ¯ ¯(s, ω), X(s, ω ), u ¯(s, ω ′ )), ξ = x, u, x′ , u′ , ξ ξ
and
a0 (s)T = gx (s) + gγ (s)lE′ θxg (s), g T T
d0 (s) = gγ (s)θug ′ (s),
c0 (s) = gγ (s)θx′ (s),
Z ϕ b0 =
0
T
b0 (s)T = gu (s) + gγ (s)lE′ θug (s),
n
h
io
b0 (s)T [u(s) − u ¯(s)] + lE′ d0 (s)T [u(s) − u ¯(s)]
ds.
Then for any undetermined (Y (·), Z(· , ·)) ∈ M2 [0, T ], similar to the proof of Theorem 4.1, we have lE
Z
0
T
b i dt = lE h Y (t), ϕ(t)
Z
T
0
h X1 (t), Y (t) − h
Z
t
T
A0 (s, t)T Y (s) + A1 (s, t)T Z(s, t) i
+lE∗ C0 (s, t)T Y (s) + C1 (s, t)T Z(s, t) ds i dt.
50
Hence,
Z
n
0 ≤ lE ϕb0 + n
0
= lE ϕb0 − Z
Z
T
h
i
a0 (s)T X1 (s) + lE′ c0 (s)T X1 (s) ds
b i dt + h Y (t), ϕ(t)
0
T
0
= lE ϕb0 − −
Z
h
Z
T
o
Z
h X1 (t), Y (t) −
0
T
t
A0 (s, t)T Y (s) i
+A1 (s, t)T Z(s, t) + lE∗ C0 (s, t)T Y (s) + C1 (s, t)T Z(s, t) ds i dt
+ n
T
T
t
Z
T
b i dt + h Y (t), ϕ(t)
0
h
h X1 (t), a0 (t) i +lE′ h X1 (t), c0 (t) i Z
T
0
i o
dt
h X1 (t), Y (t) + a0 (t) + lE∗ c0 (t)
A0 (s, t)T Y (s) + A1 (s, t)T Z(s, t) h
i
o
+lE∗ C0 (s, t)T Y (s) + C1 (s, t)T Z(s, t) ds i dt .
We now let (Y (·), Z(· , ·)) ∈ M2 [0, T ] be the adapted M-solution to the following MF-BSVIE: Y (t) = −a0 (t) − lE∗ c0 (t) + h
∗
T
Z
t
T
A0 (s, t)T Y (s) + A1 (s, t)T Z(s, t) i
T
+lE C0 (s, t) Y (s) + C1 (s, t) Z(s, t) dsdt − Then
n
b0 − 0 ≤ lE ϕ
= lE −
nZ Z
T
0 T
+
−
nZ Z
−
0
b i dt h Y (t), ϕ(t)
0
0
T
0 T
h
Z
h
Z
h
Z t 0
T
T
t
h
h b0 (t) + [lE d0 (t)] −
io
dt i
B0 (t, s)[u(s) − u ¯(s)] + lE′ D0 (t, s)[u(s) − u ¯(s)] ds h
i
B1 (t, s)[u(s) − u ¯(s)] + lE′ D1 (t, s)[u(s) − u ¯(s)] dW (s) i dt
o
B0 (s, t)T Y (s) + lE∗ [D0 (s, t)T Y (s)] ds, u(t) − u ¯(t) i dt
o
B1 (s, t)T Z(s, t) + lE∗ [D1 (s, t)T Z(s, t)] ds, u(t) − u ¯(t) i dt .
Hence, we must have the following variational inequality: ∗
Z(t, s)dW (s).
t
h b0 (t) + [lE∗ d0 (t)], u(t) − u ¯(t) i dt
t
T
o
(6.12)
T
h b0 (t), u(t) − u ¯(t) i +lE′ h d0 (t), u(t) − u ¯(t) i
Z t
0
Z
n
T
h Y (t),
0
= lE
Z
Z
Z
t
T
B0 (s, t)T Y (s) + lE∗ [D0 (s, t)T Y (s)]
+B1 (s, t)T Z(s, t) + lE∗ [D1 (s, t)T Z(s, t)] ds, u − u ¯(t) i ≥ 0,
(6.13)
∀u ∈ U, a.e. t ∈ [0, T ], a.s. We now summarize the above derivation. ¯ Theorem 5.1. Let (H1)′′ –(H2)′′ and (H1)′′′ –(H2)′′′ hold and let (X(·), u¯(·)) be an optimal pair of Problem (C). Then the adjoint equation (6.12) admits a unique adapted M-solution (Y (·), Z(· , ·)) ∈ M2 [0, T ] such that the variational inequality (6.13) holds. 51
The purpose of presenting a simple optimal control problem of MF-FSVIEs here is to realize a major motivation of studying MF-BSVIEs. It is possible to discuss Bolza type cost functional. Also, some of the assumptions assumed in this section might be relaxed. However, we have no intention to have a full exploration of general optimal control problems for MF-FSVIEs in the current paper since such kind of general problems (even for FSVIEs) are much more involved and they deserve to be addressed in another paper. We will report further results along that line in a forthcoming paper.
References [1] N. U. Ahmed, Nonlinear diffusion governed by McKean-Vlasov equation on Hilbert space and optimal control, SIAM J. Control Optim., 46 (2007), 356–378. [2] N. U. Ahmed and X. Ding, A semilinear McKean-Vlasov stochastic evolution equation in Hilbert space, Stoch. Proc. Appl., 60 (1995), 65–85. [3] A. Aman and M. N’zi, Backward stochastic nonlinear Volterra integral equation with local Lipschitz drift, Prob. Math. Stat., 25 (2005), 105–127. [4] D. Andersson and B. Djehiche, A maximum principle for SDEs of mean-field type, Appl. Math. Optim., DOI 10.1007/s00245-010-9123-8. [5] V. V. Anh, W. Grecksch, and J. Yong, Regularity of backward stochastic Volterra integral equations in Hilbert spaces, Stoch. Anal. Appl., 29 (2011), 146–168. [6] M. Berger and V. Mizel, Volterra equations with Itˆ o integrals, I,II, J. Int. Equ. 2 (1980), 187–245, 319–337. [7] V. S. Borkar and K. S. Kumar, McKean-Vlasov limit in portfolio optimization, Stoch. Anal. Appl., 28 (2010), 884–906. [8] R. Buckdahn, B. Djehiche, and J. Li, A general stochastic maximum principle for SDEs of mean-field type, preprint. [9] R. Buckdahn, B. Djehiche, J. Li, and S. Peng, Mean-field backward stochastic differential equations: a limit approach, Ann. Probab., 37 (2009), 1524–1565. [10] R. Buckdahn, J. Li, and S. Peng, Mean-field backward stochastic differential equations and related partial differential equations, Stoch. Proc. Appl., 119, (2009) 3133–3154. [11] T. Chan, Dynamics of the McKean-Vlasov equation, Ann. Probab. 22 (1994), 431–441. [12] T. Chiang, McKean-Vlasov equations with discontinuous coefficients, Soochow J. Math., 20 (1994), 507–526. [13] D. Crisan and J. Xiong, Approximate McKean-Vlasov representations for a class of SPDEs, Stochastics, 82 (2010), 53–68.
52
[14] D. A. Dawson, Critical dynamics and fluctuations for a mean-field model of cooperative behavior, J. Statist. Phys., 31 (1983), 29–85. [15] D. A. Dawson and J. G¨ artner, Large deviations from the McKean-Vlasov limit for weakly interacting diffusions, Stochastics, 20 (1987), 247–308. [16] J. G¨ artner, On the Mckean-Vlasov limit for interacting diffusions, Math. Nachr., 137 (1988), 197–248. [17] C. Graham, McKean-Vlasov Ito-Skorohod equations, and nonlinear diffusions with discrete jump sets, Stoch. Proc. Appl., 40 (1992), 69–82. [18] Y. Hu and S. Peng, On the comparison theorem for multidimensional BSDEs, C. R. Math. Acad. Sci. Paris, 343 (2006), 135–140. [19] M. Huang, R. P. Malham´e, and P. E. Caines, Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle, Comm. Inform. Systems, 6 (2006), 221–252. [20] M. Kac, Foundations of kinetic theory, Proc. 3rd Berkeley Sympos. Math. Statist. Prob. 3 (1956), 171–197. [21] P. M. Kotelenez and T. G. Kurtz, Macroscopic limit for stochastic partial differential equations of McKean-Vlasov type, Prob. Theory Rel. Fields, 146 (2010), 189–222. [22] J. M. Lasry and P. L. Lions, Mean field games, Japan J. Math., 2 (2007), 229–260. [23] J. Lin, Adapted solution of a backward stochastic nonlinear Volterra integral equation, Stoch. Anal. Appl., 20 (2002), 165–183. [24] N. I. Mahmudov and M. A. McKibben, On a class of backward McKean-Vlasov stochastic equations in Hilbert space: existence and convergence properties, Dynamic Systems Appl., 16 (2007), 643–664. [25] H. P. McKean, A class of Markov processes associated with nonlinear parabolic equations, Proc. Natl. Acad. Sci. USA, 56 (1966), 1907–1911. [26] T. Meyer-Brandis, B. Oksendal, and X. Zhou, A mean-field stochastic maximum principle via Malliavin calculus, A special issue for Mark Davis’ Festschrift, to appear in Stochastics. [27] J. Y. Park, P. Balasubramaniam, and Y. H. Kang, Controllability of McKean-Vlasov stochastic integrodifferential evolution equation in Hilbert spaces, Numer. Funct. Anal. Optim., 29 (2008), 1328–1346. [28] E. Pardoux and S. Peng, Adapted solution of a backward stochastic differential equation, Systems Control Lett., 14 (1990), 55–61. [29] E. Pardoux and P. Protter, Stochastic Volterra equations with anticipating coefficients, Ann. Probab., 18 (1990), 1635–1655. 53
[30] P. Protter, Volterra equations driven by semimartingales, Ann. Prabab., 13 (1985), 519–530. [31] Y. Ren, On solutions of backward stochastic Volterra integral equations with jumps in Hilbert spaces, J. Optim. Theory Appl., 144 (2010), 319–333. [32] M. Scheutzow, Uniqueness and non-uniqueness of solutions of Vlasov-McKean equations, J. Austral. Math. Soc., Ser. A, 43 (1987), 246–256. [33] A. S. Sznitman, Topics in propagation of chaos, Ecˆ ole de Probabilites de Saint Flour, XIX1989. Lecture Notes in Math, vol. 1464, Springer, Berlin 1989, 165–251. [34] C. Tudor, A comparion theorem for stochastic equations with Volterra drifts, Ann. Probab., 17 (1989), 1541–1545. [35] T. Wang, Lp solutions of backward stochastic Volterra integral equations, Acta Math. Sinica, to appear. [36] T. Wang and Y. Shi, Symmetrical solutions of backward stochastic Volterra integral equations and applications, Discrete Contin. Dyn. Syst., Ser. B, 14 (2010), 251–274. [37] Z. Wang and X. Zhang, Non-Lipschitz backward stochastic Volterra type equations with jumps, Stoch. Dyn., 7 (2007), 479–496. [38] A. Yu. Veretennikov, On ergodic measures for McKean–Vlasov stochastic equations, From Stochastic Calculus to Mathematical Finance, 623–633, Springer, Berline, 2006. [39] J. Yong, Backward stochastic Volterra integral equations and some related problems, Stochastic Proc. Appl., 116 (2006), 779–795. [40] J. Yong, Continuous-time dynamic risk measures by backward stochastic Volterra integral equations, Appl. Anal., 86 (2007), 1429–1442. [41] J. Yong, Well-posedness and regularity of backward stochastic Volterra integral equation, Probab. Theory Relat. Fields, 142 (2008), 21–77. [42] J. Yong and X. Y. Zhou, Stochastic Controls: Hamiltonian Systems and HJB Equations, Springer-Verlag, New York, 1999. [43] X. Zhang, Stochastic Volterra equations in Banach spaces and stochastic partial differential equation, J. Funct. Anal., 258 (2010), 1361–1425.
54