arXiv:0803.3712v3 [math.PR] 23 Sep 2009
Numerical Algorithms and Simulations for Reflected Backward Stochastic Differential Equations with Two Continuous Barriers

Mingyu Xu*

Institute of Applied Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, 100080, China.
Abstract. In this paper we study different algorithms for reflected backward stochastic differential equations (BSDEs in short) with two continuous barriers, based on a binomial tree framework. We introduce numerical algorithms by the penalization method and by the reflected method, respectively. Finally, simulation results are presented.
Keywords: Backward Stochastic Differential Equations with two continuous barriers, Penalization method, Discrete Brownian motion, Numerical simulation
AMS: 60H10, 34K28
1 Introduction
Non-linear backward stochastic differential equations (BSDEs in short) were first introduced by Pardoux and Peng ([21], 1990), who proved the existence and uniqueness of the adapted solution under square integrability assumptions on the coefficient and the terminal condition, together with the assumption that the coefficient $g(t,\omega,y,z)$ is $(t,\omega)$-uniformly Lipschitz in $(y,z)$. Then El Karoui, Kapoudjian, Pardoux, Peng and Quenez introduced the notion of reflected BSDE (RBSDE in short) ([11], 1997) with one continuous lower barrier. More precisely, a solution of such an equation, associated to a coefficient $g$, a terminal value $\xi$ and a continuous barrier $L_t$, is a triple $(Y_t, Z_t, A_t)_{0\le t\le T}$ of adapted processes valued in $\mathbb{R}^{1+d+1}$, which satisfies

  $Y_t = \xi + \int_t^T g(s, Y_s, Z_s)\,ds + A_T - A_t - \int_t^T Z_s\,dB_s, \quad 0 \le t \le T$, a.s.,
*Email: [email protected]. This work is supported in part by the National Basic Research Program of China (973 Program), No. 2007CB814902.
and $Y_t \ge L_t$ a.s. for any $0 \le t \le T$. Here $A_t$ is non-decreasing and continuous, and $B_t$ is a $d$-dimensional Brownian motion. The role of $A_t$ is to push the process $Y$ upward in a minimal way, in order to keep it above $L$; in this sense it satisfies $\int_0^T (Y_s - L_s)\,dA_s = 0$. Following this paper, Cvitanic and Karatzas ([9], 1996) introduced the notion of reflected BSDE with two continuous barriers. In this case a solution of such an equation, associated to a coefficient $g$, a terminal value $\xi$, a continuous lower barrier $L_t$ and a continuous upper barrier $U_t$, with $L_t \le U_t$ and $L_T \le \xi \le U_T$ a.s., is a quadruple $(Y_t, Z_t, A_t, K_t)_{0\le t\le T}$ of adapted processes, which satisfies

  $Y_t = \xi + \int_t^T g(s, Y_s, Z_s)\,ds + (A_T - A_t) - (K_T - K_t) - \int_t^T Z_s\,dB_s, \quad 0 \le t \le T$, a.s.,

and $L_t \le Y_t \le U_t$ a.s. for any $0 \le t \le T$. Here $A_t$ and $K_t$ are increasing continuous processes, whose roles are to keep the process $Y$ between $L$ and $U$ in such a way that

  $\int_0^T (Y_s - L_s)\,dA_s = 0$ and $\int_0^T (Y_s - U_s)\,dK_s = 0$.
To prove the existence and uniqueness of a solution, the method is based on a Picard-type iteration procedure, which requires at each step the solution of a Dynkin game problem. Furthermore, the authors proved the existence result by a penalization method when the coefficient $g$ does not depend on $z$. In 2004 ([16]), Lepeltier and San Martin relaxed in some sense the conditions on the barriers: they proved an existence result by a penalization method, without any assumption other than square integrability on $L$ and $U$, but only when there exists a continuous semi-martingale with terminal value $\xi$ between $L$ and $U$. More recently, Lepeltier and Xu ([18]) studied the case where the barriers are right continuous with left limits (RCLL in short), and proved the existence and uniqueness of the solution both by Picard iteration and by the penalization method. In 2005, Peng and Xu considered the most general case, where the barriers are merely $L^2$-processes, via a penalization method; they studied a special penalization BSDE, which penalizes with respect to the two barriers at the same time, and proved that the solutions of these equations converge to the solution of the reflected BSDE. The calculation and simulation of BSDEs is essentially different from that of SDEs (see [14]). When $g$ is linear in $y$ and $z$, we may obtain the solution of the BSDE by considering its dual equation, which is a forward SDE. However, for a nonlinear $g$, we cannot find the solution explicitly. Here our numerical algorithms are based on approximating the Brownian motion by a random walk. This method was first considered by Peng and Xu [25]. The convergence of this type of numerical algorithm was proved by Briand, Delyon and Mémin in 2000 ([4]) and 2002 ([5]). In 2002, Mémin, Peng and Xu studied the algorithms for reflected BSDEs with one barrier and proved their convergence (cf. [20]). Recently Chassagneux also studied the discrete-time approximation of doubly reflected BSDEs in [6].
In this paper, we consider different numerical algorithms for reflected BSDEs with two continuous barriers. The basic idea is to approximate the Brownian motion by random walks based on a binomial tree model. Compared with the one-barrier case (cf. [20]), the additional barrier brings more difficulties in proving the convergence of the algorithms, which requires finer estimates. When the Brownian motion is 1-dimensional, our algorithms have advantages in computer programming. In fact we developed a software package based on these algorithms for BSDEs with two barriers; it also contains programs for classical BSDEs and reflected BSDEs with one barrier. One significant advantage of this package is its convenient user interface: any user who knows the basics of BSDEs can run it without difficulty. This paper is organized as follows. In Section 2, we recall some classical results on reflected BSDEs with two continuous barriers, and the discretization of reflected BSDEs. In Section 3, we introduce the implicit and implicit-explicit penalization schemes and prove their convergence. In Section 4, we study the implicit and explicit reflected schemes and obtain their convergence. In Section 5, we present some simulations for reflected BSDEs with two barriers. The proof of convergence of the penalization solutions is given in the Appendix. We should point out that recently there have been many different algorithms for computing solutions of BSDEs, and related results in numerical analysis, for example [3], [4], [7], [8], [10], [13], [19], [26]. In contrast to these results, our methods are easy to implement on a computer in the 1-dimensional case. In the multi-dimensional case the algorithms still apply, but their implementation is harder, since it requires a much larger amount of computation than the 1-dimensional case.
2 Preliminaries: Reflected BSDEs with two barriers and basic discretization
Let $(\Omega, \mathcal{F}, P)$ be a complete probability space and $(B_t)_{t\ge 0}$ a 1-dimensional Brownian motion defined on a fixed interval $[0,T]$, with $T > 0$ fixed. We denote by $\{\mathcal{F}_t\}_{0\le t\le T}$ the natural filtration generated by the Brownian motion $B$, i.e. $\mathcal{F}_t = \sigma\{B_s;\, 0\le s\le t\}$, augmented with all $P$-null sets of $\mathcal{F}$. We mainly consider the 1-dimensional case, since the solution of a reflected BSDE is 1-dimensional. The algorithms in this paper can be generalized to a multi-dimensional Brownian motion, at the price of a much larger amount of computation. We introduce the following spaces for $p \in [1, \infty)$:

• $L^p(\mathcal{F}_t) := \{\mathbb{R}$-valued $\mathcal{F}_t$-measurable random variables $X$ s.t. $E[|X|^p] < \infty\}$;

• $L^p_{\mathcal{F}}(0,t) := \{\mathbb{R}$-valued $\mathcal{F}_t$-adapted processes $\varphi$ defined on $[0,t]$, s.t. $E\int_0^t |\varphi_s|^p\,ds < \infty\}$;

• $S^p(0,t) := \{\mathbb{R}$-valued $\mathcal{F}_t$-adapted continuous processes $\varphi$ defined on $[0,t]$, s.t. $E[\sup_{0\le s\le t} |\varphi_s|^p] < \infty\}$;

• $A^p(0,t) := \{$increasing processes $A$ in $S^p(0,t)$ with $A_0 = 0\}$.

We are especially interested in the case $p = 2$.
2.1 Reflected BSDE: Definition and convergence results
The random variable $\xi$, the terminal value, satisfies $\xi \in L^2(\mathcal{F}_T)$. Let $g : [0,T] \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ be a $t$-uniformly Lipschitz function in $(y,z)$, i.e., there exists a fixed $\mu > 0$ such that

  $|g(t,y_1,z_1) - g(t,y_2,z_2)| \le \mu(|y_1-y_2| + |z_1-z_2|)$, for all $t \in [0,T]$, $(y_1,z_1), (y_2,z_2) \in \mathbb{R} \times \mathbb{R}$,   (1)

and $g(\cdot,0,0)$ is square integrable. The solution of our BSDE with two barriers is reflected between a lower barrier $L$ and an upper barrier $U$, which are supposed to satisfy:

Assumption 2.1 $L$ and $U$ are $\mathcal{F}_t$-progressively measurable continuous real-valued processes such that

  $E[\sup_{0\le t\le T} ((L_t)^+)^2 + \sup_{0\le t\le T} ((U_t)^-)^2] < \infty$,   (2)

and there exists a continuous process $X_t = X_0 - \int_0^t \sigma_s\,dB_s + V_t^+ - V_t^-$, where $\sigma \in L^2_{\mathcal{F}}(0,T)$ and $V^+$, $V^-$ are $(\mathcal{F}_t)$-adapted continuous increasing processes with $E[|V_T^+|^2] + E[|V_T^-|^2] < \infty$, such that $L_t \le X_t \le U_t$, $P$-a.s., for $0 \le t \le T$.

Remark 2.1 Condition (2) permits us to treat situations where $U_t \equiv +\infty$ or $L_t \equiv -\infty$, $t \in [0,T]$; in such cases the corresponding reflected BSDE with two barriers becomes a reflected BSDE with a single lower barrier $L$ or a single upper barrier $U$, respectively.

Definition 2.1 A solution of the reflected BSDE with two continuous barriers is a quadruple $(Y,Z,A,K) \in S^2(0,T) \times L^2_{\mathcal{F}}(0,T) \times A^2(0,T) \times A^2(0,T)$ defined on $[0,T]$, satisfying

  $-dY_t = g(t,Y_t,Z_t)\,dt + dA_t - dK_t - Z_t\,dB_t$, $\quad Y_T = \xi$,
  $L_t \le Y_t \le U_t$, $\quad dA_t \ge 0$, $dK_t \ge 0$, $dA_t \cdot dK_t = 0$,   (3)

and the reflecting conditions

  $\int_0^T (Y_t - L_t)\,dA_t = \int_0^T (Y_t - U_t)\,dK_t = 0$.   (4)
To prove the existence of the solution, the penalization method is important. Thanks to the convergence results for penalization solutions in [16], [15] for the case of continuous barriers, together with the methods in [23], we have the following results; in particular, they give the convergence rate of the penalization solutions.
Theorem 2.1 (a) There exists a unique solution $(Y,Z,A,K)$ of the reflected BSDE, i.e. it satisfies (3), (4). Moreover it is the limit of the penalization solutions $(\hat{Y}_t^{m,p}, \hat{Z}_t^{m,p}, \hat{A}_t^{m,p}, \hat{K}_t^{m,p})$ as $m \to \infty$ then $p \to \infty$, or equivalently as $p \to \infty$ then $m \to \infty$. Here the penalization solution $(\hat{Y}_t^{m,p}, \hat{Z}_t^{m,p}, \hat{A}_t^{m,p}, \hat{K}_t^{m,p})$ with respect to the two barriers $L$ and $U$ is defined, for $m, p \in \mathbb{N}$, as the solution of the classical BSDE

  $-d\hat{Y}_t^{m,p} = g(t, \hat{Y}_t^{m,p}, \hat{Z}_t^{m,p})\,dt + m(\hat{Y}_t^{m,p} - L_t)^-\,dt - p(\hat{Y}_t^{m,p} - U_t)^+\,dt - \hat{Z}_t^{m,p}\,dB_t$, $\quad \hat{Y}_T^{m,p} = \xi$,   (5)

and we set $\hat{A}_t^{m,p} = m\int_0^t (\hat{Y}_s^{m,p} - L_s)^-\,ds$, $\hat{K}_t^{m,p} = p\int_0^t (\hat{Y}_s^{m,p} - U_s)^+\,ds$.

(b) Consider a special penalized BSDE for the reflected BSDE with two barriers: for any $p \in \mathbb{N}$,

  $-dY_t^p = g(t, Y_t^p, Z_t^p)\,dt + p(Y_t^p - L_t)^-\,dt - p(Y_t^p - U_t)^+\,dt - Z_t^p\,dB_t$, $\quad Y_T^p = \xi$,   (6)

with $A_t^p = \int_0^t p(Y_s^p - L_s)^-\,ds$ and $K_t^p = \int_0^t p(Y_s^p - U_s)^+\,ds$. Then we have, as $p \to \infty$: $Y_t^p \to Y_t$ in $S^2(0,T)$, $Z_t^p \to Z_t$ in $L^2_{\mathcal{F}}(0,T)$, and $A_t^p \to A_t$ as well as $K_t^p \to K_t$ weakly in $S^2(0,T)$. Moreover there exists a constant $C$ depending on $\xi$, $g(t,0,0)$, $\mu$, $L$ and $U$, such that

  $E[\sup_{0\le t\le T} |Y_t^p - Y_t|^2 + \int_0^T |Z_t^p - Z_t|^2\,dt + \sup_{0\le t\le T} [(A_t - K_t) - (A_t^p - K_t^p)]^2] \le \frac{C}{\sqrt{p}}$.   (7)

The proof is based on the results in [16] and [23]; we put it in the Appendix.

Remark 2.2 In the following we focus on the penalized BSDE (6), which penalizes with respect to the two barriers at the same time; $p$ in the superscript always stands for the penalization parameter.

Now we consider a special case.

Assumption 2.2 $L$ and $U$ are Itô processes of the form

  $L_t = L_0 + \int_0^t l_s\,ds + \int_0^t \sigma_s^l\,dB_s$,
  $U_t = U_0 + \int_0^t u_s\,ds + \int_0^t \sigma_s^u\,dB_s$,   (8)

where $l_s$ and $u_s$ are right continuous with left limits (RCLL in short), $\sigma_s^l$ and $\sigma_s^u$ are predictable, and $E\int_0^T [|l_s|^2 + |\sigma_s^l|^2 + |u_s|^2 + |\sigma_s^u|^2]\,ds < \infty$.

It is easy to check that if $L_t \le U_t$, then Assumption 2.1 is satisfied: we may just set $X = L$ or $U$, with $\sigma_s = \sigma_s^l$ or $\sigma_s^u$ and $V^\pm = \int_0^\cdot l_s^\pm\,ds$ or $\int_0^\cdot u_s^\pm\,ds$, where $l_s^\pm$ (resp. $u_s^\pm$) denotes the positive or negative part of $l$ (resp. $u$). As in Proposition 4.2 of [11], we have the following proposition for the two increasing processes, which gives their integrability in terms of the barriers.
Proposition 2.1 Let $(Y,Z,A,K)$ be a solution of the reflected BSDE (3). Then $Z_t = \sigma_t^l$, $dP \times dt$-a.s. on the set $\{Y_t = L_t\}$, and $Z_t = \sigma_t^u$, $dP \times dt$-a.s. on the set $\{Y_t = U_t\}$. Moreover,

  $0 \le dA_t \le 1_{\{Y_t = L_t\}} [g(t, L_t, \sigma_t^l) + l_t]^-\,dt$, $\quad 0 \le dK_t \le 1_{\{Y_t = U_t\}} [g(t, U_t, \sigma_t^u) + u_t]^+\,dt$.

So there exist processes $\alpha$ and $\beta$, with $0 \le \alpha_t, \beta_t \le 1$, such that $dA_t = \alpha_t 1_{\{Y_t = L_t\}} [g(t, L_t, \sigma_t^l) + l_t]^-\,dt$ and $dK_t = \beta_t 1_{\{Y_t = U_t\}} [g(t, U_t, \sigma_t^u) + u_t]^+\,dt$.

Proof. These results follow easily from the techniques of Proposition 4.2 in [11], noting that on the set $\{L_t = U_t\}$ we have $\sigma_t^l = \sigma_t^u$ and $l_t = u_t$; we omit the details.

In the following, we will work under Assumption 2.2.
2.2 Approximation of Brownian motion and barriers
We use a random walk to approximate the Brownian motion: for each $n = 1, 2, \ldots$, consider

  $B_t^n := \sqrt{\delta} \sum_{j=1}^{[t/\delta]} \varepsilon_j^n$, for all $0 \le t \le T$, $\quad \delta = \frac{T}{n}$,

where $\{\varepsilon_j^n\}_{j=1}^n$ is a $\{1,-1\}$-valued i.i.d. sequence with $P(\varepsilon_j^n = 1) = P(\varepsilon_j^n = -1) = 0.5$, i.e., a Bernoulli sequence. We set the discrete filtration $\mathcal{G}_j^n := \sigma\{\varepsilon_1^n, \ldots, \varepsilon_j^n\}$ and $t_j = j\delta$, for $0 \le j \le n$. We denote by $D_t$ the space of RCLL functions from $[0,t]$ to $\mathbb{R}$, endowed with the topology of uniform convergence, and we assume:

Assumption 2.3 $\Gamma : D_T \to \mathbb{R}$ is $K$-Lipschitz.

We consider $\xi = \Gamma(B)$, which is $\mathcal{F}_T$-measurable, and $\xi^n = \Gamma(B^n)$, which is $\mathcal{G}_n^n$-measurable, so that $E[|\xi|^2] + \sup_n E[|\xi^n|^2] < \infty$.
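The scaled random walk above is straightforward to simulate. A minimal Python sketch (NumPy-based; the name `brownian_walk` is ours, not from the paper's software package):

```python
import numpy as np

def brownian_walk(n, T=1.0, seed=None):
    """Approximate Brownian motion on [0, T] by the scaled random walk
    B^n_{t_j} = sqrt(delta) * sum_{i<=j} eps_i, with i.i.d. Bernoulli steps
    eps_i = +/-1, each with probability 0.5, and delta = T/n."""
    rng = np.random.default_rng(seed)
    delta = T / n
    eps = rng.choice([-1.0, 1.0], size=n)               # i.i.d. symmetric Bernoulli
    B = np.concatenate([[0.0], np.sqrt(delta) * np.cumsum(eps)])
    return B, eps, delta                                 # B[j] = B^n_{t_j}, n+1 points
```

By construction every increment of `B` is exactly $\pm\sqrt{\delta}$, which is what makes the binomial-tree recursions below possible.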
Now we consider the approximation of the barriers $L$ and $U$. Notice that $L$ and $U$ are progressively measurable with respect to the filtration $(\mathcal{F}_t)$, which is generated by the Brownian motion, so they can be represented as functionals of the Brownian motion: for each $t \in [0,T]$, $L_t = \Psi_1(t, (B_s)_{0\le s\le t})$ and $U_t = \Psi_2(t, (B_s)_{0\le s\le t})$, where $\Psi_1(t,\cdot), \Psi_2(t,\cdot) : D_t \to \mathbb{R}$, and we assume that $\Psi_1(t,\cdot)$ and $\Psi_2(t,\cdot)$ are Lipschitz. Then we get the discretization of the barriers, $L_j^n = \Psi_1(t_j, (B_s^n)_{0\le s\le t_j})$ and $U_j^n = \Psi_2(t_j, (B_s^n)_{0\le s\le t_j})$. If $L_t \le U_t$, then $L_j^n \le U_j^n$.

On the other hand, we mainly consider barriers which are Itô processes satisfying Assumption 2.2, and then we have a natural approximation: for $1 \le j \le n$,

  $L_j^n = L_0 + \delta \sum_{i=0}^{j-1} l_i + \sqrt{\delta} \sum_{i=0}^{j-1} \sigma_i^l \varepsilon_{i+1}^n$,
  $U_j^n = U_0 + \delta \sum_{i=0}^{j-1} u_i + \sqrt{\delta} \sum_{i=0}^{j-1} \sigma_i^u \varepsilon_{i+1}^n$,

where $l_i = l_{t_i}$, $\sigma_i^l = \sigma_{t_i}^l$, $u_i = u_{t_i}$, $\sigma_i^u = \sigma_{t_i}^u$. Then $L_j^n$ and $U_j^n$ are discrete versions of $L$ and $U$, with $\sup_n E[\sup_j ((L_j^n)^+)^2 + \sup_j ((U_j^n)^-)^2] < \infty$, and $L_j^n \le U_j^n$ still holds. In the following we may use both approximations.

In this paper we study two different types of numerical schemes. The first is based on the penalization approach, whereas the second obtains the solution $y$ by reflecting it between $L$ and $U$, producing the two reflecting processes $a$ and $k$ directly. Throughout this paper, $n$ always stands for the discretization of the time interval, and a process $(\phi_j^n)_{0\le j\le n}$ is a discrete process with $n + 1$ values, for $\phi = L, U, y^p, y$, etc.
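Under Assumption 2.2 the discrete barriers follow the recursion above directly. A sketch, with the simplifying assumption (ours) that the Itô coefficients $l, \sigma^l, u, \sigma^u$ are constant:

```python
import numpy as np

def discrete_barriers(eps, delta, L0, l, sig_l, U0, u, sig_u):
    """Discrete barriers from the Ito form:
    L^n_j = L0 + delta * sum_{i<j} l + sqrt(delta) * sig_l * sum_{i<j} eps_{i+1},
    and similarly for U^n_j; coefficients are taken constant for simplicity."""
    n = len(eps)
    j = np.arange(n + 1)                                  # includes j = 0
    walk = np.concatenate([[0.0], np.cumsum(eps)])        # sum of the first j steps
    L = L0 + delta * l * j + np.sqrt(delta) * sig_l * walk
    U = U0 + delta * u * j + np.sqrt(delta) * sig_u * walk
    return L, U
```

When $\sigma^l = \sigma^u$ and $l = u$, the gap $U^n_j - L^n_j = U_0 - L_0$ is constant in $j$, so $L^n_j \le U^n_j$ holds at every node as soon as $L_0 \le U_0$.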
3 Algorithms based on Penalization BSDE and their Convergence

3.1 Discretization of Penalization BSDE and Penalization schemes
First we consider the discretization of the penalized BSDE with respect to the two discrete barriers $L^n$ and $U^n$. After discretization of the time interval, we get the following discrete backward equation on each small interval $[t_j, t_{j+1}]$, for $0 \le j \le n-1$:

  $y_j^{p,n} = y_{j+1}^{p,n} + g(t_j, y_j^{p,n}, z_j^{p,n})\delta + a_j^{p,n} - k_j^{p,n} - z_j^{p,n}\sqrt{\delta}\,\varepsilon_{j+1}^n$,   (9)
  $a_j^{p,n} = p\delta (y_j^{p,n} - L_j^n)^-$, $\quad k_j^{p,n} = p\delta (y_j^{p,n} - U_j^n)^+$,

with terminal condition $y_n^{p,n} = \xi^n$. For a large fixed $p > 0$, (6) is in fact a classical BSDE. By the numerical algorithms for BSDEs (cf. [24]), the scheme gives $z_j^{p,n} = \frac{1}{2\sqrt{\delta}}(y_{j+1}^{p,n}|_{\varepsilon_{j+1}^n = 1} - y_{j+1}^{p,n}|_{\varepsilon_{j+1}^n = -1})$, and $y_j^{p,n}$ is solved from the inversion of the mapping

  $y_j^{p,n} = (\Theta_p)^{-1}(E[y_{j+1}^{p,n} \mid \mathcal{G}_j^n])$, where $\Theta_p(y) = y - g(t_j, y, z_j^{p,n})\delta - p\delta(y - L_j^n)^- + p\delta(y - U_j^n)^+$,

by substituting $E[y_{j+1}^{p,n} \mid \mathcal{G}_j^n] = \frac{1}{2}(y_{j+1}^{p,n}|_{\varepsilon_{j+1}^n = 1} + y_{j+1}^{p,n}|_{\varepsilon_{j+1}^n = -1})$ into it. The increments $a_j^{p,n}$ and $k_j^{p,n}$ of the increasing processes are then obtained from (9).

In many cases the inversion of the operator $\Theta_p$ is not easy to compute. So we apply the implicit–explicit penalization scheme to (9), replacing $y_j^{p,n}$ in $g$ by $E[\bar{y}_{j+1}^{p,n} \mid \mathcal{G}_j^n]$, and get

  $\bar{y}_j^{p,n} = \bar{y}_{j+1}^{p,n} + g(t_j, E[\bar{y}_{j+1}^{p,n} \mid \mathcal{G}_j^n], \bar{z}_j^{p,n})\delta + \bar{a}_j^{p,n} - \bar{k}_j^{p,n} - \bar{z}_j^{p,n}\sqrt{\delta}\,\varepsilon_{j+1}^n$,   (10)
  $\bar{a}_j^{p,n} = p\delta(\bar{y}_j^{p,n} - L_j^n)^-$, $\quad \bar{k}_j^{p,n} = p\delta(\bar{y}_j^{p,n} - U_j^n)^+$.

In the same way, we get $\bar{z}_j^{p,n} = \frac{1}{2\sqrt{\delta}}(\bar{y}_{j+1}^{p,n}|_{\varepsilon_{j+1}^n = 1} - \bar{y}_{j+1}^{p,n}|_{\varepsilon_{j+1}^n = -1})$ and

  $\bar{y}_j^{p,n} = E[\bar{y}_{j+1}^{p,n} \mid \mathcal{G}_j^n] + g(t_j, E[\bar{y}_{j+1}^{p,n} \mid \mathcal{G}_j^n], \bar{z}_j^{p,n})\delta + \bar{a}_j^{p,n} - \bar{k}_j^{p,n}$.

Solving this equation, we obtain

  $\bar{y}_j^{p,n} = E[\bar{y}_{j+1}^{p,n} \mid \mathcal{G}_j^n] + g(t_j, E[\bar{y}_{j+1}^{p,n} \mid \mathcal{G}_j^n], \bar{z}_j^{p,n})\delta$
  $\qquad + \frac{p\delta}{1+p\delta}\big(E[\bar{y}_{j+1}^{p,n} \mid \mathcal{G}_j^n] + g(t_j, E[\bar{y}_{j+1}^{p,n} \mid \mathcal{G}_j^n], \bar{z}_j^{p,n})\delta - L_j^n\big)^-$
  $\qquad - \frac{p\delta}{1+p\delta}\big(E[\bar{y}_{j+1}^{p,n} \mid \mathcal{G}_j^n] + g(t_j, E[\bar{y}_{j+1}^{p,n} \mid \mathcal{G}_j^n], \bar{z}_j^{p,n})\delta - U_j^n\big)^+$,

with $E[\bar{y}_{j+1}^{p,n} \mid \mathcal{G}_j^n] = \frac{1}{2}(\bar{y}_{j+1}^{p,n}|_{\varepsilon_{j+1}^n = 1} + \bar{y}_{j+1}^{p,n}|_{\varepsilon_{j+1}^n = -1})$. The increasing processes are obtained from

  $\bar{a}_j^{p,n} = \frac{p\delta}{1+p\delta}\big(E[\bar{y}_{j+1}^{p,n} \mid \mathcal{G}_j^n] + g(t_j, E[\bar{y}_{j+1}^{p,n} \mid \mathcal{G}_j^n], \bar{z}_j^{p,n})\delta - L_j^n\big)^-$,
  $\bar{k}_j^{p,n} = \frac{p\delta}{1+p\delta}\big(E[\bar{y}_{j+1}^{p,n} \mid \mathcal{G}_j^n] + g(t_j, E[\bar{y}_{j+1}^{p,n} \mid \mathcal{G}_j^n], \bar{z}_j^{p,n})\delta - U_j^n\big)^+$.
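On the recombining binomial tree, the implicit–explicit penalization scheme is a single backward sweep over the time layers. A sketch (ours, not the paper's software package), under the simplifying Markovian assumption that $\xi$ and the barriers depend on $B^n$ only through its current value, so each layer is an array indexed by the node:

```python
import numpy as np

def penalized_scheme(n, T, p, g, xi, Lfun, Ufun):
    """Implicit-explicit penalization scheme on the binomial tree.
    Node (j, k): time t_j = j*delta, walk value b = sqrt(delta)*(2k - j).
    xi(b): terminal functional; Lfun(t, b), Ufun(t, b): the two barriers.
    Returns the approximation of Y_0."""
    delta = T / n
    sq = np.sqrt(delta)
    y = xi(sq * (2 * np.arange(n + 1) - n))      # terminal layer y_n = xi^n
    for j in range(n - 1, -1, -1):
        t = j * delta
        bj = sq * (2 * np.arange(j + 1) - j)     # walk values at layer j
        up, down = y[1:], y[:-1]                 # eps_{j+1} = +1 / -1 successors
        z = (up - down) / (2 * sq)               # z_j = (y_up - y_down)/(2 sqrt(delta))
        Ey = 0.5 * (up + down)                   # E[y_{j+1} | G_j]
        X = Ey + g(t, Ey, z) * delta             # explicit Euler value
        w = p * delta / (1 + p * delta)          # penalization weight
        a = w * np.maximum(Lfun(t, bj) - X, 0.0) # = w * (X - L)^-
        k = w * np.maximum(X - Ufun(t, bj), 0.0) # = w * (X - U)^+
        y = X + a - k
    return y[0]
```

With the barriers pushed far away (so `a = k = 0`), the sweep reduces to the classical explicit scheme for a BSDE; with a lower barrier active, the weight $p\delta/(1+p\delta)$ pushes the value partially back above $L^n$ at each step.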
3.2 Convergence of penalization schemes and estimations
First we give the following lemma, which is proved in [20]. This Gronwall-type lemma is classical, but here it is given in a more detailed form.

Lemma 3.1 Let $a$, $b$ and $\alpha$ be positive constants with $\delta b < 1$, and let $(v_j)_{j=1,\ldots,n}$ be a sequence of positive numbers such that, for every $j$,

  $v_j + \alpha \le a + b\delta \sum_{i=1}^j v_i$.   (11)

Then

  $\sup_{j\le n} v_j + \alpha \le a\,E_\delta(b)$, where $E_\delta(b) = 1 + \sum_{p=1}^\infty \frac{b^p}{p!}(1+\delta)\cdots(1+(p-1)\delta)$,

which is a convergent series. Notice that $E_\delta(b)$ is increasing in $\delta$ and $\delta < \frac{1}{b}$, so the right-hand side of the estimate can be replaced by a constant depending on $b$.

We define the discrete solutions $(Y_t^{p,n}, Z_t^{p,n}, A_t^{p,n}, K_t^{p,n})$ of the implicit penalization scheme by

  $Y_t^{p,n} = y_{[t/\delta]}^{p,n}$, $\quad Z_t^{p,n} = z_{[t/\delta]}^{p,n}$, $\quad A_t^{p,n} = \sum_{m=0}^{[t/\delta]} a_m^{p,n}$, $\quad K_t^{p,n} = \sum_{m=0}^{[t/\delta]} k_m^{p,n}$,

and $(\bar{Y}_t^{p,n}, \bar{Z}_t^{p,n}, \bar{A}_t^{p,n}, \bar{K}_t^{p,n})$ of the implicit–explicit penalization scheme by

  $\bar{Y}_t^{p,n} = \bar{y}_{[t/\delta]}^{p,n}$, $\quad \bar{Z}_t^{p,n} = \bar{z}_{[t/\delta]}^{p,n}$, $\quad \bar{A}_t^{p,n} = \sum_{m=0}^{[t/\delta]} \bar{a}_m^{p,n}$, $\quad \bar{K}_t^{p,n} = \sum_{m=0}^{[t/\delta]} \bar{k}_m^{p,n}$.
Let us notice that the laws of the solutions $(Y^p, Z^p, A^p, K^p)$ and $(Y^{p,n}, Z^{p,n}, A^{p,n}, K^{p,n})$ or $(\bar{Y}^{p,n}, \bar{Z}^{p,n}, \bar{A}^{p,n}, \bar{K}^{p,n})$ of the penalized BSDEs depend only on $(P_B, \Gamma^{-1}(P_B), g, \Psi_1^{-1}(P_B), \Psi_2^{-1}(P_B))$ and $(P_{B^n}, \Gamma^{-1}(P_{B^n}), g, \Psi_1^{-1}(P_{B^n}), \Psi_2^{-1}(P_{B^n}))$, where $P_B$ (resp. $P_{B^n}$) is the probability induced by $B$ (resp. $B^n$), and $f^{-1}(P_B)$ (resp. $f^{-1}(P_{B^n})$) is the law of $f(B)$ (resp. $f(B^n)$) for $f = \Gamma, \Psi_1, \Psi_2$. So, as far as convergence in law is concerned, we may consider these equations on any probability space. By Donsker's theorem and the Skorokhod representation theorem, there exists a probability space on which $\sup_{0\le t\le T} |B_t^n - B_t| \to 0$, as $n \to \infty$, in $L^2(\mathcal{F}_T)$, since $\varepsilon_k$ is in $L^{2+\delta}$. We will therefore work on this space, with the filtration generated by $B^n$ and $B$, and prove the convergence of the solutions there. Thanks to the convergence of $B^n$, $(L^n, U^n)$ also converges to $(L, U)$. Then we have the following result, which is based on the convergence results for numerical solutions of BSDEs (cf. [4], [5]) and the penalization method for reflected BSDEs (Theorem 2.1).

Proposition 3.1 Under Assumption 2.3, the sequence $(Y_t^{p,n}, Z_t^{p,n})$ converges to $(Y_t, Z_t)$ in the following sense:

  $\lim_{p\to\infty} \lim_{n\to\infty} E[\sup_{0\le t\le T} |Y_t^{p,n} - Y_t|^2 + \int_0^T |Z_s^{p,n} - Z_s|^2\,ds] = 0$,   (12)

and, for $0 \le t \le T$, $A_t^{p,n} - K_t^{p,n} \to A_t - K_t$ in $L^2(\mathcal{F}_t)$, as $n \to \infty$, $p \to \infty$.
Proof. Notice that

  $E[\sup_{0\le t\le T} |Y_t^{p,n} - Y_t|^2 + \int_0^T |Z_s^{p,n} - Z_s|^2\,ds]$
  $\le 2E[\sup_{0\le t\le T} |Y_t^{p,n} - Y_t^p|^2 + \int_0^T |Z_s^{p,n} - Z_s^p|^2\,ds] + 2E[\sup_{0\le t\le T} |Y_t^p - Y_t|^2 + \int_0^T |Z_s^p - Z_s|^2\,ds]$.

By the convergence results for numerical solutions of BSDEs (cf. [4], [5]), the first part tends to 0; the second part is a direct application of Theorem 2.1 for the penalization method. So we get (12). For the increasing processes, we have

  $E[((A_t^{p,n} - K_t^{p,n}) - (A_t - K_t))^2] \le 2E[((A_t^{p,n} - K_t^{p,n}) - (A_t^p - K_t^p))^2] + 2E[((A_t^p - K_t^p) - (A_t - K_t))^2]$
  $\le 2E[((A_t^{p,n} - K_t^{p,n}) - (A_t^p - K_t^p))^2] + \frac{C}{\sqrt{p}}$,

in view of (7). For fixed $p$,

  $A_t^{p,n} - K_t^{p,n} = Y_0^{p,n} - Y_t^{p,n} - \int_0^t g(s, Y_s^{p,n}, Z_s^{p,n})\,ds + \int_0^t Z_s^{p,n}\,dB_s^n$,
  $A_t^p - K_t^p = Y_0^p - Y_t^p - \int_0^t g(s, Y_s^p, Z_s^p)\,ds + \int_0^t Z_s^p\,dB_s$.

From Corollary 14 in [5], we know that $\int_0^\cdot Z_s^{p,n}\,dB_s^n$ converges to $\int_0^\cdot Z_s^p\,dB_s$ in $S^2(0,T)$, as $n \to \infty$; together with the Lipschitz condition on $g$ and the convergence of $Y^{p,n}$, we get $(A_t^{p,n} - K_t^{p,n}) \to (A_t - K_t)$ in $L^2(\mathcal{F}_t)$, as $n \to \infty$, $p \to \infty$.

Now we consider the implicit–explicit penalization scheme. From Proposition 5 in [25], we know that for the implicit–explicit scheme the difference between this solution and the fully implicit one is controlled in terms of $\mu$ and $p$, for fixed $p \in \mathbb{N}$. So we have:
Proposition 3.2 For any $p \in \mathbb{N}$, when $n \to \infty$,

  $E[\sup_{0\le t\le T} |\bar{Y}_t^{p,n} - Y_t^{p,n}|^2 + \int_0^T |\bar{Z}_s^{p,n} - Z_s^{p,n}|^2\,ds] \to 0$,

with $(\bar{A}_t^{p,n} - \bar{K}_t^{p,n}) - (A_t^{p,n} - K_t^{p,n}) \to 0$ in $L^2(\mathcal{F}_t)$, for $0 \le t \le T$.

Proof. The convergence of $(\bar{Y}^{p,n}, \bar{Z}^{p,n})$ to $(Y^{p,n}, Z^{p,n})$ is a direct consequence of Proposition 5 in [25]. More precisely, there exists a constant $C$, depending only on $\mu$ and $T$, such that

  $E[\sup_{0\le t\le T} |\bar{Y}_t^{p,n} - Y_t^{p,n}|^2] + E\int_0^T |\bar{Z}_s^{p,n} - Z_s^{p,n}|^2\,ds \le C\delta^2$.

Then we consider the convergence of the increasing processes. Notice that for $0 \le t \le T$,

  $\bar{A}_t^{p,n} - \bar{K}_t^{p,n} = \bar{Y}_0^{p,n} - \bar{Y}_t^{p,n} - \int_0^t g(s, \bar{Y}_s^{p,n}, \bar{Z}_s^{p,n})\,ds + \int_0^t \bar{Z}_s^{p,n}\,dB_s^n$;

comparing with $A_t^{p,n} - K_t^{p,n} = Y_0^{p,n} - Y_t^{p,n} - \int_0^t g(s, Y_s^{p,n}, Z_s^{p,n})\,ds + \int_0^t Z_s^{p,n}\,dB_s^n$, and using the Lipschitz condition on $g$ and the convergence of $(\bar{Y}^{p,n}, \bar{Z}^{p,n})$, we get $\bar{A}_t^{p,n} - \bar{K}_t^{p,n} \to A_t^{p,n} - K_t^{p,n}$ in $L^2(\mathcal{F}_t)$, as $n \to \infty$, for fixed $p$. So the result follows.
Remark 3.1 From this proposition and Proposition 3.1, we get the convergence of the implicit–explicit penalization scheme.

Before going further, we prove an a priori estimate for $(y^{p,n}, z^{p,n}, a^{p,n}, k^{p,n})$. This result will help us obtain the convergence of the reflected schemes, which will be discussed in the next section. Throughout this paper we use $C_{\phi,\psi,\ldots}$ to denote a constant which depends on $\phi, \psi, \ldots$; here $\phi, \psi, \ldots$ can be random variables or stochastic processes.

Lemma 3.2 For each $p \in \mathbb{N}$ and $\delta$ such that $\delta(1 + 2\mu + 2\mu^2) < 1$, there exists a constant $c$ such that

  $E[\sup_j |y_j^{p,n}|^2 + \sum_{j=0}^n |z_j^{p,n}|^2\delta + \frac{1}{p\delta}\sum_{j=0}^n (a_j^{p,n})^2 + \frac{1}{p\delta}\sum_{j=0}^n (k_j^{p,n})^2] \le c\, C_{\xi^n, g, L^n, U^n}$.

Here $C_{\xi^n, g, L^n, U^n}$ depends on $\xi^n$, $g(t,0,0)$, $(L^n)^+$ and $(U^n)^-$, while $c$ depends only on $\mu$ and $T$.

Proof. Recalling (9) and applying the 'discrete Itô formula' (cf. [20]) to $(y_j^{p,n})^2$, we get

  $E[|y_j^{p,n}|^2 + \sum_{i=j}^{n-1} |z_i^{p,n}|^2\delta] \le E[|\xi^n|^2] + 2E[\sum_{i=j}^{n-1} |y_i^{p,n}|\,|g(t_i, y_i^{p,n}, z_i^{p,n})|\,\delta] + 2E\sum_{i=j}^{n-1} (y_i^{p,n} a_i^{p,n} - y_i^{p,n} k_i^{p,n})$.

Since $y_i^{p,n} a_i^{p,n} = -p\delta((y_i^{p,n} - L_i^n)^-)^2 + p\delta L_i^n (y_i^{p,n} - L_i^n)^- = -\frac{1}{p\delta}(a_i^{p,n})^2 + L_i^n a_i^{p,n}$ and $y_i^{p,n} k_i^{p,n} = p\delta((y_i^{p,n} - U_i^n)^+)^2 + p\delta U_i^n (y_i^{p,n} - U_i^n)^+ = \frac{1}{p\delta}(k_i^{p,n})^2 + U_i^n k_i^{p,n}$, we have

  $E[|y_j^{p,n}|^2 + \frac{1}{2}\sum_{i=j}^{n-1} |z_i^{p,n}|^2\delta] + 2E[\frac{1}{p\delta}\sum_{i=j}^{n-1}(a_i^{p,n})^2 + \frac{1}{p\delta}\sum_{i=j}^{n-1}(k_i^{p,n})^2]$
  $\le E[|\xi^n|^2 + \sum_{i=j}^{n-1}|g(t_i,0,0)|^2\delta] + (1 + 2\mu + 2\mu^2)\delta E\sum_{i=j}^{n-1}|y_i^{p,n}|^2 + 2E\sum_{i=j}^{n-1}(L_i^n)^+ a_i^{p,n} + 2E\sum_{i=j}^{n-1}(U_i^n)^- k_i^{p,n}$
  $\le E[|\xi^n|^2 + \sum_{i=j}^{n-1}|g(t_i,0,0)|^2\delta] + (1 + 2\mu + 2\mu^2)\delta E\sum_{i=j}^{n-1}|y_i^{p,n}|^2$
  $\quad + \alpha E\sup_{j\le i\le n-1}((L_i^n)^+)^2 + \frac{1}{\alpha}E\big(\sum_{i=j}^{n-1} a_i^{p,n}\big)^2 + \frac{1}{\beta}E\big(\sum_{i=j}^{n-1} k_i^{p,n}\big)^2 + \beta E\sup_{j\le i\le n-1}((U_i^n)^-)^2$.

Since $L^n$ and $U^n$ are approximations of Itô processes, we can find a process $X_j^n$ of the form $X_j^n = X_0 - \sum_{i=0}^{j-1} \sigma_i \varepsilon_{i+1}^n \sqrt{\delta} + V_j^{+,n} - V_j^{-,n}$, where $V^{\pm,n}$ are $\mathcal{G}_j^n$-adapted increasing processes with $E[|V_n^{+,n}|^2 + |V_n^{-,n}|^2] < +\infty$, and $L_j^n \le X_j^n \le U_j^n$ holds. Applying similar stopping-time techniques as in the proof of Lemma 2 in [16] to the discrete case, with $L_j^n \le X_j^n \le U_j^n$, we can prove

  $E\big(\sum_{i=j}^{n-1} a_i^{p,n}\big)^2 + E\big(\sum_{i=j}^{n-1} k_i^{p,n}\big)^2 \le 3\mu\big(C_{\xi^n, g, X^n} + E\sum_{i=j}^{n-1}[|y_i^{p,n}|^2 + |z_i^{p,n}|^2]\delta\big)$.

Since $X^n$ can be controlled by $L^n$ and $U^n$, we can replace it by $L^n$ and $U^n$. Setting $\alpha = \beta = 12\mu$ in the previous inequality, and using Lemma 3.1, we get

  $\sup_j E[|y_j^{p,n}|^2] + E[\sum_{i=0}^{n-1}|z_i^{p,n}|^2\delta + \frac{1}{p\delta}\sum_{i=0}^{n-1}(a_i^{p,n})^2 + \frac{1}{p\delta}\sum_{i=0}^{n-1}(k_i^{p,n})^2] \le c\, C_{\xi^n, g, L^n, U^n}$.

We then reconsider the Itô formula for $|y_j^{p,n}|^2$, taking $\sup_j$ before the expectation. Using the Burkholder–Davis–Gundy inequality for the martingale part $\sum_{i=0}^j y_i^{p,n} z_i^{p,n}\sqrt{\delta}\,\varepsilon_{i+1}^n$, with similar techniques we get

  $E[\sup_j |y_j^{p,n}|^2] \le C_{\xi^n, g, L^n, U^n} + C_\mu E[\sum_{i=0}^{n-1}|y_i^{p,n}|^2\delta] \le C_{\xi^n, g, L^n, U^n} + C_\mu T \sup_j E[|y_j^{p,n}|^2]$,

and the desired result follows.
4 Reflected Algorithms and their convergence

4.1 Reflected Schemes
This type of numerical scheme is based on reflecting the solution $y^n$ between the two barriers by $a^n$ and $k^n$ directly, so that the discrete solution $y^n$ really stays between the two barriers $L^n$ and $U^n$. After discretization of the time interval, our discrete reflected BSDE with two barriers on the small interval $[t_j, t_{j+1}]$, for $0 \le j \le n-1$, is

  $y_j^n = y_{j+1}^n + g(t_j, y_j^n, z_j^n)\delta + a_j^n - k_j^n - z_j^n\sqrt{\delta}\,\varepsilon_{j+1}^n$,   (13)

with terminal condition $y_n^n = \xi^n$, together with the constraint and the discrete integral conditions

  $a_j^n \ge 0$, $k_j^n \ge 0$, $a_j^n \cdot k_j^n = 0$, $\quad L_j^n \le y_j^n \le U_j^n$, $\quad (y_j^n - L_j^n)a_j^n = (y_j^n - U_j^n)k_j^n = 0$.   (14)

Note that all terms in (13) are $\mathcal{G}_j^n$-measurable except $y_{j+1}^n$ and $\varepsilon_{j+1}^n$. The key point of our numerical schemes is how to solve $(y_j^n, z_j^n, a_j^n, k_j^n)$ from (13), using the $\mathcal{G}_{j+1}^n$-measurable random variable $y_{j+1}^n$ obtained in the preceding step. First, $z_j^n$ is obtained by

  $z_j^n = \frac{1}{\sqrt{\delta}}E[y_{j+1}^n \varepsilon_{j+1}^n \mid \mathcal{G}_j^n] = \frac{1}{2\sqrt{\delta}}(y_{j+1}^n|_{\varepsilon_{j+1}^n=1} - y_{j+1}^n|_{\varepsilon_{j+1}^n=-1})$.

Then (13) with (14) becomes

  $y_j^n = E[y_{j+1}^n \mid \mathcal{G}_j^n] + g(t_j, y_j^n, z_j^n)\delta + a_j^n - k_j^n$,
  $a_j^n \ge 0$, $k_j^n \ge 0$, $\quad L_j^n \le y_j^n \le U_j^n$, $\quad (y_j^n - L_j^n)a_j^n = (y_j^n - U_j^n)k_j^n = 0$.   (15)

Set $\Theta(y) := y - g(t_j, y, z_j^n)\delta$. In view of $\langle\Theta(y) - \Theta(y'), y - y'\rangle \ge (1 - \delta\mu)|y - y'|^2 > 0$ for $\delta$ small enough, $\Theta(y)$ is strictly increasing in $y$ in that case, so

  $y \ge L_j^n \iff \Theta(y) \ge \Theta(L_j^n)$, $\quad y \le U_j^n \iff \Theta(y) \le \Theta(U_j^n)$.

The implicit reflected scheme, with $E[y_{j+1}^n \mid \mathcal{G}_j^n] = \frac{1}{2}(y_{j+1}^n|_{\varepsilon_{j+1}^n=1} + y_{j+1}^n|_{\varepsilon_{j+1}^n=-1})$, then gives

  $y_j^n = \Theta^{-1}(E[y_{j+1}^n \mid \mathcal{G}_j^n] + a_j^n - k_j^n)$,
  $a_j^n = \big(E[y_{j+1}^n \mid \mathcal{G}_j^n] + g(t_j, L_j^n, z_j^n)\delta - L_j^n\big)^-$,
  $k_j^n = \big(E[y_{j+1}^n \mid \mathcal{G}_j^n] + g(t_j, U_j^n, z_j^n)\delta - U_j^n\big)^+$.

On the set $\{L_j^n < U_j^n\}$, the sets $\{y_j^n - L_j^n = 0\}$ and $\{y_j^n - U_j^n = 0\}$ are disjoint, so with $(y_j^n - L_j^n)a_j^n = (y_j^n - U_j^n)k_j^n = 0$ we have $a_j^n \cdot k_j^n = 0$. On the set $\{L_j^n = U_j^n\}$, we get $a_j^n = (I_j^n)^-$ and $k_j^n = (I_j^n)^+$ by definition, where $I_j^n := E[y_{j+1}^n \mid \mathcal{G}_j^n] + g(t_j, L_j^n, z_j^n)\delta - L_j^n$; so automatically $a_j^n \cdot k_j^n = 0$.

Our explicit reflected scheme is obtained by replacing $y_j^n$ in $g$ by $E[\bar{y}_{j+1}^n \mid \mathcal{G}_j^n]$ in (15). So we get the following equation:

  $\bar{y}_j^n = E[\bar{y}_{j+1}^n \mid \mathcal{G}_j^n] + g(t_j, E[\bar{y}_{j+1}^n \mid \mathcal{G}_j^n], \bar{z}_j^n)\delta + \bar{a}_j^n - \bar{k}_j^n$, $\quad \bar{a}_j^n \ge 0$, $\bar{k}_j^n \ge 0$,
  $L_j^n \le \bar{y}_j^n \le U_j^n$, $\quad (\bar{y}_j^n - L_j^n)\bar{a}_j^n = (\bar{y}_j^n - U_j^n)\bar{k}_j^n = 0$.   (16)
Then, with $E[\bar{y}_{j+1}^n \mid \mathcal{G}_j^n] = \frac{1}{2}(\bar{y}_{j+1}^n|_{\varepsilon_{j+1}^n=1} + \bar{y}_{j+1}^n|_{\varepsilon_{j+1}^n=-1})$, we get the solution

  $\bar{y}_j^n = E[\bar{y}_{j+1}^n \mid \mathcal{G}_j^n] + g(t_j, E[\bar{y}_{j+1}^n \mid \mathcal{G}_j^n], \bar{z}_j^n)\delta + \bar{a}_j^n - \bar{k}_j^n$,
  $\bar{a}_j^n = \big(E[\bar{y}_{j+1}^n \mid \mathcal{G}_j^n] + g(t_j, E[\bar{y}_{j+1}^n \mid \mathcal{G}_j^n], \bar{z}_j^n)\delta - L_j^n\big)^-$,   (17)
  $\bar{k}_j^n = \big(E[\bar{y}_{j+1}^n \mid \mathcal{G}_j^n] + g(t_j, E[\bar{y}_{j+1}^n \mid \mathcal{G}_j^n], \bar{z}_j^n)\delta - U_j^n\big)^+$.

4.2 Convergence of Reflected Implicit Schemes
Now we study the convergence of the reflected schemes. For the implicit reflected scheme we denote

  $Y_t^n = y_{[t/\delta]}^n$, $\quad Z_t^n = z_{[t/\delta]}^n$, $\quad A_t^n = \sum_{i=0}^{[t/\delta]} a_i^n$, $\quad K_t^n = \sum_{i=0}^{[t/\delta]} k_i^n$,

and for the explicit reflected scheme

  $\bar{Y}_t^n = \bar{y}_{[t/\delta]}^n$, $\quad \bar{Z}_t^n = \bar{z}_{[t/\delta]}^n$, $\quad \bar{A}_t^n = \sum_{i=0}^{[t/\delta]} \bar{a}_i^n$, $\quad \bar{K}_t^n = \sum_{i=0}^{[t/\delta]} \bar{k}_i^n$.

First we prove an estimate for $(y^n, z^n, a^n, k^n)$.

Lemma 4.1 For $\delta$ such that $\delta(1 + 2\mu + 2\mu^2) < 1$, there exists a constant $c$, depending only on $\mu$ and $T$, such that

  $E[\sup_j |y_j^n|^2 + \sum_{j=0}^{n-1} |z_j^n|^2\delta + \big(\sum_{j=0}^{n-1} a_j^n\big)^2 + \big(\sum_{j=0}^{n-1} k_j^n\big)^2] \le c\, C_{\xi^n, g, L^n, U^n}$.

Proof. First we consider the estimate of $a_j^n$ and $k_j^n$. In view of $L_j^n \le y_j^n \le U_j^n$, we have

  $a_j^n \le \big(E[L_{j+1}^n \mid \mathcal{G}_j^n] + g(t_j, L_j^n, z_j^n)\delta - L_j^n\big)^- = \delta(l_j + g(t_j, L_j^n, z_j^n))^-$,   (18)
  $k_j^n \le \big(E[U_{j+1}^n \mid \mathcal{G}_j^n] + g(t_j, U_j^n, z_j^n)\delta - U_j^n\big)^+ = \delta(u_j + g(t_j, U_j^n, z_j^n))^+$.

We consider the following discrete BSDEs with $\hat{y}_n^n = \tilde{y}_n^n = \xi^n$:

  $\hat{y}_j^n = \hat{y}_{j+1}^n + [g(t_j, \hat{y}_j^n, \hat{z}_j^n) + (l_j)^- + g(t_j, L_j^n, \hat{z}_j^n)^-]\delta - \hat{z}_j^n\sqrt{\delta}\,\varepsilon_{j+1}^n$,
  $\tilde{y}_j^n = \tilde{y}_{j+1}^n + [g(t_j, \tilde{y}_j^n, \tilde{z}_j^n) - (u_j)^+ - g(t_j, U_j^n, \tilde{z}_j^n)^+]\delta - \tilde{z}_j^n\sqrt{\delta}\,\varepsilon_{j+1}^n$.

Thanks to the discrete comparison theorem in [20], we have $\tilde{y}_j^n \le y_j^n \le \hat{y}_j^n$, so

  $E[\sup_j |y_j^n|^2] \le \max\{E[\sup_j |\tilde{y}_j^n|^2], E[\sup_j |\hat{y}_j^n|^2]\} \le c\, C_{\xi^n, g, L^n, U^n}$.   (19)

The last inequality follows from the estimates for the discrete solutions $(\hat{y}_j^n)$ and $(\tilde{y}_j^n)$ of classical BSDEs, obtained via the Itô formula and the discrete Gronwall inequality of Lemma 3.1. For $z_j^n$, we use the 'discrete Itô formula' (cf. [20]) again for $(y_j^n)^2$, and get

  $E[|y_j^n|^2 + \sum_{i=j}^{n-1}|z_i^n|^2\delta] = E[|\xi^n|^2 + 2\sum_{i=j}^{n-1} y_i^n g(t_i, y_i^n, z_i^n)\delta + 2\sum_{i=j}^{n-1} y_i^n a_i^n - 2\sum_{i=j}^{n-1} y_i^n k_i^n]$
  $\le E[|\xi^n|^2 + \sum_{i=j}^{n-1}|g(t_i,0,0)|^2\delta] + \delta(1 + 2\mu + 2\mu^2)E\sum_{i=j}^{n-1}|y_i^n|^2 + \frac{1}{2}E\sum_{i=j}^{n-1}|z_i^n|^2\delta$
  $\quad + \alpha E[\sup_j ((L_j^n)^+)^2 + \sup_j ((U_j^n)^-)^2] + \frac{1}{\alpha}E[\big(\sum_{i=j}^{n-1} a_i^n\big)^2 + \big(\sum_{i=j}^{n-1} k_i^n\big)^2]$,

using $(y_i^n - L_i^n)a_i^n = 0$ and $(y_i^n - U_i^n)k_i^n = 0$. From (18), we have

  $E\big(\sum_{i=j}^{n-1} a_i^n\big)^2 \le 4\delta E\sum_{i=j}^{n-1}[(l_i)^2 + g(t_i,0,0)^2 + \mu|L_i^n|^2 + \mu|z_i^n|^2]$,
  $E\big(\sum_{i=j}^{n-1} k_i^n\big)^2 \le 4\delta E\sum_{i=j}^{n-1}[(u_i)^2 + g(t_i,0,0)^2 + \mu|U_i^n|^2 + \mu|z_i^n|^2]$.

Setting $\alpha = 32\mu$, it follows that

  $E[|y_j^n|^2 + \frac{1}{4}\sum_{i=j}^{n-1}|z_i^n|^2\delta] \le E[|\xi^n|^2 + (1 + \frac{1}{4\mu})\sum_{i=j}^{n-1}|g(t_i,0,0)|^2\delta] + \delta(1 + 2\mu + 2\mu^2)E\sum_{i=j}^{n-1}|y_i^n|^2$
  $\quad + 32\mu E[\sup_j ((L_j^n)^+)^2 + \sup_j ((U_j^n)^-)^2] + \frac{1}{8\mu}\delta E\sum_{i=j}^{n-1}[(l_i)^2 + (u_i)^2] + \frac{1}{8}\delta E\sum_{i=j}^{n-1}[|L_i^n|^2 + |U_i^n|^2]$.

With (19), we obtain $E\sum_{i=0}^{n-1}|z_i^n|^2\delta \le c\, C_{\xi^n, g, L^n, U^n}$. Applying these estimates to (18), we obtain the desired results.

With arguments similar to those preceding Proposition 3.1, the laws of the solutions $(Y,Z,A,K)$ and $(Y^n, Z^n, A^n, K^n)$ or $(\bar{Y}^n, \bar{Z}^n, \bar{A}^n, \bar{K}^n)$ of the reflected BSDEs depend only on $(P_B, \Gamma^{-1}(P_B), g, \Psi_1^{-1}(P_B), \Psi_2^{-1}(P_B))$ and $(P_{B^n}, \Gamma^{-1}(P_{B^n}), g, \Psi_1^{-1}(P_{B^n}), \Psi_2^{-1}(P_{B^n}))$, where $f^{-1}(P_B)$ (resp. $f^{-1}(P_{B^n})$) is the law of $f(B)$ (resp. $f(B^n)$) for $f = \Gamma, \Psi_1, \Psi_2$. So, as far as convergence in law is concerned, we may consider these equations on any probability space. From Donsker's theorem and the Skorokhod representation theorem, there exists a probability space on which $\sup_{0\le t\le T}|B_t^n - B_t| \to 0$, as $n \to \infty$, in $L^2(\mathcal{F}_T)$, since $\varepsilon_k$ is in $L^{2+\delta}$; it is sufficient for us to prove the convergence results on this space. Our convergence result for the implicit reflected scheme is as follows.
Theorem 4.1 Under Assumption 2.3 and suppose moreover that g satisfies Lipschitz condition (1), we have when n → +∞, Z T n 2 E[sup |Yt − Yt | ] + E |Ztn − Zt |2 dt → 0, (20) t
0
and Ant − Ktn → At − Kt in L2 (Ft ), for 0 ≤ t ≤ T . Proof. The proof is done in three steps. In the first step, we consider the difference between discrete solutions of reflect implicit scheme and of penalization implicit scheme introduce in section 4.1 and section 3.1, respectively. More precisely, we will prove that for each p, E[sup |yjn − yjp,n|2 ] + δE j
n−1 X
1 |zjn − zjp,n |2 ≤ cCξn ,g,Ln,U n √ . p j=0
(21)
Here c only depends on µ and T . From (9) and (13), applying ’discrete Itˆo formula’ (cf. [20]) to (yjn − yjp,n)2 , we get n−1 X n p,n 2 + δE |zin − zip,n |2 E yj − yj i=j
= 2E
n−1 X i=j
+2E
[(yin − yip,n)(g(ti, yin , zin ) − g(ti , yip,n, zip,n ))δ]
n−1 X
[(yin
i=j
−
yip,n )(ani
−
ap,n i )]
− 2E
n−1 X i=j
[(yin − yip,n)(kin − kip,n )]
From (14), we have p,n n n n (yin − yip,n)(ani − ap,n − Lni )ani − (yin − Lni )ap,n + (yip,n − Lni )ap,n i ) = (yi − Li )ai − (yi i i p,n p,n n − n n − 2 ≤ (yi − Li ) ai − ((yi − Li ) ) , ≤ (yip,n − Lni )− ani ,
Similarly, we have (y_i^n − y_i^{p,n})(k_i^n − k_i^{p,n}) ≥ −(y_i^{p,n} − U_i^n)^+ k_i^n. By (18) and the Lipschitz property of g, it follows that

    E|y_j^n − y_j^{p,n}|^2 + (δ/2) E Σ_{i=j}^{n-1} |z_i^n − z_i^{p,n}|^2
    ≤ (2µ + 2µ^2)δE Σ_{i=j}^{n-1} [(y_i^n − y_i^{p,n})^2] + 2E Σ_{i=j}^{n-1} [(y_i^{p,n} − L_i^n)^− a_i^n + (y_i^{p,n} − U_i^n)^+ k_i^n]
    ≤ (2µ + 2µ^2)δE Σ_{i=j}^{n-1} [(y_i^n − y_i^{p,n})^2]
      + 2 (δE Σ_{i=j}^{n-1} ((y_i^{p,n} − L_i^n)^−)^2)^{1/2} (δE Σ_{i=j}^{n-1} ((l_i + g(t_i, L_i^n, z_i^n))^−)^2)^{1/2}
      + 2 (δE Σ_{i=j}^{n-1} ((y_i^{p,n} − U_i^n)^+)^2)^{1/2} (δE Σ_{i=j}^{n-1} ((u_i + g(t_i, U_i^n, z_i^n))^+)^2)^{1/2}
    = (2µ + 2µ^2)δE Σ_{i=j}^{n-1} [(y_i^n − y_i^{p,n})^2]
      + (2/√p) ((1/(pδ)) E Σ_{i=j}^{n-1} (a_i^{p,n})^2)^{1/2} (δE Σ_{i=j}^{n-1} ((l_i + g(t_i, L_i^n, z_i^n))^−)^2)^{1/2}
      + (2/√p) ((1/(pδ)) E Σ_{i=j}^{n-1} (k_i^{p,n})^2)^{1/2} (δE Σ_{i=j}^{n-1} ((u_i + g(t_i, U_i^n, z_i^n))^+)^2)^{1/2}.
Then, by the estimates in Lemma 3.2 and Lemma 4.1 and the discrete Gronwall inequality in Lemma 3.1, we get

    sup_j E|y_j^n − y_j^{p,n}|^2 + δE Σ_{i=0}^{n-1} |z_i^n − z_i^{p,n}|^2 ≤ cC_{ξ^n,g,L^n,U^n} (1/√p).

Applying the B-D-G inequality, we obtain (21).

In the second step, we prove (20). We have

    E[sup_t |Y_t^n − Y_t|^2] + E[∫_0^T |Z_t^n − Z_t|^2 dt]
    ≤ 3E[sup_t |Y_t^p − Y_t|^2 + ∫_0^T |Z_t^p − Z_t|^2 dt] + 3E[sup_t |Y_t^{p,n} − Y_t^n|^2 + ∫_0^T |Z_t^n − Z_t^{p,n}|^2 dt]
      + 3E[sup_t |Y_t^p − Y_t^{p,n}|^2 + ∫_0^T |Z_t^p − Z_t^{p,n}|^2 dt]
    ≤ 3Cp^{-1/2} + cC_{ξ^n,g,L^n,U^n} p^{-1/2} + 3E[sup_t |Y_t^p − Y_t^{p,n}|^2 + ∫_0^T |Z_t^p − Z_t^{p,n}|^2 dt],
in view of (21) and Theorem 2.1. For fixed p > 0, by the convergence results of numerical algorithms for BSDEs (Theorem 12 in [5] and Theorem 2 in [25]), we know that the last term converges to 0 as δ → 0. Moreover, when δ is small enough, C_{ξ^n,g,L^n,U^n} is dominated by a constant depending only on ξ, g, L and U. This implies that we can first choose p large and then δ small, so that the right-hand side is as small as we want, and (20) follows.

In the last step, we consider the convergence of (A^n, K^n). Recall that for 0 ≤ t ≤ T,

    A_t^n − K_t^n = Y_0^n − Y_t^n − ∫_0^t g(s, Y_s^n, Z_s^n) ds + ∫_0^t Z_s^n dB_s^n,
    A_t^{p,n} − K_t^{p,n} = Y_0^{p,n} − Y_t^{p,n} − ∫_0^t g(s, Y_s^{p,n}, Z_s^{p,n}) ds + ∫_0^t Z_s^{p,n} dB_s^n.
By (21) and the Lipschitz condition of g, we get

    E[|(A_t^n − K_t^n) − (A_t^{p,n} − K_t^{p,n})|^2] ≤ cC_{ξ^n,g,L^n,U^n} (1/√p).

Since

    E[|(A_t^n − K_t^n) − (A_t − K_t)|^2] ≤ 3E[|(A_t^n − K_t^n) − (A_t^{p,n} − K_t^{p,n})|^2] + 3E[|(A_t − K_t) − (A_t^p − K_t^p)|^2]
      + 3E[|(A_t^p − K_t^p) − (A_t^{p,n} − K_t^{p,n})|^2]
    ≤ c(C_{ξ^n,g,L^n,U^n} + C_{ξ,g,L,U})(1/√p) + 3E[|(A_t^p − K_t^p) − (A_t^{p,n} − K_t^{p,n})|^2],
with similar techniques we obtain E[|(A_t^n − K_t^n) − (A_t − K_t)|^2] → 0. Here the fact that (A_t^{p,n} − K_t^{p,n}) converges to (A_t^p − K_t^p) for fixed p follows from the convergence of (Y_t^{p,n}, Z_t^{p,n}) to (Y_t^p, Z_t^p). □
4.3
Convergence of Reflected Explicit Scheme
We now study the convergence of the explicit reflected scheme. Before going further, we need an estimate for (ȳ^n, z̄^n, ā^n, k̄^n).

Lemma 4.2 For δ such that δ(9/4 + 2µ + 4µ^2) < 1, there exists a constant c depending only on µ and T, such that

    E[sup_j |ȳ_j^n|^2] + E[Σ_{j=0}^{n-1} |z̄_j^n|^2 δ + (Σ_{j=0}^{n-1} k̄_j^n)^2 + (Σ_{j=0}^{n-1} ā_j^n)^2] ≤ cC_{ξ^n,g,L^n,U^n}.
Proof. We recall the explicit reflected scheme, which is

    ȳ_j^n = ȳ_{j+1}^n + g(t_j, E[ȳ_{j+1}^n|G_j^n], z̄_j^n)δ + ā_j^n − k̄_j^n − z̄_j^n √δ ε_{j+1}^n,
    ā_j^n ≥ 0,  k̄_j^n ≥ 0,  L_j^n ≤ ȳ_j^n ≤ U_j^n,  (ȳ_j^n − L_j^n)ā_j^n = (ȳ_j^n − U_j^n)k̄_j^n = 0.    (22)
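Concretely, one step of (22) at a single node of the binomial tree can be sketched as follows; the numerical values, the node's barrier levels and the reuse of the Section 5 driver are illustrative assumptions, not data from the paper.

```python
import numpy as np

# One step of the explicit reflected scheme (22) at a single tree node.
delta = 0.01                       # time step
sqd = np.sqrt(delta)
g = lambda t, y, z: -5.0 * abs(y + z) - 1.0   # driver of the Section 5 example

y_up, y_down = 1.30, 1.10          # \bar y^n_{j+1} at the two children nodes
e = 0.5 * (y_up + y_down)          # E[\bar y^n_{j+1} | G^n_j]
z = (y_up - y_down) / (2.0 * sqd)  # \bar z^n_j
y_tilde = e + g(0.5, e, z) * delta # unconstrained Euler step

L_j, U_j = 1.25, 2.00              # barrier values at this node (illustrative)
a = max(L_j - y_tilde, 0.0)        # \bar a^n_j: pushes the value up onto L
k = max(y_tilde - U_j, 0.0)        # \bar k^n_j: pushes the value down onto U
y_j = y_tilde + a - k              # reflected value, L_j <= y_j <= U_j

print(y_j, z, a, k)
```

Here the Euler step lands below the lower barrier, so ā_j^n > 0 and k̄_j^n = 0; the complementarity conditions (ȳ_j^n − L_j^n)ā_j^n = (ȳ_j^n − U_j^n)k̄_j^n = 0 hold by construction.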
Then we have

    |ȳ_j^n|^2 = |ȳ_{j+1}^n|^2 − |z̄_j^n|^2 δ + 2ȳ_{j+1}^n g(t_j, E[ȳ_{j+1}^n|G_j^n], z̄_j^n)δ + 2ȳ_j^n ā_j^n − 2ȳ_j^n k̄_j^n
      + |g(t_j, E[ȳ_{j+1}^n|G_j^n], z̄_j^n)|^2 δ^2 − (ā_j^n)^2 − (k̄_j^n)^2 − 2ȳ_j^n z̄_j^n √δ ε_{j+1}^n
      + 2g(t_j, E[ȳ_{j+1}^n|G_j^n], z̄_j^n) z̄_j^n δ√δ ε_{j+1}^n − 2(ā_j^n − k̄_j^n) z̄_j^n √δ ε_{j+1}^n.    (23)

In view of (ȳ_j^n − L_j^n)ā_j^n = (ȳ_j^n − U_j^n)k̄_j^n = 0 and ā_j^n, k̄_j^n ≥ 0, taking expectations we have

    E|ȳ_j^n|^2 + E|z̄_j^n|^2 δ ≤ E|ȳ_{j+1}^n|^2 + 2E[ȳ_{j+1}^n g(t_j, E[ȳ_{j+1}^n|G_j^n], z̄_j^n)]δ + 2E[(L_j^n)^+ ā_j^n] + 2E[(U_j^n)^− k̄_j^n]
      + E[|g(t_j, E[ȳ_{j+1}^n|G_j^n], z̄_j^n)|^2 δ^2]
    ≤ E|ȳ_{j+1}^n|^2 + (δ + 3δ^2)E[|g(t_j, 0, 0)|^2] + (δ/4 + 3µ^2 δ^2)E[(z̄_j^n)^2]
      + δ(1 + 2µ + 4µ^2 + 3µ^2 δ)E|ȳ_{j+1}^n|^2 + 2E[(L_j^n)^+ ā_j^n] + 2E[(U_j^n)^− k̄_j^n].
Taking the sum over j = i, …, n−1 yields

    E|ȳ_i^n|^2 + (1/2) Σ_{j=i}^{n-1} E|z̄_j^n|^2 δ
    ≤ E|ξ^n|^2 + (δ + 3δ^2)E Σ_{j=i}^{n-1} [|g(t_j, 0, 0)|^2] + δ(1 + 2µ + 4µ^2 + 3µ^2 δ)E Σ_{j=i}^{n-1} |ȳ_{j+1}^n|^2    (24)
      + αE[sup_j ((L_j^n)^+)^2 + sup_j ((U_j^n)^−)^2] + (1/α) E[(Σ_{j=i}^{n-1} ā_j^n)^2 + (Σ_{j=i}^{n-1} k̄_j^n)^2],
where α is a constant to be decided later. From (17), we have

    ā_j^n ≤ (E[L_{j+1}^n|G_j^n] + g(t_j, E[ȳ_{j+1}^n|G_j^n], z̄_j^n)δ − L_j^n)^− = (l_j + g(t_j, E[ȳ_{j+1}^n|G_j^n], z̄_j^n))^− δ,
    k̄_j^n ≤ (E[U_{j+1}^n|G_j^n] + g(t_j, E[ȳ_{j+1}^n|G_j^n], z̄_j^n)δ − U_j^n)^+ = (u_j + g(t_j, E[ȳ_{j+1}^n|G_j^n], z̄_j^n))^+ δ.

Then we get

    E(Σ_{j=i}^{n-1} ā_j^n)^2 ≤ 4δE Σ_{j=i}^{n-1} [(l_j)^2 + g(t_j, 0, 0)^2 + µ^2 (E[ȳ_{j+1}^n|G_j^n])^2 + µ^2 (z̄_j^n)^2],    (25)
    E(Σ_{j=i}^{n-1} k̄_j^n)^2 ≤ 4δE Σ_{j=i}^{n-1} [(u_j)^2 + g(t_j, 0, 0)^2 + µ^2 (E[ȳ_{j+1}^n|G_j^n])^2 + µ^2 (z̄_j^n)^2].
Set α = 32µ^2 in (24); it follows that

    E|ȳ_i^n|^2 + (1/4) Σ_{j=i}^{n-1} E|z̄_j^n|^2 δ
    ≤ E|ξ^n|^2 + (δ + δ/(4µ^2) + 3δ^2)E Σ_{j=i}^{n-1} [|g(t_j, 0, 0)|^2] + 32µ^2 E[sup_j ((L_j^n)^+)^2 + sup_j ((U_j^n)^−)^2]
      + δ(5/4 + 2µ + 4µ^2 + 3µ^2 δ)E Σ_{j=i}^{n-1} |ȳ_{j+1}^n|^2 + (δ/(8µ^2)) E Σ_{j=i}^{n-1} [(l_j)^2 + (u_j)^2].

Notice that 3µ^2 δ < 1, so 3µ^2 δ^2 < δ. Then, applying the discrete Gronwall inequality in Lemma 3.1, with the estimates of ā_j^n and k̄_j^n following from (25), we get

    sup_j E[|ȳ_j^n|^2] + E[Σ_{j=0}^{n-1} |z̄_j^n|^2 δ + (Σ_{j=0}^{n-1} k̄_j^n)^2 + (Σ_{j=0}^{n-1} ā_j^n)^2] ≤ cC_{ξ^n,g,L^n,U^n}.
We reconsider (23); as before, we take the sum and sup_j, then take expectations, using the Burkholder-Davis-Gundy inequality for the martingale part. With similar techniques, we get

    E[sup_j |ȳ_j^n|^2] ≤ C_{ξ^n,g,L^n,U^n} + C_µ E[Σ_{j=0}^{n-1} |ȳ_j^n|^2 δ] ≤ C_{ξ^n,g,L^n,U^n} + C_µ T sup_j E[|ȳ_j^n|^2],

which implies the final result. □

Our convergence result for the explicit reflected scheme is then the following.

Theorem 4.2 Under the same assumptions as in Theorem 4.1, we have, as n → +∞,

    E[sup_t |Ȳ_t^n − Y_t|^2] + E ∫_0^T |Z̄_t^n − Z_t|^2 dt → 0,    (26)

and Ā_t^n − K̄_t^n → A_t − K_t in L^2(F_t), for 0 ≤ t ≤ T.
Proof. Thanks to Theorem 4.1, it is sufficient to prove that, as n → +∞,

    E[sup_j |ȳ_j^n − y_j^n|^2] + E Σ_{j=0}^{n-1} |z̄_j^n − z_j^n|^2 δ → 0.    (27)

Since

    y_j^n = y_{j+1}^n + g(t_j, y_j^n, z_j^n)δ + a_j^n − k_j^n − z_j^n √δ ε_{j+1}^n,
    ȳ_j^n = E[ȳ_{j+1}^n|G_j^n] + g(t_j, E[ȳ_{j+1}^n|G_j^n], z̄_j^n)δ + ā_j^n − k̄_j^n − z̄_j^n √δ ε_{j+1}^n,    (28)

we get
    E|y_j^n − ȳ_j^n|^2
    = E|y_{j+1}^n − ȳ_{j+1}^n|^2 − δE|z_j^n − z̄_j^n|^2 + 2δE[(y_j^n − ȳ_j^n)(g(t_j, y_j^n, z_j^n) − g(t_j, E[ȳ_{j+1}^n|G_j^n], z̄_j^n))]
      − E[δ(g(t_j, y_j^n, z_j^n) − g(t_j, E[ȳ_{j+1}^n|G_j^n], z̄_j^n)) + (a_j^n − ā_j^n) − (k_j^n − k̄_j^n)]^2
      + 2E[(y_j^n − ȳ_j^n)(a_j^n − ā_j^n)] − 2E[(y_j^n − ȳ_j^n)(k_j^n − k̄_j^n)]
    ≤ E|y_{j+1}^n − ȳ_{j+1}^n|^2 − δE|z_j^n − z̄_j^n|^2 + 2δE[(y_j^n − ȳ_j^n)(g(t_j, y_j^n, z_j^n) − g(t_j, E[ȳ_{j+1}^n|G_j^n], z̄_j^n))],

in view of

    (y_j^n − ȳ_j^n)(a_j^n − ā_j^n) = (y_j^n − L_j^n)a_j^n + (ȳ_j^n − L_j^n)ā_j^n − (ȳ_j^n − L_j^n)a_j^n − (y_j^n − L_j^n)ā_j^n ≤ 0,
    (y_j^n − ȳ_j^n)(k_j^n − k̄_j^n) = (y_j^n − U_j^n)k_j^n + (ȳ_j^n − U_j^n)k̄_j^n − (y_j^n − U_j^n)k̄_j^n − (ȳ_j^n − U_j^n)k_j^n ≥ 0.
Taking the sum over j from i to n−1, and noting that y_n^n − ȳ_n^n = ξ^n − ξ^n = 0, we get

    E|y_i^n − ȳ_i^n|^2 + δ Σ_{j=i}^{n-1} E|z_j^n − z̄_j^n|^2 ≤ 2δ Σ_{j=i}^{n-1} E[(y_j^n − ȳ_j^n)(g(t_j, y_j^n, z_j^n) − g(t_j, E[ȳ_{j+1}^n|G_j^n], z̄_j^n))]
    ≤ 2µ^2 δE Σ_{j=i}^{n-1} |y_j^n − ȳ_j^n|^2 + (δ/2) Σ_{j=i}^{n-1} E|z_j^n − z̄_j^n|^2 + 2µδE Σ_{j=i}^{n-1} |y_j^n − ȳ_j^n|·|y_j^n − E[ȳ_{j+1}^n|G_j^n]|.

Since ȳ_j^n − E[ȳ_{j+1}^n|G_j^n] = g(t_j, E[ȳ_{j+1}^n|G_j^n], z̄_j^n)δ + ā_j^n − k̄_j^n, we have

    2µδE[|y_j^n − ȳ_j^n|·|y_j^n − E[ȳ_{j+1}^n|G_j^n]|]
    = 2µδE[|y_j^n − ȳ_j^n|·|y_j^n − ȳ_j^n + g(t_j, E[ȳ_{j+1}^n|G_j^n], z̄_j^n)δ + ā_j^n − k̄_j^n|]
    ≤ (2µ + 1)δE[|y_j^n − ȳ_j^n|^2] + µ^2 δE[3δ^2 (|g(t_j, 0, 0)|^2 + µ^2 |ȳ_{j+1}^n|^2 + µ^2 |z̄_j^n|^2) + (ā_j^n)^2 + (k̄_j^n)^2].
Then by Lemma 4.2, we obtain
    E|y_i^n − ȳ_i^n|^2 + (δ/2) Σ_{j=i}^{n-1} E|z_j^n − z̄_j^n|^2 ≤ (2µ + 2µ^2 + 1)δ Σ_{j=i}^{n-1} E|y_j^n − ȳ_j^n|^2 + δC_{ξ^n,g,L^n,U^n}.    (29)

By the discrete Gronwall inequality in Lemma 3.1, we get

    sup_{j≤n} E|y_j^n − ȳ_j^n|^2 ≤ Cδ e^{(2µ+2µ^2+1)T}.

With (29), it follows that E[δ Σ_{j=0}^{n-1} |z_j^n − z̄_j^n|^2] ≤ Cδ. Then we reconsider (28); this time we take the square, the sum and the sup over j, and then take expectations. Using the Burkholder-Davis-Gundy inequality for the martingale parts and similar techniques, it follows that

    E[sup_{j≤n} |y_j^n − ȳ_j^n|^2] ≤ CE[Σ_{j=0}^{n-1} |y_j^n − ȳ_j^n|^2 δ] ≤ CT sup_{j≤n} E|y_j^n − ȳ_j^n|^2,

which implies (27).

For the convergence of (Ā^n, K̄^n), we consider

    Ā_t^n − K̄_t^n = Ȳ_0^n − Ȳ_t^n − ∫_0^t g(s, Ȳ_s^n, Z̄_s^n) ds + ∫_0^t Z̄_s^n dB_s^n,
    A_t^n − K_t^n = Y_0^n − Y_t^n − ∫_0^t g(s, Y_s^n, Z_s^n) ds + ∫_0^t Z_s^n dB_s^n;

then the convergence results follow easily from the convergence of A^n − K^n, (26) and the Lipschitz condition of g. □
5
Simulations of Reflected BSDE with two barriers
For computational convenience, we consider the case T = 1. The calculation begins from y_n^n = ξ^n and proceeds backward to solve (y_j^n, z_j^n, a_j^n, k_j^n), for j = n−1, …, 1, 0. Due to the amount of computation, we consider a very simple case: ξ = Φ(B_1), L_t = Ψ_1(t, B_t), U_t = Ψ_2(t, B_t), where Φ, Ψ_1 and Ψ_2 are real analytic functions defined on R and on [0, 1] × R, respectively. As mentioned in the introduction, we have developed a Matlab toolbox, with a well-designed interface, for calculating and simulating solutions of reflected BSDEs with two barriers. This toolbox can be downloaded from http://159.226.47.50:8080/iam/xumingyu/English/index.jsp, by clicking 'Preprint' on the left side.
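The toolbox itself is in Matlab; as an independent cross-check, the backward computation just described can be sketched in Python on the binomial tree with symmetric Bernoulli increments ε_j^n = ±1, using the explicit reflected scheme of Section 4. The parameters below are those of the comparison tables at the end of this section (for which the paper reports y_0^n = −1.7312 at n = 400); the vectorized layout of the tree is an implementation choice, not the paper's code.

```python
import numpy as np

# Backward solve of the discrete reflected BSDE with two barriers on the
# binomial tree (explicit reflected scheme).  Node i at step j carries
# B^n = (2i - j) * sqrt(delta), i = 0..j.
T, n = 1.0, 400
delta = T / n
sqd = np.sqrt(delta)

g   = lambda t, y, z: -5.0 * np.abs(y + z) - 1.0      # driver g(y, z)
Phi = lambda x: np.abs(x)                             # terminal condition xi = Phi(B_1)
Lb  = lambda t, x: -3.0 * (x - 2.0) ** 2 + 3.0        # lower barrier Psi_1
Ub  = lambda t, x: (x + 1.0) ** 2 + 3.0 * t - 2.5     # upper barrier Psi_2

y = Phi((2.0 * np.arange(n + 1) - n) * sqd)           # terminal layer y_n^n = xi^n

for j in range(n - 1, -1, -1):                        # backward in time
    t = j * delta
    x = (2.0 * np.arange(j + 1) - j) * sqd            # tree nodes at step j
    e = 0.5 * (y[1:] + y[:-1])                        # E[y_{j+1}^n | G_j^n]
    z = (y[1:] - y[:-1]) / (2.0 * sqd)                # z_j^n
    y_tilde = e + g(t, e, z) * delta                  # unconstrained Euler step
    a = np.maximum(Lb(t, x) - y_tilde, 0.0)           # a_j^n: reflection at L
    k = np.maximum(y_tilde - Ub(t, x), 0.0)           # k_j^n: reflection at U
    y = y_tilde + a - k                               # L_j^n <= y_j^n <= U_j^n

y0 = float(y[0])
print("y_0^n =", y0)
```

At the root, the reflection step forces Ψ_1(0, 0) = −9 ≤ y_0^n ≤ Ψ_2(0, 0) = −1.5 by construction, and the paper's table value −1.7312 falls in this range.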
We take the following example: g(y, z) = −5|y + z| − 1, Φ(x) = |x|, Ψ_1(t, x) = −3(x − 2) + 3, Ψ_2(t, x) = (x + 1)^2 + 3(t − 1), and n = 400. In Figure 1, we can see both the global picture of the solution surface of y^n and a particular trajectory on it. The upper portion of Figure 1 is 3-dimensional: the lower surface shows the barrier L, the upper one the barrier U, and the solution y^n lies between them. We then generate one trajectory of the discrete Brownian motion (B_j^n)_{0≤j≤n}, drawn on the horizontal plane; the values of y_j^n along this Brownian sample are shown on the solution surface. The remainder of the figure shows the trajectories of the forces A_j^n = Σ_{i=0}^j a_i^n and K_j^n = Σ_{i=0}^j k_i^n, respectively.
Figure 1: A solution surface of reflected BSDE with two barriers

The lower graphs show clearly that A^n (resp. K^n) acts only when y^n touches the lower barrier L^n, i.e. on the set {y^n = L^n} (resp. the upper barrier U^n, i.e. on the set {y^n = U^n}), and they never act at the same time. In the upper portion we can see that there is an area, named Area I (resp. Area II), where the solution surface and the lower barrier surface (resp. the upper barrier surface) stick together. When the trajectory of the solution y_j^n enters Area I (resp. Area II), the force A_j^n (resp. K_j^n) pushes y_j^n upward (resp. downward). Indeed, without the barriers, y_j^n would tend to cross the reflecting barriers L_j^n and U_j^n; so, to keep y_j^n between L_j^n and U_j^n, the actions of the forces A_j^n and K_j^n are necessary. In Figure 1, the increasing process A_j^n stays at zero, while K_j^n increases from the beginning. Correspondingly, at the beginning y_j^n enters Area II, but it always stays out of Area I. Since Area I and Area II are totally disjoint, A_j^n and K_j^n never increase at the same time.

On this point, let us have a look at Figure 2. This figure shows a group of 3-dimensional dynamic trajectories (t_j, B_j^n, Y_j^n) and (t_j, B_j^n, Z_j^n), together with the 2-dimensional trajectories (t_j, Y_j^n) and (t_j, Z_j^n). Among the other sub-figures, the upper-right one shows the trajectory of A_j^n, while the lower-left one shows K_j^n. Comparing these two sub-figures, as in Figure 1, the sets {a_j^n ≠ 0} and {k_j^n ≠ 0} are disjoint, but the converse is not true.
Figure 2: The trajectories of solutions of (3)

Now we present some numerical results obtained by the explicit reflected scheme and by the implicit-explicit penalization scheme, respectively, for different discretizations. Consider the parameters g(y, z) = −5|y + z| − 1, Φ(x) = |x|, Ψ_1(t, x) = −3(x − 2)^2 + 3, Ψ_2(t, x) = (x + 1)^2 + 3t − 2.5.

n = 400, reflected explicit scheme: y_0^n = −1.7312; penalization scheme:

    p          20        200       2000      2 × 10^4
    y_0^{p,n}  −1.8346   −1.7476   −1.7329   −1.7314

n = 1000, reflected explicit scheme: y_0^n = −1.7142; penalization scheme:

    p          20        200       2000      2 × 10^4
    y_0^{p,n}  −1.8177   −1.7306   −1.7161   −1.7144

n = 2000, reflected explicit scheme: y_0^n = −1.7084; penalization scheme:

    p          20        200       2000      2 × 10^4
    y_0^{p,n}  −1.8124   −1.7250   −1.7103   −1.7068

n = 4000, reflected explicit scheme: y_0^n = −1.7055; penalization scheme:

    p          20        200       2000      2 × 10^4
    y_0^{p,n}  −1.8096   −1.7222   −1.7074   −1.7057
From these tables we first see that, as the penalization parameter p increases, the penalization solution y_0^{p,n} increases to the reflected solution y_0^n. Second, as the discretization parameter n increases, the differences between the values of y_0^n for different n become smaller, and the same holds for y_0^{p,n}.
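This monotone behaviour can be reproduced with a small sketch. Below, the penalty term is taken implicitly and the driver explicitly, which is one plausible reading of the implicit-explicit penalization scheme; the scalar implicit step y = ỹ + pδ(y − L)^− − pδ(y − U)^+ is solved in closed form. Details of the paper's scheme may differ, so this is a sketch under stated assumptions rather than the toolbox implementation.

```python
import numpy as np

# Compare the reflected explicit scheme with an implicit-penalty variant on
# the same binomial tree, for the parameter set of the tables above.
T, n = 1.0, 400
delta = T / n
sqd = np.sqrt(delta)
g   = lambda t, y, z: -5.0 * np.abs(y + z) - 1.0
Phi = lambda x: np.abs(x)
Lb  = lambda t, x: -3.0 * (x - 2.0) ** 2 + 3.0
Ub  = lambda t, x: (x + 1.0) ** 2 + 3.0 * t - 2.5

def solve(p=None):
    """Backward solve; p = None uses reflection, otherwise implicit penalty p."""
    y = Phi((2.0 * np.arange(n + 1) - n) * sqd)
    for j in range(n - 1, -1, -1):
        t = j * delta
        x = (2.0 * np.arange(j + 1) - j) * sqd
        e = 0.5 * (y[1:] + y[:-1])
        z = (y[1:] - y[:-1]) / (2.0 * sqd)
        yt = e + g(t, e, z) * delta
        lo, hi = Lb(t, x), Ub(t, x)
        if p is None:
            y = np.clip(yt, lo, hi)               # reflection: add a, subtract k
        else:
            pd = p * delta                        # implicit penalty, closed form
            y = np.where(yt < lo, (yt + pd * lo) / (1.0 + pd),
                np.where(yt > hi, (yt + pd * hi) / (1.0 + pd), yt))
    return float(y[0])

y_ref = solve()
errors = [abs(solve(p) - y_ref) for p in (20, 200, 2000)]
print(y_ref, errors)
```

As in the tables, the penalized value approaches the reflected one as p grows: the error for p = 2000 is far smaller than for p = 20.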
6
Appendix: The proof of Theorem 2.1
To complete the paper, we give here the proof of Theorem 2.1.

Proof of Theorem 2.1. (a) is the main result in [16], so we omit its proof. Now we consider (b). The convergence of (Y_t^p, Z_t^p) is a direct consequence of [23]. For the convergence rate, the proof is a combination of results from [16] and [23]. From [16], we know that for (5), as m → ∞, Ŷ_t^{m,p} ↗ Ȳ_t^p in S^2(0, T), Ẑ_t^{m,p} → Z̄_t^p in L_F^2(0, T) and Â_t^{m,p} → Ā_t^p in S^2(0, T), where (Ȳ_t^p, Z̄_t^p, Ā_t^p) is the solution of the following reflected BSDE with one lower barrier L:

    −dȲ_t^p = g(t, Ȳ_t^p, Z̄_t^p)dt + dĀ_t^p − p(Ȳ_t^p − U_t)^+ dt − Z̄_t^p dB_t,  Ȳ_T^p = ξ,
    Ȳ_t^p ≥ L_t,  ∫_0^T (Ȳ_t^p − L_t) dĀ_t^p = 0.    (30)

Set K̄_t^p = ∫_0^t p(Ȳ_s^p − U_s)^+ ds. Then, letting p → ∞, it follows that Ȳ_t^p ↘ Y_t in S^2(0, T) and Z̄_t^p → Z_t in L_F^2(0, T). By the comparison theorem, dĀ_t^p is increasing in p. So Ā_T^p ↗ A_T, and 0 ≤ sup_t [Ā_t^{p+1} − Ā_t^p] ≤ Ā_T^{p+1} − Ā_T^p. It follows that Ā_t^p → A_t in S^2(0, T). Then, with the Lipschitz condition on g and the convergence results, we get K̄_t^p → K_t in S^2(0, T). Moreover, from Lemma 4 in [16], we know that there exists a constant C, depending on ξ, g(t, 0, 0), µ, L and U, such that

    E[sup_{0≤t≤T} |Ȳ_t^p − Y_t|^2 + ∫_0^T |Z̄_t^p − Z_t|^2 dt] ≤ C/√p.

Similarly for (5), first letting p → ∞, we get Ŷ_t^{m,p} ↘ Y̲_t^m in S^2(0, T), Ẑ_t^{m,p} → Z̲_t^m in L_F^2(0, T) and K̂_t^{m,p} → K̲_t^m in S^2(0, T), where (Y̲_t^m, Z̲_t^m, K̲_t^m) is the solution of the following reflected BSDE
with one upper barrier U:

    −dY̲_t^m = g(t, Y̲_t^m, Z̲_t^m)dt + m(L_t − Y̲_t^m)^+ dt − dK̲_t^m − Z̲_t^m dB_t,  Y̲_T^m = ξ,
    Y̲_t^m ≤ U_t,  ∫_0^T (Y̲_t^m − U_t) dK̲_t^m = 0.    (31)

In the same way, as m → ∞, Y̲_t^m ↗ Y_t in S^2(0, T), Z̲_t^m → Z_t in L_F^2(0, T), and (A̲_t^m, K̲_t^m) → (A_t, K_t) in (S^2(0, T))^2, where A̲_t^m = ∫_0^t m(L_s − Y̲_s^m)^+ ds. Also, there exists a constant C, depending on ξ, g(t, 0, 0), µ, L and U, such that

    E[sup_{0≤t≤T} |Y̲_t^m − Y_t|^2 + ∫_0^T |Z̲_t^m − Z_t|^2 dt] ≤ C/√m.

Applying the comparison theorem to (6) and (30), and to (6) and (31) (with m = p), we have Y̲_t^p ≤ Y_t^p ≤ Ȳ_t^p. Then we have

    E[sup_{0≤t≤T} |Y_t^p − Y_t|^2] ≤ C/√p,

for some constant C. To get the estimate for Z^p, we apply Itô's formula to |Y_t^p − Y_t|^2 and get

    E|Y_0^p − Y_0|^2 + (1/2)E ∫_0^T |Z_s^p − Z_s|^2 ds
    ≤ (µ + 2µ^2)E ∫_0^T |Y_s^p − Y_s|^2 ds + 2E ∫_0^T (Y_s^p − Y_s) dA_s^p − 2E ∫_0^T (Y_s^p − Y_s) dA_s
      − 2E ∫_0^T (Y_s^p − Y_s) dK_s^p + 2E ∫_0^T (Y_s^p − Y_s) dK_s.
Since

    2E ∫_0^T (Y_s^p − Y_s) dA_s^p = 2E ∫_0^T (Y_s^p − L_s) dA_s^p − 2E ∫_0^T (Y_s − L_s) dA_s^p
      ≤ 2pE ∫_0^T (Y_s^p − L_s)(Y_s^p − L_s)^− ds ≤ 0

and 2E ∫_0^T (Y_s^p − Y_s) dK_s^p ≥ 2pE ∫_0^T (Y_s^p − U_s)(Y_s^p − U_s)^+ ds ≥ 0, we have

    E ∫_0^T |Z_s^p − Z_s|^2 ds ≤ C/√p,
in view of the estimates of A and K and the convergence of Y^p.

Now we consider the convergence of A^p and K^p. Since

    A_t − K_t = Y_0 − Y_t − ∫_0^t g(s, Y_s, Z_s) ds + ∫_0^t Z_s dB_s,
    A_t^p − K_t^p = Y_0^p − Y_t^p − ∫_0^t g(s, Y_s^p, Z_s^p) ds + ∫_0^t Z_s^p dB_s,
from the Lipschitz condition of g and the convergence results for Y^p and Z^p, we immediately have

    E[sup_{0≤t≤T} [(A_t − K_t) − (A_t^p − K_t^p)]^2]
    ≤ 8E[sup_{0≤t≤T} |Y_t^p − Y_t|^2 + 4µ^2 ∫_0^T |Y_s^p − Y_s|^2 ds + C ∫_0^T |Z_s^p − Z_s|^2 ds] ≤ C/√p.
Meanwhile, we know E[(A_T^p)^2 + (K_T^p)^2] < ∞, so A^p and K^p admit weak limits Ã and K̃ in S^2(0, T), respectively. By the comparison results for Y̲_t^p, Y_t^p and Ȳ_t^p, we get

    dA_t^p = p(Y_t^p − L_t)^− dt ≤ p(Y̲_t^p − L_t)^− dt = dA̲_t^p,
    dK_t^p = p(Y_t^p − U_t)^+ dt ≥ p(Ȳ_t^p − U_t)^+ dt = dK̄_t^p.

Since A̲_t^p → A_t and K̄_t^p → K_t, it follows that dÃ_t ≤ dA_t and dK̃_t ≥ dK_t, hence dÃ_t − dK̃_t ≤ dA_t − dK_t. On the other hand, the limit of Y^p is Y, so dÃ_t − dK̃_t = dA_t − dK_t. Then necessarily dÃ_t = dA_t and dK̃_t = dK_t, which implies Ã_t = A_t and K̃_t = K_t. □
References

[1] Bally, V., Pagès, G., 2001. A quantization algorithm for solving multi-dimensional optimal stopping problems. Preprint.
[2] Barles, G., Lesigne, L., 1997. SDE, BSDE and PDE. In: El Karoui, N., Mazliak, L. (Eds.), Backward Stochastic Differential Equations. Pitman Research Notes in Mathematics Series 364, pp. 47-80.
[3] Bouchard, B., Touzi, N., 2004. Discrete-time approximation and Monte-Carlo simulation of backward stochastic differential equations. Stochastic Processes and their Applications 111, 175-206.
[4] Briand, Ph., Delyon, B., Mémin, J., 2001. Donsker-type theorem for BSDEs. Electronic Communications in Probability 6, 1-14.
[5] Briand, Ph., Delyon, B., Mémin, J., 2002. On the robustness of backward stochastic differential equations. Stochastic Processes and their Applications 97, 229-253.
[6] Chassagneux, J.-F., 2009. Discrete-time approximation of doubly reflected BSDEs. Preprint.
[7] Chevance, D., 1997. Résolution numérique des équations différentielles stochastiques rétrogrades. Ph.D. thesis, Université de Provence.
[8] Coquet, F., Mackevicius, V., Mémin, J., 1998. Stability in D of martingales and backward equations under discretization of filtration. Stochastic Processes and their Applications 75, 235-248.
[9] Cvitanic, J., Karatzas, I., 1996. Backward stochastic differential equations with reflection and Dynkin games. Annals of Probability 24, no. 4, 2024-2056.
[10] Douglas Jr., J., Ma, J., Protter, P., 1996. Numerical methods for forward-backward stochastic differential equations. Annals of Applied Probability 6, no. 3, 940-968.
[11] El Karoui, N., Kapoudjian, C., Pardoux, E., Peng, S., Quenez, M.-C., 1997. Reflected solutions of backward SDEs and related obstacle problems for PDEs. Annals of Probability 25, no. 2, 702-737.
[12] El Karoui, N., Peng, S., Quenez, M.-C., 1997. Backward stochastic differential equations in finance. Mathematical Finance 7, 1-71.
[13] Gobet, E., Lemor, J.-P., Warin, X., 2006. Rate of convergence of an empirical regression method for solving generalized backward stochastic differential equations. Bernoulli 12 (5), 889-916.
[14] Kloeden, P.E., Platen, E., 1992. Numerical Solution of Stochastic Differential Equations. Springer, Berlin.
[15] Hamadène, S., Lepeltier, J.-P., Matoussi, A., 1996. Double barrier backward SDEs with continuous coefficient, 161-175.
[16] Lepeltier, J.-P., San Martín, J., 2004. Reflected backward SDEs with two barriers and discontinuous coefficient: an existence result. Journal of Applied Probability 41, no. 1, 162-175.
[17] Lepeltier, J.-P., Xu, M., 2005. Penalization method for reflected backward stochastic differential equations with one r.c.l.l. barrier. Statistics and Probability Letters 75, 58-66.
[18] Lepeltier, J.-P., Xu, M., 2007. Reflected backward stochastic differential equations with two RCLL barriers. ESAIM: Probability and Statistics 11, 3-22.
[19] Ma, J., Protter, P., San Martín, J., Torres, S., 2002. Numerical method for backward stochastic differential equations. Annals of Applied Probability 12, no. 1, 302-316.
[20] Mémin, J., Peng, S., Xu, M., 2008. Convergence of solutions of discrete reflected backward SDEs and simulations. Acta Mathematica Sinica, English Series 24, no. 1, 1-18.
[21] Pardoux, E., Peng, S., 1990. Adapted solution of a backward stochastic differential equation. Systems and Control Letters 14, no. 1, 55-61.
[22] Peng, S., 1999. Monotonic limit theory of BSDE and nonlinear decomposition theorem of Doob-Meyer's type. Probability Theory and Related Fields 113, 473-499.
[23] Peng, S., Xu, M., 2005. Smallest g-supermartingales and related reflected BSDEs. Annales de l'Institut Henri Poincaré 41 (3), 605-630.
[24] Peng, S., Xu, M., 2007. Reflected BSDE with a constraint and its applications in incomplete market. arXiv:math.PR/0611869v2.
[25] Peng, S., Xu, M., 2007. Numerical algorithms for BSDEs: convergence and simulation. arXiv:math.PR/0611864v2.
[26] Zhang, Y., Zheng, W., 2002. Discretizing a backward stochastic differential equation. International Journal of Mathematics and Mathematical Sciences 32, no. 2, 103-116.