Stochastic Target Games with Controlled Loss
B. Bouchard (CEREMADE, Univ. Paris-Dauphine, and CREST-ENSAE ParisTech)
USC, February 2013
Joint work with L. Moreau (ETH-Zürich) and M. Nutz (Columbia)
Plan of the talk
• Problem formulation
• Examples
• Assumptions for this talk
• Geometric dynamic programming
• Application to the monotone case
Problem formulation and Motivations
Problem formulation

Provide a PDE characterization of the viability sets
Λ(t) := {(z, m) : ∃ u ∈ U s.t. E[ℓ(Z^{u[ϑ],ϑ}_{t,z}(T))] ≥ m ∀ ϑ ∈ V},
in which:
• V is a set of admissible adverse controls,
• U is a set of admissible strategies,
• Z^{u[ϑ],ϑ}_{t,z} is an adapted R^d-valued process s.t. Z^{u[ϑ],ϑ}_{t,z}(t) = z,
• ℓ is a given loss/utility function,
• m is a threshold.
Application in finance

Z^{u[ϑ],ϑ}_{t,z} = (X^{u[ϑ],ϑ}_{t,x}, Y^{u[ϑ],ϑ}_{t,x,y}), where
• X^{u[ϑ],ϑ}_{t,x} models financial assets or factors, with dynamics depending on ϑ,
• Y^{u[ϑ],ϑ}_{t,x,y} models a wealth process,
• ϑ is the control of the market: parameter uncertainty (e.g. volatility), adverse players, etc.,
• u[ϑ] is the financial strategy given the past observations of ϑ.

Flexible enough to embed constraints, transaction costs, market impact, etc.
Examples

Λ(t) := {(z, m) : ∃ u ∈ U s.t. E[ℓ(Z^{u[ϑ],ϑ}_{t,z}(T))] ≥ m ∀ ϑ ∈ V}

• Expected loss control, for ℓ(z) = −([y − g(x)]⁻)².
Gives sense to problems that would be degenerate under P-a.s. constraints: B. and Dang (guaranteed VWAP pricing).
• Constraint in probability:
Λ(t) := {(z, m) : ∃ u ∈ U s.t. P[Z^{u[ϑ],ϑ}_{t,z}(T) ∈ O] ≥ m ∀ ϑ ∈ V}
for ℓ(z) = 1_{z∈O}, m ∈ (0, 1).
⇒ Quantile hedging in finance for O := {y ≥ g(x)} (a Monte Carlo sketch of this feasibility check follows these examples).
• Matching a P&L distribution = multiple constraints in probability:
{∃ u ∈ U s.t. P[dist(Z^{u[ϑ],ϑ}_{t,z}(T), O) ≤ γ_i] ≥ m_i ∀ i ≤ I, ∀ ϑ ∈ V}
(see B. and Thanh Nam).
• Almost sure constraint:
Λ(t) := {z : ∃ u ∈ U s.t. Z^{u[ϑ],ϑ}_{t,z}(T) ∈ O P-a.s. ∀ ϑ ∈ V}
for ℓ(z) = 1_{z∈O}, m = 1.
⇒ Super-hedging in finance for O := {y ≥ g(x)}. Unfortunately not covered by our assumptions...
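To make the probability-constraint example concrete, here is a minimal Monte Carlo sketch of the feasibility check behind quantile hedging: it estimates the worst-case success probability inf_ϑ P[Y_T ≥ g(X_T)] for one fixed strategy. The Black-Scholes dynamics, the constant-delta strategy and the finite grid of adverse volatilities are illustrative assumptions, not the talk's setting.

```python
import numpy as np

# Minimal Monte Carlo sketch of the feasibility check behind the
# probability constraint: estimate the worst-case success probability
# inf_theta P[Y_T >= g(X_T)] for ONE fixed strategy. Dynamics, strategy
# and the adverse-volatility grid are illustrative assumptions.

rng = np.random.default_rng(0)

T, n_steps, n_paths = 1.0, 100, 50_000
dt = T / n_steps
x0, y0, strike = 1.0, 0.05, 1.0

def g(x):
    return np.maximum(x - strike, 0.0)   # target payoff

def success_probability(delta: float, vol: float) -> float:
    """P[Y_T >= g(X_T)] when the controller holds `delta` units of X
    and the adverse player fixes the volatility at `vol`."""
    x = np.full(n_paths, x0)
    y = np.full(n_paths, y0)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        dx = vol * x * dw            # martingale dynamics for X
        y += delta * dx              # self-financing wealth
        x += dx
    return float(np.mean(y >= g(x)))

# worst case over a finite set of adverse volatilities (stand-in for inf over V)
worst = min(success_probability(0.5, v) for v in (0.1, 0.2, 0.3))
print(f"worst-case success probability: {worst:.3f}")
```

Membership of (z, m) in Λ(t) would then amount to finding a strategy whose worst-case estimate stays above m.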
Setting for this talk (see the paper for an abstract version)
Brownian diffusion setting

• State process: Z^{u[ϑ],ϑ} solves (µ and σ continuous, uniformly Lipschitz in space)
Z(s) = z + ∫_t^s µ(Z(r), u[ϑ]_r, ϑ_r) dr + ∫_t^s σ(Z(r), u[ϑ]_r, ϑ_r) dW_r.

• Controls and strategies:
  – V is the set of predictable processes with values in V ⊂ R^d.
  – U is the set of non-anticipating maps u : ϑ ∈ V ↦ u[ϑ] ∈ U, i.e.
    {ϑ¹ =_{(0,s]} ϑ²} ⊂ {u[ϑ¹] =_{(0,s]} u[ϑ²]} ∀ ϑ¹, ϑ² ∈ V, s ≤ T,
    where U is the set of predictable processes with values in U ⊂ R^d.

• The loss function ℓ is continuous and has polynomial growth.
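As a reading aid for this setting, the following Euler-scheme sketch shows what the non-anticipating condition means in simulation: the control applied on [t_k, t_{k+1}) is computed from the adverse control's path up to t_k and the past of Z only. The concrete µ, σ and feedback rule are toy assumptions.

```python
import numpy as np

# Euler sketch (1-dimensional, illustrative) of Z^{u[theta],theta}.
# The key point is the non-anticipating strategy: the control applied
# on [t_k, t_{k+1}) depends on theta_0, ..., theta_k only.

rng = np.random.default_rng(1)
T, n = 1.0, 200
dt = T / n

def mu(z, u, v):
    return u - v                 # toy drift

def sigma(z, u, v):
    return 1.0 + 0.5 * v         # toy diffusion

def simulate(z0: float, adverse_path: np.ndarray) -> float:
    z = z0
    past_v = []
    for k in range(n):
        past_v.append(adverse_path[k])
        # non-anticipating feedback: uses theta only up to time t_k
        u = float(np.clip(np.mean(past_v), -1.0, 1.0))
        dw = rng.normal(0.0, np.sqrt(dt))
        z += mu(z, u, past_v[-1]) * dt + sigma(z, u, past_v[-1]) * dw
    return z

theta = rng.uniform(0.1, 0.3, size=n)    # one adverse control path
print(f"Z_T = {simulate(0.0, theta):.3f}")
```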
The game problem

• Worst expected loss for a given strategy:
J(t, z, u) := ess inf_{ϑ∈V} E[ℓ(Z^{u[ϑ],ϑ}_{t,z}(T)) | F_t]

• The viability sets are given by
Λ(t) := {(z, m) : ∃ u ∈ U s.t. J(t, z, u) ≥ m P-a.s.}.
Compare with the formulation of games in Buckdahn and Li (2008).
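For intuition on the sup-inf structure behind J, here is a heuristic numerical sketch in which both players are restricted to constant controls on finite grids. Restricting u lowers the value while coarsening ϑ raises it, so this only illustrates the saddle structure and is not a bound on the game value; dynamics and loss are toy assumptions.

```python
import numpy as np

# Heuristic sketch of sup over u, inf over theta, with both players
# restricted to CONSTANT controls on finite grids. Illustration of the
# saddle structure only, not a bound on the true game value.

rng = np.random.default_rng(2)
T, n_steps, n_paths = 1.0, 100, 20_000
dt = T / n_steps

def expected_loss(u: float, v: float) -> float:
    z = np.zeros(n_paths)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        z += (u - v) * dt + (1.0 + 0.5 * v) * dw
    return float(np.mean(-np.abs(z)))    # loss l(z) = -|z|: keep z near 0

U_grid = np.linspace(-1.0, 1.0, 5)
V_grid = np.linspace(0.0, 0.5, 3)
value = max(min(expected_loss(u, v) for v in V_grid) for u in U_grid)
print(f"sup-inf over constant controls: {value:.3f}")
```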
Geometric dynamic programming principle

How are the properties
(z, m) ∈ Λ(t) and (Z^{u[ϑ],ϑ}_{t,z}(θ), ?) ∈ Λ(θ)
related?
• First direction: Take (z, m) ∈ Λ(t) and u ∈ U such that
ess inf_{ϑ∈V} E[ℓ(Z^{u[ϑ],ϑ}_{t,z}(T)) | F_t] ≥ m.

To take care of the evolution of the worst-case-scenario conditional expectation, we introduce
S^ϑ_r := ess inf_{ϑ̄∈V} E[ℓ(Z^{u[ϑ⊕_r ϑ̄], ϑ⊕_r ϑ̄}_{t,z}(T)) | F_r].

Then S^ϑ is a submartingale and S^ϑ_t ≥ m for all ϑ ∈ V, and we can find a martingale M^ϑ such that S^ϑ ≥ M^ϑ and M^ϑ_t = S^ϑ_t ≥ m.
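For completeness, the martingale minorant can be written out; this is a standard step, sketched here under the assumptions that the Doob–Meyer decomposition applies (usual integrability) and that martingale representation holds in the Brownian filtration of this setting:

\[
S^{\vartheta}_r = S^{\vartheta}_t + \int_t^r \alpha^{\vartheta}_s \, dW_s + A^{\vartheta}_r,
\qquad A^{\vartheta} \text{ nondecreasing},\ A^{\vartheta}_t = 0,
\]
so that
\[
M^{\vartheta}_r := S^{\vartheta}_t + \int_t^r \alpha^{\vartheta}_s \, dW_s
\]
is a martingale with M^ϑ ≤ S^ϑ (since A^ϑ ≥ 0) and M^ϑ_t = S^ϑ_t ≥ m.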
• We have, for all stopping times θ (which may depend on u and ϑ),
ess inf_{ϑ̄∈V} E[ℓ(Z^{u[ϑ⊕_θ ϑ̄], ϑ⊕_θ ϑ̄}_{t,z}(T)) | F_θ] = S^ϑ_θ ≥ M^ϑ_θ.

If the above is usc in space, uniformly in the strategies, and θ takes finitely many values, we can find a covering formed of balls B_i ∋ (t_i, z_i), i ≥ 1, such that
J(t_i, z_i; u) ≥ M^ϑ_θ − ε on A_i := {(θ, Z^{u[ϑ],ϑ}_{t,z}(θ)) ∈ B_i}.

Hence K(t_i, z_i) ≥ M^ϑ_θ − ε on A_i, where
K(t_i, z_i) := ess sup_{u∈U} ess inf_{ϑ∈V} E[ℓ(Z^{u[ϑ],ϑ}_{t_i,z_i}(T)) | F_{t_i}]
is deterministic.

• If K is lsc,
K(θ(ω), Z^{u[ϑ],ϑ}_{t,z}(θ)(ω)) ≥ K(t_i, z_i) − ε ≥ M^ϑ_θ(ω) − 2ε on A_i,
and therefore
(Z^{u[ϑ],ϑ}_{t,z}(θ(ω)), M^ϑ_θ(ω) − 3ε) ∈ Λ(θ(ω)).
• To get rid of ε, and for non-regular cases (in terms of K and J), we work by approximation: one needs to start from (z, m − ι) ∈ Λ(t) and obtain
(Z^{u[ϑ],ϑ}_{t,z}(θ), M^ϑ_θ) ∈ Λ̄(θ) P-a.s. ∀ ϑ ∈ V,
where
Λ̄(t) := {(z, m) : there exist (t_n, z_n, m_n) → (t, z, m) such that (z_n, m_n) ∈ Λ(t_n) and t_n ≥ t for all n ≥ 1}.

Remark: M^ϑ = M^{α^ϑ}_{t,m} := m + ∫_t^· α^ϑ_s dW_s, with α^ϑ ∈ A, the set of predictable processes such that the above is a martingale.

Remark: The bounded-variation part of S is useless: the optimal adverse control should turn S into a martingale.
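A small simulation sketch of the auxiliary martingale M^{α}_{t,m} = m + ∫ α dW may help. In the probability-constraint example it plays the role of a conditional success probability, so a natural choice of α (an assumption made for this illustration, not prescribed by the talk) vanishes at 0 and 1, which keeps M inside [0, 1] up to the Euler discretization error.

```python
import numpy as np

# Simulation sketch of M^alpha_{t,m} = m + int alpha dW. The choice
# alpha = 0.5 * M * (1 - M) is illustrative: it vanishes at the boundary
# of [0, 1], confining M there up to Euler discretization error.

rng = np.random.default_rng(3)
T, n_steps, n_paths = 1.0, 200, 10_000
dt = T / n_steps

m0 = 0.9                              # initial threshold
M = np.full(n_paths, m0)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    alpha = 0.5 * M * (1.0 - M)       # vanishes at 0 and 1
    M += alpha * dw

print(f"mean (martingale property): {M.mean():.3f}")   # close to m0
print(f"range: [{M.min():.3f}, {M.max():.3f}]")
```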
• Reverse direction: Assume that {θ^ϑ, ϑ ∈ V} takes finitely many values and that
(Z^{u[ϑ],ϑ}_{t,z}(θ^ϑ), M^{α^ϑ}_{t,m}(θ^ϑ)) ∈ Λ(θ^ϑ) P-a.s. ∀ ϑ ∈ V.

Then
K(θ^ϑ, Z^{u[ϑ],ϑ}_{t,z}(θ^ϑ)) ≥ M^{α^ϑ}_{t,m}(θ^ϑ).

Play again with balls + concatenation of strategies (assuming smoothness) to obtain ū ∈ U such that
E[ℓ(Z^{(u⊕_θ ū)[ϑ],ϑ}_{t,z}(T)) | F_{θ^ϑ}] ≥ M^{α^ϑ}_{t,m}(θ^ϑ) − ε P-a.s. ∀ ϑ ∈ V.

By taking expectations,
E[ℓ(Z^{(u⊕_θ ū)[ϑ],ϑ}_{t,z}(T)) | F_t] ≥ m − ε P-a.s. ∀ ϑ ∈ V,
so that (z, m − ε) ∈ Λ(t).
• To cover the general case by approximations, we need to start with
(Z^{u[ϑ],ϑ}_{t,z}(θ^ϑ), M^{α^ϑ}_{t,m}(θ^ϑ)) ∈ Λ̊^ι(θ^ϑ) P-a.s. ∀ ϑ ∈ V,
where
Λ̊^ι(t) := {(z, m) : (t′, z′, m′) ∈ B_ι(t, z, m) implies (z′, m′) ∈ Λ(t′)}.

• To concatenate while keeping the non-anticipative feature:
  – Martingale strategies: α^ϑ replaced by a : ϑ ∈ V ↦ a[ϑ] ∈ A, in a non-anticipating way (corresponding set A).
  – Non-anticipating stopping times: θ[ϑ] (typically the first exit time of (Z^{u[ϑ],ϑ}, M^{a[ϑ]}) from a ball).
The geometric dynamic programming principle

(GDP1): If (z, m − ι) ∈ Λ(t) for some ι > 0, then there exist u ∈ U and {α^ϑ, ϑ ∈ V} ⊂ A such that
(Z^{u[ϑ],ϑ}_{t,z}(θ), M^{α^ϑ}_{t,m}(θ)) ∈ Λ̄(θ) P-a.s. ∀ ϑ ∈ V.

(GDP2): If (u, a) ∈ U × A and ι > 0 are such that
(Z^{u[ϑ],ϑ}_{t,z}(θ[ϑ]), M^{a[ϑ]}_{t,m}(θ[ϑ])) ∈ Λ̊^ι(θ[ϑ]) P-a.s. ∀ ϑ ∈ V
for some family (θ[ϑ], ϑ ∈ V) of non-anticipating stopping times, then (z, m − ε) ∈ Λ(t) for all ε > 0.

Rem: Relaxed version of Soner and Touzi (2002) and B., Elie and Touzi (2009).
Application to the monotone case
• Monotone case: Z^{u[ϑ],ϑ}_{t,x,y} = (X^{u[ϑ],ϑ}_{t,x}, Y^{u[ϑ],ϑ}_{t,x,y}), with values in R^d × R, with X^{u[ϑ],ϑ}_{t,x} independent of y and y ↦ Y^{u[ϑ],ϑ}_{t,x,y} nondecreasing.

• The value function is
ϖ(t, x, m) := inf{y : (x, y, m) ∈ Λ(t)}.

• Remark:
  – If ϕ is lsc and ϖ ≥ ϕ, then Λ̄(t) ⊂ {(x, y, m) : y ≥ ϕ(t, x, m)}.
  – If ϕ is usc and ϕ ≥ ϖ, then Λ̊^ι(t) ⊃ {(x, y, m) : y ≥ ϕ(t, x, m) + η_ι} for some η_ι > 0.
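Since y ↦ Y^{u[ϑ],ϑ}_{t,x,y} is nondecreasing, the feasible initial wealths form an interval, and ϖ(t, x, m) can be approximated by bisection over y given any feasibility oracle. The sketch below uses a hypothetical monotone stand-in, best_worst_case_loss, for the true map y ↦ sup_u inf_ϑ E[ℓ(Z_T)]; everything concrete in it is an assumption for illustration.

```python
import numpy as np

# Bisection sketch for varpi(t, x, m) = inf{y : (x, y, m) in Lambda(t)},
# exploiting that the worst-case expected loss is nondecreasing in the
# initial wealth y.

def best_worst_case_loss(y: float) -> float:
    # hypothetical stand-in for sup_u inf_theta E[l(Z_T)] started from
    # wealth y; only its monotonicity in y matters for the bisection
    return 1.0 - np.exp(-2.0 * y)

def varpi(m: float, y_lo: float = 0.0, y_hi: float = 10.0,
          tol: float = 1e-6) -> float:
    # assumes best_worst_case_loss(y_lo) < m <= best_worst_case_loss(y_hi)
    while y_hi - y_lo > tol:
        mid = 0.5 * (y_lo + y_hi)
        if best_worst_case_loss(mid) >= m:
            y_hi = mid      # mid is feasible: the infimum lies to the left
        else:
            y_lo = mid
    return y_hi

# sanity check against the closed form inf{y : 1 - exp(-2y) >= m}
print(varpi(0.5), -np.log(1.0 - 0.5) / 2.0)
```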
Thm:

(GDP1): Let ϕ ∈ C⁰ be such that arg min(ϖ_* − ϕ) = (t, x, m). Assume that y > ϖ(t, x, m − ι) for some ι > 0. Then, there exists (u, a) ∈ U × A such that
Y^{u[ϑ],ϑ}_{t,x,y}(θ^ϑ) ≥ ϕ(θ^ϑ, X^{u[ϑ],ϑ}_{t,x}(θ^ϑ), M^{a[ϑ]}_{t,m}(θ^ϑ)) P-a.s. ∀ ϑ ∈ V.

(GDP2): Let ϕ ∈ C⁰ be such that arg max(ϖ^* − ϕ) = (t, x, m). Assume that (u, a) ∈ U × A and η > 0 are such that
Y^{u[ϑ],ϑ}_{t,x,y}(θ[ϑ]) ≥ ϕ(θ[ϑ], X^{u[ϑ],ϑ}_{t,x}(θ[ϑ]), M^{a[ϑ]}_{t,m}(θ[ϑ])) + η P-a.s. ∀ ϑ ∈ V.
Then y ≥ ϖ(t, x, m − ε) for all ε > 0.

Remark: In the spirit of the weak dynamic programming principle (B. and Touzi, and B. and Nutz).
PDE characterization - “waving hands” version

• Assuming smoothness, existence of optimal strategies...

• y = ϖ(t, x, m) implies
Y^{u[ϑ],ϑ}(t+) ≥ ϖ(t+, X^{u[ϑ],ϑ}(t+), M^{a[ϑ]}(t+)) for all ϑ.
This implies
dY^{u[ϑ],ϑ}(t) ≥ dϖ(t, X^{u[ϑ],ϑ}(t), M^{a[ϑ]}(t)) for all ϑ.
Hence, for all ϑ, with y = ϖ(t, x, m):
µ_Y(x, y, u[ϑ]_t, ϑ_t) ≥ L^{u[ϑ]_t, ϑ_t, a[ϑ]_t}_{X,M} ϖ(t, x, m),
σ_Y(x, y, u[ϑ]_t, ϑ_t) = σ_X(x, u[ϑ]_t, ϑ_t) D_x ϖ(t, x, m) + a[ϑ]_t D_m ϖ(t, x, m).
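Here L^{u,v,a}_{X,M} denotes the Dynkin operator of the pair (X, M). As a sketch, and assuming (consistently with the earlier Remark) that M = m + ∫ a dW is driven by the same Brownian motion W as X, Itô's formula gives

\[
L^{u,v,a}_{X,M}\varphi
= \partial_t \varphi
+ \mu_X(x,u,v)^{\top} D_x \varphi
+ \tfrac12 \operatorname{Tr}\!\big[\sigma_X \sigma_X^{\top}(x,u,v)\, D^2_{xx}\varphi\big]
+ a^{\top}\sigma_X^{\top}(x,u,v)\, D^2_{xm}\varphi
+ \tfrac12\,|a|^2\, D^2_{mm}\varphi .
\]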
• Supersolution property:
inf_{v∈V} sup_{(u,a)∈N^v ϖ} ( µ_Y(·, ϖ, u, v) − L^{u,v,a}_{X,M} ϖ ) ≥ 0,
where
N^v ϖ := {(u, a) ∈ U × R^d : σ_Y(·, ϖ, u, v) = σ_X(·, u, v) D_x ϖ + a D_m ϖ}.

• Subsolution property:
sup_{(u[·],a[·])∈N^{[·]} ϖ} inf_{v∈V} ( µ_Y(·, ϖ, u[v], v) − L^{u[v],v,a[v]}_{X,M} ϖ ) ≤ 0,
where
N^{[·]} ϖ := {locally Lipschitz (u[·], a[·]) s.t. (u[v], a[v]) ∈ N^v ϖ(·) for all v ∈ V}.
References

• R. Buckdahn and J. Li (2008). Stochastic differential games and viscosity solutions of Hamilton-Jacobi-Bellman-Isaacs equations. SIAM J. Control Optim., 47(1), 444-475.
• H. M. Soner and N. Touzi (2002). Dynamic programming for stochastic target problems and geometric flows. JEMS, 4, 201-236.
• B., R. Elie and N. Touzi (2009). Stochastic target problems with controlled loss. SIAM J. Control Optim., 48(5), 3123-3150.
• B. and M. Nutz (2012). Weak dynamic programming for generalized state constraints. SIAM J. Control Optim., to appear.
• L. Moreau (2011). Stochastic target problems with controlled loss in a jump diffusion model. SIAM J. Control Optim., 49, 2577-2607.
• B. and N. M. Dang (2010). Generalized stochastic target problems for pricing and partial hedging under loss constraints - Application in optimal book liquidation. Finance and Stochastics, online.
• B. and V. Thanh Nam (2011). A stochastic target approach for P&L matching problems. Mathematics of Operations Research, 37(3), 526-558.