Stability of Constrained Markov Modulated Diffusions

Amarjit Budhiraja∗ and Xin Liu

April 29, 2011

Abstract: A family of constrained diffusions in random environment is considered. The constraint set is a polyhedral cone and the coefficients of the diffusion are governed, in addition to the system state, by a finite state Markov process that is independent of the driving noise. Such models arise as limit objects in the heavy traffic analysis of stochastic processing networks (SPN) with Markov modulated arrival and processing rates. We give sufficient conditions (which in particular include a requirement on the regularity of the underlying Skorohod map) for positive recurrence and geometric ergodicity. When the coefficients depend only on the modulating Markov process (i.e. they are independent of the system state), a complete characterization of stability and transience is provided. The case where the pathwise Skorohod problem is not well posed but the underlying reflection matrix is completely-S is treated as well. As consequences of geometric ergodicity, various results are obtained, such as exponential integrability of invariant measures and a CLT for fluctuations of long time averages of process functionals about their stationary values. Conditions for stability are formulated in terms of the averaged drift, where the average is taken with respect to the stationary distribution of the modulating Markov process. Finally, steady state distributions of the underlying SPN are considered and it is shown that, under suitable conditions, such distributions converge to the unique stationary distribution of the constrained random environment diffusion.

AMS 2010 subject classifications: Primary 60J60, 60J65, 60K25; secondary 60J25, 34D20, 93E15.

Keywords: Markov modulated diffusions, Switching diffusions, Diffusions in random environment, Skorohod problem, Reflected diffusions, Geometric ergodicity, Invariant measures, Moment stability, Poisson's equation, Heavy traffic approximations, Semimartingale reflected Brownian motion.
1. Introduction
We study a family of constrained diffusions in random environment. Such models arise as diffusion approximations of stochastic processing networks (SPN) in heavy traffic. One such setting has been considered in [7], where it is shown that state processes of certain critically loaded generalized Jackson networks, for which the arrival and service rates depend on the system state and on the values of an independent finite state Markov process, converge under a suitable scaling to constrained random environment diffusions of the form studied in the current work. Our objective in this work is to study stability properties and invariant measures of such diffusions and to argue that the latter serve as good approximations for the steady states of the corresponding SPN model.

∗ Research supported in part by the Army Research Office (W911NF-0-1-0080 and W911NF-10-1-0158), National Science Foundation (DMS-1004418) and the US-Israel Binational Science Foundation (2008466).

Let G ⊂ R^K be a convex polyhedral cone with vertex at the origin, given as the intersection of half spaces G_i, i = 1, ..., N. Denote by n_i and d_i the inward normal and constraint direction associated with G_i. We assume that the Skorohod map Γ defined by the data {(d_i, n_i) : i = 1, 2, ..., N} is well posed and Lipschitz continuous (Assumption 2.1). The map Γ takes an RCLL trajectory ψ : [0, ∞) → R^K to another RCLL trajectory φ, which stays within G at all times, in a manner that is uniquely determined by the set of reflection directions {d_i : i = 1, 2, ..., N}. A precise description and definition of the Skorohod map is given in Section 2.1. The Markov modulated diffusion process we study is constrained to take values in G and is defined through the equation
\[
X(t) = \Gamma\Big( z + \int_0^{\cdot} b(X(s), Y(s))\, ds + \int_0^{\cdot} \sigma(X(s), Y(s))\, dW(s) \Big)(t), \qquad t \ge 0. \tag{1.1}
\]
The process (X, Y) is defined on a filtered probability space (Ω, F, P, {F_t}_{t≥0}) such that Y is an {F_t} Markov process with state space L = {1, 2, ..., L}, infinitesimal generator Q, and a unique stationary distribution q^* = {q^*_j : j ∈ L} (see Assumption 2.4), and W is an {F_t} standard Brownian motion. We also assume Lipschitz continuity of σ and b (Assumption 2.2) and boundedness and uniform nondegeneracy of σ (Assumption 2.3). We now introduce the main stability assumption on the drift coefficient b. Let
\[
C \doteq \Big\{ -\sum_{i=1}^{N} \zeta_i d_i : \zeta_i \ge 0,\ i \in \{1, \dots, N\} \Big\} \tag{1.2}
\]
and, for δ ∈ (0, ∞),
\[
C(\delta) \doteq \{ v \in C : \operatorname{dist}(v, \partial C) \ge \delta \}. \tag{1.3}
\]
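The membership condition underlying (1.2)-(1.3) can be checked numerically for a concrete set of constraint directions: v ∈ C exactly when −v is a nonnegative combination of the d_i, which is a nonnegative least squares problem. The following sketch is an illustration, not part of the paper; the function name, tolerance, and the choice of directions (those of Example 2.1 below) are assumptions, and it only tests v ∈ C, not the stronger margin condition v ∈ C(δ).

```python
import numpy as np
from scipy.optimize import nnls

def in_cone_C(v, D, tol=1e-9):
    """Check whether v lies in C = { -sum_i zeta_i d_i : zeta_i >= 0 },
    where the columns of D are the constraint directions d_i."""
    zeta, residual = nnls(D, -np.asarray(v, dtype=float))
    return residual <= tol

# Directions d1 = (1, -1/2)', d2 = (-1/3, 1)' (the ones used in Example 2.1).
D = np.column_stack([[1.0, -0.5], [-1.0 / 3.0, 1.0]])
print(in_cone_C([-1.0, -1.0], D))   # True: a stabilizing direction
print(in_cone_C([1.0, 1.0], D))     # False: points out of the cone
```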
The cone in (1.2) plays a key role in the stability analysis of constrained diffusions (see [16], [3], [1], [6]). For example, it follows from results in [1] that if the drift and diffusion coefficients do not depend on the process Y (i.e., for all (x, y) ∈ G × L, b(x, y) ≡ b(x) and σ(x, y) ≡ σ(x)), and if for some δ_0 > 0, b(x) ∈ C(δ_0) for all x ∈ G, then the Markov process X is positive recurrent and consequently has a unique invariant probability measure. In the Markov modulated setting considered here the cone C once again plays a key role; however, instead of requiring that b(x, y) ∈ C(δ_0) for all x, y, we impose the substantially weaker condition that the drift averaged over y according to the stationary distribution q^* is in C(δ_0), for all x. For technical reasons (see Remark 3.1) we are only able to treat the case where the drift can be decomposed as b(x, y) = b_1(x) + b_2(y). For a general drift field b(x, y) our condition for stability is somewhat stronger, as described below. Write b as
\[
b(x, y) = b_1(x, y) + b_2(y), \qquad (x, y) \in G \times L, \tag{1.4}
\]
where b_1 : G × L → R^K and b_2 : L → R^K are measurable maps. Define b^*_2 = Σ_{j∈L} q^*_j b_2(j) and b^*(x, y) = b_1(x, y) + b^*_2. Our stability assumption (Assumption 2.5) then requires that b^*(x, y) ∈ C(δ_0) for some δ_0 > 0. In Theorem 2.2 we show that, under Assumptions 2.1-2.5, (X, Y) is positive recurrent and has a unique invariant probability measure. For the case when b_1 = 0, we obtain a sharper result (Theorem 2.3), which says that if b^* (≡ b^*_2) is in the interior of C, then (X, Y) is positive recurrent, and if b^* is not in C, then (X, Y) is transient. Under the same stability condition, we identify an appropriate exponentially growing Lyapunov function V and establish the V-uniform ergodicity of (X, Y). As consequences of this geometric ergodicity property we obtain exponential integrability of the invariant probability measure, uniform in time estimates for polynomial moments of all orders, and certain functional central limit results (Theorem 2.4). We note that our stability condition (Assumption 2.5) on the drift vector field allows the drift to be "transient" in some states of the Markov process Y. For example, consider vectors v_1, v_2 ∈ R^K such that v_1 ∈ C° and v_2 ∈ C^c. Then it is well known that if b(x, y) ≡ v_1, X in (1.1) will be positive recurrent, and if b(x, y) ≡ v_2, X will be transient. Our results show that in a Markov modulated case where, for example, L = {1, 2} and b(x, y) ≡ v_y, the pair (X, Y) will be positive recurrent (in fact geometrically ergodic) if b^* = q^*_1 v_1 + q^*_2 v_2 ∈ C° and transient if b^* ∈ C^c.
Regularity of the Skorohod map (Assumption 2.1) is a key ingredient in the proof of Theorems 2.2-2.4. Motivated by the study of diffusion approximations of multi-class queueing networks, the authors of [18, 21] have identified a necessary and sufficient condition on the reflection matrix for weak existence and uniqueness of a reflected Brownian motion in the positive orthant R^K_+. This key condition, which requires the reflection matrix to be completely-S (see Section 2.2 for a precise definition), is substantially weaker than known sufficient conditions for Lipschitz continuity of the Skorohod map (Assumption 2.1). In [13], a stability theory for such semimartingale reflected Brownian motions (SRBM) has been developed. The key stability condition, referred to in our work as the DW-stability condition (see Definition 2.5), is formulated in terms of certain fluid limit trajectories associated with the SRBM model. Under this condition, the paper [13] shows that the SRBM is positive recurrent and admits a unique invariant probability measure. These results were strengthened in [5] by establishing geometric ergodicity. In the current work we consider a Markov modulated SRBM X in the nonnegative orthant R^K_+. The modulating process Y is, as before, a Markov process with values in L and satisfies Assumption 2.4. In particular, this corresponds to a setting where b(x, y) = b_2(y), σ(x, y) ≡ σ, and G ≡ R^K_+. As in [13] we make the basic assumption that the matrix (d_1|d_2| ... |d_K) is completely-S. Using a standard argument based on Girsanov's theorem and classical results of [21], one can establish the existence and uniquely characterize the probability law of such a process (see Theorem 2.5). Under the assumption that b^* (≡ b^*_2) satisfies the DW-stability condition, we prove that (X, Y) is positive recurrent and has a unique stationary distribution (Theorem 2.6).
Furthermore, we show that (X, Y ) is V -uniformly ergodic where the Lyapunov function V grows exponentially, and establish various stability properties of Markov modulated SRBM (see Theorem 2.7) analogous to those in Theorem 2.4. As noted earlier, constrained random environment diffusions studied in our work have been
shown in [7] to arise as diffusion limit models for certain generalized Jackson networks. Since such queueing networks are too complex to be analyzed directly, it is valuable to know that the steady state behavior of the limit diffusion model is a good approximation for that of the underlying queueing system. In the final part of this paper, we study the validity of such approximation. We consider a sequence of Markov modulated open queueing networks in heavy traffic. The external arrival processes and service processes are assumed to depend on the state of the system and an auxiliary finite state Markov process. The routing mechanism is governed by a K × K substochastic matrix P. In the nth network a Markov process Y n modulates the arrival and service rates. We assume that Y n has a finite state space L and infinitesimal generator Qn which converges to some matrix Q. Denote by Qn the queue length process b n a suitably scaled form of Qn . It is shown in [7] that under suitable heavy traffic and by Q b n , Y n ) converges to (X, Y ) weakly, where Y is a Markov process with infinitesimal condition, (Q generator Q and X is a constrained Markov modulated diffusion process defined as in (1.1). In Theorem 2.9 of the current work we show that, under appropriate heavy traffic and stability b n , Y n ) admits a stationary distribution which converges to the unique stationary conditions, (Q distribution of (X, Y ), as the scaling parameter n → ∞. The paper is organized as follows. We collect all the main results in Section 2. More precisely, the recurrence and transience properties for constrained Markov modulated diffusions are presented in Section 2.1. Section 2.2 considers stability properties for Markov modulated SRBM. Finally, in Section 2.3, we state our main result on the convergence of invariant measures for Markov modulated open queueing networks. Proofs of all results are given in Section 3. In Appendix, for completeness, we collect results, proofs of which are similar to arguments in existing literature. The following notation will be used. For a metric space U, let B(U) be the Borel σ-field on U and P(U) the collection of all probability measures on U. Denote the set of natural numbers by N and let N0 = N ∪ {0}. Denote by R the set of real numbers and R+ the set of nonnegative real numbers. For K ∈ N, let RK = {(x1 , x2 , · · · , xK )0 : xi ∈ R, i = 1, 2, . . . , K} 0 K and RK + = {(x1 , x2 , · · · , xK ) : xi ∈ R+ , i = 1, 2, . . . , K}. Let {ei }i=1 be the standard basis in K K R . For x, y ∈ R , the usual inner product is denoted as hx, yi. For di ∈ RK , i = 1, 2, . . . , K, denote (d1 |d2 | . . . |dK ) the matrix with di as the ith column. For B ∈ B(RK ), denote by ∂B, B ◦ ¯ the boundary, the interior, and the closure of B, respectively. Space of real bounded and B, measurable functions on some topological space T will be denoted by BM(T ). For f ∈ BM(T ), we denote supx∈T |f (x)| by |f |∞ . For g : [0, ∞) → RK and t ≥ 0, we write sup0≤s≤t |g(s)| as |g|∗t . Convergence in distribution of random variables (with values in some Polish space) Xn to X will be denoted as Xn ⇒ X. With an abuse of notation, weak convergence of probability measures (on a Polish space) µn to µ will also be denoted as µn ⇒ µ. For a Polish space V, let D([0, ∞), V) denote the class of right continuous functions with left limits defined from [0, ∞) to V with the usual Skorohod topology and C([0, ∞), V) the class of continuous functions from [0, ∞) to V with the local uniform topology. 
A stochastic process X = {X(t) : t ≥ 0} is said to be RCLL if every sample path of X is right continuous on [0, ∞) with finite left limits on (0, ∞). A countable set will be regarded as a metric space endowed with the discrete metric. We denote by ι : [0, ∞) → [0, ∞) the identity map. Finally, we will denote generic positive constants by c1 , c2 , . . .. Their values may change from one proof to another.
2. Main results
In this section we collect the main results of this work.
2.1. Recurrence and transience properties under a regular Skorohod map
Recall from Section 1 that G denotes a convex polyhedral cone with vertex at the origin, given as the intersection of half spaces G_i, i = 1, ..., N, and n_i is the inward normal vector associated with G_i, i.e., G_i = {x ∈ R^K : ⟨x, n_i⟩ ≥ 0}. Denote the set {x ∈ ∂G : ⟨x, n_i⟩ = 0} by F_i. With each face F_i we associate a unit vector d_i such that ⟨d_i, n_i⟩ > 0. This vector defines the direction of constraint associated with the face F_i. At points on ∂G where more than one face meets, there is more than one allowed direction of constraint. For x ∈ ∂G, define the set of directions of constraint
\[
d(x) = \Big\{ d \in \mathbb{R}^K : d = \sum_{i \in I(x)} \zeta_i d_i,\ \zeta_i \ge 0,\ |d| = 1 \Big\}, \tag{2.1}
\]
where I(x) = {i ∈ {1, 2, ..., N} : ⟨x, n_i⟩ = 0}. Note that if I(x) = {j} for some j ∈ {1, 2, ..., N}, then d(x) = {d_j}. We now introduce the Skorohod problem (SP) and the Skorohod map (SM) associated with G and d. Define D_G([0, ∞) : R^K) = {ξ ∈ D([0, ∞) : R^K) : ξ(0) ∈ G}. For ξ ∈ D([0, ∞) : R^K) and T ≥ 0, let |ξ|(T) denote the total variation of ξ on [0, T] with respect to the Euclidean norm on R^K.

Definition 2.1. Let ψ ∈ D_G([0, ∞) : R^K) be given. Then the pair (φ, η) ∈ D([0, ∞) : G) × D([0, ∞) : R^K) solves the SP for ψ with respect to G and d if and only if φ(0) = ψ(0) and for all t ∈ [0, ∞) the following hold:
(i) φ(t) = ψ(t) + η(t), and φ(t) ∈ G.
(ii) |η|(t) < ∞, and |η|(t) = ∫_[0,t] 1_{φ(s)∈∂G} d|η|(s).
(iii) There exists a Borel measurable map γ : [0, ∞) → R^K such that γ(t) ∈ d(φ(t)) a.e. d|η| and η(t) = ∫_[0,t] γ(s) d|η|(s).

Let D ⊂ D_G([0, ∞), R^K) be the domain on which there is a unique solution to the SP. On D we define the SM Γ as Γ(ψ) ≐ φ, if (φ, φ − ψ) is the unique solution of the SP posed by ψ. We will make the following assumption on the regularity of the SM defined by the data {(d_i, n_i) : i ∈ {1, 2, ..., N}}.
Assumption 2.1. The SM is well defined on all of D_G([0, ∞), R^K), that is, D = D_G([0, ∞), R^K), and the SM is Lipschitz continuous in the following sense: there exists κ_1 ∈ (1, ∞) such that for all ψ_1, ψ_2 ∈ D_G([0, ∞), R^K),
\[
\sup_{t \ge 0} |\Gamma(\psi_1)(t) - \Gamma(\psi_2)(t)| \le \kappa_1 \sup_{t \ge 0} |\psi_1(t) - \psi_2(t)|.
\]
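A minimal concrete case in which Assumption 2.1 is classical is the half line G = [0, ∞) with normal reflection, where the Skorohod map has the explicit form Γ(ψ)(t) = ψ(t) + sup_{0≤s≤t} max(−ψ(s), 0) and is Lipschitz in the supremum norm. The sketch below is an illustration, not taken from the paper; the helper name, the driving path, and the step size are assumptions made for the demo.

```python
import numpy as np

def skorohod_map_1d(psi):
    """Discrete version of the 1-D Skorohod map on [0, inf):
    Gamma(psi)(t) = psi(t) + max(0, max_{s<=t} (-psi(s))), with psi[0] >= 0."""
    running_min = np.minimum.accumulate(psi)
    pushing = np.maximum(0.0, -running_min)   # nondecreasing "local time" at 0
    return psi + pushing, pushing

# A Brownian path with negative drift, constrained to stay nonnegative.
rng = np.random.default_rng(0)
dt, n = 0.01, 1000
increments = -1.0 * dt + np.sqrt(dt) * rng.standard_normal(n)
psi = np.concatenate([[0.5], 0.5 + np.cumsum(increments)])
phi, eta = skorohod_map_1d(psi)
assert phi.min() >= 0.0
```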
We refer the reader to [15], [11], and [12] for sufficient conditions under which the above assumption holds. We now introduce the Markov process (X, Y) which will be studied here. The component Y is a Markov process with a finite state space L = {1, 2, ..., L} and infinitesimal generator Q, while X is a constrained diffusion with drift and diffusion coefficients that, in addition to depending on the current state, are modulated through the values of Y. More precisely, the process X satisfies an integral equation of the form
\[
X(t) = \Gamma\Big( x + \int_0^{\cdot} b(X(s), Y(s))\, ds + \int_0^{\cdot} \sigma(X(s), Y(s))\, dW(s) \Big)(t), \qquad t \ge 0, \tag{2.2}
\]
where W is a standard Wiener process which is independent of Y, and σ : G × L → R^{K×K}, b : G × L → R^K are measurable maps. We will make the usual Lipschitz assumption on the coefficients b and σ as follows.

Assumption 2.2. There exists κ_2 ∈ (0, ∞) such that, for all x_1, x_2 ∈ G and y ∈ L, |σ(x_1, y) − σ(x_2, y)| + |b(x_1, y) − b(x_2, y)| ≤ κ_2 |x_1 − x_2|.

Let S ≐ G × L and Z ≐ (X, Y). Using the above Lipschitz property along with the regularity assumption on the SM Γ (Assumption 2.1), it is easily seen that equation (2.2) is well posed. In particular, we have the following.

Theorem 2.1. Under Assumptions 2.1, 2.2, there is a filtered measurable space (Ω, F, {F_t}_{t≥0}) on which are given a collection of probability measures {P_z}_{z∈S} and {F_t} adapted processes (X, W, k) and Y with sample paths in C([0, ∞) : G × R^K × R^K) and D([0, ∞) : L) respectively, such that (Z, {P_z}_{z∈S}) is a Feller-Markov family and, for every z ≡ (x, y) ∈ S, P_z-a.s., the following hold.
(i) W is a K-dimensional standard {F_t} Brownian motion.
(ii) For all t ∈ (0, ∞),
\[
X(t) = x + \int_0^t b(Z(s))\, ds + \int_0^t \sigma(Z(s))\, dW(s) + k(t), \tag{2.3}
\]
and X(t) ∈ G.
(iii) For all t ∈ (0, ∞), |k|(t) < ∞ and |k|(t) = ∫_0^t 1_{X(s)∈∂G} d|k|(s).
(iv) There is an R^K-valued {F_t} progressively measurable process γ such that γ(t) ∈ d(X(t)) a.e. d|k| and, for all t ∈ (0, ∞), k(t) = ∫_0^t γ(s) d|k|(s).
(v) Y is an L-valued {F_t}-Markov process with Y(0) = y and infinitesimal generator Q.

We will denote the Markov family (Z, {P_z}_{z∈S}) merely as Z and denote the transition kernel of Z by P^Z_t, namely for z ∈ S and B ∈ B(S), P^Z_t(z, B) = P_z(Z(t) ∈ B). We recall the basic definitions of positive recurrence and transience of the Markov process Z.

Definition 2.2. The Markov process {Z(t) : t ≥ 0} is said to be positive recurrent if for each A ∈ B(G) with positive Lebesgue measure, j ∈ L, and z ∈ S, we have E_z(τ_{A×{j}}) < ∞, where τ_{A×{j}} = inf{t ≥ 0 : Z(t) ∈ A × {j}} and E_z denotes the expectation under P_z.

Definition 2.3. The Markov process {Z(t) : t ≥ 0} is said to be transient if there exist A ∈ B(G) with positive Lebesgue measure, j ∈ L, and z ∈ S such that P_z(τ_{A×{j}} < ∞) < 1.

We now introduce additional assumptions that will be needed for the main stability results. The second part of the following assumption will ensure irreducibility of the Markov process Z, while the first will be needed in some moment estimates.

Assumption 2.3. (i) For some κ_3 ∈ (0, ∞), |σ(z)| ≤ κ_3 for all z ∈ S. (ii) There exists κ_4 ∈ (0, ∞) such that for all z ∈ S and ζ ∈ R^K, ζ'σ(z)σ'(z)ζ ≥ κ_4 ζ'ζ.

We will make the following irreducibility assumption on the finite state Markov process associated with the generator Q. Let T_t = exp(tQ), t ≥ 0.

Assumption 2.4. For every t > 0 and i, j ∈ L, T_t(i, j) > 0.

This assumption ensures that the Markov process with the infinitesimal generator Q has a unique stationary distribution q^* ≡ {q^*_j}_{j∈L}. We now give the main stability assumption on the drift coefficient b. Recall the cone C and the set C(δ) introduced in (1.2) and (1.3). Also recall the maps b_1, b_2 from (1.4). Then our assumption on the drift b is as follows.

Assumption 2.5. There exist δ_0 ∈ (0, ∞) and a bounded set A ⊂ G such that for all x ∈ G\A and y ∈ L, b^*(x, y) ∈ C(δ_0), where
\[
b^*(x, y) = b_1(x, y) + b^*_2 \quad \text{and} \quad b^*_2 = \sum_{j \in L} q^*_j b_2(j), \qquad (x, y) \in S.
\]

The following theorem is the first main result of this work.

Theorem 2.2. Suppose that Assumptions 2.1-2.5 hold. Then the Markov process (Z, {P_z}_{z∈S}) is positive recurrent and has a unique invariant measure π.

Remark 2.1. In [24], the authors consider a 1-dimensional Markov-modulated reflected Ornstein-Uhlenbeck process {X(t) : t ≥ 0} defined as follows:
\[
X(t) = -\int_0^t [\lambda_1(Y(s))X(s) + \lambda_2(Y(s))]\, ds + \int_0^t \sigma(Y(s))\, dB(s) + k(t), \qquad t \ge 0,
\]
where {Y(t) : t ≥ 0} is as in Theorem 2.1(v), {B(t) : t ≥ 0} is a standard 1-dimensional Brownian motion, and λ_1, λ_2, σ are all strictly positive functions. The paper shows that (X, Y) has a unique stationary distribution. Clearly b(x, y) = −[λ_1(y)x + λ_2(y)] satisfies Assumption 2.5 and thus Theorem 2.2 in particular covers the setting considered in [24]. In fact, Theorem 2.2, in addition to covering the much more general multidimensional setting, shows that the positivity assumption on λ_1, λ_2 can be relaxed to the condition that λ_1, λ_2 are nonnegative and λ_2(j) > 0 for some j ∈ L.

For the case when b_1 = 0, we obtain a sharper result as follows.

Theorem 2.3. Suppose that b_1(z) = 0 for all z ∈ S. Also suppose that Assumptions 2.1-2.4 hold. Then the following hold:
(i) If b^*_2 ∈ C°, then (Z, {P_z}_{z∈S}) is positive recurrent.
(ii) If b^*_2 ∉ C, then (Z, {P_z}_{z∈S}) is transient.

In Section 2.4, we will establish geometric ergodicity of the Markov family (Z, {P_z}_{z∈S}). More precisely, the following result will be proved. Let f : S → R be a measurable function such that, for some measurable g : S → R and for all z ∈ S, t ≥ 0,
\[
E_z\Big( |f(Z(t))| + \int_0^t |g(Z(s))|\, ds \Big) < \infty, \qquad E_z[f(Z(t))] = f(z) + E_z\Big( \int_0^t g(Z(s))\, ds \Big).
\]
Denote by D(A) the collection of all such measurable functions f. For a pair (f, g) as above, we write (f, g) ∈ A, or with abuse of terminology, g = Af. The (multi-valued) operator A is referred to as the extended generator of Z and D(A) its domain.

Theorem 2.4. Suppose that Assumptions 2.1-2.5 hold. Then the following properties hold.
(i) There exists β_1 ∈ (0, ∞) such that for all measurable f : S → R which satisfy |f(z)| ≤ e^{β_1|x|} for all z = (x, y) ∈ S,
\[
\int_S |f(z)|\, \pi(dz) < \infty.
\]
In particular, for all c ∈ R^K with |c| ≤ β_1,
\[
\int_S e^{\langle c, x\rangle}\, \pi(dx\, dy) < \infty.
\]
(ii) There are β_2, β_3, b_0 ∈ (0, ∞) such that for f as in (i), the following hold.
(a) For all z = (x, y) ∈ S and t ∈ (0, ∞), |E_z(f(Z(t))) − π(f)| ≤ e^{β_2(|x|+1)} e^{−b_0 t}.
(b) Defining, for t ≥ 0, S_t ≐ ∫_0^t f(Z(u)) du, we have that f^c_t(z) ≐ E_z(S_t − tπ(f)) converges to a finite limit f̂(z) for all z ∈ S.
(c) The convergence in (b) is exponentially fast, i.e., |f^c_t(z) − f̂(z)| ≤ e^{β_3(|x|+1)} e^{−b_0 t} for all z = (x, y) ∈ S and t ∈ (0, ∞).
(d) The function f̂ ∈ D(A) and solves the Poisson equation: Af̂(z) = π(f) − f(z), z ∈ S.
(iii) Let f : S → R be a measurable function such that, with β_1 as in (i), f^2(z) ≤ e^{β_1|x|} for all z = (x, y) ∈ S. Define, for t ∈ [0, 1],
\[
\xi^n(t) \doteq \frac{1}{\sqrt{n}} \int_0^{nt} [f(Z(s)) - \pi(f)]\, ds.
\]
Let f̂ be as in (ii)(b). Define
\[
\gamma_f^2 \doteq 2 \int_S \hat f(z)\,(f(z) - \pi(f))\, \pi(dz).
\]
Then |γ_f| < ∞ and ξ^n converges weakly to γ_f B in C([0, 1], R), where B is a 1-dimensional standard Brownian motion.
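To make the objects in Theorems 2.2-2.4 concrete, here is a small simulation sketch (not from the paper) of a two dimensional Markov modulated diffusion constrained to the nonnegative orthant with normal reflection, the special case in which a constrained Euler step reduces to a componentwise projection. The generator, drift, and diffusion coefficient are illustrative assumptions chosen so that the drift is destabilizing in one modulating state but the averaged drift lies in the interior of the cone C, in the spirit of Theorem 2.3(i); the time average computed at the end is the kind of functional appearing in Theorem 2.4(iii).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-state modulating chain: generator Q with stationary law q* = (1/4, 3/4).
Q = np.array([[-3.0, 3.0], [1.0, -1.0]])
b2 = {0: np.array([1.0, 1.0]),      # "transient" drift while Y = 0
      1: np.array([-2.0, -2.0])}    # stabilizing drift while Y = 1
# Averaged drift b2* = (1/4) b2[0] + (3/4) b2[1] = (-1.25, -1.25), which lies in the
# interior of C for normal reflection on the orthant (C is the nonpositive orthant here).

def sigma(y):
    return np.eye(2) * (1.0 + 0.5 * y)

dt, T = 1e-3, 500.0
x, y = np.zeros(2), 0
time_avg, n_steps = 0.0, int(T / dt)
for _ in range(n_steps):
    if rng.random() < -Q[y, y] * dt:   # two-state chain: jump to the other state
        y = 1 - y
    dw = np.sqrt(dt) * rng.standard_normal(2)
    x = x + b2[y] * dt + sigma(y) @ dw
    x = np.maximum(x, 0.0)             # projection = normal reflection on R^2_+
    time_avg += np.linalg.norm(x) * dt / T

print("estimated stationary mean of |X|:", time_avg)
```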
2.2. Markov modulated SRBM
In Section 4 we will consider a model with somewhat more restrictive conditions on the domain and the coefficients b and σ but significantly weaker assumptions on the constraint vector field d. Suppose that G = R^K_+ and N = K. For i ∈ K, let n_i = e_i. Then the ith face is F_i = {x ∈ G : x_i = 0}. Define a K × K matrix R = (d_1| ··· |d_K) and a K-dimensional vector b_0 ∈ R^K. Let σ be a K × K positive definite matrix. We recall from [21] the definition of an SRBM associated with (G, b_0, σ, R).

Definition 2.4. For x ∈ G, an SRBM associated with (G, b_0, σ, R) that starts from x is a continuous, {F̄_t}-adapted K-dimensional process X̄, defined on some filtered probability space (Ω̄, F̄, {F̄_t}_{t≥0}, P̄), such that, P̄-a.s., the following hold.
(i) X̄(t) = x + b_0 t + σW̄(t) + RŪ(t) and X̄(t) ∈ G for all t ≥ 0.
(ii) W̄ is a K-dimensional standard {F̄_t} Brownian motion.
(iii) Ū is an {F̄_t}-adapted K-dimensional process such that, for i = 1, ..., K, Ū_i(0) = 0, Ū_i is continuous and nondecreasing, and Ū_i can increase only when X̄ is on F_i, i.e., ∫_0^∞ 1_{X̄_i(s)>0} dŪ_i(s) = 0.

An SRBM arises as the diffusion approximation limit for many multiclass queueing networks in heavy traffic (see [22]). The paper [21] shows that if R is completely-S, namely, for every k × k principal submatrix R̃ of R there is a k-dimensional vector v_{R̃} such that v_{R̃} ≥ 0 and R̃v_{R̃} > 0, then (weak) existence and uniqueness of the SRBM hold. This condition, which is significantly weaker than Assumption 2.1 made in Section 2.1, is in fact known to be a necessary condition for existence of an SRBM ([18, Theorem 2]). We record this condition below for future reference.

Assumption 2.6. The matrix R is completely-S.

The following result follows from [21] along with a straightforward argument based on Girsanov's theorem. Fix a measurable map b_2 : L → R^K.

Theorem 2.5. Suppose that Assumption 2.6 holds. Then there is a filtered measurable space (Ω, F, {F_t}_{t≥0}) on which are given a collection of probability measures {P_z}_{z∈S} and {F_t}-adapted processes (X, W, U) and Y with sample paths in C([0, ∞) : G × R^K × G) and D([0, ∞) : L), respectively, such that for every z ≡ (x, y) ∈ S, P_z-a.s., the following hold.
(i) W is a K-dimensional standard {F_t} Brownian motion.
(ii) For all t ≥ 0,
\[
X(t) = x + \int_0^t b_2(Y(s))\, ds + \sigma W(t) + RU(t), \tag{2.4}
\]
and X(t) ∈ G.
(iii) For each i = 1, ..., K, U_i(0) = 0, U_i is continuous and nondecreasing, and
\[
\int_0^\infty 1_{\{X_i(s) > 0\}}\, dU_i(s) = 0.
\]
(iv) Y is an L-valued {F_t} Markov process with Y(0) = y and infinitesimal generator Q.

Let Z = (X, Y). Then (Z, {P_z}_{z∈S}) is a Feller-Markov family. We now recall the key stability condition, introduced in [13], for positive recurrence of an SRBM in terms of the associated "fluid limit" trajectories.

Definition 2.5. We say a vector b_0 ∈ R^K satisfies the DW-stability condition if for all φ ∈ C([0, ∞) : G) satisfying the property (F) below, we have φ(t) → 0 as t → ∞.
(F) For some z ∈ G and η ∈ C([0, ∞) : G), φ(t) = z + b_0 t + Rη(t) for all t ≥ 0, where, for i = 1, ..., K, η_i(0) = 0, η_i is nondecreasing, and ∫_0^∞ 1_{\{φ_i(s) > 0\}} dη_i(s) = 0.

In [13], the authors showed that if Z is a (G, b_0, σ, R) SRBM, i.e., b_2(y) = b_0 for all y ∈ L, and b_0 satisfies the DW-stability condition, then the SRBM is positive recurrent and consequently has a unique invariant probability distribution. In the current work we establish a similar result for the Markov modulated setting.

Theorem 2.6. Suppose that Assumptions 2.4 and 2.6 hold and the vector b^*_2 = Σ_{j∈L} q^*_j b_2(j) satisfies the DW-stability condition. Then the family (Z, {P_z}_{z∈S}) is positive recurrent and admits a unique invariant probability measure π.

In fact, we establish geometric ergodicity properties similar to those in Theorem 2.4. The analogous result for the constant drift case (i.e. b_2(y) ≡ b_0) has been proved in [5].

Theorem 2.7. Under the assumptions made in Theorem 2.6, the properties in Theorem 2.4 hold for the Markov process Z.
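Assumption 2.6 can be verified numerically for a given reflection matrix. The brute-force sketch below is an illustration, not an algorithm from the paper: it enumerates all principal submatrices R̃ and, for each, uses a small linear program to look for v ≥ 0 with R̃v ≥ 1 componentwise (equivalent, after scaling, to the existence of v ≥ 0 with R̃v > 0). The helper name and the use of scipy.optimize.linprog are assumptions of the demo.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def is_completely_S(R):
    """Return True if every principal submatrix R~ of R admits v >= 0 with R~ v > 0."""
    R = np.asarray(R, dtype=float)
    K = R.shape[0]
    for k in range(1, K + 1):
        for idx in itertools.combinations(range(K), k):
            sub = R[np.ix_(idx, idx)]
            # Feasibility LP: find v >= 0 with sub @ v >= 1 (the objective is irrelevant).
            res = linprog(c=np.zeros(k), A_ub=-sub, b_ub=-np.ones(k),
                          bounds=[(0, None)] * k, method="highs")
            if not res.success:
                return False
    return True

# The identity (normal reflection) is completely-S; a matrix with a zero diagonal
# entry is not, since the corresponding 1x1 principal submatrix already fails.
print(is_completely_S(np.eye(3)))                            # True
print(is_completely_S(np.array([[0.0, 1.0], [1.0, 1.0]])))   # False
```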
2.3. Convergence of invariant measures for SPN
Consider a sequence of open queueing networks with the following structure. Each network has K service stations, each of which has an infinite capacity buffer. We denote the ith station by P_i, i ∈ K ≐ {1, 2, ..., K}. All customers/jobs at a station are "homogeneous" in terms of service requirement and routing decisions. Arrivals of jobs can be from outside the system and/or from internal routing. Upon completion of service at station P_i a customer is routed to some other service station or exits the system. The external arrival processes and service processes are assumed to depend on the state of the system and an auxiliary finite state Markov process. The routing mechanism is governed by a K × K substochastic matrix P. Roughly speaking, the conditional probability that a job completed at station P_i is routed to station P_j equals the (i, j)th entry of the matrix P. The above formal description is made precise in what follows.
In the nth network, the Markov process modulating the arrival and service rates is denoted as {Y^n(t) : t ≥ 0}. We assume that Y^n has a finite state space L and infinitesimal generator Q^n which converges to some matrix Q. Let Q^n_i(t) denote the number of customers at station P_i at time t. Then the evolution of Q^n can be described by the following equation:
\[
Q^n_i(t) = Q^n_i(0) + A^n_i(t) - D^n_i(t) + \sum_{j=1}^{K} D^n_{ji}(t), \qquad i \in \mathcal{K}. \tag{2.5}
\]
Here A^n_i(t) is the number of arrivals from outside at station P_i by time t, D^n_i(t) is the number of service completions by time t at station P_i, and D^n_{ji}(t) is the number of jobs that are routed to P_i immediately upon completion at station P_j by time t. Letting D^n_{i0}(t) be the number of customers by time t who leave the network after service at P_i, we have
\[
D^n_i(t) = \sum_{j=0}^{K} D^n_{ij}(t). \tag{2.6}
\]
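The bookkeeping in (2.5)-(2.6) is easy to reproduce in a toy discrete-event simulation. The sketch below is an illustration with made-up rates, not a construction from the paper: it runs a two-station network with exponential arrival and service clocks and Bernoulli routing via P, and then checks the flow balance identity (2.5) for the resulting counts. All variable names and numerical values are assumptions of the demo.

```python
import numpy as np

rng = np.random.default_rng(2)
K = 2
P = np.array([[0.0, 0.5], [1.0 / 3.0, 0.0]])   # substochastic routing matrix
lam = np.array([1.0, 0.8])                     # external arrival rates (assumed)
mu = np.array([3.0, 3.0])                      # service rates (assumed)

Qlen = np.zeros(K, dtype=int)                  # queue lengths
A = np.zeros(K, dtype=int)                     # external arrival counts A_i
D = np.zeros((K, K + 1), dtype=int)            # D[i, j]: completions at i routed to j (j = K means exit)

t, T = 0.0, 2000.0
while t < T:
    rates = np.concatenate([lam, mu * (Qlen > 0)])   # no service when the buffer is empty
    total = rates.sum()
    t += rng.exponential(1.0 / total)
    e = rng.choice(2 * K, p=rates / total)
    if e < K:                                        # external arrival at station e
        A[e] += 1
        Qlen[e] += 1
    else:                                            # service completion at station i
        i = e - K
        Qlen[i] -= 1
        j = rng.choice(K + 1, p=np.append(P[i], 1.0 - P[i].sum()))
        D[i, j] += 1
        if j < K:
            Qlen[j] += 1

# Flow balance (2.5) with Q(0) = 0: Q_i = A_i - D_i + sum_j D_{ji}, where D_i = sum_j D_{ij}.
D_i = D.sum(axis=1)
assert np.array_equal(Qlen, A - D_i + D[:, :K].sum(axis=0))
```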
The dependence of the arrival and processing rates on the system state and Y^n is modeled by requiring that A^n_i and D^n_{ij}, 1 ≤ i ≤ K, 0 ≤ j ≤ K, are counting processes given on a suitable filtered probability space (Ω^n, F^n, P^n, {F^n_t}_{t≥0}) such that, for some measurable functions λ^n_i, μ̃^n_i : R^K_+ × L → R_+, the processes
\[
\tilde A^n_i(\cdot) \equiv A^n_i(\cdot) - \int_0^{\cdot} \lambda^n_i(Q^n(u), Y^n(u))\, du, \qquad
\tilde D^n_{ij}(\cdot) \equiv D^n_{ij}(\cdot) - P_{ij}\int_0^{\cdot} \tilde\mu^n_i(Q^n(u), Y^n(u))\, du \tag{2.7}
\]
are locally square integrable {F^n_t} martingales. Here P_{i0} = 1 − Σ_{j=1}^K P_{ij}. We assume that the processes A^n_i and D^n_{ij}, 1 ≤ i ≤ K, 0 ≤ j ≤ K, and Y^n have no common jumps. We also require that Y^n is a {F^n_t} Markov process. The functions λ^n_i and μ̃^n_i, i ∈ K, represent the arrival and service rates. We denote by K_0 (K_0 ⊆ K) the set of indices of stations which receive arrivals from outside. In particular, λ^n_i(x, y) = 0 for all (x, y) ∈ R^K_+ × L whenever i ∈ K\K_0. Corresponding to the fact that no service occurs when the buffer is empty, μ̃^n_i(x, y) = 0 if x_i = 0. Let λ^n = (λ^n_1, ..., λ^n_K)'.
We assume that, for each i ∈ K, μ̃^n_i restricted to (R^K_+ \ {x ∈ R^K_+ : x_i = 0}) × L can be extended to a function μ^n_i defined on R^K_+ × L (that satisfies additional properties as specified below), and write μ^n = (μ^n_1, ..., μ^n_K)'. Let
\[
b^n \doteq \frac{\lambda^n - [I - P']\mu^n}{\sqrt{n}}.
\]
We now introduce the main assumption on model parameters.

Assumption 2.7.
(i) The spectral radius of P is strictly less than 1.
(ii) There exist θ_1, θ̄_1 ∈ (0, ∞) such that, for all n ≥ 1, i ∈ K_0, j ∈ K and (x, y) ∈ R^K_+ × L, nθ_1 ≤ |λ^n_i(x, y)| ≤ nθ̄_1 and nθ_1 ≤ |μ^n_j(x, y)| ≤ nθ̄_1.
(iii) For some θ_2 ∈ (0, ∞), sup_{(x,y)∈R^K_+×L} |b^n(x, y)| ≤ θ_2.
(iv) There exists a bounded Lipschitz map b : R^K_+ × L → R^K such that b^n(√n x, y) → b(x, y) uniformly on R^K_+ × L as n → ∞.
(v) There exist R^K_+-valued bounded Lipschitz functions λ, μ defined on R^K_+ × L such that λ^n(√n x, y)/n → λ(x, y) and μ^n(√n x, y)/n → μ(x, y) uniformly for (x, y) in compact subsets of R^K_+ × L as n → ∞. Furthermore, λ = [I − P']μ.
(vi) For each i ∈ K\K_0, there exists j ∈ K_0 such that P^m_{ji} > 0 for some m ∈ N.
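Parts (i) and (vi) of Assumption 2.7 are finite checks on the routing matrix alone and can be verified directly. The sketch below is illustrative, using the routing matrix of Example 2.1 and an assumed set K_0: it computes the spectral radius and tests whether every station without external arrivals is reachable from some station in K_0 through powers of P.

```python
import numpy as np

P = np.array([[0.0, 0.5], [1.0 / 3.0, 0.0]])   # routing matrix from Example 2.1
K0 = {0}                                        # stations with external arrivals (assumed)

# Assumption 2.7(i): spectral radius of P strictly less than 1.
spectral_radius = max(abs(np.linalg.eigvals(P)))
print("spectral radius:", spectral_radius, "< 1:", spectral_radius < 1)

# Assumption 2.7(vi): every i outside K0 satisfies (P^m)_{ji} > 0 for some j in K0, m >= 1.
K = P.shape[0]
reach = np.zeros((K, K), dtype=bool)
Pm = np.eye(K)
for _ in range(K):           # powers up to K suffice for reachability
    Pm = Pm @ P
    reach |= Pm > 0
ok = all(any(reach[j, i] for j in K0) for i in range(K) if i not in K0)
print("Assumption 2.7(vi) holds:", ok)
```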
For t ≥ 0, let
\[
\hat Q^n(t) = \frac{Q^n(t)}{\sqrt{n}}.
\]
Denote the ith column of [I − P0 ] by di and let G = RK + and N = K. Recall from Section 2.1 the definition of the SP and SM Γ associated with G and d (d is defined through (2.1)). Under Assumption 2.7 (i), it follows from [15] that Assumption 2.1 is satisfied. b n , Y n ) converges weakly Theorem 3.2 of [7] shows that under Assumption 2.7, as n → ∞, (Q to a Markov process (X, Y ), where Y is a Markov process with infinitesimal generator Q and X is a reflected diffusion process with state dependent and Markov modulated coefficients, defined as in (1.1). To state this result precisely, define for z ∈ S, a K × [K + K(K + 1)]-dimensional matrix Σ(z) as . Σ(z) = (A(z), B1 (z), . . . , BK (z)) , where A and Bi , i ∈ K, are K × K and K × (K + 1) matrices given as follows. For z ∈ S, p p A(z) = diag λ1 (z), . . . , λK (z) , Bi (z) = Bi0 (z), Bi1 (z), . . . , BiK (z) ,
p p . . where Bi0 (z) = −1i Pi0 µi (z), Bii (z) = 0 and for j ∈ K and j 6= i, Bij (z) = 1ij Pij µi (z). Here 1i is a K-dimensional vector with 1 at the ith coordinate and 0 elsewhere, 0 is the Kdimensional 0 vector, and 1ij is a K-dimensional vector with −1 at the ith coordinate, 1 at the j th coordinate, and 0 elsewhere. It is easy to see that due to Assumption (ii) and (vi), Σ(z)Σ(z)0 is uniformly nondegenerate (see [4, Appendix]). More precisely, there exists a θ3 ∈ (0, ∞) such that, for all ζ ∈ RK and z ∈ S, ζ 0 (Σ(z)Σ(z)0 )ζ ≥ θ3 ζ 0 ζ. One can then find a Lipschitz function σ : S → RK×K (cf. [20, Theorem 5.2.2]) such that Σ(z)Σ(z)0 = σ(z)σ(z)0 . Note that b given in Assumption 2.7(iv) and σ introduced above satisfy Assumptions 2.2 and . bn n 2.3. Denote Z n = (Q , Y ). From Theorem 3.2 of [7], we have the following conclusion. Theorem 2.8. [7] Let (Ω, F, {Pz }z∈S , {Ft }t≥0 ) and Z be as in Theorem 2.1. Suppose P n ◦ Z n (0)−1 converges to ν ∈ P(Ω) weakly as n → ∞. Then Z n n −1 P ◦ (Z ) ⇒ Pz ◦ Z −1 ν(dz), as n → ∞. S
The following is the main result of the section. Theorem 2.9. Suppose that Assumption 2.7 and 2.4 holds and that b in Assumption 2.7(iv) can be expressed as in (1.4) in terms of functions b1 and b2 that satisfy Assumption 2.5. Then there exists N ∈ N such that for any n ≥ N , the Markov process Z n admits a stationary distribution. Let πn be a stationary distribution of Z n . Then πn ⇒ π as n → ∞, where π is as in Theorem 2.2. In the following, we provide an explicit example, where assumptions of the above theorem hold. 0 21 Example 2.1. Let K = 2, L = {1, 2}, and P = . The arrival and service rate λn 1 3 0 and µn are defined as follows. For x = (x1 , x2 ) ∈ R2+ and y ∈ L, √ 0 √ √ √ n(e−x1 / n + 4) + ny, n(e−x2 / n + 4) + 2ny , λn (x, y) = 0 24 √ 27 √ n µ (y) = ny + 2ny, ny + 3ny . 5 5 Therefore,
\[
b^n(x, y) = \big(e^{-x_1/\sqrt{n}} + 4 - 3y,\; e^{-x_2/\sqrt{n}} + 4 - 3y\big)'
\]
and
\[
b(x, y) = b^n(\sqrt{n}\,x, y) = \big(e^{-x_1} + 4 - 3y,\; e^{-x_2} + 4 - 3y\big)'. \tag{2.8}
\]
Let q^* = (1/4, 3/4)'. Let Y^n be a Markov process with state space L such that it converges to a Markov process Y with stationary distribution q^*. With the above model parameters and letting (Q̂^n(0), Y^n(0)) = z_0 ∈ S, we have from Theorem 2.8 that (Q̂^n, Y^n) ⇒ (X, Y), where X is defined as in (1.1) with drift b defined in (2.8) and diffusion coefficient σ constructed as above Theorem 2.8. Note that the constraint directions in this example are d_1 = (1, -1/2)' and d_2 = (-1/3, 1)', and the cone C = {-ζ_1 d_1 - ζ_2 d_2 : ζ_1 ≥ 0, ζ_2 ≥ 0}. Also observe that, for all x ∈ R^2_+,
\[
b(x, 1) = \big(e^{-x_1} + 1,\; e^{-x_2} + 1\big) \in C^c, \qquad b(x, 2) = \big(e^{-x_1} - 2,\; e^{-x_2} - 2\big) \in C^{\circ},
\]
and the averaged drift
\[
b^*(x) = \Big(e^{-x_1} - \frac{5}{4},\; e^{-x_2} - \frac{5}{4}\Big)' \in C^{\circ}.
\]
In fact, for any 0 < δ_0 < 1/4, we have that for all x ∈ R^2_+, b^*(x) ∈ C(δ_0). By Theorem 2.2, (X, Y) is positive recurrent and has a unique invariant measure π. Finally, from Theorem 2.9, (Q̂^n, Y^n) admits an invariant probability measure π^n for n large enough and π^n ⇒ π as n → ∞.
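The membership claims in Example 2.1 can be double-checked numerically with the same nonnegative least squares idea used after (1.3). The sketch below is illustrative only: it evaluates the averaged drift b^*(x) = q^*_1 b(x, 1) + q^*_2 b(x, 2) on a grid and confirms that it always lies in the cone generated by −d_1, −d_2, while b(x, 1) never does.

```python
import numpy as np
from scipy.optimize import nnls

D = np.column_stack([[1.0, -0.5], [-1.0 / 3.0, 1.0]])   # d1, d2 from Example 2.1
q = np.array([0.25, 0.75])

def b(x, y):
    return np.exp(-x) + 4.0 - 3.0 * y    # componentwise drift, y in {1, 2}

def in_C(v, tol=1e-9):
    _, res = nnls(D, -v)
    return res <= tol

xs = [np.array([a, c]) for a in np.linspace(0, 5, 6) for c in np.linspace(0, 5, 6)]
b_star = [q[0] * b(x, 1) + q[1] * b(x, 2) for x in xs]
print(all(in_C(v) for v in b_star))           # True: the averaged drift is stabilizing
print(any(in_C(b(x, 1)) for x in xs))         # False: the drift in state 1 alone is not
```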
3. Stability properties under a regular Skorohod map
3.1. Positive Recurrence
In this section we prove Theorem 2.2. Assumptions 2.1-2.5 will be assumed throughout this section. Recall the parameter δ_0 introduced in Assumption 2.5. Let υ : [0, ∞) → R^K be a measurable map such that
\[
\int_0^t |\upsilon(s)|\, ds < \infty \quad \text{for all } t \ge 0. \tag{3.1}
\]
For x ∈ G and υ as above, let
\[
x(t) = \Gamma\Big( x + \int_0^{\cdot} \upsilon(s)\, ds \Big)(t), \qquad t \ge 0. \tag{3.2}
\]
For x ∈ G, let A(x) ≡ A(x, δ_0) be the set of all absolutely continuous functions x defined by (3.2) for some υ : [0, ∞) → C(δ_0) that satisfies (3.1). Define the "hitting time to the origin" function as follows:
\[
T(x) = \sup_{x \in \mathcal{A}(x)} \inf\{t \in (0, \infty) : x(t) = 0\}, \qquad x \in G. \tag{3.3}
\]
Note that T(0) = 0. The following lemma from [1] (cf. Lemma 3.1 therein) is a key ingredient in our analysis.

Lemma 3.1. [1] The function T defined by (3.3) satisfies the following properties.
(i) For some Θ_1 ≡ Θ_1(δ_0) ∈ (0, ∞), |T(x_1) − T(x_2)| ≤ Θ_1 |x_1 − x_2| for all x_1, x_2 ∈ G.
(ii) For some Θ_2 ≡ Θ_2(δ_0) ∈ (0, ∞), Θ_2 |x| ≤ T(x) ≤ Θ_1 |x| for all x ∈ G.
(iii) Fix x ∈ G and let x ∈ A(x). Then for all t > 0, T(x(t)) ≤ (T(x) − t)^+.

Define, for (x, y) ∈ S, b^c(y) = b(x, y) − b^*(x, y) = b_2(y) − b^*_2. The following lemma is an immediate consequence of Lemma 3.1 and the Lipschitz property of Γ. The proof is quite similar to that of Lemma 4.1 of [1]; however, for completeness we provide the arguments in the Appendix. Recall the filtered probability space (Ω, F, {F_t}, {P_z}_{z∈S}) and processes X, W, Y, Z introduced in Theorem 2.1.

Lemma 3.2. Let ∆ > 0 and u > 0 be arbitrary. Fix z ∈ S. Then, P_z-a.s., on the set {ω : X(t, ω) ∈ G \ A for all t ∈ (u, u + ∆]},
\[
T(X(u + \Delta)) \le (T(X(u)) - \Delta)^+ + \kappa_1 \Theta_1 \nu^u_\Delta,
\]
where Θ_1 and κ_1 are as in Lemma 3.1(i) and Assumption 2.1, respectively, and
\[
\nu^u_\Delta \doteq \sup_{u \le t \le u+\Delta} \Big| \int_u^t b^c(Y(s))\, ds + \int_u^t \sigma(Z(s))\, dW(s) \Big|. \tag{3.4}
\]

Lemma 3.3. There exists a Θ_3 ∈ (0, ∞) such that for all α, t ∈ (0, ∞) and z ∈ S,
\[
E_z\big(\exp\{\alpha \nu^0_t\}\big) \le 8 \exp\{\Theta_3 \alpha (1 + \alpha + \alpha t)\}, \tag{3.5}
\]
where ν^0_t is defined as in (3.4) with u, ∆ replaced by 0, t, respectively.

Proof: By Hölder's inequality,
\[
\Big( E_z\Big[\exp\Big\{\alpha \sup_{0 \le s \le t}\Big|\int_0^s b^c(Y(u))\, du + \int_0^s \sigma(Z(u))\, dW(u)\Big|\Big\}\Big] \Big)^2
\le E_z\Big[\exp\Big\{2\alpha \sup_{0 \le s \le t}\Big|\int_0^s b^c(Y(u))\, du\Big|\Big\}\Big]\;
E_z\Big[\exp\Big\{2\alpha \sup_{0 \le s \le t}\Big|\int_0^s \sigma(Z(u))\, dW(u)\Big|\Big\}\Big]. \tag{3.6}
\]
Consider the first expectation on the right hand side of the above inequality. For f ∈ BM(L), s ≥ 0 and y ∈ L, let P^Y_s f(y) = E(f(Y(s)) | Y(0) = y). Let g(·) be a solution of the Poisson equation for b^c(·) corresponding to the Markov semigroup {P^Y_s}_{s≥0}, i.e., for y ∈ L and s ≥ 0,
\[
P^Y_s g(y) - g(y) - \int_0^s P^Y_u b^c(y)\, du = 0.
\]
Then, under P_z,
\[
M_s \doteq g(Y(s)) - g(Y(0)) - \int_0^s b^c(Y(u))\, du \tag{3.7}
\]
is an {F_s} martingale. We next show that, for all s ≥ 0 and y ≥ 0,
\[
P_z(|M_s| \ge y) \le 2 \exp\Big( \frac{-2y^2}{(1 + v^2)^2 (s + 1)} \Big), \tag{3.8}
\]
where v = 2(|g|_∞ + |b^c|_∞) < ∞. For fixed s ≥ 0, let
\[
\xi_k =
\begin{cases}
M_{k+1} - M_k, & 0 \le k \le \lfloor s \rfloor - 1, \\
M_s - M_{\lfloor s \rfloor}, & k = \lfloor s \rfloor.
\end{cases}
\]
Then M_s = Σ_{i=0}^{⌊s⌋} ξ_i and, for 0 ≤ k ≤ ⌊s⌋, E_z(ξ_k | F_k) = 0 and |ξ_k| ≤ v. Using well known concentration inequalities for martingales with bounded increments (see e.g. Corollary 2.4.7 in [9]), for 0 ≤ k ≤ ⌊s⌋ and y ≥ 0,
\[
P_z\Big( \sum_{i=0}^{k} \xi_i \ge y\sqrt{k+1} \Big) \le \exp\Big( -\frac{2y^2}{(1+v^2)^2} \Big).
\]
Therefore,
\[
P_z\Big( \sum_{i=0}^{\lfloor s \rfloor} \xi_i \ge y \Big) \le \exp\Big( -\frac{2y^2}{(1+v^2)^2(\lfloor s \rfloor + 1)} \Big) \le \exp\Big( -\frac{2y^2}{(1+v^2)^2(s+1)} \Big).
\]
Similarly,
\[
P_z\Big( -\sum_{i=0}^{\lfloor s \rfloor} \xi_i \ge y \Big) \le \exp\Big( -\frac{2y^2}{(1+v^2)^2(s+1)} \Big).
\]
The inequality in (3.8) follows on combining the above two estimates. Denoting 2/(1+v^2)^2 by c_1, we have
\[
E_z\big(\exp\{2\alpha|M_s|\}\big) \le 2\int_0^\infty \exp\Big( -\frac{c_1 (\log y)^2}{4\alpha^2(s+1)} \Big) dy
\le 2\sqrt{\frac{4\pi\alpha^2(s+1)}{c_1}}\, \exp\Big( \frac{\alpha^2(s+1)}{c_1} \Big)
\le 2\exp\Big( \frac{(1+4\pi)\alpha^2(s+1)}{c_1} \Big).
\]
An application of Doob's inequality now yields that
\[
E_z\Big( \exp\Big\{2\alpha \sup_{0 \le s \le t}|M_s|\Big\} \Big) \le 4 E_z\big(\exp\{2\alpha|M_t|\}\big) \le 8 \exp\Big( \frac{(1+4\pi)\alpha^2(t+1)}{c_1} \Big).
\]
Combining this with (3.7), we have that
\[
E_z\Big[ \exp\Big\{2\alpha \sup_{0 \le s \le t}\Big|\int_0^s b^c(Y(u))\, du\Big|\Big\} \Big] \le 8\exp\Big( 4\alpha|g|_\infty + \frac{(1+4\pi)\alpha^2(t+1)}{c_1} \Big). \tag{3.9}
\]
For the second expectation on the right side of (3.6), using Assumption 2.3(i), we have by standard estimates (see e.g. Lemma 4.2 of [1])
\[
E_z\Big[ \exp\Big\{2\alpha \sup_{0 \le s \le t}\Big|\int_0^s \sigma(Z(u))\, dW(u)\Big|\Big\} \Big] \le 8\exp\big( 2\alpha^2 \kappa_3^2 K^2 t \big). \tag{3.10}
\]
Using (3.9) and (3.10), we now have that the left side of (3.5) is bounded above by
\[
8 \exp\Big\{ 2|g|_\infty \alpha + \frac{1+4\pi}{2c_1}\alpha^2 + \Big( \frac{1+4\pi}{2c_1} + \kappa_3^2 K^2 \Big)\alpha^2 t \Big\}.
\]
The result follows.

Remark 3.1. Note that, in the above proof, b^c only depends on y and satisfies the condition Σ_{j∈L} q^*_j b^c(j) = 0. We can then construct a solution of the Poisson equation for b^c corresponding to the Markov semigroup {P^Y_s}_{s≥0} and use the martingale property to get the desired result. However, if we consider a general measurable drift b(x, y) and define the "averaged" drift b^*(x) = Σ_{j∈L} q^*_j b(x, j), then b^c = b − b^* would depend on both x and y. In such a case, the above method becomes difficult to apply.

Using the fact that (X, Y) is an {F_t} Markov process and that W is an {F_t} Brownian motion (cf. Lemma 4.3 of [1]), we have the following lemma. The proof is omitted.

Lemma 3.4. Let z ∈ S and ∆ > 0 be fixed. For n ∈ N, let ν_n ≡ ν^{(n−1)∆}_∆, where ν^{(n−1)∆}_∆ is defined as in (3.4) with u replaced by (n − 1)∆. Then for any α ∈ (0, ∞) and m, n ∈ N with m ≤ n,
\[
E_z\Big( \exp\Big\{\alpha \sum_{i=m}^{n} \nu_i\Big\} \Big) \le \big[ 8\exp\{\Theta_3 \alpha(1 + \alpha + \alpha\Delta)\} \big]^{n-m+1},
\]
where Θ_3 is as in Lemma 3.3.

Given a compact set C ⊂ S, let
\[
\tau_C \doteq \inf\{t \ge 0 : Z(t) \in C\}. \tag{3.11}
\]
For M > 0, let B_M ≐ {(x, y) ∈ S : T(x) ≤ M} and C_M ≐ {(x, y) ∈ S : |x| ≤ M}.

Theorem 3.1. There exist ∆, a_0 ∈ (0, ∞) and ς ∈ (0, 1) such that for any z = (x, y) ∈ S and t ∈ (0, ∞),
\[
P_z(\tau_{B_\Delta} > t) \le \exp\{\varsigma T(x) + (a_0 - \varsigma)\Delta - a_0 t\}. \tag{3.12}
\]
In particular, for every M ∈ (0, ∞) and a < a_0,
\[
\sup_{z \in C_M} E_z\big( \exp\{a\tau_{B_\Delta}\} \big) < \infty.
\]

Proof. The proof is similar to that of Theorem 4.1 of [1], so only a sketch is provided. Fix z = (x, y) ∈ S. Recall the set A from Assumption 2.5. Choose ∆ > 0 large enough so that A × L ⊂ B_∆. Additional restrictions on ∆ will be imposed later in the proof. Let
\[
\Omega_n \doteq \{\omega : \tau_{B_\Delta} > n\Delta\} = \Big\{\omega : \inf_{0 \le s \le n\Delta} T(X(s, \omega)) > \Delta\Big\}.
\]
Then for z ∈ S, P_z(Ω_n) ≤ P_z(T(X(n∆)) > ∆). By Lemma 3.2 we have, for ω ∈ Ω_n,
\[
T(X(n\Delta)) \le T(x) - n\Delta + \kappa_1 \Theta_1 \sum_{j=1}^{n} \nu_j,
\]
where νj is as in Lemma 3.4. Using Lemma 3.4 and a calculation similar to that in the proof of Theorem 4.1 of [1] we now have, for any ς ∈ (0, 1), Pz (Ωn ) ≤ exp{ς(T (x) − ∆)} exp n Θ3 ςκ1 Θ1 (1 + ςκ1 Θ1 ) + log 8 + Θ3 ς 2 κ21 Θ21 ∆ − ς∆ . Take ς = (2Θ3 κ21 Θ21 + 1)−1 . Choose ∆ sufficiently large so that, in addition to the property A × L ⊂ B∆ , we have ∆−1 [Θ3 ςκ1 Θ1 (1 + ςκ1 Θ1 ) + log 8] < ς/2. Then ς . ∆−1 Θ3 ςκ1 Θ1 (1 + ςκ1 Θ1 ) + log 8 + Θ3 ς 2 κ21 Θ21 ∆ − ς∆ < Θ3 ς 2 κ21 Θ21 − = −a0 < 0, 2 and Pz (Ωn ) ≤ exp{ς(T (x) − ∆)} exp{−a0 n∆}. The proof of (3.12) now follows from the above estimate, exactly as in the proof of Theorem 4.1 of [1]. Second part of the theorem is an immediate consequence of (3.12). The lemma below gives the tightness of the family {Pz ◦ Z(t)−1 : z ∈ CM , t ≥ 0} for any M > 0. The proof is similar to Lemma 4.4 of [1]. A sketch is given in Appendix. Lemma 3.5. There exists κ ∈ (0, ∞) such that for all M > 0, sup sup Ez (exp{κ|X(t)|}) < ∞. z∈CM t≥0
The following irreducibility property is used in showing uniqueness of the invariant measure. j,t For j ∈ L, z ∈ S, t > 0, define mj,t z ∈ P(G) as mz (E) = Pz (Z(t) ∈ E × {j}), E ∈ B(G). Lemma 3.6. For every j ∈ L, z ∈ S and t > 0, mj,t z is mutually absolutely continuous with respect to the Lebesgue measure λ on G. Proof: Without loss of generality we can assume that on the filtered probability space (Ω, F, {Ft }t≥0 ), introduced in Theorem 2.1 we have, for each z = (x, j) ∈ G × L, probability measures Pjx under which (i) - (iv) of Theorem 2.1 hold, with (2.3) replaced by Z t Z t X(t) = x + b(X(s), j)ds + σ(X(s), j)dW (s) + k(t), a.s.. 0
0
As argued in the proof of Lemma 5.7 of [5], for all (x, j) ∈ S, t > 0, Pjx ◦ X(t)−1 is mutually absolutely continuous to λ.
(3.13)
Fix z = (x, i) ∈ S. Denote by {τk }k∈N0 the sequence of transition times of the pure jump process Y , namely, τ0 = 0, τk+1 = inf{t > τk : Y (t) 6= Y (t−)}, k ∈ N0 . Then Pz a.s., τk is strictly increasing to ∞. Also, the law of τk+1 − τk , conditioned on PFτk (under Pz ) has density ϕYτk where for i ∈ L, ϕi is the Exponential density with rate j:j6=i Qij . For k ≥ 0, let mzk ∈ P([0, ∞) × G × L) be the probability law of (τk , X(τk ), Y (τk )). Also for j ∈ L, define sub-probability measures mz,j k on [0, ∞) × G by the relation z mz,j k (E) = mk (E × {j}), E ∈ B([0, ∞) × G).
Then, for A ∈ B(G), j ∈ L, t > 0, ∞ X
Pz (X(t) ∈ A, Y (t) = j) =
Pz (X(t) ∈ A, Y (t) = j, τk ≤ t < τk+1 )
k=0 ∞ Z X
=
k=0
Z
[0,t)×G
∞
t−u
ϕj (v)dv Pjx˜ (X(t − u) ∈ A)mz,j x). k (dud˜
From (3.13) and the above display, if λ(A) = 0, then Pz (X(t) ∈ A, Y (t) = j) = 0. Conversely suppose that λ(A) > 0. From Assumption 2.4, for some k0 ∈ N0 , P(Y (τk0 )|Y (0) = i) > 0 and therefore mz,j k0 ([0, t] × G) is nonzero for every t > 0. Finally, from the above display and using (3.13) once again we obtain Z ∞ Z Pz (X(t) ∈ A, Y (t) = j) ≥ ϕj (v)dv Pjx˜ (X(t − u) ∈ A)mz,j x) > 0. k0 (dud˜ [0,t)×G
t−u
The result follows. Proof of Theorem 2.2. Let S be a compact subset of G with a positive Lebesgue measure. For the proof of positive recurrence, it suffices to show that for every M > 0 and j ∈ L, sup Ez (τ (j) ) < ∞,
(3.14)
z∈CM
where τ (j) = inf{t ≥ 0 : Z(t) ∈ S (j) } and S (j) = S × {j}. Let ∆ be as in Theorem 3.1. From Assumptions 2.3 and 2.4, it follows that p1 = inf Pz (Z(1) ∈ S (j) ) > 0. z∈B∆
Since the family {Pz ◦ Z(1)−1 , z ∈ B∆ } is tight, there exists c1 ∈ (0, ∞) such that inf Pz (Z(1) ∈ S (j) , |X|∗1 ≤ c1 ) ≥
z∈B∆
p1 . 2
Arguing as in the proof of Theorem 2.2 of [1], we now have that for all M > c1 , ! 2 (j) sup Ez (τ ) ≤ sup Ez (τB∆ ) + 1 + sup Ez (τB∆ ) . p1 z∈CM z∈CM z∈CM This, in view of Theorem 3.1, proves (3.14) and positive recurrence of Z follows. Existence of a unique invariant probability measure is an immediate consequence of Lemma 3.5, the Feller property of Z, and the irreducibility property in Lemma 3.6.
3.2. Transience
In this section, we prove Theorem 2.3. We will assume through this section that Assumption 2.1-2.4 hold and that b1 (z) = 0, z ∈ S. Let ι be the identity map from [0, ∞) to [0, ∞). The following lemma is taken from [3] (cf. Lemma 3.1 and Theorem 3.10 therein).
˜ Lemma 3.7. [3] For each ζ ∈ RK , there is a ζ˜ ∈ RK + such that Γ(ζι)(t) = ζt for all t ≥ 0. Furthermore, ζ˜ 6= 0 if and only if ζ 6∈ C. Proof of Theorem 2.3: Part (i) is immediate from Theorem 2.2. Consider now Part (ii). Since b1 ≡ 0, the process Z = (X, Y ) satisfies, for every z = (x, y) ∈ S, Pz -a.s., Z · Z · σ(Z(s))dW (s) (t), t ≥ 0. b2 (Y (s))ds + X(t) = Γ x + 0
0
An application of triangle inequality shows that Z · Z · |X(t)| 1 σ(Z(s))dW (s) (t) b2 (Y (s))ds + = Γ x + t t 0 Z ·0 Z · 1 1 ∗ ∗ ≥ |Γ (b2 ι) (t)| − Γ x + b2 (Y (s))ds + σ(Z(s))dW (s) (t) − Γ(b2 ι)(t) . t t 0 0 Noting that bc (y) = b2 (y) − b∗2 , we have from the Lipschitz property of Γ and Lemma 3.7, Z s Z s κ1 κ1 |x| κ1 |X(t)| c ˜ σ(Z(u))dW (u) − b (Y (u))du − ≥ |β| − sup sup , (3.15) t t 0≤s≤t 0 t 0≤s≤t 0 t Rt . ˜ i (t) = where β˜ = Γ(b∗2 ι)(1). Since b∗2 6∈ C, from Lemma 3.7, β˜ 6= 0. Let W hei , 0 σ(Z(s))dW (s)i, K where {ei }K i=1 is theR standard basis in R . Then the quadratic variation of the martingale ˜ i it ≤ κ2 t. Using ˜ i equals hW ˜ i it = t e0 σ(Z(s))σ(Z(s))0 ei ds. By Assumption 2.3, κ4 t ≤ hW W 3 0 i a standard time change argument and the law of iterated logarithm for a scaler Brownian motion, we now have for some c1 ∈ (0, ∞), Z s K X 1 1 ˜ i (s)| = 0, Pz -a.s. lim sup sup σ(Z(u))dW (u) ≤ c1 lim sup sup |W t→∞ t 0≤s≤t 0 t→∞ t 0≤s≤t i=1
Next consider the martingale {Mt : t ≥ 0} defined by (3.7). From (3.8), there exists c2 ∈ (0, ∞) such that for all t, y ∈ (0, ∞) and z ∈ S, c2 y 2 Pz (|Mt | ≥ y) ≤ 2 exp − . t+1 Consequently, using Markov’s inequality and Lp maximal inequality, for any > 0, " 4 # 1 1 44 Pz sup |Ms | > ≤ 4 4 Ez sup |Ms | ≤ 4 4 4 Ez |Mt |4 t 0≤s≤t t 3 t 0≤s≤t Z √ ∞ c2 y 45 (t + 1)2 44 ≤2 4 4 4 exp − dy = 4 2 4 4 . 3 t 0 t+1 3 c2 t An application of Borel-Cantelli lemma now shows that, Pz -a.s. 1 lim sup sup |M (s)| = 0. t 0≤s≤t t→∞
Combining this with (3.7), we have Z s 1 1 2|g|∞ c lim sup b (Y (u))du ≤ lim sup = 0. sup sup |M (s)| + t 0≤s≤t 0 t 0≤s≤t t t→∞ t→∞ Recalling that β˜ 6= 0, we now have from (3.15) that, lim inf t→∞
|X(t)| > 0, Pz -a.s. t
(3.16)
Finally, we argue that, for some z ∈ S, Pz (τC1 < ∞) < 1,
(3.17)
where τC1 is as in (3.11) with C1 defined as below (3.11) with M replaced by 1. Suppose that (3.17) is false. Then by a straightforward application of the strong Markov property, we have that Pz (Z(tn ) ∈ C1 for some sequence {tn }, s.t. tn ↑ ∞) = 1. However, this contradicts (3.16) and the result follows.
3.3. Geometric ergodicity
In this section we prove Theorem 2.4. Assumption 2.1-2.5 will be assumed throughout this section. The following drift inequality is at the heart of Theorem 2.4. . Lemma 3.8. For some $ > 0, the $-skeleton chain {Zˇn = Z(n$) : n ∈ N} satisfies the following drift inequality: There exist α0 , β0 ∈ (0, 1), γ0 ∈ (0, ∞) and a compact set S ⊂ G such that Ez V (Zˇ1 ) ≤ (1 − β0 )V (z) + γ0 1S×L (z), z ∈ S, (3.18) . where, for z = (x, y) ∈ S, V (z) = eα0 T (x) . Proof: Recall τA×L = inf{t ≥ 0 : Z(t) ∈ A × L}. From Lemma 3.2, for α0 , $ ∈ (0, ∞), 0 Ez V (Zˇ1 )1{τA×L >$} ≤ Ez exp α0 (T (x) − $)+ + α0 κ1 Θ1 ν$ 1{τA×L >$} , (3.19) 0 is defined by (3.4) with ∆ and u replaced by $ and 0, respectively. Recall B = where ν$ $ {z = (x, y) ∈ S : T (x) ≤ $}. Thus for z ∈ (B$ )c , by Lemma 3.3, Ez V (Zˇ1 )1{τA×L >$} 0 ≤ Ez exp {α0 κ1 Θ1 ν$ − α0 $}1{τA×L >$} V (z) ≤ 8 exp{c1 α0 + c2 α02 + c2 α02 $ − α0 $},
where c1 = κ1 Θ3 Θ1 and c2 = Θ3 κ21 Θ21 . Now fix α0 small enough and $ large enough so that . 8 exp{c1 α0 + c2 α02 + c2 α02 $ − α0 $} = (1 − 2β0 ) < 1.
Then for z ∈ (B$ )c , Ez V (Zˇ1 )1{τA×L >$} ≤ (1 − 2β0 )V (z). From the strong Markov property of Z, we see that for all z ∈ S, Ez V (Zˇ1 )1{τA×L ≤$} = Ez Ez V (Zˇ1 )|FτA×L 1{τA×L ≤$} = Ez EZ(τA×L ) [V (Z($ − τA×L ))] 1{τA×L ≤$} . Therefore, by Assumptions 2.1, 2.2 and 2.3, there exists c1 ∈ (0, ∞) such that for all z ∈ S, ˇ (3.20) Ez V (Z1 )1{τA×L ≤$} ≤ sup Ez˜ sup V (Z(t)) ≤ c1 . z˜∈A×L
0≤t≤$
Choose M > $ such that for all z ∈ (BM )c , β0 V (z) ≥ c1 . Then on (BM )c , Ez V (Zˇ1 ) ≤ (1 − β0 )V (z). For z ∈ BM , (T (x) − $)+ ≤ M and from (3.19) and (3.20), 0 Ez V (Zˇ1 ) ≤ Ez exp α0 (T (x) − $)+ + α0 κ1 Θ1 ν$ + c1 . 2 2 ≤ 8 exp M α0 + c1 α0 + c2 α0 + c2 α0 $ + c1 = γ0 . The lemma follows on setting S × L = BM . R For a signed measure µ on (S, B(S)) and a measurable function f : S → R, let µ(f ) = S f (z)µ(dz) if f is. |µ| integrable. If f : S → (0, ∞) is a |µ|-integrable map, we define the f norm of µ as kµkf = sup|g|≤f |µ(g)|. We set kµkf = ∞ if f is not |µ|-integrable. As an immediate consequence of Lemma 3.8 and Theorems 14.0.1, 16.0.1 in [19], we have the following corollary. Denote by {P n }n∈N the transition kernel of the chain {Zˇn : n ∈ N}, namely, for z ∈ S and B ∈ B(S), P n (z, B) = Pz (Zˇn ∈ B). From Lemma 3.8, it follows that P n (z, V ) < ∞, ∀n ∈ N and z ∈ S. Corollary 3.1. The invariant measure π satisfies π(V ) < ∞. Furthermore, the $-skeleton chain {Zˇn } is V -uniformly ergodic, i.e., there exist ρ0 ∈ (0, 1) and B0 ∈ (0, ∞) such that for all z ∈ S, kP n (z, ·) − πkV ≤ B0 ρn0 V (z). (3.21) Proof of Theorem 2.4: (i) This is immediate from Corollary 3.1 and Lemma 3.1(ii), on taking β1 ≤ Θ2 α0 , where Θ2 and α0 are as in Lemma 3.1(ii) and Lemma 3.8, respectively. (ii) (a). For a map ν from S to the space of signed measures on (S, B(S)), let kν(z)kV . . kνkV = sup z∈S V (z) Recall that PZt denote the transition kernel of Z. Denoting the signed measure PZt (z, ·) − π(·) as P˜ t (z), we have from Corollary 3.1, kP˜ n$ kV ≤ B0 ρn0 . Fix t ∈ (0, ∞) and let n0 ∈ N be such that t ∈ [n0 $, (n0 + 1)$). It is easy to check that Ez [V (Z(r))] + π(V ) . V (z) z∈S 0≤r≤$
kP˜ t kV ≤ kP˜ n0 $ kV kP˜ t−n0 $ kV ≤ B0 ρn0 sup sup
˜0 ≡ B ˜0 ($) ∈ (0, ∞), From Assumptions 2.1, 2.2, and 2.3, we have for some B ˜0 V (z). sup Ez [V (Z(r))] ≤ B
(3.22)
0≤r≤$
. 1/$ . ˜= ˜0 + π(V )). Then kP˜ t kV ≤ B ˜ ρ˜t . This proves (a). Let ρ˜ = ρ0 and B B0 (B (b) & (c). By (a), for all z ∈ S, ftc (z) = Ez (St − tπ(f )) is well defined. We observe that for 0 ≤ t < T < ∞, |ftc (z)
−
fTc (z)|
Z ≤
T
˜ − π(f )| ds ≤ V (z)B
|PZs (z, f )
Z
T
ρ˜s ds ≤ B1 V (z) ρ˜t − ρ˜T .
(3.23)
t
t
˜ log ρ˜. Noting that |f c (z) − f c (z)| → 0 as t, T → ∞, limt→∞ f c (z) where B1 = −B/ t t T exists. In particular, denoting the limit by fˆ(z) and letting t = 0 and T → ∞ in (3.23), we have for all z ∈ S, |fˆ(z)| ≤ V (z)B1 . (3.24) Then fixing t and letting T → ∞, we have from (3.23), that c ft (z) − fˆ(z) ≤ V (z)B1 ρ˜t .
(3.25)
This proves (b)&(c). (d). From (3.24), for t > 0, Ez (|fˆ(Z(t))|) ≤ B1 Ez [V (Z(t))]. Also Z
t
t
Z Ez (|π(f ) − f (Z(s))|)ds ≤
Z Ez (|f (Z(s))|)ds + tπ(f ) ≤
0
0
t
Ez [V (Z(s))]ds + tπ(f ). 0
Similar to (3.22), we have for all t ≥ 0, sup0≤s≤t Ez [V (Z(s))] < ∞. Consequently, for all t ∈ [0, ∞), Z t ˆ Ez |f (Z(t))| + |π(f ) − f (Z(s))|ds < ∞. 0
Also note that Ez [fˆ(Z(t))] =
Z
∞
Ez (PZs (Z(t), f )
=
[PZs+t (z, f ) − π(f )]ds
− π(f ))ds =
0
Z
∞
Z 0
∞
[PZs (z, f )
− π(f )]ds = fˆ(z) −
t
= fˆ(z) +
Z
t
[PZs (z, f ) − π(f )]ds
0
Z
t
Ez [π(f ) − f (Z(s))]ds. 0
This proves (d). (iii) The proof is an immediate consequence of [14, Theorem 4.4] and [10, Theorem 5.1(f)].
4. Markov modulated SRBM
This section is devoted to proofs of Theorems 2.6 and 2.7. We use the notation introduced in Section 2.2. In particular, through out this section, G = RK + , N = K, and for i = 1, 2, . . . , K, Gi = {x ∈ RK : hx, ei i ≥ 0}. Also, R = (d1 | . . . |dK ) and σ is a K × K positive definite matrix. The proof of Theorem 2.6 crucially makes use of the Lyapunov function F , which was constructed in [13]. Recall the DW-stability condition introduced in Definition 2.5. Theorem 4.1. [13] Suppose that b∗2 satisfies the DW-stability condition. Then there exists a continuous map F : RK → R such that the following hold. (i) F ∈ C 2 (RK \ {0}). (ii) Given ∈ (0, ∞), there exists an M ∈ (0, ∞) such that, for all x ˜ ∈ RK and |˜ x| ≥ M , 2 |∇ F (˜ x)| ≤ . (iii) There exists c ∈ (0, ∞) such that (a) for all x ˜ ∈ G \ {0}, h∇F (˜ x), b∗2 i ≤ −c, (b) for all x ˜ ∈ ∂G \ {0} and d ∈ d(˜ x), h∇F (˜ x), di ≤ −c. (iv) F is radially homogeneous, i.e., F (ζ x ˜) = ζF (˜ x) for all ζ ≥ 0 and x ˜ ∈ RK . (v) ∇F is uniformly bounded on G \ {0}. We denote . Λ = sup |∇F (˜ x)| < ∞. x ˜∈G\{0}
(vi) There exist a1 , a2 ∈ (0, ∞) such that, for all x ˜ ∈ G, a1 |˜ x| ≤ F (˜ x) ≤ a2 |˜ x|. With an abuse of notation, we set ∇F (0) = 0 and ∇2 F (0) = 0. Fix z = (x, y) ∈ S and recall the martingale {Mt : t ≥ 0} introduced in (3.7). Denote . Υ(t) = X(t) − g(Y (t)) + g(Y (0)), t ≥ 0. (4.1) Then from (2.4) and (3.7), for all t ≥ 0, Pz -a.s., Υ(t) = x + b∗2 t − Mt + σW (t) + RU (t). By Ito’s formula, we have that Z t
1 2 0 ∗ F (Υ(t)) = F (x) + tr ∇ F (Υ(s))σσ + h∇F (Υ(s)), b2 i ds 2 0 Z t Z t + h∇F (Υ(s)), σdW (s)i − h∇F (Υ(s−)), dMs )i +
0 K XZ t i=1
0+
(4.2)
h∇F (Υ(s)), di idUi (s) + Rt ,
0
where Rt =
X
[F (Υ(s)) − F (Υ(s−)) − h∇F (Υ(s−)), g(Y (s)) − g(Y (s−))i].
0<s≤t
(4.3)
Proof of Theorem 2.6: Given > 0, let r > 2|g|∞ large enough such that |∇2 F (˜ x)| ≤ whenever x ˜ ∈ RK and |˜ x| ≥ r − 2|g|∞ . Define τ˜r = inf{t ≥ 0 : |X(t)| ≤ r}. Fix z ≡ (x, y) ∈ S. We first assume |x| > r. Using the Lagrange remainder form of Taylor’s expansion and Theorem 4.1(ii), we have Rt∧˜τr =
1 2
≤ 2
X
[g(Y (s)) − g(Y (s−))]0 ∇2 F (ς1 Υ(s) + (1 − ς1 )Υ(s−))[g(Y (s)) − g(Y (s))]
0<s≤t∧˜ τr
X
|g(Y (s)) − g(Y (s−))|2 ,
0<s≤t∧˜ τr
where ς1 ≡ ς1 (s, ω) ∈ [0, 1]. Taking expectation, we have for some c1 ∈ (0, ∞), Ez (Rt∧˜τr ) ≤ c1 Ez (t ∧ τ˜r ), ∀t ≥ 0.
(4.4)
Next by Theorem 4.1(ii) and (iii)(a) and for 0 ≤ s ≤ t ∧ τ˜r , there exists ς2 ≡ ς2 (s, ω) ∈ (0, 1) such that h∇F (Υ(s)), b∗2 i = h∇F (X(s)), b∗2 i + h∇F (Υ(s)) − ∇F (X(s)), b∗2 i ≤ −c + h∇2 F (ς2 Υ(s) + (1 − ς2 )X(s))[g(Y (s)) − g(Y (0))], b∗2 i ≤ −c +
(4.5)
2|g|∞ |b∗2 |.
Similarly, by Theorem 4.1(ii) and (iii)(b) and for 0 ≤ s ≤ t ∧ τ˜r and ς3 ≡ ς3 (s, ω) ∈ (0, 1), whenever X(s) ∈ Fi , h∇F (Υ(s)), di i = h∇F (X(s)), di i + h∇F (Υ(s)) − ∇F (X(s)), di i ≤ h∇2 F (ς3 Υ(s) + (1 − ς3 )X(s))[g(Y (s)) − g(Y (0))], di i
(4.6)
≤ 2|g|∞ . From Theorem 4.1(ii) and (4.5), there exists c2 ∈ (0, ∞), such that for all t > 0 1 2 tr ∇ F (Υ(t ∧ τ˜r ))σσ 0 + h∇F (Υ(t ∧ τ˜r )), b∗2 i ≤ c2 − c. 2
(4.7)
By (4.6), we have K Z X i=1
t∧˜ τr
h∇F (Υ(s)), di idUi (s) ≤ 2|g|∞
0
K X
Ui (t ∧ τ˜r ).
(4.8)
i=1
By Theorem 3.2 in [8], there exists h ∈ Cb2 (G) such that for i ∈ K and x ˜ ∈ Fi , h∇h(˜ x), di i ≥ 1. Applying Ito’s formula, Z t∧˜τr 1 Ez [h(X(t ∧ τ˜r ))] = h(˜ x) + E z h∇h(X(s)), b(Y (s))i + tr[∇2 h(X(s))σσ 0 ] ds 2 0 Z K t∧˜ τr X + Ez h∇h(X(s)), di idUi (s) . i=1
0
Thus we have, for some c_3 ∈ (0, ∞),

Σ_{i=1}^K E_z(U_i(t∧τ̃_r)) ≤ Σ_{i=1}^K E_z ∫_0^{t∧τ̃_r} ⟨∇h(X(s)), d_i⟩ dU_i(s)
  ≤ 2|h|_∞ + E_z ∫_0^{t∧τ̃_r} [ |⟨∇h(X(s)), b(Y(s))⟩| + (1/2) tr(∇²h(X(s))σσ') ] ds
  ≤ c_3 (1 + E_z(t ∧ τ̃_r)). (4.9)

Using (4.8) and (4.9), we now have

E_z( Σ_{i=1}^K ∫_0^{t∧τ̃_r} ⟨∇F(Υ(s)), d_i⟩ dU_i(s) ) ≤ 2ε|g|_∞ c_3 (1 + E_z(t ∧ τ̃_r)). (4.10)
We note that the constants c_1, c_2, and c_3 depend only on bounds for σ, g, h, and b_2^*. In particular, they are independent of ε and t. Combining (4.4), (4.7), (4.10) and applying (4.2), we have

E_z[F(Υ(t∧τ̃_r))] − F(x) ≤ 2ε|g|_∞ c_3 + [ (2|g|_∞ c_3 + c_1 + c_2)ε − c ] E_z(t ∧ τ̃_r). (4.11)
Again applying the Lagrange remainder form of Taylor's expansion and Theorem 4.1(ii), (v) and (vi), there exists ς_4 ∈ (0, 1) such that, for 0 ≤ s ≤ t∧τ̃_r,

F(Υ(s)) = F(X(s)) + [g(Y(s)) − g(Y(0))]'∇F(X(s))
  + (1/2)[g(Y(s)) − g(Y(0))]'∇²F(ς_4Υ(s) + (1−ς_4)X(s))[g(Y(s)) − g(Y(0))]
  ≥ F(X(s)) − 2|g|_∞Λ − 2|g|_∞² ≥ −2|g|_∞Λ − 2|g|_∞².

Choosing

ε = c / ( 2(2|g|_∞ c_3 + c_1 + c_2) ),

we have

E_z(t ∧ τ̃_r) ≤ (2/c) ( F(x) + 2|g|_∞ c_3 + 2|g|_∞Λ ) < ∞.

Letting t → ∞, we have E_z(τ̃_r) < ∞. If |x| ≤ r, E_z(τ̃_r) < ∞ automatically. Therefore, E_z(τ̃_r) < ∞ for all z ∈ S. The rest of the argument is as in the proof of Theorem 2.6 in [13]. Details are left to the reader.

We next establish geometric ergodicity for Z. We begin with some preliminary estimates. Arguments similar to those used in Lemmas 3.3 and 3.4 yield the following result; the proof is provided in the Appendix.

Lemma 4.1. Let z ∈ S and ∆ > 0 be fixed. For n ∈ N, let ν̃_n be defined as follows:

ν̃_n := sup_{(n−1)∆ ≤ t ≤ n∆} | ∫_{(n−1)∆}^t ⟨∇F(Υ(s)), σ dW(s)⟩ − ∫_{(n−1)∆}^t ⟨∇F(Υ(s−)), dM_s⟩ |. (4.12)
Then there exists Θ_4 ∈ (0, ∞) such that, for any z ∈ S, α ∈ (0, ∞) and m, n ∈ N with m ≤ n,

E_z( exp{ α Σ_{i=m}^n ν̃_i } ) ≤ ( 8 exp{ Θ_4 α²(1+∆) } )^{n−m+1}.
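Lemma 4.1 is a Gaussian-type exponential moment bound for running suprema of the stochastic-integral terms. As a rough and purely illustrative sanity check (not part of the argument), the following Monte Carlo sketch estimates E[exp{α sup_{t≤∆}|∫_0^t H(s) dW(s)|}] for a one-dimensional Brownian integral with a constant bounded integrand and compares it with the crude reflection-principle bound 4 exp{α²b²∆/2}; the step counts, sample sizes and values of α are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_exp_moment(alpha, delta, bound=1.0, n_steps=1000, n_paths=5000):
    """Monte Carlo estimate of E[exp(alpha * sup_{t<=delta} |I(t)|)], where
    I(t) = int_0^t H(s) dW(s) with |H| <= bound (here simply H = bound)."""
    dt = delta / n_steps
    dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
    I = np.cumsum(bound * dW, axis=1)          # Euler approximation of the integral
    sup_abs = np.abs(I).max(axis=1)            # running supremum over [0, delta]
    return np.exp(alpha * sup_abs).mean()

delta = 1.0
for alpha in (0.5, 1.0, 1.5):
    est = mc_exp_moment(alpha, delta)
    crude = 4 * np.exp(alpha**2 * delta / 2)   # reflection-principle type bound
    print(f"alpha = {alpha}: estimate {est:.2f} <= crude bound {crude:.2f}")
```

The quadratic-in-α growth of the exponent is what makes the choice of a small α and a large ∆ in the proof of Lemma 4.2 below possible.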
For r ∈ (0, ∞), define

C(r) := {x̃ ∈ G : F(x̃) ≤ r},   τ_r := inf{t ≥ 0 : X(t) ∈ C(r)}. (4.13)
Lemma 4.2. There exist r_0, β, γ_1, γ_2 ∈ (0, ∞) such that, for all z = (x, y) ∈ S,

E_z(exp{βτ}) ≤ γ_1 exp{γ_2|x|},

where τ = τ_{r_0} and τ_{r_0} is defined as in (4.13) with r replaced by r_0.

Proof: By Theorem 4.1(ii), given ε > 0, there exists an r > 2|g|_∞ such that |∇²F(x̃)| ≤ ε whenever x̃ ∈ R^K and |x̃| ≥ r − 2|g|_∞. By Theorem 4.1(vi), we can choose r_0 such that C(r_0) ⊃ {x̃ ∈ G : |x̃| ≤ r}. Let α ∈ (0, ∞) and z = (x, y) ∈ S with |x| > r. Similar to the arguments for (4.9) and using Lemma 4.2 in [1], there exists c_1 ∈ (0, ∞) such that for all t ≥ 0,

E_z( exp{ α Σ_{i=1}^K U_i(t) } ) ≤ E_z( exp{ α Σ_{i=1}^K ∫_0^t ⟨∇h(X(s)), d_i⟩ dU_i(s) } )
  ≤ E_z( exp{ α|h(X(t))| + α|h(x)| + α ∫_0^t ⟨∇h(X(s)), σ dW(s)⟩ + α ∫_0^t [ ⟨∇h(X(s)), b(Y(s))⟩ + (1/2) tr(∇²h(X(s))σσ') ] ds } )
  ≤ exp{ c_1 α(1 + t + αt) }. (4.14)
Fix ∆ ∈ (0, ∞) and n ∈ N. Using (4.2) and (4.3), we have

F(Υ(n∆)) = F(Υ((n−1)∆)) + ∫_{(n−1)∆}^{n∆} [ (1/2) tr(∇²F(Υ(s))σσ') + ⟨∇F(Υ(s)), b_2^*⟩ ] ds
  + ∫_{(n−1)∆}^{n∆} ⟨∇F(Υ(s)), σ dW(s)⟩ − ∫_{(n−1)∆}^{n∆} ⟨∇F(Υ(s−)), dM_s⟩
  + Σ_{i=1}^K ∫_{(n−1)∆}^{n∆} ⟨∇F(Υ(s)), d_i⟩ dU_i(s) + R_{n∆} − R_{(n−1)∆}.
For m ≥ n, define A_m := {ω ∈ Ω : inf_{0≤s≤m∆} F(X(s)) > r_0}. For ω ∈ A_m we have that |Υ(n∆)| > r − 2|g|_∞. Using arguments similar to those for (4.7), we have, for some c_2 ∈ (0, ∞), on A_m,

∫_{(n−1)∆}^{n∆} [ (1/2) tr(∇²F(Υ(s))σσ') + ⟨∇F(Υ(s)), b_2^*⟩ ] ds ≤ (c_2 ε − c)∆.
Therefore, on the set A_m and for n ≤ m,

F(Υ(n∆)) ≤ F(x) + (c_2 ε − c)n∆ + Σ_{i=1}^K ∫_0^{n∆} ⟨∇F(Υ(s)), d_i⟩ dU_i(s) + R_{n∆} + Σ_{i=1}^n ν̃_i,

where ν̃_n is as in (4.12). From (4.14) and (4.6), on A_m, for t ≤ m∆,

E_z( exp{ α Σ_{i=1}^K ∫_0^t ⟨∇F(Υ(s)), d_i⟩ dU_i(s) } ) ≤ exp{ 2ε|g|_∞ c_1 α(1 + t + αt) }.

Also note that on A_m, for some c_3 ∈ (0, ∞) (independent of α, t, ε, ∆, m),

E_z(exp{αR_t}) ≤ E_z( exp{ α Σ_{0<s≤t} |g(Y(s)) − g(Y(s−))|² } ) ≤ exp{c_3 α t}.
Noting that on A_m, F(Υ(n∆)) > r_0, we have

P_z(A_m) ≤ P_z( A_m; Σ_{i=1}^K ∫_0^{n∆} ⟨∇F(Υ(s)), d_i⟩ dU_i(s) + R_{n∆} + Σ_{i=1}^n ν̃_i > r_0 − (c_2 ε − c)n∆ − F(x) )
  ≤ E_z( 1_{A_m} exp{ α ( Σ_{i=1}^K ∫_0^{n∆} ⟨∇F(Υ(s)), d_i⟩ dU_i(s) + R_{n∆} + Σ_{i=1}^n ν̃_i ) } ) × exp{ −α[ r_0 − (c_2 ε − c)n∆ − F(x) ] }
  ≤ exp{ n∆ ( 2ε|g|_∞ c_1 α/(n∆) + (2ε|g|_∞ c_1 + c_3)α + 6ε|g|_∞ c_1 α² + 3Θ_4 α²/∆ + 3Θ_4 α² + (log 8)/(3∆) + c_2 εα − r_0 α/(n∆) − αc ) + αF(x) }.

Choosing ε and α small enough and ∆ large enough, we have

2ε|g|_∞ c_1 α/(n∆) + (2ε|g|_∞ c_1 + c_3)α + 6ε|g|_∞ c_1 α² + 3Θ_4 α²/∆ + 3Θ_4 α² + (log 8)/(3∆) + c_2 εα − r_0 α/(n∆) − αc ≡ −η < 0.
Then Pz (Am ) ≤ exp{−n∆η + αF (x)}. Let t ∈ (0, ∞) be arbitrary and let n0 ∈ N0 be such that t ∈ [n0 ∆, (n0 + 1)∆). Then Pz (τ > t) ≤ Pz (An0 ) ≤ exp{−(t − ∆)η + αF (x)}. The result follows. As an immediate consequence of the above lemma, we have the following. For θ0 ∈ (0, ∞) and a compact set S ⊂ G, define a stopping time τS (θ0 ) = inf{t ≥ θ0 : X(t) ∈ S}.
Lemma 4.3. Fix θ_0 ∈ (0, ∞) and let β, γ_2, r_0 be as in Lemma 4.2 and C ≡ C(r_0) be as in (4.13) with r replaced by r_0. Then there exists γ_3 ∈ (0, ∞) such that for z = (x, y) ∈ S,

E_z(exp{βτ_C(θ_0)}) ≤ γ_3 exp{γ_2|x|}.

Proof: An application of the strong Markov property yields

E_z(exp{βτ_C(θ_0)}) = exp{βθ_0} E_z( E_z( exp{β(τ_C(θ_0) − θ_0)} | F_{θ_0} ) ) = exp{βθ_0} E_z( E_{Z(θ_0)}( exp{βτ} ) ),

where τ is as in Lemma 4.2. Now by Lemma 4.2,

E_z( E_{Z(θ_0)}( exp{βτ} ) ) ≤ γ_1 E_z( exp{γ_2|X(θ_0)|} ). (4.15)
Using the oscillation estimate from [23] (also see [2]), we have the following result: there exists c_1 ∈ (0, ∞) such that for all z = (x, y) ∈ S and 0 ≤ t_1 < t_2 < ∞, P_z-a.s.,

sup_{t_1≤s≤t≤t_2} |X(t) − X(s)| ≤ c_1 ( sup_{t_1≤s≤t≤t_2} |W(t) − W(s)| + (t_2 − t_1) ). (4.16)

Combining the estimates in (4.15) and (4.16), we have

E_z( E_{Z(θ_0)}(exp{βτ}) ) ≤ γ_1 exp{γ_2(|x| + c_1θ_0)} E_z( exp{ c_1γ_2 sup_{0≤s≤θ_0} |W(s)| } ).
The result follows via an application of Doob's inequality.

A key step in the proof of geometric ergodicity is the following result from [10]. For θ_0, β, and C as in Lemma 4.3, let

V_0(z) := ( E_z(exp{βτ_C(θ_0)}) − 1 ) / β + 1.

Define, for θ > 0,

V_θ(z) := R_θ V_0(z) = ∫_0^∞ E_z[V_0(Z(t))] θ exp{−θt} dt.

By Lemma 4.3 (a), Theorem 6.2 (b), and Theorem 5.1 (a) in [10], we have the following result.

Theorem 4.2. [10] For all θ > 0, AV_θ = θ(V_θ − V_0), where A is the extended generator of Z introduced below Theorem 2.3. Furthermore, there exist κ_0, h_0 ∈ (0, ∞) such that for all z ∈ S,

AV_θ(z) ≤ −κ_0 V_θ(z) + h_0 1_{C×L}(z).

The following lemma is proved exactly as Lemma 4.8 of [5]. The proof is omitted.
Lemma 4.4. There exist a_1, a_2, A_1, A_2 ∈ (0, ∞) such that for all z = (x, y) ∈ S,

a_1 e^{a_2|x|} ≤ V_0(z) ≤ A_1 e^{A_2|x|}. (4.17)

Furthermore, there exists a constant θ̃ ∈ (0, ∞) such that for every θ ∈ (θ̃, ∞) there are ã_1, ã_2, Ã_1, Ã_2 ∈ (0, ∞) such that for z = (x, y) ∈ S,

ã_1 e^{ã_2|x|} ≤ V_θ(z) ≤ Ã_1 e^{Ã_2|x|}. (4.18)
We will fix θ ∈ (θ̃, ∞) and denote V ≡ V_θ. Then the following corollary is an immediate consequence of Theorem 4.2.

Corollary 4.1. V is in D(A) and AV = θ(V − V_0). Furthermore, there exist κ_0, h_0 ∈ (0, ∞) such that for all z ∈ S, AV(z) ≤ −κ_0 V(z) + h_0 1_{C×L}(z).

Corollary 4.2. Let π be the unique invariant measure of Z. Then π(V) < ∞.

Proof: By Corollary 4.1 and Theorem 5.1(d) of [10], for all s ≥ 0 there exist ς_0(s) ∈ (0, 1) and h_1 > 0 such that for all z ∈ S, E_z[V(Z(s))] ≤ ς_0(s)V(z) + h_1 1_{C×L}(z). Integrating both sides with respect to π, we have π(V) ≤ h_1 π(C × L)/(1 − ς_0(s)) < ∞.

As a consequence of the above corollaries and Theorem 5.2(c) of [10], we have the following geometric ergodicity result.

Theorem 4.3. The Markov process Z ≡ (X, Y) is V-uniformly ergodic, i.e., there exist constants B_0 ∈ (0, ∞), ρ_0 ∈ (0, 1) such that for all t ∈ (0, ∞) and z ∈ S,

‖P^t(z, ·) − π‖_V ≤ B_0 ρ_0^t V(z).

Proof of Theorem 2.7: Part (i) of the theorem is immediate from Corollary 4.2 and Lemma 4.4. Part (ii)(a) is a consequence of Theorem 4.3 and Lemma 4.4. The rest of the proof is the same as that for Theorem 2.4.
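Before turning to the queueing networks, here is a small self-contained simulation sketch, not taken from the paper, of the simplest instance of the objects above: a one-dimensional constrained diffusion on [0, ∞) with normal reflection, whose drift b_2(Y) is modulated by a two-state chain with negative averaged drift, simulated by an Euler scheme combined with the explicit one-dimensional Skorohod map Γ(ψ)(t) = ψ(t) − min(0, inf_{s≤t} ψ(s)). Comparing a bounded functional of X(T) started from two different initial states gives a crude illustration of the exponential loss of memory asserted by Theorem 4.3; the rates, drifts, step size and horizon are all made-up values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-state environment: switching rates and per-state drifts.
rates = np.array([1.0, 2.0])          # rate of leaving state 0 / state 1
b2 = np.array([1.0, -4.0])            # drift of X in state 0 / state 1
q = np.array([rates[1], rates[0]]) / rates.sum()
print("averaged drift b2^* =", q @ b2)         # negative: stable averaged drift

sigma, dt, T = 1.0, 1e-2, 20.0
n_steps = int(T / dt)

def simulate(x0, n_paths):
    """Euler scheme for X = Gamma(x0 + int b2(Y) ds + sigma W) on [0, infinity),
    applying the 1-D Skorohod map as psi(t) - min(0, inf_{s<=t} psi(s))."""
    psi = np.full(n_paths, x0, dtype=float)
    run_min = np.minimum(psi, 0.0)
    y = np.zeros(n_paths, dtype=int)
    for _ in range(n_steps):
        switch = rng.random(n_paths) < rates[y] * dt        # crude CTMC step
        y = np.where(switch, 1 - y, y)
        psi += b2[y] * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
        run_min = np.minimum(run_min, psi)
    return psi - run_min                                    # constrained state X(T) >= 0

f0 = np.exp(-simulate(0.0, 5000)).mean()
f5 = np.exp(-simulate(5.0, 5000)).mean()
print("E[exp(-X(T))] from x = 0:", round(f0, 3), " from x = 5:", round(f5, 3))
```

The two estimates should be close for moderate T, in line with the bound ‖P^t(z, ·) − π‖_V ≤ B_0 ρ_0^t V(z); the multidimensional polyhedral case of course requires the full Skorohod map of Section 2.1 rather than this one-dimensional shortcut.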
5. Convergence of invariant measures for Markov modulated open queueing networks in heavy traffic.

In this section we prove Theorem 2.9. Recall the network description and the processes Q^n, Ã^n, D̃^n defined in (2.5) and (2.7). Define R^K-valued stochastic processes M^n, B^n, η^n as follows. For i ∈ K and t ≥ 0,

M_i^n(t) = (1/√n) ( Ã_i^n(t) − Σ_{j=0}^K D̃_{ij}^n(t) + Σ_{j=1}^K D̃_{ji}^n(t) ),
B_i^n(t) = ∫_0^t b_i^n(√n Q̂^n(u), Y^n(u)) du,
η_i^n(t) = (1/√n) ∫_0^t μ_i^n(√n Q̂^n(u), Y^n(u)) 1_{{Q̂_i^n(u)=0}} du. (5.1)
Noting that μ̃_i^n(x, y) = μ_i^n(x, y) 1_{{x_i>0}} for all (x, y) ∈ R_+^K × L, we have from (2.5) that

Q̂^n(t) = Q̂^n(0) + M^n(t) + B^n(t) + [I − P_0]η^n(t). (5.2)

With this notation, equation (5.2) can be written as

Q̂^n(t) = Γ( Q̂^n(0) + M^n(·) + B^n(·) )(t), (5.3)

where Γ is the Skorohod map with reflection matrix I − P_0. As noted in Section 2.3, due to Assumption 2.7(i), Γ is Lipschitz continuous, namely Assumption 2.1 is satisfied.
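For readers who want to compute with the map Γ in (5.3): on the nonnegative orthant with a reflection matrix of the form I − P_0, and under the additional assumption that P_0 is nonnegative with spectral radius strictly less than one (the Harrison–Reiman setting; whether P_0 or its transpose enters should be checked against Section 2.3), the Skorohod map can be evaluated on a discrete time grid by the classical fixed-point iteration. The sketch below is an illustration under those assumptions, not the paper's construction, and all numerical inputs are hypothetical.

```python
import numpy as np

def skorohod_orthant(psi, P0, n_iter=200, tol=1e-10):
    """Discrete-grid Skorohod map on the nonnegative orthant with reflection
    matrix R = I - P0 (P0 nonnegative, spectral radius < 1 assumed): returns
    (phi, eta) with phi = psi + (I - P0) eta >= 0 and eta nondecreasing,
    computed via the fixed-point iteration
        eta_i(t) = sup_{s <= t} max(0, (P0 eta)_i(s) - psi_i(s)).
    psi is an array of shape (n_times, K) with psi[0] in the orthant."""
    eta = np.zeros_like(psi)
    for _ in range(n_iter):
        target = np.maximum(eta @ P0.T - psi, 0.0)          # (P0 eta) - psi, clipped at 0
        new_eta = np.maximum.accumulate(target, axis=0)     # running sup over s <= t
        if np.max(np.abs(new_eta - eta)) < tol:
            eta = new_eta
            break
        eta = new_eta
    phi = psi + eta - eta @ P0.T
    return phi, eta

# Toy example: K = 2, a routing-type matrix, Brownian-like netput path.
rng = np.random.default_rng(2)
P0 = np.array([[0.0, 0.3],
               [0.2, 0.0]])
dt, n = 1e-3, 5000
increments = -0.5 * dt + 0.8 * np.sqrt(dt) * rng.normal(size=(n, 2))
psi = np.vstack([np.zeros(2), np.cumsum(increments, axis=0)])
phi, eta = skorohod_orthant(psi, P0)
print("min of phi over time and coordinates:", phi.min())   # essentially nonnegative
print("eta nondecreasing:", bool(np.all(np.diff(eta, axis=0) >= -1e-12)))
```

At the fixed point, φ = ψ + (I − P_0)η stays in the orthant and η_i increases only when φ_i is at the boundary (up to discretization error), which is the defining property of Γ used in (5.2)–(5.3).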
The following stability estimate is a key ingredient in our proof. Denote D^n := {z = (x, y) ∈ S : √n x ∈ N_0^K}.

Proposition 5.1. There exist N_1 ∈ N and t_0, a_1 ∈ (0, ∞) such that for all t ≥ t_0 and n ≥ N_1,

E_z( |Q̂^n(t|x|)|² ) ≤ a_1(1 + |x|) for all z = (x, y) ∈ D^n.

Proof: Fix z = (x, y) ∈ D^n such that x ∈ G \ A. Let M > 0 be large enough such that G_M := {x ∈ G : |x| < M} ⊇ Ā. Suppressing n in the notation, define a sequence of stopping times {σ_k}_{k∈N_0} as σ_0 = 0,

σ_{2k+1} = inf{t ≥ σ_{2k} : Q̂^n(t) ∈ A},   σ_{2k+2} = inf{t ≥ σ_{2k+1} : Q̂^n(t) ∉ G_M},   k ∈ N_0.

If t ∈ [σ_{2k+1}, σ_{2k+2}] for some k ∈ N_0, then

|Q̂^n(t)| ≤ M + 1. (5.4)

Suppose now t ∈ [σ_{2k}, σ_{2k+1}) for some k ∈ N_0. Then

Q̂^n(t) = Γ( Q̂^n(σ_{2k}) + M^n(· + σ_{2k}) − M^n(σ_{2k}) + B^n(· + σ_{2k}) − B^n(σ_{2k}) )(t − σ_{2k}).

From the convergence of Q^n to Q, it follows that for some n_0 ∈ N, the Markov process Y^n has a unique invariant measure q^n whenever n ≥ n_0. Furthermore, q^n → q^* as n → ∞. We will assume n ≥ n_0. Define, for s ≥ σ_{2k},

X̆^n(s) = Γ( Q̂^n(σ_{2k}) + B_*^n(· + σ_{2k}) − B_*^n(σ_{2k}) )(s − σ_{2k}),

where

B_*^n(s) = ∫_0^s b_*^n(√n Q̂^n(u), Y^n(u)) du,
and

b_*^n(x, y) = b^n(x, y) − b_2(y) + Σ_{j∈L} b_2(j) q^n(j).

Using Assumption 2.7(iv) and the property q^n → q^*, we see that, as n → ∞, b_*^n(√n x, y) → b^*(x, y) uniformly on S. Using Assumption 2.5 we now have that, for some n_1 ∈ N and n ≥ n_1, b_*^n(√n x, y) ∈ C(δ_0/2) for all (x, y) ∈ D^n. Thus

Γ( ∫_{σ_{2k}}^{σ_{2k}+·} b_*^n(√n Q̂^n(u∧σ_{2k+1}), Y^n(u∧σ_{2k+1})) du )(· − σ_{2k}) ∈ A(0, δ_0/2),

where A is as defined below (3.2). Applying Lemma 3.1(iii), we now have that for all σ_{2k} ≤ s < σ_{2k+1},

Γ( B_*^n(· + σ_{2k}) − B_*^n(σ_{2k}) )(s − σ_{2k}) = Γ( ∫_{σ_{2k}}^{σ_{2k}+·} b_*^n(√n Q̂^n(u∧σ_{2k+1}), Y^n(u∧σ_{2k+1})) du )(s − σ_{2k}) = 0.

Thus, using Assumption 2.1, we have that if k > 0, for n ≥ max(n_0, n_1),

|X̆^n(t)| ≤ κ_1 |Q̂^n(σ_{2k})| ≤ κ_1(M + 1), for all t ∈ [σ_{2k}, σ_{2k+1}).

A similar argument, using Lemma 3.1(i) and (iii), shows that in the case k = 0, i.e. t ∈ [σ_0, σ_1) and t ≥ Θ_1(δ_0/2)|x|, |X̆^n(t)| = 0, P_z-a.s., where Θ_1(δ_0/2) is as in Lemma 3.1. Next note that B^n = B_*^n + B_c^n, where

B_c^n(s) = ∫_0^s ( b_2(Y^n(u)) − Σ_{j∈L} b_2(j) q^n(j) ) du.

The Lipschitz property of Γ yields that

|Q̂^n(t) − X̆^n(t)| ≤ 2κ_1 sup_{0≤s≤t} |M^n(s) + B_c^n(s)|.

Combining the above estimates, for all t ≥ Θ_1|x|,

|Q̂^n(t)| ≤ 2κ_1 sup_{0≤s≤t} |M^n(s)| + 2κ_1 sup_{0≤s≤t} |B_c^n(s)| + κ_1(M + 1). (5.5)
By the martingale properties of the processes in (2.7), Doob's inequality and Assumption 2.7(ii), we have that for some c_1 ∈ (0, ∞),

E_z( sup_{0≤s≤t} |M^n(s)|² ) ≤ 4 Σ_{i=1}^K E_z(|M_i^n(t)|²)
  ≤ (4/n) Σ_{i=1}^K E_z ∫_0^t ( λ_i^n(√n Q̂^n(u), Y^n(u)) + 2 Σ_{j=1}^K μ_j^n(√n Q̂^n(u), Y^n(u)) ) du
  ≤ c_1 t. (5.6)
Next we consider E_z( sup_{0≤s≤t} |B_c^n(s)|² ). Let g^n(·) be a solution of the Poisson equation for b_c^n(·) corresponding to the Markov semigroup {P_s^n} of Y^n. Then

M_s^n := g^n(Y^n(s)) − g^n(Y^n(0)) − ∫_0^s b_c^n(Y^n(u)) du

is an {F_s^n}-martingale and Θ := sup_n |g^n|_∞ < ∞. Therefore, another application of Doob's inequality yields

E_z( sup_{0≤s≤t} |B_c^n(s)|² ) = E_z( sup_{0≤s≤t} |g^n(Y^n(s)) − g^n(Y^n(0)) − M_s^n|² ) ≤ 4Θ² + 4 E_z(|M_t^n|²).

Analogous to (3.8), we have for some c_2 ∈ (0, ∞) and n_2 ∈ N,

sup_{n≥n_2} P_z( |M_t^n| ≥ v ) ≤ 2 exp{ −c_2 v²/(t+1) }.

Therefore, for n ≥ n_2,

E_z( sup_{0≤s≤t} |B_c^n(s)|² ) ≤ 4Θ² + 4 ∫_0^∞ 2 exp{ −c_2 v/(t+1) } dv ≤ 4Θ² + 8(t+1)/c_2. (5.7)

Combining (5.5), (5.6) and (5.7), we have, for some c_3 ∈ (0, ∞) and all t ≥ Θ_1 and n ≥ max(n_0, n_1, n_2),

E_z( |Q̂^n(t|x|)|² ) ≤ c_3(1 + t|x|), z = (x, y) ∈ D^n.

The result now follows on setting t_0 = Θ_1 and N_1 = max(n_0, n_1, n_2).

The following proposition yields the tightness of {P_z^n ∘ (Z^n(t))^{-1} : z ∈ C_M, t ≥ 0, n ≥ N} for all M > 0 and N sufficiently large, where C_M is defined as below (3.11). The proof is similar to that of Lemma 3.5; for completeness, a sketch is given in the Appendix.

Proposition 5.2. There exist N_2 ∈ N and κ̌ ∈ (0, ∞) such that for M > 0,

sup_{n≥N_2} sup_{z∈C_M∩D^n} sup_{t≥0} E_z( e^{κ̌|Q̂^n(t)|} ) < ∞.
The following two propositions will be needed in the proof of Theorem 2.9. The proof of the next proposition is identical to that of Proposition 4.2 of [6] and thus is omitted. For ϱ ∈ (0, ∞) and a compact set F ⊂ S, let

τ_F^n(ϱ) := inf{t ≥ ϱ : Z^n(t) ∈ F}. (5.8)

Proposition 5.3. Let f : S → R_+ be a measurable map. Define for ϱ ∈ (0, ∞),

G^n(z) = E_z( ∫_0^{τ_F^n(ϱ)} f(Z^n(t)) dt ), z ∈ D^n.
Assume

sup_n sup_{z∈C_M∩D^n} G^n(z) < ∞ for every M > 0. (5.9)

Then there exists a κ̄ ∈ (0, ∞) such that, for all n ∈ N, t ∈ [ϱ, ∞) and z ∈ D^n,

(1/t) ∫_0^t E_z[f(Z^n(s))] ds ≤ (1/t) G^n(z) + (1/t) E_z[G^n(Z^n(t))] + κ̄.

By Proposition 5.1, there exists Λ_0 ∈ (0, ∞) such that for |x| ≥ Λ_0, z = (x, y) ∈ D^n, and n ≥ N_1,

E_z( |Q̂^n(t_0|x|)|² ) ≤ (1/2)|x|²,

where t_0 and N_1 are as in Proposition 5.1. The following proposition is proved exactly as Proposition 4.2 of [4] and thus the proof is omitted.

Proposition 5.4. There exist N_3 ∈ N and c_0 ∈ (0, ∞) such that for all n ≥ N_3 and z ∈ D^n,

E_z( ∫_0^{τ^n} (1 + |Q̂^n(t)|) dt ) ≤ c_0(1 + |x|²),

where τ^n = τ_{C_{Λ_0}}^n(t_0Λ_0) (see (5.8)), t_0, Λ_0 are as introduced above and C_{Λ_0} is defined as below (3.11) with M replaced by Λ_0.

Proof of Theorem 2.9: From Proposition 5.2, it follows that for all n ≥ N_2, Z^n has an invariant probability measure on D^n. Denote by {π_n}_{n≥N} one such sequence of invariant measures, where N = max(N_2, N_3) and N_2, N_3 are as in Propositions 5.2, 5.4, respectively. Since π is the unique invariant measure of the Feller-Markov process (Z, {P_z}_{z∈S}), we have from Z^n ⇒ Z (Theorem 2.8) that it suffices to establish the tightness of the family {π_n} (regarded as a sequence of probability measures on S). We apply Proposition 5.3 with f(z) = 1 + |x|, z = (x, y) ∈ S, and ϱ = t_0Λ_0, F = C_{Λ_0}, where t_0 and Λ_0 are as in Proposition 5.4. Note that condition (5.9) in Proposition 5.3 is satisfied as a consequence of Proposition 5.4. To prove the desired tightness we only need to show that, for all n ≥ N, ⟨π_n, f⟩ ≤ c_1 < ∞. Note that for any nonnegative, real measurable function ψ on S and n ≥ N,

∫_{D^n} E_z[ψ(Z^n(t))] π_n(dz) = ⟨π_n, ψ⟩. (5.10)
Fix k ∈ N and t ∈ (ϱ, ∞). Let, for z ∈ D^n,

Φ_n(z) := (1/t) G^n(z) − (1/t) E_z[G^n(Z^n(t))].

By (5.10), ∫_{D^n} Φ_n(z) π_n(dz) = 0. From Proposition 5.3,

0 = ∫_{D^n} Φ_n(z) π_n(dz) ≥ ∫_{D^n} ( (1/t) ∫_0^t E_z(f(Z^n(s))) ds − κ̄ ) π_n(dz).

Recalling (5.10), we have that ⟨π_n, f⟩ ≤ κ̄. The result follows.
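As a purely illustrative complement to Theorem 2.9, and emphatically not the networks or scaling of Section 2.3, one can watch the diffusion-scaled queue length of a toy single-station Markov modulated M/M/1-type queue whose service capacity exceeds the arrival rate by an amount of order √n. The Gillespie-type simulation below uses hypothetical rates and horizons; the only point is that the time averages of Q(t)/√n stay of the same order as n grows, which is the kind of steady-state behaviour the theorem is about.

```python
import numpy as np

rng = np.random.default_rng(3)

def scaled_queue_average(n, T=100.0):
    """Gillespie simulation of a toy modulated M/M/1 queue: arrival rate n*a[y],
    service rate n*a[y] + sqrt(n)*theta when the queue is nonempty, and an
    environment switching at O(1) rates.  Returns the time average of
    Q(t)/sqrt(n) over [0, T].  All parameters are illustrative."""
    a = np.array([1.0, 2.0])       # per-state base rate
    theta = 1.0                    # excess service capacity of order sqrt(n)
    switch = np.array([0.5, 0.5])  # environment switching rates
    t, q, y, integral = 0.0, 0, 0, 0.0
    while True:
        lam = n * a[y]
        mu = n * a[y] + np.sqrt(n) * theta if q > 0 else 0.0
        total = lam + mu + switch[y]
        dt = rng.exponential(1.0 / total)
        if t + dt >= T:                      # truncate the final holding interval
            integral += q * (T - t)
            break
        integral += q * dt
        t += dt
        u = rng.random() * total             # pick the next event proportionally
        if u < lam:
            q += 1
        elif u < lam + mu:
            q -= 1
        else:
            y = 1 - y
    return integral / (T * np.sqrt(n))

for n in (100, 400, 1600):
    print(f"n = {n}: time average of Q/sqrt(n) = {scaled_queue_average(n):.3f}")
```

A serious comparison with the stationary law of the limiting constrained diffusion would require matching the assumptions of Section 2.3 and much longer runs; the sketch is only meant to make the scaling in Theorem 2.9 tangible.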
6. Appendix

Proof of Lemma 3.2: For t ≥ 0, let X̃(t) = X(t + u). Then P_z-a.s.,

X̃(t) = Γ( X(u) + ∫_0^· b(Z(s+u)) ds + ∫_0^· σ(Z(s+u)) dW_u(s) )(t),

where W_u(s) = W(s+u) − W(u). Let

X̄(t) = Γ( X(u) + ∫_0^· b^*(Z(s+u)) ds )(t),

where b^* is as defined in Assumption 2.5. By the Lipschitz property of Γ (Assumption 2.1),

sup_{0≤t≤∆} |X̃(t) − X̄(t)| ≤ κ_1 sup_{0≤t≤∆} | ∫_0^t b^c(Y(s+u)) ds + ∫_0^t σ(Z(s+u)) dW_u(s) | = κ_1 ν_∆^u.

Recalling the assumption on b_2^* (Assumption 2.5), we have, applying Lemma 3.1, that on the set {ω : X(t, ω) ∈ G \ A for all t ∈ (u, u+∆)},

T(X(u+∆)) = T(X̃(∆)) ≤ T(X̄(∆)) + κ_1Θ_1ν_∆^u ≤ (T(X(u)) − ∆)^+ + κ_1Θ_1ν_∆^u.
Proof of Lemma 3.5 (sketch): Recall ∆ and ν_n defined as in Lemma 3.4. By arguing as in the proof of Lemma 4.4 of [1], we can show that, for M_0 ∈ (0, ∞),

P_z( T(X(n∆)) ≥ M_0 ) ≤ Σ_{l=1}^n P_z( 2κ_1Θ_1 Σ_{j=l}^n ν_j ≥ M_0 + (n−l−1)∆ − T(x) )
  ≤ ( exp{α(T(x)+2∆)} / exp{αM_0} ) Σ_{l=1}^n [ E_z exp{2ακ_1Θ_1 Σ_{j=l}^n ν_j} / exp{α(n−l+1)∆} ],

where α ∈ (0, ∞) is arbitrary. From Lemma 3.4 we now have

P_z( T(X(n∆)) ≥ M_0 ) ≤ ( exp{α(T(x)+2∆)} / exp{αM_0} ) Σ_{l=1}^n ( 8 exp{2Θ_3κ_1Θ_1α(1 + 2κ_1Θ_1α + 2κ_1Θ_1α∆)} )^{n−l+1} / exp{α(n−l+1)∆}
  ≤ ( exp{α(T(x)+2∆)} / exp{αM_0} ) Σ_{l=1}^n exp{ (n−l+1)( log 8 + 2Θ_3κ_1Θ_1α(1 + 2κ_1Θ_1α + 2κ_1Θ_1α∆) − α∆ ) }.

Similar to the proof of Theorem 3.1, we can choose α and ∆ so that

log 8 + 2Θ_3κ_1Θ_1α(1 + 2κ_1Θ_1α + 2κ_1Θ_1α∆) − α∆ = −θ̄ < 0.
An application of Lemma 3.1 yields that, for every κ ∈ (0, αΘ_2) and M > 0,

sup_{|z|≤M} sup_{n∈N} E_z( e^{κ|X(n∆)|} ) < ∞.

The result follows from the above estimate, using the Lipschitz property of Γ, in a straightforward manner (see Lemma 4.4 of [1]).

Proof of Lemma 4.1 (sketch): By the strong Markov property of Z, it suffices to show E_z(exp{αν̃_1}) ≤ 8 exp{Θ_4α²(1+∆)}. By Hölder's inequality,

[ E_z( exp{ α sup_{0≤t≤∆} | ∫_0^t ⟨∇F(Υ(s)), σ dW(s)⟩ − ∫_0^t ⟨∇F(Υ(s−)), dM_s⟩ | } ) ]²
  ≤ E_z( exp{ 2α sup_{0≤t≤∆} | ∫_0^t ⟨∇F(Υ(s)), σ dW(s)⟩ | } ) × E_z( exp{ 2α sup_{0≤t≤∆} | ∫_0^t ⟨∇F(Υ(s−)), dM_s⟩ | } ).

Using the Lagrange remainder form of the Taylor expansion we have, for t ≥ 0,

∇F(Υ(t)) = ∇F(X(t)) + ∇²F(ςΥ(t) + (1−ς)X(t))(g(Y(t)) − g(Y(0))),

where ς ≡ ς(t, ω) ∈ (0, 1). From Theorem 4.1(ii) and (v), there exists c_1 ∈ (0, ∞) such that for all t ≥ 0, |∇F(Υ(t))| ≤ c_1. We have, by standard estimates (see e.g. Lemma 4.2 of [1]), for some c_2 ∈ (0, ∞),

E_z( exp{ 2α sup_{0≤t≤∆} | ∫_0^t ⟨∇F(Υ(s)), σ dW(s)⟩ | } ) ≤ 8 exp{c_2α²∆}. (6.1)

Applying arguments similar to those between (3.8) and (3.9) in the proof of Lemma 3.3, there exists c_3 ∈ (0, ∞) such that

E_z( exp{ 2α sup_{0≤t≤∆} | ∫_0^t ⟨∇F(Υ(s−)), dM_s⟩ | } ) ≤ 8 exp{c_3α²(1+∆)}.
The result follows on combining the above estimates.

Proof of Proposition 5.2 (sketch): Define, for j ∈ N,

ν_j^n = sup_{(j−1)∆≤s≤j∆} | M^n(s) − M^n((j−1)∆) + B_c^n(s) − B_c^n((j−1)∆) |.

Along the lines of the proof of Lemma 4.4 in [1], we have that for all q ∈ N,

T(X^n(q∆)) ≤ T(x) + 2∆ + max_{1≤l≤q} Σ_{j=l}^q ( 2κ_1Θ_1ν_j^n − ∆ ).
Hence, for α, M_0 ∈ (0, ∞),

P_z( T(X^n(q∆)) ≥ M_0 ) ≤ Σ_{l=1}^q P_z( 2κ_1Θ_1 Σ_{j=l}^q ν_j^n ≥ M_0 + (q−l−1)∆ − T(x) )
  ≤ exp{α(T(x) + 2∆ − M_0)} Σ_{l=1}^q [ E_z exp{2ακ_1Θ_1 Σ_{j=l}^q ν_j^n} / exp{α(q−l+1)∆} ].

Let c_1 := 2κ_1Θ_1. We claim that there exist constants α_0, ∆_0, η ∈ (0, ∞) and N ∈ N such that

sup_{n≥N} sup_{j∈N} e^{−α_0∆_0} E_z( e^{c_1α_0ν_j^n} | F_{(j−1)∆_0}^n ) ≤ e^{−η∆_0}, (6.2)
where F_t^n is as introduced below (2.6). Suppose, for now, that the claim holds. Then by the Markov properties of Z^n and Y^n, we have that, for n ≥ N and q ∈ N,

P_z( T(X^n(q∆_0)) ≥ M_0 ) ≤ exp{α_0(T(x) + 2∆_0 − M_0)} Σ_{l=1}^q exp{−(q−l+1)η∆_0} ≤ exp{α_0(T(x) + 2∆_0 − M_0)} / (1 − exp{−η∆_0}).

Consequently, there exists κ_1 ∈ (0, ∞) such that for all M ∈ (0, ∞),

sup_{n≥N} sup_{|z|≤M} E_z( exp{κ_1|X^n(q∆_0)|} ) < ∞.
The result now follows by a standard argument, using the Lipschitz property of Γ. Finally we prove the claim in (6.2). Note that

ν_j^n ≤ sup_{(j−1)∆≤s≤j∆} |M^n(s) − M^n((j−1)∆)| + sup_{(j−1)∆≤s≤j∆} |B_c^n(s) − B_c^n((j−1)∆)|. (6.3)

Following the proof of Lemma 3.3 (see arguments between (3.7) and (3.9)), we can find c_2 ∈ (0, ∞) such that for all j ∈ N, α, ∆ ∈ (0, ∞),

E_z( exp{ α sup_{(j−1)∆≤s≤j∆} |B_c^n(s) − B_c^n((j−1)∆)| } | F_{(j−1)∆}^n ) ≤ 8 exp{c_2α(1 + α + α∆)}.

Furthermore, following the proof of Proposition 3.2 in [4] (see arguments below (7.4) therein), we can find c_3 ∈ (0, ∞) such that for j ∈ N, α, ∆ ∈ (0, ∞),

E_z( exp{ α sup_{(j−1)∆≤s≤j∆} |M^n(s) − M^n((j−1)∆)| } | F_{(j−1)∆}^n ) ≤ 8 exp{c_3α²∆}.

By Hölder's inequality, for j ∈ N, α, ∆ ∈ (0, ∞),

exp{−α∆} E_z( exp{αc_1ν_j^n} | F_{(j−1)∆}^n ) ≤ 8 exp{ c_1c_2α + 2c_1²c_2α² + 2(c_1²c_2 + c_1²c_3)α²∆ − α∆ }.

Finally, choose appropriate (small) α_0 and (large) ∆_0 such that

(1/∆_0)( log 8 + c_1c_2α_0 + 2c_1²c_2α_0² + 2(c_1²c_2 + c_1²c_3)α_0²∆_0 − α_0∆_0 ) ≡ −η < 0.

The claim follows.
References

[1] R. Atar, A. Budhiraja, and P. Dupuis, On positive recurrence of constrained diffusion process, Annals of Probability 29 (2001), no. 2, 979–1000.
[2] M. Bernard and A. El Kharroubi, Régulation de processus dans le premier orthant de R^n, Stochastics and Stochastics Reports 34 (1991), 149–167.
[3] A. Budhiraja and P. Dupuis, Simple necessary and sufficient conditions for the stability of constrained processes, SIAM J. Appl. Math. 59 (1999), 1686–1700.
[4] A. Budhiraja, A. P. Ghosh, and C. Lee, An ergodic rate control problem for single class queueing networks, SIAM J. Cont. and Opt., to appear.
[5] A. Budhiraja and C. Lee, Long time asymptotics for constrained diffusions in polyhedral domains, Stochastic Processes and their Applications 117 (2007), 1014–1036.
[6] A. Budhiraja and C. Lee, Stationary distribution convergence for generalized Jackson networks in heavy traffic, Math. Oper. Res. 34 (2009), no. 1, 45–56.
[7] A. Budhiraja and X. Liu, Multiscale diffusion approximations for stochastic networks in heavy traffic, Stochastic Processes and their Applications, to appear.
[8] J. G. Dai and R. J. Williams, Existence and uniqueness of semimartingale reflecting Brownian motions in convex polyhedrons, Theory Probab. Appl. 40 (1995), 1–40.
[9] A. Dembo and O. Zeitouni, Large deviations techniques and applications, second edition, Springer-Verlag, 2007.
[10] D. Down, S. P. Meyn, and R. L. Tweedie, Exponential and uniform ergodicity of Markov processes, Annals of Probability 23 (1995), 1671–1791.
[11] P. Dupuis and H. Ishii, On Lipschitz continuity of the solution mapping to the Skorohod problem, with applications, Stochastics 35 (1991), 31–62.
[12] P. Dupuis and K. Ramanan, Convex duality and the Skorokhod problem – Parts I and II, Probability Theory and Related Fields 115 (1999), no. 2, 153–236.
[13] P. Dupuis and R. J. Williams, Lyapunov functions for semimartingale reflecting Brownian motions, Annals of Probability 22 (1994), no. 2, 680–702.
[14] P. W. Glynn and S. P. Meyn, A Liapounov bound for solutions of the Poisson equation, Annals of Probability 24 (1996), 916–931.
[15] J. M. Harrison and M. I. Reiman, Reflected Brownian motion on an orthant, Annals of Probability 9 (1981), 302–308.
[16] J. M. Harrison and R. J. Williams, Brownian models of open queueing networks with homogeneous customer populations, Stochastics 22 (1987), 77–115.
[17] S. P. Meyn and R. L. Tweedie, Generalized resolvents and Harris recurrence of Markov processes, Doeblin Conference held November 2–7 at the University of Tübingen's Heinrich Fabri Institut, 1991.
[18] M. I. Reiman and R. J. Williams, A boundary property of semimartingale reflecting Brownian motions, Probability Theory and Related Fields 77 (1988), 87–97.
[19] S. P. Meyn and R. L. Tweedie, Markov chains and stochastic stability, Springer-Verlag, London, 1993.
[20] D. W. Stroock and S. R. S. Varadhan, Multidimensional diffusion processes, Springer-Verlag, Berlin, 1979.
[21] L. M. Taylor and R. J. Williams, Existence and uniqueness of semimartingale reflecting Brownian motions in an orthant, Probability Theory and Related Fields 96 (1993), 283–317.
[22] R. J. Williams, Diffusion approximations for open multiclass queueing networks: Sufficient conditions involving state space collapse, Queueing Systems: Theory and Applications 30 (1998), no. 1–2, 27–88.
[23] R. J. Williams, An invariance principle for semimartingale reflecting Brownian motions in an orthant, Queueing Systems: Theory and Applications 30 (1998), no. 1–2, 5–25.
[24] X. Xing, W. Zhang, and Y. Wang, The stationary distributions of two classes of reflected Ornstein–Uhlenbeck processes, Journal of Applied Probability 46 (2009), no. 3, 709–720.