Subexponential asymptotics of the stationary distributions of M/G/1-type Markov chains

Hiroyuki Masuyama

Department of Systems Science, Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
Abstract

This paper studies the subexponential asymptotics of the stationary distribution of an M/G/1-type Markov chain. We provide a sufficient condition for the subexponentiality of the stationary distribution, which requires only that the integrated tail of the level increments is subexponential. The previous studies, in contrast, assume the subexponentiality of the level increments themselves and/or the aperiodicity of the G-matrix. Therefore, our sufficient condition is weaker than the existing ones. We also point out some errors in the literature.

Keywords: Queueing, Subexponential asymptotics, M/G/1-type Markov chain, Periodicity, G-matrix, BMAP
1. Introduction

This paper considers an irreducible and positive-recurrent Markov chain {(Xn, Sn); n ∈ Z+} of M/G/1 type (Neuts 1989), where Z+ = {0, 1, ...}. For each n ∈ Z+, Xn takes values in Z+. If Xn = 0, Sn takes values in M0 := {1, 2, ..., M0}; otherwise, Sn takes values in M := {1, 2, ..., M}, where M0 and M are positive integers. The sets of states {(0, j); j ∈ M0} and {(k, j); j ∈ M} (k ∈ N := Z+\{0}) are called level 0 and level k, respectively. Arranging the states in lexicographic order, the transition probability matrix T of
∗ Tel.: +81-75-753-5513; fax: +81-75-753-3358.
Email address: [email protected] (Hiroyuki Masuyama)

Preprint submitted to Elsevier, April 3, 2011
{(Xn, Sn)} is given by

    T = ( B(0)  B(1)  B(2)  B(3)  ···
          C(0)  A(1)  A(2)  A(3)  ···
           O    A(0)  A(1)  A(2)  ···
           O     O    A(0)  A(1)  ···
           ⋮     ⋮     ⋮     ⋮    ⋱  ),

where A(k) (k ∈ Z+) is an M × M matrix, B(0) is an M0 × M0 matrix, B(k) (k ∈ N) is an M0 × M matrix, and C(0) is an M × M0 matrix. Let A = Σ_{k=0}^∞ A(k) and B = Σ_{k=1}^∞ B(k). It is clear that if T is stochastic, A is stochastic and B(0)e^{[M0]} + Be^{[M]} = e^{[M0]}, where e^{[m]} (m ∈ N) denotes an m × 1 column vector whose elements are all equal to one (hereafter we may write simply e for e^{[m]} when its size is obvious). Throughout this paper, we make the following assumption.

Assumption 1.1 (a) T is stochastic and irreducible; (b) A is irreducible; (c) ρ := πβ_A < 1, where π is the stationary probability vector of A and β_A = Σ_{k=1}^∞ kA(k)e; and (d) β_B := Σ_{k=1}^∞ kB(k)e is a finite vector.

It is known that under Assumption 1.1, the Markov chain {(Xn, Sn)} is irreducible and positive recurrent (see Asmussen 2003, Chapter XI, Proposition 3.1). Thus T has a unique stationary probability vector x > 0. Let x(k) (k ∈ Z+) denote the subvector of x corresponding to level k. We then have x = (x(0), x(1), x(2), ...) and

    x(k) = x(0)B(k) + Σ_{l=1}^{k+1} x(l)A(k + 1 − l),    k ∈ N.
We also define x̄(k) := Σ_{l=k+1}^∞ x(l), which is positive for all k ∈ Z+.

As is well known, M/G/1-type Markov chains arise from various queueing models with batch Markovian arrival processes (BMAPs), such as BMAP/PH/c queues, and BMAP/GI/1 queues with or without vacations, interruptions and priorities. However, it is in general difficult to obtain an explicit expression for {x(k)}, and thus for {x̄(k)}, of an M/G/1-type Markov chain. They have to be computed by a recursive algorithm (see Neuts 1989; Ramaswami 1988; Schellhaas 1990), which implies that we are forced to perform a number of numerical experiments with various parameter sets in order to investigate the qualitative behavior of the stationary distribution. Therefore it is an active topic to study the tail asymptotics of the stationary distributions of structured Markov chains such as quasi-birth-and-death processes (QBDs), M/G/1-type Markov chains and more general GI/G/1-type ones. In particular, the light-tailed asymptotics has been studied by many researchers (Abate et al. 1994; Falkenberg 1994; Kimura et al. 2010; Li and Zhao 2005b; Li et al. 2007; Miyazawa 2004; Miyazawa and Zhao 2004; Møller 2001; Tai 2009; Takine 2004).

On the other hand, a smaller number of researchers have studied the subexponential asymptotics of the stationary distributions of GI/G/1-type Markov chains (including M/G/1-type ones). Asmussen and Møller (1999) and Li and Zhao (2005a) consider GI/G/1-type Markov chains having subexponential level increments. In the context of this paper, the subexponentiality of level increments means that Y is subexponential (i.e., Y ∈ S; see Definition A.2), where Y is a random variable in Z+ such that

    lim_{k→∞} Σ_{l=k+1}^∞ B(l) / P(Y > k) = C_1 ≥ O,    lim_{k→∞} Σ_{l=k+1}^∞ A(l) / P(Y > k) = C_2 ≥ O,

with C_1 ≠ O or C_2 ≠ O. In addition to Y ∈ S, Asmussen and Møller (1999) and Li and Zhao (2005a) assume Ye ∈ S, where Ye denotes the discrete equilibrium random variable of Y, distributed with P(Ye = k) = P(Y > k)/E[Y] (k ∈ Z+). It should be noted that Y ∈ S does not necessarily imply Ye ∈ S, and vice versa (see Sigman 1999, Remark 3.5). Under these conditions, they show that for some c > 0,

    lim_{k→∞} x̄(k)/P(Ye > k) = cπ.    (1)

Takine (2004) derives the subexponential asymptotic formula (1) for an M/G/1-type Markov chain, assuming Ye ∈ S but not Y ∈ S. The result implies that the subexponential asymptotic formula (1) does not necessarily require Y ∈ S, i.e., the subexponentiality of level increments. However, the proof of (1) given in Takine (2004) requires the aperiodicity of the G-matrix. In this paper, we show that (1) holds without any additional condition such as Y ∈ S or the aperiodicity of the G-matrix.
Therefore, our sufficient condition for the subexponentiality of the stationary distribution is weaker than those presented in the literature (Asmussen and Møller 1999; Li and Zhao 2005a; Takine 2004), though our result is limited to M/G/1-type Markov chains. We also point out that some asymptotic formulae given in Li and Zhao (2005a) are incorrect. In fact, those formulae include the inverse of a singular matrix and thus are inconsistent with our results.

The rest of this paper is divided into two sections. In Section 2, we provide some preliminary results on M/G/1-type Markov chains, and we present our main results in Section 3.

2. Preliminaries

Throughout this paper, we use the following conventions. Let Z denote the set of all integers, i.e., Z = {0, ±1, ±2, ...}. For any set A, let |A| denote the cardinality of A. For any random variable X in Z+ with finite positive mean, let Xe denote the discrete equilibrium random variable of X, such that P(Xe ≤ k) = Σ_{l=0}^k P(X > l)/E[X] (k ∈ Z+). The superscript "t" represents the transpose operator for vectors and matrices. Let I denote the identity matrix. For any matrix X, [X]_{i,j} represents the (i, j)th element of X. For any summable matrix sequence {M(k); k ∈ Z+}, let M̄(k) = Σ_{l=k+1}^∞ M(l) (k ∈ Z+). Finally, if a sequence of nonnegative matrices {M(k); k ∈ Z+} satisfies lim_{k→∞} M(k)/f(k) = C ≥ O for some {f(k) > 0; k ∈ Z+}, then we write M(k) ~ Cf(k) (k → ∞). These conventions for matrices also apply to vectors and scalars in an appropriate manner.

Let G denote an M × M matrix such that [G]_{i,j} = P(S_{ν(k)} = j | X0 = k + 1, S0 = i) for any given k (k ∈ N), where ν(k) = inf{n ∈ N; Xn = k, Xl > k (l = 0, 1, ..., n − 1)}. It is known (Neuts 1989) that G is the minimal nonnegative solution of

    G = Σ_{k=0}^∞ A(k)G^k.
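As an aside for numerically minded readers, the fixed-point equation above can be solved by successive substitution, G_{n+1} = Σ_k A(k)(G_n)^k with G_0 = O, which converges to the minimal nonnegative solution. A minimal sketch, assuming a made-up two-phase example with A(k) = O for k > 2 (the blocks below are illustrative and not taken from the paper):

```python
import numpy as np

# Hypothetical blocks A(0), A(1), A(2); A(0)+A(1)+A(2) is stochastic.
A = [np.array([[0.2, 0.1], [0.3, 0.1]]),
     np.array([[0.3, 0.2], [0.2, 0.2]]),
     np.array([[0.1, 0.1], [0.1, 0.1]])]

def compute_G(A, tol=1e-12, max_iter=100_000):
    """Minimal nonnegative solution of G = sum_k A(k) G^k,
    by successive substitution starting from G = O."""
    M = A[0].shape[0]
    G = np.zeros((M, M))
    for _ in range(max_iter):
        Gk = np.eye(M)               # holds G^k as k advances
        nxt = np.zeros((M, M))
        for Ak in A:
            nxt += Ak @ Gk
            Gk = Gk @ G
        if np.abs(nxt - G).max() < tol:
            return nxt
        G = nxt
    return G

G = compute_G(A)
# Under Assumption 1.1 (b), (c) (here rho < 1), G is stochastic:
print(G.sum(axis=1))
```

The convergence here is geometric; quadratically convergent alternatives (e.g., Newton-type or cyclic-reduction schemes) exist but are beyond this sketch.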
If Assumption 1.1 (b) and (c) hold, G is stochastic (see Neuts 1989, Theorem 2.3.1).

Proposition 2.1 (Kimura et al. 2010, Proposition 2.1) Suppose Assumption 1.1 (a) and (b) hold. Then G has only one irreducible class, which is denoted by M•. In some cases, M• = M, i.e., MT := M\M• = ∅. Furthermore, by a permutation, G takes the form

          M•   MT
    M• ( G•    O  )
    MT ( G◦    GT ),    (2)
where G• is irreducible, GT (if any) is strictly lower-triangular and G◦ (if any) is a nonnegative matrix such that G◦e ≤ e. In what follows, the states in M are arranged in such a way that G takes the form of (2).

Proposition 2.1 shows that G has a unique stationary probability vector, which is denoted by g hereafter (i.e., gG = g and ge = 1). Note that g has positive elements in the positions corresponding to M• and the other elements (if any) are all equal to zero.

Let R(k) and R0(k) (k ∈ N) denote

    R(k)  = Σ_{m=0}^∞ A(k + m + 1) G^m (I − U(0))^{−1},    k ∈ N,    (3)
    R0(k) = Σ_{m=0}^∞ B(k + m) G^m (I − U(0))^{−1},        k ∈ N,    (4)

respectively, where

    U(0) = Σ_{m=0}^∞ A(m + 1) G^m.
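When only finitely many blocks are nonzero, the series (3) and (4) terminate, so R(k) and R0(k) can be evaluated directly. A sketch under that assumption (the blocks below are hypothetical, with B(k) = A(k) for simplicity):

```python
import numpy as np

A = [np.array([[0.2, 0.1], [0.3, 0.1]]),
     np.array([[0.3, 0.2], [0.2, 0.2]]),
     np.array([[0.1, 0.1], [0.1, 0.1]])]   # A(k) = O for k > 2
B = A                                       # hypothetical: B(k) = A(k)

G = np.zeros((2, 2))
for _ in range(5000):                       # successive substitution for G
    G = sum(Ak @ np.linalg.matrix_power(G, k) for k, Ak in enumerate(A))

U0 = A[1] + A[2] @ G                        # U(0) = sum_m A(m+1) G^m
W = np.linalg.inv(np.eye(2) - U0)           # (I - U(0))^{-1}

def R(k):
    """Eq. (3); the sum stops once A(k+m+1) = O."""
    S = np.zeros((2, 2))
    for m in range(max(0, len(A) - k - 1)):
        S += A[k + m + 1] @ np.linalg.matrix_power(G, m)
    return S @ W

def R0(k):
    """Eq. (4), with the analogous truncation for B."""
    S = np.zeros((2, 2))
    for m in range(max(0, len(B) - k)):
        S += B[k + m] @ np.linalg.matrix_power(G, m)
    return S @ W

print(R(1))    # here equals A(2)(I - U(0))^{-1}; R(k) = O for k >= 2
```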
For convenience, let R(0) = O and R0(0) = O. We then have (see Ramaswami 1988)

    x(k) = x(0)R0(k) + Σ_{j=1}^k x(j)R(k − j),    k ∈ N.    (5)

We also have the following results (see Takine 2003, Lemma 14; Takine 2004, Lemma 3).

Proposition 2.2 Suppose Assumption 1.1 holds. Then x̄(0) is given by

    x̄(0) = x(0)R0(I − R)^{−1},    (6)

where R = Σ_{k=0}^∞ R(k) and R0 = Σ_{k=0}^∞ R0(k). Furthermore, we have

    π = (1 − ρ) g (I − U(0))^{−1} (I − R)^{−1}.    (7)
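Equation (7), together with the factorization I − z^{−1}Â(z) = z^{−1}(I − R̂(z))(I − U(0))(zI − G) quoted later in the proof of Proposition 2.3 (taken at z = 1), lends itself to a direct numerical sanity check. The blocks below are a hypothetical example satisfying Assumption 1.1, not data from the paper:

```python
import numpy as np

A = [np.array([[0.2, 0.1], [0.3, 0.1]]),
     np.array([[0.3, 0.2], [0.2, 0.2]]),
     np.array([[0.1, 0.1], [0.1, 0.1]])]
I = np.eye(2)

G = np.zeros((2, 2))
for _ in range(5000):
    G = sum(Ak @ np.linalg.matrix_power(G, k) for k, Ak in enumerate(A))

U0 = A[1] + A[2] @ G                 # U(0)
R = A[2] @ np.linalg.inv(I - U0)     # R = sum_k R(k); only R(1) is nonzero here

def stat(P):
    """Stationary probability vector of an irreducible stochastic matrix."""
    n = P.shape[0]
    M = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
    b = np.zeros(n); b[-1] = 1.0
    return np.linalg.lstsq(M, b, rcond=None)[0]

pi, g = stat(sum(A)), stat(G)
betaA = A[1].sum(axis=1) + 2 * A[2].sum(axis=1)
rho = pi @ betaA                     # = 0.86 for these blocks, so rho < 1

# Eq. (7):
rhs = (1 - rho) * g @ np.linalg.inv(I - U0) @ np.linalg.inv(I - R)
print(np.abs(pi - rhs).max())        # ~ 0

# RG-factorization at z = 1: I - A = (I - R)(I - U(0))(I - G)
print(np.abs((I - R) @ (I - U0) @ (I - G) - (I - sum(A))).max())   # ~ 0
```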
For n ∈ N, let {R^{n∗}(k); k ∈ Z+} denote the nth-fold convolution of {R(k); k ∈ Z+} with itself, i.e., R^{1∗}(k) = R(k) (k ∈ Z+) and, for n = 2, 3, ...,

    R^{n∗}(k) = Σ_{l=0}^k R^{(n−1)∗}(k − l)R(l),    k ∈ Z+.

Let R^{0∗}(0) = I and R^{0∗}(k) = O for all k ∈ N. Furthermore, let F(k) (k ∈ Z+) denote

    F(k) = Σ_{n=0}^∞ R^{n∗}(k).

It then follows from (5) that x(k) = x(0) R0 ∗ F(k) for k ∈ N, where R0 ∗ F(k) = Σ_{l=0}^k R0(k − l)F(l) for k ∈ Z+ (Takine 2004, Corollary 1). Thus, applying the tail operator of Section 2 to both sides, x̄(k) is given by

    x̄(k) = x(0) (R0 ∗ F)‾(k),    k ∈ Z+,    (8)

where (R0 ∗ F)‾(k) = Σ_{l=k+1}^∞ R0 ∗ F(l).

Let a(z) = det(I − z^{−1}Â(z)), where Â(z) = Σ_{k=0}^∞ z^k A(k). Since Â(1) = A is stochastic, a(1) = 0. Let τ denote

    τ = max{n ∈ M; a(e^{i2πl/n}) = 0, l = 0, 1, ..., n − 1},    (9)

where i = √−1.
Remark 2.1 It is known that τ is equal to the period of the Markov additive process {(Zn, Jn); n ∈ Z+} on Z × M such that for any i, j ∈ M and k, l ∈ Z,

    P(Zn+1 = k + l, Jn+1 = j | Zn = l, Jn = i) = [A(k + 1)]_{i,j},

where A(k) = O for k = −1, −2, .... By definition,

    τ = gcd{k + q(i) − q(j); [A(k + 1)]_{i,j} > 0, k ∈ Z, i, j ∈ M},    (10)

where q is some function from M to {0, 1, ..., τ − 1} (see, e.g., Appendix B in Kimura et al. 2010). Although Lemma B.2 in Kimura et al. (2010) states that a function q satisfying (10) is injective, this is not true in general.

Proposition 2.3 If Assumption 1.1 holds, then the period of G• in (2) is equal to τ.

Proof. Let τG denote the period of G•. Since G• is irreducible and stochastic, it follows (see, e.g., Seneta 2006, Theorem 1.7) that

    τG = max{n ∈ M; det(e^{i2πl/n} I − G•) = 0, l = 0, 1, ..., n − 1}.    (11)
In what follows, we prove τG = τ. Let R̂(z) = Σ_{k=0}^∞ z^k R(k). It then follows from Theorem 14 in Zhao et al. (2003) that

    I − z^{−1}Â(z) = z^{−1}(I − R̂(z))(I − U(0))(zI − G),    0 < |z| ≤ 1.

Recall that under Assumption 1.1, the Markov chain {(Xn, Sn)} is irreducible and positive recurrent. Thus I − U(0) is nonsingular, and Corollary 30 in Zhao et al. (2003) shows that det(I − R̂(z)) ≠ 0 for any complex number z such that 0 < |z| ≤ 1. Therefore a(z) = 0 if and only if

    det(zI − G) = 0,    0 < |z| ≤ 1.

Note here that det(zI − G) = det(zI − G•) det(zI − GT), and that det(zI − GT) ≠ 0 for |z| > 0 because GT is a nilpotent matrix. As a result, a(z) = 0 if and only if

    det(zI − G•) = 0,    0 < |z| ≤ 1,

from which, (9) and (11), we have τG = τ.    □
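Proposition 2.3 means that τ can be read off numerically from G• rather than from the roots of a(z). A small sketch (the matrix below is a hypothetical irreducible G• with three cyclic classes; the scan horizon is a rough choice adequate for tiny examples):

```python
import numpy as np
from math import gcd

def period(P, eps=1e-12):
    """Period of an irreducible nonnegative matrix P, via the
    gcd of return times n with [P^n]_{0,0} > 0."""
    n = P.shape[0]
    d, Q = 0, np.eye(n)
    for step in range(1, 2 * n * n + 1):
        Q = Q @ P
        if Q[0, 0] > eps:
            d = gcd(d, step)
            if d == 1:
                break
    return d

G_bullet = np.array([[0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0],
                     [1.0, 0.0, 0.0]])
print(period(G_bullet))                 # 3
print(period(np.full((2, 2), 0.5)))     # 1 (aperiodic, cf. Remark 2.2)
```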
Remark 2.2 Proposition 2.3 implies that under Assumption 1.1, lim_{m→∞} G^m = eg if and only if τ = 1.

Along the lines of the proof of Theorem 4 in Takine (2004), we can readily prove the following proposition.

Proposition 2.4 Suppose Assumption 1.1 holds and there exists some random variable Y in Z+ with finite positive mean such that Ye ∈ S and the following limits exist:

    lim_{k→∞} Ā(k)/P(Y > k) = C_A/E[Y],    lim_{k→∞} B̄(k)/P(Y > k) = C_B/E[Y],    (12)

where C_A and C_B are nonnegative matrices satisfying C_A ≠ O or C_B ≠ O. If τ = 1, we then have

    lim_{k→∞} x̄(k)/P(Ye > k) = [x(0)C_B e + x̄(0)C_A e]/(1 − ρ) · π.    (13)

Remark 2.3 Theorem 4 in Takine (2004) asserts that (13) holds without τ = 1, though the proof of the theorem requires that lim_{m→∞} G^m = eg, i.e., τ = 1 (see Takine 2003, Appendix A.6).
3. Main Results

This section presents the main results of this paper. Since the period of G• in (2) is equal to τ (see Proposition 2.3), G can be partitioned as follows:

             M•,1   M•,2   M•,3   ···   M•,τ     MT
    M•,1   (  O     G1,2    O     ···    O       O  )
    M•,2   (  O      O     G2,3   ···    O       O  )
     ⋮     (  ⋮      ⋮      ⋮     ⋱     ⋮       ⋮  )
    M•,τ−1 (  O      O      O     ···  Gτ−1,τ    O  )
    M•,τ   ( Gτ,1    O      O     ···    O       O  )
    MT     ( GT,1   GT,2   GT,3   ···   GT,τ     GT ),

where the M•,ν's (ν = 1, 2, ..., τ) are disjoint subsets of M• such that ∪_{ν=1}^τ M•,ν = M•. Let Gν^(τ) (ν = 1, 2, ..., τ) denote

    Gν^(τ) = Gν,ν+1 Gν+1,ν+2 ··· Gν+τ−1,ν+τ,

where Gτ,τ+1 = Gτ,1 and Gτ+l,τ+l+1 = Gl,l+1 for l = 1, 2, ..., τ − 1. We then have

          ( G1^(τ)     O       ···     O         O     )
          (   O      G2^(τ)    ···     O         O     )
    G^τ = (   ⋮        ⋮       ⋱      ⋮         ⋮     )
          (   O        O       ···   Gτ^(τ)      O     )
          ( GT,1^(τ)  GT,2^(τ) ···   GT,τ^(τ)  (GT)^τ  ),

where GT,ν^(τ) (ν = 1, 2, ..., τ) is a nonnegative |MT| × |M•,ν| matrix such that {Σ_{ν=1}^τ GT,ν^(τ) + (GT)^τ}e = e. Note that gG^τ = g and that each Gν^(τ) (ν = 1, 2, ..., τ) is aperiodic, irreducible and stochastic. Let ğν = gν/(gν eν) > 0 (ν = 1, 2, ..., τ), where gν is the subvector of g corresponding to M•,ν and eν = e^{[|M•,ν|]}. It then follows that ğν is the unique stationary probability vector of Gν^(τ). Furthermore, since lim_{n→∞} (Gν^(τ))^n = eν ğν (ν = 1, 2, ..., τ) and
lim_{n→∞} (GT)^{nτ} = O, we obtain

                  ( e1ğ1    O     ···     O        O     O )
                  (  O     e2ğ2   ···     O        O     O )
    lim  G^{nτ} = (  ⋮      ⋮     ⋱      ⋮        ⋮     ⋮ )
    n→∞           (  O      O     ···  eτ−1ğτ−1    O     O )
                  (  O      O     ···     O      eτğτ    O )
                  ( f1ğ1   f2ğ2   ···  fτ−1ğτ−1  fτğτ    O ),

where fν = [I − (GT)^τ]^{−1} GT,ν^(τ) eν (ν = 1, 2, ..., τ) and Σ_{ν=1}^τ fν = e. It is easy to see that

    lim_{n→∞} G^{nτ} = EΓ,    (14)

where E and Γ denote M × τ and τ × M matrices, respectively, such that

        ( e1   0    ···   0    0  )           ( ğ1   0   ···   0     0    0 )
        ( 0    e2   ···   0    0  )           ( 0    ğ2  ···   0     0    0 )
    E = ( ⋮    ⋮    ⋱    ⋮    ⋮  ),     Γ =  ( ⋮    ⋮   ⋱    ⋮     ⋮    ⋮ )
        ( 0    0    ···  eτ−1  0  )           ( 0    0   ···  ğτ−1   0    0 )
        ( 0    0    ···   0    eτ )           ( 0    0   ···   0     ğτ   0 ).
        ( f1   f2   ···  fτ−1  fτ )

Note here that if MT = ∅, the corresponding rows and columns of E and Γ vanish. Note also that Γ satisfies

    ΓG^τ = Γ,    Γe = e^{[τ]}.    (15)
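For a concrete feel for (14) and (15), one can take a hypothetical period-2 stochastic matrix of the form (2) with MT = ∅ and check that G^{nτ} converges to the rank-τ product EΓ:

```python
import numpy as np

# Hypothetical G with cyclic classes {0} and {1, 2} (tau = 2, MT empty).
G = np.array([[0.0, 0.4, 0.6],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
tau = 2

# Stationary vector g of G via the lazy chain (I + G)/2, which is aperiodic.
w = np.ones(3) / 3
L = 0.5 * (np.eye(3) + G)
for _ in range(2000):
    w = w @ L
# w is now g = (0.5, 0.2, 0.3); normalized subvectors give g1, g2.

E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])
Gamma = np.zeros((2, 3))
Gamma[0, :1] = w[:1] / w[:1].sum()
Gamma[1, 1:] = w[1:] / w[1:].sum()

P = np.linalg.matrix_power(G, 200 * tau)       # G^{n*tau}, n large
print(np.abs(P - E @ Gamma).max())             # ~ 0, illustrating (14)
print(Gamma @ np.linalg.matrix_power(G, tau))  # = Gamma, cf. (15)
```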
We now make an assumption and then show two lemmas, which will be used later to prove our main theorem.

Assumption 3.1 There exists some random variable Y in Z+ with finite positive mean such that

    lim_{k→∞} Ā(k)E/P(Y > k) = C_A^E/E[Y],    lim_{k→∞} B̄(k)E/P(Y > k) = C_B^E/E[Y],    (16)

where C_A^E and C_B^E are nonnegative M × τ and M0 × τ matrices, respectively, satisfying C_A^E e^{[τ]} ≠ 0 or C_B^E e^{[τ]} ≠ 0.
Remark 3.1 If there exist some nonnegative matrices C_A and C_B satisfying (12), then Assumption 3.1 holds. Furthermore, if Assumption 3.1 holds, then (16) and Ee^{[τ]} = e^{[M]} yield

    lim_{k→∞} Ā(k)e/P(Y > k) = C_A^E e^{[τ]}/E[Y],    lim_{k→∞} B̄(k)e/P(Y > k) = C_B^E e^{[τ]}/E[Y].    (17)
Proof. See Appendix B.1.
(18) (19) ¤
Lemma 3.2 Suppose Assumptions 1.1 and 3.1 hold. If Ye ∈ S, then π F (k) = (I − R)−1 CAE e[τ ] . k→∞ P(Ye > k) 1−ρ lim
(20)
P −1 Proof. Since Ye ∈ S and ∞ k=0 F (k) = (I − R) , it follows from (18) and Lemma 6 in Jelenkovi´c and Lazar (1998) that ∞ X F (k) Rn∗ (k) lim = lim k→∞ P(Ye > k) k→∞ P(Ye > k) n=0
= (I − R)−1 CAE e[τ ] g(I − U (0))−1 (I − R)−1 . Substituting (7) into the above equation, we have (20). The following is our main theorem. Theorem 3.1 Suppose Assumptions 1.1 and 3.1 hold. If Ye ∈ S, then x(0)CBE e[τ ] + x(0)CAE e[τ ] x(k) = · π. k→∞ P(Ye > k) 1−ρ lim
10
¤
Proof. Applying Proposition A.3 to (8) and using (19) and (20), we obtain

    lim_{k→∞} x̄(k)/P(Ye > k)
      = x(0)[ C_B^E e^{[τ]} g (I − U(0))^{−1} (I − R)^{−1} + R0 (I − R)^{−1} C_A^E e^{[τ]} π/(1 − ρ) ]
      = x(0)C_B^E e^{[τ]} · π/(1 − ρ) + x̄(0)C_A^E e^{[τ]} · π/(1 − ρ),

where the last equality follows from (6) and (7).    □
Example 3.1 We consider a discrete-time FIFO BMAP/D/1 queue fed by a BMAP {D(k); k ∈ Z+} with M phases. We assume that service times are equal to a unit time and that {D(k)} satisfies

    D(2k)   = (    O       D1,2(k) ),      D(2k+1) = ( D1,1(k)     O     ),
              ( D2,1(k)      O     )                 (    O     D2,2(k)  )

where D_{i,j}(k) (i, j = 1, 2; k ∈ Z+) is some positive matrix such that Σ_{k=0}^∞ (D_{i,1}(k) + D_{i,2}(k))e = e and D̄_{i,j}(k)e ~ c_{i,j}/(2k)^θ (k → ∞) for some c_{i,j} > 0 and θ > 1. It then follows that the stationary queue length distribution is identical to the stationary distribution of an M/G/1-type Markov chain with C(0) = A(0) and A(k) = B(k) = D(k) for k ∈ Z+. We can also confirm that τ = 2, and that E and G have the following expressions:

    E = ( e  0 ),      G = (   O    G1,2 ).
        ( 0  e )           ( G2,1    O   )

Furthermore,

    lim_{k→∞} Ā(k)E/P(Y > k) = lim_{k→∞} B̄(k)E/P(Y > k) = ( c1,1  c1,2 ) > O,
                                                           ( c2,1  c2,2 )

for some regularly varying random variable Y such that P(Y > k) ~ k^{−θ} (k → ∞). Therefore this example satisfies the assumptions of Theorem 3.1.
Remark 3.2 The two results corresponding to Theorem 3.1 are presented in Theorem 5.1 (a) of Li and Zhao (2005a) and Theorem 5.1 of Asmussen and Møller (1999). The first one requires that C_B^E = O and Y ∈ S∗, which implies Y ∈ S and Ye ∈ S (see Definition A.3 and Proposition A.2). The second one requires that (i) Y is regularly varying with finite positive mean, or (ii) Y ∈ S and α(x) := E[Y − x | Y > x] is eventually non-decreasing and, for some t > 1,

    liminf_{x→∞} α(tx)/α(x) > 1.

Clearly, under condition (i), Ye is also regularly varying (thus Ye ∈ S). Furthermore, Corollary 2.5 in Goldie and Resnick (1988) implies that if condition (ii) holds, then Ye ∈ S and Ye is in the maximum domain of attraction of the Gumbel distribution (see, e.g., Embrechts et al. 1997, Section 3.3). As a result, the assumptions of these results in Li and Zhao (2005a) and Asmussen and Møller (1999) are more restrictive than those of Theorem 3.1.

In the rest of this section, we consider the case of τ = 1. When τ = 1, Γ = g and E = e^{[M]}. Therefore, in this case, Assumption 3.1 is rewritten as follows.

Assumption 3.2 There exists some random variable Y in Z+ with finite positive mean such that

    lim_{k→∞} Ā(k)e/P(Y > k) = c_A/E[Y],    lim_{k→∞} B̄(k)e/P(Y > k) = c_B/E[Y],

where c_A and c_B are nonnegative vectors satisfying c_A ≠ 0 or c_B ≠ 0.

Corollary 3.1 Suppose Assumptions 1.1 and 3.2 hold. If τ = 1 and Ye ∈ S, then

    lim_{k→∞} x̄(k)/P(Ye > k) = [x(0)c_B + x̄(0)c_A]/(1 − ρ) · π.    (21)
We compare Corollary 3.1 with Proposition 2.4. We first suppose the assumptions of Proposition 2.4 are satisfied. Then (12) yields

    lim_{k→∞} Ā(k)e/P(Y > k) = C_A e/E[Y],    lim_{k→∞} B̄(k)e/P(Y > k) = C_B e/E[Y],
which shows that the assumptions of Corollary 3.1 hold for c_A = C_A e and c_B = C_B e. On the other hand, the assumptions of Corollary 3.1 do not necessarily imply those of Proposition 2.4. To see this, we assume that {A(k)} and {B(k)} are of 2 × 2 dimension and that they satisfy lim_{k→∞} Ā(k)/e^{−√k} = O and, as k → ∞,

    B̄1,1(k) ~ e^{−√k − sin√k − 1},    B̄1,2(k) ~ 2e^{−√k} − e^{−√k − sin√k − 1},
    B̄2,1(k) ~ 2e^{−√k},              B̄2,2(k) ~ e^{−√k},

where B̄_{i,j}(k) denotes the (i, j)th element of B̄(k). Furthermore, we choose a random variable Y in Z+ satisfying P(Y > k) ~ e^{−√k}. Then Ye ∈ S and

    lim_{k→∞} Ā(k)e/P(Y > k) = 0,    lim_{k→∞} B̄(k)e/P(Y > k) = (2, 3)^t,

which shows that the assumptions of Corollary 3.1 are satisfied. However, in this example there exists no random variable Y in Z+ satisfying (12) in Proposition 2.4, because the ratio of an individual entry such as B̄1,1(k) to any candidate tail oscillates. Therefore, Corollary 3.1 is more general than Proposition 2.4.

The following is a special case, which can be applied to the stationary queue length distribution in the FIFO BMAP/GI/1 queue (see Masuyama et al. 2009; Takine 2000).

Corollary 3.2 Suppose Assumptions 1.1 and 3.2 hold. If τ = 1, Ye ∈ S, C(0) = A(0) and B(k) = A(k) (∀k ∈ Z+), then

    lim_{k→∞} x̄(k)/P(Ye > k) = πc_A/(1 − ρ) · π.    (22)
Proof. Clearly, c_A = c_B and π = x(0) + x̄(0). Thus (21) reduces to (22).    □

Finally, we mention some asymptotic results given in Li and Zhao (2005a).

Lemma 3.3 Suppose Assumption 1.1 holds and there exists some random variable Y in Z+ with finite positive mean such that Ye ∈ L and

    lim_{k→∞} B̄(k)/P(Y > k) = V ≠ O.    (23)

Then R̄0(k)/P(Y > k) does not converge to any finite matrix as k → ∞.
Proof. See Appendix B.2.    □

Actually, Theorem 4.1 in Li and Zhao (2005a) states that if Assumption 1.1 is satisfied and (23) holds for some Y ∈ L (thus Ye ∈ L), then

    lim_{k→∞} R̄0(k)/P(Y > k) = V [(I − U(0))(I − G)]^{−1}.    (24)

However, the inverse matrix on the right-hand side of (24) does not exist, because G is stochastic under Assumption 1.1. By similar reasoning, we can see that the asymptotic equalities given in Corollary 5.1, Theorem 5.1 (b) and Theorem 5.2 in Li and Zhao (2005a) are incorrect. Furthermore, Theorem 5.1 (b) and Theorem 5.2 (ii) and (iii) in Li and Zhao (2005a) state that if the tail of {B(k)} is equivalent to that of some Y ∈ S and also heavier than that of {A(k)}, then lim_{k→∞} x̄(k)/P(Y > k) = c for some finite c ≥ 0. This statement is, however, inconsistent with our Theorem 3.1 with C_A^E = O.

A. Heavy-Tailed Distributions

In this section, we describe some properties of heavy-tailed distributions, focusing on random variables in Z+. A random variable X in Z+ and its distribution are said to be heavy-tailed if E[z^X] = ∞ for any z > 1. As is well known, there are two important subclasses of heavy-tailed distributions: the long-tailed class and the subexponential class.

Definition A.1 (Asmussen 2003; Sigman 1999) A random variable X in Z+ and its distribution are said to be long-tailed if P(X > k) > 0 for all k ∈ Z+ and P(X > k + 1) ~ P(X > k) (k → ∞). The class of long-tailed distributions is denoted by L.

Proposition A.1 If Xe ∈ L, then for h ∈ N, l0 ∈ Z+ and ν = 0, 1, ..., h − 1,

    (1/E[X]) lim_{k→∞} Σ_{l=l0}^∞ P(X > k + lh + ν) / P(Xe > k) = 1/h.    (A.1)

Proof. It follows from Corollary 3.3 in Sigman (1999) that for any fixed (possibly negative) integer i,

    lim_{k→∞} P(X > k + l0h + i) / Σ_{j=0}^{h−1} Σ_{l=l0}^∞ P(X > k + lh + j)
      = (1/E[X]) lim_{k→∞} P(X > k + l0h + i) / P(Xe > k + l0h − 1) = 0.
Thus there exists some j∗ ∈ {0, 1, ..., h − 1} such that

    lim_{k→∞} P(X > k + l0h + i) / Σ_{l=l0}^∞ P(X > k + lh + j∗) = 0.    (A.2)

For j∗ ≤ ν ≤ h, we have

    1 ≥ Σ_{l=l0}^∞ P(X > k + lh + ν) / Σ_{l=l0}^∞ P(X > k + lh + j∗)
      ≥ 1 − P(X > k + l0h + j∗) / Σ_{l=l0}^∞ P(X > k + lh + j∗),

from which and (A.2) it follows that

    lim_{k→∞} Σ_{l=l0}^∞ P(X > k + lh + ν) / Σ_{l=l0}^∞ P(X > k + lh + j∗) = 1.    (A.3)

Similarly, (A.3) holds for 0 ≤ ν < j∗. Finally, (A.3) yields

    (1/E[X]) lim_{k→∞} Σ_{l=l0}^∞ P(X > k + lh + ν) / P(Xe > k + l0h − 1)
      = lim_{k→∞} Σ_{l=l0}^∞ P(X > k + lh + ν) / Σ_{m=l0h}^∞ P(X > k + m)
      = lim_{k→∞} [Σ_{l=l0}^∞ P(X > k + lh + j∗) / Σ_{j=0}^{h−1} Σ_{l=l0}^∞ P(X > k + lh + j)]
                · [Σ_{l=l0}^∞ P(X > k + lh + ν) / Σ_{l=l0}^∞ P(X > k + lh + j∗)]
      = 1/h,

which implies (A.1) because P(Xe > k + l0h − 1) ~ P(Xe > k).    □
Definition A.2 (Chistyakov 1964; Sigman 1999) A random variable X in Z+ and its distribution are said to be subexponential if P(X > k) > 0 for all k ∈ Z+ and P(X1 + ··· + Xn > k) ~ nP(X > k) (k → ∞) for all n = 2, 3, ..., where the Xi's are independent copies of X. The class of subexponential distributions is denoted by S.

The following is a discrete analog of the class S∗ introduced by Klüppelberg (1988).
Definition A.3 A random variable X in Z+ and its distribution belong to the class S∗ if P(X > k) > 0 for all k ∈ Z+ and

    lim_{k→∞} Σ_{l=0}^k P(X > k − l)P(X > l) / P(X > k) = 2E[X].    (A.4)
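To make the equilibrium random variable Xe of Section 2 concrete alongside these tail classes: for any X on {0, ..., N}, the tails P(X > l) sum to E[X], so P(Xe = k) = P(X > k)/E[X] is a genuine distribution, and its tail is heavier than that of X. A sketch with an arbitrary truncated Pareto-type tail (the exponent and truncation point are illustrative choices):

```python
import numpy as np

N = 100_000
k = np.arange(N + 1)
pmf = 1.0 / (k + 1.0) ** 3.5        # Pareto-like weights on {0, ..., N}
pmf /= pmf.sum()                     # a proper (truncated) distribution

tail = 1.0 - np.cumsum(pmf)          # tail[k] = P(X > k)
EX = tail.sum()                      # E[X] = sum_k P(X > k)

pmf_e = tail / EX                    # P(Xe = k) = P(X > k)/E[X]
tail_e = 1.0 - np.cumsum(pmf_e)      # P(Xe > k)

print(pmf_e.sum())                   # ~ 1: Xe is a distribution
# The equilibrium tail is heavier: P(Xe > k)/P(X > k) grows with k.
print(tail_e[10] / tail[10], tail_e[1000] / tail[1000])
```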
Proposition A.2 If a random variable X in Z+ belongs to S∗, then X ∈ S and Xe ∈ S.

Proposition A.2 can be proved similarly to the proof of Theorem 3.2 (b) in Klüppelberg (1988). However, for the readers' convenience, we provide a complete proof in Appendix B.3.

Proposition A.3 below characterizes the tail asymptotics of the convolution of two matrix sequences associated with a subexponential tail.

Proposition A.3 Let Jn (n = 0, 1, 2) denote finite sets. Let {P(k); k ∈ Z+} and {Q(k); k ∈ Z+} denote nonnegative |J0| × |J1| and |J1| × |J2| matrix sequences, respectively, such that P := Σ_{k=0}^∞ P(k) and Q := Σ_{k=0}^∞ Q(k) are finite. Suppose that for some random variable Y ∈ S,

    lim_{k→∞} P̄(k)/P(Y > k) = P̃ ≥ O,    lim_{k→∞} Q̄(k)/P(Y > k) = Q̃ ≥ O,

where P̃ = Q̃ = O is allowed. We then have

    lim_{k→∞} (P ∗ Q)‾(k)/P(Y > k) = P̃Q + PQ̃,    (A.5)

where P ∗ Q(k) = Σ_{l=0}^k P(k − l)Q(l) for k ∈ Z+.
=
X
à [P ]i,ν [Q]ν,j
X
1−
k X [P (k − l)]i,ν [Q(l)]ν,j l=0
ν∈J1 (i,j)
=
l=0
[P ]i,ν
[P ]i,ν [Q]ν,j P(Pi,ν + Qν,j > k),
ν∈J1 (i,j)
16
!
[Q]ν,j (A.6)
where Pi,ν and Qν,j (ν ∈ J1 (i, j)) denote random variables in Z+ such that for all k = 0, 1, . . . , P(Pi,ν = k) =
[P (k)]i,ν , [P ]i,ν
P(Qν,j = k) =
[Q(k)]ν,j . [Q]ν,j
Note here that for ν ∈ J1 (i, j), ˜ ]i,ν P(Pi,ν > k) [P = , k→∞ P(Y > k) [P ]i,ν
˜ ν,j P(Qν,j > k) [Q] = . k→∞ P(Y > k) [Q]ν,j
lim
lim
(A.7)
˜ ]i,ν = 0 (resp. [Q] ˜ ν,j = Note also that if [P ]i,ν = 0 (resp. [Q]ν,j = 0), then [P 0). Thus ˜ ]i,ν [Q]ν,j + [P ]i,ν [Q] ˜ ν,j = 0, [P ν ∈ J1 \J1 (i, j). (A.8) By applying Lemma 10 in Jelenkovi´c and Lazar (1998) to (A.6) and using (A.7) and (A.8), we obtain à ! X ˜ ˜ [P ]i,ν [Q]ν,j [P ∗ Q(k)]i,j = [P ]i,ν [Q]ν,j + lim k→∞ P(Y > k) [P ]i,ν [Q]ν,j ν∈J1 (i,j) ´ X ³ ˜ ]i,ν [Q]ν,j + [P ]i,ν [Q] ˜ ν,j [P = ν∈J1 (i,j)
=
X³
´ ˜ ]i,ν [Q]ν,j + [P ]i,ν [Q] ˜ ν,j , [P
ν∈J1
which leads to (A.5).
¤
B. Proofs B.1. Proof of Lemma 3.1 It follows from (3) and (4) that ∞ X R(k) A(k + m + 1)Gm lim = lim k→∞ P(Ye > k) k→∞ P(Ye > k + 1) m=0
P(Ye > k + 1) (I − U (0))−1 . P(Ye > k) ∞ X R0 (k) B(k + m)Gm lim = lim (I − U (0))−1 . k→∞ P(Ye > k) k→∞ P(Y > k) e m=0 ·
17
(B.1) (B.2)
It can be shown that ∞ X A(k + m)Gm lim = CAE e[τ ] g, k→∞ P(Y > k) e m=0
(B.3)
∞ X B(k + m)Gm = CBE e[τ ] g. k→∞ P(Y > k) e m=0
(B.4)
lim
k
Applying (B.3) and P(Ye > k + 1) ∼ P(Ye > k) to (B.1), we have (18). From (B.2) and (B.4), we also have (19). Therefore Lemma 3.1 is true. In what follows, we provide the proof of (B.3), because (B.4) is proved in the same way. It follows from (14) and (16) that for any ε > 0 there exists some positive integer m∗ := m∗ (ε) such that for all m > m∗ , (1 − ε)EΓ ≤ Gτ bm/τ c ≤ (1 + ε)EΓ , A(m)EΓ CAE Γ + εeet CAE Γ − εeet ≤ ≤ . E[Y ] P(Y > m) E[Y ]
(B.5) (B.6)
We now have ∞ X A(k + m)Gm lim sup P(Ye > k) k→∞ m=0 ≤
m∗ X m=0
lim sup k→∞
P(Y > k + m) P(Ye > k + m) A(k + m)Gm lim sup lim sup P(Y > k + m) k→∞ P(Ye > k + m) k→∞ P(Ye > k) ∞ X
+ lim sup k→∞
m=m∗
A(k + m)Gm . P(Ye > k) +1
(B.7)
Note that Gm ≤ eet (∀m = 0, 1, . . . , m∗ ) and (17) yield lim sup k→∞
A(k + m)Gm C E e[τ ] t ≤ A e < ∞, P(Y > k + m) E[Y ]
for m = 0, 1, . . . , m∗ . Thus since Ye ∈ L, the first term in (B.7) vanishes (see Corollary 3.3 in Sigman 1999) and therefore lim sup k→∞
∞ ∞ X X A(k + m)Gm A(k + m)Gm = lim sup P(Ye > k) P(Ye > k) k→∞ m=m +1 m=0 ∗
= lim sup k→∞
τ −1 X
X
ν=0
m≥m∗ m≡ν (mod τ )
τ bm/τ c
A(k + m)G P(Ye > k) +1 18
Gν
.
(B.8)
Substituting (B.5) into (B.8), we obtain ∞ X A(k + m)Gm lim sup P(Ye > k) k→∞ m=0
τ −1 X X lim sup ≤ (1 + ε) k→∞
A(k + m)EΓ Gν . P(Y > k) e +1
ν=0
(B.9)
m≥m∗ m≡ν (mod τ )
It follows from (B.6) and Proposition A.1 that for any k ∈ Z+ , X
lim sup k→∞
≤
=
m≥m∗ +1 m≡ν (mod τ )
A(k + m)EΓ P(Ye > k)
CAE Γ + εeet lim sup E[Y ] k→∞
X m≥m∗ +1 m≡ν (mod τ )
P(Y > k + m) P(Ye > k)
CAE Γ + εeet . τ
(B.10)
As a result, substituting (B.10) into (B.9) and letting ε → 0 yield lim sup k→∞
∞ τ −1 X X A(k + m)Gm 1 ≤ CAE Γ Gν . P(Y > k) τ e m=0 ν=0
(B.11)
Similarly to the derivation of (B.11), we can obtain ∞ τ −1 X A(k + m)Gm 1 E X ν lim inf ≥ CA Γ G . k→∞ P(Y > k) τ e m=0 ν=0
(B.12)
Note here that Γ
τ −1 X
Gν = Γ (I − Gτ + τ eg)(I − G + eg)−1
ν=0
= τ · e[τ ] g(I − G + eg)−1 = τ · e[τ ] g, where the second equality follows from (15). Therefore, from (B.11) and (B.12), we have (B.3). ¤ 19
B.2. Proof of Lemma 3.3

It follows from (4) and (I − U(0))^{−1} ≥ I that

    liminf_{k→∞} R̄0(k)/P(Y > k) ≥ liminf_{k→∞} Σ_{m=0}^∞ B̄(k + m)G^m / P(Y > k).    (B.13)

It follows from (14) and (23) that for any ε > 0 there exists some positive integer m0 := m0(ε) such that for all m > m0,

    G^{τ⌊m/τ⌋} ≥ (1 − ε)EΓ,    B̄(m)/P(Y > m) ≥ V − εee^t.    (B.14)

Thus, substituting (B.14) into (B.13), we have

    liminf_{k→∞} R̄0(k)/P(Y > k)
      ≥ liminf_{k→∞} Σ_{ν=0}^{τ−1} Σ_{m≥m0+1, m≡ν (mod τ)} B̄(k + m)G^{τ⌊m/τ⌋} G^ν / P(Y > k)
      ≥ (V − εee^t)(1 − ε)EΓ Σ_{ν=0}^{τ−1} G^ν · liminf_{k→∞} Σ_{m≥m0+1, m≡ν (mod τ)} P(Y > k + m)/P(Y > k).

Proposition A.1 and Corollary 3.3 in Sigman (1999) yield

    liminf_{k→∞} Σ_{m≥m0+1, m≡ν (mod τ)} P(Y > k + m)/P(Y > k) = ∞.

As a result, the lemma is true.    □
P(X > 2k − l)P(X > l)
l=0
=2
k−1 X
P(X > 2k − l)P(X > l) + P(X > k)2 , 2ϕ1 (k).
l=0
20
(B.15)
It then follows that for any fixed ν ∈ Z+ and k ≥ ν + 1, ϕ1 (k) = ≥
k−1 X l=0 ν−1 X
1 P(X > 2k − l)P(X > l) + P(X > k)2 2 P(X > 2k − l)P(X > l) +
l=0
k−1 X
P(X > 2k − l)P(X > l)
l=ν
h ≥ E[X] P(X > 2k)P(Xe ≤ ν − 1)
i + P(X > 2k − ν)P(ν − 1 < Xe ≤ k − 1) ,
which yields P(X > 2k − ν) P(X > 2k) ¸Á · 1 ϕ1 (k) − P(Xe ≤ ν − 1) P(ν − 1 < Xe ≤ k − 1). ≤ E[X] P(X > 2k) (B.16)
1≤
Note here that (A.4) and (B.15) lead to lim ϕ1 (k)/P(X > 2k) = E[X].
k→∞
Since the right hand side of (B.16) converges to one as k → ∞, we have limk→∞ P(X > 2k − ν)/P(X > 2k) = 1. Similarly we can show that limk→∞ P(X > 2k − 1 − ν)/P(X > 2k − 1) = 1 by using the following instead of (B.15). ϕ2 (k) ,
k−1 X l=0
2k−1 1X P(X > 2k − 1 − l)P(X > l) = P(X > 2k − 1 − l)P(X > l) 2 l=0
As a result, we have X ∈ L. Next we show X ∈ S. Note that for any positive integer η < bk/2c, k X P(X > k − l)P(X > l) P(X > k) l=0 η−1 k−η X P(X > k − l)P(X > l) X P(X > k − l)P(X > l) =2 + , P(X > k) P(X > k) l=0 l=η
21
from which, (A.4) and X ∈ L we have lim lim sup
η→∞
k→∞
k−η X P(X > k − l)P(X > l) = 0. P(X > k) l=η
(B.17)
Equation (B.17) and liml→∞ P(X = l)/P(X > l) = 0 (due to X ∈ L) yield k−η X P(X > k − l)P(X = l) = 0. lim lim sup η→∞ k→∞ P(X > k) l=η
(B.18)
Furthermore, it follows from X ∈ L that η−1 X P(X > k − l)P(X = l) = 1, η→∞ k→∞ P(X > k) l=0
lim lim
lim sup k→∞
(B.19)
k X
P(X > k − l)P(X = l) P(X > k) l=k−η+1 ≤ lim sup k→∞
P(k − η < X ≤ k) = 0. P(X > k)
(B.20)
From (B.18), (B.19) and (B.20), we have k X P(X > k − l)P(X = l) lim = 1, k→∞ P(X > k) l=0
which implies X ∈ S. k Finally, we show Xe ∈ S. It follows from X ∈ L that P(Xe = k + 1) ∼ P(Xe = k). Further, from (A.4), we have k X P(Xe = k − l)P(Xe = l) lim = 2. k→∞ P(X e = k) l=0
Therefore, Xe is locally subexponential, and thus Xe ∈ S (see Asmussen et al. 2003, Definition 2, Remarks 2 and 3). ¤
22
Acknowledgment

The author thanks the anonymous reviewers for their valuable comments and suggestions, which have improved the quality of this paper. The research of the author was supported in part by Grant-in-Aid for Young Scientists (B) of Japan Society for the Promotion of Science under Grant No. 21710151.

References

Abate, J., Choudhury, G. L., Whitt, W., 1994. Asymptotics for steady-state tail probabilities in structured Markov queueing models. Stochastic Models 10, 99–143.

Asmussen, S., 2003. Applied Probability and Queues, Second Edition. Springer, New York.

Asmussen, S., Foss, S., Korshunov, D., 2003. Asymptotics for sums of random variables with local subexponential behaviour. Journal of Theoretical Probability 16, 489–518.

Asmussen, S., Møller, J. R., 1999. Tail asymptotics for M/G/1 type queueing processes with subexponential increments. Queueing Systems 33, 153–176.

Chistyakov, V. P., 1964. A theorem on sums of independent positive random variables and its applications to branching random processes. Theory of Probability and its Applications 9, 640–648.

Embrechts, P., Klüppelberg, C., Mikosch, T., 1997. Modelling Extremal Events for Insurance and Finance. Springer, Berlin.

Falkenberg, E., 1994. On the asymptotic behaviour of the stationary distribution of Markov chains of M/G/1-type. Stochastic Models 10, 75–97.

Goldie, C. M., Resnick, S., 1988. Distributions that are both subexponential and in the domain of attraction of an extreme-value distribution. Advances in Applied Probability 20, 706–718.

Jelenković, P. R., Lazar, A. A., 1998. Subexponential asymptotics of a Markov-modulated random walk with queueing applications. Journal of Applied Probability 35, 325–347.

Kimura, T., Daikoku, K., Masuyama, H., Takahashi, Y., 2010. Light-tailed asymptotics of stationary tail probability vectors of Markov chains of M/G/1 type. Stochastic Models 26, 505–548.
Klüppelberg, C., 1988. Subexponential distributions and integrated tails. Journal of Applied Probability 25, 132–141.

Li, H., Miyazawa, M., Zhao, Y. Q., 2007. Geometric decay in a QBD process with countable background states with applications to a join-the-shortest-queue model. Stochastic Models 23, 413–438.

Li, Q.-L., Zhao, Y. Q., 2005a. Heavy-tailed asymptotics of stationary probability vectors of Markov chains of GI/G/1 type. Advances in Applied Probability 37, 482–509.

Li, Q.-L., Zhao, Y. Q., 2005b. Light-tailed asymptotics of stationary probability vectors of Markov chains of GI/G/1 type. Advances in Applied Probability 37, 1075–1093.

Masuyama, H., Liu, B., Takine, T., 2009. Subexponential asymptotics of the BMAP/GI/1 queue. Journal of the Operations Research Society of Japan 52, 377–401.

Miyazawa, M., 2004. A Markov renewal approach to M/G/1 type queues with countably many background states. Queueing Systems 46, 177–196.

Miyazawa, M., Zhao, Y. Q., 2004. The stationary tail asymptotics in the GI/G/1-type queue with countably many background states. Advances in Applied Probability 36, 1231–1251.

Møller, J. R., 2001. Tail asymptotics for M/G/1-type queueing processes with light-tailed increments. Operations Research Letters 28, 181–185.

Neuts, M. F., 1989. Structured Stochastic Matrices of M/G/1 Type and their Applications. Marcel Dekker, New York.

Ramaswami, V., 1988. A stable recursion for the steady state vector in Markov chains of M/G/1 type. Stochastic Models 4, 183–188.

Schellhaas, H., 1990. On Ramaswami's algorithm for the computation of the steady state vector in Markov chains of M/G/1-type. Stochastic Models 6, 541–550.

Seneta, E., 2006. Non-negative Matrices and Markov Chains, Revised Printing. Springer, New York.
Sigman, K., 1999. Appendix: A primer on heavy-tailed distributions. Queueing Systems 33, 261–275.

Tai, Y., 2009. Tail Asymptotics and Ergodicity for the GI/G/1-type Markov Chains. Dissertation, Carleton University, Ottawa, Canada.

Takine, T., 2000. A new recursion for the queue length distribution in the stationary BMAP/G/1 queue. Stochastic Models 16, 335–341.

Takine, T., 2003. Geometric and subexponential asymptotics of Markov chains of M/G/1 type. Technical Report #2003-005, Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University, Kyoto, Japan.

Takine, T., 2004. Geometric and subexponential asymptotics of Markov chains of M/G/1 type. Mathematics of Operations Research 29, 624–648.

Zhao, Y. Q., Li, W., Braun, W. J., 2003. Censoring, factorizations, and spectral analysis for transition matrices with block-repeating entries. Methodology and Computing in Applied Probability 5, 35–58.