IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 47, NO. 12, DECEMBER 2002
Stability of Hybrid Dynamic Systems Containing Singularly Perturbed Random Processes

Grazyna Badowski and G. George Yin, Fellow, IEEE
Abstract—To meet the challenge of handling dynamic systems in which continuous dynamics and discrete events coexist, we develop stability analysis for such systems governed by ordinary differential equations and stochastic differential equations with regime switching modulated by a Markov chain involving a small parameter. The small parameter is used to reflect either the inherent two-time scales in the original system or the different rates of regime switching among a large number of states of the discrete events. The smaller the parameter is, the more rapidly the system switches. To reduce the complexity, one attempts to effectively “replace” the actual system by a limit system with a simpler structure. To ensure the validity of such a replacement over a longtime horizon, it is crucial that the original system be stable. The fast regime changes and the large state space of the Markov chain make the stability analysis difficult. Under suitable conditions, using the limit dynamic systems and the perturbed Lyapunov function methods, we show that if the limit systems are stable, then so are the original systems. This justifies the replacement of a complicated original system by its limit from a longtime behavior point of view.

Index Terms—Hybrid system, Lyapunov method, Markov chain, singular perturbation, stability, switching diffusion.
I. INTRODUCTION
THIS PAPER is devoted to the study of asymptotic properties of hybrid dynamic systems driven by singularly perturbed Markov chains and/or singularly perturbed switching diffusions. We focus on stability of the underlying dynamic systems and analyze their longtime behavior. Our motivation stems from the need to solve numerous control and optimization problems in engineering, management sciences, and biological and physical sciences. Many such applications involve dynamic systems under uncertainty and with a hybrid nature; that is, the systems include both continuous dynamics and discrete-event interventions. For instance, see [19] for production planning and manufacturing systems, [20] for a stock-investment model, [21] for a hybrid Leontief economic model, and [24] for a hybrid LQG control problem. Such hybrid models also appear in electronic power systems, control of a solar thermal central receiver, and command control systems (see the references cited in [7]). To model the uncertainty and to reflect the hybrid features, Markov processes of pure jump type and switching diffusions are natural candidates.

Manuscript received August 17, 2001; revised March 25, 2002. Recommended by Associate Editor C. Wen. This work was supported in part by Wayne State University and by the National Science Foundation under Grant DMS-9877090. G. Badowski is with the Institute for Systems Research, University of Maryland, College Park, MD 20742 USA. G. Yin is with the Department of Mathematics, Wayne State University, Detroit, MI 48202 USA. Digital Object Identifier 10.1109/TAC.2002.805682
Using a finite-state Markov chain to model the discrete events having piecewise-constant behavior, and incorporating various considerations into the models, often results in the underlying Markov chain having a large state space. The computational effort and the complexity become a real concern. To overcome the difficulties, much effort has been devoted to the modeling and analysis of such systems; see [9], [17], [19], [22], and [23], among others. Focusing on reduction of complexity, one of the ideas is: all systems in real-world applications involve some form of hierarchy [18], which provides us with an opportunity for decomposition. Taking advantage of the inherent hierarchy and the different rates of changes of the subsystems, one may split a large-scale system into several classes and lump the states in each class into one state. This leads to a singular perturbation formulation involving time-scale separations. Note that such a time-scale separation is also inherent in many systems. For instance, in a stock market, there are long-term investors who make decisions based on the longtime behavior of a stock, and short-term investors who focus on short-term returns based on daily or even shorter period price behavior of the equity. Their time scales are in striking contrast. In our recent work ([9], [22], [23]), by introducing a small parameter, we have established asymptotic properties of the underlying Markov chains, including asymptotic expansions of the probability vectors and transition matrices, scaled and unscaled occupation measures, aggregation of states, and switching diffusion limits of suitably scaled sequences. Up to now, except for certain associated control problems [22, Ch. 9], our main focus has been devoted to a finite horizon. We have demonstrated that, effectively, the complicated system can be replaced by a limit system having a much simpler structure. Using the optimal or nearly optimal controls of the limit system, one can construct controls that are nearly as good for the original system. To ensure that such a replacement can be made over a longtime horizon, first and foremost, the system must preserve stability. Some of the recent effort in stability of jump systems can be found in [4], [7], [14], and [15] (see also [8], [10], [13], and the references therein for general discussions on stochastic stability). Linear systems were treated in [4], [7], and [15], whereas nonlinear systems were dealt with in [14]. Our aim in this work is to take up the stability issue, for sufficiently small ε and sufficiently large t, of systems described by hybrid ordinary differential equations (ODEs) or hybrid stochastic differential equations (SDEs), in which the discrete events involve two-time scales. Due to the complexity of the systems, the stability analysis is difficult. We take an alternative approach in lieu of directly attacking the problem. Using the stability of the limit system as a bridge, we derive the desired asymptotic properties of the
original system via perturbed Lyapunov function methods (see [11]). In fact, we show that if the limit system is stable, then for ε small enough the original system is also stable, which is similar in spirit to the stability under wideband noise perturbation in [3]. This justifies, from a longtime behavior point of view, the replacement of a complicated original system by its limit (an average with respect to the stationary measures). Therefore, in lieu of treating a complex and computationally intensive original system, we need only consider its limit.

To proceed, a word on notation is in order. Throughout the rest of the paper, we use K to denote a generic positive constant, with the conventions K + K = K and KK = K. For a vector or matrix z, z' denotes its transpose; 1_l is an l-dimensional column vector with all entries equal to 1; and diag(A_1, …, A_l) denotes a block-diagonal matrix whose blocks A_1, …, A_l are matrices of appropriate dimensions.

The rest of the paper is organized as follows. Section II begins with the precise formulation of the problem. Section III proceeds with the study of stability of dynamic systems represented by hybrid ODEs and hybrid SDEs involving singularly perturbed Markov chains. For ease of presentation, in Sections II and III, the Markov chain is assumed to contain only irreducible (or ergodic) classes. In Section IV, we carry out the analysis for systems that include transient states in addition to the irreducible classes. A couple of examples, simulation results, and further remarks are given in Section V. In order not to disrupt the flow of presentation, the proofs and technical complements are placed in an Appendix at the end of the paper.

II. FORMULATION
Let α(t) be a continuous-time Markov chain with state space M = {1, 2, …, m} and generator Q = (q_{ij}) satisfying q_{ij} ≥ 0 for j ≠ i and Σ_{j∈M} q_{ij} = 0 for each i ∈ M. For a real-valued function f(·) defined on M, f(α(t)) − f(α(0)) − ∫_0^t Qf(·)(α(s)) ds is a martingale, where for each i ∈ M

Qf(·)(i) = Σ_{j∈M} q_{ij} f(j) = Σ_{j≠i} q_{ij} (f(j) − f(i)).   (1)

It is well known that a generator or its corresponding Markov chain is said to be irreducible if the system of equations

νQ = 0,   Σ_{i=1}^m ν_i = 1   (2)

has a unique solution ν = (ν_1, …, ν_m) satisfying ν_i > 0 for each i ∈ M. Such a solution is termed a stationary distribution.

In this paper, we consider the case that the Markov chain has a large state space. In various applications, to obtain optimal or nearly optimal controls of hybrid systems involving such a Markov chain, the computational complexity becomes a pressing issue due to the presence of a large number of states. As suggested in [22], the complexity can be reduced by observing that not all of the states are changing at the same rate. Some of them vary rapidly and others change slowly. To take advantage of the hierarchical structure, the states can be naturally subdivided into several classes. To reflect the different rates of changes, we introduce a small parameter ε > 0 into the system. The Markov chain now becomes one depending on ε, i.e., α(t) = α^ε(t). Throughout the paper, we assume that α^ε(t) is generated by

Q^ε = Q̃/ε + Q̂   (3)

where Q̃ and Q̂ are themselves generators. If ε is small, the generator Q̃/ε is changing at a fast pace, and the corresponding Markov chain varies very rapidly. The smaller the ε is, the faster the chain varies. If Q̂ ≠ 0, the Markov chain is also subject to weak interactions in addition to the rapid variations due to Q̃/ε. Consider the case that the states of the Markov chain are divisible into a number of classes such that, within each class, the transitions take place at a fast pace, whereas among different classes, the transitions appear less frequently. Such a formulation is taken care of by use of the structure of Q̃; more details will be given later.
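For readers who wish to experiment numerically, the following minimal Python sketch simulates a chain with generator Q^ε = Q̃/ε + Q̂ by the standard exponential-holding-time construction. The four-state block generators and the value of ε are illustrative assumptions introduced here, not data from the paper.

```python
import numpy as np

def simulate_ctmc(Q, x0, T, rng):
    """Simulate a continuous-time Markov chain with generator Q on [0, T].

    Returns the jump times and the piecewise-constant state sequence."""
    t, state = 0.0, x0
    times, states = [0.0], [x0]
    while True:
        rate = -Q[state, state]
        if rate <= 0:                 # absorbing state: nothing more happens
            break
        t += rng.exponential(1.0 / rate)
        if t >= T:
            break
        probs = Q[state].copy()
        probs[state] = 0.0
        state = rng.choice(len(probs), p=probs / rate)
        times.append(t)
        states.append(state)
    return np.array(times), np.array(states)

# Illustrative generators: Q_tilde has two irreducible 2x2 blocks (fast part),
# Q_hat supplies the weak (slow) interactions between the two blocks.
Q_tilde = np.array([[-1.0,  1.0,  0.0,  0.0],
                    [ 2.0, -2.0,  0.0,  0.0],
                    [ 0.0,  0.0, -3.0,  3.0],
                    [ 0.0,  0.0,  1.0, -1.0]])
Q_hat = np.array([[-0.5, 0.0, 0.5, 0.0],
                  [ 0.0,-0.5, 0.0, 0.5],
                  [ 0.5, 0.0,-0.5, 0.0],
                  [ 0.0, 0.5, 0.0,-0.5]])

eps = 0.05
Q_eps = Q_tilde / eps + Q_hat         # generator of the singularly perturbed chain
rng = np.random.default_rng(0)
times, states = simulate_ctmc(Q_eps, 0, 10.0, rng)
print(f"{len(times)} jumps on [0, 10]; a smaller eps produces more jumps")
```

As ε is decreased, the number of jumps within each block grows like 1/ε while transitions between the two blocks remain infrequent, which is exactly the two-time-scale picture described above.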
A. Problem Setup

Dynamic Systems Governed by Ordinary Differential Equations: Suppose that α^ε(t) is a singularly perturbed Markov chain having generator (3), where Q̃ is to be specified later. Let x^ε(t) be the state of a system at time t, influenced by α^ε(t), with initial condition x^ε(0) = x. Suppose that

dx^ε(t)/dt = f(x^ε(t), α^ε(t)).   (4)

Our motivation stems from applications to production planning and manufacturing systems. In such a setup, the Markov chain can be used to model the machine capacity of an unreliable machine or a random demand process. We are interested in the stability or longtime behavior of such a manufacturing system. As was mentioned, this system is difficult to analyze. Instead, we will examine its stability by use of the fact that, as ε → 0, the dynamic system is close to a limit system in an appropriate sense. Given the stability of the limit system, our goal is to figure out the large-time behavior of the dynamic system given by (4) as ε → 0 and t → ∞. Note that, effectively, rather than dealing with one equation, we are considering a system of equations in which the total number of equations is precisely m.

Dynamic Systems Governed by Stochastic Differential Equations: Let α^ε(t) be given as in the previous case, and let w(·) be a standard d-dimensional Brownian motion that is independent of α^ε(·). Let x^ε(t) be the state variable of a switching diffusion at time t, governed by the following equation:

dx^ε(t) = f(x^ε(t), α^ε(t)) dt + σ(x^ε(t), α^ε(t)) dw(t),   x^ε(0) = x   (5)

where f(·, ·) and σ(·, ·) are appropriate functions satisfying suitable conditions. This model is motivated by, for example, the price movements in a stock market, where x^ε(t) represents the price of a stock, and f and σ are the expected appreciation rate and volatility, respectively. The Markov chain describes the market trends and other economic factors. As ε → 0, the dynamic system is close to an averaged system of switching diffusions. Again, our interest is to figure out the stability of the dynamic system governed by (5).
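A minimal sketch of how trajectories of (4) and (5) can be generated numerically is given below, using an Euler–Maruyama step along a piecewise-constant regime path; setting the diffusion coefficient to zero recovers the hybrid ODE (4). The regime path (which in practice could come from the chain simulator sketched earlier), the drift and volatility functions, and the step size are illustrative assumptions.

```python
import numpy as np

def euler_maruyama_switching(f, sigma, times, states, x0, T, dt, rng):
    """Integrate dx = f(x, a) dt + sigma(x, a) dw along a given piecewise-
    constant regime path: states[k] is the regime on [times[k], times[k+1])."""
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0], k = x0, 0
    for i in range(n):
        t = i * dt
        while k + 1 < len(times) and times[k + 1] <= t:
            k += 1                                   # advance to current regime
        a = states[k]
        dw = rng.normal(scale=np.sqrt(dt))
        x[i + 1] = x[i] + f(x[i], a) * dt + sigma(x[i], a) * dw
    return x

# A hand-made two-regime path and illustrative coefficients (assumptions only).
times, states = [0.0, 0.3, 0.7, 1.5, 4.0], [0, 1, 0, 1, 0]
drift = lambda x, a: -1.0 * x if a == 0 else -2.0 * x
vol   = lambda x, a: 0.2 * x          # use (lambda x, a: 0.0) for the ODE (4)

rng = np.random.default_rng(1)
path = euler_maruyama_switching(drift, vol, times, states,
                                x0=1.0, T=10.0, dt=1e-3, rng=rng)
print("x(T) ≈", path[-1])
```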
B. Preliminary Results

Since in a finite-state Markov chain not all states can be transient, either it consists of recurrent states only or, in addition to recurrent states, it also contains transient states. In this and the next sections, we consider the case that the chain contains only irreducible classes. The case of inclusion of transient states will be examined in Section IV. Since the rates of changes of the Markov chain are dominated by Q̃, its structure is crucial. For Markov chains including irreducible classes only, we assume

Q̃ = diag(Q̃^1, …, Q̃^l).   (6)

In view of (6), the state space M of the underlying Markov chain is decomposable into l subspaces. That is,

M = M_1 ∪ M_2 ∪ ⋯ ∪ M_l,   M_i = {s_{i1}, …, s_{im_i}},   m_1 + ⋯ + m_l = m   (7)

such that the generators Q̃^i associated with the subspaces M_i for i = 1, …, l are irreducible. Thus, the corresponding M_i for i = 1, …, l consist of recurrent states belonging to l ergodic classes. To proceed, lump the states in each M_i into a single state and define the aggregated process ᾱ^ε(·) by

ᾱ^ε(t) = i   if α^ε(t) ∈ M_i.   (8)

Denote the state space of ᾱ^ε(·) by M̄ = {1, …, l}, and denote ν^i = (ν^i_1, …, ν^i_{m_i}), where ν^i is the stationary distribution corresponding to Q̃^i. Define

Q̄ = diag(ν^1, …, ν^l) Q̂ diag(1_{m_1}, …, 1_{m_l})   (9)

with Q̄ = (q̄_{ij}), i, j ∈ M̄. The essence of the aggregated process is to treat all the states in M_i as one state, so the total number of states in the “effective” state space is much reduced. We need the following assumption about the generator Q̃ of α^ε(·).

A1) For each i = 1, …, l, Q̃^i is irreducible.

Note that, for simplicity, we have chosen to use the notion of irreducibility, but all the subsequent development carries over when irreducibility is replaced by weak irreducibility; see Example 2 in Section V and the further remarks in Section V-B of this paper and [22].

For the subsequent study, we need a couple of preliminary results. The proofs of i) and ii) in Lemma 1 can be found in [22, Cor. 6.12, p. 130]. For the proof of iii), see [22, Th. 7.4].

Lemma 1: Assume condition A1). Then, the following assertions hold.

i) For the probability distribution vector p^ε(t) = (P(α^ε(t) = s_{ij})) with initial data p^ε(0) = p^0, we have

p^ε(t) = θ(t) diag(ν^1, …, ν^l) + O(ε + e^{−κ_0 t/ε})   (10)

for some κ_0 > 0, where θ(t) satisfies dθ(t)/dt = θ(t)Q̄, θ(0) = p^0 diag(1_{m_1}, …, 1_{m_l}).

ii) For the transition probability matrix P^ε(t),

P^ε(t) = P^0(t) + O(ε + e^{−κ_0 t/ε})   (11)

where

P^0(t) = diag(1_{m_1}, …, 1_{m_l}) Θ(t) diag(ν^1, …, ν^l)   (12)

and Θ(t) satisfies dΘ(t)/dt = Θ(t)Q̄, Θ(0) = I.

iii) The aggregated process ᾱ^ε(·) converges weakly to ᾱ(·) as ε → 0, where ᾱ(·) is a Markov chain generated by Q̄.

The proof of the following theorem is in the Appendix.

Theorem 2: Assume A1). Then, for each i = 1, …, l, j = 1, …, m_i, and ρ > 0, we have

E(∫_0^∞ e^{−ρt} [I_{{α^ε(t) = s_{ij}}} − ν^i_j I_{{ᾱ^ε(t) = i}}] dt)^2 = O(ε)   (13)

where ν^i_j denotes the jth component of ν^i for i = 1, …, l and j = 1, …, m_i.

Remark 3: The quantity ∫_0^t I_{{α^ε(s) = s_{ij}}} ds is known as an occupation measure of the Markov chain since it represents the amount of time the chain spends in state s_{ij}. For instance, using such an idea, we can rewrite (4) as

dx^ε(t)/dt = Σ_{i=1}^l Σ_{j=1}^{m_i} f(x^ε(t), s_{ij}) I_{{α^ε(t) = s_{ij}}}

which often facilitates the required analysis. Theorem 2 indicates that the discounted occupation measure of α^ε(·) can be approximated by that of the aggregated process in the sense that the mean squares error is of the order O(ε) as given in (13). Although there are certain similarities, Theorem 2 is different from that of [22, p. 170], since in lieu of working with a finite horizon, infinite time intervals are dealt with. To ensure the integral to be well defined, a discount factor e^{−ρt} is used in (13).
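The aggregation in (8)–(9) is easy to carry out numerically. The sketch below, under the illustrative block generators used in the earlier simulation sketch (an assumption, not data from the paper), computes the stationary distribution ν^i of each irreducible block and assembles the aggregated generator Q̄ = diag(ν^1, …, ν^l) Q̂ diag(1_{m_1}, …, 1_{m_l}).

```python
import numpy as np
import scipy.linalg as sla

def stationary_distribution(Qk):
    """Solve nu Qk = 0 with sum(nu) = 1 for an irreducible generator block."""
    mk = Qk.shape[0]
    A = np.vstack([Qk.T, np.ones((1, mk))])      # transpose because nu is a row
    b = np.zeros(mk + 1)
    b[-1] = 1.0
    nu, *_ = np.linalg.lstsq(A, b, rcond=None)
    return nu

def aggregated_generator(blocks, Q_hat):
    """Q_bar = diag(nu^1,...,nu^l) Q_hat diag(1_{m_1},...,1_{m_l})."""
    nus = [stationary_distribution(Qk) for Qk in blocks]
    left = sla.block_diag(*[nu.reshape(1, -1) for nu in nus])               # l x m
    right = sla.block_diag(*[np.ones((Qk.shape[0], 1)) for Qk in blocks])   # m x l
    return left @ Q_hat @ right

# Illustrative data: two 2x2 irreducible blocks and a slow generator Q_hat.
Q1 = np.array([[-1.0, 1.0], [2.0, -2.0]])
Q2 = np.array([[-3.0, 3.0], [1.0, -1.0]])
Q_hat = np.array([[-0.5, 0.0, 0.5, 0.0],
                  [ 0.0,-0.5, 0.0, 0.5],
                  [ 0.5, 0.0,-0.5, 0.0],
                  [ 0.0, 0.5, 0.0,-0.5]])

Q_bar = aggregated_generator([Q1, Q2], Q_hat)
print("Q_bar =\n", Q_bar)     # a 2x2 generator; each row sums to zero
```

The resulting Q̄ is itself a generator on the reduced state space M̄, which is the Markov chain driving the limit systems discussed next.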
III. STABILITY OF HYBRID SYSTEMS

We proceed to study the stability of the hybrid systems. The analysis via the Lyapunov method for the original system is often difficult due to the ε-dependence and the large state space of the Markov chain, but the analysis of the limit system is relatively simple. The results in what follows demonstrate that, using the Lyapunov functions of the limit systems, we can study the stability of the original systems via the use of perturbed Lyapunov function methods.

A. Stability of Hybrid ODEs

To analyze the stability of (4), we need the following assumption.

A2) f(·, ·) satisfies the following.
a) For each i ∈ M, f(·, i) is Lipschitz continuous, |f(x, i) − f(y, i)| ≤ K|x − y|, and |f(x, i)| ≤ K(1 + |x|) for some K > 0.
b) f(0, i) = 0 for each i ∈ M.

The Lipschitz continuity in A2a) guarantees the existence and uniqueness of the solution of (4). Assumption A2b) indicates that x = 0 is a stationary point for each of the equations in (4). This condition is mainly for convenience. Following the usual argument of stability theory, by translation of axes, we need only consider the stability of this trivial solution. The following result can be proved similarly to the development in [22, Ch. 9].

Lemma 4: Under A1) and A2), the sequence (x^ε(·), ᾱ^ε(·)) given in (4) converges weakly to (x(·), ᾱ(·)) such that x(·) satisfies

dx(t)/dt = f̄(x(t), ᾱ(t))   (14)

where f̄(x, i) = Σ_{j=1}^{m_i} ν^i_j f(x, s_{ij}) for i ∈ M̄.

Remark 5: This lemma indicates that, associated with (4), there is a limit process in which the system is averaged out with respect to the stationary measure. Suppose that the system represents a manufacturing system. As pointed out in, for example, [19], the management at the higher level of the production decision-making hierarchy can ignore daily fluctuations in machine capacities and/or demand variations by looking only at the “average of the system” to make long-term planning decisions.

In what follows, assuming the stability of the limit process x(·), we derive the stability of x^ε(·). We will need a condition concerning the growth and smoothness of the Lyapunov function. It will be referred to as condition (G).

(G) For each i ∈ M̄, V(·, i) is a nonnegative real-valued function such that V(x, i) → ∞ as |x| → ∞; there is a positive integer n_0 such that V(·, i) is n_0-times continuously differentiable, with mixed partial derivatives up to order n_0 growing at most polynomially in |x| (see Remark 7).

Theorem 6: Assume A1) and A2), and suppose that for each i ∈ M̄, there is a Lyapunov function V(·, i) that satisfies (G), that is at least twice continuously differentiable, and that for some γ > 0 satisfies

[∇V(x, i)]' f̄(x, i) ≤ −γ V(x, i).   (15)

Then, for sufficiently small ε,

E V(x^ε(t), ᾱ^ε(t)) = O(e^{−γ t} + ε).   (16)

Remark 7: The previous theorem indicates that, for sufficiently small ε, the trivial solution of (14) being exponentially stable leads to that of (4) being exponentially stable. The growth and smoothness condition is satisfied if V(·, i) is a polynomial of order n_0 or has polynomial growth of order n_0; see [14, Th. 3.1] for the use of such a condition. For instance, if |D^k_x V(x, i)| ≤ K(1 + |x|^{n_0 − k}) for some positive constants K and n_0 independent of i (see [14]), the growth condition is satisfied. If k = 1, D^k_x V denotes the gradient; if k = 2, it is the Hessian (see [5]); and if k > 2, D^k_x V denotes the higher-order mixed partial derivatives in the sense of the usual multi-index convention. In the proof of Theorem 6, we use the techniques of perturbed test function methods, which have been proven to be a powerful tool. They were used to treat convergence of a sequence of diffusions [16], to study a large class of problems involving wide-band noise [11], and to analyze stochastic approximation algorithms [12]. The basic idea is to add a small perturbation to the Lyapunov function of the underlying system such that the perturbations result in needed cancellations of unwanted terms.

B. Stability of Hybrid SDEs

Similar to the previous section, we impose the following assumptions.

A2’) The functions f(·, ·) and σ(·, ·) satisfy the following.
a) There is a K > 0 such that for each i ∈ M, |f(x, i) − f(y, i)| ≤ K|x − y| and |σ(x, i) − σ(y, i)| ≤ K|x − y|.
b) f(0, i) = 0 and σ(0, i) = 0 for each i ∈ M.

Remark 8: Note that for each fixed i ∈ M, A2’b) and the Lipschitz continuity imply that |f(x, i)| ≤ K(1 + |x|) and |σ(x, i)| ≤ K(1 + |x|). This, together with A2’a), is the well-known Ito condition. After presenting a weak convergence lemma, we show that the trivial solution of (5) is exponentially stable for sufficiently small ε.

Lemma 9: Assume A1) and A2’). Then, (x^ε(·), ᾱ^ε(·)) converges weakly to (x(·), ᾱ(·)) such that x(·) is the solution of

dx(t) = f̄(x(t), ᾱ(t)) dt + σ̄(x(t), ᾱ(t)) dw(t)   (17)

with x(0) = x, where for each i ∈ M̄, f̄(x, i) = Σ_{j=1}^{m_i} ν^i_j f(x, s_{ij}) and σ̄(x, i) satisfies σ̄(x, i)σ̄'(x, i) = Σ_{j=1}^{m_i} ν^i_j σ(x, s_{ij})σ'(x, s_{ij}).

Theorem 10: Assume conditions A1) and A2’), and suppose that for each i ∈ M̄ there is a Lyapunov function V(·, i) satisfying (G) and having continuous mixed partial derivatives up to the order four, such that for some γ > 0

L V(x, i) ≤ −γ V(x, i)   (18)

where L denotes the generator of the limit switching diffusion (17). Then, for sufficiently small ε, the trivial solution of (5) is exponentially stable in the sense of (16).
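Condition (15) is a statement about the averaged drift only, so it can be checked with a few lines of code once the stationary distributions are known. The sketch below, using hypothetical per-regime linear drifts and hypothetical distributions ν^i (assumptions made only for illustration), verifies (15) for the quadratic candidate V(x, i) = x^2.

```python
import numpy as np

# Hypothetical per-regime linear drifts f(x, s_ij) = a[(i, j)] * x and
# stationary distributions nu^i of the fast blocks (illustrative values only).
a = {(1, 1): -0.5, (1, 2): -3.0,      # class 1
     (2, 1):  0.4, (2, 2): -2.0}      # class 2 contains one unstable regime
nu = {1: np.array([2/3, 1/3]), 2: np.array([1/4, 3/4])}

def f_bar(x, i):
    """Averaged drift of the limit system (14): sum_j nu^i_j f(x, s_ij)."""
    return sum(nu[i][j - 1] * a[(i, j)] * x for j in (1, 2))

# With V(x, i) = x^2, condition (15) reads 2 x f_bar(x, i) <= -gamma x^2,
# i.e., gamma <= -2 f_bar(1, i) for each class i.
gammas = [-2.0 * f_bar(1.0, i) for i in (1, 2)]
gamma = min(gammas)
print("admissible gamma per class:", gammas)
if gamma > 0:
    print(f"(15) holds with gamma = {gamma:.3f}; the limit system is exponentially stable")
else:
    print("the quadratic candidate does not verify (15)")
```

In this illustrative configuration one regime in class 2 is unstable on its own; it is the average with respect to ν^2 that satisfies (15), which is precisely the situation the theorems above are designed to cover.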
IV. INCLUSION OF TRANSIENT STATES

This section is concerned with the case that M includes transient states in addition to the ergodic classes. Let α^ε(t) be a Markov chain with generator Q^ε given by (3) and

Q̃ =
[ Q̃^1
         ⋱
               Q̃^l
  Q̃^1_*   ⋯   Q̃^l_*   Q̃_* ].   (19)

The state space is decomposed into l irreducible classes and a collection of transient states. That is, M = M_1 ∪ ⋯ ∪ M_l ∪ M_*, where M_* = {s_{*1}, …, s_{*m_*}} is a collection of transient states. We assume the following.

A1’) For i = 1, …, l, Q̃^i are irreducible, and Q̃_* is Hurwitz (i.e., all of its eigenvalues have negative real parts).

Define, for j = 1, …, m_*,

a_j = (a_{j,1}, …, a_{j,l}),   A_* = (a'_1, …, a'_{m_*})' = −Q̃_*^{−1}(Q̃^1_* 1_{m_1}, …, Q̃^l_* 1_{m_l})   (20)

so that a_{j,i} can be interpreted as the probability that the fast motion starting from the transient state s_{*j} is absorbed into the class M_i. Partition the matrix Q̂ as Q̂ = (Q̂^{kr}), k, r = 1, 2, where Q̂^{11} ∈ R^{(m−m_*)×(m−m_*)}, Q̂^{12} ∈ R^{(m−m_*)×m_*}, Q̂^{21} ∈ R^{m_*×(m−m_*)}, and Q̂^{22} ∈ R^{m_*×m_*}. Write

Q̄_* = diag(ν^1, …, ν^l)(Q̂^{11} diag(1_{m_1}, …, 1_{m_l}) + Q̂^{12} A_*).   (21)

Define the aggregated process ᾱ^ε(·) by

ᾱ^ε(t) = i   if α^ε(t) ∈ M_i,   ᾱ^ε(t) = U_j   if α^ε(t) = s_{*j}   (22)

where U_j is an M̄-valued random variable with P(U_j = i) = a_{j,i}, generated by a random variable uniformly distributed on [0, 1], independent of α^ε(·). Since we only lump the states in each of the irreducible classes, M̄ = {1, …, l} is again the state space of the aggregated process ᾱ^ε(·). The proofs of i) and ii) in Lemma 11 are in [22, Ch. 6], and that of iii) is in [23].

Lemma 11: Assume condition A1’). Then, the following assertions hold.

i) For the probability vector p^ε(t), using the partition p^ε(t) = (p^ε_M(t), p^ε_*(t)) with p^ε_M(t) ∈ R^{1×(m−m_*)} and p^ε_*(t) ∈ R^{1×m_*}, we have

(23)

for some κ_0 > 0, where θ(t) satisfies dθ(t)/dt = θ(t)Q̄_* with initial condition determined by p^ε(0).

ii) For the transition probability matrix P^ε(t), we have

(24)

where

(25)

with Θ(t) satisfying dΘ(t)/dt = Θ(t)Q̄_*, Θ(0) = I.

iii) ᾱ^ε(·) converges weakly to ᾱ(·), a Markov chain generated by Q̄_*.

Theorem 12: Assume condition A1’). Then
1) for i = 1, …, l and j = 1, …, m_i,
E(∫_0^∞ e^{−ρt} [I_{{α^ε(t) = s_{ij}}} − ν^i_j I_{{ᾱ^ε(t) = i}}] dt)^2 = O(ε);
2) for the transient states s_{*j}, j = 1, …, m_*,
E(∫_0^∞ e^{−ρt} I_{{α^ε(t) = s_{*j}}} dt)^2 = O(ε).

The proof of the aforementioned theorem is in the Appendix. The following lemma can be obtained by using essentially the same techniques as that of Lemma 9; we omit the details.

Lemma 13:
1) For the hybrid ODE, (x^ε(·), ᾱ^ε(·)) converges weakly to (x(·), ᾱ(·)) such that

dx(t)/dt = f̄(x(t), ᾱ(t)).   (26)

2) For the hybrid SDE, (x^ε(·), ᾱ^ε(·)) converges weakly to (x(·), ᾱ(·)) such that

dx(t) = f̄(x(t), ᾱ(t)) dt + σ̄(x(t), ᾱ(t)) dw(t)   (27)

where f̄ and σ̄ are as defined in Lemma 4 and Lemma 9, respectively, and ᾱ(·) is the Markov chain generated by Q̄_*.

Remark 14: Note that even when a collection of transient states is included, the limit system is still an average with respect to the stationary measures. Asymptotically, the transient states can be ignored in the limit system since the corresponding probabilities go to zero rapidly.

Theorem 15: Assume A1’) and A2) [respectively, A1’) and A2’)]. Suppose that for each i ∈ M̄, there is a Lyapunov function V(·, i) satisfying (G) and the smoothness condition as in Theorem 6 (respectively, Theorem 10) such that (15) [respectively, (18)] holds. Then the conclusion of Theorem 6 (respectively, Theorem 10) continues to hold for sufficiently small ε.
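The assertion of Theorem 12, part 2, and Remark 14, that the time spent in the transient states is asymptotically negligible, is easy to observe numerically. The sketch below uses a small illustrative generator of the form (19) (two recurrent classes, one transient state, and a hypothetical slow perturbation), all of which are assumptions made for demonstration.

```python
import numpy as np

def frac_time_in(Q, x0, T, target, rng):
    """Fraction of [0, T] that a CTMC with generator Q spends in the set `target`."""
    t, state, occ = 0.0, x0, 0.0
    while t < T:
        rate = -Q[state, state]
        hold = rng.exponential(1.0 / rate) if rate > 0 else (T - t)
        if state in target:
            occ += min(hold, T - t)
        t += hold
        if t < T and rate > 0:
            p = Q[state].copy()
            p[state] = 0.0
            state = rng.choice(len(p), p=p / rate)
    return occ / T

# Illustrative generator of the form (19): two 2-state recurrent classes
# {0, 1} and {2, 3}, and a single transient state 4 (Q_tilde_* = [-2] is Hurwitz).
Q_tilde = np.array([[-1.,  1.,  0.,  0.,  0.],
                    [ 1., -1.,  0.,  0.,  0.],
                    [ 0.,  0., -2.,  2.,  0.],
                    [ 0.,  0.,  2., -2.,  0.],
                    [ 1.,  0.,  1.,  0., -2.]])
Q_hat = np.zeros((5, 5))
Q_hat[:4, 4] = 0.2                       # slow leaks back into the transient state
Q_hat[range(4), range(4)] = -0.2

rng = np.random.default_rng(2)
for eps in (0.2, 0.05, 0.0125):
    frac = frac_time_in(Q_tilde / eps + Q_hat, x0=4, T=200.0, target={4}, rng=rng)
    print(f"eps = {eps:7.4f}: fraction of time in the transient state ≈ {frac:.4f}")
```

As ε decreases, the observed fraction shrinks roughly in proportion to ε, which is consistent with the O(ε) bound on the occupation measure of the transient states.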
V. EXAMPLES AND FURTHER REMARKS

A. Examples

We present several examples to illustrate the stability results studied thus far. Since they are for demonstration purposes, we use only simple examples. In the first example, we delineate the sample paths of the Markov chain as well as those of the hybrid system. As can be seen, as ε gets smaller and smaller, the Markov chain undergoes rapid variations. Although the transitions take place at a fast pace, as predicted by our results, the hybrid system is still stable. In the second and third examples, we work out the analytic expressions and show the longtime properties of the underlying systems. It is easily seen from these examples that constructing Lyapunov functions for the original systems may be difficult, but the Lyapunov functions for the limit systems are much simpler. Using the Lyapunov functions of the limit systems, we can make a judgement on the stability of the original systems.

Example 1: Consider a hybrid system in which the Markov chain has two states M = {1, 2} and generator Q^ε of the form (3). Consider system (4) driven by this chain. We demonstrate the behavior of the hybrid system by plotting the sample paths of the Markov chain and the trajectories of the dynamic system in Figs. 1 and 2.
Fig. 1. Sample path of the Markov chain and trajectory of the dynamic system (ε = 0.1).
Fig. 2. Sample path of the Markov chain and trajectory of the dynamic system (ε = 0.05).
The small ε produces a squeezing effect resulting in rapid variations in the Markovian sample paths. The system quickly comes to its equilibrium position. Fig. 1 shows the results for ε = 0.1, whereas Fig. 2 displays the sample path and trajectory corresponding to ε = 0.05.

Example 2: Let α^ε(t) be a Markov chain generated by Q^ε given in (3), where Q̃ consists of two weakly irreducible blocks and Q̂ provides the slow interactions. Straightforward computation confirms that the quasistationary distributions (see the notions of weak irreducibility and quasistationary distribution in [22, Sec. 2.5] and/or the section on further remarks in this paper) are ν^1 and ν^2, respectively. Consider the hybrid ODE

(28)

where ᾱ(·) is the Markov chain generated by Q̄. As ε → 0, (x^ε(·), ᾱ^ε(·)) converges to (x(·), ᾱ(·)) such that x(·) satisfies the limit equation

(29)

Choosing a Lyapunov function for the limit equation (29) that satisfies (15), we conclude, by Theorem 6, that the solution of (28) is stable.

Example 3: Let α^ε(t) be a Markov chain generated by Q^ε = Q̃/ε. Then the quasistationary distribution is ν. Consider a one-dimensional equation

(30)

The limit equation is

(31)

With a suitable Lyapunov function, the condition (18) is verified for the limit equation, so the solution of (31) is stable. By Theorem 10, we conclude that the solution of (30) is also stable. It is interesting to note that we can substitute any other functions for f(x, i) and σ(x, i), i ∈ M, as long as they satisfy A2’). The limit equation remains the same and we get the same conclusion for the stability. The reason is that the limit equation is an average with respect to the quasistationary measure ν.
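In the spirit of Examples 1–3, the following rough Monte Carlo sketch compares the second moment of an illustrative scalar version of (5), driven by a fast two-state chain, with the prediction of its averaged limit (17). The generator, coefficients, and discretization are assumptions chosen for demonstration; they are not the matrices or functions used in the examples above.

```python
import numpy as np

def fast_two_state_path(eps, T, dt, rng):
    """Regime path on a dt-grid for a two-state chain generated by
    (1/eps) * [[-1, 1], [1, -1]] (illustrative; small-dt switching rule)."""
    n = int(T / dt)
    a = np.empty(n + 1, dtype=int)
    a[0] = 0
    for i in range(n):
        a[i + 1] = 1 - a[i] if rng.random() < dt / eps else a[i]
    return a

def second_moment(eps, T=2.0, dt=1e-3, n_paths=500, seed=3):
    """Monte Carlo estimate of E|x^eps(T)|^2 for an illustrative scalar
    version of (5): dx = b[a] x dt + s[a] x dw with regimes a in {0, 1}."""
    rng = np.random.default_rng(seed)
    b, s = np.array([-0.5, -1.5]), np.array([0.3, 0.5])
    n = int(T / dt)
    vals = np.empty(n_paths)
    for p in range(n_paths):
        a = fast_two_state_path(eps, T, dt, rng)
        x = 1.0
        for i in range(n):
            x += b[a[i]] * x * dt + s[a[i]] * x * rng.normal(scale=np.sqrt(dt))
        vals[p] = x * x
    return vals.mean()

# The limit equation averages b and s^2 with nu = (1/2, 1/2):
# b_bar = -1.0, s_bar^2 = 0.17, so E|x(T)|^2 = exp((2*b_bar + s_bar^2) * T).
for eps in (0.2, 0.05):
    print(f"eps = {eps}: E|x^eps(T)|^2 ≈ {second_moment(eps):.4f}")
print(f"limit prediction: {np.exp((2 * -1.0 + 0.17) * 2.0):.4f}")
```

As ε decreases, the simulated moments approach the value predicted by the limit equation, which is the replacement property that the stability analysis is meant to justify over long horizons.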
B. Further Remarks

This paper has been devoted to the stability of hybrid systems. The evolutions of the dynamic systems follow the trajectories of ordinary differential equations or stochastic differential equations whose configurations are modulated by a continuous-time Markov chain, which results in markedly different behavior among different regimes. Using a two-time-scale formulation originating from either high contrast in the physical system or complexity-reduction considerations, we have shown that if the limit system is stable, then the original system is also stable in an appropriate sense.

One can work with the notion of weak irreducibility, which is a slight generalization of the usual notion of irreducibility. A generator or its corresponding Markov chain is said to be weakly irreducible if (2) has a unique solution satisfying ν_i ≥ 0 for each i, and the unique solution is called the quasistationary distribution. The usual definition of irreducibility requires that the limit probability distribution vector have strictly positive components, whereas weak irreducibility requires only that the limit distribution have nonnegative components. For example, whereas the generator

[ −1  1
   1 −1 ]

is irreducible, the generator

[ 0  0
  1 −1 ]

is only weakly irreducible (its quasistationary distribution is (1, 0)). Replacing the irreducibility in A1) and A1’) by weak irreducibility, the results obtained continue to hold.

As further research topics, one may consider mean recurrence time estimates. Effort can also be directed toward various estimates of excursion probabilities and to possible extensions to nonstationary Markov processes.
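The distinction between irreducibility and weak irreducibility is easy to exhibit numerically by solving (2) directly. The short sketch below does so for the two illustrative two-state generators displayed above (which are this sketch's own examples, not matrices taken from the original displays).

```python
import numpy as np

def quasi_stationary(Q):
    """Solve nu Q = 0 with the components of nu summing to one (cf. (2))."""
    m = Q.shape[0]
    A = np.vstack([Q.T, np.ones((1, m))])
    b = np.zeros(m + 1)
    b[-1] = 1.0
    nu, *_ = np.linalg.lstsq(A, b, rcond=None)
    return nu

Q_irreducible = np.array([[-1.0, 1.0], [1.0, -1.0]])
Q_weak = np.array([[0.0, 0.0], [1.0, -1.0]])     # state 1 is absorbing

print(quasi_stationary(Q_irreducible))   # (0.5, 0.5): strictly positive
print(quasi_stationary(Q_weak))          # (1.0, 0.0): nonnegative only
```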
APPENDIX

This section contains the proofs of several technical results.

A. Proof of Theorem 2

Results from Lemma 1 and direct calculation reveal that

For the term on the second line, carrying out the indicated integrations leads to

In addition, for some κ_0 > 0,

Thus

Likewise, by symmetry, we also have

The proof is completed.

B. Proof of Theorem 6

Let F_t = σ{α^ε(s) : s ≤ t}, and let E_t denote the expectation conditioned on F_t. For a suitable function g(·, ·, ·), define the operator L^ε by

L^ε g(x, t, i) = ∂_t g(x, t, i) + [∇_x g(x, t, i)]' f(x, i) + Q^ε g(x, t, ·)(i).   (32)

Let V(·, ·) be a real-valued function with V(·, i) being twice continuously differentiable for each i. Then

L^ε V(x, i) = [∇V(x, i)]' f(x, i) + Q^ε V(x, ·)(i).   (33)

Often a truncation is applied to the Lyapunov function by using a bounded smooth function that is unity on the sphere of radius N centered at the origin and zero outside a slightly larger sphere, and by defining the truncated Lyapunov function accordingly. Such an approach is equivalent to having a stopping rule on the sphere of radius N. Nevertheless, in the current situation, this can be much simplified by use of the argument of [11, pp. 148–149]. Thus, we work with the Lyapunov function directly in what follows. Define

V(x, ᾱ^ε(t)) = V(x, i)   if α^ε(t) ∈ M_i.   (34)

Note that

V(x, ᾱ^ε(t)) = Σ_{i=1}^l V(x, i) I_{{α^ε(t) ∈ M_i}} = Σ_{i=1}^l Σ_{j=1}^{m_i} V(x, i) I_{{α^ε(t) = s_{ij}}}   (35)

so these two expressions will be used interchangeably in what follows. Observe that V(x, ᾱ^ε(t)), viewed as a function of α^ε(t), is constant on each M_i, and each Q̃^i is a generator, so Q̃ is orthogonal to V(x, ᾱ^ε(·)), i.e., Q̃ V(x, ᾱ^ε(·))(α) = 0 for every α ∈ M; consequently, Q^ε V(x, ᾱ^ε(·))(α) = Q̂ V(x, ᾱ^ε(·))(α).
Define the perturbation

(36)

By the definition of the perturbation in (36),

(37)

Furthermore, we have the first equation shown at the bottom of the page. It can be shown (see [22, p. 187]) that, for t ≥ 0,

(38)

Thus, using condition (G), (35), (37), and (38) in (36), we obtain

(39)

Moreover

To proceed, for convenience, use the notation

(40)

Using (32), we obtain

(41)

In this, we have used (37), (38), and a Taylor expansion of V(·, ᾱ^ε(·)) about x^ε(t) to obtain

(42)

Define a second perturbation. Then, by the definitions of Q̄ and V(x, ᾱ^ε(t)) in (9) and (34), we have the second equation shown at the bottom of the next page. Similar to the estimate for the perturbation in (36),

(43)

Define

(44)

It follows from (35), (39), and (43) that

(45)

In addition
Therefore

(46)

where γ is defined in (15). Using (15), (45), and (46), we obtain

Multiplying both sides by e^{γt} yields

(47)

An application of the generalized Gronwall’s inequality [6, Lemma 6.2, p. 36] then yields that

(48)

Using (45), we also have that (16) holds. The desired result then follows.

C. Sketch of the Proof of Lemma 9

Since the focus is on the stability, we outline only the main ideas involved in the proof. The detailed estimates are omitted for brevity.

Step 1) It is most convenient to work with a truncated process (see [12, p. 248 and p. 278]). Let N be given and let x^{ε,N}(·) be equal to x^ε(·) up until the first exit from the sphere S_N centered at the origin with radius N. We then proceed to derive the tightness of the pair {x^{ε,N}(·), ᾱ^ε(·)}. Using the well-known Cramér–Wold theorem [1, p. 49], since {ᾱ^ε(·)} is tight, it suffices to consider the tightness of {x^{ε,N}(·)}. Detailed estimates reveal a moment bound that implies the desired tightness by virtue of the tightness criterion (see, e.g., [1], [11], [12]).

Step 2) Extract a weakly convergent subsequence, still indexed by ε for notational simplicity, and denote the limit by (x^N(·), ᾱ(·)). We characterize the limit process by showing that (x^N(·), ᾱ(·)) is the unique solution of the martingale problem with operator L^N, an N-truncation (see [12]) of the operator L that was defined in (18).

Step 3) Using a piecing-together argument similar to [12, pp. 248–250], we show that as N → ∞, the untruncated process also converges and is a solution of the martingale problem with operator L.

D. Proof of Theorem 10

Define V(x, ᾱ^ε(t)) as in (34). Then define the perturbation as in (36). Similar to before, it can be shown that the analogous estimates hold. Using the notation as in (40) and by the definition of L^ε, we have the equation shown at the bottom of the page. It is easily seen that

(49)

and that

(50)

By the independence of w(·) and α^ε(·), using (37) and (40), we arrive at (51), shown at the bottom of the page.
Note that, using (38), (40), and (51), the second equation shown at the bottom of the page holds, and (52) and (53), shown at the bottom of the page, also hold. Using estimates (49), (50), (52), and (53), we obtain

To proceed, define (54), shown at the bottom of the page. Similar to the estimates for the perturbation in (36), it can be verified that

(55)

Moreover, detailed computation reveals that

(56)

Define

(57)
(51)
(52)
(53)
(54)
It follows that

(58)

Therefore

(59)

Using a similar argument as in the last part of the proof of Theorem 6, the desired result follows.

E. Proof of Theorem 12

First, we will prove the second part. Since, by Lemma 11,

(60)

for any t ≥ 0, similar calculations as in the proof of Theorem 2 show the first equation shown at the bottom of the page. To prove the first part, note that, by the definition of ᾱ^ε(·), the second equation shown at the bottom of the page holds. It has already been shown that the second and the fourth lines above are of the order O(ε); it remains to prove that the third line is of the order O(ε) as well. Again, by Lemma 11, for any i, j, and t, we have

(61)

Therefore, by using the independence of the uniformly distributed random variable in (22) and α^ε(·), together with (60) and (61), we have the third equation shown at the bottom of the page. The proof is completed.

F. Proof of Theorem 15

We shall only indicate the differences compared with that of Theorem 6. For i ∈ M̄, let V(·, i) be a Lyapunov function, and define

(62)
It is easy to check that the required property holds for the quantities defined in (19). Moreover
Therefore, by using Theorem 12, we can carry out the proof as in that of Theorem 6. Similarly, we can carry out the modification corresponding to Theorem 10 for the hybrid SDE. A few details are omitted.

REFERENCES

[1] P. Billingsley, Convergence of Probability Measures. New York: Wiley, 1968.
[2] W. P. Blair and D. D. Sworder, “Feedback control of a class of linear discrete systems with jump parameters and quadratic cost criteria,” Int. J. Control, vol. 21, pp. 833–841, 1986.
[3] G. B. Blankenship and G. C. Papanicolaou, “Stability and control of stochastic systems with wide band noise,” SIAM J. Appl. Math., vol. 34, pp. 437–476, 1978.
[4] X. Feng, K. A. Loparo, Y. Ji, and H. J. Chizeck, “Stochastic stability properties of jump linear systems,” IEEE Trans. Automat. Contr., vol. 37, pp. 38–53, Jan. 1992.
[5] A. Graham, Kronecker Products and Matrix Calculus with Applications. Chichester, U.K.: Ellis Horwood, 1981.
[6] J. K. Hale, Ordinary Differential Equations. Malabar, FL: Krieger, 1980.
[7] Y. Ji and H. J. Chizeck, “Controllability, stabilizability, and continuous-time Markovian jump linear quadratic control,” IEEE Trans. Automat. Contr., vol. 35, pp. 777–788, July 1990.
[8] R. Z. Khasminskii, Stochastic Stability of Differential Equations. Alphen aan den Rijn, The Netherlands: Sijthoff & Noordhoff, 1980.
[9] R. Z. Khasminskii, G. Yin, and Q. Zhang, “Asymptotic expansions of singularly perturbed systems involving rapidly fluctuating Markov chains,” SIAM J. Appl. Math., vol. 56, pp. 277–293, 1996.
[10] H. J. Kushner, Stochastic Stability and Control. New York: Academic, 1967.
[11] H. J. Kushner, Approximation and Weak Convergence Methods for Random Processes, with Applications to Stochastic Systems Theory. Cambridge, MA: MIT Press, 1984.
[12] H. J. Kushner and G. Yin, Stochastic Approximation Algorithms and Applications. New York: Springer-Verlag, 1997.
[13] X. Mao, Exponential Stability of Stochastic Differential Equations. New York: Marcel Dekker, 1994.
[14] X. Mao, “Stability of stochastic differential equations with Markovian switching,” Stoch. Process. Appl., vol. 79, pp. 45–67, 1999.
[15] M. Mariton and P. Bertrand, “Robust jump linear quadratic control: A mode stabilizing solution,” IEEE Trans. Automat. Contr., vol. AC-30, pp. 1145–1147, Nov. 1985.
[16] G. C. Papanicolaou, D. Stroock, and S. R. S. Varadhan, “Martingale approach to some limit theorems,” in Proceedings of the Conference on Statistical Mechanics, Dynamical Systems, and Turbulence, M. Reed, Ed. Durham, NC: Duke Univ., 1977, vol. 3.
[17] A. A. Pervozvanskii and V. G. Gaitsgori, Theory of Suboptimal Decisions: Decomposition and Aggregation. Dordrecht, The Netherlands: Kluwer, 1988.
[18] H. A. Simon and A. Ando, “Aggregation of variables in dynamic systems,” Econometrica, vol. 29, pp. 111–138, 1961.
[19] S. P. Sethi and Q. Zhang, Hierarchical Decision Making in Stochastic Manufacturing Systems. Boston, MA: Birkhäuser, 1994.
[20] G. Yin, R. H. Liu, and Q. Zhang, “Recursive algorithms for stock liquidation: A stochastic optimization approach,” SIAM J. Optim., vol. 13, pp. 241–258, 2002.
[21] G. Yin and J. F. Zhang, “Hybrid singular systems of differential equations,” Scientia Sinica, vol. 45, pp. 241–258, 2002.
[22] G. Yin and Q. Zhang, Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach. New York: Springer-Verlag, 1998.
[23] G. Yin, Q. Zhang, and G. Badowski, “Asymptotic properties of a singularly perturbed Markov chain with inclusion of transient states,” Ann. Appl. Probab., vol. 10, pp. 549–572, 2000.
[24] Q. Zhang and G. Yin, “On nearly optimal controls of hybrid LQG problems,” IEEE Trans. Automat. Contr., vol. 44, pp. 2271–2282, Dec. 1999.
Grazyna Badowski received the M.A. degree in statistics and the Ph.D. degree in mathematics from Wayne State University, Detroit, MI, in 1998 and 2001, respectively. She is currently a Research Fellow at the Institute for Systems Research, University of Maryland, College Park. Her research interests include stochastic models, especially Markov decision processes, stochastic optimization and control, simulation modeling, and analysis. Dr. Badowski is a Member of the Golden Key National Honor Society, the American Mathematical Society, the Association for Women in Mathematics, and the Society for Industrial and Applied Mathematics.
G. George Yin (F’02) received the B.S. degree in mathematics from the University of Delaware, Newark, and the M.S. degree in electrical engineering and the Ph.D. degree in applied mathematics from Brown University, Providence, RI, in 1983 and 1987, respectively. Subsequently, he joined the Department of Mathematics, Wayne State University, Detroit, MI, where he became a Professor in 1996. He served on the Editorial Board of Stochastic Optimization and Design, the Mathematical Review Data Base Committee, and various conference program committees, and was the Editor of the SIAM Activity Group on Control and Systems Theory Newsletters, the SIAM Representative to the 34th Conference on Decision and Control, and Chair of the 1996 AMS-SIAM Summer Seminar in Applied Mathematics. He is also Chair of one of the 2003 AMS-IMA-SIAM Summer Research Conferences. Dr. Yin has served as an Associate Editor of the IEEE TRANSACTIONS ON AUTOMATIC CONTROL.