IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 52, NO. 8, AUGUST 2007
Intelligent Excitation for Adaptive Control With Unknown Parameters in Reference Input

Chengyu Cao, Naira Hovakimyan, and Jiang Wang

Abstract—The model reference adaptive control problem is considered for a class of reference inputs that depend upon the unknown parameters of the system. Due to the uncertainty in the reference input, the tracking objective cannot be achieved without parameter convergence. The common approach of injecting persistent excitation (PE) into the reference input leads to tracking of the excited reference input as opposed to the true one. A new technique, named intelligent excitation, is presented for introducing an excitation signal into the reference input and regulating its amplitude depending upon the convergence of the output tracking and parameter errors. Intelligent excitation ensures parameter convergence, similar to conventional PE; it vanishes as the errors converge to zero and reinitiates with every change in the unknown parameters. As a result, the regulated output tracks the desired reference input and not the excited one.

Index Terms—Adaptive control, excitation, parameter convergence.
I. INTRODUCTION

This note considers the framework of the model reference adaptive control (MRAC) architecture for the class of reference inputs that depend upon the unknown parameters of the system dynamics. This problem formulation is motivated by practical applications, like visual guidance [1] or closed-coupled formation flight [10]. In [1], the target tracking problem is considered with a visual sensor (monocular camera), where the characteristic length of the target is unknown and the relative range is not observable from the available visual measurements. In [10], autopilot design is considered for a trailing aircraft that follows a lead aircraft in a closed-coupled formation. The trailing aircraft must constantly seek an optimal position relative to the leader to minimize the unknown drag effects introduced by the wing-tip vortices of the lead aircraft. Both problems lead to the definition of a reference system with a reference input dependent upon the unknown parameters of the system. Following the common approach of injecting persistent excitation (PE) into the reference input leads to parameter convergence [2], [11], [12], but the system output then tracks the excited reference input and not the true one. A new technique, named intelligent excitation, is presented in this note to solve the output tracking problem while ensuring parameter convergence. Its main feature is that it initiates excitation only when the tracking error exceeds a prespecified threshold, and it vanishes as the parameter error converges to a neighborhood of the origin. The relationship between convergence of the parameter and the tracking errors, as well as between the conditions for convergence, has been extensively explored in the literature [2]–[8], [13]. A well-known fact is that exponential parameter convergence leads to exponential tracking error convergence [2].
We note that for a special set of regressor functions, like periodic ones, one can prove exponential convergence of tracking error to zero without the PE
Manuscript received August 5, 2005; revised October 23, 2006 and May 1, 2007. Recommended by Associate Editor M. Demetriou. This work was supported by the Air Force Office of Scientific Research (AFOSR) under Contract FA9550-05-1-0157 and the Multidisciplinary University Research Initiative (MURI) under Subcontract F49620-03-1-0401. The authors are with the Department of Aerospace and Ocean Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061-0203 USA (e-mail: [email protected]; [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TAC.2007.902780 0018-9286/$25.00 © 2007 IEEE
Authorized licensed use limited to: UNIVERSITY OF CONNECTICUT. Downloaded on February 12, 2009 at 09:44 from IEEE Xplore. Restrictions apply.
requirement or parameter convergence [4], [5], [9]. However, in the problem of interest to us, namely, when the reference input depends upon the unknown parameters of the system dynamics, the control objective cannot be met without parameter convergence. We further notice that in the presence of PE the exponential convergence of the tracking error to zero does not imply convergence of the system output to the desired reference input. Instead, the output tracks the excited reference input. In this note, we present a control design methodology that ensures simultaneous parameter convergence and tracking of the desired reference input. This note is organized as follows. Section II presents the problem formulation. The adaptive controller with intelligent excitation is introduced in Section III. Convergence of the regulated output within desired precision in finite time is shown in Section IV. In Section V, simulation results are presented, and Section VI concludes this note. The proofs are included in the Appendix.

II. PROBLEM FORMULATION

Consider the following single-input–single-output (SISO) system dynamics:
$$\dot x(t) = A_m x(t) + b_m \frac{1}{r^*}\left(u(t) - (\theta_x^*)^\top x(t)\right), \qquad y(t) = c^\top x(t) \tag{1}$$

where $x \in \mathbb{R}^n$ is the system state vector (measurable), $u \in \mathbb{R}$ is the control signal, $b_m \in \mathbb{R}^n$ and $c \in \mathbb{R}^n$ are known constant vectors, $A_m$ is a known Hurwitz $n \times n$ matrix, $r^* \in \mathbb{R}$ is an unknown constant with known sign, $\theta_x^* \in \mathbb{R}^n$ is the vector of unknown parameters, and $y \in \mathbb{R}$ is the regulated output. Let $\theta^* = [(\theta_x^*)^\top \;\; r^*]^\top$. The control objective is to regulate the output $y$ so that it tracks $r(\theta^*)$, where $r: \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}$ is a known map dependent upon the unknown parameters of the system. Let $\Theta^*$ be the compact set to which the unknown parameters belong, i.e., $\theta^* \in \Theta^*$. In the case of a known reference signal $r(t)$, application of conventional MRAC ensures that the tracking error between the desired reference model and the system state goes to zero asymptotically, which consequently leads to output convergence $y(t) \to r(t)$. Since $r(\theta^*)$ depends upon the unknown parameters $\theta^*$, it cannot be used in the feedforward component of the MRAC architecture, even if the map $r$ is known.

III. ADAPTIVE CONTROLLER USING INTELLIGENT EXCITATION

In this section, we present a solution to the tracking problem in the presence of the unknown $r(\theta^*)$. We consider the following reference model:

$$\dot x_m(t) = A_m x_m(t) + b_m k_g \hat r(t), \qquad y_m(t) = c^\top x_m(t) \tag{2}$$

where $k_g = \lim_{s \to 0} \dfrac{1}{c^\top (sI - A_m)^{-1} b_m}$, and the following controller:

$$u(t) = \theta_x^\top(t)\, x(t) + \theta_r(t)\, k_g \hat r(t) \tag{3}$$
in which $\hat r(t)$ is a bounded reference signal to be defined shortly, and $\theta_x(t) \in \mathbb{R}^n$ and $\theta_r(t) \in \mathbb{R}$ are the adaptive parameters governed by the following adaptive laws [15]:

$$\dot\theta_x(t) = \Gamma_x\, \mathrm{Proj}\!\left(\theta_x(t),\; -x(t)\, e^\top(t) P b_m\, \mathrm{sgn}(r^*)\right)$$

$$\dot\theta_r(t) = \Gamma_r\, \mathrm{Proj}\!\left(\theta_r(t),\; -k_g \hat r(t)\, e^\top(t) P b_m\, \mathrm{sgn}(r^*)\right) \tag{4}$$

where $e(t) = x(t) - x_m(t)$. In (4), $\Gamma_x > 0$ and $\Gamma_r > 0$ are the adaptation rates, $P = P^\top > 0$ is the solution of the algebraic Lyapunov equation $A_m^\top P + P A_m = -Q$ for arbitrary $Q > 0$, and $\mathrm{Proj}(\cdot, \cdot)$ denotes the projection operator defined as
$$\mathrm{Proj}(\theta, y) = \begin{cases} y, & \text{if } f(\theta) < 0 \\ y, & \text{if } f(\theta) \ge 0 \text{ and } \nabla f^\top y \le 0 \\ y - \dfrac{\nabla f\, \nabla f^\top y}{|\nabla f|^2}\, f(\theta), & \text{if } f(\theta) \ge 0 \text{ and } \nabla f^\top y > 0 \end{cases} \tag{5}$$

where $f: \mathbb{R}^n \to \mathbb{R}$ is the smooth convex function $f(\theta) = (\theta^\top \theta - \theta_{\max}^2)/\epsilon_\theta$, $\theta_{\max}$ is the norm bound imposed on the parameter vector $\theta$, and $\epsilon_\theta > 0$ denotes the convergence tolerance of our choice. Let

$$\bar x(t) = [\hat r(t) \;\; x_m^\top(t)]^\top, \qquad H(s) = \bar x(s)/\hat r(s). \tag{6}$$
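A minimal numerical sketch of the projection operator (5); the bound $\theta_{\max}$, tolerance $\epsilon_\theta$, and test vectors below are illustrative choices, not values from the note.

```python
import numpy as np

def proj(theta, y, theta_max=10.0, eps=0.1):
    """Projection operator (5) with f(theta) = (theta'theta - theta_max^2)/eps.
    Returns y unmodified inside the bound or when y points inward; otherwise
    removes the outward component of y, scaled by f(theta)."""
    f = (theta @ theta - theta_max ** 2) / eps
    grad = 2.0 * theta / eps                      # gradient of f at theta
    if f < 0 or grad @ y <= 0:
        return y
    return y - grad * (grad @ y) * f / (grad @ grad)
```

In the adaptive laws (4), `y` is the raw adaptation direction $-x(t) e^\top(t) P b_m\,\mathrm{sgn}(r^*)$ (respectively $-k_g \hat r(t) e^\top(t) P b_m\,\mathrm{sgn}(r^*)$), and `proj` keeps the parameter estimates within the prescribed norm bound.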
It follows from (2) that $x_m(s)/\hat r(s) = (sI - A_m)^{-1} b_m k_g$, and hence

$$H(s) = \left[\, 1 \;\;\; \big((sI - A_m)^{-1} b_m k_g\big)^\top \,\right]^\top. \tag{7}$$
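The transfer matrix (7) can be evaluated numerically. In the sketch below, the second-order plant data and the candidate frequencies are illustrative assumptions (not the note's data); stacking $\mathrm{Re}\,H(j\omega_i)$ and $\mathrm{Im}\,H(j\omega_i)$ as columns anticipates the rank condition used in Lemma 1 below, and the last lines locate the first zero of the excitation signal of (9).

```python
import numpy as np

# Illustrative second-order data (assumed): A_m Hurwitz, b_m, c known.
n = 2
Am = np.array([[-1.0, 0.5], [0.0, -2.0]])
bm = np.array([[0.0], [1.0]])
c = np.array([[1.0], [0.0]])

# k_g = lim_{s->0} 1/(c'(sI - A_m)^{-1} b_m), evaluated at s = 0
kg = 1.0 / (c.T @ np.linalg.inv(-Am) @ bm).item()

def H(s):
    """H(s) = [1  ((sI - A_m)^{-1} b_m k_g)']' of (7), as an (n+1)-vector."""
    v = np.linalg.solve(s * np.eye(n) - Am, bm * kg).ravel()
    return np.concatenate(([1.0], v))

# Columns: Re H(j w_i), Im H(j w_i) -- the matrix defined in (8) below
omegas = [6.0, 9.0]                      # m = 2 candidate frequencies
cols = []
for w in omegas:
    h = H(1j * w)
    cols.extend([h.real, h.imag])
Omega = np.column_stack(cols)            # (n+1) x 2m

# First positive zero of e_x(t) = sum_i sin(w_i t), cf. the signal (9) below
tt = np.linspace(1e-6, 2.0, 200001)
exv = sum(np.sin(w * tt) for w in omegas)
T_first = tt[np.where(np.diff(np.sign(exv)) != 0)[0][0]]
```

For this data, `Omega` is $3 \times 4$ with full row rank $n+1 = 3$, so the condition of Lemma 1 holds for $\omega = \{6, 9\}$, and `T_first` $\approx 2\pi/15$.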
We define an $(n+1) \times 2m$ matrix $\Omega$ with its $p$th-row, $q$th-column element given by

$$\Omega_{pq} = \begin{cases} \mathrm{Re}\big(H_p(j\omega_{\lceil q/2 \rceil})\big), & \text{if } q \text{ is odd} \\ \mathrm{Im}\big(H_p(j\omega_{\lceil q/2 \rceil})\big), & \text{if } q \text{ is even} \end{cases} \qquad q = 1, 2, \ldots, 2m \tag{8}$$

where $\lceil q/2 \rceil$ denotes the smallest integer that is $\ge q/2$ and $H_p(s)$ is the $p$th element of $H(s)$ defined in (7).

Lemma 1: There exist $m$ and $\omega_1, \ldots, \omega_m$ such that $\Omega$ has full row rank.

Consider the following excitation signal over a finite time interval $[0, T]$:

$$e_x(t) = \sum_{i=1}^{m} \sin(\omega_i t), \qquad t \in [0, T] \tag{9}$$
where $\omega_1, \ldots, \omega_m$ ensure that $\Omega$ has full row rank, and $T > 0$ is the first time instant for which $e_x(T) = 0$. The existence of a finite $T$ is straightforward for a linear combination of sinusoidal functions. Let $\theta(t) = [\theta_x^\top(t) \;\; \theta_r(t)]^\top$. The reference signal $\hat r(t)$ is defined as

$$\hat r(t) = \hat r_0(t) + E_x(t) \tag{10}$$

$$\hat r_0(t) = r(\theta(t)) \tag{11}$$

$$E_x(t) = k(t)\, e_x(t - jT), \qquad \text{if } t \in [jT, (j+1)T), \;\; j = 0, 1, 2, \ldots \tag{12}$$

$$k(t) = \begin{cases} k_0, & t \in [0, T) \\[4pt] \min\left\{ \gamma_1 \displaystyle\int_{(j-1)T}^{jT} e^\top(\tau) Q e(\tau)\, d\tau,\;\; \gamma_2 - \gamma_3 \right\} + \gamma_3, & t \in [jT, (j+1)T), \;\; j \ge 1 \end{cases} \tag{13}$$

where $E_x(t)$ is the intelligent excitation signal, $e_x(t)$ is defined in (9), and $\gamma_1 > 0$, $\gamma_2 > 0$, $\gamma_3 > 0$, and $k_0$ are design gains, subject to $\gamma_3 \le k_0 \le \gamma_2$. It is straightforward to verify that $\gamma_3 \le k(t) \le \gamma_2$, $\forall t \ge 0$. The complete controller with intelligent excitation consists of (2)–(4) and (10). We note that for the system in (1) the reference input $r(\theta^*)$ is not available, since $\theta^*$ is unknown. We can use $\theta(t)$, the estimate of $\theta^*$, to construct $r(\theta(t))$, for which MRAC can be applied. However, without
parameter convergence, there is no guarantee that $r(\theta(t))$ will converge to a neighborhood of $r(\theta^*)$ in finite time. Therefore, we augment $r(\theta(t))$ with the intelligent excitation signal $E_x(t)$, which leads to the control objective, as we prove in the following. First, we notice that although $k(t)$ is piecewise constant over the time increments $[jT, (j+1)T)$, $E_x(t)$ is a continuous signal, since $e_x(T) = e_x(0) = 0$. Thus, the redefined reference input $\hat r(t)$ in (10) is continuous and bounded. Let $e(t) = x(t) - x_m(t)$ be the tracking error. Substituting (3) into (1), it follows from (2) that the dynamics of the tracking error can be written as
$$\dot e(t) = A_m e(t) + \frac{1}{r^*}\, b_m \left( \tilde\theta_x^\top(t)\, x(t) + \tilde\theta_r(t)\, k_g \hat r(t) \right) \tag{14}$$

where $\tilde\theta_x(t) = \theta_x(t) - \theta_x^*$ and $\tilde\theta_r(t) = \theta_r(t) - r^*$ denote the parametric errors. Let $\tilde\theta(t) = [\tilde\theta_x^\top(t) \;\; \tilde\theta_r(t)]^\top$. Using the candidate Lyapunov function

$$V\big(e(t), \tilde\theta(t)\big) = e^\top(t) P e(t) + \left|\frac{1}{r^*}\right| \tilde\theta_x^\top(t)\, \Gamma_x^{-1}\, \tilde\theta_x(t) + \Gamma_r^{-1} \left|\frac{1}{r^*}\right| \tilde\theta_r^2(t) \tag{15}$$

it can be verified easily that $\dot V(t) \le -e^\top(t) Q e(t) \le 0$, $\forall t \ge 0$. Application of Barbalat's lemma yields $\lim_{t\to\infty} e(t) = 0$. Furthermore, it can be verified easily that if $\hat r(t) = r$, where $r$ is constant, one has $\lim_{t\to\infty} y(t) = r$. From the definition of asymptotic stability it follows that for any $\epsilon > 0$ there exists a finite $T_s > 0$ such that $|y(t) - r| \le \epsilon$, $\forall t \ge T_s$.
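For completeness, the verification of $\dot V(t) \le -e^\top(t) Q e(t)$ is the standard MRAC computation: differentiate (15) along (14) and substitute the adaptive laws (4), written here without the projection (which can only make $\dot V$ smaller):

```latex
\begin{aligned}
\dot V &= e^{\top}\!\left(A_m^{\top}P + P A_m\right)e
        + \frac{2}{r^{*}}\, e^{\top}P b_m\!\left(\tilde\theta_x^{\top}x + \tilde\theta_r k_g \hat r\right)
        + \frac{2}{|r^{*}|}\,\tilde\theta_x^{\top}\Gamma_x^{-1}\dot\theta_x
        + \frac{2}{|r^{*}|}\,\Gamma_r^{-1}\tilde\theta_r\dot\theta_r \\
       &= -e^{\top}Q e
        + \frac{2}{r^{*}}\, e^{\top}P b_m\!\left(\tilde\theta_x^{\top}x + \tilde\theta_r k_g \hat r\right)
        - \frac{2\operatorname{sgn}(r^{*})}{|r^{*}|}\left(\tilde\theta_x^{\top}x + \tilde\theta_r k_g \hat r\right) e^{\top}P b_m
        \;=\; -e^{\top}Q e ,
\end{aligned}
```

using $A_m^\top P + P A_m = -Q$ and $\operatorname{sgn}(r^*)/|r^*| = 1/r^*$, so the cross terms cancel; with the projection active, the equality becomes the inequality $\dot V \le -e^\top Q e$.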
IV. CONVERGENCE RESULT OF ADAPTIVE CONTROLLER WITH INTELLIGENT EXCITATION

We note that the amplitude of the excitation signal in (13) is defined via the integral of the tracking error over a time increment equal to the period of the excitation signal. To prove parameter convergence, we need to characterize the relationship between the unknown parameter error and the integral of the tracking error. Let $\Lambda(t) = [x^\top(t) \;\; x_m^\top(t) \;\; \theta^\top(t)]^\top \in \mathbb{R}^{3n+1}$ be the state of the extended system dynamics (1), (2), and (4) with the reference input defined in (10) and (13). Consider the compact set of all possible initial conditions of the system dynamics and adaptive parameters, $\Lambda(0) = \Lambda_0 \in D_0 \subset \mathbb{R}^{3n+1}$. The Lyapunov function $V(e(t), \tilde\theta(t))$ in (15) can be equivalently rewritten in the phase space as $V(e, \tilde\theta) = (x - x_m)^\top P (x - x_m) + |1/r^*| (\theta_x - \theta_x^*)^\top \Gamma_x^{-1} (\theta_x - \theta_x^*) + \Gamma_r^{-1} |1/r^*| (\theta_r - r^*)^2$, viewed as a map $V(\Lambda, \theta^*): \mathbb{R}^{3n+1} \times \Theta^* \to [0, \infty)$. Since the error dynamics are globally asymptotically stable, the maximum of the Lyapunov function for every initial condition $\Lambda_0 \in D_0$ is attained at the initial time instant. Let $V_{\max} = \max_{\Lambda_0 \in D_0,\, \theta^* \in \Theta^*} V(\Lambda_0, \theta^*)$. Notice, however, that as time evolves, the system trajectory $\Lambda(t)$ can leave $D_0$; but since the Lyapunov function is nonincreasing, there exists a compact set $D_c$, possibly larger than $D_0$, such that the system trajectory stays in it for all $t \ge 0$. Since (13) implies that $k(t)$ is constant over any time interval $[jT, (j+1)T)$, $j = 0, 1, 2, \ldots$, we denote by $k_j$ the value of $k(t)$ over this interval. We note that the system trajectory over $[jT, (j+1)T]$, as well as the value of $\int_{jT}^{(j+1)T} e^\top(t) Q e(t)\, dt$, is uniquely defined by $\Lambda(jT) \in D_c$, $k_j \in (0, \gamma_2]$, and $\theta^* \in \Theta^*$. We consider the following map $g_\Lambda: D_c \times (0, \gamma_2] \times \Theta^* \to [0, \infty)$ to characterize this relationship:

$$\int_{jT}^{(j+1)T} e^\top(t) Q e(t)\, dt = g_\Lambda\big(\Lambda(jT), k_j, \theta^*\big). \tag{16}$$
Fig. 1. Illustration of the maps $g_k(v)$ and $g_i(w)$.
We note that the entire system (1), (2), and (4), defining the trajectory of $\Lambda(t)$, can be viewed as a time-invariant system with $E_x(t)$ as an external periodic input signal. Thus, the trajectory of $\Lambda(t)$ on $t \in [j_1 T, (j_1+1)T)$ is the same as on $t \in [j_2 T, (j_2+1)T)$ if $\Lambda(j_1 T) = \Lambda(j_2 T)$ and $k_{j_1} = k_{j_2}$, for any $j_1, j_2$. Hence, $g_\Lambda(\Lambda(jT), k_j, \theta^*)$ is independent of the choice of $j$ and depends only upon the values of $\Lambda(jT)$ and $k_j$. Moreover, since $Q$ is positive definite, $g_\Lambda(\Lambda_c, k_c, \theta^*): D_c \times (0, \gamma_2] \times \Theta^* \to [0, \infty)$ is nonnegative, where $\Lambda_c$ stands for the value of $\Lambda(jT)$, and $k_c$ stands for $k_j$, to indicate the independence of $j$. We further define the map $g_v: [0, V_{\max}] \times (0, \gamma_2] \to [0, \infty)$ as the solution of the following constrained optimization problem:

$$g_v(v, k_c) = \min_{\Lambda_c \in D_c,\; \theta^* \in \Theta^*,\; \text{s.t. } V(\Lambda_c, \theta^*) = v} g_\Lambda(\Lambda_c, k_c, \theta^*) \tag{17}$$

where $v \in [0, V_{\max}]$. Notice that the constraint $V(\Lambda_c, \theta^*) = v$ defines a nonempty compact set; hence, the optimization problem (17) has at least one solution.

Lemma 2: The map $g_v$ defined in (17) has the following properties: 1) $g_v(0, k_c) = 0$; 2) if $k_c > 0$ and $g_v(v, k_c) = 0$, then $v = 0$.

We define the map $g_k: [0, V_{\max}] \to [0, \infty)$ as

$$g_k(v) = \min_{0 < k_c \le \gamma_2} \big( g_v(v, k_c)/k_c \big) \tag{18}$$

where $g_v$ is defined in (17) and $\gamma_2 > \gamma_3 > 0$ are the design gains defined in (13). The nonnegativity of $g_k(v)$ follows directly from the fact that $g_v(v, k_c) \ge 0$. Corollary 1 follows from Lemma 2 directly.

Corollary 1: $g_k(v) = 0$ if and only if $v = 0$.

It can be checked easily that $g_\Lambda(\Lambda_c, k_c, \theta^*)$ is a continuous function of its arguments. Therefore, $g_v(v, k_c)$, as well as $g_k(v)$, continuously depend upon their arguments. Fig. 1 illustrates the function $g_k(v)$. We note that $g_k(v)$ is a nonnegative function with a unique zero at $v = 0$. Given the map $g_k(v)$, which may not be monotonic, we define an "inverse-type" map $g_i: [0, g_k(V_{\max})] \to [0, V_{\max}]$ as

$$g_i(w) = \max_{v} \left\{ v \in [0, V_{\max}] \;\middle|\; g_k(v) = w \right\}. \tag{19}$$

An illustration of $g_i(w)$ for a possibly nonmonotonic $g_k(v)$ is shown in Fig. 1 for three different values of $w$. Since $g_k(v)$ is a continuous function and $g_k(0) = 0$, it can be checked easily that for any $w \in [0, g_k(V_{\max})]$, the map $g_i(w)$ exists. Notice, however, that despite the
fact that the map $g_k(v)$ is continuous, $g_i(w)$, as defined in (19), is not guaranteed to be continuous.

Lemma 3: The map $g_i(w)$ has the following properties:

1) $$\lim_{w \to 0} g_i(w) = 0 \tag{20}$$

2) $$g_i(0) = 0 \tag{21}$$

3) for all $v \ge g_i(w)$, one has

$$g_k(v) \ge w. \tag{22}$$

For any constant $\alpha > 1$, we define

$$\epsilon_1(\gamma_1, \alpha) = g_i(\alpha/\gamma_1) \tag{23}$$

$$j^* = \big\lceil \lg(\gamma_2/\gamma_3) \,/\, \lg(\alpha) \big\rceil + 1 \tag{24}$$

where $\gamma_1, \gamma_2$, and $\gamma_3$ are the design gains defined in (13), and the notation $\lceil a \rceil$ denotes the smallest integer greater than or equal to $a$. It follows from (20) that

$$\lim_{\gamma_1 \to \infty} \epsilon_1(\gamma_1, \alpha) = 0. \tag{25}$$
Theorem 1: Given the system in (1) and the adaptive controller with intelligent excitation in (2), (3), (4), and (13), if for $\tau \ge j^* T$, $V(\tau) \ge \epsilon_1(\gamma_1, \alpha)$, then $k(\tau) = \gamma_2$.

Theorem 1 states that as long as the value of the Lyapunov function is greater than $\epsilon_1(\gamma_1, \alpha)$, the reference input will be subject to PE with constant amplitude. Next, we prove that there exists a finite time instant such that the value of the Lyapunov function drops below $\epsilon_1(\gamma_1, \alpha)$. In that case, consequently, the amplitude of the excitation signal will be regulated dependent upon the integral of the tracking error to ensure that the control objective can be met.

Towards that end, let $B_{\tilde v} = \big\{ \tilde\theta \,:\, |1/r^*|\, \tilde\theta_x^\top \Gamma_x^{-1} \tilde\theta_x + \Gamma_r^{-1} |1/r^*|\, \tilde\theta_r^2 \le \tilde v \big\}$, where $0 \le \tilde v \le V_{\max}$. Let

$$\rho(\theta^*, \tilde v) = \max_{\tilde\theta \in B_{\tilde v}} \big| r(\theta^* + \tilde\theta) - r(\theta^*) \big|. \tag{26}$$

Let $\kappa = \lambda_{\max}(c c^\top)/\lambda_{\min}(P)$, where $\lambda_{\min}(P)$ is the minimum eigenvalue of $P > 0$, while $\lambda_{\max}(c c^\top)$ is the maximum eigenvalue of $c c^\top$. We define $\epsilon_2(\gamma_1, \gamma_3, \epsilon) = \|G(s)\|_{L_1} \big( \rho(\theta^*, \epsilon_1(\gamma_1, \alpha)) + \gamma_3 + \epsilon \big) + \sqrt{\kappa\, \epsilon_1(\gamma_1, \alpha)}$, where $\epsilon > 0$ and $\alpha > 1$ are arbitrary constants, and $\|G(s)\|_{L_1}$ is the $L_1$ gain of the system $G(s) = y(s)/\hat r(s)$.

Theorem 2: For the system (1) and the adaptive controller with intelligent excitation (2), (3), (4), and (13), there exists a finite $T_s > 0$ such that

$$|y(t) - r(\theta^*)| \le \epsilon_2(\gamma_1, \gamma_3, \epsilon), \qquad \forall t \ge T_s. \tag{27}$$
It can be verified easily that $\rho(\theta^*, \tilde v)$ is a continuous function of $\tilde v$, with $\rho(\theta^*, 0) = 0$. Hence, it follows that

$$\lim_{\gamma_1 \to \infty,\; \gamma_3 \to 0,\; \epsilon \to 0} \epsilon_2(\gamma_1, \gamma_3, \epsilon) = 0. \tag{28}$$

This implies that we can set $\gamma_1$ arbitrarily large and $\gamma_3$ and $\epsilon$ arbitrarily small to obtain any desired precision of $\epsilon_1$ and $\epsilon_2$ in (25) and (28). Notice that $\epsilon_2$ can be set arbitrarily small by control design. Also, it is important to point out that a large value of $\gamma_1$ will not cause instability, since the excitation signal $E_x(t)$ is always bounded by $\gamma_2$. Thus, we have proved that the adaptive controller with intelligent excitation regulates the system output in finite time. If there is any change in
Fig. 2. Illustration of the time trajectories of $V(t)$ and $k(t)$.
the unknown parameters of the system, then the desired reference trajectory changes correspondingly. If the interval over which the unknown parameters $\theta^*$ hold constant values is larger than the finite settling time guaranteed by intelligent excitation, then the adaptive controller with intelligent excitation will achieve the control objective. Indeed, any change in the unknown parameters of the system results in an abrupt change of $V(t)$. Theorem 1 then ensures that the intelligent excitation will reinitialize and lead to the desired output tracking.

Remark 1: For practical implementation, due to the presence of noise and transient tracking errors, we can set $\gamma_3 = 0$ without worrying about premature disappearance of excitation. In that case, the excitation signal satisfies $\bar\epsilon \le k(t) \le \gamma_2$, where $\bar\epsilon$ is a small positive number due to the noise in practical implementation. Furthermore, the definition of $j^*$ in (24) changes to $j^* = \lceil \lg(\gamma_2/\bar\epsilon)/\lg(\alpha) \rceil + 1$. The constant gain $\gamma_1$ is inversely proportional to the bound of the parameter tracking error, so setting it large will increase the accuracy of the parameter estimates. The gain $\gamma_2$ is the amplitude of the excitation signal, which controls the rate of convergence.

Remark 2: Fig. 2 illustrates the simultaneous change of $V(t)$ and $k(t)$. Let $\bar T$ be the time instant when $V(\bar T) = \epsilon_1(\gamma_1, \alpha)$. Theorem 1 states that $k(t)$ is nondecreasing and increases to $\gamma_2$ before $j^* T$ for any initial $k_0 \in [\gamma_3, \gamma_2]$, while $k(t) = \gamma_2$, $\forall t \in [j^* T, \bar T]$. This implies a constant excitation signal, which leads to a decrease of $V(t)$ until it drops below $\epsilon_1(\gamma_1, \alpha)$. Once $V(t) \le \epsilon_1(\gamma_1, \alpha)$, Theorem 2 (Step 1 in the proof) states that $k(t)$ will decrease to $\gamma_3 + \epsilon$, where $\epsilon$ can be arbitrarily small. Thus, Theorem 1 quantifies the performance of the intelligent excitation signal, while Theorem 2 consequently proves the output convergence.
We further notice that Theorem 1 relates the presence of a constant excitation signal to the value of the Lyapunov function, which depends upon the unknown parameter errors. Thus, any change in the unknown parameters of the system, which leads to a new value of the Lyapunov function, implies reinitialization of the excitation signal.

Remark 3: We finally note that adaptive control is not the only tool for controlling systems in the presence of uncertainties. The robust control literature offers alternative approaches, such as high-gain controllers and variable structure controllers, to name just a few. However, for the output regulation problem discussed here, namely, when the desired reference input depends on unknown parameters of the system, parameter identification appears to be a required step for achieving the control objective. Intelligent excitation provides a solution for simultaneous parameter identification and output regulation.
V. SIMULATION

We consider a reference input dependent on piecewise constant unknown parameters. Consider the SISO system in (1) with

$$A_m = \begin{bmatrix} -1.03 & 0.88 \\ 0.25 & -2.96 \end{bmatrix}, \qquad b_m = \begin{bmatrix} -0.01 \\ -1.05 \end{bmatrix}, \qquad c = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

$$\theta_x^*(t) = \begin{cases} [3.2433 \;\; 10.7432]^\top, & t \le 42\ \mathrm{s} \\ [2.1622 \;\; 7.1621]^\top, & t > 42\ \mathrm{s} \end{cases} \qquad r^*(t) = \begin{cases} 6.18, & t \le 42\ \mathrm{s} \\ 4, & t > 42\ \mathrm{s.} \end{cases}$$

The reference signal is

$$r(\theta^*) = -[1 \;\; 1] \left( A_m - \frac{1}{r^*}\, b_m (\theta_x^*)^\top \right)^{-1} b_m\, \frac{1.35}{r^*}$$
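The closed-loop architecture of (1)–(4) and (10)–(13) can be sketched numerically with a forward-Euler discretization. All values below (plant, map $r(\cdot)$, gains) are illustrative assumptions, and the projection in (4) is omitted for brevity; this is not the note's simulation setup.

```python
import numpy as np

# Illustrative data (assumed): Hurwitz A_m, known b_m, c; Q > 0.
Am = np.array([[-1.0, 0.5], [0.0, -2.0]])
bm = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])
Q = np.eye(2)
# Lyapunov equation Am'P + P Am = -Q via Kronecker vectorization
P = np.linalg.solve(np.kron(np.eye(2), Am.T) + np.kron(Am.T, np.eye(2)),
                    -Q.flatten()).reshape(2, 2)
kg = 1.0 / (c @ np.linalg.solve(-Am, bm))          # k_g of the reference model (2)

theta_x_star = np.array([2.0, 1.5])                # unknown to the controller
r_star = 1.2
def r_map(th):                                      # hypothetical known map r(theta)
    return 0.3 * (th[0] + th[1]) + 0.5 * th[2]

omegas = (6.0, 9.0)
T = np.pi / 3.0                                     # e_x(T) = 0 keeps E_x continuous
def ex(t): return sum(np.sin(w * t) for w in omegas)

g1, g2, g3 = 10.0, 1.0, 0.05                        # gamma_1, gamma_2, gamma_3
Gx, Gr = 5.0, 5.0                                   # adaptation rates
dt, t_end = 2e-4, 20.0

x = np.zeros(2); xm = np.zeros(2)
th_x = np.zeros(2); th_r = 0.0
k_amp, int_eQe, next_T = g2, 0.0, T
k_hist = []
for i in range(int(t_end / dt)):
    t = i * dt
    if t >= next_T:                                 # period boundary: amplitude law (13)
        k_amp = min(g1 * int_eQe, g2 - g3) + g3
        int_eQe, next_T = 0.0, next_T + T
    k_hist.append(k_amp)
    theta = np.array([th_x[0], th_x[1], th_r])
    rhat = r_map(theta) + k_amp * ex(t % T)         # intelligent excitation (10)-(12)
    u = th_x @ x + th_r * kg * rhat                 # controller (3)
    e = x - xm
    int_eQe += (e @ Q @ e) * dt
    epb = (e @ P @ bm) * np.sign(r_star)
    th_x = th_x - dt * Gx * x * epb                 # adaptive laws (4), no projection
    th_r = th_r - dt * Gr * kg * rhat * epb
    x = x + dt * (Am @ x + bm * (u - theta_x_star @ x) / r_star)   # plant (1)
    xm = xm + dt * (Am @ xm + bm * kg * rhat)                      # reference model (2)
```

As the tracking-error integral over each period shrinks, `k_amp` collapses from $\gamma_2$ toward $\gamma_3$, mirroring the behavior of Fig. 3(b); the estimate $\theta(t)$ then feeds $r(\theta(t))$ in place of the unavailable $r(\theta^*)$.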
and the objective is to design the control signal $u(t)$ so that the system output $y(t)$ tracks the reference input $r(\theta^*)$. We construct the adaptive controller with intelligent excitation using the following parameters: $Q = \mathrm{diag}[100, 10]$, $\Gamma_x = 50 I_{2\times 2}$, $\Gamma_r = 10$, $\gamma_1 = 10$, $T = \pi/3$, $\gamma_2 = 1.5$, and $\gamma_3 = 0$. We choose $\omega_1 = 6$ and $\omega_2 = 9$. It can be verified that $\Omega$ has full row rank with the chosen $\omega_i$. Simulation results are given in Fig. 3. Fig. 3(a) plots the time history of $y(t)$ and the ideal reference signal $r(\theta^*)$. It demonstrates that, with the change in the unknown parameter $r^*(t)$, the output $y(t)$ converges to $r(\theta^*)$ with the help of intelligent excitation. The trajectory of $k(t)$, which defines the amplitude of the intelligent excitation, is plotted in Fig. 3(b). Fig. 3 demonstrates the following: 1) intelligent excitation vanishes as parameter convergence takes place, and 2) intelligent excitation reinitiates when a change occurs in the unknown parameters.

Fig. 3. Simulation results. (a) Comparison of $y(t)$ and $r(\theta^*(t))$. (b) Trajectory of $k(t)$.

VI. CONCLUSION

In this note, we augment traditional MRAC with an intelligent excitation signal to solve the output tracking problem for a reference input that depends upon the unknown parameters of the system. The main feature of the new technique is that it initiates excitation only when necessary. We prove that intelligent excitation is a general technique for the class of problems in which parameter convergence is needed to meet the control objective. It can also be used to enhance the robustness of adaptive controllers when parameter drift may cause instability. Since intelligent excitation modifies only the reference input, while the proofs of convergence and reinitialization use only the properties of the Lyapunov function, it can be straightforwardly extended to other adaptive controllers, like backstepping, output feedback, etc.

APPENDIX

Proof of Lemma 1: The transfer function $(sI - A_m)^{-1} b_m k_g$ can be expressed as $(sI - A_m)^{-1} b_m k_g = n(s)/d(s)$, where $d(s) = \det(sI - A_m)$ is an $n$th-order polynomial and $n(s)$ is an $n \times 1$ vector with its $i$th element being the polynomial function

$$n_i(s) = \sum_{j=1}^{n} n_{ij}\, s^{j-1}. \tag{29}$$

Consider the matrix $N$ with entries $n_{ij}$ in (29). We first prove that $N$ is full rank. We note that $(A_m, b_m)$ is controllable. Controllability of $(A_m, b_m)$ for the linear time-invariant (LTI) system in (2) implies that for arbitrary $x_t \in \mathbb{R}^n$, $x_m(t_0) = 0$, and arbitrary $t_1 > t_0$, there exists $u(\tau)$, $\tau \in [t_0, t_1]$, such that $x_m(t_1) = x_t$. If $N$ is not full rank, then there exists a nonzero constant vector $\lambda \in \mathbb{R}^n$ such that $\lambda^\top n(s) = 0$. Then $\lambda^\top n(s)/d(s) = \lambda^\top x_m(s)/\hat r(s) = 0$, which implies that $\lambda^\top x_m(s) = 0$. Hence, if $x_m(t_0) = 0$, for any $u(\tau)$, $\tau > t_0$, we have $\lambda^\top x_m(\tau) = 0$, $\forall \tau > t_0$. This contradicts $x_m(t_1) = x_t$, in which $x_t \in \mathbb{R}^n$ is assumed to be an arbitrary point. Therefore, $N$ must be full rank. Since $N$ is full rank, it follows that $n(s)$ contains $n$ linearly independent polynomials. We can rewrite the transfer function in (7) as $H(s) = (1/d(s))\,[d(s) \;\; n_1(s) \;\; \cdots \;\; n_n(s)]^\top$. Since $d(s)$ is an $n$th-order polynomial and each $n_i(s)$ is of order at most $n-1$, $d(s)$ and the $n_i(s)$ are linearly independent. Hence, $H(s)$ contains $n+1$ linearly independent functions of $s$, and, therefore, there exist $\omega_1, \ldots, \omega_m$ which ensure that $\Omega$ has full row rank.

Proof of Lemma 2: The proof of the first statement is straightforward. If $v = 0$, then the optimization set is defined by $V(\Lambda_c, \theta^*) = 0$. This is indeed a nonempty set, since the points of the hyperspace in $\mathbb{R}^{3n+1} \times \Theta^*$ defined via the conditions $x = x_m$, $\theta = \theta^*$ satisfy $V(\Lambda_c, \theta^*) = 0$. Notice that the computation of $g_\Lambda(\Lambda_c, k_c, \theta^*)$ in (16) is done by starting the integration from $\Lambda(jT) = \Lambda_c$.
It follows from $V(\Lambda_c, \theta^*) = 0$ that $V(jT) = 0$, and this implies that $e(jT) = 0$. Since the Lyapunov function is nonincreasing, we have $e(t) = 0$, $t \in [jT, (j+1)T]$. Consequently, $g_\Lambda(\Lambda_c, k_c, \theta^*) = \int_{jT}^{(j+1)T} e^\top(t) Q e(t)\, dt = 0$. Hence, $g_v(0, k_c) = 0$.

To prove the second statement, first we will prove that, for any $k_j > 0$ and $\Lambda(jT)$, we have

$$\int_{jT}^{(j+1)T} e^\top(\tau) Q e(\tau)\, d\tau = 0 \;\;\Rightarrow\;\; V\big(\Lambda(jT), \theta^*\big) = 0. \tag{30}$$

The proof of (30) is done by contradiction. Since $Q > 0$ is positive definite and $\Lambda(t)$ is Lipschitz continuous (as the solution of differential equations with bounded right-hand sides), $\int_{jT}^{(j+1)T} e^\top(\tau) Q e(\tau)\, d\tau = 0$ implies that

$$e(jT + \tau) = 0, \qquad \forall \tau \in [0, T]. \tag{31}$$
such It follows from (4) that there exists a constant vector 2 IR 2 [0; T ]. From (11), this further implies that that (jT + ) = ; r^0 (jT + ) = const; 2 [0; T ], while (13) yields n+1
Ex (jT
+ ) = kj ex (jT + );
2 [0; T ]
(32)
where $e_x(t)$ is defined in (9). From (6), it follows that $x(jT+\tau)$ is the output response of the linear system defined by $H(s)$ to the signal $\hat r(jT+\tau) = \hat r_0(jT+\tau) + E_x(jT+\tau)$ over the interval $\tau \in [0,T]$. Therefore, $x(jT+\tau)$ can be expressed as the sum of three components
$$x(jT+\tau) = x_1(jT+\tau) + x_2(jT+\tau) + x_3(jT+\tau), \qquad \tau \in [0,T]$$
where $x_1(jT+\tau)$ and $x_2(jT+\tau)$ are the particular solutions of the linear system (6) corresponding to the inputs $E_x(jT+\tau)$ and $\hat r_0(jT+\tau)$, respectively, while $x_3(jT+\tau)$ is the general solution of the linear system (6) propagating from the initial value $\Delta(jT)$ and can therefore be expressed as a linear combination of exponential functions of $\tau$. It follows from $\hat r_0(jT+\tau) = \mathrm{const}$, $\tau \in [0,T]$, that $x_2(jT+\tau) = \mathrm{const}$, $\tau \in [0,T]$.

Next, we use the Fourier transform to construct the particular solution of the system (6) in response to $E_x(jT+\tau)$ defined in (32). If the input signal to the linear system $H(s)$ is $k_j \sum_{i=1}^{m} \sin(\omega_i t)$, with Fourier transform $F_E(\omega) = k_j (\pi/2) j \sum_{i=1}^{m} \bigl(\delta(\omega+\omega_i) - \delta(\omega-\omega_i)\bigr)$, then the Fourier transform of the corresponding particular solution is $F_x = k_j (\pi/2) j \sum_{i=1}^{m} \bigl(H(-j\omega_i)\delta(\omega+\omega_i) - H(j\omega_i)\delta(\omega-\omega_i)\bigr)$. Hence, the particular solution corresponding to the input signal $k_j \sum_{i=1}^{m} \sin(\omega_i t)$ is $x_1(t) = k_j \sum_{i=1}^{m} \bigl(\operatorname{Re}(H(j\omega_i))\sin(\omega_i t) + \operatorname{Im}(H(j\omega_i))\cos(\omega_i t)\bigr)$. From (9), we notice that $e_x(jT+\tau) = e_x(\tau)$; therefore, the particular solution corresponding to $E_x(jT+\tau)$ defined in (32) satisfies $x_1(jT+\tau) = k_j \sum_{i=1}^{m} \bigl(\operatorname{Re}(H(j\omega_i))\sin(\omega_i \tau) + \operatorname{Im}(H(j\omega_i))\cos(\omega_i \tau)\bigr)$, $\tau \in [0,T]$, which can further be rewritten as $x_1(jT+\tau) = k_j \Lambda b(jT+\tau)$, $\tau \in [0,T]$, where $\Lambda$ is defined in (8) and $b$ is a $2m \times 1$ vector whose $q$th element is
$$b_q(\tau) = \begin{cases} \sin\bigl(\omega_{\lceil q/2 \rceil}\,\tau\bigr), & \text{if } q \text{ is odd}\\ \cos\bigl(\omega_{\lceil q/2 \rceil}\,\tau\bigr), & \text{if } q \text{ is even.} \end{cases}$$
Here, $\tau \in [0,T]$ and $\lceil q/2 \rceil$ denotes the smallest integer that is greater than or equal to $q/2$. Since $k_j > 0$, $\Lambda$ has full row rank, and the elements of $b$ are linearly independent functions, the elements of $x_1(jT+\tau)$ are also linearly independent over $\tau \in [0,T]$. Notice that $x_2(jT+\tau)$ is constant, while the general solution $x_3(jT+\tau)$ is a sum of exponential functions. Therefore, $x_2(jT+\tau) + x_3(jT+\tau)$ cannot be expressed as a linear combination of the sinusoidal functions in $x_1(jT+\tau)$ over $\tau \in [0,T]$. Hence, all the elements of $x(jT+\tau)$ are linearly independent over $\tau \in [0,T]$.

Let $\tilde c(\tau) = c(\tau) - c^*$, where $c(\tau)$ is defined as $c(\tau) = c(jT+\tau)$, $\tau \in [0,T]$. We will prove by contradiction that $\tilde c(\tau) = 0$. For a nonzero $\tilde c$, since the elements of $x(jT+\tau)$ are linearly independent over the interval $\tau \in [0,T]$, there exists some $\tau_0 \in [0,T]$ such that $\tilde c^\top x(jT+\tau_0) \neq 0$. Since $x(t)$ is Lipschitz continuous, it follows from $\tilde c^\top x(jT+\tau_0) \neq 0$ that there exist $\tau_1, \tau_2 \in [0,T]$, where $\tau_1 < \tau_2$, such that
$$\int_{jT+\tau_1}^{jT+\tau_2} \tilde c^\top x(\tau)\, d\tau \neq 0. \tag{33}$$
It follows from (14), (31), and (33) that
$$e(jT+\tau_2) - e(jT+\tau_1) = \lambda^* b_m r \int_{jT+\tau_1}^{jT+\tau_2} \tilde c^\top x(\tau)\, d\tau \neq 0. \tag{34}$$
Since (34) contradicts (31), $\tilde c = 0$ must be true. It follows from (31) that $\tilde c = 0$, and from the definition of $V$ in (15) it follows that $V(jT) = 0$, which completes the proof of (30).

To prove that for $k_c > 0$, $g_v(v, k_c) = 0$ implies $v = 0$, we assume the opposite, i.e., that there exists $k_c > 0$ such that $g_v(v, k_c) = 0$ but $0 < v \le V_{\max}$, i.e., $v \neq 0$. The definition of $g_v$ in (17) implies that there exists at least one point $\Delta_c$, satisfying $V(\Delta_c, \theta^*) = v > 0$, at which the minimum is attained: $g_\Delta(\Delta_c, k_c, \theta^*) = 0$. Setting $\Delta(jT) = \Delta_c$ and $k_j = k_c > 0$, it follows from the definition of $g_\Delta$ in (16) that $\int_{jT}^{(j+1)T} e^\top(\tau) Q e(\tau)\, d\tau = 0$. Consequently, the statement in (30) implies that $V(\Delta_c, \theta^*) = V(\Delta(jT), \theta^*) = 0$, which contradicts $V(\Delta_c, \theta^*) = v > 0$; hence, $v = 0$, and Lemma 2 is proved.

Proof of Lemma 3: If (20) is not true, then there exists $\epsilon_g > 0$ such that for any $\delta_g > 0$ there exists $w^* \in (0, \delta_g]$ such that $|g_i(w^*)| \ge \epsilon_g$. It follows from (19) that there exists $v^* \ge \epsilon_g$ such that $g_k(v^*) = w^*$. Let $\underline{w} = \min_{v \in [\epsilon_g, V_{\max}]} g_k(v)$. Then $\underline{w} \le w^*$. Since $w^* \in (0, \delta_g]$, where $\delta_g$ can be arbitrarily small, it follows from $\underline{w} \le w^*$ that $\underline{w}$ can be arbitrarily small. On the other hand, since $g_k(v)$ is continuous and $\epsilon_g$ is a given nonzero constant, it follows from $\underline{w} = \min_{v \in [\epsilon_g, V_{\max}]} g_k(v)$ that $\underline{w} > 0$ cannot be arbitrarily small. Hence, (20) is proved. Equation (21) follows from Corollary 1 directly, while the statement in (22) follows from the definition in (19). Lemma 3 is proved.

Proof of Theorem 1: Since the Lyapunov function $V(t)$ is nonincreasing, it follows from $V(\tau') \le \beta_1(\lambda_1, \epsilon)$ that, for $\tau' \le j^* T$, we have $V(t) \le \beta_1(\lambda_1, \epsilon)$, $\forall t \in [0, \tau']$, and hence
$$V(jT) \le \beta_1(\lambda_1, \epsilon) \quad \text{for any } jT \le \tau'. \tag{35}$$
It follows from (16) and the definition of $g_v(v, k_c)$ in (17) that
$$\int_{jT}^{(j+1)T} e^\top(\tau) Q e(\tau)\, d\tau \ge g_v\bigl(V(jT), k_j\bigr).$$
Recall that $\lambda_3 \le k_j \le \lambda_2$, $\forall j = 0, 1, 2, \ldots$, and rewrite $g_v(V(jT), k_j) = k_j \bigl(g_v(V(jT), k_j)/k_j\bigr)$. From the definition of $g_k$ in (18), it follows that $g_v(V(jT), k_j) \ge k_j g_k(V(jT))$. Therefore, $\int_{jT}^{(j+1)T} e^\top(\tau) Q e(\tau)\, d\tau \ge k_j g_k(V(jT))$. Consequently, it follows from (13) that
$$k_{j+1} \ge \min\bigl\{\lambda_1 k_j g_k(V(jT)),\; \lambda_2 - \lambda_3\bigr\} + \lambda_3. \tag{36}$$
Authorized licensed use limited to: UNIVERSITY OF CONNECTICUT. Downloaded on February 12, 2009 at 09:44 from IEEE Xplore. Restrictions apply.
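The frequency-domain identity used in the proof above — that the particular solution of a stable LTI system driven by $\sin(\omega t)$ is $\operatorname{Re}(H(j\omega))\sin(\omega t) + \operatorname{Im}(H(j\omega))\cos(\omega t)$ — can be checked numerically. The sketch below is illustrative only: the plant $H(s) = 1/(s+a)$ and the constants are hypothetical, not taken from the note.

```python
# Numerical check of the steady-state fact used in the proof: for a stable LTI
# system H(s), the particular solution for input sin(w t) is
# Re(H(jw))*sin(w t) + Im(H(jw))*cos(w t).
# Hypothetical example: H(s) = 1/(s + a), simulated with RK4.
import math

a, w = 2.0, 3.0
H = 1.0 / complex(a, w)            # H(jw) for H(s) = 1/(s + a)

def f(t, x):                       # dx/dt = -a*x + sin(w*t)
    return -a * x + math.sin(w * t)

def rk4(x, t, h):                  # one classical Runge-Kutta step
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

x, t, h = 0.0, 0.0, 1e-3
while t < 20.0:                    # integrate until the transient ~e^{-a t} dies out
    x = rk4(x, t, h)
    t += h

predicted = H.real * math.sin(w * t) + H.imag * math.cos(w * t)
assert abs(x - predicted) < 1e-6
```

With $a=2$, $\omega=3$ we have $H(j\omega) = (a - j\omega)/(a^2+\omega^2)$, so the predicted steady state is $(2/13)\sin(3t) - (3/13)\cos(3t)$, which the simulation matches once the exponential transient has decayed.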
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 52, NO. 8, AUGUST 2007
Equations (23) and (35) imply that $V(jT) \ge g_i(\mu/\lambda_1)$ and, hence, it follows from (22) that $g_k(V(jT)) \ge \mu/\lambda_1$. Further, substituting into (36) implies that $k_{j+1} \ge \min\{\mu k_j, \lambda_2 - \lambda_3\} + \lambda_3$ and, consequently,
$$k_{j+1} > k_j, \quad \text{if } \min\{\mu k_j, \lambda_2 - \lambda_3\} = \mu k_j$$
$$k_{j+1} = \lambda_2, \quad \text{if } \min\{\mu k_j, \lambda_2 - \lambda_3\} = \lambda_2 - \lambda_3$$
which can be rewritten as
$$k_{j+1} \ge \mu k_j, \quad \text{if } k_j < \frac{\lambda_2 - \lambda_3}{\mu} \tag{37}$$
$$k_{j+1} = \lambda_2, \quad \text{if } k_j \ge \frac{\lambda_2 - \lambda_3}{\mu}. \tag{38}$$
If there exists $j \le j^* - 1$ such that (38) holds, then $k_i = \lambda_2$ for any $iT \ge (j+1)T$, and the proof of Theorem 1 will follow. We now prove that there indeed exists $j \le j^* - 1$ such that (38) holds. If (38) is not true, which implies that
$$k_j < \frac{\lambda_2 - \lambda_3}{\mu}, \qquad j = 0, \ldots, j^* - 1 \tag{39}$$
then it follows from (37) that $k_{j^*-1} \ge \mu^{j^*-1} k_0$. Since $k_0 \ge \lambda_3$ and (39) is true for $j = 0, \ldots, j^* - 1$, we have $(\lambda_2 - \lambda_3)/\mu > \lambda_3\, \mu^{j^*-1}$ and, therefore, $(\lambda_2 - \lambda_3)/\lambda_3 > \mu^{j^*}$. From here, it follows that $j^* \le \lceil \lg((\lambda_2 - \lambda_3)/\lambda_3)/\lg(\mu) \rceil$, which contradicts (24). The proof of Theorem 1 is complete.

Proof of Theorem 2: The existence of a finite $T_s$ such that
$$|y(t) - r(\theta^*)| \le \beta_2(\lambda_1, \lambda_3, \epsilon), \qquad t \ge T_s \tag{40}$$
will be proven in three steps. In Step 1, we prove that there exists a finite time $T_1 = j_1 T$ such that, when $t > T_1$, the amplitude of the excitation signal is within an $\epsilon$-neighborhood of $\lambda_3$, i.e., $k_j \le \lambda_3 + \epsilon$, $\forall j \ge j_1$, where $\epsilon$ can be arbitrarily small. In Step 2, we prove that $|\hat r(t) - r(\theta^*)| \le \alpha(\theta^*, \beta_1(\lambda_1, \epsilon)) + \lambda_3 + \epsilon$, $t \ge T_1$. In Step 3, we prove that there exists a finite $T_s$ such that (27) holds.

Step 1) It follows from $\lim_{t\to\infty} e(t) = 0$ that $\lim_{t\to\infty} e^\top(t) Q e(t) = 0$ and, therefore,
$$\lim_{j\to\infty} \int_{jT}^{(j+1)T} e^\top(\tau) Q e(\tau)\, d\tau = 0.$$
It then follows that, for any $\epsilon > 0$, there exists a finite $j_1$ such that $\int_{jT}^{(j+1)T} e^\top(\tau) Q e(\tau)\, d\tau \le \epsilon/\lambda_1$, $\forall j \ge j_1$. Hence, it follows from (13) that
$$k_j \le \lambda_3 + \epsilon, \qquad \forall j \ge j_1. \tag{41}$$

Step 2) Equation (41) and Theorem 1 imply that
$$V(t) \le \beta_1(\lambda_1, \epsilon), \qquad \forall t \ge T_1 \tag{42}$$
where $T_1 = \max\{j^* T, j_1 T\}$ and $j^*$ is defined in (24). Indeed, if (42) were not true, then it would follow from Theorem 1 that $k_j = \lambda_2$, $\forall j \ge j_1$, which contradicts (41). It follows from (42) and the definition of $\alpha$ in (26) that $|r(\theta) - r(\theta^*)| \le \alpha(\theta^*, \beta_1(\lambda_1, \epsilon))$ and, therefore, for any $t \ge T_1$,
$$|\hat r(t) - r(\theta^*)| \le \alpha(\theta^*, \beta_1(\lambda_1, \epsilon)) + \lambda_3 + \epsilon. \tag{43}$$

Step 3) It follows from (43) that $\hat r(t)$ is bounded for $t \ge T_1$. Then, for any $\epsilon > 0$, there exists a finite time $T_2$ such that, $\forall t \ge T_1 + T_2$,
$$|y_m(t) - r(\theta^*)| \le \|G\|_{L_1}\bigl(\alpha(\theta^*, \beta_1(\lambda_1, \epsilon)) + \lambda_3 + \epsilon\bigr) + \epsilon. \tag{44}$$
From (42), it follows that $e^\top(t) P e(t) \le \beta_1(\lambda_1, \epsilon)$, $\forall t \ge T_1 + T_2$. Let $T_s = T_1 + T_2$. Since $e^\top(t) P e(t) \ge \lambda_{\min}(P)\|e(t)\|^2$ and $\tilde y^2(t) = e^\top(t) c c^\top e(t) \le \lambda_{\max}(c c^\top)\|e(t)\|^2$, we have $\tilde y^2(t) \le e^\top(t) P e(t)$, $t \ge 0$, and, therefore,
$$|\tilde y(t)| \le \sqrt{\beta_1(\lambda_1, \epsilon)}, \qquad \forall t \ge T_s. \tag{45}$$
The relationship in (40) follows from (44) and (45) directly, which proves Theorem 2.
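The gain-update mechanism in the proof of Theorem 1 can be checked numerically. In the sketch below, the constants $\mu$, $\lambda_2$, $\lambda_3$ are hypothetical placeholders, and the update in (37)–(38) is taken with equality as the worst case; the excitation gain then reaches its ceiling $\lambda_2$ within $\lceil \lg((\lambda_2-\lambda_3)/\lambda_3)/\lg(\mu) \rceil$ updates, which is the step bound that produces the contradiction with (24).

```python
# Illustrative sketch (all constants hypothetical) of the gain-update argument:
# under k_{j+1} = min(mu*k_j, lam2 - lam3) + lam3 with k_0 >= lam3 and mu > 1,
# the gain hits its ceiling lam2 within ceil(log((lam2-lam3)/lam3)/log(mu)) steps.
import math

mu, lam2, lam3 = 2.0, 10.0, 0.5
k = lam3                                   # worst-case start: k_0 = lam3
steps = 0
while k < lam2:
    k = min(mu * k, lam2 - lam3) + lam3    # worst-case (equality) update
    steps += 1

bound = math.ceil(math.log((lam2 - lam3) / lam3) / math.log(mu))
assert k == lam2 and steps <= bound
```

With these placeholder values the iteration runs $0.5 \to 1.5 \to 3.5 \to 7.5 \to 10$, i.e., four steps against the bound $\lceil \log_2 19 \rceil = 5$.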
Optimal Stochastic SD Control With Preview K. Y. Polyakov, E. N. Rosenwasser, and B. P. Lampe, Member, IEEE
Abstract—The problem of $H_2$-optimal sampled-data (SD) preview control is considered. It is assumed that a stochastic reference signal corrupted by additive colored noise acts upon the system, so that future values of this input are known within a preview window $\pi$. A rigorous frequency-domain solution of the problem is given for SISO SD systems on the basis of the parametric transfer function concept. Numerator and denominator degrees of the optimal controller are calculated explicitly from the initial data. The dependence of the optimal cost function on $\pi$ is investigated, and the limiting value as $\pi \to \infty$ is found. Moreover, it is shown that this limit depends on the relation between $\pi$ and the sampling period.

Index Terms—Delay systems, polynomial methods, sampled-data systems, stochastic optimal control, transfer functions.
I. INTRODUCTION

In many control problems, a desired trajectory of the system motion is known in advance within a window $\pi$. Examples of such systems are robotic manipulators, terrain-following flight systems, underwater vehicles, and so on. Obviously, knowledge of "future" input signals can be used to enhance system performance and decrease tracking errors. The aim of preview control is realized through the synthesis of optimal controllers whose design explicitly takes the preview effect into account.

Previously, the problems of preview or predictive control were considered for continuous [1]–[5] and discrete linear time-invariant (LTI) systems. In the present note, this problem is investigated for hybrid SD systems, where a digital regulator is used for controlling a continuous-time plant. For open-loop SD systems, a similar problem of delayed signal reconstruction, without noise and without restrictions on control power, was considered in [6] and [7], where the reconstruction error was evaluated by the $H_\infty$-norm of the corresponding operator. These papers demonstrated the great difficulties encountered in constructing an equivalent discrete model for SD systems with delays and preview. Due to the preview effect, a network with the transfer function $e^{\pi s}$ appears in the block diagram. Therefore, the system, considered in continuous time, becomes infinite-dimensional, and this feature causes significant difficulties in analysis and design. Such well-known techniques of direct SD system design as "lifting" [8] or the "FR-operator" [9] can hardly be used in this case because, so far, they have not been adapted for systems with arbitrary delays.

This note presents a rigorous solution of the optimal stochastic SD control problem with preview. The proposed technique is based on the frequency-domain theory of digital control presented in [10], which makes it possible to take into account units with transfer functions $e^{\pi s}$ and $e^{-\pi s}$ without any approximations [11], [12]. As distinct from [6] and [7], here the system performance is evaluated by the $H_2$-norm of the error, i.e., by its average (over continuous time) variance. The presented solution takes into account additive colored noise and restrictions imposed on control power. Moreover, it is valid for any hold circuit and an arbitrary preview interval, rather than only for integer multiples of the sampling period.

Manuscript received February 23, 2006; revised November 6, 2006. Recommended by Associate Editor D. Nesic. This work was supported in part by the Deutsche Forschungsgemeinschaft. K. Y. Polyakov and E. N. Rosenwasser are with the St. Petersburg State University of Ocean Technology, St. Petersburg 190008, Russia (e-mail: [email protected]). B. P. Lampe is with the Department of Computer Science and Electrical Engineering, University of Rostock, D-18051 Rostock, Germany (e-mail: [email protected]). Digital Object Identifier 10.1109/TAC.2007.900845

II. NOTATION

Let $T$ be the sampling period, $\zeta = e^{-sT}$ the unit delay operator, and $\bar\omega \triangleq 2\pi/T$ the sampling frequency. The asterisk denotes the Hermitian conjugate function, so that (for the scalar case) $F^*(s) \triangleq F(-s)$ and $F^*(\zeta) = F(\zeta^{-1})$. Real rational functions in $s$ and $\zeta$ will be called stable if they are analytic in $\operatorname{Re} s \ge 0$ or $|\zeta| \le 1$, respectively. Polynomials in $\zeta$ having no roots in the closed unit disk will also be called stable. For any Laplace image $F(s)$, let us introduce the displaced pulse-frequency response
$$\Phi_F(s,t) \triangleq \frac{1}{T} \sum_{k=-\infty}^{\infty} F(s + kj\bar\omega)\, e^{kj\bar\omega t} \tag{1}$$
and the discrete Laplace transform [10]
$$\mathcal D_F(s,t) \triangleq \Phi_F(s,t)\, e^{st}, \qquad D_F(\zeta,t) \triangleq \mathcal D_F(s,t)\big|_{e^{-sT} = \zeta}. \tag{2}$$
Denote the monic denominator of $D_F(\zeta,t)$ by $d_F(\zeta)$ and its degree by $\deg d_F$. For any polynomial $f(\zeta)$ having no roots on the unit circle, the superscripts "$+$" and "$-$" denote the stable and strictly antistable cofactors (up to a constant factor), so that $f(\zeta) = f^+ f^-$. The tilde denotes the polynomial in $\zeta$ with the reverse order of coefficients: $\tilde f(\zeta) = f(\zeta^{-1})\, \zeta^{\deg f}$. By a quasipolynomial we mean a rational function in $\zeta$ having no poles except at $\zeta = 0$.

III. STATEMENT OF THE PROBLEM

Fig. 1 shows the block diagram of an SD system composed of a plant with transfer function $F(s)$, an actuator $H(s)$, dynamic feedback $G(s)$, and a digital controller $C(\zeta)$ with prefilter $F_0(s)$. Pure-delay networks simulate delays in the continuous elements as well as computational delays. The transfer function of the (arbitrary) hold circuit will be denoted by $G_h(s)$. The stochastic reference signal $r(t)$ and the additive noise $n(t)$ are generated by forming filters $R(s)$ and $N(s)$, excited by independent zero-mean unit white-noise sources. The reference signal is transformed by a block with transfer function $Q(s)$ into the desired output $\hat y(t)$. The preview block is simulated by the transfer function $e^{\pi s}$. The location of this block indicates that the controller uses information on "future" values of the reference signal. Equivalently, preview means that the signal $\hat y(t)$ must be reconstructed with a delay $\pi$. As was shown in [10], in the quasistationary mode, all continuous-time processes in a periodic system are periodic with period $T$. In order
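Two of the notational conventions above lend themselves to a quick numerical check: the $T$-periodicity in $t$ of the displaced pulse-frequency response (1) (every term carries the factor $e^{kj\bar\omega t}$ with $\bar\omega = 2\pi/T$), and the identity $\tilde f(\zeta) = f(\zeta^{-1})\zeta^{\deg f}$, which is realized simply by reversing the coefficient order. The sketch below truncates the infinite sum in (1) and uses a hypothetical $F(s)$ and an example polynomial; neither is taken from the note.

```python
# Illustrative checks of the notation above (F(s) and f are hypothetical examples,
# and the infinite sum in (1) is truncated to |k| <= K).
import cmath
import math

T = 0.5
w_bar = 2.0 * math.pi / T                     # sampling frequency

def F(s):
    # example Laplace image (assumption, not from the paper)
    return 1.0 / (s + 1.0)

def phi_F(s, t, K=200):
    # truncation of the displaced pulse-frequency response (1)
    return sum(F(s + 1j * k * w_bar) * cmath.exp(1j * k * w_bar * t)
               for k in range(-K, K + 1)) / T

# T-periodicity in t: e^{k j w_bar (t+T)} = e^{k j w_bar t}
s0, t0 = 0.3 + 0.2j, 0.123
assert abs(phi_F(s0, t0 + T) - phi_F(s0, t0)) < 1e-9

def poly_eval(coeffs, z):
    # coeffs[i] multiplies z**i (ascending powers)
    return sum(c * z ** i for i, c in enumerate(coeffs))

f = [6.0, -5.0, 1.0]          # f(z) = 6 - 5z + z^2 = (z-2)(z-3): no roots in |z| <= 1
f_tilde = list(reversed(f))   # coefficient reversal realizes the tilde operation

z = 0.7
lhs = poly_eval(f_tilde, z)
rhs = poly_eval(f, 1.0 / z) * z ** (len(f) - 1)
assert abs(lhs - rhs) < 1e-12  # f~(z) == f(1/z) * z**deg f
```

Note that the example $f$ is stable in the sense defined above (roots at $2$ and $3$, outside the closed unit disk), while its tilde polynomial has roots at $1/2$ and $1/3$ and is antistable.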
0018-9286/$25.00 © 2007 IEEE