
Finite Gain Stabilization of Discrete-Time Linear Systems Subject to Actuator Saturation

Xiangyu Bao^1, Zongli Lin^1, Eduardo D. Sontag^2

Abstract

It is shown that, for neutrally stable discrete-time linear systems subject to actuator saturation, finite gain $l_p$ stabilization can be achieved by linear output feedback, for all $p \in (1, \infty]$. An explicit construction of the corresponding feedback laws is given. The feedback laws constructed also result in a closed-loop system that is globally asymptotically stable, and in an input-to-state estimate.

Key Words: input saturation, discrete-time linear systems, finite gain stability, Lyapunov functions.

^1 Department of Electrical Engineering, University of Virginia, Charlottesville, VA 22903.
^2 Department of Mathematics, Rutgers University, New Brunswick, NJ 08903. Supported in part by US Air Force Grant F49620-97-1-0159.

1 Introduction

In this paper, we consider the problem of global stabilization of a discrete-time linear system subject to actuator saturation:

$$P : \quad x^+ = Ax + B\sigma(u + u_1), \quad x \in \mathbb{R}^n,\ u \in \mathbb{R}^m, \qquad y = Cx + u_2, \quad y \in \mathbb{R}^r \tag{1}$$

(we use the notation $x^+$ to indicate a forward shift, that is, for a function $x$ and an integer $t$, $x^+(t)$ is $x(t+1)$), where $u_1 \in \mathbb{R}^m$ is the actuator disturbance, $u_2 \in \mathbb{R}^r$ is the sensor noise, and $\sigma : \mathbb{R}^m \to \mathbb{R}^m$ represents actuator saturation, i.e., $\sigma(s) = [\,\sigma_1(s_1)\ \sigma_2(s_2)\ \cdots\ \sigma_m(s_m)\,]'$ with $\sigma_i(s_i) = \mathrm{sign}(s_i)\min\{1, |s_i|\}$, and the pair $(A,B)$ is stabilizable. The problem of global asymptotic stabilization (internal stabilization) of this system has recently been solved, in [12], using nonlinear state feedback laws and under the condition that all the eigenvalues of $A$ are inside or on the unit circle, and in [1] for neutrally stable open-loop systems using linear state feedback. Here, we are interested not only in closed-loop state space stability (internal stability), but also in stability with respect to both measurement and actuator noises. More specifically, we would like to construct a controller $C$ so that the operator $(u_1, u_2) \mapsto (y_1, y_2)$ defined by the following standard systems interconnection (see Fig. 1)

$$y_1 = P(u_1 + y_2), \qquad y_2 = C(u_2 + y_1) \tag{2}$$

is well defined and finite gain stable.
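The saturation model in (1) is easy to exercise numerically. The following Python sketch (not from the paper; the rotation matrix, input and output matrices, and states are hypothetical choices for illustration) implements the componentwise saturation $\sigma$ and one step of the plant $P$:

```python
import numpy as np

def sat(s):
    """Componentwise standard saturation: sigma_i(s_i) = sign(s_i) * min(1, |s_i|)."""
    return np.clip(s, -1.0, 1.0)

def plant_step(A, B, C, x, u, u1, u2):
    """One step of the plant P in (1): x+ = A x + B sigma(u + u1), y = C x + u2."""
    x_next = A @ x + B @ sat(u + u1)
    y = C @ x + u2
    return x_next, y

# Hypothetical 2-state neutrally stable example: A is a rotation,
# so its eigenvalues lie on the unit circle with trivial Jordan blocks.
th = 0.5
A = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

x = np.array([2.0, -1.0])
# The commanded input u = 3 exceeds the saturation level, so the plant receives 1.
x, y = plant_step(A, B, C, x, np.array([3.0]), np.zeros(1), np.zeros(1))
```

The clipping is what makes the stabilization problem nontrivial: no matter how large the commanded input, the plant never receives more than unit authority per channel.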


Figure 1: Standard closed-loop connection

This problem was first studied for continuous-time systems. It was shown in [7] that, for neutrally stable open-loop systems, linear feedback laws can be used to achieve finite gain stability with respect to every $L_p$-norm. For a neutrally stable system, all open-loop poles are located in the closed left-half plane, with those on the $j\omega$ axis having Jordan blocks of size one. In the case that the full state is available for feedback (i.e., $y_1 = x$ and $u_2 = 0$), it was shown in [6] that if the external input signal is uniformly bounded, then finite-gain $L_p$-stabilization and local asymptotic stabilization can always be achieved simultaneously by linear feedback, no matter where the poles of the open-loop system are. The uniform boundedness condition of [6] was later removed in [5] by resorting to nonlinear feedback. Some other works related to the topic are [4, 8, 9, 11] and the references therein. There are also several studies in the discrete-time setting, showing that some of the continuous-time results carry over to discrete-time (for example, [4, 12]) and some do not (for example, [3]). In particular, [3] shows that the results of [5, 6] on finite gain stabilization of continuous-time systems do not carry over to discrete-time systems. The objective of this paper is to show that the results of [7], however, do carry over to discrete-time systems. More specifically, we show that, for neutrally stable discrete-time linear systems subject to actuator saturation, finite gain $l_p$ stabilization can be achieved by linear output feedback for all $p \in (1, \infty]$.

An explicit construction of the corresponding feedback laws is given. The feedback laws constructed also result in a closed-loop system that is globally asymptotically stable, and provide an input-to-state estimate. While many of the arguments used are conceptually similar to those used in the continuous-time case [7], there are technical aspects that are very different and not totally obvious. For example, unlike in [7], the feedback gain for the discrete-time case needs to be multiplied by a small factor, say $\varepsilon$, which causes the solution of a certain Lyapunov equation, and the subsequent estimation of the solution, to be dependent on $\varepsilon$ (see Lemma 2). As another example, the difficulties in evaluating the difference of the non-quadratic Lyapunov function along the trajectories of the closed-loop system entail a careful estimation by Taylor expansion.

The remainder of the paper is organized as follows. Section 2 states the main results. Section 3 contains the proof of the results that were stated in Section 2. A brief concluding remark is given in Section 4.

2 Preliminary and Problem Statement

We first recall some notation. For a vector $X \in \mathbb{R}^\ell$, $|X|$ denotes the Euclidean norm of $X$, and for a matrix $X \in \mathbb{R}^{m \times n}$, the induced operator norm. For any $p \in [1, \infty)$, we write $l_p^n$ for the set of all sequences $\{x(t)\}_{t=0}^{\infty}$, where $x(t) \in \mathbb{R}^n$, such that $\sum_{t=0}^{\infty} |x(t)|^p < \infty$, and the $l_p$-norm of $x \in l_p^n$ is defined as $\|x\|_{l_p} = \left(\sum_{t=0}^{\infty} |x(t)|^p\right)^{1/p}$. We use $l_\infty^n$ to denote the set of all sequences $\{x(t)\}_{t=0}^{\infty}$, where $x(t) \in \mathbb{R}^n$, such that $\sup_t |x(t)| < \infty$, and the $l_\infty$-norm of $x \in l_\infty^n$ is defined as $\|x\|_{l_\infty} = \sup_t |x(t)|$.

The objective of this paper is to show the following result concerning the global asymptotic stabilization as well as $l_p$-stabilization of the system $P$, as given by (1), using linear output feedback.
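As an illustrative aside (not part of the paper), the $l_p$ and $l_\infty$ norms just defined can be evaluated for a finite-horizon sequence of vectors; the sample sequence below is a hypothetical choice:

```python
import numpy as np

def lp_norm(xs, p):
    """l_p norm of a (finite-horizon) sequence of vectors, p in [1, inf)."""
    return sum(np.linalg.norm(x) ** p for x in xs) ** (1.0 / p)

def linf_norm(xs):
    """l_inf norm: the supremum of the Euclidean norms along the sequence."""
    return max(np.linalg.norm(x) for x in xs)

# A three-step sequence in R^2 with pointwise norms 5, 0, 1.
xs = [np.array([3.0, 4.0]), np.array([0.0, 0.0]), np.array([1.0, 0.0])]
```

For an infinite sequence one would, of course, take the limit of the partial sums; the finite-horizon version suffices to illustrate the definitions.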

Theorem 1 Consider the system (1). Let $A$ be neutrally stable, i.e., all the eigenvalues of $A$ are inside or on the unit circle, with those on the unit circle having all Jordan blocks of size one. Also assume that $(A,B)$ is stabilizable and $(A,C)$ is detectable. Then, there exists a linear observer-based output feedback law of the form

$$\hat{x}^+ = A\hat{x} + B\sigma(F\hat{x}) - L(y - C\hat{x}), \qquad u = F\hat{x}, \tag{3}$$

which has the following properties:

1. It is finite gain $l_p$-stable for all $p \in (1, \infty]$, i.e., there exists a $\gamma_p > 0$ such that

$$\|x\|_{l_p} \le \gamma_p \left( \|u_1\|_{l_p} + \|u_2\|_{l_p} \right), \qquad \forall u_1 \in l_p^m,\ u_2 \in l_p^r, \text{ and } x(0) = 0,\ \hat{x}(0) = 0. \tag{4}$$

2. In the absence of actuator and sensor noises $u_1$ and $u_2$, the equilibrium $(x, \hat{x}) = (0, 0)$ is globally asymptotically stable.

Remark 1 We will in fact obtain the following stronger ISS-like property (cf. [10] and references there):

$$\|(x, \hat{x})\|_{l_p} \le \beta_p(|x(0)| + |\hat{x}(0)|) + \gamma_p \left( \|u_1\|_{l_p} + \|u_2\|_{l_p} \right) \tag{5}$$

where $\beta_p$ is a class-$\mathcal{K}$ function. Observe that the single estimate (5) encompasses both the gain estimate (4) and asymptotic stability. Obviously, (4) is the special case of (5) for zero initial states. On the other hand, when applied with arbitrary initial states but $u_1 = u_2 = 0$, there follows that $(x, \hat{x})$ is in $l_p$, which implies, in particular, that $(x(t), \hat{x}(t))$ must converge to zero as $t \to \infty$ (global attraction) and that $|(x(t), \hat{x}(t))|$ is bounded by $\beta_p(|x(0)| + |\hat{x}(0)|)$ (stability).

3 Proof of Theorem 1

The proof of Theorem 1 will follow readily from the following proposition, which we establish first.

Proposition 1 Let $A$ be orthogonal (i.e., $A'A = I$), and suppose that the pair $(A,B)$ is controllable. Then, the system

$$x^+ = Ax + B\sigma(-\varepsilon B'Ax + u), \qquad x \in \mathbb{R}^n,\ u \in \mathbb{R}^m \tag{6}$$

is finite gain $l_p$-stable, $p \in (1, \infty]$, for sufficiently small $\varepsilon > 0$. Moreover, for each $p \in (1, \infty]$ there exist a real $\gamma_p$, an $\varepsilon^* \in (0,1]$, and a class-$\mathcal{K}$ function $\beta_p$ such that, for all $\varepsilon \in (0, \varepsilon^*]$,

$$\|x\|_{l_p} \le \gamma_p \|u\|_{l_p} + \beta_p(|x(0)|) \tag{7}$$

for all inputs $u \in l_p^m$ and all initial states $x(0)$.
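A quick numerical experiment (an illustrative sketch, not from the paper) exhibits the behavior claimed in Proposition 1: with a hypothetical orthogonal $A$ (a plane rotation), a single input channel, a small $\varepsilon$, and $u = 0$, the state norm is nonincreasing and converges to zero even from an initial condition far outside the saturation range:

```python
import numpy as np

def sat(s):
    return np.clip(s, -1.0, 1.0)

def step(A, B, eps, x, u):
    """One step of (6): x+ = A x + B sigma(-eps * B' A x + u)."""
    return A @ x + B @ sat(-eps * (B.T @ (A @ x)) + u)

# Hypothetical data: a rotation (orthogonal A) with one input channel.
th = 1.0
A = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])
B = np.array([[0.0], [1.0]])   # (A, B) is controllable for this rotation
eps = 0.1

x = np.array([50.0, -30.0])    # far from the origin: the feedback starts saturated
norms = [np.linalg.norm(x)]
for _ in range(5000):
    x = step(A, B, eps, x, np.zeros(1))
    norms.append(np.linalg.norm(x))
```

Because $|Ax| = |x|$ and the feedback term $-\varepsilon B'Ax$ always opposes the component of the state it can reach, each step can only shrink the Euclidean norm; the proof below quantifies this with an $\varepsilon$-dependent Lyapunov function.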

To prove this proposition, we need to establish a few lemmas.

Lemma 1 For any $p > l > 0$, there exist two scalars $M_1, M_2 > 0$ such that, for any two positive scalars $\Delta$ and $\delta$,

$$\Delta^{p-l}\,\delta^l \le M_1 \Delta^p + M_2 \delta^p \tag{8}$$

and consequently, for any $n > 0$ and $\varepsilon > 0$,

$$\Delta^{p-l}\,\delta^l \le M_1 \varepsilon^n \Delta^p + \varepsilon^{n(l-p)/l} M_2 \delta^p. \tag{9}$$

Proof of Lemma 1. Let $h : \mathbb{R}_+ \to \mathbb{R}_+$ be defined as $h(v) = v^{l/(p-l)}$, which is continuous and strictly increasing with $h(0) = 0$ and $h(\infty) = \infty$, and let $k(v) = v^{(p-l)/l}$ be its pointwise inverse. Define

$$H(x) = \int_0^x h(v)\,dv = \frac{p-l}{p}\, x^{p/(p-l)} \tag{10}$$

and

$$K(x) = \int_0^x k(v)\,dv = \frac{l}{p}\, x^{p/l}. \tag{11}$$

Letting $a = \Delta^{p-l}$ and $b = \delta^l$, it follows from Young's inequality (see e.g. [2]), $ab \le H(a) + K(b)$ for all $a, b \in \mathbb{R}_+$, that

$$\Delta^{p-l}\,\delta^l \le \frac{p-l}{p}\,\Delta^p + \frac{l}{p}\,\delta^p = M_1 \Delta^p + M_2 \delta^p, \tag{12}$$

which also trivially implies (9), on applying (8) to the pair $\varepsilon^{n/p}\Delta$ and $\varepsilon^{-n(p-l)/(pl)}\delta$. $\Box$
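The constants $M_1 = (p-l)/p$ and $M_2 = l/p$ produced by the proof can be spot-checked numerically; the following sketch (illustrative, with arbitrarily chosen exponent pairs) evaluates both sides of (8) at random points:

```python
import random

def lemma1_lhs_rhs(p, l, Delta, delta):
    """Both sides of (8) with the constants from the proof: M1 = (p-l)/p, M2 = l/p."""
    lhs = Delta ** (p - l) * delta ** l
    rhs = ((p - l) / p) * Delta ** p + (l / p) * delta ** p
    return lhs, rhs

random.seed(0)
checks = []
for p, l in [(2.0, 1.0), (3.0, 0.5), (1.5, 1.2)]:   # arbitrary pairs with p > l > 0
    for _ in range(200):
        Delta, delta = random.uniform(1e-3, 10.0), random.uniform(1e-3, 10.0)
        lhs, rhs = lemma1_lhs_rhs(p, l, Delta, delta)
        checks.append(lhs <= rhs + 1e-9)
```

This is, of course, only a sanity check of the algebra; the inequality itself is Young's inequality in disguise.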

Lemma 2 Let $A$ and $B$ be as given in Proposition 1. Then, for any $\varepsilon > 0$ such that $\varepsilon B'B < 2I$, $A(\varepsilon) = A - \varepsilon BB'A$ is asymptotically stable. Moreover, let $P(\varepsilon)$ be the unique positive definite solution to the Lyapunov equation

$$A'(\varepsilon) P(\varepsilon) A(\varepsilon) - P(\varepsilon) = -I. \tag{13}$$

Then, there exists an $\varepsilon^* > 0$ such that

$$\frac{\alpha_1}{\varepsilon}\, I \le P(\varepsilon) \le \frac{\alpha_2}{\varepsilon}\, I, \qquad \forall \varepsilon \in (0, \varepsilon^*] \tag{14}$$

for some positive constants $\alpha_1$ and $\alpha_2$ independent of $\varepsilon$.

Proof of Lemma 2: The asymptotic stability of $A(\varepsilon)$ follows from a simple Lyapunov/LaSalle argument [1]. Let $\varepsilon_1 > 0$ be such that $\varepsilon B'B < 2I$ for all $\varepsilon \in (0, \varepsilon_1]$. We recall that the solution to the Lyapunov equation (13) is given by

$$P(\varepsilon) = \sum_{k=0}^{\infty} (A^k(\varepsilon))' A^k(\varepsilon) = \sum_{k=0}^{\infty} [(A - \varepsilon BB'A)']^k [(A - \varepsilon BB'A)]^k. \tag{15}$$

Using the fact that $AA' = I$, we have

$$(A - \varepsilon BB'A)'(A - \varepsilon BB'A) = I - 2\varepsilon A'BB'A + \varepsilon^2 A'BB'BB'A = I - \varepsilon A'B(2I - \varepsilon B'B)B'A. \tag{16}$$

Using now the fact that $\varepsilon B'B < 2I$ for $\varepsilon \in (0, \varepsilon_1]$, we know that there exists an $\varepsilon_2 \in (0, \varepsilon_1]$ such that

$$\frac{1}{2}\, I \le (A - \varepsilon BB'A)'(A - \varepsilon BB'A) \le I, \qquad \forall \varepsilon \in (0, \varepsilon_2]. \tag{17}$$

Again using the fact that $A'A = I$, we verify in a straightforward way that

$$(A - \varepsilon BB'A)^n = A^n - \varepsilon\, C_{A,B} C_{A,B}' A^n + \varepsilon^2 M_1(\varepsilon) \tag{18}$$

where $M_1(\varepsilon)$ is a polynomial matrix in $\varepsilon$ of order $n-2$, $n$ being the order of the system (6), and

$$C_{A,B} = [\,B\ \ AB\ \cdots\ A^{n-1}B\,]$$

is the controllability matrix of the pair $(A,B)$ and is of full rank. It then follows that

$$((A - \varepsilon BB'A)^n)'(A - \varepsilon BB'A)^n = \left((A^n)' - \varepsilon (A^n)' C_{A,B} C_{A,B}' + \varepsilon^2 M_1'(\varepsilon)\right)\left(A^n - \varepsilon\, C_{A,B} C_{A,B}' A^n + \varepsilon^2 M_1(\varepsilon)\right) = I - 2\varepsilon (A^n)' C_{A,B} C_{A,B}' A^n + \varepsilon^2 M_2(\varepsilon) \tag{19}$$

where $M_2(\varepsilon)$ is a symmetric polynomial matrix in $\varepsilon$ of order $2n-2$. Since $C_{A,B}$ is of full rank, and because $A$ is nonsingular, there exists an $\varepsilon^* \in (0, \varepsilon_2]$ such that

$$0 < (1 - \varepsilon M_1^0)\, I \le ((A - \varepsilon BB'A)^n)'(A - \varepsilon BB'A)^n \le (1 - \varepsilon M_2^0)\, I < I, \qquad \forall \varepsilon \in (0, \varepsilon^*] \tag{20}$$

for some constants $M_1^0, M_2^0 > 0$ independent of $\varepsilon$. Using (17), (20) and the fact that $A'A = I$ in (15), and grouping the series by powers of $(A - \varepsilon BB'A)^n$, we have that for all $\varepsilon \in (0, \varepsilon^*]$,

$$P(\varepsilon) \le \sum_{i=0}^{n-1} [(A - \varepsilon BB'A)^i]'(A - \varepsilon BB'A)^i \sum_{k=0}^{\infty} (1 - \varepsilon M_2^0)^k \le \frac{n}{M_2^0\, \varepsilon}\, I = \frac{\alpha_2}{\varepsilon}\, I \tag{21}$$

and

$$P(\varepsilon) \ge \sum_{i=0}^{n-1} [(A - \varepsilon BB'A)^i]'(A - \varepsilon BB'A)^i\, \frac{1}{2^{\,n-1}} \sum_{k=0}^{\infty} (1 - \varepsilon M_1^0)^k \ge \frac{n}{2^{\,n-1} M_1^0\, \varepsilon}\, I = \frac{\alpha_1}{\varepsilon}\, I \tag{22}$$

where $\alpha_1 = \dfrac{n}{2^{\,n-1} M_1^0}$ and $\alpha_2 = \dfrac{n}{M_2^0}$. $\Box$
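The $1/\varepsilon$ scaling in (14) can be observed numerically. The sketch below (illustrative; the rotation $A$, the matrix $B$, the truncation length, and the tested values of $\varepsilon$ are hypothetical choices) sums the series (15) and checks that $\varepsilon$ times the eigenvalues of $P(\varepsilon)$ stays within fixed positive bounds:

```python
import numpy as np

def lyap_P(A_eps, terms=20000):
    """Truncated series solution of A(eps)' P A(eps) - P = -I:  P = sum_k (A^k)'(A^k)."""
    n = A_eps.shape[0]
    P = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(terms):
        P += Ak.T @ Ak
        Ak = A_eps @ Ak
    return P

th = 1.0
A = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])
B = np.array([[0.0], [1.0]])

scaled_bounds = []
for eps in (0.2, 0.1, 0.05):
    A_eps = A - eps * B @ B.T @ A
    P = lyap_P(A_eps)
    lam = np.linalg.eigvalsh(P)
    # eps * (eigenvalues of P(eps)) should stay within fixed bounds, as in (14)
    scaled_bounds.append((eps * lam.min(), eps * lam.max()))
```

In practice one would solve (13) with a dedicated discrete Lyapunov solver rather than by series summation; the series form is used here only because it mirrors (15).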

Lemma 3 Let $A(\varepsilon)$ be as given in Proposition 1 and $P(\varepsilon)$ as defined in Lemma 2. Then, for any $p \in (1, \infty)$, there exists an $\varepsilon^* > 0$ such that

$$[x'A'(\varepsilon)P(\varepsilon)A(\varepsilon)x]^{p/2} - [x'P(\varepsilon)x]^{p/2} \le -\kappa\, \varepsilon^{\frac{2-p}{2}}\, |x|^p, \qquad \varepsilon \in (0, \varepsilon^*], \tag{23}$$

where $\kappa > 0$ is some constant independent of $\varepsilon$.

Proof of Lemma 3. Inequality (23) holds trivially for $x = 0$. Hence in what follows, we assume, without loss of generality, that $x \ne 0$. For simplicity, we introduce from now on the following notation:

$$\Lambda = x'A'(\varepsilon)P(\varepsilon)A(\varepsilon)x \tag{24}$$

(where $x$ and $\varepsilon$ will be clear from the context). By the definition of $P(\varepsilon)$, we have

$$\Lambda - x'P(\varepsilon)x = -x'x. \tag{25}$$

From Lemma 2, there exists an $\varepsilon_1 > 0$ such that for all $\varepsilon \in (0, \varepsilon_1]$,

$$\frac{x'x}{x'P(\varepsilon)x} \le \frac{4}{5}, \qquad \forall x \ne 0. \tag{26}$$

With (25) and (26), we can continue the proof using Taylor expansion with remainder:

$$[x'A'(\varepsilon)P(\varepsilon)A(\varepsilon)x]^{p/2} - [x'P(\varepsilon)x]^{p/2} = [x'P(\varepsilon)x - x'x]^{p/2} - [x'P(\varepsilon)x]^{p/2} \le [x'P(\varepsilon)x]^{p/2}\left[1 - \frac{p}{2}\,\frac{x'x}{x'P(\varepsilon)x} + \theta\left(\frac{x'x}{x'P(\varepsilon)x}\right)^2\right] - [x'P(\varepsilon)x]^{p/2} = -\frac{p}{2}\,[x'P(\varepsilon)x]^{\frac{p-2}{2}}|x|^2 + \theta\, [x'P(\varepsilon)x]^{\frac{p-4}{2}}|x|^4, \qquad \varepsilon \in (0, \varepsilon_1], \tag{27}$$

where $\theta = \max_{|z| \le 4/5} \frac{p}{8}\,|p-2|\,(1+z)^{\frac{p-4}{2}}$ is a constant independent of $\varepsilon$. Again by Lemma 2, there exists an $\varepsilon^* \in (0, \varepsilon_1]$ such that

$$[\Lambda]^{p/2} - [x'P(\varepsilon)x]^{p/2} \le -\kappa\, \varepsilon^{\frac{2-p}{2}}\, |x|^p, \qquad \varepsilon \in (0, \varepsilon^*] \tag{28}$$

for some $\kappa > 0$ independent of $\varepsilon$. $\Box$

Lemma 4 Let $A$ and $B$ be as given in Proposition 1. For any $l \in [1, \infty)$ and any $\varepsilon \in (0,1]$,

$$|\sigma(-\varepsilon B'Ax + u)|^l \le 2^{l-1} \varepsilon^l |B|^l |x|^l + 2^{l-1} |u|^l. \tag{29}$$

Proof of Lemma 4. Since $\sigma$ is a standard saturation function and $|A| = 1$, for any $l \ge 1$, we have

$$|\sigma(-\varepsilon B'Ax + u)|^l \le (\varepsilon |B||x| + |u|)^l \le 2^{l-1} \varepsilon^l |B|^l |x|^l + 2^{l-1} |u|^l, \tag{30}$$

where the last inequality follows from Jensen's inequality applied to the convex function $s^l$:

$$(a+b)^l \le \frac{1}{2}(2a)^l + \frac{1}{2}(2b)^l, \qquad \forall a, b \ge 0. \qquad \Box$$

Lemma 5 Let $A$ and $B$ be as given in Proposition 1. Pick any $x \in \mathbb{R}^n$ and $u \in \mathbb{R}^m$, any number $\mu \ge 3$, and any nonnegative real number $l$. Denote $\tilde{x} = -\varepsilon B'Ax + u$. Then, provided $|x| > \mu|B\sigma(\tilde{x})|$, we have:

$$|Ax + B\sigma(\tilde{x})|^l \le |x|^l + l\,|x|^{l-2}\, x'A'B\sigma(\tilde{x}) + M\, |x|^{l-2}\, |B\sigma(\tilde{x})|^2, \tag{31}$$

for some constant $M > 0$ which is independent of $\varepsilon$.

Proof of Lemma 5. We first note that, since $|x| > \mu|B\sigma(\tilde{x})| \ge 3|B\sigma(\tilde{x})|$,

$$\frac{\left|2x'A'B\sigma(\tilde{x}) + |B\sigma(\tilde{x})|^2\right|}{|x|^2} \le \frac{4}{5}. \tag{32}$$

Hence, using Taylor expansion with remainder, we have

$$|Ax + B\sigma(\tilde{x})|^l = \left[|x|^2 + 2x'A'B\sigma(\tilde{x}) + |B\sigma(\tilde{x})|^2\right]^{l/2} = |x|^l \left[1 + \frac{2x'A'B\sigma(\tilde{x}) + |B\sigma(\tilde{x})|^2}{|x|^2}\right]^{l/2} \le |x|^l \left[1 + \frac{l}{2}\,\frac{2x'A'B\sigma(\tilde{x}) + |B\sigma(\tilde{x})|^2}{|x|^2} + \theta \left(\frac{2x'A'B\sigma(\tilde{x}) + |B\sigma(\tilde{x})|^2}{|x|^2}\right)^2\right] \le |x|^l + l\,|x|^{l-2}\, x'A'B\sigma(\tilde{x}) + \frac{l}{2}\,|x|^{l-2}|B\sigma(\tilde{x})|^2 + \theta\, |x|^{l-4}\left((2 + 1/\mu)|x||B\sigma(\tilde{x})|\right)^2 \tag{33}$$

where $\theta = \max_{|z| \le 4/5} \frac{l}{8}\,|l-2|\,(1+z)^{\frac{l-4}{2}}$ is a constant independent of $\varepsilon$. So we can see that the inequality (31) holds for $M = \frac{l}{2} + \theta\,(2 + 1/\mu)^2$. $\Box$

We are now ready to prove Proposition 1.

Proof of Proposition 1: We separate the proof for $p \in (1, \infty)$ and for $p = \infty$.

Proof for $p \in (1, \infty)$. For clarity, let us repeat here the system equation (6):

$$x^+ = Ax + B\sigma(-\varepsilon B'Ax + u), \qquad x \in \mathbb{R}^n,\ u \in \mathbb{R}^m. \tag{34}$$

This may also be rewritten as:

$$x^+ = A(\varepsilon)x + B(-\tilde{x} + \sigma(\tilde{x}) + u), \qquad x \in \mathbb{R}^n,\ u \in \mathbb{R}^m, \tag{35}$$

where $A(\varepsilon) = A - \varepsilon BB'A$ and $\tilde{x} = -\varepsilon B'Ax + u$. For this system, define the function $V_1$ as:

$$V_1(x) = (x'P(\varepsilon)x)^{p/2}, \tag{36}$$

where $P(\varepsilon)$ is as given in Lemma 2. We next evaluate the increments $V_1(x^+(t)) - V_1(x(t))$, which we denote as "$\Delta V_1$" for short, along any given trajectory of (35). It is convenient to treat separately the cases $|x| > \mu|B\sigma(\tilde{x})|$ and $|x| \le \mu|B\sigma(\tilde{x})|$. Here $\mu \ge 3$ is a number to be specified soon.

Case 1: $|x| > \mu|B\sigma(\tilde{x})|$. Using the definition of $V_1$, we now give an upper bound on $\Delta V_1$ along the trajectories of the system (35). To simplify the equations, we introduce the following notation:

$$\Xi = 2\varepsilon\, x'A'(\varepsilon)P(\varepsilon)BB'Ax + 2x'A'(\varepsilon)P(\varepsilon)B\sigma(\tilde{x}) + (\varepsilon B'Ax + \sigma(\tilde{x}))'B'P(\varepsilon)B(\varepsilon B'Ax + \sigma(\tilde{x})),$$

in addition to $\Lambda$ as defined in Equation (24). Thus:

$$\Delta V_1 = V_1^+ - V_1 = [(x^+)'P(\varepsilon)x^+]^{p/2} - [x'P(\varepsilon)x]^{p/2} = \left[[A(\varepsilon)x + \varepsilon BB'Ax + B\sigma(\tilde{x})]'P(\varepsilon)[A(\varepsilon)x + \varepsilon BB'Ax + B\sigma(\tilde{x})]\right]^{p/2} - [x'P(\varepsilon)x]^{p/2} = [\Lambda + \Xi]^{p/2} - [x'P(\varepsilon)x]^{p/2} = [\Lambda]^{p/2}\left[1 + \frac{\Xi}{\Lambda}\right]^{p/2} - [x'P(\varepsilon)x]^{p/2}. \tag{37}$$

By Lemma 2, there exist an $\varepsilon_1 > 0$ and a $\mu \ge 3$ independent of $\varepsilon$, such that for all $|x| > \mu|B\sigma(\tilde{x})|$,

$$\frac{|\Xi|}{\Lambda} \le \frac{4}{5}, \qquad \varepsilon \in (0, \varepsilon_1]. \tag{38}$$

To see this, let $\varepsilon_1 > 0$ be such that (14) of Lemma 2 and (17) in the proof of Lemma 2 both hold for all $\varepsilon \in (0, \varepsilon_1]$. Then, for all $\varepsilon \in (0, \varepsilon_1]$, using $|P(\varepsilon)| \le \alpha_2/\varepsilon$, $|A(\varepsilon)| \le 1$, and $|B\sigma(\tilde{x})| < |x|/\mu$, we have

$$|\Xi| \le \left[2\alpha_2|B|^2 + \frac{2\alpha_2}{\mu\varepsilon} + \frac{\alpha_2}{\varepsilon}\left(\varepsilon|B|^2 + \frac{1}{\mu}\right)^2\right]|x|^2 \tag{39}$$

and

$$\Lambda \ge \frac{\alpha_1}{2\varepsilon}\,|x|^2, \tag{40}$$

from which it is clear that there exist $\varepsilon_1$ and $\mu \ge 3$ such that (38) holds. Next, we may use a Taylor expansion with remainder to continue the bounding of $\Delta V_1$ as follows:

$$\Delta V_1 \le [\Lambda]^{p/2}\left[1 + \frac{p}{2}\,\frac{\Xi}{\Lambda} + \theta\left(\frac{\Xi}{\Lambda}\right)^2\right] - [x'P(\varepsilon)x]^{p/2}, \tag{41}$$

where $\theta = \max_{|z| \le 4/5} \frac{p}{8}\,|p-2|\,(1+z)^{\frac{p-4}{2}}$ is a constant independent of $\varepsilon$. Noting that $\varepsilon B'Ax = u - \tilde{x}$, we may rewrite

$$\Xi = 2x'A'(\varepsilon)P(\varepsilon)B(\sigma(\tilde{x}) - \tilde{x}) + 2x'A'(\varepsilon)P(\varepsilon)Bu + (\sigma(\tilde{x}) - \tilde{x} + u)'B'P(\varepsilon)B(\sigma(\tilde{x}) - \tilde{x} + u).$$

By Lemma 3 and Lemma 2, there exists an $\varepsilon_2 \in (0, \varepsilon_1]$ such that for any $\varepsilon \in (0, \varepsilon_2]$, we have

$$\Delta V_1 \le -\kappa\,\varepsilon^{\frac{2-p}{2}}|x|^p + \frac{p}{2}[\Lambda]^{\frac{p-2}{2}}\,\Xi + \theta\,[\Lambda]^{\frac{p-4}{2}}\,\Xi^2 \le -\kappa\,\varepsilon^{\frac{2-p}{2}}|x|^p + \alpha\,\varepsilon^{\frac{2-p}{2}}|x|^{p-2}\left[2|x||P(\varepsilon)B||\tilde{x}-\sigma(\tilde{x})| + 2|x||P(\varepsilon)B||u| + |P(\varepsilon)|\,|B(\sigma(\tilde{x})-\tilde{x}+u)|^2\right] + \beta\,\varepsilon^{\frac{4-p}{2}}|x|^{p-4}\,\Xi^2, \tag{42}$$

where $\kappa > 0$ is as defined in Lemma 3, and $\alpha, \beta > 0$ are some constants independent of $\varepsilon$.

Before continuing, we digress to observe that

$$|\tilde{x} - \sigma(\tilde{x})| \le \tilde{x}'\sigma(\tilde{x}). \tag{43}$$

Using (43), Lemma 1, Lemma 4, and the condition $|x| > \mu|B\sigma(\tilde{x})|$, we can show that there exists an $\varepsilon_3 \in (0, \varepsilon_2]$ such that for all $\varepsilon \in (0, \varepsilon_3]$ the estimation of $\Delta V_1$ can be concluded as follows:

$$\Delta V_1 \le -\frac{\kappa}{2}\,\varepsilon^{\frac{2-p}{2}}|x|^p + 2\alpha\,\varepsilon^{\frac{2-p}{2}}|P(\varepsilon)B|\,|x|^{p-1}\tilde{x}'\sigma(\tilde{x}) + M_1^a\,\varepsilon\max\{\varepsilon, \varepsilon^{p-1}\}|x|^p + M_2^a(\varepsilon)|u|^p, \tag{44}$$

where $M_1^a > 0$ and $M_2^a(\varepsilon) > 0$, with $M_1^a$ independent of $\varepsilon$, are defined in an obvious way. In deriving (44), we have also used the fact that $|x|^{p-2} < (\mu|B\sigma(\tilde{x})|)^{p-2}$ for $p < 2$ and $B\sigma(\tilde{x}) \ne 0$.

Case 2: $|x| \le \mu|B\sigma(\tilde{x})|$. By using Lemma 2, Lemma 3 and Lemma 4, $\Delta V_1$ along the trajectories of (35) is bounded as follows:

$$\Delta V_1 = [(x^+)'P(\varepsilon)x^+]^{p/2} - [x'P(\varepsilon)x]^{p/2} \le |P(\varepsilon)|^{p/2}|Ax + B\sigma(\tilde{x})|^p - [x'P(\varepsilon)x]^{p/2} + [\Lambda]^{p/2} \le -\kappa\,\varepsilon^{\frac{2-p}{2}}|x|^p + \left(\frac{\alpha_2}{\varepsilon}\right)^{p/2}(|x| + |B\sigma(\tilde{x})|)^p \le -\kappa\,\varepsilon^{\frac{2-p}{2}}|x|^p + (\mu+1)^p\left(\frac{\alpha_2}{\varepsilon}\right)^{p/2}|B\sigma(\tilde{x})|^p \le -\kappa\,\varepsilon^{\frac{2-p}{2}}|x|^p + M_1^b\,\varepsilon^{\frac{2-p}{2}}\varepsilon^{p-1}|x|^p + M_2^b(\varepsilon)|u|^p, \qquad \varepsilon \in (0, \varepsilon_3], \tag{45}$$

where $\kappa > 0$ and $\alpha_2 > 0$ are as defined in Lemma 3 and Lemma 2 respectively, and $M_1^b > 0$, $M_2^b(\varepsilon) > 0$ are constants with $M_1^b$ being independent of $\varepsilon$.

Summarizing, we may combine Case 1 with Case 2, to obtain:

$$\Delta V_1 \le \begin{cases} -\dfrac{\kappa}{2}\,\varepsilon^{\frac{2-p}{2}}|x|^p + 2\alpha\,\varepsilon^{\frac{2-p}{2}}|P(\varepsilon)B|\,|x|^{p-1}\tilde{x}'\sigma(\tilde{x}) + M_1\varepsilon\max\{\varepsilon, \varepsilon^{p-1}\}|x|^p + M_2(\varepsilon)|u|^p, & \text{if } |x| > \mu|B\sigma(\tilde{x})|,\\[1ex] -\kappa\,\varepsilon^{\frac{2-p}{2}}|x|^p + M_1\,\varepsilon^{\frac{2-p}{2}}\varepsilon^{p-1}|x|^p + M_2(\varepsilon)|u|^p, & \text{if } |x| \le \mu|B\sigma(\tilde{x})|, \end{cases} \tag{46}$$

where $M_1 = \max\{M_1^a, M_1^b\}$ and $M_2(\varepsilon) = \max\{M_2^a(\varepsilon), M_2^b(\varepsilon)\}$.

For system (34), we next define another function:

$$V_0(x) = |x|^{p+1}. \tag{47}$$

An estimation of its increments $\Delta V_0$ along the trajectories of (34) can also be carried out by separately considering each of the cases $|x| > \mu|B\sigma(\tilde{x})|$ and $|x| \le \mu|B\sigma(\tilde{x})|$.

Case 1: $|x| \le \mu|B\sigma(\tilde{x})|$. By Lemma 4, for any $\varepsilon \in (0, \varepsilon_3]$,

$$\Delta V_0 = |Ax + B\sigma(\tilde{x})|^{p+1} - |x|^{p+1} \le |Ax + B\sigma(\tilde{x})|^{p+1} \le (|x| + |B\sigma(\tilde{x})|)^{p+1} \le ((\mu+1)|B\sigma(\tilde{x})|)^{p+1} \le N_1^a\,\varepsilon|x|^p + N_2^a|u|^p \tag{48}$$

for some positive constants $N_1^a$ and $N_2^a$ independent of $\varepsilon$. In deriving (48), we have used the fact that both $\sigma$ and $\varepsilon$ are bounded.

Case 2: $|x| > \mu|B\sigma(\tilde{x})|$. By Lemma 5, Lemma 4 and Lemma 1, there exists an $\varepsilon_4 \in (0, \varepsilon_3]$ such that for any $\varepsilon \in (0, \varepsilon_4]$,

$$\Delta V_0 = |Ax + B\sigma(\tilde{x})|^{p+1} - |x|^{p+1} \le |x|^{p+1} + (p+1)|x|^{p-1}x'A'B\sigma(\tilde{x}) + N_1^b|B\sigma(\tilde{x})|^2|x|^{p-1} - |x|^{p+1} \le -\frac{p+1}{\varepsilon}\,|x|^{p-1}\tilde{x}'\sigma(\tilde{x}) + N_1^c\,\varepsilon|x|^p + N_2^c(\varepsilon)|u|^p \tag{49}$$

where $N_1^c, N_1^b > 0$ and $N_2^c(\varepsilon) > 0$ are constants, and $N_1^b, N_1^c$ are independent of $\varepsilon$. In deriving (49), the first inequality is by Lemma 5, and the second inequality is a consequence of the fact that $\sigma$ is bounded and of Lemmas 4 and 1 (note that $\varepsilon\, x'A'B\sigma(\tilde{x}) = (u - \tilde{x})'\sigma(\tilde{x})$). Combining Case 1 with Case 2, we have, for any $\varepsilon \in (0, \varepsilon_4]$:

$$\Delta V_0 \le \begin{cases} -\dfrac{p+1}{\varepsilon}\,|x|^{p-1}\tilde{x}'\sigma(\tilde{x}) + N_1\varepsilon|x|^p + N_2(\varepsilon)|u|^p, & \text{if } |x| > \mu|B\sigma(\tilde{x})|,\\[1ex] N_1\varepsilon|x|^p + N_2(\varepsilon)|u|^p, & \text{if } |x| \le \mu|B\sigma(\tilde{x})|, \end{cases} \tag{50}$$

where $N_1 = \max\{N_1^a, N_1^c\}$ and $N_2(\varepsilon) = \max\{N_2^a, N_2^c(\varepsilon)\}$.

Finally, we define the following Lyapunov (or "storage") function:

$$V(x) = V_1(x) + \varpi V_0(x), \tag{51}$$

where

$$\varpi = \frac{2\alpha}{p+1}\,\varepsilon^{\frac{4-p}{2}}\,|P(\varepsilon)B|.$$

It is straightforward to verify that there exists some $\varepsilon^* \in (0, \varepsilon_4]$ such that

$$\Delta V(x) \le -\rho\,\varepsilon^{\frac{2-p}{2}}|x|^p + \tilde{\gamma}(\varepsilon)|u|^p, \qquad \forall \varepsilon \in (0, \varepsilon^*] \tag{52}$$

for some $\rho \in (0, \kappa/2)$ and $\tilde{\gamma}(\varepsilon) > 0$. Now consider an arbitrary initial state $x(0)$ and control $u$, and the ensuing trajectory $x$. Summing both sides of (52) from $t = 0$ to $\infty$ and using the fact that $V$ is nonnegative, we conclude that:

$$\|x\|_{l_p}^p \le \frac{\varepsilon^{\frac{p-2}{2}}}{\rho}\left(\tilde{\gamma}(\varepsilon)\|u\|_{l_p}^p + \beta_{p0}(|x(0)|)\right), \tag{53}$$

where $\beta_{p0}(r) = \varpi r^{p+1} + \left(\frac{\alpha_2}{\varepsilon}\right)^{p/2} r^p$. This implies that

$$\|x\|_{l_p} \le \gamma_p\|u\|_{l_p} + \beta_p(|x(0)|), \tag{54}$$

where $\gamma_p = \left(\varepsilon^{\frac{p-2}{2}}\,\tilde{\gamma}(\varepsilon)/\rho\right)^{1/p}$ and $\beta_p(r) = \left(\varepsilon^{\frac{p-2}{2}}\,\beta_{p0}(r)/\rho\right)^{1/p}$.

Proof for $p = \infty$. From (52) we get, for $p = 2$,

$$\Delta V(x) \le -\rho|x|^2 + \tilde{\gamma}(\varepsilon)\|u\|_{l_\infty}^2. \tag{55}$$

Hence, $\Delta V(x)$ is negative outside the ball of radius $(\tilde{\gamma}(\varepsilon)/\rho)^{1/2}\|u\|_{l_\infty}$ centered at the origin, from which it follows that, for any state $x(t)$ in the trajectory:

$$V(x(t)) \le \varpi\left(\frac{\tilde{\gamma}(\varepsilon)}{\rho}\right)^{3/2}\|u\|_{l_\infty}^3 + \frac{\alpha_2}{\varepsilon}\cdot\frac{\tilde{\gamma}(\varepsilon)}{\rho}\,\|u\|_{l_\infty}^2 + \beta_{10}(|x(0)|), \tag{56}$$

where $\beta_{10}(r) = \frac{\alpha_2}{\varepsilon}\,r^2 + \varpi r^3$. If $\|u\|_{l_\infty} \le 1$, we have:

$$\frac{\alpha_1}{\varepsilon}\,|x(t)|^2 \le x(t)'P(\varepsilon)x(t) \le V(x(t)) \tag{57}$$

and

$$V(x(t)) \le \left[\varpi\left(\frac{\tilde{\gamma}(\varepsilon)}{\rho}\right)^{3/2} + \frac{\alpha_2}{\varepsilon}\cdot\frac{\tilde{\gamma}(\varepsilon)}{\rho}\right]\|u\|_{l_\infty}^2 + \beta_{10}(|x(0)|), \tag{58}$$

which implies the following estimate for the entire trajectory:

$$\|x\|_{l_\infty} \le \left\{\frac{\varepsilon}{\alpha_1}\left[\varpi\left(\frac{\tilde{\gamma}(\varepsilon)}{\rho}\right)^{3/2} + \frac{\alpha_2}{\varepsilon}\cdot\frac{\tilde{\gamma}(\varepsilon)}{\rho}\right]\right\}^{1/2}\|u\|_{l_\infty} + \beta_{11}(|x(0)|), \tag{59}$$

where $\beta_{11}(r) = \left(\frac{\varepsilon}{\alpha_1}\,\beta_{10}(r)\right)^{1/2}$. If, instead, $\|u\|_{l_\infty} > 1$, we have:

$$\varpi|x|^3 \le V(x) \le \left[\varpi\left(\frac{\tilde{\gamma}(\varepsilon)}{\rho}\right)^{3/2} + \frac{\alpha_2}{\varepsilon}\cdot\frac{\tilde{\gamma}(\varepsilon)}{\rho}\right]\|u\|_{l_\infty}^3 + \beta_{10}(|x(0)|), \tag{60}$$

from which we get that

$$\|x\|_{l_\infty} \le \left[\left(\frac{\tilde{\gamma}(\varepsilon)}{\rho}\right)^{3/2} + \frac{\alpha_2}{\varepsilon\varpi}\cdot\frac{\tilde{\gamma}(\varepsilon)}{\rho}\right]^{1/3}\|u\|_{l_\infty} + \beta_{12}(|x(0)|), \tag{61}$$

where $\beta_{12}(r) = \left(\frac{\beta_{10}(r)}{\varpi}\right)^{1/3}$. Letting

$$\gamma_\infty = \max\left\{\left\{\frac{\varepsilon}{\alpha_1}\left[\varpi\left(\frac{\tilde{\gamma}(\varepsilon)}{\rho}\right)^{3/2} + \frac{\alpha_2}{\varepsilon}\cdot\frac{\tilde{\gamma}(\varepsilon)}{\rho}\right]\right\}^{1/2},\ \left[\left(\frac{\tilde{\gamma}(\varepsilon)}{\rho}\right)^{3/2} + \frac{\alpha_2}{\varepsilon\varpi}\cdot\frac{\tilde{\gamma}(\varepsilon)}{\rho}\right]^{1/3}\right\}$$

and $\beta_\infty = \max\{\beta_{11}, \beta_{12}\}$, we have, finally, the required conclusion:

$$\|x\|_{l_\infty} \le \gamma_\infty\|u\|_{l_\infty} + \beta_\infty(|x(0)|) \tag{62}$$

for $p = \infty$ as well. $\Box$
We are now ready to prove Theorem 1.

Proof of Theorem 1. Without loss of generality, making a change of coordinates if required, we may

assume that the system (1) has the following partitioned form: 8 + < x1 = A1 x1 + B1  (u + u1 ) x+ = A0 x0 + B0 (u + u1 ) (63) : 0 y = Cx + u2 where A1 is orthogonal and A0 is asymptotically stable, and     B A 0 1 1 B = B0 : A = 0 A0 ; We construct the output feedback law in the form of (3) with F = [ ?B10 A1 0 ], the matrix L being chosen such that A + LC is asymptotically stable. Using this feedback, the closed-loop system is: 8 + ^1 + u 1 ) < x1 = A1 x1 + B1  (?B10 A1 x + 0 ^1 + u 1 ) x (64) 0 = A0 x0 + B0  (?B1 A1 x : + x^ = Ax^ + B(?B10 A1 x^1 ) ? L(Cx ? C x^ + u2 ) :

Let e = [ e01 e00 ]0 , where e1 = x1 ? x^1 and e0 = x0 ? x^0 . Here we have partitioned x^ = [ x^01 x^00 ]0 accordingly. In the new states (x; e), (64) can be written as follows, 8 + x = A1 x1 + B1 (?B10 A1 x1 + B10 A1 e1 + u1 ) > < 1+ x0 = A0 x0 + B0 (?B10 A1 x1 + B10 A1 e1 + u1 ) (65) + > : e = (A + LC )e +B [(?B10 A1 x1 + B10 A1 e1 + u1 ) ? (?B10 A1 x1 + B10 A1 e1)] + Lu2 : Since  is global Lipschitz with a Lipschitz constant 1,

j(?B 0 A x + B 0 A e + u ) ? (?B 0 A x + B 0 A e )j  ju j : (66) Noting that A + LC is asymptotically stable and viewing (?B 0 A x + B 0 A e + u ) ? (?B 0 A x + B 0 A e ) + Lu as an lp input to the e-subsystem, we have that, for some constant pe > 0, (67) kekl  pe (ku kl + ku kl + je(0)j) : Next, applying Proposition 1 to the x -subsystem, and viewing B 0 A e + u as an lp input to this 1

1

1 1

1

1 1

1

1

p

1 1

1 1

1

1

1

2

p

1

1

1

1

1

1

1

1

p

1 1

1

2

1

kx kl  p (ku kl + ku kl + je(0)j) + p (jx (0)j) p

1

1 1

p

subsystem, we have, 1

1

2

p

1

1

1 1

1

11 for some p1 > 0 and p1 of class K. On the other hand, viewing (?B10 A1 x1 + B10 A1 e1 + u1 ) as an lp input to the x0 -subsystem, we have the estimate:

kx kl  p (kx kl + kekl + ku kl + jx (0)j); for some p > 0. 0

p

0

1

1

p

p

p

0

0

In conclusion, we have,

kxkl  kx kl + kx kl (68)  p (ku kl + ku kl ) + 'p (je(0)j + jx(0)j) where p > 0 is some constant and 'p is a suitable class-K function. Together with (67), and changing back to the original coordinates, we also conclude that an estimate like the one in (5) holds. 2 p

1

0

p

1

p

p

2

p

4 Conclusions

In this paper, we have established that a discrete-time, neutrally stable, stabilizable, and detectable linear system, when subject to actuator saturation, is finite gain $l_p$ stabilizable by linear feedback, for any $p \in (1, \infty]$. A linear output feedback law which simultaneously achieves $l_p$ stabilization and global asymptotic stabilization was constructed.

References

[1] J. Choi, "On the stabilization of linear discrete-time systems subject to input saturation," preprint.

[2] G.H. Hardy, J.E. Littlewood, and G. Polya, Inequalities, Cambridge University Press, 1952.

[3] P. Hou, A. Saberi, and Z. Lin, "On $l_p$-stabilization of strictly unstable discrete-time linear systems with saturating actuators," Proceedings of the 36th CDC, pp. 4510-4515, 1997.

[4] P. Hou, A. Saberi, Z. Lin, and P. Sannuti, "Simultaneous external and internal stabilization for continuous and discrete-time critically unstable linear systems with saturating actuators," Proceedings of the 1997 ACC, pp. 1292-1296, 1997.

[5] Z. Lin, "$H_\infty$-almost disturbance decoupling with internal stability for linear systems subject to input saturation," IEEE Transactions on Automatic Control, Vol. 42, pp. 992-995, 1997.

[6] Z. Lin, A. Saberi, and A. Teel, "Simultaneous $L_p$-stabilization and internal stabilization of linear systems subject to input saturation - state feedback case," Systems and Control Letters, Vol. 25, pp. 219-226, 1995.

[7] W. Liu, Y. Chitour, and E. Sontag, "On finite gain stabilizability of linear systems subject to input saturation," SIAM J. Control and Optimization, Vol. 34, No. 4, pp. 1190-1219, 1996.

[8] Y. Chitour, W. Liu, and E. Sontag, "On the continuity and incremental-gain properties of certain saturated linear feedback loops," Intern. J. Robust & Nonlinear Control, Vol. 5, pp. 413-440, 1995.

[9] T. Nguyen and F. Jabbari, "Output feedback controllers for disturbance attenuation with bounded control," Proceedings of the 36th CDC, pp. 177-182, 1997.

[10] E.D. Sontag, "Comments on integral variants of ISS," Systems & Control Letters, Vol. 34, pp. 93-100, 1998.

[11] R. Suarez, J. Alvarez-Ramirez, M. Sznaier, and C. Ibarra-Valdez, "$L_2$-disturbance attenuation for linear systems with bounded controls: an ARE-based approach," Control of Uncertain Systems with Bounded Inputs, eds. S. Tarbouriech and G. Garcia, Lecture Notes in Control and Information Sciences, Vol. 227, Springer-Verlag, pp. 25-38, 1997.

[12] Y. Yang, E.D. Sontag, and H.J. Sussmann, "Global stabilization of linear discrete-time systems with bounded feedback," Systems & Control Letters, Vol. 30, pp. 273-281, 1997.