Robust State Feedback Control of LTV Systems: Nonlinear Is Better Than Linear

F. Blanchini, Dipartimento di Matematica e Informatica, Via delle Scienze 208, 33100 Udine, Italy. Email: [email protected]

A. Megretski, Massachusetts Institute of Technology, 35-418, Dept. EECS, Cambridge, MA 02139. Email: [email protected]

Abstract

Examples of linear control systems with fast time-varying uncertain coefficients are given which can be stabilized by a nonlinear memoryless state feedback, but cannot be stabilized by a linear time-invariant dynamic state feedback. By means of one of these examples we show that the closed-loop quadratic stability margin may be infinitely smaller than the actual stability margin.

Key Words: Parametric uncertainties, Lyapunov functions, Robust stabilization, State feedback, Nonlinear Control

* Corresponding author

1 Introduction

It is well known that nonlinear control offers no advantage over linear control when optimizing the L2-induced gain (H∞ norm) of a given linear time-invariant (LTI) control system. Such results can be extended to some uncertain systems, provided the uncertainty model is "poor" (see, for example, [13], where the uncertainties are described in terms of Integral Quadratic Constraints (IQCs)). For the problem of stabilization of linear systems via state feedback under parametric time-varying uncertainties, the question of whether or not nonlinear controllers may offer advantages over linear ones has already been considered in the past. In the quadratic stabilization framework (i.e. stabilization by means of a quadratic Lyapunov function) it has been shown that, under some assumptions, such as the input matrix being certain [3], or under the additional assumption of continuous differentiability of the stabilizing nonlinear control [1], quadratic stabilizability implies quadratic stabilizability via linear control. In general (unlike the case of unstructured uncertainties [12]) this property does not hold in the presence of parametric uncertainties, as shown by Petersen [10]. Clearly, Petersen's example does not cover the more general stabilizability problem (i.e. without the involvement of a quadratic function). Indeed, although stability under time-varying parametric uncertainties (with no limits on the variation rate) implies the existence of a Lyapunov function (see for instance [8]), this function does not have to be quadratic in general. Furthermore, in [14] it has been shown that Petersen's example in [10] is actually stabilizable via a linear static state feedback. This result was proven by means of polyhedral Lyapunov functions.
Some basic properties of this class of functions have been presented in [4], where it is shown that, if a system is stabilizable in a Lyapunov sense, then it is stabilizable via a polyhedral Lyapunov function and a nonlinear (but piecewise-linear) controller. In this paper, we show that there exist systems with time-varying parametric uncertainties which are stabilizable by a nonlinear static state feedback but not by any LTI controller. More specifically, we consider uncertain control systems described by

dx(t)/dt = A0 x(t) + B0 u(t) + B Δ(t) (C x(t) + D u(t)),   Δ(t) ∈ 𝒟.   (1)

Here 𝒟 is a given convex compact set of m × m matrices, x(t) ∈ R^n is the system state, u(t) ∈ R is the scalar control signal, and A0, B0, B, C, D are given matrices of appropriate size. The class of control algorithms under consideration is given by

u(t) = K(x_c(t), x(t)),   dx_c(t)/dt = F(x_c(t), x(t)),   (2)

where x_c(t) is the finite-dimensional controller state, and the functions K, F are continuous. In the case when K(x_c, x) = K0(x), controller (2) is called static. Controller (2) is said to be stabilizing for system (1) if x(t) → 0 and x_c(t) → 0 exponentially as t → ∞ for any measurable matrix function Δ(t) ∈ 𝒟, and for any solution x(·), x_c(·), u(·) of the system of differential equations (1),(2). In this paper, we construct the uncertainty set 𝒟 and matrices A0, B0, B, C, D (where it is possible to set D = 0, if so desired) in such a way that system (1) cannot be stabilized by any linear controller (2), while a nonlinear static controller (2) stabilizes (1). Actually, in this class of examples, a linear dynamic feedback (2) is not capable of stabilizing (1) even under the additional assumption that Δ(t) = const does not depend on time. The examples require that m ≥ 3 (i.e. the "uncertainty rank" is greater than the dimension of the control variable), and are based on A. Stoorvogel's paper [15]. We also address the apparently less trivial problem of whether such counterexamples exist with m = 1, when one can assume that 𝒟 = [−1, 1] without loss of generality. For the time-varying uncertainty Δ(t) ∈ [−1, 1], we give an example in which a nonlinear static controller does a better job than any linear static feedback. This example will show that a linear static


feedback may be infinitely conservative in terms of the closed-loop stabilizability margin. We conjecture that this example also cannot be stabilized using any linear dynamic feedback (2), but we are unable to prove this at the time of writing. Finally, we use this simple example to show that the quadratic stabilizability margin can be infinitely smaller than the actual stabilizability margin. All these negative results point out some intrinsic limitations of the widely investigated theory of quadratic stabilization via linear control (and the related H∞ theory) for the stabilization of systems with parametric time-varying uncertainties. These facts support, at least from a theoretical standpoint, the analysis of nonlinear compensators and non-quadratic candidate Lyapunov functions for this problem.
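Before turning to the counterexamples, note that the setup (1)-(2) is easy to explore numerically. The following is a minimal sketch (ours, not part of the paper) of a forward-Euler simulation of (1) under a memoryless feedback u = K(x); all matrices are placeholder values, chosen only so that the run has a predictable outcome (Δ ≡ 0, u ≡ 0, and A0 = −I is Hurwitz, so the state must decay).

```python
import numpy as np

def simulate(A0, B0, Bm, C, D, delta_of_t, K, x0, T=10.0, dt=1e-3):
    """Forward-Euler run of dx/dt = A0 x + B0 u + B Delta(t) (C x + D u)
    with the memoryless feedback u = K(x).  Returns the state trajectory."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for k in range(int(T / dt)):
        u = K(x)                       # scalar control signal
        Delta = delta_of_t(k * dt)     # uncertain m x m matrix at time t
        dx = A0 @ x + B0 * u + Bm @ (Delta @ (C @ x + D * u))
        x = x + dt * dx
        traj.append(x.copy())
    return np.array(traj)

# Placeholder data: 2-state system, Delta = 0, u = 0; the open loop x' = -x decays.
I2 = np.eye(2)
traj = simulate(A0=-I2, B0=np.zeros(2), Bm=I2, C=I2, D=np.zeros(2),
                delta_of_t=lambda t: np.zeros((2, 2)),
                K=lambda x: 0.0, x0=[1.0, -1.0])
print(np.linalg.norm(traj[-1]))   # close to 0 (about 6e-5)
```

With nontrivial Δ(·) and K(·), the same loop can be used to probe candidate controllers against specific uncertainty realizations.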

2 Counterexamples based on Stoorvogel's norms

Remember that a norm on R^m is any function p : R^m → R such that

p(x) > 0 for x ≠ 0,   p(rx) = |r| p(x),   p(x + y) ≤ p(x) + p(y),   ∀ x, y ∈ R^m, r ∈ R.   (3)

Definition. A norm p on R^m is simple if

inf_L sup_{r ∈ R^m, r ≠ 0} p(Mr − ELr)/p(r) ≤ sup_{r ∈ R^m, r ≠ 0} p(Mr − EU(r))/p(r)

for any globally Lipschitz function U : R^m → R and for any matrices M and E of size m × m and m × 1 respectively, where the infimum is taken over all 1 × m matrices L. In other words, a simple norm is any norm in which nonlinear approximation of Mr by EU(r) cannot outperform linear approximation (of Mr by ELr) in terms of the worst-case induced norm. It can easily be seen that any quadratic norm p(r)² = rᵀPr, where P = Pᵀ > 0, is simple. Also, any norm on R² is simple. However, for m > 2, examples of norms that are not simple are easy to find. In particular, A. Stoorvogel has shown [15] that the ℓ∞ norm on R^4,

p(x) = max{|x1|, |x2|, |x3|, |x4|},   x = [x1, x2, x3, x4]ᵀ,   (4)

is not simple, and has used this result to give an example in which nonlinear estimation outperforms linear estimation in the induced ℓ∞-norm setting. We are going to use norms that are not simple to give examples of systems (1) which can be stabilized using nonlinear static state feedback (2), but cannot be stabilized using linear state feedback (2).

Theorem 1. Let p be a norm on R^m. Assume that p is not simple, i.e. there exist matrices M, E of size m × m and m × 1, and a globally Lipschitz function U : R^m → R, such that

inf_L sup_{r ≠ 0} p(Mr − ELr)/p(r) ≥ 1 > γ = sup_{r ≠ 0} p(Mr − EU(r))/p(r).   (5)

Consider the uncertain control system

dr(t)/dt = −r(t) + Δ(t) (M r(t) − E u(t)),   (6)

i.e. system (1) with

x(t) = r(t) ∈ R^m,   A0 = −I_m,   B0 = 0,   B = I_m,   C = M,   D = −E,   (7)

and

𝒟 = {Δ : p(Δr) ≤ p(r) ∀ r ∈ R^m}.   (8)


Then (i) system (6) with any time-varying Δ = Δ(t) ∈ 𝒟 is stabilized by the nonlinear static feedback

u(t) = U(x(t));   (9)

(ii) there exists no linear feedback (2) that stabilizes system (6) for every constant Δ ∈ 𝒟.

Since D ≠ 0 in the construction of Theorem 1, it is worth mentioning a class of more specific counterexamples.

Theorem 2. Under the assumptions of Theorem 1, consider the uncertain control system

dr/dt = −r + Δ(Mr − Ez),   dz/dt = −z + u,   (10)

i.e. system (1) with

x = [r; z] ∈ R^{m+1},   A0 = −I_{m+1},   B0 = [0; 1],   B = [I_m; 0],   C = [M  −E],   D = 0,

where 𝒟 is defined by (8). Then (i) for sufficiently large N > 0, system (10) with any time-varying Δ = Δ(t) ∈ 𝒟 is stabilized by the nonlinear static feedback

u(t) = N (U(r(t)) − z(t));   (11)

(ii) there exists no linear feedback (2) that stabilizes system (10) for every constant Δ ∈ 𝒟.

We first prove Theorem 1.

Proof. (i) Remember that any absolutely continuous function g = g(t) (and, in particular, any Lipschitz function g = g(t)) has a locally integrable derivative f = f(t). Therefore, writing f(t) = dg(t)/dt does not necessarily mean that g is differentiable at the point t. In this context, the triangle inequality implies that

|d p(g(t))/dt| ≤ p(dg(t)/dt)   (12)

for any norm p, where the inequality in (12) holds almost everywhere. To show that the static nonlinear feedback (9) stabilizes system (6), note that, by assumption, we have

d p(e^t r(t))/dt ≤ p(e^t [dr(t)/dt + r(t)]) = e^t p(Δ(t)[M r(t) − E U(r(t))]) ≤ e^t p(M r(t) − E U(r(t))) ≤ γ p(e^t r(t)).

Since γ < 1, this implies p(e^t r(t)) ≤ e^{γt} p(r(0)) for t ≥ 0. Hence p(r(t)) ≤ p(r(0)) e^{(γ−1)t} → 0, and r(t) → 0 exponentially as t → ∞.

(ii) By the Hahn-Banach theorem (see [9], Section 5.12), for any q ∈ R^m there exists a linear functional φ_q : R^m → R such that φ_q(q) = p(q) and φ_q(x) ≤ p(x) for any x ∈ R^m. Therefore, for any pair of vectors q, r0 ∈ R^m satisfying the inequality p(q) ≥ p(r0), there exists Δ ∈ 𝒟 such that Δq = r0 (for q ≠ 0, this Δ is simply defined by Δx = r0 φ_q(x)/p(q), and Δ = 0 for q = 0). For a stabilizing linear controller (2), the closed-loop system with Δ = 0 must be stable. Therefore, for any r0 ∈ R^m, the linear system of equations dr(t)/dt = −r(t) + r0 and (2) has a steady-state solution r ≡ r0, u ≡ u0 = L r0, where L is a 1 × m matrix which does not depend on r0. (Indeed, L is simply the dc gain of controller (2).) By (5), p(M r0 − E L r0) ≥ p(r0) > 0 for some r0 ∈ R^m. Hence, there exists Δ0 ∈ 𝒟 such that Δ0 (M r0 − E L r0) = r0. Therefore, for Δ = Δ0, system (6),(2) has a non-zero steady-state solution, and hence controller (2) is not stabilizing.
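For the ℓ∞ norm the Hahn-Banach step above is completely explicit: the functional φ_q picks out a largest-magnitude coordinate of q. The following sketch (ours, not the paper's) builds the rank-one Δ with Δq = r0 and induced ∞-norm p(r0)/p(q) ≤ 1:

```python
import numpy as np

def dual_delta(q, r0):
    """Rank-one Delta with Delta @ q = r0 and induced inf-norm <= 1,
    valid when ||r0||_inf <= ||q||_inf (the l_inf instance of the
    Hahn-Banach construction in part (ii) of the proof)."""
    q, r0 = np.asarray(q, float), np.asarray(r0, float)
    i = int(np.argmax(np.abs(q)))          # phi_q(x) = sgn(q_i) x_i attains p(q)
    Delta = np.zeros((len(r0), len(q)))
    Delta[:, i] = np.sign(q[i]) * r0 / np.abs(q[i])   # Delta x = r0 phi_q(x)/p(q)
    return Delta

q  = np.array([2.0, -5.0, 1.0])
r0 = np.array([1.0, 3.0, -4.0])            # ||r0||_inf = 4 <= 5 = ||q||_inf
Dlt = dual_delta(q, r0)
print(Dlt @ q)                             # [ 1.  3. -4.] -- recovers r0
print(np.abs(Dlt).sum(axis=1).max())       # 0.8 = p(r0)/p(q) <= 1, so Dlt is in D
```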


To prove Theorem 2 we add a little backstepping to the proof of Theorem 1.

Proof. (i) Note that for any norm p : R^m → R there exist positive constants c0, c1 such that

c0 |r| ≤ p(r) ≤ c1 |r|   ∀ r ∈ R^m.   (13)

We now prove the following inequalities:

|dv/dt| ≤ γ v + C |w|,   (14)

|dw/dt + N w| ≤ C v + C |w|,   (15)

where C is a constant that depends on M, E, U(·), p(·) only (not on N), and

v(t) = e^t p(r(t)),   w(t) = e^t (z(t) − U(r(t))).

To show (14), note that from (12) we have

|dv(t)/dt| = |d[e^t p(r(t))]/dt| ≤ p(d[e^t r(t)]/dt).

Applying (10), and adding and subtracting E U(r(t)), we get

|dv(t)/dt| ≤ p(e^t [dr(t)/dt + r(t)]) = e^t p(Δ(t)[M r(t) − E z(t)])
≤ e^t p(Δ(t)[M r(t) − E U(r(t))]) + e^t p(Δ(t) E [U(r(t)) − z(t)])
≤ γ v(t) + C |w(t)|,   (16)

where the last inequality derives from e^t p(Δ(t)[M r − E U(r)]) ≤ γ p(e^t r), which has been shown in the proof of Theorem 1, and from the definition of w. To show (15), note that from the definitions of u(t) and w(t) we have N w(t) = −e^t u(t), and then

|dw(t)/dt + N w(t)| = | [e^t z(t) + e^t dz(t)/dt + N w(t)] − e^t U(r(t)) − e^t (dU(r)/dr) dr(t)/dt |
= e^t | U(r(t)) + (dU(r)/dr) dr(t)/dt |,

since the bracketed group vanishes: e^t dz/dt + N w = e^t(−z + u) − e^t u = −e^t z. If we add and subtract (dU(r)/dr) r(t) inside the absolute value, taking into account that U(r) is Lipschitz (thus dU(r)/dr is bounded), we get

|dw(t)/dt + N w(t)| ≤ C1 e^t |U(r(t)) − (dU(r)/dr) r(t)| + C2 e^t |(dU(r)/dr)[dr(t)/dt + r(t)]|
≤ C3 e^t |r(t)| + C4 p(e^t [dr(t)/dt + r(t)])
≤ C5 e^t p(r(t)) + C4 (γ v(t) + C |w(t)|)

for proper positive constants Ci. In the last inequality we have used (13) and the fact that, as shown in (16), p(e^t [dr(t)/dt + r(t)]) ≤ γ v(t) + C |w(t)|. Therefore, for a proper C, we have (15). Now note that

d|w(t)|/dt = sgn[w(t)] dw(t)/dt = sgn[w(t)] [dw(t)/dt + N w(t)] − N |w(t)| ≤ |dw(t)/dt + N w(t)| − N |w(t)| ≤ C v(t) + C |w(t)| − N |w(t)|.

Choose ε > 0 small enough that a := γ + εC < 1, and then N > 0 large enough that C + εC − εN < εa. Then we have

d(v + ε|w|)/dt ≤ (γ + εC) v + (C + εC − εN) |w| ≤ a (v + ε|w|).

Hence v + ε|w| ≤ e^{at} (v(0) + ε|w(0)|), which implies that both r(t) and z(t) − U(r(t)) converge

to zero exponentially (recall that v(t) = e^t p(r(t)), w(t) = e^t (z(t) − U(r(t))), and a < 1). Therefore, the feedback system (10),(11) is exponentially stable.

(ii) If a linear controller (2) with transfer matrix K(s) stabilizes system (10), then the controller with transfer function K(s)/(s + 1) (i.e. the same controller augmented with the dynamics of z) stabilizes system (6), which is impossible according to Theorem 1.

To apply Theorems 1 and 2, one only needs an example of a norm on R^m that is not simple. One such example was given in [15], where m = 4 and p is the ℓ∞ norm on R^4 defined by (4). Then (5) holds with

appropriate matrices E ∈ R^{4×1} and M ∈ R^{4×4}, whose numerical values are given in [15], and a corresponding γ < 1.   (17)

Hence, Theorems 1 and 2 give examples of systems (1) which can be stabilized by a Lipschitz static nonlinear feedback for any time-varying Δ(t) ∈ 𝒟, while they cannot be stabilized by any dynamic LTI feedback even for constant Δ ∈ 𝒟.
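For the ℓ∞ norm used in this construction, membership in 𝒟 = {Δ : p(Δr) ≤ p(r) for all r} is easy to test numerically: the induced ∞-norm of a matrix is its maximum absolute row sum, and the supremum defining it is attained at a vertex of the unit cube {−1, +1}^m. A small sanity check (ours, with a random matrix rather than the data of [15]):

```python
import numpy as np
from itertools import product

def linf_induced(Delta):
    """Induced l_inf -> l_inf norm of a matrix: max absolute row sum."""
    return np.abs(Delta).sum(axis=1).max()

def linf_induced_bruteforce(Delta):
    """Same value by enumeration: the sup of ||Delta r||_inf over the unit
    ball ||r||_inf <= 1 is attained at some vertex r in {-1,+1}^m."""
    m = Delta.shape[1]
    return max(np.abs(Delta @ np.array(v)).max()
               for v in product((-1.0, 1.0), repeat=m))

rng = np.random.default_rng(0)
Delta = rng.standard_normal((4, 4))
Delta /= linf_induced(Delta)       # normalize: now p(Delta r) <= p(r) for all r
print(linf_induced(Delta))                      # 1.0 up to rounding: boundary of D
print(abs(linf_induced(Delta) - linf_induced_bruteforce(Delta)))   # ~0
```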

3 Counterexample based on phase lock

Consider the case when m = 1 and 𝒟 = [−1, 1], so that the scaled uncertainty in (1) ranges over [−N, N], and (1) takes the form

dx1/dt = −x2 + Δ u,   dx2/dt = x1 + u,   Δ(t) ∈ [−N, N],   (18)

where N > 0 is sufficiently large. Using V(x) = x1² + x2² as a control Lyapunov function, it is easy to design a nonlinear stabilizing controller u = K(x): since

dV/dt = 2u (Δ x1 + x2),   (19)

it is sufficient to satisfy the conditions

K(x) = 0 if |x2| ≤ N |x1|,   K(x) x2 < 0 otherwise.   (20)

For example, one can take the following piecewise-linear control function:

K(x) = −sgn[x2] · max{0, |x2| − N |x1|}.

Figure 1 shows a geometric interpretation of the example in terms of sub-tangentiality conditions. For u = 0 the system trajectories are circles. Take a point P in the region characterized by |x2| > N |x1|, the dark one (assume x2 > 0). In this region, a negative control action has the effect of pulling the state derivative towards the interior of the circle with respect to the tangent line, for all values of Δ (note that B(Δ)u is a convex combination of B(N)u and B(−N)u). Outside the dark region, for any control value u the derivative is pushed to the "wrong side" of the tangent line for some "unfavorable" values of Δ (for instance, at the point Q, for positive u and for Δ = −N, the derivative vector is super-tangential; roughly, it points


Figure 1: Geometric representation of the idea.

to the external part of the circle). In this region, the best control action is "doing nothing". Easy geometric considerations show that the boundary of this region is given by the two lines orthogonal to B(−N) and B(N). Also note that as N → ∞ this region shrinks to the x2 axis. Formula (19) also suggests that system (18) cannot be stabilized by a linear feedback, because such a feedback is unlikely to be able to "lock" u to the system phase so that u(t) is small at every moment except when |x2| > N |x1|. The stabilizability of system (18) can easily be proved by constructing a polyhedral Lyapunov function and a piecewise-linear control via the Euler Approximating System (see [4] for details). It turns out that the linear gains are all zero except in some sectors for which |x2| ≥ N |x1|. It is still an open question whether a dynamic state feedback can do this, but the following statement on memoryless linear state feedback is the main result of this paper.

Theorem 3. If N > 100, there exist no real numbers k1, k2 such that the controller

u(t) = −k1 x1(t) − k2 x2(t)   (21)

stabilizes system (18).

Proof. Since (21) must stabilize (18) for any constant Δ ∈ [−N, N], the matrix

[ −Δ k1    −1 − Δ k2 ]
[ 1 − k1   −k2       ]

must be Hurwitz for every Δ ∈ [−N, N]. Hence

k2 + Δ k1 > 0,   1 − k1 + Δ k2 > 0   ∀ Δ ∈ [−N, N],

i.e.

k2 > N |k1|,   1 − k1 > N |k2|.


This implies

|k1| < k2 / N,   k2 < N / (N² − 1).   (22)

Now, rewriting (18),(21) in the polar coordinates (r, θ), where x1 = e^r cos θ, x2 = e^r sin θ, yields

dr/dt = −(Δ cos θ + sin θ)(k1 cos θ + k2 sin θ),
dθ/dt = 1 + (Δ sin θ − cos θ)(k1 cos θ + k2 sin θ).

Let us show that Δ(t) defined by

Δ(t) = −0.3 N sgn[cos θ sin θ]

destabilizes the feedback system. First note that

|(Δ sin θ − cos θ)(k1 cos θ + k2 sin θ)| ≤ [(Δ² + 1)(k1² + k2²)]^{1/2} ≤ 0.5.

(Here we have used the upper bounds for |k1|, |k2| given in (22).) Hence

0.5 ≤ dθ/dt ≤ 1.5.

Second, we have

dr/dt = 0.3 N k2 |cos θ sin θ| + 0.3 N sgn[cos θ sin θ] k1 cos² θ − k1 cos θ sin θ − k2 sin² θ
≥ 0.3 N k2 |cos θ sin θ| − 0.3 N |k1| − |k1| − k2
= 0.3 N k2 [ |cos θ sin θ| − |k1|/k2 − |k1|/(0.3 N k2) − 1/(0.3 N) ].

From the first inequality in (22) and N > 100, the subtracted terms sum to less than 1/N + 1/(0.3 N²) + 1/(0.3 N) < 0.05, so we immediately have

dr/dt ≥ 0.3 N k2 [ |cos θ sin θ| − 0.05 ].

Note that θ is monotonically increasing to infinity as time increases to infinity, and thus r = r(t) can be considered as a function of θ: r = r(θ). Dividing by dθ/dt ≤ 1.5 where dr/dt ≥ 0, and by dθ/dt ≥ 0.5 where dr/dt < 0, we get

dr/dθ ≥ 0.3 N k2 [ 0.5 |cos θ sin θ| − 0.1 ].

Hence

r(θ + π/2) ≥ r(θ) + 0.3 N k2 ∫₀^{π/2} [ 0.5 |cos φ sin φ| − 0.1 ] dφ = r(θ) + 0.3 N k2 (1/4 − π/20) > r(θ).

Therefore, since k2 > 0, r(t) does not decrease to −∞, so the radius e^{r(t)} does not converge to zero as t → ∞, i.e. controller (21) is not stabilizing.
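Both conclusions about example (18) can be observed in simulation. The sketch below is ours, not the paper's: the horizons, step sizes, and the particular gains are arbitrary admissible choices (k1 = 0, k2 = 0.004 satisfy k2 > N|k1| and 1 − k1 > N k2 for N = 200; the nonlinear run uses the milder N = 10 only to keep the explicit integration well behaved). It integrates (18) with the nonlinear controller (20) against the Δ that maximizes dV/dt, and with the linear gain against the phase-locked Δ from the proof of Theorem 3.

```python
import numpy as np

def simulate(controller, delta_fn, T=60.0, dt=1e-3, x0=(1.0, 0.0)):
    """Forward-Euler run of (18): dx1/dt = -x2 + Delta*u, dx2/dt = x1 + u."""
    x1, x2 = x0
    for _ in range(int(T / dt)):
        u = controller(x1, x2)
        d = delta_fn(x1, x2, u)           # adversarial Delta, |Delta| <= N
        x1, x2 = x1 + dt * (-x2 + d * u), x2 + dt * (x1 + u)
    return float(np.hypot(x1, x2))

# (a) Nonlinear controller (20) vs. the Delta maximizing dV/dt = 2u(Delta x1 + x2):
N1 = 10.0
K_nl = lambda x1, x2: -np.sign(x2) * max(0.0, abs(x2) - N1 * abs(x1))
worst = lambda x1, x2, u: N1 * np.sign(u * x1)
r_nl = simulate(K_nl, worst, dt=1e-4)

# (b) Linear gains k1 = 0, 0 < k2 < 1/N (Hurwitz for every *constant* Delta)
# vs. the phase-locked Delta(t) = -0.3 N sgn(cos th sin th) = -0.3 N sgn(x1 x2):
N2, k2 = 200.0, 0.004
K_lin = lambda x1, x2: -k2 * x2
unlock = lambda x1, x2, u: -0.3 * N2 * np.sign(x1 * x2)
r_lin = simulate(K_lin, unlock)

print(r_nl, r_lin)   # the nonlinear loop contracts; the linear loop diverges
```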

4 Further comments on the stabilizability of the given example

In the previous section we considered stabilizability in its most general sense. A stronger notion of stabilizability is obtained by involving a quadratic Lyapunov function. System (1) is quadratically stabilizable (see for instance [2]) iff there exist a symmetric positive definite matrix P and a control K(x) such that, defining

V(x) = (1/2) xᵀ P x,

we have

dV(x)/dt = xᵀ P [A0 x + B0 K(x) + B Δ (C x + D K(x))] < 0

for all x ∈ R^n, x ≠ 0, and Δ ∈ 𝒟.

It is known that this notion of stabilizability has been deeply exploited, especially in view of its well-known relationship with the H∞ robust stabilizability criterion. Nevertheless, we can show that requiring stabilizability in this sense may lead to arbitrarily conservative results. To show this, we recall the following notion of robustness margin. Let us introduce a parameterized uncertainty set for system (1):

Δ(t) ∈ N 𝒟,   (23)

where 𝒟 is an assigned compact set and N > 0 is a positive scaling parameter. Consider the following definition of the robust stabilizability margin:

N_sup = sup{N > 0 : system (1) is stabilizable}.   (24)

In a similar way, we denote by N_nl and N_l the robustness margins attainable via nonlinear respectively linear controls, and by N_nq and N_q the robustness margins attainable via non-quadratic respectively quadratic Lyapunov functions. We recall from [8] that closed-loop stability implies the existence of a Lyapunov function; therefore N_nl = N_nq. In the previous sections we have shown that in general N_nl ≠ N_l; actually, we have shown that the ratio N_nl / N_l may be infinite. By means of the same example we are able to show that the ratio N_nq / N_q may also be infinite.

Theorem 4. For system (18) with |Δ(t)| ≤ N, we have N_q = 1.

Proof. From Barmish's result ([2], Theorem 3.1), we have that system (18) is quadratically stabilizable if and only if there exists a positive definite matrix S such that

xᵀ (A S + S Aᵀ) x < 0   for all x ∈ 𝒩,   (25)

where the set 𝒩 is defined by

𝒩 = {x ≠ 0 : B(Δ)ᵀ x = 0 for some |Δ| ≤ N}.   (26)

If this condition is satisfied, the Lyapunov matrix is given by P = S⁻¹. Without restriction (we can always set s11 = 1 by scaling S), we can parameterize the matrix S, according to [2], as follows:

S = [ 1  β ]
    [ β  α ],   α > 0,   α > β².   (27)

Note that here we are considering a quadratic function whose Lyapunov derivative is strictly negative, while the function V(x1, x2) = x1² + x2² considered above has a derivative that is only non-positive. Conditions (25) and (26) applied to our problem yield

Φ(x) := xᵀ (A S + S Aᵀ) x = −2β x1² + 2(1 − α) x1 x2 + 2β x2² < 0   (28)

for all x ≠ 0 such that

B(Δ)ᵀ x = Δ x1 + x2 = 0 for some |Δ| ≤ N,

that is, for all x ≠ 0 such that

|x2| ≤ N |x1|.   (29)

Note that the set identified by (29) is precisely the zone where the "best control action" is to remain inactive (the white region in Figure 1). Without restriction, in view of the homogeneity of (28) and (29), we can limit our analysis to the points such that x1 = 1 and |x2| ≤ N. Now, for x1 = 1 and x2 = 0, (28) becomes −2β < 0, and then

β > 0.   (30)

Furthermore, for x1 = 1 we get from x2 = N and x2 = −N the two conditions

−2β + 2(1 − α)N + 2βN² < 0,   (31)

−2β − 2(1 − α)N + 2βN² < 0,   (32)

whose sum gives 4β(N² − 1) < 0. Since β > 0, we get

N < 1.   (33)

This is a necessary condition for quadratic stabilizability of the system, and therefore N_q ≤ 1 (note that at this point we have already proven N_nq / N_q = ∞). Since β > 0, the function in (28) is convex with respect to x2 when restricted to the set of points for which x1 = 1 and |x2| ≤ N, and it is negative for x2 = 0. Then the two conditions (31) and (32) are necessary and sufficient for this function to be negative for all |x2| ≤ N. For each N < 1, these two conditions together with α > β² admit a solution (for instance, we can take α = 1 and any 0 < β < 1). Thus N_q = 1.
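The last step of the proof is easy to corroborate numerically: a brute-force search over the parameterization (27) (grid ranges are our arbitrary choice; a, b stand for α, β) finds a feasible pair for sample values N < 1 and none for N > 1, matching N_q = 1. The quadratic form below is Φ(1, x2) from (28), computed for A = [[0, −1], [1, 0]] of system (18).

```python
import numpy as np

def quad_feasible(N, grid=np.linspace(-3.0, 3.0, 121)):
    """Grid search for S = [[1, b], [b, a]] as in (27): require S > 0 (a > b^2)
    and Phi(1, x2) = -2b + 2(1 - a)x2 + 2b x2^2 < 0 on the sector |x2| <= N."""
    x2 = np.linspace(-N, N, 201)
    for a in grid:
        for b in grid:
            if a > b * b and np.all(-2*b + 2*(1 - a)*x2 + 2*b*x2**2 < 0):
                return round(float(a), 2), round(float(b), 2)
    return None

print(quad_feasible(0.9))    # some feasible pair (a, b): a quadratic V exists
print(quad_feasible(1.1))    # None: no quadratic Lyapunov function for N > 1
```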

Remark. The considered system is marginally stable in the sense of Lyapunov for u = 0. However, by perturbing the matrix A to A + εI, where I is the identity matrix and ε > 0 is sufficiently small, we obtain the same results. In particular, we may change the example in such a way that, for N sufficiently large, the system state goes to infinity under any linear feedback, while the system still does not admit a quadratic function whose derivative is non-positive.

5 Conclusions

In the past, several authors have considered the problem of robust stabilizability of uncertain systems via quadratic Lyapunov functions and linear controllers; for a bibliography the reader is referred to [7]. Quadratic functions have the advantage of being simple to compute and to handle mathematically; similarly, linear controllers are simple to implement. The present paper points out some negative aspects, by showing that linear controllers and quadratic functions can give arbitrarily conservative results in terms of stability robustness with respect to nonlinear controllers and non-quadratic Lyapunov functions. A consequent problem is, however, how to compute a nonlinear controller and a non-quadratic function. Results in this direction have been given in [4], [5], [6].


References

[1] B. R. Barmish, "Stabilization of Uncertain Systems via Linear Control", IEEE Trans. on Autom. Contr., Vol. AC-28, no. 8, 1983.

[2] B. R. Barmish, "Necessary and Sufficient Conditions for Quadratic Stabilizability of an Uncertain System", J. of Opt. Th. and Appl., Vol. 46, no. 4, 1985.

[3] B. R. Barmish, I. Petersen, A. Feuer, "Linear Ultimate Boundedness Control of Uncertain Systems", Automatica, Vol. 19, no. 5, pp. 523-532, 1983.

[4] F. Blanchini, "Nonquadratic Lyapunov function for robust control", Automatica, Vol. 31, no. 3, pp. 451-461, 1995.

[5] F. Blanchini, S. Miani, "A new class of universal Lyapunov functions for the control of uncertain systems", Proceedings of the 35th Conference on Decision and Control, Kobe, Dec. 1996, pp. 1027-1032.

[6] F. Blanchini, S. Miani, M. Sznaier, "Worst case l1 to l1 gain minimization: dynamic versus static state feedback", Proceedings of the 35th Conference on Decision and Control, Kobe, Dec. 1996.

[7] M. Corless, "Robust analysis and design via quadratic Lyapunov functions", Proc. of the IFAC World Congress, Sydney, 18-23 July 1993.

[8] Y. Lin, E. D. Sontag, Y. Wang, "A smooth converse Lyapunov theorem for robust stability", SIAM J. on Contr. and Opt., Vol. 34, no. 1, pp. 124-160, January 1996.

[9] D. G. Luenberger, Optimization by Vector Space Methods, John Wiley and Sons, New York, 1969.

[10] I. R. Petersen, "Quadratic stabilizability of uncertain systems: existence of a nonlinear stabilizing control does not imply the existence of a linear stabilizing control", IEEE Trans. on Autom. Contr., Vol. AC-30, no. 3, March 1985.

[11] P. P. Khargonekar, I. R. Petersen, K. Zhou, "Robust stabilization of uncertain systems: quadratic stability and H∞ control theory", IEEE Trans. on Autom. Contr., Vol. 35, no. 3, March 1990.

[12] M. Rotea, P. P. Khargonekar, "Stabilization of uncertain systems with norm-bounded uncertainty: a Lyapunov function approach", SIAM J. on Contr. and Opt., Vol. 27, pp. 1462-1476, 1989.

[13] A. V. Savkin, I. R. Petersen, "Nonlinear Versus Linear Control in the Absolute Stabilizability of Uncertain Systems with Structured Uncertainty", IEEE Trans. on Autom. Contr., January 1995, pp. 122-127.

[14] H. Stalford, "Robust Asymptotic Stability of Petersen's Counterexample via a Linear Static Controller", IEEE Trans. on Autom. Contr., Vol. 40, no. 8, Aug. 1995.

[15] A. Stoorvogel, "Nonlinear L1 optimal controllers for linear systems", IEEE Trans. on Autom. Contr., Vol. AC-40, no. 4, pp. 694-696, 1995.
