A Simple Approach to H-Infinity Analysis

52nd IEEE Conference on Decision and Control, December 10-13, 2013, Florence, Italy

Bassam Bamieh
Department of Mechanical Engineering
University of California at Santa Barbara
Santa Barbara, CA 93106-5070, USA
[email protected]

Ather Gattami
Ericsson Research, Ericsson Inc.
164 80 Stockholm, Sweden
[email protected]

Abstract— We revisit the classical H∞ analysis problem of computing the ℓ2-induced norm of a linear time-invariant system. We follow an approach based on converting the problem of maximization over signals to one of maximization over a class of deterministic covariance-like matrices. The reformulation in terms of these covariance matrices greatly simplifies the dynamic analysis problem and converts the computation to a convex, constrained matrix maximization problem. Furthermore, the equivalence is for the actual H∞ norm of the system rather than a bound, and thus does not require the typical "gamma iterations". We argue that this approach is attractive, elementary, and constructive, in that the worst-case disturbance is easily obtained as a state feedback constructed from the solution of the matrix problem. We give an illustrative example with some interpretations of the results.

NOTATION

R          The set of real numbers.
S^n        The set of n × n symmetric matrices.
S^n_+      The set of n × n symmetric positive semidefinite matrices.
S^n_++     The set of n × n symmetric positive definite matrices.
⪰          A ⪰ B ⟺ A − B ∈ S^n_+.
≻          A ≻ B ⟺ A − B ∈ S^n_++.
Tr         Tr(A) is the trace of the matrix A.
I_n        The n × n identity matrix.
0_{m×n}    The m × n zero matrix.

I. INTRODUCTION AND BACKGROUND

The problem of computing the H∞ norm of a linear time-invariant system is a well-studied one. Standard textbook references include [1], [2], [3], and the literature is too vast to summarize here. Fast algorithms for this computation have been proposed and in use for some time [4], [5], [6]. These algorithms are based on the deep relations between H∞ norm inequalities, certain matrix Riccati equations arising from linear quadratic optimization problems, and Linear Matrix Inequalities (LMIs).

Our aim in the present paper is to introduce a somewhat different, and arguably more elementary, treatment of this problem. The approach parallels that taken in [7], [8] for stochastic linear quadratic control problems, where a joint covariance matrix between input and state signals plays a central role. Many system-theoretic problems involving linear dynamical systems with quadratic performance can then be recast as static matrix optimization problems over such covariance matrices. This approach sometimes greatly simplifies systems analysis and design problems. In the present paper, we show that a similar approach can be used for the H∞ analysis problem, greatly simplifying H∞ analysis, and potentially synthesis problems. We define a sort of deterministic joint covariance matrix (in a similar spirit to the work of [9]) as an asymptotic average of signal outer products, and show that one can work with these matrices in an almost parallel manner to that for covariance matrices of stochastic processes.

II. PROBLEM FORMULATION AND MAIN RESULT

We consider a standard finite-dimensional, linear, time-invariant, discrete-time, stable system

    x_{k+1} = A x_k + B w_k
    z_k     = C x_k + D w_k,        x_0 = 0,                          (1)

where x_k ∈ R^n, w_k ∈ R^m, and A has all its eigenvalues with modulus less than 1. We will assume without loss of generality that (A, B) is controllable (otherwise replace the given realization with a minimal one). Given any vector- or matrix-valued sequence S_k, we define its asymptotic average by


    ⟨S⟩ := lim_{N→∞} (1/N) Σ_{k=0}^{N} S_k.

This quantity can be thought of as a kind of "expectation" for deterministic signals. Although this limit is not well defined for all bounded signals [10], it is well defined for periodic and almost periodic signals, which is all we will need in this paper. The ℓ2 norm on signals is defined as

    ‖w‖_2^2 := Σ_{k=0}^{∞} w_k^T w_k,

and the closely related power semi-norm is defined by

    ‖w‖_p^2 := lim_{N→∞} (1/N) Σ_{k=0}^{N} w_k^T w_k = ⟨w^T w⟩ = Tr( ⟨w w^T⟩ ).

We note that for any deterministic sequence, the matrix ⟨w w^T⟩ can be intuitively thought of as a "covariance" matrix.

The H∞ norm [1] of the above system is the worst-case ℓ2 gain

    γ_∞ := sup_{‖w‖_2 ≤ 1} ‖z‖_2.                                     (2)

It is well known [11] (and simple to show) that the H∞ norm is also the worst-case gain for power signals, i.e.

    γ_∞ = γ_p := sup_{‖w‖_p ≤ 1} ‖z‖_p.

The main result of this paper is to show that γ_∞ = γ_m, where the latter is the optimal value of the following matrix optimization problem

    γ_m^2 := max_{V ⪰ 0}  Tr( [C D] V [C D]^T )
    subject to            F V F^T = [A B] V [A B]^T,
                          Tr( H V H^T ) ≤ 1,                          (3)

with

    F := [ I_n  0_{n×m} ],      H := [ 0_{m×n}  I_m ].

The above matrix optimization problem has an intuitive interpretation in terms of "covariance"-like matrices of the signals in the system [7], [8]. In one direction, starting from the system equations (1), let {V_k} be the outer product sequence

    V_k := [ x_k ] [ x_k ]^T  =  [ x_k x_k^T   x_k w_k^T ]
           [ w_k ] [ w_k ]       [ w_k x_k^T   w_k w_k^T ].           (4)

This sequence obeys the recursion

    F V_{k+1} F^T = [A B] V_k [A B]^T,

which is a consequence of the system recursion (1). The asymptotic average of {V_k} therefore obeys

    F ⟨V⟩ F^T = [A B] ⟨V⟩ [A B]^T.

We further note that

    ‖w‖_p^2 = Tr( ⟨w w^T⟩ ) = Tr( H ⟨V⟩ H^T ),
    ‖z‖_p^2 = Tr( ⟨z z^T⟩ ) = Tr( [C D] ⟨V⟩ [C D]^T ).

It turns out that maximizing ‖z‖_p subject to ‖w‖_p ≤ 1 and the system dynamics can be recast in terms of the matrix-valued sequence {V_k}. Comparing the above three equations with those in (3), we see that there is a further equivalence to a matrix problem in terms of a single matrix V, which can be thought of as the asymptotic average of the sequence {V_k}, i.e. V = ⟨V⟩.

The remainder of the paper is devoted to making the above argument precise.
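
As an aside (an illustration added here, not part of the original development), problem (3) is a semidefinite program and can be handed directly to an off-the-shelf solver. The sketch below assumes numpy and cvxpy and an arbitrary small stable system; the frequency-gridded computation of the H∞ norm is only a crude cross-check, not one of the fast algorithms of [4], [5], [6].

    # Sketch (assumed tools: numpy, cvxpy): solve the matrix problem (3) as an SDP
    # and cross-check against a brute-force frequency-gridded H-infinity norm.
    import numpy as np
    import cvxpy as cp

    # Arbitrary illustrative stable system (1): x_{k+1} = A x_k + B w_k, z_k = C x_k + D w_k
    A = np.array([[0.5, 0.2],
                  [0.0, 0.3]])
    B = np.array([[1.0],
                  [0.5]])
    C = np.array([[1.0, 0.0]])
    D = np.array([[0.2]])
    n, m = B.shape

    # Selection matrices F = [I_n 0] and H = [0 I_m] from (3)
    F  = np.hstack([np.eye(n), np.zeros((n, m))])
    H  = np.hstack([np.zeros((m, n)), np.eye(m)])
    AB = np.hstack([A, B])
    CD = np.hstack([C, D])

    # Matrix optimization problem (3): maximize Tr([C D] V [C D]^T) over V >= 0
    V = cp.Variable((n + m, n + m), PSD=True)
    prob = cp.Problem(cp.Maximize(cp.trace(CD @ V @ CD.T)),
                      [F @ V @ F.T == AB @ V @ AB.T,
                       cp.trace(H @ V @ H.T) <= 1])
    prob.solve()
    gamma_m = np.sqrt(prob.value)

    # Crude cross-check: grid |G(e^{jw})| with G(z) = C (zI - A)^{-1} B + D
    def hinf_by_gridding(A, B, C, D, npts=2000):
        gains = []
        for w in np.linspace(0.0, np.pi, npts):
            G = C @ np.linalg.solve(np.exp(1j * w) * np.eye(n) - A, B) + D
            gains.append(np.linalg.svd(G, compute_uv=False)[0])
        return max(gains)

    print("gamma_m from the SDP (3):", gamma_m)
    print("H-inf norm by gridding  :", hinf_by_gridding(A, B, C, D))

For this example the two printed numbers should agree to solver accuracy, which is the content of the equivalence γ_∞ = γ_m established below.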

III. MATRIX FORMULATION OF H∞ ANALYSIS

The first direction to prove is relatively easy.

Theorem 1: The H∞ norm γ_∞ (2) of the system (1) and the value γ_m of the matrix optimization problem (3) are such that γ_m ≥ γ_∞.

Proof: This statement has essentially been proven in the previous section. We list the sequence of arguments here for completeness. Given that the system norm is γ_∞, we can find, for any ε > 0, an input sequence {w_k} with ‖w‖_p^2 ≤ 1 that achieves an output norm ‖z‖_p^2 > γ_∞^2 − ε. Now take that input and the corresponding state trajectory, form the matrix sequence {V_k} in (4), and observe that its asymptotic average ⟨V⟩ satisfies by construction

    F ⟨V⟩ F^T = [A B] ⟨V⟩ [A B]^T,
    1 ≥ Tr( H ⟨V⟩ H^T ),
    γ_∞^2 − ε < Tr( [C D] ⟨V⟩ [C D]^T ).

We note that this asymptotic average ⟨V⟩ exists since w can be taken to be a periodic sequence, and the corresponding x limits to a periodic sequence. We have thus found a matrix ⟨V⟩ which satisfies the constraints of the matrix problem (3) and achieves an objective of at least γ_∞^2 − ε. Since ε can be arbitrarily small, we conclude that γ_m^2 ≥ γ_∞^2.
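
The argument just given can also be mimicked numerically. The sketch below (an added illustration, not from the paper; numpy is assumed) drives the scalar system that reappears in Example 1 of Section IV with a periodic input of unit power semi-norm, forms the running average of the outer products V_k of (4), and checks that the resulting estimate of ⟨V⟩ satisfies the constraints of (3) while its objective value stays below γ_∞^2 = 4.

    # Sketch (added for illustration, not from the paper): estimate the deterministic
    # "covariance" <V> from a periodic input and check the constraints of problem (3).
    import numpy as np

    # Scalar system used later in Example 1 of Section IV: A = B = 1/2, C = D = 1
    A = np.array([[0.5]]); B = np.array([[0.5]])
    C = np.array([[1.0]]); D = np.array([[1.0]])
    n, m = 1, 1

    F  = np.hstack([np.eye(n), np.zeros((n, m))])
    H  = np.hstack([np.zeros((m, n)), np.eye(m)])
    AB = np.hstack([A, B])
    CD = np.hstack([C, D])

    # Periodic input with unit power semi-norm: w_k = sqrt(2) cos(freq * k), freq != 0
    N, freq = 100000, 0.05
    x = np.zeros((n, 1))
    Vbar = np.zeros((n + m, n + m))
    for k in range(N):
        wk = np.sqrt(2) * np.cos(freq * k) * np.ones((m, 1))
        xi = np.vstack([x, wk])              # stacked vector [x_k; w_k]
        Vbar += np.outer(xi, xi) / N         # running average of V_k from (4)
        x = A @ x + B @ wk

    print("equality residual  :", np.abs(F @ Vbar @ F.T - AB @ Vbar @ AB.T).max())
    print("Tr(H <V> H^T)      :", np.trace(H @ Vbar @ H.T))     # ~ ||w||_p^2 = 1
    print("Tr([C D]<V>[C D]^T):", np.trace(CD @ Vbar @ CD.T))   # ~ ||z||_p^2 <= gamma_inf^2

Choosing a smaller input frequency (with a correspondingly larger averaging horizon N) pushes the last printed quantity toward γ_∞^2 for this example, since zero is its worst-case frequency.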


To prove the opposite bound, we will show how to find an input sequence with ‖w‖_2 ≤ 1 to the system (1) that produces an output with ‖z‖_2 arbitrarily close to the value γ_m of the matrix problem (3). Note first that the maximization problem (3) over positive semidefinite matrices V ⪰ 0 has the same optimal value as taking the supremum over strictly positive definite matrices V ≻ 0:

    sup_{V ≻ 0}   Tr( [C D] V [C D]^T )
    subject to    F V F^T = [A B] V [A B]^T,
                  Tr( H V H^T ) ≤ 1.                                  (5)

Theorem 2: Suppose that the matrix

    V = [ X    R ]  ≻  0
        [ R^T  W ]

is such that

    γ^2 = Tr( [C D] V [C D]^T ),
    F V F^T = [A B] V [A B]^T,                                        (6)
    Tr( H V H^T ) ≤ 1.

Then, for the linear system (1), there exists a vector w_0 such that, with w_k = R^T X^{-1} x_k for k ≥ 1, we have ‖w‖_2 ≤ 1 and ‖z‖_2^2 ≥ γ^2.

Proof: Starting from V ≻ 0, a Schur complement argument implies that X ≻ 0 and W ≻ R^T X^{-1} R. Let Ω = W − R^T X^{-1} R ≻ 0. Now observe that the constraint

    F V F^T = [A B] V [A B]^T

implies that X satisfies the Lyapunov equation

    X = A X A^T + A R B^T + B R^T A^T + B W B^T
      = A X A^T + A R B^T + B R^T A^T + B R^T X^{-1} R B^T + B Ω B^T
      = (A + B R^T X^{-1}) X (A + B R^T X^{-1})^T + B Ω B^T.          (7)

Note that Y = X is the unique solution to the Lyapunov equation

    Y = (A + B R^T X^{-1}) Y (A + B R^T X^{-1})^T + B Ω B^T.

Also, since we have assumed that the pair (A, B) is controllable, Ω ≻ 0 implies that (A + B R^T X^{-1}, B Ω B^T) is also controllable. This together with X ≻ 0 implies that A + B R^T X^{-1} is asymptotically stable.
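
A numerical sanity check of this Lyapunov step is straightforward. The sketch below is added for illustration: scipy and numpy are assumptions, and the numbers X = 0.99, R = 0.985, W = 1.0 are taken from the regularized solution that appears later in Example 1 of Section IV. It forms A + B R^T X^{-1} and Ω, verifies that the spectral radius is below one, and checks that the solution of the Lyapunov equation (7) approximately recovers X, up to the rounding of the printed values.

    # Sanity check of the Lyapunov step (7); scipy/numpy assumed, and X, R, W are
    # the (rounded) regularized solution that appears later in Example 1 of Section IV.
    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    A = np.array([[0.5]]);  B = np.array([[0.5]])
    X = np.array([[0.99]]); R = np.array([[0.985]]); W = np.array([[1.0]])

    K     = R.T @ np.linalg.inv(X)            # feedback gain R^T X^{-1}
    A_cl  = A + B @ K                         # closed-loop matrix A + B R^T X^{-1}
    Omega = W - R.T @ np.linalg.inv(X) @ R    # Omega = W - R^T X^{-1} R > 0

    # Asymptotic stability: spectral radius of A_cl should be < 1
    print("spectral radius of A_cl:", max(abs(np.linalg.eigvals(A_cl))))

    # X should (approximately) solve Y = A_cl Y A_cl^T + B Omega B^T, i.e. (7)
    Y = solve_discrete_lyapunov(A_cl, B @ Omega @ B.T)
    print("X =", X.ravel(), " vs. Lyapunov solution Y =", Y.ravel())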

We can write ([3])

    Ω = Σ_{i=1}^{n} ω_i ω_i^T

with ω_i ∈ R^n. Let Y = X^i be the unique positive definite solution to the Lyapunov equation

    Y = (A + B R^T X^{-1}) Y (A + B R^T X^{-1})^T + B ω_i ω_i^T B^T   (8)

for i = 1, ..., n. Now summing up the right and left hand sides of (8), for i = 1, ..., n, gives

    Σ_{i=1}^{n} X^i = (A + B R^T X^{-1}) ( Σ_{i=1}^{n} X^i ) (A + B R^T X^{-1})^T + B Ω B^T.   (9)

Since X was the unique positive definite solution to (7), we must have

    X = Σ_{i=1}^{n} X^i.

Now introduce

    V^i = [ X^i              X^i X^{-1} R                        ]
          [ R^T X^{-1} X^i   R^T X^{-1} X^i X^{-1} R + ω_i ω_i^T ].

Then V^i satisfies the last two constraints in (6), and

    V = Σ_{i=1}^{n} V^i.

Let

    λ_i^2 := Tr( R^T X^{-1} X^i X^{-1} R + ω_i ω_i^T ).

For λ_i ≠ 0, we get

    (1/λ_i^2) Tr( R^T X^{-1} X^i X^{-1} R + ω_i ω_i^T ) = 1.

Also, the inequality constraint in (6) gives

    λ_1^2 + ··· + λ_n^2 ≤ 1.

Let j be the index such that λ_j ≠ 0 and that maximizes the objective

    (1/λ_i^2) Tr( [C D] V^i [C D]^T )

over i = 1, ..., n. Then,

    (1/λ_j^2) Tr( [C D] V^j [C D]^T )
        ≥ Σ_{i=1}^{n} λ_i^2 (1/λ_j^2) Tr( [C D] V^j [C D]^T )
        ≥ Σ_{i=1}^{n} λ_i^2 (1/λ_i^2) Tr( [C D] V^i [C D]^T )
        = Tr( [C D] V [C D]^T )
        = γ^2.

It is now easy to verify that the choice of

    w_0 = (1/λ_j) ω_j,       w_k = R^T X^{-1} x_k   for all k ≥ 1,

gives Ω_j = λ_j^2 w_0 w_0^T = ω_j ω_j^T and

    ⟨V⟩ = (1/λ_j^2) [ X^j              X^j X^{-1} R               ]
                    [ R^T X^{-1} X^j   R^T X^{-1} X^j X^{-1} R + Ω_j ],

where ⟨V⟩ satisfies the last two constraints in (6) and at least the value γ^2 is achieved. This completes the proof.

For completeness, the next theorem puts together Theorems 1 and 2 to state the main result of this paper.

Theorem 3: The H∞ norm γ_∞ of the system (1) is equal to the optimal value γ_m of the matrix optimization problem (3).

Proof: Theorem 1 gives the bound γ_m ≥ γ_∞. Conversely, Theorem 2 shows that for any given positive definite matrix V and positive number ε such that

    γ_m^2 − ε = Tr( [C D] V [C D]^T ),
    F V F^T = [A B] V [A B]^T,                                        (10)
    Tr( H V H^T ) ≤ 1,

there exists a sequence w such that

    ‖z‖_2^2 ≥ γ_m^2 − ε.

Thus,

    γ_∞^2 = sup_{‖w‖_2 ≤ 1} ‖z‖_2^2 ≥ γ_m^2 − ε ≥ γ_∞^2 − ε.

Letting ε → 0 gives γ_m = γ_∞, and we are done.

Remark 1: Note that there might not exist a sequence w that achieves the supremum γ_∞, but we can find sequences that get arbitrarily close. The main reason is that the initial value w_0 might go to zero as γ → γ_∞. This can be seen as follows. The supremum cannot be replaced by a maximum unless we relax the constraint V ≻ 0 to V ⪰ 0, which does not affect the optimal value γ_∞. On the other hand, if the optimal V gives Ω = 0, then the corresponding optimal value for w_0 is 0, according to the construction in the proof of Theorem 2. The example in the next section illustrates this discussion.

IV. NUMERICAL EXAMPLE

Example 1: Consider the stable linear system (1) with parameters A = 1/2, B = 1/2, C = 1, and D = 1. Solving the optimization problem gives the optimal value (the H∞ norm) γ_∞ = 2. It is achieved by the matrix

    V* = [ 1  1 ]
         [ 1  1 ].

V* is not strictly positive definite. If we instead impose the harder constraint V ⪰ 0.01 · I in our maximization problem, we obtain the solution

    V* = [ 0.9900  0.9850 ]
         [ 0.9850  1.0000 ],

with γ = 1.99. Following the steps of the proof of Theorem 2, we get Ω = 1 − 0.985^2/0.99 ≈ 0.02, which gives w_0^2 = 0.02, and

    w_k = (0.985/0.99) · x_k ≈ 0.9949 · x_k.

Note that the closed loop will be given by

    x_{k+1} = (A + B R^T X^{-1}) x_k

with initial value w_0 = √0.02. As w_0 goes to zero, R and X go to 1, and thus the closed-loop matrix approaches

    A + B R^T X^{-1} = 0.5 + 0.5 = 1,

and the closed loop becomes unstable. The input sequence therefore tries to destabilize the system, but this cannot be achieved since the initial value w_0 goes to zero, and a destabilized closed loop is not possible to realize.
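
The whole construction of Example 1 can be reproduced end to end. The sketch below (an added illustration; numpy and cvxpy are assumptions, not tools used in the paper) solves the regularized problem with V ⪰ 0.01·I, extracts the feedback gain R^T X^{-1} and the initial value w_0 = √Ω from the optimizer, simulates the near-worst-case input of Theorem 2, and checks that the achieved gain ‖z‖_2/‖w‖_2 comes out close to γ ≈ 1.99.

    # Reproduce Example 1 (assumed tools: numpy, cvxpy): regularized problem,
    # near-worst-case input of Theorem 2, and the achieved l2 gain.
    import numpy as np
    import cvxpy as cp

    A = np.array([[0.5]]); B = np.array([[0.5]])
    C = np.array([[1.0]]); D = np.array([[1.0]])
    n, m = 1, 1
    F  = np.hstack([np.eye(n), np.zeros((n, m))])
    H  = np.hstack([np.zeros((m, n)), np.eye(m)])
    AB = np.hstack([A, B]); CD = np.hstack([C, D])

    # Regularized matrix problem: V >= 0.01 I instead of V >= 0
    V = cp.Variable((n + m, n + m), symmetric=True)
    prob = cp.Problem(cp.Maximize(cp.trace(CD @ V @ CD.T)),
                      [V >> 0.01 * np.eye(n + m),
                       F @ V @ F.T == AB @ V @ AB.T,
                       cp.trace(H @ V @ H.T) <= 1])
    prob.solve()
    gamma = np.sqrt(prob.value)                  # should come out close to 1.99
    Vopt = V.value
    X, R, W = Vopt[:n, :n], Vopt[:n, n:], Vopt[n:, n:]

    K  = R.T @ np.linalg.inv(X)                  # w_k = K x_k for k >= 1
    Om = W - R.T @ np.linalg.inv(X) @ R          # Omega = W - R^T X^{-1} R
    w0 = np.sqrt(Om)                             # scalar case: w_0 = sqrt(Omega)

    # Simulate: w_0 at k = 0, then the state feedback of Theorem 2
    N = 20000
    x = np.zeros((n, 1))
    energy_w, energy_z = 0.0, 0.0
    for k in range(N):
        wk = w0 if k == 0 else K @ x
        zk = C @ x + D @ wk
        energy_w += float(wk.T @ wk)
        energy_z += float(zk.T @ zk)
        x = A @ x + B @ wk

    print("gamma from the regularized SDP:", gamma)
    print("achieved gain ||z||_2/||w||_2 :", np.sqrt(energy_z / energy_w))

Shrinking the regularization parameter shrinks w_0 = √Ω and pushes the closed-loop matrix toward 1, matching the discussion above.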


V. CONCLUSIONS

We have introduced a novel and elementary approach to the H∞ analysis problem. The proposed approach relies on a new matrix formulation of the problem. The matrix optimization is a trace maximization problem subject to a linear matrix equality and a convex linear inequality. The proof technique is constructive, so we can construct an input sequence that generates a gain arbitrarily close to the H∞ norm. The near-optimal input sequence has the structure of a state-feedback control law together with a suitably chosen initial value. An illustrative example with some interpretations of the results was also given to highlight the main ideas. As a topic of future investigation, it would be interesting to consider the application of this technique to provide new approaches to the synthesis problem for state-feedback and output-feedback controllers.

VI. ACKNOWLEDGEMENTS

This work was in part supported by the Swedish Research Council.

REFERENCES

[1] K. Zhou, J. C. Doyle, K. Glover et al., Robust and Optimal Control. Upper Saddle River, NJ: Prentice Hall, 1996, vol. 40.
[2] G. E. Dullerud and F. Paganini, A Course in Robust Control Theory. New York: Springer, 2000, vol. 6.
[3] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[4] S. Boyd and V. Balakrishnan, "A regularity result for the singular values of a transfer matrix and a quadratically convergent algorithm for computing its L∞ norm," Systems & Control Letters, vol. 15, no. 1, pp. 1–7, 1990.
[5] C. Scherer, "H∞ control by state-feedback and fast algorithms for the computation of optimal H∞-norms," IEEE Transactions on Automatic Control, vol. 35, no. 10, pp. 1090–1099, 1990.
[6] N. Bruinsma and M. Steinbuch, "A fast algorithm to compute the H∞-norm of a transfer function matrix," Systems & Control Letters, vol. 14, no. 4, pp. 287–293, 1990.
[7] A. Gattami, "Optimal decisions with limited information," Ph.D. dissertation, Department of Automatic Control, Lund University, 2007.
[8] ——, "Generalized linear quadratic control," IEEE Transactions on Automatic Control, vol. 55, no. 1, pp. 131–136, 2010.
[9] F. Paganini, "A set-based approach for white noise modeling," IEEE Transactions on Automatic Control, vol. 41, no. 10, pp. 1453–1465, 1996.
[10] J. Mari, "A counterexample in power signals space," IEEE Transactions on Automatic Control, vol. 41, no. 1, pp. 115–116, 1996.
[11] J. Doyle, K. Zhou, K. Glover, and B. Bodenheimer, "Mixed H2 and H∞ performance objectives. II. Optimal control," IEEE Transactions on Automatic Control, vol. 39, no. 8, pp. 1575–1587, 1994.
