IEEE Transactions on Automatic Control 58(4):1068-1073, 2013
Normalized coprime robust stability and performance guarantees for reduced-order controllers

Kevin K. Chen and Clarence W. Rowley
Abstract—Constructing a reduced-order controller from a high-dimensional plant is commonly necessary. The "reduce-then-design" approach constructs the controller from a reduced-order plant; "design-then-reduce" reduces a full-order controller. In both cases, we present sufficient conditions for the full-order plant and reduced-order controller to achieve closed-loop stability or performance. These conditions, motivated primarily by the $\nu$-gap metric, reveal model reduction orders that guarantee stability or performance. The control of the linearized Ginzburg-Landau system provides validation.
The Department of Defense (DOD) supported this work through the National Defense Science & Engineering Graduate (NDSEG) fellowship. K. K. Chen and C. W. Rowley are with the Department of Mechanical & Aerospace Engineering, Princeton University, Princeton, NJ 08544 (email: [email protected]; [email protected]).

I. INTRODUCTION

The last few decades have seen great development in linear time-invariant (LTI) state-space methods for computing powerful and effective controllers. Many real-life dynamical models, however, have a very large state dimension. Since controllers are often at least as large as the plant, it is frequently preferable to implement reduced-order controllers instead. When the matrix representing the plant dynamics is available, there exist several methods for designing such reduced-order controllers [1].

In certain applications, however, the dynamical operator is so large that matrix computations, such as the solution of Riccati equations or linear matrix inequalities, are impractical or impossible. In the control of a linearized fluid flow, for instance, the number of states is typically on the order of the number of grid points used to represent the fluid dynamics. Such a number is frequently upwards of hundreds of thousands or millions.

In the context of these very-large-dimensional plants, designers often first reduce the plant dynamics to an approximate system of much lower dimension. If a matrix representation of the dynamical operator is available, then standard methods include modal truncation, balanced truncation, optimal Hankel norm reduction, and coprime factor model reduction [2, Ch. 11]. Should the matrix representation be unavailable, or too large to be tractable, designers may use data-based techniques such as proper orthogonal decomposition [3] or balanced proper orthogonal decomposition (BPOD) [4], as well as system-identification methods such as the eigensystem realization algorithm (ERA) [5]. In either case, the designer then constructs the controller from the reduced-order model (i.e., "reduce-then-design"), with the assumption that the reduced-order model is a sufficiently accurate representation of the original dynamics.

Alternatively, we may design the controller directly from the full-order plant. For large-dimensional plants, certain techniques approximate the full-order controller of interest directly from the original plant. For instance, Chandrasekhar's method [6] and recently proposed methods such as the adjoint of the direct-adjoint [7] approximate the $\mathcal{H}_2$ optimal controller. Whether we directly compute or approximate the full-order controller, we may then reduce the controller (i.e., "design-then-reduce") to make its application more computationally tractable. There exists a wealth of literature that discusses effective methods for controller reduction. For instance, [1], [8], [9] investigate frequency-weighted model reduction techniques,
which are constructed from approximations of an internal-model-control-based transfer function.

Fig. 1 depicts the two methods for designing low-order controllers. We denote the design-then-reduce controller $K_r^K$, and the reduce-then-design controller $K_r^P$. The notation $K_r$ may refer to either one.

Fig. 1. Two approaches for arriving at different reduced-order controllers $K_r$ from a full-order model $P$, as used in this study (cf. [1, Fig. 3.1.1]): designing $K$ from the full-order model and then reducing it to $K_r^K$, or reducing the model to $P_r$ and then designing $K_r^P$ from it.

A number of works have also focused specifically on the reduced-order control of distributed-parameter (e.g., partial differential equation) systems [10]–[13]. Some of these references, as well as [1], argue loosely that reduce-then-design is inherently inferior to design-then-reduce. The latter delays the approximation step as much as possible, whereas the former allows approximation error to propagate. In addition, certain works in the model reduction literature have addressed robust stability or performance in the context of controller reduction [8], [11], [14], [15]. In particular, a recent work [16] analyzes multiplicative uncertainty using structured singular values and the $M$-$\Delta$ structure, focusing on balanced truncation for controller reduction. This model of controller reduction yields lower bounds on the controller order necessary for robust stability or performance.

In the present study, we connect concepts from robust stability and performance (such as the normalized coprime stability margin and the $\nu$-gap metric) with concepts from model reduction (such as Hankel singular values and $\mathcal{H}_\infty$ norms on model reduction error). By doing so, we derive simple theorems that guarantee the closed-loop stability or performance of full-order plants with reduced-order controllers. The primary merit of these theorems is that they are, we believe, more generalizable and readily applicable to the design of reduced-order controllers than other known robust stability or performance results. Furthermore, as in [16], and as we will see in the included example, the theorems provide a sense of the required model reduction accuracy for stability or performance. Control designers may therefore use them to guide the model reduction process. Lastly, our results are applicable to both the reduce-then-design and the design-then-reduce methods.

This paper is organized as follows. We briefly review elements of the normalized coprime stability margin and the $\nu$-gap metric in Section II. Section III presents the main results of the paper by proving sufficient conditions for the stability, or desired performance level,
of full-order plants in closed loop with reduced-order controllers. In Section IV, we show examples of these conditions using the control of the linearized Ginzburg-Landau system, and we provide concluding remarks in Section V.

II. THE NORMALIZED COPRIME STABILITY MARGIN AND THE $\nu$-GAP METRIC

In this paper, we use the following notation. We denote the conjugate transpose of a matrix or transfer function $G$ by $G^*$. The maximum singular value of a matrix or transfer function $G$ is denoted $\bar{\sigma}(G)$. $\mathcal{L}_\infty$ is the Lebesgue space of matrix functions essentially bounded on the imaginary axis, with norm $\|G\|_\infty = \operatorname{ess\,sup}_{\omega \in \mathbb{R}} \bar{\sigma}(G(j\omega))$. The space of stable, continuous-time, LTI transfer functions is $\mathcal{H}_\infty$, and $\mathcal{H}_\infty \subset \mathcal{L}_\infty$. $\mathcal{R}$ is the space of rational functions, and $\mathcal{RL}_\infty \triangleq \mathcal{R} \cap \mathcal{L}_\infty$ and $\mathcal{RH}_\infty \triangleq \mathcal{R} \cap \mathcal{H}_\infty$. For $G \in \mathcal{RL}_\infty$, $\|G\|_\infty = \max_{\omega \in \mathbb{R}} \bar{\sigma}(G(j\omega))$. We denote the $k$th largest Hankel singular value of $G \in \mathcal{RH}_\infty$ by $\sigma_k(G)$. In the case of an unstable $G$, we first separate $G$ into a stable and an anti-stable part; $\sigma_k(G)$ only includes the Hankel singular values of the stable part.

Next, we review the framework of the normalized coprime stability margin and the $\nu$-gap metric. We base this discussion on [17], [18]; consult these references for a complete description.

The normalized coprime stability margin and the $\nu$-gap metric provide a useful framework for providing bounds on closed-loop performance. Suppose the stability margin $b_{P,K}$ of a plant $P$ and a controller $K$, as well as the $\nu$-gap metric $\delta_\nu(P, P_p)$ between $P$ and a perturbed plant $P_p$, are both known. Then, we may analytically solve for lower and upper bounds on $b_{P_p,K}$. The utility of this framework is multifaceted. First, we can potentially guarantee that the closed-loop system $[P_p, K]$ will be stable and achieve a certain performance level, even though we constructed $K$ to control $P$, not $P_p$. Furthermore, we may determine this without actually testing $[P_p, K]$. In addition, the framework of normalized coprime factorization is more generalizable than, for instance, multiplicative or inverse multiplicative uncertainty alone. Finally, the framework for $P$, $K$, and a perturbed controller $K_p$ is exactly analogous.

We first introduce some definitions that we use in our framework.

Definition 1: The closed-loop system $[P, K]$ is stable if the eight transfer functions from $v_1$, $v_2$, $v_3$, and $v_4$ to $u$ and $y$ in Fig. 2 are stable [18, Def. 1.1]. These eight transfer function relations are
\begin{equation}
\begin{bmatrix} y \\ u \end{bmatrix} =
\begin{bmatrix} P \\ I \end{bmatrix} (I - KP)^{-1} \begin{bmatrix} K & I \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}
+ \begin{bmatrix} I \\ K \end{bmatrix} (I - PK)^{-1} \begin{bmatrix} I & P \end{bmatrix} \begin{bmatrix} v_3 \\ v_4 \end{bmatrix}. \tag{1}
\end{equation}
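For finite-dimensional state-space realizations, Definition 1 can be checked directly. The following is a minimal Python sketch of our own (not part of the original paper), assuming strictly proper plant and controller with minimal realizations, for which internal stability of the interconnection is equivalent to the closed-loop state matrix being Hurwitz; the system data are placeholders.

```python
import numpy as np

def closed_loop_is_stable(Ap, Bp, Cp, Ak, Bk, Ck):
    """Check stability of [P, K] for strictly proper, minimal realizations.

    The controller is driven by y and produces u; any feedback sign
    convention is absorbed into the controller realization itself.
    For stabilizable/detectable (e.g., minimal) realizations, internal
    stability is equivalent to the closed-loop state matrix being Hurwitz.
    """
    A_cl = np.block([[Ap, Bp @ Ck],
                     [Bk @ Cp, Ak]])
    return bool(np.all(np.linalg.eigvals(A_cl).real < 0))

# Toy placeholder data: unstable first-order plant, first-order controller.
Ap, Bp, Cp = np.array([[0.5]]), np.array([[1.0]]), np.array([[1.0]])
Ak, Bk, Ck = np.array([[-2.0]]), np.array([[1.0]]), np.array([[-3.0]])
print(closed_loop_is_stable(Ap, Bp, Cp, Ak, Bk, Ck))
```

For the toy data above, the closed-loop state matrix has eigenvalues in the open left half-plane, so the script prints True.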
Fig. 2. The standard feedback interconnection [18, Fig. 1.1].

Definition 2: The normalized right coprime factorization of $G \in \mathcal{R}^{p \times q}$ is the ordered pair $(N, M)$, where $N \in \mathcal{RH}_\infty^{p \times q}$ and $M \in \mathcal{RH}_\infty^{q \times q}$, $G = N M^{-1}$, $N$ and $M$ are proper, and $N^* N + M^* M = I$. The transfer functions $N$ and $M$ are unique up to a right multiplication by a unitary matrix. The normalized left coprime factorization is exactly analogous; see [18, Ch. 1.2.1].

Definition 3: For a $p \times q$ plant $P$ and a $q \times p$ controller $K$, the normalized coprime stability margin is
\begin{equation}
b_{P,K} \triangleq
\begin{cases}
\left\| \begin{bmatrix} P \\ I \end{bmatrix} (I - KP)^{-1} \begin{bmatrix} K & I \end{bmatrix} \right\|_\infty^{-1}, & [P, K] \text{ stable} \\
0, & \text{otherwise}
\end{cases} \tag{2}
\end{equation}
(see [18, (2.1)]). It is a measure of both the performance and the robustness of $[P, K]$. Furthermore, $b_{P,K} = b_{K,P}$.

Definition 4: For a system $G \in \mathcal{R}^{p \times q}$ given by a normalized right coprime factorization $(N, M)$, and a perturbed system given by $G_p = (N + \Delta_N)(M + \Delta_M)^{-1} \in \mathcal{R}^{p \times q}$,
\begin{equation}
\delta_\nu(G, G_p) \triangleq
\inf_{\substack{\Delta_N, \Delta_M \in \mathcal{L}_\infty \\ \mathrm{wno}\,|M + \Delta_M| = \eta(G_p)}}
\left\| \begin{bmatrix} \Delta_N \\ \Delta_M \end{bmatrix} \right\|_\infty \tag{3}
\end{equation}
(see [18, Def. 1.8]). Here, $\mathrm{wno}\, g(s)$ is the counterclockwise winding number of the scalar function $g(s)$ as $s$ follows the counterclockwise Nyquist D-contour (indented to the right of any pure imaginary poles or zeros). Also, $|\cdot|$ indicates the determinant, and $\eta(G)$ is the number of open right-half-plane poles of $G$. The $\nu$-gap metric is a true metric; it obeys the relation $\delta_\nu(G, G_p) = \delta_\nu(G_p, G)$.

We may also compute the $\nu$-gap metric using a more readily calculable relation. First, we define the matrix square root.

Definition 5: Consider the matrix $X \in \mathbb{C}^{n \times m}$. The Hermitian square root $(X^* X)^{\frac{1}{2}}$ of the matrix $X^* X$ is the unique positive semidefinite matrix satisfying $\bigl((X^* X)^{\frac{1}{2}}\bigr)^2 = X^* X$ and $(X^* X)^{\frac{1}{2}} = \bigl((X^* X)^{\frac{1}{2}}\bigr)^*$. (See [18, p. 69].)

Given this definition, we may calculate the $\nu$-gap metric by
\begin{equation}
\delta_\nu(G, G_p) = \left\| (I + G_p G_p^*)^{-\frac{1}{2}} (G - G_p) (I + G^* G)^{-\frac{1}{2}} \right\|_\infty \tag{4}
\end{equation}
if $|I + G_p^*(j\omega) G(j\omega)| \neq 0$ for all $\omega \in \mathbb{R}$, and $\mathrm{wno}\,|I + G_p^* G| + \eta(G) - \eta(G_p) - \eta_0(G_p) = 0$, where $\eta_0(G_p)$ is the number of pure imaginary poles of $G_p$ [19, Thm. 17.6]. Otherwise, $\delta_\nu(G, G_p) = 1$.

The key relation that bounds the normalized coprime stability margin is
\begin{equation}
\sin^{-1} b_{P_p,K} \geq \sin^{-1} b_{P,K} - \sin^{-1} \delta_\nu(P, P_p). \tag{5a}
\end{equation}
(See, for instance, [18, (3.2)]. For a more detailed discussion and proof, see [17, Thm. 4.2] or [18, Thm. 3.8].) Since the inputs to $\delta_\nu$ commute, we have that
\begin{equation}
\sin^{-1} b_{P,K_r^P} \geq \sin^{-1} b_{P_r,K_r^P} - \sin^{-1} \delta_\nu(P, P_r). \tag{5b}
\end{equation}
Furthermore,
\begin{equation}
\sin^{-1} b_{K_r^K,P} \geq \sin^{-1} b_{K,P} - \sin^{-1} \delta_\nu(K, K_r^K); \tag{5c}
\end{equation}
since the inputs to $b$ commute,
\begin{equation}
\sin^{-1} b_{P,K_r^K} \geq \sin^{-1} b_{P,K} - \sin^{-1} \delta_\nu(K, K_r^K). \tag{5d}
\end{equation}
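To make these quantities concrete, the following sketch of ours (not from the paper) evaluates (2), (4), and the bound (5a) for a toy single-input, single-output example by frequency gridding. The grid only approximates the suprema, the winding-number condition attached to (4) is assumed to hold for the chosen systems rather than verified, and the plant, perturbed plant, and controller are placeholders.

```python
import numpy as np

w = np.logspace(-3, 3, 4000)          # frequency grid (rad/s)
s = 1j * w

P  = 1.0 / (s + 1.0)                  # nominal plant P(s) = 1/(s+1)
Pp = 1.0 / (1.2 * s + 1.0)            # perturbed plant
K  = -1.0                             # constant controller; its sign absorbs
                                      # the (I - KP)^{-1} convention of (1)-(2)

def b_margin(P, K):
    # Pointwise spectral norm of [P; 1](1 - KP)^{-1}[K  1] for SISO systems;
    # the rank-one structure gives sqrt(1+|P|^2) sqrt(1+|K|^2) / |1 - KP|.
    # Assumes [P, K] is known to be stable (otherwise b_{P,K} = 0 by Def. 3).
    gain = np.sqrt(1 + abs(P)**2) * np.sqrt(1 + abs(K)**2) / abs(1 - K * P)
    return 1.0 / gain.max()

def nu_gap(G, Gp):
    # Formula (4) for SISO systems, assuming the winding-number condition holds.
    chordal = abs(G - Gp) / (np.sqrt(1 + abs(Gp)**2) * np.sqrt(1 + abs(G)**2))
    return chordal.max()

bPK   = b_margin(P, K)
dnu   = nu_gap(P, Pp)
lower = np.sin(max(0.0, np.arcsin(bPK) - np.arcsin(dnu)))   # bound (5a)
print(bPK, dnu, b_margin(Pp, K), lower)   # b_{Pp,K} should exceed `lower`
```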
III. GUARANTEES ON THE STABILITY AND PERFORMANCE OF $[P, K_r]$
Equations (5b, 5d) motivate the stability and performance guarantees. If the right-hand side of these inequalities is positive, then $[P, K_r]$ must be stable. Additionally, if the right-hand side is greater than some positive value, then we can provide a lower bound on $b_{P,K_r}$ greater than zero.

The main results of this paper are readily applicable when the model reduction method of choice has an analytic upper bound on the $\mathcal{H}_\infty$ norm of the error. For instance, balanced truncation has the upper bound $\|G - G_r\|_\infty \leq 2 \sum_{k=r+1}^{n} \sigma_k(G)$ [2, (11.17)], and a particular construction of optimal Hankel norm reduction has the bound $\|G - G_r\|_\infty \leq \sum_{k=r+1}^{n} \sigma_k(G)$ [2, (11.29–30)].
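As an illustration of how such a bound is evaluated in practice, the sketch below (ours; the state-space data are placeholders) computes the Hankel singular values of a stable system from its Gramians and tabulates the balanced-truncation bound $2 \sum_{k=r+1}^{n} \sigma_k(G)$ for each truncation order $r$.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def hankel_singular_values(A, B, C):
    """Hankel singular values of a stable LTI system (A, B, C)."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian
    eigs = np.linalg.eigvals(Wc @ Wo)
    return np.sort(np.sqrt(np.abs(eigs.real)))[::-1]

# Placeholder stable system of order n = 8.
rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
A = A - (abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)   # shift to be Hurwitz
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

hsv = hankel_singular_values(A, B, C)
for r in range(n + 1):
    bound = 2 * hsv[r:].sum()   # ||G - G_r||_inf <= 2 * sum_{k=r+1}^n sigma_k(G)
    print(r, bound)
```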
Our main results are based on the following basic theorem.

Theorem 1: If $G, G_r \in \mathcal{RL}_\infty$ and $\delta_\nu(G, G_r) < 1$, then
\begin{equation}
\delta_\nu(G, G_r) \leq \|G - G_r\|_\infty. \tag{6}
\end{equation}
Proof: See, for instance, the proof of Proposition 3 in [20].

We now show conditions that guarantee the closed-loop stability or performance of the full plant and reduced-order controller.

Theorem 2 (Condition for design-then-reduce performance): If $K_r^K$ is an approximation of $K$ that satisfies $\|K - K_r^K\|_\infty \leq \alpha$ for some $\alpha \in [0, 1)$, and $\delta_\nu(K, K_r^K) < 1$, then the condition
\begin{equation}
\sin^{-1} b_{P,K} > \sin^{-1} \alpha + \sin^{-1} b_d \tag{7}
\end{equation}
is sufficient for satisfying the performance criterion $b_{P,K_r^K} > b_d$ for the desired $b_d \in [0, 1)$.

Proof: Given the stated conditions, (7) implies
\begin{equation}
\sin^{-1} b_{P,K} > \sin^{-1} \|K - K_r^K\|_\infty + \sin^{-1} b_d, \tag{8a}
\end{equation}
and Theorem 1 yields
\begin{equation}
\sin^{-1} b_{P,K} > \sin^{-1} \delta_\nu(K, K_r^K) + \sin^{-1} b_d. \tag{8b}
\end{equation}
Equations (5d, 8b) reduce to $b_{P,K_r^K} > b_d$.

Corollary 1 (Weak condition for design-then-reduce stability): If $K_r^K$ is an approximation of $K$ that satisfies $\|K - K_r^K\|_\infty \leq \alpha$ for some $\alpha \in [0, 1)$, and $\delta_\nu(K, K_r^K) < 1$, then the condition
\begin{equation}
b_{P,K} > \alpha \tag{9}
\end{equation}
is sufficient for the stability of $[P, K_r^K]$.

Proof: This follows directly from Theorem 2, with the choice $b_d = 0$.

We may derive a stronger condition, however, from [1, Ch. 3.2].

Theorem 3 (Strong condition for design-then-reduce stability): If $[P, K]$ is stable, $K_r^K$ is an approximation of $K$ that satisfies $\|K - K_r^K\|_\infty \leq \alpha$ for some $\alpha \in [0, 1)$, and $\eta(K) = \eta(K_r^K)$, then
\begin{equation}
\left\| P (I - KP)^{-1} \right\|_\infty^{-1} > \alpha \tag{10}
\end{equation}
is sufficient for the stability of $[P, K_r^K]$.

Proof: An internal-model-control-like structure [1, Ch. 3.2] shows that if $K, K_r^K \in \mathcal{RL}_\infty$, $[P, K]$ is stable, $\eta(K) = \eta(K_r^K)$, and $\|(K - K_r^K) P (I - KP)^{-1}\|_\infty < 1$, then $[P, K_r^K]$ is stable. From the stated conditions and (10), we see that
\begin{align}
1 &> \|K - K_r^K\|_\infty \left\| P (I - KP)^{-1} \right\|_\infty \tag{11a} \\
  &\geq \left\| (K - K_r^K) P (I - KP)^{-1} \right\|_\infty; \tag{11b}
\end{align}
therefore, $[P, K_r^K]$ must be stable.

This condition is stronger than Corollary 1, because it can be shown using $\mathcal{H}_\infty$ norm properties that $\|P (I - KP)^{-1}\|_\infty^{-1} \geq b_{P,K}$. The difference between the two sides of this inequality, however, may not be large in practice; see Fig. 4 for an example comparison.
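For a concrete sense of how the test in Theorem 3 might be evaluated, the sketch below (ours, not the paper's code) approximates $\|P(I - KP)^{-1}\|_\infty^{-1}$ on a frequency grid for a placeholder SISO plant and controller and compares it against a given error bound $\alpha$; the grid only approximates the true $\mathcal{H}_\infty$ norm, and the closed-loop stability and pole-count hypotheses of the theorem are assumed rather than checked.

```python
import numpy as np

w = np.logspace(-3, 3, 4000)
s = 1j * w

# Placeholder SISO plant and full-order controller ([P, K] stable is assumed).
P = 1.0 / ((s + 1.0) * (0.1 * s + 1.0))
K = -5.0 * (s + 1.0) / (s + 10.0)

# Left-hand side of (10): || P (I - K P)^{-1} ||_inf^{-1}, approximated on the grid.
lhs = 1.0 / np.max(abs(P / (1.0 - K * P)))

alpha = 0.05   # e.g., 2 * (sum of discarded Hankel singular values of K), per Remark 1
print(lhs, alpha, lhs > alpha)   # True => Theorem 3 guarantees [P, K_r^K] is stable
```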
The design-then-reduce conditions presented above may be difficult or impossible to compute for very large systems. Nevertheless, reduce-then-design is a more common approach in the control of very large systems (e.g., fluid flow control); the conditions for reduce-then-design performance and stability, which we present below, are easy to compute. These results and their proofs are exactly analogous to the design-then-reduce counterparts.

Theorem 4 (Condition for reduce-then-design performance): If $P_r$ is an approximation of $P$ that satisfies $\|P - P_r\|_\infty \leq \beta$ for some $\beta \in [0, 1)$, and $\delta_\nu(P, P_r) < 1$, then the condition
\begin{equation}
\sin^{-1} b_{P_r,K_r^P} > \sin^{-1} \beta + \sin^{-1} b_d \tag{12}
\end{equation}
is sufficient for satisfying the performance criterion $b_{P,K_r^P} > b_d$ for the desired $b_d \in [0, 1)$.

Proof: Given the stated conditions, (12) implies
\begin{equation}
\sin^{-1} b_{P_r,K_r^P} > \sin^{-1} \|P - P_r\|_\infty + \sin^{-1} b_d, \tag{13a}
\end{equation}
and Theorem 1 yields
\begin{equation}
\sin^{-1} b_{P_r,K_r^P} > \sin^{-1} \delta_\nu(P, P_r) + \sin^{-1} b_d. \tag{13b}
\end{equation}
Equations (5b, 13b) reduce to $b_{P,K_r^P} > b_d$.

Corollary 2 (Weak condition for reduce-then-design stability): If $P_r$ is an approximation of $P$ that satisfies $\|P - P_r\|_\infty \leq \beta$ for some $\beta \in [0, 1)$, and $\delta_\nu(P, P_r) < 1$, then the condition
\begin{equation}
b_{P_r,K_r^P} > \beta \tag{14}
\end{equation}
is sufficient for the stability of $[P, K_r^P]$.

Proof: This follows directly from Theorem 4, with the choice $b_d = 0$.

Next, we derive a stronger bound using [1, Ch. 3.2], as before.

Theorem 5 (Strong condition for reduce-then-design stability): If $[P_r, K_r^P]$ is stable, $P_r$ is an approximation of $P$ that satisfies $\|P - P_r\|_\infty \leq \beta$ for some $\beta \in [0, 1)$, and $\eta(P) = \eta(P_r)$, then
\begin{equation}
\left\| (I - K_r^P P_r)^{-1} K_r^P \right\|_\infty^{-1} > \beta \tag{15}
\end{equation}
is sufficient for the stability of $[P, K_r^P]$.

Proof: We may modify the internal model control structure in [1, Ch. 3.2] to handle plant perturbations instead of controller perturbations. In this case, if $P, P_r \in \mathcal{RL}_\infty$, $[P_r, K_r^P]$ is stable, $\eta(P) = \eta(P_r)$, and $\|(I - K_r^P P_r)^{-1} K_r^P (P - P_r)\|_\infty < 1$, then $[P, K_r^P]$ is stable. From the stated conditions and (15),
\begin{align}
1 &> \left\| (I - K_r^P P_r)^{-1} K_r^P \right\|_\infty \|P - P_r\|_\infty \tag{16a} \\
  &\geq \left\| (I - K_r^P P_r)^{-1} K_r^P (P - P_r) \right\|_\infty; \tag{16b}
\end{align}
therefore, $[P, K_r^P]$ must be stable.

This condition is stronger than Corollary 2, because it can be shown using norm properties that $\|(I - K_r^P P_r)^{-1} K_r^P\|_\infty^{-1} \geq b_{P_r,K_r^P}$. Again, the difference between the two sides of this inequality may not be large in practice; see Fig. 6 for an example comparison.

Remark 1: Practically, the parameters $\alpha$ and $\beta$ are the analytic upper bounds on the model reduction error's $\mathcal{H}_\infty$ norm. For instance, if $K_r^K$ is the order $r$ balanced truncation of $K$, then Theorem 3 guarantees the stability of $[P, K_r^K]$ when $\|P(I - KP)^{-1}\|_\infty^{-1} > 2 \sum_{k=r+1}^{n} \sigma_k(K)$. Alternatively, if $P_r$ is the order $r$ balanced truncation of $P$, then Theorem 5 guarantees the stability of $[P, K_r^P]$ when $\|(I - K_r^P P_r)^{-1} K_r^P\|_\infty^{-1} > 2 \sum_{k=r}^{n} \sigma_k(P)$.

Remark 2: The conditions that $\eta(K) = \eta(K_r^K)$ (Theorem 3) and $\eta(P) = \eta(P_r)$ (Theorem 5) are often satisfied in practice. For instance, when performing balanced reduction or optimal Hankel norm reduction on an unstable system $G$, we perform the decomposition $G = G_s + G_a$, with $G_s$ stable and $G_a$ anti-stable, and reduce only $G_s$. See, for example, the reduce-then-design results of Section IV.

IV. EXAMPLE: LINEARIZED GINZBURG-LANDAU CONTROL

The Ginzburg-Landau equation is a model for fluid flow perturbations, among many other applications. For a detailed discussion, see [21]–[23]. Reference [24] provides a comprehensive review of Ginzburg-Landau control and model reduction, using conventional linear controllers designed from finite-dimensional numerical approximations of the governing PDE. This is the approach we take in the present paper. An alternative approach retains the infinite-dimensional PDE, but is restricted to boundary control; see [25]–[27]
and references within, or [28, Ch. 6] for a summary of the technique. Our goal here is not to compare these control design approaches, but rather, to illustrate how the techniques of the previous section may be used to provide guaranteed bounds for reduced-order controllers.

The linearized system and control setup that we describe here are from [29], which contains a more complete description. Given a time parameter $t$, a spatial coordinate $x$ (typically the fluid flow direction), and a complex parameter $q(x, t)$ that represents a velocity or streamfunction perturbation, the linearized Ginzburg-Landau equation is
\begin{equation}
\frac{\partial q}{\partial t} = \mathcal{L} q, \tag{17a}
\end{equation}
where
\begin{equation}
\mathcal{L} \triangleq \mu(x) - \nu \frac{\partial}{\partial x} + \gamma \frac{\partial^2}{\partial x^2}. \tag{17b}
\end{equation}
With an infinite spatial domain $x \in (-\infty, \infty)$, the boundary conditions are $q(\pm\infty, t) = 0$ [22], [23].

The Ginzburg-Landau equation is a convective-diffusive model with amplification. The amplification function we choose is $\mu(x) = 0.37 - 0.005 x^2$. The complex advection speed is $\nu = 2.0 + 0.4j$, and the complex diffusion parameter is $\gamma = 1.0 - 1.0j$. With this choice of parameters, $\mathcal{L}$ has one unstable eigenvalue, at $0.0123 - 0.648j$.

To implement output feedback control of the linearized Ginzburg-Landau equation, we use single-input, single-output control with one localized actuator at $x = x_a$ and one localized sensor at $x = x_s$. For a scalar actuation signal $u(t)$ and a scalar sensor signal $y(t)$, the continuous-space state space is
\begin{align}
\frac{\partial q}{\partial t}(x, t) &= \mathcal{L} q(x, t) + \exp\left( -\frac{(x - x_a)^2}{2\sigma^2} \right) u(t) \tag{18a} \\
y(t) &= \left\langle q(x, t),\, \exp\left( -\frac{(x - x_s)^2}{2\sigma^2} \right) \right\rangle, \tag{18b}
\end{align}
where $\langle f_1(x), f_2(x) \rangle \triangleq \int_{-\infty}^{\infty} \bar{f}_2(x) f_1(x)\, dx$ is a spatial inner product. For this study, we choose $\sigma = 0.4$.

To limit the Ginzburg-Landau model to a finite dimension, we employ a Hermite polynomial pseudospectral method. The software described in [30] computes the domain discretization $\{x_1, \ldots, x_N\}$ and the derivatives. We choose $N = 100$ grid points; in [29], we showed that this discretization is sufficiently accurate for implementing LTI methods. The domain reaches from $x_1 = -56.1$ to $x_N = 56.1$; the region of amplification, where $\mu(x) > 0$, is $-8.6 < x < 8.6$.

With this discretization scheme, we define the state vector $\xi(t) \triangleq \begin{bmatrix} q(x_1, t) & \cdots & q(x_N, t) \end{bmatrix}^T$, and we denote the pseudospectral discretization of $\mathcal{L}$ by $A$. The actuation matrix $B$ is the discretization of $\exp(-(x - x_a)^2 / (2\sigma^2))$. The sensing matrix $C$ is trickier to compute, because of an implied spatial integration on an uneven grid. Define the trapezoidal integration matrix
\begin{equation}
H \triangleq \tfrac{1}{2} \operatorname{diag}(x_2 - x_1,\; x_3 - x_1,\; \cdots,\; x_{j+1} - x_{j-1},\; \cdots,\; x_N - x_{N-2},\; x_N - x_{N-1}). \tag{19}
\end{equation}
With this weight, the continuous-space inner product $\langle q_1, q_2 \rangle$ has the discretization $\langle \xi_1, \xi_2 \rangle = \xi_2^* H \xi_1$. Thus, if $f_s$ is the discretization of $\exp(-(x - x_s)^2 / (2\sigma^2))$, then $C = f_s^* H$. Altogether, we define the Ginzburg-Landau plant
\begin{equation}
P:\quad \begin{bmatrix} \dot{\xi} \\ y \end{bmatrix} = \begin{bmatrix} A & B \\ C & 0 \end{bmatrix} \begin{bmatrix} \xi \\ u \end{bmatrix}. \tag{20}
\end{equation}
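For concreteness, the sketch below (ours, not code from [29] or [30]) assembles the discretized plant (20), assuming the Hermite grid $\{x_1, \ldots, x_N\}$ and the first- and second-derivative matrices are already available (e.g., computed as in [30]); the function and parameter names are illustrative only.

```python
import numpy as np

def ginzburg_landau_plant(x, D1, D2, nu, gamma, mu0, mu2, x_a, x_s, sigma):
    """Assemble (A, B, C) of (20) from a pseudospectral grid x and derivative
    matrices D1, D2 (assumed to come from a Hermite method as in [30])."""
    N = x.size
    mu = mu0 - mu2 * x**2                            # amplification mu(x)
    A = np.diag(mu) - nu * D1 + gamma * D2           # discretization of L, (17b)
    B = np.exp(-(x - x_a)**2 / (2 * sigma**2)).reshape(N, 1)   # actuator shape

    # Trapezoidal integration weights (19), then C = f_s^* H.
    h = np.zeros(N)
    h[0], h[-1] = x[1] - x[0], x[-1] - x[-2]
    h[1:-1] = x[2:] - x[:-2]
    H = 0.5 * np.diag(h)
    f_s = np.exp(-(x - x_s)**2 / (2 * sigma**2))
    C = (np.conj(f_s) @ H).reshape(1, N)
    return A, B, C

# Usage with the parameter values quoted in the text (grid and derivative
# matrices x, D1, D2 assumed precomputed):
# A, B, C = ginzburg_landau_plant(x, D1, D2, nu=2.0+0.4j, gamma=1.0-1.0j,
#                                 mu0=0.37, mu2=0.005, x_a=-1.0, x_s=1.0, sigma=0.4)
```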
We implement a nearly optimal $\mathcal{H}_\infty$ loop-shaping controller, using the negative-feedback $\mathcal{H}_2$ optimal controller as the plant weight $w$. As in our previous work [29], we choose the state cost matrix $Q = 49H$ and the input cost matrix $R = 1$ for the $\mathcal{H}_2$ optimal controller. In addition, we choose the state disturbance covariance $W = H$, corresponding to a white noise uncorrelated and evenly distributed in time and space, and we choose the sensor noise covariance $V = 4 \cdot 10^{-8}$. These matrices, along with $P$, determine the $\mathcal{H}_2$ optimal controller for a given actuator and sensor location [2, Ch. 9.2].

We place the actuator and sensor at the $\mathcal{H}_2$ optimal placement, that is, the placement which minimizes the $\mathcal{H}_2$ norm from state disturbances and sensor noise to the cost on the state and input sizes, when $P$ is in closed loop with the $\mathcal{H}_2$ optimal controller. (See [29] for complete details.) This occurs at $x_a = -1.0$ and $x_s = 1.0$. Setting the weight $w$ equal to the negative-feedback $\mathcal{H}_2$ optimal controller at this placement, we denote the weighted plant by $P_w = P w$. Fig. 3 shows the Bode plots of $P$ and $P_w$. From a classical control standpoint, we see that the weight increases the loop gain at low frequencies, where we require disturbance rejection, and decreases the loop gain at high frequencies, where we require noise attenuation. Furthermore, the weighted plant has ample stability margins. Its Nyquist plot encircles the $-1$ point once, since it has one unstable pole, and it comes no closer than $0.61$ units to the $-1$ point.
Fig. 3. Bode plots of the linearized complex Ginzburg-Landau input-output dynamics $P$ and the weighted dynamics $P_w$.
The true $\mathcal{H}_\infty$ loop-shaping controller is the controller $K_{\mathrm{opt}} \in \mathcal{R}$ such that $b_{P_w,K_{\mathrm{opt}}} = \sup_{K \in \mathcal{R}} b_{P_w,K}$. To simplify the computation of the controller, we first compute $\sup_{K \in \mathcal{R}} b_{P_w,K}$; then, we assemble a suboptimal controller $K$, where the target value of $b_{P_w,K}$ is $1/(1 + 10^{-5})$ times the optimal value (see [2, Ch. 9.4.1]).

In this example, we use balanced truncation to approximate the plant and controller. Therefore, we apply Theorems 2 and 3 with $\alpha = 2 \sum_{k=r+1}^{n} \sigma_k(K)$. In the reduce-then-design case (Theorems 4 and 5), the computation is more complicated because $P_w$ has one right-half-plane pole. Therefore, we split $P_w$ into two additive parts, with one stable and one anti-stable; we only reduce the stable part (see Remark 2). As a result, the lowest reduction order is $r = 1$. Furthermore, since $\sigma_k(P_w)$ only includes the Hankel singular values of the stable part, we use $\beta = 2 \sum_{k=r}^{n} \sigma_k(P_w)$.

In the design-then-reduce case, we compute $b_{P_w,K} = 5.10 \cdot 10^{-1}$ and $\|P_w(I - K P_w)^{-1}\|_\infty^{-1} = 5.91 \cdot 10^{-1}$. Fig. 4 depicts this, along with the $\nu$-gap metric and the error upper bound $\alpha$. From this figure, we observe that Theorem 3 guarantees the stability of $[P_w, K_r^K]$ when $r \geq 2$. If we directly apply the stability margin and $\nu$-gap relation in (5d), then we also derive $r \geq 2$. In reality, $b_{P_w,K_r^K} > 0$, and hence $[P_w, K_r^K]$ is stable, when $r \geq 1$. If the desired minimal performance is $b_d = 0.4$, then Theorem 2 guarantees this performance level when $r \geq 4$; see Fig. 5.
Fig. 4. Robust stability quantities as the balanced truncation order $r$ of the $\mathcal{H}_\infty$ loop-shaping controller $K$ varies. Dashed line: $\|P_w(I - K P_w)^{-1}\|_\infty^{-1}$ (constant); solid line: $b_{P_w,K}$ (constant); $\circ$: $\alpha = 2 \sum_{k=r+1}^{n} \sigma_k(K)$; $\times$: $\delta_\nu(K, K_r^K)$.
Fig. 6. Robust stability quantities as the balanced truncation order $r$ of the weighted plant $P_w$ varies. Dashed line: $\|(I - K_r^P P_{wr})^{-1} K_r^P\|_\infty^{-1}$, where $P_{wr}$ is the order $r$ balanced truncation of $P_w$; solid line: $b_{P_{wr},K_r^P}$; $\circ$: $\beta = 2 \sum_{k=r}^{n} \sigma_k(P_w)$; $\times$: $\delta_\nu(P_w, P_{wr})$.
Fig. 5. Robust performance quantities as the balanced truncation order $r$ of the $\mathcal{H}_\infty$ loop-shaping controller $K$ varies; here, $b_d = 0.4$. Solid line: $\sin^{-1} b_{P_w,K}$ (constant); $\circ$: $\sin^{-1} \alpha + \sin^{-1} b_d$; $\times$: $\sin^{-1} \delta_\nu(K, K_r^K) + \sin^{-1} b_d$.

Fig. 7. Robust performance quantities as the balanced truncation order $r$ of the weighted plant $P_w$ varies; here, $b_d = 0.4$. Solid line: $\sin^{-1} b_{P_{wr},K_r^P}$; $\circ$: $\sin^{-1} \beta + \sin^{-1} b_d$; $\times$: $\sin^{-1} \delta_\nu(P_w, P_{wr}) + \sin^{-1} b_d$.
The direct application of the $\nu$-gap metric in (8b) yields the better guarantee $r \geq 3$. In reality, $b_{P_w,K_r^K} > b_d$ when $r \geq 3$.

Fig. 6 plots stability parameters for reduce-then-design control, along with the error upper bound $\beta$. Theorem 5 guarantees the stability of $[P_w, K_r^P]$ when $r \geq 4$. Directly applying the stability margin and $\nu$-gap relation in (5b) gives the better guarantee $r \geq 3$. In reality, $b_{P_w,K_r^P} > 0$, and hence $[P_w, K_r^P]$ is stable, when $r \geq 3$. Keeping the minimal performance specification $b_d = 0.4$, Theorem 4 and the direct application of the $\nu$-gap metric in (13b) both guarantee this performance level when $r \geq 5$; see Fig. 7. This bound is actually tight; we verify that in reality, the closed loop indeed meets this performance specification only for $r \geq 5$.

Finally, we remark briefly that for $r \leq 13$, design-then-reduce yields a stability margin $b_{P,K_r^K}$ greater than the reduce-then-design stability margin $b_{P,K_r^P}$. At $r > 13$, the difference between these margins is negligible. This result is consistent with [1], which argues that effective control is best preserved by delaying the approximation step (i.e., the model reduction) as late in the design process as possible. Nonetheless, we must keep in mind that the design-then-reduce procedure is generally more computationally expensive than the reduce-then-design procedure, especially for large systems.

V. CONCLUSION

In the application of control theory to large systems, it is typically necessary to use a reduced-order controller $K_r$ in closed loop with the full-order plant $P$. This study presents conditions that are sufficient for guaranteeing the stability or required performance of $[P, K_r]$, whether $K_r$ is a reduced-order model of a full-order controller $K$, or a controller designed from a reduced-order plant $P_r$. The conditions do not place restrictions on the control design or
model reduction, other than that the model reduction must have an upper bound on the $\mathcal{H}_\infty$ norm of the error. Theorems 3 and 5 state the stability conditions, and Theorems 2 and 4 state the performance conditions. The analytic bounds on the normalized coprime stability margin, the $\nu$-gap metric, and model reduction error motivate the performance results. Although the performance guarantees yield analogous stability guarantees, we find stronger bounds by employing the internal-model-control-like structure in [1, Ch. 3.2].

A number of model reduction methods have known analytic upper bounds on the $\mathcal{H}_\infty$ norm of the reduction error (e.g., modal truncation, balanced truncation, and optimal Hankel norm reduction). This $\mathcal{H}_\infty$ norm, by itself, is not the best measure of robust stability; for robust stability, we only require that the model reduction be accurate near the crossover frequency. Therefore, many model reduction methods, particularly those that attempt to minimize the $\mathcal{H}_\infty$ norm of the error, will tend to fit the reduced model to the original model over a larger frequency range than is necessary for robust stability. Nevertheless, if the $\mathcal{H}_\infty$ norm of the model reduction error is bounded from above, then the $\nu$-gap metric shares the same upper bound (see Theorem 1). If the normalized coprime stability margin between the full-order plant and controller, or between the reduced-order plant and controller, is greater than the bound on the $\nu$-gap metric, then the full-order plant will provably be stable in closed loop with the reduced-order controller. We may conclude this without actually testing the closed-loop system. Furthermore, the bounds indicate how accurate a model approximation needs to be, in the sense of the error's $\mathcal{H}_\infty$ norm, to be sufficient for guaranteeing normalized coprime robust stability or performance. This aids the control designer in choosing a model reduction order.

We demonstrate these sufficient conditions on the control of the
linearized supercritical Ginzburg-Landau equation. For both the design-then-reduce and the reduce-then-design approaches, the conditions correctly guarantee values of the controller order $r$ for which $[P, K_r]$ is stable. Given a desired performance level, they also correctly guarantee the values of $r$ for which $[P, K_r]$ meets that performance level. There generally exist lower values of $r$ for which $[P, K_r]$ is stable or has satisfactory performance, but the theorems are a priori unable to guarantee such. Nonetheless, the bounds provided by the theorems and corollaries are fairly tight in this example.

These results are directly implementable in the control of large systems. For instance, the BPOD and ERA techniques are applicable to computer simulations of a linearized fluid flow, yielding approximations of the linearized dynamics' balanced truncation and Hankel singular values [4], [31]. Theorem 5 can then predict the plant and controller reduction orders for which the reduced-order controller is guaranteed to stabilize the flow. Additionally, Theorem 4 can predict the orders for which the closed-loop system is guaranteed to achieve a desired performance level. This would be beneficial for the control designer, since fluid flows, and indeed many other real-life systems, can be remarkably difficult to stabilize and control well.

REFERENCES

[1] G. Obinata and B. D. O. Anderson, Model Reduction for Control System Design. London, U.K.: Springer, 2001.
[2] S. Skogestad and I. Postlethwaite, Multivariable Feedback Control: Analysis and Design, 2nd ed. West Sussex, U.K.: John Wiley & Sons Ltd, 2005.
[3] P. Holmes, J. L. Lumley, and G. Berkooz, Turbulence, Coherent Structures, Dynamical Systems and Symmetry. Cambridge, U.K.: Cambridge University Press, 1996.
[4] C. W. Rowley, "Model reduction for fluids, using balanced proper orthogonal decomposition," Int. J. Bifurc. Chaos, vol. 15, no. 3, pp. 997–1013, Mar. 2005.
[5] J.-N. Juang and R. S. Pappa, "An eigensystem realization algorithm for modal parameter identification and model reduction," J. Guid. Contr. Dyn., vol. 8, no. 5, pp. 620–627, 1985.
[6] T. Kailath, "Some new algorithms for recursive estimation in constant linear systems," IEEE Trans. Inf. Theory, vol. 19, no. 6, pp. 750–760, Nov. 1973.
[7] T. R. Bewley, P. Luchini, and J. O. Pralits, "Methods for the solution of large optimal control problems that bypass open-loop model reduction," submitted for publication.
[8] P. J. Goddard and K. Glover, "Controller approximation: Approaches for preserving $\mathcal{H}_\infty$ performance," IEEE Trans. Autom. Control, vol. 43, no. 7, pp. 858–871, Jul. 1998.
[9] H. Gao, J. Lam, and C. Wang, "Controller reduction with $\mathcal{H}_\infty$ error performance: Continuous- and discrete-time cases," Int. J. Control, vol. 79, no. 6, pp. 604–616, Jun. 2006.
[10] J. A. Burns and B. B. King, "A reduced basis approach to the design of low-order feedback controllers for nonlinear continuous systems," J. Vib. Control, vol. 4, no. 3, pp. 297–323, May 1998.
[11] B. B. King and E. W. Sachs, "Semidefinite programming techniques for reduced order systems with guaranteed stability margins," Comput. Optim. Appl., vol. 17, no. 1, pp. 37–59, Oct. 2000.
[12] J. A. Atwell and B. B. King, "Reduced order controllers for spatially distributed systems via proper orthogonal decomposition," SIAM J. Sci. Comput., vol. 26, no. 1, pp. 128–151, 2004.
[13] B. A. Batten and K. A. Evans, "Reduced-order compensators via balancing and central control design for a structural control problem," Int. J. Control, vol. 83, no. 3, pp. 563–574, Mar. 2010.
[14] K. Zhou and J. Chen, "Performance bounds for coprime factor controller reductions," Syst. Control Lett., vol. 26, no. 2, pp. 119–127, Sep. 1995.
[15] J.-Z. Wang and L. Huang, "Controller order reduction with guaranteed performance via coprime factorization," Int. J. Robust Nonlin. Control, vol. 13, no. 6, pp. 501–517, May 2003.
[16] V. R. Dehkordi and B. Boulet, "Robust controller order reduction," Int. J. Control, vol. 84, no. 5, pp. 985–997, May 2011.
[17] G. Vinnicombe, "Frequency domain uncertainty and the graph topology," IEEE Trans. Autom. Control, vol. 38, no. 9, pp. 1371–1383, Sep. 1993.
[18] ——, Uncertainty and Feedback: $\mathcal{H}_\infty$ Loop-shaping and the $\nu$-gap Metric. London, U.K.: Imperial College Press, 2001.
[19] K. Zhou and J. C. Doyle, Essentials of Robust Control. Upper Saddle River, NJ, U.S.A.: Prentice Hall, 1998.
[20] B. L. Jones and E. C. Kerrigan, "When is the discretization of a spatially distributed system good enough for control?" Automatica, vol. 46, no. 9, pp. 1462–1468, Sep. 2010.
[21] J.-M. Chomaz, P. Huerre, and L. G. Redekopp, "Bifurcations to local and global modes in spatially developing flows," Phys. Rev. Lett., vol. 60, no. 1, pp. 25–28, Jan. 1988.
[22] C. Cossu and J.-M. Chomaz, "Global measures of local convective instabilities," Phys. Rev. Lett., vol. 78, no. 23, pp. 4387–4390, Jun. 1997.
[23] J.-M. Chomaz, "Global instabilities in spatially developing flows: non-normality and nonlinearity," Annu. Rev. Fluid Mech., vol. 37, pp. 357–392, 2005.
[24] S. Bagheri, D. S. Henningson, J. Hœpffner, and P. J. Schmid, "Input-output analysis and control design applied to a linear model of spatially developing flows," Appl. Mech. Rev., vol. 62, no. 2, p. 020803, Mar. 2009.
[25] O. M. Aamo, A. Smyshlyaev, and M. Krstić, "Boundary control of the linearized Ginzburg-Landau model of vortex shedding," SIAM J. Control Optim., vol. 43, no. 6, pp. 1953–1971, 2005.
[26] O. M. Aamo, A. Smyshlyaev, M. Krstić, and B. A. Foss, "Output feedback boundary control of a Ginzburg-Landau model of vortex shedding," IEEE Trans. Autom. Control, vol. 52, no. 4, pp. 742–748, Apr. 2007.
[27] M. Milovanovic and O. M. Aamo, "Stabilization of 3D Ginzburg-Landau equation by model-based output feedback control," in 18th IFAC World Congress. Milano, Italy: IFAC, Aug.–Sep. 2011, pp. 14 435–14 439.
[28] A. Smyshlyaev and M. Krstić, Adaptive Control of Parabolic PDEs. Princeton, NJ, USA: Princeton University Press, 2010.
[29] K. K. Chen and C. W. Rowley, "$\mathcal{H}_2$ optimal actuator and sensor placement in the linearised complex Ginzburg-Landau system," J. Fluid Mech., vol. 681, pp. 241–260, Jul. 2011.
[30] J. A. C. Weideman and S. C. Reddy, "A MATLAB differentiation matrix suite," ACM Trans. Math. Softw., vol. 26, no. 4, pp. 465–519, Dec. 2000.
[31] Z. Ma, S. Ahuja, and C. W. Rowley, "Reduced-order models for control of fluids using the eigensystem realization algorithm," Theor. Comput. Fluid Dyn., vol. 25, no. 1–4, pp. 233–247, Jun. 2011.
Kevin K. Chen received a B.S. in engineering and applied science, with a focus in aeronautics, from the California Institute of Technology in 2009. In 2011, he received an M.A. in mechanical and aerospace engineering from Princeton University. He is currently a Ph.D. candidate in mechanical and aerospace engineering at Princeton University. His primary research interest is in fluid flow control, with emphases in data-based modal decomposition and the placement of actuators and sensors in flows. Mr. Chen is a member of the American Physical Society, in the Division of Fluid Dynamics.
Clarence W. Rowley received the B.S.E. degree in mechanical and aerospace engineering from Princeton University, Princeton, NJ, in 1995, and the M.S. and Ph.D. degrees in mechanical engineering from the California Institute of Technology, Pasadena, in 1996 and 2001, respectively. He joined the faculty of Princeton University in 2001, and is currently a Professor of mechanical and aerospace engineering, and an affiliated Faculty Member with the Program in Applied and Computational Mathematics. His research interests include modeling and control of high-dimensional systems such as fluid flows. Prof. Rowley is the recipient of the National Science Foundation CAREER award and the Air Force Office of Scientific Research (AFOSR) Young Investigator Award.