IFAC Conference on Advances in PID Control PID'12 Brescia (Italy), March 28-30, 2012

ThPS.9

Closed Form Expressions of Linear Discrete Time System Responses with an Application to PID Controllers

Sven Þ. Sigurðsson ∗   Anna Soffía Hauksdóttir ∗∗

∗ Department of Computer Science, University of Iceland (e-mail: [email protected]). ∗∗ Department of Electrical and Computer Engineering, University of Iceland (e-mail: [email protected]).

Abstract: General closed-form expressions of linear discrete-time system responses of arbitrary order are presented without proof. The system poles can be real and/or complex and may be repeated. While these expressions are readily computed from the system in standard forms, they are based on a backward difference formulation, shown to offer some important simplifications and a closer analogy with the continuous case. Expressions are also derived for Lyapunov (Sylvester) equations, whose solution is the corresponding (cross) Grammian matrix, thus allowing its evaluation without direct reference to the poles of the system. Finally, an example is presented of how these expressions may be utilized to obtain an expression for zero-optimized open-loop PID coefficients.

1. INTRODUCTION

Closed-form transfer functions for discrete-time systems are of considerable interest in the areas of control systems and filter design, see, e.g., Gajić [2003], Brogan [1991], Oppenheim et al. [1997], Feuer et al. [1996] and Goodwin et al. [2001]. In many cases it is beneficial to carry out the entire procedure, from system identification through controller design and implementation, in discrete time, even though the actual process to be controlled may be continuous by nature.

In this paper we present, without proof due to space limitations, general closed-form expressions of linear discrete-time system responses of arbitrary order. While these expressions are readily computed from the system in standard forms, they are based on a backward difference formulation, shown to offer some important simplifications. Our approach follows closely the approach for obtaining closed-form expressions for linear continuous-time responses presented, e.g., in Herjólfsson et al. [2006] and Hauksdóttir et al. [2007]. By working with backward differences we obtain a closer analogy with the continuous case than in some earlier work on discrete-time systems; see, e.g., Herjólfsson et al. [2004] for earlier work on PID controllers in discrete time.

The discrete-time responses are presented in Section 2. The calculation of Grammian matrices is discussed in Section 3. One application is the computation of PID controllers by minimizing impulse, step, ramp, etc., open-loop response deviations from a reference response, effectively representing design requirements. This is dealt with in Section 4, including examples.

2. DISCRETE-TIME SYSTEM RESPONSES

Consider the n-th order discrete-time difference equation

∑_{i=0}^{n} a_{−i} y[k−i] = ∑_{i=0}^{m} b_{−i} u[k−i],   a_0 = 1,   k ≥ 0,   (2.1)

with the initial conditions y[k] = 0, k = −1, −2, ..., −n, corresponding to the transfer function

Y(z)/U(z) = (b_0 + b_{−1} z^{−1} + ··· + b_{−m} z^{−m}) / (1 + a_{−1} z^{−1} + ··· + a_{−n} z^{−n})
          = z^{n−m} (b_0 z^m + b_{−1} z^{m−1} + ··· + b_{−m}) / (z^n + a_{−1} z^{n−1} + ··· + a_{−n}).   (2.2)
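As a concrete reading of (2.1), a direct simulation loop produces y[k] from zero initial conditions. The following is a minimal Python sketch; the function name and signature are our own, not part of the paper:

```python
def simulate(a, b, u, N):
    # Direct recursion for (2.1):
    # a0*y[k] = -sum_{i>=1} a_{-i} y[k-i] + sum_i b_{-i} u(k-i),
    # with y[k] = 0 for k < 0 (zero initial conditions).
    y = [0.0] * N
    for k in range(N):
        acc = sum(b[i] * u(k - i) for i in range(len(b)) if k - i >= 0)
        acc -= sum(a[i] * y[k - i] for i in range(1, len(a)) if k - i >= 0)
        y[k] = acc / a[0]
    return y

# First order example: y[k] - 0.5 y[k-1] = u[k] with a unit step input
ys = simulate([1.0, -0.5], [1.0], lambda k: 1.0, 5)
```

Here `a[i]` holds a_{−i} and `b[i]` holds b_{−i}, so the coefficient ordering matches (2.1) directly.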

Expressing this equation in backward difference form we have

∑_{i=0}^{n} α_{−i} ∇^i y[k] = ∑_{i=0}^{m} β_{−i} ∇^i u[k],   (2.3)

where ∇y[k] = y[k] − y[k−1],

A_n^T = [α_0 α_{−1} ··· α_{−n}] = [a_0 a_{−1} ··· a_{−n}] P_{n+1},   (2.4)
B_m^T = [β_0 β_{−1} ··· β_{−m}] = [b_0 b_{−1} ··· b_{−m}] P_{m+1},   (2.5)

and P_k denotes a k×k Pascal matrix, whose (p+1, q+1)-th element is (−1)^q C(p,q), with C(p,q) = 0 if q > p. Equivalently, we are replacing the transfer function (2.2) with

(β_0 + β_{−1} ẑ + ··· + β_{−m} ẑ^m) / (α_0 + α_{−1} ẑ + ··· + α_{−n} ẑ^n),   (2.6)

where

ẑ = (z − 1)/z.   (2.7)

We are interested in having a closed formula for the solution to such an equation when u[k] is a forcing function of a given order, γ, denoted by I_γ[k]. We choose to define
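The transformation (2.4) is a plain matrix product with the Pascal matrix. The sketch below (helper names ours) reproduces the a-to-α conversion that appears later in Example 1:

```python
import numpy as np
from math import comb

def pascal(k):
    # Pascal matrix P_k: (p+1, q+1)-th element is (-1)^q C(p, q), zero when q > p
    return np.array([[(-1) ** q * comb(p, q) for q in range(k)]
                     for p in range(k)], float)

# a-coefficients of the third order system of Example 1 (a0 = 1)
a = np.array([1.0, -1.9, 1.18, -0.24])
alpha = a @ pascal(4)          # (2.4): A_n^T = [a0 ... a_-n] P_{n+1}
```

The involution property P_k P_k = I (used in Corollary 1 as P_n^{−1} = P_n) also follows directly from this construction.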


I_0[k] = δ[k] and

I_γ[k] = C(k+γ−1, γ−1) H[k],   γ ≥ 1,   (2.8)

rather than defining it as

I_γ[k] = k^{γ−1}/(γ−1)! H[k],   γ ≥ 1,   (2.9)

as is more common. Here δ[k] denotes the discrete-time unit impulse and H[k] denotes the Heaviside unit step function. The underlying reason for adopting the choice (2.8), rather than (2.9), is the result of the following lemma.

Lemma 1.

∇I_{γ+1}[k] = I_γ[k],   (2.10)

I_{γ+1}[k] = ∑_{i=0}^{k} I_γ[i],   γ ≥ 0, k ≥ 0.   (2.11)

Proof: We have from (2.8) and Pascal's identity that when γ ≥ 1, k ≥ 0,

∇I_{γ+1}[k] = C(k+γ, γ) H[k] − C(k+γ−1, γ) H[k−1] = C(k+γ−1, γ−1) H[k] = I_γ[k].   (2.12)

It follows from this result that

I_{γ+1}[k] = I_{γ+1}[0] + ∑_{i=1}^{k} ∇I_{γ+1}[i] = 1 + ∑_{i=1}^{k} I_γ[i] = ∑_{i=0}^{k} I_γ[i].   (2.13)  □

Furthermore we have the following result:

Theorem 1. Let y[k] denote the solution to (2.1) when u[k] = p_m[k] H[k] and p_m[k] is a general polynomial of degree m in k. Let y_γ[k] denote the solution to (2.1) when u[k] = I_γ[k], γ ≥ 1. Then

y[k] = ∑_{γ=0}^{m} (∇^γ p_m[−1]) y_{γ+1}[k].   (2.14)

Proof: By Newton's backward difference interpolation formula, p_m[k] is exactly expressed by

p_m[k] = ∑_{γ=0}^{m} (∇^γ p_m[−1]) C(k+γ, γ),   (2.15)

choosing the interpolation points as k = −1, −2, ..., −m−1. This follows from the fact that there is a unique polynomial of degree m that satisfies such m+1 conditions. The result follows directly from (2.8) and (2.15).  □

We can, in particular, make use of (2.14) if we wish to obtain a solution when the forcing function u[k] is of the form (2.9). Denoting the solution when u[k] = k^{γ−1}/(γ−1)! H[k] by ŷ_γ[k], we thus get

ŷ_2[k] = −y_1[k] + y_2[k],
ŷ_3[k] = (1/2) y_1[k] − (3/2) y_2[k] + y_3[k],
ŷ_4[k] = −(1/6) y_1[k] + (7/6) y_2[k] − 2 y_3[k] + y_4[k],  etc.   (2.16)

Before stating the main result of this section, we introduce some notation. Let λ_1, λ_2, ..., λ_ν denote the poles of the transfer function

1/a(z) = 1/(z^n + a_{−1} z^{n−1} + ··· + a_{−n}),   (2.17)

repeated d_1, d_2, ..., d_ν times, respectively, and let κ_{ij} denote the basic partial fraction coefficients given by the standard formula

κ_{ij} = (1/(d_i−j)!) [d^{(d_i−j)}/dz^{(d_i−j)} ((z − λ_i)^{d_i}/a(z))]_{z=λ_i},   j = 1, 2, ..., d_i,  i = 1, 2, ..., ν.   (2.18)

Thus,

1/a(z) = 1/((z−λ_1)^{d_1} (z−λ_2)^{d_2} ··· (z−λ_ν)^{d_ν}) = ∑_{i=1}^{ν} ∑_{j=1}^{d_i} κ_{ij}/(z−λ_i)^j.   (2.19)

Next we introduce the n×n Jordan matrix

J = [ J_1 0 ··· 0
      0 J_2 ··· 0
      ...  ...  ...
      0 ··· 0 J_ν ]   (2.20)

with the diagonal blocks

J_i = [ λ_i 0 ··· 0
        1 λ_i ··· 0
        ...  ...
        0 ··· 1 λ_i ],   (2.21)

each a d_i × d_i matrix with λ_i on the diagonal and ones on the subdiagonal, as well as the matrix

J̃ = I − J^{−1} = J^{−1}(J − I),   (2.22)

where I denotes the n×n unit matrix and assuming, for the time being, that λ_i ≠ 0, i = 1, 2, ..., ν. Then introduce the n-vector

κ = [κ_{11} ··· κ_{1d_1} ··· κ_{ν1} ··· κ_{νd_ν}]^T   (2.23)

and the n × (m+1) matrices

K_{γ,m} = [ J̃^{−γ} κ   J̃^{1−γ} κ   ···   J̃^{m−γ} κ ]   (2.24)

and

K_{γ,m,n} = J^{n−1} K_{γ,m},   (2.25)

where we note that the latter matrix is well defined for n + γ ≥ m + 1, even if λ_i = 0 for some i = 1, 2, ..., ν. For a given vector c_γ = [c_γ, c_{γ−1}, ..., c_1]^T, denote by D_{c_γ} the following γ×γ upper triangular Hankel matrix

D_{c_γ} = [ c_γ     c_{γ−1} ··· ··· c_1
            c_{γ−1} c_{γ−2} ··· c_1 0
            ...                 ...
            c_1     0   ··· ···     0 ].   (2.26)
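Lemma 1 and the coefficients in (2.16) are easy to check numerically. The sketch below (helper names ours) verifies (2.10)-(2.11) and recovers the ∇^γ p_m[−1] weights of Theorem 1 for p_m[k] = k³/6, i.e. the ŷ_4 case of (2.16):

```python
from math import comb
from fractions import Fraction

def I(gamma, k):
    # (2.8): I_0[k] = delta[k], I_gamma[k] = C(k+gamma-1, gamma-1) H[k]
    if k < 0:
        return 0
    return (1 if k == 0 else 0) if gamma == 0 else comb(k + gamma - 1, gamma - 1)

# Lemma 1: nabla I_{gamma+1} = I_gamma, and I_{gamma+1} is the running sum of I_gamma
lemma_ok = all(
    I(g + 1, k) - I(g + 1, k - 1) == I(g, k)
    and I(g + 1, k) == sum(I(g, i) for i in range(k + 1))
    for g in range(4) for k in range(12)
)

def weights(p, m):
    # The coefficients (nabla^gamma p_m)[-1], gamma = 0..m, used in (2.14)
    return [sum((-1) ** i * comb(g, i) * p(Fraction(-1 - i)) for i in range(g + 1))
            for g in range(m + 1)]

w4 = weights(lambda k: k ** 3 / 6, 3)   # forcing (k^3/6) H[k]
```

The resulting weights are exactly the coefficients −1/6, 7/6, −2, 1 listed for ŷ_4 in (2.16).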

Finally, introduce the n-vector function

E[k] = [E_1[k]^T, E_2[k]^T, ..., E_ν[k]^T]^T,   (2.27)

where

E_i[k] = [ λ_i^k ;  (d/dλ) λ^k |_{λ=λ_i} ;  ··· ;  (1/(d_i−1)!) (d^{(d_i−1)}/dλ^{(d_i−1)}) λ^k |_{λ=λ_i} ]^T
       = [ λ_i^k ;  k λ_i^{k−1} ;  ··· ;  C(k, d_i−1) λ_i^{k−d_i+1} ]^T   (2.28)

and the γ-vector function

ρ_γ[k] = [ 1   (k+1)   C(k+2, 2)   ···   C(k+γ−1, γ−1) ]^T.   (2.29)

We can now state without proof:

Theorem 2. Denote by y_γ[k] the solution to the difference equation (2.1) with u[k] = I_γ[k], γ = 0, 1, .... Then

y_γ[k] = (K_{γ,m,n} B_m)^T E[k] − (D_{c_γ} B_{γ−1})^T ρ_γ[k]   (2.30)

with K_{γ,m,n}, B_m and D_{c_γ} defined as in (2.25), (2.5) and (2.26), respectively, setting β_{−i} = 0 if i > m, and where the coefficients c_j, j = 1, 2, ..., γ, in (2.26) are the solution of the following system

[ α_0      0        ··· ··· 0
  α_{−1}   α_0      0   ··· 0
  ...      ...          ...
  α_{−γ+1} α_{−γ+2} ··· ··· α_0 ] [ c_1 ; c_2 ; ... ; c_γ ] = − [ 1 ; 0 ; ... ; 0 ],   (2.31)

assuming that the system (2.1) has no poles equal to one.
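The lower triangular Toeplitz system (2.31) is solved by forward substitution; a sketch (helper name ours), using the α-coefficients that appear later in Example 1:

```python
import numpy as np

def c_coeffs(alpha, gamma):
    # Solve the lower triangular Toeplitz system (2.31) for c_1..c_gamma
    T = np.zeros((gamma, gamma))
    for i in range(gamma):
        for j in range(i + 1):
            T[i, j] = alpha[i - j]
    rhs = np.zeros(gamma)
    rhs[0] = -1.0
    return np.linalg.solve(T, rhs)

alpha = [0.04, 0.26, 0.46, 0.24]    # Example 1 values of alpha_0 .. alpha_-3
c = c_coeffs(alpha, 1)              # gamma = 1: alpha_0 c_1 = -1, so c_1 = -25
```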

Remark 1. Given the ν poles of (2.17), the n-vector K_{γ,m,n} B_m in (2.30) can be calculated in O(n²) operations. Firstly, the partial fraction coefficients (2.18) can be calculated in O(n²) operations. Secondly, the matrix K_{γ,m} in (2.24) can be calculated in O(mn) operations from the vector κ in (2.23), by recursively forming matrix-vector products of the form u = J̃v, by solving the system Ju = (J − I)v, and/or solving linear systems of the form J̃u = v, by solving the system (J − I)u = Jv, each requiring O(n) operations. Finally, the matrix-vector product J^{n−1}(K_{γ,m} B_m) requires O(n²) operations. The modifications required in the case of zero poles do not increase this total complexity. The calculation of the γ-vector D_{c_γ} B_{γ−1} in (2.30) requires a further O(nγ) operations (if γ ≤ n). When the system is stable, the calculation of α_{−i}, i = 0, 1, ..., γ−1, from the a-coefficients by (2.4) requires O(nγ) operations, and the solution of c_γ from (2.31) a further O(γ²) operations.
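The O(n) cost of the solves in Remark 1 comes from the bidiagonal structure of the Jordan blocks; for a single block, Ju = w is solved by forward substitution. A sketch with hypothetical data:

```python
import numpy as np

def solve_jordan(lam, w):
    # Solve J u = w for one Jordan block (lam on the diagonal, ones on the
    # subdiagonal, as in (2.21)) by forward substitution in O(d) operations.
    u = np.zeros(len(w))
    u[0] = w[0] / lam
    for i in range(1, len(w)):
        u[i] = (w[i] - u[i - 1]) / lam
    return u

lam, d = 0.5, 4                                     # hypothetical block
J = lam * np.eye(d) + np.diag(np.ones(d - 1), -1)
w = np.array([1.0, 2.0, 3.0, 4.0])
u = solve_jordan(lam, w)
```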

3. CALCULATION OF GRAMMIAN MATRICES

In this section we consider the calculation of Grammian matrices associated with stable equations of the form

∑_{i=0}^{n} a_{−i} y[k−i] = I_γ[k],   k ≥ 0,  γ = 0, 1, 2, ...,   (3.1)

with y[k] = 0, k = −1, −2, ..., −n, and I_γ[k] defined as in (2.8), referred to as the basic response of order γ. Let y_b^{(−γ)}[k] denote the transient part of this basic response. Then it follows from (2.30) that

y_b^{(−γ)}[k] = (J^{n+γ−1} (J − I)^{−γ} κ)^T E[k],   k ≥ 0,   (3.2)

for γ = 0, 1, 2, .... Now let y_b^{(γ)} denote the backward difference of order γ of the basic response of order zero, y_b[k], i.e.

y_b^{(γ)}[k] = ∇^γ y_b[k] = (J^{n−γ−1} (J − I)^γ κ)^T E[k],   k ≥ 0,   (3.3)

for γ = 1, 2, ..., noting that ∇E[k] = J^{−1}(J − I) E[k].

Remark 2. This symmetry between higher order differences and higher order transient basic responses, analogous to such a symmetry in the continuous case, with the backward differences being replaced by derivatives, is indeed the main motivation for working with backward differences.

Next let Ŷ_{γ,m} denote the m-vector function, m ≥ 0,

Ŷ_{γ,m}[k] = [ y_b^{(−γ)}[k]   y_b^{(−γ)}[k−1]   ···   y_b^{(−γ)}[k−m+1] ],   (3.4)

and let Y_{γ,m} denote the m-vector function

Y_{γ,m}[k] = [ y_b^{(−γ)}[k]   y_b^{(−γ+1)}[k]   ···   y_b^{(−γ+m−1)}[k] ],   (3.5)

related to Ŷ_{γ,m} by the Pascal matrix

Y_{γ,m}[k] = P_m Ŷ_{γ,m}[k].   (3.6)

Then let Ĝ_{γ,m1,m2} denote the m1 × m2 cross-Grammian matrix associated with the solutions of two separate equations of the form (3.1), identified by the subscripts 1 and 2, such that the (i, j)-th element of Ĝ_{γ,m1,m2} is given by

∑_{k=0}^{∞} y_{1,b}^{(−γ)}[k−i] y_{2,b}^{(−γ)}[k−j],   i = 0, 1, ..., m1−1,  j = 0, 1, ..., m2−1.   (3.7)

Similarly, let G_{γ,m1,m2} denote the corresponding cross-Grammian matrix based on backward differences, whose (i, j)-th element is

∑_{k=0}^{∞} y_{1,b}^{(−γ+i)}[k] y_{2,b}^{(−γ+j)}[k],   i = 0, 1, ..., m1−1,  j = 0, 1, ..., m2−1.   (3.8)

These Grammians depend on the coefficients a_{1,0}, a_{1,1}, ..., a_{1,n1−1}, a_{2,0}, a_{2,1}, ..., a_{2,n2−1}, although we do not denote that explicitly. Equivalently, we have

Ĝ_{γ,m1,m2} = ∑_{k=0}^{∞} Ŷ_{1,γ,m1}[k] Ŷ_{2,γ,m2}[k]^T,   (3.9)

G_{γ,m1,m2} = ∑_{k=0}^{∞} Y_{1,γ,m1}[k] Y_{2,γ,m2}[k]^T,   (3.10)

and then from (3.6)

G_{γ,m1,m2} = P_{m1} Ĝ_{γ,m1,m2} P_{m2}^T.   (3.11)

These cross-Grammians can be calculated as solutions to Sylvester systems of size n1 × n2, provided m1 ≤ n1 and m2 ≤ n2, where n1 and n2 denote the order of (3.1) in each case. This is shown by the next theorem and corollary; note that even if m1 < n1 and/or m2 < n2, we must solve a system of size n1 × n2 and then obtain the desired Grammian matrix as the m1 × m2 principal submatrix of the solution. Also note that if m1 ≥ n1 or m2 ≥ n2, we have the option of increasing n1 and/or n2 by adding extra zero poles.
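The identity ∇E[k] = J^{−1}(J − I)E[k] noted after (3.3) amounts to E[k] = J E[k−1]; a numerical check for a single Jordan block (the pole value is hypothetical):

```python
import numpy as np
from math import comb

lam, d = 0.6, 3
J = lam * np.eye(d) + np.diag(np.ones(d - 1), -1)   # one Jordan block as in (2.21)

def E(k):
    # the block of (2.28) for this pole: [C(k, j) * lam^(k - j)]_{j=0..d-1}
    return np.array([comb(k, j) * lam ** (k - j) for j in range(d)])
```

The assertions below confirm both the one-step recurrence and the backward difference identity.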


Theorem 3. The cross-Grammian matrix Ĝ_{γ,n1,n2} (3.9) is the solution of the following discrete Sylvester equation

C_1 X C_2^T − X + P_{n1} u_{1,γ} u_{2,γ}^T P_{n2}^T = 0,   γ = 0, 1, 2, ...,   (3.12)

where C denotes the n × n companion matrix

C = [ −a_{−1} −a_{−2} ··· −a_{−n+1} −a_{−n}
       1       0      ···  0         0
       0       1      ···  0         0
       ...     ...         ...       ...
       0       0      ···  1         0 ],   (3.13)

P_n denotes the Pascal matrix, and u_γ is an n-vector whose i-th element is 1 + ∑_{j=1}^{γ+1−i} c_j, where the c_i, i = 1, 2, ..., γ, are determined by (2.31).
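Equation (3.12) is linear in X and can be solved directly by Kronecker vectorization, as in the small-scale sketch below (helper name ours; Matlab's dlyap or a Schur-based solver would be used in practice):

```python
import numpy as np

def dsylvester(C1, C2, Q):
    # Solve C1 X C2^T - X + Q = 0: with column-major vec,
    # vec(C1 X C2^T) = kron(C2, C1) vec(X), so
    # (kron(C2, C1) - I) vec(X) = -vec(Q).
    n1, n2 = Q.shape
    A = np.kron(C2, C1) - np.eye(n1 * n2)
    x = np.linalg.solve(A, -Q.flatten(order="F"))
    return x.reshape((n1, n2), order="F")

C1 = np.array([[0.5, 0.1], [0.0, 0.3]])   # hypothetical stable matrices
C2 = np.array([[0.4, 0.0], [0.2, 0.1]])
Q = np.outer([1.0, 1.0], [1.0, -1.0])
X = dsylvester(C1, C2, Q)
```

The unique solvability requires that no product of an eigenvalue of C1 with an eigenvalue of C2 equals one, which holds when both systems are stable.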

Proof: y_b^{(−γ)}[k], the transient part of the basic response of order γ, satisfies the homogeneous difference equation (3.1), setting I_γ[k] ≡ 0, and it can be deduced from (2.30), setting m = 0 and b_0 = 1, and (2.26), that it further satisfies the initial conditions

∇^i y[0] = 1 + ∑_{j=1}^{γ−i} c_j,  i = 0, 1, ..., γ−1;   ∇^i y[0] = 1,  i = γ, γ+1, ..., n−1.   (3.14)

This follows from the fact that ∇^i y_γ[0] = 1, i = 0, 1, ..., n−1, by the definition of the basic response of order γ, and

∇^i ρ_γ[k] = [0 ··· 0 ρ_{γ−i}[k]^T]^T  if γ > i,   ∇^i ρ_γ[k] = [0 ··· 0]^T  if γ ≤ i.   (3.15)

Equivalently, y_b^{(−γ)}[k] satisfies the same homogeneous difference equation and the initial conditions

[ y[0] ; y[−1] ; ... ; y[−n+1] ] = P_n [ y[0] ; ∇y[0] ; ... ; ∇^{n−1} y[0] ] = P_n u_γ.

Thus, yet another equivalent formulation is that y_b^{(−γ)}[k] satisfies the first order system

Y[k] = C Y[k−1],   Y[0] = P_n u_γ,   (3.16)

where Y[k] = [y[k] y[k−1] ··· y[k−n+1]]^T and C is the companion matrix (3.13). The result now follows from a straightforward, well-known argument.  □

Corollary 1. The cross-Grammian matrix G_{γ,n1,n2} is the solution of the following Sylvester equation

C̃_1 X C̃_2^T − X + u_{1,γ} u_{2,γ}^T = 0,   γ = 0, 1, 2, ...,   (3.17)

where C̃_1 = P_{n1} C_1 P_{n1} and C̃_2 = P_{n2} C_2 P_{n2}.

Proof: The result follows from (3.11) and (3.12) and the fact that P_n^{−1} = P_n.  □

Remark 3. The cross-Grammian matrix G_{γ,m1,m2} (3.10) is also given by the following expression

G_{γ,m1,m2} = K_{1,γ,m1−1,n1−1}^T W_{n1,n2} K_{2,γ,m2−1,n2−1},   (3.18)

where K_{γ,m,n} is the n × (m+1) matrix (2.25) and W_{n1,n2} is an n1 × n2 matrix whose (i, j)-th element, with

i = ∑_{k=1}^{t−1} d_{1,k} + r   and   j = ∑_{k=1}^{u−1} d_{2,k} + s,   (3.19)

is given by

∑_{p=1}^{min(r,s)} C(r−1, p−1) C(s−1, p−1) λ_{1,t}^{s−p} λ̄_{2,u}^{r−p} / (1 − λ_{1,t} λ̄_{2,u})^{r+s−1},   (3.20)

r = 1, ..., d_{1,t}, t = 1, ..., ν_1, s = 1, ..., d_{2,u}, u = 1, ..., ν_2, the poles and their multiplicities associated with the two equations being denoted as in (2.19).

We have directly from Theorem 3 that

W_{n1,n2} = ∑_{k=0}^{∞} E_1[k] E_2[k]^H   (3.21)

and the result now follows from the fact that, by (2.27) and (2.28), the (i, j)-th element of W_{n1,n2}, with i and j given by (3.19), is

∑_{k=0}^{∞} C(k, r−1) λ_{1,t}^{k−r+1} C(k, s−1) λ̄_{2,u}^{k−s+1}.   (3.22)

4. AN APPLICATION TO PID ZEROS

As an example of how the expressions above may be utilized, we derive an expression for zero-optimized open-loop PID coefficients, with respect to a response of arbitrary order. Here we follow an approach presented, e.g., in Herjólfsson et al. [2006] and Hauksdóttir et al. [2007] for continuous systems. While open-loop zero-optimization may lead to instability in the closed loop, it remains a useful tool for providing initial values for the PID coefficients in a two-stage optimization approach, cf. Herjólfsson et al. [2012]. When applying PID control in open loop we are replacing (2.3) with the equation

∑_{i=0}^{n} α_{−i} ∇^i y[k] = (k_I + k_P ∇ + k_D ∇^2) ∑_{i=0}^{m} β_{−i} ∇^i u[k],   (4.1)

k_I, k_P and k_D being the discrete PID coefficients. Introducing the (m+3) × 3 matrix

B = [ β_0     0        0
      β_{−1}  β_0      0
      β_{−2}  β_{−1}   β_0
      ...     ...      ...
      β_{−m}  β_{−m+1} β_{−m+2}
      0       β_{−m}   β_{−m+1}
      0       0        β_{−m} ],   (4.2)

and the γ × 3 matrix B_γ corresponding to the first γ rows of B, B being padded by zero rows if γ > m+3, and the 3-vector p = [k_I k_P k_D]^T, it follows directly from (2.30) that the solution to (4.1) can be expressed in terms of the PID coefficients as follows:

y_γ[k] = (K_{γ,m+2,n} B p)^T E[k] − (D_{c_γ} B_γ p)^T ρ_γ[k] = (B p)^T Y_{γ,m+3}[k] − (D_{c_γ} B_γ p)^T ρ_γ[k],   (4.3)

with Y_{γ,m}[k] defined by (3.5).
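The banded matrix B in (4.2) simply stacks three shifted copies of the β-vector; a sketch (function name ours), reproducing the matrix that appears later in Example 1:

```python
import numpy as np

def pid_B(beta):
    # (4.2): (m+3) x 3 matrix with the beta-vector shifted down one row per column
    m = len(beta) - 1
    B = np.zeros((m + 3, 3))
    for col in range(3):
        B[col:col + m + 1, col] = beta
    return B

B = pid_B([0.04, -0.04])     # the Example 1 beta-coefficients
```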

We wish to track a system with a given transfer function

(β_{r,0} + β_{r,−1} ẑ + ··· + β_{r,−m_r} ẑ^{m_r}) / (α_{r,0} + α_{r,−1} ẑ + ··· + α_{r,−n_r} ẑ^{n_r}),   (4.4)

to which we apply the same forcing function I_γ[k]. Denoting its response by y_{r,γ}[k], we now wish to choose the PID coefficients in such a way that

∑_{k=0}^{∞} (y_γ[k] − y_{r,γ}[k])^2   (4.5)

is minimized. Since the infinite sums of the non-transient parts of y_γ[k] and y_{r,γ}[k] are unbounded as k → ∞ if γ ≥ 1, whereas the infinite sums of the transient parts are bounded, assuming that both systems are stable, this implies that the PID coefficients have to be chosen so that

∑_{k=0}^{∞} ((B p)^T Y_{γ,m+3}[k] − B_r^T Y_{r,γ,m_r+1}[k])^2   (4.6)

is minimized, subject to the constraint that

(D_{c_γ} B_γ p)^T ρ_γ[k] ≡ (D_{c_{r,γ}} B_{r,m_r,γ})^T ρ_{r,γ}[k],   (4.7)

if γ ≥ 1. The additional suffix r denotes that the corresponding expression is for the reference system. The vector B_{r,m_r,γ} consists of the first γ elements of the vector B_{r,m_r}. Equivalently, the PID coefficients can be chosen so that

p^T B^T G_γ B p − 2 p^T B^T H_γ B_r   (4.8)

is minimized, subject to the same constraint, where G_γ is the (m+3) × (m+3) Grammian matrix

∑_{k=0}^{∞} Y_{γ,m+3}[k] Y_{γ,m+3}[k]^T,   (4.9)

which can be determined from Theorem 3 as the (m+3) × (m+3) principal submatrix of the n × n solution of the given Lyapunov equation, where we have increased n, if necessary, such that n ≥ m+3, by adding m+3−n zero poles. H_γ is the (m+3) × (m_r+1) cross-Grammian matrix

∑_{k=0}^{∞} Y_{γ,m+3}[k] Y_{r,γ,m_r+1}[k]^T,   (4.10)

which can again be determined from Theorem 3 as the (m+3) × (m_r+1) principal submatrix of the n × n_r solution of the given Sylvester equation, increasing n and n_r as before, if necessary. Thus the PID coefficients are determined directly by the following set of equations

[ A_γ  U_γ^T
  U_γ  0 ] [ p ; (1/2) Λ ] = [ V_γ ; W_γ ],   (4.11)

where Λ is a γ × 1 vector of Lagrange multipliers, A_γ = B^T G_γ B, a 3 × 3 matrix, U_γ = D_{c_γ} B_γ, a γ × 3 matrix, V_γ = B^T H_γ B_{r,m_r}, a 3 × 1 vector, and W_γ = D_{c_{r,γ}} B_{r,m_r,γ}, a γ × 1 vector.
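The equality-constrained least squares problem behind (4.11) is a standard KKT system; a sketch with small synthetic matrices (all numerical values hypothetical, γ = 1):

```python
import numpy as np

def pid_kkt(A, U, V, W):
    # Solve [[A, U^T], [U, 0]] [p; Lambda/2] = [V; W] and return p, cf. (4.11)
    g = U.shape[0]
    M = np.block([[A, U.T], [U, np.zeros((g, g))]])
    rhs = np.concatenate([V, W])
    return np.linalg.solve(M, rhs)[:3]

A = np.array([[2.0, 0.3, 0.1], [0.3, 1.5, 0.2], [0.1, 0.2, 1.0]])  # stands in for B^T G_gamma B
U = np.array([[-1.0, 0.0, 0.0]])                                   # stands in for D_c_gamma B_gamma
V = np.array([0.5, 0.4, 0.3])                                      # stands in for B^T H_gamma B_r
W = np.array([-1.0])
p = pid_kkt(A, U, V, W)
```

By construction the returned coefficient vector satisfies the linear constraint U p = W exactly, which is how the DC gain condition of (4.7) is enforced in the step response case.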

Example 1: Impulse response minimization, γ = 0. Consider the third order system having poles at 0.5, 0.6 and 0.8 and unity gain, given by

Y(z)/U(z) = 0.04 z^2 / (z^3 − 1.9 z^2 + 1.18 z − 0.24),   (4.12)

where n = 3 and m = 1. We then have

A_3 = [ α_0 ; α_{−1} ; α_{−2} ; α_{−3} ]
    = [ 1 1 1 1
        0 −1 −2 −3
        0 0 1 3
        0 0 0 −1 ] [ 1 ; −1.9 ; 1.18 ; −0.24 ] = [ 0.04 ; 0.26 ; 0.46 ; 0.24 ]   (4.13)

and

B_1 = [ β_0 ; β_{−1} ] = [ 1 1
                           0 −1 ] [ 0 ; 0.04 ] = [ 0.04 ; −0.04 ].   (4.14)

The reference system is given by

Y_r(z)/U(z) = 0.3/(z − 0.7),   (4.15)

where n_r = 1 and m_r = 1. Then

A_{r,1} = [ α_{r,0} ; α_{r,−1} ] = [ 1 1
                                     0 −1 ] [ 1 ; −0.7 ] = [ 0.3 ; 0.7 ]   (4.16)

and

B_{r,1} = [ β_{r,0} ; β_{r,−1} ] = [ 1 1
                                     0 −1 ] [ 0 ; 0.3 ] = [ 0.3 ; −0.3 ].   (4.17)

We now compute the solution to the discrete Sylvester equation

C̃ G_0 C̃^T − G_0 + u_0 u_0^T = 0,   (4.18)

where

C̃ = P_4 [ 1.9 −1.18 0.24 0
           1    0    0    0
           0    1    0    0
           0    0    1    0 ] P_4,   (4.19)

where we have added a zero pole (the zero in the top row) since n < m+3. We further have

u_0 = [1 1 1 1]^T   (4.20)

and, using Matlab's dlyap to solve the discrete Sylvester equation, we obtain

G_0 = [ 44.1553  1.3143  −1.9502  −0.8784
         1.3143  2.6285   0.6783  −0.2001
        −1.9502  0.6783   1.3567   1.1566
        −0.8784 −0.2001   1.1566   2.3131 ].   (4.21)

Then

B = [ 0.04   0     0
     −0.04   0.04  0
      0     −0.04  0.04
      0      0    −0.04 ]   (4.22)

and

A_0 = [ 0.0706  0.0021 −0.0031
        0.0021  0.0042  0.0011
       −0.0031  0.0011  0.0022 ].   (4.23)

Next, we compute the solution to the discrete Sylvester equation

C̃ H_0 C̃_r^T − H_0 + u_0 u_{r,0}^T = 0,   (4.24)

where C̃_r is formed in the same way from the reference system, augmented with a zero pole so that n_r = m_r + 1 = 2, and u_{r,0} = [1 1]^T. This results in

H_0 = [ 3.8734 1.0000
        1.4620 1.0000
        0.7386 1.0000
        0.5216 1.0000 ].   (4.25)

Then we have

V_0 = B^T H_0 B_{r,1} = [ 0.0289 0.0087 0.0026 ]^T   (4.26)

and

p = [ k_I k_P k_D ]^T = A_0 \ V_0 = [ 0.4048 1.6096 0.9769 ]^T.   (4.27)

The resulting step responses and pole (x) zero (o) diagram for the open loop are shown in Figure 1. The resulting step responses and root locus for the closed loop are shown in Figure 2. All closed loop poles are labeled by an ∗ on the corresponding root locus.
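The computations of Example 1 can be reproduced without dlyap; the sketch below solves (4.18) and (4.24) by Kronecker vectorization (helper names ours; the reference system is likewise padded to order 2 with a zero pole, our reading of "increasing n_r as before", so that the cross-Grammian is 4 × 2):

```python
import numpy as np
from math import comb

def pascal(k):
    return np.array([[(-1) ** q * comb(p, q) for q in range(k)]
                     for p in range(k)], float)

def dsyl(C1, C2, Q):
    # solve C1 X C2^T - X + Q = 0 by vectorization
    n1, n2 = Q.shape
    x = np.linalg.solve(np.kron(C2, C1) - np.eye(n1 * n2), -Q.flatten(order="F"))
    return x.reshape((n1, n2), order="F")

# companion matrix of (4.12), padded with a zero pole so that n = m + 3 = 4, cf. (4.19)
C = np.array([[1.9, -1.18, 0.24, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
Ct = pascal(4) @ C @ pascal(4)
u0 = np.ones(4)
G0 = dsyl(Ct, Ct, np.outer(u0, u0))            # (4.18)

# reference system 0.3/(z - 0.7), padded to order 2 with a zero pole
Cr = np.array([[0.7, 0.0], [1.0, 0.0]])
Ctr = pascal(2) @ Cr @ pascal(2)
H0 = dsyl(Ct, Ctr, np.outer(u0, np.ones(2)))   # (4.24)

B = np.array([[0.04, 0, 0], [-0.04, 0.04, 0], [0, -0.04, 0.04], [0, 0, -0.04]])
Br1 = np.array([0.3, -0.3])
A0 = B.T @ G0 @ B                              # (4.23)
V0 = B.T @ H0 @ Br1                            # (4.26)
p = np.linalg.solve(A0, V0)                    # (4.27)
```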

[Figure: step responses and pole-zero map.]
Fig. 1. Example 1. A third order system with impulse optimized PID zeros running in open loop.

[Figure: step responses and root locus.]
Fig. 2. Example 1. A third order system with impulse optimized PID zeros running in closed loop.

[Figure: step responses and root locus.]
Fig. 4. Example 2. A third order system with step optimized PID zeros running in closed loop.

[Figure: step responses and pole-zero map.]
Fig. 3. Example 2. A third order system with step optimized PID zeros running in open loop.

Example 2: Step response minimization, γ = 1. In the case of step response minimization we have

u_1 = [ 1 + c_1  1  1  1 ]^T = [ −24 1 1 1 ]^T   (4.28)

and

u_{r,1} = [ 1 + c_{r,1}  1 ]^T = [ −2.3333 1 ]^T.   (4.29)

Then

D_{c_γ} = [c_1] = −25,  where α_0 c_1 = −1,   (4.30)

and

D_{c_{r,γ}} = [c_{r,1}] = −3.3333,  where α_{r,0} c_{r,1} = −1.   (4.31)

We now have

U_1 = D_{c_1} B_1 = [ −1 0 0 ]   (4.32)

and

W_1 = D_{c_{r,1}} B_{r,m_r,1} = c_{r,1} β_{r,0} = −1.   (4.33)

This results in

p = [ k_I k_P k_D ]^T = [ 1.0000 4.4761 4.0838 ]^T;   (4.34)

the open loop results are shown in Figure 3 and the corresponding closed loop results in Figure 4.

Comparing the pole-zero maps in Figures 1 and 3, one of the PID zeros (labeled o) is located to the left of the open loop poles (labeled x) in the impulse optimization, while in the step optimization the corresponding zero is located between the faster two system poles. While the interpretation of this is not obvious, it is clear that the DC gain is taken care of only in the step optimization case, due to the Lagrange constraint.

Comparing the root-loci in Figures 2 and 4, one can note that while the root-locus of the PID controlled system (in blue) is closer to that of the reference system (in red) for the impulse optimization, the complex poles of the PID controlled system (labeled ∗) are closer to those of the reference system for the step optimization, resulting in a more similar overshoot and settling time in that case.

ACKNOWLEDGEMENTS

This work was supported by the University of Iceland.

REFERENCES

W. Brogan. Modern Control Theory. Prentice Hall, 1991.
A. Feuer and G.C. Goodwin. Sampling in Digital Signal Processing and Control. Birkhäuser, 1996.
Z. Gajić. Linear Dynamic Systems and Signals. Prentice Hall, 2003.
G.C. Goodwin, S.F. Graebe and M.E. Salgado. Control Systems Design. Prentice Hall, 2001.
A.S. Hauksdóttir, G. Herjólfsson and S.Þ. Sigurðsson. Optimizing zero tracking and disturbance rejecting controllers - the generalized PID controller. The 2007 American Control Conference, New York City, USA, July 11-13, 2007, pp. 5790-5795.
G. Herjólfsson, A.S. Hauksdóttir and S.Þ. Sigurðsson. L2/H2-optimal zeros two stage optimization. Accepted to the IFAC Conference on Advances in PID Control, PID'12, Brescia, Italy, March 28-30, 2012.
G. Herjólfsson, A.S. Hauksdóttir and S.Þ. Sigurðsson. Closed form expressions of linear continuous- and discrete-time filter responses. NORSIG 2006, 7th Nordic Signal Processing Symposium, Reykjavík, Iceland, June 7-9, 2006.
G. Herjólfsson and A.S. Hauksdóttir. Direct computation of optimal discrete time PID controllers. The 2004 American Control Conference, Boston, Massachusetts, June 30 - July 2, 2004, pp. 46-51.
A.V. Oppenheim and A.S. Willsky. Signals and Systems. Prentice Hall, 1997.