Observers for Systems with Nonlinearities Satisfying Incremental Quadratic Constraints

A. Behçet Açıkmeşe∗ and Martin Corless†

April 10, 2007
Abstract

We consider the problem of designing observers to asymptotically estimate the state of a system whose nonlinear time-varying terms satisfy an incremental quadratic inequality that is parameterized by a set of multiplier matrices. Observer design is reduced to solving linear matrix inequalities for the observer gain matrices. The proposed observers guarantee exponential convergence of the state estimation error to zero. In addition to considering a larger class of nonlinearities than previously considered, this paper unifies some earlier results in the literature. The results are illustrated by application to an underwater vehicle.
1 Introduction
A fundamental problem in system analysis and control design is that of determining the state of a system from its measured input and output. Many solutions to this problem use an asymptotic observer (or state estimator) that produces an estimate of the system state that asymptotically approaches the actual system state. Typical observers for linear systems consist of a copy of the system dynamics along with a linear correction term based on the output error, that is, the difference between the measured output and its estimate based on the estimated state [21, 9]. References [15, 16, 23, 4, 7, 6] consider systems with globally Lipschitz nonlinearities and nonlinearities in unbounded sectors. Reference [13] extends these results to multivariable nonlinearities satisfying a monotonicity condition, as well as relaxing the observer feasibility conditions via a multiplier by exploiting the decoupled nature of the multivariable nonlinearity. They present asymptotic observers that consist of a copy of the system dynamics and two correction terms based on the output error; one term is the usual linear correction term while the other term (called the nonlinear injection term) enters the copy of the nonlinear element in the observer. Reference [25] gives a general description of a nonlinear observer with an "output injection" form, which is also analyzed by [3] in an incremental stability framework. Additional results on observers for nonlinear systems can be found in [19, 10, 24, 26].

∗ A. B. Açıkmeşe is with the Guidance and Control Analysis Group, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA. [email protected]
† M. Corless is with the School of Aeronautics and Astronautics, Purdue University, W. Lafayette, IN 47907, USA. [email protected]

In this paper, we consider systems whose state-space description contains a linear time-invariant part and a nonlinear/time-varying part. We characterize the nonlinear/time-varying term by a set of symmetric matrices which we call incremental multiplier matrices. More specifically, the nonlinear term satisfies an inequality which we call an incremental quadratic constraint (δQC); this constraint is parameterized by the incremental multiplier matrices associated with the nonlinear term; see inequality (3). The nonlinearities considered here include many commonly encountered nonlinearities, including those considered in [15, 16, 23, 4, 7, 6, 13]. Consequently, this paper unifies earlier results by characterizing a nonlinearity via an incremental quadratic inequality. Beyond the unification and generalization of previous observer results, we consider two other general classes of nonlinearities described by polytopic and conic parameterizations. These additional characterizations can provide less conservative feasibility results for globally Lipschitz multivariable nonlinearities and multivariable nonlinearities in unbounded sectors by further exploiting their structure.
We also consider the case of multiple nonlinearities with different characterizations for each portion of the nonlinearity, and establish the corresponding incremental multiplier matrices. For the systems under consideration, we present observers whose structure is partially inspired by [7]. These observers are characterized by three gain matrices: the gain matrix L for the usual linear correction term, the gain matrix Ln for the nonlinear injection term, and an additional gain matrix L3. Initially, we consider Ln and L3 fixed and convert the problem of determining L into that of solving linear matrix inequalities. Such inequalities can be readily treated using the LMI toolbox in MATLAB [14]. We also consider the problem of simultaneously computing L and Ln. By imposing a specific condition on the set of incremental multiplier matrices describing the nonlinearities, we convert the problem of simultaneously determining L and Ln into that of solving linear matrix inequalities. All of these results are based on the analysis of the state estimation error dynamics using quadratic Lyapunov functions. To illustrate our results, we apply the nonlinear observer design technique to an underwater vehicle from [20].
Observer-based output feedback controller design (such as that considered in [7, 5, 22]) is beyond the scope of this paper; it will be the subject of a separate paper [1]. We present a description of the systems under consideration in Section 2. This includes a general characterization of nonlinearities using an incremental quadratic inequality constraint that is parameterized by a set of incremental multiplier matrices. The proposed observers, which are parameterized by a linear gain matrix L, a nonlinear injection gain matrix Ln, and a third matrix L3, are presented in Section 3. This section also presents an observer design procedure for fixed Ln and L3. Section 5 presents an approach for the simultaneous design of L and Ln. In Section 7, we demonstrate that many common nonlinearities satisfy the incremental quadratic constraint; we also present multiplier matrices for these nonlinearities. Section 8 presents an application of the nonlinear observer design to an underwater vehicle.
2 System Description and Incremental Quadratic Constraints
We consider nonlinear/time-varying systems described by
\[
\dot{x} = Ax + Bp(t,x,u) + f(w), \qquad y = Cx + Dp(t,x,u) + h(w) \tag{1}
\]
where x(t) ∈ IR^n is the state, u(t) ∈ IR^m is a known input, y(t) ∈ IR^l is the measured output, t ∈ IR is the time variable and w(t) = (t, u(t), y(t)). All the nonlinear/time-varying elements in the system are lumped into the terms p, f and h. We suppose that p(t,x,u) ∈ IR^{l_p} is given by
\[
p(t,x,u) = \phi(w, q) \quad \text{where} \quad q = C_q x + D_{qp}\, p, \tag{2}
\]
φ is a continuous function and q ∈ IR^{l_q}. The matrices A, B, C, D and C_q, D_{qp} are constant and of appropriate dimensions. We will refer to the above system as the plant. By a motion of the plant, we mean any continuous function x(·) : [t_0, t_1) → IR^n, with t_0 < t_1 ≤ ∞, which satisfies the differential equation in (1) for some piecewise continuous input function u(·).

Our characterization of φ is based on a set M of symmetric matrices which we refer to as incremental multiplier matrices. Specifically, for all M ∈ M, the following incremental quadratic constraint (δQC) holds for all w and q_1, q_2 ∈ IR^{l_q}:
\[
\begin{pmatrix} q_2 - q_1 \\ \phi(w,q_2) - \phi(w,q_1) \end{pmatrix}^{T} M \begin{pmatrix} q_2 - q_1 \\ \phi(w,q_2) - \phi(w,q_1) \end{pmatrix} \ge 0. \tag{3}
\]
Basically, M provides a characterization of φ in an incremental sense. Section 7 provides incremental multiplier matrices for large classes of commonly encountered nonlinearities.

Example 1 Consider any differentiable scalar-valued function φ of a scalar variable. Suppose that φ′, the derivative of φ, is bounded, and choose σ_1 and σ_2 so that σ_1 ≤ φ′(q) ≤ σ_2 for all q ∈ IR. An application of the mean value theorem shows that φ satisfies
\[
\sigma_1 (q_2-q_1)^2 \le (\phi(q_2)-\phi(q_1))(q_2-q_1) \le \sigma_2 (q_2-q_1)^2 \tag{4}
\]
for all q_1, q_2 ∈ IR. Condition (4) is equivalent to
\[
[(\phi(q_2)-\phi(q_1)) - \sigma_1(q_2-q_1)]\,[\sigma_2(q_2-q_1) - (\phi(q_2)-\phi(q_1))] \ge 0,
\]
that is,
\[
\frac{1}{2}\begin{pmatrix} q_2-q_1 \\ \phi(q_2)-\phi(q_1) \end{pmatrix}^{T}
\begin{pmatrix} -2\sigma_1\sigma_2 & \sigma_1+\sigma_2 \\ \sigma_1+\sigma_2 & -2 \end{pmatrix}
\begin{pmatrix} q_2-q_1 \\ \phi(q_2)-\phi(q_1) \end{pmatrix} \ge 0.
\]
Hence, any matrix
\[
M = \lambda \begin{pmatrix} -2\sigma_1\sigma_2 & \sigma_1+\sigma_2 \\ \sigma_1+\sigma_2 & -2 \end{pmatrix}
\quad \text{with} \quad \lambda \ge 0
\]
is an incremental multiplier matrix for φ.

Example 2 Consider any monotone scalar-valued function φ of a scalar variable, that is, φ(q_2) ≥ φ(q_1) when q_2 ≥ q_1. This is equivalent to
\[
(\phi(q_2)-\phi(q_1))(q_2-q_1) \ge 0 \tag{5}
\]
for all q_1, q_2 ∈ IR. Notice that satisfaction of (5) is equivalent to satisfaction of
\[
\begin{pmatrix} q_2-q_1 \\ \phi(q_2)-\phi(q_1) \end{pmatrix}^{T}
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} q_2-q_1 \\ \phi(q_2)-\phi(q_1) \end{pmatrix} \ge 0.
\]
This clearly shows that any matrix
\[
M = \lambda \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \quad \text{with} \quad \lambda \ge 0
\]
is a multiplier matrix for φ.
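The multiplier of Example 1 can be checked numerically. The following sketch (an illustration added here, not part of the paper) takes φ = tanh, whose derivative lies in [0, 1], so σ_1 = 0 and σ_2 = 1, and evaluates the quadratic form of the δQC (3) at random point pairs:

```python
import numpy as np

# Illustration: phi = tanh has 0 <= phi'(q) <= 1, so Example 1 applies with
# sigma1 = 0, sigma2 = 1.  Build M = lam * [[-2*s1*s2, s1+s2], [s1+s2, -2]]
# and verify that the quadratic form in the incremental QC (3) is nonnegative.
s1, s2, lam = 0.0, 1.0, 1.0
M = lam * np.array([[-2.0 * s1 * s2, s1 + s2],
                    [s1 + s2,        -2.0   ]])

rng = np.random.default_rng(0)
worst = np.inf
for _ in range(1000):
    q1, q2 = rng.uniform(-5.0, 5.0, size=2)
    v = np.array([q2 - q1, np.tanh(q2) - np.tanh(q1)])
    worst = min(worst, v @ M @ v)   # quadratic form in (3)
print(worst)                        # nonnegative up to round-off
```

The same sampling test with the multiplier of Example 2 (λ[[0, 1],[1, 0]]) also returns a nonnegative minimum, since tanh is monotone.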
Remark 1 (D_{qp} ≠ 0 and ψ) When D_{qp} ≠ 0, the relationships in (2) show that p is implicitly defined by
\[
p = \phi(w,\, z + D_{qp}\,p) \tag{6}
\]
where z = C_q x. In this case, we assume that there is a continuous function ψ such that, for each w and z,
\[
p = \psi(w, z) \tag{7}
\]
uniquely solves (6), that is,
\[
\psi(w, z) = \phi(w,\, z + D_{qp}\,\psi(w, z)). \tag{8}
\]
Note that ψ satisfies
\[
\begin{pmatrix} z_2 - z_1 \\ \psi(w,z_2) - \psi(w,z_1) \end{pmatrix}^{T} N \begin{pmatrix} z_2 - z_1 \\ \psi(w,z_2) - \psi(w,z_1) \end{pmatrix} \ge 0 \tag{9}
\]
where
\[
N = \begin{pmatrix} I & D_{qp} \\ 0 & I \end{pmatrix}^{T} M \begin{pmatrix} I & D_{qp} \\ 0 & I \end{pmatrix}. \tag{10}
\]
Thus the plants under consideration can be described by
\[
\dot{x} = Ax + B\psi(w, C_q x) + f(w), \qquad y = Cx + D\psi(w, C_q x) + h(w) \tag{11}
\]
where ψ satisfies (9).
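When D_{qp} ≠ 0, implementing ψ means solving the implicit relation (6) numerically. A minimal sketch (the particular φ, D_{qp} and solution scheme are assumptions of this note): when the composite slope |φ′ D_{qp}| is bounded below 1, the map p ↦ φ(w, z + D_{qp} p) is a contraction, so fixed-point iteration converges to the unique p = ψ(w, z):

```python
import numpy as np

# Illustration: phi(q) = 0.5*tanh(q) has |phi'| <= 0.5; with Dqp = 0.5 the map
# p -> phi(z + Dqp*p) is a contraction (Lipschitz constant <= 0.25), so (6)
# has a unique solution p = psi(z), found here by fixed-point iteration.
Dqp = 0.5

def phi(q):
    return 0.5 * np.tanh(q)

def psi(z, iters=100):
    """Unique p solving p = phi(z + Dqp * p), i.e. equation (6)."""
    p = 0.0
    for _ in range(iters):
        p = phi(z + Dqp * p)
    return p

z = 1.2
p = psi(z)
residual = abs(p - phi(z + Dqp * p))
print(p, residual)                  # residual ~ 0 at the fixed point
```

Lemma 1 in Section 4 gives a matrix-inequality condition under which the analogous implicit equation for the observer admits a unique continuous solution.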
3 Observers
We propose the following observers to provide an estimate x̂ of the state x of the plant described by (1) and (2) in the previous section:
\[
\dot{\hat{x}} = A\hat{x} + B\hat{p} + f(w) + L(\hat{y}-y), \qquad \hat{y} = C\hat{x} + D\hat{p} + h(w) \tag{12}
\]
where
\[
\hat{p} = \psi(w,\, C_q\hat{x} + L_n(\hat{y}-y)) + L_3(\hat{y}-y) \tag{13}
\]
and the observer gain matrices L, L_n and L_3 are constant matrices of appropriate dimensions. Note that the observer is basically a copy of the plant augmented with three correction terms. The first correction term, L(ŷ − y), is the usual linear output error term common in state observation of linear systems. The second term, L_n(ŷ − y), is sometimes called a nonlinear injection term and has appeared in prior results on observation of nonlinear systems; see, for example, [7]. Section 6 provides an example of a system where asymptotic state observation using the approach of this paper is not possible without this term. The third term, L_3(ŷ − y), can add additional capabilities to the observer design. However, if D is zero this term is clearly superfluous. One can see this by noting that, with D = 0, we have ŷ − y = C(x̂ − x) and
\[
B\hat{p} + L(\hat{y}-y) = B\psi(w,\, \hat{z} + L_n(\hat{y}-y)) + (L + BL_3)(\hat{y}-y).
\]
Thus, the L_3(ŷ − y) term can be incorporated into the linear correction term.

Remark 2 One can also show that, if the matrix I − DL_3 is invertible, then one can eliminate the L_3 correction term in the proposed observer (12)-(13) by suitable modification of the other two correction terms. This modification consists of replacing L and L_n with
\[
\hat{L} := (L + BL_3)(I - DL_3)^{-1} \tag{14}
\]
and
\[
\hat{L}_n := L_n(I - DL_3)^{-1}, \tag{15}
\]
respectively. This is demonstrated in Appendix A.1.

Remark 3 In the observer description above, we have
\[
\hat{p} = \psi(w,\, \eta_1 + L_nD\hat{p}) + \eta_2 + L_3D\hat{p} \tag{16}
\]
where
\[
\eta_1 = (C_q + L_nC)\hat{x} + L_n(h(w) - y) = C_q x + (C_q + L_nC)e - L_nD\,p \tag{17}
\]
\[
\eta_2 = L_3(C\hat{x} + h(w) - y) = L_3Ce - L_3D\,p \tag{18}
\]
and e := x̂ − x is the state estimation error. If L_nD = 0 then, assuming I − L_3D is invertible, the above equation can be explicitly solved for p̂ to yield
\[
\hat{p} = (I - L_3D)^{-1}[\psi(w, \eta_1) + \eta_2].
\]
When L_nD ≠ 0, equation (16) is an implicit equation for p̂. So, we assume that there is a continuous function ψ̂ such that, for all w, η_1 and η_2, equation (16) is uniquely solved by p̂ = ψ̂(w, η_1, η_2). Then p̂ is uniquely given by
\[
\hat{p} = \hat{\psi}(w,\, (C_q + L_nC)\hat{x} + L_n(h(w)-y),\, L_3(C\hat{x} + h(w)-y)). \tag{19}
\]
Note also that
\[
\hat{p} = \hat{\psi}(w,\, C_q x + (C_q + L_nC)e - L_nD\,p,\, L_3Ce - L_3D\,p). \tag{20}
\]
Section 4 provides some sufficient conditions which guarantee the existence of ψ̂.

Remark 4 Note that x̂ = x is a solution to the observer dynamics. To show this, we need only demonstrate that p̂ = p when x̂ = x. With x̂ = x, equation (16) reduces to
\[
\hat{p} = \psi(w,\, C_q x + L_nD(\hat{p}-p)) + L_3D(\hat{p}-p).
\]
Since p = ψ(w, C_q x), it should be clear that p̂ = p is a solution of the equation above. Therefore, x̂ = x is a solution to the observer dynamics.

The following result is the main result of this paper. It provides conditions on the observer gain matrices L, L_n and L_3 which result in exponentially decaying estimation errors.

Theorem 1 Consider a plant described by (1)-(2) which satisfies the incremental quadratic constraint (3) with a set M of incremental multiplier matrices. Suppose that there exist a scalar α > 0 and matrices P = P^T > 0, L, L_n, L_3 and M ∈ M such that the following matrix inequality is satisfied:
\[
\begin{pmatrix} P(A+LC) + (A+LC)^{T}P + 2\alpha P & P(B+LD) \\ (B+LD)^{T}P & 0 \end{pmatrix} + \Phi^{T} M \Phi \le 0 \tag{21}
\]
where
\[
\Phi := \begin{pmatrix} C_q + (L_n - D_{qp}L_3)C & D_{qp} + (L_n - D_{qp}L_3)D \\ -L_3C & I - L_3D \end{pmatrix}. \tag{22}
\]
Also suppose that there is a continuous function ψ̂ such that p̂ = ψ̂(w, η_1, η_2) uniquely solves equation (16). Let x(·) : [t_0, t_1) → IR^n be any motion of the plant. Then, for any initial observer state x̂_0, the observer (12) has a solution x̂(·) defined on [t_0, t_1) with x̂(t_0) = x̂_0, and every such solution satisfies
\[
\|e(t)\| \le c\,\|e(t_0)\|\, e^{-\alpha(t-t_0)} \tag{23}
\]
for t_0 ≤ t < t_1, where e = x̂ − x is the state estimation error and c = [λ_max(P)/λ_min(P)]^{1/2}.

Proof: Consider any motion x(·) of the plant defined on some interval [t_0, t_1) and consider any observer initial condition x̂(t_0) = x̂_0. The evolution of the observer state x̂ can be described by
\[
\dot{\hat{x}} = (A+LC)\hat{x} + (B+LD)\hat{p} + f(w(t)) + L(h(w(t)) - y(t))
\]
where
\[
\hat{p} = \hat{\psi}(w(t),\, (C_q+L_nC)\hat{x} + L_n(h(w(t))-y(t)),\, L_3(C\hat{x} + h(w(t))-y(t))).
\]
Since ψ̂ is continuous and w is piecewise continuous, the right-hand side of the above differential equation is continuous with respect to x̂ and piecewise continuous with respect to t. Hence it has a solution x̂(·) on some interval [t_0, t′_1) satisfying x̂(t_0) = x̂_0 and t_0 < t′_1 ≤ t_1. Considering now any solution x̂(·) of the observer dynamics, the evolution of the corresponding state estimation error e = x̂ − x is described by
\[
\dot{e} = (A+LC)e + (B+LD)\delta p \tag{24}
\]
where δp := p̂ − p. It follows from (13) and ŷ − y = Ce + Dδp that
\[
\hat{p} = \psi(w,\, C_q x + (C_q+L_nC)e + L_nD\,\delta p) + L_3Ce + L_3D\,\delta p.
\]
Since p = ψ(w, C_q x), we obtain that
\[
\delta\psi = -L_3Ce + (I - L_3D)\delta p \tag{25}
\]
where
\[
\delta\psi = \psi(w,\, C_q x + (C_q+L_nC)e + L_nD\,\delta p) - \psi(w, C_q x).
\]
Considering the incremental quadratic constraint (9) with z_1 = C_q x and z_2 = C_q x + (C_q+L_nC)e + L_nD\,δp, we obtain that
\[
\begin{pmatrix} \delta z \\ \delta\psi \end{pmatrix}^{T} N \begin{pmatrix} \delta z \\ \delta\psi \end{pmatrix} \ge 0
\]
where δz := z_2 − z_1 = (C_q+L_nC)e + L_nD\,δp and N is given in (10). Hence,
\[
\begin{pmatrix} e \\ \delta p \end{pmatrix}^{T} \Phi^{T}M\Phi \begin{pmatrix} e \\ \delta p \end{pmatrix} \ge 0 \tag{26}
\]
where Φ is given by (22).

Returning now to the differential equation (24) describing the behavior of e and letting V(t) = e(t)^T P e(t), we obtain that
\[
\dot{V} = 2e^{T}P\dot{e} = e^{T}[P(A+LC) + (A+LC)^{T}P]e + e^{T}P(B+LD)\delta p + \delta p^{T}(B+LD)^{T}Pe.
\]
From this expression we see that pre- and post-multiplying both sides of the matrix inequality (21) by [e^T δp^T] and its transpose results in
\[
\dot{V} + 2\alpha V + \begin{pmatrix} e \\ \delta p \end{pmatrix}^{T} \Phi^{T}M\Phi \begin{pmatrix} e \\ \delta p \end{pmatrix} \le 0.
\]
It now follows from (26) that
\[
\dot{V} \le -2\alpha V \tag{27}
\]
for t_0 ≤ t < t′_1. Using standard arguments (see [11, 2] and/or [8]), one may show that inequality (27) implies that the desired result (23) holds for t_0 ≤ t < t′_1. Since e is bounded on any finite sub-interval of [t_0, t_1), it follows that x̂ is bounded on any finite sub-interval of [t_0, t_1); hence x̂(·) can be continued over [t_0, t_1) and (23) holds for t_0 ≤ t < t_1.

The following corollary yields an observer design procedure for a given L_n and L_3.

Corollary 1 Consider a plant described by (1)-(2) which satisfies the incremental quadratic constraint (3) with a set M of matrices. For given matrices L_n, L_3 and scalar α > 0, suppose that there exist matrices P = P^T > 0, R and M ∈ M which satisfy the matrix inequality
\[
\begin{pmatrix} PA + A^{T}P + RC + C^{T}R^{T} + 2\alpha P & PB + RD \\ B^{T}P + D^{T}R^{T} & 0 \end{pmatrix} + \Phi^{T}M\Phi \le 0 \tag{28}
\]
where Φ is given by (22), and let
\[
L = P^{-1}R. \tag{29}
\]
Also suppose that there is a continuous function ψ̂ such that p̂ = ψ̂(w, η_1, η_2) solves equation (16). Let x(·) : [t_0, t_1) → IR^n be any motion of the plant. Then, for any initial observer state x̂_0, the observer (12) has a solution x̂(·) defined on [t_0, t_1) with x̂(t_0) = x̂_0, and every such solution satisfies (23) for all t_0 ≤ t < t_1, where e = x̂ − x is the state estimation error.

Remark 5 Note that, for fixed α, L_n and L_3, inequality (28) is an LMI (linear matrix inequality) in the variables P, R, and M.
4 On the Existence of a Solution to Equation (16) When L_nD is Nonzero

As mentioned in the previous section, when L_nD ≠ 0 we need to be able to solve equation (16) for p̂ to implement the observer. This equation defines an implicit relation for p̂ in
terms of w, η_1 and η_2. The following lemma provides a sufficient condition which guarantees that, for each w, η_1 and η_2, equation (16) has a solution p̂ = ψ̂(w, η_1, η_2), where ψ̂ is continuous.

Lemma 1 Consider a continuous function ψ which satisfies the incremental quadratic constraint (9) with some nonempty set M of matrices. Given L_n and L_3, suppose there is a scalar β > 0 and matrices M_2 ∈ M and Q such that
\[
\begin{pmatrix} Q(L_3D - I) + (L_3D - I)^{T}Q^{T} + \beta I & Q \\ Q^{T} & 0 \end{pmatrix}
+ \begin{pmatrix} L_nD & D_{qp} \\ 0 & I \end{pmatrix}^{T} M_2 \begin{pmatrix} L_nD & D_{qp} \\ 0 & I \end{pmatrix} \le 0. \tag{30}
\]
Then, there is a continuous function ψ̂ such that, for all w, η_1 and η_2, p̂ = ψ̂(w, η_1, η_2) solves equation (16).

Proof:
Define the function F by
\[
F(\hat{p}, v) = (L_3D - I)\hat{p} + \psi(w,\, \eta_1 + L_nD\hat{p}) + \eta_2
\]
where v = (w, η_1, η_2). Now all we need to do is show that there is a continuous function ψ̂ such that F(ψ̂(v), v) = 0 for all v. To achieve this, we utilize Theorem 2 in Section A.2. Considering any v and any p̂_1, p̂_2, we see that
\[
F(\hat{p}_2, v) - F(\hat{p}_1, v) = (L_3D - I)\delta\hat{p} + \delta\psi,
\]
where δp̂ := p̂_2 − p̂_1 and δψ := ψ(w, η_1 + L_nDp̂_2) − ψ(w, η_1 + L_nDp̂_1). As a consequence of the hypotheses on M_2, we obtain that
\[
\begin{pmatrix} \delta\hat{p} \\ \delta\psi \end{pmatrix}^{T} N_2 \begin{pmatrix} \delta\hat{p} \\ \delta\psi \end{pmatrix} \ge 0 \tag{31}
\]
where
\[
N_2 = \begin{pmatrix} L_nD & D_{qp} \\ 0 & I \end{pmatrix}^{T} M_2 \begin{pmatrix} L_nD & D_{qp} \\ 0 & I \end{pmatrix}.
\]
Pre- and post-multiplying inequality (30) by [δp̂^T δψ^T] and its transpose, we obtain
\[
\delta\hat{p}^{T}[Q(L_3D - I) + (L_3D - I)^{T}Q^{T}]\delta\hat{p} + \beta\,\delta\hat{p}^{T}\delta\hat{p} + 2\delta\hat{p}^{T}Q\,\delta\psi
+ \begin{pmatrix} \delta\hat{p} \\ \delta\psi \end{pmatrix}^{T} N_2 \begin{pmatrix} \delta\hat{p} \\ \delta\psi \end{pmatrix} \le 0,
\]
that is,
\[
2\delta\hat{p}^{T}Q[F(\hat{p}_2, v) - F(\hat{p}_1, v)] + \beta\|\delta\hat{p}\|^{2}
+ \begin{pmatrix} \delta\hat{p} \\ \delta\psi \end{pmatrix}^{T} N_2 \begin{pmatrix} \delta\hat{p} \\ \delta\psi \end{pmatrix} \le 0.
\]
Recalling (31) now results in
\[
(\hat{p}_2 - \hat{p}_1)^{T}(2Q/\beta)[F(\hat{p}_2, v) - F(\hat{p}_1, v)] \le -\|\hat{p}_2 - \hat{p}_1\|^{2}.
\]
Replacing Q in Theorem 2 with 2Q/β, we can directly conclude the proof.
Remark 6 When L_nD ≠ 0, Lemma 1 suggests that we can design an observer for given L_n and L_3 by simultaneously solving the LMIs (28) and (30) for P, M, R, Q, M_2 and β; then L = P^{-1}R. Consequently, we obtain a well-defined observer to estimate the plant state, because equation (16) has a continuous solution.
5 Simultaneous Design of L and L_n via LMIs
The previous section contains an observer design procedure where the observer gain L is designed for fixed values of the gains L_n and L_3. Here we address the problem of simultaneous design of L and L_n. To obtain tractable conditions permitting the simultaneous design of L and L_n, we consider multiplier matrices M which are parameterized by two matrices X and Y of lower dimensions and satisfy the following condition.

Condition 1 There exist a nonsingular matrix T and a set N of matrix pairs (X, Y) with Y ∈ IR^{m_p × m_p} such that X = X^T > 0, Y = Y^T ≥ 0, and the matrix
\[
M = T^{T}\begin{pmatrix} X & 0 \\ 0 & -Y \end{pmatrix} T \tag{32}
\]
is in M. In addition, T_{22} + T_{21}D_{qp} is nonsingular, where
\[
T = \begin{pmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{pmatrix} \tag{33}
\]
and T_{22} ∈ IR^{m_p × m_p}.

To achieve the above-mentioned goals, suppose Condition 1 holds and consider the term Φ^TMΦ in matrix inequality (21). Using the structure of M and letting Ψ := TΦ, we can express this term as follows:
\[
\Phi^{T}M\Phi = \Psi^{T}\begin{pmatrix} X & 0 \\ 0 & -Y \end{pmatrix}\Psi
= \begin{pmatrix} \Psi_{11}^{T} \\ \Psi_{12}^{T} \end{pmatrix} X \begin{pmatrix} \Psi_{11} & \Psi_{12} \end{pmatrix}
- \begin{pmatrix} \Psi_{21}^{T} \\ \Psi_{22}^{T} \end{pmatrix} Y \begin{pmatrix} \Psi_{21} & \Psi_{22} \end{pmatrix}
\]
where
\[
\Psi = \begin{pmatrix} \Psi_{11} & \Psi_{12} \\ \Psi_{21} & \Psi_{22} \end{pmatrix}.
\]
Recalling expression (22) for Φ and the partitioning of T in (33), we obtain that
\[
\begin{aligned}
\Psi_{11} &= T_{11}C_q + (\Sigma L_n - \Gamma_{12}\Gamma_{22}^{-1}\tilde{L}_3)C \\
\Psi_{12} &= \Gamma_{12} + (\Sigma L_n - \Gamma_{12}\Gamma_{22}^{-1}\tilde{L}_3)D \\
\Psi_{21} &= T_{21}C_q - \tilde{L}_3C \\
\Psi_{22} &= \Gamma_{22} - \tilde{L}_3D
\end{aligned} \tag{34}
\]
where
\[
\Gamma_{12} := T_{12} + T_{11}D_{qp}, \qquad \Gamma_{22} := T_{22} + T_{21}D_{qp}, \qquad
\Sigma := T_{11} - \Gamma_{12}\Gamma_{22}^{-1}T_{21}, \qquad \tilde{L}_3 := \Gamma_{22}L_3 - T_{21}L_n. \tag{35}
\]
Since Γ_{22} is assumed to be invertible, we can choose
\[
L_3 = \Gamma_{22}^{-1}T_{21}L_n \tag{36}
\]
to yield L̃_3 = 0. As a result,
\[
\Phi^{T}M\Phi =
\begin{pmatrix} C_q^{T}T_{11}^{T} + C^{T}L_n^{T}\Sigma^{T} \\ \Gamma_{12}^{T} + D^{T}L_n^{T}\Sigma^{T} \end{pmatrix}
X
\begin{pmatrix} T_{11}C_q + \Sigma L_nC & \Gamma_{12} + \Sigma L_nD \end{pmatrix}
-
\begin{pmatrix} C_q^{T}T_{21}^{T} \\ \Gamma_{22}^{T} \end{pmatrix}
Y
\begin{pmatrix} T_{21}C_q & \Gamma_{22} \end{pmatrix}
\]
and matrix inequality (21) can now be written as
\[
\begin{pmatrix}
PA + A^{T}P + 2\alpha P + RC + C^{T}R^{T} - C_q^{T}T_{21}^{T}YT_{21}C_q & PB + RD - C_q^{T}T_{21}^{T}Y\Gamma_{22} \\
B^{T}P + D^{T}R^{T} - \Gamma_{22}^{T}YT_{21}C_q & -\Gamma_{22}^{T}Y\Gamma_{22}
\end{pmatrix}
+
\begin{pmatrix} C_q^{T}T_{11}^{T}X + C^{T}R_n^{T} \\ \Gamma_{12}^{T}X + D^{T}R_n^{T} \end{pmatrix}
X^{-1}
\begin{pmatrix} XT_{11}C_q + R_nC & X\Gamma_{12} + R_nD \end{pmatrix}
\le 0,
\]
where
\[
R := PL \quad \text{and} \quad R_n := X\Sigma L_n. \tag{37}
\]
Applying a Schur complement result [8], the inequality above is equivalent to matrix inequality (38).

We now show that Σ is invertible. Note that
\[
\begin{pmatrix} T_{11} & \Gamma_{12} \\ T_{21} & \Gamma_{22} \end{pmatrix}
= \begin{pmatrix} T_{11} & T_{12} + T_{11}D_{qp} \\ T_{21} & T_{22} + T_{21}D_{qp} \end{pmatrix}
= T\begin{pmatrix} I & D_{qp} \\ 0 & I \end{pmatrix}.
\]
Since the two matrices on the right-hand side of the second equality are invertible, the matrix on the left-hand side of the first equality is invertible. Since Γ_{22} is assumed to be
invertible, by the matrix inversion lemma [17] the first matrix above is invertible if and only if its Schur complement
\[
T_{11} - \Gamma_{12}\Gamma_{22}^{-1}T_{21} = \Sigma
\]
is invertible. This implies that Σ is invertible. The following corollary now follows from Theorem 1.

Corollary 2 Consider a plant described by (1)-(2) which satisfies the incremental quadratic constraint (3) with a set M of matrices satisfying Condition 1. Suppose that, for some scalar α > 0, there exist matrices P = P^T > 0, R, R_n and (X, Y) ∈ N such that
\[
\begin{pmatrix}
PA + A^{T}P + 2\alpha P + RC + C^{T}R^{T} - C_q^{T}T_{21}^{T}YT_{21}C_q & PB + RD - C_q^{T}T_{21}^{T}Y\Gamma_{22} & C_q^{T}T_{11}^{T}X + C^{T}R_n^{T} \\
B^{T}P + D^{T}R^{T} - \Gamma_{22}^{T}YT_{21}C_q & -\Gamma_{22}^{T}Y\Gamma_{22} & \Gamma_{12}^{T}X + D^{T}R_n^{T} \\
XT_{11}C_q + R_nC & X\Gamma_{12} + R_nD & -X
\end{pmatrix} \le 0 \tag{38}
\]
is satisfied, and let
\[
L = P^{-1}R, \qquad L_n = \Sigma^{-1}X^{-1}R_n, \qquad L_3 = \Gamma_{22}^{-1}T_{21}\Sigma^{-1}X^{-1}R_n. \tag{39}
\]
Also suppose that there is a continuous function ψ̂ such that p̂ = ψ̂(w, η_1, η_2) solves equation (16). Let x(·) : [t_0, t_1) → IR^n be any motion of the plant. Then, for any initial observer state x̂_0, the observer (12) has a solution x̂(·) defined on [t_0, t_1) with x̂(t_0) = x̂_0, and every such solution satisfies (23) for all t_0 ≤ t < t_1, where e = x̂ − x is the state estimation error.

Remark 7 Note that, for a fixed α, inequality (38) is an LMI (linear matrix inequality) in the variables P, R, R_n, X and Y.

When L_nD ≠ 0, the following corollary of Lemma 1 presents an LMI that guarantees a continuous solution to equation (16) for p̂.

Corollary 3 Consider a continuous function ψ which satisfies the incremental quadratic constraint (9) with a set M of matrices satisfying Condition 1. Given matrices R_n and (X, Y) ∈ N, suppose that there is a scalar β̃ > 0 and a matrix Q̃ such that
\[
\begin{pmatrix}
-\tilde{Q} - \tilde{Q}^{T} + \tilde{\beta}I & \tilde{Q} & \tilde{D}^{T}R_n^{T} \\
\tilde{Q}^{T} & -Y & \tilde{D}_{qp}^{T}X \\
R_n\tilde{D} & X\tilde{D}_{qp} & -X
\end{pmatrix} \le 0, \tag{40}
\]
where
\[
\tilde{D} = D\Gamma_{22}^{-1}, \qquad \tilde{D}_{qp} = \Gamma_{12}\Gamma_{22}^{-1}.
\]
Then there is a continuous function ψ̂ such that p̂ = ψ̂(w, η_1, η_2) satisfies (16), where L_n and L_3 are given by (39).
Proof: We prove this result by demonstrating that the hypotheses of Lemma 1 are satisfied, that is, there is a scalar β > 0 and matrices M_2 ∈ M and Q such that the matrix inequality (30) holds. So suppose that, given R_n and (X, Y) ∈ N, there is a scalar β̃ > 0 and a matrix Q̃ such that (40) holds, and let
\[
M_2 := T^{T}\begin{pmatrix} X & 0 \\ 0 & -Y \end{pmatrix}T.
\]
Now introduce the invertible matrix
\[
\check{T} = \begin{pmatrix} \Gamma_{22}^{-1} & 0 \\ -\Gamma_{22}^{-1}T_{21}L_nD\Gamma_{22}^{-1} & \Gamma_{22}^{-1} \end{pmatrix} \tag{41}
\]
where Γ_{22} = T_{22} + T_{21}D_{qp}. By post- and pre-multiplying matrix inequality (30) by Ť and its transpose, respectively, this inequality is seen to be equivalent to
\[
\begin{pmatrix}
\tilde{Q}(\tilde{L}_3\tilde{D} - I) + (\tilde{L}_3\tilde{D} - I)^{T}\tilde{Q}^{T} + \beta\Gamma_{22}^{-T}\Gamma_{22}^{-1} & \tilde{Q} \\
\tilde{Q}^{T} & 0
\end{pmatrix}
+
\begin{pmatrix} \Sigma L_n\tilde{D} & \tilde{D}_{qp} \\ 0 & I \end{pmatrix}^{T}
\begin{pmatrix} X & 0 \\ 0 & -Y \end{pmatrix}
\begin{pmatrix} \Sigma L_n\tilde{D} & \tilde{D}_{qp} \\ 0 & I \end{pmatrix} \le 0
\]
where Q̃ := Γ_{22}^{-T}QΓ_{22}^{-1} and L̃_3 = Γ_{22}L_3 − T_{21}L_n. Recalling that L̃_3 = 0, the above inequality can be written as
\[
\begin{pmatrix}
-\tilde{Q} - \tilde{Q}^{T} + \beta\Gamma_{22}^{-T}\Gamma_{22}^{-1} & \tilde{Q} \\
\tilde{Q}^{T} & -Y
\end{pmatrix}
+
\begin{pmatrix} \tilde{D}^{T}L_n^{T}\Sigma^{T} \\ \tilde{D}_{qp}^{T} \end{pmatrix}
X
\begin{pmatrix} \Sigma L_n\tilde{D} & \tilde{D}_{qp} \end{pmatrix} \le 0.
\]
Recalling that R_n = XΣL_n and applying a Schur complement result yields
\[
\begin{pmatrix}
-\tilde{Q} - \tilde{Q}^{T} + \beta\Gamma_{22}^{-T}\Gamma_{22}^{-1} & \tilde{Q} & \tilde{D}^{T}R_n^{T} \\
\tilde{Q}^{T} & -Y & \tilde{D}_{qp}^{T}X \\
R_n\tilde{D} & X\tilde{D}_{qp} & -X
\end{pmatrix} \le 0.
\]
We now see that if inequality (40) holds then, choosing β > 0 so that βΓ_{22}^{-T}Γ_{22}^{-1} ≤ β̃I, inequality (30) holds with Q = Γ_{22}^{T}Q̃Γ_{22}.

Remark 8 When L_nD ≠ 0, Corollary 3 tells us that we can design the observer gains L and L_n by simultaneously solving LMIs (38) and (40) for P, R, R_n, X, Y, Q̃ and β̃.
6 An Example where L_n is Necessary

To further demonstrate the usefulness of the nonlinear injection term, we present a simple example for which inequality (21) is feasible but cannot be satisfied with L_n = 0.

Consider a system with input u and output y described by
\[
\ddot{y} + |\dot{y}|\dot{y} = u.
\]
With x = (y, ẏ) and p = |ẏ|ẏ, this system is described by (1) with
\[
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
B = \begin{pmatrix} 0 \\ -1 \end{pmatrix}, \qquad
C = \begin{pmatrix} 1 & 0 \end{pmatrix}, \qquad
D = 0.
\]
The nonlinear term p is described by (2) with
\[
\phi(w, q) = |q|q \quad \text{and} \quad C_q = \begin{pmatrix} 0 & 1 \end{pmatrix}, \quad D_{qp} = 0.
\]
Since φ is a nondecreasing function, it satisfies the incremental quadratic inequality (3) with
\[
M = \begin{pmatrix} 0 & \lambda \\ \lambda & 0 \end{pmatrix} \quad \text{where} \quad \lambda > 0.
\]
Since D = 0, we can consider L_3 = 0 without loss of generality; see Remark 2. In this case inequality (21) reduces to
\[
\begin{pmatrix}
P(A+LC) + (A+LC)^{T}P + 2\alpha P & PB + \lambda(C_q + L_nC)^{T} \\
B^{T}P + \lambda(C_q + L_nC) & 0
\end{pmatrix} \le 0. \tag{42}
\]
This inequality is satisfied if and only if
\[
P(A+LC) + (A+LC)^{T}P + 2\alpha P \le 0 \tag{43}
\]
\[
PB + \lambda(C_q + L_nC)^{T} = 0. \tag{44}
\]
The last equation is equivalent to
\[
p_{21} = \lambda L_n \quad \text{and} \quad p_{22} = \lambda, \qquad \text{where} \quad
P = \begin{pmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{pmatrix}.
\]
Thus, we must have
\[
\lambda = p_{22} \quad \text{and} \quad L_n = p_{21}/p_{22}. \tag{45}
\]
We now show that it is impossible to satisfy inequality (43) with p_{21} = 0. With L = [l_1\ l_2]^{T}, we have
\[
A + LC = \begin{pmatrix} l_1 & 1 \\ l_2 & 0 \end{pmatrix}.
\]
Considering the (2,2) element of inequality (43), we must have
\[
2p_{21} + 2\alpha p_{22} \le 0;
\]
so p_{21} ≤ −αp_{22} < 0. It immediately follows that p_{21} cannot be zero; hence L_n = p_{21}/p_{22} must be nonzero. In particular, since p_{22} > 0, we have
\[
L_n \le -\alpha < 0.
\]
We conclude that inequality (42) cannot be satisfied with L_n = 0.

Note that inequality (42) can readily be satisfied by first choosing any matrix L such that the eigenvalues of A + LC have negative real part; this is possible since (C, A) is observable. Now choose any positive definite symmetric Q and solve the Lyapunov equation
\[
P(A+LC) + (A+LC)^{T}P + Q = 0
\]
for P, which will be positive definite symmetric. Let α be half the minimum eigenvalue of P^{-1}Q; then inequality (43) is satisfied. Finally, let L_n = p_{21}/p_{22} and λ = p_{22} to satisfy (44). Satisfaction of (43) and (44) implies satisfaction of (42).
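The closing construction can be carried out concretely and tested in simulation. The sketch below is illustrative (the gains, input and integration scheme are choices of this note): L = [−3, −2]^T places the eigenvalues of A + LC at −1 and −2; solving the Lyapunov equation with Q = I gives P = [[0.5, −0.5],[−0.5, 1]], hence λ = p_{22} = 1 and L_n = p_{21}/p_{22} = −0.5 by (45):

```python
import numpy as np

# Observer (12)-(13) for  ydd + |yd| yd = u  with L = [-3, -2]^T, Ln = -0.5,
# L3 = 0, simulated with forward Euler.  The estimation error decays roughly
# like exp(-alpha t), as guaranteed by Theorem 1.
L, Ln = np.array([-3.0, -2.0]), -0.5
phi = lambda q: abs(q) * q               # the monotone nonlinearity p = |yd| yd

dt, T = 1e-3, 20.0
x  = np.array([0.0, 0.0])                # plant state (y, yd)
xh = np.array([1.0, 1.0])                # observer state estimate
e0 = np.linalg.norm(xh - x)
for k in range(int(T / dt)):
    u = np.sin(k * dt)                   # an arbitrary bounded input
    y, yh = x[0], xh[0]                  # C = [1 0], D = 0, h = 0
    ph = phi(xh[1] + Ln * (yh - y))      # nonlinear injection term of (13)
    x  = x + dt * np.array([x[1], u - phi(x[1])])
    xh = xh + dt * (np.array([xh[1], u - ph]) + L * (yh - y))
ratio = np.linalg.norm(xh - x) / e0
print(ratio)                             # several orders of magnitude below 1
```

Repeating the run with the injection term removed (p̂ = φ(x̂_2)) illustrates why the analysis above requires L_n ≠ 0: the quadratic certificate (42) is then infeasible.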
7 Examples of Nonlinearities Satisfying an Incremental Quadratic Inequality

In this section, we consider some common classes of nonlinearities and present multiplier matrices which demonstrate that these nonlinearities satisfy the incremental quadratic inequality (3). We also present additional conditions under which the nonlinearities satisfy Condition 1. The first two classes include globally Lipschitz nonlinearities, monotonic nonlinearities, and nonlinearities in bounded and unbounded sectors, which are also studied in [7], [5], and [6]. Then, we consider nonlinearities that can be parameterized by a set of matrices; in particular, we consider polytopic and conic sets. These parameterizations are useful for fully exploiting the structure of a nonlinear term.
7.1 Incrementally Sector Bounded Nonlinearities

The nonlinearities considered in this section are characterized by two fixed matrices K_1, K_2 and a set X of symmetric positive definite matrices. In particular, we consider nonlinearities φ which satisfy
\[
(\delta\phi - K_1\delta q)^{T}X(K_2\delta q - \delta\phi) \ge 0 \quad \text{for all } X \in \mathcal{X}, \tag{46}
\]
and for all w and q_1, q_2, where
\[
\delta\phi := \phi(w,q_2) - \phi(w,q_1) \quad \text{and} \quad \delta q := q_2 - q_1. \tag{47}
\]
Without loss of generality, we assume that the set X is invariant under multiplication by a positive number. It readily follows from (46) that a set M of multiplier matrices, which ensures that the nonlinearities under consideration satisfy incremental quadratic inequality (3), is given by
\[
\mathcal{M} = \left\{ \begin{pmatrix} -K_1^{T}XK_2 - K_2^{T}XK_1 & (K_1+K_2)^{T}X \\ X(K_1+K_2) & -2X \end{pmatrix} : X \in \mathcal{X} \right\}.
\]
To satisfy Condition 1, suppose that there exists a nonzero scalar σ such that S_2 + σ²S_1 is nonsingular, where S_1 := K_1D_{qp} − I and S_2 := K_2D_{qp} − I. One can verify by substitution that the following equality holds:
\[
2\begin{pmatrix} -K_1^{T}XK_2 - K_2^{T}XK_1 & (K_1+K_2)^{T}X \\ X(K_1+K_2) & -2X \end{pmatrix}
= T^{T}\begin{pmatrix} X & 0 \\ 0 & -\sigma^{-2}X \end{pmatrix}T
\]
where
\[
T = \begin{pmatrix} \sigma^{-1}K_2 - \sigma K_1 & (\sigma - \sigma^{-1})I \\ K_2 + \sigma^{2}K_1 & -(1 + \sigma^{2})I \end{pmatrix}.
\]
Here Γ_{22} = S_2 + σ²S_1 is nonsingular. Therefore, Condition 1 is satisfied with the matrix T defined above and
\[
\mathcal{N} = \{ (X, \sigma^{-2}X) : X \in \mathcal{X} \}.
\]
As a specific example of a nonlinearity under consideration, consider a globally Lipschitz nonlinearity which satisfies ‖δφ‖ ≤ γ‖δq‖ for some γ > 0. In this case, inequality (46) holds with K_1 = −γI, K_2 = γI, and X = {λI : λ > 0}.

Remark 9 When q and p are scalars and C_q ≠ 0, one can always choose a nonzero scalar σ such that S_2 + σ²S_1 is nonzero. In the trivial case for which C_q = 0, the term p depends only on w and can be "lumped" with f and h. To prove the above claim, note that if S_2 + σ²S_1 is zero for all σ ≠ 0, then S_1 = S_2 = 0. In this case, D_{qp} ≠ 0 and K_1 = K_2 = 1/D_{qp}. It now follows from the incremental inequality (46) that δφ = Kδq, where K := K_1 = K_2 ≠ 0. Recalling the definition of the uncertain term p, we see that for all t, u and x_1, x_2 we have
\[
\delta p = K(C_q\delta x + D_{qp}\delta p) = KC_q\delta x + \delta p
\]
where δx = x_2 − x_1 and δp = p(t, x_2, u) − p(t, x_1, u). This implies that C_qδx = 0, which is false for arbitrary x_1, x_2. Consequently, Condition 1 can always be satisfied in the scalar case.
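The globally Lipschitz case above can be checked numerically. The sketch below (an illustration; the particular φ is an assumption of this note) uses the componentwise 1-Lipschitz map φ(q) = sin(q), builds the multiplier from (46) with K_1 = −I, K_2 = I and X = I, and samples the δQC (3):

```python
import numpy as np

# Illustration: phi(q) = sin(q) (componentwise) is globally Lipschitz with
# gamma = 1, so (46) holds with K1 = -I, K2 = I and X = lam*I.  The resulting
# Section 7.1 multiplier reduces to M = [[2*gamma^2*I, 0], [0, -2*I]] here.
gamma = 1.0
K1, K2, X = -gamma * np.eye(2), gamma * np.eye(2), np.eye(2)
M = np.block([[-K1.T @ X @ K2 - K2.T @ X @ K1, (K1 + K2).T @ X],
              [X @ (K1 + K2),                  -2.0 * X       ]])

phi = lambda q: np.sin(q)
rng = np.random.default_rng(1)
worst = np.inf
for _ in range(1000):
    q1, q2 = rng.uniform(-4.0, 4.0, (2, 2))
    v = np.concatenate([q2 - q1, phi(q2) - phi(q1)])
    worst = min(worst, v @ M @ v)   # = 2*(||dq||^2 - ||dphi||^2) here
print(worst)                        # nonnegative up to round-off
```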
7.2 Incrementally Positive Real Nonlinearities

This class of nonlinearities is characterized by a fixed matrix K and a set X of symmetric positive definite matrices. Specifically, for all w and q_1, q_2,
\[
\delta q^{T}X\delta\phi \ge \delta q^{T}XK\delta q \quad \text{for all } X \in \mathcal{X}, \tag{48}
\]
where δq and δφ are as defined in (47). It is clear from (48) that, without loss of generality, we can assume that the set X of matrices is invariant under multiplication by a positive scalar. It readily follows from (48) that a set M of incremental multiplier matrices for the nonlinearities under consideration is given by
\[
\mathcal{M} = \left\{ \begin{pmatrix} -XK - K^{T}X & X \\ X & 0 \end{pmatrix} : X \in \mathcal{X} \right\}.
\]
To satisfy Condition 1, suppose there exists a nonzero scalar σ such that D_{qp} + σ²(KD_{qp} − I) is nonsingular. Then we can readily show that
\[
2\begin{pmatrix} -XK - K^{T}X & X \\ X & 0 \end{pmatrix}
= T^{T}\begin{pmatrix} X & 0 \\ 0 & -\sigma^{-2}X \end{pmatrix}T
\quad \text{where} \quad
T = \begin{pmatrix} \sigma^{-1}I - \sigma K & \sigma I \\ I + \sigma^{2}K & -\sigma^{2}I \end{pmatrix}.
\]
Here Γ_{22} = D_{qp} + σ²(KD_{qp} − I) is nonsingular. Therefore, Condition 1 is satisfied with the matrix T defined above and
\[
\mathcal{N} = \{ (X, \sigma^{-2}X) : X \in \mathcal{X} \}.
\]
As a specific example of a nonlinearity under consideration, consider a nondecreasing nonlinearity which satisfies δq^Tδφ ≥ 0. In this case, inequality (48) holds with K = 0 and X = {λI : λ > 0}.

Remark 10 When q and p are scalars, one can always choose a nonzero scalar σ such that D_{qp} + σ²(KD_{qp} − I) is nonzero. To prove this claim, note that if D_{qp} + σ²(KD_{qp} − I) is zero for all σ ≠ 0, then one has the contradiction that D_{qp} = 0 and KD_{qp} − I = 0. Consequently, Condition 1 can always be satisfied in the scalar case.
7.3
Nonlinearities with Matrix Parameterizations
In this section, we consider nonlinear uncertain terms that are characterized by some known set Ω of matrices. Specifically, we assume that there is a known set Ω of real lp ×lq matrices with the following property. For each w, q1 and q2 , there is a matrix in Θ ∈ Ω such that δφ = Θδq
(49)
where δφ and δq are defined in (47). For example, suppose that φ is a function which is continuously differentiable with respect to its second argument, and for each w and q, the derivative ∂φ (w, q) lies in some ∂q known closed convex set Ω, that is, ∂φ (w, q) ∈ Ω ∂q
for all w and q .
(50)
Then it follows from Lemma 3.5.1 in [12] that, for each w, q1 and q2, there exists a matrix Θ ∈ Ω such that

    φ(w, q2) − φ(w, q1) = Θ(q2 − q1).

Hence, for every w, q1 and q2, there is a matrix Θ in Ω such that (49) holds.

Since δφ = Θ δq for some Θ in Ω, a symmetric matrix M satisfies the multiplier condition (3) if

\[
\begin{pmatrix} I \\ \Theta \end{pmatrix}^{T} M \begin{pmatrix} I \\ \Theta \end{pmatrix} \ge 0 \quad \text{for all } \Theta \in \Omega .
\]

Let

\[
M = \begin{pmatrix} M_{11} & M_{12} \\ M_{12}^{T} & M_{22} \end{pmatrix}
\]

where the partitioning is in accordance with (δq, δφ). Then the above inequalities can be expressed as

\[
M_{11} + M_{12}\Theta + \Theta^{T} M_{12}^{T} + \Theta^{T} M_{22}\Theta \ge 0 \quad \text{for all } \Theta \in \Omega .
\]

We restrict consideration to those matrices M that satisfy M22 ≤ 0. When (I − L3D)Dqp + LnD = 0 and I − L3D is nonsingular, this entails no loss of generality; this is a consequence of inequality (21). With M22 ≤ 0, the above inequalities are equivalent to

\[
\begin{pmatrix} M_{11} + \Theta^{T} M_{12}^{T} + M_{12}\Theta & \Theta^{T} M_{22} \\ M_{22}\Theta & -M_{22} \end{pmatrix} \ge 0 \quad \text{for all } \Theta \in \Omega . \tag{51}
\]

Thus any symmetric matrix M that satisfies (51) is a multiplier matrix.
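The multiplier test above is a plain semidefiniteness check and can be evaluated numerically. The sketch below is an illustration, not the paper's code; the matrices M11, M12, M22 and the sampled Θ are arbitrary test data chosen for the demonstration:

```python
import numpy as np

# Illustrative check (not from the paper): given a candidate multiplier
# M = [[M11, M12], [M12^T, M22]] partitioned according to (dq, dphi),
# verify the condition [I; Theta]^T M [I; Theta] >= 0 for a given Theta.
def multiplier_holds(M11, M12, M22, Theta, tol=1e-9):
    I = np.eye(Theta.shape[1])
    stack = np.vstack([I, Theta])               # the stacked matrix [I; Theta]
    M = np.block([[M11, M12], [M12.T, M22]])
    Q = stack.T @ M @ stack                     # quadratic form acting on dq
    return np.min(np.linalg.eigvalsh((Q + Q.T) / 2)) >= -tol

# Example: M11 = I, M12 = 0, M22 = -I accepts any contraction Theta,
# since the condition reduces to I - Theta^T Theta >= 0.
Theta = np.array([[0.3, -0.2], [0.1, 0.4]])     # spectral norm < 1
ok = multiplier_holds(np.eye(2), np.zeros((2, 2)), -np.eye(2), Theta)
print(ok)   # True
```

This particular (M11, M12, M22) choice corresponds to the familiar incremental norm-bound (Lipschitz-type) constraint, one member of the family parameterized by (51).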
7.3.1
Polytopic case
Here we consider the case in which

    Ω = Co{Θ1, …, Θν},

that is, Ω is the set of matrices Θ of the form Θ = Σ_{k=1}^{ν} λkΘk where λk ≥ 0, k = 1,…,ν, and Σ_{k=1}^{ν} λk = 1. In this case, condition (51) is satisfied if and only if

\[
\begin{pmatrix} M_{11} + \Theta_k^{T} M_{12}^{T} + M_{12}\Theta_k & \Theta_k^{T} M_{22} \\ M_{22}\Theta_k & -M_{22} \end{pmatrix} \ge 0 \quad \text{for } k = 1, \dots, \nu . \tag{52}
\]

Since M22 ≤ 0, the above inequalities are equivalent to M22 ≤ 0 and

\[
M_{11} + M_{12}\Theta_k + \Theta_k^{T} M_{12}^{T} + \Theta_k^{T} M_{22}\Theta_k \ge 0 \quad \text{for } k = 1, \dots, \nu .
\]

Thus, the set M of symmetric matrices M which satisfy

\[
\begin{pmatrix} I \\ \Theta_k \end{pmatrix}^{T} M \begin{pmatrix} I \\ \Theta_k \end{pmatrix} \ge 0 \ \text{ for } k = 1,\dots,\nu \quad \text{and} \quad M_{22} \le 0 , \tag{53}
\]

is a set of incremental multiplier matrices.

To obtain a set of multiplier matrices satisfying Condition 1, choose any nonsingular matrix T and consider multiplier matrices of the form given in (32) where Xᵀ = X > 0 and Yᵀ = Y ≥ 0. A matrix M of this structure satisfies inequalities (53) if and only if X and Y satisfy

\[
\begin{pmatrix} I \\ \Theta_k \end{pmatrix}^{T} T^{T} \begin{pmatrix} X & 0 \\ 0 & -Y \end{pmatrix} T \begin{pmatrix} I \\ \Theta_k \end{pmatrix} \ge 0 \ \text{ for } k = 1,\dots,\nu, \qquad T_{12}^{T} X T_{12} - T_{22}^{T} Y T_{22} \le 0 . \tag{54}
\]

Then, provided T22 + T21Dqp is invertible, Condition 1 is satisfied with

    N = { (X, Y) : Xᵀ = X > 0 and Yᵀ = Y ≥ 0 satisfy (54) }.

Once T is chosen, (54) is a set of linear matrix inequalities in X and Y. However, it is not clear how to choose T so as to yield a large (in some sense) subset of multiplier matrices; therefore, T is treated as a design parameter at this point. For example, the simple choice T = I satisfies Condition 1 with N defined by

    N = { (X, Y) : Xᵀ = X > 0 and Yᵀ = Y ≥ 0 satisfy (55) }

with

    X − Θkᵀ Y Θk ≥ 0 for k = 1, …, ν.  (55)
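The key fact behind the vertex conditions is that, because M22 = −Y ≤ 0, satisfaction at the vertices extends to the whole polytope. The sketch below (an illustration under stated assumptions, not the paper's code; the vertices and the candidate pair (X, Y) are test data) checks this numerically for the T = I case:

```python
import numpy as np

# Illustration (not the paper's code): with T = I the vertex conditions (55)
# read X - Theta_k^T Y Theta_k >= 0. Because Y >= 0, the map
# Theta -> X - Theta^T Y Theta is matrix-concave, so vertex satisfaction
# extends to every convex combination of the vertices.
def psd(A, tol=1e-9):
    return np.min(np.linalg.eigvalsh((A + A.T) / 2)) >= -tol

vertices = [np.diag(d) for d in ([1, 1], [1, -1], [-1, 1], [-1, -1])]
X, Y = 2.0 * np.eye(2), np.eye(2)               # candidate pair (X, Y)

assert all(psd(X - Th.T @ Y @ Th) for Th in vertices)       # condition (55)

rng = np.random.default_rng(1)
for _ in range(200):
    lam = rng.random(4); lam /= lam.sum()                   # convex weights
    Th = sum(l * V for l, V in zip(lam, vertices))
    assert psd(X - Th.T @ Y @ Th)                           # holds on the hull
print("vertex LMIs imply the constraint on the whole polytope")
```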
Example 3 As an example of a nonlinearity treated in this section, consider

    φ(w, q) = ( sin q1   sin q2 )ᵀ.

Here

\[
\frac{\partial \phi}{\partial q}(w, q) = \begin{pmatrix} \cos q_1 & 0 \\ 0 & \cos q_2 \end{pmatrix} .
\]

Since each diagonal entry lies in [−1, 1], Ω is the polytope defined by the four matrices

\[
\Theta_1 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad
\Theta_2 = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}, \quad
\Theta_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad
\Theta_4 = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} .
\]
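As a spot-check (illustrative, not from the paper), the Jacobian diag(cos q1, cos q2) of Example 3 can be written explicitly as a convex combination of the four corner matrices diag(±1, ±1), with product-form weights built from (1 ± cos qi)/2:

```python
import numpy as np

# Express diag(a, b), a, b in [-1, 1], as a convex combination of the
# corners diag(+-1, +-1). The weights below are an explicit construction
# chosen for this illustration.
def hull_weights(a, b):
    wa, wb = (1 + a) / 2, (1 + b) / 2     # map a, b in [-1,1] to [0,1]
    return np.array([wa * wb, wa * (1 - wb), (1 - wa) * wb, (1 - wa) * (1 - wb)])

corners = [np.diag([s1, s2]) for s1 in (1, -1) for s2 in (1, -1)]
q = np.array([0.7, -1.9])
a, b = np.cos(q)
lam = hull_weights(a, b)
Theta = sum(l * C for l, C in zip(lam, corners))

assert abs(lam.sum() - 1) < 1e-12 and np.all(lam >= 0)   # valid convex weights
assert np.allclose(Theta, np.diag([a, b]))               # Jacobian recovered
```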
Example 4 As another example, consider φ(w, q) = sin q1 sin q2. Here

\[
\frac{\partial \phi}{\partial q}(w, q) = \begin{pmatrix} \cos q_1 \sin q_2 & \sin q_1 \cos q_2 \end{pmatrix} .
\]

Hence Ω is the polytope defined by the four matrices

\[
\Theta_1 = \begin{pmatrix} 1 & 1 \end{pmatrix}, \quad
\Theta_2 = \begin{pmatrix} -1 & 1 \end{pmatrix}, \quad
\Theta_3 = \begin{pmatrix} 1 & -1 \end{pmatrix}, \quad
\Theta_4 = \begin{pmatrix} -1 & -1 \end{pmatrix} .
\]

7.3.2
Conic Case
In this case, Ω is a closed convex cone defined by

    Ω = Cone{Θ1, …, Θν},

that is, Ω is the set of matrices Θ of the form Θ = Σ_{k=1}^{ν} λkΘk where λk ≥ 0, k = 1,…,ν.

As in the previous section, we only consider multiplier matrices with M22 ≤ 0. In this case, condition (51) is satisfied if and only if

\[
\begin{pmatrix} M_{11} + \Theta^{T} M_{12}^{T} + M_{12}\Theta & \Theta^{T} M_{22} \\ M_{22}\Theta & -M_{22} \end{pmatrix} \ge 0 \quad \text{for all } \Theta \in \mathrm{Cone}\{\Theta_1, \dots, \Theta_\nu\} . \tag{56}
\]

Consider any matrix Θk. For any λ ≥ 0, the matrix λΘk is also in Cone{Θ1, …, Θν}; hence

\[
\begin{pmatrix} M_{11} + \lambda\Theta_k^{T} M_{12}^{T} + \lambda M_{12}\Theta_k & \lambda\Theta_k^{T} M_{22} \\ \lambda M_{22}\Theta_k & -M_{22} \end{pmatrix} \ge 0 .
\]

Considering λ = 0, we obtain

\[
\begin{pmatrix} M_{11} & 0 \\ 0 & -M_{22} \end{pmatrix} \ge 0 ,
\]

that is, M11 ≥ 0 and M22 ≤ 0. Considering λ > 0 and applying the congruence transformation diag(λ^{−1/2}I, λ^{−1/2}I), we obtain

\[
\begin{pmatrix} \lambda^{-1} M_{11} + \Theta_k^{T} M_{12}^{T} + M_{12}\Theta_k & \Theta_k^{T} M_{22} \\ M_{22}\Theta_k & -\lambda^{-1} M_{22} \end{pmatrix} \ge 0 .
\]

Since λ can be arbitrarily large, we must have

\[
\begin{pmatrix} \Theta_k^{T} M_{12}^{T} + M_{12}\Theta_k & \Theta_k^{T} M_{22} \\ M_{22}\Theta_k & 0 \end{pmatrix} \ge 0 .
\]

Hence, satisfaction of (56) implies that

    M11 ≥ 0,  M22 ≤ 0,  M12Θk + Θkᵀ M12ᵀ ≥ 0,  M22Θk = 0  for k = 1, …, ν.  (57)

Clearly, satisfaction of condition (57) implies (56). Thus any symmetric matrix M that satisfies (57) is a multiplier matrix for this case.

To obtain a set of multiplier matrices satisfying Condition 1, one could choose any nonsingular matrix T and consider multiplier matrices of the form given in (32). Once T is chosen, (57) defines a set of linear matrix inequalities in X and Y. However, it is not clear how to choose T so as to yield a large subset of multipliers in some sense; therefore, T is treated as a design parameter at this point. For example, a simple choice is

\[
T = \begin{pmatrix} I & F \\ F^{T} & -I \end{pmatrix} \tag{58}
\]

where F is a full rank matrix of appropriate dimensions satisfying FFᵀ = I or FᵀF = I as appropriate. With this choice,

    M11 = X − FYFᵀ,  M12 = XF + FY,  M22 = FᵀXF − Y.

Consider the case FFᵀ = I and let Y = FᵀXF. Then M22 = 0, M11 = 0, and M12 = 2XF. Hence, Condition 1 is satisfied with

    N = { (X, FᵀXF) : Xᵀ = X > 0 and (59) is satisfied },

where

    XFΘk + ΘkᵀFᵀX ≥ 0 for k = 1, …, ν.  (59)

Consider now the case FᵀF = I and let Y = FᵀXF. Then M22 = 0, M11 = X − FFᵀXFFᵀ, and M12 = (I + FFᵀ)XF. Hence, Condition 1 is satisfied with

    N = { (X, FᵀXF) : Xᵀ = X > 0, X − FFᵀXFFᵀ ≥ 0, and (60) holds },

where

    (I + FFᵀ)XFΘk + ΘkᵀFᵀX(I + FFᵀ) ≥ 0 for k = 1, …, ν.  (60)
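The reduction from (57) to a generator-wise test can be checked numerically. The sketch below (an illustration under stated assumptions, not the paper's code) takes the choice (58) with F = I, so that M11 = 0, M22 = 0, M12 = 2X and (57) reduces to XΘk + ΘkᵀX ≥ 0; the generators and X are arbitrary test data:

```python
import numpy as np

# Illustration: with F = I in (58) and Y = X, condition (57) reduces to
# X Theta_k + Theta_k^T X >= 0 on the generators. We check that the
# resulting M then satisfies (56) for random matrices in the cone.
def psd(A, tol=1e-9):
    return np.min(np.linalg.eigvalsh((A + A.T) / 2)) >= -tol

gens = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # cone generators
X = np.diag([3.0, 0.5])                              # X = X^T > 0

assert all(psd(X @ Th + Th.T @ X) for Th in gens)    # generator-wise (57)

rng = np.random.default_rng(2)
M12 = 2.0 * X                                        # M11 = 0, M22 = 0
for _ in range(200):
    lam = rng.random(2) * 10.0                       # arbitrary cone scaling
    Th = lam[0] * gens[0] + lam[1] * gens[1]
    big = np.block([[Th.T @ M12.T + M12 @ Th, np.zeros((2, 2))],
                    [np.zeros((2, 2)), np.zeros((2, 2))]])
    assert psd(big)                                  # inequality (56)
print("condition (57) carried over to the whole cone")
```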
Example 5 As an example of a nonlinearity treated in this section, consider

    φ(w, q) = ( q³   q⁵ )ᵀ.

Here

    ∂φ/∂q(w, q) = ( 3q²   5q⁴ )ᵀ.

Hence Ω is the cone defined by the two matrices

    Θ1 = ( 1   0 )ᵀ,  Θ2 = ( 0   1 )ᵀ.

Example 6 As another example, consider

    φ(w, q) = e^{q1 + q2³}.

Here,

    ∂φ/∂q(w, q) = ( e^{q1 + q2³}   3q2² e^{q1 + q2³} ).

Hence Ω is the cone defined by the two matrices

    Θ1 = ( 1   0 ),  Θ2 = ( 0   1 ).
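As a small spot-check (illustrative, not from the paper), the Jacobian in Example 5 stays in the stated cone because both of its entries are nonnegative for every q, with cone coefficients λ1 = 3q² and λ2 = 5q⁴:

```python
import numpy as np

# Cone-membership check for Example 5: (3q^2, 5q^4)^T lies in
# Cone{(1,0)^T, (0,1)^T} since both entries are nonnegative for all q.
theta1, theta2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for q in np.linspace(-3.0, 3.0, 25):
    jac = np.array([3 * q**2, 5 * q**4])
    lam = jac.copy()            # coefficients in the cone expansion
    assert np.all(lam >= 0)
    assert np.allclose(lam[0] * theta1 + lam[1] * theta2, jac)
print("Jacobian stays in the cone for all sampled q")
```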
7.4
Multiple Nonlinearities
In this subsection, we consider multiple nonlinearities, each with a different characterization, that is,

    p(t, x, u) = ( p1(t, x, u), …, pµ(t, x, u) ),

where pk(t, x, u) = φk(w, qk) with qk = Cq,k x + Dqp,k p for k = 1, …, µ. For each k, there is a set Mk of incremental multiplier matrices such that for all Mk ∈ Mk and all w, qk, q̃k, we have

\[
\begin{pmatrix} q_k - \tilde q_k \\ \phi_k(w, q_k) - \phi_k(w, \tilde q_k) \end{pmatrix}^{T} M_k \begin{pmatrix} q_k - \tilde q_k \\ \phi_k(w, q_k) - \phi_k(w, \tilde q_k) \end{pmatrix} \ge 0 . \tag{61}
\]
The results of this section also contain the feasibility relaxations, obtained in [13] via strictly positive real conditions, for multivariable monotone nonlinearities. If we define q = (q1, …, qµ) and φ(w, q) = (φ1(w, q1), …, φµ(w, qµ)), then p(t, x, u) = φ(w, q) with q = Cq x + Dqp p, where¹

    Cq = diag(Cq,1, …, Cq,µ) and Dqp = diag(Dqp,1, …, Dqp,µ),

and φ satisfies (3) with M, where for each M ∈ M we have

    Mij = diag(Mij,1, …, Mij,µ) for i, j = 1, 2,

with

\[
\begin{pmatrix} M_{11,k} & M_{12,k} \\ M_{12,k}^{T} & M_{22,k} \end{pmatrix} = M_k .
\]

Now, suppose that for each k = 1, …, µ, Condition 1 is satisfied by a set Mk of incremental multiplier matrices for φk with some Tk and set of pairs (Xk, Yk) ∈ Nk. Then Condition 1 is also satisfied by φ with matrix pairs (X, Y) ∈ N and transformation T where

    X = diag(X1, …, Xµ),  Y = diag(Y1, …, Yµ),

and

    Tij = diag(Tij,1, …, Tij,µ) for i, j = 1, 2,

with

\[
\begin{pmatrix} T_{11,k} & T_{12,k} \\ T_{21,k} & T_{22,k} \end{pmatrix} = T_k .
\]

7.4.1
A special case
In this subsection, we consider an important special case of multiple uncertain/nonlinear terms and provide a richer set of incremental multiplier matrices than would be obtained using the general approach of the previous section. Suppose the functions φ1, …, φµ are scalar-valued and, for k = 1, …, µ, they satisfy

    σ1k (qk − q̃k)² ≤ (φk(w, qk) − φk(w, q̃k))(qk − q̃k) ≤ σ2k (qk − q̃k)²

where q1, …, qµ are scalars. In this case,

    φk(w, qk) − φk(w, q̃k) = θk (qk − q̃k)  where  σ1k ≤ θk ≤ σ2k.

¹ For any set of matrices Q1, …, Qµ, the matrix diag(Q1, …, Qµ) is the block diagonal matrix of appropriate dimensions with Q1, …, Qµ on the diagonal and zero off-diagonal blocks.
One could use the results of the previous subsection to obtain an incremental multiplier set based on incremental multiplier sets for φ1, …, φµ. However, one can obtain a richer set of incremental multiplier matrices by proceeding as follows. Using the notation of the preceding section, we obtain a single uncertain/nonlinear term described by

    φ(w, q) − φ(w, q̃) = Θ(q − q̃)

where

    Θ = diag(θ1, …, θµ)  and  σ1k ≤ θk ≤ σ2k for k = 1, …, µ.

Thus Θ ∈ Co{Θ1, …, Θν}, where the ν = 2^µ matrices Θ1, …, Θν correspond to the extreme values σ1k, σ2k of the parameters θk. We now have a polytopic description of φ and can obtain a set of incremental multiplier matrices as described in Section 7.3.1.

In a similar fashion, one can use the results of Section 7.3.2 to treat the case in which the functions φ1, …, φµ are nondecreasing scalar-valued functions, that is, they satisfy

    (φk(w, qk) − φk(w, q̃k))(qk − q̃k) ≥ 0.

In this case, we have

    φk(w, qk) − φk(w, q̃k) = θk(qk − q̃k)
where θk ≥ 0.  (62)

8
An Example: Underwater Vehicle
In this section we consider a simple model of an underwater vehicle with thruster dynamics. This example is taken from [20], where a similar observer design objective is considered in a different framework. The simplified dynamics of the vehicle are given by

    ẍ1 = −3 ẋ1|ẋ1| + u
    ẍ3 = ẋ1|ẋ1| − 10 ẋ3|ẋ3|,

where x1 is the propeller angle, x3 is the vehicle position, and u is the torque input to the propeller. It is assumed that we can only measure x1 and x3; the angular velocity ẋ1 of the propeller and the speed ẋ3 of the vehicle will be estimated using an observer. In this model, ẋ1|ẋ1| represents the propeller thrust and 10 ẋ3|ẋ3| represents the hydrodynamic drag on the vehicle.
Introducing the state x = (x1, ẋ1, x3, ẋ3) and the output y = (x1, x3), and letting p = (ẋ1|ẋ1|, ẋ3|ẋ3|), we can write this system in the state space form (1) with

\[
A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad
B = \begin{pmatrix} 0 & 0 \\ -3 & 0 \\ 0 & 0 \\ 1 & -10 \end{pmatrix}, \quad
C = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}, \quad
D = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix},
\]

f(w) = u and h(w) = 0. With q = (x2, x4), the nonlinear term is described by (2) where

\[
\phi(w, q) = \begin{pmatrix} q_1|q_1| \\ q_2|q_2| \end{pmatrix}, \quad
C_q = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad
D_{qp} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} .
\]
Note that the scalar function ν ↦ ν|ν| is nondecreasing. Thus φ(w, ·) is an incrementally positive real nonlinearity satisfying (48) with X being the set of matrices of the form

    X = diag(λ1, λ2)

where λ1 and λ2 are any positive scalars.

Consider any bounded piecewise continuous input u(·). Then there is a bound ρu such that |u(t)| ≤ ρu for all t ≥ t0, and

    d(x2²)/dt = 2x2(−3x2|x2| + u) ≤ −6x2²|x2| + 2ρu|x2| < 0 for |x2| > ρ, for some ρ > 0.

This implies that x2(·) is bounded when u(·) is bounded (see [18] for a Lyapunov characterization of bounded solutions). When x2(·) is bounded, a very similar argument shows that x4(·) is also bounded. Since x1(·) and x3(·) are integrals of the bounded signals x2(·) and x4(·), they are bounded on finite time intervals. Hence, when u(·) is bounded, the plant has a state motion x(·) which is defined for all t ≥ t0.

We now design an observer using the results in Corollary 2. This is done by using the LMI toolbox in MATLAB [14]. The observer gains obtained for α = 4 are

\[
L = \begin{pmatrix} -9.4678 & -0.0134 \\ 0.3072 & -21.6510 \\ -0.0039 & -19.0395 \\ -0.2699 & -211.0569 \end{pmatrix}, \quad
L_n = \begin{pmatrix} -4.4758 & 0.0189 \\ -0.3196 & -13.0741 \end{pmatrix} .
\]

A two-second simulation was carried out with initial state x(0) = (0, 0, 0, 5), initial state estimate x̂(0) = (0, 4, 0, −10), and control input

    u(t) = 5 for 0 ≤ t < 1,  −10 for 1 ≤ t < 2.

In these simulations, dotted lines represent the state estimate, which converged to the vehicle state in less than 0.5 seconds.
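The boundedness argument above can be checked numerically. The sketch below (an illustration, not the paper's simulation, which ran the full observer) integrates only the propeller-velocity dynamics ẋ2 = −3x2|x2| + u with the same piecewise-constant input, and confirms that |x2| stays below the bound √(ρu/3) suggested by d(x2²)/dt < 0 for large |x2|:

```python
import numpy as np

# Forward-Euler check of the plant boundedness claim (illustrative sketch):
# x2' = -3 x2|x2| + u with u = 5 on [0,1) and u = -10 on [1,2).
def u_of_t(t):
    return 5.0 if t < 1.0 else -10.0

dt, T = 1e-4, 2.0
x2, max_abs = 0.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    x2 += dt * (-3.0 * x2 * abs(x2) + u_of_t(t))   # Euler step
    max_abs = max(max_abs, abs(x2))

rho_u = 10.0                                       # |u(t)| <= 10 here
assert max_abs <= np.sqrt(rho_u / 3.0) + 1e-6      # bound ~ 1.826
print(f"max |x2| over [0,2] = {max_abs:.3f}")
```

The trajectory approaches the equilibria ±√(|u|/3) of each input phase monotonically, so the theoretical bound is respected throughout.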
[Figure 1 here: four simulation plots versus time t, including the state estimation traces and the norms |x| and |e|; plot data omitted.]

Figure 1: Estimating the state of an underwater vehicle
9
Conclusions
We considered the problem of state estimation for systems whose nonlinear time-varying terms satisfy an incremental quadratic inequality that is parameterized by a set of multiplier matrices. We also demonstrated that many common nonlinear/time-varying terms satisfy such an inequality. We presented observers which guarantee that the resulting state estimation error converges exponentially to zero. Observer design involves solving linear matrix inequalities (LMIs) for the observer gain matrices. The results of this paper should be useful in obtaining observer-based output feedback controllers for systems with nonlinear/time-varying terms satisfying an incremental quadratic inequality.
Acknowledgements The authors gratefully acknowledge Dr. Murat Arcak of Rensselaer Polytechnic Institute for his valuable comments and suggestions. The work described in this paper was performed at Purdue University. The writing and publication of this paper was supported in part by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
A
Appendix
A.1
A Note on the L3 Correction Term
Here we show that if I − DL3 is invertible, then one can eliminate the L3 correction term in a proposed observer (12)-(13) by suitable modification of the other two correction terms. This modification consists of replacing L and Ln with

    L̃ := (L + BL3)(I − DL3)⁻¹  (63)

and

    L̃n := Ln(I − DL3)⁻¹,  (64)

respectively.

We will need the following facts. Suppose P and Q are any two matrices of appropriate dimensions with I − PQ invertible. Then I − QP is invertible and we have the following identities:

    (I − PQ)⁻¹P = P(I − QP)⁻¹  and  I + (I − PQ)⁻¹PQ = (I − PQ)⁻¹.
We first note that

    ŷ − y = Ce + D(p̂ − p),  (65)

where e = x̂ − x. Hence, (13) can be rearranged to yield

    (I − L3D)p̂ = p̃ + L3Ce − L3Dp

where

    p̃ := ψ(w, Cq x̂ + LnCe + LnD(p̂ − p)).  (66)

Since I − DL3 is invertible, I − L3D is also invertible and

    p̂ = (I − L3D)⁻¹p̃ + (I − L3D)⁻¹L3Ce − (I − L3D)⁻¹L3Dp;

hence,

    p̂ − p = (I − L3D)⁻¹(p̃ − p + L3Ce).  (67)

Considering the argument of ψ in (66), we now see that

    LnCe + LnD(p̂ − p) = [Ln + LnD(I − L3D)⁻¹L3]Ce + LnD(I − L3D)⁻¹(p̃ − p)
                      = Ln[I + (I − DL3)⁻¹DL3]Ce + Ln(I − DL3)⁻¹D(p̃ − p)
                      = L̃n[Ce + D(p̃ − p)]  (68)

where L̃n = Ln(I − DL3)⁻¹. If we let

    ỹ = Cx̂ + Dp̃ + h(w)  (69)

then

    ỹ − y = Ce + D(p̃ − p)  (70)

and, recalling (66) and (68),

    p̃ = ψ(w, Cq x̂ + L̃n(ỹ − y)).  (71)
Using (65) and (67), we see that

    Bp̂ + L(ŷ − y) = Bp + (B + LD)(p̂ − p) + LCe
                   = Bp + (B + LD)(I − L3D)⁻¹(p̃ − p) + L̃Ce  (72)

where

    L̃ := L + (B + LD)(I − L3D)⁻¹L3
       = L[I + D(I − L3D)⁻¹L3] + B(I − L3D)⁻¹L3
       = L[I + (I − DL3)⁻¹DL3] + BL3(I − DL3)⁻¹
       = (L + BL3)(I − DL3)⁻¹.

Now note that

    B + L̃D = B + (L + BL3)(I − DL3)⁻¹D
            = B[I + L3(I − DL3)⁻¹D] + L(I − DL3)⁻¹D
            = B(I − L3D)⁻¹ + LD(I − L3D)⁻¹
            = (B + LD)(I − L3D)⁻¹.

Hence, recalling (72),

    Bp̂ + L(ŷ − y) = Bp + (B + L̃D)(p̃ − p) + L̃C(x̂ − x)
                   = Bp̃ + L̃[Ce + D(p̃ − p)]
                   = Bp̃ + L̃(ỹ − y).  (73)

It now follows from (73), (69) and (71) that the original observer is equivalent to an observer with L3 = 0 and with L and Ln replaced by L̃ and L̃n, respectively.
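The push-through identities and the resulting gain equivalence can be verified numerically. The sketch below is an illustration (the dimensions and the random matrices are assumptions chosen for the test, with small scaling to keep I − DL3 invertible):

```python
import numpy as np

# Numerical verification (illustrative) of the identities
#   (I - PQ)^{-1} P = P (I - QP)^{-1},  I + (I - PQ)^{-1} PQ = (I - PQ)^{-1},
# and of the gain equivalence
#   (L + B L3)(I - D L3)^{-1} = L + (B + L D)(I - L3 D)^{-1} L3 .
rng = np.random.default_rng(3)
n, m = 4, 2                                   # assumed test dimensions
P, Q = rng.normal(size=(n, m)), 0.1 * rng.normal(size=(m, n))
I_n, I_m = np.eye(n), np.eye(m)

lhs = np.linalg.solve(I_n - P @ Q, P)         # (I - PQ)^{-1} P
rhs = P @ np.linalg.solve(I_m - Q @ P, I_m)   # P (I - QP)^{-1}
assert np.allclose(lhs, rhs)
assert np.allclose(I_n + lhs @ Q, np.linalg.inv(I_n - P @ Q))

B, L = rng.normal(size=(n, m)), rng.normal(size=(n, m))
D, L3 = 0.1 * rng.normal(size=(m, m)), 0.1 * rng.normal(size=(m, m))
Lt1 = (L + B @ L3) @ np.linalg.inv(I_m - D @ L3)
Lt2 = L + (B + L @ D) @ np.linalg.inv(I_m - L3 @ D) @ L3
assert np.allclose(Lt1, Lt2)
print("identities verified numerically")
```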
A.2
On the Existence of Solutions to Nonlinear Equations
Here we present a technical result which establishes a sufficient condition for the existence of a unique solution to a parameter-dependent nonlinear equation for each value of the parameter; moreover, the solution depends continuously on the parameter. The proof of this result utilizes Lemma 2 of Section A.3.

Theorem 2 Suppose F : IRⁿ × IRᵐ → IRⁿ is a continuous function and there exists a matrix Q such that

    (η2 − η1)ᵀQ[F(η2, v) − F(η1, v)] ≤ −‖η2 − η1‖²  (74)

for all η1, η2 ∈ IRⁿ and v ∈ IRᵐ. Then F has the following properties.

• For each v ∈ IRᵐ, there is a unique vector g(v) such that F(g(v), v) = 0. Furthermore, g is continuous.

• For each v ∈ IRᵐ, the vector g(v) is the globally exponentially stable (GES) equilibrium point of the following differential equation:

    η̇ = QF(η, v).  (75)
Proof: Consider any v in IRᵐ. We first show that all solutions of the differential equation (75) are bounded. To this end, consider any solution η(·) and let V(t) = ‖η(t)‖². Utilizing inequality (74),

    V̇ = 2ηᵀQF(η, v)
      = 2(η − 0)ᵀQ[F(η, v) − F(0, v)] + 2ηᵀQF(0, v)
      ≤ −‖η‖² + 2‖QF(0, v)‖‖η‖
      = −V + 2‖QF(0, v)‖V^{1/2}.

From this last inequality, one can readily conclude that V is bounded; hence η(·) is bounded.

We now show that all solutions exponentially converge to each other. To this end, consider any two solutions η1(·), η2(·) and let V(t) = ‖η2(t) − η1(t)‖²; then, utilizing inequality (74),

    V̇ = 2(η2 − η1)ᵀQ[F(η2, v) − F(η1, v)] ≤ −2‖η2 − η1‖² = −2V,

that is, V̇ ≤ −2V. From this last inequality one can readily show that

    ‖η2(t) − η1(t)‖ ≤ e⁻ᵗ‖η2(0) − η1(0)‖  (76)

for all t ≥ 0. Since all solutions of the differential equation (75) are bounded and converge to each other, it follows from Lemma 2 that this differential equation has a unique equilibrium solution η = g(v); hence QF(g(v), v) = 0. It follows from inequality (74) that Q is nonsingular; hence, we must have F(g(v), v) = 0. It also follows from (76) that (75) is GES about g(v).

We now prove that g is continuous at every v ∈ IRᵐ. To this end, consider any δv ∈ IRᵐ. Utilizing inequality (74) with η1 = g(v) and η2 = g(v + δv), we see that

    −‖g(v + δv) − g(v)‖ ‖Q‖ ‖F(g(v + δv), v + δv) − F(g(v), v + δv)‖
    ≤ [g(v + δv) − g(v)]ᵀQ[F(g(v + δv), v + δv) − F(g(v), v + δv)]
    ≤ −‖g(v + δv) − g(v)‖²,

which in turn implies that

    ‖g(v + δv) − g(v)‖ ≤ ‖Q‖ ‖F(g(v + δv), v + δv) − F(g(v), v + δv)‖.

Noting that F(g(v + δv), v + δv) = 0 = F(g(v), v), we have

    ‖g(v + δv) − g(v)‖ ≤ ‖Q‖ ‖F(g(v), v + δv) − F(g(v), v)‖.

Recalling that F is continuous, it follows that ‖F(g(v), v + δv) − F(g(v), v)‖ can be made arbitrarily small by choosing δv sufficiently small. From this we can conclude continuity of g at v.
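Theorem 2 suggests a simple root-finding scheme: integrate η̇ = QF(η, v) until it settles at g(v). The sketch below (an illustration, not from the paper) uses the scalar map F(η, v) = −η³ − η + v with Q = 1, for which (74) holds since (η2 − η1)(F(η2, v) − F(η1, v)) = −(η2 − η1)(η2³ − η1³) − (η2 − η1)² ≤ −(η2 − η1)²:

```python
# ODE-based root finding per Theorem 2 (illustrative scalar example):
# F(eta, v) = -eta^3 - eta + v satisfies the incremental inequality (74)
# with Q = 1, so the flow of eta' = F(eta, v) converges to the unique
# root g(v) of F(., v) = 0.
def F(eta, v):
    return -eta**3 - eta + v

v = 2.0                      # then g(v) solves eta^3 + eta = 2, i.e. g(2) = 1
eta, dt = 0.0, 1e-2
for _ in range(2000):        # forward Euler over t in [0, 20]
    eta += dt * F(eta, v)

assert abs(F(eta, v)) < 1e-6     # eta is (numerically) a root of F(., v)
assert abs(eta - 1.0) < 1e-6     # matches the exact solution g(2) = 1
print(f"g(2) ~= {eta:.6f}")
```

The exponential contraction guaranteed by (76) is what makes this naive integration reliable: any initial guess converges to the same root.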
A.3
A Lemma on the Steady State Behavior of Autonomous Systems
The following lemma was utilized in the proof of Theorem 2. It utilizes the following definition for a nonlinear autonomous system described by

    ẋ = f(x),  (77)

where f : IRⁿ → IRⁿ is continuous.

Definition 1 A solution x̄(·) of system (77) is a global attractor (GA) if, for each R > 0 and ǫ > 0, there exists some T(R, ǫ) > 0 such that, whenever ‖x(0) − x̄(0)‖ ≤ R for any solution x(·) of (77), then

    ‖x(t) − x̄(t)‖ < ǫ for all t ≥ T(R, ǫ).

An equilibrium state x* is a global attractor if the constant solution corresponding to x* is a global attractor.
Lemma 2 Consider a system described by (77) with f continuous. If this system has a bounded solution x̄(·) which is a global attractor, then it has an equilibrium state x* which is a global attractor; hence every solution x(·) of (77) satisfies

    lim_{t→∞} x(t) = x*.
Proof: We first show that there exists some vector x* such that lim_{t→∞} x̄(t) = x*. The function x̄(·) is a solution of the time-invariant system (77). Hence, for any τ ≥ 0, the function x_τ(·), defined by x_τ(t) = x̄(τ + t), is also a solution of this system. Since x̄(·) is bounded, there exists some bound R > 0 such that ‖x_τ(0) − x̄(0)‖ = ‖x̄(τ) − x̄(0)‖ < R for all τ ≥ 0. Consider any ǫ > 0. Since x̄(·) is a global attractor for the system, it now follows that there exists a time Tǫ ≥ 0 such that

    ‖x_τ(t) − x̄(t)‖ < ǫ/2 for all t ≥ Tǫ and τ ≥ 0.

This implies that

    ‖x̄(t2) − x̄(t1)‖ < ǫ/2 for all t1, t2 ≥ Tǫ.  (78)

Consider now the sequence {x̄(k)}_{k=0}^∞. Inequality (78) implies that ‖x̄(k1) − x̄(k2)‖ < ǫ/2 whenever k1, k2 ≥ Tǫ. Since ǫ can be chosen arbitrarily small, this sequence is a Cauchy sequence in IRⁿ; hence there is a vector x* ∈ IRⁿ such that lim_{k→∞} x̄(k) = x*. In particular, for each ǫ > 0 there is an integer Kǫ ≥ 0 such that ‖x̄(k) − x*‖ < ǫ/2 for all k ≥ Kǫ.

We can now show that x̄(·) converges to x*. Consider any ǫ > 0 and choose any integer K̃ǫ ≥ max{Tǫ, Kǫ}. Then, for any t ≥ K̃ǫ, we see that

    ‖x̄(t) − x*‖ ≤ ‖x̄(t) − x̄(K̃ǫ)‖ + ‖x̄(K̃ǫ) − x*‖ < ǫ/2 + ǫ/2 = ǫ.

Since ǫ can be chosen arbitrarily small, we obtain that lim_{t→∞} x̄(t) = x*.

To show that x* must be an equilibrium state, note that, for any t ≥ 0, we have

    x̄(t + 1) − x̄(t) = ∫_t^{t+1} f(x̄(s)) ds = ∫_t^{t+1} [f(x̄(s)) − f(x*)] ds + f(x*).  (79)

Since both x̄(t + 1) and x̄(t) approach x* as t tends to infinity, the difference x̄(t + 1) − x̄(t) converges to zero as t tends to infinity. Since f is a continuous function, f(x̄(s)) − f(x*) converges to zero as s tends to infinity. This clearly implies that

    lim_{t→∞} ∫_t^{t+1} [f(x̄(s)) − f(x*)] ds = 0.

Therefore, by taking limits as t approaches infinity in equation (79), we obtain f(x*) = 0. Consequently, x* is an equilibrium point of (77).

To demonstrate that x* is a global attractor, consider any R > 0, any ǫ > 0 and any solution x(·) satisfying ‖x(0) − x*‖ ≤ R. Since x̄(·) is a GA, there is a time Tǫ such that ‖x(t) − x̄(t)‖ ≤ ǫ/2 for all t ≥ Tǫ. Since x̄(·) converges to x*, there is a time T̂ǫ such that ‖x̄(t) − x*‖ < ǫ/2 whenever t ≥ T̂ǫ. Hence, whenever t ≥ T̃ǫ := max{Tǫ, T̂ǫ},

    ‖x(t) − x*‖ ≤ ‖x(t) − x̄(t)‖ + ‖x̄(t) − x*‖ < ǫ/2 + ǫ/2 = ǫ.

Since R and ǫ are arbitrary, it follows that x* is a global attractor.
References

[1] A. B. Açıkmeşe. Stabilization, Observation, Tracking and Disturbance Rejection for Uncertain/Nonlinear and Time-Varying Systems. PhD thesis, Purdue University, December 2002.

[2] A. B. Açıkmeşe and M. Corless. Stability analysis with quadratic Lyapunov functions: A necessary and sufficient multiplier condition. Proceedings of the Allerton Conference on Communication, Control, and Computing, 2003.

[3] D. Angeli. A Lyapunov approach to incremental stability properties. IEEE Transactions on Automatic Control, 47(3):410–421, 2002.

[4] M. Arcak and P. Kokotovic. Nonlinear observers: A circle criterion design. Proceedings of the 38th IEEE Conference on Decision and Control, pages 4872–4876, 1999.

[5] M. Arcak and P. Kokotovic. Feasibility conditions for circle criterion designs. Systems and Control Letters, 42:405–412, 2001.

[6] M. Arcak and P. Kokotovic. Nonlinear observers: A circle criterion design and robustness analysis. Automatica, 37(12):1923–1930, 2001.

[7] M. Arcak and P. Kokotovic. Observer-based control of systems with slope-restricted nonlinearities. IEEE Transactions on Automatic Control, AC-46(7):1146–1150, 2001.

[8] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. SIAM, 1994.

[9] C. Chen. Linear System Theory and Design, Third Edition. Oxford University Press, 1999.
[10] B.L. Walcott, M.J. Corless, and S.H. Zak. Comparative study of non-linear state-observation techniques. International Journal of Control, 45:2109–2132, 1987.

[11] M. Corless. Robust stability analysis and controller design with quadratic Lyapunov functions. In A. Zinober, editor, Variable Structure and Lyapunov Control. Springer-Verlag, 1993.

[12] L.P. D'Alto. Incremental quadratic stability. Master's thesis, Purdue University, 2004.

[13] X. Fan and M. Arcak. Observer design for systems with multivariable monotone nonlinearities. Systems and Control Letters, 50(4):319–330, 2003.

[14] P. Gahinet, A. Nemirovski, A. J. Laub, and M. Chilali. The LMI Control Toolbox. The MathWorks, 1995.

[15] J. P. Gauthier, H. Hammouri, and S. Othman. A simple observer for nonlinear systems: Applications to bioreactors. IEEE Transactions on Automatic Control, 37(6):875–880, 1992.

[16] J.K. Hedrick and S. Raghavan. Observer design for a class of nonlinear systems. International Journal of Control, 59(2):515–528, 1994.

[17] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 1985.

[18] H. K. Khalil. Nonlinear Systems, Second Edition. Prentice Hall, 1996.

[19] A.J. Krener and A. Isidori. Linearization by output injection and nonlinear observers. Systems and Control Letters, 3(1):47–52, 1983.

[20] W. Lohmiller and J. E. Slotine. On contraction analysis for non-linear systems. Automatica, 34(6):683–696, 1998.

[21] D.G. Luenberger. Observing the state of a linear system. IEEE Transactions on Military Electronics, 8:74–80, 1964.

[22] L. Praly and M. Arcak. A relaxed condition for stability of nonlinear observer-based controllers. Systems and Control Letters, 2004. (Accepted for publication.)

[23] R. Rajamani. Observers for Lipschitz nonlinear systems. IEEE Transactions on Automatic Control, 43(3):397–401, 1998.
[24] J.J.E. Slotine, J.K. Hedrick, and E.A. Misawa. On sliding observers for nonlinear systems. Journal of Dynamic Systems, Measurement, and Control, 109(3):245–252, 1987.

[25] E. D. Sontag and Y. Wang. Output-to-state stability and detectability for nonlinear systems. Systems and Control Letters, 29:279–290, 1997.

[26] M. Zeitz. The extended Luenberger observer for nonlinear systems. Systems and Control Letters, 9:149–156, 1987.