Stability of linear nonautonomous multivariable systems with differentiable matrices

Michael Gil'
Department of Mathematics, Ben Gurion University of the Negev, P.O. Box 653, Beer Sheva 84105, Israel
Article history: Received 7 January 2015; Received in revised form 8 March 2015; Accepted 28 March 2015

Abstract: We consider linear nonautonomous multivariable systems with differentiable matrices. Explicit stability conditions are derived. In the appropriate situations our results improve the traditional freezing method.

Keywords: Linear systems; Stability
1. Introduction and statement of the main result

The purpose of this note is to suggest new sufficient stability conditions for a system described by the equation

\dot u(t) = A(t)u(t) \qquad (t \ge 0)    (1.1)

with a slowly varying matrix A(t). The problem of stability analysis of linear systems continues to attract the attention of many specialists despite its long history; it remains one of the central problems of control theory, since no complete solution is available. One of the main methods for the stability analysis of systems with slowly varying matrices is the freezing method [1–9]. In particular, in the interesting recent paper [4], a numerical method is suggested in the framework of the freezing approach. Besides, the point-wise Hurwitz condition is not required there. Moreover, no bound is imposed on ∥Ȧ(·)∥, provided Ȧ(·) exists. The main features of the present note are the following.
(1) Our stability conditions are explicit (not numerical). If A(t) depends on two or more parameters, then applications of our results are simpler than numerical tests.
(2) Our stability test depends on the integral of ∥Ȧ(t)∥, not on its supremum. In appropriate situations this fact enables us to improve the traditional freezing method.
(3) We obtain a bound for the upper Lyapunov exponent. Recall that the upper Lyapunov exponent Θ0 of (1.1) is the infimum of the numbers ρ for which there exist numbers Nρ such that ∥u(t)∥ ≤ Nρ e^{ρt} (t ≥ 0). As is well known, Θ0 characterises the stability margin and plays an important role in the theory of linear and nonlinear perturbations, cf. [10].

Let us introduce the notations. Let Cⁿ be the complex n-dimensional Euclidean space with a scalar product (·,·), the Euclidean norm ∥·∥ = √(·,·) and the unit matrix I. For a linear operator A in Cⁿ (matrix), ∥A∥ = sup_{x∈Cⁿ} ∥Ax∥/∥x∥ is the spectral (operator) norm, A* is the adjoint operator, and N2(A) = √(Trace AA*) is the Hilbert–Schmidt (Frobenius) norm of A; λk(A) (k = 1, ..., n) are the eigenvalues of A counted with their multiplicities, and α(A) = max_k Re λk(A). The quantity

g(A) = \Big( N_2^2(A) - \sum_{k=1}^{n} |\lambda_k(A)|^2 \Big)^{1/2}

plays an essential role hereafter. The following relations are checked in [9, Section 1.5]:

g^2(A) \le N_2^2(A) - |\mathrm{Trace}\, A^2|, \qquad g(A) \le \frac{1}{\sqrt{2}}\, N_2(A - A^*),

and g(e^{ia}A + zI) = g(A) (a ∈ R, z ∈ C); if A is a normal matrix, that is A*A = AA*, then g(A) = 0. If A1 and A2 are commuting matrices, then g(A1 + A2) ≤ g(A1) + g(A2). In addition, by the inequality between the geometric and arithmetic mean values,

\frac{1}{n} \sum_{k=1}^{n} |\lambda_k(A)|^2 \ge \Big( \prod_{k=1}^{n} |\lambda_k(A)|^2 \Big)^{1/n} = |\det A|^{2/n}.

Hence g²(A) ≤ N2²(A) − n|det A|^{2/n}.
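To illustrate how g(A) is computed in practice, here is a minimal numerical sketch; it assumes NumPy is available, and the function name g as well as the two sample matrices are introduced only for this illustration.

import numpy as np

def g(A):
    # g(A) = (N2(A)^2 - sum_k |lambda_k(A)|^2)^(1/2); max(..., 0) guards against round-off
    lam = np.linalg.eigvals(A)
    return np.sqrt(max(np.linalg.norm(A, 'fro')**2 - np.sum(np.abs(lam)**2), 0.0))

A_normal = np.array([[0.0, 1.0], [-1.0, 0.0]])   # normal (skew-symmetric) matrix, so g = 0
A_jordan = np.array([[1.0, 1.0], [0.0, 1.0]])    # non-normal Jordan block, here g = 1
print(g(A_normal), g(A_jordan))

# the bound g(A)^2 <= N2(A)^2 - n|det A|^(2/n) holds with equality for the Jordan block
n = A_jordan.shape[0]
print(g(A_jordan)**2, np.linalg.norm(A_jordan, 'fro')**2 - n * abs(np.linalg.det(A_jordan))**(2 / n))

For the skew-symmetric matrix the printed value of g is zero, in agreement with the property of normal matrices stated above.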
Everywhere below A(t) is a variable n × n matrix, bounded on [0, ∞) and having a piece-wise continuous derivative defined and bounded on [t0, ∞) (t0 ≥ 0). In addition,

α(A(t)) < 0 (t ≥ t0).    (1.2)

Denote by λR(t) the smallest eigenvalue of (A(t) + A*(t))/2 and put

\mu(t) := \sum_{j,k=0}^{n-1} \frac{(k+j)!\, g^{k+j}(A(t))}{2^{k+j}\, |\alpha(A(t))|^{k+j+1}\, (k!\, j!)^{3/2}}.

Now we are in a position to formulate our main result.

Theorem 1.1. For a t0 ≥ 0, let

\hat\Theta := \limsup_{t\to\infty} \frac{1}{t - t_0} \int_{t_0}^{t} \Big( -\frac{1}{\mu(s)} + \frac{1}{2}\, |\lambda_R(s)|\, \mu^2(s)\, \|\dot A(s)\| \Big)\, ds < 0.

Then Eq. (1.1) is exponentially stable. Moreover, Θ0 ≤ Θ̂.

This theorem is proved in the next two sections.
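Before turning to the proof, the following sketch indicates how the test of Theorem 1.1 can be evaluated numerically. It is only a rough illustration assuming NumPy is available: the limsup is replaced by an average over a finite window [t0, T], and the two-parameter family A(t) together with the helper names mu and theta_hat are hypothetical choices made for this example only.

import numpy as np
from math import factorial

def g(A):
    lam = np.linalg.eigvals(A)
    return np.sqrt(max(np.linalg.norm(A, 'fro')**2 - np.sum(np.abs(lam)**2), 0.0))

def mu(A):
    # mu for the frozen matrix A = A(t), as defined above; requires alpha(A) < 0
    n = A.shape[0]
    alpha = np.linalg.eigvals(A).real.max()
    gA = g(A)
    return sum(factorial(k + j) * gA**(k + j)
               / (2**(k + j) * abs(alpha)**(k + j + 1) * (factorial(k) * factorial(j))**1.5)
               for j in range(n) for k in range(n))

def theta_hat(A_of_t, Adot_of_t, t0=0.0, T=300.0, steps=20000):
    # average of the integrand over a long finite window, as a stand-in for the limsup
    def f(s):
        A, Ad = A_of_t(s), Adot_of_t(s)
        lamR = np.linalg.eigvalsh((A + A.conj().T) / 2.0).min()   # smallest eigenvalue of (A + A*)/2
        m = mu(A)
        return -1.0 / m + 0.5 * abs(lamR) * m**2 * np.linalg.norm(Ad, 2)
    return float(np.mean([f(s) for s in np.linspace(t0, T, steps)]))

# hypothetical two-parameter family, used only for illustration
A  = lambda t: np.array([[-2.0, np.sin(t)], [0.0, -3.0 + 0.5 * np.cos(t)]])
Ad = lambda t: np.array([[0.0, np.cos(t)], [0.0, -0.5 * np.sin(t)]])
print(theta_hat(A, Ad))   # a negative value means the test of Theorem 1.1 is satisfied

For this particular family the printed value comes out negative (roughly −1), so the test of Theorem 1.1 is met.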
2. Auxiliary results

Put

Q(t) = 2 \int_0^{\infty} e^{A^*(t)s} e^{A(t)s}\, ds \qquad\text{and}\qquad q(t) = 2 \int_0^{\infty} \|e^{A(t)s}\|^2\, ds \qquad (t \ge t_0).

As is well known, Q(t) is the unique solution of the equation

A^*(t)Q(t) + Q(t)A(t) = -2I,    (2.1)

cf. [10, Section I.5]. Clearly,

∥Q(t)∥ ≤ q(t) (t ≥ t0).    (2.2)

Lemma 2.1. Let condition (1.2) hold. Then Q(t) is differentiable and ∥Q̇(t)∥ ≤ q²(t)∥Ȧ(t)∥.

Proof. Differentiating (2.1), we have

A^*(t)\dot Q(t) + \dot Q(t)A(t) = -\big( (A^*(t))' Q(t) + Q(t)\dot A(t) \big) \qquad (t \ge t_0).

Hence

\dot Q(t) = \int_0^{\infty} e^{A^*(t)s} \big( (A^*(t))' Q(t) + Q(t)\dot A(t) \big) e^{A(t)s}\, ds.

Thus,

\|\dot Q(t)\| \le \tfrac{1}{2}\, q(t)\, \|(A^*(t))' Q(t) + Q(t)\dot A(t)\| \le q(t)\, \|Q(t)\|\, \|\dot A(t)\|.

Now (2.2) yields the result.

Lemma 2.2. A solution of (1.1) satisfies the inequality

(Q(t)u(t), u(t)) \le d_0 \exp\Big[ \int_{t_0}^{t} \big( -2\|Q(t_1)\|^{-1} + \|\dot Q(t_1) Q^{-1}(t_1)\| \big)\, dt_1 \Big] \qquad (t \ge t_0),

where d0 = (Q(t0)u(t0), u(t0)).

Proof. Put b(t) = 1/∥Q(t)∥ and substitute

u(t) = e^{-\int_{t_0}^{t} b(s)\, ds}\, x(t)    (2.3)

into (1.1). Then

\dot x = (b(t)I + A(t))x.    (2.4)

Multiplying this equation by Q(t) and taking the scalar product, we can write

(Q(t)\dot x(t), x(t)) = b(t)(Q(t)x(t), x(t)) + (Q(t)A(t)x(t), x(t)).

Since

\frac{d}{dt}(Q(t)x(t), x(t)) = (Q(t)\dot x(t), x(t)) + (x(t), Q(t)\dot x(t)) + (\dot Q(t)x(t), x(t)),

it can be written as

\frac{d}{dt}(Q(t)x(t), x(t)) = (Q(t)(b(t)I + A(t))x(t), x(t)) + (x(t), Q(t)(b(t)I + A(t))x(t)) + (\dot Q(t)x(t), x(t))
= 2b(t)(Q(t)x(t), x(t)) + ((Q(t)A(t) + A^*(t)Q(t))x(t), x(t)) + (\dot Q(t)x(t), x(t))
= \Big( \big( -2I + \dot Q(t) + \tfrac{2}{\|Q(t)\|}\, Q(t) \big) x(t), x(t) \Big) \le (\dot Q(t)x(t), x(t)).

So

\frac{d}{dt}(Q(t)x(t), x(t)) \le \|\dot Q(t)Q^{-1}(t)\|\, (Q(t)x(t), x(t))

and hence

(Q(t)x(t), x(t)) \le d_0 \exp\Big[ \int_{t_0}^{t} \|\dot Q(s)Q^{-1}(s)\|\, ds \Big].

Now due to (2.3) we get the required result.

From (2.2) and Lemmas 2.1 and 2.2 it follows that

(Q(t)u(t), u(t)) \le d_0 \exp\Big[ \int_{t_0}^{t} \Big( -\frac{2}{q(t_1)} + q^2(t_1)\, \|Q^{-1}(t_1)\|\, \|\dot A(t_1)\| \Big)\, dt_1 \Big]    (2.5)

and therefore

c_2 \|u(t)\|^2 \le d_0 \exp\Big[ \int_{t_0}^{t} \Big( -\frac{2}{q(t_1)} + q^2(t_1)\, \|Q^{-1}(t_1)\|\, \|\dot A(t_1)\| \Big)\, dt_1 \Big],    (2.6)

where c2 = inf_{t ≥ t0, v ∈ Cⁿ} (Q(t)v, v)/∥v∥². Thus,

\limsup_{t\to\infty} \frac{1}{t - t_0} \ln \|u(t)\| \le \limsup_{t\to\infty} \frac{1}{2(t - t_0)} \int_{t_0}^{t} \Big( -\frac{2}{q(t_1)} + q^2(t_1)\, \|Q^{-1}(t_1)\|\, \|\dot A(t_1)\| \Big)\, dt_1.

So the upper Lyapunov exponent Θ0 of (1.1) satisfies the inequality

Θ0 ≤ Θ1,    (2.7)

where

\Theta_1 = \limsup_{t\to\infty} \frac{1}{t - t_0} \int_{t_0}^{t} \Big( -\frac{1}{q(t_1)} + \frac{q^2(t_1)}{2}\, \|Q^{-1}(t_1)\|\, \|\dot A(t_1)\| \Big)\, dt_1,

and therefore (1.1) is exponentially stable, provided Θ1 < 0.
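The quantities Q(t) and q(t) are easy to compute numerically, so the bounds above can be spot-checked. The sketch below is a minimal illustration assuming SciPy and NumPy; the helper names Q_of and q_of and the time-varying family A(t) are hypothetical and serve only this check of (2.2) and Lemma 2.1.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm
from scipy.integrate import quad

def Q_of(A):
    # Eq. (2.1): A* Q + Q A = -2I; scipy solves M X + X M^H = W, so take M = A*
    return solve_continuous_lyapunov(A.conj().T, -2.0 * np.eye(A.shape[0]))

def q_of(A, s_max=80.0):
    # q(t) = 2 * integral_0^infty ||exp(A s)||^2 ds, truncated at s_max (A must be Hurwitz)
    val, _ = quad(lambda s: np.linalg.norm(expm(A * s), 2)**2, 0.0, s_max, limit=300)
    return 2.0 * val

# hypothetical time-varying family, for illustration only
A  = lambda t: np.array([[-2.0, np.sin(t)], [0.0, -3.0 + 0.5 * np.cos(t)]])
Ad = lambda t: np.array([[0.0, np.cos(t)], [0.0, -0.5 * np.sin(t)]])

t, h = 1.0, 1e-6
q_t = q_of(A(t))
Qdot = (Q_of(A(t + h)) - Q_of(A(t - h))) / (2.0 * h)                  # central difference for dQ/dt
print(np.linalg.norm(Q_of(A(t)), 2) <= q_t)                           # (2.2): ||Q(t)|| <= q(t)
print(np.linalg.norm(Qdot, 2) <= q_t**2 * np.linalg.norm(Ad(t), 2))   # Lemma 2.1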
3. Proof of Theorem 1.1

For a constant Hurwitz matrix A0, due to [9, Lemma 1.9.2],

2 \int_0^{\infty} \|e^{A_0 s}\|^2\, ds \le \hat\mu(A_0),

where

\hat\mu(A_0) := \sum_{j,k=0}^{n-1} \frac{(k+j)!\, g^{k+j}(A_0)}{2^{k+j}\, |\alpha(A_0)|^{k+j+1}\, (k!\, j!)^{3/2}}.

So µ(t) = µ̂(A(t)), and therefore

q(t) ≤ µ(t) (t ≥ t0).    (3.1)

Furthermore, put y(t) = e^{A0 t}v (v ∈ Cⁿ). Then ẏ(t) = A0 y(t), and

\frac{d(y(t), y(t))}{dt} = ((A_0 + A_0^*)\, y(t), y(t)).

Hence

\frac{d(y(t), y(t))}{dt} \ge \lambda(A_0 + A_0^*)\, (y(t), y(t)),

and therefore

\|e^{A_0 t} v\|^2 \ge e^{t \lambda(A_0 + A_0^*)}\, \|v\|^2,

where λ(A0 + A0*) is the smallest eigenvalue of A0 + A0*. Recall that A0 is Hurwitzian, so λ(A0 + A0*) < 0. Put

Q_0 = 2 \int_0^{\infty} e^{A_0^* s} e^{A_0 s}\, ds.

Then

(Q_0 h, h) = 2 \int_0^{\infty} (e^{A_0 s}h, e^{A_0 s}h)\, ds \ge 2 \int_0^{\infty} e^{\lambda(A_0 + A_0^*)s}\, ds\, \|h\|^2 = \frac{2\, \|h\|^2}{|\lambda(A_0 + A_0^*)|} \qquad (h \in \mathbb{C}^n),

so that ∥Q0^{-1}∥ ≤ |λ(A0 + A0*)|/2. Taking A0 = A(t), for which Q0 = Q(t) and λ(A0 + A0*) = 2λR(t), we obtain

∥Q^{-1}(t)∥ ≤ |λR(t)|.    (3.2)

The assertion of Theorem 1.1 follows from inequalities (3.1), (3.2) and (2.7).
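Inequality (3.2) can be confirmed numerically for any fixed Hurwitz matrix; below is a minimal sketch assuming SciPy, with an arbitrarily chosen sample matrix A0.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A0 = np.array([[-2.0, 1.0], [0.0, -3.0]])                      # a sample Hurwitz matrix
Q0 = solve_continuous_lyapunov(A0.conj().T, -2.0 * np.eye(2))   # A0* Q0 + Q0 A0 = -2I
lam_R = np.linalg.eigvalsh((A0 + A0.conj().T) / 2.0).min()      # smallest eigenvalue of (A0 + A0*)/2
print(np.linalg.norm(np.linalg.inv(Q0), 2), abs(lam_R))         # (3.2): the first value should not exceed the second

For this matrix the two printed values are approximately 3.19 and 3.21, so (3.2) holds here with little slack.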
4. Example

Consider Eq. (1.1), taking

A(t) = \begin{pmatrix} -1 & a\sin(\omega t) \\ -a\sin(\omega t) & -1 \end{pmatrix}

with positive constants a and ω. In this case one can apply various methods, for example the Ważewski inequality, but to compare our results with [4] we apply Theorem 1.1. In [4] the values a = 1.1 and ω = 1 are taken. We have λ1,2(A(t)) = −1 ± ia sin(ωt) and ∥Ȧ(t)∥ = aω|cos(ωt)|. So α(A(t)) ≡ −1. In addition, λR(t) ≡ −1 and g(A(t)) ≡ 0, since A(t) is normal; consequently µ(t) ≡ 1. Simple calculations show that

\lim_{t\to\infty} \frac{1}{t} \int_0^{t} |\cos(\omega s)|\, ds = \lim_{t\to\infty} \frac{1}{\omega t} \int_0^{\omega t} |\cos(s)|\, ds = \frac{2}{\pi}.

Hence Θ̂ = −1 + aω/π, and due to Theorem 1.1 the considered equation is exponentially stable, provided aω < π. This certainly holds for a = 1.1 and ω = 1. In addition, Θ0 ≤ −(1 − aω/π). This example shows that the application of Theorem 1.1 to equations with matrices containing two or more parameters requires simpler calculations than the method suggested in [4]; on the other hand, Theorem 1.1, in contrast to [4], requires the point-wise Hurwitzness of the matrix.
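A quick simulation confirms the picture; the sketch below assumes SciPy and an arbitrarily chosen initial condition. Note that for this particular A(t) the off-diagonal part is skew-symmetric, so in fact ∥u(t)∥ = e^{−t}∥u(0)∥ and the observed decay rate is −1, comfortably below the guaranteed bound −(1 − aω/π) ≈ −0.65.

import numpy as np
from scipy.integrate import solve_ivp

a, omega = 1.1, 1.0
A = lambda t: np.array([[-1.0, a * np.sin(omega * t)],
                        [-a * np.sin(omega * t), -1.0]])

# integrate u' = A(t) u over [0, 20] and estimate the exponential decay rate of ||u||
sol = solve_ivp(lambda t, u: A(t) @ u, (0.0, 20.0), [1.0, 1.0], rtol=1e-10, atol=1e-12)
rate = np.log(np.linalg.norm(sol.y[:, -1]) / np.linalg.norm(sol.y[:, 0])) / (sol.t[-1] - sol.t[0])
print(rate, -(1.0 - a * omega / np.pi))   # observed decay rate vs. the bound from Theorem 1.1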
Acknowledgement

I am very grateful to the referee of this paper for his or her really helpful remarks.

References

[1] B.F. Bylov, D.M. Grobman, V.V. Nemyckii, R.E. Vinograd, The Theory of Lyapunov Exponents, Nauka, Moscow, 1966 (in Russian).
[2] C.A. Desoer, Slowly varying systems ẋ = A(t)x, IEEE Trans. Automat. Control 14 (1969) 780–781.
[3] A. Ilchmann, D.H. Owens, D. Prätzel-Wolters, Sufficient conditions for stability of linear time-varying systems, Systems Control Lett. 9 (1987) 157–163.
[4] L. Jetto, V. Orsini, Relaxed conditions for the exponential stability of a class of linear time-varying systems, IEEE Trans. Automat. Control 54 (7) (2009) 1580–1585.
[5] E.W. Kamen, P.P. Khargonekar, A. Tannenbaum, Control of slowly varying linear systems, IEEE Trans. Automat. Control 34 (12) (1989) 1283–1285.
[6] P. Mullhaupt, D. Buccieri, D. Bonvin, A numerical sufficiency test for the asymptotic stability of linear time-varying systems, Automatica 43 (2007) 631–638.
[7] V. Solo, On the stability of slowly time-varying linear systems, Math. Control Signals Systems 7 (1994) 331–350.
[8] R.E. Vinograd, An improved estimate in the method of freezing, Proc. Amer. Math. Soc. 89 (1) (1983) 125–129.
[9] M.I. Gil', Explicit Stability Conditions for Continuous Systems, Lecture Notes in Control and Information Sciences, vol. 314, Springer-Verlag, Berlin, 2005.
[10] Yu.L. Daleckii, M.G. Krein, Stability of Solutions of Differential Equations in Banach Space, Amer. Math. Soc., Providence, RI, 1974.