Global Asymptotic Stability and Global Exponential Stability of Continuous-Time Recurrent Neural Networks

Sanqing Hu and Jun Wang

Abstract—This note presents new results on global asymptotic stability (GAS) and global exponential stability (GES) of a general class of continuous-time recurrent neural networks with Lipschitz continuous and monotone nondecreasing activation functions. We first give three sufficient conditions for the GAS of neural networks. These testable sufficient conditions differ from and improve upon existing ones. We then extend an existing GAS result to a GES one and also extend the existing GES results to more general cases with less restrictive connection weight matrices and/or partially Lipschitz activation functions.

Index Terms—Global asymptotic (exponential) stability, Lipschitz continuous, recurrent neural networks.
I. INTRODUCTION

Stability analysis of recurrent neural networks has received much attention in the literature, e.g., [1]–[23]. Since guaranteeing GAS and GES is very important for dynamic systems [7], [23], many results on GAS and GES of continuous-time neural networks have been reported recently. For example, it is proved in [4] that negative semidefiniteness of the symmetric connection weight matrix of a neural network model with sigmoid activation functions is a necessary and sufficient condition for absolute stability (ABST). The ABST result was extended to absolute exponential stability (AEST) in [17]. Within the class of globally or locally Lipschitz continuous and monotone nondecreasing activation functions, results based on Lyapunov diagonal stability (LDS) in [5] and Lyapunov diagonal semistability (LDSS) in [16] were reported. The LDS result for GES was extended in [21] and [13] for globally Lipschitz continuous and monotone nondecreasing activation functions. As shown in [15], [18], and [13], the LDS result extends many existing conditions in the literature, such as the M-matrix condition [20], the lower triangular structure [2], negative semidefiniteness [3], diagonal stability [10], diagonal semistability [11], and the sufficient conditions in [6], [7], [12], [19], and [18]. Within the class of partially Lipschitz continuous and monotone nondecreasing functions,
the AEST result was presented in [14] under the mild condition that the connection weight matrix is an H-matrix with nonpositive diagonal elements. Although this AEST condition is more restrictive than the additively diagonally stable (ADS) condition introduced in [1], the ADS result in [1] is concerned only with the class of bounded and continuously differentiable activation functions with positive derivatives. In [9], a sufficient condition was given to guarantee GAS of the neural network with globally Lipschitz continuous and monotone nondecreasing activation functions. That sufficient condition is equivalent to a high-dimensional linear matrix inequality; however, as the size of the neural network increases, solving the high-dimensional linear matrix inequality becomes increasingly difficult.

This note is concerned with the GAS and GES of continuous-time recurrent neural networks. We first present three sufficient conditions that guarantee GAS of neural networks with locally or globally Lipschitz continuous and monotone nondecreasing activation functions. All of these sufficient conditions differ from and improve upon the existing stability results such as the LDS, LDSS, H-matrix, and ADS results mentioned above. We then extend an existing GAS result to a GES one and also extend the existing GES results to more general cases in terms of connection weight matrices and/or activation functions.

II. PRELIMINARIES

Consider the following typical recurrent neural network model:
\dot{x} = -Dx + Wg(x) + I, \qquad x(0) = x_0   (1)

where x = (x_1, x_2, \ldots, x_n)^T \in R^n is the state vector, D = diag(d_1, d_2, \ldots, d_n) \in R^{n \times n} is a diagonal matrix with d_i > 0, W = [w_{ij}] \in R^{n \times n} is a connection weight matrix, I \in R^n is an input vector, and g(x) = (g_1(x_1), g_2(x_2), \ldots, g_n(x_n))^T is a nonlinear vector-valued activation function from R^n to R^n.

In this note, let LL denote the class of locally Lipschitz continuous (l.l.c.) and monotone nondecreasing activation functions; that is, for any x_{i0} \in R there exist an \varepsilon_{i0} > 0 and a constant \ell_{i0} > 0 such that, for all \rho, \theta \in [x_{i0} - \varepsilon_{i0}, x_{i0} + \varepsilon_{i0}] with \rho \neq \theta,

0 \le \frac{g_i(\rho) - g_i(\theta)}{\rho - \theta} \le \ell_{i0}, \qquad i = 1, 2, \ldots, n.   (2)

Let PL denote the class of partially Lipschitz continuous (p.l.c.) and monotone nondecreasing activation functions [14]; that is, for any \theta \in R there exists \ell_i(\theta) > 0 such that, for all \rho \in R with \rho \neq \theta,

0 \le \frac{g_i(\rho) - g_i(\theta)}{\rho - \theta} \le \ell_i(\theta), \qquad i = 1, 2, \ldots, n.   (3)

Let GL denote the class of globally Lipschitz continuous (g.l.c.) and monotone nondecreasing activation functions; that is, there exist constants \underline{\ell}_i and \overline{\ell}_i such that, for all \rho, \theta \in R with \rho \neq \theta,

0 \le \underline{\ell}_i \le \frac{g_i(\rho) - g_i(\theta)}{\rho - \theta} \le \overline{\ell}_i, \qquad i = 1, 2, \ldots, n.   (4)
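For readers who wish to experiment with model (1) numerically, the following sketch (an editorial illustration, not part of the original note) integrates the dynamics with a simple forward-Euler scheme; the matrices, the input, and the tanh activation are assumed example data only.

```python
import numpy as np

def simulate(D, W, I, g, x0, dt=1e-3, T=20.0):
    """Forward-Euler integration of x' = -D x + W g(x) + I, i.e., model (1)."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(int(T / dt)):
        x = x + dt * (-D @ x + W @ g(x) + I)
        traj.append(x.copy())
    return np.array(traj)

# Illustrative (assumed) data: a two-neuron network with tanh activations (tanh is in GL).
D = np.diag([1.0, 1.0])
W = np.array([[-1.0, 1.0],
              [-1.0, 0.0]])
I = np.array([1.0, 2.0])
traj = simulate(D, W, I, np.tanh, x0=[5.0, -3.0])
print("final state:", traj[-1])
```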
Definition 1 [4], [14]: The equilibrium x^* of the neural network (1) is said to be GAS if it is locally stable in the sense of Lyapunov and globally attractive. The equilibrium x^* is said to be GES if there exist \alpha \ge 1 and \beta > 0 such that, for all x_0 \in R^n, the positive-half trajectory x(t) of the neural network (1) satisfies \|x(t) - x^*\| \le \alpha \|x_0 - x^*\| \exp(-\beta t), t \ge 0.

Definition 2 [5]: A real square matrix A is said to be Lyapunov diagonally stable (LDS) [respectively, Lyapunov diagonally semistable (LDSS)] if there exists a diagonal matrix P > 0 such that [PA]^S := (PA + A^T P)/2 > 0
(respectively, [PA]^S \ge 0). We define the class \mathcal{LDS} (respectively, \mathcal{LDSS}) such that A \in \mathcal{LDS} (respectively, A \in \mathcal{LDSS}) means that A is LDS (respectively, LDSS). Obviously, \mathcal{LDS} \subset \mathcal{LDSS}. In the sequel, I_{n \times n} is the identity matrix, and P > 0 (or P < 0) means that P is a positive (or negative) definite matrix.
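As a quick numerical companion to Definition 2 (again an illustration, not taken from the note), the sketch below checks whether a given positive diagonal matrix P certifies that A is LDS or LDSS by examining the eigenvalues of [PA]^S; the helper name and tolerance are assumptions.

```python
import numpy as np

def lds_certificate(A, p_diag, tol=1e-9):
    """Return 'LDS', 'LDSS', or None according to the sign of [PA]^S = (PA + A^T P)/2."""
    P = np.diag(p_diag)
    S = (P @ A + A.T @ P) / 2.0
    eig = np.linalg.eigvalsh(S)          # [PA]^S is symmetric
    if np.all(eig > tol):
        return "LDS"
    if np.all(eig >= -tol):
        return "LDSS"
    return None

# Example 3 of this note uses W = [[-1, 1], [-1, 0]] and P = I to show that -W is in LDSS.
W = np.array([[-1.0, 1.0], [-1.0, 0.0]])
print(lds_certificate(-W, [1.0, 1.0]))   # expected: 'LDSS'
```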
III. GLOBAL ASYMPTOTIC STABILITY

When g \in GL and g(0) = 0, a necessary and sufficient condition for the existence and uniqueness of the equilibrium of the neural network (1) was given in [9, Th. 1]. In fact, g(0) = 0 is not necessarily required. If each e_i = 0 and f_i = +\infty, then [9, Th. 1] can be extended to the case g \in LL and can thus be restated as the following lemma.

Lemma 1: Let H = diag(h_1, \ldots, h_n) and g \in LL or g \in GL. The neural network (1) has a unique equilibrium for any I \in R^n if and only if -D + WH is nonsingular for all h_i \in [0, +\infty) when g \in LL, or for all h_i \in [\underline{\ell}_i, \overline{\ell}_i] when g \in GL.

Lemma 2: If there exists a P > 0 such that P(-D + WH) + (-D + WH)^T P < 0 for any H defined in Lemma 1, then the neural network (1) is GAS for any g \in LL or GL.

Proof: Since P(-D + WH) + (-D + WH)^T P < 0 for any H as in Lemma 1, -D + WH is a stable matrix and consequently nonsingular. According to Lemma 1, the neural network (1) has a unique equilibrium x^*. By means of the coordinate translation z = x - x^*, (1) can be put into the equivalent form

dz/dt = -Dz + Wf(z)   (5)

where f(z) = g(z + x^*) - g(x^*), which can be rewritten as

dz/dt = (-D + WH(z))z   (6)

with H(z) = diag(h_1(z_1), \ldots, h_n(z_n)), h_i(z_i) = f_i(z_i)/z_i if z_i \neq 0 and h_i(z_i) = 0 otherwise. Consider the Lyapunov function V(z) = z^T P z. Along the trajectories of (6),

dV(z)/dt = z^T [P(-D + WH(z)) + (-D + WH(z))^T P] z < 0, \qquad z \neq 0   (7)

so the neural network (1) is GAS.

Let E_j denote the n \times n diagonal matrix whose only nonzero entry is a one in the (j, j) position, and let W_j := WE_j, j = 1, 2, \ldots, n.

Theorem 1: Let g \in LL or g \in GL. The neural network (1) is GAS if there exists a P > 0 such that

-(PD + DP) + \sum_{i=1}^{n} h_i (PW_i + W_i^T P) < 0

for all h_i \in [0, +\infty) when g \in LL, or for all h_i \in [\underline{\ell}_i, \overline{\ell}_i] when g \in GL, i = 1, 2, \ldots, n.

Proof: In terms of W_i = WE_i, i = 1, 2, \ldots, n, we have -D + WH = -D + \sum_{i=1}^{n} h_i W_i. Then

P(-D + WH) + (-D + WH)^T P = -(PD + DP) + \sum_{i=1}^{n} h_i (PW_i + W_i^T P) < 0

for any H defined in Lemma 1. From Lemma 2, the neural network (1) is GAS.

Theorem 2: Let g \in LL or g \in GL. The neural network (1) is GAS if there exists a P > 0 such that

PW_j + W_j^T P \le 0, \qquad j = 1, 2, \ldots, n.

Proof: Since P > 0 and D > 0, we have -(PD + DP) < 0. Hence, for all admissible h_i \ge 0, -(PD + DP) + \sum_{i=1}^{n} h_i (PW_i + W_i^T P) \le -(PD + DP) < 0, and the conclusion follows from Theorem 1.
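The finite checks in Theorems 1 and 2 (as reconstructed above) amount, in the unbounded-slope case, to verifying that each PW_j + W_j^T P is negative semidefinite together with -(PD + DP) < 0. A small numerical helper along the lines of the computation carried out in Example 1 below might look as follows; the function name and tolerances are assumptions, not part of the note.

```python
import numpy as np

def check_columnwise_conditions(D, W, P, tol=1e-9):
    """Check P W_j + W_j^T P <= 0 for W_j = W E_j (all j) and -(PD + DP) < 0."""
    n = W.shape[0]
    ok = True
    for j in range(n):
        E = np.zeros((n, n)); E[j, j] = 1.0
        Wj = W @ E
        Sj = P @ Wj + Wj.T @ P
        ok &= np.max(np.linalg.eigvalsh((Sj + Sj.T) / 2)) <= tol
    ok &= np.max(np.linalg.eigvalsh(-(P @ D + D @ P))) < -tol
    return bool(ok)

# Data of Example 1 with a = 2 and the P used there.
a = 2.0
D = np.eye(2)
W = np.array([[-a, 0.0], [-a, 0.0]])
P = np.array([[2.0, -1.0], [-1.0, 1.0]])
print(check_columnwise_conditions(D, W, P))   # expected: True
```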
For a general H it is usually difficult to find a single P > 0 such that P(-D + WH) + (-D + WH)^T P < 0 for all admissible H. Hence, verifying the conditions in Theorem 1 is much easier than verifying the conditions in Lemma 2. Comparing Theorem 2 with Theorem 1, we can see that checking the conditions in Theorem 2 is much easier than checking the conditions in Theorem 1, since we do not need to consider any variable h_i (i = 1, 2, \ldots, n) in Theorem 2.

Theorem 3: Let g \in GL. The neural network (1) is GAS if there exists a P > 0 such that

-Q := P(-D + W\bar{H}) + (-D + W\bar{H})^T P < 0

and

(\|PW\|_2 + \|W^T P\|_2)\,\Delta\ell < \frac{1}{2}\lambda_{\min}(Q + Q^T)   (11)

where \bar{H} = diag(\overline{\ell}_1, \overline{\ell}_2, \ldots, \overline{\ell}_n), \Delta\ell := \max_{1 \le i \le n}(\overline{\ell}_i - \underline{\ell}_i), \|\cdot\|_2 is the 2-norm, and \lambda_{\min}(A) is the minimum eigenvalue of A.

Proof: For any H defined in Lemma 1 with h_i \in [\underline{\ell}_i, \overline{\ell}_i],

P(-D + WH) + (-D + WH)^T P
= P(-D + W\bar{H}) + (-D + W\bar{H})^T P - PW(\bar{H} - H) - [W(\bar{H} - H)]^T P
= -Q - PW(\bar{H} - H) - [W(\bar{H} - H)]^T P
\le -Q + (\|PW\|_2 + \|W^T P\|_2)\,\Delta\ell\, I_{n \times n}
\le \left[-\frac{1}{2}\lambda_{\min}(Q + Q^T) + (\|PW\|_2 + \|W^T P\|_2)\,\Delta\ell\right] I_{n \times n} < 0.

So P(-D + WH) + (-D + WH)^T P < 0 for all h_i \in [\underline{\ell}_i, \overline{\ell}_i], i = 1, \ldots, n. From Lemma 2, the neural network (1) is GAS.

IV. GLOBAL EXPONENTIAL STABILITY

Lemma 3: If g_i(s) is l.l.c. and monotone nondecreasing, then there exists a positive finite number M_i(a_i, b_i) > 0 such that 0 \le D^+ g_i(s) \le M_i(a_i, b_i) for any s \in [a_i, b_i], where a_i and b_i are finite and D^+ denotes the upper right Dini derivative defined as

D^+ g_i(s) = \limsup_{h \to 0^+} \frac{g_i(s + h) - g_i(s)}{h}.

Proof: Since g_i(s) is l.l.c. and monotone nondecreasing, for any s^* \in [a_i, b_i] there exist an \varepsilon_{i0} > 0 and a constant \ell_{i0} > 0 such that, for all \rho, \theta \in [s^* - \varepsilon_{i0}, s^* + \varepsilon_{i0}] with \rho \neq \theta, we have

0 \le \frac{g_i(\rho) - g_i(\theta)}{\rho - \theta} \le \ell_{i0}.   (12)

Next, by contradiction, we prove that there exists a finite number M_i(a_i, b_i) > 0 such that

0 \le \frac{g_i(s_1) - g_i(s_2)}{s_1 - s_2} =: \ell_i(s_1, s_2) \le M_i(a_i, b_i), \qquad \forall s_1, s_2 \in [a_i, b_i],\ s_1 \neq s_2.   (13)

Assume that (13) does not hold. Then we may select three sequences \{N_j\}, \{s_1^j\}, and \{s_2^j\} such that 0 < N_1 < \ell_i(s_1^1, s_2^1) < N_2 < \ell_i(s_1^2, s_2^2) < \cdots < N_j < \ell_i(s_1^j, s_2^j) < N_{j+1} < \ell_i(s_1^{j+1}, s_2^{j+1}) < \cdots, \lim_{j \to \infty} N_j = +\infty, and \lim_{j \to \infty} \ell_i(s_1^j, s_2^j) = +\infty, where each s_1^j, s_2^j \in [a_i, b_i] and s_1^j \neq s_2^j. Since each s_1^j, s_2^j \in [a_i, b_i], there must exist two subsequences \{s_1^{n_j}\} \subseteq \{s_1^j\} and \{s_2^{n_j}\} \subseteq \{s_2^j\} such that \lim_{j \to \infty} s_1^{n_j} = s_1^* \in [a_i, b_i] and \lim_{j \to \infty} s_2^{n_j} = s_2^* \in [a_i, b_i], so that \lim_{j \to \infty} \ell_i(s_1^{n_j}, s_2^{n_j}) = +\infty. If s_1^* = s_2^*, then there exists some integer j^* such that s_1^{n_j}, s_2^{n_j} \in [s_1^* - \varepsilon_{i0}, s_1^* + \varepsilon_{i0}] when j \ge j^*. In view of (12), we can derive

0 \le \frac{g_i(s_1^{n_j}) - g_i(s_2^{n_j})}{s_1^{n_j} - s_2^{n_j}} = \ell_i(s_1^{n_j}, s_2^{n_j}) \le \ell_{i0}

which contradicts \ell_i(s_1^{n_j}, s_2^{n_j}) \to +\infty as j \to \infty. If s_1^* \neq s_2^*, based on the continuity of the function g_i(\cdot) on [a_i, b_i], clearly [g_i(s_1^*) - g_i(s_2^*)]/(s_1^* - s_2^*) is finite; that is, \ell_i(s_1^*, s_2^*) is finite, contradicting \ell_i(s_1^{n_j}, s_2^{n_j}) \to +\infty. Therefore, there exists a finite number M_i(a_i, b_i) > 0 such that (13) is true. Consequently, 0 \le D^+ g_i(s) \le M_i(a_i, b_i) for any s \in [a_i, b_i].

Similar to the proof of [21, Lemma 1], we have the following lemma.

Lemma 4: If 0 \le D^+ g_i(s) \le M_i(a_i, b_i) for any s \in [a_i, b_i], then

\int_v^u [g_i(s) - g_i(v)]\, ds \ge \frac{[g_i(u) - g_i(v)]^2}{2 M_i(a_i, b_i)}

for all u, v \in [a_i, b_i], where M_i(a_i, b_i), a_i, and b_i are finite.

Theorem 4: Let g \in LL. If -W \in \mathcal{LDSS}, then the neural network (1) is GES with its exponential convergence rate being at least d_{\min}/2, where d_{\min} := \min_{1 \le i \le n} d_i. In addition, if for any u, v \in R there exists m \in (0.5, 1] such that

\int_v^u [g_i(s) - g_i(v)]\, ds \le \frac{1}{2m}[g_i(u) - g_i(v)](u - v)   (14)

then the lower bound of the exponential convergence rate is m d_{\min}.
Proof: According to [16, Th. 1], the neural network (1) has a unique GAS equilibrium x^* for any given I \in R^n. By definition, -W \in \mathcal{LDSS} means that there exists a P = diag(p_1, \ldots, p_n) > 0 such that [PW]^S \le 0. Define the function

V(x(t)) = \sum_{i=1}^{n} p_i \int_{x_i^*}^{x_i(t)} [g_i(\xi) - g_i(x_i^*)]\, d\xi.   (15)

Computing the time derivative of V(x(t)) along the positive-half trajectory of (1), we have

dV(x(t))/dt = -\sum_{i=1}^{n} p_i d_i [g_i(x_i(t)) - g_i(x_i^*)](x_i(t) - x_i^*) + \sum_{i=1}^{n}\sum_{j=1}^{n} p_i w_{ij} [g_i(x_i(t)) - g_i(x_i^*)][g_j(x_j(t)) - g_j(x_j^*)]
\le -d_{\min} \sum_{i=1}^{n} p_i [g_i(x_i(t)) - g_i(x_i^*)](x_i(t) - x_i^*) + [g(x(t)) - g(x^*)]^T [PW]^S [g(x(t)) - g(x^*)]
\le -d_{\min} \sum_{i=1}^{n} p_i [g_i(x_i(t)) - g_i(x_i^*)](x_i(t) - x_i^*)
\le -d_{\min} \sum_{i=1}^{n} p_i \int_{x_i^*}^{x_i(t)} [g_i(\xi) - g_i(x_i^*)]\, d\xi
= -d_{\min} V(x(t)).   (16)

Hence V(x(t)) \le V(x_0)\exp(-d_{\min} t) for all t \ge 0. According to [16, Th. 1], for all x_0 \in R^n we have \lim_{t \to \infty} x(t) = x^*. Thus, for any given x_0 there exists a compact set \Omega := \{x \in R^n \mid a_i(x_0, x^*) \le x_i \le b_i(x_0, x^*),\ i = 1, \ldots, n\} such that x(t) \in \Omega for all t \ge 0 and x^* \in \Omega, where a_i(x_0, x^*) and b_i(x_0, x^*) are finite. Then, according to Lemma 3, there exists a positive number M_i(a_i(x_0, x^*), b_i(x_0, x^*)) such that 0 \le D^+ g_i(s) \le M_i(a_i(x_0, x^*), b_i(x_0, x^*)) for any s \in [a_i(x_0, x^*), b_i(x_0, x^*)], i = 1, 2, \ldots, n. From Lemma 4, it follows that, for all u, v \in [a_i(x_0, x^*), b_i(x_0, x^*)],

\int_v^u [g_i(s) - g_i(v)]\, ds \ge \frac{[g_i(u) - g_i(v)]^2}{2 M_i(a_i(x_0, x^*), b_i(x_0, x^*))}.

Let \hat{M}(x_0, x^*) := 2 \max_{1 \le i \le n} M_i(a_i(x_0, x^*), b_i(x_0, x^*)). Then

[g_i(x_i(t)) - g_i(x_i^*)]^2 \le \hat{M}(x_0, x^*) \int_{x_i^*}^{x_i(t)} [g_i(\xi) - g_i(x_i^*)]\, d\xi, \qquad \forall t \ge 0,\ i = 1, \ldots, n.   (17)

Let M(x_0, x^*) := \max_{1 \le i \le n} \hat{M}(x_0, x^*)/p_i. From (16) and (17), we have [g_i(x_i(t)) - g_i(x_i^*)]^2 \le M(x_0, x^*) V(x(t)) \le M(x_0, x^*) V(x_0)\exp(-d_{\min} t); i.e., for all t \ge 0,

|g_i(x_i(t)) - g_i(x_i^*)| \le \sqrt{M(x_0, x^*) V(x_0)}\, \exp\!\left(-\frac{d_{\min}}{2} t\right), \qquad i = 1, \ldots, n.   (18)

Then, from (1) and (18), it follows that

D^+|x_i(t) - x_i^*| \le -d_i |x_i(t) - x_i^*| + \sum_{j=1}^{n} |w_{ij}|\, |g_j(x_j(t)) - g_j(x_j^*)|
\le -d_i |x_i(t) - x_i^*| + \sqrt{M(x_0, x^*) V(x_0)} \sum_{j=1}^{n} |w_{ij}| \exp\!\left(-\frac{d_{\min}}{2} t\right).   (19)

So

|x_i(t) - x_i^*| \le |x_i(0) - x_i^*|\exp(-d_i t) + \sqrt{M(x_0, x^*) V(x_0)} \sum_{j=1}^{n} |w_{ij}| \int_0^t \exp(-d_i(t - s))\exp\!\left(-\frac{d_{\min}}{2} s\right) ds
\le \left[\|x_0 - x^*\| + \frac{2\sqrt{M(x_0, x^*) V(x_0)}}{2 d_i - d_{\min}} \sum_{j=1}^{n} |w_{ij}|\right] \exp\!\left(-\frac{d_{\min}}{2} t\right), \qquad \forall t \ge 0,\ i = 1, \ldots, n   (20)

where \|x_0 - x^*\| := \max_{1 \le i \le n} |x_i(0) - x_i^*|. The above inequality implies that the neural network (1) is GES at an exponential convergence rate of at least d_{\min}/2.

If (14) is satisfied, then, similar to the derivation of (16), we have dV(x(t))/dt \le -2 m d_{\min} V(x(t)). In (19) and (20), replacing d_{\min}/2 with m d_{\min}, we derive

|x_i(t) - x_i^*| \le \left[\|x_0 - x^*\| + \frac{\sqrt{M(x_0, x^*) V(x_0)}}{d_i - m d_{\min}} \sum_{j=1}^{n} |w_{ij}|\right] \exp(-m d_{\min} t), \qquad t \ge 0   (21)

which implies that m d_{\min} is a lower bound of the exponential convergence rate.

Many l.l.c. and monotone nondecreasing functions satisfy (14). For example, if g_i is a linear function, then (14) holds with m = 1. In fact, the left-hand side of (14) represents the area under g_i(x) between u and v, and the right-hand side of (14) is the area of the rectangle determined by u, v, g_i(u), and g_i(v), divided by 2m.

Remark 1: As shown in Example 1 of [18], a GAS neural network (1) may not be GES. However, Theorem 4 shows that the condition for GAS in [16, Th. 1] can guarantee the neural network (1) to be GES. Based on \mathcal{LDS} \subset \mathcal{LDSS}, it can be seen that Theorem 4 extends [13, Th. 2].
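Condition (14) can be probed numerically for a candidate activation on a bounded range; the following sketch (an illustration with assumed helper names, not part of the note) estimates the largest admissible m by sampling pairs (u, v). Note that the estimate covers only the sampled range, whereas (14) is required for all u, v in R.

```python
import numpy as np

def estimate_m(g, lo, hi, num=120, quad=200):
    """Estimate the largest m for which (14) holds over sampled u, v in [lo, hi]."""
    xs = np.linspace(lo, hi, num)
    worst = 0.0
    for k, v in enumerate(xs):
        for u in xs[k + 1:]:
            s = np.linspace(v, u, quad)
            f = g(s) - g(v)
            area = np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(s))   # trapezoidal rule
            rect = (g(u) - g(v)) * (u - v)
            if rect > 1e-12:
                worst = max(worst, area / rect)
    return 0.5 / worst if worst > 0 else None

print(estimate_m(lambda s: 2.0 * s, -5.0, 5.0))   # linear activation: expect about 1.0
print(estimate_m(np.tanh, -5.0, 5.0))             # saturating activation: a value between 0.5 and 1
```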
If we know that (1) has an equilibrium x^* for a given input vector I and activation function g \in PL, we can use the following sufficient condition for GES, which is weaker than existing ones.

Theorem 5: Let g \in PL. Suppose the neural network (1) has an equilibrium x^* for a given input vector I. If -W + D[L(x^*)]^{-1} \in \mathcal{LDS}, then the neural network (1) is GES at an exponential convergence rate of at least \beta/2, where \beta \in (0, d_{\min}) is decided later, L(x^*) = diag(\ell_1(x_1^*), \ell_2(x_2^*), \ldots, \ell_n(x_n^*)), and

0 \le \frac{g_i(x_i) - g_i(x_i^*)}{x_i - x_i^*} \le \ell_i(x_i^*), \qquad \forall x_i \in R,\ x_i \neq x_i^*,\ i = 1, \ldots, n.   (22)

Proof: Since -W + D[L(x^*)]^{-1} \in \mathcal{LDS}, there exists P = diag(p_1, \ldots, p_n) > 0 such that [P(-W + D[L(x^*)]^{-1})]^S > 0, from which it follows that there must exist a constant \beta \in (0, d_{\min}) such that \tilde{W} := [P(W - (D - \beta I_{n \times n})[L(x^*)]^{-1})]^S \le 0. Now consider the function V(x(t)) as defined in (15). Based on (22) and \tilde{W} \le 0, computing the time derivative of V(x(t)) along the positive-half trajectory of (1), we have, for t \ge 0,

dV(x(t))/dt = -\sum_{i=1}^{n} p_i d_i [g_i(x_i(t)) - g_i(x_i^*)](x_i(t) - x_i^*) + \sum_{i=1}^{n}\sum_{j=1}^{n} p_i w_{ij} [g_i(x_i(t)) - g_i(x_i^*)][g_j(x_j(t)) - g_j(x_j^*)]
= -\beta \sum_{i=1}^{n} p_i [g_i(x_i(t)) - g_i(x_i^*)](x_i(t) - x_i^*) - \sum_{i=1}^{n} p_i (d_i - \beta)[g_i(x_i(t)) - g_i(x_i^*)](x_i(t) - x_i^*) + \sum_{i=1}^{n}\sum_{j=1}^{n} \frac{p_i w_{ij} + w_{ji} p_j}{2} [g_i(x_i(t)) - g_i(x_i^*)][g_j(x_j(t)) - g_j(x_j^*)]
\le -\beta \sum_{i=1}^{n} p_i [g_i(x_i(t)) - g_i(x_i^*)](x_i(t) - x_i^*) + [g(x(t)) - g(x^*)]^T \tilde{W} [g(x(t)) - g(x^*)]
\le -\beta \sum_{i=1}^{n} p_i [g_i(x_i(t)) - g_i(x_i^*)](x_i(t) - x_i^*)
\le -\beta \sum_{i=1}^{n} p_i \int_{x_i^*}^{x_i(t)} [g_i(\xi) - g_i(x_i^*)]\, d\xi
= -\beta V(x(t)).

Similar to the last part of the proof of [21, Th. 1], we can show that the neural network (1) is GES at an exponential convergence rate of at least \beta/2.

Remark 2: Since M > 0 implies M \in \mathcal{LDS}, we can see that the results in [22] are special cases of [13, Th. 3] when the scalar parameter specified there is not an eigenvalue of M. As shown in the proof of [13, Th. 3], in that case [13, Th. 3] is a direct application of the LDS result [13, Th. 1]; hence the results in [22] are then special cases of the LDS result [13, Th. 1]. On the other hand, [14, Th. 1], the LDS results ([21, Ths. 1–4] or [13, Th. 1]), and [1, Th. 1] are essentially special cases of Theorem 5 in terms of the connection weight matrix W and/or the activation function g(x). Therefore, Theorem 5 extends the existing GES results to more general cases in terms of the connection weight matrix W and/or the activation function g(x).
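As a numerical companion to Theorem 5 (again an illustration, not part of the original analysis), one can estimate L(x^*) for a saturating activation by sampling the difference quotient in (22) and then test the LDS hypothesis with a candidate diagonal P; the sampling range, the candidate P, and the helper names are assumptions.

```python
import numpy as np

def cnn_act(x):
    # Piecewise-linear CNN activation used in Example 4: g(x) = (|x + 1| - |x - 1|) / 2.
    return (np.abs(x + 1.0) - np.abs(x - 1.0)) / 2.0

def estimate_L_star(g, x_star, span=50.0, num=20001):
    """Sample the difference quotient in (22) to estimate ell_i(x_i^*) componentwise."""
    ell = []
    for xs in np.atleast_1d(x_star):
        grid = xs + np.linspace(-span, span, num)
        grid = grid[np.abs(grid - xs) > 1e-8]
        ell.append(np.max((g(grid) - g(xs)) / (grid - xs)))
    return np.diag(ell)

# Data of Example 4: D = I, W as below, equilibrium x* = (3, -3)^T.
D = np.eye(2)
W = np.array([[1.5, 0.5], [0.0, 1.5]])
x_star = np.array([3.0, -3.0])
L_star = estimate_L_star(cnn_act, x_star)          # expected: diag(0.5, 0.5)
A = -W + D @ np.linalg.inv(L_star)
P = np.eye(2)                                      # candidate diagonal P (assumed)
S = (P @ A + A.T @ P) / 2.0
print(np.round(np.diag(L_star), 3), np.linalg.eigvalsh(S))  # eigenvalues of [PA]^S should be > 0
```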
Fig. 1. Phase plot of convergent positive-half trajectories in Example 1 with a = 2, I = (3, 1)^T, and g_i(x_i) = x_i (i = 1, 2).
V. ILLUSTRATIVE EXAMPLES

Example 1: Consider a neural network (1) with g \in LL and

D = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad W = \begin{bmatrix} -a & 0 \\ -a & 0 \end{bmatrix}, \qquad a > 0.

For any P = diag(p_1, p_2) > 0, \det(-(PW + W^T P)) = -a^2 p_2^2 < 0, which implies -W \notin \mathcal{LDSS}. Hence, [16, Th. 2] or Theorem 4 presented previously is invalid in analyzing the GAS of this neural network. However, we may employ Theorem 1 to analyze GAS. It is easy to see that

W_1 = WE_1 = \begin{bmatrix} -a & 0 \\ -a & 0 \end{bmatrix} = W, \qquad W_2 = WE_2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.

Select P = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix} > 0. Then

PW_1 + W_1^T P = \begin{bmatrix} -2a & 0 \\ 0 & 0 \end{bmatrix} \le 0, \qquad PW_2 + W_2^T P = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \le 0

and -(PD + DP) = -2P < 0. According to Theorem 1, the neural network is GAS for any g \in LL. Fig. 1 shows that the trajectories from random initial points converge to the unique equilibrium x^* = (1, -1)^T.

Example 2: Consider a neural network (1) with g \in GL, \overline{\ell}_i = 1 (i = 1, 2), and

D = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad W = \begin{bmatrix} 1 & 1 \\ -1 & 0 \end{bmatrix}.

Let L = diag(\overline{\ell}_1, \overline{\ell}_2) = diag(1, 1). Because the diagonal elements of W are nonnegative and

-W + DL^{-1} = \begin{bmatrix} 0 & -1 \\ 1 & 1 \end{bmatrix} \notin \mathcal{LDS}

neither the H-matrix result ([14, Th. 1]), the LDS results ([21, Ths. 1–4] or [13, Th. 1]), nor the ADS result ([1, Th. 1]) is applicable to ascertain the stability of this neural network. Let

P = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} > 0.

Then

-Q = P(-DL^{-1} + W) + (-DL^{-1} + W)^T P = \begin{bmatrix} -1 & -1 \\ 0 & -1 \end{bmatrix} < 0

and \frac{1}{2}\lambda_{\min}(Q + Q^T) = \frac{1}{2}. Moreover, \|PW\|_2 + \|W^T P\|_2 = 2.618. According to Theorem 3, if \Delta\ell < 0.5/2.618 = 0.191 or, equivalently, \underline{\ell}_i > 1 - 0.191 = 0.809 (i = 1, 2), then this neural network is GAS for any g \in GL.

When [9, Th. 6] is employed in this example, we need to solve a 4 \times 4 linear matrix inequality. However, according to Theorem 3, when \Delta\ell < 0.191 we only need to find a 2 \times 2 matrix P > 0 such that inequality (11) holds. Obviously, the latter is simpler than the former as far as computational complexity is concerned. As shown in Example 1, Theorem 1 is different from [16, Th. 2] or Theorem 4 above when g \in LL. As illustrated in Example 2, Theorem 3 improves upon [9, Th. 6] as far as computational complexity is concerned.

Example 3: Consider the neural network (1) with g \in LL and

D = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad W = \begin{bmatrix} -1 & 1 \\ -1 & 0 \end{bmatrix}.

Let P = D > 0; then

PW + W^T P = \begin{bmatrix} -2 & 0 \\ 0 & 0 \end{bmatrix} \le 0.

Consequently, -W \in \mathcal{LDSS}. According to [16, Th. 2], this neural network is GAS for any g \in LL. However, based on Theorem 4, this neural network is GES for any g \in LL. Fig. 2 shows that the trajectories from random initial points exponentially converge to the unique equilibrium x^* = (1, 1)^T.

Fig. 2. Exponential convergence of positive-half trajectories in Example 3 with I = (1, 2)^T and g_i(x_i) = x_i (i = 1, 2).
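The scalar quantities quoted in Example 2 can be reproduced with a few lines of linear algebra; the following is an illustrative check only (variable names are assumptions), using the matrices and the P given above.

```python
import numpy as np

# Example 2 data: D = I, W as below, L = diag(1, 1), and the (nonsymmetric) P from the text.
D = np.eye(2)
W = np.array([[1.0, 1.0], [-1.0, 0.0]])
P = np.array([[1.0, 1.0], [0.0, 1.0]])

A = -D + W                                   # equals -D L^{-1} + W since L = I
negQ = P @ A + A.T @ P                       # this is -Q in Theorem 3
Q = -negQ
half_lam = 0.5 * np.min(np.linalg.eigvalsh(Q + Q.T))
norm_sum = np.linalg.norm(P @ W, 2) + np.linalg.norm(W.T @ P, 2)

print(half_lam)                              # expected: 0.5
print(round(norm_sum, 3))                    # expected: 2.618
print(round(half_lam / norm_sum, 3))         # admissible Delta-ell bound, about 0.191
```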
Fig. 3. Exponential convergence of positive-half trajectories in Example 4.
Example 4: Consider a cellular neural network (CNN) of the form (1) with g_i(x_i) = (|x_i + 1| - |x_i - 1|)/2, i = 1, 2, and

D = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad W = \begin{bmatrix} 1.5 & 0.5 \\ 0 & 1.5 \end{bmatrix}, \qquad I = \begin{bmatrix} 2 \\ -1.5 \end{bmatrix}.

In this case, \underline{\ell}_i = 0, \overline{\ell}_i = 1, and L = diag(\overline{\ell}_1, \overline{\ell}_2) = diag(1, 1). Since W > 0 and

-W + DL^{-1} = \begin{bmatrix} -0.5 & -0.5 \\ 0 & -0.5 \end{bmatrix} \notin \mathcal{LDS}

neither the H-matrix result ([14, Th. 1]), the LDS results ([21, Ths. 1–4] or [13, Th. 1]), nor the ADS result ([1, Th. 1]) can be applied to ascertain the stability of the above CNN. We can obtain the equilibrium x^* = (3, -3)^T and L(x^*) = diag(\ell_1(x_1^*), \ell_2(x_2^*)) = diag(0.5, 0.5). Thus,

-W + D[L(x^*)]^{-1} = \begin{bmatrix} 0.5 & -0.5 \\ 0 & 0.5 \end{bmatrix} \in \mathcal{LDS}.
According to Theorem 5, x^* is GES. Fig. 3 shows that the positive-half trajectories of the CNN from random initial points converge exponentially to x^*.

VI. CONCLUSION

In this note, we present new results on GAS and GES of continuous-time recurrent neural networks with Lipschitz continuous and monotone nondecreasing activation functions. First, three sufficient conditions for GAS of the neural networks are given. These testable sufficient conditions differ from and improve upon the existing GAS conditions. Next, we extend an existing GAS result to a GES one and also extend the existing GES results to more general cases with less restrictive connection weight matrices and/or with partially Lipschitz activation functions.

REFERENCES

[1] S. Arik and V. Tavsanoglu, “A comment on ‘Comments on necessary and sufficient condition for absolute stability of neural networks’,” IEEE Trans. Circuits Syst. I, vol. 45, pp. 595–596, May 1998.
[2] G. Avitabile, M. Forti, S. Manetti, and M. Marini, “On a class of nonsymmetrical neural networks with application to ADC,” IEEE Trans. Circuits Syst., vol. 38, pp. 202–209, Feb. 1991.
[3] M. Forti, S. Manetti, and M. Marini, “A condition for global convergence of a class of symmetric neural networks,” IEEE Trans. Circuits Syst. I, vol. 39, pp. 480–483, June 1992.
[4] M. Forti, S. Manetti, and M. Marini, “Necessary and sufficient condition for absolute stability of neural networks,” IEEE Trans. Circuits Syst. I, vol. 41, pp. 491–494, July 1994.
[5] M. Forti and A. Tesi, “New conditions for global stability of neural networks with application to linear and quadratic programming problems,” IEEE Trans. Circuits Syst. I, vol. 42, pp. 354–366, July 1995.
[6] Z. H. Guan, G. R. Chen, and Y. Qin, “On equilibria, stability and instability of Hopfield neural networks,” IEEE Trans. Neural Networks, vol. 11, pp. 534–540, Mar. 2000.
[7] M. W. Hirsch, “Convergent activation dynamics in continuous time networks,” Neural Networks, vol. 2, pp. 331–349, 1989.
[8] J. J. Hopfield, “Neurons with graded response have collective computational properties like those of two-state neurons,” Proc. Nat. Acad. Sci., vol. 81, pp. 3088–3092, 1984.
[9] J. C. Juang, “Stability analysis of Hopfield-type neural networks,” IEEE Trans. Neural Networks, vol. 10, pp. 1366–1374, Dec. 1999.
[10] E. Kaszkurewicz and A. Bhaya, “On a class of globally stable neural circuits,” IEEE Trans. Circuits Syst. I, vol. 41, pp. 171–174, Feb. 1994.
[11] E. Kaszkurewicz and A. Bhaya, “Comments on ‘Necessary and sufficient condition for absolute stability of neural networks’,” IEEE Trans. Circuits Syst. I, vol. 42, pp. 497–499, Aug. 1995.
[12] D. G. Kelly, “Stability in contractive nonlinear neural networks,” IEEE Trans. Biomed. Eng., vol. 37, pp. 231–242, Mar. 1990.
[13] X. B. Liang and J. Si, “Global exponential stability of neural networks with globally Lipschitz continuous activations and its application to linear variational inequality problem,” IEEE Trans. Neural Networks, vol. 12, pp. 349–359, Mar. 2001.
[14] X. B. Liang and J. Wang, “Absolute exponential stability of neural networks with a general class of activation functions,” IEEE Trans. Circuits Syst. I, vol. 47, pp. 1258–1263, Aug. 2000.
[15] X. B. Liang, “A comment on ‘On equilibria, stability and instability of Hopfield neural networks’,” IEEE Trans. Neural Networks, vol. 11, pp. 1506–1507, Dec. 2000.
[16] X. B. Liang and L. D. Wu, “Comments on ‘New conditions for global stability of neural networks with application to linear and quadratic programming problems’,” IEEE Trans. Circuits Syst. I, vol. 44, pp. 1099–1101, Nov. 1997.
[17] X. B. Liang and T. Yamaguchi, “Necessary and sufficient conditions for absolute exponential stability of Hopfield-type neural networks,” IEICE Trans. Inf. Syst., vol. E79-D, pp. 990–993, 1996.
[18] X. B. Liang and L. D. Wu, “Global exponential stability of a class of neural circuits,” IEEE Trans. Circuits Syst. I, vol. 46, pp. 748–751, June 1999.
[19] K. Matsuoka, “Stability conditions for nonlinear continuous neural networks with asymmetric connection weights,” Neural Networks, vol. 5, no. 3, pp. 495–500, 1992.
[20] T. Roska, “Some qualitative aspects of neural computing systems,” in Proc. 1988 IEEE ISCAS, Helsinki, Finland, 1989, pp. 751–754.
[21] Y. Zhang, P. A. Heng, and A. W. C. Fu, “Estimate of exponential convergence rate and exponential stability for neural networks,” IEEE Trans. Neural Networks, vol. 10, pp. 1487–1493, Dec. 1999.
[22] Y. Xia and J. Wang, “Global asymptotic and exponential stability of a dynamic neural system with asymmetric connection weights,” IEEE Trans. Automat. Contr., vol. 46, pp. 635–638, Apr. 2001.
[23] E. N. Sanchez and J. P. Perez, “Input-to-state stability (ISS) analysis for dynamic neural networks,” IEEE Trans. Circuits Syst. I, vol. 46, pp. 1395–1398, Nov. 1999.