Neurocomputing 77 (2012) 222–228
Attracting and quasi-invariant sets of non-autonomous neural networks with delays

Daoyi Xu a, Shujun Long a,b,*
a Yangtze Center of Mathematics, Sichuan University, Chengdu 610064, PR China
b College of Mathematics and Information Science, Leshan Teachers College, Leshan 614004, PR China
Article history: Received 23 December 2010; received in revised form 10 April 2011; accepted 7 September 2011; available online 19 September 2011. Communicated by H. Jiang.

Abstract

In this paper, a class of non-autonomous neural networks with delays is considered. By using the properties of the spectral radius of a nonnegative matrix, two new integral inequalities are established. Based on these integral inequalities, some new sufficient conditions for the existence of quasi-invariant and attracting sets of non-autonomous neural networks with delays are obtained, and the framework of the quasi-invariant and attracting sets is given. The results extend and improve earlier publications. One example is presented to illustrate the effectiveness of our conclusions.

© 2011 Elsevier B.V. All rights reserved.

Keywords: Quasi-invariant sets; Attracting sets; Stability; Delays; Integral inequality
1. Introduction

Recently, there has been increasing interest in the study of the stability and asymptotic behavior of nonlinear neural networks with delays and a unique equilibrium, and many significant results have been obtained [2–4,9,12,13,21,23–25]. However, an equilibrium point sometimes does not exist in many real physical systems, especially in nonlinear and non-autonomous neural networks with delays. Therefore, an interesting subject is to discuss the attracting and invariant sets of neural networks with delays. Significant progress has been made in the techniques and methods of determining invariant and attracting sets for dynamical systems, including ordinary differential equations, partial differential equations, delay differential equations, impulsive functional differential equations, stochastic functional differential equations and so on [5,7,8,11,15–20,22]. As is well known, one of the most popular ways to analyze stability properties and asymptotic behavior is to construct suitable Lyapunov functions [11]. Unfortunately, the construction of a suitable Lyapunov function is usually not an easy task. Another important tool for investigating the dynamical behavior of differential equations is the differential inequality or integral inequality method [10]. These inequality methods directly yield estimates of the solutions of the
* Corresponding author at: Yangtze Center of Mathematics, Sichuan University, Chengdu 610064, PR China. E-mail address: [email protected] (S. Long).
0925-2312/$ - see front matter © 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.neucom.2011.09.004
systems, but their proofs are rather technical. Recently, by using the differential inequality technique, Xu et al. studied the attracting and invariant sets for a class of delay differential systems [5,8], for impulsive functional differential equations [17] and for impulsive stochastic functional differential equations [16]. In addition, Xu and Zhao studied the invariant and attracting sets of nonlinear differential equations with delays by the integral inequality technique [18,19]. However, all the systems concerning the invariant and attracting sets mentioned above are autonomous. Refs. [7,15,20] investigated the attracting and invariant sets of non-autonomous systems with time-varying delays, but their conditions require a common factor of the coefficients, which is restrictive. Therefore, techniques and methods for the attracting and invariant sets of non-autonomous delay differential equations should be developed and explored.

Motivated by the above discussion, the main aim of this paper is to establish new integral inequalities and use them to investigate the quasi-invariant and attracting sets of non-autonomous neural networks with delays. Based on these integral inequalities, some new sufficient conditions for the existence of quasi-invariant and attracting sets are obtained, and the framework of the quasi-invariant and attracting sets is given. The results extend and improve earlier publications.
2. Preliminaries

Let $R^n$ be the space of $n$-dimensional real column vectors, $N \triangleq \{1,2,\ldots,n\}$, $R_+ \triangleq [0,+\infty)$, and let $R^{m\times n}$ denote the set of $m\times n$ real matrices. $E$ denotes the $n\times n$ unit matrix. For $A, B \in R^{m\times n}$ or $A, B \in R^{n}$, the notation $A \ge B$ ($A > B$) means that each pair of corresponding elements of $A$ and $B$ satisfies the inequality "$\ge$" ("$>$"). In particular, $A \in R^{m\times n}$ is called a nonnegative matrix if $A \ge 0$, and $z \in R^n$ is called a positive vector if $z > 0$. Let $\rho(A)$ denote the spectral radius of a nonnegative square matrix $A$. $C[X,Y]$ denotes the space of continuous mappings from the topological space $X$ to the topological space $Y$; in particular, $C \triangleq C[(-\infty,0],R^n]$ denotes the family of all bounded continuous $R^n$-valued functions $\varphi$ defined on $(-\infty,0]$.

For any $x \in R^n$, $A \in R^{n\times n}$, $\varphi \in C$, we define
$$[x]^+ = (|x_1|,\ldots,|x_n|)^T,\qquad [A]^+ = (|a_{ij}|)_{n\times n},\qquad [x(t)]^+_{\tau(t)} = (\|x_1(t)\|_{\tau(t)},\ldots,\|x_n(t)\|_{\tau(t)})^T,$$
where $\|x_i(t)\|_{\tau(t)} = \sup_{0\le s\le \tau(t)}|x_i(t-s)|$, $i = 1,\ldots,n$. In particular, when $\tau(t) = +\infty$, we write $[x(t)]^+_{\tau(t)} = [x(t)]^+_{\infty}$.

In this paper, the non-autonomous neural networks with delays are defined by the following state equation:
$$\begin{cases}
\dfrac{dx_i(t)}{dt} = -c_i(t)x_i(t) + \displaystyle\sum_{j=1}^{n} a_{ij}(t)f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij}(t)f_j(x_j(t-\tau(t))) + I_i(t),\\[4pt]
x_i(t_0+s) = \phi_i(s),\quad -\infty < s \le 0,\ i = 1,\ldots,n,
\end{cases}\tag{1}$$
where $n$ corresponds to the number of units in the network, $x_i(t)$ is the state of the $i$th unit at time $t$, $f_j(x_j)$ is the activation function of the $j$th unit, $c_i(t)$ represents the rate with which the $i$th unit resets its potential to the resting state in isolation when disconnected from the network and external inputs, $\tau(t) \in C[R,R_+]$ is the transmission delay, which satisfies $\lim_{t\to+\infty}(t-\tau(t)) = +\infty$, $a_{ij}(t)$ and $b_{ij}(t)$ denote the strength of the $j$th unit at time $t$ and $t-\tau(t)$, respectively, and $I_i(t)$ is the external bias on the $i$th unit. We always assume that the functions $c_i(t)$, $a_{ij}(t)$, $b_{ij}(t)$ and $I_i(t)$ are continuous for $t \in R$, $i,j = 1,\ldots,n$.

Throughout the paper, we assume that for any $\phi \in C$, system (1) has at least one solution through $(t_0,\phi)$, denoted by $x(t,t_0,\phi)$ or $x_t(t_0,\phi)$ (simply $x(t)$ and $x_t$ if no confusion occurs), where $x_t(t_0,\phi) = x(t+s,t_0,\phi) \in C$, $s \in (-\infty,0]$.
Definition 1 (Xu [14]). $f(t,s) \in UC_t$ means that $f \in C[R_+\times R,R_+]$ and that for any given $\alpha$ and any $\varepsilon > 0$ there exist positive numbers $B$, $T$ and $A$ satisfying
$$\int_{\alpha}^{t} f(t,s)\,ds \le B,\qquad \int_{\alpha}^{t-T} f(t,s)\,ds < \varepsilon,\qquad \forall t \ge A. \tag{2}$$
In particular, $f(t,s) \in UC_t$ if $f(t,s) = f(t-s)$ and $\int_{0}^{\infty} f(u)\,du < \infty$.

Definition 2 (Berman and Plemmons [1]). Let the matrix $D = (d_{ij})_{n\times n}$ have non-positive off-diagonal elements (i.e., $d_{ij} \le 0$, $i \ne j$); then each of the following conditions is equivalent to the statement that $D$ is a nonsingular M-matrix:
(i) all the leading principal minors of $D$ are positive;
(ii) $D = C - G$ and $\rho(C^{-1}G) < 1$, where $G \ge 0$, $C = \mathrm{diag}\{c_1,\ldots,c_n\}$;
(iii) the diagonal elements of $D$ are all positive and there exists a positive vector $d$ such that $Dd > 0$ or $D^T d > 0$.

Definition 3. A set $Q \subset C$ is called a quasi-invariant set of system (1) if there exist a matrix $W \ge 0$ and a vector $b \ge 0$ such that, for any $\phi \in Q$, there is a vector $z$ such that the solution $x(t) = x_t(t_0,\phi)$ of system (1) satisfies $[x(t)]^+ \le Wz + b$, $t \ge t_0$, whenever $[\phi]^+_{\infty} \le z$. Obviously, the set $Q$ is an invariant set of system (1) if $W = E$ and $b = 0$.

Definition 4. A set $S \subset C$ is called a global attracting set of system (1) if for any initial value $\phi \in C$ the solution $x_t(t_0,\phi)$ converges to $S$ as $t \to +\infty$; that is,
$$\mathrm{dist}(x_t,S) \to 0\quad \text{as } t \to +\infty,$$
where $\mathrm{dist}(\varphi,S) = \inf_{\psi\in S}\mathrm{dist}(\varphi,\psi)$ and $\mathrm{dist}(\varphi,\psi) = \sup_{s\in(-\infty,0]}|\varphi(s)-\psi(s)|$ for $\varphi \in C$.

To prove our results, the following lemma is necessary. For a nonnegative matrix $A \in R^{n\times n}$, the spectral radius $\rho(A)$ is an eigenvalue of $A$, and its eigenspace is denoted by
$$\Omega_{\rho}(A) \triangleq \{z \in R^n \mid Az = \rho(A)z\},$$
which includes all positive eigenvectors of $A$, provided that the nonnegative matrix $A$ has at least one positive eigenvector (see Ref. [6]).

Lemma 1 (Berman and Plemmons [1]). If $A \ge 0$ and $\rho(A) < 1$, then (a) $(E-A)^{-1} \ge 0$; (b) there is a positive vector $z \in \Omega_{\rho}(A)$ such that $(E-A)z > 0$.

3. Delay integral inequalities

Theorem 1. Let $y(t) \in C[R,R^n_+]$ be a solution of the delay integral inequality
$$\begin{cases}
y(t) \le \displaystyle\int_{t_0}^{t} A(t,s)y(s)\,ds + \int_{t_0}^{t} B(t,s)[y(s)]^+_{\tau(s)}\,ds + J, & t \ge t_0,\\[4pt]
y(t) \le \phi(t), & \forall t \in (-\infty,t_0],
\end{cases}\tag{3}$$
where $A(t,s), B(t,s) \in C[R_+\times R,R^{n\times n}_+]$, $J \ge 0$ is a constant vector, $\phi(t) \in C[(-\infty,t_0],R^n_+]$, and the following conditions are satisfied:

(A1) There are constant matrices $P_1 \ge 0$, $P_2 \ge 0$ such that
$$\int_{t_0}^{t} A(t,s)\,ds \le P_1,\qquad \int_{t_0}^{t} B(t,s)\,ds \le P_2,\qquad \forall t \ge t_0.$$

(A2) $P \triangleq P_1 + P_2$ satisfies $\rho(P) < 1$.

Then
$$y(t) \le (E-P)^{-1}J,\qquad t \ge t_0, \tag{4}$$
provided that $\phi(t) \le (E-P)^{-1}J$ for $t \le t_0$.

Proof. From condition (A2) and Lemma 1, we know there is a positive vector $z \in \Omega_{\rho}(P)$ such that
$$Pz = \rho(P)z < z. \tag{5}$$
In order to prove (4), we first prove that for any $\varepsilon > 0$,
$$y(t) < (E-P)^{-1}J + \varepsilon z,\qquad t \ge t_0. \tag{6}$$
Let $e_i = (0,\ldots,0,1,0,\ldots,0)$ with the $1$ in the $i$th position. If (6) is not true, then, from the fact that $\phi(t) \le (E-P)^{-1}J$ and that $y(t)$ is continuous, there must be a $t_1 > t_0$ and an $i \in N$ such that
$$e_i y(t_1) = e_i\big((E-P)^{-1}J + \varepsilon z\big), \tag{7}$$
$$y(t) \le (E-P)^{-1}J + \varepsilon z,\qquad t \le t_1. \tag{8}$$
Hence, it follows from (3), (5) and (8) that
$$y(t_1) \le \int_{t_0}^{t_1} A(t_1,s)y(s)\,ds + \int_{t_0}^{t_1} B(t_1,s)[y(s)]^+_{\tau(s)}\,ds + J
\le P(E-P)^{-1}J + \varepsilon Pz + J = \big(P(E-P)^{-1} + E\big)J + \varepsilon\rho(P)z < (E-P)^{-1}J + \varepsilon z, \tag{9}$$
which contradicts equality (7). So (6) holds for all $t \ge t_0$. Letting $\varepsilon \to 0$ in (6), we obtain (4). The proof is complete. □
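Both parts of Lemma 1 are easy to check numerically. The following sketch (an illustrative check, not part of the paper, with an arbitrarily chosen nonnegative matrix) verifies that $(E-A)^{-1} \ge 0$ and that the Perron eigenvector $z$ is positive with $(E-A)z > 0$:

```python
import numpy as np

# Hypothetical 3x3 nonnegative matrix with spectral radius < 1
# (chosen for illustration; not from the paper).
A = np.array([[0.2, 0.1, 0.0],
              [0.1, 0.3, 0.2],
              [0.0, 0.2, 0.1]])

rho = max(abs(np.linalg.eigvals(A)))   # spectral radius rho(A)
assert rho < 1

E = np.eye(3)
inv = np.linalg.inv(E - A)             # (E - A)^{-1}
assert (inv >= -1e-12).all()           # part (a): (E - A)^{-1} >= 0

# Part (b): a Perron eigenvector z with Az = rho(A) z; this A has
# positive diagonal and an irreducible graph, so z can be taken positive.
w, v = np.linalg.eig(A)
z = np.real(v[:, np.argmax(np.abs(w))])
z = z / z[np.argmax(np.abs(z))]        # fix the sign so all entries are > 0
assert (z > 0).all()
assert ((E - A) @ z > 0).all()         # (E - A) z = (1 - rho) z > 0
```

Since $(E-A)^{-1} = \sum_{k\ge 0} A^k$ when $\rho(A) < 1$, part (a) is exactly the nonnegativity of this Neumann series.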
Theorem 2. Let $y(t) \in C[R,R^n_+]$ be a solution of the delay integral inequality
$$\begin{cases}
y(t) \le M e^{-d\int_{t_0}^{t} h(v)dv}\phi(t_0) + \displaystyle\int_{t_0}^{t} A(t,s)y(s)\,ds + \int_{t_0}^{t} B(t,s)[y(s)]^+_{\tau}\,ds + J, & t \ge t_0,\\[4pt]
y(t) \le \phi(t), & \forall t \in [t_0-\tau,t_0],
\end{cases}\tag{10}$$
where $M$ is a nonnegative matrix, $d > 0$ is a constant, $h(t) \ge 0$ is integrable with $\sup_{t\ge t_0}\int_{t-\tau}^{t} h(s)\,ds = H < \infty$ and $\lim_{t\to\infty}\int_{t_0}^{t} h(s)\,ds = \infty$, $A(t,s), B(t,s) \in C[R_+\times R,R^{n\times n}_+]$, $J \ge 0$ is a constant vector, $\phi(t) \in C[[t_0-\tau,t_0],R^n_+]$ is an $R^n_+$-valued function, and the following conditions are satisfied:

(A3) There are constant matrices $P_3 \ge 0$, $P_4 \ge 0$ and a constant $\gamma > 0$ such that
$$\int_{t_0}^{t} A(t,s)e^{\gamma\int_{s}^{t}h(v)dv}\,ds \le P_3,\qquad \int_{t_0}^{t} B(t,s)e^{\gamma\int_{s}^{t}h(v)dv}\,ds \le P_4,\qquad \forall t \ge t_0.$$

(A4) $P^* \triangleq P_3 + P_4$ satisfies $\rho(P^*) < 1$.

Then there are constants $\lambda > 0$ and $N \ge 1$ such that
$$y(t) \le Nz e^{-\lambda\int_{t_0}^{t}h(v)dv} + (E-P^*)^{-1}J,\qquad t \ge t_0, \tag{11}$$
for any $\phi(t) \in Q_1 = \{[\phi]^+_{\tau} \le z \mid z > 0,\ z \in \Omega_{\rho}(e^{\lambda H}P^* + M/N)\}$.

Proof. From condition (A4), there exist positive constants $\lambda < \min\{\gamma,d\}$ and $N \ge 1$ such that $\rho(e^{\lambda H}P^* + M/N) < 1$. From Lemma 1, we know
$$\Big(e^{\lambda H}P^* + \frac{M}{N}\Big)z < z. \tag{12}$$
In order to prove (11), we first prove that for any $\delta > 1$,
$$y(t) < \delta N z e^{-\lambda\int_{t_0}^{t}h(v)dv} + (E-P^*)^{-1}J,\qquad t \ge t_0. \tag{13}$$
If (13) is not true, then, from the fact that $[\phi]^+_{\tau} \le z$ and that $y(t)$ is continuous, there must be a $t_1 > t_0$ and an $i \in N$ such that
$$e_i y(t_1) = e_i\big(\delta N z e^{-\lambda\int_{t_0}^{t_1}h(v)dv} + (E-P^*)^{-1}J\big), \tag{14}$$
$$y(t) \le \delta N z e^{-\lambda\int_{t_0}^{t}h(v)dv} + (E-P^*)^{-1}J,\qquad t \le t_1, \tag{15}$$
where $e_i = (0,\ldots,0,1,0,\ldots,0)$ with the $1$ in the $i$th position. Hence, it follows from (10), (12) and (15) that
$$\begin{aligned}
y(t_1) &\le M e^{-d\int_{t_0}^{t_1}h(v)dv}\phi(t_0) + \int_{t_0}^{t_1} A(t_1,s)y(s)\,ds + \int_{t_0}^{t_1} B(t_1,s)[y(s)]^+_{\tau}\,ds + J\\
&\le M e^{-d\int_{t_0}^{t_1}h(v)dv}z + \int_{t_0}^{t_1} A(t_1,s)\big(\delta N z e^{-\lambda\int_{t_0}^{s}h(v)dv} + (E-P^*)^{-1}J\big)ds\\
&\quad + \int_{t_0}^{t_1} B(t_1,s)\big(\delta N z e^{-\lambda\int_{t_0}^{s}h(v)dv}e^{\lambda\int_{s-\tau}^{s}h(v)dv} + (E-P^*)^{-1}J\big)ds + J\\
&\le e^{-\lambda\int_{t_0}^{t_1}h(v)dv}\frac{M}{N}\delta N z + \Big(\int_{t_0}^{t_1} A(t_1,s)e^{\lambda\int_{s}^{t_1}h(v)dv}ds + e^{\lambda H}\int_{t_0}^{t_1} B(t_1,s)e^{\lambda\int_{s}^{t_1}h(v)dv}ds\Big)e^{-\lambda\int_{t_0}^{t_1}h(v)dv}\delta N z\\
&\quad + (P_3+P_4)(E-P^*)^{-1}J + J\\
&\le e^{-\lambda\int_{t_0}^{t_1}h(v)dv}\delta N\Big(\frac{M}{N} + P_3 + e^{\lambda H}P_4\Big)z + (E-P^*)^{-1}J\\
&\le e^{-\lambda\int_{t_0}^{t_1}h(v)dv}\delta N\Big(\frac{M}{N} + e^{\lambda H}P^*\Big)z + (E-P^*)^{-1}J\\
&< \delta N z e^{-\lambda\int_{t_0}^{t_1}h(v)dv} + (E-P^*)^{-1}J, 
\end{aligned}\tag{16}$$
which contradicts equality (14). So (13) holds for all $t \ge t_0$. Letting $\delta \to 1$ in (13), we obtain (11). The proof is complete. □

From Theorem 2, we can get the following related result on differential inequalities.

Corollary 1. Let $x(t)$ be a solution of the differential inequality
$$D^+ x(t) \le h(t)\big(-A_0 x(t) + A x(t) + B[x(t)]_{\tau} + \hat I\big),\qquad t \ge t_0, \tag{17}$$
where $A_0$ is a positive diagonal matrix, $A$ and $B$ are nonnegative matrices, $\hat I \ge 0$, and $h(t)$ satisfies the assumptions in Theorem 2. Let $P_0 = A_0^{-1}(A+B)$. If $\rho(P_0) < 1$, then
$$x(t) \le kz e^{-\lambda\int_{t_0}^{t}h(v)dv} + (E-P_0)^{-1}J,\qquad t \ge t_0, \tag{18}$$
provided that the initial conditions satisfy
$$x(\theta) \le kz e^{-\lambda\int_{t_0}^{\theta}h(v)dv} + (E-P_0)^{-1}J,\qquad t_0-\tau \le \theta \le t_0, \tag{19}$$
where $k \ge 0$, $z \in \Omega_{\rho}\big((A_0-\lambda E)^{-1}(A+Be^{\lambda H})\big)$ and $J = A_0^{-1}\hat I$.

Proof. From (17), by the variation of parameters formula, we can get (10) with $A(t,s) = e^{-A_0\int_{s}^{t}h(v)dv}Ah(s)$, $B(t,s) = e^{-A_0\int_{s}^{t}h(v)dv}Bh(s)$ and $dE \le A_0$; then, by simple computation, we can get the conclusion. □

Remark 1. From $\rho(P_0) < 1$ and Definition 2, we know $A_0 - (A+B)$ is a nonsingular M-matrix, and then we can get Lemma 5 in [7]. When $J(t) \equiv 0$ and $h(t) \equiv 1$, we can get Theorem 2 in [14].

4. Attracting and quasi-invariant sets

For convenience, we denote
$$C_0(t) = \mathrm{diag}\{c_1(t),\ldots,c_n(t)\},\qquad A(t) = (|a_{ij}(t)|)_{n\times n},\qquad B(t) = (|b_{ij}(t)|)_{n\times n},$$
$$J(t) = \mathrm{col}\Big\{\sum_{j=1}^{n}\big(|a_{ij}(t)||f_j(0)| + |b_{ij}(t)||f_j(0)|\big) + |I_i(t)|\Big\}_{i=1}^{n}.$$

For (1), we give the following assumptions:

(H1) For any $x_j \in R$, $j \in N$, there exist nonnegative constants $l_j$ such that $|f_j(x_j) - f_j(0)| \le l_j|x_j|$.

(H2) $c_i(t) \ge 0$, $i \in N$.

(H3) There exist constant matrices $P_5 \ge 0$, $P_6 \ge 0$ and a constant vector $\hat J \ge 0$ such that
$$\int_{t_0}^{t} e^{-\int_{s}^{t}C_0(v)dv}A(s)\,ds \le P_5,\qquad \int_{t_0}^{t} e^{-\int_{s}^{t}C_0(v)dv}B(s)\,ds \le P_6,\qquad \int_{t_0}^{t} e^{-\int_{s}^{t}C_0(v)dv}J(s)\,ds \le \hat J,\qquad \forall t \ge t_0.$$

(H4) $P^{**} \triangleq P_5 + P_6$ satisfies $\rho(P^{**}L) < 1$, where $L = \mathrm{diag}\{l_1,\ldots,l_n\}$.

(H5) $e^{-\int_{s}^{t}C_0(v)dv}A(s) = (w_{ij}(t,s))_{n\times n}$ with $w_{ij}(t,s) \in UC_t$, $e^{-\int_{s}^{t}C_0(v)dv}B(s) = (z_{ij}(t,s))_{n\times n}$ with $z_{ij}(t,s) \in UC_t$, and $\int_{t_0}^{\infty} c_i(s)\,ds = +\infty$.
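For a concrete system, the integral bounds in (H3) and the spectral-radius condition (H4) can be estimated by quadrature. The following sketch does this for a hypothetical scalar instance of (1) with $c(t) = 1 + 0.5\sin t$, $a(t) = 0.2\cos t$, $b(t) = 0.2$, $I(t) = 0.1$ and $l = 1$ (all coefficients are invented for illustration and are not from the paper):

```python
import numpy as np

# Hypothetical scalar coefficients: c(t) = 1 + 0.5 sin t, so
# int_s^t c(v) dv = (t - s) - 0.5 (cos t - cos s) in closed form.
ds = 0.001

def bound(coef, t):
    """Quadrature for int_0^t exp(-int_s^t c(v)dv) * coef(s) ds."""
    s = np.arange(0.0, t, ds)
    C = (t - s) - 0.5 * (np.cos(t) - np.cos(s))   # int_s^t c(v) dv
    return float(np.sum(np.exp(-C) * coef(s)) * ds)

ts = np.linspace(1.0, 40.0, 200)   # sample many upper limits t >= t0 = 0
P5 = max(bound(lambda s: 0.2 * np.abs(np.cos(s)), t) for t in ts)  # |a(s)|
P6 = max(bound(lambda s: 0.2 * np.ones_like(s), t) for t in ts)    # |b(s)|
J_hat = max(bound(lambda s: 0.1 * np.ones_like(s), t) for t in ts) # J(s)

# Scalar case with l = 1: rho(P** L) = P5 + P6, which (H4) requires < 1.
rho = P5 + P6
assert rho < 1
K = J_hat / (1 - rho)   # attracting-set radius (E - P** L)^{-1} J-hat
assert K > 0
```

Sampling the upper limit $t$ over a grid only approximates the suprema in (H3); for a rigorous check one would bound the integrals analytically, as done for the example in Section 5.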
Theorem 3. Assume that (H1)–(H4) hold. Then $Q_2 = \{[\phi]^+_{\infty} \le z \mid z > 0,\ z \in \Omega_{\rho}(P^{**}L)\}$ is a quasi-invariant set of system (1).
Proof. By the variation of parameters formula, combined with (H1), we get
$$\begin{aligned}
|x_i(t)| &\le e^{-\int_{t_0}^{t}c_i(s)ds}|\phi_i(t_0)| + \int_{t_0}^{t} e^{-\int_{s}^{t}c_i(v)dv}\sum_{j=1}^{n}|a_{ij}(s)||f_j(x_j(s))|\,ds\\
&\quad + \int_{t_0}^{t} e^{-\int_{s}^{t}c_i(v)dv}\sum_{j=1}^{n}|b_{ij}(s)||f_j(x_j(s-\tau(s)))|\,ds + \int_{t_0}^{t} e^{-\int_{s}^{t}c_i(v)dv}|I_i(s)|\,ds\\
&\le e^{-\int_{t_0}^{t}c_i(s)ds}|\phi_i(t_0)| + \int_{t_0}^{t} e^{-\int_{s}^{t}c_i(v)dv}\sum_{j=1}^{n}|a_{ij}(s)||f_j(x_j(s))-f_j(0)|\,ds\\
&\quad + \int_{t_0}^{t} e^{-\int_{s}^{t}c_i(v)dv}\sum_{j=1}^{n}|b_{ij}(s)||f_j(x_j(s-\tau(s)))-f_j(0)|\,ds\\
&\quad + \int_{t_0}^{t} e^{-\int_{s}^{t}c_i(v)dv}\Big(\sum_{j=1}^{n}\big(|a_{ij}(s)||f_j(0)| + |b_{ij}(s)||f_j(0)|\big) + |I_i(s)|\Big)ds\\
&\le e^{-\int_{t_0}^{t}c_i(s)ds}|\phi_i(t_0)| + \int_{t_0}^{t} e^{-\int_{s}^{t}c_i(v)dv}\sum_{j=1}^{n}|a_{ij}(s)|l_j|x_j(s)|\,ds\\
&\quad + \int_{t_0}^{t} e^{-\int_{s}^{t}c_i(v)dv}\sum_{j=1}^{n}|b_{ij}(s)|l_j\|x_j(s)\|_{\tau(s)}\,ds + \int_{t_0}^{t} e^{-\int_{s}^{t}c_i(v)dv}J_i(s)\,ds.
\end{aligned}\tag{20}$$
From (20), we get
$$\begin{aligned}
[x(t)]^+ &\le e^{-\int_{t_0}^{t}C_0(v)dv}[\phi(t_0)]^+ + \int_{t_0}^{t} e^{-\int_{s}^{t}C_0(v)dv}A(s)L[x(s)]^+\,ds + \int_{t_0}^{t} e^{-\int_{s}^{t}C_0(v)dv}B(s)L[x(s)]^+_{\tau(s)}\,ds + \int_{t_0}^{t} e^{-\int_{s}^{t}C_0(v)dv}J(s)\,ds\\
&\le \int_{t_0}^{t} e^{-\int_{s}^{t}C_0(v)dv}A(s)L[x(s)]^+\,ds + \int_{t_0}^{t} e^{-\int_{s}^{t}C_0(v)dv}B(s)L[x(s)]^+_{\tau(s)}\,ds + [\phi(t_0)]^+ + \hat J.
\end{aligned}\tag{21}$$
In addition, $[x(t)]^+ = [\phi(t)]^+$ for $t \in (-\infty,t_0]$. Therefore, from Theorem 1 and conditions (H1)–(H4), for any $\phi(t) \in Q_2 = \{[\phi]^+_{\infty} \le z \mid z > 0,\ z \in \Omega_{\rho}(P^{**}L)\}$, we have
$$[x(t)]^+ \le (E-P^{**}L)^{-1}z + (E-P^{**}L)^{-1}\hat J \triangleq Wz + b,\qquad t \ge t_0. \tag{22}$$
By Definition 3, the set $Q_2$ is a quasi-invariant set of system (1). □

Theorem 4. Assume that conditions (H1)–(H5) hold. Then the solutions of (1) are uniformly bounded and $S = \{\phi \in C[(-\infty,t_0],R^n] \mid [\phi]^+_{\infty} \le K = (E-P^{**}L)^{-1}\hat J\}$ is a global attracting set of (1).

Proof. For any given $\phi(t) \in C[(-\infty,t_0],R^n]$, from condition (H4) and Lemma 1, there is a positive vector $z \in \Omega_{\rho}(P^{**}L)$ such that $[\phi]^+_{\infty} \le z$. Then from Theorem 3 we get
$$[x(t)]^+ \le (E-P^{**}L)^{-1}z + (E-P^{**}L)^{-1}\hat J,\qquad t \ge t_0. \tag{23}$$
That is, the solutions of (1) are uniformly bounded. Then there must be a constant vector $\sigma \ge 0$ such that
$$\limsup_{t\to+\infty}[x(t)]^+ = \sigma \le (E-P^{**}L)^{-1}z + (E-P^{**}L)^{-1}\hat J. \tag{24}$$
Next, we prove $\sigma \in S$. For any $\varepsilon > 0$, according to $\lim_{t\to+\infty} e^{-\int_{t_0}^{t}C_0(s)ds} = 0$ and $w_{ij}(t,s), z_{ij}(t,s) \in UC_t$, we know there must be a big enough $T_1 > t_0$ such that, for $t > T_1$,
$$e^{-\int_{t_0}^{t}C_0(s)ds}[\phi(t_0)]^+ < \frac{\varepsilon}{2}I,\qquad \int_{t_0}^{t-T_1} e^{-\int_{s}^{t}C_0(v)dv}A(s)L\big((E-P^{**}L)^{-1}z + (E-P^{**}L)^{-1}\hat J\big)ds < \frac{\varepsilon}{4}I,$$
$$\int_{t_0}^{t-T_1} e^{-\int_{s}^{t}C_0(v)dv}B(s)L\big((E-P^{**}L)^{-1}z + (E-P^{**}L)^{-1}\hat J\big)ds < \frac{\varepsilon}{4}I, \tag{25}$$
where $I = \mathrm{col}\{1,\ldots,1\}$. In addition, according to the definition of the superior limit and $\lim_{t\to+\infty}(t-\tau(t)) = +\infty$, we know there must be a big enough $T_2 \ge T_1$ such that
$$[x(t)]^+ \le \sigma + \varepsilon I,\qquad [x(t)]^+_{r(t)} \le \sigma + \varepsilon I,\qquad t \ge T_2, \tag{26}$$
where $r(t) = \sup_{t-T_1\le s\le t}\tau(s) + T_1$. Therefore, for $t \ge T_2 + T_1$, we get
$$\begin{aligned}
[x(t)]^+ &\le e^{-\int_{t_0}^{t}C_0(v)dv}[\phi(t_0)]^+ + \int_{t_0}^{t} e^{-\int_{s}^{t}C_0(v)dv}A(s)L[x(s)]^+\,ds + \int_{t_0}^{t} e^{-\int_{s}^{t}C_0(v)dv}B(s)L[x(s)]^+_{\tau(s)}\,ds + \hat J\\
&\le \frac{\varepsilon}{2}I + \frac{\varepsilon}{4}I + \frac{\varepsilon}{4}I + \int_{t-T_1}^{t} e^{-\int_{s}^{t}C_0(v)dv}A(s)L(\sigma+\varepsilon I)\,ds + \int_{t-T_1}^{t} e^{-\int_{s}^{t}C_0(v)dv}B(s)L(\sigma+\varepsilon I)\,ds + \hat J\\
&\le \varepsilon I + P^{**}L(\sigma+\varepsilon I) + \hat J.
\end{aligned}\tag{27}$$
Similarly, by the definition of the superior limit, we know there must be a big enough $T_3 \ge T_2$ and a $t \ge T_3$ such that $[x(t)]^+ > \sigma - \varepsilon I$, so we get
$$\sigma - \varepsilon I < \varepsilon I + P^{**}L(\sigma + \varepsilon I) + \hat J. \tag{28}$$
Letting $\varepsilon \to 0$, we get $\sigma \le P^{**}L\sigma + \hat J$, and therefore $\sigma \le (E-P^{**}L)^{-1}\hat J$, that is, $\sigma \in S$. □
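To illustrate Theorems 3 and 4 on the simplest possible case, the following Euler simulation integrates a hypothetical scalar instance of (1) with $c(t) \equiv 1$, $a(t) \equiv 0$, $b(t) = 0.5|\cos t|$, $f(x) = x$ and $I(t) \equiv 0.3$, for which $P_6 \le 0.5$ and $\hat J \le 0.3$, so Theorem 4 predicts the attracting set $\{|\phi| \le (1-0.5)^{-1}\cdot 0.3 = 0.6\}$ (the system and all constants are made up for illustration):

```python
import math

# Euler scheme for  x'(t) = -x(t) + 0.5|cos t| x(t - tau) + 0.3,  tau = 1,
# a hypothetical scalar instance of system (1).
dt, tau, T = 0.001, 1.0, 30.0
lag = int(tau / dt)
n = int(T / dt)
x = [2.0] * (lag + 1)          # constant initial function phi = 2 on [-tau, 0]

for k in range(n):
    t = k * dt
    x_now, x_del = x[-1], x[-1 - lag]
    x.append(x_now + dt * (-x_now + 0.5 * abs(math.cos(t)) * x_del + 0.3))

tail = x[-int(10.0 / dt):]     # last 10 time units: transient has died out
K = 0.3 / (1 - 0.5)            # attracting-set radius from Theorem 4
assert max(abs(v) for v in tail) <= K + 1e-3
```

Despite the initial value 2 lying far outside the attracting set, the trajectory settles below the radius $K = 0.6$, as the theorem guarantees for $t \to +\infty$.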
Corollary 3. Suppose that (H1)–(H5) hold, and further assume $f_j(0) = 0$ and $J(t) \equiv 0$; then the zero solution of (1) is globally asymptotically stable.

If we replace the assumptions (H2)–(H4) by the following:

(H2)′ $C_0(t) \ge C_0 h(t)$, where $C_0$ is a constant positive definite diagonal matrix, $h(t) \ge 0$ is integrable with $\sup_{t\ge t_0}\int_{t-\tau}^{t}h(s)\,ds = H < \infty$ and $\lim_{t\to\infty}\int_{t_0}^{t}h(s)\,ds = \infty$, and $\tau(t) \le \tau$;

(H3)′ there exist constant matrices $P_7 \ge 0$, $P_8 \ge 0$, a constant vector $\hat J \ge 0$ and a constant $\gamma > 0$ such that
$$\int_{t_0}^{t} e^{-\int_{s}^{t}C_0(v)dv}A(s)e^{\gamma\int_{s}^{t}h(v)dv}\,ds \le P_7,\qquad \int_{t_0}^{t} e^{-\int_{s}^{t}C_0(v)dv}B(s)e^{\gamma\int_{s}^{t}h(v)dv}\,ds \le P_8,$$
$$\int_{t_0}^{t} e^{-\int_{s}^{t}C_0(v)dv}J(s)\,ds \le \hat J,\qquad \forall t \ge t_0;$$

(H4)′ $P^{\circ} \triangleq P_7 + P_8$ satisfies $\rho(P^{\circ}L) < 1$;

then, by Theorem 2, we get the following theorem.

Theorem 5. Assume that conditions (H1) and (H2)′–(H4)′ hold. Then there exist constants $\lambda > 0$ and $N \ge 1$ such that the solution of system (1) satisfies
$$[x(t)]^+ \le Nz e^{-\lambda\int_{t_0}^{t}h(v)dv} + (E-P^{\circ}L)^{-1}\hat J,\qquad t \ge t_0, \tag{29}$$
for any $\phi(t) \in Q_3 = \{[\phi]^+_{\tau} \le z \mid z > 0,\ z \in \Omega_{\rho}(e^{\lambda H}P^{\circ}L + E/N)\}$.

Corollary 4. Assume that conditions (H1) and (H2)′–(H4)′ hold, and further assume $f_j(0) = 0$, $J(t) \equiv 0$ and $h(t) \equiv 1$; then the zero solution of (1) is globally exponentially stable.

Remark 2. If we further give the following assumptions:

(H2)″ $C_0(t) \ge C_0 h(t)$, $A(t) \le Ah(t)$, $B(t) \le Bh(t)$, $J(t) \le \hat I h(t)$, where $h(t) \ge 0$ is integrable with $\sup_{t\ge t_0}\int_{t-\tau}^{t}h(s)\,ds = H < \infty$ and $\lim_{t\to\infty}\int_{t_0}^{t}h(s)\,ds = \infty$, and $\tau(t) \le \tau$;

(H3)″ $\rho(C_0^{-1}(A+B)L) < 1$;

then from Corollary 1 and Theorem 5 we can get the main results in [7].

Corollary 5. Provided that the conditions (H1), (H2)″ and (H3)″ hold, $S = \{\phi \in C[[t_0-\tau,t_0],R^n] \mid [\phi]^+_{\tau} \le K = (C_0-(A+B)L)^{-1}\hat I\}$ is the invariant and global exponential attracting set of (1).

5. Example

Consider the following two-dimensional non-autonomous neural network:
$$\begin{cases}
\dfrac{dx_1(t)}{dt} = -x_1(t) + \dfrac{37}{40\sqrt{3}}\sin t\, f_1(x_1(t)) + \dfrac{1}{40\sqrt{3}}\cos t\, f_2(x_2(t)) + \dfrac{37}{40\sqrt{3}}\cos t\, f_1(x_1(t-\tau(t))) + \dfrac{1}{40\sqrt{3}}\sin t\, f_2(x_2(t-\tau(t))) + I_1(t),\\[6pt]
\dfrac{dx_2(t)}{dt} = -2x_2(t) + \dfrac{1}{40}f_1(x_1(t)) + \dfrac{37}{40}f_2(x_2(t)) + \dfrac{1}{40}f_1(x_1(t-\tau(t))) + \dfrac{37}{40}f_2(x_2(t-\tau(t))) + I_2(t),\\[6pt]
x_i(s) = \phi_i(s),\quad -\tau \le s \le 0,\ i = 1,2,
\end{cases}\tag{30}$$
where $f_1(x_1) = \frac{1}{2}(|x_1+1|-|x_1-1|)$, $f_2(x_2) = x_2$, $\tau(t) = \tau$, $I_1(t) = I_2(t) = d\arctan(t)$, and $d$ is a constant.

Then we easily know that $l_1 = l_2 = 1$, $f_1(0) = f_2(0) = 0$, and
$$C_0(t) = \begin{pmatrix}1 & 0\\ 0 & 2\end{pmatrix},\qquad
A(t) = \begin{pmatrix}\frac{37}{40\sqrt{3}}|\sin t| & \frac{1}{40\sqrt{3}}|\cos t|\\[3pt] \frac{1}{40} & \frac{37}{40}\end{pmatrix},\qquad
B(t) = \begin{pmatrix}\frac{37}{40\sqrt{3}}|\cos t| & \frac{1}{40\sqrt{3}}|\sin t|\\[3pt] \frac{1}{40} & \frac{37}{40}\end{pmatrix}.$$

We compute
$$I(t) \triangleq \int_{0}^{t} e^{-(t-s)}|\sin s|\,ds.$$
For any $t \ge 0$, there exists an integer $n \ge 0$ such that $n\pi \le t < (n+1)\pi$; letting $t = n\pi + u$, $0 \le u < \pi$, we get
$$\begin{aligned}
I(t) &= e^{-t}\Big(\sum_{k=1}^{n}(-1)^{k-1}\int_{(k-1)\pi}^{k\pi} e^{s}\sin s\,ds + (-1)^{n}\int_{n\pi}^{t} e^{s}\sin s\,ds\Big)\\
&= e^{-t}\,\frac{(e^{\pi}+1)(e^{n\pi}-1)}{2(e^{\pi}-1)} + \frac{e^{n\pi}e^{-t}}{2} + \frac{1}{2}(\sin u - \cos u)\\
&< h e^{-u} + \frac{1}{2}(\sin u - \cos u) \triangleq m(u),
\end{aligned}\tag{31}$$
where $h = e^{\pi}/(e^{\pi}-1)$ and
$$m'(u) = -h e^{-u} + \tfrac{1}{2}(\cos u + \sin u). \tag{32}$$
By simple computation, we get
$$m'\Big(\frac{2\pi}{3}\Big) = \frac{\sqrt{3}-1}{4} - \frac{e^{\pi/3}}{e^{\pi}-1} > 0,\qquad
m'\Big(\frac{3\pi}{4}\Big) = -\frac{e^{\pi/4}}{e^{\pi}-1} < 0; \tag{33}$$
therefore, there is a $u^* \in (2\pi/3,3\pi/4)$ such that $m'(u^*) = 0$. From (33) we know $u^*$ is the maximum point of $m(u)$. Combining (31) and (32), at $u = u^*$ we have $h e^{-u^*} = \frac{1}{2}(\cos u^* + \sin u^*)$, so
$$I(t) < m(u^*) = \sin u^* < \sin\frac{2\pi}{3} = \frac{\sqrt{3}}{2}. \tag{34}$$
Similar to the computation of $I(t)$, we can get
$$\int_{0}^{t} e^{-(t-s)}|\cos s|\,ds < \frac{\sqrt{3}}{2}. \tag{35}$$
The other computations are simple, so we omit the details. By simple computation, we get
$$\int_{0}^{t} e^{-\int_{s}^{t}C_0(v)dv}(A(s)+B(s))\,ds \le \begin{pmatrix}\frac{37}{40} & \frac{1}{40}\\[3pt] \frac{1}{40} & \frac{37}{40}\end{pmatrix} = P^{**},\qquad
\int_{0}^{t} e^{-\int_{s}^{t}C_0(v)dv}J(s)\,ds \le \begin{pmatrix}\frac{\pi|d|}{2}\\[3pt] \frac{\pi|d|}{4}\end{pmatrix} = \hat J.$$
As $\rho(P^{**}) = 38/40 < 1$ (noting $L = E$ here), all the conditions of Theorems 3 and 4 are satisfied. Therefore, the set $Q_4 = \{[\phi]^+_{\tau} \le z \mid z > 0,\ z \in \Omega_{\rho}(P^{**})\}$ is a quasi-invariant set of (30), and $S = \{\phi(t) \in C[[-\tau,0],R^2] \mid \|\phi_1\|_{\tau} \le 35\pi|d|/4,\ \|\phi_2\|_{\tau} \le 25\pi|d|/4\}$ is a global attracting set of (30). (Fig. 1 shows the attraction of (30); Fig. 2 shows the variation of the trajectories of (30).)

Fig. 1. The phase graph of the neural network (30) with d = 1.
Fig. 2. The trajectories $x_1(t)$, $x_2(t)$ of the neural network (30) with d = 1.

Remark 3. However, it is very difficult to get the global attracting set of this network by the differential inequality technique in [7], since there are no common factors among the coefficients of this network, and the coefficients would be enlarged by the inequalities $|\cos t| \le 1$ and $|\sin t| \le 1$, so that the check matrix becomes
$$\begin{pmatrix}1-\frac{37}{20\sqrt{3}} & -\frac{1}{20\sqrt{3}}\\[3pt] -\frac{1}{20} & \frac{3}{20}\end{pmatrix},$$
which is not an M-matrix. Furthermore, from Corollary 3, we know the zero solution of (30) with $d = 0$ is globally asymptotically stable (see Fig. 3). Of course, it is also difficult to check the stability of this network by either the Lyapunov method or the differential inequality technique.

Fig. 3. Stability of the neural network (30) with d = 0.
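The numerical claims of the example, namely $\rho(P^{**}) = 38/40$, $K = (E-P^{**})^{-1}\hat J = (35\pi|d|/4,\ 25\pi|d|/4)^T$ and the bound $I(t) < \sqrt{3}/2$, can be verified mechanically (a verification sketch, not part of the paper):

```python
import numpy as np

# Check matrix P** and vector J-hat computed for example (30).
P = np.array([[37/40, 1/40],
              [1/40, 37/40]])
d = 1.0
J_hat = np.array([np.pi * abs(d) / 2, np.pi * abs(d) / 4])

rho = max(abs(np.linalg.eigvals(P)))          # spectral radius of P**
assert abs(rho - 38/40) < 1e-12

# K = (E - P**)^{-1} J-hat should equal (35*pi*|d|/4, 25*pi*|d|/4).
K = np.linalg.solve(np.eye(2) - P, J_hat)
assert np.allclose(K, [35*np.pi*abs(d)/4, 25*np.pi*abs(d)/4])

# Spot-check I(T) = int_0^T e^{-(T-s)} |sin s| ds < sqrt(3)/2 at T = 50.
T = 50.0
s = np.linspace(0.0, T, 200001)
g = np.exp(-(T - s)) * np.abs(np.sin(s))
I_T = float(np.sum((g[1:] + g[:-1]) * np.diff(s) / 2))   # trapezoid rule
assert I_T < np.sqrt(3) / 2
```

The quadrature only samples one value of $t$, of course; the analytic argument via $m(u)$ above is what makes the bound uniform in $t$.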
6. Conclusion

In this paper, a new approach to studying the asymptotic behavior of a class of non-autonomous neural networks with delays has been presented. By establishing new integral inequalities and using the properties of the spectral radius of a nonnegative matrix, we have obtained the quasi-invariant and attracting sets of non-autonomous neural networks with delays.
Acknowledgments

The authors sincerely thank the Associate Editor and the reviewers for their detailed comments and valuable suggestions to improve the quality of this paper. This work is supported by the National Natural Science Foundation of China under Grant 10971147, the Scientific Research Fund of Sichuan Provincial Education Department under Grant 10ZA032, and the Fundamental Research Funds for the Central Universities under Grant 2010SCU21006.
References

[1] A. Berman, R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York, 1979.
[2] J.D. Cao, On exponential stability and periodic solutions of CNNs with delays, Phys. Lett. A 267 (2000) 312–318.
[3] J.D. Cao, A.P. Chen, X. Huang, Almost periodic attractor of delayed neural networks with variable coefficients, Phys. Lett. A 340 (2005) 104–120.
[4] T. Ensari, S. Arik, Global stability analysis of neural networks with multiple time varying delays, IEEE Trans. Autom. Control 11 (2005) 1781–1784.
[5] D.H. He, D.Y. Xu, Attracting and invariant sets of fuzzy Cohen–Grossberg neural networks with time-varying delays, Phys. Lett. A 372 (2008) 7057–7062.
[6] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, 1985.
[7] Y.M. Huang, D.Y. Xu, Z.C. Yang, Dissipativity and periodic attractor for non-autonomous neural networks with time-varying delays, Neurocomputing 70 (2007) 2953–2958.
[8] Y.M. Huang, W. Zhu, D.Y. Xu, Invariant and attracting set of fuzzy cellular neural networks with variable delays, Appl. Math. Lett. 22 (2009) 478–483.
[9] M.H. Jiang, Y. Shen, Stability of non-autonomous bidirectional associative memory neural networks with delay, Neurocomputing 71 (2008) 863–874.
[10] V. Lakshmikantham, S. Leela, Differential and Integral Inequalities: Theory and Applications, Academic Press, 1969.
[11] X.X. Liao, Q. Luo, Z.G. Zeng, Positive invariant and global exponential attractive sets of neural networks with time-varying delays, Neurocomputing 71 (2008) 513–518.
[12] S.J. Long, D.Y. Xu, Delay-dependent stability analysis for impulsive neural networks with time varying delays, Neurocomputing 71 (2008) 1705–1713.
[13] X.Y. Lou, B.T. Cui, Boundedness and exponential stability for nonautonomous cellular neural networks with reaction–diffusion terms, Chaos Solitons Fract. 33 (2007) 653–662.
[14] D.Y. Xu, Integro-differential equations and delay integral inequalities, Tôhoku Math. J. 44 (1992) 365–378.
[15] D.Y. Xu, L.G. Xu, New results for studying a certain class of nonlinear delay differential systems, IEEE Trans. Autom. Control 7 (2010) 1641–1645.
[16] L.G. Xu, D.Y. Xu, P-attracting and p-invariant sets for a class of impulsive stochastic functional differential equations, Comput. Math. Appl. 57 (2009) 54–61.
[17] D.Y. Xu, Z.C. Yang, Attracting and invariant sets for a class of impulsive functional differential equations, J. Math. Anal. Appl. 329 (2007) 1036–1044.
[18] D.Y. Xu, H.Y. Zhao, Invariant and attracting sets of Hopfield neural networks with delay, Int. J. Syst. Sci. 32 (2001) 863–866.
[19] D.Y. Xu, H.Y. Zhao, Invariant set and attractivity of nonlinear differential equations with delays, Appl. Math. Lett. 15 (2002) 321–325.
[20] Z.G. Yang, D.Y. Xu, Global dynamics for non-autonomous reaction–diffusion neural networks with time-varying delays, Theor. Comput. Sci. 403 (2008) 3–10.
[21] Z.H. Yuan, L.F. Yuan, L.H. Huang, D.W. Hu, Boundedness and global convergence of non-autonomous neural networks with variable delays, Nonlinear Anal.: Real World Appl. 10 (2009) 2195–2206.
[22] L. Zhang, Y. Zhang, S.L. Zhang, A.H. Pheng, Activity invariant sets and exponentially stable attractors of linear threshold discrete-time recurrent neural networks, IEEE Trans. Autom. Control 6 (2009) 1341–1347.
[23] Q. Zhang, X.P. Wei, J. Xu, Exponential stability for nonautonomous neural networks with variable delays, Chaos Solitons Fract. 39 (2009) 1152–1157.
[24] Q. Zhang, X.P. Wei, J. Xu, Delay-dependent exponential stability criteria for non-autonomous cellular neural networks with time-varying delays, Chaos Solitons Fract. 36 (2008) 985–990.
[25] Q.X. Zhu, J.D. Cao, Stability analysis for stochastic neural networks of neutral type with both Markovian jump parameter and mixed time delays, Neurocomputing 73 (2010) 2671–2680.
Daoyi Xu graduated from the Department of Mathematics of Nanchong Normal College, Sichuan, China, in 1975 and engaged in advanced studies in the Department of Mathematics of Huazhong Normal University during 1981–1982. From 1975 to 1992 he was at Mianyang Teacher’s College and was promoted to Professor in 1987. From 1992 to 1997, he was a Professor and vice Director of Department of Mathematics, Sichuan Normal University. Since October 1997 he has been a Professor and vice Director of Institute of Mathematics, Sichuan University, China. His current research interests include qualitative theory of stochastic and impulsive systems, delay differential systems, biomathematics and neural networks.
Shujun Long graduated from the Department of Mathematics of Sichuan Normal University, China, in 1998. He received his Master's degree from Sichuan University, China, and is now studying in the College of Mathematics of Sichuan University. Since 1998 he has been at Leshan Teachers College. His current research interests include the qualitative theory of impulsive and stochastic systems, delay differential systems and neural networks.