International Journal of Neural Systems, Vol. 13, No. 3 (2003) 193–204
© World Scientific Publishing Company
GLOBAL CONVERGENCE OF DELAYED NEURAL NETWORK SYSTEMS

WENLIAN LU, LIBIN RONG and TIANPING CHEN*
Laboratory of Nonlinear Science, Institute of Mathematics, Fudan University, Shanghai, 200433, P.R. China
*[email protected]

Received 31 October 2002
Revised 28 April 2003

In this paper, without assuming the boundedness, strict monotonicity or differentiability of the activation functions, we utilize a new Lyapunov function to analyze the global convergence of a class of neural network models with time delays. A new sufficient condition guaranteeing the existence, uniqueness and global exponential stability of the equilibrium point is derived. This stability criterion imposes constraints on the feedback matrices independently of the delay parameters. The result is compared with some previous work. Furthermore, the condition may be less restrictive in the case that the activation functions are hyperbolic tangents.

Keywords: Neural networks; time delays; exponential stability; asymptotic stability; global convergence; Lyapunov function.
1. Introduction

It is well-known that cellular neural networks (CNNs), proposed by L. O. Chua and L. Yang in 1988,10,11 have been extensively studied both in theory and in applications. They have been successfully applied in signal processing, pattern recognition and associative memories, especially in static image treatment.12 Such applications rely heavily on the dynamic behaviors of the neural networks; therefore, the analysis of these dynamic behaviors is a necessary step for the practical design of neural networks. In hardware implementation, time delays inevitably occur due to the finite switching speed of the amplifiers and communication time. What is more, to process moving images, one must introduce time delays in the signals transmitted among the cells.9 Neural networks with time delays have much more complicated dynamics due to the incorporation of delays.9 Nevertheless, some useful results on the stability analysis of delayed cellular neural networks (DCNNs) have already been obtained in Refs. 3–5, 8, 13, 17, 18, 23 and 31. In Refs. 3, 13 and 23, some criteria for global asymptotic stability independent of delays were obtained by use of the Lyapunov function method. In Refs. 4, 5 and 8, some sufficient conditions ensuring the global exponential stability of DCNNs were given. In Refs. 17 and 18, the asymptotic stability and absolute stability of the delayed system were considered. In Ref. 31, a model of CNNs containing variable, unbounded delays was investigated.

*Corresponding author.

In this paper, we present some sufficient conditions for the uniqueness and global exponential stability of the equilibrium point for a class of delayed neural network models. The conditions in our results are less restrictive, so several previous results by other researchers are extended.

This paper is organized in the following way. In Sec. 2, we give the model description and establish some lemmas related to the existence and uniqueness
of the system. In Sec. 3, we apply the lemmas obtained in Sec. 2 to give a detailed stability analysis of the delayed model; we also compare and contrast our results with others from the literature in the remarks following the main theorem. A less restrictive criterion for a special case of the dynamical system is obtained in Sec. 4. In Sec. 5, we investigate the existence and exponential stability of the unique equilibrium of bidirectional associative memory (BAM) neural networks. Conclusions follow in Sec. 6.
2. Model Description and Uniqueness of the Equilibrium Point

In this paper, we deal with delayed neural network systems described by the following differential equations with time delays:

    du_i(t)/dt = -d_i u_i(t) + \sum_{j=1}^n a_{ij} g_j(u_j(t)) + \sum_{j=1}^n b_{ij} g_j(u_j(t - τ_j)) + I_i,    i = 1, 2, ..., n    (1)

where n corresponds to the number of units in the neural network, u_i(t) corresponds to the state of the ith unit at time t, and g_j(u_j(t)) denotes the output of the jth unit at time t. The coefficients a_{ij}, b_{ij}, d_i are constants: a_{ij} denotes the strength of the jth unit on the ith unit at time t, b_{ij} denotes the strength of the jth unit on the ith unit at time t - τ_j, I_i denotes the input to the ith unit, τ_j corresponds to the transmission delay along the axon of the jth unit and is a nonnegative constant, and d_i represents the positive rate with which the ith unit resets its potential to the resting state in isolation when disconnected from the network and the external input I_i.

System (1) can be rewritten for u = (u_1, u_2, ..., u_n)^T as

    du(t)/dt = -Du(t) + Ag(u(t)) + Bg(u(t - τ)) + I    (2)

where T denotes transpose, D = diag(d_1, d_2, ..., d_n), g(u) = (g_1(u_1), g_2(u_2), ..., g_n(u_n))^T, I = (I_1, I_2, ..., I_n)^T and τ = (τ_1, τ_2, ..., τ_n)^T. A = {a_{ij}} is the feedback matrix and B = {b_{ij}} is the delayed feedback matrix.

Definition 1. A real n × n matrix A = {a_{ij}} is said to be an M-matrix if a_{ij} ≤ 0, i, j = 1, 2, ..., n, i ≠ j, and all successive principal minors of A are positive.

Definition 2. A real square matrix A is said to be Lyapunov Diagonally Stable (LDS) if there exists a positive diagonal matrix P such that the symmetric part of PA, i.e., [PA]^S, is positive definite.

Definition 3. A map H : R^n → R^n is a homeomorphism of R^n onto itself if H ∈ C^0, H is one-to-one, H is onto, and the inverse map H^{-1} ∈ C^0.

Definition 4. A continuous function g : R^n → R^n of the form g = (g_1, g_2, ..., g_n)^T is said to be of class G{G_1, G_2, ..., G_n}, where G = diag(G_1, G_2, ..., G_n) with 0 < G_i < +∞, i = 1, 2, ..., n, if the function g(x) satisfies

    0 ≤ (g_i(x) - g_i(y))/(x - y) ≤ G_i    (3)
for each x, y ∈ R, x ≠ y, and for i = 1, 2, ..., n.

For system (2), Chen has proved the following global convergence theorem (see Ref. 5).

Theorem A. Suppose that g(x) ∈ G{G_1, G_2, ..., G_n} and DG^{-1} - |A| - |B| is an M-matrix. Then any solution of (2) converges to a unique equilibrium u* exponentially.

By use of the theory of M-matrices,25 we can get many similar results ensuring the global stability of the delayed system; see, for example, Refs. 4, 5 and 13. Conditions in Theorem A and many related criteria are explicit and easily verified in practice. But they neglect the signs of the entries in the connection matrices A and B, and thus the difference between excitatory and inhibitory effects might be ignored. Recently, Van Den Driessche and Zou,30 Arik and Tavsanoglu,3 Zhang,32 Liao and Wang,23 and Joy17,18 attempted to overcome this disadvantage. But most of them only considered the asymptotic stability of the system with either bounded or strictly
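The M-matrix hypothesis of Theorem A is easy to check numerically: by Definition 1, one tests that the off-diagonal entries of DG^{-1} - |A| - |B| are nonpositive and that its successive principal minors are positive. The following NumPy sketch does this; the matrices are illustrative choices, not prescribed by the paper.

```python
import numpy as np

def is_m_matrix(M, tol=1e-12):
    """Check the M-matrix conditions of Definition 1: nonpositive
    off-diagonal entries and positive successive principal minors."""
    n = M.shape[0]
    off_diag_ok = all(M[i, j] <= tol for i in range(n) for j in range(n) if i != j)
    minors_ok = all(np.linalg.det(M[:k, :k]) > tol for k in range(1, n + 1))
    return off_diag_ok and minors_ok

# Illustrative data: D = diag(d_i), G = diag(G_i), feedback matrix A,
# delayed feedback matrix B (assumptions for this sketch).
D = np.diag([9.0, 9.0])
G = np.eye(2)
A = np.array([[2.0, -1.0], [-2.0, 3.0]])
B = np.array([[3.0, 1.0], [0.5, 2.0]])

M = D @ np.linalg.inv(G) - np.abs(A) - np.abs(B)
print(is_m_matrix(M))  # → True
```

Note that the check deliberately ignores the signs of the entries of A and B, which is exactly the conservatism discussed above.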
monotonically increasing activation functions. In the following sections we will extend these results under only the assumption that g(x) ∈ G{G_1, G_2, ..., G_n}. Clearly, this output function is more general than the piecewise-linear function g_i(x) = (1/2)(|x + 1| - |x - 1|) used in traditional DCNNs, which have been investigated extensively by many researchers. Furthermore, we consider exponential stability instead of global asymptotic stability. Before we establish a criterion for the global exponential stability of system (2), we give some lemmas concerning the existence and uniqueness of the equilibrium point of (2).
Lemma 1 (Forti and Tesi14). Suppose that g ∈ G{G_1, G_2, ..., G_n} and DG^{-1} - A ∈ LDS. Then

(1) H(u) = -Du + Ag(u) + I is a homeomorphism of R^n onto itself;
(2) the system du(t)/dt = -Du(t) + Ag(u(t)) + I has a unique equilibrium point for each I ∈ R^n.

Lemma 2. Suppose that g ∈ G{G_1, G_2, ..., G_n} and there exist positive diagonal matrices P and Q such that

    2PDG^{-1} - (PA + A^T P) - PBQ^{-1}(PB)^T - Q > 0    (4)

i.e., the left matrix is positive definite. Then, for each I ∈ R^n, system (2) has a unique equilibrium point.

Proof. From (4), we get

    2PDG^{-1} - (PA + A^T P) > PBQ^{-1}(PB)^T + Q    (5)

By the inequality [Q^{-1/2}(PB)^T - Q^{1/2}]^T [Q^{-1/2}(PB)^T - Q^{1/2}] ≥ 0, we have

    PBQ^{-1}(PB)^T + Q ≥ PB + (PB)^T    (6)

so (5) becomes

    2PDG^{-1} > P(A + B) + (A + B)^T P    (7)

i.e.,

    {P(DG^{-1} - A - B)}^S > 0    (8)

which implies DG^{-1} - (A + B) ∈ LDS by Definition 2. From Lemma 1, H(u) = -Du + Ag(u) + Bg(u) + I is a homeomorphism of R^n onto itself and system (2) has a unique equilibrium point for each I ∈ R^n.

3. Global Exponential Stability Result

In this section, we will prove the following result.

Theorem 1. Suppose that g ∈ G{G_1, G_2, ..., G_n} and there exist positive diagonal matrices P and Q such that

    2PDG^{-1} - (PA + A^T P) - PBQ^{-1}(PB)^T - Q > 0    (9)
Then, for each I ∈ R^n, system (2) has a unique equilibrium point which is globally exponentially stable, independent of the delays.

Proof. By Lemma 2, we know that for each I ∈ R^n, system (2) has a unique equilibrium, namely u*. By means of the coordinate translation x(t) = u(t) - u*, (2) can be rewritten as

    dx(t)/dt = -Dx(t) + Aϕ(t) + Bϕ(t - τ)    (10)
where x(·) = (x_1(·), ..., x_n(·))^T, ϕ(·) = (ϕ_1(x_1(·)), ..., ϕ_n(x_n(·)))^T, and ϕ_i(x_i(·)) = g_i(x_i(·) + u*_i) - g_i(u*_i), i = 1, 2, ..., n. Clearly, u* is globally exponentially stable for (2) if and only if the trivial solution of (10) is globally exponentially stable. To analyze the global stability of the origin of (10), consider the following Lyapunov functional:
    V(x(t), t) = \sum_{i=1}^n x_i^2(t) e^{ϵt} + 2α \sum_{i=1}^n P_i e^{ϵt} \int_0^{x_i(t)} ϕ_i(ρ) dρ + (α + β) \sum_{i=1}^n Q_i \int_{t-τ_i}^t ϕ_i^2(x_i(s)) e^{ϵ(s+τ_i)} ds    (11)

where the positive constants α and β are to be decided and ϵ > 0 is a small real number.
Differentiating V along the solution of (10), we have

    V̇(x(t), t) = ϵ e^{ϵt} x^T x + 2 e^{ϵt} x^T [-Dx + Aϕ(t) + Bϕ(t - τ)] + 2αϵ e^{ϵt} \sum_{i=1}^n P_i \int_0^{x_i} ϕ_i(ρ) dρ + 2α e^{ϵt} ϕ^T(t) [-PDx + PAϕ(t) + PBϕ(t - τ)] + (α + β)[ϕ^T(t) E^{ϵτ} Q ϕ(t) e^{ϵt} - ϕ^T(t - τ) Q ϕ(t - τ) e^{ϵt}]    (12)

Since τ = (τ_1, τ_2, ..., τ_n)^T is a vector, here we use E^{ϵτ} to denote diag{e^{ϵτ_1}, e^{ϵτ_2}, ..., e^{ϵτ_n}}. Because g ∈ G{G_1, G_2, ..., G_n}, we have

    \int_0^{x_i} ϕ_i(ρ) dρ ≤ (1/2) G_i x_i^2    (13)

    ϕ^T(t)(-PD)x ≤ ϕ^T(t)(-PDG^{-1})ϕ(t)    (14)

Thus,

    V̇(x(t), t) ≤ e^{ϵt} { 2x^T [-D + (ϵ/2)I + (αϵ/2)PG] x + 2x^T Aϕ(t) + 2x^T Bϕ(t - τ) + 2α[-ϕ^T(t)PDG^{-1}ϕ(t) + ϕ^T(t)PAϕ(t) + ϕ^T(t)PBϕ(t - τ)] + (α + β)[ϕ^T(t)E^{ϵτ}Qϕ(t) - ϕ^T(t - τ)Qϕ(t - τ)] }    (15)

Let ε = αϵ; then the above inequality becomes

    V̇(x(t), t) ≤ e^{(ε/α)t} { 2x^T [-D + (ε/2α)I + (ε/2)PG] x + 2x^T Aϕ(t) + 2x^T Bϕ(t - τ) + 2α[-ϕ^T(t)PDG^{-1}ϕ(t) + ϕ^T(t)PAϕ(t) + ϕ^T(t)PBϕ(t - τ)] + (α + β)[ϕ^T(t)E^{(ε/α)τ}Qϕ(t) - ϕ^T(t - τ)Qϕ(t - τ)] }    (16)
Now, we will choose suitable parameters ε, α, β in order to prove V̇(x(t), t) ≤ 0. Firstly, we choose a fixed positive β such that

    β > ‖B‖²‖D^{-1}‖ / min_i Q_i    (17)

where the matrix norm ‖·‖ is defined as ‖X‖ = (λ_max(X^T X))^{1/2}, λ_max being the largest eigenvalue of the matrix. Secondly, we choose a sufficiently small ε > 0 and a sufficiently large α > 0 such that

    D - (ε/2α)I - (ε/2)PG > 0    (18)

    (ε/2α)‖D^{-1}‖ + (ε/2)‖PGD^{-1}‖ ≤ 1 - ‖B‖²‖D^{-1}‖/(β min_i Q_i)    (19)

Moreover, by the inequality (9), we have

    α[2PDG^{-1} - (PA + A^T P) - PBQ^{-1}(PB)^T - E^{(ε/α)τ}Q] - βE^{(ε/α)τ}Q - A^T [D - (ε/2α)I - (ε/2)PG]^{-1} A > 0    (20)

for sufficiently large α, sufficiently small ε and fixed β. From Eq. (19), we also have

    β ≥ ‖B‖²‖D^{-1}‖ / { min_i Q_i [1 - ((ε/2α)‖D^{-1}‖ + (ε/2)‖PGD^{-1}‖)] }
      ≥ ‖B‖² ‖D^{-1}[I - (ε/2α)D^{-1} - (ε/2)PGD^{-1}]^{-1}‖ / min_i Q_i
      ≥ ‖B^T [D - (ε/2α)I - (ε/2)PG]^{-1} B‖ / min_i Q_i    (21)

where we use the inequality (Ref. 15) ‖(I - F)^{-1}‖ ≤ 1/(1 - ‖F‖) for ‖F‖ < 1. Thus,

    βQ ≥ B^T [D - (ε/2α)I - (ε/2)PG]^{-1} B    (22)

Up to now, we have chosen the parameters and obtained the two inequalities (20) and (22). Now we prove V̇(x(t), t) ≤ 0. Noticing that D - (ε/2α)I - (ε/2)PG is positive definite from (18), we have
    -x^T [D - (ε/2α)I - (ε/2)PG] x + 2x^T Aϕ(t)
      = -‖[D - (ε/2α)I - (ε/2)PG]^{1/2} x - [D - (ε/2α)I - (ε/2)PG]^{-1/2} Aϕ(t)‖²
        + ϕ^T(t) A^T [D - (ε/2α)I - (ε/2)PG]^{-1} Aϕ(t)
      ≤ ϕ^T(t) A^T [D - (ε/2α)I - (ε/2)PG]^{-1} Aϕ(t)    (23)

Similarly, we have

    -x^T [D - (ε/2α)I - (ε/2)PG] x + 2x^T Bϕ(t - τ) ≤ ϕ^T(t - τ) B^T [D - (ε/2α)I - (ε/2)PG]^{-1} Bϕ(t - τ)    (24)

    -αϕ^T(t - τ)Qϕ(t - τ) + 2αϕ^T(t)PBϕ(t - τ) ≤ αϕ^T(t)PBQ^{-1}B^T Pϕ(t)    (25)

Therefore, from Eq. (16), we have

    V̇(x(t), t) ≤ e^{(ε/α)t} { -ϕ^T(t) [2αPDG^{-1} - α(PA + A^T P) - αPBQ^{-1}B^T P - (α + β)E^{(ε/α)τ}Q - A^T (D - (ε/2α)I - (ε/2)PG)^{-1} A] ϕ(t) + ϕ^T(t - τ) [B^T (D - (ε/2α)I - (ε/2)PG)^{-1} B - βQ] ϕ(t - τ) }    (26)

By Eqs. (20) and (22), we obtain

    V̇(x(t), t) ≤ 0    (27)

Therefore, V(x(t), t) ≤ V(x(0), 0); thus \sum_{i=1}^n x_i^2(t) ≤ V(x(0), 0) e^{-(ε/α)t}. This fact implies that x = 0 is globally exponentially stable for (10). The proof of Theorem 1 is completed.

We should notice that it is not easy to select P and Q directly, since inequality (9) involves matrix operations. But we can transform it equivalently into a Linear Matrix Inequality (LMI) and then obtain the solution P and Q by use of the LMI Toolbox in Matlab. We illustrate this in detail. Firstly, we have

Lemma 3 (Schur Complement, S. Boyd et al.2). The LMI

    ( Q(x)    S(x)
      S^T(x)  R(x) ) > 0    (28)

where Q(x) = Q^T(x), R(x) = R^T(x) and S(x) depends affinely on x, is equivalent to

    R(x) > 0  and  Q(x) - S(x)R^{-1}(x)S^T(x) > 0    (29)
From Lemma 3, we can easily get the following result.

Theorem 2. Suppose that P = diag(P_1, ..., P_n), Q = diag(Q_1, ..., Q_n), P_i > 0, Q_i > 0, i = 1, 2, ..., n. Then

    2PDG^{-1} - (PA + A^T P) - PBQ^{-1}(PB)^T - Q > 0    (30)

is equivalent to the following LMI:

    ( 2PDG^{-1} - (PA + A^T P) - Q   PB
      (PB)^T                         Q  ) > 0    (31)
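The Schur-complement equivalence behind Theorem 2 can be sanity-checked numerically: for random symmetric data, positive definiteness of the block matrix in (28) coincides with the two conditions in (29). A NumPy sketch on random instances of our own choosing:

```python
import numpy as np

def is_pd(M, tol=1e-10):
    # positive definiteness of a symmetric matrix via its smallest eigenvalue
    return np.linalg.eigvalsh((M + M.T) / 2).min() > tol

rng = np.random.default_rng(0)
n = 3
for _ in range(200):
    S = rng.normal(size=(n, n))               # S(x): arbitrary
    R = rng.normal(size=(n, n))
    R = (R + R.T) / 2 + 3.0 * n * np.eye(n)   # R(x) > 0 by a diagonal shift
    Qm = rng.normal(size=(n, n))
    Qm = (Qm + Qm.T) / 2                      # Q(x): symmetric, maybe indefinite
    block = np.block([[Qm, S], [S.T, R]])
    schur = is_pd(R) and is_pd(Qm - S @ np.linalg.inv(R) @ S.T)
    assert is_pd(block) == schur              # Lemma 3 on random data
print("Schur-complement equivalence holds on 200 random instances")
```

Any LMI solver exploits exactly this equivalence; the loop above is only a numerical illustration, not a substitute for the Matlab LMI Toolbox used in the paper.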
Therefore, the stability criterion has been transformed into the linear matrix inequality (31), and we can get P and Q by use of the LMI Toolbox in Matlab. In the following, we give a specific example to compute the matrices P and Q. Consider the two-dimensional system:

    ẋ_1(t) = -9x_1(t) + 2g(x_1(t)) - g(x_2(t)) + 3g(x_1(t - τ_1)) + g(x_2(t - τ_2)) + I_1
    ẋ_2(t) = -9x_2(t) - 2g(x_1(t)) + 3g(x_2(t)) + (1/2)g(x_1(t - τ_1)) + 2g(x_2(t - τ_2)) + I_2    (32)
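The delay-independent exponential convergence asserted by Theorem 1 can also be observed by direct integration of (32). The fixed-step Euler sketch below, with per-component history buffers, is our own illustration (step size, horizon and the constant initial histories are arbitrary choices); it uses the parameter values given for this example in the text (τ_1 = 1, τ_2 = 2, I_1 = 1, I_2 = 2, g(x) = (|x + 1| - |x - 1|)/2).

```python
import numpy as np

def g(x):
    # the piecewise-linear activation used in this example
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def simulate(x0, T=40.0, dt=0.002):
    """Euler integration of system (32) with tau_1 = 1, tau_2 = 2,
    I_1 = 1, I_2 = 2 and a constant initial history equal to x0."""
    A = np.array([[2.0, -1.0], [-2.0, 3.0]])
    B = np.array([[3.0, 1.0], [0.5, 2.0]])
    I = np.array([1.0, 2.0])
    lag = np.array([int(1.0 / dt), int(2.0 / dt)])   # delays in steps
    n_steps = int(T / dt)
    x = np.tile(np.asarray(x0, dtype=float), (n_steps + 1, 1))
    for k in range(n_steps):
        d1 = x[k - lag[0]] if k >= lag[0] else x[0]  # x(t - tau_1)
        d2 = x[k - lag[1]] if k >= lag[1] else x[0]  # x(t - tau_2)
        delayed = np.array([d1[0], d2[1]])
        x[k + 1] = x[k] + dt * (-9.0 * x[k] + A @ g(x[k]) + B @ g(delayed) + I)
    return x[-1]

u1 = simulate([5.0, -4.0])
u2 = simulate([-3.0, 2.0])
print(u1, u2)   # two very different histories settle at the same point
```

Since the Euler map leaves equilibria of the vector field fixed, both runs approach the unique equilibrium of (32) up to the integration tolerance.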
[Figure: A phase plane portrait of the trajectories of system (32).]
where τ_1 = 1, τ_2 = 2, I_1 = 1, I_2 = 2 and g(x) = (1/2)(|x + 1| - |x - 1|). Thus,

    D = [9 0; 0 9],  G = I,  A = [2 -1; -2 3],  B = [3 1; 1/2 2].
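Whether a candidate pair (P, Q) certifies Theorem 1 for these matrices can be checked directly, without the LMI machinery, by evaluating the criterion matrix and its eigenvalues. A NumPy sketch using the diagonal P and Q reported from the LMI Toolbox in the text:

```python
import numpy as np

# data of example (32)
D = np.diag([9.0, 9.0]); G = np.eye(2)
A = np.array([[2.0, -1.0], [-2.0, 3.0]])
B = np.array([[3.0, 1.0], [0.5, 2.0]])
# diagonal P, Q as reported from the LMI Toolbox in the text
P = np.diag([0.0912, 0.1215])
Q = np.diag([0.0912, 0.1215])

M = -2 * P @ D @ np.linalg.inv(G) + (P @ A + A.T @ P) \
    + P @ B @ np.linalg.inv(Q) @ (P @ B).T + Q
print(np.round(M, 4))               # close to the matrix reported in Eq. (33)
print(np.linalg.eigvalsh(M).max())  # negative, so criterion (9) holds
```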
From LMI (31), we get P and Q by the LMI Toolbox in Matlab:

    P = [0.0912 0; 0 0.1215],    Q = [0.0912 0; 0 0.1215].

Furthermore, we get

    -2PDG^{-1} + (PA + A^T P) + PBQ^{-1}B^T P + Q = [-0.2963 0.0304; 0.0304 -0.8102]    (33)

whose eigenvalues are -0.8120 and -0.2945. We give a figure to describe the stability of system (32), as is done in the paper of Qiao et al.28 We get the phase plots with the initial functions (x_1, x_2) = (sin s, cos s), (2s, exp s), (log(s + 3), s²), (s³, arctan s - 4), (27, -45), (tan s - 3, -(s + 10)^{1/3}), (81, 128), where s ∈ [-2, 0].

Remark 1. There are many stability results for such delayed systems in the literature. Here we give some comparisons with others. In Joy,17,18 a similar model was considered and some interesting results were given. It should be noted that Theorem 3.1 of Ref. 18 proposed a condition similar to Theorem 1 if the diagonal matrix Q is chosen to be the identity matrix I_n. However, the activation function in Ref. 18 was required to be either strictly monotonically increasing, or bounded and nondecreasing, in addition to g ∈ G{G_1, G_2, ..., G_n}. In our proof, we only assume that g ∈ G{G_1, G_2, ..., G_n}, which includes a class of more general functions. This extension is necessary especially when unbounded activation functions contain infinite intervals with zero slope (such as piecewise-linear models). Moreover, in
Ref. 18, only the global asymptotic stability was considered. Here we prove the global exponential stability of the system. This is very important since, in designing a neural circuit, it is often desired that a neural network converge at an exponential rate to ensure fast response in the network. In Van Den Driessche and Zou,30 a class of delayed Hopfield neural network models (A = 0) with bounded activation functions was investigated. In Arik and Tavsanoglu,3 and Liao and Wang,23 the piecewise-linear model g_i(u_i(t)) = 0.5(|u_i(t) + 1| - |u_i(t) - 1|) ∈ [-1, +1] was assumed. Furthermore, in their results, there was an additional requirement that -(A + A^T) be positive definite, which obviously imposes more constraints on the connection matrix. Thus, Theorem 1 represents a generalization of some previous results on global convergence in the literature.

Remark 2. In Arik's 2002 paper (Ref. 1), the author proposed a Lyapunov functional to analyze cellular neural networks. There are two main differences between Arik's approach and ours. Firstly, Arik's Lyapunov functional is chosen to discuss asymptotic stability, while we chose our functional to analyze exponential stability. Secondly, in Arik's paper, the author obtained two separate conditions from the Lyapunov functional: one for the connection matrix A, the other for the connection matrix B. We instead obtained an integrated condition concerning both matrices A and B. Here we give an example for which Arik's method fails and ours succeeds. Consider the following one-dimensional system:

    ẋ = -x + (1/3)g(x) + 1    (34)

where g(x) = (1/2)(|x + 1| - |x - 1|). Thus, A = 1/3, B = 0, D = 1, G = 1. Obviously, -(A + A^T + βI) is not positive definite, so Arik's method fails to show the stability. In our paper, by choosing P = Q = 1, we have

    2PDG^{-1} - (PA + A^T P) - PBQ^{-1}(PB)^T - Q = 2 - 2/3 - 1 = 1/3 > 0    (35)

which shows the exponential stability by our criterion. Moreover, we can show that the criterion given in Arik's paper1 is recovered from our theorem as a special case. Since the activation function is g(x) = 0.5(|x + 1| - |x - 1|), we know G is the identity matrix I. There the matrix D is also assumed to be I. We restate their theorem as

Proposition 1. The equilibrium point of (10) is globally asymptotically stable if there exists a positive β such that

    -(A + A^T + βI) > 0    (36)

and

    ‖B‖_2 ≤ √(2β)  if β ≥ 1,    ‖B‖_2 ≤ √(1 + β)  if 0 < β ≤ 1    (37)

Now, we point out that Proposition 1 can be deduced from Theorem 1. With D = G = I, condition (9) in Theorem 1 reduces to

    2P - (PA + A^T P) - PBQ^{-1}(PB)^T - Q > 0    (38)

We will consider the following two cases separately.

Case 1: β ≥ 1. In this case, we have -(A + A^T + βI) > 0 and ‖B‖_2 ≤ √(2β); then

    2I - (A + A^T) - β^{-1}BB^T - βI > 0    (39)

which is just (9) with P = I and Q = βI, where I is the identity matrix.

Case 2: 0 < β ≤ 1. In this case, we have -(A + A^T + βI) > 0 and ‖B‖_2 ≤ √(1 + β); then

    2I - (A + A^T) - (1 + β)^{-1}BB^T - (1 + β)I > 0    (40)

which is just (9) with P = I and Q = (1 + β)I.

In fact, from Theorem 1 in our paper we can get the following result:

Proposition 2. The equilibrium point of (10) is globally asymptotically stable if there exists a positive β such that

    -(A + A^T + βI) > 0    (41)

and

    ‖B‖_2 ≤ 1 + β/2    (42)
Proof. From ‖B‖_2 ≤ 1 + β/2, we know ‖BB^T‖_2 ≤ (1 + β/2)². Let α = 1 + β/2; then (1 + β/2)² = α(2 + β - α). Thus, ‖BB^T‖_2 ≤ α(2 + β - α), so we get α^{-1}BB^T ≤ (2 + β - α)I. According to -(A + A^T + βI) > 0, we know

    -(A + A^T + βI) + (2 + β - α)I - α^{-1}BB^T > 0    (43)

that is,

    2I - (A + A^T) - α^{-1}BB^T - αI > 0    (44)
This is exactly (9) when we choose P = I and Q = αI. Thus the global stability of the trivial solution of (10) follows from Theorem 1. Obviously, our result is more general than that of Proposition 1.

Remark 3. In Peng et al.'s 2002 paper (Ref. 27), the authors presented a new approach to the stability analysis of Hopfield-type neural networks with time-varying delays. Here we extended the model and included the non-delayed term. Peng et al. defined two nonlinear functions similar to the matrix norm and matrix measure. In fact, they gave a condition concerning the absolute values of the entries in the connection matrices, as is done in Theorem A. Thus, the differences between excitatory and inhibitory effects of the networks are ignored. If some entries in the connection matrices are negative, there are examples32 for which Theorem A in Sec. 2 fails to be true while Theorem 1 still holds. This illustrates that the hypothesis in Theorem A is conservative, which results from ignoring the differences between excitatory and inhibitory effects.

4. Further Result of Stability Analysis

In the previous section, we obtained the global exponential stability of system (2) if 2PDG^{-1} - (PA + A^T P) - PBQ^{-1}(PB)^T - Q > 0. Now, it is natural to ask: what will happen if the strict inequality is replaced by the following nonstrict inequality?

    2PDG^{-1} - (PA + A^T P) - PBQ^{-1}(PB)^T - Q ≥ 0

In the present section, we give an affirmative answer when the activation functions are hyperbolic
tangent. These functions are typically used in neural network models. Throughout this section, we assume that in system (1), g_i(x) = tanh(G_i x), i = 1, 2, ..., n, where the constant G_i is positive. We have the following theorem.

Theorem 3. If there exist positive diagonal matrices P and Q such that

    2PDG^{-1} - (PA + A^T P) - PBQ^{-1}(PB)^T - Q ≥ 0    (45)
Then system (1) has a unique equilibrium u* = [u*_1, u*_2, ..., u*_n]^T ∈ R^n such that each solution of (1) satisfies lim_{t→+∞} u(t) = u*.

Before the proof, we have to make some preparations. Firstly, we prove that (1) has at least one equilibrium u*. Define the mapping

    H(u) = (E_n - αD)u + α(Ag(u) + Bg(u) + I)    (46)

where E_n is the identity matrix, D = diag(d_1, d_2, ..., d_n), and α is positive and to be selected. Since g(u) is bounded, there exists a positive M_1 such that

    ‖Ag(u) + Bg(u) + I‖ ≤ M_1    (47)

There exist d_min and d_max such that 0 < d_min ≤ d_i ≤ d_max. We select a sufficiently small α such that 1 - αd_max > 0; thus 1 - αd_min > 0 as well. Let M ≥ M_1/d_min and Ω = {x ∈ R^n : ‖x‖ ≤ M}. Then, for u ∈ Ω, we have

    ‖H(u)‖ ≤ (1 - αd_min)M + αM_1 ≤ M    (48)

Thus H(u) is a continuous mapping from Ω to itself. Since Ω is a convex and compact set, by the Brouwer fixed point theorem, there exists u* ∈ Ω such that H(u*) = u*. From (46), we know this u* satisfies

    -Du* + Ag(u*) + Bg(u*) + I = 0    (49)
i.e., u* is an equilibrium point of system (1). Letting x_i(t) = u_i(t) - u*_i, (1) can be written as

    dx_i(t)/dt = -d_i x_i(t) + \sum_{j=1}^n a_{ij} ϕ_j(x_j(t)) + \sum_{j=1}^n b_{ij} ϕ_j(x_j(t - τ_j))    (50)
where ϕ_i(x_i(·)) = g_i(x_i(·) + u*_i) - g_i(u*_i). Thus, the uniqueness and asymptotic stability of (1) reduce to the global convergence of (50). We also need the following properties of the sigmoidal function g:

1. For any real number x,

    |ϕ_j(x)| ≤ G_j |x|,    j = 1, 2, ..., n    (51)

2. Let E_1 > 0 be a fixed positive constant and |x| > E_1. Then there exist constants 0 < E_2 < 1 and E_3 > 0, depending only on E_1, such that

    E_3 ≤ |ϕ_j(x)| ≤ E_2 G_j |x|,    j = 1, 2, ..., n    (52)

Proof of Theorem 3. Define the function ‖x(t)‖_2² = \sum_{i=1}^n |x_i(t)|². Similar to Ref. 6, we prove

    lim_{t→+∞} ‖x(t)‖_2 = 0    (53)

Otherwise, there is a constant E_1 such that for any large T > 0 there is t_T > T such that ‖x(t_T)‖_2 > 2√n E_1. Since ‖x(t)‖_2 is uniformly continuous, there is an interval (a_T, b_T) with b_T - a_T ≥ δ such that ‖x(t)‖_2 > √n E_1 whenever t ∈ (a_T, b_T), where δ is a fixed positive number. Thus, for each t ∈ (a_T, b_T), there is an index i_0 ∈ {1, 2, ..., n} such that |x_{i_0}(t)| > E_1. From Eq. (52), we also know that there are constants 0 < E_2 < 1, E_3 > 0, such that

    E_3 ≤ |ϕ_{i_0}(x_{i_0}(t))| ≤ E_2 G_{i_0} |x_{i_0}(t)|    (54)

Now, we consider the following functional:

    V(x(t)) = 2 \sum_{i=1}^n P_i \int_0^{x_i(t)} ϕ_i(s) ds + \sum_{i=1}^n Q_i \int_{t-τ_i}^t ϕ_i²(x_i(ξ)) dξ    (55)

Calculating V̇(x) along the solutions of Eq. (50), we get

    V̇(x) = 2ϕ^T(t)[-PDx + PAϕ(t) + PBϕ(t - τ)] + ϕ^T(t)Qϕ(t) - ϕ^T(t - τ)Qϕ(t - τ)
          = -2ϕ^T(t)PDx + 2ϕ^T(t)PAϕ(t) + 2ϕ^T(t)PBϕ(t - τ) + ϕ^T(t)Qϕ(t) - ϕ^T(t - τ)Qϕ(t - τ)    (56)

We also have

    2ϕ^T(t)PBϕ(t - τ) - ϕ^T(t - τ)Qϕ(t - τ) ≤ ϕ^T(t)PBQ^{-1}(PB)^T ϕ(t)    (57)

From Eq. (56), we obtain

    V̇(x) ≤ -2ϕ^T(t)PDx + ϕ^T(t)[(PA + A^T P) + PBQ^{-1}(PB)^T + Q]ϕ(t)    (58)

Thus, by Eqs. (51) and (54), we have

    V̇(x) ≤ -2ϕ_{i_0}(t)P_{i_0}d_{i_0}(1/(E_2 G_{i_0}))ϕ_{i_0}(t) - 2 \sum_{i≠i_0} ϕ_i(t)P_i d_i x_i(t) + ϕ^T(t)[(PA + A^T P) + PBQ^{-1}(PB)^T + Q]ϕ(t)
        ≤ -2(1/E_2 - 1)ϕ_{i_0}(t)P_{i_0}d_{i_0}(1/G_{i_0})ϕ_{i_0}(t) - 2ϕ_{i_0}(t)P_{i_0}d_{i_0}(1/G_{i_0})ϕ_{i_0}(t) - 2 \sum_{i≠i_0} ϕ_i(t)P_i d_i (1/G_i)ϕ_i(t) + ϕ^T(t)[(PA + A^T P) + PBQ^{-1}(PB)^T + Q]ϕ(t)
        = -2(1/E_2 - 1)(1/G_{i_0})P_{i_0}d_{i_0}ϕ²_{i_0}(t) - ϕ^T(t)[2PDG^{-1} - PA - A^T P - PBQ^{-1}(PB)^T - Q]ϕ(t)    (59)

According to the condition (45), we get

    V̇(x) ≤ -2(1/E_2 - 1)(1/G_{i_0})P_{i_0}d_{i_0}ϕ²_{i_0}(t)    (60)

From Eq. (54), we obtain

    V̇(x) ≤ -2(1/E_2 - 1)(1/G_{i_0})P_{i_0}d_{i_0}E_3² = -C    (61)

where C is a fixed positive number. Now, we claim that

    lim_{t→+∞} V(t) = -∞    (62)

Let (a_1, b_1) be an interval with length |b_1 - a_1| ≥ δ such that ‖x(t)‖_2 > √n E_1 for all t ∈ (a_1, b_1). If b_1 = +∞, then

    lim_{t→+∞} [V(t) - V(0)] = lim_{t→+∞} \int_0^t V̇(s) ds ≤ lim_{t→+∞} (-Ct) = -∞    (63)
If b_1 < +∞, we can find another interval (a_2, b_2) with the same property that (a_1, b_1) possesses and a_2 ≥ b_1. By induction, we can find a sequence of intervals (a_i, b_i) with this property and a_{i+1} ≥ b_i. If for some i we have b_i = +∞, then by the previous arguments we conclude that V(t) → -∞. Otherwise,

    lim_{t→+∞} [V(t) - V(0)] ≤ lim_{n→+∞} \sum_{i=1}^n [V(b_i) - V(a_i)] ≤ lim_{n→+∞} (-nCδ) = -∞    (64)
In any case, V(t) → -∞, which contradicts V(t) ≥ 0. Thus, Theorem 3 is proved.

Remark 1. In Theorem 3, if the activation function is merely of class G{G_1, G_2, ..., G_n}, then the conclusion of the global convergence of (1) is no longer valid. The following example verifies this. Let n = 1 and consider the differential equation

    du(t)/dt = -u(t) + g(u(t)) + 1    (65)
If g(u) = u, then G = 1 and thus DG^{-1} - A = 0. But, obviously, the above system has no equilibrium. However, when g(u) = tanh(u), the system has a globally asymptotically stable equilibrium, as Theorem 3 states.

Remark 2. From the process of proving the above theorem, we can infer that the theorem is still valid for a class of sigmoidal functions more general than tanh(G_i x).

5. Results for BAM Networks

In this section, we will investigate the existence and exponential stability of the unique equilibrium point of the bidirectional associative memory (BAM) neural network described by the following delayed model:

    dx_i(t)/dt = -a_i x_i(t) + \sum_{j=1}^n b_{ij} f_j(y_j(t - σ_j)) + I_i
    dy_i(t)/dt = -c_i y_i(t) + \sum_{j=1}^n d_{ij} g_j(x_j(t - τ_j)) + J_i    (66)
for i = 1, 2, ..., n. If we denote x = (x_1, x_2, ..., x_n)^T and y = (y_1, y_2, ..., y_n)^T, then Eq. (66) reduces to

    dx(t)/dt = -Ax(t) + Bf(y(t - σ)) + I
    dy(t)/dt = -Cy(t) + Dg(x(t - τ)) + J    (67)
System (66) consists of two sets of n neurons arranged in two layers, namely the I-layer and the J-layer. x_i(·) and y_i(·) denote the membrane potentials of the ith neurons of the I-layer and J-layer, respectively; b_{ij} and d_{ij} correspond to the synaptic connection matrices; I_i and J_i denote external inputs to the neurons introduced from outside the network; σ_j and τ_j are time delays. It is clear that Eq. (66) extends the models studied in Refs. 16, 19, 20, 21 and 24. It was also discussed by Mohamad.26 But most of these works discuss asymptotic stability, and the conditions guaranteeing convergence ignore the signs of the entries in the connection matrices. In this section, we simplify the two-layer model and obtain global exponential stability results with more general activation functions.

Theorem 4. Suppose that f ∈ G{F_1, F_2, ..., F_n}, g ∈ G{G_1, G_2, ..., G_n} and there exist positive diagonal matrices P_1, P_2, Q_1 and Q_2 such that

    2P_1 AG^{-1} - P_1 BQ_2^{-1}(P_1 B)^T - Q_1 > 0    (68)

    2P_2 CF^{-1} - P_2 DQ_1^{-1}(P_2 D)^T - Q_2 > 0    (69)

Then, for each I, J ∈ R^n, system (66) has a unique equilibrium point, say (x*, y*)^T, which is globally exponentially stable, independent of the delays.

Proof. Firstly, we simplify the two-layer model, so that the bidirectional associative memory neural network can be regarded as a single-layer system, i.e., the delayed Hopfield neural network studied in the previous sections. We let w(t) = (x_1(t), x_2(t), ..., x_n(t), y_1(t), y_2(t), ..., y_n(t))^T; then system (66) can be rewritten
as

    dw(t)/dt = -[A 0; 0 C] w(t) + [0 B; D 0] h(w(t - η)) + [I; J]    (70)

which is a Hopfield-type neural network. We note that there is only the delayed term, and the delayed connection matrix is [0 B; D 0]. The activation function is h = (g_1, g_2, ..., g_n, f_1, f_2, ..., f_n)^T, and η = (τ_1, τ_2, ..., τ_n, σ_1, σ_2, ..., σ_n)^T denotes the new vector of time delays. Since f ∈ G{F_1, F_2, ..., F_n} and g ∈ G{G_1, G_2, ..., G_n}, we get h ∈ G{G_1, G_2, ..., G_n, F_1, F_2, ..., F_n}. From the conditions (68) and (69), after a direct calculation, we can obtain

    2 [P_1 0; 0 P_2][A 0; 0 C][G 0; 0 F]^{-1} - [P_1 0; 0 P_2][0 B; D 0][Q_1 0; 0 Q_2]^{-1}([P_1 0; 0 P_2][0 B; D 0])^T - [Q_1 0; 0 Q_2] > 0.
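The block reduction used in this calculation can be verified numerically: for data satisfying (68) and (69), the stacked single-layer criterion is positive definite and its off-diagonal blocks vanish. A NumPy sketch with illustrative matrices of our own (not taken from the paper):

```python
import numpy as np

def is_pd(M, tol=1e-10):
    return np.linalg.eigvalsh((M + M.T) / 2).min() > tol

n = 2
# Illustrative BAM data (assumptions for this sketch): decay matrices A, C,
# connection matrices B, D, slope bounds F = G = I, and P1 = P2 = Q1 = Q2 = I.
A = np.diag([6.0, 7.0]); C = np.diag([5.0, 6.0])
Bm = np.array([[1.0, 0.5], [0.2, 1.0]])
Dm = np.array([[0.8, -0.3], [0.4, 0.9]])
F = np.eye(n); G = np.eye(n)
P1 = P2 = Q1 = Q2 = np.eye(n)

cond68 = 2 * P1 @ A @ np.linalg.inv(G) - P1 @ Bm @ np.linalg.inv(Q2) @ (P1 @ Bm).T - Q1
cond69 = 2 * P2 @ C @ np.linalg.inv(F) - P2 @ Dm @ np.linalg.inv(Q1) @ (P2 @ Dm).T - Q2

Z = np.zeros((n, n))
Pb = np.block([[P1, Z], [Z, P2]]); Qb = np.block([[Q1, Z], [Z, Q2]])
Db = np.block([[A, Z], [Z, C]]);  Gb = np.block([[G, Z], [Z, F]])
Wb = np.block([[Z, Bm], [Dm, Z]])           # delayed connection matrix of (70)

stacked = 2 * Pb @ Db @ np.linalg.inv(Gb) - Pb @ Wb @ np.linalg.inv(Qb) @ (Pb @ Wb).T - Qb
print(is_pd(cond68), is_pd(cond69), is_pd(stacked))  # → True True True
```

The stacked matrix is block diagonal with blocks equal to the left-hand sides of (68) and (69), which is why the two layer-wise conditions suffice.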
Therefore, by Theorem 1, system (70) has a unique equilibrium point, say w*, or equivalently (x*, y*)^T for system (66), which is globally exponentially stable, independent of the delays. The proof is completed.

6. Conclusions

A sufficient condition for global exponential stability independent of time delays is obtained for a class of delayed neural network systems. We do not assume the symmetry of the connection matrices, and the boundedness and differentiability of the activation functions are not required either. The criteria, which respect the differences between excitatory and inhibitory effects on the units, extend some existing results in the literature. Furthermore, the condition can be relaxed in the case that the activation functions belong to a class of sigmoidal functions. We also give similar results concerning the bidirectional associative memory (BAM) neural network.

Acknowledgments

We are very grateful to the reviewers for their helpful comments and suggestions. This paper is supported by National Science Foundation of China Grants 69982003 and 60074005.
References

1. S. Arik 2002, "An analysis of global asymptotic stability of delayed cellular neural networks," IEEE Trans. on Neural Networks 13(5), 1239–1242.
2. S. Boyd et al. 1994, Linear Matrix Inequalities in System and Control Theory (SIAM, Philadelphia).
3. S. Arik and V. Tavsanoglu 2000, "On the global asymptotic stability of delayed cellular neural networks," IEEE Trans. Circuits Syst. I 47(4), 571–574.
4. J. Cao 2000, "Periodic oscillation and exponential stability of delayed CNNs," Phys. Lett. A 270, 157–163.
5. T. P. Chen 2001, "Global exponential stability of delayed Hopfield neural networks," Neural Networks 14, 977–980.
6. T. P. Chen 2001, "Global convergence of delayed dynamical systems," IEEE Transactions on Neural Networks 12(6), 1532–1536.
7. T. P. Chen and L. B. Rong 2003, "Robust global exponential stability of Cohen–Grossberg neural networks with time delays," (to appear in IEEE Transactions on Neural Networks).
8. T. P. Chen and S. Amari 2001, "Exponential convergence of delayed dynamical systems," Neural Computation 13(3), 621–636.
9. P. P. Civalleri, L. M. Gilli and L. Pandolfi 1993, "On stability of cellular neural networks with delay," IEEE Trans. Circuits Syst. 40, 157–164.
10. L. O. Chua and L. Yang 1988, "Cellular neural networks: Theory," IEEE Trans. Circuits Syst. 35, 1257–1272.
11. L. O. Chua and L. Yang 1988, "Cellular neural networks: Application," IEEE Trans. Circuits Syst. 35, 1273–1290.
12. L. O. Chua 1998, CNN: A Paradigm for Complexity (World Scientific, Singapore).
13. C. Feng and R. Plamondon 2001, "On the stability analysis of delayed neural networks systems," Neural Networks 14, 1181–1188.
14. M. Forti and A. Tesi 1995, "New conditions for global stability of neural networks with application to linear and quadratic programming problems," IEEE Trans. Circuits Syst. I 42, 354–366.
15. G. H. Golub and C. F. Van Loan 1996, Matrix Computations, 3rd edition (The Johns Hopkins University Press).
16. K. Gopalsamy and X. He 1994, "Delay-independent stability in bidirectional associative memory networks," IEEE Trans. Neural Networks 5(6), 998–1002.
17. M. Joy 1999, "On the global convergence of a class of functional differential equations with applications in neural network theory," J. Mathematical Analysis and Applications 232, 61–81.
18. M. Joy 2000, "Results concerning the absolute stability of delayed neural networks," Neural Networks 13, 613–616.
19. B. Kosko 1987, "Adaptive bidirectional associative memories," Appl. Opt. 26, 4947–4960.
20. B. Kosko 1988, "Bidirectional associative memories," IEEE Trans. Syst. Man Cybernet 18, 49–60.
21. B. Kosko 1990, "Unsupervised learning in noise," IEEE Trans. Neural Networks 1, 44–57.
22. Y. Kuang 1993, Delay Differential Equations with Applications in Population Dynamics (Academic Press, Boston).
23. T. Liao and F. Wang 2000, "Global stability for cellular neural networks with time delay," IEEE Trans. Neural Networks 11(6), 1481–1484.
24. X. Liao and J. Yu 1998, "Qualitative analysis of bi-directional associative memory with time delay," Int. J. Circ. Theor. Appl. 26, 219–229.
25. A. N. Michel, J. A. Farrell and W. Porod 1989, "Qualitative analysis of neural networks," IEEE Trans. Circuits Syst. 36(2), 229–243.
26. S. Mohamad 2001, "Global exponential stability in continuous-time and discrete-time delayed bidirectional neural networks," Physica D 159, 233–251.
27. J. Peng, H. Qiao and Z. Xu 2002, "A new approach to stability of neural networks with time-varying delays," Neural Networks 15, 95–103.
28. H. Qiao, J. Peng and Z. Xu 2001, "Nonlinear measure: A new approach to exponential stability analysis for Hopfield-type neural networks," IEEE Trans. on Neural Networks 12(2), 360–370.
29. V. Sree Hari Rao and B. Phaneendra 1999, "Global dynamics of bidirectional associative memory neural networks involving transmission delays and dead zones," Neural Networks 12, 455–465.
30. P. Van Den Driessche and X. Zou 1998, "Global attractivity in delayed Hopfield neural networks models," SIAM J. of Applied Math. 58(6), 1878–1890.
31. Z. Yi, P. A. Heng and K. S. Leung 2001, "Convergence analysis of cellular neural networks with unbounded delay," IEEE Trans. Circuits Syst. I 48(6), 680–687.
32. J. Zhang and X. Jin 2000, "Global stability analysis in delayed Hopfield neural network models," Neural Networks 13, 745–753.