Mathematics and Computers in Simulation 79 (2009) 1527–1543
New criteria for globally exponential stability of delayed Cohen–Grossberg neural network

Shengshuang Chen $^a$, Weirui Zhao $^{a,b,*}$, Yong Xu $^b$

$^a$ Department of Mathematics, Wuhan University of Technology, Wuhan, Hubei 430070, China
$^b$ Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen 518055, China

Received 3 October 2006; received in revised form 16 April 2007; accepted 1 July 2008; available online 23 July 2008

$^*$ Corresponding author at: Department of Mathematics, Wuhan University of Technology, 122 Luoshi Road, Wuhan, Hubei 430070, China. E-mail address: [email protected] (W. Zhao).

doi:10.1016/j.matcom.2008.07.002
Abstract

This paper is concerned with the analysis problem of global exponential stability for Cohen–Grossberg neural networks with discrete delays and with distributed delays. We first prove the existence and uniqueness of the equilibrium point under mild conditions, assuming neither differentiability nor strict monotonicity of the activation functions. Then, we employ Lyapunov functionals to establish sufficient conditions ensuring global exponential stability of the equilibria of Cohen–Grossberg neural networks with discrete delays and with distributed delays. Our results are expressed in terms of the system parameters, are easily verified, and are less restrictive than previously known criteria. A comparison between our results and earlier ones shows that they establish a new set of stability criteria for delayed neural networks.

© 2008 IMACS. Published by Elsevier B.V. All rights reserved.

Keywords: Global exponential stability (GES); Lyapunov functional; Neural networks; Delays
1. Introduction

Recently, Cohen and Grossberg [5] proposed and studied an artificial feedback neural network, which is described by the system of ordinary differential equations
\[
\dot{x}_i(t) = -a_i(x_i(t))\Bigl[b_i(x_i(t)) - \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + J_i\Bigr], \qquad i = 1, 2, \ldots, n, \tag{1.1}
\]
where $n$ is the number of neurons in the network; $x_i$ describes the activation of the $i$th neuron; $a_i$ represents an amplification function, and the function $b_i$ can include a constant term indicating a fixed input to the network; the $n \times n$ connection matrix $A = (a_{ij})$ tells how the neurons are connected in the network; the activation functions $f_j$, $j = 1, 2, \ldots, n$, show how the neurons react to input, and $J_i$, $i = 1, 2, \ldots, n$, denote the constant inputs from outside of the system. Due to its promising potential for the tasks of classification, associative memory, and parallel computation, and its ability to solve difficult optimization problems, (1.1) has greatly attracted the attention of the scientific community. Various
generalizations and modifications of (1.1) have also been proposed and studied, among which is the incorporation of time delay into the model. In fact, due to the finite speeds of the switching and transmission of signals in a network, time delays do exist in a working network and, thus, should be incorporated into the model equations of the network. For more detailed justifications for introducing delays into model equations of neural networks, see [1–4,13,14]. For the Cohen–Grossberg model (1.1), Ye et al. [21] first introduced delays by considering the following system of delay differential equations:
\[
\dot{x}_i(t) = -a_i(x_i(t))\Bigl[b_i(x_i(t)) - \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) - \sum_{j=1}^{n} b_{ij} f_j(x_j(t-\tau_{ij}(t))) + J_i\Bigr], \qquad i = 1, 2, \ldots, n, \tag{1.2}
\]
where $a_i(x_i(t)) > 0$ and $b_i(x_i(t))$ are called the amplification and the self-signal functions, respectively; $A = (a_{ij})_{n\times n}$ and $B = (b_{ij})_{n\times n}$ are the normal and the delayed connection weight matrices, respectively; and $f_j(\cdot)$ denote the neuron activation functions. The variable delays $\tau_{ij}(t)$, $i, j = 1, 2, \ldots, n$, are continuous and differentiable with $0 \le \tau_{ij}(t) \le \tau$ and $\dot{\tau}_{ij}(t) \le \eta < 1$ for a nonnegative constant $\eta$ (if $\dot{\tau}_{ij}(t) \le 0$, set $\eta = 0$). We also note that some authors [8,13,19] have studied the pure-delay model (with $a_{ij} = 0$, $i, j = 1, 2, \ldots, n$); here we consider the above hybrid model, in which both instantaneous and delayed signaling occur ($a_{ij} \neq 0$ and $b_{ij} \neq 0$).

Although the use of constant fixed delays in models of delayed neural networks provides a good approximation in simple circuits composed of a small number of cells, neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, so there will be a distribution of propagation delays. In these circumstances the signal propagation is not instantaneous and cannot be modeled with discrete delays; a more appropriate way is to incorporate distributed delays. However, the existing stability criteria are mostly for systems with discrete delays. Only a few are for neural networks with distributed delays (see, for example, [8,14,17,20,22,23]), and those authors have studied only the pure-delay model (with $a_{ij} = 0$, $i, j = 1, 2, \ldots, n$). Hence, in this paper, we also consider the following Cohen–Grossberg hybrid model with distributed delays:
\[
\dot{x}_i(t) = -a_i(x_i(t))\Bigl[b_i(x_i(t)) - \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) - \sum_{j=1}^{n} b_{ij}\int_{-\infty}^{t} k_{ij}(t-s)\, f_j(x_j(s))\,ds + J_i\Bigr], \qquad i = 1, 2, \ldots, n, \tag{1.3}
\]
where the definitions of $a_i$, $b_i$, $a_{ij}$ and $b_{ij}$ are the same as those for system (1.2). The delay kernels $k_{ij}(t)$ are assumed to be real-valued, nonnegative, piecewise continuous functions defined on $[0, +\infty)$ that satisfy
\[
\int_0^{+\infty} k_{ij}(t)\,dt = 1. \tag{1.4}
\]
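Model (1.2) is straightforward to explore numerically. The sketch below integrates a hypothetical two-neuron instance with a forward Euler scheme and a history buffer for the time-varying delays; all parameter values and helper names are ours, chosen to satisfy Assumptions A1–A3 of Section 2 (with $G_i = 1$), and are not taken from the paper.

```python
import numpy as np

# Hypothetical two-neuron instance of model (1.2); every value and helper
# name below is illustrative, not taken from the paper.
A = np.array([[-1.0, -1.0], [1.0, -1.0]])    # instantaneous weights a_ij
B = np.array([[1.0, 1.0], [1.0, 1.0]])       # delayed weights b_ij
J = np.array([2.0, 3.0])                     # external inputs J_i
gamma = 3.0                                  # b_i(x) = gamma * x

def a(x):   # amplification functions, bounded above and below (Assumption A1)
    return np.array([8.0 + np.sin(x[0]), 5.0 + np.cos(x[1])])

def f(x):   # activation, satisfies A3 with G_i = 1; applied elementwise
    return 0.5 * x + 0.5 * np.tanh(x)

def tau(t):  # bounded time-varying delays tau_ij(t)
    return np.array([[1 + np.sin(t) / 3, 1 + np.exp(-np.sin(t)) / 4],
                     [1 + np.exp(-np.sin(t)) / 4, 1 + np.sin(t) / 3]])

dt, tau_max, T = 1e-3, 1.75, 20.0
hist = int(tau_max / dt) + 1                 # history buffer length
x = np.zeros((int(T / dt) + hist, 2))
x[:hist] = np.array([0.5, -0.5])             # constant initial function on [-tau_max, 0]

for k in range(hist, x.shape[0]):
    t = (k - hist) * dt
    xt = x[k - 1]
    lag = np.clip(np.round(tau(t) / dt).astype(int), 0, hist - 1)
    xd = np.array([[x[k - 1 - lag[i, j], j] for j in range(2)] for i in range(2)])
    delayed = np.array([B[i] @ f(xd[i]) for i in range(2)])
    x[k] = xt - dt * a(xt) * (gamma * xt - A @ f(xt) - delayed + J)

print(x[-1])  # for gamma this large the trajectory settles at a single point
```

A fixed step with a rounded delay index is the simplest faithful discretization; for stiffer amplification functions a smaller step or an implicit scheme would be the safer choice.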
One of the most investigated problems is that of the existence, uniqueness, global asymptotic stability (GAS), and global exponential stability (GES) of the equilibrium. The number of equilibria of a neural network relates to its storage capacity. When designing an associative memory neural network, we should make the number of stable equilibrium states as large as possible to provide a memory system with large information capacity, the attraction region of each stable equilibrium state as large as possible to provide robustness and fault tolerance for information processing, and the convergence speed as high as possible to ensure quick convergence of the network operation. Owing to the local asymptotic stability of its equilibria, such an associative memory network is used mainly for information retrieval, pattern recognition, etc.

On the other hand, to embed and solve many problems in applications of neural networks to parallel computation, signal processing, and other problems involving optimization, the dynamic neural network has to be designed to have a unique equilibrium point that is GAS or GES, to avoid the risk of spurious responses or the problem of local minima. In fact, earlier applications of neural networks to optimization problems suffered from the existence of a complicated set of equilibria (see [18]). Thus, the GAS and GES of a unique equilibrium of the model system is of great importance from both the theoretical and the application point of view in several fields, and has been the major concern of many authors. The primary purpose of this paper is therefore to obtain criteria ensuring that (1.2) and (1.3) have a unique equilibrium which is globally exponentially stable.

Some existing results on the existence, uniqueness, GAS, and GES of the equilibrium concern the case where the activation functions are all bounded and strictly increasing. These assumptions make the results inapplicable to some important engineering problems. In this paper, not only do we abandon the boundedness condition on the $f_i$, but we also remove the differentiability and strict monotonicity
restrictions on the $f_i$, although this leads to more difficulty in the stability analysis. In addition, we do not impose any restriction, such as symmetry, on the connection matrices.

The rest of this paper is organized as follows. In the next section, some notations and assumptions are given. In Section 3, the existence and uniqueness of the equilibrium are addressed by employing homeomorphism techniques. By constructing novel Lyapunov functionals, we then study the Cohen–Grossberg models with discrete delays and with distributed delays; exponential stability criteria for the respective networks are presented in Sections 4 and 5. Examples and comparisons are given in Section 6 to demonstrate the effectiveness of our main results. Finally, conclusions are drawn in Section 7.

2. Preliminaries

Firstly, throughout this paper we use the following notations. Let $B = (b_{ij})$ be a real matrix of dimension $n \times n$. $B^{T}$ and $B^{-1}$ denote, respectively, the transpose and the inverse of a square matrix $B$. The notation $B > 0$ ($B < 0$) means that $B$ is symmetric and positive definite (negative definite). $\|B\|_2$ denotes the norm of $B$ induced by the Euclidean vector norm, i.e., $\|B\|_2 = (\lambda_{\max}(B^{T}B))^{1/2}$, where $\lambda_{\max}(M)$ denotes the maximum eigenvalue of a matrix $M$. $\|B\|_1$ and $\|B\|_\infty$ denote the column-sum and row-sum norms of $B$, respectively; that is, $\|B\|_1 = \max_{1\le j\le n}\sum_{i=1}^{n}|b_{ij}|$ and $\|B\|_\infty = \max_{1\le i\le n}\sum_{j=1}^{n}|b_{ij}|$. Furthermore, $[B]_1$ and $[B]_\infty$ denote the diagonal matrices
\[
[B]_1 = \mathrm{diag}\Bigl(\sum_{i=1}^{n}|b_{i1}|, \sum_{i=1}^{n}|b_{i2}|, \ldots, \sum_{i=1}^{n}|b_{in}|\Bigr), \qquad
[B]_\infty = \mathrm{diag}\Bigl(\sum_{i=1}^{n}|b_{1i}|, \sum_{i=1}^{n}|b_{2i}|, \ldots, \sum_{i=1}^{n}|b_{ni}|\Bigr),
\]
built from the column sums and the row sums of $|B|$, respectively.

Secondly, in order to establish the global stability conditions for the above neural networks and to make a precise comparison between our stability conditions and previous results derived in the literature, we give some usual assumptions on the functions $a_i$, $k_{ij}$, $b_i$ and $f_i$:

Assumption A1. The functions $a_i(x)$, $i = 1, 2, \ldots, n$, are continuous and bounded, and there exist positive constants $\underline{\alpha}_i$ and $\bar{\alpha}_i$ such that $0 < \underline{\alpha}_i \le a_i(x) \le \bar{\alpha}_i$ for all $x \in \mathbb{R}$.

Assumption A2. The functions $b_i(x)$ are continuous and there exist constants $\gamma_i > 0$ such that
\[
(b_i(x) - b_i(y))(x - y) \ge \gamma_i (x - y)^2 > 0, \qquad i = 1, 2, \ldots, n, \quad \forall x, y \in \mathbb{R},\ x \neq y.
\]

Assumption A3. There exist positive constants $G_i$ such that
\[
0 \le (f_i(x) - f_i(y))(x - y) \le G_i (x - y)^2, \qquad i = 1, 2, \ldots, n, \quad \forall x, y \in \mathbb{R},\ x \neq y.
\]

Assumption A4. There exists a positive number $\delta_0$ such that $\int_0^{\infty} k_{ij}(s)\, e^{\delta_0 s}\,ds < \infty$.
3. Existence and uniqueness of the equilibrium

In this section, we prove the existence and the uniqueness of the equilibrium point of the Cohen–Grossberg networks under very relaxed conditions. Before the proof, let us recall a lemma given in [7].

Lemma 3.1. A continuous map $H(x): \mathbb{R}^n \to \mathbb{R}^n$ is a homeomorphism if $H(x)$ is injective and $\|H(x)\| \to \infty$ as $\|x\| \to \infty$.

We also have the following lemma due to [16].

Lemma 3.2. Given any real matrices $X$, $Y$, $C$ of appropriate dimensions and a scalar $\varepsilon_0 > 0$, where $C > 0$, one has $X^{T}Y + Y^{T}X \le \varepsilon_0 X^{T}CX + (1/\varepsilon_0)\, Y^{T}C^{-1}Y$.

Since $a_i(x)$ is positive, a point $x^* = (x_1^*, x_2^*, \ldots, x_n^*)^{T}$ is an equilibrium of system (1.2) (or (1.3)) if and only if $x^*$ is a solution of the equation
\[
b_i(x_i) - \sum_{j=1}^{n}(a_{ij} + b_{ij}) f_j(x_j) + J_i = 0, \qquad i = 1, 2, \ldots, n. \tag{3.1}
\]
Generally, (3.1) may have more than one solution $x^*$ and, hence, system (1.2) (or (1.3)) may have more than one equilibrium. It is well known that bounded activation functions always guarantee the existence of an equilibrium point for system (1.2). Moreover, we have the following theorem for unbounded activation functions.

Theorem 3.3. Suppose that in system (1.2) (or (1.3)) Assumptions A1–A3 are satisfied and $|f_j(x)| \to \infty$ as $|x| \to \infty$. Neural system (1.2) (or (1.3)) has a unique equilibrium point if there exist positive diagonal matrices $P = \mathrm{diag}(p_1, \ldots, p_n)$ and $Q = \mathrm{diag}(q_1, \ldots, q_n)$ such that
\[
\Omega_1 = 2P\Gamma G^{-1} - PA - A^{T}P - [PQ^{-1}B]_\infty - [PQB]_1 > 0,
\]
where $\Gamma = \mathrm{diag}(\gamma_1, \ldots, \gamma_n)$ and $G = \mathrm{diag}(G_1, \ldots, G_n)$.

Proof. Let $x^* = (x_1^*, \ldots, x_n^*)^{T}$ denote an equilibrium point of neural network model (1.2); then $x^*$ satisfies (3.1). Define
\[
F(x) = (F_1(x), F_2(x), \ldots, F_n(x))^{T}, \qquad F_i(x) = b_i(x_i) - \sum_{j=1}^{n}(a_{ij}+b_{ij}) f_j(x_j) + J_i. \tag{3.2}
\]
Obviously, the solutions of $F(x) = 0$ are exactly the equilibrium points of (1.2). Therefore, (1.2) has a unique equilibrium point if $F(x)$ is a homeomorphism of $\mathbb{R}^n$. By Lemma 3.1, $F(x)$ is a homeomorphism of $\mathbb{R}^n$ if $F(x) \neq F(y)$ for all $x \neq y$ and $\|F(x)\| \to \infty$ as $\|x\| \to \infty$.

Let $x \neq y$. Under the assumptions on the activation functions, two cases arise: (i) $x \neq y$ and $f(x) - f(y) \neq 0$; (ii) $x \neq y$ and $f(x) - f(y) = 0$, where $f(x) = (f_1(x_1), \ldots, f_n(x_n))^{T}$.

First, consider case (i). Multiplying both sides of (3.2), evaluated at $x$ and at $y$, by $2(f(x)-f(y))^{T}P$ results in
\[
2(f(x)-f(y))^{T}P(F(x)-F(y)) = -2(f(x)-f(y))^{T}P\bigl[\beta(x)-\beta(y) - (A+B)(f(x)-f(y))\bigr], \tag{3.3}
\]
where $\beta(x) = (b_1(x_1), \ldots, b_n(x_n))^{T}$. Since $(b_i(x_i)-b_i(y_i))(f_i(x_i)-f_i(y_i)) \ge (\gamma_i/G_i)(f_i(x_i)-f_i(y_i))^2$, we have
\[
(f(x)-f(y))^{T}P(\beta(x)-\beta(y)) \ge (f(x)-f(y))^{T}P\Gamma G^{-1}(f(x)-f(y)).
\]
Since $2 b_{ij}(f_i(x_i)-f_i(y_i))(f_j(x_j)-f_j(y_j)) \le q_i^{-1}|b_{ij}|(f_i(x_i)-f_i(y_i))^2 + q_i|b_{ij}|(f_j(x_j)-f_j(y_j))^2$, we have
\[
\begin{aligned}
2(f(x)-f(y))^{T}P(F(x)-F(y)) &\le -2(f(x)-f(y))^{T}P\Gamma G^{-1}(f(x)-f(y)) + (f(x)-f(y))^{T}(PA+A^{T}P)(f(x)-f(y))\\
&\quad + (f(x)-f(y))^{T}\bigl([PQ^{-1}B]_\infty + [PQB]_1\bigr)(f(x)-f(y))\\
&\le -(f(x)-f(y))^{T}\Omega_1 (f(x)-f(y)) < 0. 
\end{aligned}\tag{3.4}
\]
Thus $F(x) \neq F(y)$ when $f(x) \neq f(y)$, since $P$ is a positive diagonal matrix. Now consider case (ii), where $x \neq y$ and $f(x)-f(y) = 0$. In this case, $F(x)-F(y) = -(\beta(x)-\beta(y)) \neq 0$ by Assumption A2, so again $F(x) \neq F(y)$. Hence $F(x) \neq F(y)$ for all $x \neq y$.

We now show that the conditions of Theorem 3.3 also imply that $\|F(x)\| \to \infty$ as $\|x\| \to \infty$. Setting $y = 0$ in (3.4) yields
\[
2(f(x)-f(0))^{T}P(F(x)-F(0)) \le -(f(x)-f(0))^{T}\Omega_1(f(x)-f(0)) \le -\lambda\|f(x)-f(0)\|_2^2, \tag{3.5}
\]
where $\lambda$ is the minimum eigenvalue of the positive definite matrix $\Omega_1$. From (3.5), it follows that
\[
\lambda\|f(x)-f(0)\|_2^2 \le \Bigl|2\sum_{i=1}^{n} p_i(f_i(x_i)-f_i(0))(F_i(x)-F_i(0))\Bigr| \le 2p\sum_{i=1}^{n}|f_i(x_i)-f_i(0)|\,|F_i(x)-F_i(0)| \le 2p\|f(x)-f(0)\|_\infty \sum_{i=1}^{n}|F_i(x)-F_i(0)| = 2p\|f(x)-f(0)\|_\infty \|F(x)-F(0)\|_1,
\]
where $p = \max\{p_1, p_2, \ldots, p_n\}$. Using the fact that $\|f(x)-f(0)\|_\infty \le \|f(x)-f(0)\|_2$, we obtain $\lambda\|f(x)-f(0)\|_\infty \le 2p\|F(x)-F(0)\|_1$. Noting that $\|f(x)-f(0)\|_\infty \ge \|f(x)\|_\infty - \|f(0)\|_\infty$ and $\|F(x)-F(0)\|_1 \le \|F(x)\|_1 + \|F(0)\|_1$, we have $\lambda\|f(x)\|_\infty - \lambda\|f(0)\|_\infty \le 2p\|F(x)\|_1 + 2p\|F(0)\|_1$. Hence,
\[
\|F(x)\|_1 \ge \frac{\lambda\|f(x)\|_\infty - \lambda\|f(0)\|_\infty - 2p\|F(0)\|_1}{2p},
\]
from which it can easily be concluded that $\|F(x)\| \to \infty$ as $\|f(x)\| \to \infty$. Since $|f_j(x)| \to \infty$ as $|x| \to \infty$, it follows that $\|F(x)\| \to \infty$ as $\|x\| \to \infty$. Hence $F(x)$ is a homeomorphism of $\mathbb{R}^n$, which implies that the neural system (1.2) (or (1.3)) has an equilibrium point and this equilibrium point is unique. □

Arguing as in the proof of Theorem 3.3, we can obtain the following theorems.

Theorem 3.4. Suppose that in system (1.2) (or (1.3)) Assumptions A1–A3 are satisfied and $|f_j(x)| \to \infty$ as $|x| \to \infty$. Neural system (1.2) (or (1.3)) has a unique equilibrium point if there exist positive diagonal matrices $P = \mathrm{diag}(p_1, \ldots, p_n)$ and $Q = \mathrm{diag}(q_1, \ldots, q_n)$ such that
\[
\Omega_2 = 2P\Gamma G^{-1} - PA - A^{T}P - [PBQ^{-1}]_\infty - [PBQ]_1 > 0.
\]

Theorem 3.5. Suppose that in system (1.2) (or (1.3)) Assumptions A1–A3 are satisfied and $|f_j(x)| \to \infty$ as $|x| \to \infty$. Neural system (1.2) (or (1.3)) has a unique equilibrium point if there exist positive diagonal matrices $P = \mathrm{diag}(p_1, \ldots, p_n)$ and $Q = \mathrm{diag}(q_1, \ldots, q_n)$ such that
\[
\Omega_3 = 2P\Gamma G^{-1} - PA - A^{T}P - nPQ^{-1} - PQ\,\mathrm{diag}\Bigl(\sum_{j=1}^{n} b_{j1}^2, \sum_{j=1}^{n} b_{j2}^2, \ldots, \sum_{j=1}^{n} b_{jn}^2\Bigr) > 0,
\]
where $\Gamma = \mathrm{diag}(\gamma_1, \gamma_2, \ldots, \gamma_n)$.
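Each of these criteria is a finite positive-definiteness test on explicitly computable matrices, so for given network data it can be verified mechanically. A sketch for $\Omega_1$ under illustrative two-neuron data (helper and variable names are ours; $\Omega_2$ and $\Omega_3$ are checked the same way):

```python
import numpy as np

diag_1 = lambda B: np.diag(np.abs(B).sum(axis=0))    # [B]_1, column sums
diag_inf = lambda B: np.diag(np.abs(B).sum(axis=1))  # [B]_inf, row sums

def omega_1(P, Q, A, B, Gamma, G):
    """Criterion matrix of Theorem 3.3."""
    return (2 * P @ Gamma @ np.linalg.inv(G) - P @ A - A.T @ P
            - diag_inf(P @ np.linalg.inv(Q) @ B) - diag_1(P @ Q @ B))

def is_pos_def(M):
    return np.all(np.linalg.eigvalsh((M + M.T) / 2) > 0)

# Data echoing Example 6.3 below, with gamma_i = 3, G_i = 1, P = Q = I.
A = np.array([[-1.0, -1.0], [1.0, -1.0]])
B = np.array([[1.0, 1.0], [1.0, 1.0]])
print(is_pos_def(omega_1(np.eye(2), np.eye(2), A, B, 3 * np.eye(2), np.eye(2))))  # True
```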
4. The Cohen–Grossberg model with discrete delays

In this section, we consider system (1.2) with initial conditions of the form
\[
x_i(s) = \varphi_i(s) \in C([-\tau, 0], \mathbb{R}), \qquad s \in [-\tau, 0], \quad i = 1, 2, \ldots, n, \tag{4.1}
\]
where $\tau = \max_{1\le i,j\le n}\tau_{ij}$. Let $x^*$ be an equilibrium of system (1.2) and $z(t) = x(t) - x^*$. Substituting $x(t) = z(t) + x^*$ into system (1.2) leads to
\[
\dot{z}_i(t) = -\alpha_i(z_i(t))\Bigl[\beta_i(z_i(t)) - \sum_{j=1}^{n} a_{ij}\, g_j(z_j(t)) - \sum_{j=1}^{n} b_{ij}\, g_j(z_j(t-\tau_{ij}(t)))\Bigr], \tag{4.2}
\]
where
\[
\alpha_i(z_i(t)) = a_i(z_i(t)+x_i^*), \qquad \beta_i(z_i(t)) = b_i(z_i(t)+x_i^*) - b_i(x_i^*), \qquad g_i(z_i(t)) = f_i(z_i(t)+x_i^*) - f_i(x_i^*). \tag{4.3}
\]
With respect to the exponential stability of $x^*$ for system (1.2), we have the following result, which is independent of the delays.

Theorem 4.1. Suppose that in system (4.2), $\dot{\tau}_{ij}(t) \le \eta < 1$ for a nonnegative constant $\eta$ (if $\dot{\tau}_{ij}(t) \le 0$, set $\eta = 0$), $0 \le \tau_{ij}(t) \le \tau$, and Assumptions A1–A3 are satisfied. If there exist positive diagonal matrices $P = \mathrm{diag}(p_1, \ldots, p_n)$ and $Q = \mathrm{diag}(q_1, \ldots, q_n)$ such that
\[
\Omega_{11} = 2P\Gamma G^{-1} - PA - A^{T}P - [PQ^{-1}B]_\infty - \frac{1}{1-\eta}[PQB]_1 > 0,
\]
then the origin of neural system (4.2) is globally exponentially stable. This means that there exist positive constants $k$ and $\gamma$ such that for any solution $z(t)$ of system (4.2) with initial function $z(s) = \phi(s)$, $s \in [-\tau, 0]$, where $\phi \in C([-\tau, 0], \mathbb{R}^n)$, one has
\[
\|z(t)\|_2^2 \equiv \sum_{i=1}^{n} z_i^2(t) \le \gamma e^{-kt}\sum_{i=1}^{n}\sup_{s\in[-\tau,0]}\phi_i^2(s) = \gamma e^{-kt}\|\phi\|_2^2.
\]
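Theorem 4.1 leaves the diagonal matrices $P$ and $Q$ free, so applying it amounts to searching for a feasible pair. A crude random search is often adequate in low dimension; the sketch below is one such heuristic (the names and search ranges are ours; an LMI solver would be the systematic alternative). A negative outcome is inconclusive: it does not show that $\Omega_{11} > 0$ is infeasible.

```python
import numpy as np

rng = np.random.default_rng(0)
diag_1 = lambda B: np.diag(np.abs(B).sum(axis=0))    # [B]_1
diag_inf = lambda B: np.diag(np.abs(B).sum(axis=1))  # [B]_inf

def omega_11(p, q, A, B, Gamma, G, eta):
    """Criterion matrix of Theorem 4.1 for P = diag(p), Q = diag(q)."""
    P, Q = np.diag(p), np.diag(q)
    return (2 * P @ Gamma @ np.linalg.inv(G) - P @ A - A.T @ P
            - diag_inf(P @ np.diag(1 / q) @ B) - diag_1(P @ Q @ B) / (1 - eta))

def find_PQ(A, B, Gamma, G, eta, trials=20000):
    n = A.shape[0]
    for _ in range(trials):
        p, q = rng.uniform(0.05, 5.0, n), rng.uniform(0.05, 5.0, n)
        M = omega_11(p, q, A, B, Gamma, G, eta)
        if np.all(np.linalg.eigvalsh((M + M.T) / 2) > 0):
            return p, q        # feasible witness found
    return None                # inconclusive; feasibility is not excluded

A = np.array([[-1.0, -1.0], [1.0, -1.0]])
print(find_PQ(A, np.ones((2, 2)), 3 * np.eye(2), np.eye(2), eta=1 / 3))
```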
Proof. Since $\Omega_1 \ge \Omega_{11} > 0$, by Theorem 3.3 the origin is the unique equilibrium point of neural system (4.2). We employ the positive definite Lyapunov functional
\[
V(z(t),t) = \varepsilon_1 V_1(z(t),t) + V_2(z(t),t), \tag{4.4}
\]
where
\[
V_1(z(t),t) = 2\sum_{i=1}^{n}\int_0^{z_i(t)}\frac{s}{\alpha_i(s)}\,ds, \qquad
V_2(z(t),t) = 2\sum_{i=1}^{n} p_i\int_0^{z_i(t)}\frac{g_i(s)}{\alpha_i(s)}\,ds + \frac{1}{1-\eta}\sum_{i=1}^{n}\sum_{j=1}^{n}\int_{t-\tau_{ji}(t)}^{t} r_j|b_{ji}|\, g_i^2(z_i(v))\,dv,
\]
n
⎡ zi (t) ⎣−βi (zi (t)) +
i=1
V˙ 2 (z(t), t) = 2
⎡ gi (zi (t))pi ⎣−βi (zi (t)) +
i=1
+
aij gj (zj (t)) +
j=1
and n
n
1 1−η
n
n
rj |bji |gi2 (zi (t)) −
i=1 j=1
bij gj (zj (t − τij (t)))⎦
j=1
aij gj (zj (t)) +
j=1 n n
⎤
n n i=1 j=1
n
⎤ bij gj (zj (t − τij (t)))⎦
j=1
1 − τ˙ji (t) rj |bji |gi2 (zi (t − τji (t))) 1−η
Since zi (t)βi (zi (t)) ≥ γi z2i (t), we can rewrite V˙ 1 as the following format: n n n
1/2 √ −(1/2) 2 T ˙ V1 (z(t), t) ≤ −2 γi zi (t) + 2 z (t) √ Ag(z(t))) + 2zi (t)bij gj (zj (t − τij (t))), ( 2
2 i=1 i=1 j=1 where g(z(t)) = (g1 (z1 (t)), g2 (z2 (t)), . . . , gn (zn (t)))T . By Lemma 3.2 and the Cauchy inequality (i.e. a2 + b2 ≥ 2ab), we get V˙ 1 (z(t), t) ≤ −zT (t) z(t) + 2g(z(t))T AT −1 Ag(z(t)) +
n n 2 i=1 j=1
+Mg(z(t))T g(z(t)) + M
n n i=1 j=1
|bij |gj2 (zj (t − τij (t)))
γi
bij2 gj2 (zj (t − τij (t))) ≤ −zT (t) z(t)
where $M = \max\{2\|A^{T}\Gamma^{-1}A\|_2,\ \max_{1\le i,j\le n}(2/\gamma_i)|b_{ij}|\} \ge 0$.

Since $g_i(z_i(t))\beta_i(z_i(t)) \ge \gamma_i z_i(t) g_i(z_i(t)) \ge \gamma_i G_i^{-1} g_i^2(z_i(t))$, we have $-g(z(t))^{T}P\beta(z(t)) \le -g(z(t))^{T}P\Gamma G^{-1}g(z(t))$. And since $(1-\dot{\tau}_{ji}(t))/(1-\eta) \ge 1$, the term $\dot{V}_2$ can be bounded:
\[
\dot{V}_2(z(t),t) \le -g(z(t))^{T}\Bigl(2P\Gamma G^{-1} - PA - A^{T}P - \frac{1}{1-\eta}[RB]_1\Bigr)g(z(t)) + 2\sum_{i=1}^{n} p_i\, g_i(z_i(t))\sum_{j=1}^{n} b_{ij}\, g_j(z_j(t-\tau_{ij}(t))) - \sum_{i=1}^{n}\sum_{j=1}^{n} r_j|b_{ji}|\, g_i^2(z_i(t-\tau_{ji}(t))),
\]
where $R = \mathrm{diag}(r_1, \ldots, r_n)$. Since $2 g_i(z_i(t))\, b_{ij}\, g_j(z_j(t-\tau_{ij}(t))) \le q_i^{-1}|b_{ij}|\, g_i^2(z_i(t)) + q_i|b_{ij}|\, g_j^2(z_j(t-\tau_{ij}(t)))$, we have
\[
\begin{aligned}
\dot{V}_2(z(t),t) &\le -g(z(t))^{T}\Bigl(2P\Gamma G^{-1} - PA - A^{T}P - \frac{1}{1-\eta}[RB]_1\Bigr)g(z(t)) + \sum_{i=1}^{n}\sum_{j=1}^{n} p_i q_i^{-1}|b_{ij}|\, g_i^2(z_i(t)) + \sum_{i=1}^{n}\sum_{j=1}^{n} p_i q_i|b_{ij}|\, g_j^2(z_j(t-\tau_{ij}(t))) - \sum_{i=1}^{n}\sum_{j=1}^{n} r_j|b_{ji}|\, g_i^2(z_i(t-\tau_{ji}(t)))\\
&\le -g(z(t))^{T}\Bigl(2P\Gamma G^{-1} - PA - A^{T}P - [PQ^{-1}B]_\infty - \frac{1}{1-\eta}[RB]_1\Bigr)g(z(t)) - \sum_{i=1}^{n}\sum_{j=1}^{n}(r_i - p_i q_i)|b_{ij}|\, g_j^2(z_j(t-\tau_{ij}(t))).
\end{aligned}
\]
Since $\Omega_{11} > 0$, there exists $\varepsilon_2 > 0$ such that $\Omega_{11} - \varepsilon_2\bigl(I + \frac{1}{1-\eta}[B]_1\bigr) > 0$. Defining $r_i = p_i q_i + \varepsilon_2$, i.e., $R = PQ + \varepsilon_2 I$, we have
\[
\begin{aligned}
\dot{V}_2(z(t),t) &\le -g(z(t))^{T}\Bigl(2P\Gamma G^{-1} - PA - A^{T}P - [PQ^{-1}B]_\infty - \frac{1}{1-\eta}[PQB]_1 - \frac{\varepsilon_2}{1-\eta}[B]_1\Bigr)g(z(t)) - \varepsilon_2\sum_{i=1}^{n}\sum_{j=1}^{n}|b_{ij}|\, g_j^2(z_j(t-\tau_{ij}(t)))\\
&= -g(z(t))^{T}\Bigl(\Omega_{11} - \varepsilon_2\Bigl(I + \frac{1}{1-\eta}[B]_1\Bigr) + \varepsilon_2 I\Bigr)g(z(t)) - \varepsilon_2\sum_{i=1}^{n}\sum_{j=1}^{n}|b_{ij}|\, g_j^2(z_j(t-\tau_{ij}(t)))\\
&\le -\varepsilon_2\, g(z(t))^{T}g(z(t)) - \varepsilon_2\sum_{i=1}^{n}\sum_{j=1}^{n}|b_{ij}|\, g_j^2(z_j(t-\tau_{ij}(t))).
\end{aligned}
\]
If we now choose $\varepsilon_1 > 0$ such that $M\varepsilon_1 \le \varepsilon_2$, then $\dot{V}(z(t),t) \le -\varepsilon_1 z^{T}(t)\Gamma z(t)$.

Let $\gamma = \min_{1\le i\le n}\gamma_i$, $r = \frac{1}{1-\eta}\max_{1\le i\le n}\sum_{j=1}^{n} r_j|b_{ji}|$, $\alpha = \min_{1\le i\le n}\underline{\alpha}_i$, $G_M = \max_{1\le i\le n} G_i$ and $p = \max_{1\le i\le n} p_i$, and consider the above $V(z(t),t)$. Obviously, $V(z(t),t)$ is a positive definite and radially unbounded Lyapunov functional. Choose $\varepsilon > 0$ satisfying the condition
\[
\varepsilon\,\frac{\varepsilon_1 + pG_M}{\alpha} - \varepsilon_1\gamma + \varepsilon\, r G_M^2\, \tau e^{\varepsilon\tau} < 0. \tag{4.5}
\]
Since $p_i\int_0^{z_i(t)}\frac{g_i(s)}{\alpha_i(s)}\,ds \le p\int_0^{z_i(t)}\frac{G_M s}{\alpha}\,ds = \frac{pG_M}{2\alpha} z_i^2(t)$ and $\int_0^{z_i(t)}\frac{s}{\alpha_i(s)}\,ds \le \int_0^{z_i(t)}\frac{s}{\alpha}\,ds = \frac{1}{2\alpha} z_i^2(t)$, we have
\[
\frac{d}{dt}\bigl(e^{\varepsilon t}V(z(t),t)\bigr) \le e^{\varepsilon t}\Bigl[\varepsilon\,\frac{\varepsilon_1+pG_M}{\alpha} - \varepsilon_1\gamma\Bigr]z^{T}(t)z(t) + \frac{\varepsilon e^{\varepsilon t}}{1-\eta}\sum_{i=1}^{n}\sum_{j=1}^{n}\int_{t-\tau_{ij}(t)}^{t} r_i|b_{ij}|\, g_j^2(z_j(v))\,dv. \tag{4.6}
\]
Integrating both sides of (4.6) from $0$ to an arbitrary positive number $s$, we obtain
\[
e^{\varepsilon s}V(z(s),s) - V(z(0),0) \le \int_0^{s} e^{\varepsilon t}\Bigl[\varepsilon\,\frac{\varepsilon_1+pG_M}{\alpha} - \varepsilon_1\gamma\Bigr]z^{T}(t)z(t)\,dt + \frac{\varepsilon}{1-\eta}\int_0^{s} e^{\varepsilon t}\sum_{i=1}^{n}\sum_{j=1}^{n}\int_{t-\tau_{ij}(t)}^{t} r_i|b_{ij}|\, g_j^2(z_j(v))\,dv\,dt. \tag{4.7}
\]
Analogously to the proof of Theorem 3.5 in [25], we have
\[
\int_0^{s} e^{\varepsilon t}\int_{t-\tau_{ij}(t)}^{t} r_i|b_{ij}|\, g_j^2(z_j(v))\,dv\,dt \le r_i|b_{ij}|\int_0^{s} e^{\varepsilon t}\int_{t-\tau}^{t} g_j^2(z_j(v))\,dv\,dt < r_i|b_{ij}|\, G_M^2\, \tau e^{\varepsilon\tau}\Bigl(\int_{-\tau}^{0} e^{\varepsilon v} z_j^2(v)\,dv + \int_0^{s} e^{\varepsilon v} z_j^2(v)\,dv\Bigr). \tag{4.8}
\]
Substituting (4.8) into (4.7) and using (4.5), we obtain
\[
e^{\varepsilon s}V(z(s),s) - V(z(0),0) \le \int_0^{s} e^{\varepsilon t}\Bigl[\varepsilon\,\frac{\varepsilon_1+pG_M}{\alpha} - \varepsilon_1\gamma + \varepsilon r G_M^2 \tau e^{\varepsilon\tau}\Bigr]z^{T}(t)z(t)\,dt + \varepsilon r G_M^2 \tau e^{\varepsilon\tau}\int_{-\tau}^{0} e^{\varepsilon v} z^{T}(v)z(v)\,dv \le \varepsilon r G_M^2 \tau e^{\varepsilon\tau}\int_{-\tau}^{0} e^{\varepsilon v} z^{T}(v)z(v)\,dv \equiv M_1\|\phi\|_2^2.
\]
So
\[
V(z(t),t) \le \bigl(V(z(0),0) + M_1\|\phi\|_2^2\bigr)e^{-\varepsilon t}, \qquad \forall t > 0. \tag{4.9}
\]
Moreover,
\[
V(z(0),0) = 2\varepsilon_1\sum_{i=1}^{n}\int_0^{z_i(0)}\frac{s}{\alpha_i(s)}\,ds + 2\sum_{i=1}^{n} p_i\int_0^{z_i(0)}\frac{g_i(s)}{\alpha_i(s)}\,ds + \frac{1}{1-\eta}\sum_{i=1}^{n}\sum_{j=1}^{n}\int_{-\tau_{ji}(0)}^{0} r_j|b_{ji}|\, g_i^2(\varphi_i(v))\,dv \le \Bigl(\frac{\varepsilon_1}{\alpha} + \frac{pG_M}{\alpha} + r G_M^2\tau\Bigr)\|\phi\|_2^2 = M_2\|\phi\|_2^2.
\]
According to (4.4), (4.9) and the above inequality, we obtain
\[
\frac{\varepsilon_1}{\bar{\alpha}}\, z^{T}(t)z(t) \le 2\varepsilon_1\sum_{i=1}^{n}\int_0^{z_i(t)}\frac{s}{\alpha_i(s)}\,ds \le V(z(t),t) \le (M_1+M_2)\|\phi\|_2^2\, e^{-\varepsilon t}, \qquad \forall t > 0,
\]
where $\bar{\alpha} = \max_{1\le j\le n}\bar{\alpha}_j$; that is,
\[
\|z\|_2 \le \sqrt{\frac{\bar{\alpha}(M_1+M_2)}{\varepsilon_1}}\,\|\phi\|_2\, e^{-(\varepsilon/2)t}. \tag{4.10}
\]
(4.10) implies that the origin of (4.2) is globally exponentially stable. □
If we set τij (t) = τij , i.e., τ˙ij (t) = 0, then we can easily obtain the following result.
Corollary 4.2. Suppose that in system (1.2), $\dot{\tau}_{ij}(t) = 0$, $0 \le \tau_{ij} \le \tau$, and Assumptions A1–A3 are satisfied. If there exist positive diagonal matrices $P = \mathrm{diag}(p_1, \ldots, p_n)$ and $Q = \mathrm{diag}(q_1, \ldots, q_n)$ such that
\[
\Omega_1 = 2P\Gamma G^{-1} - PA - A^{T}P - [PQ^{-1}B]_\infty - [PQB]_1 > 0,
\]
then the unique equilibrium point $x^*$ of neural system (1.2) is globally exponentially stable.

Corollary 4.3. Suppose that in system (1.2), $0 \le \tau_{ij}(t) = \tau_{ij} \le \tau$, and Assumptions A1–A3 are satisfied. If there exist positive constants $d_i$, $i = 1, 2, \ldots, n$, and $r_1 \in [0,1]$, $r_2 \in [0,1]$ such that
\[
\max_{1\le i\le n}\Biggl\{\frac{1}{\gamma_i d_i}\Biggl(d_i\sum_{j=1}^{n}\bigl(G_i^{2r_1}|a_{ij}| + G_i^{2r_2}|b_{ij}|\bigr) + \sum_{j=1}^{n} d_j\bigl(G_j^{2(1-r_1)}|a_{ji}| + G_j^{2(1-r_2)}|b_{ji}|\bigr)\Biggr)\Biggr\} < 2, \tag{4.11}
\]
then the unique equilibrium point $x^*$ of neural system (1.2) is globally exponentially stable.

Proof. Choose $D = \mathrm{diag}(d_1, d_2, \ldots, d_n)$, $P = DG$ and $Q = \mathrm{diag}(G_1^{1-2r_2}, \ldots, G_n^{1-2r_2})$; then the $\Omega_1$ in Corollary 4.2 becomes
\[
\Omega_1 = 2D\Gamma - DGA - A^{T}DG - [DGQ^{-1}B]_\infty - [DGQB]_1.
\]
Thus, for all $x = (x_1, x_2, \ldots, x_n)^{T} \neq 0$, we have
\[
\begin{aligned}
x^{T}\Omega_1 x &= \sum_{i=1}^{n} 2 d_i\gamma_i x_i^2 - \sum_{i=1}^{n}\sum_{j=1}^{n} 2 a_{ij} d_i G_i\, x_i x_j - \sum_{i=1}^{n}\sum_{j=1}^{n} d_i G_i^{2r_2}|b_{ij}|\, x_i^2 - \sum_{i=1}^{n}\sum_{j=1}^{n} d_j G_j^{2(1-r_2)}|b_{ji}|\, x_i^2\\
&\ge \sum_{i=1}^{n} 2 d_i\gamma_i x_i^2 - \sum_{i=1}^{n}\sum_{j=1}^{n}|a_{ij}|\, d_i\bigl(G_i^{2r_1} x_i^2 + G_i^{2(1-r_1)} x_j^2\bigr) - \sum_{i=1}^{n}\sum_{j=1}^{n} d_i G_i^{2r_2}|b_{ij}|\, x_i^2 - \sum_{i=1}^{n}\sum_{j=1}^{n} d_j G_j^{2(1-r_2)}|b_{ji}|\, x_i^2\\
&= \sum_{i=1}^{n}\Biggl(2 d_i\gamma_i - d_i\sum_{j=1}^{n}\bigl(G_i^{2r_1}|a_{ij}| + G_i^{2r_2}|b_{ij}|\bigr) - \sum_{j=1}^{n} d_j\bigl(G_j^{2(1-r_1)}|a_{ji}| + G_j^{2(1-r_2)}|b_{ji}|\bigr)\Biggr) x_i^2 > 0.
\end{aligned}
\]
The corollary then follows from Corollary 4.2. □
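Condition (4.11) is a purely scalar test, which makes it convenient when one wants to avoid eigenvalue computations altogether. A sketch (the helper name and the defaults $d_i = 1$, $r_1 = r_2 = 1/2$ are our choices):

```python
import numpy as np

def lhs_411(A, B, gamma, G, d=None, r1=0.5, r2=0.5):
    """Left-hand side of (4.11); the criterion asks for a value < 2."""
    n = A.shape[0]
    d = np.ones(n) if d is None else np.asarray(d, float)
    own = d * ((G ** (2 * r1))[:, None] * np.abs(A)
               + (G ** (2 * r2))[:, None] * np.abs(B)).sum(axis=1)
    cross = ((d * G ** (2 * (1 - r1)))[:, None] * np.abs(A)
             + (d * G ** (2 * (1 - r2)))[:, None] * np.abs(B)).sum(axis=0)
    return ((own + cross) / (gamma * d)).max()

# Illustrative data with G_i = 1 and gamma_i = 5: value 1.6 < 2 certifies GES.
A = np.array([[-1.0, -1.0], [1.0, -1.0]])
B = np.array([[1.0, 1.0], [1.0, 1.0]])
print(lhs_411(A, B, gamma=np.array([5.0, 5.0]), G=np.ones(2)))
```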
If we choose the matrices $P = I$ and $Q = qI$ as special cases of Theorem 4.5 below, we have the following corollary.

Corollary 4.4. Suppose that in system (1.2), $\dot{\tau}_{ij}(t) \le \eta < 1$ for a nonnegative constant $\eta$ (if $\dot{\tau}_{ij}(t) \le 0$, set $\eta = 0$), $0 \le \tau_{ij}(t) \le \tau$, and Assumptions A1–A3 are satisfied. If
\[
\Omega_{12} = 2\Gamma G^{-1} - A - A^{T} - \frac{1}{q(1-\eta)}[B]_\infty - q[B]_1 > 0,
\]
then the unique equilibrium point $x^*$ of neural system (1.2) is globally exponentially stable.

By constructing a different Lyapunov functional, we have the following result.

Theorem 4.5. Suppose that in system (4.2), $\dot{\tau}_{ij}(t) \le \eta < 1$ for a nonnegative constant $\eta$ (if $\dot{\tau}_{ij}(t) \le 0$, set $\eta = 0$), $0 \le \tau_{ij}(t) \le \tau$, and Assumptions A1–A3 are satisfied. The origin of neural system (4.2) is globally exponentially stable if there exist positive diagonal matrices $P = \mathrm{diag}(p_1, \ldots, p_n)$ and $Q = \mathrm{diag}(q_1, \ldots, q_n)$ such that
\[
\Omega_{22} = 2P\Gamma G^{-1} - PA - A^{T}P - \frac{1}{1-\eta}[PBQ^{-1}]_\infty - [PBQ]_1 > 0.
\]
Proof. We employ the positive definite Lyapunov functional
\[
V(z(t),t) = \varepsilon_1 V_3(z(t),t) + V_4(z(t),t), \tag{4.12}
\]
where
\[
V_3(z(t),t) = 2\sum_{i=1}^{n}\int_0^{z_i(t)}\frac{s}{\alpha_i(s)}\,ds, \qquad
V_4(z(t),t) = 2\sum_{i=1}^{n} p_i\int_0^{z_i(t)}\frac{g_i(s)}{\alpha_i(s)}\,ds + \frac{1}{1-\eta}\sum_{i=1}^{n}\sum_{j=1}^{n}\int_{t-\tau_{ij}(t)}^{t} p_i r_j|b_{ij}|\, g_j^2(z_j(v))\,dv,
\]
for positive constants $\varepsilon_1$ and $r_k$ ($k = 1, 2, \ldots, n$) to be determined later. Analogously to the proof of Theorem 4.1, we obtain
\[
\dot{V}_3(z(t),t) \le -z^{T}(t)\Gamma z(t) + M g(z(t))^{T}g(z(t)) + M\sum_{i=1}^{n}\sum_{j=1}^{n} p_i|b_{ij}|\, g_j^2(z_j(t-\tau_{ij}(t))),
\]
where $M = \max\{2\|A^{T}\Gamma^{-1}A\|_2,\ \max_{1\le i,j\le n}(2/(\gamma_i p_i))|b_{ij}|\} \ge 0$. By using the inequality $2 g_i(z_i(t))\, b_{ij}\, g_j(z_j(t-\tau_{ij}(t))) \le q_j^{-1}|b_{ij}|\, g_i^2(z_i(t)) + q_j|b_{ij}|\, g_j^2(z_j(t-\tau_{ij}(t)))$, we also have
\[
\dot{V}_4(z(t),t) \le -g(z(t))^{T}\Bigl(2P\Gamma G^{-1} - PA - A^{T}P - \frac{1}{1-\eta}[PBQ^{-1}]_\infty\Bigr)g(z(t)) + \frac{1}{1-\eta}\sum_{i=1}^{n}\sum_{j=1}^{n} p_j r_i|b_{ji}|\, g_i^2(z_i(t)) - \sum_{i=1}^{n}\sum_{j=1}^{n}(r_j - q_j)\, p_i|b_{ij}|\, g_j^2(z_j(t-\tau_{ij}(t))).
\]
Since $\Omega_{22} > 0$, there exists $\varepsilon_2 > 0$ such that $\Omega_{22} - \varepsilon_2\bigl(I + \frac{1}{1-\eta}[PB]_1\bigr) > 0$. If we define $R = \mathrm{diag}(r_1, \ldots, r_n) = Q + \varepsilon_2 I$, we can have
\[
\dot{V}_4(z(t),t) \le -\varepsilon_2\, g(z(t))^{T}g(z(t)) - \varepsilon_2\sum_{i=1}^{n}\sum_{j=1}^{n} p_i|b_{ij}|\, g_j^2(z_j(t-\tau_{ij}(t))).
\]
If we choose $\varepsilon_1 > 0$ such that $M\varepsilon_1 \le \varepsilon_2$, then $\dot{V}(z(t),t) \le -\varepsilon_1 z^{T}(t)\Gamma z(t)$. The rest of the proof is analogous to that of Theorem 4.1 and hence is omitted here. □

By choosing
\[
V_4(z(t),t) = 2\sum_{i=1}^{n} p_i\int_0^{z_i(t)}\frac{g_i(s)}{\alpha_i(s)}\,ds + \frac{1}{1-\eta}\sum_{i=1}^{n}\sum_{j=1}^{n}\int_{t-\tau_{ij}(t)}^{t} p_i r_j\, b_{ij}^2\, g_j^2(z_j(v))\,dv,
\]
we can obtain the following.

Theorem 4.6. Suppose that in system (4.2), $\dot{\tau}_{ij}(t) \le \eta < 1$ for a nonnegative constant $\eta$ (if $\dot{\tau}_{ij}(t) \le 0$, set $\eta = 0$), $0 \le \tau_{ij}(t) \le \tau$, and Assumptions A1–A3 are satisfied. The origin of neural system (4.2) is globally exponentially stable if there exist positive diagonal matrices $P$ and $Q$ such that
\[
\Omega_{33} = 2P\Gamma G^{-1} - PA - A^{T}P - nPQ^{-1} - \frac{1}{1-\eta}\, PQ\,\mathrm{diag}\Bigl(\sum_{j=1}^{n} b_{j1}^2, \sum_{j=1}^{n} b_{j2}^2, \ldots, \sum_{j=1}^{n} b_{jn}^2\Bigr) > 0.
\]
5. The Cohen–Grossberg model with distributed delays

In this section, we consider system (1.3) with initial values of the form
\[
x_i(s) = \varphi_i(s), \qquad s \in (-\infty, 0], \quad i = 1, 2, \ldots, n, \tag{5.1}
\]
where the $\varphi_i(s)$ denote real-valued continuous and bounded functions defined on $(-\infty, 0]$. Firstly, concerning the kernel functions $k_{ij}(t)$, $i, j = 1, 2, \ldots, n$, we have the following lemma due to [24].

Lemma 5.1. If Assumption A4 holds, then
\[
\int_0^{\infty} s\, k_{ij}(s)\,ds < \infty, \qquad \int_0^{\infty} k_{ij}(s)\, s e^{\varepsilon s}\,ds < \infty, \qquad \forall\, 0 < \varepsilon < \delta_0. \tag{5.2}
\]

Let $x^*$ be an equilibrium of system (1.3) and set $z(t) = x(t) - x^*$. With $\alpha_i$, $\beta_i$ and $g_i$ defined as in (4.3), system (1.3) is transformed into
\[
\dot{z}_i(t) = -\alpha_i(z_i(t))\Bigl[\beta_i(z_i(t)) - \sum_{j=1}^{n} a_{ij}\, g_j(z_j(t)) - \sum_{j=1}^{n} b_{ij}\int_0^{+\infty} k_{ij}(s)\, g_j(z_j(t-s))\,ds\Bigr], \qquad i = 1, 2, \ldots, n. \tag{5.3}
\]

Theorem 5.2. Suppose that in system (5.3) Assumptions A1–A3 are satisfied. The origin of neural system (5.3) is globally asymptotically stable if there exist positive diagonal matrices $P = \mathrm{diag}(p_1, \ldots, p_n)$ and $Q = \mathrm{diag}(q_1, \ldots, q_n)$ such that
\[
\Omega_1 = 2P\Gamma G^{-1} - PA - A^{T}P - [PQ^{-1}B]_\infty - [PQB]_1 > 0.
\]
If, in addition, Assumption A4 is satisfied, then the origin of system (5.3) is globally exponentially stable. This means that there exist positive constants $k$ and $\gamma$ such that for any solution $z(t)$ of system (5.3) with initial function $z(s) = \phi(s)$, $s \in (-\infty, 0]$, where $\phi \in C((-\infty, 0], \mathbb{R}^n)$, one has
\[
\sum_{i=1}^{n} z_i^2(t) \le \gamma e^{-kt}\sum_{i=1}^{n}\sup_{s\in(-\infty,0]}\phi_i^2(s) = \gamma e^{-kt}\|\phi\|_\infty^2.
\]
(5.4)
where V5 (z(t), t) = 2
n i=1
pi 0
zi (t)
gi (s) ds + αi (s) n
n
i=1 j=1
+∞
rj |bji |kji (s)
0
t
t−s
gi2 (zi (v)) dv ds,
for some positive constants 1 and rk (k = 1, 2, . . . , m). The positive constants 1 and rk (k = 1, 2, . . . , m) will be determined later. It is analogous to that of Theorem 4.1, we can choose 2 > 0 such that the matrix 2 − 2 AAT − 2 B∞ is a positive definite matrix. By Lemma 3.2, we gain 1 1 V˙ 1 (z(t), t) ≤ −zT (t)(2 − 2 AAT − 2 B∞ )z(t) + g(z(t))T g(z(t)) + 2 2 n
n
i=1 j=1
0
+∞
|bij |kij (s)gj2 (zj (t − s)) ds.
For the term V˙ 5 , we can also have V˙ 5 (z(t), t) ≤ −g(z(t))T (2P G−1 − PA − AT P − (PQ−1 B)∞ )g(z(t)) +
n n
rj |bji |gi2 (zi (t))
i=1 j=1 n n
−
i=1 j=1
(ri − pi qi )|bij | 0
∞
kij (s)gj2 (zj (t − s)) ds.
Since 1 > 0, there exists 3 > 0 such that 1 − 3 (I + B1 ) > 0. If we define R = diag(r1 , r2 , . . . , rn ) = PQ + 3 I which is a symmetric positive diagonal matrix, we can have
\[
\begin{aligned}
\dot{V}_5(z(t),t) &\le -g(z(t))^{T}\bigl(\Omega_1 - \varepsilon_3[B]_1\bigr)g(z(t)) - \varepsilon_3\sum_{i=1}^{n}\sum_{j=1}^{n}|b_{ij}|\int_0^{\infty} k_{ij}(s)\, g_j^2(z_j(t-s))\,ds\\
&= -g(z(t))^{T}\bigl(\Omega_1 - \varepsilon_3(I + [B]_1) + \varepsilon_3 I\bigr)g(z(t)) - \varepsilon_3\sum_{i=1}^{n}\sum_{j=1}^{n}|b_{ij}|\int_0^{\infty} k_{ij}(s)\, g_j^2(z_j(t-s))\,ds\\
&\le -\varepsilon_3\, g(z(t))^{T}g(z(t)) - \varepsilon_3\sum_{i=1}^{n}\sum_{j=1}^{n}|b_{ij}|\int_0^{\infty} k_{ij}(s)\, g_j^2(z_j(t-s))\,ds.
\end{aligned}
\]
Thus, the origin of neural system (5.3) is globally asymptotically stable. If Assumption A4 is satisfied. Let γ = λmin (2 − 2 AAT − 2 B∞ ), r = max1≤k≤n |rk |, α = min1≤i≤n αi , GM = max1≤i≤n Gi and p = max1≤i≤n pi , we consider the above V (z(t), t). Obviously, V (z(t), t) is a positive definite and radially unbounded Lyapunov functional. +∞ By virtue of (5.1), it follows that kij ≡ 0 kij (t)tet dt < ∞, ∀ ∈ (0, (δ0 /2)). Let k = max1≤i≤n kij , choose ∈ (0, (δ0 /2)) satisfying the following condition: 1 pGM − 1 γ + + RB1 kG2M < 0. α α
(5.5)
It is analogous to that of Theorem 4.1, we then have
d t pGM T t t dv(z(t), t) t 1 z (t)z(t) − 1 γ + (e V (z(t), t)) = e V (z(t), t) + e ≤e dt dt α α t n n +∞ +et rj |bji |kji (s) gi2 (zi (v)) dv ds. 0
i=1 j=1
(5.6)
t−s
by integrating both sides of (5.6) from 0 to an arbitrary positive number θ and changing the integrals, it is analogous to proof of Theorem 1 in [24] we have 0 eθ V (z(θ), θ) − V (z(0), 0) ≤ RB1 kG2M ev zT (v)z(v) dv ≡ M1 φ2∞ . −∞
So V (z(t), t) ≤ (V (z(0), 0) + M1 φ2∞ )e−t , V (z(0), 0) = 21
n 0
i=1
×
t
t−s
zi (0)
s ds + 2 pi αi (s) n
i=1
gi2 (zi (v)) dv ds ≤ +∞
where ks = max1≤i≤n,1≤j≤n { 1 T z (t)z(t) ≤ 21 α¯ n
i=1
0
0
zi (t)
∀t > 0. 0
zi (0)
(5.7) gi (s) ds + αi (s) n
n
i=1 j=1
1 pGM + + ks RB1 G2M α α
0
−∞
ri |bji |kji (s)
φ2∞ = M2 φ2∞ ,
skji (s) ds}. According to (5.4), (5.7) and the above inequality, we can obtain s ds ≤ V (z(t), t) ≤ (M1 + M2 )φ2∞ e−t , αi (s)
∀t > 0,
S. Chen et al. / Mathematics and Computers in Simulation 79 (2009) 1527–1543
1539
where α¯ = max1≤j≤n {α¯ j }, that is ¯ 1 + M2 ) α(M z ≤ φ∞ e−(/2)t . 1
(5.8)
(5.8) implies the origin of (5.3) is globally exponentially stable.
Corollary 5.3. Suppose that in systems (1.3), Assumptions A1 – A3 are satisfied. If there exist positive constants di , i = 1, 2, . . . , n, r1 ∈ [0, 1], r2 ∈ [0, 1], and the condition (4.11) holds, then the unique equilibrium point x∗ of neural system (1.3) is globally exponentially stable. 2 , G1−2r2 , . . . , G1−2r2 ), then the in Proof. Choose D = diag(d1 , d2 , . . . , dn ), P = DG and Q = diag(G1−2r 1 n 1 2 Theorem 4.1 becomes
1 = 2D − DGA − AT DG − (DGQ−1 B)∞ − (DGQB)1 . By Corollary 4.3, we have 1 > 0.Thus, The Corollary then follows from Theorem 5.2.
By constructing a differential Lyapunov functional, we have the following results Theorem 5.4. Suppose that in systems (5.3), Assumptions A1 – A3 are satisfied. The origin of neural system (5.3) is globally asymptotically stable if there exist symmetric positive diagonal matrices P = diag(p1 , p2 , . . . , pn ) and Q = diag(q1 , q2 , . . . , qn ) such that: 2 = 2P G−1 − PA − AT P − (PBQ−1 )∞ − (PBQ)1 > 0. In addition, Assumption A4 is satisfied, then the origin of system (5.3) is globally exponentially stable. Proof. We employ the following positive-definite Lyapunov functional: V (z(t), t) = 1 V1 (z(t), t) + V6 (z(t), t), where V6 (z(t), t) = 2
n i=1
pi 0
zi (t)
(5.9)
gi (s) ds + αi (s) n
n
i=1 j=1
+∞
pi rj |bij |kij (s)
0
t
t−s
gj2 (zj (v)) dv ds,
for some positive constants 1 and rk (k = 1, 2, . . . , m). The positive constants 1 and rk (k = 1, 2, . . . , m) will be determined later. It is similar to that of Theorem 5.2, we can obtain that there exist 2 > 0 such that the matrix 2 − 2 AAT − 2 (PB)∞ is a positive definite matrix and 1 V˙ 1 (z(t), t) ≤ −zT (t)(2 − 2 AAT − 2 (PB)∞ )z(t) + g(z(t))T g(z(t)) 2 n n +∞ 1 + pi |bij |kij (s)gj2 (zj (t − s)) ds. 2 0 i=1 j=1
Similarly to the proof of Theorem 5.2, there exists $\varepsilon_2 > 0$ such that the matrix $2\Gamma - \varepsilon_2 AA^{T} - \varepsilon_2[PB]_\infty$ is positive definite and
\[
\dot{V}_1(z(t),t) \le -z^{T}(t)\bigl(2\Gamma - \varepsilon_2 AA^{T} - \varepsilon_2[PB]_\infty\bigr)z(t) + \frac{1}{\varepsilon_2}\, g(z(t))^{T}g(z(t)) + \frac{1}{\varepsilon_2}\sum_{i=1}^{n}\sum_{j=1}^{n} p_i|b_{ij}|\int_0^{+\infty} k_{ij}(s)\, g_j^2(z_j(t-s))\,ds.
\]
By using the inequality $2 g_i(z_i(t))\, b_{ij}\, k_{ij}(s)\, g_j(z_j(t-s)) \le q_j^{-1}|b_{ij}|\, k_{ij}(s)\, g_i^2(z_i(t)) + q_j|b_{ij}|\, k_{ij}(s)\, g_j^2(z_j(t-s))$, we also have
\[
\dot{V}_6(z(t),t) \le -g(z(t))^{T}\bigl(2P\Gamma G^{-1} - PA - A^{T}P - [PBQ^{-1}]_\infty\bigr)g(z(t)) + \sum_{i=1}^{n}\sum_{j=1}^{n} p_j r_i|b_{ji}|\, g_i^2(z_i(t)) - \sum_{i=1}^{n}\sum_{j=1}^{n}(r_j - q_j)\, p_i|b_{ij}|\int_0^{\infty} k_{ij}(s)\, g_j^2(z_j(t-s))\,ds.
\]
S. Chen et al. / Mathematics and Computers in Simulation 79 (2009) 1527–1543
Since 2 > 0, there exists 3 > 0 such that 2 − 3 (I + (PB)1 ) > 0. If we define R = diag(r1 , r2 , . . . , rn ) = Q + 3 I, we can have ∞ n n T ˙ pi |bij | kij (s)gj2 (zj (t − s)) ds. V6 (z(t), t) ≤ −3 g(z(t)) g(z(t)) − 3 i=1 j=1
0
If we choose 1 > 0 such that 1 ≤ 2 3 , then V˙ (z(t), t) ≤ −1 z(t)T (2 − 2 AAT − 2 (PB)∞ )z(t). The rest of proof is similar to that of Theorem 5.2 and hence, is omitted here. z (t) n n ∞ t 2 2 By choosing V2 (z(t), t) = 2 ni=1 pi 0 i αgii (s) i=1 j=1 0 t−s pi rj bij kij (s)gj (zj (v)) dv ds, we can obtain. (s) ds + Theorem 5.5. Suppose that in systems (5.3), Assumptions A1 – A4 are satisfied. The origin of neural system (5.3) is globally exponentially stable if there exist symmetric positive diagonal matrices P = diag(p1 , p2 , . . . , pn ) and Q = diag(q1 , q2 , . . . , qn ) such that: ⎞ ⎛ n n n 2 2 2 ⎠ > 0. 3 = 2P G−1 − PA − AT P − nPQ−1 − PQdiag ⎝ bj1 , bj2 ,..., bjn j=1
j=1
j=1
Choosing the matrices P = I, Q = I, as a special cases of Theorem 5.4, we have the following corollary. Corollary 5.6. Suppose that in systems (1.3), Assumptions A1 – A4 are satisfied. If 2 G−1 − A − AT − B∞ − B1 > 0, then the unique equilibrium x∗ of neural system (1.3) is globally exponentially stable. Choosing the matrices P = I, Q = G, as a special cases of Theorem 5.4, we also obtain the following corollary. Corollary 5.7. Suppose that in systems (1.3), Assumptions A1 – A4 are satisfied. If 2 G−1 − A − AT − (BG−1 )∞ − (BG)1 > 0, then the unique equilibrium x∗ of neural system (1.3) is globally exponentially stable. 6. Remark and numerical example Several results on the globally exponential stability of neural networks with delays have appeared in recent years. First, comparing with the corresponding results given in [2,6,11,12,15,23–25], we find that the results obtained in this paper very strongly improve and extend those results in many aspects. In [2,12,15], the authors gave some global asymptotical (exponential) stability criteria for a class of Cohen–Grossberg neural networks with bounded activation functions and fixed delays. However, in this paper, we do not require that the activation functions to be bounded and delays are constants. In additions, the authors in [6] also removed the boundedness of activation functions, and considered Hopfield neural networks with single delay τ(t). But, the authors have not proved the existence of equilibrium point. However, in this paper, we do not assume that all delays are equal. Moreover, we have proved the existence of equilibrium point. Third, all conditions in [11] and [23] neglect the signs of entries in the connection matrix A, and thus, the difference between excitatory and inhibitory effects might be ignored. Finally, in [24] and [25], the authors have only considered the case when τij (t) = τj (t). Therefore, our results in this paper extend and improve the corresponding results in [2,6,11,12,15,23–25]. Second, we will now compare our results with those results in [11,23]. First, we restate the previous stability results:
S. Chen et al. / Mathematics and Computers in Simulation 79 (2009) 1527–1543
1541
Theorem 6.1. [11] For delayed neural networks (1.2), suppose that Assumptions A1 – A3 are satisfied. System (1.2) is globally exponentially stable If there exists constants hij , lij , h∗ij , lij∗ ∈ R, ωi > 0(i, j = 1, 2, . . . , n), r > 1 and σ > 0 such that ξi ≡ rωi αi γi − (r − 1)
n
ij )/r−1) − ωj α¯ i |aij |((r−hij )/r−1) G((r−l j
j=1
−(r − 1)
n
ωj α¯ i |bij |
n
ωj α¯ i Gljij |aij |hij
j=1 ((r−h∗ij )/r−1)
((r−lij∗ )/r−1)
Gj
j=1
−
n
l∗
ωj α¯ i Gjij |bij∗ |hij > σ, i = 1, 2, . . . , n.
j=1
Theorem 6.2 ([23]). Suppose that in system (1.3), $a_i(x) = 1$ and $b_i(x) = d_i x$ with constants $d_i > 0$, and Assumptions A1–A4 are satisfied. If
\[
\rho(M) < 1, \qquad M = (m_{ij})_{n\times n}, \qquad m_{ij} = \frac{G_j(|a_{ij}| + |b_{ij}|)}{d_i}, \tag{6.1}
\]
where $\rho(M)$ denotes the spectral radius of a square matrix $M$, then the equilibrium $x^*$ of neural system (1.3) is globally asymptotically stable.

Now we present some examples to illustrate the effectiveness of Corollaries 4.4 and 5.6.

Example 6.3. Consider the following Cohen–Grossberg neural network with discrete delays:
\[
\begin{aligned}
\frac{dx_1(t)}{dt} &= -(8+\sin(x_1(t)))\Bigl[\gamma x_1(t) + f(x_1(t)) + f(x_2(t)) - f\Bigl(x_1\Bigl(t - \tfrac{1}{3}\sin t - 1\Bigr)\Bigr) - f\Bigl(x_2\Bigl(t - \tfrac{1}{4}e^{-\sin t} - 1\Bigr)\Bigr) + 2\Bigr],\\
\frac{dx_2(t)}{dt} &= -(5+\cos(x_2(t)))\Bigl[\gamma x_2(t) - f(x_1(t)) + f(x_2(t)) - f\Bigl(x_1\Bigl(t - \tfrac{1}{4}e^{-\sin t} - 1\Bigr)\Bigr) - f\Bigl(x_2\Bigl(t - \tfrac{1}{3}\sin t - 1\Bigr)\Bigr) + 3\Bigr],
\end{aligned}\tag{6.2}
\]
where $f(x) = \frac{1}{2}x + \frac{1}{2}\tanh(x)$. In this case, $\underline{\alpha}_1 = 7$, $\underline{\alpha}_2 = 4$, $\bar{\alpha}_1 = 9$, $\bar{\alpha}_2 = 6$, $\tau_{11}(t) = \tau_{22}(t) = \frac{1}{3}\sin t + 1$, $\tau_{12}(t) = \tau_{21}(t) = \frac{1}{4}e^{-\sin t} + 1$, $\Gamma = \gamma I$, $G = I$,
\[
A = \begin{pmatrix} -1 & -1 \\ 1 & -1 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \qquad \eta = \frac{1}{3}.
\]
Thus the $\Omega_{12}$ in Corollary 4.4 becomes
\[
\Omega_{12} = \begin{pmatrix} 2\gamma + 2 - \dfrac{3}{q} - 2q & 0 \\ 0 & 2\gamma + 2 - \dfrac{3}{q} - 2q \end{pmatrix}.
\]
If $\gamma > \sqrt{6} - 1$, there always exists a positive constant $q$ such that $2\gamma + 2 - 3/q - 2q > 0$. Hence the conditions of Corollary 4.4 hold for $\gamma > \sqrt{6} - 1$.

Now let us check the conditions of Theorem 6.1 for the same network parameters. In this case, for all $h_{ij}, l_{ij}, h_{ij}^{*}, l_{ij}^{*} \in \mathbb{R}$, $\omega_i > 0$ ($i, j = 1, 2$) and $r > 1$, we have
\[
\xi_1 = (7\gamma - 18)\, r\omega_1 - 18\, r\omega_2, \qquad \xi_2 = (4\gamma - 12)\, r\omega_2 - 12\, r\omega_1.
\]
In order to ensure the positivity of $\xi_1$ and $\xi_2$, $\gamma$ must be chosen such that $\gamma > 3$. Therefore, it can be concluded that for $\sqrt{6} - 1 < \gamma \le 3$ the conditions of Theorem 6.1 are not satisfied, whereas the conditions of Corollary 4.4 still hold.
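The gap between the two criteria can be confirmed numerically by scanning $q$; a short sketch (plain NumPy, variable names ours):

```python
import numpy as np

gamma = np.sqrt(6) - 1 + 1e-3   # just above sqrt(6) - 1, far below 3
q = np.linspace(0.1, 3.0, 2901)
vals = 2 * gamma + 2 - 3 / q - 2 * q        # diagonal entry of Omega_12
print(vals.max() > 0, q[vals.argmax()])     # True, maximizer near sqrt(3/2)

# Theorem 6.1 needs xi_1, xi_2 > 0, which in particular forces 7*gamma > 18
# and 4*gamma > 12, i.e. gamma > 3; at gamma ~ 1.45 both fail.
print(7 * gamma - 18 > 0, 4 * gamma - 12 > 0)   # False False
```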
Example 6.4. Consider the following Cohen–Grossberg neural network with distributed delays:
\[
\begin{aligned}
\frac{dx_1(t)}{dt} &= -\gamma x_1(t) - f(x_1(t)) - f(x_2(t)) + \int_{-\infty}^{t} 2e^{-2(t-s)} f(x_1(s))\,ds + \int_{-\infty}^{t} e^{-(t-s)} f(x_2(s))\,ds + 2,\\
\frac{dx_2(t)}{dt} &= -\gamma x_2(t) + f(x_1(t)) - f(x_2(t)) - \int_{-\infty}^{t} e^{-(t-s)} f(x_1(s))\,ds - \int_{-\infty}^{t} 2e^{-2(t-s)} f(x_2(s))\,ds + 3,
\end{aligned}\tag{6.3}
\]
where $f(x) = \frac{1}{2}x + \frac{1}{2}\tanh(x)$. In this case, $k_{11}(t) = k_{22}(t) = 2e^{-2t}$, $k_{12}(t) = k_{21}(t) = e^{-t}$, $\Gamma = \gamma I$, $G = I$,
\[
A = \begin{pmatrix} -1 & -1 \\ 1 & -1 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix}.
\]
Thus the condition in Corollary 5.6 becomes
\[
2\Gamma G^{-1} - A - A^{T} - [B]_\infty - [B]_1 = \begin{pmatrix} 2\gamma - 2 & 0 \\ 0 & 2\gamma - 2 \end{pmatrix} > 0,
\]
so the condition of Corollary 5.6 holds for $\gamma > 1$. Now let us check the conditions of Theorem 6.2 for the same network parameters. In this case, we have
\[
M = \begin{pmatrix} \dfrac{2}{\gamma} & \dfrac{2}{\gamma} \\[4pt] \dfrac{2}{\gamma} & \dfrac{2}{\gamma} \end{pmatrix}, \qquad \rho(M) = \frac{4}{\gamma}.
\]
In order to ensure that $\rho(M) < 1$, $\gamma$ must be chosen such that $\gamma > 4$. Therefore, it can be concluded that for $1 < \gamma \le 4$ the conditions of Theorem 6.2 are not satisfied, whereas the condition of Corollary 5.6 still holds.
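Both tests in Example 6.4 reduce to one-line computations, which makes the comparison window explicit; a sketch (assumptions as in (6.3)):

```python
import numpy as np

A = np.array([[-1.0, -1.0], [1.0, -1.0]])
B = np.array([[1.0, 1.0], [-1.0, -1.0]])

for gamma in (0.5, 1.5, 4.5):
    # Corollary 5.6 (Gamma = gamma*I, G = I): 2*gamma*I - A - A^T - [B]_inf - [B]_1
    C = (2 * gamma * np.eye(2) - A - A.T
         - np.diag(np.abs(B).sum(axis=1)) - np.diag(np.abs(B).sum(axis=0)))
    ok_56 = np.all(np.linalg.eigvalsh(C) > 0)
    # Theorem 6.2: rho(M) with m_ij = (|a_ij| + |b_ij|) / gamma
    rho = np.abs(np.linalg.eigvals((np.abs(A) + np.abs(B)) / gamma)).max()
    print(gamma, ok_56, rho < 1)
# Output: 0.5 fails both; 1.5 satisfies Corollary 5.6 only; 4.5 satisfies both.
```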
7. Conclusion

In this paper, some criteria for the global exponential stability of a class of Cohen–Grossberg neural networks with discrete delays and with distributed delays have been derived. We have also shown that the neuronal input–output activation functions only need to satisfy Assumption A3 given in this paper; they need not be differentiable, strictly monotonically increasing, or bounded, as usually required by other analysis methods. The criteria take the differences between excitatory and inhibitory effects on units into account and thereby extend some existing results in the literature. The new stability conditions are stated in simple algebraic forms, so that their verification and application are straightforward and convenient. Comparisons between our results and previous ones have also been made; they show that our results establish a new set of global exponential stability criteria for delayed neural networks, and that these conditions are less restrictive than those given in the earlier references.

Acknowledgements

The authors would like to thank the anonymous reviewers and the editor for their constructive comments, which led to a truly significant improvement of the manuscript. This work was supported by the Natural Science Foundation of China under Grant 60602038 and the China Postdoctoral Science Foundation under Grant 20060400239.

References

[1] J. Cao, J. Liang, Boundedness and stability for Cohen–Grossberg neural networks with time-varying delays, J. Math. Anal. Appl. 296 (2) (2004) 665–685.
[2] J. Cao, X. Li, Stability in delayed Cohen–Grossberg neural networks: LMI optimization approach, Phys. D: Nonlinear Phenom. 212 (1/2) (2005) 54–65.
[3] J. Cao, J. Wang, Global asymptotic and robust stability of recurrent neural networks with time delays, IEEE Trans. Circ. Syst. I 52 (2) (2005) 417–426.
[4] J. Cao, Q. Song, Stability in Cohen–Grossberg type BAM neural networks with time-varying delays, Nonlinearity 19 (7) (2006) 1601–1617.
[5] M.A. Cohen, S. Grossberg, Absolute stability and global pattern formation and parallel memory storage by competitive neural networks, IEEE Trans. Syst. Man Cybern. 13 (7) (1983) 815–821.
[6] T. Ensari, S. Arik, Global stability of a class of neural networks with time-varying delays, IEEE Trans. Circ. Syst. II 52 (3) (2005) 126–130.
[7] M. Forti, A. Tesi, New conditions for global stability of neural networks with application to linear and quadratic programming problems, IEEE Trans. Circ. Syst. I 42 (7) (1995) 354–366.
[8] K. Gopalsamy, X. He, Delay-independent stability in bidirectional associative memory networks, IEEE Trans. Neural Netw. 5 (1994) 998–1002.
[11] H. Jiang, J. Cao, Z. Teng, Dynamics of Cohen–Grossberg neural networks with time-varying delays, Phys. Lett. A 354 (5/6) (2006) 414–422.
[12] X. Liao, C. Li, K. Wong, Criteria for exponential stability of Cohen–Grossberg neural networks, Neural Netw. 17 (2004) 1401–1414.
[13] C. Marcus, R. Westervelt, Stability of analog neural networks with delay, Phys. Rev. A 39 (1989) 347–359.
[14] S. Mohamad, K. Gopalsamy, Dynamics of a class of discrete-time neural networks and their continuous-time counterparts, Math. Comput. Simul. 53 (2000) 1–39.
[15] Z. Orman, S. Arik, New results for global stability of Cohen–Grossberg neural networks with discrete time delays, Lect. Notes Comput. Sci. 4232 (2006) 570–579.
[16] E. Sanchez, E. Perez, J. Perez, Input-to-state stability (ISS) analysis for dynamic neural networks, IEEE Trans. Circ. Syst. I 46 (11) (1999) 1395–1398.
[17] J. Sun, L. Wan, Global exponential stability and periodic solutions of Cohen–Grossberg neural networks with continuously distributed delays, Physica D 208 (2005) 1–20.
[18] D. Tank, J. Hopfield, Simple neural optimization networks: an A/D converter, signal decision circuit and a linear programming circuit, IEEE Trans. Circ. Syst. I 33 (5) (1986) 533–541.
[19] P. van den Driessche, X. Zou, Global attractivity in delayed Hopfield neural network models, SIAM J. Appl. Math. 58 (6) (1998) 1878–1890.
[20] L. Wang, Stability of Cohen–Grossberg neural networks with distributed delays, Appl. Math. Comput. 160 (2005) 93–110.
[21] H. Ye, A. Michel, K. Wang, Global stability and local stability of Hopfield neural networks with delays, Phys. Rev. E 50 (5) (1994) 4206–4213.
[22] Q. Zhang, X. Wei, J. Xu, Global exponential stability of Hopfield neural networks with continuously distributed delays, Phys. Lett. A 315 (2003) 431–436.
[23] H. Zhao, Global asymptotic stability of Hopfield neural network involving distributed delays, Neural Netw. 17 (2004) 47–53.
[24] W. Zhao, H. Zhang, Global stability of Cohen–Grossberg neural networks with distributed delays, Lect. Notes Comput. Sci. 4232 (2006) 580–590.
[25] W. Zhao, Global exponential stability analysis of Cohen–Grossberg neural network with time-varying delays, Comm. Nonlinear Sci. Numer. Simul. 13 (5) (2008) 847–856.