
Stability Analysis for Neural Networks With Time-Varying Delay Based on Quadratic Convex Combination

Huaguang Zhang, Senior Member, IEEE, Feisheng Yang, Xiaodong Liu, and Qingling Zhang

Abstract— In this paper, a novel method is developed for the stability problem of a class of neural networks with time-varying delay. New delay-dependent stability criteria in terms of linear matrix inequalities for recurrent neural networks with time-varying delay are derived from the newly proposed augmented simple Lyapunov–Krasovskii functional. Different from previous results obtained using the first-order convex combination property, our derivation applies the idea of second-order convex combination and the property of quadratic convex functions, which is given in the form of a lemma, without resorting to Jensen's inequality. A numerical example is provided to verify the effectiveness and superiority of the presented results.

Index Terms— Quadratic convex combination, recurrent neural network (RNN), stability analysis, time-varying delay.

I. INTRODUCTION

NEURAL networks belong to a special class of nonlinear dynamical systems, and much attention has been paid to them in the past few decades because of their extensive applications, such as in pattern recognition [1], combinatorial optimization [2], associative memory [3], and so forth. It is sometimes desirable to introduce delays into neural networks when dealing with problems associated with motions [4]. On the other hand, since neural networks are usually implemented by different hardware circuits (analog, digital, or even very large scale integrated circuits), time delay is inevitably encountered in their electronic implementation due to the finite switching speed of amplifiers when information is processed


and signals are transmitted; this is frequently a source of oscillation or instability in neural networks [5]. Therefore, the stability of neural networks with delay has become a topic of great theoretical and practical importance. There exist many results on stability issues of recurrent neural networks (RNNs), either with constant delay [6]–[12] or with time-varying delay [13]–[30]. The stability criteria in those papers on neural networks with time-varying delay are based on several kinds of inequalities, such as the norm inequality [13], the generalized Halanay inequality [14], and the linear matrix inequality (LMI) [15]–[29]. Among these stability conditions, criteria in LMI form are easy to test with convex optimization software, and thus the LMI-based approach has become a popular and powerful tool for treating the stability problems of neural networks with time-varying delays.

Generally speaking, stability criteria for neural networks with time delay fall into two categories: delay-independent, e.g., [3], [11], [25], [27], and delay-dependent, e.g., [7], [15]–[17], [19], [22], [23], [29]. Since delay-dependent criteria tend to be less conservative than delay-independent ones, in particular when the delay is small, many studies have been devoted to the delay-dependent category in recent years.

In this paper, our objective is to obtain new delay-dependent stability criteria for a class of RNNs with time-varying delay by choosing a new augmented Lyapunov–Krasovskii (L–K) functional and estimating its derivative tightly from a novel viewpoint. Our results employ the quadratic convex combination technique, which is different from the linear convex combination and the inverse convex combination extensively used in the recent literature on systems with time-varying delay (see [31] for the former and [23], [32], [33] for the latter). A remarkable feature of our methodology is that we resort to neither Jensen's inequality with the delay-dividing approach nor the Leibniz–Newton formula with the free-weighting matrix method. To the best of our knowledge, this is the first time the stability problem for neural networks with time-varying delay is investigated via this new nonlinear convex combination idea, i.e., the quadratic convex combination, combined with the newly proposed augmented L–K functional. Compared with existing relevant results, the criteria in this paper not only lead to less conservative stability conditions, thanks to the augmented terms and multiple integrals up to quadruple, but also carry a smaller computational burden, since our theoretical


proof does not involve any delay-decomposition or free-weighting matrix method. The validity and reduced conservatism of the proposed methodology are confirmed on a widely tested example.

Notations: In the following, C = [c_{ij}]_{m×n} denotes an m × n real matrix, and C^T represents the transpose of C. M > 0 (M < 0) denotes that M is a symmetric positive (negative) definite matrix; M ≥ 0 (M ≤ 0) denotes that M is a symmetric positive (negative) semidefinite matrix. The identity and zero matrices of appropriate dimensions are denoted by I and 0, respectively. ℝ^n denotes the n-dimensional Euclidean space. Sym(M) is defined as Sym(M) = M + M^T. The notation ∗ in a block matrix always represents the block entry induced by symmetry.

II. PROBLEM FORMULATION

Consider the following RNN with a time-varying delay:

$$\dot{z}(t) = -Dz(t) + Af(z(t)) + Bf(z(t - d(t))) + U$$
$$z(t) = \phi(t), \quad \forall t \in [-h, 0] \tag{1}$$
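For intuition, system (1) can be simulated directly. The following minimal sketch (not from the paper) integrates a two-neuron instance by forward Euler, with hypothetical parameters D, A, B, U, a tanh activation, and a delay d(t) = 0.5 + 0.4 sin t that respects the bounds (2)–(3) introduced below (h = 0.9, μ = 0.4):

```python
import numpy as np

# Forward-Euler simulation of the delayed RNN (1); all parameter values are
# hypothetical and chosen only for illustration.
n, dt, T = 2, 1e-3, 10.0
D = np.diag([1.5, 1.2])                                # d_i > 0
A = np.array([[0.2, -0.1], [0.1, 0.3]])                # connection weights
B = np.array([[-0.4, 0.2], [0.1, -0.3]])               # delayed connection weights
U = np.array([0.1, -0.2])                              # bias vector
f = np.tanh                                            # activation function
h = 0.9                                                # delay upper bound
steps, hist = int(T / dt), int(h / dt)
z = np.zeros((steps + hist, n))
z[:hist + 1] = 0.3                                     # constant initial function phi(t)
for k in range(hist, steps + hist - 1):
    t = (k - hist) * dt
    d = 0.5 + 0.4 * np.sin(t)                          # time-varying delay, 0 <= d(t) <= h
    zd = z[k - int(round(d / dt))]                     # delayed state z(t - d(t))
    z[k + 1] = z[k] + dt * (-D @ z[k] + A @ f(z[k]) + B @ f(zd) + U)
print(z[-1])                                           # final state of the trajectory
```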

where z(·) = [z_1(·), z_2(·), …, z_n(·)]^T is the neuron state vector, f(z(·)) = [f_1(z_1(·)), f_2(z_2(·)), …, f_n(z_n(·))]^T denotes the neuron activation function, and U = [U_1, U_2, …, U_n]^T is a bias value vector. D = diag(d_1, d_2, …, d_n) is a diagonal matrix with d_i > 0, i = 1, 2, …, n. A and B are the connection weight matrix and the delayed connection weight matrix, respectively. The initial condition φ(t) is a continuous and differentiable vector-valued function on t ∈ [−h, 0]. The time delay d(t) is a differentiable function that satisfies

$$0 \le d(t) \le h \tag{2}$$
$$\dot{d}(t) \le \mu \tag{3}$$

where h > 0, and μ < 1 makes networks with time-varying delay well posed, because the fast time-varying delay case causes problems with causality, minimality, and inconsistency, as indicated by Verriest in [34]; this restriction is therefore a reasonable and necessary assumption. In addition, it is assumed that each neuron activation function in system (1), f_i(·), i = 1, 2, …, n, satisfies the following condition:

$$\underline{l}_i \le \frac{f_i(u) - f_i(v)}{u - v} \le \bar{l}_i \tag{4}$$

where u, v ∈ ℝ, u ≠ v, and the constants l̲_i and l̄_i can take any real values, i = 1, 2, …, n, as has been done previously, e.g., in [21], [35], [36]. This makes the activation functions more general than the nonnegative sigmoidal functions used in [23], so that more general RNNs are characterized.

Let the equilibrium point of system (1) be denoted by z* = [z_1* z_2* … z_n*]^T. Defining x_i(·) = z_i(·) − z_i*, system (1) can then be transformed into the following form:

$$\dot{x}(t) = -Dx(t) + Ag(x(t)) + Bg(x(t - d(t)))$$
$$x(t) = \varphi(t), \quad \forall t \in [-h, 0] \tag{5}$$

where x(·) = [x_1(·), x_2(·), …, x_n(·)]^T is the state vector of the transformed system, the initial condition ϕ(t) = φ(t) − z*, g(x(t)) = [g_1(x_1(t)), g_2(x_2(t)), …, g_n(x_n(t))]^T, and g_i(x_i(t)) = f_i(x_i(t) + z_i*) − f_i(z_i*), i = 1, 2, …, n. The functions g_i(·), i = 1, 2, …, n, satisfy the following condition:

$$\underline{l}_i \le \frac{g_i(x_i)}{x_i} \le \bar{l}_i \quad \text{if } x_i \ne 0, \qquad g_i(x_i) = 0 \quad \text{if } x_i = 0. \tag{6}$$

III. MAIN RESULTS

Notice that a double-fold integral can be rewritten as a single integral multiplied by a scalar function:

$$\int_{-h}^{0} \int_{t+\eta}^{t} k(s)\, ds\, d\eta = \int_{t-h}^{t} (h - t + s)\, k(s)\, ds.$$

This fact is proven in Appendix A.
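As an independent sanity check (separate from the proof in Appendix A), the identity is easy to verify numerically for an arbitrary integrand; a minimal sketch with SciPy, using hypothetical values of h, t, and k:

```python
import numpy as np
from scipy.integrate import quad, dblquad

h, t = 1.5, 2.0                                   # hypothetical constants
k = lambda s: np.exp(-s) * np.sin(3 * s)          # arbitrary test integrand

# LHS: outer variable eta in [-h, 0], inner variable s in [t + eta, t].
lhs, _ = dblquad(lambda s, eta: k(s), -h, 0, lambda eta: t + eta, lambda eta: t)
# RHS: single integral weighted by the scalar function (h - t + s).
rhs, _ = quad(lambda s: (h - t + s) * k(s), t - h, t)

print(abs(lhs - rhs) < 1e-9)                      # True
```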

We are motivated to propose the following new simple L–K functional for deriving the main results, based on the idea that adopting new augmented variables (not division variables), cross terms of variables, and multifold integral terms may reduce conservatism. A similar argument, that an L–K functional containing some triple-integral terms is very effective in reducing conservatism, can be found in [37]. Choose the L–K functional candidate as follows:

$$V(x(t)) = \sum_{i=1}^{4} V_i(x(t)) \tag{7}$$

where

$$V_1(x(t)) = \eta_1^T(t) P \eta_1(t)$$
$$V_2(x(t)) = 2\sum_{i=1}^{n} \lambda_i \int_0^{x_i(t)} g_i(s)\, ds$$
$$V_3(x(t)) = \int_{t-d(t)}^{t} \big[\eta_2^T(t,s) Q_1 \eta_2(t,s) + g^T(x(s)) S g(x(s))\big]\, ds$$

and

$$V_4(x(t)) = \int_{t-h}^{t} \big[\eta_2^T(t,s) Q_2 \eta_2(t,s) + (h-t+s)\,\eta_3^T(s) Q_3 \eta_3(s) + (h-t+s)^2\, \dot{x}^T(s) R_1 \dot{x}(s) + (h-t+s)^3\, \dot{x}^T(s) R_2 \dot{x}(s)\big]\, ds$$

where

$$\eta_1^T(t) = \Big[x^T(t)\ \ \int_{t-h}^{t} x^T(s)\, ds\Big], \qquad \eta_2^T(t,s) = \big[x^T(t)\ \ x^T(s)\big], \qquad \eta_3^T(t) = \big[x^T(t)\ \ \dot{x}^T(t)\big]$$

and P > 0, Λ = diag(λ_1, λ_2, …, λ_n) > 0, Q_i > 0, i = 1, 2, 3; R_j > 0, j = 1, 2; and S > 0.

Remark 1: Compared with the functionals in the literature, our newly constructed functional has three differences: 1) an independent augmented variable ∫_{t−h}^{t} x^T(s) ds; 2) the cross terms between the entries in η_1^T(t), η_2^T(t,s), and η_3^T(s), respectively; and 3) quadratic terms multiplied by the first, second, and third powers of the scalar function h − t + s, where increasing the degree of h − t + s by 1 corresponds to increasing the number of folds of the integral by 1.
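Point 3) of Remark 1 reflects a family of rewriting identities: each additional fold of integration raises the power of h − t + s by one, up to a factorial constant. A hedged numerical check of the triple-integral case, in the same style as the earlier sketch (the specific identity below is our illustration, not a display from the paper):

```python
import numpy as np
from scipy.integrate import quad, tplquad

h, t = 1.5, 2.0                                   # hypothetical constants
k = lambda s: np.exp(-s) * np.sin(3 * s)

# LHS: eta in [-h, 0], theta in [eta, 0], s in [t + theta, t].
lhs, _ = tplquad(lambda s, theta, eta: k(s), -h, 0,
                 lambda eta: eta, lambda eta: 0,
                 lambda eta, theta: t + theta, lambda eta, theta: t)
# RHS: single integral weighted by (h - t + s)^2 / 2.
rhs, _ = quad(lambda s: 0.5 * (h - t + s) ** 2 * k(s), t - h, t)

print(abs(lhs - rhs) < 1e-8)                      # True
```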


Before giving the main theorem, we present the following important lemmas, which will be used in the proof to derive the stability conditions of the delayed neural networks.

Lemma 1: Let W > 0 and let ω(s) be a vector of appropriate dimension. Then, for any scalar function β(s) ≥ 0, ∀s ∈ [t_1, t_2], the following facts hold:

1) $-\int_{t_1}^{t_2} \omega^T(s) W \omega(s)\, ds \le (t_2 - t_1)\, \xi_t^T F_1^T W^{-1} F_1 \xi_t + 2 \xi_t^T F_1^T \int_{t_1}^{t_2} \omega(s)\, ds$;

2) $-\int_{t_1}^{t_2} \beta(s)\, \omega^T(s) W \omega(s)\, ds \le \int_{t_1}^{t_2} \beta(s)\, ds\ \xi_t^T F_2^T W^{-1} F_2 \xi_t + 2 \xi_t^T F_2^T \int_{t_1}^{t_2} \beta(s)\, \omega(s)\, ds$;

3) $-\int_{t_1}^{t_2} \beta^2(s)\, \omega^T(s) W \omega(s)\, ds \le (t_2 - t_1)\, \xi_t^T F_3^T W^{-1} F_3 \xi_t + 2 \xi_t^T F_3^T \int_{t_1}^{t_2} \beta(s)\, \omega(s)\, ds$

where the matrices F_i (i = 1, 2, 3) and the vector ξ_t are arbitrary ones of appropriate dimensions, independent of the integration variable.

Proof: See Appendix B. ∎
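Since each bound in Lemma 1 holds for arbitrary F_i and ξ_t, it can be probed numerically with random choices. A minimal sketch of fact 1) (random data, trapezoidal quadrature; purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 4
W = 2.0 * np.eye(n)                                # W > 0
F1 = rng.standard_normal((n, m))                   # arbitrary F_1
xi = rng.standard_normal(m)                        # arbitrary xi_t
t1, t2, N = 0.0, 1.0, 4001
s = np.linspace(t1, t2, N)
omega = np.stack([np.sin(3 * s), np.cos(s), s**2]) # omega(s), shape (n, N)

lhs = -np.trapz(np.einsum('in,ij,jn->n', omega, W, omega), s)
rhs = ((t2 - t1) * xi @ F1.T @ np.linalg.inv(W) @ F1 @ xi
       + 2 * xi @ F1.T @ np.trapz(omega, s, axis=1))
assert lhs <= rhs + 1e-6                           # fact 1) of Lemma 1
```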

Lemma 2 (Schur Complement): Let X be a symmetric matrix given by

$$X = \begin{bmatrix} X_{11} & X_{12} \\ * & X_{22} \end{bmatrix}.$$

Then the following three matrix inequalities are equivalent:

1) X < 0;
2) X_{11} < 0 and X_{22} − X_{12}^T X_{11}^{-1} X_{12} < 0;
3) X_{22} < 0 and X_{11} − X_{12} X_{22}^{-1} X_{12}^T < 0.

Lemma 3: For symmetric matrices Z_0, Z_1, a positive semidefinite matrix Z_2 ≥ 0, and any vector ξ_t, a necessary and sufficient condition for

$$\xi_t^T \big(Z_0 + d(t) Z_1 + d^2(t) Z_2\big) \xi_t < 0, \quad \forall d(t) \in [0, h]$$

is that the inequality holds at the two endpoints d(t) = 0 and d(t) = h, since the left-hand side is a convex quadratic function of d(t) and therefore attains its maximum over [0, h] at an endpoint.
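Lemma 3 is the quadratic (second-order) convex combination idea in miniature: a convex quadratic is maximized at an interval endpoint. A quick randomized check (hypothetical sizes and h; illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
m, h = 5, 2.0
sym = lambda M: (M + M.T) / 2

Z0, Z1 = sym(rng.standard_normal((m, m))), sym(rng.standard_normal((m, m)))
G = rng.standard_normal((m, m)); Z2 = G @ G.T      # Z2 >= 0
xi = rng.standard_normal(m)

f = lambda d: xi @ (Z0 + d * Z1 + d**2 * Z2) @ xi  # convex quadratic in d
vals = [f(d) for d in np.linspace(0.0, h, 1001)]
assert max(vals) <= max(f(0.0), f(h)) + 1e-9       # endpoint dominance
```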

TABLE I
DIMENSIONS OF MATRICES CONCERNED IN THEOREM 1

Matrices (Nonzero Ones)     | Dimensions (Row by Column)
P                           | 2n × 2n
Q_i, i = 1, 2, 3            | 2n × 2n
R_j, j = 1, 2               | n × n
S                           | n × n
Λ                           | n × n
W_k, k = 1, 2               | n × n
F_l, l = 1, 4               | 2n × 7n
F_m, m = 2, 3, 5, 6         | n × 7n
A_c                         | 7n × n
E_n, n = 1, 2, …, 7         | 7n × n
L̲                           | n × n
L̄                           | n × n

Theorem 1: For given scalars h > 0 and μ, the origin of system (5) with (6) and a time-varying delay satisfying conditions (2) and (3) is globally asymptotically stable if there exist matrices P > 0, S > 0, Q_i > 0, i = 1, 2, 3; R_j > 0, j = 1, 2, and Λ = diag(λ_1, λ_2, …, λ_n) > 0, W_1 = diag(w_{11}, w_{12}, …, w_{1n}) > 0, W_2 = diag(w_{21}, w_{22}, …, w_{2n}) > 0, and F_i, i = 1, 2, …, 6, such that the following set of inequalities hold:

$$\Xi_1 = \begin{bmatrix} \Omega_0 & * & * & * \\ hF_1 & -hQ_3 & * & * \\ hF_2 & 0 & -R_1 & * \\ 3hF_3 & 0 & 0 & -3hR_2 \end{bmatrix} < 0, \qquad \Xi_2 = \begin{bmatrix} \Omega_0 + h\Omega_1 & * & * & * \\ hF_4 & -hQ_3 & * & * \\ hF_5 & 0 & -R_1 & * \\ 3hF_6 & 0 & 0 & -3hR_2 \end{bmatrix} < 0 \tag{8}$$

where

$$\begin{aligned} \Omega_0 =\ & \operatorname{Sym}\big([E_1, E_4+E_5] P [A_c, E_1-E_3]^T + E_6 \Lambda A_c^T\big) + E_6 S E_6^T - (1-\mu) E_7 S E_7^T \\ & + [E_1, E_1] Q_1 [E_1, E_1]^T - (1-\mu)[E_1, E_2] Q_1 [E_1, E_2]^T + \operatorname{Sym}\big([A_c, 0] Q_1 [0, E_5]^T\big) \\ & + [E_1, E_1] Q_2 [E_1, E_1]^T - [E_1, E_3] Q_2 [E_1, E_3]^T + \operatorname{Sym}\big([A_c, 0] Q_2 [hE_1, E_4+E_5]^T\big) \\ & + h [E_1, A_c] Q_3 [E_1, A_c]^T + A_c (h^2 R_1 + h^3 R_2) A_c^T \\ & + \operatorname{Sym}\big(F_1^T [E_4, E_2-E_3]^T + F_4^T [E_5, E_1-E_2]^T\big) \\ & + \operatorname{Sym}\big((hE_2 - E_4)(2F_2 + 3F_3)\big) - \operatorname{Sym}\big(E_5 (2F_5 + 3F_6)\big) \\ & - 2 E_6 W_1 E_6^T + \operatorname{Sym}\big(E_6 W_1 \bar{L} E_1^T + E_1 \underline{L} W_1 E_6^T\big) - 2 E_1 \underline{L} W_1 \bar{L} E_1^T \\ & - 2 E_7 W_2 E_7^T + \operatorname{Sym}\big(E_7 W_2 \bar{L} E_2^T + E_2 \underline{L} W_2 E_7^T\big) - 2 E_2 \underline{L} W_2 \bar{L} E_2^T \end{aligned}$$

$$\Omega_1 = \operatorname{Sym}\big([A_c, 0] Q_1 [E_1, 0]^T\big) - \operatorname{Sym}\big(E_2 (2F_2 + 3F_3)\big) + \operatorname{Sym}\big(E_1 (2F_5 + 3F_6)\big)$$

with

$$A_c = [-D, 0, 0, 0, 0, A, B]^T, \qquad E_i = [0_{n\times(i-1)n},\ I_n,\ 0_{n\times(7-i)n}]^T,\ i = 1, 2, \ldots, 7$$
$$\underline{L} = \operatorname{diag}\{\underline{l}_1, \underline{l}_2, \ldots, \underline{l}_n\}, \qquad \bar{L} = \operatorname{diag}\{\bar{l}_1, \bar{l}_2, \ldots, \bar{l}_n\}.$$

Proof: First, define a vector ξ_t ∈ ℝ^{7n} as

$$\xi_t^T = \Big[ x^T(t),\ x^T(t-d(t)),\ x^T(t-h),\ \int_{t-h}^{t-d(t)} x^T(s)\, ds,\ \int_{t-d(t)}^{t} x^T(s)\, ds,\ g^T(x(t)),\ g^T(x(t-d(t))) \Big].$$
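The E_i are block-selector matrices: E_i^T ξ_t extracts the i-th n-dimensional block of the vector ξ_t just defined. A minimal sketch of the definition (hypothetical n; illustration only):

```python
import numpy as np

def E(i, n, blocks=7):
    # E_i = [0_{n x (i-1)n}, I_n, 0_{n x (blocks-i)n}]^T, of size (blocks*n) x n
    M = np.zeros((blocks * n, n))
    M[(i - 1) * n : i * n, :] = np.eye(n)
    return M

n = 2
xi = np.arange(7 * n, dtype=float)                 # stand-in for xi_t
assert np.allclose(E(2, n).T @ xi, xi[n:2 * n])    # picks the x(t - d(t)) block
```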


Calculating the time derivatives of V1 (x(t)), V2 (x(t)), V3 (x(t)), and V4 (x(t)) along the trajectories of system (5) yields

$$\begin{aligned} \dot{V}_1(x(t)) &= 2\eta_1^T(t) P \dot{\eta}_1(t) = 2\eta_1^T(t) P \begin{bmatrix} -Dx(t) + Ag(x(t)) + Bg(x(t-d(t))) \\ x(t) - x(t-h) \end{bmatrix} \\ &= 2\xi_t^T [E_1, E_4+E_5] P [A_c, E_1-E_3]^T \xi_t = \xi_t^T \operatorname{Sym}\big([E_1, E_4+E_5] P [A_c, E_1-E_3]^T\big) \xi_t \end{aligned} \tag{9}$$

$$\begin{aligned} \dot{V}_2(x(t)) &= 2\sum_{i=1}^{n} \lambda_i g_i(x_i(t)) \dot{x}_i(t) = 2 g^T(x(t)) \Lambda \dot{x}(t) \\ &= -2 g^T(x(t)) \Lambda D x(t) + 2 g^T(x(t)) \Lambda A g(x(t)) + 2 g^T(x(t)) \Lambda B g(x(t-d(t))) = 2 \xi_t^T E_6 \Lambda A_c^T \xi_t \end{aligned} \tag{10}$$

$$\begin{aligned} \dot{V}_3(x(t)) =\ & g^T(x(t)) S g(x(t)) + \eta_2^T(t,t) Q_1 \eta_2(t,t) - (1 - \dot{d}(t))\, g^T(x(t-d(t))) S g(x(t-d(t))) \\ & - (1 - \dot{d}(t))\, \eta_2^T(t, t-d(t)) Q_1 \eta_2(t, t-d(t)) + 2\int_{t-d(t)}^{t} \frac{\partial \eta_2^T(t,s)}{\partial t}\, Q_1\, \eta_2(t,s)\, ds \\ =\ & g^T(x(t)) S g(x(t)) + [x^T(t), x^T(t)] Q_1 [x^T(t), x^T(t)]^T - (1 - \dot{d}(t))\, g^T(x(t-d(t))) S g(x(t-d(t))) \\ & - (1 - \dot{d}(t))\, [x^T(t), x^T(t-d(t))] Q_1 \begin{bmatrix} x(t) \\ x(t-d(t)) \end{bmatrix} + 2 [\dot{x}^T(t), 0]\, Q_1 \int_{t-d(t)}^{t} [x^T(t), x^T(s)]^T ds \\ \le\ & \xi_t^T \big( E_6 S E_6^T + (\mu - 1) E_7 S E_7^T + [E_1, E_1] Q_1 [E_1, E_1]^T + 2 [A_c, 0] Q_1 [d(t) E_1, E_5]^T \\ & - (1-\mu) [E_1, E_2] Q_1 [E_1, E_2]^T \big) \xi_t \end{aligned} \tag{11}$$

$$\begin{aligned} \dot{V}_4(x(t)) =\ & \eta_2^T(t,t) Q_2 \eta_2(t,t) - \eta_2^T(t,t-h) Q_2 \eta_2(t,t-h) + 2 [\dot{x}^T(t), 0]\, Q_2 \int_{t-h}^{t} [x^T(t), x^T(s)]^T ds \\ & + h\, \eta_3^T(t) Q_3 \eta_3(t) + \dot{x}^T(t)(h^2 R_1 + h^3 R_2)\dot{x}(t) + V_a(x_t) \\ =\ & \xi_t^T \big( [E_1, E_1] Q_2 [E_1, E_1]^T - [E_1, E_3] Q_2 [E_1, E_3]^T + 2 [A_c, 0] Q_2 [h E_1, E_4+E_5]^T \\ & + h [E_1, A_c] Q_3 [E_1, A_c]^T + A_c (h^2 R_1 + h^3 R_2) A_c^T \big) \xi_t + V_a(x_t). \end{aligned} \tag{12}$$

Here, V_a(x_t) is the sum of all the integral terms with the factor h − t + s in the integrand:

$$V_a(x_t) = -\int_{t-h}^{t} \big[ \eta_3^T(s) Q_3 \eta_3(s) + 2(h-t+s)\, \dot{x}^T(s) R_1 \dot{x}(s) + 3(h-t+s)^2\, \dot{x}^T(s) R_2 \dot{x}(s) \big]\, ds.$$

It is not difficult to obtain the following identities:

$$h - t + s = [d(t) - t + s] + [h - d(t)]$$
$$(h - t + s)^2 = [d(t) - t + s]^2 + [h^2 - d^2(t)] + 2[h - d(t)](s - t).$$

So we can disassemble the integral into two parts as follows:

$$\begin{aligned} V_a(x_t) =\ & -\int_{t-h}^{t-d(t)} \big[ \eta_3^T(s) Q_3 \eta_3(s) + 2(h-t+s)\dot{x}^T(s) R_1 \dot{x}(s) + 3(h-t+s)^2 \dot{x}^T(s) R_2 \dot{x}(s) \big]\, ds \\ & - \int_{t-d(t)}^{t} \big[ \eta_3^T(s) Q_3 \eta_3(s) + 2[d(t)-t+s]\dot{x}^T(s) R_1 \dot{x}(s) + 3[d(t)-t+s]^2 \dot{x}^T(s) R_2 \dot{x}(s) \big]\, ds \\ & - \int_{t-d(t)}^{t} \dot{x}^T(s) \big[ 2(h-d(t)) R_1 + 3(h^2 - d^2(t)) R_2 \big] \dot{x}(s)\, ds \\ & - \int_{t-d(t)}^{t} 6[h-d(t)](s-t)\, \dot{x}^T(s) R_2 \dot{x}(s)\, ds \\ \triangleq\ & \hat{V}_a(x_t) + \tilde{V}_a(x_t) \end{aligned} \tag{13}$$

where

$$\begin{aligned} \hat{V}_a(x_t) =\ & -\int_{t-h}^{t-d(t)} \big[ \eta_3^T(s) Q_3 \eta_3(s) + 2(h-t+s)\dot{x}^T(s) R_1 \dot{x}(s) + 3(h-t+s)^2 \dot{x}^T(s) R_2 \dot{x}(s) \big]\, ds \\ & - \int_{t-d(t)}^{t} \big[ \eta_3^T(s) Q_3 \eta_3(s) + 2[d(t)-t+s]\dot{x}^T(s) R_1 \dot{x}(s) + 3[d(t)-t+s]^2 \dot{x}^T(s) R_2 \dot{x}(s) \big]\, ds \end{aligned}$$

$$\tilde{V}_a(x_t) = -\int_{t-d(t)}^{t} \dot{x}^T(s)\big[2(h-d(t))R_1 + 3(h^2-d^2(t))R_2\big]\dot{x}(s)\, ds - \int_{t-d(t)}^{t} 6[h-d(t)](s-t)\,\dot{x}^T(s) R_2 \dot{x}(s)\, ds.$$
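Both splitting identities used in the disassembly above are purely algebraic; a throwaway symbolic check (illustration only):

```python
import sympy as sp

h, t, s, d = sp.symbols('h t s d')
lhs1, rhs1 = h - t + s, (d - t + s) + (h - d)
lhs2 = (h - t + s)**2
rhs2 = (d - t + s)**2 + (h**2 - d**2) + 2*(h - d)*(s - t)
print(sp.simplify(lhs1 - rhs1), sp.simplify(lhs2 - rhs2))  # 0 0
```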


Applying Lemma 1 to V̂_a(x_t), we get

$$\begin{aligned} \hat{V}_a(x_t) \le \xi_t^T \big( & (h - d(t)) F_1^T Q_3^{-1} F_1 + 2 F_1^T [E_4, E_2 - E_3]^T + (h-d(t))^2 F_2^T R_1^{-1} F_2 \\ & + 4 F_2^T [(h-d(t))E_2 - E_4]^T + 3(h-d(t)) F_3^T R_2^{-1} F_3 + 6 F_3^T [(h-d(t))E_2 - E_4]^T \\ & + d(t) F_4^T Q_3^{-1} F_4 + 2 F_4^T [E_5, E_1 - E_2]^T + d^2(t) F_5^T R_1^{-1} F_5 \\ & + 4 F_5^T (d(t) E_1 - E_5)^T + 3 d(t) F_6^T R_2^{-1} F_6 + 6 F_6^T (d(t) E_1 - E_5)^T \big) \xi_t. \end{aligned} \tag{14}$$

On the other hand, according to (6), we can obtain

$$\big(g_i(x_i(t)) - \underline{l}_i x_i(t)\big)\big(g_i(x_i(t)) - \bar{l}_i x_i(t)\big) \le 0 \tag{15}$$

and

$$\big(g_i(x_i(t-d(t))) - \underline{l}_i x_i(t-d(t))\big)\big(g_i(x_i(t-d(t))) - \bar{l}_i x_i(t-d(t))\big) \le 0 \tag{16}$$

where i = 1, 2, …, n. For any positive diagonal matrices W_1 = diag(w_{11}, w_{12}, …, w_{1n}) > 0 and W_2 = diag(w_{21}, w_{22}, …, w_{2n}) > 0, we have by (15) and (16)

$$\begin{aligned} 0 \le\ & -2 g^T(x(t)) W_1 g(x(t)) + 2 g^T(x(t)) W_1 \bar{L} x(t) + 2 x^T(t) \underline{L} W_1 g(x(t)) - 2 x^T(t) \underline{L} W_1 \bar{L} x(t) \\ & - 2 g^T(x(t-d(t))) W_2 g(x(t-d(t))) + 2 g^T(x(t-d(t))) W_2 \bar{L} x(t-d(t)) \\ & + 2 x^T(t-d(t)) \underline{L} W_2 g(x(t-d(t))) - 2 x^T(t-d(t)) \underline{L} W_2 \bar{L} x(t-d(t)) \\ =\ & 2 \xi_t^T \big( -E_6 W_1 E_6^T + E_6 W_1 \bar{L} E_1^T + E_1 \underline{L} W_1 E_6^T - E_1 \underline{L} W_1 \bar{L} E_1^T \\ & - E_7 W_2 E_7^T + E_7 W_2 \bar{L} E_2^T + E_2 \underline{L} W_2 E_7^T - E_2 \underline{L} W_2 \bar{L} E_2^T \big) \xi_t \end{aligned} \tag{17}$$

where L̲ = diag{l̲_1, l̲_2, …, l̲_n} and L̄ = diag{l̄_1, l̄_2, …, l̄_n}. Combining (7) and (9)–(14) and adding (17) in consideration of the Lipschitz constraint, by the S-procedure, yields

$$\begin{aligned} \dot{V}(x_t) - \tilde{V}_a(x_t) \le \xi_t^T \big\{ & 2[E_1, E_4+E_5] P [A_c, E_1-E_3]^T + 2 E_6 \Lambda A_c^T + E_6 S E_6^T - (1-\mu) E_7 S E_7^T \\ & + [E_1, E_1] Q_1 [E_1, E_1]^T + 2[A_c, 0] Q_1 [d(t) E_1, E_5]^T - (1-\mu)[E_1, E_2] Q_1 [E_1, E_2]^T \\ & + [E_1, E_1] Q_2 [E_1, E_1]^T - [E_1, E_3] Q_2 [E_1, E_3]^T + 2[A_c, 0] Q_2 [h E_1, E_4+E_5]^T \\ & + h[E_1, A_c] Q_3 [E_1, A_c]^T + A_c (h^2 R_1 + h^3 R_2) A_c^T \\ & + (h-d(t)) F_1^T Q_3^{-1} F_1 + 2 F_1^T [E_4, E_2-E_3]^T + (h-d(t))^2 F_2^T R_1^{-1} F_2 \\ & + 4 F_2^T [(h-d(t))E_2 - E_4]^T + 3(h-d(t)) F_3^T R_2^{-1} F_3 + 6 F_3^T [(h-d(t))E_2 - E_4]^T \\ & + d(t) F_4^T Q_3^{-1} F_4 + 2 F_4^T [E_5, E_1-E_2]^T + d^2(t) F_5^T R_1^{-1} F_5 + 4 F_5^T (d(t)E_1 - E_5)^T \\ & + 3 d(t) F_6^T R_2^{-1} F_6 + 6 F_6^T (d(t)E_1 - E_5)^T \\ & - 2 E_6 W_1 E_6^T + 2 E_6 W_1 \bar{L} E_1^T + 2 E_1 \underline{L} W_1 E_6^T - 2 E_1 \underline{L} W_1 \bar{L} E_1^T \\ & - 2 E_7 W_2 E_7^T + 2 E_7 W_2 \bar{L} E_2^T + 2 E_2 \underline{L} W_2 E_7^T - 2 E_2 \underline{L} W_2 \bar{L} E_2^T \big\} \xi_t. \end{aligned} \tag{18}$$

That is,

$$\dot{V}(x_t) - \tilde{V}_a(x_t) \le \xi_t^T \{\Omega_0 + d(t)\,\Omega_1 + \Omega_d\} \xi_t \tag{19}$$

where Ω_0 and Ω_1 are defined in the theorem context, and

$$\begin{aligned} \Omega_d =\ & (h-d(t)) F_1^T Q_3^{-1} F_1 + (h-d(t))^2 F_2^T R_1^{-1} F_2 + 3(h-d(t)) F_3^T R_2^{-1} F_3 \\ & + d(t) F_4^T Q_3^{-1} F_4 + d^2(t) F_5^T R_1^{-1} F_5 + 3 d(t) F_6^T R_2^{-1} F_6. \end{aligned} \tag{20}$$

Note that ξ_t^T{Ω_0 + d(t)Ω_1 + Ω_d}ξ_t is a quadratic function of d(t) whose second-order coefficient is ξ_t^T{F_2^T R_1^{-1} F_2 + F_5^T R_1^{-1} F_5}ξ_t ≥ 0. This implies that ξ_t^T{Ω_0 + d(t)Ω_1 + Ω_d}ξ_t is a convex quadratic function with respect to d(t). Applying Lemma 2 to (8), we get

$$[\Omega_0 + d(t)\Omega_1 + \Omega_d]_{d(t)=0} = \Omega_0 + h F_1^T Q_3^{-1} F_1 + h^2 F_2^T R_1^{-1} F_2 + 3h F_3^T R_2^{-1} F_3 < 0 \tag{21}$$

which is equivalent to Ξ_1 < 0, and

$$[\Omega_0 + d(t)\Omega_1 + \Omega_d]_{d(t)=h} = \Omega_0 + h\Omega_1 + h F_4^T Q_3^{-1} F_4 + h^2 F_5^T R_1^{-1} F_5 + 3h F_6^T R_2^{-1} F_6 < 0 \tag{22}$$

which shows that Ξ_2 < 0 is true, where Ξ_i, i = 1, 2, are the same as those in (8). Finally, employing Lemma 3 leads to

$$\Omega_0 + d(t)\Omega_1 + \Omega_d < 0, \quad \forall d(t) \in [0, h]. \tag{23}$$

It is easy to show the following relation for 0 ≤ d(t) ≤ h, noting that (h + d(t)) + 2(s − t) ≥ h − d(t) ≥ 0 for s ∈ [t − d(t), t]:

$$\begin{aligned} \tilde{V}_a(x_t) &= -(h-d(t)) \int_{t-d(t)}^{t} \big\{ \dot{x}^T(s)(2R_1)\dot{x}(s) + 3\dot{x}^T(s)\big[(h+d(t)) + 2(s-t)\big] R_2 \dot{x}(s) \big\}\, ds \\ &\le -2 \int_{t-d(t)}^{t} (h-d(t))\, \dot{x}^T(s) R_1 \dot{x}(s)\, ds - 3 \int_{t-d(t)}^{t} (h-d(t))^2\, \dot{x}^T(s) R_2 \dot{x}(s)\, ds \le 0. \end{aligned} \tag{24}$$

Thus we can obtain from (19) and (23)–(24) that

$$\dot{V}(x_t) \le \xi_t^T \{\Omega_0 + d(t)\Omega_1 + \Omega_d\} \xi_t < 0 \tag{25}$$

which means that the system is asymptotically stable.


Now it is obvious from (24) that Ṽ_a(x_t) = 0 when either d(t) = 0 or d(t) = h, so it makes no contribution to the derived LMIs; and Ṽ_a(x_t) < 0 for 0 < d(t) < h does not introduce any conservativeness into the resultant two LMIs, which correspond to the two ends of the variation interval of the time delay, in view of Lemma 3. This completes the proof. ∎

Remark 2: In the process of calculating the derivative of the L–K functional, we employ the upper bound on the integral of a quadratic term multiplied by a scalar function given in Lemma 1, which is proved only via the Cauchy inequality, combined with the quadratic convex combination technique implied by Lemma 3, rather than Jensen's inequality and the linear convex combination technique.

Remark 3: It is worth noting that the obtained stability criteria in the form of LMIs can be easily tested with standard convex optimization software, e.g., the MATLAB LMI Toolbox. Certainly, the delay-dependent criteria reduce to delay-independent ones by leaving out the terms with respect to Q_i, i = 1, 2, 3, and R_j, j = 1, 2, together with the induced terms with respect to F_k, k = 1, 2, …, 6, brought in by Q_3 and R_j, j = 1, 2, in the results of Theorem 1. That is, the delay-size-independent stability criteria for RNNs degrade into the single LMI

$$\begin{aligned} \bar{\Omega}_0 =\ & \operatorname{Sym}\big([E_1, E_4+E_5] P [A_c, E_1-E_3]^T + E_6 \Lambda A_c^T\big) + E_6 S E_6^T - (1-\mu) E_7 S E_7^T \\ & - 2 E_6 W_1 E_6^T + \operatorname{Sym}\big(E_6 W_1 \bar{L} E_1^T + E_1 \underline{L} W_1 E_6^T\big) - 2 E_1 \underline{L} W_1 \bar{L} E_1^T \\ & - 2 E_7 W_2 E_7^T - 2 E_2 \underline{L} W_2 \bar{L} E_2^T + \operatorname{Sym}\big(E_7 W_2 \bar{L} E_2^T + E_2 \underline{L} W_2 E_7^T\big) < 0. \end{aligned}$$

In addition, when the information on the time derivative of the delay is unknown, or the derivative of the time-varying delay does not exist, by eliminating Q_1 and S we have the corresponding result from Theorem 1.
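As Remark 3 notes, criteria of this form are checked by semidefinite programming. The sketch below is a hedged illustration using CVXPY in place of the MATLAB LMI Toolbox mentioned above; it tests a classical Lyapunov LMI (find P > 0 with A^T P + P A < 0), not the full inequality (8), whose assembly follows the same pattern:

```python
import cvxpy as cp
import numpy as np

# Feasibility of a simple stability LMI: find P > 0 with A^T P + P A < 0.
A = np.array([[-2.0, 0.5], [0.3, -1.5]])           # hypothetical stable matrix
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6                                         # strictness margin
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)                                 # 'optimal' => LMI feasible
```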

Corollary 1: For a given scalar h > 0, the origin of system (5) with (6) and a time-varying delay satisfying condition (2) is globally asymptotically stable if there exist matrices P > 0, Q_i > 0, i = 2, 3; R_j > 0, j = 1, 2, and Λ = diag(λ_1, λ_2, …, λ_n) > 0, W_1 = diag(w_{11}, w_{12}, …, w_{1n}) > 0, W_2 = diag(w_{21}, w_{22}, …, w_{2n}) > 0, and F_i, i = 1, 2, …, 6, such that the following set of inequalities hold:

$$\begin{bmatrix} \tilde{\Omega}_0 & * & * & * \\ hF_1 & -hQ_3 & * & * \\ hF_2 & 0 & -R_1 & * \\ 3hF_3 & 0 & 0 & -3hR_2 \end{bmatrix} < 0, \qquad \begin{bmatrix} \tilde{\Omega}_0 + h\tilde{\Omega}_1 & * & * & * \\ hF_4 & -hQ_3 & * & * \\ hF_5 & 0 & -R_1 & * \\ 3hF_6 & 0 & 0 & -3hR_2 \end{bmatrix} < 0$$

where Ω̃_0 and Ω̃_1 denote Ω_0 and Ω_1 of Theorem 1 with all terms involving Q_1 and S deleted.