
JOURNAL OF COMPUTERS, VOL. 7, NO. 1, JANUARY 2012

New Robust Stability of Uncertain Neutral-Type Neural Networks with Discrete Interval and Distributed Time-Varying Delays

Guoquan Liu
College of Automation, Chongqing University, Chongqing, China
Email: [email protected]

Simon X. Yang School of Engineering, University of Guelph, Guelph, Ontario, Canada Email: [email protected]

Wei Fu College of Automation, Chongqing University, Chongqing, China Email: [email protected]

Abstract—This paper develops a novel robust stability criterion for a class of uncertain neutral-type neural networks with discrete interval and distributed time-varying delays. By constructing a general form of Lyapunov-Krasovskii functional, using the linear matrix inequality (LMI) approach and introducing some free-weight matrices, delay-dependent robust stability criteria are derived in terms of LMIs. Numerical examples are given to illustrate the effectiveness of the proposed method.

Index Terms—Neural networks; Robust stability; Linear matrix inequality; Neutral-type; Lyapunov-Krasovskii functional

Corresponding author: [email protected]

© 2012 ACADEMY PUBLISHER. doi:10.4304/jcp.7.1.264-271

I. INTRODUCTION

As is well known, stability is one of the main properties of neural networks and a crucial feature in their design and hardware implementation. However, in the process of information storage and transmission in neural networks, time delays occur and are a source of oscillation, instability and other poor performance. Therefore, a great number of results have been proposed to guarantee the global asymptotic or exponential stability of delayed neural networks; see [1]-[8] and the references therein. Among these, delay-dependent robust stability problems of neural networks with delays have received considerable attention. In addition, a special type of time delay found in real systems as well as in neural networks, the interval time-varying delay, has been identified and investigated [9]-[16]. An interval time-varying delay is a time delay that varies in an interval in which the lower bound is not restricted to be zero. Hence, stability analysis for neural networks with interval time-varying delays has been widely investigated in recent years.

On the other hand, owing to the complicated dynamic properties of neural cells in practice, existing neural network models cannot characterize the properties of a neural reaction process precisely [17]. In order to describe the dynamics of some complicated neural networks more precisely, a new type of neural network needs to be introduced; neural networks of this new type are called neutral neural networks, or neural networks of neutral type. To date, the problem of robust stability analysis for neural networks of neutral type has been investigated by only a few researchers [17]-[20]. In [19], the problem of global asymptotic stability for neural networks of neutral type with time-varying delays was investigated. In [20], the global exponential stability problem was considered for a class of neutral-type impulsive neural networks with discrete and distributed delays. It should be pointed out, however, that in the existing literature on neutral neural networks, parameter uncertainties and distributed time-varying delays were not taken into account. Up to now, the robust stability analysis problem for neutral neural networks with discrete interval and distributed time-varying delays has not been fully studied. It is therefore important and challenging to obtain new stability criteria for uncertain delayed neutral-type neural networks.

Motivated by the above statements, a class of uncertain neutral-type neural networks with discrete interval and distributed time-varying delays is considered in this paper. Based on the Lyapunov-Krasovskii functional approach and the free-weight matrices technique, new robust stability criteria are developed in terms of LMIs, which can be easily checked by the MATLAB LMI toolbox. Moreover, the proposed stability criteria do not require the monotonicity of the activation functions and


the derivative of the discrete time-varying delays to be less than one, which generalizes and improves earlier methods. Finally, the validity and performance of the obtained results are illustrated by two examples.

Notations: Throughout this paper, for symmetric matrices X and Y, X ≥ Y (respectively, X > Y) means that X − Y ≥ 0 (X − Y > 0) is a positive semi-definite (respectively, positive definite) matrix. The superscripts "T" and "-1" stand for matrix transposition and matrix inverse, respectively; ℜ^n and ℜ^{n×n} denote the n-dimensional Euclidean space and the set of all n × n real matrices, respectively; ∗ represents the blocks that are readily inferred by symmetry; I denotes the identity matrix of appropriate dimensions; diag{...} denotes a block diagonal matrix. If not stated explicitly, matrices are assumed to have compatible dimensions.

II. PROBLEM FORMULATION AND LEMMAS

Consider the following uncertain neutral-type neural network with discrete interval and distributed time-varying delays:

ẏ(t) = -(A + ΔA(t))y(t) + (W1 + ΔW1(t))g(y(t)) + (W2 + ΔW2(t))g(y(t - τ(t))) + (W3 + ΔW3(t))ẏ(t - h(t)) + (W4 + ΔW4(t)) ∫_{t-r(t)}^{t} g(y(s))ds + I,   (1)

y(t) = φ(t), ∀t ∈ [-δ, 0], δ = max{τ2, h, r}, φ ∈ C([-δ, 0], ℜ^n),

where A = diag{a1, a2, ..., an} is a positive diagonal matrix; Wi ∈ ℜ^{n×n} (i = 1, 2, 3, 4) are the interconnection weight matrices; ΔA(t) and ΔWi(t) (i = 1, 2, 3, 4) are parametric uncertainties; τ(t), r(t) and h(t) represent the time-varying discrete, distributed and neutral delays of system (1), respectively; y(t) = [y1(t), y2(t), ..., yn(t)]^T ∈ ℜ^n is the neural state vector; g(y(t)) = [g1(y1(t)), g2(y2(t)), ..., gn(yn(t))]^T ∈ ℜ^n is the neuron activation function; I = [I1, I2, ..., In]^T ∈ ℜ^n is the constant external input vector; and φ(t), t ∈ [-δ, 0], is the initial condition.

In system (1), the parameter uncertainties ΔA(t), ΔW1(t), ΔW2(t), ΔW3(t) and ΔW4(t) are assumed to have the form

[ΔA(t), ΔW1(t), ΔW2(t), ΔW3(t), ΔW4(t)] = HF(t)[B1, B2, B3, B4, B5],   (2)

where H and Bi, i = 1, 2, ..., 5, are known real constant matrices of appropriate dimensions, and the matrix F(t), which may be time-varying, is unknown and satisfies F^T(t)F(t) ≤ I. The time-varying delays τ(t), r(t) and h(t) satisfy

0 < τ1 ≤ τ(t) ≤ τ2, τ̇(t) ≤ τd, 0 < h(t) ≤ h, ḣ(t) ≤ hd < 1, 0 < r(t) ≤ r,   (3)

where τ1, τ2, τd, h, hd, r are positive constants.

Throughout this paper, we assume that the neuron activation functions gj(·), j = 1, 2, ..., n, satisfy the following hypotheses:

Assumption 1. gj(·) is bounded for any j = 1, 2, ..., n.

Assumption 2. li- ≤ [gi(x1) - gi(x2)] / (x1 - x2) ≤ li+, i = 1, 2, ..., n, for all x1, x2 ∈ ℜ, x1 ≠ x2, where li-, li+ are constants that may be positive, negative, or zero. This description is therefore less restrictive than both the sigmoid activation functions and the Lipschitz-type activation functions.

Assume y* = [y1*, y2*, ..., yn*]^T is an equilibrium point of (1). It is easily derived that the transformation xi = yi - yi* puts system (1) into the form

ẋ(t) = -(A + ΔA(t))x(t) + (W1 + ΔW1(t))f(x(t)) + (W2 + ΔW2(t))f(x(t - τ(t))) + (W3 + ΔW3(t))ẋ(t - h(t)) + (W4 + ΔW4(t)) ∫_{t-r(t)}^{t} f(x(s))ds,   (4)

x(t) = φ(t), ∀t ∈ [-δ, 0], δ = max{τ2, h, r}, φ ∈ C([-δ, 0], ℜ^n),

where x(t) is the state vector of the transformed system and f(x(t)) = [f1(x1(t)), f2(x2(t)), ..., fn(xn(t))]^T ∈ ℜ^n is the activation function, with fi(xi(t)) = gi(xi(t) + yi*) - gi(yi*) and fi(0) = 0 for i = 1, 2, ..., n. Note that since each gj(·) satisfies Assumptions 1 and 2, each fj(·) satisfies

li- ≤ fi(xi)/xi ≤ li+, i = 1, 2, ..., n, ∀xi ∈ ℜ, xi ≠ 0,   (5)

which implies that

0 ≤ [fi(xi) - li- xi]/xi, 0 ≤ [li+ xi - fi(xi)]/xi,   (6)

where li-, li+ are the constants above.

In order to obtain our main results, the following basic lemmas are introduced.

Lemma 1 (Schur complement). Given constant matrices S1, S2 and S3 with appropriate dimensions, where S1^T = S1 and S2^T = S2 > 0, then S1 + S3^T S2^{-1} S3 < 0 if and only if

[ S1  S3^T        [ -S2  S3
  ∗   -S2 ] < 0, or  ∗   S1 ] < 0.   (7)

Lemma 2. For any real matrices X, Y and any positive definite matrix G of appropriate dimensions, the following matrix inequality holds:

X^T Y + Y^T X ≤ X^T G X + Y^T G^{-1} Y.   (8)
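Lemma 1 is the tool that later converts the nonlinear matrix inequality arising in the proof of Theorem 1 into a linear one. As a quick illustration (not part of the original paper), the equivalence can be spot-checked in Python with scalar blocks, where negative definiteness of the 2×2 block matrix reduces to conditions on its minors:

```python
# Lemma 1 with scalar blocks: S1 + S3 * S2^{-1} * S3 < 0  iff
# the block matrix [[S1, S3], [S3, -S2]] is negative definite.

def is_negative_definite_2x2(a, b, c):
    """[[a, b], [b, c]] < 0  iff  a < 0 and the determinant a*c - b*b > 0."""
    return a < 0 and a * c - b * b > 0

S1, S2, S3 = -4.0, 2.0, 1.5          # S2 > 0, as Lemma 1 requires

lhs_negative = (S1 + S3 * S3 / S2) < 0                 # -2.875 < 0 -> True
block_negative = is_negative_definite_2x2(S1, S3, -S2)

print(lhs_negative, block_negative)  # True True: the two conditions agree
```

Changing S1 to a positive value makes both conditions fail together, which is exactly the "if and only if" in (7).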

Lemma 3. For any constant matrix M ∈ ℜ^{n×n}, M = M^T > 0, scalars a, b satisfying a < b, and a vector function ω : [a, b] → ℜ^n such that the integrals concerned are well defined,

(∫_a^b ω(s)ds)^T M (∫_a^b ω(s)ds) ≤ (b - a) ∫_a^b ω^T(s) M ω(s)ds.   (9)

Lemma 4. Let U, M, F(t) and N be real matrices of appropriate dimensions with U = U^T. Then U + MF(t)N + N^T F^T(t) M^T < 0 for all F(t) satisfying F^T(t)F(t) ≤ I, if and only if there exists a scalar ε > 0 such that

U + ε^{-1} M M^T + ε N^T N < 0,   (10)

which follows from MF(t)N + N^T F^T(t) M^T ≤ ε^{-1} M M^T + ε N^T N.

III. MAIN RESULTS

A. Stability Criteria for the Nominal System

In this section, we first perform the stability analysis for the nominal delayed neutral-type neural network

ẋ(t) = -Ax(t) + W1 f(x(t)) + W2 f(x(t - τ(t))) + W3 ẋ(t - h(t)) + W4 ∫_{t-r(t)}^{t} f(x(s))ds.   (11)

By constructing a Lyapunov-Krasovskii functional, we have the following theorem.

Theorem 1. For given scalars τ1, τ2, τd, h, hd and r satisfying (3), the system (11) is globally asymptotically stable if there exist matrices P > 0, Qi = Qi^T > 0 (i = 1, 2, ..., 6), Ri = Ri^T > 0 (i = 1, 2, 3), C = C^T > 0, Ui, Mi, Ni, Si (i = 1, 2, ..., 10), and diagonal matrices K > 0, Ti > 0 (i = 1, 2) such that the following LMI holds:

     [ Ψ1  U   M   N
       ∗   Ψ2  0   0
Ψ =    ∗   ∗   Ψ3  0    < 0,   (12)
       ∗   ∗   ∗   Ψ4 ]

where

Ψ1 = Ψ1^T = [Ξi,j]_{10×10}, i, j = 1, 2, ..., 10,
Ψ2 = -τ1^{-1}(R1 + R2), Ψ3 = -(τ2 - τ1)^{-1}(R2 + R3), Ψ4 = -τ2^{-1}(R2 + R3),
U^T = [Ui]_{1×10}, M^T = [Mi]_{1×10}, N^T = [Ni]_{1×10}, S^T = [Si]_{1×10}, i = 1, 2, ..., 10,

with

Ξ1,1 = Q1 + Q3 + Q4 + Q5 - L1T1 - T1^T L1^T + U1 + U1^T - S1A - A^T S1^T,
Ξ1,2 = L2T1 + U2^T - A^T S2^T + S1W1,
Ξ1,3 = U3^T - M1 + N1 - A^T S3^T,
Ξ1,4 = U4^T - A^T S4^T,
Ξ1,5 = U5^T - U1 + M1 - A^T S5^T,
Ξ1,6 = U6^T - N1 - A^T S6^T,
Ξ1,7 = U7^T - A^T S7^T + S1W2,
Ξ1,8 = P + U8^T - A^T S8^T - S1,
Ξ1,9 = U9^T - A^T S9^T + S1W3,
Ξ1,10 = U10^T - A^T S10^T + S1W4,
Ξ2,2 = Q2 + rC - T1 - T1^T + S2W1 + W1^T S2^T,
Ξ2,3 = -M2 + N2 + W1^T S3^T,
Ξ2,4 = W1^T S4^T,
Ξ2,5 = -U2 + M2 + W1^T S5^T,
Ξ2,6 = -N2 + W1^T S6^T,
Ξ2,7 = W1^T S7^T + S2W2,
Ξ2,8 = K + W1^T S8^T - S2,
Ξ2,9 = W1^T S9^T + S2W3,
Ξ2,10 = W1^T S10^T + S2W4,
Ξ3,3 = -(1 - τd)Q1 - L1T2 - T2^T L1^T - M3 - M3^T + N3 + N3^T,
Ξ3,4 = -M4^T + N4^T,
Ξ3,5 = -U3 + M3 - M5^T + N5^T,
Ξ3,6 = -M6^T + N6^T - N3,
Ξ3,7 = L2T2 - M7^T + N7^T + S3W2,
Ξ3,8 = -M8^T + N8^T - S3,
Ξ3,9 = -M9^T + N9^T + S3W3,
Ξ3,10 = -M10^T + N10^T + S3W4,
Ξ4,4 = -(1 - hd)Q5,
Ξ4,5 = -U4 + M4, Ξ4,6 = -N4, Ξ4,7 = S4W2, Ξ4,8 = -S4, Ξ4,9 = S4W3, Ξ4,10 = S4W4,
Ξ5,5 = -Q3 - U5 - U5^T + M5 + M5^T,
Ξ5,6 = -U6^T + M6^T - N5,
Ξ5,7 = -U7^T + M7^T + S5W2,
Ξ5,8 = -U8^T + M8^T - S5,
Ξ5,9 = -U9^T + M9^T + S5W3,
Ξ5,10 = -U10^T + M10^T + S5W4,
Ξ6,6 = -Q4 - N6 - N6^T,
Ξ6,7 = -N7^T + S6W2,
Ξ6,8 = -N8^T - S6,
Ξ6,9 = -N9^T + S6W3,
Ξ6,10 = -N10^T + S6W4,
Ξ7,7 = -(1 - τd)Q2 - T2 - T2^T + S7W2 + W2^T S7^T,
Ξ7,8 = W2^T S8^T - S7,
Ξ7,9 = W2^T S9^T + S7W3,
Ξ7,10 = W2^T S10^T + S7W4,
Ξ8,8 = Q6 + τ1R1 + τ2R2 + (τ2 - τ1)R3 - S8 - S8^T,
Ξ8,9 = S8W3 - S9^T,
Ξ8,10 = S8W4 - S10^T,
Ξ9,9 = -(1 - hd)Q6 + S9W3 + W3^T S9^T,
Ξ9,10 = W3^T S10^T + S9W4,
Ξ10,10 = -r^{-1}C + S10W4 + W4^T S10^T.
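Before the proof, Lemma 3 (a Jensen-type integral inequality) deserves a quick numerical sanity check, since it drives the integral bounds used below. The following Python sketch (illustrative only, with an arbitrarily chosen scalar ω and M = 1) discretizes both sides of (9) by Riemann sums:

```python
import math

a, b, n = 0.0, 2.0, 100000
h = (b - a) / n
omega = lambda s: math.sin(3.0 * s) + 0.5 * s    # arbitrary test function

# left-endpoint Riemann sums for both sides of (9) with M = 1
I1 = sum(omega(a + i * h) for i in range(n)) * h        # integral of omega
I2 = sum(omega(a + i * h) ** 2 for i in range(n)) * h   # integral of omega^2

print(I1 ** 2 <= (b - a) * I2)   # True, by the Cauchy-Schwarz inequality
```

The discrete inequality holds exactly for any sample values (it is the discrete Cauchy-Schwarz inequality), so the check passes regardless of the test function chosen.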

Proof. Consider the following Lyapunov-Krasovskii functional for system (11):

V(x(t), t) = Σ_{i=1}^{5} Vi(x(t), t),   (13)

where

V1(x(t), t) = x^T(t)Px(t) + 2 Σ_{i=1}^{n} ki ∫_0^{xi} fi(s)ds,

V2(x(t), t) = ∫_{t-τ(t)}^{t} [x^T(s)Q1x(s) + f^T(x(s))Q2 f(x(s))]ds + ∫_{t-τ1}^{t} x^T(s)Q3x(s)ds + ∫_{t-τ2}^{t} x^T(s)Q4x(s)ds,

V3(x(t), t) = ∫_{t-h(t)}^{t} [x^T(s)Q5x(s) + Z^T(s)Q6Z(s)]ds,

V4(x(t), t) = ∫_{-τ1}^{0} ∫_{t+θ}^{t} Z^T(s)R1Z(s)ds dθ + ∫_{-τ2}^{0} ∫_{t+θ}^{t} Z^T(s)R2Z(s)ds dθ + ∫_{-τ2}^{-τ1} ∫_{t+θ}^{t} Z^T(s)R3Z(s)ds dθ,

V5(x(t), t) = ∫_{-r}^{0} ∫_{t+θ}^{t} f^T(x(s))Cf(x(s))ds dθ,

with Z(t) = ẋ(t), i.e.

Z(t) = -Ax(t) + W1 f(x(t)) + W2 f(x(t - τ(t))) + W3 Z(t - h(t)) + W4 ∫_{t-r(t)}^{t} f(x(s))ds.   (14)

Calculating the derivative of V(x(t), t) along the solutions of system (11) gives

V̇(x(t), t) = V̇1(x(t), t) + V̇2(x(t), t) + V̇3(x(t), t) + V̇4(x(t), t) + V̇5(x(t), t),   (15)

where, with K = diag{k1, ..., kn},

V̇1(x(t), t) = 2x^T(t)PZ(t) + 2f^T(x(t))KZ(t),   (16)

V̇2(x(t), t) = x^T(t)Q1x(t) - (1 - τ̇(t))x^T(t - τ(t))Q1x(t - τ(t)) + f^T(x(t))Q2 f(x(t)) - (1 - τ̇(t))f^T(x(t - τ(t)))Q2 f(x(t - τ(t))) + x^T(t)Q3x(t) - x^T(t - τ1)Q3x(t - τ1) + x^T(t)Q4x(t) - x^T(t - τ2)Q4x(t - τ2)
≤ x^T(t)Q1x(t) - (1 - τd)x^T(t - τ(t))Q1x(t - τ(t)) + f^T(x(t))Q2 f(x(t)) - (1 - τd)f^T(x(t - τ(t)))Q2 f(x(t - τ(t))) + x^T(t)Q3x(t) - x^T(t - τ1)Q3x(t - τ1) + x^T(t)Q4x(t) - x^T(t - τ2)Q4x(t - τ2),   (17)

V̇3(x(t), t) = x^T(t)Q5x(t) - (1 - ḣ(t))x^T(t - h(t))Q5x(t - h(t)) + Z^T(t)Q6Z(t) - (1 - ḣ(t))Z^T(t - h(t))Q6Z(t - h(t))
≤ x^T(t)Q5x(t) - (1 - hd)x^T(t - h(t))Q5x(t - h(t)) + Z^T(t)Q6Z(t) - (1 - hd)Z^T(t - h(t))Q6Z(t - h(t)),   (18)

V̇4(x(t), t) = Z^T(t)[τ1R1 + τ2R2 + (τ2 - τ1)R3]Z(t) - ∫_{t-τ1}^{t} Z^T(s)R1Z(s)ds - ∫_{t-τ2}^{t} Z^T(s)R2Z(s)ds - ∫_{t-τ2}^{t-τ1} Z^T(s)R3Z(s)ds,   (19)

V̇5(x(t), t) = r f^T(x(t))Cf(x(t)) - ∫_{t-r}^{t} f^T(x(s))Cf(x(s))ds.   (20)

By Lemma 3 and (3), we have

-∫_{t-τ1}^{t} Z^T(s)R1Z(s)ds = -τ1^{-1} τ1 ∫_{t-τ1}^{t} Z^T(s)R1Z(s)ds ≤ -τ1^{-1} [∫_{t-τ1}^{t} Z(s)ds]^T R1 [∫_{t-τ1}^{t} Z(s)ds],   (21)

-∫_{t-τ2}^{t} Z^T(s)R2Z(s)ds = -∫_{t-τ1}^{t} Z^T(s)R2Z(s)ds - ∫_{t-τ(t)}^{t-τ1} Z^T(s)R2Z(s)ds - ∫_{t-τ2}^{t-τ(t)} Z^T(s)R2Z(s)ds
≤ -τ1^{-1} [∫_{t-τ1}^{t} Z(s)ds]^T R2 [∫_{t-τ1}^{t} Z(s)ds] - (τ2 - τ1)^{-1} [∫_{t-τ(t)}^{t-τ1} Z(s)ds]^T R2 [∫_{t-τ(t)}^{t-τ1} Z(s)ds] - τ2^{-1} [∫_{t-τ2}^{t-τ(t)} Z(s)ds]^T R2 [∫_{t-τ2}^{t-τ(t)} Z(s)ds],   (22)

-∫_{t-τ2}^{t-τ1} Z^T(s)R3Z(s)ds = -∫_{t-τ(t)}^{t-τ1} Z^T(s)R3Z(s)ds - ∫_{t-τ2}^{t-τ(t)} Z^T(s)R3Z(s)ds
≤ -(τ2 - τ1)^{-1} [∫_{t-τ(t)}^{t-τ1} Z(s)ds]^T R3 [∫_{t-τ(t)}^{t-τ1} Z(s)ds] - τ2^{-1} [∫_{t-τ2}^{t-τ(t)} Z(s)ds]^T R3 [∫_{t-τ2}^{t-τ(t)} Z(s)ds],   (23)

and

-∫_{t-r}^{t} f^T(x(s))Cf(x(s))ds ≤ -∫_{t-r(t)}^{t} f^T(x(s))Cf(x(s))ds = -r(t)^{-1} r(t) ∫_{t-r(t)}^{t} f^T(x(s))Cf(x(s))ds
≤ -r^{-1} [∫_{t-r(t)}^{t} f(x(s))ds]^T C [∫_{t-r(t)}^{t} f(x(s))ds].   (24)
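The cross-term bounds used next rely on Lemma 2, which for scalars reduces to Young's inequality 2xy ≤ gx² + y²/g for any g > 0. A brute-force numerical check in Python (an illustration, not part of the paper):

```python
import itertools

# Scalar instance of Lemma 2: 2*x*y <= g*x**2 + y**2/g for every g > 0,
# since g*x**2 + y**2/g - 2*x*y = (sqrt(g)*x - y/sqrt(g))**2 >= 0.
ok = True
for x, y, g in itertools.product([-2.0, -0.3, 0.0, 0.5, 1.7], repeat=3):
    if g <= 0:                 # G must be positive definite, i.e. g > 0
        continue
    ok = ok and 2.0 * x * y <= g * x * x + y * y / g + 1e-12
print(ok)   # True for every sampled (x, y) and admissible g
```

The matrix version (8) follows from the same square-completion argument with G^{1/2}X - G^{-1/2}Y in place of the scalar square root.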

From (6), we know that

[fi(xi(t)) - li- xi(t)][fi(xi(t)) - li+ xi(t)] ≤ 0,
[fi(xi(t - τ(t))) - li- xi(t - τ(t))][fi(xi(t - τ(t))) - li+ xi(t - τ(t))] ≤ 0.

Then, for any Tj = diag{t1j, t2j, ..., tnj} ≥ 0, j = 1, 2, it follows that

0 ≤ -2 Σ_{i=1}^{n} ti1 [fi(xi(t)) - li- xi(t)][fi(xi(t)) - li+ xi(t)]
  = -2f^T(x(t))T1 f(x(t)) + 2x^T(t)L2T1 f(x(t)) - 2x^T(t)L1T1 x(t),   (25)

0 ≤ -2 Σ_{i=1}^{n} ti2 [fi(xi(t - τ(t))) - li- xi(t - τ(t))][fi(xi(t - τ(t))) - li+ xi(t - τ(t))]
  = -2f^T(x(t - τ(t)))T2 f(x(t - τ(t))) + 2x^T(t - τ(t))L2T2 f(x(t - τ(t))) - 2x^T(t - τ(t))L1T2 x(t - τ(t)),   (26)

where L1 = diag{l1- l1+, l2- l2+, ..., ln- ln+} and L2 = diag{l1- + l1+, l2- + l2+, ..., ln- + ln+}.

From the Leibniz-Newton formula and Eq. (11), the following equations are true for any matrices U, M, N and S with appropriate dimensions:

2ζ^T(t)U [x(t) - x(t - τ1) - ∫_{t-τ1}^{t} Z(s)ds] = 0,   (27)
2ζ^T(t)M [x(t - τ1) - x(t - τ(t)) - ∫_{t-τ(t)}^{t-τ1} Z(s)ds] = 0,   (28)
2ζ^T(t)N [x(t - τ(t)) - x(t - τ2) - ∫_{t-τ2}^{t-τ(t)} Z(s)ds] = 0,   (29)
2ζ^T(t)S [-Ax(t) + W1 f(x(t)) + W2 f(x(t - τ(t))) + W3 Z(t - h(t)) + W4 ∫_{t-r(t)}^{t} f(x(s))ds - Z(t)] = 0,   (30)

where

ζ^T(t) = [x^T(t), f^T(x(t)), x^T(t - τ(t)), x^T(t - h(t)), x^T(t - τ1), x^T(t - τ2), f^T(x(t - τ(t))), Z^T(t), Z^T(t - h(t)), (∫_{t-r(t)}^{t} f(x(s))ds)^T].

Now, by Lemmas 2 and 3, the following inequalities hold:

-2ζ^T(t)U ∫_{t-τ1}^{t} Z(s)ds ≤ ζ^T(t)U [τ1^{-1}(R1 + R2)]^{-1} U^T ζ(t) + [∫_{t-τ1}^{t} Z(s)ds]^T [τ1^{-1}(R1 + R2)] [∫_{t-τ1}^{t} Z(s)ds],   (31)

-2ζ^T(t)M ∫_{t-τ(t)}^{t-τ1} Z(s)ds ≤ ζ^T(t)M [(τ2 - τ1)^{-1}(R2 + R3)]^{-1} M^T ζ(t) + [∫_{t-τ(t)}^{t-τ1} Z(s)ds]^T [(τ2 - τ1)^{-1}(R2 + R3)] [∫_{t-τ(t)}^{t-τ1} Z(s)ds],   (32)

and

-2ζ^T(t)N ∫_{t-τ2}^{t-τ(t)} Z(s)ds ≤ ζ^T(t)N [τ2^{-1}(R2 + R3)]^{-1} N^T ζ(t) + [∫_{t-τ2}^{t-τ(t)} Z(s)ds]^T [τ2^{-1}(R2 + R3)] [∫_{t-τ2}^{t-τ(t)} Z(s)ds].   (33)

Substituting (16)-(33) into (15), we get

V̇(x(t), t) ≤ ζ^T(t) {Ψ1 + U [τ1^{-1}(R1 + R2)]^{-1} U^T + M [(τ2 - τ1)^{-1}(R2 + R3)]^{-1} M^T + N [τ2^{-1}(R2 + R3)]^{-1} N^T} ζ(t) = ζ^T(t) Ψ̄ ζ(t),

where

Ψ̄ = Ψ1 + U [τ1^{-1}(R1 + R2)]^{-1} U^T + M [(τ2 - τ1)^{-1}(R2 + R3)]^{-1} M^T + N [τ2^{-1}(R2 + R3)]^{-1} N^T.   (34)

By Lemma 1 (Schur complement), LMI (12) is equivalent to Ψ̄ < 0. It is obvious that for Ψ̄ < 0 there exists a scalar γ > 0 such that V̇(x(t), t) ≤ -γ‖x(t)‖². Thus, if Ψ < 0, the system (11) is globally asymptotically stable. The proof is completed.

Remark 1. This result does not require the derivative of the discrete time-varying delay τ(t) to be less than one, which is needed in [20]. Hence, the new result here is less conservative.

B. Robust Stability Criteria for the Uncertain System

Since parameter uncertainties often appear in modeled neural networks, it is necessary and important to consider the robust stability of system (4). Based on Theorem 1, we can derive the following theorem for the robust stability of system (4) with uncertainties satisfying (2).

Theorem 2. For given scalars τ1, τ2, τd, h, hd and r satisfying (3), the system (4) is globally robustly asymptotically stable if there exist matrices P > 0, Qi = Qi^T > 0 (i = 1, 2, ..., 6), Ri = Ri^T > 0 (i = 1, 2, 3), C = C^T > 0, Ui, Mi, Ni, Si (i = 1, 2, ..., 10), diagonal matrices K > 0, Ti > 0 (i = 1, 2), and a scalar ε > 0 such that the following LMI holds:

[ Ψ̂1  U   M   N   Γ
  ∗   Ψ2  0   0   0
  ∗   ∗   Ψ3  0   0    < 0,   (35)
  ∗   ∗   ∗   Ψ4  0
  ∗   ∗   ∗   ∗  -εI ]

where Ψ̂1 = Ψ̂1^T = [Ξ̂i,j]_{10×10} with Ξ̂i,j = Ξi,j for all entries except the diagonal entries corresponding to the uncertain coefficient matrices,

Ξ̂1,1 = Ξ1,1 + εB1^T B1, Ξ̂2,2 = Ξ2,2 + εB2^T B2, Ξ̂7,7 = Ξ7,7 + εB3^T B3, Ξ̂9,9 = Ξ9,9 + εB4^T B4, Ξ̂10,10 = Ξ10,10 + εB5^T B5,

and Γ^T = [Si H]_{1×10}, i = 1, 2, ..., 10.

Proof. Replacing A, W1, W2, W3, W4 in Theorem 1 by A + HF(t)B1, W1 + HF(t)B2, W2 + HF(t)B3, W3 + HF(t)B4, W4 + HF(t)B5, the system (4) is globally robustly asymptotically stable if the following inequality holds:

Ψ + Ω1 F(t) Ω2 + Ω2^T F^T(t) Ω1^T < 0,   (36)

where Ψ is defined in Theorem 1,

Ω1 = [S1H, S2H, S3H, S4H, S5H, S6H, S7H, S8H, S9H, S10H, 0, 0, 0]^T

and

Ω2 = [-B1, B2, 0, 0, 0, 0, B3, 0, B4, B5, 0, 0, 0].

From Lemma 4, (36) holds for all F^T(t)F(t) ≤ I if and only if there exists a scalar ε > 0 such that

Ψ + ε^{-1} Ω1 Ω1^T + ε Ω2^T Ω2 < 0.   (37)

It follows from the Schur complement that (37) is equivalent to LMI (35). Then, if the LMI given in (35) holds, the neutral system (4) is globally robustly asymptotically stable. The proof is completed.

Remark 2. Theorems 1 and 2 give stability criteria for neutral-type neural networks with time-varying delays in which the discrete delay varies within an interval; in particular, since no bound below one is imposed on τ̇(t), both fast and slowly time-varying delays are covered.

IV. ILLUSTRATIVE EXAMPLES

In this section, we present two numerical examples to illustrate the validity of our results.

Example 1. Consider the two-dimensional delayed neutral-type neural network given in (11) with parameters

A = [2 0; 0 2], W1 = [α 0.3; 0.3 0.5], W2 = [0.2 0.1; 0.1 0.2],
W3 = [0.15 0.1; -0.1 0.15], W4 = [0.3 0.2; 0.1 0.3],
l1- = 0.3, l1+ = 0.7, l2- = 0.3, l2+ = 0.7,

where α is a positive scalar. Let τ1 = 0.5, τ2 = h = 2, hd = 0.5, r = 0.3, L1 = 0.21I, L2 = I. The problem is to determine the maximum allowable bound of α that guarantees the stability of system (11). Solving the LMI (12) given in Theorem 1 yields the maximum bound αM of α, listed in Table I for different values of τd.

TABLE I. CALCULATED UPPER BOUNDS OF α FOR VARIOUS τd

τd:  0      0.4    0.8    1.6
αM:  2.471  2.470  2.469  2.466
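The stability predicted in Table I can also be observed in simulation. The following Python sketch (not from the paper) integrates the nominal system (11) with the Example 1 parameters by forward Euler; for simplicity the delays are frozen at constants, and the activation fi(x) = 0.3x + 0.4·tanh(x) is one concrete choice satisfying the sector bounds l- = 0.3, l+ = 0.7. All of these modelling choices, including α = 2.0 (below the Table I bound) and the initial history, are illustrative assumptions.

```python
import math

A  = [[2.0, 0.0], [0.0, 2.0]]
W1 = [[2.0, 0.3], [0.3, 0.5]]      # alpha = 2.0, inside the Table I bound
W2 = [[0.2, 0.1], [0.1, 0.2]]
W3 = [[0.15, 0.1], [-0.1, 0.15]]
W4 = [[0.3, 0.2], [0.1, 0.3]]

dt, T = 0.005, 20.0
tau, h, r = 2.0, 2.0, 0.3          # delays frozen at tau(t)=tau2, h(t)=h, r(t)=r
n_tau, n_h, n_r = int(tau / dt), int(h / dt), int(r / dt)

def mv(M, v):                       # 2x2 matrix-vector product
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def f(v):                           # activation in the sector [0.3, 0.7]
    return [0.3 * x + 0.4 * math.tanh(x) for x in v]

hist_x = [[1.0, -0.8]] * (n_tau + 1)    # constant initial history phi
hist_dx = [[0.0, 0.0]] * (n_h + 1)      # past derivatives (neutral term)

for _ in range(int(T / dt)):
    x, x_tau = hist_x[-1], hist_x[-1 - n_tau]
    dx_h = hist_dx[-1 - n_h]
    dist = [0.0, 0.0]                   # Riemann sum of f(x(s)) over [t-r, t]
    for j in range(n_r):
        fj = f(hist_x[-1 - j])
        dist[0] += dt * fj[0]
        dist[1] += dt * fj[1]
    dx = [-mv(A, x)[i] + mv(W1, f(x))[i] + mv(W2, f(x_tau))[i]
          + mv(W3, dx_h)[i] + mv(W4, dist)[i] for i in range(2)]
    hist_x.append([x[i] + dt * dx[i] for i in range(2)])
    hist_dx.append(dx)

x0_norm = math.hypot(*hist_x[n_tau])    # state at t = 0
xT_norm = math.hypot(*hist_x[-1])       # state at t = T
print(xT_norm < x0_norm)                # True: the state contracts toward 0
```

A single trajectory is of course no substitute for the LMI certificate; the simulation merely illustrates the behaviour the theorem guarantees for all admissible delays and activations.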

Example 2. Consider the two-dimensional uncertain neutral-type neural network (4) with discrete interval and distributed time-varying delays

ẋ(t) = -(A + ΔA(t))x(t) + (W1 + ΔW1(t))f(x(t)) + (W2 + ΔW2(t))f(x(t - τ(t))) + (W3 + ΔW3(t))ẋ(t - h(t)) + (W4 + ΔW4(t)) ∫_{t-r(t)}^{t} f(x(s))ds,   (38)

where

A = [1.2 0; 0 1], W1 = [-1.2 0.2; 0.26 0.1], W2 = [-0.1 0.2; 0.2 -0.1],
W3 = [-0.2 0; -0.2 0.1], W4 = [0.8 0; 0 0.8],
B1 = B2 = B3 = B4 = B5 = 0.2I, H = I,
l1- = 0.2, l1+ = 0.8, l2- = 0.2, l2+ = 0.8.

Let τ1 = 0.8, τ2 = 1.6, τd = hd = 0.8, r = 0.2, h = 0.9, L1 = 0.16I, L2 = I. By applying Theorem 2, a feasible solution exists which guarantees the global robust stability of system (38). Owing to the length limit of the paper, we show only part of the feasible solution:

P = [39.1345 5.4708; 5.4708 28.6173], Q1 = [9.6385 1.0086; 1.0086 6.4538],
Q2 = [13.3056 -0.2246; -0.2246 9.0956], Q3 = [11.0897 1.3563; 1.3563 8.5016],
Q4 = [10.1669 1.0214; 1.0214 10.1669], Q5 = [14.9123 1.4304; 1.4304 9.6367],
Q6 = [17.6123 2.0489; 2.0489 10.2962], R1 = [5.3173 1.4545; 1.4545 6.2822],
R2 = [3.9203 1.1083; 1.1083 4.6760], R3 = [6.4508 1.4526; 1.4526 7.4101],
C = [10.1738 0.7400; 0.7400 15.5877],
K = diag{21.0818, 21.0818}, T1 = diag{23.4141, 23.4141}, T2 = diag{10.1564, 10.1564},
ε = 11.8443.

V. CONCLUSIONS

This paper has studied the global robust stability analysis problem for uncertain neutral-type neural networks with discrete interval and distributed time-varying delays. Novel global robust stability criteria were derived based on the Lyapunov-Krasovskii functional method and the free-weight matrices technique. The proposed criteria are expressed in terms of LMIs, which can be checked easily using the MATLAB LMI control toolbox. Finally, numerical examples were given to show the usefulness of the proposed stability results.

ACKNOWLEDGMENT

This work is supported by the Fundamental Research Funds for the Central Universities (No. CDJXS11172237).

REFERENCES

[1] S. Arik, "An analysis of global asymptotic stability of delayed cellular neural networks," IEEE Trans Neural Network, vol. 13, no. 5, pp. 1239-1242, Sep 2002.
[2] T. Amemiya and W. B. Ma, "Global asymptotic stability of nonlinear delayed systems of neutral type," Nonlinear Anal-Theor, vol. 54, no. 1, pp. 83-91, Jul 2003.
[3] V. Singh, "A generalized LMI-based approach to the global asymptotic stability of delayed cellular neural networks," IEEE Trans Neural Network, vol. 15, no. 1, pp. 223-225, Jan 2004.
[4] D. G. Yang and C. Y. Hu, "Novel delay-dependent global asymptotic stability condition of Hopfield neural networks with delays," Comput Math Appl, vol. 57, no. 11-12, pp. 1978-1984, Jun 2009.
[5] Y. J. Shen, H. Yu and J. G. Jian, "Delay-dependent global asymptotic stability for delayed cellular neural networks," Commun Nonlinear Sci, vol. 14, no. 4, pp. 1057-1063, Apr 2009.
[6] Q. H. Zhou and L. Wan, "Exponential stability of stochastic delayed Hopfield neural networks," Appl Math Comput, vol. 199, no. 1, pp. 84-89, 2008.
[7] B. Y. Zhang, S. Y. Xu and Y. M. Li, "Delay-dependent robust exponential stability for uncertain recurrent neural networks with time-varying delays," Int J Neural Syst, vol. 17, no. 3, pp. 207-218, 2007.
[8] K. Y. Liu and H. Q. Zhang, "An improved global exponential stability criterion for delayed neural networks," Nonlinear Anal-Real, vol. 10, no. 4, pp. 2613-2619, Aug 2009.
[9] C. Peng and Y. C. Tian, "Delay-dependent robust stability criteria for uncertain systems with interval time-varying delay," J Comput Appl Math, vol. 214, no. 2, pp. 480-494, 2008.
[10] O. M. Kwon and J. H. Park, "Delay-range-dependent stabilization of uncertain dynamic systems with interval time-varying delays," Appl Math Comput, vol. 208, no. 1, pp. 58-68, 2009.
[11] S. H. Kim and P. Park, "Relaxed H-infinity stabilization conditions for discrete-time fuzzy systems with interval time-varying delays," IEEE Trans on Fuzzy Systems, vol. 17, no. 6, pp. 1441-1449, 2009.
[12] K. W. Yu and C. H. Lien, "Stability criteria for uncertain neutral systems with interval time-varying delays," Chaos Soliton Fract, vol. 38, no. 3, pp. 650-657, Nov 2008.
[13] Y. Y. Hou, T. L. Liao, C. H. Lien and J. J. Yan, "Stability analysis of neural networks with interval time-varying delays," Chaos, vol. 17, no. 3, p. 033120, Sep 2007.
[14] O. M. Kwon, J. H. Park and S. M. Lee, "On robust stability for uncertain neural networks with interval time-varying delays," IET Control Theory and Applications, vol. 2, no. 7, pp. 625-634, Jul 2008.
[15] O. M. Kwon and J. H. Park, "Exponential stability analysis for uncertain neural networks with interval time-varying delays," Appl Math Comput, vol. 212, no. 2, pp. 530-541, Jun 2009.
[16] J. Tian and X. Zhou, "Improved asymptotic stability criteria for neural networks with interval time-varying delay," Expert Syst Appl, vol. 37, no. 12, pp. 7521-7525, 2010.
[17] J. H. Park, O. M. Kwon and S. M. Lee, "LMI optimization approach on stability for delayed neural networks of neutral-type," Appl Math Comput, vol. 196, no. 1, pp. 236-244, Feb 2008.
[18] G. Wang, W. Chen and Y. Liu, "Delay-dependent global robust stability criterion for neural networks with neutral-type time-varying delays," Proceedings of the Institution of Mechanical Engineers Part I: Journal of Systems and Control Engineering, vol. 224, no. I4, pp. 321-327, 2010.
[19] J. H. Park and O. M. Kwon, "Global stability for neural networks of neutral-type with interval time-varying delays," Chaos Soliton Fract, vol. 41, no. 3, pp. 1174-1181, Aug 2009.
[20] R. Samidurai, S. Marshal Anthoni and K. Balachandran, "Global exponential stability of neutral-type impulsive neural networks with discrete and distributed delays," Nonlinear Analysis: Hybrid Systems, vol. 4, no. 1, pp. 103-112, 2010.

Guoquan Liu was born in 1982. He received the B.Sc. degree in electronic information engineering from Zhengzhou University, Zhengzhou, China, in 2005, and the M.Sc. degree in pattern recognition and intelligent systems from Sichuan University of Science and Engineering, Zigong, China, in 2008. He is now pursuing the Ph.D. degree at the College of Automation, Chongqing University, China. His research interests include nonlinear systems, neural networks, and stochastic stability analysis.

Simon X. Yang received the B.Sc. degree in Engineering Physics from Beijing University, China, in 1987, the first M.Sc. degree in Biophysics from the Chinese Academy of Sciences, Beijing, China, in 1990, the second M.Sc. degree in Electrical Engineering from the University of Houston, USA, in 1996, and the Ph.D. degree in Electrical and Computer Engineering from the University of Alberta, Edmonton, Canada, in 1999. He joined the University of Guelph in Canada in August 1999, right after his Ph.D. graduation. Currently he is a Professor and the Head of the Advanced Robotics and Intelligent Systems (ARIS) Laboratory at the University of Guelph. Prof. Yang's research expertise is in the areas of robotics, intelligent systems, control systems, sensing and multi-sensor fusion, and computational neuroscience. Dr. Yang has served as an Associate Editor of the IEEE Transactions on Neural Networks, the IEEE Transactions on Systems, Man, and Cybernetics, Part B, and the International Journal of Robotics and Automation, and has served as an Associate Editor or Editorial Board Member of several other international journals. He has been involved in the organization of many international conferences. He was a recipient of the Presidential Distinguished Professor Award at the University of Guelph, Canada.

Wei Fu received the B.Sc. degree in computer science and the M.Sc. degree in electromechanical engineering from Chongqing University, Chongqing, China, in 2001 and 2006, respectively. He has been a doctoral candidate at Chongqing University since March 2007. His research interests include networked control systems, nonlinear control, and predictive control.