Progress in Natural Science 19 (2009) 1333–1340 www.elsevier.com/locate/pnsc

Delay-probability-distribution-dependent robust stability analysis for stochastic neural networks with time-varying delay

Jie Fu, Huaguang Zhang *, Tiedong Ma

School of Information Science and Engineering, Northeastern University, Box 134, Shenyang 110004, China

Received 8 October 2008; received in revised form 7 November 2008; accepted 10 November 2008

* Corresponding author. Tel.: +86 24 83687762; fax: +86 24 83689605. E-mail address: [email protected] (H. Zhang).

doi:10.1016/j.pnsc.2008.11.012

Abstract

The delay-probability-distribution-dependent robust stability problem for a class of uncertain stochastic neural networks (SNNs) with time-varying delay is investigated. The information on the probability distribution of the time delay is considered and transformed into parameter matrices of the transformed SNN model. Based on a Lyapunov–Krasovskii functional and a stochastic analysis approach, a delay-probability-distribution-dependent sufficient condition is obtained in linear matrix inequality (LMI) form such that the delayed SNNs are robustly globally asymptotically stable in the mean-square sense for all admissible uncertainties. An important feature of the results is that the stability conditions depend both on the probability distribution of the delay and on the upper bound of the delay derivative, and this upper bound is allowed to be greater than or equal to 1. Finally, numerical examples are given to illustrate the effectiveness and reduced conservatism of the proposed method.

© 2009 National Natural Science Foundation of China and Chinese Academy of Sciences. Published by Elsevier Limited and Science in China Press. All rights reserved.

Keywords: Delay-probability-distribution-dependent; Stochastic neural networks; Time-varying delay; Linear matrix inequality

1. Introduction

The stability analysis problem for delayed NNs has received considerable research attention in the last decade, see Refs. [1–3], where the delay type can be constant, time-varying or distributed, and the stability criteria can be delay dependent or delay independent. Since delay-dependent methods make use of information on the length of the delays, they are generally less conservative than delay-independent ones. However, in a real system the time delay often occurs in a random fashion: if some values of the time delay are very large but the probability of the delay taking such large values is very small, then considering only the variation range of the time delay may lead to an overly conservative result. In addition, the probabilistic characteristics of the delay, such as a Bernoulli or a Poisson distribution, can be obtained by statistical methods. Therefore, it is both necessary and feasible to investigate delays described by a probability distribution. Recently, the stability of discrete NNs and discrete SNNs with probability-distribution-dependent delay has been investigated in Refs. [4,10], respectively, but neither of them considers the information of the delay derivative.

It is well known that two kinds of disturbances are unavoidable when modeling NNs: one is parameter uncertainty, the other is stochastic disturbance. For the stability analysis of SNNs with parameter uncertainty, some results have recently been published, see Refs. [5–12]. However, Refs. [5,6,10,11] do not consider the information of the delay derivative. In Refs. [7–9], the information of the derivative is taken into consideration, but the upper bound μ of the derivative must be smaller than 1. In the case μ ≥ 1, the results in the aforementioned literature either are not applicable [9] or discard the information



of the derivative of the time delays [7,8], which is obviously unreasonable. Therefore, it is essential and significant to investigate how to eliminate the constraint on the upper bound of the delay derivative in SNNs.

In this paper, we investigate the robust stability problem for a class of uncertain SNNs with time-varying delay. The information of the delay probability distribution is introduced into the SNN model, and a new method is proposed to eliminate the constraint on the upper bound of the delay derivative. Less conservative stability criteria are presented such that the SNNs with probability-distribution delay are robustly globally asymptotically stable in the mean-square sense for all admissible uncertainties. All results are established in the form of LMIs and can be solved easily using interior-point algorithms [13]. Numerical examples show the reduced conservatism of our results.

2. Problem formulation and preliminaries

We consider the following uncertain SNN with time-varying delay:

dx(t) = [−A(t)x(t) + W₀(t)f(x(t)) + W₁(t)f(x(t − τ(t)))] dt + [Cx(t) + Dx(t − τ(t))] dw(t)
x(t) = ξ(t),  t ∈ [−τ_M, 0]    (1)

where x(t) = [x₁(t), x₂(t), …, xₙ(t)]ᵀ ∈ ℝⁿ is the neural state vector associated with the neurons and f(x(t)) = [f₁(x₁(t)), f₂(x₂(t)), …, fₙ(xₙ(t))]ᵀ ∈ ℝⁿ is the neuron activation function. A(t) = A + ΔA(t), W₀(t) = W₀ + ΔW₀(t), W₁(t) = W₁ + ΔW₁(t); A = diag(a₁, a₂, …, aₙ) is a diagonal matrix with positive entries aᵢ > 0; W₀ ∈ ℝⁿˣⁿ and W₁ ∈ ℝⁿˣⁿ are the connection weight matrix and the delayed connection weight matrix, respectively; C ∈ ℝⁿˣⁿ and D ∈ ℝⁿˣⁿ are known real constant matrices; ΔA(t), ΔW₀(t) and ΔW₁(t) represent the parameter uncertainties; and w(t) is a one-dimensional Brownian motion. The time-varying delay τ(t) satisfies

0 ≤ τ(t) ≤ τ_M    (2)

Assumption 1. Considering the information of the probability distribution of the time delay τ(t), two sets and two functions are defined:

Ω₁ = {t : τ(t) ∈ [0, τ₀)},  Ω₂ = {t : τ(t) ∈ [τ₀, τ_M]}

τ₁(t) = τ(t) for t ∈ Ω₁ and τ₁(t) = τ₁ for t ∈ Ω₂;  τ₂(t) = τ(t) for t ∈ Ω₂ and τ₂(t) = τ₂ for t ∈ Ω₁    (3)

τ̇₁(t) ≤ μ₁ < ∞,  τ̇₂(t) ≤ μ₂ < ∞    (4)

where τ₀ ∈ [0, τ_M], τ₁ ∈ [0, τ₀) and τ₂ ∈ [τ₀, τ_M]. It is easy to see that t ∈ Ω₁ means that the event τ(t) ∈ [0, τ₀) occurs and t ∈ Ω₂ means that the event τ(t) ∈ [τ₀, τ_M] occurs. Therefore, a stochastic variable α(t) can be defined as

α(t) = 1 for t ∈ Ω₁,  α(t) = 0 for t ∈ Ω₂    (5)

Assumption 2. α(t) is a Bernoulli distributed sequence with

Prob{α(t) = 1} = E{α(t)} = α₀,  Prob{α(t) = 0} = 1 − E{α(t)} = 1 − α₀

where 0 ≤ α₀ ≤ 1 is a constant and E{α(t)} is the expectation of α(t).

Remark 1. From Assumption 2, it is easy to see that E{α(t) − α₀} = 0 and E{(α(t) − α₀)²} = α₀(1 − α₀).

By Assumptions 1 and 2, system (1) can be rewritten as

dx(t) = [−A(t)x(t) + W₀(t)f(x(t)) + α(t)W₁(t)f(x(t − τ₁(t))) + (1 − α(t))W₁(t)f(x(t − τ₂(t)))] dt
        + [Cx(t) + α(t)Dx(t − τ₁(t)) + (1 − α(t))Dx(t − τ₂(t))] dw(t)
x(t) = ξ(t),  t ∈ [−τ_M, 0]    (6)

which is equivalent to

dx(t) = [−A(t)x(t) + W₀(t)f(x(t)) + α₀W₁(t)f(x(t − τ₁(t))) + (1 − α₀)W₁(t)f(x(t − τ₂(t)))
        + (α(t) − α₀)(W₁(t)f(x(t − τ₁(t))) − W₁(t)f(x(t − τ₂(t))))] dt
        + [Cx(t) + α₀Dx(t − τ₁(t)) + (1 − α₀)Dx(t − τ₂(t)) + (α(t) − α₀)(Dx(t − τ₁(t)) − Dx(t − τ₂(t)))] dw(t)
x(t) = ξ(t),  t ∈ [−τ_M, 0]    (7)
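To make the Bernoulli reformulation (7) concrete, the following Python fragment, which is not part of the paper, simulates the nominal system (ΔA = ΔW₀ = ΔW₁ = 0) with an Euler–Maruyama scheme, drawing the indicator α(t) at every step, and then checks the two moment identities of Remark 1 by Monte Carlo. All matrices, delays, α₀ and step sizes are made-up illustrative values.

```python
# Hypothetical sketch (not from the paper): Euler-Maruyama simulation of the nominal
# system (7) with a Bernoulli delay indicator alpha(t), plus a Monte Carlo check of
# the moment identities in Remark 1.
import numpy as np

rng = np.random.default_rng(0)

n, dt, T = 2, 1e-3, 5.0
tau0, tauM, alpha0 = 0.4, 0.8, 0.6          # illustrative delay bounds and probability

A  = np.diag([4.0, 5.0])                    # made-up system matrices
W0 = np.array([[0.4, -0.7], [0.1, 0.0]])
W1 = np.array([[-0.2, 0.6], [0.5, -0.1]])
C  = 0.5 * np.eye(n)
D  = np.array([[0.0, 0.5], [0.5, 0.0]])
f  = np.tanh                                # a sector-bounded activation (Assumption 3)

steps = int(T / dt)
hist  = int(tauM / dt)                      # history buffer covering [-tau_M, 0]
x = np.zeros((steps + hist, n))
x[:hist] = 0.3                              # constant initial function xi(t)

tau1, tau2 = 0.2, 0.6                       # tau1(t) in [0, tau0), tau2(t) in [tau0, tau_M]
d1, d2 = int(tau1 / dt), int(tau2 / dt)

alphas = rng.random(steps) < alpha0         # Bernoulli indicator alpha(t) at each step
for k in range(steps):
    i = hist + k
    a = float(alphas[k])
    xd = a * x[i - d1] + (1.0 - a) * x[i - d2]           # delayed state actually used
    drift = -A @ x[i - 1] + W0 @ f(x[i - 1]) + W1 @ f(xd)
    diff  = C @ x[i - 1] + D @ xd
    dw = rng.normal(scale=np.sqrt(dt), size=n)           # Brownian increment
    x[i] = x[i - 1] + drift * dt + diff * dw

# Remark 1: E{alpha - alpha0} = 0 and E{(alpha - alpha0)^2} = alpha0 (1 - alpha0)
a = alphas.astype(float)
print("mean(alpha - alpha0)     ~", (a - alpha0).mean())
print("mean((alpha - alpha0)^2) ~", ((a - alpha0) ** 2).mean(),
      " vs alpha0*(1-alpha0) =", alpha0 * (1 - alpha0))
print("final |x(T)| =", np.linalg.norm(x[-1]))
```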

Assumption 3. The neuron activation function fᵢ(·) satisfies

lᵢ⁻ ≤ (fᵢ(xᵢ) − fᵢ(yᵢ))/(xᵢ − yᵢ) ≤ lᵢ⁺,  ∀xᵢ, yᵢ ∈ ℝ, xᵢ ≠ yᵢ, i = 1, 2, …, n    (8)

which implies that

(fᵢ(xᵢ) − lᵢ⁺xᵢ)(fᵢ(xᵢ) − lᵢ⁻xᵢ) ≤ 0    (9)

where lᵢ⁻ and lᵢ⁺ are some constants.

Assumption 4. The parameter uncertainties ΔA(t), ΔW₀(t) and ΔW₁(t) are of the form

[ΔA(t)  ΔW₀(t)  ΔW₁(t)] = EF(t)[H₁  H₂  H₃]    (10)

where E, H₁, H₂ and H₃ are known constant matrices and F(t) satisfies Fᵀ(t)F(t) ≤ I for all t ∈ ℝ.

Notation. For τ > 0, C([−τ, 0]; ℝⁿ) denotes the family of continuous functions φ from [−τ, 0] to ℝⁿ with the norm ‖φ‖ = sup_{−τ≤θ≤0} |φ(θ)|. (Ω, F, {F_t}_{t≥0}, P) is a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions. Let L²_{F₀}([−τ, 0]; ℝⁿ) be the family of F₀-measurable C([−τ, 0]; ℝⁿ)-valued random variables ξ = {ξ(θ) : −τ ≤ θ ≤ 0} such that sup_{−τ≤θ≤0} E|ξ(θ)|² < ∞, where E{·} stands for the mathematical expectation operator with respect to the given probability measure P. I is the identity matrix of appropriate dimensions.
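As a small illustration of the admissible uncertainty class in Assumption 4, the Python fragment below, which is not from the paper, draws a norm-bounded F(t) with Fᵀ(t)F(t) ≤ I and forms the corresponding perturbations ΔA, ΔW₀ and ΔW₁; the matrices E, H₁, H₂ and H₃ are made-up placeholders.

```python
# Hypothetical sketch of Assumption 4: [dA dW0 dW1] = E F(t) [H1 H2 H3], F(t)' F(t) <= I.
# E, H1, H2, H3 are made-up placeholders, not values from the paper.
import numpy as np

rng = np.random.default_rng(2)

E  = np.array([[0.1], [0.1]])          # n x r channel through which the uncertainty enters
H1 = np.array([[0.2, 0.3]])            # r x n output channels
H2 = np.array([[0.2, 0.3]])
H3 = np.array([[0.2, 0.3]])

# draw an r x r F with largest singular value <= 1, so that F' F <= I
F = rng.normal(size=(1, 1))
F = F / max(1.0, np.linalg.norm(F, 2))

dA, dW0, dW1 = E @ F @ H1, E @ F @ H2, E @ F @ H3
print("sigma_max(F) =", np.linalg.norm(F, 2))           # <= 1 by construction
print("dA =\n", dA, "\ndW0 =\n", dW0, "\ndW1 =\n", dW1)
```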

Let V(x_t, t) ∈ C²,¹(ℝⁿ × ℝ₊) be a positive function which is continuously twice differentiable in x and once differentiable in t. The stochastic derivative operator L acting on V(x_t, t) is defined by

LV(x_t, t) = V_t(x_t, t) + V_x(x_t, t)g(t) + (1/2)σᵀ(t)V_xx(x_t, t)σ(t)

where

V_t(x_t, t) = ∂V(x_t, t)/∂t,  V_x(x_t, t) = (∂V(x_t, t)/∂x₁, …, ∂V(x_t, t)/∂xₙ),  V_xx(x_t, t) = (∂²V(x_t, t)/∂xᵢ∂xⱼ)_{n×n}

g(t) = −Ax(t) + W₀f(x(t)) + α₀W₁f(x(t − τ₁(t))) + (1 − α₀)W₁f(x(t − τ₂(t))) + (α(t) − α₀)(W₁f(x(t − τ₁(t))) − W₁f(x(t − τ₂(t))))

σ(t) = Cx(t) + α₀Dx(t − τ₁(t)) + (1 − α₀)Dx(t − τ₂(t)) + (α(t) − α₀)(Dx(t − τ₁(t)) − Dx(t − τ₂(t)))

Definition 1. For system (7) and any ξ ∈ L²_{F₀}([−τ, 0]; ℝⁿ), the trivial solution is robustly globally asymptotically stable in the mean-square sense for all admissible uncertainties if

lim_{t→∞} E|x(t; ξ)|² = 0    (11)

where x(t; ξ) is the solution of system (7) at time t under the initial state ξ.

Lemma 1 [14]. For any G ∈ ℝⁿˣⁿ, G > 0, scalars b and k > 0, and vector function x : [b − k, b] → ℝⁿ such that the integrals below are well defined,

(∫_{b−k}^{b} x(s) ds)ᵀ G (∫_{b−k}^{b} x(s) ds) ≤ k ∫_{b−k}^{b} xᵀ(s)Gx(s) ds

Lemma 2 [15]. Let U, V(t), W and Z be real matrices of appropriate dimensions with Z satisfying Z = Zᵀ. Then

Z + UV(t)W + WᵀVᵀ(t)Uᵀ < 0  for all Vᵀ(t)V(t) ≤ I

if and only if there exists a scalar ε > 0 such that

Z + ε⁻¹UUᵀ + εWᵀW < 0
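The integral inequality of Lemma 1 is easy to sanity-check numerically. The following Python fragment, which is not part of the paper, approximates both sides of the inequality by Riemann sums for an arbitrarily chosen G > 0 and vector function x(·); all concrete values are assumptions made for the illustration.

```python
# Hypothetical numerical check (not from the paper) of Lemma 1 via Riemann sums:
#   (int x(s) ds)' G (int x(s) ds)  <=  k * int x(s)' G x(s) ds   over an interval of length k.
import numpy as np

rng = np.random.default_rng(1)

n, k, m = 3, 1.7, 4000                 # dimension, interval length, grid points
ds = k / m
s = np.linspace(0.0, k, m)

M = rng.normal(size=(n, n))
G = M @ M.T + n * np.eye(n)            # a positive definite G

# an arbitrary smooth vector function x(s) on the interval
x = np.stack([np.sin(3 * s), np.cos(2 * s) * s, np.exp(-s)], axis=1)

ix  = x.sum(axis=0) * ds                           # int x(s) ds (Riemann sum)
lhs = ix @ G @ ix
rhs = k * np.einsum("si,ij,sj->", x, G, x) * ds    # k * int x' G x ds

print("LHS =", lhs, " RHS =", rhs, " LHS <= RHS:", lhs <= rhs)
```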

3. Main results

For presentation convenience, in the following we denote L₁ = diag(l₁⁺l₁⁻, …, lₙ⁺lₙ⁻) and L₂ = diag((l₁⁺ + l₁⁻)/2, …, (lₙ⁺ + lₙ⁻)/2).

Theorem 1. Given scalars τ_M ≥ τ₀ > 0, μ₁ and 0 < α₀ < 1 satisfying α₀μ₁ < 1, the SNN described by (7) with ΔA(t) = ΔW₀(t) = ΔW₁(t) = 0 is globally asymptotically stable in the mean-square sense if there exist positive definite matrices P > 0, Q_j > 0 (j = 1, 2, 3), R₁ > 0, R₂ > 0, S₁ > 0, S₂ > 0, positive diagonal matrices K_j > 0 (j = 1, 2, 3) and real matrices M_i, N_i (i = 1, 2, …, 6) of appropriate dimensions such that the following LMI holds:

      [ Ψ₁   M    M̃    M̄    N    N̄    Ā  ]
      [ *   −S₁   0    0    0    0    0  ]
      [ *    *   −S₁   0    0    0    0  ]
Ξ =   [ *    *    *   −S₁   0    0    0  ]  < 0    (12)
      [ *    *    *    *   −S₂   0    0  ]
      [ *    *    *    *    *   −S₂   0  ]
      [ *    *    *    *    *    *   −P̄ ]

where Ψ₁ = (Ψ_{i,j}) is a symmetric 15 × 15 block matrix whose nonzero blocks are

Ψ_{1,1} = Q₁ + Q₂ + Q₃ + M₁ + M₁ᵀ − N₅A − AᵀN₅ᵀ − K₁L₁
Ψ_{1,2} = −M₁ + M₂ᵀ,  Ψ_{1,7} = P − AᵀN₆ᵀ − N₅,  Ψ_{1,8} = M₁
Ψ_{1,13} = N₅W₀ + K₁L₂,  Ψ_{1,14} = α₀N₅W₁,  Ψ_{1,15} = (1 − α₀)N₅W₁
Ψ_{2,2} = −(1 − α₀μ₁)Q₁ − M₂ − M₂ᵀ + M₃ + M₃ᵀ
Ψ_{2,3} = −M₃ + M₄ᵀ,  Ψ_{2,8} = M₂,  Ψ_{2,9} = M₃
Ψ_{3,3} = α₀(1 − α₀)DᵀP̄D − M₄ − M₄ᵀ + M₅ + M₅ᵀ − K₂L₁
Ψ_{3,4} = −M₅ + M₆ᵀ,  Ψ_{3,5} = −α₀(1 − α₀)DᵀP̄D,  Ψ_{3,9} = M₄,  Ψ_{3,10} = M₅,  Ψ_{3,14} = K₂L₂
Ψ_{4,4} = −Q₂ − M₆ − M₆ᵀ + N₁ + N₁ᵀ
Ψ_{4,5} = −N₁ + N₂ᵀ,  Ψ_{4,10} = M₆,  Ψ_{4,11} = N₁
Ψ_{5,5} = α₀(1 − α₀)DᵀP̄D − N₂ − N₂ᵀ + N₃ + N₃ᵀ − K₃L₁
Ψ_{5,6} = −N₃ + N₄ᵀ,  Ψ_{5,11} = N₂,  Ψ_{5,12} = N₃,  Ψ_{5,15} = K₃L₂
Ψ_{6,6} = −Q₃ − N₄ − N₄ᵀ,  Ψ_{6,12} = N₄
Ψ_{7,7} = τ₀R₁ + (τ_M − τ₀)R₂ − N₆ − N₆ᵀ
Ψ_{7,13} = N₆W₀,  Ψ_{7,14} = α₀N₆W₁,  Ψ_{7,15} = (1 − α₀)N₆W₁
Ψ_{8,8} = −(1/(α₀τ₀))R₁,  Ψ_{9,9} = −(1/(τ₀(1 − α₀)))R₁,  Ψ_{10,10} = −(1/τ₀)R₁
Ψ_{11,11} = Ψ_{12,12} = −(1/(τ_M − τ₀))R₂
Ψ_{13,13} = −K₁,  Ψ_{14,14} = −K₂,  Ψ_{15,15} = −K₃

and

Mᵀ = [M₁ᵀ M₂ᵀ 0 0 0 0 0 0 0 0 0 0 0 0 0]
M̃ᵀ = [0 M₃ᵀ M₄ᵀ 0 0 0 0 0 0 0 0 0 0 0 0]
M̄ᵀ = [0 0 M₅ᵀ M₆ᵀ 0 0 0 0 0 0 0 0 0 0 0]
Nᵀ = [0 0 0 N₁ᵀ N₂ᵀ 0 0 0 0 0 0 0 0 0 0]
N̄ᵀ = [0 0 0 0 N₃ᵀ N₄ᵀ 0 0 0 0 0 0 0 0 0]
Āᵀ = [P̄C 0 α₀P̄D 0 (1 − α₀)P̄D 0 0 0 0 0 0 0 0 0 0]
P̄ = P + τ₀S₁ + (τ_M − τ₀)S₂
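Conditions such as (12) are feasibility problems over the cone of positive definite matrices and are solved numerically with interior-point or first-order SDP solvers [13]. The Python fragment below, which is not from the paper, only illustrates how such a test is posed with CVXPY: rather than assembling the full 15-block matrix Ξ, it checks a much simpler classical delay-independent LMI for the deterministic system dx/dt = −Ax(t) + W₁x(t − τ); the matrices A and W₁ are made-up values. The full condition (12) would be assembled block by block in exactly the same way with cp.bmat.

```python
# Hypothetical sketch (not the paper's condition): posing a small LMI feasibility test
# with CVXPY, in the same spirit as (12). It checks the classical delay-independent
# condition for dx/dt = -A x(t) + W1 x(t - tau):
#   [ -A'P - P A + Q    P W1 ]
#   [      W1' P         -Q  ]  <  0,   P > 0, Q > 0.
import cvxpy as cp
import numpy as np

A  = np.diag([4.0, 5.0])                 # illustrative values only
W1 = np.array([[-0.2, 0.6], [0.5, -0.1]])
n  = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
eps = 1e-6

lmi = cp.bmat([[-A.T @ P - P @ A + Q, P @ W1],
               [W1.T @ P,             -Q    ]])

constraints = [P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               lmi << -eps * np.eye(2 * n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("LMI feasible:", prob.status == "optimal")
```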

Proof. Choose a Lyapunov–Krasovskii functional candidate V(x_t, t) = Σ_{i=1}^{4} V_i(x_t, t), where

V₁(x_t, t) = xᵀ(t)Px(t)

V₂(x_t, t) = ∫_{t−α₀τ₁(t)}^{t} xᵀ(s)Q₁x(s) ds + ∫_{t−τ₀}^{t} xᵀ(s)Q₂x(s) ds + ∫_{t−τ_M}^{t} xᵀ(s)Q₃x(s) ds

V₃(x_t, t) = ∫_{−τ₀}^{0} ∫_{t+s}^{t} gᵀ(θ)R₁g(θ) dθ ds + ∫_{−τ_M}^{−τ₀} ∫_{t+s}^{t} gᵀ(θ)R₂g(θ) dθ ds

V₄(x_t, t) = ∫_{−τ₀}^{0} ∫_{t+s}^{t} σᵀ(θ)S₁σ(θ) dθ ds + ∫_{−τ_M}^{−τ₀} ∫_{t+s}^{t} σᵀ(θ)S₂σ(θ) dθ ds

Then the stochastic differential of V(x_t, t) along (7) (see [16]) can be obtained as

dV(x_t, t) = LV(x_t, t) dt + 2xᵀ(t)Pσ(t) dw(t)    (13)

Furthermore, we can get

LV₁(x_t, t) = 2xᵀ(t)Pg(t) + σᵀ(t)Pσ(t)    (14)

LV₂(x_t, t) ≤ xᵀ(t)(Q₁ + Q₂ + Q₃)x(t) − (1 − α₀μ₁)xᵀ(t − α₀τ₁(t))Q₁x(t − α₀τ₁(t)) − xᵀ(t − τ₀)Q₂x(t − τ₀) − xᵀ(t − τ_M)Q₃x(t − τ_M)    (15)

LV₃(x_t, t) = gᵀ(t)[τ₀R₁ + (τ_M − τ₀)R₂]g(t) − ∫_{t−τ₀}^{t} gᵀ(s)R₁g(s) ds − ∫_{t−τ_M}^{t−τ₀} gᵀ(s)R₂g(s) ds. Splitting the two integrals at t − α₀τ₁(t), t − τ₁(t) and t − τ₂(t), respectively, and applying Lemma 1 to each piece gives

LV₃(x_t, t) ≤ gᵀ(t)[τ₀R₁ + (τ_M − τ₀)R₂]g(t)
  − (1/(α₀τ₀)) (∫_{t−α₀τ₁(t)}^{t} g(s) ds)ᵀ R₁ (∫_{t−α₀τ₁(t)}^{t} g(s) ds)
  − (1/(τ₀(1 − α₀))) (∫_{t−τ₁(t)}^{t−α₀τ₁(t)} g(s) ds)ᵀ R₁ (∫_{t−τ₁(t)}^{t−α₀τ₁(t)} g(s) ds)
  − (1/τ₀) (∫_{t−τ₀}^{t−τ₁(t)} g(s) ds)ᵀ R₁ (∫_{t−τ₀}^{t−τ₁(t)} g(s) ds)
  − (1/(τ_M − τ₀)) (∫_{t−τ₂(t)}^{t−τ₀} g(s) ds)ᵀ R₂ (∫_{t−τ₂(t)}^{t−τ₀} g(s) ds)
  − (1/(τ_M − τ₀)) (∫_{t−τ_M}^{t−τ₂(t)} g(s) ds)ᵀ R₂ (∫_{t−τ_M}^{t−τ₂(t)} g(s) ds)    (16)

LV₄(x_t, t) ≤ σᵀ(t)[τ₀S₁ + (τ_M − τ₀)S₂]σ(t) − ∫_{t−α₀τ₁(t)}^{t} σᵀ(s)S₁σ(s) ds − ∫_{t−τ₁(t)}^{t−α₀τ₁(t)} σᵀ(s)S₁σ(s) ds − ∫_{t−τ₀}^{t−τ₁(t)} σᵀ(s)S₁σ(s) ds − ∫_{t−τ₂(t)}^{t−τ₀} σᵀ(s)S₂σ(s) ds − ∫_{t−τ_M}^{t−τ₂(t)} σᵀ(s)S₂σ(s) ds    (17)

For arbitrary matrices M_i, N_i (i = 1, 2, …, 6) with compatible dimensions, we have

[xᵀ(t)M₁ + xᵀ(t − α₀τ₁(t))M₂][x(t) − x(t − α₀τ₁(t)) − ∫_{t−α₀τ₁(t)}^{t} g(s) ds − ∫_{t−α₀τ₁(t)}^{t} σ(s) dw(s)] = 0    (18)

[xᵀ(t − α₀τ₁(t))M₃ + xᵀ(t − τ₁(t))M₄][x(t − α₀τ₁(t)) − x(t − τ₁(t)) − ∫_{t−τ₁(t)}^{t−α₀τ₁(t)} g(s) ds − ∫_{t−τ₁(t)}^{t−α₀τ₁(t)} σ(s) dw(s)] = 0    (19)

[xᵀ(t − τ₁(t))M₅ + xᵀ(t − τ₀)M₆][x(t − τ₁(t)) − x(t − τ₀) − ∫_{t−τ₀}^{t−τ₁(t)} g(s) ds − ∫_{t−τ₀}^{t−τ₁(t)} σ(s) dw(s)] = 0    (20)

[xᵀ(t − τ₀)N₁ + xᵀ(t − τ₂(t))N₂][x(t − τ₀) − x(t − τ₂(t)) − ∫_{t−τ₂(t)}^{t−τ₀} g(s) ds − ∫_{t−τ₂(t)}^{t−τ₀} σ(s) dw(s)] = 0    (21)

[xᵀ(t − τ₂(t))N₃ + xᵀ(t − τ_M)N₄][x(t − τ₂(t)) − x(t − τ_M) − ∫_{t−τ_M}^{t−τ₂(t)} g(s) ds − ∫_{t−τ_M}^{t−τ₂(t)} σ(s) dw(s)] = 0    (22)

[xᵀ(t)N₅ + gᵀ(t)N₆][−Ax(t) + W₀f(x(t)) + α₀W₁f(x(t − τ₁(t))) + (1 − α₀)W₁f(x(t − τ₂(t))) + (α(t) − α₀)(W₁f(x(t − τ₁(t))) − W₁f(x(t − τ₂(t)))) − g(t)] = 0    (23)

For formulas (18)–(22), we further have

−2ζᵀ(t)Mρ₁ ≤ ζᵀ(t)MS₁⁻¹Mᵀζ(t) + ρ₁ᵀS₁ρ₁    (24)
−2ζᵀ(t)M̃ρ₂ ≤ ζᵀ(t)M̃S₁⁻¹M̃ᵀζ(t) + ρ₂ᵀS₁ρ₂    (25)
−2ζᵀ(t)M̄ρ₃ ≤ ζᵀ(t)M̄S₁⁻¹M̄ᵀζ(t) + ρ₃ᵀS₁ρ₃    (26)
−2ζᵀ(t)Nρ₄ ≤ ζᵀ(t)NS₂⁻¹Nᵀζ(t) + ρ₄ᵀS₂ρ₄    (27)
−2ζᵀ(t)N̄ρ₅ ≤ ζᵀ(t)N̄S₂⁻¹N̄ᵀζ(t) + ρ₅ᵀS₂ρ₅    (28)

where

ζ(t) = [xᵀ(t), xᵀ(t − α₀τ₁(t)), xᵀ(t − τ₁(t)), xᵀ(t − τ₀), xᵀ(t − τ₂(t)), xᵀ(t − τ_M), gᵀ(t), ∫_{t−α₀τ₁(t)}^{t} gᵀ(s) ds, ∫_{t−τ₁(t)}^{t−α₀τ₁(t)} gᵀ(s) ds, ∫_{t−τ₀}^{t−τ₁(t)} gᵀ(s) ds, ∫_{t−τ₂(t)}^{t−τ₀} gᵀ(s) ds, ∫_{t−τ_M}^{t−τ₂(t)} gᵀ(s) ds, fᵀ(x(t)), fᵀ(x(t − τ₁(t))), fᵀ(x(t − τ₂(t)))]ᵀ

ρ₁ = ∫_{t−α₀τ₁(t)}^{t} σ(s) dw(s),  ρ₂ = ∫_{t−τ₁(t)}^{t−α₀τ₁(t)} σ(s) dw(s),  ρ₃ = ∫_{t−τ₀}^{t−τ₁(t)} σ(s) dw(s),  ρ₄ = ∫_{t−τ₂(t)}^{t−τ₀} σ(s) dw(s),  ρ₅ = ∫_{t−τ_M}^{t−τ₂(t)} σ(s) dw(s)

From (9), for any matrices K_i = diag(k_{i1}, k_{i2}, …, k_{in}) ≥ 0, i = 1, 2, 3, it is easy to obtain

[x(t); f(x(t))]ᵀ Σ_{j=1}^{n} k_{1j} [lⱼ⁺lⱼ⁻ eⱼeⱼᵀ, −((lⱼ⁺ + lⱼ⁻)/2) eⱼeⱼᵀ; −((lⱼ⁺ + lⱼ⁻)/2) eⱼeⱼᵀ, eⱼeⱼᵀ] [x(t); f(x(t))]
+ Σ_{i=1}^{2} [x(t − τ_i(t)); f(x(t − τ_i(t)))]ᵀ Σ_{j=1}^{n} k_{(i+1)j} [lⱼ⁺lⱼ⁻ eⱼeⱼᵀ, −((lⱼ⁺ + lⱼ⁻)/2) eⱼeⱼᵀ; −((lⱼ⁺ + lⱼ⁻)/2) eⱼeⱼᵀ, eⱼeⱼᵀ] [x(t − τ_i(t)); f(x(t − τ_i(t)))] ≤ 0

that is,

−[x(t); f(x(t))]ᵀ [K₁L₁, −K₁L₂; −K₁L₂, K₁] [x(t); f(x(t))] − Σ_{i=1}^{2} [x(t − τ_i(t)); f(x(t − τ_i(t)))]ᵀ [K_{i+1}L₁, −K_{i+1}L₂; −K_{i+1}L₂, K_{i+1}] [x(t − τ_i(t)); f(x(t − τ_i(t)))] ≥ 0    (29)

where eⱼ denotes the j-th column of the identity matrix.

By Remark 1, it is easy to see that

E{σᵀ(t)(P + τ₀S₁ + (τ_M − τ₀)S₂)σ(t)}
= E{[Cx(t) + α₀Dx(t − τ₁(t)) + (1 − α₀)Dx(t − τ₂(t))]ᵀ P̄ [Cx(t) + α₀Dx(t − τ₁(t)) + (1 − α₀)Dx(t − τ₂(t))]
  + 2(α(t) − α₀)[Cx(t) + α₀Dx(t − τ₁(t)) + (1 − α₀)Dx(t − τ₂(t))]ᵀ P̄ [Dx(t − τ₁(t)) − Dx(t − τ₂(t))]
  + (α(t) − α₀)²[Dx(t − τ₁(t)) − Dx(t − τ₂(t))]ᵀ P̄ [Dx(t − τ₁(t)) − Dx(t − τ₂(t))]}
= [Cx(t) + α₀Dx(t − τ₁(t)) + (1 − α₀)Dx(t − τ₂(t))]ᵀ P̄ [Cx(t) + α₀Dx(t − τ₁(t)) + (1 − α₀)Dx(t − τ₂(t))]
  + α₀(1 − α₀)[Dx(t − τ₁(t)) − Dx(t − τ₂(t))]ᵀ P̄ [Dx(t − τ₁(t)) − Dx(t − τ₂(t))]    (30)

Since, by the Itô isometry,

E{(∫_{t−α₀τ₁(t)}^{t} σ(s) dw(s))ᵀ S₁ (∫_{t−α₀τ₁(t)}^{t} σ(s) dw(s))} = E{∫_{t−α₀τ₁(t)}^{t} σᵀ(s)S₁σ(s) ds}    (31)

E{(∫_{t−τ₁(t)}^{t−α₀τ₁(t)} σ(s) dw(s))ᵀ S₁ (∫_{t−τ₁(t)}^{t−α₀τ₁(t)} σ(s) dw(s))} = E{∫_{t−τ₁(t)}^{t−α₀τ₁(t)} σᵀ(s)S₁σ(s) ds}    (32)


E{(∫_{t−τ₀}^{t−τ₁(t)} σ(s) dw(s))ᵀ S₁ (∫_{t−τ₀}^{t−τ₁(t)} σ(s) dw(s))} = E{∫_{t−τ₀}^{t−τ₁(t)} σᵀ(s)S₁σ(s) ds}    (33)

E{(∫_{t−τ₂(t)}^{t−τ₀} σ(s) dw(s))ᵀ S₂ (∫_{t−τ₂(t)}^{t−τ₀} σ(s) dw(s))} = E{∫_{t−τ₂(t)}^{t−τ₀} σᵀ(s)S₂σ(s) ds}    (34)

E{(∫_{t−τ_M}^{t−τ₂(t)} σ(s) dw(s))ᵀ S₂ (∫_{t−τ_M}^{t−τ₂(t)} σ(s) dw(s))} = E{∫_{t−τ_M}^{t−τ₂(t)} σᵀ(s)S₂σ(s) ds}    (35)

substituting (14)–(17) into (13), adding (18)–(23) and (29) to (13), taking expectation on both sides of (13), and then using (24)–(28) and (30)–(35), we can get

E dV(x_t, t) = E LV(x_t, t) ≤ ζᵀ(t)Ψζ(t)    (36)

where Ψ = Ψ₁ + MS₁⁻¹Mᵀ + M̃S₁⁻¹M̃ᵀ + M̄S₁⁻¹M̄ᵀ + NS₂⁻¹Nᵀ + N̄S₂⁻¹N̄ᵀ + ĀP̄⁻¹Āᵀ. By the Schur complement, it is easy to derive that (12) is equivalent to Ψ < 0, which guarantees E LV(x_t, t) < 0 whenever ζ(t) ≠ 0; hence the SNN described by (7) is globally asymptotically stable in the mean-square sense. This completes the proof. □

Remark 2. In Refs. [7,8], when μ ≥ 1, Q is no longer helpful to improve the stability condition since (1 − μ)Q is nonnegative definite. However, by Theorem 1 of this paper, when μ₁ ≥ 1, if α₀μ₁ < 1 is satisfied, then −(1 − α₀μ₁)Q₁ is still negative definite. Therefore, the constraint μ₁ < 1 is eliminated.

Next, we discuss the stability of SNNs with parameter uncertainties. Based on the result obtained in Theorem 1, the following stability criterion, which is robust for all admissible uncertainties, can be derived easily.

Theorem 2. Given scalars τ_M ≥ τ₀ > 0, μ₁ and 0 < α₀ < 1 satisfying α₀μ₁ < 1, the SNN described by (7) is robustly globally asymptotically stable in the mean-square sense for all admissible uncertainties if there exist positive definite matrices P > 0, Q_j > 0 (j = 1, 2, 3), R₁ > 0, R₂ > 0, S₁ > 0, S₂ > 0, positive diagonal matrices K_j > 0 (j = 1, 2, 3), real matrices M_i, N_i (i = 1, 2, …, 6) of appropriate dimensions and a positive scalar γ such that the following LMI holds:

Ξ̄ < 0    (37)

where Ξ̄ has the same block structure as Ξ in (12), with Ψ₁ replaced by

Ψ̄₁ = Ψ₁ + diag(γH₁ᵀH₁, 0, …, 0, γH₂ᵀH₂, γH₃ᵀH₃, γH₃ᵀH₃)

(the eleven zero blocks occupy positions 2–12) and with the column block H, built from the blocks N₅E and N₆E of (12), appended together with the diagonal block −γI.

Proof. Replacing A, W₀ and W₁ in (12) by A + EF(t)H₁, W₀ + EF(t)H₂ and W₁ + EF(t)H₃, respectively, the resulting matrix inequality can be written as

Ξ + HF̃(t)Γ + ΓᵀF̃ᵀ(t)Hᵀ < 0    (38)

where F̃(t) = diag(F(t), F(t), …, F(t)) and Γ = [H₁ 0 ⋯ 0 H₂ H₃ H₃ 0 ⋯ 0]. It follows from Lemma 2 that the matrix inequality (38) is equivalent to

Ξ + γ⁻¹HHᵀ + γΓᵀΓ < 0    (39)

for a scalar γ > 0. By the Schur complement, (37) is equivalent to (39). Then, similar to the proof of Theorem 1, the result of Theorem 2 can be obtained. □

Remark 3. As a special case, when α₀ = 1 (or α₀ = 0), by setting Q₁ = Q₂ = R₂ = S₂ = 0 and τ_M = τ₀ (or Q₁ = R₁ = S₁ = 0) in the Lyapunov–Krasovskii functional of Theorem 2, the corresponding robust stability criteria can be obtained easily; the proof is similar to that of Theorem 2 and is omitted.

4. Numerical examples

Example 1. Consider the uncertain SNN (7) with the following parameters (the example in Ref. [6]):

A = [4 0; 0 5],  W₀ = [0.4 0.7; 0.1 0],  W₁ = [0.2 0.6; 0.5 0.1],
C = [0.5 0; 0 0.5],  D = [0 0.5; 0.5 0],  E = [0.1 0.1]ᵀ,
H₁ = [0.2 0.3],  H₂ = [0.2 0.3],  H₃ = [0.2 0.3],  L₁ = 0,  L₂ = 0.25I

By Assumption 3, L₁ = 0 and L₂ = 0.25I, which is equivalent to L = 0.5I in Ref. [6]. For various μ₁, the computed upper bounds τ_M which guarantee the robust stability of system (7) are listed in Table 1.

Table 1. Allowable upper bound of τ_M for various μ₁ (Theorem 2 with τ₀ = 0.6).

Method                  μ₁ = 0.97   μ₁ = 1   μ₁ = 1.5   μ₁ = 2   Unknown μ₁
[6]                     –           –        –          –        0.419
[7]                     0.785       0.779    0.779      0.779    0.779
[8]                     0.771       0.746    0.746      0.746    0.746
Theorem 2, α₀ = 0.2     1.294       1.294    1.292      1.291    1.279
Theorem 2, α₀ = 0.4     1.338       1.337    1.324      1.299    1.281
Theorem 2, α₀ = 0.6     1.430       1.426    1.303      1.292    1.292
Theorem 2, α₀ = 0.8     1.615       1.579    1.323      1.323    1.323

From Table 1, when the information of the delay probability distribution is considered, the allowable upper bound τ_M obtained for the various values of α₀ is larger than those in Refs. [6–8], where only the variation range of the delay is considered. In addition, when μ₁ ≥ 1, the stability criterion of Ref. [6] fails and those of Refs. [7,8] become derivative independent, whereas in this paper the constraint μ₁ < 1 is eliminated provided α₀μ₁ < 1. Therefore, Theorem 2 of this paper is less conservative than the criteria in Refs. [6–8].

Example 2. Consider the uncertain SNN (7) with the following parameters:

A = [7 0; 0 6],  W₀ = [0.2 4; 0.1 0.3],  W₁ = [0.4 0.2; 0.1 0.7],
C = [0.3 0; 0 0.3],  D = [0.5 0.1; 0.5 0],  E = [0.1 0.1]ᵀ,
H₁ = H₂ = H₃ = [1 1]

Take the activation functions as f₁(x₁) = tanh(0.2x₁) and f₂(x₂) = tanh(x₂). It is obvious that 0 ≤ d tanh(0.2x₁)/dx₁ ≤ 0.2 and 0 ≤ d tanh(x₂)/dx₂ ≤ 1, so L₁ = diag(0, 0) and L₂ = diag(0.1, 0.5) (a small numerical sketch of this sector-bound computation is given after this example). For τ₀ = 0.4, various μ₁ and various delay probability distributions α₀, the computed upper bounds τ_M which guarantee the robust stability of system (7) are listed in Table 2.

Table 2. Allowable upper bound of τ_M for τ₀ = 0.4.

Results     μ₁ = 0.2   μ₁ = 0.6   μ₁ = 1   μ₁ = 1.5   μ₁ = 2   μ₁ = 2.5
α₀ = 0.2    0.972      0.972      0.971    0.970      0.968    0.967
α₀ = 0.5    1.092      1.083      1.071    1.044      1.024    1.024
α₀ = 0.8    1.545      1.490      1.342    1.242      1.242    1.242
α₀ = 0.99   5.523      5.181      3.529    3.243      3.243    3.243

From Table 2, it can be seen that this paper overcomes the constraint μ₁ < 1 whenever α₀μ₁ < 1.
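The Python fragment below, which is not part of the paper, is the sector-bound sketch referred to above: it approximates l₁⁻, l₁⁺, l₂⁻, l₂⁺ for the two activations of Example 2 from their derivatives on a finite interval and forms L₁ and L₂. The interval and grid are arbitrary choices; the exact bounds are l⁻ = 0 and l⁺ = 0.2 and 1, as stated in the text.

```python
# Hypothetical sketch (not from the paper): approximating the sector bounds of the
# Example 2 activations f1(x) = tanh(0.2 x), f2(x) = tanh(x) from their derivatives,
# then forming L1 = diag(l1+ l1-, l2+ l2-) and L2 = diag((l1+ + l1-)/2, (l2+ + l2-)/2).
import numpy as np

x = np.linspace(-20.0, 20.0, 400001)
d1 = 0.2 * (1.0 - np.tanh(0.2 * x) ** 2)     # f1'(x), takes values in (0, 0.2]
d2 = 1.0 - np.tanh(x) ** 2                   # f2'(x), takes values in (0, 1]

lminus = np.array([d1.min(), d2.min()])      # ~ 0 on a large enough interval
lplus  = np.array([d1.max(), d2.max()])      # ~ [0.2, 1]

L1 = np.diag(lplus * lminus)                 # ~ diag(0, 0)
L2 = np.diag((lplus + lminus) / 2.0)         # ~ diag(0.1, 0.5)
print("l^- ~", lminus.round(4), " l^+ ~", lplus.round(4))
print("L1 ~\n", L1.round(4), "\nL2 ~\n", L2.round(4))
```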

5. Conclusions

The problem of robust stability for uncertain SNNs with probability-distribution-dependent time-varying delay has been addressed in this paper. New stability criteria have been proposed to guarantee the robust global asymptotic stability of the SNNs. The probability distribution of the time-varying delay is introduced into the stability criteria, and the new method eliminates the constraint that the derivative of the delay must be smaller than 1. Numerical examples show the effectiveness and reduced conservatism of the method.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (60534010, 60572070, 60774048, 60804006 and 60728307), the Program for Cheung Kong Scholars and Innovative Research Groups of China (60521003), the Research Fund for the Doctoral Program of China Higher Education (20070145015) and the National High Technology Research and Development Program of China (2006AA04Z183).

References

[1] Zhang HG, Wang ZS. Global asymptotic stability of delayed cellular neural networks. IEEE Trans Neural Network 2007;18(3):947–50.
[2] He Y, Liu GP, Rees D, et al. Stability analysis for neural networks with time-varying interval delay. IEEE Trans Neural Network 2007;18(6):1850–4.
[3] Zhang HG, Wang ZS, Liu DR. Robust exponential stability of recurrent neural networks with multiple time-varying delays. IEEE Trans Circuits Syst II Exp Briefs 2007;54(8):730–4.
[4] Yue D, Zhang YJ, Tian EG, et al. Delay-distribution-dependent exponential stability criteria for discrete-time recurrent neural networks with stochastic delay. IEEE Trans Neural Network 2008;19(7):1299–306.
[5] Huang H, Feng G. Delay-dependent stability for uncertain stochastic neural networks with time-varying delay. Physica A 2007;381(15):93–103.
[6] Huang H, Feng G. Corrigendum to: "Delay-dependent stability for uncertain stochastic neural networks with time-varying delay" [Physica A 381 (2007) 93–103]. Physica A 2008;387(5):1431–2.
[7] Chen WH, Lu XM. Mean square exponential stability of uncertain stochastic delayed neural networks. Phys Lett A 2008;372(7):1061–9.
[8] Li HY, Chen B, Zhou Q, et al. Robust exponential stability for uncertain stochastic neural networks with discrete and distributed time-varying delays. Phys Lett A 2008;372(19):3385–94.
[9] Zhang JH, Shi P, Qiu JQ. Novel robust stability criteria for uncertain stochastic Hopfield neural networks with time-varying delays. Nonlinear Anal 2007;8:1349–57.
[10] Zhang YJ, Yue D, Tian EG. Robust delay-distribution-dependent stability of discrete-time stochastic neural networks with time-varying delay. Neurocomputing 2009;72(4–6):1265–73.
[11] Liu YR, Wang ZD, Liu XH. Robust stability of discrete-time stochastic neural networks with time-varying delays. Neurocomputing 2008;71(4):823–33.
[12] Zhang HG, Wang YC. Stability analysis of Markovian jumping stochastic Cohen–Grossberg neural networks with mixed time delays. IEEE Trans Neural Network 2008;19(2):366–70.
[13] Boyd S, El Ghaoui L, Feron E, et al. Linear matrix inequalities in system and control theory. Philadelphia (PA): SIAM; 1994.
[14] Zhang HG, Wang YC, Liu DR. Delay-dependent guaranteed cost control for uncertain stochastic fuzzy systems with multiple time delays. IEEE Trans Syst Man Cybern B 2008;38(1):126–40.
[15] Xie L, Fu M, de Souza CE. H∞ control and quadratic stabilization of systems with parameter uncertainty via output feedback. IEEE Trans Automat Control 1992;37(8):1253–6.
[16] Mao XR. Stochastic differential equations and applications. West Sussex, UK: Horwood; 1997.
[17] Itô K, McKean HP. Diffusion processes and their sample paths. Berlin: Springer; 1965.