Applied Mathematics and Computation 218 (2012) 11472–11482


New robust stability results for bidirectional associative memory neural networks with multiple time delays

Sibel Senan (a,*), Sabri Arik (b), Derong Liu (c)

(a) Istanbul University, Department of Computer Engineering, 34320 Avcilar, Istanbul, Turkey
(b) Isik University, Department of Electrical and Electronics Engineering, 34980 Sile, Istanbul, Turkey
(c) University of Illinois at Chicago, Department of Electrical and Computer Engineering, Chicago, IL 60607, USA

Keywords: Equilibrium and stability analysis; Bidirectional associative memory neural networks; Lyapunov functionals

Abstract

In this paper, the robust stability problem is investigated for a class of bidirectional associative memory (BAM) neural networks with multiple time delays. By employing suitable Lyapunov functionals and using the upper bound norm for the interconnection matrices of the neural network system, some novel sufficient conditions ensuring the existence, uniqueness and global robust stability of the equilibrium point are derived. The obtained results impose constraint conditions on the system parameters of the neural network that are independent of the delay parameters. Some numerical examples and simulation results are given to demonstrate the applicability and effectiveness of our results, and to compare them with previous robust stability results derived in the literature.

© 2012 Elsevier Inc. All rights reserved.

1. Introduction

In recent years, neural networks have received considerable attention because of their successful applications in image processing, associative memories, optimization problems and other engineering areas [1,2]. Such applications rely on the qualitative stability properties of the designed neural network; therefore, stability analysis plays an important role in the design and application of neural networks. On the other hand, time delays occur in VLSI implementations of neural networks due to the finite switching speed of neuron amplifiers and the finite speed of signal propagation. It is also known that working with delayed versions of neural networks is important for solving some classes of motion-related optimization problems. However, it has been revealed that time delays may cause instability and oscillation in neural networks. For these reasons, it is of great importance to study the equilibrium and stability properties of neural networks in the presence of time delays. Some results concerning the dynamical behavior of various neural networks with or without delay have been reported in [3-20] and the references therein.

We should also point out that, in hardware implementations of neural networks, the network parameters may be subject to changes due to the tolerances of the electronic components employed in the design. In such cases, it is desirable that the stability properties of the neural network not be affected by small deviations in the parameter values; in other words, the neural network must be globally robustly stable. Global robust stability of standard neural network models with time delays has been studied by many researchers, and some important robust stability results have been reported in [21-26].

Bidirectional associative memory (BAM) neural networks were first introduced by Kosko [27,28]. A BAM neural network is composed of neurons arranged in two layers.
The neurons in one layer are fully interconnected to the neurons in the other layer, while there are no interconnections among neurons in the same layer. The network uses forward and backward information flow to produce an associative search for stored stimulus-response associations. One beneficial characteristic of the BAM is its ability to recall stored pattern pairs in the presence of noise. One may refer to [29] for the detailed memory architecture and examples of BAM neural networks. This class of networks has promising applications in pattern recognition and artificial intelligence, since it generalizes the single-layer auto-associative Hebbian correlator to a two-layer pattern-matched heteroassociative circuit [30]. Some of these applications require a well-defined computable solution for all possible initial states. From a mathematical point of view, this means that the equilibrium point of the designed neural network is globally asymptotically stable (GAS). The stability of BAM neural networks has been studied extensively in recent years, and many different sufficient conditions ensuring their stability have been given in [31-43]. However, many of the existing stability results for BAM neural networks are applicable only when a purely delayed neural network model is employed. In recently published papers [44-48], a hybrid BAM neural network model in which both instantaneous and delayed signaling occur was considered.

In this paper, we study the equilibrium and robust stability properties of hybrid bidirectional associative memory neural networks with multiple time delays. By employing more general types of suitable Lyapunov-Krasovskii functionals and using the upper bound norm for the interconnection matrices of the neural system, we obtain some novel delay-independent sufficient conditions for the existence, uniqueness and global robust asymptotic stability of the equilibrium point for hybrid BAM neural networks with time delays. Some numerical examples are also given to show that our conditions can be considered as alternatives to the previous stability results derived in the literature.

* Corresponding author. E-mail addresses: [email protected] (S. Senan), [email protected] (S. Arik), [email protected] (D. Liu). doi:10.1016/j.amc.2012.04.075

2. Model description

The dynamical behavior of a hybrid BAM neural network with constant time delays is described by the following set of differential equations [47]:

$$\dot{u}_i(t) = -a_i u_i(t) + \sum_{j=1}^{m} w_{ji}\, g_j(z_j(t)) + \sum_{j=1}^{m} w^s_{ji}\, g_j(z_j(t-\tau_{ji})) + I_i, \quad \forall i,$$
$$\dot{z}_j(t) = -b_j z_j(t) + \sum_{i=1}^{n} v_{ij}\, g_i(u_i(t)) + \sum_{i=1}^{n} v^s_{ij}\, g_i(u_i(t-\sigma_{ij})) + J_j, \quad \forall j. \tag{1}$$

The BAM neural network model (1) can be regarded as a neural network with two layers: $n$ denotes the number of neurons in the first layer and $m$ the number of neurons in the second layer. $u_i(t)$ is the state of the $i$th neuron in the first layer and $z_j(t)$ is the state of the $j$th neuron in the second layer; $a_i$ and $b_j$ denote the neuron charging time constants and passive decay rates, respectively; $w_{ji}$, $w^s_{ji}$, $v_{ij}$ and $v^s_{ij}$ are the synaptic connection strengths; $g_i$ and $g_j$ represent the activation functions of the neurons and the propagational signal functions, respectively; and $I_i$ and $J_j$ are the exogenous inputs. It will be assumed that $a_i$, $b_j$, $w_{ji}$, $w^s_{ji}$, $v_{ij}$, $v^s_{ij}$, $\tau_{ji}$ and $\sigma_{ij}$ in system (1) are uncertain but bounded, and belong to the following intervals:

$$\begin{aligned}
A_I &:= \{A=\mathrm{diag}(a_i): 0<\underline{A}\le A\le \overline{A},\ \text{i.e.},\ 0<\underline{a}_i\le a_i\le \overline{a}_i,\ i=1,2,\ldots,n\},\\
B_I &:= \{B=\mathrm{diag}(b_j): 0<\underline{B}\le B\le \overline{B},\ \text{i.e.},\ 0<\underline{b}_j\le b_j\le \overline{b}_j,\ j=1,2,\ldots,m\},\\
W_I &:= \{W=(w_{ji})_{m\times n}: \underline{W}\le W\le \overline{W},\ \text{i.e.},\ \underline{w}_{ji}\le w_{ji}\le \overline{w}_{ji}\},\\
V_I &:= \{V=(v_{ij})_{n\times m}: \underline{V}\le V\le \overline{V},\ \text{i.e.},\ \underline{v}_{ij}\le v_{ij}\le \overline{v}_{ij}\},\\
W^s_I &:= \{W^s=(w^s_{ji})_{m\times n}: \underline{W}^s\le W^s\le \overline{W}^s,\ \text{i.e.},\ \underline{w}^s_{ji}\le w^s_{ji}\le \overline{w}^s_{ji}\},\\
V^s_I &:= \{V^s=(v^s_{ij})_{n\times m}: \underline{V}^s\le V^s\le \overline{V}^s,\ \text{i.e.},\ \underline{v}^s_{ij}\le v^s_{ij}\le \overline{v}^s_{ij}\},\\
\tau_I &:= \{\tau=(\tau_{ji})_{m\times n}: \underline{\tau}\le \tau\le \overline{\tau},\ \text{i.e.},\ \underline{\tau}_{ji}\le \tau_{ji}\le \overline{\tau}_{ji}\},\\
\sigma_I &:= \{\sigma=(\sigma_{ij})_{n\times m}: \underline{\sigma}\le \sigma\le \overline{\sigma},\ \text{i.e.},\ \underline{\sigma}_{ij}\le \sigma_{ij}\le \overline{\sigma}_{ij}\},
\end{aligned} \tag{2}$$

with $i=1,2,\ldots,n$ and $j=1,2,\ldots,m$ throughout.

In order to establish the desired stability properties of neural network model (1), it is first necessary to specify the class of activation functions. The activation functions employed in (1) are assumed to satisfy the following conditions:

(H1) There exist some positive constants $\ell_i$, $i=1,2,\ldots,n$, and $k_j$, $j=1,2,\ldots,m$, such that
$$0 \le \frac{g_i(x)-g_i(y)}{x-y} \le \ell_i, \qquad 0 \le \frac{g_j(\hat{x})-g_j(\hat{y})}{\hat{x}-\hat{y}} \le k_j$$
for all $x,y,\hat{x},\hat{y}\in\mathbb{R}$ with $x\ne y$ and $\hat{x}\ne\hat{y}$. This class of functions will be denoted by $g\in\mathcal{K}$.

(H2) There exist positive constants $M_i$, $i=1,2,\ldots,n$, and $L_j$, $j=1,2,\ldots,m$, such that $|g_i(u)|\le M_i$ and $|g_j(z)|\le L_j$ for all $u,z\in\mathbb{R}$. Note that this assumption implies that the activation functions are bounded; this class of functions will be denoted by $g\in\mathcal{B}$.
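As a concrete illustration (ours, not from the paper), the commonly used activation $g(x)=\tanh(x)$ satisfies (H1) with slope bound $\ell=1$ and (H2) with magnitude bound $M=1$; a quick randomized check:

```python
import math
import random

# Sanity check (illustrative): tanh satisfies (H1) with l = 1 and (H2)
# with M = 1.  The difference quotient of a function in class K must lie
# in [0, l]; boundedness gives class B.
random.seed(0)
pairs = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(1000)]
for x, y in pairs:
    if x == y:
        continue
    q = (math.tanh(x) - math.tanh(y)) / (x - y)
    assert 0 <= q <= 1              # (H1) with l_i = 1
    assert abs(math.tanh(x)) <= 1   # (H2) with M_i = 1
print("tanh satisfies (H1) and (H2) with l = M = 1 on the sample")
```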


3. Preliminaries

Let $v=(v_1,v_2,\ldots,v_n)^T\in\mathbb{R}^n$ be a column vector and $Q=(q_{ij})_{n\times n}$ be a real matrix. The three commonly used vector norms $\|v\|_1$, $\|v\|_2$, $\|v\|_\infty$ are defined as
$$\|v\|_1=\sum_{i=1}^{n}|v_i|, \qquad \|v\|_2=\sqrt{\sum_{i=1}^{n}v_i^2}, \qquad \|v\|_\infty=\max_{1\le i\le n}|v_i|.$$
The three commonly used matrix norms $\|Q\|_1$, $\|Q\|_2$, $\|Q\|_\infty$ are defined as
$$\|Q\|_1=\max_{1\le j\le n}\sum_{i=1}^{n}|q_{ij}|, \qquad \|Q\|_2=\left[\lambda_M(Q^TQ)\right]^{1/2}, \qquad \|Q\|_\infty=\max_{1\le i\le n}\sum_{j=1}^{n}|q_{ij}|.$$
For the vector $v$, $|v|$ will denote $|v|=(|v_1|,|v_2|,\ldots,|v_n|)^T$, and for the matrix $Q$, $|Q|$ will denote $|Q|=(|q_{ij}|)_{n\times n}$; $\lambda_m(Q)$ and $\lambda_M(Q)$ will denote the minimum and maximum eigenvalues of $Q$, respectively. If $P=(p_{ij})_{n\times n}$ and $Q=(q_{ij})_{n\times n}$ are two real symmetric matrices, then $P\le Q$ means that $v^TPv\le v^TQv$ for all $v\in\mathbb{R}^n$.

Lemma 1 [3]. For $A\in A_I:=\{A=(a_{ij}): \underline{A}\le A\le \overline{A},\ \text{i.e.},\ \underline{a}_{ij}\le a_{ij}\le \overline{a}_{ij},\ i,j=1,2,\ldots,n\}$, the following inequality holds:
$$\|A\|_2^2 \le \|A^*\|_2^2 + \|A_*\|_2^2 + 2\|A^{*T}|A_*|\|_2,$$
where $A^*=\frac{1}{2}(\overline{A}+\underline{A})$ and $A_*=\frac{1}{2}(\overline{A}-\underline{A})$.

Lemma 2 [21]. For any matrix $A\in[\underline{A},\overline{A}]$, the following inequality holds:
$$\|A\|_2 \le \|A^*\|_2 + \|A_*\|_2,$$
where $A^*=\frac{1}{2}(\overline{A}+\underline{A})$ and $A_*=\frac{1}{2}(\overline{A}-\underline{A})$.

Lemma 3. For any two vectors $x=(x_1,x_2,\ldots,x_n)^T$ and $t=(t_1,t_2,\ldots,t_n)^T$, the following inequality holds:
$$2x^Tt = 2t^Tx \le c\,x^Tx + \frac{1}{c}\,t^Tt,$$
where $c$ is any positive constant.

4. Global robust stability results

In this section, we present theorems giving conditions that guarantee the global asymptotic stability of the equilibrium point of neural system (1). Under Assumption (H2), the neural network defined by (1) always has an equilibrium point. Therefore, what remains to be proved is the global asymptotic stability of this equilibrium point. To this end, the equilibrium point of system (1) will be shifted to the origin. By using the transformation

$$x_i(\cdot)=u_i(\cdot)-u_i^*,\quad i=1,2,\ldots,n, \qquad y_j(\cdot)=z_j(\cdot)-z_j^*,\quad j=1,2,\ldots,m,$$

system (1) can be transformed into the following form:

$$\dot{x}_i(t) = -a_i x_i(t) + \sum_{j=1}^{m} w_{ji}\, f_j(y_j(t)) + \sum_{j=1}^{m} w^s_{ji}\, f_j(y_j(t-\tau_{ji})), \quad \forall i,$$
$$\dot{y}_j(t) = -b_j y_j(t) + \sum_{i=1}^{n} v_{ij}\, f_i(x_i(t)) + \sum_{i=1}^{n} v^s_{ij}\, f_i(x_i(t-\sigma_{ij})), \quad \forall j, \tag{3}$$

where $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^T$ and $y(t)=(y_1(t),y_2(t),\ldots,y_m(t))^T$ are the state vectors, $f(x(t))=(f_1(x_1(t)),\ldots,f_n(x_n(t)))^T$ and $f(y(t))=(f_1(y_1(t)),\ldots,f_m(y_m(t)))^T$, with the delayed terms $f_j(y_j(t-\tau_{ji}))$ and $f_i(x_i(t-\sigma_{ij}))$ understood componentwise. The functions $f_i(x_i)$ and $f_j(y_j)$ are of the form
$$f_i(x_i(\cdot))=g_i(x_i(\cdot)+u_i^*)-g_i(u_i^*),\quad i=1,2,\ldots,n,$$
$$f_j(y_j(\cdot))=g_j(y_j(\cdot)+z_j^*)-g_j(z_j^*),\quad j=1,2,\ldots,m.$$


It can be verified that the functions $f_i$ and $f_j$ satisfy the assumptions on $g_i$ and $g_j$; that is, $g_i\in\mathcal{K}$ and $g_j\in\mathcal{B}$ imply $f_i\in\mathcal{K}$ and $f_j\in\mathcal{B}$, respectively. We also note that $f_i(0)=0$, $i=1,2,\ldots,n$, and $f_j(0)=0$, $j=1,2,\ldots,m$. The equilibrium point of system (1) is globally asymptotically stable if the origin of system (3) is a globally asymptotically stable equilibrium point. Therefore, in order to prove the global asymptotic stability of the equilibrium point of system (1), it suffices to prove the global asymptotic stability of the origin of system (3). We can now proceed with the following result.

Theorem 1. Let the activation functions satisfy assumptions (H1) and (H2). Then, neural system (1) with (2) has a unique equilibrium point which is globally asymptotically robustly stable if there exist positive constants $\alpha$ and $\beta$ such that the network parameters of the system satisfy the following conditions:
$$\delta_i = m(2\underline{a}_i-\alpha-\beta) - \frac{1}{\beta}\,n\ell_i^2\left(\|V^*\|_2^2+\|V_*\|_2^2+2\|V^{*T}|V_*|\|_2\right) - \frac{1}{\alpha}\,n^2\ell_i^2\sum_{j=1}^{m}(\hat{v}^s_{ij})^2 > 0, \quad \forall i,$$
$$\Omega_j = n(2\underline{b}_j-\alpha-\beta) - \frac{1}{\beta}\,m k_j^2\left(\|W^*\|_2^2+\|W_*\|_2^2+2\|W^{*T}|W_*|\|_2\right) - \frac{1}{\alpha}\,m^2 k_j^2\sum_{i=1}^{n}(\hat{w}^s_{ji})^2 > 0, \quad \forall j,$$
where $W=(w_{ji})$, $V=(v_{ij})$, $W^*=\frac{1}{2}(\overline{W}+\underline{W})$, $W_*=\frac{1}{2}(\overline{W}-\underline{W})$, $V^*=\frac{1}{2}(\overline{V}+\underline{V})$, $V_*=\frac{1}{2}(\overline{V}-\underline{V})$, $\hat{v}^s_{ij}=\max\{|\underline{v}^s_{ij}|,|\overline{v}^s_{ij}|\}$ and $\hat{w}^s_{ji}=\max\{|\underline{w}^s_{ji}|,|\overline{w}^s_{ji}|\}$.

Proof. Define the following positive definite Lyapunov functional:
$$V(x(t),y(t)) = \sum_{i=1}^{n} m x_i^2(t) + \sum_{j=1}^{m} n y_j^2(t) + \frac{1}{\alpha}\sum_{i=1}^{n}\sum_{j=1}^{m} m^2(\hat{w}^s_{ji})^2\int_{t-\tau_{ji}}^{t} f_j^2(y_j(\eta))\,d\eta + \frac{1}{\alpha}\sum_{j=1}^{m}\sum_{i=1}^{n} n^2(\hat{v}^s_{ij})^2\int_{t-\sigma_{ij}}^{t} f_i^2(x_i(\xi))\,d\xi.$$

The derivative of $V(x(t),y(t))$ along the trajectories of system (3) is obtained as
$$\begin{aligned}
\dot{V}(x(t),y(t)) ={}& -\sum_{i=1}^{n} 2m a_i x_i^2(t) + \sum_{i=1}^{n}\sum_{j=1}^{m} 2m x_i(t) w_{ji} f_j(y_j(t)) + \sum_{i=1}^{n}\sum_{j=1}^{m} 2m x_i(t) w^s_{ji} f_j(y_j(t-\tau_{ji})) \\
& -\sum_{j=1}^{m} 2n b_j y_j^2(t) + \sum_{j=1}^{m}\sum_{i=1}^{n} 2n y_j(t) v_{ij} f_i(x_i(t)) + \sum_{j=1}^{m}\sum_{i=1}^{n} 2n y_j(t) v^s_{ij} f_i(x_i(t-\sigma_{ij})) \\
& +\frac{1}{\alpha}\sum_{i=1}^{n}\sum_{j=1}^{m} m^2(\hat{w}^s_{ji})^2 f_j^2(y_j(t)) - \frac{1}{\alpha}\sum_{i=1}^{n}\sum_{j=1}^{m} m^2(\hat{w}^s_{ji})^2 f_j^2(y_j(t-\tau_{ji})) \\
& +\frac{1}{\alpha}\sum_{j=1}^{m}\sum_{i=1}^{n} n^2(\hat{v}^s_{ij})^2 f_i^2(x_i(t)) - \frac{1}{\alpha}\sum_{j=1}^{m}\sum_{i=1}^{n} n^2(\hat{v}^s_{ij})^2 f_i^2(x_i(t-\sigma_{ij})) \\
\le{}& -\sum_{i=1}^{n} 2m a_i x_i^2(t) + \sum_{i=1}^{n}\sum_{j=1}^{m} 2m x_i(t) w_{ji} f_j(y_j(t)) + \sum_{i=1}^{n}\sum_{j=1}^{m} 2m x_i(t) w^s_{ji} f_j(y_j(t-\tau_{ji})) \\
& -\sum_{j=1}^{m} 2n b_j y_j^2(t) + \sum_{j=1}^{m}\sum_{i=1}^{n} 2n y_j(t) v_{ij} f_i(x_i(t)) + \sum_{j=1}^{m}\sum_{i=1}^{n} 2n y_j(t) v^s_{ij} f_i(x_i(t-\sigma_{ij})) \\
& +\frac{1}{\alpha}\sum_{j=1}^{m} m^2 k_j^2\Big(\sum_{i=1}^{n}(\hat{w}^s_{ji})^2\Big) y_j^2(t) - \frac{1}{\alpha}\sum_{i=1}^{n}\sum_{j=1}^{m} m^2(\hat{w}^s_{ji})^2 f_j^2(y_j(t-\tau_{ji})) \\
& +\frac{1}{\alpha}\sum_{i=1}^{n} n^2 \ell_i^2\Big(\sum_{j=1}^{m}(\hat{v}^s_{ij})^2\Big) x_i^2(t) - \frac{1}{\alpha}\sum_{j=1}^{m}\sum_{i=1}^{n} n^2(\hat{v}^s_{ij})^2 f_i^2(x_i(t-\sigma_{ij})).
\end{aligned} \tag{4}$$

We note the following inequalities:
$$\sum_{i=1}^{n}\sum_{j=1}^{m} 2m x_i(t) w_{ji} f_j(y_j(t)) = 2m\,x^T(t)W^Tf(y(t)) \le m\beta\,x^T(t)x(t) + \frac{m}{\beta}\,f^T(y(t))WW^Tf(y(t)) \le m\beta\sum_{i=1}^{n}x_i^2(t) + \frac{m}{\beta}\|W\|_2^2\sum_{j=1}^{m}k_j^2 y_j^2(t), \tag{5}$$

$$\sum_{j=1}^{m}\sum_{i=1}^{n} 2n y_j(t) v_{ij} f_i(x_i(t)) = 2n\,y^T(t)V^Tf(x(t)) \le n\beta\,y^T(t)y(t) + \frac{n}{\beta}\,f^T(x(t))VV^Tf(x(t)) \le n\beta\sum_{j=1}^{m}y_j^2(t) + \frac{n}{\beta}\|V\|_2^2\sum_{i=1}^{n}\ell_i^2 x_i^2(t), \tag{6}$$

$$\sum_{i=1}^{n}\sum_{j=1}^{m} 2m x_i(t) w^s_{ji} f_j(y_j(t-\tau_{ji})) \le \sum_{i=1}^{n}\sum_{j=1}^{m}\left[\alpha x_i^2(t) + \frac{1}{\alpha}m^2(\hat{w}^s_{ji})^2 f_j^2(y_j(t-\tau_{ji}))\right] = m\alpha\sum_{i=1}^{n}x_i^2(t) + \frac{1}{\alpha}\sum_{i=1}^{n}\sum_{j=1}^{m}m^2(\hat{w}^s_{ji})^2 f_j^2(y_j(t-\tau_{ji})), \tag{7}$$

$$\sum_{j=1}^{m}\sum_{i=1}^{n} 2n y_j(t) v^s_{ij} f_i(x_i(t-\sigma_{ij})) \le \sum_{j=1}^{m}\sum_{i=1}^{n}\left[\alpha y_j^2(t) + \frac{1}{\alpha}n^2(\hat{v}^s_{ij})^2 f_i^2(x_i(t-\sigma_{ij}))\right] = n\alpha\sum_{j=1}^{m}y_j^2(t) + \frac{1}{\alpha}\sum_{j=1}^{m}\sum_{i=1}^{n}n^2(\hat{v}^s_{ij})^2 f_i^2(x_i(t-\sigma_{ij})). \tag{8}$$
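The bounds (5)-(8) are all instances of the elementary inequality of Lemma 3, $2x^Tt \le c\,x^Tx + \frac{1}{c}\,t^Tt$; a randomized spot-check of that inequality (our own illustrative code):

```python
import random

# Randomized check of Lemma 3, which underlies (5)-(8):
# 2*x.t <= c*|x|^2 + (1/c)*|t|^2, because (sqrt(c)*x - t/sqrt(c))^2 >= 0.
random.seed(1)
for _ in range(1000):
    n = random.randint(1, 8)
    x = [random.uniform(-10, 10) for _ in range(n)]
    t = [random.uniform(-10, 10) for _ in range(n)]
    c = random.uniform(0.01, 100.0)
    lhs = 2 * sum(xi * ti for xi, ti in zip(x, t))
    rhs = c * sum(xi * xi for xi in x) + sum(ti * ti for ti in t) / c
    assert lhs <= rhs + 1e-9  # small tolerance for floating-point error
print("Lemma 3 inequality held in all sampled cases")
```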

Using (5)-(8) in (4) results in
$$\begin{aligned}
\dot{V}(x(t),y(t)) \le{}& -\sum_{i=1}^{n} 2m a_i x_i^2(t) + m\beta\sum_{i=1}^{n}x_i^2(t) + \frac{m}{\beta}\|W\|_2^2\sum_{j=1}^{m}k_j^2 y_j^2(t) - \sum_{j=1}^{m} 2n b_j y_j^2(t) + n\beta\sum_{j=1}^{m}y_j^2(t) \\
& + \frac{n}{\beta}\|V\|_2^2\sum_{i=1}^{n}\ell_i^2 x_i^2(t) + m\alpha\sum_{i=1}^{n}x_i^2(t) + n\alpha\sum_{j=1}^{m}y_j^2(t) \\
& + \frac{1}{\alpha}\sum_{j=1}^{m} m^2 k_j^2\Big(\sum_{i=1}^{n}(\hat{w}^s_{ji})^2\Big) y_j^2(t) + \frac{1}{\alpha}\sum_{i=1}^{n} n^2 \ell_i^2\Big(\sum_{j=1}^{m}(\hat{v}^s_{ij})^2\Big) x_i^2(t).
\end{aligned}$$

Since $\|W\|_2^2 \le \|W^*\|_2^2+\|W_*\|_2^2+2\|W^{*T}|W_*|\|_2$ and $\|V\|_2^2 \le \|V^*\|_2^2+\|V_*\|_2^2+2\|V^{*T}|V_*|\|_2$ by Lemma 1, and $a_i\ge\underline{a}_i$, $b_j\ge\underline{b}_j$, it follows that
$$\begin{aligned}
\dot{V}(x(t),y(t)) \le{}& \sum_{i=1}^{n}\Big\{-m(2\underline{a}_i-\alpha-\beta) + \frac{1}{\beta}\,n\ell_i^2\big(\|V^*\|_2^2+\|V_*\|_2^2+2\|V^{*T}|V_*|\|_2\big) + \frac{1}{\alpha}\,n^2\ell_i^2\sum_{j=1}^{m}(\hat{v}^s_{ij})^2\Big\}\,x_i^2(t) \\
& + \sum_{j=1}^{m}\Big\{-n(2\underline{b}_j-\alpha-\beta) + \frac{1}{\beta}\,m k_j^2\big(\|W^*\|_2^2+\|W_*\|_2^2+2\|W^{*T}|W_*|\|_2\big) + \frac{1}{\alpha}\,m^2 k_j^2\sum_{i=1}^{n}(\hat{w}^s_{ji})^2\Big\}\,y_j^2(t) \\
={}& -\sum_{i=1}^{n}\delta_i x_i^2(t) - \sum_{j=1}^{m}\Omega_j y_j^2(t).
\end{aligned}$$

Since $\delta_i>0$ for $i=1,\ldots,n$ and $\Omega_j>0$ for $j=1,\ldots,m$, it follows that $\dot{V}(x(t),y(t))<0$ for $x(t)\ne 0$ or $y(t)\ne 0$. Hence, by the standard Lyapunov-type theorem for functional differential equations, we can conclude that the origin of system (3) is globally asymptotically stable.

Theorem 2. Let the activation functions satisfy assumptions (H1) and (H2). Then, neural system (1) with (2) has a unique equilibrium point which is globally asymptotically robustly stable if there exist positive constants $\alpha$ and $\beta$ such that the network parameters of the system satisfy the following conditions:
$$u_i = m(2\underline{a}_i-\alpha\ell_i^2-\beta) - \frac{1}{\beta}\,n\ell_i^2\left(\|V^*\|_2^2+\|V_*\|_2^2+2\|V^{*T}|V_*|\|_2\right) - \frac{1}{\alpha}\,m^2\sum_{j=1}^{m}(\hat{w}^s_{ji})^2 > 0, \quad \forall i,$$
$$\vartheta_j = n(2\underline{b}_j-\alpha k_j^2-\beta) - \frac{1}{\beta}\,m k_j^2\left(\|W^*\|_2^2+\|W_*\|_2^2+2\|W^{*T}|W_*|\|_2\right) - \frac{1}{\alpha}\,n^2\sum_{i=1}^{n}(\hat{v}^s_{ij})^2 > 0, \quad \forall j,$$
where $W=(w_{ji})$, $V=(v_{ij})$, $W^*=\frac{1}{2}(\overline{W}+\underline{W})$, $W_*=\frac{1}{2}(\overline{W}-\underline{W})$, $V^*=\frac{1}{2}(\overline{V}+\underline{V})$, $V_*=\frac{1}{2}(\overline{V}-\underline{V})$, $\hat{v}^s_{ij}=\max\{|\underline{v}^s_{ij}|,|\overline{v}^s_{ij}|\}$ and $\hat{w}^s_{ji}=\max\{|\underline{w}^s_{ji}|,|\overline{w}^s_{ji}|\}$.

Proof. Define the following positive definite Lyapunov functional:
$$V(x(t),y(t)) = \sum_{i=1}^{n} m x_i^2(t) + \sum_{j=1}^{m} n y_j^2(t) + \alpha\sum_{i=1}^{n}\sum_{j=1}^{m}\int_{t-\tau_{ji}}^{t} f_j^2(y_j(\eta))\,d\eta + \alpha\sum_{j=1}^{m}\sum_{i=1}^{n}\int_{t-\sigma_{ij}}^{t} f_i^2(x_i(\xi))\,d\xi.$$

The derivative of $V(x(t),y(t))$ along the trajectories of system (3) is obtained as
$$\begin{aligned}
\dot{V}(x(t),y(t)) ={}& -\sum_{i=1}^{n} 2m a_i x_i^2(t) + \sum_{i=1}^{n}\sum_{j=1}^{m} 2m x_i(t) w_{ji} f_j(y_j(t)) + \sum_{i=1}^{n}\sum_{j=1}^{m} 2m x_i(t) w^s_{ji} f_j(y_j(t-\tau_{ji})) \\
& -\sum_{j=1}^{m} 2n b_j y_j^2(t) + \sum_{j=1}^{m}\sum_{i=1}^{n} 2n y_j(t) v_{ij} f_i(x_i(t)) + \sum_{j=1}^{m}\sum_{i=1}^{n} 2n y_j(t) v^s_{ij} f_i(x_i(t-\sigma_{ij})) \\
& +\alpha\sum_{i=1}^{n}\sum_{j=1}^{m} f_j^2(y_j(t)) - \alpha\sum_{i=1}^{n}\sum_{j=1}^{m} f_j^2(y_j(t-\tau_{ji})) + \alpha\sum_{j=1}^{m}\sum_{i=1}^{n} f_i^2(x_i(t)) - \alpha\sum_{j=1}^{m}\sum_{i=1}^{n} f_i^2(x_i(t-\sigma_{ij})).
\end{aligned} \tag{9}$$

We also note that
$$\sum_{i=1}^{n}\sum_{j=1}^{m} 2m x_i(t) w^s_{ji} f_j(y_j(t-\tau_{ji})) \le \frac{1}{\alpha}\sum_{i=1}^{n}\sum_{j=1}^{m} m^2(\hat{w}^s_{ji})^2 x_i^2(t) + \alpha\sum_{i=1}^{n}\sum_{j=1}^{m} f_j^2(y_j(t-\tau_{ji})), \tag{10}$$
$$\sum_{j=1}^{m}\sum_{i=1}^{n} 2n y_j(t) v^s_{ij} f_i(x_i(t-\sigma_{ij})) \le \frac{1}{\alpha}\sum_{j=1}^{m}\sum_{i=1}^{n} n^2(\hat{v}^s_{ij})^2 y_j^2(t) + \alpha\sum_{j=1}^{m}\sum_{i=1}^{n} f_i^2(x_i(t-\sigma_{ij})). \tag{11}$$

Using (5), (6), (10) and (11) in (9) leads to
$$\begin{aligned}
\dot{V}(x(t),y(t)) \le{}& -\sum_{i=1}^{n} 2m a_i x_i^2(t) + m\beta\sum_{i=1}^{n}x_i^2(t) + \frac{m}{\beta}\|W\|_2^2\sum_{j=1}^{m}k_j^2 y_j^2(t) - \sum_{j=1}^{m} 2n b_j y_j^2(t) + n\beta\sum_{j=1}^{m}y_j^2(t) \\
& + \frac{n}{\beta}\|V\|_2^2\sum_{i=1}^{n}\ell_i^2 x_i^2(t) + \alpha n\sum_{j=1}^{m}k_j^2 y_j^2(t) + \frac{1}{\alpha}\sum_{i=1}^{n} m^2\Big(\sum_{j=1}^{m}(\hat{w}^s_{ji})^2\Big) x_i^2(t) \\
& + \alpha m\sum_{i=1}^{n}\ell_i^2 x_i^2(t) + \frac{1}{\alpha}\sum_{j=1}^{m} n^2\Big(\sum_{i=1}^{n}(\hat{v}^s_{ij})^2\Big) y_j^2(t).
\end{aligned}$$

Since $\|W\|_2^2 \le \|W^*\|_2^2+\|W_*\|_2^2+2\|W^{*T}|W_*|\|_2$, $\|V\|_2^2 \le \|V^*\|_2^2+\|V_*\|_2^2+2\|V^{*T}|V_*|\|_2$, $(w^s_{ji})^2\le(\hat{w}^s_{ji})^2$, $(v^s_{ij})^2\le(\hat{v}^s_{ij})^2$, $a_i\ge\underline{a}_i$ and $b_j\ge\underline{b}_j$,
$$\begin{aligned}
\dot{V}(x(t),y(t)) \le{}& \sum_{i=1}^{n}\Big\{-m(2\underline{a}_i-\alpha\ell_i^2-\beta) + \frac{1}{\beta}\,n\ell_i^2\big(\|V^*\|_2^2+\|V_*\|_2^2+2\|V^{*T}|V_*|\|_2\big) + \frac{1}{\alpha}\,m^2\sum_{j=1}^{m}(\hat{w}^s_{ji})^2\Big\}\,x_i^2(t) \\
& + \sum_{j=1}^{m}\Big\{-n(2\underline{b}_j-\alpha k_j^2-\beta) + \frac{1}{\beta}\,m k_j^2\big(\|W^*\|_2^2+\|W_*\|_2^2+2\|W^{*T}|W_*|\|_2\big) + \frac{1}{\alpha}\,n^2\sum_{i=1}^{n}(\hat{v}^s_{ij})^2\Big\}\,y_j^2(t) \\
={}& -\sum_{i=1}^{n} u_i x_i^2(t) - \sum_{j=1}^{m}\vartheta_j y_j^2(t),
\end{aligned}$$
in which $\dot{V}(x(t),y(t))<0$ for all $x(t)\ne 0$ or $y(t)\ne 0$. Hence, the origin of system (3) is globally asymptotically stable.
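The conditions of Theorems 1 and 2 are directly computable from the interval data. As an illustration (our own sketch, not part of the paper), the following checker evaluates the Theorem 1 conditions for the fixed parameters used later in the simulations of Section 5; since those intervals are degenerate, $W^*=W$, $W_*=0$, and the hat quantities are simply $|w^s_{ji}|$:

```python
import math

# Illustrative checker for the delay-independent conditions of Theorem 1,
# evaluated on the first fixed parameter set of Section 5 (entries 2a = 0.125,
# A = B = I, l_i = k_j = 1, alpha = beta = 1/2).  Degenerate intervals are
# assumed, so the Lemma 1 bound reduces to ||W||_2^2 itself.

def spectral_norm(Q, iters=500):
    """||Q||_2 = sqrt(lambda_max(Q^T Q)) via power iteration."""
    rows, cols = len(Q), len(Q[0])
    v = [1.0] * cols
    for _ in range(iters):
        Qv = [sum(Q[r][c] * v[c] for c in range(cols)) for r in range(rows)]
        QtQv = [sum(Q[r][c] * Qv[r] for r in range(rows)) for c in range(cols)]
        s = math.sqrt(sum(x * x for x in QtQv))
        v = [x / s for x in QtQv]
    Qv = [sum(Q[r][c] * v[c] for c in range(cols)) for r in range(rows)]
    return math.sqrt(sum(x * x for x in Qv))

n = m = 4
alpha = beta = 0.5
ell = [1.0] * n
k = [1.0] * m
a_lo, b_lo = [1.0] * n, [1.0] * m           # A = B = I
w = 0.125
W = [[0, w, w, w], [w, w, w, w], [w, w, w, w], [w, w, w, 0]]
V, Ws, Vs = W, W, W                          # the example reuses one matrix

normW2, normV2 = spectral_norm(W) ** 2, spectral_norm(V) ** 2
delta = [m * (2 * a_lo[i] - alpha - beta)
         - (1 / beta) * n * ell[i] ** 2 * normV2
         - (1 / alpha) * n ** 2 * ell[i] ** 2 * sum(Vs[i][j] ** 2 for j in range(m))
         for i in range(n)]
omega = [n * (2 * b_lo[j] - alpha - beta)
         - (1 / beta) * m * k[j] ** 2 * normW2
         - (1 / alpha) * m ** 2 * k[j] ** 2 * sum(Ws[j][i] ** 2 for i in range(n))
         for j in range(m)]
assert all(d > 0 for d in delta) and all(o > 0 for o in omega)
print("Theorem 1 conditions hold for the first fixed parameter set")
```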

5. Comparisons and examples

In this section, the results obtained in this paper are compared with the previous global robust stability results for BAM neural networks derived in the literature. In order to make the comparison precise, the previous results are first restated.

Theorem 3 [47]. Let the activation functions satisfy assumptions (H1) and (H2). Then, neural system (1) with (2) has a unique equilibrium point which is globally asymptotically robustly stable if there exist positive constants $\alpha$ and $\beta$ such that the network parameters of the system satisfy the following conditions:
$$\zeta_i = m(2\underline{a}_i-\alpha-\beta) - \frac{1}{\beta}\,n\ell_i^2\left(\|V^*\|_2+\|V_*\|_2\right)^2 - \frac{1}{\alpha}\,n^2\ell_i^2\sum_{j=1}^{m}(\hat{v}^s_{ij})^2 > 0, \quad i=1,2,\ldots,n,$$
$$\eta_j = n(2\underline{b}_j-\alpha-\beta) - \frac{1}{\beta}\,m k_j^2\left(\|W^*\|_2+\|W_*\|_2\right)^2 - \frac{1}{\alpha}\,m^2 k_j^2\sum_{i=1}^{n}(\hat{w}^s_{ji})^2 > 0, \quad j=1,2,\ldots,m,$$
where $W^*$, $W_*$, $V^*$, $V_*$, $\hat{v}^s_{ij}$ and $\hat{w}^s_{ji}$ are as defined in Theorem 1.

Theorem 4 [47]. Let the activation functions satisfy assumptions (H1) and (H2). Then, neural system (1) with (2) has a unique equilibrium point which is globally asymptotically robustly stable if there exist positive constants $\alpha$ and $\beta$ such that the network parameters of the system satisfy the following conditions:
$$\phi_i = m(2\underline{a}_i-\alpha\ell_i^2-\beta) - \frac{1}{\beta}\,n\ell_i^2\left(\|V^*\|_2+\|V_*\|_2\right)^2 - \frac{1}{\alpha}\,m^2\sum_{j=1}^{m}(\hat{w}^s_{ji})^2 > 0, \quad i=1,2,\ldots,n,$$
$$\psi_j = n(2\underline{b}_j-\alpha k_j^2-\beta) - \frac{1}{\beta}\,m k_j^2\left(\|W^*\|_2+\|W_*\|_2\right)^2 - \frac{1}{\alpha}\,n^2\sum_{i=1}^{n}(\hat{v}^s_{ij})^2 > 0, \quad j=1,2,\ldots,m,$$
where $W^*$, $W_*$, $V^*$, $V_*$, $\hat{v}^s_{ij}$ and $\hat{w}^s_{ji}$ are as defined in Theorem 1.

Theorem 5 [48]. For the neural system defined by (1), let the activation functions satisfy (H1) and (H2), and let the network parameters satisfy (2). Then, the origin of neural system (1) is globally asymptotically stable if there exist positive constants $p>0$ and $q>0$ such that the following conditions hold:
$$\theta_i = 2\underline{a}_i - p - \frac{1}{p}\,\ell_i^2\left(\|V^*\|_2+\|V_*\|_2\right)^2 - q\sum_{j=1}^{m}\hat{w}^s_{ji} - \frac{1}{q}\,\ell_i^2\sum_{j=1}^{m}\hat{v}^s_{ij} > 0, \quad i=1,2,\ldots,n,$$
$$\gamma_j = 2\underline{b}_j - p - \frac{1}{p}\,k_j^2\left(\|W^*\|_2+\|W_*\|_2\right)^2 - q\sum_{i=1}^{n}\hat{v}^s_{ij} - \frac{1}{q}\,k_j^2\sum_{i=1}^{n}\hat{w}^s_{ji} > 0, \quad j=1,2,\ldots,m,$$
where $W^*$, $W_*$, $V^*$, $V_*$, $\hat{v}^s_{ij}$ and $\hat{w}^s_{ji}$ are as defined in Theorem 1.

In order to show that the conditions obtained in Theorems 1 and 2 provide a set of sufficient criteria for the equilibrium and stability properties of system (1) different from those presented in [47,48], we consider the following example.

Example 1. Assume that the network parameters of neural system (1) are given as follows:

$$\underline{W}=\underline{V}=\begin{bmatrix}0&2a&2a&2a\\2a&2a&2a&2a\\2a&2a&2a&2a\\2a&2a&2a&2a\end{bmatrix},\qquad \overline{W}=\overline{V}=\begin{bmatrix}0&2a&2a&2a\\2a&2a&2a&2a\\2a&2a&2a&2a\\2a&2a&2a&0\end{bmatrix},$$
$$\underline{W}^s=\underline{V}^s=\overline{W}^s=\overline{V}^s=\begin{bmatrix}2a&2a&2a&2a\\2a&2a&2a&2a\\2a&2a&2a&2a\\2a&2a&2a&2a\end{bmatrix},$$
$$\underline{A}=A=\overline{A}=\underline{B}=B=\overline{B}=I,\qquad \ell_1=\ell_2=\ell_3=\ell_4=k_1=k_2=k_3=k_4=1,$$
where $a>0$ is a real number. The matrices $W^*$, $W_*$, $V^*$, $V_*$, $W^{*T}|W_*|$ and $V^{*T}|V_*|$ are obtained as follows:

$$W^*=V^*=\begin{bmatrix}0&2a&2a&2a\\2a&2a&2a&2a\\2a&2a&2a&2a\\2a&2a&2a&a\end{bmatrix},\qquad W_*=V_*=\begin{bmatrix}0&0&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&a\end{bmatrix},\qquad W^{*T}|W_*|=V^{*T}|V_*|=\begin{bmatrix}0&0&0&2a^2\\0&0&0&2a^2\\0&0&0&2a^2\\0&0&0&a^2\end{bmatrix},$$
where $\|W^*\|_2=\|V^*\|_2=4.8399a$, $\|W_*\|_2=\|V_*\|_2=a$ and $\|W^{*T}|W_*|\|_2=\|V^{*T}|V_*|\|_2=3.6056a^2$. Let $\alpha=\beta=\frac{1}{2}$. Then we obtain
$$\delta_1=\delta_2=\delta_3=\delta_4=\Omega_1=\Omega_2=\Omega_3=\Omega_4=4-8(31.6358a^2)-512a^2=4\left(1-191.2716\,a^2\right),$$
in which $a^2<\frac{1}{191.2716}$, or equivalently $a<\frac{1}{13.8301}$, implies $\delta_i>0$ and $\Omega_j>0$ for all $i$ and $j$. Hence, if $a<\frac{1}{13.8301}$ holds, then the conditions of Theorem 1 are satisfied.

We will now check the conditions of Theorem 2 for the same network parameters. The conditions of Theorem 2 are obtained as
$$u_1=u_2=u_3=u_4=\vartheta_1=\vartheta_2=\vartheta_3=\vartheta_4=4-8(31.6358a^2)-512a^2=4\left(1-191.2716\,a^2\right).$$
Obviously, $a<\frac{1}{13.8301}$ ensures that the conditions of Theorem 2 hold. We note here that, for the network parameters given in this example, Theorems 1 and 2 impose the same constraint conditions on the network parameters.

We will now check the results of Theorems 3 and 4 for this example. The conditions of Theorems 3 and 4 are obtained as
$$\zeta_i=\eta_j=\phi_i=\psi_j=4-8(4.8399a+a)^2-512a^2=4\left(1-196.2088\,a^2\right),$$
from which the stability condition is obtained as $a^2<\frac{1}{196.2088}$, or equivalently $a<\frac{1}{14.0075}$.

Remark 1. For this specific example, our results require $a<\frac{1}{13.8301}$. On the other hand, the results of Theorems 3 and 4 hold if and only if $a<\frac{1}{14.0075}$. Therefore, for $\frac{1}{14.0075}\le a<\frac{1}{13.8301}$, the conditions obtained in Theorems 1 and 2 are satisfied but the results of Theorems 3 and 4 do not hold. Hence, our results impose less conservative constraints on the network parameters of this example than those imposed by the results given in [47].

For the same network parameters, we will now check the results of Theorem 5. Let $p=5.8399a$ and $q=2$. Then the conditions of Theorem 5 are obtained as

$$\theta_1=\theta_2=\theta_3=\theta_4=\gamma_1=\gamma_2=\gamma_3=\gamma_4=2-31.6798\,a=2\left(1-15.8399\,a\right),$$
from which the stability condition is obtained as $a<\frac{1}{15.8399}$.
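The three critical constants of Example 1 follow from the reported norm values by elementary arithmetic; a quick check (norm values from the example, arithmetic ours):

```python
import math

# Recompute the critical bounds of Example 1 from the reported norms
# ||W*||_2 = 4.8399a, ||W_*||_2 = a, ||W*^T |W_*|||_2 = 3.6056a^2,
# with m = n = 4, l_i = k_j = 1, alpha = beta = 1/2.

# Theorems 1-2: 4 - [8*(4.8399^2 + 1 + 2*3.6056) + 512]*a^2 = 4*(1 - c12*a^2)
c12 = (8 * (4.8399 ** 2 + 1 + 2 * 3.6056) + 512) / 4
# Theorems 3-4: 4 - [8*(4.8399 + 1)^2 + 512]*a^2 = 4*(1 - c34*a^2)
c34 = (8 * (4.8399 + 1) ** 2 + 512) / 4
# Theorem 5 with p = 5.8399a, q = 2: 2 - 31.6798a = 2*(1 - c5*a)
c5 = 31.6798 / 2

assert abs(c12 - 191.2716) < 1e-3
assert abs(c34 - 196.2088) < 1e-3
assert abs(math.sqrt(c12) - 13.8301) < 1e-3   # a < 1/13.8301 (Theorems 1-2)
assert abs(math.sqrt(c34) - 14.0075) < 1e-3   # a < 1/14.0075 (Theorems 3-4)
assert abs(c5 - 15.8399) < 1e-9               # a < 1/15.8399 (Theorem 5)
# Theorems 1-2 allow the widest range of a in this example:
assert 1 / math.sqrt(c12) > 1 / math.sqrt(c34) > 1 / c5
print("admissible a-bounds:", 1 / math.sqrt(c12), 1 / math.sqrt(c34), 1 / c5)
```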

Remark 2. For this specific example, our results require $a<\frac{1}{13.8301}$. On the other hand, the results of Theorem 5 hold if and only if $a<\frac{1}{15.8399}$. Therefore, for $\frac{1}{15.8399}\le a<\frac{1}{13.8301}$, the conditions obtained in Theorems 1 and 2 are satisfied but the result of Theorem 5 does not hold. Hence, our results impose less conservative constraints on the network parameters of this example than those imposed by the results given in [48].

In what follows, we give some simulation results for the sake of verification of our proposed results. For the neural network parameters given in Example 1, we choose the following fixed network parameters, which satisfy the condition $a<\frac{1}{13.8301}$:

$$W=V=W^s=V^s=\begin{bmatrix}0&0.125&0.125&0.125\\0.125&0.125&0.125&0.125\\0.125&0.125&0.125&0.125\\0.125&0.125&0.125&0\end{bmatrix},$$
$$\begin{aligned}&\tau_{11}=0.5,\ \tau_{12}=0.3,\ \tau_{13}=0.2,\ \tau_{14}=0.7,\ \tau_{21}=0.6,\ \tau_{22}=0.4,\ \tau_{23}=0.3,\ \tau_{24}=0.1,\\&\tau_{31}=0.8,\ \tau_{32}=0.2,\ \tau_{33}=0.9,\ \tau_{34}=0.4,\ \tau_{41}=0.7,\ \tau_{42}=0.1,\ \tau_{43}=0.4,\ \tau_{44}=0.5.\end{aligned}$$

For this example, the Matlab simulation results are presented for different activation functions in Fig. 1(a) and Fig. 1(b). For the same example, we now choose the following fixed network parameters, which satisfy the constraint conditions imposed by our results:

$$W=V=W^s=V^s=\begin{bmatrix}0&0.08&0.08&0.08\\0.08&0.08&0.08&0.08\\0.08&0.08&0.08&0.08\\0.08&0.08&0.08&0.04\end{bmatrix},$$

Fig. 1. System solution for the initial states $x(0)=[0.1\ {-0.4}\ 0.3\ 0.5]$ and $y(0)=[0.6\ {-0.9}\ 0.7\ 0.9]$. (Panels (a) and (b) show the trajectories $x_i(t)$, $y_j(t)$, $i,j=1,\ldots,4$, over $t\in[0,10]$ for two different activation functions; all trajectories converge to the origin.)

Fig. 2. System solution for the initial states $x(0)=[0.2\ {-0.45}\ 0.2\ 0.5]$ and $y(0)=[0.65\ {-0.85}\ 0.7\ 0.95]$. (Panels (a) and (b) show the trajectories over $t\in[0,10]$ for two different activation functions; all trajectories converge to the origin.)

Fig. 3. System solution for the initial states $x(0)=[0.1\ {-0.3}\ 0.1\ 0.4]$ and $y(0)=[0.6\ {-0.8}\ 0.6\ 0.9]$. (Panels (a) and (b) show the trajectories over $t\in[0,10]$ for two different activation functions; all trajectories converge to the origin.)

$$\begin{aligned}&\tau_{11}=0.5,\ \tau_{12}=0.3,\ \tau_{13}=0.2,\ \tau_{14}=0.7,\ \tau_{21}=0.6,\ \tau_{22}=0.4,\ \tau_{23}=0.3,\ \tau_{24}=0.1,\\&\tau_{31}=0.8,\ \tau_{32}=0.2,\ \tau_{33}=0.9,\ \tau_{34}=0.4,\ \tau_{41}=0.7,\ \tau_{42}=0.1,\ \tau_{43}=0.4,\ \tau_{44}=0.5.\end{aligned}$$
The Matlab simulation results for these parameters are given for different activation functions in Fig. 2(a) and Fig. 2(b). For the same example, we now choose the following fixed network parameters, which satisfy the constraint conditions imposed by our results:

$$W=V=W^s=V^s=\begin{bmatrix}0&0.042&0.042&0.042\\0.042&0.042&0.042&0.042\\0.042&0.042&0.042&0.042\\0.042&0.042&0.042&0.021\end{bmatrix},$$

$$\begin{aligned}&\tau_{11}=0.2,\ \tau_{12}=0.7,\ \tau_{13}=0.4,\ \tau_{14}=0.5,\ \tau_{21}=0.6,\ \tau_{22}=0.1,\ \tau_{23}=0.8,\ \tau_{24}=0.9,\\&\tau_{31}=0.3,\ \tau_{32}=0.5,\ \tau_{33}=0.7,\ \tau_{34}=0.2,\ \tau_{41}=0.1,\ \tau_{42}=0.8,\ \tau_{43}=0.3,\ \tau_{44}=0.6.\end{aligned}$$
The Matlab simulation results for the above parameters are given for different activation functions in Fig. 3(a) and Fig. 3(b).
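The Matlab simulations above can be re-created with a simple forward-Euler integration of the delayed system. The sketch below (ours, not the paper's code) uses the first fixed parameter set, the initial states of Fig. 1, tanh activations (an assumption; the paper only requires (H1)-(H2)), zero inputs, a constant initial history, and reuses the listed $\tau$ table for the $\sigma_{ij}$ delays, since only one delay table is given; it integrates past $t=10$ to confirm convergence to the origin numerically:

```python
import math

# Forward-Euler simulation of the hybrid BAM system (1) with constant delays.
# Parameters: first fixed set of Section 5 (entries 2a = 0.125, A = B = I),
# g = tanh (assumed), I_i = J_j = 0.  sigma is assumed equal to tau.
n = m = 4
w = 0.125
W = [[0, w, w, w], [w, w, w, w], [w, w, w, w], [w, w, w, 0]]
V = Ws = Vs = W
tau = [[0.5, 0.3, 0.2, 0.7], [0.6, 0.4, 0.3, 0.1],
       [0.8, 0.2, 0.9, 0.4], [0.7, 0.1, 0.4, 0.5]]
sigma = tau

h, T = 0.005, 60.0                 # step size; run well past t = 10
steps = int(T / h)
u = [0.1, -0.4, 0.3, 0.5]          # x(0) of Fig. 1
z = [0.6, -0.9, 0.7, 0.9]          # y(0) of Fig. 1
hist_u, hist_z = [u[:]], [z[:]]    # constant prehistory assumed

def delayed(hist, k, d):
    """State at time t - d, with constant history before t = 0."""
    idx = k - int(round(d / h))
    return hist[idx] if idx >= 0 else hist[0]

for k in range(steps):
    du = [-u[i] + sum(W[j][i] * math.tanh(z[j]) for j in range(m))
          + sum(Ws[j][i] * math.tanh(delayed(hist_z, k, tau[j][i])[j])
                for j in range(m)) for i in range(n)]
    dz = [-z[j] + sum(V[i][j] * math.tanh(u[i]) for i in range(n))
          + sum(Vs[i][j] * math.tanh(delayed(hist_u, k, sigma[i][j])[i])
                for i in range(n)) for j in range(m)]
    u = [u[i] + h * du[i] for i in range(n)]
    z = [z[j] + h * dz[j] for j in range(m)]
    hist_u.append(u[:])
    hist_z.append(z[:])

# With zero inputs and odd activations, the origin is the equilibrium;
# the theory predicts global convergence to it.
assert all(abs(v) < 0.05 for v in u + z)
print("states at t = 60:", [round(v, 4) for v in u + z])
```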

6. Conclusions

In this paper, by using Lyapunov stability theory and the upper bound norm for the interconnection matrices of the neural system, novel sufficient conditions ensuring the existence, uniqueness and global robust asymptotic stability of the equilibrium point have been derived for a class of hybrid bidirectional associative memory (BAM) neural networks with multiple time delays. The obtained stability results establish relationships between the network parameters of the neural network model independently of the delay parameters. A comparison between our results and previous ones shows that our results establish a new set of global robust asymptotic stability criteria for BAM neural networks with multiple time delays. As guidance for future work on the robust stability of delayed neural networks, we point out that different and weaker upper bound norm estimates for the interconnection matrices would be the key factor; therefore, in order to improve the current robust stability results for neural networks, more effort should be put into the investigation of interval matrix theory.

References

[1] L. Chua, L. Yang, Cellular neural networks: applications, IEEE Trans. Circuits Syst. 35 (10) (1988) 1273-1290.
[2] A.N. Michel, D. Liu, Qualitative Analysis and Synthesis of Recurrent Neural Networks, Marcel Dekker, New York, 2002.
[3] T. Ensari, S. Arik, New results for dynamical neural networks with discrete time delays, Expert Syst. Appl. 37 (2010) 5925-5930.
[4] C.-D. Zheng, H. Zhang, Z. Wang, Novel exponential stability criteria of high-order neural networks with time-varying delays, IEEE Trans. Syst. Man Cybernet. - Part B: Cybernet. 41 (2) (2011) 486-496.
[5] Q. Song, Z. Wang, Neural networks with discrete and distributed time-varying delays: a general stability analysis, Chaos Solitons Fract. 37 (5) (2008) 1538-1547.
[6] Y. He, G.P. Liu, D. Rees, M. Wu, Stability analysis for neural networks with time-varying interval delay, IEEE Trans. Neural Networks 18 (6) (2007) 1850-1854.
[7] S. Arik, V. Tavsanoglu, On the global asymptotic stability of delayed cellular neural networks, IEEE Trans. Circuits Syst. I 47 (2000) 571-574.
[8] X. Meng, M. Tian, S. Hu, Stability analysis of stochastic recurrent neural networks with unbounded time-varying delays, Neurocomputing 74 (6) (2011) 949-953.
[9] H. Zhang, Z. Wang, D. Liu, Global asymptotic stability of recurrent neural networks with multiple time varying delays, IEEE Trans. Neural Networks 19 (5) (2008) 855-873.
[10] R. Yang, Z. Zhang, P. Shi, Exponential stability on stochastic neural networks with discrete interval and distributed delays, IEEE Trans. Neural Networks 21 (1) (2010) 169-175.
[11] R.-S. Gau, C.-H. Lien, J.-G. Hsieh, Novel stability conditions for interval delayed neural networks with multiple time-varying delays, Int. J. Innovative Comput. Inform. Control 7 (1) (2011) 433-444.
[12] Y. Shao, Exponential stability of periodic neural networks with impulsive effects and time-varying delays, Appl. Math. Comput. 217 (16) (2011) 6893-6899.
[13] Z. Liu, H. Zhang, Q. Zhang, Novel stability analysis for recurrent neural networks with multiple delays via line integral-type L-K functional, IEEE Trans. Neural Networks 21 (11) (2010) 1710-1718.
[14] Z. Zuo, C. Yang, Y. Wang, A new method for stability analysis of recurrent neural networks with interval time-varying delay, IEEE Trans. Neural Networks 21 (2) (2010) 339-344.
[15] Z.-G. Wu, Ju H. Park, H. Su, J. Chu, Passivity analysis of Markov jump neural networks with mixed time-delays and piecewise-constant transition rates, Nonlinear Anal.: Real World Appl. 13 (5) (2012) 2423-2431.
[16] S.M. Lee, O.M. Kwon, Ju H. Park, A novel delay-dependent criterion for delayed neural networks of neutral type, Phys. Lett. A 374 (17-18) (2010) 1843-1848.
[17] Z.-G. Wu, Ju H. Park, H. Su, J. Chu, New results on exponential passivity of neural networks with time-varying delays, Nonlinear Anal.: Real World Appl. 13 (4) (2012) 1593-1599.
[18] D.H. Ji, J.H. Koo, S.C. Won, S.M. Lee, Ju H. Park, Passivity-based control for Hopfield neural networks using convex representation, Appl. Math. Comput. 217 (13) (2011) 6168-6175.
[19] P. Balasubramaniam, S. Lakshmanan, Delay-range dependent stability criteria for neural networks with Markovian jumping parameters, Nonlinear Anal.: Hybrid Syst. 3 (4) (2009) 749-756.
[20] Ju H. Park, O.M. Kwon, S.M. Lee, LMI optimization approach on stability for delayed neural networks of neutral-type, Appl. Math. Comput. 196 (1) (2008) 236-244.
[21] J. Cao, D.-S. Huang, Y. Qu, Global robust stability of delayed recurrent neural networks, Chaos Solitons Fract. 23 (1) (2005) 221-229.
[22] H. Zhang, Z. Wang, D. Liu, Global asymptotic stability and robust stability of a class of Cohen-Grossberg neural networks with mixed delays, IEEE Trans. Circuits Syst. I: Regular Papers 56 (3) (2009) 616-629.


[23] P. Balasubramaniam, M.S. Ali, Robust stability of uncertain fuzzy cellular neural networks with time-varying delays and reaction-diffusion terms, Neurocomputing 74 (1–3) (2010) 439–446.
[24] W.-H. Chen, W.X. Zheng, Robust stability analysis for stochastic neural networks with time-varying delay, IEEE Trans. Neural Networks 21 (3) (2010) 508–514.
[25] Y. Zhao, L. Zhang, S. Shen, H. Gao, Robust stability criterion for discrete-time uncertain Markovian jumping neural networks with defective statistics of modes transitions, IEEE Trans. Neural Networks 22 (1) (2011) 164–170.
[26] P. Balasubramaniam, S. Lakshmanan, R. Rakkiyappan, Delay-interval dependent robust stability criteria for stochastic neural networks with linear fractional uncertainties, Neurocomputing 72 (16–18) (2009) 3675–3682.
[27] B. Kosko, Adaptive bi-directional associative memories, Appl. Opt. 26 (1987) 4947–4960.
[28] B. Kosko, Bi-directional associative memories, IEEE Trans. Syst. Man Cybern. 18 (1988) 49–60.
[29] J.M. Zurada, Introduction to Artificial Neural Systems, West Publishing Company, St. Paul, MN, 1992.
[30] G. Mathai, B.R. Upadhyaya, Performance analysis and application of the bidirectional associative memory to industrial spectral signatures, Proc. IJCNN 89 (1) (1989) 33–37.
[31] S. Arik, Global asymptotic stability analysis of bidirectional associative memory neural networks with time delays, IEEE Trans. Neural Networks 16 (3) (2005) 580–586.
[32] J.D. Cao, J.L. Liang, J. Lam, Exponential stability of high-order bidirectional associative memory neural networks with time delays, Phys. D: Nonlinear Phenom. 199 (3–4) (2004) 425–436.
[33] J.H. Park, A novel criterion for global asymptotic stability of BAM neural networks with time delays, Chaos Solitons Fract. 29 (2) (2006) 446–453.
[34] Z.-T. Huang, X.-S. Luo, Q.-G. Yang, Global asymptotic stability analysis of bidirectional associative memory neural networks with distributed delays and impulse, Chaos Solitons Fract. 34 (3) (2007) 878–885.
[35] X. Lou, B. Cui, W. Wu, On global exponential stability and existence of periodic solutions for BAM neural networks with distributed delays and reaction-diffusion terms, Chaos Solitons Fract. 36 (4) (2008) 1044–1054.
[36] J.H. Park, S.M. Lee, O.M. Kwon, On exponential stability of bidirectional associative memory neural networks with time-varying delays, Chaos Solitons Fract. 39 (2009) 1083–1091.
[37] Y. Wang, Global exponential stability analysis of bidirectional associative memory neural networks with time-varying delays, Nonlinear Anal.: Real World Appl. 10 (2009) 1527–1539.
[38] Y. Yuan, X. Li, New results for global robust asymptotic stability of BAM neural networks with time-varying delays, Neurocomputing 74 (1–3) (2010) 337–342.
[39] B. Chen, L. Yu, W.-A. Zhang, Exponential convergence rate estimation for neutral BAM neural networks with mixed time-delays, Neural Comput. Appl. 20 (3) (2011) 451–460.
[40] P. Balasubramaniam, C. Vidhya, Global asymptotic stability of stochastic BAM neural networks with distributed delays and reaction-diffusion terms, J. Comput. Appl. Math. 234 (12) (2010) 3458–3466.
[41] Ju H. Park, C.H. Park, O.M. Kwon, S.M. Lee, A new stability criterion for bidirectional associative memory neural networks of neutral-type, Appl. Math. Comput. 199 (2) (2008) 716–722.
[42] Ju H. Park, O.M. Kwon, Delay-dependent stability criterion for bidirectional associative memory neural networks with interval time-varying delays, Mod. Phys. Lett. B 23 (1) (2009) 35–46.
[43] Ju H. Park, Robust stability of bidirectional associative memory neural networks with time delays, Phys. Lett. A 349 (6) (2006) 494–499.
[44] X.F. Liao, K. Wong, Global exponential stability of hybrid bidirectional associative memory neural networks with discrete delays, Phys. Rev. E 67 (4) (2003) 042901.
[45] X.F. Liao, K. Wong, Robust stability of interval bidirectional associative memory neural network with time delays, IEEE Trans. Syst. Man Cybern. – Part C 34 (2004) 1142–1154.
[46] S. Senan, S. Arik, New results for global robust stability of bidirectional associative memory neural networks with multiple time delays, Chaos Solitons Fract. 41 (4) (2009) 2106–2114.
[47] S. Senan, S. Arik, Global robust stability of bidirectional associative memory neural networks with multiple time delays, IEEE Trans. Syst. Man Cybern. – Part B 37 (5) (2007) 1375–1381.
[48] N. Ozcan, S. Arik, A new sufficient condition for global robust stability of bidirectional associative memory neural networks with multiple time delays, Nonlinear Anal.: Real World Appl. 10 (2009) 3312–3320.