IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 14, NO. 1, JANUARY 2003


Absolute Exponential Stability of a Class of Continuous-Time Recurrent Neural Networks

Sanqing Hu and Jun Wang, Senior Member, IEEE

Abstract—This paper presents a new result on absolute exponential stability (AEST) of a class of continuous-time recurrent neural networks with locally Lipschitz continuous and monotone nondecreasing activation functions. The additively diagonally stable connection weight matrices are proven to be able to guarantee AEST of the neural networks. The AEST result extends and improves the existing absolute stability and AEST results in the literature.

Index Terms—Absolute exponential stability (AEST), additive diagonal stability, diagonal semistability, global exponential stability, M-matrix, neural networks.

Manuscript received August 1, 2001; revised March 14, 2002 and September 9, 2002. This work was supported by the Hong Kong Research Grants Council under Grant CUHK4174/00E. The authors are with the Department of Automation and Computer-Aided Engineering, The Chinese University of Hong Kong, Hong Kong. Digital Object Identifier 10.1109/TNN.2002.806954

I. INTRODUCTION

RECENTLY, the analysis of absolute stability (ABST) and absolute exponential stability (AEST) of recurrent neural networks has received much attention in the literature; see, e.g., [1]–[20]. The main impetus lies in the fact that an absolutely stable (ABST) or absolutely exponentially stable (AEST) neural network converges globally asymptotically (or globally exponentially) to a unique equilibrium for any activation function in a properly given class and any other network parameters. This desirable property of neural networks is especially valuable for solving many optimization problems, because such optimization neural networks are devoid of spurious suboptimal responses for any choice of the activation function in the proper class and of the other network parameters. Moreover, for a globally exponentially stable (GES) neural network (see [21]–[26]), we can make a quantitative analysis and thus know the convergence behavior of the neural network, such as its convergence rate and the estimated time at which the network arrives at a solution with a specified accuracy.

To explore ABST or AEST of continuous-time recurrent neural networks, researchers have to constrain the connection weight matrix and the activation function of a neural network. For example, within the class of sigmoid activation functions, it is proved that negative semidefiniteness of the symmetric or noninhibitory lateral connection weight matrix of a neural-network model is the necessary and sufficient condition for ABST of the neural network [6], [7]. These ABST results are extended to AEST ones in [18] and [19], respectively. In [11], a conjecture is raised: the necessary and sufficient condition for ABST of the neural network is that its connection weight matrix $W$ belongs to the class of matrices such
that all eigenvalues of the matrix $WD_1 - D_2$ have negative real parts for arbitrary positive diagonal matrices $D_1$ and $D_2$. This condition is proven to be a necessary and sufficient condition for ABST of the neural network with two neurons [16]. The necessity of such a condition for ABST is proven in [11] and implies that all existing sufficient conditions for ABST in the literature are special cases. However, whether or not such a condition is sufficient for ABST of a general neural network remains unknown in the case of more than two neurons. Within the class of partially Lipschitz continuous and monotone nondecreasing activation functions (this class includes the sigmoidal activations as a special case), a recent AEST result is given in [13] under a mild condition that the connection weight matrix $W$ belongs to the class of additively diagonally stable matrices introduced in [2] (see Definition 8 in Section II); that is, for any positive diagonal matrix $D_1$, there exists a positive diagonal matrix $P$ such that $P(W - D_1) + (W - D_1)^{T}P$ is negative definite. This condition extends the condition in [14] that the connection weight matrix $W$ is an H-matrix with nonpositive diagonal elements. Within the class of locally Lipschitz continuous and monotone nondecreasing activation functions, some ABST results, such as the diagonal semistability result [17] and the quasidiagonal column-sum dominance result [3], can be found. It is remarked that the additive diagonal stability condition introduced in [2] is the mildest one among the known sufficient conditions for ABST of neural networks in the literature.

This paper is concerned with AEST of continuous-time recurrent neural networks with locally Lipschitz continuous and monotone nondecreasing activation functions. The additively diagonally stable connection weight matrices are verified to be able to guarantee AEST of the neural networks. The result extends and improves the existing ABST and AEST ones in the literature.

The remainder of this paper is organized as follows. In Section II, some preliminaries are presented. In Section III, we discuss the AEST result for the neural networks. Finally, we make concluding remarks in Section IV.

II. PRELIMINARIES

A. Definitions

Consider a typical continuous-time recurrent neural-network model of the form

\dot{x}(t) = -Dx(t) + Wg(x(t)) + u \qquad (1)

where $x = (x_1, \ldots, x_n)^{T} \in \mathbb{R}^n$ is the state vector, $D = \mathrm{diag}(d_1, \ldots, d_n)$ with $d_i > 0$ is a positive diagonal matrix, $W = (w_{ij}) \in \mathbb{R}^{n \times n}$ is a connection weight matrix, $u \in \mathbb{R}^n$ is an input vector, and $g(x) = (g_1(x_1), \ldots, g_n(x_n))^{T}$ is a nonlinear vector-valued activation function from $\mathbb{R}^n$ to $\mathbb{R}^n$. In this paper, let $\mathcal{F}_l$ denote the class of locally Lipschitz continuous (l.l.c.) and monotone nondecreasing activation functions; that is, each $g_i$ is nondecreasing and, for any $x \in \mathbb{R}$, there exist a $\delta_x > 0$ and a constant $L_x > 0$ such that

|g_i(s) - g_i(\rho)| \le L_x |s - \rho| \quad \text{for all } s, \rho \in [x - \delta_x, x + \delta_x]. \qquad (2)

Let $\mathcal{F}_p$ denote the class of partially Lipschitz continuous (p.l.c.) and monotone nondecreasing activation functions [14]; that is, each $g_i$ is nondecreasing and, for any $x \in \mathbb{R}$, there exists $L_x > 0$ such that $|g_i(x) - g_i(\rho)| \le L_x |x - \rho|$ for all $\rho \in \mathbb{R}$.

Let $\mathcal{F}_g$ denote the class of globally Lipschitz continuous (g.l.c.) and monotone nondecreasing activation functions. It can be seen that $\mathcal{F}_g \subset \mathcal{F}_l$ and $\mathcal{F}_g \subset \mathcal{F}_p$. As for the relationship between $\mathcal{F}_l$ and $\mathcal{F}_p$, it remains unknown to the best of our knowledge. However, all (or almost all) of the continuous activation functions in the existing literature belong to $\mathcal{F}_l$, although they may not be globally or partially Lipschitz continuous.

Definition 1 [6], [14]: An equilibrium $x^{*}$ of the neural network (1), which satisfies $-Dx^{*} + Wg(x^{*}) + u = 0$, is said to be globally asymptotically stable (GAS) if it is locally stable in the sense of Lyapunov and globally attractive. The equilibrium $x^{*}$ is said to be GES if there exist $\epsilon > 0$ and $k \ge 1$ such that, for any initial time $t_0$ and initial state $x(t_0)$, the positive half trajectory $x(t)$ of the neural network (1) satisfies

\|x(t) - x^{*}\| \le k \|x(t_0) - x^{*}\| e^{-\epsilon (t - t_0)}, \quad t \ge t_0.
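To make model (1) and the exponential estimate in Definition 1 concrete, the following minimal sketch simulates a small instance of (1) and fits the constants of a GES bound to the resulting trajectory. All numerical values, the tanh activation (globally Lipschitz continuous and monotone nondecreasing), and the helper name `simulate` are illustrative assumptions, not data taken from the paper.

```python
import numpy as np

# Illustrative 2-neuron instance of model (1): dx/dt = -D x + W g(x) + u.
D = np.diag([1.0, 2.0])
W = np.array([[-1.0, 0.5],
              [-0.5, -2.0]])   # W + W^T is negative definite, so convergence is expected
u = np.array([0.3, -0.1])
g = np.tanh                    # monotone nondecreasing, globally Lipschitz activation

def simulate(x0, T=20.0, dt=1e-3):
    """Forward-Euler integration of model (1) starting from x0."""
    steps = int(T / dt)
    traj = np.empty((steps + 1, len(x0)))
    traj[0] = x0
    for k in range(steps):
        x = traj[k]
        traj[k + 1] = x + dt * (-D @ x + W @ g(x) + u)
    return traj

traj = simulate(np.array([2.0, -3.0]))
x_star = traj[-1]                                  # numerical equilibrium estimate
err = np.linalg.norm(traj - x_star, axis=1)
t = np.linspace(0.0, 20.0, err.size)

# Fit log ||x(t) - x*|| ~ log(k) - eps * t, as in the GES bound of Definition 1,
# using only the part of the trajectory still above numerical noise.
mask = err > 1e-8
slope, intercept = np.polyfit(t[mask], np.log(err[mask]), 1)
print("estimated rate eps ~= %.3f, constant k ~= %.3f" % (-slope, np.exp(intercept)))
```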

Definition 2 [4], [6], [11]: The neural network (1) is said to be ABST (respectively, AEST) with its activation function in the class $\mathcal{F}_l$ (or $\mathcal{F}_p$) if it possesses a unique and GAS (respectively, GES) equilibrium for any activation function belonging to $\mathcal{F}_l$ (or $\mathcal{F}_p$), any input vector $u$, and any positive diagonal matrix $D$. It is obvious that an AEST neural network (1) is also ABST, since the GES property implies the GAS one.

Definition 3 [27], [28]: An $n \times n$ matrix $A$ is said to belong to the class $P_0$ if $A$ satisfies one of the following equivalent conditions.
i) All principal minors of $A$ are nonnegative.
ii) For any $x \in \mathbb{R}^n$ with $x \ne 0$, there exists an index $k$ such that $x_k \ne 0$ and $x_k (Ax)_k \ge 0$, where $(Ax)_k$ denotes the $k$th component of the vector $Ax$.
iii) For any positive diagonal matrix $D_0 = \mathrm{diag}(d_1, \ldots, d_n)$, $\det(A + D_0) \ne 0$.

Definition 4 [29]: Let the $n \times n$ matrix $A$ have nonpositive off-diagonal elements; then each of the following conditions is equivalent to the statement "$A$ is an M-matrix."
i) All principal minors of $A$ are nonnegative, i.e., $A \in P_0$.
ii) $A + D_0$ is nonsingular for any positive diagonal matrix $D_0$.

Definition 5 [30]: An $n \times n$ matrix $A = (a_{ij})$ is said to be an H-matrix if its comparison matrix $\hat{A} = (\hat{a}_{ij})$, defined by $\hat{a}_{ij} = |a_{ij}|$ if $i = j$ and $\hat{a}_{ij} = -|a_{ij}|$ if $i \ne j$, is an M-matrix.

Definition 6 [2]: An $n \times n$ matrix $W$ is said to belong to the class $\mathcal{H}_0$ if $W$ is an H-matrix with nonpositive diagonal elements.

Definition 7 [11]: An $n \times n$ matrix $W$ is said to belong to the class $\mathcal{D}_s$ (respectively, $\mathcal{D}_{ss}$) if there exists a positive diagonal matrix $P$ such that $PW + W^{T}P$ is negative definite (respectively, negative semidefinite). Such a matrix is also called diagonally stable (respectively, diagonally semistable).

In the sequel, $\lambda_{\min}(\cdot)$ denotes the minimum eigenvalue of a symmetric matrix and $I$ denotes the identity matrix. Let $\|\cdot\|$ denote the norm of a matrix; in particular, for a vector $x$, $\|x\| = (x^{T}x)^{1/2}$. For any continuous function $h(t)$, the upper right Dini derivative of $h(t)$ is defined as

D^{+}h(t) = \limsup_{\theta \to 0^{+}} \frac{h(t+\theta) - h(t)}{\theta}.
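Definitions 4–6 lend themselves to direct numerical checks for a given matrix. The sketch below is one such rendering; the helper names, the tolerance, the class label $\mathcal{H}_0$, and the test matrix are our own illustrative choices, and the eigenvalue test for (possibly singular) M-matrices is the standard characterization valid for matrices with nonpositive off-diagonal entries.

```python
import numpy as np

def comparison_matrix(W):
    """Comparison matrix of Definition 5: |w_ii| on the diagonal, -|w_ij| off it."""
    C = -np.abs(W)
    np.fill_diagonal(C, np.abs(np.diag(W)))
    return C

def is_M_matrix(A, tol=1e-9):
    """(Possibly singular) M-matrix test: nonpositive off-diagonal entries and
    every eigenvalue with nonnegative real part."""
    off = A - np.diag(np.diag(A))
    if np.any(off > tol):
        return False
    return bool(np.all(np.linalg.eigvals(A).real >= -tol))

def is_H_matrix(W):
    """Definition 5: W is an H-matrix if its comparison matrix is an M-matrix."""
    return is_M_matrix(comparison_matrix(W))

def in_class_H0(W, tol=1e-9):
    """Definition 6 (class denoted H_0 here): an H-matrix with nonpositive diagonal."""
    return is_H_matrix(W) and bool(np.all(np.diag(W) <= tol))

W = np.array([[-2.0, 1.0, 0.0],
              [0.5, -3.0, 1.0],
              [0.0, 0.5, -1.0]])     # arbitrary test matrix
print(in_class_H0(W))                # True for this example
```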

Definition 8 [2]: An $n \times n$ matrix $W$ is said to belong to the class $\mathcal{A}_{ds}$ if, for any positive diagonal matrix $D_1$, there exists a positive diagonal matrix $P$ such that $P(W - D_1) + (W - D_1)^{T}P$ is negative definite. Such a matrix is also called additively diagonally stable.

Definition 9 [11]: An $n \times n$ matrix $W$ is said to belong to the class $\mathcal{A}_0$ if, for any positive diagonal matrices $D_1$ and $D_2$, the matrix $WD_1 - D_2$ is stable, i.e., it has all eigenvalues with negative real parts.

Definition 10: An $n \times n$ matrix $W$ is said to belong to the class $\mathcal{A}_{ds}^{1}$ if, by some row exchange operations and corresponding column exchange operations, $W$ can become

\begin{bmatrix} W_{11} & W_{12} & \cdots & W_{1m} \\ 0 & W_{22} & \cdots & W_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & W_{mm} \end{bmatrix} \qquad (3)

where each diagonal block $W_{kk}$ ($k = 1, \ldots, m$) is an $n_k \times n_k$ matrix belonging to the class $\mathcal{H}_0$ or to the class $\mathcal{D}_{ss}$, with $n_1 + \cdots + n_m = n$; each block above the diagonal is a matrix with proper dimensions; and each zero is a zero matrix with proper dimensions. The form (3) is obviously equivalent to the following form:

\begin{bmatrix} W_{mm} & 0 & \cdots & 0 \\ W_{m-1,m} & W_{m-1,m-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ W_{1m} & W_{1,m-1} & \cdots & W_{11} \end{bmatrix}

So, we only consider the form (3). It should be noted that each block $W_{ij}$ ($i < j$) in (3) may be an arbitrary constant matrix.

Remark 1: When $m \ge 2$, $W_{11}, \ldots, W_{mm} \in \mathcal{D}_{ss}$ cannot guarantee $W \in \mathcal{D}_{ss}$. However, $W_{11}, \ldots, W_{mm} \in \mathcal{H}_0$ must imply $W \in \mathcal{H}_0$, by using the definition of $\mathcal{H}_0$ and noting a basic fact: given any square matrix

\begin{bmatrix} A & B \\ C & E \end{bmatrix}

where $A$ and $E$ are two square matrices, if $B$ or $C$ is a zero matrix, then its determinant equals $\det(A)\det(E)$.
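Membership in the classes of Definitions 8 and 9 is a universally quantified property, so it cannot be certified by sampling, but sampling can refute it. The sketch below searches for a violating pair $(D_1, D_2)$ for the class denoted $\mathcal{A}_0$ above; the matrix form $WD_1 - D_2$ follows the statement of Definition 9 as given here, and the test matrix, trial count, and scale are illustrative assumptions. A corresponding check for Definition 8 would require searching for the diagonal matrix $P$ (e.g., with an LMI solver) and is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_hurwitz(M, tol=1e-9):
    return bool(np.all(np.linalg.eigvals(M).real < -tol))

def sample_A0_violation(W, trials=2000, scale=10.0):
    """Random search for positive diagonal D1, D2 with W @ D1 - D2 not Hurwitz.
    Returning None only means no violation was found among the samples (no proof
    of membership); a returned pair (D1, D2) disproves membership in A_0."""
    n = W.shape[0]
    for _ in range(trials):
        d1 = rng.uniform(1e-3, scale, n)
        d2 = rng.uniform(1e-3, scale, n)
        if not is_hurwitz(W @ np.diag(d1) - np.diag(d2)):
            return np.diag(d1), np.diag(d2)
    return None

W = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # arbitrary test matrix
print(sample_A0_violation(W) is None)
```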

B. Relationships Among Different Classes of Connection Weight Matrices

Lemma 1: $\mathcal{A}_{ds} = \mathcal{A}_{ds}^{1}$. See the Appendix for the proof.

The following example points out the fact that there exists a matrix $W$ such that $W \in \mathcal{A}_0$ and $W \notin \mathcal{A}_{ds}$.

Example 1: Consider the $3 \times 3$ matrix $W$. Assume $W \in \mathcal{A}_{ds}$. In view of the definition of $\mathcal{A}_{ds}$, for any positive diagonal matrix $D_1 = \mathrm{diag}(d_1, d_2, d_3)$ there exists a positive diagonal matrix $P = \mathrm{diag}(p_1, p_2, p_3)$ such that $P(W - D_1) + (W - D_1)^{T}P$ is negative definite, from which one obtains a set of inequality constraints on $p_1$, $p_2$, and $p_3$. A suitable choice of the entries of $D_1$ makes these constraints contradictory, so no such $P$ exists. As a result, $W \notin \mathcal{A}_{ds}$.

Now, we will show $W \in \mathcal{A}_0$. In view of the definition of $\mathcal{A}_0$, for any positive diagonal matrices $D_1 = \mathrm{diag}(d_1, d_2, d_3)$ and $D_2 = \mathrm{diag}(\delta_1, \delta_2, \delta_3)$, our purpose is to prove that the matrix $WD_1 - D_2$ is stable; that is, all roots of the characteristic polynomial $\det(\lambda I - (WD_1 - D_2))$ have negative real parts. A lengthy but straightforward computation shows that the coefficients of this cubic polynomial satisfy the conditions that are necessary and sufficient for all of its roots to have negative real parts according to the Routh stability criterion [31]. So, $W \in \mathcal{A}_0$.
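Example 1 verifies membership in $\mathcal{A}_0$ by applying Routh's stability criterion [31] to the cubic characteristic polynomial of $WD_1 - D_2$. The sketch below automates that check for one sampled pair; the $3 \times 3$ matrix used here is a placeholder rather than the matrix of Example 1, and the function names are ours.

```python
import numpy as np

def routh_hurwitz_cubic(p):
    """Routh conditions for p(s) = s^3 + a2 s^2 + a1 s + a0, with coefficients
    given as [1, a2, a1, a0]: all roots lie in the open left half-plane iff
    a2 > 0, a0 > 0 and a2*a1 > a0."""
    _, a2, a1, a0 = p
    return a2 > 0 and a0 > 0 and a2 * a1 > a0

def WD1_minus_D2_is_hurwitz(W, d1, d2):
    """Stability check of W @ diag(d1) - diag(d2) for a 3x3 matrix W via the
    Routh criterion applied to its characteristic polynomial."""
    M = W @ np.diag(d1) - np.diag(d2)
    p = np.poly(M)            # characteristic polynomial, leading coefficient 1
    return routh_hurwitz_cubic(p)

# Placeholder matrix for illustration only (not the matrix of Example 1).
W = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0],
              [0.0, -1.0, 0.0]])
print(WD1_minus_D2_is_hurwitz(W, d1=np.array([1.0, 2.0, 0.5]),
                              d2=np.array([0.3, 0.7, 1.1])))
```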

Now, we outline the relationships among the different classes of connection weight matrices. It is shown in [2] that $\mathcal{H}_0 \subseteq \mathcal{A}_{ds}$. According to the definition of $\mathcal{A}_{ds}$, one can also see that $\mathcal{D}_s \subseteq \mathcal{A}_{ds}$ and $\mathcal{D}_{ss} \subseteq \mathcal{A}_{ds}$; quasidiagonal dominance [20] and strict diagonal dominance [15] of the connection weight matrix also place it in the above classes. Lemma 1 shows that $\mathcal{A}_{ds} = \mathcal{A}_{ds}^{1}$. If $W \in \mathcal{A}_{ds}$, then the neural network (1) with a sigmoid activation function is ABST [2], for which $W \in \mathcal{A}_0$ is a necessary condition [11]; this implies $\mathcal{A}_{ds} \subseteq \mathcal{A}_0$, and, noting Example 1, $\mathcal{A}_{ds}$ is in fact a strict subclass of $\mathcal{A}_0$. Based on Definition 9, if $W \in \mathcal{A}_0$, then for any positive diagonal matrices $D_1$ and $D_2$ the matrix $WD_1 - D_2$ is stable and consequently nonsingular; therefore, by condition iii) in Definition 3, $W \in \mathcal{A}_0$ implies $-W \in P_0$. As shown in [6, Example 1], a matrix whose negative belongs to $P_0$ may have an eigenvalue with positive real part, whereas every matrix in $\mathcal{A}_0$ must have all eigenvalues with nonpositive real parts [11]; thus, $\mathcal{A}_0$ is a strict subclass of $\{W : -W \in P_0\}$. When $W$ is symmetric ($W = W^{T}$), membership in $\mathcal{A}_0$ is equivalent to diagonal semistability, and when $W$ is noninhibitory lateral ($w_{ij} \ge 0$ for $i \ne j$), it is equivalent to membership in $\mathcal{H}_0$. The above discussion reveals the relationships among the different classes; it is easy to see that $\mathcal{A}_0$ is the largest possible class for ABST of the neural network (1).

Many existing results on ABST and AEST of the neural network (1) in the literature impose conditions on $W$ drawn from the above classes, combined with different classes of activation functions. For example, for the class of sigmoid functions, negative semidefiniteness of $W$ was proved in [6] and [7] to be the necessary and sufficient condition for ABST of the neural network (1) in the cases of symmetric and noninhibitory lateral networks, respectively; the two ABST results were extended to AEST ones in [18] and [19], respectively. In [20], $W$ being quasidiagonally row-sum and column-sum dominant was shown to ensure ABST of the neural network (1). In [11], an ABST result was presented under a diagonal stability condition which extends the condition in [10]. In [2], $W \in \mathcal{A}_{ds}$ was given as a sufficient condition for ABST of the neural network (1); this condition was also proven to be necessary and sufficient for ABST of the neural network (1) with two neurons [16]. For an activation function class defined by a growth condition, an ABST result was introduced in [1] under the assumption of the existence of the equilibrium of the neural network (1), and for a related activation function class an ABST result was developed in [4]; both of these classes are contained in $\mathcal{F}_l$. For the activation function class $\mathcal{F}_p$, under the condition $W \in \mathcal{H}_0$, an AEST result was provided in [14], which extends the existing ABST results in [1], [4], [7], and [20] as far as activation functions are concerned. A more recent result [13] shows that $W \in \mathcal{A}_{ds}$ can yield AEST of the neural network (1) for activation functions in $\mathcal{F}_p$. For the activation function class $\mathcal{F}_l$, an ABST result under the diagonal semistability condition was proposed in [17], and when $W$ is quasidiagonally column-sum dominant, a GAS (implying ABST) result was given in [3].

III. MAIN RESULT

In this section, we will prove that $W \in \mathcal{A}_{ds}$ is a sufficient condition for AEST of the general neural network (1) with its activation function in $\mathcal{F}_l$. The result extends and improves the existing ABST and AEST ones in the literature. To introduce the main result, we first give some lemmas.

Lemma 2 ([34, Proposition 1]): The neural network (1) has a unique equilibrium for any continuous and monotone nondecreasing activation function $g$, any input vector $u$, and any positive diagonal matrix $D$ if and only if $W$ belongs to a matrix class characterized in [34]; this class contains $\mathcal{A}_{ds}$.

According to Lemma 2, $W \in \mathcal{A}_{ds}$ can ensure that the neural network (1) has a unique equilibrium, denoted by $x^{*}$. In this case, let $y(t) = x(t) - x^{*}$. Then the neural network (1) can be transformed into the following equivalent system with a unique equilibrium at the origin:

which contradicts as . If , based on the continuity of the function on , is finite; that is, clearly is finite contradicting . Therefore, there exists such that (9) is true. Consequently, Propa positive constant erty B is true by noting that (9) is equivalent to (6). Lemma 3 (i.e., [32, Lemma (b)]): If a nonnegative function defined in satisfies or where constants and , then there exists positive constant such that

(4) where and from which we have three useful properties as follows. Property A: There exist positive functions such that (5) Property B: There exists positive constants

such that (6)

where is any given bounded for any . interval Property C: There exist positive constants such that (7) is any given bounded interval. where Property A is obvious by considering a simple choice, for ex. Property C comes from Property ample, B directly. Now, we illustrate Property B as follows: Since is l.l.c. and monotone nondecreasing, for any there exist an and a constant such that and we have (8)

Lemma 4 [17, Th. 2]: Let . If , then the for any neural network (1) has a unique GAS equilibrium . , then the neural network (1) with its Lemma 5: If is AEST. activation function in the class of , we consider the Proof: Since equivalent model (4) only. In view of Lemma 4, model (4) is , and consequently there exist constants GAS at such that . According to , there Properties B and C we have (6) and (7). Since diag such exists a positive diagonal matrix . that Define a differentiable function

(10) (as illustrated in [24], since may takes zeroes it results that is not a Lyapunov function). Computing the the function along the positive half trajectory of (4) time derivative of yields Obviously,

Next, by contradiction, we will show that there exists a positive such that constant

and

(9)

Assume (9) does not hold. Then, we may select three se, and such that quences , and where each each quences

If derive

and . Since , there must exist two subseand such that and . So, . , then there exists some integer such that , when . In view of (8) we can

and so have

for all

. By (10), we

from (11)


Define continuous functions where in terms of (7) we readily obtain

implying

,

. Then,

for all for

. Thus, ;

and (12)

Based on (11) it is seen that

So, from (4) we have

Based on Lemma 3, $y(t)$ converges to zero exponentially with a guaranteed convergence rate. Hence, the origin is the GES equilibrium of model (4); that is, $x^{*}$ is a GES equilibrium of the neural network (1). This means that the neural network (1) with its activation function in the class $\mathcal{F}_l$ is AEST. The proof is complete.

Lemma 6: If $W \in \mathcal{H}_0$, then the neural network (1) with its activation function in the class $\mathcal{F}_l$ is AEST. See the Appendix for the proof.

Lemma 7: The neural network (1) with its activation function in the class $\mathcal{F}_l$ is AEST if $W \in \mathcal{A}_{ds}^{1}$. See the Appendix for the proof.

According to Lemmas 1 and 7, we have the following immediate result.

Theorem 1: The neural network (1) with its activation function in the class $\mathcal{F}_l$ is AEST if $W \in \mathcal{A}_{ds}$.

Remark 2: Lemma 5 extends the ABST result of [17] to an AEST one. Lemma 6 extends the GAS (and hence ABST) result of [3], in which $W$ is quasidiagonally column-sum dominant; moreover, that ABST result is extended to an AEST one. As far as activation functions are concerned, the AEST result in [14] is a special case of Lemma 6. Theorem 1 extends Lemmas 5 and 6 by noting that $\mathcal{D}_{ss} \subseteq \mathcal{A}_{ds}$ and $\mathcal{H}_0 \subseteq \mathcal{A}_{ds}$. Although a clear relationship between $\mathcal{F}_l$ and $\mathcal{F}_p$ is not available to the best of our knowledge, all (or almost all) of the continuous activation functions in the existing literature belong to $\mathcal{F}_l$, while some of them may fail to be partially Lipschitz continuous. Hence, Theorem 1 is completely different from the main result in [13]. In terms of Lemma 1, we may write any matrix in $\mathcal{A}_{ds}$ in the much simpler form (3) through some row exchanges and corresponding column exchange operations (it is notable that any simplified description of the class $\mathcal{A}_{ds}$ or of the class $\mathcal{A}_0$ is very meaningful work). Since the additive diagonal stability condition is the mildest one among the known sufficient conditions for ABST and AEST of the neural networks, and since the general activation function class $\mathcal{F}_l$ includes the class of sigmoid functions, the activation classes of [1] and [4], and $\mathcal{F}_g$ as strict subclasses, Theorem 1 actually includes many of the existing ABST and AEST results in the literature (listed in the second to last paragraph of Section II) as special cases.
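Remark 2 rests on the inclusions $\mathcal{D}_{ss} \subseteq \mathcal{A}_{ds}$ and $\mathcal{H}_0 \subseteq \mathcal{A}_{ds}$ (in the notation used here). The first inclusion can be spot-checked numerically: a positive diagonal $P$ certifying diagonal semistability of $W$ also certifies $P(W - D_1) + (W - D_1)^{T}P < 0$ for every positive diagonal $D_1$. The matrix $W$, the choice $P = I$, and the sampling loop below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def is_neg_semidef(S, tol=1e-9):
    return bool(np.all(np.linalg.eigvalsh((S + S.T) / 2) <= tol))

def is_neg_def(S, tol=1e-9):
    return bool(np.all(np.linalg.eigvalsh((S + S.T) / 2) < -tol))

# Illustrative pair (not from the paper): W is diagonally semistable with P = I,
# since W + W^T is negative semidefinite.
W = np.array([[0.0, -2.0],
              [2.0, -1.0]])
P = np.eye(2)
assert is_neg_semidef(P @ W + W.T @ P)

# The same P then certifies additive diagonal stability, because
# P(W - D1) + (W - D1)^T P = (P W + W^T P) - 2 P D1, and subtracting the
# positive diagonal matrix 2 P D1 makes the semidefinite form strictly negative.
for _ in range(1000):
    D1 = np.diag(rng.uniform(1e-3, 10.0, 2))
    M = W - D1
    assert is_neg_def(P @ M + M.T @ P)
print("diagonal semistability certificate also certifies additive diagonal stability")
```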


As pointed out in Section II, $\mathcal{A}_{ds}$ is a strict subclass of $\mathcal{A}_0$. Noting that $W \in \mathcal{A}_0$ is a necessary condition for ABST of the neural network, the authors of [11] postulated the conjecture that $W \in \mathcal{A}_0$ is a necessary and sufficient condition for ABST of the neural network with its activation function in an appropriate activation class. This conjecture indeed holds for some special neural networks.

1) In the cases of one neuron and two neurons, membership in $\mathcal{A}_0$ coincides with the conditions of Lemmas 5 and 6. Hence, based on Lemmas 5 and 6, $W \in \mathcal{A}_0$ is necessary and sufficient for AEST of the neural network with one neuron or two neurons and its activation function in the class $\mathcal{F}_l$.

2) For a symmetric ($W = W^{T}$) connection weight matrix $W$, one can see that $W \in \mathcal{A}_0$ if and only if $W$ is diagonally semistable; for a noninhibitory lateral ($w_{ij} \ge 0$, $i \ne j$) connection weight matrix $W$, one can see that $W \in \mathcal{A}_0$ if and only if $W \in \mathcal{H}_0$. Hence, based on Lemmas 5 and 6, $W \in \mathcal{A}_0$ is necessary and sufficient for AEST of the neural network with a symmetric or noninhibitory lateral connection weight matrix and its activation function in the class $\mathcal{F}_l$.

For a general neural network with more than two neurons, however, this conjecture is still open and challenging. Therefore, further investigation on ABST (or AEST) of the neural networks will focus on the following aspects: find a new method to prove the conjecture for appropriate activation classes in the case of more than two neurons, or find appropriate connection weight matrices outside $\mathcal{A}_{ds}$ but inside $\mathcal{A}_0$ which can guarantee ABST or AEST of the general neural networks. In the latter case, any new result will be different from the existing ABST or AEST ones in the literature.

Example 2: Consider the neural network (1) with the connection weight matrix $W$. By exchanging the first row and the second row, and the first column and the second column, of $W$, we get a matrix of the form (3), where


existing ABST and AEST results of the neural networks in the literature. APPENDIX Proof of Lemma 1: We first show . Given any . In view of the definition of , can be made to have the form (3) by taking some row exchange operations and corresponding column exchange operations. For convenience, has the form (3) and (the reason is similar we suppose for other cases), i.e.,

where

or with dimensions, , . Since , for any positive diagonal matrix diag , where diag diag , there exdiag a positive diagonal matrix that ,

Fig. 1. Exponential convergence of positive half trajectories in Example 2.
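The experiment reported in Fig. 1 (40 uniformly distributed random initial points whose trajectories converge exponentially to a single equilibrium) can be mirrored by the following sketch. The network data of Example 2 are not reproduced here, so $D$, $W$, $u$, the cube, and the activation are placeholder choices, with $W$ selected so that it is an H-matrix with nonpositive diagonal entries and Lemma 6, as reconstructed above, applies.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder network data for an Example 2-style experiment.
# g_i(s) = max(s, 0) is locally Lipschitz continuous and monotone nondecreasing.
D = np.diag([1.0, 1.5, 2.0])
W = np.array([[-1.0, 0.4, 0.0],
              [0.0, -2.0, 0.3],
              [0.2, 0.0, -1.5]])
u = np.array([0.5, -0.2, 0.1])
g = lambda x: np.maximum(x, 0.0)

def run(x0, T=20.0, dt=2e-3):
    """Integrate model (1) with forward Euler and return the final state."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * (-D @ x + W @ g(x) + u)
    return x

# 40 uniformly distributed random initial points in the cube [-5, 5]^3.
finals = np.array([run(rng.uniform(-5.0, 5.0, 3)) for _ in range(40)])
spread = np.max(np.linalg.norm(finals - finals.mean(axis=0), axis=1))
print("all trajectories end within %.2e of a common point" % spread)
```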

(13) , . It is easy to see that , , , . Hence, . , (this property As shown in Example 1 of [2], is pointed out in Remark 1). Consequently, based on Theorem 1 is this network with its activation function in the class of diag , , AEST. To simulate, let , . Fig. 1 shows that all the positive half trajectories converge exponentially to the unique equilibfrom 40 uniformly disrium tributed random initial points in the cube . In this case, it is easy to check that are , not partially Lypschitz continuous and consequently, so, the result in [13] cannot be used. IV. CONCLUSION In this paper, we have investigated AEST of a general class of continuous-time recurrent neural networks with locally Lipschitz continuous and monotone nondecreasing activation functions. We have proven that the additively diagonally stable connection weight matrices can guarantee AEST of the neural networks. The obtained AEST result actually improves upon the

and ists such and where diag and diag . Let diag where constant . Compute as shown in the equation at the bottom of the page. To guarantee , one may choose , shown in the equation at the bottom of the page, that is, for any positive there exists a positive diagonal matrix diagonal matrix such that . From the definition , . So, . of ; that is, given any , We then show . This claim is proved by mathematical induction in , one can see that the following. In the case of . In the case , as discussed in [13], of , and . When , diag we can check ; that letting , as a result, . is, , one can see When . This actually shows that . implies in cases Now we assume that . We want to prove implies of , in case of .


Let where scalar a positive diagonal matrix that let

. If diag

, then there exists such . For , ;


diag diag Since

Obviously, . Since , it can be seen , . We consider three cases. , some (not all) of are 1) When bounded. Without a loss of generality, we further suppose are unbounded and are bounded where . Consider , , . Suppose . When is small enough and is big enough (for example, , from it follows that

we have for . If some of any are unbounded, then, similar to 1) above, one can see . If all of are that bounded, then there must exist a subsequence and such that . Moreover, for any . Let diag . Then, . In this case, if , similar to a) above, . If is a positive semidefinite we have ), let diagonal matrix (for example, diag diag

that is,

diag which implies that is bounded by noting being is unbounded. Thus, bounded. This contradicts that , , . Then we may by decompose

Since

(14) and are and where matrices, respectively, zero is an zero matrix, is an matrix. Since , we also and and . Then, by assumption, have and . Consequently, by noting (14). , are all bounded. Then, 2) When and there must exist a sequence such that , . Moreover, for any . Let diag . Then, . , we have a) When

This means . As a result, . is a positive semidefinite diagonal matrix b) When ), let (for example, diag

we have for any as i) above, we also have , all of 3) When diag bounded. Let diag . It is easy to see that

. Note that , then, similar to . are undiag

that is, . Due to which is bounded, similar to 1) and 2) above, we can derive . in case of Based on 1)–3) above, we can conclude . So, . The proof is complete. To prove Lemma 6, we introduce two useful lemmas. matrix Lemma 8: ([14] and [29]): Let the have nonpositive off-diagonal elements, then each of the following condition is equivalent to the statement “A is a nonsingular -matrix.” ( ) All principal minors of are positive.


( ) A has all positive diagonal elements and there exists a diag such that is positive diagonal matrix strictly diagonally dominant; that is,

tive in case of . implies its comparison mais an -matrix whose diagonal elements trix By ( ) of Definition 4, for are diag the any positive diagonal matrix is a nonsingular -matrix. Let matrix

The following Lemma directly comes from Theorem 5 and its proof in [33]. be a vector field on , a map from Lemma 9: Let , and consider the dynamical system

(18)

(15) and This is the cascade of the two systems , where is a parameter for the latter system. 1) System is GAS and is GAS (15) is GAS if for every parameter value . 2) System (15) is convergent if is convergent and is GAS for every parameter value . , we will Proof of Lemma 6: Since focus on the equivalent model (4). The proof is divided by three steps. is a locally stable equilibrium of Step 1) We show that . Then, from model (4). Let Property B we have (6). As a result, the linearized system of model (4) admits

satisfy (5). Hence, based where on condition ( ) in Lemma 8, there exists a positive diag such diagonal matrix , that (19) which is Consider the function in the proof the same as the function (Case 1) of [20, Th. 3.1 ] where the index , , is such . One that is continuous on and the can see that exists on . Then Dini derivative of

(16) diag with . If is a positive definite diagonal matrix, based on the definition of , it is a stable matrix by is easy to see that . If is a positive semidefinoting nite diagonal matrix, assume and where and . Let , , be the remainder submatrices after deleting the th row and column, , th row and column of , , , respectively. One can see that stability of is equivalent to that of . Due we have . Based on the above to discussion for the positive definite diagonal matrix , is stable. So, is stable. This is a locally stable equilibrium of shows that model (4). Step 2) By mathematical induction on we show that is also globally attractive. Let denote the solution of model (4) with any given initial , model (4) becomes condition . When for some diagonal matrix

(17) . In this case, Based on Lemma 4, model (17) is GAS and definitely globally attractive. of model (4) Now assume that the equilibrium . We aim to prove is globally attractive in case of of model (4) is globally attracthe equilibrium

from

from from

by which we can obtain (20) . In terms of , there must exist some fixed such that . and be the remainder subNow we let and after deleting the th matrices of and , respectively. Let row and column of , implying

, and . Then, the remainder equations after deleting the th equation of model (4) become (21)


Since , it can be seen that assumption, the following system:

. By


by which we have

(22) is globally attractive. Based on Step 1, system (22) is actually GAS. As a result (23) . Hence, from is GAS for every parameter value Lemma 9 it follows that system (21) is convergent. , . This actuThus, of model (4) is ally shows that the equilibrium globally attractive. Step 3) By Steps 1) and 2), we know that model (4) is GAS . Then, for any , is bounded at ; that is, there exist positive constants such that , . From Property C it follows that there exist positive constants such that (24) and ( ) of Definition 4, we By can see that for any positive diagonal madiag , its transposition trix is a nonsingular -matrix. Let (25) Hence, based on condition ( ) in Lemma 8, there diag exists a positive diagonal matrix such that

that is, we have

. Then, for each . This shows that model (4) is GES at ; that is, the neural network (1) is GES at . This implies that the neural network (1) with its activais AEST. The proof of Lemma 6 is tion function in complete. Proof of Lemma 7: Since , we will concentrate on the equivalent model (4). Based on the def, any can be made to have the form inition of (3) by taking some row exchange operations and corresponding column exchange operations. In the following, we only consider since the reason is similar for . In this case, when and (in (3)) , in view of Remark 1, . From Lemma 6 one can see that Lemma 7 holds. When and , in view of Lemmas 3, 5, 6, and 9 one can easily prove Lemma 7 by using the method in Step 3 of the proof of Lemma 6. The detail is left for readers to verify. Here, we only . focus on the case where Without a loss of generality, we suppose

where with

or

with dimensions, dimensions, zero is a zero matrix with dimensions, and is an dimensional matrix. In this case, we partition and , respectively, as diag

(26) Differentiating the Lyapunov function along the solution of (4), we obtain

where

diag

and

, diag , and (4) is equivalent to the following systems:

, . Then, model (27)

and (28)

([from (26)] [from (25)]

diag and where diag . Since or , it follows from Lemma 6 or Lemma 5 that (27) is GES; that is, there and such that , exist constants , where . Thus, noting Property C, one can see that there exists a positive such that constant

([from (24)] (29) On the other hand, based on that system

and Lemma 5, we see (30)


is GES and consequently

where

. In view of (28), for we may obtain

(31) . Hence, from (1) of is GAS for every parameter value Lemma 9 it follows that the cascade of systems (27) and (28) is and consequently, is bounded in (0,+ ). Let GAS at satisfy , , constants . Based on Property C, there exist positive constants such that

Define

. Then (32) from

is GES, in the following we only need to prove Due to is GES. that For system (28) we still use the differentiable function in (10), namely (33)

where

diag

and the time derivative of (28) yields

and along the trajectory

. Let . Computing of system

Then, based on Lemma 3, there exists positive constant such that is GES at the convergence rate of at least where . So, (28) is GES. As a result, by recalling that (27) is also GES model (4) is GES at and model (4) is equivalent to (27) and (28). Hence, the neural . This actually means that the network (1) is GES at is AEST. neural network (1) with its activation function in The proof of Lemma 7 is complete. ACKNOWLEDGMENT The authors would like to thank the Associate Editor and the reviewers for their constructive comments and also thank one of the reviewers for pointing out [32] and suggesting the use of [33, Lemma 9] for simplifying our original proof of global convergence and stability of Lemma 7.

(from (29) and (32))

REFERENCES

(34) such So, in view of Lemma 3, there exists positive constant . On the other hand, note that , the proof of Lemma 5, we similarly have (12) . Hence,

[1] S. Arik and V. Tavsanoglu, “Absolute stability of nonsymmetric neural networks,” in Proc. 1996 IEEE Int. Symp. Circuits and Systems, vol. III, May 1996, pp. 441–444.
[2] S. Arik and V. Tavsanoglu, “A comment on ‘Comments on “Necessary and sufficient condition for absolute stability of neural networks”’,” IEEE Trans. Circuits Syst. I, vol. 45, pp. 595–596, May 1998.
[3] S. Arik, “Global asymptotic stability of a class of dynamical neural networks,” IEEE Trans. Circuits Syst. I, vol. 47, pp. 568–571, Apr. 2000.
[4] S. Arik and V. Tavsanoglu, “A sufficient condition for absolute stability of a larger class of dynamical neural networks,” IEEE Trans. Circuits Syst. I, vol. 47, pp. 758–760, May 2000.
[5] M. Forti, S. Manetti, and M. Marini, “A condition for global convergence of a class of symmetric neural networks,” IEEE Trans. Circuits Syst. I, vol. 39, pp. 480–483, June 1992.
[6] M. Forti, S. Manetti, and M. Marini, “Necessary and sufficient condition for absolute stability of neural networks,” IEEE Trans. Circuits Syst. I, vol. 41, pp. 491–494, June 1994.
[7] M. Forti, A. Liberatore, S. Manetti, and M. Marini, “On absolute stability of neural networks,” in Proc. 1994 IEEE Int. Symp. Circuits and Systems, vol. 6, May 1994, pp. 241–244.
[8] M. Forti and A. Tesi, “New conditions for global stability of neural networks with application to linear and quadratic programming problems,” IEEE Trans. Circuits Syst. I, vol. 42, pp. 354–366, July 1995.
[9] J. C. Juang, “Stability analysis of Hopfield-type neural network,” IEEE Trans. Neural Networks, vol. 10, pp. 1366–1374, Nov. 1999.
[10] E. Kaszkurewicz and A. Bhaya, “On a class of globally stable neural circuits,” IEEE Trans. Circuits Syst. I, vol. 41, pp. 171–174, Feb. 1994.
[11] E. Kaszkurewicz and A. Bhaya, “Comments on ‘Necessary and sufficient condition for absolute stability of neural networks’,” IEEE Trans. Circuits Syst. I, vol. 42, pp. 497–499, Aug. 1995.
[12] K. Matsuoka, “On absolute stability of neural networks,” Trans. Inst. Electron., Inform. Commun. Eng., pp. 536–542, 1991.
[13] X. B. Liang and J. Wang, “An additive diagonal-stability condition for absolute exponential stability of a general class of neural networks,” IEEE Trans. Circuits Syst. I, vol. 48, pp. 1308–1317, Nov. 2001.
[14] X. B. Liang and J. Wang, “Absolute exponential stability of neural networks with a general class of activation functions,” IEEE Trans. Circuits Syst. I, vol. 47, pp. 1258–1263, Aug. 2000.
[15] X. B. Liang, “A comment on ‘On equilibria, stability, and instability of Hopfield neural networks’,” IEEE Trans. Neural Networks, vol. 11, pp. 1506–1507, Nov. 2000.
[16] X. B. Liang and J. Wang, “A proof of Kaszkurewicz and Bhaya’s conjecture on absolute stability of neural networks in two-neuron case,” IEEE Trans. Circuits Syst. I, vol. 47, pp. 609–611, Apr. 2000.
[17] X. B. Liang and L. D. Wu, “Comments on ‘New conditions for global stability of neural networks with application to linear and quadratic programming problems’,” IEEE Trans. Circuits Syst. I, vol. 44, pp. 1099–1101, Nov. 1997.
[18] X. B. Liang and T. Yamaguchi, “Necessary and sufficient conditions for absolute exponential stability of Hopfield-type neural networks,” IEICE Trans. Inform. Syst., vol. E79-D, pp. 990–993, 1996.
[19] X. B. Liang and T. Yamaguchi, “Necessary and sufficient conditions for absolute exponential stability of a class of nonsymmetric neural networks,” IEICE Trans. Inform. Syst., vol. E80-D, pp. 802–807, 1997.
[20] X. B. Liang and L. D. Wu, “New sufficient conditions for absolute stability of neural networks,” IEEE Trans. Circuits Syst. I, vol. 45, pp. 584–586, May 1998.
[21] S. I. Sudharsanan and M. K. Sundareshan, “Exponential stability and a systematic synthesis of a neural network for quadratic minimization,” Neural Networks, vol. 4, pp. 599–613, 1991.
[22] X. B. Liang and L. D. Wu, “Global exponential stability of a class of neural circuits,” IEEE Trans. Circuits Syst. I, vol. 46, pp. 748–751, June 1999.
[23] X. B. Liang and J. Si, “Global exponential stability of neural networks with globally Lipschitz continuous activations and its application to linear variational inequality problem,” IEEE Trans. Neural Networks, vol. 12, pp. 349–359, Mar. 2001.
[24] Y. Zhang, P. A. Heng, and A. W. C. Fu, “Estimate of exponential convergence rate and exponential stability for neural networks,” IEEE Trans. Neural Networks, vol. 10, pp. 1487–1493, Nov. 1999.
[25] Y. Xia and J. Wang, “Global asymptotic and exponential stability of a dynamic neural system with asymmetric connection weights,” IEEE Trans. Automat. Contr., vol. 46, pp. 635–638, Apr. 2001.
[26] H. Qiao, J. G. Peng, and Z. B. Xu, “Nonlinear measures: A new approach to exponential stability analysis for Hopfield-type neural networks,” IEEE Trans. Neural Networks, vol. 12, pp. 360–370, Mar. 2001.
[27] I. W. Sandberg and A. N. Willson, Jr., “Some theorems on properties of dc equations of nonlinear networks,” Bell Syst. Tech. J., vol. 48, no. 1, pp. 1–34, 1969.
[28] M. Fiedler and V. Ptak, “Some generalizations of positive definiteness and monotonicity,” Numer. Math., vol. 9, no. 2, pp. 163–172, 1966.
[29] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences. New York: Academic, 1979.
[30] D. Hershkowitz, “Recent directions in matrix stability,” Linear Algebra Applicat., vol. 172, pp. 161–186, 1992.
[31] G. F. Franklin, J. D. Powell, and A. Emami-Naeini, Feedback Control of Dynamic Systems. Reading, MA: Addison-Wesley, 1991.
[32] Y. G. Fang, M. A. Cohen, and T. G. Kincaid, “Dynamics of a winner-take-all neural network,” Neural Networks, vol. 9, pp. 1141–1154, 1996.
[33] M. W. Hirsch, “Convergent activation dynamics in continuous time networks,” Neural Networks, vol. 2, pp. 331–349, 1989.
[34] X. B. Liang, “On the existence and uniqueness of equilibrium for neural networks with continuous and monotonically increasing activation functions,” IEEE Trans. Neural Networks, submitted for publication.

Sanqing Hu received the B.S. degree from the Mathematics Department, Hunan Normal University, Hunan, China, the M.S. degree from the Automatic Control Department, Northeastern University, China, and the Ph.D. degree from the Department of Automation and Computer-Aided Engineering, The Chinese University of Hong Kong, Kowloon, Hong Kong, China, in 1992, 1996, and 2001, respectively. He is now working in the Department of Electrical and Computer Engineering, University of Illinois, Chicago, as a Visiting Scholar. His interests include robust control, nonlinear systems, neural networks, and signal processing.

Jun Wang (S’89–M’90–SM’93) received the B.S. degree in electrical engineering and the M.S. degree in systems engineering from Dalian University of Technology, Dalian, China, and the Ph.D. degree in systems engineering from Case Western Reserve University, Cleveland, OH. He was an Associate Professor at the University of North Dakota, Grand Forks. He is currently a Professor of automation and computer-aided engineering at the Chinese University of Hong Kong, Kowloon, Hong Kong. His current research interests include neural networks and their engineering applications. Prof. Wang is an Associate Editor of the IEEE TRANSACTIONS ON NEURAL NETWORKS and IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS.