A Delayed Projection Neural Network for Solving Linear Variational Inequalities

Long Cheng, Student Member, IEEE, Zeng-Guang Hou, Member, IEEE, and Min Tan, Member, IEEE
Abstract—In this paper, a delayed projection neural network is proposed for solving a class of linear variational inequality problems. The theoretical analysis shows that the proposed neural network is globally exponentially stable under different conditions. With the proposed linear matrix inequality (LMI) method, the monotonicity assumption on the linear variational inequality is no longer necessary. Moreover, by employing Lagrange multipliers, the proposed method can solve constrained quadratic programming problems. Finally, simulation examples are given to demonstrate the satisfactory performance of the proposed neural network.

Index Terms—Constrained quadratic programming, linear variational inequality, projection neural network, time delay.
I. INTRODUCTION

The variational inequality is considered a uniform approach to optimization and equilibrium problems [1]. Many scientific and engineering applications can be treated well in this general framework [2]–[4]. A survey of related results and applications can be found in [5]. However, traditional numerical algorithms for solving variational inequality problems may encounter a serious speed bottleneck due to the serial nature of the digital computers employed. How to obtain the real-time solution to the variational inequality has therefore been studied intensively. A promising approach for this purpose is to employ neural networks based on circuit implementation. In the past decade, several recurrent neural networks have been constructed to solve variational inequalities [6]–[13]. In [6], linear variational inequalities were solved by a recurrent neural network, which was an extension of the traditional projection and contraction methods. At almost the same time, in [7], another neural network model was suggested for solving linear variational inequalities, and the global exponential stability of the neural network was analyzed. A discrete-time recurrent neural network for linear variational inequalities was presented in [8]. In [9] and [10], a projection neural network was proposed to solve nonlinear variational inequalities with box
or sphere constraints. The neural network model for nonlinear variational inequalities with general convex inequality constraints was considered in [11]. More recently, a general projection neural network was presented to deal with general variational inequalities, which includes the previous neural networks as special cases [12]–[14]. However, most of the aforementioned neural networks require the monotonicity assumption on the variational inequality to guarantee their stability, and do not consider the time-delay effect in practical implementations. It is well known that, in the hardware implementation of neural networks, time delays inevitably occur in the signal communication among the neurons. This may lead to oscillation or instability of the networks. Therefore, the study of the dynamical behavior of delayed neural networks is attractive both in theory and in practice. A neural network with delays can be mathematically described by a functional differential equation, whose general concepts and characteristics are introduced in [15]. The dynamical behavior of many famous neural networks with delays, such as the delayed Hopfield neural network [16], the delayed cellular neural network [17], the delayed bidirectional associative memory neural network [18], and the delayed Cohen–Grossberg neural network [19], has been studied extensively in the literature. These delayed neural networks can be demonstrated to converge to an equilibrium point [16]–[18] or a periodic orbit [19]. For the case of convergence to the equilibrium point (a necessary condition for solving the variational inequality), several useful methods and techniques for delayed neural network stability can be found in [20]–[23] and the references therein. A common feature of these analysis approaches is as follows: first construct a Lyapunov–Krasovskii-type or Lyapunov–Razumikhin-type functional, and then use the linear matrix inequality (LMI) approach to obtain a negative–definite condition on the time derivative of the constructed functional. However, most of the current work focuses on the stability analysis of neural networks. To the best of the authors' knowledge, only very few papers consider delayed neural networks for solving the variational inequality problem. In [24]–[27], delayed neural networks have been proposed for solving projection equations and optimization problems. These neural networks can be extended to deal with the variational inequality problem because the variational inequality includes the projection equation and optimization problems as its special cases. In [24], the convex quadratic programming problem was solved by a delayed neural network based on the penalty method. The delay margin was determined by analyzing the characteristic equation of the neural network dynamics. In [25], Liu et al. proposed a delayed neural network for a class of linear projection equations. In that paper, the projection neural network
Fig. 1. Architecture of the delayed neural network defined by (5).
proposed in [9] was first extended to consider the transmission delay in the neural network implementation. The global exponential stability of the delayed neural network was obtained using the LMI approach. Moreover, when this delayed neural network is applied to solve bounded constrained quadratic programming problems, delay-independent stability criteria can also be obtained. As an alternative, in [26], Yang and Cao presented another delayed projection neural network for solving the convex optimization problem, where the time delay occurred in the nonlinear projection transformation component. Based on the same method, in [27], Yang and Cao also proposed a neural-network-based solution to the quadratic programming problem with equality constraints. Based on the neural network models in [25]–[27], an improved delayed projection neural network is proposed here to solve a class of linear variational inequality problems. By the proposed LMI approach, the monotonicity assumption on the linear variational inequality is no longer necessary. By the theory of functional differential equations [15], the proposed neural network is demonstrated to be globally exponentially stable under different conditions. In addition, by employing Lagrange multipliers, the proposed method can solve constrained quadratic programming problems. The remainder of this paper is organized as follows. Section II introduces the problem formulation, the model of the proposed neural network, and some preliminary results. Section III provides the theoretical results on the global exponential stability of
the proposed neural network. Section IV analyzes the capacity of the delayed neural network proposed in [27]. The constrained quadratic programming problem is solved by the proposed delayed neural network in Section V. Illustrative examples are given in Section VI. Section VII concludes this paper with final remarks.

II. PROBLEM FORMULATION AND PRELIMINARIES

The linear variational inequality problem, denoted by LVI$(M, q, \Omega)$, is to find a vector $x^* \in \Omega$ such that

$$(x - x^*)^\top (M x^* + q) \ge 0, \quad \forall x \in \Omega \tag{1}$$

where $M \in \mathbb{R}^{n \times n}$, $q \in \mathbb{R}^n$, and $\Omega$ is a convex nonempty subset of $\mathbb{R}^n$. If $\Omega = \mathbb{R}^n$, then LVI$(M, q, \Omega)$ becomes a system of linear equations. If $\Omega$ is the nonnegative orthant of $\mathbb{R}^n$, then LVI$(M, q, \Omega)$ becomes a linear complementarity problem. If $M$ is a positive–semidefinite matrix, then LVI$(M, q, \Omega)$ is called the monotone linear variational inequality problem. By the well-known projection theorem [1], it follows that $x^*$ is a solution of LVI$(M, q, \Omega)$ if and only if it satisfies the following projection equation:

$$x^* = P_\Omega\big(x^* - \alpha (M x^* + q)\big) \tag{2}$$
where $\alpha > 0$ is a constant, and $P_\Omega(\cdot)$ is a projection operator defined by

$$P_\Omega(x) = \arg\min_{v \in \Omega} \|x - v\| \tag{3}$$

where $\|\cdot\|$ denotes the $l_2$-norm.
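Although the paper does not spell out an implementation of (3), for the common special case where $\Omega$ is a box the minimizer has a simple closed form. The following Python sketch (the function name and the example box are illustrative assumptions, not from the paper) shows this case:

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection P_Omega(x) of (3) for the special case where Omega is the
    box {v : lo <= v <= hi}; the minimizer of ||x - v|| is obtained by
    clipping each coordinate independently."""
    return np.clip(x, lo, hi)

# Example: project a point onto the unit box [0, 1]^3.
x = np.array([1.5, -0.2, 0.7])
print(project_box(x, 0.0, 1.0))   # -> [1.   0.   0.7]
```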
Without consideration of time delays, the above linear projection equation can be solved by the following projection neural network [6], [9], [10]:

$$\frac{dx(t)}{dt} = \lambda \big\{ P_\Omega\big(x(t) - \alpha(M x(t) + q)\big) - x(t) \big\} \tag{4}$$

where $\lambda > 0$ is a scaling constant. It is shown in [9] that if $M$ is positive semidefinite and symmetric, then the projection neural network defined by (4) is stable and convergent to the solution of LVI$(M, q, \Omega)$ defined by (1). Therefore, the neural network defined by (4) can solve the monotone linear variational inequality problem. When the time delay is taken into account, a delayed projection neural network, which is an improvement of the neural network proposed in [27], is suggested for solving LVI$(M, q, \Omega)$ as follows:

$$\frac{dx(t)}{dt} = \lambda \big\{ P_\Omega\big(x(t - \tau) - \alpha(M x(t - \tau) + q)\big) - x(t) \big\}, \quad x(t) = \phi(t), \ t \in [-\tau, 0] \tag{5}$$

where $\tau > 0$ denotes the time delay, $\phi \in C([-\tau, 0], \mathbb{R}^n)$, $C([-\tau, 0], \mathbb{R}^n)$ denotes the set of all continuous vector-valued functions from $[-\tau, 0]$ to $\mathbb{R}^n$, and $P_\Omega(\cdot)$ is defined by (3). The architecture of the proposed neural network is shown in Fig. 1, where $x(t)$ is the network output vector, $q$ is the network input vector, the entries of $I - \alpha M$ (scaled by $\lambda$) form the weighted connections, and $\phi(t)$ is the network initial state. It is clear that the equilibrium point of the proposed neural network is equal to the solution of LVI$(M, q, \Omega)$ defined by (1). Therefore, if this neural network is stable at its equilibrium point, then it can be employed to solve the LVI$(M, q, \Omega)$ problem. It will be shown later that the proposed neural network has a milder convergence condition; that is, $M$ can be an asymmetric or indefinite matrix. This means that the proposed neural network can solve a wider class of linear variational inequalities.

For the sake of further discussion on the stability of the proposed method, some notations, definitions, and lemmas are introduced. In what follows, $\lambda_{\max}(\cdot)$ and $\lambda_{\min}(\cdot)$ represent the maximum and minimum eigenvalues of the given matrix, respectively; for a vector $x$, $\|x\|$ denotes the vector $l_2$-norm; for a matrix $A$, $\|A\|$ denotes the matrix spectral norm, which is compatible with the vector $l_2$-norm.

Definition 1: The equilibrium point $x^*$ of the delayed projection neural network defined by (5) is said to be globally exponentially stable if there exist constants $k > 0$ and $\gamma > 0$ such that

$$\|x(t) - x^*\| \le \gamma \sup_{-\tau \le s \le 0} \|\phi(s) - x^*\| \, e^{-k t}, \quad \forall t \ge 0.$$

Lemma 1: Assume that the set $\Omega$ is a closed convex set, and $P_\Omega(\cdot)$ is defined by (3). Then the projection operator is nonexpansive, i.e.,

$$\|P_\Omega(u) - P_\Omega(v)\| \le \|u - v\|, \quad \forall u, v \in \mathbb{R}^n$$

and

$$\big(u - P_\Omega(u)\big)^\top \big(v - P_\Omega(u)\big) \le 0, \quad \forall u \in \mathbb{R}^n, \ v \in \Omega.$$

Proof: See [28].

Lemma 2 (Gronwall Inequality): Let $u(t)$ and $f(t)$ be real-valued nonnegative continuous functions with domain $\{t : t \ge t_0\}$; let $c(t)$ be a monotone increasing function; assume that, for $t \ge t_0$

$$u(t) \le c(t) + \int_{t_0}^{t} f(s) u(s) \, ds.$$

Then

$$u(t) \le c(t) \exp\left(\int_{t_0}^{t} f(s) \, ds\right).$$

Lemma 3: For any $u, v \in \mathbb{R}^n$ and any symmetric positive–definite matrix $Q \in \mathbb{R}^{n \times n}$, the following inequality holds:

$$2 u^\top v \le u^\top Q u + v^\top Q^{-1} v.$$

Proof: See [25].
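To experiment with the dynamics (5) outside of MATLAB's dde23, one can use a forward-Euler scheme with an explicit history buffer. The sketch below assumes the form of (5) as reconstructed above, a box feasible set, and freely chosen data (M, q, alpha, lam, tau and the helper name simulate_delayed_pnn are all illustrative, not from the paper):

```python
import numpy as np

def simulate_delayed_pnn(M, q, alpha, lam, tau, phi, T=20.0, h=1e-3,
                         lo=-1.0, hi=1.0):
    """Forward-Euler integration of the delayed projection network (5):
    dx/dt = lam * (P_Omega(x(t-tau) - alpha*(M x(t-tau) + q)) - x(t)).
    `phi` maps t in [-tau, 0] to the initial history; Omega is the box
    [lo, hi]^n, so P_Omega is coordinate-wise clipping."""
    n = len(q)
    d = max(1, int(round(tau / h)))                  # delay in steps
    steps = int(round(T / h))
    hist = np.array([phi(-tau + i * h) for i in range(d)])  # x on [-tau, 0)
    buf = np.vstack([hist, np.zeros((steps + 1, n))])
    buf[d] = phi(0.0)
    for k in range(steps):
        xd = buf[k]                                   # delayed state x(t-tau)
        proj = np.clip(xd - alpha * (M @ xd + q), lo, hi)
        buf[d + k + 1] = buf[d + k] + h * lam * (proj - buf[d + k])
    return buf[d:]                                    # trajectory on [0, T]

# Illustrative run with a 2-D monotone LVI.
M = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([-1.0, 0.5])
traj = simulate_delayed_pnn(M, q, alpha=0.3, lam=1.0, tau=0.1,
                            phi=lambda t: np.array([1.0, -1.0]))
print(traj[-1])   # approximate equilibrium, i.e., solution of the LVI
```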
III. MAIN RESULTS

First, the existence of a solution of the delayed neural network defined by (5) is analyzed.

Theorem 1: Assume that the linear projection equation defined by (2) has at least one solution $x^*$. For each $\phi \in C([-\tau, 0], \mathbb{R}^n)$, there exists a unique continuous solution $x(t)$ of the delayed projection neural network defined by (5) on the global time interval $[0, +\infty)$.

Proof: Similar to the proof of Theorem 1 in [27], let the right-hand side of the delayed neural network defined by (5) be denoted by $f(x_t)$. According to Lemma 1, it is obvious that $f$ is a locally Lipschitz function. From the existence theorem of the functional differential equation [15], there exists a unique solution $x(t)$ satisfying the initial condition $x(t) = \phi(t)$ for $t \in [-\tau, 0]$.
Suppose that $[0, T)$ is the maximal right existence interval of the solution $x(t)$. On the other hand, it is clear from Lemma 1 that the right-hand side of (5) grows at most linearly in the delayed state $x(t - \tau)$. Integrating both sides of (5) on $[0, t]$, $t \in [0, T)$, and applying the variation-of-constants formula to the resulting functional differential equation, $\|x(t)\|$ can be bounded by a constant depending on $\sup_{-\tau \le s \le 0} \|\phi(s)\|$ plus an integral of $\|x(s - \tau)\|$ over $[0, t]$. By Lemma 2, it can be obtained that the solution $x(t)$ is bounded for $t \in [0, T)$ if $T$ is finite. According to the continuation theorem of the functional differential equation [15], there exists a unique continuous solution $x(t)$ on the global time interval $[0, +\infty)$.

In the following, two theorems are given to show the global exponential stability of the proposed delayed projection neural network under different conditions.

Theorem 2: Assume that $M$ is a positive–definite matrix. If there exist constants $k > 0$ and $\varepsilon \in (0, 1)$ such that $\|I - \alpha M\| \le \varepsilon$ and $\lambda \varepsilon e^{k\tau} \le \lambda - k$, then the equilibrium point of the delayed projection neural network defined by (5) is globally exponentially stable.

Proof: Similar to the proof of Theorem 2 in [27], by the nonexpansiveness of $P_\Omega(\cdot)$ (Lemma 1), it can be obtained that the deviation $\|x(t) - x^*\|$ satisfies an integral inequality whose delayed term is weighted by $\lambda \|I - \alpha M\|$.
By Lemma 2, it can be obtained that the resulting estimate (6) holds. Then, under the condition $\lambda \varepsilon e^{k\tau} \le \lambda - k$, (6) can be rearranged into the exponential bound (7). By Definition 1, it can be seen that the equilibrium point of the delayed projection neural network defined by (5) is globally exponentially stable.

Corollary 1: If $M$ is not only positive definite but also symmetric, then by choosing $\alpha = 1/\lambda_{\max}(M)$ and $\varepsilon = 1 - \lambda_{\min}(M)/\lambda_{\max}(M)$, the conditions of Theorem 2 can be met.

Proof: Since $M$ is symmetric and positive definite, $I - \alpha M$ is a symmetric matrix, and $\|I - \alpha M\| = \max_i |1 - \alpha \mu_i|$, where $\mu_i$ is an eigenvalue of $M$. Hence, with $\alpha = 1/\lambda_{\max}(M)$, it follows that $\|I - \alpha M\| = 1 - \lambda_{\min}(M)/\lambda_{\max}(M) < 1$. Then, $\|I - \alpha M\| \le \varepsilon$ holds. Thus, by choosing $k > 0$ sufficiently small, it is easy to check that $\lambda \varepsilon e^{k\tau} \le \lambda - k$ holds.

Remark 1: The analysis technique of Theorems 1 and 2 is similar to that used in [27]. However, it is worth observing that the feasible set $\Omega$ in this paper can be any closed convex set, whereas the feasible set in [27] is a polyhedral set, which is only a special case of the convex set. In addition, because the feasible set here is a general convex set, the nonexpansive property of the projection operator $P_\Omega(\cdot)$ defined by (3), which is shown in Lemma 1, has to be adopted to facilitate the proof. Therefore, the results obtained in this paper improve the corresponding ones proposed in [27].

Remark 2: It should be noted that the positive–definite assumption on $M$ in Theorem 2 cannot be eliminated. In [27], this condition was not considered. Actually, if $M$ is not positive definite, the condition of Theorem 2 never holds. Assume that $M$ is not positive definite; then there exists $x_0 \ne 0$ such that $x_0^\top M x_0 \le 0$. It can be seen that

$$\|(I - \alpha M) x_0\|^2 = \|x_0\|^2 - 2\alpha \, x_0^\top M x_0 + \alpha^2 \|M x_0\|^2 \ge \|x_0\|^2.$$

So $(I - \alpha M)^\top (I - \alpha M)$ is a symmetric matrix, and it must have at least one eigenvalue no smaller than one. It is noted that $\|I - \alpha M\|^2 = \lambda_{\max}\big((I - \alpha M)^\top (I - \alpha M)\big)$. Therefore, $\|I - \alpha M\| \ge 1$. Thus, it is easy to see that $\|I - \alpha M\| \le \varepsilon < 1$ never holds. It is noted that, in practice, the positive–definite assumption on $M$ is too strong. Therefore, the applicability of Theorem 2 may be limited, and a more practical condition should be presented. Inspired by the result in [25], another theorem is given below that is based on the Lyapunov–Krasovskii stability theory and the LMI approach. In this theorem, the matrix $M$ is not necessarily symmetric or positive definite, while the global exponential stability of the delayed projection neural network is preserved.

Theorem 3: Let the matrix $I - \alpha M$ be nonsingular. Then the equilibrium point of the delayed projection neural network defined by (5) is globally exponentially stable if there exist positive–definite and symmetric matrices $P$ and $Q$, a positive–definite diagonal matrix $D$, and a constant $\beta > 0$ such that the matrix $\Phi$ defined by (8), shown at the bottom of the next page, is positive definite.
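Because the entries of the matrix in (8) could not be recovered here, the following sketch only illustrates the general workflow of testing such an LMI condition numerically; the block structure assigned to Phi is a placeholder assumption (the true blocks additionally involve D and the constant of Theorem 3), and all data are made up. A full verification would search over P and Q with an SDP solver (e.g., cvxpy or MATLAB's LMI toolbox):

```python
import numpy as np

# Illustrative data only (not the paper's example).
M = np.array([[1.0, 0.4, 0.0],
              [0.2, 1.5, 0.3],
              [0.0, 0.1, 2.0]])
n = M.shape[0]
alpha, lam = 0.4, 1.0
A = np.eye(n) - alpha * M          # the matrix I - alpha*M from (5)

# Candidate LMI variables; in practice these are decision variables
# of a semidefinite program rather than fixed guesses.
P = np.eye(n)
Q = 0.5 * np.eye(n)

# Placeholder block matrix standing in for Phi in (8).
Y = lam * P @ A
Phi = np.block([[2 * lam * P - Q, Y],
                [Y.T,             Q]])

# Positive definiteness check via the (real, symmetric) eigenvalues.
print("Phi positive definite:", np.all(np.linalg.eigvalsh(Phi) > 0))
```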
Proof: Let $z(t) = x(t) - x^*$, and let $y(t)$ be defined by the deviation of the projection term in (5) from its equilibrium value. Then, construct the following Lyapunov–Krasovskii functional:

$$V(x_t) = z(t)^\top P z(t) + \int_{t-\tau}^{t} z(s)^\top Q z(s) \, ds \tag{9}$$

where $x_t$ denotes the state segment $x(t + s)$, $s \in [-\tau, 0]$, and $P$ and $Q$ are the positive–definite matrices of Theorem 3. According to the proof of Theorem 1 given in [25], the following inequality holds: by the nonexpansiveness of $P_\Omega(\cdot)$, the projection deviation $y(t)$ is bounded by a linear function of the delayed deviation $z(t - \tau)$ (10). Differentiating the Lyapunov–Krasovskii functional defined by (9) with respect to time yields (11). By Lemma 2, two further bounds on the cross terms can be obtained. Substituting the above two inequalities into (11), we obtain (12). Collecting $z(t)$, $z(t - \tau)$, and $y(t)$ into a single vector, (12) shows that $\dot{V}(x_t)$ is bounded above by a negative quadratic form whose coefficient matrix is $\Phi$ defined by (8). If $\Phi$ is a positive–definite matrix, then $\dot{V}(x_t) < 0$ along the trajectories of (5). Therefore, $V(x_t)$ is nonincreasing. Notice that the quadratic weighting in this bound is symmetric and positive definite since the matrix $I - \alpha M$ is nonsingular. So the decay estimate (13) follows. Therefore
the exponential estimate (14) holds. On the other hand, $V(x_0)$ can be bounded from above in terms of the initial function $\phi$, which gives (15). Since $V(x_t) \le V(x_0)$ for all $t \ge 0$, (16) then follows, where the constant factor is defined by (17), shown at the bottom of the page. By the above discussion, it can be seen that the equilibrium point of the delayed projection neural network defined by (5) is globally exponentially stable.

Remark 3: Theorem 3 provides a novel exponential stability criterion for the proposed neural network. It can be applied to analyze the network stability when $M$ is not positive definite. This indicates that the proposed delayed neural network can solve a class of nonmonotone linear variational inequalities. Therefore, this criterion apparently has a larger application scope than Theorem 2. Moreover, the conditions of Theorem 3 are described in the LMI form, which is easy to check by many efficient LMI solvers.

Remark 4: Some comparisons of the proposed neural network defined by (5) with the model used in [25] are given here. From the architecture aspect, the time delay of the proposed neural network occurs in the nonlinear component $P_\Omega(\cdot)$; however, in [25], the time delay appears in the linear part of the neural network. Therefore, it can be said that the proposed neural network is a good supplement to the results in [25]. As to the stability analysis, besides Theorem 3 obtained by the LMI approach, Theorem 2 gives another exponential stability criterion, which is an efficient alternative for the case where $M$ is positive definite.

IV. CAPACITY ANALYSIS OF THE DELAYED NEURAL NETWORK PROPOSED IN [27]

In [27], a delayed projection neural network was designed to solve quadratic programming problems in which the equality constraint was considered. It was the first projection neural network to consider the time delay that occurs in the nonlinear projection transformation component. However, the neural network proposed in [27] suffers from an improper definition of the projection operator, which will be discussed in this section. The quadratic programming problem considered in [27] is defined by

$$\min_x \ \frac{1}{2} x^\top W x + c^\top x, \quad \text{s.t.} \ A x = b, \ x \ge 0 \tag{18}$$

where $W \in \mathbb{R}^{n \times n}$ is a positive–definite (positive–semidefinite) matrix, and $A \in \mathbb{R}^{m \times n}$ is a row full rank matrix. The delayed projection neural network, which was employed in [27] to solve (18), is defined by

$$\frac{dx(t)}{dt} = \lambda \big\{ P\big(x(t - \tau) - \alpha(W x(t - \tau) + c)\big) - x(t) \big\} \tag{19}$$

where the projection operator $P(\cdot)$ is defined as the composition of the projections onto the nonnegative orthant and onto the affine set $\{v : A v = b\}$ (20).

According to the result given in [9], the equilibrium point of the neural network defined by (19) is equal to the optimal solution of the quadratic programming problem defined by (18) if the projection operator is the exact projection defined by (3). So, the projection operator defined by (20) will first be studied to show whether it has the same characteristic as that defined by (3). It is obvious that the projection operator defined by (20) can be treated as the combination of the following two basic projection processes.
• First, project the point onto the nonnegative orthant by the projection operator defined by (3).
• Second, project the result onto the affine set $\{v : A v = b\}$ by the projection operator defined by (3).
A simple counter example will be provided to show that this combined projection operator cannot obtain the same projection
point as the one defined by (3); worse, the projection point obtained by this combined projection operator may not even lie in the feasible region. In this counter example, the feasible polyhedral set has fixed parameters $A$ and $b$, and a chosen point is projected onto this region by the projection operator defined by (20). The resultant projection point is not feasible, while the corresponding projection point obtained by the projection operator defined by (3) is. The projection process is shown in Fig. 2.

Fig. 2. Projection process of the counter example.

Therefore, the equilibrium point of the delayed projection neural network defined by (19) may not be equal to the optimal solution of the quadratic programming problem defined by (18), which implies that the neural network defined by (19) cannot solve the quadratic programming problem defined by (18). Here, another counter example is given to demonstrate this point. Consider the quadratic programming problem defined by (18) with parameters that make it a strictly convex optimization problem with a globally unique solution $x^*$. Simulation studies have been performed using the delayed projection neural network defined by (19), with the network parameters set accordingly. Ten constant vectors are chosen as the initial functions of the delayed projection neural network defined by (19). The transient behavior of the projection neural network is shown in Fig. 3.

Fig. 3. Transient behavior of the delayed projection neural network defined by (19) for solving the counter example.

By the simulation result, the equilibrium point of the neural network defined by (19) is not equal to the optimal solution $x^*$; worse, it is not even a feasible solution. So the delayed projection neural network defined by (19) cannot solve the quadratic programming problem defined by (18).
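The failure mode of the combined operator (20) is easy to reproduce numerically. The following sketch uses made-up data (A, b, and the test point are our own illustrative choices, not the paper's counter example) and compares the two-stage projection with the exact projection onto $\{x : Ax = b,\ x \ge 0\}$ computed by a small QP:

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 1.0]])        # illustrative constraint Ax = b
b = np.array([1.0])
x0 = np.array([0.0, 4.0])         # point to be projected

# Stage 1: project onto the nonnegative orthant.
p1 = np.maximum(x0, 0.0)
# Stage 2: project onto the affine set {x : Ax = b}.
p2 = p1 - A.T @ np.linalg.solve(A @ A.T, A @ p1 - b)
print("combined projection:", p2, "feasible:", bool(np.all(p2 >= -1e-9)))
# -> [-1.5  2.5], infeasible (negative first component)

# Exact projection: argmin ||x - x0||^2  s.t.  Ax = b, x >= 0.
res = minimize(lambda x: np.sum((x - x0) ** 2), np.array([0.5, 0.5]),
               constraints=[{"type": "eq", "fun": lambda x: A @ x - b}],
               bounds=[(0.0, None)] * 2)
print("exact projection:   ", res.x)   # -> approximately [0. 1.]
```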
V. SOLVING QUADRATIC PROGRAMMING PROBLEMS BY THE PROPOSED APPROACH

Construct the Lagrange function of the quadratic programming problem defined by (18) as follows:

$$L(x, y, z) = \frac{1}{2} x^\top W x + c^\top x - y^\top (A x - b) - z^\top x \tag{21}$$

where $y \in \mathbb{R}^m$ and $z \in \mathbb{R}^n$ are Lagrange multipliers. By the Kuhn–Tucker condition, $x^*$ is an optimal solution to the quadratic programming problem defined by (18) if and only if there exist $y^* \in \mathbb{R}^m$ and $z^* \in \mathbb{R}^n$ such that $(x^*, y^*, z^*)$ satisfies the following condition:

$$W x^* + c - A^\top y^* - z^* = 0, \quad A x^* = b, \quad x^* \ge 0, \quad z^* \ge 0, \quad (z^*)^\top x^* = 0.$$

This is equivalent to the following formulation:

$$x^* = P_+\big(x^* - \beta(W x^* + c - A^\top y^*)\big), \quad A x^* = b \tag{22}$$

where $\beta$ is a positive constant and $P_+(\cdot)$ denotes the projection onto the nonnegative orthant defined by (3). It is clear that (22) can be written as the following linear projection equation:

$$u^* = P_{\bar{\Omega}}\big(u^* - \beta(\bar{M} u^* + \bar{q})\big) \tag{23}$$
where

$$u = \begin{bmatrix} x \\ y \end{bmatrix}, \quad \bar{M} = \begin{bmatrix} W & -A^\top \\ A & 0 \end{bmatrix}, \quad \bar{q} = \begin{bmatrix} c \\ -b \end{bmatrix}, \quad \bar{\Omega} = \{ u : x \ge 0, \ y \in \mathbb{R}^m \}$$

and the projection operator acts componentwise as

$$\big(P_{\bar{\Omega}}(u)\big)_i = \begin{cases} \max\{0, u_i\} & \text{if } 1 \le i \le n \\ u_i & \text{if } n + 1 \le i \le n + m. \end{cases}$$
It is obvious that $\bar{M}$ is positive semidefinite (when $W$ is) and asymmetric. If the condition of Theorem 3 holds, then, due to the relationship between the projection equation and the variational inequality problem, the delayed projection neural network defined by (5) can be employed to solve the above linear projection equation. It is noted that this neural network approach can be considered a kind of primal–dual optimization method; that is, the optimal solution of the optimization problem and the dual Lagrange multiplier vector associated with the equality constraint can be obtained simultaneously. In [27], the authors claimed that their neural network model had no Lagrange multipliers, which resulted in the lowest number of state variables. However, according to the above analysis, without the Lagrange multipliers, the projection operator $P_\Omega(\cdot)$ defined by (3) has to be computed directly on the polyhedral set defined in (18); this projection is itself a complicated optimization problem, which increases the computational complexity dramatically.
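As a concrete illustration of this primal–dual construction, the sketch below assembles the augmented pair of (23) for an illustrative QP and gives the corresponding Euler update of (5) applied to the augmented system (the data and helper names are our own assumptions, in the spirit of the Section II simulation sketch):

```python
import numpy as np

# Illustrative QP data for (18): min 0.5*x'Wx + c'x  s.t.  Ax = b, x >= 0.
W = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
n, m = W.shape[0], A.shape[0]

# Augmented matrix and vector of the projection equation (23).
M_bar = np.block([[W, -A.T], [A, np.zeros((m, m))]])
q_bar = np.concatenate([c, -b])

def project_omega_bar(u):
    """Componentwise projection onto Omega_bar = {u : x >= 0, y free}."""
    v = u.copy()
    v[:n] = np.maximum(v[:n], 0.0)
    return v

def euler_step(u, u_delayed, alpha=0.2, lam=1.0, h=1e-3):
    """One Euler step of (5) applied to the augmented system."""
    proj = project_omega_bar(u_delayed - alpha * (M_bar @ u_delayed + q_bar))
    return u + h * lam * (proj - u)

# Single-step demo from the origin (full runs iterate as in Section II).
u0 = np.zeros(n + m)
print(euler_step(u0, u0))
```

At the equilibrium of this augmented system, the first $n$ components give the optimal solution of (18) and the last $m$ components give the equality-constraint multiplier, which is exactly the primal–dual interpretation described above.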
Fig. 4. Transient behavior of the delayed projection neural network defined by (5) for solving Example 1 with $\phi(t) = [-t, e^{-t}, \cos(t)]^\top$.

VI. SIMULATION EXAMPLES

In this section, three examples are given to demonstrate the effectiveness of the improved delayed projection neural network defined by (5). The functional differential equation (5) is solved by the MATLAB dde23 method.

Example 1: To show the validity of Theorem 2, a third-order linear variational inequality problem is considered. The matrix $M$ of this problem is symmetric and positive definite, with the maximal eigenvalue 4.5 and the minimal eigenvalue 0.75, and the problem has a unique solution $x^*$. According to Theorem 2, the delayed projection neural network can solve this problem with appropriate $\alpha$ and $\lambda$ such that the conditions of Theorem 2 hold. With the parameters so chosen, the initial function of the neural network model is set as $\phi(t) = [-t, e^{-t}, \cos(t)]^\top$. The transient behavior of the neural network is given in Fig. 4. It is clear that the delayed projection neural network converges to the solution $x^*$ of this linear variational inequality problem.
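For Example 1, the hypothesis of Theorem 2 (via Corollary 1) can be spot-checked numerically. The snippet below uses a stand-in symmetric positive–definite matrix with the same extreme eigenvalues as stated, since the paper's actual data could not be recovered here, and the decay inequality tested is the one from the reconstructed statement of Theorem 2:

```python
import numpy as np

# Stand-in symmetric PD matrix with lambda_max = 4.5, lambda_min = 0.75.
M = np.diag([4.5, 2.0, 0.75])
alpha = 1.0 / 4.5                 # alpha = 1/lambda_max(M), as in Corollary 1

rho = np.linalg.norm(np.eye(3) - alpha * M, 2)   # spectral norm ||I - alpha*M||
print(rho)                        # 1 - 0.75/4.5 = 0.8333... < 1

# With rho < 1, a small k > 0 satisfying lam*rho*exp(k*tau) <= lam - k exists.
lam, tau, k = 1.0, 0.2, 0.05
print(lam * rho * np.exp(k * tau) <= lam - k)    # -> True
```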
Example 2: The delayed projection neural network is employed to solve an LVI$(M, q, \Omega)$ problem whose matrix $M$ is indefinite, which means that it is a nonmonotone variational inequality; this problem has a solution $x^*$. By choosing appropriate $\alpha$, $\lambda$, and $\tau$, positive–definite matrices $P$, $Q$, and $D$ can be found such that the matrix $\Phi$, shown in (24) at the bottom of the page, is positive definite, so the condition of Theorem 3 is satisfied. Then the delayed projection neural network defined by (5) exponentially converges to its equilibrium point. The initial function of the neural network model is set as $\phi(t) = [\sin(t), \cos(t), t]^\top$ on $[-\tau, 0]$. The transient behavior of the neural network is given in Fig. 5. It is clear that the equilibrium point is equal to the solution $x^*$ of this linear variational inequality problem.

Fig. 5. Transient behavior of the delayed projection neural network defined by (5) for solving Example 2 with $\phi(t) = [\sin(t), \cos(t), t]^\top$.

Example 3: The counter example given in Section IV is employed here to illustrate that the delayed projection neural network defined by (5) has the capability of solving the quadratic programming problem defined by (18). The parameters of the delayed neural network are set appropriately. The initial function of the delayed projection neural network is a constant vector in the time interval $[-\tau, 0]$, and ten random vectors are chosen as its values, respectively. The transient behavior of the delayed projection neural network is shown in Fig. 6. The simulation result shows that all solution trajectories converge to the equilibrium point, which coincides with the optimal solution $x^*$ of the quadratic programming problem defined by (18).
Fig. 6. Transient behavior of the delayed projection neural network defined by (5) for solving Example 3 with ten different random initial points.
VII. CONCLUSION

This paper proposes a delayed projection neural network for solving a class of linear variational inequalities, which is an improvement of the neural network used in [27]. The stability analysis shows that the proposed delayed neural network is globally exponentially stable under different conditions, and, with the LMI approach, the monotonicity assumption on the linear variational inequality is no longer necessary. This means the proposed method can solve a class of nonmonotone linear variational inequality problems. In addition, by using the Lagrange multipliers, the constrained quadratic programming problem is transformed into a linear projection equation problem, which can be solved by the proposed neural network approach. Finally, the effectiveness of the proposed method has been shown in the illustrative examples.

ACKNOWLEDGMENT

The authors would like to thank Prof. J. Cao of Southeast University for valuable discussions and suggestions, and the anonymous reviewers for helpful comments that improved the original manuscript.

REFERENCES

[1] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications. New York: Academic, 1980.
[2] Z.-G. Hou, M. M. Gupta, P. N. Nikiforuk, M. Tan, and L. Cheng, "A recurrent neural network for hierarchical control of interconnected dynamic systems," IEEE Trans. Neural Netw., vol. 18, no. 2, pp. 466–481, Mar. 2007.
[3] D. Li, R. M. Mersereau, and S. Simske, "Blind image deconvolution through support vector regression," IEEE Trans. Neural Netw., vol. 18, no. 3, pp. 931–935, May 2007.
[4] D.-S. Huang and J.-X. Mi, "A new constrained independent component analysis method," IEEE Trans. Neural Netw., vol. 18, no. 5, pp. 1532–1535, Sep. 2007.
[5] F. Facchinei and J. S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems. New York: Springer-Verlag, 2003.
[6] B. S. He and H. Yang, "A neural-network model for monotone linear asymmetric variational inequalities," IEEE Trans. Neural Netw., vol. 11, no. 1, pp. 3–16, Jan. 2000.
[7] X. B. Liang and J. Si, "Global exponential stability of neural networks with globally Lipschitz continuous activations and its application to linear variational inequality problem," IEEE Trans. Neural Netw., vol. 12, no. 2, pp. 349–359, Mar. 2001.
[8] H. J. Tang, K. C. Tan, and Y. Zhang, "Convergence analysis of discrete time recurrent neural networks for linear variational inequality problem," in Proc. Int. Joint Conf. Neural Netw., Honolulu, HI, 2002, vol. 3, pp. 2470–2475.
[9] Y. S. Xia and J. Wang, "On the stability of globally projected dynamical systems," J. Optim. Theory Appl., vol. 106, no. 1, pp. 129–150, 2000.
[10] Y. S. Xia, "Further results on global convergence and stability of globally projected dynamical systems," J. Optim. Theory Appl., vol. 122, no. 3, pp. 627–649, 2004.
[11] X. B. Gao, L. Z. Liao, and L. Q. Qi, "A novel neural network for variational inequalities with linear and nonlinear constraints," IEEE Trans. Neural Netw., vol. 16, no. 6, pp. 1305–1317, Nov. 2005.
[12] Y. S. Xia and J. Wang, "A general projection neural network for solving monotone variational inequalities and related optimization problems," IEEE Trans. Neural Netw., vol. 15, no. 2, pp. 318–328, Mar. 2004.
[13] X. Hu and J. Wang, "A recurrent neural network for solving a class of general variational inequalities," IEEE Trans. Syst. Man Cybern. B, Cybern., vol. 37, no. 3, pp. 528–539, Jun. 2007.
[14] X. Hu and J. Wang, "Solving generally constrained generalized linear variational inequalities using the general projection neural networks," IEEE Trans. Neural Netw., vol. 18, no. 6, pp. 1697–1708, Nov. 2007.
[15] J. K. Hale and S. M. V. Lunel, Introduction to Functional Differential Equations. New York: Springer-Verlag, 1993.
[16] X. Liu and Q. Wang, "Impulsive stabilization of high-order Hopfield-type neural networks with time-varying delays," IEEE Trans. Neural Netw., vol. 19, no. 1, pp. 71–79, Jan. 2008.
[17] H. Zhang and Z. Wang, "Global asymptotic stability of delayed cellular neural networks," IEEE Trans. Neural Netw., vol. 18, no. 3, pp. 947–950, May 2007.
[18] X.-G. Liu, R. R. Martin, M. Wu, and M.-L. Tang, "Global exponential stability of bidirectional associative memory neural networks with time delays," IEEE Trans. Neural Netw., vol. 19, no. 3, pp. 397–407, Mar. 2008.
[19] Z. Yuan, L. Huang, D. Hu, and B. Liu, "Convergence of nonautonomous Cohen-Grossberg-type neural networks with variable delays," IEEE Trans. Neural Netw., vol. 19, no. 1, pp. 140–147, Jan. 2008.
[20] X. F. Liao, G. R. Chen, and E. N. Sanchez, "LMI-based approach for asymptotically stability analysis of delayed neural networks," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 49, no. 7, pp. 1033–1039, Jul. 2002.
[21] S. Arik, "An analysis of exponential stability of delayed neural networks with time varying delays," Neural Netw., vol. 17, pp. 1027–1031, 2004.
[22] Z. Wang, Y. Liu, and X. Liu, "On global asymptotic stability of neural networks with discrete and distributed delays," Phys. Lett. A, vol. 345, pp. 299–308, 2005.
[23] J. D. Cao and D. W. C. Ho, "A general framework for global asymptotic stability analysis of delayed neural networks based on LMI approach," Chaos Solitons Fractals, vol. 24, pp. 1317–1329, 2005.
[24] Y. H. Chen and S. C. Feng, "Neurocomputing with time delay analysis for solving convex quadratic programming problems," IEEE Trans. Neural Netw., vol. 11, no. 1, pp. 230–240, Jan. 2000.
[25] Q. S. Liu, J. D. Cao, and Y. S. Xia, "A delayed neural network for solving linear projection equations and its analysis," IEEE Trans. Neural Netw., vol. 16, no. 4, pp. 834–843, Jul. 2005.
[26] Y. Q. Yang and J. D. Cao, "A delayed neural network method for solving convex optimization problems," Int. J. Neural Syst., vol. 16, no. 4, pp. 295–303, 2006.
[27] Y. Q. Yang and J. D. Cao, "Solving quadratic programming problems by delayed projection neural network," IEEE Trans. Neural Netw., vol. 17, no. 6, pp. 1630–1634, Nov. 2006.
[28] D. P. Bertsekas, Parallel and Distributed Computation: Numerical Methods. Englewood Cliffs, NJ: Prentice-Hall, 1989.

Long Cheng (S'07) was born in Heilongjiang Province, China. He received the B.S. degree (with honors) in control engineering from Nankai University, Tianjin, China, in July 2004. He is currently working towards the Ph.D. degree at the Institute of Automation, Chinese Academy of Sciences, Beijing, China. His current research interests include neural networks, optimization, nonlinear control, and their applications to robotics.
Zeng-Guang Hou (M’05) received the B.E. and M.E. degrees in electrical engineering from Yanshan University (formerly North-East Heavy Machinery Institute), Qinhuangdao, China, in 1991 and 1993, respectively, and the Ph.D. degree in electrical engineering from Beijing Institute of Technology, Beijing, China, in 1997. From May 1997 to June 1999, he was a Postdoctoral Research Fellow with the Laboratory of Systems and Control, Institute of Systems Science, Chinese Academy of Sciences, Beijing, China. From May 2000 to January 2001, he was a Research Assistant with the Hong Kong Polytechnic University, Kowloon, Hong Kong. From July 1999 to May 2004, he was an Associate Professor with the Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, where he has been a Full Professor since June 2004. From September 2003 to October 2004, he was a Visiting Professor at the Intelligent Systems Research Laboratory, College of Engineering, University of Saskatchewan, Saskatoon, SK, Canada. His current research interests include neural networks, optimization algorithms, robotics, and intelligent control systems. Dr. Hou currently serves as an Editorial Board Member of the International Journal of Intelligent Systems Technologies and Applications, the Journal of Intelligent and Fuzzy Systems, and the International Journal of Cognitive Informatics and Natural Intelligence. He was an Associate Editor of the IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE. He served as the Publicity Co-Chair of the IEEE World Congress on Computational Intelligence held in Vancouver, BC, Canada, in July 2006, and the Publication Chair of the IEEE World Congress on Computational Intelligence held in Hong Kong, in June 2008. He was/is a program member of several prestigious conferences, and he served as the Program Chair of the 2007 International Symposium on Neural Networks. He currently serves as an Associate Editor of the IEEE TRANSACTIONS ON NEURAL NETWORKS.
Min Tan (M’03) received the B.S. degree in control engineering from Tsinghua University, Beijing, China, in 1986 and the Ph.D. degree in control theory and control engineering from Institute of Automation, Chinese Academy of Sciences, Beijing, China, in 1990. He is a Professor in the Laboratory of Complex Systems and Intelligent Science, Institute of Automation, Chinese Academy of Sciences. His research interests include advanced robot control, multirobot systems, biomimetic robots, and systems.