IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 11, NO. 4, JULY 2000
Global Exponential Stability of Recurrent Neural Networks for Solving Optimization and Related Problems Youshen Xia and Jun Wang

Abstract—Global exponential stability is a desirable property for dynamic systems. This paper studies the global exponential stability of several existing recurrent neural networks for solving linear programming problems, convex programming problems with interval constraints, convex programming problems with nonlinear constraints, and monotone variational inequalities. In contrast to the existing results on global exponential stability, the present results do not require additional conditions on the weight matrices of recurrent neural networks and improve some existing conditions for global exponential stability. Therefore, the stability results in this paper further demonstrate the superior convergence properties of the existing neural networks for optimization.

Index Terms—Global exponential stability, optimization problems, recurrent neural networks.

I. INTRODUCTION

Since Hopfield and Tank first proposed a recurrent neural network for solving linear programming problems [1], many recurrent neural networks with global asymptotic stability (GAS) have been developed; e.g., [2]–[13]. To ensure rapid convergence and insusceptibility to input noise or round-off errors, as emphasized in [4], [6], an important convergence property for neural networks is global exponential stability (GES). Since a GAS neural network is not necessarily GES, it is very desirable for optimization neural networks to have unique equilibria which are globally exponentially stable. The global exponential stability of several neural networks has been studied [3], [4], [6], [11], [13], in which the weight matrices need to satisfy certain sufficient conditions. When the weight matrices do not fulfill the given sufficient conditions, global exponential stability cannot be guaranteed. This paper presents new results on the global exponential stability of several existing neural networks for solving various optimization and related problems. The new results show that some optimization neural networks are in fact globally exponentially stable under the same conditions as those for global asymptotic stability. Compared with the existing results on the exponential stability of neural networks [3], [4], [6], [11], [13], the present results improve the existing ones without requiring additional conditions on the weight matrices for the neural networks to be globally exponentially stable. The new results on global exponential stability are useful for analyzing and designing recurrent neural networks for optimization.

Manuscript received February 11, 1999; revised August 23, 1999. This work was supported by the Hong Kong Research Grants Council under Grant CUHK4150/97E. The authors are with the Department of Automation and Computer-Aided Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong. Publisher Item Identifier S 1045-9227(00)05880-X.

This paper is organized in seven sections. In Section II, preliminary information is introduced to facilitate later discussions. Sections III to VI discuss, respectively, the GES of recurrent neural networks for solving linear programs, convex programs with interval constraints, convex programs with general constraints, and monotone variational inequalities. Conclusions are given in Section VII.

II. PRELIMINARIES

To analyze the global exponential stability of an autonomous ordinary differential equation

$\dot{x}(t) = F(x(t)), \quad x(t_0) = x_0$

where $F$ is locally Lipschitz continuous, in general we need to find a differentiable Lyapunov function $V(x)$ such that

$c_1 \|x - x^*\|^{\alpha} \le V(x) \le c_2 \|x - x^*\|^{\alpha}$

and

$\frac{d}{dt} V(x(t)) \le -c_3 V(x(t))$

where $\|\cdot\|$ is a vector norm and $c_1$, $c_2$, $c_3$, and $\alpha$ are positive constants. Then

$V(x(t)) \le V(x(t_0)) e^{-c_3 (t - t_0)}.$

Thus one has

$c_1 \|x(t) - x^*\|^{\alpha} \le c_2 \|x(t_0) - x^*\|^{\alpha} e^{-c_3 (t - t_0)}$

and hence

$\|x(t) - x^*\| \le c \|x(t_0) - x^*\| e^{-\eta (t - t_0)}$

where $c = (c_2/c_1)^{1/\alpha}$ and $\eta = c_3/\alpha$. The basic method used in the subsequent development belongs to this class of methods. The $\ell_2$-norm $\|\cdot\|$ is used throughout this paper. For the convenience of later discussions, it is necessary to introduce a few definitions.

A dynamic system is said to be globally exponentially stable with degree $\eta$ at $x^*$ if every trajectory $x(t)$ starting at any initial point $x(t_0)$ satisfies

$\|x(t) - x^*\| \le c \|x(t_0) - x^*\| e^{-\eta (t - t_0)}, \quad \forall\, t \ge t_0$

where $c$ and $\eta$ are positive constants independent of the initial point. It is clear that global exponential stability implies global asymptotic stability and that the trajectories converge to the equilibrium exponentially fast.
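To make the above definition concrete, the following short Python sketch (an illustration added here, not part of the original paper) simulates the gradient flow $\dot{x} = -(Qx + q)$ of a strongly convex quadratic, whose unique equilibrium is the minimizer, and estimates the degree $\eta$ by fitting the slope of $\log \|x(t) - x^*\|$; the matrix $Q$, the vector $q$, the initial point, and the step size are assumptions made only for this example.

import numpy as np

# Illustrative example (assumed data): gradient flow dx/dt = -(Q x + q) of the
# strongly convex quadratic f(x) = x'Qx/2 + q'x. Its unique equilibrium
# x* = -Q^{-1} q is globally exponentially stable with degree at least lambda_min(Q).
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])          # symmetric positive definite (assumption)
q = np.array([1.0, -1.0])
x_star = -np.linalg.solve(Q, q)     # unique equilibrium / minimizer

dt, T = 1e-3, 5.0                   # forward-Euler step and horizon (assumptions)
x = np.array([4.0, -3.0])           # arbitrary initial point
ts, errs = [], []
for k in range(int(T / dt)):
    x = x + dt * (-(Q @ x + q))     # one Euler step of the flow
    ts.append((k + 1) * dt)
    errs.append(np.linalg.norm(x - x_star))

# Fit log||x(t) - x*|| ~ log c - eta * t; the slope estimates the degree eta.
eta_est = -np.polyfit(ts, np.log(errs), 1)[0]
print("estimated eta :", eta_est)
print("lambda_min(Q) :", np.linalg.eigvalsh(Q).min())

For this linear flow the fitted $\eta$ is close to $\lambda_{\min}(Q)$, which is the bound obtained from the quadratic Lyapunov function $V(x) = \|x - x^*\|^2$.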


A mapping $F: R^n \to R^n$ is Lipschitz continuous with constant $L$ if, for each pair of points $x, y \in R^n$,

$\|F(x) - F(y)\| \le L \|x - y\|. \quad (1)$

A mapping $F$ is said to be locally Lipschitz continuous if each point of $R^n$ has a neighborhood such that the above inequality holds for each pair of points in it. A mapping $F$ is monotone if

$(F(x) - F(y))^T (x - y) \ge 0$

and strongly monotone if there exists a constant $\beta > 0$ such that, for each pair of points $x, y \in R^n$,

$(F(x) - F(y))^T (x - y) \ge \beta \|x - y\|^2. \quad (2)$

Clearly, an affine mapping $F(x) = Mx + q$ is Lipschitz continuous with constant $L = \|M\|$, and is monotone or strongly monotone when $M$ is positive semidefinite or positive definite. A functional $f$ is convex in $X$ if, for all $x, y \in X$ and $0 \le \lambda \le 1$,

$f(\lambda x + (1 - \lambda) y) \le \lambda f(x) + (1 - \lambda) f(y).$

The functional $f$ is strictly convex if the above inequality holds strictly whenever $x \ne y$, and is uniformly convex if there is a constant $c > 0$ such that, for all $x, y \in X$ and $0 \le \lambda \le 1$,

$f(\lambda x + (1 - \lambda) y) \le \lambda f(x) + (1 - \lambda) f(y) - c \lambda (1 - \lambda) \|x - y\|^2.$

It is clear that uniform convexity implies strict convexity, which in turn implies convexity. It is easy to see that if $f(x) = \frac{1}{2} x^T Q x + q^T x$, then $f$ is convex or uniformly convex if $Q$ is positive semidefinite or positive definite. In addition, if the Hessian matrix $\nabla^2 f(x)$ of $f$ is positive definite, then $f$ is strictly convex [15].

III. GES OF A PRIMAL-DUAL NEURAL NETWORK FOR LINEAR PROGRAMMING

In this section, we consider the following linear programming problem:

minimize $c^T x$, subject to $Ax = b$, $x \ge 0$ (3)

where $A \in R^{m \times n}$, $b \in R^m$, and $c \in R^n$. Its dual is as follows:

maximize $b^T y$, subject to $A^T y \le c$. (4)

By the Kuhn–Tucker condition we see that $x^*$ is an optimal solution to (3) and $y^*$ is an optimal solution to (4) if and only if $(x^*, y^*)$ satisfies

$A x^* = b$, $x^* \ge 0$, $A^T y^* \le c$, $c^T x^* = b^T y^*$. (5)

A primal-dual neural network, whose equilibrium points solve (3) and (4), is presented in [8]

(6)

The neural network (6) is proven to be globally asymptotically stable when (3) and (4) have unique optimal solutions [8]. The following theorem further shows the global exponential stability of the neural network (6).

Theorem 1: Assume that (3) and (4) have unique optimal solutions. Then the primal-dual neural network in (6) is globally exponentially stable.

Proof: Because (3) and (4) have unique optimal solutions, system (6) has a unique equilibrium point. Let $(x^*, y^*)$ be the optimal solution to (3) and (4). From the well-known Hoffman inequality [16], the distance from any point to the solution set of the linear optimality system (5) is bounded by a constant multiple of its residual in (5). Combining this error bound with the decrease of a Lyapunov function along the trajectories of (6) yields, as in Section II, an estimate of the form

$\|(x(t), y(t)) - (x^*, y^*)\| \le c_0 \|(x(t_0), y(t_0)) - (x^*, y^*)\| e^{-\eta (t - t_0)}$

with positive constants $c_0$ and $\eta$. So the primal-dual neural network (6) is globally exponentially stable.
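As a small illustration of the optimality conditions (5) (added here, not from the paper), the following Python sketch checks them numerically for a two-variable linear program whose primal and dual solutions are known by inspection; the data $A$, $b$, $c$ and the chosen test points are assumptions made for the example.

import numpy as np

# Example LP (assumed data): minimize c^T x s.t. Ax = b, x >= 0.
# min x1 + 2*x2 s.t. x1 + x2 = 1, x >= 0 has primal solution x* = (1, 0)
# and dual solution y* = 1 (A^T y <= c is tight on the active variable x1).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x_star = np.array([1.0, 0.0])
y_star = np.array([1.0])

def kkt_residual(x, y, A, b, c):
    """Residual of the Kuhn-Tucker system (5): primal feasibility,
    dual feasibility, and zero duality gap c^T x = b^T y."""
    r_primal = np.linalg.norm(A @ x - b) + np.linalg.norm(np.minimum(x, 0.0))
    r_dual = np.linalg.norm(np.maximum(A.T @ y - c, 0.0))
    r_gap = abs(c @ x - b @ y)
    return r_primal + r_dual + r_gap

print("residual at (x*, y*)        :", kkt_residual(x_star, y_star, A, b, c))  # ~0
print("residual at a feasible point:",
      kkt_residual(np.array([0.5, 0.5]), np.array([0.0]), A, b, c))

The equilibrium points of the primal-dual network (6) are exactly the points at which this residual vanishes, since they solve (3) and (4).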


IV. GES OF THE NEURAL NETWORK FOR CONVEX PROGRAMMING WITH INTERVAL CONSTRAINTS

In this section, we study the GES of several related neural networks for solving convex programming problems with interval constraints. We first consider the following convex programming problem:

minimize $f(x)$, subject to $x \in X$ (7)

where the interval set $X$ is bounded and the function $f$ is twice continuously differentiable and convex in $X$. To solve (7), we consider a recurrent neural-network model below

(8)

This model has a basic property that there is a direct relation between the equilibrium points of (8) and the optimal solutions to (7). In fact, from the Karush–Kuhn–Tucker conditions we see that $x^*$ is an optimal solution to (7) if and only if $x^*$ satisfies

(9)

which can be written in the equivalent projection equation [15]

$x^* = P_X(x^* - \nabla f(x^*))$

where $\nabla f$ is the gradient of $f$ and $P_X$ denotes the projection onto $X$. Thus, if $x^*$ is an equilibrium point of (8), then $x^*$ is an optimal solution to (7).

In terms of structure, (8) may be viewed as an improvement and extension of the existing model of [6] for solving quadratic convex programs with bound constraints. In [6], the existing model is proved to be globally exponentially stable. Now, we further prove the global exponential stability of the generalized model.

Theorem 3: Assume that the Hessian matrix of $f$, $\nabla^2 f(x)$, is positive definite. Then the neural network in (8) is globally exponentially stable to its only equilibrium point.

Proof: First, by the mean-value theorem [14], the right-hand side of (8) is Lipschitz continuous in $X$, so there is a continuous and unique solution $x(t)$ of (8). Next, since $\nabla^2 f(x)$ is symmetric and positive definite, its smallest eigenvalue on $X$ is bounded below by a positive constant $\beta$. By the invariance of the $\ell_2$-norm for the projection operator on a closed convex set [15], the mapping defining the equilibrium equation of (8) is contractive in $X$; so it has a unique fixed point [14], and thus (8) has only one equilibrium point $x^*$. Finally, we prove the global exponential stability of (8). Combining the eigenvalue bound with the nonexpansiveness of the projection operator yields a differential inequality for $\|x(t) - x^*\|$, and by the Gronwall inequality [17] we have

$\|x(t) - x^*\| \le c \|x(t_0) - x^*\| e^{-\eta (t - t_0)}$

where $c$ and $\eta$ are positive constants. Since $\eta > 0$, the neural network in (8) is globally exponentially stable.
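The following Python sketch (an illustration added here, not the paper's model) simulates a projection-type flow of the kind discussed in this section, $\dot{x} = P_X(x - \nabla f(x)) - x$ on a box $X$, for an assumed convex quadratic $f$; the data, the initial point, and the Euler step size are all assumptions. It also checks the projection equation characterizing optimality.

import numpy as np

# Assumed example data for a box-constrained convex quadratic program:
# minimize f(x) = x'Qx/2 + q'x subject to lo <= x <= hi.
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])               # symmetric positive definite
q = np.array([-3.0, 1.0])
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])

def proj_box(x):
    # Projection P_X onto the interval (box) set X = [lo, hi]
    return np.clip(x, lo, hi)

def grad_f(x):
    return Q @ x + q

# Forward-Euler simulation of the projection-type flow dx/dt = P_X(x - grad f(x)) - x
x = np.array([5.0, -4.0])                 # arbitrary initial point (may lie outside X)
dt = 1e-2
for _ in range(20000):
    x = x + dt * (proj_box(x - grad_f(x)) - x)

# At an optimal solution the projection equation x = P_X(x - grad f(x)) holds.
print("x(T)           :", x)
print("fixed-point gap:", np.linalg.norm(x - proj_box(x - grad_f(x))))

Here the fixed-point gap goes to zero, consistent with the equilibrium of the flow being the optimal solution of the box-constrained problem.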


We next consider a special case of (7), the quadratic programming problem with bound constraints:

minimize $\frac{1}{2} x^T A x + b^T x$, subject to $x \in X$ (10)

where the matrix $A$ is positive definite and symmetric. In this case, (8) may be written in the form below

(11)

where $a_{ij}$ is the element of the matrix $A$. As an immediate corollary of Theorem 3, we obtain the following result.

Corollary 2: The neural network (11) is globally exponentially stable.

In contrast, the existing GES result in [6] gives a sufficient condition that requires calculating the maximum singular value of a connection matrix. Clearly, the present condition is easier to verify.

We then study the global exponential stability of the following discrete-time neural network for solving (10):

(12)

where $h$ is a positive parameter. Using the $\ell_\infty$-norm, a sufficient condition for the exponential stability of the discrete-time neural network is given in [11] as follows: 1) $A$ is strongly row diagonally dominant; 2) $h$ lies in a prescribed range. Using the $\ell_2$-norm, we can obtain an improved result on global exponential stability, which relaxes condition 1).

Theorem 4: If condition 2) is satisfied, then the neural network (12) is globally exponentially stable.

Proof: Note that by the invariance of the $\ell_2$-norm for the projection operator on a closed convex set [15], the distance from $x(k+1)$ to the unique equilibrium point is at most the $\ell_2$-norm of $I - hA$ times the distance from $x(k)$ to it. Since $A$ is symmetric and positive definite and $h$ satisfies condition 2), the eigenvalues of $I - hA$ all have absolute value less than one, so this factor is a constant strictly less than one. Iterating the bound gives a geometric decay of the distance to the equilibrium point. Thus the discrete-time neural network (12) is globally exponentially stable.

Remark: An additional sufficient condition of global asymptotic stability for the discrete-time neural network (12) is given in [11]:

(13)

So, by Theorem 4, we see that the condition (13) without the strict equality also guarantees the neural network to be globally exponentially stable. The above-mentioned conditions for GES require an appropriate choice of the parameter $h$. In what follows, we prove that the following neural-network model, which may be viewed as a continuous-time version of the discrete-time model (12), is still GES without such a parameter:

(14)

Theorem 5: The neural network in (14) is globally exponentially stable.

Proof: From [10, see the proof in the Appendix], we can see that a key inequality relating the projection operator to the gradient $Ax + b$ holds. Because the matrix $A$ is symmetric and positive definite, there is a symmetric and positive definite matrix $B$ such that $A = B^2$. Now, let a Lyapunov function be

(15)

defined in terms of $B$ and the distance to the unique equilibrium point $x^*$. Along the trajectories of (14), this Lyapunov function satisfies a differential inequality of the form given in Section II; therefore

$\|x(t) - x^*\| \le c \|x(t_0) - x^*\| e^{-\eta (t - t_0)}$

with positive constants $c$ and $\eta$. So the neural network (14) is globally exponentially stable.
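To illustrate the role of the step-size parameter $h$ discussed above, the sketch below (added here, not from the paper) runs a projected iteration of the assumed form $x(k+1) = P_X(x(k) - h(Ax(k)+b))$ for a small quadratic problem and reports the error for two choices of $h$; the data and the specific update form are assumptions made only for illustration.

import numpy as np

# Assumed data: minimize x'Ax/2 + b'x over the box [-10, 10]^2.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])                # symmetric positive definite
b = np.array([-3.0, 1.0])
lo, hi = -10.0 * np.ones(2), 10.0 * np.ones(2)
x_star = np.linalg.solve(A, -b)           # unconstrained minimizer, interior of the box,
                                          # hence also the constrained optimal solution

def proj_box(x):
    return np.clip(x, lo, hi)

def run(h, steps=60):
    """Projected iteration x(k+1) = P_X(x(k) - h (A x(k) + b)) (assumed form)."""
    x = np.array([9.0, -9.0])
    for _ in range(steps):
        x = proj_box(x - h * (A @ x + b))
    return np.linalg.norm(x - x_star)

# Two step sizes below 2/lambda_max(A) ~ 0.91, a classical contraction threshold
# for this iteration; the smaller h gives a much faster geometric decay here.
for h in (0.4, 0.9):
    print("h =", h, " error after 60 steps:", run(h))

In contrast, the continuous-time model (14) discussed above does not require tuning such a step-size parameter.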


V. GES OF THE KENNEDY–CHUA NEURAL NETWORK FOR CONVEX PROGRAMMING

In this section, we consider the convex programming problem with nonlinear constraints:

minimize $f(x)$, subject to $g(x) \le 0$ (16)

where $g(x) = [g_1(x), \ldots, g_m(x)]^T$ is an $m$-dimensional vector-valued function, and $f, g_1, \ldots, g_m$ are continuously differentiable and convex in $R^n$. It is well known that Kennedy and Chua proposed the following neural network [2]:

(17)

where $\nabla f$ and $\nabla g_j$ are, respectively, the gradients of $f$ and $g_j$ ($j = 1, \ldots, m$), and $s$ is a finite positive penalty parameter. In [10], the Kennedy–Chua network is proven to be globally asymptotically stable if (17) has a unique equilibrium point. We now further study the global exponential stability of this neural network.

Theorem 6: Assume that $f$ is continuously differentiable and uniformly convex in $R^n$. Then, for any fixed $s > 0$, the neural network in (17) is globally exponentially stable at a minimum of the associated penalty problem

(18)

Proof: From [14] it is easy to see that $f$ is uniformly convex and continuously differentiable in $R^n$ if and only if there exists a constant $c > 0$ such that

$(\nabla f(x) - \nabla f(y))^T (x - y) \ge c \|x - y\|^2$

for all $x, y \in R^n$. Since the penalty term in (18) is convex and continuously differentiable in $R^n$, the objective of (18) satisfies the same inequality. Replacing $y$ with $x^*$, which is a minimizer of (18) and at which the gradient of the objective vanishes, it follows that the time derivative of $\|x(t) - x^*\|^2$ along the trajectories of (17) is at most $-2c \|x(t) - x^*\|^2$; hence

$\|x(t) - x^*\| \le \|x(t_0) - x^*\| e^{-c (t - t_0)}.$

Thus the Kennedy–Chua network is globally exponentially stable.

VI. GES OF A NEURAL NETWORK FOR SOLVING MONOTONE VARIATIONAL INEQUALITIES

In this section, we consider the monotone variational inequality $VI(F, X)$ of finding $x^* \in X$ satisfying

$(x - x^*)^T F(x^*) \ge 0, \quad \forall\, x \in X$

where $X$ is an interval set and $F$ is a continuous function on $X$. This problem has wide applications in engineering optimization. In [10], we proposed the following neural network for solving $VI(F, X)$:

(21)

where $\alpha$ is a positive parameter and $P_X$ denotes the projection onto the set $X$, which is defined by

$P_X(x) = \arg\min_{v \in X} \|x - v\|.$

Under both the monotonicity and Lipschitz conditions, the neural network (21) is shown in [10] to be globally asymptotically stable for an appropriate range of $\alpha$. Now, under both the strong monotonicity and Lipschitz conditions, we further prove its global exponential stability.

Theorem 7: Let $F$ satisfy the strong monotonicity and Lipschitz conditions (1) and (2). Then the neural network in (21) is globally exponentially stable for a suitable choice of the positive parameter $\alpha$.
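The following Python sketch (added here; the exact form of (21) is not reproduced above, so the model form is an assumption) simulates a projection-type flow $\dot{x} = \lambda(P_X(x - \alpha F(x)) - x)$ for a strongly monotone, Lipschitz continuous affine mapping on an interval set; the mapping, the set, and the parameters $\alpha$, $\lambda$, and the step size are assumptions made for the example.

import numpy as np

# Assumed data: F(x) = M x + q with (M + M')/2 positive definite (strongly monotone)
# and X = [lo, hi] an interval (box) set.
M = np.array([[3.0, 1.0],
              [-1.0, 2.0]])            # nonsymmetric; symmetric part is positive definite
q = np.array([-2.0, 1.0])
lo, hi = -1.0 * np.ones(2), 1.0 * np.ones(2)

F = lambda x: M @ x + q
proj = lambda x: np.clip(x, lo, hi)

alpha, lam, dt = 0.2, 1.0, 1e-2        # assumed parameter choices
x = np.array([5.0, 5.0])               # arbitrary initial point
for _ in range(5000):
    x = x + dt * lam * (proj(x - alpha * F(x)) - x)

# x* solves VI(F, X) iff it satisfies the projection equation x* = P_X(x* - alpha F(x*)).
print("x(T)               :", x)
print("projection residual:", np.linalg.norm(x - proj(x - alpha * F(x))))

Under the strong-monotonicity and Lipschitz assumptions, the projection residual decays to zero exponentially fast in this simulation, which is consistent with the behavior asserted by Theorem 7.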


Proof: From the proof in [10], the time derivative of $\|x(t) - x^*\|^2$ along the trajectories of (21) can be bounded using the projection operator. Using the Lipschitz condition (1) and the strong monotonicity condition (2), this bound becomes a differential inequality of the form

$\frac{d}{dt} \|x(t) - x^*\|^2 \le -2\eta \|x(t) - x^*\|^2$

with a constant $\eta > 0$; the resulting estimate is similar to the inequality in [16], without the extra parameter. From this inequality, it follows that

$\|x(t) - x^*\| \le \|x(t_0) - x^*\| e^{-\eta (t - t_0)}.$

Since $\eta > 0$, the neural network (21) is globally exponentially stable.

VII. CONCLUSIONS

In this paper, we have rigorously proved the global exponential stability of several existing neural networks for solving various optimization and related problems. Compared with the existing results on global exponential stability, which impose extra conditions on the weight matrices, the conditions in the present results are the same as those for global asymptotic stability. In addition, the new conditions improve some existing conditions for global exponential stability. Therefore, the present results on global exponential stability have further unveiled the elegant convergence properties of the existing neural networks for solving various optimization and related problems.

REFERENCES

[1] D. W. Tank and J. J. Hopfield, “Simple neural optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit,” IEEE Trans. Circuits Syst., vol. 33, pp. 533–541, 1986.
[2] M. P. Kennedy and L. O. Chua, “Neural networks for nonlinear programming,” IEEE Trans. Circuits Syst., vol. 35, pp. 554–562, 1988.
[3] A. N. Michel, J. A. Farrell, and W. Porod, “Qualitative analysis of neural networks,” IEEE Trans. Circuits Syst., vol. 36, pp. 229–243, 1989.
[4] S. Sudharsanan and M. Sundareshan, “Exponential stability and a systematic synthesis of a neural network for quadratic minimization,” Neural Networks, vol. 4, no. 5, pp. 599–613, 1991.
[5] C. Y. Maa and M. A. Shanblatt, “Linear and quadratic programming neural network analysis,” IEEE Trans. Neural Networks, vol. 3, pp. 580–594, 1992.
[6] A. Bouzerdoum and T. R. Pattison, “Neural networks for quadratic optimization with bound constraints,” IEEE Trans. Neural Networks, vol. 4, pp. 293–304, 1993.
[7] J. Wang, “A deterministic annealing neural network for convex programming,” Neural Networks, vol. 7, pp. 629–641, July 1994.
[8] Y. Xia, “A new neural network for solving linear programming problems and its applications,” IEEE Trans. Neural Networks, vol. 7, pp. 525–529, 1996.
[9] Y. Xia, “A new neural network for solving linear and quadratic programming problems,” IEEE Trans. Neural Networks, vol. 7, pp. 1544–1547, 1996.
[10] Y. Xia and J. Wang, “A general methodology for designing globally convergent optimization neural networks,” IEEE Trans. Neural Networks, vol. 9, pp. 1331–1343, 1998.
[11] M. J. Perez-Ilzarbe, “Convergence analysis of a discrete-time recurrent neural network to perform quadratic real optimization with bound constraints,” IEEE Trans. Neural Networks, vol. 9, pp. 1344–1351, 1998.
[12] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing. New York: Wiley, 1993.
[13] P. D. Wilde, Neural Network Models. New York: Springer-Verlag, 1996.
[14] J. M. Ortega and W. G. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables. New York: Academic, 1970.
[15] D. P. Bertsekas, Parallel and Distributed Computation: Numerical Methods. Englewood Cliffs, NJ: Prentice-Hall, 1989.
[16] J. S. Pang, “A posteriori error bounds for the linearly-constrained variational inequality problem,” Math. Oper. Res., vol. 12, pp. 474–484, 1987.
[17] R. K. Miller and A. N. Michel, Ordinary Differential Equations. New York: Academic, 1982.