IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—II: EXPRESS BRIEFS, VOL. 54, NO. 2, FEBRUARY 2007


Stability Analysis of Nonlinear System Identification via Delayed Neural Networks

José de Jesús Rubio and Wen Yu

Abstract—In this brief, the identification problem for time-delay nonlinear systems is discussed. We use a delayed dynamic neural network to perform on-line identification. This neural network has a dynamic series-parallel structure. The stability conditions for on-line identification are derived via the Lyapunov–Krasovskii approach and are expressed as linear matrix inequalities. Conditions for passivity, asymptotic stability, and uniform stability are established in certain senses. We conclude that the gradient algorithm for updating the weights of the delayed neural network is stable with respect to any bounded uncertainty.

Index Terms—Identification, stability, time delay.

I. INTRODUCTION

Recent results show that the neural network technique is very effective for identifying a wide class of complex nonlinear systems when no complete model information is available, or even when the plant is treated as a black box. This model-free approach exploits the attractive features of neural networks, but the lack of a model makes it hard to obtain theoretical results for the neuro-identifier. The stability properties of a neural network are very important [13], for example when solving optimization problems. General stability analysis and synthesis of time-delay systems is an important issue addressed by many authors [8]. Reference [11] presents sufficient conditions for linear time-delay systems based on a Lyapunov–Krasovskii functional construction. Reference [7] gives conditions that guarantee the stability of perturbed delay systems by assuming the stability of the nominal system. These results can be extended to neural networks [12]. There are several classes of neural networks with time delay, such as additive neural networks [6], cellular networks [1], bidirectional associative neural networks [19], and dynamic neural networks [6], [10]. The conditions obtained in these papers establish various types of stability, such as passivity [9], asymptotic stability [4], [16], absolute stability [17], and exponential stability [14].

Besides the stability of the neural network itself, there is another stability problem: the stability of the learning procedure. The stability of learning algorithms for non-delayed neural networks can be derived by analyzing the identification or tracking errors of the networks. Reference [18] studied the stability conditions of the updating laws when multilayer perceptrons are used to identify and control a nonlinear system. In [15], dynamic backpropagation was modified with NLq stability constraints.

Manuscript received January 7, 2006; revised July 6, 2006. This paper was recommended by Associate Editor J. Suykens.
The authors are with the Departamento de Control Automático, CINVESTAV-IPN, Av. IPN 2508, México D.F. 07360, México (e-mail: yuw@ctrl.cinvestav.mx).
Digital Object Identifier 10.1109/TCSII.2006.886464

Since neural networks cannot match unknown nonlinear systems exactly, some robust modifications [18] should be applied to the normal gradient or backpropagation algorithm. If only the stability of the identification error is considered, the commonly used learning algorithms with robust modifications are robustly stable [18]. Some neural control methods via time-delay neural networks use stable learning algorithms. Reference [2] gives a stability analysis of Takagi–Sugeno fuzzy control for nonlinear systems with time delay. In [5], an adaptive neural control is presented for a class of strict-feedback nonlinear systems with unknown time delays; a radial basis function neural network is used to approximate the nonlinear functions.

A natural question arises: if delayed neural networks are used to identify time-delay nonlinear systems, when is the identification error stable? In this brief, this identification problem is discussed. We use a delayed dynamic neural network to perform on-line identification. This neural network has a dynamic series-parallel structure and time delay. The stability conditions for on-line identification are derived via the Lyapunov–Krasovskii approach; they are expressed as linear matrix inequalities (LMIs). The conditions for passivity and uniform stability are established in certain senses. Simulations show the effectiveness of the suggested algorithms.

II. PRELIMINARIES

A continuous-time time-delay nonlinear system can be described as

    \dot{x}(t) = f\big(x(t),\, x(t-d(t)),\, u(t)\big)    (1)

where x(t) \in R^n is the state, x(t-d(t)) is the delayed state, d(t) \ge 0 is a bounded known time-varying delay, u(t) \in R^m is the input vector, y(t) \in R^p is the output vector, and n is the dimension of the state. f is a general nonlinear smooth function, d(t) is continuous, and the dynamics f are unknown. In this brief, we will discuss the passivity and stability properties of the identification error.
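Numerical work with systems of the form (1) usually stores the past trajectory so that the delayed state x(t - d(t)) can be looked up at each step. The sketch below (the right-hand side f and the delay profile d(t) are illustrative assumptions, not taken from the brief) integrates such a system by forward Euler:

```python
import numpy as np

# Forward-Euler integration of a scalar time-delay system of the form (1).
# The delayed state x(t - d(t)) is read back from the stored trajectory;
# with a time-varying delay the lookup index changes every step.
def f(x, xd, u):
    return -2.0 * x + np.tanh(xd) + u           # illustrative smooth dynamics

def d(t):
    return 0.3 + 0.1 * np.sin(t)                # bounded time-varying delay

dt, T = 1e-3, 10.0
steps = int(T / dt)
x = np.zeros(steps + 1)
x[0] = 1.0                                      # initial condition
for k in range(steps):
    t = k * dt
    j = max(k - int(round(d(t) / dt)), 0)       # index of x(t - d(t))
    x[k + 1] = x[k] + dt * f(x[k], x[j], np.sin(t))
```

With dt small relative to the variation of d(t), the nearest-sample lookup is adequate; linear interpolation between samples would reduce the discretization error further.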
First, let us recall the passivity definition of nonlinear systems.

Definition 1: The system (1) is said to be passive if there exists a nonnegative function V(x), called the storage function, such that, for all T \ge 0, all initial conditions, and all inputs u, the following inequality holds:

    V(x(T)) - V(x(0)) \le \int_0^T \big[ u^T(s)y(s) - \varepsilon u^T(s)u(s) - \delta y^T(s)y(s) - \rho\,\psi(x(s)) \big]\, ds

where \varepsilon, \delta, and \rho are nonnegative constants and \psi(x) is a positive semidefinite function of x such that \psi(0) = 0. \psi(x) is called the state dissipation rate. Furthermore, the system is said to be strictly passive if there exists a positive definite function \psi(x) such that the inequality above holds. If the storage function V(x) is differentiable and the dynamic system satisfies

    \dot{V}(x) \le u^T y - \varepsilon u^T u - \delta y^T y - \rho\,\psi(x)

then the system is passive with storage function V(x).

The stability property studied in this brief is focused on uniform stability, which is defined as follows.

Definition 2: The system (1) is said to be uniformly stable if, for every \varepsilon > 0, there exists \delta(\varepsilon) > 0 such that

    \|x(t_0)\| < \delta \implies \|x(t)\| < \varepsilon, \quad \forall t \ge t_0.    (2)
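Definition 1 can be made concrete with a linear example, where a quadratic storage function is available in closed form: for a Hurwitz matrix A and output y = B^T P x with A^T P + PA = -Q, the function V = (1/2) x^T P x satisfies \dot{V} = u^T y - (1/2) x^T Q x \le u^T y. The sketch below (a hypothetical two-state system, not from the brief; \varepsilon = \delta = 0 and \psi(x) = (1/2) x^T Q x in the notation of Definition 1) checks the integral form of the inequality numerically:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stable linear system dx/dt = A x + B u with passive output y = B^T P x.
# A, B, Q, and the input signal are illustrative choices.
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
B = np.array([[1.0],
              [0.5]])
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)   # solves A^T P + P A = -Q, P > 0

def storage(x):
    """Storage function V(x) = 0.5 x^T P x."""
    return 0.5 * x @ P @ x

# Forward-Euler simulation, accumulating the supplied energy  integral u^T y dt.
dt, T = 1e-4, 5.0
x = np.array([1.0, -1.0])
V0 = storage(x)
supplied = 0.0
for k in range(int(T / dt)):
    u = np.array([np.sin(0.7 * k * dt)])   # arbitrary bounded input
    y = B.T @ P @ x                        # output that renders the system passive
    supplied += float(u @ y) * dt
    x = x + dt * (A @ x + B @ u)
```

The margin by which the stored energy V(x(T)) - V(x(0)) stays below the supplied energy is exactly the dissipated term \int (1/2) x^T Q x\, dt.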

Now we extend the version of [3] and give a uniform stability theorem for time-delay nonlinear systems. The analysis of the neural identification will use this theorem.

Theorem 1: Let V(x_t) be a semipositive definite function of the time-delay nonlinear system (1). If it satisfies

    \dot{V}(x_t) \le -\gamma_3(\|x(t)\|) + \mu, \qquad \gamma_1(\|x(t)\|) \le V(x_t) \le \gamma_2(\|x_t\|_c)    (3)

where \mu is a positive constant, \gamma_1 and \gamma_2 are class-K_\infty functions, \gamma_3 is a class-K function, and \|x_t\|_c = \sup_{-\bar{d} \le \theta \le 0}\|x(t+\theta)\|, then the system (1) is uniformly stable. If we only have \dot{V}(x_t) \le 0, then we get stability only.

Proof: First we define \varepsilon_1 > 0 such that \gamma_3(\varepsilon_1) > \mu, so that \dot{V}(x_t) < 0 whenever \|x(t)\| \ge \varepsilon_1. We discuss two cases. 1) If \|x(t)\| < \varepsilon_1 for all t, the trajectory stays inside a ball that can be made arbitrarily small, and the system is stable. 2) Otherwise, we use the definition of uniform stability (2) to prove that the system is uniformly stable. Given \varepsilon > \varepsilon_1, we define \delta(\varepsilon) as

    \delta(\varepsilon) = \gamma_2^{-1}(\gamma_1(\varepsilon))    (4)

so that \gamma_2(\delta) \le \gamma_1(\varepsilon). For contradiction, suppose that there exists t_1 > t_0 such that \|x_{t_0}\|_c < \delta but \|x(t_1)\| \ge \varepsilon. By continuity there exists an interval [t_a, t_b] \subset [t_0, t_1] with

    \varepsilon_1 \le \|x(t)\| \le \varepsilon \ \text{on}\ [t_a, t_b], \qquad \|x(t_b)\| = \varepsilon    (5)

and, because \gamma_3 is non-decreasing, the first inequality of (3) gives

    \dot{V}(x_t) \le -\gamma_3(\varepsilon_1) + \mu < 0 \ \text{on}\ [t_a, t_b].    (6)

Using the first and the second inequalities of (3) together with (4)–(6), we obtain

    \gamma_1(\varepsilon) \le V(x_{t_b}) \le V(x_{t_a}) \le \gamma_2(\|x_{t_a}\|_c) \le \gamma_1(\varepsilon)    (7)

with the last inequality strict for \varepsilon_1 and \delta small enough, which contradicts (4); thus (2) is satisfied and the system is uniformly stable. Finally, from (2) and the definition of the norm \|\cdot\|_c, inequalities (8) and (9) follow, which prove that \delta(\varepsilon) is well defined.

Fig. 1. Continuous-time series-parallel delayed neural network.

III. TIME-DELAY NONLINEAR SYSTEM IDENTIFICATION WITH DELAYED NEURAL NETWORK

The nonlinear system (1) can be identified by the following continuous-time series-parallel delayed neural network:

    \dot{\hat{x}}(t) = A\hat{x}(t) + A_1\hat{x}(t-d(t)) + W_1\sigma(x(t)) + W_2\phi(x(t-d(t)))\gamma(u(t))    (10)

where \hat{x}(t) \in R^n is the state of the neural network, \hat{x}(t-d(t)) is its delayed state, and d(t) is the time delay of (1). The matrices A, A_1 \in R^{n \times n} are stable matrices which will be specified later. The matrices W_1 and W_2 are the weights of the delayed neural network, and \gamma(u) is the given control vector field. \sigma(\cdot) is a vector of hyperbolic tangent functions, and \phi(\cdot) is a diagonal matrix whose elements (as well as those of \gamma) are usually hyperbolic tangent functions. The structure of this dynamic system is shown in Fig. 1.

Although \sigma, \phi, and \gamma are saturation functions and A and A_1 are stable matrices, we cannot assure that the neural identification model (10) is stable. When we use this model to identify a stable nonlinear system, the stability of the identification error cannot be obtained directly.
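To make the structure of the identifier concrete, the following sketch simulates a series-parallel delayed neural network of the form (10) by forward Euler, with the delayed states read from history buffers. The plant, the matrices, and the weights are illustrative assumptions, not values from the brief; the weights are held fixed here, since the learning laws are the subject of the theorems below.

```python
import numpy as np

# Series-parallel delayed neural identifier of the form (10):
#   dxhat/dt = A xhat + A1 xhat(t-d) + W1 tanh(x) + W2 tanh(x(t-d)) u
n = 2
A = np.array([[-3.0, 0.0],
              [0.0, -3.0]])            # stable (Hurwitz) matrix
A1 = np.array([[0.2, 0.0],
               [0.0, 0.2]])            # delayed-state matrix
rng = np.random.default_rng(0)
W1 = 0.1 * rng.normal(size=(n, n))     # fixed illustrative weights
W2 = 0.1 * rng.normal(size=(n, n))

dt, T, delay = 1e-3, 2.0, 0.1
steps, lag = int(T / dt), int(delay / dt)

# History buffers: the delayed terms are read `lag` steps back.
x_hist = np.zeros((steps + 1, n))
xhat_hist = np.zeros((steps + 1, n))
x_hist[0] = np.array([0.5, -0.5])

for k in range(steps):
    x, xhat = x_hist[k], xhat_hist[k]
    xd = x_hist[max(k - lag, 0)]        # plant delayed state
    xhatd = xhat_hist[max(k - lag, 0)]  # identifier delayed state
    u = np.sin(k * dt)
    # Plant: a stable delayed toy system standing in for the unknown f in (1).
    x_dot = -2.0 * x + 0.3 * np.tanh(xd) + np.array([1.0, 0.5]) * u
    # Series-parallel identifier (10): activations use the PLANT state x.
    xhat_dot = A @ xhat + A1 @ xhatd + W1 @ np.tanh(x) + W2 @ np.tanh(xd) * u
    x_hist[k + 1] = x + dt * x_dot
    xhat_hist[k + 1] = xhat + dt * xhat_dot
```

The series-parallel feature is visible in the code: the activation functions are driven by the plant state x, not by the identifier state, which is what allows the weight-error terms to appear linearly in the error dynamics below.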

According to the Stone–Weierstrass theorem, the nonlinear system (1) can be written in the form of the neural network (10):

    \dot{x}(t) = Ax(t) + A_1 x(t-d(t)) + W_1^*\sigma(x(t)) + W_2^*\phi(x(t-d(t)))\gamma(u(t)) + \eta(t)    (11)

where W_1^* and W_2^* are bounded initial matrices and \eta(t) is the modelling error. Since the state and output variables are physically bounded, the modelling error can be assumed to be bounded too (see, for example, [15], [18]): \|\eta(t)\|^2 \le \bar{\eta}. For the known time-delay d(t), we make the following assumption.


A1: It is assumed that the time-varying delay d(t) satisfies

    0 \le d(t) \le \bar{d}, \qquad \dot{d}(t) \le \mu_d < 1.    (12)

Let us define the identification error as

    \Delta(t) = \hat{x}(t) - x(t).    (13)

The error dynamics are obtained from (10), (11), and (13):

    \dot{\Delta}(t) = A\Delta(t) + A_1\Delta(t-d(t)) + \tilde{W}_1\sigma(x(t)) + \tilde{W}_2\phi(x(t-d(t)))\gamma(u(t)) - \eta(t)    (14)

where \tilde{W}_i = W_i - W_i^* for i = 1, 2.

Theorem 2: If the time-delay satisfies A1, and there exist positive definite symmetric matrices P, Q_1, Q_2 such that the following LMI holds:


    \begin{bmatrix} PA + A^T P + Q_1 + Q_2 & P A_1 \\ A_1^T P & -(1-\mu_d) Q_1 \end{bmatrix} \le 0    (15)

and the weights of the delayed neural network are updated as

    \dot{W}_1 = -K_1 P \Delta(t)\,\sigma^T(x(t)), \qquad \dot{W}_2 = -K_2 P \Delta(t)\,[\phi(x(t-d(t)))\gamma(u(t))]^T    (16)

where K_1 and K_2 are given positive definite symmetric gain matrices, then the identification error dynamics (14) are strictly passive from the modelling error \eta(t) to the identification error \Delta(t).

Proof: Select a Lyapunov–Krasovskii function as

    V(t) = \Delta^T(t) P \Delta(t) + \int_{t-d(t)}^{t} \Delta^T(s) Q_1 \Delta(s)\, ds + tr\{\tilde{W}_1^T K_1^{-1} \tilde{W}_1\} + tr\{\tilde{W}_2^T K_2^{-1} \tilde{W}_2\}    (17)

where d(t) satisfies A1. The derivative of (17) along (14) is

    \dot{V}(t) = \Delta^T (PA + A^T P + Q_1)\Delta + 2\Delta^T P A_1 \Delta(t-d(t)) - (1-\dot{d}(t))\,\Delta^T(t-d(t)) Q_1 \Delta(t-d(t)) + 2\Delta^T P [\tilde{W}_1\sigma + \tilde{W}_2\phi\gamma(u)] - 2\Delta^T P \eta + 2\,tr\{\tilde{W}_1^T K_1^{-1}\dot{W}_1\} + 2\,tr\{\tilde{W}_2^T K_2^{-1}\dot{W}_2\}.

From A1 and the updating law (16), the weight terms cancel, and by the LMI condition (15)

    \dot{V}(t) \le -\Delta^T Q_2 \Delta - 2\Delta^T P \eta.    (18)

From Definition 1, if we define the input as \eta(t) and the output as -2P\Delta(t), then the system is strictly passive with state dissipation rate \psi(\Delta) = \Delta^T Q_2 \Delta.

Remark 1: Passivity theory, which is intimately related to circuit analysis methods, has received a lot of attention from the control community since the late 1980s [9]. The passivity property provides a useful tool for analyzing the stability of systems; the proofs of the following corollary and theorems are based on the passivity result (18). The passivity conditions (15) for the neural networks can be obtained directly by solving the LMIs numerically with an interior-point algorithm, for example with the MATLAB LMI toolbox.

Corollary 1: If only parameter uncertainty is present (\eta = 0), then the updating law (16) makes the identification error asymptotically stable:

    \lim_{t\to\infty} \Delta(t) = 0.    (19)

Proof: Since the identification error dynamics (14) are passive and \eta = 0, (18) gives \dot{V} \le -\Delta^T Q_2 \Delta \le 0. The positive definiteness of V implies that \Delta, \tilde{W}_1, and \tilde{W}_2 are bounded. From the error equation (14), \dot{\Delta} is also bounded. Integrating (18) on both sides gives \int_0^\infty \Delta^T Q_2 \Delta\, dt \le V(0) - V(\infty) < \infty, so \Delta \in L_2. Since \Delta \in L_2 \cap L_\infty and \dot{\Delta} \in L_\infty, by Barbalat's lemma we have \lim_{t\to\infty}\Delta(t) = 0.

Theorem 3: If the time-delay satisfies A1, and there exist positive definite symmetric matrices P, Q_1, and Q such that the following LMI holds:

    \begin{bmatrix} PA + A^T P + Q_1 + Q + PP & P A_1 \\ A_1^T P & -(1-\mu_d) Q_1 \end{bmatrix} \le 0    (20)

and if W_1 and W_2 are updated as in (16), then the identification error satisfies the following average performance:

    \limsup_{T\to\infty} \frac{1}{T}\int_0^T \Delta^T(t)\, Q\, \Delta(t)\, dt \le \bar{\eta}    (21)

and the identification procedure is stable.

Proof: We use Theorem 1 to prove that the identification error is stable. Integrating (16) on both sides, we obtain explicit expressions for the weight errors \tilde{W}_1 and \tilde{W}_2, so the weight terms in (17) can be bounded by class-K functions of the identification error; the delayed integral term in (17) is bounded similarly using A1.


With these bounds, (17) becomes

    \gamma_1(\|\Delta(t)\|) \le V(t) \le \gamma_2(\|\Delta_t\|_c)    (22)

with \gamma_1(s) = \lambda_{\min}(P)\,s^2 and \gamma_2 a class-K function constructed from \lambda_{\max}(P), Q_1, \bar{d}, and the weight bounds. Equation (22) satisfies the second condition of (3). Applying the following matrix inequality to the second term of (18):

    X^T Y + Y^T X \le X^T \Lambda X + Y^T \Lambda^{-1} Y    (23)

where X and Y are any matrices and \Lambda is any positive definite matrix, we obtain -2\Delta^T P\eta \le \Delta^T P \Lambda^{-1} P \Delta + \eta^T \Lambda \eta. We select \Lambda = I, so that -2\Delta^T P\eta \le \Delta^T P P \Delta + \bar{\eta}. By (20), (18) becomes

    \dot{V}(t) \le -\Delta^T Q \Delta + \bar{\eta}.    (24)

Equation (24) satisfies the first condition of (3). Using Theorem 1, the identification error is stable.

It is noted that if \Delta^T Q \Delta > \bar{\eta}, then \dot{V} < 0, so the total time during which \Delta^T Q \Delta > \bar{\eta} must be finite. Let T_k denote the k-th time interval during which \Delta^T Q \Delta > \bar{\eta}. If the error leaves the ball defined by \bar{\eta} only finitely many times and then re-enters, it eventually stays inside this ball. If it leaves the ball infinitely many times, then, since the total time spent outside the ball is finite, \sum_k T_k < \infty and T_k \to 0. So \Delta is bounded; the identification error and the weights are bounded. From (14) and (17) we know that \dot{\Delta} is also bounded. Let \Delta(T_k) denote the largest tracking error during the interval T_k. The boundedness of \dot{\Delta} and T_k \to 0 imply that these excursions shrink, so \Delta converges to the ball defined by \bar{\eta}; integrating (24) from 0 to T gives the average performance (21).

Remark 2: There is a tradeoff in the choice of Q. In order to make the identification error bound (21) smaller, we should select a large Q, but a large Q makes it more difficult to satisfy the LMI condition (20).

Now we use the following multilayer delayed neural network to identify the nonlinear system (1):

    \dot{\hat{x}}(t) = A\hat{x}(t) + A_1\hat{x}(t-d(t)) + W_1\sigma(V_1 x(t)) + W_2\phi(V_2 x(t-d(t)))\gamma(u(t))    (25)

The weights in the output layer are W_1 and W_2, the weights in the hidden layer are V_1 and V_2, and m is the number of hidden-layer nodes. Similar to (11), the nonlinear system (1) can be written in the form of the neural network (25):

    \dot{x}(t) = Ax(t) + A_1 x(t-d(t)) + W_1^*\sigma(V_1^* x(t)) + W_2^*\phi(V_2^* x(t-d(t)))\gamma(u(t)) + \eta(t)    (26)

where W_1^*, W_2^*, V_1^*, and V_2^* are bounded initial matrices. The dynamics of the identification error are derived from (26) and (25):

    \dot{\Delta}(t) = A\Delta + A_1\Delta(t-d(t)) + [W_1\sigma(V_1 x) - W_1^*\sigma(V_1^* x)] + [W_2\phi(V_2 x_d) - W_2^*\phi(V_2^* x_d)]\gamma(u) - \eta(t)    (27)

where x_d = x(t-d(t)). In the case of two independent variables, a smooth function has the Taylor formula

    \sigma(V_1^* x) = \sigma(V_1 x) - \sigma'(V_1 x)\,\tilde{V}_1 x + R_\sigma    (28)

where R_\sigma is the remainder of the Taylor formula and \tilde{V}_1 = V_1 - V_1^*. If we define

    W_1\sigma(V_1 x) - W_1^*\sigma(V_1^* x) = \tilde{W}_1\sigma(V_1 x) + W_1^*\sigma'(V_1 x)\tilde{V}_1 x - W_1^* R_\sigma    (29)

where \sigma'(V_1 x) is the derivative of the nonlinear activation function \sigma at the point V_1 x, then, similarly,

    W_2\phi(V_2 x_d) - W_2^*\phi(V_2^* x_d) = \tilde{W}_2\phi(V_2 x_d) + W_2^*\phi'(V_2 x_d)\tilde{V}_2 x_d - W_2^* R_\phi.    (30)

We define the new modelling error as \zeta(t) = \eta(t) + W_1^* R_\sigma + W_2^* R_\phi\,\gamma(u(t)), which is bounded: \|\zeta(t)\|^2 \le \bar{\zeta}. Substituting (29) and (30) into (27), we have

    \dot{\Delta}(t) = A\Delta + A_1\Delta(t-d(t)) + \tilde{W}_1\sigma(V_1 x) + W_1^*\sigma'\tilde{V}_1 x + [\tilde{W}_2\phi(V_2 x_d) + W_2^*\phi'\tilde{V}_2 x_d]\gamma(u) - \zeta(t).    (31)

Theorem 4: If the time-delay satisfies A1, there exist positive definite symmetric matrices P, Q_1, Q_2 such that (15) holds, and W_1, W_2, V_1, V_2 are updated as

    \dot{W}_1 = -K_1 P\Delta\,\sigma^T(V_1 x), \quad \dot{W}_2 = -K_2 P\Delta\,[\phi(V_2 x_d)\gamma(u)]^T, \quad \dot{V}_1 = -K_3\,\sigma'^T W_1^T P\Delta\, x^T, \quad \dot{V}_2 = -K_4\,\phi'^T W_2^T P\Delta\, x_d^T    (32)

then the error dynamics (31) are strictly passive from the modelling error \zeta(t) to the identification error \Delta(t). If there exist positive definite symmetric matrices P, Q_1, and Q such that (20) holds, then the error dynamics (31) are stable, and the identification error satisfies the average performance \limsup_{T\to\infty}(1/T)\int_0^T \Delta^T Q \Delta\, dt \le \bar{\zeta}. The proofs of these parts are similar to those of Theorem 2 and Theorem 3.

IV. SIMULATION

In this section, we use an example to illustrate how to apply the theoretical results proposed in this brief. The example is a continuous stirred tank reactor [2]. It is a benchmark model because it is a very simple time-delay nonlinear system and is open-loop stable. The system is given by the delayed model (33) of [2].
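Since the CSTR equations (33) are not reproduced above, the sketch below uses a scalar delayed system of the same character as a stand-in plant. It compares the mean squared identification error J(T) of an identifier that uses the correct plant delay against one that ignores the delay, in the spirit of the comparison reported in Fig. 2; all numerical values are assumptions.

```python
import numpy as np

# Scalar stand-in for the delayed plant, identified with the gradient law (16)
# (scalar case, P = 1): one identifier uses the correct delay in its regressor,
# the other uses no delay.
dt, T, delay = 1e-3, 100.0, 0.8
steps = int(T / dt)
lag = int(delay / dt)

x = np.zeros(steps + 1)                # plant trajectory (doubles as history)
xh = {0: 0.0, lag: 0.0}                # identifier states, keyed by delay used
w = {0: 0.0, lag: 0.0}                 # adapted weights
sq = {0: 0.0, lag: 0.0}                # accumulated squared errors

for k in range(steps):
    t = k * dt
    u = np.sin(t) + 0.5 * np.sin(2.3 * t)        # two-tone excitation
    xd = x[max(k - lag, 0)]
    x[k + 1] = x[k] + dt * (-x[k] + 0.9 * np.tanh(xd) + u)
    for L in (0, lag):                 # L = 0: delay ignored; L = lag: correct
        reg = np.tanh(x[max(k - L, 0)])          # regressor with assumed delay
        delta = xh[L] - x[k]                     # identification error (13)
        sq[L] += delta ** 2 * dt
        # Identifier dynamics and scalar gradient law (16), gain 10:
        xh[L] += dt * (-xh[L] + w[L] * reg + u)
        w[L] += dt * (-10.0 * delta * reg)

mse = {L: sq[L] / T for L in sq}       # mean squared error J(T) per identifier
```

The identifier whose regressor uses the correct delay can match the plant structure exactly, so its mean squared error is dominated by the adaptation transient, while the delay-free identifier carries a persistent structural error; this mirrors the Fig. 2 observation that the delayed neural network outperforms the normal neural identifier.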


The parameters and the delay of the CSTR model (33) are taken from [2]. Let us select a delayed single-layer neural network of the form (10) as the identifier (34), with hyperbolic tangent activation functions. The matrices A and A_1 and the LMI variables P, Q_1, and Q are selected such that condition (20) is satisfied. We use the learning law (16). From the theoretical analysis, the delayed neural network should have the same time delay as the plant. We compare three different delayed neural networks: one with an estimated time delay, one without time delay, and one with the known time delay [5]. We define the mean squared error as J(T) = (1/T)\int_0^T \|\Delta(t)\|^2 dt. Fig. 2 shows the results for the identification error. We can see that the delayed neural network has better identification performance than the normal neural identifier.

Fig. 2. Least mean square error for x.

Now we compare the single-layer and the multilayer delayed neural networks. The multilayer delayed neural network is (25), with hyperbolic tangent activation functions; the matrices are again selected such that the LMI condition (20) is satisfied. We use the learning law (32). The mean squared error is shown in Fig. 3. The multilayer delayed neural network has the same advantages over the single-layer network as normal multilayer neural networks.

Fig. 3. Least mean square error for x.

V. CONCLUSION

Although similar networks and learning algorithms exist, stability analyses of the identification error have not been applied in the literature. In this brief, we discuss the passivity and stability of the identification procedure for delayed neural networks. The passivity, linear matrix inequality, and Lyapunov–Krasovskii approaches are used to prove that the gradient descent algorithms for the weight adjustment are stable with respect to any bounded uncertainties for delayed neural networks. In the future, we will study the unknown time-delay case.

REFERENCES

[1] S. Arik, “An improved global stability result for delayed cellular neural networks,” IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 49, no. 8, pp. 1211–1214, Aug. 2002.
[2] Y.-Y. Cao and P. M. Frank, “Analysis and synthesis of nonlinear time-delay systems via fuzzy control approach,” IEEE Trans. Fuzzy Syst., vol. 8, no. 2, pp. 200–211, Feb. 2000.
[3] M. J. Corless and G. Leitmann, “Continuous state feedback guaranteeing uniform ultimate boundedness for uncertain dynamic systems,” IEEE Trans. Autom. Contr., vol. 26, no. 5, pp. 1139–1144, May 1981.
[4] T. Ensari and S. Arik, “Global stability of a class of neural networks with time-varying delay,” IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 52, no. 3, pp. 126–130, Mar. 2005.
[5] S. S. Ge, F. Hong, and T. H. Lee, “Adaptive neural network control of nonlinear systems with unknown time delays,” IEEE Trans. Autom. Contr., vol. 48, no. 11, pp. 2004–2010, Nov. 2003.
[6] H. Huang, D. Ho, and J. Lam, “Stochastic stability analysis of fuzzy Hopfield neural networks with time-varying delays,” IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 52, no. 5, pp. 251–255, May 2005.
[7] V. L. Kharitonov and S.-I. Niculescu, “On the stability of linear systems with uncertain delay,” IEEE Trans. Autom. Contr., vol. 48, no. 1, pp. 127–132, Jan. 2003.
[8] V. B. Kolmanovskii and V. R. Nosov, Stability of Functional Differential Equations. Orlando, FL: Academic, 1986.
[9] C. Li and X. Liao, “Passivity analysis of neural networks with time-delay,” IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 52, no. 8, pp. 471–475, Aug. 2005.
[10] X. Liao, G. Chen, and E. N. Sanchez, “LMI-based approach for asymptotic stability analysis of delayed neural networks,” IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 49, no. 7, pp. 1033–1039, Jul. 2002.
[11] S.-I. Niculescu and R. Lozano, “On passivity of linear delay systems,” IEEE Trans. Autom. Contr., vol. 46, no. 3, pp. 460–464, Mar. 2001.
[12] C. M. Marcus and R. M. Westervelt, “Stability of analog neural networks with delay,” Phys. Rev. A, vol. 39, pp. 347–359, 1989.
[13] A. N. Michel and D. Liu, Qualitative Analysis and Synthesis of Recurrent Neural Networks. New York: Marcel Dekker, 2002.
[14] S. Senan and S. Arik, “New results for exponential stability of delayed cellular neural networks,” IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 52, no. 3, pp. 154–158, Mar. 2005.
[15] J. A. K. Suykens, J. Vandewalle, and B. De Moor, “NLq theory: Checking and imposing stability of recurrent neural networks for nonlinear modelling,” IEEE Trans. Signal Process., vol. 45, no. 11, pp. 2682–2691, Nov. 1997.
[16] S. Xu, J. Lam, D. Ho, and Y. Zou, “Novel global asymptotic stability criteria for delayed cellular neural networks,” IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 52, no. 6, pp. 349–353, Jun. 2005.
[17] Z. Yi, P. A. Heng, and P. Vadakkepat, “Absolute periodicity and absolute stability of delayed neural networks,” IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 49, no. 2, pp. 256–261, Feb. 2002.
[18] W. Yu and X. Li, “Some new results on system identification with dynamic neural networks,” IEEE Trans. Neural Netw., vol. 12, no. 2, pp. 412–417, Mar. 2001.
[19] J. Y. Zhang and Y. R. Yang, “Global stability analysis of bidirectional associative memory neural networks with time-delay,” Int. J. Circuit Theory Appl., vol. 29, pp. 185–196, 2001.