A Fuzzy-Neural Hierarchical Multi-model for Systems Identification and Direct Adaptive Control

Ieroham Baruch, Jose-Luis Olivares G., Carlos-Roman Mariaca-Gaspar, and Rosalíba Galvan Guerra

CINVESTAV-IPN, Department of Automatic Control, Av. IPN No. 2508, A.P. 14-740, 07360 Mexico D.F., México
{baruch,lolivares,cmariaca,rgalvan}@ctrl.cinvestav.mx
Abstract. A Recurrent Trainable Neural Network (RTNN) with a two-layer canonical architecture and a dynamic Backpropagation learning method are applied for local identification and local control of complex nonlinear plants. The RTNN model is incorporated in a Hierarchical Fuzzy-Neural Multi-Model (HFNMM) architecture, combining the flexibility of fuzzy models with the learning abilities of RTNNs. A direct feedback/feedforward HFNMM control scheme using the states issued by the identification HFNMM is proposed. The proposed control scheme is applied to a 1-DOF mechanical plant with friction, and the obtained results show that the HFNMM control outperforms both the fuzzy and the single-RTNN ones.
1 Introduction

In the last decade, computational intelligence, including artificial Neural Networks (NN) and Fuzzy Systems (FS), has become a universal tool for many applications. Because of their approximation and learning capabilities, NNs have been widely employed for dynamic process modeling, identification, prediction and control, [1]-[4]. Mainly, two types of NN models are used: Feedforward (FFNN), or static, and Recurrent (RNN), or dynamic. The first type can be used to resolve dynamic tasks by introducing external dynamic feedbacks. The second possesses its own internal dynamics, performed by internal local feedbacks that form memory neurons [3], [4]. The application of FFNNs to modeling, identification and control of nonlinear dynamic plants causes some problems due to their lack of universality. The major disadvantage of all these approaches is that the identification NN model applied is a nonparametric one, which does not permit using the obtained information directly for control system design. In [5], [6], Baruch and co-authors applied the state-space approach to describe RNNs in a universal way, defining a Jordan canonical two- or three-layer RNN model, named the Recurrent Trainable Neural Network (RTNN), which has a minimum number of learned parameter weights. This RTNN model is a parametric one, permitting the use of the parameters and states obtained during learning for control system design. The model has the advantage of being completely parallel, so its dynamics depend only on the previous step and not on other past steps determined by the system order, which reduces the computational complexity of the learning algorithm with respect to the sequential RNN model of Frasconi, Gori and Soda (FGS-RNN), [4].

[Published in: P. Melin et al. (Eds.): Anal. and Des. of Intel. Sys. using SC Tech., ASC 41, pp. 163-172, springerlink.com, © Springer-Verlag Berlin Heidelberg 2007.]

For the identification of complex plants, it is proposed
to use the Takagi-Sugeno (T-S) fuzzy-neural model [7], applying T-S fuzzy rules with a static premise part and a dynamic-function consequent part, [8]. References [9]-[11] proposed to use an RNN as the dynamic function in the consequent part of the T-S rules. The difference between the fuzzy-neural model used in [9] and the approach used in [10] is that the first uses the FGS-RNN model [4], which is sequential, while the second uses the RTNN model [5], [6], which is completely parallel. This is still not enough, however, because the neural nonlinear dynamic function has to be learned, and the Backpropagation (BP) learning algorithm is not part of the T-S fuzzy rule. The present paper therefore proposes to extend the power of the fuzzy rules, using in the consequent part a learning procedure instead of a dynamic nonlinear function, and to organize the defuzzification part as a second, hierarchical RNN level, incorporated in a new Hierarchical Fuzzy-Neural Multi-Model (HFNMM) architecture. The output of the upper level is a filtered weighted sum of the outputs of the lower-level RTNN models. The proposed HFNMM uses only three membership functions (positive, zero, and negative), which combines the advantages of RNNs with those of fuzzy logic, simplifying the structure, augmenting the level of adaptation and decreasing the noise.
2 RTNN Model and Direct Control Scheme Description

The RTNN model is described by the following equations, [5], [6]:

X(k+1) = J X(k) + B U(k);  J = block-diag(Jii);  |Jii| < 1    (1)

Z(k) = Γ[X(k)];  Y(k) = Φ[C Z(k)]    (2)
Where: Y, X, U are, respectively, the l-, n-, and m-dimensional output, state and input vectors; J is an (n×n) state block-diagonal weight matrix; Jii is the i-th diagonal block of J, of (1×1) dimension; Γ(.), Φ(.) are vector-valued activation functions (saturation, sigmoid or hyperbolic tangent) of compatible dimensions. Equation (1) also includes the local stability conditions imposed on all blocks of J; B and C are the (n×m) input and (l×n) output weight matrices; k is a discrete-time variable. The stability of the RTNN model is assured by the activation functions and by the local stability condition in (1). The given RTNN model is a completely parallel parametric one, with parameters the weight matrices J, B, C and the state vector X. The RTNN topology has linear time-varying structural properties, such as controllability, observability, reachability, and identifiability, which are considered in [6]. The main advantage of this discrete RTNN (which is really a Jordan canonical RNN model) is that it is a universal hybrid neural network model with one or two feedforward layers and one recurrent hidden layer, where the weight matrix J is block-diagonal. Thus the RTNN possesses a minimal number of learning weights, and its performance is fully parallel. Another property of the RTNN model is that it is globally nonlinear but locally linear. Furthermore, the RTNN model is robust, due to the dynamic weight adaptation law, based on the sensitivity model of the RTNN and the minimization of a performance index. The general RTNN BP learning algorithm is:
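Under the assumption of hyperbolic-tangent activations for both Γ and Φ and (1×1) blocks in J, the forward pass (1)-(2) can be sketched in Python as follows; the class name, seed and initialization ranges are illustrative, not the authors' exact setup:

```python
import numpy as np

class RTNN:
    # Minimal sketch of the RTNN forward pass, equations (1)-(2).
    def __init__(self, n, m, l, seed=0):
        rng = np.random.default_rng(seed)
        # With (1x1) blocks, the block-diagonal J reduces to a diagonal
        # matrix; local stability requires |Jii| < 1.
        self.J = np.diag(rng.uniform(-0.9, 0.9, n))
        self.B = 0.1 * rng.standard_normal((n, m))
        self.C = 0.1 * rng.standard_normal((l, n))
        self.X = np.zeros(n)

    def step(self, U):
        self.X = self.J @ self.X + self.B @ U  # state equation (1)
        Z = np.tanh(self.X)                    # hidden activation Gamma
        return np.tanh(self.C @ Z)             # output equation (2), Phi

net = RTNN(n=5, m=1, l=1)
y = net.step(np.array([0.5]))   # one forward step; y has dimension l
```

Because J is diagonal, each state component depends only on its own previous value, which is the "completely parallel" property the paper emphasizes.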
Wij(k+1) = Wij(k) + η ΔWij(k) + α ΔWij(k-1)    (3)
Where: Wij is a general weight, denoting each weight matrix element (Jij, Bij, Cij) in the RTNN model to be updated; ΔWij (ΔJij, ΔBij, ΔCij) is the weight correction of Wij; η and α are learning rate parameters. The weight updates are:

ΔCij(k) = [Yd,j(k) - Yj(k)] Φ'j[Yj(k)] Zi(k)    (4)

ΔJij(k) = R Xi(k-1);  ΔBij(k) = R Ui(k)    (5)

R = Ci(k) [Yd(k) - Y(k)] Γ'j[Zi(k)]    (6)
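One combined forward/backward step of the procedure (1)-(6) can be sketched as below, assuming tanh activations (so that f' = 1 - f²) and a diagonal J; all variable names and the single training sample are illustrative:

```python
import numpy as np

# Hedged sketch of one BP learning step (3)-(6) for the RTNN.
rng = np.random.default_rng(1)
n, m, l = 5, 1, 1
J = np.diag(rng.uniform(-0.9, 0.9, n))
B = 0.1 * rng.standard_normal((n, m))
C = 0.1 * rng.standard_normal((l, n))
X = np.zeros(n)
eta, alpha = 0.01, 0.9
dC_old, dJ_old, dB_old = np.zeros_like(C), np.zeros_like(J), np.zeros_like(B)

U, Yd = np.array([0.5]), np.array([0.3])   # one illustrative sample

# forward step (1)-(2), executed with constant weights
X_prev = X.copy()
X = J @ X + B @ U
Z = np.tanh(X)
Y = np.tanh(C @ Z)

# backward step (3)-(6), executed with constant signals
dY = (Yd - Y) * (1.0 - Y**2)          # (Yd - Y) * Phi'
dC = np.outer(dY, Z)                  # eq (4)
R = (C.T @ dY) * (1.0 - Z**2)         # eq (6), one value per hidden unit
dJ = np.diag(R * X_prev)              # eq (5): dJij = R * Xi(k-1)
dB = np.outer(R, U)                   # eq (5): dBij = R * Ui(k)
C = C + eta * dC + alpha * dC_old     # eq (3) with momentum term
J = J + eta * dJ + alpha * dJ_old
B = B + eta * dB + alpha * dB_old
```

In a full training loop the corrections of the previous iteration would be stored in dC_old, dJ_old, dB_old to realize the momentum term α ΔWij(k-1) of (3).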
Where: ΔJij, ΔBij, ΔCij are the weight corrections of the weights Jij, Bij, Cij, respectively; (Yd - Y) is the error vector of the output RTNN layer, where Yd is the desired target vector and Y is the RTNN output vector, both of dimension l; Xi is the i-th element of the state vector; R is an auxiliary variable; Φ'j, Γ'j are derivatives of the activation functions. A stability proof of this learning algorithm is given in [6]. Equations (1), (2) together with equations (3)-(6) form a BP learning procedure, where the functional algorithm (1), (2) represents the forward step, executed with constant weights, and the learning algorithm (3)-(6) represents the backward step, executed with constant signal vector variables. This learning procedure is denoted by Π(L, M, N, Yd, U, X, J, B, C, E). It takes as input data the RTNN model dimensions l, m, n and the learning data vectors Yd, U, and produces as output data the state vector X and the weight matrices J, B, C. The block diagram of the direct adaptive neural control system is given in Fig. 1a. The control scheme contains three RTNNs. RTNN-1 is a plant identifier, learned by the identification error Ei = Yd - Y, which estimates the state vector. RTNN-2 and RTNN-3 are feedback and feedforward NN controllers, learned by the control error Ec = R - Yd. The control vector is a sum of the RTNN functions Ffb, Fff, learned by the procedure Π(L, M, N, Yd, U, X, J, B, C, E), [5]:

U(k) = -Ufb(k) + Uff(k) = -Ffb[X(k)] + Fff[R(k)]    (7)
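The combination rule (7) itself is simple; as a minimal sketch, with hypothetical linear maps standing in for the trained RTNN controller functions Ffb, Fff:

```python
import numpy as np

# Sketch of the direct control law (7). The trained controllers
# Ffb (RTNN-2, fed by the identifier's state estimate) and
# Fff (RTNN-3, fed by the reference) are replaced by illustrative
# linear maps; only the combination rule is the point here.
def F_fb(X):
    W_fb = np.array([[0.5, 0.2, -0.1]])   # hypothetical feedback gains
    return W_fb @ X

def F_ff(R):
    W_ff = np.array([[1.2]])              # hypothetical feedforward gain
    return W_ff @ R

X = np.array([0.1, -0.2, 0.05])   # state issued by the identifier RTNN-1
R = np.array([0.8])               # reference signal
U = -F_fb(X) + F_ff(R)            # eq (7): U = -Ffb[X] + Fff[R]
```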
This control system structure is maintained also in the case of the fuzzy-neural system, where the neural identifier is substituted by a fuzzy-neural identifier and the neural controller by a fuzzy-neural controller.
3 HFNMM Identifier and HFNMM Controller Description

Let us assume that an unknown system y = f(x) generates the data y(k) and x(k), measured at k, k-1, ..., p. The aim is to use these data to construct a deterministic function y = F(x) that can serve as a reasonable approximation of y = f(x), where the function f(x) is unknown. The variables x = [x1, ..., xp]' ∈ X ⊂ ℜ^p and y ∈ Y ⊂ ℜ are called the regressor and the regressand, respectively. The variable x is called an antecedent variable and the variable y a consequent variable. The function F(x) is represented as a collection of IF-THEN fuzzy rules:
IF antecedent proposition THEN consequent proposition    (8)
The linguistic fuzzy model of Zadeh and Mamdani, cited in [8], consists of rules Ri, where both the antecedent and the consequent are fuzzy propositions:

Ri: If x(k) is Ai then y(k) is Bi,  i = 1, 2, ..., P    (9)
Where: Ai and Bi are linguistic terms (labels) defined by the fuzzy sets μAi(x): X → [0, 1] and μBi(y): Y → [0, 1], respectively; μAi(x), μBi(y) are the membership functions of the corresponding variables; Ri denotes the i-th rule and P is the number of rules in the rule base. The model of Takagi and Sugeno, [7], is a mixture of linguistic and mathematical regression models, where the rule consequent is a crisp mathematical function of the inputs. The T-S model has the general form:

Ri: If x(k) is Ai then yi(k) = fi[x(k)],  i = 1, 2, ..., P    (10)
The consequent part of the T-S model (10) could also be a dynamic state-space model [8], [9]. So the T-S model could be rewritten in the form:

Ri: If x(k) is Ai and u(k) is Bi then xi(k+1) = Ji xi(k) + Bi u(k);  yi(k) = Ci xi(k)    (11)
Where: in the antecedent part, Ai and Bi are the above-mentioned linguistic terms; in the consequent part, xi(k) is the state variable of the i-th sub-model, yi(k) is the i-th sub-model output, and Ji, Bi, Ci are the parameters of this sub-model (Ji is a diagonal matrix). The paper [10] makes a step forward, proposing that the consequent function be an RTNN model (1), (2). The fuzzy-neural rule then takes the form:

Ri: If x(k) is Ji and u(k) is Bi then yi(k+1) = Ni[xi(k), u(k)],  i = 1, 2, ..., P    (12)

Where: the function yi(k+1) = Ni[xi(k), u(k)] represents the RTNN given by equations (1), (2); i is the number of the function and P is the total number of RTNN approximation functions. The biases obtained in the process of BP learning of the RTNN model could be used to form the membership functions, as they are natural centers of gravity for each variable, [10]. The number of rules could be optimized using the Mean-Square Error (MSE% < 2.5%) of the RTNN learning. As the local RTNN model could be learned by the local approximation error Ei = Ydi - Yi, the rule (12) could be extended by replacing the neural function with the learning procedure Y = Π(L, M, N, Yd, U, X, J, B, C, E), given by equations (1)-(6). In this case rule (12) could be rewritten as:

Ri: If x(k) is Ji and u(k) is Bi then Yi = Πi(L, M, Ni, Ydi, U, Xi, Ji, Bi, Ci, Ei),  i = 1, 2, ..., P    (13)
The output of the fuzzy-neural multi-model system, represented by the upper hierarchical level of defuzzification, is given by the following equation:

Y(k) = Σi wi yi(k);  wi = μi(y) / [Σj μj(y)]    (14)
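The weighted-sum defuzzification (14) can be sketched as below, using the three overlapping intervals introduced later in the paper (negative [-1, 0.5], zero [-0.5, 0.5], positive [-0.5, 1]). The ramp/triangle shapes are an assumption made here for illustration; the paper derives the membership centers from the RTNN biases learned during BP training:

```python
import numpy as np

# Sketch of the weighted-sum defuzzification, equation (14).
def memberships(y):
    neg = max(0.0, min(1.0, (0.5 - y) / 1.5))   # ramps 1 -> 0 on [-1, 0.5]
    zero = max(0.0, 1.0 - abs(y) / 0.5)         # triangle on [-0.5, 0.5]
    pos = max(0.0, min(1.0, (y + 0.5) / 1.5))   # ramps 0 -> 1 on [-0.5, 1]
    return np.array([neg, zero, pos])

y_local = np.array([-0.6, 0.1, 0.7])   # outputs yi of the local models
mu = memberships(0.2)                  # membership degrees at y = 0.2
w = mu / mu.sum()                      # normalized weights wi of (14)
Y = w @ y_local                        # multi-model output Y(k)
```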
Where the weights wi are obtained from the membership functions μi(y). As can be seen from equation (14), the output of the fuzzy-neural multi-model approximating the nonlinear plant is a weighted sum of the outputs of the RTNN models appearing in the consequent part of (13). The weights wi depend on the form of the membership functions, which is difficult to choose. We propose to augment the level of adaptation of the fuzzy-neural multi-model by creating an upper hierarchical level of defuzzification, which is an RTNN with inputs yi(k), i = 1, ..., P. So equation (14) is replaced by:

Y = Π(L, M, N, Yd, Yo, X, J, B, C, E)    (15)
Where: the input vector Yo is formed from the vectors yi(k), i = 1, ..., P; E = Yd - Y is the learning error; Π(.) is the RTNN learning procedure given by equations (1)-(6). So the output of the upper hierarchical defuzzification procedure (15) is a filtered weighted sum of the outputs of the T-S rules. As the RTNN is a universal function approximator, the number of rules P could be rather small, e.g. P = 3 (negative, zero, and positive) in the case of overlapping membership functions and P = 2 (negative and positive) in the case of non-overlapping membership functions. The stability of this HFNMM could be proven via linearization of the activation functions of the RTNN models and application of the methodology given in [6]. In both the proposed HFNMM identification and control systems, the three fuzzification intervals for the reference signal and the plant output are the same. A block diagram of the dynamic system identification using an HFNMM identifier is given in Fig. 1b. The structure of the entire identification system contains a fuzzifier, a Fuzzy Rule-Based Inference System (FRBIS) containing up to three T-S rules (16), and a defuzzifier. The system uses an RTNN model as an adaptive, upper-hierarchical-level defuzzifier (15). The local and global errors used to learn the respective RTNN models are Ei(k) = Ydi(k) - Yi(k) and E(k) = Yd(k) - Y(k).

Fig. 1. Block diagrams; a) block diagram of the direct adaptive RTNN control system; b) detailed block diagram of the HFNMM identifier

The HFNMM identifier has two levels: a Lower Hierarchical Level of Identification (LLI) and an Upper Hierarchical Level of Identification (ULI). It is composed of three parts: 1) Fuzzification, where the normalized plant output signal Yd(k) is divided in three intervals (membership functions μi): positive [-0.5, 1], negative [-1, 0.5], and zero [-0.5, 0.5]; 2) a Lower-Level Inference Engine, which contains three T-S fuzzy rules, given by (16) and operating in the three intervals. The consequent part (procedure) of each rule has the RTNN model dimensions L, M, Ni, the inputs Ydi, U, Ei, and the outputs Yi (used as an entry of the defuzzification level) and Xi, Ji, Bi, Ci, used for control. The T-S fuzzy rule is:

Ri: If Yd(k) is Ai then Yi = Πi(L, M, Ni, Ydi, U, Xi, Ji, Bi, Ci, Ei),  i = 1, 2, 3    (16)
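The lower-level inference of (16) amounts to firing every rule whose interval contains the current fuzzified sample and running that rule's local learning procedure. A toy sketch, where a scalar LMS update stands in for the full procedure Πi (the stand-in and all names are hypothetical):

```python
# Sketch of the lower-level inference, rule (16): a fuzzified plant
# output sample fires every rule whose interval contains it.
INTERVALS = {"negative": (-1.0, 0.5), "zero": (-0.5, 0.5),
             "positive": (-0.5, 1.0)}

def fire_rules(yd, u, models, train_step):
    fired = []
    for name, (lo, hi) in INTERVALS.items():
        if lo <= yd <= hi:                    # premise: Yd(k) is Ai
            train_step(models[name], u, yd)   # consequent: local Pi
            fired.append(name)
    return fired

def train_step(model, u, yd):   # toy scalar LMS stand-in for Pi
    model["w"] += 0.01 * (yd - model["w"] * u) * u

models = {name: {"w": 0.0} for name in INTERVALS}
fired = fire_rules(0.2, 0.5, models, train_step)
```

With the overlapping intervals above, an interior sample such as 0.2 fires all three rules, so each local model is trained on its own region of the operating range.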
3) Upper-Level Defuzzification, which consists of one RTNN learning procedure performing a filtered weighted summation of the outputs Yi of the lower-level RTNNs. The defuzzification learning procedure (15) has the RTNN model dimensions L, M, N, the inputs Yi (P = 3) and E, and the output Y(k). The learning and functioning of the two levels are independent. The main objective of the HFNMM identifier is to issue states and parameters for the HFNMM controller while its output follows the output of the plant with a minimum approximation error. The tracking control problem consists in the design of a controller that asymptotically reduces the error between the plant output and the reference signal. The block diagram of this direct adaptive control is depicted in Fig. 2a. The identification part, on the right, contains three RTNNs corresponding to the three rules fired by the fuzzified plant output and forming part of the FRBIS of the HFNMM identifier; the RTNN DF1 represents the defuzzifier of the HFNMM identifier (see Fig. 1b). The control part, on the left, contains three double RTNN blocks. The RTNN-Uff block represents the feedforward part of the control, corresponding to the rule fired by the fuzzified reference, and the RTNN-Ufb block represents the feedback-part rule, fired by the fuzzified plant output; its entries are the corresponding states issued by the HFNMM identifier. The RTNN DF2 represents the defuzzifier of the HFNMM controller. The detailed structure of the direct adaptive HFNMM controller is given in Fig. 2b. The structure of the entire control system has a fuzzifier, a Fuzzy Rule-Based Inference System (FRBIS) containing up to six T-S FF and FB rules, and a defuzzifier. The system uses an RTNN model as an adaptive, upper-hierarchical-level defuzzifier, given by equation (15). The local and global errors used to learn the respective RTNN models are Eci(k) = Ri(k) - Ydi(k) and Ec(k) = R(k) - Yd(k).

The HFNMM controller has two levels: a Lower Hierarchical Level of Control (LLC) and an Upper Hierarchical Level of Control (ULC). It is composed of three parts: 1) Fuzzification, where the normalized reference signal R(k) is divided in three intervals (membership functions μi): positive [-0.5, 1], negative [-1, 0.5], and zero [-0.5, 0.5]; 2) a Lower-Level Inference Engine, which contains six T-S fuzzy rules (three for the feedforward part and three for the feedback part), operating in the corresponding intervals. The consequent part of each feedforward control rule (the consequent learning procedure) has the RTNN model dimensions M, L, Ni, the inputs Ri, Ydi, Eci, and the outputs Uffi, used to form the total control. The T-S fuzzy rule has the form:

Ri: If R(k) is Bi then Uffi = Πi(M, L, Ni, Ri, Ydi, Xi, Ji, Bi, Ci, Eci),  i = 1, 2, 3    (17)
The consequent part of each feedback control rule (the consequent learning procedure) has the RTNN model dimensions M, L, Ni, the inputs Ydi, Xi, Eci, and the outputs Ufbi, used to form the total control. The T-S fuzzy rule has the form:

Ri: If Ydi is Ai then Ufbi = Πi(M, L, Ni, Ydi, Xi, Xci, Ji, Bi, Ci, Eci),  i = 1, 2, 3    (18)

The total control corresponding to each membership function is a sum of its corresponding feedforward and feedback parts:

Ui(k) = -Uffi(k) + Ufbi(k)    (19)

Fig. 2. Block diagrams; a) block diagram of the direct adaptive fuzzy-neural multi-model control system; b) detailed block diagram of the HFNMM controller

3) Upper-Level Defuzzification, which consists of one RTNN learning procedure performing a filtered weighted summation of the control signals Ui of the lower-level RTNNs. The defuzzification learning procedure is described by:

U = Π(M, L, N, Yd, Uo, X, J, B, C, E)    (20)
It has the RTNN model dimensions M, L, N; the vector Uo contains the Ui (P = 3), and the control error Ec is an input. The learning and functioning of the two levels are independent. The main objective of the HFNMM controller is to reduce the control error, so that the plant output tracks the reference signal.
4 Simulation Results

Let us consider a DC-motor-driven nonlinear 1-DOF mechanical system with friction, [12], with the following friction parameters: α = 0.001 m/s; Fs+ = 4.2 N; Fs- = -4.0 N; ΔF+ = 1.8 N; ΔF- = -1.7 N; vcr = 0.1 m/s; β = 0.5 Ns/m. The position and velocity measurements are taken with a discretization period To = 0.1 s, the system gain is ko = 8, the mass is m = 1 kg, and the load disturbance depends on the position and
the velocity (d(t) = d1 q(t) + d2 v(t); d1 = 0.25; d2 = -0.7). The discrete-time model of the 1-DOF mechanical system with friction is given as:

x1(k+1) = x2(k);  x2(k+1) = -0.025 x1(k) - 0.3 x2(k) + 0.8 u(k) - 0.1 fr(k)    (21)

v(k) = x2(k) - x1(k);  y(k) = 0.1 x1(k)    (22)
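The plant (21)-(22) can be simulated in a few lines. The friction force fr(k) of [12] is not reproduced in the paper, so a crude static-plus-viscous stand-in built from the stated parameters (Fs+ = 4.2 N, Fs- = -4.0 N, β = 0.5 Ns/m) is assumed here, and the input signal is illustrative:

```python
import numpy as np

# Simulation sketch of the discrete-time plant, equations (21)-(22).
def fr(v, Fs_pos=4.2, Fs_neg=-4.0, beta=0.5):
    # crude stand-in for the stick-slip friction model of [12]
    static = Fs_pos if v > 0 else (Fs_neg if v < 0 else 0.0)
    return static + beta * v

x1, x2 = 0.0, 0.0
y = []
for k in range(200):
    u = 0.5 * np.sin(np.pi * k * 0.01)           # illustrative input
    v = x2 - x1                                   # shaft velocity, (22)
    x1, x2 = x2, (-0.025 * x1 - 0.3 * x2          # state update, (21)
                  + 0.8 * u - 0.1 * fr(v))
    y.append(0.1 * x1)                            # shaft position, (22)
```

The linear part is stable (the characteristic roots of z² + 0.3z + 0.025 lie well inside the unit circle), so bounded inputs produce bounded trajectories; the nonlinearity enters only through fr(k).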
Where: x1(k), x2(k) are the system states; v(k) is the shaft velocity; y(k) is the shaft position; fr(k) is the friction force, taken from [12]. The topology of the identification and FF control RTNNs is (1, 5, 1), and that of the FB control RTNN is (5, 5, 1). The topology of the defuzzification RTNN is (1, 3, 1). The learning rate parameters are η = 0.01 and α = 0.9, with To = 0.01 s. The reference signal is r(k) = sat[0.5 sin(πk) + 0.5 sin(πk/2)], with a saturation level of ±0.8. The graphics of the comparative identification simulation results are given in Fig. 3a-d. The graphics of the comparative simulation results obtained for the different control systems are shown in Fig. 4a-f. The identification results show that the HFNMM identifier outperformed the RTNN identifier (MSE% of 1.2% vs. 2.5%, respectively). The control results show that the final values of the control MSE% are: 0.41% for the direct adaptive HFNMM control, 2.7% for the control with single RTNNs, and 5.8% for the fuzzy control. From the graphics of Fig. 4a-d we can see that the direct adaptive HFNMM control is better than that using single RTNNs. From Fig. 4e,f we can see that the fuzzy control is worse than the neural control, especially when the friction parameters change.
Fig. 3. Comparative closed-loop system identification results; a) comparison of the plant output and the output of a single RTNN identifier; b) MSE% of RTNN identification; c) comparison of the plant output and the output of a HFNMM identifier; d) MSE% of HFNMM identification
Fig. 4. Comparative trajectory tracking control results; a) comparison of the reference signal and the output of the plant using RTNN controllers; b) MSE% of RTNN control; c) comparison of the reference signal and the output of the plant using the HFNMM controller; d) MSE% of HFNMM control; e) comparison of the reference signal and the output of the plant using the fuzzy controller; f) MSE% of fuzzy control
5 Conclusion

This paper proposed a new identification and direct adaptive control system based on the HFNMM. The HFNMM has three parts: 1) fuzzification, where the output of the plant is divided in three intervals μ (positive [-0.5, 1], negative [-1, 0.5], and zero [-0.5, 0.5]); 2) an inference engine, containing a given number of T-S rules corresponding to a given number of RTNN models operating in the given intervals μ; 3) defuzzification, which consists of one RTNN performing a filtered weighted summation of the outputs of the lower-level RTNNs. The learning and functioning of the two hierarchical levels are independent. The HFNMM identifier is incorporated in a direct feedback/feedforward control scheme using an HFNMM controller. The proposed HFNMM-based control scheme is applied to the control of a 1-DOF mechanical plant with friction. The comparative simulation results show the superiority of the proposed HFNMM control with respect to the others.
References

[1] K.J. Hunt, D. Sbarbaro, R. Zbikowski, P.J. Gawthrop, Neural networks for control systems (a survey), Automatica 28 (1992) 1083-1112.
[2] K.S. Narendra, K. Parthasarathy, Identification and control of dynamical systems using neural networks, IEEE Transactions on Neural Networks, 1 (1) (1990) 4-27.
[3] P.S. Sastry, G. Santharam, K.P. Unnikrishnan, Memory networks for identification and control of dynamical systems, IEEE Trans. on Neural Networks, 5 (1994) 306-320.
[4] P. Frasconi, M. Gori, G. Soda, Local feedback multilayered networks, Neural Computation, 4 (1992) 120-130.
[5] I. Baruch, J.M. Flores, F. Thomas, R. Garrido, Adaptive neural control of nonlinear systems, Proc. of the Int. Conf. on NNs, ICANN 2001, Vienna, Austria, August 2001, G. Dorffner, H. Bischof, K. Hornik, Eds., Lecture Notes in Computer Science 2130, Springer, Berlin, Heidelberg, N.Y., 2001, 930-936.
[6] I. Baruch, J.M. Flores, F. Nava, I.R. Ramirez, B. Nenkova, An advanced neural network topology and learning applied for identification and control of a D.C. motor, Proc. of the 1st Int. IEEE Symp. on Intel. Syst., Varna, Bulgaria, Sept. 2002, 289-295.
[7] T. Takagi, M. Sugeno, Fuzzy identification of systems and its applications to modeling and control, IEEE Trans. Systems, Man, and Cybernetics, 15 (1985) 116-132.
[8] R. Babuska, Fuzzy Modeling for Control, Kluwer, Norwell, MA, 1998.
[9] P.A. Mastorocostas, J.B. Theocharis, A recurrent fuzzy-neural model for dynamic system identification, IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, 32 (2002) 176-190.
[10] I. Baruch, J.M. Flores, R. Garrido, A fuzzy-neural recurrent multi-model for systems identification and control, Proc. of the European Control Conference, ECC'01, Porto, Portugal, Sept. 4-7, 2001, 3540-3545.
[11] J.B. Theocharis, A high-order recurrent neuro-fuzzy system with internal dynamics: Application to the adaptive noise cancellation, Fuzzy Sets and Syst., 157 (4) (2006) 471-500.
[12] S.W. Lee, J.H. Kim, Robust adaptive stick-slip friction compensation, IEEE Transactions on Industrial Electronics, 42 (5) (1995) 474-479.