Neural Comput & Applic (2008) 17:357–364 DOI 10.1007/s00521-007-0125-7
ORIGINAL ARTICLE
RBFN-based decentralized adaptive control of a class of large-scale non-affine nonlinear systems

Tong Zhao
Received: 31 January 2007 / Accepted: 24 April 2007 / Published online: 22 May 2007
© Springer-Verlag London Limited 2007

T. Zhao
Department of Automatic Control, Qingdao University of Science and Technology, Qingdao, China
e-mail: [email protected]
Abstract For a class of large-scale decentralized nonlinear systems with strong interconnections, a radial basis function neural network (RBFN) adaptive control scheme is proposed. The system is composed of a class of non-affine nonlinear subsystems that are implicit functions of, and smooth with respect to, the control input. Based on the implicit function theorem, the inverse function theorem and the design idea of pseudo-control, a novel control algorithm is proposed. Two neural networks are used to approximate the unknown nonlinearities in each subsystem and the unknown interconnection function, respectively. Stability is rigorously proved, and simulation results validate the effectiveness of the proposed scheme.

Keywords Decentralized control · Adaptive control · RBFN · Non-affine · Large-scale nonlinear system
1 Introduction

When controlling large-scale and highly nonlinear systems, the presupposition of centrality is violated, either because data gathering becomes difficult when the system is spread out or because accurate mathematical models are lacking. To avoid these difficulties, the decentralized control architecture has been adopted in controller design. Decentralized control systems also arise in complex situations where physical limitations on information exchange among subsystems make a single central controller infeasible. Furthermore, difficulty and uncertainty in measuring parameter values within a large-scale system may call for adaptive techniques. Since these restrictions encompass a large group of applications, a variety of decentralized adaptive techniques have been developed [1].

Earlier literature on decentralized control methods focused on large-scale linear systems. The pioneering work by Siljak [2] presents stability theorems for interconnected linear systems based on structural information only. Many works consider subsystems that are linear in a set of unknown parameters [3–7]; others focus on systems with first-order interconnections [3–10]. When the subsystems have nonlinear dynamics, or the interconnections enter in a nonlinear fashion, the analysis and design problem becomes even more challenging.

The learning ability of neural networks avoids complex mathematical analysis in solving control problems when the plant dynamics are complex and highly nonlinear, which is a distinct advantage over traditional control methods. As an alternative, intensive research has been carried out on neural network control of unknown nonlinear systems [11–18]. This motivates research on combining neural networks with adaptive control techniques to develop decentralized control approaches for uncertain nonlinear systems with restrictions on the interconnections. For example, in [19], two decentralized adaptive control schemes for uncertain nonlinear systems with radial basis function neural networks are proposed: a direct adaptive approach approximates the unknown control laws required to stabilize each subsystem, while an indirect approach identifies the isolated subsystem dynamics to produce a stabilizing controller. For a class of large-scale affine nonlinear systems with strong interconnections, two neural networks are used to approximate the unknown subsystems and the strong interconnections, respectively [20], and Huang et al. [22] introduce a decomposition structure to
obtain the solution to the problem of decentralized adaptive tracking control for a class of affine nonlinear systems with strong interconnections. It can be seen that most of these works apply to affine systems that can be linearly parameterized. Little has been reported on the design of controllers for nonlinear systems that are implicit functions with respect to the control input. In the available literature, the main results are those of Calise et al. [13, 21] and Ge et al. [16]. Calise et al. removed the affine-in-control restriction by developing a dynamic-inversion-based control architecture with linearly parameterized neural networks in the feedback path to compensate for the inversion error introduced by an approximate inverse. Ge et al. proposed control schemes for a class of non-affine dynamic systems: using the mean value theorem, the control signal is separated from the controlled plant functions, and neural networks are applied to approximate the control signal, yielding an adaptive control scheme. Nardi and Hovakimyan [23] extend the results in [13] to non-affine nonlinear dynamical systems with first-order interconnections. Huang and Tan [24] apply the results in [16] to a class of non-affine nonlinear systems with strong interconnections.

Inspired by the above research, in this paper we assume that the large-scale system is composed of a class of non-affine subsystems and propose a novel adaptive control design. Utilizing the invertibility of this class of functions, and invoking the concept of pseudo-control together with the inverse function theorem, we design a decentralized RBFN controller for this class of large-scale nonlinear systems with strong interconnections.

The rest of the paper is organized as follows. Section 2 describes the class of large-scale nonlinear systems to be controlled and the decentralized controller design. Section 3 gives the structure and approximation properties of the neural networks. Section 4 presents the stability proof. A simulation example is given in Sect. 5, followed by concluding remarks in Sect. 6.

2 Problem formulation and design of control scheme
We consider large-scale systems composed of nonlinear subsystems described by differential equations of the following form:
$$
\left\{
\begin{aligned}
\dot{x}_{i1} &= x_{i2},\\
\dot{x}_{i2} &= x_{i3},\\
&\;\;\vdots\\
\dot{x}_{il_i} &= f_i(x_{i1}, x_{i2}, \ldots, x_{il_i}, u_i) + g_i(x_1, x_2, \ldots, x_n),\\
y_i &= x_{i1}, \qquad i = 1, 2, \ldots, n,
\end{aligned}
\right. \qquad (1)
$$
where $x_i = [x_{i1}, x_{i2}, \ldots, x_{il_i}]^T \in R^{l_i}$ is the state vector, $g_i(x_1, x_2, \ldots, x_n)$ is the interconnection term, $u_i \in R$ is the input and $y_i \in R$ is the output of the $i$-th subsystem. $f_i(x_i, u_i): R^{l_i + 1} \to R$ is an unknown continuous function that is implicit in, and smooth with respect to, the control input $u_i$.

Assumption 1. $\partial f_i(x_i, u_i)/\partial u_i \neq 0$ for all $(x_i, u_i) \in \Omega_i \times R$.

Assumption 2. The interconnection effect is bounded by
$$ |g_i(x_1, x_2, \ldots, x_n)| \le \sum_{j=1}^{n} \xi_{ij}(|s_j|), \qquad (2) $$
where the $\xi_{ij}(|s_j|)$ are unknown smooth functions and the $s_j$ are filtered tracking errors to be defined shortly.

The control objective is to determine a control law that forces the output $y_i$ to follow a given desired output $y_{di}$ with acceptable accuracy, while keeping all signals involved bounded.

Define the desired trajectory vector $x_{di} = [y_{di}, \dot{y}_{di}, \ldots, y_{di}^{(l_i-1)}]^T$, $X_{di} = [y_{di}, \dot{y}_{di}, \ldots, y_{di}^{(l_i)}]^T$, and the tracking error $e_i = x_i - x_{di} = [e_{i1}, e_{i2}, \ldots, e_{il_i}]^T = [e_i, \dot{e}_i, \ldots, e_i^{(l_i-1)}]^T$. The filtered tracking error can then be written as
$$ s_i = [\Lambda_i^T \; 1]\, e_i = \lambda_{i,1} e_i + \lambda_{i,2} \dot{e}_i + \cdots + \lambda_{i,l_i-1} e_i^{(l_i-2)} + e_i^{(l_i-1)}, \qquad (3) $$
where $\lambda_{i,1}, \lambda_{i,2}, \ldots, \lambda_{i,l_i-1}$ are chosen such that the polynomial $\lambda_{i,1} + \lambda_{i,2} s + \cdots + \lambda_{i,l_i-1} s^{l_i-2} + s^{l_i-1}$ is Hurwitz.

Assumption 3. The desired signal $x_{di}(t)$ is bounded, so that $\|X_{di}\| \le \bar{X}_{di}$, where $\bar{X}_{di}$ is a known constant.

For an isolated subsystem, differentiating (3) gives the filtered tracking error dynamics
$$ \dot{s}_i = \dot{x}_{il_i} - y_{di}^{(l_i)} + [0 \; \Lambda_i^T]\, e_i = f_i(x_i, u_i) + g_i + Y_{di}, \qquad (4) $$
with $Y_{di} = -y_{di}^{(l_i)} + [0 \; \Lambda_i^T]\, e_i$.
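To make the error definitions above concrete, here is a minimal Python sketch (not part of the original paper; the helper name filtered_error and the numerical values are illustrative only) that computes the tracking error $e_i$ and the filtered tracking error $s_i$ of (3) for a second-order subsystem ($l_i = 2$).

```python
import numpy as np

def filtered_error(x_i, xd_i, lam):
    """Tracking error e_i = x_i - x_di and filtered error s_i = [Lambda_i^T 1] e_i, Eq. (3).

    x_i, xd_i : state vector and desired-trajectory vector of length l_i
    lam       : [lambda_{i,1}, ..., lambda_{i,l_i-1}], chosen so that
                lambda_{i,1} + lambda_{i,2} s + ... + s^(l_i - 1) is Hurwitz
    """
    e_i = np.asarray(x_i, dtype=float) - np.asarray(xd_i, dtype=float)
    s_i = float(np.dot(np.append(lam, 1.0), e_i))
    return e_i, s_i

# Example for l_i = 2 with lambda_{i,1} = 1 (so the polynomial 1 + s is Hurwitz):
e, s = filtered_error(x_i=[0.2, 0.2], xd_i=[0.0, 0.1], lam=[1.0])
print(e, s)   # e = [0.2, 0.1], s = 1*0.2 + 0.1 = 0.3
```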
Define a continuous function
$$ d_i = k_i s_i + Y_{di}, \qquad (5) $$
where $k_i$ is a positive constant. By Assumption 1, $\partial f_i(x_i, u_i)/\partial u_i \neq 0$, and since $\partial d_i/\partial u_i = 0$ it follows that $\partial [f_i(x_i, u_i) - d_i]/\partial u_i \neq 0$. By the implicit function theorem [25], there exists a continuous ideal control input $u_i^*$ in a neighborhood of $(x_i, u_i) \in \Omega_i \times R$ such that $f_i(x_i, u_i^*) - d_i = 0$, i.e., $d_i = f_i(x_i, u_i^*)$ holds. Here, $d_i = f_i(x_i, u_i^*)$ represents the ideal control inverse.
Adding and subtracting $d_i$ on the right-hand side of $\dot{x}_{il_i} = f_i(x_i, u_i) + g_i$ in (1), one obtains
$$ \dot{x}_{il_i} = f_i(x_i, u_i) + g_i + d_i - Y_{di} - k_i s_i, \qquad (6) $$
which yields
$$ \dot{s}_i = -k_i s_i + f_i(x_i, u_i) + g_i + d_i. \qquad (7) $$

Consider the state-dependent transformation $\psi_i = \dot{x}_{il_i}$, where $\psi_i$ is commonly referred to as the pseudo-control [13]. The pseudo-control is not a function of the control $u_i$ but rather a state-dependent operator, so $\partial \psi_i / \partial u_i = 0$. From Assumption 1, $\partial f_i(x_i, u_i)/\partial u_i \neq 0$, and thus $\partial [\psi_i - f_i(x_i, u_i)]/\partial u_i \neq 0$. By the implicit function theorem, for every $(x_i, u_i) \in \Omega_i \times R$ there exists an implicit function such that $\psi_i - f_i(x_i, u_i) = 0$ holds, i.e.,
$$ \psi_i = f_i(x_i, u_i). \qquad (8) $$

Furthermore, by the inverse function theorem, since $\partial [\psi_i - f_i(x_i, u_i)]/\partial u_i \neq 0$ and $f_i(x_i, u_i)$ is smooth with respect to the control input $u_i$ (and thus $\psi_i = f_i(x_i, u_i)$ is also smooth in the control input), $f_i(x_i, u_i)$ defines a local diffeomorphism [26]; hence, in a neighborhood of $u_i$, there exists a smooth inverse function such that $u_i = f_i^{-1}(x_i, \psi_i)$ holds. If this inverse were available, the control problem would be easy; since it is not known, techniques such as neural networks can be used to approximate it. Hence, we obtain an estimated function $\hat{u}_i = f_i^{-1}(x_i, \hat{\psi}_i)$, which results in the following equation holding:
$$ \hat{\psi}_i = f_i(x_i, \hat{u}_i), \qquad (9) $$
where $\hat{\psi}_i$ may be referred to as the approximate pseudo-control input, which represents an actual dynamic approximate inverse.

Remark 1. According to the conditions above, the designed pseudo-control signal $\hat{\psi}_i$ must be a smooth function. Therefore, in order to satisfy this condition, we adopt the hyperbolic tangent function instead of the sign function in the design of the control input. This also makes the control signal smoother and the system easier to run. The hyperbolic tangent function has the following useful property [27]:
$$ 0 \le |\eta_i| - \eta_i \tanh\!\left(\frac{\eta_i}{a_i}\right) \le \varpi a_i, \qquad (10) $$
with $\varpi = 0.2785$ and $a_i$ any positive constant. Moreover, theoretically, $\hat{\psi}_i$ is an approximate inverse, generally a nonlinear function; it must be bounded, play a dynamic approximation role and keep the system stable. Hence, it represents the actual dynamic approximate inverse.
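The bound (10) is easy to verify numerically. The short sketch below (illustrative only; the grid and the value of $a_i$ are arbitrary choices) evaluates $|\eta_i| - \eta_i \tanh(\eta_i/a_i)$ over a range of $\eta_i$ and confirms that it stays between $0$ and $\varpi a_i$ with $\varpi = 0.2785$.

```python
import numpy as np

a_i = 0.5                                   # any positive constant a_i
eta = np.linspace(-20.0, 20.0, 200001)      # grid of eta_i values (includes 0)
gap = np.abs(eta) - eta * np.tanh(eta / a_i)

print(gap.min() >= 0.0)                     # True: the quantity is never negative
print(gap.max() <= 0.2785 * a_i)            # True: it never exceeds 0.2785 * a_i
```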
Based on the above conditions, in order to control the system and keep it stable, we design the approximate pseudo-control input $\hat{\psi}_i$ as
$$ \hat{\psi}_i = -k_i s_i - Y_{di} - u_{ci} - \hat{W}_{gi}^T S_{gi}(|s_i|)\, s_i - v_{ri}, \qquad (11) $$
where $u_{ci}$ is the output of a neural network controller implemented as a radial basis function network (RBFN), $v_{ri}$ is a robustifying control term designed in the stability analysis, and $\hat{W}_{gi}^T S_{gi}(|s_i|)$ is used to compensate the interconnection nonlinearity (to be defined later).

Adding and subtracting $\hat{\psi}_i$ on the right-hand side of (7), with $d_i = k_i s_i + Y_{di} = f_i(x_i, u_i^*)$, we have
$$ \dot{s}_i = -k_i s_i + \tilde{D}_i(x_i, u_i, u_i^*) - u_{ci} - \hat{W}_{gi}^T S_{gi}(|s_i|)\, s_i + d_i - \hat{\psi}_i - v_{ri} + g_i, \qquad (12) $$
where $\tilde{D}_i(x_i, u_i, u_i^*) = f_i(x_i, u_i) - f_i(x_i, u_i^*)$ is the error between the nonlinear function and its ideal control function; an RBFN is used to approximate it.
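As an illustration of how the approximate pseudo-control (11) is assembled from its components, the following sketch computes $\hat{\psi}_i$ for one subsystem. The function name, the toy numbers and the sign convention follow the reconstruction of (11) given above and are not taken verbatim from the paper.

```python
import numpy as np

def pseudo_control(k_i, s_i, Y_di, u_ci, W_g_hat, S_g, v_ri):
    """Approximate pseudo-control of Eq. (11), with the sign convention used in the
    reconstruction above: psi_hat = -k*s - Y_d - u_c - W_g_hat^T S_g(|s|) * s - v_r."""
    return -k_i * s_i - Y_di - u_ci - float(W_g_hat @ S_g) * s_i - v_ri

# Toy values, purely illustrative
W_g_hat = np.array([0.10, -0.20, 0.05])                    # interconnection RBFN weights
centers = np.array([0.0, 0.5, 1.0])
S_g = np.exp(-0.5 * (abs(0.3) - centers) ** 2 / 2.0 ** 2)  # Gaussian basis at |s_i| = 0.3
psi_hat = pseudo_control(k_i=2.0, s_i=0.3, Y_di=0.4, u_ci=0.1,
                         W_g_hat=W_g_hat, S_g=S_g, v_ri=0.05)
print(psi_hat)
```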
3 Neural network-based approximation

Like multilayer feedforward neural networks, RBFNs are known to be universal approximators. In addition, RBFNs belong to a class of linearly parameterized models, which makes them suitable for constructing adaptation mechanisms. Moreover, the fast initial learning of RBFNs adds to their suitability for adaptive control [28, 29]. Thus, we choose an RBFN to implement the proposed controller.

For a multi-input single-output RBFN, let $n_{1i}$ and $m_{1i}$ be the numbers of input-layer and hidden-layer nodes, respectively. The activation function used in the RBFN is the Gaussian function
$$ S_k(z_i) = \exp\!\left(-\frac{\|z_i - \mu_{ik}\|^2}{2\sigma_{ik}^2}\right), \qquad k = 1, \ldots, m_{1i}, $$
where $z_i \in R^{n_{1i}}$ is the input vector of the RBFN, and $\mu_i \in R^{n_{1i} \times m_{1i}}$ and $\sigma_i \in R^{m_{1i}}$ are the center matrix (with columns $\mu_{ik}$) and the width vector, respectively. Based on the approximation property of the RBFN, $\tilde{D}_i(x_i, u_i, u_i^*)$ can be written as
$$ \tilde{D}_i(x_i, u_i, u_i^*) = W_i^T S_i(z_i) + \varepsilon_i(z_i), \qquad (13) $$
where $W_i$ is the ideal weight vector, $S_i(z_i)$ is the vector of Gaussian basis functions, $\varepsilon_i(z_i)$ is the approximation error of the RBFN, and $z_i \in R^q$ with $q$ the number of input nodes.

Assumption 4. The approximation error $\varepsilon_i(z_i)$ is bounded, $|\varepsilon_i| \le \varepsilon_{Ni}$, where $\varepsilon_{Ni} > 0$ is an unknown constant.
The input of the RBFN is chosen as $z_i = [x_i^T, s_i, \hat{\psi}_i]^T$, and the output of the RBFN is designed as
$$ u_{ci} = \hat{W}_i^T S_i(z_i). \qquad (14) $$
Here $\hat{W}_i$ is the estimate of the ideal $W_i$, given by the RBFN tuning algorithm.

Assumption 5. The ideal weights $W_i$ satisfy
$$ \|W_i\| \le W_{iM}, \qquad (15) $$
where $W_{iM}$ is a known positive constant, and the estimation error is $\tilde{W}_i = W_i - \hat{W}_i$.
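To make the RBFN structure of (13) and (14) concrete, here is a minimal Python sketch of the Gaussian basis vector $S_i(z_i)$ and the controller output $u_{ci} = \hat{W}_i^T S_i(z_i)$. The class name and the particular centers and widths are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

class RBFN:
    """Minimal radial basis function network: output = W_hat^T S(z) with Gaussian basis."""

    def __init__(self, centers, widths):
        self.centers = np.asarray(centers, dtype=float)   # shape (m_hidden, n_inputs)
        self.widths = np.asarray(widths, dtype=float)     # shape (m_hidden,)
        self.W_hat = np.zeros(self.centers.shape[0])      # weight estimate, initially zero

    def basis(self, z):
        """Gaussian basis: S_k(z) = exp(-||z - mu_k||^2 / (2 sigma_k^2))."""
        d2 = np.sum((self.centers - np.asarray(z, dtype=float)) ** 2, axis=1)
        return np.exp(-0.5 * d2 / self.widths ** 2)

    def output(self, z):
        """Controller output u_ci = W_hat^T S(z), Eq. (14)."""
        return float(self.W_hat @ self.basis(z))

# 8 hidden nodes, 4-dimensional input z_i = [x_i^T, s_i, psi_hat_i]^T for l_i = 2
net = RBFN(centers=np.zeros((8, 4)), widths=2.0 * np.ones(8))
print(net.output([0.2, 0.2, 0.3, -0.1]))   # 0.0 until the weights are adapted
```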
4 Control laws design and stability analysis

Substituting (13) and (14) into (12), we have
$$ \dot{s}_i = -k_i s_i + \tilde{W}_i^T S_i + d_i - \hat{\psi}_i - v_{ri} + g_i - \hat{W}_{gi}^T S_{gi}(|s_i|)\, s_i + \varepsilon_i(z_i). \qquad (16) $$

Theorem 1. Consider the nonlinear subsystems represented by (1) and let Assumptions 1-5 hold. If the pseudo-control input $\hat{\psi}_i$ is chosen as (11) and the following adaptation laws and robust control law are used,
$$ \dot{\hat{W}}_i = F_i \left[ S_i s_i - c_{Wi} \hat{W}_i |s_i| \right], \qquad (17) $$
$$ \dot{\hat{W}}_{gi} = G_i \left[ S_{gi}(|s_i|)\, s_i^2 - c_{gi} \hat{W}_{gi} |s_i| \right], \qquad (18) $$
$$ \dot{\hat{\phi}}_i = k_{\phi i} \left[ s_i (|s_i| + 1) \tanh(s_i/a_i) - c_{\phi i} \hat{\phi}_i |s_i| \right], \qquad (19) $$
$$ v_{ri} = \hat{\phi}_i (|s_i| + 1) \tanh(s_i/a_i), \qquad (20) $$
where $F_i = F_i^T > 0$ and $G_i = G_i^T > 0$ are constant matrices, $k_{\phi i}$, $c_{Wi}$, $c_{gi}$, $c_{\phi i}$ and $a_i$ are positive design parameters, and $\hat{\phi}_i$ is the estimate of the unknown approximation error bound (defined shortly), then all signals in the system are bounded and the tracking error $e_i$ converges to a neighborhood of the origin.

Proof. Consider the positive definite Lyapunov function candidate
$$ 2L_i = s_i^2 + \tilde{W}_i^T F_i^{-1} \tilde{W}_i + \tilde{W}_{gi}^T G_i^{-1} \tilde{W}_{gi} + k_{\phi i}^{-1} \tilde{\phi}_i^2. \qquad (21) $$
Its time derivative is
$$ \dot{L}_i = s_i \dot{s}_i + \tilde{W}_i^T F_i^{-1} \dot{\tilde{W}}_i + \tilde{W}_{gi}^T G_i^{-1} \dot{\tilde{W}}_{gi} + k_{\phi i}^{-1} \tilde{\phi}_i \dot{\tilde{\phi}}_i. \qquad (22) $$
Applying (16) and (17) to (22), and noting that $\dot{(\tilde{\cdot})} = -\dot{(\hat{\cdot})}$, we have
$$ \dot{L}_i = s_i \left[ -k_i s_i + d_i - \hat{\psi}_i - v_{ri} + g_i - \hat{W}_{gi}^T S_{gi}(|s_i|)\, s_i + \varepsilon_i \right] + c_{Wi} \tilde{W}_i^T \hat{W}_i |s_i| + \tilde{W}_{gi}^T G_i^{-1} \dot{\tilde{W}}_{gi} + k_{\phi i}^{-1} \tilde{\phi}_i \dot{\tilde{\phi}}_i. \qquad (23) $$
Using (2), (23) is rewritten as
$$ \dot{L}_i \le -k_i s_i^2 + s_i (d_i - \hat{\psi}_i) - v_{ri} s_i + |s_i| \sum_{j=1}^{n} \xi_{ij}(|s_j|) - \hat{W}_{gi}^T S_{gi}(|s_i|)\, s_i^2 + |s_i| \varepsilon_{Ni} + k_{\phi i}^{-1} \tilde{\phi}_i \dot{\tilde{\phi}}_i + c_{Wi} \tilde{W}_i^T \hat{W}_i |s_i| + \tilde{W}_{gi}^T G_i^{-1} \dot{\tilde{W}}_{gi}. \qquad (24) $$
Since $\xi_{ij}(\cdot)$ is a smooth function, there exists a smooth function $\zeta_{ij}(|s_j|)$ $(1 \le i, j \le n)$ such that $\xi_{ij}(|s_j|) = |s_j|\, \zeta_{ij}(|s_j|)$ holds. Thus, we have
$$ \dot{L}_i \le -k_i s_i^2 + s_i (d_i - \hat{\psi}_i) - v_{ri} s_i + s_i^2 \sum_{j=1}^{n} \zeta_{ij}(|s_j|) - \hat{W}_{gi}^T S_{gi}(|s_i|)\, s_i^2 + |s_i| \varepsilon_{Ni} + k_{\phi i}^{-1} \tilde{\phi}_i \dot{\tilde{\phi}}_i + c_{Wi} \tilde{W}_i^T \hat{W}_i |s_i| + \tilde{W}_{gi}^T G_i^{-1} \dot{\tilde{W}}_{gi}. \qquad (25) $$
Since the function $\delta_i(|s_i|) = \sum_{j=1}^{n} \zeta_{ij}(|s_i|)$ is smooth and $s_i$ lies in a compact set, $\delta_i(|s_i|)$ can be approximated by an RBFN, i.e., $\delta_i(|s_i|) = W_{gi}^T S_{gi}(|s_i|) + \varepsilon_{gi}$, with bounded approximation error $\varepsilon_{gi}$, $|\varepsilon_{gi}| \le \varepsilon_{gNi}$. Here $\hat{W}_{gi}$ is the estimate of the ideal $W_{gi}$, with $\|W_{gi}\| \le W_{gMi}$, $W_{gMi} > 0$ a known constant, and the estimation error is $\tilde{W}_{gi} = W_{gi} - \hat{W}_{gi}$. Then (25) becomes
$$ \dot{L}_i \le -k_i s_i^2 + s_i (d_i - \hat{\psi}_i) - v_{ri} s_i + \tilde{W}_{gi}^T S_{gi}(|s_i|)\, s_i^2 + \varepsilon_{gi} s_i^2 + |s_i| \varepsilon_{Ni} + k_{\phi i}^{-1} \tilde{\phi}_i \dot{\tilde{\phi}}_i + c_{Wi} \tilde{W}_i^T \hat{W}_i |s_i| + \tilde{W}_{gi}^T G_i^{-1} \dot{\tilde{W}}_{gi}. \qquad (26) $$
Substituting the adaptive law (18), we obtain
$$ \dot{L}_i \le -k_i s_i^2 + s_i (d_i - \hat{\psi}_i) - v_{ri} s_i + \varepsilon_{gNi} s_i^2 + |s_i| \varepsilon_{Ni} + k_{\phi i}^{-1} \tilde{\phi}_i \dot{\tilde{\phi}}_i + c_{Wi} \tilde{W}_i^T \hat{W}_i |s_i| + c_{gi} \tilde{W}_{gi}^T \hat{W}_{gi} |s_i|. \qquad (27) $$
Define $\phi_i = \max(\varepsilon_{Ni}, \varepsilon_{gNi})$, let $\hat{\phi}_i$ be its estimate and $\tilde{\phi}_i = \phi_i - \hat{\phi}_i$ the estimation error. Equation (27) can be rewritten as
$$ \dot{L}_i \le -k_i s_i^2 + s_i (d_i - \hat{\psi}_i) - v_{ri} s_i + \phi_i (s_i^2 + |s_i|) + k_{\phi i}^{-1} \tilde{\phi}_i \dot{\tilde{\phi}}_i + c_{Wi} \tilde{W}_i^T \hat{W}_i |s_i| + c_{gi} \tilde{W}_{gi}^T \hat{W}_{gi} |s_i|. \qquad (28) $$
Applying the adaptive law (19) and the robust control term (20), we have
$$
\begin{aligned}
\dot{L}_i &\le -k_i s_i^2 + s_i (d_i - \hat{\psi}_i) + \phi_i |s_i|(|s_i| + 1) - \phi_i s_i (|s_i| + 1) \tanh(s_i/a_i) + c_{Wi} \tilde{W}_i^T \hat{W}_i |s_i| + c_{gi} \tilde{W}_{gi}^T \hat{W}_{gi} |s_i| + c_{\phi i} \tilde{\phi}_i \hat{\phi}_i |s_i| \\
&= -k_i s_i^2 + s_i (d_i - \hat{\psi}_i) + \phi_i (|s_i| + 1)\left[ |s_i| - s_i \tanh(s_i/a_i) \right] + c_{Wi} \tilde{W}_i^T \hat{W}_i |s_i| + c_{gi} \tilde{W}_{gi}^T \hat{W}_{gi} |s_i| + c_{\phi i} \tilde{\phi}_i \hat{\phi}_i |s_i|. \qquad (29)
\end{aligned}
$$
Using (10), we get
$$ \dot{L}_i \le -k_i s_i^2 + s_i (d_i - \hat{\psi}_i) + \phi_i (|s_i| + 1)\, \varpi a_i + c_{Wi} \tilde{W}_i^T \hat{W}_i |s_i| + c_{gi} \tilde{W}_{gi}^T \hat{W}_{gi} |s_i| + c_{\phi i} \tilde{\phi}_i \hat{\phi}_i |s_i|. \qquad (30) $$

With (11), (14) and (17)–(20), the approximation error between the ideal control inverse and the actual approximate inverse is bounded by $|d_i - \hat{\psi}_i| \le c_{1i} + c_{2i}|s_i| + c_{3i}\|\tilde{W}_i\| + c_{4i}\|\tilde{W}_{gi}\|$, with $c_{1i}, c_{2i}, c_{3i}, c_{4i}$ positive constants. Moreover, using the fact that $\tilde{a}^T \hat{a} \le \|\tilde{a}\|\,\|a\| - \|\tilde{a}\|^2$ and grouping terms, (30) can be rewritten as
$$ \dot{L}_i \le -(k_i - c_{2i}) s_i^2 + c_{1i}|s_i| + \phi_i \varpi a_i - |s_i|\left[ c_{Wi}\|\tilde{W}_i\|^2 - c_{5i}\|\tilde{W}_i\| \right] - |s_i|\left[ c_{gi}\|\tilde{W}_{gi}\|^2 - c_{6i}\|\tilde{W}_{gi}\| \right] - |s_i|\left[ k_{\phi i}\tilde{\phi}_i^2 - |\phi_i|\,|\tilde{\phi}_i| \right], \qquad (31) $$
with $c_{5i} = c_{Wi}\|W_i\| + c_{3i}$ and $c_{6i} = c_{gi}\|W_{gi}\| + c_{4i}$. Completing the squares in (31) yields
$$ \dot{L}_i \le -(k_i - c_{2i}) s_i^2 + c_{8i}|s_i| + \phi_i \varpi a_i, \qquad (32) $$
with $c_{7i} = \phi_i^2/(4 k_{\phi i}) + c_{6i}^2/(4 c_{gi}) + c_{5i}^2/(4 c_{Wi})$ and $c_{8i} = c_{1i} + c_{7i}$.

For the overall system, we have
$$ \dot{L} = \sum_{i=1}^{n} \dot{L}_i \le \sum_{i=1}^{n} \left[ -(k_i - c_{2i}) s_i^2 + c_{8i}|s_i| + \phi_i \varpi a_i \right]. \qquad (33) $$
Now define $\nu = [|s_1|, \ldots, |s_n|]^T$, $K = \mathrm{diag}[k_1 - c_{21}, \ldots, k_n - c_{2n}]$, $C = [c_{81}, c_{82}, \ldots, c_{8n}]^T$ and $D = \sum_{i=1}^{n} \phi_i \varpi a_i$. Equation (33) can be rewritten as
$$ \dot{L} \le -\nu^T K \nu + C^T \nu + D \le -\lambda_{\min}(K)\|\nu\|^2 + \|C\|\,\|\nu\| + D. \qquad (34) $$
Completing the square yields
$$ \dot{L} \le -\lambda_{\min}(K)\left( \|\nu\| - \frac{\|C\|}{2\lambda_{\min}(K)} \right)^2 + \frac{\|C\|^2}{4\lambda_{\min}(K)} + D. \qquad (35) $$
Clearly, $\dot{L} \le 0$ as long as $k_i > c_{2i}$ and any of the following conditions holds:
$$ \|\nu\| \ge A, \quad |\tilde{\phi}_i| \ge k_{\phi i}^{-1}|\phi_i|, \quad \|\tilde{W}_i\| \ge c_{5i} c_{Wi}^{-1}, \quad \|\tilde{W}_{gi}\| \ge c_{6i} c_{gi}^{-1}, \qquad (36) $$
where
$$ A = \sqrt{\frac{\|C\|^2}{4\lambda_{\min}^2(K)} + \frac{D}{\lambda_{\min}(K)}} + \frac{\|C\|}{2\lambda_{\min}(K)}, $$
with $\lambda_{\min}(K)$ the minimum singular value of $K$. Now define
$$ \Omega_\nu = \{\nu : \|\nu\| \le A\}, \quad \Omega_{\phi i} = \{\tilde{\phi}_i : |\tilde{\phi}_i| \le k_{\phi i}^{-1}|\phi_i|\}, \quad \Omega_{Wi} = \{\tilde{W}_i : \|\tilde{W}_i\| \le c_{5i} c_{Wi}^{-1}\}, \quad \Omega_{Wgi} = \{\tilde{W}_{gi} : \|\tilde{W}_{gi}\| \le c_{6i} c_{gi}^{-1}\}. \qquad (37) $$
Since $\|W_i\|$, $\|W_{gi}\|$, $\phi_i$, $c_{\phi i}$, $c_{Wi}$, $c_{gi}$, $c_{5i}$ and $c_{6i}$ are positive constants, $\Omega_\nu$, $\Omega_{\phi i}$, $\Omega_{Wi}$ and $\Omega_{Wgi}$ are compact sets, and $\dot{L}$ is negative outside these compact sets. According to a standard Lyapunov argument, this demonstrates that $\tilde{W}_i$, $\tilde{W}_{gi}$, $\tilde{\phi}_i$ and $\nu$ are bounded and converge to $\Omega_{Wi}$, $\Omega_{Wgi}$, $\Omega_{\phi i}$ and $\Omega_\nu$, respectively. Furthermore, this implies that $e_i$ is bounded and converges to a neighborhood of the origin, and that all signals in the system are bounded.
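As a rough illustration of how the adaptation laws (17)-(19) and the robust term (20) could be implemented in discrete time, the sketch below performs one forward-Euler update. The function signature, the state layout and the step size are our own choices and are not prescribed by the paper.

```python
import numpy as np

def adapt_step(W_hat, Wg_hat, phi_hat, S_i, S_gi, s_i,
               F_i, G_i, k_phi, c_W, c_g, c_phi, a_i, dt):
    """One forward-Euler step of the adaptation laws (17)-(19) and the robust term (20)."""
    abs_s = abs(s_i)
    # (17): dW_hat/dt = F_i [ S_i s_i - c_Wi W_hat |s_i| ]
    W_hat = W_hat + dt * (F_i @ (S_i * s_i - c_W * W_hat * abs_s))
    # (18): dWg_hat/dt = G_i [ S_gi(|s_i|) s_i^2 - c_gi Wg_hat |s_i| ]
    Wg_hat = Wg_hat + dt * (G_i @ (S_gi * s_i ** 2 - c_g * Wg_hat * abs_s))
    # (19): dphi_hat/dt = k_phi [ s_i (|s_i| + 1) tanh(s_i / a_i) - c_phi phi_hat |s_i| ]
    phi_hat = phi_hat + dt * k_phi * (s_i * (abs_s + 1.0) * np.tanh(s_i / a_i)
                                      - c_phi * phi_hat * abs_s)
    # (20): robust control term
    v_ri = phi_hat * (abs_s + 1.0) * np.tanh(s_i / a_i)
    return W_hat, Wg_hat, phi_hat, v_ri

# Example with illustrative gains (loosely following Sect. 5), 8 hidden nodes
W, Wg, phi = np.zeros(8), np.zeros(8), 0.0
S = 0.1 * np.ones(8)
W, Wg, phi, v = adapt_step(W, Wg, phi, S, S, s_i=0.3,
                           F_i=10 * np.eye(8), G_i=2 * np.eye(8),
                           k_phi=0.01, c_W=0.001, c_g=0.001, c_phi=0.1,
                           a_i=10.0, dt=1e-3)
```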
5 Simulation study
In order to validate the effectiveness of the proposed scheme, we present an example and assume that the large-scale system is composed of the following two subsystems:

Subsystem 1:
$$
\left\{
\begin{aligned}
\dot{x}_{11} &= x_{12},\\
\dot{x}_{12} &= -\omega^2 x_{11} + 0.02\,(\omega - x_{11}^2)\,x_{12} + u_1 + (x_{11}^2 + x_{12}^2)\,\sigma(u_1) + 0.1\,\|x_2\|\exp(-0.5\|x_2\|),
\end{aligned}
\right. \qquad (38)
$$

Subsystem 2:
$$
\left\{
\begin{aligned}
\dot{x}_{21} &= x_{22},\\
\dot{x}_{22} &= x_{21}^2 + 0.1\,(1 + x_{22}^2)\,u_2 + \tanh(0.1 u_2) + 0.15\,u_2^3 + 0.2\,\|x_2\|\exp(-0.1\|x_2\|),
\end{aligned}
\right. \qquad (39)
$$

where $\omega = 0.4\pi$ and $\sigma(u_1) = (1 - e^{-u_1})/(1 + e^{-u_1})$. The desired trajectories are $x_{d11} = 0.1\pi[\sin(2t) - \cos(t)]$ and $x_{d21} = 0.1\pi \sin(2t)$.

For the RBFNs of (13), the input vectors are chosen as $z_i = [x_i^T, s_i, \hat{\psi}_i]^T$, $i = 1, 2$, the number of hidden-layer nodes is 8 for both, the initial weights are $\hat{W}_i(0) = 0$, and the center values and widths of the Gaussian functions are 0 and 2, respectively. For the RBFNs used to compensate the interconnection nonlinearities, both input vectors are $[s_1, s_2]^T$, the number of hidden-layer nodes is 8, the initial weights are $\hat{W}_{gi}(0) = 0$, and the center values and widths of the Gaussian functions are zero and $\sqrt{5}$, respectively. The initial conditions of the controlled plant are $x_1(0) = [0.2, 0.2]^T$ and $x_2(0) = [0.3, 0.2]^T$. The other parameters are chosen as follows: $\lambda_{i,1} = 1$, $k_i = 2$, $c_{Wi} = 0.001$, $c_{\phi i} = 0.1$, $k_{\phi i} = 0.01$, $a_i = 10$, $F_i = 10 I_{Wi}$, $G_i = 2 I_{gi}$, with $I_{Wi}$, $I_{gi}$ the corresponding identity matrices.

Figures 1 and 2 show the tracking errors and the control inputs of the two subsystems, Figs. 3 and 4 compare the output tracking of the two subsystems, and Figs. 5 and 6 illustrate the norms of the weights and the outputs of the RBFNs in the two subsystems. From these results, the effectiveness of the proposed scheme is validated: the tracking errors converge to a neighborhood of zero and all signals in the system are bounded. Furthermore, the neural network controller learns rapidly and tracks the desired trajectory in less than 3 s. The control inputs, after a short initial transient, become smoother; this is because the neural networks have no knowledge of the plant in the initial stage. Although the system is complex, the overall behavior is as desired.

Fig. 1 Tracking errors of subsystem 1 (e11) and subsystem 2 (e21)

Fig. 2 Control inputs of subsystem 1 (u1) and subsystem 2 (u2)

Fig. 3 Comparison of tracking of subsystem 1

Fig. 4 Comparison of tracking of subsystem 2

Fig. 5 The norms of weights and output of RBFN of subsystem 1

Fig. 6 The norms of weights and output of RBFN of subsystem 2
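For readers who wish to reproduce the setup, the following sketch integrates the two subsystems (38) and (39) under given inputs. It reflects the reconstruction of the garbled equations given above (in particular, the signs and the $\|x_2\|$ argument of the interconnection terms are assumptions), and the controller itself is omitted.

```python
import numpy as np

OMEGA = 0.4 * np.pi

def sigma(u):
    """sigma(u_1) = (1 - e^{-u}) / (1 + e^{-u})."""
    return (1.0 - np.exp(-u)) / (1.0 + np.exp(-u))

def plant(x1, x2, u1, u2):
    """State derivatives of subsystems (38) and (39), as reconstructed above."""
    n2 = np.linalg.norm(x2)
    dx1 = np.array([
        x1[1],
        -OMEGA ** 2 * x1[0] + 0.02 * (OMEGA - x1[0] ** 2) * x1[1]
        + u1 + (x1[0] ** 2 + x1[1] ** 2) * sigma(u1)
        + 0.1 * n2 * np.exp(-0.5 * n2),                  # interconnection term g_1
    ])
    dx2 = np.array([
        x2[1],
        x2[0] ** 2 + 0.1 * (1.0 + x2[1] ** 2) * u2
        + np.tanh(0.1 * u2) + 0.15 * u2 ** 3
        + 0.2 * n2 * np.exp(-0.1 * n2),                  # interconnection term g_2
    ])
    return dx1, dx2

def desired(t):
    """Desired outputs x_d11(t) and x_d21(t)."""
    return 0.1 * np.pi * (np.sin(2 * t) - np.cos(t)), 0.1 * np.pi * np.sin(2 * t)

# one Euler step from the initial conditions used in the paper
x1, x2, dt = np.array([0.2, 0.2]), np.array([0.3, 0.2]), 1e-3
d1, d2 = plant(x1, x2, u1=0.0, u2=0.0)
x1, x2 = x1 + dt * d1, x2 + dt * d2
```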
6 Conclusion

In this paper, we have designed an RBFN-based decentralized adaptive control scheme for a class of large-scale interconnected nonlinear systems. The system is composed of a class of non-affine nonlinear subsystems that are implicit functions of, and smooth with respect to, the control input. Based on the implicit function theorem, the inverse function theorem and the design idea of pseudo-control, a novel control algorithm is proposed. Using an on-line approximation approach, we relax the linear-in-the-parameters requirement of traditional nonlinear decentralized adaptive control without treating the dynamic uncertainty as part of the interconnections and disturbances. Two designed robust control terms are applied to counteract the interconnection effects and the disturbances, respectively. Unlike most adaptive control schemes, tuning laws are designed for the weights of the RBFNs. Moreover, in order to make the control smoother and the system easier to run, the hyperbolic tangent function is applied instead of the sign function in the robust control term. The adaptive update laws are derived from a Lyapunov analysis, and the performance of the controller is demonstrated to be as desired.

Acknowledgments This research is supported by the research fund granted by the Doctoral Foundation of Qingdao University of Science and Technology.

References
1. Spooner JT, Passino KM (1996) Adaptive control of a class of decentralized nonlinear systems. IEEE Trans Autom Control 41(2):280–284
2. Siljak DD (1985) Decentralized control of complex systems. Academic, Boston
3. Ioannou PA (1986) Decentralized adaptive control of interconnected systems. IEEE Trans Autom Control AC-31:291–298
4. Fu LC (1992) Robust adaptive decentralized control of robot manipulators. IEEE Trans Autom Control 37:106–110
5. Sheikholeslam S, Desoer CA (1993) Indirect adaptive control of a class of interconnected nonlinear dynamical systems. Int J Control 57(3):742–765
6. Wen C (1994) Decentralized adaptive regulation. IEEE Trans Autom Control 39:2163–2166
7. Tang Y, Tomizuka M, Guerrero G (2000) Decentralized robust control of mechanical systems. IEEE Trans Autom Control 45(4):2163–2166
8. Huang SN, Shao HH (1995) Robust stability analysis of uncertain large-scale systems. Control Comput 23(1):1–5
9. Seraji H (1989) Decentralized adaptive control of manipulators: theory, simulation, and experimentation. IEEE Trans Robot Autom 5:183–201
10. Huang SN, Shao HH (1995) Stability analysis of large-scale systems with delays. Syst Control Lett 25:75–78
11. Lewis FL, Yesildirek A, Liu K (1996) Multilayer neural-net robot controller with guaranteed tracking performance. IEEE Trans Neural Netw 7(2):388–399
12. Yesildirek A, Lewis FL (1995) Feedback linearization using neural networks. Automatica 31(11):1659–1664
13. Calise AJ, Hovakimyan N (2001) Adaptive output feedback control of nonlinear systems using neural networks. Automatica 37:1201–1211
14. Johnson E, Calise AJ (2000) Feedback linearization with neural network augmentation applied to X-33 attitude control. In: Guidance, Navigation and Control Conference, AIAA-2000-4157
15. Ge SS, Hang CC (1997) Direct adaptive neural network control of nonlinear systems. In: Proceedings of the American Control Conference, Albuquerque, New Mexico, pp 1568–1572
16. Ge SS, Hang CC (1999) Adaptive neural network control of nonlinear systems by state and output feedback. IEEE Trans Syst Man Cybern B 29(6):818–828
17. Zhang T, Ge SS, Hang CC (1999) Design and performance analysis of a direct adaptive controller for nonlinear systems. Automatica 35:1809–1817
18. Zhang T, Ge SS, Hang CC (1997) Neural-based direct adaptive control for a class of general nonlinear systems. Int J Syst Sci 28:1011–1020
19. Spooner JT, Passino KM (1999) Decentralized adaptive control of nonlinear systems using radial basis neural networks. IEEE Trans Autom Control 44(11):2050–2057
20. Huang S, Tan KK (2003) Decentralized control design for large-scale systems with strong interconnections using neural networks. IEEE Trans Autom Control 48(5):805–810
21. Johnson E, Calise AJ (2000) Feedback linearization with neural network augmentation applied to X-33 attitude control. In: Guidance, Navigation and Control Conference, AIAA-2000-4157
22. Huang SN, Tan KK (2006) Nonlinear adaptive control of interconnected systems using neural networks. IEEE Trans Neural Netw 17(1):243–246
23. Nardi F, Hovakimyan N (2001) Decentralized control of large-scale systems using single hidden layer neural networks. In: Proceedings of the American Control Conference, Arlington, June 2001, pp 3123–3127
24. Huang SN, Tan KK (2005) Decentralized control of a class of large-scale nonlinear systems using neural networks. Automatica 41:1645–1649
25. Lang S (1983) Real analysis. Addison-Wesley, Reading
26. Slotine JJE, Li WP (1991) Applied nonlinear control. Prentice Hall, Englewood Cliffs
27. Polycarpou MM (1996) Stable adaptive neural control scheme for nonlinear systems. IEEE Trans Autom Control 41(3):447–451
28. Girosi F, Poggio T (1989) Networks and the best approximation property. AI Lab Memo 1164, Massachusetts Institute of Technology, Cambridge
29. Gupta MM, Rao DH (1996) Neural-control system: theory and applications. IEEE Neural Networks Council, New York