Kinematic and Dynamic Adaptive Control of a Nonholonomic Mobile Robot using a RNN

Mohamed Oubbati, Member, IEEE, Michael Schanz, and Paul Levi
Institute of Parallel and Distributed Systems, University of Stuttgart
Universitaetsstrasse 38, D-70569 Stuttgart, Germany
{Mohamed.Oubbati, Michael.Schanz, Paul.Levi}@informatik.uni-stuttgart.de

Abstract— In this paper, an adaptive neurocontrol system with two levels is proposed for the motion control of a nonholonomic mobile robot. In the first level, a recurrent network improves the robustness of a kinematic controller and generates the linear and angular velocities necessary to track a reference trajectory. In the second level, another network converts the desired velocities provided by the first level into a torque control. The advantage of the control approach is that no knowledge about the dynamic model is required, and no synaptic weight change is needed in the presence of variations of the robot's parameters. This capability is acquired through prior 'meta-learning'. Simulation results are presented to validate the robustness of the proposed approach.

Index Terms— Nonholonomic mobile robots, adaptive control, recurrent neural networks, meta-learning.

I. INTRODUCTION

Navigation control of mobile robots has been studied by many authors in the last decade, since mobile robots are increasingly used in a wide range of applications. At the beginning, the research effort was focused only on the kinematic model, assuming perfect velocity tracking [1]. Later on, research was conducted to design navigation controllers that also include the dynamics of the robot [2], [3]. Taking the specific robot dynamics into account is more realistic, because the assumption of "perfect velocity tracking" does not hold in practice. Furthermore, during the robot motion, the robot parameters may change due to surface friction, additional load, and other factors. Therefore, it is desirable to develop a robust navigation control with the following capabilities: i) the ability to successfully handle estimation errors and noise in sensor signals, ii) "perfect" velocity tracking, and iii) the ability to adapt in the presence of time-varying parameters in the dynamic model. In [4] a robust adaptive velocity controller is developed using the dynamic model of the mobile robot and Lyapunov theory to prove the stability of the closed loop. The simulation results effectively show the robustness of this controller, but under restrictive assumptions on internal and external disturbances. Artificial neural networks are among the most popular intelligent techniques widely applied in engineering. Their ability to handle complex input-output mappings without a detailed analytical model, and their robustness in noisy environments, make them an ideal choice for real implementations.

(This work was partially supported by the German Academic Exchange Service (DAAD).)

In [3][5] a kinematic controller based on the backstepping method is used for steering, and feedforward neural networks were trained to compensate for the robot dynamics. Recently, increasing attention has been paid to recurrent neural networks (RNNs). Their ability to instantiate arbitrary temporal dynamics makes them promising for adaptive identification and control of dynamical systems. Furthermore, an interesting property of RNNs has been noticed by many authors: adaptive behavior with fixed weights. It has been shown by Feldkamp et al. [6] that a single fixed-weight RNN can perform one-time-step prediction for many distinct time series. In the control domain, it is shown in [7] that an RNN can be trained to act as a stabilizing controller for three unrelated systems and to handle switching between them. More recently, we trained a novel RNN called the Echo State Network (ESN) to act as a fixed-weight adaptive identifier of nonlinear dynamical systems [8]. In [9] we used an ESN to develop a single, fixed-weight adaptive velocity controller for multiple distinct mobile robots. This capability is acquired through prior training: instead of learning data from one system, RNNs are able to learn from different systems, or from one system under different operating conditions. A theoretical explanation of this property can be found in [10][11]. The adaptive behavior of RNNs with fixed weights is named differently in the literature: it is termed "meta-learning" in [12] and "accommodative" in [13].

In this paper, a control approach based on ESNs is developed for the tracking control of a nonholonomic mobile robot. Two control levels are designed. First, an ESN kinematic controller provides the desired velocity necessary to minimize the position error. Then, an adaptive dynamic ESN controller transforms the desired velocity into wheel torques, such that the robot's velocity converges to the desired one. As in [9], no knowledge about the dynamic model is required, and the adaptive behavior is acquired through prior meta-learning.

The rest of this paper is organized as follows. Section II presents some preliminaries, including the ESN training algorithm and the problem statement. The design of the two ESN control levels is described in Section III. In Section IV, the two-level neurocontrol system is assembled for navigation, and some simulation results are presented. Finally, discussion and conclusions are drawn in Section V.

II. PRELIMINARIES AND STATEMENT OF THE PROBLEM

A. Model of a Two Actuated Wheels Robot

The mobile robot of Fig. 1 is considered. In this paper, we adopt the same model used by Fukao et al. [2].

Fig. 1. Mobile robot with two actuated wheels.

1) Kinematic Model: In Fig. 1, P_0 is the origin of the coordinate system and the midpoint between the two wheels. The distance from P_0 to the center of mass P_c is d. The geometric configuration of the robot is described by q = [x, y, \phi, \theta_r, \theta_l]^T, where (x, y) are the coordinates of P_0, \phi is the heading angle of the robot, and \theta_r, \theta_l are the angles of the right and left driving wheels, respectively. The kinematic model is given by the following equation:

\dot{q} = S(q)\,\nu \qquad (1)

where \nu = [\nu_1, \nu_2]^T are the angular velocities of the right and left wheels, and S(q) is selected as

S(q) = \begin{bmatrix} \frac{r}{2}\cos\phi & \frac{r}{2}\cos\phi \\ \frac{r}{2}\sin\phi & \frac{r}{2}\sin\phi \\ \frac{r}{2b} & -\frac{r}{2b} \\ 1 & 0 \\ 0 & 1 \end{bmatrix}

The relationship between \nu and the linear and angular velocities (v, w) is

\begin{bmatrix} \nu_1 \\ \nu_2 \end{bmatrix} = \begin{bmatrix} \frac{1}{r} & \frac{b}{r} \\ \frac{1}{r} & -\frac{b}{r} \end{bmatrix} \begin{bmatrix} v \\ w \end{bmatrix} \qquad (2)

Substituting (2) into (1) and taking into consideration only the robot pose, we obtain the ordinary kinematic form of a mobile robot with two actuated wheels:

\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\phi} \end{bmatrix} = \begin{bmatrix} \cos\phi & 0 \\ \sin\phi & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} v \\ w \end{bmatrix} \qquad (3)

If the mobile robot satisfies the conditions of pure rolling and no slipping, then [3]:

\dot{x}\sin\phi - \dot{y}\cos\phi = 0 \qquad (4)

2) Dynamic Model: We assume that the mass of the body is m_c and that of a wheel with a motor is m_w. I_c, I_w, and I_m are the moments of inertia of the body about the vertical axis through P_c, of the wheel with a motor about the wheel axis, and of the wheel with a motor about the wheel diameter, respectively. The dynamic model is given by:

M(q)\,\dot{\nu} + V(q, \dot{q})\,\nu = B(q)\,\tau \qquad (5)

where \tau = [\tau_r, \tau_l]^T are the torques applied to the right and left wheels, and M, V, B are

M = \begin{bmatrix} \frac{r^2}{4b^2}(m b^2 + I) + I_w & \frac{r^2}{4b^2}(m b^2 - I) \\ \frac{r^2}{4b^2}(m b^2 - I) & \frac{r^2}{4b^2}(m b^2 + I) + I_w \end{bmatrix}

V = \begin{bmatrix} 0 & \frac{r^2}{2b} m_c d \dot{\phi} \\ -\frac{r^2}{2b} m_c d \dot{\phi} & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}

with I = m_c d^2 + I_c + 2 m_w b^2 + 2 I_m and m = m_c + 2 m_w.
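As a concrete reading of (5), the following minimal NumPy sketch builds M, V, and B from the robot parameters; the default values are the nominal ones listed later in Section III-B, and the function itself is an illustration, not code from the paper.

```python
import numpy as np

def dynamics_matrices(phi_dot, m_c=30.0, m_w=1.0, d=0.3, b=0.75, r=0.15,
                      I_c=15.625, I_w=0.005, I_m=0.0025):
    """Build M, V, B of Eq. (5); defaults are the nominal values of Sec. III-B."""
    I = m_c * d**2 + I_c + 2 * m_w * b**2 + 2 * I_m   # combined inertia
    m = m_c + 2 * m_w                                  # total mass
    k = r**2 / (4 * b**2)
    M = np.array([[k * (m * b**2 + I) + I_w, k * (m * b**2 - I)],
                  [k * (m * b**2 - I),       k * (m * b**2 + I) + I_w]])
    c = (r**2 / (2 * b)) * m_c * d * phi_dot           # velocity-dependent coupling
    V = np.array([[0.0,  c],
                  [-c,  0.0]])
    B = np.eye(2)                                      # torques act directly on the wheels
    return M, V, B
```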

B. Navigation Control

In navigation control, the objective is to control the velocity of the robot such that its pose P = [x, y, \phi]^T follows a reference trajectory P_r = [x_r, y_r, \phi_r]^T. Various feedback controllers have been proposed to solve this problem (see the survey paper [1] and the references cited therein). However, most of them neglect the vehicle dynamics and consider only the steering system (3), where the velocities (v, w) are assumed to be the robot inputs. One of these controllers is described as follows. Let a reference robot (trajectory) be prescribed by

\begin{bmatrix} \dot{x}_r \\ \dot{y}_r \\ \dot{\phi}_r \end{bmatrix} = \begin{bmatrix} \cos\phi_r & 0 \\ \sin\phi_r & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} v_r \\ w_r \end{bmatrix} \qquad (6)

where x_r, y_r, and \phi_r are the pose of the reference robot, and (v_r, w_r) are its linear and angular velocities. We define e_1, e_2, e_3 as

\begin{bmatrix} e_1 \\ e_2 \\ e_3 \end{bmatrix} = \begin{bmatrix} \cos\phi & \sin\phi & 0 \\ -\sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_r - x \\ y_r - y \\ \phi_r - \phi \end{bmatrix} \qquad (7)

The inputs (v, w) that make e_1, e_2, e_3 converge to zero are given by [1],[3]:

v_d = v_r \cos e_3 + K_1 e_1, \qquad w_d = w_r + v_r K_2 e_2 + K_3 \sin e_3 \qquad (8)

where K_1, K_2, K_3 are positive constants. In practical situations, however, any set of sensor data used to estimate the robot pose may be incomplete and distorted by noise and errors. To emulate the real world, we drove the model (3) using (8), and an external perturbation drawn from a Gaussian distribution was added to the computed robot pose. The controller frequently lost stability of the closed loop. Furthermore, the assumption of "perfect" velocity tracking is almost unattainable in real cases and, as is well known, control at the kinematic level may become unstable if there are control errors at the dynamic level. If we consider that some of the robot parameter values are time varying, the problem becomes more complicated.
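For clarity, a short NumPy sketch of the error transformation (7) and control law (8) is given below; the gains K_1 = K_2 = K_3 = 5 are those reported in the caption of Fig. 4, and the function is only an illustrative reading of the equations, not code from the paper.

```python
import numpy as np

def kinematic_controller(pose, pose_ref, v_r, w_r, K1=5.0, K2=5.0, K3=5.0):
    """Pose-error transform (7) and control law (8); gains as in Fig. 4."""
    x, y, phi = pose
    x_r, y_r, phi_r = pose_ref
    # Eq. (7): tracking error expressed in the robot frame
    e1 =  np.cos(phi) * (x_r - x) + np.sin(phi) * (y_r - y)
    e2 = -np.sin(phi) * (x_r - x) + np.cos(phi) * (y_r - y)
    e3 = phi_r - phi
    # Eq. (8): desired linear and angular velocities
    v_d = v_r * np.cos(e3) + K1 * e1
    w_d = w_r + v_r * K2 * e2 + K3 * np.sin(e3)
    return v_d, w_d
```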

The contribution of this paper is to improve the robustness of the controller (8) against noise and external perturbations, and to take into account the specific vehicle dynamics in the presence of time-varying parameters.

Recurrent neural networks are an attractive tool for dealing with complex nonlinear systems without a detailed analytical model. Their generalization capability, robustness against noise, and capacity for adaptive behavior make them very promising for offering complementary or new solutions for the adaptive control of nonlinear dynamical systems. However, only a few results have been published regarding nonlinear system control by RNNs, mainly because simple and powerful training algorithms have been missing. The Echo State Network (ESN) is a novel RNN with an easy training algorithm, in which only the network-to-output connection weights have to be trained. Many dynamical systems that were difficult to learn with existing methods have been learnt easily by ESNs [14].

C. Echo State Network

An echo state network is formed by a "Dynamic Reservoir" (DR), which contains a large number of sparsely interconnected neurons with non-trainable weights. As presented in Fig. 2, we consider a network with K inputs, N internal neurons, and L output neurons. The activations of the input neurons at time step n are U(n) = (u_1(n), ..., u_K(n)), those of the internal units are X(n) = (x_1(n), ..., x_N(n)), and those of the output neurons are Y(n) = (y_1(n), ..., y_L(n)). The input connection weights are collected in an (N x K) matrix W^{in} = (w^{in}_{ij}), the internal connection weights in an (N x N) matrix W = (w_{ij}), the weights of the connections to the output neurons in an (L x (K + N + L)) matrix W^{out} = (w^{out}_{ij}), and the weights of the connections from the output back to the internal units in an (N x L) matrix W^{back} = (w^{back}_{ij}). The activations of the internal and output units are updated according to

X(n+1) = f(W^{in} U(n+1) + W X(n) + W^{back} Y(n)) \qquad (9)

where f = (f_1, ..., f_N) are the sigmoid output functions of the internal neurons. The outputs are computed according to

Y(n+1) = f^{out}(W^{out}(U(n+1), X(n+1), Y(n))) \qquad (10)

where f^{out} = (f^{out}_1, ..., f^{out}_L) are the sigmoid output functions of the output neurons, and (U(n+1), X(n+1), Y(n)) is the concatenation of the input, internal, and previous output activation vectors. The idea of this network is that only the weights of the connections from the internal neurons to the output (W^{out}) have to be adjusted. Here we briefly present an off-line algorithm for the learning procedure:

1) Given an I/O training sequence (U(n), D(n)).
2) Generate randomly the matrices (W^{in}, W, W^{back}), scaling the weight matrix W such that its maximum eigenvalue satisfies |\lambda_{max}| ≤ 1.

Fig. 2. Basic architecture of ESN. Dotted arrows indicate connections that are possible but not required.

3) Drive the network with the training I/O data by computing

X(n+1) = f(W^{in} U(n+1) + W X(n) + W^{back} D(n)) \qquad (11)

4) At each time step, collect the state X(n) as a new row of a state-collecting matrix M, and similarly collect the sigmoid-inverted teacher output tanh^{-1} D(n) as a new row of a teacher-collecting matrix T.
5) Compute the pseudoinverse of M and set

W^{out} = (M^{+} T)^{T} \qquad (12)

where ^{T} denotes the transpose and M^{+} the pseudoinverse of M. For exploitation, the network can be driven by new input sequences using equations (9) and (10). More details can be found in [8][15].
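A compact NumPy sketch of this off-line procedure is given below. It assumes the simplified configuration used later in the paper (tanh units, teacher outputs in (-1, 1), no direct input-to-output or output-to-output connections); the washout length is an added assumption, since the text does not specify how initial transients are handled.

```python
import numpy as np

def collect_states(W_in, W, W_back, U, D, washout=50):
    """Teacher-forced reservoir run, Eq. (11): returns the state matrix M
    and the tanh-inverted teacher matrix T (rows collected after the washout)."""
    x = np.zeros(W.shape[0])
    d_prev = np.zeros(D.shape[1])
    rows, targets = [], []
    for n in range(U.shape[0]):
        x = np.tanh(W_in @ U[n] + W @ x + W_back @ d_prev)
        if n >= washout:                       # discard the initial transient
            rows.append(x.copy())
            targets.append(np.arctanh(D[n]))   # tanh^{-1} of the teacher output
        d_prev = D[n]
    return np.array(rows), np.array(targets)

def train_readout(M, T):
    """Eq. (12): W_out = (M^+ T)^T via the Moore-Penrose pseudoinverse."""
    return (np.linalg.pinv(M) @ T).T
```

In exploitation, under the same simplified configuration, the teacher signal in the feedback term is replaced by the network's own previous output, y(n+1) = tanh(W_out @ x(n+1)), as in equations (9) and (10).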

D. Problem Statement

The problem to be solved is that of developing a motion control approach based on ESNs for the robot of Fig. 1. We assume that the robot pose measurements are noisy and that no knowledge about the dynamic model is available. Furthermore, the distance d and the mass m_c are time varying. Upon completion of the training procedures, we expect the controller to be robust against external disturbances and to track predefined linear and angular velocities reasonably well at the dynamic level. Furthermore, the controller is asked to exhibit characteristics normally ascribed to adaptive controllers: i) detect the variation of the distance d and the mass m_c; ii) adapt itself to the perceived change; and iii) generate the appropriate control signals (torques for the wheels).

III. CONTROL DESIGN

In this section, two ESN control levels are developed. First, a position controller is designed to improve the robustness of the controller (8). Then, an adaptive velocity controller is designed for the model (5), in the presence of the time-varying parameters (d, m_c). Finally, the two control levels are assembled to form the global navigation control loop.

Fig. 3. Training ESN_K as a kinematic controller.

A. ESN Kinematic Controller

1) Training: The ESN kinematic controller (called here ESN_K) is designed to emulate the behavior of the controller (8) and to improve its robustness in the presence of noisy data. Fig. 3 depicts a block diagram of the training procedure. First, a random reference trajectory P_r = [x_r, y_r, \phi_r] was generated. The kinematic model is then controlled by (8) in order to minimize the error between the actual pose of the robot P = [x, y, \phi] and the reference trajectory. At the same time, ESN_K learns the behavior of (8). To provide ESN_K with robustness against disturbances, zero-mean Gaussian noise with \sigma = 0.5 was added to the training data. Training was performed using an ESN with 5 inputs (e_1, e_2, e_3, v_r, w_r), 9 internal neurons, and 2 outputs (the desired linear and angular velocities (v_d, w_d)). There are no back-connections from the output to the DR and no connections from the input directly to the output. The input and internal synaptic connection weights were randomly initialized from a uniform distribution over [-1, +1]. The internal weight matrix W has a sparse connectivity of 20% and is scaled such that its maximum eigenvalue |\lambda_{max}| ≈ 0.1.
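A reservoir with these characteristics (uniform random input weights, roughly 20% connectivity, internal matrix rescaled to a spectral radius of 0.1) could be generated as in the following NumPy sketch; the routine illustrates the stated setup and is not the authors' implementation.

```python
import numpy as np

def make_reservoir(n_in=5, n_res=9, connectivity=0.2, spectral_radius=0.1, seed=0):
    """Random input and internal matrices for ESN_K as described above."""
    rng = np.random.default_rng(seed)
    # Input weights drawn uniformly from [-1, 1]
    W_in = rng.uniform(-1.0, 1.0, size=(n_res, n_in))
    # Internal weights: uniform in [-1, 1], then ~20% of entries kept
    W = rng.uniform(-1.0, 1.0, size=(n_res, n_res))
    W *= (rng.random((n_res, n_res)) < connectivity)
    # Rescale so that the largest absolute eigenvalue equals spectral_radius
    lam_max = np.max(np.abs(np.linalg.eigvals(W)))
    if lam_max > 0:
        W *= spectral_radius / lam_max
    return W_in, W
```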

2) Test: After training, the performances of ESN_K and the feedback controller (8) are compared in tracking a reference trajectory (Fig. 4). During the time interval [0, 1000], the two controllers behave similarly. This result is expected, since ESN_K is trained from (8). During the time interval [1000, 1200], zero-mean Gaussian noise with \sigma = 0.5 is added to the robot pose values. ESN_K showed better performance in recovering from the perturbation than the feedback controller (see Fig. 4(b) and (c)). In other tests (not presented here), when the noise level exceeded \sigma = 0.5 and the constraint (4) of pure rolling and no slipping was severely violated, the controller (8) showed poor performance and lost stability, whereas ESN_K was more robust and could maintain the stability of the closed loop. Here we presented the case where no initial errors exist. However, when the trajectory tracking starts with initial pose errors, ESN_K could not bring the robot to the reference trajectory and lost stability. This problem can be solved by using a path planner, which gives a feasible path from the initial robot pose to the nearest point on the reference trajectory.

Fig. 4. Kinematic control. a) Robot trajectory tracking under the two controllers separately. b) Tracking errors resulting from the feedback controller (K_1 = K_2 = K_3 = 5). c) Tracking errors resulting from the ESN_K control.

Fig. 5. Training ESN_D as a dynamic controller.

B. ESN Dynamic Adaptive Controller

The ESN dynamic controller (ESN_D) is designed only by learning I/O data collected from (5). As mentioned above, meta-learning can provide adaptive behavior with fixed weights. The idea is to train ESN_D for "all" possible operating conditions of the dynamical model, corresponding to the possible combinations of the parameter values (d, m_c). The robot's nominal parameter values are chosen as follows [2]: a = 2, b = 0.75, d = 0.3, r = 0.15, m_c = 30, m_w = 1, I_c = 15.625, I_w = 0.005, I_m = 0.0025. In this paper, we assume that m_c varies in the interval [30, 35] and that the distance d can vary within ±50% around its nominal value. I/O training data are prepared by driving (5) with input sequences sampled from the uniform distribution over [-2, +2]. The training data take the form of 10 "blocks" of 1000 sequences; in each block, (d, m_c) are assigned new random values taken from their variation intervals. To train ESN_D as a velocity controller, we used the same training approach adopted in our previous works [16][9]. Fig. 5 depicts a block diagram of the training approach. ESN_D learns the output teacher signals (torques) that bring the robot from its actual velocity at time (n) to a future velocity at time (n+1). The network architecture was chosen as follows: 4 inputs, representing the actual (v, w) and desired (v_d, w_d) linear and angular velocities, 30 internal neurons, and 2 outputs. There are no back-connections from the output to the DR and no synaptic connections from the input directly to the output. The input and internal synaptic connection weights were randomly initialized from a uniform distribution over [-1, +1]. The internal weight matrix W has a sparse connectivity of 20% and is scaled such that its maximum eigenvalue |\lambda_{max}| ≈ 0.3. In the presence of heterogeneous blocks of training data, each weight update should simultaneously satisfy the conflicting demands imposed by every block. In a feedforward training process, the weight updates tend to favor the most recently presented information. To overcome this recency effect, a special technique called "multi-streaming" is used in [6][7]. In this work, no multi-streaming is needed, since the ESN batch update uses all data at once and does not suffer from the recency effect. On test data, ESN_D showed a mean square error of MSE = 1.352e-4 and could track a new reference velocity in the presence of time-varying values of m_c and d.
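The data preparation just described might look like the following sketch, which reuses the dynamics_matrices helper sketched after Eq. (5); the integration step dt and the simple Euler scheme are added assumptions, since the paper does not state how (5) was simulated.

```python
import numpy as np
# dynamics_matrices(...) is the helper sketched after Eq. (5)

def make_training_blocks(n_blocks=10, block_len=1000, dt=0.01, seed=0):
    """I/O blocks for ESN_D: random torques in [-2, 2] drive model (5),
    with (d, m_c) redrawn for each block from their variation intervals."""
    rng = np.random.default_rng(seed)
    blocks = []
    r, b = 0.15, 0.75                             # nominal wheel radius and half-axle
    for _ in range(n_blocks):
        m_c = rng.uniform(30.0, 35.0)             # mass variation interval
        d = 0.3 * (1.0 + rng.uniform(-0.5, 0.5))  # +/- 50% around nominal d
        tau = rng.uniform(-2.0, 2.0, size=(block_len, 2))
        nu = np.zeros(2)                          # wheel angular velocities
        nu_log = np.zeros((block_len, 2))
        for n in range(block_len):
            phi_dot = r / (2 * b) * (nu[0] - nu[1])   # from the third row of S(q)
            M, V, B = dynamics_matrices(phi_dot, m_c=m_c, d=d)
            nu_dot = np.linalg.solve(M, B @ tau[n] - V @ nu)
            nu = nu + dt * nu_dot                 # simple Euler step (assumed)
            nu_log[n] = nu
        blocks.append((tau, nu_log))
    return blocks
```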

IV. ADAPTIVE MOTION CONTROL

As seen in the previous section, the kinematic and dynamic controllers (ESN_K, ESN_D) are designed and trained separately. They are now ready to cooperate in order to track a predefined reference trajectory. The global control structure can thus be decomposed into two stages. As shown in Fig. 6, an outer loop, in which ESN_K computes the desired velocity necessary to track a reference trajectory P_r, and an inner loop, depending on the robot dynamics, in which ESN_D converts the velocity command provided by the outer loop into a torque control.

Fig. 6. Global control structure of a nonholonomic mobile robot using ESNs.
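One cycle of this two-level loop might be organized as below; esn_k and esn_d are hypothetical objects standing in for the trained networks (each assumed to expose a step() method mapping its input vector to its output vector), so this is only a structural sketch of Fig. 6, not the authors' code.

```python
import numpy as np

def control_step(pose, pose_ref, v_r, w_r, vel, esn_k, esn_d):
    """One cycle of the two-level control loop of Fig. 6 (structural sketch)."""
    x, y, phi = pose
    x_r, y_r, phi_r = pose_ref
    # Outer loop: pose error in the robot frame feeds ESN_K (inputs e1, e2, e3, v_r, w_r)
    e1 =  np.cos(phi) * (x_r - x) + np.sin(phi) * (y_r - y)
    e2 = -np.sin(phi) * (x_r - x) + np.cos(phi) * (y_r - y)
    e3 = phi_r - phi
    v_d, w_d = esn_k.step(np.array([e1, e2, e3, v_r, w_r]))
    # Inner loop: ESN_D maps actual and desired velocities to wheel torques
    v, w = vel
    tau_r, tau_l = esn_d.step(np.array([v, w, v_d, w_d]))
    return (v_d, w_d), (tau_r, tau_l)
```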

In the simulation, the parameters m_c and d are time varying and chosen according to Table I. The reference trajectory P_r = [x_r, y_r, \phi_r]^T is generated using (6), starting from the pose (0, 0, 0). The reference velocities v_r and w_r are chosen as follows:

0 ≤ t ≤ 100:     v_r = 0.25(1 - cos(\pi t/5)),          w_r = -0.5 v_r
100 ≤ t ≤ 200:   v_r = 0.5,                             w_r = -0.25
200 ≤ t ≤ 300:   v_r = 0.5,                             w_r = -0.25 cos(\pi t/5)
300 ≤ t ≤ 700:   v_r = 0.5,                             w_r = 0.25
700 ≤ t ≤ 800:   v_r = 0.125(1 - cos(\pi t/5)) + 0.25,  w_r = -0.25 cos(\pi t/5)
800 ≤ t ≤ 1000:  v_r = 0.25,                            w_r = -0.25

TABLE I
Time              m_c    d
0 ≤ t ≤ 400       30     0.3
400 ≤ t ≤ 500     34.4   0.13
500 ≤ t ≤ 600     30     0.3
600 ≤ t ≤ 700     33.2   0.41
700 ≤ t ≤ 1000    30     0.3
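For reproducibility, the reference profile and the parameter schedule of Table I can be written directly as Python functions; boundary instants shared by two intervals are resolved arbitrarily here, since the paper does not specify how they are handled.

```python
import numpy as np

def reference_velocities(t):
    """Piecewise reference (v_r, w_r) used in the simulation."""
    c = np.cos(np.pi * t / 5)
    if t <= 100:
        v_r = 0.25 * (1 - c)
        return v_r, -0.5 * v_r
    if t <= 200:
        return 0.5, -0.25
    if t <= 300:
        return 0.5, -0.25 * c
    if t <= 700:
        return 0.5, 0.25
    if t <= 800:
        return 0.125 * (1 - c) + 0.25, -0.25 * c
    return 0.25, -0.25

def varying_parameters(t):
    """Time-varying (m_c, d) according to Table I."""
    if 400 <= t < 500:
        return 34.4, 0.13
    if 600 <= t < 700:
        return 33.2, 0.41
    return 30.0, 0.3
```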

Fig. 7 shows the control result. ESN_K tracks the reference trajectory by providing the desired linear and angular velocities.

Because the initial errors are zero, the desired velocities are almost the same as the reference velocities. ESN_D shows an excellent tracking of the desired velocity provided by ESN_K. Furthermore, when the values of m_c and d are suddenly changed, ESN_D locks on to the new dynamic behavior of the robot and provides the appropriate control signals (Fig. 7(c) and (d)), without changing any synaptic weight. Small adaptation periods can be seen after each variation; these are necessary for the network to switch from one family of orbits to another. Surprisingly, the global control loop is barely affected by these switches, and the robot tracks the desired trajectory reasonably well (Fig. 8).

Fig. 7. Kinematic and dynamic control. a) Linear velocity tracking. b) Angular velocity tracking. c) Computed torque for the right wheel. d) Computed torque for the left wheel.

Fig. 8. Trajectory tracking: reference and actual robot position in the X-Y plane.

V. DISCUSSION AND CONCLUSIONS

In most previous works on trajectory tracking of nonholonomic mobile robots, the control results assumed exact knowledge of the robot pose and relied heavily on the assumption of "perfect velocity tracking" at the dynamic level. These conditions are an idealization of real situations. In practice, data are usually distorted by noise and errors, and "perfect" velocity tracking is almost unattainable. To confront these problems, we developed in this paper an adaptive neurocontrol system with two levels, based on a novel RNN called the Echo State Network (ESN). First, a kinematic-level controller (ESN_K) is trained to mimic a feedback controller and to improve its robustness against noise. Indeed, the trained ESN_K showed more robustness against external perturbations. However, excellent control performance is obtained only when the initial errors are close to zero; when the initial errors are far from zero, poor performance was obtained. This problem can be solved by using a path planner, which gives a feasible trajectory to the nearest point on the reference trajectory. Many such approaches are introduced in [1]. To approach the assumption of "perfect velocity tracking", the robot dynamics is taken into consideration by a dynamic-level controller (ESN_D). The role of ESN_D is to convert the desired velocities provided by the first level into a torque control. At this level, no knowledge about the dynamic model was required, since the controller is designed only by learning I/O data collected from the robot. This property is very important in practical cases, where it is almost impossible to know the real parameter values of a robot. After training, ESN_D could generalize very well on incoming data and deliver the appropriate control signals (torques) in order to track the desired velocity. Moreover, ESN_D is asked to deal with parameter variations. In other words, it is asked to exhibit a characteristic normally ascribed to adaptive controllers, whose parameters change in response to an environmental change. "Adaptation" in this work is defined as the ability of ESN_D to recognize a change only through the robot output (velocity) and its own state, without changing any synaptic weight. This capability is a natural consequence of the prior 'meta-learning' used during training. When a change occurred, the state of ESN_D switched from one family of orbits to another, corresponding to the change.

All in all, the training experiments carried out here demonstrate that two control levels based on small and partially interconnected ESNs can be designed to act as a navigation system for mobile robots, in the presence of external perturbations and time-varying parameters. However, substantial investigation of ESNs and more experiments are still needed to ensure that the results we have achieved to date are statistically significant. In this paper, only simulation results are shown. Our future work is to implement the proposed approach on a real mobile robot. The dynamic level without meta-learning has already been implemented successfully on an omnidirectional RoboCup player available at the Robotics Lab of the University of Stuttgart [16]. Despite the "poor" quality of the training data and the performance limitations of the omniwheels, the ESN_D controller could achieve acceptable control results. The implementation of the kinematic-level control (ESN_K) will also be conducted in the next stage.

REFERENCES

[1] I. Kolmanovsky and N. H. McClamroch, "Developments in Nonholonomic Control Problems," IEEE Control Systems, pp. 20-36, December 1995.
[2] T. Fukao, H. Nakagawa, and N. Adachi, "Adaptive Tracking Control of a Nonholonomic Mobile Robot," IEEE Transactions on Robotics and Automation, vol. 16, pp. 609-615, October 2000.
[3] R. Fierro and F. L. Lewis, "Control of a Nonholonomic Mobile Robot Using Neural Networks," IEEE Transactions on Neural Networks, vol. 9, pp. 589-600, July 1998.
[4] M.-S. Kim, J.-H. Shin, S.-G. Hong, and J.-J. Lee, "Designing a Robust Adaptive Dynamic Controller for Nonholonomic Mobile Robots under Modeling Uncertainty and Disturbances," Mechatronics, vol. 13, pp. 507-519, 2003.
[5] Z. Qiuju, J. Shippen, and J. Barrie, "Robust backstepping and neural network control of a low quality nonholonomic mobile robot," International Journal of Machine Tools and Manufacture, vol. 39, pp. 1117-1134, 1999.
[6] L. Feldkamp, G. Puskorius, and P. Moore, "Adaptation from Fixed Weight Dynamic Networks," in Proc. IEEE International Conference on Neural Networks, Washington, pp. 155-160, 1996.
[7] L. Feldkamp, G. Puskorius, and P. Moore, "Fixed Weight Controller for Multiple Systems," in Proc. IEEE International Conference on Neural Networks, Texas, pp. 773-778, June 1997.
[8] M. Oubbati, M. Schanz, and P. Levi, "Meta-learning for Adaptive Identification of Non-linear Dynamical Systems," in Proc. Joint 20th IEEE International Symposium on Intelligent Control and 13th Mediterranean Conference on Control and Automation, Limassol, Cyprus, June 2005.
[9] M. Oubbati, P. Levi, and M. Schanz, "A Fixed-Weight RNN Dynamic Controller for Multiple Mobile Robots," in Proc. 24th IASTED International Conference on Modelling, Identification, and Control, Innsbruck, Austria, pp. 277-282, February 2005.
[10] A. Back, "Multiple and Time-Varying Dynamic Modelling Capabilities of Recurrent Neural Networks," Neural Networks for Signal Processing, vol. 7, pp. 217-235, 1997.
[11] A. D. Back and T. Chen, "Universal Approximation of Multiple Nonlinear Operators by Neural Networks," Neural Computation, vol. 14, no. 11, pp. 2561-2566, 2002.
[12] D. Prokhorov, L. Feldkamp, and I. Tyukin, "Adaptive Behavior with Fixed Weights in Recurrent Neural Networks: An Overview," in Proc. International Joint Conference on Neural Networks, Honolulu, Hawaii, May 2002.
[13] J. Lo, "Adaptive vs. Accommodative Neural Networks for Adaptive System Identification," in Proc. International Joint Conference on Neural Networks, pp. 1279-1284, 2001.
[14] H. Jaeger, "The 'echo state' approach to analysing and training recurrent neural networks," Tech. Rep. 148, AIS Fraunhofer, St. Augustin, Germany, 2001.
[15] H. Jaeger, "Tutorial on training recurrent neural networks, covering BPPT, RTRL, EKF and the echo state network approach," Tech. Rep. 159, AIS Fraunhofer, St. Augustin, Germany, 2002.
[16] M. Oubbati, P. Levi, and M. Schanz, "Recurrent Neural Network for Wheeled Mobile Robot Control," WSEAS Transactions on Systems, vol. 3, pp. 2460-2467, August 2004.