Epipole-based visual servoing for nonholonomic mobile robots

Gian Luca Mariottini, Domenico Prattichizzo
Dipartimento di Ingegneria dell'Informazione, Università di Siena, Via Roma 56, 53100 Siena, Italy
Email: {gmariottini,prattichizzo}@dii.unisi.it

Giuseppe Oriolo
Dipartimento di Informatica e Sistemistica, Università di Roma "La Sapienza", Via Eudossiana 18, 00184 Roma, Italy
Email: [email protected]

Abstract— A new image-based visual servoing algorithm is presented for nonholonomic mobile robots. The algorithm, based on epipolar geometry, consists of three independent and sequential steps making use of both the estimated epipoles and the image features. In particular, due to the nonlinear dynamics of the camera-robot system, an input-output feedback linearizing control law is used during the second step. Simulation results are presented to validate the proposed visual servoing technique.
I. INTRODUCTION

In robotic visual servoing, both the control objective and the control law are directly specified in the image domain [11], [15]. Designing the feedback law at the sensor level improves the controller performance, especially when uncertainties and disturbances affect the model of the robot and/or the calibration of the camera. In this paper, we propose a visual servoing method exploiting the epipolar geometry between two views [10]. Our work builds upon previous contributions [19], [13] and extends epipole-based visual servoing to mobile robots with nonholonomic kinematics. The use of epipolar geometry in visual servoing makes it possible to build a robust feedback controller in the image domain independently of the type of image features involved, i.e., no matter whether they are feature points or contours.

In the absence of workspace obstacles, the basic motion tasks assigned to a wheeled mobile robot may be formulated as (1) following a given trajectory and (2) moving between two robot postures. From a control viewpoint, the peculiar nature of nonholonomic kinematics makes the first problem easier than the second; in fact, it is known [3] that feedback stabilization at a given posture cannot be achieved via smooth time-invariant control. This indicates that the problem is truly nonlinear; linear control is ineffective, and innovative design techniques are needed. The trajectory tracking problem was globally solved in [22] by using a nonlinear feedback law and in [7] through the use of dynamic feedback linearization. As for posture stabilization, both discontinuous and time-varying feedback controllers have been proposed. Smooth time-varying stabilization was pioneered by Samson [21], while discontinuous control was used in various forms, e.g., see [1]; a review of the main results in the area is given in [8].
While the above controllers assume that the mobile robot state is available, the introduction of visual servoing techniques has naturally led to consider the case in which the control problem is formulated at the output level. In [2], feature points are used to estimate the fundamental matrix and the relative orientation and translation between different placements of the camera. In [20], epipolar geometry is used to steer the feature points to a given configuration. If the scene does not have any noticeable texture and only smooth surfaces are present, object profiles are the only information available to estimate the structure of the surface and the motion of the camera. Cipolla and co-workers [4], [18] use apparent contours and profiles to recover camera motion. In [5], [19] the authors propose an algorithm for mobile robot planar navigation exploiting epipolar geometry and some special symmetry conditions of the epipoles.

In this work the epipole-based visual servoing strategy is extended to mobile robots with nonholonomic constraints. The algorithm exploits the kinematics of the epipoles, and the nonlinear differential kinematics of the mobile robot is approached in a feedback linearization context. It is well known that, if the number of generalized coordinates equals the number of input commands, one can try to use a static feedback transformation to transform a nonlinear model exactly into a linear system; necessary and sufficient conditions for the solvability of this problem are given in [12]. On the linear side of the problem, it is then straightforward to complete the synthesis of a stabilizing controller; this is, for example, the principle of the computed torque control method for articulated manipulators. In our case, static feedback linearization of the camera-robot system w.r.t. the epipoles is possible. The stability of the feedback linearization controller in the presence of unknown linearizing parameters is also discussed.

II. MODELING AND EPIPOLAR GEOMETRY
Consider the unicycle model in Fig. 1 and let q = [x, y, θ]^T be its configuration vector with respect to the fixed world frame {Oxy}. Assume that a pinhole camera is fixed to the unicycle, which moves on a plane, and let the optical axis z_c of the camera be parallel to the motion plane. Let the control inputs be the translational and angular velocities v and ω, respectively.
Fig. 1. The camera-robot with unicycle kinematics.
The nonholonomic differential kinematics is simply derived as

\[
\dot{x} = v \sin\theta, \qquad
\dot{y} = v \cos\theta, \qquad
\dot{\theta} = \omega
\tag{1}
\]

where θ denotes the angle between the z_c axis and the y-axis, and ψ denotes the angle between the segment O_c O and the x-axis (Fig. 1).
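As an illustrative aid (not part of the original paper), here is a minimal Python sketch of model (1) under Euler integration; the function name, inputs, and step size are hypothetical:

```python
import numpy as np

def unicycle_step(q, v, omega, dt):
    """One Euler step of the unicycle kinematics (1).

    q = [x, y, theta]. Note the paper's convention: theta is measured
    from the y-axis, hence x_dot = v*sin(theta) and y_dot = v*cos(theta).
    """
    x, y, theta = q
    return np.array([x + v * np.sin(theta) * dt,
                     y + v * np.cos(theta) * dt,
                     theta + omega * dt])

# Constant inputs drive the robot along a circular arc.
q = np.array([0.0, 0.0, 0.0])
for _ in range(200):
    q = unicycle_step(q, v=0.5, omega=0.1, dt=0.05)
```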
The epipolar geometry between two views provides the mathematical framework for the proposed visual servoing method. Its main ideas are briefly recalled here for the reader's convenience [10], [20], [9]. Refer to the camera frame {O_c x_c y_c z_c} and consider the full perspective model. Let K be the intrinsic camera parameter matrix

\[
K = \begin{bmatrix}
k_u f & \gamma & u_0 \\
0 & k_v f & v_0 \\
0 & 0 & 1
\end{bmatrix}
\tag{2}
\]
where (u_0, v_0) are the coordinates of the intersection between the image plane and the optical axis (in pixels), k_u and k_v are the number of pixels per unit distance in image coordinates, f is the focal length, and γ accounts for the non-orthogonality of the CCD image axes. The relationship between a 3D point in homogeneous coordinates X = [X, Y, Z, 1]^T, expressed in the world frame, and its projection u_p = [u_p, v_p, 1]^T in the image plane of the camera in position C_a (Fig. 2) is

\[
u_p = K \, [R \,|\, t] \, X
\]

where (R, t) are the extrinsic parameters (the rotation and the translation between the world and the camera frames). The main idea at the base of the proposed visual servoing method is to build the epipolar geometry between the desired view, grabbed when the camera-robot is in the desired configuration, and the actual view, grabbed when the camera-robot is in the actual configuration. Henceforth these two robot-camera configurations will be referred to as the actual configuration (C_a x_a y_a z_a) and the desired (or target) configuration (C_d x_d y_d z_d), as shown in Fig. 2. The segment C_a C_d is referred to as the baseline; its intersections with the two image planes define the actual and desired epipoles e_a = [e_{ax}, e_{ay}, 1]^T and e_d = [e_{dx}, e_{dy}, 1]^T. Any plane containing the baseline is called an epipolar plane, and the intersection of an epipolar plane with the image planes generates the epipolar lines. The epipole positions in the image plane are the main actors of the proposed algorithm.
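For illustration, a short sketch (ours, not from the paper) of the projection u_p = K[R|t]X; the numerical values of K, R, and t are placeholders:

```python
import numpy as np

def project(K, R, t, X_w):
    """Project a 3D point X_w (world frame) as u_p = K [R|t] X."""
    X_h = np.append(X_w, 1.0)                  # homogeneous 3D point
    P = K @ np.hstack((R, t.reshape(3, 1)))    # 3x4 projection matrix
    u = P @ X_h
    return u / u[2]                            # normalize to [u_p, v_p, 1]

K = np.array([[700.0,   0.0, 320.0],   # k_u*f, gamma = 0, u_0
              [  0.0, 700.0, 240.0],   # k_v*f, v_0
              [  0.0,   0.0,   1.0]])
print(project(K, np.eye(3), np.zeros(3), np.array([0.1, 0.2, 2.0])))
```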
Fig. 2. Nonholonomic mobile robot with a fixed on-board camera, in the actual (C_a) and desired (C_d) configurations, with optical axes z_a and z_d.
The epipoles are defined, in homogeneous coordinates, as the right and left null spaces of the so-called fundamental matrix F:

\[
F e_a = 0, \qquad F^T e_d = 0.
\]

The fundamental matrix and the epipolar geometry can be retrieved with well-known algorithms [10], [14]; in particular, F can be estimated from at least eight corresponding features. The fundamental matrix encodes most of the information on the geometry (R, t) of the two views; in fact, it is defined as

\[
F = K^{-T} [t]_{\times} R K^{-1}.
\]

Note that, since the motion of the robot is constrained to a plane parallel to the optical axis, only the x-coordinates of the epipoles change during the motion of the camera-robot. In what follows we describe a special property of the epipolar geometry, referred to as the autoepipolar condition [13], obtained when a pure translation occurs between the actual and desired camera configurations: when both cameras have the same orientation, the two epipoles coincide and so do the corresponding epipolar lines. In this case the epipole values can be estimated by overlapping the two image planes and computing the intersection of the lines through corresponding points; these lines will be referred to as bitangents.
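As an illustration of the null-space definition above, a sketch (a standard SVD recipe, not code from the paper) for extracting the epipoles from an estimated F:

```python
import numpy as np

def epipoles_from_F(F):
    """Epipoles as null spaces of F: F e_a = 0 and F^T e_d = 0."""
    U, _, Vt = np.linalg.svd(F)
    e_a = Vt[-1]                 # right null space of F (actual epipole)
    e_d = U[:, -1]               # left null space of F (desired epipole)
    return e_a / e_a[2], e_d / e_d[2]   # normalize homogeneous coordinates
```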
III. VISUAL SERVOING BASED ON EPIPOLAR GEOMETRY: THE THREE-STEP ALGORITHM
Refer to the scenario reported in Fig. 2. The nonholonomic mobile camera-robot is in the actual position and the goal consists in steering the robot towards the target (or desired) position. It is important to note that no information about robot displacements or velocities is available apart from the actual and the desired images. In this section a three-step visual servoing control law based on the epipolar geometry is presented. The algorithm is model-free because no a priori knowledge of the 3D structure of the observed object is required [16]. During its motion the robot acquires images of the scene, so that at each time instant a pair of images is available (the actual and the desired one). The corresponding features in these two images are used to estimate the fundamental matrix F and then the actual and desired epipoles. The proposed algorithm consists of three steps, sketched in Fig. 3 and briefly presented in the following:
Fig. 3. The proposed three-step visual servoing algorithm steers the nonholonomic camera-robot from the initial configuration C_a to the desired one C_d: (a) 1st step (getting the starting configuration); (b) 2nd step; (c) 3rd step.
[1st Step] Starting from C_a, the robot performs a rotation in order to reach the autoepipolar condition [13], in which e_{ax}(t_1) = e_{dx}(t_1), as shown in Fig. 3a. This step does not require any epipolar geometry estimation and supplies a good initial value for the epipoles using only bitangent lines [6], [13].

[2nd Step] The cart moves from C_a(t_1) to an intermediate configuration C_i(t_2) such that only a translation along the optical axis occurs between C_d and C_i(t_2). The trajectory from C_a to C_i(t_2) satisfies the nonholonomic constraints. This step is also based on the epipole positions.

[3rd Step] The robot-camera performs a translation from C_i(t_2) to C_d (the desired position) and stops when the actual and desired image features overlap.

This work does not focus on the problem of keeping the features in the camera field of view; it is aimed at discussing the nonholonomic trajectory based on the epipole kinematics. Therefore, for the sake of simplicity, it is assumed that the camera has an infinite field of view. In what follows the three steps of the algorithm are discussed.
A. Step 1: reaching the autoepipolar condition

In the first step the mobile robot performs a pure rotation about the y_c axis until the autoepipolar condition [13] is attained (Fig. 3a). The algorithm initially searches for the bitangent lines, obtained by overlapping the two images and computing the line passing through each pair of corresponding points. It is possible to prove that when all the bitangents intersect at a unique point, then both camera-robots have the same orientation [13].
Fig. 4. Step 1: (a) the actual camera-robot is in a generic position and orientation; (b) the bitangents do not intersect at the same point (different orientations between the actual and desired cameras); (c) the actual camera has the same orientation as the desired one; (d) all the bitangents intersect at the same point (same orientation).
In Fig. 4a the initial condition is reported: the actual camera-robot is rotated with respect to the desired one. In this case the bitangents do not intersect at the same point (Fig. 4b). The robot then starts rotating about the y_c axis, and the common orientation is attained (Fig. 4c) when all the bitangents intersect at the same point (Fig. 4d). This is the autoepipolar property. The first step provides a robust initial estimate of both epipoles e_a(t_1) and e_d(t_1), and it guarantees that no singularities occur in the second step. Work is in progress to fuse this step with the next one into a single motion strategy.
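A sketch of the common-intersection test just described, assuming point features: the bitangent through a corresponding pair is the cross product of their homogeneous coordinates, and a single common intersection is checked in a least-squares sense (function name and residual measure are ours):

```python
import numpy as np

def autoepipolar_residual(pts_actual, pts_desired):
    """Overlap the two images, build the line through each pair of
    corresponding points (l = u_a x u_d), and measure how far the
    lines are from sharing a single intersection point. A residual
    near zero indicates the autoepipolar condition."""
    L = np.array([np.cross(np.append(ua, 1.0), np.append(ud, 1.0))
                  for ua, ud in zip(pts_actual, pts_desired)])
    _, S, Vt = np.linalg.svd(L)          # minimize |L p| over |p| = 1
    p = Vt[-1] / Vt[-1][2]               # candidate common intersection
    return S[-1] / S[0], p               # normalized residual, point
```

During step 1 the robot would rotate until the residual drops below a tolerance; the intersection point p then provides the initial epipole estimate.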
B. Step 2: nonholonomic trajectory
This step deals with the nonholonomic constraints of the robot kinematics and is the main contribution of this work. Once the autoepipolar condition is attained, the robot starts moving towards the optical axis of the desired camera position (Fig. 3b). The stopping condition is based on the epipole x-coordinates: the condition e_{ax} = e_{dx} = 0 guarantees that the actual camera center is positioned on the z_d-axis and that only a translation occurs with respect to the desired camera position (Fig. 3b). The goal of the second step is to align the camera-robot with the z_d-axis without any knowledge of the robot state; the only available information comes from the current and desired epipole x-coordinates. The robot differential kinematics with the measured outputs is

\[
\dot{x} = v \sin\theta(t), \qquad
\dot{y} = v \cos\theta(t), \qquad
\dot{\theta} = \omega, \qquad
y_1 = e_{ax}(t), \qquad
y_2 = e_{dx}(t)
\tag{3}
\]

In this paper the input-output feedback linearization method [12], [8] is used to design a controller steering the robot from C_a to C_i(t_2) (Fig. 3b). Note that when the robot is in configuration C_i(t_2), both outputs y_1 and y_2 are zero; in other terms, the algorithm steers the outputs to zero by tracking suitable desired functions y_1^{des}(t) and y_2^{des}(t). Consider the camera-robot in the actual and desired positions and place the world frame at the desired camera frame (Fig. 5). Moreover, let d(t) be the distance between the actual and desired camera centers C_a and C_d. The first step in designing the controller consists in differentiating the outputs until they are linearly related to the inputs. Writing the epipole x-coordinates as functions of the state [x(t), y(t), θ(t)]^T, for planar motions we get
\[
e_{dx}(t) = f\,\frac{x(t)}{y(t)}, \qquad
e_{ax}(t) = f\,\frac{x(t)\cos\theta(t) - y(t)\sin\theta(t)}{y(t)\cos\theta(t) + x(t)\sin\theta(t)}
\tag{4}
\]
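A direct transcription of (4) as a helper function (a sketch, assuming f is known and y(t) ≠ 0; names are ours):

```python
import numpy as np

def epipole_x_coords(x, y, theta, f):
    """Epipole x-coordinates (4), world frame at the desired camera."""
    e_dx = f * x / y
    e_ax = f * (x * np.cos(theta) - y * np.sin(theta)) \
             / (y * np.cos(theta) + x * np.sin(theta))
    return e_ax, e_dx
```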
The following theorem discusses input-output feedback linearization for the nonholonomic camera-robot with respect to the epipoles.

Theorem 1 (Nonholonomic Epipole-Based Control Law): The relative degree of system (3) is 2 and the nonholonomic differential kinematics is input-output feedback linearizable by the control law

\[
u = \begin{bmatrix} v \\ \omega \end{bmatrix}
  = E^{-1} \begin{bmatrix} \nu_1 \\ \nu_2 \end{bmatrix}
\tag{5}
\]

with decoupling matrix

\[
E(e_{ax}, e_{dx}, d(t)) =
\begin{bmatrix}
-\dfrac{e_{ax}^2(t)}{d(t)\, f \cos(\psi(t)+\theta(t))} & -\dfrac{e_{ax}^2(t)}{f \cos^2(\theta(t)+\psi(t))} \\[2mm]
-\dfrac{f \cos(\theta(t)+\psi(t))}{d(t)\, \sin^2(\psi(t))} & 0
\end{bmatrix}
\tag{6}
\]

being ψ = arctan(e_{dx}/f) and θ = ψ − arctan(e_{ax}/f) (see Fig. 5), and

\[
\nu_1(t) = \dot{y}_1^{des}(t) - k_1 \big(y_1(t) - y_1^{des}(t)\big)
\tag{7}
\]
\[
\nu_2(t) = \dot{y}_2^{des}(t) - k_2 \big(y_2(t) - y_2^{des}(t)\big)
\tag{8}
\]

being k_1 and k_2 the controller gains.
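A sketch of the control law (5)-(8) in Python (our transcription; the default gains, and the use of a constant estimate d_hat in place of d(t) as anticipated by Remark 1 and Section IV, are illustrative choices):

```python
import numpy as np

def step2_control(e_ax, e_dx, f, d_hat,
                  y1_des, dy1_des, y2_des, dy2_des, k1=1.0, k2=1.0):
    """Input-output linearizing law u = E^{-1} [nu_1, nu_2]^T.

    Angles are recovered from the measured epipoles as in Theorem 1:
    psi = arctan(e_dx / f), theta = psi - arctan(e_ax / f).
    """
    psi = np.arctan(e_dx / f)
    theta = psi - np.arctan(e_ax / f)
    c = np.cos(theta + psi)
    E = np.array([[-e_ax**2 / (d_hat * f * c), -e_ax**2 / (f * c**2)],
                  [-f * c / (d_hat * np.sin(psi)**2), 0.0]])
    nu = np.array([dy1_des - k1 * (e_ax - y1_des),
                   dy2_des - k2 * (e_dx - y2_des)])
    v, omega = np.linalg.solve(E, nu)    # u = E^{-1} nu
    return v, omega
```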
Proof: The relative degree of the system dynamics is obtained by differentiating the output functions y_1(t) and y_2(t). Let us start by differentiating the second output:

\[
\frac{\partial e_{dx}}{\partial t}
= \frac{\partial}{\partial t}\left(f\,\frac{x(t)}{y(t)}\right)
= \dot{x}(t)\,\frac{f}{y(t)} - \dot{y}(t)\,\frac{f\,x(t)}{y^2(t)}
\tag{9}
\]

Substituting (3) in (9) and writing x and y as functions of θ and ψ (Fig. 5), one gets

\[
\dot{y}_2(t) = -v\,\frac{f \cos(\theta(t)+\psi(t))}{d(t)\, \sin^2(\psi(t))},
\tag{10}
\]

so the system has relative degree 1 with respect to y_2. Similarly, it can be shown that the relative degree with respect to y_1 is also equal to 1; in fact,

\[
\dot{y}_1(t) = \frac{\partial e_{ax}}{\partial t}
= -v\,\frac{e_{ax}^2}{f\, d(t)\cos(\theta+\psi)} - \omega\,\frac{e_{ax}^2}{f \cos^2(\theta+\psi)}.
\tag{11}
\]
Fig. 5. Setup for the second-step motion algorithm: the world frame is placed at the desired camera frame, and d(t) is the distance between C_a and C_d.
The expression of the decoupling matrix E is directly obtained from eqs. (11) and (10). ∎

Remark 1: It is worth noting that the decoupling matrix E in (6) can be evaluated from the measured outputs and from the unknown distance d(t). It will be shown in Section IV that the feedback linearization works even if the unknown distance d(t) is substituted with a constant parameter d̂.

The input-output linearizing control law in (5) requires the invertibility of the decoupling matrix E. From

\[
\det(E) = \frac{e_{ax}^2(t)}{d(t)\,\sin^2(\psi(t))\,\cos(\psi(t)+\theta(t))}
\]
it follows that the matrix E is singular when (a) d → +∞ or (b) e_{ax}(t) = 0. According to Remark 1, the occurrence of condition (a) need not be discussed. The occurrence of condition (b) is avoided by choosing suitable desired epipole trajectories y_1^{des}(t) and y_2^{des}(t); in particular, it suffices to guarantee that the robot aligns with the baseline (e_{ax} = 0) only at the end of the 2nd step. This implies that the desired epipole trajectory has to reach zero before the actual epipole x-coordinate does.

To complete the analysis of the input-output feedback linearization, the study of the zero dynamics is required. The zero dynamics are defined as the internal dynamics of the system compatible with the output being identically zero [12]. The set of states where the measured output vector [y_1, y_2]^T and its derivative are zero is

\[
\left\{ [x, y, \theta]^T = [0, y, 0]^T,\; y \in \mathbb{R} \right\}.
\tag{12}
\]

Note that the robot will never reach the zero dynamics during the nonholonomic trajectory if the desired epipole becomes zero before the actual one does.

As shown in Theorem 1, the proposed linearizing input-output controller (5) strictly depends upon the desired epipole trajectories y_1^{des}(t) = e_{ax}^{des}(t) and y_2^{des}(t) = e_{dx}^{des}(t). In this work a parabolic descending function has been chosen to guarantee differentiability and finite-time convergence to zero:

\[
e_{ax}^{des}(t) =
\begin{cases}
\dfrac{e_{ax}(0)}{T_a^2}\,t^2 - 2\,\dfrac{e_{ax}(0)}{T_a}\,t + e_{ax}(0) & \text{if } 0 < t < T_a \\
0 & \text{if } t \ge T_a
\end{cases}
\tag{13}
\]

\[
e_{dx}^{des}(t) =
\begin{cases}
\dfrac{e_{dx}(0)}{T_d^2}\,t^2 - 2\,\dfrac{e_{dx}(0)}{T_d}\,t + e_{dx}(0) & \text{if } 0 < t < T_d \\
0 & \text{if } t \ge T_d
\end{cases}
\tag{14}
\]

The corresponding behavior of the controlled robot depends not only upon the initial values of the actual and desired epipoles but also upon T_a and T_d. Note that, according to the previous discussion, the convergence of the second step is guaranteed only when the time of convergence to zero for the actual epipole, T_a, is greater than that for the desired one, T_d. In Fig. 6 simulations are reported for different values of T_a and T_d.
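The references (13)-(14) factor as e(0)(t/T − 1)²; a sketch of this compact form follows (function name ours). The slope is zero at t = T, which makes the reference differentiable at the junction:

```python
def parabolic_reference(e0, T, t):
    """Desired epipole trajectory (13)-(14) and its time derivative."""
    if t >= T:
        return 0.0, 0.0
    e = e0 * (t / T - 1.0) ** 2          # equals (13)/(14) expanded
    de = 2.0 * e0 * (t / T - 1.0) / T
    return e, de

# Convergence of the second step requires Ta > Td: the desired epipole
# reference must reach zero before the actual one does.
ref_actual = parabolic_reference(e0=-40.0, T=30.0, t=10.0)   # Ta = 30
ref_desired = parabolic_reference(e0=-30.0, T=20.0, t=10.0)  # Td = 20
```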
Fig. 6. Robot trajectories for different values of T_a and T_d.
C. Step 3: translation

The third step begins when both epipole x-coordinates are zero (Fig. 3c). This step translates the actual robot-camera along the optical axis until the desired camera configuration is reached. Note that during this motion the epipoles do not change, so a different steering strategy must be used. The algorithm used in this step is feature-based, and different control variables can be used, such as the area of the features' convex hull or the features' centroid.
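A possible stopping test for this step, sketched under our own choices (the paper only names convex-hull areas and centroids as candidate control variables; the tolerance and function names are hypothetical):

```python
import numpy as np
from scipy.spatial import ConvexHull

def step3_done(feat_actual, feat_desired, tol=1.0):
    """True when the actual and desired feature sets overlap, judged
    by convex-hull area and centroid distance (both in pixels).
    feat_* are (N, 2) arrays of image points."""
    area_err = abs(ConvexHull(feat_actual).volume
                   - ConvexHull(feat_desired).volume)  # 2D hull: area
    centroid_err = np.linalg.norm(feat_actual.mean(axis=0)
                                  - feat_desired.mean(axis=0))
    return area_err < tol and centroid_err < tol
```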
IV. STABILITY ANALYSIS

The stability analysis is crucial in the second step. Special attention is devoted to the analysis of stability with respect to the unknown distance d(t) between the desired and actual camera positions.

The stability analysis in the case of known distance d(t) is trivial. In fact, in this case, the decoupling matrix E is known and the dynamics of the epipole error ζ = [y_1 − y_1^{des}, y_2 − y_2^{des}]^T becomes linear:

\[
\dot{\zeta}(t) = -G\zeta, \qquad
G = \begin{bmatrix} k_1 & 0 \\ 0 & k_2 \end{bmatrix}.
\]

However, the case of known distance d(t) between the actual and desired camera-robot positions is unrealistic in a visual servoing context. In what follows, the stability of the proposed visual servoing is investigated for a constant estimate d̂ of the distance d(t). The control input becomes

\[
[v, \omega]^T = \hat{E}^{-1}(e_{ax}, e_{dx}, \hat{d})\,[\nu_1, \nu_2]^T
\]

and, as shown in [16], the necessary and sufficient condition for the global stability of the closed-loop dynamics ζ̇ = −Qζ is that the matrix

\[
Q = E\,\hat{E}^{-1}\,G
\]

has all eigenvalues with positive real part.
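A quick numerical sanity check of this condition (a sketch under our reconstruction of E; the numerical values are arbitrary):

```python
import numpy as np

def Q_matrix(e_ax, e_dx, f, d, d_hat, k1, k2):
    """Q = E(d) * E(d_hat)^{-1} * G for the decoupling matrix (6)."""
    def E(dist):
        psi = np.arctan(e_dx / f)
        theta = psi - np.arctan(e_ax / f)
        c = np.cos(theta + psi)
        return np.array([[-e_ax**2 / (dist * f * c), -e_ax**2 / (f * c**2)],
                         [-f * c / (dist * np.sin(psi)**2), 0.0]])
    return E(d) @ np.linalg.inv(E(d_hat)) @ np.diag([k1, k2])

# Even a crude estimate (d = 5, d_hat = 1) keeps the eigenvalues positive:
Q = Q_matrix(e_ax=-40.0, e_dx=-30.0, f=700.0, d=5.0, d_hat=1.0, k1=1.0, k2=1.0)
print(np.linalg.eigvals(Q))   # approximately [1.0, 0.2]
```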
In this case Q can be written in closed form as

\[
Q = \begin{bmatrix}
k_1 & k_2\left(\dfrac{\hat{d}}{d} - 1\right)\dfrac{e_{ax}^2 \sin^2(\psi)}{f^2 \cos^2(\theta+\psi)} \\[2mm]
0 & k_2\,\dfrac{\hat{d}}{d}
\end{bmatrix}
\tag{15}
\]
and the two eigenvalues of Q are positive if and only if d̂ > 0. In other terms, the stability of the second step is guaranteed for any positive constant d̂ substituted for the unknown distance d(t). This makes it possible to compute the decoupling matrix using only the measured x-coordinates of the epipoles.

V. SIMULATION RESULTS
Simulations have been performed using Matlab and the Epipolar Geometry Toolbox [17] developed at the University of Siena. The mobile robot is initially in an arbitrary position and orientation, so the autoepipolar condition is not satisfied (Fig. 4b). A steering input, as described in [13], is applied until all the bitangents meet at the same point (Fig. 4d). Then the nonholonomic control law (5) is applied (Fig. 7) and both epipoles converge to zero (Fig. 8). In Fig. 9 simulation results are reported showing the convergence of the camera-robot to the desired optical axis z_d for different starting configurations relative to the desired position.

Fig. 7. The second step trajectory: the robot in the actual position moves towards the optical axis z_d of the camera in the desired configuration.

Fig. 8. The second step trajectory: both epipole trajectories converge to zero.

Fig. 9. Convergence of the nonholonomic robot trajectory to z_d for different starting positions.

VI. CONCLUSIONS

In this work a novel image-based visual servoing strategy for mobile robots with nonholonomic constraints has been presented. It is mainly based on the epipolar geometry estimated from corresponding feature points of the observed scene. The algorithm is divided into three independent and sequential steps. Special attention has been devoted to the second step, where an input-output linearizing controller has been presented to cope with the nonholonomic kinematics of the camera-robot system. The stability of the proposed visual servoing strategy has been studied and is guaranteed for any positive constant estimate of the distance to the robot target position. Simulations have been performed to validate the proposed algorithm, and work is in progress to validate it experimentally. Future work will focus on strategies to keep the features in the camera field of view during the robot motion and on extending the control law to the case of textureless surfaces.

ACKNOWLEDGMENTS

This work was supported by ASI under project "TEMA: Team-based Exploration by Mobile Agents", contract No. I/R/124/02.

REFERENCES
[1] M. Aicardi, G. Casalino, A. Bicchi, and A. Balestrino. "Closed loop steering of unicycle-like vehicles via Lyapunov techniques". IEEE Robotics & Automation Mag., 2:27–35, 1995.
[2] R. Basri, E. Rivlin, and I. Shimshoni. "Visual homing: Surfing on the epipoles". In International Conference on Computer Vision, pages 863–869, 1998.
[3] G. Campion, B. d'Andrea Novel, and G. Bastin. "Modeling and state feedback control of nonholonomic mechanical systems". In 30th IEEE Conf. on Decision and Control, pages 1184–1189, 1991.
[4] G. Chesi, E. Malis, and R. Cipolla. "Automatic segmentation and matching of planar contours for visual servoing". In International Conference on Robotics and Automation, volume 3, pages 2753–2758, 2000.
[5] G. Chesi, J. Piazzi, D. Prattichizzo, and A. Vicino. "Epipole-based visual servoing using profiles". In IFAC World Congress, Barcelona, Spain, 2002.
[6] G. Cross, A. W. Fitzgibbon, and A. Zisserman. "Parallax geometry of smooth surfaces in multiple views". In 7th International Conference on Computer Vision, Kerkyra, Greece, pages 323–329, 1999.
[7] B. d'Andrea Novel, G. Bastin, and G. Campion. "Control of nonholonomic wheeled mobile robots by state feedback linearization". Int. J. of Robotics Research, 14:543–559, 1995.
[8] A. De Luca, G. Oriolo, and C. Samson. "Feedback control of a nonholonomic car-like robot". In J.-P. Laumond, editor, Robot Motion Planning and Control, volume 229 of LNCIS, pages 171–253. Springer-Verlag, 1998.
[9] O. Faugeras. Three-Dimensional Computer Vision: A Geometric Viewpoint. MIT Press, 1993.
[10] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, September 2000.
[11] S. A. Hutchinson, G. D. Hager, and P. I. Corke. "A tutorial on visual servo control". IEEE Trans. Robotics and Automation, 12(5):651–670, 1996.
[12] A. Isidori. Nonlinear Control Systems. Springer-Verlag, London, UK, 3rd edition, 1995.
[13] J. Piazzi and D. Prattichizzo. "An auto-epipolar strategy for mobile robot visual servoing". In IROS 2003 Conference on Intelligent Robots and Systems, Las Vegas, USA, 2003.
[14] Q. T. Luong and O. D. Faugeras. "The fundamental matrix: theory, algorithms and stability analysis". Int. Journal of Computer Vision, 17(1):43–76, 1996.
[15] Y. Ma, J. Kosecka, and S. S. Sastry. "Vision guided navigation of nonholonomic mobile robot". IEEE Trans. Rob. Autom., 15(3):521–536, 1999.
[16] E. Malis and F. Chaumette. "Theoretical improvements in the stability analysis of a new class of model-free visual servoing methods". IEEE Transactions on Robotics and Automation, 18(2), Apr. 2002.
[17] G. Mariottini and D. Prattichizzo. Epipolar Geometry Toolbox for Matlab. University of Siena, www.dii.unisi.it/ rslab/vision, 2003.
[18] P. R. S. Mendonça and R. Cipolla. "Estimation of epipolar geometry from apparent contours: affine and circular motion cases". In Computer Vision and Pattern Recognition, volume 1, page 1009, June 1999.
[19] J. Piazzi, D. Prattichizzo, and A. Vicino. "Visual servoing along epipoles". In Control Problems in Robotics, volume 4, pages 215–232. STAR, Springer Tracts in Advanced Robotics, Berlin Heidelberg, 2003.
[20] P. Rives. "Visual servoing based on epipolar geometry". In International Conference on Intelligent Robots and Systems, volume 1, pages 602–607, 2000.
[21] C. Samson. "Time-varying feedback stabilization of car-like wheeled mobile robots". Int. J. of Robotics Research, 12:55–64, 1993.
[22] C. Samson and K. Ait-Abderrahim. "Feedback control of a nonholonomic wheeled cart in Cartesian space". In IEEE Int. Conf. on Robotics and Automation, pages 1136–1141, 1991.