Optimal Motion Planning in the Image Plane for Mobile Robots

Hong Zhang
Jim Ostrowski

General Robotics and Active Sensory Perception (GRASP) Laboratory
University of Pennsylvania, 3401 Walnut Street, Philadelphia, PA 19104-6228
E-mail: {hozhang, [email protected]}

Abstract
This paper presents a novel approach to image-based visual servoing by making motion plans directly in the image plane. By skipping the step of transferring image features back to robot pose, we are able to use the image features for a direct and fast motion planning solution. Standard visual servoing techniques can then be applied to track the given trajectory. In this paper, we apply the idea to 2-D cart-like and car-like mobile robot systems, and then extend it to perform 3-D (kinematic) motion planning. Within this motion planning paradigm, we also discuss mechanisms for taking advantage of surplus features and for incorporating image-based constraints, such as requiring that the features remain in the field of view.
1 Introduction
Motion planning for robotics has long been a heavily studied field [9]. A variety of techniques exist, each of which possesses certain advantages and disadvantages related to the domain of application. Some research has focused on the use of methods such as potential fields [8] (or their extensions [15]) or randomized approaches [7], while others have taken different approaches, such as differential geometry [1], graph theory [3], and game theory [10, 22]. In order to interact with the environment, especially for a mobile robot exploring an unknown environment, most robots are now equipped with different kinds of sensors, such as cameras [21], sonar sensors [19], and tactile sensors [12]. Thanks in part to dramatically increasing computational power and refined electronic engineering technology, the utilization of vision sensors has never been greater. Today, many robots are equipped with one or two cameras, for example, the Nomad 2000, Sony's AIBO dog-like robot, and unmanned blimps [21]. Thus, it is not a surprise that vision-based motion planning [2, 5] is a topic that is currently inspiring new research.
In the traditional study of motion planning, we generally assume that we know (or at least partially know) the position of the target and/or the robot. However, the direct outputs of vision sensors are generally not position information, but image features. The image features are themselves distorted due to projection, and are generally limited in where they can be detected due to restrictions on the field of view. In order to obtain the global position and orientation of an object, or even just to determine its relative pose, we need various calibration and transformation algorithms. Hence, solving the sensor-based (and especially the vision-based) motion planning problem is usually a two-step process: the robot first transfers the sensor features back to pose information, and then makes a motion plan in the pose space based on this information. This parallels the early development of image-based control, where the process was divided into the sequential steps of vision and control. Motivated by image-based visual servoing [6], where the computation of the control inputs is performed directly in the image plane, we propose here the idea of motion planning in the image plane, where the motion plans are computed in the image plane using the image features extracted directly from the visual data. It is well known that we can drive a car-like robot to track landmarks [11] or follow a given trajectory [13]. In our previous work [21], we showed that the same type of stabilization and tracking can also be done for a dynamic robot, such as a free-flying blimp. However, in this approach there is generally no trajectory specified. Instead, a control law is chosen to drive the robot to the desired set-point. For this reason, there is no direct control over the path to be followed by the robot, and there is no a priori guarantee that a feasible path even exists, particularly if the initial and final poses are widely separated. Our idea, then, is to generate a virtual trajectory for the robot to follow. This virtual trajectory should satisfy criteria such as minimizing cost or avoiding obstacles, and can be generated by using motion planning algorithms.

The advantage of image-based motion planning is analogous to that of visual servo control: by removing intermediate transformations, we can save computation time and eliminate the computation and modeling errors that accompany them. However, the convenience does not come without its own drawbacks: there is generally only local convergence [2, 16], and it may be much more difficult to incorporate obstacle avoidance and global maps into the motion planning scheme.

The structure of this paper is as follows. Section 1 gives a brief introduction. Section 2 formulates the process of making motion plans directly in the image plane. Section 3 illustrates the idea for 2-D car-like robots, and Section 4 explores the idea further with examples of a 3-D "kinematic" blimp. Finally, in Section 5 we discuss these results and suggest areas requiring further work.
2 Motion Planning in the Image Plane
Traditionally, motion planning for a mobile robot involves determining a path that will take the robot from its starting position and orientation to a final pose. In order to obtain such pose information and to interact with the environment, we need to apply different types of sensors. One commonly used sensor is a vision sensor, or camera. Although one could install a global camera to give a bird's-eye view (camera 1 in Figure 1) in a controlled area, this requires that the robot operate only in a limited area and does not allow for decentralized operation. More often, we will encounter a robot with an on-board camera (camera 2 in Figure 1). In this paper, we will focus on this case. That is, we will try to find an optimal path for a robot with an on-board camera. The camera will only provide information on the target and on the parts of the environment that happen to fall within the camera's viewing angle. Our scenario does not give a global view of the scene including the pose of both the target(s) and the robot itself, unless additional information is known about the location of the target. In previous work in visual servoing [21], we explored how to push the system dynamics to the image plane in order to control the robot by tracking image features directly. Hence, it will be attractive if we can make motion plans directly from the features as well. That is, we want to find a sequence of inputs to the robot, based on the features, such that the result of the robot's movement is to make the features move from the starting configuration in the image plane to the desired one. Combining this effort with visual servoing, we can derive control laws that are less sensitive to camera calibration errors.

Figure 1: Contrasting the use of global and on-board camera views.

Let us examine further the idea of motion planning in the image plane. After capturing and analyzing an image, the robot's control inputs are calculated. These inputs could be the thruster force for a blimp or the velocity input for a car-like mobile robot, and are chosen to drive the robot to the desired position. Figure 2 shows one such captured image and the features extracted from it.

Figure 2: An image captured from the robot's (blimp's) on-board camera, and the features extracted from the image.

For simplicity, we track features that arise from spherical objects. Thus, the features we are interested in here are the center position of the object in the image plane, $(v, u)$, and its radius, $n$, all measured in pixels. We use $f = (v, u, n)^T$ to denote these feature parameters. In fact, the features can be any landmarks which can be extracted stably, such as points, corners, edges, areas, and so on. However, since our techniques generally require the computation of an image Jacobian based on the features, it may be necessary to estimate the depth of certain features [4]. In this article, we will assume that we already have a mature algorithm to obtain the necessary features. For more information about feature extraction, the reader is referred to [6, 18].
The motion planning problem in the image plane is formulated exactly analogously to the classical motion planning problem. We apply techniques from our previous work on optimal control methods used in motion planning for mobile robots [20, 22], but many other techniques could presumably be used (e.g., the navigation functions used by Cowan and Koditschek [2]). The basic goal is to drive the robot from an initial state (say, $f(t_0) = f_0$) to a desired final state, $f(t_1) = f_1$, subject to any geometric, kinematic, or dynamic constraints (see Figure 3). In our case, the states will be described by the image features. Constraints can take the form of kinematic or dynamic constraints, such as the governing equations for a mobile robot or a blimp. Alternatively, we can impose geometric constraints by requiring that the features all remain within the image plane, or that certain regions in the image feature space be avoided, for example, if we project known obstacles into the image plane coordinates.

Figure 3: Visual motion planning in the image plane. The features change from $f_0$ to $f_1$ as the result of the robot's motion.

In addition, the optimal motion planning formulation that we utilize allows us to specify a criterion for judging the performance of a chosen path. Our goal then is to compute feasible paths that minimize some cost function, $C$, over the space of all allowable inputs. Utilizing a cost function based on some norm of the inputs will also make the resulting path in the feature space dependent on the motion in the original space where the inputs are specified. For example, this could lead to choices of paths that are in some sense minimal length or minimal energy expended in the state space of the robot [20]. This can be especially critical for mobile robots, which generally have a limited power supply. For a kinematically driven robot, such as a car-like robot, the inputs are the linear and angular velocities. The cost function can be written as
$$C = \int_0^1 \sum_{i=1}^n k_i v_i^2 \, dt,$$
where the $v_i$ are linear or angular velocities, and $k_i$ is a scale factor, possibly based on the system's mass or inertia. Note that throughout this paper we will use normalized, dimensionless time, i.e., $t_0 = 0$ and $t_1 = 1$. Likewise, for a dynamically driven robot such as a blimp, the cost function would look something like
$$C = \int_0^1 \sum_{i=1}^n k_i F_i^2 \, dt,$$
where the $F_i$'s are the forces generated by the thrusters.

On the other hand, just as in any motion planning problem, the robot will be governed by some constraints relating the inputs to the motion of the features, and may be required to satisfy certain geometric constraints. For example, an omni-directional robot can move in any direction on the plane, while a car-like robot can only turn based on its steering action. Thus, we will specify that the motion of the robot is always governed by some differential equation modeling the effect of its inputs: $\dot{f} = g(f, u)$. In addition, there may be geometric constraints which we would like to impose on the motion. We will model these as inequality constraints on the features, $h(f, \dot{f}) \le 0$, though we could also include constraints on the inputs. We will use these primarily to constrain the motions so that the image features always remain in the image plane. That is, we want to prescribe motions for the robot which guarantee that the features of interest will always be in view. If there are multiple targets or features being tracked, we could also place constraints on the motion to ensure that occlusions do not occur. In the case that a model of the environment is given, one might also wish to incorporate obstacle constraints. This could be done by mapping the object parameters into the feature space and thus writing them as geometric constraints. However, our experience with doing this using an optimal control approach is that the computational cost of adding many obstacles can be significant. Better results might be achieved by taking more traditional approaches, such as using C-space methods.
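To make the formulation concrete, the following is a minimal sketch of how this optimization could be transcribed numerically, using a direct single-shooting parameterization of piecewise-constant inputs. It is an illustration only, not the Newton-Raphson-based solver of [20]; the function names, the Euler integrator, and the choice of SciPy's SLSQP solver are our own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

N = 50          # number of steps over the normalized time interval [0, 1]
DT = 1.0 / N

def rollout(f0, U, g):
    """Euler-integrate the feature kinematics f_dot = g(f, u) under the
    piecewise-constant input sequence U (an N x m array)."""
    f = np.asarray(f0, dtype=float)
    for u in U:
        f = f + DT * g(f, u)
    return f

def plan(f0, f1, g, m, k):
    """Find inputs minimizing int_0^1 sum_i k_i u_i^2 dt subject to the
    boundary conditions f(0) = f0 and f(1) = f1."""
    cost = lambda x: DT * float(np.sum(np.asarray(k) * x.reshape(N, m) ** 2))
    terminal = lambda x: rollout(f0, x.reshape(N, m), g) - np.asarray(f1, float)
    res = minimize(cost, 0.01 * np.ones(N * m),
                   constraints=[{"type": "eq", "fun": terminal}],
                   method="SLSQP")
    return res.x.reshape(N, m)
```

Inequality constraints of the form $h(f, \dot{f}) \le 0$ (e.g., field-of-view bounds evaluated along the rollout) can be appended to the constraint list as `{"type": "ineq", ...}` entries in the same way.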
3 Motion Planning for a Planar Robot
In this section, we will give some examples of motion planning in the image plane for a robot moving in a 2-D plane.

Cart-like Robot. First, let us look at the cart-like Labmate robot (often described as a rolling penny or Hilare robot). The configuration of the robot is shown in Figure 4. Without loss of generality, we fix the camera such that the projection of its focal center onto the ground plane is the same as that of the origin of the body frame, and its optical axis is parallel to the X-axis.

Figure 4: Top view of the configuration.

Let the two inputs of the robot be the forward speed $u_1$ and the angular velocity $u_2$. Then, given a point $p^b = (x^b, y^b, 1)^T$ in the body frame¹, we can find its velocity with the following equation:
$$\dot{p}^b = -\hat{\Omega}^b p^b, \qquad (1)$$
where
$$\hat{\Omega}^b = \begin{pmatrix} 0 & -u_2 & u_1 \\ u_2 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
is the homogeneous velocity of the robot in the body frame.

On the other hand, we can derive the relationship of the body velocity of the robot (or camera) to the velocity of the features in the image plane. By analyzing the image projection, we know that the features are related to the point $p^b$ by
$$f = (v, n, 1)^T = \left( \lambda \frac{y^b}{x^b}, \; \lambda \frac{R}{x^b}, \; 1 \right)^T, \qquad (2)$$
where $v$ is the horizontal position, $n$ is the size of the target in the image plane, $\lambda$ is the focal length of the camera, and $R$ is the real radius of the target. Then we have
$$\dot{f} = J_p \dot{p}^b, \qquad (3)$$
where
$$J_p = \begin{pmatrix} -\lambda \frac{y^b}{(x^b)^2} & \frac{\lambda}{x^b} & 0 \\ -\lambda \frac{R}{(x^b)^2} & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
is the image Jacobian.

Let the cost function be the total kinetic energy. Then the motion planning problem is to find $u_1$ and $u_2$ that solve
$$\min_{u_1, u_2} \int_0^1 (k_1 u_1^2 + k_2 u_2^2) \, dt$$
such that
$$f(t = 0) = f_0, \qquad f(t = 1) = f_1,$$
and
$$\dot{f} = \begin{pmatrix} \frac{vn}{\lambda R} & -\frac{v^2 + \lambda^2}{\lambda} \\ \frac{n^2}{\lambda R} & -\frac{nv}{\lambda} \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = K(f) \begin{pmatrix} u_1 \\ u_2 \end{pmatrix},$$
where $K(f)$ is a matrix connecting the inputs to the image feature kinematics, based on Eqs. 1, 2 and 3.

¹ When necessary, we use homogeneous coordinates in this section. For a good introduction to this, please refer to [14].

Additionally, in order to track the target correctly, we want the robot always to be in view of the target; that is, we want to keep the target within the viewing angle. Hence, we add two extra constraints so that the center of the target never goes beyond the boundary of the image, or $-H_{boundary} \le v \le H_{boundary}$. Meanwhile, we also keep the robot from hitting the target by bounding the size of the target in the image plane; that is, $0 < n \le n_{max}$, where $n_{max}$ is the maximum value allowed for a safe distance, and the lower bound $0$ is used to prevent the optimization program from giving an unrealistic result. For convenience, we suppose that the robot is initially resting at the origin of the fixed frame. Let the target be a ball with a radius of 0.1 m, located at (5.0 m, 1.0 m). If the focal length of the camera is 6 mm, the target in the image will be a disk with a radius of 12 pixels at a horizontal position of 120 pixels. Our goal is to find the inputs for the robot such that the result of the motion will move the target to the center of the image plane with a desired size (50 pixels).
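As an illustration of how the cart-like feature kinematics and the constraints above can be encoded, here is a small sketch of our own. It uses the numbers quoted in the text ($R = 0.1$ m, a 6 mm focal length) together with an assumed 10 µm pixel pitch, so that $\lambda \approx 600$ pixels reproduces the quoted 120-pixel position and 12-pixel radius; the image half-width $H_{boundary}$ and the bound $n_{max}$ are illustrative placeholders, not values from the paper.

```python
import numpy as np

LAM = 600.0   # focal length in pixels: 6 mm at an assumed 10 um pixel pitch
R = 0.1       # real radius of the target ball [m]

def K_cart(f):
    """K(f) for the cart-like robot with f = (v, n) in pixels, so that
    f_dot = K(f) @ (u1, u2) for forward speed u1 and turning rate u2."""
    v, n = f
    return np.array([[v * n / (LAM * R), -(v**2 + LAM**2) / LAM],
                     [n**2 / (LAM * R),  -n * v / LAM]])

def g_cart(f, u):
    return K_cart(f) @ u

# Geometric constraints from the text, written as functions h(f) <= 0.
H_BOUNDARY = 320.0   # image half-width in pixels (our placeholder)
N_MAX = 60.0         # largest safe target size in pixels (our placeholder)
fov_constraints = [
    lambda f: f[0] - H_BOUNDARY,    # v <= H_boundary
    lambda f: -f[0] - H_BOUNDARY,   # v >= -H_boundary
    lambda f: f[1] - N_MAX,         # n <= n_max
    lambda f: -f[1],                # n > 0 (approximated as n >= 0)
]
```

With $f_0 = (120, 12)$ and $f_1 = (0, 50)$, this plugs directly into a transcription like the `plan` sketch of Section 2, with the inequality functions evaluated along the rollout.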
Using a fast numerical optimization method based on a Newton-Raphson method [20], we can find the result in just seconds. However, all the examples in this paper can also be solved using other standard optimization tools, such as Matlab's optimization toolbox, though the computation time may vary. From Figures 5 and 6 we can see that the target center correctly moves to the center of the image plane, while its size increases to the desired value. Figure 7 gives the trajectory in the fixed frame using the given control inputs.

Figure 5: The features and control inputs for the visual motion planning of a cart-like robot.

Figure 6: Motion of the image features.

Figure 7: Planned trajectory of the car.

Car-like Robot. Another type of commonly used robot is a car-like robot with a front steering wheel. Figure 8 illustrates an example of such a nonholonomic robot.

Figure 8: Car-like mobile robot with camera. The subscript $f$ denotes the global fixed frame and $b$ the body frame; $(x_t, y_t, r)$ is the position and size of the target in the fixed frame, and $(x, y, \theta, \phi)$ are the state configurations of the robot in the fixed frame.

The governing equations for such a robot with inputs of forward speed, $u_1$, and steering angle, $u_2 = \phi$, are well known. The body velocity $\hat{\Omega}^b$ of the robot in the body frame is
$$\hat{\Omega}^b = \begin{pmatrix} 0 & -u_1 \frac{\tan u_2}{l} & u_1 \\ u_1 \frac{\tan u_2}{l} & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
For comparison, we use the same initial and final configurations as for the cart-like robot. Then, with the same image Jacobian, we can find the dynamics of the image features:
$$\dot{f} = \begin{pmatrix} \left( \frac{vn}{\lambda R} - \frac{v^2 + \lambda^2}{\lambda} \frac{\tan u_2}{l} \right) u_1 \\ \left( \frac{n^2}{\lambda R} - \frac{nv}{\lambda} \frac{\tan u_2}{l} \right) u_1 \\ 0 \end{pmatrix}. \qquad (4)$$
With a slightly different cost function (since the input $u_2$ is a desired angle, not a velocity),
$$C = \int_0^1 (k_1 u_1^2 + k_2 \dot{u}_2^2) \, dt,$$
we can compute the motion plan in the image plane. The motion of the features is shown in Figure 9. In Figure 10, we show the motion of the robot in the ground plane. For the sake of comparison, the motion of the cart-like robot is also shown dashed.

Figure 9: Change of image features.

Figure 10: Planned trajectories of the robots.
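For reference, Eq. (4) translates directly into code. This is our sketch, reusing the $\lambda$ and $R$ values assumed earlier, with an illustrative wheelbase $l$ (the text does not give its value):

```python
import numpy as np

LAM, R = 600.0, 0.1   # focal length [px] and ball radius [m], as assumed above
L_WHEELBASE = 0.5     # wheelbase l [m]; illustrative, not given in the text

def f_dot_car(f, u1, u2):
    """Image-feature dynamics of Eq. (4) for the car-like robot.
    f = (v, n); u1 is the forward speed, u2 the steering angle."""
    v, n = f
    tl = np.tan(u2) / L_WHEELBASE
    dv = (v * n / (LAM * R) - (v**2 + LAM**2) / LAM * tl) * u1
    dn = (n**2 / (LAM * R) - n * v / LAM * tl) * u1
    return np.array([dv, dn])
```

Because the cost here penalizes $\dot{u}_2$ rather than $u_2$, a discretized transcription would penalize finite differences of successive steering angles.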
Tracking multiple features. In the previous examples, we generated motion plans with two image features. However, when the scene is viewed from a global frame, there is one degree of uncertainty. For example, for the features chosen here, although we can choose inputs to make the final target image lie at a certain position with a certain radius, we still only know the distance to the target, rather than its pose in the global and body frames (see Figure 11). While this may be acceptable in cases where the robot is simply following a target, we can easily eliminate this uncertainty by introducing more features. In the next example, we illustrate this idea using two targets in the scene.

Figure 11: One degree of uncertainty in the pose of the robot. The robot can be anywhere on the dotted circle facing the target when optimizing with only two image features (size and centroid).

Suppose we have two spherical targets in the environment, and they are close enough for the robot to find them simultaneously²; then we will have four features: two positions and two radii. We denote them by $f = (v_1, n_1, v_2, n_2)^T$. The movement of the features will still obey the kinematic equations we described before. Hence, for the example of the cart-like robot, the governing dynamics for the features will be
$$\dot{f} = \begin{pmatrix} \frac{v_1 n_1}{\lambda R} & -\frac{v_1^2 + \lambda^2}{\lambda} \\ \frac{n_1^2}{\lambda R} & -\frac{n_1 v_1}{\lambda} \\ \frac{v_2 n_2}{\lambda R} & -\frac{v_2^2 + \lambda^2}{\lambda} \\ \frac{n_2^2}{\lambda R} & -\frac{n_2 v_2}{\lambda} \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = K(f) \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}. \qquad (5)$$

² If we assume the robot has memory, we can readily relax this restriction and let the robot observe the targets one by one. However, when the targets are moving, techniques to deal with time series and signal filtering will be necessary.
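In code, the stacked matrix of Eq. (5) is just two copies of the single-target block; here is a sketch under the same assumptions as before:

```python
import numpy as np

LAM, R = 600.0, 0.1   # focal length [px] and ball radius [m], as assumed above

def K_two_targets(f):
    """Stacked feature kinematics of Eq. (5), f = (v1, n1, v2, n2), so that
    f_dot = K(f) @ (u1, u2). Each target contributes one 2x2 block."""
    def block(v, n):
        return [[v * n / (LAM * R), -(v**2 + LAM**2) / LAM],
                [n**2 / (LAM * R),  -n * v / LAM]]
    v1, n1, v2, n2 = f
    return np.array(block(v1, n1) + block(v2, n2))
```

Note that with four features and only two inputs the problem becomes over-constrained (as discussed in Section 5), so an exact terminal equality may be infeasible; a least-squares terminal penalty is one workable relaxation.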
Then the new motion planning problem will be similar, except that the boundary conditions and kinematic constraints are more complex. Let us look at the following example. The starting point of the robot is at the origin of the global frame. The image features are initially at $f_0 = (-13, 4, 30, 4)^T$, corresponding to the positions $(14, -0.3)$ for one target and $(14, 0.7)$ for the other. The goal is to move the features to $f_1 = (-60, 12, 60, 12)^T$, corresponding to the positions $(5, -0.5)$ and $(5, 0.5)$. With the same cost function to minimize, we can find the inputs and the movement of the image features, as well as the global trajectory of the robot; see Figures 12 and 13.

Figure 12: Change of the features and inputs.

Figure 13: Image plane and global movement of the car.

4 Motion Planning for a 3-D Robot

In this section, we will extend the idea of motion planning in the image plane to 3-D motions. In our previous paper, we analyzed the kinematic and dynamic behavior of an unmanned blimp [21]. In this paper, in order to concentrate on the idea of motion planning, we simplify the system to a kinematic blimp. Given the control inputs of forward speed $F_1$, angular velocity $F_2$, and motor tilt angle speed $F_3$, we can write down the kinematic equations for the blimp in the body frame:
$$\begin{cases} \dot{x} = F_1 \cos\phi \\ \dot{z} = F_1 \sin\phi \\ \dot{\theta} = F_2 \\ \dot{\phi} = F_3. \end{cases} \qquad (6)$$
Hence, we can find the body velocity of any point in the blimp frame, that is:
$$\dot{p}^b = -\hat{\Omega} p^b, \qquad (7)$$
where $p^b = (x^b, y^b, z^b, 1)^T$ is the point in the body frame and
$$\hat{\Omega} = \begin{pmatrix} 0 & -u_3 & 0 & u_1 \\ u_3 & 0 & 0 & 0 \\ 0 & 0 & 0 & u_2 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
is the body velocity of the blimp. Note that, in order to make the process of optimization simpler and faster, we transform the control inputs as follows:
$$\begin{cases} u_1 = F_1 \cos\phi \\ u_2 = F_1 \sin\phi \\ u_3 = F_2. \end{cases}$$
After obtaining $u_1$, $u_2$, and $u_3$, we can easily convert them back to the real inputs $F_1$, $F_2$, and $F_3$. Again, we have the equation that relates the feature velocities to the body velocities using the image Jacobian:
$$\dot{f} = J_p \dot{p}^b = K(f) \begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix}, \qquad (8)$$
where $f = (v, u, n, 1)^T$ is the feature vector, $J_p$ is the image Jacobian with $(x^b, y^b, z^b) = \left( \frac{\lambda R}{n}, \frac{vR}{n}, \frac{uR}{n} \right)$, and
$$K(f) = \begin{pmatrix} \frac{vn}{\lambda R} & 0 & -\frac{v^2 + \lambda^2}{\lambda} \\ \frac{un}{\lambda R} & -\frac{n}{R} & -\frac{uv}{\lambda} \\ \frac{n^2}{\lambda R} & 0 & -\frac{nv}{\lambda} \end{pmatrix}$$
is the matrix transfer function. Let the cost function be
$$\int_0^1 \left( (k_1 u_1)^2 + (k_2 u_2)^2 + (k_3 u_3)^2 \right) dt,$$
where $k_1$, $k_2$, and $k_3$ denote the relative weights of moving in the different directions.

Here we give a simulation result for such a system. Suppose the target is initially located at $(5.0\,\mathrm{m}, 2.0\,\mathrm{m}, 1.0\,\mathrm{m})^T$ in the body frame, and the radius of the target ball is 0.1 m. Then the initial feature is $f_0 = (240, 120, 12)^T$. Our goal is to move the feature to $f_1 = (0, 0, 50)^T$. Also, we set $k_1 = 0.75$, $k_2 = 1$, and $k_3 = 4$. The result of the optimization is shown in Figures 14-16.

Figure 14: Change of the image features and control inputs.

Figure 15: Feature changes in the image plane.

Figure 16: Planned trajectory.
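Under the input transformation reconstructed above, recovering the physical blimp inputs from an optimized sequence $(u_1, u_2, u_3)$ amounts to inverting $u_1 = F_1 \cos\phi$, $u_2 = F_1 \sin\phi$, $u_3 = F_2$, with $F_3 = \dot\phi$. The following is our illustrative sketch of that back-substitution:

```python
import numpy as np

def recover_thruster_inputs(u, dt):
    """Map transformed inputs (u1, u2, u3) over time back to the physical
    blimp inputs: thrust F1, yaw rate F2, and tilt-angle speed F3.
    u is an (N, 3) array sampled every dt (normalized) time units."""
    u1, u2, u3 = u[:, 0], u[:, 1], u[:, 2]
    F1 = np.hypot(u1, u2)        # thrust magnitude F1 = sqrt(u1^2 + u2^2)
    phi = np.arctan2(u2, u1)     # motor tilt angle
    F2 = u3                      # yaw rate is passed through unchanged
    F3 = np.gradient(phi, dt)    # tilt-angle speed, phi_dot
    return F1, F2, F3
```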
5 Conclusion and Discussion
In this paper, we have investigated the idea of generating motion plans in the image plane for mobile robots. In doing so, the goal is to develop nominal paths that are based in the same space as the sensor measurements. By bypassing the transformation needed to find pose information from image features, we can save computation time and eliminate unnecessary modeling errors due to extrinsic camera calibration parameters. We have simulated planar mobile robots and a 3-D kinematic version of an aerial blimp tracking a spherical object. We also enlarged the feature space to demonstrate the feasibility of tracking two targets simultaneously, which helps to remove the ambiguity in the configuration space that results from tracking a single object. The simulations showed tentatively satisfying results, though more extensive simulations need to be performed. In particular, in the over-constrained optimization problem, convergence of the optimization algorithm occurred only over a restricted space of initial and final poses.

Our initial investigation of image-based motion planning suggests that there is significant potential for its use, but there is obviously a good deal of work that remains to be done. Designing motion plans in the sensor space allows one to work directly with the observations that are being made, instead of going through multiple transformations between pose and image measurements. Perhaps the most important feature of this method, however, is the ability to place visibility constraints on the motion plans, in order to guarantee that the targets always remain within the robot's sight. When working with traditional visual servoing algorithms, this kind of guarantee is difficult to enforce, and it often arises that features stray out of the image plane (as we have seen in experimental work with visual servoing [17, 21]). The benefits of visual servoing, however, such as calibration insensitivity and good convergence rates, are still maintained.

Needless to say, there can be certain disadvantages to working with an image-based motion planner. Using techniques from optimal control leads to problems with convergence, particularly when the initial and final poses are dramatically different, or when the problem is over-constrained. The convergence properties are not easily analyzed, making it difficult to predict when the planner will fail to generate a feasible (or even reasonable) path. One issue we would like to explore is whether other methods for solving the motion planning problem, such as navigation functions [2] or randomized methods [7], might provide better results in certain cases. Most motion planning techniques, however, come with their own set of limitations and disadvantages. An additional drawback of working in the image plane is that it becomes more difficult to incorporate known global information, such as obstacle locations, into the motion plans. This should be possible, however, and is a topic of future work.

Another topic that we would like to explore further is the effect of the dynamics of the robot, as modeled in [21], on the generation of motion plans. An easy way to do this is to make a "kinematic" motion plan using the methods explored here and consider this plan as a feedforward control term. Then a feedback controller, such as the one developed for the blimp in [21], can be wrapped around this trajectory to provide path following. On the other hand, one could extend the present ideas by directly incorporating the dynamics of the robot into the optimization scheme. We have currently simplified the blimp system to a kinematic form in order to focus on the idea of motion planning within the image plane. Making the optimization scheme (or any other motion planning scheme) work in the presence of second-order inputs will be a significant challenge.

Lastly, we would like to investigate in more detail general 3-D motions (for example, as might be used in guiding a satellite) and the additional features that must be added to track fully spatial objects. Along with this, we will explore the ways in which these algorithms extend to other types of features beyond those derived from simple spherical objects. In addition to multiple features, it would be interesting to see whether motion planning as a general approach could be extended to include a variety of sensors, such as dead-reckoning and sonar, performing sensor fusion side-by-side with motion planning.

The authors gratefully acknowledge the support of NSF grants MIP94-20397 and IRI-9711834, and ARO grants P-34150-MA-AAS, DAAH04-96-1-0007, and DURIP DAAG55-97-1-0064.
References
[1] F. Bullo, N. E. Leonard, and A. D. Lewis. Controllability and motion algorithms for underactuated Lagrangian systems on Lie groups. Submitted to IEEE Transactions on Automatic Control, February 1998.
[2] N. J. Cowan and D. E. Koditschek. Planar image based visual servoing as a navigation problem. In International Conference on Robotics and Automation, pages 611-617, 1999.
[3] J. P. Desai, V. Kumar, and J. P. Ostrowski. Control of changes in formation for a team of mobile robots. In International Conference on Robotics and Automation, pages 1556-1561, 1999.
[4] B. Espiau, F. Chaumette, and P. Rives. A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation, 8(3):313-326, June 1992.
[5] G. D. Hager, D. Kriegman, A. Georghiades, and O. Ben-Shahar. Toward domain-independent navigation: Dynamic vision and control. 1998.
[6] S. Hutchinson, G. D. Hager, and P. I. Corke. A tutorial on visual servo control. IEEE Transactions on Robotics and Automation, 12(5):651-670, 1996.
[7] L. Kavraki, P. Svestka, J. Latombe, and M. Overmars. Probabilistic roadmaps for path planning in high-dimensional configuration spaces. IEEE Transactions on Robotics and Automation, 12(4):566-580, 1996.
[8] O. Khatib. Real-time obstacle avoidance for manipulators and mobile robots. International Journal of Robotics Research, 5(1):90-98, 1986.
[9] J.-C. Latombe. Robot Motion Planning. Kluwer Academic Publishers, Boston, 1991.
[10] S. M. LaValle and R. Sharma. A framework for motion planning in stochastic environments: applications and computational issues. Volume 3, pages 3063-3068, San Diego, CA, 1995.
[11] A. Lazanas and J. C. Latombe. Motion planning with uncertainty: A landmark approach. Artificial Intelligence, 76(1-2):285-317, 1995.
[12] V. Lumelsky and S. Sun. A unified methodology for motion planning with uncertainty for 2D and 3D two-link robot arm manipulators. International Journal of Robotics Research, 9(5):89-104, 1990.
[13] Y. Ma, J. Kosecka, and S. Sastry. Vision guided navigation for a nonholonomic mobile robot. In The Confluence of Vision and Control, pages 134-145. Springer Verlag, 1998.
[14] R. M. Murray, Z. Li, and S. S. Sastry. A Mathematical Introduction to Robotic Manipulation. CRC Press, Boca Raton, FL, 1994.
[15] E. Rimon and D. E. Koditschek. Exact robot navigation using artificial potential functions. IEEE Transactions on Robotics and Automation, 8(5):501-518, 1992.
[16] S. Soatto, R. Frezza, and P. Perona. Motion estimation via dynamic vision. IEEE Transactions on Automatic Control, 41(3):393-413, March 1996.
[17] C. J. Taylor, J. P. Ostrowski, and S.-H. Jung. Visual servoing with relative orientation. In Conference on Computer Vision and Pattern Recognition, 1999. To appear.
[18] E. Trucco and A. Verri. Introductory Techniques for 3-D Computer Vision. Prentice Hall, Upper Saddle River, NJ, 1998.
[19] T. Yata, A. Ohya, and S. Yuta. A fast and accurate sonar-ring sensor for a mobile robot. In International Conference on Robotics and Automation, pages 630-636, 1999.
[20] M. Žefran. Continuous Methods for Motion Planning. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA, October 1996.
[21] H. Zhang and J. P. Ostrowski. Visual servoing with dynamics: Control of an unmanned blimp. In International Conference on Robotics and Automation, pages 618-623, May 1999.
[22] H. Zhang, J. P. Ostrowski, and V. Kumar. Motion planning with uncertainty. In International Conference on Robotics and Automation, pages 638-643, May 1998.