Pattern Recognition Letters 21 (2000) 45–60
www.elsevier.nl/locate/patrec
Self-localization of a mobile robot without camera calibration using projective invariants

Wang-Heun Lee b, Kyoung-Sig Roh a,*, In-So Kweon b

a Intelligent Robot System Team, System and Control Sector, Samsung Advanced Institute of Science and Technology, P.O. Box 111, Suwon 440-600, Kyungki-Do, Kihung, South Korea
b Department of Automation and Design, Korea Advanced Institute of Science and Technology, Seoul, South Korea

Received 20 January 1999; received in revised form 22 June 1999
Abstract

In this paper, we propose a vision-based self-localization algorithm for an indoor mobile robot. The algorithm does not require camera calibration and works with only a single image, using the projective invariant relationship between natural landmarks. The position of the robot is determined by relative positioning, and the method does not require prior information about the robot's position. The robustness and feasibility of our algorithm have been demonstrated through experiments in hallway environments. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Mobile robot; Self-localization; Projective invariant
1. Introduction

Self-localization is a very important capability for an indoor mobile robot to autonomously execute given tasks. Among the many approaches, vision-based methods have some advantages because of their flexibility and simplicity. Successful vision-based approaches are reviewed in (Kosaka and Kak, 1992; Sugihara, 1988). Among these approaches, we are interested in monocular vision systems; that is, we assume that geometric models of the indoor environment are available. The prominent approaches are as follows. Dulimarta and Jain (1997) proposed an indoor mobile robot system with a self-localization capability using ``ceiling lights'' and ``door number plates''; their paper suggested further directions for vision-based approaches and an efficient usage of landmarks. Matsumoto et al. (1996) proposed a new visual representation of the route, the ``view-sequenced route representation'' (VSRR), which can be used simultaneously for localization, steering-angle determination and obstacle detection. Kosaka and Kak (1992) proposed a vision-guided method using model-based reasoning and prediction of uncertainties. Their approach performed vision-based self-localization with the help of a 3D CAD model of the environment and
* Corresponding author. Tel.: +82-331-280-9275; fax: +82-331-280-9289. E-mail addresses:
[email protected] (K.-S. Roh),
[email protected] (I.-S. Kweon).
the prediction of motion uncertainty. Very accurate localization was possible by using Kalman filter-based updating.

Most vision-based approaches assume that the position of the robot is approximately known from an internal sensor or from the previous position and motion information of the robot, and the vision system is then used for fine localization or for minimizing the error between the estimated pose and the current pose. However, cases in which the previous information cannot be used occur frequently, and in such cases these approaches may be inefficient or inapplicable. Moreover, these approaches require a fixed camera setup or camera calibration before operation. For example, Matsumoto et al. (1996) constructed the VSRR as a model of the route and used it to recognize the best-matched view by template matching; if the camera setup is changed, a new VSRR may be required. Kosaka and Kak (1992) used the camera parameters to generate an expectation map from a CAD model.

To solve these two problems, we present a new efficient self-localization algorithm, based on the projective invariant (Mundy and Zisserman, 1992), which needs neither camera information nor previous position information. The proposed algorithm for navigation in hallways and similar indoor environments rests on two basic assumptions: the ground plane is flat, and two parallel side-lines are formed by the floor and the two side walls. We also assume that an environmental map database is available for matching between the scene and the model. Intersection points between the floor and the vertical lines of door frames are used as point features to compute the projective invariant. As an off-line process, we construct a database consisting of the projective invariants of the point features. Using the invariants in the constructed database, the correspondences between the model and scene features can be found. The corresponding point features in the database and in the image are then used to compute the position of the mobile robot. We demonstrate the robustness and feasibility of our algorithms through experiments in indoor environments using an indoor mobile robot.

2. Projective invariant and relative positioning

2.1. Projective invariant

Fig. 1 shows the projective invariant relationship explained through the concept of canonical coordinates. Let $X^A_k$ and $X^B_k$, $k = 1, \ldots, 4$, denote four pairs of corresponding points in coordinate systems A and B, respectively, and let $e_k$, $k = 1, \ldots, 4$, denote the corresponding points in the canonical coordinates. We then compute the canonical coordinates of the other points with respect to these four points. From the 2D projective invariance, we obtain

$$
I_x = \frac{\det(X^A_i X^A_2 X^A_3)\,\det(X^A_4 X^A_1 X^A_2)}{\det(X^A_i X^A_1 X^A_2)\,\det(X^A_4 X^A_2 X^A_3)}
    = \frac{\det(X^B_i X^B_2 X^B_3)\,\det(X^B_4 X^B_1 X^B_2)}{\det(X^B_i X^B_1 X^B_2)\,\det(X^B_4 X^B_2 X^B_3)},
$$
$$
I_y = \frac{\det(X^A_i X^A_3 X^A_1)\,\det(X^A_4 X^A_1 X^A_2)}{\det(X^A_i X^A_1 X^A_2)\,\det(X^A_4 X^A_3 X^A_1)}
    = \frac{\det(X^B_i X^B_3 X^B_1)\,\det(X^B_4 X^B_1 X^B_2)}{\det(X^B_i X^B_1 X^B_2)\,\det(X^B_4 X^B_3 X^B_1)},
\tag{1}
$$

where

$$
\det(X_a\,X_b\,X_c) = \begin{vmatrix} x_a & x_b & x_c \\ y_a & y_b & y_c \\ 1 & 1 & 1 \end{vmatrix}.
$$
Fig. 1. Projective invariant relationship.
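To make Eq. (1) concrete, here is a minimal Python sketch (ours, not from the paper) that evaluates the pair of invariants $(I_x, I_y)$ of a fifth point with respect to four basis points, and checks numerically that the values are preserved under an arbitrary homography.

```python
import numpy as np

def det3(a, b, c):
    """det([a b c]) with columns (x, y, 1)^T, as in Eq. (1)."""
    return np.linalg.det(np.array([[a[0], b[0], c[0]],
                                   [a[1], b[1], c[1]],
                                   [1.0,  1.0,  1.0]]))

def invariants(X1, X2, X3, X4, Xi):
    """Five-point projective invariants (I_x, I_y) of Eq. (1)."""
    Ix = det3(Xi, X2, X3) * det3(X4, X1, X2) / (det3(Xi, X1, X2) * det3(X4, X2, X3))
    Iy = det3(Xi, X3, X1) * det3(X4, X1, X2) / (det3(Xi, X1, X2) * det3(X4, X3, X1))
    return Ix, Iy

# Check: the invariants agree for two projectively related views of the same
# five points (here: an arbitrary homography H applied to the points).
pts = [np.array(p, float) for p in [(0, 0), (4, 0), (4, 2), (0, 2), (1, 1)]]
H = np.array([[1.1, 0.2, 3.0], [0.1, 0.9, -1.0], [1e-3, 2e-3, 1.0]])
proj = [(H @ np.append(p, 1.0))[:2] / (H @ np.append(p, 1.0))[2] for p in pts]
print(invariants(*pts), invariants(*proj))  # both pairs should match closely
```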
Due to the noisy observations, we cannot obtain the exact invariant. Thus, the variance of the measured invariant is required for robust matching. Let $(\tilde{x}_i, \tilde{y}_i)$ be the noisy observation of a true image point $(x_i, y_i)$; the relationship between them can be written as

$$
\tilde{x}_i = x_i + \xi_i, \qquad \tilde{y}_i = y_i + \eta_i, \tag{2}
$$

where the noise terms $\xi_i, \eta_i$ are independently distributed with mean 0 and variance $\sigma_0^2$:

$$
E[\xi_i] = E[\eta_i] = 0, \qquad V[\xi_i] = V[\eta_i] = \sigma_0^2,
$$
$$
E[\xi_i \xi_j] = E[\eta_i \eta_j] = \begin{cases} \sigma_0^2 & \text{if } i = j, \\ 0 & \text{otherwise}, \end{cases}
\qquad E[\xi_i \eta_j] = 0. \tag{3}
$$

The 2D invariant is a function of the point coordinates:

$$
I = \frac{\det(X_5 X_1 X_4)\,\det(X_5 X_2 X_3)}{\det(X_5 X_1 X_3)\,\det(X_5 X_2 X_4)}, \tag{4}
$$

where $X_i = (x_i, y_i, 1)^T$, i.e., $I = I(x_1, y_1, x_2, y_2, x_3, y_3, x_4, y_4, x_5, y_5)$. Due to noise in the pixel positions, we measure the noisy invariant

$$
\tilde{I} = I(\tilde{x}_1, \tilde{y}_1, \tilde{x}_2, \tilde{y}_2, \tilde{x}_3, \tilde{y}_3, \tilde{x}_4, \tilde{y}_4, \tilde{x}_5, \tilde{y}_5). \tag{5}
$$

To determine the expected value and variance of $\tilde{I}$, we represent $\tilde{I}$ as a Taylor series expanded around $(x_1, y_1, \ldots, x_5, y_5)$:

$$
\tilde{I} \approx I + \sum_{i=1}^{5} \left[ \frac{\partial \tilde{I}}{\partial \tilde{x}_i}(\tilde{x}_i - x_i) + \frac{\partial \tilde{I}}{\partial \tilde{y}_i}(\tilde{y}_i - y_i) \right]
          = I + \sum_{i=1}^{5} \left[ \frac{\partial \tilde{I}}{\partial \tilde{x}_i}\,\xi_i + \frac{\partial \tilde{I}}{\partial \tilde{y}_i}\,\eta_i \right]. \tag{6}
$$

Therefore, the variance becomes

$$
E\big[(\tilde{I} - I)^2\big] \approx \sigma_0^2 \sum_{i=1}^{5} \left[ \left(\frac{\partial \tilde{I}}{\partial \tilde{x}_i}\right)^2 + \left(\frac{\partial \tilde{I}}{\partial \tilde{y}_i}\right)^2 \right]. \tag{7}
$$
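A hedged sketch of the variance estimate of Eq. (7); unlike the paper, which presumably uses analytic partial derivatives, this illustration approximates them by central finite differences. The function names and test coordinates are ours.

```python
import numpy as np

def det3(a, b, c):
    return np.linalg.det(np.array([[a[0], b[0], c[0]],
                                   [a[1], b[1], c[1]],
                                   [1.0,  1.0,  1.0]]))

def invariant(p):
    """Eq. (4): I as a function of the 10-vector p = (x1, y1, ..., x5, y5)."""
    X1, X2, X3, X4, X5 = p.reshape(5, 2)
    return (det3(X5, X1, X4) * det3(X5, X2, X3)) / (det3(X5, X1, X3) * det3(X5, X2, X4))

def invariant_variance(points, sigma0, eps=1e-4):
    """First-order variance of the invariant, Eq. (7), with numeric gradients."""
    p = np.asarray(points, float).reshape(-1)
    grad = np.zeros_like(p)
    for k in range(p.size):                      # central differences
        dp = np.zeros_like(p); dp[k] = eps
        grad[k] = (invariant(p + dp) - invariant(p - dp)) / (2 * eps)
    return sigma0**2 * np.sum(grad**2)

pts = [(10, 200), (300, 220), (280, 60), (30, 40), (150, 130)]
print(invariant(np.asarray(pts, float).reshape(-1)), invariant_variance(pts, sigma0=1.0))
```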
2.2. Relative positioning

Given four corresponding points in coordinate systems A and B, if we know the physical coordinates of the ith point in A, the corresponding physical coordinates of the ith point in B can be obtained by relative positioning. We rewrite Eq. (1) as

$$
I_x = \frac{\det(X^B_i X^B_2 X^B_3)\,\det(X^B_4 X^B_1 X^B_2)}{\det(X^B_i X^B_1 X^B_2)\,\det(X^B_4 X^B_2 X^B_3)},
\qquad
I_y = \frac{\det(X^B_i X^B_3 X^B_1)\,\det(X^B_4 X^B_1 X^B_2)}{\det(X^B_i X^B_1 X^B_2)\,\det(X^B_4 X^B_3 X^B_1)},
\tag{8}
$$

where, for example,

$$
\det(X^B_i X^B_2 X^B_3) = X^B_i (Y^B_2 - Y^B_3) - Y^B_i (X^B_2 - X^B_3) + (X^B_2 Y^B_3 - X^B_3 Y^B_2),
$$
$$
\det(X^B_i X^B_1 X^B_2) = X^B_i (Y^B_1 - Y^B_2) - Y^B_i (X^B_1 - X^B_2) + (X^B_1 Y^B_2 - X^B_2 Y^B_1),
$$
$$
\det(X^B_i X^B_3 X^B_1) = X^B_i (Y^B_3 - Y^B_1) - Y^B_i (X^B_3 - X^B_1) + (X^B_3 Y^B_1 - X^B_1 Y^B_3).
$$

Since $I_x$ and $I_y$ are known from the coordinates in A, Eq. (8) is linear in the unknown coordinates $(X^B_i, Y^B_i)$ and can be written as

$$
a X^B_i + b Y^B_i = c, \qquad d X^B_i + e Y^B_i = f, \tag{9}
$$

where $X^A_i$ and $X^B_i$ denote a point in coordinate systems A and B, respectively. In Eq. (9), $a, b, c, d, e, f$ are functions of $X^A_k, X^B_k$, $k = 1, \ldots, 4$, and $X^A_i$:

$$
a = I_x \frac{\det(X^B_4 X^B_2 X^B_3)}{\det(X^B_4 X^B_1 X^B_2)} (Y^B_1 - Y^B_2) - (Y^B_2 - Y^B_3),
\qquad
d = I_y \frac{\det(X^B_4 X^B_3 X^B_1)}{\det(X^B_4 X^B_1 X^B_2)} (Y^B_1 - Y^B_2) - (Y^B_3 - Y^B_1),
$$
$$
b = -I_x \frac{\det(X^B_4 X^B_2 X^B_3)}{\det(X^B_4 X^B_1 X^B_2)} (X^B_1 - X^B_2) + (X^B_2 - X^B_3),
\qquad
e = -I_y \frac{\det(X^B_4 X^B_3 X^B_1)}{\det(X^B_4 X^B_1 X^B_2)} (X^B_1 - X^B_2) + (X^B_3 - X^B_1),
$$
$$
c = -I_x \frac{\det(X^B_4 X^B_2 X^B_3)}{\det(X^B_4 X^B_1 X^B_2)} (X^B_1 Y^B_2 - X^B_2 Y^B_1) + (X^B_2 Y^B_3 - X^B_3 Y^B_2),
\qquad
f = -I_y \frac{\det(X^B_4 X^B_3 X^B_1)}{\det(X^B_4 X^B_1 X^B_2)} (X^B_1 Y^B_2 - X^B_2 Y^B_1) + (X^B_3 Y^B_1 - X^B_1 Y^B_3).
$$

From Eq. (9), the corresponding physical coordinates of the ith point in B become

$$
X^B_i = \frac{ce - bf}{ae - bd} = F_{x_i}(X^A_1, X^A_2, X^A_3, X^A_4, X^B_1, X^B_2, X^B_3, X^B_4, X^A_i),
$$
$$
Y^B_i = \frac{af - cd}{ae - bd} = F_{y_i}(X^A_1, X^A_2, X^A_3, X^A_4, X^B_1, X^B_2, X^B_3, X^B_4, X^A_i).
\tag{10}
$$

From Eq. (10), we define the relative positioning relationship as

$$
X^B_i = F^B(X^A_i) = \big( F_{x_i},\; F_{y_i} \big), \tag{11}
$$

i.e., the coordinates of the ith point in B are determined by its coordinates in A together with the four basis correspondences.
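A self-contained Python sketch of relative positioning (Eqs. (8)-(10)): the invariants are computed in frame A, the linear system of Eq. (9) is assembled from the basis points in frame B, and the unknown point is recovered by Cramer's rule. Function names and the test homography are ours.

```python
import numpy as np

def det3(a, b, c):
    """det([a b c]) with columns (x, y, 1)^T."""
    return a[0] * (b[1] - c[1]) - a[1] * (b[0] - c[0]) + (b[0] * c[1] - c[0] * b[1])

def relative_position(XA, XB, XAi):
    """Map the ith point from frame A to frame B using the four basis
    correspondences XA[k] <-> XB[k], k = 0..3 (Eqs. (8)-(10))."""
    A1, A2, A3, A4 = [np.asarray(p, float) for p in XA]
    B1, B2, B3, B4 = [np.asarray(p, float) for p in XB]
    Ai = np.asarray(XAi, float)

    # Invariants of Eq. (1), computed in frame A.
    Ix = det3(Ai, A2, A3) * det3(A4, A1, A2) / (det3(Ai, A1, A2) * det3(A4, A2, A3))
    Iy = det3(Ai, A3, A1) * det3(A4, A1, A2) / (det3(Ai, A1, A2) * det3(A4, A3, A1))

    kx = Ix * det3(B4, B2, B3) / det3(B4, B1, B2)
    ky = Iy * det3(B4, B3, B1) / det3(B4, B1, B2)

    # Coefficients of the linear system a X + b Y = c, d X + e Y = f (Eq. (9)).
    a = kx * (B1[1] - B2[1]) - (B2[1] - B3[1])
    b = -kx * (B1[0] - B2[0]) + (B2[0] - B3[0])
    c = -kx * (B1[0] * B2[1] - B2[0] * B1[1]) + (B2[0] * B3[1] - B3[0] * B2[1])
    d = ky * (B1[1] - B2[1]) - (B3[1] - B1[1])
    e = -ky * (B1[0] - B2[0]) + (B3[0] - B1[0])
    f = -ky * (B1[0] * B2[1] - B2[0] * B1[1]) + (B3[0] * B1[1] - B1[0] * B3[1])

    # Cramer's rule, Eq. (10).
    den = a * e - b * d
    return np.array([(c * e - b * f) / den, (a * f - c * d) / den])

# Check against a known homography H mapping A to B (synthetic test data).
H = np.array([[0.9, 0.1, 5.0], [-0.2, 1.1, 2.0], [1e-3, -2e-3, 1.0]])
warp = lambda p: (H @ np.append(p, 1.0))[:2] / (H @ np.append(p, 1.0))[2]
XA = [(0, 0), (10, 0), (10, 6), (0, 6)]
XB = [warp(np.asarray(p, float)) for p in XA]
print(relative_position(XA, XB, (3, 2)), warp(np.array([3.0, 2.0])))  # should agree
```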
3. Self-localization

In this section, we describe the self-localization method in detail. Fig. 2 shows a hallway image of the kind observed in a typical indoor environment. Lights on the ceiling and their reflections on the floor are distributed uniformly along the hallway. In this figure, the two parallel lines intersect at a point called the vanishing point, which is one of the representative characteristics of projective geometry. There are also vertical lines generated by doors and entrances. We use the intersection points between the two parallel lines and the vertical lines as point features or natural landmarks (see the white marks in Fig. 2).

The vision system consists of four sub-systems: the model-base construction sub-system, the image processing sub-system, the matching sub-system and the self-localization sub-system. The model-base construction sub-system uses the coordinate information of the door frames to compute and store five-point invariants. In the image processing sub-system, we obtain the vanishing point and the point features corresponding to the door frames from an input image. In the matching sub-system, we search for the corresponding features, i.e., the door frames. The pose of the mobile robot is determined in the self-localization sub-system by relative positioning.

3.1. Image processing

The proposed algorithm for navigation in hallways and similar indoor environments is based on two basic assumptions: the ground plane is flat, and two parallel side-lines are formed by the floor and the two side walls. Intersection points between the floor and the vertical lines of door frames are used as point features to compute the projective invariant.
Fig. 2. Scene of a general hallway.
In a hallway image, the two parallel lines intersect at a point called the vanishing point. We use the Hough transform to extract the two parallel lines and the vertical lines, and the point features are then obtained by computing their intersection points. Among the extracted point features we must select four points with which to compute the canonical coordinates of any other point, as explained in Section 3.2. For this purpose, we first search for the point features corresponding to door frames, classify whether each extracted door frame lies on the left or the right side wall, and select one door frame on each side wall. Since each door frame provides two point features, this yields the four points. Door frames are extracted under the assumption that a door has uniform intensity, i.e., a small intensity variance along the vertical direction.

3.2. Model-base construction and matching

As an off-line process, we construct a database consisting of the projective invariants of the point features. In order to construct an efficient model base, we use a hash table indexed by the projective invariant. Matching consists of hypothesis generation and verification. Hypotheses are generated by indexing into the hash table with an invariant computed from five points extracted by the image processing sub-system. The generated hypotheses are verified by an alignment method (Huttenlocher and Ullman, 1987).

3.2.1. Model-base construction and hypotheses generation

In this section, we explain how to construct the database using the projective invariant. Fig. 3 shows a top-down view of a typical corridor scene. Point features on the left and the right wall, which are the intersection points between the floor and the door frames, are denoted by $L_k$ and $R_k$, respectively. Table 1 gives the pseudo code of the model-base construction. In Table 1, $H(\cdot)$ is a hash function defined by

$$
H(I) = ABC, \tag{12}
$$

where $A \in \{\,l: \text{left feature},\; r: \text{right feature}\,\}$, $B \in \{\,p: I > 0,\; m: I < 0\,\}$ and $C = \operatorname{integer}(|I| \times 100)$.
Fig. 3. Top view of a hallway.
Table 1
Pseudo code for the model-base construction

for each hallway H_i
    for each pair of left features L_k, L_{k+1}
        for each pair of right features R_l, R_{l+1}
            ASSIGN basis points [L_k : R_l : L_{k+1} : R_{l+1}]
            for each left feature m (except k and k+1)
                COMPUTE invariant and error bound (I, ΔI)
                for j = I − 3ΔI to I + 3ΔI
                    STORE (i, k, l, weight) at H(j)
                end for
            end for
            for each right feature n (except l and l+1)
                COMPUTE invariant and error bound (I, ΔI)
                for j = I − 3ΔI to I + 3ΔI
                    STORE (i, k, l, weight) at H(j)
                end for
            end for
        end for
    end for
end for
The weight stored at each indexed bin is modeled by a Gaussian function:

$$
\text{weight} = \frac{1}{\Delta I\,\sqrt{2\pi}} \exp\!\left( -\frac{(j - I)^2}{2\,\Delta I^2} \right),
\qquad I - 3\Delta I \le j \le I + 3\Delta I. \tag{13}
$$
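A rough Python sketch of the model-base construction of Table 1 with the indexing of Eq. (12) and the weighting of Eq. (13); the bin step of 0.01 mirrors the ×100 quantization in C, while the dictionary layout and the epsilon guard are our own choices, not the paper's.

```python
import math
from collections import defaultdict

def hash_key(side, invariant):
    """Eq. (12): H(I) = ABC with A in {l, r}, B in {p, m}, C = integer(|I| * 100)."""
    return f"{side}{'p' if invariant > 0 else 'm'}{int(abs(invariant) * 100 + 1e-9)}"

def gaussian_weight(j, I, dI):
    """Eq. (13): Gaussian weight of bin value j around the invariant I."""
    return math.exp(-(j - I) ** 2 / (2 * dI ** 2)) / (dI * math.sqrt(2 * math.pi))

def store_feature(table, hallway_id, k, l, side, I, dI, step=0.01):
    """STORE (i, k, l, weight) at H(j) for all j in [I - 3dI, I + 3dI] (Table 1)."""
    n_bins = int(round(3 * dI / step))
    for n in range(-n_bins, n_bins + 1):
        j = I + n * step
        table[hash_key(side, j)].append((hallway_id, k, l, gaussian_weight(j, I, dI)))

# Usage: a tiny model base with one entry (illustrative numbers only).
model_base = defaultdict(list)
store_feature(model_base, hallway_id=0, k=2, l=3, side='l', I=0.72, dI=0.02)
print(len(model_base), model_base[hash_key('l', 0.72)][:1])
```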
Table 2 gives the matching algorithm used to search for the corresponding model points. The generated hypotheses are verified by the alignment method, which is one of the well-known verification methods. For a hypothesis, we compute the projective transformation, project the other model points onto the image plane by this transformation and compare the projected points with the extracted features. Finally, we select the hypothesis with the maximum matching ratio as the best match. The matching ratio is defined as

$$
\mathrm{MR}(\tilde{x}, x) = \frac{1}{\sigma_p \sqrt{2\pi}} \exp\!\left( -\frac{\lVert \tilde{x} - x \rVert^2}{2\sigma_p^2} \right), \tag{14}
$$

where $\tilde{x}$ is a projected model point, $x$ is the extracted image point within $\sigma_p$ of it, and $\sigma_p = 3\sqrt{\Delta I_x^2 + \Delta I_y^2}$, with $\Delta I_x$ and $\Delta I_y$ computed by Eq. (7).
Table 2
Pseudo code for the matching

EXTRACT features and basis features in an input image
ASSIGN basis points [bl1 : br1 : bl2 : br2]
for each left feature lf_m (except basis points)
    COMPUTE invariant
    INDEXING & VOTING
end for
for each right feature rf_n (except basis points)
    COMPUTE invariant
    INDEXING & VOTING
end for
if voting count > threshold
    HYPOTHESIS GENERATION & VERIFICATION
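A rough sketch of the indexing-and-voting step of Table 2 under the same assumptions as the model-base sketch above; hypotheses whose accumulated weight exceeds a threshold would then go on to alignment-based verification with Eq. (14). The threshold and data are illustrative only.

```python
from collections import defaultdict

def hash_key(side, invariant):
    """Eq. (12), as in the model-base sketch above."""
    return f"{side}{'p' if invariant > 0 else 'm'}{int(abs(invariant) * 100 + 1e-9)}"

def vote(model_base, scene_invariants, threshold=0.5):
    """Indexing & voting of Table 2: accumulate Gaussian-weighted votes for
    (hallway, k, l) hypotheses; those above threshold go to verification."""
    votes = defaultdict(float)
    for side, I in scene_invariants:
        for hallway_id, k, l, weight in model_base.get(hash_key(side, I), []):
            votes[(hallway_id, k, l)] += weight
    return [(hyp, v) for hyp, v in votes.items() if v > threshold]

# Usage with a toy one-entry model base (illustrative numbers only).
model_base = {hash_key('l', 0.72): [(0, 2, 3, 1.0)]}
print(vote(model_base, [('l', 0.72), ('r', -0.38)]))
```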
3.3. Self-localization

The virtual robot center is defined as the vertical projection, onto the floor, of the optical center of the camera mounted on the mobile robot. In Fig. 4, for example, the perspective projection of the virtual robot center produces a virtual image point $(x_m, y_m)$ on the extended virtual image plane. This virtual image point becomes the fifth point feature in the image, which is used for self-localization through relative positioning. From Fig. 4, we can derive the relationship between the projected point of the virtual robot center $(x_m, y_m)$ and the vanishing point $(x_v, y_v)$ as

$$
f : s_y y_v = s_y y_m : f, \tag{15}
$$

where $s_y$ is the scale factor that transforms an image coordinate into a metric coordinate. Thus, the $y$-coordinate of the projected virtual robot center on the image plane is

$$
y_m = \frac{1}{y_v}\left(\frac{f}{s_y}\right)^2 = \frac{f_y^2}{y_v}. \tag{16}
$$

Fig. 4. The virtual robot center and its projection onto the image plane.
Using the four extracted image points $x_k$, $k = 1, \ldots, 4$, and $(x_m, y_m)$ with the corresponding object points, we can estimate the object coordinates of $(x_m, y_m)$, which correspond to the position of the robot in the global coordinate system, by relative positioning. In Eq. (16), if the optical axis of the camera is parallel to the floor, $y_v$ becomes zero and thus $y_m = \infty$. In practice, we can assume $y_m = \infty$ with only a very small error, because $f_y$ is very large. In the case $y_m = \infty$, the image point of the virtual robot center behaves as the point at infinity in the $y$ direction: dividing the determinants of Eq. (1) that contain this point by $y_m$ gives

$$
\frac{1}{y_m}\det(x_m\, x_a\, x_b) = \frac{x_m}{y_m}(y_a - y_b) - (x_a - x_b) + \frac{x_a y_b - x_b y_a}{y_m} \;\longrightarrow\; -(x_a - x_b),
$$

so that the plane projective invariants $I_x$ and $I_y$ of Eq. (1) become

$$
I_x = \frac{\det(x_4 x_1 x_2)\,(x_2 - x_3)}{\det(x_4 x_2 x_3)\,(x_1 - x_2)},
\qquad
I_y = \frac{\det(x_4 x_1 x_2)\,(x_3 - x_1)}{\det(x_4 x_3 x_1)\,(x_1 - x_2)}. \tag{17}
$$

In Eq. (10), let A and B be the coordinate systems defined in the image and in the world, respectively. Then the position of the robot, $X_m = (X_m, Y_m)$, becomes

$$
X_m = \frac{ce - bf}{ae - bd} = F_x(x_1, x_2, x_3, x_4, X_1, X_2, X_3, X_4),
\qquad
Y_m = \frac{af - cd}{ae - bd} = F_y(x_1, x_2, x_3, x_4, X_1, X_2, X_3, X_4), \tag{18}
$$

where $X_k$, $k = 1, \ldots, 4$, are the object points corresponding to the image points $x_k$, $k = 1, \ldots, 4$. Since Eq. (18) is independent of $x_m$, it is possible to compute the position of the robot without calibration using a single image.
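A hedged end-to-end sketch of this self-localization step: the four matched image/world point pairs are fed to relative positioning with the virtual robot center treated as the point at infinity in the image y direction (Eq. (17)), so no camera parameter appears. The helper and variable names are ours, not the paper's.

```python
import numpy as np

def det3(a, b, c):
    return a[0] * (b[1] - c[1]) - a[1] * (b[0] - c[0]) + (b[0] * c[1] - c[0] * b[1])

def localize(img_pts, world_pts):
    """Robot position from four image points x_k and their world points X_k,
    using the y_m = infinity form of the invariants (Eqs. (17) and (18))."""
    x1, x2, x3, x4 = [np.asarray(p, float) for p in img_pts]
    X1, X2, X3, X4 = [np.asarray(p, float) for p in world_pts]

    # Eq. (17): invariants of the virtual robot center (point at infinity in y).
    Ix = det3(x4, x1, x2) * (x2[0] - x3[0]) / (det3(x4, x2, x3) * (x1[0] - x2[0]))
    Iy = det3(x4, x1, x2) * (x3[0] - x1[0]) / (det3(x4, x3, x1) * (x1[0] - x2[0]))

    # Eq. (9) coefficients in the world frame, then Cramer's rule (Eq. (18)).
    kx = Ix * det3(X4, X2, X3) / det3(X4, X1, X2)
    ky = Iy * det3(X4, X3, X1) / det3(X4, X1, X2)
    a = kx * (X1[1] - X2[1]) - (X2[1] - X3[1])
    b = -kx * (X1[0] - X2[0]) + (X2[0] - X3[0])
    c = -kx * (X1[0] * X2[1] - X2[0] * X1[1]) + (X2[0] * X3[1] - X3[0] * X2[1])
    d = ky * (X1[1] - X2[1]) - (X3[1] - X1[1])
    e = -ky * (X1[0] - X2[0]) + (X3[0] - X1[0])
    f = -ky * (X1[0] * X2[1] - X2[0] * X1[1]) + (X3[0] * X1[1] - X1[0] * X3[1])
    den = a * e - b * d
    return (c * e - b * f) / den, (a * f - c * d) / den

# Usage with the matched points of Table 6 (image pixels, world cm).
# Ground truth is (0, 360) cm; the paper reports (-2.14, 367.0) cm for the
# y_m = infinity case (Table 8). Exact agreement is not expected here, since
# the table values are rounded and the image x-origin is not specified.
img = [(76, 101), (247, 183), (86, 94), (220, 156)]
world = [(-103.5, 1151), (103.5, 598), (-103.5, 1318), (103.5, 688)]
print(localize(img, world))
```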
4. Experiments

Experiments have been carried out in an indoor corridor environment using the mobile robot KASIRI-II. The mechanism of KASIRI-II consists of wheels for conventional running and infinite-track wheels for running on uneven floors such as stairs. We use a 585 Pentium as the master controller. The robot is also equipped with a motion control board, a vision processing board (UIC), IR sensors, servo motors and drivers, and sonars. Fig. 5 shows a photograph of KASIRI-II (Lee, 1995).

4.1. Image processing

Fig. 6 shows the result of the image processing. The Sobel operator is first applied to detect edges and is followed by a thinning process (Bochem and Prautzsch, 1994) to obtain edges of single-pixel width. Extracted edges for an image are shown in Fig. 6(b) and (c). Fig. 6(d) shows the extracted point features. Finally, door frames on each wall are determined from the intensity distribution between two adjacent point features, as shown in Fig. 6(e). Fig. 7 shows the results of door frame extraction using the intensity distribution. In Fig. 7(c) and (d), for example, point features corresponding to a waste-basket, denoted by A, are rejected due to their large variance.
Fig. 5. Photograph of KASIRI-II, a mobile robot used for the experiments.
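As a rough illustration of the edge-and-line stage described in Section 4.1 (Sobel edges followed by Hough line extraction), the OpenCV sketch below is a plausible reconstruction, not the authors' implementation; the thresholds are arbitrary and the thinning step is only approximated by thresholding the gradient magnitude.

```python
import cv2
import numpy as np

def extract_lines(gray, edge_thresh=60, hough_thresh=120):
    """Sobel edge detection followed by a Hough transform, returning
    (rho, theta) lines roughly split into near-vertical and oblique sets."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    edges = (mag > edge_thresh).astype(np.uint8) * 255   # crude stand-in for thinning
    lines = cv2.HoughLines(edges, 1, np.pi / 180, hough_thresh)
    vertical, oblique = [], []
    if lines is not None:
        for rho, theta in lines[:, 0]:
            # theta near 0 or pi means a near-vertical image line (door frames);
            # the two hallway side-lines appear among the oblique lines.
            (vertical if min(theta, np.pi - theta) < np.deg2rad(10) else oblique).append((rho, theta))
    return vertical, oblique

# Usage (hypothetical file name):
# gray = cv2.imread('hallway.png', cv2.IMREAD_GRAYSCALE)
# vertical, oblique = extract_lines(gray)
```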
4.2. Matching

Among the extracted point features, the points of the first door frame on each side are selected as the basis points. Table 3 lists the extracted point features for the image in Fig. 6, where bl and br denote the left and the right basis points and lf, rf denote left and right point features. Table 4 lists the invariant coordinates of each point feature with respect to the four basis points and the corresponding index value, computed from the image processing result for the image in Fig. 6. Table 5 lists the generated hypotheses and the verification result for each hypothesis; the matching ratio MR is defined in Eq. (14). We select the second hypothesis as the true hypothesis because its matching ratio is larger than that of the first. Fig. 8 shows the projected and extracted point features for the true hypothesis. The circles represent the feature points extracted from the input image and the crosses represent the projected model points obtained from the result of matching.

4.3. Self-localization

We test the self-localization algorithm with the input image shown in Fig. 6; the result of matching is given in Table 6. For the experiment, we used a camera with $f = 16$ mm and $s_y = 0.0013$ mm and obtained $x_m = (x_m, y_m) = (0, 1070950.469)$ by Eq. (16). Using the extracted four image points and $(x_m, y_m)$ with the corresponding object points, we estimate the object coordinates of $(x_m, y_m)$, which correspond to the position of the robot in the global coordinate system, by relative positioning. Table 7 presents the estimation result of the proposed method, and Table 8 presents the result of self-localization when $y_m = \infty$. Thus, the position of the robot can be computed without calibration from a single image.
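As a quick arithmetic check of Eq. (16) with these values (our calculation, not from the paper):

$$
\frac{f}{s_y} = \frac{16}{0.0013} \approx 1.2308 \times 10^4 \ \text{pixels},
\qquad
y_m = \frac{(f/s_y)^2}{y_v} \approx \frac{1.5148 \times 10^8}{y_v},
$$

so the reported $y_m \approx 1.071 \times 10^6$ corresponds to a vanishing point at $y_v \approx 141$ pixels, and $y_m$ is indeed large enough that the $y_m = \infty$ approximation of Eq. (17) introduces only a small error (compare Tables 7 and 8).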
Fig. 6. Results of the image processing procedure.
Fig. 7. Results of the image processing.
Table 3
Extracted features

No.   Feature   Image coord.
1     bl        (76, 101)
2     br        (247, 183)
3     bl        (86, 94)
4     br        (220, 156)
5     lf        (100, 84)
6     lf        (103, 82)
7     lf        (105, 80)
8     rf        (195, 131)
9     rf        (186, 122)
10    rf        (168, 104)
11    rf        (164, 100)
12    rf        (157, 93)
13    rf        (153, 89)
14    rf        (150, 86)
Table 4
Computed invariants

No.   Invariant    Index value
5     (0, 0.72)    lp72
6     (0, 0.78)    lp78
7     (0, 0.82)    lp82
8     (0.38, 1)    rp38
9     (0.28, 1)    rp28
10    (0.15, 1)    rp15
11    (0.13, 1)    rp13
12    (0.09, 1)    rp9
13    (0.08, 1)    rp8
14    (0.07, 1)    rp7
Table 5
Generated hypotheses and matching ratio of each hypothesis

No.   Hypothesis   MR
1     (1, 0, 2)    0.73
2     (1, 2, 2)    0.94
Fig. 8. Results of verification (circles: point features; +: projected model points).
Table 6
Result of matching

No.   Image coordinates (x_Im)   World coordinates (X_W)
1     bl (76, 101)               (-103.5, 1151) cm
2     br (247, 183)              (103.5, 598) cm
3     bl (86, 94)                (-103.5, 1318) cm
4     br (220, 156)              (103.5, 688) cm
Robot position: X_m = F_W(x_m^Im)

Table 7
Result of self-localization with the exact y_m

(x_m, y_m)          Computed value       True value
(0, 1070950.469)    (-2.03, 367.50) cm   (0, 360) cm

Table 8
Result of self-localization when y_m = ∞

(x_m, y_m)          Computed value       True value
(0, ∞)              (-2.14, 367.0) cm    (0, 360) cm
4.4. Navigation experiments

Fig. 9 shows a navigation scenario used to test the accuracy of the self-localization and obstacle detection algorithms. The robot is commanded to navigate along the center of the hallways, starting from one end of hallway I; the goal position is the other end of hallway II. Obstacles are intentionally placed in hallways I and II. Fig. 10 shows a trajectory of the robot positions computed by the proposed method.
Fig. 9. The test region of a mobile robot.
Fig. 10. The computed positions of the robot.
Fig. 11. Results of the self-localization and obstacle detection.
Fig. 12. The error of self-localization.
Fig. 11 shows the result of the self-localization of the mobile robot. Fig. 12 shows the error between the true positions, measured with a tape measure, and the computed positions of the robot. The success rate is about 80%, the rate of mis-matches or false alarms is 0%, and the remaining 20% are failures, caused mainly by a low matching ratio. The average errors of self-localization are (9.7, 7.9) cm and (6.4, 6.6) cm in hallways I and II, respectively.

5. Conclusion

In this paper, we proposed a new vision-based approach for indoor mobile-robot navigation. For self-localization, we presented a method using the five-point projective invariant to search for correspondences. Since the method does not require the camera parameters, such as the focal length and scale factor, it is independent of the camera configuration. Moreover, the method does not need any information about the initial robot position.
References

Bochem, W., Prautzsch, N., 1994. Geometric Concepts for Geometric Design. A.K. Peters, Wellesley, MA.
Dulimarta, H.S., Jain, A.K., 1997. Mobile robot localization in indoor environment. Pattern Recognition 30 (1), 99–111.
Huttenlocher, D.P., Ullman, S., 1987. Object recognition using alignment. In: Proceedings of the IEEE ICCV, pp. 102–111.
Kosaka, A., Kak, A.C., 1992. Fast vision guided mobile robot navigation using model based reasoning and prediction of uncertainties. CVGIP: Image Understanding 56 (3), 271–329.
Lee, W.H., 1995. Development of a mobile robot with a hybrid locomotion mechanism. M.S. Thesis, KAIST.
Matsumoto, Y., Inaba, M., Inoue, H., 1996. Visual navigation using view-sequenced route representation. In: Proceedings of ICRA '96, Minneapolis, MN, April, pp. 83–88.
Mundy, J.L., Zisserman, A., 1992. Geometric Invariance in Computer Vision. MIT Press, Cambridge, MA.
Sugihara, K., 1988. Some location problems for robot navigation using a single camera. CVGIP: Image Understanding 42, 112–129.