Pattern Recognition 39 (2006) 889 – 896 www.elsevier.com/locate/patcog
Projective reconstruction from line-correspondences in multiple uncalibrated images

A.W.K. Tang∗, T.P. Ng, Y.S. Hung, C.H. Leung

Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong

Received 30 June 2005
Abstract

A new approach is proposed for reconstructing 3D lines and cameras from 2D corresponding lines across multiple uncalibrated views. There is no requirement that the 2D corresponding lines on different images represent the same segment of a 3D line, which may not appear on all images. A 3D line is reconstructed by minimizing a geometric cost function that measures the distance of the reprojected end points of the 3D segment from the measured 2D lines on different images. An algorithmic procedure is provided with guaranteed convergence to a solution where the geometric cost function achieves a (local) minimum.

© 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.

Keywords: Multiple views; Projective reconstruction; Line reconstruction; Line correspondence
1. Introduction

The recovery of a 3D scene from uncalibrated images has long been a major area of computer vision. Existing reconstruction methods mainly make use of feature point correspondences [1–3] across multiple views. Recently, 3D reconstruction from line correspondences [4–12] has attracted some attention, since reconstruction from lines has many merits over reconstruction from points. In the matching process, for example, reconstruction from lines allows correspondence of different sections of a line in different views. Dornaika and Garcia [13] proposed two 3D reconstruction methods from point and line correspondences for weak-perspective and paraperspective camera models. Ansar and Daniilidis [14] proposed a linear method for 3D reconstruction from point and line correspondences under the assumption that the camera intrinsic parameters are known. Both of these methods (i.e. Refs. [13] and [14]), however, make a priori assumptions about the cameras.

∗ Corresponding author. Tel.: +852 2859 2728; fax: +852 2559 8738.
E-mail addresses:
[email protected] (A.W.K. Tang),
[email protected] (T.P. Ng),
[email protected] (Y.S. Hung),
[email protected] (C.H. Leung).
For uncalibrated cameras, Hartley [4] introduced the geometry between line correspondences across three images, which was later developed into the trifocal tensor [6] governing point and line correspondences in three images, analogous to the fundamental matrix for two images. Explicit linear algorithms for computing the trifocal tensor from line correspondences are available, and camera projection matrices can be readily obtained from the trifocal tensor. Methods based on these multilinear constraints [4–6], however, are noise sensitive and applicable only to three or four views, and they minimize an algebraic rather than a geometric error. A different approach to line reconstruction is to parameterize a 3D line as a 6-vector in Plücker coordinates. Faugeras and Mourrain [15] propose to transform a general projection matrix for points into a line projection matrix in Plücker coordinates, so that the linear relationship projecting a 3D line onto an image plane as a 2D line can be represented by the line projection matrix, similar to the projection equation for points. There are several methods [1,8–10,15] based on the linear projection equation for lines in Plücker coordinates. Maolin et al. [8] propose an iterative projective reconstruction method for estimating scale factors, and Martinec and Pajdla [9] propose a non-iterative
0031-3203/$30.00 © 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved. doi:10.1016/j.patcog.2005.10.019
factorization method. The main problem of 3D line reconstruction in Plücker coordinates is that both the line and the line projection matrix must satisfy conditions which cannot be enforced directly during the reconstruction process. Instead, they are enforced at the end of the reconstruction process, introducing unpredictable errors owing to an algebraic condition with no physical meaning. Moreover, the reconstructed line projection matrices need to be transformed back to general projection matrices for upgrading the projective space to the Euclidean space. Bartoli and Sturm [16] show empirically that the existing linear methods perform poorly in most cases, and they propose a bundle adjustment method based on an orthonormal representation of 3D lines in Plücker coordinates for minimizing a geometric quantity, namely the orthogonal distances between the reprojected lines and the two 2D end points of the measured line segments. Seo and Hong [10] propose a sequential line reconstruction method using an iterative extended Kalman filter (IEKF), adding short-baseline frames one by one to update the scene reconstructed from three wide-baseline frames. As this method relies on initial estimates of the projection matrices computed from the first three views, which are then updated iteratively, it is biased towards the images input earlier. In Ref. [12], Triggs uses a factorization-based method to perform 3D reconstruction from both point and line correspondences. However, the method needs a set of point correspondences to compute fundamental matrices, which are then used to convert the line correspondences into additional point correspondences. Furthermore, an algebraic rather than a geometric error is minimized, and no missing data are allowed. In this paper, we will propose a line reconstruction method adapted from Ref.
[1], which performs projective reconstruction from point correspondences in multiple views by minimizing 2D reprojection errors using a quasi-linear bundle adjustment approach. The proposed method reconstructs a line in 3D space as a segment with two suitably chosen end points whose projections on the images are as close as possible, in a geometric sense, to the measured lines that correspond with each other. Despite the apparent simplicity of the idea, there are many intricacies in the design of a complete solution using this approach, and we are not aware of any similar method for line reconstruction having the following characteristics:

• the cost function represents a geometric measure of the goodness of reconstruction;
• the reconstruction is truly based on line correspondences with no need for point correspondences;
• the method is able to handle missing lines;
• the optimization algorithm incorporates bundle adjustment and is guaranteed to converge to a local minimum of the geometric cost function.

The paper is organized as follows. In Section 2, we will give a brief summary of the point-based quasi-linear bundle
adjustment method of Ref. [1] that will be adopted for the proposed line reconstruction method. In Section 3, the line reconstruction problem is formulated as a minimization of a geometric cost and an algorithmic solution is developed. Experimental results are given in Section 4. Section 5 contains some concluding remarks.

Notation: The Hadamard product of two matrices $A = [a_{ij}]$ and $B = [b_{ij}]$ of the same size is denoted $A * B = [a_{ij} b_{ij}]$.
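In code, the Hadamard product is plain elementwise multiplication; a quick NumPy illustration:

```python
import numpy as np

# Hadamard product A * B = [a_ij b_ij] of two same-sized matrices.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

H = A * B  # elementwise product, not matrix multiplication
assert np.array_equal(H, np.array([[5.0, 12.0], [21.0, 32.0]]))
```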
2. Point-based method for projective reconstruction

Suppose a set of n image point correspondences $x_{ij}$ is established across m views. The jth 3D point $X_j \in \mathbb{R}^{4\times 1}$, projected on the ith view by a projection matrix $P_i \in \mathbb{R}^{3\times 4}$ as a 2D point $x_{ij} = [u_{ij}\ v_{ij}\ 1]^T$, satisfies

$$\lambda_{ij} x_{ij} = P_i X_j, \qquad (1)$$

where $\lambda_{ij}$ is the depth of $X_j$ with respect to the ith camera. The projection of all the 3D points onto all the cameras can be represented in matrix form as

$$M = \begin{bmatrix} \lambda_{11} x_{11} & \cdots & \lambda_{1n} x_{1n} \\ \vdots & \ddots & \vdots \\ \lambda_{m1} x_{m1} & \cdots & \lambda_{mn} x_{mn} \end{bmatrix} = P X \in \mathbb{R}^{3m\times n}, \qquad (2)$$

where M is the scaled measurement matrix, $P = [P_1^T\ P_2^T \cdots P_m^T]^T \in \mathbb{R}^{3m\times 4}$ is the joint projection matrix and $X = [X_1\ X_2 \cdots X_n] \in \mathbb{R}^{4\times n}$ is the shape matrix. If the projective depths $\lambda_{ij}$ are known, the scaled measurement matrix can be factorized into P and X by means of the singular value decomposition. Various methods have been proposed to determine the projective depths [1,2]. In particular, the depths are estimated iteratively in Ref. [1] as a minimization problem with the cost function
$$\min_{P,X,\gamma} \sum_{i=1,j=1}^{i=m,j=n} \bigl\|\bar\omega_{ij} * (x_{ij} - \gamma_{ij} P_i X_j)\bigr\|^2, \qquad (3)$$

where $\gamma_{ij}\,(=1/\lambda_{ij})$ is the inverse depth, $\bar\omega_{ij} = [1\ 1\ \mu\omega_{ij}]^T$, $\mu$ is a control factor used to force the cost (3) to approach the 2D reprojection error, and $\omega_{ij}$ are weighting factors for balancing the magnitude of the pixel coordinates. This method has the advantages that it minimizes the 2D reprojection error and is capable of dealing with missing data by considering only the visible 2D points in (3). An iterative algorithm can be used to estimate the unknown parameters $\gamma_{ij}$, $P_i$ and $X_j$ alternately as linear least-squares problems, so the solution process is a quasi-linear bundle adjustment. In the next section, we will extend this formulation to the case of line reconstruction.
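The quasi-linear iteration can be sketched in a few lines of NumPy. The toy example below is a simplified version of the scheme (all weights $\bar\omega_{ij}$ set to 1, fixed control factor, hypothetical synthetic image points): P and X are initialized from a rank-4 SVD with unit inverse depths, then P, X and $\gamma$ are updated in turn by linear least squares. Since each update solves its subproblem exactly, the cost never increases:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 8                                  # views, points

# Hypothetical measured homogeneous image points x_ij, third component 1.
x = rng.standard_normal((m, 3, n))
x[:, 2, :] = 1.0

def cost(P, X, g):
    """Simplified cost (3) with all weights set to 1."""
    return sum(np.sum((x[i] - g[i] * (P[i] @ X)) ** 2) for i in range(m))

# Initialization: unit inverse depths, P and X from a rank-4 SVD of [x_ij].
g = np.ones((m, n))
U, s, Vt = np.linalg.svd(np.concatenate(x, axis=0), full_matrices=False)
P = (U[:, :4] * s[:4]).reshape(m, 3, 4)      # joint projection matrix
X = Vt[:4, :]                                # shape matrix

costs = [cost(P, X, g)]
for _ in range(20):
    # Update P: each row of P_i is a linear least-squares problem over j.
    for i in range(m):
        A = g[i][:, None] * X.T              # (n, 4)
        for r in range(3):
            P[i, r] = np.linalg.lstsq(A, x[i, r], rcond=None)[0]
    # Update X: each 3D point X_j is a linear least-squares problem over i.
    for j in range(n):
        A = np.concatenate([g[i, j] * P[i] for i in range(m)])  # (3m, 4)
        b = np.concatenate([x[i, :, j] for i in range(m)])
        X[:, j] = np.linalg.lstsq(A, b, rcond=None)[0]
    # Update gamma: each inverse depth is a scalar least-squares problem.
    for i in range(m):
        a = P[i] @ X                         # (3, n)
        denom = np.maximum(np.sum(a * a, axis=0), 1e-12)
        g[i] = np.sum(a * x[i], axis=0) / denom
    costs.append(cost(P, X, g))

# Each sub-step is an exact minimizer, so the cost is non-increasing.
assert all(c1 <= c0 + 1e-9 for c0, c1 in zip(costs, costs[1:]))
```

Because each of the three updates is a closed-form least-squares solution of the same objective, monotone descent (and hence convergence of the cost) comes for free; this is the property the paper later exploits in its line-based algorithm.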
3. Line reconstruction
[Fig. 2. The projection of a line: the estimated end points of a 3D line are projected through the estimated camera centre onto the image containing the measured line with its two end points.]

3.1. Representing lines

In Fig. 1, a 2D line is represented as a 3-vector l. The minimum (i.e. orthogonal) distance between a 2D point x and the 2D line l is given by

$$d_\perp(x, l) = \frac{l^T x}{\sqrt{(l)_1^2 + (l)_2^2}}, \qquad (4)$$

where $(l)_k$ is the kth element of the vector l. If x lies on l, $d_\perp(x, l) = 0$. A line can also be represented by any two distinct points (e.g. $x_1$ and $x_2$ for 2D cases or $X_1$ and $X_2$ for 3D cases) which lie on it. The points lying on the line between these two points can be expressed as $\alpha x_1 + (1-\alpha) x_2$ for a 2D line or $\alpha X_1 + (1-\alpha) X_2$ for a 3D line, $0 \le \alpha \le 1$. The proposed method relies on these two kinds of line representations.
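Both representations are easy to work with in code. A minimal NumPy sketch (the line and points below are hypothetical values) of the orthogonal distance (4) and the two-point parameterization:

```python
import numpy as np

def d_perp(x, l):
    """Orthogonal distance (4) between a homogeneous 2D point x and line l.
    The absolute value is taken; in the paper the distance is always squared."""
    return abs(l @ x) / np.hypot(l[0], l[1])

# The line x + y - 2 = 0, i.e. l = [1, 1, -2].
l = np.array([1.0, 1.0, -2.0])
x_on = np.array([1.0, 1.0, 1.0])        # lies on the line
x_off = np.array([0.0, 0.0, 1.0])       # the image origin

assert d_perp(x_on, l) < 1e-12
assert abs(d_perp(x_off, l) - np.sqrt(2.0)) < 1e-12   # |-2| / sqrt(2)

# Two-point representation: points alpha*x1 + (1 - alpha)*x2, 0 <= alpha <= 1.
x1 = np.array([0.0, 2.0, 1.0])
x2 = np.array([2.0, 0.0, 1.0])
mid = 0.5 * x1 + 0.5 * x2               # [1, 1, 1], again on the line
assert d_perp(mid, l) < 1e-12
```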
3.2. The error measure for line reconstruction

A critical issue for line reconstruction methods is how to measure the difference between a measured line and the reprojection of the reconstructed 3D line. Let the jth measured 2D line segment on the ith view be denoted $l_{ij}$ and its two end points be denoted $x_{ij}$ and $x_{i,n+j}$, so that $l_{ij}^T x_{ij} = l_{ij}^T x_{i,n+j} = 0$, where n is the total number of 3D lines. Let the measured 2D line segments be indexed by ordered pairs of the set

$A = \{(i, j) \mid l_{ij}$ with end points $x_{ij}$ and $x_{i,n+j}$ is observed as the jth line on the ith view$\}$.

The idea of the proposed method is to reconstruct a 3D line segment with end points $X_j$ and $X_{n+j}$ whose reprojected end points on the ith view will be close to $l_{ij}$. Denote the reprojected end points of $X_j$ and $X_{n+j}$ on the ith view by $\hat{x}_{ij}$ and $\hat{x}_{i,n+j}$, respectively. Making use of (4), the measure of error in the line reconstruction can be defined as

$$\varepsilon^2 = \frac{1}{2n(A)} \sum_{(i,j)\in A} \frac{(l_{ij}^T \hat{x}_{i,j})^2 + (l_{ij}^T \hat{x}_{i,n+j})^2}{(l_{ij})_1^2 + (l_{ij})_2^2}, \qquad (5)$$

where n(A) is the number of elements of the set A. $\varepsilon$ can be regarded as the root mean square error (RMSE) of the reprojected lines with respect to the measured line segments. It is also possible to simplify (5) by normalizing $l_{ij}$ so that $(l_{ij})_1^2 + (l_{ij})_2^2 = 1$, $\forall i, j$. In the sequel, we will assume that all lines are already normalized.

3.3. Analyses on using lines for projective reconstruction
[Fig. 1. Line on a 2D plane.]

In Fig. 2, the reprojection of the estimated end points of a 3D line segment on an image is illustrated. Formulating a minimization problem directly from (5) to estimate the 3D line segments without extra constraint(s) will not provide a reasonable solution, because it is always easier to fit a single point than two end points of a line segment to the projection model; during optimization there is therefore a tendency for the two end points of a line segment to converge to the single point that is closest to a measured line. In order to prevent the two end points from converging to one point, we require the two reprojected end points of the 3D line segment to approach the end points of one particular measured line segment on one of the image planes; this line will be referred to as the reference line segment. The reference line segment thus acts as an anchor fixing the section of the 3D line to be reconstructed. For each 3D line, only one line segment is chosen as the reference line segment. For all other lines $l_{ij}$, the distance of the reprojected end points of the 3D line segment $X_j X_{n+j}$ from $l_{ij}$ is taken to be the orthogonal distance from the (extended) line $l_{ij}$, without any regard to the actual positions of the end points of the measured line. Fig. 3 shows the idea of reconstructing the 3D line segment by requiring its reprojected end points to approach the measured end points of a reference line segment (leftmost image), whereas the reprojected end points on the other two views are only required to be as close as possible to the measured lines, with no other restrictions on the locations of the reprojected points.

3.4. Choice of reference line segments
The strategy of reference line segment selection directly affects the accuracy of the final reconstruction. The
recommended strategy is to choose the line segment having the maximum length in pixel coordinates among the group of corresponding lines across multiple views. If the same amount of disturbance is applied to the end points of line segments having different lengths, the disturbed line will be closer to the original line when the ratio of the disturbance to the length of the line segment is smaller. Let the reference line segments be indexed by ordered pairs of the set

$R = \{(r, j) \mid l_{rj}$ is chosen as a reference line segment$\}$.

[Fig. 3. The reconstruction process.]

3.5. Cost function for line reconstruction

To formulate the line reconstruction method stated above, the cost function of the proposed method is defined as

$$F_L(P, X, \gamma) = \frac{1}{2n(A)} \Biggl\{ \sum_{(r,j)\in R} \sum_{k=0}^{1} \bigl\|\bar\omega_{r,kn+j} * (x_{r,kn+j} - \gamma_{r,kn+j} P_r X_{kn+j})\bigr\|^2 + \sum_{(i,j)\in A\setminus R} \sum_{k=0}^{1} \Bigl[ (\gamma_{i,kn+j} l_{ij}^T P_i X_{kn+j})^2 + \mu^2 \omega_{i,kn+j}^2 (1 - \gamma_{i,kn+j} P_i^3 X_{kn+j})^2 \Bigr] \Biggr\}, \qquad (6)$$

where $P_i^s$ is the sth row of $P_i$ and $\bar\omega_{i,kn+j} = [1\ 1\ \mu\omega_{i,kn+j}]^T$. The first summation in Eq. (6) is the mean square of the reprojection errors for all the end points of the reference line segments; this quantity becomes the 2D reprojection error as $\mu$ tends to infinity. The second summation becomes the mean square of all the orthogonal distances of the reprojected points to the measured lines as $\mu$ tends to infinity. The cost (6) is also an upper bound on the error of Eq. (5). In Fig. 4, the 2D reprojection error of the reprojected points $\hat{x}_{i,j}$ and $\hat{x}_{i,n+j}$ from the end points of the reference line $l_{ij}$ is related to the orthogonal distance to the line: for some $\theta_j, \theta_{n+j} \in \mathbb{R}$,

sum of squared orthogonal distances
$= \|\hat{x}_{r,j} - x_{r,j}\|^2 \sin^2\theta_j + \|\hat{x}_{r,n+j} - x_{r,n+j}\|^2 \sin^2\theta_{n+j}$
$\le \|\hat{x}_{r,j} - x_{r,j}\|^2 + \|\hat{x}_{r,n+j} - x_{r,n+j}\|^2$
$=$ sum of squared 2D reprojection errors.

[Fig. 4. The relationship between 2D reprojection error and orthogonal distance.]

The cost function (6) can therefore be treated as an approximation to the error measure (5) with a constraint on the end points of the reconstructed line.

3.6. Choice of weighting factors
The purpose of the weighting factor $\omega_{ij}$ is to balance the magnitude of the pixel coordinates with the last element (i.e. 1) of the homogeneous coordinates. During the iterations of the proposed method, $\omega_{ij}$ is fixed. There are different considerations for the two terms in Eq. (6). For the first term, the minimization of the 2D reprojection error, we choose

$$\omega_{rj} = \max(|u_{rj}|, |v_{rj}|), \qquad \forall (r, j) \in R. \qquad (7)$$

This weighting factor helps to balance the distribution among the three components of the 2D homogeneous coordinates. For the second term, the minimization of the orthogonal distances, $\omega_{ij}$ is chosen as

$$\omega_{ij} = \begin{cases} \dfrac{(l_{ij})_3}{(l_{rj})_3}\,\max(|u_{rj}|, |v_{rj}|, |u_{r,n+j}|, |v_{r,n+j}|), & (l_{rj})_3 \ne 0, \\[4pt] |(l_{ij})_3|, & (l_{rj})_3 = 0, \end{cases} \qquad \forall (i, j) \in A\setminus R,\ (r, j) \in R \text{ and } i \ne r, \qquad (8)$$

where the lines $l_{ij}$ and $l_{rj}$ are normalized and the 2D line $l_{rj}$ on the rth view is chosen as the reference line segment for the jth 3D line. Note that $|(l_{ij})_3|$ is the orthogonal distance of the 2D line from the image origin. As in (7), the weighting factor (8) is meant to scale the third component of the homogeneous coordinates to magnitudes comparable with the pixel coordinates of the reprojected points on the ith view. We note, however, that the final results are not unduly sensitive to the choice of $\omega_{ij}$, and as $\mu$ is increased
progressively in the algorithm (see step 7 of the algorithm below), eventually $\mu\omega_{ij} \to \infty$ irrespective of the choice of $\omega_{ij}$.

3.7. A quasi-linear bundle adjustment algorithm for line reconstruction
1. Put $k = 0$, $\mu = 1$, $\varepsilon_b = 0$; set $\gamma_{ij}^0 = 1$, $\forall i, j$, and compute $P^0$ and $X^0$ from a rank-4 SVD approximation of $[(1/\gamma_{ij}^0) x_{ij}]$. Select reference line segments and compute $\omega_{ij}$.
2. Put $k = k + 1$.
3. Fix $X^{k-1}$ and $\gamma^{k-1}$ and determine $P^k$ by solving $\min_{P^k} F_L(P^k, X^{k-1}, \gamma^{k-1})$.
4. Fix $P^k$ and $\gamma^{k-1}$ and determine $X^k$ by solving $\min_{X^k} F_L(P^k, X^k, \gamma^{k-1})$.
5. Fix $P^k$ and $X^k$ and determine $\gamma^k$ by solving $\varepsilon_k = \min_{\gamma^k} F_L(P^k, X^k, \gamma^k)$.
6. Repeat steps 2–5 until $\varepsilon_k$ converges.
7. If $|\varepsilon_k - \varepsilon_b|$ is bigger than a threshold value, then put $\varepsilon_b = \varepsilon_k$, $\mu = 1.1\mu$, $k = 0$ and return to step 2; else stop.

At the end of the algorithm, the projection matrices are taken as $P_i^k$ and the 3D lines are defined by the points $X_j^k$ and $X_{n+j}^k$. When some 3D lines are not visible in all the views, missing entries in the measurement matrix should be filled in at the initialization stage. The end points of a missing line can be filled in with the centroid of the end points of the measured line segments on the view. This method for initializing missing data is used in all the examples in this paper. We found that this initialization works well in all cases when the percentage of missing line segments is less than 60%.

3.8. Convergence

The proposed method minimizes the cost function $F_L(P, X, \gamma)$ by estimating P, X and $\gamma$ iteratively as three weighted least-squares problems. It can be shown that, as $\mu \to \infty$, $F_L(P, X, \gamma)$ is guaranteed to converge to a (local) minimum of the cost function with the geometric meaning given in Section 3.5.

4. Experimental results

The proposed method is evaluated with both synthetic and real data. It is implemented in Matlab 6.5 running on a Pentium-4 2.8 GHz PC.

[Fig. 5. A view of the synthetic scene.]
[Fig. 6. Performance of the algorithm with data of different noise levels: RMS 2D reprojection error (pixels) against noise level (pixels), from 0 to 4 pixels.]
4.1. Synthetic data

Fig. 5 shows a view of the synthetic scene consisting of a box and five cameras. The box, of size 1 m × 1 m × 1 m, is made up of 29 3D lines with patterns of triangles and rectangles on its sides. Five cameras are randomly generated such that all of them point at the centroid of the box, and the image of the box almost occupies the whole image. The intrinsic parameters are fixed over all views and the image size is 500 × 500 pixels. The two vertices of each 2D line segment are chosen as its two measured points, and they are perturbed by Gaussian noise independently. Line segments in the first view are chosen as the reference line segments. The reconstruction results for noisy data with Gaussian noise levels ranging from 0 to 4 pixels in increments of 0.5 pixels are shown in Fig. 6. The noise is added to the x- and y-coordinates of the corner points independently. For each noise level, the estimated reprojection error is taken as the mean of the root mean square values calculated over 30 trials with different randomly generated noise. Note from Fig. 6 that the RMS error is proportional to the added noise.

We have also implemented the line factorization method [9] in Matlab. Although the method works fine for noise-free data, the results are very sensitive to added noise and it is not always possible to obtain a reasonable solution even at low noise levels. Hence we cannot perform a sensible comparison with our method.

4.2. Real data
We consider two real image sequences, the clock tower sequence and the toy house sequence. Feature lines of the objects are manually extracted and matched.
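The RMS orthogonal distances reported below follow the error measure (5); a minimal NumPy sketch (the line and end points are hypothetical values):

```python
import numpy as np

def rms_line_error(lines, endpoints):
    """Error measure (5): RMS orthogonal distance of the two reprojected
    end points of each segment from the corresponding measured 2D line."""
    sq = 0.0
    for l, (xa, xb) in zip(lines, endpoints):
        norm2 = l[0] ** 2 + l[1] ** 2          # equals 1 for normalized lines
        sq += ((l @ xa) ** 2 + (l @ xb) ** 2) / norm2
    return np.sqrt(sq / (2 * len(lines)))

# One measured line x = 1 with reprojected end points 0.3 and 0.5 away.
l = np.array([1.0, 0.0, -1.0])
ends = [(np.array([1.3, 0.0, 1.0]), np.array([1.5, 2.0, 1.0]))]
err = rms_line_error([l], ends)
assert abs(err - np.sqrt((0.3 ** 2 + 0.5 ** 2) / 2)) < 1e-12
```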
Fig. 7. Image sequence of the clock tower.
Fig. 9. The reconstructed tower.
Fig. 8. The reconstructed lines of the reconstructed tower.
4.2.1. Clock tower
Four images were taken around the clock tower shown in Fig. 7 with a Canon D60 digital camera and a 70 mm lens. The image size is 2272 × 1704 pixels and the intrinsic parameters vary between views owing to auto-focusing. Line segments corresponding to 22 3D line segments of the tower are visible in all views, but different portions of the same segment are seen in different views. The measured line segments are shown as solid blue lines superimposed on the four images in Fig. 7. Note that some parts of the feature lines are occluded by the palm trees in front of the tower in the second image from the left. The proposed method took 58.28 s and 679 iterations to converge. The RMS 2D reprojection error of the reference line segments is 0.423 pixels and the RMS orthogonal distance is 0.509 pixels. Using the estimated projection matrices, the reconstructed 3D line segments can be reprojected onto the images; these are plotted as dotted green line segments on two of the images. Fig. 8 shows the reprojected line segments (dotted green) superimposed on the solid blue measured lines for comparison. Fig. 9 shows different views of the reconstructed tower in the Euclidean frame: Fig. 9(a) shows the estimated positions of the cameras taking the images, Fig. 9(b) shows a close-up view of the tower, and Fig. 9(c) is the top view of the tower, which shows that the angle between two sides of the tower is almost 90°.
4.2.2. Toy house
Nine images were taken around a toy house by a Canon A80 digital camera, as shown in Fig. 10. The image size is
Fig. 10. Image sequence of toy house.
Fig. 11. Distribution of measured line segments for image sequence of toy house.
2048 × 1536 pixels and the intrinsic parameters vary between views owing to auto-focusing. As not all parts of the house are visible in all images, this image sequence demonstrates the performance of the proposed method in dealing with line segments missing from some of the images. In these images, each line segment is visible in at least three images. Line segments corresponding to 57 3D line segments are extracted manually, and there are altogether 271 missing lines on the images (about 57.2% of all 2D lines). The manually extracted lines are shown as blue lines in Fig. 10. The distribution of measured line segments is shown in Fig. 11, where a blue cross in the (i, j) position indicates
Fig. 12. The reconstructed house in Euclidean frame.
that the jth 3D line can be observed by the ith camera as a 2D line $l_{ij}$. The proposed method took 466 s and 1187 iterations to converge. The RMS 2D reprojection error of the reference line segments is 1.347 pixels and the RMS orthogonal distance is 2.620 pixels. The results of the reconstruction are shown in Fig. 12, where different views of the reconstructed house after being upgraded to the Euclidean frame are presented.

5. Conclusion

In this paper, a projective reconstruction method for line correspondences has been developed and evaluated with synthetic data and real image sequences. We reconstruct a 3D line segment by minimizing the sum of orthogonal distances of the reprojected end points of the 3D segment from the measured 2D lines in all the images, except for a reference line segment, for which the distance is measured from the actual end points of the measured segment. The problem of line reconstruction is thus formulated purely on the basis of line correspondences in multiple views. There is no requirement that the 2D corresponding lines visible on different images represent the same segment of a 3D line. The method does not require point correspondences, and its ability to handle missing lines is demonstrated by real image sequences. Experimental results based on synthetic data show that the proposed method is robust to added noise, and the RMS orthogonal distance is almost linearly proportional to the added noise.
6. Summary

In this paper, the problem of using line correspondences for projective reconstruction from multiple uncalibrated images is addressed. We propose a new approach to reconstruct 3D lines from 2D corresponding lines across multiple views. The reconstruction problem is formulated purely on the basis of line correspondences in multiple views. There is no requirement that the 2D corresponding lines visible on different images represent the same segment of a 3D line, and the lines are not required to be visible in all the views.

Most 3D reconstruction methods make use of feature point correspondences across multiple views. However, reconstruction from line correspondences has many merits over point-based methods. Existing methods for projective reconstruction from line correspondences are mainly based on multilinear geometric constraints or the linear projection equation for lines in Plücker coordinates. These methods have the drawback of being excessively noise sensitive, and they minimize an algebraic rather than a geometric error. A further problem of 3D line reconstruction in Plücker coordinates is that both the line and the line projection matrix should satisfy conditions which cannot be enforced directly during the reconstruction process. Instead, they are enforced at the end of the reconstruction process, introducing unpredictable errors owing to the imposition of an algebraic condition with no physical meaning.

In this paper, we propose a new approach for reconstructing 3D lines from 2D line correspondences. A 3D line segment is reconstructed by minimizing a geometric cost function that measures the distance of the reprojected end points of the 3D segment from the measured 2D lines on different images. The distance is defined to be the orthogonal distance of the reprojected end points of the 3D segment from the measured 2D lines in all the images, except for a reference line segment, for which the distance is measured from the actual end points of the measured segment. As the choice of reference line segments directly affects the accuracy of the final reconstruction, we propose a strategy to choose reference line segments among the groups of corresponding lines across multiple views. A quasi-linear bundle adjustment algorithm is provided which is guaranteed to converge to a solution where the geometric cost function achieves a (local) minimum. Experimental results based on synthetic data show that the proposed method is robust to added noise and the RMS error measure is almost linearly proportional to the added noise. Real image sequences demonstrate the performance of the proposed method in dealing with cases where different sections of the same segment are seen in different views and some lines are missing in some of the views.

Acknowledgements

The work described in this paper was substantially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. HKU7058/02E).
References

[1] W.K. Tang, Y.S. Hung, A factorization-based method for projective reconstruction with minimization of 2D reprojection errors, in: Proceedings of the DAGM 2002, September 2002, pp. 387–394.
[2] P. Sturm, B. Triggs, A factorization based algorithm for multiple-image projective structure and motion, in: Proceedings of the European Conference on Computer Vision, Cambridge, England, 1996, pp. 709–720.
[3] C. Tomasi, T. Kanade, Shape and motion from image streams under orthography: a factorization method, Int. J. Comput. Vision 9 (2) (1992) 137–154.
[4] R.I. Hartley, Projective reconstruction from line correspondences, in: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1994, pp. 903–907.
[5] R.I. Hartley, Multilinear relationships between coordinates of corresponding image points and lines, in: Proceedings of the Sophus Lie Symposium, Nordfjordeid, Norway, 1995.
[6] R.I. Hartley, Lines and points in three views and the trifocal tensor, Int. J. Comput. Vision 22 (2) (1997) 125–140.
[7] Y. Liu, T.S. Huang, A linear algorithm for motion estimation using straight line correspondences, in: Proceedings of the IEEE Ninth International Conference on Pattern Recognition, vol. 1, 1988, pp. 213–219.
[8] H. Maolin, Z. Quanbing, W. Sui, Projective reconstruction from lines based on SVD, in: Proceedings of the SPIE, The International Society for Optical Engineering, vol. 4875, 2002, pp. 896–902.
[9] D. Martinec, T. Pajdla, Line reconstruction from many perspective images by factorization, in: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, 2003, pp. 497–502.
[10] Y. Seo, K.S. Hong, Sequential reconstruction of lines in projective space, in: Proceedings of the 13th International Conference on Pattern Recognition, vol. 1, 1996, pp. 503–507.
[11] J. Weng, T.S. Huang, N. Ahuja, Motion and structure from line correspondences: closed-form solution, uniqueness, and optimization, IEEE Trans. Pattern Anal. Mach. Intell. 14 (3) (1992) 318–336.
[12] B. Triggs, Factorization methods for projective structure and motion, in: Proceedings of the Conference on Computer Vision and Pattern Recognition, San Francisco, June 1996, pp. 845–851.
[13] F. Dornaika, C. Garcia, Pose estimation using point and line correspondences, Real-Time Imaging 5 (1999) 215–230.
[14] A. Ansar, K. Daniilidis, Linear pose estimation from points or lines, IEEE Trans. Pattern Anal. Mach. Intell. 25 (5) (2003) 578–589.
[15] O. Faugeras, B. Mourrain, On the geometry and algebra of the point and line correspondences between N images, in: Proceedings of the Fifth International Conference on Computer Vision, 1995, pp. 951–956.
[16] A. Bartoli, P. Sturm, Multiple-view structure and motion from line correspondences, in: Proceedings of the International Conference on Computer Vision, 2003.