final version in: International Conference on Robotics and Automation, ICRA, IEEE Press, 2009
Revisiting Uncertainty Analysis for Optimum Planes Extracted from 3D Range Sensor Point-Clouds

Kaustubh Pathak, Narunas Vaskevicius, and Andreas Birk
Dept. of Computer Science, Jacobs University Bremen, 28759 Bremen, Germany
Email: [k.pathak, n.vaskevicius, a.birk]@jacobs-university.de

Abstract—In this work, we utilize a recently studied, more accurate range noise model for 3D sensors to derive from scratch the expressions for the optimum plane which best fits a point-cloud and for the combined covariance matrix of the plane's parameters. The parameters in question are the plane's normal and its distance to the origin. The range standard-deviation model used by us is a quadratic function of the true range and a function of the incidence angle as well. We show that for this model, the maximum-likelihood plane is biased, whereas the least-squares plane is not. The plane-parameters' covariance matrix for the least-squares plane is shown to possess a number of desirable properties, e.g., the optimal solution forms its null-space, and its components are functions of easily understood terms like the planar patch's center and scatter. We verify our covariance expression against that obtained by the eigenvector perturbation method. We further compare our method to that of renormalization with respect to the theoretically best covariance matrix in simulation. The application of our approach to real-time range-image registration and plane fusion is shown by an example using a commercially available 3D range sensor. Results show that our method has good accuracy, is fast to compute, and is easy to interpret intuitively.

Index Terms—Plane-fitting, Plane uncertainty estimation, 3D Mapping, Plane Fusion
Generalization of traditional occupancy-grid 2D maps to three dimensions is hindered by the disproportionate increase in storage and computation requirements. The input data for the mapping algorithm is a point-cloud obtained from a range-sensor like a rotating laser-range-finder (LRF) [1] or a time-of-flight sensor like the Swiss-ranger [2] and the PMD [3]. Mapping methods based directly on point-clouds are usually off-line [4]. There have been recent attempts to use feature-based 3D mapping, where the features are usually planar patches [5], [6], [7] extracted from the point-clouds by a variation of the region-growing algorithm [8]. This results in a compact semantic representation of the environment which is faster to compute and requires much less storage than a voxel-grid.

Given a set of noisy points known or hypothesized to lie on a plane, the "optimum" plane can be extracted from them using methods surveyed in [9]. The main methods are those of least-squares [5], [9] and renormalization [10], [11]. The latter uses a more detailed error-model for the 3D points, which results in better estimations with respect to the assumed model. This is achieved, however, at the cost of a computationally expensive iterative method which is not suited for real-time application. The main drawback of this method is the use of an outdated range resolution model which does not incorporate the effect of the incidence-angle, and wherein the range standard-deviation is a linear rather than a quadratic function of the true range. Another aspect which needs reexamination is the computation of the covariance matrix of the estimated plane parameters from the point of view of a trade-off between accuracy and computation-time.

This paper is organized as follows: in Sec. I, we formulate the maximum likelihood and least-squares plane extraction problems and derive useful properties of the exact solution based on an accurate range noise model. In Sec. II, we derive an approximate solution suitable for real-time applications along with analytical expressions for its covariance. In Sec. III, another analytical solution for the covariance is derived using the eigenvector perturbation method. These two methods are numerically compared to the method of renormalization in Sec. IV. Finally, an application of the plane covariance matrices for plane fusion using real-world data is presented in Sec. V, followed by conclusions in Sec. VI.

I. PROBLEM FORMULATION

In this section, we briefly rederive the optimum plane parameters from scratch, because this serves to highlight the various assumptions made along the way and the effects of using a new range sensor noise model. The equation of a plane is $\hat{n} \cdot r = d$, where $\hat{n}$ is the plane's unit normal and $d$ the distance to the origin. Assume that the sensor returned a point-cloud $r_j = \rho_j \hat{m}_j$, $j = 1 \ldots N$, where $\hat{m}_j$ are the measurement directions of the sensor, usually accurately known, and $\rho_j$ are the respective ranges, which are noisy.

A. Maximum Likelihood and Least Squares Plane Estimation

We make the assumption of radial Gaussian noise, i.e. $\rho_i \sim \mathcal{N}\left(\bar{\rho}_i, \sigma^2\{\bar{\rho}_i, \hat{n}\cdot\hat{m}_i\}\right)$, where $\{\cdot\}$ encloses function arguments, and $\bar{\rho}_i = d/(\hat{n}\cdot\hat{m}_i)$ is the true range of the $i$-th measurement. This implies a covariance matrix of $r_j$ of the form

C_{r,j} = \sigma^2\{\bar{\rho}_j, \hat{n}\cdot\hat{m}_j\}\, \hat{m}_j \hat{m}_j^T, \quad (1)
where the range standard-deviation $\sigma\{\bar{\rho}_j, \hat{n}\cdot\hat{m}_j\}$ is explicitly a (usually quadratic) function of $\bar{\rho}_j$ and has been found to be inversely proportional to $\hat{n}\cdot\hat{m}_j$, i.e. the cosine of the incidence-angle [12]. The likelihood of the plane-parameters $(\hat{n}, d)$ given a range sample $\rho_i$ along measurement direction $\hat{m}_i$ is

p(\rho_i \mid \hat{n}, d, \hat{m}_i) = \frac{1}{\sqrt{2\pi}\,\sigma\{\bar{\rho}_i, \hat{n}\cdot\hat{m}_i\}} \exp\left(-\frac{1}{2}\, \frac{(\rho_i - d/(\hat{n}\cdot\hat{m}_i))^2}{\sigma^2\{\bar{\rho}_i, \hat{n}\cdot\hat{m}_i\}}\right).
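The rank-one structure of the point covariance (1) can be made concrete in a few lines; the following is a minimal numpy sketch (the function name is our own choice), showing that all the uncertainty of a measurement lies along its ray:

```python
import numpy as np

def point_covariance(m_hat, sigma):
    """Covariance of a point r = rho * m_hat when only the range rho is
    noisy, eq. (1): a rank-one matrix aligned with the measuring direction."""
    m_hat = np.asarray(m_hat, dtype=float)
    m_hat = m_hat / np.linalg.norm(m_hat)   # measuring directions are unit vectors
    return sigma**2 * np.outer(m_hat, m_hat)

# A ray along +z with 1 cm range noise: all uncertainty lies along the ray.
C = point_covariance([0.0, 0.0, 1.0], sigma=0.01)
```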
Then, considering a sequence of samples $i = 1 \ldots N$, ignoring constant terms, and defining a binary switch $\beta = 0/1$, the log-likelihood function to be maximized is

\max_{\hat{n}, d} L_{\mathrm{GMLP}} = -\beta \sum_{i=1}^{N} \log \sigma\{\bar{\rho}_i, \hat{n}\cdot\hat{m}_i\} - \frac{1}{2} \sum_{j=1}^{N} \frac{(\rho_j (\hat{n}\cdot\hat{m}_j) - d)^2}{(\hat{n}\cdot\hat{m}_j)^2\, \sigma^2\{\bar{\rho}_j, \hat{n}\cdot\hat{m}_j\}}, \quad (2)
where GMLP stands for the General Maximum Likelihood Problem, obtained for $\beta = 1$. The General Least Squares Problem (GLSP) is obtained from GMLP by simply setting $\beta = 0$. For GLSP, the above can be equivalently rewritten as

\min_{\hat{n}, d} \sum_{j=1}^{N} \frac{(\rho_j (\hat{n}\cdot\hat{m}_j) - d)^2}{(\hat{n}\cdot\hat{m}_j)^2\, \sigma^2\{\bar{\rho}_j, \hat{n}\cdot\hat{m}_j\}}. \quad (3)
We note that the above expression is exactly the same as the weighted least-squares cost function in [10, Eq. (8)], except that it is rewritten in our notation on substitution of (1). No particular functional dependence has been enforced on $\sigma^2\{\bar{\rho}_j, \hat{n}\cdot\hat{m}_j\}$ yet.
The optimization (2) cannot be handled analytically, especially as $\sigma$ is a function of $\bar{\rho}_j = d/(\hat{n}\cdot\hat{m}_j)$ and $\hat{n}\cdot\hat{m}_j$. One approach is to handle the problem numerically using iterations, as is done in the renormalization method of [10]. It has the advantage of higher accuracy. Additionally, it assumes a linearly varying standard-deviation $\sigma\{\bar{\rho}_j, \hat{n}\cdot\hat{m}_j\} = \kappa_1 \bar{\rho}_j$, independent of the incidence-angle, and simultaneously estimates the value of $\kappa_1$ during the iterations. However, there are two main reservations regarding this approach. Firstly, an iterative method is usually not suitable for real-time applications. Secondly, and more importantly, as shown in [13], [12], [14], the standard-deviation of commonly available 3D sensors like the Swiss-ranger and laser-range-finders (LRF) is more accurately modeled by a function of the form

\sigma\{\bar{\rho}_j, \hat{n}\cdot\hat{m}_j\} = \frac{\hat{\sigma}\{\bar{\rho}_j\}}{|\hat{n}\cdot\hat{m}_j|}, \qquad \hat{\sigma}\{\bar{\rho}_j\} \triangleq \kappa \bar{\rho}_j^2 \equiv \frac{\kappa d^2}{|\hat{n}\cdot\hat{m}_j|^2}, \quad (4)
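To make the model (4) concrete, here is a minimal Python sketch; the function name and the default value of κ (the Swiss-ranger calibration value quoted later in Sec. IV) are our own choices:

```python
import numpy as np

KAPPA = 0.0018  # Swiss-ranger coefficient quoted in Sec. IV (lengths in meters)

def range_std(rho_bar, n_hat, m_hat, kappa=KAPPA):
    """Range standard-deviation of eq. (4): quadratic in the true range and
    inversely proportional to the cosine of the incidence angle."""
    cos_incidence = abs(float(np.dot(n_hat, m_hat)))
    return kappa * rho_bar**2 / cos_incidence

# Head-on at 4 m: sigma = 0.0018 * 16 = 0.0288 m; at 60 deg incidence it doubles.
sigma_head_on = range_std(4.0, [0.0, 0.0, 1.0], [0.0, 0.0, 1.0])
sigma_oblique = range_std(4.0, [0.0, 0.0, 1.0], [0.0, np.sqrt(3) / 2, 0.5])
```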
where $\hat{n}$ is the local normal to the surface the point $r_j$ lies on. The coefficient $\kappa > 0$ can be estimated by doing initial calibration experiments with the sensor. In fact, on substitution of the LHS of (4) in (2), we get

\max_{\hat{n}, d} L_{\mathrm{EMLP}} = -\beta \sum_{j=1}^{N} \log \frac{\hat{\sigma}\{\bar{\rho}_j\}}{|\hat{n}\cdot\hat{m}_j|} - \frac{1}{2} \sum_{j=1}^{N} \frac{(\hat{n}\cdot r_j - d)^2}{\hat{\sigma}^2\{\bar{\rho}_j\}}, \quad (5)
where EMLP stands for the Exact Maximum Likelihood Problem. If $\beta = 0$, we get the Exact Least Squares Problem (ELSP). The ELSP expression is both more accurate and simpler-looking than the cost function considered in [10], which is equivalent to the cost in (3) with $\sigma\{\bar{\rho}_j, \hat{n}\cdot\hat{m}_j\} \equiv \kappa_1 \bar{\rho}_j$. On expanding (5) using the RHS of (4), and ignoring constants, we get

\max_{\hat{n}, d} L_{\mathrm{EMLP}} = -2\beta N \log d + 3\beta \sum_{j=1}^{N} \log |\hat{n}\cdot\hat{m}_j| - \frac{1}{2} \sum_{j=1}^{N} \frac{(\hat{n}\cdot r_j - d)^2\, (\hat{n}\cdot\hat{m}_j)^4}{\kappa^2 d^4}. \quad (6)
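For intuition, the β = 0 part of the cost (6) can be evaluated directly. The following is a minimal numpy sketch (the function name is ours); the cost attains its maximum, zero, for noise-free points lying exactly on the plane:

```python
import numpy as np

def elsp_cost(n_hat, d, points, m_hats, kappa):
    """ELSP log-likelihood: eq. (6) with beta = 0 and constants dropped,
    -(1/2) sum_j (n.r_j - d)^2 (n.m_j)^4 / (kappa^2 d^4)."""
    n_hat = np.asarray(n_hat, dtype=float)
    eps = points @ n_hat - d              # plane-equation errors epsilon_j
    cos4 = (m_hats @ n_hat) ** 4          # incidence-angle weighting
    return -0.5 * np.sum(eps**2 * cos4) / (kappa**2 * d**4)

# Points sampled exactly on the plane z = 2 along three viewing rays:
pts = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0], [0.0, 0.5, 2.0]])
rays = pts / np.linalg.norm(pts, axis=1, keepdims=True)
L_exact = elsp_cost([0.0, 0.0, 1.0], 2.0, pts, rays, kappa=0.0018)
```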
As $\hat{m}_j$ are the measuring directions of the sensor, from arguments of geometric consistency of points on a plane observed along these directions, we take $|\hat{n}\cdot\hat{m}_j| \equiv \hat{n}\cdot\hat{m}_j$ in what follows.

B. Properties of the Solution

We denote the ground-truth values of $\hat{n}, d$ by $\bar{n}, \bar{d}$. To save space, we define the error in the plane-equation for some estimate $\hat{n}, d$ at sample $r_j$ as $\epsilon_j \triangleq \hat{n}\cdot r_j - d$.

Although only a numerical solution of (6) is possible, we can show that unless $\beta = 0$, the solution is biased. To this end, we define a Lagrangian $L = L_{\mathrm{EMLP}} - \frac{1}{2}\lambda(\hat{n}^T\hat{n} - 1)$, where $\lambda$ is the usual Lagrange multiplier. Since at the optimum estimate $\hat{n}_\star, d_\star$ the gradient vanishes,

\frac{\partial L}{\partial d} = -\frac{2\beta N}{d} + \frac{1}{\kappa^2 d^4} \sum_{j=1}^{N} (\hat{n}\cdot\hat{m}_j)^4 \left(\epsilon_j + \frac{2\epsilon_j^2}{d}\right), \quad (7a)

\frac{\partial L}{\partial \hat{n}} = -\lambda \hat{n} + 3\beta \sum_{j=1}^{N} \frac{\hat{m}_j}{\hat{n}\cdot\hat{m}_j} - \frac{1}{\kappa^2 d^4} \sum_{j=1}^{N} (\hat{n}\cdot\hat{m}_j)^3 \left(2\epsilon_j^2\, \hat{m}_j + \epsilon_j (\hat{n}\cdot\hat{m}_j)\, r_j\right). \quad (7b)

At $\hat{n}_\star, d_\star$, the above expressions equate to zero and

\lambda_\star = 3\beta N - \frac{1}{\kappa^2 d^4} \sum_{j=1}^{N} (\hat{n}\cdot\hat{m}_j)^4 \left(2\epsilon_j^2 + \epsilon_j (\hat{n}\cdot r_j)\right). \quad (7c)

Suppose we get perfect samples; then on substituting $\epsilon_j = 0$, $j = 1 \ldots N$ in (7), we see that the gradients do not vanish unless $\beta = 0$. In other words, $\hat{n}_\star \neq \bar{n}$ and $d_\star \neq \bar{d}$ unless $\beta = 0$. This shows that the maximum likelihood estimate is biased, whereas the least-squares estimate is not. In the light of these results, we will take $\beta = 0$ for the rest of what follows, i.e., we will only consider the ELSP.

The Hessian of (6) at $\lambda = \lambda_\star$, $\hat{n} = \hat{n}_\star$, $d = d_\star$ and $\beta = 0$ can be computed as follows after some algebra:

\frac{\partial^2 L}{\partial d^2} = -\frac{1}{\kappa^2 d^4} \sum_{j=1}^{N} (\hat{n}\cdot\hat{m}_j)^4 \left(1 + \frac{8\epsilon_j}{d} + \frac{10\epsilon_j^2}{d^2}\right), \quad (8a)

\frac{\partial^2 L}{\partial d\, \partial \hat{n}} = \frac{1}{\kappa^2 d^4} \sum_{j=1}^{N} 4\epsilon_j (\hat{n}\cdot\hat{m}_j)^3 \left(1 + \frac{2\epsilon_j}{d}\right) \hat{m}_j + \frac{1}{\kappa^2 d^4} \sum_{j=1}^{N} (\hat{n}\cdot\hat{m}_j)^4 \left(1 + \frac{4\epsilon_j}{d}\right) r_j, \quad (8b)

\frac{\partial^2 L}{\partial \hat{n}^2} = -\lambda I - \frac{1}{\kappa^2 d^4} \sum_{j=1}^{N} (\hat{n}\cdot\hat{m}_j)^4\, r_j r_j^T - \frac{4}{\kappa^2 d^4} \sum_{j=1}^{N} \epsilon_j (\hat{n}\cdot\hat{m}_j)^3 \left(\hat{m}_j r_j^T + r_j \hat{m}_j^T\right) - \frac{6}{\kappa^2 d^4} \sum_{j=1}^{N} \epsilon_j^2 (\hat{n}\cdot\hat{m}_j)^2\, \hat{m}_j \hat{m}_j^T, \quad (8c)

H_{\mathrm{ELSP}} \triangleq \begin{pmatrix} \frac{\partial^2 L}{\partial \hat{n}^2} & \frac{\partial^2 L}{\partial d\, \partial \hat{n}} \\ \left(\frac{\partial^2 L}{\partial d\, \partial \hat{n}}\right)^T & \frac{\partial^2 L}{\partial d^2} \end{pmatrix}. \quad (8d)

Using results from [15], [11], we can find the 4×4 plane-parameter covariance matrix $C_{\mathrm{ELSP}} \triangleq -H^+_{\mathrm{ELSP}}$, i.e., the negative of the Moore-Penrose generalized inverse of the Hessian. The generalized inverse is necessary because the Hessian is singular on account of the unit-norm constraint on $\hat{n}$.

1) The Theoretical Lower Bound on Covariance: This lower bound is achieved [11] when all our samples are perfect, i.e. $\epsilon_j = 0$, $j = 1 \ldots N$. Since ELSP is unbiased, in this case the optimum solution is the ground-truth $\bar{n}, \bar{d}$. Substituting the above in (8), using $\bar{\rho}_j \equiv \bar{d}/(\bar{n}\cdot\hat{m}_j)$, $\bar{r}_j \equiv \bar{\rho}_j \hat{m}_j$, and doing algebraic simplifications, we get the lower bound on the covariance, denoted $\bar{C}_{\mathrm{ELSP}}$, as

\bar{H}_{\mathrm{ELSP}} \triangleq \sum_{j=1}^{N} \frac{1}{\hat{\sigma}^2\{\bar{\rho}_j\}} \begin{pmatrix} -\bar{r}_j \bar{r}_j^T & \bar{r}_j \\ \bar{r}_j^T & -1 \end{pmatrix}, \quad (9a)

\bar{C}_{\mathrm{ELSP}} \triangleq -\bar{H}^+_{\mathrm{ELSP}}. \quad (9b)

Since the ground-truth is not known in practice, this bound cannot be found. However, in simulations this ideal value can be used to compare the estimated covariances found using various methods. A particularly interesting property of $\bar{H}_{\mathrm{ELSP}}$, which can be verified by direct substitution in (9), is

\bar{H}_{\mathrm{ELSP}} \begin{pmatrix} \bar{n} \\ \bar{d} \end{pmatrix} = 0 \;\Rightarrow\; \bar{C}_{\mathrm{ELSP}} \begin{pmatrix} \bar{n} \\ \bar{d} \end{pmatrix} = 0. \quad (10)

The latter follows from the definition of the Moore-Penrose inverse.

II. APPROXIMATE SOLUTION AND COMPUTATION OF COVARIANCE MATRIX
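The lower-bound Hessian (9a) and the null-space property (10) are easy to check numerically. Below is a minimal numpy sketch (the function name is ours), using noise-free points on a known plane:

```python
import numpy as np

def lower_bound_covariance(points_bar, sigma_bars):
    """Eq. (9): H_bar = sum_j (1/sigma_j^2) [[-r r^T, r], [r^T, -1]],
    C_bar = -pinv(H_bar), built from noise-free points r_bar_j."""
    H = np.zeros((4, 4))
    for r, s in zip(points_bar, sigma_bars):
        H[:3, :3] -= np.outer(r, r) / s**2
        H[:3, 3] += r / s**2
        H[3, :3] += r / s**2
        H[3, 3] -= 1.0 / s**2
    return H, -np.linalg.pinv(H)

# Points exactly on the plane z = 2 (n_bar = e_z, d_bar = 2):
pts = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0], [0.0, 1.0, 2.0], [1.0, 1.0, 2.0]])
H_bar, C_bar = lower_bound_covariance(pts, np.full(4, 0.01))
xi = np.array([0.0, 0.0, 1.0, 2.0])    # ground-truth (n_bar, d_bar)
```

Property (10) then holds because each point satisfies the plane equation exactly, and the Moore-Penrose inverse shares the null-space of the symmetric Hessian.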
The objective of this work is to explore if, under reasonable simplifying assumptions, one can obtain a fast estimation of the plane parameters and their covariance matrix without iterations. When extracting planar patches from a range-image using region-growing, the plane-fitting algorithm is called many times per range-image, and an iterative algorithm is unsuitable for this. The solution of ELSP cannot be obtained analytically; hence, in the sequel we explore ways of solving it in an approximate but fast manner.

To proceed with the approximate solution, we make the assumption that $\hat{\sigma}\{\bar{\rho}_j\} \approx \hat{\sigma}\{\rho_j\}$. We define $\hat{\sigma}_j \triangleq \hat{\sigma}\{\rho_j\}$ and note that $\hat{\sigma}_j$ is now no longer a function of $\bar{\rho}_j$ and hence of $\hat{n}$ and $d$. Using Eq. (5) and setting $\beta = 0$, we get

\max_{\hat{n}, d} L_{\mathrm{ALSP}} = -\frac{1}{2} \sum_{j=1}^{N} \frac{(\hat{n}\cdot r_j - d)^2}{\hat{\sigma}_j^2}, \quad (11)

where ALSP stands for Approximate Least Squares Problem.

A. Solution of ALSP and Covariance Estimation

ALSP is a well-studied problem, see for example [6]. Although the solution is well-known, the covariance matrix computation is done in various ways in the literature. The method presented in [6] is particularly redundant, as it solves the optimization problem twice: it first finds the optimum plane in the $(\hat{n}, d)$ space, then rotates the plane such that the z-axis is parallel to $\hat{n}$, and subsequently translates the plane such that $d = 0$. Finally, another best-fit is done using the plane-equation of the form $z = \beta_0 + \beta_1 x + \beta_2 y$. The problem is essentially re-solved for this form by least-squares to obtain the covariance matrix for $\beta_0, \beta_1, \beta_2$. The procedure is justified by saying that it provides an analytical way of obtaining the covariance and it avoids the matrix singularity in $(\hat{n}, d)$ space. We show that the covariance matrix can be obtained analytically also in the $(\hat{n}, d)$ space. We see the fact that the null-space of the covariance matrix consists of the optimal solution as an asset which can be used to efficiently fuse planes. In fact, we show that this covariance matrix also contains other summarized information about the point-cloud from which it was extracted, like its center and scatter.

As before, we have a constrained optimization problem which can be solved using Lagrange multipliers. Defining the Lagrangian

L = L_{\mathrm{ALSP}} + \lambda(\hat{n}^T\hat{n} - 1), \quad (12)

setting $\partial L/\partial d = 0$ gives the well-known solution

d_\star = \hat{n}_\star \cdot r_G, \qquad r_G \triangleq \Big(\sum_{j=1}^{N} \hat{\sigma}_j^{-2}\Big)^{-1} \sum_{j=1}^{N} \hat{\sigma}_j^{-2}\, r_j, \quad (13)

where $r_G$ is the weighted center of the point-cloud. Substituting the above in (11) gives

\hat{n}_\star = \arg\min_{\|\hat{n}\|=1} \hat{n}^T \Big(\sum_{j=1}^{N} \hat{\sigma}_j^{-2} (r_j - r_G)(r_j - r_G)^T\Big) \hat{n} \triangleq \arg\min_{\|\hat{n}\|=1} \hat{n}^T S\, \hat{n}, \quad (14)

where $S$ is the positive semi-definite weighted scatter matrix. The solution $\hat{n}_\star$ is then the eigenvector corresponding to the minimum eigenvalue of $S$. As done in Sec. I-B, the covariance matrix of the optimal solution can simply be obtained by finding the Hessian of the Lagrangian (12) at $\hat{n}_\star, d_\star, \lambda_\star$. Its expressions are

\lambda_\star = \frac{1}{2}\, \hat{n}_\star^T S\, \hat{n}_\star, \quad (15a)

\frac{\partial^2 L}{\partial d^2} = -\sum_{j=1}^{N} \frac{1}{\hat{\sigma}_j^2} \triangleq -\mu, \quad (15b)

\frac{\partial^2 L}{\partial d\, \partial \hat{n}} = \sum_{j=1}^{N} \frac{r_j}{\hat{\sigma}_j^2} \equiv \mu\, r_G, \quad (15c)

\frac{\partial^2 L}{\partial \hat{n}^2} = -\sum_{j=1}^{N} \frac{r_j r_j^T}{\hat{\sigma}_j^2} + 2\lambda_\star I \equiv -S - \mu\, r_G r_G^T + (\hat{n}_\star^T S\, \hat{n}_\star)\, I. \quad (15d)

Using the above, the ALSP Hessian is

H_{\mathrm{ALSP}} = \begin{pmatrix} \frac{\partial^2 L}{\partial \hat{n}^2} & \frac{\partial^2 L}{\partial d\, \partial \hat{n}} \\ \left(\frac{\partial^2 L}{\partial d\, \partial \hat{n}}\right)^T & \frac{\partial^2 L}{\partial d^2} \end{pmatrix} \triangleq \begin{pmatrix} H_{\hat{n}\hat{n}} & H_{\hat{n}d} \\ H_{\hat{n}d}^T & H_{dd} \end{pmatrix}. \quad (15e)

Finally, the covariance matrix is

C_{\mathrm{ALSP}} = -H^+_{\mathrm{ALSP}}. \quad (15f)

Using Eqs. (15), it is easily verified that

H_{\mathrm{ALSP}} \begin{pmatrix} \hat{n}_\star \\ d_\star \end{pmatrix} = 0 \;\Rightarrow\; C_{\mathrm{ALSP}} \begin{pmatrix} \hat{n}_\star \\ d_\star \end{pmatrix} = 0. \quad (16)

It is a remarkable property of ALSP that its solution satisfies the same property as that of the exact lower-bound covariance matrix $\bar{C}_{\mathrm{ELSP}}$ and the ground-truth values derived in (10).

III. METHOD OF EIGENVECTOR PERTURBATION FOR COVARIANCE COMPUTATION

Another method to compute covariances was derived in a different context in [16]. We would like to confirm the computation of the Hessian-based covariance with their method. For this, however, we need to reformulate the problem similar to [10]. We define a unit vector in parameter space

\hat{\nu} = \frac{1}{\sqrt{\|\hat{n}\|^2 + d^2}} \begin{pmatrix} \hat{n} \\ d \end{pmatrix} = \begin{pmatrix} \nu_1 & \nu_2 & \nu_3 & \nu_4 \end{pmatrix}^T. \quad (17)

Then the equation of the plane is $\hat{\nu} \cdot \varrho = 0$, with $\varrho \triangleq \begin{pmatrix} r^T & -1 \end{pmatrix}^T$, and

C_{\varrho,i} = \begin{pmatrix} C_{r,i} & 0 \\ 0 & 0 \end{pmatrix}, \quad (18)

where (1) has been used. Then, the ALSP (11) can be reformulated as the Eigenvector Perturbation Problem (EVPP)

\min_{\|\hat{\nu}\|=1} L_{\mathrm{EVPP}} = \hat{\nu}^T \Big(\sum_{j=1}^{N} \frac{\varrho_j \varrho_j^T}{\hat{\sigma}_j^2}\Big) \hat{\nu} \triangleq \hat{\nu}^T M \hat{\nu}. \quad (19)

The solution $\hat{\nu}_\star$ is simply the eigenvector corresponding to the minimum eigenvalue of the symmetric positive semi-definite matrix $M$. The uncertainty in $\hat{\nu}_\star$ can be computed using the method of [16]. The specialization of this method to the present problem is, however, derived by us, and we have not found it elsewhere.

The matrix $M$ has the decomposition $M = V \Lambda V^T$, where the orthonormal matrix $V$ contains the eigenvectors corresponding to the eigenvalues $\lambda_i$ in the diagonal matrix $\Lambda$. The eigenvectors are assumed to be sorted in ascending order, so that $\lambda_1$ is the eigenvalue corresponding to $\hat{\nu}_\star$. If $M$ is perturbed by a small disturbance $\Delta_M$, $\hat{\nu}_\star$ is perturbed by

\delta_{\hat{\nu}_\star} = V \Delta_1 V^T \Delta_M\, \hat{\nu}_\star, \qquad \Delta_1 = \mathrm{diag}\{0,\, (\lambda_1 - \lambda_2)^{-1},\, (\lambda_1 - \lambda_3)^{-1},\, (\lambda_1 - \lambda_4)^{-1}\}. \quad (20)

\therefore\; C_{\hat{\nu}_\star} = E[\delta_{\hat{\nu}_\star} \delta_{\hat{\nu}_\star}^T] = V \Delta_1 V^T\, E[\Delta_M \hat{\nu}_\star \hat{\nu}_\star^T \Delta_M^T]\, V \Delta_1 V^T = V \Delta_1 V^T\, C_{\Delta_M \hat{\nu}_\star}\, V \Delta_1 V^T, \quad (21)

where $E[\cdot]$ is the expectation operator. Let the perturbation be $r_i = \bar{r}_i + \delta r_i$, $\delta r_i \triangleq \begin{pmatrix} \delta x_i & \delta y_i & \delta z_i \end{pmatrix}^T$. One can easily derive that

\Delta_M = \sum_{i=1}^{N} \frac{1}{\hat{\sigma}_i^2} \begin{pmatrix} 2 x_i \delta x_i & x_i \delta y_i + y_i \delta x_i & x_i \delta z_i + z_i \delta x_i & -\delta x_i \\ x_i \delta y_i + y_i \delta x_i & 2 y_i \delta y_i & y_i \delta z_i + z_i \delta y_i & -\delta y_i \\ x_i \delta z_i + z_i \delta x_i & y_i \delta z_i + z_i \delta y_i & 2 z_i \delta z_i & -\delta z_i \\ -\delta x_i & -\delta y_i & -\delta z_i & 0 \end{pmatrix}. \quad (22)

Using this, one can derive a matrix $N_i\{\hat{\nu}_\star\}$, a function of $\hat{\nu}_\star$, such that

\Delta_M\, \hat{\nu}_\star = \sum_{i=1}^{N} \frac{1}{\hat{\sigma}_i^2}\, N_i\{\hat{\nu}_\star\}\, \delta r_i, \quad \text{which gives the result} \quad C_{\Delta_M \hat{\nu}_\star} = \sum_{i=1}^{N} \frac{1}{\hat{\sigma}_i^4}\, N_i\{\hat{\nu}_\star\}\, C_{r,i}\, N_i^T\{\hat{\nu}_\star\}. \quad (23)

On substitution in (21), we get the required covariance $C_{\hat{\nu}_\star}$. We can now recover our optimum solution in the $(\hat{n}, d)$ space by applying the following reverse transform to $\hat{\nu}_\star$:

\begin{pmatrix} \hat{n} \\ d \end{pmatrix} = g\{\hat{\nu}\} \triangleq \frac{1}{\sqrt{1 - \nu_4^2}}\, \hat{\nu}. \quad (24)

Transforming $C_{\hat{\nu}_\star}$ to the $(\hat{n}, d)$ space by the usual Jacobian method presents an interesting paradox, which has not been addressed by previous researchers [10], who have used a similar transform.

A. A Paradox in Covariance Matrix Transformation

Given a symmetric positive semi-definite matrix $C$, its eigenvectors are the unit-directions $\hat{\nu}$ in the space of definition where the quadratic product $\hat{\nu}^T C \hat{\nu}$ achieves a minimum or maximum [17]. Since a unit-vector is identical in the $(\hat{n}, d)$ space as well as in the space of $\hat{\nu}$ of (17), we expect that $C_{\hat{\nu}_\star}$ will have the same eigenvectors (though not eigenvalues) as the covariance $C_{\hat{n},d}$ in $(\hat{n}, d)$ space. We have verified numerically that this is true (details in Sec. IV, Figs. 1(b), 2(b), 1(c), 2(c)). However, these two matrices are also supposed to be related by $C_{\hat{n},d} = J C_{\hat{\nu}_\star} J^T$. An obvious way to compute the Jacobian $J$ is

J \triangleq \frac{\partial g}{\partial \hat{\nu}} = (1 - \nu_4^2)^{-1/2} \begin{pmatrix} 1 & 0 & 0 & \frac{\nu_1 \nu_4}{1 - \nu_4^2} \\ 0 & 1 & 0 & \frac{\nu_2 \nu_4}{1 - \nu_4^2} \\ 0 & 0 & 1 & \frac{\nu_3 \nu_4}{1 - \nu_4^2} \\ 0 & 0 & 0 & \frac{1}{1 - \nu_4^2} \end{pmatrix}. \quad (25)

The same Jacobian was used in [10], but with the bottom right-corner element negated, as they had defined $\hat{\nu} = (\|\hat{n}\|^2 + d^2)^{-1/2} \begin{pmatrix} \hat{n}^T & -d \end{pmatrix}^T$ instead of as in (17). However, unless $\nu_4 = 0$, this Jacobian rotates the eigenvectors of $C_{\hat{\nu}_\star}$, which contradicts our earlier observation that the eigenvectors should remain unrotated. In fact, this Jacobian is incorrect, because it assumes that the components of $\hat{\nu}$ can be independently perturbed, whereas this is not true as $\|\hat{\nu}\| = 1$. The correct Jacobian is

J \triangleq (1 - \nu_4^2)^{-1/2}\, I_4, \quad (26)

which merely scales, and which will be used in this paper along with (24) to transform the covariance to the $(\hat{n}, d)$ space.

IV. COMPARISON OF METHODS

We will compare three methods: renormalization [10], ALSP (Sec. II), and EVPP (Sec. III). The comparison is done in simulation, because the "ground-truth" is known accurately in simulation and hence the lower-bound covariance matrix $\bar{C}_{\mathrm{ELSP}}$ of Sec. I-B1 can be computed. Let the sorted eigenvalues and unit eigenvectors of $\bar{C}_{\mathrm{ELSP}}$ be $\bar{\lambda}_j, \bar{\tau}_j$, $j = 0 \ldots 3$, where $\bar{\lambda}_0 = 0$ and the unit vector $\bar{\tau}_0$ points in the direction of $\begin{pmatrix} \bar{n}^T & \bar{d} \end{pmatrix}^T$. The covariance matrix estimate of method $i$ is denoted $C_i$, with sorted eigenvalues and eigenvectors $\lambda_{i,j}, \hat{\tau}_{i,j}$, $j = 0 \ldots 3$, for $i$ = renormalization, ALSP, and EVPP. We define the following three error metrics, which are plotted for the three methods in Figs. 1, 2, and 3.

Estimated Plane Error $\epsilon_1$: this metric, ideally zero, is defined as follows for the $i$-th method:

\epsilon_1 \triangleq \cos^{-1}(\bar{\tau}_0 \cdot \hat{\tau}_{i,0}). \quad (28)
Principal Uncertainty Directions Error $\epsilon_2$: the weighted sum of angular errors in the principal uncertainty directions of the $i$-th method, ideally zero:

\epsilon_2 \triangleq \sum_{j=1}^{3} w_j \cos^{-1}(\bar{\tau}_j \cdot \hat{\tau}_{i,j}), \qquad w_j \triangleq \frac{\bar{\lambda}_j}{\bar{\lambda}_1 + \bar{\lambda}_2 + \bar{\lambda}_3}. \quad (29)

Relative Total Uncertainty $\epsilon_3$: lower-bounded by unity:

\epsilon_3 \triangleq (\lambda_{i,1} \lambda_{i,2} \lambda_{i,3}) / (\bar{\lambda}_1 \bar{\lambda}_2 \bar{\lambda}_3). \quad (30)
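As a concrete reference for what is being compared, the ALSP fit of Sec. II-A, eqs. (13)-(15f), fits naturally in a few lines of Python. This is a hedged sketch assuming numpy; the function name and the sign convention d ≥ 0 are our own choices:

```python
import numpy as np

def alsp_fit(points, sigmas):
    """ALSP plane fit: weighted center (13), scatter matrix and its
    smallest-eigenvalue eigenvector (14), Lagrangian Hessian (15a)-(15e),
    and the pseudo-inverse covariance (15f)."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mu = w.sum()                                        # sum_j 1/sigma_j^2
    r_g = (w[:, None] * points).sum(axis=0) / mu        # weighted center, eq. (13)
    diffs = points - r_g
    S = (w[:, None] * diffs).T @ diffs                  # weighted scatter, eq. (14)
    evals, evecs = np.linalg.eigh(S)                    # ascending eigenvalues
    n_hat = evecs[:, 0]                                 # smallest-eigenvalue eigenvector
    d = float(n_hat @ r_g)                              # eq. (13)
    if d < 0:                                           # xi and -xi are the same plane
        n_hat, d = -n_hat, -d
    H = np.zeros((4, 4))
    H[:3, :3] = -S - mu * np.outer(r_g, r_g) + (n_hat @ S @ n_hat) * np.eye(3)
    H[:3, 3] = H[3, :3] = mu * r_g                      # eq. (15c)
    H[3, 3] = -mu                                       # eq. (15b)
    C = -np.linalg.pinv(H)                              # eq. (15f)
    return n_hat, d, C

# Noise-free points on z = 2: the fit recovers n = e_z, d = 2, and the
# solution (n, d) spans the null-space of C, property (16).
pts = np.array([[0., 0., 2.], [1., 0., 2.], [0., 1., 2.], [1., 1., 2.], [2., 1., 2.]])
n_hat, d, C = alsp_fit(pts, np.full(5, 0.03))
```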
We simulated the commercial 3D sensor Swiss-ranger and took its experimentally determined parameter $\kappa = 0.0018$ (with all lengths in meters) from [12]. The measuring directions $\hat{m}_j$, $j = 1 \ldots 176 \times 144$, were taken to be the same as those of the real device. Samples were generated using the noise model of (4). We considered infinite planes at a constant distance $d = 4$ meters from the origin but with varying normals as a function of the spherical coordinates

\hat{n} = \begin{pmatrix} \cos\theta \cos\phi & \cos\theta \sin\phi & \sin\theta \end{pmatrix}^T. \quad (31)

In Figs. 1, 2, and 3, the three error metrics are plotted as a function of $\theta$ and $\phi$. The four corners of the plots have been truncated, as they correspond to samples for which the range exceeds the device's maximum range of 7.5 meters. The color-scale is different for each plot and should be noted carefully for proper comparison.

Discussion: We see that the ALSP method outperforms the others in terms of accuracy. Renormalization is the worst, because it is trying to optimize a cost based on the wrong sensor noise model. Another advantage of the covariance of the ALSP is that its components are known in terms of easily understood quantities; this semantic information is lost in the EVPP covariance, although numerically it is quite close to that of ALSP. We compared the computation times for the three algorithms in MATLAB, although ALSP has also been implemented in C++. In MATLAB, the average computation time (over 840 runs) per sample of 176 × 144 points was 0.0052 s for ALSP, 0.4393 s for renormalization, and 0.3332 s for EVPP. Renormalization is slower as expected
[Figure panels: Fig. 1(a) Renormalization, Fig. 1(b) ALSP, Fig. 1(c) Eigenvector perturbation (color-scale ×10⁻⁴); Fig. 4(a)-(f).]
Fig. 1. Estimated Plane Error ε1 . The optimum plane itself is clearly not affected strongly by the change in the range noise model.
Fig. 4. Intensity images for successive views captured from a Swiss-ranger.
[Figure panels: Fig. 2(a) Renormalization, Fig. 2(b) ALSP, Fig. 2(c) Eigenvector perturbation.]
Fig. 2. Principal Uncertainty Directions Error ε2. Note the strong effect of the different range noise model assumed by renormalization on the principal uncertainty directions.
because it is iterative. EVPP is non-iterative and could possibly be made faster by a more efficient implementation. For run-times of a C++ implementation of a full-fledged region-growing algorithm with embedded ALSP, see Sec. V.

V. APPLICATION: PLANE FUSION AFTER SCENE REGISTRATION

Suppose the same plane was observed in two sensor reference frames $F_\ell$ and $F_r$, which are related by

r_r = R_{r/\ell}(r_\ell - t_{r/\ell}), \quad (32)

where $R_{r/\ell}$ is the relative rotation and $t_{r/\ell}$ the relative translation, both assumed known. The plane-parameter vectors $\xi \triangleq \begin{pmatrix} \hat{n}^T & d \end{pmatrix}^T$ found in the two frames are related by the transform

\xi_r = T_{r/\ell}\, \xi_\ell, \qquad T_{r/\ell} \triangleq \begin{pmatrix} R_{r/\ell} & 0_{3\times 1} \\ -t_{r/\ell}^T & 1 \end{pmatrix}. \quad (33)

Once the planes have been matched and registered, we want the corresponding plane parameters to be fused. The estimated plane parameters and their covariances satisfy $C_\ell \xi_\ell = 0$ and $C_r \xi_r = 0$, as shown earlier. Now, $T_{r/\ell}\, \xi_\ell$ with covariance $T_{r/\ell} C_\ell T_{r/\ell}^T$, and $\xi_r$ with covariance $C_r$, can be considered to be two observations in the right frame of the same plane. The usual equation for fusing two Gaussian
estimates, which is analogous to the Kalman filter update, would give

C_1^+ = (T_{r/\ell}\, C_\ell\, T_{r/\ell}^T)^+ + C_r^+, \quad (34)

\xi_1 = \xi_r + C_r\, (C_r + T_{r/\ell}\, C_\ell\, T_{r/\ell}^T)^+ (T_{r/\ell}\, \xi_\ell - \xi_r). \quad (35)

This approach has two problems: first, there is no guarantee that the first three components of $\xi_1$ form a unit vector, and secondly, the property $C_1 \xi_1 = C_1^+ \xi_1 = 0$ may not be satisfied. We therefore propose that the fused covariance matrix be found by modifying the matrix in (34) as

C_f^+ = C_1^+ - \lambda_{\min}\{C_1^+\}\, \hat{\nu}_f \hat{\nu}_f^T, \quad (36)

where $\lambda_{\min}\{C_1^+\}$ is the minimum eigenvalue of $C_1^+$ and $\hat{\nu}_f$ the corresponding normalized eigenvector. The fused plane parameters $\xi_f$ are then found by applying the reverse transform (24) to $\hat{\nu}_f$. In fact, $C_f^+$ is the matrix closest in Frobenius norm to $C_1^+$ which satisfies both of the aforementioned properties, and $C_f$ is positive semi-definite. All fused quantities are with respect to the right frame. Another reason for preferring (36) over (35) is that the latter does not work if $\xi$ is negated, which represents the same plane.

The results of applying the above fusion strategy to real-life data are shown in Fig. 6(b). The plane-patches were extracted using the ALSP method coupled with a region-growing algorithm for point-clouds obtained from a Swiss-ranger mounted on a robot turning in place. About ten such successive views were analyzed by a registration algorithm and corresponding planes identified. The details of the registration algorithm, which makes extensive use of the computed covariance matrices, are beyond the scope of this paper. Finally, the corresponding plane-patches were rotated into the global frame and fused using the method described in this section. There were on average 10 planar patches per view. The average time per view-pair for plane-patch extraction by region-growing, registration, and fusion was about 0.25 seconds for a C++ implementation on a Pentium 4 HT, 3 GHz, with 1 GB of memory.
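The fusion steps (33), (34), (36), and the recovery of the fused parameters via (24) can be sketched as follows; this is a minimal numpy sketch under our own naming, with the sign ambiguity of the eigenvector resolved by convention:

```python
import numpy as np

def fuse_planes(xi_l, C_l, xi_r, C_r, R, t):
    """Plane fusion of Sec. V: map the left-frame estimate through T of
    eq. (33), add the information matrices as in eq. (34), and deflate the
    minimum-eigenvalue direction as in eq. (36), so that Cf_plus @ xi_f = 0."""
    T = np.eye(4)
    T[:3, :3] = R
    T[3, :3] = -t
    C1_plus = np.linalg.pinv(T @ C_l @ T.T) + np.linalg.pinv(C_r)   # eq. (34)
    evals, evecs = np.linalg.eigh(C1_plus)
    nu_f = evecs[:, 0]                           # eigenvector of the minimum eigenvalue
    Cf_plus = C1_plus - evals[0] * np.outer(nu_f, nu_f)             # eq. (36)
    xi_f = nu_f / np.sqrt(1.0 - nu_f[3] ** 2)    # reverse transform, eq. (24)
    if xi_f[3] < 0:                              # xi and -xi denote the same plane
        xi_f = -xi_f
    return xi_f, Cf_plus

# Two identical observations of the plane z = 2 in coincident frames:
nu = np.array([0.0, 0.0, 1.0, 2.0]) / np.sqrt(5.0)
C_obs = 1e-4 * (np.eye(4) - np.outer(nu, nu))    # null-space spanned by (n, d)
xi_f, Cf_plus = fuse_planes(nu * np.sqrt(5.0), C_obs, nu * np.sqrt(5.0), C_obs,
                            np.eye(3), np.zeros(3))
```

With identical inputs the fusion reproduces the common plane, and the deflation step guarantees the null-space property that eq. (35) cannot.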
[Figure panels: Fig. 3(a) Renormalization, Fig. 3(b) ALSP, Fig. 3(c) Eigenvector perturbation.]

Fig. 3. Relative Total Uncertainty ε3. In Fig. 3(a), note that the lowest value is below the theoretical minimum of unity because the algorithm assumes a different sensor error model. Note also from the color-scale that the values are much worse than those of the other two methods.

VI. CONCLUSIONS

The paper presents several new results for the best-fit plane using a new range noise model. The covariance matrix for the plane parameters is derived in two different ways, both of which are entirely analytical. Some important properties of the null-space of this matrix are derived and used for plane fusion. Both simulation and real sensor data were presented to support our theoretical results. We conclude that the approximate least-squares plane-fit (ALSP) is a suitable method for real-time applications.
Fig. 5. Swiss-ranger point-cloud views corresponding to Fig. 4(c). (a) Front-view; the color corresponds to intensity. (b) Zoomed-in side-view showing noise normal to the planar surface. [Axes in mm.]
REFERENCES

[1] H. Surmann, A. Nuechter, and J. Hertzberg, "An autonomous mobile robot with a 3D laser range finder for 3D exploration and digitalization of indoor environments," Robotics and Autonomous Systems, vol. 45, no. 3-4, pp. 181-198, 2003.
[2] CSEM, The SwissRanger, Manual V1.02, 8048 Zurich, Switzerland, 2006. [Online]. Available: http://www.swissranger.ch
[3] PhotonIC (R) PMD 3k-S, PMD Technologies, 2008. [Online]. Available: http://www.pmdtec.com
[4] A. Nüchter, K. Lingemann, and J. Hertzberg, "6D SLAM: 3D mapping outdoor environments," Journal of Field Robotics, vol. 24, no. 8/9, pp. 699-722, 2007.
[5] J. Weingarten and R. Siegwart, "3D SLAM using planar segments," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Beijing, 2006.
[6] J. Weingarten, "Feature-based 3D SLAM," Ph.D. dissertation, EPFL, Lausanne, Switzerland, 2006. [Online]. Available: http://library.epfl.ch/theses/?nr=3601
[7] P. Kohlhepp, P. Pozzo, M. Walther, and R. Dillmann, "Sequential 3D-SLAM for mobile action planning," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 1, pp. 722-729, Sept.-Oct. 2004.
[8] J. Poppinga, N. Vaskevicius, A. Birk, and K. Pathak, "Fast plane detection and polygonalization in noisy 3D range images," in IEEE Int. Conf. on Intelligent Robots and Systems (IROS), Nice, France, 2008.
[9] C. Wang, H. Tanahashi, H. Hirayu, Y. Niwa, and K. Yamamoto, "Comparison of local plane fitting methods for range data," vol. 1, 2001, pp. -663.
[10] Y. Kanazawa and K. Kanatani, "Reliability of fitting a plane to range data," IEICE Transactions on Information and Systems, vol. E78-D, no. 12, pp. 1630-1635, 1995.
Fig. 6. Results for planes extracted from the views of Fig. 4. In Fig. 6(a), planes were matched and aligned using our registration algorithm. Corresponding matched planes are drawn with the same color. Unmatched and filtered-out planes are shown grayed out. Fig. 6(b) shows the result of applying plane fusion based on the methodology described in Sec. V to Fig. 6(a).
[11] K. Kanatani, Statistical Optimization for Geometric Computation. New York: Dover Publications, Inc., 2005, ISBN 0486443086.
[12] D. Anderson, H. Herman, and A. Kelly, "Experimental characterization of commercial flash ladar devices," in International Conference on Sensing Technologies, Palmerston North, New Zealand, November 2005. [Online]. Available: http://www.frc.ri.cmu.edu/ alonzo/pubs/papers/icst05FlashLadar.pdf
[13] F. Prieto, T. Redarce, P. Boulanger, and R. Lepage, "CAD-based range sensor placement for optimum 3D data acquisition," in Second International Conference on 3-D Imaging and Modeling (3DIM'99). Los Alamitos, CA, USA: IEEE Computer Society, 1999, p. 0128. [Online]. Available: http://doi.ieeecomputersociety.org/10.1109/IM.1999.805343
[14] K. Pathak, A. Birk, S. Schwertfeger, and J. Poppinga, "3D forward sensor modeling and application to occupancy grid based sensor fusion," in International Conference on Intelligent Robots and Systems (IROS). San Diego, USA: IEEE Press, 2007.
[15] D. S. Sivia, Data Analysis: A Bayesian Tutorial. Oxford University Press, 1996.
[16] J. Weng, T. S. Huang, and N. Ahuja, "Motion and structure from two perspective views: algorithms, error analysis, and error estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 5, pp. 451-476, May 1989.
[17] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge University Press, 1985.