Localization with Improved Proposals
Stefan Oßwald, Christoph Gonsior
Department of Computer Science, University of Freiburg
Abstract
Solving localization and navigation tasks reliably is an essential objective for autonomous mobile systems and robots. A popular technique for estimating the robot's pose is localization with particle filters, also known as Monte Carlo localization (MCL). In this paper we present an MCL-based localization system that employs informed proposal distributions to sample particles during the motion step of the filter. While the standard MCL model computes the sampling proposal distribution based only on the movement of the robot, our approach also takes the most recent sensor observation into account. Experiments using datasets gathered by real robots in an indoor environment show that our approach is able to estimate the robot's pose with less uncertainty than the standard MCL implementation.
1 Introduction
To navigate successfully within a known environment and to accomplish high-level tasks, a mobile robot needs to estimate its pose accurately. The pose consists of the robot's position and orientation in the world coordinate system. Tracking the pose over time while the robot moves through its environment is essential; this task is commonly referred to as localization. The localization problem itself consists of two subproblems: global position estimation and local position tracking. In global position estimation the robot has to find its pose within the environment without prior information about its start pose, whereas in local position tracking the robot knows its start pose and has to track its pose while moving. As the localization problem is key to mobile robot navigation, several algorithms have been proposed to solve it. Kalman-filter-based techniques are commonly used to track the robot's pose, as they provide accurate and robust results. However, since Kalman filters cannot represent multimodal distributions, they cannot be used for global localization in environments containing symmetries, as this task requires tracking multiple hypotheses to resolve ambiguities. Therefore, recent localization approaches use sampling-based algorithms to represent the multimodal pose posterior and to model ambiguities.
In this paper we extend localization with particle filters, known as Monte Carlo localization (MCL), to use informed proposal distributions to propagate the particle set. The standard MCL approach constructs the proposal distribution by propagating the pose posterior of the previous time step according to the robot's odometry model. The standard algorithm predicts the robot's pose by drawing from the proposal distribution, estimates the likelihood of the sensor readings, and weights the new poses to match the target distribution. If the robot is equipped with accurate sensors such as laser range finders, ignoring the sensor readings during pose propagation leads to a suboptimal proposal distribution. Compared to laser sensor readings, the odometry information contains more uncertainty and thus leads to a flat proposal distribution. This often results in a large number of low-weighted particles outside the meaningful area of the observation likelihood, so many particles are required to cover the most likely poses. Considering the more precise sensor readings leads to a more peaked proposal distribution, so we expect the particle cloud to be more focused and to better represent the meaningful area of the target distribution. A more peaked pose posterior reduces the pose uncertainty and enables the robot to navigate more precisely. Improved proposals have already been applied successfully to map generation in the SLAM context [Montemerlo et al., 2003]. The contribution of our work is to transfer this idea to the localization problem and to evaluate the performance of Monte Carlo localization using improved proposals. This paper is organized as follows: in the next section, we discuss related work and compare our approach to existing algorithms from different fields. Section 3 explains how Monte Carlo localization works in general and describes the standard model we compare our approach with. In Section 4 we introduce our algorithm for localization with improved proposals and present some implementation details. The results of the experiments we carried out on real-world datasets are shown and evaluated in Section 5. We conclude the paper and discuss future prospects in Section 6.
2 Related Work
Several promising approaches for solving the localization problem have been presented in the field of robotics over the last decades. In addition, other scientific fields have introduced techniques for state estimation and filtering that can be adapted to algorithms for navigation and localization. In this section we give an overview of the techniques we used and the algorithms that influenced our work, present similar approaches, and compare them to our algorithm. An extensive introduction to the localization problem and probabilistic approaches to localization and mapping, including standard motion models and observation models for computing the likelihood of an observation, is presented in [Thrun et al., 2005]. We refer to their motion model as the standard motion model and extend it to our improved motion model; in Section 5 we compare the two. Since many problems in localization are closely related to the type of representation used for the probability densities over the state space, Bayesian filtering methods are a promising starting point. Similar approaches have appeared in other fields under names such as "condensation" or "survival of the fittest". Monte Carlo localization was first introduced in 1998 [Dellaert et al., 1998] and is based on Bayesian filtering. Instead of describing the probability density function itself, a set of samples is maintained to represent the robot's state. To overcome a range of limitations of different MCL variants, a dual sampler called Mixture-MCL was presented in [Thrun et al., 2001]. This approach combines conventional sampling from the proposal with an inverted version of the sampling process and performs more robustly and efficiently. In [Doucet et al., 2000] an approach is presented that uses Rao-Blackwellisation to improve the efficiency and accuracy of particle filters. Their method is used in a wide range of fields such as computer vision, speech processing, dynamic factor analysis, and robotics. Typical particle filter applications as described in [Stentz et al., 2003; Dellaert et al., 1998] use the odometry motion model as the proposal distribution and draw samples from it. For many types of robots this motion model is easy to compute, but it needs a comparably high number of particles when the sensor information is significantly more precise than the odometry information used for estimating the pose. Considering the accuracy of the robot's sensor in the proposal distribution was also suggested in [Grisetti et al., 2005] and [Montemerlo et al., 2003]. While Grisetti et al. also used this idea in [Grisetti et al., 2007] for grid mapping together with an adaptive resampling technique, Montemerlo et al. improved their FastSLAM algorithm and proved its convergence for linear SLAM problems. Both groups were able to significantly reduce the uncertainty about the robot's pose in the prediction step of the filter. Particle filtering that uses sensor information in the prediction step was also presented in [Doucet et al., 2001] for general filters and in [De Freitas et al., 2000] as a Markov chain Monte Carlo technique for neural networks. The unscented particle filter presented in [van der Merwe et al., 2001; 2000] likewise uses the latest sensor information to sample in meaningful areas of the distribution. In [Kanazawa et al., 1995] a similar approach is presented for survival of the fittest, in which the uncertainty of the state is used as information for spreading the samples.
3 Monte Carlo Localization
Probabilistic approaches are among the most promising candidates for solving localization problems. In this section we briefly introduce the fundamental techniques used in our approach.
3.1 The Bayes Filter
To estimate its current pose $x_t$, the robot needs to estimate the posterior over its state, also called its belief,
$$Bel(x_t) = P(x_t \mid u_1, z_1, \ldots, u_t, z_t),$$
given the sequence of odometry measurements $u_1, \ldots, u_t$ and observations $z_1, \ldots, z_t$ at times $1 \ldots t$. In the Bayes filter approach this posterior is computed recursively as
$$Bel(x_t) = \eta \, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1}) \, Bel(x_{t-1}) \, dx_{t-1},$$
where $P(z \mid x)$ is the probability of measuring $z$ given that the system is in state $x$, also known as the robot's observation model. $P(x \mid u, x')$ is the probability of being in state $x$ given the previous state $x'$ and the odometry measurement $u$; it denotes the motion model. $\eta$ is a normalization constant that scales the posterior so that it integrates to 1.
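As an illustration of this recursion, the following sketch implements one update of a discrete Bayes filter over a finite state grid. The state space, motion model, and observation model are hypothetical placeholders used only for illustration; they are not part of the system described in this paper.

import numpy as np

def bayes_filter_update(belief, motion_model, obs_likelihood):
    """One recursive update of a discrete Bayes filter.

    belief         -- 1-D array Bel(x_{t-1}) over a finite set of states
    motion_model   -- matrix P[x_t, x_{t-1}] for the current odometry u_t
    obs_likelihood -- 1-D array P(z_t | x_t) for the current observation
    """
    # Prediction: integrate (sum) the motion model over the previous belief.
    predicted = motion_model @ belief
    # Correction: weight by the observation likelihood.
    posterior = obs_likelihood * predicted
    # Normalization constant eta ensures the belief sums to 1.
    return posterior / posterior.sum()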
3.2 The Particle Filter
Monte Carlo localization implements the Bayes filter approach using a particle filter, which is a robust and efficient method to represent the non-Gaussian posterior. Instead of describing the probability density function itself, particle filters maintain a set $S$ of $N$ samples known as particles, propagate these particles according to the motion model, and weight them according to the observation model. Each particle $\langle s_i, w_i \rangle$ represents a hypothesis about the current state $s_i$ of the robot, weighted with an importance factor $w_i$. The posterior can be reconstructed from the particle set as
$$p(x) = \sum_{i=1}^{N} w_i \, \delta_{s_i}(x).$$
The particle filter algorithm proceeds in two phases:
1. In the prediction phase, samples are drawn from the proposal distribution. As the Markov assumption holds, the current state $x_t$ only depends on the previous state $x_{t-1}$ and the motion command $u_t$. This results in the probability $p(x_t \mid x_{t-1}, u_t)$ of being in state $x_t$ given $x_{t-1}$ and $u_t$. For wheeled robots an odometry motion model can be used to estimate this probability. In this step the weights $w_t^{(i)}$ of the particles do not change.
2. In the correction phase, the samples drawn in the prediction step are weighted according to the sensor model. To incorporate the sensor information, we use the most recent measurement $z_t$ to compute the likelihood $p(z_t \mid x_t)$ that $z_t$ is observed given the robot is in state $x_t$. In this step the states $x_t^{(i)}$ of the particles do not change.
Finally, the particle set is resampled according to the particles’ weights, so unlikely samples are removed in favor of more likely particles.
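The following Python sketch summarizes this prediction-correction-resampling loop for a generic particle filter (standard MCL). The motion and observation models are passed in as functions and are assumptions for illustration, not the exact interfaces of our implementation.

import numpy as np

def particle_filter_step(particles, weights, u, z, sample_motion, obs_likelihood, rng):
    """One step of a generic particle filter (standard MCL).

    particles      -- array of shape (N, 3) holding poses (x, y, theta)
    weights        -- array of shape (N,) with importance weights summing to 1
    u              -- current odometry measurement
    z              -- current sensor observation
    sample_motion  -- function (pose, u, rng) -> new pose, samples p(x_t | x_{t-1}, u_t)
    obs_likelihood -- function (z, pose) -> p(z_t | x_t)
    """
    n = len(particles)
    # Prediction: propagate each particle through the motion model.
    predicted = np.array([sample_motion(p, u, rng) for p in particles])
    # Correction: weight each particle by the observation likelihood.
    weights = weights * np.array([obs_likelihood(z, p) for p in predicted])
    weights /= weights.sum()
    # Resampling: draw particles proportionally to their weights.
    idx = rng.choice(n, size=n, p=weights)
    return predicted[idx], np.full(n, 1.0 / n)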
3.3 The Beam Endpoint Model
In the correction step of the particle filter, the drawn samples are weighted according to the observation model. We use a modified beam endpoint model [Thrun et al., 2005] as observation model in our implementations. In a preprocessing step we compute a distance map from the given map $m$ of the environment, containing for each grid cell $c$ the distance $dist(c)$ to the nearest obstacle. We use this distance map in the particle filter algorithm to compute the likelihood of measurements. Let $c_t^{[1]}, \ldots, c_t^{[n]}$ denote the beam endpoints of scan $z_t$; then the likelihood of this measurement is computed as
$$p(z_t \mid m, x_t) = \exp\!\left(-\frac{c}{n} \sum_{i=1}^{n} dist(c_t^{[i]})\right)$$
with a constant scaling factor $c$.
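A minimal sketch of this likelihood computation might look as follows, assuming the distance map has been precomputed on the same grid as the occupancy map and that the scan endpoints have already been projected into world coordinates for the evaluated pose; the grid indexing convention used here is an assumption for illustration.

import numpy as np

def beam_endpoint_likelihood(scan_endpoints_world, dist_map, resolution, scale_c):
    """Beam endpoint likelihood p(z_t | m, x_t) from a precomputed distance map.

    scan_endpoints_world -- array of shape (n, 2) with beam endpoints already
                            transformed into world coordinates for pose x_t
    dist_map             -- 2-D array with the distance to the nearest obstacle
                            per grid cell (rows index y, columns index x)
    resolution           -- grid cell size in meters (e.g. 0.1)
    scale_c              -- constant scaling factor c of the model
    """
    # Convert metric endpoint coordinates into grid indices (map origin at (0, 0)).
    cells = np.floor(scan_endpoints_world / resolution).astype(int)
    cells[:, 0] = np.clip(cells[:, 0], 0, dist_map.shape[1] - 1)
    cells[:, 1] = np.clip(cells[:, 1], 0, dist_map.shape[0] - 1)
    # Look up the obstacle distance for every beam endpoint.
    distances = dist_map[cells[:, 1], cells[:, 0]]
    # Average the distances and exponentiate, as in the model above.
    return np.exp(-scale_c / len(distances) * distances.sum())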
4 Localization with Improved Proposals
In this section we introduce our approach for localization with improved proposals and describe the steps of our algorithm in detail. As explained above, the sensor noise is typically lower than the noise in the odometry when laser range finders are used as the measurement unit. Instead of predicting poses using the odometry model and correcting the predictions using the sensor readings, we exploit the accuracy of the sensor readings by considering them already in the prediction step, which leads to more focused predictions. The improved proposal distribution is the product of the next-state distribution $p(x_t \mid x_{t-1}, u_t)$ and the probability $p(z_t \mid x_t)$ of observing the sensor readings $z_t$ given the predicted state. As the observation likelihood dominates the product when an accurate sensor is used, we approximate the product by setting $p(x_t \mid x_{t-1}, u_t) = \text{const.}$ to simplify the computation. Algorithm 1 shows our approach of a particle filter with improved proposals; it is based on the algorithm presented in [Grisetti et al., 2007]. Whenever a new tuple consisting of an odometry measurement $u_{t-1}$ and a sensor observation $z_t$ becomes available, our algorithm computes the proposal distribution individually for each particle and updates the particle's pose and weight. The algorithm performs the following steps:
1. Compute the particle's new pose $x'_t$ based on its old pose $x_{t-1}$ and the odometry measurement $u_{t-1}$. The result is the most likely pose considering only the odometry measurement and is called the mode.
2. Draw $K$ test points around $x'_t$ from a Gaussian distribution.
3. For each test point $x_k \in \{x_1, \ldots, x_K\}$, compute the target distribution $p(x_t \mid x_{t-1}, u_{t-1}) \cdot p(z_t \mid m, x_k)$. As the observation likelihood dominates the product, we simplify the computation by assuming $p(x_t \mid x_{t-1}, u_{t-1}) = \text{const.}$, so this factor can be neglected. The observation likelihood is computed using the beam endpoint model.
4. Compute the sample mean and covariance of the test points weighted by the target distribution. The result is the Gaussian approximation of the proposal, limited to the area surrounding $x'_t$.
5. Sample the particle's new pose by drawing from the Gaussian proposal.
6. Correct the particle's weight and add the new particle to the particle set.
After generating the new particle set, the algorithm resamples the particles to increase the number of particles in the most likely areas.
Algorithm 1 Particle Filter with Improved Proposals
Require: $S_{t-1}$, the sample set of the previous time step; $z_t$, the most recent laser scan; $u_{t-1}$, the most recent odometry measurement; $m$, the map
Ensure: $S_t$, the new sample set
  $S_t \leftarrow \emptyset$
  for all $s_{t-1} \in S_{t-1}$ do
    $\langle x_{t-1}, w_{t-1} \rangle \leftarrow s_{t-1}$
    // Compute mode $x'_t$
    $x'_t \leftarrow x_{t-1} \oplus u_{t-1}$
    // Sample around the mode
    for $k \in \{1, \ldots, K\}$ do
      $x_k \sim \mathcal{N}(x'_t, \Sigma_{mode})$
    end for
    // Compute Gaussian proposal
    $\mu_t \leftarrow (0, 0, 0)^T$
    $\eta \leftarrow 0$
    for all $x_k \in \{x_1, \ldots, x_K\}$ do
      $\mu_t \leftarrow \mu_t + x_k \cdot p(z_t \mid m, x_k)$
      $\eta \leftarrow \eta + p(z_t \mid m, x_k)$
    end for
    $\mu_t \leftarrow \mu_t / \eta$
    $\Sigma_t \leftarrow 0$
    for all $x_k \in \{x_1, \ldots, x_K\}$ do
      $\Sigma_t \leftarrow \Sigma_t + (x_k - \mu_t)(x_k - \mu_t)^T \cdot p(z_t \mid m, x_k)$
    end for
    $\Sigma_t \leftarrow \Sigma_t / \eta$
    // Sample new pose
    $x_t \sim \mathcal{N}(\mu_t, \Sigma_t)$
    // Correction step
    $w_t \leftarrow w_{t-1} \cdot \eta$
    // Update particle set
    $S_t \leftarrow S_t \cup \{\langle x_t, w_t \rangle\}$
  end for
  $S_t \leftarrow \text{resample}(S_t)$
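For concreteness, the following Python sketch implements the per-particle update of Algorithm 1 under the same simplifying assumption that the motion term is constant. The observation-likelihood function and the pose-composition operator ⊕ are passed in as placeholders and are not the exact interfaces of the published implementation.

import numpy as np

def improved_proposal_update(pose_prev, weight_prev, u, z,
                             obs_likelihood, pose_compose,
                             sigma_mode, K, rng):
    """Per-particle update with an improved (sensor-informed) proposal.

    pose_prev      -- previous pose x_{t-1} as array (x, y, theta)
    weight_prev    -- previous importance weight w_{t-1}
    u              -- odometry measurement u_{t-1}
    z              -- most recent laser scan z_t
    obs_likelihood -- function (z, pose) -> p(z_t | m, pose), e.g. beam endpoint model
    pose_compose   -- function implementing the pose composition x_{t-1} (+) u_{t-1}
    sigma_mode     -- 3x3 covariance for sampling test points around the mode
    K              -- number of test points
    """
    # Mode: most likely pose given only the odometry.
    mode = pose_compose(pose_prev, u)
    # Draw K test points around the mode.
    test_points = rng.multivariate_normal(mode, sigma_mode, size=K)
    # Evaluate the (approximated) target distribution at each test point.
    likelihoods = np.array([obs_likelihood(z, x_k) for x_k in test_points])
    eta = likelihoods.sum()
    # Gaussian approximation of the proposal: weighted mean and covariance.
    # (Angle wrap-around handling is omitted in this sketch.)
    mu = (likelihoods[:, None] * test_points).sum(axis=0) / eta
    diff = test_points - mu
    sigma = (likelihoods[:, None, None] * np.einsum('ki,kj->kij', diff, diff)).sum(axis=0) / eta
    # Sample the new pose from the proposal and correct the weight by eta.
    pose_new = rng.multivariate_normal(mu, sigma)
    return pose_new, weight_prev * eta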
Figure 1: Occupancy map of building 079. The grid cell resolution is 0.1 m, the mapped area is about 400 m².
5 Experiments
In this section we present the setup used to evaluate our algorithm and the results for global localization and local pose tracking.
5.1 Setup
To evaluate our approach, we tested our implementation on an indoor dataset gathered with a real robot equipped with a laser range finder. The length of the robot's trajectory is 290 m, and the time needed to travel this distance is 9 min 30 s. Our implementation uses the robot's original logfile to localize the robot on a map generated with Grisetti's implementation of Olson's map optimization approach [Grisetti et al., 2010; Olson, 2008]. Figure 1 shows the occupancy map used in our experiments. To measure the accuracy of our approach, the robot's trajectory as computed by the mapping algorithm was taken as a ground-truth reference. We expect that using improved proposal distributions leads to more focused particle clouds after convergence, and thus to a more precise pose estimate than standard MCL.
Figure 2: Smaller sampling covariances lead to more focused particle clouds.
To measure the "focusedness" of the particle cloud, we compute the square root of the generalized weighted sample variance,
$$f_t := \sqrt{\det \Sigma_t}, \qquad (1)$$
which is proportional to the area of the error ellipse for a fixed confidence level [Rencher, 2002]. Thus, the value of $f_t$ can be used to measure how focused the particle cloud is. Algorithm 1 draws samples around the mode from a Gaussian distribution with covariance matrix $\Sigma_{mode}$. This parameter influences the generalized sample variance of the particle cloud, as Figure 2 shows. The lower bound of the generalized sample variance is given by the grid resolution, so the particle cloud cannot be arbitrarily focused. However, the parameter $\Sigma_{mode}$ has to be chosen carefully. If the covariances are too small, the samples do not cover the meaningful area of the proposal distribution, resulting in an incorrect posterior, so the algorithm can lose track of the true position. Our algorithm cannot recover from delocalization once the particle cloud no longer covers the true position. If the sampling covariances are chosen too large, the particle cloud tends to diverge, as particles can be scattered over a wide range around the mode. Our experiments show that translational variances below $10^{-4}$ lead to overconfident pose posterior estimates, eventually resulting in incorrect pose estimates, whereas variances of $10^{-4}$ or greater still produce reliable tracking results.
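As an illustration, the weighted sample covariance and the resulting focusedness value $f_t$ could be computed from a particle cloud as in the following sketch, here over the particles' (x, y) positions. This is a minimal example under that assumption, not the exact evaluation code used for the experiments.

import numpy as np

def focusedness(positions, weights):
    """Compute f_t = sqrt(det(Sigma_t)) for a weighted particle cloud.

    positions -- array of shape (N, 2) with the particles' (x, y) positions
    weights   -- array of shape (N,) with normalized importance weights
    """
    # Weighted mean of the particle positions.
    mean = np.average(positions, axis=0, weights=weights)
    # Weighted sample covariance matrix Sigma_t.
    diff = positions - mean
    sigma = (weights[:, None, None] * np.einsum('ki,kj->kij', diff, diff)).sum(axis=0)
    # Square root of the generalized sample variance, Equation (1).
    return np.sqrt(np.linalg.det(sigma))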
5.2 Global Localization
For comparing the global localization behavior of the standard MCL implementation and our implementation with improved proposals, we defined the particle cloud to be converged if at least 95% of the particles are located within a radius of 1 m around the weighted mean of all particle poses. If only a small number of particles is used, the ground truth location might not be covered by the initial distribution. In this case the particle filter might not recognize ambiguities and converge to a symmetrical location or not converge at all.
Figure 3: Required number of particles for successful localization.
If too few particles are used, the filter sometimes converges to symmetrical locations or does not converge at all. Therefore, we define the global localization to be successful if the weighted mean of all particle poses is less than 1 m away from the ground-truth pose provided by the map generation algorithm. Figure 3 shows the success rate depending on the number of particles used during global localization. When a small number of particles is used, our implementation using improved proposals appears to be more successful than the standard approach. This can be expected, as the standard implementation samples the particles' new poses randomly based on the odometry information, whereas our approach computes the likelihood for several test points around the pose predicted by the odometry information and samples from the resulting distribution. Hence, our implementation can correct small pose errors of particles near the true pose, increasing the probability that such particles receive high weights. To evaluate whether this difference is significant, more experiments would be required. With both approaches, a high number of particles is required for global localization: as long as no prior about the robot's pose is given, the proposal distribution is constant over all possible states, so many hypotheses are required to cover it on the whole map.
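The convergence and success criteria used above could be checked with short helpers like the following. The 1 m radius and 95% fraction are the thresholds stated in the text; the functions themselves are illustrative sketches, not part of the published implementation.

import numpy as np

def cloud_converged(positions, weights, radius=1.0, fraction=0.95):
    """Particle cloud counts as converged if at least `fraction` of the
    particles lie within `radius` meters of the weighted mean position."""
    mean = np.average(positions, axis=0, weights=weights)
    dists = np.linalg.norm(positions - mean, axis=1)
    return np.mean(dists <= radius) >= fraction

def localization_successful(positions, weights, ground_truth, max_error=1.0):
    """Global localization counts as successful if the weighted mean
    position is within `max_error` meters of the ground-truth position."""
    mean = np.average(positions, axis=0, weights=weights)
    return np.linalg.norm(mean - ground_truth) <= max_error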
5.3 Local Pose Tracking
Figure 4: Evolution of the focusedness of the particle cloud over time (5000 particles).
Figure 4 shows how the size of the particle cloud evolves over time. After convergence, the value $f$ introduced in Equation (1), which represents the focusedness of the particle cloud, stays at a low level. Our experiments show that the standard MCL implementation produces an average value of $f = 1.49 \cdot 10^{-3}$ for 5000 particles, whereas our approach using improved proposals results in a more focused particle cloud with $f = 0.43 \cdot 10^{-3}$. This difference is significant according to a t-test at the 95% confidence level. Figure 5 visualizes the size of the particle cloud for both approaches. At the same time, the average deviation of the weighted pose mean from the ground-truth pose does not differ significantly between the improved model and the standard implementation. Both approaches lead to an average deviation of about 0.2 m (2 grid cells) from the ground-truth position. Thus, we conclude that our approach does not yield more accurate pose estimates, but it decreases the uncertainty of the estimate compared to the standard MCL implementation.
Figure 5: Visualization of the particle cloud after convergence. Our approach (right) produces a more focused particle cloud than the standard MCL implementation (left). The figure is best viewed in color.
5.4 Computational Costs
Our implementation has not been optimized for computational efficiency. In fact, our algorithm needs significantly more time than the standard MCL implementation, so it cannot be used for online localization. For each particle we need a large number of samples around the mode to compute the Gaussian proposal, and for each sample we have to evaluate the observation likelihood by applying the observation model to the complete laser scan. To increase the efficiency of the algorithm, more sophisticated methods to compute the Gaussian proposals are required.
6 Conclusions
In this work, we extended the standard Monte Carlo localization algorithm to use improved proposal distributions. Our algorithm integrates accurate sensor measurements into the proposal distribution instead of using the sensor measurements to correct predictions based on the odometry model. To evaluate the performance of our algorithm, we conducted experiments with both the standard MCL approach and an implementation of our algorithm, using logfiles gathered with a real robot. Our experiments show that our approach leads to more focused particle clouds compared to the standard MCL approach, thus decreasing the uncertainty in the pose estimation. Hence, our algorithm enables robots to navigate more precisely in known environments. Before our algorithm can be used for online localization of a robot, further investigation into more efficient ways to compute the Gaussian proposals is required to reduce the computational costs of the algorithm.
References
[De Freitas et al., 2000] J. F. G. De Freitas, M. A. Niranjan, A. H. Gee, and A. Doucet. Sequential Monte Carlo methods to train neural network models. Neural Computation, 12(4):955–993, 2000.
[Dellaert et al., 1998] F. Dellaert, D. Fox, W. Burgard, and S. Thrun. Monte Carlo localization for mobile robots. In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), 1998.
[Doucet et al., 2000] A. Doucet, N. de Freitas, K. P. Murphy, and S. J. Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. In UAI '00: Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, pages 176–183, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc.
[Doucet et al., 2001] A. Doucet, N. De Freitas, and N. Gordon. Sequential Monte Carlo Methods in Practice. Springer, 2001.
[Grisetti et al., 2005] G. Grisetti, C. Stachniss, and W. Burgard. Improving grid-based SLAM with Rao-Blackwellized particle filters by adaptive proposals and selective resampling. In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), 2005.
[Grisetti et al., 2007] G. Grisetti, C. Stachniss, and W. Burgard. Improved techniques for grid mapping with Rao-Blackwellized particle filters. IEEE Transactions on Robotics, 23(1):34–46, 2007.
[Grisetti et al., 2010] G. Grisetti, R. Kümmerle, C. Stachniss, U. Frese, and C. Hertzberg. Hierarchical optimization on manifolds for online 2D and 3D mapping. In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), Anchorage, Alaska, 2010. To appear.
[Kanazawa et al., 1995] K. Kanazawa, D. Koller, and S. Russell. Stochastic simulation algorithms for dynamic probabilistic networks, 1995.
[Montemerlo et al., 2003] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit. FastSLAM 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges. In Proc. of the Int. Conf. on Artificial Intelligence (IJCAI), pages 1151–1156, 2003.
[Olson, 2008] E. Olson. Robust and Efficient Robotic Mapping. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, June 2008.
[Rencher, 2002] A. C. Rencher. Methods of Multivariate Analysis. John Wiley and Sons, 2nd edition, 2002.
[Stentz et al., 2003] A. Stentz, D. Fox, and M. Montemerlo. FastSLAM: A factored solution to the simultaneous localization and mapping problem with unknown data association. In Proceedings of the AAAI National Conference on Artificial Intelligence, pages 593–598. AAAI, 2003.
[Thrun et al., 2001] S. Thrun, D. Fox, W. Burgard, and F. Dellaert. Robust Monte Carlo localization for mobile robots. Artificial Intelligence, 2001.
[Thrun et al., 2005] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. MIT Press, 2005.
[van der Merwe et al., 2000] R. van der Merwe, A. Doucet, N. de Freitas, and E. Wan. The unscented particle filter. Technical report, Cambridge University, Engineering Department, 2000.
[van der Merwe et al., 2001] R. van der Merwe, N. de Freitas, A. Doucet, and E. Wan. The unscented particle filter. In Advances in Neural Information Processing Systems 13, 2001.