This full text paper was peer reviewed at the direction of IEEE Communications Society subject matter experts for publication in the IEEE ICC 2011 proceedings
Belief Condensation Filter for Navigation in Harsh Environments Santiago Mazuelas, Member, IEEE, Yuan Shen, Student Member, IEEE, and Moe Z. Win, Fellow, IEEE Laboratory for Information and Decision Systems Massachusetts Institute of Technology Cambridge, Massachusetts 02139 Email: {mazuelas, shenyuan, moewin}@mit.edu
Abstract—Real-time reliable navigation capability is a key enabler for a diverse set of important wireless applications. Traditional techniques for navigation such as the Kalman filter cannot capture the nonlinear and non-Gaussian behavior of measurements in wireless systems deployed in harsh environments. Nonparametric filters such as particle filters can cope with the complex measurement behavior only at the expense of a computational complexity beyond the reach of many low-cost navigation devices. In this paper, we establish a general framework for parametric filters based on belief condensation (BC). The new filtering technique can provide near-optimal performance with affordable complexity in highly nonlinear and non-Gaussian environments. Our methodology exploits the specific structure of the navigation problem and decomposes it in a way that the linear-Gaussian part can be solved in a closed form, while the remaining parts are addressed by an optimization process, referred to as BC. The simulation results show that the accuracy of the proposed BC filter is remarkably close to that of particle filters, while the computational complexity of the former is much lower. Index Terms—Navigation, information fusion, belief condensation filter (BCF).
I. INTRODUCTION

Filtering techniques for navigation aim to determine the posterior distribution of the positional state at each time instant based on the measurements obtained until that time [1], [2]. This inference task is performed by using Bayes' rule and predetermined statistical models for the time evolution of positional states (dynamic model) and the relationship between measurements and states (measurement model).1 Positional state evolution can be well modeled by a Markov chain, which together with the use of Bayes' rule leads to the posterior distribution computation in three steps: prediction, update, and normalization. When the initial distribution of the positional state is Gaussian, and both dynamic and measurement models are linear-Gaussian, the Gaussianity of the distribution is retained in the prediction and update steps, leading to simple closed-form solutions. The recursion given by such analytic solutions is the celebrated Kalman filter [3]. However, linear and Gaussian models are not adequate for navigation systems deployed in harsh environments, causing Kalman filters to suffer severe performance degradations. Nonparametric techniques based on Monte Carlo sampling [1], [2], [4] can handle such complex scenarios, but with a computational complexity beyond the reach of many navigation devices, especially those that are small and low-cost.

Each navigation problem has an underlying structure that is determined by the dynamic and measurement models. Kalman-like filters cannot capture the nonlinear/non-Gaussian behavior, while particle filters do not exploit the structure inherent to the specific problem, with the exception of Rao-Blackwellized (RB) particle filters. The RB filters take advantage of the specific structure by marginalizing out some of the variables analytically and filtering the remaining variables using particle filters. However, this method is applicable only to some special cases since it requires the partition of the state space [5]. The dimension of a filter, a measure of filter complexity, is the number of reals that the filter has to store in each step [6]. For instance, Kalman-like filters store the mean and covariance matrix of a state vector of size d; therefore their dimension is d(1 + (d + 1)/2). Particle filters have a dimension of M(d + 1), where the number of particles M is typically several orders of magnitude larger than d to achieve adequate performance.

In this paper, we develop a new parametric filtering technique to solve nonlinear and non-Gaussian navigation problems. In particular, our methodology decomposes the problem in a way that the linear-Gaussian part can still be solved in a closed form, while the remaining computations are solved by belief condensation (BC): a new technique for condensing a complex statistical distribution to a tractable one (Fig. 1).

This research was supported, in part, by the National Science Foundation under Grant ECCS-0901034, the Office of Naval Research Presidential Early Career Award for Scientists and Engineers (PECASE) N00014-09-1-0435, and the MIT Institute for Soldier Nanotechnologies.
1 Linear and Gaussian models have been used traditionally for both dynamic and measurement models [3].
This condensation involves finding the distribution within a parametric family that is closest to the posterior distribution after the filtering steps.2 As shown in the paper, the proposed BC filter (BCF) has a performance similar to that of particle filters, but with a much smaller dimension. 2 The "closeness" of two probability distributions can be measured by the Kullback-Leibler (KL) divergence.
978-1-61284-231-8/11/$26.00 ©2011 IEEE
Figure 1. Belief condensation can accurately represent complex distributions by tractable parametric distributions. The figure shows the probability density function (PDF) of a mixture of a beta and a Student's t distribution, together with three kinds of approximation: a Gaussian fit, 5000 samples, and an 11-parameter BC.

II. GENERALIZED FILTERING FOR NAVIGATION

In this section, we present the model for navigation problems, conventional filtering techniques, and the concept of the proposed generalized filtering.

A. Navigation Model

Consider a single agent equipped with multiple sensors for navigation. The parameters of interest are the position, velocity, acceleration, orientation, and angular velocity of the agent evolving with time t, denoted by x(t), v(t), a(t), o(t), and ω(t), respectively. These parameters form the state vector, which we denote by y(t). The multi-sensor navigation system obtains measurements at discrete times {tk, k = 1, 2, . . . , K} and updates the agent's parameters at these times based on the measurements. For convenience, we use the notation xk = x(tk), and we denote by zk the set of measurements from different sensors collected at time tk. We also denote by z1:k = {z1, z2, . . . , zk} the set of all the measurements obtained until time tk. Filtering techniques for navigation exploit two facts: i) the relationship between the positional states yk and yk−1, and ii) the relationship between the set of measurements zk and the positional state yk. The goal of these techniques is to use such relationships to optimally estimate the positional state yk at each time instant tk using all the available measurements z1:k, i.e., to determine p(yk|z1:k). In general, optimal inference requires all the measurements obtained until time instant tk to estimate the positional state yk, which may not be feasible in a real-time implementation. However, iterative inference can be carried out if the temporal evolutions of the positional states and measurements form a hidden Markov model (HMM) [7].3 In navigation systems, the state vector is formed by several derivatives of positions and orientations with respect to time, such as velocity, acceleration, and angular velocity; these variables store the relevant information about the agent's motion at each time instant, resulting in the Markov condition for the positional state evolution necessary in the HMM. Such an HMM can be completely described by only two kinds of dependence between the random variables:
i) Dynamic model: the relationship between the state vectors at times tk and tk−1, denoted by p(yk|yk−1);
ii) Measurement model: the relationship between the measurements and the state vector at each time instant, denoted by p(zk|yk).
Hence, the joint distribution of all the random variables can be factorized as

p(y1, . . . , yk, z1:k) = p(y1) · p(z1|y1) · ∏_{i=2}^{k} p(yi|yi−1) · p(zi|yi)
                      = p(y1, . . . , yk−1, z1:k−1) · p(yk|yk−1) · p(zk|yk).   (1)
Such a factorization in (1) leads to iteratively inferring the posterior distribution p(yk|z1:k) from p(yk−1|z1:k−1) and the new measurements zk. Specifically, from (1) one obtains

p(y1, . . . , yk|z1:k) = p(yk|yk−1) p(zk|yk) p(y1, . . . , yk−1|z1:k−1) / p(zk|z1:k−1)

for k > 1, and p(y1|z1) = p(z1|y1) p(y1) / p(z1). Therefore,

p(yk|z1:k) = p(zk|yk) p(yk|z1:k−1) / p(zk|z1:k−1)   (2)

where

p(yk|z1:k−1) = ∫ p(yk|yk−1) p(yk−1|z1:k−1) dyk−1.
Observing equation (2), it follows that the posterior distribution p(yk|z1:k) can be obtained in three steps:
1) prediction:

p(yk|z1:k−1) = ∫ p(yk|yk−1) p(yk−1|z1:k−1) dyk−1

2) update:

p(zk|yk) p(yk|z1:k−1)

3 An HMM is formed by two sequences of random variables, called hidden and observable, respectively. Given the hidden variable at one time instant: 1) the observable variable at this instant is independent of all other variables; and 2) the hidden variable at the following instant is independent of all the past variables.
3) normalization:

p(yk|z1:k) = p(zk|yk) p(yk|z1:k−1) / ∫ p(zk|yk) p(yk|z1:k−1) dyk.

B. Current Filtering Techniques

Solving the prediction and update steps analytically is highly complex or impossible in almost all cases. One exception is when the dynamic and measurement models are linear and Gaussian; then these steps can be easily computed in closed form (the Kalman filter). However, in nonlinear or non-Gaussian cases, analytical solutions to these steps can be obtained only for very specific models [6]. For the remaining cases the posterior distribution can only be approximated. There are two common approaches to approximate the computations involved in these steps:
• to approximate the distributions by Gaussians (Kalman-like filters); and
• to approximate the distributions by samples and perform the computation with them (particle filters [4]).
The first approach is relatively simple but can result in poor performance if the required models are far from linear and Gaussian, while the accuracy of the second approach can be improved only at the expense of high computational complexity (more samples). In essence, these suboptimal filtering techniques choose a family of distributions and perform the prediction, update, and normalization steps in a way that the estimated posterior distribution always lies within the same family. For example, Kalman-like approaches such as the extended Kalman filter (EKF) or the unscented Kalman filter (UKF) choose the family of Gaussian distributions and make the approximations by Taylor series expansions or by deterministic sample points, respectively. Particle filters use mixtures of deltas as the family to approximate the distribution. The performance and complexity of these techniques depend on the suitability of the statistical family as well as the accuracy and complexity of the approximation process.

C. Generalized Filtering

We now introduce the concept of a generalized filtering technique. Let F = {Fθ : θ ∈ Θ ⊂ Rn} denote a family of statistical distributions. The iterative process of the filtering techniques can be described as follows:
1) Estimate the posterior distribution p̂(yk−1|z1:k−1) ∈ F at time tk−1;
2) Predict using the dynamic model p(yk|yk−1) and update using the measurement model p(zk|yk) to obtain

p̃(yk|z1:k) ∝ [∫ p(yk|yk−1) p̂(yk−1|z1:k−1) dyk−1] × p(zk|yk);   (3)

3) Find the distribution p̂(yk|z1:k) ∈ F closest to the distribution p̃(yk|z1:k), i.e.,

p̂(yk|z1:k) = arg min_{q∈F} D(p̃(yk|z1:k) || q)   (4)

where D(·||·) is a suitable measure of the discrepancy, e.g., the KL-divergence between two statistical distributions.
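As an illustration of this recursion, the three steps of prediction, update, and normalization can be carried out numerically on a discretized one-dimensional state space, where the integrals become sums. The following sketch uses made-up models (a random-walk dynamic model and a Gaussian likelihood), not the paper's navigation models:

```python
import numpy as np

# Toy one-cycle Bayes filter on a grid: prediction, update, normalization.
# All model choices and numbers here are illustrative assumptions.

grid = np.linspace(-5.0, 5.0, 401)   # discretized state space for y
dy = grid[1] - grid[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

belief = gauss(grid, 0.0, 1.0)                     # p(y_{k-1} | z_{1:k-1})

# 1) prediction: marginalize the random-walk transition p(y_k | y_{k-1})
trans = gauss(grid[:, None], grid[None, :], 0.5)
predicted = trans @ belief * dy                    # p(y_k | z_{1:k-1})

# 2) update: multiply by the likelihood of a measurement z_k = 1.2
unnormalized = gauss(1.2, grid, 0.8) * predicted

# 3) normalization
posterior = unnormalized / (unnormalized.sum() * dy)
print(posterior.sum() * dy)                        # 1.0 (up to rounding)
```

The posterior mean lands between the prior mean and the measurement, weighted by the predicted and measurement variances, as expected from the Gaussian special case.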
In the following sections, we apply this concept of generalized filtering to navigation problems in harsh environments, select a suitable parametric family of distributions F, and design the optimization procedure for (4).

III. DYNAMIC AND MEASUREMENT MODELS

In this section, we describe the models for the system dynamics and the measurements obtained by navigation sensors in harsh environments.

A. Dynamic Model

Let x(t) and o(t) denote the position and orientation of the agent node at time t, where o(t) can be represented by a rotation vector [8].4 Both x(t) and o(t) can be modeled as analytic functions of time. Hence, at each time instant, they can be approximated by truncated Taylor expansions. In particular, a component β of the state vector at time tk+1 can be approximated as

βk+1 ≈ βk + β(1)k (tk+1 − tk) + . . . + β(n)k (tk+1 − tk)^n / n!   (5)

where the error of this approximation is

β(n+1)(ξ) (tk+1 − tk)^(n+1) / (n + 1)!

and ξ is some point in the interval [tk, tk+1]. Therefore, the dynamic model can be written as

yk+1 = Hk yk + nk

where the matrix Hk is obtained from the Taylor series expansions, and the approximation error nk can be modeled as a random variable. Commonly this error is modeled as a zero-mean Gaussian variable (i.e., a discrete Wiener process) [9]. Thus, with wide generality, the dynamic model for the state vector in navigation can be considered linear and Gaussian.

B. Measurement Model

Navigation systems can use a variety of measurements obtained by multiple sensors such as radio frequency (RF) receivers, global positioning system (GPS) receivers, inertial measurement units (IMUs), Doppler sensors, compasses, etc. The set of measurements obtained by the agent at each time instant tk forms the vector zk. The relationship between the positional state vector and zk can be described by the likelihood model p(zk|yk). Here we focus on the case in which the agent obtains IMU and GPS measurements in harsh environments.
4 o(t) ∈ R3 if x(t) ∈ R3, and o(t) ∈ R if x(t) ∈ R2.
1) IMU Measurements: An IMU takes two kinds of measurements, the angular velocity about the body frame, zω, and the force, zf [10], measured by gyros and accelerometers, respectively. The gyro measurement of the angular velocity at each time instant is given by

zω = ω + bωin + Sωin ω + nω

where ω ∈ R3 is the true angular velocity, bωin ∈ R3 and Sωin ∈ R are the in-run bias and scale factor (both following linear-Gaussian dynamics), and nω ∈ R3 is a Gaussian vector with zero mean and covariance matrix Kω. Hence, p(zωk|yk) is Gaussian with mean ω + bωin + Sωin ω and covariance Kω. Similarly, the measurement of the force at each time instant is given by
zf = f + bfin + Sfin f + nf = (1 + Sfin) · CT(o) · (a − g) + bfin + nf

where a, f ∈ R3 are the true acceleration and force, respectively, g ∈ R3 is the gravity, bfin ∈ R3 and Sfin ∈ R are the in-run bias and scale factor (both following linear-Gaussian dynamics), and nf ∈ R3 is a Gaussian vector with zero mean and covariance matrix Kf ∈ R3×3. Moreover, C(o) ∈ R3×3 can be written, by the Rodrigues rotation formula [8], as5

C(o) = I + (sin ‖o‖ / ‖o‖) [o]× + ((1 − cos ‖o‖) / ‖o‖2) [o]×2
where [o]× is the skew-symmetric form of the rotation vector,

[o]× = [ 0  −oz  oy ;  oz  0  −ox ;  −oy  ox  0 ].

Hence, p(zfk|yk) is a Gaussian distribution with mean (1 + Sfin) · CT(o) · (a − g) + bfin and covariance matrix Kf. Note that the relationship between the force measurement and the state vector is nonlinear.
2) GPS Measurements: The measurement of the pseudorange from satellite i with known position xGi is given by [10]

zGi = ‖xGi − x‖ + c · btR + c · StR + Bi + bi + ni
where c is the propagation speed, btR and StR are the clock bias and drift (both following linear-Gaussian dynamics), Bi is the pseudorange error (following linear-Gaussian dynamics), bi is the bias due to non-line-of-sight (NLOS) and multipath propagation (following some distribution with nonnegative values), and ni is white Gaussian noise. Note that the relationship between the pseudorange measurements and the state vector is nonlinear. Moreover, the distribution of the pseudorange is not Gaussian due to the existence of the NLOS/multipath bias bi.
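The non-Gaussian character of the pseudorange is easy to see by simulation. The sketch below is a hypothetical simplification of the model above, keeping only the true range, an exponential NLOS bias bi, and the white noise ni (clock and pseudorange-error terms omitted); the numbers, 2 m noise and 6 m mean bias, follow the simulation setup of Section V:

```python
import numpy as np

# Simulate simplified NLOS pseudoranges: range + exponential bias + noise.
# Illustrative assumption: clock bias/drift and B_i terms are omitted.

rng = np.random.default_rng(0)
true_range = 20.0                       # ||x_i^G - x|| in meters (made up)

def pseudorange_samples(n: int) -> np.ndarray:
    noise = rng.normal(0.0, 2.0, n)     # thermal noise n_i, std 2 m
    bias = rng.exponential(6.0, n)      # NLOS/multipath bias b_i, mean 6 m
    return true_range + bias + noise

z = pseudorange_samples(200_000)
# The measurement is biased high and right-skewed, so a single Gaussian
# is a poor model for p(z_i^G | y_k).
print(z.mean() - true_range)            # close to the 6 m mean bias
```

This skewed, shifted distribution is exactly the behavior that a Gaussian measurement model, and hence a Kalman-like filter, cannot capture.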
IV. BELIEF CONDENSATION FILTER

In this section, we present the key ideas of the BCF as well as the associated optimization process.

A. Kalman-Like Filters and Particle Filters

As mentioned in previous sections, current filtering techniques can be broadly categorized into two main branches: Kalman-like approaches and particle filters. The former approximate the posterior distribution as a Gaussian, and thus they do not perform well, or fail altogether, if the true distribution cannot be accurately approximated by a Gaussian. The latter approximate the distribution as a mixture of deltas, which can approximate any distribution but at the expense of a high complexity. In both cases, the three steps to estimate the posterior distribution can be easily performed due to the amiable properties of these distributions. However, for general distributions, the three filtering steps are intractable analytically. This observation leads to the development of the following parametric filter that combines the advantages of Gaussians and particles.

B. Mixtures of Gaussians and Belief Condensation

The nonlinear and non-Gaussian parametric filter proposed in this paper begins with the observation that the amiable properties of Gaussian distributions can still be used if the posterior distribution is approximated as a mixture of Gaussians. Moreover, analogously to the case of particle filters, any statistical distribution can be approximated by a mixture of Gaussians, with the number of components much smaller than that needed using a mixture of deltas. The dimension of filters based on mixtures of Gaussians is N(d + 1)(1 + d/2), where N is the number of components in the mixture.6 In the following, we develop a filtering technique which can be as accurate as particle filters but with a dimension several orders of magnitude smaller. The main concept is depicted in Fig. 2.
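The dimension formulas from Sections I and IV are easy to tabulate. In the sketch below, the state size d = 8 is an assumption consistent with the EKF dimension of 44 reported in Section V:

```python
# Dimension (number of stored reals per step) for each filter family,
# for a state vector of size d. Formulas from Sections I and IV.

def dim_kalman(d: int) -> float:
    # mean (d reals) + symmetric covariance (d(d+1)/2 reals)
    return d * (1 + (d + 1) / 2)

def dim_particle(d: int, M: int) -> int:
    # M particles, each a state sample plus a weight
    return M * (d + 1)

def dim_bcf(d: int, N: int) -> float:
    # N components, each a weight, a mean, and a symmetric covariance
    return N * (d + 1) * (1 + d / 2)

# Values matching Section V (d = 8 assumed, N = 10, M = 20,000):
print(dim_kalman(8), dim_bcf(8, 10), dim_particle(8, 20_000))
# 44.0 450.0 180000
```

The three-orders-of-magnitude gap between 450 and 180,000 is the complexity saving the BCF targets.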
1) Prediction: This step requires the computation of a high-dimensional integral (marginalization) of the product of p(yk−1|z1:k−1) with p(yk|yk−1). The integration is highly complex in general; it can be efficiently solved only if the functions involved are Gaussians or sums of deltas, which are the properties exploited by Kalman-like and particle filters. Since p(yk|yk−1) for navigation can be considered linear and Gaussian with wide generality, the prediction step can be easily computed if the family F chosen for the posterior distribution is the family of mixtures of Gaussians. Specifically, let7

p(yk−1|z1:k−1) = Σ_{i=1}^{N} αi N(yk−1; μ(i)k−1|k−1, Q(i)k−1|k−1);

5 C(o) is the direction cosine matrix which transforms the coordinates of vectors with respect to the non-inertial frame to coordinates with respect to the inertial fixed reference frame. Hence, it is a matrix formed from the rotation vector o that represents the rotation determined by o as a linear transformation.
6 Some filtering methods have exploited these facts [11], [12]. However, they are in essence equivalent to several Kalman-like filters running in parallel. 7 With a slight abuse of notation we use N (y; μ, Q) to denote the Gaussian PDF of a random variable y with mean μ and covariance matrix Q.
If Φ(θ) is the KL-divergence between p(x) and q(x; θ), i.e., Φ(θ) = D(p(x)||q(x; θ)), then θ∗ = (α1∗, . . . , αm∗, μ1∗, . . . , μm∗, Σ1∗, . . . , Σm∗) is a stationary point of Φ if and only if T(θ∗) = θ∗, where

T(αi) = Ep[ qi(x; θ) / q(x; θ) ]
T(μi) = Ep[ (qi(x; θ) / q(x; θ)) x ] / Ep[ qi(x; θ) / q(x; θ) ]
T(Σi) = Ep[ (qi(x; θ) / q(x; θ)) (x − μi)(x − μi)T ] / Ep[ qi(x; θ) / q(x; θ) ]
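In practice the expectations over p in T(·) can be approximated by sample averages, which turns the fixed-point iteration θ ← T(θ) into an EM-style reweighting with responsibilities qi(x; θ)/q(x; θ). A minimal one-dimensional sketch with two components, where the target p and all numbers are illustrative:

```python
import numpy as np

# Monte Carlo version of the fixed-point iteration theta <- T(theta):
# expectations over p are replaced by averages over samples of p.
# The target p (a two-component mixture) is an illustrative choice.

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 0.5, 4000), rng.normal(1.0, 1.0, 6000)])

alpha = np.array([0.5, 0.5])
mu = np.array([-1.0, 0.5])
var = np.array([1.0, 1.0])

def normal_pdf(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2.0 * np.pi * v)

for _ in range(200):
    q_i = alpha[:, None] * normal_pdf(x[None, :], mu[:, None], var[:, None])
    r = q_i / q_i.sum(axis=0)                    # q_i(x; theta) / q(x; theta)
    w = r.sum(axis=1)
    new_alpha = w / w.sum()                      # T(alpha_i) = E_p[q_i/q]
    new_mu = (r * x).sum(axis=1) / w             # T(mu_i)
    new_var = (r * (x - mu[:, None]) ** 2).sum(axis=1) / w   # T(Sigma_i), old mu_i
    alpha, mu, var = new_alpha, new_mu, new_var

print(np.round(np.sort(mu), 1))   # component means near -2.0 and 1.0
```

After a few hundred iterations the condensed mixture recovers the two modes of p, illustrating how the fixed-point map drives the mixture toward a stationary point of the KL-divergence.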
Figure 2. Illustration of the BCF in three steps: prediction, update, and belief condensation.
then after the prediction step

p(yk|z1:k−1) = Σ_{i=1}^{N} αi N(yk; μ(i)k|k−1, Q(i)k|k−1)   (6)

where

μ(i)k|k−1 = Hk μ(i)k−1|k−1,   Q(i)k|k−1 = Hk Q(i)k−1|k−1 HkT + P

and P is the covariance matrix of nk.
2) Update: Based on (6), the update step yields

p̃(yk|z1:k) ∝ Σ_{i=1}^{N} αi N(yk; μ(i)k|k−1, Q(i)k|k−1) · p(zk|yk).   (7)

Since the measurement model determined by p(zk|yk) is not linear and Gaussian, the expression p̃(yk|z1:k) after the update step is not in the family F.
3) Belief Condensation: For general nonlinear/non-Gaussian filters, the number of sufficient statistics characterizing the true posterior distribution increases without bound [6]. Filtering techniques developed for these cases have to condense the true posterior distribution into a suitable family of distributions. In our case, as sketched in Fig. 2, after the update step the distribution obtained falls outside the family F. Therefore, to finalize the filter presented, we have to condense the distribution after the update step into a distribution in F [solve (4)]. As we show below, in order to obtain the mixture of Gaussians q which minimizes the KL-divergence with respect to a given distribution p, an iterative algorithm based on the following theorem can be used.

Theorem 1: Let p(x) be the PDF of a random vector x ∈ Rn, and let θ = (α1, . . . , αm, μ1, . . . , μm, Σ1, . . . , Σm) be the parameters characterizing a mixture of m Gaussian distributions

q(x; θ) = Σ_{i=1}^{m} αi N(x; μi, Σi)

with qi(x; θ) = αi N(x; μi, Σi). Then θ∗ is a stationary point of Φ(θ) = D(p(x)||q(x; θ)) if and only if T(θ∗) = θ∗, with the operator T as given above.

Proof: Reparameterize the mixtures of Gaussians with (λ1, . . . , λm, μ1, . . . , μm, Σ1−1, . . . , Σm−1), where

αi = e−λi / Σ_{j=1}^{m} e−λj ;

then θ∗ is a stationary point of Φ(θ) if and only if ∂Φ/∂λi(θ∗), ∂Φ/∂μi(θ∗), and ∂Φ/∂Σi−1(θ∗) vanish for i = 1, . . . , m. Using the properties ∂/∂x(xT A x) = xT(A + AT), ∂/∂A(xT A x) = x xT, and ∂/∂A(log |A|) = A−1, for any x ∈ Rn and symmetric matrix A, it is straightforward to show that

∂Φ/∂λi = −αi + Ep[ qi(x; θ) / q(x; θ) ]
∂Φ/∂μi = −Ep[ qi(x; θ) (x − μi)T Σi−1 / q(x; θ) ]
∂Φ/∂Σi−1 = −(1/2) Ep[ qi(x; θ) (Σi − (x − μi)(x − μi)T) / q(x; θ) ]

and the result is obtained by checking that these partial derivatives vanish at θ∗ if and only if θ∗ = T(θ∗).

Therefore, an algorithm to condense the belief given by a PDF p(x) into a mixture of Gaussians can be obtained as follows: given an initial solution θ(0) = (α1(0), . . . , αm(0), μ1(0), . . . , μm(0), Σ1(0), . . . , Σm(0)) for the parameters characterizing the mixture of Gaussians p̂(yk|z1:k), repeat until convergence the iteration θ(j+1) = T(θ(j)). The performance of this algorithm depends on the initial solution. In our problem, a good initial solution can be obtained by approximating p̃(yk|z1:k) by a mixture of Gaussians in the same way as the Gaussian mixture filter (GMF) [11], [12].

V. SIMULATION RESULTS

In this section, we show the performance of the proposed BCF by simulations with measurements emulating sensors' behavior in harsh propagation environments. Consider a scenario where one agent obtains both GPS and IMU measurements. We simulate GPS measurements from 4 satellites in NLOS conditions. The white thermal noise of such measurements is modeled as a Gaussian random variable with
Figure 3. Filtering of a route by particle filter, EKF, and the proposed BCF.
zero mean and a standard deviation of 2 m, while the positive bias introduced by the NLOS propagation is modeled as an exponential random variable with mean 6 m, 8 m, 10 m, and 14 m for each satellite. The error in the IMU force measurements is modeled as N(0, 0.07 N), and the error in the angular velocity measurements as N(0, 0.02 rad/sec). The motion of the agent is simulated as shown in Fig. 3, with a mean velocity of 1.02 m/sec and a maximum velocity of 2.6 m/sec, a mean acceleration of 0.122 m/sec2 and a maximum of 0.267 m/sec2, and a mean angular velocity of 0.04 rad/sec with a maximum of 0.077 rad/sec. In such a scenario we filter the positional state of the agent by using the EKF, the GMF, the sampling importance resampling (SIR) particle filter [4], and the proposed BCF. We simulate 100 positions, where the number of components for the filters using mixtures of Gaussians is N = 10 (Fig. 3). In addition, Fig. 4 shows the performance of the filters in 60 Monte Carlo repetitions of the above simulation. From this figure we can observe that the performance of the proposed BCF is close to that of the particle filter with sufficient particles,8 and much better than those of the EKF and the GMF. In addition, the dimensions of the particle filters shown are 135,000 and 180,000, while the dimension of the BCF based on mixtures of Gaussians is 450 and the dimension of the EKF is 44.

VI. CONCLUSION

In this paper, we present a new filtering technique for navigation in harsh environments where the measurements exhibit nonlinear and/or non-Gaussian behavior. We establish a general framework for parametric filtering techniques based on BC. Our methodology exploits the specific structure of the navigation problem and decomposes it in a way that the linear and Gaussian part can be solved efficiently.
8 The results obtained in these simulations with 25,000 particles and 20,000 particles are almost indistinguishable.
We introduce the family of mixtures of Gaussians for parametric representation
Figure 4. Empirical cumulative distribution function (CDF) of errors in the position obtained by particle filters, EKF, GMF, and the proposed BCF.
of the posterior distributions and propose an optimization algorithm, referred to as BC, for iteratively determining the parameters which best approximate the true posterior distribution. We compare the performance of the proposed BCF with those of particle filters, the EKF, and the GMF. Our simulation results show that the BCF has a performance close to that of the particle filter but with a much lower complexity.

ACKNOWLEDGMENT

The authors wish to thank B. D. Appleby and C. C. Yu for bringing the nonlinear/non-Gaussian navigation problem to the authors' attention.

REFERENCES

[1] F. Gustafsson, F. Gunnarsson, N. Bergman, U. Forsell, J. Jansson, R. Karlsson, and P. J. Nordlund, "Particle filters for positioning, navigation and tracking," IEEE Trans. Signal Process., vol. 50, no. 2, pp. 425–437, Feb. 2002.
[2] L. Mihaylova, D. Angelova, S. Honary, D. R. Bull, C. N. Canagarajah, and B. Ristic, "Mobility tracking in cellular networks using particle filtering," IEEE Trans. Wireless Commun., vol. 6, no. 10, pp. 3589–3599, Oct. 2007.
[3] R. E. Kalman, "A new approach to linear filtering and prediction problems," Trans. ASME, Ser. D, J. Basic Eng., vol. 82, pp. 35–45, 1960.
[4] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Trans. Signal Process., vol. 50, no. 2, pp. 174–188, Feb. 2002.
[5] A. Doucet, N. de Freitas, and N. Gordon, Sequential Monte Carlo Methods in Practice. Springer, 2001.
[6] F. Daum, "Nonlinear filters: Beyond the Kalman filter," IEEE Aerosp. Electron. Syst. Mag., vol. 20, no. 8, pp. 57–69, Aug. 2005.
[7] O. Cappe, E. Moulines, and T. Ryden, Inference in Hidden Markov Models. Springer Series in Statistics, 2007.
[8] D. Koks, Explorations in Mathematical Physics. Springer, 2006.
[9] X. R. Li and V. P. Jilkov, "Survey of maneuvering target tracking. Part I: Dynamic models," IEEE Trans. Aerosp. Electron. Syst., vol. 39, no. 4, pp. 1333–1364, Oct. 2003.
[10] D. H. Titterton and J. L. Weston, Strapdown Inertial Navigation Technology. The American Institute of Aeronautics and Astronautics and The Institution of Engineering, 2004.
[11] J. H. Kotecha and P. M. Djuric, "Gaussian sum particle filtering," IEEE Trans. Signal Process., vol. 51, no. 10, pp. 2602–2612, Oct. 2003.
[12] B. D. Anderson and J. B. Moore, Optimal Filtering. New Jersey: Prentice-Hall, 1979.