Journal of Systems Architecture 47 (2001) 103–117
www.elsevier.com/locate/sysarc
The architecture of a Gaussian mixture Bayes (GMB) robot position estimation system

Takamasa Koshizen *

Department of Systems Engineering, Research School of Information Sciences and Engineering, The Australian National University, Canberra, ACT 0200, Australia

Received in revised form 30 May 2000; accepted 25 November 2000
Abstract

Modelling and reducing uncertainty are two essential problems in mobile robot localisation. In this paper, a new robot position estimator, the Gaussian mixture Bayes (GMB), which utilises a density estimation technique, is introduced. The proposed system, the GMB robot position estimator, allows a robot's position to be modelled as a probability distribution and uses Bayes' theorem to reduce the uncertainty of its location. In addition, we describe how the proposed system is capable of dealing with multiple sensors as well as a single sensor; multiple sensors can yield more robust and accurate estimates of a robot's position than a single sensor. The GMB position estimator consists of four modules: sonar-based estimation, sensor selection, sensor fusion, and sensor selection improved by combining it with sensor fusion. The proposed system is also evaluated with respect to minimising the uncertainty of a robot's position, using the Nomad200 mobile robot shown in Fig. 1. It was found that the proposed system was capable of constraining the position error of the robot through the modularity of the system. © 2001 Published by Elsevier Science B.V.

Keywords: Modular robot position estimation system; Density estimation and Bayes' rule; Single sensor and multiple sensors
1. Introduction

Position estimation is one of the crucial issues for robot navigation and goal-reaching, as a robot must be able to avoid static/dynamic obstacles of different forms by utilising its perception to estimate its position. For example, Nagatani [1] indicated that autonomous navigation of a mobile robot equipped with a manipulator in an indoor environment with opening/closing doors was robust as long as the robot could estimate its position correctly. Moreover, it was shown that position estimation can be a crucial task for cleaning an indoor room using a robot or co-operative robots [2]. Robot position estimation, alternatively called self-localisation, is a key issue in robot navigation. The goal of position estimation is to keep
* Correspondence address: Fundamental Research Division, Wako Research Center, Honda R&D Co., Ltd., 1-4-1 Chuo, Wako-shi, Saitama 351-0193, Japan. E-mail addresses: [email protected], [email protected] (T. Koshizen).
track of the robot's position with respect to a global sensor reference. The reference is either defined by models of external landmarks or by the robot's initial position. Essentially, position estimation is done in two ways: dead reckoning and external referencing. In dead reckoning, the current robot position (relative to a previous position) is measured using an internal sensor (or odometry). In external referencing, the robot position is determined with respect to external landmarks. Generally, dead reckoning is done by integrating an internal sensor (e.g., odometry) over time. Odometry is typically implemented by shaft encoders on the wheels that record the distance travelled. External referencing is alternatively called map generation. The robot `looks' around from its position and, based on these observations, it must infer the place (or set of places) on a map where it can be located [3].

Several previous position estimation techniques have looked at learning and recognising landmarks in the environment, in particular as a probabilistic representation of sensor readings [4–6]. These approaches basically use probabilistic maps, in which each cell has a certain probability of being occupied by landmarks such as walls, doors and desks. This approach depends entirely on the characteristics of the individual sensor [4]. Sonar sensory data have also been used in different ways to produce a variety of algorithms for sonar-based position estimation [4,7,8]. For instance, the Bayesian method has been applied to sonar-based position estimators [4,8,9]. The results have shown that the probabilistic nature of the position estimation algorithms made this problem quite robust to ambiguities and sonar sensor noise.

Essentially, the problem of robot localisation can be summarised as how `uncertainty' can be modelled and then reduced. In order to cope with this uncertainty, an architecture of a position estimation system using Gaussian mixture Bayes (GMB) is proposed here, which integrates four purposive modules: sonar-based estimation, sensor selection, sensor fusion, and sensor selection improved by combining it with sensor fusion. The proposed system, called the GMB position estimator, is capable of dealing with multiple sensors as well as a single sensor. The sonar-based module of the GMB position estimator, namely the GMB with regularised expectation maximisation (GMB-REM) with sonar alone [10], employs the EM algorithm [11] to learn the relationship between a sonar signature and a robot position. The advantage of the sonar-based GMB-REM system is that it does not accumulate the error of a robot's position. The conventional odometry-based position estimator usually suffers from this problem because of small measurement errors (due to, for example, surface roughness and wheel slippage). That is, odometry alone is not sufficient for position estimation in the navigation task of the robot. Nevertheless, odometry provides good initial position estimates, which may greatly help landmark-based position estimation procedures.

There are key differences between the sonar-based GMB-REM system and previous sonar-based position estimation systems. The first concerns the method introduced by Barshan [12], which leads to univariate density functions; in contrast, GMB-REM is multivariate. The second concerns the map generation technique previously presented by Moravec, who constructed a map of the indoor environment using probabilistic occupancy grids [4].
The GMB-REM approach provides a way of making combined use of the map and sensor readings to compute the position of the robot within the map itself. Thus, the received sensor information can be used immediately to update the estimate of a robot's position. Another difference is that it allows the robot to learn explicitly the pattern of high-dimensional sensor signatures rather than to learn implicitly the features or landmarks of the environment, as has been done by Thrun [8], i.e., landmark-based position estimation.

Recently, instead of using a single sensor only, multiple sensors have been used in robot navigation tasks. The sensor fusion technique has been applied by Durrant-Whyte [13], Matia et al. [14] and Bruder [15] to robot position estimation, and offers advantages over the use of a single sensor. Their work indicates that the fusion of `heterogeneous' sensors, such as external and internal sensors, in a mobile robot is capable of reducing uncertainty in position estimation. For the same reason, our sensor fusion scheme
utilises internal motion information of the robot integrated with its sonar signature [16]. Moreover, Koshizen [17] has proposed an enhanced form of the sonar-based GMB-REM system in terms of sensor selection tasks. The principal idea of the sensor selection task is to enable the robot to choose the sensor which works best in a particular location, considering the minimum error of the robot's position. In other words, the system is able to reduce the uncertainty of the robot's position by using multiple (external) sensors [17].

In this paper, an architecture for a new position estimation system is proposed, namely the GMB position estimator, including sensor selection improved by combining it with the sensor fusion method. The paper is organised as follows: in Section 2, we present the density estimation technique, in particular the EM algorithm. In Section 3, we describe the GMB position estimator. Finally, in Section 4, the empirical results are presented, with an assessment of the minimisation of position error, i.e., the reduction in the uncertainty of the robot's position for the proposed system architecture, using the Nomad200 mobile robot illustrated in Fig. 1.
2. EM algorithm and density estimation

The expectation maximisation (EM) algorithm is an iterative algorithm for the computation of maximum-likelihood parameter estimates when the observations can be viewed as incomplete data. The theoretical framework of the EM algorithm was first introduced by Dempster et al. [11], even though the resulting update formulas for the Gaussian mixture parameters had been proposed previously [18]. Since then, the EM algorithm has been applied to many density estimation problems, in particular the parameter estimation of Gaussian mixture models.
Fig. 1. The Nomad200 mobile robot.
The standard EM algorithm is an iterative method based on two alternating steps: an Expectation (E) step and a Maximisation (M) step. The algorithm derives from the definition of the expected log-likelihood of the complete (observed and hidden) data:

$$Q(\Theta \mid \Theta^{(k)}) = E\{L \mid \Theta^{(k)}, \{Y_t\}\}, \qquad (1)$$

where Q is a function of the parameters of the model Θ, given the current estimate of the parameters Θ^(k) and the observation sequence {Y_t}. Suppose that Θ^(k) denotes the current value of the parameters after k cycles of the algorithm. The E-step consists of computing Q(Θ | Θ^(k)). The M-step uses these expectations to maximise Q as a function of Θ:

$$\Theta^{(k+1)} = \arg\max_{\Theta} Q(\Theta \mid \Theta^{(k)}). \qquad (2)$$
Using Jensen's inequality, Dempster [11] showed that each iteration of the E- and M-steps increases a cost function, namely the log-likelihood L, until it converges to a (local) maximum. The (standard) EM algorithm for Gaussian mixture density estimation can be summarised as follows. Initially, it is assumed that the probability density function of an n-dimensional random vector X is a mixture of Q multivariate Gaussians:

$$p(D \mid \Theta) = \prod_{i=1}^{N} p(x^i \mid \Theta) = \prod_{i=1}^{N} \sum_{j=1}^{Q} a_j\,(2\pi)^{-n/2}\,|\sigma_j|^{-1/2} \exp\left(-0.5\,(x^i - \mu_j)^T \sigma_j^{-1} (x^i - \mu_j)\right), \qquad (3)$$
where Θ = {a_j, μ_j, σ_j | j = 1, …, Q} is the set of model parameters; the a_j are the mixing proportions, and μ_j and σ_j denote the means and covariance matrices, respectively. We denote by D = {x^1, …, x^N} a sample data set (training set) of values of X, and suppose that the elements of D are independent and identically distributed. Therefore, the relationship between the resulting density p(D | Θ) and the likelihood L is p(D | Θ) = L(Θ | D).

The EM algorithm is a general method for finding the maximum-likelihood estimate of the parameters of an underlying distribution from a given data set in which the data are incomplete or have missing values. In short, the ultimate goal of the EM algorithm is to obtain parameter values θ̂ which maximise the likelihood L:

$$\log L(\Theta \mid D) = \log \prod_{i=1}^{N} p(x^i \mid \Theta) = \sum_{i=1}^{N} \log\left( \sum_{j=1}^{Q} a_j\, p_j(x^i \mid \theta_j) \right),$$

where θ_j = (a_j, μ_j, σ_j).
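As a concrete illustration (not from the original paper), the following minimal Python sketch evaluates this log-likelihood for a Gaussian mixture; the function name and array layout are our own assumptions.

```python
# Minimal sketch (assumed helper, not the paper's code): evaluates
# log L(Theta | D) = sum_i log sum_j a_j N(x^i; mu_j, sigma_j) for a Gaussian mixture.
import numpy as np
from scipy.stats import multivariate_normal

def gmm_log_likelihood(X, weights, means, covs):
    """X: (N, n) data; weights: (Q,); means: (Q, n); covs: (Q, n, n)."""
    # per-sample, per-component weighted densities a_j * p_j(x^i | theta_j)
    dens = np.column_stack([
        weights[j] * multivariate_normal.pdf(X, mean=means[j], cov=covs[j])
        for j in range(len(weights))
    ])
    return float(np.sum(np.log(dens.sum(axis=1))))
```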
1. Initialise the means μ_j to randomly selected data points from D and the covariance matrices σ_j to unit matrices. Set a_j = 1/Q for all j. Set the iteration counter t = 0.

2. Expectation (E) step: compute the posterior probabilities h_ij(t) = E[z_ij | x^i, θ_j] of membership of x^i in the jth Gaussian, for all i and j:

$$h_{ij}(t) = \frac{a_j(t)\,(2\pi)^{-n/2}\,|\sigma_j(t)|^{-1/2} \exp\left(-0.5\,(x^i - \mu_j(t))^T \sigma_j(t)^{-1} (x^i - \mu_j(t))\right)}{\sum_{k=1}^{Q} a_k(t)\,(2\pi)^{-n/2}\,|\sigma_k(t)|^{-1/2} \exp\left(-0.5\,(x^i - \mu_k(t))^T \sigma_k(t)^{-1} (x^i - \mu_k(t))\right)}. \qquad (4)$$
3. Maximisation (M) step: re-estimate the mixing proportions a_j(t+1), means μ_j(t+1), and covariances σ_j(t+1) of the Gaussians using the data set weighted by h_ij(t). Increase the iteration counter t = t + 1 and return to the E-step, using:

$$a_j(t+1) = \frac{1}{N} \sum_{i=1}^{N} h_{ij}(t), \qquad (5)$$
Fig. 2. EM algorithm and parameter estimation.
Fig. 3. EM algorithm and the learning curve (log-likelihood per sample versus iterations of EM).
$$\mu_j(t+1) = \frac{\sum_{i=1}^{N} h_{ij}(t)\, x^i}{\sum_{i=1}^{N} h_{ij}(t)}, \qquad (6)$$

$$\sigma_j(t+1) = \frac{\sum_{i=1}^{N} h_{ij}(t)\,(x^i - \mu_j(t+1))(x^i - \mu_j(t+1))^T}{\sum_{i=1}^{N} h_{ij}(t)}. \qquad (7)$$

Fig. 2 shows an example of θ̂ estimated using 3000 normalised training data points, where `+' in Fig. 2 denotes the means trained by the EM algorithm; the learning curve of the EM algorithm is displayed in Fig. 3. However, when estimating Gaussian densities, the standard EM algorithm does not always lead to satisfactory results. It is well known, for Gaussian mixture models, that the likelihood goes to infinity whenever any of the covariance matrices becomes singular. In addition, if the number of data points is small or the data are sparse in a high-dimensional space, the computed density estimates may give a poor representation of the true distribution and possess limited generalisation capabilities, i.e., the overfitting/overgeneralisation dilemma. One way to improve the quality of the estimation is to use `regularisation' techniques. In the GMB position estimator, described in the following section, the EM algorithm regularises the Gaussian mixture models to prevent both numerically singular covariance matrices and overfitting of the models; such algorithms are called regularised EM algorithms. ¹
¹ In a very recent paper, the author proposed a new EM algorithm using the Tikhonov regulariser, which outperforms Kambhatla's regularised EM [19] in terms of generalisation properties [20].
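To make the E- and M-step updates concrete, here is a minimal sketch of one EM iteration with the simple covariance regularisation described above. This is not the paper's GMB-REM implementation; the floor value `eps`, and adding it to both the posteriors and the covariance diagonals, are illustrative assumptions.

```python
# Minimal sketch, under stated assumptions, of one EM pass for a Gaussian
# mixture with a simple epsilon-style regularisation (not the paper's exact code).
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, weights, means, covs, eps=1e-3):
    N, n = X.shape
    Q = len(weights)
    # E-step (Eq. (4)): posterior membership h_ij of x^i in component j
    h = np.column_stack([
        weights[j] * multivariate_normal.pdf(X, mean=means[j], cov=covs[j])
        for j in range(Q)
    ])
    h += eps                              # assumed: keeps far-away points from zero posteriors
    h /= h.sum(axis=1, keepdims=True)
    # M-step (Eqs. (5)-(7)): re-estimate mixing proportions, means, covariances
    Nj = h.sum(axis=0)                    # effective counts per component
    new_weights = Nj / N                  # Eq. (5)
    new_means = (h.T @ X) / Nj[:, None]   # Eq. (6)
    new_covs = np.empty((Q, n, n))
    for j in range(Q):
        d = X - new_means[j]
        new_covs[j] = (h[:, j, None] * d).T @ d / Nj[j]   # Eq. (7)
        new_covs[j] += eps * np.eye(n)    # regulariser: prevents singular covariances
    return new_weights, new_means, new_covs
```

Iterating `em_step` until the log-likelihood stops improving reproduces the learning-curve behaviour shown in Fig. 3.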
Fig. 4. The GMB position estimation system.
3. The GMB position estimation system

The architecture of the GMB position estimation system shown in Fig. 4 has four modules for estimating accurate robot positions. The system consists of two distinct parts: the single sensor part, where the sonar-based position estimate, namely the sonar-based GMB-REM system, is computed with a single sonar sensor only, and the multiple sensor part, where the GMB-REM system is enhanced with respect to sensor selection, sensor fusion, and sensor selection improved by combining it with sensor fusion using multiple sensors.

3.1. Sonar-based GMB-REM position estimation system

To begin with, we briefly describe the sensory (e.g., sonar)-based GMB-REM position estimation system. In practice, the GMB-REM system consists of the following four steps:

(1) Data collection. The robot is moved on a planar workspace W ⊂ R² in a cluttered room. A sensory (e.g., sonar) signature is taken for each position of the robot in the workspace. In our notation, the vector r represents a signature of readings taken by all 16 sensors at a given location of the (Nomad200) robot, i.e., r = ⟨r_1, r_2, …, r_16⟩, where r_i denotes a reading from a single sensor (e.g., sonar) transducer of the robot. It is assumed that the robot must keep its orientation with respect to the environment along its paths in order to find correspondences between the current sensor readings r_1, …, r_16 and the position defined by the superposition of readings taken during the teaching phase. The Nomad robot fulfils this precondition to a certain extent: owing to its synchro-drive, it keeps its orientation for a couple of minutes. However, after a while the orientation drifts away: coming back to the same position, the former readings from sensor r_i are now found at sensor r_{(i+k) mod 16}. Put another way, the former grid cell C_ik is now found, seen from the robot, at
C_{i+j,k+l}. Keeping track of the orientation of the robot is an undertaking in itself. After the collection, the sensory (e.g., sonar) data are normalised to zero mean and unit variance. They are then divided into several data sets, namely training, validation and test sets. It should be noted that the test data are obtained separately from the training and validation data sets.

(2) Training. The training data set is used to construct a generative model to predict the probabilistic conditional distribution p(r | x) by using the regularised EM algorithm. The EM algorithm is widely applied for `parametric' density estimation with Gaussian mixture models, as described in Section 2. In fact, Gaussian mixture densities are able to approximate any continuous probability function with an appropriately chosen mixture [21]. In the GMB-REM system, training data are taken from each different robot position x and are used to estimate the conditional probability parameters using the regularised EM. All the training data sets, taken from sensor readings at known positions, are distributed over the region where the robot is to drive later. From these training data, the probability distribution p(r | x) is calculated for arbitrary positions x ∈ R², which is a generalisation step. If f were a normal probability distribution such as a single Gaussian function, then the parameter vector Θ = (μ, σ) would be sufficient to fully describe p(r | x); here, μ and σ are called the mean and the standard deviation, respectively. In GMB-REM, the true distribution of sensor readings is approximated by a mixture of Gaussian models, i.e., a weighted sum of normal probability distributions. Thus, the mixture model is parameterised by Θ_x = (a_jx, μ_jx, σ_jx); note that, for the mixture of Gaussian models, the mixing coefficient a_jx is added to the parameters of the normal Gaussian distribution. Once the parameter Θ_x is found, the conditional density function f for a specified position x can be described as

$$p(r \mid x) = \sum_{j=1}^{Q_x} a_{jx}\,(2\pi)^{-16/2}\,|\sigma_{jx}|^{-1/2} \exp\left(-0.5\,(r - \mu_{jx})^T \sigma_{jx}^{-1} (r - \mu_{jx})\right), \qquad (8)$$

where Q_x is the number of mixture components used for modelling the function at a robot's position x. The sensory generative model parameters Θ_x can be adjusted to maximise the log-likelihood cost function for the given set of training data via the regularised EM. Alternatively, the parameter Θ_x could also be obtained by neural networks, in terms of approximating any continuous function [22]. In the GMB-REM system, the Gaussian mixture models are also regularised by adding a penalty term to the log-likelihood cost function. In fact, the penalty is realised by a small value ε: max{|σ_jx|, ε_x}, 0.0 < ε_x < 1.0, for each covariance matrix in the M-step of every EM iteration. ε is also added to the posteriors to prevent the Gaussians from being too far from all the data points. The use of such regularisation techniques has become widespread in many areas of science, such as statistics [23]. Note that the most important role of the regularisation technique here is to prevent both numerically singular covariance matrices and overfitting [19]. More specifically, in high-dimensional spaces, these two problems frequently cause the computed density estimates to possess only relatively limited generalisation capabilities with respect to predicting the densities of new sensory data.

(3) Cross-validation. A suitable value for the regularising coefficient ε can be found by seeking the value which gives the best performance on a validation set.

(4) Calculation of the posterior probability (or belief). Initially, a robot position x is discretised into a grid cell (or a discretised position) C_i, i.e., the workspace of the cluttered room is divided into a certain number of grid cells, and the prior probability distribution over positions is `uniform'. For given test data of sensory (e.g., sonar) readings, the posterior probability of the robot position p(C_i | r) is calculated by using Bayes' rule, assigning a vector from R^16 (the 16-dimensional sensory data space) to one of the grid cells. Generally, Bayes' rule offers a clear probabilistic framework which is ideally suited for incorporating new measurements with old ones. Let I_new be the most recent measurement, and I_old the collection of all previous measurements and estimates. The information in which we are ultimately interested is the state (e.g., the robot position) x based on all but the newest measurement: p(x | I_old). Bayes' rule allows us to use I_new to update this state distribution by the following formula:
$$p(x \mid I_{new}, I_{old}) = \frac{p(I_{new} \mid x, I_{old})\, p(x \mid I_{old})}{p(I_{new} \mid I_{old})}.$$

Note that, if we assume that the new measurement I_new depends entirely on the value of the state vector, then p(I_new | x, I_old) reduces to p(I_new | x). Thus, we can synthesise `noisy' sensor readings over time by giving more weight to those measurements in which we have more confidence, provided a probability distribution p(I_new | x) for the sensor is available. In particular, if I_new is a sensory (e.g., sonar) signature r which consists of sets of measurements that distinguish different grid cells, then we need to model p(r | x). In our framework, the likelihood of r at each different position of the robot is multiplied by the robot's prior probability of belonging to that particular cell. The products are then re-normalised to obtain a posterior probability distribution, which incorporates the information obtained from the most current sensory readings. The probability distribution p(C_i | r) is often called the `belief' of the robot's position. That is, the belief incorporates the variance of its position, which corresponds to the probability of the robot residing within a specific grid cell.
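The belief update of step (4) is straightforward to sketch. The helper below is illustrative only: `likelihood(r, i)` stands in for the learned conditional density p(r | C_i) of Eq. (8) and is not part of the paper's code.

```python
# Minimal sketch of the grid-cell belief update via Bayes' rule.
# `likelihood(r, i)` is a hypothetical stand-in for the learned p(r | C_i) of Eq. (8).
import numpy as np

def update_belief(prior, r, likelihood):
    """prior: (M,) belief over grid cells; returns the posterior belief p(C_i | r)."""
    lik = np.array([likelihood(r, i) for i in range(len(prior))])
    posterior = lik * prior              # numerator of Bayes' rule for every cell
    return posterior / posterior.sum()   # re-normalise so the belief sums to one

# Usage: start from the uniform prior over M cells and fold in each new signature.
# belief = np.full(M, 1.0 / M)
# belief = update_belief(belief, r, likelihood)
```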
3.2. Sensor selection task with the GMB-REM system

In this section, the GMB position estimator is described in terms of the sensor selection task. An overview of the sensor selection task is illustrated in Fig. 5. The sensor selection task is conceptually capable of dealing with multiple sensor information from external sensors such as sonar, infrared and bumper sensors. Fig. 5 represents the sensor selection task for robot position estimation, subject to minimising the position error of the robot. The basic idea is that sensor selection allows the robot to choose the sensor which works best at a certain robot position, considering the minimum (metric) position error. In other words, by comparing the error for different positions, the robot can decide which sensor yields the most accurate position estimate. In particular, Fig. 5 shows the error E_Bel associated with different types of sensors for each robot position, where x_t denotes a robot's position at a certain time, and s_l is a type of sensor (l is the number of different sensor types). The mathematical definitions of the position estimation error E_Bel on a belief and of sensor selection are as follows. Let W be the robot's position space and N the sensory space. Note that, more concretely, the sensory space N consists of sonar, infrared, bumper, laser, photo-camera sensors and so on.
Fig. 5. An overview of the sensor selection.
Definition 1 (Robot position error E_Bel for sensor selection). The metric error is calculated between the predicted (or expected) robot position, based on a sensor signature of each sensor s ∈ N, and the actual robot position at which each sensory reading r_{s∈N} was taken:

$$E_{Bel} = \{ s \in N \mid E_{Bel}(s, x_t)\ \forall x_t \in W \}. \qquad (9)$$

Definition 2 (Sensor selection). There is a sensor s♭ that minimises the metric position error of the robot:

$$\exists\, s^\flat \in N, \quad s^\flat = \operatorname{argmin} E_{Bel} = \operatorname{argmin} \{ s \in N \mid E_{Bel}(s, x_t)\ \forall x_t \in W \}. \qquad (10)$$
Note that, in practice, the position error on a belief E_Bel is calculated as soon as each sensory belief is obtained.

3.2.1. The sensor selection algorithm

The belief for the sensor selection algorithm can be obtained by using the GMB-REM system. Initially, the workspace is discretised into a distribution representation p(C_i^{t-1}), before receiving the most recent external sensor information, on a rectangular grid of points {C_i^{t-1}}_{i=1}^{M}.

1. Take each sensor signature r^t at a new point, and calculate p_{s∈N}(r^t | C_i^t) accordingly. Integrate the prior distribution p^{Prior}_{s∈N}(C_i^t) for each sensor with the recent conditional probability of each sensor signature p_{s∈N}(r^t | C_i^t) to obtain belief(s) for each sensor, p_{s∈N}(C_i^t | r^t) or, more simply, p_{s∈N}(C_i^t), as follows:

$$p_{s\in N}(C_i^t) = \frac{p_{s\in N}(r^t \mid C_i^t)\, p^{Prior}_{s\in N}(C_i^t)}{p_{s\in N}(r^t)}, \qquad (11)$$

where p_{s∈N}(r^t) = Σ_k p_{s∈N}(r^t | C_k^t) p^{Prior}_{s∈N}(C_k^t) is a normalising factor.

2. Do the sensor selection for each sensory belief p_{s∈N}(C_i^t) according to Eqs. (9) and (10). Thus, we obtain the minimum error of each discretised position C_i, in particular for the sensor s♭, i.e., E^{Bel}_{min}(s♭, C_i), s♭ ∈ N. A sketch of this step is given after the algorithm.

3. Repeat steps (1) and (2) above until no more sensory readings are taken.
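The sketch below illustrates the selection step. It assumes the calibration setting in which the true position is known when the error E_Bel is scored; the names `beliefs`, `cell_centres` and `x_true` are illustrative, not the paper's API.

```python
# Minimal sketch of sensor selection (Definitions 1 and 2): score each sensor's
# belief by the position error of its expected position, then pick the argmin.
import numpy as np

def select_sensor(beliefs, cell_centres, x_true):
    """beliefs: dict sensor -> (M,) belief; cell_centres: (M, 2); x_true: (2,)."""
    errors = {}
    for sensor, belief in beliefs.items():
        x_hat = belief @ cell_centres                       # expected position under the belief
        errors[sensor] = np.linalg.norm(x_hat - x_true)     # metric position error E_Bel
    best = min(errors, key=errors.get)                      # Eq. (10): error-minimising sensor
    return best, errors
```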
3.3. The sensor fusion

The sensor fusion scheme proposed here is incorporated with the GMB-REM system as follows. Assume Δx^{t-1} is a measurement of the change of the robot's position, obtained from odometry. Let

$$x^t = x^{t-1} + \Delta x^{t-1} + \varepsilon_{motion} \qquad (12)$$

be the robot's position at time t, where ε_motion is motion noise with variance σ²_motion. Note that we assume σ²_motion in the algorithm to be almost zero, so that each discrete robot movement is measured very accurately.

3.3.1. The sensor fusion algorithm

The belief for the sensor fusion algorithm can be obtained by using the GMB-REM system and conventional odometry-based navigation. Initially, the workspace is discretised into a distribution representation p(C_i^{t-1}), before receiving the most recent motion or sonar information, on a rectangular grid of points {C_i^{t-1}}_{i=1}^{M}, which are located at the centres of the cells in the position space W.

(1) Incorporate the most current motion information Δx^{t-1} from the robot's movement, obtaining the prior belief for the robot's position at time t as follows:

$$p(x^t) = p(x^t \mid \Delta x^{t-1}, x^{t-1}) = \int_{R^2} p(x^t \mid x^{t-1})\, p(x^{t-1})\, dx^{t-1}. \qquad (13)$$
Hence, we obtain

$$p(C_i^t) = \int_{C_i} p(x^t)\, dx^t \qquad (14)$$

$$= \int_{C_i} \int_{R^2} p(x^t \mid x^{t-1}, \Delta x^{t-1})\, p(x^{t-1})\, dx^{t-1}\, dx^t = \int_{C_i} \int_{R^2} \frac{1}{(2\pi)^{1/2}\sigma_{motion}} \exp\left(-\frac{\|x^t - x^{t-1} - \Delta x^{t-1}\|^2}{2\sigma_{motion}^2}\right) p(x^{t-1})\, dx^{t-1}\, dx^t \qquad (15)$$

$$= \int_{C_i} \int_{R^2} \delta(x^t - x^{t-1} - \Delta x^{t-1})\, p(x^{t-1})\, dx^{t-1}\, dx^t \qquad (16)$$

$$= \int_{C_i} \int_{R^2} \delta(x^t - z - \Delta x^{t-1})\, p(z)\, dz\, dx^t \qquad (17)$$

$$= \int_{C_i} p(x^t - \Delta x^{t-1})\, dx^t \qquad (18)$$

$$= \int_{C_i - \Delta x^{t-1}} p(z)\, dz \qquad (19)$$

$$= \sum_j \int_{C_j \cap (C_i - \Delta x^{t-1})} p(z)\, dz \qquad (20)$$

$$= \sum_j \frac{p(C_j)}{area(C_j)} \int_{C_j \cap (C_i - \Delta x^{t-1})} dz \qquad (21)$$

$$= \sum_j p(C_j)\, \frac{area(C_j \cap (C_i - \Delta x^{t-1}))}{area(C_j)}, \qquad (22)$$

where we use the discretisation

$$p(z) = \frac{p(C_j)}{area(C_j)} \quad \forall z \in C_j. \qquad (23)$$
(2) Take a sonar signature r^t at a new point, and calculate p(r^t | C_i^t) accordingly. Integrate the prior distribution p(C_i^t) of the discretised position C_i with the recent conditional distribution of sonar signals p(r^t | C_i^t) to obtain a belief p(C_i^t | r^t), fused by step (1), via the usual Bayesian formula:

$$p(C_i^t \mid r^t) = \frac{p(r^t \mid C_i^t)\, p(C_i^t)}{p(r^t)}, \qquad (24)$$

where p(r^t) is a normalising factor.

(3) Repeat steps (1) and (2) above until there are no longer any robot movements and no more sensory readings are taken.

Note that the δ function in Eq. (16) equals the limit of the normal Gaussian function when σ_motion tends to zero:

$$\delta(x - x') = \lim_{\sigma_{motion} \to 0} \frac{1}{(2\pi)^{1/2}\sigma_{motion}} \exp\left(-\frac{(x - x')^2}{2\sigma_{motion}^2}\right). \qquad (25)$$

The point of the sensor fusion algorithm is to use the motion sensor, which is different in nature from sonar readings. That is, the robot's position error from the use of motion sensors alone would normally accumulate over time. In our fusion algorithm, motion noise was modelled as a zero-mean Gaussian N(0, σ_motion) and physically calibrated in real robot navigation using the Nomad200 robot. In addition, the algorithm allows
us to occasionally incorporate zero-mean Gaussians with relatively `large' motion noise. The updating formula is then given as follows:

$$p(C_i) = q \sum_j p(C_j) \exp\left(-\frac{|C_i - C_j|^2}{2\sigma^2}\right), \qquad (26)$$

where q = area(C_i) (2π)^{-1/2} = const., and area(C_i) denotes the size of the discretised position, i.e., the grid cell C_i.
Cit 1 , before receiving the M most recent motion or other (external) sensor information on a rectangular grid of points fCit 1 gi1 , which are located at the centre of each cell in the position space W. 1. Incorporate the most current motion information Dxt 1 by a robot's movement, obtaining the prior belief for a robot's position at time t as follows: Z ps2N
xt jxt 1 ps2N
xt 1 dxt 1 :
27 ps2N
xt ps2N
xt jDt 1 ; xt 1 R2
Thus, we obtain Z ps2N
Ci
Ci
t
t
ps2N
x dx
X Cj
area
Cj \
Ci Dxt 1 ps2N
Cj area
Cj
28 s2N
according to Eqs. (14)±(22). 2. Take each sensor signature rt at a new point, and calculate ps2N
rt jCit accordingly. Integrate the prior distribution ps2N
Cit for each sensor with the recent conditional probability of each sensor signature ps2N
rt jCit to obtain fused belief(s) (by step (1)) for each sensor ps2N
Cit jr or simply ps2N
Cit by Eq. (10). 3. Do the sensor selection for each sensory belief ps2N
Cit according to Eqs. (8) and (9). Thus, a minimum Bel error of each discretised position Ci is obtained in particular for the sensor s[ , i.e., Emin
s[ ; Ci
s[ 2 N: 4. Repeat above steps (1)±(3) until there are no longer any robot movements, and no more sensory readings are taken.
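The following minimal sketch puts the pieces together for a one-dimensional grid with unit-sized cells: the motion update of Eq. (28) (each cell's mass is shifted by the odometry increment and split by fractional cell overlap), the Bayes integration of Eq. (11), and the per-sensor argmin of Eq. (10). All names are illustrative assumptions, and the true position is again assumed known only for scoring the error.

```python
# Minimal sketch of the improved sensor selection loop (illustrative, 1-D grid).
import numpy as np

def motion_update(prior, dx, cell_size=1.0):
    """Shift a 1-D grid belief by dx, splitting mass by fractional cell overlap (Eq. (28))."""
    M = len(prior)
    shifted = np.zeros(M)
    whole, frac = divmod(dx / cell_size, 1.0)
    whole = int(whole)
    for j, pj in enumerate(prior):
        lo, hi = j + whole, j + whole + 1
        if 0 <= lo < M:
            shifted[lo] += (1.0 - frac) * pj   # overlap with the lower target cell
        if 0 <= hi < M:
            shifted[hi] += frac * pj           # overlap with the upper target cell
    return shifted / shifted.sum()             # assumes the grid covers the workspace

def improved_selection_step(beliefs, dx, signatures, likelihoods, centres, x_true):
    """One pass of steps (1)-(3) over all sensors s in N; returns the selected sensor."""
    errors = {}
    for s, belief in beliefs.items():
        prior = motion_update(belief, dx)                    # step (1): motion prior
        lik = np.array([likelihoods[s](signatures[s], i) for i in range(len(prior))])
        post = lik * prior                                   # step (2): Eq. (11)
        beliefs[s] = post / post.sum()
        x_hat = float(beliefs[s] @ centres)                  # expected position under the belief
        errors[s] = abs(x_hat - x_true)                      # step (3): E_Bel
    return min(errors, key=errors.get), beliefs
```

Repeating `improved_selection_step` at every robot movement and sensor reading realises step (4) of the algorithm.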
4. Experiment

In this section, the results of real experiments with the GMB position estimator are discussed. First, we give a brief description of the Nomad200. The Nomad200 robot, shown in Fig. 1, was used in this experiment. It has a cylindrical shape with an approximate radius of 23 cm, and a kinematic model equivalent to a unicycle. It comprises a base (lower part) and a turret (upper part). The base is formed by three wheels and a tactile sensor. The turret is a uniform 16-sided polygon, with markings indicating `front' and `back'. On each side are mounted sonar (ultrasonic), infrared and bumper (touch) sensors. The sonar sensor is normally adequate for long-range detection (15–640 cm); the infrared (IR) proximity sensors (up to 80 cm) and the odometer keep track of its position. Fig. 6 shows a plan view of the cluttered room which was used in the real experiment with the GMB position estimation system of Fig. 4.

In the physical world, the robot knows its position by integrating the encoders' information. However, this information is not exact, since there is some slippage of the wheels on the floor. Thus, the average motion noise ε_motion, with σ²_motion = 0.01 cm, was calculated as the robot moved ahead in steps of 10 cm. In this real experiment, the robot was localised in the order A → B → C → D → E, and data sets for sonar and infrared were collected, as well as the odometry data. Fig. 7 shows the mean position error of the proposed GMB position estimator with respect to the four modules: the sonar-based GMB-REM (the solid line), the sensor selection (the dashed line), the sensor fusion (the line with circles) and the improved sensor selection (the starred line). The sensor selection was applied only after time step 21 in Fig. 7, because of the nature of the infrared sensor as a short-distance sensor. The figure clearly indicates that the closer the robot approaches the particular locations C, D and E in Fig. 6, the more the mean position error of the robot increases. This is mainly because of the noisy sonar information affected by false reflections in those locations. In this real experiment, the number of training sets, validation data sets, and the number of applied mixture components are exactly the same as in the simulation experiment described in Section 4. As a consequence, Fig. 7 clearly shows that the sensor selection method improved by combining it with sensor fusion, symbolised by (*), is the most accurate with respect to constraining the position error.
Fig. 6. The plan view of a cluttered room (figure labels: 4.2 m; shelves).
Fig. 7. Mean position error (cm) of the GMB position estimator versus time step.
5. Conclusion

In this paper, we proposed the architecture of the GMB position estimation system (or estimator). The proposed system is a purposive modular system. That is, the system allows the robot to operate with a single type of sensor (the sonar-based GMB-REM) or with multiple sensors (the sensor selection, the sensor fusion and the sensor selection improved by combining it with sensor fusion). The crucial point of the proposed system is to constrain the error as much as possible, no matter how significantly noisy the sensory models (e.g., the sonar model) are in some particular locations of the robot. This is achieved by introducing multiple sensors into the proposed system for those particular locations where the sensory models tend to be noisy. The performance of the GMB position estimator was demonstrated in terms of minimising the uncertainty of the robot's position, i.e., decreasing the robot's position error. In the GMB position estimator, the combination of the sensor selection and sensor fusion modules performed best and produced very accurate position estimates of the robot in our experiment using the Nomad200 robot. With respect to computational cost (or memory), however, the sonar-based GMB-REM is the lowest compared with the multiple-sensor modules of the system. Consequently, the modularity of the system reflects the circumstances of the robot's location, i.e., the complexity/size of the room, so that the architecture of the proposed position estimation system is designed to select which module of the system (e.g., either a single sensor or multiple sensors) should be applied in such circumstances in order to minimise the position error of the robot.
Acknowledgements

The work was supported by several funds from the Australian Government and the Australian National University. The author would like to thank the anonymous referees for useful comments on this paper.
References

[1] K. Nagatani, H. Choset, S. Thrun, Towards exact localisation without explicit localisation, in: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 5, Leuven, Belgium, 1998, pp. 342–348.
[2] I. Ulrich, F. Mondada, J.-D. Nicoud, Autonomous vacuum cleaner, Robotics and Autonomous Systems 19 (1995) 233–245.
[3] I.J. Cox, Blanche – an experiment in guidance and navigation of an autonomous robot vehicle, IEEE Trans. Robot. Automation 7 (1991) 193–204.
[4] H. Moravec, Certainty grids for sensor fusion in mobile robots, in: A. Casals (Ed.), Sensor Devices and Systems for Robotics, Springer, Berlin, 1989, pp. 243–276.
[5] B. Schiele, J.L. Crowley, Certainty grids: perception and localisation for a mobile robot, Robotics and Autonomous Systems 12 (1994) 163–172.
[6] R. Madharam, M.W.M.G. Dissanayake, Map-building and map-based localisation in an underground mine by statistical pattern matching, in: Proceedings of the International Conference on Pattern Recognition, Brisbane, Australia, 1998, pp. 17–21.
[7] G. Dudek, M.M. Jenkin, E. Milios, D. Wilkes, Reflections on modelling a sonar range sensor, Technical Report No. CIM-92-9, McGill University, 1993.
[8] S. Thrun, Bayesian landmark learning for mobile robot localisation, Mach. Learning 33 (1998).
[9] A.R. Cassandra, L.P. Kaelbling, J.A. Kurien, Acting under uncertainty: discrete Bayesian models for mobile robot navigation, in: Proceedings of the IEEE International Conference on Intelligent Robots and Systems, 1996.
[10] T. Koshizen, The Gaussian mixture Bayes with regularised EM algorithm (GMB-REM) for real mobile robot position estimation, in: Proceedings of the IEEE International Conference on Automation, Robotics, Control and Vision, vol. 1, Singapore, 1998, pp. 256–260.
[11] A.P. Dempster, N.M. Laird, D.B. Rubin, Maximum likelihood from incomplete data via the EM algorithm, J. Royal Stat. Soc. B 39 (1977) 1–38.
[12] B. Barshan, R. Kuc, Differentiating sonar reflections from corners and planes by employing an intelligent sensor, IEEE Trans. Pattern Anal. Mach. Intell. (1990) 560–569.
[13] H.F. Durrant-Whyte, Integration, Coordination and Control of Multi-Sensor Robot Systems, Kluwer Academic Publishers, Dordrecht, 1988.
[14] F. Matia, A. Jimenez, Multisensor fusion: an autonomous mobile robot, J. Intell. Robotic Systems 22 (1998).
[15] S.H. Bruder, An information centric approach to heterogeneous multi-sensor integration for robotic applications, Robotics and Autonomous Systems 26 (1999) 255–280.
[16] T. Koshizen, P. Bartlett, A. Zelinsky, Sensor fusion of odometry and sonar sensors by the Gaussian mixture Bayes technique in mobile robot position estimation, in: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, 1999.
[17] T. Koshizen, The sensor selection task of the Gaussian mixture Bayes with regularised EM (GMB-REM) technique in robot position estimation, in: Proceedings of the IEEE International Conference on Robotics and Automation, 1999.
[18] V. Hasselblad, Estimation of parameters for a mixture of normal distributions, Technometrics 8 (1966) 431–444.
[19] N. Kambhatla, Local models and Gaussian mixture models for statistical data processing, Ph.D. dissertation, Oregon Graduate Institute of Science and Technology, 1996.
[20] T. Koshizen, Y. Rosseel, Y. Tonegawa, A new EM algorithm with a class of Tikhonov regularisers, in: Proceedings of the International Joint Conference on Neural Networks, Washington, DC, 1999, pp. 661–665.
[21] L. Xu, M.I. Jordan, On convergence properties of the EM algorithm for Gaussian mixtures, Neural Comput. 8 (1996) 129–151.
[22] K. Hornik, M. Stinchcombe, H. White, Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks, Neural Networks 3 (1990) 551–560.
[23] V. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, 1995.
T. Koshizen completed his Bachelor's degree in Physics at the University of Ryukyu, Okinawa, Japan, in 1991. From 1991, he worked as a system engineer on an artificial neural network product called NEUROLIVE at the Hitachi Co. Ltd. Software Development Centre, Yokohama, Japan. He completed his Master's degree with honours in computer science at the University of Wollongong, NSW, Australia, in 1994. From 1995 to 1997, he conducted research in a speech processing project at the D21 laboratory of Sony Co. Ltd., Tokyo, Japan. From 1997, he undertook his Ph.D. study at the Australian National University, Canberra, Australia, where he constructed a new learning scheme for the robot localisation problem. He is currently a senior researcher at the Honda Co. Ltd. Wako Basic Research Centre, Saitama, Japan, where he works on a brain-like computing project. He has been invited to be a session chair at several international conferences such as ICONIP and ICRA, and has been an active paper reviewer for international conferences such as IROS and KES, and for the IEEE Transactions on Neural Networks journal. His current research interests include building learning systems from machine learning theory and brain understanding, as well as emotional modelling.