{"title": "KLD-Sampling: Adaptive Particle Filters", "book": "Advances in Neural Information Processing Systems", "page_first": 713, "page_last": 720, "abstract": null, "full_text": "KLD-Sampling: Adaptive Particle Filters\n\nDepartment of Computer Science & Engineering\n\nDieter Fox\n\nUniversity of Washington\n\nSeattle, WA 98195\n\nEmail: fox@cs.washington.edu\n\nAbstract\n\nOver the last years, particle \ufb01lters have been applied with great success to\na variety of state estimation problems. We present a statistical approach to\nincreasing the ef\ufb01ciency of particle \ufb01lters by adapting the size of sample\nsets on-the-\ufb02y. The key idea of the KLD-sampling method is to bound the\napproximation error introduced by the sample-based representation of the\nparticle \ufb01lter. The name KLD-sampling is due to the fact that we measure\nthe approximation error by the Kullback-Leibler distance. Our adaptation\napproach chooses a small number of samples if the density is focused on\na small part of the state space, and it chooses a large number of samples\nif the state uncertainty is high. Both the implementation and computation\noverhead of this approach are small. Extensive experiments using mobile\nrobot localization as a test application show that our approach yields drastic\nimprovements over particle \ufb01lters with \ufb01xed sample set sizes and over a\npreviously introduced adaptation technique.\n\n1 Introduction\n\nEstimating the state of a dynamic system based on noisy sensor measurements is extremely\nimportant in areas as different as speech recognition, target tracking, mobile robot navigation,\nand computer vision. Over the last years, particle \ufb01lters have been applied with great success\nto a variety of state estimation problems (see [3] for a recent overview). 
Particle filters estimate the posterior probability density over the state space of a dynamic system [4, 11]. The key idea of this technique is to represent probability densities by sets of samples. It is due to this representation that particle filters combine efficiency with the ability to represent a wide range of probability densities. The efficiency of particle filters lies in the way they allocate computational resources. By sampling in proportion to likelihood, particle filters focus their computational resources on regions of the state space with high likelihood, where things really matter.

So far, however, an important source for increasing the efficiency of particle filters has only rarely been studied: adapting the number of samples over time. While variable sample sizes have been discussed in the context of genetic algorithms [10] and interacting particle filters [2], most existing approaches to particle filters use a fixed number of samples during the whole state estimation process. This can be highly inefficient, since the complexity of the probability densities can vary drastically over time. An adaptive approach for particle filters has been applied by [8] and [5]. This approach adjusts the number of samples based on the likelihood of observations, which has some important shortcomings, as we will show.

In this paper we introduce a novel approach to adapting the number of samples over time. Our technique determines the number of samples based on statistical bounds on the sample-based approximation quality. Extensive experiments using a mobile robot indicate that our approach yields significant improvements over particle filters with fixed sample set sizes and over a previously introduced adaptation technique. 
The remainder of this paper is organized as follows: In the next section we outline the basics of particle filters and their application to mobile robot localization. In Section 3, we introduce our novel technique for adaptive particle filters. Experimental results are presented in Section 4 before we conclude in Section 5.

2 Particle filters for Bayesian filtering and robot localization

Particle filters address the problem of estimating the state $x_t$ of a dynamical system from sensor measurements. The goal of particle filters is to estimate a posterior probability density over the state space conditioned on the data collected so far. The data typically consists of an alternating sequence of time indexed observations $z_t$ and control measurements $u_t$, which describe the dynamics of the system. Let the belief $Bel(x_t)$ denote the posterior at time $t$. Under the Markov assumption, the posterior can be computed efficiently by recursively updating the belief whenever new information is received. Particle filters represent this belief by a set $S_t$ of $n$ weighted samples distributed according to $Bel(x_t)$:

$$S_t = \{ \langle x_t^{(i)}, w_t^{(i)} \rangle \mid i = 1, \dots, n \}$$

Here each $x_t^{(i)}$ is a sample (or state), and the $w_t^{(i)}$ are non-negative numerical factors called importance weights, which sum up to one. The basic form of the particle filter updates the belief according to the following sampling procedure, often referred to as sequential importance sampling with re-sampling (SISR, see also [4, 3]):

Re-sampling: Draw with replacement a random sample $x_{t-1}^{(i)}$ from the sample set $S_{t-1}$ according to the (discrete) distribution defined through the importance weights $w_{t-1}^{(i)}$. This sample can be seen as an instance of the belief $Bel(x_{t-1})$.

Sampling: Use $x_{t-1}^{(i)}$ and the control information $u_{t-1}$ to sample $x_t^{(i)}$ from the distribution $p(x_t \mid x_{t-1}, u_{t-1})$, which describes the dynamics of the system. $x_t^{(i)}$ now represents the density given by the product $p(x_t \mid x_{t-1}, u_{t-1})\, Bel(x_{t-1})$. This density is the proposal distribution used in the next step.

Importance sampling: Weight the sample $x_t^{(i)}$ by the importance weight $p(z_t \mid x_t^{(i)})$, the likelihood of the sample $x_t^{(i)}$ given the measurement $z_t$.

Each iteration of these three steps generates a sample drawn from the posterior belief $Bel(x_t)$. After $n$ iterations, the importance weights of the samples are normalized so that they sum up to one. It can be shown that this procedure in fact approximates the posterior density, using a sample-based representation [4, 2, 3].

Particle filters for mobile robot localization

We use the problem of mobile robot localization to illustrate and test our approach to adaptive particle filters. Robot localization is the problem of estimating a robot's pose relative to a map of its environment. This problem has been recognized as one of the most fundamental problems in mobile robotics [1]. The mobile robot localization problem comes in different flavors. The simplest localization problem is position tracking. 
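The three SISR steps can be sketched in a few lines of code. The example below is a toy 1-D illustration, not the robot models used in this paper: the Gaussian motion and measurement models, their noise parameters, and the function name are all invented for illustration.

```python
import math
import random

def sisr_update(samples, weights, u, z, motion_noise=0.5, meas_noise=1.0):
    """One SISR update: re-sample, sample from the motion model, re-weight."""
    n = len(samples)
    # Re-sampling: draw n states with replacement according to the weights.
    resampled = random.choices(samples, weights=weights, k=n)
    # Sampling: propagate each state through a toy motion model p(x_t | x_{t-1}, u).
    proposed = [x + u + random.gauss(0.0, motion_noise) for x in resampled]
    # Importance sampling: weight by a toy measurement likelihood p(z | x_t).
    new_weights = [math.exp(-0.5 * ((z - x) / meas_noise) ** 2) for x in proposed]
    total = sum(new_weights)
    # Normalize so the importance weights sum up to one.
    return proposed, [w / total for w in new_weights]

random.seed(0)
# 1000 particles at x = 0; the robot moves right by 1 and then observes z = 1.
xs, ws = sisr_update([0.0] * 1000, [1.0 / 1000] * 1000, u=1.0, z=1.0)
```

After the update, the weighted particle set concentrates around the measurement, as the importance weights favor states consistent with $z$.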
Here the initial robot pose is known, and localization seeks to correct small, incremental errors in a robot's odometry. More challenging is the global localization problem, where a robot is not told its initial pose, but instead has to determine it from scratch.

Fig. 1: a) Pioneer robot used throughout the experiments. b)-d) Map of an office environment along with a series of sample sets representing the robot's belief during global localization using sonar sensors (samples are projected into 2D). The size of the environment is 54m x 18m. b) After moving 5m, the robot is still highly uncertain about its position and the samples are spread through major parts of the free-space. c) Even as the robot reaches the upper left corner of the map, its belief is still concentrated around four possible locations. d) Finally, after moving approximately 55m, the ambiguity is resolved and the robot knows where it is. All computation can be carried out in real-time on a low-end PC.

In the context of robot localization, the state $x_t$ of the system is the robot's position, which is typically represented by a two-dimensional Cartesian coordinate and the robot's heading direction. The state transition probability $p(x_t \mid x_{t-1}, u_{t-1})$ describes how the position of the robot changes using information $u_{t-1}$ collected by the robot's wheel encoders. The perceptual model $p(z_t \mid x_t)$ describes the likelihood of making the observation $z_t$ given that the robot is at location $x_t$. In most applications, measurements consist of range measurements or camera images (see [6] for details). 
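As a concrete (and purely illustrative) reading of these two models, a pose can be stored as a tuple $(x, y, \theta)$, the transition model as a sampler, and the perceptual model as a likelihood function. The function names and Gaussian noise levels below are invented, not the ones used in the experiments.

```python
import math
import random

def sample_motion(pose, u, trans_noise=0.1, rot_noise=0.05):
    """Sample from p(x_t | x_{t-1}, u_{t-1}): odometry (distance, turn) plus noise."""
    x, y, theta = pose
    dist, turn = u
    dist += random.gauss(0.0, trans_noise)        # noisy translation
    theta += turn + random.gauss(0.0, rot_noise)  # noisy rotation
    return (x + dist * math.cos(theta), y + dist * math.sin(theta), theta)

def range_likelihood(z, expected, sigma=0.2):
    """p(z_t | x_t) for one range reading: Gaussian around the map-predicted range."""
    return math.exp(-0.5 * ((z - expected) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
```

The sampler realizes the "Sampling" step of SISR and the likelihood function the "Importance sampling" step; `expected` would come from ray-casting in the map.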
Figure 1 illustrates particle filters for mobile robot localization. Shown there is a map of a hallway environment along with a sequence of sample sets during global localization. In this example, all sample sets contain 100,000 samples. While such a high number of samples might be needed to accurately represent the belief during early stages of localization (cf. 1(b)), it is obvious that only a small fraction of this number suffices to track the position of the robot once it knows where it is (cf. 1(d)). Unfortunately, it is not straightforward to adapt the number of samples on-the-fly, and this problem has only rarely been addressed so far.

3 Adaptive particle filters with variable sample set sizes

The localization example in the previous section illustrates that the efficiency of particle filters can be greatly increased by changing the number of samples over time. Before we introduce our approach to adaptive particle filters, let us first discuss an existing technique.

3.1 Likelihood-based adaptation

We call this approach likelihood-based adaptation since it determines the number of samples such that the sum of non-normalized likelihoods (importance weights) exceeds a pre-specified threshold. This approach has been applied to dynamic Bayesian networks [8] and mobile robot localization [5]. The intuition behind this approach can be illustrated in the robot localization context: If the sample set is well in tune with the sensor reading, each individual importance weight is large and the sample set remains small. This is typically the case during position tracking (cf. 1(d)). 
If, however, the sensor reading carries a lot of surprise, as is the case when the robot is globally uncertain or when it has lost track of its position, the individual sample weights are small and the sample set becomes large.

The likelihood-based adaptation directly relates to the property that the variance of the importance sampler is a function of the mismatch between the proposal distribution and the distribution that is being approximated. Unfortunately, this mismatch is not always an accurate indicator for the necessary number of samples. Consider, for example, the ambiguous belief state consisting of four distinctive sample clusters shown in Fig. 1(c). Due to the symmetry of the environment, the average likelihood of a sensor measurement observed in this situation is approximately the same as if the robot knew its position unambiguously (cf. 1(d)). Likelihood-based adaptation would therefore use the same number of samples in both situations. Nevertheless, it is obvious that an accurate approximation of the belief shown in Fig. 1(c) requires a multiple of the samples needed to represent the belief in Fig. 1(d).

3.2 KLD-sampling

The key idea of our approach is to bound the error introduced by the sample-based representation of the particle filter. To derive this bound, we assume that the true posterior is given by a discrete, piecewise constant distribution such as a discrete density tree or a multi-dimensional histogram [8, 9]. 
For such a representation we can determine the number of samples so that the distance between the maximum likelihood estimate (MLE) based on the samples and the true posterior does not exceed a pre-specified threshold $\epsilon$. We denote the resulting approach the KLD-sampling algorithm since the distance between the MLE and the true distribution is measured by the Kullback-Leibler distance. In what follows, we first derive the equation for determining the number of samples needed to approximate a discrete probability distribution (see also [12, 7]). Then we show how to modify the basic particle filter algorithm so that it realizes our adaptation approach.

To derive the bound, suppose that $n$ samples are drawn from a discrete distribution with $k$ different bins. Let the vector $X = (X_1, \dots, X_k)$ denote the number of samples drawn from each bin. $X$ is distributed according to a multinomial distribution, i.e. $X \sim \mathrm{Multinomial}_k(n, p)$, where $p = (p_1, \dots, p_k)$ specifies the probability of each bin. The maximum likelihood estimate of $p$ is given by $\hat{p} = n^{-1} X$. Furthermore, the likelihood ratio statistic $\lambda_n$ for testing $p$ is

$$\log \lambda_n = \sum_{j=1}^{k} X_j \log \frac{\hat{p}_j}{p_j} = n \sum_{j=1}^{k} \hat{p}_j \log \frac{\hat{p}_j}{p_j} . \quad (1)$$

When $p$ is the true distribution, the likelihood ratio converges to a chi-square distribution:

$$2 \log \lambda_n \to_d \chi^2_{k-1} \quad \text{as } n \to \infty . \quad (2)$$

Please note that the sum in the rightmost term of (1) specifies the K-L distance $K(\hat{p}, p)$ between the MLE and the true distribution. Now we can determine the probability that this distance is smaller than $\epsilon$, given that $n$ samples are drawn from the true distribution:

$$P_p(K(\hat{p}, p) \leq \epsilon) = P_p(2n K(\hat{p}, p) \leq 2n\epsilon) \doteq P(\chi^2_{k-1} \leq 2n\epsilon) . \quad (3)$$

The second step in (3) follows by replacing $2n K(\hat{p}, p)$ with the likelihood ratio statistic, and by the convergence result in (2). The quantiles of the chi-square distribution are given by

$$P(\chi^2_{k-1} \leq \chi^2_{k-1, 1-\delta}) = 1 - \delta . \quad (4)$$

Now if we choose $n$ such that $2n\epsilon$ is equal to $\chi^2_{k-1, 1-\delta}$, we can combine (3) and (4) to get

$$P_p(K(\hat{p}, p) \leq \epsilon) \doteq 1 - \delta . \quad (5)$$

This derivation can be summarized as follows: If we choose the number of samples $n$ as

$$n = \frac{1}{2\epsilon} \chi^2_{k-1, 1-\delta} , \quad (6)$$

then we can guarantee that with probability $1 - \delta$, the K-L distance between the MLE and the true distribution is less than $\epsilon$. In order to determine $n$ according to (6), we need to compute the quantiles of the chi-square distribution. A good approximation is given by the Wilson-Hilferty transformation [7], which yields

$$n = \frac{1}{2\epsilon} \chi^2_{k-1, 1-\delta} \doteq \frac{k-1}{2\epsilon} \left( 1 - \frac{2}{9(k-1)} + \sqrt{\frac{2}{9(k-1)}} \, z_{1-\delta} \right)^{3} , \quad (7)$$

where $z_{1-\delta}$ is the upper $1-\delta$ quantile of the standard normal $N(0,1)$ distribution.

This concludes the derivation of the sample size needed to approximate a discrete distribution with an upper bound $\epsilon$ on the K-L distance. From (7) we see that the required number of samples is proportional to the inverse of the $\epsilon$ bound, and to the first order linear in the number $k$ of bins with support. Here we assume that a bin of the multinomial distribution has support if its probability is above a certain threshold. This way the number $k$ will decrease with the certainty of the state estimation.[1]

It remains to be shown how to apply this result to particle filters. The problem is that we do not know the true posterior distribution (the estimation of this posterior is the main goal of the particle filter). Fortunately, (7) shows that we do not need the complete discrete distribution; it suffices to determine the number $k$ of bins with support. However, we do not know this quantity before we actually generate the distribution. Our approach is to estimate $k$ by counting the number of bins with support during sampling. 
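Equation (7) can be evaluated directly in code. The sketch below (the function name is ours) computes the sample-size bound via the Wilson-Hilferty approximation; the standard-normal quantile comes from Python's statistics module.

```python
import math
from statistics import NormalDist

def kld_sample_count(k, epsilon, delta):
    """Samples needed so that, with probability 1 - delta, the K-L distance between
    the MLE over k histogram bins and the true distribution stays below epsilon.
    Implements Eq. (7), the Wilson-Hilferty form of the chi-square quantile."""
    if k <= 1:
        return 1  # the bound is only defined for more than one supported bin
    z = NormalDist().inv_cdf(1.0 - delta)  # upper 1 - delta standard normal quantile
    a = 2.0 / (9.0 * (k - 1))
    chi2 = (k - 1) * (1.0 - a + math.sqrt(a) * z) ** 3  # approx. chi^2_{k-1, 1-delta}
    return int(math.ceil(chi2 / (2.0 * epsilon)))
```

With the parameters used later in the experiments ($\delta = 0.01$, $\epsilon = 0.05$), 100 supported bins call for roughly 1,350 samples, and the requirement grows about linearly with $k$.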
To be more specific, we estimate $k$ for the proposal distribution $p(x_t \mid x_{t-1}, u_{t-1})\, Bel(x_{t-1})$ resulting from the first two steps of the particle filter update. The determination of $k$ can be done efficiently by checking for each generated sample whether it falls into an empty bin or not. Sampling is stopped as soon as the number of samples exceeds the threshold specified in (7). An update step of the resulting KLD-sampling particle filter is given in Table 1.

The implementation of this modified particle filter is straightforward. The only difference to the original algorithm is that we have to keep track of the number $k$ of supported bins. The bins can be implemented either as a fixed, multi-dimensional grid, or more efficiently as tree structures [8, 9]. Please note that the sampling process is guaranteed to terminate, since for a given bin size $\Delta$, the maximum number $k$ of bins is limited.

4 Experimental results

We evaluated our approach using data collected with one of our robots (see Figure 1). The data consists of a sequence of sonar scans and odometry measurements annotated with time-stamps to allow systematic real-time evaluations. In all experiments we compared our KLD-sampling approach to the likelihood-based approach discussed in Section 3.1, and to particle filters with fixed sample set sizes. Throughout the experiments we used different parameters for the three approaches. For the fixed approach we varied the number of samples, for the likelihood-based approach we varied the threshold used to determine the number of samples, and for our approach we varied $\epsilon$, the bound on the K-L distance. In all experiments, we used a value of 0.99 for $1-\delta$ and a fixed bin size $\Delta$ of 50cm x 50cm x 10deg. We limited the maximum number of samples for all approaches to 100,000.

[1] This need for a threshold to determine $k$ (and to make $k$ vary over time) is not particularly elegant. However, it results in an efficient implementation that does not even depend on the value of the threshold itself (see next paragraph). We also implemented a version of the algorithm using the complexity of the state space to determine the number of samples. Complexity is measured by $2^H$, where $H$ is the entropy of the distribution. This approach does not depend on thresholding at all, but it does not have a guarantee of approximation bounds and does not yield significantly different results.

Table 1: KLD-sampling algorithm.

Inputs: $S_{t-1} = \{ \langle x_{t-1}^{(i)}, w_{t-1}^{(i)} \rangle \mid i = 1, \dots, n \}$ representing belief $Bel(x_{t-1})$, control measurement $u_{t-1}$, observation $z_t$, bounds $\epsilon$ and $\delta$, bin size $\Delta$

$S_t := \emptyset$, $n = 0$, $k = 0$, $\alpha = 0$ /* Initialize */
do /* Generate samples */
  Sample an index $j$ from the discrete distribution given by the weights in $S_{t-1}$ /* Re-sampling */
  Sample $x_t^{(n)}$ from $p(x_t \mid x_{t-1}, u_{t-1})$ using $x_{t-1}^{(j)}$ and $u_{t-1}$ /* Sampling */
  $w_t^{(n)} := p(z_t \mid x_t^{(n)})$ /* Compute importance weight */
  $\alpha := \alpha + w_t^{(n)}$ /* Update normalization factor */
  $S_t := S_t \cup \{ \langle x_t^{(n)}, w_t^{(n)} \rangle \}$ /* Insert sample into sample set */
  if ($x_t^{(n)}$ falls into empty bin $b$) then
    $k := k + 1$; $b :=$ non-empty /* Update number of bins with support */
  $n := n + 1$ /* Update number of generated samples */
while ($n < \frac{1}{2\epsilon} \chi^2_{k-1, 1-\delta}$) /* until K-L bound is reached */
for $i := 1, \dots, n$: $w_t^{(i)} := w_t^{(i)} / \alpha$ /* Normalize importance weights */
return $S_t$

Approximation of the true posterior

In the first set of experiments we evaluated how accurately the different methods 
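Table 1 translates almost directly into code. The sketch below follows its structure for a toy 1-D system: the Gaussian motion and measurement models and all parameter values are invented stand-ins, the chi-square quantile uses the Wilson-Hilferty form of Eq. (7), and `n_max` plays the role of the 100,000-sample cap.

```python
import math
import random
from statistics import NormalDist

def kld_update(samples, weights, u, z, epsilon=0.05, delta=0.01, bin_size=0.5,
               motion_noise=0.5, meas_noise=1.0, n_max=100000):
    """One KLD-sampling particle filter update (after Table 1), 1-D toy version."""
    z_quant = NormalDist().inv_cdf(1.0 - delta)
    new_samples, new_weights, bins = [], [], set()
    n, k, alpha = 0, 0, 0.0
    while True:
        # Re-sampling: pick one particle according to the importance weights.
        x_prev = random.choices(samples, weights=weights, k=1)[0]
        # Sampling: propagate through the toy motion model.
        x = x_prev + u + random.gauss(0.0, motion_noise)
        # Importance sampling: toy Gaussian measurement likelihood.
        w = math.exp(-0.5 * ((z - x) / meas_noise) ** 2)
        new_samples.append(x)
        new_weights.append(w)
        alpha += w                    # update normalization factor
        b = math.floor(x / bin_size)  # histogram bin of this sample
        if b not in bins:             # sample fell into an empty bin
            bins.add(b)
            k += 1                    # update number of bins with support
        n += 1                        # update number of generated samples
        if n >= n_max:
            break
        if k > 1:                     # the bound of Eq. (7) needs k > 1
            a = 2.0 / (9.0 * (k - 1))
            bound = (k - 1) / (2.0 * epsilon) * (1.0 - a + math.sqrt(a) * z_quant) ** 3
            if n >= bound:            # K-L bound reached
                break
    return new_samples, [w / alpha for w in new_weights]
```

A focused belief touches only a few bins, so $k$ and hence the bound stay small; a belief spread over the state space keeps opening new bins, and the loop keeps sampling.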
approximate the true posterior density. Since the ground truth for these posteriors is not available, we compared the sample sets generated by the different approaches with reference sample sets. These reference sets were generated using a particle filter with a fixed number of 200,000 samples (far more than actually needed for position estimation). After each iteration, we computed the K-L distance between the sample sets and the corresponding reference sets, using histograms for both sets. Note that in these experiments the time-stamps were ignored and the algorithms were given as much time as needed to process the data. Fig. 2(a) plots the average K-L distance along with 95% confidence intervals against the average number of samples for the different algorithms (for clarity, we omitted the large error bars for K-L distances above 1.0). Each data point represents the average of 16 global localization runs with different start positions of the robot (each run itself consists of approximately 150 sample set comparisons at the different points in time). As expected, the more samples are used, the better the approximation. The curves also illustrate the superior performance of our approach: While the fixed approach requires about 50,000 samples before it converges to a K-L distance below 0.25, our approach converges to the same level using only 3,000 samples on average. This is also an improvement by a factor of 12 compared to the approximately 36,000 samples needed by the likelihood-based approach. In essence, these experiments indicate that our approach, even though based on several approximations, is able to accurately track the true posterior using significantly smaller sample sets on average than the other approaches.

Real-time performance

Due to the computational overhead for determining the number of samples, it is not clear that our approach yields better results under real-time conditions. 
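The histogram-based comparison against the reference sets can be reproduced in a few lines. The sketch below works for 1-D sample sets; the grid resolution and the smoothing constant (which keeps the distance finite when a bin of the reference histogram is empty) are arbitrary choices, not the ones used in the experiments.

```python
import math
from collections import Counter

def kl_distance(samples, reference, bin_size=0.5, smooth=1e-6):
    """K-L distance between the histograms of two 1-D sample sets on a shared grid."""
    p = Counter(math.floor(x / bin_size) for x in samples)
    q = Counter(math.floor(x / bin_size) for x in reference)
    bins = set(p) | set(q)
    n, m = len(samples), len(reference)
    z = m + smooth * len(bins)     # normalizer of the smoothed reference histogram
    d = 0.0
    for b in bins:
        p_b = p[b] / n
        q_b = (q[b] + smooth) / z  # smoothed reference probability, always > 0
        if p_b > 0.0:
            d += p_b * math.log(p_b / q_b)
    return d

# Identical sets are (essentially) at distance zero; disjoint sets are far apart.
same = kl_distance([0.1, 0.6, 1.1], [0.1, 0.6, 1.1])
far = kl_distance([0.0] * 10, [10.0] * 10)
```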
Fig. 2: The x-axis represents the average sample set size for different parameters of the three approaches (KLD-sampling, likelihood-based adaptation, and fixed sampling). a) The y-axis plots the K-L distance between the reference densities and the sample sets generated by the different approaches (real-time constraints were not considered in this experiment). b) The y-axis represents the average localization error measured by the distance between estimated positions and reference positions. The U-shape in b) is due to the fact that under real-time conditions, an increasing number of samples results in higher update times and therefore loss of sensor data.

To test the performance of our approach under realistic conditions, we performed multiple global localization experiments under real-time considerations using the time-stamps in the data sets. Again, the different average numbers of samples for KLD-sampling were obtained by varying the $\epsilon$-bound. 
The minimum and maximum numbers of samples correspond to $\epsilon$-bounds of 0.4 and 0.015, respectively. As a natural measure of the performance of the different algorithms, we determined the distance between the estimated robot position and the corresponding reference position after each iteration.[2] The results are shown in Fig. 2(b). The U-shape of all three graphs nicely illustrates the trade-off involved in choosing the number of samples under real-time constraints: Choosing too few samples results in a poor approximation of the underlying posterior, and the robot frequently fails to localize itself. On the other hand, if we choose too many samples, each update of the algorithm can take several seconds and valuable sensor data has to be discarded, which results in less accurate position estimates. Fig. 2(b) also shows that even under real-time conditions, our KLD-sampling approach yields drastic improvements over both fixed sampling and likelihood-based sampling. The smallest average localization error is 44cm, in contrast to average errors of 79cm and 114cm for the likelihood-based and the fixed approach, respectively. This result is due to the fact that our approach is able to determine the best mix between more samples during early stages of localization and fewer samples during position tracking. Due to the smaller sample sets, our approach also needs significantly less processing power than any of the other approaches.

5 Conclusions and Future Research

We presented a statistical approach to adapting the sample set size of particle filters on-the-fly. The key idea of the KLD-sampling approach is to bound the error introduced by the sample-based belief representation of the particle filter. 
At each iteration, our approach generates samples until their number is large enough to guarantee that the K-L distance between the maximum likelihood estimate and the underlying posterior does not exceed a pre-specified bound. Thereby, our approach chooses a small number of samples if the density is focused on a small subspace of the state space, and chooses a large number of samples if the samples have to cover a major part of the state space.

Both the implementational and computational overhead of this approach are small. Extensive experiments using mobile robot localization as a test application show that our approach yields drastic improvements over particle filters with fixed sample sets and over a previously introduced adaptation approach [8, 5]. In our experiments, KLD-sampling yields better approximations using only 6% of the samples required by the fixed approach, and using less than 9% of the samples required by the likelihood adaptation approach. So far, KLD-sampling has been tested using robot localization only. We conjecture, however, that many other applications of particle filters can benefit from this method.

[2] Position estimates are extracted using histogramming and local averaging, and the reference positions were determined by evaluating the robot's highly accurate laser range-finder information.

KLD-sampling opens several directions for future research. In our current implementation we use a discrete distribution with a fixed bin size to determine the number of samples. We assume that the performance of the filter can be further improved by changing the discretization over time, using coarse discretizations when the uncertainty is high, and fine discretizations when the uncertainty is low. 
Our approach can also be extended to the case where in certain parts of the state space highly accurate estimates are needed, while in other parts a rather crude approximation is sufficient. This problem can be addressed by locally adapting the discretization to the desired approximation quality using multi-resolution tree structures [8, 9] in combination with stratified sampling. As a result, more samples are used in "important" parts of the state space, while fewer samples are used in other parts. Another area of future research is the thorough investigation of particle filters under real-time conditions. In many applications the rate of incoming sensor data is higher than the update rate of the particle filter. This introduces a trade-off between the number of samples and the amount of sensor data that can be processed (cf. 2(b)). In our future work, we intend to address this problem using techniques similar to the ones introduced in this work.

Acknowledgments

The author wishes to thank Jon A. Wellner and Vladimir Koltchinskii for their help in deriving the statistical background of this work. Additional thanks go to Wolfram Burgard and Sebastian Thrun for their valuable feedback on early versions of the technique.

References

[1] I. J. Cox and G. T. Wilfong, editors. Autonomous Robot Vehicles. Springer-Verlag, 1990.
[2] P. Del Moral and L. Miclo. Branching and interacting particle systems approximations of Feynman-Kac formulae with applications to non-linear filtering. In Seminaire de Probabilites XXXIV, number 1729 in Lecture Notes in Mathematics. Springer-Verlag, 2000.
[3] A. Doucet, N. de Freitas, and N. Gordon, editors. Sequential Monte Carlo Methods in Practice. Springer-Verlag, New York, 2001.
[4] A. Doucet, S. J. Godsill, and C. Andrieu. On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10(3), 2000.
[5] D. Fox, W. Burgard, F. Dellaert, and S. 
Thrun. Monte Carlo Localization: Efficient position estimation for mobile robots. In Proc. of the National Conference on Artificial Intelligence (AAAI), 1999.
[6] D. Fox, S. Thrun, F. Dellaert, and W. Burgard. Particle filters for mobile robot localization. In Doucet et al. [3].
[7] N. Johnson, S. Kotz, and N. Balakrishnan. Continuous Univariate Distributions, volume 1. John Wiley & Sons, New York, 1994.
[8] D. Koller and R. Fratkina. Using learning for approximation in stochastic processes. In Proc. of the International Conference on Machine Learning (ICML), 1998.
[9] A. W. Moore, J. Schneider, and K. Deng. Efficient locally weighted polynomial regression predictions. In Proc. of the International Conference on Machine Learning (ICML), 1997.
[10] M. Pelikan, D. E. Goldberg, and E. Cantu-Paz. Bayesian optimization algorithm, population size, and time to convergence. In Proc. of the Genetic and Evolutionary Computation Conference (GECCO), 2000.
[11] M. K. Pitt and N. Shephard. Filtering via simulation: auxiliary particle filters. Journal of the American Statistical Association, 94(446), 1999.
[12] J. A. Rice. Mathematical Statistics and Data Analysis. Duxbury Press, second edition, 1995.
", "award": [], "sourceid": 1998, "authors": [{"given_name": "Dieter", "family_name": "Fox", "institution": null}]}