0:00:13 | This talk is on distributed Gaussian particle filtering using likelihood consensus, and it is joint work with |
---|
0:00:18 | Ondrej Slučiak, Franz Hlawatsch, and Petar Djurić. |
---|
0:00:23 | So first, let me summarize |
---|
0:00:27 | the contribution of this paper. |
---|
0:00:29 | We propose a distributed implementation |
---|
0:00:33 | of the Gaussian particle filter, which was originally introduced |
---|
0:00:38 | in a centralized form in the paper of Kotecha and Djurić in 2003. |
---|
0:00:44 | So in this paper we propose a distributed implementation, and in this implementation each |
---|
0:00:51 | sensor computes a |
---|
0:00:54 | global estimate based on the joint likelihood function of all sensors. |
---|
0:01:00 | The joint likelihood function, or its approximation, is obtained |
---|
0:01:05 | at each sensor in a distributed way using the likelihood consensus scheme, which we proposed |
---|
0:01:11 | in our previous paper from 2010, |
---|
0:01:13 | at the Asilomar conference. |
---|
0:01:16 | Here we also use a second stage of consensus algorithms |
---|
0:01:20 | to reduce the complexity of the distributed Gaussian |
---|
0:01:25 | particle filter. |
---|
0:01:27 | So, here is a brief comparison with some other consensus-based distributed particle filters. |
---|
0:01:33 | So, in the paper |
---|
0:01:35 | of Farahmand et al. from 2010, no approximations are used, so the |
---|
0:01:41 | estimation performance can be |
---|
0:01:43 | better, but on the other hand the communication requirements can be much higher than |
---|
0:01:48 | in our case. And in |
---|
0:01:50 | the method from 2008, only local likelihood functions are used, |
---|
0:01:55 | whereas in contrast we use the joint likelihood function at each sensor, |
---|
0:01:59 | so the estimation performance of our method is better. |
---|
0:02:05 | Okay, so let's start with an |
---|
0:02:08 | overview of distributed estimation in wireless sensor networks. |
---|
0:02:12 | What we consider is a wireless sensor network composed of capital K |
---|
0:02:17 | sensor nodes that jointly estimate a time-varying state x_n, |
---|
0:02:23 | and each of the sensors, |
---|
0:02:24 | indexed by |
---|
0:02:25 | small k, |
---|
0:02:27 | obtains a measurement vector |
---|
0:02:30 | z_{n,k}. |
---|
0:02:31 | An example is target localization |
---|
0:02:34 | or tracking based on the sound emitted by a moving target. |
---|
0:02:39 | The goals that we would like to achieve are the following: |
---|
0:02:42 | each sensor node should obtain a global |
---|
0:02:44 | state estimate |
---|
0:02:45 | x̂_n |
---|
0:02:46 | based on the measurements of all sensors in the network; this might be important, for example, in sensor- |
---|
0:02:51 | actuator or |
---|
0:02:53 | robotic networks. |
---|
0:02:55 | We would like to use only local processing and short-distance communications with neighbors, |
---|
0:03:00 | and no fusion center should be used, and no routing of measurements throughout the network. |
---|
0:03:07 | Okay, we also wish to perform sequential estimation |
---|
0:03:09 | of the time-varying state |
---|
0:03:12 | x_n |
---|
0:03:13 | from the current and past measurements of all sensors |
---|
0:03:16 | in the sensor network. |
---|
0:03:20 | We consider a nonlinear, non-Gaussian state-space model, but with |
---|
0:03:24 | independent additive Gaussian measurement noises. |
---|
0:03:27 | Such a system is described by the state-transition pdf |
---|
0:03:31 | and the joint likelihood function, or JLF, |
---|
0:03:34 | where z_n |
---|
0:03:37 | is the collection of measurements from all sensors. |
---|
0:03:41 | In this case, optimal Bayesian estimation amounts to calculating the posterior |
---|
0:03:46 | pdf shown here, |
---|
0:03:48 | and sequential estimation is enabled by a recursive posterior update, where we |
---|
0:03:53 | turn the |
---|
0:03:54 | previous posterior into the current one |
---|
0:03:57 | using the state-transition pdf and the joint likelihood function. |
---|
0:04:00 | The joint likelihood function is important if we want to obtain |
---|
0:04:03 | global results based on |
---|
0:04:06 | all sensors' |
---|
0:04:07 | measurements. |
---|
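For reference, the recursive posterior update mentioned here is the standard Bayesian filtering recursion; a sketch in generic notation (the slide's exact formulation may differ):

```latex
% Prediction with the state-transition pdf, correction with the JLF:
f(x_n \mid z_{1:n}) \;\propto\; f(z_n \mid x_n)
    \int f(x_n \mid x_{n-1})\, f(x_{n-1} \mid z_{1:n-1})\, dx_{n-1}
```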
0:04:09 | Okay, so now let's have a look at the distributed Gaussian particle filter. |
---|
0:04:14 | It's well known that for nonlinear, non-Gaussian systems, optimal Bayesian estimation is typically infeasible, |
---|
0:04:21 | and a computationally feasible approximation is provided by |
---|
0:04:25 | particle filtering, or |
---|
0:04:27 | the sequential Monte Carlo approach. |
---|
0:04:29 | An example, |
---|
0:04:31 | one of the many particle filters, is the Gaussian particle filter proposed in this paper, |
---|
0:04:36 | where the posterior is approximated by a Gaussian pdf, and the mean and covariance of this |
---|
0:04:42 | Gaussian approximation are |
---|
0:04:44 | obtained from a set of |
---|
0:04:47 | weighted samples, or particles. |
---|
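As a minimal sketch of this moment-matching step (variable names are illustrative, not from the paper): the Gaussian approximation is simply fitted from the weighted sample mean and covariance of the particles.

```python
import numpy as np

def gaussian_approximation(particles, weights):
    """Fit a Gaussian N(mu, C) to a set of weighted particles.

    particles: (J, d) array of particles; weights: (J,) nonnegative weights.
    """
    w = weights / weights.sum()            # normalize the weights
    mu = w @ particles                     # weighted sample mean, shape (d,)
    diff = particles - mu
    C = (w[:, None] * diff).T @ diff       # weighted sample covariance, (d, d)
    return mu, C
```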
0:04:50 | What we propose is a distributed |
---|
0:04:52 | implementation of the |
---|
0:04:54 | Gaussian particle filter, where each sensor |
---|
0:04:57 | uses a |
---|
0:04:58 | local |
---|
0:05:00 | Gaussian particle filter to sequentially track the mean and covariance of a local Gaussian approximation of the global |
---|
0:05:06 | posterior. |
---|
0:05:10 | In this case, the measurement update |
---|
0:05:12 | at each sensor uses the global joint likelihood function, which ensures that global estimates are obtained, |
---|
0:05:20 | and the JLF is provided to each sensor in a distributed way using the likelihood consensus |
---|
0:05:27 | scheme that we proposed in this paper. |
---|
0:05:29 | Some advantages are that the consensus algorithms employed by likelihood consensus require only local communications and operate without |
---|
0:05:38 | routing protocols, |
---|
0:05:40 | and no measurements or particles need to be exchanged between the sensors. |
---|
0:05:45 | So here I'll show the steps that each |
---|
0:05:49 | sensor performs, i.e., the steps of a local |
---|
0:05:52 | Gaussian particle filter. |
---|
0:05:54 | First, at time n, each sensor obtains the Gaussian approximation of the previous global posterior. |
---|
0:06:01 | Then it |
---|
0:06:02 | draws |
---|
0:06:03 | particles from this |
---|
0:06:04 | Gaussian approximation and propagates them through the state-transition model, so |
---|
0:06:09 | basically it samples new predicted particles from the state-transition pdf. |
---|
0:06:14 | Then we need to calculate the joint likelihood function |
---|
0:06:18 | at each sensor, and to do this we use the likelihood consensus; |
---|
0:06:21 | this step requires communication between the |
---|
0:06:25 | neighboring sensors. |
---|
0:06:28 | Afterwards, each sensor can update the particle weights using the |
---|
0:06:31 | obtained joint likelihood function, so |
---|
0:06:33 | this is how it's done: |
---|
0:06:35 | basically, we evaluate the joint likelihood function at the |
---|
0:06:39 | predicted particles, and |
---|
0:06:42 | that's why we need the joint likelihood function at each sensor |
---|
0:06:45 | as a function of the |
---|
0:06:46 | state x_n, for pointwise evaluation. |
---|
0:06:50 | Once we have the particles and weights, we can calculate again the mean |
---|
0:06:54 | and covariance of the |
---|
0:06:56 | Gaussian approximation of the global posterior, |
---|
0:06:59 | and |
---|
0:06:59 | the state estimate is basically equal to this calculated mean. |
---|
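Putting these steps together, one time step of a local Gaussian particle filter could look roughly like this; a sketch under the stated model, where `propagate` and `joint_likelihood` are hypothetical stand-ins for the state-transition sampling and the consensus-approximated JLF:

```python
import numpy as np

def local_gpf_step(mu_prev, C_prev, propagate, joint_likelihood, J=1000, rng=None):
    """One time step of a (local) Gaussian particle filter.

    mu_prev, C_prev:  Gaussian approximation of the previous global posterior
    propagate:        samples x_n given particles x_{n-1} (state-transition pdf)
    joint_likelihood: evaluates the (approximate) JLF at the predicted particles,
                      obtained at this sensor via likelihood consensus
    """
    rng = rng or np.random.default_rng()
    # 1. Draw particles from the Gaussian approximation of the previous posterior.
    x_prev = rng.multivariate_normal(mu_prev, C_prev, size=J)
    # 2. Propagate them through the state-transition model (predicted particles).
    x_pred = propagate(x_prev, rng)
    # 3. Weight the predicted particles with the joint likelihood function.
    w = joint_likelihood(x_pred)
    w = w / w.sum()
    # 4. Recompute mean and covariance of the Gaussian posterior approximation;
    #    the mean also serves as the state estimate.
    mu = w @ x_pred
    diff = x_pred - mu
    C = (w[:, None] * diff).T @ diff
    return mu, C
```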
0:07:04 | So now let's have a look at how the likelihood consensus scheme operates. |
---|
0:07:10 | In this paper we consider the following measurement model: we have a |
---|
0:07:15 | measurement function h_{n,k}(x_n), which is |
---|
0:07:19 | in general nonlinear; it's a function of the |
---|
0:07:22 | sensor |
---|
0:07:23 | index, and |
---|
0:07:25 | it depends on the sensor and possibly also on time, |
---|
0:07:28 | plus |
---|
0:07:29 | additive |
---|
0:07:31 | Gaussian measurement noise, which is |
---|
0:07:33 | assumed to be independent from sensor to sensor. |
---|
0:07:35 | Due to this, we obtain the joint likelihood function as a product of local likelihood functions, |
---|
0:07:41 | and therefore in the exponent of the joint likelihood |
---|
0:07:44 | we have a sum over all sensors; this is the expression S_n |
---|
0:07:48 | here. |
---|
0:07:49 | For purposes of statistical inference, |
---|
0:07:52 | this expression completely describes the joint likelihood function, so we will focus on a distributed calculation of S_n, |
---|
0:07:59 | and it will be |
---|
0:08:00 | necessary to obtain it as a function of the state x_n; |
---|
0:08:03 | z_n is just the collection of measurements from all sensors, and it's observed and hence fixed. |
---|
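Written out under the stated assumptions (scalar measurements with independent additive Gaussian noise; the slide's symbols may differ slightly), the expressions referred to here are, up to constants:

```latex
% Product of local likelihoods, with the sum S_n(x_n) in the exponent:
f(z_n \mid x_n) = \prod_{k=1}^{K} f(z_{n,k} \mid x_n)
  \;\propto\; \exp\!\big(S_n(x_n)\big),
\qquad
S_n(x_n) = -\frac{1}{2} \sum_{k=1}^{K}
  \frac{\big(z_{n,k} - h_{n,k}(x_n)\big)^{2}}{\sigma_{n,k}^{2}}
```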
0:08:10 | A direct calculation of S_n would |
---|
0:08:13 | require that each sensor know the |
---|
0:08:15 | measurements and also the measurement functions of all other sensors in the network, |
---|
0:08:19 | but initially we assume that |
---|
0:08:22 | each sensor only has its |
---|
0:08:23 | local information, so we would need to somehow |
---|
0:08:26 | route this local information from each sensor to every other sensor. |
---|
0:08:30 | Instead, |
---|
0:08:33 | we choose another approach: we suitably approximate S_n |
---|
0:08:37 | by |
---|
0:08:38 | suitably approximating the sensor measurement functions locally, |
---|
0:08:41 | and we do the approximation in such a way that we can then use consensus algorithms to compute S_n. |
---|
0:08:49 | Here we use a |
---|
0:08:51 | polynomial approximation of the sensor measurement functions, where h-tilde is the polynomial approximation. |
---|
0:08:58 | The functions φ_r(x_n) here are basically the |
---|
0:09:02 | monomials of the polynomial, but in principle we could use other basis functions to obtain |
---|
0:09:08 | a more general approximation. |
---|
0:09:11 | The coefficients α of this approximation are calculated using least-squares polynomial fitting, and |
---|
0:09:18 | as the data points for this least-squares fit we use the predicted particles of |
---|
0:09:23 | the particle filter. |
---|
0:09:25 | It is important to note that the |
---|
0:09:28 | approximation, so basically the |
---|
0:09:29 | alpha coefficients of the approximation, is obtained locally at each sensor, so |
---|
0:09:33 | we don't need to communicate anything to do that. |
---|
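A sketch of this local fitting step, assuming scalar measurement functions and plain monomial basis functions (all names are illustrative): each sensor solves a small least-squares problem using its predicted particles as data points, with no communication needed.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_basis(X, degree):
    """Evaluate all monomials of the state up to a given degree.

    X: (J, d) array of particles -> (J, R) design matrix [phi_r(x_j)].
    """
    J, d = X.shape
    cols = [np.ones(J)]                                  # degree-0 term
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

def fit_local_measurement_function(h_k, particles, degree=2):
    """Least-squares polynomial fit of the local measurement function h_k,
    using the predicted particles as data points (a purely local computation)."""
    Phi = monomial_basis(particles, degree)              # (J, R)
    y = h_k(particles)                                   # (J,) function values
    alpha, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # coefficients alpha_r
    return alpha
```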
0:09:38 | Now, if we substitute the |
---|
0:09:40 | polynomial approximation h-tilde for h in this expression for S_n, we obtain |
---|
0:09:46 | an approximate |
---|
0:09:48 | S-tilde. |
---|
0:09:51 | Since |
---|
0:09:53 | the h-tildes are |
---|
0:09:55 | polynomials, from this sum |
---|
0:10:00 | over all sensors we also obtain a polynomial, but of twice the degree, so |
---|
0:10:01 | we can write this |
---|
0:10:03 | as the polynomial seen here. |
---|
0:10:05 | The |
---|
0:10:05 | beta coefficients contain, for each sensor, |
---|
0:10:09 | all local information, i.e., its measurement |
---|
0:10:12 | as well as the |
---|
0:10:13 | alpha coefficients of the approximation of its local |
---|
0:10:17 | measurement function. |
---|
0:10:19 | What's important is that these coefficients are independent of the state x_n, |
---|
0:10:24 | and the only |
---|
0:10:25 | way the state enters this expression is through the monomials, or |
---|
0:10:29 | some general basis functions. |
---|
0:10:31 | Now, if we exchange the order of summation here, we get a |
---|
0:10:36 | polynomial which has |
---|
0:10:38 | coefficients t, |
---|
0:10:40 | and these coefficients |
---|
0:10:42 | are obtained as a sum over all sensors, |
---|
0:10:45 | and therefore |
---|
0:10:46 | these coefficients contain information from the entire network. |
---|
0:10:50 | So we could view them |
---|
0:10:52 | as a sufficient statistic that fully describes |
---|
0:10:56 | S-tilde, |
---|
0:10:57 | and in turn also the approximate joint likelihood function. |
---|
0:11:00 | So, this is the approximate joint likelihood: if each sensor knows these coefficients t, |
---|
0:11:06 | then it can evaluate the |
---|
0:11:08 | joint likelihood function, more or less, for any value of the state x_n. |
---|
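In symbols, the rearrangement described here is (a sketch in the talk's naming, with φ_r the basis functions):

```latex
% Exchanging the order of summation: the state enters only through the basis
% functions, and the coefficients t are sums of local terms over all sensors.
\tilde{S}_n(x_n) = \sum_{k=1}^{K} \sum_{r} \beta_{n,k,r}\, \varphi_r(x_n)
                 = \sum_{r} t_{n,r}\, \varphi_r(x_n),
\qquad
t_{n,r} = \sum_{k=1}^{K} \beta_{n,k,r}
```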
0:11:14 | Since these coefficients are obtained, as already said, |
---|
0:11:18 | as a summation over all sensors, they can be computed using a distributed consensus algorithm at each |
---|
0:11:24 | sensor. |
---|
0:11:25 | This is basically how it operates: each sensor computes the coefficients locally from the locally available |
---|
0:11:31 | data, |
---|
0:11:32 | and then the sum over all sensors is computed in a distributed way using consensus. |
---|
0:11:38 | This requires only the transmission of |
---|
0:11:40 | partial sums to the neighbors, so we don't need to transmit measurements or particles; |
---|
0:11:46 | the communication load will therefore be much lower. |
---|
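A minimal sketch of how such sums can be obtained with a standard average-consensus iteration (illustrative step size and graph representation; the paper's consensus weights may differ): each node repeatedly mixes its values with those of its neighbors, and K times the consensus value equals the network-wide sum.

```python
import numpy as np

def consensus_sum(beta, neighbors, n_iter=8, eps=0.2):
    """Approximate t_r = sum_k beta_{k,r} at every node via average consensus.

    beta:      (K, R) array; row k holds sensor k's local coefficients
    neighbors: list of neighbor-index lists for a connected graph
    eps:       step size (must be small enough for the graph's max degree)
    """
    K = beta.shape[0]
    t = beta.astype(float).copy()
    for _ in range(n_iter):
        t_new = t.copy()
        for k in range(K):
            # One consensus iteration: only partial sums are exchanged with
            # neighbors -- never measurements or particles.
            t_new[k] += eps * sum(t[j] - t[k] for j in neighbors[k])
        t = t_new
    return K * t    # each row approaches the sum over all sensors
```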
0:11:51 | Okay, I'll just briefly mention |
---|
0:11:54 | a reduced-complexity version of the distributed Gaussian particle filter. |
---|
0:11:59 | In this |
---|
0:12:01 | reduced-complexity version, each of the |
---|
0:12:04 | K sensors, |
---|
0:12:06 | or |
---|
0:12:07 | K local Gaussian particle filters, |
---|
0:12:09 | uses a reduced number of particles J'. |
---|
0:12:12 | So we reduce the number of particles by a factor equal to the number of sensors, |
---|
0:12:17 | and we calculate a partial mean and a partial covariance of the global posterior, again using the joint |
---|
0:12:23 | likelihood function obtained via the likelihood consensus. |
---|
0:12:28 | Afterwards, these partial means and covariances can be |
---|
0:12:31 | combined by means of a second stage of consensus algorithms. |
---|
0:12:36 | If the second stage |
---|
0:12:38 | uses a sufficient |
---|
0:12:39 | number of iterations, then the estimation performance |
---|
0:12:43 | of the reduced-complexity version will be effectively equal to that of the original one. So |
---|
0:12:48 | we |
---|
0:12:49 | reduce the computational complexity, but of course we introduce some new communications, so it comes at the cost of some |
---|
0:12:58 | increase in communications. |
---|
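One plausible way to picture the combination step (a sketch, not necessarily the paper's exact equations): each sensor forms unnormalized partial sums from its reduced particle set, the second consensus stage adds these partial sums over all sensors, and the global mean and covariance are then recovered by normalizing.

```python
import numpy as np

def partial_sums(x_pred, w):
    """Unnormalized partial sums from one sensor's reduced particle set."""
    s0 = w.sum()                               # sum of weights
    s1 = w @ x_pred                            # weighted sum of particles
    s2 = (w[:, None] * x_pred).T @ x_pred      # weighted second-moment matrix
    return s0, s1, s2

def combine_partial_sums(s0_tot, s1_tot, s2_tot):
    """Global mean/covariance once the second consensus stage has summed the
    partial quantities over all K sensors."""
    mu = s1_tot / s0_tot
    C = s2_tot / s0_tot - np.outer(mu, mu)
    return mu, C
```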
0:13:01 | Okay, now I'll show you |
---|
0:13:02 | a target tracking example and some simulation results. |
---|
0:13:07 | In this example, the state |
---|
0:13:10 | represents the 2-D position and the 2-D velocity of the target, |
---|
0:13:14 | and it evolves according to this state-transition equation. |
---|
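The slide's equation isn't captured in the transcript; a common choice for a 2-D position/velocity state, sketched here purely as an assumption, is a near-constant-velocity model:

```python
import numpy as np

DT = 1.0                               # sampling period (assumed)
# State x_n = [px, py, vx, vy]: 2-D position and 2-D velocity.
G = np.array([[1.0, 0.0, DT,  0.0],
              [0.0, 1.0, 0.0, DT ],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

def propagate(x_prev, rng, sigma_u=0.1):
    """Propagate particles through a near-constant-velocity model with
    Gaussian driving noise on the velocity components (assumed model)."""
    noise = np.zeros_like(x_prev)
    noise[:, 2:] = sigma_u * rng.standard_normal((x_prev.shape[0], 2))
    return x_prev @ G.T + noise
```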
0:13:20 | We simulate a |
---|
0:13:23 | network of randomly deployed acoustic amplitude sensors that sense the sound emitted |
---|
0:13:30 | by the target. |
---|
0:13:32 | The measurement model is |
---|
0:13:33 | the following: the |
---|
0:13:35 | sensor measurement function is basically given here, |
---|
0:13:38 | so we have the amplitude of the |
---|
0:13:40 | source divided by the distance between the |
---|
0:13:43 | target |
---|
0:13:44 | and the sensor. |
---|
0:13:46 | In principle, the sensor positions can be time-varying, so we could |
---|
0:13:51 | apply this method also to dynamic sensor networks. |
---|
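As described, the local measurement function is the source amplitude divided by the target-sensor distance; a sketch with illustrative parameters (the amplitude A and the sensor position are assumptions):

```python
import numpy as np

def h_k(particles, sensor_pos, A=10.0):
    """Acoustic amplitude measurement: source amplitude divided by the
    distance between the target position (first two state components)
    and sensor k."""
    dist = np.linalg.norm(particles[:, :2] - sensor_pos, axis=1)
    return A / dist
```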
0:13:56 | This is the setting: we deployed the sensors in a field of |
---|
0:14:00 | dimension 200 by 200 meters, and |
---|
0:14:03 | the network consists of 25 acoustic amplitude sensors. |
---|
0:14:08 | The proposed distributed Gaussian particle filter and its reduced-complexity version are compared with a centralized Gaussian particle |
---|
0:14:16 | filter. |
---|
0:14:17 | We used 1000 particles, and to approximate the measurement function we use a polynomial of degree |
---|
0:14:22 | two, |
---|
0:14:23 | which leads to 14 consensus algorithms that need to be executed in parallel, so |
---|
0:14:28 | basically, in one iteration of likelihood consensus you need to transmit 14 real numbers. |
---|
0:14:34 | We compare likelihood consensus using eight consensus iterations |
---|
0:14:39 | with a |
---|
0:14:41 | case where we calculate the sums |
---|
0:14:44 | exactly, so that could be |
---|
0:14:48 | seen as the asymptotic case, |
---|
0:14:49 | i.e., more or less an infinite number of consensus iterations. |
---|
0:14:54 | Okay, here, just as an illustration, we see that the green line is the true target trajectory, and the |
---|
0:14:59 | red one is the tracked one. |
---|
0:15:02 | This is just the result |
---|
0:15:03 | from one of the sensors, but in principle all sensors obtain the same result. |
---|
0:15:08 | Here is the root-mean-square error performance versus time. |
---|
0:15:13 | The black line is the centralized case, and as expected it is the best one. |
---|
0:15:17 | Now, if you look at the distributed case with exact sum calculation, |
---|
0:15:22 | that's the red line, |
---|
0:15:23 | there is a slight performance degradation, and |
---|
0:15:26 | of course, if you only use eight iterations of consensus, you get the blue line, which has |
---|
0:15:31 | slightly worse performance again. But even if |
---|
0:15:34 | we compare |
---|
0:15:35 | the blue and red curves |
---|
0:15:37 | to the black one, i.e., to the centralized case, the performance degradation is not so large. |
---|
0:15:42 | Here is the |
---|
0:15:44 | average RMSE, which we averaged also over time, versus the measurement noise variance. As the |
---|
0:15:50 | noise variance rises, the |
---|
0:15:52 | error rises as well, but more or less the comparison between the three methods is the same as |
---|
0:15:57 | in the first figure. |
---|
0:15:58 | Here is the dependence of the estimation error on the number of consensus iterations, and of course, as the |
---|
0:16:04 | number of iterations increases, the performance gets better. |
---|
0:16:08 | But what's interesting is that |
---|
0:16:11 | when we compare the |
---|
0:16:12 | solid |
---|
0:16:13 | blue curve with the solid red one, i.e., the |
---|
0:16:16 | distributed Gaussian particle filter and its reduced-complexity version, |
---|
0:16:19 | for a |
---|
0:16:20 | lower number of |
---|
0:16:21 | iterations the reduced-complexity version |
---|
0:16:25 | has a slightly better performance. This we could explain, |
---|
0:16:29 | more or less, in |
---|
0:16:30 | such a way that the second stage of consensus algorithms helps to further diffuse the local information |
---|
0:16:36 | throughout the network. |
---|
0:16:38 | Okay, so to conclude: we proposed a distributed Gaussian particle filtering scheme |
---|
0:16:45 | in which each sensor runs a local Gaussian particle filter that computes a global state estimate reflecting the |
---|
0:16:51 | measurements of all sensors. |
---|
0:16:52 | To do this, |
---|
0:16:55 | we update the particle weights at each sensor using the joint likelihood function, which we obtain in a |
---|
0:17:00 | distributed way |
---|
0:17:01 | via likelihood consensus. |
---|
0:17:05 | A nice property of likelihood consensus is that it requires only local communications of |
---|
0:17:10 | certain sufficient statistics, so no measurements or particles need to be communicated, and |
---|
0:17:14 | it is also suitable for dynamic sensor networks. |
---|
0:17:17 | We also proposed a reduced-complexity variant of the distributed Gaussian particle filter, |
---|
0:17:23 | and the simulation results indicate that the performance is good even in comparison with the centralized Gaussian particle filter. |
---|
0:17:31 | Okay, so that concludes my talk. Thanks. |
---|
0:17:42 | [inaudible audience question] |
---|
0:17:48 | [Audience] Is it sensitive to K, to the value you choose? |
---|
0:17:53 | You take a static number of polynomial terms, right? |
---|
0:17:59 | Yeah, that's the order of the polynomial. And is this approximation |
---|
0:18:03 | good for all K? |
---|
0:18:05 | You mean for all sensors? Yes. |
---|
0:18:07 | Yeah, I mean, in this application we use the same |
---|
0:18:12 | type of measurement function at each sensor, |
---|
0:18:14 | so |
---|
0:18:16 | that's why we also used the same approximation for all sensors. |
---|
0:18:19 | But in principle you could have |
---|
0:18:21 | different measurement functions at different sensors, and then you would need to |
---|
0:18:27 | use different orders of polynomials. |
---|
0:18:34 | [Audience, partly inaudible] ...does every sensor arrive at the same value |
---|
0:18:45 | of the global |
---|
0:18:46 | likelihood function? |
---|
0:18:48 | Well, I mean, you can only guarantee this by using a... |
---|
0:18:54 | I think you cannot guarantee this; it depends on the size of your network, and |
---|
0:18:59 | the bigger the network, the more iterations you need. |
---|
0:19:07 | Yes, |
---|
0:19:08 | yes, there are slight differences, I mean, depending on the number of iterations. |
---|
0:19:19 | No, actually, in the Gaussian particle filter you don't need any resampling, because you construct |
---|
0:19:24 | the Gaussian posterior and then you sample new particles. |
---|
0:19:27 | But yes, if you have an insufficient number of iterations, then, because each of the nodes operates separately, |
---|
0:19:34 | each of the nodes has |
---|
0:19:35 | its own set of particles and its own set of weights, and there will be slight differences. |
---|
0:19:43 | [inaudible exchange] |
---|
0:20:00 | No, no, it's not the case. I mean, |
---|
0:20:04 | it's just as |
---|
0:20:06 | you're saying. |
---|
0:20:10 | Yeah, okay. |
---|