0:00:13 | So, hello everybody. |
0:00:16 | In this paper we focus on particle filtering techniques for high dimensions. |
0:00:22 | Such problems appear in a number of application areas, for example group object tracking and extended object tracking. |
0:00:33 | In the paper we first review recent works on group and extended object tracking. |
0:00:41 | Then we present an overview of the sequential Monte Carlo framework. |
0:00:46 | The core of this work is the newly developed Markov chain Monte Carlo particle filter, which actually moves the cloud of particles towards more likely regions based on a subgradient projection, and we show that it works well for high-dimensional problems. |
0:01:13 | Then we compare the performance of this filter with the sampling importance resampling particle filter and with the unscented Kalman filter. |
0:01:25 | And we also go beyond that: we study the case when the data are sparse. This can bring some complications, because the problem might become unobservable, and there we compare with the compressive sampling Kalman filter. |
0:01:46 | Finally, I will conclude the talk with some open questions and future work. |
0:01:56 | So, there has been a lot of interest recently in group and extended object tracking. |
0:02:08 | Broadly, we can classify the works into two big groups: methods for a small number of groups with a relatively small number of objects, and sequential Monte Carlo methods for hundreds or thousands of objects, that is, huge groups. |
0:02:29 | For small groups I have to mention some of the first works, by Khan, Balch and Dellaert, for up to twenty objects, which use a Markov chain model together with a Markov random field, and which give good results. |
0:02:58 | However, Koch and his group then developed a range of techniques, Bayesian but not necessarily sequential Monte Carlo. |
0:03:07 | One of the approaches is to look at this problem as tracking an extended object, estimating both the kinematic states and the parameters of the shape, for example ellipse or circle forms; the problem then reduces to finding the extension, which is described with random matrices. |
0:03:34 | Then there is a big group of PHD filters, that is, the probability hypothesis density filter and the whole range of techniques associated with random finite sets. |
0:03:46 | Recently there have also been approaches based on sequential Monte Carlo, and there has been work combining sequential Monte Carlo with evolving random graphs. |
0:04:11 | Then the case with a large number of objects within the group is especially challenging, not only because of the large dimension but also because you cannot estimate each individual object separately; this problem is usually solved by forming a cluster and then estimating the parameters of the cluster, its center and its extent. |
0:04:37 | So extended object tracking comes down to a joint formulation where you want to know where the center is and what the extent is, and then you need to solve both a state estimation problem and a parameter estimation problem. |
0:04:57 | If possible, people split up the two problems, because, you know, particle filters are not so good at parameter estimation, especially when the parameters are constant. |
0:05:07 | So, in the works in this group, including our own, one estimates the parameters with a separate estimation approach and then feeds them into the particle filter or another filter. |
0:05:24 | And then there is another interesting group of approaches combining various techniques with the Kalman filter, or nonparametric Bayesian methods, and also ones based on Poisson-type process models, and so on. |
0:05:42 | So, what we do in our work: we focus on high-dimensional estimation problems, nonlinear in general, described with the general state space equations, where the state function is nonlinear and the noise can be non-Gaussian in general. We assume the Markovian property, so the state depends just on the previous state, and the measurement equation in general is also nonlinear, with noise that can likewise be non-Gaussian. |
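To fix notation, a generic form of the model described here is the following (the symbols are generic, not necessarily the paper's exact notation):

```latex
% Generic nonlinear, possibly non-Gaussian state-space model:
%   x_k : hidden state,   y_k : measurement,   v_k, w_k : noises
x_k = f(x_{k-1}, v_k), \qquad y_k = h(x_k, w_k)
% Markov property: p(x_k \mid x_{0:k-1}) = p(x_k \mid x_{k-1})
```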
0:06:18 | Just briefly, we are solving the Bayesian estimation problem: finding the posterior state PDF based on the data. Actually, we approximate the Chapman-Kolmogorov equation based on the particles and their weights, and the Bayesian update then multiplies the prediction by the likelihood. |
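The recursion being approximated is the standard Bayesian one, written here in generic notation:

```latex
% Prediction (Chapman-Kolmogorov equation):
p(x_k \mid y_{1:k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid y_{1:k-1})\, \mathrm{d}x_{k-1}
% Bayes update (multiply the prediction by the likelihood and normalize):
p(x_k \mid y_{1:k}) \propto p(y_k \mid x_k)\, p(x_k \mid y_{1:k-1})
```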
0:06:46 | So, within the sequential Monte Carlo scheme, we follow the prediction and update steps. For the prediction we have our proposal, actually the transition prior, which is what we use in our case, and we find the predicted PDF, where we spread the particles according to the noise. In the update step, when the measurement comes, we weight the particles by combining them with the likelihood, and there is a resampling step that then refreshes the variety of the particle set. |
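A minimal sketch of this bootstrap prediction/update/resampling step, assuming user-supplied transition-sampler and likelihood functions (the helper names are hypothetical, not from the paper):

```python
import numpy as np

def sir_step(particles, weights, y, sample_transition, likelihood, rng):
    """One bootstrap SIR particle filter step (sketch, not the paper's code).

    particles: (N, d) states at time k-1; sample_transition(x, rng) draws
    x_k ~ p(x_k | x_{k-1}); likelihood(y, x) evaluates p(y_k | x_k).
    """
    # Prediction: propagate through the transition prior, which "spreads"
    # the particle cloud according to the state noise.
    particles = np.array([sample_transition(x, rng) for x in particles])
    # Update: reweight each particle by the likelihood of the new measurement.
    weights = weights * np.array([likelihood(y, x) for x in particles])
    weights = weights / weights.sum()
    # Resampling: draw indices proportionally to the weights (fights degeneracy).
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```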
0:07:24 | Now, what are we doing? We move the cloud of particles with a Markov chain Monte Carlo method. |
0:07:38 | A number of such solutions exist in the literature. One can use Metropolis-Hastings: one can generate particles at time k minus one from a proposal distribution, and then the new particles at time k can be drawn in the following way. |
0:08:03 | We simulate a sample x prime from the joint probability density function, where x_k prime is drawn from the transition prior and the previous x at k minus one is uniformly drawn from the empirical distribution. |
0:08:27 | Then, within the acceptance step of the Metropolis-Hastings algorithm, one accepts or rejects the new candidate when this condition is satisfied: if a uniformly generated random number is less than or equal to the acceptance probability, here a likelihood ratio, then we accept; otherwise we reject. |
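As a sketch, the move just described, a uniform draw from the empirical distribution at k minus one, a transition-prior proposal, and a likelihood-ratio acceptance, could look like this; the helper names are assumptions, as above:

```python
import numpy as np

def mh_move(parts_prev, parts_k, y, sample_transition, likelihood, rng):
    """Metropolis-Hastings move over the time-k particle cloud (sketch).

    parts_prev: (N, d) particles at k-1 (the empirical distribution);
    parts_k:    (N, d) current particles at k, refined in place.
    """
    N = len(parts_k)
    for i in range(N):
        # Proposal: x_{k-1}' uniform over the empirical distribution,
        # then x_k' from the transition prior p(x_k | x_{k-1}').
        j = rng.integers(N)
        x_prop = sample_transition(parts_prev[j], rng)
        # With this proposal the transition terms cancel, leaving the
        # likelihood ratio as the acceptance probability.
        a = likelihood(y, x_prop) / max(likelihood(y, parts_k[i]), 1e-300)
        if rng.random() <= min(1.0, a):
            parts_k[i] = x_prop
    return parts_k
```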
0:08:54 | This is a good algorithm, but when the state noise is relatively small, the moves can be rather small. |
0:09:08 | There are recent improvements, suggested by Simon Godsill and his group, where one combines Metropolis-Hastings with the Gibbs sampler, and one can see that there is much better mixing, especially for large groups. |
0:09:27 | There are other algorithms too, like the MCMC samplers of Septier and his group. |
0:09:36 | What we do, as I will show on the next slide, is use subgradient information of the likelihood in order to move particles into more likely regions. |
0:09:50 | We take a sample x_k prime, propagated through the joint PDF, and then we calculate the direction based on the logarithm of the likelihood function: this is the subgradient, normalized, of the log-likelihood evaluated at the particle x prime. |
0:10:18 | And we have a relaxation parameter, which can be sampled from a uniform distribution or can be chosen adaptively in some way; this actually determines the performance of the algorithm. |
0:10:38 | Then we form the regularized proposal, like a Gaussian mixture, and the Metropolis-Hastings acceptance probability is formed based on this rule; we then accept or reject the new samples based on it. |
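A minimal sketch of the subgradient projection move just described: a step along the normalized subgradient of the log-likelihood, scaled by a relaxation parameter drawn uniformly (possibly negative), with Gaussian regularization and a likelihood-ratio accept/reject. The function names, default ranges, and the single-Gaussian simplification of the mixture are assumptions, not the paper's exact algorithm:

```python
import numpy as np

def subgrad_move(parts_k, y, grad_loglik, likelihood, rng,
                 lam_range=(-0.5, 1.5), sigma=0.1):
    """Subgradient-projection MCMC move (sketch of the described idea).

    grad_loglik(y, x): a subgradient of log p(y | x) at x (assumed helper);
    lam_range: support of the relaxation parameter, which may be negative.
    """
    N, d = parts_k.shape
    for i in range(N):
        g = grad_loglik(y, parts_k[i])
        g = g / max(np.linalg.norm(g), 1e-12)   # normalized subgradient
        lam = rng.uniform(*lam_range)           # relaxation parameter
        # Move towards a more likely region, then regularize with one
        # Gaussian component of the mixture proposal.
        x_prop = parts_k[i] + lam * g + sigma * rng.standard_normal(d)
        # Accept/reject (likelihood ratio; proposal correction omitted here).
        a = likelihood(y, x_prop) / max(likelihood(y, parts_k[i]), 1e-300)
        if rng.random() <= min(1.0, a):
            parts_k[i] = x_prop
    return parts_k
```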
0:11:06 | One can compare this proposed algorithm with the large family of random walk Markov chain Monte Carlo methods, where one can achieve a similar effect; but in our case the relaxation parameter can have negative values, and we also restrict it within a bounded interval. Theoretical convergence results are shown for convex log-likelihood functions. |
0:11:47 | The performance of this subgradient projection technique has been evaluated over a well-known example, with forty states and with one hundred states. |
0:12:04 | In the results we calculate the normalized average estimation error, that is, the true state minus the estimate, normalized to the norm of the actual state, for up to forty states. |
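In symbols, the reported metric is presumably the time-averaged normalized error:

```latex
% Averaged normalized estimation error over a run of length T (assumed form):
\bar{e} = \frac{1}{T} \sum_{k=1}^{T} \frac{\lVert x_k - \hat{x}_k \rVert_2}{\lVert x_k \rVert_2}
```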
0:12:32 | And you can see the difference in this averaged norm error between the subgradient projection Markov chain Monte Carlo method, the sampling importance resampling particle filter, and the unscented Kalman filter. |
0:12:55 | Next, we focus the attention on the performance of this algorithm when we have the lambda parameter alternating. |
0:13:11 | As we can see, one can achieve an even better performance compared with the MCMC where the regularization parameter is drawn from a uniform distribution, and also compared to the unscented Kalman filter. |
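The talk does not spell the alternating scheme out; one plausible reading, purely as a sketch, is that the relaxation parameter alternates deterministically between moves instead of being drawn uniformly:

```python
def lam_alternating(move_index, magnitude=0.5):
    """Hypothetical alternating relaxation parameter: flips sign each move.
    Only one plausible reading of 'lambda alternating'; not from the paper."""
    return magnitude if move_index % 2 == 0 else -magnitude
```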
0:13:31 | One can also see that one has a much higher acceptance ratio with the alternating MCMC algorithm than when lambda is drawn from the uniform distribution. |
0:13:49 | Next is the case where we have a linear example: this is actually a multivariate random walk model with one hundred states, and what we show is that the alternating MCMC can reach an accuracy which is comparable with the Kalman filter, the optimal solution for that case. |
0:14:18 | Another interesting problem is when the data are sparse. As you know, there has been a lot of research in that area: the term compressed sensing, or compressive sensing, was coined by Donoho in 2006. |
0:14:42 | There are a lot of works dealing with the linear case. It is valuable also because it works when we have a limited amount of data: we know the Shannon sampling theorem states that we can recover a signal completely if the sampling frequency is at least twice the maximum frequency of the signal. |
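In symbols, the classical Nyquist condition for exact recovery of a bandlimited signal is:

```latex
f_s \ge 2 f_{\max}
```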
0:15:13 | But the beauty here is that, even if this condition is violated, we can still recover a sparse signal, thanks to the compressed sensing theoretical derivations. |
0:15:28 | The problem boils down to an optimization which initially was formulated for the L0 norm, an NP-hard problem, but then it was reformulated as an optimization problem with minimization of the L1 norm. |
0:15:49 | We want to recover the signal x when we have a measurement vector whose dimension is much smaller than the dimension of the state vector. |
0:16:02 | This is possible if two conditions are satisfied. One is sparsity: x has at most S nonzero components, where S is a measure of the sparsity. The second one is incoherence: the matrix H has to have its column subsets, of size related to S, nearly orthogonal. |
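In generic notation, the relaxed recovery problem is the familiar L1 program:

```latex
% Compressed sensing recovery: y = H x with y \in \mathbb{R}^m, x \in \mathbb{R}^n, m \ll n
\min_{x \in \mathbb{R}^n} \lVert x \rVert_1 \quad \text{subject to} \quad y = H x
% (with noisy data: subject to \lVert y - H x \rVert_2 \le \epsilon)
```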
0:16:27 | And we compare the performance of the subgradient projection Markov chain Monte Carlo method with a recently developed compressive sampling Kalman filter, by Carmi, Gurfil and Kanevsky. |
0:16:47 | Actually, the problem reduces to the minimization of this norm, such that the mathematical expectation of the L2 norm of the error is bounded by a given number. |
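Written out, the constrained problem described here is presumably of the form:

```latex
% Constrained form used by the compressive sampling Kalman filter (assumed notation):
\min \lVert \hat{x}_k \rVert_1 \quad \text{subject to} \quad
\mathbb{E}\left[ \lVert x_k - \hat{x}_k \rVert_2^2 \right] \le \epsilon
```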
0:17:07 | So what are the problems here? One of the difficulties is that when the signal is sparse, this affects the observability of the system, and one might not be able to observe many of the states. |
0:17:25 | In the example we have one hundred states but a much smaller number of observations, and the regular Kalman filter cannot work in these conditions, maybe because of the lack of observability. Let's see what kind of results we have. |
0:17:48 | So this is the signal, shown with bars, and on the right-hand side is a measure of complexity, indicating how close the signal is to being perfectly sparse. |
0:18:08 | And here are different realizations; this one is with more noise. Then, for the linear example with one hundred states: |
0:18:22 | This is the result we get: this is the Kalman filter, and this is the compressed sensing Kalman filter; and it seems that the subgradient projection Markov chain Monte Carlo method has a performance close to that of the compressive sampling Kalman filter, which is the optimal one, even when the conditions are not fully respected. |
0:18:53 | Let me conclude this talk. This work proposes a new Markov chain Monte Carlo method that improves the performance of sequential Monte Carlo filters by moving the samples into more likely regions, based on a subgradient projection, and we compared it with several well-known filters. |
0:19:26 | So, in this work we actually proposed a new proposal function, with which high accuracy is achieved, and in future we would like to look at more complex examples, more related to group and extended object tracking. |
0:19:53 | (Inaudible audience question.) |
0:20:18 | Well, I think so. It is because when you use the gradient you push the particles in, you know, a better direction, and then you improve the accuracy that way. Otherwise there might be a lot of sample depletion. |
0:20:44 | (Inaudible audience question.) |
0:20:50 | That is a very good question; those results are not in the paper. |
0:21:04 | (Inaudible exchange.) |
0:21:28 | Usually it is a matter of choosing and tuning the proposal, true. But it will also be related to the application: I expect that maybe, for that group of methods, one type of proposal will work better. |
0:21:48 | Thank you. |