0:00:14 | Thank you for the introduction to my presentation. |
---|
0:00:20 | The topic is regularization parameter selection, and basically it is about LASSO-based estimation in general, |
---|
0:00:28 | but we are going to |
---|
0:00:31 | explain our results in the context of DOA estimation. |
---|
0:00:37 | For this reason, I will first introduce the DOA problem |
---|
0:00:40 | and how we can solve it by |
---|
0:00:43 | sparse representation, |
---|
0:00:46 | and then |
---|
0:00:48 | we address the regularization parameter estimation problem, |
---|
0:00:52 | and we introduce a method to solve this problem, and finally we compare the different methods. |
---|
0:01:02 | In the DOA problem we have a set of sources |
---|
0:01:07 | and a small array of sensors, |
---|
0:01:10 | and we want to estimate the directions of the sources, the directions of arrival of the signals |
---|
0:01:16 | at the sensors, |
---|
0:01:18 | only by having |
---|
0:01:22 | the data received at the array. |
---|
0:01:25 | So our assumptions are that |
---|
0:01:28 | the signals are narrowband and |
---|
0:01:30 | the sources are in the far field, |
---|
0:01:33 | so that |
---|
0:01:35 | we can write the received data |
---|
0:01:38 | as a linear model of the sensed data. So if we assume a source from a certain direction |
---|
0:01:44 | in the space, |
---|
0:01:46 | we can write the received data |
---|
0:01:49 | as the multiplication of a steering vector, which depends on the configuration of |
---|
0:01:54 | the sensor array, here a uniform linear array, and |
---|
0:01:59 | the source signal we introduce here. |
---|
0:02:02 | So if we have many sources, |
---|
0:02:04 | the total received data will be, |
---|
0:02:10 | by superimposing the different elements, a linear combination of |
---|
0:02:15 | these terms, which can be written in matrix form. |
---|
0:02:19 | Of course, we |
---|
0:02:21 | always have noise |
---|
0:02:23 | there. |
---|
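The narrowband far-field model just described can be sketched numerically. This is an illustrative sketch only: the half-wavelength spacing, the noise model, and all function names are my assumptions, not taken from the talk.

```python
import numpy as np

def steering_vector(theta_deg, n_sensors, spacing=0.5):
    """Steering vector of a uniform linear array (ULA).

    `spacing` is the inter-sensor distance in wavelengths; sensor m sees
    the phase exp(-j*2*pi*spacing*m*sin(theta)) for a far-field source."""
    theta = np.deg2rad(theta_deg)
    m = np.arange(n_sensors)
    return np.exp(-2j * np.pi * spacing * m * np.sin(theta))

def received_data(thetas_deg, signals, n_sensors, noise_std=0.1, rng=None):
    """Superimpose the contributions of all sources plus white noise:
    y = sum_k a(theta_k) * s_k + n = A s + n (matrix form)."""
    rng = np.random.default_rng(rng)
    A = np.column_stack([steering_vector(t, n_sensors) for t in thetas_deg])
    s = np.asarray(signals, dtype=complex)
    noise = noise_std * (rng.standard_normal(n_sensors)
                         + 1j * rng.standard_normal(n_sensors)) / np.sqrt(2)
    return A @ s + noise

y = received_data([-10.0, 20.0], [1.0, 0.8], n_sensors=8, rng=0)
print(y.shape)  # one snapshot across 8 sensors
```

A single call produces one snapshot; stacking several calls column-wise would give the multi-snapshot data matrix mentioned later in the talk.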
0:02:26 | There are many ways to solve this problem, |
---|
0:02:28 | but |
---|
0:02:30 | many of them, like subspace methods, work well but have assumptions that are |
---|
0:02:35 | not general; for example in the one-snapshot or correlated-source case |
---|
0:02:38 | they fail. |
---|
0:02:39 | So the most general solution is the LASSO solution, |
---|
0:02:45 | which we are going to explain now. |
---|
0:02:47 | For LASSO-based estimation we need to discretize the space, |
---|
0:02:51 | and then |
---|
0:02:54 | we assume that the sources are on the grid; |
---|
0:02:57 | I mean, with a very fine grid we will get a good |
---|
0:03:02 | estimation, so this is the quantization noise introduced by discretizing the space. |
---|
0:03:06 | But then |
---|
0:03:09 | we introduce |
---|
0:03:11 | many imaginary sources, one for each grid point in the space. |
---|
0:03:15 | Most of them are sending zero, except the sources that we are interested in. |
---|
0:03:21 | So we are introducing a very long vector of sources, |
---|
0:03:25 | and most of its entries are zero, |
---|
0:03:28 | and then we can write a model, motivated by the previous one, |
---|
0:03:31 | as a linear combination of the sources. |
---|
0:03:36 | Now A is the matrix of all the steering vectors, one for |
---|
0:03:40 | each grid point, |
---|
0:03:41 | and it is very fat (wide). |
---|
0:03:43 | So even if there is no noise, this model cannot be solved directly, because A is not |
---|
0:03:48 | invertible. |
---|
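The overcomplete steering matrix A can be built as follows. A minimal sketch under my own assumptions (a ULA at half-wavelength spacing, a 1-degree grid; none of these values are stated in the talk):

```python
import numpy as np

def steering_dictionary(n_sensors, n_grid=181, spacing=0.5):
    """Stack one ULA steering vector per grid point of the angular space.

    With n_grid >> n_sensors the matrix is fat (wide), so A x = y alone
    is underdetermined even in the noiseless case."""
    grid = np.linspace(-90.0, 90.0, n_grid)   # candidate directions (degrees)
    m = np.arange(n_sensors)[:, None]
    phases = -2j * np.pi * spacing * m * np.sin(np.deg2rad(grid))[None, :]
    return np.exp(phases), grid

A, grid = steering_dictionary(n_sensors=8)
print(A.shape)  # far more columns than rows
```

Each column has unit-modulus entries, as expected of a pure phase (steering) vector.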
0:03:50 | So we need a constraint, obviously about the number of sources, |
---|
0:03:55 | a constraint saying that the signal is |
---|
0:04:00 | sparse and the number of nonzero elements is low. |
---|
0:04:04 | To express this in the general |
---|
0:04:06 | situation where we have many snapshots, the usual way is that |
---|
0:04:10 | we look at the |
---|
0:04:12 | extended sources |
---|
0:04:14 | direction-wise: we introduce |
---|
0:04:19 | the average power from each direction of the space, |
---|
0:04:22 | given by the average energy of the sources from that direction; for the directions where there is no source |
---|
0:04:27 | it will be zero. |
---|
0:04:29 | So the zero-norm of this vector, which is long itself, |
---|
0:04:33 | gives the number of sources. |
---|
0:04:37 | Then |
---|
0:04:38 | the maximum likelihood |
---|
0:04:39 | estimator can be written as |
---|
0:04:42 | this: this is the usual maximum-likelihood term |
---|
0:04:46 | related to the model, and the constraint on the number of nonzero elements. |
---|
0:04:52 | So it is hard to solve this problem; it is as hard as solving the usual maximum likelihood |
---|
0:04:57 | in nonlinear |
---|
0:04:58 | least squares. |
---|
0:04:59 | What |
---|
0:05:00 | has been done is that you can write it in this |
---|
0:05:04 | equivalent form and then |
---|
0:05:06 | replace the zero-norm with the one-norm. |
---|
0:05:10 | This is an approximation, but it works, |
---|
0:05:13 | and the reason, which is explained by this figure, |
---|
0:05:19 | is that |
---|
0:05:20 | in the optimization it is more probable to hit |
---|
0:05:25 | corners, like here, |
---|
0:05:27 | of the diamond shape |
---|
0:05:31 | that is introduced by the one-norm. |
---|
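The one-norm relaxation just described can be solved in many ways; the talk does not fix a solver, so here is a minimal sketch using iterative soft thresholding (ISTA), a standard first-order method for the complex-valued LASSO. Function name, step size, and the toy problem are my choices.

```python
import numpy as np

def ista_lasso(A, y, lam, n_iter=2000):
    """Minimize 0.5 * ||y - A x||_2^2 + lam * ||x||_1 for complex x by ISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = x - (A.conj().T @ (A @ x - y)) / L   # gradient step on the data term
        mag = np.abs(g)
        # complex soft threshold: shrink magnitudes, keep phases
        x = g * np.maximum(1.0 - (lam / L) / np.maximum(mag, 1e-12), 0.0)
    return x

# toy usage: a 2-sparse signal with more columns than rows
rng = np.random.default_rng(0)
A = (rng.standard_normal((8, 20)) + 1j * rng.standard_normal((8, 20))) / np.sqrt(2)
x_true = np.zeros(20, dtype=complex)
x_true[[3, 11]] = 1.0
x_hat = ista_lasso(A, A @ x_true, lam=0.05)
```

The recovered vector is sparse, with its large entries on the true support, though the one-norm penalty shrinks their amplitudes slightly (the bias discussed later in the talk).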
0:05:34 | So |
---|
0:05:34 | this is the formulation: |
---|
0:05:37 | here we have a λ, the one-norm scaled by it, and the |
---|
0:05:41 | number of nonzero elements here. |
---|
0:05:43 | We will see that λ controls the sparsity of the sources, so if you |
---|
0:05:48 | have a higher value of λ, |
---|
0:05:51 | then we have fewer sources |
---|
0:05:53 | active in the model. So from an estimation point of view, |
---|
0:05:58 | the question would be how to choose the proper value of λ. |
---|
0:06:03 | For example, here we have three |
---|
0:06:06 | sources, and |
---|
0:06:08 | probably this point should be chosen by the method. |
---|
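The effect of λ on sparsity is easiest to see in the special case of an orthonormal A, where the LASSO solution is in closed form, namely soft thresholding of A^T y; this toy setup (orthonormal matrix, three true sources, the specific λ values) is my own illustration, not from the talk.

```python
import numpy as np

def soft(v, t):
    """Soft threshold: zero out entries with |v| <= t, shrink the rest."""
    mag = np.abs(v)
    return v * np.maximum(1.0 - t / np.maximum(mag, 1e-12), 0.0)

rng = np.random.default_rng(1)
A = np.linalg.qr(rng.standard_normal((20, 20)))[0]   # orthonormal columns
x_true = np.zeros(20)
x_true[[2, 7, 13]] = [3.0, 2.0, 1.5]                 # three active "sources"
y = A @ x_true + 0.1 * rng.standard_normal(20)

# larger lambda -> fewer active components in the solution
for lam in (0.05, 0.5, 1.0, 2.0):
    x_hat = soft(A.T @ y, lam)                       # closed-form LASSO here
    print(lam, np.count_nonzero(np.abs(x_hat) > 1e-9))
```

Sweeping λ traces out exactly the tradeoff on the slide: too small keeps noise components active, too large kills true sources, and the proper value sits in between.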
0:06:12 | But before going to that, I need to |
---|
0:06:15 | review some points about the LASSO. |
---|
0:06:19 | So to |
---|
0:06:20 | estimate the number of sources, if you run the simulation you will get such an estimated spectrum over the |
---|
0:06:26 | space, |
---|
0:06:26 | and then we need a threshold to choose which ones are active and which ones are not; but this |
---|
0:06:31 | is not a problem, because the difference is very high. |
---|
0:06:34 | And |
---|
0:06:36 | the LASSO-based estimation is a good estimator of the directions, but it is biased for |
---|
0:06:41 | the source amplitudes. |
---|
0:06:44 | So the directions are |
---|
0:06:48 | the main output, but |
---|
0:06:50 | if you are also interested in the amplitudes, then you have to solve |
---|
0:06:54 | another estimation, again by |
---|
0:06:56 | assuming known directions, and it will be the |
---|
0:07:01 | usual |
---|
0:07:02 | least squares |
---|
0:07:04 | problem. |
---|
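The amplitude refit just mentioned, a least-squares solve restricted to the estimated directions, can be sketched as follows; the function name and the toy problem are my own, and for illustration the support is assumed already known and the data noiseless.

```python
import numpy as np

def debias(A, y, support):
    """Refit amplitudes by ordinary least squares on the active columns only,
    replacing the LASSO's shrunken (biased) amplitudes."""
    amp = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = amp
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 30)) + 1j * rng.standard_normal((8, 30))
x_true = np.zeros(30, dtype=complex)
x_true[[4, 21]] = [1.0, -0.5j]
y = A @ x_true                       # noiseless, for illustration
x_hat = debias(A, y, [4, 21])
```

In the noiseless case this recovers the amplitudes exactly; with noise it gives the unbiased least-squares fit conditional on the chosen support.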
0:07:05 | Now we come to the problem of regularization parameter selection. First, |
---|
0:07:11 | let us review some concepts in Bayesian estimation, especially the maximum a posteriori estimator. |
---|
0:07:16 | So if you write the log posterior, |
---|
0:07:19 | up to scale |
---|
0:07:22 | it can be written as |
---|
0:07:25 | the combination of the prior |
---|
0:07:27 | and the likelihood of the model we have. |
---|
0:07:29 | So there is the question of how to choose the prior. |
---|
0:07:32 | There are two ways to think about it. One way is that the prior |
---|
0:07:35 | is |
---|
0:07:38 | given by the physics; |
---|
0:07:41 | it is given by nature, and it is a part of the model. |
---|
0:07:44 | The other way is that the prior is a tool |
---|
0:07:48 | to impose some properties on the estimator. |
---|
0:07:52 | So how do we do it in the second way of thinking? |
---|
0:07:56 | To illustrate it, |
---|
0:07:59 | I will show this |
---|
0:08:00 | graph, in which |
---|
0:08:01 | one of the axes is |
---|
0:08:03 | the probability of error of an estimator |
---|
0:08:05 | when the true parameter values are θ1 and s1, |
---|
0:08:10 | and the other axis is the probability of error of this estimator |
---|
0:08:14 | when the true parameter values are θ2 and s2, and there is a |
---|
0:08:17 | tradeoff between these two; you cannot be good at both of them. |
---|
0:08:21 | And by choosing the prior you are actually moving along this curve, |
---|
0:08:25 | and probably you want to be here if you do not have any preference: |
---|
0:08:31 | you want to be good |
---|
0:08:32 | for all of them, and that is why we choose the prior. |
---|
0:08:36 | Usually a uniform prior works well, but not always. |
---|
0:08:41 | Here you can see that |
---|
0:08:43 | if we go to the asymptotic case, where for example the noise |
---|
0:08:46 | variance goes to zero or the number of snapshots goes to infinity, |
---|
0:08:49 | then it is possible to get a better estimator that dominates; I mean, |
---|
0:08:55 | at the same time you will get a good estimation for all the |
---|
0:08:59 | parameter values. |
---|
0:09:01 | So one hard example is model order selection, in which we are interested because we want to |
---|
0:09:06 | know how to choose the right λ, which is equivalent to choosing the |
---|
0:09:10 | right model order. |
---|
0:09:11 | And as you can see, |
---|
0:09:14 | choosing a uniform prior will overfit. There have been |
---|
0:09:19 | many discussions about how to choose a good prior, and there have been some |
---|
0:09:24 | good priors based on asymptotic cases, |
---|
0:09:27 | for example for |
---|
0:09:28 | a high |
---|
0:09:30 | number of snapshots, |
---|
0:09:31 | which are given by AIC or MDL. |
---|
0:09:35 | And here we introduce MDL, |
---|
0:09:37 | which chooses the best model |
---|
0:09:40 | as the model that describes the data in the least number of bits. |
---|
0:09:44 | And this can be expressed like this: one |
---|
0:09:47 | term is related to the number of bits |
---|
0:09:50 | of the parameters, and this one is the number of bits of |
---|
0:09:53 | the data |
---|
0:09:55 | with respect to the |
---|
0:09:56 | parameters we chose. |
---|
0:09:58 | And you can see there is a relation between |
---|
0:10:01 | maximum likelihood and |
---|
0:10:05 | MDL. And here, for our problem, it will be given by |
---|
0:10:08 | this formula. |
---|
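The DOA-specific MDL formula is on the slide, not in the transcript, so here is only the generic two-term criterion the speaker describes (data bits plus parameter bits), illustrated on polynomial order selection; the whole toy problem is my own assumption.

```python
import numpy as np

def mdl(y, x, order):
    """Generic MDL score for a polynomial model of the given order:
    (N/2) log(RSS/N) for the data term + (k/2) log N for the k parameters."""
    N = len(y)
    coeffs = np.polyfit(x, y, order)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = order + 1                      # number of free parameters
    return 0.5 * N * np.log(rss / N) + 0.5 * k * np.log(N)

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 200)
y = 1.0 - 2.0 * x + 0.5 * x ** 2 + 0.05 * rng.standard_normal(200)

scores = [mdl(y, x, p) for p in range(6)]
print(np.argmin(scores))   # MDL tends to select the true order (2) here
```

With many samples the parameter-bit penalty stops the fit from overreaching, which is exactly the asymptotic regime where the speaker says MDL works; with very few samples (one snapshot) the balance breaks down.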
0:10:10 | And |
---|
0:10:10 | you can see that |
---|
0:10:12 | MDL is known to work very well, but for a low number |
---|
0:10:16 | of snapshots it does not work. |
---|
0:10:19 | So, |
---|
0:10:22 | as we are here interested in the case of one snapshot, it |
---|
0:10:25 | tends to overfit the model again. |
---|
0:10:28 | And here we need to choose another prior. |
---|
0:10:32 | A good way to choose a prior is to look at the LASSO, because the LASSO works, |
---|
0:10:36 | so |
---|
0:10:36 | probably it gives us a good prior as well. So if we compare the |
---|
0:10:41 | maximum-likelihood and LASSO |
---|
0:10:43 | formulations, you will see that |
---|
0:10:45 | if we choose |
---|
0:10:49 | a Laplacian prior, then |
---|
0:10:51 | we can compare |
---|
0:10:55 | the maximum likelihood with the MAP, and then you can use the maximum-likelihood estimator |
---|
0:10:59 | for choosing λ. |
---|
0:11:02 | This gives this estimator, and it is actually |
---|
0:11:04 | discussed in many papers that |
---|
0:11:08 | this does not work. |
---|
0:11:09 | It actually works, but the problem is that it only works for high SNRs, where you are very close |
---|
0:11:15 | to the asymptotic case, |
---|
0:11:15 | where actually every prior may work. |
---|
0:11:18 | And as you can see, now it follows the right answer; the minimum is related to the |
---|
0:11:23 | true number of sources, |
---|
0:11:25 | but it loses the track at low SNR. |
---|
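The Laplacian-prior correspondence used here can be written out. Assuming a real Gaussian noise model with variance σ² and an i.i.d. Laplacian prior with scale b on the source vector (σ and b are my notation; the slide's symbols are not visible in the transcript):

```latex
\hat{x}_{\mathrm{MAP}}
  = \arg\max_{x}\; \log p(y \mid x) + \log p(x)
  = \arg\min_{x}\; \frac{1}{2\sigma^{2}}\,\lVert y - A x \rVert_{2}^{2}
    + \frac{1}{b}\,\lVert x \rVert_{1},
```

so, after scaling through by σ², the MAP estimate coincides with the LASSO with λ = σ²/b, and maximizing the likelihood over σ and b induces the maximum-likelihood choice of λ that, as stated, only behaves well at high SNR.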
0:11:27 | So what is the problem? The problem is that we are giving a high probability to |
---|
0:11:32 | undesired values. |
---|
0:11:33 | Actually, we are not interested in the high-dimensional space, but the Laplacian prior gives |
---|
0:11:39 | us, |
---|
0:11:41 | and this is the main problem, a probability that is mostly concentrated in the very high-dimensional space. |
---|
0:11:46 | So we remove that, |
---|
0:11:48 | and we constrain the |
---|
0:11:49 | probability density function |
---|
0:11:51 | of the prior |
---|
0:11:52 | to the desired low-dimensional spaces, and it is given by |
---|
0:11:58 | this summation, |
---|
0:11:59 | a summation where the sum is only over |
---|
0:12:02 | the low-dimensional spaces. |
---|
0:12:04 | And then, solving for the maximum likelihood, we get such expressions, where λ |
---|
0:12:09 | and σ are |
---|
0:12:12 | deterministic but unknown parameters, and then you can use |
---|
0:12:16 | the proper estimation of σ and λ, |
---|
0:12:18 | as explained before, |
---|
0:12:22 | to get an MDL-like criterion. |
---|
0:12:23 | And as a result you can compare it to MDL. |
---|
0:12:27 | So we have |
---|
0:12:29 | the MAP estimate, |
---|
0:12:31 | which is obtained |
---|
0:12:33 | by playing with the optimization; actually, |
---|
0:12:35 | we can have many equivalent forms of the optimization, and all of them can be |
---|
0:12:40 | interpreted with different priors |
---|
0:12:44 | by using this form. |
---|
0:12:47 | And there is a long discussion in the paper; I am not going into the details, but |
---|
0:12:52 | you can see that by playing with the optimization you will get different priors, but all of them |
---|
0:12:57 | are |
---|
0:12:58 | working very well. |
---|
0:13:00 | But as you can see, they are slightly different, related to |
---|
0:13:07 | the prior: |
---|
0:13:08 | you play and you change the prior in your criterion. |
---|
0:13:11 | But MDL does not work here; it |
---|
0:13:13 | probably works only for |
---|
0:13:15 | many snapshots. |
---|
0:13:16 | So, |
---|
0:13:18 | as the conclusion: we have the LASSO, and |
---|
0:13:23 | it works, |
---|
0:13:24 | and you can choose to |
---|
0:13:26 | use the |
---|
0:13:28 | usual model order selection, which is based on the asymptotic view, |
---|
0:13:32 | but you can also choose the LASSO as a |
---|
0:13:35 | prior |
---|
0:13:37 | to do the model order selection, and |
---|
0:13:38 | in this case it works much better. |
---|
0:13:40 | So thank you. |
---|
0:13:49 | (Question) |
---|
0:14:10 | I think the log probability should be maximized, the maximum of the probability, |
---|
0:14:15 | as in AIC and BIC, which are usually used; |
---|
0:14:17 | so you are using the negative log-likelihood, |
---|
0:14:21 | so you are taking the minimum? |
---|
0:14:29 | (Question) Since you are |
---|
0:14:30 | discretizing the space, |
---|
0:14:32 | why is it that you detect such clear peaks? |
---|
0:14:35 | Shouldn't that |
---|
0:14:38 | be affected by the choice of the grid, |
---|
0:14:45 | as in one of your previous slides? |
---|
0:14:47 | Yeah, |
---|
0:14:49 | actually, there are some proofs that |
---|
0:14:55 | the zero-norm and one-norm solutions match, and some people have worked to show that if there is a low |
---|
0:15:00 | correlation between the columns of your |
---|
0:15:03 | A matrix, in this case your discretized steering space, |
---|
0:15:06 | so if the mutual coherence condition holds. |
---|
0:15:09 | Actually, I |
---|
0:15:12 | am not sure there is a proof for this case, |
---|
0:15:15 | for this kind of matrix, |
---|
0:15:18 | but |
---|
0:15:20 | if you have a sparse signal, |
---|
0:15:22 | it still works. |
---|
0:15:24 | Okay. |
---|
0:15:26 | And do you have a question? |
---|
0:15:31 | I do have a question. |
---|
0:15:33 | It seems that you have some kind of mixture between, let's say, a deterministic |
---|
0:15:38 | approach and a Bayesian approach. So why don't you go |
---|
0:15:42 | the fully Bayesian way, assuming that |
---|
0:15:44 | the parameters |
---|
0:15:47 | come with some |
---|
0:15:49 | prior? |
---|
0:15:50 | One way, |
---|
0:15:52 | I mean, the first approach, that |
---|
0:15:57 | the prior is given by nature, |
---|
0:15:59 | is a way to look at it, |
---|
0:16:00 | but it does not work, and |
---|
0:16:02 | in the literature there are some papers that use, for example, such a prior, |
---|
0:16:06 | and it is Bayesian. |
---|
0:16:13 | But I want to say that |
---|
0:16:15 | there is no single way to think about it: |
---|
0:16:16 | you can go fully Bayesian, and you can always choose the prior you want; it is a |
---|
0:16:20 | choice at a higher level, |
---|
0:16:23 | to some extent. |
---|
0:16:27 | Okay, thank you. |
---|