0:00:13 | alright |
---|
0:00:14 | welcome everybody, thanks for coming to my talk. As already mentioned, it is about a first-order performance analysis for structured least squares based ESPRIT |
---|
0:00:20 | and it is joint work between me and my colleagues |
---|
0:00:25 | now, for a brief motivation: I probably don't have to tell you that high-resolution parameter estimation is interesting for a number of applications |
---|
0:00:31 | in particular, ESPRIT-type parameter estimation schemes are often used |
---|
0:00:35 | because they are very simple and very flexible; they can be computed in closed form, so you don't need any peak search or anything |
---|
0:00:41 | and they still perform close to the Cramér-Rao bound |
---|
0:00:45 | and they are all, all these ESPRIT variants, based on the shift invariance equations, which are a set of overdetermined equations that are typically solved using least squares |
---|
0:00:55 | however, least squares is actually suboptimal |
---|
0:00:57 | it is suboptimal because it ignores the fact that there are also estimation errors in the subspace itself |
---|
0:01:03 | there is a better technique, called structured least squares, which takes these estimation errors into account and also explicitly exploits the structure of the shift invariance equations |
---|
0:01:11 | it was proposed in ninety-seven |
---|
0:01:14 | and so it comes with improved performance |
---|
0:01:16 | however, the problem is that so far structured least squares based ESPRIT was only evaluated using simulations, so there is no analytical statement about when it performs better and by how much. An analytical performance evaluation is therefore of course desirable |
---|
0:01:30 | the goal of this paper was to apply the same framework we previously used to analyse least squares based ESPRIT |
---|
0:01:38 | which is this paper here, which gives a first-order expansion of the signal subspace |
---|
0:01:44 | and in order to analyse structured least squares based ESPRIT, we use the same framework as before; the corresponding reference is listed down here |
---|
0:01:53 | so we analysed various versions, like standard ESPRIT, unitary ESPRIT, and more; you can even do it for non-circular ESPRIT and others |
---|
0:02:00 | but all of those are based on least squares |
---|
0:02:02 | the purpose here was to extend the analysis to incorporate structured least squares |
---|
0:02:06 | which brings me to the outline of the talk: after the big motivation we just had, I will go through a brief review showing you again the shift invariance equations |
---|
0:02:13 | and what structured least squares means, showing you the concept of the first-order perturbation of the SVD, which you might find interesting to use in other fields as well |
---|
0:02:21 | and then our earlier performance results for least squares based ESPRIT; the main part will focus on structured least squares, the derivation and the solution |
---|
0:02:28 | and then some simulation results |
---|
0:02:30 | so let's start with the review |
---|
0:02:32 | shift invariance: ESPRIT is based on the fact that you can divide the array into two identical subarrays, one and two, which deliver the same observations except for a phase shift |
---|
0:02:43 | which is encoded in this mu, called the spatial frequency |
---|
0:02:46 | and the spatial frequencies are linked to your parameters of interest; for instance, if you want to do direction of arrival estimation, mu and the direction of arrival theta have a simple relation |
---|
0:02:54 | now, you can use selection matrices J one and J two which operate on your array steering vector a to select the first and the second subarray |
---|
0:03:03 | and if you shift them against each other, you get the same observation except for a phase shift which contains the parameter of interest |
---|
0:03:08 | this is for a single source; for multiple sources, these J one and J two operate on the array steering matrix A, which contains all the array steering vectors |
---|
0:03:16 | and the parameters of interest are inside Phi, a diagonal matrix which contains the spatial frequencies |
---|
0:03:21 | now this is the shift invariance equation, a matrix equation; the problem is that the array steering matrix is of course unknown |
---|
0:03:26 | so, to get rid of the unknown array steering matrix |
---|
0:03:29 | what we do is take our observations; let's say X is a matrix which contains N subsequent observations over all M sensors |
---|
0:03:36 | and we just compute the SVD; from the SVD we get an estimate of the signal subspace, which is given by the d dominant left singular vectors |
---|
0:03:43 | and then you use the fact that the array steering matrix and this matrix U_s span the same column space, approximately, because there is noise |
---|
0:03:50 | so they are related via a transformation matrix T |
---|
0:03:52 | and this can be used to eliminate A; then we actually have the shift invariance equations, which need to be solved for Psi, the only unknown |
---|
0:04:00 | U_s we have an estimate for, and the eigenvalues of Psi encode the spatial frequencies |
---|
0:04:05 | right, this is the basis for ESPRIT; just a very quick review |
---|
0:04:08 | the main point here is that these shift invariance equations we need to solve are overdetermined |
---|
0:04:12 | and we have an estimate of the subspace, but it is not accurate |
---|
0:04:15 | right, so how do we solve them? |
---|
0:04:17 | typically, people solve them just using least squares |
---|
0:04:19 | and least squares just means minimizing the least squares fit between the left and the right hand side of the equation with respect to Psi |
---|
0:04:25 | this gives a very nice closed-form solution, a very simple one; you can use the pseudo-inverse |
---|
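A minimal numpy sketch of the LS-ESPRIT pipeline just reviewed (illustrative only, not the speaker's code; the array size, snapshot count, and spatial frequencies are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, d = 8, 100, 2                   # sensors, snapshots, sources (made up)
mu = np.array([0.5, 1.3])             # true spatial frequencies (made up)

# Uniform linear array: shifting the array by one element multiplies
# each column of A by exp(1j*mu) -- this is the shift invariance
A = np.exp(1j * np.outer(np.arange(M), mu))
S = (rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))) / np.sqrt(2)
X = A @ S + 1e-3 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# Signal subspace estimate: the d dominant left singular vectors
Us = np.linalg.svd(X)[0][:, :d]

# Selection matrices for the two maximally overlapping subarrays
J1, J2 = np.eye(M)[:-1], np.eye(M)[1:]

# Least squares solution of the shift invariance equation J1 Us Psi = J2 Us
Psi = np.linalg.lstsq(J1 @ Us, J2 @ Us, rcond=None)[0]

# Spatial frequencies from the angles of the eigenvalues of Psi
mu_hat = np.sort(np.angle(np.linalg.eigvals(Psi)))
```

At this noise level `mu_hat` recovers the true frequencies closely; note that only `U_s` and the selection matrices are used, never the unknown steering matrix.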
0:04:29 | but the problem is that you ignore the fact that you don't know U_s exactly; you actually implicitly assume that it is perfectly known, which is not true, since we know that there is an error in there |
---|
0:04:38 | and that is the idea of structured least squares |
---|
0:04:40 | structured least squares changes this cost function. What does it change? |
---|
0:04:43 | it incorporates, for each of the occurrences of the subspace matrix U_s, an additional Delta U_s, which explicitly models the fact that we have an estimation error in the subspace that we try to correct, so that the two sides of the equation align in a better way |
---|
0:04:56 | and then a regularization term is added to make sure that this update Delta U_s stays small, to penalize too large updates |
---|
0:05:03 | this is the cost function for structured least squares |
---|
0:05:05 | the nice thing is that it takes the errors in the subspace into account |
---|
0:05:08 | the drawback is that it is no longer linear in the unknowns, but quadratic |
---|
0:05:12 | so we typically solve it iteratively via a local linearization |
---|
0:05:15 | but it has been shown that only very few iterations are required; actually, in the high SNR regime only one iteration is required, so you can see it just as a correction step |
---|
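A sketch of one such linearized correction step (my own construction following the description, without the regularization term; the actual SLS implementation may differ). The residual of J1(Us+dU)(Psi0+dPsi) = J2(Us+dU) is linearized around the plain LS solution and solved for both updates at once via the pseudo-inverse:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, d = 8, 100, 2                   # made-up scenario
mu = np.array([0.5, 1.3])
A = np.exp(1j * np.outer(np.arange(M), mu))
S = (rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

Us = np.linalg.svd(X)[0][:, :d]       # estimated signal subspace
J1, J2 = np.eye(M)[:-1], np.eye(M)[1:]

# Initial guess: plain least squares on J1 Us Psi = J2 Us
Psi0 = np.linalg.lstsq(J1 @ Us, J2 @ Us, rcond=None)[0]

# One linearized step: dropping the quadratic dU*dPsi term, the residual
# is r + F z, where z stacks vec(dPsi) and vec(dU)
# (column-major vec, using vec(AXB) = (B^T kron A) vec(X))
r = (J1 @ Us @ Psi0 - J2 @ Us).flatten(order="F")
F = np.hstack([
    np.kron(np.eye(d), J1 @ Us),                   # acts on vec(dPsi)
    np.kron(Psi0.T, J1) - np.kron(np.eye(d), J2),  # acts on vec(dU)
])
z = -np.linalg.lstsq(F, r, rcond=None)[0]          # minimum-norm update
dPsi = z[: d * d].reshape(d, d, order="F")

Psi_sls = Psi0 + dPsi
mu_hat = np.sort(np.angle(np.linalg.eigvals(Psi_sls)))
```

The minimum-norm least squares solution plays the role of the pseudo-inverse of the linearized mapping; the remainder of `z` would be the subspace update if one ran a second iteration.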
0:05:24 | alright |
---|
0:05:24 | now, to come to the performance analysis for this, we need to look into the source of error: the source of error here is this Delta U_s, this error in the signal subspace, which we need to grasp analytically |
---|
0:05:36 | and the framework we are using for this is the one I just briefly want to review; it is also very simple |
---|
0:05:42 | you take your matrix of observations X naught, without any noise, where you have your true signal subspace and your true noise subspace if you decompose it via the SVD |
---|
0:05:50 | in the presence of noise you only have an estimate, so you can say that your estimated signal subspace is the true one plus an error term |
---|
0:05:57 | and this error is what we are trying to find an expansion for |
---|
0:05:59 | this error you can always expand into one part which lies in the noise subspace and one part which lies in the signal subspace; this is just because it lives in the full space, so you can always break it into these two subspaces |
---|
0:06:10 | and the interpretation of the first component: it is the error of the signal subspace which lies in the noise subspace; it really models how much of the noise leaks into the signal subspace, that is, how the subspace itself is perturbed |
---|
0:06:20 | whereas the second one is the error of the signal subspace inside the signal subspace; this one models how the individual singular vectors inside the signal subspace, the particular basis we choose, are perturbed |
---|
0:06:31 | so obviously the second one plays no role for ESPRIT, because the particular basis is irrelevant; only the first one matters |
---|
0:06:38 | but extensions exist |
---|
0:06:40 | we only use the first term because the second one is irrelevant here; if you want, you can easily incorporate the second one as well |
---|
0:06:45 | the first has been proposed in the reference shown here, and you can see it is a very simple expression: a first-order expansion in the noise, the additive perturbation |
---|
0:06:52 | and the second one was actually proposed by a colleague of mine |
---|
0:06:55 | but, as I said, we don't need it for this work |
---|
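The first-order expansion of the noise-subspace part of the subspace error can be checked numerically; a small sketch of such a check (my construction, not from the slides; the prediction used is the standard result Un Un^H N Vs Sigma_s^{-1} for a perturbed rank-d matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, d = 6, 20, 2                     # dimensions are made up for the check

# Build an exactly rank-d noise-free matrix X0 = Us Sigma_s Vs^H
G = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
U0, s0, V0h = np.linalg.svd(G)
Us, Un = U0[:, :d], U0[:, d:]          # true signal / noise subspaces
Ss = np.diag(s0[:d])
Vs = V0h[:d].conj().T
X0 = Us @ Ss @ Vs.conj().T

eps = 1e-6                             # small additive perturbation
Noise = eps * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
Uh = np.linalg.svd(X0 + Noise)[0][:, :d]   # estimated signal subspace

# First-order prediction of the noise-subspace part of the subspace error
pred = Un @ Un.conj().T @ Noise @ Vs @ np.linalg.inv(Ss)

# Align the estimated basis to the true one (removes phase ambiguity),
# then keep only the component of the error in the noise subspace
Q = Us.conj().T @ Uh
err = Un @ Un.conj().T @ Uh @ np.linalg.inv(Q)

rel = np.linalg.norm(err - pred) / np.linalg.norm(pred)   # should be O(eps)
```

The relative deviation between the measured and predicted error is of the order of the perturbation itself, i.e. the neglected terms are second order, which is exactly the claim of the framework.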
0:06:58 | this result was also used already in ninety-three to analyse standard ESPRIT; it gives a first-order expansion of the estimation error in the k-th spatial frequency using standard ESPRIT, which is this simple expression |
---|
0:07:09 | based on this work, we extended it |
---|
0:07:11 | we showed last year at ICASSP in Dallas that you can, in addition, perform the statistical expectation of this |
---|
0:07:17 | assuming white complex Gaussian noise |
---|
0:07:19 | what I should probably emphasise is that this framework is asymptotic |
---|
0:07:22 | in the effective SNR, so you don't need a large number of snapshots or anything |
---|
0:07:28 | you can have a single snapshot if you want, as long as the noise variance is low; and it needs no particular assumptions about the statistics: you don't need Gaussianity of the noise, you don't even need Gaussianity of the symbols, you just need the perturbation to be small |
---|
0:07:40 | but if you assume Gaussianity, then you can of course compute the expectation and you get the mean square error |
---|
0:07:45 | this is for least squares based standard ESPRIT, this is for least squares based unitary ESPRIT, and you can do more |
---|
0:07:48 | these are the previous results we have shown, and based on these we now try to extend them to incorporate structured least squares |
---|
0:07:54 | so, what is done here is that we first restrict our attention to a special case of structured least squares, which uses a single iteration |
---|
0:08:01 | and does not use any regularization |
---|
0:08:03 | the reason for this is that these assumptions are asymptotically optimal for structured least squares in the high SNR regime |
---|
0:08:09 | and since the performance analysis we do is asymptotic in the high SNR anyway, it is fine to assume this |
---|
0:08:15 | because asymptotically this is what you actually get |
---|
0:08:18 | under these assumptions, you can express the cost function and the solution in a very simple way |
---|
0:08:22 | you can say that the solution Psi for structured least squares is equal to the initial solution, given by least squares, plus an update term |
---|
0:08:29 | and this update term is the solution of this cost function |
---|
0:08:32 | which is of course quadratic, as I have said |
---|
0:08:34 | there is a term which does not depend on the update, there is a linear term, and then there is a quadratic term |
---|
0:08:39 | the quadratic term we neglect; that is the linearization, so we are back to a linear least squares problem |
---|
0:08:44 | and this linear least squares problem of course has a very simple solution |
---|
0:08:47 | this would be the update for Psi, and this would be the update for the subspace if you wanted to do a second iteration, which we actually don't need |
---|
0:08:56 | since we are only doing one |
---|
0:08:58 | so the main message here is that the update can actually be computed explicitly by taking this vector r_LS |
---|
0:09:04 | which is the vectorized version of the residual matrix after doing least squares |
---|
0:09:08 | and multiplying it by the pseudo-inverse of this matrix F |
---|
0:09:11 | which is the linearized mapping |
---|
0:09:12 | and for this we have to find a first-order expansion |
---|
0:09:15 | how have we done it? We did it by looking at both terms individually |
---|
0:09:19 | we start with the first term, the pseudo-inverse of that matrix F |
---|
0:09:22 | what we have observed is that you can express this matrix F hat as a constant matrix plus a matrix Delta F |
---|
0:09:29 | where the matrix F is constant in the sense that it does not depend on the perturbation itself |
---|
0:09:34 | so if you look at one realisation, one random sample of the perturbation, this term will be constant and this one will be linear, and together they give the matrix F hat |
---|
0:09:42 | and therefore if we look at it so inverse since this part is zero mean it's not very hard to |
---|
0:09:46 | see a that in versus actually equal to the sort inverse of this constant matrix independent of the perturbation |
---|
0:09:51 | loss a linear term that's a quadratic term plus i wouldn't for |
---|
0:09:54 | right we don't actually physically need to spend it it's fine to know that |
---|
0:09:58 | this constant term as we will see the in terms actually not need and this is simplified and is greatly |
---|
0:10:02 | and for the second term, the vector r_LS |
---|
0:10:05 | it is not difficult to see that this can be written as a linear expansion |
---|
0:10:10 | the error in the subspace, Delta U_s, multiplied by a matrix mapping the subspace error to the residual, plus a quadratic term which we ignore |
---|
0:10:18 | and also this error in the subspace, as a result of the framework shown before |
---|
0:10:22 | has a linear expansion in terms of the actual perturbation, the noise |
---|
0:10:25 | again in the form of a constant matrix times the noise, plus higher-order terms |
---|
0:10:30 | now we collect both |
---|
0:10:31 | if we put this and this together, we see that the vector r_LS has a linear expansion in the noise |
---|
0:10:37 | and now we combine these two results back into the original expression |
---|
0:10:41 | we can see that if you multiply this and this one out, we get a linear term, which is this constant term times the linear term |
---|
0:10:48 | plus a quadratic term, which is the linear term times the linear term; so the quadratic term we again neglect |
---|
0:10:54 | that is first order |
---|
0:10:55 | so this shows that we don't actually even need the linear expansion of the pseudo-inverse |
---|
0:10:59 | right, and then we get this very simple result |
---|
0:11:01 | which is pretty intuitive: if you start with the noise, you have one mapping matrix from the noise to the subspace error, one from the subspace error to the residuals, and then this matrix here |
---|
0:11:09 | which maps to the vector containing Delta Psi and Delta U_s |
---|
0:11:11 | but, as I said, we are only interested in this upper part |
---|
0:11:13 | so the final result for this upper part is actually this one again |
---|
0:11:17 | pretty intuitive: a concatenation of mappings |
---|
0:11:19 | and then you can plug it back into the original expansion of the estimation error, and you find a first-order expansion of the estimation error of the spatial frequencies again |
---|
0:11:27 | it is very simple, and its structure is very similar to the least squares expansion we have shown previously |
---|
0:11:34 | before we had this vector r_LS; now it is r_SLS, slightly different, but the form is still the same |
---|
0:11:39 | and again, if you want, you can form the mean |
---|
0:11:43 | the mean square error: if you assume zero-mean circularly symmetric white noise, you actually don't need Gaussianity for this |
---|
0:11:48 | and you get a mean square error that is again a very compact and simple expression |
---|
0:11:51 | now, what is all this good for? Why do we go through this analytical derivation, and what does it show us now that we have the result? |
---|
0:11:58 | what this is good for is that if you look at a specific case, we can simplify the expression so much that you actually gain insight |
---|
0:12:05 | into what the performance of the schemes is in a very specific setting |
---|
0:12:09 | and to demonstrate this point, I brought one example which is not in the paper, due to lack of space, but which I still think is interesting |
---|
0:12:15 | to kind of show what kind of applications this result has |
---|
0:12:18 | and the example is of course the simplest one you can think of, which is a single source |
---|
0:12:22 | if you consider a single source |
---|
0:12:24 | we have shown that for least squares based ESPRIT the mean square error has a very compact expression if you consider a uniform linear array of M sensors |
---|
0:12:32 | it has the effective SNR in front, and then it depends only on the number of sensors |
---|
0:12:37 | the Cramér-Rao bound also has a very simple expression |
---|
0:12:40 | which means that you can find the asymptotic efficiency; again, asymptotic in the effective SNR, so it still holds for a single snapshot |
---|
0:12:46 | it is given by this expression, so you have a closed-form expression for the asymptotic efficiency depending only on the number of sensors |
---|
0:12:52 | and it is exact, up to higher-order terms |
---|
0:12:55 | and what you can see here is that it basically starts at one for two and three sensors, and then it goes down |
---|
0:13:01 | so least squares based ESPRIT is not efficient for large arrays with a single source |
---|
0:13:05 | we did the same derivation for structured least squares |
---|
0:13:07 | and after a number of manipulations we found, again, a closed-form expression for the mean square error |
---|
0:13:12 | it is a bit more involved, but you can do the same thing: you can divide the Cramér-Rao bound by the mean square error, and you find a closed-form expression for the asymptotic efficiency, which is a ratio of polynomials |
---|
0:13:22 | interestingly, the first three coefficients agree; then they start to differ |
---|
0:13:25 | if you plot it on the same scale as before, it looks like it is almost equal to one, but it is actually not; if you zoom in a little bit, you can see that it starts at one |
---|
0:13:33 | goes down a little bit, and then goes back up |
---|
0:13:35 | we don't really have a physical explanation for that, but mathematically you can prove that this is the case, and with simulations you can verify it as well |
---|
0:13:42 | and you really have the values for the asymptotic efficiency as exact numbers, so this is a pretty valuable result |
---|
0:13:47 | it would be interesting to extend this to two sources, to see what the performance is in terms of separation, correlation |
---|
0:13:53 | and these kinds of parameters |
---|
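A Monte Carlo sketch of the single-source efficiency experiment just described (my construction, not the talk's simulation code; the CRB here is the textbook single-complex-sinusoid formula for a ULA, which is an assumption on my part, not taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(4)
M, N, mu, sigma2, trials = 8, 10, 0.7, 1e-2, 300   # made-up scenario
a = np.exp(1j * np.arange(M) * mu)                  # single-source steering vector
J1, J2 = np.eye(M)[:-1], np.eye(M)[1:]

sq_err = []
for _ in range(trials):
    s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    noise = np.sqrt(sigma2 / 2) * (
        rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
    X = np.outer(a, s) + noise
    us = np.linalg.svd(X)[0][:, :1]                 # dominant left singular vector
    psi = np.linalg.lstsq(J1 @ us, J2 @ us, rcond=None)[0][0, 0]
    sq_err.append((np.angle(psi) - mu) ** 2)

mse = float(np.mean(sq_err))

# Deterministic single-source CRB for a ULA (textbook single-sinusoid
# constant -- an assumption, not a value from the talk)
crb = 6 * sigma2 / (M * (M**2 - 1) * N)
efficiency = crb / mse                              # empirical asymptotic efficiency
```

Comparing `mse` against `crb` for growing `M` is the empirical counterpart of the closed-form efficiency curves discussed above.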
0:13:55 | alright |
---|
0:13:55 | just a few more words on the simulations: in the simulations we compare the empirical error, which you get by actually running ESPRIT on random data, computing the estimated spatial frequencies, computing the error and averaging it |
---|
0:14:08 | with the semi-analytical results, which still depend on the noise realisations, so we average those over the noise |
---|
0:14:13 | and with the fully analytical results, for which no simulations are actually needed at all |
---|
0:14:20 | right |
---|
0:14:21 | this first example here is for uncorrelated sources |
---|
0:14:24 | here the performance of unitary ESPRIT based on least squares versus structured least squares |
---|
0:14:30 | is very close, so it is of course interesting to see whether this really small gap can still be reliably predicted |
---|
0:14:35 | well, the answer is yes |
---|
0:14:37 | with our analytical results we become asymptotically exact; you see the same small gap |
---|
0:14:41 | and the semi-analytical results follow it as well |
---|
0:14:43 | another result: this is ESPRIT with sources which are very strongly correlated; there is a zero point nine nine correlation between any pair of sources |
---|
0:14:49 | now we have four curves: standard ESPRIT based on least squares versus structured least squares |
---|
0:14:54 | and unitary ESPRIT based on least squares versus structured least squares |
---|
0:14:56 | due to this high correlation the gap is bigger, so you can distinguish the curves more clearly |
---|
0:15:01 | again, the analytical results become accurate for high SNR; they predict the performance well |
---|
0:15:06 | and then we have the single source: for a single source we have an improvement, if you plot it versus the SNR |
---|
0:15:11 | here for eight sensors, between least squares and structured least squares |
---|
0:15:14 | and again the analytical results match |
---|
0:15:17 | that brings me to the conclusions |
---|
0:15:18 | what we presented is a first-order perturbation analysis of structured least squares based ESPRIT |
---|
0:15:23 | it is based on the performance analysis framework for the SVD, which is a very nice concept that can also be used in different fields |
---|
0:15:29 | the nice thing about it: it is asymptotic in the effective SNR, so either a small noise variance |
---|
0:15:33 | or a large number of snapshots |
---|
0:15:34 | it can be both, whatever you want |
---|
0:15:36 | and it is explicit, which means you don't need any assumptions about the statistics: you need the noise to be zero mean, but you don't need the sources to be Gaussian, and you don't need the noise to be Gaussian |
---|
0:15:44 | you just need it to be small |
---|
0:15:46 | we have also shown mean square error results, assuming zero-mean circularly symmetric white Gaussian noise |
---|
0:15:50 | and we have also shown explicit expressions for a single source, where you can actually gain insight |
---|
0:15:55 | this concludes my talk, thank you |
---|
0:16:03 | we have time for questions |
---|
0:16:14 | yeah |
---|
0:16:15 | there is a relation but there is also a difference |
---|
0:16:19 | okay, here: in total least squares you allow for an error |
---|
0:16:23 | in this expression, in your mapping matrices, let's say |
---|
0:16:27 | but you assume that these are independent errors on the left and the right hand side of the equation |
---|
0:16:32 | and that is why this one is called structured least squares: total least squares would model two independent errors for the left and the right hand side, which would be different |
---|
0:16:38 | but actually there is a structure in the shift invariance equations which tells you that these errors are not independent; they are almost the same, except for the selection matrices |
---|
0:16:45 | and this structure should be incorporated into the solution |
---|
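To make the distinction concrete, a small sketch contrasting LS with classic TLS on the shift invariance equation (my illustration, not from the talk; TLS here models two independent errors on the two sides, which is precisely the assumption SLS replaces with the shared-subspace structure):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, d = 8, 100, 2                   # made-up scenario
mu = np.array([0.5, 1.3])
A = np.exp(1j * np.outer(np.arange(M), mu))
S = (rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
Us = np.linalg.svd(X)[0][:, :d]
J1, J2 = np.eye(M)[:-1], np.eye(M)[1:]
P, Q = J1 @ Us, J2 @ Us

# LS: all the error is attributed to the right-hand side Q
Psi_ls = np.linalg.lstsq(P, Q, rcond=None)[0]

# TLS: two *independent* errors on P and Q, via the SVD of [P Q];
# in reality both are built from the same perturbed Us
V = np.linalg.svd(np.hstack([P, Q]))[2].conj().T
Psi_tls = -V[:d, d:] @ np.linalg.inv(V[d:, d:])

mu_ls = np.sort(np.angle(np.linalg.eigvals(Psi_ls)))
mu_tls = np.sort(np.angle(np.linalg.eigvals(Psi_tls)))
```

Both solvers recover the frequencies here; the point of the answer above is that neither error model matches the true one, since `P` and `Q` share the same subspace error up to the selection matrices.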
0:16:51 | [inaudible follow-up from the audience] |
---|
0:17:16 | it doesn't have to be; we don't constrain it explicitly to be unitary |
---|
0:17:20 | it could be non-unitary |
---|
0:17:25 | well, for ESPRIT you don't need the constraint that it is unitary |
---|
0:17:29 | you can use any subspace estimate; it doesn't have to be unitary |
---|
0:17:39 | you can describe a subspace using any basis; any basis is equivalent; typically you use the SVD because it gives you an orthonormal basis, which is nice to work with, and it is simple |
---|
0:17:48 | but you could use any other basis and it would have no impact on the performance |
---|
0:17:51 | any basis is fine |
---|
0:17:53 | actually, when we first started deriving this, in the first version |
---|
0:17:56 | we had defined the subspace in a way that was non-unitary, but then when we corrected it, we saw that it had no impact on the performance, which is |
---|
0:18:02 | just as it should be |
---|
0:18:03 | [audience follow-up, largely inaudible, about constraining the subspace estimate to be unitary and what one would then minimize] |
---|
0:18:27 | you could do it, but what would be the advantage? |
---|
0:18:30 | I mean, we don't need to assume unitarity for deriving ESPRIT or for using it |
---|
0:18:37 | but it would be possible |
---|
0:18:40 | yeah, you have a question? |
---|
0:18:43 | [question about whether the analysis also holds when the number of snapshots goes to infinity, rather than the SNR] |
---|
0:18:51 | so, what we actually need; let me go back to this slide |
---|
0:18:57 | what we need is that this term in the frame here goes to zero, where P_T is the power of your source, sigma squared is the noise variance, and N is the number of snapshots |
---|
0:19:08 | so if you have a finite SNR and you let N go to infinity, it works in exactly the same way |
---|
0:19:13 | as if you let the noise variance go to zero, or the source power go to infinity |
---|
0:19:27 | yes, for a single source; only for a single source |
---|
0:19:30 | it is not as bad for multiple sources |
---|
0:19:34 | it was a surprise when we saw it for the first time, but you can verify it using simulations |
---|
0:19:41 | it is surprising because even the low-resolution techniques are asymptotically optimal for a single source, but ESPRIT is not |
---|
0:19:53 | when there are more sources, it is not as bad; it is very hard to find these expressions explicitly for more sources, because the number of terms gets very large |
---|
0:20:00 | we tried to simplify it, but we actually didn't get to the final result |
---|
0:20:03 | but from simulations, my experience is that this is kind of the worst case in terms of comparing it with the bound |
---|
0:20:12 | it also disappears, of course, if you replace least squares with structured least squares |
---|
0:20:15 | which was the point here |
---|
0:20:18 | and it is just one correction term; it is not an iterative procedure you have to apply; already with a single iteration it basically disappears |
---|
0:20:26 | it is something simple, just one correction step |
---|
0:20:34 | ah yes, the question |
---|
0:20:36 | it seems that structured least squares is better than plain least squares |
---|
0:20:40 | especially when the sources are correlated, right? |
---|
0:20:44 | so, okay, translated: the estimation of the subspace is worse when the sources are correlated |
---|
0:20:51 | so is structured least squares better there, because this is its main point, to improve the subspace? |
---|
0:20:58 | the problem is that |
---|
0:21:00 | there is not always a one-to-one mapping between a better subspace and a better performance of ESPRIT |
---|
0:21:05 | for instance, as we saw, for a single source we get a better subspace, but the mean square errors of ESPRIT are both almost exactly the same |
---|
0:21:12 | sometimes you get a better subspace, but it does not help you in terms of your mean square error |
---|
0:21:17 | I would say that there should probably be some connection, but I don't know whether it is a weak link or a strong one; this is something we still have to |
---|
0:21:26 | which |
---|
0:21:27 | so let's |
---|