0:00:19 | Thank you. |
---|
0:00:42 | Okay, so we have about fifteen slides to go through; let's get to it. |
---|
0:00:52 | This work uses sparsity techniques in order to attack the problem of CT reconstruction from noisy data. |
---|
0:01:02 | It was done jointly with my two PhD supervisors, Michael Elad and Michael Zibulevsky. |
---|
0:01:10 | A snapshot of the resulting technique is at the lower part of the slide. |
---|
0:01:18 | In contrast to the previous work here, the training on the data is done offline. Then we use a standard reconstruction algorithm such as filtered back projection, but before that, some noise reduction in the sinogram domain is performed, using learned sparse representations, and finally we get the reconstructed image. |
---|
0:01:47 | So that's a snapshot, and now I'm getting to the details. |
---|
0:01:52 | But first, let me recall the basic model of CT imaging. |
---|
0:01:58 | We have, in this two-dimensional case, a planar slice of a human head, which I am scanning with X-rays. |
---|
0:02:08 | Each ray has some initial intensity of I0 photons per unit time, and as the ray travels through the body, the photons are absorbed in the tissue. |
---|
0:02:25 | From the final number of photons we count, we can estimate the line integrals of the attenuation map. |
---|
0:02:32 | This is actually the Radon transform, the two-dimensional Radon transform of that attenuation map, and what we are measuring are approximations of the line integrals, up to a log transform. |
---|
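The measurement model just described can be sketched in a few lines of Python (the numbers below are illustrative, not from the talk): a ray with I0 photons passes through tissue segments, the detected count follows the Beer-Lambert law, and a log transform recovers the Radon-transform value.

```python
import math

# Sketch of the CT measurement model (illustrative values, not from the talk).
I0 = 10000.0                    # initial photon intensity per detector bin
mus = [0.2, 0.5, 0.1]           # attenuation coefficients along the ray
lengths = [1.0, 2.0, 1.5]       # lengths of the corresponding tissue segments

# Line integral of the attenuation map along the ray (a Radon-transform value).
line_integral = sum(m * l for m, l in zip(mus, lengths))

# Beer-Lambert law: photons surviving the passage through the body.
I = I0 * math.exp(-line_integral)

# The scanner counts I; the log transform recovers the line integral exactly
# in this noiseless setting.
estimate = math.log(I0 / I)
```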
0:02:47 | Now, in a low-dose scan, when we want to keep this initial number I0 of photons low, we get a deviation from the correct, ideal number of photons in the count. |
---|
0:03:01 | Therefore the measurements are modeled as Poisson random variables, with a parameter lambda which is both the expectation of the variable and its variance. |
---|
0:03:16 | So the lower the dose, the lower the number of photons we count, and the noisier the measurements. |
---|
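The Poisson model mentioned above is easy to verify numerically; the sketch below (with an arbitrary lambda) checks that the empirical mean and variance of simulated photon counts both approach the parameter lambda, so the relative noise level grows as the dose shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 500.0                              # expected photon count (arbitrary choice)
counts = rng.poisson(lam, size=200_000)  # simulated detector readings

# For a Poisson variable, expectation and variance are both lambda.
mean, var = counts.mean(), counts.var()

# Relative noise ~ sqrt(lambda)/lambda = 1/sqrt(lambda): lower dose, noisier data.
rel_noise = np.sqrt(var) / mean
```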
0:03:24 | So we have here a trade-off between the good image quality we want to get and the radiation dose we deliver to the patient. |
---|
0:03:34 | Fixing the dose, we are free to try to improve the image quality. |
---|
0:03:42 | A number of things can be done in order to reduce the dose in a CT scan. |
---|
0:03:46 | The first is to use algorithms which were specifically designed to acknowledge this statistical model. |
---|
0:03:54 | There is a MAP objective which is minimized iteratively, and usually those algorithms are quite slow, despite the fact that they truly improve on the performance of basic algorithms such as filtered back projection, or other similar direct methods, which are the ones currently used in clinical CT machines. |
---|
0:04:23 | Another way to drastically reduce the amount of X-ray radiation is not to illuminate the entire head: if you only want to look at a small region of the head, we can get away with radiating only that region, plus some small additional amount of data. |
---|
0:04:45 | Theoretically we must radiate the whole head in order to recover even one pixel, because the Radon transform is not local, but in practice we can get by with much less radiation and still get a good image of some region; there are special algorithms to do that. |
---|
0:05:05 | We want to consider the scenario where we do scan the whole head, and, in order to improve the result of filtered back projection, we perform some sinogram restoration, some processing of the raw data, before FBP is applied. |
---|
0:05:22 | So, just to recall the overview of the images we get along the way: from the scanned head we get the perfect sinogram, which is computed from photon measurements via a log transform. |
---|
0:05:37 | Then we have data-dependent noise, which is added because of the low photon counts in the system, and from here we want to recover the final image. |
---|
0:05:52 | Okay, now that we have defined the goal, let me get to the method with which we are attacking the problem. |
---|
0:05:59 | I will denote the noisy image with the symbols shown here on the slide. |
---|
0:06:11 | You know that when we try to decompose natural signals in frames such as wavelets, discrete cosines, or the Fourier transform, we get a rapid decay of the coefficients. |
---|
0:06:24 | Similarly, we want to use a model in which we assume that only a few non-zero coefficients are needed to represent the signal well. |
---|
0:06:33 | Here the signal is a small square patch of the image; we unfold it as a column vector, and we want to represent it as the product of a matrix D by the representation alpha, where D is redundant, so alpha is much longer, but we use only a few atoms from D. |
---|
0:06:55 | We want both a sparse vector, whose sparsity is measured by the L0 norm, that is, the number of non-zero elements, and a small residual. |
---|
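A minimal sketch of this sparse model, using a toy Orthogonal Matching Pursuit as the pursuit algorithm (the dictionary and the signal below are invented for illustration, not taken from the talk):

```python
import numpy as np

def omp(D, y, k):
    """Greedy OMP: select up to k atoms of D, refitting the coefficients
    by least squares after each selection."""
    support, coefs = [], np.zeros(0)
    residual = y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coefs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coefs
    alpha = np.zeros(D.shape[1])
    alpha[support] = coefs
    return alpha

# Redundant dictionary: 64 canonical atoms plus 64 random unit-norm atoms.
rng = np.random.default_rng(1)
R = rng.standard_normal((64, 64))
R /= np.linalg.norm(R, axis=0)
D = np.hstack([np.eye(64), R])

# A signal that is exactly 2-sparse over D.
alpha_true = np.zeros(128)
alpha_true[3], alpha_true[40] = 2.0, -1.5
y = D @ alpha_true

alpha = omp(D, y, k=2)   # recovers a 2-sparse representation of y
```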
0:07:11 | Based on this principle, there is a noise reduction technique for standard signal and image processing, developed by Elad and Aharon in 2006. |
---|
0:07:23 | They define an objective function which contains three basic terms. |
---|
0:07:29 | The first term is a fidelity term, which compares the noisy image, denoted f-tilde, with the image f which we are trying to recover. |
---|
0:07:40 | The second one requests that all the representations are sparse: the index j runs over all the small overlapping patches, the operator R_j extracts a small patch from f, and that patch is compared to a sparse coding of the form D times alpha_j. |
---|
0:08:03 | So alpha_j is a sparse representation, whose L0 norm is small, and also the residual, the L2 norm of the difference, is required to be small. |
---|
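In the notation of Elad and Aharon's 2006 paper, the three-term objective the speaker is describing reads roughly as follows (the symbols are my reconstruction; f-tilde is the noisy image and R_j extracts patch j):

```latex
\{\hat{D}, \hat{\alpha}_j, \hat{f}\} =
  \arg\min_{D, \{\alpha_j\}, f}\;
  \lambda \,\| f - \tilde{f} \|_2^2
  \;+\; \sum_j \mu_j \,\| \alpha_j \|_0
  \;+\; \sum_j \| D \alpha_j - R_j f \|_2^2
```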
0:08:15 | This equation is solved online: after the noisy image is acquired, the dictionary D and the sparse representations alpha are both learned only from the noisy image itself. |
---|
0:08:27 | There is also an offline version of the procedure, in which training images are used. |
---|
0:08:34 | So here is what is called the K-SVD algorithm: we minimize the second and third terms, the ones relevant for D and alpha. |
---|
0:08:44 | There are two alternating steps: we optimize for alpha, and then for the dictionary D. |
---|
0:08:51 | To compute the alphas given a dictionary D, we perform what is called sparse coding: we want to find the sparsest alpha such that the L2 norm of the residual is below some threshold epsilon_j. |
---|
0:09:10 | This is done using an approximate greedy algorithm, a pursuit algorithm such as Orthogonal Matching Pursuit (OMP), or some other algorithm. |
---|
0:09:21 | The second stage of the iteration is the dictionary update; it is not relevant to my talk, so I will skip it. |
---|
0:09:29 | Finally, when we have both the dictionary and the representations, we can compute the output image using the first and third terms, the ones relevant for the image. |
---|
0:09:38 | There is actually a closed-form solution to this equation, so this step is quite quick. |
---|
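The closed-form image update just mentioned can be illustrated in one dimension. The sketch below is my paraphrase of the standard K-SVD denoising step: the output is a weighted average of the noisy signal and the overlapping coded patches. Here the "denoised" patches are simply the extracted ones, so the update must return the input unchanged, which makes the sketch self-checking.

```python
import numpy as np

# 1-D sketch of the closed-form image update:
#   x = (lam*I + sum_j Rj^T Rj)^(-1) (lam*y + sum_j Rj^T p_j)
# where Rj extracts patch j and p_j is its denoised version.
n, psz, lam = 10, 3, 0.5
y = np.arange(n, dtype=float)                          # stand-in "noisy" signal
patches = [y[j:j + psz] for j in range(n - psz + 1)]   # identity "denoising" here

A = lam * np.eye(n)
b = lam * y.copy()
for j, p in enumerate(patches):
    R = np.zeros((psz, n))
    R[np.arange(psz), j + np.arange(psz)] = 1.0        # patch-extraction operator
    A += R.T @ R
    b += R.T @ p
x = np.linalg.solve(A, b)   # closed-form averaging of signal and patches
```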
0:09:47 | Okay. Now, this technique, which is by the way quite efficient for noise reduction in images, was used a couple of years ago by Liao and Sapiro to produce a reconstruction of an image from CT data. |
---|
0:10:04 | It is basically the same equation we saw a minute ago, except that the fidelity term compares the noisy sinogram g-tilde with the sought image f transformed by the Radon transform. |
---|
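As I read the description, the modified fidelity term replaces the image comparison by a sinogram one, with A the discrete Radon (projection) operator and g-tilde the noisy sinogram (again my reconstruction of the slide's equation):

```latex
\min_{D, \{\alpha_j\}, f}\;
  \lambda \,\| A f - \tilde{g} \|_2^2
  \;+\; \sum_j \mu_j \,\| \alpha_j \|_0
  \;+\; \sum_j \| D \alpha_j - R_j f \|_2^2
```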
0:10:20 | First I should say that this paper shows some very impressive results, on images with pronounced region structure, under severe conditions of partial data: they used very few projections. |
---|
0:10:40 | But in principle there are a few problems in these equations, which we wanted to try and repair in a different setting. |
---|
0:10:50 | First of all, notice the use of the plain L2 norm in the fidelity term, which effectively assumes that the noise is homogeneous. |
---|
0:11:01 | However, we know that in the sinogram domain the noise does depend on the data; moreover, we know exactly how: we know the variance of the noise, so this knowledge can be used. |
---|
0:11:13 | The second problem with the fidelity term is that we actually want a low error in the image domain, since it is the image we are after, instead of decreasing the error in the sinogram domain; and since the Radon transform is ill-conditioned, a small error in the sinogram does not tell us much about the image error we are going to see. |
---|
0:11:37 | The third problem, which is also a modeling problem, is that we cannot easily obtain these coefficients mu_j. They are related directly to the thresholds epsilon_j; we do not really use the mu_j's, but for each patch we need to know the expected error energy, so that we can put the correct threshold here. |
---|
0:12:06 | We do not know the noise statistics in the CT image domain, because after the reconstruction the noise is quite complicated, so we cannot compute these thresholds accurately; there are some estimation techniques, but they do not give very good results. |
---|
0:12:26 | So we are trying to solve these problems by shifting to a different setting, where we do have a filtered back projection; note that the previous concept does not use any reconstruction algorithm, just the minimizer of the equation. And we use an offline algorithm which does the dictionary learning with training data. |
---|
0:12:50 | We shift away from the image domain: the sparse coding is done over the sinogram domain, and we sparse-code the patches of the sinogram instead of the image. |
---|
0:13:02 | The penalty that we are seeking, though, should still be in the image domain, because there we can ask for some nice properties of the image that we want to obtain. |
---|
0:13:16 | We are using an offline training stage, which on the one hand requires some training images, but on the other hand makes the algorithm very fast, because all the heavy work is done once, when the scanner is initialized; the reconstruction stage is then almost as fast as FBP itself. |
---|
0:13:38 | So the algorithm uses a set of reference images, high-quality CT images, and also the corresponding sinograms. |
---|
0:13:49 | Such data can be obtained, for instance, using cadavers or phantoms, which can be scanned without any hesitation regarding the radiation dose. |
---|
0:14:00 | Okay, so the algorithm goes as follows. We use the K-SVD algorithm to train on a very similar equation as before, except that it extracts the patches of the sinogram and not of the image. |
---|
0:14:17 | Apart from that, it is just the same equation, except for the very important difference that we do not need different coefficients for different representations: instead, there is a weighted L2 norm with a special matrix W, which I will detail on the next slide. This helps us to normalize the noise over all the patches. |
---|
0:14:43 | So we do the sparse coding with a fixed threshold Q, which is simply the size of the patch, the number of pixels in it, and the noise is normalized correspondingly so that this works. |
---|
0:14:55 | Now, once that is done, we have a dictionary and a set of sparse representations that help us produce a good sparse coding of the sinogram. |
---|
0:15:09 | One could stop here and use this dictionary to improve sinograms in the future; but remember that we want the penalty to be in the image domain, and here we are actually not comparing to anything, we are just requiring that each patch have a good sparse code. |
---|
0:15:31 | So the second step is to take these representations and train a better dictionary with them. |
---|
0:15:39 | Now, this expression here is actually the restored sinogram: I take the sparse-coded patches, D times alpha_j, return them to their locations in the sinogram matrix, and finally the M-inverse factor accounts for the fact that the patches overlap. |
---|
0:15:57 | So this is the sinogram after I remove all the unnecessary noise. |
---|
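The restored-sinogram expression can be sketched in 1-D: each coded patch is returned to its location, and the diagonal matrix M counts how many patches cover each sample, so applying M-inverse turns the sum into an average over the overlaps. (Toy numbers; the coded patches here are just the extracted ones, so the restored signal equals the input.)

```python
import numpy as np

n, psz = 8, 3
g = np.linspace(0.0, 1.0, n)                         # stand-in sinogram column
coded = [g[j:j + psz] for j in range(n - psz + 1)]   # "sparse-coded" patches

acc = np.zeros(n)        # accumulates sum_j Rj^T (D alpha_j)
cover = np.zeros(n)      # diagonal of M = sum_j Rj^T Rj (overlap counts)
for j, p in enumerate(coded):
    acc[j:j + psz] += p
    cover[j:j + psz] += 1.0
g_restored = acc / cover  # M^{-1} applied to the accumulated patches
```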
0:16:04 | And then some fixed reconstruction algorithm is applied to it; we want the algorithm to be linear for these equations to be solvable, but in principle it can also be non-linear, if the resulting problem can still be solved. |
---|
0:16:18 | So it can be the filtered back projection, or some other linear algorithm, such as a Fourier-based algorithm for the inverse Radon transform. |
---|
0:16:33 | Since the reconstruction here is a linear function of the data provided to it, this L2 norm is easily minimized for the dictionary. |
---|
0:16:49 | All this is an offline training process, which prepares these two dictionaries, D1 and D2. |
---|
0:16:54 | In the second training stage we compare the reconstructed image with the original one, and here too we use a weighted L2 norm, which allows us to demand specific things about the reconstruction error that we observe in this term; the matrix Q allows us to do some things which I will show in a couple of minutes. |
---|
0:17:23 | Meanwhile, here is how we use this trained data: given a new noisy sinogram g-tilde, I compute the representations using sparse coding, with the same threshold Q and the dictionary D1. |
---|
0:17:40 | Then these representations are used, with D2, to encode the restored sinogram, by the same formula. |
---|
0:17:50 | Finally, when I have the restored sinogram, I apply the filtered back projection to compute the image. |
---|
0:17:58 | Now, what are the matrices W and Q I was talking about? |
---|
0:18:04 | In order to build a good W, one that normalizes the error in the sinogram domain so that all these patch differences behave the same, we need to recall the statistics of the noise. |
---|
0:18:23 | Using the statistical model introduced in the beginning, one can deduce that the variance, at each location of the sinogram, of the difference between the ideal sinogram and the measured one is approximately the inverse of the true photon count. |
---|
0:18:42 | We do not have the true count, but we can use a good approximation to it: the measured count. |
---|
0:18:50 | So when I multiply by one over this variance, I get uniform noise in the sinogram domain; summing over the Q pixels of a patch, I expect the error energy to be just Q. |
---|
0:19:07 | Therefore I can take the weight matrix W as a diagonal matrix containing the measured photon counts, in order to produce a correct sparse coding with the uniform threshold Q. |
---|
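The variance claim is easy to check numerically: after the log transform, the sinogram noise variance is approximately the inverse of the photon count (delta method), so weighting each sample by its count equalizes the noise. The numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
I0 = 1e5                                  # incident photons (illustrative)
line_integral = 2.0
lam = I0 * np.exp(-line_integral)         # true photon count along this ray

counts = rng.poisson(lam, size=100_000)   # measured counts
g = np.log(I0 / counts)                   # log-transformed sinogram samples

# Delta method: Var(g) ~ 1/lambda, so a diagonal weight matrix W holding the
# photon counts normalizes the residual variance to 1 per sample.
emp_var = g.var()
normalized_var = emp_var * lam            # should be close to 1
```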
0:19:22 | And now to the question of what kind of error measure I am using. In fact, with this slide I am hoping for your help, because this is something I stumbled upon that was not really considered in the literature, since not much supervised learning for CT reconstruction has been done so far. |
---|
0:19:46 | I would like to think of an error measure which can be designed using a good-quality reference image, and which would help me promote good properties in the reconstructed image. |
---|
0:19:59 | For instance, when I look at the head, I see regions of bone and regions of air, which are not necessary for my reconstruction if I am interested only in the soft tissues; and there is a great difference in the dynamic range of the CT values: here it is about one thousand five hundred, here it is minus one thousand, and the soft tissues are around plus or minus thirty. |
---|
0:20:23 | So in this special weighting map, by design, I remove those regions; it can be seen in this piecewise-constant phantom that those regions are completely removed from the map, so they do not contribute when I compute my error over the rest of the image. |
---|
0:20:46 | Also, I would like to emphasize the edges of the soft tissues, so that all the details here will be reconstructed with good quality. |
---|
0:21:02 | So overall I designed such a weighting map, and maybe there are other designs that can be proposed; with respect to this map, I can obtain visually better images. |
---|
0:21:16 | Finishing in just one minute, the last slides show some results. |
---|
0:21:22 | This is the piecewise-constant phantom, with some random ellipses strewn all about it. |
---|
0:21:28 | This is the reconstruction with standard filtered back projection, where its parameter, the cutoff frequency, was optimally chosen; it is compared to our algorithm, and to the double-dose filtered back projection result, which uses twice as many photons. |
---|
0:21:48 | By the signal-to-noise ratio measurements, one can observe that these are more or less the same. |
---|
0:21:55 | There are also some results on clinical images; this is a head section from the Visible Human source. |
---|
0:22:02 | This, again, is the FBP algorithm, which is a little bit noisy; here I can recover the fine details much better, which is also roughly on par with what we can do with the double dose in filtered back projection. |
---|
0:22:19 | So, to summarize: we can see that sparse representations can work well for computed tomography, and they can produce very good results while not taking much computational effort. |
---|
0:22:34 | They can be easily incorporated into existing clinical scanners, because it is just a matter of replacing the software. |
---|
0:22:43 | So that is it for now, and thank you very much. |
---|
0:22:50 | (Applause; the session chair invites questions.) |
---|
0:23:13 | Audience: (partially inaudible) Could you use this for reconstruction from partial data, radiating only a small neighborhood of the region of interest and only a few global projections, so that the low frequencies can also be restored? |
---|
0:23:40 | (Further discussion, partially inaudible.) |
---|
0:23:59 | You ask if my approach can be used for reconstruction from partial data: if I only measure the rays that pass through the region of interest, can I use this technique? |
---|
0:24:17 | I would say yes, because my technique is local: I only work locally in the sinogram, taking small patches and working on them. |
---|
0:24:27 | So even if I have partial data, and I have some method of dealing with it, like the extrapolation which is usually used, I can still work on the available partial data and do some preprocessing in order to get a better estimate before I apply that algorithm. So yes, you are right, those things can be combined. |
---|
0:24:53 | (Closing remarks from the session chair; partially inaudible.) |
---|