0:00:13 | Good morning, everyone. This talk is about onboard lossy compression for hyperspectral and ultraspectral images. |
0:00:21 | It is joint work with colleagues at the University of Siena. |
0:00:25 | I will first give some motivation about onboard compression and its specific problems, then describe the proposed compression algorithm, provide some experimental results on hyperspectral and ultraspectral images, and finally draw some conclusions. |
0:00:42 | Hyperspectral images are a collection of pictures of the same scene taken at several different wavelengths. |
0:00:48 | That is quite a lot of data, but when it comes to compressing those data onboard a satellite, we are faced with the problem that we do not have many computational resources to actually do the compression. |
0:00:58 | So the first thing we need is low complexity, which is quite different from compression on the ground for archival purposes. |
0:01:05 | The second important thing is that, unlike typical consumer digital cameras, where a two-dimensional detector array takes the picture, for hyperspectral imagers we just have a single one-dimensional array that acquires one line of the image with all of its spectral channels, that is, all the different wavelengths. |
0:01:25 | The other lines of the image are formed by the motion of the aircraft or satellite, so they are acquired over time. |
0:01:32 | What this means is that when we do the compression we do not have the whole image available, but just a few lines of it with all the spectral channels, so we need to perform the compression using only the few lines available at any given time. |
0:01:46 | Of course we want to compress as well as we can, so we want state-of-the-art compression efficiency. |
0:01:52 | For hyperspectral applications we need to cover a relatively large range of bit rates, typically from around 0.5 up to 3 or 4 bits per pixel. |
0:02:01 | That should be compared with a sensor bit depth that is typically between 12 and 16 bits per sample. |
0:02:07 | So we have to cover both the low-bit-rate and the high-bit-rate regimes. |
0:02:11 | And finally, we need some error containment: the compressed data packets are downlinked over a communication channel which is subject to occasional packet losses, and we do not want a single lost packet to disrupt the reconstruction of the whole image. |
0:02:27 | So, there are several options for performing compression of these three-dimensional data sets, that is, hyperspectral or ultraspectral images. |
0:02:37 | The most popular approach uses three-dimensional transform coding, for example JPEG 2000 Part 2, which of course provides a multi-component transformation where you can use either a spectral wavelet transform or an arbitrary spectral transformation, and then apply JPEG 2000 to each of the transformed spectral channels. |
0:02:56 | We can use wavelets, the Karhunen-Loève transform, nonlinear transforms, whatever. |
0:03:00 | This works very well at low rates; the problem with this approach is the high complexity. |
0:03:05 | The complexity comes from the transform itself and from the coding and rate-distortion optimization in JPEG 2000, where you have to encode everything and then run the post-compression rate-distortion optimization; that is much too complex for onboard use, although it does work well for archival on the ground. |
0:03:24 | The second issue is that if we want to use JPEG 2000 onboard, we do not have the whole image available for the compression, so the spatial transformation has to operate on just a few lines of the image at a time, which is possible with JPEG 2000 using the line-based transformation. |
0:03:41 | But then the rate-distortion optimization can no longer be done in a global way; it has to be done locally, on just a few lines at a time, and there is a big performance penalty compared to a globally optimal rate-distortion allocation. |
0:03:56 | The other approach is to use prediction techniques, that is, three-dimensional spatial and spectral prediction. |
0:04:02 | Prediction has been used for a long time for lossless and near-lossless compression. |
0:04:07 | Near-lossless compression is typically used for high-quality applications where you want the maximum absolute error between the decoded and the original image to be bounded by a user-selected value. |
0:04:18 | That works very well at high rates, but it does not work as well at low bit rates. |
0:04:25 | Moreover, three-dimensional prediction is usually coupled with scalar quantization and Golomb-like entropy coding, and it is clear that we cannot go below one bit per pixel: the shortest codeword a Golomb code can provide is just one bit, so going below one bit per pixel is a problem. |
0:04:42 | So what we propose for onboard compression is based on an approach that uses a three-dimensional spatial and spectral predictor, which keeps the low complexity that we need for onboard compression. |
0:04:55 | But then we are faced with the problem of improving the performance at low bit rates, where existing schemes just do not perform well. |
0:05:02 | Since we do not really need to perform near-lossless compression, we move to truly lossy compression of the prediction residuals. |
0:05:11 | In order to do that we improve the quantization stage, we do not use a simple scalar quantizer, and we add rate-distortion optimization to the whole scheme. |
0:05:23 | So, this is how we do it. |
0:05:26 | The prediction stage performs the prediction independently on 16x16 blocks of samples. |
0:05:34 | The picture shows an image divided into 16x16 blocks, and for every block we look at the co-located blocks in the different spectral channels; looking along the wavelength dimension, we predict the current block from the co-located, already decoded block in the preceding spectral channel. |
0:05:54 | This is quite different from the kind of prediction that is usually employed in hyperspectral image compression, which is pixel-by-pixel rather than block-by-block, but as we will see it leads to a very efficient scheme. |
0:06:08 | Essentially, as I will detail in the next slide, we compute a linear predictor that uses the previous band to predict the current block, and then we compute the prediction residual. |
0:06:17 | A nice thing about this is that it provides spatial error containment: if the compressed data for one block are lost, |
0:06:28 | the loss affects the blocks in the same spatial position in the following spectral channels, but it does not affect any other spatial location. |
0:06:39 | The prediction itself is actually quite simple. We call x the vector of samples of the current 16x16 block, in lexicographic ordering, and r the vector of samples of the reference block, which, as I said, is the co-located, already decoded block in the previous spectral channel. |
0:06:55 | Then we compute a least-mean-square predictor, which is defined by two parameters, μ and α: μ is the mean value of the current block, and α is the least-mean-square parameter that predicts the current block from the reference block. |
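The talk names only the two predictor parameters, so the following is a minimal sketch that assumes the common form x_hat = μ + α (r − mean(r)), with α estimated by least squares over the block; the variable names and the exact estimator are illustrative, not necessarily the authors' formulation.

```python
import numpy as np

def block_predictor(x: np.ndarray, r: np.ndarray):
    """Least-squares prediction of the current 16x16 block x from the
    co-located, already-decoded reference block r of the previous spectral
    channel (both flattened in lexicographic order).

    Assumed predictor form (a sketch): x_hat = mu + alpha * (r - mean(r)).
    """
    x = x.astype(np.float64).ravel()
    r = r.astype(np.float64).ravel()
    mu = x.mean()                          # mean of the current block
    rc = r - r.mean()                      # zero-mean reference block
    denom = float(np.dot(rc, rc))
    alpha = float(np.dot(x - mu, rc)) / denom if denom > 0.0 else 0.0
    x_hat = mu + alpha * rc                # predicted block
    return mu, alpha, x_hat, x - x_hat     # last item: the prediction residual
```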
0:07:11 | Alright. The first ingredient needed to move from near-lossless to truly lossy compression is the quantization. |
0:07:20 | Typical near-lossless compression uses uniform scalar quantization of the residuals, which is almost optimal at high bit rates but far from optimal at low bit rates. |
0:07:29 | For low-bit-rate compression it is customary to use a quantizer with a dead zone, which creates long sequences of zeros that are then packed very effectively by the entropy coder; this is optimal at low rates but not at high rates. |
0:07:42 | To find something that works well at all rates, we decided to use a kind of quantizer called the uniform threshold quantizer, or UTQ, which is slightly more complex than the uniform quantizer with dead zone but is nearly optimal at all rates. |
0:07:58 | The UTQ is actually very simple: it is a quantizer in which all the decision intervals have the same size, so computing the codeword is done in much the same way as for a classical uniform quantizer. |
0:08:15 | The difference lies in the fact that the reconstruction level is not taken as the midpoint of the quantization interval, but rather as its centroid. |
0:08:24 | Since we are applying this to the prediction residuals, we assume that the residuals follow a two-sided exponentially decreasing distribution, that is, a Laplacian distribution, and we compute the actual reconstruction levels as the centroids under this distribution. |
0:08:44 | If you look at this picture, you can see the different quantization intervals; instead of taking the midpoint of each interval as the reconstruction point, |
0:08:58 | since under the assumed distribution low values of the prediction residual are more probable than high values, |
0:09:06 | we add a correction term to the reconstruction point that biases the reconstruction towards zero, |
0:09:12 | all the more so when the quantization index is low, that is, close to zero, so that the quantization error is lower on average. |
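A minimal sketch of the UTQ idea described above: uniform decision intervals with reconstruction at the centroid of a Laplacian model instead of the midpoint. The step size `delta` and the Laplacian scale `b` are assumed to be known or estimated elsewhere; note that with a pure memoryless Laplacian the centroid correction is the same for every nonzero interval, so the index-dependent correction the speaker mentions would come from a more refined model than the one sketched here.

```python
import math

def utq_index(e: float, delta: float) -> int:
    """Uniform threshold quantizer: every decision interval has width delta,
    so the index is obtained exactly as with a uniform midtread quantizer."""
    return int(round(e / delta))

def utq_reconstruct(q: int, delta: float, b: float) -> float:
    """Reconstruct at the centroid of the decision interval under a two-sided
    exponential (Laplacian) model with scale b, instead of at the midpoint
    q * delta.  Closed form for the centroid of (1/b) exp(-x/b) on
    [a, a + delta]:  a + b - delta / (exp(delta / b) - 1)."""
    if q == 0:
        return 0.0                          # the zero interval is symmetric
    a = (abs(q) - 0.5) * delta              # lower edge of the interval
    centroid = a + b - delta / math.expm1(delta / b)
    return math.copysign(centroid, q)
```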
0:09:25 | The second ingredient, and the most important one, is the rate-distortion optimization, and this is where it really helps to use square blocks for the prediction. |
0:09:33 | The idea here is essentially similar to the skip mode in video compression: sometimes we find a 16x16 block that can be predicted very well from its reference block, |
0:09:46 | and in that case we do not encode the prediction residual at all; we rather skip the encoding of the prediction residual. |
0:09:51 | That way we save a lot of bits in the process and just signal to the decoder that the decoded signal for this block is simply the prediction, which the decoder can compute on its own. |
0:10:01 | In particular, we compute the prediction, we calculate the variance D of the prediction residual, and we compare this variance with a threshold. |
0:10:11 | If D exceeds the threshold, it means that the predictor is not good enough for the current block, so we do the classical encoding of the prediction residual. |
0:10:21 | Otherwise, if D is below the threshold, we simply write the prediction parameters for that block into the file, but no prediction residuals, |
0:10:29 | so that the decoder will read the mean and the prediction parameters from the file, compute the prediction, and use the prediction as the decoded block. |
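A minimal sketch of this block-level skip decision, building on the predictor and quantizer sketches above; the `bitstream` object, its write_* methods, and the threshold value are placeholders, not the authors' actual syntax.

```python
import numpy as np

def encode_block(x, r, delta, b, skip_threshold, bitstream):
    """Skip-mode decision sketch: if the residual variance D is below a
    threshold, send only the prediction parameters; otherwise also quantize
    and entropy-code the residuals."""
    mu, alpha, x_hat, residual = block_predictor(x, r)
    D = float(np.var(residual))              # variance of the prediction residual
    bitstream.write_params(mu, alpha)        # always sent, so the decoder can re-predict
    if D < skip_threshold:
        bitstream.write_flag(skip=True)      # decoder output = prediction only
        return x_hat                         # locally decoded block
    bitstream.write_flag(skip=False)
    decoded = x_hat.copy()
    for i, e in enumerate(residual):         # quantize and code every residual sample
        q = utq_index(e, delta)
        bitstream.write_residual(q)          # e.g. with a Golomb power-of-two code
        decoded[i] += utq_reconstruct(q, delta, b)
    return decoded
```

The locally decoded block (prediction plus reconstructed residuals, or the prediction alone in skip mode) is what serves as the reference for the next spectral channel, so that encoder and decoder stay in sync.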
0:10:41 | The entropy coding of the quantized prediction residuals is done using Golomb power-of-two codes. |
0:10:47 | This is a very typical choice in compression for satellite imaging, because Golomb power-of-two codes are much simpler than any other code, especially arithmetic coding. |
0:10:59 | They are not as powerful, but they are a good compromise between performance and complexity. |
0:11:05 | We compute the best coding parameter for every sample, based on the average magnitude of the previous prediction residuals over a window of 32 samples; so it is not done block by block but sample by sample. |
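A minimal sketch of Golomb power-of-two (Rice) coding with the parameter chosen per sample from a window of recent residual magnitudes, as described above; the parameter rule, the zig-zag mapping of signed residuals, and the unary convention are common choices assumed here, not taken from the talk.

```python
from collections import deque

def rice_parameter(window) -> int:
    """Pick the Rice parameter k roughly as log2 of the mean magnitude of the
    recent residuals (a 32-sample window in the talk); a common heuristic."""
    mean = sum(window) / max(len(window), 1)
    k = 0
    while (1 << (k + 1)) <= mean:
        k += 1
    return k

def zigzag(e: int) -> int:
    """Map a signed residual to a non-negative integer: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(n: int, k: int) -> str:
    """Rice code of a non-negative integer: unary quotient (q ones and a
    terminating zero, one common convention) followed by k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, "b").zfill(k) if k else "")

def code_residuals(residuals) -> str:
    """Code a stream of residuals, adapting k over the last 32 magnitudes."""
    window, out = deque(maxlen=32), []
    for e in residuals:
        k = rice_parameter(window)
        out.append(rice_encode(zigzag(e), k))
        window.append(abs(e))
    return "".join(out)
```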
0:11:19 | Alright, here are some results for the proposed algorithm. |
0:11:24 | We tried it on several different images; I will show results for images from two different datasets. |
0:11:30 | The first is an AVIRIS image; AVIRIS is an imaging spectrometer that is flown on an aircraft, |
0:11:39 | and these images have 224 spectral channels and a spatial size of 680 by 512 pixels. |
0:11:47 | They are the raw images as acquired by the sensor; they have no calibration whatsoever and no corrections. |
0:11:56 | Images taken by AVIRIS are typically used for classification applications. |
0:12:03 | The second image is from the IASI ultraspectral sounder, which is used for atmospheric studies. |
0:12:12 | These images have a much smaller spatial size, just 135 by 90 pixels, but they have a very large number of spectral channels, 8,461. |
0:12:25 | As a quality metric we look at the peak signal-to-noise ratio, and we compare the performance of the proposed algorithm with two other algorithms. |
0:12:33 | The first is JPEG 2000 Part 2 with a spectral discrete wavelet transform. |
0:12:39 | In this case we perform the full three-dimensional rate-distortion optimization and we are not using the line-based transform, so the result shown for JPEG 2000 is not realistic with respect to what could actually be run on the satellite; it is a sort of upper bound on the performance of JPEG 2000. |
0:12:56 | The second algorithm is the near-lossless compression scheme that uses exactly the same predictor and the same entropy coder, but without the UTQ quantizer and without the rate-distortion optimization: just a block-based DPCM with uniform quantization and entropy coding of the prediction residuals. |
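PSNR for high-bit-depth imagery is usually defined against the peak value of the data format; the exact definition used in these plots is not stated in the talk, so this is a minimal sketch assuming 16-bit samples (the talk quotes sensor depths of 12 to 16 bits per sample).

```python
import numpy as np

def psnr(original: np.ndarray, decoded: np.ndarray, bit_depth: int = 16) -> float:
    """Peak signal-to-noise ratio in dB for integer imagery of the given bit depth."""
    err = original.astype(np.float64) - decoded.astype(np.float64)
    mse = float(np.mean(err ** 2))
    peak = (1 << bit_depth) - 1
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)
```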
0:13:16 | Here are the results. This curve is JPEG 2000 with the spectral wavelet transform, and this one is the near-lossless compression algorithm. |
0:13:26 | It is known that near-lossless compression is better than transform coding at high bit rates, and you can see that here: the performance gap with respect to JPEG 2000 gets large above two bits per sample. |
0:13:38 | But at low rates it is not as good, essentially for two reasons. |
0:13:42 | One is that the larger the quantization step size, the worse the quality of the reference used for the prediction, and this drags the performance down at low bit rates. |
0:13:54 | Moreover, this algorithm is not able to achieve rates below one bit per pixel, because we are using a Golomb code whose minimum codeword length is one bit; there is just no way to go below that. |
0:14:06 | The proposed algorithm seems to bring the best of both worlds here: it is better than JPEG 2000 whenever the bit rate is larger than about 0.3 or 0.35 bits per sample, so even at low bit rates the rate-distortion optimization works quite nicely. |
0:14:22 | Its performance tends to that of the near-lossless compression at high rates, and that is reasonable, because at high bit rates the algorithm essentially never selects the skip mode for any block of the image, and the uniform threshold quantizer tends to the uniform scalar one, |
0:14:39 | so the two algorithms essentially become pretty much the same. |
0:14:44 | We have similar results for the IASI image. |
0:14:49 | Here JPEG 2000 is sometimes a little bit better and is sometimes outperformed by a small margin; the proposed algorithm is not quite as good on this image, but essentially gives comparable performance. |
0:15:01 | And it is pretty much the same story: near-lossless compression is not as good at low bit rates and becomes better at high bit rates. |
0:15:08 | So the proposed algorithm is still a little below JPEG 2000 for this image, but recall that JPEG 2000 here is using the full three-dimensional transform and optimization; it would lose something if we used the line-based transform. |
0:15:23 | um |
---|
0:15:25 | alright right so uh the this is an example of visual quality and this is just essentially sensually goes to |
---|
0:15:30 | show that we all the were using a block based pretty or we we don't have any hard |
---|
0:15:34 | here |
---|
0:15:35 | so this is a |
---|
0:15:36 | uh a patch from one the end of every as your original signal |
---|
0:15:41 | and this is not a construct signal by the proposed algorithm at zero one forty bit per piece so it's |
---|
0:15:47 | is |
---|
0:15:47 | you know one of the |
---|
0:15:48 | oh well as |
---|
0:15:49 | bit rates at the output but in can achieve |
---|
0:15:51 | and as can be seen that i mean the artifacts |
---|
0:15:53 | but no not not to science |
---|
0:15:55 | what's that |
---|
0:15:56 | the is is that a lot not the facts |
---|
0:15:58 | uh come from the quantisation the transform from the a some from the coupling of one position |
---|
0:16:03 | first transformation |
---|
0:16:04 | where i is in this case where using a block based pretty but the quantisation used and independently of the |
---|
0:16:09 | signal send |
---|
0:16:10 | pretty |
---|
0:16:11 | and what not |
---|
0:16:12 | so this is what would have a job but for example |
---|
0:16:15 | which creates you know a a lot |
---|
0:16:17 | here just which essentially keeps the text or |
---|
0:16:22 | Alright, to conclude: the proposed algorithm is essentially a new paradigm for onboard compression of hyperspectral images, where we achieve low complexity by using a prediction-based approach |
0:16:39 | which performs as well as, or better than, state-of-the-art three-dimensional transform coding with full rate-distortion optimization. |
0:16:45 | So it seems to be a nice way to perform onboard compression of satellite images. |
0:16:50 | Complexity and memory requirements are significantly lower than for JPEG 2000. |
0:16:55 | It is difficult to compare the complexity of different algorithms, but from discussions with people working on JPEG 2000 it seems that the proposed approach takes roughly one to two orders of magnitude fewer operations than JPEG 2000 to compress the same image. |
0:17:11 | But there is still room for improvement. |
0:17:13 | We are not using any arithmetic coding, and that would certainly improve the coding efficiency, since arithmetic coding outperforms Golomb coding by some margin. |
0:17:23 | We might also use band reordering, that is, using as the reference for the prediction not simply the preceding spectral channel, but the spectral channel that is most correlated with the current one; this is especially useful on IASI data, where it provides a nice performance gain. |
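A minimal sketch of that band-reordering idea: pick as reference the already-decoded band that correlates best with the current one. Restricting the candidates (for example to a few previous bands) and using the plain correlation coefficient are illustrative choices, not details given in the talk.

```python
import numpy as np

def best_reference_band(current: np.ndarray, candidates) -> int:
    """Return the index of the candidate band most correlated with the
    current band; 'candidates' is a list of already-decoded bands."""
    cur = current.astype(np.float64).ravel()
    cur -= cur.mean()
    best_idx, best_corr = 0, -1.0
    for i, band in enumerate(candidates):
        ref = band.astype(np.float64).ravel()
        ref -= ref.mean()
        denom = np.linalg.norm(cur) * np.linalg.norm(ref)
        corr = abs(float(np.dot(cur, ref))) / denom if denom > 0 else 0.0
        if corr > best_corr:
            best_idx, best_corr = i, corr
    return best_idx
```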
0:17:41 | This algorithm has been proposed to the European Space Agency for the hyperspectral imager of the ExoMars mission, which is going to fly to Mars. |
0:17:51 | Thank you. |
0:18:01 | Do we have any questions? |
0:18:08 | Can you make any comment regarding how the compression technique might affect processing that would occur after the images are transmitted, for example end-member extraction or some sort of classification task? |
0:18:24 | Yes. Anyone who proposes lossy compression for a remote sensing device is worried about the potential negative effects of lossy compression. |
0:18:34 | We have run experiments on that in the past, |
0:18:37 | and there are several quality metrics you can use to measure it, not just the mean squared error: the maximum error, the spectral angle, and a lot of other metrics. |
0:18:48 | My experience is that if the mean squared error is low, if it is small enough, then everything works very nicely, |
0:18:56 | and for this kind of mission you definitely want to keep the distortion sufficiently small. |
0:19:01 | Not for hyperspectral imagers, but for multispectral satellite sensors, existing systems actually do use lossy compression: |
0:19:09 | SPOT 5 does use lossy compression, at a bit rate of, I think, about 3 bits per pixel, |
0:19:16 | and other satellites use lossy compression as well. |
0:19:20 | So the government agencies, which rely on public funding, do not really care that much about lossy compression, but the private companies do care about it. |
0:19:30 | So my feeling is that lossy compression is not a big deal if the mean squared error is small enough. |
0:19:32 | There are exceptions, obviously: a problem comes for example from applications like anomaly detection, where a large error on even a single pixel can actually bias the result of the anomaly detection, so one has to be careful there. |
0:19:46 | But for classification, my feeling is that it more or less goes with the mean squared error: if the mean squared error is low, you are fine. |
0:19:57 | We have time for one more quick question. |
0:20:03 | Let's thank the speaker. |