0:00:13 | [inaudible session introduction] |
0:00:20 | My name is [inaudible]; as was said, I am from the [inaudible] University of Technology. Today I will discuss the extended inter-view direct mode for multiview video coding. |
0:00:39 | To start with: the development of efficient solutions for multiview video coding is becoming more and more important these days, as 3D video applications are getting more and more popular. |
0:00:56 | In the standardized solution, known as Annex H of the AVC / H.264 standard, a simple but quite efficient mechanism is used. |
0:01:11 | In this standardized solution, in order not to encode every view of the multiview sequence independently, reference pictures from other views are added to the reference picture lists. |
0:01:36 | This is the main inter-view compression mechanism used in multiview coding. |
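A minimal sketch of this mechanism, extending a reference picture list with already-decoded pictures from neighbouring views; the function and the list layout are illustrative assumptions, not the actual MVC reference-software API:

```python
# Illustrative only: temporal references of the same view are followed by
# inter-view references taken from neighbouring views at the same time instance,
# as done conceptually in MVC (Annex H of H.264/AVC).

def build_reference_list(temporal_refs, inter_view_refs):
    """Return a combined reference picture list: temporal first, then inter-view."""
    return list(temporal_refs) + list(inter_view_refs)

# Hypothetical example: coding view C1 at time instance T1.
temporal_refs = ["C1/T0"]             # previously decoded picture of the same view
inter_view_refs = ["C0/T1", "C2/T1"]  # neighbouring views, same time instance
print(build_reference_list(temporal_refs, inter_view_refs))
# ['C1/T0', 'C0/T1', 'C2/T1']
```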
0:01:44 | Unfortunately, the redundancy between the bitstreams of neighbouring views is still quite large, so room for further improvement exists. |
0:01:57 | On the other hand, there is some additional information in multiview video which can be used to decrease this redundancy between the bitstreams of neighbouring views. This information is the depth information: the depth maps, which describe the 3D geometry of the scene. |
0:02:23 | If we adopt this information in the encoding process, the performance of the multiview video codec should increase and, as a result, the resulting bitstream should be smaller. |
0:02:43 | So the main idea of this presentation is to use the depth information to improve the compression of multiview video; the use of depth information opens up some new possibilities for prediction. |
0:03:02 | As we all know, a video bitstream mostly contains control data, prediction error, and motion data. By using good prediction, each of these kinds of data can be reduced, making the bitstream smaller. Today I will discuss only the prediction of motion data, in order to decrease the bit rate. |
0:03:31 | Let us imagine that we have two views to encode. The reference view has already been encoded into the bitstream, so it is known at the time of encoding the other view. What we actually try to do is to predict the motion data from the neighbouring view using the 3D dependencies between objects, and these dependencies are described by the depth information. |
0:04:08 | Now, in a bit more detail. First of all, I would like to point out that the prediction of motion data from the reference view is carried out independently for each point of the currently encoded picture, and that we use only the depth information of the reference view. |
0:04:34 | We take the depth value of each point in the reference view and, based on this information, we project it to a location in 3D space using depth-image-based rendering, so we obtain the point's position in 3D space. Next, we re-project this position into the currently encoded picture. |
0:05:03 | As a result we get a pair of points, one in the reference view and one in the currently encoded view, so a correspondence between them exists. The motion data of the point in the reference view is already known, because that view has already been encoded into the bitstream. So we can take the motion vectors and reference picture indices of the block containing that point and simply use them to describe the motion vectors and reference picture indices of the corresponding point in the currently encoded view. This is the main idea of the proposed mode. |
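A hedged sketch of this per-point procedure, assuming a simple pinhole camera model with known camera parameters; the function names, the camera convention, and the block-based motion-field layout are illustrative assumptions, not the actual codec implementation:

```python
# Minimal sketch of the depth-based correspondence and motion-inheritance idea.
import numpy as np

def ref_pixel_to_current(u, v, depth, K_ref, R_ref, t_ref, K_cur, R_cur, t_cur):
    """Project a reference-view pixel with known depth into 3D space
    (depth-image-based rendering) and re-project it into the currently
    coded view.  Convention assumed here: x_cam = R @ X_world + t, pixel = K @ x_cam."""
    # back-project: pixel plus depth -> point in the reference camera frame
    p_cam_ref = depth * (np.linalg.inv(K_ref) @ np.array([u, v, 1.0]))
    # reference camera frame -> world coordinates
    X_world = R_ref.T @ (p_cam_ref - t_ref)
    # world -> currently coded camera frame -> pixel coordinates
    p_img = K_cur @ (R_cur @ X_world + t_cur)
    return p_img[0] / p_img[2], p_img[1] / p_img[2]

def inherit_motion(ref_motion, u_ref, v_ref, u_cur, v_cur, cur_motion, block=16):
    """Copy the motion vector and reference index of the reference-view block
    containing (u_ref, v_ref) to the current-view block containing (u_cur, v_cur)."""
    mv, ref_idx = ref_motion[int(v_ref) // block][int(u_ref) // block]
    cur_motion[int(v_cur) // block][int(u_cur) // block] = (mv, ref_idx)
```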
0:05:54 | We evaluated this idea by integrating it with the MVC multiview video coding reference software. The implementation was done as a new macroblock mode, which we call the extended inter-view direct mode, EIVD. Now let us look at how we adopted it into the MVC coding scheme. |
0:06:29 | Let us say that we have three views to encode. The basic scheme used in MVC is more or less like the one in the picture: we have a base view, C0, which is encoded first, so for the base view there are no other views available as references, and there are two more views, C2 and C1, each of which is encoded with one or two reference views. |
0:07:03 | I would like to point out that for view C2 the inter-view reference used in this scheme applies only to the anchor pictures: there is an arrow for the time instance T0 in the picture, but for time instance T1 and the others there is no arrow between views C0 and C2. In contrast, for view C1 there are arrows between views C0 and C1 and between views C2 and C1 for every picture, at every time instance. |
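The prediction structure just described can be summarized in a small data sketch; the view names follow the talk, while the representation itself is an illustrative assumption:

```python
# Which inter-view references each view uses, per the scheme described above:
# C0 is the base view; C2 uses inter-view references only at anchor pictures;
# C1 uses them at every time instance.
inter_view_refs = {
    "C0": {"anchor": [],           "non_anchor": []},
    "C2": {"anchor": ["C0"],       "non_anchor": []},
    "C1": {"anchor": ["C0", "C2"], "non_anchor": ["C0", "C2"]},
}
```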
0:07:52 | However, the only modification in comparison with standard simulcast coding is that the reference picture lists are modified in such a way that, for each time instance, pictures from the neighbouring views are added to the reference picture lists. |
0:08:22 | After adding the EIVD mode to this scheme, we get some new prediction possibilities. I would like to point out that these are marked with the additional arrows: we do not modify the reference picture lists, as we do not use any new reference pictures. These arrows only indicate that, for these pictures, we can predict the motion information from other views using the depth data. |
0:09:06 | What is also worth noting is that the new mode is used only for the non-base views, because the base view has no reference view, so there is nothing to refer to. The other thing is that the new mode can be applied only to non-anchor pictures, because in anchor pictures the motion vectors describe inter-view motion, which relates not to different time instants but only to different views, so we do not reuse them. |
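The two restrictions just mentioned can be captured in a short, hedged sketch; the function name and flags are illustrative, not the reference-software API:

```python
# Illustrative only: whether the extended inter-view direct (EIVD) mode can be
# considered for a macroblock, following the two restrictions from the talk.
def eivd_applicable(is_base_view, is_anchor_picture):
    # 1) the base view has no reference view to predict motion from;
    # 2) anchor pictures carry only inter-view (disparity) motion, which relates
    #    different views rather than different time instants, so it is not reused.
    return (not is_base_view) and (not is_anchor_picture)

assert eivd_applicable(False, False)       # non-base view, non-anchor picture: yes
assert not eivd_applicable(True, False)    # base view: no
assert not eivd_applicable(False, True)    # anchor picture: no
```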
0:09:58 | Now a few words about the evaluation of the new mode. We compared the EIVD mode with two codecs: JMVC 4.0, which is the reference software, and one of the previous versions of this software, which also contains the motion skip tool. The motion skip tool uses a similar idea of predicting motion information from the neighbouring views; however, it does not use any information about the 3D geometry of the scene, so it is an interesting point of comparison. |
0:10:47 | In our tests we used five different test sequences [names inaudible], and the results were obtained for several QP values. The last thing to note is that the bitstreams which are presented are the bitstreams of a single view only, so I will show only the results for the coded view. We leave out the reference-view bitstream, and we also leave out the bitstream needed to encode the depth data, because in our approach we assume that the depth information is already available to the encoder and the decoder for some other purposes, so we do not include it in the presented results. |
0:11:47 | Because in the reference multiview-plus-depth test material there were only three views available, there were two possible scenarios to check. The first scenario is that we encode view C2, with the base view C0 as the reference view. The other scenario is that we encode view C1, with reference view C0. |
0:12:21 | And finally the results. As you can see, after adding the EIVD mode into the MVC software we get some bit-rate reduction compared with the reference software, and this bit-rate reduction is achieved for every test sequence. The orange curve is the result of the reference software with the motion skip tool enabled. |
0:13:02 | We also notice that the bit-rate reduction in the case of scenario 2, compared with the reference software (the blue curve), still holds. However, in the comparison with the motion skip tool, we see that there is one sequence, Alt Moabit, for which the motion skip tool performed better. |
0:13:36 | To summarize the results: as we noticed, the extended inter-view direct mode improves compression efficiency. Against the reference software, we get a bit-rate reduction for all the test sequences and for all bit rates, and the average bit-rate savings were 6.9 percent and 5.3 percent, for scenario 1 and scenario 2 respectively. |
0:14:15 | Compared with the motion skip tool, we get a bit-rate reduction for all the sequences that were checked, and the average bit-rate savings are 11.3 percent and 2.5 percent for scenario 1 and scenario 2. |
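As a hedged illustration of how such average savings can be computed (all rates below are placeholder numbers; the talk does not detail the exact rate-matching methodology):

```python
# Percent bit-rate saving of the proposed codec versus an anchor codec,
# averaged over test sequences. All numbers are hypothetical placeholders.
def percent_saving(anchor_kbps, proposed_kbps):
    return 100.0 * (anchor_kbps - proposed_kbps) / anchor_kbps

anchor   = {"seq_A": 520.0, "seq_B": 610.0}   # hypothetical anchor rates
proposed = {"seq_A": 485.0, "seq_B": 570.0}   # hypothetical proposed rates
savings = [percent_saving(anchor[s], proposed[s]) for s in anchor]
print(f"average bit-rate saving: {sum(savings) / len(savings):.1f}%")
```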
0:14:38 | To conclude my presentation: we showed that depth information can be efficiently used to represent motion data in multiview video bitstreams. Thanks to the use of the 3D geometry, we get an accurate prediction of motion data from the neighbouring view. |
0:15:03 | The whole algorithm increases the computational complexity of the codec only negligibly, and this is true for both the encoder and the decoder implementation. |
0:15:17 | The new idea was implemented as the extended inter-view direct mode and, as we saw, the compression efficiency improvement was from 2.5 to 11.3 percent. We believe this can improve further as more accurate depth maps become available. |
0:15:42 | Last but not least, the use of depth data associated with the reference view only makes this solution adaptable to most multiview video applications, also to applications where, for example, texture and depth are encoded together, dependently; this solution is applicable there as well. |
0:16:09 | So, thank you very much. |
0:16:18 | Do you have questions? |
0:16:24 | [Audience question, largely inaudible] |
0:16:38 | We should note that it increases the complexity of the decoder, because some additional operations are needed: the decoder still has to project the pixel locations, so some calculations are added, but this is not much. |
0:17:01 | [inaudible exchange] |
0:17:12 | Any questions? |
0:17:16 | [inaudible] Thank you. |