0:00:14 | We're getting a good taste of 3D today. My talk is about semi-automatic 2D-to-3D conversion, merging graph cuts and random walks. |
---|
0:00:28 | I'll talk a bit about why 2D-to-3D conversion is definitely not dead, even though we have 3D cameras, and then describe our method, which merges two segmentation techniques, graph cuts and random walks. I'll explain why our approach is semi-automatic rather than fully automatic, and then show some results. |
---|
0:00:54 | There has definitely been keen interest in 3D over the last decade, probably more so over the last five years. For 3D image perception you want to render two distinct images. In 2D-to-3D conversion you don't have both: given one image, we want to generate the left and right views, from which we create a 3D image. |
---|
0:01:20 | We know the technology is plentiful: most of us have experienced a 3D setup, and some of us probably have one at home. The two most popular ways to produce 3D content are filming with a 3D camera, or taking a 2D image and converting it to 3D. |
---|
0:01:41 | The best solution would be to capture with a 3D camera. Consumer 3D cameras exist; I have one myself. And there are movie producers, Avatar being a good example, where a portion of the movie was actually filmed with two cameras. |
---|
0:01:59 | However, there is still an extremely large proportion of people producing content who do not want to, or have refused to, film with two cameras. Even when the equipment is available they don't want to use it; they feel their image, and the way they see a scene, is better captured with a single camera. Still, they want to convert to 3D afterwards, because they realize they can make a lot more money. |
---|
0:02:27 | And when people do film with two cameras, they realize it is expensive, and filming in 3D with two cameras is difficult; your whole mindset changes. Some things cannot be filmed in 3D, for example very fast close-up motion, and there are certain ways scenes are shot that cannot be done in 3D. So there are drawbacks. |
---|
0:02:49 | Many times 2D-to-3D conversion is preferred, especially for content you already have. Movie houses have millions of hours of movie content they would love to make money from, and they would love to convert it to 3D to be able to do that. 2D-to-3D conversion is the tool for that. |
---|
0:03:11 | Of course, we know there are many examples of 2D-to-3D conversion, and many of them work. We had industry partners where our job was to help them find methods and aid them in doing 2D-to-3D conversion, especially for high-resolution cinema, and there is great demand for that. As much as we know 3D cameras are out there, there is still very strong demand for conversion. So, as much as I enjoyed the previous lectures, 2D-to-3D conversion is still alive, and won't die out just yet. |
---|
0:03:47 | So we want to recover, or coarsely recover, a depth map: not necessarily the most accurate and precise one, but one that shows relative depth. Once you have that depth map, as we've seen in previous talks, you can generate a stereo pair. |
---|
0:04:06 | What we've done in the past, and what we aim to do here, is a semi-automatic approach. We worked with an industrial partner in the past, a very well known one, that converts high-resolution 2D movies to 3D. Their goal was not to do it quickly; their goal was to do it precisely. They can spend about three months converting about twenty minutes of a movie, and it is all done manually. |
---|
0:04:33 | They segment objects from an image, and they are very particular about how accurate that is: they want extremely accurate edges when an object is segmented out. The depth itself is not so crucial, since the relative depth is something they can adjust. So they like a semi-automatic approach, one where a user can give some indication of what should be segmented, and the algorithm works from that information. |
---|
0:05:02 | In the 2D-to-3D conversion world, a fully automatic approach is sort of the holy grail; nobody expects to see that happen soon. The best results in the field so far have come from semi-automatic approaches. |
---|
0:05:15 | Our method was inspired by Guttmann and his team. They got very good results, although the algorithmic scheme is pretty complex; it is quite good, and it is based on random walks, with user feedback used to train a classifier. When we saw their use of random walks, we realized we could build our method on random walks and do something similar. And from our interaction with industry we knew graph cuts, which they use to help with segmentation; it is a very popular method. |
---|
0:05:48 | So we use graph cuts, and we also use random walks to refine the result, and we try to merge the two together. We merge random walks, which is a well-known technique, and graph cuts, which is even better known, using modified versions of the two. |
---|
0:06:10 | In a semi-automatic approach you usually put markers on the image; with graph cuts you draw brush strokes to identify foreground and background, and we call those labels. Instead of that mindset of segmenting one object from its background, we modify graph cuts to do a multiple-label object segmentation, where we treat each label as, essentially, a depth. So we allow the user to define objects, but also to give them a relative depth. |
---|
0:06:45 | Random walks is a pretty well-known method; the result is the solution to a linear system. It is known to be very good on edges with fine detail and on gradual changes, but not very good at strong edges. Graph cuts, on the other hand, is extremely good around strong edges, but not very good on edges with gradual change or low contrast. So we want to combine the two. |
---|
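For readers of the transcript: the linear system referred to here is, in Grady's standard random-walker formulation (a property of the method in general, not something specific to this talk), a discrete Dirichlet problem. Partitioning the graph Laplacian over marked (seeded) and unmarked pixels, the per-label probabilities at the unmarked pixels solve:

```latex
% Random-walker linear system (Grady, 2006). L is the graph Laplacian,
% partitioned over marked (M) and unmarked (U) pixels; x_M holds the
% seed indicators for one label, x_U the probabilities being solved for.
L = \begin{pmatrix} L_M & B \\ B^{\mathsf{T}} & L_U \end{pmatrix},
\qquad
L_U \, x_U = -B^{\mathsf{T}} x_M
```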
0:07:22 | What we first do, as I'll show later with the results, is let the user select different levels of depth in an image with easy brush strokes. We then apply a random walk segmentation of the image. We modified random walks into a scale-space random walk: we find that random walks is susceptible to noise, so we build a scale-space version of the image, do a random walk at each level, merge the results across scales, and get our final result. |
---|
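A minimal sketch of a scale-space random walk as just described: blur the image at several scales, run a standard random walker at each scale, and average the per-label probabilities. This is my reconstruction, not the speakers' implementation; the seed encoding (0 = unmarked, 1..K = depth layers), the sigma list, and beta are all illustrative assumptions.

```python
# Sketch of a scale-space random walk: run the random walker on several
# Gaussian blur levels of the image and average the per-label probabilities.
# Assumes seeds use consecutive labels 1..K, with 0 meaning "unmarked".
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.segmentation import random_walker

def scale_space_random_walk(image, seeds, sigmas=(0.0, 1.0, 2.0), beta=130):
    prob_sum = None
    for sigma in sigmas:
        smoothed = gaussian_filter(image, sigma) if sigma > 0 else image
        # return_full_prob=True yields one probability map per label
        probs = random_walker(smoothed, seeds, beta=beta, mode='cg',
                              return_full_prob=True)
        prob_sum = probs if prob_sum is None else prob_sum + probs
    prob_mean = prob_sum / len(sigmas)
    # Soft depth map: expected label value under the merged probabilities,
    # scaled so the nearest layer maps to 1.0.
    label_values = np.arange(1, prob_mean.shape[0] + 1)
    return np.tensordot(label_values, prob_mean, axes=1) / label_values.max()
```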
0:07:59 | Once we've done that, we take a graph cut of the same image, using the same input from the user, and we run a modified, multiple-label version of graph cuts. So we get a segmentation of the objects relative to each other over the multiple labels. We end up with two different segmentations, two depth-map results, and the point is to merge them together. |
---|
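For illustration, here is the binary graph-cut building block, written with PyMaxflow (a library choice of mine, not necessarily the speakers'). The talk's method generalizes this to multiple depth labels, which is typically done with move-making schemes such as alpha-expansion; the unary costs derived from the scribbles and the edge-weight parameters below are assumptions.

```python
# Binary graph-cut sketch: hard segmentation with contrast-sensitive
# pairwise weights, so cuts are cheap across strong image edges.
import numpy as np
import maxflow  # PyMaxflow

def binary_graph_cut(unary_fg, unary_bg, image, lam=50.0, sigma=0.1):
    """unary_fg/unary_bg: per-pixel costs for the two labels;
    image: 2-D grayscale in [0, 1]. Returns a boolean segmentation."""
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(image.shape)
    grad = np.hypot(*np.gradient(image))
    weights = lam * np.exp(-grad**2 / (2 * sigma**2))
    g.add_grid_edges(nodes, weights=weights, symmetric=True)
    g.add_grid_tedges(nodes, unary_fg, unary_bg)  # data (terminal) costs
    g.maxflow()
    return g.get_grid_segments(nodes)
```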
0:08:32 | The merging of the two depth maps is essentially done with a geometric mean. This is a preliminary result of how we merge these together: we use a geometric mean so that, hopefully, in areas where graph cuts is stronger, it can overcome the random walks result. The geometric mean gives a good first result and holds up well experimentally. |
---|
0:09:03 | We have a student taking over this task now, trying to figure out a more adaptive way of finding the weights in the geometric mean, which is currently static, and possibly a totally different weighting on top of that. |
---|
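A sketch of this merge step: a (possibly weighted) geometric mean of the two normalized depth maps. The static weight w reflects the talk's comment that the current weighting scheme is static; w = 0.5 reduces to the plain geometric mean sqrt(d_gc * d_rw). The epsilon is my own guard, not from the talk.

```python
# Weighted geometric-mean merge of the two depth maps (both in [0, 1]).
# eps keeps exact zeros in one map from wiping out the product entirely.
import numpy as np

def merge_depth_maps(d_gc, d_rw, w=0.5, eps=1e-6):
    return (d_gc + eps) ** w * (d_rw + eps) ** (1.0 - w)
```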
0:09:23 | Let me show you a few results and how it's done. This is one of the original images on the left. The user is presented with the 2D image, and all the user is expected to do is mark relative depth on the image. |
---|
0:09:50 | Anything marked white by the user is considered close to us, anything black is far away, and the in-between grey values (in this case we only have three) fall somewhere in between. We can modify that to have many levels of disparity, of relative depth, so the user can mark multiple areas with their relative depths. |
---|
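As a concrete, hypothetical encoding of this interaction: a single integer seed image can feed both methods, with the grey level of each stroke conveying relative depth. All names, sizes, and stroke positions below are illustrative, not from the talk.

```python
# Hypothetical seed image built from user scribbles: 0 = unmarked,
# higher label = closer to the viewer (white), lower = farther (black).
import numpy as np

H, W = 480, 640
seeds = np.zeros((H, W), dtype=np.int32)   # 0 = no scribble
seeds[400:420, 100:200] = 3                # white stroke: nearest layer
seeds[200:210, 300:400] = 2                # grey stroke: middle layer
seeds[40:60, 50:600] = 1                   # black stroke: farthest layer
depth_of_label = {1: 0.0, 2: 0.5, 3: 1.0}  # relative depth per label
```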
0:10:20 | That is very similar to how graph cuts works, where you mark some areas as foreground and the rest as background. And that's what we mean by semi-automatic: we don't want the user providing much more information than that. |
---|
0:10:37 | On the top left is the depth map from our modified random walks, the SSRW, computed from the user's strokes, and on the bottom is the graph-cuts depth map. Notice we have some occlusion holes in the result. |
---|
0:10:59 | For those, we initially just used a simple inpainting method; pretty much anything works there. But for better results, we would like to use the modified random walk to actually fill those in, in the end. |
---|
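Since the talk notes that "pretty much anything" works for this hole filling, a minimal stand-in using OpenCV inpainting (my choice of tool, not the speakers') might look like:

```python
# Fill disocclusion holes in the synthesized view; hole_mask is nonzero
# where the pixel shift left no source pixel.
import cv2

def fill_occlusions(view_bgr, hole_mask):
    return cv2.inpaint(view_bgr, hole_mask, inpaintRadius=3,
                       flags=cv2.INPAINT_TELEA)
```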
0:11:19 | The result of the graph cut tends to be a very good, binary-like segmentation of the objects, while the SSRW depth map has more gradual changes in depth. The point is to merge those two together: using the geometric mean, we hope to combine them, and we find we tend to create a more gradual, realistic depth map by merging the two. |
---|
0:11:47 | This is the synthesized right view from the original image, and this is an anaglyph image. Of course we don't have 3D glasses here, but there is a web address in the paper, and the results can be viewed online. It gives a sense of how the shift happens and how much disparity is shown. |
---|
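A rough sketch of how such a right view can be synthesized from the depth map by depth-proportional horizontal pixel shifting (depth-image-based rendering); the max_disp parameter and the near/far convention are assumptions for illustration.

```python
# Synthesize a right view by shifting pixels left in proportion to depth
# (depth in [0, 1], 1 = near). Near pixels are painted last so they
# occlude far ones; unfilled pixels are returned as a hole mask.
import numpy as np

def synthesize_right_view(image, depth, max_disp=20):
    H, W = depth.shape
    view = np.zeros_like(image)
    filled = np.zeros((H, W), dtype=bool)
    disp = np.round(max_disp * depth).astype(int)
    for d in range(max_disp + 1):           # far-to-near painting order
        ys, xs = np.nonzero(disp == d)
        xs_new = np.clip(xs - d, 0, W - 1)
        view[ys, xs_new] = image[ys, xs]
        filled[ys, xs_new] = True
    return view, ~filled
```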
0:12:11 | Another example: the user just selects the background, the foreground, and a couple of strokes on, I think, a building. Here are the two depth maps, random walks and graph cuts; notice the graph cuts one is a very nice, crisp foreground/background depth map. |
---|
0:12:39 | And here is the synthesized right view along with the anaglyph image; you can see how much shift there is. |
---|
0:12:49 | Of course, not everything is rosy. This is an example where it did not work very well: the original is on the left, then the depth labels the user provided. The issue here is with our graph cut, I think, but also the merging; we are still investigating how to merge the two, and how we can use some image information to create a better depth map from merging the two methods. |
---|
0:13:19 | Again, I stress that an important point of our algorithm is that we don't want to overwhelm the user. The only input from the user takes a couple of seconds: just drawing these types of lines. On top of that, if the user doubts the result, or something is not right, the user is able to go back and modify or erase their strokes, or add some more, to help the algorithm work a little better. |
---|
0:13:54 | Our results are preliminary. We know that graph cuts and random walks each work very well on their own; you can find the references in the paper. But we feel that if we merge the two together, two methods that each work very well, we should be able to get a very good approximation of the relative depth between objects, which we think is very important. |
---|
0:14:23 | So our preliminary results have shown depth maps that take, or try to take, the best of both methods: graph cuts provides crisp, noticeable borders, which we very much need in segmentation, and random walks allows us to capture texture and gradients a little better. |
---|
0:14:44 | There are drawbacks, of course. Sometimes the depth map is not correct, as I showed in the previous example where the graph cut ruined the result. But because it is such a simple method, it runs relatively quickly, a few seconds per frame, so a user can go back, modify and change their labels, rerun the algorithm, and hopefully correct it. |
---|
0:15:08 | Also, our weighting scheme is static. Again, these are very much initial results, and we are trying to find some kind of adaptive weighting. |
---|
0:15:25 | As I mentioned, for future work our students are looking at ways to better merge the depth maps. Another aspect of future work is to extend the method to videos, and to see whether, in the same kind of framework in which we merge the results of graph cuts and random walks, we can merge in other cues from a model of the scene as well. |
---|
0:16:01 | Now we can have some questions. |
---|
0:16:08 | [Question, partly inaudible] You proposed using the geometric mean; there are also generalized means, with an exponent that can vary between one and two. Have you tried using those instead of the geometric mean? |
---|
0:16:41 | I'm not sure if the students were aware of that paper at the time; I will have to check on that. I'm not familiar with it, so I would appreciate the reference. |
---|
0:17:01 | My question is: how do you determine the number of depth layers in a scene? |
---|
0:17:08 | It's all up to the user to define how many depth levels, depth layers, there are. The user can do just foreground and background if that's what they want, or they can do two or three or four, depending on how much detail, or depth, they want. It's up to the user. |
---|
0:17:37 | How about the temporal consistency of the resulting depth maps in a video sequence? |
---|
0:17:44 | That's definitely a problem. We haven't investigated that at all yet, so going from frame to frame you are going to see issues with edges, and flicker is definitely one of the biggest problems with 2D-to-3D conversion. When we extend to video, that is maybe one of the top priorities. |
---|
0:18:03 | I also want to know whether the user needs to assign a depth value for each of the layers. |
---|
0:18:09 | Well, with a graph cut the user just picks foreground and background; in this case, instead of foreground and background, the user thinks in terms of closer and further away. So if the user thinks there are three or four objects, or depth layers, they draw strokes according to the objects and assign labels to them: the front object, further back from that object, and so on. So again, as with the previous question, it is all defined by the user's perception of the image, and which objects they consider close to or far away from the viewer. |
---|
0:18:44 | How well does the user have to assign the exact depth values? |
---|
0:18:48 | That's a good point: what we care about is the relative depth. |
---|
0:18:52 | Companies like IMAX, for example, when they convert a movie, don't care about the actual depth; they care about the relative depth, because they will move objects further out into the 3D volume to make the effect bigger. They are not there to make it seem realistic; they are there to make it extra-realistic, super-real. So they care about relative depth, and that's what we concentrate on. It's not like robotic vision, where things have to be precise. |
---|
0:19:36 | What happens if the user puts things in the wrong order, or is that a situation where you are pretty much stuck? |
---|
0:19:46 | Well, then the whole depth changes. I mean, because it is not automatic, all these methods depend on the user's expertise and interpretation. So if a user wants to do that, that's fine, but it is going to change the result: things will just shift accordingly. |
---|
0:20:07 | Thank you. |
---|
0:20:14 | Okay, let's thank our speaker again. |
---|