0:00:13 | So we have, as you know, our panelists from the talks, from Cornell and from CMU. |
---|
0:00:21 | These are all questions meant just to start a little discussion, but you are more than welcome to reply to them if you have an opinion on those, or to come up with other questions. |
---|
0:00:34 | So the first question would be: what is the most promising approach that you can imagine so far? The session was about audio-visual detection of non-linguistic vocal outbursts, so what is the most promising approach to this topic? Any opinions on this? Who wants to comment first? |
---|
0:01:04 | So this really isn't my area, my PhD topic is a little bit to the side of this, but I do think that in this sort of non-linguistic, extra-linguistic area, voice quality plays a very big role. Unfortunately, a lot of the features that people have used for voice quality tend not to be robust enough, and that is the reason why people don't use them. I think more robust features, features that can measure different dimensions of voice quality, could be beneficial to the recognition, perhaps. Thank you. |
---|
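A minimal sketch of what such voice quality measures could look like, assuming F0 estimation with librosa's pyin; the jitter- and shimmer-style measures below are illustrative choices, not the panelist's features:

```python
import numpy as np
import librosa

def voice_quality_features(path):
    """Rough jitter- and shimmer-style voice quality measures for one file."""
    y, sr = librosa.load(path, sr=16000)
    # Frame-wise F0 with probabilistic YIN; unvoiced frames come back as NaN.
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    f0 = f0[voiced & ~np.isnan(f0)]
    periods = 1.0 / f0
    # Jitter analogue: normalized mean absolute difference of pitch periods.
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)
    # Shimmer analogue: frame-to-frame variability of RMS energy when voiced.
    rms = librosa.feature.rms(y=y)[0]
    n = min(len(voiced), len(rms))
    rms = rms[:n][voiced[:n]]
    shimmer = np.mean(np.abs(np.diff(rms))) / np.mean(rms)
    return {"jitter": float(jitter), "shimmer": float(shimmer)}
```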
0:01:47 | Thank you. Any other opinions? |
---|
0:01:56 | Yeah, I think a lot can be done on the classification side, for example with different fusion techniques. So far we have seen early fusion and a stepwise approach; of course, you could also feed the different modalities into different classifiers in a late fusion approach, and estimate the confidences of the different classifiers, because for some problems the visual modality, for example, does not give a confident enough decision on its own, or something like this. |
---|
0:02:29 | As for the audio-visual part, maybe others can comment on what is missing. |
---|
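A minimal sketch of the late fusion scheme described here, assuming two independently trained per-modality classifiers that output class posteriors; the entropy-based confidence weighting is one illustrative choice among many:

```python
import numpy as np

def late_fusion(audio_probs, video_probs):
    """Confidence-weighted late fusion of per-modality class posteriors.

    audio_probs, video_probs: arrays of shape (n_classes,) from two
    independently trained classifiers, e.g. clf.predict_proba(x)[0].
    """
    def confidence(p):
        # Negative normalized entropy as a simple per-sample confidence:
        # peaked posteriors get more weight than flat ones.
        p = np.clip(p, 1e-12, 1.0)
        return 1.0 - (-np.sum(p * np.log(p)) / np.log(len(p)))

    w_a, w_v = confidence(audio_probs), confidence(video_probs)
    fused = (w_a * audio_probs + w_v * video_probs) / (w_a + w_v + 1e-12)
    return fused / fused.sum()

# Example: video is near-uniform (unsure), so the audio stream dominates.
fused = late_fusion(np.array([0.7, 0.2, 0.1]), np.array([0.34, 0.33, 0.33]))
```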
0:02:38 | I can't currently say that any single approach is the most promising; I mean, there are so many problems involved. Of course, the main problem, the main question, is that there is a huge variability of expressions, and it is hard to model the different classes. The variability of the expressions is very high: high because of head-pose movements, and also because of different cultures. So that is the central problem; there are several approaches, but which of them is the most promising, I'm not sure. |
---|
0:03:38 | So, maybe some comments or questions from the audience? |
---|
0:03:48 | So what I am asking myself is: what we see here, this localisation and detection of these outbursts, is it so much different from what we did so far in many other recognition applications? I remember that in the late nineties, more than ten years ago, we were working on broadcast news, and we were basically using the same kinds of efforts for distinguishing between, let's say, speech, non-speech, music parts, and silence. So I am really asking, maybe some of you: do you think there is a big difference between what we did then, some ten or twelve years ago, for this kind of similar problem, or do you believe that this kind of task brings extremely new challenges which will lead to different approaches? |
---|
0:04:39 | Thank you. Who wants to respond? |
---|
0:04:50 | I don't know exactly what kinds of problems you are referring to. |
---|
0:04:54 | I am sure that there are lots of problems that are similar to the problems being tackled these days, but I think it is fair to say that there are also lots of new problems, not considered in the nineties, that people are looking at now. That said, for the problems that are in common, I think people generally don't do the comparative studies to demonstrate what in fact is new about the new techniques. Sometimes the data goes away, and sometimes the people that did the work go away, and there is not enough effort being put in to demonstrate that novel new mathematical techniques are in fact worth doing. That's my opinion. |
---|
0:05:44 | Maybe just to add on this, because you mentioned you don't know exactly what was done: what was done back then was quite similar to what you do today. People were basically training neural networks on those classes, then running a sliding window over the streams, looking at the outputs, and assigning the segments to the different classes. So that was behind my motivation to say: look, there is a strong similarity to what is being done now. The details differ, and maybe somewhat different types of neural networks are used, but the difference is not so big, I think. That was my point, and the motivation for my previous question, actually. |
---|
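A minimal sketch of the sliding-window scheme the questioner describes, assuming frame-level features and any trained frame classifier; window length, hop, and the run-merging step are illustrative assumptions:

```python
import numpy as np

def sliding_window_segment(features, clf, win=50, hop=10):
    """Classify a feature stream with a sliding window.

    features: array of shape (n_frames, n_dims), e.g. MFCCs over time.
    clf: any trained classifier with .predict() over flattened windows.
    Returns one class label per window position.
    """
    labels = []
    for start in range(0, len(features) - win + 1, hop):
        window = features[start : start + win].reshape(1, -1)
        labels.append(int(clf.predict(window)[0]))
    return np.array(labels)

def merge_runs(labels):
    """Collapse consecutive identical labels into (label, length) segments."""
    segments, run = [], 1
    for prev, cur in zip(labels, labels[1:]):
        if cur == prev:
            run += 1
        else:
            segments.append((prev, run))
            run = 1
    segments.append((labels[-1], run))
    return segments
```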
0:06:24 | Okay, thanks for that comment. So maybe we can move to the second question, which was: what are the problems? Which problems are the most salient for this task? Are there any specific problems in this context? Anybody from the panel? |
---|
0:06:46 | Well, this is a rather old question, but I think that in emotion recognition some people have been starting to study how to do this cross-culturally. Certain vocal outbursts carry over across cultures, but some do not, and I think more effort in this area could pay off; you could actually gain some knowledge from this kind of task. Thank you. |
---|
0:07:24 | I'll take this. In my view, and I mostly do speech, but as far as I can see and have seen, one of the organisational problems is that speech processing, as somebody has pointed out, is more developed than visual processing. Yet the sites that claim to do multimodal processing are normally rather stronger on the video side than on the audio side; of course there are always exceptions. So the video part in such cases can be rather developed while the audio part is left behind. It might therefore be one of the organisational problems, and that refers to funding and so on, to bring together real specialists from both sides, and not to have vision people doing some audio on the side, and vice versa. |
---|
0:08:32 | And my second point refers to terminology, which might seem to be a rather minor problem, but I'll tell you a story. I don't like this taxonomy into non-verbal and so on, and here is my story: back in the nineties, in the German Verbmobil project, we had a convention that pauses, together with what was conceived as non-linguistic stuff like breathing, coughing and so on, were marked with special brackets. You may say, well, what's the problem? The problem was that the guys implementing the ASR engine just threw all this "garbage" away and did not take it into account. Later on, when we, the linguistic modellers, wanted to have a look at these hesitations and wanted to use them, the guys from the ASR engine said: well, that's not in our project, we can't do it, we can't reimplement it, because that takes some weeks. So we had to run our own ASR tests, because in the beginning all of this had been defined as non-linguistic. You can say that nowadays ASR is more intelligent, but I doubt it. So I think we really should take care of even those problems that seem minor, from the very beginning. |
---|
0:10:09 | Thank you. Any comments, maybe from the audience, on problems in this respect? Yes? |
---|
0:10:22 | It looks like most of the work has been done using facial expressions and audio of a single speaker, in a kind of clean setting without much noise. Is there any work on, let's say, a meeting kind of setting, where you have many people and a camera monitoring different people, so that each person's facial expression is very blurred? Those kinds of conditions, with non-lapel, far-field cues. |
---|
0:11:01 | So this is looking for an answer from the meeting domain. Actually, I couldn't quite fully hear everything that you said, but I believe you asked how people working on meetings deal with the same problems that are treated here, is that right? |
---|
0:11:57 | I don't know very much about it, but I can tell you that it leads to an explosion of sensors. There have been quite a lot of projects, actually at the EU level, that have dealt with instrumenting meeting rooms and seminar rooms, et cetera. I find that research kind of exclusive, because there are only a couple of groups that can do research where, you know, the number of video cameras and the number of microphones that are necessary, fixed inside a fixed-geometry room, are available. On the video side I don't really know, but on the speech side there are a lot of problems that are being tackled in essentially the same way as in the single-speaker setting, after, for example, beamforming the audio from a microphone array or a set of microphones. |
---|
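A minimal sketch of the delay-and-sum beamforming step mentioned here, assuming known microphone positions and a far-field plane-wave source direction; the geometry handling is an illustrative simplification:

```python
import numpy as np

def delay_and_sum(signals, mic_positions, direction, sr, c=343.0):
    """Delay-and-sum beamformer steering an array toward `direction`.

    signals: array of shape (n_mics, n_samples), time-aligned recordings.
    mic_positions: (n_mics, 3) coordinates in metres.
    direction: unit-norm 3-vector pointing toward the source.
    """
    n_mics, n_samples = signals.shape
    # Per-microphone delay, in samples, for a plane wave from `direction`.
    delays = mic_positions @ direction / c * sr
    delays -= delays.min()
    out = np.zeros(n_samples)
    freqs = np.fft.rfftfreq(n_samples)  # cycles per sample
    for sig, d in zip(signals, delays):
        # Apply the fractional delay as a phase shift in the frequency domain.
        spec = np.fft.rfft(sig)
        out += np.fft.irfft(spec * np.exp(-2j * np.pi * freqs * d), n=n_samples)
    return out / n_mics
```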
0:13:01 | Yeah, in most works I have seen on meetings, if they want to use facial expressions, they use one camera per person, so there is one camera looking at the face of each participant. Now, for the setting you mention, where there is only one camera: as far as I know there are some works; I think there is a group, possibly at the Media Lab, that does gesture recognition with, let's say, not very sophisticated features, because it is a very hard problem when you have only one camera, in order to detect, for example, who is the most dominant person in a meeting, and things like that. I also think Professor Nick Campbell has done some work using a 360-degree camera placed on the table. But I don't think there are many works using only one camera, and one of the reasons is that it's not easy: you are not sure which features to extract, and there are so many problems. |
---|
0:14:23 | Okay, so the next question would be: how can we better integrate audio and video? How can we best fuse these? And that should be looked at from both sides, from the video side and from the audio side; it was mentioned that there are these gaps in between when you look at it from one side or the other. So maybe someone can weigh in on this: how can we best integrate audio and video in the future? |
---|
0:14:59 | Well, one possibility is not to use both video and audio information, with early or late fusion, for the very same problem, but rather to look at the context and then try to adjust the prior probabilities for the phenomena I am interested in. Let me come back to our data: it seems that if we are interested in these interactive phenomena, then we can have a very close look at the body movements of the subjects, and when those are lively, we can just reset the priors for the phenomena we are interested in; and this holds, of course, vice versa. Well, coming from speech, I can mostly imagine the speech problems we are interested in, but it's the job of the video guys to do it the other way round. So that is maybe just another type of approach that we can try to pursue, and my neighbour told me that the same ideas are coming from the video side, so it might really be worthwhile. |
---|
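A minimal sketch of this context-dependent prior idea, assuming the audio model yields per-class likelihoods that are re-weighted by priors derived from a visual body-movement score; the specific boost rule is an illustrative assumption:

```python
import numpy as np

def context_adjusted_posterior(likelihoods, base_priors, movement_score, boost_idx):
    """Re-weight class priors using visual context before applying Bayes' rule.

    likelihoods: p(x | class) per class from the audio model, shape (n_classes,).
    base_priors: default class priors, shape (n_classes,).
    movement_score: scalar in [0, 1], e.g. normalized body-motion energy.
    boost_idx: indices of classes made more likely by lively movement.
    """
    priors = base_priors.copy()
    # Lively body movement raises the prior of the interactive phenomena.
    priors[boost_idx] *= 1.0 + 2.0 * movement_score
    priors /= priors.sum()
    posterior = likelihoods * priors
    return posterior / posterior.sum()
```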
0:16:29 | Thank you. Anyone else, maybe? |
---|
0:16:35 | For audio-visual work, it seems the most popular approaches at the moment are decision fusion and feature-level fusion, and there are also some other approaches like multi-stream HMMs. But I think this is not unique to audio-visual work; fusion of different modalities is a generic topic, and so far, you know, these methods have been known for years and it seems nothing better has come out. So I guess people keep trying to find better fusion methods. Obviously, for example, in feature-level fusion a simple way is just to match the frame rates between audio and video, but there is no guarantee that this is the optimal way to do it. So yes, I think there is an increasing need for new fusion methods. Now, what is that going to be? I don't think anyone knows the answer yet. I am pretty sure a lot of people work on fusion of different modalities, so hopefully at some point there will be something better than what we use today. |
---|
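A minimal sketch of the frame-rate matching trick mentioned for feature-level fusion, assuming 100 fps audio features and 25 fps video features; upsampling video by linear interpolation is an illustrative choice, not necessarily the optimal one:

```python
import numpy as np

def feature_level_fusion(audio_feats, video_feats, audio_fps=100, video_fps=25):
    """Concatenate audio and video features after matching frame rates.

    audio_feats: (n_audio_frames, d_a), e.g. MFCCs at 100 fps.
    video_feats: (n_video_frames, d_v), e.g. facial points at 25 fps.
    Video is linearly interpolated up to the audio frame rate.
    """
    t_audio = np.arange(audio_feats.shape[0]) / audio_fps
    t_video = np.arange(video_feats.shape[0]) / video_fps
    # Interpolate each video feature dimension onto the audio time axis.
    video_up = np.stack(
        [np.interp(t_audio, t_video, video_feats[:, j])
         for j in range(video_feats.shape[1])], axis=1)
    return np.concatenate([audio_feats, video_up], axis=1)
```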
0:17:57 | [inaudible] |
---|
0:19:35 | So, actually, that brings us to the fourth and last question on the slide: what applications do you see for all these types of non-linguistic vocalisations, or vocal outbursts? Each of you could take the microphone in turn and tell us your non-linguistic vocalisation application. |
---|
0:20:00 | Prediction of turn-taking, for example. And improved spoken dialogue systems, in particular improved speech synthesis: properly predicting not just non-linguistic vocalisations but also short ones like affirmations, and better selection in large-corpus unit-selection synthesis. If you can select the right type of "yeah" or other short outburst, that might provide more natural synthetic speech. |
---|
0:20:42 | I think he has already said everything. |
---|
0:20:46 | Yeah, from my point of view an interesting point of application is personality analysis, for example the assessment of leadership qualities. For instance, you could think of it this way: if someone makes a lot of hesitations, he is probably not a good speaker, so such classifications could be quite useful. |
---|
0:21:06 | At least for us, I mean, the main goal was to have a more user-friendly interface; that's why we were mostly concentrating on laughter, and also on laughter localisation. For example, when you interact with an interface and it can detect that you laugh, this usually means that you are happy, I mean that you enjoy it. Of course, laughter can have a different meaning; it can mean "I don't need this", it can be ironic, and at the moment I think it is very hard to discriminate between these two types. But for us it was just about making the communication more user-friendly, because if you use just speech, well, take the example of an interaction between two people and remove all these non-linguistic vocalisations: you will see that the interaction does not seem so natural anymore. So our idea, our motivation, was a more user-friendly interface, and that is how we began to work on this. |
---|
0:22:11 | I think we could aim not only at assessing or monitoring personality, but at assessing and monitoring changes in personality over certain time spans. Just imagine, to come back to this leadership topic, somebody taking part in a management course: at the beginning and at the end we assess their personality and have a look at whether changes took place. The same holds for any interactive behaviour; it can be extended to whether people behave differently afterwards, and things like that. But not as a single-instance judgement, because that is too erroneous; rather as a combination over many observations. |
---|
0:23:05 | A question: how about the dangers of setting things like these apart from linguistic stuff? Out of these dream applications, I think turn-taking, for example, clearly has to do with speech and with dialogue construction. What is the benefit of treating these as non-linguistic? |
---|
0:23:56 | I think that is mostly a pragmatic question. It is just that linguistics has dominated this community's general interest for a long time, and to actually get something going that is non-linguistic, you need to exert a lot of effort at first. I don't think anyone actually wants to disintegrate it from linguistics; I don't think that is the general aim. That is just my take. |
---|
0:24:28 | Yeah, I would totally agree with that, and I think probably the future is to have it combined, to have linguistics together with audio and video. But with this we have actually reached the end of the session, and we are out of time. I would like to thank you all very much, the speakers and all of the audience, and I hope that we will have more room for discussion in the future. Thank you. |
---|