0:00:06 | Alright, everybody. So I want to, uh... people are already starting to take a break before the wine tasting; at least I can continue now. |
0:00:19 | I want to talk about speaker linking, and what you can actually expect before we go for the wine tasting: a few things. |
0:00:31 | And, no doubt, a high-prize question; I have a graph; even for the mathematicians I have a formula; and also a picture. And finally, maybe or maybe not, depending on how well I do... |
0:00:49 | So let's start. By the way, if you're not interested in this subject, you can keep yourself busy with detecting OOVs or specific events; that is a use case for you. |
0:01:05 | Alright, so. |
0:01:06 | I was reading a book, I have it at home, and I haven't finished it yet, but it tells about how people in World War Two... the picture in this case is that we're eavesdropping on the communication of the other side, from the English perspective. |
0:01:30 | They were listening to the Morse code signals, and the codes were encrypted, but still they were able to deduce some kind of information, namely about the person behind the Morse code apparatus. |
0:01:45 | I think in Morse code terminology that is called the fist, so that is the way your fist goes up and down, the operator's fist. |
0:01:54 | So even though they didn't know the identity of the people, they were able to link one broadcast, maybe at one particular instant in time from one particular direction or whatever, to another one later. |
0:02:07 | And from that they could infer movements of troops. So even though the messages themselves were encrypted, they were still able to deduce some information. That gives you an idea of what this could be useful for. |
0:02:23 | Another example of linking, or clustering as you might call it: a specific implementation, actually, is on the web; it's done by a big web-based software firm. You can do this with photographs, with faces in fact. |
0:02:45 | And it works pretty well. I have a whole lot of pictures, and even though the clustering itself isn't very good in terms of actual performance figures, you get a cluster, in this case it shows you one, and you click one of them away, you type two or three letters of a person's name, and of course it knows your email database. |
0:03:11 | And then you have made a new cluster, you get the next person, et cetera. So it works very well in an interactive setting, even though the clustering performance itself in these particular cases is pretty bad. |
0:03:27 | Now, just a short intermission. The way I see clustering, it's actually kind of old-fashioned: we're doing some kind of identification, right? We group people by their voices and make hard decisions about this. |
0:03:42 | It is a sort of identification, and we don't like identification: it has the problem of the priors. As has been said, if you want to do proper identification you need priors, and if you don't know them, what are we going to do? |
0:03:54 | Just a little test for you. We all work with equal error rate, even in language recognition. So suppose you have a system with a certain equal error rate, say five percent; that is a two-class system, detection. |
0:04:08 | Now you're going to apply this system as a two-speaker identification system, and you do your identification by taking the segment, computing a score against one model, computing a score against the other, and, with equal priors, you choose the model with the maximum likelihood score. You don't do anything clever, no discriminative training between the two speakers. |
0:04:34 | So the question then is: what is your identification error rate going to be? Is it going to be one percent, five percent or ten percent? It's a question you can think about during this talk, if you don't want to watch the slides. |
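One way to get intuition for this quiz is to simulate it under an assumed score model. The sketch below is not from the talk: it assumes (hypothetically) that target and non-target detection scores are independent, unit-variance Gaussians calibrated to a five percent equal error rate, and the answer it gives holds only under that assumption.

```python
import numpy as np
from statistics import NormalDist

eer = 0.05
# Target-score mean that yields this EER when target and non-target scores are
# unit-variance Gaussians with the non-target mean at zero (an assumption).
mu = -2 * NormalDist().inv_cdf(eer)

rng = np.random.default_rng(0)
n = 1_000_000
target_scores = rng.normal(mu, 1.0, n)      # score of a segment against its own speaker's model
nontarget_scores = rng.normal(0.0, 1.0, n)  # score of the same segment against the other model

# Closed-set two-speaker identification with equal priors: pick the higher score.
id_error = np.mean(nontarget_scores > target_scores)
print(f"detection EER: {eer:.1%}  ->  simulated 2-class identification error: {id_error:.2%}")
```

Under this particular Gaussian assumption the two-class identification error comes out near one percent; other score distributions can give different answers.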
0:04:49 | So, speaker linking. This term was actually used... we were inspired by George, who sees lots of inspiration in the question of what kinds of problems should be solved in speaker recognition, and the somewhat dismissive type of answer was that George can do these kinds of things; anyway, note that speaker linking is such a problem. |
0:05:26 | I am interested in a large set of speech segments. If I want to do diarization within a single show, I think that's fantastic, but I would actually like to do diarization over all the television shows of an entire year, or whatever: large-scale problems. |
0:05:49 | And again it's kind of a clustering; we want to link those speakers. I think the large-scale aspect is the problem, and that's what I want to show, or want to investigate. So it's a bit of an exploratory presentation. |
0:06:06 | Alright. Nick already said previously that this is related to all kinds of other things. Speaker clustering, of course: basically the same problem, but we're focusing now on large-scale problems. |
0:06:20 | Speaker partitioning is probably a much nicer way of doing things, but you need prior distributions over all partitionings, and it probably doesn't work at large scale. |
0:06:34 | It has relations with lots of other things. First of all diarization; in diarization you also need the segmentation, of course, and that happens, but you typically apply diarization within a single recording, and like I said, I'd like to make the links between recordings as well. |
0:06:56 | There is also a relation with the multi-conversation training conditions in the speaker recognition evaluation. There, first of all, you have diarization as an additional task, and you know that there is exactly one common speaker, a common link, between all the training segments that you have, so there is more prior information. |
0:07:26 | Also speaker tracking is related; there, I think, the problem is that you are given a model for a particular speaker and then you have to find that speaker in a large collection. |
0:07:40 | And finally it is of course related to clustering in general, with the difference that in many clustering problems it is not really clear what the classes are; if you look at topic clustering, what makes a topic a topic can be something else, whereas here, with speakers, we know the truth. |
0:07:59 | Well, a very quick overview of the types of clustering algorithms, taking clustering as a solution to this problem. There is the top-down way, and you might see the way we train our GMMs as a way of doing this: you start with a single cluster and keep splitting into clusters that are more similar within themselves. |
0:08:20 | Or you can do bottom-up, sorry, agglomerative clustering. This is typically what we do in diarization, although in diarization some also go from the top. There you start with individual segments and you try to cluster them together until you say: this is enough, I have found my classes now. |
0:08:42 | I'm sure that there are many more clustering algorithms that are actually better than these kinds, but for now I'll concentrate on the agglomerative clustering. |
0:08:56 | One of the things that bothers me about this clustering is that it doesn't scale well with time. If you take the simplest agglomerative clustering idea, then you start with all segments, you find the best matching pair and cluster them together, and then you do it again. |
0:09:18 | The total complexity would be of the order of the number of segments to the third power. And if you then want to get intermediate updates in some kind of online situation, say you recorded shows over a whole year and you get an extra show with extra speaker segments and you want to put them in, then again you have an extra order of complexity in total. |
0:09:45 | Of course you also get more computing time every day, so that might help when it is done in an incremental context. |
0:09:56 | What is the next thing... oh yeah. That is if you do agglomerative clustering offline, so you collect your data and then you say: I'm going to do that clustering in a very careful manner. |
0:10:08 | Opposed to that is online clustering, where you take one segment, then the next segment, and assign it either to an existing cluster or to a new cluster. That is a lot simpler, so the incremental complexity is now of the order of the number of clusters found so far. And for the divisive scheme I don't know exactly; I think it's similar. |
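A toy sketch of the two strategies contrasted here, assuming segments are already represented by fixed-length embeddings and using cosine scoring with a single merge threshold as placeholder choices. It is not the system used in the talk; it only illustrates why the naive offline variant is roughly cubic while the online variant costs on the order of the current number of clusters per new segment.

```python
import numpy as np

def cosine(a, b):
    # Simple similarity used as a stand-in for a real speaker comparison score.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def offline_agglomerative(X, threshold):
    """Naive bottom-up clustering: repeatedly merge the best-matching pair.
    With this simplistic implementation the total cost is roughly O(n^3)."""
    clusters = [[i] for i in range(len(X))]
    means = [X[i].copy() for i in range(len(X))]
    while len(clusters) > 1:
        best, best_score = None, -np.inf
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                s = cosine(means[a], means[b])
                if s > best_score:
                    best, best_score = (a, b), s
        if best_score < threshold:          # stopping criterion
            break
        a, b = best
        clusters[a] += clusters.pop(b)
        means[a] = X[clusters[a]].mean(axis=0)
        means.pop(b)
    return clusters

def online_linking(X, threshold):
    """Online variant: each new segment joins its best-matching existing
    cluster or starts a new one; incremental cost is O(#clusters)."""
    clusters, means = [], []
    for i, x in enumerate(X):
        scores = [cosine(x, m) for m in means]
        if scores and max(scores) >= threshold:
            j = int(np.argmax(scores))
            clusters[j].append(i)
            means[j] = X[clusters[j]].mean(axis=0)
        else:
            clusters.append([i])
            means.append(x.copy())
    return clusters
```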
0:10:30 | Some aspects of this clustering: you can decide either to retrain your models during the clustering process or not. There are some advantages if you do that, because you have more data and therefore better models, but errors might propagate as well. |
0:10:48 | Then there is another question: are you going to use the data in your speaker comparison metric to do some form of normalisation, for instance for the general acoustics that you are getting, or to normalise scores? |
0:11:05 | Or, maybe even better, you may want to train discriminatively; if you train your clusters discriminatively, believing that the cluster labels are correct, then you probably get much better speaker separation. |
0:11:15 | This is something we are not used to doing in speaker detection, for good reason, but I think if you are really serious about this clustering you might consider doing these things; I also think they are not trivial to do. |
0:11:31 | Another aspect would be: are we indeed going to make decisions? Are we going to make hard clusters, where we glue speaker segments together or not, or are we going to do it in some kind of soft way, which is more along the lines of the speaker partitioning with priors? |
0:11:47 | It might be better to do it the soft way. I think you can compare it to the way you attribute your data to the mixtures in a GMM: that is also done in a soft way, and that works better than if you do it the hard way. So this is something to consider as well. |
0:12:12 | Alright, another aspect of speaker clustering would be: how do I evaluate how well I'm doing? For speaker detection we have gone very far in defining good evaluation measures, a problem that we understand well. What are we going to do for clustering? |
0:12:29 | Usually people that do clustering have some form of single evaluation measure, and I don't know which ones are the best, but the ones that I like are the impurities, or purities; but we like to look at errors, so I'd go with the impurity. |
0:12:52 | Basically, if you have your cluster at the end of the clustering process, you want to know how homogeneous it is, and the simplest way of looking at this is: what is the most frequently occurring speaker, and what fraction does that make up; or, in impurity terms, how many segments from different speakers are there compared to the most frequently occurring one. |
0:13:13 | If you want to express this mathematically, then the way I had to define it looks rather complicated, but I couldn't get it any simpler. |
0:13:26 | The interesting thing is that this is the cluster purity that you see in the general clustering literature, but I think there is always the other side; in speaker detection we know there is always the other side. So cluster impurity is comparable to minimising false alarms, and you know there is always the downside, the misses. |
0:13:48 | So we should also define something like speaker impurity, which is the same definition but with respect to the reference speakers. You don't always see these things, but I think you should just compute both and see how they trade off in your final clustering. |
0:14:08 | The reason is that it is trivial to make a cluster impurity of zero, so "perfect" clustering, by just making a single cluster for every segment. So that is not enough; you need the other part as well. |
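The exact formula from the paper is not reproduced in the talk; the sketch below is a minimal "most frequent class" version of the two measures as described, taking a clustering as a list of clusters, each holding the true speaker labels of its segments.

```python
from collections import Counter

def cluster_impurity(clusters):
    """Fraction of segments that do not belong to their cluster's dominant speaker."""
    total = sum(len(c) for c in clusters)
    wrong = sum(len(c) - Counter(c).most_common(1)[0][1] for c in clusters)
    return wrong / total

def speaker_impurity(clusters):
    """Same idea from the speakers' side: fraction of each speaker's segments
    that fall outside that speaker's dominant cluster."""
    by_speaker = {}
    for k, c in enumerate(clusters):
        for spk in c:
            by_speaker.setdefault(spk, []).append(k)
    total = sum(len(v) for v in by_speaker.values())
    wrong = sum(len(v) - Counter(v).most_common(1)[0][1] for v in by_speaker.values())
    return wrong / total

# Toy example: speaker 'A' is split over two clusters, and cluster 0 is not homogeneous.
clusters = [['A', 'A', 'B'], ['A', 'C', 'C']]
print(cluster_impurity(clusters), speaker_impurity(clusters))   # 0.333..., 0.166...
```

Putting every segment in its own cluster drives `cluster_impurity` to zero while `speaker_impurity` becomes large, which is why both sides are needed.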
0:14:25 | There are also other measures, which are more probabilistic in nature. Rather than looking only at the most frequently occurring speaker in your cluster, you can look at the whole distribution, so you get some kind of entropy measure for your cluster. You can then average over all clusters, weighted by the number of segments in each cluster. |
0:14:48 | And again, it is not on this slide, but not only a cluster entropy can be defined; you can also define a speaker entropy, so again, look at both of these measures. |
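A sketch of that entropy-style variant, under the same data layout as above; the speaker-side entropy would be obtained by swapping the roles of clusters and reference speakers, as in `speaker_impurity`.

```python
import math
from collections import Counter

def cluster_entropy(clusters):
    """Size-weighted average entropy of the speaker distribution within each cluster."""
    total = sum(len(c) for c in clusters)
    h = 0.0
    for c in clusters:
        probs = [n / len(c) for n in Counter(c).values()]
        h += (len(c) / total) * -sum(p * math.log2(p) for p in probs)
    return h

print(cluster_entropy([['A', 'A', 'B'], ['A', 'C', 'C']]))   # about 0.918 bits for this toy example
```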
0:15:01 | Then we come to the experimental section. It's a small experiment, actually carried out a while ago; it seems pretty ancient in terms of speaker recognition development. It's about two years ago, and at that time we had a state-of-the-art system; we still have the same system, but it is not state of the art any more. |
0:15:25 | Anyway, it's a GMM-SVM system that performed pretty well on the two thousand six eval set, and the experiment was done while we were preparing for the two thousand eight evaluation; that's why we worked with that data, because at the time we didn't have the truth data of two thousand eight yet. |
0:15:50 | So, using the two thousand six data, I simply used all the test segments; there are some thirty-seven hundred test segments, male and female. You could say: well, in official speaker detection you should not have cross-gender trials, because those portions of the trials tend to be non-target trials, which is kind of unfair. |
0:16:11 | But here we're not really doing speaker detection, we're doing clustering, so if gender gives you some information about the clusters, maybe it's fair to use it. Moreover, our system at the time was completely gender independent; there was not a single conditioning on gender in there. |
0:16:31 | (Five minutes.) |
0:16:32 | So, two versions of agglomerative clustering: one online, taking one segment at a time and making decisions, and one offline, working on the whole collection. This was the result: you see speaker impurity versus cluster impurity for both types of agglomerative clustering. |
0:16:57 | Well, you can define something like an equal impurity point. I put the DET-style probit axes there, for people who can't live without that axis, and it actually works very well, although there is no reason why these curves should be straight, not one that I understand easily anyway. |
0:17:18 | But it works very well, and you see that these two different kinds of approaches, one of which is much simpler, the online version being much simpler than the offline version, perform more or less the same. |
0:17:30 | Another interesting thing: the thresholds that you put in the clustering algorithm for stopping are quite different for the two algorithms, so they are effectively looking at different things. |
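A sketch of how such an equal impurity point could be read off a threshold sweep, by linear interpolation at the crossing of the two impurity curves. This is an illustrative helper, not the procedure used for the plots in the talk, and it assumes the two curves cross exactly once over the sweep.

```python
import numpy as np

def equal_impurity(cluster_imp, speaker_imp):
    """cluster_imp, speaker_imp: impurity values over the same stopping-threshold sweep."""
    c = np.asarray(cluster_imp, dtype=float)
    s = np.asarray(speaker_imp, dtype=float)
    d = c - s
    i = int(np.argmax(np.signbit(d[:-1]) != np.signbit(d[1:])))  # first sign change
    w = d[i] / (d[i] - d[i + 1])          # interpolation weight at the crossing
    return float(c[i] + w * (c[i + 1] - c[i]))
```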
0:17:49 | Whether it works even... And this is the last subject, and that's the scalability of this whole process, because I mean to do this with large numbers. We were mentioning thousands of segments, and for two thousand six I did not have more; at least I didn't take more than those segments. |
0:18:14 | So here I am looking at what the equal impurity is as a function of the number of segments, on a log scale, and you see what some people would call graceful degradation; I think that's a fantastic word and I've learned to use it. |
0:18:36 | It works out as a single curve. The number of speakers has to do with the way these segments are chosen in the NIST evaluations, because you can also express it as the number of speakers, and then, on a linear axis, you get actually exactly the same graph. |
0:18:56 | So there seems to be this relation between the number of segments and the number of speakers if you just randomly leave out segments in order to reduce the problem, which is what I did: going from the full problem here down, I just randomly left out segments. |
0:19:13 | So again you see the same kind of graceful degradation, but for this particular speaker recognition system there is some number here where we would have an equal impurity of fifty percent, if this trend continues, and we shouldn't go beyond that. |
0:19:29 | So I think that if you define the problem of speaker clustering or speaker linking, you have a problem with scalability, in terms of the number of speakers or the number of segments or whatever you want to look at. From that perspective I think it's an interesting problem. Why it gets harder has to do, I suppose, with the fact that identification gets harder with more classes. |
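The extrapolation suggested here can be sketched as a simple fit of equal impurity against the logarithm of the number of segments, solved for the collection size at which the fit reaches fifty percent. The numbers below are made-up placeholders, not results from the talk.

```python
import numpy as np

n_segments = np.array([100, 300, 1000, 3700])       # hypothetical sweep sizes
equal_imp  = np.array([0.05, 0.08, 0.12, 0.16])     # hypothetical equal impurities

# Linear fit of impurity versus log(collection size), then solve for 50% impurity.
slope, intercept = np.polyfit(np.log(n_segments), equal_imp, 1)
n_at_50 = np.exp((0.5 - intercept) / slope)
print(f"extrapolated breakdown size: ~{n_at_50:.0f} segments")
```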
0:19:57 | Okay. That said, there is actually, in other fields, something called the CMC as an analysis tool. I don't remember exactly what it is, but it measures something like how well your target object is ranked in the list of closest segments returned. |
0:20:25 | Rather than looking at identification at rank one, you look at how it goes with rank two and so on, you might say. And that has been analysed in other literature already. |
0:20:44 | Good. And of course the really nice thing would be that once we have defined our evaluation measure, a good test, a proper test that we understand and that works at all scales, you can then look at different algorithms, because the algorithm I use here is pretty trivial, and I'm sure that you can use global algorithms that consider everything at the same time and perform much better. |
0:21:15 | And of course there is also the question... I didn't say it, but I started with the score matrix: I just scored everything against everything, which is pretty normal; if I think about it, now everybody scores everything against everything, but at the time it was kind of... well, a nice try at least. |
0:21:34 | But can we do better than that? Can we use the other speech segments that we've already seen, or that we are receiving, in general, for either normalisation or discriminative training? That's another question, and that's where I'd like to still invest some time. |
0:21:58 | I've run over time, is that... great. |
0:22:02 | You have five minutes, you're fine. |
0:22:07 | Well, I don't have any slides any more, but that was kind of a nice way to end. |
0:22:17 | Thanks, David. We have time for some comments or questions. |
0:22:27 | Yes, um... it seems quite upsetting, as you mentioned: it seems like things start to break down when the problems get too large. And in your conclusion you conjecture that maybe you could do some kind of rescoring? |
0:22:52 | Yes. |
0:22:55 | I think I have exactly the same problem. In my method, for example, and also in the variational Bayes method which Patrick suggested should be tried at the following workshop, you are effectively looking at the same kind of thing, but in an unsupervised way. |
0:23:24 | Yes, so we'll have to look at that and see if we can... |
0:23:37 | ...or whatever; I'm not sure about both. |
0:23:54 | Yeah, I've got two questions. One: you defined speaker linking; could you clarify how it is different from speaker clustering? |
0:24:07 | In the end, apart from the scale thing, it's the same problem. The way I see it, linking is more like a task and clustering is more like a way of doing it; I think they are otherwise identical. |
0:24:21 | The reason why we call it linking is because we're also busy with large-scale diarization, and there you have two steps: first, within your meeting or within your broadcast, you do the segmentation and clustering kind of things, diarization, and then you try to link the different clusters between meetings or between broadcasts. |
0:24:46 | In order to separate those things we call that linking rather than clustering, because otherwise we'd have clustering here and clustering there. Maybe there is a little less uncertainty on the speech segments for one speaker. |
0:25:00 | But the second question: I'm still puzzled by the online system. To me it looks like a top-down clustering, or... do you think the online one is worse, and possibly under which conditions? |
0:25:30 | This online clustering... I'm not sure whether it's agglomerative. Well, it is in a sense: one segment at a time, you try to fit it somewhere in your clusters. Maybe, more formally, it is not. |
0:25:45 | Yeah. |
0:25:50 | So... speaker linking, partitioning, things that are even more involved, some of this speaker clustering and language clustering happening in there, and papers looking at it. The point we ran into at one time with this, just about the clustering task, is that in general for these kinds of tasks people try to come up with measures of performance, for clustering, for diarization, you know, these horribly complicated measures and all that. |
0:26:29 | I think the thing to keep in mind is: somebody measures it, but it's hard to relate it to what you're actually doing. Take the diarization error rate; we use that, but lowering the diarization error rate a little, or diarization or clustering, all these other things, I view them as being... all right, it's not that they're not worth working on, but it's not really that we're going to get a single measure of performance such that when we optimise it... it always opens the question: now what do I do with it? |
0:27:02 | So diarization has this problem. Look at speech recognition: maybe they want to do diarization or clustering on audio for adapting the speech recogniser; what they want for doing adaptation is nothing like what we would call really good diarization. |
0:27:19 | So let's see; I think we're going to go into these things over the next three days and then come back to diarization, the partitioning, the linking, and talk about it. But at one point we're going to have to be a little careful about creating these numbers and saying: oh, I got a number X better than Y, because people are going to say: well, what does that mean, what does it help you do? |
0:27:44 | It's not clear to me at which point we start actually linking them to something we are doing at the end. |
0:27:53 | So, for the experiment on linking, then: what is the application behind it? |
0:28:03 | Yeah, I agree; this is just an application, and I think the focus here was more to look at what happens if things scale up. The nice thing about speaker detection is that you don't have to worry about that: as things go up you just get better estimates of how well you're doing, but in theory the performance and the cost function, whatever it is, should be more or less the same; you see a stable thing there. |
0:28:37 | Here this is not the case, it keeps shifting, so you might say: okay, you're doing the wrong thing. On the other hand, trying to link speakers at this scale might be a useful thing in itself. |
0:28:49 | So there's an example in this case, right? |
0:28:55 | Okay, so you get these clusters out: what does someone do with it? |
0:29:02 | Yeah... do you put it in the same boat, I mean, giving it a name and saying... this opens up, in general, partitioning and the other things that are going on, such as: you get these things, in some sense you get that stuff, but what am I doing with it? I get a cluster, a thousand clusters, and on average, here it is, some percentage. But if someone says: I know what I want to do, I'm searching for somebody in my data... |
0:29:36 | So, for example, when we went into this in the evaluations, we started a trend where some of the things people were working on were diarization, although lately we have pulled away from that: instead of doing diarization, the task is detection. We're going to give you the data, and we want to see how it works in that context: does doing diarization help you purify your data to enroll and test models? We want to see how well you do by linking these two together, and trying to see whether that correlates with diarization error rate in that detection task; it did, but only very loosely, it seems. |
0:30:13 | That's one thing. I think in general when people put up this task, or another one, talking about linking, you could say: if I drop from twenty percent to ten percent, did I get twice as good in my application? Am I measuring in centimetres when things are miles apart? I just don't have that. Well, I guess, as error rates go down you do better, but where is good enough? |
0:30:45 | A quick comment on this: it is very close to a task that was proposed in the past, by him and myself; it was exactly the same task. To explain the interest of such a task, the use case we should focus on is, say, raw TV broadcasts over a few days: you have speaker diarization for each recording, and you want afterwards to combine those, you know. |
0:31:18 | And this is a real use case, if you will, with a national media organisation: they don't want to recompute everything, to come back to it each time they need to do an indexing task; the indexing, according to me, would be different each time. |
0:31:36 | And the second constraint is that you have to implement the indexing, implement the information, incrementally: you can't, each time you receive a new file, come back and redo all the computing. |
0:31:52 | In this case you have a strong difference between diarization, run on one file once or a few times, and speaker linking, where you could have hundreds of thousands of hours of video. |
0:32:08 | A last comment: you could... okay. |
0:32:17 | Okay, I'm also just commenting on something here, in this sense: on why we brought up this kind of problem, and also about the value I see in it. I didn't discuss this in my presentation, but in the paper I show how you can do all the usual tasks this way: the partitioning problem, multi-segment training, unsupervised adaptation, and so on. |
0:32:57 | So by generalising we can learn more about all the normal tasks that we're doing. So if you can solve this problem, you can solve everything else. |
0:33:11 | Sorry? |
0:33:18 | Yes, if your segmentation is given. It's not just... The last thing is the evaluation metric: you could use it for a practical purpose, namely to numerically optimise, discriminatively. So this is probably what I'm going to be doing in the next three weeks at the workshop, and the measures here are probably what I am going to use. |
0:33:50 | Okay, so... time is limiting us there; thanks to both of you. |