0:00:15 | Hello everyone, we are from the University of East... and today we are here to present our work on using deep learning and its applications.
0:00:24 | So, simply put, we try to replace handcrafted features with a direct approach using our deep learning models.
0:00:32 | So what did we use deep learning for first?
0:00:35 | Given a feature representation, we try to understand the effect of convolution on the signal by reconstructing the signal from the network's output; the reconstruction shows a kind of filtering effect, and by comparing it with the original we can see how the signal changes.
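As a rough illustration of that idea (not from the talk itself; the tone, filter, and signal length below are assumptions), one can convolve a known signal with a filter and compare the result with the original:

```python
# Illustrative sketch only: compare a signal with its convolved version to see
# what the filter does. The tone frequency and kernel are arbitrary choices.
import numpy as np

t = np.linspace(0, 1, 16000)                      # 1 s of "audio" at 16 kHz
signal = np.sin(2 * np.pi * 440 * t)              # a 440 Hz tone
kernel = np.ones(25) / 25                         # a simple smoothing (low-pass) filter
filtered = np.convolve(signal, kernel, mode="same")
print(np.abs(signal - filtered).max())            # how much the filter changed the signal
```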
0:00:58 | And this also has applications in speech recognition.
0:01:07 | And next, with deep learning bottleneck features, we improved the performance of the language identification system a lot.
0:01:16 | So this is the architecture of our system.
0:01:20 | It is an end-to-end approach: from the audio file we extract the MFCC or filter bank features, and feeding them to the network we get the probabilities for each language.
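As a minimal sketch of such an end-to-end pipeline (not the presenters' actual system; the layer sizes, pooling strategy, and number of languages are assumptions), a network mapping filter-bank frames to per-language probabilities could look like this:

```python
# Minimal sketch (assumed architecture): maps a sequence of MFCC/filter-bank
# frames to per-language probabilities for language identification.
import torch
import torch.nn as nn

class LanguageIdNet(nn.Module):
    def __init__(self, n_features=40, n_languages=6):
        super().__init__()
        # Frame-level encoder: 1-D convolutions over time.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_features, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Utterance-level classifier after averaging over time.
        self.classifier = nn.Linear(128, n_languages)

    def forward(self, feats):            # feats: (batch, n_features, n_frames)
        h = self.encoder(feats)          # (batch, 128, n_frames)
        h = h.mean(dim=2)                # temporal average pooling
        return self.classifier(h)        # logits; softmax gives probabilities

if __name__ == "__main__":
    model = LanguageIdNet()
    fbank = torch.randn(2, 40, 300)      # 2 utterances, 40 filter banks, 300 frames
    probs = torch.softmax(model(fbank), dim=1)
    print(probs.shape)                   # torch.Size([2, 6])
```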
0:01:31 | So, for training this deep network, we needed to address overfitting, how to benefit from multiple architecture designs, and how to train the network efficiently using early stopping, regularization, and optimization techniques.
0:01:47 | We also needed to address the computational issues that come with deep networks.
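A minimal sketch of two of those training-side points, assuming a PyTorch setup with placeholder data (not the presenters' code): weight decay as regularization, and early stopping on validation loss.

```python
# Sketch with toy data: regularization via weight decay, plus early stopping.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 6))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()

best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(100):
    # --- one training step on a toy batch (placeholder data) ---
    x, y = torch.randn(32, 40), torch.randint(0, 6, (32,))
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
    # --- early stopping on a toy validation batch ---
    with torch.no_grad():
        val_loss = loss_fn(model(torch.randn(32, 40)),
                           torch.randint(0, 6, (32,))).item()
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break  # stop once validation loss has not improved for a while
```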
0:01:51 | And these are the results: we get an improvement compared to the single system, and we push that improvement further by using more advanced techniques such as batch normalization and dropout.
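For reference, a hedged sketch of where batch normalization and dropout typically sit in such a network (the layer width and dropout rate are assumptions, not values from the talk):

```python
# Sketch only: a dense block with batch normalization and dropout.
import torch.nn as nn

block = nn.Sequential(
    nn.Linear(128, 128),
    nn.BatchNorm1d(128),   # batch normalization stabilizes training
    nn.ReLU(),
    nn.Dropout(p=0.3),     # dropout regularizes against overfitting
)
```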
0:02:05 | And furthermore, we saw that the imbalanced dataset had a negative influence on the model, so we modified the corpus by adjusting the sampling (oversampling) and applied score calibration.
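One common way to implement the oversampling mentioned here, shown as a sketch with toy labels (the exact rebalancing and calibration used in the talk are not specified):

```python
# Sketch (assumed details): oversample minority languages so that batches
# drawn by a DataLoader are roughly class-balanced.
from collections import Counter
from torch.utils.data import WeightedRandomSampler

labels = [0, 0, 0, 0, 1, 1, 2]                 # toy language labels; class 0 dominates
counts = Counter(labels)
weights = [1.0 / counts[y] for y in labels]    # rarer classes get larger weights
sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
# Pass `sampler=sampler` to a DataLoader so minority languages are drawn more often.
```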
0:02:19 | We get closer to the bottleneck features, but there is still plenty of room for improvement.