Spoken Term Detection and Relevance Score Estimation using Dot-Product of Pronunciation Embeddings
(Oral presentation)
Jan Švec, Luboš Šmídl, Josef V. Psutka, Aleš Pražák (University of West Bohemia, Czech Republic)
The paper describes a novel approach to Spoken Term Detection (STD) in large spoken archives using deep LSTM networks. The work builds on the previous approach of using Siamese neural networks for STD and naturally extends it to directly localize a spoken term and estimate its relevance score. The phoneme confusion network generated by a phoneme recognizer is processed by a deep LSTM network which projects each segment of the confusion network into an embedding space. The searched term is projected into the same embedding space using another deep LSTM network. The relevance score is then computed using a simple dot-product in the embedding space and calibrated using a sigmoid function to predict the probability of occurrence. The location of the searched term is then estimated from the sequence of output probabilities. The deep LSTM networks are trained in a self-supervised manner from paired recognition hypotheses on the word and phoneme levels. The method is experimentally evaluated on MALACH data in the English and Czech languages.
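The dot-product scoring idea can be illustrated with a minimal sketch. The code below is an assumption-laden toy in PyTorch, not the authors' implementation: the module names, layer sizes, input features, and the thresholding heuristic for localization are all illustrative.

```python
# Minimal sketch of dot-product relevance scoring between a confusion-network
# encoder and a query (searched term) encoder. All dimensions and names are
# hypothetical; the paper's actual architecture and training are not reproduced.
import torch
import torch.nn as nn


class SegmentEncoder(nn.Module):
    """Projects each confusion-network segment into the embedding space."""

    def __init__(self, input_dim: int, embed_dim: int, hidden: int = 256):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, embed_dim)

    def forward(self, x):                      # x: (batch, segments, input_dim)
        out, _ = self.lstm(x)
        return self.proj(out)                  # (batch, segments, embed_dim)


class QueryEncoder(nn.Module):
    """Projects the searched term (a phoneme sequence) into the same space."""

    def __init__(self, num_phonemes: int, embed_dim: int, hidden: int = 256):
        super().__init__()
        self.emb = nn.Embedding(num_phonemes, hidden)
        self.lstm = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        self.proj = nn.Linear(hidden, embed_dim)

    def forward(self, phonemes):               # phonemes: (batch, length)
        out, _ = self.lstm(self.emb(phonemes))
        return self.proj(out[:, -1, :])        # last state -> (batch, embed_dim)


def relevance(doc_emb, query_emb):
    """Per-segment dot-product score, calibrated with a sigmoid."""
    scores = torch.einsum("bse,be->bs", doc_emb, query_emb)
    return torch.sigmoid(scores)               # per-segment occurrence probability


if __name__ == "__main__":
    seg_enc = SegmentEncoder(input_dim=40, embed_dim=128)
    qry_enc = QueryEncoder(num_phonemes=45, embed_dim=128)
    doc = torch.randn(1, 200, 40)              # toy confusion-network features
    qry = torch.randint(0, 45, (1, 7))         # toy phoneme sequence of the term
    probs = relevance(seg_enc(doc), qry_enc(qry))
    hits = (probs > 0.5).nonzero()             # naive localization by thresholding
    print(probs.shape, hits.shape)
```

In this sketch the localization step is reduced to thresholding the per-segment probabilities; the paper estimates the term location from the sequence of output probabilities, and the encoders are trained in a self-supervised fashion from paired word- and phoneme-level hypotheses rather than with random toy inputs.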