Predicting Tasks in Goal-Oriented Spoken Dialog Systems using Semantic Knowledge Bases
Aasish Pappu, Alexander Rudnicky
Goal-oriented dialog agents are expected to recognize user intentions from an utterance and execute the appropriate tasks. Typically, such systems use a semantic parser to solve this problem. However, semantic parsers can fail if user utterances contain out-of-grammar words or phrases, or if the semantics of the uttered phrases do not match the parser's expectations. In this work, we explore a more robust method of task prediction. We define task prediction as a classification problem rather than "parsing" and use semantic contexts to improve classification accuracy. Our classifier uses semantic smoothing kernels that can encode information from knowledge bases such as WordNet, NELL, and Freebase. Our experiments on two spoken language corpora show that augmenting semantic information from these knowledge bases gives about 30% absolute improvement in task prediction over a parser-based method. Our approach thus makes a dialog agent more robust to user input and helps reduce the number of turns required to detect the intended task.
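The kernel idea can be illustrated with a minimal sketch (not the authors' implementation): term-to-term similarities drawn from a knowledge base are folded into a proximity matrix, so that two utterances with no surface words in common can still score as similar. The function names, the Wu-Palmer similarity choice, and the use of NLTK's WordNet interface are assumptions made here for illustration only.

```python
# Sketch of a semantic smoothing kernel over bag-of-words utterance vectors.
# Term similarities come from WordNet here; the paper also draws on NELL and
# Freebase. Requires: pip install numpy nltk; then nltk.download('wordnet').

import numpy as np
from nltk.corpus import wordnet as wn

def term_similarity(w1, w2):
    """Wu-Palmer similarity between the first synsets of two words (0 if none)."""
    s1, s2 = wn.synsets(w1), wn.synsets(w2)
    if not s1 or not s2:
        return 0.0
    sim = s1[0].wup_similarity(s2[0])
    return sim if sim is not None else 0.0

def smoothing_matrix(vocab):
    """Symmetric term-proximity matrix S over the vocabulary."""
    n = len(vocab)
    S = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            S[i, j] = S[j, i] = term_similarity(vocab[i], vocab[j])
    return S

def semantic_kernel(x, y, S):
    """Smoothed kernel K(x, y) = (Sx) . (Sy), i.e. x^T S^T S y (positive semi-definite)."""
    return float((S @ x) @ (S @ y))

# Toy usage: two utterances sharing no surface words, only related terms.
vocab = ["book", "reserve", "flight", "plane", "weather"]
x = np.array([1, 0, 1, 0, 0])   # "book ... flight"
y = np.array([0, 1, 0, 1, 0])   # "reserve ... plane"
S = smoothing_matrix(vocab)
print(semantic_kernel(x, y, S))  # > 0 even though the plain dot product x . y is 0
```

Such a kernel can then be plugged into any kernel-based classifier (e.g., an SVM) in place of the plain bag-of-words dot product; the choice of classifier here is an assumption for the sketch, not a statement of the paper's exact setup.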