Rethinking End-to-End Evaluation of Decomposable Tasks: A Case Study on Spoken Language Understanding
Siddhant Arora, Alissa Ostapenko, Vijay Viswanathan, Siddharth Dalmia, Florian Metze, Shinji Watanabe, Alan W. Black (Carnegie Mellon University, USA)
Decomposable tasks are complex and comprise a hierarchy of sub-tasks. Spoken intent prediction, for example, combines automatic speech recognition and natural language understanding. Existing benchmarks, however, typically hold out examples for only the surface-level sub-task. As a result, models with similar performance on these benchmarks may have unobserved performance differences on the other sub-tasks. To allow insightful comparisons between competitive end-to-end architectures, we propose a framework to construct robust test sets using coordinate ascent over sub-task specific utility functions. Given a dataset for a decomposable task, our method optimally creates a test set for each sub-task to individually assess sub-components of the end-to-end model. Using spoken language understanding as a case study, we generate new splits for the Fluent Speech Commands and Snips SmartLights datasets. Each split has two test sets: one with held-out utterances assessing natural language understanding abilities, and one with held-out speakers to test speech processing skills. Our splits identify performance gaps of up to 10% between end-to-end systems that were within 1% of each other on the original test sets. These performance gaps allow more realistic and actionable comparisons between different architectures, driving future model development. We release our splits and tools for the community.
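To make the split-construction idea concrete, here is a minimal, hypothetical Python sketch of a coordinate-ascent style search over a held-out split: each speaker's (or utterance's) train/test assignment is treated as a coordinate, and single-group swaps are accepted whenever they improve a sub-task-specific utility. The function and field names (`make_heldout_split`, `label_coverage_utility`, `speaker`, `label`) and the toy utility are assumptions for illustration only, not the paper's actual utility functions or released tooling.

```python
import random
from collections import Counter


def make_heldout_split(examples, group_key, utility, n_iters=1000, test_frac=0.2, seed=0):
    """Coordinate-ascent style search for a held-out split.

    examples  : list of dicts, each with a `group_key` field (e.g. 'speaker'
                for a held-out-speaker split) and a 'label' field.
    group_key : attribute that must not overlap between train and test.
    utility   : function(train, test) -> float; higher is better.
    """
    rng = random.Random(seed)
    groups = sorted({ex[group_key] for ex in examples})
    rng.shuffle(groups)
    n_test = max(1, int(test_frac * len(groups)))
    test_groups = set(groups[:n_test])

    def split(held_out):
        train = [ex for ex in examples if ex[group_key] not in held_out]
        test = [ex for ex in examples if ex[group_key] in held_out]
        return train, test

    best = utility(*split(test_groups))
    for _ in range(n_iters):
        pool = [g for g in groups if g not in test_groups]
        if not pool:
            break
        # Coordinate step: swap one group between the train and test sides.
        g_in = rng.choice(sorted(test_groups))
        g_out = rng.choice(pool)
        candidate = (test_groups - {g_in}) | {g_out}
        score = utility(*split(candidate))
        if score > best:
            test_groups, best = candidate, score
    return split(test_groups)


def label_coverage_utility(train, test):
    """Toy utility: reward test sets that cover the training intent labels
    while keeping the test fraction near 20%."""
    train_labels = Counter(ex["label"] for ex in train)
    test_labels = Counter(ex["label"] for ex in test)
    covered = len(set(test_labels) & set(train_labels))
    size_penalty = abs(len(test) / max(len(train) + len(test), 1) - 0.2)
    return covered - size_penalty
```

Usage would look like `train, test = make_heldout_split(data, group_key="speaker", utility=label_coverage_utility)` for a held-out-speaker split, or `group_key="utterance_text"` for a held-out-utterance split, with the utility function swapped out per sub-task.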