Towards the prediction of the vocal tract shape from the sequence of phonemes to be articulated
(Oral presentation)
Vinicius Ribeiro (Loria (UMR 7503), France), Karyna Isaieva (IADI (Inserm U1254), France), Justine Leclere (IADI (Inserm U1254), France), Pierre-André Vuissoz (IADI (Inserm U1254), France), Yves Laprie (Loria (UMR 7503), France)
In this work, we address the prediction of the temporal geometric positions of the speech articulators from the sequence of phonemes to be articulated. We start from a set of real-time MRI sequences recorded from a female French speaker. The contours of five articulators were tracked automatically in each frame of the MRI videos. We then explore the capacity of a bidirectional GRU to predict each articulator's shape and position given the sequence of phonemes and their durations. We propose a 5-fold cross-validation experiment to evaluate the generalization capacity of the model. In a second experiment, we evaluate the model's data efficiency by reducing the amount of training data. We report the point-to-point Euclidean distance and the Pearson correlation over time between the predicted and target shapes, and we additionally evaluate the predicted shapes of the critical articulators for specific phonemes. We show that our model achieves good results with minimal data, producing very realistic vocal tract shapes.
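The sketch below illustrates the kind of model and metrics the abstract describes; it is not the authors' implementation. It assumes a frame-level input (one phoneme label per MRI frame, so durations are encoded by how many frames a label spans), a bidirectional GRU, a linear output head predicting the (x, y) contour points of five articulators, and illustrative sizes (phoneme vocabulary, points per contour, hidden dimensions) chosen only for the example.

```python
# Minimal sketch, assuming PyTorch and illustrative dimensions (not the authors' code).
import torch
import torch.nn as nn


class PhonemeToArticulators(nn.Module):
    def __init__(self, n_phonemes=40, emb_dim=64, hidden_dim=128,
                 n_articulators=5, pts_per_contour=50):
        super().__init__()
        self.embedding = nn.Embedding(n_phonemes, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        # One (x, y) pair per contour point, for every articulator.
        self.head = nn.Linear(2 * hidden_dim, n_articulators * pts_per_contour * 2)
        self.n_articulators = n_articulators
        self.pts_per_contour = pts_per_contour

    def forward(self, phoneme_ids):
        # phoneme_ids: (batch, n_frames) -- one phoneme label per MRI frame,
        # so phoneme durations are represented by the number of repeated labels.
        x = self.embedding(phoneme_ids)           # (batch, n_frames, emb_dim)
        h, _ = self.gru(x)                        # (batch, n_frames, 2 * hidden_dim)
        out = self.head(h)                        # (batch, n_frames, n_art * pts * 2)
        return out.view(x.size(0), x.size(1), self.n_articulators,
                        self.pts_per_contour, 2)


def point_to_point_distance(pred, target):
    """Mean Euclidean distance between corresponding predicted and target points."""
    return torch.linalg.norm(pred - target, dim=-1).mean()


def pearson_correlation_over_time(pred, target, eps=1e-8):
    """Pearson correlation along the time axis, averaged over all coordinates."""
    pred = pred.flatten(start_dim=2)              # (batch, n_frames, n_coords)
    target = target.flatten(start_dim=2)
    pred = pred - pred.mean(dim=1, keepdim=True)
    target = target - target.mean(dim=1, keepdim=True)
    num = (pred * target).sum(dim=1)
    den = pred.norm(dim=1) * target.norm(dim=1) + eps
    return (num / den).mean()


if __name__ == "__main__":
    model = PhonemeToArticulators()
    frames = torch.randint(0, 40, (2, 100))       # 2 utterances, 100 MRI frames each
    pred = model(frames)
    target = torch.randn_like(pred)               # placeholder target contours
    print(point_to_point_distance(pred, target).item())
    print(pearson_correlation_over_time(pred, target).item())
```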