Thomas Pellegrini,



The last decade has seen growing interest in developing speech and language technologies for a wider range of languages. State-of-the-art speech recognizers are typically trained on huge amounts of data, both transcribed speech and text. My thesis work focused on speech recognition for languages for which only small amounts of data are available: the “less-represented languages”. These languages often suffer from poor representation on the Web, which is the main source for data collection. Very high out-of-vocabulary (OOV) rates and poorly estimated language models are common for these languages. In this presentation, I will briefly describe the difficulties of building new ASR systems with little data. I will then present our attempt to improve performance by using sub-word units in the recognition lexicon. We enhanced a data-driven word decompounding algorithm to address the increased phonetic confusability that arises from word decompounding. Experiments carried out on two distinct languages, Amharic and Turkish, achieved small but significant improvements, around 5% relative in word error rate, with 30% to 50% relative OOV reductions. The algorithm is relatively language-independent and requires minimal adaptation to be applied to other languages.
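To give a flavor of the idea, the following is a minimal, hypothetical sketch of data-driven word decompounding: an out-of-vocabulary word is greedily segmented into sub-word units already present in the lexicon, which is what reduces the OOV rate. This is an illustration only, not the algorithm from the thesis; in particular, it omits the enhancement discussed in the talk that filters decompositions likely to increase phonetic confusability.

```python
def decompound(word, vocab, min_part_len=2):
    """Greedily split an OOV word into known sub-word units.

    Tries the longest in-vocabulary prefix first, then recurses on the
    remainder. Returns the list of parts, or [word] unchanged when no
    full segmentation into known units exists.
    """
    if word in vocab:
        return [word]
    # Try cut points from longest prefix to shortest allowed one.
    for cut in range(len(word) - min_part_len, min_part_len - 1, -1):
        prefix, rest = word[:cut], word[cut:]
        if prefix in vocab:
            tail = decompound(rest, vocab, min_part_len)
            # Accept only if the remainder also segments fully.
            if all(part in vocab for part in tail):
                return [prefix] + tail
    return [word]
```

For an agglutinative language such as Turkish, a lexicon containing the stem and suffix units lets the OOV word "evlerim" ("my houses") be recovered as known parts, e.g. `decompound("evlerim", {"ev", "ler", "im", "evler"})` yields `["evler", "im"]`, while a word with no known parts is left intact.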


Date: 2008-Jun-18     Time: 14:00:00     Room: 336

For more information: