David Sündermann

Universitat Politècnica de Catalunya


For applications like multi-user speech-to-speech translation, it is helpful to individualize the output voice so that the speakers remain distinguishable. Ideally, this is done by applying the input speaker’s voice characteristics to the output speech.

In general, a speech-to-speech translation system consists of three main modules: speech recognition, text translation, and speech synthesis.

Since the latter, the speech synthesis module, is normally based on a large, manually corrected, and carefully tuned speech corpus of a professional speaker, the output voice characteristics are static. This limitation is overcome by a fourth module, the voice conversion unit, which processes the synthesizer’s speech according to the input voice characteristics.

Due to the nature of speech-to-speech translation, the input and output voices are in different languages, which leads to the following two challenges:

(i) As opposed to state-of-the-art voice conversion, whose statistical parameter training is based on parallel utterances of both involved speakers (the text-dependent approach), here we have to rely on text-independent parameter training: there is no way to produce parallel utterances in different languages;
(ii) Most voice conversion techniques estimate conversion functions that depend on the phonetic class, either explicitly (e.g. using CART) or implicitly (e.g. using GMM). However, when different languages are involved, we face different phoneme sets, which makes it hard to estimate conversion functions for phonetic units that are not covered by the other language’s phoneme set.
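To make the implicit phonetic-class dependence of challenge (ii) concrete, the following sketch shows a minimal GMM-style spectral mapping: each mixture component acts as a soft phonetic class, and a source feature vector is shifted by a posterior-weighted combination of per-component mean differences. All parameter names are illustrative; a real system would train full regression parameters on aligned source/target features, which is exactly what parallel data provides and cross-language data does not.

```python
import numpy as np

def gmm_convert(x, weights, mu_x, mu_y, var_x):
    """Illustrative GMM-style voice conversion of one feature vector.

    x       : (D,)   source spectral feature vector
    weights : (M,)   mixture weights (soft phonetic classes)
    mu_x    : (M, D) component means in the source space
    mu_y    : (M, D) component means in the target space
    var_x   : (M, D) diagonal variances in the source space
    """
    # Per-component diagonal-Gaussian log-likelihoods of x
    log_p = (np.log(weights)
             - 0.5 * np.sum(np.log(2 * np.pi * var_x)
                            + (x - mu_x) ** 2 / var_x, axis=1))
    # Normalize to posteriors p(m | x) in a numerically stable way
    post = np.exp(log_p - log_p.max())
    post /= post.sum()
    # Minimal mean-shift mapping: move x by the posterior-weighted
    # difference between target and source component means
    return x + post @ (mu_y - mu_x)
```

If a phonetic unit of the source language never falls near any component trained on the target language, its posteriors (and hence its mapping) are unreliable, which is the cross-language problem described above.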

In this talk, I present text-independent, cross-language-portable voice conversion techniques that aim at solving these challenges. In this context, I will

(i) introduce a speech alignment technique based on unit selection that deals with non-parallel speech,
and (ii) show that vocal tract length normalization, which is applied to convert the source voice towards the target, can be applied directly to the time frames without the detour through the frequency domain.
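For reference, the conventional frequency-domain formulation of vocal tract length normalization that point (ii) contrasts with can be sketched as follows: transform a frame to the frequency domain, warp the frequency axis by a factor alpha, and transform back. The simple linear warp below is an assumption for illustration; piecewise-linear and bilinear warping functions are also common, and the talk’s contribution is that an equivalent warp can be applied to the time frames directly, skipping this detour.

```python
import numpy as np

def vtln_frame(frame, alpha):
    """Frequency-domain VTLN on a single speech frame (sketch).

    frame : 1-D array of time-domain samples
    alpha : warping factor (alpha > 1 stretches the spectrum,
            alpha < 1 compresses it)
    """
    spec = np.fft.rfft(frame)
    n = len(spec)
    bins = np.arange(n)
    # Resample the spectrum at the warped positions bins / alpha,
    # interpolating real and imaginary parts separately
    src = np.clip(bins / alpha, 0, n - 1)
    warped = (np.interp(src, bins, spec.real)
              + 1j * np.interp(src, bins, spec.imag))
    # Back to the time domain at the original frame length
    return np.fft.irfft(warped, n=len(frame))
```

With alpha = 1 the warp is the identity and the frame is returned unchanged (up to numerical precision), which is a convenient sanity check for any warping implementation.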

The techniques’ performance is assessed on several multilingual corpora by means of subjective evaluations. In addition to the evaluation results, speech samples will be used to illustrate the effectiveness of the discussed techniques.


Date: 2006-Nov-17     Time: 15:30:00     Room: 336
