| Name | Description | Size | Format |
|---|---|---|---|
| | | 273.82 KB | Adobe PDF |
Abstract
Voice conversion techniques aim to modify a subject's voice characteristics in order to mimic those of another person. Because source and target utterances differ in length, state-of-the-art voice conversion systems often rely on a frame-alignment pre-processing step. This step aligns entire utterances with algorithms such as dynamic time warping (DTW), which introduce errors that hinder system performance. In this paper we present a new technique that avoids aligning entire utterances at the frame level while preserving local context during training. For this purpose, we combine an RNN model with phoneme- or syllable-level information obtained from a speech recognition system. This system splits the utterances into segments, which can then be grouped into overlapping windows, providing the context the model needs to learn temporal dependencies. We show that this approach attains notable improvements over a state-of-the-art RNN voice conversion system on the CMU ARCTIC database. It is also worth noting that with this technique it is possible to halve the training data size and still outperform the baseline.
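The abstract describes grouping the recogniser's phoneme- or syllable-level segments into overlapping windows so the RNN sees local temporal context without whole-utterance DTW alignment. A minimal sketch of that windowing step, assuming segment boundaries are already available from a speech recognition system (the function name and the window/step parameters are illustrative, not from the paper):

```python
def overlapping_windows(segments, window_size=3, step=1):
    """Group consecutive segments into overlapping windows.

    segments: list of per-segment feature sequences (one entry per
        phoneme or syllable, as produced by a speech recogniser)
    window_size: number of consecutive segments per window
    step: shift between successive windows; step < window_size
        yields overlap, giving the model shared local context
    """
    windows = []
    for start in range(0, len(segments) - window_size + 1, step):
        windows.append(segments[start:start + window_size])
    return windows

# Toy example: five segments, windows of 3 shifted by 1 -> 3 windows,
# each sharing two segments with its neighbour.
segs = ["s0", "s1", "s2", "s3", "s4"]
wins = overlapping_windows(segs)
```

In this sketch, adjacent windows share `window_size - step` segments, which is one plausible way to provide the overlapping context the paper attributes to its training setup.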
Keywords
Voice conversion; Recurrent neural networks; Deep learning; Spectral mapping
Citation
Ramos, M.V., Black, A.W., Astudillo, R.F., Trancoso, I., Fonseca, N. (2017) Segment Level Voice Conversion with Recurrent Neural Networks. Proc. Interspeech 2017, 3414-3418, doi: 10.21437/Interspeech.2017-1538
Publisher
ISCA
CC License
No CC license
