Korean TTS Server

Implementation of a Korean TTS server based on FastSpeech in PyTorch.
This builds on the FastSpeech implementation by xcmyz.

Screen Capture of Web Demo

Performance on Korean TTS dataset

We evaluated the trained model on three aspects: inference time, accuracy in pronunciation, and robustness.

CER (Character Error Rate)

Pronunciation accuracy is evaluated with CER (Character Error Rate): the generated audio is passed to the Google Speech Recognition API, and the resulting transcription is compared against the original sentence.
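
For reference, CER can be computed with a plain character-level Levenshtein distance. The sketch below is an illustration only, not the exact evaluation script used here; it assumes the hypothesis string is whatever transcription the recognizer returned.

def character_error_rate(reference: str, hypothesis: str) -> float:
    # CER = (substitutions + deletions + insertions) / len(reference),
    # computed with a character-level Levenshtein distance.
    ref, hyp = list(reference), list(hypothesis)
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Example: one wrong character out of five -> CER 0.2
print(character_error_rate("안녕하세요", "안녕하세오"))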

Robustness

We created a set of 100 hard sentences and counted the number of sentences in which skipping or repetition occurs.
The 100 sentences are drawn from Korean tongue twisters.

                               Tacotron2 (Baseline)   FastSpeech
Robustness (# of Skipping)     34                     28
Robustness (# of Repeat)       18                     3
Inference Time (s)             2.16                   0.02
CER on Test Dataset (%)        17.8                   19.3
CER on Game Test Dataset (%)   28.9                   57.2

Start

Dependencies

  • python 3.6
  • CUDA 10.0
  • pytorch 1.1.0
  • numpy 1.16.2
  • scipy 1.2.1
  • librosa 0.6.3
  • inflect 2.1.0
  • matplotlib 2.2.2

Prepare Dataset

  1. Download and extract the LJSpeech dataset.
  2. Put the LJSpeech dataset in data.
  3. Run preprocess.py.

For this implementation, our team used a Korean dataset that is available only within Netmarble.
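
Step 3 above boils down to feature extraction. As a rough illustration, a minimal mel-spectrogram extraction with librosa might look like the following; the exact parameters and output format used by preprocess.py may differ.

import librosa
import numpy as np

def wav_to_mel(path, sr=22050, n_fft=1024, hop_length=256, n_mels=80):
    # Load a clip and compute a log-scaled mel-spectrogram.
    wav, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=wav, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length, n_mels=n_mels)
    return np.log(np.clip(mel, 1e-5, None))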

Get Alignment from Tacotron2

Note

In the FastSpeech paper, the authors use a pre-trained Transformer-TTS model to provide the alignment targets. I did not have a well-trained Transformer-TTS model, so I use Tacotron2 instead.
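
Conceptually, the duration targets come from the Tacotron2 attention matrix: each mel frame is assigned to the input token it attends to most strongly, and the per-token frame counts become the durations. The sketch below shows that idea; the actual alignment.py may differ (for example, by enforcing a monotonic alignment).

import torch

def durations_from_attention(attention: torch.Tensor) -> torch.Tensor:
    # attention: [mel_frames, text_tokens], one soft distribution per mel frame.
    assigned = attention.argmax(dim=1)                             # most-attended token per frame
    durations = torch.bincount(assigned, minlength=attention.size(1))
    return durations                                               # duration target per input token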

Calculate Alignment during Training (slow)

Change pre_target = False in hparam.py

Calculate Alignment before Training

  1. Download the pre-trained Tacotron2 model published by NVIDIA here.
  2. Put the pre-trained Tacotron2 model in Tacotron2/pre_trained_model.
  3. Run alignment.py; it takes about 7 hours on an NVIDIA RTX 2080 Ti.

Use Calculated Alignment (quick)

I provide the LJSpeech alignments calculated by Tacotron2 in alignment_targets.zip. To use them, just unzip the file.

Run (Supports Data Parallel)

Note

In turbo mode, a prefetcher loads upcoming training batches ahead of time, which may cost more memory.

Normal Mode

Run train.py.

Turbo Mode

Run train_accelerated.py.
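
The prefetcher mentioned in the note above typically copies the next batch to the GPU on a side CUDA stream while the current batch is being processed, which is why it uses extra memory. A minimal sketch of that pattern follows; it is not necessarily how train_accelerated.py implements it.

import torch

class DataPrefetcher:
    def __init__(self, loader):
        self.loader = iter(loader)
        self.stream = torch.cuda.Stream()
        self._preload()

    def _preload(self):
        try:
            self.next_batch = next(self.loader)
        except StopIteration:
            self.next_batch = None
            return
        with torch.cuda.stream(self.stream):
            # Asynchronously copy the upcoming batch to the GPU.
            self.next_batch = [t.cuda(non_blocking=True) for t in self.next_batch]

    def next(self):
        # Make sure the side-stream copy has finished before handing the batch over.
        torch.cuda.current_stream().wait_stream(self.stream)
        batch = self.next_batch
        self._preload()
        return batch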

Test

Synthesize

Run test.py -t text_sentence -s checkpoint_step -w 1

Results

  • Example audio files are in results. The sentence used for synthesis is "I am very happy to see you again." results/normal.wav was synthesized with alpha = 1.0, results/slow.wav with alpha = 1.5, and results/quick.wav with alpha = 0.5 (see the sketch below for how alpha is applied).
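
alpha scales the predicted durations inside the length regulator, so values above 1.0 slow speech down and values below 1.0 speed it up. A simplified sketch of that expansion is shown here; the repository's LengthRegulator differs in detail.

import torch

def regulate_length(encoder_out, durations, alpha=1.0):
    # encoder_out: [tokens, dim], durations: [tokens] (frames per token).
    scaled = torch.clamp((durations.float() * alpha).round().long(), min=0)
    expanded = [frame.repeat(int(n), 1) for frame, n in zip(encoder_out, scaled) if int(n) > 0]
    return torch.cat(expanded, dim=0)  # [sum(scaled), dim]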

Notes

  • The output of the LengthRegulator's last linear layer passes through a ReLU activation to remove negative values; that is the module's output. During inference, this output passes through torch.exp() and one is subtracted, giving the factor used to expand each encoder output. During training, one is added to the duration targets before torch.log(), and the loss is computed in that log space. For example:
duration_predictor_target = duration_predictor_target + 1         # shift by one so log(0) never occurs
duration_predictor_target = torch.log(duration_predictor_target)  # loss is computed against log durations

duration_predictor_output = torch.exp(duration_predictor_output)  # inference: back from log space
duration_predictor_output = duration_predictor_output - 1         # undo the +1 shift before expansion

Reference
