
Decoder stopped with max_decoder_steps 500 #734

Closed · Osiris-Team opened this issue Oct 31, 2021 · 7 comments

Osiris-Team commented Oct 31, 2021

Steps to reproduce:

  1. Install TTS with `python -m pip install TTS`
  2. Run in a console:
     `tts --text "Hello my name is Johanna, and today I want to talk a bit about AutoPlug. In short, AutoPlug is a feature-rich, modularized server manager, that automates the most tedious parts of your servers or networks maintenance." --out_path INSERT_ABSOLUTE_DIR_PATH_HERE\output.wav`

Result:
The output.wav file is around 10 seconds long and the voice stops around the middle of the text ("... server manager, that...").

System: Windows 10, x64

Looks like it's related to sentence length...

@Osiris-Team (Author)

Closing this because of no response

@SteveDaulton

Same issue here using:
`tts --text "One minute, the hill was bright with sun, and the next it was deep in shadows, and the wind that had been merely cool was downright cold." --out_path <path-to-output-file>`
But replace the third comma with a full stop, and the entire text is rendered successfully:
`tts --text "One minute, the hill was bright with sun, and the next it was deep in shadows. And the wind that had been merely cool was downright cold." --out_path <path-to-output-file>`

It certainly looks to be an issue with sentence length.
Tested on Xubuntu 20.04 with Python 3.8.10.

@RonyMacfly

Increase the value of "max_decoder_steps".

For example, I use the Tacotron2 model.

tts --text "Hello"
 > tts_models/en/ljspeech/tacotron2-DDC is already downloaded.
 > vocoder_models/en/ljspeech/hifigan_v2 is already downloaded.
 > Using model: Tacotron2
 > Model's reduction rate `r` is set to: 1
 > Vocoder Model: hifigan
 > Generator Model: hifigan_generator
 > Discriminator Model: hifigan_discriminator

The installed package can be found here (Debian 10):
/home/user/.local/lib/python3.7/site-packages/TTS

The config file we need is:
/home/user/.local/lib/python3.7/site-packages/TTS/tts/configs/tacotron_config.py

Change
max_decoder_steps: int = 500
to
max_decoder_steps: int = 10000
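The manual edit above can also be scripted. Below is a minimal sketch, assuming the file still contains the literal default `max_decoder_steps: int = 500`; the site-packages path in the usage comment is the one reported in this thread and will differ on other systems.

```python
from pathlib import Path

def bump_max_decoder_steps(config_path, new_steps=10000):
    """Rewrite the default max_decoder_steps in place; return True if a change was made."""
    old = "max_decoder_steps: int = 500"
    new = "max_decoder_steps: int = {}".format(new_steps)
    text = Path(config_path).read_text()
    if old not in text:
        return False  # default already changed, or the file layout differs
    Path(config_path).write_text(text.replace(old, new))
    return True

# Path reported above (Debian 10, user install); adjust for your environment:
# bump_max_decoder_steps(
#     Path.home() / ".local/lib/python3.7/site-packages/TTS/tts/configs/tacotron_config.py"
# )
```

Note that pip restores the packaged default whenever TTS is reinstalled or upgraded, so the edit has to be reapplied after each upgrade.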

@Osiris-Team (Author)

Thanks. This really should be added to the installation steps.

@SteveDaulton

> Increase the value of "max_decoder_steps".

Thanks RonyMacfly, that works.

It also works in virtual environments. In my case the tacotron_config.py file was in
.venv/lib/python3.8/site-packages/TTS/tts/configs/

lkiesow added a commit to lkiesow/TTS that referenced this issue Aug 6, 2022
Running `tts --text "$text" --out_path …` with somewhat longer sentences in the text will lead to warnings like “Decoder stopped with max_decoder_steps 500” and the sentences simply being cut off in the resulting WAV file.

This happens quite frequently when feeding longer texts (e.g. a blog post) to `tts`. It's particularly frustrating since the error is not always obvious in the output; you have to notice that parts are missing. This is something other users seem to have run into as well [1].

This patch simply increases the maximum number of steps allowed for the tacotron decoder to fix this issue, resulting in smoother default behavior.

[1] mozilla/TTS#734
erogol pushed a commit to coqui-ai/TTS that referenced this issue Aug 7, 2022
erogol added a commit to coqui-ai/TTS that referenced this issue Aug 22, 2022
* Fix checkpointing GAN models (#1641)

* checkpoint sae step crash fix

* checkpoint save step crash fix

* Update gan.py

updated requested changes

* crash fix

* Fix the --model_name and --vocoder_name arguments need a <model_type> element (#1469)

Co-authored-by: Eren Gölge <[email protected]>

* Fix Publish CI (#1597)

* Try out manylinux

* temporary removal of useless pipeline

* remove check and use only manylinux

* Try --plat-name

* Add install requirements

* Add back other actions

* Add PR trigger

* Remove conditions

* Fix sythax

* Roll back some changes

* Add other python versions

* Add test pypi upload

* Add username

* Add back __token__ as username

* Modify name of entry to testpypi

* Set it to release only

* Fix version checking

* Fix tokenizer for punc only (#1717)

* Remove redundant config field

* Fix SSIM loss

* Separate loss tests

* Fix BCELoss adressing  #1192

* Make style

* Add durations as aux input for VITS (#1694)

* Add durations as aux input for VITS

* Make style

* Fix tts_tests

* Fix test_get_aux_input

* Make lint

* feat: updated recipes and lr fix (#1718)

- updated the recipes activating more losses for more stable training
- re-enabling guided attention loss
- fixed a bug about not the correct lr fetched for logging

* Implement VitsAudioConfig (#1556)

* Implement VitsAudioConfig

* Update VITS LJSpeech recipe

* Update VITS VCTK recipe

* Make style

* Add missing decorator

* Add missing param

* Make style

* Update recipes

* Fix test

* Bug fix

* Exclude tests folder

* Make linter

* Make style

* Fix device allocation

* Fix SSIM loss correction

* Fix aux tests (#1753)

* Set n_jobs to 1 for resample script

* Delete resample test

* Set n_jobs 1 in vad test

* delete vad test

* Revert "Delete resample test"

This reverts commit bb7c846.

* Remove tests with resample

* Fix for FloorDiv Function Warning (#1760)

* Fix for Floor Function Warning

Fix for Floor Function Warning

* Adding double quotes to fix formatting

Adding double quotes to fix formatting

* Update glow_tts.py

* Update glow_tts.py

* Fix type in download_vctk.sh (#1739)

typo in comment

* Update decoder.py (#1792)

Minor comment correction.

* Update requirements.txt (#1791)

Support for #1775

* Update README.md (#1776)

Fix typo in different and code sample

* Fix & update WaveRNN vocoder model (#1749)

* Fixes KeyError bug. Adding logging to dashboard.

* Make pep8 compliant

* Make style compliant

* Still fixing style

* Fix rand_segment edge case (input_len == seg_len - 1)

* Update requirements.txt; inflect==5.6 (#1809)

New inflect version (6.0) depends on pydantic which has some issues irrelevant to 🐸 TTS. #1808 
Force inflect==5.6 (pydantic free) install to solve dependency issue.

* Update README.md; download progress bar in CLI. (#1797)

* Update README.md

- minor PR
- added model_info usage guide based on #1623 in README.md .

* "added tqdm bar for model download"

* Update manage.py

* fixed style

* fixed style

* sort imports

* Update wavenet.py (#1796)

* Update wavenet.py

Current version does not use "in_channels" argument. 
In glowTTS, we use normalizing flows and so "input dim" == "output dim" (channels and length). So the existing code just uses a hidden_channel-sized tensor as input to the first layer as well as outputting a hidden_channel-sized tensor.
However, since it is a generic implementation, I believe it is better to update it for a more general use.

* "in_channels -> hidden_channels"

* Adjust default to be able to process longer sentences (#1835)

* Fix language flags generated by espeak-ng phonemizer (#1801)

* fix language flags generated by espeak-ng phonemizer

* Style

* Updated language flag regex to consider all language codes alike

* fix get_random_embeddings --> get_random_embedding (#1726)

* fix get_random_embeddings --> get_random_embedding

function typo leads to training crash, no such function

* fix typo

get_random_embedding

* Introduce numpy and torch transforms (#1705)

* Refactor audio processing functions

* Add tests for numpy transforms

* Fix imports

* Fix imports2

* Implement bucketed weighted sampling for VITS (#1871)

* Update capacitron_layers.py (#1664)

crashing because of dimension mismatch at line no. 57
[batch, 256] vs [batch , 1, 512]
enc_out = torch.cat([enc_out, speaker_embedding], dim=-1)

* updates to dataset analysis notebooks for compatibility with latest version of TTS (#1853)

* Fix BCE loss issue (#1872)

* Fix BCE loss issue

* Remove import

* Remove deprecated files (#1873)

- samplers.py is moved
- distribute.py is replaced by the 👟Trainer

* Handle when no batch sampler (#1882)

* Fix tune wavegrad (#1844)

* fix imports in tune_wavegrad

* load_config returns Coqpit object instead None

* set action (store true) for flag "--use_cuda"; start to tune if module is running as the main program

* fix var order in the result of batch collating

* make style

* make style with black and isort

* Bump up to v0.8.0

* Add new DE Thorsten models (#1898)

- Tacotron2-DDC
- HifiGAN vocoder

Co-authored-by: manmay nakhashi <[email protected]>
Co-authored-by: camillem <[email protected]>
Co-authored-by: WeberJulian <[email protected]>
Co-authored-by: a-froghyar <[email protected]>
Co-authored-by: ivan provalov <[email protected]>
Co-authored-by: Tsai Meng-Ting <[email protected]>
Co-authored-by: p0p4k <[email protected]>
Co-authored-by: Yuri Pourre <[email protected]>
Co-authored-by: vanIvan <[email protected]>
Co-authored-by: Lars Kiesow <[email protected]>
Co-authored-by: rbaraglia <[email protected]>
Co-authored-by: jchai.me <[email protected]>
Co-authored-by: Stanislav Kachnov <[email protected]>
tudorw commented Jun 24, 2023

Okay, I have been playing around a lot with this. As of today my best output so far uses tts_models/zh-CN/baker/tacotron2-DDC-GS. While the zh-CN might suggest a Chinese source, that is not important when building a model: it abstracts away anything like words or letters, creating a multidimensional topological manifold that embodies 'speech' at its essence. I am working with around 80 to 100 words at a time. I use the Python language-tool package to clean the text and fix grammar (more on that later), then pass it in chunks to a loop that synthesizes and saves each audio chunk, and finally concatenates the chunks into a single file to play.

Decoder steps, a setting within the TTS library configuration (tacotron), is deceptive. It feels like a larger value (10000) should allow longer text; however, if you look at the quality, it drops off dramatically after 500 to 800 steps. More steps do not make the output better. So chunking the text, then synthesizing, concatenating, and playing keeps the decoder fed nicely. The training data is on a bell curve: most of the model's 'experience' lies in maybe 80 to 120 words, and it 'knows' what that sounds like. If you ask it for 1 second, or 10 minutes, the model collapses completely; it does not recognize this and does a (poor) job of trying to synthesize anyway.
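The chunk-then-concatenate loop described above can be sketched as follows. This is a rough illustration, not the commenter's actual script: the ~100-word chunk size follows the comment, the sentence splitting is naive, and the `synthesize_chunk` call in the usage comment stands in for whatever `tts` invocation you use (hypothetical).

```python
import wave

def chunk_text(text, max_words=100):
    """Group sentences into chunks of at most max_words each."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    chunks, current, count = [], [], 0
    for s in sentences:
        n = len(s.split())
        if current and count + n > max_words:
            chunks.append(". ".join(current) + ".")
            current, count = [], 0
        current.append(s)
        count += n
    if current:
        chunks.append(". ".join(current) + ".")
    return chunks

def concat_wavs(paths, out_path):
    """Concatenate WAV files with identical params into a single file."""
    with wave.open(out_path, "wb") as out:
        for i, path in enumerate(paths):
            with wave.open(path, "rb") as w:
                if i == 0:
                    out.setparams(w.getparams())
                out.writeframes(w.readframes(w.getnframes()))

# for i, chunk in enumerate(chunk_text(long_text)):
#     synthesize_chunk(chunk, "chunk{}.wav".format(i))  # hypothetical tts call
# concat_wavs(["chunk0.wav", "chunk1.wav"], "output.wav")
```

Keeping every chunk inside the range the decoder was trained on is what avoids the max_decoder_steps warning in the first place.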

It is also tripped up by plain wrong text: if the sentence structure is poor, the model struggles. I am experimenting with using an 'AI' to reformat the text into a style that best suits the TTS model, to improve the output.

I am also looking at other improvements, such as creating a baseline version and another version with my chosen TTS model, then comparing the audio lengths and discarding the longest. This would pick out the occasional errors where something like '<hello =sorry i bokre theAI' tries to get spoken, with hilarious or fairly tragic results depending on your sense of humor...

Also, I can roughly guess how long the audio should be by counting words and comparing that to the output length, or feed the output into a speech-to-text model and have its grammar judged against the supplied text, with an additional loop that lets the AI remix the text continuously until it agrees the output is good... Plus, I can get my Twitch channel 'infinifiction' to ask listeners to rate the best speech and use that to retrain a model specialized in the length and style of output I want for this specific task...
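The word-count sanity check suggested above could look like the sketch below. The ~150 words-per-minute speaking rate and the 0.5 tolerance are illustrative assumptions, not values from this thread.

```python
import wave

def wav_duration_seconds(path):
    """Duration of a WAV file in seconds."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / float(w.getframerate())

def looks_truncated(text, wav_path, wpm=150.0, tolerance=0.5):
    """Flag audio that is much shorter than the word count predicts."""
    expected = len(text.split()) / wpm * 60.0  # expected duration in seconds
    return wav_duration_seconds(wav_path) < expected * tolerance
```

A truncated render like the one reported in this issue (half the text missing) would fall well below the expected duration and be flagged for re-synthesis.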

tudorw commented Jun 24, 2023

On a Ryzen 9 6900HX I get a real-time factor of around 0.3, so I can confidently generate around 60 seconds of audio in 30 seconds.
