Why is there almost 15000-20000 ms of latency before the text is delivered? #40

Open
HardikJain02 opened this issue Oct 31, 2023 · 1 comment
HardikJain02 commented Oct 31, 2023

For example, if I say "I am XYZ", it takes roughly the above-mentioned time for the text to be delivered. How can I speed this up?

Also, why does loading the model take an unexpectedly long time?

davabase (Owner) commented Nov 8, 2023

The first time a model is used it has to be downloaded, which can take a while; after that, the model is cached on disk, so subsequent loads are much faster.
The time it takes to run inference on speech depends on the size of the model and the hardware it runs on. If you don't have a GPU, or your computer is not very powerful, it will take longer.
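The download-once, cache-on-disk behavior described above can be sketched with a minimal stand-in. Here `fetch_model` and `MODEL_CACHE` are hypothetical names that simulate a slow network download; they are not part of this project's actual code, which delegates caching to the underlying model library:

```python
import time
from pathlib import Path

# Hypothetical on-disk cache directory (illustrative only).
MODEL_CACHE = Path("model_cache")

def fetch_model(name: str) -> bytes:
    """Stand-in for downloading model weights over the network."""
    time.sleep(0.2)  # simulate a slow download
    return b"fake-weights-for-" + name.encode()

def load_model(name: str) -> bytes:
    """Download on first use, then serve from the on-disk cache."""
    MODEL_CACHE.mkdir(exist_ok=True)
    cached = MODEL_CACHE / f"{name}.bin"
    if cached.exists():
        return cached.read_bytes()  # fast path: already cached on disk
    weights = fetch_model(name)     # slow path: first use only
    cached.write_bytes(weights)
    return weights

# First load pays the download cost; the second load hits the cache.
start = time.perf_counter()
first = load_model("tiny")
first_run = time.perf_counter() - start

start = time.perf_counter()
second = load_model("tiny")
second_run = time.perf_counter() - start

print(first == second, second_run < first_run)  # → True True
```

This is why only the very first run of a given model is slow to start: later runs skip the download entirely and read the cached weights from disk.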
