Memory leak on corrector.LoadLangModel() #44
I'm facing the same issue. I've trained a German model on roughly 560 MiB of plain text from the Leipzig Corpora Collection; the resulting model is 488 MiB. On a Linux cloud machine with 16 GB RAM and 4 CPUs, loading the model takes 5-10 minutes. Is it possible to speed this up?
March 16, 2022
The community version doesn't support loading Linux models on Windows. Alternatively, you can buy the PRO version, which supports all models on all operating systems. The PRO version also reduces memory usage during training.
I've trained my model (I've tried versions from both the master and 0.0.11 branches) on a 10 MiB plain-text extract of the English Wikipedia (enwiki-latest-pages-articles_10MiB.txt) and got a 41 MiB bin file (enwiki.bin.zip). I'm loading it in Python, but the load consumes 12 GiB of memory and still doesn't finish in any foreseeable time.
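To make reports like this easier to compare, it helps to measure wall time and peak resident memory around the load call. Below is a minimal stdlib-only sketch (the helper name and the demo allocation are my own, not part of JamSpell); with JamSpell installed, the same helper could wrap `corrector.LoadLangModel` directly. Note that `resource.getrusage` reports `ru_maxrss` in KiB on Linux but in bytes on macOS.

```python
import resource
import time

def profile_call(fn, *args):
    """Run fn(*args), returning (result, elapsed seconds, peak RSS).

    Peak RSS is the process-wide high-water mark (KiB on Linux),
    so it includes memory allocated before the call as well.
    """
    start = time.monotonic()
    result = fn(*args)
    elapsed = time.monotonic() - start
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return result, elapsed, peak

# Demo with a stand-in allocation. With JamSpell installed you would
# profile the real load instead, e.g.:
#   corrector = jamspell.TSpellCorrector()
#   profile_call(corrector.LoadLangModel, "enwiki.bin")
def fake_load():
    return bytearray(50 * 1024 * 1024)  # allocate ~50 MiB

_, seconds, peak = profile_call(fake_load)
print(f"load took {seconds:.2f}s, peak RSS {peak} KiB")
```

Numbers gathered this way (load time plus peak RSS relative to model size) would make it much easier for the maintainers to reproduce the blow-up described above.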