Why is the first training slower, but subsequent trainings faster? #84
Hi @ngupta23, I haven't checked in detail, but I strongly suspect that this is due to JIT compilation by Numba. The code is compiled to LLVM on the first run, which takes time, and subsequent runs reuse the compiled code.
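The warm-up effect described above can be reproduced in isolation. Below is a minimal sketch; `slow_sum` is a hypothetical toy function, not statsforecast's actual code, and the fallback decorator is only there so the sketch also runs without Numba installed.

```python
# Minimal sketch of the Numba JIT warm-up effect (toy example, not
# statsforecast internals).
import time

try:
    from numba import njit  # statsforecast compiles its hot loops with Numba
except ImportError:
    # Fallback so the sketch still runs without Numba (no compilation step).
    def njit(func):
        return func

@njit
def slow_sum(n):
    # Tight numeric loop: the kind of code Numba compiles to LLVM machine code.
    total = 0.0
    for i in range(n):
        total += i
    return total

t0 = time.perf_counter()
first = slow_sum(1_000_000)   # first call: triggers JIT compilation
t_first = time.perf_counter() - t0

t0 = time.perf_counter()
second = slow_sum(1_000_000)  # subsequent calls: reuse the compiled code
t_second = time.perf_counter() - t0

print(f"first call:  {t_first:.4f}s (includes compilation)")
print(f"second call: {t_second:.4f}s")
```

With Numba installed, the first call is typically dominated by compilation while the second is orders of magnitude faster, which mirrors the first-vs-subsequent training gap reported in this issue. Note that Numba also supports `@njit(cache=True)` to persist the compiled code to disk, so the compilation cost is paid once per machine rather than once per process.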
Thanks for confirming this. We can close this as the question has been answered.
I am curious why the first training for AutoARIMA using statsforecast is slower than subsequent trainings, even though the models are instantiated again before the subsequent trainings (i.e., they are new objects). Is some information being reused from the first training?
https://gist.github.com/ngupta23/59cc0ce155048f72b80a0431c57b7d17
Initial Training
Subsequent Training (independent object)