
Why is first training slower, but subsequent trainings faster #84

Closed

ngupta23 opened this issue Mar 31, 2022 · 3 comments

ngupta23 commented Mar 31, 2022

I am curious why the first training for AutoARIMA using statsforecast is slower than subsequent trainings, even though the models are instantiated again before the subsequent trainings (i.e., they are brand-new objects). Is some information being reused from the first training?

https://gist.github.com/ngupta23/59cc0ce155048f72b80a0431c57b7d17

Initial Training

[screenshot: wall-clock timing of the first fit]

Subsequent Training (independent object)

[screenshot: wall-clock timing of a later fit, noticeably faster]
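For reference, a minimal sketch of this kind of timing comparison. It assumes the current statsforecast API (StatsForecast, AutoARIMA); import paths and call signatures have shifted between versions, so treat the exact names as illustrative:

```python
import time

import numpy as np
import pandas as pd
from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA

# Toy monthly series in statsforecast's long format (unique_id, ds, y).
df = pd.DataFrame({
    "unique_id": "series_1",
    "ds": pd.date_range("2015-01-01", periods=96, freq="MS"),
    "y": np.random.default_rng(0).normal(100.0, 10.0, 96).cumsum(),
})

for run in (1, 2):
    # A brand-new StatsForecast object on each iteration -- no state is
    # shared between the two runs, at least not explicitly.
    sf = StatsForecast(models=[AutoARIMA(season_length=12)], freq="MS")
    start = time.perf_counter()
    sf.forecast(df=df, h=12)
    print(f"run {run}: {time.perf_counter() - start:.2f}s")
```

Both runs build a fresh object, so only process-level state could explain a speed difference between them.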

hrzn commented Apr 5, 2022

Hi @ngupta23, I haven't checked in detail, but I strongly suspect that this is due to JIT compilation by Numba. The code is compiled (via LLVM) to machine code on the first run, which takes time, and subsequent runs reuse the compiled code.
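The effect is easy to reproduce with Numba alone; this toy kernel (hypothetical, its only job is to trigger the JIT) shows the same first-call penalty:

```python
import time

import numpy as np
from numba import njit

@njit
def rolling_mean(x, window):
    # Trivial numeric kernel, just to exercise Numba's JIT machinery.
    out = np.empty(x.size - window + 1)
    for i in range(out.size):
        out[i] = x[i:i + window].mean()
    return out

x = np.random.rand(100_000)
for call in (1, 2):
    start = time.perf_counter()
    rolling_mean(x, 50)
    print(f"call {call}: {time.perf_counter() - start:.4f}s")
# call 1 includes the LLVM compilation; call 2 runs the cached machine code.
```

The compiled code is cached per process and keyed by argument types, so any new object created in the same interpreter benefits from the first call's compilation.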

kdgutier (Collaborator) commented Apr 5, 2022

That is the case, @hrzn and @ngupta23: the first Numba compilation takes time, and afterwards the compiled code is reused.
We are considering creating pre-compiled wheels to avoid this issue.
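As a stopgap, Numba's own on-disk cache can soften the cold start: decorating a function with cache=True persists the compiled machine code next to the source file, so later Python processes skip recompilation. (This is a general Numba feature; whether statsforecast exposes or uses it is not confirmed here.)

```python
from numba import njit

# cache=True writes the compiled machine code to disk (a __pycache__
# directory next to the source file), so a *new* Python process can
# load it instead of recompiling -- only the very first run pays.
@njit(cache=True)
def add(a, b):
    return a + b

print(add(1.0, 2.0))  # compiles on the first-ever call, loads from cache afterwards
```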

ngupta23 (Author) commented

Thanks for confirming this. We can close this as the question has been answered.
