some thoughts about "chunksize" in iter_parallel_chains function of beat/sampler/base.py #86
Comments
Hi again, cool that you are still around ;). Cheers!
I understand it! Best regards.
Sorry for the late fix, but I apparently didn't get the point correctly until I tried it myself with a larger number of chains.
Hi again,
In the iter_parallel_chains function of beat/sampler/base.py:476-482, tps seems to depend on the hardware (I have installed libamdm), and if we set a larger n_jobs, the chunksize will also become larger when tps > 0.5, draws > 10, and stage > 0.
Referring to
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.map
, a larger chunksize means a smaller chunk count. When n_jobs > chunk count, increasing n_jobs reduces the number of workers that actually run in parallel, which means the calculation takes longer. Is that correct? And can I set an arbitrary chunksize manually in a script?
Thank you!
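The chunking arithmetic behind this question can be sketched as follows. This is a minimal illustration of how Pool.map's chunksize limits effective parallelism, not BEAT's actual code; the chain count, n_jobs, and chunksize values are made up for the example:

```python
import math
from multiprocessing import Pool


def n_chunks(n_tasks: int, chunksize: int) -> int:
    # Pool.map splits the iterable into about ceil(n_tasks / chunksize)
    # chunks; each chunk is handed to one worker process as a single task,
    # so the chunk count is an upper bound on the workers used in parallel.
    return math.ceil(n_tasks / chunksize)


def sample_chain(seed: int) -> int:
    # Stand-in for sampling one chain.
    return seed * seed


if __name__ == "__main__":
    chains = list(range(100))  # e.g. 100 chains to sample
    n_jobs = 16

    # Large chunksize -> only 4 chunks, so at most 4 of 16 workers run.
    print(n_chunks(len(chains), chunksize=25))  # 4

    # Smaller chunksize -> 20 chunks, enough to keep all 16 workers busy.
    print(n_chunks(len(chains), chunksize=5))   # 20

    # chunksize can be passed to Pool.map explicitly:
    with Pool(processes=n_jobs) as pool:
        results = pool.map(sample_chain, chains, chunksize=5)
```

So yes: if the computed chunksize grows with n_jobs, the chunk count can drop below n_jobs and the extra workers sit idle; forcing a smaller chunksize (at the cost of more inter-process overhead per task) restores parallelism.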