This repository has been archived by the owner on Jun 24, 2022. It is now read-only.
Serial vs parallel #8
I was thinking you can use … Any suggestions to make this easier?
You can write to CSVs and then use this brand-new package to stitch it all together and do the analysis in parallel: https://discourse.julialang.org/t/ann-juliadb-jl/3564
There are many tools for this now, including JuliaDB, CSV output, and #13.
If I run a large batch of trajectories on one machine, I can easily hit a limit where they no longer fit in memory, and I start seeing major IO lag. You have some write-to-file features now. What I am wondering is: if I want to run, say, 10^6 trajectories on one node (perhaps this is happening on each node of a cluster), how should I manage the trajectories? Should I step through sequential calls like

sol1 = solve(...), sol2 = solve(...)

where each batch fits in memory, have each one write to file, and then aggregate the results? (I assume I would have to be careful to use a different random noise seed for each batch?) Or should I just ask for the total number of trajectories I want and let `solve()` handle everything in a parallel environment? I am just wondering whether there is a convenient way to handle this situation...
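The sequential-batch idea above can be sketched in plain Julia. This is a minimal illustration, not the library's API: `run_batches`, `simulate_one`, and `aggregate_mean` are hypothetical names, and `simulate_one(rng)` stands in for whatever a single trajectory `solve(...)` call would return. The key points it demonstrates are a distinct, reproducible seed per batch and flushing each batch to disk before starting the next, so memory stays bounded.

```julia
using Random, DelimitedFiles

# Hypothetical batched driver. `simulate_one(rng)` is a placeholder for one
# trajectory solve whose noise is driven by `rng`.
function run_batches(simulate_one; n_total=10^6, batch_size=10^4, dir=mktempdir())
    n_batches = cld(n_total, batch_size)   # ceiling division
    files = String[]
    for b in 1:n_batches
        rng = MersenneTwister(b)           # distinct, reproducible seed per batch
        n_this = min(batch_size, n_total - (b - 1) * batch_size)
        batch = [simulate_one(rng) for _ in 1:n_this]
        f = joinpath(dir, "batch_$b.csv")
        writedlm(f, batch, ',')            # flush this batch to disk
        push!(files, f)                    # batch array can now be garbage-collected
    end
    return files
end

# Aggregate by streaming the files back one at a time (here, a running mean),
# so no more than one batch is ever resident in memory.
function aggregate_mean(files)
    s, n = 0.0, 0
    for f in files
        data = readdlm(f, ',')
        s += sum(data)
        n += length(data)
    end
    return s / n
end
```

The same pattern extends to a cluster: give each node a disjoint range of batch indices so the per-batch seeds never collide, then aggregate the files in a final pass.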