This repository has been archived by the owner on Jun 24, 2022. It is now read-only.

Distributed Array of solutions #13

Closed
ivborissov opened this issue Jun 23, 2017 · 11 comments

@ivborissov

It would be useful to have an option that does not transfer data from the remote workers back to the master after the simulations, but instead stores it on the workers.

@ChrisRackauckas
Member

It's not immediately clear to me how this can be done (maybe dfill?), so I opened an issue:

JuliaParallel/DistributedArrays.jl#145

I'll follow up here when I get a response.

@ChrisRackauckas
Member

What you're looking for with calling the simulation is:

addprocs()
@everywhere begin
    import Distributions: Uniform
    using DiffEqBase, OrdinaryDiffEq, DiffEqMonteCarlo
    using DistributedArrays
    # number of iterations
    n_iter = 10

    # t span and step
    tspan = (0., 100.)
    step = 1.

    # DiffEq
    pf_func = function (t,u,p,du)
        vel = p[3]*p[2]*u[2]/(p[4]+u[2])
        vabs = p[1]*p[5]*u[1]
        du[1] = -vabs/p[1]
        du[2] = (vabs-vel)/p[3]
    end

    # Defaults
    # Defaults (u0 = [C0, C1]; params = [Default, kcat, Vd, Km, kabs])
    u0 = [10., 0.]
    params = [1., 7.282813e-1, 5.275000e0, 5.257432e0, 2.090394e0]

    pf = ParameterizedFunction(pf_func,params)
    prob = ODEProblem(pf,u0,tspan)

    # MonteCarlo
    prob_func1 = function (prob, i)
      prob.f.params[2] = rand(Uniform(6.5, 7.5))
      prob.f.params[3] = rand(Uniform(4.5, 5.5))
      prob.f.params[4] = rand(Uniform(4.5, 5.5))
      prob.f.params[5] = rand(Uniform(1.5, 3.))
      prob
    end

    monte_prob = MonteCarloProblem(prob,prob_func=prob_func1)
    srand(myid())
    sim = solve(monte_prob, Tsit5(), saveat=collect(tspan[1]:step:tspan[2]), parallel_type=:threads, num_monte=n_iter)
end

The aggregation seems to stall:

dfill(sim,((n_iter for i in 1:nprocs())...))

so I'm not entirely sure that way of aggregating is correct.

@ivborissov
Author

Chris, thanks a lot!
As I really only need to transfer back the statistic values for now, I have come to the following solution (thanks to your ParallelDataTransfer 👍 ):

@everywhere @time sim1 = solve(monte_prob, Tsit5(), saveat=collect(tspan[1]:step:tspan[2]), parallel_type=:threads, num_monte=n_iter)
@everywhere @time m_val = timeseries_steps_mean(sim1)

using ParallelDataTransfer
for i in workers()
    m_val = map(+, m_val[:], @getfrom i m_val[:])
end
m_val /= nprocs()

I guess it is not ideal, though.

@ivborissov
Author

Sorry, this way is better:

using ParallelDataTransfer
for i in workers()
    m_val.u = map(+, m_val.u, @getfrom i m_val.u)
end

m_val.u /= nprocs()

@ChrisRackauckas
Member

If you just need the statistical values, use the reduction function. Example:

https://github.com/JuliaDiffEq/DiffEqMonteCarlo.jl/blob/master/test/monte.jl#L79

That makes it only save the sum of the endpoints, instead of ever saving the other values. You can use that to never even save the full array of solutions. Of course, only limited applications can benefit from this so the general solution is still necessary.
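A rough sketch of that pattern for this problem, assuming the `output_func`, `reduction`, and `u_init` keywords shown in the linked test file (exact signatures may differ between versions, so treat this as illustrative rather than exact):

```julia
# Hedged sketch: keep only each trajectory's endpoint and fold batches
# into a running sum, so the full array of solutions is never stored.
output_func = (sol, i) -> (sol[end], false)            # save just the endpoint of each run
reduction   = (u, batch, I) -> (u .+ sum(batch), false) # accumulate each batch into u

monte_prob = MonteCarloProblem(prob,
    prob_func   = prob_func1,
    output_func = output_func,
    reduction   = reduction,
    u_init      = zeros(u0))

sim = solve(monte_prob, Tsit5(), num_monte = n_iter)
mean_endpoint = sim.u / n_iter   # mean of the endpoints across all runs
```

Since the reduction runs as the batches complete, memory usage stays constant in the number of Monte Carlo runs.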

@ChrisRackauckas
Member

https://discourse.julialang.org/t/distributed-generation-of-a-darray/4472/5
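The approach suggested in that thread, roughly: instead of `dfill`-ing an already-computed result, construct the `DArray` from an initializer closure so each worker solves and stores its own local block. A minimal sketch under that assumption (using the `DArray(init, dims)` constructor, where `init` receives the tuple of index ranges owned by the calling worker):

```julia
using DistributedArrays

# Hedged sketch: each worker fills only its local part of the DArray,
# so the solutions never leave the worker that computed them.
dsim = DArray((n_iter * nworkers(),)) do I
    # I[1] is the range of global indices this worker owns
    [solve(prob, Tsit5(), saveat = collect(tspan[1]:step:tspan[2])) for _ in I[1]]
end
```

The master then holds only a lightweight handle; `localpart(dsim)` on a worker gives that worker's solutions.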

@ivborissov
Author

Chris, thanks! For this case I think the reduction will be sufficient; however, in the future the full solution may also be needed.

@ChrisRackauckas
Member

I got something on master. Essentially, you can use Val{false} as the third argument to make it stay distributed. Works with parallel_type = :none and parallel_type = :threads since it already maps to each process (so that's essentially :pmap and :split_threads).

sim = solve(prob2,SRIW1(),Val{false},dt=1//2^(3),num_monte=10)
@test length(sim) == 10

sim = solve(prob2,SRIW1(),Val{false},dt=1//2^(3),num_monte=10,parallel_type=:threads)
@test length(sim) == 10

That interface is a little awkward so I'll want to test it out a little bit more, but it works.

@ChrisRackauckas
Member

Release requires DistributedArrays.jl tag. JuliaParallel/DistributedArrays.jl#147 But if you have master for DistributedArrays and DiffEqMonteCarlo, the code above works.

@ivborissov
Author

Chris, thanks a lot!

@ChrisRackauckas
Member

Implemented.
