@parallel for return array of RemoteRefs and documentation.
lucasb-eyer committed Dec 5, 2014
1 parent 8d9d64f commit 6cfb43f
Showing 2 changed files with 11 additions and 7 deletions.
5 changes: 1 addition & 4 deletions base/multi.jl

@@ -1504,10 +1504,7 @@ function preduce(reducer, f, N::Int)
 end
 
 function pfor(f, N::Int)
-    for c in splitrange(N, nworkers())
-        @spawn f(first(c), last(c))
-    end
-    nothing
+    [@spawn f(first(c), last(c)) for c in splitrange(N, nworkers())]
 end
 
 function make_preduce_body(reducer, var, body, ran)
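``pfor`` is the internal helper that ``@parallel for`` lowers to when no reduction operator is supplied; the change above keeps the ``RemoteRef`` returned by each ``@spawn`` instead of discarding it. A minimal sketch of what this enables (calling the unexported ``Base.pfor`` directly, the two-worker setup, and the ``(lo, hi) -> sum(lo:hi)`` body are illustrative assumptions, not part of the commit):

    addprocs(2)    # assumption: two local worker processes

    # pfor partitions 1:N into one chunk per worker via splitrange and
    # @spawns f(first(chunk), last(chunk)) on each; after this commit it
    # returns the array of RemoteRefs instead of `nothing`.
    refs = Base.pfor((lo, hi) -> sum(lo:hi), 100)

    # fetch blocks until the corresponding chunk has finished; the
    # per-chunk partial sums add up to sum(1:100) == 5050.
    println(sum([fetch(r) for r in refs]))
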
13 changes: 10 additions & 3 deletions doc/manual/parallel-computing.rst
@@ -324,7 +324,6 @@ For example, the following code will not work as intended::
     a[i] = i
 end
 
-Notice that the reduction operator can be omitted if it is not needed.
 However, this code will not initialize all of ``a``, since each
 process will have a separate copy of it. Parallel for loops like these
 must be avoided. Fortunately, distributed arrays can be used to get
@@ -341,6 +340,14 @@ the variables are read-only::
 Here each iteration applies ``f`` to a randomly-chosen sample from a
 vector ``a`` shared by all processes.

+As you can see, the reduction operator can be omitted if it is not needed.
+In that case, the loop executes asynchronously, i.e. it spawns independent
+tasks on all available workers and returns an array of ``RemoteRef``
+immediately, without waiting for completion.
+The caller can wait for the ``RemoteRef`` completions at a later point
+by calling ``fetch`` on them, or wait for completion at the end of the
+loop by prefixing it with ``@sync``, as in ``@sync @parallel for``.

 In some cases no reduction operator is needed, and we merely wish to
 apply a function to all integers in some range (or, more generally, to
 all elements in some collection). This is another useful operation
@@ -354,8 +361,8 @@ random matrices in parallel as follows::
 Julia's ``pmap`` is designed for the case where each function call does
 a large amount of work. In contrast, ``@parallel for`` can handle
 situations where each iteration is tiny, perhaps merely summing two
-numbers. Only worker processes are used by both ``pmap`` and ``@parallel for``
-for the parallel computation. In case of ``@parallel for``, the final reduction
+numbers. Only worker processes are used by both ``pmap`` and ``@parallel for``
+for the parallel computation. In case of ``@parallel for``, the final reduction
 is done on the calling process.


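Taken together, the new documentation amounts to the following user-visible behavior. A minimal sketch, assuming two workers; the loop bodies are illustrative, with ``int(randbool())`` borrowed from the coin-flip example used earlier on this manual page:

    addprocs(2)    # assumption: two local worker processes

    # With a reduction operator, @parallel blocks and returns the result:
    nheads = @parallel (+) for i = 1:200000
        int(randbool())
    end

    # Without one, it now returns an array of RemoteRefs immediately,
    # one per worker chunk, without waiting for the iterations to run:
    refs = @parallel for i = 1:4
        println("running iteration $i")
    end

    # Either fetch each RemoteRef to wait for completion of its chunk...
    map(fetch, refs)

    # ...or block until the whole loop is done by prefixing it with @sync:
    @sync @parallel for i = 1:4
        println("running iteration $i")
    end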
