Parallel all reduce communication and backprop #573
Very interesting! Can you upload the details about your cluster, namely the GPUs and interconnect being used, and the parallelism settings? I am surprised by this and want to do some experiments before making any changes. Re: your PS, I believe this was set up to do a comparative speed test of PP = 1 and sequential modeling, though I can't find any records of the results of that testing. I'll open a separate issue to test PP = 1 vs sequential so it doesn't fall through the cracks again. @EricHallahan I don't suppose you recall or can find the results of this testing?
Yes, it is expected behavior: setting `pipe_parallel_size` to 1 uses the `PipelineModule` wrapper, while setting it to 0 uses the `to_sequential()` version. It is news to me that it is faster to use `PipelineModule`, though.
@StellaAthena The testing cluster was 2 nodes, each with 4 V100 GPUs, and I was running with the default configuration.
@StellaAthena Also, the NIC is 100G RDMA/RoCE.
@ShivanshuPurohit is going to look into this :) @reyoung can you post whatever performance statistics you have with your 2-node cluster setup? FLOPS, % comms, etc.?
Hey @zhuzilin, really interesting! Firstly, wrt the speed difference between pp=0 and pp=1, we also found a similar thing, see #269, although maybe the speed difference isn't quite as stark as what you found. I'm not sure of the source of the difference. Wrt the optimization, I see no reason this couldn't also work with MP and PP, and we'd be very interested in getting something like this implemented. I suspect it might not be so straightforward with deepspeed, though! Fundamentally, you're doing the same communication op with MP / PP, just the group you're reducing within is smaller. So I think this should definitely be possible, but I'm not yet certain how this optimization would interact with:
Thank you for open-sourcing such a great repo for the community! Your work is really helping our team train large pretrained models :)
In our experiments, we found that when training a not-that-large model (e.g. 2.7B) with data parallelism, the scaling efficiency across multiple nodes is not good enough (under 70% for 2 nodes in our case). One reason is that currently the backward computation (the "BackwardPass" instruction) and the communication (introduced in the "ReduceGrads" instruction) are executed sequentially. If instead we start the all-reduce for each gradient right after it is computed, we can overlap the backward computation with ReduceGrads, reducing the negative impact of cross-node communication.
We could use the backward hook mechanism in PyTorch for this optimization; there is an example of this in the PyTorch source code.
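To make the idea concrete, below is a minimal sketch of the hook-based overlap for the pure data-parallel case. It is not the PyTorch or gpt-neox implementation: the `OverlappedAllReduce` class and its method names are made up for illustration, it assumes `torch.distributed` is already initialized, and it uses `register_post_accumulate_grad_hook`, which requires a recent PyTorch (older versions would hook the parameter's `AccumulateGrad` node instead, as DDP does).

```python
# Hypothetical sketch: overlap gradient all-reduce with the backward pass by
# launching an asynchronous all-reduce as soon as each parameter's gradient
# has been accumulated, instead of reducing everything after backward finishes.
import torch
import torch.distributed as dist


class OverlappedAllReduce:
    def __init__(self, model: torch.nn.Module):
        self.world_size = dist.get_world_size()
        self.handles = []
        for param in model.parameters():
            if param.requires_grad:
                # Fires once the gradient has been written into param.grad,
                # i.e. while gradients of earlier layers are still being computed.
                param.register_post_accumulate_grad_hook(self._launch_all_reduce)

    def _launch_all_reduce(self, param):
        # Non-blocking collective: communication proceeds concurrently with
        # the remaining backward computation.
        handle = dist.all_reduce(param.grad, op=dist.ReduceOp.SUM, async_op=True)
        self.handles.append((handle, param))

    def finish(self):
        # Call after loss.backward() and before optimizer.step(): wait for the
        # outstanding all-reduces, then average the gradients.
        for handle, param in self.handles:
            handle.wait()
            param.grad.div_(self.world_size)
        self.handles.clear()
```

A real implementation would bucket small gradients into larger flat buffers before reducing (as `torch.nn.parallel.DistributedDataParallel` does internally), and would defer the reduction to the last micro-batch when gradient accumulation is used.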
This optimization may only work for pure data parallelism, as the communication pattern is quite different with model or pipeline parallelism.
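Following the point above that with MP/PP the gradient all-reduce is the same op over a smaller group, the hook could in principle reduce only within the data-parallel process group. The sketch below is hypothetical: `build_data_parallel_group` is a made-up helper, and it assumes a simple rank layout (consecutive ranks form one model-parallel group), which is not necessarily how gpt-neox/Megatron lay out their groups.

```python
# Hypothetical sketch: restrict the asynchronous all-reduce to a data-parallel
# process group so the hook-based overlap could coexist with model parallelism.
import torch.distributed as dist


def build_data_parallel_group(model_parallel_size: int):
    """Return the process group of ranks holding replicas of this rank's shard.

    Assumes ranks i, i + model_parallel_size, i + 2 * model_parallel_size, ...
    form one data-parallel group.
    """
    world_size = dist.get_world_size()
    rank = dist.get_rank()
    my_group = None
    for i in range(model_parallel_size):
        ranks = list(range(i, world_size, model_parallel_size))
        # Every rank must create every group, even ones it does not belong to.
        group = dist.new_group(ranks)
        if rank in ranks:
            my_group = group
    return my_group


# Inside the gradient hook, only the data-parallel replicas would participate:
#   dist.all_reduce(param.grad, op=dist.ReduceOp.SUM,
#                   group=data_parallel_group, async_op=True)
```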
We'd love to help if you are interested in applying such an optimization to your project (gpt-neox or DeeperSpeed)~ Thank you again for your great contribution to the community!
P.S. We found some behavior different from what the comment here describes:
gpt-neox/megatron/neox_arguments/arguments.py, lines 755 to 758 at f6c611f
The `PipelineModule` wrapper is used when `pipe_parallel_size` is set to 1, and the `to_sequential()` version is used only when `pipe_parallel_size` is set to 0; `PipelineModule` is observably faster than the `to_sequential()` version. I wonder if this is expected behavior? Thank you.