This repository has been archived by the owner on Dec 16, 2022. It is now read-only.

flatten_parameters automatically for multi-GPU RNNs #2793

Open
vinbo8 opened this issue May 2, 2019 · 1 comment

vinbo8 (Contributor) commented May 2, 2019

I'm not entirely sure whether this is intentional, but perhaps adding a flatten_parameters() call to the underlying PyTorch RNN might be handy? Right now, multi-GPU RNNs throw warnings, and I'm not sure what the impact on performance is. #2294 references this; the author there solved it by calling flatten_parameters() manually.

RuntimeWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
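
For reference, here is a minimal sketch of the manual workaround described above: a wrapper module that calls flatten_parameters() at the start of every forward pass, so the weights are re-compacted on each replica under nn.DataParallel. The FlattenedLSTM name and its sizes are illustrative, not part of AllenNLP or PyTorch.

```python
import torch
import torch.nn as nn


class FlattenedLSTM(nn.Module):
    """Illustrative wrapper (hypothetical name, not a library API):
    re-compacts the LSTM weights before every forward pass."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)

    def forward(self, inputs, hidden=None):
        # flatten_parameters() copies the RNN weights into one contiguous
        # chunk of device memory. Replication under nn.DataParallel breaks
        # that contiguity, which is what triggers the RuntimeWarning quoted
        # above; calling it here silences the warning and avoids the
        # per-call compaction cost.
        self.lstm.flatten_parameters()
        return self.lstm(inputs, hidden)


# Usage sketch: wrap in DataParallel as usual.
model = FlattenedLSTM(input_size=16, hidden_size=32)
x = torch.randn(4, 10, 16)  # (batch, sequence, features)
if torch.cuda.is_available():
    model = nn.DataParallel(model).cuda()
    x = x.cuda()
output, (h_n, c_n) = model(x)
```

Presumably the fix this issue asks for would do something similar inside the library's own RNN encoder wrappers, so users wouldn't have to call flatten_parameters() themselves.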

joelgrus (Contributor) commented May 3, 2019

We hope to get to this soon, but it's not an explicit priority right now. A PR is welcome.
