Tags: ditschuk/pytorch

v1.9.0

[docs] Add torch.package documentation for beta release (pytorch#59886)

**Summary**
This commit adds documentation for the `torch.package` module to
accompany its beta release in 1.9.

**Test Plan**
Continuous integration.
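
As a hedged illustration of the `torch.package` workflow that this documentation covers, here is a minimal sketch; the file name, package/resource names, and the intern/extern patterns are illustrative assumptions, not code from the release:

```python
import torch
from torch.package import PackageExporter, PackageImporter

# Package a small model, then load it back from the archive.
model = torch.nn.Linear(4, 2)

with PackageExporter("linear.pt") as exporter:
    exporter.extern("torch.**")   # leave torch itself outside the archive
    exporter.intern("**")         # bundle any remaining dependencies
    exporter.save_pickle("models", "model.pkl", model)

importer = PackageImporter("linear.pt")
loaded = importer.load_pickle("models", "model.pkl")
print(loaded(torch.randn(1, 4)))
```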

v1.9.0-rc4

[docs] Add torch.package documentation for beta release (pytorch#59886)

**Summary**
This commit adds documentation for the `torch.package` module to
accompany its beta release in 1.9.

**Test Plan**
Continuous integration.

v1.9.0-rc3

Fix test_randperm_device_compatibility for 1 GPU (pytorch#59484) (pytorch#59502)

Summary:
Do not try to create tensors on 2nd device if device_count() == 1

Fixes #{issue number}

Pull Request resolved: pytorch#59484

Reviewed By: ngimel

Differential Revision: D28910673

Pulled By: malfet

fbshipit-source-id: e3517f31a463dd049ce8a5155409b7b716c8df18
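
The fix above amounts to guarding any use of a second CUDA device behind a device-count check. A minimal sketch of that pattern (not the actual test code; the shapes and device strings are assumptions):

```python
import torch

# Only exercise a second GPU when more than one device is present.
if torch.cuda.device_count() > 1:
    perm = torch.randperm(10, device="cuda:1")
else:
    perm = torch.randperm(10)  # fall back to the default device
```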

v1.9.0-rc2

Document factory_kwargs in nn.Quantize + remove Attributes section (pytorch#59025) (pytorch#59045)

Summary:
The `factory_kwargs` kwarg was previously undocumented in `nn.Quantize`. Further, the `Attributes` section of the docs was improperly filled in, resulting in bad formatting. This section doesn't apply since `nn.Quantize` doesn't have parameters, so it has been removed.

Pull Request resolved: pytorch#59025

Reviewed By: anjali411

Differential Revision: D28723889

Pulled By: jbschlosser

fbshipit-source-id: ba86429f66d511ac35042ebd9c6cc3da7b6b5805

Co-authored-by: Joel Schlosser <[email protected]>
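
As a hedged sketch of the newly documented keyword, here is how `factory_kwargs` might be passed to `nn.Quantize`; the concrete argument values are illustrative assumptions:

```python
import torch
from torch.nn.quantized import Quantize

# factory_kwargs controls how the module's internal scale/zero_point
# tensors are created (e.g. which device they live on).
quant = Quantize(scale=1.0, zero_point=0, dtype=torch.quint8,
                 factory_kwargs={"device": "cpu"})
xq = quant(torch.randn(2, 3))  # returns a quantized tensor
```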

v1.9.0-rc1

[release/1.9] Fix issues regarding binary_checkout (pytorch#58495)

Signed-off-by: Eli Uriegas <[email protected]>

v1.8.1

Perform appropriate CUDA stream synchronization in distributed autograd. (pytorch#53929) (pytorch#54358)

Summary:
Pull Request resolved: pytorch#53929

The local autograd engine performs appropriate stream synchronization
between autograd nodes in the graph to ensure a consumer's stream is
synchronized with the producer's stream before executing the consumer.

However, in the case of distributed autograd, the SendRpcBackward function receives
gradients over the wire and TensorPipe uses its own pool of streams for this
purpose. As a result, the tensors are received on TensorPipe's stream pool but
SendRpcBackward runs on a different stream during the backward pass and there
is no logic to synchronize these streams.

To fix this, I've enhanced DistEngine to synchronize these streams
appropriately when it receives grads over the wire.
ghstack-source-id: 124055277

(Note: this ignores all push blocking failures!)

Test Plan:
1) Added unit test which reproduced the issue.
2) waitforbuildbot.

Reviewed By: walterddr, wanchaol

Differential Revision: D27025307

fbshipit-source-id: 2944854e688e001cb3989d2741727b30d9278414

Co-authored-by: Pritam Damania <[email protected]>
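
The general stream-synchronization pattern the fix relies on can be sketched as follows; this is an illustrative consumer/producer example, not the actual DistEngine code:

```python
import torch

# Work queued on a side stream (e.g. tensors received by TensorPipe) must not
# be consumed on another stream until that stream waits on the producer.
producer = torch.cuda.Stream()
consumer = torch.cuda.current_stream()

with torch.cuda.stream(producer):
    grad = torch.randn(1024, device="cuda")  # stand-in for a received gradient

consumer.wait_stream(producer)   # consumer waits for the producer's queued work
grad.record_stream(consumer)     # tie the tensor's lifetime to the consumer stream

out = grad * 2  # now safe to use on the consumer (current) stream
```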

v1.8.1-rc3

Perform appropriate CUDA stream synchronization in distributed autograd. (pytorch#53929) (pytorch#54358)

Summary:
Pull Request resolved: pytorch#53929

The local autograd engine performs appropriate stream synchronization
between autograd nodes in the graph to ensure a consumer's stream is
synchronized with the producer's stream before executing the consumer.

However, in the case of distributed autograd, the SendRpcBackward function receives
gradients over the wire and TensorPipe uses its own pool of streams for this
purpose. As a result, the tensors are received on TensorPipe's stream pool but
SendRpcBackward runs on a different stream during the backward pass and there
is no logic to synchronize these streams.

To fix this, I've enhanced DistEngine to synchronize these streams
appropriately when it receives grads over the wire.
ghstack-source-id: 124055277

(Note: this ignores all push blocking failures!)

Test Plan:
1) Added unit test which reproduced the issue.
2) waitforbuildbot.

Reviewed By: walterddr, wanchaol

Differential Revision: D27025307

fbshipit-source-id: 2944854e688e001cb3989d2741727b30d9278414

Co-authored-by: Pritam Damania <[email protected]>

v1.8.1-rc2

third_party: Update kineto to fix libtorch builds (pytorch#54205)

Signed-off-by: Eli Uriegas <[email protected]>

v1.8.1-rc1

third_party: Update kineto to fix libtorch builds (pytorch#54205)

Signed-off-by: Eli Uriegas <[email protected]>

v1.8.0

Fix hipify_python (pytorch#52756)

Co-authored-by: rraminen <[email protected]>
Co-authored-by: Nikita Shulga <[email protected]>