Issues: pytorch/xla
- #8086 [hard] `nn.functional.max_pool2d` and `nn.functional.max_pool3d` (labels: torchxla2-hard, torchxla2), opened Sep 27, 2024 by ManfeiBai
- #8063 Support the `xla_force_host_platform_device_count` for CPU for multiple device count and GSPMD, opened Sep 24, 2024 by mario-aws
- #8057 `PjRtComputationClient::ExecuteReplicated` core dump when encountering a scalar, opened Sep 24, 2024 by mars1248
- #8017 [hard] Op info test for `masked.median` (labels: torchxla2-hard, torchxla2), opened Sep 15, 2024 by ManfeiBai
- #8000 [RFC] `torch_xla` Backward Compatibility Proposal (labels: 2.5 release, documentation), opened Sep 12, 2024 by zpcore
- #7987 Speeding up computation while using SPMD on large TPU pod, opened Sep 10, 2024 by dudulightricks
- #7986 `While` operator test generates condition input as a parameter instead of a constant, opened Sep 10, 2024 by aws-rhsoln
- #7976 [benchmarks] `dlrm` training running twice on dynamo and non-dynamo configurations (label: xla:gpu), opened Sep 9, 2024 by ysiraichi
- #7970 Core dump when calling `jax.device_put` on array mapped from `torch.tensor` to JAX with xla2, opened Sep 6, 2024 by tengomucho
- #7962 Ran out of memory in memory space vmem / register allocator spill slots call depth 2, opened Sep 5, 2024 by radna0
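Issue #8063 above concerns `xla_force_host_platform_device_count`. As context, and not as the fix proposed in that issue, here is a minimal sketch of how this XLA flag is commonly supplied to an XLA client such as JAX: it is appended to the `XLA_FLAGS` environment variable before the runtime initializes. The device count of 8 is an arbitrary illustration.

```python
import os

# Append the flag to any flags already present. XLA parses XLA_FLAGS
# once, when the backend initializes, so this must run before the
# first import/use of the XLA client (e.g. JAX).
existing = os.environ.get("XLA_FLAGS", "")
os.environ["XLA_FLAGS"] = (
    existing + " --xla_force_host_platform_device_count=8"
).strip()

# After this point, `import jax; jax.devices()` on a CPU-only host
# would report 8 host-platform devices instead of 1, which is useful
# for testing multi-device/SPMD code paths without accelerators.
print(os.environ["XLA_FLAGS"])
```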