
Errors when 0-dim tensor of complex or bool type passed to aminmax. #128404

Closed
wants to merge 7 commits into main from complex-bool-errors

Conversation

@ajbrent (Contributor) commented Jun 11, 2024:

Fixes #126742

Added errors for the case of 0-dim tensors of complex or bool types passed to aminmax.

cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
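
For illustration only, a minimal sketch of the behavior this change enforces; it assumes the error surfaced is the RuntimeError with a "not implemented" message that the new test checks for, and it is not part of the PR's diff:

    import torch

    x = torch.tensor(1.0 + 2.0j)  # 0-dim complex tensor
    try:
        # After this change, a 0-dim complex (or bool) tensor combined with a
        # dim argument raises instead of silently returning a result.
        torch.aminmax(x, dim=0)
    except RuntimeError as e:
        print("aminmax raised:", e)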


pytorch-bot bot commented Jun 11, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/128404

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures

As of commit 97af98b with merge base 92ca17d:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added the module: cpu CPU specific problem (e.g., perf, algorithm) label Jun 11, 2024
@colesbury colesbury requested a review from mingfeima June 12, 2024 01:38
@colesbury colesbury added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Jun 12, 2024
@colesbury colesbury requested a review from janeyx99 June 12, 2024 01:38
@janeyx99 (Contributor) left a comment:


looks fine but missing test case!

@ajbrent ajbrent requested a review from mruberry as a code owner June 14, 2024 00:30
Comment on lines 1236 to 1239
if dtype is bool:
    torch.aminmax(torch.tensor(1, dtype=dtype, device=device), dim=0)
else:
    torch.aminmax(torch.tensor(1, dtype=dtype, device=device), dim=0)

Suggested change:
-if dtype is bool:
-    torch.aminmax(torch.tensor(1, dtype=dtype, device=device), dim=0)
-else:
-    torch.aminmax(torch.tensor(1, dtype=dtype, device=device), dim=0)
+torch.aminmax(torch.tensor(1, dtype=dtype, device=device), dim=0)

the if and else statements look identical

@dtypes(*complex_types())
def test_invalid_0dim_aminmax(self, device, dtype):
    with self.assertRaisesRegex(RuntimeError, 'not implemented'):
        torch.aminmax(torch.tensor(True, dtype=dtype, device=device), dim=0)

Suggested change:
-torch.aminmax(torch.tensor(True, dtype=dtype, device=device), dim=0)
+torch.aminmax(torch.tensor(1.0, dtype=dtype, device=device), dim=0)

could we use a scalar that is not a bool since bool's not the intended target dtype here?
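
With the suggested change applied, the new test would read roughly as follows (a sketch, assuming the @dtypes and complex_types helpers already used in the surrounding test file):

    @dtypes(*complex_types())
    def test_invalid_0dim_aminmax(self, device, dtype):
        # A 0-dim complex tensor passed together with dim should raise.
        with self.assertRaisesRegex(RuntimeError, 'not implemented'):
            torch.aminmax(torch.tensor(1.0, dtype=dtype, device=device), dim=0)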

@janeyx99 (Contributor) left a comment:


merge only if CI passes

@janeyx99 (Contributor) commented:

@pytorchbot rebase -b main

@pytorchmergebot (Collaborator) commented:

@pytorchbot started a rebase job onto refs/remotes/origin/main. Check the current status here

@pytorchmergebot (Collaborator) commented:

Successfully rebased complex-bool-errors onto refs/remotes/origin/main, please pull locally before adding more changes (for example, via git checkout complex-bool-errors && git pull --rebase)

@janeyx99 (Contributor) commented:

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Jun 24, 2024
@pytorchmergebot (Collaborator) commented:

Merge failed

Reason: This PR needs a release notes: label
If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Details for Dev Infra team (raised by workflow job).

@janeyx99 janeyx99 added the release notes: python_frontend release notes category label Jun 24, 2024
@janeyx99 (Contributor) commented:

@pytorchbot merge

@pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status.

@pytorchmergebot (Collaborator) commented:

Merge failed

Reason: 2 jobs have failed, first few of them are: trunk / macos-py3-arm64-mps / test (mps, 1, 1, macos-m1-13), trunk / macos-py3-arm64-mps / test (mps, 1, 1, macos-m1-14)

Details for Dev Infra team (raised by workflow job).

@janeyx99 (Contributor) commented:

@pytorchbot merge -f "Tests that would have run already pass"

@pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f as last resort and instead consider -i/--ignore-current to continue the merge ignoring current failures. This will allow currently pending tests to finish and report signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status.

Labels
- ciflow/trunk (Trigger trunk jobs on your pull request)
- Merged
- module: cpu (CPU specific problem, e.g., perf, algorithm)
- open source
- release notes: python_frontend (release notes category)
- triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

The complex type of a 0D tensor with aminmax() and dim=0 or dim=-1 works
5 participants