WIP Add 3D channels last tensor iterator support #118377
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/118377
Note: Links to docs will display an error until the docs builds have been completed.
❌ 36 New Failures, 3 Cancelled Jobs, 4 Unrelated Failures as of commit 1e03b32 with merge base 68a1f78:
NEW FAILURES - The following jobs have failed:
CANCELLED JOBS - The following jobs were cancelled. Please retry:
FLAKY - The following jobs failed but were likely due to flakiness present on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Force-pushed from d7e7ea6 to 84ff863
Force-pushed from d8e2ff1 to 66119dc
Force-pushed from 51b6648 to 13ea4bd
@peterbell10 could you please advise me? In all honesty, I don't know what I'm doing. I understand that the strides of the grad output are making this test fail, but I can't understand what I've done wrong.
Force-pushed from 13ea4bd to 1e03b32
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

with FakeTensorMode():
    grad_out = torch.rand(2, 3, 4, 4, 4)
    inp = torch.rand(2, 3, 4, 4, 4).to(memory_format=torch.channels_last_3d)
    grad_in = torch.ops.aten._adaptive_avg_pool3d_backward(grad_out, inp)
This is running under fake tensor mode which uses the meta-registration from torch/_meta_registrations.py
rather than the concrete implementation. So this test isn't hitting your code at all.
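One way to actually exercise the concrete kernel is to drop the FakeTensorMode context and call the op on real tensors, then check the gradient's layout. A minimal sketch, illustrative only and not the PR's actual test (shapes are taken from the snippet above; the layout assertion is an assumption about what the kernel should do):

# Illustrative sketch: without FakeTensorMode, dispatch reaches the real
# aten kernel, so the PR's stride/layout handling is actually exercised.
import torch

grad_out = torch.rand(2, 3, 4, 4, 4)
inp = torch.rand(2, 3, 4, 4, 4).to(memory_format=torch.channels_last_3d)
grad_in = torch.ops.aten._adaptive_avg_pool3d_backward(grad_out, inp)

# Assumption: the backward should propagate the input's channels-last-3d
# layout to the gradient it returns.
print(grad_in.is_contiguous(memory_format=torch.channels_last_3d))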
Damn. I modified the aten library though. Do you have any idea why the registration doesn't take my aten modifications into account?
The meta function doesn't call into aten; you need to update it directly:
pytorch/torch/_meta_registrations.py, lines 2830 to 2850 at commit 499ead9:
@register_meta(aten._adaptive_avg_pool2d_backward.default)
def meta__adaptive_avg_pool2d_backward(grad_out, self):
    ndim = grad_out.ndim
    for i in range(1, ndim):
        torch._check(
            grad_out.size(i) > 0,
            lambda: f"adaptive_avg_pool2d_backward(): Expected grad_output to have non-zero \
size for non-batch dimensions, {grad_out.shape} with dimension {i} being empty",
        )
    torch._check(
        ndim == 3 or ndim == 4,
        lambda: f"adaptive_avg_pool2d_backward(): Expected 3D or 4D tensor, but got {self.shape}",
    )
    torch._check(
        self.dtype == grad_out.dtype,
        lambda: f"expected dtype {self.dtype} for `grad_output` but got dtype {grad_out.dtype}",
    )
    memory_format = torch.contiguous_format
    if is_channels_last(self):
        memory_format = torch.channels_last
    return self.new_empty(self.shape).to(memory_format=memory_format)
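For reference, a hypothetical sketch of what a matching 3D meta registration could look like. This is not the PR's actual change: the 4D/5D dimension checks simply mirror the 2D pattern, and self.is_contiguous(memory_format=torch.channels_last_3d) is used as a stand-in layout check since the 2D helper is_channels_last has no 3D counterpart shown here.

# Hypothetical sketch only; not taken from the PR. It mirrors the 2D meta
# function above, shifted to the 3D op and the channels_last_3d layout.
@register_meta(aten._adaptive_avg_pool3d_backward.default)
def meta__adaptive_avg_pool3d_backward(grad_out, self):
    ndim = grad_out.ndim
    for i in range(1, ndim):
        torch._check(
            grad_out.size(i) > 0,
            lambda: f"adaptive_avg_pool3d_backward(): Expected grad_output to have non-zero \
size for non-batch dimensions, {grad_out.shape} with dimension {i} being empty",
        )
    torch._check(
        ndim == 4 or ndim == 5,
        lambda: f"adaptive_avg_pool3d_backward(): Expected 4D or 5D tensor, but got {self.shape}",
    )
    torch._check(
        self.dtype == grad_out.dtype,
        lambda: f"expected dtype {self.dtype} for `grad_output` but got dtype {grad_out.dtype}",
    )
    # Assumption: the gradient should inherit the input's memory format so a
    # channels-last-3d input produces a channels-last-3d grad_input.
    memory_format = torch.contiguous_format
    if self.is_contiguous(memory_format=torch.channels_last_3d):
        memory_format = torch.channels_last_3d
    return self.new_empty(self.shape).to(memory_format=memory_format)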
Part of a multi-PR effort to improve #59168.