WIP Add 3D channels last tensor iterator support #118377

Draft · dmenig wants to merge 1 commit into main from 3d_channels_last_iterator_2
Conversation

dmenig (Collaborator) commented Jan 26, 2024

Part of a multi-PR effort to address #59168.

pytorch-bot (bot) commented Jan 26, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/118377

Note: Links to docs will display an error until the docs builds have been completed.

❌ 36 New Failures, 3 Cancelled Jobs, 4 Unrelated Failures

As of commit 1e03b32 with merge base 68a1f78:

NEW FAILURES - The following jobs have failed:

CANCELLED JOBS - The following jobs were cancelled. Please retry:

FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

github-actions (bot) commented Mar 30, 2024

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the Stale label Mar 30, 2024
@github-actions github-actions bot closed this Apr 29, 2024
@dmenig dmenig reopened this May 2, 2024
@dmenig dmenig force-pushed the 3d_channels_last_iterator_2 branch 2 times, most recently from d7e7ea6 to 84ff863, on May 7, 2024 07:35
@dmenig dmenig force-pushed the 3d_channels_last_iterator_2 branch from d8e2ff1 to 66119dc on May 14, 2024 07:23
@dmenig dmenig changed the title from WIP Add tensor iterator support to WIP Add 3D channels last tensor iterator support on May 14, 2024
@dmenig dmenig force-pushed the 3d_channels_last_iterator_2 branch 2 times, most recently from 51b6648 to 13ea4bd, on May 15, 2024 08:54
dmenig (Collaborator, Author) commented May 15, 2024

@peterbell10 could you please advise me? In all honesty, I don't know what I'm doing. I understand that the strides of the grad output are making this test fail, but I can't understand what I've done wrong.

@dmenig dmenig force-pushed the 3d_channels_last_iterator_2 branch from 13ea4bd to 1e03b32 on May 15, 2024 08:59
@github-actions github-actions bot closed this Jun 14, 2024
@dmenig dmenig reopened this Jun 23, 2024
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

with FakeTensorMode():
    grad_out = torch.rand(2, 3, 4, 4, 4)
    inp = torch.rand(2, 3, 4, 4, 4).to(memory_format=torch.channels_last_3d)
    grad_in = torch.ops.aten._adaptive_avg_pool3d_backward(grad_out, inp)
A collaborator commented on this code:
This is running under fake tensor mode, which uses the meta-registration from torch/_meta_registrations.py rather than the concrete implementation. So this test isn't hitting your code at all.
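
For context, here is a minimal sketch (not from the PR) of the same call on real tensors, where dispatch does reach the concrete ATen kernel, so the effect of the C++ changes on the output strides can actually be observed:

import torch

# Real (non-fake) tensors go through the concrete kernel rather than the
# meta-registration, so this exercises the ATen changes under test.
grad_out = torch.rand(2, 3, 4, 4, 4)
inp = torch.rand(2, 3, 4, 4, 4).to(memory_format=torch.channels_last_3d)
grad_in = torch.ops.aten._adaptive_avg_pool3d_backward(grad_out, inp)

# Check whether the gradient comes back in channels-last-3d layout.
print(grad_in.is_contiguous(memory_format=torch.channels_last_3d))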

dmenig (Collaborator, Author) replied:

Damn. I modified the ATen library, though. Do you have any idea why the registration doesn't take my ATen modifications into account?

Collaborator replied:

The meta function doesn't call into ATen; you need to update it directly:

@register_meta(aten._adaptive_avg_pool2d_backward.default)
def meta__adaptive_avg_pool2d_backward(grad_out, self):
    ndim = grad_out.ndim
    for i in range(1, ndim):
        torch._check(
            grad_out.size(i) > 0,
            lambda: f"adaptive_avg_pool2d_backward(): Expected grad_output to have non-zero \
size for non-batch dimensions, {grad_out.shape} with dimension {i} being empty",
        )
    torch._check(
        ndim == 3 or ndim == 4,
        lambda: f"adaptive_avg_pool2d_backward(): Expected 3D or 4D tensor, but got {self.shape}",
    )
    torch._check(
        self.dtype == grad_out.dtype,
        lambda: f"expected dtype {self.dtype} for `grad_output` but got dtype {grad_out.dtype}",
    )
    memory_format = torch.contiguous_format
    if is_channels_last(self):
        memory_format = torch.channels_last
    return self.new_empty(self.shape).to(memory_format=memory_format)
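
For the 3D case the PR would presumably need an analogous meta-registration. A hedged sketch, mirroring the 2D function above; the function name, the 4D/5D check, and the channels_last_3d detection via is_contiguous are assumptions here, not code from this PR:

@register_meta(aten._adaptive_avg_pool3d_backward.default)
def meta__adaptive_avg_pool3d_backward(grad_out, self):
    ndim = grad_out.ndim
    for i in range(1, ndim):
        torch._check(
            grad_out.size(i) > 0,
            lambda: f"adaptive_avg_pool3d_backward(): Expected grad_output to have non-zero \
size for non-batch dimensions, {grad_out.shape} with dimension {i} being empty",
        )
    # 3D pooling operates on 4D (C, D, H, W) or 5D (N, C, D, H, W) tensors.
    torch._check(
        ndim == 4 or ndim == 5,
        lambda: f"adaptive_avg_pool3d_backward(): Expected 4D or 5D tensor, but got {self.shape}",
    )
    torch._check(
        self.dtype == grad_out.dtype,
        lambda: f"expected dtype {self.dtype} for `grad_output` but got dtype {grad_out.dtype}",
    )
    # Assumption: propagate channels_last_3d when the input carries that layout.
    memory_format = torch.contiguous_format
    if self.is_contiguous(memory_format=torch.channels_last_3d):
        memory_format = torch.channels_last_3d
    return self.new_empty(self.shape).to(memory_format=memory_format)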

@github-actions github-actions bot closed this Jul 25, 2024
@dmenig dmenig reopened this Jul 30, 2024