
Add jagged_sum operator for unpadded nested tensors to TritonBench #2299

Closed · wants to merge 1 commit

Conversation

jananisriram (Contributor)


Summary:
Add a `jagged_sum` reduction operator for unpadded nested tensors, based on the PyTorch `sum` operator, to TritonBench. This diff implements a basic benchmark for reducing along the ragged dimension for 3-dimensional nested tensors. For a 3-dimensional tensor of shape `(B, *, M)`, where `*` is the ragged dimension, this benchmark uses PyTorch's `sum` operator to reduce `B` `(*, M)` 2-dimensional tensors to a `(B, M)` output tensor.
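A minimal sketch of the reduction described above, written in plain PyTorch outside the TritonBench harness (the tensor sizes and helper code here are illustrative assumptions, not the benchmark's actual inputs):

```python
import torch

B, M = 4, 8
# Ragged inner dimension: one (len_i, M) tensor per batch element, with varying len_i.
components = [torch.randn(torch.randint(1, 10, (1,)).item(), M) for _ in range(B)]
nt = torch.nested.nested_tensor(components)  # 3-D nested tensor of shape (B, *, M)

# Reduce each 2-D (*, M) component over its ragged dimension with torch.sum,
# then stack the B results into a dense (B, M) output.
out = torch.stack([torch.sum(t, dim=0) for t in nt.unbind()])
assert out.shape == (B, M)
```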

Measure performance of basic benchmark with `gbps` and `latency` metrics and display nested tensor parameters `B` and `M`.
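As a rough sketch of what the `gbps` figure represents (an assumption about the metric's definition, not TritonBench's exact implementation), memory bandwidth can be derived from the bytes moved and the measured latency:

```python
def gbps(nbytes_read: int, nbytes_written: int, latency_ms: float) -> float:
    # Total bytes transferred, converted to GB, divided by latency in seconds.
    return (nbytes_read + nbytes_written) * 1e-9 / (latency_ms * 1e-3)
```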

Reviewed By: YuqingJ

Differential Revision: D58396957
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D58396957

@facebook-github-bot (Contributor)

This pull request has been merged in 576b2b2.
