
Slicing tensors with negative steps #604

Closed
MaximumEntropy opened this issue Jan 27, 2017 · 6 comments

@MaximumEntropy
Contributor

It'd be nice to be able to slice tensors using a negative step, such as tensor[::-1] or tensor[:, ::-1, :], as is possible with NumPy or Theano. It would make implementing things like bidirectional RNNs (without using the built-in RNN modules) simpler.
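
For context, a minimal sketch (assuming a recent PyTorch where torch.flip is available) contrasting the requested NumPy-style slicing with the usual copying workaround:

```python
import numpy as np
import torch

a = np.arange(6).reshape(2, 3)
rev_np = a[:, ::-1]                 # NumPy: zero-copy view via a negative stride

t = torch.arange(6).reshape(2, 3)
# t[:, ::-1]                        # PyTorch: raises ValueError (negative steps are rejected)
rev_t = torch.flip(t, dims=[1])     # workaround: returns a reversed copy
```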

@fmassa
Member

fmassa commented Jan 27, 2017

Allowing negative strides is already being tracked in #229

@apaszke
Contributor

apaszke commented Jan 27, 2017

I agree, but our C backends will require a few, possibly nontrivial, changes. I'm closing this since it's a duplicate of #229.

@apaszke apaszke closed this as completed Jan 27, 2017
@MaximumEntropy
Contributor Author

Oops, apparently I didn't look carefully enough. Apologies.

@el3ment

el3ment commented Jul 28, 2018

@apaszke I think this issue might deserve to be reopened. torch.flip isn't quite the same thing as negative striding or negative stepping -- torch.flip produces an (efficient) copy, but negative striding wouldn't need to perform a copy at all.
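
To make the copy-versus-view distinction concrete, a small sketch (assuming current NumPy and PyTorch semantics): a NumPy negative-step slice is a writable view over the same storage, while torch.flip allocates new storage:

```python
import numpy as np
import torch

a = np.arange(4)
v = a[::-1]                   # view: shares a's memory via a negative stride
v[0] = 99                     # writes through to the base array
assert a[3] == 99

t = torch.arange(4)
f = torch.flip(t, dims=[0])   # copy: new storage is allocated
f[0] = 99                     # the original tensor is untouched
assert t[3].item() == 3
```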

@soumith
Member

soumith commented Aug 14, 2018

@el3ment while what you say is absolutely correct, I don't think we are planning to support negative striding anytime in the near or far future (if ever). It requires rethinking our internals and is a huge undertaking for what we see as a small perceived benefit. cc: @ezyang to double-check that our new C10 Tensor design is not incorporating negative strides.
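
To illustrate what rethinking the internals would involve, a sketch using NumPy's as_strided (purely illustrative, not PyTorch API): a reversed view needs both a negative stride and a data pointer offset to the last element, bookkeeping PyTorch's backends don't support:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.arange(4, dtype=np.int64)
# A reversed "view" needs a negative stride *and* a base pointer at the
# last element; as_strided lets us spell that arithmetic out explicitly.
rev = as_strided(a[3:], shape=(4,), strides=(-a.itemsize,))
assert (rev == a[::-1]).all()
```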

@ezyang
Contributor

ezyang commented Aug 14, 2018

We were originally thinking that when we looked into changing TH's size/stride semantics, it might be "easy" to also make things work with negative strides, but we ended up using all our engineering budget just on making zero-size dims and scalars work correctly.

mrshenli pushed a commit to mrshenli/pytorch that referenced this issue Apr 11, 2020
Update index.rst - Adds torchscript to left nav
mratsim added a commit to SciNim/flambeau that referenced this issue Jan 5, 2021
jjsjann123 added a commit to jjsjann123/pytorch that referenced this issue Jan 22, 2021
To support fusion with the linear layer, we:

* added a new operator, add_optional, that supports add with Optional[Tensor];
* added a decompose pass that breaks the linear layer into matmul and add_optional (sketched below);
* added a parser rule for add_optional and linear;
* added linear in autodiff;
* added a Python test.
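
A minimal sketch of that decomposition (here add_optional is a hypothetical Python stand-in for the fuser operator, used only to show the algebra):

```python
import torch
from typing import Optional

def add_optional(x: torch.Tensor, bias: Optional[torch.Tensor]) -> torch.Tensor:
    # Hypothetical stand-in for the fuser's add_optional op: add the
    # bias only when one is present.
    return x if bias is None else x + bias

def decomposed_linear(x: torch.Tensor, weight: torch.Tensor,
                      bias: Optional[torch.Tensor] = None) -> torch.Tensor:
    # linear decomposed into matmul followed by add_optional, mirroring
    # the pass described in the commit above.
    return add_optional(torch.matmul(x, weight.t()), bias)

x, w, b = torch.randn(2, 3), torch.randn(4, 3), torch.randn(4)
assert torch.allclose(decomposed_linear(x, w, b),
                      torch.nn.functional.linear(x, w, b))
```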
KyleCZH pushed a commit to KyleCZH/pytorch that referenced this issue Sep 20, 2021
* Allow overriding TORCH_CUDA_ARCH_LIST

* Don't change cuDNN pruning, as some archs need to fall back