
Got the RuntimeError. Would anyone help me? #1

Closed
huangteng1220 opened this issue Sep 5, 2018 · 2 comments
@huangteng1220

(cv2.7) gpuserver@ubuntu:~/ht/labs/temporal-ensembling$ python mnist_eval.py
Traceback (most recent call last):
File "mnist_eval.py", line 50, in
acc, acc_best, l, sl, usl, indices = train(model, seed, **cfg)
File "/home/gpuserver/ht/labs/temporal-ensembling/temporal_ensembling.py", line 86, in train
k, n_classes, seed, shuffle_train=False)
File "/home/gpuserver/ht/labs/temporal-ensembling/temporal_ensembling.py", line 25, in sample_train
indices[i * card: (i + 1) * card] = class_items[rd[:card]]
RuntimeError: expand(torch.LongTensor{[10, 1]}, size=[10]): the number of sizes provided (1) must be greater or equal to the number of dimensions in the tensor (2)

@sweetdream33

I get the same error.

Traceback (most recent call last):
File "mnist_eval.py", line 50, in
acc, acc_best, l, sl, usl, indices = train(model, seed, **cfg)
File "/storage/ssd/jupyter-nb-workspace/temporal-ensembling-master/temporal_ensembling.py", line 86, in train
k, n_classes, seed, shuffle_train=False)
File "/storage/ssd/jupyter-nb-workspace/temporal-ensembling-master/temporal_ensembling.py", line 25, in sample_train
indices[i * card: (i + 1) * card] = class_items[rd[:card]]
RuntimeError: expand(torch.LongTensor{[10, 1]}, size=[10]): the number of sizes provided (1) must be greater or equal to the number of dimensions in the tensor (2)

@ferretj
Owner

ferretj commented Nov 15, 2018

Hey, sorry about the delay.

The error is a dimension mismatch caused by breaking changes introduced in PyTorch 0.4.0 and later. I ran my experiments with PyTorch 0.3.0.post4 and torchvision 0.2.0.

I recommend using the same versions I did, and I will update the README so that users know which versions to use.

If you want to stick with the latest PyTorch version, a very simple change in temporal_ensembling.py did the job in my case (line 22):
class_items = (train_dataset.train_labels == i).nonzero()[:, 0]
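
As a rough illustration (this snippet is not from the original thread; the tensor values are made up and it assumes PyTorch 0.4 or later), here is why the extra [:, 0] indexing matters:

import torch

# Toy stand-in for train_dataset.train_labels (values are illustrative).
labels = torch.tensor([0, 1, 0, 2, 0, 1])

# In PyTorch >= 0.4, nonzero() on a 1-D mask returns a 2-D tensor of shape [num_matches, 1].
class_items_2d = (labels == 0).nonzero()        # shape [3, 1]
class_items_1d = (labels == 0).nonzero()[:, 0]  # shape [3], as in the fix above

indices = torch.zeros(3, dtype=torch.long)
indices[:] = class_items_1d    # works: a [3] tensor fits the [3] slice
# indices[:] = class_items_2d  # would raise the expand(...) RuntimeError shown in the tracebacks

The [:, 0] simply flattens the index tensor so it can be written into the 1-D indices slice.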

Hope this helps, I will close the issue for now.

ferretj closed this as completed Nov 15, 2018