Need help to do inference for grayscale images #32
Comments
Can you give me your gray images?
You can use the KITTI pretrained model; that will perform well.
Thanks gangweiX. I see sensible output with the KITTI 2015 pre-trained network for the above images.
I want to run the network on grayscale images (single channel).
I get this error when running the network on gray images:
Traceback (most recent call last):
File "demo_imgs.py", line 100, in
demo(args)
File "demo_imgs.py", line 50, in demo
image1 = load_image(imfile1)
File "demo_imgs.py", line 29, in load_image
img = torch.from_numpy(img).permute(2, 0, 1).float()
RuntimeError: number of dims don't match in permute
I tried copying the same gray value into all 3 channels, but the results are not very good.
I see that ETH3D is a grayscale image dataset, so I also tried the shared ETH3D model,
but I still get the above error.
Can you please share what change is needed to adapt the network to grayscale images?
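For reference, a minimal sketch of a grayscale-tolerant loader. `load_image_gray_ok` is a hypothetical name; the original `load_image` in `demo_imgs.py` is only known from the traceback above, where `permute(2, 0, 1)` fails because a grayscale file decodes to a 2-D array. Replicating the single channel, as attempted in the question, lets the 3-channel pretrained weights run unchanged:

```python
import numpy as np
import torch
from PIL import Image

def load_image_gray_ok(imfile):
    """Load an image as a CHW float tensor, tolerating single-channel input.

    Hypothetical variant of load_image from demo_imgs.py: the original is
    assumed to call torch.from_numpy(img).permute(2, 0, 1), which raises
    "number of dims don't match in permute" on a 2-D grayscale array.
    """
    img = np.array(Image.open(imfile)).astype(np.uint8)
    if img.ndim == 2:
        # Grayscale: replicate the channel three times so 3-channel
        # pretrained weights (e.g. KITTI) apply without modification.
        img = np.stack([img, img, img], axis=-1)
    img = torch.from_numpy(img).permute(2, 0, 1).float()  # HWC -> CHW
    return img[None]  # add a batch dimension
```

This only patches the input side; as noted in the comments, the KITTI pretrained model can then be applied directly, since the network itself still sees three channels.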