
The requirement of cityscapes RGBD dataset #1

Open
SunXusheng5200 opened this issue Sep 1, 2020 · 11 comments

@SunXusheng5200

First, thank you for sharing this excellent work with us. For our RGB-D segmentation research, we really need the Cityscapes RGB-D dataset, or the method for obtaining the depth maps. Could you share that soon?

@charlesCXK
Owner

charlesCXK commented Sep 1, 2020

Thanks for your attention.
You can get the depth map from the official disparity maps (the '.png' files) and the camera parameters (the '.json' files).
Here is some example code:

import json
import cv2
import numpy as np

disp = cv2.imread(disp_file, cv2.IMREAD_UNCHANGED)    # read the 16-bit disparity png file
disp = disp.astype(np.float64)    # np.float is deprecated; use an explicit dtype
# convert the png values to real disparity values, according to the official documentation;
# see also https://github.com/mcordts/cityscapesScripts/issues/55#issuecomment-411486510
disp[disp > 0] = (disp[disp > 0] - 1) / 256

# read camera parameters (json.load takes a file object, not a path string)
with open(camera_file) as f:
    camera_params = json.load(f)

# depth = baseline * fx / disparity; pixels with disparity 0 carry no measurement,
# so mask them out instead of dividing by zero (note that `depth == np.nan` is always False)
depth = np.zeros_like(disp)
valid = disp > 0
depth[valid] = camera_params['extrinsic']['baseline'] * camera_params['intrinsic']['fx'] / disp[valid]

The final depth map is in meters. Its median value is around 10 m.

After you obtain the depth map, you can use Depth2HHA-python to generate the HHA maps. Note that some hyper-parameters in that repo were designed for the NYU Depth v2 dataset, so a few things need to change.

  1. We should clip the depth values, because the 'sky' pixels have very large depths:
    depth = np.minimum(depth, 100)    # maximum value is 100 m

  2. In the function getHHA(C, D, RD), some hyper-parameters need to be adjusted according to the depth range of the Cityscapes dataset:

I[:,:,2] = 20000/pc[:, :, 2]*6 # 31000/pc[:,:,2]
I[:,:,1] = h/20       # height for cityscapes
I[:,:,0] = (angle + 128-90) + 10
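A hedged sketch of what those two adjustments look like together, assuming (as Depth2HHA-python does internally) that the point-cloud z is the depth converted to centimeters; `h` and `angle` are hypothetical stand-ins for the height and gravity-angle arrays that getHHA computes, in the repo's own units:

```python
import numpy as np

def encode_hha_channels_cityscapes(depth_m, h, angle):
    """Channel encoding with the Cityscapes-adjusted constants quoted above.

    depth_m: depth in meters; h, angle: height / gravity-angle arrays as
    computed inside getHHA (stand-ins here, not the repo's actual variables).
    """
    depth_m = np.minimum(depth_m, 100.0)      # step 1: clip 'sky' pixels at 100 m
    z = np.maximum(depth_m * 100.0, 1e-3)     # assumption: the repo's point cloud is in cm
    I = np.zeros(depth_m.shape + (3,))
    I[:, :, 2] = 20000.0 / z * 6              # was 31000/pc[:,:,2] for NYU Depth v2
    I[:, :, 1] = h / 20.0                     # height channel for Cityscapes
    I[:, :, 0] = (angle + 128 - 90) + 10      # angle channel
    return I

# at the ~10 m median depth the disparity channel lands around 120
I = encode_hha_channels_cityscapes(np.full((1, 1), 10.0),
                                   np.full((1, 1), 40.0),
                                   np.full((1, 1), 90.0))
```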

@Serge-weihao

@charlesCXK For hha = getHHA(camera_matrix, D, RD), what is RD for Cityscapes?

@charlesCXK
Owner

@Serge-weihao In Cityscapes, we only have raw depth maps, so D == RD.

@SunXusheng5200
Author


Thank you very much for your help; I will keep following your future work. Furthermore, we have collected some outdoor datasets. How should I adjust the hyper-parameters in Depth2HHA-python to fit our dataset? I would be very grateful if you could help with this.

@charlesCXK
Owner

@SunXusheng5200 Hi, if you can read Chinese, please refer to this issue:
#2

@SunXusheng5200
Author

Thank you very much for your answer; it is very helpful for my research! Thanks again!

@swjtulinxi

Hi, the RGB-D paper cannot be opened from the link. Could you check it?

@TXH-mercury

Hi @charlesCXK,
The depth images generated from the disparity images contain many unfilled values. Did you use an algorithm to fill in the missing values first, or did you train directly without processing them? If the former, could you share the filling algorithm? Thanks very much!

@charlesCXK
Owner

@TXH-mercury Hi, we didn't use any algorithm to fill in the missing values.

@xiaojiangjiangjinger

@charlesCXK Hello, would you share the HHA maps of cityscapes depth maps?

@charlesCXK
Owner

@xiaojiangjiangjinger
Sorry, we haven't planned to upload the HHA maps of Cityscapes online for the time being. Maybe you could try to convert them following #1 (comment) 😄

6 participants