
The dataloader of SensatUrban with point-transformer? #6

Open
whuhxb opened this issue Jun 14, 2022 · 5 comments

Comments


whuhxb commented Jun 14, 2022

Hi @meidachen

How do I set up the dataloader for the SensatUrban dataset with point-transformer? Since SensatUrban and STPLS3D are both large-scale datasets, can I adapt generate_blocks.py and stpls.py from the point-transformer code to SensatUrban?

Thanks.

@RockyatASU
Collaborator

Thanks for your interest in our datasets. Yes, you can follow generate_blocks.py and stpls.py to create a new dataset class for the SensatUrban dataset. As the point cloud resolutions of SensatUrban and STPLS3D differ, you may need to change the default block size in generate_blocks.py.
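As a rough illustration of the kind of block generation generate_blocks.py performs, here is a minimal sketch; the function name `split_into_blocks` and the 50 m default are my assumptions, not the script's actual code:

```python
import numpy as np

def split_into_blocks(points, block_size=50.0):
    """Group points (N, 3+) into square ground-plane blocks.

    Hypothetical sketch: the real generate_blocks.py may use a
    different block size, overlap, or grouping strategy.
    """
    # Shift so the minimum corner sits at the origin, then bin by x/y cell.
    xy = points[:, :2] - points[:, :2].min(axis=0)
    cell = np.floor(xy / block_size).astype(int)
    blocks = {}
    for idx, key in enumerate(map(tuple, cell)):
        blocks.setdefault(key, []).append(idx)
    return {k: points[np.array(v)] for k, v in blocks.items()}
```

For the lower-resolution SensatUrban tiles you might enlarge `block_size` so each block still contains enough points per class.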


whuhxb commented Jun 15, 2022

@RockyatASU

OK, thanks a lot!


whuhxb commented Jun 20, 2022

Hi @RockyatASU

How can I check or measure the resolution of each .ply area in the SensatUrban dataset, and how should I set the voxel size for it? In addition, if I do not want to generate blocks for SensatUrban, is it possible to follow the RandLA-Net dataloader setup for Point Transformer?

Thanks.

@RockyatASU
Collaborator

Hi @whuhxb,

I think our STPLS3D has a point cloud resolution similar to the SensatUrban dataset, so we recommend trying a voxel size of 0.3–0.5 for your experiments on SensatUrban. For details, please refer to the authors of SensatUrban.
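One common way to measure the resolution of a tile is the mean nearest-neighbor distance; the helper below (`estimate_resolution` is a hypothetical name, not part of either codebase) sketches this with a k-d tree, subsampling queries so large .ply areas stay fast:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_resolution(points, sample=10000, seed=0):
    """Estimate point-cloud resolution as mean nearest-neighbor distance.

    Hypothetical helper for inspecting SensatUrban tiles before
    choosing a voxel size; 'sample' caps the number of query points.
    """
    rng = np.random.default_rng(seed)
    tree = cKDTree(points)
    if len(points) > sample:
        query = points[rng.choice(len(points), sample, replace=False)]
    else:
        query = points
    # k=2 because each query point's nearest neighbor in its own tree is itself.
    dists, _ = tree.query(query, k=2)
    return dists[:, 1].mean()
```

A voxel size a bit above this estimate keeps most voxels occupied by at most a few points.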

You can refer to this link for the detailed parameters of point-transformer. Basically, point-transformer uses a voxelized point cloud to reduce the point density during training, but it still produces a semantic label for each point. Yes, you can feed one whole scene of SensatUrban into point-transformer without generating sub-blocks, and the RandLA-Net settings may be a good starting point for your experiments.
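The voxelization step described above can be sketched as keeping one representative point per occupied voxel; this is a minimal illustration (not point-transformer's actual implementation), using the 0.3–0.5 range suggested for SensatUrban:

```python
import numpy as np

def voxel_subsample(points, voxel_size=0.3):
    """Keep one point per occupied voxel (the first seen), reducing density.

    Minimal stand-in for the voxelized input described above; the real
    pipeline may average points per voxel instead of picking one.
    """
    coords = np.floor(points[:, :3] / voxel_size).astype(np.int64)
    # np.unique over rows yields one representative index per voxel.
    _, keep = np.unique(coords, axis=0, return_index=True)
    return points[np.sort(keep)]
```

Labels for the dropped points can later be recovered by nearest-neighbor interpolation from the subsampled predictions, which is why per-point labels are still produced.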


whuhxb commented Jun 21, 2022

Hi @RockyatASU

Thank you for your kind and detailed reply. I will have a try.

In addition, for S3DIS with Area 5 as the test set, Point Transformer trains on Areas 1, 2, 3, 4, and 6, uses Area 5 as the validation set to pick the best model, and then also tests on Area 5, even though the train, val, and test sets should have no intersection. I have seen several GitHub codebases adopt this setting for S3DIS, and I do not understand why it is done this way.

I would like to know whether Point Transformer on STPLS3D adopts the same setting, i.e., WMSC is used for both validation and testing while the other files are used for training. If not, how should the train, val, and test sets be split for the STPLS3D dataset?

For comparison, the train, val, and test sets of the SensatUrban dataset have no intersection.

Thanks.
