Lidar Data Generation Issue in Adverse Weather Conditions #84

Open

ptiwari0664 opened this issue Dec 15, 2023 · 3 comments

Labels: enhancement (New feature or request)

ptiwari0664 commented Dec 15, 2023

Description:
The Lidar sensor simulation currently faces challenges in generating realistic data under adverse weather conditions, such as fog and rain. The current sensor modelling does not adequately replicate the impact of these weather conditions on Lidar data, leading to a discrepancy between simulated and real sensor data.

Expected Behavior:
The Lidar simulation should accurately reproduce the effects of adverse weather conditions on data generation, including reduced visibility, distortion, and noise in the point cloud. This is crucial for ensuring that the simulations closely mirror real-world scenarios, providing a more comprehensive and accurate testing environment.

Steps to Reproduce:

  1. Set up a simulation scenario with adverse weather conditions (e.g., fog, rain).
  2. Run Lidar data generation within this scenario.
  3. Observe that the generated Lidar data does not exhibit the expected effects of adverse weather conditions.

Can someone please guide me on how we can achieve this?

RyodoTanaka (Member) commented

@ptiwari0664
Thank you for raising the issue. As you mentioned, we believe that LiDAR simulation which takes the effects of fog, rain, and dust into account is necessary for real-world use. Currently, the LiDAR implemented in this repository is based on the following two methods.

  1. Ray Cast Method
  2. Depth Buffer Method

In the Ray Cast method, only objects with a collider can be detected. Therefore, it is practically impossible to represent objects such as fog, rain, and dust, which have no collision model.
The Depth Buffer method, on the other hand, acquires the rendered image as a depth image and converts it into distance information. In other words, we believe there is a good chance of representing obstacles such as fog, which are only visible visually and which a real LiDAR only partially picks up.
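A minimal illustrative sketch of the contrast (assumptions: plain Unity API only; the class, field, and method names are hypothetical and not taken from this repository). The Ray Cast method only ever returns hits for objects with a Collider, while the Depth Buffer method works from whatever the renderer writes into the depth buffer, so a depth sample just needs to be converted back into a metric distance:

```csharp
using UnityEngine;

// Hypothetical sketch contrasting the two LiDAR approaches described above.
public class LidarMethodSketch : MonoBehaviour
{
    [SerializeField] private Camera depthCamera;  // camera used by the depth-buffer LiDAR
    [SerializeField] private float maxRange = 100f;

    // Ray Cast method: only objects with a Collider produce a hit, so particle-based
    // fog, rain, or dust without a collision model is invisible to this measurement.
    public float MeasureByRaycast(Vector3 origin, Vector3 direction)
    {
        if (Physics.Raycast(origin, direction, out RaycastHit hit, maxRange))
        {
            return hit.distance;
        }
        return float.PositiveInfinity;  // nothing with a collider along this ray
    }

    // Depth Buffer method (conceptual): a depth sample in [0, 1] read back from the
    // render target is linearized into an eye-space distance using the camera's clip
    // planes. This assumes the conventional (non-reversed) 0..1 depth range; platforms
    // using reversed-Z need the sample flipped first.
    public float DepthToDistance(float depth01)
    {
        float near = depthCamera.nearClipPlane;
        float far = depthCamera.farClipPlane;
        return (near * far) / (far - depth01 * (far - near));
    }
}
```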

The team currently developing functionality for this repository is in the process of revising the existing design, so we cannot address this immediately, but we may be able to start implementing this feature in the near future.
Of course, implementations and pull requests from @ptiwari0664 and others are always welcome!! 😎

Autumn60 (Contributor) commented Jan 4, 2024

After checking, I found that the default particle shader does not write to the depth buffer.
So you need to prepare a shader that writes to the depth buffer.
The following link is helpful:

https://github.com/PavelTorgashov/Unity-Particles-with-Depth-Buffer
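
For reference, below is a minimal sketch of such a depth-writing particle shader (assumptions: built-in render pipeline, CG syntax; the shader path, property names, and cutoff default are illustrative, not taken from the linked repository). The essential change from the default particle shaders is `ZWrite On` plus an alpha cutoff, so that the dense part of each particle also lands in the depth buffer the LiDAR reads:

```shaderlab
// Hypothetical minimal particle shader that writes to the depth buffer.
// Default particle shaders use ZWrite Off, which is why particles never
// show up in depth-buffer-based LiDAR output.
Shader "Custom/ParticlesDepthWrite"
{
    Properties
    {
        _MainTex ("Particle Texture", 2D) = "white" {}
        _Cutoff  ("Alpha Cutoff", Range(0, 1)) = 0.3
    }
    SubShader
    {
        Tags { "Queue" = "AlphaTest" "RenderType" = "TransparentCutout" }
        Pass
        {
            ZWrite On   // the key difference from the default particle shaders
            Cull Off

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float _Cutoff;

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv     : TEXCOORD0;
                fixed4 color  : COLOR;
            };

            struct v2f
            {
                float4 pos   : SV_POSITION;
                float2 uv    : TEXCOORD0;
                fixed4 color : COLOR;
            };

            v2f vert (appdata v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                o.color = v.color;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 col = tex2D(_MainTex, i.uv) * i.color;
                // Discard thin, mostly transparent fragments so that only the
                // dense core of each particle writes depth (and thus distance).
                clip(col.a - _Cutoff);
                return col;
            }
            ENDCG
        }
    }
}
```

Assigning a material with a shader like this to the weather particle systems should make fog or rain appear in the depth image the LiDAR samples, at the cost of hard cutout edges; the repository linked above is a more complete reference.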


RyodoTanaka (Member) commented Jan 17, 2024

@Autumn60
Thank you very much for your help.
This result means we can get the desired data, such as rain, fog, and dust, in simulation!
This will be a very useful and important function!! 😎
Also, I think we should write this up in the new documentation.

@ptiwari0664
If you think this is not enough, please share your opinion.
Otherwise, please close this issue.

Thank you both !

Autumn60 added the enhancement (New feature or request) label on Jan 19, 2024