Custom trained data is coming out as a mess #19
Comments
One note that may be important: I tried my calibration method on the dynerf/cook_spinach dataset as well, and the training result was good, so I believe the calibration method I am using is valid. And as I stated before, the positional information looked quite accurate when I inspected it on my non-working personal datasets. Thanks again for any help!
Dear timatchley, I also tried the same approach as yours, by the way. I found that you should invert the points like:
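(The snippet referenced above did not survive in this thread. For reference, here is a minimal sketch of what "inverting the points" could mean, assuming it refers to negating the COLMAP 3D point coordinates before building the initial point cloud; the helper name `invert_points` is hypothetical, and whether a full negation or only flipping certain axes is needed depends on your dataset's coordinate convention.)

```python
def invert_points(points):
    """Return a copy of an (N, 3) list of points with every
    coordinate negated, i.e. p -> -p. Hypothetical helper;
    adapt the sign convention to your own data."""
    return [[-x, -y, -z] for (x, y, z) in points]

pts = [[1.0, 2.0, 3.0]]
print(invert_points(pts))  # [[-1.0, -2.0, -3.0]]
```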
Thanks for the reply, guanjuwu.
I just gave this a try, and no visualization shows up. Can you elaborate? Also, I'm not sure that just visualizing will solve the problem. Is the initial point cloud just the points3D from COLMAP? As for inverting the points, I'm a little confused how this issue would occur without also breaking the current test datasets. Aren't they following the same process and therefore subject to the same problem? I could also use a bit of advice on where in the code I should try altering this, if that is the issue. Thanks again!
emmmm, you can find the rendering results in "output/your_results_dir/"; it only renders an image every training iteration.
Hi, have you successfully used custom data to get good results? I also used imgs2poses.py for data calibration, but the training results were very bad.
@ch1998 Have you figured out the root issue and gotten a better result? I'm running into the same issue.
First of all, great work. I am very impressed by the results you have been able to achieve.
However, I have not been able to get my own datasets working well, and I thought maybe you'd be able to point me in the right direction or tell me about any limitations I should be aware of.
To put it briefly: using vanilla Gaussian Splatting, I am able to get a solid non-dynamic scene out of this same data, so I'm trying to figure out what gives.
I've modeled my input data to match what the dynerf dataset uses. To achieve this, I took the video files from my 9-16 synced cameras and dumped each frame out.
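Not a fix, just a sanity aid: a minimal sketch of how that per-camera frame dump could be organized. The `camNN/images/NNNN.png` naming and the hypothetical helpers below are assumptions based on the description here, not this repo's required layout, and the ffmpeg command is only constructed, not run.

```python
import os

def frame_output_path(root, cam_index, frame_index):
    # Hypothetical dynerf-style destination for one dumped frame,
    # e.g. <root>/cam03/images/0007.png. Adjust the naming to
    # whatever the dataloader actually expects.
    return os.path.join(root, f"cam{cam_index:02d}", "images",
                        f"{frame_index:04d}.png")

def ffmpeg_dump_cmd(video_path, out_dir):
    # Build (but do not run) an ffmpeg command that dumps every
    # frame of one camera's video as numbered PNGs.
    return ["ffmpeg", "-i", video_path,
            os.path.join(out_dir, "%04d.png")]

print(frame_output_path("mycapture", 3, 7))
print(" ".join(ffmpeg_dump_cmd("cam03.mp4", "mycapture/cam03/images")))
```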
Then, to calibrate, I simply took the first frame output from each camera and put those into their own folder, calibration/images. From there I followed the LLFF imgs2poses.py script and got a good calibration for both of the datasets I've tried (I verified the number of cameras matched and the positions looked accurate using the COLMAP GUI). I let this script output the .npy file and put that in the root of my capture data's folder so it matched the structure the dynerf datasets use. I then created a .py file matching the default dynerf .py file (I wonder if there is anything special that needs to be done here?).
From there I was able to run it the same way I ran your sample dynerf dataset, but the resulting video was basically just noise for one dataset (16 cameras circling a target), and quite noisy for the other dataset I created (9 cameras in a more linear/semi-curved arrangement).
I've noticed you are quite active on this project, so I thought I'd reach out and see if you have any idea what I may be doing wrong.
Should I be trying to get my input data to match one of the other supported dataset formats you've tested with? If so, can you provide detailed instructions on how to take several synced camera videos (or just the dumped image sequences from each of them) and get better output using this awesome codebase you've written?
Any help would be much appreciated. I look forward to hearing back!