
Custom trained data is coming out as a mess #19

Open · timatchley opened this issue Oct 20, 2023 · 6 comments

@timatchley

First of all, great work. I am very impressed by the results you have been able to achieve.

However, I have not been able to get my own datasets working well, and thought maybe you'd be able to point me in the right direction or tell me about any limitations I should be aware of.

To put it short: using vanilla Gaussian Splatting I am able to get a solid static scene out of this same data, so I'm trying to figure out what gives.

I've modeled my input data to match the format the DyNeRF dataset uses. To achieve this, I took the video files from my synced cameras (9 in one dataset, 16 in the other) and dumped out every frame.

Then, to calibrate, I simply took the first frame from each camera and put it into its own folder, calibration/images. From there I followed LLFF's imgs2poses.py and got a good calibration for both datasets I've tried (I verified the number of cameras matched and the positions looked accurate in the COLMAP GUI). I let this script output the .npy file and put that in the root of my capture data's folder so it matched the structure the DyNeRF datasets use.
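For concreteness, this is roughly the preprocessing I used — a minimal sketch, assuming ffmpeg is on the PATH; the folder layout and video names here are illustrative, not exact:

```python
# Rough sketch of my frame-dumping + calibration prep (paths are illustrative).
# Assumes ffmpeg is on PATH and the synced videos are named cam00.mp4, cam01.mp4, ...
import os
import shutil
import subprocess

capture_root = "data/my_capture"  # hypothetical capture folder
videos = sorted(f for f in os.listdir(capture_root) if f.endswith(".mp4"))

for i, video in enumerate(videos):
    cam_dir = os.path.join(capture_root, f"cam{i:02d}", "images")
    os.makedirs(cam_dir, exist_ok=True)
    # Dump every frame of this camera as zero-padded PNGs.
    subprocess.run(
        ["ffmpeg", "-i", os.path.join(capture_root, video),
         os.path.join(cam_dir, "%04d.png")],
        check=True,
    )

# Calibration: copy the first frame of each camera into calibration/images,
# then run LLFF's imgs2poses.py on that folder; the resulting
# poses_bounds.npy goes in the capture root, matching the DyNeRF layout.
calib_dir = os.path.join(capture_root, "calibration", "images")
os.makedirs(calib_dir, exist_ok=True)
for i in range(len(videos)):
    src = os.path.join(capture_root, f"cam{i:02d}", "images", "0001.png")
    shutil.copy(src, os.path.join(calib_dir, f"cam{i:02d}.png"))
```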

I then created a .py config file matching the default DyNeRF one (I wonder if there is anything special that needs to be done here?).
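For reference, my config was essentially a copy of the DyNeRF default — something like this sketch from memory (the field names follow the repo's config convention, but the values here are illustrative rather than tuned):

```python
# arguments/my_capture.py -- hypothetical config name, mirroring the
# structure of the DyNeRF default config. Values are illustrative.
ModelHiddenParams = dict(
    kplanes_config={
        'grid_dimensions': 2,
        'input_coordinate_dim': 4,       # (x, y, z, t)
        'output_coordinate_dim': 16,
        'resolution': [64, 64, 64, 150],
    },
    multires=[1, 2],
    defor_depth=0,
    net_width=128,
    render_process=False,  # True dumps a rendered image during training
)

OptimizationParams = dict(
    maxtime=300,       # number of time steps (frames) per video
    iterations=14000,
)
```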

From there I was able to run it the same way I ran your sample DyNeRF dataset, but the resulting video was basically just noise for the dataset with 16 cameras circling a target, and quite noisy for the 9-camera dataset with a more linear/semi-curved camera arrangement.

I've noticed you are quite active on this project, so I thought I'd reach out and see if you have any idea what I may be doing wrong.

Should I be trying to get my input data to match one of the other supported dataset formats you've tested with? If so, can you provide detailed instructions on how to take several synced camera videos (or just the dumped image sequences from each of them) and get them to come out better using this awesome codebase you've written?

Any help would be much appreciated. I look forward to hearing back!

@timatchley
Author

One note that may be important: I also tried my calibration method on the dynerf/cook_spinach dataset, and the training result was good, so I believe the calibration method I am using is valid. And as I stated before, the positional information looked quite accurate when I inspected it for my non-working personal datasets.

Thanks again for any help!

@guanjunwu
Collaborator

guanjunwu commented Oct 21, 2023

Dear timatchley,

I also tried the same approach as yours on dynerf/cut_roasted_beef. I think the main reason is that the point cloud initialization is wrong. You can set render_process=True in ModelHiddenParams to visualize the training process from the start of training.

By the way, I found that you should invert the points, like:

```python
# Flip all three axes and apply the dataset scale factor.
pcd = pcd._replace(points=np.concatenate(
    [-pcd.points[:, 0:1], -pcd.points[:, 1:2], -pcd.points[:, 2:3]], 1
) * train_dataset.scale_factor)
```

It may work. (The bounding box of the point cloud is probably best kept within (-2, 2).)
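As a quick sanity check on the initialization (a sketch; `points` stands for your initial point cloud as an (N, 3) NumPy array, e.g. the COLMAP sparse points):

```python
import numpy as np

def check_bounds(points: np.ndarray) -> None:
    """Print the point cloud's bounding box and warn if it exceeds (-2, 2)."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    print("bounding box min:", lo, "max:", hi)
    if (lo < -2).any() or (hi > 2).any():
        print("warning: point cloud extends outside (-2, 2); "
              "consider rescaling/flipping before training")
```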

@timatchley
Author

Thanks for the reply, guanjunwu.

> You can set render_process=True in ModelHiddenParams to visualize the training process from the start of training.

I just gave this a try, and no visualization shows up. Can you elaborate? Also, I'm not sure that just visualizing will solve the problem. Is the initial point cloud just the points3D from COLMAP?

As for inverting the points, I'm a little confused how this could be necessary without also breaking the current test datasets. Aren't they following the same process, and therefore wouldn't they have the same problem? I could also use a bit of advice on where in the code I should make this change, if this is the issue.
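If it helps, my best guess at where it would go is right after the initial point cloud is loaded in the dataset reader — something like this sketch (the loader name is a stand-in, not the repo's actual code):

```python
# Hypothetical placement: wherever the SfM point cloud is first loaded.
# `load_initial_pointcloud` is a stand-in for the repo's actual loader,
# assumed to return a namedtuple with a .points (N, 3) array.
pcd = load_initial_pointcloud()

# Equivalent to negating each column separately, then scaling.
pcd = pcd._replace(points=-pcd.points * train_dataset.scale_factor)
```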

Thanks again!

@guanjunwu
Collaborator

Emmm, you can find the rendering results in `output/your_results_dir/`; it only renders one image per training iteration.

@ch1998

ch1998 commented Nov 6, 2023

Hi, have you managed to get good results with custom data? I also used imgs2poses.py for calibration, but the training results were very bad.

@yangqing-yq

@ch1998 Have you figured out the root issue and gotten better results? I've hit the same problem.
@guanjunwu Has anyone been able to generate good results with a custom dataset?
