
reduce VRAM requirement #45

Open · wants to merge 2 commits into main

Conversation

alexjunholee

Hi,

I use a system with a single graphics card (16 GB of VRAM) and a large amount of system RAM instead (because expanding RAM is always cheaper than expanding VRAM). I therefore needed to lower VRAM usage by loading the images into host memory.

Simply by not moving original_image to CUDA in the Camera class, I got what I expected. Nothing else breaks, since in the training code the loss is already computed with the image moved to the GPU on demand:

gt_image = torch.clamp(viewpoint.original_image.to("cuda"), 0.0, 1.0)
l1_test += l1_loss(image, gt_image).mean().double()

Please merge if you find this useful.
Thanks for your great work!
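
For reference, a minimal sketch of the change being described (a hypothetical stand-in; the real Camera class also stores poses, intrinsics, and so on):

import torch

class Camera:
    def __init__(self, image: torch.Tensor):
        # Keep the ground-truth image in host RAM instead of pushing
        # every image onto the GPU up front (previously: .to("cuda")).
        self.original_image = image.clamp(0.0, 1.0).cpu()

cam = Camera(torch.rand(3, 1080, 1920))
# The image is moved to the GPU only when the loss is computed:
gt_image = torch.clamp(cam.original_image.to("cuda"), 0.0, 1.0)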

hbb1 (Owner) commented May 27, 2024

Thank you for your PR. Indeed, this is important. However, I think it will increase training time, because we need to load the image onto the GPU every iteration. Perhaps for large-scale scenes with thousands of images, a smarter data loader should be implemented. I think it would be great to leave that for future development.
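
As a rough sketch of what such a loader might look like (entirely hypothetical, not part of this PR): background workers plus pinned host memory and a non-blocking copy can hide most of the per-iteration transfer cost.

import torch
from torch.utils.data import Dataset, DataLoader

class GtImageDataset(Dataset):
    # Hypothetical dataset that serves ground-truth images from host RAM.
    def __init__(self, images):
        self.images = images  # list of CPU tensors
    def __len__(self):
        return len(self.images)
    def __getitem__(self, idx):
        return idx, self.images[idx]

images = [torch.rand(3, 544, 960) for _ in range(8)]  # dummy stand-ins
loader = DataLoader(GtImageDataset(images), batch_size=1, shuffle=True,
                    num_workers=2,    # decode/prefetch in background processes
                    pin_memory=True)  # pinned memory enables async H2D copies

for idx, gt_image in loader:
    # non_blocking=True overlaps the host-to-device copy with GPU work.
    gt_image = gt_image.to("cuda", non_blocking=True).squeeze(0)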

hbb1 (Owner) commented Jun 10, 2024

Hi, can you add an argument like data_device so that we can control which device the data is placed on?
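
Something like the following wiring, perhaps (a sketch only, under the assumption that data_device mirrors the flag of the same name in the original 3DGS code):

from argparse import ArgumentParser

parser = ArgumentParser()
# "cuda" preloads all images onto the GPU (fast, but VRAM-hungry);
# "cpu" keeps them in host RAM and copies per iteration (saves VRAM).
parser.add_argument("--data_device", type=str, default="cuda")
args = parser.parse_args()
# The Camera constructor would then place each image accordingly:
# self.original_image = image.clamp(0.0, 1.0).to(args.data_device)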

hbb1 mentioned this pull request Jun 26, 2024
oUp2Uo commented Jul 2, 2024

I think this may be useful when processing a large number of images, or when training with --resolution 1.
