
About setting parameter batch_size=1, num_frames=5 #20

Open
hwpengTristin opened this issue Nov 30, 2022 · 3 comments

@hwpengTristin

In the second stage of the FGT network training, I found that batch_size was only set to 1 and only 5 frames per video were selected for training. Thus, the size of the input tensor is (b, t, c, h, w)=>(1, 5, c, h, w). I would like to know why batch_size is set so small.

@hitachinsk
Owner

In my experiments, I set the batch size to 2, not 1. The batch size is the mini-batch per GPU; therefore, if you use 4 GPUs and set the batch size to 2, the overall batch size is 8 (2 * 4).

If you have more GPU memory, you can select more frames per video for training, which may lead to better performance. I only selected 5 frames because of the GPU memory limitation.
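
For reference, here is a minimal sketch (not the repository's actual training code) of how the per-GPU mini-batch, the number of GPUs, and the number of sampled frames relate to the input tensor shape and the effective batch size; the resolution values are illustrative only:

```python
# Minimal sketch, assuming PyTorch multi-GPU training (e.g. DistributedDataParallel).
import torch

batch_size_per_gpu = 2   # mini-batch per GPU, as described above
num_frames = 5           # frames sampled from each video clip
num_gpus = 4             # e.g. 4 GPUs in the setup described above
channels, height, width = 3, 240, 432  # illustrative resolution, not from the repo

# Input tensor on a single GPU: (b, t, c, h, w)
frames = torch.randn(batch_size_per_gpu, num_frames, channels, height, width)
print(frames.shape)  # torch.Size([2, 5, 3, 240, 432])

# Effective (overall) batch size across all GPUs
effective_batch = batch_size_per_gpu * num_gpus
print(effective_batch)  # 8 = 2 * 4
```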

@hitachinsk
Owner

I have corrected the configuration files, thanks for pointing out this error.

@hwpengTristin
Author

Thank you for providing these helpful details.

Furthermore, I noticed some duplicate definitions in the second-stage FGT network training: the configuration file 'train.yaml' and the 'inputs.py' file define some of the same options. Also, when running train.py, it loads another configuration file, 'flowCheckPoint/config.yaml', which also contains some duplicate definitions.
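
For what it's worth, a small script like the following (hypothetical, assuming both files are flat YAML mappings; key names are placeholders) could list the options defined in both configuration files so any conflicting values are easy to spot:

```python
# Hypothetical sketch, not the repository's code: report keys that appear
# in more than one configuration file, using the file names mentioned above.
import yaml

with open("train.yaml") as f:
    train_cfg = yaml.safe_load(f)
with open("flowCheckPoint/config.yaml") as f:
    flow_cfg = yaml.safe_load(f)

# Print every option defined in both files, with both values, so
# duplicated (and possibly conflicting) definitions are visible.
for key in sorted(set(train_cfg) & set(flow_cfg)):
    print(f"{key}: train.yaml={train_cfg[key]!r}, "
          f"flowCheckPoint/config.yaml={flow_cfg[key]!r}")
```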
