About setting parameter batch_size=1, num_frames=5 #20
Comments
In my experiments, I set the batch size to 2, not 1. The batch size is the mini-batch per GPU; therefore, if you use 4 GPUs and set the batch size to 2, the overall batch size is 8 (2 × 4). If you have more GPU memory, you can select more frames per video for training, which may lead to better performance. I selected only 5 frames because of the GPU memory limitation.
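The per-GPU vs. overall batch relationship described above can be sketched as follows (the function name is illustrative, not from the repository):

```python
def effective_batch_size(per_gpu_batch: int, num_gpus: int) -> int:
    """In data-parallel training, each GPU processes its own mini-batch,
    so the overall batch per optimization step is their product."""
    return per_gpu_batch * num_gpus

# The setting described above: batch size 2 on each of 4 GPUs.
print(effective_batch_size(2, 4))  # → 8
```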
I have corrected the configuration files; thanks for pointing out this error.
Thank you for providing these valuable details. I also found some duplicate definitions.
In the second stage of FGT network training, I found that batch_size was set to only 1 and only 5 frames per video were selected for training. Thus, the size of the input tensor is (b, t, c, h, w) = (1, 5, c, h, w). I would like to know why the batch size is set so small.
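To see why memory forces these small values, the size of a single float32 input clip of shape (b, t, c, h, w) grows linearly in both the batch size and the frame count. A minimal sketch (c = 3 for RGB and a 240×432 resolution are assumptions for illustration, not values from the repository):

```python
def input_tensor_bytes(b: int, t: int, c: int, h: int, w: int,
                       bytes_per_elem: int = 4) -> int:
    """Bytes occupied by a float32 video clip tensor of shape (b, t, c, h, w)."""
    return b * t * c * h * w * bytes_per_elem

# The second-stage setting asked about: batch 1, 5 frames per video.
mib = input_tensor_bytes(1, 5, 3, 240, 432) / 2**20
print(f"{mib:.1f} MiB per clip")
```

Note that the input itself is small; the dominant cost in training is the activations stored for backpropagation, which also scale with b and t, so doubling either roughly doubles the memory the network needs per step.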