
Performance on MVBench #4

Closed
NIneeeeeem opened this issue Jun 18, 2024 · 8 comments

@NIneeeeeem

Thanks to open source for this exciting work!

I reproduced the performance on MVBench with a single GPU, but across three experiments I did not achieve the expected results from the paper (best results below). Were any other weights used in the tests? Also, I noticed that setting batch_size_per_gpu=2 drastically affects the performance, even though there is no OOM.

All Acc: [67.5, 58.0, 80.3, 48.5, 56.5, 86.5, 73.9, 37.0, 30.0, 30.5, 85.0, 38.0, 65.0, 82.5, 42.5, 51.0, 48.0, 31.0, 40.0, 56.0]% Total Acc: 55.37%
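As a sanity check, the reported Total Acc is roughly the unweighted mean of the twenty per-task accuracies above (a minimal sketch; the values are rounded to one decimal, and MVBench's official averaging may differ slightly, which would explain the small gap to 55.37%):

```python
# Hedged sketch: recompute "Total Acc" as the unweighted mean of the
# per-task accuracies quoted above (rounded to one decimal place).
per_task_acc = [
    67.5, 58.0, 80.3, 48.5, 56.5, 86.5, 73.9, 37.0, 30.0, 30.5,
    85.0, 38.0, 65.0, 82.5, 42.5, 51.0, 48.0, 31.0, 40.0, 56.0,
]
total_acc = sum(per_task_acc) / len(per_task_acc)
print(f"Total Acc: {total_acc:.2f}%")  # close to the 55.37% reported
```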

@mmaaz60
Member

mmaaz60 commented Jun 18, 2024

Hi @NIneeeeeem,

I appreciate your interest in our work. Please share the exact steps you followed to reproduce our results. For example, which model weights are you using? What command are you using for running inference and evaluation?

Further, note that the scripts provided in our repository are not tested for batch sizes > 1 and are not guaranteed to work properly. I would highly recommend keeping batch_size=1. Thank you.

@NIneeeeeem
Author


Hi, here is the command:
CUDA_VISIBLE_DEVICES=0 python eval/mvbench/inference/infer.py --model-path weights/VideoGPT-plus_Phi3-mini-4k/mvbench --model-base weights/Phi-3-mini-128k-instruct --video-folder MVBench/video --question-dir MVBench/json --output-dir MVBench/dual_result3
Weights:
video encoder: VideoGPT-plus/OpenGVLab/InternVideo2-Stage2_1B-224p-f4/InternVideo2-stage2_1b-224p-f4.pt
image encoder: VideoGPT-plus/openai/clip-vit-large-patch14-336
llm: Phi-3-mini-128k-instruct
ckpt: VideoGPT-plus_Phi3-mini-4k/mvbench

Building OpenGVLab/InternVideo2-Stage2_1B-224p-f4/InternVideo2-stage2_1b-224p-f4.pt
missing_keys=[]
Building openai/clip-vit-large-patch14-336
Building mlp2x_gelu
projector_type: mlp2x_gelu
Building mlp2x_gelu
projector_type: mlp2x_gelu
Loading additional VideoGPT+ weights...
Loading LoRA weights...
Merging LoRA weights...
Model is loaded...
load_state_dict: _IncompatibleKeys(missing_keys=[]

@mmaaz60
Member

mmaaz60 commented Jun 19, 2024

Hi @NIneeeeeem,

Thank you for providing the inference command you are using. Please note that our experiments use the Phi-3-mini-4k-instruct base model, not the Phi-3-mini-128k-instruct.

Please try replacing the 128K context model with the 4K context model, and this should solve the issue. Good luck!
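For concreteness, a hedged sketch of the fix: the same inference command quoted earlier in the thread, with only the base LLM swapped from the 128k-context checkpoint to Phi-3-mini-4k-instruct (the weight path `weights/Phi-3-mini-4k-instruct` is assumed to follow the same layout as the other weights; adjust it to wherever the 4K model is downloaded):

```shell
# Hedged sketch, not verbatim from the repo: swap only --model-base
# from the 128k-context LLM to the 4k-context LLM used in the paper.
CMD="CUDA_VISIBLE_DEVICES=0 python eval/mvbench/inference/infer.py \
  --model-path weights/VideoGPT-plus_Phi3-mini-4k/mvbench \
  --model-base weights/Phi-3-mini-4k-instruct \
  --video-folder MVBench/video \
  --question-dir MVBench/json \
  --output-dir MVBench/dual_result3"
echo "$CMD"
```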

@NIneeeeeem
Author

NIneeeeeem commented Jun 19, 2024

Thank you for your reply!

With Phi-3-mini-4k-instruct, the total Acc achieved is 58.14%.
I have another question: I noticed that in the instruction-tuning phase, subsets from multiple datasets were mixed, such as K710 and SSV2. Is there a regular pattern in how the subsets are divided, or are they randomly selected?

@mmaaz60
Member

mmaaz60 commented Jun 19, 2024

Hi @NIneeeeeem,

These design choices are selected to optimize the training time and performance for both benchmarks.

@NIneeeeeem
Author

@mmaaz60 Thank you for your reply. I think I didn't state my question clearly.

Take a dataset like SSV2: it contains 220,847 videos, of which 168,913 samples are used as the training set, and 40,000 of those were selected for the instruction-tuning (IT) dataset in VideoGPT-plus. I am curious about the basis for this selection.

@mmaaz60
Member

mmaaz60 commented Jun 20, 2024

Hi @NIneeeeeem,

Thanks for the clarification. We follow the splits proposed in MVBench for training VideoChat2. I hope this answers your question.

@NIneeeeeem
Author

Thank you, my issue has been resolved.
