
What is the estimated runtime across benchmarks, and the OpenAI API cost? #33

Closed
findalexli opened this issue Dec 28, 2023 · 1 comment

Comments

@findalexli

No description provided.

@kennymckormick
Member

Hi @findalexli,

  1. The runtime depends on the architecture of your VLM and may vary across different VLMs. As a data point, evaluating llava-v1.5-7b on MMBench_DEV_EN takes about 15 minutes on a single A100.
  2. The OpenAI API cost depends on the nature of the task and the instruction-following capability of your model (after all, if a VLM follows the instruction perfectly, no GPT post-processing is required). Based on some data points, evaluating a VLM on MMBench_EN (dev + test) costs < $1.5 on average. For models like llava-v1.5-7b and XComposer, no GPT cost is incurred during evaluation.
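
To make point 2 concrete, here is a rough back-of-envelope sketch of how such a GPT post-processing cost could be estimated. All numbers below (question count, fraction of answers needing GPT matching, tokens per call, per-token price) are illustrative assumptions, not actual VLMEvalKit or OpenAI figures:

```python
# Hypothetical cost estimator: only answers the rule-based parser cannot
# match are sent to GPT for choice extraction, so total cost scales with
# the fraction of non-compliant answers.

def estimate_gpt_cost(num_questions: int,
                      frac_needing_postprocess: float,
                      tokens_per_call: int = 300,
                      price_per_1k_tokens: float = 0.0015) -> float:
    """Return an approximate API cost in dollars (all parameters assumed)."""
    calls = num_questions * frac_needing_postprocess
    return calls * tokens_per_call / 1000 * price_per_1k_tokens

# Example: 4000 questions (illustrative), 10% needing GPT matching.
cost = estimate_gpt_cost(4000, 0.10)
print(f"~${cost:.2f}")
```

A model that follows the answer format perfectly has `frac_needing_postprocess = 0`, which matches the observation above that some models incur no GPT cost at all.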
