
LLaVA GPT Eval results cannot be aligned #84

Closed
BlueBlueFF opened this issue Jan 31, 2024 · 1 comment

Comments

@BlueBlueFF

In LLaVA, the prompt asks the model to output the option letter (https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/vlm/llava.py#L96), yet in the GPT eval (https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/evaluate/multiple_choice.py#L75) the model is asked to output the option's content. Why the difference?

@kennymckormick
Member

@BlueBlueFF
To clarify: multiple_choice.py#L75 has nothing to do with running inference on multimodal models such as LLaVA. It is only how we handle, during multiple-choice evaluation, the cases where a prediction cannot be directly matched to an option letter, i.e. we fall back to a language model to do semantic matching.
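The matching flow described above can be sketched roughly as follows. This is a simplified, hypothetical illustration of the idea (direct letter extraction first, content match second, LLM semantic matching only as a last resort), not VLMEvalKit's actual implementation; the function name and regex are assumptions.

```python
import re
from typing import Dict, Optional


def match_answer(prediction: str, options: Dict[str, str]) -> Optional[str]:
    """Map a model prediction to an option letter.

    Hypothetical sketch of the two-stage matching described above:
    try cheap rule-based matching first; only when that fails would
    an LLM be asked to match semantically (stubbed out here).
    """
    pred = prediction.strip()

    # Stage 1: the prediction starts with a bare option letter,
    # e.g. "B", "(C)", "A. Paris". (A leading standalone article
    # like "A car" would be a false positive; a real matcher needs
    # more careful rules.)
    m = re.match(r'\(?([A-Z])\b', pred)
    if m and m.group(1) in options:
        return m.group(1)

    # Stage 2: the prediction repeats an option's content verbatim.
    for letter, content in options.items():
        if content.lower() in pred.lower():
            return letter

    # Stage 3 (not implemented here): fall back to LLM-based semantic
    # matching, which is what multiple_choice.py#L75 deals with.
    return None
```

For example, `match_answer("The answer is London", {"A": "Paris", "B": "London"})` is resolved by the content match in stage 2, without any LLM call.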
