
Adding LLaVa support #1832

Closed · wants to merge 3 commits

Conversation

@ashvinnihalani commented on May 13, 2024

@CLAassistant commented on May 13, 2024

CLA assistant check: all committers have signed the CLA.

@LSinev (Contributor) commented on Jun 13, 2024

As a new release is coming soon, could you please resolve the conflicts with the main branch to increase the possibility of this PR making it into the new release?

@accesslint (bot) left a comment: There are accessibility issues in these changes. (Three review threads on lm_eval/tasks/mmmu/utils.py, all resolved.)
Commits (3):
- Updating APIs for MM support
- Adding MLLM dependencies
- Rebase off mainline
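For context on what "Updating APIs for MM support" might involve, here is a minimal, hypothetical sketch of a request object extended to carry images alongside text, in the spirit of this PR. It is illustrative only and not the PR's actual code; the class and field names are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

from PIL import Image  # image handling of the kind the "MLLM dependencies" commit would pull in


@dataclass
class MultimodalRequest:
    """Hypothetical request object: a text prompt plus optional images.

    A multimodal backend (e.g., an HF LLaVA wrapper) consumes the images
    together with the prompt, while text-only backends can ignore them.
    """

    prompt: str
    images: List[Image.Image] = field(default_factory=list)
    until: Optional[List[str]] = None  # stop sequences for generation


def build_request(doc: dict) -> MultimodalRequest:
    # A task's utils.py (such as lm_eval/tasks/mmmu/utils.py touched in this PR)
    # might assemble the prompt and attach the doc's images.
    return MultimodalRequest(prompt=doc["question"], images=doc.get("images", []))
```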
@ashvinnihalani (Author) replied:

> As a new release is coming soon, could you please resolve the conflicts with the main branch to increase the possibility of this PR making it into the new release?

Done. Just a heads up: this is probably not the final CR for LLaVa support; adding some more tasks and supporting the latest LLaVa 1.6 are still needed.

@accesslint (bot) left two further comments repeating the same finding: There are accessibility issues in these changes. (Additional review threads on lm_eval/tasks/mmmu/utils.py, all resolved.)
@haileyschoelkopf (Contributor) commented on Jun 28, 2024

Hi @ashvinnihalani, a heads up that we're looking at this PR and trying out some multimodal design ideas based on it! See the multimodal-evals branch.

@ashvinnihalani (Author) replied:

Cool, we are already using this plus some other commits to test our MM models in multiple formats (HF LLaVA, LLaVA, VLLM pending) across multiple benchmarks (VQA, MMMU, custom MM benchmarks, text tasks as well, etc.). Let me know what I can do to help mainline our changes and get this functionality upstreamed.

Happy to continue working on this PR or move to another PR.
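For illustration, here is a hedged sketch of how an evaluation like the one described above might be driven through the harness's Python API. `lm_eval.simple_evaluate` is the harness's real entry point; the model type string "llava_hf" and the "mmmu" task name are assumptions tied to this PR, not guaranteed upstream names.

```python
# Sketch only: running an MMMU evaluation of a LLaVA checkpoint through the
# harness's Python API. "llava_hf" is a hypothetical model type this PR might
# register; swap in whatever backend name the final design settles on.
import lm_eval

results = lm_eval.simple_evaluate(
    model="llava_hf",                                  # hypothetical multimodal backend
    model_args="pretrained=llava-hf/llava-1.5-7b-hf",  # HF LLaVA checkpoint
    tasks=["mmmu"],                                    # task added under lm_eval/tasks/mmmu/
    batch_size=8,
)
print(results["results"])
```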

@haileyschoelkopf (Contributor) replied:

Closing this PR as it uses code written by the LMMs-eval authors, but as described in #2014 we're working on a design for a VLM (text-out) backend and will share it in that issue for feedback as we update!
