Issues: open-compass/VLMEvalKit
#323 — [Help Wanted] Supporting the `chat_inner` API for existing VLMs. (opened Jul 27, 2024 by kennymckormick; label: help wanted; open)
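Issue #323 asks for a `chat_inner` entry point for existing VLMs; the listing itself does not specify the interface. A minimal sketch of what a multi-turn chat method might look like next to a single-turn one (all names besides `chat_inner` are assumptions for illustration, not VLMEvalKit's actual API):

```python
# Hypothetical sketch of a multi-turn chat interface for a VLM wrapper.
# `chat_inner` takes a list of role-tagged messages instead of a single
# prompt, so earlier turns stay visible to the model.

class DummyVLM:
    """Stand-in model that just echoes the latest user turn."""

    def generate_inner(self, message):
        # Single-turn path: one user message, no history.
        return f"reply to: {message}"

    def chat_inner(self, messages):
        # Multi-turn path: `messages` is a list of dicts like
        # {"role": "user" | "assistant", "content": "..."}.
        user_turns = [m for m in messages if m["role"] == "user"]
        last = user_turns[-1]["content"]
        return f"reply to: {last} (turn {len(user_turns)})"


if __name__ == "__main__":
    vlm = DummyVLM()
    dialog = [
        {"role": "user", "content": "Describe the image."},
        {"role": "assistant", "content": "A cat on a mat."},
        {"role": "user", "content": "What color is the cat?"},
    ]
    print(vlm.chat_inner(dialog))
```

The point of the split is that evaluators can keep calling `generate_inner` for single-image QA while interleaved-dialogue benchmarks call `chat_inner` with full history.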
Issues list
#32 — Does vlmeval support multi-card inference and batch size > 1? (opened Dec 28, 2023 by John-Ge; label: Feature Request)
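Issue #32 asks about multi-card inference with batch size > 1. A common data-parallel pattern for this is to shard the dataset across ranks so each GPU evaluates a disjoint slice, then batch within each rank. A sketch of that index arithmetic (a generic pattern, not VLMEvalKit's actual launcher):

```python
# Round-robin sharding used by many data-parallel evaluation scripts:
# rank r of W processes handles samples r, r+W, r+2W, ...
# Illustrative arithmetic only, not VLMEvalKit's own implementation.

def shard_indices(num_samples, rank, world_size):
    """Return the sample indices assigned to one process."""
    return list(range(rank, num_samples, world_size))


def batched(indices, batch_size):
    """Group a rank's indices into batches of at most `batch_size`."""
    return [indices[i:i + batch_size]
            for i in range(0, len(indices), batch_size)]


if __name__ == "__main__":
    # 10 samples split across 4 GPUs, batch size 2 on each rank.
    for rank in range(4):
        idx = shard_indices(10, rank, 4)
        print(rank, batched(idx, 2))
```

Because the shards are disjoint and cover every index, per-rank results can simply be concatenated and re-sorted afterwards.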
#94 — There is a large gap between the validation accuracy measured by vlmevalkit and that reported in the model paper. (opened Feb 23, 2024 by YongLD)
#90 — How are models that use in-context examples handled? (opened Feb 7, 2024 by sachit-menon; label: Feature Request)
#406 — ChatAPI - ERROR - HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by ProxyError('Unable to connect to proxy', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f481233f640>: Failed to establish a new connection: [Errno 111] Connection refused'))) (opened Aug 25, 2024 by yansuoyuli)
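The ProxyError in #406 means the HTTP client tried to reach api.openai.com through a proxy that refused the connection. Python's urllib (whose proxy discovery requests/urllib3 also follow) picks proxies up from the `*_proxy` environment variables, so a first diagnostic step is to inspect what is actually set; a small sketch:

```python
# Inspect the proxy settings that urllib/requests will pick up from the
# environment. A ProxyError like the one in #406 usually means HTTPS_PROXY
# (or https_proxy) points at a proxy that is down or unreachable.
import urllib.request


def effective_proxies(env=None):
    """Return a {scheme: proxy_url} map from *_proxy variables.

    Pass a dict to inspect it directly (handy for testing); with no
    argument, fall back to the real environment via urllib.
    """
    if env is not None:
        return {k.lower()[:-len("_proxy")]: v
                for k, v in env.items()
                if k.lower().endswith("_proxy") and v}
    return urllib.request.getproxies()


if __name__ == "__main__":
    print(effective_proxies())
    # If the https entry points at a dead proxy, clear it before rerunning:
    #   unset HTTPS_PROXY https_proxy
```

If no proxy is needed, unsetting the variables (or pointing them at a live proxy) typically resolves this class of error.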
#257 — [Request] Consider integrating the following list? (opened Jul 5, 2024 by sarapieri; label: help wanted)
#313 — Audio modality evaluation (opened Jul 25, 2024 by Simplesss; label: help wanted)
#484 — Reproducing QWen2VL Results on Video Benchmarks with VLMEvalKit (opened Sep 23, 2024 by aniki-ly)
#455 — Input length of input_ids is 0, but max_length is set to -2009. (opened Sep 10, 2024 by wangli68)
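The negative max_length in #455 is characteristic of Hugging Face-style generation configs, where `max_length` bounds prompt plus generated tokens: the remaining generation budget is roughly `max_length - prompt_len`, so a prompt longer than `max_length` (or a mis-set `max_length`) drives the budget negative. A sketch of that arithmetic (function and parameter names are illustrative, not the transformers internals):

```python
# Illustrative arithmetic behind errors like
# "Input length of input_ids is 0, but max_length is set to -2009."
# `max_length` caps prompt + generated tokens, so the generation budget
# shrinks as the prompt grows; `max_new_tokens` is prompt-independent.

def generation_budget(prompt_len, max_length=None, max_new_tokens=None):
    """Return how many new tokens may be generated, or raise if none."""
    if max_new_tokens is not None:
        budget = max_new_tokens           # independent of prompt length
    else:
        budget = max_length - prompt_len  # can go negative for long prompts
    if budget <= 0:
        raise ValueError(
            f"no room to generate: budget is {budget} "
            f"(prompt_len={prompt_len}, max_length={max_length})"
        )
    return budget


if __name__ == "__main__":
    print(generation_budget(100, max_length=2048))      # budget: 1948
    print(generation_budget(4096, max_new_tokens=256))  # budget: 256
```

Preferring `max_new_tokens` over `max_length` sidesteps this failure mode, since the budget no longer depends on prompt length.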