[Usage] How can I implement few-shot learning on LLaVa #1202
Comments
In-context learning or fine-tuning?
That's an excellent question. Similar to OpenAI GPT models, which can be improved through a few-shot approach, it would be great if we could apply the same method to these pre-trained models. @haotian-liu
Has this been solved? I use SGLang for batch inference, and I also need this feature for ICL, multi-turn discussion, and few-shot prompting.
I think the error is caused by the image token. In the prompt, the image token should be given as the literal `<image>`, not as an image id or image index. I got a similar error in my multi-prompt setup. By the way, the model is not capable of operating directly on multiple images and prompts simultaneously, as is evident from the following conversations by the author and others: https://discuss.huggingface.co/t/llava-multi-image-input-support-for-inference/68458, #197, #57.
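To illustrate the point about the `<image>` token, here is a minimal sketch of how a few-shot (in-context learning) prompt for LLaVA could be assembled. This is an assumption-laden example, not the original poster's code: the `USER:`/`ASSISTANT:` conversation template and the `build_few_shot_prompt` helper are hypothetical, and the exact template depends on the LLaVA variant you use. The one thing taken from the thread is that each image slot in the text must be the literal `<image>` token, with the actual images passed to the model separately in the same order.

```python
def build_few_shot_prompt(example_explanations, instruction="Summarize this image."):
    """Assemble a few-shot prompt for a LLaVA-style model.

    Hypothetical helper: each in-context example contributes one literal
    `<image>` token (NOT an image id or index) plus its explanation; the
    final query image gets an `<image>` token with the answer left blank.
    The actual image tensors/files are supplied to the model separately,
    in the same order as the `<image>` tokens appear.
    """
    parts = []
    for explanation in example_explanations:
        # One in-context example: image placeholder + known explanation.
        parts.append(f"USER: <image>\n{instruction}\nASSISTANT: {explanation}")
    # The query: same placeholder, answer to be generated by the model.
    parts.append(f"USER: <image>\n{instruction}\nASSISTANT:")
    return "\n".join(parts)

prompt = build_few_shot_prompt([
    "A red apple on a wooden table.",
    "Two cats sleeping on a sofa.",
])
print(prompt)
```

Note that, per the discussion above, stock LLaVA checkpoints were not trained for multiple interleaved images, so a prompt like this may still behave poorly even when the tokens are formatted correctly.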
Hi guys, you can use our implemented codebase for ICL. https://github.com/ys-zong/VL-ICL |
Describe the issue
Hi there,
I have some images along with custom explanations for them, and I want to implement few-shot learning to generate summaries of my images.
This is my current implementation:
My code to build the prompt:
Inference:
And the error: