How to fine-tune llava-v1.6-mistral-7b on GQA dataset #1544
Comments
There is no training script for 1.6, unfortunately.
Hello, I was wondering if there is currently a training script available for version 1.6? Thank you!
What do you think of replacing llava with llava-next (processor and model)?
Hi @YuyangYe, have you found the scripts for that? Thanks!
Have you made any progress on this?
There is some work on Mac: Blaizzy/mlx-vlm#43. Maybe that is the right way.
Yes, the owner of this repo answered on another issue: the fine-tuning does work with llama3! I suggest we move to his repo if we have new issues.
Question
Thank you for your great work!
I am trying to fine-tune llava-v1.6-mistral-7b on the provided GQA dataset, using the script finetune_task_lora.sh. However, the loss doesn't decrease and the test result on GQA gets worse. How should I fine-tune llava-v1.6 models? This is the modified script:
This is the modified part in train.py:
Looking forward to your reply!