full-parameter or lora? #3

Will full-parameter fine-tuning be better?

Comments
Hi @Nastu-Ho, in our experiments we did not observe any accuracy gains from full fine-tuning when using the Phi3-mini-4K or Vicuna LLMs. However, with LLaMA-3, full fine-tuning was better than LoRA.
Thank you for your reply.
Hi @Nastu-Ho, we do not have MVBench results with the LLaMA-3 LLM, but I can share the numbers we observed with Vicuna 7B and 13B, which may give some clues about the trend: Vicuna 7B and 13B obtain average MVBench scores of 53.10 and 58.67, respectively. These experiments suggest that a stronger LLM improves MVBench performance. However, we do not have any ablations with LLaMA-3 or Mistral; if you have any findings, please do share. Thank you.
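For readers comparing the two setups discussed in this thread, here is a minimal sketch of LoRA versus full fine-tuning, assuming HuggingFace Transformers and PEFT; the model name and LoRA hyperparameters are illustrative assumptions, not this repo's exact training configuration.

```python
# Minimal sketch: full fine-tuning vs. LoRA. Assumes HuggingFace
# Transformers + PEFT; model name and LoRA hyperparameters are
# illustrative assumptions, not this repo's exact training setup.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Full fine-tuning: every base-model parameter receives gradients.
for p in model.parameters():
    p.requires_grad = True

# LoRA: freeze the base model and train small low-rank adapters instead.
lora_cfg = LoraConfig(
    r=16,                                 # adapter rank (hypothetical)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
lora_model = get_peft_model(model, lora_cfg)
lora_model.print_trainable_parameters()   # only a small fraction trains
```

The trade-off the maintainers describe is typical: LoRA trains far fewer parameters and is cheaper, but with larger base models such as LLaMA-3, full fine-tuning can recover additional accuracy.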