Issues: jackaduma/Vicuna-LoRA-RLHF-PyTorch
#4 What is the data format to LoRA-fine-tune Vicuna? (opened May 11, 2023 by DavidFarago; updated May 11, 2023)
#10 supervised_finetune.py failed with a workaround (opened Jun 11, 2023 by SeekPoint; updated Jun 11, 2023)
#11 SFT with large loss {'loss': 388082722196684.8, 'learning_rate': 0.0, 'epoch': 0.02} (opened Jun 19, 2023 by LeiShenVictoria; updated Jun 19, 2023)
#5 Unable to merge reward adapter into model (opened May 12, 2023 by DavidFarago; updated Jun 26, 2023)
#12 Can we use a format other than alpaca-instruct, such as the alpaca-chat instruct format? If yes, how? (opened Jun 27, 2023 by Tejaswi-kashyap-006; updated Jun 27, 2023)
#14 Unable to merge reward adapter into model (opened Jul 18, 2023 by XuanRen4470; updated Jul 18, 2023)
#16 Any plans for adding a repo using Stable Vicuna for conversation (human: / assistant: format)? (opened Aug 14, 2023 by andysingal; updated Aug 14, 2023)