[Question] Finetuning LLaVA-1.5 with LoRA: does not appear to have a file named config.json #729
Comments
Have you figured it out?
It seems like we need to add a trigger when saving intermediate checkpoints, like the one that runs after training finishes:
I added a custom callback just above the trainer initialization:

    # save callback: replicate the end-of-training LoRA save for every
    # intermediate checkpoint, so config.json and the adapter weights are
    # written alongside the trainer state. This lives in llava/train/train.py,
    # where os, torch, model and the get_peft_state_* helpers are in scope.
    from transformers import TrainerCallback

    class SaveCallback(TrainerCallback):
        def on_save(self, args, state, control, **kwargs):
            checkpoint_dir = os.path.join(args.output_dir,
                                          'checkpoint-{}'.format(state.global_step))
            if args.lora_enable:
                state_dict = get_peft_state_maybe_zero_3(
                    model.named_parameters(), args.lora_bias
                )
                non_lora_state_dict = get_peft_state_non_lora_maybe_zero_3(
                    model.named_parameters()
                )
                # only write from the main process
                if args.local_rank in [-1, 0]:
                    model.config.save_pretrained(checkpoint_dir)
                    model.save_pretrained(checkpoint_dir, state_dict=state_dict)
                    torch.save(non_lora_state_dict,
                               os.path.join(checkpoint_dir, 'non_lora_trainables.bin'))
            return control

and passed it to the trainer when initializing:

    trainer = LLaVATrainer(model=model,
                           tokenizer=tokenizer,
                           args=training_args,
                           callbacks=[SaveCallback()],
                           **data_module)
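Once an intermediate checkpoint contains config.json, the LoRA adapter weights and non_lora_trainables.bin, it should load the same way as a finished run. A minimal loading sketch, assuming LLaVA's load_pretrained_model helper and hypothetical paths; model_name must contain "lora" so the adapter is merged onto the base weights:

    from llava.model.builder import load_pretrained_model

    tokenizer, model, image_processor, context_len = load_pretrained_model(
        model_path="/checkpoints/llava-v1.5-13b-lora-v2/checkpoint-6000",
        model_base="liuhaotian/llava-v1.5-13b",  # base model the LoRA was trained from
        model_name="llava-v1.5-13b-lora",
    )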
Hey, I met the same problem. Did you solve this?
Question
Thanks for your great work! I have a question about finetuning LLaVA-1.5 with LoRA:
[screenshot of the traceback; the error is transcribed below]
OSError: /checkpoints/llava-v1.5-13b-lora-v2/checkpoint-6000 does not appear to have a file named config.json.
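For context, the failure happens as soon as Transformers tries to read the checkpoint's config. A minimal sketch of the failing call, using the path from the traceback:

    from transformers import AutoConfig

    # Intermediate checkpoints written by the stock trainer hold the
    # trainer/optimizer state but no config.json, so this lookup fails:
    config = AutoConfig.from_pretrained(
        "/checkpoints/llava-v1.5-13b-lora-v2/checkpoint-6000"
    )
    # OSError: ... does not appear to have a file named config.json.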