Describe the issue
I originally assumed that the `non_lora_trainables.bin` file produced by LoRA training could, as its name suggests, be shared across checkpoints.
To verify this, I ran training twice. A checkpoint only produces normal image-dialogue output when it is paired with the `non_lora_trainables.bin` from its own training run; if I swap in the `non_lora_trainables.bin` from the other run, the output becomes abnormal.
Is there a way to save `non_lora_trainables.bin` and `config.json` with every checkpoint during LoRA training? Otherwise the intermediate checkpoints are saved but unusable, which makes it hard to evaluate model performance correctly.
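One workaround I am considering (a sketch, not tested against the LLaVA codebase): at each checkpoint save, collect the trainable parameters that are *not* LoRA adapter weights and write them out alongside the adapter. This assumes the PEFT convention that LoRA adapter parameters have `"lora_"` in their names; the helper below only shows the filtering step.

```python
def split_non_lora_trainables(named_params):
    """Given an iterable of (name, param) pairs, where each param exposes
    a `requires_grad` attribute (as torch parameters do), return a dict of
    the trainable params that are NOT LoRA adapter weights.

    Assumption: LoRA adapter params follow the PEFT naming convention and
    contain "lora_" in their names (e.g. "...lora_A.weight").
    """
    return {
        name: param
        for name, param in named_params
        if getattr(param, "requires_grad", False) and "lora_" not in name
    }
```

In a Hugging Face `Trainer` setup this could be hooked into a `TrainerCallback.on_save`, calling `split_non_lora_trainables(model.named_parameters())` and then `torch.save(...)` into the checkpoint directory as `non_lora_trainables.bin` (the exact hook and file layout are assumptions based on how the final save works, not something I have verified for every DeepSpeed/ZeRO configuration).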