
Possible to train LoRA on Mac M2 GPU? #2418

Closed
Anvil-Late opened this issue May 29, 2023 · 2 comments

@Anvil-Late

Hi,

I want to fine-tune the wizard-vicuna-7b model (currently the HF version, because GGML seems to be broken for now) to add some medical knowledge.
My dataset is ready, loads fine, and has the proper structure; training works too, but it's a bit slow.
I'd like to use the GPU to train my LoRA faster; is there any way to do that with text-generation-webui?

I'm using a Mac M2 Pro with 32 GB of unified memory.

PS: I looked around and didn't find a similar issue, but if the question has already been asked, don't hesitate to link it!
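For reference, a minimal sketch of what routing LoRA training to the M2 GPU could look like through PyTorch's Metal (MPS) backend, independent of text-generation-webui. It assumes a PyTorch build (>= 1.12) with MPS support plus the `transformers` and `peft` packages; the checkpoint name is a placeholder, not a real repo:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Use Apple's Metal (MPS) backend when available; otherwise fall back to CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Placeholder name -- substitute the wizard-vicuna-7b HF repo you actually use.
model_name = "your-org/wizard-vicuna-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach LoRA adapters so only the low-rank update matrices are trained.
lora_config = LoraConfig(
    r=8,                                  # rank of the update matrices
    lora_alpha=16,                        # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA-style models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.to(device)  # the training loop then runs on the M2 GPU via MPS
```

Whether this runs end to end depends on MPS operator coverage for the model; missing ops can be routed back to the CPU by setting the PYTORCH_ENABLE_MPS_FALLBACK=1 environment variable.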

@Anvil-Late (Author)

Closing my own issue: apparently work is underway in llama.cpp to integrate fine-tuning, including on the Mac GPU.

Source: ggerganov/ggml#8

@QueryType

Hmm, it's a pity you closed the issue. As far as I know, fine-tuning in llama.cpp is still a bit shaky (at least for me) on the Mac M1/M2. Also, per ggerganov, fine-tuning on Metal is a low-priority item.
