GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
Topics: pytorch, lora, low-rank-approximation, peft, large-language-models, llm, low-rank-adaptation, finetuning-llms, galore
Updated Mar 7, 2024 · Python
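The core idea named in the title — gradient low-rank projection — can be illustrated with a minimal NumPy sketch. This is not the repository's actual API: the function name `galore_step` and the use of plain momentum in place of Adam moments are assumptions made for brevity. The gradient is projected onto a low-rank subspace obtained from its top singular vectors, the optimizer state lives in that small subspace, and the update is projected back to full size before being applied.

```python
import numpy as np

def galore_step(W, grad, rank, lr, state):
    """One hedged sketch of a GaLore-style update (hypothetical helper):
    project the gradient onto a rank-r subspace, keep optimizer state
    there, then project the update back to the full parameter shape."""
    if state.get("P") is None:
        # Projection matrix from the gradient's top-r left singular vectors.
        U, _, _ = np.linalg.svd(grad, full_matrices=False)
        state["P"] = U[:, :rank]              # shape (m, r)
    P = state["P"]
    low_rank_grad = P.T @ grad                # shape (r, n): memory-light
    # Plain momentum in the low-rank space (stand-in for Adam's moments).
    state["m"] = 0.9 * state.get("m", 0.0) + low_rank_grad
    update = P @ state["m"]                   # back to shape (m, n)
    return W - lr * update, state

# Usage: the applied update has rank at most `rank`, so optimizer
# state costs O(rn) memory instead of O(mn).
W = np.random.randn(8, 4)
grad = np.random.randn(8, 4)
W_new, state = galore_step(W, grad, rank=2, lr=0.1, state={})
```

The memory saving comes from storing optimizer state as an r×n matrix rather than m×n; the real method also refreshes the projection periodically, which this sketch omits.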