Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
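As a rough sketch of how PEFT is typically used to attach a LoRA adapter to a pretrained model (the base model name, target modules, and hyperparameters below are illustrative assumptions, not from this listing):

```python
# Minimal PEFT + LoRA sketch; model choice and hyperparameters are assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # assumed example model
config = LoraConfig(
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling factor for the update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```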
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
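LoRA freezes the pretrained weight W and learns a low-rank update BA, so the effective weight becomes W + BA. A minimal sketch of the loralib workflow, assuming toy layer sizes and rank:

```python
# Toy loralib example: lora.Linear keeps the dense weight W frozen and learns a
# rank-r update B @ A, so the effective weight is W + B @ A. Sizes are illustrative.
import torch
import torch.nn as nn
import loralib as lora

model = nn.Sequential(
    lora.Linear(768, 768, r=16),  # LoRA-augmented dense layer
    nn.ReLU(),
    lora.Linear(768, 10, r=16),
)

# Freeze everything except the LoRA matrices before training.
lora.mark_only_lora_as_trainable(model)

# ... train as usual ...

# Checkpoint only the small LoRA weights, not the full model.
torch.save(lora.lora_state_dict(model), "lora_weights.pt")
```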
Firefly: a training toolkit for large language models, supporting Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models
LoRA & DreamBooth training scripts & GUI using kohya-ss's trainer, for diffusion models.
Fine-tuning ChatGLM-6B with PEFT (efficient ChatGLM fine-tuning based on PEFT)
Use PEFT or full-parameter training to fine-tune 300+ LLMs or 60+ MLLMs. (Qwen2, GLM4v, Internlm2.5, Yi, Llama3.1, Llava-Video, Internvl2, MiniCPM-V-2.6, Deepseek, Baichuan2, Gemma2, Phi3-Vision, ...)
Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral)
Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
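A hedged sketch of what a request to such a multi-LoRA server might look like, assuming a TGI-style /generate endpoint where adapter_id selects which fine-tuned adapter is applied on top of the shared base model; the URL, adapter name, and response shape are assumptions:

```python
# Sketch of querying an assumed multi-LoRA inference server; the endpoint,
# adapter name, and response format are assumptions, not a documented API.
import requests

resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "Explain low-rank adaptation in one sentence.",
        "parameters": {"max_new_tokens": 64, "adapter_id": "my-org/customer-support-lora"},
    },
    timeout=30,
)
print(resp.json()["generated_text"])
```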
The cryptography-based networking stack for building unstoppable networks with LoRa, Packet Radio, WiFi and everything in between.
OneTrainer is a one-stop solution for all your Stable Diffusion training needs.
ChatGLM-6B fine-tuning and Alpaca fine-tuning
An open-source, knowledgeable large language model framework.
Communicate Freely