Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
stable diffusion webui colab
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
BELLE: Be Everyone's Large Language Model Engine (open-source Chinese conversational LLM)
Using Low-rank adaptation to quickly fine-tune diffusion models.
Firefly: a training tool for large language models, supporting training of Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models
LoRA & Dreambooth training scripts & GUI use kohya-ss's trainer, for diffusion model.
AirLLM: 70B-model inference with a single 4GB GPU
Fine-tuning ChatGLM-6B with PEFT | Efficient PEFT-based ChatGLM fine-tuning
MQTT gateway for ESP8266 or ESP32 with bidirectional 433MHz/315MHz/868MHz, infrared communications, BLE, Bluetooth, beacon detection, Mi Flora, Mi Jia, LYWSD02, LYWSD03MMC, Mi Scale, TPMS, and BBQ thermometer compatibility & LoRa.
Meshtastic device firmware
Use PEFT or Full-parameter to finetune 300+ LLMs or 60+ MLLMs. (Qwen2, GLM4v, Internlm2.5, Yi, Llama3.1, Llava-Video, Internvl2, MiniCPM-V-2.6, Deepseek, Baichuan2, Gemma2, Phi3-Vision, ...)
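Many of the repositories above (loralib, PEFT, LLaMA-Factory, the diffusion trainers) build on the same core technique: freezing the pretrained weight matrix and learning a low-rank additive update. A minimal NumPy-only sketch of that idea, with hypothetical names (`LoRALinear`, `r`, `alpha` are illustrative, not any library's actual API):

```python
import numpy as np

class LoRALinear:
    """Sketch of a LoRA-style linear layer: y = x W^T + (x A^T) B^T * (alpha / r)."""

    def __init__(self, in_features, out_features, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight (stands in for a loaded checkpoint).
        self.W = rng.normal(size=(out_features, in_features))
        # Trainable low-rank factors: A gets a small random init, B starts at
        # zero so the adapted layer initially matches the frozen layer exactly.
        self.A = rng.normal(size=(r, in_features)) * 0.01
        self.B = np.zeros((out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # Base projection plus the scaled low-rank update.
        return x @ self.W.T + (x @ self.A.T) @ self.B.T * self.scale
```

Only `A` and `B` (roughly `r * (in + out)` parameters) would be updated during fine-tuning, which is why these tools can adapt billion-parameter models on modest hardware.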