Stars
Official repository for "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient and high-quality synthetic data generation pipeline!
Accessible large language models via k-bit quantization for PyTorch.
Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
[ACL 2024] Progressive LLaMA with Block Expansion.
1-Click is all you need.
🔥 High-performance TensorFlow Lite library for React Native with GPU acceleration
Hosting code-server on Amazon SageMaker
A standalone registry used to mirror images for OpenShift installations.