Stars
A command line tool and library for transferring data with URL syntax, supporting DICT, FILE, FTP, FTPS, GOPHER, GOPHERS, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, POP3, POP3S, RTMP, RTMPS, RTSP…
A simple, performant, and scalable JAX LLM!
The easiest and fastest way to run AI-generated Python code safely
Aim 💫 — An easy-to-use & supercharged open-source experiment tracker.
Visualize streams of multimodal data. Free, fast, easy to use, and simple to integrate. Built in Rust.
Completely unstyled, fully accessible UI components, designed to integrate beautifully with Tailwind CSS.
Accessible large language models via k-bit quantization for PyTorch.
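As a rough illustration of what k-bit loading looks like in practice, here is a minimal sketch using the bitsandbytes integration in Hugging Face Transformers; the model id is a placeholder assumption, not something specified by the entry above.

```python
# Sketch: load a causal LM with 4-bit quantized weights via Transformers + bitsandbytes.
# The model id below is a placeholder; substitute any Hugging Face causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4 bits on load
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for compute
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B",              # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
```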
Model interpretability and understanding for PyTorch
Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization.
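For reference, a minimal sketch of the BPE idea itself (not that repository's actual API): start from raw bytes and repeatedly merge the most frequent adjacent pair of tokens into a new token id.

```python
# Illustrative BPE training loop: merge the most frequent adjacent token pair,
# assign it a new id, and repeat for a fixed number of merges.
from collections import Counter

def train_bpe(text: str, num_merges: int):
    ids = list(text.encode("utf-8"))  # start from raw bytes
    merges = {}                       # (left, right) -> new token id
    next_id = 256
    for _ in range(num_merges):
        pairs = Counter(zip(ids, ids[1:]))
        if not pairs:
            break
        pair = pairs.most_common(1)[0][0]
        merges[pair] = next_id
        # Replace every occurrence of the chosen pair with the new token id.
        out, i = [], 0
        while i < len(ids):
            if i + 1 < len(ids) and (ids[i], ids[i + 1]) == pair:
                out.append(next_id)
                i += 2
            else:
                out.append(ids[i])
                i += 1
        ids = out
        next_id += 1
    return merges

merges = train_bpe("aaabdaaabac", num_merges=3)
```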
Machine Learning Engineering Open Book
Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods to cover single/multi-node GPUs. Supports default & custom datasets for applications such as summarization and Q&A. Supporting…
Guide for fine-tuning Llama/Mistral/CodeLlama models and more
Get up and running with Llama 3.2, Mistral, Gemma 2, and other large language models.
🤖 Chat with your SQL database 📊. Accurate Text-to-SQL Generation via LLMs using RAG 🔄.
Chat with your database (SQL, CSV, pandas, polars, MongoDB, NoSQL, etc.). PandasAI makes data analysis conversational using LLMs (GPT-3.5/4, Anthropic, VertexAI) and RAG.
Enable decision-making based on simulations
Gen-AI Chat for Teams - Think ChatGPT if it had access to your team's unique knowledge.
Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
Probabilistic programming with HuggingFace language models
Qwen2.5 is the large language model series developed by the Qwen team at Alibaba Cloud.
OpenAI Assistants API quickstart with Next.js.
Code at the speed of thought – Zed is a high-performance, multiplayer code editor from the creators of Atom and Tree-sitter.
A vector search SQLite extension that runs anywhere!
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities