Together AI
Seattle, WA
Stars
Fast and memory-efficient exact attention
Build AI Agents with memory, knowledge, tools and reasoning. Chat with them using a beautiful Agent UI.
An efficient, flexible and full-featured toolkit for fine-tuning LLM (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
Minimalistic large language model 3D-parallelism training
Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks.
Convert any URL to an LLM-friendly input with a simple prefix https://r.jina.ai/ (a usage sketch follows this list)
CLI tool and Python library that converts the output of popular command-line tools, file-types, and common strings to JSON, YAML, or Dictionaries. This allows piping of output to tools like jq and … (a library-usage sketch follows this list)
The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training”
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
Simple and efficient pytorch-native transformer text generation in <1000 LOC of python.
DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. 🤖💤
FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHX…
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Hackable and optimized Transformers building blocks, supporting a composable construction.
A collection of libraries to optimise AI model performance
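
The r.jina.ai entry above works by prepending the reader URL to any target URL. A minimal sketch in Python, assuming the requests package is installed and using https://example.com as a placeholder target:

```python
import requests  # assumed available: pip install requests

# Minimal sketch of the prefix described in the list item: prepend
# https://r.jina.ai/ to a target URL and fetch it.
target = "https://example.com"  # placeholder target URL
resp = requests.get("https://r.jina.ai/" + target, timeout=30)
resp.raise_for_status()

# The reader endpoint returns the page as LLM-friendly plain text/markdown.
print(resp.text[:500])
```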
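
The jc entry above can also be used as a Python library rather than a CLI filter. A minimal sketch, assuming the jc package and the dig command are installed; the "dig" parser name and target domain are only illustrative:

```python
import json
import subprocess

import jc  # assumed installed: pip install jc

# Minimal sketch of library usage: capture a command's text output and
# convert it to Python data structures with jc, as the list item describes.
dig_output = subprocess.run(
    ["dig", "example.com"], capture_output=True, text=True, check=True
).stdout

parsed = jc.parse("dig", dig_output)  # list of dicts describing the response
print(json.dumps(parsed, indent=2))
```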