atemiz
Stars
Language: Python, sorted by most stars
Learn how to design large-scale systems. Prep for the system design interview. Includes Anki flashcards.
Stable Diffusion web UI
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
A natural language interface for computers
The most powerful and modular Stable Diffusion GUI, API, and backend with a graph/nodes interface.
Making large AI models cheaper, faster and more accessible
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Code and documentation to train Stanford's Alpaca models, and generate the data.
⚡ A Fast, Extensible Progress Bar for Python and CLI
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
Open-Sora: Democratizing Efficient Video Production for All
Stable Diffusion with Core ML on Apple Silicon
A graph-relational database with declarative schema, built-in migration system, and a next-generation query language
Finetune Llama 3, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference,…
ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.
Run open-source LLMs, such as Llama 2 and Mistral, as an OpenAI-compatible API endpoint in the cloud.
LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
MiniCPM-Llama3-V 2.5: A GPT-4V Level Multimodal LLM on Your Phone
An open source implementation of Microsoft's VALL-E X zero-shot TTS model. A demo is available at https://plachtaa.github.io
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
Diff Match Patch is a high-performance library in multiple languages that manipulates plain text.
Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
CoreNet: A library for training deep neural networks
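Several of the libraries above have rough stdlib analogues. For instance, the plain-text diffing that Diff Match Patch provides can be sketched with Python's built-in difflib; this is a minimal illustration of the idea, not the actual diff_match_patch API:

```python
import difflib

def char_diff(a: str, b: str):
    """Character-level diff of two strings using stdlib difflib.

    Returns a list of (tag, old_segment, new_segment) tuples, where
    tag is one of 'equal', 'replace', 'delete', 'insert'. This only
    approximates Diff Match Patch, which uses its own diff algorithm
    and patch format.
    """
    sm = difflib.SequenceMatcher(a=a, b=b)
    ops = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        ops.append((tag, a[i1:i2], b[j1:j2]))
    return ops

diff = char_diff("the quick brown fox", "the quick red fox")
for tag, old, new in diff:
    print(tag, repr(old), repr(new))
```

Concatenating the old segments reconstructs the first string and the new segments the second, which is the invariant any diff/patch library maintains.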