Stars
Lightweight demos for fine-tuning LLMs. Powered by 🤗 transformers and open-source datasets.
Machine learning interview preparation drawn from FAANG, Snapchat, and LinkedIn. The author has offers from Snapchat, Coupang, Stitch Fix, etc. Blog: mlengineer.io.
An efficient pure-PyTorch implementation of Kolmogorov-Arnold Network (KAN).
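A KAN layer replaces fixed activations with a learnable univariate function on every input-output edge. A minimal sketch of that idea, using Gaussian RBF bases rather than the B-splines this repo uses; class name and hyperparameters are illustrative, not the repo's API:

```python
import torch
import torch.nn as nn

class RBFKANLayer(nn.Module):
    # Each edge (i -> j) carries a learnable univariate function phi_ij,
    # parameterized as a linear combination of K fixed Gaussian RBF bases.
    def __init__(self, in_dim, out_dim, num_centers=8, x_min=-2.0, x_max=2.0):
        super().__init__()
        self.register_buffer("centers", torch.linspace(x_min, x_max, num_centers))
        self.gamma = (num_centers - 1) / (x_max - x_min)  # width ~ grid spacing
        # coeffs[j, i, k]: weight of basis k on the edge i -> j
        self.coeffs = nn.Parameter(torch.randn(out_dim, in_dim, num_centers) * 0.1)

    def forward(self, x):  # x: (batch, in_dim)
        # Evaluate all K basis functions at every input coordinate.
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) * self.gamma) ** 2)  # (B, in, K)
        # Output j is the sum over inputs i of phi_ij(x_i).
        return torch.einsum("bik,oik->bo", basis, self.coeffs)
```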
Create customized software from a natural-language idea, through LLM-powered multi-agent collaboration.
General technology for enabling AI capabilities with LLMs and MLLMs.
Large Language-and-Vision Assistant for Biomedicine, built towards multimodal GPT-4 level capabilities.
✅ Solutions to LeetCode in Go, with 100% test coverage and runtimes that beat 100% of submissions.
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
Making large AI models cheaper, faster and more accessible
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
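LoRA, one of the fine-tuning methods this repo supports, freezes the pretrained weights and learns a low-rank additive update. A minimal sketch of the mechanism, not the repo's actual implementation (names and defaults are illustrative):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Frozen pretrained linear layer W plus a trainable low-rank update B @ A,
    # scaled by alpha / r as in the LoRA paper.
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B is zero-initialized so training starts from the pretrained behavior.
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale
```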
Salesforce open-source LLMs with 8k sequence length.
Let ChatGPT teach your own chatbot in hours with a single GPU!
Code for the ALiBi method for transformer language models (ICLR 2022)
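ALiBi drops positional embeddings and instead adds a head-specific linear penalty on key distance to the attention logits. A sketch of the bias construction, assuming a power-of-two head count (the paper's simple slope recipe):

```python
import torch

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # Slopes form a geometric sequence 2^(-8/h), 2^(-16/h), ... for h heads.
    slopes = torch.tensor([2 ** (-8 * (i + 1) / num_heads) for i in range(num_heads)])
    pos = torch.arange(seq_len)
    # Relative offset j - i: non-positive in the causal (j <= i) region,
    # so more distant keys receive a larger negative bias.
    distance = pos[None, :] - pos[:, None]                 # (seq, seq)
    return slopes[:, None, None] * distance[None, :, :]   # (heads, seq, seq)
```

The returned tensor is added to the attention logits before the causal mask and softmax; positions beyond the diagonal are masked out as usual.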
PyTorch implementation of SimCLR: A Simple Framework for Contrastive Learning of Visual Representations
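The core of SimCLR is the NT-Xent (normalized temperature-scaled cross-entropy) loss over two augmented views of each image. A compact sketch of the loss, not the repo's exact code:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1, z2: (N, D) projections of two augmented views of the same N images.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit-norm rows
    sim = z @ z.T / temperature                         # cosine similarities as logits
    n = z1.shape[0]
    sim.fill_diagonal_(float("-inf"))                   # an example is never its own negative
    # The positive for row i is its other view: i+n in the first half, i-n in the second.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```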
Aligning pretrained language models with instruction data generated by themselves.
Fast and memory-efficient exact attention
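This is the FlashAttention repo; the same exact-attention computation is also reachable from PyTorch core via scaled_dot_product_attention, which can dispatch to a FlashAttention kernel when inputs allow it. A usage sketch, assuming a CUDA device with half-precision inputs so the fast kernel is eligible:

```python
import torch
import torch.nn.functional as F

# q, k, v: (batch, heads, seq, head_dim); fp16/bf16 on CUDA is required
# for the flash backend to be selected.
q = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```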
Reformer, the efficient Transformer, in PyTorch.
Siamese and triplet networks with online pair/triplet mining in PyTorch
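Online mining means triplets are formed inside each minibatch rather than precomputed. A sketch of the common "batch hard" variant (hardest positive and hardest negative per anchor), one of the strategies such repos implement; names are illustrative:

```python
import torch

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    dist = torch.cdist(embeddings, embeddings)              # (B, B) pairwise L2 distances
    same = labels[:, None] == labels[None, :]               # (B, B) label-match mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = same & ~eye
    # Hardest positive: farthest same-label example for each anchor.
    hardest_pos = (dist * pos_mask).max(dim=1).values
    # Hardest negative: closest different-label example (same-label set to +inf).
    hardest_neg = dist.masked_fill(same, float("inf")).min(dim=1).values
    return torch.relu(hardest_pos - hardest_neg + margin).mean()
```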
A curated list of awesome Active Learning resources.
An open-source toolkit full of handy functions, including the most-used models and utilities for deep-learning practitioners.
A collection of corpora for named entity recognition (NER) and entity recognition tasks. These annotated datasets cover a variety of languages, domains and entity types.
Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers"
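The paper's "strided" pattern lets each query attend to a local window plus every stride-th earlier position, cutting attention cost from O(n²) toward O(n·√n). A sketch of the boolean attention mask, not the repo's kernel code:

```python
import torch

def strided_sparse_mask(seq_len: int, stride: int) -> torch.Tensor:
    # True = attention allowed. Each causal query sees the previous `stride`
    # local positions plus every stride-th earlier position.
    i = torch.arange(seq_len)[:, None]
    j = torch.arange(seq_len)[None, :]
    causal = j <= i
    local = (i - j) < stride
    strided = ((i - j) % stride) == 0
    return causal & (local | strided)
```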
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
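PEFT wraps a 🤗 transformers model so that only small adapter weights (e.g. LoRA matrices) are trained while the base model stays frozen. A minimal usage sketch; the model name and hyperparameters are just for illustration:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small model for illustration
config = LoraConfig(
    r=8,               # low-rank dimension of the adapter
    lora_alpha=16,     # scaling factor applied to the update
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```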
Example queries for learning the Kusto Query Language (KQL).