Stars
Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models"
Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models.
A curated list of LLM interpretability-related material - tutorials, libraries, surveys, papers, blogs, etc.
Using sparse coding to find distributed representations used by neural networks.
A Unified Library for Parameter-Efficient and Modular Transfer Learning
Official code for "Large Language Models Are Reasoning Teachers", ACL 2023
SWE-agent takes a GitHub issue and tries to automatically fix it, using GPT-4, or your LM of choice. It solves 12.47% of bugs in the SWE-bench evaluation set and takes just 1 minute to run.
Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
Stanford NLP Python Library for Understanding and Improving PyTorch Models via Interventions
Locating and editing factual associations in GPT (NeurIPS 2022)
ReFT: Representation Finetuning for Language Models
For releasing code related to compression methods for transformers, accompanying our publications
[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baichuan, TinyLlama, etc.
The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with an image prompt.
An easy-to-use federated learning platform
[ICML 2023] Code for our paper “Compositional Exemplars for In-context Learning”.
A high-throughput and memory-efficient inference and serving engine for LLMs
Benchmarking large language models' complex reasoning ability with chain-of-thought prompting
🚀 State-of-the-art parsers for natural language.
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024]
Official implementation for "Automatic Chain of Thought Prompting in Large Language Models" (stay tuned & more will be updated)
Code, datasets, and checkpoints for the paper "Improving Passage Retrieval with Zero-Shot Question Generation (EMNLP 2022)"
The GitHub repository for the paper "Self-prompted Chain-of-Thought on Large Language Models for Open-domain Multi-hop Reasoning", accepted at EMNLP 2023.
Forward-Looking Active REtrieval-augmented generation (FLARE)
The original implementation of Min et al., "Nonparametric Masked Language Modeling" (paper: https://arxiv.org/abs/2212.01349)