Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed.
A curated list of recent diffusion models for video generation, editing, restoration, understanding, etc.
Implementation of paper 'Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference'
Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs.
Repo for the ACL 2023 paper "Won't Get Fooled Again: Answering Questions with False Premises"
[ACL 2020] Towards Debiasing Sentence Representations
[Updated irregularly] A curated collection of valuable questions on deep learning, machine learning, reinforcement learning, and data science, gathered from sites such as Zhihu, Quora, Reddit, and Stack Exchange.
Awesome Machine Unlearning (A Survey of Machine Unlearning)
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
Video-LLaVA: Learning United Visual Representation by Alignment Before Projection
[CVPR 2024 Highlight] [VideoChatGPT] ChatGPT with video understanding! Also supports many more LMs, such as MiniGPT-4, StableLM, and MOSS.
Evaluating the Ripple Effects of Knowledge Editing in Language Models
Stable Knowledge Editing in Large Language Models
[EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions
Adding guardrails to large language models.
A very simple framework for state-of-the-art Natural Language Processing (NLP)
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
[EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627