Stars
Solutions to Linear Algebra Done Right, by Sheldon Axler.
Code and documentation to train Stanford's Alpaca models, and generate the data.
A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.).
We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20 via OpenAI’s APIs.
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Paper list about multimodal and large language models; only used to record papers I read on the daily arXiv for personal needs.
Calibration library and code for the paper: Verified Uncertainty Calibration. Ananya Kumar, Percy Liang, Tengyu Ma. NeurIPS 2019 (Spotlight).
The hub for EleutherAI's work on interpretability and learning dynamics
PyHessian is a PyTorch library for second-order analysis and training of neural networks.
Collection of advice for prospective and current PhD students
Transformer based on a variant of attention that has linear complexity with respect to sequence length.
A beautiful, simple, clean, and responsive Jekyll theme for academics
Minimal is a Jekyll theme for GitHub Pages
Source code for EMNLP2022 paper "Finding Skill Neurons in Pre-trained Transformers via Prompt Tuning".
Library containing PyTorch implementations of various adversarial attacks and resources
EPFL Course - Optimization for Machine Learning - CS-439
Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning"
Code for ACL'2021 paper WARP 🌀 Word-level Adversarial ReProgramming. Outperforming `GPT-3` on SuperGLUE Few-Shot text classification. https://aclanthology.org/2021.acl-long.381/
Must-read Papers of Parameter-Efficient Tuning (Delta Tuning) Methods on Pre-trained Models.
A curated list of prompt-based papers in computer vision and vision-language learning.
Must-read papers on prompt-based tuning for pre-trained language models.
[NeurIPS 2021] A Geometric Analysis of Neural Collapse with Unconstrained Features
A random-event-driven, text-based game engine.