- The Hong Kong University of Science and Technology, Hong Kong SAR, China
- Website: https://yjiangcm.github.io/
- Twitter: @Yuxin_Jiang_
- Google Scholar: https://scholar.google.com/citations?user=QnfcEEcAAAAJ
Stars
Robust recipes to align language models with human and AI preferences
The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint"
A large-scale, fine-grained, diverse preference dataset (and models).
Get up and running with Llama 3, Mistral, Gemma 2, and other large language models.
SimPO: Simple Preference Optimization with a Reference-Free Reward
Reference implementation for DPO (Direct Preference Optimization)
The hub for EleutherAI's work on interpretability and learning dynamics
Achieving Efficient Alignment through Learned Correction
Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models"
DSPy: The framework for programming—not prompting—foundation models
Code for "Learning to Edit: Aligning LLMs with Knowledge Editing (ACL 2024)"
Code and data for "MT-Eval: A Multi-Turn Capabilities Evaluation Benchmark for Large Language Models"
Fast and memory-efficient exact attention
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama 3, Mistral, InternLM2, GPT-4, LLaMA 2, Qwen, GLM, Claude, etc.) over 100+ datasets.
A WebUI for Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Code for "FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models (ACL 2024)"
🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
Toolkit for creating, sharing and using natural language prompts.
Locating and editing factual associations in GPT (NeurIPS 2022)
🦜🔗 Build context-aware reasoning applications
[Knowledge Editing] [ACL 2024] An easy-to-use knowledge editing framework for LLMs.
[Knowledge Editing] Must-read papers on knowledge editing for large language models.
Benchmarking large language models' complex reasoning ability with chain-of-thought prompting
A high-throughput and memory-efficient inference and serving engine for LLMs