Sophomore undergraduate at Peking University 📚. Focus: Scalable Oversight / AI Safety / AI Alignment.
- Peking University
- Beijing
- https://cby-pku.github.io/
- https://scholar.google.com/citations?user=o23sDqkAAAAJ&hl=zh-CN&oi=ao
Pinned
- omnisafe (Public, forked from PKU-Alignment/omnisafe)
  OmniSafe is an infrastructural framework for accelerating SafeRL research.
  Python
- Safe-Policy-Optimization (Public, forked from PKU-Alignment/Safe-Policy-Optimization)
  A benchmark repository for safe reinforcement learning algorithms.
  Python
- safe-rlhf (Public, forked from PKU-Alignment/safe-rlhf)
  Safe-RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback.
  Python
- aligner (Public, forked from Aligner2024/aligner)
  Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction.
  Python