Stars
OpenChat: Advancing Open-source Language Models with Imperfect Data
[ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning".
Official implementation for the paper *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving*
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
Scalable toolkit for efficient model alignment
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
A monitoring and profiling tool powered by eBPF that observes network traffic and diagnoses CPU and network performance.
APM, Application Performance Monitoring System
ByteDance APM team recruitment prep community: come chat about interview experience at major companies, how to write a résumé, technology…
An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & Mixtral)
This is the official code for the paper "CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning" (NeurIPS 2022).
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
CodeUp: A Multilingual Code Generation Llama2 Model with Parameter-Efficient Instruction-Tuning on a Single RTX 3090
A collection of GDB tips; "100" here may simply mean "many".
Exploit Development and Reverse Engineering with GDB Made Easy
Python debugger (debugpy) extension for VS Code.
GEF (GDB Enhanced Features) - a modern experience for GDB with advanced debugging capabilities for exploit devs & reverse engineers on Linux
AutoDev - 🧙 the AI-powered coding wizard. Put the much-loved AutoDev AI assistant into your VS Code and get things done quickly.
NJU EMUlator, a full-system x86/mips32/riscv32/riscv64 emulator for teaching
Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"
An interpreter for RASP as described in the ICML 2021 paper "Thinking Like Transformers"