Lists (10)
amazing tools
Some powerful Linux tools; a Swiss Army knife for developers.
c++ projects
python learning
Tutorials for learning Python.
shell plugin
Plugins or themes for the shell.
ubuntu beautification
Desktop beautification, terminal themes, and more.
vim plugin
Powerful plugins for Vim.
projects I built
Some open-source projects I built myself.
machine learning and deep learning
Stars
Full reimplementation of TSP algorithms: Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Simulated Annealing (SA), Tabu Search (TS), Ant Colony Optimization (ACO), and Self-Organizing Map (SOM).
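For flavor, here is what the simulated-annealing variant of such a TSP solver boils down to. This is a minimal sketch under my own assumptions (distance matrix as a nested list, 2-opt segment-reversal moves, geometric cooling), not code from the repo:

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def solve_tsp_sa(dist, t_start=100.0, t_end=1e-3, alpha=0.995):
    """Simulated annealing for TSP with 2-opt style moves."""
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len = tour[:], cur_len
    t = t_start
    while t > t_end:
        i, j = sorted(random.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment
        cand_len = tour_length(cand, dist)
        # Always accept improvements; accept worse tours with Boltzmann probability
        if cand_len < cur_len or random.random() < math.exp((cur_len - cand_len) / t):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        t *= alpha  # geometric cooling schedule
    return best, best_len
```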
4 labs + 2 challenges + 4 docs
Policy Gradient is all you need! A step-by-step tutorial for well-known PG methods.
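The simplest of those PG methods, REINFORCE, reduces to a single loss function. A minimal PyTorch sketch, assuming the log-probabilities and rewards were collected over one episode (the policy network and environment loop are omitted):

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """REINFORCE: maximize E[sum_t log pi(a_t|s_t) * G_t].

    log_probs: tensor of log pi(a_t|s_t) collected over one episode
    rewards:   list of per-step rewards from the same episode
    """
    returns, g = [], 0.0
    for r in reversed(rewards):          # discounted return-to-go G_t
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    # Normalizing returns is a common variance-reduction trick
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    return -(log_probs * returns).sum()  # negate for gradient descent
```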
High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
(ICML 2024) The official code for EvoRainbow: Combining Improvements in Evolutionary Reinforcement Learning for Policy Search
Research Papers and Code Repository on the Integration of Evolutionary Algorithms and Reinforcement Learning
Rich is a Python library for rich text and beautiful formatting in the terminal.
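A small taste of Rich's documented Console and Table API (the row contents below are just placeholders):

```python
from rich.console import Console
from rich.table import Table

console = Console()
console.print("[bold green]Hello[/] [italic]World[/]!")  # inline markup styling

table = Table(title="Starred repos")
table.add_column("Name", style="cyan")
table.add_column("Topic")
table.add_row("rich", "terminal formatting")
table.add_row("cleanrl", "deep RL")
console.print(table)
```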
Design patterns implemented in Java
🎨 ML Visuals contains figures and templates which you can reuse and customize to improve your scientific writing.
📖A curated list of Awesome LLM Inference Paper with codes, TensorRT-LLM, vLLM, streaming-llm, AWQ, SmoothQuant, WINT8/4, Continuous Batching, FlashAttention, PagedAttention etc.
Real Transformer TeraFLOPS on various GPUs.
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware.
The official implementation of the EMNLP 2023 paper LLM-FP4
Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers".
Official implementation of the EMNLP 2023 paper "Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling".
Reorder-based post-training quantization for large language models.
A framework for few-shot evaluation of language models.
[ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
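SmoothQuant's central idea is to migrate activation outliers into the weights through a per-channel scale, so that both become easy to quantize. A minimal sketch of that scaling step, assuming per-channel activation maxima from a calibration pass (not the official implementation):

```python
import torch

def smooth_scales(act_absmax, weight, alpha=0.5):
    """Per-input-channel scales s_j = max|X_j|^a / max|W_j|^(1-a).

    act_absmax: [in_features] calibrated max |activation| per channel
    weight:     [out_features, in_features] linear-layer weight
    """
    w_absmax = weight.abs().amax(dim=0)  # max |weight| per input channel
    s = act_absmax.pow(alpha) / w_absmax.pow(1 - alpha)
    return s.clamp(min=1e-5)

def apply_smoothing(x, weight, s):
    """The product is unchanged: (x / s) @ (W * s).T == x @ W.T."""
    return x / s, weight * s  # scale activations down, weights up, channel-wise
```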
🆓 List of free ChatGPT mirror sites, continuously updated.
A simple network quantization demo written from scratch in PyTorch.
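The essence of such a from-scratch demo is uniform affine quantization: map floats to 8-bit integers via a scale and zero point, then map back. A generic PyTorch sketch, not this repo's code:

```python
import torch

def quantize(x, num_bits=8):
    """Asymmetric uniform quantization: x ≈ scale * (q - zero_point)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()).clamp(min=1e-8) / (qmax - qmin)
    zero_point = (qmin - x.min() / scale).round().clamp(qmin, qmax)
    q = (x / scale + zero_point).round().clamp(qmin, qmax)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q - zero_point)

x = torch.randn(4, 4)
q, s, zp = quantize(x)
print((x - dequantize(q, s, zp)).abs().max())  # worst-case quantization error
```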
[MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration