MingSun-Tse
- Westlake University
- Hangzhou, China
- https://huanwang.tech
- @huanwangx
- in/huanwangx
Stars
Don't Judge by the Look: Towards Motion Coherent Video Representation (ICLR 2024)
[CVPR24] OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising
Official implementation of "Towards Distribution-Agnostic Generalized Category Discovery" (NeurIPS 2023)
[TMLR 2024] Efficient Large Language Models: A Survey
[NeurIPS 2023] Latent Graph Inference with Limited Supervision
[ICDM 2023] Momentum is All You Need for Data-Driven Adaptive Optimization
[ICCV 2023] Official PyTorch code for Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution
A curated list for Efficient Large Language Models
Unified Controllable Visual Generation Model
Tools to Design or Visualize Architecture of Neural Network
[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support LLaMA, Llama-2, BLOOM, Vicuna, Baichuan, etc.
[NeurIPS 2023] Structural Pruning for Diffusion Models
Code to reproduce the experiments of the ICLR 2023 paper "How I Learned to Stop Worrying and Love Retraining"
Repo for BenTsao (本草) [original name: HuaTuo (华驼)]: instruction-tuning large language models with Chinese medical knowledge.
Awesome papers on 3D anomaly detection.
[NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"
Page for the CVPR 2023 Tutorial - Efficient Neural Networks: From Algorithm Design to Practical Mobile Deployments
Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot".
[CVPR 2023] Towards Any Structural Pruning; LLMs / SAM / Diffusion / Transformers / YOLOv8 / CNNs
[Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning
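Several of the starred repositories above concern neural network pruning (e.g., LLM-Pruner, SparseGPT, Torch-Pruning, structural pruning for diffusion models). As a minimal, repo-agnostic sketch of the core idea, and not the method of any listed project, the NumPy snippet below performs global magnitude pruning: the smallest-magnitude weights are zeroed until a target sparsity is reached. All names here are illustrative.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly `sparsity`
    (a fraction in [0, 1]) of the weights become zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)                 # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # keep only larger weights
    return weights * mask

# Illustrative usage: prune half of a random 4x4 weight matrix.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
pruned = magnitude_prune(w, 0.5)
```

The surviving weights are left untouched; in the pruning literature this step is typically followed by fine-tuning (or, as in the retraining paper above, a full retraining schedule) to recover accuracy.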