🤗
Research Intern @ Qualcomm; Ph.D. Student @aiha-lab; working on general efficiency in LLM inference
- Hanyang University
- Seoul
- https://marsjacobs.github.io/
Pinned
- aiha-lab/TSLD: [NeurIPS 2023] Token-Scaled Logit Distillation for Ternary Weight Generative Language Models
- kd-qat-large-enc: [EMNLP 2022 main] Code for "Understanding and Improving Knowledge Distillation for Quantization-Aware-Training of Large Transformer Encoders" (Jupyter Notebook)
- mbv1_brevitas: a Brevitas-based quantization-aware training framework for MobileNetV1