# model-acceleration

Here are 20 public repositories matching this topic...

A list of papers, docs, and code about model quantization. This repo aims to provide resources for model quantization research and is continuously being improved. Pull requests adding works (papers, repositories) that the list has missed are welcome. A minimal sketch of what weight quantization looks like follows this entry.

  • Updated Nov 1, 2024
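
For readers new to the topic, here is a minimal, generic sketch of what weight quantization means: mapping float32 weights to int8 with a per-tensor scale. This is an illustration only, not code from any repository listed here; the function names are hypothetical.

```python
import torch

def quantize_int8(w: torch.Tensor):
    # Symmetric per-tensor quantization: w ≈ scale * q, with q in [-127, 127].
    scale = w.abs().max() / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize_int8(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Recover an approximation of the original float weights.
    return q.to(torch.float32) * scale

w = torch.randn(256, 256)
q, scale = quantize_int8(w)
print((w - dequantize_int8(q, scale)).abs().max())  # worst-case rounding error
```

Real quantization schemes add per-channel scales, zero points, and calibration, but the storage/compute trade-off shown here is the core idea the listed papers build on.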

A list of high-quality, recent AutoML works and lightweight models, covering 1) Neural Architecture Search, 2) Lightweight Structures, 3) Model Compression, Quantization, and Acceleration, 4) Hyperparameter Optimization, and 5) Automated Feature Engineering.

  • Updated Jun 19, 2021

Learn the ins and outs of efficiently serving Large Language Models (LLMs). Dive into optimization techniques, including KV caching and Low-Rank Adapters (LoRA), and gain hands-on experience with Predibase’s LoRAX inference server. A minimal LoRA sketch follows this entry.

  • Updated Apr 12, 2024
  • Jupyter Notebook
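
Since this entry mentions LoRA, here is a minimal sketch of the low-rank adapter idea in PyTorch: a frozen pretrained linear layer plus a small trainable low-rank update. This is a generic illustration of the standard LoRA formulation, not code from the course or from LoRAX; the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pretrained weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base projection plus the low-rank correction; only A and B receive gradients.
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

# Usage: wrap an existing layer and fine-tune only the adapter parameters.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(2, 768))
```

Because only the two small matrices are trained, many adapters can share one frozen base model, which is the property serving systems like LoRAX exploit.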
