Model_compression

Study of the three major model compression techniques: knowledge distillation, quantization, and pruning. Brief illustrative sketches of each technique are given below.

This was part of a class project for the Deep Learning course in the graduate AI program at Oregon State University. Please refer to the report PDF in the repository for more details on the experiments.
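Knowledge distillation trains a small student network to match the softened output distribution of a larger teacher. As a rough sketch of the standard setup (a generic PyTorch example, not this repository's actual training code; the temperature, loss weighting, and toy tensors are hypothetical):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    # Soft targets: KL divergence between the softened teacher and student
    # distributions, scaled by T^2 as in Hinton et al. (2015).
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits for a 10-class problem (hypothetical shapes).
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
distillation_loss(student_logits, teacher_logits, labels).backward()
```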


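Quantization reduces the numeric precision of weights (and sometimes activations) to shrink the model and speed up inference. A minimal sketch using PyTorch's built-in post-training dynamic quantization, assuming a toy fully connected model (the architecture here is hypothetical, and dynamic quantization is only one option alongside static quantization and quantization-aware training):

```python
import torch
import torch.nn as nn

# A toy float32 model standing in for the trained network.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

# Swap Linear layers for int8 dynamically quantized equivalents;
# weights are stored in int8 and activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 784)
print(quantized(x).shape)  # torch.Size([1, 10]), now via int8 weight kernels
```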

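Pruning removes low-importance weights to induce sparsity. A minimal sketch using torch.nn.utils.prune for unstructured L1 (magnitude) pruning, with a hypothetical layer and sparsity level:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 10)

# Zero the 50% of weights with the smallest absolute value (L1 magnitude).
prune.l1_unstructured(layer, name="weight", amount=0.5)

# Pruning is applied through a mask and a forward pre-hook; remove()
# makes the sparsity permanent by rewriting the weight tensor itself.
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.0%}")  # ~50%
```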