Keras pruning

Open In Colab Binder

A walkthrough of how to prune keras models using both weight-pruning and unit-pruning.

Overview

There are multiple ways of optimizing neural-network-based machine learning algorithms. One of these optimizations is the removal of connections between neurons and layers, which reduces the overall number of parameters and thus speeds up computation.

Pruning example

Networks generally look like the one on the left: every neuron in the layer below has a connection to every neuron in the layer above, which means we have to multiply a lot of floats together. Ideally, we’d connect each neuron to only a few others and save on some of those multiplications; this is called a “sparse” network.
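As a rough illustration of the saving, here is a toy NumPy count (all names and the threshold are our own, not from the repo) of the multiply-adds a dense kernel performs versus what a sparse kernel would need after most weights are zeroed:

```python
import numpy as np

# A hypothetical dense layer's weight matrix, then a crude prune that
# drops every weight with small magnitude.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))
W[np.abs(W) < 1.5] = 0.0

dense_madds = W.size                # multiply-adds a dense kernel does
sparse_madds = np.count_nonzero(W)  # what a sparse kernel would need
print(dense_madds, sparse_madds)    # the sparse count is a small fraction
```

Note that zeroing weights in a dense matrix only realizes this saving if the kernel (or hardware) actually skips the zeros; the notebook here measures accuracy under sparsity rather than wall-clock speedup.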

Given a layer of a neural network $\mathrm{ReLU}(xW)$, there are two well-known ways to prune it:

  • Weight pruning: set individual weights in the weight matrix to zero. This corresponds to deleting connections as in the figure above.
    • Here, to achieve sparsity of $k\%$ we rank the individual weights in weight matrix $W$ according to their magnitude (absolute value) $|w_{i,j}|$, and then set the smallest $k\%$ to zero.
  • Unit/Neuron pruning: set entire columns of the weight matrix to zero, in effect deleting the corresponding output neuron.
    • Here, to achieve sparsity of $k\%$ we rank the columns of the weight matrix according to their L2-norm $\|w\| = \sqrt{\sum_{i=1}^{N} w_i^2}$ and set the smallest $k\%$ to zero.
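The two schemes above can be sketched in a few lines of NumPy. This is a minimal sketch, not the notebook's exact code: `W` is a hypothetical `(in_dim, out_dim)` weight matrix and `k` is the fraction (0–1) to prune:

```python
import numpy as np

def weight_prune(W, k):
    """Zero out the smallest-magnitude fraction k of individual weights."""
    W = W.copy()
    threshold = np.percentile(np.abs(W), k * 100)
    W[np.abs(W) < threshold] = 0.0
    return W

def unit_prune(W, k):
    """Zero out the fraction k of columns (output neurons) with the
    smallest L2 norm."""
    W = W.copy()
    norms = np.linalg.norm(W, axis=0)        # one L2 norm per column
    n_prune = int(k * W.shape[1])
    prune_cols = np.argsort(norms)[:n_prune]  # smallest-norm columns
    W[:, prune_cols] = 0.0
    return W
```

Both functions leave the matrix shape unchanged, so the pruned weights can be loaded straight back into the same Keras layer.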

Naturally, as you increase the sparsity and delete more of the network, task performance will progressively degrade. This repo demonstrates both weight and unit pruning and compares their performance across the MNIST and FMNIST datasets.
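A sparsity sweep like the one plotted below can be sketched as follows. This is an illustrative helper of our own, assuming `model`, `x_test`, and `y_test` are an already-trained Keras model (compiled with an accuracy metric) and its test data, and `prune_fn` is a pruning function of the kind described above:

```python
def evaluate_at_sparsity(model, x_test, y_test, prune_fn, k):
    """Prune the model's kernels at sparsity k, measure test accuracy,
    then restore the original weights for the next sweep step."""
    original = [w.copy() for w in model.get_weights()]
    pruned = []
    for w in original:
        # Prune only the 2-D kernel matrices; leave biases untouched.
        pruned.append(prune_fn(w, k) if w.ndim == 2 else w)
    model.set_weights(pruned)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    model.set_weights(original)
    return acc

# for k in [0.0, 0.25, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99]:
#     print(k, evaluate_at_sparsity(model, x_test, y_test, prune_fn, k))
```

Restoring the original weights after each evaluation means the sweep measures each sparsity level independently rather than compounding prunes.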

MNIST performance

FMNIST performance

Requirements

Installation Instructions

There are two ways to run this notebook:

  1. Running in-browser (Google Colab)
  2. Running locally:
     • Clone this repository with the following command: git clone https://github.com/matthew-mcateer/Keras_pruning.git
     • Launch the notebook with: jupyter notebook Model_pruning_exploration.ipynb
     • Optional: before running the notebook, create a separate environment using conda and install the Requirements above in it. Activate your conda environment before starting Jupyter.

References/Resources
