training the model and pruning its lowest weights or nodes

Weight Pruning keras

Train a dense network and make it sparse by setting the smallest "n%" weights or nodes to zero

I trained a dense, fully connected neural network on the MNIST dataset.
The first function (prunew) takes a single weight matrix and the percentage of weights to prune.
From the size of the matrix we work out the number of weights to prune (N).
We make a copy of the matrix, scan the copy for the N smallest absolute values, and set them to zero.
Once the loop is complete, we return the copy.
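The steps above can be sketched in NumPy like this (a minimal sketch: the notebook scans in a loop, while this version does the same selection vectorized; `prunew` is the function name from the text, the signature is an assumption):

```python
import numpy as np

def prunew(weights, percent):
    """Zero out the smallest `percent`% of weights by absolute value."""
    w = weights.copy()                       # work on a copy of the matrix
    n = int(w.size * percent / 100)          # number of weights to prune (N)
    if n == 0:
        return w
    # indices of the N smallest absolute values in the flattened matrix
    idx = np.argsort(np.abs(w), axis=None)[:n]
    np.put(w, idx, 0.0)                      # set exactly those N entries to zero
    return w
```

Using `argsort` on the flattened absolute values prunes exactly N entries even when there are ties, and the original matrix is left untouched.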

The second function takes a single weight matrix and the percentage of nodes to prune.
We build a vector holding the L2 norm of each column of the matrix.
We sort a copy of this vector and find the value that sits at the given percentile; call it the n-th element.
We then turn the norm vector into a binary mask: 1 where a column's norm is greater than the n-th element, 0 otherwise.
Multiplying the weight matrix by this mask zeroes out whole columns, giving the node-pruned matrix.
We load the pruned weights into a copy of the model and test its accuracy.
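The column-norm masking above can be sketched like this (assumed vectorized form; the notebook's exact thresholding may differ):

```python
import numpy as np

def prunen(weights, percent):
    """Zero out the `percent`% of columns (nodes) with the smallest L2 norm."""
    w = weights.copy()
    norms = np.linalg.norm(w, axis=0)        # L2 norm of each column
    n = int(norms.size * percent / 100)      # number of nodes to prune
    mask = np.ones_like(norms)
    mask[np.argsort(norms)[:n]] = 0.0        # 0 for the n weakest columns
    return w * mask                          # broadcasts the mask across rows
```

Because each column of a dense layer's kernel feeds one output unit, zeroing a column removes that node's contribution entirely, which is what makes this node pruning rather than weight pruning.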

Why do this?

Accuracy shoots up after retraining!

I got 90% accuracy on the original model; after pruning and retraining, it rose to 99%.
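The prune-then-retrain step can be sketched in Keras as follows (the architecture-cloning approach, optimizer, and epoch count here are illustrative assumptions, not the notebook's exact settings):

```python
import tensorflow as tf

def retrain_pruned(model, pruned_weights, x_train, y_train, epochs=2):
    """Load pruned weights into a fresh copy of the model and retrain it."""
    clone = tf.keras.models.clone_model(model)   # same architecture, new instance
    clone.set_weights(pruned_weights)            # start from the pruned weights
    clone.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # Retraining lets the surviving weights compensate for the pruned ones,
    # which is where the accuracy recovery comes from.
    clone.fit(x_train, y_train, epochs=epochs, verbose=0)
    return clone
```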


I tried multiple pruning percentages to compare node-pruning accuracy, weight-pruning accuracy, and the accuracy after retraining for both.

Here's the graph:

https://user-images.githubusercontent.com/35966791/57179613-320acc00-6e9d-11e9-8725-3046aaa558b4.png
