Pruning in Neural Network

Weight Pruning and Unit Pruning in TensorFlow

(Figure: a neural network layer, illustrating pruned connections and pruned neurons.)

Given a layer of a neural network, there are two well-known ways to prune it:

  • Weight pruning: set individual weights in the weight matrix to zero. This corresponds to deleting connections, as in the figure above. Here, to achieve a sparsity of k%, we rank the individual weights in the weight matrix W by their magnitude (absolute value) and set the smallest k% to zero.

  • Unit/Neuron pruning: set entire columns of the weight matrix to zero, in effect deleting the corresponding output neurons. Here, to achieve a sparsity of k%, we rank the columns of the weight matrix by their L2 norm and zero out the smallest k% (see the sketch after this list).
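
As a rough illustration of both schemes, here is a minimal NumPy sketch. The function names `weight_prune` and `unit_prune` are this example's own, not the repository's API, and the repository itself implements the idea in TensorFlow:

```python
import numpy as np

def weight_prune(W, k):
    """Zero out the k% of individual weights with the smallest magnitude."""
    W = W.copy()
    n_prune = int(W.size * k / 100)
    if n_prune > 0:
        # Indices of the n_prune smallest-magnitude weights in the flattened matrix.
        flat_idx = np.argsort(np.abs(W), axis=None)[:n_prune]
        W[np.unravel_index(flat_idx, W.shape)] = 0.0
    return W

def unit_prune(W, k):
    """Zero out the k% of columns (output neurons) with the smallest L2 norm."""
    W = W.copy()
    n_prune = int(W.shape[1] * k / 100)
    if n_prune > 0:
        col_norms = np.linalg.norm(W, axis=0)  # L2 norm of each column
        W[:, np.argsort(col_norms)[:n_prune]] = 0.0
    return W

# Example: prune a random 4x6 layer to 50% sparsity both ways.
W = np.random.randn(4, 6)
print(weight_prune(W, 50))  # half of the individual weights become zero
print(unit_prune(W, 50))    # half of the columns become entirely zero
```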

Try it on Colab Notebook
