Visualizing what CNN learns

based on CS231n "Visualizing and Understanding"

It is natural to wonder what the kernels in a CNN have learned during training. Visualizing and understanding CNNs plays an important role in validating them.

First Layer: Visualize Filters

Visualize the kernels from the first layer directly as small images. We can also visualize filters at higher layers, but they are much less interpretable.
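A minimal NumPy sketch of this idea. It assumes the first-layer weights have already been extracted as an array of shape `(num_filters, 3, h, w)` (e.g. AlexNet's conv1 is 96 × 3 × 11 × 11); since weight values are arbitrary floats, each filter is rescaled to [0, 1] independently before tiling into one grid image:

```python
import numpy as np

def filter_grid(weights, pad=1):
    """Tile first-layer conv filters into one RGB image for display.

    weights: array of shape (num_filters, 3, h, w).
    Returns an array of shape (rows*(h+pad)+pad, cols*(w+pad)+pad, 3) in [0, 1].
    """
    n, c, h, w = weights.shape
    cols = int(np.ceil(np.sqrt(n)))
    rows = int(np.ceil(n / cols))
    grid = np.ones((rows * (h + pad) + pad, cols * (w + pad) + pad, c))
    for i in range(n):
        f = weights[i]
        # rescale each filter independently to [0, 1] so its colors are visible
        f = (f - f.min()) / (f.max() - f.min() + 1e-8)
        r, col = divmod(i, cols)
        y, x = pad + r * (h + pad), pad + col * (w + pad)
        grid[y:y + h, x:x + w] = f.transpose(1, 2, 0)  # CHW -> HWC
    return grid
```

The returned grid can be shown with `plt.imshow(filter_grid(w))`, where `w` is the conv1 weight tensor converted to NumPy.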

Last Layer

The last layer here means the layer immediately before the classifier, so its output is actually a feature vector.

Nearest Neighbors

(from ImageNet Classification with Deep Convolutional Neural Networks)

The idea is to show the top-k training images whose feature vectors have the smallest Euclidean distance to the feature vector of the test image.
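A sketch of the lookup step, assuming the feature vectors have already been extracted and stacked as rows of a NumPy array:

```python
import numpy as np

def nearest_neighbors(test_feat, train_feats, k=5):
    """Indices of the k training images whose feature vectors are closest
    (in Euclidean distance) to the test image's feature vector.

    test_feat: shape (d,); train_feats: shape (num_train, d).
    """
    dists = np.linalg.norm(train_feats - test_feat, axis=1)
    return np.argsort(dists)[:k]
```

Neighbors found this way are often semantically similar even when their raw pixels differ, which is the point of the visualization.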

Dimensionality Reduction

(from Visualizing Data using t-SNE)

Visualize the space of feature vectors by reducing their dimensionality from high dimensions to 2, using a simple algorithm such as PCA or a more complex one such as t-SNE.
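The PCA variant can be sketched in a few lines via SVD of the centered feature matrix (for t-SNE one would instead reach for a library such as scikit-learn's `TSNE`):

```python
import numpy as np

def pca_2d(feats):
    """Project feature vectors (num_samples, d) onto their top two
    principal components, giving a (num_samples, 2) array to scatter-plot."""
    centered = feats - feats.mean(axis=0)
    # rows of vt are the principal directions, ordered by singular value
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T
```

Each 2-D point is then plotted, typically colored by class label, to see whether the feature space clusters semantically.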

Visualizing Activations

Visualize each feature map at a certain layer as a grayscale image.
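A small sketch of the normalization step, assuming a layer's activations for one image have been captured (e.g. via a framework hook) as a `(C, H, W)` array; each channel is rescaled to [0, 1] so it can be shown as a grayscale image:

```python
import numpy as np

def activations_to_gray(fmap):
    """Normalize each channel of a (C, H, W) feature map to [0, 1]
    independently, ready for grayscale display with imshow(cmap='gray')."""
    lo = fmap.min(axis=(1, 2), keepdims=True)
    hi = fmap.max(axis=(1, 2), keepdims=True)
    return (fmap - lo) / (hi - lo + 1e-8)
```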

Maximally Activating Patches

(from Striving for Simplicity: The All Convolutional Net)

  1. Pick a layer and a channel; e.g. conv5 is $128 \times 13 \times 13$, pick channel $17$ of the $128$
  2. Run many images through the network, recording the values of the chosen channel
  3. Visualize the image patches that correspond to the maximal activations
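The ranking step above can be sketched as follows, assuming the chosen channel's activations for many images have been recorded into one `(N, H, W)` array; each returned `(image, y, x)` location would then be mapped back to an input patch through the layer's receptive field:

```python
import numpy as np

def top_activations(acts, k=5):
    """Given one channel's activations for N images, shape (N, H, W),
    return the k (image, y, x) locations with the largest values.

    Each location corresponds to an input patch via the receptive field of
    the chosen layer (large and overlapping for a deep layer like conv5).
    """
    flat_idx = np.argsort(acts.ravel())[::-1][:k]
    return [np.unravel_index(i, acts.shape) for i in flat_idx]
```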

Which pixels matter: Saliency vs Occlusion

Mask part of the image before feeding it to the CNN, and check how much the predicted probabilities change.
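The occlusion side of this can be sketched as a sliding gray square; `predict` below is a stand-in for the CNN's predicted probability of the true class, and positions where the probability drops sharply mark pixels the network relies on:

```python
import numpy as np

def occlusion_map(image, predict, patch=8, stride=8, fill=0.5):
    """Slide a gray square over the image and record the predicted
    probability at each masked position; low values mark important pixels.

    image: (H, W, 3) array; predict: any function mapping an image to a
    scalar probability for the true class (here a placeholder for the CNN).
    """
    h, w = image.shape[:2]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = fill  # gray occluder
            heat[i, j] = predict(masked)
    return heat
```

The resulting heatmap is usually displayed alongside the original image, as in Zeiler and Fergus's occlusion experiments.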

T.B.C.
