This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Bumped minor version from 1.4.0 to 1.5.0 on master, updated License file #13478

Merged: 29 commits, Dec 5, 2018
Changes from 1 commit
Commits
2cb8faf
updated to v1.5.0
srochel Nov 30, 2018
e4af8e7
Bumped minor version from 1.4.0 to 1.5.0 on master
srochel Nov 30, 2018
49bbcbc
added Anirudh as maintainer for R package
srochel Dec 2, 2018
42c6db0
Updated license file for clojure, onnx-tensorrt, gtest, R-package
srochel Dec 2, 2018
408a55d
Get the correct include path in pip package (#13452)
apeforest Nov 30, 2018
7b67d8f
Use ~/.ccache as default ccache directory so is not cache is not eras…
larroy Nov 30, 2018
f9e661e
Skip flaky test https://github.com/apache/incubator-mxnet/issues/1344…
ChaiBapchya Nov 30, 2018
b902878
Rewrite dataloader with process pool, improves responsiveness and rel…
zhreshold Nov 30, 2018
819a04a
Fix errors in docstrings for subgraph op; use code directive (#13463)
aaronmarkham Nov 30, 2018
ddf6980
[MXNET-1158] JVM Memory Management Documentation (#13105)
nswamy Nov 30, 2018
4d342ef
Update row_sparse tutorial (#13414)
eric-haibin-lin Nov 30, 2018
fb92a66
Add resiliency to onnx export code (#13426)
safrooze Nov 30, 2018
0bb26ac
[MXNET-1185] Support large array in several operators (part 1) (#13418)
apeforest Dec 1, 2018
aed3079
[MXNET-1210 ] Gluon Audio - Example (#13325)
gaurav-gireesh Dec 1, 2018
c9ddcb8
ONNX export: Instance normalization, Shape (#12920)
vandanavk Dec 1, 2018
9e74dfa
Clarify dependency on OpenCV in CNN Visualization tutorial. (#13495)
vishaalkapoor Dec 1, 2018
049107c
clarify ops faq regarding docs strings (#13492)
aaronmarkham Dec 1, 2018
80e2a1d
Add graph_compact operator. (#13436)
zheng-da Dec 1, 2018
1fd7558
Deprecate Jenkinsfile (#13474)
marcoabreu Dec 1, 2018
d8029c8
update github location for sampled_block.py (#13508)
srochel Dec 2, 2018
96f5beb
#13453 [Clojure] - Add Spec Validations to the Optimizer namespace (#…
hellonico Dec 2, 2018
09b6607
ONNX export: Logical operators (#12852)
vandanavk Dec 3, 2018
dd9d80c
Fix cmake options parsing in dev_menu (#13458)
larroy Dec 3, 2018
b901d52
Revert "Manually track num_max_thread (#12380)" (#13501)
anirudh2290 Dec 3, 2018
c44bc85
Feature/mkldnn static 2 (#13503)
azai91 Dec 3, 2018
41f3f98
fix toctree Sphinx errors (#13489)
aaronmarkham Dec 4, 2018
e533304
Disabled flaky test test_gluon_data.test_recordimage_dataset_with_dat…
jlcontreras Dec 4, 2018
7f3a591
[MXNET-1234] Fix shape inference problems in Activation backward (#13…
larroy Dec 4, 2018
a8e635d
Merge branch 'master' into master
srochel Dec 5, 2018
Clarify dependency on OpenCV in CNN Visualization tutorial. (#13495)
vishaalkapoor authored and srochel committed Dec 4, 2018
commit 9e74dfa82ada90bd924b5985f12834091d5d4a4b
15 changes: 10 additions & 5 deletions docs/tutorials/vision/cnn_visualization.md
````diff
@@ -1,16 +1,21 @@
 # Visualizing Decisions of Convolutional Neural Networks
 
-Convolutional Neural Networks have made a lot of progress in Computer Vision. Their accuracy is as good as humans in some tasks. However it remains hard to explain the predictions of convolutional neural networks, as they lack the interpretability offered by other models, for example decision trees.
+Convolutional Neural Networks have made a lot of progress in Computer Vision. Their accuracy is as good as humans in some tasks. However, it remains difficult to explain the predictions of convolutional neural networks, as they lack the interpretability offered by other models such as decision trees.
 
-It is often helpful to be able to explain why a model made the prediction it made. For example when a model misclassifies an image, it is hard to say why without visualizing the network's decision.
+It is often helpful to be able to explain why a model made the prediction it made. For example, when a model misclassifies an image, without visualizing the network's decision, it is hard to say why the misclassification was made.
 
 <img align="right" src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/example/cnn_visualization/volcano_barn_spider.png" alt="Explaining the misclassification of volcano as spider" width=500px/>
 
-Visualizations also help build confidence about the predictions of a model. For example, even if a model correctly predicts birds as birds, we would want to confirm that the model bases its decision on the features of bird and not on the features of some other object that might occur together with birds in the dataset (like leaves).
+Visualizations can also build confidence about the predictions of a model. For example, even if a model correctly predicts birds as birds, we would want to confirm that the model bases its decision on the features of bird and not on the features of some other object that might occur together with birds in the dataset (like leaves).
 
-In this tutorial, we show how to visualize the predictions made by convolutional neural networks using [Gradient-weighted Class Activation Mapping](https://arxiv.org/abs/1610.02391). Unlike many other visualization methods, Grad-CAM can be used on a wide variety of CNN model families - CNNs with fully connected layers, CNNs used for structural outputs (e.g. captioning), CNNs used in tasks with multi-model input (e.g. VQA) or reinforcement learning without architectural changes or re-training.
+In this tutorial we show how to visualize the predictions made by convolutional neural networks using [Gradient-weighted Class Activation Mapping](https://arxiv.org/abs/1610.02391). Unlike many other visualization methods, Grad-CAM can be used on a wide variety of CNN model families - CNNs with fully connected layers, CNNs used for structural outputs (e.g. captioning), CNNs used in tasks with multi-model input (e.g. VQA) or reinforcement learning without architectural changes or re-training.
 
-In the rest of this notebook, we will explain how to visualize predictions made by [VGG-16](https://arxiv.org/abs/1409.1556). We begin by importing the required dependencies. `gradcam` module contains the implementation of visualization techniques used in this notebook.
+In the rest of this notebook, we will explain how to visualize predictions made by [VGG-16](https://arxiv.org/abs/1409.1556). We begin by importing the required dependencies.
+
+## Prerequisites
+* OpenCV is required by `gradcam` (below) and can be installed with pip using `pip install opencv-python`.
+
+* The `gradcam` module contains the implementation of visualization techniques used in this notebook. `gradcam` can be installed to a temporary directory by executing the following code block.
 
 ```python
 from __future__ import print_function
````
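Since the tutorial change above makes OpenCV an explicit prerequisite, a reader may want to verify the install before running the notebook. A minimal sketch of such a check, using only the standard library (the helper name `opencv_available` is illustrative and not part of the tutorial or the `gradcam` module):

```python
import importlib.util


def opencv_available():
    """Return True if the cv2 module (provided by opencv-python) is importable."""
    return importlib.util.find_spec("cv2") is not None


if opencv_available():
    print("OpenCV is installed")
else:
    print("OpenCV is missing; install it with: pip install opencv-python")
```

This avoids importing `cv2` directly, so the check itself never raises `ImportError` on machines where OpenCV is absent.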