nd convolution and pooling with cuDNN #3983

Status: Open. Wants to merge 32 commits into base: master.

Changes shown below are from 1 commit (of 32). Commits:
• 0ad1284 CMake: link with ${HDF5_HL_LIBRARIES} (intelfx, Jul 25, 2016)
• c62e06b Fix search for Atlas on arch. (Jul 26, 2016)
• bc1a433 add cudnn interfaces for n-dimensional computation (Feb 18, 2016)
• e5c13a5 add support for nd convolution in cudnn (Feb 18, 2016)
• cc357bd change interface of pool to support n-dimensions (Feb 19, 2016)
• 12cb24f fix 2D pooling on CPU and GPU (Feb 19, 2016)
• c1b0f38 remove some calls of Blob::LegacyShape() to support 3D (May 23, 2016)
• 721553e fix xavier filler to use new blob shape accessors (Feb 19, 2016)
• b2f3848 fix tests for new pooling parameter interface (Apr 12, 2016)
• 7173035 add 3D cudnn convolution tests (Apr 13, 2016)
• c9de153 add 3D cudnn pooling tests (Apr 14, 2016)
• eb93d32 fix CUDNN_BAD_PARAM when using InnerProduct layer (Apr 28, 2016)
• 919b6d7 change interface for cudnn v5 (May 23, 2016)
• 9e9e9ba Merge pull request #4523 from delftrobotics/cmake-atlas (longjon, Aug 4, 2016)
• 6431477 Merge pull request #4516 from intelfx/BVLC-work (longjon, Aug 4, 2016)
• 61e0165 num in blob is deprecated (fyu, Aug 7, 2016)
• 375003a Merge pull request #4559 from fyu/loss_reshape (jeffdonahue, Aug 7, 2016)
• f86a099 add cudnn interfaces for n-dimensional computation (Feb 18, 2016)
• 4f63ea5 add support for nd convolution in cudnn (Feb 18, 2016)
• 5e1f04e change interface of pool to support n-dimensions (Feb 19, 2016)
• 2346c5e fix 2D pooling on CPU and GPU (Feb 19, 2016)
• 0dcb68a remove some calls of Blob::LegacyShape() to support 3D (May 23, 2016)
• fb0f9f5 fix xavier filler to use new blob shape accessors (Feb 19, 2016)
• b8ca687 fix tests for new pooling parameter interface (Apr 12, 2016)
• c88f8fa add 3D cudnn convolution tests (Apr 13, 2016)
• d0efc10 add 3D cudnn pooling tests (Apr 14, 2016)
• 45562a0 fix CUDNN_BAD_PARAM when using InnerProduct layer (Apr 28, 2016)
• b506327 change interface for cudnn v5 (May 23, 2016)
• fc39d7e remove some calls of Blob::LegacyShape() to support 3D (Sep 12, 2016)
• 857f47d fix msra filler to use new blob shape accessors (Sep 12, 2016)
• 334e76f fix positive_unitball filler to use new blob shape accessors (Sep 12, 2016)
• efda84c Merge branch 'nd-cudnn' of github.com:christianpayer/caffe into nd-cudnn (Nov 2, 2016)
Commit fc39d7e30cb4ebb19887416c504afeac9678e396: remove some calls of Blob::LegacyShape() to support 3D
Christian Payer committed Sep 12, 2016
12 changes: 4 additions & 8 deletions in src/caffe/layers/cudnn_sigmoid_layer.cpp

@@ -11,8 +11,8 @@ void CuDNNSigmoidLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,
   SigmoidLayer<Dtype>::LayerSetUp(bottom, top);
   // initialize cuDNN
   CUDNN_CHECK(cudnnCreate(&handle_));
-  cudnn::createTensor4dDesc<Dtype>(&bottom_desc_);
-  cudnn::createTensor4dDesc<Dtype>(&top_desc_);
+  cudnn::createTensorDesc<Dtype>(&bottom_desc_);
+  cudnn::createTensorDesc<Dtype>(&top_desc_);
   cudnn::createActivationDescriptor<Dtype>(&activ_desc_,
       CUDNN_ACTIVATION_SIGMOID);
   handles_setup_ = true;
@@ -22,12 +22,8 @@ template <typename Dtype>
 void CuDNNSigmoidLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,
     const vector<Blob<Dtype>*>& top) {
   SigmoidLayer<Dtype>::Reshape(bottom, top);
-  const int N = bottom[0]->num();
-  const int K = bottom[0]->channels();
-  const int H = bottom[0]->height();
-  const int W = bottom[0]->width();
-  cudnn::setTensor4dDesc<Dtype>(&bottom_desc_, N, K, H, W);
-  cudnn::setTensor4dDesc<Dtype>(&top_desc_, N, K, H, W);
+  cudnn::setTensorNdDesc<Dtype>(&bottom_desc_, bottom[0]->shape());
+  cudnn::setTensorNdDesc<Dtype>(&top_desc_, bottom[0]->shape());
 }

 template <typename Dtype>
12 changes: 4 additions & 8 deletions in src/caffe/layers/cudnn_tanh_layer.cpp

@@ -11,8 +11,8 @@ void CuDNNTanHLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,
   TanHLayer<Dtype>::LayerSetUp(bottom, top);
   // initialize cuDNN
   CUDNN_CHECK(cudnnCreate(&handle_));
-  cudnn::createTensor4dDesc<Dtype>(&bottom_desc_);
-  cudnn::createTensor4dDesc<Dtype>(&top_desc_);
+  cudnn::createTensorDesc<Dtype>(&bottom_desc_);
+  cudnn::createTensorDesc<Dtype>(&top_desc_);
   cudnn::createActivationDescriptor<Dtype>(&activ_desc_, CUDNN_ACTIVATION_TANH);
   handles_setup_ = true;
 }
@@ -21,12 +21,8 @@ template <typename Dtype>
 void CuDNNTanHLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,
     const vector<Blob<Dtype>*>& top) {
   TanHLayer<Dtype>::Reshape(bottom, top);
-  const int N = bottom[0]->num();
-  const int K = bottom[0]->channels();
-  const int H = bottom[0]->height();
-  const int W = bottom[0]->width();
-  cudnn::setTensor4dDesc<Dtype>(&bottom_desc_, N, K, H, W);
-  cudnn::setTensor4dDesc<Dtype>(&top_desc_, N, K, H, W);
+  cudnn::setTensorNdDesc<Dtype>(&bottom_desc_, bottom[0]->shape());
+  cudnn::setTensorNdDesc<Dtype>(&top_desc_, bottom[0]->shape());
 }

 template <typename Dtype>