Commit

fix cudnn conv legacy bug (#96)
yjxiong committed Aug 4, 2016
1 parent 42b9db8 commit c0d200b
Showing 1 changed file with 0 additions and 2 deletions.

src/caffe/layers/cudnn_conv_layer.cu
```diff
@@ -66,12 +66,10 @@ void CuDNNConvolutionLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,
   if (this->param_propagate_down_[0]) {
     weight = this->blobs_[0]->gpu_data();
     weight_diff = this->blobs_[0]->mutable_gpu_diff();
-    caffe_gpu_set(this->blobs_[0]->count(), Dtype(0), weight_diff);
   }
   Dtype* bias_diff = NULL;
   if (this->bias_term_ && this->param_propagate_down_[1]) {
     bias_diff = this->blobs_[1]->mutable_gpu_diff();
-    caffe_gpu_set(this->blobs_[1]->count(), Dtype(0), bias_diff);
   }
   for (int i = 0; i < top.size(); ++i) {
     const Dtype* top_diff = top[i]->gpu_diff();
```
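The commit page does not include the emailed reply below, but the deleted lines line up with Caffe's gradient-accumulation convention: the solver clears parameter diffs once per iteration (Net::ClearParamDiffs() in BVLC Caffe), and each layer's Backward is expected to add into the diff buffers rather than overwrite them. Zeroing weight_diff and bias_diff inside Backward_gpu therefore discards gradients accumulated by earlier passes, for example when iter_size > 1. The sketch below is plain, self-contained C++ with illustrative stand-in names (backward_pass, legacy_zeroing), not Caffe code; it only demonstrates this reading of the fix:

```cpp
#include <cstdio>
#include <vector>

// Stand-in for one backward pass of a layer. Caffe layers accumulate
// gradients into the parameter diff buffer rather than overwriting it.
void backward_pass(std::vector<float>& diff, float grad, bool legacy_zeroing) {
  if (legacy_zeroing) {
    // Equivalent of the removed lines, e.g.
    //   caffe_gpu_set(this->blobs_[0]->count(), Dtype(0), weight_diff);
    // This wipes out whatever earlier passes accumulated.
    for (float& d : diff) d = 0.f;
  }
  for (float& d : diff) d += grad;  // accumulate this pass's gradient
}

int main() {
  const int iter_size = 2;  // solver accumulates over 2 backward passes

  for (bool legacy : {true, false}) {
    std::vector<float> weight_diff(1, 0.f);  // solver zeroes diffs once, here
    for (int i = 0; i < iter_size; ++i) {
      backward_pass(weight_diff, 1.f, legacy);
    }
    std::printf("%s in-layer zeroing: diff = %g (expected %d)\n",
                legacy ? "with" : "without", weight_diff[0], iter_size);
  }
  // Output:
  //   with in-layer zeroing: diff = 1 (expected 2)  <- first pass lost
  //   without in-layer zeroing: diff = 2 (expected 2)
  return 0;
}
```

Under this reading, the fix simply brings the legacy cuDNN convolution path in line with the accumulation behavior of the other layers.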

2 comments on commit c0d200b

@antran89

@yjxiong Can you explain why the line is a bug? Does BVLC Caffe fix this bug? We are using many different versions of Caffe, and I wonder whether this bug affects our version.

@yjxiong (Owner, Author) commented on c0d200b Oct 25, 2016 via email
