
cuDNN R2 #2038

Merged: 4 commits, Mar 24, 2015

Conversation

shelhamer
Member

This is the master edition of #1731. See #1854 for the initial cuDNN R2 compatible branch by @slayton58.

Although cuDNN R2 is still experimental, it is in the release-candidate stage of development, so this PR readies Caffe for R2. R2 should not be made the default until it is final or until we add version switching for compatibility.

Note that R2 brings OS X support.

Compatibility

  • create general tensor, but set 4D tensor
  • set conv 2D descriptors
  • set pooling 2D descriptors
  • replace accumulation flags with alpha, beta scaling
  • let pooling have padding
  • pick fastest by default (according to cuDNN heuristics)

Next Up

  • keep R1 compatibility via #if CUDNN_VERSION < 20: since the library itself isn't backward-compatible, we're not going to take this on ourselves.
  • pick algorithm in Reshape() instead of Forward()
  • expose choice of the forward convolutional algorithm / preference
  • switch to the N-D interface (for N not necessarily equal to 4) now that #1486 has made Blobs N-D arrays
  • 3D convolution

myfavouritekk added a commit to myfavouritekk/caffe that referenced this pull request Mar 16, 2015
cuDNN R2

* shelhamer/cudnn-r2:
  cuDNN pooling can pad now
  replace cuDNN alphas and betas with coefficient values
  switch to cuDNN R2
NV-slayton and others added 4 commits March 24, 2015 14:04