
Mark cublas version/handle as non-differentiable #2368

Merged (1 commit, May 9, 2024)

Conversation

@wsmoses (Contributor) commented May 8, 2024

I'm working on getting a minimal cublas AD path working, but these changes are nevertheless prerequisites that will already reduce the error log size for a lot of users (e.g. EnzymeAD/Enzyme.jl#1392).
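For context, "marking as non-differentiable" in Enzyme.jl is done by defining an `EnzymeRules.inactive` method, which tells Enzyme a function contributes nothing to derivatives so the AD pass can skip it instead of erroring. A minimal sketch of that mechanism, assuming the targets are `CUBLAS.version` and `CUBLAS.handle` as the PR title suggests (the actual diff may differ):

```julia
using Enzyme
using CUDA

# Declaring a function "inactive" tells Enzyme it has no derivative
# contribution, so calls to it are left alone by the AD transformation.
# The exact signatures below are illustrative, inferred from the PR title.
@inline Enzyme.EnzymeRules.inactive(::typeof(CUDA.CUBLAS.version), args...) = nothing
@inline Enzyme.EnzymeRules.inactive(::typeof(CUDA.CUBLAS.handle), args...) = nothing
```

This is the lightest-weight kind of custom rule: no shadow values are produced, and Enzyme simply treats the call as constant.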

@vchuravy (Member) left a comment

Why does cudaconvert need a forward pass?

@wsmoses (Contributor, Author) commented May 9, 2024

It needed one in the other PR, to ensure the unified memory setup is handled correctly.
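A custom forward-mode rule in Enzyme.jl looks roughly like the following. This is a generic sketch of the mechanism under discussion (a rule for `cudaconvert` that converts the shadow alongside the primal), not the actual code from the other PR, and the rule signature assumes the config-less `EnzymeRules.forward` form:

```julia
using Enzyme
using CUDA
import Enzyme.EnzymeRules: forward

# Hypothetical sketch: a forward rule that applies cudaconvert to both the
# primal value and its shadow, so the device-side (e.g. unified memory)
# representation stays consistent for the derivative computation.
function forward(func::Const{typeof(cudaconvert)}, ::Type{<:Duplicated}, x::Duplicated)
    return Duplicated(cudaconvert(x.val), cudaconvert(x.dval))
end
```

Without such a rule, Enzyme would have to differentiate through the conversion machinery itself, which is where the shadow memory setup can go wrong.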

@wsmoses (Contributor, Author) commented May 9, 2024

@vchuravy can you merge?

@vchuravy (Member) commented May 9, 2024

> can you merge?

Yes, but there is a question how we test this code.

@vchuravy vchuravy merged commit e9928ca into JuliaGPU:master May 9, 2024
1 check passed
@wsmoses (Contributor, Author) commented May 9, 2024

Yeah, I have a test (just doing `CUBLAS.Dot(x, y)`), but it presently fails because `cudaMemset` is not found as a function, so we need to figure out the correct symbol materialization story. It's still helpful for reducing errors before then, though.
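The kind of test being described might look like the sketch below. The exact `CUBLAS.dot` call (the low-level wrapper takes the length first) and the gradient checks are assumptions on my part, and per the comment above this currently fails at the `cudaMemset` symbol lookup:

```julia
using CUDA, Enzyme, Test

# Hypothetical test sketch: differentiate a GPU dot product in reverse mode
# and check the gradients (d/dx dot(x, y) = y and d/dy dot(x, y) = x).
x, y = CUDA.rand(Float32, 16), CUDA.rand(Float32, 16)
dx, dy = CUDA.zeros(Float32, 16), CUDA.zeros(Float32, 16)

f(a, b) = CUBLAS.dot(length(a), a, b)

Enzyme.autodiff(Reverse, f, Active, Duplicated(x, dx), Duplicated(y, dy))

@test dx ≈ y
@test dy ≈ x
```

A test at this level exercises both the inactive rules from this PR and the symbol resolution path, which is presumably why it is the natural next step.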

@wsmoses wsmoses deleted the ecublas branch May 9, 2024 21:50