
Add arguments for turning off fused kernels to work with native pytorch #906

Closed · mayank31398 opened this issue Apr 26, 2023 · 2 comments
Labels: feature request

@mayank31398

This is needed for running on non-NVIDIA architectures.

mayank31398 added the feature request label on Apr 26, 2023
@mayank31398 (Author)

I plan to work on this :)

@Quentin-Anthony (Member)

I believe we already support this with:

  "scaled-upper-triang-masked-softmax-fusion": false,
  "bias-gelu-fusion": false,

e.g. https://github.com/EleutherAI/gpt-neox/blob/main/configs/20B.yml#L31-L32
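For context, a minimal sketch of what disabling these fusions implies: with both flags set to false, the bias-GELU and scaled-masked-softmax steps fall back to plain PyTorch ops that dispatch through the standard backends, which is why this path also runs on non-NVIDIA hardware. The function names below are illustrative, not gpt-neox's actual internals:

    # Illustrative sketch (not gpt-neox's real code) of the unfused,
    # native-PyTorch fallback paths when the two fusions are disabled.
    import torch
    import torch.nn.functional as F

    def bias_gelu_unfused(x: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
        # "bias-gelu-fusion": false -> bias add and GELU run as two plain ops
        # instead of one fused CUDA kernel.
        return F.gelu(x + bias)

    def scaled_masked_softmax_unfused(
        scores: torch.Tensor, mask: torch.Tensor, scale: float
    ) -> torch.Tensor:
        # "scaled-upper-triang-masked-softmax-fusion": false -> scaling,
        # causal masking, and softmax run as separate native ops.
        scores = scores * scale
        scores = scores.masked_fill(mask, torch.finfo(scores.dtype).min)
        return torch.softmax(scores, dim=-1)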
