Will GGML support ROCm platform? #472
Comments
there is some effort being made here: ggerganov/llama.cpp#1087
I have to test again, but the RX 6950 XT should work (for ggerganov/llama.cpp#1087)
@Green-Sky thank you. btw it took me 6 hours to read through all the comments 😄
But let's leave it as a feature request. Maybe the platform migration will be easier using HIPIFY, but I'm still waiting for ggml to support ROCm natively.
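For context on the HIPIFY suggestion: HIPIFY mechanically rewrites CUDA runtime calls (cudaMalloc, cudaMemcpy, ...) into their HIP equivalents (hipMalloc, hipMemcpy, ...), while the kernels themselves and the `<<<...>>>` launch syntax stay essentially unchanged, which is why a CUDA backend is a plausible candidate for a semi-automatic port to ROCm. A rough sketch of what hipified code looks like, using a toy kernel that is not from ggml:

```cpp
// Hypothetical example of hipify-perl output (toy kernel, not ggml code).
// The CUDA original used cudaMalloc/cudaMemcpy/cudaFree; HIPIFY rewrote
// those calls to hip*; the kernel body and launch syntax are untouched.
// Build (assumed): hipcc -O3 scale_hip.cpp -o scale_hip
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1024;
    float h[n];
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d = nullptr;
    hipMalloc(&d, n * sizeof(float));                           // was cudaMalloc
    hipMemcpy(d, h, n * sizeof(float), hipMemcpyHostToDevice);  // was cudaMemcpy
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);                // launch syntax unchanged
    hipMemcpy(h, d, n * sizeof(float), hipMemcpyDeviceToHost);
    hipFree(d);                                                 // was cudaFree

    printf("x[0] = %f\n", h[0]);  // expect 2.0
    return 0;
}
```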
I have 3 AMD Radeon GPUs:
Although I use my RTX A6000 and Tesla A16 for training and inference, I also want to put my AMD cards' computing power to use.