Will GGML support ROCm platform? #472

Open
Promesis opened this issue Aug 23, 2023 · 4 comments

@Promesis

I have 3 AMD Radeon GPUs:

  • RX 7900XTX
  • RX 7900XT
  • RX 6950XT

Although I use my RTX A6000 and Tesla A16 for training and inference, I would also like to use the compute power of my AMD cards.

@Green-Sky
Contributor

There is some effort being made here: ggerganov/llama.cpp#1087

@SlyEcho
Sponsor Contributor

SlyEcho commented Aug 24, 2023

I have to test again, but the RX 6950 XT should work (with ggerganov/llama.cpp#1087).

@Promesis
Author

@Green-Sky thank you.

btw, it took me 6 hours to read through all the comments 😄

@Promesis
Author

Let's leave this open as a feature request. Maybe the platform migration will be easier using HIPIFY, but I'm still waiting for ggml to support ROCm natively. See the sketch below for the kind of translation HIPIFY performs.
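
For context, here is a rough sketch of what a HIPIFY-style port looks like. This is an illustrative vector-add example, not ggml code; the point is only that the HIP runtime calls mirror their CUDA counterparts one-to-one, which is what makes mechanical translation feasible:

```cpp
// vector_add.hip.cpp -- illustrative only, not part of ggml.
// This is roughly what hipify would produce from an equivalent .cu file:
// cudaMalloc -> hipMalloc, cudaMemcpy -> hipMemcpy, cudaFree -> hipFree, etc.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <cstdlib>

__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    // Host buffers
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers (hipMalloc takes the same arguments as cudaMalloc)
    float *da, *db, *dc;
    hipMalloc((void **)&da, bytes);
    hipMalloc((void **)&db, bytes);
    hipMalloc((void **)&dc, bytes);
    hipMemcpy(da, ha, bytes, hipMemcpyHostToDevice);
    hipMemcpy(db, hb, bytes, hipMemcpyHostToDevice);

    // hipcc accepts the familiar <<<grid, block>>> launch syntax
    vector_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    hipMemcpy(hc, dc, bytes, hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    hipFree(da); hipFree(db); hipFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Roughly, `hipify-perl vector_add.cu > vector_add.hip.cpp` rewrites the CUDA API calls to their `hip*` equivalents, and the result is compiled with `hipcc`. In principle much of ggml's CUDA backend could be ported the same way, which is what the llama.cpp PR above explores.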
