[Feature Request] Support PyTorch GPT-2 Models #76
Comments
This would be great to have, since it seems straightforward given the existing Cerebras model support. I'm not deeply familiar with the core of this library, but the minimal approach to LLMs is a core philosophy of mine. That said, I recently trained a GPT-2 model myself, but I haven't managed to get it working with ggml yet. Having quantized GPT-2 models packaged into a ready-to-use binary would make them practical for real-time inference on CPU. So please, someone who knows how to do this, help!

Edit:
> It would be very useful if there was a script to convert a PyTorch GPT-2 model to ggml.
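A minimal sketch of what such a conversion script might look like, loosely modeled on the tensor layout used by ggml's example convert scripts (magic word, then per-tensor headers followed by raw data). The file name, the dummy `state_dict`, and the omitted hparams/vocab sections are illustrative assumptions, not the actual ggml converter:

```python
import struct
import numpy as np

def write_ggml_tensor(fout, name: str, tensor: np.ndarray):
    # Serialize one tensor in a ggml-style layout (assumption, based on
    # the example convert scripts): header of (n_dims, name length,
    # ftype), then the dims, the name bytes, and the raw data.
    data = tensor.astype(np.float32)  # ftype 0 = fp32, i.e. no quantization
    name_bytes = name.encode("utf-8")
    fout.write(struct.pack("iii", data.ndim, len(name_bytes), 0))
    for dim in reversed(data.shape):  # dims written innermost-first
        fout.write(struct.pack("i", dim))
    fout.write(name_bytes)
    data.tofile(fout)

# Hypothetical stand-in for a real checkpoint; in practice this would
# come from torch.load("pytorch_model.bin") or a Hugging Face state_dict.
state_dict = {"wte.weight": np.zeros((50257, 768), dtype=np.float32)}

with open("ggml-model-f32.bin", "wb") as fout:
    fout.write(struct.pack("i", 0x67676D6C))  # magic: "ggml" in hex
    # ... the hparams (n_vocab, n_ctx, n_embd, n_head, n_layer, ftype)
    # and the tokenizer vocab would be written here ...
    for name, tensor in state_dict.items():
        write_ggml_tensor(fout, name, tensor)
```

A real converter would also map PyTorch tensor names to the names the ggml loader expects and transpose the `Conv1D`-style weights GPT-2 uses; this sketch only shows the serialization shape.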