This repository has been archived by the owner on Jun 24, 2024. It is now read-only.
Directly load `pth`/PyTorch tensor model files #21
Labels: issue:enhancement (New feature or request)
Comments
philpax changed the title from "Port llama.cpp utilities to Rust" to "Directly load `pth`/PyTorch tensor model files" on Mar 18, 2023
This is not complete yet. We've merged in the start of a converter, but more work is required to convert the weights. Luckily, @KerfuffleV2 has developed a Pickle parser that can handle PyTorch tensors: https://github.com/KerfuffleV2/repugnant-pickle We should be able to use this to convert tensors to GGML format. In future, we can directly load tensors (I may separate that out into a new issue), but our focus is on loading tensors so that they can be quantised by #84 and used by
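The idea behind a lenient ("repugnant") Pickle parser can be sketched as follows. This is a toy illustration in Rust, not repugnant-pickle's actual API: instead of fully evaluating the pickle virtual machine, it scans the byte stream for the two string opcodes and collects candidate tensor names. The opcode values come from the pickle protocol; the example tensor name is made up.

```rust
// Toy sketch of lenient pickle scanning (NOT repugnant-pickle's API):
// scan the raw byte stream for string opcodes and collect the strings,
// which in a PyTorch checkpoint include the tensor names.
fn scan_pickle_strings(data: &[u8]) -> Vec<String> {
    let mut out = Vec::new();
    let mut i = 0;
    while i < data.len() {
        match data[i] {
            // BINUNICODE (protocol 1+): 'X', u32 LE length, UTF-8 bytes
            b'X' if i + 5 <= data.len() => {
                let len = u32::from_le_bytes([
                    data[i + 1], data[i + 2], data[i + 3], data[i + 4],
                ]) as usize;
                let start = i + 5;
                if start + len <= data.len() {
                    if let Ok(s) = std::str::from_utf8(&data[start..start + len]) {
                        out.push(s.to_string());
                        i = start + len;
                        continue;
                    }
                }
                i += 1;
            }
            // SHORT_BINUNICODE (protocol 4): 0x8c, u8 length, UTF-8 bytes
            0x8c if i + 2 <= data.len() => {
                let len = data[i + 1] as usize;
                let start = i + 2;
                if start + len <= data.len() {
                    if let Ok(s) = std::str::from_utf8(&data[start..start + len]) {
                        out.push(s.to_string());
                        i = start + len;
                        continue;
                    }
                }
                i += 1;
            }
            _ => i += 1,
        }
    }
    out
}

fn main() {
    // Hand-built fragment containing one BINUNICODE record for a
    // hypothetical tensor name.
    let name = b"layers.0.attention.wq.weight";
    let mut pickle = vec![b'X'];
    pickle.extend_from_slice(&(name.len() as u32).to_le_bytes());
    pickle.extend_from_slice(name);
    assert_eq!(
        scan_pickle_strings(&pickle),
        vec!["layers.0.attention.wq.weight".to_string()]
    );
}
```

The appeal of this approach is that it never has to execute untrusted pickle code or model the full opcode set; the trade-off is that it can produce false positives, so a real parser also tracks enough VM state to tell tensor names apart from other strings.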
At present, `llama.cpp` contains a Python script that converts `pth` to `ggml` format. It would be nice to build it into the CLI directly, so that you can load the original model files. The original Python script could also be converted to Rust, so that we have a fully-Rust method of converting `pth` to `ggml` models.