This repository has been archived by the owner on Jun 24, 2024. It is now read-only.
WizardCoder llama assert failure #417
Comments
Related: ggerganov/llama.cpp#2445 (comment)
Probably another issue with the currently used ggml version; a re-sync with the current main branch of ggml may be needed.
I actually did that and hit a failure on the same assert line. The linked comment said rolling the version back worked best. I'm wondering whether this assert assumes constant layer sizes, so a modification like the ones TheBloke makes might be triggering the failure?
Trying to run a variety of ggml models from TheBloke leads to this error:
GGML_ASSERT: llama-cpp/ggml.c:6270: ggml_nelements(a) == ne0*ne1*ne2
Wondering if anyone else is experiencing this, and what the issue might be?