This repository has been archived by the owner on Jun 24, 2024. It is now read-only.

WizardCoder llama assert failure #417

Open
jacohend opened this issue Aug 28, 2023 · 3 comments

Comments

@jacohend

Trying to run a variety of ggml models from TheBloke leads to this error:
GGML_ASSERT: llama-cpp/ggml.c:6270: ggml_nelements(a) == ne0*ne1*ne2

Is anyone else experiencing this, and does anyone know what the issue might be?
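
For context, the failing GGML_ASSERT is an element-count check: a tensor can only be reshaped to (ne0, ne1, ne2) if it already holds exactly ne0*ne1*ne2 elements. Here is a minimal standalone sketch of that invariant (the toy_* names and shapes are made up for illustration; this is not the actual ggml implementation):

```c
/* Toy sketch of the invariant behind
 * GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2); not real ggml code. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct toy_tensor {
    int64_t ne[3]; /* extent of each of the three dimensions */
};

static int64_t toy_nelements(const struct toy_tensor *t) {
    return t->ne[0] * t->ne[1] * t->ne[2];
}

static void toy_reshape_3d(const struct toy_tensor *a,
                           int64_t ne0, int64_t ne1, int64_t ne2) {
    /* A reshape may only reinterpret the layout, never change the
     * total number of elements; this is the check that fails. */
    assert(toy_nelements(a) == ne0 * ne1 * ne2);
    printf("reshape to (%lld, %lld, %lld) is valid\n",
           (long long)ne0, (long long)ne1, (long long)ne2);
}

int main(void) {
    struct toy_tensor a = { .ne = { 4096, 32000, 1 } };
    toy_reshape_3d(&a, 4096, 32000, 1); /* same element count: passes   */
    toy_reshape_3d(&a, 4096, 32001, 1); /* different count: assert fires */
    return 0;
}
```

So when the assert fires, a tensor's actual size disagrees with the shape the loading or graph-building code expects.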

@jacohend
Author

Related: ggerganov/llama.cpp#2445 (comment)

@jacohend jacohend changed the title WizardCoder llama assert WizardCoder llama assert failure Aug 28, 2023
@LLukas22
Contributor

This is probably another issue with the ggml version currently in use; a re-sync with the current main branch of llama.cpp is likely needed.

@jacohend
Author

jacohend commented Aug 28, 2023

I actually did that and found a failure on the same assert line. The linked comment said rolling the version back worked best.

I'm wondering whether this assert assumes constant layer sizes, in which case modifications like the ones TheBloke makes could be causing the failure.
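
To make that concrete: the check compares nothing but element counts, so a converted file whose tensor shape differs from what the loader expects (for example, a token-embedding matrix with one extra vocabulary row added during fine-tuning) would trip it. A back-of-the-envelope sketch, using hypothetical dimensions rather than the real model's:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Hypothetical dimensions: what the loader expects vs. what the
     * converted file actually stores (one extra vocab row). */
    int64_t expected = 4096LL * 32000 * 1; /* ne0*ne1*ne2 in the assert */
    int64_t stored   = 4096LL * 32001 * 1; /* ggml_nelements(a)         */

    if (stored != expected) {
        /* This is exactly the condition the GGML_ASSERT rejects. */
        printf("mismatch: %lld stored vs %lld expected elements\n",
               (long long)stored, (long long)expected);
    }
    return 0;
}
```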
