ggml_status introduction #750
Conversation
I think this is a good change. Some comments:
+1 for |
- ggml_compute_exit_code -> ggml_status
- changed ggml_status from a bit-field type to simple codes
- ggml_status to string cast
Force-pushed from dd30ae8 to a6699b1
Thank you for your review, I have made the changes.

I also have an important question. The |
My opinion is that llama.cpp needs its own |
@slaren, I completely agree with you! |
Co-authored-by: slaren <[email protected]>
I would like to add the ability to explicitly specify the execution result for the `ggml_backend_graph_plan_compute` and `ggml_backend_graph_compute` functions. This will allow us to determine whether the execution was aborted by `abort_callback` or for some other reason (and allows us to distinguish between the reasons). I used the additional `ggml_compute_result_t` type because, when executing `ggml_*_compute` multi-threaded, each thread may return its own error/warning; in order not to lose any of them, I decided to combine the exit codes using OR. Also, I didn't remove `GGML_EXIT_SUCCESS` and `GGML_EXIT_ABORTED` because they may be used by third-party libraries that use ggml.