
sync : llama.cpp #742

Merged 9 commits into master on Feb 21, 2024

Conversation

ggerganov (Owner)

No description provided.

slaren and others added 8 commits on February 21, 2024 at 16:19

* cuda : ignore peer access already enabled errors
  * fix hip
* Update ggml_sycl_op_mul_mat_vec_q
  * Apply suggestions from code review
  * revert suggestion on macro
  * fix bug
  * Add quant type GGML_TYPE_IQ1_S to unsupported
  * fix format
  Co-authored-by: Abhilash Majumder <[email protected]>
* [SYCL] conext add name
  * name should start with SYCL*
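The first commit in the list refers to a common CUDA multi-GPU pattern: `cudaDeviceEnablePeerAccess` returns `cudaErrorPeerAccessAlreadyEnabled` if peer access between two devices was enabled earlier, and that return value can safely be treated as success rather than a hard failure. A minimal sketch of the pattern (not the actual llama.cpp code; the function name `enable_peer_access` is illustrative):

```cuda
#include <cuda_runtime.h>

// Enable peer access from the current device to `peer_device`,
// treating "already enabled" as success rather than an error.
static bool enable_peer_access(int peer_device) {
    int device = 0;
    cudaGetDevice(&device);

    int can_access = 0;
    cudaDeviceCanAccessPeer(&can_access, device, peer_device);
    if (!can_access) {
        return false;  // peer access not supported between these devices
    }

    cudaError_t err = cudaDeviceEnablePeerAccess(peer_device, 0);
    if (err == cudaErrorPeerAccessAlreadyEnabled) {
        // Not a real failure: access was enabled by an earlier call.
        // Clear the sticky error state so later cudaGetLastError()
        // checks don't report this as a failure.
        cudaGetLastError();
        return true;
    }
    return err == cudaSuccess;
}
```

Without clearing the error, a later unrelated `cudaGetLastError()` check would surface the stale "already enabled" status and could be misreported as a failure, which is the kind of spurious error the commit appears to silence.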
ggerganov changed the title from "sync : llam.cpp" to "sync : llama.cpp" on Feb 21, 2024
ggerganov merged commit 3080551 into master on Feb 21, 2024 (9 of 10 checks passed)
ggerganov deleted the sync branch on February 21, 2024 at 14:45
6 participants