Issues: foldl/chatllm.cpp

Issues list

calculate required scratch memory
#6 opened Mar 3, 2024 by foldl updated Mar 3, 2024
CPU inferencing a lot slower than llama.cpp
#10 opened Apr 2, 2024 by netspym updated Apr 4, 2024
Support GGUF [gguf]
#16 opened May 13, 2024 by trufae updated May 14, 2024
How to use GPU? [gpu]
#13 opened May 8, 2024 by li904775857 updated May 14, 2024
Hello, is GLM-4V supported?
#23 opened Jun 12, 2024 by yhl41001 updated Jun 13, 2024
bge-reranker is extremely slow
#24 opened Jun 20, 2024 by RobinQu updated Jun 24, 2024
baichuan13 does not work well [gpu]
#27 opened Jun 26, 2024 by cagev updated Jun 26, 2024