Issues: ggerganov/ggml
Closed issues:
#875: Does ggml support porting to the RISC-V ISA?
by ZhengmingHu, closed Jun 29, 2024, updated Jun 29, 2024
#721: ggml : add optional CPU backend context, support reusing threads, async compute [enhancement]
by slaren, closed Jun 19, 2024, updated Jun 19, 2024
#812: Is there interest in ggml_upscale_to_shape supporting non-integer scaling factors?
by balisujohn, closed May 15, 2024, updated May 15, 2024
#771: Add Qualcomm mobile SoC native backend for GGML
by zhouwg, closed Apr 17, 2024, updated Apr 26, 2024
#802: License for Python-based GGUF parser with NumPy vectorization
by 99991, closed Apr 25, 2024, updated Apr 25, 2024
#718: Is there interest in implementations of something analogous to torch.Tensor.scatter_ and torch.gather?
by balisujohn, closed Apr 20, 2024, updated Apr 20, 2024
#800: Is there interest in a groupnorm operation being added?
by balisujohn, closed Apr 20, 2024, updated Apr 20, 2024
#788: Is there interest in a CUDA implementation of ggml_conv_1d?
by balisujohn, closed Apr 19, 2024, updated Apr 19, 2024
#739: Quantization function test fails with GGML_QKK_64
by winice-test, closed Mar 13, 2024, updated Mar 13, 2024
#760: Incorrect error handling in ggml_backend_graph_compute function in example/magika/main.cpp
by charloco, closed Mar 7, 2024, updated Mar 7, 2024
#758: Replit code completion example not working
by hassan404, closed Mar 4, 2024, updated Mar 4, 2024
#720: ggml : make ggml_fp16_t private [refactoring]
by ggerganov, closed Feb 22, 2024, updated Feb 22, 2024
#724: ggml : simplify the ggml_compute_forward_ calls [good first issue, refactoring]
by ggerganov, closed Feb 21, 2024, updated Feb 21, 2024
#578: ggml : improve memory allocation for weights and similar lists of tensors [refactoring]
by slaren, closed Jan 30, 2024, updated Jan 30, 2024
#637: Why add 512 in ggml_backend_alloc_buffer(backend_kv, memory_size + 512*2)?
by EveningLin, closed Jan 29, 2024, updated Jan 29, 2024
"array size is too large" on model load
#714
by iamlemec
was closed Jan 29, 2024
updated Jan 29, 2024