Issues: ollama/ollama
bug: ollama show bakllava:latest panic: interface conversion: interface {} is nil, not string
bug (Something isn't working)
#5289 opened Jun 26, 2024 by silentoplayz
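For context on the error class in the title: this kind of panic typically comes from an unchecked Go type assertion on a nil interface value, such as a missing key in JSON decoded into map[string]interface{}. A minimal sketch of the failure mode and the comma-ok guard that avoids it (illustrative only, not the actual ollama code; the "license" key is a made-up example):

```go
package main

import "fmt"

func main() {
	// A map decoded from JSON returns a nil interface value for absent keys.
	fields := map[string]interface{}{}

	// Unchecked assertion on the nil value panics with exactly:
	//   interface conversion: interface {} is nil, not string
	// name := fields["license"].(string)

	// Comma-ok form: safe when the value is nil or has another type.
	name, ok := fields["license"].(string)
	if !ok {
		name = "(unknown)"
	}
	fmt.Println(name, ok)
}
```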
[feature request] Easy clean-up of large ollama files
feature request (New feature or request)
#5286 opened Jun 25, 2024 by hamirmahal
update /show to work like command line show
feature request (New feature or request)
#5281 opened Jun 25, 2024 by iplayfast
Bug: Ollama keeps crashing & switching num-ctx context length back to default during usage
bug (Something isn't working)
#5280 opened Jun 25, 2024 by Daasin
Is it possible to start the llama server through a dynamic dependency library?
feature request (New feature or request)
#5278 opened Jun 25, 2024 by leeyiding
"How to utilize the Ollama local model in Windows 10 to generate the same API link as OpenAI, enabling other programs to replace the GPT-4 link? Currently, entering 'ollama serve' in CMD generates the 'http:https://localhost:11434' link, but replacing this link with the GPT-4 link in applications does not work. Please provide a command to generate a link that supports replacing GPT-4."
feature request
New feature or request
#5277
opened Jun 25, 2024 by
windkwbs
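Worth noting for this request: Ollama already serves an OpenAI-compatible API under /v1 on the same port, so tools expecting the OpenAI API shape can usually be pointed at http://localhost:11434/v1 directly. A minimal sketch, assuming a local server on the default port; the model name and message are example values:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Ollama exposes an OpenAI-compatible chat endpoint under /v1.
	body := []byte(`{
		"model": "llama3",
		"messages": [{"role": "user", "content": "Hello"}]
	}`)
	resp, err := http.Post("http://localhost:11434/v1/chat/completions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```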
Support for Vision Language Models that can process Videos
feature request (New feature or request)
#5276 opened Jun 25, 2024 by manishkumart
ROCm on WSL
amd (Issues relating to AMD GPUs and ROCm) · feature request (New feature or request) · wsl (Issues using WSL)
#5275 opened Jun 25, 2024 by justinkb
Seed and temperature=0 not generating deterministic output
bug (Something isn't working)
#5274 opened Jun 25, 2024 by d-kleine
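For reference, seed and temperature are passed per request through the options object of Ollama's /api/generate endpoint, and identical inputs with a fixed seed and temperature 0 are expected to reproduce. A minimal sketch of the kind of request the reporter would be comparing across runs (model name, prompt, and seed are example values):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Same model, prompt, seed, and temperature=0 should yield the same output.
	body := []byte(`{
		"model": "llama3",
		"prompt": "Why is the sky blue?",
		"stream": false,
		"options": {"seed": 42, "temperature": 0}
	}`)
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```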
2024-June-25 conda-forge ollama v0.1.17 is too old
feature request (New feature or request)
#5273 opened Jun 25, 2024 by polySugar
keep_alive and OLLAMA_KEEP_ALIVE not effective
bug (Something isn't working)
#5272 opened Jun 25, 2024 by peanutfs
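Background on the two knobs in the title: the model-unload timeout can be set server-wide via the OLLAMA_KEEP_ALIVE environment variable or per request via a keep_alive field in the API body, with the per-request value applying to that model afterward. A minimal sketch of the per-request form (model, prompt, and duration are example values):

```go
package main

import (
	"bytes"
	"net/http"
)

func main() {
	// keep_alive accepts a duration string such as "10m", 0 to unload the
	// model immediately after the response, or -1 to keep it loaded.
	body := []byte(`{
		"model": "llama3",
		"prompt": "ping",
		"stream": false,
		"keep_alive": "10m"
	}`)
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}
```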
Low VRAM Utilization on RTX 3090 When Models are Split Across Multiple CUDA Devices (separate ollama serve)
bug (Something isn't working)
#5271 opened Jun 25, 2024 by chrisoutwright
Interesting behavior when running in parallel
bug (Something isn't working)
#5269 opened Jun 25, 2024 by AI-Guru
On Windows 11, ollama_llama_server.exe runs in "Efficiency Mode", causing very slow responses
bug (Something isn't working) · windows
#5266 opened Jun 25, 2024 by fengbangyao
How to use the .mf model configuration file to register a custom vision-language model in Ollama
#5264 opened Jun 25, 2024 by LJY16114
Add a parameter to prohibit adding services to systemctl
feature request (New feature or request)
#5263 opened Jun 25, 2024 by wszgrcy
Support Multiple Types for OpenAI Completions Endpoint
feature request (New feature or request)
#5259 opened Jun 24, 2024 by royjhan