Insights: THUDM/GLM-4
Overview
- 0 Merged pull requests
- 1 Open pull request
- 17 Closed issues
- 20 New issues
There hasn’t been any commit activity on THUDM/GLM-4 in the last week.
1 Pull request opened by 1 person
- delete useless token (#444, opened Aug 5, 2024)
17 Issues closed by 1 person
- LoRA fine-tuning on 4× RTX 3090 fails with OutOfMemoryError: CUDA out of memory. Tried to allocate 214.00 MiB. GPU (#375, closed Aug 6, 2024)
- BFCL evaluation question (#378, closed Aug 6, 2024)
- Continued fine-tuning of GLM-4V-9B fails with an out-of-memory error (#408, closed Aug 3, 2024)
- Suggestion: allow setting trust_remote_code to false (#397, closed Aug 3, 2024)
- Why is GLM-4V-9B inference no faster after INT4 quantization? (#391, closed Aug 3, 2024)
- What does the tool-calling fine-tuning for GLM-4-9B-Chat mentioned in finetune_demo mean, and which tools (for example, external APIs) can the model call? (#404, closed Aug 3, 2024)
- During multi-GPU document understanding, a large document concentrates usage on one GPU's memory, causing out-of-memory errors (#416, closed Aug 2, 2024)
- When will GLM-4V be supported by Ollama? (#425, closed Aug 2, 2024)
- How can GLM-4V-9B be run in INT4? (#435, closed Aug 2, 2024)
- Can FlashAttention2 be enabled for GLM-4V inference? (#420, closed Aug 2, 2024)
- When will GLM-4V-9B support deployment with vLLM? (#432, closed Aug 2, 2024)
- Fine-tuning evaluation error (#417, closed Aug 1, 2024)
- Why does GLM-4 have three stop_token_ids? (#398, closed Aug 1, 2024)
- Where can the official tool-calling code for GLM-4-9B-Chat be found? It is said that GLM-3's tool calling does not work with GLM-4, but the repository contains no tool-calling code either (#423, closed Aug 1, 2024)
- Support parallel function calls (#421, closed Aug 1, 2024)
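Several of the closed issues above concern how generation stops (notably #398, on GLM-4's three stop_token_ids). As a hedged sketch of why a chat model carries multiple stop ids, decoding halts when any one of them is sampled; the ids and function names below are illustrative placeholders, not GLM-4's actual vocabulary or code:

```python
# Minimal sketch: generation stops on ANY id in a set of stop tokens
# (cf. #398). The ids here are assumed placeholders for end-of-text,
# user-turn, and observation markers, not real GLM-4 vocabulary entries.

STOP_TOKEN_IDS = {151329, 151336, 151338}  # assumed stop ids

def generate(sampler, max_new_tokens=32):
    """Collect sampled ids until a stop id or the length limit is hit."""
    out = []
    for _ in range(max_new_tokens):
        token_id = sampler()
        if token_id in STOP_TOKEN_IDS:
            break  # any one of the stop ids ends the turn
        out.append(token_id)
    return out

# A toy deterministic "sampler" that emits three tokens, then a stop id.
stream = iter([11, 22, 33, 151336, 44])
print(generate(lambda: next(stream)))  # [11, 22, 33]
```

A set rather than a single id lets one template mark several different turn boundaries (end of text, start of user turn, tool observation) as equally valid places to stop.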
20 Issues opened by 19 people
- api_server fails with an error at startup; how can this be resolved? (#450, opened Aug 7, 2024)
- How can GLM-4 return multiple candidate answers at once, similar to the GPT parameter "n"? (#449, opened Aug 7, 2024)
- Problem using the function call feature with a local glm4-9B model (#448, opened Aug 7, 2024)
- GLM-4 and Dify Response Reception Issue (#447, opened Aug 7, 2024)
- Is glm4 an encoder-decoder or a decoder-only architecture? (#446, opened Aug 7, 2024)
- JSON responses come wrapped in a Markdown ```json code fence (#445, opened Aug 6, 2024)
- Why is the system message passed twice? (#442, opened Aug 5, 2024)
- Fine-tuning with self-constructed data fails with ValueError: 151337 is not in list (#441, opened Aug 5, 2024)
- How can the system prompt of the glm4 model be customized? (#440, opened Aug 5, 2024)
- Fine-tuning fails with ValueError: 151337 is not in list (#438, opened Aug 4, 2024)
- How can a model fine-tuned with P-Tuning be loaded? As shown in the attached screenshot, it reports that prompt learning is not supported, yet the same code (the loading code you provide) loads a LoRA fine-tuned model without issue (#437, opened Aug 4, 2024)
- Fine-tuning fails with IndexError: list index out of range (#436, opened Aug 3, 2024)
- GLM4 + LLaMA-Factory LoRA fine-tuning: q, k, and v datatypes do not match in the attention module (#434, opened Aug 2, 2024)
- [glm-4v-9b] Question about padding under eval (#433, opened Aug 1, 2024)
- Error when starting openai_api_server.py (#431, opened Aug 1, 2024)
- After fine-tuning glm4v-9b-chat, the "typewriter" problem becomes severe: every answer contains repeated output and garbled non-Chinese, non-English characters (#428, opened Aug 1, 2024)
- After LoRA fine-tuning, glm4-9b-chat mixes English into its Chinese answers, and adding a Chinese-only constraint to the prompt has no effect (#426, opened Aug 1, 2024)
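Two of the opened issues (#438 and #441) report the same `ValueError: 151337 is not in list` during fine-tuning. In Python, that exact message comes from `list.index` when the searched value is absent from the list. A hedged, hypothetical reproduction with a more diagnosable variant; the token id and helper name are illustrative, not GLM-4's real fine-tuning code:

```python
# Hypothetical reproduction of "ValueError: 151337 is not in list"
# (#438, #441): list.index raises exactly this message when the searched
# id never occurs in the tokenized sequence, e.g. when a special token
# the training script expects was not produced by the tokenizer.

ASSISTANT_TOKEN_ID = 151337  # assumed special-token id, for illustration

def find_special_token(input_ids, token_id):
    """Locate token_id in input_ids, failing with a clearer message."""
    if token_id not in input_ids:
        raise RuntimeError(
            f"special token {token_id} missing from input_ids; "
            "check the chat template and tokenizer configuration"
        )
    return input_ids.index(token_id)

# Bare list.index reproduces the opaque error from the issue titles:
try:
    [151331, 101, 102].index(ASSISTANT_TOKEN_ID)
except ValueError as exc:
    print(exc)  # 151337 is not in list

# The defensive helper succeeds when the token is actually present:
print(find_special_token([151331, ASSISTANT_TOKEN_ID, 101],
                         ASSISTANT_TOKEN_ID))  # 1
```

The point of the wrapper is purely diagnostic: the stock message names the missing value but not which sequence or configuration produced it.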
4 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- Padding details for ChatGLM4 batch inference (#424, commented on Aug 1, 2024, 0 new comments)
- Garbled output when running GLM-4-9b with Ollama (#333, commented on Aug 1, 2024, 0 new comments)
- GLM-4 deployed with Ollama outputs large numbers of "G" characters during information-extraction tasks (#323, commented on Aug 6, 2024, 0 new comments)
- feat(rust-candle): add Rust candle support (#403, commented on Aug 5, 2024, 0 new comments)
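The first unresolved conversation (#424) asks about padding details in batch inference. For decoder-only models, shorter sequences are commonly padded on the left so that every sequence ends at the generation position, with an attention mask zeroing out the pad slots. A minimal sketch of that convention, assuming a placeholder pad id rather than GLM-4's actual tokenizer config:

```python
# Minimal sketch of left-padding for batched decoder-only inference
# (cf. #424). Decoder-only models continue from the rightmost position,
# so shorter sequences are padded on the LEFT and masked out. The pad id
# and function name are illustrative assumptions, not GLM-4's config.

PAD_ID = 0  # assumed pad token id

def left_pad(batch):
    """Left-pad token-id lists to equal length; return ids and a 0/1 mask."""
    width = max(len(seq) for seq in batch)
    padded, mask = [], []
    for seq in batch:
        pad = [PAD_ID] * (width - len(seq))
        padded.append(pad + seq)                    # pads go on the left
        mask.append([0] * len(pad) + [1] * len(seq))  # 0 = ignore pad
    return padded, mask

ids, attn = left_pad([[5, 6, 7], [9]])
print(ids)   # [[5, 6, 7], [0, 0, 9]]
print(attn)  # [[1, 1, 1], [0, 0, 1]]
```

Right-padding instead would put pad tokens between the prompt and the newly generated tokens, which is the usual source of subtly degraded batched outputs.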