
Add x86 information #1130

Merged
merged 5 commits into from
Mar 13, 2024

Conversation

Jin-hao80
Contributor

Add Intel HW and SW supporting information. Thanks.

@jklj077 jklj077 self-requested a review March 12, 2024 08:10
@@ -354,6 +354,9 @@ If you suffer from lack of GPU memory and you would like to run the model on mor

However, though this method is simple, the efficiency of the native pipeline parallelism is low. We advise you to use vLLM with FastChat and please read the section for deployment.

### x86 Platforms
When deploying on Core™/Xeon® Scalable Processors or with an Arc™ GPU, the [OpenVINO™ Toolkit](https://docs.openvino.ai/2023.3/gen_ai_guide.html) is recommended. You can install and run this [example notebook](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/254-llm-chatbot).
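The OpenVINO route recommended above can be sketched as follows. This is a minimal sketch, not part of the PR: it assumes the `optimum[openvino]` extra and `transformers` are installed, and uses Hugging Face Optimum-Intel's `OVModelForCausalLM`, which converts a checkpoint to OpenVINO IR on load.

```python
def build_ov_pipeline(model_id: str = "Qwen/Qwen-7B-Chat"):
    """Load a tokenizer and an OpenVINO-backed causal LM for CPU/GPU inference.

    Imports live inside the function so the sketch can be read (and imported)
    without optimum-intel installed; the download and IR conversion only
    happen when the function is actually called.
    """
    from optimum.intel import OVModelForCausalLM
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    # export=True converts the PyTorch checkpoint to OpenVINO IR on the fly.
    model = OVModelForCausalLM.from_pretrained(
        model_id, export=True, trust_remote_code=True
    )
    return tokenizer, model
```

Calling `build_ov_pipeline()` downloads and converts the checkpoint, which needs substantial disk space and RAM; after that, `model.generate(...)` works like a regular `transformers` model but runs on the OpenVINO runtime.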

Contributor


Would you mind adding instructions for users to contact you if they meet problems? For example: For related issues, you are welcome to file an issue at <...>.

Contributor Author


Added issue-tracking contact information; please check.

@@ -347,6 +347,10 @@ model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", device_map="cp

Although this method is simple, its efficiency is relatively low. We recommend using vLLM with FastChat; please read the deployment section.

### x86 Platforms
When deploying quantized models on Core™/Xeon® Scalable Processors or Arc™ GPUs, the [OpenVINO™ Toolkit](https://docs.openvino.ai/2023.3/gen_ai_guide.html) is recommended to make full use of the hardware and achieve better inference performance. You can install and run this [example notebook](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/254-llm-chatbot).

Contributor


Please also add a feedback channel.

Contributor Author


Added issue-tracking contact information; please check.

Add issue contact information
Add issue contact information.
Add issue support information for openvino
@jklj077 jklj077 merged commit 55de9f1 into QwenLM:main Mar 13, 2024