SparseLLM/ReluLLaMA-7B · PowerInfer - faster CPU inference #174

Open
irthomasthomas opened this issue Dec 29, 2023 · 0 comments
Labels: llm · llm-inference-engines · ml-inference · sparse-computation

ReluLLaMA-7B

Model creator: Meta
Original model: Llama 2 7B
Fine-tuned by: THUNLP and ModelBest

Background

Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs). Among various approaches, the mixture-of-experts (MoE) method, exemplified by models like Mixtral, has shown particular promise. MoE works by selectively activating different model components (experts), thus optimizing resource usage.

Recent studies (Zhang et al., 2021; Liu et al., 2023; Mirzadeh et al., 2023) reveal that LLMs inherently exhibit properties conducive to sparse computation when employing the ReLU activation function. This insight opens up new avenues for model efficiency, akin to MoE's selective activation: by dynamically choosing which model parameters participate in each forward pass, we can substantially boost efficiency.
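To make the idea concrete, here is a minimal sketch (not the PowerInfer implementation) of how ReLU-induced sparsity can be exploited in a feed-forward layer: neurons whose ReLU output is zero contribute nothing to the down-projection, so the corresponding columns can be skipped. The function names and the simple two-matrix FFN are illustrative assumptions; real systems additionally predict the active neurons to avoid the dense up-projection.

```python
import torch

def dense_ffn_forward(x, w_up, w_down):
    # Reference dense computation: every neuron is evaluated.
    return w_down @ torch.relu(w_up @ x)

def sparse_ffn_forward(x, w_up, w_down):
    # x: (hidden,), w_up: (intermediate, hidden), w_down: (hidden, intermediate)
    pre_act = w_up @ x                       # up-projection
    act = torch.relu(pre_act)                # ReLU zeroes out most neurons in trained ReLU LLMs
    active = act.nonzero(as_tuple=True)[0]   # indices of the few active neurons
    # Only the down-projection columns of active neurons are needed.
    return w_down[:, active] @ act[active]

# Sanity check: the sparse path matches the dense result.
x = torch.randn(16)
w_up, w_down = torch.randn(64, 16), torch.randn(16, 64)
assert torch.allclose(sparse_ffn_forward(x, w_up, w_down),
                      dense_ffn_forward(x, w_up, w_down), atol=1e-5)
```

With random weights roughly half the neurons are inactive; measured activation sparsity in ReLU-activated LLMs is typically much higher, which is what makes skipping the inactive rows worthwhile.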

However, the widespread adoption of ReLU-based models in the LLM field remains limited. Following the transformation methods of existing works (Zhang et al., 2021; Mirzadeh et al., 2023), we convert existing models to ReLU-activated versions through fine-tuning. We hope these open-source ReLU LLMs can promote the development of sparse LLMs.
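A hedged sketch of the conversion idea: swap the SiLU activation in a LLaMA-style gated MLP for ReLU, then fine-tune so the model recovers quality while gaining sparse activations. The module and attribute names follow the Hugging Face `LlamaMLP` layout and are assumptions about the mechanics, not the exact fine-tuning recipe used for ReluLLaMA.

```python
import torch.nn as nn
from transformers import AutoModelForCausalLM

# Load the base model (gated repository; any LLaMA-architecture checkpoint works the same way).
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Replace the SiLU activation in every MLP block with ReLU.
for layer in model.model.layers:
    layer.mlp.act_fn = nn.ReLU()

# ... fine-tune the modified model on a general corpus to restore accuracy ...
```

After fine-tuning, the resulting model behaves like the original but produces ReLU activations that are mostly zero, which inference engines such as PowerInfer can exploit.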
