LMOps/README.md at main · microsoft/LMOps #706

Labels

- AI-Agents: Autonomous AI agents using LLMs
- finetuning: Tools for finetuning LLMs, e.g. SFT or RLHF
- llm: Large Language Models
- Papers: Research papers
- Research: Personal research notes for a topic
Comments

irthomasthomas added the AI-Agents, finetuning, llm, Papers, and Research labels on Mar 6, 2024.
This issue was referenced on Mar 14, 2024; Apr 12, 2024; and Aug 20, 2024.
LMOps

LMOps is a research initiative on fundamental research and technology for building AI products with foundation models, focusing in particular on general techniques for enabling AI capabilities with LLMs and generative AI models.
Links
News
Prompt Intelligence

Advanced technologies for prompting language models.
- Promptist: reinforcement learning for automatic prompt optimization
- Structured Prompting: consume long-sequence prompts in an efficient way
  - [Paper] Structured Prompting: Scaling In-Context Learning to 1,000 Examples
  - Example use cases:
- X-Prompt: extensible prompts beyond natural language for descriptive instructions
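The core idea behind Structured Prompting above is to encode many demonstration groups independently and then let the test input attend over all of them at once, with attention rescaled so the groups do not swamp the test tokens. The sketch below is a toy illustration of that rescaling idea only; the function name and the exact scheme (dividing each group's exponentiated scores by the number of groups M before joint normalization) are simplifications for illustration, not the paper's precise formulation.

```python
import math

def rescaled_attention(q, group_keys, self_keys):
    """Toy sketch of rescaled attention over independently encoded groups.

    q          : query vector of the test token
    group_keys : list of M groups, each a list of key vectors encoded
                 independently of the other groups
    self_keys  : key vectors of the test input's own tokens

    Each group's exponentiated scores are downweighted by 1/M before a
    single joint normalization, so M demonstration groups together carry
    roughly the weight of one context rather than M contexts.
    (Simplified illustration; see the Structured Prompting paper for the
    exact formulation.)
    """
    m = len(group_keys)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    exp_groups = [[math.exp(dot(q, k)) for k in g] for g in group_keys]
    exp_self = [math.exp(dot(q, k)) for k in self_keys]
    # Joint normalizer: group mass is divided by M, test-token mass is not.
    z = sum(e for g in exp_groups for e in g) / m + sum(exp_self)
    w_groups = [[e / m / z for e in g] for g in exp_groups]
    w_self = [e / z for e in exp_self]
    return w_groups, w_self
```

Because the groups are encoded independently, prompt length scales with the longest group rather than the total number of demonstrations, which is what makes thousands of in-context examples feasible.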
LLMA: LLM Accelerators
Accelerate LLM Inference with References
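LLMA's "inference with references" can be sketched as copy-based drafting: when the tail of the current output matches a span in a reference document (e.g. a retrieved passage), the following reference tokens are proposed as a draft continuation and checked by the LLM in one parallel step. The helper below is a hypothetical illustration of only the span-proposal step; the function name, the n-gram trigger length, and the fixed span length are assumptions, and the real system verifies drafts against the model's own predictions.

```python
def propose_from_reference(generated, reference, ngram=2, span_len=4):
    """Sketch of LLMA-style copy proposal (hypothetical helper).

    generated : tokens produced so far
    reference : tokens of a reference document likely to overlap the output
    ngram     : trigger length matched against the output's tail
    span_len  : number of reference tokens to propose as a draft

    Returns the draft tokens to verify in parallel, or [] to fall back to
    ordinary one-token-at-a-time decoding.
    """
    if len(generated) < ngram:
        return []
    trigger = generated[-ngram:]
    for i in range(len(reference) - ngram):
        if reference[i:i + ngram] == trigger:
            # Copy the tokens that follow the matched span as a draft.
            return reference[i + ngram:i + ngram + span_len]
    return []  # no match: decode normally
```

When the model accepts most drafted tokens (common in retrieval-augmented or editing workloads, where outputs overlap references heavily), several tokens are committed per model call instead of one.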
Fundamental Understanding of LLMs
Understanding In-Context Learning
Hiring: aka.ms/GeneralAI
We are hiring at all levels (including FTE researchers and interns)! If you are interested in working with us on Foundation Models (aka large-scale pre-trained models) and AGI, NLP, MT, Speech, Document AI and Multimodal AI, please send your resume to [email protected].
License
This project is licensed under the license found in the LICENSE file in the root directory of this source tree.
Microsoft Open Source Code of Conduct
Contact Information
For help or issues using the pre-trained models, please submit a GitHub issue.
For other communications, please contact Furu Wei ([email protected]).