Prompt injection is one of the major safety concerns of LLMs like ChatGPT。
This repository serves as a comprehensive resource on the study and practice of prompt-injection attacks, defenses, and interesting examples. It contains a collection of examples, case studies, and detailed notes aimed at researchers, students, and security professionals interested in this topic.
本仓库是关于提示词注入攻防及其有趣示例的收集资源。
In this repository, you'll find:
这部分介绍了提示词注入攻防及其有趣示例的基本概念和背景知识,也包含一些完整的示例。
- 提示词逆向工程的对应提示词 Prompt Reverse Engineering prompts
- 提示词防御的对应提示词 Prompt Defense prompts
- 提示词攻击的对应提示词 Prompt Attacks prompts
- 提示词防御的对应提示词 Prompt Defense prompts
Here are some related resources that can help you understand prompt-injection attacks, defenses, and interesting examples better:
这里有一些可以帮助你更好地理解提示词注入攻防及其有趣示例的相关资源:
We welcome everyone to contribute to this project. If you have any ideas, suggestions,
or have found errors, feel free to submit an issue or a pull request. For more details, please refer to our Contribution Guidelines.
This project is licensed under the MIT License. For more details, please refer to the LICENSE
file.
This project is intended for academic research and education. We are not responsible for any illegal use of these resources. Please abide by the laws and regulations of your country/region when using these resources.
这个项目的目的是为了学术研究和教育,我们不对任何非法使用这些资源的行为负责。在使用这些资源时,请遵守你所在国家/地区的法律法规。