Skip to content

This repo focus on how to deal with prompt injection problem faced by LLMs

License

Notifications You must be signed in to change notification settings

rohilrg/CatchPromptInjection

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

13 Commits
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

CatchPromptInjection

This repository is dedicated to addressing the prevalent challenge of prompt injection encountered by Language Model Models (LLMs). By providing insights, strategies, and solutions, we aim to equip developers and practitioners with effective approaches to mitigate and overcome prompt injection issues in the LLM domain. Explore the resources and knowledge shared here to enhance your understanding and effectively tackle prompt injection problems within LLMs.

Project in action

To see the project in action here:

Project in action

Project Report

The project report that gives background and compares different methods to circumvent this problem and explaination of proposed solution is stored here: Project Report

Structure

./CatchPromptInjection/
├── src
│   ├── config
│   │   └── config.json
│   ├── tests
│   │   └── test_detector.py
│   └── detector.py
├── Dockerfile
├── LICENSE
├── Makefile
├── README.md
├── app.py
└── requirement.txt

Please remember to the config.json file to control parameters.

Pre-installation steps:

  • Make sure you have python 3+ installed on your computer.
  • Make sure you have pip3 package installed on your computer.
  • Make sure you clone this package to your computer.

Install requirements:

In your terminal/command-line go to the project folder and execute the command below:

pip install -r requirement.txt

Checking before building

Please run these make commands to make sure the formatting, linting and tests are working.

make format
make lint
make test

Local Running

To run the app locally just run the following command in bash in the root directory of the folder:

streamlit run app.py

Your app is available at: https://localhost:8501

Running with docker

To run the project with docker simply run the following command to build:

docker build -t <IMAGE_NAME_TO_GIVE> ./

Once the image is built please run the container using this command:

docker run -p 8501:8501 <IMAGE_NAME_TO_GIVE>

Once started the app is available at: https://localhost:8501

About

This repo focus on how to deal with prompt injection problem faced by LLMs

Topics

Resources

License

Stars

Watchers

Forks

Releases

No releases published

Packages

No packages published