

πŸ—‘οΈ HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal πŸ›‘οΈ

A fast, affordable, scalable, and open-source framework for evaluating automated red teaming methods and LLM attacks/defenses

πŸ—‘οΈ What is HarmBench πŸ›‘οΈ

Automated red teaming holds substantial promise for uncovering and mitigating the risks associated with the malicious use of large language models (LLMs), yet the field lacks a standardized evaluation framework to rigorously assess these methods. To address this issue, we introduce HarmBench, a standardized evaluation framework for automated red teaming. We identify key considerations previously unaccounted for in red teaming evaluations and systematically design HarmBench to meet these criteria. Using HarmBench, we conduct a large-scale comparison of 18 red teaming methods and 33 target LLMs and defenses, yielding novel insights. We also introduce a highly efficient adversarial training method that greatly enhances LLM robustness across a wide range of attacks, demonstrating how HarmBench enables co-development of attacks and defenses.

🌐 Overview 🌐

HarmBench Evaluation Pipeline

There are two primary ways to use HarmBench: (1) evaluating red teaming methods against a set of LLMs, and (2) evaluating LLMs against a set of red teaming methods. These use cases are both supported by the same evaluation pipeline, illustrated above. The process for running this evaluation pipeline is described below in the Quick Start section.

HarmBench has out-of-the-box support for transformers-compatible LLMs, numerous closed-source APIs, and several multimodal models. To add new red teaming methods and models that aren't compatible with transformers, it may help to learn more about how the codebase is structured. We provide a set of docs and (coming soon) tutorials to help with more advanced usage.

β˜• Quick Start β˜•

βš™οΈ Installation

git clone https://github.com/centerforaisafety/HarmBench.git
cd HarmBench
pip install -r requirements.txt

πŸ—‘οΈ Step 1 - Generate Test Cases

The first step in the evaluation pipeline is generating test cases. The start_idx and end_idx parameters allow parallelizing test case generation across behaviors. The run_id parameter enables parallelizing test case generation in other ways; for example, the EnsembleGCG method can be parallelized by setting start_idx=0, end_idx=401, and run_id="${n}" for n ranging from 1 to N (see the sketch after the example below).

# Run the step1 (generate test cases) script with Slurm
./scripts/step1.sh

# OR run directly
./scripts/generate_test_cases.sh $method $experiment $behaviors_path $save_dir $start_idx $end_idx $run_id
Example

# Make sure your environment has enough GPUs for the chosen model and attack

base_dir="results"
method="GCG"
experiment="vicuna_13b_v1_5"
behaviors_path="./data/harmbench_text_test_behaviors.csv" # or "./data/harmbench_text_validation_behaviors.csv"
save_dir=${base_dir}/${method}/${experiment}/test_cases
start_idx=0
end_idx=51 # first 50 behaviors
run_id=""

./scripts/generate_test_cases.sh $method $experiment $behaviors_path $save_dir $start_idx $end_idx $run_id
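
As a minimal sketch of the run_id parallelization described above (not part of the repository): the experiment name and N below are placeholders, and each run needs its own GPUs, so on a cluster you would typically submit each iteration as a separate Slurm job rather than backgrounding them all on one machine.

# Sketch: parallelizing EnsembleGCG across run_ids 1..N (illustrative names only)
method="EnsembleGCG"
experiment="llama2_7b" # hypothetical experiment name; use one defined under configs/method_configs
behaviors_path="./data/harmbench_text_test_behaviors.csv"
save_dir="results/${method}/${experiment}/test_cases"
N=5

for n in $(seq 1 $N); do
    ./scripts/generate_test_cases.sh $method $experiment $behaviors_path $save_dir 0 401 "$n" &
done
wait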

πŸ—‘οΈ Step 1.5 - Merge Test Cases

For many methods, generate_test_cases.py is parallelized and does not directly save to the final test_cases.json file. Rather, these methods save results in {top_level_results_dir}/{method}/{experiment}/test_cases/test_cases_individual_behaviors/{behavior_id}/test_cases.json. Run the following command to merge all of the test cases from different behaviors into a single test_cases.json file.

# With Slurm
./scripts/step1.5.sh

# Or run directly
./scripts/merge_test_cases.sh $method $save_dir
Example

# Your save_dir from Step 1
base_dir="results"
method="GCG"
experiment="vicuna_13b_v1_5"
save_dir=$base_dir/$method/$experiment/test_cases

./scripts/merge_test_cases.sh $method $save_dir
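
As a quick sanity check after merging (assuming the default layout described above), you can confirm that the merged file exists and count the per-behavior folders that were combined:

# Check the merged output (paths follow the example above)
ls $save_dir/test_cases.json
ls $save_dir/test_cases_individual_behaviors | wc -l  # number of per-behavior folders that were merged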

πŸ—‘οΈ Step 2 - Generate Completions

After generating test cases (attack prompts) specified by a method and experiment in Step 1, we generate completions for a target_model.

# Run with Slurm
./scripts/step2.sh

# Or run directly; make sure your environment has enough GPUs to load the model
./scripts/generate_completions.sh $model $behaviors_path $test_cases_file $completions_output_file $max_new_tokens $token
Example

base_dir="results"
model="vicuna_13b_v1_5" # from configs/model_configs/models.yaml
method="GCG" # the method used to generate test cases in Step 1
files_dir=$base_dir/$method/$model

test_cases_file=$files_dir/test_cases/test_cases.json
completions_output_file=$files_dir/completions/${model}.json
behaviors_path="./data/harmbench_text_test_behaviors.csv" # or "./data/harmbench_text_validation_behaviors.csv"
max_new_tokens=512
# The optional token argument is a Hugging Face access token for gated models,
# or an API key for closed-source models (GPT, Claude, etc.)
# token=<your_api_token>

./scripts/generate_completions.sh $model $behaviors_path $test_cases_file $completions_output_file $max_new_tokens $token
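
The same command can target a closed-source API model. In the sketch below the model key is illustrative and must match an entry in configs/model_configs/models.yaml; the API key is passed as the final argument.

# Sketch: generating completions for an API model (illustrative model key)
model="gpt-3.5-turbo-1106" # hypothetical key; use one defined in configs/model_configs/models.yaml
completions_output_file=$files_dir/completions/${model}.json
token="<your_api_token>"
./scripts/generate_completions.sh $model $behaviors_path $test_cases_file $completions_output_file $max_new_tokens $token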

πŸ—‘οΈ Step 3 - Evaluate Completions

After generating completions for a target_model in Step 2, we use our classifier to label whether each completion is an example of its corresponding behavior. Our classifier model is also available on the Hugging Face Hub: πŸ€— cais/HarmBench-Llama-2-13b-cls

# Run with Slurm
./scripts/step3.sh

# Or run directly; make sure your environment has enough GPUs to load the classifier model (1 GPU)
./scripts/evaluate_completions.sh $cls_path $behaviors_path $input_file $output_file
Example

base_dir="results"
model="vicuna_13b_v1_5" # from configs/model_configs/models.yaml
method="GCG"

cls_path="cais/HarmBench-Llama-2-13b-cls"
behaviors_path="./data/harmbench_text_test_behaviors.csv" # or "./data/harmbench_text_validation_behaviors.csv" 
input_file=$base_dir/$method/$model/completions/$model.json # output file from Step 2
output_file=$base_dir/$method/$model/results/$model.json

./scripts/evaluate_completions.sh $cls_path $behaviors_path $input_file $output_file
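
For a rough sanity check of the resulting attack success rate (ASR), the snippet below (not part of the repository) counts labels in the Step 3 output. It assumes the results JSON stores a binary "label" field per completion, with 1 meaning the classifier judged the completion an instance of the behavior; the grep patterns are sensitive to JSON formatting.

# Rough ASR estimate from the Step 3 output (assumes a binary "label" field per completion)
total=$(grep -o '"label"' $output_file | wc -l)
successes=$(grep -o '"label": 1' $output_file | wc -l)
echo "ASR: ${successes}/${total}"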

βž• Using your own model in HarmBench

You can easily add new Hugging Face transformers models in configs/model_configs/models.yaml by simply adding an entry for your model (see the sketch after this paragraph). This model can then be directly evaluated on most red teaming methods without modifying the method configs (using our dynamic experiment config parsing code, described in the docs folder). Some methods (AutoDAN, PAIR, TAP) currently require manually adding new experiment configs for added models.
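
The sketch below appends a hypothetical entry; the field names are illustrative rather than authoritative, so copy an existing entry in models.yaml as the template for the exact schema.

# Sketch: appending a new transformers-compatible model entry (field names are illustrative)
cat >> configs/model_configs/models.yaml << 'EOF'

my_new_model:
  model:
    model_name_or_path: your-org/your-model # Hugging Face ID or local path
    dtype: bfloat16
  num_gpus: 1
EOF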

βž• Adding/Customizing your own red teaming methods in HarmBench

All of our red teaming methods are implemented in baselines, imported through baselines/__init__.py, and managed by configs/method_configs. You can easily improve on top of existing red teaming methods or add new methods by simply making a new subfolder in the baselines directory (see the scaffolding sketch below).
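
A minimal scaffolding sketch (not part of the repository) follows; the folder and file names are placeholders. Mirror an existing baseline subfolder for the expected class structure, then register the new method in baselines/__init__.py and add a corresponding entry under configs/method_configs.

# Sketch: scaffolding a new red teaming method (placeholder names)
mkdir -p baselines/my_method
touch baselines/my_method/__init__.py baselines/my_method/my_method.py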

βš“ Documentation βš“

For more detailed documentation about each component in the framework, see the docs folder.

🌱 HarmBench Roadmap 🌱

We plan to update HarmBench over the next few weeks with the following:

  • Notebook versions for red teaming methods
  • Additional red teaming methods
  • Additional target models
  • Additional harmful behaviors
  • Support for system prompt attacks
  • Experiment code from the paper, including adversarial training code
  • Validation classifiers

πŸ™ Acknowledgements and Citation πŸ™

We thank the following open-source repositories:

[1] https://github.com/llm-attacks/llm-attacks
[2] https://github.com/patrickrchao/JailbreakingLLMs
[3] https://github.com/RICommunity/TAP
[4] https://github.com/SheltonLiu-N/AutoDAN
[5] https://github.com/lm-sys/FastChat/tree/main/fastchat
[6] https://github.com/ray-project/ray
[7] https://github.com/vllm-project/vllm
[8] https://github.com/huggingface/transformers
[10] https://github.com/CHATS-lab/persuasive_jailbreaker

If you find this useful in your research, please consider citing our paper:

@article{harmbench,
    title = {HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal},
    author = {Mazeika, Mantas and Phan, Long and Yin, Xuwang and Zou, Andy and Wang, Zifan and Mu, Norman and Sakhaee, Elham and Li, Nathaniel and Basart, Steven and Li, Bo and Forsyth, David and Hendrycks, Dan},
    journal = {arXiv preprint},
    year = {2024}
}
