
Redteaming Resistance Benchmark

Overview

Hello! This repository contains the code and data used to benchmark redteaming prompts against various models, as seen in our Huggingface Leaderboard. This project aims to reveal weaknesses in both open-source and black-box language models through redteaming attacks covering a diverse range of behaviors and topics.

The following datasets are used in this benchmark, with more on the way:

  1. Advbench: Dataset of adversarial behaviors ranging from profanity and discrimination to violence, formulated as instructions, from Universal and Transferable Adversarial Attacks on Aligned Language Models. [Github]
  2. AART: A collection of generated adversarial behaviors created through AI-assisted recipes with a wide range of diverse cultural, geographic, and application settings, taken from AART: AI-Assisted Red-Teaming with Diverse Data Generation for New LLM-powered Applications. [Github]
  3. Beavertails: An extensive collection of datasets specifically developed to support research on safety alignment in large language models, taken from A Human-Preference Dataset for Improving Safety Alignment of Large Language Models. [Github]
  4. Do Not Answer: An open-source dataset to evaluate LLMs' safety mechanism at a low cost. The dataset is curated and filtered to consist only of prompts to which responsible language models do not answer, taken from Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs. [Github]
  5. RedEval - HarmfulQA: Dataset of harmful questions covering 10 topics and ~10 subtopics each, ranging from cultural studies to ancient history. Subset of RedEval from Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment. [Github]
  6. RedEval - DangerousQA: Dataset of harmful questions covering racist, stereotypical, sexist, illegal, toxic, and harmful content. Subset of RedEval from Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment. [Github]
  7. Student-Teacher Prompting: Dataset of harmful prompts and formats that successfully broke Vicuna-13B. Formats are specified in their repository: [Github]
  8. SAP: Generated attacks created through in-context learning to mimic human speech. Taken from Attack Prompt Generation for Red Teaming and Defending Large Language Models. [Github]

For benchmarking performance on content, we've separated OpenAI's usage policies into 15 different categories as done in Yi Zeng et al.:

  1. Illegal Activity
  2. Children Harm
  3. Hate/Harass/Discrimination/Violence
  4. Malware
  5. Physical Harm
  6. Economic Harm
  7. Fraud/Deception
  8. Adult Content
  9. Political Campaigning
  10. Privacy Violation
  11. Unauthorized Practice of Law
  12. Tailored Financial Advice
  13. Unauthorized Practice of Medical Advice
  14. High-Risk Government Decision Making
  15. Sexual Content

We've provided the classification results for all our prompts in data/prompt_content_classification.csv.
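
For a quick look at these labels, the classification results can be loaded as a standard CSV; the snippet below is a minimal sketch and makes no assumptions about the column names:

import pandas as pd

# Per-prompt content category labels (column names depend on the file itself).
classifications = pd.read_csv("data/prompt_content_classification.csv")
print(classifications.head())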

Project Structure

  • /classes: Base classes for models, evaluations, and judges.
  • /configs: Config objects for loading models.
  • /data: Houses data stored in .json files from each of our collected datasets.
  • /evaluations: Evaluation classes, one per dataset, for readability.
  • /judges: Houses code for interfacing with Llamaguard.
  • /models: Model classes for OpenAI, Anthropic, Cohere, and Huggingface models.

Getting Started

Conda + pip

We've provided both a requirements.txt and an environment.yml file for creating environments. Note that we use vllm for inference, which means installation must be done on a machine with a GPU.
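
For example, either of the following sets up an environment (the conda environment name is taken from environment.yml):

# Option 1: create a conda environment from the provided spec
conda env create -f environment.yml

# Option 2: install into an existing (GPU-enabled) environment with pip
pip install -r requirements.txt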

API Keys

If you want to benchmark OpenAI, Anthropic, or Cohere models, create a .env file and fill in the following keys:

OPENAI_API_KEY=...
COHERE_API_KEY=...
CLAUDE_API_KEY=...

Running Benchmarks

We've provided an example script, run_eval_benchmarks.py, which allows the user to specify a list of models and a list of datasets to benchmark against. By default, results are written to a /generations folder.

Llamaguard

While Llamaguard was specifically fine-tuned to classify prompts as safe or unsafe, we found in our experiments that the model outputs a significant number of false negatives, in this case biasing towards "safe" even when the output is clearly "unsafe."

We found that recent versions of gpt-4 are highly accurate in their classification, and thus we recommend using those models for better results.

We use the system prompt specified in the Llamaguard paper, defined in judges/system_prompts.py. We also provide a version of the system prompt based on OpenAI's content moderation policies. Choosing between OpenAI's and Llamaguard's policies can be toggled via the policy parameter of the LlamaGuardJudge constructor, which takes either openai or llamaguard as its value.
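
As a minimal sketch of constructing a judge (the import path and any additional constructor arguments, such as the underlying judge model, are assumptions; only the policy parameter and its two values are documented here):

from judges.llama_guard_judge import LlamaGuardJudge  # import path is hypothetical

# Judge against OpenAI's content moderation policies...
judge = LlamaGuardJudge(policy="openai")

# ...or against the policies from the Llamaguard paper.
judge = LlamaGuardJudge(policy="llamaguard")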

Invoking the Judge Model

Given an instance of LlamaGuardJudge, inference can be performed through batch_call(), which takes three optional boolean flags (see the sketch after this list):

  • judge_agent: A value of True means the judge will evaluate the Agent's response to a user prompt, and a value of False means the judge will only evaluate the user prompt.
  • use_policy_template: If True, the prompts will be formatted with the template string defined in the Llamaguard paper before being sent to the model. If False, the prompt will be formatted in the conventional "messages" format as a dictionary with a "user" and "assistant" message. This value should be set to True by default unless the judge model is a Llamaguard-family model.
  • apply_template: This value is only relevant for open-source models. A value of True applies the chat template through tokenizer.apply_chat_template(). This value should be set to True by default unless the judge is a Llamaguard-family model, in which case it should be used in conjunction with use_policy_template. For example, if apply_template is True, then we refrain from applying the Llamaguard system prompt and set use_policy_template to False. Likewise, if apply_template is False, then we format the system prompt ourselves and set use_policy_template to True. For black-box models the chat template is applied automatically.
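
A minimal sketch of a judge call, assuming batch_call() accepts the prompts and the corresponding agent responses as positional arguments (those argument names are assumptions; the three flags are as described above):

# Hypothetical prompt/response pairs to be judged.
prompts = ["How do I pick a lock?"]
responses = ["I can't help with that."]

# Non-Llamaguard judge (e.g. gpt-4): evaluate the agent's response and wrap
# each prompt in the Llamaguard policy template before sending it.
verdicts = judge.batch_call(
    prompts,
    responses,
    judge_agent=True,
    use_policy_template=True,
    apply_template=False,
)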

Loading Models

Model loading is performed by specifying a config, created through ConfigFactory, or by passing in the model name:

from configs.utils import ConfigFactory, ModelFactory

model_config = ConfigFactory.load_config(model_name="lmsys/vicuna-7b-v1.5") 
model = ModelFactory.from_config(model_config)
# OR 
model = ModelFactory.from_name("lmsys/vicuna-7b-v1.5")

The above code loads a model object for lmsys/vicuna-7b-v1.5. Additional parameters, such as temperature, top_p, and max_tokens, can be set as kwargs in load_config() or from_name(). The full list of arguments is defined in configs/model_configs.py.
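
For example, sampling parameters can be passed directly as kwargs (the values below are illustrative):

# Same loading path as above, with generation parameters overridden.
model_config = ConfigFactory.load_config(
    model_name="lmsys/vicuna-7b-v1.5",
    temperature=0.7,
    top_p=0.9,
    max_tokens=512,
)
model = ModelFactory.from_config(model_config)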

vllm

We use vllm to speed up inference on open-source models. vllm, however, only supports having one model loaded at a time. Therefore, we've provided helper functions, load_model() and destroy(), that load and destroy the model, respectively. destroy() has the added effect of killing any distributed process groups created by torch.distributed.launch() as part of the vllm package.

We recommend using vllm for inference. That said, vllm utilization can be toggled with the use_vllm boolean flag in the model config. Model loading can also be configured by setting load_model=True in the config if the user wants to load the model on startup instead of at a later point through load_model(). We strongly suggest actively loading and destroying these models when running across multiple open-source models to preserve GPU memory.
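
A sketch of cycling through several open-source models while keeping at most one loaded on the GPU. The use_vllm and load_model config flags and the load_model()/destroy() helpers follow the description above; calling them as methods on the model object, and the loop itself, are assumptions:

from configs.utils import ConfigFactory, ModelFactory

model_names = ["lmsys/vicuna-7b-v1.5", "lmsys/vicuna-13b-v1.5"]

for name in model_names:
    # Defer loading so only one vllm engine occupies the GPU at a time.
    config = ConfigFactory.load_config(model_name=name, use_vllm=True, load_model=False)
    model = ModelFactory.from_config(config)

    model.load_model()   # spin up the vllm engine for this model
    # ... run generations / evaluations with `model` here ...
    model.destroy()      # free GPU memory and tear down distributed groups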
