HarmBench 1.0 update #19

Merged 1 commit on Feb 27, 2024

Conversation

@mmazeika (Collaborator) commented on Feb 27, 2024

Changes:

Adversarial training:

  • Added adversarial training code and a link to the model on HF

Precomputed test cases:

Experiments:

  • Fixed a bug with Qwen models: they cannot be used with prefix_cache=True in methods that have this argument. Previously this failed silently and greatly decreased the ASR of several methods on Qwen (see the sketch below).
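
To make the constraint concrete, here is a minimal Python sketch that forces prefix_cache off for Qwen models before a method is built. The helper name, config keys, and model id are illustrative, not the repository's actual fix:

```python
# Minimal sketch (not HarmBench's exact fix): force prefix caching off for
# Qwen models before instantiating a method that accepts a prefix_cache flag.
def resolve_prefix_cache(model_name_or_path: str, requested: bool) -> bool:
    """Return a safe prefix_cache value for the given model.

    Qwen models cannot be used with prefix_cache=True, so the flag is forced
    to False for them; all other models keep the requested value.
    """
    if "qwen" in model_name_or_path.lower():
        return False
    return requested


# Illustrative method config (keys and model id are placeholders).
method_config = {
    "model_name_or_path": "Qwen/Qwen-7B-Chat",
    "prefix_cache": True,  # requested, but unsafe for Qwen
}
method_config["prefix_cache"] = resolve_prefix_cache(
    method_config["model_name_or_path"], method_config["prefix_cache"]
)
print(method_config)  # prefix_cache is now False
```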

Red teaming methods:

  • Added full documentation for RedTeamingMethod and SingleBehaviorRedTeamingMethod
  • Fixed AutoPrompt bug
  • Renamed AutoDan to AutoDAN
  • Updated SingleBehaviorRedTeamingMethod so that more methods can subclass it, cutting down on overall code; many methods that previously subclassed RedTeamingMethod now subclass SingleBehaviorRedTeamingMethod (a minimal class sketch follows this list)
  • Updated method config parsing so that manually defined experiments whose names include model names from models.yaml can reference those model configs directly, making it easier to ensure all experiments use the same models. More details are near the bottom of configs.md.
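
For orientation, here is a minimal, self-contained sketch of the class hierarchy described above; the method names and behavior dictionary keys are assumptions based on this changelog rather than the repository's exact API (see baselines/ and the RedTeamingMethod documentation for the real interface):

```python
# Self-contained sketch of the described hierarchy; names are illustrative.
class RedTeamingMethod:
    """Base interface: turn a set of behaviors into test cases."""

    def generate_test_cases(self, behaviors, verbose=False):
        raise NotImplementedError


class SingleBehaviorRedTeamingMethod(RedTeamingMethod):
    """Handles iteration over behaviors, so subclasses only implement the
    per-behavior logic. This is what lets more methods subclass it."""

    def generate_test_cases(self, behaviors, verbose=False):
        return {
            b["BehaviorID"]: self.generate_test_cases_single_behavior(b, verbose)
            for b in behaviors
        }

    def generate_test_cases_single_behavior(self, behavior, verbose=False):
        raise NotImplementedError


class MyMethod(SingleBehaviorRedTeamingMethod):
    """Toy method: returns the behavior string unchanged. A real method
    would optimize or rewrite an adversarial prompt here."""

    def generate_test_cases_single_behavior(self, behavior, verbose=False):
        return [behavior["Behavior"]]


behaviors = [{"BehaviorID": "example_id", "Behavior": "example behavior string"}]
print(MyMethod().generate_test_cases(behaviors))
```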

Models:

  • Renamed multimodal models to be more specific and consistent with other model names
  • Added a model_type field, which interfaces with run_pipeline.py to make running the evaluation pipeline easier
  • Added the adversarially trained model under zephyr_7b_robust
  • Refactored chat template loading so it no longer depends on the model path. Transformers models can now take a chat_template field in their model config, with options defined in baselines/model_utils.py (an illustrative config entry follows this list).
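
As a rough illustration of the new fields, here is a models.yaml-style entry loaded in Python. Apart from model_type and chat_template, the nesting, key names, and values are assumptions based on this changelog, and the HF repo id is a placeholder:

```python
import yaml  # requires PyYAML

# Illustrative entry only; check models.yaml and baselines/model_utils.py
# for the real schema and the valid chat_template options.
example_entry = yaml.safe_load("""
zephyr_7b_robust:
  model_type: open_source            # assumed value; read by run_pipeline.py
  model:
    model_name_or_path: "org/zephyr-7b-robust"  # placeholder HF repo id
    chat_template: zephyr            # illustrative; options live in baselines/model_utils.py
""")
print(example_entry["zephyr_7b_robust"]["model_type"])
```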

Evaluation pipeline:

  • Added scripts/run_pipeline.py and configs/pipeline_configs/run_pipeline.yaml to make running the evaluation pipeline easier.
  • Added HF repo link to multimodal classifier
  • Added a --behavior_ids_subset argument to generate_test_cases.py as another way of selecting which behaviors to run
  • Added an --incremental_update argument to evaluate_completions.py to update existing completions JSON files rather than generating completions for all behaviors from scratch. This is useful for updating completions with new behaviors (example invocations follow this list).
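
Example invocations of the two new arguments, as a sketch only: aside from --behavior_ids_subset and --incremental_update, every flag and value below is a placeholder, and the real scripts take additional required arguments (see evaluation_pipeline.md):

```python
import subprocess

# Generate test cases for only a subset of behaviors.
subprocess.run([
    "python", "generate_test_cases.py",
    "--method_name", "GCG",  # placeholder method
    "--behavior_ids_subset", "behavior_id_1,behavior_id_2",  # assumed comma-separated format
    # ... remaining required arguments omitted ...
], check=True)

# Update existing completions JSON files in place instead of
# regenerating everything from scratch.
subprocess.run([
    "python", "evaluate_completions.py",
    "--incremental_update",
    # ... remaining required arguments omitted ...
], check=True)
```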

Data:

  • Reorganized the data folder into behavior_datasets, optimizer_targets, copyright_classifier_hashes, and multimodal_behavior_images subfolders.
  • Removed three text behaviors due to low quality:
    • chop_shops_los_angeles_locating
    • password_pin_coercion_dialogue
    • ddos_code_obfuscation
  • Added two text behaviors to bring the total to exactly 400:
    • dimethylmercury_instructions_home
    • dimethylmercury_materials_no_oversight
  • Added FunctionalCategory and SemanticCategory fields to the text and multimodal behaviors (see the example below)
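
A quick way to inspect the new category fields; the CSV filename below is an assumption based on this changelog and the new folder layout:

```python
import collections
import csv

# Assumed location of the text behaviors file under data/behavior_datasets/.
path = "data/behavior_datasets/harmbench_behaviors_text_all.csv"

with open(path, newline="") as f:
    rows = list(csv.DictReader(f))

print(len(rows), "behaviors")  # expected: 400 after the changes above
print(collections.Counter(r["FunctionalCategory"] for r in rows))
print(collections.Counter(r["SemanticCategory"] for r in rows))
```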

README and documentation:

  • Updated the main README to use run_pipeline.py and moved the individual steps into evaluation_pipeline.md
  • Added evaluation_pipeline.md documentation for the evaluation pipeline
  • Updated

Notebooks:

  • Added notebooks/analyze_results.ipynb for collecting results, printing ASR tables, and troubleshooting experiments
  • Moved method notebooks into notebooks/methods folder for clarity
