Universal and Transferable Attacks on Aligned Language Models

FabienRoger/llm-attacks


Coup probes

This repository contains the experiments for the Coup probes post. It includes code to train probes that identify theft advice and to evaluate how well they generalize under format variations and jailbreak suffixes.

Once the seed dataset is generated, theft_probe/run.py runs the scripts that generate jailbreaks and record model activations. theft_probe/train_probes.py then trains and evaluates the probes, and theft_probe/plot.py generates the figures.
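The probes trained by train_probes.py can be sketched as plain logistic regression on stored activation vectors. The sketch below is illustrative only, assuming activations are available as a (n, d) float array with 0/1 labels; the function names and hyperparameters are not the repository's actual API.

```python
import numpy as np

def train_linear_probe(acts, labels, lr=0.1, epochs=200):
    """Train a logistic-regression probe on activation vectors.

    acts:   float array of shape (n, d) -- stored model activations
    labels: 0/1 array of shape (n,)     -- 1 = theft advice, 0 = benign
    Returns a weight vector w of shape (d,) and a scalar bias b.
    """
    n, d = acts.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        logits = acts @ w + b
        probs = 1.0 / (1.0 + np.exp(-logits))
        grad = probs - labels              # gradient of BCE loss w.r.t. logits
        w -= lr * (acts.T @ grad) / n      # averaged gradient step on weights
        b -= lr * grad.mean()              # averaged gradient step on bias
    return w, b

def probe_predict(acts, w, b):
    """Classify activations: True where the probe fires (logit > 0)."""
    return (acts @ w + b) > 0
```

Generalization under jailbreak suffixes would then be measured by fitting the probe on clean examples and calling probe_predict on activations from suffix-modified prompts.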

This is a fork of the official repository for "Universal and Transferable Adversarial Attacks on Aligned Language Models" by Andy Zou, Zifan Wang, J. Zico Kolter, and Matt Fredrikson.
