
Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks

Code for the paper "Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks" by Leo Schwinn, René Raab, An Nguyen, Dario Zanca, and Bjoern Eskofier.

We conduct an observational study of the classification decisions of 30 different state-of-the-art neural networks trained to be robust against adversarial attacks. Based on these observations, we propose a novel loss function for adversarial attacks that consistently improves their efficiency and success rate over prior attacks across all 30 analyzed models.

The AutoAttack implementation is taken from: https://github.com/fra31/auto-attack/tree/master/autoattack

Requirements

Required packages can be installed by running install_requirements.sh.
