Data augmentation for NLP
Official TensorFlow Implementation of Adversarial Training for Free! which trains robust models at no extra cost compared to natural training.
Simple code related to adversarial examples, attacks, and defenses.
A Harder ImageNet Test Set (CVPR 2021)
A Toolbox for Adversarial Robustness Research
An Open-Source Package for Textual Adversarial Attack.
[ICML 2019] ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation
A Paperlist of Adversarial Attack on Object Detection
Training Ensembles to Detect Adversarial Examples
Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MXNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models. Advbox provides a command-line tool to generate adversarial examples with zero coding.
A PyTorch reproduction of the official TensorFlow/JAX repository.
List of state-of-the-art papers, code, and other resources
A geometry-inspired decision-based attack
Fooling a neural network with adversarial examples
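Several of the repositories above generate adversarial examples with gradient-based attacks. As a rough illustration of the core idea, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) on a toy logistic classifier in plain NumPy; the weights, inputs, and `fgsm_perturb` helper are all illustrative assumptions, not code from any listed repo.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Move x by eps in the direction sign(dL/dx) to increase the loss.

    For a logistic model p = sigmoid(w.x + b) with binary cross-entropy
    loss, the gradient of the loss w.r.t. the input is (p - y) * w.
    (Toy example for illustration only.)
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical model and input: x is confidently classified positive.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])            # w.x + b = 1.5  ->  p > 0.5

x_adv = fgsm_perturb(x, w, b, y=1.0, eps=1.0)

# A small signed step against the gradient flips the prediction.
print(sigmoid(w @ x + b) > 0.5)     # True  (original input)
print(sigmoid(w @ x_adv + b) > 0.5) # False (adversarial input)
```

Real attacks apply the same one-step (or iterated) signed-gradient update to deep networks via automatic differentiation, typically with a much smaller `eps` so the perturbation stays imperceptible.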
We use 3D modeling methods to create real-world adversarial patches (adversarial examples) for an existing scene.
A brief study on Adversarial Attacks and python scripts to generate and study them.
Adversarial Item Promotion in visually-aware recommenders
Source code for ESORICS 2020 paper "Detection by attack: Detecting adversarial samples by undercover attack"
Generative Adversarial Perturbations (CVPR 2018)
This is the course project for CSCE585: ML Systems. Students build their machine learning systems on the provided infrastructure, Athena.