Reinforcement Learning-Based Black-Box Model Inversion Attacks

This is a PyTorch implementation of the paper "Reinforcement Learning-Based Black-Box Model Inversion Attacks" accepted by CVPR 2023.
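To give a feel for the attack setting, here is a toy sketch of black-box model inversion: search a generator's latent space for a code whose output the target model scores highest for a chosen class. This is illustrative only; it uses plain random search rather than the paper's soft actor-critic agent, and `target_model`, `generator`, and `invert` are hypothetical stand-ins, not code from this repository.

```python
import random
import math

def target_model(x):
    # Hypothetical black-box classifier: returns a confidence in (0, 1]
    # that input x belongs to the target class. The attacker can only
    # query it, never read its weights.
    return math.exp(-sum((xi - 0.7) ** 2 for xi in x))

def generator(z):
    # Hypothetical generator mapping a latent code to an "image".
    # In the paper, a GAN generator plays this role.
    return list(z)

def invert(dim=4, steps=500, seed=0):
    # Random-search stand-in for the paper's RL agent: propose latent
    # codes and keep the one the black-box model scores highest.
    rng = random.Random(seed)
    best_z, best_score = None, -1.0
    for _ in range(steps):
        z = [rng.uniform(-1, 1) for _ in range(dim)]
        score = target_model(generator(z))
        if score > best_score:
            best_z, best_score = z, score
    return best_z, best_score

z, score = invert()
print(round(score, 3))
```

The paper replaces the random search with a reinforcement-learning agent that uses the target model's confidence as its reward signal, which explores the latent space far more efficiently.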

Dependencies

This code has been tested with Python 3.8.8, PyTorch 1.8.0, and CUDA 10.2.89.
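A possible environment setup matching the tested versions is sketched below; the environment name and the conda channel/CUDA-toolkit pairing are assumptions, not part of this repository.

```shell
# Hypothetical setup for the tested versions (names are assumptions).
conda create -n rlb-mi python=3.8.8
conda activate rlb-mi
# PyTorch 1.8.0 built against the CUDA 10.2 toolkit.
conda install pytorch=1.8.0 cudatoolkit=10.2 -c pytorch
```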

Weights

Model weights for the experiments can be downloaded from the link below.
https://drive.google.com/drive/folders/15Xcqoz53TQVUUyZe9HNtchoCriLUeQ-O?usp=sharing
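One way to fetch the shared folder from the command line is with the `gdown` tool; the target directory `./weights` is an assumption, not a path this repository requires.

```shell
# Hypothetical download of the Google Drive folder via gdown;
# the output directory ./weights is an assumption.
pip install gdown
gdown --folder "https://drive.google.com/drive/folders/15Xcqoz53TQVUUyZe9HNtchoCriLUeQ-O" -O ./weights
```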

Usage

Please check the commands included in run_experiments.sh. It contains commands for both the simplified experiment and the experiments reported in the paper.

Please run

bash run_experiments.sh

to reproduce the results.

Timeline

[04.28] Fixed mistakes introduced during code cleanup.

Acknowledgements

This repository contains code snippets and some model weights from the repositories listed below.

https://github.com/MKariya1998/GMI-Attack

https://github.com/SCccc21/Knowledge-Enriched-DMI

https://github.com/BY571/Soft-Actor-Critic-and-Extensions
