Reinforcement Learning-Based Black-Box Model Inversion Attacks

This is a PyTorch implementation of the paper "Reinforcement Learning-Based Black-Box Model Inversion Attacks", accepted at CVPR 2023.

Dependencies

This code has been tested with Python 3.8.8, PyTorch 1.8.0, and CUDA 10.2.89.
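As a sketch, an environment matching these versions might be set up as follows. The environment name `rlbmi` and the conda/pip workflow are assumptions (the repository does not prescribe one); the version numbers are those stated above.

```shell
# Sketch of environment setup; "rlbmi" and the conda/pip workflow are
# assumptions, while the versions match those listed as tested above.
conda create -n rlbmi python=3.8.8 -y
conda activate rlbmi
# PyTorch 1.8.0 (the tested release; pick the build matching CUDA 10.2)
pip install torch==1.8.0
```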

Weights

Model weights for the experiments can be downloaded from the following link:

https://drive.google.com/drive/folders/15Xcqoz53TQVUUyZe9HNtchoCriLUeQ-O?usp=sharing
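For command-line downloads, one option is the `gdown` tool, which can fetch an entire Google Drive folder. This is a convenience sketch, not part of the original instructions, and the `weights` output directory name is an assumption.

```shell
# Optional convenience sketch (not in the original instructions):
# fetch the whole Google Drive folder with gdown into ./weights.
pip install gdown
gdown --folder "https://drive.google.com/drive/folders/15Xcqoz53TQVUUyZe9HNtchoCriLUeQ-O?usp=sharing" -O weights
```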

Usage

Please check the commands in run_experiments.sh; it contains commands for both the simplified experiment and the experiments reported in the paper.

Please run

bash run_experiments.sh

to reproduce the results.

Timeline

[04.28] Mistakes made during code cleanup have been fixed.

Acknowledgements

This repository contains code snippets and some model weights from the repositories listed below.

https://github.com/MKariya1998/GMI-Attack

https://github.com/SCccc21/Knowledge-Enriched-DMI

https://github.com/BY571/Soft-Actor-Critic-and-Extensions