
Voice Guard: Protecting Voice Privacy with Strong and Imperceptible Adversarial Perturbation in the Time Domain

This is the official implementation of the paper Voice Guard: Protecting Voice Privacy with Strong and Imperceptible Adversarial Perturbation in the Time Domain. We use adversarial attacks to prevent one's voice from being improperly used in voice conversion: conversion fails when the adversarial noise is added to the input utterance.

Thanks to Attack VC for the code base. Since this implementation is based on Attack VC, we only release the core functions.

This code is no longer maintained; feel free to adapt it as you wish.

Core code

attack_utils.py : implements Voice Guard's core attack flow.
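The core attack optimizes a perturbation directly on the waveform, which is what "in the time domain" refers to. As a rough illustration only (not the repo's actual API: `pgd_attack` and `grad_fn` are hypothetical names), the loop resembles a PGD-style attack that keeps the perturbation inside an L-infinity budget:

```python
import numpy as np

def pgd_attack(wav, grad_fn, eps=0.005, alpha=0.001, steps=100):
    """Sketch of a time-domain PGD-style attack loop.

    `grad_fn` stands in for the gradient of the attack loss with respect
    to the input waveform (obtained via backprop in the real code).
    """
    delta = np.zeros_like(wav)
    for _ in range(steps):
        g = grad_fn(wav + delta)            # loss gradient w.r.t. input
        delta = delta - alpha * np.sign(g)  # signed-gradient descent step
        delta = np.clip(delta, -eps, eps)   # project back into the budget
    return np.clip(wav + delta, -1.0, 1.0)  # keep a valid audio range

# Toy usage with a dummy gradient that pushes the waveform toward zero.
wav = np.random.uniform(-0.5, 0.5, 16000)
adv = pgd_attack(wav, grad_fn=lambda x: x)
```

The projection step after every update is what bounds the perturbation's amplitude; Voice Guard additionally shapes the noise with a psychoacoustic masking threshold (see `generate_masking_threshold.py`).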

data_utils.py : implements gradient-preserving feature extraction.
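"Gradient-preserving" means the feature extraction (waveform to spectrogram-like features) is built entirely from differentiable operations, so the attack can backpropagate through it to the raw waveform. The actual repo does this with PyTorch ops; the numpy sketch below (a simplification, not the repo's code) shows the numerical pipeline whose every step, framing, FFT, matrix multiply, and log, is differentiable:

```python
import numpy as np

def log_mel(wav, sr=16000, n_fft=512, hop=128, n_mels=40):
    """Compute a log-mel spectrogram from a waveform.

    Each step here has a differentiable counterpart in PyTorch, which is
    what lets gradients flow from the feature loss back to the waveform.
    """
    # Frame into overlapping windows and apply a Hann window
    n_frames = 1 + (len(wav) - n_fft) // hop
    frames = np.stack([wav[i * hop:i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hanning(n_fft)
    mag = np.abs(np.fft.rfft(frames, axis=1))       # (n_frames, n_fft//2+1)
    # Simple triangular mel filterbank (simplified for illustration)
    mel_pts = np.linspace(0, 2595 * np.log10(1 + sr / 2 / 700), n_mels + 2)
    hz = 700 * (10 ** (mel_pts / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        if c > l:
            fb[m - 1, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:
            fb[m - 1, c:r] = (r - np.arange(c, r)) / (r - c)
    return np.log(mag @ fb.T + 1e-6)                # (n_frames, n_mels)
```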

generate_masking_threshold.py : implements the calculation of the masking threshold.
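The masking threshold is a per-frame, per-frequency ceiling (in dB) below which added noise should remain inaudible to a human listener. The paper's psychoacoustic model is more elaborate (tonal and noise maskers with spreading functions); the sketch below is a crude simplification that only combines the signal's power spectral density with Terhardt's approximation of the absolute threshold of hearing, to show the kind of quantity this file produces:

```python
import numpy as np

def absolute_threshold_db(freqs_hz):
    """Terhardt's approximation of the absolute threshold of hearing (dB)."""
    f = np.maximum(freqs_hz, 20.0) / 1000.0   # kHz; clamp to avoid f = 0
    return (3.64 * f ** -0.8
            - 6.5 * np.exp(-0.6 * (f - 3.3) ** 2)
            + 1e-3 * f ** 4)

def masking_threshold(wav, sr=16000, n_fft=512, hop=128):
    """Crude per-frame masking threshold in dB (simplified sketch)."""
    n_frames = 1 + (len(wav) - n_fft) // hop
    frames = np.stack([wav[i * hop:i * hop + n_fft] for i in range(n_frames)])
    psd = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) ** 2
    psd_db = 10 * np.log10(psd + 1e-12)
    ath = absolute_threshold_db(np.fft.rfftfreq(n_fft, d=1.0 / sr))
    # Noise may rise to the louder of a signal-derived mask (here crudely
    # the PSD minus a fixed offset) and the hearing threshold.
    return np.maximum(psd_db - 20.0, ath)     # (n_frames, n_fft//2+1)
```

Constraining the perturbation's spectrum to stay under this threshold is what makes the adversarial noise imperceptible rather than merely small.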

Reference

Please cite our paper if you find it useful.

@inproceedings{ijcai2023p535,
  title     = {Voice Guard: Protecting Voice Privacy with Strong and Imperceptible Adversarial Perturbation in the Time Domain},
  author    = {Li, Jingyang and Ye, Dengpan and Tang, Long and Chen, Chuanxi and Hu, Shengshan},
  booktitle = {Proceedings of the Thirty-Second International Joint Conference on
               Artificial Intelligence, {IJCAI-23}},
  publisher = {International Joint Conferences on Artificial Intelligence Organization},
  editor    = {Edith Elkind},
  pages     = {4812--4820},
  year      = {2023},
  month     = {8},
  note      = {Main Track},
  doi       = {10.24963/ijcai.2023/535},
  url       = {https://doi.org/10.24963/ijcai.2023/535},
}
