This is the official repository for the SingGraph model. The paper has been accepted by Interspeech 2024.
The official code has been released. I need some more time to write the README for the data pre-processing steps.
Existing models for speech deepfake detection have struggled to adapt to unseen attacks in the unique singing voice domain of human vocalization. To bridge this gap, we present the SingGraph model. The model synergizes the MERT acoustic music understanding model for pitch and rhythm analysis with the wav2vec2.0 model for linguistic analysis of lyrics. Additionally, we advocate using RawBoost and beat-matching techniques grounded in music domain knowledge for singing voice augmentation, thereby enhancing SingFake detection performance. Our proposed method achieves new state-of-the-art (SOTA) results on the SingFake dataset, surpassing the previous SOTA model across three distinct scenarios: it relatively improves EER by 13.2% for seen singers, 24.3% for unseen singers, and 37.1% for unseen singers with different codecs.
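To illustrate the dual-branch idea described above, here is a minimal, hypothetical sketch of how frame-level embeddings from a music branch (MERT) and a linguistic branch (wav2vec2.0) could be fused into a single utterance-level representation. The encoder functions below are placeholders (the real models are pretrained transformers), and the concatenate-then-mean-pool fusion is an illustrative assumption, not the exact SingGraph architecture.

```python
import numpy as np

FRAME_HOP = 320   # samples per output frame (typical for 16 kHz SSL encoders)
EMB_DIM = 768     # embedding size of each branch (base-size models)

def mert_encode(wav):
    # Placeholder for the MERT encoder (pitch/rhythm branch).
    n_frames = len(wav) // FRAME_HOP
    return np.random.randn(n_frames, EMB_DIM)

def wav2vec2_encode(wav):
    # Placeholder for the wav2vec2.0 encoder (lyrics/linguistic branch).
    n_frames = len(wav) // FRAME_HOP
    return np.random.randn(n_frames, EMB_DIM)

def fuse_branches(wav):
    # Concatenate the frame-aligned embeddings from both branches along
    # the feature axis, then mean-pool over time to get one vector that
    # a downstream classifier could score as bona fide vs. deepfake.
    music = mert_encode(wav)                       # (n_frames, 768)
    lingo = wav2vec2_encode(wav)                   # (n_frames, 768)
    fused = np.concatenate([music, lingo], axis=-1)  # (n_frames, 1536)
    return fused.mean(axis=0)                      # (1536,)

emb = fuse_branches(np.zeros(16000))  # 1 second of 16 kHz audio
print(emb.shape)  # (1536,)
```

In practice both branches would come from pretrained checkpoints and the fusion would be learned jointly with the detector; this sketch only shows the data flow.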
The dataset is based on the paper "SingFake: Singing Voice Deepfake Detection," which was accepted at ICASSP 2024. [Project Webpage]
Due to copyright issues, the dataset is not open-sourced. Please follow the instructions in the paper above to obtain the dataset yourself.
If you find our work useful, please consider citing our paper:
@article{chen2024singing,
title={Singing Voice Graph Modeling for SingFake Detection},
author={Xuanjun Chen and Haibin Wu and Jyh-Shing Roger Jang and Hung-yi Lee},
journal={arXiv preprint arXiv:2406.03111},
year={2024}
}
If you have any questions, please feel free to contact me by email at [email protected].