UtsvGrg/MIDAS_Lab_PRS
This repository contains my exploration of and work with the new SOMOS dataset during my Independent Project at MIDAS Lab.

Trained MOS prediction models on several datasets using wav2vec2, a self-supervised learning model for speech. Achieved a 45% improvement in MOS prediction accuracy over SOTA models on the SOMOS dataset.

Also leveraged Tacotron2 with ToBI labels to identify prosodic variations that yield better naturalness scores (LCC, SRCC).
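As a reference for the two metrics named above, the following sketch computes the Linear Correlation Coefficient (LCC, Pearson) and the Spearman Rank Correlation Coefficient (SRCC) between predicted and ground-truth MOS scores using `scipy.stats`. The sample score arrays are illustrative only, not results from this repository.

```python
from scipy.stats import pearsonr, spearmanr

def mos_correlations(predicted, ground_truth):
    """Return (LCC, SRCC) between predicted and ground-truth MOS scores."""
    lcc, _ = pearsonr(predicted, ground_truth)    # linear correlation
    srcc, _ = spearmanr(predicted, ground_truth)  # rank correlation
    return lcc, srcc

# Illustrative MOS values only:
pred = [3.1, 3.9, 2.5, 4.2]
true = [3.0, 4.0, 2.4, 4.5]
lcc, srcc = mos_correlations(pred, true)
```

SRCC depends only on rank order, so it rewards a model that sorts utterances by naturalness correctly even when the absolute MOS values are off; LCC additionally penalizes scale and offset errors.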

Data and pre-trained models are on Zenodo.

Implementation of the PRS paper.

Downloads:

Download the data and pre-trained weights from Zenodo into a folder named ASRU. The full path to the ASRU folder is used to set the REF variable later. PRS pre-trained weights are provided for stage2 only.

Environment:

Create a conda environment from env.yaml: `conda env create -f env.yaml`

Environment variable:

On a Linux-based machine, set the environment variable with: `export REF="path/to/the/ASRU/"`
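Once REF is exported, code can resolve files inside the downloaded ASRU folder from that variable. A minimal sketch, assuming only what the README states (REF points at the ASRU folder; stage2 weights exist); the exact file layout inside ASRU is not specified here, so the `"stage2"` subfolder name is a hypothetical example.

```python
import os
from pathlib import Path
from typing import Optional

def asru_path(subdir: str, ref: Optional[str] = None) -> Path:
    """Build a path under the ASRU folder pointed to by REF."""
    ref = ref or os.environ.get("REF")
    if ref is None:
        raise RuntimeError('Set REF first, e.g. export REF="path/to/the/ASRU/"')
    return Path(ref) / subdir

# Hypothetical usage: locate the stage2 pre-trained weights directory.
# weights_dir = asru_path("stage2")
```

Failing loudly when REF is unset makes a missing setup step obvious instead of producing confusing file-not-found errors deeper in the pipeline.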
