And we should consider every day lost on which we have not danced at least once.
Hong Kong
Dongchao Yang
yangdongchao
A PhD student at The Chinese University of Hong Kong, currently working on multi-modal audio foundation models and traditional Chinese philosophy.
CUHK & PKU & SHU
Yiwei Guo
cantabile-kwok
2nd year Ph.D. Student at SJTU X-LANCE Lab, Shanghai, China. Interested in text-to-speech synthesis, voice conversion and speech codecs.
Alexander Tong
atong01
Postdoc at Mila / UdeM, studying causal single-cell dynamics
@mila-iqia; @UdeM; @Dreamfold Montreal, QC
Crypto Scientist @btcok9
COINsciencer
Crypto Scientist on Telegram
https://t.me/btcok9
Sharing free crypto science tools long-term
My profile
https://bit.ly/coinsciy
Group application form
https://bit.ly/alphasci
Collabora
collabora
See https://gitlab.collabora.com/ for more Collabora git repositories. We are hiring! https://col.la/careers
Cambridge, UK; Montreal, QC
Haohe (Leo) Liu / 刘濠赫
haoheliu
UoSurrey, Centre for Vision, Speech and Signal Processing (CVSSP), Stag Hill, Guildford GU2 7XH, UK
Azure Samples
Azure-Samples
Microsoft Azure code samples and examples in .NET, Java, Python, JavaScript, TypeScript, PHP and Ruby
Redmond, WA
Rafael Valle
rafaelvalle
My research focuses on machine listening and improvisation.
During my PhD at Berkeley I was advised mainly by Sanjit Seshia and Edmund Campion.
Cheng-I Jeff Lai
jefflai108
Ph.D. Student at MIT. Interested in self-supervised learning, spoken language acquisition, and audio-visual learning.
Cambridge, MA
Dong-Hwan Jang 장동환
DongHwanJang
ML researcher @ Samsung SAIT.
Previously @ NAVER as an intern.
BS/MS EE @ SNU.
On a quest to enhance model robustness and effectiveness.
yanggeng1995
speech synthesis | neural vocoder | generative models | machine learning @ ASLP, NWPU, Xi'an, Shaanxi, China
Northwestern Polytechnical University, Beijing, China
Jinhyeok Yang
Yangyangii
Speech AI Researcher @supertone-inc
Supertone, Inc. Seongnam, Republic of Korea
Yeongtae
Yeongtae
YeongTae Hwang.
Research Engineer, @neosapience, 2020.03 ~ now.
Research Engineer, @netmarble, 2016.03 ~ 2020.03.
@neosapience South Korea
ZackHodari
Generative AI, Large Speech Models, Prosody and Expressivity in Synthetic Voices
Papercup London
Tomoki Hayashi
kan-bayashi
Main developer of ESPnet / COO @ Human Dataware Lab. Co., Ltd. / Postdoctoral researcher @ Nagoya University
Nagoya University Nagoya
Song
songhan
Song Han is an associate professor at MIT EECS and distinguished scientist at NVIDIA. His research interest is efficient AI computing.
MIT, NVIDIA
Seung-won Park
seungwonpark
박승원 · Machine Learning Engineer
@moloco · Previously worked on speech synthesis/processing for about 3 years.
@moloco Seoul, South Korea
Theodoros Giannakopoulos
tyiannak
Principal Researcher of Multimodal Machine Learning
NCSR Demokritos Athens, Greece
Srikanth Ronanki
ronanki
PhD student at CSTR, University of Edinburgh
University of Edinburgh Edinburgh