Huy is currently an Applied Scientist on the Alexa Science team at Amazon, where he works on natural language processing components of Alexa. He holds a Master of Applied Science in Electrical and Computer Engineering from York University, Toronto. His Master's thesis was nominated for the "Best Thesis Award 2021".
At York University, his research focused on combining deep learning with graph signal processing to solve various image processing tasks. Prior to his studies at York, he was a Data Scientist at a major telecom company, where he applied data-driven algorithms to detect network anomalies.
Accepted to ICASSP'21. ArXiv. While deep learning (DL) architectures such as convolutional neural networks (CNNs) have enabled effective solutions for image denoising, their implementations generally rely heavily on training data and require tuning of a large parameter set. DeepGTV is a hybrid design that combines graph signal filtering with deep feature learning. It utilizes interpretable analytical low-pass graph filters and employs 80% fewer network parameters than the state-of-the-art DL denoising scheme DnCNN. Experimental results show that under statistical mismatch, this method outperformed DnCNN by up to 3 dB in PSNR.
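The graph total variation (GTV) prior behind DeepGTV can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the toy path graph, the `mu` and `eps` values, and the iteratively reweighted least squares (IRLS) solver are all illustrative assumptions, chosen because IRLS reduces the non-smooth GTV objective to a sequence of closed-form quadratic (Laplacian-regularized) solves.

```python
import numpy as np

def gtv_denoise(y, W, mu=1.0, eps=1e-3, n_iter=20):
    """Approximately minimize ||x - y||^2 + mu * sum_ij W_ij |x_i - x_j|
    (graph total variation) via iteratively reweighted least squares:
    each pass re-weights every edge by 1 / (|x_i - x_j| + eps) and solves
    the resulting Laplacian-regularized quadratic problem in closed form."""
    n = len(y)
    x = y.astype(float).copy()
    for _ in range(n_iter):
        R = W / (np.abs(x[:, None] - x[None, :]) + eps)  # reweighted edges
        L = np.diag(R.sum(axis=1)) - R                   # reweighted Laplacian
        x = np.linalg.solve(np.eye(n) + mu * L, y)       # low-pass graph filter
    return x

# Toy example: a 4-pixel path graph carrying a noisy, roughly
# piecewise-constant signal (values are made up for illustration).
W = np.zeros((4, 4))
for i in range(3):
    W[i, i + 1] = W[i + 1, i] = 1.0
y = np.array([1.1, 0.9, 1.05, 0.0])
x = gtv_denoise(y, W, mu=0.5)
```

Unlike a quadratic smoothness prior, the absolute-difference penalty tends to smooth flat regions while preserving the sharp jump to the last node, which is why GTV-type priors suit image edges.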
An implementation of DeepGLR in PyTorch. DeepGLR is a denoising method that combines CNNs with Graph Signal Processing. Published in CVPR'19. It outperformed state-of-the-art CNN approaches by a remarkable margin in statistical mismatch cases. Source: ArXiv | IEEE.
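The graph Laplacian regularizer (GLR) at the core of DeepGLR has a clean low-pass interpretation, sketched below in NumPy. The 4-node path graph, `mu`, and the test signal are illustrative assumptions, not values from the paper; the point is that the closed-form GLR solution attenuates each graph frequency lambda by 1 / (1 + mu * lambda).

```python
import numpy as np

# Graph Laplacian regularization  x* = (I + mu*L)^{-1} y  acts as a
# low-pass graph filter: in the Laplacian eigenbasis (the graph Fourier
# domain), frequency lambda is scaled by 1 / (1 + mu * lambda).
mu = 1.0
W = np.zeros((4, 4))
for i in range(3):
    W[i, i + 1] = W[i + 1, i] = 1.0       # 4-node path graph
L = np.diag(W.sum(axis=1)) - W            # combinatorial graph Laplacian
lam, U = np.linalg.eigh(L)                # graph frequencies / Fourier basis
response = 1.0 / (1.0 + mu * lam)         # per-frequency filter response

y = np.array([1.0, -1.0, 1.0, -1.0])      # high-frequency (noise-like) signal
x = U @ (response * (U.T @ y))            # filtering in the spectral domain
# equivalent to solving (I + mu*L) x = y directly in the vertex domain
```

Because the filter response is fixed by the graph and `mu` rather than learned end-to-end, this analytical component is what makes the hybrid design interpretable and parameter-light.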
In this work, the spectral normalization technique is applied to LSGANs to synthesize new color images of motorbikes. The results of this project may help researchers working on traffic recognition systems that need to recognize motorbikes, especially research in Vietnam: motorcycles are an essential part of daily life there, as they are the main mode of transportation.
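The core of spectral normalization is dividing each weight matrix by its largest singular value, estimated cheaply by power iteration, so that each discriminator layer stays roughly 1-Lipschitz and GAN training stabilizes. Below is a minimal NumPy sketch of that estimate; the matrix and iteration count are illustrative assumptions, not the project's training code.

```python
import numpy as np

def spectral_normalize(W, n_iter=30):
    """Return W divided by an estimate of its largest singular value.
    The estimate comes from power iteration on W and W^T, which is the
    mechanism used by spectral normalization to keep each layer of a
    GAN discriminator approximately 1-Lipschitz."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v            # estimated top singular value
    return W / sigma

# Toy weight matrix with largest singular value 3.
W = np.array([[3.0, 0.0],
              [0.0, 1.0]])
W_sn = spectral_normalize(W)
```

In practice PyTorch provides this as `torch.nn.utils.spectral_norm`, which wraps a layer and reuses one power-iteration step per forward pass; combined with the LSGAN least-squares loss, it constrains the discriminator without extra gradient penalties.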