Implementation of our paper "Self-training Sampling with Monolingual Data Uncertainty for Neural Machine Translation" to appear in ACL 2021. [paper]
Self-training has proven effective for improving NMT performance by augmenting model training with synthetic parallel data. The common practice is to construct synthetic data based on a randomly sampled subset of large-scale monolingual data, which we empirically show is sub-optimal. In this work, we propose to improve the sampling procedure by selecting the most informative monolingual sentences to complement the parallel data. To this end, we compute the uncertainty of monolingual sentences using the bilingual dictionary extracted from the parallel data. Intuitively, monolingual sentences with lower uncertainty generally correspond to easy-to-translate patterns which may not provide additional gains. Accordingly, we design an uncertainty-based sampling (UncSamp) strategy to efficiently exploit the monolingual data for self-training, in which monolingual sentences with higher uncertainty would be sampled with higher probability. Experimental results on large-scale WMT English⇒German and English⇒Chinese datasets demonstrate the effectiveness of the proposed method. Extensive analyses provide a deeper understanding of how the proposed method improves the translation performance.
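As a rough illustration of the sampling procedure described above, the sketch below scores each monolingual sentence by the average translation entropy of its words under a bilingual dictionary, then draws sentences with probability proportional to that score so that higher-uncertainty sentences are sampled more often. The dictionary format, the handling of out-of-vocabulary words, and the helper names (`word_entropy`, `sentence_uncertainty`, `sample_monolingual`) are illustrative assumptions, not the exact implementation in this repository.

```python
# Minimal sketch of uncertainty-based sampling (UncSamp), assuming a word-level
# bilingual dictionary with translation probabilities has already been extracted
# from the parallel data (e.g. via word alignment). Names are illustrative.
import math
import random

def word_entropy(translation_probs):
    """Entropy of a source word's translation distribution."""
    return -sum(p * math.log(p) for p in translation_probs.values() if p > 0)

def sentence_uncertainty(sentence, bilingual_dict):
    """Average translation entropy over the words of a monolingual sentence.
    Words missing from the dictionary are skipped (one possible choice)."""
    entropies = [word_entropy(bilingual_dict[w])
                 for w in sentence.split() if w in bilingual_dict]
    return sum(entropies) / len(entropies) if entropies else 0.0

def sample_monolingual(sentences, bilingual_dict, k, seed=0):
    """Draw k monolingual sentences with probability proportional to uncertainty.
    Sampling is with replacement here for brevity; a subset without replacement
    would be the more realistic setting for large-scale monolingual data."""
    random.seed(seed)
    weights = [sentence_uncertainty(s, bilingual_dict) for s in sentences]
    return random.choices(sentences, weights=weights, k=k)

# Toy usage: a tiny dictionary and two monolingual sentences.
bilingual_dict = {
    "bank": {"Bank": 0.5, "Ufer": 0.5},   # ambiguous word -> high entropy
    "the": {"die": 0.9, "der": 0.1},
    "house": {"Haus": 1.0},               # unambiguous word -> zero entropy
}
monolingual = ["the bank", "the house"]
print(sample_monolingual(monolingual, bilingual_dict, k=2))
```

In this toy example, "the bank" receives a higher uncertainty score than "the house" because "bank" has an ambiguous translation distribution, so it is more likely to be selected for self-training.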
We evaluate the proposed UncSamp approach on two high-resource translation tasks. As shown, our Transformer-Big models trained on the authentic parallel data achieve performance competitive with or even better than the submissions to the WMT competitions. On top of these strong baselines, self-training with RandSamp improves performance by +2.0 and +0.9 BLEU points on the En⇒De and En⇒Zh tasks respectively, demonstrating the effectiveness of large-scale self-training for NMT models.
With our UncSamp approach, self-training achieves further significant improvements of +1.1 and +0.6 BLEU points over the random sampling strategy, which demonstrates the effectiveness of exploiting uncertain monolingual sentences.
Further analyses suggest that our UncSamp approach indeed improves the translation quality of high-uncertainty sentences and also benefits the prediction of low-frequency words on the target side.
Please cite our paper if you find it helpful:
@inproceedings{jiao2021self,
    title = {Self-Training Sampling with Monolingual Data Uncertainty for Neural Machine Translation},
    author = {Wenxiang Jiao and Xing Wang and Zhaopeng Tu and Shuming Shi and Michael R. Lyu and Irwin King},
    booktitle = {ACL},
    year = {2021}
}