Selecting Backtranslated Data from Multiple Sources for Improved Neural Machine Translation
Machine translation (MT) has benefited from using synthetic training data originating from translating monolingual corpora, a technique known as backtranslation. Combining backtranslated data from different sources has led to better results than when using such data in isolation. In this work we analyse the impact that data translated with rule-based, phrase-based statistical and neural MT systems has on new MT systems. We use a real-world low-resource use-case (Basque-to-Spanish in the clinical domain) as well as a high-resource language pair (German-to-English) to test different scenarios with backtranslation and employ data selection to optimise the synthetic corpora. We exploit different data selection strategies in order to reduce the amount of data used, while at the same time maintaining high-quality MT systems. We further tune the data selection method by taking into account the quality of the MT systems used for backtranslation and lexical diversity of the resulting corpora. Our experiments show that incorporating backtranslated data from different sources can be beneficial, and that availing of data selection can yield improved performance.
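As a rough illustration of the kind of data selection described in the abstract, the sketch below ranks backtranslated sentence pairs pooled from several systems by a score that combines the quality of the system that produced them with a simple lexical-diversity proxy (type-token ratio), and keeps only the highest-scoring pairs. This is a minimal Python sketch under our own assumptions, not the paper's actual selection method: the function names (`score_pair`, `select_backtranslated`), the weighting parameter `alpha`, and the toy quality scores are all hypothetical.

```python
def type_token_ratio(sentence):
    """Lexical-diversity proxy: unique tokens divided by total tokens."""
    tokens = sentence.split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)


def score_pair(synthetic_source, system_quality, alpha=0.5):
    """Blend the quality of the backtranslation system (e.g. its BLEU on a
    dev set, rescaled to [0, 1]) with the lexical diversity of the synthetic
    source sentence. `alpha` weights the two criteria."""
    return alpha * system_quality + (1.0 - alpha) * type_token_ratio(synthetic_source)


def select_backtranslated(pairs, budget):
    """Rank (synthetic_source, target, system_quality) triples pooled from
    several backtranslation systems (e.g. RBMT, SMT, NMT) and keep the
    `budget` highest-scoring sentence pairs."""
    ranked = sorted(pairs, key=lambda p: score_pair(p[0], p[2]), reverse=True)
    return [(src, tgt) for src, tgt, _ in ranked[:budget]]


if __name__ == "__main__":
    # Toy pool: each triple holds a backtranslated source sentence, the
    # authentic target sentence, and a hypothetical quality score for the
    # system that produced the backtranslation.
    pool = [
        ("bt source from nmt system a b c d", "target sentence one", 0.35),
        ("bt source from smt system a a a a", "target sentence two", 0.28),
        ("bt source from rbmt system a b c", "target sentence three", 0.22),
    ]
    for src, tgt in select_backtranslated(pool, budget=2):
        print(src, "=>", tgt)
```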
Authors:
Xabier Soto, Dimitar Shterionov, Alberto Poncelas, Andy Way
Year:
2020
Article reference:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 3898–3908.
Conference rating:
- SCIE Class 1