Document-level Machine Translation For Scientific Texts
Abstract
While neural machine translation has seen significant progress in recent years at the sentence level, translating full documents remains challenging because it is difficult to efficiently incorporate document-level context. Various approaches have been proposed, but most of them consider only one to three previous source and/or target sentences as context. This is not sufficient to faithfully translate some language phenomena, such as lexical consistency and document coherence, especially in scientific texts. In this work, we conducted experiments that include the full document context and, through a context ablation study on abstracts from scientific publications, investigated the impact of all past and future sentences on the source side. Our results show that future source context is more influential than past source context, and that in our experiments the Transformer architecture performs much better at translating the beginning of a long document than the end.
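The abstract does not specify how the document context is fed to the model. A common way to include past and future source sentences is simple concatenation around the current sentence with a separator token. The sketch below is a hypothetical illustration of that setup, assuming a `<sep>` separator and helper `build_inputs`, neither of which is taken from the paper; it also shows how limiting `n_past` / `n_future` would implement the kind of context ablation described above.

```python
# Minimal sketch of source-side context construction for document-level NMT.
# Assumption: a concatenation-based input format with a separator token;
# the paper's actual input construction may differ.

SEP = " <sep> "  # hypothetical separator token


def build_inputs(doc_sentences, n_past=None, n_future=None):
    """Build one contextualized input per sentence of a document.

    n_past / n_future bound how many past / future source sentences are
    included; None means the full document context on that side.
    """
    inputs = []
    for i, sent in enumerate(doc_sentences):
        past = (doc_sentences[:i] if n_past is None
                else doc_sentences[max(0, i - n_past):i])
        future = (doc_sentences[i + 1:] if n_future is None
                  else doc_sentences[i + 1:i + 1 + n_future])
        inputs.append(SEP.join(past + [sent] + future))
    return inputs


# Example ablation settings: full context vs. past-only vs. future-only.
doc = ["Sentence one.", "Sentence two.", "Sentence three."]
full_ctx = build_inputs(doc)               # all past + all future sentences
past_only = build_inputs(doc, n_future=0)  # ablate future source context
future_only = build_inputs(doc, n_past=0)  # ablate past source context
```

Under this scheme, comparing a model trained on `past_only` inputs against one trained on `future_only` inputs is one way to measure the relative influence of past versus future source context.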