May 21, 2015

Domain Adaptation for Machine Translation with Instance Selection

Ergun Biçici. Domain Adaptation for Machine Translation with Instance Selection. The Prague Bulletin of Mathematical Linguistics, 103:5-20, 2015. [doi:10.1515/pralin-2015-0001] Keywords: Machine Translation, Machine Learning, Domain Adaptation.

Domain adaptation for machine translation (MT) can be achieved by selecting training instances close to the test set from a larger set of instances. We consider 7 different domain adaptation strategies and answer 7 research questions, which together give us a recipe for domain adaptation in MT. We perform English-to-German statistical MT (SMT) experiments in a setting where test and training sentences can come from different corpora, and one of our goals is to learn the parameters of the sampling process. Domain adaptation with training instance selection can achieve a 22% increase in target 2-gram recall and gain up to 3.55 BLEU points compared with random selection. Domain adaptation with the feature decay algorithm (FDA) not only achieves the highest target 2-gram recall and BLEU performance but also learns the test sample distribution parameter nearly perfectly, with a correlation of 0.99. A Moses SMT system built with 10K training sentences selected by FDA obtains F1 results as good as baselines that use up to 2M sentences, and with 50K FDA-selected sentences it obtains results 1 F1 point better than the baselines.
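To illustrate the idea behind FDA-style instance selection, here is a minimal sketch (not the paper's implementation; the function names, decay rate, and length normalization are assumptions for illustration). Training sentences are scored by how well they cover n-grams from the test set, and each covered n-gram's weight decays as it appears in selected sentences, steering later picks toward still-uncovered n-grams.

```python
# Hedged sketch of FDA-style instance selection for domain adaptation.
# Assumptions (not from the paper): decay factor 0.5, features up to 2-grams,
# length-normalized scores, lazy-greedy selection with a max-heap.
import heapq
from collections import Counter

def ngrams(tokens, max_n=2):
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            yield tuple(tokens[i:i + n])

def fda_select(train_sents, test_sents, k=10000, max_n=2, decay=0.5):
    """Greedily select k training sentences covering test-set n-grams."""
    # Feature weights: n-grams observed in the test set.
    weight = {g: 1.0 for s in test_sents for g in ngrams(s.split(), max_n)}
    count = Counter()  # how many selected sentences already contain each n-gram

    def score(tokens):
        feats = set(ngrams(tokens, max_n))
        # Sum of decayed feature weights, normalized by sentence length.
        return sum(weight.get(g, 0.0) * (decay ** count[g]) for g in feats) / (len(tokens) or 1)

    # Max-heap (negated scores) with lazy re-evaluation of stale entries.
    heap = [(-score(s.split()), i) for i, s in enumerate(train_sents)]
    heapq.heapify(heap)
    selected = []
    while heap and len(selected) < k:
        neg, i = heapq.heappop(heap)
        tokens = train_sents[i].split()
        current = score(tokens)
        if -neg > current + 1e-12:
            # Score became stale after earlier selections; re-push with fresh score.
            heapq.heappush(heap, (-current, i))
            continue
        selected.append(train_sents[i])
        for g in set(ngrams(tokens, max_n)):
            if g in weight:
                count[g] += 1  # decay this n-gram for future selections
    return selected
```

The decay step is what distinguishes this from plain similarity-based selection: once a test-set n-gram has been covered a few times, it contributes little to further scores, so the selected subset spreads its coverage across the test vocabulary rather than repeating the most frequent n-grams.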
