December 13, 2015

Referential translation machines for predicting semantic similarity

Ergun Biçici and Andy Way. Referential translation machines for predicting semantic similarity. Language Resources and Evaluation, pp. 1-27, 2015. ISSN: 1574-020X. [WWW] [doi:10.1007/s10579-015-9322-7]

Referential translation machines (RTMs) are a computational model effective at judging monolingual and bilingual similarity while identifying translation acts between any two data sets with respect to interpretants. RTMs pioneer a language-independent approach to all similarity tasks and remove the need to access any task- or domain-specific information or resource. We use RTMs for predicting the semantic similarity of text and present state-of-the-art results showing that RTMs can achieve better results on the test set than on the training set. RTMs judge the quality or the semantic similarity of texts by using relevant retrieved training data as interpretants for reaching shared semantics. Interpretants are used to derive features measuring the closeness of the test sentences to the training data, the difficulty of translating them, and the presence of the acts of translation, which may ubiquitously be observed in communication. RTMs achieve top performance at SemEval in various semantic similarity prediction tasks as well as similarity prediction tasks in bilingual settings. We define MAER, the mean absolute error relative to the magnitude of the target, and MRAER, the mean absolute error relative to the absolute error of a predictor that always predicts the target mean, assuming that the target mean is known. RTM test performance on various tasks, sorted according to MRAER, can help identify which tasks and subtasks require further design work.
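For concreteness, here is a minimal sketch of how MAER and MRAER might be computed from the prose definitions above. The epsilon floor guarding against division by zero, and taking the ratio per instance before averaging, are our reading of the definitions rather than the paper's exact formulation:

    import numpy as np

    def maer(y_true, y_pred, eps=1e-9):
        """Mean absolute error relative to the magnitude of each target."""
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        return float(np.mean(np.abs(y_pred - y_true) / np.maximum(np.abs(y_true), eps)))

    def mraer(y_true, y_pred, eps=1e-9):
        """Mean absolute error relative to the error of always predicting the target mean."""
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        mean_baseline_err = np.abs(y_true - y_true.mean())
        return float(np.mean(np.abs(y_pred - y_true) / np.maximum(mean_baseline_err, eps)))

Under this reading, an MRAER below 1 means the model is, on average, more accurate than a constant predictor that always outputs the target mean.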

June 22, 2015

RTM-DCU: Predicting Semantic Similarity with Referential Translation Machines

Ergun Biçici. RTM-DCU: Predicting Semantic Similarity with Referential Translation Machines. In SemEval-2015: Semantic Evaluation Exercises - International Workshop on Semantic Evaluation, Denver, Colorado, USA, June 2015. [WWW] Keyword(s): Machine Translation, Machine Learning, Performance Prediction, Semantic Similarity.

We use referential translation machines (RTMs) for predicting the semantic similarity of text. RTMs are a computational model that effectively judges monolingual and bilingual similarity while identifying translation acts between any two data sets with respect to interpretants. RTMs pioneer a language-independent approach to all similarity tasks and remove the need to access any task- or domain-specific information or resource. RTMs ranked 2nd among 13 systems participating in Paraphrase and Semantic Similarity in Twitter, 6th among 16 submissions in Semantic Textual Similarity Spanish, and 50th among 73 submissions in Semantic Textual Similarity English.

RTM-DCU: Referential Translation Machines for Semantic Similarity

Ergun Biçici and Andy Way. RTM-DCU: Referential Translation Machines for Semantic Similarity. In SemEval-2014: Semantic Evaluation Exercises - International Workshop on Semantic Evaluation, Dublin, Ireland, 23-24 August 2014. [PDF] Keyword(s): Machine Translation, Machine Learning, Quality Estimation, Semantic Similarity.

We use referential translation machines (RTMs) for predicting the semantic similarity of text. RTMs are a computational model for identifying the translation acts between any two data sets with respect to interpretants selected in the same domain, which are effective when making monolingual and bilingual similarity judgments. RTMs judge the quality or the semantic similarity of text by using retrieved relevant training data as interpretants for reaching shared semantics. We derive features measuring the closeness of the test sentences to the training data via interpretants, the difficulty of translating them, and the presence of the acts of translation, which may ubiquitously be observed in communication. RTMs provide a language-independent solution to all similarity tasks and achieve top performance when predicting monolingual cross-level semantic similarity (Task 3), with good results on semantic relatedness and entailment (Task 1) and on multilingual semantic textual similarity (STS) (Task 10). RTMs remove the need to access any task- or domain-specific information or resource.

May 21, 2015

Domain Adaptation for Machine Translation with Instance Selection

Ergun Biçici. Domain Adaptation for Machine Translation with Instance Selection. The Prague Bulletin of Mathematical Linguistics, 103:5-20, 2015. [doi:10.1515/pralin-2015-0001] Keyword(s): Machine Translation, Machine Learning, Domain Adaptation.

Domain adaptation for machine translation (MT) can be achieved by selecting training instances close to the test set from a larger set of instances. We consider 7 different domain adaptation strategies and answer 7 research questions, which together give us a recipe for domain adaptation in MT. We perform English-to-German statistical MT (SMT) experiments in a setting where test and training sentences can come from different corpora, and one of our goals is to learn the parameters of the sampling process. Domain adaptation with training instance selection can obtain a 22% increase in target 2-gram recall and can gain up to 3.55 BLEU points compared with random selection. Domain adaptation with the feature decay algorithm (FDA) not only achieves the highest target 2-gram recall and BLEU performance but also learns the test sample distribution parameter almost perfectly, with a correlation of 0.99. A Moses SMT system built with 10K training sentences selected by FDA obtains F1 results as good as baselines that use up to 2M sentences, and a Moses SMT system built with 50K FDA-selected training sentences obtains results 1 F1 point better than those baselines.
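As a concrete illustration of the target 2-gram recall metric used above, here is a minimal sketch that measures the fraction of test-set target bigram types covered by the selected training data. Counting bigram types rather than tokens, and the whitespace tokenization, are our assumptions:

    def ngrams(tokens, n=2):
        """All n-gram types in a token sequence."""
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def target_bigram_recall(test_sents, selected_sents):
        """Fraction of test-set target bigram types found in the selected data."""
        test_bigrams = set().union(*(ngrams(s.split()) for s in test_sents))
        train_bigrams = set().union(*(ngrams(s.split()) for s in selected_sents))
        return len(test_bigrams & train_bigrams) / max(len(test_bigrams), 1)

A selection strategy that raises this recall supplies the decoder with more of the n-grams it needs to reproduce the test set's target side, which is why recall tracks BLEU gains here.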

QuEst for High Quality Machine Translation

Ergun Biçici and Lucia Specia. QuEst for High Quality Machine Translation. The Prague Bulletin of Mathematical Linguistics, 103:43-64, 2015. [doi:10.1515/pralin-2015-0003] Keyword(s): Machine Translation, Machine Learning, Performance Prediction.

In this paper we describe the use of QuEst, a framework for predicting the quality of translations, to improve the performance of machine translation (MT) systems without changing their internal functioning. We apply QuEst to experiments with:

  • multiple system translation ranking, where translations produced by different MT systems are ranked according to their estimated quality, leading to gains of up to 2.72 BLEU, 3.66 BLEUs (sentence-level BLEU), and 2.17 F1 points;
  • n-best list re-ranking, where the n-best list translations produced by an MT system are re-ranked based on predicted quality scores so that the best translation is ranked top, which leads to an improvement of 0.41 points in sentence-level NIST score (see the sketch after this list);
  • n-best list combination, where segments from an n-best list are combined using a lattice-based re-scoring approach that minimizes word error, obtaining gains of 0.28 BLEU points; and
  • the ITERPE strategy, which attempts to identify translation errors regardless of prediction errors (ITERPE) and builds sentence-specific SMT systems (SSSS) on the ITERPE-sorted instances identified as having more potential for improvement, achieving gains of up to 1.43 BLEU, 0.54 F1, 2.9 NIST, 0.64 sentence BLEU, and 4.7 sentence NIST points in English to German over the top 100 ITERPE-sorted instances.
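To make the re-ranking step concrete, the following is a minimal sketch of quality-based n-best re-ranking. Here predict_quality is a hypothetical placeholder for a trained QuEst model, and the toy length-based predictor in the demo is ours, not the framework's:

    def rerank_nbest(nbest, predict_quality):
        """Order an n-best list by predicted quality, best hypothesis first."""
        return sorted(nbest, key=predict_quality, reverse=True)

    # Toy demo: the stand-in predictor scores longer hypotheses higher; in
    # practice the score would come from a trained quality estimation model.
    nbest = ["a translation", "a much better translation", "bad"]
    print(rerank_nbest(nbest, predict_quality=len))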

February 22, 2015

Optimizing Instance Selection for Statistical Machine Translation with Feature Decay Algorithms

Ergun Biçici and Deniz Yuret. Optimizing Instance Selection for Statistical Machine Translation with Feature Decay Algorithms. IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), 2014. [WWW] Keyword(s): Machine Translation, Machine Learning, Artificial Intelligence, Natural Language Processing.

We introduce FDA5 for efficient parameterization, optimization, and implementation of feature decay algorithms (FDA), a class of instance selection algorithms that use feature decay. FDA increases the diversity of the selected training set by devaluing features (i.e. n-grams) that have already been included. FDA5 decides which instances to select based on three functions, used for initializing and decaying feature values and for scaling sentence scores, controlled by 5 parameters. We present optimization techniques that allow FDA5 to adapt these functions to in-domain and out-of-domain translation tasks for different language pairs. In a transductive learning setting, selecting training instances relevant to the test set can improve the final translation quality. In machine translation experiments performed on the 2 million sentence English-German section of the Europarl corpus, we show that a subset of the training set selected by FDA5 can gain up to 3.22 BLEU points compared to a randomly selected subset of the same size, can gain up to 0.41 BLEU points compared to using all of the available training data while using only 15% of it, and can come within 0.5 BLEU points of the full training set result using only 2.7% of the full training data. FDA5 performance peaks at around 8M words, or 15% of the full training set. In an active learning setting, FDA5 minimizes human effort by identifying the most informative sentences for translation, gaining up to 0.45 BLEU points using 3/5 of the available training data compared to using all of it, and 1.12 BLEU points compared to a random training set. In translation tasks involving English and Turkish, a morphologically rich language, FDA5 can gain up to 11.52 BLEU points compared to a randomly selected subset of the same size, can achieve the same BLEU score using as little as 4% of the data compared to random instance selection, and can exceed the full dataset result by 0.78 BLEU points. FDA5 is able to roughly halve the time to build a statistical machine translation system with 1M words, using only 3% of the space for the phrase table and 8% of the overall space compared with a baseline system that uses all of the available training data, while trailing the baseline by only 0.58 BLEU points in out-of-domain translation.
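The following is a simplified sketch of the greedy feature-decay selection loop described above, assuming bigram features, an initial feature value of 1, multiplicative decay, and plain length normalization. FDA5's actual initialization, decay, and scaling functions are parameterized differently (that is what its 5 parameters control), and efficient implementations use a priority queue rather than this quadratic loop:

    def fda_select(pool, test_features, k, decay=0.5):
        """Greedily select k sentences, devaluing features already covered.

        pool: candidate sentences as token lists; test_features: set of
        test-set bigram tuples. Returns the selected sentences in order.
        """
        value = {f: 1.0 for f in test_features}  # init(f) = 1 is an assumption

        def bigrams(sent):
            return [tuple(sent[i:i + 2]) for i in range(len(sent) - 1)]

        def score(sent):
            # Sum of current feature values, normalized by sentence length.
            return sum(value.get(f, 0.0) for f in bigrams(sent)) / max(len(sent), 1)

        pool, selected = list(pool), []
        for _ in range(min(k, len(pool))):
            best = max(pool, key=score)
            pool.remove(best)
            selected.append(best)
            for f in bigrams(best):  # decay the features this selection covers
                if f in value:
                    value[f] *= decay
        return selected

    # Toy usage: after the first pick, its bigrams are devalued, so the
    # second pick favors a sentence covering different test bigrams.
    test_bigrams = {("the", "cat"), ("cat", "sat"), ("cat", "ran")}
    pool = [["the", "cat", "sat"], ["the", "cat", "ran"], ["a", "mat"]]
    print(fda_select(pool, test_bigrams, k=2))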

Referential Translation Machines for Predicting Translation Quality

Ergun Biçici and Andy Way. Referential Translation Machines for Predicting Translation Quality. In Proceedings of the Ninth Workshop on Statistical Machine Translation, Baltimore, USA, June 2014. Association for Computational Linguistics. [PDF] Keyword(s): Machine Translation, Machine Learning, Quality Estimation, Natural Language Processing.

We use referential translation machines (RTMs) for quality estimation of translation outputs. RTMs are a computational model for identifying the translation acts between any two data sets with respect to interpretants selected in the same domain, which are effective when making monolingual and bilingual similarity judgments. RTMs achieve top performance in automatic, accurate, and language-independent prediction of sentence-level and word-level statistical machine translation (SMT) quality. RTMs remove the need to access any SMT-system-specific information or prior knowledge of the training data or models used when generating the translations, and achieved the top performance in the WMT13 quality estimation task (QET13). We improve our RTM models with the Parallel FDA5 instance selection model, with additional features for predicting translation performance, and with improved learning models. We develop RTM models for each WMT14 QET (QET14) subtask, obtain improvements over the QET13 results, and rank 1st in all of the tasks and subtasks of QET14.
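As an illustration of the prediction step, here is a minimal sketch in which a regressor is trained on interpretant-derived features to predict sentence-level quality scores. The synthetic data, the feature dimensionality, and the choice of SVR are our stand-ins, not the paper's actual features or learners:

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    # Stand-ins for interpretant-derived feature vectors (closeness to the
    # training data, translation difficulty, translation-act indicators)
    # and gold sentence-level quality scores.
    X_train, y_train = rng.random((200, 10)), rng.random(200)
    X_test = rng.random((50, 10))

    model = SVR(kernel="rbf")  # the learner choice here is an assumption
    model.fit(X_train, y_train)
    quality_scores = model.predict(X_test)  # predicted quality per test sentence

The key property of this setup is that nothing in it depends on the SMT system that produced the translations: all inputs are derived from the texts and the retrieved interpretants, which is what makes the approach system-independent.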