December 13, 2016

Predicting the Performance of Parsing with Referential Translation Machines

Ergun Biçici. Predicting the Performance of Parsing with Referential Translation Machines. The Prague Bulletin of Mathematical Linguistics, 106:31-44, 2016. [doi: https://doi.org/10.1515/pralin-2016-0010] Keywords: referential translation machines, parsing, machine translation performance prediction.


Referential translation machines (RTMs) are prediction engines for predicting the performance of natural language processing tasks, including parsing, machine translation, and semantic similarity, pioneering language, task, and domain independence. RTM results for predicting the performance of parsing (PPP) in out-of-domain or in-domain settings, with different training sets and types of features, are independent of the language or the parser. RTM PPP models can be used with only text input, without parsing and without any parser- or language-dependent information. Our results detail the prediction performance, the top selected features, and a lower bound on the prediction error of PPP.

August 16, 2016

Trade in archeological artifacts and antiques is still taking place in regions affected by war and economic distress...

Archeological artifacts and antiques are still being traded, stolen, and looted at an alarming rate amid the increased instability in the Mediterranean. Here is a table sorted by the average over the last 5 years (data from http://www.trademap.org/)...

July 19, 2016

I read the book Türklerin Tarihi from cover to cover ("hatmettim")...

I read the book Türklerin Tarihi from cover to cover ("hatmettim"):
hatim, -tmi n. Ar.
    1. obs. Bringing to an end, finishing.
    2. Reading the Quran from beginning to end.

http://www.dildernegi.org.tr/TR,274/turkce-sozluk-ara-bul.html


https://www.facebook.com/T%C3%BCrklerin-Tarihi-%C4%B0lber-Ortayl%C4%B1-460664594092691/

İlber Ortaylı's delivery has the feel of an intellectual conversation with him at a café... drawing on his deep knowledge, he presents hundreds of years of history in a gentle yet persuasive narrative...

July 10, 2016

ParFDA for Instance Selection for Statistical Machine Translation

Ergun Biçici. ParFDA for Instance Selection for Statistical Machine Translation. In Proc. of the First Conference on Statistical Machine Translation (WMT16), Berlin, Germany, August 2016. Association for Computational Linguistics. [WWW] Keyword(s): Machine Translation, Machine Learning, Language Modeling.

We build parallel feature decay algorithms (ParFDA) Moses statistical machine translation (SMT) systems for all language pairs in the translation task at the First Conference on Statistical Machine Translation (WMT16). ParFDA obtains results close to the top constrained phrase-based SMT, with an average difference of 2.52 BLEU points, using significantly less computation to build the SMT systems than would be spent using all available corpora. We obtain BLEU bounds based on target coverage and show that ParFDA results can be improved by 12.6 BLEU points on average. Similar bounds show that the top constrained SMT results at WMT16 can be improved by 8 BLEU points on average, while the German to English and Romanian to English translation results are already close to the bounds.

Referential Translation Machines for Predicting Translation Quality and Related Statistics

Ergun Biçici. Referential Translation Machines for Predicting Translation Quality and Related Statistics. In Proc. of the First Conference on Statistical Machine Translation (WMT16), Berlin, Germany, August 2016. Association for Computational Linguistics. [WWW] Keyword(s): Machine Translation, Machine Learning, Performance Prediction.

Referential translation machines (RTMs) pioneer a language-independent approach for predicting translation performance and to all similarity tasks, with top performance in both bilingual and monolingual settings, and remove the need to access any task- or domain-specific information or resource. RTMs rank 1st in document-level, 4th in sentence-level according to mean absolute error, and 4th in phrase-level prediction of translation quality in the quality estimation task.

RTM at SemEval-2016 Task 1: Predicting Semantic Similarity with Referential Translation Machines and Related Statistics

Ergun Biçici. RTM at SemEval-2016 Task 1: Predicting Semantic Similarity with Referential Translation Machines and Related Statistics. In SemEval-2016: Semantic Evaluation Exercises - International Workshop on Semantic Evaluation, San Diego, USA, June 2016. [WWW] Keyword(s): Machine Translation, Machine Learning, Performance Prediction, Semantic Similarity.

We use referential translation machines (RTMs) for predicting the semantic similarity of text in both STS Core and Cross-lingual STS. RTMs pioneer a language-independent approach to all similarity tasks and remove the need to access any task- or domain-specific information or resource. RTMs rank 14th out of 26 submissions in Cross-lingual STS. We also present rankings of various prediction tasks using the performance of RTM in terms of MRAER, a normalized relative absolute error metric.

March 11, 2016

Economic Model based on Carbon Emission


Bill Gates uses the following equation to explain CO2 emissions (https://www.gatesnotes.com/2016-Annual-Letter):

P x S x E x C = CO2

where P is the population, S is the services used by each, E is the energy needed by each, and C is the carbon emission by each.

The P x S x E x C = CO2 equation can be used to estimate the cost of possible developmental scenarios.
Maybe we can reverse engineer some of the factors and then look again to identify where to reduce, with a cost model per country, for instance. So the cost model is:

=> P x S x E x C = CO2 - [CO2 that is recycled]

--> P is not much regulated by governments, so it can be treated as a constant and later used to calculate the load per person.

=> S2 x E2 x C2 = (CO2 - [CO2 that is recycled])

--> We can think of the scenarios that can be possible by diversifying S2, E2, and C2.
--> There is the carbon tax discussion: https://en.wikipedia.org/wiki/Carbon_tax. If we can quantify the cost per unit of CO2, this can be used as an economic model for valuing scenarios and services. So, with an economic model like this, people living in the rain forests of Brazil are likely to get gadgets that emit CO2 more cheaply.
--> Countries that improve on CO2 recycling techniques may start to get some items for cheaper.
--> People living in the desert may be at a disadvantage.
--> If potential environmental hazards are also included in this model, such as possible waste and its associated recycling costs... this economic model may be more realistic.
--> The cost of a service purchased in country A is calculated by using A's CO2 model and the producer country's CO2 model.
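The scenario comparison above can be sketched numerically. A minimal Python sketch, where all per-country figures and the carbon price are made-up illustrative assumptions, not real data:

```python
# Minimal sketch of the P x S x E x C = CO2 scenario cost model.
# All numbers below are illustrative assumptions, not real data.

def co2_emissions(P, S, E, C, recycled=0.0):
    """Net CO2 = P * S * E * C minus the CO2 that is recycled."""
    return P * S * E * C - recycled

def scenario_cost(P, S, E, C, recycled=0.0, carbon_price=50.0):
    """Value a scenario by pricing its net emissions
    with a hypothetical carbon tax (price per ton of CO2)."""
    return co2_emissions(P, S, E, C, recycled) * carbon_price

# Baseline scenario for a hypothetical country:
# P is held constant; S, E, C describe current services/energy/carbon.
base = scenario_cost(P=10e6, S=100, E=2.0, C=0.5)

# Diversified scenario (S2, E2, C2): fewer services, cleaner
# energy mix, plus some CO2 recycled back out of the total.
improved = scenario_cost(P=10e6, S=80, E=2.0, C=0.4, recycled=1e8)

print(base, improved, base > improved)
```

Holding P constant (as noted above) lets the comparison focus on the S2 x E2 x C2 scenarios, and the recycled term rewards countries that improve CO2 recycling techniques.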

ParFDA for Fast Deployment of Accurate Statistical Machine Translation Systems, Benchmarks, and Statistics

Ergun Biçici, Qun Liu, and Andy Way. ParFDA for Fast Deployment of Accurate Statistical Machine Translation Systems, Benchmarks, and Statistics. In Proceedings of the EMNLP 2015 Tenth Workshop on Statistical Machine Translation, Lisbon, Portugal, September 2015. Association for Computational Linguistics. [WWW] Keyword(s): Machine Translation, Machine Learning, Language Modeling.

We build parallel FDA5 (ParFDA) Moses statistical machine translation (SMT) systems for all language pairs in the translation task of the Workshop on Statistical Machine Translation (WMT15) and obtain results close to the top, with an average difference of 3.176 BLEU points, using significantly fewer resources to build the SMT systems. ParFDA is a parallel implementation of feature decay algorithms (FDA) developed for fast deployment of accurate SMT systems. The ParFDA Moses SMT system we built obtains the top TER performance in French to English translation. We make the data for building ParFDA Moses SMT systems for WMT15 available at https://github.com/bicici/ParFDAWMT15.

Referential Translation Machines for Predicting Translation Quality and Related Statistics

Ergun Biçici, Qun Liu, and Andy Way. Referential Translation Machines for Predicting Translation Quality and Related Statistics. In Proceedings of the EMNLP 2015 Tenth Workshop on Statistical Machine Translation, Lisbon, Portugal, September 2015. Association for Computational Linguistics. [WWW] Keyword(s): Machine Translation, Machine Learning, Performance Prediction.

We use referential translation machines (RTMs) for predicting translation performance. RTMs pioneer a language-independent approach to all similarity tasks and remove the need to access any task- or domain-specific information or resource. We improve our RTM models with the ParFDA instance selection model, with additional features for predicting translation performance, and with improved learning models. We develop RTM models for each WMT15 QET (QET15) subtask and obtain improvements over the QET14 results. RTMs achieve top performance in QET15, ranking 1st in the document- and sentence-level prediction tasks and 2nd in the word-level prediction task.