I recently attended ACL 2019. In this post I write about the most remarkable work presented there and at the co-located events, from the point of view of a neural machine translation researcher. These are just my opinions; feel free to disagree.
If, after reading this post, you want to know more about what happened at ACL 2019, I recommend searching for the hashtag #acl2019nlp on Twitter, as there was plenty of live tweeting.
The ACL main conference lasted three days, preceded by a day devoted to tutorials. After the main conference there were two days of co-located events and workshops, including the Conference on Machine Translation (WMT).
Cross-lingual Representations
Cross-lingual embedding mappings are gaining importance, especially in their unsupervised variant, maybe because they are the cornerstone of unsupervised NMT. The Unsupervised Cross-Lingual Representation Learning tutorial (slides) by Sebastian Ruder, Anders Søgaard and Ivan Vulić provided a very complete overview of the field. Oral session 7C and poster session 4B were entirely devoted to multilinguality, while oral session 4A (machine translation) also had some unsupervised NMT papers, including a nice and simple approach based on word-by-word translation. I suggest you have a look at the papers from those sessions (see the program).
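To make "embedding mapping" concrete, here is a minimal sketch of the classic supervised recipe that the unsupervised methods build on: learn an orthogonal map between two monolingual embedding spaces from a seed dictionary (the Procrustes solution), then translate word by word via nearest-neighbour search. The function names, toy inputs and cosine retrieval are my own illustration, not code from any of the papers mentioned above.

```python
import numpy as np

def procrustes_mapping(X_src, Y_tgt):
    """Learn an orthogonal W minimizing ||X_src @ W - Y_tgt||_F.
    Rows i of X_src and Y_tgt are the embeddings of the i-th
    seed-dictionary pair (source word, target word)."""
    # Closed-form orthogonal Procrustes solution via SVD.
    U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
    return U @ Vt  # shape (dim, dim)

def translate_word(word, src_emb, tgt_emb, tgt_vocab, W):
    """Word-by-word translation: map the source vector into the target
    space and return the nearest target word by cosine similarity."""
    v = src_emb[word] @ W
    T = np.stack([tgt_emb[w] for w in tgt_vocab])
    sims = (T @ v) / (np.linalg.norm(T, axis=1) * np.linalg.norm(v) + 1e-9)
    return tgt_vocab[int(np.argmax(sims))]
```

In the unsupervised setting, the seed dictionary itself is induced (for instance adversarially or from similarity distributions) and the mapping is refined iteratively rather than learned in one shot.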
BERT
BERT was ubiquitous at ACL and the co-located workshops. On the one hand, there were a lot of submissions that built on top of BERT to address other tasks or setups. On the other hand, there were a lot of “BERTology” submissions, at least 9 counting those from the main conference and those from the BlackboxNLP workshop. BERTology is the term used for papers that study the properties of BERT, usually by probing its internal representations to understand how well they capture morphological, syntactic or other types of information.
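To give a flavour of what such a probing experiment looks like, here is a hedged sketch: freeze BERT, extract the hidden states of one layer with the Hugging Face transformers library (assuming a recent version of its API), and fit a simple linear classifier to predict part-of-speech tags from them. The layer choice, the token_vectors helper and the toy annotated sentences are mine, purely for illustration.

```python
import torch
from transformers import BertTokenizerFast, BertModel
from sklearn.linear_model import LogisticRegression

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
bert = BertModel.from_pretrained("bert-base-cased", output_hidden_states=True)
bert.eval()

def token_vectors(sentence, layer):
    """Return the hidden states of one BERT layer for every word piece."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc)
    # hidden_states[0] is the embedding layer, 1..12 the transformer layers.
    return out.hidden_states[layer][0], enc.tokens()

# Toy probing data: per-word POS labels would normally come from a treebank;
# these two sentences and their tags are invented for illustration.
sentences = ["The cat is small", "A dog is big"]
labels = [["DET", "NOUN", "VERB", "ADJ"], ["DET", "NOUN", "VERB", "ADJ"]]

X, y = [], []
for sent, tags in zip(sentences, labels):
    vecs, toks = token_vectors(sent, layer=8)
    # Skip [CLS] and [SEP]; assume one word piece per word in this toy data.
    for vec, tag in zip(vecs[1:-1], tags):
        X.append(vec.numpy())
        y.append(tag)

probe = LogisticRegression(max_iter=1000).fit(X, y)
print("probe accuracy on its own training data:", probe.score(X, y))
```

A real probing study would of course use a large annotated corpus, a held-out test set and control baselines, but the recipe is essentially this.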
Document-level MT
Document-level NMT was one of the new aspects of the WMT news translation shared task: in this year’s data, sentences were grouped into their originating documents. Many participants ignored this information, but it was very profitable for others, like Microsoft’s submission (poster), which used full documents as translation units. Other interesting shared tasks at WMT included the Automatic Post-Editing task and Parallel Corpus Filtering for Low-Resource Conditions.
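For intuition, using documents as translation units can, on the data side, be as simple as concatenating consecutive sentences of the same document up to a length budget before feeding them to an otherwise standard NMT system. The sketch below shows only that preprocessing step; the function name, the <SEP> marker and the token budget are my own choices, not the exact recipe of the Microsoft system.

```python
def build_document_units(doc_sentences, max_tokens=250, sep=" <SEP> "):
    """Group consecutive sentences of one document into translation units
    that stay under a rough token budget, joining them with a separator
    so sentence boundaries can be recovered after translation."""
    units, current, current_len = [], [], 0
    for sent in doc_sentences:
        n = len(sent.split())
        if current and current_len + n > max_tokens:
            units.append(sep.join(current))
            current, current_len = [], 0
        current.append(sent)
        current_len += n
    if current:
        units.append(sep.join(current))
    return units

doc = ["The parliament met on Tuesday.",
       "It discussed the new budget.",
       "A vote is expected next week."]
print(build_document_units(doc, max_tokens=12))
```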
Translationese, back-translation
Awareness of the effects of “translationese” increased a lot this year at WMT. For one, the news shared task test sets were created so that the reference translations are always in the same direction as the MT systems are run, whereas in previous years the test set was 50% in each translation direction. There were also a few papers at WMT studying the effect of back-translation. Speaking of which, back-translation was very present in the WMT research track. I especially liked the Tagged Back-Translation paper, which reinterpreted some widely accepted beliefs about back-translation.
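The core idea of tagged back-translation is small enough to show in a few lines: prepend a reserved token to the synthetic source side of every back-translated pair, so the model can tell synthetic from genuine parallel data during training. This is just my minimal sketch of that data-preparation step; the <BT> string and the function name are my own notation.

```python
BT_TAG = "<BT>"  # reserved token; it must also be added to the source vocabulary

def build_training_corpus(real_pairs, synthetic_pairs):
    """Mix genuine (src, tgt) pairs with back-translated ones, tagging the
    synthetic source sentences so the model can tell the two apart."""
    tagged = [(f"{BT_TAG} {src}", tgt) for src, tgt in synthetic_pairs]
    return real_pairs + tagged

# The synthetic source side comes from translating target-language
# monolingual text back into the source language with a reverse model.
real = [("ein kleines Haus", "a small house")]
synthetic = [("ein großes Haus", "a big house")]
print(build_training_corpus(real, synthetic))
```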
Talks I would have loved to attend but couldn’t
Modeling Output Spaces in Continuous-Output Language Generation, by Yulia Tsvetkov, at the Workshop on Representation Learning for NLP (RepL4NLP). Fortunately, here and here are Twitter threads with summaries of the talk.
The Curious Case of Degenerate Neural Conversation, by Yejin Choi, at the NLP for Conversational AI workshop. There are some tweets about it, though (1, 2, 3).
Summary highlights
- (Unsupervised) cross-lingual embedding mappings are consolidating as a mainstream research area.
- Lots of papers studying the properties of BERT or building on top of it.
- Document-level MT is gaining momentum.
- Translationese exists and has real effects on MT systems.
- A step further towards better understanding back-translation.

ACL 2019 was a great conference. See you at the next one!