
When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation?

This page describes the code and the TED talks dataset used for the experiments in the above paper.

The content can also be found at https://github.com/neulab/word-embeddings-for-nmt.

Contents

Software:

We used XNMT (https://github.com/neulab/xnmt) at commit 38044b3 for all experiments.

Experiments:
Data Processing:

For our experiments, we collected (in early 2017) a corpus of TED talks that have been translated into many low-resource languages. Under TED's Open Translation Project, transcripts are available for more than 2,400 talks in 109 languages. The figure below shows a histogram of the total number of talks per language (identified by its ISO code) in the original dataset.

[Figure: TED talks statistics (number of talks per language)]

To obtain a parallel corpus, we preprocessed the dataset with the Moses tokenizer and used hard punctuation symbols to identify valid sentence boundaries in English. To create the train, dev, and test sets, we applied a greedy selection algorithm based on the popularity of the talks, assigning disjoint talks to each split, and kept only talks with translations in more than 50 languages. Finally, we selected 60 languages that had sufficient data for meaningful experiments. The train, dev, and test splits for the most common talks are shown in the table accompanying the figure above.
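The greedy split procedure is only described at a high level above; the following is a minimal sketch of one plausible variant, assuming each talk is represented by the set of language codes it has been translated into and that "popularity" is approximated by translation coverage. The function name, split sizes, and data structure are illustrative and not taken from the released code.

from typing import Dict, List, Set, Tuple

def split_talks(talk_langs: Dict[str, Set[str]],
                min_langs: int = 50,
                n_dev: int = 10,
                n_test: int = 10) -> Tuple[List[str], List[str], List[str]]:
    """Greedily assign disjoint talks to train/dev/test splits.

    talk_langs maps a talk id to the set of ISO language codes it has been
    translated into; n_dev and n_test are illustrative split sizes.
    """
    # Keep only talks translated into more than `min_langs` languages.
    eligible = [t for t, langs in talk_langs.items() if len(langs) > min_langs]
    # Most widely translated ("popular") talks first.
    eligible.sort(key=lambda t: len(talk_langs[t]), reverse=True)
    dev = eligible[:n_dev]
    test = eligible[n_dev:n_dev + n_test]
    train = eligible[n_dev + n_test:]
    return train, dev, test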

  • The train, dev, and test splits for the above TED talks: ted_talks.tar.gz.
  • ted_reader.py is a sample Python script for reading the TED talks data; a usage example is given in its "main" block (see also the reading sketch below).
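ted_reader.py is the authoritative way to load the data; purely as an illustration of how such a reader might look, the sketch below assumes each split is a tab-separated file with a header row of language codes and one sentence per cell, with missing translations left empty. The file name, column layout, and function name are assumptions rather than the released interface.

import csv

def iter_sentence_pairs(tsv_path: str, src_lang: str, tgt_lang: str):
    """Yield (source, target) sentence pairs for one language pair.

    Assumes a tab-separated file whose header row lists language codes and
    whose cells hold one sentence per segment (empty if untranslated).
    """
    with open(tsv_path, encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            src, tgt = row.get(src_lang, ""), row.get(tgt_lang, "")
            if src and tgt:
                yield src, tgt

# Hypothetical usage (file name is illustrative; see ted_reader.py):
# for src, tgt in iter_sentence_pairs("all_talks_train.tsv", "gl", "en"):
#     print(src, "|||", tgt)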

If you use the dataset or code, please consider citing the paper using the following BibTeX entry:

BibTeX

@inproceedings{Ye2018WordEmbeddings,
  author    = {Qi, Ye and Sachan, Devendra and Felix, Matthieu and Padmanabhan, Sarguna and Neubig, Graham},
  title     = {When and Why Are Pre-trained Word Embeddings Useful for Neural Machine Translation?},
  booktitle = {HLT-NAACL},
  year      = {2018},
}
