🐨 The Joey NMT framework is developed for educational purposes. It aims to be a clean and minimalistic code base to help novices find fast answers to the following questions:
- ❓ How to implement classic NMT architectures (RNN and Transformer) in PyTorch?
- ❓ What are the building blocks of these architectures and how do they interact?
- ❓ How to modify these blocks (e.g. deeper, wider, ...)?
- ❓ How to modify the training procedure (e.g. add a regularizer)?
In contrast to other NMT frameworks, we will not aim for the most recent features or for speed through engineering or training tricks, since this often goes hand in hand with an increase in code complexity and a decrease in readability.
However, Joey NMT re-implements baselines from major publications.
Check out the detailed documentation and our paper.
Joey NMT was initially developed and is maintained by Jasmijn Bastings (University of Amsterdam) and Julia Kreutzer (Heidelberg University), now both at Google Research. Mayumi Ohta at Heidelberg University is continuing the legacy.
Welcome to our new contributors!
Joey NMT implements the following features (aka the minimalist toolkit of NMT):
- Recurrent Encoder-Decoder with GRUs or LSTMs
- Transformer Encoder-Decoder
- Attention Types: MLP, Dot, Multi-Head, Bilinear
- Word-, BPE- and character-based input handling
- BLEU, ChrF evaluation
- Beam search with length penalty and greedy decoding
- Customizable initialization
- Attention visualization
- Learning curve plotting
In order to keep the code clean and readable, we make use of:
- Style checks: pylint with (mostly) PEP8 conventions, see `.pylintrc`.
- Typing: Every function has documented input types.
- Docstrings: Every function, class and module has docstrings describing their purpose and usage.
- Unittests: Every module has unit tests, defined in `test/unit/`. Travis CI runs the tests and pylint on every push to ensure the repository stays clean.
Joey NMT is built on PyTorch and torchtext for Python >= 3.5.

A. Now also directly with pip!

```bash
pip install joeynmt
```

If you want to use GPUs, additionally install a CUDA-enabled PyTorch build, e.g. for CUDA v10.1:

```bash
pip install torch==1.8.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
```

You'll need this in particular when working on Google Colab.
B. From source
- Clone this repository: `git clone https://github.com/joeynmt/joeynmt.git`
- Install joeynmt and its requirements: `cd joeynmt && pip3 install .` (you might want to add `--user` for a local installation).
- Run the unit tests: `python3 -m unittest`
Warning! When running on GPU, you need to manually install the PyTorch version (1.8.0) that matches your CUDA version. This is described in the PyTorch installation instructions.
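For example, on a machine with CUDA v10.2 the matching wheel could be installed as sketched below; the `+cu102` tag is an assumption, so pick the tag that corresponds to your local CUDA toolkit:

```bash
# Example only: replace +cu102 with the tag matching your CUDA installation.
pip install torch==1.8.0+cu102 -f https://download.pytorch.org/whl/torch_stable.html
```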
For details, follow the tutorial in the docs.
For training a translation model, you need parallel data, i.e. a collection of source sentences and reference translations that are aligned sentence-by-sentence and stored in two files, such that each line in the reference file is the translation of the same line in the source file.
Before training a model on it, parallel data is most commonly filtered by length ratio, tokenized and true- or lowercased. The Moses toolkit provides a set of useful scripts for this purpose. In addition, you might want to build the NMT model not on the basis of words, but rather sub-words or characters (the `level` in JoeyNMT configurations). Currently, JoeyNMT supports byte-pair encoding (BPE) as produced by subword-nmt or sentencepiece.
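As an illustration, a typical preprocessing pipeline with the Moses scripts and subword-nmt might look like the sketch below. The file names, the Moses path and the number of BPE merges are assumptions for this example, not part of Joey NMT:

```bash
# Assumed inputs: train.de / train.en are the raw parallel training files.
MOSES=/path/to/mosesdecoder/scripts

# Tokenize both sides with the Moses tokenizer.
$MOSES/tokenizer/tokenizer.perl -l de < train.de > train.tok.de
$MOSES/tokenizer/tokenizer.perl -l en < train.en > train.tok.en

# Drop sentence pairs that are empty, too long, or have an extreme length ratio.
$MOSES/training/clean-corpus-n.perl train.tok de en train.clean 1 80

# Learn a joint BPE model (32k merges is just an example) and apply it to both sides.
cat train.clean.de train.clean.en | subword-nmt learn-bpe -s 32000 > bpe.codes
subword-nmt apply-bpe -c bpe.codes < train.clean.de > train.bpe.de
subword-nmt apply-bpe -c bpe.codes < train.clean.en > train.bpe.en
```

The same tokenization and BPE segmentation then have to be applied to the dev and test files as well.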
Experiments are specified in configuration files, in simple YAML format. You can find examples in the `configs` directory. `small.yaml` contains a detailed explanation of configuration options. Most importantly, the configuration contains the description of the model architecture (e.g. number of hidden units in the encoder RNN), paths to the training, development and test data, and the training hyperparameters (learning rate, validation frequency, etc.).
For training, run

```bash
python3 -m joeynmt train configs/small.yaml
```

This will train a model on the training data specified in the config (here: `small.yaml`), validate on the validation data, and store model parameters, vocabularies, validation outputs and a small number of attention plots in the `model_dir` (also specified in the config). Note that pre-processing like tokenization or BPE-ing is not included in training, but has to be done manually beforehand.

Tip: Be careful not to overwrite models; set `overwrite: False` in the model configuration.
The `validations.txt` file in the model directory reports the validation results at every validation point. Models are saved whenever a new best validation score is reached, in `batch_no.ckpt`, where `batch_no` is the number of batches the model has been trained on so far. `best.ckpt` links to the checkpoint that has so far achieved the best validation score.
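For example, you can follow training progress from the shell like this (here `models/small_model` is only a placeholder for the `model_dir` set in your config):

```bash
# "models/small_model" is a placeholder; use the model_dir from your config.
tail -n 5 models/small_model/validations.txt   # most recent validation results
ls -lh models/small_model/*.ckpt               # saved checkpoints, incl. best.ckpt
```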
JoeyNMT uses Tensorboard to visualize training and validation curves and attention matrices during training. Launch Tensorboard with `tensorboard --logdir model_dir/tensorboard` (or `python -m tensorboard.main ...`) and then open the URL (default: `localhost:6006`) with a browser.

For a stand-alone plot, run `python3 scripts/plot_validation.py model_dir --plot_values bleu PPL --output_path my_plot.pdf` to plot curves of validation BLEU and PPL.
For training on a GPU, set `use_cuda` in the config file to `True`. This requires the appropriate CUDA libraries to be installed (see the installation notes above).
There are three options for testing what the model has learned.
Whatever data you feed the model for translating, make sure it is properly pre-processed, just as you pre-processed the training data, e.g. tokenized and split into subwords (if working with BPEs).
For testing and evaluating on your parallel test/dev set, run

```bash
python3 -m joeynmt test configs/small.yaml --output_path out
```

This will generate translations for the validation and test sets (as specified in the configuration) in `out.[dev|test]` with the latest/best model in the `model_dir` (or a specific checkpoint set with `load_model`). It will also evaluate the outputs with `eval_metric`. If `--output_path` is not specified, the translations are not stored; only the evaluation is run and the results are printed.
In order to translate the contents of a file not contained in the configuration (here `my_input.txt`), simply run

```bash
python3 -m joeynmt translate configs/small.yaml < my_input.txt > out
```

The translations will be written to stdout, or to `--output_path` if specified.
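As noted above, such input has to be pre-processed exactly like the training data. A full pipeline might look like the following sketch, assuming the Moses path and the `bpe.codes` file from the preprocessing example above:

```bash
# Tokenize and BPE-segment the input the same way as the training data,
# then pipe it through the trained model.
MOSES=/path/to/mosesdecoder/scripts
$MOSES/tokenizer/tokenizer.perl -l de < my_input.txt \
  | subword-nmt apply-bpe -c bpe.codes \
  | python3 -m joeynmt translate configs/small.yaml > out
```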
If you just want to try a few examples, run

```bash
python3 -m joeynmt translate configs/small.yaml
```

and you'll be prompted to type input sentences that JoeyNMT will then translate with the model specified in the configuration.
- The docs include an overview of the NMT implementation, a walk-through tutorial for building, training, tuning, testing and inspecting an NMT system, the API documentation and FAQs.
- A screencast of the tutorial is available on YouTube.
- Jade Abbott wrote a notebook that runs on Colab and shows how to prepare data, train and evaluate a model, using the example of low-resource African languages.
- Matthias Müller wrote a collection of scripts for installation, data download and preparation, model training and evaluation.
Benchmark results on WMT and IWSLT datasets are reported here. Please also check the Masakhane MT repository for benchmarks and available models for African languages.
Pre-trained models from reported benchmarks for download (contains config, vocabularies, best checkpoint and dev/test hypotheses):
Pre-processing with Moses decoder tools as in this script.
- IWSLT14 de-en BPE RNN (641M)
- IWSLT14 de-en Transformer (210M)
The data came preprocessed from Stanford NLP, see this script.
- IWSLT15 en-vi Transformer (186M)
Following the pre-processing of the Sockeye paper.
- WMT17 en-de "best" RNN (2G)
- WMT17 lv-en "best" RNN (1.9G)
- WMT17 en-de Transformer (664M)
- WMT17 lv-en Transformer (650M)
Training with data provided in the Ukuxhumana project, with additional tokenization of the training data with the Moses tokenizer.
- Autshumato en-af small Transformer (147M)
- Autshumato af-en small Transformer (147M)
- Autshumato en-nso small Transformer (147M)
- Autshumato nso-en small Transformer (147M)
- Autshumato en-tn small Transformer (319M)
- Autshumato tn-en small Transformer (321M)
- Autshumato en-ts small Transformer (229M)
- Autshumato ts-en small Transformer (229M)
- Autshumato en-zu small Transformer (147M)
- Autshumato zu-en small Transformer (147M)
If you trained JoeyNMT on your own data and would like to share it, please email us so we can add it to the collection of pre-trained models.
Since this codebase is supposed to stay clean and minimalistic, contributions addressing the following are welcome:
- code correctness
- code cleanliness
- documentation quality
- speed or memory improvements
- resolving issues
- providing pre-trained models
Code extending the functionality beyond the basics will most likely not end up in the master branch, but we're curious to learn what you used Joey NMT for.
Here we'll collect projects and repositories that are based on Joey NMT, so you can find inspiration and examples on how to modify and extend the code.
- Masakhane Web. @CateGitau, @Kabongosalomon, @vukosim and team built a whole web translation platform for the African NMT models that Masakhane built with Joey NMT. The best part: it's completely open-source, so anyone can contribute new models or features. Try it out here, and check out the code.
- MutNMT. @sjarmero created a web application to train NMT: it lets the user train, inspect, evaluate and translate with Joey NMT --- perfect for NMT newbies! Code here. The tool was developed by Prompsit in the framework of the European project MultiTraiNMT.
- Russian-Belarusian Translator. @tsimafeip built a translator from Russian to Belarusian and adapted it to the legal and medical domains. The code can be found here.
- Reinforcement Learning. @samuki implemented various policy gradient variants in Joey NMT: here's the code. Could the logo be any more perfect? 🐨
- Sign Language Translation. @neccam built a sign language translator that continuously recognizes sign language and translates it. Check out the code and the CVPR 2020 paper!
- @bpopeters built Possum-NMT for multilingual grapheme-to-phoneme transduction and morphological inflection. Read their paper for SIGMORPHON 2020!
- Image Captioning. @pperle and @stdhd built an image captioning tool on top of Joey NMT; check out the code and the demo!
- Joey Toy Models. @bricksdont built a collection of scripts showing how to install Joey NMT, preprocess data, train and evaluate models. This is a great starting point for anyone who wants to run systematic experiments, tends to forget python calls, or doesn't like to run notebook cells!
- African NMT. @jaderabbit started an initiative at the Indaba Deep Learning School 2019 to "put African NMT on the map". The goal is to build and collect NMT models for low-resource African languages. The Masakhane repository contains and explains all the code you need to train JoeyNMT and points to data sources. It also contains benchmark models and configurations that members of Masakhane have built for various African languages. Furthermore, you might be interested in joining the Masakhane community if you're generally interested in low-resource NLP/NMT. Also see the EMNLP Findings paper.
- Slack Joey. Code to locally deploy a Joey NMT model as a chat bot in a Slack workspace. It's a convenient way to probe your model without having to implement an API. And bad translations for chat messages can be very entertaining, too ;)
- Flask Joey. @kevindegila built a Flask interface to Joey, so you can deploy your trained model in a web app and query it in the browser.
- User Study. We evaluated the code quality of this repository by testing the understanding of novices through quiz questions. Find the details in Section 3 of the Joey NMT paper.
- Self-Regulated Interactive Seq2Seq Learning. Julia Kreutzer and Stefan Riezler. Published at ACL 2019. Paper and Code. This project augments the standard fully-supervised learning regime with weak and self-supervision for a better trade-off of quality and supervision costs in interactive NMT.
- Speech Joey. @Sariyusha is giving Joey ears for speech translation. Code.
- Hieroglyph Translation. Joey NMT was used to translate hieroglyphs in this IWSLT 2019 paper by Philipp Wiesenbach and Stefan Riezler. They gave Joey NMT multi-tasking abilities.
If you used Joey NMT for a project or publication, or built some code on top of it, let us know and we'll link it here.
Please leave an issue if you have questions or issues with the code.
For general questions, email us at joeynmt <at> gmail.com.
If you use Joey NMT in a publication or thesis, please cite the following paper:
```bibtex
@inproceedings{kreutzer-etal-2019-joey,
    title = "Joey {NMT}: A Minimalist {NMT} Toolkit for Novices",
    author = "Kreutzer, Julia and
      Bastings, Jasmijn and
      Riezler, Stefan",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/D19-3019",
    doi = "10.18653/v1/D19-3019",
    pages = "109--114",
}
```
Joeys are infant marsupials. 🐨