ConvLab is an open-source multi-domain end-to-end dialog system platform, aiming to enable researchers to quickly set up experiments with reusable components and compare a large set of different approaches, ranging from conventional pipeline systems to end-to-end neural models, in common environments.
| Module | Description |
| --- | --- |
| `convlab` | an open-source multi-domain end-to-end dialog research library |
| `convlab.agent` | a module for constructing dialog agents, including RL algorithms |
| `convlab.env` | a collection of environments |
| `convlab.experiment` | a module for running experiments at various levels |
| `convlab.modules` | a collection of state-of-the-art dialog system component models, including NLU, DST, Policy, and NLG |
| `convlab.human_eval` | a server for conducting human evaluation using Amazon Mechanical Turk |
| `convlab.lib` | a library of common utilities |
| `convlab.spec` | a collection of experiment spec files |
Once you've downloaded ConvLab and installed the required packages, you can run the command-line interface via `python run.py`:
```sh
$ python run.py {spec file} {spec name} {mode}
```

For example:

```sh
# to evaluate a dialog system consisting of NLU (OneNet), DST (Rule), Policy (Rule), and NLG (Template) on the MultiWOZ environment
$ python run.py demo.json onenet_rule_rule_template eval

# to train a DQN policy with NLU (OneNet), DST (Rule), and NLG (Template) on the MultiWOZ environment
$ python run.py demo.json onenet_rule_dqn_template train

# to use a pretrained policy from above
$ python run.py output/onenet_rule_dqn_template_{timestamp}/onenet_rule_dqn_template_spec.json onenet_rule_dqn_template enjoy@onenet_rule_dqn_template_t0_s0
```
A spec file fully specifies an experiment, including the dialog agent and the user simulator. It is a JSON file containing multiple experiment specs, each with the keys `agent`, `env`, `body`, `meta`, and `search`.

We based our implementation on SLM-Lab; for an introduction to these concepts, please refer to the SLM-Lab documentation.

Rather than writing a spec file from scratch, you are welcome to modify the `convlab/spec/demo.json` file. Once you have created a new spec file, place it under the `convlab/spec` directory and run your experiments. Note that you don't have to prepend `convlab/spec/` to your spec file name.
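To illustrate the expected shape, the sketch below builds and validates a skeletal experiment spec in Python. The spec name and all component values here are hypothetical placeholders, not taken from `demo.json`; only the five required top-level keys are assumed from the description above.

```python
import json

# A skeletal experiment spec: a JSON object mapping each spec name to an
# entry with the keys agent, env, body, meta, and search.
# All names and values below are illustrative placeholders.
spec = {
    "my_demo_spec": {
        "agent": [{"name": "DialogAgent"}],  # dialog agent configuration
        "env": [{"name": "multiwoz"}],       # environment configuration
        "body": {},                          # agent-env connection settings
        "meta": {"max_trial": 1},            # experiment-level settings
        "search": {},                        # hyperparameter search settings
    }
}

# Check that every experiment spec carries the required keys.
REQUIRED_KEYS = {"agent", "env", "body", "meta", "search"}
for name, entry in spec.items():
    missing = REQUIRED_KEYS - entry.keys()
    assert not missing, f"spec '{name}' is missing keys: {missing}"

# Serialize to JSON, as it would appear in a spec file under convlab/spec.
spec_json = json.dumps(spec, indent=2)
```

A validation pass like this can catch a malformed spec before launching a long-running experiment.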
The ConvLab team welcomes contributions from the community. Pull requests must have one approving review and no requested changes before they are merged. The ConvLab team reserves the right to reject or revert contributions that we do not consider good additions.
If you use ConvLab in your research, please cite ConvLab: Multi-Domain End-to-End Dialog System Platform.
```
@inproceedings{lee2019convlab,
  title={ConvLab: Multi-Domain End-to-End Dialog System Platform},
  author={Lee, Sungjin and Zhu, Qi and Takanobu, Ryuichi and Li, Xiang and Zhang, Yaoqin and Zhang, Zheng and Li, Jinchao and Peng, Baolin and Li, Xiujun and Huang, Minlie and others},
  booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
  year={2019}
}
```