ConvLab

ConvLab is an open-source multi-domain end-to-end dialog system platform, aiming to enable researchers to quickly set up experiments with reusable components and compare a large set of different approaches, ranging from conventional pipeline systems to end-to-end neural models, in common environments.

Package Overview

convlab: an open-source multi-domain end-to-end dialog research library
convlab.agent: a module for constructing dialog agents, including RL algorithms
convlab.env: a collection of environments
convlab.experiment: a module for running experiments at various levels
convlab.modules: a collection of state-of-the-art dialog system component models, including NLU, DST, Policy, and NLG
convlab.human_eval: a server for conducting human evaluation using Amazon Mechanical Turk
convlab.lib: a library of common utilities
convlab.spec: a collection of experiment spec files

Running ConvLab

Once you've downloaded ConvLab and installed the required packages, you can run the command-line interface with the python run.py command.

$ python run.py {spec file} {spec name} {mode}

For example:

# to evaluate a dialog system consisting of NLU(OneNet), DST(Rule), Policy(Rule), NLG(Template) on the MultiWOZ environment
$ python run.py demo.json onenet_rule_rule_template eval

# to see natural language utterances 
$ LOG_LEVEL=NL python run.py demo.json onenet_rule_rule_template eval

# to see natural language utterances and dialog acts 
$ LOG_LEVEL=ACT python run.py demo.json onenet_rule_rule_template eval

# to see natural language utterances, dialog acts and state representation
$ LOG_LEVEL=STATE python run.py demo.json onenet_rule_rule_template eval

# to train a DQN policy with NLU(OneNet), DST(Rule), NLG(Template) on the MultiWOZ environment
$ python run.py demo.json onenet_rule_dqn_template train

# to use the policy trained above
$ python run.py output/onenet_rule_dqn_template_{timestamp}/onenet_rule_dqn_template_spec.json onenet_rule_dqn_template eval@onenet_rule_dqn_template_t0_s0

Note that currently ConvLab can only train the policy component by interacting with a user simulator. For the other components, ConvLab supports offline supervised learning. For example, you can train an NLU model using the local training script, as in OneNet.

Creating a new spec file

A spec file fully specifies an experiment, including the dialog agent and the user simulator. It is a JSON file containing multiple experiment specs, each with the keys agent, env, body, meta, and search.

We based our implementation on SLM-Lab. For an introduction to these concepts, refer to the SLM-Lab documentation.

Instead of writing one from scratch, you are welcome to modify the convlab/spec/demo.json file. Once you have created a new spec file, place it under the convlab/spec directory and run your experiments. Note that you don't have to prepend convlab/spec/ to your spec file name on the command line.
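As a rough illustration, a minimal spec skeleton might look like the sketch below. The spec name my_new_spec and the placeholder values are hypothetical; copy and adapt the actual fields of each section from convlab/spec/demo.json. Roughly, agent and env list the agent and environment configurations, body describes how they are paired, meta holds experiment-level settings, and search defines an optional hyperparameter search space, following the SLM-Lab spec format mentioned above.

{
  "my_new_spec": {
    "agent": ["<agent component specs, e.g. NLU, DST, policy, and NLG settings>"],
    "env": ["<environment specs, e.g. the MultiWOZ user simulator>"],
    "body": "<how agents and environments are paired>",
    "meta": "<experiment-level settings>",
    "search": "<hyperparameter search space>"
  }
}

With such a file saved as convlab/spec/my_new_spec.json, it would be run with the usual command format, e.g. python run.py my_new_spec.json my_new_spec train.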

Contributions

The ConvLab team welcomes contributions from the community. Pull requests must have one approving review and no requested changes before they are merged. The ConvLab team reserves the right to reject or revert contributions that we don't think are good additions.

Citing

If you use ConvLab in your research, please cite ConvLab: Multi-Domain End-to-End Dialog System Platform.

@inproceedings{lee2019convlab,
  title={ConvLab: Multi-Domain End-to-End Dialog System Platform},
  author={Lee, Sungjin and Zhu, Qi and Takanobu, Ryuichi and Li, Xiang and Zhang, Yaoqin and Zhang, Zheng and Li, Jinchao and Peng, Baolin and Li, Xiujun and Huang, Minlie and others},
  booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
  year={2019}
}
