Chinese NER using Lattice LSTM. Code for ACL 2018 paper.

Chinese NER Using Lattice LSTM

Lattice LSTM for Chinese NER: a character-based LSTM that takes lattice word embeddings as additional input.

Models and results are described in our ACL 2018 paper, Chinese NER Using Lattice LSTM. The model achieves a 93.18% F1 score on the MSRA dataset, which was the state-of-the-art result on the Chinese NER task.
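The lattice idea can be illustrated with a small sketch (this is not the repository's code): for each character position in a sentence, collect every lexicon word that matches a contiguous span starting there; these word paths are what the LSTM merges into the character states. The lexicon and sentence below are made-up examples.

```python
# Illustrative sketch of lattice construction, assuming a simple set-based
# lexicon. Names and data are hypothetical, not from the repository.

def build_lattice(chars, lexicon):
    """Return {start: [(word, end), ...]} for every lexicon word that
    matches a contiguous span of `chars` (end index is inclusive)."""
    max_len = max(len(w) for w in lexicon)
    lattice = {}
    for i in range(len(chars)):
        # Only spans of length >= 2: single characters are already
        # covered by the character-level LSTM backbone.
        for j in range(i + 2, min(i + max_len, len(chars)) + 1):
            span = "".join(chars[i:j])
            if span in lexicon:
                lattice.setdefault(i, []).append((span, j - 1))
    return lattice

chars = list("南京市长江大桥")  # "Nanjing City Yangtze River Bridge"
lexicon = {"南京", "南京市", "市长", "长江", "长江大桥", "大桥"}
print(build_lattice(chars, lexicon))
# e.g. position 3 yields ("长江", 4) and ("长江大桥", 6)
```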

Details will be updated soon.

Requirements:

Python: 2.7   
PyTorch: 0.3

Input format:

CoNLL format (BIOES tag scheme preferred), with one character and its label per line. Sentences are separated by a blank line.

美	B-LOC
国	E-LOC
的	O
华	B-PER
莱	I-PER
士	E-PER

我	O
跟	O
他	O
谈	O
笑	O
风	O
生	O 
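The format above can be read with a few lines of Python; this is a hedged sketch, not the repository's actual data loader (`read_conll` is a hypothetical name).

```python
# Sketch of a reader for the character-per-line CoNLL format shown above:
# one (character, tag) pair per line, blank line between sentences.

def read_conll(lines):
    """Yield each sentence as a list of (character, tag) pairs."""
    sentence = []
    for line in lines:
        line = line.strip()
        if not line:              # blank line: sentence boundary
            if sentence:
                yield sentence
                sentence = []
        else:
            char, tag = line.split()
            sentence.append((char, tag))
    if sentence:                  # flush a trailing sentence
        yield sentence

sample = ["美 B-LOC", "国 E-LOC", "的 O", "", "我 O"]
print(list(read_conll(sample)))
# -> two sentences: [('美','B-LOC'),('国','E-LOC'),('的','O')] and [('我','O')]
```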

Pretrained Embeddings:

The pretrained character and word embeddings are the same as those used in the baseline of RichWordSegmentor.

Character embeddings: gigaword_chn.all.a2b.uni.ite50.vec

Word(Lattice) embeddings: ctb.50d.vec

Resume NER data

Crawled from Sina Finance, this dataset contains resumes of senior executives from companies listed on the Chinese stock market. Details can be found in our paper.

Cite:

Please cite our ACL 2018 paper:

@inproceedings{zhang2018chinese,
 title={Chinese NER Using Lattice LSTM},
 author={Yue Zhang and Jie Yang},
 booktitle={Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)},
 year={2018}
}
