fugashi is a Cython wrapper for MeCab, a Japanese tokenizer and morphological analysis tool. Wheels are provided for Linux, macOS, and 64-bit Windows, and UniDic is easy to install.
You don't need to write issues in English.
Check out the interactive demo, see the blog post for background on why fugashi exists and some of the design decisions, or see this guide for a basic introduction to Japanese tokenization.
If you are on an unsupported platform (like PowerPC), you'll need to install MeCab first. It's recommended you install from source.
```python
from fugashi import Tagger

tagger = Tagger('-Owakati')
text = "麩菓子は、麩を主材料とした日本の菓子。"
tagger.parse(text)
# => '麩 菓子 は 、 麩 を 主材 料 と し た 日本 の 菓子 。'
for word in tagger(text):
    print(word, word.feature.lemma, word.pos, sep='\t')
    # "feature" is the UniDic feature data as a named tuple
```
fugashi requires a dictionary. UniDic is recommended, and two easy-to-install versions are provided.
- unidic-lite, a 2013 version of UniDic that's relatively small
- unidic, the latest UniDic 2.3.0, which is 1GB on disk and requires a separate download step
If you just want to make sure things work you can start with unidic-lite, but for more serious processing unidic is recommended. For production use you'll generally want to generate your own dictionary too; for details see the MeCab documentation.
To get either of these dictionaries, you can install them directly using pip or do the following:
```sh
pip install fugashi[unidic-lite]

# The full version of UniDic requires a separate download step
pip install fugashi[unidic]
python -m unidic download
```
For more information on the different MeCab dictionaries available, see this article.
fugashi is written with the assumption that you'll use UniDic to process Japanese, but it supports arbitrary dictionaries.
If you're using a dictionary besides UniDic you can use the GenericTagger like this:
```python
from fugashi import GenericTagger

tagger = GenericTagger()

# parse can be used as normal
tagger.parse('something')

# features from the dictionary can be accessed by field numbers
text = "麩菓子は、麩を主材料とした日本の菓子。"
for word in tagger(text):
    print(word.surface, word.feature[0])
```
You can also create a dictionary wrapper to get feature information as a named tuple.
```python
from fugashi import GenericTagger, create_feature_wrapper

CustomFeatures = create_feature_wrapper('CustomFeatures', 'alpha beta gamma')
tagger = GenericTagger(wrapper=CustomFeatures)
text = "麩菓子は、麩を主材料とした日本の菓子。"
for word in tagger.parseToNodeList(text):
    print(word.surface, word.feature.alpha)
```
If you use fugashi in research, it would be appreciated if you cite this paper. You can read it at the ACL Anthology or on arXiv.
```bibtex
@inproceedings{mccann-2020-fugashi,
    title = "fugashi, a Tool for Tokenizing {J}apanese in Python",
    author = "McCann, Paul",
    booktitle = "Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.nlposs-1.7",
    pages = "44--51",
    abstract = "Recent years have seen an increase in the number of large-scale multilingual NLP projects. However, even in such projects, languages with special processing requirements are often excluded. One such language is Japanese. Japanese is written without spaces, tokenization is non-trivial, and while high quality open source tokenizers exist they can be hard to use and lack English documentation. This paper introduces fugashi, a MeCab wrapper for Python, and gives an introduction to tokenizing Japanese.",
}
```
If you have a problem with fugashi feel free to open an issue. However, there are some cases where it might be better to use a different library.
- If you don't want to deal with installing MeCab at all, try SudachiPy.
- If you need to work with Korean, try pymecab-ko or KoNLPy.
fugashi is released under the terms of the MIT license. Please copy it far and wide.
fugashi is a wrapper for MeCab, and fugashi wheels include MeCab binaries.
MeCab is copyrighted free software by Taku Kudo <[email protected]> and Nippon Telegraph and Telephone Corporation, and is redistributed under the BSD License.