
the pretrained word representations (word2vec) #380

Closed

yapingzhao opened this issue Aug 29, 2018 · 0 comments

Comments

yapingzhao commented Aug 29, 2018

An error occurred while I was running the following command:
python -m nmt.nmt \
    --src=vi --tgt=en \
    --vocab_prefix=/tmp/nmt_data/vocab \
    --train_prefix=/tmp/nmt_data/train \
    --dev_prefix=/tmp/nmt_data/tst2012 \
    --test_prefix=/tmp/nmt_data/tst2013 \
    --out_dir=/tmp/nmt_model \
    --num_train_steps=12000 \
    --steps_per_stats=100 \
    --num_layers=2 \
    --num_units=128 \
    --dropout=0.2 \
    --metrics=bleu \
    --embed_prefix=nmt/nmt_data/vector
error:"All embedding size should be size."
but,bilingual word size are the same.
please give me some suggestions, thank you!
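
For reference, this assertion appears to be raised by load_embed_txt in nmt/utils/vocab_utils.py when two lines of an embedding file parse to vectors of different lengths; a word2vec-style header line such as "100000 300" is a common trigger. Below is a minimal sketch to check each file for inconsistent vector lengths. The file names vector.vi and vector.en are my assumption based on --embed_prefix=nmt/nmt_data/vector, which should expand with the src/tgt suffixes:

```python
# check_embed.py -- sanity check for word2vec/GloVe-style text embedding files.
# Assumed file names (based on --embed_prefix): nmt/nmt_data/vector.vi, vector.en
import sys

def check_embed_txt(path):
    """Report every distinct vector length found in an embedding text file."""
    sizes = {}  # vector length -> 1-based line numbers where it occurs
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, 1):
            tokens = line.rstrip("\n").split(" ")
            dim = len(tokens) - 1  # first token is the word itself
            sizes.setdefault(dim, []).append(lineno)
    if len(sizes) == 1:
        dim = next(iter(sizes))
        print(f"{path}: all {len(sizes[dim])} vectors have size {dim}")
    else:
        print(f"{path}: inconsistent vector sizes {sorted(sizes)}")
        for dim in sorted(sizes):
            lines = sizes[dim]
            print(f"  size {dim}: {len(lines)} lines, first at line {lines[0]}")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        check_embed_txt(path)
```

Usage: python check_embed.py nmt/nmt_data/vector.vi nmt/nmt_data/vector.en. If a word2vec header line is present, it shows up as a size-1 "vector" at line 1; removing it should satisfy the loader's check.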
