This repository has been archived by the owner on Dec 11, 2023. It is now read-only.

--embed_prefix for translation model training #382

Closed
yapingzhao opened this issue Sep 3, 2018 · 0 comments

Comments


yapingzhao commented Sep 3, 2018

Hi,
The --embed_prefix option is introduced in the nmt.py file. My pretrained word-vector file (word2vec) has the following format, one token per line followed by its vector (vector size = 300):
World 0.342344 ...
One 1.341233 ...
But an error occurred while I was running the command:
python -m nmt.nmt
--src=vi --tgt=en
--num_units=300
--embed_prefix=nmt/nmt_data/vector
The error was:
FailedPreconditionError (see above for traceback): HashTable has different value for same key. Key ᠤ has 23 and trying to add value 1518
[[Node: string_to_index/hash_table/table_init = InitializeTableFromTextFileV2[delimiter="\t", key_index=-2, value_index=-1, vocab_size=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](string_to_index/hash_table, string_to_index/hash_table/table_init/asset_filepath)]]
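This FailedPreconditionError indicates that the same key (here the token ᠤ) appears on more than one line of the file used to build the lookup table, so the table cannot assign it a single index. A minimal sketch for checking this, assuming the one-token-then-vector format shown above (the function name and file path are hypothetical, not part of the nmt codebase):

```python
# Scan a pretrained embedding / vocab file for duplicate tokens, which
# trigger "HashTable has different value for same key" at table init.
from collections import Counter

def find_duplicate_tokens(path):
    """Return the tokens that appear on more than one line of the file."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            # The token is the first whitespace-separated field on the line.
            parts = line.rstrip("\n").split(" ", 1)
            if parts and parts[0]:
                counts[parts[0]] += 1
    return [tok for tok, n in counts.items() if n > 1]

# Example (path is an assumption): print any tokens that repeat.
# print(find_duplicate_tokens("nmt/nmt_data/vector.vi"))
```

If this reports duplicates, deduplicating the embedding file (keeping one vector per token) should let the table initialize.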

Looking forward to your advice or answers.
Best regards,
Thank you very much!

@yapingzhao yapingzhao changed the title the pretrained word representations (word2vec) --embed_prefix for translation model training Sep 3, 2018