Hi, I have a question about how wordVectors is used in seq2seq.py, since I need to adapt the code to a dataset of very long sentences (I saw that the maximum sentence length is 15 in the createTrainingMatrices function in seq2seq.py, but my samples are almost short paragraphs ^_^). The createTrainingMatrices function already builds training matrices for every sentence out of word indices, so why not use the pre-trained embeddingMatrix.npy produced by word2vec.py? seq2seq.py contains the line "wordVectors = np.load('models/embeddingMatrix.npy')", but wordVectors never actually seems to be used afterwards.
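For reference, something along these lines is what I had in mind for wiring the pre-trained matrix into the encoder embeddings. This is just a sketch under my own assumptions (TF 1.x style, and the names embedding, encoderInputs, vocabSize, and embeddingDim are mine, not from seq2seq.py):

```python
import numpy as np
import tensorflow as tf  # assuming TF 1.x, as used in the repo

# Load the matrix trained by word2vec.py (shape: vocabSize x embeddingDim)
wordVectors = np.load('models/embeddingMatrix.npy')
vocabSize, embeddingDim = wordVectors.shape

# Create the embedding variable initialized from the pre-trained vectors,
# instead of letting the seq2seq model learn embeddings from scratch.
embedding = tf.get_variable(
    'embedding',
    shape=[vocabSize, embeddingDim],
    initializer=tf.constant_initializer(wordVectors),
    trainable=False)  # or True, to fine-tune the vectors during training

# encoderInputs: int32 word indices, e.g. [batchSize, maxSentenceLength]
encoderInputs = tf.placeholder(tf.int32, [None, None])
encoderEmbedded = tf.nn.embedding_lookup(embedding, encoderInputs)
```

That way the index matrices from createTrainingMatrices would still be used, but the lookup would map them onto the word2vec vectors rather than freshly initialized embeddings. Is something like this intended, or is there a reason the pre-trained matrix is loaded but not used?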