Attempted an implementation #31

Open
scottleith opened this issue Dec 27, 2017 · 0 comments

scottleith commented Dec 27, 2017

I really enjoyed your "Advanced dynamic seq2seq with TensorFlow" tutorial, and decided to try it out myself! I wanted to take a corpus of English quotes and create an encoder-decoder that could reconstruct each quote from the meaning vector (the encoder's final hidden state).
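
For reference, here's roughly the structure I'm going for (a minimal, illustrative sketch in the tutorial's TensorFlow 1.x style; the names and sizes are placeholders, not my actual script):

```python
import tensorflow as tf  # TensorFlow 1.x, as in the tutorial

vocab_size, embed_dim, hidden_units = 27994, 128, 256

# Time-major token ids: [max_time, batch_size]
encoder_inputs = tf.placeholder(tf.int32, [None, None])
decoder_inputs = tf.placeholder(tf.int32, [None, None])

embeddings = tf.Variable(tf.random_uniform([vocab_size, embed_dim], -1.0, 1.0))
enc_emb = tf.nn.embedding_lookup(embeddings, encoder_inputs)
dec_emb = tf.nn.embedding_lookup(embeddings, decoder_inputs)

# Encoder: only its final state (the "meaning vector") is kept.
_, encoder_final_state = tf.nn.dynamic_rnn(
    tf.contrib.rnn.LSTMCell(hidden_units), enc_emb,
    dtype=tf.float32, time_major=True, scope="encoder")

# Decoder: seeded with the encoder's final state, it has to
# reproduce the original quote token by token.
decoder_outputs, _ = tf.nn.dynamic_rnn(
    tf.contrib.rnn.LSTMCell(hidden_units), dec_emb,
    initial_state=encoder_final_state,
    dtype=tf.float32, time_major=True, scope="decoder")

# Project decoder outputs to per-timestep vocabulary logits.
decoder_logits = tf.layers.dense(decoder_outputs, vocab_size)
```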

I've run into an error in softmax_cross_entropy_with_logits:

InvalidArgumentError (see above for traceback): logits and labels must be same size: logits_size=[1000,27994] labels_size=[500,27994]
(My sequences have 5 timesteps, the batch size is 100, and the vocab size is 27994.)
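
If it helps narrow things down: 5 timesteps × 100 batch = 500 label rows, but the logits have 1000 rows, as if the decoder is running for 10 steps (10 × 100 = 1000) while the targets only cover 5. Below is a minimal sketch of one way to make the two shapes agree, assuming the tutorial-style time-major loss (the tensor names here are illustrative, not necessarily what's in my script):

```python
import tensorflow as tf  # TensorFlow 1.x

vocab_size = 27994

# Time-major, as in the tutorial:
#   decoder_logits:  [decoder_steps, batch_size, vocab_size]
#   decoder_targets: [target_steps,  batch_size]
decoder_logits = tf.placeholder(tf.float32, [None, None, vocab_size])
decoder_targets = tf.placeholder(tf.int32, [None, None])

# If the decoder ran for more steps than the targets have
# (here 10 vs 5), pad the targets with PAD (id 0) up to the
# logits' length, so both flatten to [steps * batch, vocab_size].
pad_steps = tf.shape(decoder_logits)[0] - tf.shape(decoder_targets)[0]
padded_targets = tf.pad(decoder_targets, [[0, pad_steps], [0, 0]])

stepwise_ce = tf.nn.softmax_cross_entropy_with_logits(
    labels=tf.one_hot(padded_targets, depth=vocab_size, dtype=tf.float32),
    logits=decoder_logits,
)
loss = tf.reduce_mean(stepwise_ce)
```

(Of course, the real fix depends on why the decoder and the targets disagree about the sequence length in the first place.)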

I've been looking over my code for hours now, but can't find the mistake. I know it's a long shot, but would you be willing to take a look to see where I've gone wrong?

The code is here, and the 'problem' might be around line 246:
https://github.com/scottleith/lstm/blob/master/Attempted%20encoder-decoder%20LSTM.py

The raw data can be downloaded here: https://github.com/alvations/Quotables/blob/master/author-quote.txt

I also apologize if this is an inappropriate place to ask; I wanted to contact you directly, but GitHub doesn't make that easy!
