Hello,

Thank you for releasing such an amazing work. I tried to run the training with the following steps:

1. mkdir -p $DATA_DIR
2. uncompressed the GloVe vectors (glove.6B.300d_onebil.txt and glove.6B.300d_yelp.txt) into $DATA_DIR/word_vectors
3. downloaded the contents of https://worksheets.codalab.org/bundles/0x89bc0497bbb14ee489d33e032fa43a2e/ into $DATA_DIR/onebillion_split
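For reference, a quick sanity check of that layout (a minimal sketch; it assumes the $DATA_DIR variable from the steps above is exported in the environment, and the paths are just the ones I used):

import os

# Assumes the shell variable $DATA_DIR from the steps above is exported.
data_dir = os.environ.get("DATA_DIR", ".")

# Paths implied by the setup steps; adjust if your layout differs.
expected = [
    os.path.join(data_dir, "word_vectors", "glove.6B.300d_onebil.txt"),
    os.path.join(data_dir, "word_vectors", "glove.6B.300d_yelp.txt"),
    os.path.join(data_dir, "onebillion_split"),
]

for path in expected:
    print(path, "OK" if os.path.exists(path) else "MISSING")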
However, I received an error while executing python textmorph/edit_model/main.py configs/edit_model/edit_onebil.txt --gpu 0 inside Docker:

individually:
Variable containing:
205.0143
[torch.cuda.FloatTensor of size 1 (GPU 0)]
batched:
Variable containing:
205.0143
[torch.cuda.FloatTensor of size 1 (GPU 0)]
Traceback (most recent call last):
  File "textmorph/edit_model/main.py", line 40, in <module>
    exp.train()
  File "/code/textmorph/edit_model/training_run.py", line 265, in train
    self._train(self.config, self._train_state, self._examples, self.workspace, self.metadata, self.tb_logger)
  File "/code/textmorph/edit_model/training_run.py", line 399, in _train
    editor.test_batch(noiser(train_batches[0]))
  File "/code/textmorph/edit_model/editor.py", line 128, in test_batch
    raise Exception('batching error - examples do not produce identical results under batching')
Exception: batching error - examples do not produce identical results under batching
After some checking, I found that the individual result is 205.01431 while the batched result is 205.01433. What should I do about this?
Appreciate any help! Thank you 😄
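For context, the two values differ only by about 2e-5, which looks like ordinary floating-point accumulation-order noise between the per-example and batched code paths. Below is a minimal sketch of the kind of tolerance-based comparison I had in mind (a hypothetical helper, not the actual check in editor.py):

def losses_close(individual_loss, batched_loss, rel_tol=1e-4):
    # Compare the two scalar losses with a relative tolerance instead of exact
    # equality, so accumulation-order noise does not count as a mismatch.
    a, b = float(individual_loss), float(batched_loss)
    return abs(a - b) <= rel_tol * max(abs(a), abs(b))

# The values from the failing run differ by ~2e-5, well inside a 1e-4 relative tolerance:
print(losses_close(205.01431, 205.01433))  # True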