How to generate answers from a loaded GPT model #2902
Answered by ht-zhou
bridgearchway asked this question in Community | Q&A
I've loaded my pretrained model, but when I call `model.generate()` and `tokenizer.batch_decode()`, the output is just nonsense. I'm stuck here.
Answered by ht-zhou on Mar 2, 2023
Replies: 1 comment
Thanks for your feedback. The training code we provide in the example is a demo. You need to train for over 1 million episodes to get an actor with good performance.
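For context, `model.generate()` is essentially an autoregressive decoding loop over the model's next-token logits, and an undertrained model produces near-random logits, which is why the decoded text looks like nonsense. Below is a minimal sketch of greedy decoding; the toy `step_fn` and all names are hypothetical stand-ins for a real forward pass, not the actual library implementation:

```python
VOCAB = 8  # toy vocabulary size


def toy_step(ids):
    """Toy 'model': returns logits that prefer (last token + 1) mod VOCAB.

    A real step function would run a transformer forward pass on `ids`.
    """
    preferred = (ids[-1] + 1) % VOCAB
    return [1.0 if t == preferred else 0.0 for t in range(VOCAB)]


def generate_greedy(step_fn, input_ids, max_new_tokens, eos_id=None):
    """Greedy autoregressive decoding: repeatedly append the argmax token."""
    ids = list(input_ids)
    for _ in range(max_new_tokens):
        logits = step_fn(ids)
        next_id = max(range(len(logits)), key=lambda t: logits[t])
        ids.append(next_id)
        if eos_id is not None and next_id == eos_id:
            break
    return ids


print(generate_greedy(toy_step, [0], max_new_tokens=4))  # [0, 1, 2, 3, 4]
```

A real `model.generate()` call layers sampling, beam search, and KV caching on top of a loop like this; if the underlying checkpoint is undertrained, no decoding strategy will make the output coherent.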