I followed the tutorial linked from Zhihu and tried training the encoder. After training for half a day, the results don't seem to have changed at all. The data is self-built, a bit over 2 GB.

Question 1: Is this situation normal? If not, what is causing it?

Question 2: According to a comment on Zhihu ("tested it once: when training the synthesizer, attention converges at around 4,000 steps, and the loss reaches 0.35 by 22k steps, so fine-tuning can start quickly, which exceeded expectations"), how do I bring the encoder into synthesizer training?
That's normal. Training the encoder has much higher requirements: it needs far more data and far more steps. We recommend only fine-tuning it.

Because of the architecture, the encoder and the synthesizer are trained separately.
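To illustrate what "trained separately" means in practice: in SV2TTS-style projects like this one, the encoder is not optimized jointly with the synthesizer. Instead, the pretrained encoder is frozen and used as a feature extractor, and the speaker embedding it produces is fed to the synthesizer as a conditioning input. Below is a minimal numpy sketch of that data flow; `FrozenEncoder` and `synthesizer_input` are hypothetical stand-ins, not the real classes in the repo.

```python
import numpy as np

rng = np.random.default_rng(0)

class FrozenEncoder:
    """Stand-in for a pretrained speaker encoder: mel frames -> fixed-size
    embedding. Its weights are fixed and never updated during synthesizer
    training."""
    def __init__(self, n_mels=40, embed_dim=256):
        self.W = rng.standard_normal((n_mels, embed_dim))  # frozen weights

    def embed(self, mel_frames):
        # Mean-pool over time, project, then L2-normalize
        # (GE2E-style embeddings are unit-norm).
        pooled = mel_frames.mean(axis=0) @ self.W
        return pooled / np.linalg.norm(pooled)

def synthesizer_input(text_ids, speaker_embedding, vocab_size=100):
    """During synthesizer training, the frozen embedding is simply
    broadcast over the text sequence and concatenated to the text
    features as conditioning."""
    text_feats = np.eye(vocab_size)[text_ids]        # toy one-hot text features
    cond = np.tile(speaker_embedding, (len(text_ids), 1))
    return np.concatenate([text_feats, cond], axis=1)

encoder = FrozenEncoder()
mel = rng.standard_normal((120, 40))   # fake 120-frame mel spectrogram
emb = encoder.embed(mel)               # computed once, no gradients needed
x = synthesizer_input([5, 17, 42], emb)
print(x.shape)  # (3, 356) = 100 text dims + 256 embedding dims
```

So to answer Question 2: you don't "add" the encoder to synthesizer training as a trainable part. You point the synthesizer's preprocessing at your (fine-tuned) encoder checkpoint, and the embeddings it emits are what the synthesizer learns to condition on.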
Thanks for the reply. I'll let it run for a while longer and see.