Hello author! I used your code to train the model from scratch on the Adobe240 dataset. For every checkpoint after epoch 30, the inferred images are pure white; the epoch-30 checkpoint produces images that are fairly blurry but at least look normal. What might be going wrong here?
To reduce I/O during training, I split the original 720p video frames of the training set into 120x120 patches. Could the problem be that the patches I used are too small?
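(For context, a minimal sketch of this kind of non-overlapping patch splitting, assuming 720x1280 RGB frames stored as numpy arrays; the function name and sizes are illustrative, not the actual preprocessing script.)

```python
import numpy as np

def split_into_patches(frame: np.ndarray, patch: int = 120) -> list:
    # Cut the frame into non-overlapping patch x patch tiles.
    h, w = frame.shape[:2]
    patches = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            patches.append(frame[y:y + patch, x:x + patch])
    return patches

# A 720x1280 frame yields 6 x 10 = 60 patches; the last 80 columns are dropped here.
patches = split_into_patches(np.zeros((720, 1280, 3), dtype=np.uint8))
```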
I'd suggest first printing and checking whether the value ranges at training and inference time are consistent, i.e. both 0~1. I don't think the patch size would have that much of an impact.
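A minimal sketch of such a range check, assuming PyTorch tensors; `gt` and `pred` are hypothetical names for the ground truth and the network output:

```python
import torch

def check_range(name: str, t: torch.Tensor) -> None:
    # Print min/max/mean so you can confirm both the training targets and the
    # inference outputs stay in the expected 0~1 range.
    print(f"{name}: min={t.min().item():.4f} "
          f"max={t.max().item():.4f} mean={t.mean().item():.4f}")

# Example usage inside the training loop or inference script:
# check_range("gt", gt)
# check_range("pred", pred)
```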
Thanks for your reply! I checked and found that the inference outputs are all 1.0. Also, late in training loss_l1 is on the order of 1e-1, while early in training it was on the order of 1e-2, so the loss actually keeps increasing as training goes on. How should I handle this?
Oh, then the training may really have diverged. If the loss keeps going up as training proceeds, try lowering the learning rate by an order of magnitude and see whether the loss can come down normally.
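A minimal sketch of that adjustment, assuming a standard PyTorch Adam setup; the placeholder model and the original learning rate value are hypothetical, not taken from the repo's config:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # stand-in for the real network

# If the diverging run used e.g. lr=1e-4, drop it to 1e-5 and watch whether
# loss_l1 starts decreasing again instead of climbing.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```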