Loss is 0 and grad_norm is nan during fine-tuning #29
Comments
Did you enable bf16?
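For reference, a minimal sketch of what enabling bf16 looks like in a standard Hugging Face TrainingArguments setup; the output path and hyperparameters below are placeholders, not the repo's actual script values:

```python
# Hypothetical bf16 config check; not the repo's training script.
import torch
from transformers import TrainingArguments

# bf16 needs hardware support (Ampere or newer NVIDIA GPUs).
assert torch.cuda.is_bf16_supported(), "this GPU does not support bf16"

training_args = TrainingArguments(
    output_dir="output",            # placeholder path
    bf16=True,                      # the flag being asked about
    per_device_train_batch_size=1,  # placeholder hyperparameters
    num_train_epochs=1,
)
print(training_args.bf16)           # should print True
```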
Yes, I trained with the provided parameter script. @iMountTai
Double-check the data length against the value you set.
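A quick sketch of that check, assuming a Hugging Face tokenizer; the max_seq_length value and sample texts are placeholders for the real training config and dataset:

```python
# Hypothetical length audit: compare tokenized sample lengths against the
# max length configured in the training script.
from transformers import AutoTokenizer

max_seq_length = 512  # placeholder: use the value from your training script
tokenizer = AutoTokenizer.from_pretrained("hfl/llama-3-chinese-8b-instruct-v2")

samples = ["first training example ...", "second training example ..."]  # placeholders
for i, text in enumerate(samples):
    n_tokens = len(tokenizer(text)["input_ids"])
    if n_tokens > max_seq_length:
        print(f"sample {i}: {n_tokens} tokens > max_seq_length={max_seq_length}, will be truncated")
```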
Has this been resolved? I'm running into the same problem at the moment.
The scripts provided by Chinese-LLaMA-Alpaca-3 do not show this problem, but a LoRA model produced with unsloth's llama3 training script hits it when trained again.
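If the starting checkpoint itself is suspect (e.g. a LoRA produced by another toolchain), one sanity check before re-training is to scan its weights for non-finite values. A sketch, using a model ID from this thread as a stand-in:

```python
# Hypothetical pre-training sanity check: look for nan/inf in loaded weights.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "hfl/llama-3-chinese-8b-instruct-v2",  # stand-in: use the checkpoint you re-train
    torch_dtype=torch.bfloat16,
)
for name, param in model.named_parameters():
    if not torch.isfinite(param).all():
        print(f"non-finite values found in {name}")
```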
Training hfl/llama-3-chinese-8b-instruct-v2 on the kigner/ruozhiba-llama3-tt dataset also runs into this problem; no fix found so far...
{'loss': 3.3183, 'grad_norm': nan, 'learning_rate': 1.707941929974381e-08, 'epoch': 0.0}
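To catch a run like this at the first bad step rather than letting it continue, here is a sketch of a TrainerCallback that stops training when a nan grad_norm shows up; it assumes a transformers version recent enough to include grad_norm in its log entries:

```python
# Hypothetical early stop on nan grad_norm; attach with trainer.add_callback(...).
import math
from transformers import TrainerCallback

class NanGradNormCallback(TrainerCallback):
    def on_log(self, args, state, control, logs=None, **kwargs):
        grad_norm = (logs or {}).get("grad_norm")
        if grad_norm is not None and math.isnan(float(grad_norm)):
            print(f"nan grad_norm at step {state.global_step}; stopping for inspection")
            control.should_training_stop = True
        return control
```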
The problem was fixed by upgrading bitsandbytes to the latest version. Upgraded to
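To verify the upgrade actually took effect in the training environment (pip install -U bitsandbytes is the usual route), a quick check of the imported version:

```python
# Confirm which bitsandbytes build the training environment actually imports.
import bitsandbytes as bnb
print(bnb.__version__)
```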
I ran into this problem too, and upgrading bitsandbytes did not fix it.
@aa200647963
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.
Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.
Works!
Required checks before submitting
Issue type
Model training and fine-tuning
Base model
Llama-3-Chinese-8B (base model)
Operating system
Linux
Detailed description of the problem
I am fine-tuning on my own dataset, but during training the loss is 0 and grad_norm is nan. What could be the problem?
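One common cause worth ruling out: a batch whose labels are entirely masked with -100 gives the cross-entropy loss nothing to average over, which can surface as loss == 0 or as nan gradients depending on how the loss is reduced. A sketch of that check on a hypothetical label tensor:

```python
# Hypothetical check for fully masked batches; run it on real collated batches.
import torch

batch_labels = torch.full((1, 512), -100)  # stand-in for one collated batch

valid_tokens = (batch_labels != -100).sum().item()
if valid_tokens == 0:
    print("no unmasked labels in this batch; check the data collator / prompt masking")
```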
Dependencies (must be provided for code-related issues)
Run logs or screenshots