Reminder
I have read the README and searched the existing issues.
Reproduction
/root/Baichuan-13B/baichuan/lib/python3.8/site-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
  warnings.warn(
/root/Baichuan-13B/baichuan/lib/python3.8/site-packages/torch/utils/checkpoint.py:61: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
  warnings.warn(
Exception in thread Thread-5:
Traceback (most recent call last):
  File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/root/LLaMA-Factory/src/llmtuner/train/tuner.py", line 31, in run_exp
    run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "/root/LLaMA-Factory/src/llmtuner/train/sft/workflow.py", line 75, in run_sft
    train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
  File "/root/Baichuan-13B/baichuan/lib/python3.8/site-packages/transformers/trainer.py", line 1537, in train
    return inner_training_loop(
  File "/root/Baichuan-13B/baichuan/lib/python3.8/site-packages/transformers/trainer.py", line 1854, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/root/Baichuan-13B/baichuan/lib/python3.8/site-packages/transformers/trainer.py", line 2744, in training_step
    self.accelerator.backward(loss)
  File "/root/Baichuan-13B/baichuan/lib/python3.8/site-packages/accelerate/accelerator.py", line 1964, in backward
    loss.backward(**kwargs)
  File "/root/Baichuan-13B/baichuan/lib/python3.8/site-packages/torch/_tensor.py", line 492, in backward
    torch.autograd.backward(
  File "/root/Baichuan-13B/baichuan/lib/python3.8/site-packages/torch/autograd/__init__.py", line 251, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I followed the steps from the video on the project homepage.
On my first run of web_demo.py, QLoRA training with 4-bit quantization fails with: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I tried different models and different datasets; the same error occurs every time.
I could not find an existing issue reporting this.
python = 3.8.10
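
The earlier warning "None of the inputs have requires_grad=True. Gradients will be None" points at a likely cause: when gradient checkpointing runs on a 4-bit quantized model whose base weights are all frozen, nothing in the checkpointed forward requires grad, so the loss ends up with no grad_fn and backward() raises exactly this RuntimeError. Below is a minimal sketch of the standard QLoRA preparation that avoids this; the model id and LoRA hyperparameters are illustrative placeholders, not taken from this report, and whether LLaMA-Factory already performs these steps for this configuration would need to be verified against its source.

```python
# Sketch of standard QLoRA preparation; the model id and the LoRA
# hyperparameters below are illustrative placeholders, not from this report.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/Baichuan-13B-Chat",  # placeholder model id
    quantization_config=bnb_config,
    trust_remote_code=True,
)

# prepare_model_for_kbit_training() (with its default
# use_gradient_checkpointing=True) enables checkpointing AND registers a hook
# that makes the embedding outputs require grad even though every 4-bit base
# weight is frozen. Skipping this step is a classic source of "element 0 of
# tensors does not require grad and does not have a grad_fn".
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,          # placeholder hyperparameters
    target_modules=["W_pack"],  # Baichuan's fused QKV projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

On a sufficiently recent transformers, calling model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant": False}) also addresses the use_reentrant deprecation warning shown above, since the non-reentrant variant tolerates inputs that do not require grad.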
Expected behavior
No response
System Info
No response
Others
No response