
Error during validation (not training) with LongLoRA: local variable 'groupsz' referenced before assignment #3724

Closed · 1 task done
the-nine-nation opened this issue on May 13, 2024 · 1 comment
Labels: solved (This problem has been already solved)

Comments

@the-nine-nation

Reminder

  • I have read the README and searched the existing issues.

Reproduction

```yaml
### model
model_name_or_path: /root/autodl-tmp/Models/Meta-Llama-3-8B
quantization_bit: 4

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj
shift_attn: true
lora_rank: 16
lora_alpha: 32
lora_dropout: 0.1
rope_scaling: linear

### dataset
dataset: law_data,case_data,true_data,identity_data,zhengju_data,alpaca_gpt4_zh,alpaca_gpt4_en
template: llama3
cutoff_len: 12000
max_samples: 1000
val_size: 0.01
overwrite_cache: true
preprocessing_num_workers: 32

### output
output_dir: saves/llama3-8b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 0.0001
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
fp16: true

### eval
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 5
```
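(Assuming the usual LLaMA-Factory workflow, a config like this is saved to a YAML file and launched with `llamafactory-cli train path/to/config.yaml`, which matches the `llamafactory-cli` entry point visible in the traceback below; the exact file name is immaterial to the bug.)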

Expected behavior

After some experimentation I was able to localize the problem: it appears that with LongLoRA enabled, errors occur at inference time, i.e. during the evaluation step rather than during training.

System Info

```
Traceback (most recent call last):
  File "/root/miniconda3/bin/llamafactory-cli", line 8, in <module>
    sys.exit(main())
  File "/root/autodl-tmp/LLaMA-Factory/src/llmtuner/cli.py", line 49, in main
    run_exp()
  File "/root/autodl-tmp/LLaMA-Factory/src/llmtuner/train/tuner.py", line 33, in run_exp
    run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "/root/autodl-tmp/LLaMA-Factory/src/llmtuner/train/sft/workflow.py", line 73, in run_sft
    train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 1859, in train
    return inner_training_loop(
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 2278, in _inner_training_loop
    self._maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval)
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 2662, in _maybe_log_save_evaluate
    metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer_seq2seq.py", line 180, in evaluate
    return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 3467, in evaluate
    output = eval_loop(
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 3650, in evaluation_loop
    loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
  File "/root/autodl-tmp/LLaMA-Factory/src/llmtuner/train/sft/trainer.py", line 69, in prediction_step
    loss, generated_tokens, _ = super().prediction_step(  # ignore the returned labels (may be truncated)
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer_seq2seq.py", line 278, in prediction_step
    return super().prediction_step(
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 3836, in prediction_step
    loss, outputs = self.compute_loss(model, inputs, return_outputs=True)
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 3161, in compute_loss
    outputs = model(**inputs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/accelerate/utils/operations.py", line 822, in forward
    return model_forward(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/accelerate/utils/operations.py", line 810, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "/root/miniconda3/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 16, in decorate_autocast
    return func(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/peft/peft_model.py", line 1129, in forward
    return self.base_model(
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 161, in forward
    return self.model.forward(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1211, in forward
    outputs = self.model(
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1018, in forward
    layer_outputs = decoder_layer(
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 741, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/root/autodl-tmp/LLaMA-Factory/src/llmtuner/model/utils/longlora.py", line 273, in llama_sdpa_attention_forward
    causal_mask = causal_mask[:, :, :, :groupsz]
UnboundLocalError: local variable 'groupsz' referenced before assignment
```
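The last two frames suggest the failure mode: `groupsz` is read when slicing the causal mask in `llama_sdpa_attention_forward`, but it is apparently only assigned on the training path, so the evaluation loop hits an unbound local. A minimal, self-contained sketch of that pattern (illustrative names and shapes, not the actual LLaMA-Factory `longlora.py` source):

```python
import torch

def sdpa_attention_sketch(hidden_states: torch.Tensor,
                          causal_mask: torch.Tensor,
                          training: bool,
                          group_size_ratio: float = 0.25) -> torch.Tensor:
    """Illustrative sketch of the suspected bug, not the real implementation."""
    q_len = hidden_states.shape[1]
    if training:
        # LongLoRA's shifted sparse attention groups are only formed while
        # training, so this branch is the only place `groupsz` gets bound.
        groupsz = int(q_len * group_size_ratio)
        # ... roll/reshape q, k, v into `groupsz`-sized groups here ...
    # BUG: on the eval path (training=False) the branch above is skipped,
    # so this slice raises:
    # UnboundLocalError: local variable 'groupsz' referenced before assignment
    return causal_mask[:, :, :, :groupsz]

hidden = torch.zeros(1, 8, 16)  # (batch, seq_len, hidden)
mask = torch.zeros(1, 1, 8, 8)  # (batch, 1, seq_len, seq_len)
sdpa_attention_sketch(hidden, mask, training=True)   # works
sdpa_attention_sketch(hidden, mask, training=False)  # reproduces the error
```

If that reading is right, a fix would either bind `groupsz` unconditionally (e.g. falling back to `q_len` when not training) or skip the mask slicing entirely whenever shifted sparse attention is not applied; presumably the maintainer's commit referenced below does something along those lines.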

Others

No response

@the-nine-nation (Author)

The problem is not here.

hiyouga added a commit that referenced this issue on May 13, 2024
hiyouga added the solved (This problem has been already solved) label on May 13, 2024