Traceback (most recent call last):
  File "/home/jovyan/wxh/LLaMA-Factory/src/llamafactory/launcher.py", line 23, in <module>
    launch()
  File "/home/jovyan/wxh/LLaMA-Factory/src/llamafactory/launcher.py", line 19, in launch
    run_exp()
  File "/home/jovyan/wxh/LLaMA-Factory/src/llamafactory/train/tuner.py", line 50, in run_exp
    run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "/home/jovyan/wxh/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 94, in run_sft
    train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
  File "/home/jovyan/conda-env/envs/wxh_lf/lib/python3.10/site-packages/transformers/trainer.py", line 1938, in train
    return inner_training_loop(
  File "/home/jovyan/conda-env/envs/wxh_lf/lib/python3.10/site-packages/transformers/trainer.py", line 2438, in _inner_training_loop
    self.control = self.callback_handler.on_train_end(args, self.state, self.control)
  File "/home/jovyan/conda-env/envs/wxh_lf/lib/python3.10/site-packages/transformers/trainer_callback.py", line 463, in on_train_end
    return self.call_event("on_train_end", args, state, control)
  File "/home/jovyan/conda-env/envs/wxh_lf/lib/python3.10/site-packages/transformers/trainer_callback.py", line 507, in call_event
    result = getattr(callback, event)(
  File "/home/jovyan/wxh/LLaMA-Factory/src/llamafactory/train/callbacks.py", line 168, in on_train_end
    model.delete_adapter("pissa_init")
  File "/home/jovyan/conda-env/envs/wxh_lf/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 821, in delete_adapter
    raise ValueError(f"Adapter {adapter_name} does not exist")
ValueError: Adapter pissa_init does not exist
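The `on_train_end` callback assumes the `pissa_init` adapter is still registered when training finishes, but `peft` raises `ValueError` if it was already removed. A defensive guard is to check the model's `peft_config` registry before deleting. The sketch below illustrates the pattern only: `StubPeftModel` is a hypothetical stand-in (so the snippet runs without `peft` installed), not the real `PeftModel`, and `safe_delete_adapter` is not a LLaMA-Factory function.

```python
class StubPeftModel:
    """Hypothetical stand-in for peft's PeftModel: it only models the
    adapter registry (`peft_config`) and the delete_adapter behavior."""

    def __init__(self, adapters):
        self.peft_config = {name: object() for name in adapters}

    def delete_adapter(self, name):
        # Mirrors the error seen in the traceback above.
        if name not in self.peft_config:
            raise ValueError(f"Adapter {name} does not exist")
        del self.peft_config[name]


def safe_delete_adapter(model, name):
    """Delete an adapter only if it is still registered.

    Returns True if the adapter was deleted, False if it was already gone,
    instead of letting delete_adapter raise ValueError.
    """
    if name in getattr(model, "peft_config", {}):
        model.delete_adapter(name)
        return True
    return False


# The adapter was already removed earlier in the run: no exception, just False.
model = StubPeftModel(["default"])
print(safe_delete_adapter(model, "pissa_init"))  # False

# Normal case: the adapter exists and is removed.
model = StubPeftModel(["pissa_init", "default"])
print(safe_delete_adapter(model, "pissa_init"))  # True
```

Real `PeftModel` instances expose the same `peft_config` dict, so the same `if name in model.peft_config` check applies before `model.delete_adapter(name)`.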
Expected behavior
No response
Others
No response
  File "/home/ps/.pyenv/versions/3.10.14/envs/train/lib/python3.10/site-packages/peft/utils/save_and_load.py", line 395, in set_peft_model_state_dict
    load_result = model.load_state_dict(peft_model_state_dict, strict=False)
  File "/home/ps/.pyenv/versions/3.10.14/envs/train/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2215, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM:
    size mismatch for base_model.model.model.layers.0.attention.wqkv.lora_A.default.weight: copying a param with shape torch.Size([8, 4096]) from checkpoint, the shape in current model is torch.Size([16, 4096]).
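The size mismatch above encodes the cause directly: a LoRA `A` matrix has shape `(r, in_features)`, so a checkpoint tensor of shape `[8, 4096]` was saved with rank 8, while the freshly built model expects rank 16. The checkpoint's rank must match the `lora_rank` used when resuming. A minimal sketch of reading the rank off the shapes (plain tuples stand in for `torch.Size`; `lora_rank_from_shape` is a hypothetical helper, not a peft API):

```python
def lora_rank_from_shape(lora_A_shape):
    """For a LoRA A matrix of shape (r, in_features), the first
    dimension is the adapter rank r."""
    return lora_A_shape[0]


# Shapes taken verbatim from the RuntimeError message above.
checkpoint_shape = (8, 4096)   # torch.Size([8, 4096]) in the checkpoint
model_shape = (16, 4096)       # torch.Size([16, 4096]) in the current model

ckpt_r = lora_rank_from_shape(checkpoint_shape)    # 8
model_r = lora_rank_from_shape(model_shape)        # 16

if ckpt_r != model_r:
    # The checkpoint was trained with lora_rank=8 but the current config
    # builds the adapters with lora_rank=16; set lora_rank back to the
    # checkpoint's value (or retrain) before loading/resuming.
    print(f"rank mismatch: checkpoint r={ckpt_r}, model r={model_r}")
```

This is why `load_state_dict` fails even with `strict=False`: the parameter name matches, but the tensor shapes do not, and shape mismatches are always reported.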
Reminder
System Info
llamafactory version: 0.8.4.dev0
Reproduction
Running the PiSSA example under examples/ fails with the error shown above.
Expected behavior
No response
Others
No response