Reminder
Reproduction
CUDA_VISIBLE_DEVICES=9 python3.11 src/train.py \
    --stage rm \
    --do_train True \
    --model_name_or_path /data/hf_deploy/internlm2-7b-human-v2_merge_sft \
    --create_new_adapter \
    --dataset hh_rlhf_prompt \
    --template intern \
    --finetuning_type lora \
    --lora_target wqkv \
    --output_dir /data/LLaMA-Factory-main/results/intern_hh_rw \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 5 \
    --save_steps 100 \
    --learning_rate 1e-5 \
    --num_train_epochs 2 \
    --plot_loss \
    --overwrite_output_dir \
    --fp16
Expected behavior
Running RM fine-tuning with the latest code of this framework fails with an error saying the model has no language model head. I saw that a previous issue fixed the same problem for chatglm2-6b, but training internlm2 with the latest framework code still raises this error.
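For reference, here is a minimal diagnostic sketch (not part of LLaMA-Factory; the attribute names are assumptions based on the trl versions I have looked at). trl's AutoModelForCausalLMWithValueHead checks the wrapped model for a language-model head under names such as lm_head or embed_out, while InternLM2's remote code appears to name its output projection "output", which would explain the ValueError in the traceback below:

# Hedged diagnostic sketch: print which head-like attribute names the merged
# InternLM2 checkpoint actually exposes before it gets wrapped with a value head.
from transformers import AutoModelForCausalLM

model_path = "/data/hf_deploy/internlm2-7b-human-v2_merge_sft"  # path from the command above
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)

for name in ("lm_head", "embed_out", "output"):
    print(f"{name}: {hasattr(model, name)}")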
System Info
Traceback (most recent call last):
  File "/data/hf_deploy/LLaMA-Factory-main/src/train_bash.py", line 14, in <module>
    main()
  File "/data/hf_deploy/LLaMA-Factory-main/src/train_bash.py", line 5, in main
    run_exp()
  File "/data/hf_deploy/LLaMA-Factory-main/src/llmtuner/train/tuner.py", line 33, in run_exp
    run_rm(model_args, data_args, training_args, finetuning_args, callbacks)
  File "/data/hf_deploy/LLaMA-Factory-main/src/llmtuner/train/rm/workflow.py", line 31, in run_rm
    model, tokenizer = load_model_and_tokenizer(
                       ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/hf_deploy/LLaMA-Factory-main/src/llmtuner/model/loader.py", line 195, in load_model_and_tokenizer
    model: "AutoModelForCausalLMWithValueHead" = AutoModelForCausalLMWithValueHead.from_pretrained(model)
                                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/miniconda3/lib/python3.11/site-packages/trl/models/modeling_base.py", line 276, in from_pretrained
    model = cls(pretrained_model, **multi_adapter_args, **trl_model_args)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/miniconda3/lib/python3.11/site-packages/trl/models/modeling_value_head.py", line 113, in __init__
    raise ValueError("The model does not have a language model head, please use a model that has one.")
ValueError: The model does not have a language model head, please use a model that has one.
Others
No response
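For what it's worth, here is a hypothetical workaround sketch under the same assumption (InternLM2 exposing its LM head as "output" rather than "lm_head"). This is not a confirmed fix, only an illustration of the kind of aliasing that could let trl find a head; whether the framework should handle internlm2 this way is for the maintainers to decide.

# Hypothetical workaround sketch, not a confirmed fix: expose InternLM2's output
# projection under the attribute name trl looks for before attaching the value head.
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead

model_path = "/data/hf_deploy/internlm2-7b-human-v2_merge_sft"  # path from the report
base = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)

if not hasattr(base, "lm_head") and hasattr(base, "output"):
    # Assumption: InternLM2's remote code names its LM head "output".
    base.lm_head = base.output

model = AutoModelForCausalLMWithValueHead.from_pretrained(base)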