RM fine-tuning of InternLM2 fails with an error #3625

Closed
TSCollepse opened this issue May 8, 2024 · 1 comment
Labels: solved (This problem has been already solved)

Reminder

  • I have read the README and searched the existing issues.

Reproduction

CUDA_VISIBLE_DEVICES=9 python3.11 src/train.py \
    --stage rm \
    --do_train True \
    --model_name_or_path /data/hf_deploy/internlm2-7b-human-v2_merge_sft \
    --create_new_adapter \
    --dataset hh_rlhf_prompt \
    --template intern \
    --finetuning_type lora \
    --lora_target wqkv \
    --output_dir /data/LLaMA-Factory-main/results/intern_hh_rw \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 5 \
    --save_steps 100 \
    --learning_rate 1e-5 \
    --num_train_epochs 2 \
    --plot_loss \
    --overwrite_output_dir \
    --fp16
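
For context, the failing path of the command above corresponds roughly to the sketch below (simplified; the checkpoint path is the reporter's local directory and most loader kwargs are omitted). The TRL wrapping step is where the error in the traceback is raised.

```python
# Rough sketch of what the RM stage does when loading the model
# (simplified from src/llmtuner/model/loader.py; kwargs omitted).
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead

# InternLM2 ships custom modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(
    "/data/hf_deploy/internlm2-7b-human-v2_merge_sft",
    trust_remote_code=True,
)

# TRL attaches a scalar value head for reward modeling; its constructor
# first verifies that the base model exposes a language-model head and
# raises the ValueError seen in the traceback otherwise.
model = AutoModelForCausalLMWithValueHead.from_pretrained(model)
```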

Expected behavior

Running RM fine-tuning with the latest code of this framework fails with an error saying the model has no language model head. An earlier issue fixed the same problem for chatglm2-6b, but training InternLM2 with the latest framework still raises this error.
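
A quick way to see whether a given checkpoint actually exposes a language-model head is to inspect the loaded model directly (a hypothetical diagnostic, not part of LLaMA-Factory):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "/data/hf_deploy/internlm2-7b-human-v2_merge_sft",
    trust_remote_code=True,
)
# TRL's value-head wrapper looks for a head attribute such as `lm_head`
# on the loaded model; a missing attribute (or get_output_embeddings()
# returning None) is the usual symptom behind the ValueError below.
print(hasattr(model, "lm_head"))
print(model.get_output_embeddings())
```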

System Info

Traceback (most recent call last):
File "/data/hf_deploy/LLaMA-Factory-main/src/train_bash.py", line 14, in
main()
File "/data/hf_deploy/LLaMA-Factory-main/src/train_bash.py", line 5, in main
run_exp()
File "/data/hf_deploy/LLaMA-Factory-main/src/llmtuner/train/tuner.py", line 33, in run_exp
run_rm(model_args, data_args, training_args, finetuning_args, callbacks)
File "/data/hf_deploy/LLaMA-Factory-main/src/llmtuner/train/rm/workflow.py", line 31, in run_rm
model, tokenizer = load_model_and_tokenizer(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/hf_deploy/LLaMA-Factory-main/src/llmtuner/model/loader.py", line 195, in load_model_and_tokenizer
model: "AutoModelForCausalLMWithValueHead" = AutoModelForCausalLMWithValueHead.from_pretrained(model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/miniconda3/lib/python3.11/site-packages/trl/models/modeling_base.py", line 276, in from_pretrained
model = cls(pretrained_model, **multi_adapter_args, **trl_model_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/miniconda3/lib/python3.11/site-packages/trl/models/modeling_value_head.py", line 113, in init
raise ValueError("The model does not have a language model head, please use a model that has one.")
ValueError: The model does not have a language model head, please use a model that has one.

Others

No response

@hiyouga added the pending (This problem is yet to be addressed) label May 8, 2024
@hiyouga closed this as completed in d9cdddd May 8, 2024
@hiyouga (Owner) commented May 8, 2024

Fixed. Please make sure your model files are up to date.
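
With up-to-date InternLM2 model files and a current LLaMA-Factory checkout, the value-head wrapping should succeed; a minimal check (a sketch, assuming the local checkpoint has been refreshed from the Hub):

```python
from trl import AutoModelForCausalLMWithValueHead

# Should no longer raise once the model files are up to date.
model = AutoModelForCausalLMWithValueHead.from_pretrained(
    "/data/hf_deploy/internlm2-7b-human-v2_merge_sft",
    trust_remote_code=True,
)
print(type(model.v_head))  # the scalar value head added on top of the LM
```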

@hiyouga added the solved (This problem has been already solved) label and removed the pending label May 8, 2024