
Using a saved Reward Model for inference #4379

Closed
1 task done
Zhenwen-NLP opened this issue Jun 19, 2024 · 6 comments
Labels
solved This problem has been already solved

Comments

@Zhenwen-NLP

Reminder

  • I have read the README and searched the existing issues.

System Info

The exported reward model is loaded as follows:

model = AutoModelForCausalLMWithValueHead.from_pretrained('...')

A warning pops up: "no v_head weight is found. This IS expected if you are not resuming PPO training."

Is this normal and safe to ignore? I want to run inference with the saved reward model and output its value.

Reproduction

model = AutoModelForCausalLMWithValueHead.from_pretrained('...')
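
For reference, a minimal sketch of the intended inference usage, assuming the exported directory can be loaded with trl's AutoModelForCausalLMWithValueHead; the path and input text are placeholders, not from this issue:

import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("path/to/exported_rm")  # placeholder path
model = AutoModelForCausalLMWithValueHead.from_pretrained("path/to/exported_rm")
model.eval()

inputs = tokenizer("query text followed by the response to score", return_tensors="pt")
with torch.no_grad():
    # trl's value-head model returns (lm_logits, loss, values); values has shape (batch, seq_len)
    _, _, values = model(**inputs)
print(values[0, -1].item())  # reward score at the last token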

Expected behavior

No response

Others

No response

@github-actions github-actions bot added the pending This problem is yet to be addressed label Jun 19, 2024
@1250658183

1250658183 commented Jun 21, 2024

Following up on the OP: I used similar loading code and compared the last-token scores of chosen vs. rejected samples on the training set. Chosen scored higher than rejected only about 65% of the time, so it seems I am not handling this correctly either. The score comparison code follows the implementation inside PPO/trainer.py:

for i in range(values_chosen.size(0)):
    # locate the last non-padding token of the i-th chosen sequence
    end_indexes = (batch_chosen["input_ids"][i] != tokenizer.pad_token_id).nonzero()
    end_index = end_indexes[-1].item() if len(end_indexes) else 0
    rewards.append(values_chosen[i, end_index].float().detach().cpu())  # use fp32 type
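
The names used above (model, tokenizer, batch_chosen, values_chosen, rewards) are not defined in the comment; a hedged sketch of the assumed setup, with chosen_texts as a placeholder list of query-plus-chosen-response strings:

import torch  # model and tokenizer are assumed to come from the loading code in the issue

chosen_texts = ["<query> ... <chosen response>"]  # placeholder training-set strings
rewards = []
batch_chosen = tokenizer(chosen_texts, return_tensors="pt", padding=True)
with torch.no_grad():
    # trl's value-head model returns (lm_logits, loss, values); values has shape (batch, seq_len)
    _, _, values_chosen = model(**batch_chosen)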

@1250658183

1250658183 commented Jun 21, 2024


@Zhenwen-NLP I have located the problem: AutoModelForCausalLMWithValueHead.from_pretrained('...') does not load the value_head, which in my script showed up as inference accuracy staying close to 50%. The detailed handling is in model/loader.py of the original repo. A simple snippet that additionally loads the value_head:

import torch

vhead_params = torch.load('LLaMA-Factory/saves/Qwen1___5-14B-Chat/full/value_head.bin', map_location="cpu")
print('vhead_params:', vhead_params)
# get the current model's state_dict
model_state_dict = model.state_dict()

# walk through vhead_params and update the model's state_dict
for name, param in vhead_params.items():
    if name in model_state_dict:
        # make sure the provided parameter has the same shape as the model's
        assert param.shape == model_state_dict[name].shape, f"Shape mismatch at {name}: " \
                                                            f"model param shape {model_state_dict[name].shape}, " \
                                                            f"provided param shape {param.shape}"
        # update the model's state_dict
        model_state_dict[name] = param
    else:
        raise KeyError(f"{name} is not a parameter of the model.")

# re-add the 'pretrained_model.' prefix that the wrapper's state_dict() strips,
# so the keys match what load_state_dict expects
new_state_dict = {k if 'v_head' in k else 'pretrained_model.' + k: v for k, v in model_state_dict.items()}

# load the updated state_dict back into the model
model.load_state_dict(new_state_dict)

@hiyouga
Owner

hiyouga commented Jun 21, 2024

The value head parameters are currently not copied over during merge; for now you can copy valuehead.safetensors over manually.
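
If you copy the file manually, a hedged sketch of loading it afterwards; the path is a placeholder and the v_head. key prefix is assumed based on the torch.load output discussed above:

from safetensors.torch import load_file

vhead_params = load_file("path/to/exported_rm/value_head.safetensors")  # placeholder path
# assumed key layout: 'v_head.summary.weight', 'v_head.summary.bias'
model.v_head.load_state_dict({k.replace("v_head.", ""): v for k, v in vhead_params.items()})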

@hiyouga
Owner

hiyouga commented Jun 24, 2024

Specify stage: rm when exporting.
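
For reference, a hedged sketch of an export config with stage: rm, to be passed to llamafactory-cli export; apart from stage, the field names and paths are assumptions based on the usual LLaMA-Factory merge/export examples and should be adapted:

### model (placeholder paths)
model_name_or_path: path/to/base_model
adapter_name_or_path: saves/reward_model/lora
template: default
finetuning_type: lora
stage: rm

### export
export_dir: models/exported_rm
export_size: 2
export_device: cpu
export_legacy_format: false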

@hiyouga hiyouga added solved This problem has been already solved and removed pending This problem is yet to be addressed labels Jun 24, 2024
PrimaLuz pushed a commit to PrimaLuz/LLaMA-Factory that referenced this issue Jul 1, 2024
@yata0

yata0 commented Jul 9, 2024

"and no v_head weight is found. This IS expected if you are not resuming PPO training"
使用了export的方法,指定了stage为rm,且路径下有value_head.safetensors,还是报了这个错
image

xtchen96 pushed a commit to xtchen96/LLaMA-Factory that referenced this issue Jul 17, 2024
@world2025

@1250658183 Hi, could you share your reward model inference script? Thanks.
