How to fix a model that gives irrelevant replies after PPO #4012

Closed · 1 task done
paopao0226 opened this issue May 31, 2024 · 4 comments
Labels: solved (This problem has been already solved)

Comments

paopao0226 commented May 31, 2024

Reminder

  • I have read the README and searched the existing issues.

Reproduction

RM training:

deepspeed --include localhost:1,2,3,4 --master_port=9001 src/train_bash.py \
  --deepspeed ds_config.json --stage rm --do_train True \
  --model_name_or_path /home/ywj_0/llm_safety/model/llama2-7b-hf \
  --finetuning_type full --template llama2 \
  --dataset_dir /home/ywj_0/llm_safety/dataset/raw/v4_training/train \
  --dataset safety_llama_rlhf --cutoff_len 4096 \
  --learning_rate 1e-06 --num_train_epochs 1.0 --max_samples 100000 \
  --per_device_train_batch_size 2 --gradient_accumulation_steps 8 \
  --lr_scheduler_type cosine --max_grad_norm 1.0 \
  --logging_steps 5 --save_steps 500 --warmup_steps 50 \
  --output_dir /home/ywj_0/llm_safety/model/safety-rm-0530 \
  --fp16 True --plot_loss True --val_size 0.1 \
  --per_device_eval_batch_size 1 --evaluation_strategy steps --eval_steps 50

PPO training:

deepspeed --include localhost:1,2,3,4 --master_port=9010 src/train_bash.py \
  --deepspeed ds_config.json --stage ppo --do_train True \
  --model_name_or_path /home/ywj_0/llm_safety/model/llama2-7b-hf \
  --finetuning_type full --template llama2 \
  --dataset_dir /home/ywj_0/llm_safety/dataset/raw/v4_training/train \
  --dataset safety_llama_ppo --cutoff_len 4096 \
  --learning_rate 5e-06 --num_train_epochs 1 --max_samples 100000 \
  --per_device_train_batch_size 2 --gradient_accumulation_steps 8 \
  --lr_scheduler_type cosine --max_grad_norm 1.0 \
  --logging_steps 5 --save_steps 100 --warmup_steps 50 \
  --output_dir /home/ywj_0/llm_safety/model/safety-ppo-0530 \
  --bf16 True --reward_model /home/ywj_0/llm_safety/model/safety-rm-0530 \
  --reward_model_type full --plot_loss True --temperature 1.0 \
  --val_size 0.1 --per_device_eval_batch_size 1 \
  --evaluation_strategy steps --eval_steps 50

The reward model's loss is also quite low, so I don't understand why this happens.
[Three screenshots attached in the original issue.]

Expected behavior

No response

System Info

No response

Others

No response

hiyouga (Owner) commented May 31, 2024

The PPO learning rate may be too large.

paopao0226 (Author) commented

> The PPO learning rate may be too large.

I tried shrinking the learning rate to 1e-6, but the KL still turns negative partway through training, and then the model starts giving irrelevant replies again. I'm stuck.

--deepspeed ds_config.json --stage ppo --do_train True \
  --model_name_or_path /home/ywj_0/llm_safety/model/llama2-7b-hf \
  --finetuning_type full --template llama2 \
  --dataset_dir /home/ywj_0/llm_safety/dataset/raw/v4_training/train \
  --dataset safety_llama_ppo --cutoff_len 4096 \
  --learning_rate 1e-06 --num_train_epochs 1 --max_samples 100000 \
  --per_device_train_batch_size 1 --gradient_accumulation_steps 8 \
  --lr_scheduler_type cosine --max_grad_norm 1.0 \
  --logging_steps 5 --save_steps 100 --warmup_steps 100 \
  --output_dir /home/ywj_0/llm_safety/model/safety-ppo-0530 \
  --bf16 True --reward_model /home/ywj_0/llm_safety/model/safety-rm-0530 \
  --reward_model_type full --plot_loss True

These are my PPO arguments; the PPO dataset is just a JSON file of queries paired with the responses I want for them.
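
(Background added here for context; it is not part of the original reply.) LLaMA-Factory's PPO stage builds on TRL, where each sampled token's reward is a KL penalty against the frozen reference model, with the reward-model score credited at the final token, roughly:

    r_t = -\beta \big( \log \pi_\theta(y_t \mid x, y_{<t}) - \log \pi_{\mathrm{ref}}(y_t \mid x, y_{<t}) \big) + \mathbb{1}[t = T] \, R_\phi(x, y)

The logged KL is the mean of these per-token log-ratios, so a persistently negative value means the policy assigns lower probability to its own samples than the reference model does; in TRL this usually points to problematic rollout generation settings rather than genuine improvement.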

hiyouga added the pending label (This problem is yet to be addressed) on Jun 3, 2024
hiyouga (Owner) commented Jun 3, 2024

Your PPO command is missing some arguments; please refer to the example scripts.
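
For reference, the example scripts pass explicit generation and PPO-specific flags on top of the training arguments above. A minimal sketch of that kind of addition, assuming flag names from LLaMA-Factory's generation/RLHF argument set; the values are illustrative, so verify against the shipped examples:

    # Hypothetical extra flags appended to the PPO command; the values are
    # illustrative, not the project's recommended settings.
      --max_new_tokens 512 \
      --top_k 0 \
      --top_p 0.9 \
      --ppo_epochs 4 \
      --ppo_score_norm True \
      --ppo_whiten_rewards True
    # max_new_tokens/top_k/top_p control rollout sampling; ppo_epochs sets the
    # optimization epochs per rollout batch; score normalization and reward
    # whitening stabilize the reward scale before advantage estimation.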

hiyouga added the solved label (This problem has been already solved) and removed the pending label on Jun 5, 2024
hiyouga closed this as completed on Jun 5, 2024
hiyouga added a commit that referenced this issue on Jun 6, 2024
hiyouga (Owner) commented Jun 6, 2024

The previous PPO implementation had some issues; they have now been fixed.
