Default parameter suggestion: consider lowering the learning rate in the SFT stage #4944

Closed
1 task done
bayma-1 opened this issue Jul 24, 2024 · 1 comment
Labels
solved This problem has been already solved

Comments


bayma-1 commented Jul 24, 2024

Reminder

  • I have read the README and searched the existing issues.

System Info

  • llamafactory version: 0.8.4.dev0
  • Platform: Linux-5.4.0-60-generic-x86_64-with-glibc2.17
  • Python version: 3.8.19
  • PyTorch version: 2.2.2+cu121 (GPU)
  • Transformers version: 4.41.2
  • Datasets version: 2.19.1
  • Accelerate version: 0.31.0
  • PEFT version: 0.11.1
  • TRL version: 0.8.6
  • GPU type: Tesla V100-PCIE-32GB
  • DeepSpeed version: 0.14.2

Reproduction

```yaml
### model
model_name_or_path: path_to_Qwen2-7B-Instruct

### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json

### dataset
dataset: alpaca_data_en_52k
template: qwen
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/qwen2/full/sft
logging_steps: 1
save_steps: 50000
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 1.0e-5
max_steps: 1000
#num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
fp16: true
ddp_timeout: 180000000

### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
```
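
For reference, the learning-rate trajectory implied by the settings above (cosine schedule, warmup_ratio: 0.1, max_steps: 1000, peaking at learning_rate: 1.0e-5) can be sketched with the standard transformers scheduler helper, which is what the HF Trainer uses for lr_scheduler_type: cosine. This is an illustrative approximation, not LLaMA-Factory's exact internal call:

```python
# Sketch of the LR schedule implied by the config above:
# 100 warmup steps (warmup_ratio * max_steps), then cosine decay over 1000 steps.
import torch
from transformers import get_cosine_schedule_with_warmup

max_steps = 1000
warmup_steps = int(0.1 * max_steps)  # = 100

param = torch.nn.Parameter(torch.zeros(1))         # dummy parameter
optimizer = torch.optim.AdamW([param], lr=1.0e-5)  # peak learning rate
scheduler = get_cosine_schedule_with_warmup(optimizer, warmup_steps, max_steps)

for step in range(max_steps):
    optimizer.step()
    scheduler.step()
    if step in (0, 99, 499, 999):
        print(f"step {step + 1:4d}: lr = {scheduler.get_last_lr()[0]:.2e}")
```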

Expected behavior

Thanks for taking a look!

  1. Below are the loss curves I obtained with the qwen2-7b-instruct model, keeping the default parameters and changing only the learning rate. Convergence at 1e-5 is clearly better than at 1e-4 (a sketch for re-plotting such curves from the trainer logs follows after this list).
    [screenshot: training-loss curves for learning rates 1e-4 vs 1e-5]
  2. To check whether 1e-4 is indeed on the large side for qwen2-7b-instruct, I consulted the Qwen2 paper and found that the SFT stage uses learning rates between 7e-6 and 7e-7.
    [screenshot: SFT learning-rate settings reported in the Qwen2 paper]
  3. Would you consider lowering the default learning rate for the SFT stage?
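
To reproduce the comparison in point 1: with plot_loss: true, the trainer writes a trainer_state.json into output_dir whose log_history records the per-step loss. A minimal re-plotting sketch, assuming the standard HF Trainer state-file layout (the path matches the config in this issue):

```python
# Minimal sketch, assuming the standard HF Trainer trainer_state.json layout.
import json
import matplotlib.pyplot as plt

with open("saves/qwen2/full/sft/trainer_state.json") as f:
    state = json.load(f)

entries = [e for e in state["log_history"] if "loss" in e]
steps = [e["step"] for e in entries]
losses = [e["loss"] for e in entries]

plt.plot(steps, losses, label="lr=1e-5")  # repeat with the 1e-4 run's output_dir
plt.xlabel("step")
plt.ylabel("training loss")
plt.legend()
plt.savefig("loss_comparison.png")
```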

Others

No response

github-actions bot added the pending (This problem is yet to be addressed) label on Jul 24, 2024
hiyouga added the solved (This problem has been already solved) label and removed the pending label on Jul 24, 2024

hiyouga (Owner) commented Jul 24, 2024

fixed
