Why does fine-tuning Qwen2 default to multi-GPU? I am clearly training on a single GPU, and I also tried single-GPU in the web UI, but it still defaults to multi-GPU #4137
Comments
At line 80 of cli.py, change it to
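The comment above points at the launch-mode decision in cli.py. A minimal sketch of how such a launcher might choose between the single- and multi-GPU paths by counting visible CUDA devices — this is hypothetical illustration logic, not LLaMA-Factory's actual cli.py code:

```python
import os

def choose_launch_mode() -> str:
    """Hypothetical helper: pick a launch path from CUDA_VISIBLE_DEVICES."""
    visible = os.environ.get("CUDA_VISIBLE_DEVICES")
    if visible is not None:
        # Count the comma-separated device indices the process may see.
        n_gpus = len([d for d in visible.split(",") if d.strip()])
    else:
        # Without the variable set, the real device count would need a
        # driver query; assume one device for this sketch.
        n_gpus = 1
    return "multi" if n_gpus > 1 else "single"

# Restricting visibility to one device forces the single-GPU path.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
print(choose_launch_mode())  # → single
```

This illustrates the reported symptom: if all GPUs are visible, a launcher that counts devices will take the multi-GPU path even when you intended a single-GPU run.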
Fixed.
I ran `llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_bitsandbytes.yaml` to fine-tune Qwen2 with 8-bit quantization and got
Use bf16.
Where do I add it? My config is:

model_name_or_path: E:\LLaMA-Factory\qwen\Qwen2-7B-Instruct
stage: sft
dataset: xunlian
output_dir: saves/qwen2-7b/lora/sft
per_device_train_batch_size: 1
val_size: 0.1
Change fp16 to bf16.
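In LLaMA-Factory-style training YAML, precision is toggled with a boolean flag. A hedged fragment showing the swap the reply suggests (key names assumed from the fp16/bf16 naming in this thread):

```yaml
# precision: enable bf16 instead of fp16 (assumed keys)
bf16: true
# fp16: true   # remove or disable this line
```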
OK, thanks.
Reminder
System Info
Reproduction
llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml
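One way to pin the reproduction command above to a single GPU is to restrict visible devices before launching. This relies on standard CUDA environment behavior (processes only see devices listed in `CUDA_VISIBLE_DEVICES`), not on a LLaMA-Factory-specific flag:

```shell
# Limit the process to GPU 0 so the launcher counts only one device.
export CUDA_VISIBLE_DEVICES=0
# Then launch as in the reproduction above (run in your own environment):
# llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml
```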
Expected behavior
No response
Others
No response