fix examples #3769
3df986c
The previous example script was wrong; it has been updated: https://github.com/hiyouga/LLaMA-Factory/blob/main/examples/README_zh.md#%E6%8E%A8%E7%90%86-lora-%E6%A8%A1%E5%9E%8B
It still doesn't work. I pulled the latest version, used hf_engine, and followed the instructions, but inference still only starts on a single GPU.
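One thing worth ruling out when `CUDA_VISIBLE_DEVICES` seems to be ignored: the variable is only honored if it is set before the CUDA runtime initializes in the process. A minimal sketch (assuming PyTorch is installed; the device IDs here are illustrative) to verify how many GPUs the process actually sees:

```python
import os

# CUDA_VISIBLE_DEVICES must be set before torch/CUDA is first initialized;
# setting it afterwards has no effect on an already-initialized runtime.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2"

visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
print(f"Requested {len(visible)} visible GPUs: {visible}")

# If torch is available, confirm the runtime agrees with the request:
try:
    import torch
    if torch.cuda.is_available():
        print("torch.cuda.device_count() =", torch.cuda.device_count())
except ImportError:
    pass
```

If `torch.cuda.device_count()` reports fewer devices than requested, the variable was likely set too late or overridden elsewhere (e.g. by a launcher script).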
Reminder
Reproduction
After training a model with:
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_bitsandbytes.yaml
how can I run multi-GPU inference? I tried specifying CUDA_VISIBLE_DEVICES with:
CUDA_VISIBLE_DEVICES=0,1,2 llamafactory-cli webchat examples/merge_lora/llama3_lora_sft.yaml
but found it does not take effect.
Expected behavior
No response
System Info
No response
Others
No response