Reminder

System Info

llamafactory version: 0.8.4.dev0

Reproduction
Following the article https://zhuanlan.zhihu.com/p/695287607 up to the step of starting the API server:
CUDA_VISIBLE_DEVICES=0 API_PORT=8000 llamafactory-cli api \
    --model_name_or_path /root/workspace/models-modelscope/Meta-Llama-3-8B-Instruct \
    --adapter_name_or_path ./saves/LLaMA3-8B/lora/sft \
    --template llama3 \
    --finetuning_type lora
Since my environment requires configuring root_path, the service cannot run properly. For FastAPI's root_path documentation, see: https://fastapi.tiangolo.com/zh/advanced/behind-a-proxy/
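For context, root_path is how FastAPI supports running behind a reverse proxy that strips a path prefix. A minimal sketch of the mechanism, assuming a proxy that forwards /api/... to the app (the /api prefix and the /ping route are hypothetical examples, not from this report):

    from fastapi import FastAPI

    # When a reverse proxy forwards /api/v1/models to this app as /v1/models,
    # FastAPI must be told the stripped prefix so that generated URLs (e.g.
    # the docs page and openapi.json) point back through the proxy.
    app = FastAPI(root_path="/api")

    @app.get("/ping")
    def ping():
        return {"status": "ok"}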
Expected behavior

No response

Others

No response
Use the FASTAPI_ROOT_PATH environment variable.
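Assuming the API server reads FASTAPI_ROOT_PATH at startup and passes it through to FastAPI (the /llm prefix below is a hypothetical value for illustration), the reproduction command could then be launched behind the proxy like this:

    FASTAPI_ROOT_PATH=/llm CUDA_VISIBLE_DEVICES=0 API_PORT=8000 llamafactory-cli api \
        --model_name_or_path /root/workspace/models-modelscope/Meta-Llama-3-8B-Instruct \
        --adapter_name_or_path ./saves/LLaMA3-8B/lora/sft \
        --template llama3 \
        --finetuning_type lora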
fix hiyouga#5307 (commits 8b588c7, 9d050c4, 4bf3f07)