[Bug] AttributeError: 'MiniCPM3ForCausalLM' object has no attribute 'get_module_name' #1416
Comments
Hi @lixiangtiandashen, this should be easy to fix if you copy these functions into minicpm3.py. Can you contribute a fix? cc @Ying1123
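For reference, a minimal sketch of what copying those helpers into minicpm3.py could look like. This assumes MiniCPM3 fuses its projections into `qkv_proj` and `gate_up_proj` the same way the Llama implementation does; the mapping below is illustrative, not the final patch:

```python
# Illustrative sketch only: the helper that Llama-family models expose and
# that minicpm3.py is missing. Assumes q/k/v are fused into qkv_proj and
# gate/up into gate_up_proj, as in the Llama implementation.
class MiniCPM3ForCausalLM:  # existing class in python/sglang/srt/models/minicpm3.py
    def get_module_name(self, name: str) -> str:
        # Map a LoRA target module (e.g. "q_proj") to the fused module
        # name this implementation actually registers.
        params_mapping = {
            "q_proj": "qkv_proj",
            "k_proj": "qkv_proj",
            "v_proj": "qkv_proj",
            "gate_proj": "gate_up_proj",
            "up_proj": "gate_up_proj",
        }
        return params_mapping.get(name, name)
```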
Oh oh, yes, indeed, I'd love to do this.
@merrymercy

```
python3 -m sglang.launch_server --model-path /base_model --tokenizer-path /base_model --lora-paths /lora_model0 /lora_model1 --disable-radix --disable-cuda-graph --max-loras-per-batch 2 --mem-fraction-static 0.5 --random-seed 0 --enable-torch-compile
```

```
AttributeError: 'Gemma2ForCausalLM' object has no attribute 'get_module_name'
```
Because this issue has not been resolved, only the Llama series of models currently supports LoRA.
@lixiangtiandashen When using the … However, when I do inference, the error below occurs.
The Gemma2 model was fixed by PR #2330.
Checklist
Describe the bug
Except for the LLaMA models, the other model implementations do not define a "get_module_name" method, so the LoRA configuration cannot be loaded.
python/sglang/srt/lora/lora_manager.py:106
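For context, the failing call site looks roughly like the following. This is a paraphrase of the logic around that line, not the exact source: the LoRA manager asks the base model to translate each target module from the adapter config into the model's own (possibly fused) module name, so any model class without `get_module_name` raises the AttributeError above.

```python
# Rough paraphrase of the call around python/sglang/srt/lora/lora_manager.py:106.
# Variable names here are illustrative, not the actual identifiers.
target_modules = set(
    base_model.get_module_name(module)  # AttributeError for non-Llama models
    for module in config_target_modules  # e.g. {"q_proj", "v_proj"} from the adapter
)
```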
Reproduction
--lora-path openbmb/MiniCPM3-RAG-LoRA
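A complete launch command to reproduce this would look roughly like the Gemma2 report above. The base-model path here is an assumption, and the server CLI used earlier in this thread spells the flag `--lora-paths`:

```bash
# Hypothetical reproduction; openbmb/MiniCPM3-4B as the base model is an assumption.
python3 -m sglang.launch_server \
  --model-path openbmb/MiniCPM3-4B \
  --lora-paths openbmb/MiniCPM3-RAG-LoRA \
  --disable-radix --disable-cuda-graph
```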
Environment
not important