[Bug]: Failed to apply Qwen2VLProcessor when running vllm serve showlab/ShowUI-2B #11762
Closed
Labels: bug
Your current environment
The output of `python collect_env.py`
Model Input Dumps
No response
🐛 Describe the bug
showlab/ShowUI-2B is a GUI grounding model fine-tuned from Qwen/Qwen2-VL-2B.
I had no problem using this model with `vllm serve` until a recent PR merge (#11717), after which the server fails to start; the version before that PR works fine. Below is the error message when I try to run `vllm serve showlab/ShowUI-2B`.
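For anyone triaging: a minimal diagnostic sketch (mine, not from the original report) that loads the checkpoint's processor directly through `transformers`, to confirm it resolves to `Qwen2VLProcessor` (the class vLLM reports failing to apply) independently of vLLM:

```python
# Diagnostic sketch (not from the original report): load the HF processor for
# showlab/ShowUI-2B directly to check that it resolves to Qwen2VLProcessor,
# the class named in the vLLM error.
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("showlab/ShowUI-2B")
# For a Qwen2-VL-derived checkpoint this is expected to print "Qwen2VLProcessor".
print(type(processor).__name__)
```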