MiniCPM-o 2.6 is the latest and most capable model in the MiniCPM-o series. It is built in an end-to-end fashion on SigLip-400M, Whisper-medium-300M, ChatTTS-200M, and Qwen2.5-7B, for a total of 8B parameters. It shows a significant performance improvement over MiniCPM-V 2.6 and introduces new features for real-time speech conversation and multimodal live streaming.
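For context, the checkpoint currently runs through the standard Hugging Face trust_remote_code path. The sketch below only shows loading; the model id comes from the HuggingFace page linked further down, and the dtype/device choices are illustrative assumptions rather than the only supported configuration.

import torch
from transformers import AutoModel, AutoTokenizer

# Load MiniCPM-o 2.6 together with the custom modeling code shipped in the repo;
# trust_remote_code is required because the architecture is not part of transformers itself.
model = AutoModel.from_pretrained(
    "openbmb/MiniCPM-o-2_6",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained("openbmb/MiniCPM-o-2_6", trust_remote_code=True)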
What's your difficulty of supporting the model you want?
File "/usr/local/lib/python3.12/site-packages/vllm/model_executor/models/registry.py", line 421, in inspect_model_cls
return self._raise_for_unsupported(architectures)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/vllm/model_executor/models/registry.py", line 382, in _raise_for_unsupported
raise ValueError(
ValueError: Model architectures ['MiniCPMO'] are not supported for now.
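A likely minimal reproduction, assuming the checkpoint is passed straight to vLLM's offline API (the vllm serve entrypoint goes through the same registry lookup):

from vllm import LLM

# Engine initialization resolves the architecture name from the checkpoint's
# config ("MiniCPMO") against vLLM's model registry; since this build has no
# entry for it, inspect_model_cls raises the ValueError shown above.
llm = LLM(model="openbmb/MiniCPM-o-2_6", trust_remote_code=True)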
Before submitting a new issue...
Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
The model to consider.
Adding support for MiniCPM-o 2.6, please review.
HuggingFace Page: https://huggingface.co/openbmb/MiniCPM-o-2_6
The closest model vllm already supports.
https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/minicpmv.py
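Until a dedicated MiniCPM-o implementation lands, a speculative workaround sketch is to map the new architecture name onto the existing MiniCPM-V class via vLLM's out-of-tree registration API. The class and module names below are taken from the file linked above; whether that code path can actually drive the omni checkpoint (audio encoder, TTS head) is exactly what proper support has to address, so this only silences the registry error.

from vllm import LLM, ModelRegistry
from vllm.model_executor.models.minicpmv import MiniCPMV

# Register the unknown "MiniCPMO" architecture name against the existing
# MiniCPM-V implementation so the registry lookup no longer fails. The
# audio/TTS parts of the checkpoint are not handled by this code path.
ModelRegistry.register_model("MiniCPMO", MiniCPMV)

llm = LLM(model="openbmb/MiniCPM-o-2_6", trust_remote_code=True)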