We are preparing a PR for the official Ollama repository! For now, you can run MiniCPM-V 2.6 with this fork. Feel free to open an issue if you run into any problems, and we will respond as soon as possible.
- cmake version 3.24 or higher
- go version 1.22 or higher
- gcc version 11.4.0 or higher
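To confirm your toolchain meets these requirements, you can check the installed versions; these are the standard version flags, and the exact output format varies by platform:

```shell
cmake --version   # expect 3.24 or higher
go version        # expect go1.22 or higher
gcc --version     # expect 11.4.0 or higher
```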
Clone both our llama.cpp fork and this Ollama fork:
```shell
git clone -b minicpm-v2.6 https://github.com/OpenBMB/ollama.git
cd ollama/llm
git clone -b minicpmv-main https://github.com/OpenBMB/llama.cpp.git
cd ../
```
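If both clones succeeded, the llama.cpp fork should be nested inside the Ollama tree at llm/llama.cpp. A quick sanity check (run from the ollama directory):

```shell
# Confirm the nested checkout and that it is on the expected branch
ls llm/llama.cpp
git -C llm/llama.cpp branch --show-current   # should print minicpmv-main
```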
Here we give a macOS example; see the developer guide for other platforms.

```shell
brew install go cmake gcc
```
Optionally enable debugging and more verbose logging:
```shell
# At build time
export CGO_CFLAGS="-g"

# At runtime
export OLLAMA_DEBUG=1
```
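OLLAMA_DEBUG can also be scoped to a single invocation rather than exported; for example, assuming the server binary has already been built:

```shell
# One-off debug run without changing the shell environment
OLLAMA_DEBUG=1 ./ollama serve
```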
Get the required libraries and build the native LLM code:
```shell
go generate ./...
```
Build ollama:
```shell
go build .
```
Start the server:
```shell
./ollama serve
```
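To verify the server is up, query it from a second terminal. This assumes the default listen address of 127.0.0.1:11434 (adjust if you have set OLLAMA_HOST):

```shell
# The root endpoint replies with "Ollama is running"
curl http://localhost:11434
```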
- Create a file named `Modelfile` following this example (a hedged sketch is shown after this list); you may need to change the `FROM` fields to point to your local file paths.
- Create the model in Ollama:
```shell
ollama create minicpm-v2.6 -f examples/minicpm-v2.6/Modelfile
```
- Run:
```shell
ollama run minicpm-v2.6
```
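As referenced in the list above, here is a minimal sketch of what the `Modelfile` can look like. The paths below are placeholders, and the exact `FROM` entries, template, and stop tokens depend on how you exported the GGUF files, so treat the shipped `examples/minicpm-v2.6/Modelfile` as the authoritative version:

```
# Placeholder path to the language model GGUF
FROM ./MiniCPM-V-2_6/ggml-model-Q4_K_M.gguf
# Placeholder path to the vision projector GGUF
FROM ./MiniCPM-V-2_6/mmproj-model-f16.gguf

# MiniCPM-V 2.6 uses ChatML-style special tokens (assumption; verify
# against the shipped example before relying on it)
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
```

Once the model is created, you can attach an image by including its file path in the prompt, e.g. `ollama run minicpm-v2.6 "What is in this image? ./test.png"`.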