feat(oci): support OCI images and Ollama models #2628
Merged
Description
This PR fixes #2527
This PR fixes #1028
Model URLs can now also be specified with an `oci://` or `ollama://` prefix. Note that when `oci://` is used, the image content is unpacked to the model path by default, so the container has to be a naked one (e.g. built with Docker from a `FROM scratch` Dockerfile). The `ollama://` prefix can be used as, for example, `ollama://llama3` or `ollama://gemma:2b`, and so on. My plan is to further make the installation more flexible so that `local-ai models install ...` can be used directly with a URL, `oci://`, or `ollama://`.
How it works
Model URLs can be specified as usual, for example in the YAML config file:
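A minimal sketch of such a configuration (the model name is illustrative):

```yaml
name: gemma
parameters:
  model: ollama://gemma:2b
```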
or start the Ollama model directly:
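For instance (the model reference is illustrative):

```bash
local-ai run ollama://gemma:2b
```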
Alternatively, the model can be pre-populated with `local-ai models install` too:
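For example (assuming the install command accepts the same prefixes; the model reference is illustrative):

```bash
local-ai models install ollama://gemma:2b
```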
For OCI images, the same applies but the prefix is `oci://`. To build a container image which works this way, you must have a Dockerfile like the sketch below; push the image to some registry and reference it.
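A minimal Dockerfile sketch (the model filename is illustrative):

```Dockerfile
# A "naked" image: no base layers, only the model file
FROM scratch
COPY ./my-model.gguf /
```

Once pushed, the image can then be referenced as `oci://<registry>/<image>:<tag>` (the path is a placeholder).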
Notes for Reviewers
It also includes a small fix for the watcher message, which was always displayed as an error 🤷
Signed commits