Add support for Ollama (via OpenAI-compatible APIs) #16
Conversation
Use `-cm ollama` to select the default model (llama3) or `-cm ollama:{model}` to select a different model. Ollama should be running locally on the default port (`http://localhost:11434`).
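For example (a sketch, not verbatim from the PR: the command shape assumes clai's usual `clai [flags] <command> <text>` usage, and the prompt text is illustrative):

```sh
# Default Ollama model (llama3); Ollama must be serving on http://localhost:11434
clai -cm ollama query "Summarize the Go memory model in one paragraph"

# A specific model, e.g. mistral (any model already pulled into Ollama)
clai -cm ollama:mistral query "Summarize the Go memory model in one paragraph"
```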
Looks good, and many thanks for the addition! One request though: could you please update the README.md with an entry similar to the other vendors? I'm thinking a link to a brief getting-started guide for Ollama, plus a note that Ollama needs to be running on its default port.
Also, I don't have a beefy enough computer to run any models supplied via ollama, but I trust you've verified functionality yourself.
Sure, I'll update the readme. And yes, I did verify it myself (chat and query commands).
This also needs to get updated: /baalimago/clai/blob/main/internal/text/querier_setup.go#L20, in order to support the config system for the different models. Right now, I suspect the configuration files for the different ollama models will overwrite each other at `<os-config-dir>/.clai/VENDOR_NOT_FOUND.json`.
I think `<os-config-dir>/.clai/ollama_ollama_{model}` would be an appropriate format, with `{model}` being just `ollama` for the default case.
You can verify the problem+solution by checking out `<os-config-dir>/.clai/` (probably `~/.config/.clai`) or running `clai setup` -> 1 (model files) -> c (configure) and seeing `ollama_ollama_...` config files (and the lack thereof, until the change).
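For illustration, a hypothetical listing of what `<os-config-dir>/.clai/` might contain once the naming scheme above is in place (the filenames are made-up examples following the suggested `ollama_ollama_{model}` format):

```sh
$ ls ~/.config/.clai/
ollama_ollama_ollama.json   # default case: -cm ollama ({model} falls back to "ollama")
ollama_ollama_llama3.json   # after running with -cm ollama:llama3
ollama_ollama_mistral.json  # after running with -cm ollama:mistral
```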
EDIT: I think I'll refactor this so that this is inside the respective vendors, but I'll do so after this PR to make it less confusing.
Hello! I'm planning on making a new minor release soon and I'd really like to have this PR in it. Do you have any progress on the suggested changes? I also thought we could just merge this as is, and I'll refactor as mentioned above. But since I don't have ollama myself, I'd appreciate it if you'd run and verify that the configuration works as intended. Could this work?
Sorry for the delay. Let me test one more time. I'll let you know later today.
I'll proceed and merge this and do the refactor! If something doesn't work, please make an issue and I'll check it out, or submit another PR to fix it. Thanks for your contributions!
I double-checked my own suggested refactors and they didn't really make any sense, apologies for the confusion! I updated the readme.
Awesome, thanks!