This repository has been archived by the owner on Sep 30, 2024. It is now read-only.
The JetBrains plugin doesn't seem to provide any way to target my local models from Ollama. I ran into the same problem in VS Code but was able to work around it there, only because the VS Code Cody extension exposes config settings that let me point it at the Ollama server API for code completion and chat.
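For reference, this is roughly what I added to VS Code's `settings.json` to get completions from a local Ollama server. Treat it as a sketch: the exact setting names were experimental when I used them and may differ in your Cody version, and the model name is just whatever you have pulled locally.

```jsonc
{
  // Route Cody autocomplete through a local Ollama server.
  // NOTE: these keys were experimental at the time of writing and
  // may have been renamed in newer Cody releases.
  "cody.autocomplete.advanced.provider": "experimental-ollama",
  "cody.autocomplete.experimental.ollamaOptions": {
    // Default Ollama endpoint; change if your server runs elsewhere.
    "url": "http://localhost:11434",
    // Any code-completion model you have pulled, e.g. via `ollama pull codellama:7b-code`.
    "model": "codellama:7b-code"
  }
}
```

The JetBrains plugin doesn't appear to expose equivalent settings, which is what prompted this issue.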
Please advise how to point the JetBrains plugin at a local Ollama server for offline chat and autocomplete.
Thanks in advance!