No ollama LLM found #373

Open

suoko opened this issue Apr 5, 2024 · 8 comments

Comments

@suoko

suoko commented Apr 5, 2024

I ran the docker compose up command and everything installed correctly.
I entered the Ollama Docker container and installed llama2, but when I run Devika, no LLM is found for Ollama.
Should I configure something, or are only certain LLMs supported?
Starcoder is not detected either.

Thanks
(screenshot attached)

@cpAtor

cpAtor commented Apr 5, 2024

Adding a reference to the similar existing issue #300.

I am also facing the same issue.

@heartsiddharth1

Any update on this one, please? I am not able to select the local model.

@ARajgor
Collaborator

ARajgor commented Apr 6, 2024

If you are using Ollama in Docker, check the serve host of Ollama. I guess you have to change it from the default one.
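
For anyone debugging this, a minimal sketch (assuming the requests package; the candidate URLs are common examples, not Devika's actual defaults) to check which host is reachable and which models Ollama reports:

```python
import requests

# Candidate Ollama endpoints to try -- adjust for your setup. From inside
# another container, "localhost" refers to that container itself, so the
# Compose service name or the host's address is usually what you need.
CANDIDATES = [
    "http://localhost:11434",             # Ollama on the same machine/container
    "http://host.docker.internal:11434",  # host-installed Ollama, reached from Docker Desktop
    "http://ollama-service:11434",        # Ollama as a Compose service (service name is an assumption)
]

for base in CANDIDATES:
    try:
        resp = requests.get(f"{base}/api/tags", timeout=3)
        resp.raise_for_status()
        models = [m["name"] for m in resp.json().get("models", [])]
        print(f"{base}: reachable, models = {models}")
    except requests.RequestException as exc:
        print(f"{base}: not reachable ({exc})")
```

If the endpoint Devika is configured with is not in the reachable list, that mismatch is the likely cause of "no LLM found".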

@ChanghongYangR

I have the same problem.

@cpAtor

cpAtor commented Apr 7, 2024

If you are using Ollama in Docker, check the serve host of Ollama. I guess you have to change it from the default one.

The following worked for me:

  • Updating the OLLAMA API_ENDPOINT in config.toml as follows:
        OLLAMA = "http://ollama-service:11434"
  • Running docker compose up --build (a quick check of the new endpoint is sketched below)
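
As a follow-up, a small sketch to confirm the rebuilt container actually picked up the new endpoint (assuming Python 3.11+ for tomllib; the config path and the API_ENDPOINTS/OLLAMA key layout are guesses based on the snippet above, not verified against Devika's source):

```python
import tomllib
from urllib.request import urlopen

# Read the endpoint from config.toml (path and key names assumed, see above).
with open("config.toml", "rb") as f:
    config = tomllib.load(f)
endpoint = config["API_ENDPOINTS"]["OLLAMA"]

# Hit Ollama's version endpoint to confirm the service name resolves from here.
with urlopen(f"{endpoint}/api/version", timeout=3) as resp:
    print(endpoint, "->", resp.read().decode())
```

Run this from inside the backend container; if the hostname does not resolve there, the Compose service name in config.toml does not match the one in docker-compose.yaml.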

@Ahmet0691
Contributor

Which language model in ollama works properly for this project?

@ChanghongYangR

When I turned off my VPN connection, it worked.

@kuendeee

Any updates here? I'm running Ollama, but Devika still cannot recognize it.
(screenshot: ollama serve running in an Administrator Command Prompt)

(screenshot: config.toml open in Visual Studio Code)

devin-ai-integration bot added a commit to erkinalp/devika that referenced this issue Dec 20, 2024
- Add connection retry logic with exponential backoff
- Implement URL validation and connection testing
- Add comprehensive error messages and logging
- Support OLLAMA_HOST environment variable
- Add test suite with mock testing

Co-Authored-By: Erkin Alp Güney <[email protected]>
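
For context, a rough sketch of what retry logic along those lines might look like (this is not the actual commit; the function names are illustrative and it assumes the requests package):

```python
import logging
import os
import time

import requests

logger = logging.getLogger(__name__)


def resolve_ollama_url() -> str:
    """Prefer the OLLAMA_HOST environment variable, falling back to the local default."""
    return os.environ.get("OLLAMA_HOST", "http://127.0.0.1:11434")


def wait_for_ollama(url: str, retries: int = 5, base_delay: float = 1.0) -> bool:
    """Poll Ollama's /api/tags endpoint with exponential backoff."""
    for attempt in range(retries):
        try:
            requests.get(f"{url}/api/tags", timeout=5).raise_for_status()
            logger.info("Ollama reachable at %s", url)
            return True
        except requests.RequestException as exc:
            delay = base_delay * (2 ** attempt)
            logger.warning("Ollama not reachable (%s); retrying in %.1fs", exc, delay)
            time.sleep(delay)
    logger.error("Giving up on Ollama at %s after %d attempts", url, retries)
    return False


if __name__ == "__main__":
    wait_for_ollama(resolve_ollama_url())
```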