
feat(llama.cpp): Vulkan, Kompute, SYCL #1647

Open
3 of 4 tasks
mudler opened this issue Jan 26, 2024 · 4 comments
Labels: enhancement (New feature or request), roadmap

Comments

mudler (Owner) commented Jan 26, 2024

Tracker for: ggerganov/llama.cpp#5138 and also ROCm

@mudler added the enhancement (New feature or request) label Jan 26, 2024
mudler added a commit that referenced this issue Jan 29, 2024
mudler added a commit that referenced this issue Jan 30, 2024
@mudler mudler changed the title llama.cpp Vulkan, Kompute, SYCL feat(llama.cpp): Vulkan, Kompute, SYCL Jan 31, 2024
@mudler mudler added the roadmap label Jan 31, 2024
mudler added a commit that referenced this issue Feb 1, 2024
mudler added a commit that referenced this issue Feb 1, 2024
* feat(sycl): Add sycl support (#1647)

* onekit: install without prompts

* set cmake args only in grpc-server

Signed-off-by: Ettore Di Giacinto <[email protected]>

* cleanup

* fixup sycl source env

* Cleanup docs

* ci: runs on self-hosted

* fix typo

* bump llama.cpp

* llama.cpp: update server

* adapt to upstream changes

* adapt to upstream changes

* docs: add sycl

---------

Signed-off-by: Ettore Di Giacinto <[email protected]>
@mudler mudler pinned this issue Mar 2, 2024
RiQuY commented Apr 8, 2024

The merge requests linked in this issue appear to have been merged upstream. Does that mean LocalAI already supports Vulkan, or are there additional tasks to complete before that?

mudler (Owner, Author) commented Jun 24, 2024

> The merge requests linked in this issue appear to have been merged upstream. Does that mean LocalAI already supports Vulkan, or are there additional tasks to complete before that?

Only Kompute is missing as of now.

jim3692 commented Sep 9, 2024

> The merge requests linked in this issue appear to have been merged upstream. Does that mean LocalAI already supports Vulkan, or are there additional tasks to complete before that?
>
> Only Kompute is missing as of now.

It looks like Kompute has also been merged.

KhazAkar commented

So, what's missing in LocalAI to support Vulkan? Or would compiling the in-tree llama.cpp with Vulkan support be enough to use it?
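For reference, a minimal sketch of what a stand-alone Vulkan-enabled build of llama.cpp typically looks like, assuming llama.cpp's standard CMake options (the `GGML_VULKAN` flag in current revisions; older revisions used `LLAMA_VULKAN`) and an installed Vulkan SDK. Whether this alone is sufficient for LocalAI, or whether additional backend wiring is needed, is exactly the open question in this thread:

```shell
# Hypothetical build of in-tree llama.cpp with the Vulkan backend enabled.
# Assumes the Vulkan SDK (headers, loader, and glslc shader compiler) is installed.
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j

# Optional sanity check that a Vulkan-capable device and driver are visible
# to the loader (vulkaninfo ships with the Vulkan SDK):
vulkaninfo --summary
```

This is a sketch against llama.cpp's upstream build, not LocalAI's build pipeline; the exact flag names may differ between llama.cpp revisions.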


4 participants