
feat(p2p): Federation and AI swarms #2723

Merged · 18 commits merged into master on Jul 8, 2024
Conversation

mudler (Owner) commented Jul 5, 2024

Screenshot 2024-07-08 at 19-31-12 LocalAI - P2P dashboard

How does it work?

Start LocalAI with `--p2p`; to share an instance with federation, start it with `--federated` as well. A token has to be configured for workers and nodes joining a network (via the `TOKEN` environment variable).

On first start, if a token isn't supplied, one is generated automatically and can be used when navigating the Swarm dashboard page.

Video 1: Federation

https://youtu.be/pH8Bv__9cnA

Video 2: llama.cpp workers:

https://youtu.be/ePH8PGqMSpo

Additional notes

This is a WIP branch; my goal here is a very minimal dashboard plus general enhancements. The direction is:

  • the dashboard should show which workers are active and let you monitor their status
  • give instructions on how to add new workers using the script or Docker images
  • detect dead workers so their tunnels can be torn down
  • federated support for sharing requests and balancing them across multiple LocalAI instances

This is a continuation of #2343

Also adds a fix for #2733

@mudler mudler force-pushed the p2p_enhancements branch 2 times, most recently from a2df4ad to 9704512 Compare July 5, 2024 17:25
@mudler mudler added the area/p2p label Jul 5, 2024
@github-actions github-actions bot added the ci label Jul 5, 2024
@mudler mudler added enhancement New feature or request and removed ci labels Jul 5, 2024
netlify bot commented Jul 5, 2024
Deploy Preview for localai ready!

Name Link
🔨 Latest commit 0a98a8e
🔍 Latest deploy log https://app.netlify.com/sites/localai/deploys/668c2501f656a5000868213f
😎 Deploy Preview https://deploy-preview-2723--localai.netlify.app

@github-actions github-actions bot added the ci label Jul 5, 2024
@mudler mudler force-pushed the p2p_enhancements branch 4 times, most recently from 8599e8e to f04c4da Compare July 6, 2024 14:18
@@ -19,3 +19,11 @@ func LLamaCPPRPCServerDiscoverer(ctx context.Context, token string) error {
func BindLLamaCPPWorker(ctx context.Context, host, port, token string) error {
return fmt.Errorf("not implemented")
}

func GetAvailableNodes() []NodeData {
A collaborator commented:
@mudler if you have a sec, do you mind explaining why p2p_disabled.go is... useful? Mostly for my own understanding here - I'd have either just checked if p2p was enabled when GetAvailableNodes was called, or if performance was to be optimized, dump both versions in the same file and select the mock vs the full impl based on that setting. I'm probably missing something interesting here 😄

mudler (Owner, author) replied:

It's actually because it is behind the `GO_TAGS` user flag: if `GO_TAGS` contains `p2p`, then the `p2p.go` file is built; otherwise the other one is.

The collaborator replied:

Got it, thanks for pointing out this is a compile-time option vs a config setting.

@mudler mudler force-pushed the p2p_enhancements branch 2 times, most recently from 3af5fe7 to c61ccbe Compare July 6, 2024 17:02
}
tunnelEnvVar := strings.Join(tunnelAddresses, ",")

os.Setenv("LLAMACPP_GRPC_SERVERS", tunnelEnvVar)

Code scanning (gosec) on core/cli/federated.go: Errors unhandled. (dismissed)

func copyStream(closer chan struct{}, dst io.Writer, src io.Reader) {
defer func() { closer <- struct{}{} }() // connection is closed, send signal to stop proxy
io.Copy(dst, src)

<-closer

tunnelConn.Close()
conn.Close()

go copyStream(closer, conn, tunnelConn)
<-closer

tunnelConn.Close()

Code scanning (gosec) on core/cli/federated.go: Errors unhandled. (fixed)
mudler (Owner) commented Jul 7, 2024

ok I pushed a bit more =) this is actually adding LocalAI federation too. Let me wrap this up and update the description.

UI is kinda messy at the moment, needs improvement:

Screenshot 2024-07-07 at 15-50-03 LocalAI - P2P dashboard

  • llama.cpp workers can now be tracked easily, and the p2p page gives instructions for adding new workers
  • besides llama.cpp workers, federation works completely differently from the distributed-worker mechanism (which distributes model weights between all nodes): here load is distributed per-request to each node, so it applies to all backends
  • there is no logic to synchronize models in the federation; you have to start LocalAI with the same models if you want consistent behavior across your AI swarm
  • load distribution is NOT optimized at the moment in the swarm; nodes are selected randomly (TODO: optimizations)

@mudler mudler changed the title Wip p2p enhancements feat(p2p): Federation and AI swarms Jul 7, 2024
@mudler mudler removed the ci label Jul 7, 2024
@mudler mudler marked this pull request as ready for review July 8, 2024 06:26
@github-actions github-actions bot added the ci label Jul 8, 2024
mudler (Owner) commented Jul 8, 2024

I'm kinda satisfied for now:

Screenshot 2024-07-08 at 19-31-12 LocalAI - P2P dashboard

@mudler mudler removed the ci label Jul 8, 2024
@mudler mudler merged commit cca881e into master Jul 8, 2024
41 checks passed
@mudler mudler deleted the p2p_enhancements branch July 8, 2024 20:04
truecharts-admin referenced this pull request in truecharts/public Jul 24, 2024
…9.1 by renovate (#24152)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.17.1-aio-cpu` -> `v2.19.1-aio-cpu` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.17.1-aio-gpu-nvidia-cuda-11` -> `v2.19.1-aio-gpu-nvidia-cuda-11` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.17.1-aio-gpu-nvidia-cuda-12` -> `v2.19.1-aio-gpu-nvidia-cuda-12` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.17.1-cublas-cuda11-ffmpeg-core` -> `v2.19.1-cublas-cuda11-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.17.1-cublas-cuda11-core` -> `v2.19.1-cublas-cuda11-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.17.1-cublas-cuda12-ffmpeg-core` -> `v2.19.1-cublas-cuda12-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.17.1-cublas-cuda12-core` -> `v2.19.1-cublas-cuda12-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.17.1-ffmpeg-core` -> `v2.19.1-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | minor | `v2.17.1` -> `v2.19.1` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>mudler/LocalAI (docker.io/localai/localai)</summary>

###
[`v2.19.1`](https://togithub.com/mudler/LocalAI/releases/tag/v2.19.1)

[Compare
Source](https://togithub.com/mudler/LocalAI/compare/v2.19.0...v2.19.1)


![local-ai-release-219-shadow](https://togithub.com/user-attachments/assets/c5d7c930-656f-410d-aab9-455a466925fe)

##### LocalAI 2.19.1 is out! :mega:

##### TLDR; Summary spotlight

- 🖧 Federated Instances via P2P: LocalAI now supports federated
instances with P2P, offering both load-balanced and non-load-balanced
options.
- 🎛️ P2P Dashboard: A new dashboard to guide and assist in setting up
P2P instances with auto-discovery using shared tokens.
- 🔊 TTS Integration: Text-to-Speech (TTS) is now included in the binary
releases.
- 🛠️ Enhanced Installer: The installer script now supports setting up
federated instances.
- 📥 Model Pulling: Models can now be pulled directly via URL.
- 🖼️ WebUI Enhancements: Visual improvements and cleanups to the WebUI
and model lists.
- 🧠 llama-cpp Backend: The llama-cpp (grpc) backend now supports
embeddings (https://localai.io/features/embeddings/#llamacpp-embeddings).
- ⚙️ Tool Support: Small enhancements to tools with disabled grammars.

##### 🖧 LocalAI Federation and AI swarms

<p align="center">
<img
src="https://github.com/user-attachments/assets/17b39f8a-fc41-47d9-b846-b3a88307813b"/>
</p>

LocalAI is revolutionizing the future of distributed AI workloads by
making it simpler and more accessible. No more complex setups, Docker or
Kubernetes configurations – LocalAI allows you to create your own AI
cluster with minimal friction. By auto-discovering and sharing work or
weights of the LLM model across your existing devices, LocalAI aims to
scale both horizontally and vertically with ease.

##### How does it work?

Starting LocalAI with `--p2p` generates a shared token for connecting
multiple instances; that's all you need to create AI clusters,
eliminating the need for intricate network setups. Simply navigate to
the "Swarm" section in the WebUI and follow the on-screen instructions.

For fully shared instances, initiate LocalAI with `--p2p --federated`
and adhere to the Swarm section's guidance. This feature, while still
experimental, offers a tech preview quality experience.

##### Federated LocalAI

Launch multiple LocalAI instances and cluster them together to share
requests across the cluster. The "Swarm" tab in the WebUI provides
one-liner instructions on connecting various LocalAI instances using a
shared token. Instances will auto-discover each other, even across
different networks.


![346663124-1d2324fd-8b55-4fa2-9856-721a467969c2](https://togithub.com/user-attachments/assets/19ebd44a-20ff-412c-b92f-cfb8efbe4b21)

Check out a demonstration video: [Watch
now](https://www.youtube.com/watch?v=pH8Bv__9cnA)

##### LocalAI P2P Workers

Distribute weights across nodes by starting multiple LocalAI workers,
currently available only on the llama.cpp backend, with plans to expand
to other backends soon.


![346663124-1d2324fd-8b55-4fa2-9856-721a467969c2](https://togithub.com/user-attachments/assets/b8cadddf-a467-49cf-a1ed-8850de95366d)

Check out a demonstration video: [Watch
now](https://www.youtube.com/watch?v=ePH8PGqMSpo)

##### What's Changed

##### Bug fixes :bug:

- fix: make sure the GNUMake jobserver is passed to cmake for the
llama.cpp build by [@&#8203;cryptk](https://togithub.com/cryptk) in
[https://github.com/mudler/LocalAI/pull/2697](https://togithub.com/mudler/LocalAI/pull/2697)
- Using exec when starting a backend instead of spawning a new process
by [@&#8203;a17t](https://togithub.com/a17t) in
[https://github.com/mudler/LocalAI/pull/2720](https://togithub.com/mudler/LocalAI/pull/2720)
- fix(cuda): downgrade default version from 12.5 to 12.4 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2707](https://togithub.com/mudler/LocalAI/pull/2707)
- fix: Lora loading by [@&#8203;vaaale](https://togithub.com/vaaale) in
[https://github.com/mudler/LocalAI/pull/2893](https://togithub.com/mudler/LocalAI/pull/2893)
- fix: short-circuit when nodes aren't detected by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2909](https://togithub.com/mudler/LocalAI/pull/2909)
- fix: do not list txt files as potential models by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2910](https://togithub.com/mudler/LocalAI/pull/2910)

##### 🖧 P2P area

- feat(p2p): Federation and AI swarms by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2723](https://togithub.com/mudler/LocalAI/pull/2723)
- feat(p2p): allow to disable DHT and use only LAN by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2751](https://togithub.com/mudler/LocalAI/pull/2751)

##### Exciting New Features 🎉

- Allows to remove a backend from the list by
[@&#8203;mauromorales](https://togithub.com/mauromorales) in
[https://github.com/mudler/LocalAI/pull/2721](https://togithub.com/mudler/LocalAI/pull/2721)
- ci(Makefile): adds tts in binary releases by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2695](https://togithub.com/mudler/LocalAI/pull/2695)
- feat: HF `/scan` endpoint by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[https://github.com/mudler/LocalAI/pull/2566](https://togithub.com/mudler/LocalAI/pull/2566)
- feat(model-list): be consistent, skip known files from listing by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2760](https://togithub.com/mudler/LocalAI/pull/2760)
- feat(models): pull models from urls by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2750](https://togithub.com/mudler/LocalAI/pull/2750)
- feat(webui): show also models without a config in the welcome page by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2772](https://togithub.com/mudler/LocalAI/pull/2772)
- feat(install.sh): support federated install by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2752](https://togithub.com/mudler/LocalAI/pull/2752)
- feat(llama.cpp): support embeddings endpoints by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2871](https://togithub.com/mudler/LocalAI/pull/2871)
- feat(functions): parse broken JSON when we parse the raw results, use
dynamic rules for grammar keys by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2912](https://togithub.com/mudler/LocalAI/pull/2912)
- feat(federation): add load balanced option by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2915](https://togithub.com/mudler/LocalAI/pull/2915)

##### 🧠 Models

- models(gallery): :arrow_up: update checksum by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2701](https://togithub.com/mudler/LocalAI/pull/2701)
- models(gallery): add l3-8b-everything-cot by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2705](https://togithub.com/mudler/LocalAI/pull/2705)
- models(gallery): add hercules-5.0-qwen2-7b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2708](https://togithub.com/mudler/LocalAI/pull/2708)
- models(gallery): add
llama3-8b-darkidol-2.2-uncensored-1048k-iq-imatrix by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2710](https://togithub.com/mudler/LocalAI/pull/2710)
- models(gallery): add llama-3-llamilitary by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2711](https://togithub.com/mudler/LocalAI/pull/2711)
- models(gallery): add tess-v2.5-gemma-2-27b-alpha by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2712](https://togithub.com/mudler/LocalAI/pull/2712)
- models(gallery): add arcee-agent by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2713](https://togithub.com/mudler/LocalAI/pull/2713)
- models(gallery): add gemma2-daybreak by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2714](https://togithub.com/mudler/LocalAI/pull/2714)
- models(gallery): add L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-GGUF by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2715](https://togithub.com/mudler/LocalAI/pull/2715)
- models(gallery): add qwen2-7b-instruct-v0.8 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2717](https://togithub.com/mudler/LocalAI/pull/2717)
- models(gallery): add internlm2\_5-7b-chat-1m by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2719](https://togithub.com/mudler/LocalAI/pull/2719)
- models(gallery): add gemma-2-9b-it-sppo-iter3 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2722](https://togithub.com/mudler/LocalAI/pull/2722)
- models(gallery): add llama-3\_8b_unaligned_alpha by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2727](https://togithub.com/mudler/LocalAI/pull/2727)
- models(gallery): add l3-8b-lunaris-v1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2729](https://togithub.com/mudler/LocalAI/pull/2729)
- models(gallery): add llama-3\_8b_unaligned_alpha_rp_soup-i1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2734](https://togithub.com/mudler/LocalAI/pull/2734)
- models(gallery): add hathor_respawn-l3-8b-v0.8 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2738](https://togithub.com/mudler/LocalAI/pull/2738)
- models(gallery): add llama3-8b-instruct-replete-adapted by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2739](https://togithub.com/mudler/LocalAI/pull/2739)
- models(gallery): add llama-3-perky-pat-instruct-8b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2740](https://togithub.com/mudler/LocalAI/pull/2740)
- models(gallery): add l3-uncen-merger-omelette-rp-v0.2-8b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2741](https://togithub.com/mudler/LocalAI/pull/2741)
- models(gallery): add nymph\_8b-i1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2742](https://togithub.com/mudler/LocalAI/pull/2742)
- models(gallery): add smegmma-9b-v1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2743](https://togithub.com/mudler/LocalAI/pull/2743)
- models(gallery): add hathor_tahsin-l3-8b-v0.85 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2762](https://togithub.com/mudler/LocalAI/pull/2762)
- models(gallery): add replete-coder-instruct-8b-merged by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2782](https://togithub.com/mudler/LocalAI/pull/2782)
- models(gallery): add arliai-llama-3-8b-formax-v1.0 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2783](https://togithub.com/mudler/LocalAI/pull/2783)
- models(gallery): add smegmma-deluxe-9b-v1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2784](https://togithub.com/mudler/LocalAI/pull/2784)
- models(gallery): add l3-ms-astoria-8b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2785](https://togithub.com/mudler/LocalAI/pull/2785)
- models(gallery): add halomaidrp-v1.33-15b-l3-i1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2786](https://togithub.com/mudler/LocalAI/pull/2786)
- models(gallery): add llama-3-patronus-lynx-70b-instruct by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2788](https://togithub.com/mudler/LocalAI/pull/2788)
- models(gallery): add llamax3 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2849](https://togithub.com/mudler/LocalAI/pull/2849)
- models(gallery): add arliai-llama-3-8b-dolfin-v0.5 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2852](https://togithub.com/mudler/LocalAI/pull/2852)
- models(gallery): add tiger-gemma-9b-v1-i1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2853](https://togithub.com/mudler/LocalAI/pull/2853)
- feat: models(gallery): add deepseek-v2-lite by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2658](https://togithub.com/mudler/LocalAI/pull/2658)
- models(gallery): :arrow_up: update checksum by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2860](https://togithub.com/mudler/LocalAI/pull/2860)
- models(gallery): add phi-3.1-mini-4k-instruct by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2863](https://togithub.com/mudler/LocalAI/pull/2863)
- models(gallery): :arrow_up: update checksum by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2887](https://togithub.com/mudler/LocalAI/pull/2887)
- models(gallery): add ezo model series (llama3, gemma) by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2891](https://togithub.com/mudler/LocalAI/pull/2891)
- models(gallery): add l3-8b-niitama-v1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2895](https://togithub.com/mudler/LocalAI/pull/2895)
- models(gallery): add mathstral-7b-v0.1-imat by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2901](https://togithub.com/mudler/LocalAI/pull/2901)
- models(gallery): add MythicalMaid/EtherealMaid 15b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2902](https://togithub.com/mudler/LocalAI/pull/2902)
- models(gallery): add flammenai/Mahou-1.3d-mistral-7B by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2903](https://togithub.com/mudler/LocalAI/pull/2903)
- models(gallery): add big-tiger-gemma-27b-v1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2918](https://togithub.com/mudler/LocalAI/pull/2918)
- models(gallery): add phillama-3.8b-v0.1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2920](https://togithub.com/mudler/LocalAI/pull/2920)
- models(gallery): add qwen2-wukong-7b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2921](https://togithub.com/mudler/LocalAI/pull/2921)
- models(gallery): add einstein-v4-7b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2922](https://togithub.com/mudler/LocalAI/pull/2922)
- models(gallery): add gemma-2b-translation-v0.150 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2923](https://togithub.com/mudler/LocalAI/pull/2923)
- models(gallery): add emo-2b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2924](https://togithub.com/mudler/LocalAI/pull/2924)
- models(gallery): add celestev1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2925](https://togithub.com/mudler/LocalAI/pull/2925)

##### 📖 Documentation and examples

- :arrow_up: Update docs version mudler/LocalAI by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2699](https://togithub.com/mudler/LocalAI/pull/2699)
- examples(gha): add example on how to run LocalAI in Github actions by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2716](https://togithub.com/mudler/LocalAI/pull/2716)
- docs(swagger): enhance coverage of APIs by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2753](https://togithub.com/mudler/LocalAI/pull/2753)
- docs(swagger): comment LocalAI gallery endpoints and rerankers by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2854](https://togithub.com/mudler/LocalAI/pull/2854)
- docs: add a note on benchmarks by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2857](https://togithub.com/mudler/LocalAI/pull/2857)
- docs(swagger): cover p2p endpoints by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2862](https://togithub.com/mudler/LocalAI/pull/2862)
- ci: use github action by [@&#8203;mudler](https://togithub.com/mudler)
in
[https://github.com/mudler/LocalAI/pull/2899](https://togithub.com/mudler/LocalAI/pull/2899)
- docs: update try-it-out.md by
[@&#8203;eltociear](https://togithub.com/eltociear) in
[https://github.com/mudler/LocalAI/pull/2906](https://togithub.com/mudler/LocalAI/pull/2906)
- docs(swagger): core more localai/openai endpoints by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2904](https://togithub.com/mudler/LocalAI/pull/2904)
- docs: more swagger, update docs by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2907](https://togithub.com/mudler/LocalAI/pull/2907)
- feat(swagger): update swagger by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2916](https://togithub.com/mudler/LocalAI/pull/2916)

##### 👒 Dependencies

- :arrow_up: Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2700](https://togithub.com/mudler/LocalAI/pull/2700)
- :arrow_up: Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2704](https://togithub.com/mudler/LocalAI/pull/2704)
- deps(whisper.cpp): update to latest commit by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2709](https://togithub.com/mudler/LocalAI/pull/2709)
- :arrow_up: Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2718](https://togithub.com/mudler/LocalAI/pull/2718)
- :arrow_up: Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2725](https://togithub.com/mudler/LocalAI/pull/2725)
- :arrow_up: Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2736](https://togithub.com/mudler/LocalAI/pull/2736)
- :arrow_up: Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2744](https://togithub.com/mudler/LocalAI/pull/2744)
- :arrow_up: Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2746](https://togithub.com/mudler/LocalAI/pull/2746)
- :arrow_up: Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2747](https://togithub.com/mudler/LocalAI/pull/2747)
- :arrow_up: Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2755](https://togithub.com/mudler/LocalAI/pull/2755)
- :arrow_up: Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2767](https://togithub.com/mudler/LocalAI/pull/2767)
- :arrow_up: Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2756](https://togithub.com/mudler/LocalAI/pull/2756)
- :arrow_up: Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2774](https://togithub.com/mudler/LocalAI/pull/2774)
- chore(deps): Update Dependencies by
[@&#8203;reneleonhardt](https://togithub.com/reneleonhardt) in
[https://github.com/mudler/LocalAI/pull/2538](https://togithub.com/mudler/LocalAI/pull/2538)
- chore(deps): Bump dependabot/fetch-metadata from 2.1.0 to 2.2.0 by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2791](https://togithub.com/mudler/LocalAI/pull/2791)
- chore(deps): Bump llama-index from 0.9.48 to 0.10.55 in
/examples/chainlit by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2795](https://togithub.com/mudler/LocalAI/pull/2795)
- chore(deps): Bump openai from 1.33.0 to 1.35.13 in /examples/functions
by [@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2793](https://togithub.com/mudler/LocalAI/pull/2793)
- chore(deps): Bump nginx from 1.a.b.c to 1.27.0 in /examples/k8sgpt by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2790](https://togithub.com/mudler/LocalAI/pull/2790)
- chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in
/backend/python/coqui by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2798](https://togithub.com/mudler/LocalAI/pull/2798)
- chore(deps): Bump inflect from 7.0.0 to 7.3.1 in
/backend/python/openvoice by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2796](https://togithub.com/mudler/LocalAI/pull/2796)
- chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in
/backend/python/parler-tts by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2797](https://togithub.com/mudler/LocalAI/pull/2797)
- chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in
/backend/python/petals by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2799](https://togithub.com/mudler/LocalAI/pull/2799)
- chore(deps): Bump causal-conv1d from 1.2.0.post2 to 1.4.0 in
/backend/python/mamba by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2792](https://togithub.com/mudler/LocalAI/pull/2792)
- chore(deps): Bump docs/themes/hugo-theme-relearn from `c25bc2a` to
`1b2e139` by [@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2801](https://togithub.com/mudler/LocalAI/pull/2801)
- chore(deps): Bump tenacity from 8.3.0 to 8.5.0 in
/examples/langchain/langchainpy-localai-example by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2803](https://togithub.com/mudler/LocalAI/pull/2803)
- chore(deps): Bump openai from 1.33.0 to 1.35.13 in
/examples/langchain-chroma by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2794](https://togithub.com/mudler/LocalAI/pull/2794)
- chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in
/backend/python/bark by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2805](https://togithub.com/mudler/LocalAI/pull/2805)
- chore(deps): Bump streamlit from 1.30.0 to 1.36.0 in
/examples/streamlit-bot by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2804](https://togithub.com/mudler/LocalAI/pull/2804)
- chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in
/backend/python/diffusers by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2807](https://togithub.com/mudler/LocalAI/pull/2807)
- chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in
/backend/python/exllama2 by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2809](https://togithub.com/mudler/LocalAI/pull/2809)
- chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in
/backend/python/common/template by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2802](https://togithub.com/mudler/LocalAI/pull/2802)
- chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in
/backend/python/autogptq by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2800](https://togithub.com/mudler/LocalAI/pull/2800)
- chore(deps): Bump weaviate-client from 4.6.4 to 4.6.5 in
/examples/chainlit by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2811](https://togithub.com/mudler/LocalAI/pull/2811)
- chore(deps): Bump gradio from 4.36.1 to 4.37.1 in
/backend/python/openvoice in the pip group by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2815](https://togithub.com/mudler/LocalAI/pull/2815)
- chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in
/backend/python/vall-e-x by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2812](https://togithub.com/mudler/LocalAI/pull/2812)
- chore(deps): Bump certifi from 2024.6.2 to 2024.7.4 in
/examples/langchain/langchainpy-localai-example by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2814](https://togithub.com/mudler/LocalAI/pull/2814)
- chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in
/backend/python/transformers by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2817](https://togithub.com/mudler/LocalAI/pull/2817)
- chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in
/backend/python/sentencetransformers by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2813](https://togithub.com/mudler/LocalAI/pull/2813)
- chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in
/backend/python/rerankers by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2819](https://togithub.com/mudler/LocalAI/pull/2819)
- chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in
/backend/python/parler-tts by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2818](https://togithub.com/mudler/LocalAI/pull/2818)
- chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in
/backend/python/vllm by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2820](https://togithub.com/mudler/LocalAI/pull/2820)
- chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in
/backend/python/coqui by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2825](https://togithub.com/mudler/LocalAI/pull/2825)
- chore(deps): Bump faster-whisper from 0.9.0 to 1.0.3 in
/backend/python/openvoice by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2829](https://togithub.com/mudler/LocalAI/pull/2829)
- chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in
/backend/python/exllama by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2841](https://togithub.com/mudler/LocalAI/pull/2841)
- chore(deps): Bump scipy from 1.13.0 to 1.14.0 in
/backend/python/transformers-musicgen by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2842](https://togithub.com/mudler/LocalAI/pull/2842)
- chore: :arrow_up: Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2846](https://togithub.com/mudler/LocalAI/pull/2846)
- chore(deps): Bump langchain from 0.2.3 to 0.2.7 in /examples/functions
by [@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2806](https://togithub.com/mudler/LocalAI/pull/2806)
- chore(deps): Bump mamba-ssm from 1.2.0.post1 to 2.2.2 in
/backend/python/mamba by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2821](https://togithub.com/mudler/LocalAI/pull/2821)
- chore(deps): Bump pydantic from 2.7.3 to 2.8.2 in
/examples/langchain/langchainpy-localai-example by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2832](https://togithub.com/mudler/LocalAI/pull/2832)
- chore(deps): Bump langchain from 0.2.3 to 0.2.7 in
/examples/langchain-chroma by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2822](https://togithub.com/mudler/LocalAI/pull/2822)
- chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in /backend/python/bark
by [@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2831](https://togithub.com/mudler/LocalAI/pull/2831)
- chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in
/backend/python/diffusers by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2833](https://togithub.com/mudler/LocalAI/pull/2833)
- chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in
/backend/python/autogptq by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2816](https://togithub.com/mudler/LocalAI/pull/2816)
- chore(deps): Bump gradio from 4.36.1 to 4.38.1 in
/backend/python/openvoice by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2840](https://togithub.com/mudler/LocalAI/pull/2840)
- chore(deps): Bump the pip group across 1 directory with 2 updates by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2848](https://togithub.com/mudler/LocalAI/pull/2848)
- chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in
/backend/python/transformers by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2837](https://togithub.com/mudler/LocalAI/pull/2837)
- chore(deps): Bump sentence-transformers from 2.5.1 to 3.0.1 in
/backend/python/sentencetransformers by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2826](https://togithub.com/mudler/LocalAI/pull/2826)
- chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in
/backend/python/vall-e-x by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2830](https://togithub.com/mudler/LocalAI/pull/2830)
- chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in
/backend/python/rerankers by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2834](https://togithub.com/mudler/LocalAI/pull/2834)
- chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in /backend/python/vllm
by [@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2839](https://togithub.com/mudler/LocalAI/pull/2839)
- chore(deps): Bump librosa from 0.9.1 to 0.10.2.post1 in
/backend/python/openvoice by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2836](https://togithub.com/mudler/LocalAI/pull/2836)
- chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in
/backend/python/transformers-musicgen by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2843](https://togithub.com/mudler/LocalAI/pull/2843)
- chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in
/backend/python/mamba by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2808](https://togithub.com/mudler/LocalAI/pull/2808)
- chore(deps): Bump llama-index from 0.10.43 to 0.10.55 in
/examples/langchain-chroma by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2810](https://togithub.com/mudler/LocalAI/pull/2810)
- chore(deps): Bump langchain from 0.2.3 to 0.2.7 in
/examples/langchain/langchainpy-localai-example by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2824](https://togithub.com/mudler/LocalAI/pull/2824)
- chore(deps): Bump numpy from 1.26.4 to 2.0.0 in
/backend/python/openvoice by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2823](https://togithub.com/mudler/LocalAI/pull/2823)
- chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in
/backend/python/transformers-musicgen by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2844](https://togithub.com/mudler/LocalAI/pull/2844)
- build(deps): bump docker/build-push-action from 5 to 6 by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2592](https://togithub.com/mudler/LocalAI/pull/2592)
- chore(deps): Bump chromadb from 0.5.0 to 0.5.4 in
/examples/langchain-chroma by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2828](https://togithub.com/mudler/LocalAI/pull/2828)
- chore(deps): Bump torch from 2.2.0 to 2.3.1 in /backend/python/mamba
by [@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2835](https://togithub.com/mudler/LocalAI/pull/2835)
- chore: :arrow_up: Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2851](https://togithub.com/mudler/LocalAI/pull/2851)
- chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in
/backend/python/sentencetransformers by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2838](https://togithub.com/mudler/LocalAI/pull/2838)
- chore: update edgevpn dependency by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2855](https://togithub.com/mudler/LocalAI/pull/2855)
- chore: :arrow_up: Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2859](https://togithub.com/mudler/LocalAI/pull/2859)
- chore(deps): Bump langchain from 0.2.7 to 0.2.8 in /examples/functions
by [@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2873](https://togithub.com/mudler/LocalAI/pull/2873)
- chore(deps): Bump langchain from 0.2.7 to 0.2.8 in
/examples/langchain/langchainpy-localai-example by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2874](https://togithub.com/mudler/LocalAI/pull/2874)
- chore(deps): Bump numexpr from 2.10.0 to 2.10.1 in
/examples/langchain/langchainpy-localai-example by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2877](https://togithub.com/mudler/LocalAI/pull/2877)
- chore: :arrow_up: Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2885](https://togithub.com/mudler/LocalAI/pull/2885)
- chore: :arrow_up: Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2886](https://togithub.com/mudler/LocalAI/pull/2886)
- chore(deps): Bump debugpy from 1.8.1 to 1.8.2 in
/examples/langchain/langchainpy-localai-example by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2878](https://togithub.com/mudler/LocalAI/pull/2878)
- chore(deps): Bump langchain-community from 0.2.5 to 0.2.7 in
/examples/langchain/langchainpy-localai-example by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2875](https://togithub.com/mudler/LocalAI/pull/2875)
- chore(deps): Bump langchain from 0.2.7 to 0.2.8 in
/examples/langchain-chroma by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2872](https://togithub.com/mudler/LocalAI/pull/2872)
- chore(deps): Bump openai from 1.33.0 to 1.35.13 in
/examples/langchain/langchainpy-localai-example by
[@&#8203;dependabot](https://togithub.com/dependabot) in
[https://github.com/mudler/LocalAI/pull/2876](https://togithub.com/mudler/LocalAI/pull/2876)
- chore: :arrow_up: Update ggerganov/whisper.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2898](https://togithub.com/mudler/LocalAI/pull/2898)
- chore: :arrow_up: Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2897](https://togithub.com/mudler/LocalAI/pull/2897)
- chore: :arrow_up: Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2905](https://togithub.com/mudler/LocalAI/pull/2905)
- chore: :arrow_up: Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2917](https://togithub.com/mudler/LocalAI/pull/2917)

##### Other Changes

- ci: add pipelines for discord notifications by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2703](https://togithub.com/mudler/LocalAI/pull/2703)
- ci(arm64): fix gRPC build by adding googletest to CMakefile by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2754](https://togithub.com/mudler/LocalAI/pull/2754)
- fix: arm builds via disabling abseil tests by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[https://github.com/mudler/LocalAI/pull/2758](https://togithub.com/mudler/LocalAI/pull/2758)
- ci(grpc): disable ABSEIL tests by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2759](https://togithub.com/mudler/LocalAI/pull/2759)
- ci(deps): add libgmock-dev by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2761](https://togithub.com/mudler/LocalAI/pull/2761)
- fix abseil test issue \[attempt 3] by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[https://github.com/mudler/LocalAI/pull/2769](https://togithub.com/mudler/LocalAI/pull/2769)
- feat(swagger): update swagger by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2766](https://togithub.com/mudler/LocalAI/pull/2766)
- ci: Do not test the full matrix on PRs by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2771](https://togithub.com/mudler/LocalAI/pull/2771)
- Git fetch specific branch instead of full tree during build by
[@&#8203;LoricOSC](https://togithub.com/LoricOSC) in
[https://github.com/mudler/LocalAI/pull/2748](https://togithub.com/mudler/LocalAI/pull/2748)
- fix(ci): small fixups to checksum_checker.sh by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2776](https://togithub.com/mudler/LocalAI/pull/2776)
- fix(ci): fixup correct path for check_and_update.py by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2777](https://togithub.com/mudler/LocalAI/pull/2777)
- fixes to `check_and_update.py` script by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[https://github.com/mudler/LocalAI/pull/2778](https://togithub.com/mudler/LocalAI/pull/2778)
- Update remaining git clones to git fetch by
[@&#8203;LoricOSC](https://togithub.com/LoricOSC) in
[https://github.com/mudler/LocalAI/pull/2779](https://togithub.com/mudler/LocalAI/pull/2779)
- feat(scripts): add scripts to help adding new models to the gallery by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2789](https://togithub.com/mudler/LocalAI/pull/2789)
- build: speedup `git submodule update` with `--single-branch` by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[https://github.com/mudler/LocalAI/pull/2847](https://togithub.com/mudler/LocalAI/pull/2847)
- Revert "chore(deps): Bump inflect from 7.0.0 to 7.3.1 in
/backend/python/openvoice" by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2856](https://togithub.com/mudler/LocalAI/pull/2856)
- Revert "chore(deps): Bump librosa from 0.9.1 to 0.10.2.post1 in
/backend/python/openvoice" by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2861](https://togithub.com/mudler/LocalAI/pull/2861)
- feat(swagger): update swagger by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2858](https://togithub.com/mudler/LocalAI/pull/2858)
- Revert "chore(deps): Bump numpy from 1.26.4 to 2.0.0 in
/backend/python/openvoice" by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2868](https://togithub.com/mudler/LocalAI/pull/2868)
- feat(swagger): update swagger by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2884](https://togithub.com/mudler/LocalAI/pull/2884)
- fix: update grpcio version to match version used in builds by
[@&#8203;cryptk](https://togithub.com/cryptk) in
[https://github.com/mudler/LocalAI/pull/2888](https://togithub.com/mudler/LocalAI/pull/2888)
- fix: cleanup indentation and remove duplicate dockerfile stanza by
[@&#8203;cryptk](https://togithub.com/cryptk) in
[https://github.com/mudler/LocalAI/pull/2889](https://togithub.com/mudler/LocalAI/pull/2889)
- ci: add workflow to comment new Opened PRs by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2892](https://togithub.com/mudler/LocalAI/pull/2892)
- build: fix go.mod - don't import ourself by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[https://github.com/mudler/LocalAI/pull/2896](https://togithub.com/mudler/LocalAI/pull/2896)
- feat(swagger): update swagger by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2908](https://togithub.com/mudler/LocalAI/pull/2908)
- refactor: move federated server logic to its own service by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2914](https://togithub.com/mudler/LocalAI/pull/2914)
- refactor: groundwork - add pkg/concurrency and the associated test
file by [@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[https://github.com/mudler/LocalAI/pull/2745](https://togithub.com/mudler/LocalAI/pull/2745)

##### New Contributors

- [@&#8203;a17t](https://togithub.com/a17t) made their first
contribution in
[https://github.com/mudler/LocalAI/pull/2720](https://togithub.com/mudler/LocalAI/pull/2720)
- [@&#8203;LoricOSC](https://togithub.com/LoricOSC) made their first
contribution in
[https://github.com/mudler/LocalAI/pull/2748](https://togithub.com/mudler/LocalAI/pull/2748)
- [@&#8203;vaaale](https://togithub.com/vaaale) made their first
contribution in
[https://github.com/mudler/LocalAI/pull/2893](https://togithub.com/mudler/LocalAI/pull/2893)

**Full Changelog**:
https://github.com/mudler/LocalAI/compare/v2.18.1...v2.19.0

###
[`v2.19.0`](https://togithub.com/mudler/LocalAI/releases/tag/v2.19.0)

[Compare
Source](https://togithub.com/mudler/LocalAI/compare/v2.18.1...v2.19.0)


![local-ai-release-219-shadow](https://togithub.com/user-attachments/assets/c5d7c930-656f-410d-aab9-455a466925fe)

##### LocalAI 2.19.0 is out! :mega:

##### TL;DR: Summary spotlight

- 🖧 Federated Instances via P2P: LocalAI now supports federated
instances with P2P, offering both load-balanced and non-load-balanced
options.
- 🎛️ P2P Dashboard: A new dashboard to guide and assist in setting up
P2P instances with auto-discovery using shared tokens.
- 🔊 TTS Integration: Text-to-Speech (TTS) is now included in the binary
releases.
- 🛠️ Enhanced Installer: The installer script now supports setting up
federated instances.
- 📥 Model Pulling: Models can now be pulled directly via URL.
- 🖼️ WebUI Enhancements: Visual improvements and cleanups to the WebUI
and model lists.
- 🧠 llama-cpp Backend: The llama-cpp (grpc) backend now supports
embeddings (see
https://localai.io/features/embeddings/#llamacpp-embeddings).
- ⚙️ Tool Support: Small enhancements to tools with disabled grammars.

##### 🖧 LocalAI Federation and AI swarms

<p align="center">
<img
src="https://github.com/user-attachments/assets/17b39f8a-fc41-47d9-b846-b3a88307813b"/>
</p>

LocalAI is revolutionizing distributed AI workloads by making them
simpler and more accessible. No more complex setups or Docker and
Kubernetes configurations: LocalAI lets you create your own AI cluster
with minimal friction. By auto-discovering peers and sharing work or
model weights across your existing devices, LocalAI aims to scale both
horizontally and vertically with ease.

##### How does it work?

Starting LocalAI with `--p2p` generates a shared token for connecting
multiple instances: that's all you need to create AI clusters,
eliminating the need for intricate network setups. Simply navigate to
the "Swarm" section in the WebUI and follow the on-screen instructions.

For fully shared instances, start LocalAI with `--p2p --federated` and
follow the Swarm section's guidance. This feature is still experimental
and should be considered tech-preview quality.
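As a minimal sketch of the two modes (assuming the `local-ai` binary is
on your PATH; the `TOKEN` environment variable is how workers and
joining nodes are configured, and the placeholder token value below is
hypothetical — copy the real one from the Swarm tab):

```shell
# Start a P2P-enabled instance; if no token is supplied, one is
# generated at first start and shown in the Swarm dashboard page.
local-ai run --p2p

# Start an instance that also federates itself with the network,
# joining via a token generated by another node.
TOKEN="<shared-token-from-swarm-tab>" local-ai run --p2p --federated
```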

##### Federated LocalAI

Launch multiple LocalAI instances and federate them to share requests
across the cluster. The "Swarm" tab in the WebUI provides one-liner
instructions for connecting LocalAI instances using a shared token.
Instances auto-discover each other, even across different networks.


![346663124-1d2324fd-8b55-4fa2-9856-721a467969c2](https://togithub.com/user-attachments/assets/19ebd44a-20ff-412c-b92f-cfb8efbe4b21)

Check out a demonstration video: [Watch
now](https://www.youtube.com/watch?v=pH8Bv\_\_9cnA)

##### LocalAI P2P Workers

Distribute model weights across nodes by starting multiple LocalAI
workers. This is currently supported only by the llama.cpp backend,
with plans to expand to other backends soon.
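A hedged sketch of joining a worker to an existing swarm (the exact
worker subcommand may differ between releases — the Swarm tab in the
WebUI prints the exact one-liner for your version, and the token value
below is a placeholder):

```shell
# On each machine that should contribute compute, join the swarm with
# the shared token; the worker exposes a llama.cpp RPC backend that the
# main instance auto-discovers and uses to split model weights.
TOKEN="<shared-token-from-swarm-tab>" local-ai worker p2p-llama-cpp-rpc
```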


![346663124-1d2324fd-8b55-4fa2-9856-721a467969c2](https://togithub.com/user-attachments/assets/b8cadddf-a467-49cf-a1ed-8850de95366d)

Check out a demonstration video: [Watch
now](https://www.youtube.com/watch?v=ePH8PGqMSpo)

##### What's Changed

##### Bug fixes :bug:

- fix: make sure the GNUMake jobserver is passed to cmake for the
llama.cpp build by [@&#8203;cryptk](https://togithub.com/cryptk) in
[https://github.com/mudler/LocalAI/pull/2697](https://togithub.com/mudler/LocalAI/pull/2697)
- Using exec when starting a backend instead of spawning a new process
by [@&#8203;a17t](https://togithub.com/a17t) in
[https://github.com/mudler/LocalAI/pull/2720](https://togithub.com/mudler/LocalAI/pull/2720)
- fix(cuda): downgrade default version from 12.5 to 12.4 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2707](https://togithub.com/mudler/LocalAI/pull/2707)
- fix: Lora loading by [@&#8203;vaaale](https://togithub.com/vaaale) in
[https://github.com/mudler/LocalAI/pull/2893](https://togithub.com/mudler/LocalAI/pull/2893)
- fix: short-circuit when nodes aren't detected by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2909](https://togithub.com/mudler/LocalAI/pull/2909)
- fix: do not list txt files as potential models by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2910](https://togithub.com/mudler/LocalAI/pull/2910)

##### 🖧 P2P area

- feat(p2p): Federation and AI swarms by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2723](https://togithub.com/mudler/LocalAI/pull/2723)
- feat(p2p): allow to disable DHT and use only LAN by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2751](https://togithub.com/mudler/LocalAI/pull/2751)

##### Exciting New Features 🎉

- Allows to remove a backend from the list by
[@&#8203;mauromorales](https://togithub.com/mauromorales) in
[https://github.com/mudler/LocalAI/pull/2721](https://togithub.com/mudler/LocalAI/pull/2721)
- ci(Makefile): adds tts in binary releases by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2695](https://togithub.com/mudler/LocalAI/pull/2695)
- feat: HF `/scan` endpoint by
[@&#8203;dave-gray101](https://togithub.com/dave-gray101) in
[https://github.com/mudler/LocalAI/pull/2566](https://togithub.com/mudler/LocalAI/pull/2566)
- feat(model-list): be consistent, skip known files from listing by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2760](https://togithub.com/mudler/LocalAI/pull/2760)
- feat(models): pull models from urls by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2750](https://togithub.com/mudler/LocalAI/pull/2750)
- feat(webui): show also models without a config in the welcome page by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2772](https://togithub.com/mudler/LocalAI/pull/2772)
- feat(install.sh): support federated install by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2752](https://togithub.com/mudler/LocalAI/pull/2752)
- feat(llama.cpp): support embeddings endpoints by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2871](https://togithub.com/mudler/LocalAI/pull/2871)
- feat(functions): parse broken JSON when we parse the raw results, use
dynamic rules for grammar keys by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2912](https://togithub.com/mudler/LocalAI/pull/2912)
- feat(federation): add load balanced option by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2915](https://togithub.com/mudler/LocalAI/pull/2915)

##### 🧠 Models

- models(gallery): :arrow_up: update checksum by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2701](https://togithub.com/mudler/LocalAI/pull/2701)
- models(gallery): add l3-8b-everything-cot by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2705](https://togithub.com/mudler/LocalAI/pull/2705)
- models(gallery): add hercules-5.0-qwen2-7b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2708](https://togithub.com/mudler/LocalAI/pull/2708)
- models(gallery): add
llama3-8b-darkidol-2.2-uncensored-1048k-iq-imatrix by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2710](https://togithub.com/mudler/LocalAI/pull/2710)
- models(gallery): add llama-3-llamilitary by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2711](https://togithub.com/mudler/LocalAI/pull/2711)
- models(gallery): add tess-v2.5-gemma-2-27b-alpha by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2712](https://togithub.com/mudler/LocalAI/pull/2712)
- models(gallery): add arcee-agent by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2713](https://togithub.com/mudler/LocalAI/pull/2713)
- models(gallery): add gemma2-daybreak by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2714](https://togithub.com/mudler/LocalAI/pull/2714)
- models(gallery): add L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-GGUF by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2715](https://togithub.com/mudler/LocalAI/pull/2715)
- models(gallery): add qwen2-7b-instruct-v0.8 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2717](https://togithub.com/mudler/LocalAI/pull/2717)
- models(gallery): add internlm2\_5-7b-chat-1m by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2719](https://togithub.com/mudler/LocalAI/pull/2719)
- models(gallery): add gemma-2-9b-it-sppo-iter3 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2722](https://togithub.com/mudler/LocalAI/pull/2722)
- models(gallery): add llama-3\_8b_unaligned_alpha by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2727](https://togithub.com/mudler/LocalAI/pull/2727)
- models(gallery): add l3-8b-lunaris-v1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2729](https://togithub.com/mudler/LocalAI/pull/2729)
- models(gallery): add llama-3\_8b_unaligned_alpha_rp_soup-i1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2734](https://togithub.com/mudler/LocalAI/pull/2734)
- models(gallery): add hathor_respawn-l3-8b-v0.8 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2738](https://togithub.com/mudler/LocalAI/pull/2738)
- models(gallery): add llama3-8b-instruct-replete-adapted by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2739](https://togithub.com/mudler/LocalAI/pull/2739)
- models(gallery): add llama-3-perky-pat-instruct-8b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2740](https://togithub.com/mudler/LocalAI/pull/2740)
- models(gallery): add l3-uncen-merger-omelette-rp-v0.2-8b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2741](https://togithub.com/mudler/LocalAI/pull/2741)
- models(gallery): add nymph\_8b-i1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2742](https://togithub.com/mudler/LocalAI/pull/2742)
- models(gallery): add smegmma-9b-v1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2743](https://togithub.com/mudler/LocalAI/pull/2743)
- models(gallery): add hathor_tahsin-l3-8b-v0.85 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2762](https://togithub.com/mudler/LocalAI/pull/2762)
- models(gallery): add replete-coder-instruct-8b-merged by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2782](https://togithub.com/mudler/LocalAI/pull/2782)
- models(gallery): add arliai-llama-3-8b-formax-v1.0 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2783](https://togithub.com/mudler/LocalAI/pull/2783)
- models(gallery): add smegmma-deluxe-9b-v1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2784](https://togithub.com/mudler/LocalAI/pull/2784)
- models(gallery): add l3-ms-astoria-8b by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2785](https://togithub.com/mudler/LocalAI/pull/2785)
- models(gallery): add halomaidrp-v1.33-15b-l3-i1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2786](https://togithub.com/mudler/LocalAI/pull/2786)
- models(gallery): add llama-3-patronus-lynx-70b-instruct by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2788](https://togithub.com/mudler/LocalAI/pull/2788)
- models(gallery): add llamax3 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2849](https://togithub.com/mudler/LocalAI/pull/2849)
- models(gallery): add arliai-llama-3-8b-dolfin-v0.5 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2852](https://togithub.com/mudler/LocalAI/pull/2852)
- models(gallery): add tiger-gemma-9b-v1-i1 by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2853](https://togithub.com/mudler/LocalAI/pull/2853)
- feat: models(gallery): add deepseek-v2-lite by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2658](https://togithub.com/mudler/LocalAI/pull/2658)
- models(gallery): :arrow_up: update checksum by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/2860](https://togithub.com/mudler/LocalAI/pull/2860)
- models(gallery): add phi-3.1-mini-4k-instruct by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/2863](https://togithub.com/mudler/LocalAI/pull/2863)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about these
updates again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR has been generated by [Renovate
Bot](https://togithub.com/renovatebot/renovate).
