Now that we have federation support (#2915 and #2343), it makes sense to build a place under the LocalAI website to list and visualize community pools.
By community pools, I'm referring to a way for people to share swarm tokens so they can both provide hardware capabilities and use the federation for inference (like Petals, but with more "shards").
The idea is to have an "explorer" or "dashboard" that shows a list of active pools, how many federated instances or llama.cpp workers each one has, and reports their capability and availability.
Things to notice:
In the dashboard we should list only active pools and delete pools that are offline or have 0 workers/federated instances
Users can add arbitrary tokens/pools; these get scanned periodically and the dashboard reports their status
We need to explicitly mention that this comes without any warranty and that you contribute/use it at your own risk - we take no responsibility for how you use it or for malicious actors trying to fiddle with your systems. We are going to tackle bugs as a community, of course, but users should be very well aware that this is experimental and might be insecure to deploy on your hardware (unless you take all the precautions).
This would allow the users to:
setup a cluster, and dedicate that for a specific community
share the compute resources with others
run inference even without beefy hardware, using compute provided by other community peers
This would likely be a new Golang app that can be deployed e.g. on Vercel, and it would need a simple form for users to submit tokens.
I see two sections in this app:
a page or form to insert new tokens and provide a description/name
a landing page showing all the global pools, with their availability, number of workers, and hardware specs (note: this is not yet collected by the p2p swarm functionality)
Thinking about it again: no need for Vercel or a dynamic web app at all - it can all be static, with GH workflow pipelines running "cron" jobs to update the data.
Scratch that - too complicated to add new tokens then.
https://explorer.localai.io is now live. It's still missing some UX around how to run things, but that's low-hanging fruit on the documentation side and will be addressed in follow-ups.