Describe the bug
Fix the lack of a consistent pool manager. Basically, when using Supabase and loading MemGPT with lots of users, I get an error that I have reached the max number of client connections. For reference, here is a message on Discord: https://discord.com/channels/1161736243340640419/1162177332350558339/1257022675021463654
If you have a better alternative DB to use, please let me know!
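As a rough, non-authoritative sketch of what a consistent pool manager could look like: a single module-level SQLAlchemy engine whose bounded pool every session shares, instead of each request opening its own client. The DB URL is the redacted one from this report, and pool_size, max_overflow, and the ping_db helper are illustrative assumptions, not MemGPT's actual code:

```python
# Sketch only: one shared SQLAlchemy engine so all sessions draw from a single
# bounded connection pool instead of opening a new client per user/request.
# pool_size, max_overflow, and ping_db are illustrative assumptions.
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

DB_URI = "postgresql+pg8000://postgres.xxxxx:[email protected]:6543/postgres"

engine = create_engine(
    DB_URI,
    pool_size=5,         # keep at most 5 persistent connections
    max_overflow=5,      # allow a few short-lived extras under load
    pool_pre_ping=True,  # validate connections before reuse
)
SessionLocal = sessionmaker(bind=engine)

def ping_db() -> None:
    # Every caller borrows from the shared pool and returns the connection on exit.
    with SessionLocal() as session:
        session.execute(text("SELECT 1"))
```

With everything funneled through one engine like this, the total connections a single MemGPT instance can hold open is capped at pool_size + max_overflow.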
Please describe your setup
MemGPT Config
Just using the standard configuration with this DB URL: postgresql+pg8000://postgres.xxxxx:[email protected]:6543/postgres
Thank you :)
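A quick way to see how many connections are actually open on the Postgres side while MemGPT is running; this is just a diagnostic sketch that reuses the redacted URL above (fill in the real credentials) and queries pg_stat_activity:

```python
# Diagnostic sketch: count open server-side connections per role via
# pg_stat_activity. Reuses the redacted DB URL from the config above;
# replace xxxxx with the real project credentials before running.
from sqlalchemy import create_engine, text

DB_URI = "postgresql+pg8000://postgres.xxxxx:[email protected]:6543/postgres"

engine = create_engine(DB_URI)

with engine.connect() as conn:
    rows = conn.execute(text(
        "SELECT usename, count(*) AS connections "
        "FROM pg_stat_activity "
        "GROUP BY usename ORDER BY connections DESC"
    ))
    for usename, connections in rows:
        print(f"{usename}: {connections}")
```

Note this counts connections as the database sees them; the "Max client connections reached" message itself most likely comes from the connection pooler in front (the pooler.supabase.com host on port 6543), which enforces its own client limit.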
This also impacts deployments with only a couple of users (2, plus the default user account).
I set up a new deployment on Google Cloud Run + Supabase (Nano), followed the steps below, and hit:
Max client connections reached
Note: when I built the image and ran the container locally, I executed a couple of 'list' commands with the Python client.
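For context, the 'list' calls mentioned above would look roughly like this against a running MemGPT server; the import path, create_client signature, and method names are assumptions about the Python client of that era and may not match your installed version:

```python
# Rough sketch of the 'list' commands mentioned above. create_client,
# list_agents, and list_humans are assumed client API names and may differ
# from the version you have installed; base_url and token are placeholders.
from memgpt import create_client

client = create_client(
    base_url="http://localhost:8283",  # assumed default server address
    token="sk-...",                    # placeholder server API token
)

print(client.list_agents())
print(client.list_humans())
```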