Google Cloud Run updated their limits on maxScale based on memory and CPU count #1779
Comments
Maybe a simpler solution is to set maxScale to something like 2, since Datasette isn't set up to make use of container scaling anyway?
I'm going to default maxScale to 2 but provide an extra command-line option for customizing it.
Actually I disagree that Datasette isn't set up to make use of container scaling: part of the idea behind the Baked Data pattern is that you can scale to handle effectively unlimited traffic by running multiple copies of your application, each with its own duplicate copy of the database. So I'm going to default maxScale to 10 and still let people customize it.
Here's the relevant part of the implementation:
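(A hedged sketch rather than the actual code: assuming the publish step ultimately shells out to `gcloud run deploy`, the maxScale cap corresponds to the `--max-instances` flag; the service name, image, and memory value below are placeholders.)

```bash
# Sketch of a deploy invocation: --max-instances is what sets maxScale on Cloud Run
gcloud run deploy my-datasette \
  --image gcr.io/my-project/datasette \
  --platform managed \
  --allow-unauthenticated \
  --memory 2Gi \
  --max-instances 10
```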
Tried duplicating this error locally but the following command succeeded when I expected it to fail:
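A hypothetical equivalent of what was tried, with a placeholder database and a deliberately high memory setting:

```bash
# Hypothetical reproduction attempt: high memory, relying on the old implicit maxScale of 100
datasette publish cloudrun fixtures.db \
  --service issue-1779-repro \
  --memory 4Gi
```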
Maybe I need to upgrade:
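Upgrading the Cloud SDK itself (assuming it was installed with the standalone installer rather than a package manager) would be:

```bash
# Bring the gcloud CLI and its components up to date
gcloud components update
```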
Just spotted this in the failing Actions workflow:
I tried that locally too but the deploy still succeeds.
Just tried this instead, and it still worked and deployed OK:
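For illustration, a variant of that kind of attempt, raising CPU as well as memory (all values here are hypothetical; `--cpu` is `datasette publish cloudrun`'s option for the Cloud Run CPU allocation):

```bash
# Hypothetical variant: raise both memory and CPU for the deployed service
datasette publish cloudrun fixtures.db \
  --service issue-1779-repro \
  --memory 4Gi \
  --cpu 2
```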
@fgregg I'm not able to replicate your deployment failure, I'm afraid.
(I deleted my test services afterwards.)
Here's the start of the man page for `gcloud run deploy`:
I'm going to expose this as a `--max-instances` option on `datasette publish cloudrun`.
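Usage would then look something like this (database and service names are placeholders):

```bash
# Deploy with an explicit autoscaling cap instead of Cloud Run's default of 100
datasette publish cloudrun mydatabase.db \
  --service my-datasette \
  --max-instances 4
```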
Thanks @simonw!
If you don't set an explicit limit on container scaling, then Google defaults to 100.

Google recently updated the limits on container scaling, such that if you set up Datasette to use more memory or CPU, you need to set the maxScale argument to something much smaller than 100. (Log of a failing publish run.)

It would be nice if `datasette publish` could do this math for you and set the right maxScale.
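In the meantime, setting that argument by hand on an already-deployed service looks something like this (service name is a placeholder):

```bash
# Manually lower the autoscaling cap for an existing Cloud Run service
gcloud run services update my-datasette --max-instances 10
```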