Fix initial deployment downtime #1451
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##             main    #1451      +/-   ##
==========================================
+ Coverage   54.53%   54.56%   +0.03%
==========================================
  Files          84       84
  Lines       10146    10143       -3
==========================================
+ Hits         5533     5535       +2
+ Misses       3956     3953       -3
+ Partials      657      655       -2
```

☔ View full report in Codecov by Sentry.
Left a couple of nits, but apart from that it's looking good!
Could you squash these commits into one commit and make sure you rebase against main? We should be ready to merge then :)
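The squash the reviewer asks for can also be done non-interactively with `git reset --soft`. A sketch in a throwaway repository (branch and commit names are hypothetical; in a real checkout you would first `git rebase main` and finish with `git push --force-with-lease`):

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main .
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "main tip"

# A feature branch with two work-in-progress commits to be squashed.
git checkout -q -b fix-downtime
echo a > fix.txt && git add fix.txt && git commit -qm "wip: move ScaleToZero"
echo b >> fix.txt && git add fix.txt && git commit -qm "wip: address review nits"

# Squash: move the branch pointer back to main while keeping the combined
# changes staged, then record them as a single commit.
git reset -q --soft main
git commit -qm "Fix initial deployment downtime"
git log --oneline main..fix-downtime
```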
Should be good now, thanks :)
lgtm! thanks @miguelvalerio 🙇
Signed-off-by: miguelvalerio <[email protected]>
Fixes #1444.

After taking a look at the code, I think the current implementation could cause some issues: since there might be a bit of latency between scaling the `Deployment`/`DaemonSet` to 0 and updating the `Service`, the `Service` would be left with no `Pod`s to communicate with for a while. This would be even worse if, for some reason, updating the `Service` failed, leaving the apex service with no `Pod`s for even longer.

This solution moves the `ScaleToZero` so that it is performed only after the apex service is already pointing to the `primary` target, and should therefore mitigate the issues described above.
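The reordering can be sketched as follows. This is a minimal illustration of the control flow, not Flagger's actual code: `updateApexService`, `scaleToZero`, and the `podinfo` names are hypothetical stand-ins.

```go
package main

import "fmt"

// run performs the canary-initialization steps in the fixed order and returns
// the actions taken, so the ordering is easy to inspect.
func run() []string {
	var steps []string

	// Hypothetical stand-in: point the apex Service at the given workload.
	updateApexService := func(target string) error {
		steps = append(steps, "route apex service to "+target)
		return nil
	}

	// Hypothetical stand-in: scale a Deployment/DaemonSet to zero replicas.
	scaleToZero := func(workload string) error {
		steps = append(steps, "scale "+workload+" to 0")
		return nil
	}

	// 1. Route traffic to the primary first, so the apex Service always has
	//    Pods behind it.
	if err := updateApexService("podinfo-primary"); err != nil {
		// If this fails, the original Pods are still serving traffic,
		// so we bail out without scaling anything down.
		return steps
	}

	// 2. Only after traffic targets the primary is it safe to remove the
	//    original Pods.
	_ = scaleToZero("podinfo")
	return steps
}

func main() {
	for _, s := range run() {
		fmt.Println(s)
	}
}
```

The point of the ordering is that a failure in step 1 leaves the old `Pod`s serving, whereas the previous ordering (scale first, route second) opened a window with no backing `Pod`s at all.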