Add mention about achieving zero-downtime rolling updates #912
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://github.com/kubernetes/kubernetes/wiki/CLA-FAQ to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Add mention about achieving zero-downtime rolling updates to Nginx Ingress Controller docs
@tyranron I am not sure this is good advice. If you use liveness and readiness probes plus a rolling update strategy in the Deployment, you don't need to use the hook:

spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
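For context, here is a minimal sketch of a Deployment that combines this rolling-update strategy with the liveness and readiness probes the comment refers to. The name, image, port, and /healthz path are placeholder assumptions for illustration, not values taken from this PR:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                        # hypothetical name, for illustration only
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: example/my-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:             # pod is removed from Service endpoints while this fails
            httpGet:
              path: /healthz          # assumed health endpoint
              port: 8080
            periodSeconds: 5
          livenessProbe:              # pod is restarted when this fails
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10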
As I understand it, the concern is how long it takes nginx-ingress to notice when a pod becomes unready, and how kube-proxy handles it, because a pod can go unready for reasons other than deletion.
@kfox1111 good question. Please check this comment kubernetes-retired/contrib#1140 (comment) (and the next ones from thockin).
@aledbf that just does not work in most cases. It often works for simple single-container Pods, but for multi-container Pods it rarely works. The reasons why this happens are discussed exactly in the issue you've referred to above. To refresh: a Pod can still receive new traffic after receiving SIGTERM, because in Kubernetes SIGTERM means not just "terminate gracefully", but "wait a moment, and then terminate gracefully". Most applications do not behave that way out-of-the-box. I understand that this solution is not ideal and looks quite strange. But this simple hack, proposed by @foxylion in #322, just does work in almost all cases. Even @thockin agreed a bit with that.
Yet another investigation/confirmation of the situation in the related thread.
lifecycle:
  preStop:
    exec:
      command: ["sleep, "15"]
There's a typo here, you're missing a "
This should be:
lifecycle:
  preStop:
    exec:
      command: ["sleep", "15"]
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@aledbf We use the same strategy mentioned here, as there are some (ugly) Java processes that are pretty damn slow to stop. So, using a preStop sleep helps in our case. IMHO it's good advice :)
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Did we ever update the docs with this information?
This PR adds a mention of achieving zero-downtime rolling updates to the Nginx Ingress Controller docs.
Based on the investigation made in #322.