Testing failing nodes does not restore the cluster.... #6
Comments
Hi. This happened to me during a GKE cluster version upgrade.
To avoid downtime in the Consul cluster when performing a version upgrade on GKE, I set things up so that if a pod is evicted from a node, it will leave the cluster gracefully. I also added a PodDisruptionBudget with a minAvailable of 2, so the drain will wait until the budget is satisfied. A sketch of that budget follows below.
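For reference, here is a minimal sketch of such a PodDisruptionBudget, applied via a heredoc so it stays in shell like the rest of the thread. The name `consul-pdb` and the `app: consul` label selector are assumptions, not details from this thread; match them to your StatefulSet's labels.

```
# Sketch of the PodDisruptionBudget described above. policy/v1beta1 was
# the PDB API version current at the time (modern clusters use policy/v1).
kubectl apply -f - <<'EOF'
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: consul-pdb            # assumption: name not given in the thread
spec:
  minAvailable: 2             # stated above: keep at least 2 agents up
  selector:
    matchLabels:
      app: consul             # assumption: pods carry this label
EOF
```

The "leave the cluster gracefully" part is commonly wired up as a container preStop hook that runs `consul leave`; that detail is an assumption here, since the comment does not show how it was implemented.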
Just ran a quick test on GKE off of PR #34, which is pretty close to mainline, just Consul 1.2 instead of 0.9.1. kill -9 on all the agents results in them getting brought back up: different hosts, but alive and synced just the same.
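If you want to approximate that kill -9 test without shelling into the nodes, force-deleting the pods skips the graceful termination path. This is a rough stand-in for the test described above, not necessarily what was actually run, and it assumes a three-pod StatefulSet named consul-0 through consul-2:

```
# Ungraceful kill of all three agents (pod names are assumptions).
kubectl delete pods consul-0 consul-1 consul-2 --grace-period=0 --force
# The StatefulSet controller recreates the pods; watch them come back:
kubectl get pods -w
```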
The test that fails to restore the cluster:
$ kubectl delete pods consul-2 consul-1;
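One way to observe whether the cluster actually recovers after that delete, assuming consul-0 survives as in the command above:

```
# Check membership from the surviving agent. Killed nodes show up as
# "failed" until they rejoin -- or stay failed, as this issue reports.
kubectl exec consul-0 -- consul members
```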