Fix version upgrade kubelet support #16932
Conversation
Hi @rsafonseca. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Thanks @rsafonseca! On the one hand, I can't say I'm the biggest fan of the ergonomics of this approach, because it basically means we have to run the identical update command more than once.
Yeah, it's not perfect for all use cases, but it's a first step and we can build upon this for the so-called "uber" command :) In addition, this will produce no behaviour change for any kops update cluster except specifically when the kubernetes version is being increased (e.g. kops updates will still proceed normally and will update things like the containerRuntime, etc. without the need for 2 interactions). It also allows the instance groups to continue to be managed while the control plane isn't updated, e.g. if I need to make a change to an instance group but haven't finished rolling out all control-plane nodes. As for the CD script, here I'm probably the least impacted, as we're running a kops update loop on all clusters to reconcile with CRDs from our operator, so this change will (in our scenario) make the upgrades seamless and safe without having to change anything :)

Edit: Optionally, an easy way to maintain behaviour in current CD workflows is to ensure api-server nodes are rolled out first in the rolling-update command, and then, after the masters have rolled out and before moving on to the other instance groups, trigger an update cluster :)
cmd/kops/update_cluster.go
Outdated
if err != nil {
	klog.Warningf("error checking control plane running version: %v", err)
} else {
	klog.Warningf("successfully checked control plane running version: %v", minControlPlaneRunningVersion)
Nit: any reason to make this a warning (as opposed to Infof)?
oh no... that was just for my testing, I fully intended to make it info or debug and then forgot :)
func checkControlPlaneRunningVersion(ctx context.Context, clusterName string, version string) (string, error) {

configLoadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
Nit: I think we should try to share this "get kubeconfig" code with the rolling-update logic (I think that also connects to the cluster). Not a blocker though.
	LabelSelector: "node-role.kubernetes.io/control-plane",
})
if err != nil {
	return version, fmt.Errorf("cannot list nodes in cluster %q: %v", clusterName, err)
Ah - this is tricky. If the control plane is offline, we presumably don't want to update the nodes (?)
If the control plane is offline, this is the least of our worries lol. But it might be an expected situation (e.g. creating a new cluster), so we just log the error and continue as normal.
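For illustration, the version check being discussed boils down to taking the lowest kubelet version reported across the control-plane nodes (the real code lists those nodes via client-go, as in the snippet above). A self-contained sketch with hypothetical helper names, not the kops implementation:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMajorMinor extracts the major and minor numbers from a version
// string such as "v1.30.2". Hypothetical helper for this sketch.
func parseMajorMinor(v string) (int, int, error) {
	parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
	if len(parts) < 2 {
		return 0, 0, fmt.Errorf("unparseable version %q", v)
	}
	major, err := strconv.Atoi(parts[0])
	if err != nil {
		return 0, 0, err
	}
	minor, err := strconv.Atoi(parts[1])
	if err != nil {
		return 0, 0, err
	}
	return major, minor, nil
}

// minControlPlaneVersion returns the lowest kubelet version (compared by
// major.minor) among the control-plane nodes; node updates must then stay
// within skew of this version.
func minControlPlaneVersion(nodeVersions []string) (string, error) {
	if len(nodeVersions) == 0 {
		return "", fmt.Errorf("no control-plane nodes found")
	}
	lowest := nodeVersions[0]
	lowMaj, lowMin, err := parseMajorMinor(lowest)
	if err != nil {
		return "", err
	}
	for _, v := range nodeVersions[1:] {
		maj, mnr, err := parseMajorMinor(v)
		if err != nil {
			return "", err
		}
		if maj < lowMaj || (maj == lowMaj && mnr < lowMin) {
			lowest, lowMaj, lowMin = v, maj, mnr
		}
	}
	return lowest, nil
}

func main() {
	// Mid-upgrade: one control-plane node still on the older version.
	v, _ := minControlPlaneVersion([]string{"v1.31.0", "v1.30.2", "v1.31.0"})
	fmt.Println(v) // v1.30.2
}
```

Returning the lowest version (rather than the highest) is the conservative choice: as long as any control-plane node is still on the old version, the kubelets are held back.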
pkg/nodemodel/fileassets.go
Outdated
	return err
}
if len(assetBuilder.KubeletSupportedVersion) > 0 && assetBuilder.KubeletSupportedVersion != c.Cluster.Spec.KubernetesVersion {
	c.AssetsSupportedKubelet, err = c.addFileAssets(assetBuilder, assetBuilder.KubeletSupportedVersion)
Ah - great point - we need different assets. Although I'm starting to worry about complexity now...
Yes, the complexity here worried me a bit as well as I dug into how to achieve this. The code isn't perfect, but I tried to do it without having to refactor many things; if you have any better ideas we can either improve this down the line or I can make any adjustments :)
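The reason two asset sets are needed can be reduced to one decision: which Kubernetes version the worker-node assets should target. A minimal sketch (hypothetical function name, minor versions as plain ints; not the kops code):

```go
package main

import "fmt"

// kubeletTargetMinor decides which minor version the worker-node file
// assets should be built for: the desired version only once the control
// plane is already running it; otherwise the kubelet is pinned to the
// control plane's current minor, so it never runs ahead of the API server.
func kubeletTargetMinor(desiredMinor, controlPlaneMinor int) int {
	if controlPlaneMinor < desiredMinor {
		return controlPlaneMinor
	}
	return desiredMinor
}

func main() {
	// Control plane still on 1.30 while the cluster spec asks for 1.31:
	fmt.Println(kubeletTargetMinor(31, 30)) // 30
	// Control plane has finished rolling to 1.31:
	fmt.Println(kubeletTargetMinor(31, 31)) // 31
}
```

Whenever the two versions differ, a second set of file assets has to be resolved for the kubelet, which is what the `KubeletSupportedVersion` branch in the diff above handles.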
Thanks for this! A few random comments on the code but I'm going to mull on the overall approach.
So we have added the ability to update only some of the instance groups, and in #16939 I am proposing to make it easy to drive those operations in a predefined sequence. One thing I don't like about that approach is that we do have to predefine the sequence, and one of the things I like in this PR is that we're being more principled about the sequence: rather than hard-coding it, we effectively infer the sequence at run-time. It might be that the predefined approach is good enough to unblock 1.31, but I would like to pursue this direction more. I am wondering if we should make it more explicit, adding a field for the Kube version on each instance group and managing that.
@justinsb Could we combine the two approaches by having the anneal/reconcile cluster command call update & rolling-update repeatedly, until the update command reports that there are no tasks left that it had to skip due to version skew or other dependencies? Then the anneal/reconcile command wouldn't have to hard-code any sequences, and that structure might also be able to incorporate running the …
@danports yes, I think we might need a combination. Something brought up in office hours was that we need to support this for terraform also, where we can't emit partial configuration (at least I don't think we can!). I've gone round in circles a lot on other ways to make this work, but I am feeling like the approach in this PR is probably the only one that we can really do, short of introducing a kubernetesVersion field in the instance group, which is its own management problem. I am doing some work to try to clean up our assets logic so that this PR doesn't have to plumb through multiple asset versions - that should make this PR smaller, I hope!

OK, I think I've got something working, and it is based on this PR. The approach here is very clever, and because terraform will delete resources if we stop emitting them, I think this is the right way forward. The downside is that we will have to run update / rolling-update / update / rolling-update, but we can address that with messages to the user and potentially a command like #16939 (though that will have to be reworked). I do think we should merge #16954 first though, as with that PR the code in this PR becomes much easier. The whole building of fileAssets looks to be unnecessarily complicated, which is part of the reason why this PR is somewhat complicated. I have a rebase of it after #16954, so once that merges I think I can probably just update this PR branch (as a maintainer). cc @hakman

Cool, let me know if you need a hand with anything @justinsb :)
Force-pushed from fdc76a7 to 7c1542f
/ok-to-test

I rebased; it's simpler now that we can resolve the file assets "at the last minute"

Awesome, much cleaner change now :D

Thanks @rsafonseca - I want to emphasize that you did the hard work here of figuring out the approach. I just cleaned up some of the existing over-complicated code in kOps that was resolving file assets in an awkward way. cc @hakman if we get this merged I have a few small follow-ons (e.g. there's a problem with feature gates being removed upstream), but I think this is our best approach, given terraform.
cmd/kops/update_cluster.go
Outdated
@@ -163,6 +171,7 @@ func NewCmdUpdateCluster(f *util.Factory, out io.Writer) *cobra.Command {
	cmd.RegisterFlagCompletionFunc("lifecycle-overrides", completeLifecycleOverrides)

	cmd.Flags().BoolVar(&options.Prune, "prune", options.Prune, "Delete old revisions of cloud resources that were needed during an upgrade")
	cmd.Flags().BoolVar(&options.IgnoreVersionSkew, "ignore-version-skew", options.IgnoreVersionSkew, "Setting this to true will force updating the kubernetes version on all instance groups, regardless of which control plane version is running")
given how many policies are defined here: https://kubernetes.io/releases/version-skew-policy/ and that this only violates one of them, perhaps we should make the flag more specific? `--ignore-kubelet-version-skew`
I see your point @rifelpet, but I think instead what we could do is keep this naming and add an additional check of whether the user is trying to update more than 3 minor versions at a time, and throw an error in that case unless the `--ignore-version-skew` flag is set. That would take care of all the version skews listed there, and the flag name would then make sense. WDYT?
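The guard proposed above would look something like this sketch (hypothetical names and a hard-coded limit of 3 skipped minor versions; not actual kops code):

```go
package main

import "fmt"

// maxMinorSkip is the largest number of minor versions the sketch allows
// a single upgrade to cross without an explicit override.
const maxMinorSkip = 3

// checkUpgradeSkew returns an error when the requested upgrade would skip
// more minor versions than allowed, unless the operator opted out via the
// ignore flag.
func checkUpgradeSkew(runningMinor, targetMinor int, ignoreVersionSkew bool) error {
	if ignoreVersionSkew {
		return nil
	}
	if targetMinor-runningMinor > maxMinorSkip {
		return fmt.Errorf("upgrading %d minor versions at once exceeds the supported skew of %d; set --ignore-version-skew to force",
			targetMinor-runningMinor, maxMinorSkip)
	}
	return nil
}

func main() {
	// 1.27 -> 1.31 skips 4 minors: rejected without the flag.
	fmt.Println(checkUpgradeSkew(27, 31, false) != nil) // true
	// 1.28 -> 1.31 is within the limit.
	fmt.Println(checkUpgradeSkew(28, 31, false) == nil) // true
}
```

With a check like this, a single boolean flag covers both the kubelet skew and the skip-versions skew, which is the trade-off being debated here.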
So I think the two most common skews are going to be people updating the control plane by more than one version at a time, and the one we have here where people want to update the nodes at the same time as the control plane.

I suspect we could get away with two flags therefore, like `ignore-node-version-skew` and `ignore-controlplane-version-skew` (though maybe something like `ignore-controlplane-skip-versions` would be more accurate). But I'm wondering if making this a string would be more ergonomic, so we would have `ignore-version-skew=all` and `ignore-version-skew=kubelet`.

But the trick would be that we would accept `--ignore-version-skew` to mean `--ignore-version-skew=all`, so we wouldn't actually have to define everything yet; we're just leaving the "flag API surface" open to future extensions. I'll check out whether we can have a defaulting string...
So we can do this, using something like:

```go
// By default we enforce the version skew between control plane and worker nodes
options.IgnoreVersionSkew = "DoNotIgnore"
cmd.Flags().StringVar(&options.IgnoreVersionSkew, "ignore-version-skew", options.IgnoreVersionSkew, "Setting this to true will force updating the kubernetes version on all instance groups, regardless of which control plane version is running")
cmd.Flags().Lookup("ignore-version-skew").NoOptDefVal = "IgnoreAllSkew"
```

The thing I don't like is that it won't be very discoverable, particularly once we have more options. The help with the above looks like this:

```
--ignore-version-skew string[="IgnoreAllSkew"]   Setting this to true will force updating the kubernetes version on all instance groups, regardless of which control plane version is running (default "DoNotIgnore")
```

So we're going to be struggling to get a clear description here. Maybe that just doesn't matter - this is blocking the 1.31 beta release, so maybe this is good enough?

The alternative is that we have separate flags, `--ignore-kubelet-version-skew`, `--ignore-upgrade-skip-versions`, and maybe that's all (but if not we just add more). We'd have more flags, but they would be simple.
Having separate flags makes sense to me because users may want to toggle them separately which is easier to understand as independent flags rather than trying to choose the most appropriate flag value.
Yeah, I think it does seem simpler, as I don't think we're going to end up with too many of them (and if we do, we can always hide the flags and move to an aggregated flag). Any objections to `--ignore-kubelet-version-skew`? I think I can push a change if we all agree cc @rsafonseca
No concerns from me, feel free to push the change @justinsb :)
Force-pushed from 7c1542f to e21f716
Thanks all - I renamed the flag and pushed!
…, while control plane is being updated (Co-authored-by: Peter Rifel <[email protected]>)

Force-pushed from f7814c5 to e4a0ef6
Thanks @rifelpet; I incorporated your log message tweaks. We do need some follow-on PRs (some node feature flags have been removed in 1.31), so we can also make more tweaks in follow-ons as we observe things (many of which will be more of the "oh, we should consider the instance group version, not the cluster version" variety)
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rifelpet

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
Automatically preserve kubelet supported version skew on worker nodes while control plane is being updated
Fixes: #16907