From e3663c5821faee7afdefa4e954594a0b5e095082 Mon Sep 17 00:00:00 2001 From: liz Date: Fri, 15 Jun 2018 12:44:10 -0400 Subject: [PATCH 1/9] Add kubeadm upgrade docs for 1.11 --- .../upgrade-downgrade/kubeadm-upgrade-1-11.md | 303 ++++++++++++++++++ 1 file changed, 303 insertions(+) create mode 100644 content/en/docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-11.md diff --git a/content/en/docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-11.md b/content/en/docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-11.md new file mode 100644 index 0000000000000..9e0e8cff2312e --- /dev/null +++ b/content/en/docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-11.md @@ -0,0 +1,303 @@ +--- +reviewers: +- pipejakob +- luxas +- roberthbailey +- jbeda +title: Upgrading kubeadm clusters from v1.10 to v1.11 +content_template: templates/task +--- + +{{% capture overview %}} + +This guide is for upgrading `kubeadm` clusters from version 1.10.x to 1.11.x, as well as 1.10.x to 1.10.y and 1.11.x to 1.11.y where `y > x`. + +{{% /capture %}} + +{{% capture prerequisites %}} + +Before proceeding: + +- You need to have a functional `kubeadm` Kubernetes cluster running version 1.10.0 or higher in order to use the process described here. Swap also needs to be disabled. +- Make sure you read the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md) carefully. +- `kubeadm upgrade` now allows you to upgrade etcd. `kubeadm upgrade` will also upgrade of etcd to 3.1.10 as part of upgrading from v1.8 to v1.9 by default. This is due to the fact that etcd 3.1.10 is the officially validated etcd version for Kubernetes v1.9. The upgrade is handled automatically by kubeadm for you. +- Note that `kubeadm upgrade` will not touch any of your workloads, only Kubernetes-internal components. As a best-practice you should back up what's important to you. 
For example, any app-level state, such as a database an app might depend on (like MySQL or MongoDB) must be backed up beforehand. + +{{< caution >}} +**Caution:** All the containers will get restarted after the upgrade, due to container spec hash value being changed. +{{< /caution >}} + +Also, note that only one minor version upgrade is supported. For example, you can only upgrade from 1.10 to 1.11, not from 1.9 to 1.11. + +{{% /capture %}} + +{{% capture steps %}} + +## Upgrading your control plane + +Execute these commands on your master node: + +1. Install the most recent version of `kubeadm` using `curl` like so: + +```shell +sudo apt-get upgdate && sudo apt-get upgrade kubeadm +``` + +`kubeadm` is only needed on individual (non-master) nodes for joining the cluster. +It is not necessary to update kubeadm on nodes + +Verify that this download of kubeadm works and has the expected version: + +```shell +kubeadm version +``` + +2. On the master node, run the following: + +```shell +kubeadm upgrade plan +``` + +You should see output similar to this: + + + +```shell +[preflight] Running pre-flight checks +[upgrade] Making sure the cluster is healthy: +[upgrade/health] Checking API Server health: Healthy +[upgrade/health] Checking Node health: All Nodes are healthy +[upgrade/health] Checking Static Pod manifests exists on disk: All manifests exist on disk +[upgrade/config] Making sure the configuration is correct: +[upgrade/config] Reading configuration from the cluster... 
+[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' +[upgrade] Fetching available versions to upgrade to: +[upgrade/versions] Cluster version: v1.10.1 +[upgrade/versions] kubeadm version: v1.10.0 +[upgrade/versions] Latest stable version: v1.9.0 +[upgrade/versions] Latest version in the v1.8 series: v1.8.6 + +Components that must be upgraded manually after you've upgraded the control plane with 'kubeadm upgrade apply': +COMPONENT CURRENT AVAILABLE +Kubelet 1 x v1.8.1 v1.8.6 + +Upgrade to the latest version in the v1.8 series: + +COMPONENT CURRENT AVAILABLE +API Server v1.8.1 v1.8.6 +Controller Manager v1.8.1 v1.8.6 +Scheduler v1.8.1 v1.8.6 +Kube Proxy v1.8.1 v1.8.6 +Kube DNS 1.14.4 1.14.5 + +You can now apply the upgrade by executing the following command: + + kubeadm upgrade apply v1.8.6 + +_____________________________________________________________________ + +Components that must be upgraded manually after you've upgraded the control plane with 'kubeadm upgrade apply': +COMPONENT CURRENT AVAILABLE +Kubelet 1 x v1.8.1 v1.9.0 + +Upgrade to the latest stable version: + +COMPONENT CURRENT AVAILABLE +API Server v1.8.1 v1.9.0 +Controller Manager v1.8.1 v1.9.0 +Scheduler v1.8.1 v1.9.0 +Kube Proxy v1.8.1 v1.9.0 +Kube DNS 1.14.5 1.14.7 + +You can now apply the upgrade by executing the following command: + + kubeadm upgrade apply v1.9.0 + +Note: Before you do can perform this upgrade, you have to update kubeadm to v1.9.0 + +_____________________________________________________________________ +``` + +The `kubeadm upgrade plan` checks that your cluster is upgradeable and fetches the versions available to upgrade to in an user-friendly way. + +3. Pick a version to upgrade to and run. For example: + +```shell +kubeadm upgrade apply v1.11.0 +``` + +You should see output similar to this: + + + +```shell +[preflight] Running pre-flight checks. 
+[upgrade] Making sure the cluster is healthy: +[upgrade/config] Making sure the configuration is correct: +[upgrade/config] Reading configuration from the cluster... +[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' +I0614 20:56:08.320369 30918 feature_gate.go:230] feature gates: &{map[]} +[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file. +[upgrade/version] You have chosen to change the cluster version to "v1.11.0-beta.2.78+e0b33dbc2bde88" +[upgrade/versions] Cluster version: v1.10.4 +[upgrade/versions] kubeadm version: v1.11.0-beta.2.78+e0b33dbc2bde88 +[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y +[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd] +[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.11.0-beta.2.78+e0b33dbc2bde88"... +Static pod: kube-apiserver-ip-172-31-85-18 hash: 7a329408b21bc0c44d7b3b78ff8187bf +Static pod: kube-controller-manager-ip-172-31-85-18 hash: 24fd3157627c7567b687968967c6a5e8 +Static pod: kube-scheduler-ip-172-31-85-18 hash: 5179266fb24d4c1834814c4f69486371 +Static pod: etcd-ip-172-31-85-18 hash: 9dfc197f444be11fcc70ab1467b030b8 +[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939/etcd.yaml" +[certificates] Using the existing etcd/ca certificate and key. +[certificates] Using the existing etcd/server certificate and key. +[certificates] Using the existing etcd/peer certificate and key. +[certificates] Using the existing etcd/healthcheck-client certificate and key. 
+[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-06-14-20-56-11/etcd.yaml" +[upgrade/staticpods] Waiting for the kubelet to restart the component +Static pod: etcd-ip-172-31-85-18 hash: 9dfc197f444be11fcc70ab1467b030b8 +< snip > +[apiclient] Found 1 Pods for label selector component=etcd +[upgrade/staticpods] Component "etcd" upgraded successfully! +[upgrade/etcd] Waiting for etcd to become available +[util/etcd] Waiting 0s for initial delay +[util/etcd] Attempting to see if all cluster endpoints are available 1/10 +[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939" +[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939/kube-apiserver.yaml" +[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939/kube-controller-manager.yaml" +[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939/kube-scheduler.yaml" +[certificates] Using the existing etcd/ca certificate and key. +[certificates] Using the existing apiserver-etcd-client certificate and key. +[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-06-14-20-56-11/kube-apiserver.yaml" +[upgrade/staticpods] Waiting for the kubelet to restart the component +Static pod: kube-apiserver-ip-172-31-85-18 hash: 7a329408b21bc0c44d7b3b78ff8187bf +< snip > +[apiclient] Found 1 Pods for label selector component=kube-apiserver +[upgrade/staticpods] Component "kube-apiserver" upgraded successfully! 
+[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-06-14-20-56-11/kube-controller-manager.yaml" +[upgrade/staticpods] Waiting for the kubelet to restart the component +Static pod: kube-controller-manager-ip-172-31-85-18 hash: 24fd3157627c7567b687968967c6a5e8 +Static pod: kube-controller-manager-ip-172-31-85-18 hash: 63992ff14733dcb9dcfa6ac0a3b8031a +[apiclient] Found 1 Pods for label selector component=kube-controller-manager +[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! +[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-06-14-20-56-11/kube-scheduler.yaml" +[upgrade/staticpods] Waiting for the kubelet to restart the component +Static pod: kube-scheduler-ip-172-31-85-18 hash: 5179266fb24d4c1834814c4f69486371 +Static pod: kube-scheduler-ip-172-31-85-18 hash: 831e4b9425f758e572392976311e56d9 +[apiclient] Found 1 Pods for label selector component=kube-scheduler +[upgrade/staticpods] Component "kube-scheduler" upgraded successfully! 
+[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace +[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster +[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace +[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" +[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "ip-172-31-85-18" as an annotation +[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials +[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token +[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster +[addons] Applied essential addon: CoreDNS +[addons] Applied essential addon: kube-proxy + +[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.11.0-beta.2.78+e0b33dbc2bde88". Enjoy! + +[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. +``` + +To upgrade the cluster with CoreDNS as the default internal DNS, invoke `kubeadm upgrade apply` with the `--feature-gates=CoreDNS=true` flag. +`kubeadm upgrade apply` does the following: + +- Checks that your cluster is in an upgradeable state: + - The API server is reachable, + - All nodes are in the `Ready` state + - The control plane is healthy +- Enforces the version skew policies. +- Makes sure the control plane images are available or available to pull to the machine. +- Upgrades the control plane components or rollbacks if any of them fails to come up. +- Applies the new `kube-dns` and `kube-proxy` manifests and enforces that all necessary RBAC rules are created. 
+- Creates new certificate and key files of apiserver and backs up old files if they're about to expire in 180 days.
+
+4. Manually upgrade your Software Defined Network (SDN).
+
+   Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow.
+   Check the [addons](/docs/concepts/cluster-administration/addons/) page to
+   find your CNI provider and see if there are additional upgrade steps
+   necessary.
+
+## Upgrading your master and node packages
+
+For each host (referred to as `$HOST` below) in your cluster, upgrade `kubelet` by executing the following commands:
+
+1. Prepare the host for maintenance, marking it unschedulable and evicting the workload:
+
+```shell
+kubectl drain $HOST --ignore-daemonsets
+```
+
+When running this command against the master host, `--ignore-daemonsets` is required:
+
+```shell
+kubectl drain ip-172-31-85-18
+node "ip-172-31-85-18" cordoned
+error: unable to drain node "ip-172-31-85-18", aborting command...
+
+There are pending nodes to be drained:
+ ip-172-31-85-18
+error: DaemonSet-managed pods (use --ignore-daemonsets to ignore): calico-node-5798d, kube-proxy-thjp9
+```
+
+```
+kubectl drain ip-172-31-85-18 --ignore-daemonsets
+node "ip-172-31-85-18" already cordoned
+WARNING: Ignoring DaemonSet-managed pods: calico-node-5798d, kube-proxy-thjp9
+node "ip-172-31-85-18" drained
+```
+
+2. Upgrade the Kubernetes package versions on the `$HOST` node by using a Linux distribution-specific package manager:
+
+If the host is running a Debian-based distro such as Ubuntu, run:
+
+```shell
+apt-get update
+apt-get upgrade
+```
+
+If the host is running CentOS or the like, run:
+
+```shell
+yum update
+```
+
+3. Restart the `kubelet` process:
+
+```shell
+sudo systemctl restart kubelet
+```
+
+Now the new version of the `kubelet` should be running on the host. Verify this using the following command on `$HOST`:
+
+```shell
+systemctl status kubelet
+```
+
+4.
Bring the host back online by marking it schedulable:
+
+```shell
+kubectl uncordon $HOST
+```
+
+5. After upgrading `kubelet` on each host in your cluster, verify that all nodes are available again by executing the following (from anywhere, for example, from outside the cluster):
+
+```shell
+kubectl get nodes
+```
+
+If the `STATUS` column of the above command shows `Ready` for all of your hosts and the version you expect, you are done.
+
+## Recovering from a failure state
+
+If `kubeadm upgrade` fails and does not roll back, for example due to an unexpected shutdown during execution,
+you can run `kubeadm upgrade` again. The command is idempotent, so it eventually makes sure the actual state matches the state you declare.
+
+You can also run `kubeadm upgrade apply` with `--force` to re-apply the current version (`x.x.x --> x.x.x`), which can be used to recover a cluster from a bad state.
+
+{{% /capture %}}
+

From a986d345671aecf84c0be05ce6951376d2794d40 Mon Sep 17 00:00:00 2001
From: liz
Date: Thu, 21 Jun 2018 16:31:16 -0400
Subject: [PATCH 2/9] Initial docs review feedback

---
 .../upgrade-downgrade/kubeadm-upgrade-1-11.md | 412 +++++++++---------
 1 file changed, 205 insertions(+), 207 deletions(-)

diff --git a/content/en/docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-11.md b/content/en/docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-11.md
index 9e0e8cff2312e..12b0fa4ce6f37 100644
--- a/content/en/docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-11.md
+++ b/content/en/docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-11.md
@@ -1,16 +1,13 @@
 ---
 reviewers:
-- pipejakob
-- luxas
-- roberthbailey
-- jbeda
+- sig-cluster-lifecycle
 title: Upgrading kubeadm clusters from v1.10 to v1.11
 content_template: templates/task
 ---
 
 {{% capture overview %}}
 
-This guide is for upgrading `kubeadm` clusters from version 1.10.x to 1.11.x, as well as 1.10.x to 1.10.y and 1.11.x to 1.11.y where `y > x`.
+This guide is for upgrading `kubeadm` clusters from version 1.10.x to 1.11.x and 1.11.x to 1.11.y where `y > x`.
 
 {{% /capture %}}
 
@@ -18,9 +15,8 @@ This guide is for upgrading `kubeadm` clusters from version 1.10.x to 1.11.x, as
 
 Before proceeding:
 
-- You need to have a functional `kubeadm` Kubernetes cluster running version 1.10.0 or higher in order to use the process described here. Swap also needs to be disabled.
-- Make sure you read the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md) carefully.
-- `kubeadm upgrade` now allows you to upgrade etcd. `kubeadm upgrade` will also upgrade of etcd to 3.1.10 as part of upgrading from v1.8 to v1.9 by default. This is due to the fact that etcd 3.1.10 is the officially validated etcd version for Kubernetes v1.9. The upgrade is handled automatically by kubeadm for you.
+- You need to have a functional `kubeadm` Kubernetes cluster running version 1.10.0 or higher in order to use the process described here. Swap also needs to be disabled. This cluster should use a self-hosted etcd and static control plane pods.
+- Make sure you read the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md) carefully.
 - Note that `kubeadm upgrade` will not touch any of your workloads, only Kubernetes-internal components. As a best-practice you should back up what's important to you. For example, any app-level state, such as a database an app might depend on (like MySQL or MongoDB) must be backed up beforehand.
 
 {{< caution >}}
@@ -29,191 +25,176 @@ Before proceeding:
 Also, note that only one minor version upgrade is supported. For example, you can only upgrade from 1.10 to 1.11, not from 1.9 to 1.11.
 
+{{< caution >}}
+**Caution:** The default DNS provider in 1.11 is [CoreDNS](https://coredns.io/) rather than [kube-dns](https://github.com/kubernetes/dns).
+To keep `kube-dns`, pass `--feature-gates=CoreDNS=false` to `kubeadm upgrade apply`.
+{{< /caution >}} + {{% /capture %}} {{% capture steps %}} ## Upgrading your control plane -Execute these commands on your master node: - -1. Install the most recent version of `kubeadm` using `curl` like so: +Execute these commands on your master node (as root): -```shell -sudo apt-get upgdate && sudo apt-get upgrade kubeadm -``` +1. -`kubeadm` is only needed on individual (non-master) nodes for joining the cluster. -It is not necessary to update kubeadm on nodes - -Verify that this download of kubeadm works and has the expected version: - -```shell -kubeadm version -``` - -2. On the master node, run the following: + ```shell + export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version + export ARCH=amd64 # or: arm, arm64, ppc64le, s390x + curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > /usr/bin/kubeadm + chmod a+rx /usr/bin/kubeadm + ``` -```shell -kubeadm upgrade plan -``` + {{< caution >}} + **Caution:** Upgrading the `kubeadm` package on your system prior to upgrading the control plane causes a failed upgrade. + Even though `kubeadm` ships in the Kubernetes repositories, it's important to install `kubeadm` manually. The kubeadm + team is working on fixing this limitation. + {{< /caution >}} -You should see output similar to this: - + `kubeadm` is only needed on individual (non-master) nodes for joining the cluster. + It is not necessary to update kubeadm on nodes -```shell -[preflight] Running pre-flight checks -[upgrade] Making sure the cluster is healthy: -[upgrade/health] Checking API Server health: Healthy -[upgrade/health] Checking Node health: All Nodes are healthy -[upgrade/health] Checking Static Pod manifests exists on disk: All manifests exist on disk -[upgrade/config] Making sure the configuration is correct: -[upgrade/config] Reading configuration from the cluster... 
-[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' -[upgrade] Fetching available versions to upgrade to: -[upgrade/versions] Cluster version: v1.10.1 -[upgrade/versions] kubeadm version: v1.10.0 -[upgrade/versions] Latest stable version: v1.9.0 -[upgrade/versions] Latest version in the v1.8 series: v1.8.6 + Verify that this download of kubeadm works and has the expected version: -Components that must be upgraded manually after you've upgraded the control plane with 'kubeadm upgrade apply': -COMPONENT CURRENT AVAILABLE -Kubelet 1 x v1.8.1 v1.8.6 + ```shell + kubeadm version + ``` -Upgrade to the latest version in the v1.8 series: +2. On the master node, run the following: -COMPONENT CURRENT AVAILABLE -API Server v1.8.1 v1.8.6 -Controller Manager v1.8.1 v1.8.6 -Scheduler v1.8.1 v1.8.6 -Kube Proxy v1.8.1 v1.8.6 -Kube DNS 1.14.4 1.14.5 + ```shell + kubeadm upgrade plan + ``` -You can now apply the upgrade by executing the following command: + You should see output similar to this: - kubeadm upgrade apply v1.8.6 + -_____________________________________________________________________ + ```shell + [preflight] Running pre-flight checks. + [upgrade] Making sure the cluster is healthy: + [upgrade/config] Making sure the configuration is correct: + [upgrade/config] Reading configuration from the cluster... 
+ [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' + I0618 20:32:32.950358 15307 feature_gate.go:230] feature gates: &{map[]} + [upgrade] Fetching available versions to upgrade to + [upgrade/versions] Cluster version: v1.10.4 + [upgrade/versions] kubeadm version: v1.11.0-beta.2.78+e0b33dbc2bde88 -Components that must be upgraded manually after you've upgraded the control plane with 'kubeadm upgrade apply': -COMPONENT CURRENT AVAILABLE -Kubelet 1 x v1.8.1 v1.9.0 + Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply': + COMPONENT CURRENT AVAILABLE + Kubelet 1 x v1.10.4 v1.11.0 -Upgrade to the latest stable version: + Upgrade to the latest version in the v1.10 series: -COMPONENT CURRENT AVAILABLE -API Server v1.8.1 v1.9.0 -Controller Manager v1.8.1 v1.9.0 -Scheduler v1.8.1 v1.9.0 -Kube Proxy v1.8.1 v1.9.0 -Kube DNS 1.14.5 1.14.7 + COMPONENT CURRENT AVAILABLE + API Server v1.10.4 v1.11.0 + Controller Manager v1.10.4 v1.11.0 + Scheduler v1.10.4 v1.11.0 + Kube Proxy v1.10.4 v1.11.0 + CoreDNS 1.1.3 + Kube DNS 1.14.8 + Etcd 3.1.12 3.2.18 -You can now apply the upgrade by executing the following command: + You can now apply the upgrade by executing the following command: - kubeadm upgrade apply v1.9.0 + kubeadm upgrade apply v1.11.0 -Note: Before you do can perform this upgrade, you have to update kubeadm to v1.9.0 + Note: Before you can perform this upgrade, you have to update kubeadm to v1.11.0. -_____________________________________________________________________ -``` + _____________________________________________________________________ + ``` -The `kubeadm upgrade plan` checks that your cluster is upgradeable and fetches the versions available to upgrade to in an user-friendly way. + The `kubeadm upgrade plan` checks that your cluster is upgradeable and fetches the versions available to upgrade to in an user-friendly way. 3. 
Pick a version to upgrade to and run. For example: -```shell -kubeadm upgrade apply v1.11.0 -``` - -You should see output similar to this: - - - -```shell -[preflight] Running pre-flight checks. -[upgrade] Making sure the cluster is healthy: -[upgrade/config] Making sure the configuration is correct: -[upgrade/config] Reading configuration from the cluster... -[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' -I0614 20:56:08.320369 30918 feature_gate.go:230] feature gates: &{map[]} -[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file. -[upgrade/version] You have chosen to change the cluster version to "v1.11.0-beta.2.78+e0b33dbc2bde88" -[upgrade/versions] Cluster version: v1.10.4 -[upgrade/versions] kubeadm version: v1.11.0-beta.2.78+e0b33dbc2bde88 -[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y -[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd] -[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.11.0-beta.2.78+e0b33dbc2bde88"... -Static pod: kube-apiserver-ip-172-31-85-18 hash: 7a329408b21bc0c44d7b3b78ff8187bf -Static pod: kube-controller-manager-ip-172-31-85-18 hash: 24fd3157627c7567b687968967c6a5e8 -Static pod: kube-scheduler-ip-172-31-85-18 hash: 5179266fb24d4c1834814c4f69486371 -Static pod: etcd-ip-172-31-85-18 hash: 9dfc197f444be11fcc70ab1467b030b8 -[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939/etcd.yaml" -[certificates] Using the existing etcd/ca certificate and key. -[certificates] Using the existing etcd/server certificate and key. -[certificates] Using the existing etcd/peer certificate and key. -[certificates] Using the existing etcd/healthcheck-client certificate and key. 
-[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-06-14-20-56-11/etcd.yaml" -[upgrade/staticpods] Waiting for the kubelet to restart the component -Static pod: etcd-ip-172-31-85-18 hash: 9dfc197f444be11fcc70ab1467b030b8 -< snip > -[apiclient] Found 1 Pods for label selector component=etcd -[upgrade/staticpods] Component "etcd" upgraded successfully! -[upgrade/etcd] Waiting for etcd to become available -[util/etcd] Waiting 0s for initial delay -[util/etcd] Attempting to see if all cluster endpoints are available 1/10 -[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939" -[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939/kube-apiserver.yaml" -[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939/kube-controller-manager.yaml" -[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939/kube-scheduler.yaml" -[certificates] Using the existing etcd/ca certificate and key. -[certificates] Using the existing apiserver-etcd-client certificate and key. -[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-06-14-20-56-11/kube-apiserver.yaml" -[upgrade/staticpods] Waiting for the kubelet to restart the component -Static pod: kube-apiserver-ip-172-31-85-18 hash: 7a329408b21bc0c44d7b3b78ff8187bf -< snip > -[apiclient] Found 1 Pods for label selector component=kube-apiserver -[upgrade/staticpods] Component "kube-apiserver" upgraded successfully! 
-[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-06-14-20-56-11/kube-controller-manager.yaml" -[upgrade/staticpods] Waiting for the kubelet to restart the component -Static pod: kube-controller-manager-ip-172-31-85-18 hash: 24fd3157627c7567b687968967c6a5e8 -Static pod: kube-controller-manager-ip-172-31-85-18 hash: 63992ff14733dcb9dcfa6ac0a3b8031a -[apiclient] Found 1 Pods for label selector component=kube-controller-manager -[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! -[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-06-14-20-56-11/kube-scheduler.yaml" -[upgrade/staticpods] Waiting for the kubelet to restart the component -Static pod: kube-scheduler-ip-172-31-85-18 hash: 5179266fb24d4c1834814c4f69486371 -Static pod: kube-scheduler-ip-172-31-85-18 hash: 831e4b9425f758e572392976311e56d9 -[apiclient] Found 1 Pods for label selector component=kube-scheduler -[upgrade/staticpods] Component "kube-scheduler" upgraded successfully! 
-[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace -[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster -[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace -[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" -[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "ip-172-31-85-18" as an annotation -[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials -[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token -[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster -[addons] Applied essential addon: CoreDNS -[addons] Applied essential addon: kube-proxy - -[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.11.0-beta.2.78+e0b33dbc2bde88". Enjoy! - -[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. -``` - -To upgrade the cluster with CoreDNS as the default internal DNS, invoke `kubeadm upgrade apply` with the `--feature-gates=CoreDNS=true` flag. -`kubeadm upgrade apply` does the following: - -- Checks that your cluster is in an upgradeable state: - - The API server is reachable, - - All nodes are in the `Ready` state - - The control plane is healthy -- Enforces the version skew policies. -- Makes sure the control plane images are available or available to pull to the machine. -- Upgrades the control plane components or rollbacks if any of them fails to come up. -- Applies the new `kube-dns` and `kube-proxy` manifests and enforces that all necessary RBAC rules are created. 
-- Creates new certificate and key files of apiserver and backs up old files if they're about to expire in 180 days.
+ ```shell
+ kubeadm upgrade apply v1.11.0
+ ```
+
+ If you currently use `kube-dns` and wish to continue doing so, use `--feature-gates=CoreDNS=false`.
+
+ You should see output similar to this:
+
+ ```shell
+ [preflight] Running pre-flight checks.
+ [upgrade] Making sure the cluster is healthy:
+ [upgrade/config] Making sure the configuration is correct:
+ [upgrade/config] Reading configuration from the cluster...
+ [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
+ I0614 20:56:08.320369 30918 feature_gate.go:230] feature gates: &{map[]}
+ [upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
+ [upgrade/version] You have chosen to change the cluster version to "v1.11.0-beta.2.78+e0b33dbc2bde88"
+ [upgrade/versions] Cluster version: v1.10.4
+ [upgrade/versions] kubeadm version: v1.11.0-beta.2.78+e0b33dbc2bde88
+ [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
+ [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
+ [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.11.0-beta.2.78+e0b33dbc2bde88"...
+ Static pod: kube-apiserver-ip-172-31-85-18 hash: 7a329408b21bc0c44d7b3b78ff8187bf
+ Static pod: kube-controller-manager-ip-172-31-85-18 hash: 24fd3157627c7567b687968967c6a5e8
+ Static pod: kube-scheduler-ip-172-31-85-18 hash: 5179266fb24d4c1834814c4f69486371
+ Static pod: etcd-ip-172-31-85-18 hash: 9dfc197f444be11fcc70ab1467b030b8
+ [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939/etcd.yaml"
+ [certificates] Using the existing etcd/ca certificate and key.
+ [certificates] Using the existing etcd/server certificate and key.
+ [certificates] Using the existing etcd/peer certificate and key. + [certificates] Using the existing etcd/healthcheck-client certificate and key. + [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-06-14-20-56-11/etcd.yaml" + [upgrade/staticpods] Waiting for the kubelet to restart the component + Static pod: etcd-ip-172-31-85-18 hash: 9dfc197f444be11fcc70ab1467b030b8 + < snip > + [apiclient] Found 1 Pods for label selector component=etcd + [upgrade/staticpods] Component "etcd" upgraded successfully! + [upgrade/etcd] Waiting for etcd to become available + [util/etcd] Waiting 0s for initial delay + [util/etcd] Attempting to see if all cluster endpoints are available 1/10 + [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939" + [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939/kube-apiserver.yaml" + [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939/kube-controller-manager.yaml" + [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests089436939/kube-scheduler.yaml" + [certificates] Using the existing etcd/ca certificate and key. + [certificates] Using the existing apiserver-etcd-client certificate and key. 
+ [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-06-14-20-56-11/kube-apiserver.yaml" + [upgrade/staticpods] Waiting for the kubelet to restart the component + Static pod: kube-apiserver-ip-172-31-85-18 hash: 7a329408b21bc0c44d7b3b78ff8187bf + < snip > + [apiclient] Found 1 Pods for label selector component=kube-apiserver + [upgrade/staticpods] Component "kube-apiserver" upgraded successfully! + [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-06-14-20-56-11/kube-controller-manager.yaml" + [upgrade/staticpods] Waiting for the kubelet to restart the component + Static pod: kube-controller-manager-ip-172-31-85-18 hash: 24fd3157627c7567b687968967c6a5e8 + Static pod: kube-controller-manager-ip-172-31-85-18 hash: 63992ff14733dcb9dcfa6ac0a3b8031a + [apiclient] Found 1 Pods for label selector component=kube-controller-manager + [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! + [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-06-14-20-56-11/kube-scheduler.yaml" + [upgrade/staticpods] Waiting for the kubelet to restart the component + Static pod: kube-scheduler-ip-172-31-85-18 hash: 5179266fb24d4c1834814c4f69486371 + Static pod: kube-scheduler-ip-172-31-85-18 hash: 831e4b9425f758e572392976311e56d9 + [apiclient] Found 1 Pods for label selector component=kube-scheduler + [upgrade/staticpods] Component "kube-scheduler" upgraded successfully! 
+ [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace + [kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster + [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace + [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" + [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "ip-172-31-85-18" as an annotation + [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials + [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token + [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster + [addons] Applied essential addon: CoreDNS + [addons] Applied essential addon: kube-proxy + + [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.11.0-beta.2.78+e0b33dbc2bde88". Enjoy! + + [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. + ``` + + To upgrade the cluster with CoreDNS as the default internal DNS, invoke `kubeadm upgrade apply` with the `--feature-gates=CoreDNS=true` flag. 4. Manually upgrade your Software Defined Network (SDN). @@ -228,68 +209,75 @@ For each host (referred to as `$HOST` below) in your cluster, upgrade `kubelet` 1. 
Prepare the host for maintenance, marking it unschedulable and evicting the workload: -```shell -kubectl drain $HOST --ignore-daemonsets -``` + ```shell + kubectl drain $HOST --ignore-daemonsets + ``` -When running this command against the master host, `--ignore-daemonsets` is required: + When running this command against the master host, `--ignore-daemonsets` is required: -```shell -kubectl drain ip-172-31-85-18 -node "ip-172-31-85-18" cordoned -error: unable to drain node "ip-172-31-85-18", aborting command... + ```shell + kubectl drain ip-172-31-85-18 + node "ip-172-31-85-18" cordoned + error: unable to drain node "ip-172-31-85-18", aborting command... -There are pending nodes to be drained: - ip-172-31-85-18 -error: DaemonSet-managed pods (use --ignore-daemonsets to ignore): calico-node-5798d, kube-proxy-thjp9 -``` + There are pending nodes to be drained: + ip-172-31-85-18 + error: DaemonSet-managed pods (use --ignore-daemonsets to ignore): calico-node-5798d, kube-proxy-thjp9 + ``` -``` -kubectl drain ip-172-31-85-18 --ignore-daemonsets -node "ip-172-31-85-18" already cordoned -WARNING: Ignoring DaemonSet-managed pods: calico-node-5798d, kube-proxy-thjp9 -node "ip-172-31-85-18" drained -``` + ``` + kubectl drain ip-172-31-85-18 --ignore-daemonsets + node "ip-172-31-85-18" already cordoned + WARNING: Ignoring DaemonSet-managed pods: calico-node-5798d, kube-proxy-thjp9 + node "ip-172-31-85-18" drained + ``` 2. 
Upgrade the Kubernetes package versions on the `$HOST` node by using a Linux distribution-specific package manager:

-If the host is running a Debian-based distro such as Ubuntu, run:
-
-```shell
-apt-get update
-apt-get upgrade
-```
+   If the host is running a Debian-based distro such as Ubuntu, run:

-If the host is running CentOS or the like, run:
+   {{< tabs name="k8s_install" >}}
+   {{% tab name="Ubuntu, Debian or HypriotOS" %}}
+   ```bash
+   apt-get update
+   apt-get upgrade -y kubelet kubeadm
+   ```
+   {{% /tab %}}
+   {{% tab name="CentOS, RHEL or Fedora" %}}
+   ```bash
+   yum upgrade -y kubelet kubeadm
+   ```
+   {{% /tab %}}
+   {{< /tabs >}}

-```shell
-yum update
-```
+   Upgrading `kubeadm` is only required on the master node.

3. Restart the kubectl process with

-```shell
-sudo systemctl restart kubelet
-```
+   ```shell
+   sudo systemctl restart kubelet
+   ```

-Now the new version of the `kubelet` should be running on the host. Verify this using the following command on `$HOST`:
+   Now the new version of the `kubelet` should be running on the host. Verify this using the following command on `$HOST`:

-```shell
-systemctl status kubelet
-```
+   ```shell
+   systemctl status kubelet
+   ```

4. Bring the host back online by marking it schedulable:

-```shell
-kubectl uncordon $HOST
-```
+   ```shell
+   kubectl uncordon $HOST
+   ```

5. After upgrading `kubelet` on each host in your cluster, verify that all nodes are available again by executing the following (from anywhere, for example, from outside the cluster):

-```shell
-kubectl get nodes
-```
+   ```shell
+   kubectl get nodes
+   ```

-If the `STATUS` column of the above command shows `Ready` for all of your hosts and the version you expect, you are done.
+   If the `STATUS` column of the above command shows `Ready` for all of your hosts and the version you expect, you are done.
+
+{{% /capture %}}

## Recovering from a failure state

@@ -298,6 +286,16 @@ you can run `kubeadm upgrade` again as it is idempotent and should eventually ma
 You can use `kubeadm upgrade` to change a running cluster with `x.x.x --> x.x.x` with `--force`, which can be used to recover from a bad state.

-{{% /capture %}}
+## How it works
+`kubeadm upgrade apply` does the following:
+- Checks that your cluster is in an upgradeable state:
+  - The API server is reachable,
+  - All nodes are in the `Ready` state
+  - The control plane is healthy
+- Enforces the version skew policies.
+- Makes sure the control plane images are available locally or can be pulled to the machine.
+- Upgrades the control plane components, or rolls back if any of them fails to come up.
+- Applies the new `kube-dns` and `kube-proxy` manifests and enforces that all necessary RBAC rules are created.
+- Creates new certificate and key files for the API server, and backs up old files if they are due to expire within 180 days.

From b3c10bbd20945e48b29fc5417da59e365b88a61f Mon Sep 17 00:00:00 2001
From: liz
Date: Thu, 21 Jun 2018 16:42:11 -0400
Subject: [PATCH 3/9] Add 1-11 to outline

---
 data/tasks.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/data/tasks.yml b/data/tasks.yml
index a65fc67798779..6623b61eef0cc 100644
--- a/data/tasks.yml
+++ b/data/tasks.yml
@@ -135,6 +135,7 @@ toc:
     - docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-7.md
     - docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-8.md
     - docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-9.md
+    - docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-11.md
     - docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-ha.md
   - title: Manage Memory, CPU, and API Resources
     section:

From d370c632874d9fe1d0a9aeb7b0289c8936696946 Mon Sep 17 00:00:00 2001
From: liz
Date: Thu, 21 Jun 2018 16:50:10 -0400
Subject: [PATCH 4/9] Fix formatting on tab blocks

---
.../upgrade-downgrade/kubeadm-upgrade-1-11.md | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/content/en/docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-11.md b/content/en/docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-11.md
index 12b0fa4ce6f37..472cef0d340f3 100644
--- a/content/en/docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-11.md
+++ b/content/en/docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-11.md
@@ -238,15 +238,11 @@ For each host (referred to as `$HOST` below) in your cluster, upgrade `kubelet`
    {{< tabs name="k8s_install" >}}
    {{% tab name="Ubuntu, Debian or HypriotOS" %}}
-   ```bash
    apt-get update
    apt-get upgrade -y kubelet kubeadm
-   ```
    {{% /tab %}}
    {{% tab name="CentOS, RHEL or Fedora" %}}
-   ```bash
    yum upgrade -y kubelet kubeadm
-   ```
    {{% /tab %}}
    {{< /tabs >}}

From 78de6e61b290398a439c068e92268f42ee099ea9 Mon Sep 17 00:00:00 2001
From: liz
Date: Thu, 21 Jun 2018 16:59:49 -0400
Subject: [PATCH 5/9] Move file to correct location

---
 .../{upgrade-downgrade => kubeadm}/kubeadm-upgrade-1-11.md | 0
 data/tasks.yml | 1 -
 2 files changed, 1 deletion(-)
 rename content/en/docs/tasks/administer-cluster/{upgrade-downgrade => kubeadm}/kubeadm-upgrade-1-11.md (100%)

diff --git a/content/en/docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-11.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md
similarity index 100%
rename from content/en/docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-11.md
rename to content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md

diff --git a/data/tasks.yml b/data/tasks.yml
index 6623b61eef0cc..a65fc67798779 100644
--- a/data/tasks.yml
+++ b/data/tasks.yml
@@ -135,7 +135,6 @@ toc:
     - docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-7.md
     - docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-8.md
    - 
docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-9.md - - docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-1-11.md - docs/tasks/administer-cluster/upgrade-downgrade/kubeadm-upgrade-ha.md - title: Manage Memory, CPU, and API Resources section: From f765ee189e7ac978fcff39cd281563f9ab42dc79 Mon Sep 17 00:00:00 2001 From: liz Date: Fri, 22 Jun 2018 09:39:00 -0400 Subject: [PATCH 6/9] Add `kubeadm upgrade node config` step --- .../kubeadm/kubeadm-upgrade-1-11.md | 33 +++++++++++-------- 1 file changed, 20 insertions(+), 13 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md index 472cef0d340f3..8c32971d4ef12 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md @@ -15,7 +15,7 @@ This guide is for upgrading `kubeadm` clusters from version 1.10.x to 1.11.x and Before proceeding: -- You need to have a functional `kubeadm` Kubernetes cluster running version 1.10.0 or higher in order to use the process described here. Swap also needs to be disabled. This cluster should use a self-hosted etcd and static control plane pods. +- You need to have a functional `kubeadm` Kubernetes cluster running version 1.10.0 or higher in order to use the process described here. Swap also needs to be disabled. This cluster should use a static control plane and etcd pods. - Make sure you read the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md) carefully. - Note that `kubeadm upgrade` will not touch any of your workloads, only Kubernetes-internal components. As a best-practice you should back up what's important to you. For example, any app-level state, such as a database an app might depend on (like MySQL or MongoDB) must be backed up beforehand. 
@@ -26,7 +26,7 @@ Before proceeding:
 Also, note that only one minor version upgrade is supported. For example, you can only upgrade from 1.10 to 1.11, not from 1.9 to 1.11.

 {{< caution >}}
-**Caution:** The default DNS provider in 1.11 is [CoreDNS](https://coredns.io/) rather than [kube-dns](https://github.com/kubernetes/dns).
+**Caution:** The default DNS provider in 1.11 is [CoreDNS](https://coredns.io/) rather than [kube-dns](https://github.com/kubernetes/dns).
+To keep `kube-dns`, pass `--feature-gates=CoreDNS=false` to `kubeadm upgrade apply`.
 {{< /caution >}}

@@ -38,7 +38,7 @@ To keep `kube-dns`, pass `--feature-flags=CoreDNS=false` to `kubeadm upgrade app
 Execute these commands on your master node (as root):

-1.
+1. 

    ```shell
    export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version
    export ARCH=amd64 # or: arm, arm64, ppc64le, s390x
    curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > /usr/bin/kubeadm
    chmod a+rx /usr/bin/kubeadm
    ```

    {{< caution >}}
-   **Caution:** Upgrading the `kubeadm` package on your system prior to upgrading the control plane causes a failed upgrade.
-   Even though `kubeadm` ships in the Kubernetes repositories, it's important to install `kubeadm` manually. The kubeadm
-   team is working on fixing this limitation.
+   **Caution:** Upgrading the `kubeadm` package on your system prior to upgrading the control plane causes a failed upgrade. 
+   Even though `kubeadm` ships in the Kubernetes repositories, it's important to install `kubeadm` manually. The kubeadm 
+   team is working on fixing this limitation. 
    {{< /caution >}}

-   `kubeadm` is only needed on individual (non-master) nodes for joining the cluster.
-   It is not necessary to update kubeadm on nodes

+{{< caution >}}
+**Caution:** Upgrading the `kubeadm` package on your system prior to upgrading the control plane causes a failed upgrade.
+Even though `kubeadm` ships in the Kubernetes repositories, it's important to install `kubeadm` manually. The kubeadm
+team is working on fixing this limitation.
+{{< /caution >}}

 Verify that this download of kubeadm works and has the expected version:

@@ -120,7 +123,7 @@ Execute these commands on your master node (as root):

    You should see output similar to this:

-   
+   

    ```shell
    [preflight] Running pre-flight checks.

@@ -246,9 +249,13 @@ For each host (referred to as `$HOST` below) in your cluster, upgrade `kubelet`
    {{% /tab %}}
    {{< /tabs >}}

-   Upgrading `kubeadm` is only required on the master node.

-3. Restart the kubectl process with
+3. On all nodes but the master node, the kubelet config needs to be upgraded:

+   ```shell
+   sudo kubeadm upgrade node config --kubelet-version $(kubelet --version | cut -d ' ' -f 2)
+   ```
+
+4. Restart the kubelet process with

    ```shell
    sudo systemctl restart kubelet
    ```

@@ -259,13 +266,13 @@ For each host (referred to as `$HOST` below) in your cluster, upgrade `kubelet`
    systemctl status kubelet
    ```

-4. Bring the host back online by marking it schedulable:
+5. Bring the host back online by marking it schedulable:

    ```shell
    kubectl uncordon $HOST
    ```

-5. After upgrading `kubelet` on each host in your cluster, verify that all nodes are available again by executing the following (from anywhere, for example, from outside the cluster):
+6.
After upgrading `kubelet` on each host in your cluster, verify that all nodes are available again by executing the following (from anywhere, for example, from outside the cluster): ```shell kubectl get nodes From f2a309a95ce1c6718aa10942b1a723da2a61f620 Mon Sep 17 00:00:00 2001 From: liz Date: Fri, 22 Jun 2018 09:46:29 -0400 Subject: [PATCH 7/9] Overzealous ediffing --- .../administer-cluster/kubeadm/kubeadm-upgrade-1-11.md | 9 --------- 1 file changed, 9 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md index 8c32971d4ef12..4f67f5fdc2f4c 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md @@ -53,13 +53,6 @@ Execute these commands on your master node (as root): team is working on fixing this limitation. {{< /caution >}} - -{{< caution >}} -**Caution:** Upgrading the `kubeadm` package on your system prior to upgrading the control plane causes a failed upgrade. -Even though `kubeadm` ships in the Kubernetes repositories, it's important to install `kubeadm` manually. The kubeadm -team is working on fixing this limitation. -{{< /caution >}} - Verify that this download of kubeadm works and has the expected version: ```shell @@ -197,8 +190,6 @@ team is working on fixing this limitation. [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. ``` - To upgrade the cluster with CoreDNS as the default internal DNS, invoke `kubeadm upgrade apply` with the `--feature-gates=CoreDNS=true` flag. - 4. Manually upgrade your Software Defined Network (SDN). Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow. 
From 451ce777d66f706c8f4f9d7b58bcdd2cc08e1f11 Mon Sep 17 00:00:00 2001 From: JENNIFER RONDEAU Date: Mon, 25 Jun 2018 13:50:31 -0400 Subject: [PATCH 8/9] copyedit, fix lists and headings --- .../kubeadm/kubeadm-upgrade-1-11.md | 94 ++++++++----------- 1 file changed, 39 insertions(+), 55 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md index 4f67f5fdc2f4c..0ae3f17b4f901 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md @@ -7,59 +7,45 @@ content_template: templates/task {{% capture overview %}} -This guide is for upgrading `kubeadm` clusters from version 1.10.x to 1.11.x and 1.11.x to 1.11.y where `y > x`. +This page explains how to upgrade a Kubernetes cluster created with `kubeadm` from version 1.10.x to version 1.11.x, and from version 1.11.x to 1.11.y, where `y > x`. {{% /capture %}} {{% capture prerequisites %}} -Before proceeding: - -- You need to have a functional `kubeadm` Kubernetes cluster running version 1.10.0 or higher in order to use the process described here. Swap also needs to be disabled. This cluster should use a static control plane and etcd pods. +- You need to have a `kubeadm` Kubernetes cluster running version 1.10.0 or later. Swap must be disabled. The cluster should use a static control plane and etcd pods. - Make sure you read the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md) carefully. -- Note that `kubeadm upgrade` will not touch any of your workloads, only Kubernetes-internal components. As a best-practice you should back up what's important to you. For example, any app-level state, such as a database an app might depend on (like MySQL or MongoDB) must be backed up beforehand. 
-
-{{< caution >}}
-**Caution:** All the containers will get restarted after the upgrade, due to container spec hash value being changed.
-{{< /caution >}}
+- Make sure to back up any important components, such as app-level state stored in a database. `kubeadm upgrade` does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice.

-Also, note that only one minor version upgrade is supported. For example, you can only upgrade from 1.10 to 1.11, not from 1.9 to 1.11.
+### Additional information

-{{< caution >}}
-**Caution:** The default DNS provider in 1.11 is [CoreDNS](https://coredns.io/) rather than [kube-dns](https://github.com/kubernetes/dns).
+- All containers are restarted after upgrade, because the container spec hash value is changed.
+- You can upgrade only from one minor version to the next minor version. That is, you cannot skip versions when you upgrade. For example, you can upgrade only from 1.10 to 1.11, not from 1.9 to 1.11.
+- The default DNS provider in version 1.11 is [CoreDNS](https://coredns.io/) rather than [kube-dns](https://github.com/kubernetes/dns).
 To keep `kube-dns`, pass `--feature-gates=CoreDNS=false` to `kubeadm upgrade apply`.
-{{< /caution >}}

 {{% /capture %}}

 {{% capture steps %}}

-## Upgrading your control plane
-
-Execute these commands on your master node (as root):
+## Upgrade the control plane

-1.
+1.
On your master node, run the following (as root):

-   ```shell
-   export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version
-   export ARCH=amd64 # or: arm, arm64, ppc64le, s390x
-   curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > /usr/bin/kubeadm
-   chmod a+rx /usr/bin/kubeadm
-   ```
+   ```shell
+   export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version
+   export ARCH=amd64 # or: arm, arm64, ppc64le, s390x
+   curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > /usr/bin/kubeadm
+   chmod a+rx /usr/bin/kubeadm
+   ```

-   {{< caution >}}
-   **Caution:** Upgrading the `kubeadm` package on your system prior to upgrading the control plane causes a failed upgrade.
-   Even though `kubeadm` ships in the Kubernetes repositories, it's important to install `kubeadm` manually. The kubeadm
-   team is working on fixing this limitation.
-   {{< /caution >}}
+   Note that upgrading the `kubeadm` package on your system prior to upgrading the control plane causes a failed upgrade. Even though `kubeadm` ships in the Kubernetes repositories, it's important to install it manually. The kubeadm team is working on fixing this limitation.

-   Verify that this download of kubeadm works and has the expected version:
+1. Verify that the download works and has the expected version:

    ```shell
    kubeadm version
    ```

-2. On the master node, run the following:
+1. On the master node, run:

    ```shell
    kubeadm upgrade plan

@@ -104,15 +90,15 @@ Execute these commands on your master node (as root):
    _____________________________________________________________________

    ```

-   The `kubeadm upgrade plan` checks that your cluster is upgradeable and fetches the versions available to upgrade to in an user-friendly way.
+   This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to.

-3. Pick a version to upgrade to and run. For example:
+1.
Choose a version to upgrade to, and run the appropriate command. For example:

    ```shell
    kubeadm upgrade apply v1.11.0
    ```

-   If you currently use `kube-dns` and wish to continue doing so, use `--feature-flags=CoreDNS=false`.
+   If you currently use `kube-dns` and wish to continue doing so, add `--feature-gates=CoreDNS=false`.

    You should see output similar to this:

@@ -190,24 +176,21 @@ Execute these commands on your master node (as root):
    [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
    ```

-4. Manually upgrade your Software Defined Network (SDN).
+1. Manually upgrade your Software Defined Network (SDN).

-   Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow.
-   Check the [addons](/docs/concepts/cluster-administration/addons/) page to
-   find your CNI provider and see if there are additional upgrade steps
-   necessary.
+   Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow.
+   Check the [addons](/docs/concepts/cluster-administration/addons/) page to
+   find your CNI provider and see whether additional upgrade steps are required.

-## Upgrading your master and node packages
+## Upgrade master and node packages

-For each host (referred to as `$HOST` below) in your cluster, upgrade `kubelet` by executing the following commands:
-
-1. Prepare the host for maintenance, marking it unschedulable and evicting the workload:
+1. Prepare each host for maintenance, marking it unschedulable and evicting the workload:

    ```shell
    kubectl drain $HOST --ignore-daemonsets
    ```

-   When running this command against the master host, `--ignore-daemonsets` is required:
+   On the master host, you must add `--ignore-daemonsets`:

    ```shell
    kubectl drain ip-172-31-85-18

@@ -226,9 +209,7 @@ For each host (referred to as `$HOST` below) in your cluster, upgrade `kubelet`
    node "ip-172-31-85-18" drained
    ```

-2.
Upgrade the Kubernetes package versions on the `$HOST` node by using a Linux distribution-specific package manager:
-
-   If the host is running a Debian-based distro such as Ubuntu, run:
+1. Upgrade the Kubernetes package version on each `$HOST` node by running the Linux package manager for your distribution:

    {{< tabs name="k8s_install" >}}
    {{% tab name="Ubuntu, Debian or HypriotOS" %}}

@@ -240,45 +221,48 @@ For each host (referred to as `$HOST` below) in your cluster, upgrade `kubelet`
    {{% /tab %}}
    {{< /tabs >}}

+## Upgrade kubelet on each node
+
+1. On each node except the master node, upgrade the kubelet config:

-3. On all nodes but the master node, the kubelet config needs to be upgraded:

    ```shell
    sudo kubeadm upgrade node config --kubelet-version $(kubelet --version | cut -d ' ' -f 2)
    ```

-4. Restart the kubectl process with
+1. Restart the kubelet:
+

    ```shell
    sudo systemctl restart kubelet
    ```

-   Now the new version of the `kubelet` should be running on the host. Verify this using the following command on `$HOST`:
+1. Verify that the new version of the `kubelet` is running on the host:

    ```shell
    systemctl status kubelet
    ```

-5. Bring the host back online by marking it schedulable:
+1. Bring the host back online by marking it schedulable:

    ```shell
    kubectl uncordon $HOST
    ```

-6. After upgrading `kubelet` on each host in your cluster, verify that all nodes are available again by executing the following (from anywhere, for example, from outside the cluster):
+1. After the kubelet is upgraded on all hosts, verify that all nodes are available again by running the following command from anywhere -- for example, from outside the cluster:

    ```shell
    kubectl get nodes
    ```

-   If the `STATUS` column of the above command shows `Ready` for all of your hosts and the version you expect, you are done.
+   The `STATUS` column should show `Ready` for all your hosts, and the version number should be updated.
{{% /capture %}} ## Recovering from a failure state -If `kubeadm upgrade` somehow fails and fails to roll back, for example due to an unexpected shutdown during execution, -you can run `kubeadm upgrade` again as it is idempotent and should eventually make sure the actual state is the desired state you are declaring. +If `kubeadm upgrade` fails and does not roll back, for example because of an unexpected shutdown during execution, +you can run `kubeadm upgrade` again. This command is idempotent and eventually makes sure that the actual state is the desired state you declare. -You can use `kubeadm upgrade` to change a running cluster with `x.x.x --> x.x.x` with `--force`, which can be used to recover from a bad state. +To recover from a bad state, you can run `kubeadm upgrade` to change a running cluster from `x.x.x --> x.x.x` with `--force`. ## How it works From 68bcbc27f5614372845b679c15942b8c0bf9821e Mon Sep 17 00:00:00 2001 From: JENNIFER RONDEAU Date: Mon, 25 Jun 2018 14:28:14 -0400 Subject: [PATCH 9/9] clarify --force flag for fixing bad state --- .../tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md index 0ae3f17b4f901..2b4602d9ccfb8 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md @@ -262,7 +262,7 @@ To keep `kube-dns`, pass `--feature-flags=CoreDNS=false` to `kubeadm upgrade app If `kubeadm upgrade` fails and does not roll back, for example because of an unexpected shutdown during execution, you can run `kubeadm upgrade` again. This command is idempotent and eventually makes sure that the actual state is the desired state you declare. 
-To recover from a bad state, you can run `kubeadm upgrade` to change a running cluster from `x.x.x --> x.x.x` with `--force`. +To recover from a bad state, you can also run `kubeadm upgrade --force` without changing the version that your cluster is running. ## How it works
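
One detail in the patches above worth spelling out: the `kubeadm upgrade node config` step derives the kubelet version with the pipeline `kubelet --version | cut -d ' ' -f 2`, which keeps the second space-delimited field of the version banner. A minimal sketch of that extraction in isolation; the version string is hard-coded as a stand-in (an assumption for illustration, so the snippet runs on a machine without a `kubelet` binary):

```shell
# Typical `kubelet --version` output looks like "Kubernetes v1.11.0";
# hard-coded here so no kubelet binary is required to try the extraction.
sample_output='Kubernetes v1.11.0'

# Same extraction the docs use: split on single spaces, keep field 2.
kubelet_version=$(echo "$sample_output" | cut -d ' ' -f 2)
echo "$kubelet_version"   # prints "v1.11.0"
```

On a real node, replacing the hard-coded string with the live `kubelet --version` output yields the same second field, which is what the docs pass to `--kubelet-version`.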