This guide sets up Kubernetes on CoreOS in a similar way to the other tools in this repo. The main goal of these scripts is to be generic and work across many different cloud providers and platforms. The notable difference is that, because these scripts are platform agnostic, they don't automatically set up the TLS assets on each host beforehand.
While we provide these scripts and test them through the multi-node Vagrant setup, we recommend using a platform-specific install method if one is available. If you are installing to bare metal, you may find our baremetal repo more appropriate.
Review the OpenSSL-based TLS instructions for generating your TLS assets for each of the Kubernetes nodes.
Place the files in the following locations:
| Controller Files | Location |
|---|---|
| API Certificate | /etc/kubernetes/ssl/apiserver.pem |
| API Private Key | /etc/kubernetes/ssl/apiserver-key.pem |
| CA Certificate | /etc/kubernetes/ssl/ca.pem |
| Worker Files | Location |
|---|---|
| Worker Certificate | /etc/kubernetes/ssl/worker.pem |
| Worker Private Key | /etc/kubernetes/ssl/worker-key.pem |
| CA Certificate | /etc/kubernetes/ssl/ca.pem |
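For example, on a controller the assets could be copied into place as follows. This is a minimal sketch that assumes the PEM files generated in the previous step are in the current directory; on a worker, substitute `worker.pem` and `worker-key.pem`:

```
$ sudo mkdir -p /etc/kubernetes/ssl
$ sudo cp apiserver.pem apiserver-key.pem ca.pem /etc/kubernetes/ssl/
$ sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem
$ sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem
```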
This cluster must adhere to the Kubernetes networking model. Nodes created by the generic scripts, by default, listen on and identify themselves by the `ADVERTISE_IP` environment variable. If this isn't set, the scripts will source it from `/etc/environment`, specifically using the value of `COREOS_PUBLIC_IPV4`.
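For instance, to check what a node would advertise by default, or to set the value explicitly (the IP below is hypothetical):

```
$ grep COREOS_PUBLIC_IPV4 /etc/environment
COREOS_PUBLIC_IPV4=10.0.0.101
$ export ADVERTISE_IP=10.0.0.101    # set explicitly to override the default
```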
Each controller node must set its `ADVERTISE_IP` to an IP that accepts connections on port 443 from the workers. If using a load balancer, it must accept connections on port 443 and pass them to the pool of controllers.
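Before installing the workers, it can be worth verifying from one of them that the controller endpoint accepts TLS connections on port 443. A sketch using a hypothetical load-balancer IP; curl exits successfully once the TLS connection is established, even if the API rejects the unauthenticated request:

```
$ curl --insecure --max-time 5 https://10.0.0.50:443/ && echo "endpoint reachable"
```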
For the complete list of environment variables, see the top of the `controller-install.sh` script.
In addition to identifying itself with `ADVERTISE_IP`, each worker must be configured with the `CONTROLLER_ENDPOINT` variable, which tells it where to contact the Kubernetes API. For a single controller, this is that controller's `ADVERTISE_IP` mentioned above. For multiple controllers, this is the IP of the load balancer.
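As a sketch, the relevant variables might be set like this before running the script. The IPs are hypothetical, and the exact format `CONTROLLER_ENDPOINT` expects should be confirmed at the top of `worker-install.sh`:

```
$ export ADVERTISE_IP=10.0.0.201
$ export CONTROLLER_ENDPOINT=https://10.0.0.50
```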
For the complete list of environment variables, see the top of the `worker-install.sh` script.
You may modify the kubelet's unit file to enable additional features (a drop-in sketch follows this list), such as:
- mounting ephemeral disks
- allowing pods to mount RBD or iSCSI volumes
- allowing access to insecure container registries
- using host DNS configuration instead of a public DNS server
- enabling the cluster logging add-on
- changing your CoreOS auto-update settings
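On CoreOS, the usual way to make such changes is a systemd drop-in rather than editing the unit file in place. A minimal sketch that orders the kubelet after a hypothetical ephemeral-disk mount (the drop-in file name and mount unit are assumptions, not part of the install scripts):

```
# /etc/systemd/system/kubelet.service.d/10-ephemeral.conf
# Start the kubelet only after the (hypothetical) ephemeral disk is mounted.
[Unit]
Requires=mnt-ephemeral.mount
After=mnt-ephemeral.mount
```

After adding a drop-in, reload systemd and restart the unit with `sudo systemctl daemon-reload && sudo systemctl restart kubelet`.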
It is highly recommended that etcd be run as a dedicated cluster, separate from the Kubernetes components. Use the official etcd clustering guide to decide how best to deploy etcd into your environment.
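The install scripts are pointed at the external etcd cluster through an environment variable. A sketch with hypothetical member addresses; confirm the exact variable name and format at the top of `controller-install.sh`:

```
$ export ETCD_ENDPOINTS=http://10.0.0.11:2379,http://10.0.0.12:2379,http://10.0.0.13:2379
```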
Follow these instructions for each controller you wish to boot (a consolidated sketch follows the list):
- Boot CoreOS.
- Download and copy `controller-install.sh` onto disk.
- Copy the TLS assets onto disk.
- Execute `controller-install.sh` with the environment variables set.
- Wait for the script to complete. About 300 MB of containers will be downloaded before the cluster is running.
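Put together, a controller boot might look like the following sketch. The download URL, IPs, and etcd endpoint are placeholders; substitute your own:

```
$ curl -L -O https://example.com/path/to/controller-install.sh
$ sudo mkdir -p /etc/kubernetes/ssl
$ sudo cp apiserver.pem apiserver-key.pem ca.pem /etc/kubernetes/ssl/
$ chmod +x controller-install.sh
$ sudo ADVERTISE_IP=10.0.0.101 ETCD_ENDPOINTS=http://10.0.0.11:2379 ./controller-install.sh
```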
Follow these instructions for each worker you wish to boot (a consolidated sketch follows the list):
- Boot CoreOS.
- Download and copy `worker-install.sh` onto disk.
- Copy the TLS assets onto disk.
- Execute `worker-install.sh` with the environment variables set.
- Wait for the script to complete. About 300 MB of containers will be downloaded before the cluster is running.
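Likewise for a worker. Placeholders throughout; `CONTROLLER_ENDPOINT` points at the single controller or the load balancer described above:

```
$ curl -L -O https://example.com/path/to/worker-install.sh
$ sudo mkdir -p /etc/kubernetes/ssl
$ sudo cp worker.pem worker-key.pem ca.pem /etc/kubernetes/ssl/
$ chmod +x worker-install.sh
$ sudo ADVERTISE_IP=10.0.0.201 CONTROLLER_ENDPOINT=https://10.0.0.50 ./worker-install.sh
```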
The Kubernetes cluster will be up and running once the scripts complete and the containers have finished downloading. To take a closer look, SSH to one of the machines and monitor the container downloads:
```
$ docker ps
```
You can also watch the kubelet's logs with journalctl:
```
$ journalctl -u kubelet -f
```
Did your containers start downloading? Next, set up the `kubectl` CLI for use with your cluster.