
How to Deploy Cilium as Default CNI? #2

Closed
templarfelix opened this issue Dec 20, 2022 · 2 comments

@templarfelix

How do I enable cilium? https://github.com/alvistack/vagrant-kubernetes/blob/master/playbooks/60-kube_cilium-install.yml

@hswong3i
Member

hswong3i commented Dec 21, 2022

Short Answer

# Clone the repo, `vagrant up` the box, and SSH into it
$ git clone -b develop https://github.com/alvistack/vagrant-kubernetes.git
$ cd vagrant-kubernetes
$ vagrant up
$ vagrant ssh

# Working as root
vagrant@kubernetes-1:~$ sudo su -

# Wait a few minutes for /usr/local/bin/virt-sysprep-firstboot.sh to get ready
root@kubernetes-1:~# kubectl get --raw='/readyz?verbose' | grep 'check passed'
readyz check passed

# At this point Kubernetes should already be self-provisioned with Ansible,
# without any default CNI
root@kubernetes-1:~# kubectl get pod --all-namespaces 
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-565d847f94-nhls4               1/1     Running   0          113s
kube-system   coredns-565d847f94-tx8st               1/1     Running   0          113s
kube-system   kube-apiserver-kubernetes-1            1/1     Running   0          2m8s
kube-system   kube-controller-manager-kubernetes-1   1/1     Running   0          2m8s
kube-system   kube-proxy-fmg77                       1/1     Running   0          113s
kube-system   kube-scheduler-kubernetes-1            1/1     Running   0          2m8s

# Deploy cilium as CNI
root@kubernetes-1:~# apt update
root@kubernetes-1:~# ansible-playbook /etc/ansible/playbooks/60-kube_cilium-install.yml 

# Check again; cilium should now be ready
root@kubernetes-1:~# kubectl get pod --all-namespaces 
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   cilium-252xk                           1/1     Running   0          3m
kube-system   cilium-node-init-9b9df                 1/1     Running   0          3m
kube-system   cilium-operator-5478d947cd-9dhnd       1/1     Running   0          3m
kube-system   coredns-565d847f94-b5m5h               1/1     Running   0          22s
kube-system   coredns-565d847f94-jfq5g               1/1     Running   0          37s
kube-system   kube-addon-manager-kubernetes-1        1/1     Running   0          3m8s
kube-system   kube-apiserver-kubernetes-1            1/1     Running   0          6m51s
kube-system   kube-controller-manager-kubernetes-1   1/1     Running   0          6m51s
kube-system   kube-proxy-fmg77                       1/1     Running   0          6m36s
kube-system   kube-scheduler-kubernetes-1            1/1     Running   0          6m51s
root@kubernetes-1:~# cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         disabled
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

DaemonSet         cilium             Desired: 1, Ready: 1/1, Available: 1/1
Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Containers:       cilium             Running: 1
                  cilium-operator    Running: 1
Cluster Pods:     2/2 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.12.4: 1
                  cilium-operator    quay.io/cilium/operator-generic:v1.12.4: 1

Long Answer

When simply running vagrant up

This box has a self-provision script (see https://github.com/alvistack/vagrant-kubernetes/blob/master/playbooks/templates/usr/local/bin/virt-sysprep-firstboot.sh.j2) for the above initialization with Ansible, executed by systemd during the finalize phase of system boot.
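A quick way to watch that firstboot provisioning complete is to follow its journal from inside the guest. A minimal sketch, assuming the script is wrapped in a systemd oneshot unit named after it (the unit name here is an assumption; check the template above for the real wiring):

# Hypothetical unit name, for illustration only
systemctl status virt-sysprep-firstboot.service
journalctl -u virt-sysprep-firstboot.service --no-pager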

But this mode skips the deployment of any CNI, so the box can be reused for testing cilium/flannel/weave, e.g. as sketched below. It also focuses only on a single-node AIO deployment.
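For example, a different CNI playbook can be run in place of the cilium one from the short answer. The flannel path below is an assumption that mirrors the cilium playbook's naming; verify the actual file name under /etc/ansible/playbooks first:

# Assumed path, mirroring 60-kube_cilium-install.yml
ansible-playbook /etc/ansible/playbooks/60-kube_flannel-install.yml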

When self-testing with sudo -E molecule test -s kubernetes-1.25-libvirt

Just after the box comes up and before the above script executes, the self provisioning is stopped (see https://github.com/alvistack/vagrant-kubernetes/blob/master/molecule/kubernetes-1.25-libvirt/molecule.yml#L34-L37).
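A minimal sketch of what stopping the self provisioning could look like from inside the guest, again assuming the firstboot script is driven by a systemd unit (the linked molecule.yml lines do the real equivalent):

# Hypothetical: keep the firstboot provisioning from running,
# so molecule can drive provisioning itself
systemctl disable --now virt-sysprep-firstboot.service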

During the converge phase (see https://github.com/alvistack/vagrant-kubernetes/blob/master/molecule/default/converge.yml), the normal self-provision steps run, for a multi-node cluster deployment.

During the verify phase (see https://github.com/alvistack/vagrant-kubernetes/blob/master/molecule/default/verify.yml), flannel is deployed as the default CNI for running the CNCF conformance test (see https://github.com/cncf/k8s-conformance/tree/master/v1.25/alvistack-vagrant-kubernetes#deploy-kubernetes).
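The conformance run itself is typically driven with sonobuoy, per the CNCF instructions linked above; a minimal sketch using the standard sonobuoy CLI (whether this repo wraps these commands differently is not shown here):

# Standard CNCF conformance flow with sonobuoy
sonobuoy run --mode=certified-conformance
sonobuoy status                  # poll until the run completes
results=$(sonobuoy retrieve)     # download the results tarball
sonobuoy results "$results"      # summarize pass/fail
sonobuoy delete --wait           # clean up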

When reused as a base box for testing other CNIs

It also stops the self provisioning at the beginning (see https://github.com/alvistack/ansible-role-kube_cilium/blob/master/molecule/kubernetes-1.25-libvirt/molecule.yml).

Therefore it can deploy its own CNI for testing during the verify phase (see https://github.com/alvistack/ansible-role-kube_cilium/blob/master/molecule/default/verify.yml).
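A hedged sketch of the kind of readiness checks such a verify phase can run once Cilium is deployed (standard kubectl and cilium-cli invocations, not necessarily what the linked verify.yml does):

# Wait for the cilium DaemonSet to roll out, then confirm agent health
kubectl -n kube-system rollout status daemonset/cilium --timeout=300s
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=cilium --timeout=300s
cilium status --wait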

hswong3i changed the title from "cilium" to "How to Deplot Cilium as Default CNI?" on Dec 21, 2022
hswong3i pinned this issue on Dec 21, 2022
hswong3i changed the title from "How to Deplot Cilium as Default CNI?" to "How to Deploy Cilium as Default CNI?" on Dec 21, 2022
@templarfelix
Author

Thanks
