Container control plane #1318
base: main
Conversation
Two new opts representing the two scripts used in the provision phase (provision linux and provision k8s). go:embed is used to include any necessary config files, and the commands are then run on a node over SSH (libssh). Signed-off-by: aerosouund <[email protected]>
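As a rough illustration of that approach (a minimal sketch, not the PR's actual code: the file path conf/provision-linux.sh is hypothetical, and golang.org/x/crypto/ssh stands in here for whichever SSH client the PR uses), embedding a provision script and running it on a node over SSH could look like:

```go
package provision

import (
	_ "embed"
	"strings"

	"golang.org/x/crypto/ssh"
)

// The provision script is compiled into the binary with go:embed,
// so no external files need to be shipped alongside gocli.
// NOTE: the embedded path below is illustrative only.
//
//go:embed conf/provision-linux.sh
var provisionLinuxScript string

// LinuxProvisioner runs the embedded provision script on a node over SSH.
type LinuxProvisioner struct {
	client *ssh.Client
}

func NewLinuxProvisioner(client *ssh.Client) *LinuxProvisioner {
	return &LinuxProvisioner{client: client}
}

// Exec opens a session on the node and streams the embedded script to a remote shell.
func (p *LinuxProvisioner) Exec() error {
	session, err := p.client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = strings.NewReader(provisionLinuxScript)
	return session.Run("bash -s")
}
```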
The KubevirtProvider is a struct representing an arbitrary running Kubevirtci cluster. It holds all the config flags and options of the run and provision commands. A KubevirtProvider can be created in two ways: by creating a cluster using the Start method, or from an already running cluster. To make the latter possible, a JSON representation of the struct is persisted on the dnsmasq container and later read to recover the deployed settings; the former goes through the normal constructor, which uses the option pattern to avoid a bloated function signature. The logic that was previously in run.go has been split into several methods to facilitate readability and testing (runNFSGanesha, runRegistry, prepareQemuCmd, prepareDeviceMappings), and the dnsmasq creation logic was moved into its own method instead of living in a separate package. Floating functions such as waitForVMToBeUp, nodeNameFromIndex, nodeContainer, etc. were grouped as methods on the struct. Signed-off-by: aerosouund <[email protected]>
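An illustrative sketch of the two construction paths described above; the field names and option functions are assumptions for illustration, not necessarily the ones used in the PR:

```go
package providers

import "encoding/json"

// KubevirtProvider holds the config flags and options shared by the
// run and provision commands (illustrative subset of fields).
type KubevirtProvider struct {
	Version string `json:"version"`
	Nodes   uint   `json:"nodes"`
	Memory  string `json:"memory"`
	Slim    bool   `json:"slim"`
}

// KubevirtProviderOption mutates the provider during construction,
// avoiding a bloated constructor signature.
type KubevirtProviderOption func(*KubevirtProvider)

func WithNodes(n uint) KubevirtProviderOption {
	return func(p *KubevirtProvider) { p.Nodes = n }
}

func WithMemory(m string) KubevirtProviderOption {
	return func(p *KubevirtProvider) { p.Memory = m }
}

// NewKubevirtProvider builds a provider for a new cluster.
func NewKubevirtProvider(version string, opts ...KubevirtProviderOption) *KubevirtProvider {
	p := &KubevirtProvider{Version: version, Nodes: 1}
	for _, opt := range opts {
		opt(p)
	}
	return p
}

// NewKubevirtProviderFromRunning rebuilds a provider from the JSON
// representation previously persisted on the dnsmasq container.
func NewKubevirtProviderFromRunning(persisted []byte) (*KubevirtProvider, error) {
	p := &KubevirtProvider{}
	if err := json.Unmarshal(persisted, p); err != nil {
		return nil, err
	}
	return p, nil
}
```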
To avoid having to read each flag and return an error if it is unset, the FlagMap is leveraged: a map of flag name to FlagConfig. A FlagConfig holds the type of the flag (string, int, uint16, bool or string array) and the option function that sets the flag's value on the KubevirtProvider struct. During flag parsing this map is iterated over, and each option is appended to an array that is later passed to the KubevirtProvider constructor. The run method's role is now just to parse the flags, pass them to the provider, and call Start. All the floating methods in run.go have been removed after being moved to the provider. Signed-off-by: aerosouund <[email protected]>
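Continuing the sketch above (same illustrative package and caveats, with cobra assumed as the flag library), a flag map and a generic parsing loop could look like:

```go
package providers

import "github.com/spf13/cobra"

// FlagConfig describes a single CLI flag: the type it carries and the
// provider option that applies its parsed value.
type FlagConfig struct {
	FlagType string
	Opt      func(value interface{}) KubevirtProviderOption
}

// FlagMap maps flag names to their configuration so flags can be parsed
// generically instead of one unset-check per flag.
var FlagMap = map[string]FlagConfig{
	"nodes": {
		FlagType: "uint",
		Opt: func(v interface{}) KubevirtProviderOption {
			return WithNodes(v.(uint))
		},
	},
	"memory": {
		FlagType: "string",
		Opt: func(v interface{}) KubevirtProviderOption {
			return WithMemory(v.(string))
		},
	},
}

// parseFlags iterates the map, reads each flag by its declared type and
// collects the resulting options for the provider constructor.
func parseFlags(cmd *cobra.Command) ([]KubevirtProviderOption, error) {
	opts := []KubevirtProviderOption{}
	for name, cfg := range FlagMap {
		switch cfg.FlagType {
		case "string":
			v, err := cmd.Flags().GetString(name)
			if err != nil {
				return nil, err
			}
			opts = append(opts, cfg.Opt(v))
		case "uint":
			v, err := cmd.Flags().GetUint(name)
			if err != nil {
				return nil, err
			}
			opts = append(opts, cfg.Opt(v))
		}
	}
	return opts, nil
}
```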
This functionality now exists in the KubevirtProvider type and doesn't need a package of its own. Signed-off-by: aerosouund <[email protected]>
The KubevirtProvider type is what provides the methods that run a node or run the k8s options. Testing logic has been moved to a Base Provider Suite. Signed-off-by: aerosouund <[email protected]>
…uint16. All references to ports in the codebase use uint, not uint16; there is no reason to keep the ports as they are. Signed-off-by: aerosouund <[email protected]>
Thanks for your pull request. Before we can look at it, you'll need to add a DCO signoff to your commits. 📝 Please follow the instructions in the contributing guide to update your commits with the DCO. Full details of the Developer Certificate of Origin can be found at developercertificate.org.
Hi @aerosouund - thanks for all the effort on this.
I have a couple of concerns about this.
Overall, I am not convinced of the resources that this will save as we will just be running the kubernetes control plane components somewhere else.
We will still have a requirement to have a control plane node for our testing in kubevirt/kubevirt as a lot of the KubeVirt infra components require a control plane node to be scheduled - kubevirt/kubevirt#11659
These KubeVirt infra components can be sensitive to things like selinux policies and kernel modules so running in a controlled VM adds some benefits here.
We could make this container control plane configurable, but I don't see it running in the main CI workloads, and I am not sure how much it would be used in local dev environments: if someone wants to test against a lightweight cluster setup, we have the kind cluster providers, which provide a very lightweight cluster.
Let me know what you think. These are just a couple of the issues that came to my mind on this.
Valid concerns for sure, let me discuss them
True, the control plane will run somewhere else, but you would not be reserving a big amount of compute for it. Currently, whatever amount you reserve for VMs in the cluster is what gets allocated to the control plane, which can be rather large. In general, the majority of the resource saving comes from being able to schedule workload pods (istio, cdi, multus, etc.) on any node in the cluster. That means the total sum of resources used by those containers becomes allocatable on node 1, which was previously the control plane.
Yes, I ran into this issue while testing this PR. For now, the hacky fix I am using is to label a random node as the control plane even though it isn't, and that seems to be enough to make it work. But if we check the reasons mentioned in the PR, they say it's because:
I am still reading their full rationale, but based on what they say, these risks will be present wherever the components are scheduled. An opinionated take I have on this is that KubeVirtCI is an ephemeral cluster creator and not meant for any long-lived clusters. Also, the clusters it creates run in isolated environments (at least in CI that is how it is). So in terms of priorities, CI efficiency and resources beat security.
If I understand correctly, you are saying the KubeVirt infra components are best suited to running on a VM. Well, under this PR they still do.
It has challenges for sure, but I believe it is a very strong candidate to run in CI, and no challenge (so far) seems so glaring as to imply it is impossible to run it in CI. The latest of them was the timing out of validation webhooks because the API server was not in the pod network; this was overcome by using konnectivity. Happy to discuss this further. Let me know if you have any opinions on what I said or if anything needs correction.
We do use the control plane node (node01) for scheduling test workloads, so I am not sure the resources are wasted; node01 is schedulable. For instance, here you can see a test VM on node01 - https://storage.googleapis.com/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/13208/pull-kubevirt-e2e-k8s-1.31-sig-compute/1859000686096683008/artifacts/k8s-reporter/3/1_overview.log
What would the benefit of this approach be over just running a single node kubevirtci cluster? What kind of resource savings are you seeing by moving the kubernetes control plane components out of the VM? I don't think those components are that heavy on resources, but maybe I am wrong.
Based on this, it seems that some components are indeed scheduled on the control plane. I need to investigate per component why that is, as some components actively check for the control-plane label on the node in order to be scheduled there (the kubevirt CR job, for example), but by default the control plane does not take any pods.
You are right that moving the control plane components isn't going to result in big savings, as they aren't the main culprit in resource consumption. The benefit is that by taking them out, all your nodes are now workers and you can treat them all as a shared resource pool, rather than giving a particular node special treatment. This means the kubevirtci components don't need to take scheduling rules into account and can sit on any node, effectively achieving better resource utilization and savings across the cluster. In general, the sensible way to move forward with this project is to:
1. Get it working for all KubeVirtCI test cases and lanes
Let me know what you think.
What this PR does / why we need it:
Runs the control plane of the kubevirtCI cluster in containers to cut resource consumption; expected to save one runner per CI lane run across all repos.
It is based on #1230
In a typical kubevirtCI cluster, the control plane is unschedulable. As seen in this snippet, only the system containers run on it.
Design
There is no standardized tool or technology that achieves what this PR tries to achieve, at least not in a way that matches the needs of kubevirtCI. One of the big requirements is that the control plane joining process remains the same for workers (through kubeadm) whether they are joining a VM control plane or a container control plane, and so the PR provides its own way of provisioning certificates, running DNS in the cluster, and many other rudimentary k8s concepts.
The code for the control plane container lives in cluster-provision/gocli/control-plane. The creation of the control plane happens in a way similar to how kubeadm provisions a cluster, through the concept of phases, which are called in sequence from its main function.
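As a purely illustrative sketch (not the PR's actual code; the phase names and types here are assumptions), such a phase-based main function might look like:

```go
package main

import (
	"fmt"
	"log"
)

// Phase is one self-contained step of bringing up the containerised
// control plane, in the spirit of kubeadm's init phases.
type Phase interface {
	Name() string
	Run() error
}

// certsPhase and apiServerPhase are illustrative placeholders for real
// phases such as certificate generation or starting the API server.
type certsPhase struct{}

func (p certsPhase) Name() string { return "certs" }
func (p certsPhase) Run() error   { fmt.Println("generating certificates"); return nil }

type apiServerPhase struct{}

func (p apiServerPhase) Name() string { return "apiserver" }
func (p apiServerPhase) Run() error   { fmt.Println("starting kube-apiserver container"); return nil }

func main() {
	// Phases run in order; a failure aborts the control-plane bring-up.
	phases := []Phase{certsPhase{}, apiServerPhase{}}
	for _, phase := range phases {
		if err := phase.Run(); err != nil {
			log.Fatalf("phase %s failed: %v", phase.Name(), err)
		}
	}
}
```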
The phases it runs are:
This then gets instantiated in the KV provider to start it, and node01 will now only get called if the node count is 1.
Changes to the networking setup
Only one change is required in dnsmasq.sh: if the node count is higher than 1, create an interface called tap101, manually assign it the required IPs, and forward port 6443 on it through eth0. Since the api server container gets launched in the same netns as dnsmasq, all that's needed afterwards is for the server to advertise the IP 192.168.66.110 as the api server endpoint, and this is taken care of in the code.
Current state
The PR is being tested locally to verify that all previously existing functionality is retained, and the code is being cleaned up for final presentation. Given its size, the PR is opened as a draft so the community can see it and give feedback to help drive the direction of the project.
Extra notes
cc: @dhiller @brianmcarey @acardace @xpivarc
Checklist
This checklist is not enforced, but it's a reminder of items that could be relevant to every PR.
Approvers are expected to review this list.
Release note: