Node Slice Fast IPAM #458
Conversation
The NodeSlice CR is for the user to define and change... so isn't it hard to match the different runtime needs of each node? Not unless the user is an AI, I guess.
I'm not sure I fully understand. In this current version the user can define whatever slice size they need.
Hi @ivelichkovich, sorry for not making my point clear. I meant to suggest not limiting a whereabouts node agent to only one network slice. Here are more details for your review:

1. Limiting a node agent to one network slice effectively removes the need for the "lease lock", since locking will always succeed.
2. We can use the existing "lease lock" workflow for the node agent to request access to another network slice (one that is not full) when its primary slice is full.
3. When a new node is added, it can take a free network slice (one not yet assigned to any node).
We discussed this in the maintainers meeting. The lease is still needed because you can run multiple network-attachment-definitions on the same node, and each node can allocate multiple IPs at the same time, with each allocation launching a new whereabouts process.
note to self: clean imports |
e2e/e2e_node_slice_test.go
Outdated
By("deleting replicaset with whereabouts net-attach-def")
Expect(clientInfo.DeleteReplicaSet(replicaSet)).To(Succeed())
})
Can these tests be run multi-threaded, in parallel, to verify simultaneous requests from several clients?
We could do something like that, but what exactly are we trying to test with it? Whereabouts runs on pod creation, so these tests already launch many concurrent pods.
If I understand correctly, pod creation here is serial, so the IP requests toward whereabouts may not be fully concurrent and thus don't test simultaneous requests (at least if the test blocks on pod creation).
It sets the replicaset to testConfig.MaxReplicas(allPods.Items), which results in many pods launching in parallel. Depending on how many nodes are in the test cluster, this should also lead to multiple pods per node, so multiple pods hitting one lease/node pool does get exercised.
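As a complement to the e2e path, the property the reviewer is after (no duplicate allocations under simultaneous requests) could also be checked at unit level, with goroutines standing in for concurrent CNI ADD calls on one node. This is a hypothetical sketch, not code from the PR:

```go
package main

import (
	"fmt"
	"sync"
)

// allocator is a toy model of a node's IP pool: it hands out offsets
// under a mutex, the way a per-node slice hands out addresses.
type allocator struct {
	mu   sync.Mutex
	next int
	used map[int]bool
}

func (a *allocator) alloc() int {
	a.mu.Lock()
	defer a.mu.Unlock()
	ip := a.next
	a.next++
	a.used[ip] = true
	return ip
}

// stress fires n concurrent allocations and reports how many distinct
// addresses came back; with correct locking it is always n.
func stress(n int) int {
	a := &allocator{used: map[int]bool{}}
	results := make(chan int, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			results <- a.alloc()
		}()
	}
	wg.Wait()
	close(results)
	distinct := map[int]bool{}
	for ip := range results {
		distinct[ip] = true
	}
	return len(distinct)
}

func main() {
	fmt.Println(stress(100)) // 100 distinct allocations, no duplicates
}
```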
Pull Request Test Coverage Report for Build 9844370844 (Coveralls)
Might be worth marking this feature as experimental in the docs until we've built out more of the phases from the proposal and had more bake time and testing time.
Igor -- this is awesome. I just ran through some functional testing, and it's looking great for the cases I tried, which didn't try to push the boundaries of the limitations we noted.
I'm all for moving forward with a merge. Especially since we've decided on a phased approach, if any tailoring is necessary we can follow up with it. Also, because of the approach you took (and thank you for it), I don't think there's a strong risk to other functionality.
Agreed on marking it experimental in the docs... here's a quick attempt at an addition to the README. Feel free to incorporate it, or we can follow on with something:
## Fast IPAM by Using Preallocated Node Slices [Experimental]
**Enhance IPAM performance in large-scale Kubernetes environments by reducing IP allocation contention through node-based IP slicing.**
### Fast IPAM Configuration
```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: whereabouts-fast-ipam
spec:
  config: '{
    "cniVersion": "0.3.0",
    "name": "whereaboutsexample",
    "type": "macvlan",
    "master": "eth0",
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts",
      "range": "192.168.0.0/16",
      "fast_ipam": true,
      "node_slice_size": "/22"
    }
  }'
```
This setup enables the fast IPAM feature to optimize IP allocation for nodes, improving network performance in clusters with high pod density.
Hi @ivelichkovich, I'm about to start reviewing this PR but I wanted to understand the design first. From the proposal, it is not clear to me how the range is divided. A few questions:

- Could you please elaborate on how a range set in the IPAM config is divided between the nodes?
- What happens if the number of nodes increases, e.g. to 6?
- What does the new controller do when a node is unreachable?

Thanks.
Hey, so this requires running a controller in the cluster; that controller is responsible for creating and managing the NodeSlicePools (the resource representing node allocations). When nodes are added, it assigns each node to an open "slice". If there are too many nodes, it just skips them, though it could fire an event or something like that. If a node is not reachable, I don't think it'll be removed, but when the node itself is actually deleted from the cluster, its "slice" will open up again.
@@ -81,6 +83,8 @@ func (ic *IPAMConfig) UnmarshalJSON(data []byte) error {
	Datastore string `json:"datastore"`
	Addresses []Address `json:"addresses,omitempty"`
	IPRanges []RangeConfiguration `json:"ipRanges"`
	NodeSliceSize string `json:"node_slice_size"`
	Namespace string `json:"namespace"` //TODO: best way to get namespace of the NAD?
This is the biggest remaining issue in the PR, I think: we need to know the namespace of the NAD, which is the same namespace as the node slices, since it's used to look up the node slices.
Not sure if there's an easy way to discover this value. We could also make NodeSlicePools cluster-scoped and not worry about it, but that wouldn't be consistent with the rest of the CRDs.
It's also an API change, so this one is probably worth figuring out even before merging as an experimental feature.
Okay, fixed this to use WHEREABOUTS_NAMESPACE the same way it does for IPPools. There are some implications if there are multiple sets of duplicate NADs in different namespaces; maybe these resources should be cluster-scoped? Anyway, for this PR this fixes the namespace handling to follow current patterns.
Appreciate all the hard work on this -- a huge benefit to the whereabouts community. Hugely appreciated.
What this PR does / why we need it:
improves performance with node slice mode
https://docs.google.com/document/d/1YlWfg3Omrk3bf6Ujj-s5wXlP6nYo4PZseA0bS6qmvkk/edit#heading=h.ehhncqtntm3t
Which issue(s) this PR fixes (optional, in `fixes #<issue_number>` format; will close the issue(s) when the PR gets merged):

Fixes #
Special notes for your reviewer (optional):
This is a very very rough draft to help guide the design and discussion