Preemption for Coscheduling #581
Comments
It's a good idea. I feel like there can be two ways.
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to a set of inactivity rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to a set of inactivity rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to a set of inactivity rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Hi @KunWuLuan, are there any ideas on this issue?
@shadowdsp Yes, currently the preemption plugin will only set the nominated node for the pod that triggers the preemption. We have a custom plugin to solve the problem, but at the moment it seems the plugin is tailored to our specific situation.
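(For context, a rough sketch of why only the triggering pod ends up nominated: preemption runs in a PostFilter plugin, which is invoked only for the single pod whose scheduling attempt failed and returns a nominated node for that pod alone. The plugin type and the victim-search helper below are placeholders, and framework signatures differ slightly across scheduler versions.)

```go
package example

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// preemptionSketch stands in for a preemption-style PostFilter plugin.
type preemptionSketch struct{}

// findNodeWithVictims is a hypothetical helper: pick a node whose victims
// would make room for the pod. The real selection logic is far more involved.
func (pl *preemptionSketch) findNodeWithVictims(pod *v1.Pod) (string, bool) {
	return "", false
}

// PostFilter runs only for the single pod whose scheduling attempt failed,
// so only that pod gets a nominated node; the rest of its pod group gets none.
func (pl *preemptionSketch) PostFilter(ctx context.Context, state *framework.CycleState,
	pod *v1.Pod, m framework.NodeToStatusMap) (*framework.PostFilterResult, *framework.Status) {
	nodeName, ok := pl.findNodeWithVictims(pod)
	if !ok {
		return nil, framework.NewStatus(framework.Unschedulable)
	}
	// The scheduler copies this into pod.Status.NominatedNodeName for "pod" only.
	return framework.NewPostFilterResultWithNominatedNode(nodeName), framework.NewStatus(framework.Success)
}
```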
I noticed that when using coscheduling, preempting only one pod at a time is not enough.
So I am considering implementing a plugin that supports preempting more than one pod at a time.
I have implemented a prototype of batch preemption. The batch preemption plugin (referred to as BP below) interacts with other plugins. BP provides an interface with two functions for other plugins: 1. query the list of pods ready for preemption, and 2. perform the preemption once the current pod has been scheduled successfully or has preempted successfully.
If a pod finds some victims on a node, the plugin records the result and does nothing. If some other plugin later decides to do the preemption, after a successful scheduling or a successful preemption, it can call the interface, and BP will then preempt all the victims it has recorded.
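For concreteness, here is a minimal sketch in Go of what such an interface could look like. Every name below (BatchPreemption, PendingVictims, CommitPreemption, RecordVictims) is hypothetical and not taken from the prototype:

```go
package batchpreemption

import (
	"context"
	"sync"

	v1 "k8s.io/api/core/v1"
)

// BatchPreemption (BP) is a hypothetical interface other plugins could call.
// Victims found during scheduling attempts are only recorded; nothing is
// evicted until CommitPreemption is called.
type BatchPreemption interface {
	// PendingVictims returns the pods currently recorded as ready for
	// preemption on behalf of the given pod group.
	PendingVictims(podGroup string) []*v1.Pod

	// CommitPreemption evicts every recorded victim. A caller would invoke it
	// once the current pod has been scheduled successfully or has itself
	// preempted successfully.
	CommitPreemption(ctx context.Context, podGroup string) error
}

// batchPreemption keeps the recorded victims per pod group.
type batchPreemption struct {
	mu       sync.Mutex
	recorded map[string][]*v1.Pod
}

// RecordVictims is how a scheduling attempt would register victims it found
// on a node without evicting them yet.
func (bp *batchPreemption) RecordVictims(podGroup string, victims []*v1.Pod) {
	bp.mu.Lock()
	defer bp.mu.Unlock()
	if bp.recorded == nil {
		bp.recorded = map[string][]*v1.Pod{}
	}
	bp.recorded[podGroup] = append(bp.recorded[podGroup], victims...)
}
```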
Of course, the prototype already takes into account the situations where a Pod in the record is preempted again by other Pods, or where a Pod in the record is rescheduled. This is a bit complicated; if necessary, I will add it later.
The point is that I believe the user interaction in the prototype is unreasonable.
In the prototype, BP's behavior is hard-coded. I think we may need a CRD for BP to decide when to do preemption. Do you have any ideas?
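As a strawman for such a CRD (purely illustrative; the kind, field names, and trigger values below are assumptions, not part of the prototype), the API type could look roughly like this:

```go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// BatchPreemptionPolicy is a hypothetical CRD that would let users, rather
// than hard-coded plugin logic, decide when BP commits its recorded victims.
type BatchPreemptionPolicy struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec BatchPreemptionPolicySpec `json:"spec"`
}

type BatchPreemptionPolicySpec struct {
	// Trigger chooses when recorded victims are actually evicted, e.g.
	// "OnGroupSchedulable" (only once the whole pod group can fit) or
	// "Immediate" (as soon as victims are found).
	Trigger string `json:"trigger"`

	// MaxVictims caps how many pods one commit may evict; 0 means no limit.
	MaxVictims int32 `json:"maxVictims,omitempty"`
}
```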