kubelet parameter (eviction-max-pod-grace-period) does not work as expected per its official description #118172
Comments
/sig node
/triage accepted The question is whether we fix the code and make a backward-incompatible change, or fix the doc. I'd vote for a doc change, which is less work, but it blocks an interesting scenario. @rainbowBPF2 were you looking for the described behavior? Does your scenario require this? /priority important-longterm
Thanks for your attention on this point.
In short, it's not required, but it is very much hoped for. 😄
/assign
This issue has not been updated in over 1 year and should be re-triaged. For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/ /remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
/triage accepted
/remove-lifecycle rotten
/help At this stage I doubt we will fix the inconsistency by changing code; the field documentation needs to be updated instead. If you want to take this issue:
@SergeyKanzhelev: Guidelines: Please ensure that the issue body includes answers to the following questions. For more details on the requirements of such an issue, please see here and ensure that they are met. If this request no longer meets these requirements, the label can be removed. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/assign
/assign
/assign
Hi! I'd like to help with this documentation issue. I plan to update the documentation to accurately reflect the actual behavior. Is this issue still available to work on? Happy to collaborate with other assignees if needed.
Hi @FabianIMV! I am working on the docs too.
Thanks @akshayamadhuri! I'm new to Kubernetes contributions. Would you like to share how we can work together on this? Happy to learn and help with any part you think would be useful.
/assign
The documentation for the eviction-max-pod-grace-period parameter claimed that negative values would defer to the pod's specified grace period. However, the current implementation does not honor this behavior. This commit updates the documentation to accurately reflect the actual behavior. Fixes kubernetes#118172
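For reference, here is a minimal sketch of the current behavior as I understand it, loosely paraphrased from the kubelet eviction manager (pkg/kubelet/eviction); the identifiers below are approximations for illustration, not a verbatim excerpt:

```go
package main

import "fmt"

// config mirrors the relevant kubelet setting (name approximate).
type config struct {
	MaxPodGracePeriodSeconds int64 // --eviction-max-pod-grace-period
}

// gracePeriodForEviction shows how the override is chosen: for a soft
// eviction the configured maximum is used as-is, with no special-casing
// of negative values.
func gracePeriodForEviction(cfg config, isHardThreshold bool) int64 {
	gracePeriodOverride := int64(0) // hard eviction: kill immediately
	if !isHardThreshold {
		// Soft eviction: a negative MaxPodGracePeriodSeconds (e.g. -1)
		// is not replaced by the pod's TerminationGracePeriodSeconds;
		// it is handed down toward the CRI runtime unchanged.
		gracePeriodOverride = cfg.MaxPodGracePeriodSeconds
	}
	return gracePeriodOverride
}

func main() {
	fmt.Println(gracePeriodForEviction(config{MaxPodGracePeriodSeconds: -1}, false)) // prints -1
}
```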
What happened?
Hi folks, I'd like to report a confusing point about kubelet soft eviction.
As kubelet --help shows:
--eviction-max-pod-grace-period int32 Maximum allowed grace period (in seconds) to use when terminating pods in response to a soft eviction threshold being met. If negative, defer to pod specified value.
I assumed that if this parameter is set to a negative value, such as -1, soft eviction would use the pod's specified value (TerminationGracePeriodSeconds).
But when I tried to evict a pod by creating node pressure (such as memory pressure), I found that -1 is always sent to the CRI runtime.
The pod's container is then stopped immediately with SIGKILL, exit code 137.
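To illustrate why -1 reaches the runtime, here is a minimal sketch of the override semantics in the kubelet's container-kill path (loosely modeled on pkg/kubelet/kuberuntime; the function name here is hypothetical): when an override is supplied, it unconditionally replaces the pod-derived grace period.

```go
package main

import "fmt"

// effectiveGracePeriod (hypothetical name) illustrates the override
// semantics: an override, when present, replaces the pod-derived value,
// so a negative override survives all the way to the runtime.
func effectiveGracePeriod(podGracePeriodSeconds int64, override *int64) int64 {
	gracePeriod := podGracePeriodSeconds // from TerminationGracePeriodSeconds
	if override != nil {
		// Eviction supplies this override; there is no check for a
		// negative value here, so -1 becomes the stop timeout.
		gracePeriod = *override
	}
	return gracePeriod
}

func main() {
	override := int64(-1) // --eviction-max-pod-grace-period=-1
	fmt.Println(effectiveGracePeriod(30, &override)) // prints -1, not 30
}
```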
In short, this parameter:
(1) set to a positive number: works as expected;
(2) not set, so kubelet uses its default value of 0: works as expected;
(3) set to a negative number: does not work as the kubelet help text describes.
Thanks for your response. Could this be a bug?
What did you expect to happen?
With --eviction-max-pod-grace-period set to a negative value, the kubelet's syncPod logic should send the pod's TerminationGracePeriodSeconds as the gracePeriod to the CRI runtime when stopping the container.
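A minimal sketch of the behavior I expected, i.e. what the help text implies (a hypothetical illustration, not the actual kubelet code):

```go
package main

import "fmt"

// expectedGracePeriod (hypothetical) is what the help text implies: a
// negative maximum defers to the pod's own TerminationGracePeriodSeconds.
func expectedGracePeriod(maxPodGracePeriodSeconds int64, podGracePeriodSeconds *int64) int64 {
	if maxPodGracePeriodSeconds < 0 && podGracePeriodSeconds != nil {
		return *podGracePeriodSeconds // defer to the pod's value
	}
	return maxPodGracePeriodSeconds
}

func main() {
	podGrace := int64(30)
	fmt.Println(expectedGracePeriod(-1, &podGrace)) // prints 30, not -1
}
```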
How can we reproduce it (as minimally and precisely as possible)?
(1) set this parameter to a negative number, such as -1;
(2) configure kubelet soft eviction;
(3) run a pod that requests a lot of memory, to trigger node memory pressure;
(4) check the CRI runtime log (for example, containerd's);
(5) you will find the timeout parameter -1 sent for the pod's container, and the container exits with code 137 (SIGKILL).
Anything else we need to know?
I ran these tests on Tencent Cloud TKE.
That said, this does not appear to be related to the cloud service provider.
Kubernetes version
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.3-tke.34", GitCommit:"61cc96d2f7e9277e89e29b4b04d045f27d6e75df", GitTreeState:"clean", BuildDate:"2023-03-08T07:40:04Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.3-tke.32", GitCommit:"8ca5807ae095365d68754a2ba20b10bbe5a8998c", GitTreeState:"clean", BuildDate:"2022-10-08T04:00:33Z", GoVersion:"go1.12.14", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
containerd --version
containerd github.com/containerd/containerd v1.4.3-tke.2 a11500cbfa0b4d2fc9b905e03c35f349ef5b1a9f
Related plugins (CNI, CSI, ...) and versions (if applicable)