kubernetes: Received event spamming? #449
@errm: did you get something similar on your platform?
I got the same problem here. It appears that the kubernetes provider sends a new configuration for every event, even though the event doesn't actually change the configuration. Since these events arrive very frequently (several times per second), the result is that a new configuration is never applied, because the provider waits for a two-second pause in configuration updates:
Also encountering this issue myself. Maybe instead of waiting for a two-second pause in events from kubernetes, traefik could track the last time it updated its own config and use that instead?
Or just ignore events with the annotation control-plane.alpha.kubernetes.io/leader.
@jonaz @mikespokefire which version of kubernetes are you currently running?
I am not seeing these, I think they are related to the …

Currently we don't check what is in a watched event, rather just using it as a trigger to rebuild the whole config. I guess the best thing would be to filter these events to just look for changes to an IP or port...
@errm We're running the latest stable, which is v1.2.4. And yes, we have …
We are using v1.2.3. Looking further into the problem, it seems to be the …
I get this spam with leader election enabled as well.
Unless you are running an HA master on multiple nodes, you should be able to safely remove …

I have a few ideas for fixing this...
See my comment #448 (comment)
I get this every second. I don't think traefik should listen to this event, or even subscribe to it, since it's the internal leader election when running multiple masters.
This might be a kubernetes issue
When I tried manually watching http://localhost:8001/api/v1/endpoints?watch=true I got:
The only thing that changed is renewTime.
So I guess traefik could filter out those messages.
I found an issue upstream: kubernetes/kubernetes#23812