destination: Frequent redundant discovery updates #8677
Comments
FWIW, I'm not seeing this behavior on my local k3d cluster with emojivoto...
Hi @olix0r, there is very little information in
@johnswarbrick Sorry, I may have given you a slightly incorrect command. These logs are from the proxy, but I'm curious to see the logs from the destination controller. I think this should work:
Tower [v0.4.13] includes a fix for a bug in the `tower::ready_cache` module, tower-rs/tower#415. The `ready_cache` module is used internally in Tower's load balancer. This bug resulted in panics in the proxy (linkerd/linkerd2#8666, linkerd/linkerd2#6086) in cases where the Destination service sends a very large number of service discovery updates (see linkerd/linkerd2#8677).

This commit updates the proxy's dependency on `tower` to 0.4.13, to ensure that this bugfix is picked up.

Fixes linkerd/linkerd2#8666
Fixes linkerd/linkerd2#6086

[v0.4.13]: https://github.com/tower-rs/tower/releases/tag/tower-0.4.13
The proxy can receive redundant discovery updates. When this occurs, it causes the balancer to churn, replacing an endpoint stack (and therefore needlessly dropping a connection).

This change updates the discovery module to keep clones of the discovered endpoint metadata, so that updated values can be compared to eliminate duplicate updates.

Relates to linkerd/linkerd2#8677

Signed-off-by: Oliver Gould <[email protected]>
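For illustration only, here is a minimal Rust sketch of the comparison idea described in that commit. It is not the actual linkerd2-proxy code: `Metadata`, `DedupDiscovery`, and `should_forward` are hypothetical names, and the real endpoint metadata carries considerably more state. The point is simply that keeping a clone of the last-seen metadata per address lets an identical update be dropped before it reaches the balancer.

```rust
use std::collections::HashMap;
use std::net::SocketAddr;

/// Hypothetical, simplified endpoint metadata. The real proxy type carries
/// many more fields; only equality matters for this sketch.
#[derive(Clone, PartialEq, Eq, Debug)]
struct Metadata {
    zone: Option<String>,
    weight: u32,
}

/// Deduplicating view over discovery updates: a clone of the last-seen
/// metadata is kept per address so that a redundant update with identical
/// metadata is not forwarded to the balancer (which would otherwise rebuild
/// the endpoint stack and drop its connections).
#[derive(Default)]
struct DedupDiscovery {
    seen: HashMap<SocketAddr, Metadata>,
}

impl DedupDiscovery {
    /// Returns `true` if the update changes anything and should be forwarded.
    fn should_forward(&mut self, addr: SocketAddr, meta: Metadata) -> bool {
        if self.seen.get(&addr) == Some(&meta) {
            return false; // identical to the stored clone: redundant, drop it
        }
        self.seen.insert(addr, meta);
        true
    }
}

fn main() {
    let mut disco = DedupDiscovery::default();
    let addr: SocketAddr = "10.0.0.1:8080".parse().unwrap();
    let meta = Metadata { zone: Some("us-east-1a".into()), weight: 10_000 };

    assert!(disco.should_forward(addr, meta.clone())); // first update: forward
    assert!(!disco.should_forward(addr, meta)); // identical update: suppressed
}
```

With a check like this in place, only updates that actually change an endpoint's metadata force the balancer to replace that endpoint stack, so unchanged endpoints keep their existing connections.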
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.
#8666 includes logs that show the destination controller serving redundant
discovery updates roughly every 10 seconds (though sometimes up to 60s apart):
While these updates should not cause the logic bug described in the issue, they
do seem likely to cause unnecessary work: the balancer endpoint will be
replaced each time an update is processed, causing new connections to be
created. In large clusters, this is probably taxing on the destination
controller as it is forced to perform unnecessary I/O with all of its clients.
Why are these redundant updates being sent? Are they preventable?