KUBERNETES_SERVICE_HOST env variable not being used to communicate with API-server #9339
Comments
Thanks for (re-)opening this issue! I've opened kube-rs/kube#1000 to discuss restoring support for the environment variable and Azure/AKS#3183 to discuss how this feature is at odds with the documented client contract.

And kubernetes/kubernetes#112263 discusses reconciling client-go's behavior with the documentation.

This is fixed by kube-rs/kube#1001. Once kube-rs v0.75 is released, we'll update dependencies. This is unlikely to be included in Linkerd stable-2.12.1, but it should be available for stable-2.12.2 (probably near the end of the month).

Thank you @olix0r 🙏 💯 ❤️
* Update kubert to v0.10
* Update kube-rs to v0.75 (fixes #9339)
* Update k8s-openapi to v0.16
* Update k8s-gateway-api to v0.7

Signed-off-by: Oliver Gould <[email protected]>
What is the issue?
We are testing linkerd2 stable-2.12.0 and we see that the policy-controller running in the Destination pod is not able to connect to the API-server, ending up in a crash/restart loop.
With our current installation of linkerd2 stable-2.11.3 all is good: the policy-controller is able to access the API-server.
What we see is that the policy-controller is not using the KUBERNETES_SERVICE_HOST env variable to connect to the API-server; it is using kubernetes.default.svc as the URL to the API-server. Our requirement may be specific to Azure, as we secure our AKS egress with a layer 7 firewall (info here), and Azure AKS now also has support for KUBERNETES_SERVICE_HOST (release notes here). We created an issue in kubert and @olix0r pointed us to the change in kube-rs that impacts us.

We use a Cattle vs. Pets model for our AKS clusters: at a moment's notice we can destroy and provision a brand new AKS cluster. We set the FQDN of the API-server in KUBERNETES_SERVICE_HOST so that workloads can reach the API-server. This way we don't have to keep updating the firewall with the IP of the API-server; the IP of the API-server, which is managed by MS/Azure, is not static and may change at any time.
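For illustration, here is a minimal sketch of the resolution order we expected an in-cluster client to follow. This is not the actual policy-controller or kube-rs code; the helper name and the exact fallback behavior are assumptions on our side.

```rust
use std::env;

// Illustrative only: sketch of the expected resolution order, not kube-rs code.
// Prefer the kubelet-injected environment variables, fall back to the
// in-cluster DNS name.
fn apiserver_url() -> String {
    match (
        env::var("KUBERNETES_SERVICE_HOST"),
        env::var("KUBERNETES_SERVICE_PORT"),
    ) {
        // The env vars may carry an FQDN (as on our AKS clusters) rather than
        // the cluster IP, which is what our layer 7 firewall allows through.
        (Ok(host), Ok(port)) if !host.is_empty() => format!("https://{host}:{port}"),
        // Hard-coded DNS fallback, i.e. what we observe stable-2.12.0 using.
        _ => "https://kubernetes.default.svc".to_string(),
    }
}

fn main() {
    println!("would connect to: {}", apiserver_url());
}
```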
How can it be reproduced?
Control the egress of the cluster via a Layer 7 firewall. Set the FQDN of the API-Server in the KUBERNETES_SERVICE_HOST env variable for a pod that needs to access the API-Server; a minimal in-pod check is sketched below.
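A hypothetical std-only diagnostic one could run inside the affected pod (not part of Linkerd; the connectivity check and the port default are assumptions) to show which endpoint is actually reachable through the firewall:

```rust
use std::env;
use std::net::{TcpStream, ToSocketAddrs};
use std::time::Duration;

// Hypothetical diagnostic: print the kubelet-injected variables and try a raw
// TCP connection to each candidate endpoint, since the layer 7 firewall only
// permits the FQDN path.
fn try_connect(addr: &str) {
    match addr.to_socket_addrs().ok().and_then(|mut a| a.next()) {
        Some(sock) => match TcpStream::connect_timeout(&sock, Duration::from_secs(3)) {
            Ok(_) => println!("{addr}: reachable"),
            Err(e) => println!("{addr}: connect failed ({e})"),
        },
        None => println!("{addr}: DNS resolution failed"),
    }
}

fn main() {
    let host = env::var("KUBERNETES_SERVICE_HOST").unwrap_or_default();
    let port = env::var("KUBERNETES_SERVICE_PORT").unwrap_or_else(|_| "443".into());
    println!("KUBERNETES_SERVICE_HOST={host}");

    // The endpoint we expect clients to use (allowed through the firewall) ...
    try_connect(&format!("{host}:{port}"));
    // ... versus the hard-coded in-cluster DNS name.
    try_connect(&format!("kubernetes.default.svc:{port}"));
}
```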
Logs, error output, etc
Output of linkerd check -o short:
Environment
Possible solution
No response
Additional context
No response
Would you like to work on fixing this bug?
No response