Is this a request for help?: No
Is this an ISSUE or FEATURE REQUEST? (choose one): Issue
Which release version?: master + cherry-pick of #212
Which component (CNI/IPAM/CNM/CNS): CNI
Which Operating System (Linux/Windows): Windows Server version 1803
Which Orchestrator and version (e.g. Kubernetes, Docker): Kubernetes
What happened:
After scaling up a replica set, some containers failed to start, and the IPs of the failed containers were not freed. Here's an example of the end state after scaling back down: only 1 pod IP should be in use on the node, but 3 are marked as in use in the IPAM file (a sketch for checking the store directly follows the repro steps below).
What you expected to happen:
No leaks
How to reproduce it (as minimally and precisely as possible):
# cordon all Windows nodes except 1
kubectl apply -f https://raw.githubusercontent.com/PatrickLang/Windows-K8s-Samples/master/HyperVExamples/whoami-1803.yaml
kubectl scale deploy whoami-1803 --replicas=6
# wait some time; not all 6 will start successfully
kubectl scale deploy whoami-1803 --replicas=1
Anything else we need to know:
Found this while testing the fix for #195
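For reference, a quick way to confirm the leaked reservations is to count how many addresses the IPAM store still marks as in use after scaling back down. A minimal sketch, assuming the default acs-engine layout where the store lives at c:\k\azure-vnet-ipam.json and each reserved address carries an "InUse": true record (both the path and the key name are assumptions; adjust for your deployment):

```powershell
# Count addresses the Azure CNI IPAM store still believes are reserved.
# Store path and the "InUse" key name are assumptions -- adjust for your node.
$raw = Get-Content 'c:\k\azure-vnet-ipam.json' -Raw
$inUse = [regex]::Matches($raw, '"InUse"\s*:\s*true').Count
"Addresses marked in use: $inUse"
```

With a single pod left on the node, any count above 1 points at leaked reservations.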
Just a note: we're still trying to determine if this is an azure-cni issue, which would affect both Windows and Linux, or if it's actually a Windows-only kubelet issue.
After debugging yesterday, we found this is more of a Windows Kubernetes issue: there is no DEL call to the CNI plugin when the user removes a pod. Please open an issue with the Windows Kubernetes GitHub repo.
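One way to check the missing-DEL diagnosis on a node is to look for DEL entries in the plugin's operation log around the time a pod was removed. A minimal sketch, assuming the default acs-engine layout where Azure CNI logs to c:\k\azure-vnet.log (the path and the exact log wording are assumptions):

```powershell
# Show the most recent CNI DEL invocations the Azure CNI plugin logged.
# If pods were deleted but no new DEL lines appear, the runtime never
# asked the plugin to release the address -- matching the diagnosis above.
Select-String -Path 'c:\k\azure-vnet.log' -Pattern 'DEL' |
    Select-Object -Last 20 |
    ForEach-Object { $_.Line }
```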