Traefik ingress controller doesn't listen on ports 80, 443, and 8080 on the host, but on a random NodePort #1414
Comments
traefik LoadBalancer 10.43.20.185 172.16.24.138 80:30579/TCP,443:30051/TCP,8080:32535/TCP 96m

The traefik service is listening on ports 80, 443 and 8080. You should be able to access 172.16.24.138:80, 172.16.24.138:443 and 172.16.24.138:8080.
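For a quick check from outside the node (a sketch using the addresses from the svc output above; adjust for your environment):

# HEAD requests against the LoadBalancer IP on the service ports
curl -I http://172.16.24.138:80
curl -kI https://172.16.24.138:443   # -k because the default cert is self-signed
curl -I http://172.16.24.138:8080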
@dabio no, I tried... only 30579 works, for HTTP...
Are your iptables rules broken or something? What OS is this on?
Ubuntu 18.04, on a brand new ECS instance in Alibaba Cloud (similar to AWS EC2)
Is this issue reproducible? What about on another platform? This should be working.
@davidnuzik can you tell me why this should be working?
Based on our suite of tests against Ubuntu 18.04 and CentOS 7, this should work. I would review firewall rules, etc. You mentioned AWS EC2-like instances -- has the security group been set up correctly?
I installed k3s again with the no-traefik option, and installed the nginx-ingress Helm chart with NodePorts 30080 and 30443.
I found that k3s-server is listening on port 30080 but not on 80.
But the ingress can still be reached on port 80, so how does k3s achieve this? Via iptables?
Confirming this too: the external interface is listening on the random port number, not the service port number.
On the master:
On one of the workers:
Yes, this is how Kubernetes (specifically kube-proxy) works. The container listens on a random node port, and the control plane uses iptables rules to masquerade traffic from the loadbalancer address and port to the appropriate node port.
If this isn't working for you, then you've probably got something wrong with your iptables configuration - such as running on a distro that uses iptables-nft without installing the iptables-legacy tools.
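To check which iptables variant is in play and whether the expected rules exist, something like this can help (a sketch; KUBE-SERVICES and KUBE-NODEPORTS are the standard kube-proxy chain names, actual output will vary):

# The version string shows '(legacy)' or '(nf_tables)'
iptables --version
# Dump the NAT table and look for the kube-proxy service chains
iptables-save -t nat | grep -E 'KUBE-SERVICES|KUBE-NODEPORTS' | head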
In our case we're on Ubuntu 20.04 using the 'legacy' iptables, and the various KUBE-* chains and rules are in place. Despite all that, it's unclear why exposed services cannot be reached from the public IP addresses of the workers. I suspect the original reporter has the same problem we're seeing and came to the same conclusion we did when the expected behaviour wasn't observed. Like the original reporter, we can only connect to the exposed services from outside the cluster via the random port numbers, not the well-known service ports.
I think I've figured out our issue. Our master(s) are deployed on the office LAN. Workers are in remote data centres. Because the cluster needs to be on its own subnet to avoid PNAT/routing issues, we've created a WireGuard VPN that the cluster uses on 10.127.0.0/16, with the master on 10.127.0.1. On the master we can attach to traefik using HTTP (tested using telnet), but from the workers that fails (strange, since the workers can reach the master via the 10.127.0.0/16 subnet).
However, it is clear our issue has to do with our 'IoT' edge network requirements rather than a problem with traefik.
I'm a new learner and I don't quite understand. I have previously set nginx listening on 80 and 443. Now I'm trying to install k3s. Does
If you run traefik as root, it can bind 80 and 443.
Try setting up hostPort for the web and websecure ports. That will create DNAT rules in iptables.
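For illustration, patching hostPort onto the traefik deployment might look roughly like this (a sketch only; the container index and the positions of the web/websecure entries in the ports array are assumptions about the default chart layout):

kubectl -n kube-system patch deployment traefik --type=json -p '[
  {"op": "add", "path": "/spec/template/spec/containers/0/ports/0/hostPort", "value": 80},
  {"op": "add", "path": "/spec/template/spec/containers/0/ports/1/hostPort", "value": 443}
]'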
@fox-md you can't simply set up hostPort... I'm using a Service of type NodePort; hostPort needs to be set on the pods, not on the Service... and it's not working in either case (I've tried). Thank you.
Hi @magixus,
To be honest I didn't understand your reply, but I can tell you that I've tried setting hostPort as well, along with my configurations, and it didn't work: no DNAT rule was created, unfortunately. So I ended up scripting the rules myself:

#!/bin/bash
#sleep 2m

# Pod IP of the traefik pod
TRAEFIK_IP=$(kubectl get pods -n kube-system -o wide | grep traefik | awk '{print $6}')

# Count the existing PREROUTING rules (header lines stripped)
PREROUTING_IP=$(iptables -t nat -vnL PREROUTING --line-numbers | sed '/^num\|^$\|^Chain/d' | wc -l)
if [ "$PREROUTING_IP" -eq 4 ]; then
    # Update the existing DNAT PREROUTING rules in place
    iptables -t nat -R PREROUTING 3 -i ens160 -p tcp --dport 80 -j DNAT --to "$TRAEFIK_IP:80"
    iptables -t nat -R PREROUTING 4 -i ens160 -p tcp --dport 443 -j DNAT --to "$TRAEFIK_IP:443"
elif [ "$PREROUTING_IP" -eq 2 ]; then
    # Create the DNAT PREROUTING rules if they don't exist
    iptables -t nat -A PREROUTING -i ens160 -p tcp --dport 80 -j DNAT --to "$TRAEFIK_IP:80"
    iptables -t nat -A PREROUTING -i ens160 -p tcp --dport 443 -j DNAT --to "$TRAEFIK_IP:443"
fi

# Count the existing FORWARD rules (header lines stripped)
FORWARD_IP=$(iptables -vnL FORWARD --line-numbers | sed '/^num\|^$\|^Chain/d' | wc -l)
if [ "$FORWARD_IP" -eq 12 ]; then
    # Update the existing FORWARD accept rules in place
    iptables -R FORWARD 11 -p tcp -d "$TRAEFIK_IP" --dport 80 -j ACCEPT
    iptables -R FORWARD 12 -p tcp -d "$TRAEFIK_IP" --dport 443 -j ACCEPT
elif [ "$FORWARD_IP" -eq 10 ]; then
    # Create the FORWARD accept rules if they don't exist
    iptables -A FORWARD -p tcp -d "$TRAEFIK_IP" --dport 80 -j ACCEPT
    iptables -A FORWARD -p tcp -d "$TRAEFIK_IP" --dport 443 -j ACCEPT
fi

What the script does is basically check the existing PREROUTING and FORWARD rules and update or create them accordingly.
This workaround works for me as well (clean install of Debian 10; also tested on a clean install of Fedora Server 33):

root@jakob-lenovog710:~# kubectl patch svc traefik -n kube-system -p '{"spec":{"externalIPs":["192.168.178.54"]}}'
service/traefik patched
root@jakob-lenovog710:~#
root@jakob-lenovog710:~# ss -tlnp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 0.0.0.0:32004 0.0.0.0:* users:(("k3s-server",pid=572,fd=247))
LISTEN 0 128 127.0.0.1:10248 0.0.0.0:* users:(("k3s-server",pid=572,fd=237))
LISTEN 0 128 127.0.0.1:10249 0.0.0.0:* users:(("k3s-server",pid=572,fd=197))
LISTEN 0 128 127.0.0.1:10251 0.0.0.0:* users:(("k3s-server",pid=572,fd=196))
LISTEN 0 128 127.0.0.1:10252 0.0.0.0:* users:(("k3s-server",pid=572,fd=203))
LISTEN 0 128 127.0.0.1:6444 0.0.0.0:* users:(("k3s-server",pid=572,fd=16))
LISTEN 0 128 0.0.0.0:30575 0.0.0.0:* users:(("k3s-server",pid=572,fd=250))
LISTEN 0 128 192.168.178.54:80 0.0.0.0:* users:(("k3s-server",pid=572,fd=208))
LISTEN 0 128 127.0.0.1:10256 0.0.0.0:* users:(("k3s-server",pid=572,fd=199))
LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=617,fd=3))
LISTEN 0 128 127.0.0.1:10010 0.0.0.0:* users:(("containerd",pid=632,fd=16))
LISTEN 0 128 192.168.178.54:443 0.0.0.0:* users:(("k3s-server",pid=572,fd=316))
LISTEN 0 128 *:10250 *:* users:(("k3s-server",pid=572,fd=239))
LISTEN 0 128 *:6443 *:* users:(("k3s-server",pid=572,fd=7))
LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=617,fd=4))
root@jakob-lenovog710:~# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 9m24s
metrics-server ClusterIP 10.43.93.188 <none> 443/TCP 9m23s
traefik-prometheus ClusterIP 10.43.208.88 <none> 9100/TCP 8m13s
traefik LoadBalancer 10.43.57.101 192.168.178.54,192.168.178.54 80:32004/TCP,443:30575/TCP 8m13s
You shouldn't need to do that - creating the LB pods and then setting the externalIP field based on the addresses of the nodes running the pods is exactly what the servicelb controller is supposed to do. See how you've got the same IP listed twice now? Have you checked the svclb pods to ensure that they're running properly?
Exactly; the pods all run properly, but once the IP has been added twice, it works. The svclb pods run fine, both before and after adding the IP a second time. It just doesn't seem to bind to the ports.
Yes, that is how it works. It doesn't bind to the node port directly. It binds to a random port in the pod, and the ServiceLB pod programs iptables rules to forward the traffic from the host port to the pod port. It is not expected that you will see it actually listening on the host ports.
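One way to see that forwarding in place (a sketch; this assumes the default flannel CNI with the portmap plugin, whose chain names contain HOSTPORT -- other CNIs use different chains):

# Host-port DNAT rules programmed for the svclb pods
iptables-save -t nat | grep -i hostport | head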
@brandond Thanks for the insight! I naively assumed that they would just bind, so I evaluated with the following:

# Test VM@DO
root@debian-s-1vcpu-1gb-ams3-01:~# curl -sfL https://get.k3s.io | sh -
root@debian-s-1vcpu-1gb-ams3-01:~# k3s kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system helm-install-traefik-9d7bl 0/1 ContainerCreating 0 20s
kube-system local-path-provisioner-5ff76fc89d-slrbk 0/1 ContainerCreating 0 19s
kube-system metrics-server-86cbb8457f-4vx7s 1/1 Running 0 19s
kube-system coredns-854c77959c-92p5b 0/1 Running 0 19s
# Local Linux system
[pojntfx@felixs-xps13 ~]$ curl 188.166.84.48
curl: (7) Failed to connect to 188.166.84.48 port 80: Connection refused
# Test VM@DO
root@debian-s-1vcpu-1gb-ams3-01:~# k3s kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system metrics-server-86cbb8457f-4vx7s 1/1 Running 0 40s
kube-system local-path-provisioner-5ff76fc89d-slrbk 1/1 Running 0 40s
kube-system coredns-854c77959c-92p5b 1/1 Running 0 40s
kube-system helm-install-traefik-9d7bl 0/1 Completed 0 41s
kube-system traefik-6f9cbd9bd4-dmtss 0/1 Running 0 17s
kube-system svclb-traefik-9n4rv 2/2 Running 0 17s
# Local Linux system
[pojntfx@felixs-xps13 ~]$ curl 188.166.84.48
404 page not found
This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.
I think I was able to figure it out, at least if Calico is involved. Basically, Calico disables IP forwarding, preventing svclb from functioning the way @brandond mentions. My solution:
You will not see the port listening in ss or netstat, but it will be reachable.
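In case it helps others, a minimal sketch of re-enabling forwarding at the host level (assuming the forwarding sysctls are what got turned off; persist the settings under /etc/sysctl.d/ to survive reboots):

# Let the kernel forward packets between interfaces again
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.all.forwarding=1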
The trick with
It seems this issue still exists. After performing an update on Ubuntu 22.04 today, networking was broken for me. I had an FTP server using a NodePort and several services behind the built-in traefik ingress. None of it worked anymore, showing the symptoms outlined in this issue. The solutions outlined in this thread did not resolve the problem. I needed to uninstall k3s via the script, which returned the iptables rules back to defaults, then reinstall k3s. Now it works again. Not the biggest of deals, since I had everything encapsulated as Helm charts and only run k3s on a workstation. It makes me hesitant to use k3s for production workloads, though. Would it be possible to introduce a self-healing mechanism that examines and repairs network-related configuration like iptables upon server start? Or maybe a script / command-line switch that does this on demand, without having to lose all deployed workloads?
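As a diagnostic stopgap, snapshotting the NAT table before an upgrade at least makes it possible to diff what changed afterwards (a sketch; the path and file name are arbitrary):

# Save the current rules for comparison with a post-upgrade dump
iptables-save > /root/iptables-pre-upgrade.rules
# After the upgrade, compare:
# iptables-save | diff /root/iptables-pre-upgrade.rules -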
I have the same issue... I tried uninstalling using the provided script and reinstalling, but it doesn't seem to work. Traefik still listens on a random port.
Same issue, I guess:

kube-system traefik LoadBalancer 10.43.15.188 10.201.60.25 80:30307/TCP,443:31074/TCP 29d

The IP of the server is 10.201.60.25. The traefik service is set like this:
There are no IngressRoutes set up except the default one:

[root@euc1-awx1 ingress]# kubectl get IngressRoute --all-namespaces
[root@euc1-awx1 ingress]# curl https://10.201.60.25:31074
curl https://10.201.60.25:3307
curl http://10.201.60.25:80 or 443 https
I am going to lock this, as there seems to be ongoing confusion in this thread about how ports work in Kubernetes.
If anyone has questions about configuring traefik ingress resources, please check the Traefik Community Forums, or open a new discussion thread.
Version:
k3s version v1.0.0 (18bd921)
Describe the bug
Traefik ingress controller doesn't listen on ports 80, 443, and 8080 on the host, but on a random NodePort
To Reproduce
Install v1.0.0.
Expected behavior
The Traefik ingress controller will use ports 80, 443, and 8080 on the host
Actual behavior
Traefik ingress controller listens on a NodePort (like 30579):
kubectl get svc --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 97m
metrics-server ClusterIP 10.43.162.222 <none> 443/TCP 97m
traefik LoadBalancer 10.43.20.185 172.16.24.138 80:30579/TCP,443:30051/TCP,8080:32535/TCP 96m