
Docker 18.03.1-ce routing mesh not working #2227

Closed
deminngi opened this issue Jul 9, 2018 · 30 comments
deminngi commented Jul 9, 2018

TLDR

@fcrisciani I have read and understood how the routing mesh works, using the ingress network and the ingress_sbox on docker_gwbridge, with a custom overlay network for the services.

I have read a lot and checked everything (see docker-support.log), but I cannot reach the published external port. To rule out side effects, firewalld is not running on the swarm cluster.

Issue type

An nginx service with replica count 1 cannot be accessed via the published port 80; I suppose
the routing mesh is not working.

Expect

Can access the published port on every node with w3m http://localhost

Got

Cannot load http://localhost

OS Version/build

Kernel: 4.4.x
OS: Ubuntu 18.04 LTS
Arch: arm64
Docker: 18.03.1-ce

API version

Client:
Version: 18.03.1-ce
API version: 1.37
Go version: go1.9.5
Git commit: 9ee9f40
Built: Thu Apr 26 07:16:22 2018
OS/Arch: linux/arm64
Experimental: false
Orchestrator: swarm

Server:
Engine:
Version: 18.03.1-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.5
Git commit: 9ee9f40
Built: Thu Apr 26 07:14:27 2018
OS/Arch: linux/arm64
Experimental: true

Steps to reproduce

- Create named volume
   docker volume create --opt type=none --opt device=/gluster/run/test --opt o=bind test

- Create overlay network
   docker network create --driver=overlay --subnet 10.32.0.0/16 --gateway 10.32.0.1 --attachable weave

- Create docker service
   docker service create --name=nginx --network weave --mount src=test,dst=/config -e PGID=1000 -e PUID=1000 -p 80:80 -p 443:443 -e TZ=Europe/Berlin lsioarmhf/nginx-aarch64

Diagnostic log
See https://gist.github.com/giminni/1ab53616d6529baeace0f2e6d2eac65a
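
Before testing the mesh, the service and its published ports can be sanity-checked with the standard Docker CLI (these checks are illustrative and not part of the original diagnostic log):

   # Replicas should show 1/1 and the ports column should list 80 and 443
   docker service ls
   # The published ports should appear here in ingress mode
   docker service inspect nginx --format '{{json .Endpoint.Ports}}'
   # The service task and the ingress-sbox should both be attached here
   docker network inspect ingress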

fcrisciani commented Jul 9, 2018

@giminni can you try curl -v -4 http://localhost? The configuration looks good.

@deminngi (Author)

@fcrisciani here is the result:

$ curl -v -4 http://localhost

* Rebuilt URL to: http://localhost/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 80 failed: Connection refused
*   Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 80 failed: Connection refused
* Failed to connect to localhost port 80: Connection refused
* Closing connection 0

@fcrisciani

@giminni does it work if you do curl inside the container?

deminngi (Author) commented Jul 10, 2018

@fcrisciani If I use the docker_gwbridge IP address of the container (172.18.0.3, not 172.18.0.2) it works. I guess something is wrong with the ingress_sbox namespace or the iptables configuration.

$ docker network inspect docker_gwbridge
[
    {
        "Name": "docker_gwbridge",
        "Id": "643d2dc921fc3c5d7a3a7b240f9ae58e49561fc483d57451734fac7546b043a6",
        "Created": "2018-07-09T00:45:18.296196724+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "e5dc1741e78a1d6ec582a1fb0c91296573006dd7728b391238c4c28c2cfec285": {
                "Name": "gateway_e5dc1741e78a",
                "EndpointID": "69a4b2a6c5c60114202e71ee191058a52c8b72cfc852b66c9a844835eb9d194b",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "ingress-sbox": {
                "Name": "gateway_ingress-sbox",
                "EndpointID": "6619f9c6292791c1dce82cc6972d5d183f2b8542ff6915ea96c50cb321cb31da",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.enable_icc": "false",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.name": "docker_gwbridge"
        },
        "Labels": {}
    }
]

$ curl -v -4 http://172.18.0.3

* Rebuilt URL to: http://172.18.0.3/
*   Trying 172.18.0.3...
* TCP_NODELAY set
* Connected to 172.18.0.3 (172.18.0.3) port 80 (#0)
> GET / HTTP/1.1
> Host: 172.18.0.3
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.12.2
< Date: Tue, 10 Jul 2018 16:20:17 GMT
< Content-Type: text/html
< Content-Length: 988
< Last-Modified: Sat, 07 Jul 2018 23:47:45 GMT
< Connection: keep-alive
< ETag: "5b415121-3dc"
< Accept-Ranges: bytes
<

<title>Welcome to our server</title>

deminngi (Author) commented Jul 10, 2018

@fcrisciani I can access the VIP address from the ingress_sbox shell:
$ sudo ip netns exec ingress_sbox sh
# iptables -n -v -L -t mangle
Chain PREROUTING (policy ACCEPT 154 packets, 17089 bytes)
pkts bytes target prot opt in out source destination
0 0 MARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 MARK set 0x100
0 0 MARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 MARK set 0x100

Chain INPUT (policy ACCEPT 154 packets, 17089 bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 174 packets, 13945 bytes)
pkts bytes target prot opt in out source destination
9 815 MARK all -- * * 0.0.0.0/0 10.255.0.12 MARK set 0x100

Chain POSTROUTING (policy ACCEPT 174 packets, 13945 bytes)
pkts bytes target prot opt in out source destination
# curl -4 http://10.255.0.12


<title>Welcome to our server</title>

@fcrisciani

From the host it matches the DOCKER-INGRESS chain:

Chain PREROUTING (policy ACCEPT 2111 packets, 142K bytes)
 pkts bytes target     prot opt in     out     source               destination
11016  661K DOCKER-INGRESS  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

and that directs it to the ingress namespace:

Chain DOCKER-INGRESS (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:443 to:172.18.0.2:443
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:172.18.0.2:80

Now as the packets pass through the ingress_sbox they are getting marked:

# nsenter --net=/var/run/docker/netns/ingress_sbox iptables -w1 -n -v -L -t mangle
Chain PREROUTING (policy ACCEPT 8 packets, 672 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MARK       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:80 MARK set 0x10d
    0     0 MARK       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:443 MARK set 0x10d

and then IPVS is load balancing them to the backend container:

# nsenter --net=/var/run/docker/netns/ingress_sbox ipvsadm -l -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
FWM  269 rr
  -> 10.255.0.13:0                Masq    1      0          0

Did something change between your new analysis and the support.sh run? From the support output the backend is 10.255.0.13, while in your last dump it is instead 10.255.0.12, and the firewall marks do not match either: 256 (0x100) instead of 269 (0x10D).
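
To make the comparison concrete, the whole path can be checked hop by hop with something like the following (a minimal sketch; the netns path and chain names are the same ones used above, run as root on the node under test):

   # 1. Host nat table: DOCKER-INGRESS should DNAT :80 to the sandbox
   iptables -t nat -vnL DOCKER-INGRESS
   # 2. Sandbox mangle table: packets to :80 should receive the firewall mark
   nsenter --net=/var/run/docker/netns/ingress_sbox iptables -t mangle -vnL PREROUTING
   # 3. IPVS: the FWM value here must equal the mark set in step 2
   nsenter --net=/var/run/docker/netns/ingress_sbox ipvsadm -l -n

If the counters grow in step 1 but not in step 2, the packet is lost between the host and the sandbox; if step 2 grows but IPVS shows no connections, the mark and FWM values likely disagree, as noted above.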

deminngi (Author) commented Jul 11, 2018

@fcrisciani
thanks for your reply; yes, indeed I am trying a lot of things to understand why I cannot reach the published port from the host.

Why are the packet counters zero in DOCKER-INGRESS and POSTROUTING?

I will upload an updated docker-support log this morning and reset the packet counters.

BTW, is there an issue with kernel 4.4 and iptables 1.6.1?
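
(For reference, zeroing and watching the counters can be done like this; iptables -Z is standard, and the chain name is taken from the dumps above:)

   iptables -t nat -Z DOCKER-INGRESS
   watch -n 2 -d 'iptables -t nat -vnL DOCKER-INGRESS'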

deminngi (Author) commented Jul 11, 2018

@fcrisciani
I uploaded the new diagnostic log and executed w3m http://localhost, got no response; trying http://172.18.0.3 (nginx-1) was successful.

I looked inside the ingress sandbox; the counters are always zero.

Here are my findings:
$ ip netns exec ingress_sbox sh

# iptables -vnL -t nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER_OUTPUT all -- * * 0.0.0.0/0 127.0.0.11
0 0 DNAT icmp -- * * 0.0.0.0/0 10.255.0.35 icmptype 8 to:127.0.0.1

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER_POSTROUTING all -- * * 0.0.0.0/0 127.0.0.11
0 0 SNAT all -- * * 0.0.0.0/0 10.255.0.0/16 ipvs to:10.255.0.2

Chain DOCKER_OUTPUT (1 references)
pkts bytes target prot opt in out source destination
0 0 DNAT tcp -- * * 0.0.0.0/0 127.0.0.11 tcp dpt:53 to:127.0.0.11:40192
0 0 DNAT udp -- * * 0.0.0.0/0 127.0.0.11 udp dpt:53 to:127.0.0.11:42917

Chain DOCKER_POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
0 0 SNAT tcp -- * * 127.0.0.11 0.0.0.0/0 tcp spt:40192 to::53
0 0 SNAT udp -- * * 127.0.0.11 0.0.0.0/0 udp spt:42917 to::53

# iptables -vnL -t mangle
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 MARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 MARK set 0x103
0 0 MARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 MARK set 0x103

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 0.0.0.0/0 10.255.0.35 MARK set 0x103

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

And the endpoint is never reached.
$ ip netns exec ingress_sbox sh
# ipvsadm -l -n --stats
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Conns InPkts OutPkts InBytes OutBytes
-> RemoteAddress:Port
FWM 259 0 0 0 0 0
-> 10.255.0.36:0 0 0 0 0 0
-> 10.255.0.37:0 0 0 0 0 0

@fcrisciani

@giminni the default gateway network is mainly for north/south communication, rather than for exposing services.
We should focus on the path that I was describing before, trying to see where the packet stops.
As for the stats, what about the host? That is where the translation starts.

deminngi (Author) commented Jul 12, 2018

I use w3m http://localhost to try to reach port 80.
See the activity flow on the Docker host, in the nat table, from chain OUTPUT to DOCKER-INGRESS to POSTROUTING:

Chain OUTPUT (policy ACCEPT 1094 packets, 85015 bytes)
pkts bytes target prot opt in out source destination
821 68143 DOCKER-INGRESS all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL

Looking at the DOCKER-INGRESS chain, there are counts on RETURN but none for port 80:

Chain DOCKER-INGRESS (2 references)
pkts bytes target prot opt in out source destination
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 to:172.18.0.2:443
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.18.0.2:80
1507 116K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0

Going now to the ingress sandbox, in the mangle table I see activity on all chains, but the packet counters for port 80 are zero:

Chain PREROUTING (policy ACCEPT 560 packets, 54320 bytes)
pkts bytes target prot opt in out source destination
0 0 MARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 MARK set 0x100
0 0 MARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 MARK set 0x100
Chain INPUT (policy ACCEPT 560 packets, 54320 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 560 packets, 54320 bytes)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 0.0.0.0/0 10.255.0.35 MARK set 0x100
Chain POSTROUTING (policy ACCEPT 560 packets, 54320 bytes)
pkts bytes target prot opt in out source destination

Showing the stats inside the sandbox, I can't see any activity:

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Conns InPkts OutPkts InBytes OutBytes
-> RemoteAddress:Port
FWM 256 0 0 0 0 0
-> 10.255.0.6:0 0 0 0 0 0
-> 10.255.0.7:0

@fcrisciani

@giminni I tried the steps that you listed at the top on an x86 Ubuntu 18.04 instance, but I'm not able to reproduce this with a vanilla nginx instance, and debugging iptables through GitHub is really tedious.
There is definitely something blocking that traffic; you need to run tcpdump at each step and see where the packets stop.
So far it seems they do not even arrive at the bridge in the ingress_sbox. You should be able to enter the namespace with nsenter --net=/var/run/docker/netns/ingress_sbox and run tcpdump on the interface with IP 172.18.0.2. If the traffic does not even arrive there, there is no point checking further up the chain.
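
A concrete version of that check might look like this (a sketch; eth1 is the interface that carries 172.18.0.2 later in this thread, but the name can differ):

   # Confirm which interface inside the sandbox holds 172.18.0.2
   nsenter --net=/var/run/docker/netns/ingress_sbox ip -4 addr show
   # Watch for the DNAT'ed traffic arriving on it
   nsenter --net=/var/run/docker/netns/ingress_sbox tcpdump -ni eth1 tcp port 80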

deminngi (Author) commented Jul 12, 2018

@fcrisciani thanks for the instructions.
What is strange is that it works with mode=host.

Can this be a problem with kernel 4.4? And can you show me a working iptables -vnL -t nat on the host and iptables -vnL -t mangle in ingress_sbox?

I am testing with hping3 localhost -p 80 on the host and tcpdump -i eth1 in ingress_sbox. In parallel I use watch -n 2 -d iptables -vnL -t nat, without success.

UPDATE: If I send a SYN flag using hping3 -V -S -p 80 localhost it works: the counters are incremented and I see traffic on eth1 in ingress_sbox.

Why can SYN pass through?

@fcrisciani

@giminni ingress exposes TCP ports; ICMP is not going to work.
This is the output from the machine that I tested on:
working.txt

This is the tool to get the output: docker run -v /var/run:/var/run --network host --privileged dockereng/network-diagnostic:support.sh > working.txt

Test:

docker network create -d overlay --attachable weave
docker service create --name=nginx --network weave  -p 80:80 -p 443:443 -e TZ=Europe/Berlin nginx

deminngi (Author) commented Jul 13, 2018

@fcrisciani I am not using ping, I use hping3.
Checking the port: here hping3 sends a SYN packet to a specified port (80 in my example). I can also control from which local port the scan starts (30000):

hping3 -V -S -p 80 -s 30000 localhost

Output from hping3 is:
using lo, addr: 127.0.0.1, MTU: 65536
HPING localhost (lo 127.0.0.1): S set, 40 headers + 0 data bytes

In ingress_sbox I got:

IP 172.18.0.1.30000 > 172.18.0.2.http: Flags [S], seq 1083945123, win 512, length 0
IP 172.18.0.2.http > 172.18.0.1.30000: Flags [R.], seq 0, ack 1083945124, win 0, length 0

so I see that the SYN arrives, but it is answered with an RST, and HTTP does not work.

deminngi (Author) commented Jul 13, 2018

@fcrisciani
Here is my lsmod output:

Module Size Used by
ip_vs_rr 16384 2
xt_ipvs 16384 2
ip_vs 110592 5 ip_vs_rr,xt_ipvs
xt_REDIRECT 16384 1
nf_nat_redirect 16384 1 xt_REDIRECT
xt_nat 16384 13
xt_tcpudp 16384 15
veth 16384 0
vxlan 40960 0
ip6_udp_tunnel 16384 1 vxlan
udp_tunnel 16384 1 vxlan
iptable_mangle 16384 2
xt_mark 16384 3
ipt_MASQUERADE 16384 3
nf_nat_masquerade_ipv4 16384 1 ipt_MASQUERADE
nf_conntrack_netlink 36864 0
iptable_nat 16384 3
nf_conntrack_ipv4 24576 6
nf_defrag_ipv4 16384 1 nf_conntrack_ipv4
nf_nat_ipv4 16384 1 iptable_nat
xt_addrtype 16384 5
iptable_filter 16384 2
xt_conntrack 16384 5
nf_nat 20480 4 nf_nat_redirect,nf_nat_ipv4,xt_nat,nf_nat_masquerade_ipv4
nf_conntrack 126976 7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
overlay 45056 1
snd_soc_rockchip_hdmi_dp 16384 0
snd_soc_rockchip_i2s 16384 4
snd_soc_rockchip_spdif 16384 2
rockchip_saradc 16384 0
ip_tables 24576 3 iptable_filter,iptable_mangle,iptable_nat
x_tables 32768 11 xt_ipvs,xt_mark,ip_tables,xt_tcpudp,ipt_MASQUERADE,xt_conntrack,xt_nat,iptable_filter,xt_REDIRECT,iptable_mangle,xt_addrtype
autofs4 40960 0
dw_hdmi_i2s_audio 16384 0

@fcrisciani

@giminni do you have any overlap between your host interfaces and the container networks?
Who is replying with the RST?

deminngi (Author) commented Jul 13, 2018

@fcrisciani No overlap.
My overlay network is encrypted and has a class B mask (10.32.0.0/16), which does not collide with the ingress network (10.255.0.0/16).
The Docker engine is started with /usr/bin/dockerd -H fd:// -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375 --experimental

BTW, I have no firewall service running, I am using kernel 4.4.x, and my Docker environment is running with the --experimental flag (see above); can this be a show stopper?

deminngi (Author) commented Jul 13, 2018

@fcrisciani Looking around for what could be the root cause: can it be that some mandatory netfilter kernel modules are missing?

Here are my findings:

# uname -a
Linux box 4.4.126-rockchip-ayufan-239 #1 SMP Sun May 27 18:38:24 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux

# zcat /proc/config.gz |grep IP_VS*
CONFIG_IP_VS=m
# CONFIG_IP_VS_IPV6 is not set
# CONFIG_IP_VS_DEBUG is not set
CONFIG_IP_VS_TAB_BITS=12
# CONFIG_IP_VS_PROTO_TCP is not set
# CONFIG_IP_VS_PROTO_UDP is not set
# CONFIG_IP_VS_PROTO_AH_ESP is not set
# CONFIG_IP_VS_PROTO_ESP is not set
# CONFIG_IP_VS_PROTO_AH is not set
# CONFIG_IP_VS_PROTO_SCTP is not set
CONFIG_IP_VS_RR=m
# CONFIG_IP_VS_WRR is not set
# CONFIG_IP_VS_LC is not set
# CONFIG_IP_VS_WLC is not set
# CONFIG_IP_VS_FO is not set
# CONFIG_IP_VS_OVF is not set
# CONFIG_IP_VS_LBLC is not set
# CONFIG_IP_VS_LBLCR is not set
# CONFIG_IP_VS_DH is not set
# CONFIG_IP_VS_SH is not set
# CONFIG_IP_VS_SED is not set
# CONFIG_IP_VS_NQ is not set
CONFIG_IP_VS_SH_TAB_BITS=8
CONFIG_IP_VS_NFCT=y
# CONFIG_VIDEO_ROCKCHIP_VPU is not set
# CONFIG_SND_SOC_ROCKCHIP_VAD is not set
CONFIG_USBIP_VHCI_HCD=m
# CONFIG_ROCKCHIP_VENDOR_STORAGE is not set

# find /lib/modules/4.4.126-rockchip-ayufan-239/ -name ip_vs* -print
/lib/modules/4.4.126-rockchip-ayufan-239/kernel/net/netfilter/ipvs/ip_vs.ko
/lib/modules/4.4.126-rockchip-ayufan-239/kernel/net/netfilter/ipvs/ip_vs_rr.ko
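
Note that the pasted config already shows "# CONFIG_IP_VS_PROTO_TCP is not set", i.e. this IPVS build has no TCP support at all, which would be consistent with the SYN being answered by an RST instead of being forwarded. The relevant options can be checked in one loop (a sketch; the option list is drawn from this thread and is not exhaustive):

   for opt in CONFIG_IP_VS CONFIG_IP_VS_PROTO_TCP CONFIG_IP_VS_RR \
              CONFIG_IP_VS_NFCT CONFIG_NETFILTER_XT_MATCH_IPVS CONFIG_NETFILTER_XT_MARK; do
       zcat /proc/config.gz | grep -w "$opt" || echo "$opt not found"
   done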

deminngi (Author) commented Jul 13, 2018

@fcrisciani I got it running using endpoint mode dnsrr and --publish mode=host,published=80,target=80.

Now why is it not running in endpoint mode vip? I am still thinking of missing kernel pieces.

The alternative would be to install an external load balancer if vip is not working properly.

@fcrisciani

@giminni it is running because you are using mode=host, so no ingress is being used; the service will be accessible only on that node and not from the rest of the cluster.
I don't think it can be related to a module, because otherwise the commands to configure it would have failed.
I'm still curious to understand who was sending the RST; that is key, I guess, because it means that instead of being routed to the container the packet is answered by someone else, and discovering why will explain the reason it is not working.
Can you run ifconfig on the host?
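
(To identify the sender, the RST can be captured together with link-layer headers inside the sandbox; a sketch using the same nsenter path as earlier in this thread:)

   # -e prints the MAC addresses, which identify the replying interface
   nsenter --net=/var/run/docker/netns/ingress_sbox \
       tcpdump -eni eth1 'tcp[tcpflags] & tcp-rst != 0'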

deminngi (Author) commented Jul 13, 2018

@fcrisciani Here is my ifconfig output:

$ ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:ffff:fffff:ffff prefixlen 64 scopeid 0x20
ether 02:42:ab:e1:32:19 txqueuelen 0 (Ethernet)
RX packets 1232 bytes 69108 (69.1 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2863 bytes 12452492 (12.4 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

docker_gwbridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::42:ffff:ffff:ffff prefixlen 64 scopeid 0x20
ether 02:42:42:3a:7f:4c txqueuelen 0 (Ethernet)
RX packets 29892 bytes 3210739 (3.2 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 24157 bytes 2855395 (2.8 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet a.b.c.43 netmask 255.255.254.0 broadcast a.b.c.255
inet6 fe80::b0c9:fffff:ffff:ffff prefixlen 64 scopeid 0x20
ether b2:c9:aa.bb.cc.dd txqueuelen 1000 (Ethernet)
RX packets 17828072 bytes 2062951224 (2.0 GB)
RX errors 0 dropped 711690 overruns 0 frame 0
TX packets 15890583 bytes 1761931044 (1.7 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 40

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet x.y.z.19 netmask 255.255.255.0 broadcast x.y.z.255
inet6 fe80::b0c9:fff:ffff:ffff prefixlen 64 scopeid 0x20
ether b2:c9:ww.xx.yy.zz txqueuelen 1000 (Ethernet)
RX packets 35830 bytes 2149800 (2.1 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 222 bytes 15628 (15.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 188

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 1 (Local Loopback)
RX packets 3428114 bytes 302476107 (302.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3428114 bytes 302476107 (302.4 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth1f20832: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::448a:99ff:fe44:5816 prefixlen 64 scopeid 0x20
ether 46:8a:99:44:58:16 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 39 bytes 2806 (2.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth37a5581: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::98b8:2bff:fee7:2866 prefixlen 64 scopeid 0x20
ether 9a:b8:2b:e7:28:66 txqueuelen 0 (Ethernet)
RX packets 9 bytes 3028 (3.0 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 41 bytes 3327 (3.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

@deminngi (Author)

@fcrisciani
Finally I got the service running using the ingress mesh network.
It was a combination of several missing kernel modules (I had to compile my own kernel for arm64v8), and from reading the docs I realized that I need a /24 overlay network, otherwise the port does not get published (see the sketch below).

Thanks for the support.
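
For anyone following along, recreating the network with a /24 would look like this (a sketch; names and flags taken from the earlier commands in this thread):

   docker service rm nginx
   docker network rm weave
   docker network create --driver overlay --attachable --subnet 10.32.0.0/24 weave
   docker service create --name=nginx --network weave -p 80:80 -p 443:443 lsioarmhf/nginx-aarch64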

@tylerwight

@giminni I was hoping you could share the modules you were missing. I have a very similar board (Pine64 vs. your Rock64) and I am seeing the exact same problems you did. I am trying to find where I can compare kernel modules to identify the missing ones, but I have run into issues.

@marianopeck

Same situation here... @giminni any chance you could share what you did?

tylerwight commented May 7, 2019

@marianopeck

I ended up getting it working using an Armbian build that contained the correct modules. This goes for both Docker Swarm and Kubernetes. Here is my write-up on it (comments at the bottom):

docker/for-linux#525

@marianopeck

Hi @tylerwight, thanks, yeah, I saw your workaround too. In fact, I just finished flashing and I am now going to try to install Docker. This is the image I downloaded, and I am running it on a Rock64.

@marianopeck

Ok... @tylerwight I can confirm it also works for me with Armbian!!! Thank you very much for your workaround.

hueyvle commented Mar 18, 2020

hi @giminni, I am encountering the same issue. Can you share the list of missing modules?

hueyvle commented Mar 24, 2020

FYI, I had the exact same problem and managed to fix it by finding the missing modules and recompiling/enabling them in the kernel:

1. Download the script:
   wget https://github.com/moby/moby/raw/master/contrib/check-config.sh
   chmod 0755 check-config.sh
2. Locate your kernel config (if you don't know where it is, see the example run after this list for common locations).
3. Run the script:
   ./check-config.sh <kernel config path>
   The missing modules are likely the cause.
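
Example run (config paths vary per distro; these are the common locations):

   ./check-config.sh /proc/config.gz
   # or, if the config is shipped on disk:
   ./check-config.sh /boot/config-$(uname -r)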

Vorsku commented Jun 4, 2020

@hueyvle which modules were you missing, and which did you have to recompile to get this working, please?

I have the same problem as in the OP, and I am missing the following in Network Drivers:
CONFIG_BRIDGE_VLAN_FILTERING

Latest vanilla Raspbian, same local switch between the swarm nodes, all nodes listening on ports 4789, 2377 and 7946, no host firewall enabled. Still unable to access containers except when pointed directly at the host they are running on. Pulling my hair out!
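
(As a sanity check for the inter-node ports, the TCP ones can be probed like this; replace <node-ip> with a real address. 4789/udp and 7946/udp cannot be reliably probed with nc:)

   for p in 2377 7946; do nc -zv <node-ip> $p; done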

fff7d1bc added a commit to fff7d1bc/moby that referenced this issue Feb 19, 2021
Points out another symbol that Docker might need. In this case Docker's
mesh network in swarm mode does not route Virtual IPs if it's unset.

From /var/logs/docker.log:
time="2021-02-19T18:15:39+01:00" level=error msg="set up rule failed, [-t mangle -A INPUT -d 10.0.1.2/32 -j MARK --set-mark 257]:  (iptables failed: iptables --wait -t mangle -A INPUT -d 10.0.1.2/32 -j MARK --set-mark 257: iptables v1.8.7 (legacy): unknown option \"--set-mark\"\nTry `iptables -h' or 'iptables --help' for more information.\n (exit status 2))"

Bug: moby/libnetwork#2227
Bug: docker/for-linux#644
Bug: docker/for-linux#525
Signed-off-by: Piotr Karbowski <[email protected]>
docker-jenkins pushed a commit to docker-archive/docker-ce that referenced this issue Feb 22, 2021 (same commit message as above; Upstream-commit: e8ceb976469e15547ed368ba5c110102ccc5fbfa)
thaJeztah pushed a commit to thaJeztah/docker that referenced this issue Feb 23, 2021 (cherry-pick of e8ceb97)
nosamad pushed a commit to WAGO/docker-engine that referenced this issue Sep 13, 2021 (cherry-pick of e8ceb97)
nosamad pushed a commit to WAGO/docker-engine that referenced this issue Sep 15, 2021 (cherry-pick of e8ceb97)