Daemon triggers a Netscan alert from hosting company #1226
It really should not scan internal IPs (it currently does try to dial other peers at their internal IPs in the hope that they're on the same LAN). We have multicast DNS for finding peers on the local network; we should filter out local IPs from our advertised list. |
@jbenet o/ |
I've seen these reports before as well. Duplicate of #1173 |
@whyrusleeping (a) Multicast DNS does not work all the time. It is often disabled in many networks (it happened at 2 of the 4 talks I've given recently) and even in some OSes. (And it certainly does not work for containers.) (b) Look at the WebRTC standard: dialing local network addresses is precisely how it works. I'm tired of having to justify this over and over. Now, there are many ways to fix this sort of thing. For example, just two among many:
I suggest also looking at the silencing/niceness heuristics other (aggressively local) p2p applications use. |
Next steps
Getting these down would go a long way for people trying to run go-ipfs in VPSes at providers that (rightly!) are concerned about random processes trying to dial lots of local addresses. |
We received a similar letter from a dedicated server provider. Long term, I really do see this as something ISPs need to become more comfortable with, as the web adjusts to a more decentralized model and, in the case of IPFS, even datacenters become home to localized caches of content (which is a good thing for them overall). That said, in the interim they treat most of this sort of activity as malicious, so a way to turn it off is needed for now, until more widespread adoption takes place. I think your next steps would solve this issue for us. |
Also, it's possible that a firewall rule could be used as a workaround for now. I'm not sure what that rule would look like; I'm not very savvy with iptables. |
So an iptables solution to this would be to just block outgoing connections to other 'internal' networks, like so:
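The original rules weren't captured in this thread; the lines below are only a rough sketch of the kind of rule meant, assuming eth0 is the external interface and that only IPFS traffic on port 4001 should be blocked.
# hedged sketch, not the original commands: reject outbound dials to the
# RFC 1918 private ranges on the external interface (assumed to be eth0)
iptables -A OUTPUT -o eth0 -p tcp --dport 4001 -d 10.0.0.0/8 -j REJECT
iptables -A OUTPUT -o eth0 -p tcp --dport 4001 -d 172.16.0.0/12 -j REJECT
iptables -A OUTPUT -o eth0 -p tcp --dport 4001 -d 192.168.0.0/16 -j REJECT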
and so on, for any other networks that you are accused of scanning. I personally don't think this is a good approach, but it may work in the short term. |
@aSmig gave me some great feedback on iptables usage, and recommended this as a workaround:
This will block all private scans. Not ideal, obviously, but all of the netscans I've gotten complaints about were related to local IP scanning. If you're running Ubuntu, this service will persist the settings:
You may need to disable UFW if it is running (and then run iptables -F), or make a version of these rules that uses UFW instead of iptables. I'll report back if I get another netscan warning. |
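The persistence service referred to above isn't shown; one common option on Ubuntu/Debian (an assumption, not necessarily what was used here) is the iptables-persistent package:
# hedged example: persist the current rules across reboots on Ubuntu/Debian
sudo apt-get install iptables-persistent
sudo netfilter-persistent save    # writes /etc/iptables/rules.v4 and rules.v6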
@kyledrake thank you! |
Also got a netscan report from my hoster, looking quite similar to the one in the original post of this issue. Solved it with some iptables rules quite similar to the ones @kyledrake posted above:
In this case I was able to block all traffic to private IPs on the external interface, as the machine does not use any private networking on that interface. |
Just a quick update that I have not had any more complaints from our DCO since we installed these filters. |
@kyledrake thanks, good to know! Still need to put this into IPFS soon, hopefully in 0.3.6 or 0.3.7. |
Just got another one:
Which is really weird, since the iptables policy is in place:
So I'm not sure what's going on here. The above rule doesn't cover 172.17 for some reason? Ideas welcome. I also found this list, which claims to cover all the private nets (RFC 1918), copied from here:
Use at your own risk. I haven't edited this to make it useful, and I have no idea what $EXTERNAL_IP does, and it may not be what you want. |
172.17.x.x is definitely covered by 172.16.0.0/12. The report indicates that the source ports are in the 50000-60000 range, but your rules only match when both source and destination port are 4001. Pull the --sport 4001 out of your commands to match any source port. The valid-src chain described above has a few issues, including blocking all outbound traffic if you specify your external IP. Most of its rules block outbound traffic only when the source IP matches a private network, but you want to match against destination IPs. If you really want to block any and all traffic to private network ranges on a given external interface, this should get you closer:
EXTERNAL_IF=eth0 # or whatever interface connects to your ISP
iptables -A valid-out -d 10.0.0.0/8 -j REJECT
iptables -A valid-out -d 172.16.0.0/12 -j REJECT
iptables -A valid-out -d 192.168.0.0/16 -j REJECT
iptables -A valid-out -d 224.0.0.0/4 -j REJECT
iptables -A valid-out -d 240.0.0.0/5 -j REJECT
iptables -A valid-out -d 127.0.0.0/8 -j REJECT
iptables -A valid-out -d 0.0.0.0/8 -j REJECT
iptables -A valid-out -d 255.0.0.0/8 -j REJECT
iptables -A valid-out -d 169.254.0.0/16 -j REJECT
iptables -A valid-out -d 224.0.0.0/4 -j REJECT
iptables -A OUTPUT -o $EXTERNAL_IF -j valid-out # make sure this happens before a global ACCEPT
# Use this instead of the previous line if you only want to block traffic to port 4001
#iptables -A OUTPUT -o $EXTERNAL_IF -p tcp --dport 4001 -j valid-out |
we should up the priority on this and get it out sooner. |
We now have ip/cidr connection filtering: could someone:
|
Is this the format you ended up going with?: #1378 (comment) |
@kyledrake the format is |
i added another $10 to this issue: https://www.bountysource.com/issues/14335371-daemon-triggers-a-netscan-alert-from-hosting-company |
So this should be fixed, and @whyrusleeping fixed it |
(though would love people to play with it, make sure it does fix things, and make an example) |
what's the PR#? |
So, if i close this issue, i acquire currency? |
@whyrusleeping what's an example config here? Does this look right?
{ // in config
"DialBlockList": [
"/ip4/192.168.0.0/ipcidr/16",
"/ip4/172.10.1.0/ipcidr/28"
]
}
Based on: |
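The key that eventually shipped is Swarm.AddrFilters (see the config excerpt later in this thread). A sketch of setting it from the command line rather than editing the config file by hand, assuming the --json flag of ipfs config behaves as usual:
# hedged sketch: configure swarm address filters via the CLI
ipfs config --json Swarm.AddrFilters '["/ip4/10.0.0.0/ipcidr/8", "/ip4/172.16.0.0/ipcidr/12", "/ip4/192.168.0.0/ipcidr/16"]'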
Would this also work with IPv6? |
I'm not able to connect to another node on my LAN with the filters set appropriately |
@kyledrake, you can reset your rule match counters with iptables -Z. Then check them a week later to see if anything got past the built-in filters and was blocked by your firewall. To show only rules that have matched packets, you can do this:
This helps with testing and ensures your ISP won't get grumpy. |
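The exact command isn't shown above; one way to do it, assuming the valid-out chain from the earlier rules:
# zero the counters, then list the chain with packet counts later
iptables -Z valid-out
iptables -L valid-out -v -n
# show only rules whose packet counter is non-zero (skips the two header lines)
iptables -L valid-out -v -n | awk 'NR > 2 && $1 != 0'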
Starting a test on a Hetzner server… We'll see whether there is a netscan alert… Rule set:
"Swarm": {
"AddrFilters": [
"/ip4/10.0.0.0/ipcidr/8",
"/ip4/100.64.0.0/ipcidr/10",
"/ip4/169.254.0.0/ipcidr/16",
"/ip4/172.16.0.0/ipcidr/12",
"/ip4/192.0.0.0/ipcidr/24",
"/ip4/192.0.0.0/ipcidr/29",
"/ip4/192.0.0.8/ipcidr/32",
"/ip4/192.0.0.170/ipcidr/32",
"/ip4/192.0.0.171/ipcidr/32",
"/ip4/192.0.2.0/ipcidr/24",
"/ip4/192.168.0.0/ipcidr/16",
"/ip4/198.18.0.0/ipcidr/15",
"/ip4/198.51.100.0/ipcidr/24",
"/ip4/203.0.113.0/ipcidr/24",
"/ip4/240.0.0.0/ipcidr/4"
]
},
(Networks from the IANA list @aSmig posted above) |
@Luzifer can you confirm that the swarm has these filters enabled? |
# docker exec ipfs ipfs swarm filters
/ip4/192.168.0.0/ipcidr/16
/ip4/198.18.0.0/ipcidr/15
/ip4/198.51.100.0/ipcidr/24
/ip4/203.0.113.0/ipcidr/24
/ip4/10.0.0.0/ipcidr/8
/ip4/172.16.0.0/ipcidr/12
/ip4/192.0.0.0/ipcidr/29
/ip4/192.0.0.170/ipcidr/32
/ip4/169.254.0.0/ipcidr/16
/ip4/192.0.0.0/ipcidr/24
/ip4/240.0.0.0/ipcidr/4
/ip4/100.64.0.0/ipcidr/10
/ip4/192.0.0.8/ipcidr/32
/ip4/192.0.0.171/ipcidr/32
/ip4/192.0.2.0/ipcidr/24 |
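For reference, filters can also be added or removed on a running daemon with the swarm filters subcommands (note: runtime changes may not be written back to the config file in all versions):
# add or remove a filter at runtime
ipfs swarm filters add /ip4/100.64.0.0/ipcidr/10
ipfs swarm filters rm /ip4/100.64.0.0/ipcidr/10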
it would be really cool if my parsing for |
So far, neither feedback nor an alert from my hosting provider. |
@Luzifer woot! Let's keep it up |
Still running, no complaints… I think the filters are working… Praise @whyrusleeping for building it! |
I'm glad we've finally fixed that! |
See: ipfs/kubo#1226
License: MIT
Signed-off-by: Lars Gierth <[email protected]>
Had the same problem. I'm not sure if this is a problem that pops up over and over. If it is, you could perhaps add a little note in the installation guide or disable local dialing in the default configuration. AFAIK there are approx. 250 nodes, so I don't think it is that important at this stage. Anyway, interesting and awesome project. Keep up the good work! |
The path to improvement:
We could also add a warning to the daemon output. I've also been wanting an
|
#1247 should already be implemented. |
Ah indeed, I didn't re-read it closely enough. |
Just got blocked by Hetzner due to this a few minutes ago. IMO, it really would make sense to at least print a warning (until #1246 is implemented) when starting ipfs (maybe with a link to this bug report), as having your host blocked because of ipfs is not a nice 'user experience'. |
Hey @adrian-bl, yeah, it is annoying to deal with that. How do you suggest detecting the environment in order to print the warning? We need some good heuristics. Such a warning should not be printed every time the ipfs daemon runs, only when the user is likely to be running in an aggressive hosted environment like Hetzner. |
Hi @jbenet
I wouldn't call them 'aggressive': I can somewhat understand that they consider requests to private networks fishy and assume such hosts to be compromised.
The cleanest solution would be to print the warning if all interfaces of the host have public IPv4 addresses (while ignoring 127.0.0.0/8). Another (flaky) option would be to check whether the subnet mask of every interface (excluding lo) is longer than /24; most hosters use something like /27 or /28, as their networks are routed.
But I wonder how the private IPs actually end up in the DHT: is this intentional? E.g. BitTorrent's Kademlia implementation doesn't have this issue/feature: a node doesn't need to know its own IP address (though it could easily learn it by searching for its own node ID). An announce request only includes the listening port of the node; the remote node (which receives the announce) then stores the remote address of the UDP packet (after verifying the token to avoid spoofing). |
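A rough shell sketch of the first heuristic (warn when no non-loopback interface carries an RFC 1918 address); the regex and message are illustrative, not taken from go-ipfs:
# hedged sketch: inspect global-scope IPv4 addresses and warn if none are private
ip -4 -o addr show scope global | awk '{print $4}' \
  | grep -Eq '^(10\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.)' \
  || echo "only public IPv4 addresses found; likely a hosted environment, consider AddrFilters"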
We may as well just implement #1246; agreed, a warning would be nice. Documenting it would also help. The mainline DHT is not designed to work in private, disconnected networks.
|
You don't have to justify yourself: I had no intention of criticizing the decision. I'm pretty new to IPFS and was just wondering why it behaves like this (I've written my own BitTorrent client, so my mind is 'locked' in the mainline DHT world). I'll read through https://ipfs.io/ipfs/QmR7GSQM93Cx5eAg6a6yRzNde1FQv7uL6X1o4k7zrJa3LX/ipfs.draft3.pdf which will probably answer my questions :-) |
No worries, I'm just excusing myself for not giving you a complete answer or pointers. Good thing to wonder about, though :) Read through the issues in this repo about addresses. |
+1. Got an abuse message from Hetzner today. Please add the ability to disable local peer discovery! |
Please see #1226 (comment) and the preceding comments for a solution to this issue. If you still have an issue with this after trying the address filters, please file a new issue with details of what you have tried and which addresses are being dialed. |
These filters are now applied to your config if you initialize ipfs with the 'server' profile:
|
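The command that applies them isn't shown above; for a new repo it is the ipfs init --profile=server call from the solution note below, and for an existing repo a config profile can be applied after the fact (assuming a go-ipfs version that supports config profiles):
# apply the server profile to an already-initialized repo, then restart the daemon
ipfs config profile apply server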
Solution
Use
ipfs init --profile=server
~ Kubuxu
I just installed go-ipfs, did an init, and started the daemon. A couple of minutes later, my hosting provider sent me an abuse email indicating that a "Netscan" was coming from my host and asked me to stop. Here is the log they sent me (edited for privacy).
Notice that all but 3 destination addresses are internal network destinations. There are also many repeats (same destination internal IP), and this all happened in 33 seconds. Nearly all of it was happening on port 4001 as well, reinforcing that this was IPFS doing this.
How does ipfs currently find peers to swarm with? Is there a way to throttle back the peer discovery process? Why is it even trying to scan internal IPs? (I'm on an externally facing machine.)