Probabilistic Forwarding - Using Heuristic Analysis of Network Conditions to Reduce Load #5629
Conversation
Thanks for this, it's interesting. While I like that it's using existing metrics obtained from the mesh without utilizing more airtime or RAM, I'm not sure about the metrics and the idea in general. First of all, non-routers/repeaters (except non-rebroadcasters; see firmware/src/mesh/FloodingRouter.cpp, lines 28 to 29 at f39a9c5) … So this already drastically limits the number of rebroadcasts. Based on your …
Now onto the metrics. …
Next, the …
Lastly, while the …
Thank you for such thorough feedback!
Good point. I see that now. But I don't follow this part -> "for more than 1 distinct source, the probability becomes 0"
To your point though, it seems like you're deduplicating using a somewhat similar mechanism (i.e. the random broadcast delay in the packet pool). Probably not a good metric to utilize.
Yes... you're right. We'd have to add something to the packet to indicate the original sender ("from") vs. the last repeating node. Wouldn't this be useful, though? If we had that information, any node could more effectively understand how packets were traversing the mesh.
I think it depends, actually. The number of neighbors may mean this node should broadcast more, in the case that it is the only node capable of serving the other nearby nodes. But it could also mean that this node is one of many in a dense mesh where everyone can essentially serve everyone, and therefore you'd want to lower its propensity to rebroadcast. What about something like a Bloom filter, where the top N strongest immediate neighbors are added to the filter and passed along in the packet? When the next node receives the packet, it compares its own top N strongest immediate neighbors, and the number of likely unique nodes it serves is an input to the probabilistic forwarding computation. Let's assume we'd use 2 hashes per node (k = 2), a 64-bit total field size (m = 64), and a maximum of 21 entries (n = 21); you'd end up with a false positive rate of ~30%. With 21 entries in a 3-hop configuration, you'd be able to record each node's top 7 neighbors. And that assumes each is holding a fully unique neighbor list.
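For illustration, here is a minimal C++ sketch of that Bloom-filter idea under the stated parameters (m = 64 bits, k = 2 hash positions). The hash mixing, the function names, and the neighbor-comparison step are assumptions of mine for the sake of the sketch, not anything that exists in the firmware today:

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical 64-bit Bloom field over neighbor node numbers (m = 64, k = 2).
// The integer mixing below is illustrative only.
static uint8_t bloomBit(uint32_t nodeNum, uint32_t salt)
{
    uint32_t h = nodeNum ^ salt;
    h ^= h >> 16;
    h *= 0x7feb352dU;
    h ^= h >> 15;
    return h & 63; // bit position 0..63
}

static void bloomAdd(uint64_t &field, uint32_t nodeNum)
{
    field |= (1ULL << bloomBit(nodeNum, 0x9e3779b9U));
    field |= (1ULL << bloomBit(nodeNum, 0x85ebca6bU));
}

static bool bloomMaybeContains(uint64_t field, uint32_t nodeNum)
{
    return (field & (1ULL << bloomBit(nodeNum, 0x9e3779b9U))) &&
           (field & (1ULL << bloomBit(nodeNum, 0x85ebca6bU)));
}

// Count how many of our own neighbors are probably NOT already covered by the
// field carried in the packet; a higher count argues for rebroadcasting.
static size_t countLikelyUncovered(uint64_t packetField, const uint32_t *myNeighbors, size_t n)
{
    size_t uncovered = 0;
    for (size_t i = 0; i < n; ++i)
        if (!bloomMaybeContains(packetField, myNeighbors[i]))
            ++uncovered;
    return uncovered;
}
```

A receiving node could then feed something like `countLikelyUncovered()` into the forwarding probability: the more of its own neighbors appear to be missing from the packet's field, the more useful its rebroadcast is likely to be.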
With "this logic" I was referring to how it currently works in Meshtastic.
The random delay is taken from a contention window, which scales with SNR (after giving routers/repeaters priority). This means that nodes further away will generally rebroadcast first, in order to minimize the number of hops used to spread a packet.
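To illustrate the behaviour being described (purely a sketch: the slot length, SNR range, and window scaling below are made-up placeholders, not the firmware's actual contention-window code):

```cpp
#include <cstdint>
#include <cstdlib>

// Pick a random delay from a contention window whose size grows with SNR, so
// nodes that received the packet weakly (i.e. are likely far away) tend to win
// the race and rebroadcast first. All constants are assumptions.
static uint32_t pickRebroadcastDelayMs(float snr, bool isRouter)
{
    const uint32_t slotMs = 50;           // assumed slot length
    if (isRouter)
        return rand() % (2 * slotMs);     // routers/repeaters get priority (small window)

    // Map SNR (assumed roughly -20..+10 dB) to a window size: the better the
    // SNR, the larger the window, so nearby nodes usually back off longer.
    float clamped = snr < -20 ? -20 : (snr > 10 ? 10 : snr);
    uint32_t slots = 2 + (uint32_t)((clamped + 20) / 3); // 2..12 slots (assumed)
    return (rand() % slots) * slotMs;
}
```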
I propose to add this for the Next-Hop Router (#2856), although only the last byte of the relayer's node number, because we only have 2 bytes left in the header.
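As a tiny illustration of that (the names here are hypothetical; the real header layout is what's discussed in #2856):

```cpp
#include <cstdint>

// Hypothetical: with only 2 spare header bytes, the relayer is identified by
// the low byte of its 32-bit node number rather than the full NodeNum.
static uint8_t relayIdByteFor(uint32_t relayerNodeNum)
{
    return static_cast<uint8_t>(relayerNodeNum & 0xFF);
}
```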
When considering adding any overhead, in my opinion we should first simulate (https://github.com/meshtastic/Meshtasticator) whether it gives significant improvements over the current method in all kinds of scenarios.
Very interesting. I'll review that PR. And I'll also utilize the simulator. Thank you.
Nodes are continuously logging information about the state of the mesh around them. This information can be used to create a probabilistic forwarding scheme that mitigates unneeded and unwanted packet traffic without impacting reliability.
There are three core data points under study:
These three data points can be used to determine the likelihood that repeating the packet is unnecessary. Of course, it can never be a certainty, which is why a probabilistic forwarding scheme is used; the degree to which each factor influences the forwarding probability can be tuned for typical Meshtastic levels of traffic.
In any case, even if the influence of these factors is reduced significantly so that traffic is only conservatively reduced, it would be a traffic reduction nonetheless.
To model the effect of this probabilistic forwarder, you can use this JSFiddle, which mirrors the calculations in FloodingRouter.cpp: https://jsfiddle.net/1ufmhry6/4/
Key lines of code:
Lines 61-63 of FloodingRouter.cpp: call the probability calculation function and test its value against a random number to determine whether the packet will be forwarded.
Line 110 of FloodingRouter.cpp: calculates the probability based on the data noted above (a rough sketch of this flow follows below).
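The referenced patch and JSFiddle aren't reproduced here, but a rough C++ sketch of the decision flow those lines describe could look like the following. The struct, factor weights, and thresholds are placeholders chosen for illustration, not the values used in the PR:

```cpp
#include <cstdint>
#include <cstdlib>

// Placeholder inputs standing in for the mesh metrics the PR derives from
// state the node already tracks; names and weights are illustrative only.
struct MeshHeuristics {
    uint32_t duplicatesHeard; // copies of this packet already overheard
    uint32_t neighborCount;   // immediate neighbors currently tracked
    float    snr;             // SNR of the received copy
};

// Combine the factors into a forwarding probability in [0, 1]. Each weight can
// be tuned (or pushed toward 0) to make the reduction as conservative as desired.
static float computeForwardProbability(const MeshHeuristics &h)
{
    float p = 1.0f;
    p -= 0.25f * (float)h.duplicatesHeard;     // every duplicate heard lowers p
    p -= (h.neighborCount > 8 ? 0.2f : 0.0f);  // dense neighborhood lowers p
    p -= (h.snr > 5.0f ? 0.1f : 0.0f);         // strong SNR suggests a close sender
    return p < 0.0f ? 0.0f : (p > 1.0f ? 1.0f : p);
}

// Mirrors the described flow: compute the probability, roll a random number,
// and only rebroadcast when the roll falls below the probability.
static bool shouldRebroadcast(const MeshHeuristics &h)
{
    float p = computeForwardProbability(h);
    float roll = (float)rand() / (float)RAND_MAX;
    return roll < p;
}
```

Tuning the weights toward zero recovers today's always-rebroadcast behaviour, which is what allows a conservative rollout of the scheme.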