
Scale default intervals based for *online* mesh size past 40 nodes #4277

Merged: 7 commits into master from scale-defaults-to-mesh-size, Jul 13, 2024

Conversation

thebentern
Contributor

No description provided.

@thebentern thebentern requested a review from GUVWAF July 12, 2024 18:51
Review comment on src/mesh/ProtobufModule.h (outdated, resolved)
@thebentern thebentern marked this pull request as ready for review July 12, 2024 19:54
@thebentern thebentern requested a review from todd-herbert July 12, 2024 19:54
@thebentern thebentern changed the title Scale default intervals based on online mesh size past 40 nodes Scale default intervals based for *online* mesh size past 40 nodes Jul 12, 2024
@thebentern thebentern merged commit c5d747c into master Jul 13, 2024
95 checks passed
@todd-herbert
Contributor

Sorry I didn't get to this one in time. I do like the concept!

@thebentern thebentern deleted the scale-defaults-to-mesh-size branch July 13, 2024 17:51
@thebentern
Contributor Author

> Sorry I didn't get to this one in time. I do like the concept!

I accept posthumous reviews as well. 😄

@ayysasha

ayysasha commented Aug 1, 2024

Can we have a layman's terms explanation of how this scaling works?

@thebentern
Contributor Author

> Can we have a layman's terms explanation of how this scaling works?

I added a write-up about regular broadcast intervals to our mesh algorithm page that covers the new scaling mechanism:
https://meshtastic.org/docs/overview/mesh-algo/#regular-broadcast-intervals
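
To give a rough picture in code, here is a minimal sketch of the idea (not the firmware's actual implementation; the function name, the constant, and the simple linear curve are assumptions for illustration): intervals stay at their stock defaults up to 40 online nodes and stretch as the mesh grows beyond that.

```cpp
// Minimal sketch, NOT the firmware's actual code: illustrates scaling a
// default broadcast interval once the number of online nodes exceeds a
// base mesh size. Names and the linear formula are assumptions.
#include <cstdint>

constexpr uint32_t BASE_ONLINE_NODES = 40; // below this, defaults are unchanged

// Hypothetical helper: stretch a default interval proportionally to how far
// the online node count is above the 40-node baseline.
uint32_t scaleDefaultIntervalSecs(uint32_t defaultSecs, uint32_t numOnlineNodes)
{
    if (numOnlineNodes <= BASE_ONLINE_NODES)
        return defaultSecs; // small meshes keep the stock default

    // e.g. 80 online nodes -> 2x the default interval
    float scale = static_cast<float>(numOnlineNodes) / BASE_ONLINE_NODES;
    return static_cast<uint32_t>(defaultSecs * scale);
}

// Example: a 1800 s (30 min) telemetry default on a 100-node mesh
// becomes 1800 * 100 / 40 = 4500 s (75 min).
```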

@akohlsmith

This does not seem like a good idea to me. 40 nodes randomly sending out broadcasts at 1800-second intervals is not a lot of traffic, and the number of online nodes is not representative of mesh traffic or congestion. Scaling back based on channel utilization makes a lot more sense to me...

@GUVWAF
Member

GUVWAF commented Aug 1, 2024

We already stop some of the periodic broadcasts at 25% or 40% channel utilization. While I agree that the number of online nodes is not the perfect method, neither is channel utilization, since it's a local measurement: you don't know what kind of channel utilization nodes three hops away are experiencing. The number of online nodes can be a good proxy for that.
Edit: Well, nodes do send their channel utilization, but then do you take the average or the max? Also, the reported channel utilization is a snapshot of the last minute and may vary a lot. Arguably, there may be a more representative way to determine the "health" of the network along these lines, but I doubt it would be much better than just the number of online nodes.

With the default LoRa settings, DeviceTelemetry takes up about 1 second of airtime, I believe. With the default of 3 hops, most nodes will hear that packet at least twice (or transmit it themselves), and maybe more. So DeviceTelemetry alone will already result in around 2*40*1/1800 ≈ 4.4% channel utilization.
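
Putting that back-of-the-envelope estimate into code form (the ~1 s airtime and the two copies heard per node are the same rough assumptions as above, not measurements):

```cpp
// Estimate expected channel utilization from periodic broadcasts alone,
// given per-packet airtime, copies heard per node under flooding, the
// number of online nodes, and the broadcast interval.
#include <cstdio>

int main()
{
    const double airtimeSecs = 1.0;     // ~1 s per DeviceTelemetry packet (assumed)
    const double copiesHeard = 2.0;     // each node hears/transmits it at least twice
    const double onlineNodes = 40.0;    // mesh size in this example
    const double intervalSecs = 1800.0; // default telemetry interval

    double utilization = copiesHeard * onlineNodes * airtimeSecs / intervalSecs;
    printf("Estimated channel utilization: %.1f%%\n", utilization * 100.0); // ~4.4%
    return 0;
}
```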

@thebentern
Contributor Author

thebentern commented Aug 1, 2024

I think part of the problem is that channel utilization is often too transient a metric to control regularly intervalled broadcasts that are already somewhat spaced out. Using the online node count has worked fairly well both for large, fairly static public meshes and for events with a high concentration of nodes, where someone sending a packet with want_ack can trigger a sudden packet storm: channel utilization spikes and then drops back down. As @GUVWAF said, it may not be perfect, but it has given us a more holistic way to start wrangling traffic on huge meshes.
