
Measure volume of network traffic of statusd process #546

Closed
adambabik opened this issue Jan 10, 2018 · 12 comments

@adambabik
Contributor

Problem

In status-im/status-mobile#2931, we collected data usage of the mobile app. In order to test different status-go settings and compare the results, we need to figure out how to collect similar metrics from a local node.

Implementation

In this issue, let's focus on network usage. There should be a metric showing the volume of network traffic to and from the statusd process over time. Using flags, we will be able to test different configurations, such as "LES enabled and Whisper disabled", and so on.

Acceptance Criteria

  1. There should be a way to measure the network traffic of the statusd process over time,
  2. Ideally, data should be fed into a storage that can be added as a data source in Grafana,
  3. There is an expvar metric that exports some basic information about the syncing progress.

We should be able to correlate how much data needs to be downloaded before syncing is done, how much data, on average, is required to load N blocks, how much network traffic is consumed by Whisper, etc.
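
As a rough illustration of point 3 (not a final design; the endpoint address, port, and polling interval are just placeholders, assuming the node exposes the standard eth RPC), an expvar-based exporter could look like this:

package main

import (
    "context"
    "expvar"
    "net/http"
    "time"

    "github.com/ethereum/go-ethereum/ethclient"
)

var (
    currentBlock = expvar.NewInt("sync_current_block")
    highestBlock = expvar.NewInt("sync_highest_block")
)

func main() {
    // statusd RPC endpoint; adjust to the actual node address.
    client, err := ethclient.Dial("http://localhost:8545")
    if err != nil {
        panic(err)
    }
    go func() {
        for range time.Tick(10 * time.Second) {
            progress, err := client.SyncProgress(context.Background())
            if err != nil || progress == nil {
                continue // not syncing, or RPC error
            }
            currentBlock.Set(int64(progress.CurrentBlock))
            highestBlock.Set(int64(progress.HighestBlock))
        }
    }()
    // expvar exposes the counters on /debug/vars of the default mux.
    http.ListenAndServe(":6060", nil)
}

A small scraper (or Grafana via an intermediate store) could then read /debug/vars and correlate these values with the traffic measurements.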

Notes

How I imagine it will work:

  1. Start a tool which measures network traffic,
  2. Start statusd process,
  3. Monitor network traffic to and from the statusd process in Grafana.
@divan
Contributor

divan commented Jan 10, 2018

@adambabik from my understanding, network traffic is something that should be measured per process, i.e. at the OS level rather than the app level. So it should probably be an external app that measures traffic and sends it to stats storage/display software like Grafana.

I'd be more than happy to extend https://github.com/divan/statusmonitor to support network usage stats and proxying data to the stats server. Actually, this app is already doing a similar job of getting OS-level stats (CPU utilization) that are not possible to get from within an app.

@adambabik
Contributor Author

@divan you're correct, it should be an external app/code/repo, but as I was not aware of any, I put this issue here. Btw, your link leads to a 404; I guess you mean https://github.com/status-im/statusmonitor. We can move this issue there as it makes perfect sense!

The most important part here is to collect network usage without the need to run the app on a simulator or device, as that activity is troublesome and doing it locally does not change anything.

@divan
Contributor

divan commented Jan 10, 2018

@adambabik oops, you're right, old link :)

I found one way to collect network stats for a process that can be used with statusmonitor:

  • first, find userID of the process:

adb shell dumpsys package im.status.ethereum | grep userId

  • filter /proc/net/xt_qtaguid/stats output from the device to get tx_bytes and rx_bytes values:

adb shell cat /proc/net/xt_qtaguid/stats | grep \ 10178\ | cut -d' ' -f6,8

(fields 6 and 8 are rx_bytes and tx_bytes, respectively)

Maybe there is a better way to get this data by PID, but that's the best thing I've tried that works on my Android device.
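
For example, a small poller along these lines (just a sketch, not part of statusmonitor; 10178 is the example userId found above) could sample those two columns periodically and feed them to whatever stats backend we pick:

package main

import (
    "bufio"
    "fmt"
    "os/exec"
    "strings"
    "time"
)

const appUID = "10178" // userId from `dumpsys package im.status.ethereum`

// sample sums rx_bytes and tx_bytes over all interfaces/tag sets for the uid.
func sample() (rx, tx uint64, err error) {
    out, err := exec.Command("adb", "shell", "cat", "/proc/net/xt_qtaguid/stats").Output()
    if err != nil {
        return 0, 0, err
    }
    sc := bufio.NewScanner(strings.NewReader(string(out)))
    for sc.Scan() {
        f := strings.Fields(sc.Text())
        // columns: idx iface acct_tag_hex uid_tag_int cnt_set rx_bytes rx_packets tx_bytes ...
        if len(f) > 7 && f[3] == appUID {
            var r, t uint64
            fmt.Sscan(f[5], &r)
            fmt.Sscan(f[7], &t)
            rx += r
            tx += t
        }
    }
    return rx, tx, sc.Err()
}

func main() {
    for range time.Tick(5 * time.Second) {
        rx, tx, err := sample()
        if err != nil {
            fmt.Println("adb error:", err)
            continue
        }
        fmt.Printf("rx=%d tx=%d bytes\n", rx, tx)
    }
}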

@dshulyak
Contributor

Wouldn't it be useful to know what traffic is generated by p2p protocols (e.g. syncing, sending Whisper messages, receiving Whisper) and what traffic comes from regular RPC requests to the upstream node?
I wanted to share https://github.com/ethereum/go-ethereum/blob/master/p2p/metrics.go

@dshulyak
Contributor

dshulyak commented Jan 10, 2018

It seems there is no way to meter traffic for RPC requests at the moment, but we can get p2p ingress/egress traffic from metrics.DefaultRegistry or by sending a 'debug_metrics' RPC request.

More info is here: https://github.com/ethereum/go-ethereum/wiki/Metrics-and-Monitoring
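
Something like this (a sketch that assumes metrics collection is enabled in the node; the meter names come from go-ethereum's p2p/metrics.go and may differ between versions) would dump the p2p traffic meters from inside the process:

package main

import (
    "fmt"

    "github.com/ethereum/go-ethereum/metrics"
)

// dumpTrafficMeters prints every Meter registered in the default registry,
// which includes the p2p ingress/egress meters when metrics are enabled.
func dumpTrafficMeters() {
    metrics.DefaultRegistry.Each(func(name string, v interface{}) {
        if m, ok := v.(metrics.Meter); ok {
            fmt.Printf("%-40s count=%d, 1m rate=%.1f/s\n", name, m.Count(), m.Rate1())
        }
    })
}

func main() {
    dumpTrafficMeters()
}

From outside the process, the same data should be reachable through the debug_metrics RPC call mentioned above.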

@divan
Contributor

divan commented Jan 10, 2018

@dshulyak that'll definitely be useful as soon as we confirm that there is a problem with traffic.
I've updated statusmonitor to collect OS-level Rx/Tx for the Status app.

Not sure if it collects network traffic from WebViews (i.e. when browsing a dApp from within the app), though.

@adambabik If these numbers are what we need, we can add proxying data to Grafana/whatever into statusmonitor.

@adambabik
Contributor Author

Linking a proxy layer issue for statusmonitor: status-im/statusmonitor#2

@JekaMas
Contributor

JekaMas commented Jan 25, 2018

I've written a small script to compare different versions of Geth and Status on different user scenarios (still a work in progress; I need one or two more days to finish it): https://github.com/JekaMas/ether_tester

It can be used to run many containers and get stats for a given period of time (with or without sync):

cluster = Cluster(
    [
        Geth(
            eth_value="~/.ethereum/docker",
            description="Geth with Whisper service",
            init_time=20),
        Geth(
            eth_value="~/.ethereum/docker2",
            description="Geth without Whisper service",
            init_time=20)
    ],
    is_wait_sync=True, debug=False).start()

cluster.collect_stats(60, ['personal.newAccount(\"passphrase\")', 'miner.setEtherbase(\"$result0\")'])
cluster.print_stats()

@dshulyak
Contributor

Seems like there are 3 tools now to profile a cluster :)

@JekaMas
Contributor

JekaMas commented Jan 25, 2018

I think these are different tools for different issues. My point is about automated testing, not cluster health.

@ghost

ghost commented Sep 24, 2018

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@ghost ghost added the stale label Sep 24, 2018
@ghost

ghost commented Oct 1, 2018

This issue has been automatically closed. Please re-open if this issue is important to you.

@ghost ghost closed this as completed Oct 1, 2018
This issue was closed.