[Question] stream write data slow #432
So, you're writing a tiny amount each time and then pausing for a second. The data is probably still sitting in a buffer somewhere when you stop the clock. The difference between libp2p and a raw TCP socket is that libp2p will encrypt and multiplex your data; this will add significant overhead over just dumping to a buffer somewhere. Try measuring the bandwidth of a sustained read. That is, have one node continuously write to a remote node and then measure the bandwidth on the remote node. Libp2p will still be slower (we have quite a bit more overhead over a simple TCP socket and haven't spent enough time optimizing our stack) but it shouldn't be that slow.
@Stebalien Thanks for your response! I have followed your advice and ran more tests.
Here's the result:
Second, I measured the bandwidth of a sustained read as you suggested. I kept the client node continuously writing to the server node and measured the time interval after reading every 3 MB of data.
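A simplified sketch of that reader-side measurement (not my exact test code, just the general shape; the listen address and report window are placeholders):

```go
package main

import (
	"io"
	"log"
	"net"
	"time"
)

// measureRead reads continuously from any io.Reader (a net.Conn or a
// libp2p network.Stream) and logs the elapsed time for every 3 MB received.
func measureRead(r io.Reader) {
	const window = 3 * 1024 * 1024 // report every 3 MB, matching the packet size
	buf := make([]byte, 64*1024)
	received := 0
	start := time.Now()
	for {
		n, err := r.Read(buf)
		if err != nil {
			return
		}
		received += n
		if received >= window {
			elapsed := time.Since(start)
			log.Printf("read %d bytes in %v (%.2f MB/s)",
				received, elapsed, float64(received)/1e6/elapsed.Seconds())
			received = 0
			start = time.Now()
		}
	}
}

func main() {
	// Server side: accept one connection and measure sustained reads.
	ln, err := net.Listen("tcp", ":9000")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := ln.Accept()
	if err != nil {
		log.Fatal(err)
	}
	measureRead(conn)
}
```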
libp2p is much slower, and the test result matches my production environment (one packet is about 3 MB and the latency is 1-2 seconds). It seems that the multiplexing has a performance cost. If that's true, what's the benefit of multiplexing and how can I make full use of it? Can you give me some advice on accelerating the data transmission speed?
It allows one to have multiple streams between two endpoints without having their congestion control algorithms fighting each other. In this case, it won't help you. This may be ipfs/kubo#4280 but it may also be an issue with the congestion control built into the default stream muxer. Try using mplex instead of yamux.
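Something along these lines (a rough sketch; the exact option signature and import paths depend on your go-libp2p version, so adapt as needed):

```go
package main

import (
	"log"

	"github.com/libp2p/go-libp2p"
	mplex "github.com/libp2p/go-libp2p-mplex"
)

func main() {
	// Passing an explicit Muxer option replaces the default stream
	// multiplexer set, so only mplex is negotiated.
	host, err := libp2p.New(
		libp2p.Muxer("/mplex/6.7.0", mplex.DefaultTransport),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer host.Close()
	log.Println("host with mplex only:", host.ID())
}
```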
This will override the default multiplexers and disable yamux. Mplex is significantly simpler so it may perform better in this instance. If it does, we can see what we can do to improve yamux. Thanks for taking the time to look into this.
Mplex's performance is very close to that of the raw TCP socket. I've replaced the default muxer with it in my project and the service is now running smoothly. Thank you very much for the big help! libp2p is really great work, with which people like me can build a distributed system efficiently without worrying about the annoying network problems. I wish libp2p all the best!
This definitely suggests that we should put some effort into measuring and optimizing yamux.
Closing as resolved in favor of #435.
Hi,
I'm working on a project using go-libp2p and I've found that the speed of data transmission between nodes is very slow. My service is deployed on AWS in different regions. I have measured the network bandwidth and latency with tools such as iperf and found nothing wrong. So I wrote some code to test directly.
Here's the test code using the standard library:
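Roughly, it looks like this (simplified sketch; in the real test the reader runs on the remote node, and the address and loop count here are placeholders):

```go
package main

import (
	"io"
	"log"
	"net"
	"time"
)

const payload = 3 * 1024 * 1024 // 3 MB per write, like my production packets

func main() {
	// In the real test the reader runs on a remote node; a local goroutine
	// that discards everything keeps this sketch self-contained.
	ln, err := net.Listen("tcp", "127.0.0.1:9001")
	if err != nil {
		log.Fatal(err)
	}
	go func() {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		io.Copy(io.Discard, conn)
	}()

	conn, err := net.Dial("tcp", "127.0.0.1:9001")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	buf := make([]byte, payload)
	for i := 0; i < 10; i++ {
		start := time.Now()
		if _, err := conn.Write(buf); err != nil {
			log.Fatal(err)
		}
		// Note: this measures how long Write takes to hand the data to the
		// kernel, not how long the remote side takes to receive it.
		log.Printf("wrote %d bytes in %v", payload, time.Since(start))
	}
}
```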
The output is as follows, and the speed seems very fast.
Here's the test code using go-libp2p:
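And the go-libp2p version is roughly like this (simplified sketch; the import paths match recent go-libp2p releases, and the "/bench/1.0.0" protocol ID is just a placeholder):

```go
package main

import (
	"context"
	"io"
	"log"
	"time"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/peer"
)

const payload = 3 * 1024 * 1024 // 3 MB per write, like my production packets

func main() {
	ctx := context.Background()

	// Two in-process hosts stand in for the two AWS nodes.
	server, err := libp2p.New()
	if err != nil {
		log.Fatal(err)
	}
	client, err := libp2p.New()
	if err != nil {
		log.Fatal(err)
	}

	// Server side: discard everything received on the test protocol.
	server.SetStreamHandler("/bench/1.0.0", func(s network.Stream) {
		io.Copy(io.Discard, s)
	})

	// Client side: connect, open a stream, and time each 3 MB write.
	err = client.Connect(ctx, peer.AddrInfo{ID: server.ID(), Addrs: server.Addrs()})
	if err != nil {
		log.Fatal(err)
	}
	s, err := client.NewStream(ctx, server.ID(), "/bench/1.0.0")
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	buf := make([]byte, payload)
	for i := 0; i < 10; i++ {
		start := time.Now()
		if _, err := s.Write(buf); err != nil {
			log.Fatal(err)
		}
		// As with the TCP test, this measures how fast Write accepts the
		// data, not when the remote side finishes reading it.
		log.Printf("wrote %d bytes in %v", payload, time.Since(start))
	}
}
```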
The output is as follows:
As you can see, the second test code's performance is much worse.
Maybe I did something wrong or missed something. I'd appreciate your help!