This repository has been archived by the owner on Nov 7, 2019. It is now read-only.

TcpStream::connect timeout settings #98

Open
ragona opened this issue Jul 21, 2019 · 6 comments

@ragona

ragona commented Jul 21, 2019

Hi there,

I read through some of the other issues on timers in romio, but my use case seems slightly different from changing the underlying mio/net2 settings. I want to open a large number of TCP connections to different hosts, which may or may not respond. To prevent an excessive number of open files on the client, I want to configure aggressive connection timeouts so I can prune connections that don't respond quickly.

I'd be happy to try to submit a PR to add a Builder object or something to make TcpStream more configurable, but I'm not sure where I should be looking for an example of this. (I'm still digging into the Rust async/await story, and I'm not experienced with it yet.)

Here's a brief snippet that shows the code I'm using -- I appear to be hanging onto failed connections for quite some time, and if there's an alternate way to fix this I'd be happy to try that instead.

fn main() -> io::Result<()> {
    let delay = 1e9 as u64 / REQUESTS_PER_SECOND;

    executor::block_on(async {

        for _ in 0..TOTAL_REQUESTS {
            juliex::spawn(async move {
                let addr = random_addr(80);
                match TcpStream::connect(addr).await {
                    Ok(mut stream) => {
                        stream.write_all(&REQUEST)
                            .await
                            .expect("Failed to write to socket")

                        stream.close()
                            .await
                            .expect("Failed to close socket");

                        println!("Success: {:?}", &addr);
                    }
                    Err(e) => {
                        eprintln!("Failed to connect: '{}'", e);
                    }
                }

            });
        }
    });

    Ok(())
}

Thanks!
Ryan

@yoshuawuyts
Collaborator

Hey Ryan, thanks for opening this issue! In general romio tries to follow the stdlib's APIs, so having equivalents of TcpStream::set_read_timeout and TcpStream::set_write_timeout would be very welcome additions. A patch for this would def be fantastic!
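
For reference, this is roughly the std surface we'd be mirroring -- both setters exist on std::net::TcpStream today, and the address below is just a placeholder:

use std::net::TcpStream;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    // std::net::TcpStream already exposes these; the idea would be to offer
    // the same surface on romio's TcpStream.
    let stream = TcpStream::connect("127.0.0.1:80")?;
    stream.set_read_timeout(Some(Duration::from_millis(500)))?;
    stream.set_write_timeout(Some(Duration::from_millis(500)))?;
    Ok(())
}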

@ragona
Author

ragona commented Jul 30, 2019

Cool, thanks Yoshua. I'll take a look.

@ragona
Author

ragona commented Jul 31, 2019

Hey @yoshuawuyts, I've spent a bit of time looking into this, and what I'm seeing is that all of the similar methods in mio (set_nodelay, for example) live in the mio/sys/<platform>/tcp implementations, where they call the corresponding method on the inner std::net::TcpStream. The best place to add set_read_timeout seems to be mio itself, but I suspect there's a reason it isn't in there.

The following doesn't actually set the underlying socket options, but I wonder if a simple example showing how to do this with futures-timer might be better for many users. It seems to work based on some quick testing, but since you're a contributor to both libraries, I'd love your advice on whether this would be a good pattern.

use std::io::ErrorKind;
use std::time::Duration;
use romio::TcpStream;
use futures::executor;
use futures_timer::TryFutureExt;

executor::block_on(async {
    let stream = TcpStream::connect(&HOST.parse().unwrap())
        .timeout(Duration::from_millis(500))
        .await;

    match stream {
        Ok(s) => {
            println!("it worked: {:?}", s);
        },
        Err(ref e) if e.kind() == ErrorKind::TimedOut => {
            println!("timed out");
        },
        Err(e) => {
            println!("some other type of error: {}", e);
        },
    }
});

@yoshuawuyts
Collaborator

> but I suspect there's a reason it isn't in there.

There's a non-zero chance that the reason it isn't in there is simply that nobody bothered to add it before. For example, the Windows IntoRawHandle / IntoRawSocket extensions are really common, but weren't added until recently because nobody had gotten around to it. They're set to go out in a new release soon.

I see three paths forward:

  • patch mio, and propagate the changes to romio once it's available
  • add the timeout functionality to romio using futures-timer (like you suggested)
  • add the timeout functionality to runtime, using whichever timer is available on the platform

To be fair, the futures-timer implementation isn't the best, so I'm a bit hesitant to make it a hard dependency of romio. Of all the paths forward, adding it to mio seems like the cleanest, but that will also take a while to propagate. So perhaps in the meantime we could move up one level and extend runtime with this functionality using the provided timers?
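
To make that last option a bit more concrete, here's a rough sketch of the kind of generic helper I'm imagining -- with_timeout is a made-up name, and it leans on futures-timer's Delay plus futures' select purely for illustration; a runtime version would swap in whatever timer the platform provides:

use std::future::Future;
use std::io;
use std::time::Duration;

use futures::future::{self, Either};
use futures::pin_mut;
use futures_timer::Delay;

// Hypothetical helper: race any io-returning future against a timer, and map
// a fired timer to io::ErrorKind::TimedOut. The same wrapper would cover
// connect, read, and write futures alike.
async fn with_timeout<F, T>(fut: F, dur: Duration) -> io::Result<T>
where
    F: Future<Output = io::Result<T>>,
{
    let timer = Delay::new(dur);
    pin_mut!(fut);
    pin_mut!(timer);

    match future::select(fut, timer).await {
        // The operation finished first; pass its result through.
        Either::Left((res, _timer)) => res,
        // The timer fired first; surface it as a TimedOut io::Error.
        Either::Right((_elapsed, _fut)) => Err(io::Error::new(
            io::ErrorKind::TimedOut,
            "operation timed out",
        )),
    }
}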

What do you think?

@ragona
Author

ragona commented Aug 2, 2019

I went ahead and implemented this on the mio side of things, but it doesn't appear to work. And actually, the more I think about it, the less sure I am what I'd expect it to do here. Since the sockets are non-blocking, the operations always return immediately, even if they return with zero bytes read/written. A bit of additional Googling suggests that non-blocking sockets simply don't honor read/write timeouts, which makes sense. Even if a write buffer fills up, the operation immediately returns a WouldBlock error rather than waiting on a timeout.
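
To illustrate what I mean, here's a tiny std-only example (it assumes something is listening on 127.0.0.1:8080 but never sends any data): once the socket is non-blocking, the read returns WouldBlock immediately and the configured read timeout never kicks in.

use std::io::{ErrorKind, Read};
use std::net::TcpStream;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    let mut stream = TcpStream::connect("127.0.0.1:8080")?;

    // The timeout is set, but it only applies to blocking reads.
    stream.set_read_timeout(Some(Duration::from_secs(5)))?;
    stream.set_nonblocking(true)?;

    let mut buf = [0u8; 1024];
    match stream.read(&mut buf) {
        Err(e) if e.kind() == ErrorKind::WouldBlock => {
            println!("returned immediately with WouldBlock; the timeout never applied");
        }
        other => println!("read result: {:?}", other),
    }
    Ok(())
}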

Even if it did work, this doesn't help romio out with what I think is probably the most frustrating part -- gracefully timing out the SYN phase in TcpStream::connect. What I've seen is that a standard kernel config takes something like 180 seconds to time out, due to the default of ~6 retries in /proc/sys/net/ipv4/tcp_syn_retries. Even if you pop that down to 1 it's still ~30s, so this is where I think an easy timeout would be most helpful. And really, I'd love the syntax to be the same for timing out a connection, a read, and a write, which leads me to think that your third suggestion is probably the best one.
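
In the meantime, the futures-timer adaptor does give me that uniform syntax. Here's roughly what I'm doing now -- fetch and the request constant are placeholders, and I'm assuming TryFutureExt's .timeout() also applies to the write future since its error type is io::Error, though I haven't verified that:

use std::io;
use std::net::SocketAddr;
use std::time::Duration;

use futures::io::AsyncWriteExt;
use futures_timer::TryFutureExt;
use romio::TcpStream;

// Placeholder request body; the real code sends a full HTTP request.
const REQUEST: &[u8] = b"GET / HTTP/1.1\r\n\r\n";

async fn fetch(addr: SocketAddr) -> io::Result<()> {
    // Same .timeout(..) adaptor on the connect future...
    let mut stream = TcpStream::connect(&addr)
        .timeout(Duration::from_millis(500))
        .await?;

    // ...and on the write future (a read would look the same).
    stream
        .write_all(REQUEST)
        .timeout(Duration::from_millis(500))
        .await?;

    Ok(())
}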

I'm not familiar with runtime, and I don't see it in the dependencies for romio. futures-timer will probably be sufficient for my needs at the moment, but I really would love a precision delay timer at some point, so if there's a way I could contribute to that effort I'd be happy to do some additional poking around.

Let me know!

Best,
Ryan

@ragona
Author

ragona commented Aug 10, 2019

As a bit of an epilogue to this issue: using a timeout via futures-timer provided exactly the functionality I wanted. I was pretty happy with the performance of my application even with these timeouts, but futures-timer spins up a global thread that sends out notifications using sync primitives, and that was the only place in my application with any std::sync usage. The rest of the application uses a LocalPool executor for better performance on a network-IO-heavy workload, so the two don't line up well. After some profiling I removed the futures-timer timeouts, and my top throughput nearly doubled. However, I now run the risk of servers that never send an EOF blocking a read_to_end call for far longer than I want, so it's a tradeoff!

Turns out this is a really tricky problem, and the more you dig into it the more weird edge cases you find! At some point I may attempt to add a thread-local futures timer, but for now I've just moved the timeout to an optional configuration.

(Feel free to close this issue; not sure what your strategy is for closing pending issues without a clear resolution like this.)
