async-std support with trait objects #1346
Conversation
(force-pushed from b08772e to f8fbd8e)
Surprisingly, using the async-std/async-io based UdpSocket and Timer code (still running under the tokio runtime otherwise) leads to better performance in the bulk bench, even though it spawns a background thread. (bench results)
We could also try to choose the correct runtime for the user:

```rust
// Prefer tokio when its feature is enabled and a tokio runtime is running;
// otherwise fall back to async-std if available.
pub fn runtime() -> Box<dyn Runtime> {
    #[cfg(feature = "runtime-tokio")]
    {
        if tokio::runtime::Handle::try_current().is_ok() {
            return Box::new(TokioRuntime);
        }
    }
    #[cfg(feature = "runtime-async-std")]
    {
        return Box::new(AsyncStdRuntime);
    }
    panic!("no usable runtime found");
}
```

But I don't know: should the user still be able to override this?
Interesting! How consistent is that across runs?
I actually really like the look of that helper, though I'm not sure it should be exposed directly. Let's use that to avoid complicating the […]
Thanks for your continued work on this! I'm liking how it's shaping up.
I do wonder if we could refactor quinn-udp to be plain stupid-simple helper functions on RawFd/RawHandle and isolate all the runtime abstraction into the quinn crate. I don't think this is a hard blocker for me, though.
quinn/examples/client-async-std.rs (outdated)
//! This example demonstrates an HTTP client that requests files from a server.
I'm not sure the fairly trivial differences justify the maintenance cost of completely duplicating the client/server examples. Maybe a single dedicated, simpler example that just establishes a connection?
I also thought about that. I needed a way to test whether everything worked correctly, but you're right, a simpler example would also work for that.
After trying a few more things, I think it comes down to the number of threads the load is distributed across. When using the single-threaded tokio executor (as the bench does), everything is bottlenecked on that single thread, and off-loading work to another thread takes load off the executor thread.
Sounds good! I will implement that later.
(The close was accidental; I pressed the wrong button.)
Interesting indeed; I wouldn't have guessed that there was any opportunity for parallelism there. Good investigation!
@yu-re-ka This still seems to have a hard dependency on the tokio crate. Is that intended?
Yes, that was the suggestion from the maintainers.
I tested that it only builds the sync feature of the tokio crate, so you don't get a dependency on any of the runtime/executor code.
(force-pushed from 7d61ab1 to 229f3de)
I have removed the runtime parameter from Endpoint::client, Endpoint::server (these now use the default mechanism described earlier), and Endpoint::connect, Endpoint::connect_with (these use the runtime the endpoint was created with).
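For illustration, a hedged sketch of what a call site might look like after this change; the exact signatures are assumptions based on the description above, and the addresses and function name are placeholders:

```rust
use std::net::SocketAddr;

// Sketch only: assumes the Endpoint API as described in this thread.
async fn connect_example(server_addr: SocketAddr) -> Result<(), Box<dyn std::error::Error>> {
    // No runtime parameter: the default-detection mechanism described
    // earlier picks tokio or async-std automatically.
    let endpoint = quinn::Endpoint::client("0.0.0.0:0".parse()?)?;

    // connect reuses whichever runtime the endpoint was created with.
    let connection = endpoint.connect(server_addr, "example.com")?.await?;
    drop(connection);
    Ok(())
}
```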
(force-pushed from ca3307f to c12d09c)
(force-pushed from c12d09c to 76e77de)
Brilliant work
```rust
ready!(self.io.poll_write_ready(cx))?;
match send(state, self.io.get_ref(), last_send_error, transmits) {
    Err(e) if e.kind() == io::ErrorKind::WouldBlock => {
        self.io.clear_write_ready(cx);
```
What if the socket becomes ready in between the calls to poll_write_ready and clear_write_ready? Could this miss a wakeup? Similarly for reads.
First, clear_write_ready is a no-op on all implementations but tokio+unix (using AsyncFd), so we are only talking about that platform now. async-io doesn't require (or have a way of) clearing readiness events, and tokio's UdpSocket automatically clears readiness events for us in the call to try_recv_from; it also has no way to do that manually.
The code here follows the instructions from the tokio documentation:
"Indicates to tokio that the file descriptor is no longer ready. The internal readiness flag will be cleared, and tokio will wait for the next edge-triggered readiness notification from the OS.
It is critical that this function not be called unless your code actually observes that the file descriptor is not ready. Do not call it simply because, for example, a read succeeded; it should be called when a read is observed to block."
When we observe that the file descriptor is not ready (the I/O operation returns WouldBlock), we clear the readiness status.
I don't know what the alternative would be; this is exactly how the tokio documentation describes it and how tokio does it internally.
I can't really say that the concern you are raising is unjustified, but if it applies here, it would also apply to the previous code using the tokio ReadyGuard.
Also note that as long as we don't clear the readiness event, polling again will return ready right away; that's the whole point of clearing readiness events: the readiness state is stored and returned unchanged until it is cleared. The code for that is here:
https://docs.rs/tokio/latest/src/tokio/io/driver/scheduled_io.rs.html#284
So it should not be a problem that we poll again before clearing the readiness event, which is the only difference between this code and the previous version.
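For reference, tokio's guard-based API wraps this same contract. A minimal sketch (tokio + unix only, assuming a connected, non-blocking std UdpSocket; the function name is illustrative):

```rust
use std::io;
use std::net::UdpSocket;

use tokio::io::unix::AsyncFd;

// Send on a non-blocking, connected UDP socket wrapped in AsyncFd.
// `guard.try_io` runs the closure and, if it returns WouldBlock, clears the
// readiness flag for us. That matches the "only clear when a syscall actually
// blocks" contract quoted from the tokio docs above.
async fn send_datagram(socket: &AsyncFd<UdpSocket>, buf: &[u8]) -> io::Result<usize> {
    loop {
        // Wait until tokio believes the fd is writable (edge-triggered).
        let mut guard = socket.writable().await?;
        match guard.try_io(|inner| inner.get_ref().send(buf)) {
            Ok(result) => return result,
            // The readiness flag was stale; it has been cleared, so loop
            // and wait for the next readiness notification from the OS.
            Err(_would_block) => continue,
        }
    }
}
```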
Thanks; I'll have to study this in some detail...
Your concern is indeed very justified:
microsoft/msquic#2654
Discussed with upstream tokio; this will miss wakeups. In particular, the implementation of clear_write_ready in terms of poll_write_ready will cause readiness events that occur after the initial poll_write_ready to be lost.
Our basic problem is that AsyncUdpSocket is trying to be too low-level. Rather than trying to make the methods of quinn_udp::UdpSocket abstract over the underlying async primitive, let's try converting the (already object-safe) quinn_udp::UdpSocket itself into a trait object, with separate implementations for tokio and async-std. Those separate implementations should then call into shared code to do the OS-specific heavy lifting. We can then hoist the trait machinery up into the quinn crate and leave the shared OS-specific heavy-lifting code in quinn_udp, concentrating awareness of async runtimes in one place.
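To make the proposed shape concrete, here is a rough, hypothetical sketch; the trait and method names are illustrative assumptions, not the final quinn API:

```rust
use std::{
    io,
    task::{Context, Poll},
};

/// Placeholder for the real quinn_udp transmit type.
pub struct Transmit;

/// Object-safe socket abstraction: one implementation per runtime
/// (tokio, async-std), each delegating to shared OS-specific code
/// in quinn_udp so runtime awareness lives in one place.
pub trait AsyncUdpSocket: Send {
    fn poll_send(
        &mut self,
        cx: &mut Context<'_>,
        transmits: &[Transmit],
    ) -> Poll<io::Result<usize>>;

    fn poll_recv(
        &mut self,
        cx: &mut Context<'_>,
        bufs: &mut [io::IoSliceMut<'_>],
    ) -> Poll<io::Result<usize>>;
}
```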
I've implemented the approach I outlined above in #1364.
(force-pushed from 20f3bb2 to a9a28fa)
Rebased, resolved conflicts
This is looking pretty good, thanks for working on it!
Some comments, mostly stylistic concerns; hope that's okay.
use super::{AsyncTimer, Runtime};
Style nit: please split the imports as mentioned elsewhere.
I can't infer from the other comments how you would like the imports to be structured. Are there style guidelines where I can read up on this?
Our convention is to split imports into three blocks: std, followed by external crates, followed by the local crate. We don't currently have a formal style guide written, though we should; thanks for bearing with us in the meantime.
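As a hypothetical illustration of that convention (the specific items imported here are just examples):

```rust
// 1. std imports
use std::sync::Arc;

// 2. external crates
use tokio::sync::Mutex;

// 3. the local crate
use crate::{AsyncTimer, Runtime};
```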
(force-pushed from 633f011 to 698290a)
I can see from your comments that you would like things to be changed somehow, but there are many ways I could interpret them. I have just chosen one option, so you will have to leave new comments if my interpretations did not match your intent, or send me a link to a guideline I can follow.
Why was this closed?
I don't have the motivation to refactor this again. Anyone should feel free to open a new PR and use these changes if they're helpful.
This was very close to completion, so I took it the last few steps in #1364. |
Alternative to #1345