tracing_appender: Fix race condition for logging on shutdown #1125
Conversation
@zekisherif can you explain further the potential deadlock you mentioned in #1120 (comment)?
It's not immediately obvious to me why we can't just wait for the worker thread to join. I'm fine with moving forward with this approach, I just want to make sure I understand why it's necessary first.
If you don't mind rebasing onto the current master, #1128 should have fixed the CI failure. Thanks!
It may be unlikely to happen, but if the worker thread fails to actually stop, then the call to `join` will never complete. It seems to me that using `join` within `drop` is not good practice (but I may be wrong about this). I can do that if you're fine with it, or stick to this approach.
Under what circumstances do you think the worker thread could fail to complete?
FWIW, these changes LGTM.
And based on empirical testing I can say that they fix the issue I was seeing.
@zekisherif Thanks for working on this!
It's not clear to me specifically what could cause the worker to fail to complete outright, but since it is dealing with external storage it might be more likely that it simply does not complete fast enough. The way I read the PR, there is a policy that when dropping we try to create a short window of time in which the worker can complete its work before yanking the rug out: up to 1/10th of a second for the worker to get the memo, and then up to a whole second to complete before rug-yank time. I don't know if those two windows of time will work for everybody, generally, but they seem like okay defaults to me. I would be willing to wait longer for the memo delivery. Really, I'd be willing to wait (a lot) longer in general, on the theory that it will be fast when things are working correctly, but when things are going sideways the remaining bits of unflushed data might be the most important to have written.
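For reference, below is a minimal, self-contained sketch of that two-window policy, written against crossbeam-channel with the 100 ms / 1 s figures discussed here. The `Guard` and `Msg` names, the channel sizes, and the worker loop are illustrative stand-ins, not the crate's actual internals.

```rust
use std::thread;
use std::time::Duration;

use crossbeam_channel::{bounded, Receiver, Sender};

enum Msg {
    Line(String),
    Shutdown,
}

struct Guard {
    sender: Sender<Msg>,
    // Zero-capacity channel: a send() only succeeds once the worker recv()s.
    shutdown: Sender<()>,
}

impl Drop for Guard {
    fn drop(&mut self) {
        // Window 1: up to 100 ms for the worker to accept the shutdown message
        // (this only matters when the buffered message channel is full).
        if self
            .sender
            .send_timeout(Msg::Shutdown, Duration::from_millis(100))
            .is_ok()
        {
            // Window 2: up to 1 s for the worker to drain its queue. Because the
            // shutdown channel has zero capacity, this send completes only when
            // the worker calls recv(), i.e. after it has flushed everything.
            let _ = self.shutdown.send_timeout((), Duration::from_millis(1000));
        }
    }
}

fn spawn_worker(rx: Receiver<Msg>, shutdown_rx: Receiver<()>) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        for msg in rx {
            match msg {
                Msg::Line(line) => println!("{}", line),
                Msg::Shutdown => break,
            }
        }
        // Tell the guard that every queued message has been handled.
        let _ = shutdown_rx.recv();
    })
}

fn main() {
    let (tx, rx) = bounded(128);
    let (shutdown_tx, shutdown_rx) = bounded(0);
    let _worker = spawn_worker(rx, shutdown_rx);

    let guard = Guard { sender: tx.clone(), shutdown: shutdown_tx };
    tx.send(Msg::Line("hello".into())).unwrap();
    drop(guard); // blocks briefly (bounded by the two windows) until "hello" is flushed
}
```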
This looks good to me. I commented on a few minor style nits.
The excerpt under review (the comment text begins mid-sentence in the diff):

```rust
when the `Worker` calls `recv()` on a zero-capacity channel. Use `send_timeout`
so that drop is not blocked indefinitely.
*/
let _ = self.shutdown.send_timeout((), Duration::from_millis(1000));
```
Why 1000 ms? It might be worth having a comment explaining how this value was chosen. We might want to eventually make this configurable, but that can be done later.
I don't have a specific reason for 1000 ms. I just thought this is a reasonable amount of time to give the worker thread to call `recv()`.
It seems difficult to determine if there is an ideal value here. Making this configurable would let us pass on the decision of the correct value to the end user. I'll cut a feature request.
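As a quick standalone illustration of what the timeout buys (a sketch against crossbeam-channel, not crate code): if nothing ever calls `recv()` on the zero-capacity channel, `send_timeout` gives up after the chosen duration instead of blocking `drop` forever.

```rust
use std::time::{Duration, Instant};

use crossbeam_channel::bounded;

fn main() {
    // Keep the receiver alive but never call recv(), simulating a stuck worker.
    let (tx, _rx) = bounded::<()>(0);

    let start = Instant::now();
    let result = tx.send_timeout((), Duration::from_millis(1000));

    // Reports Err(Timeout(..)) after roughly one second.
    println!("{:?} after {:?}", result, start.elapsed());
}
```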
I was thinking more in terms of external problems affecting the behaviour of the application in some way (hardware failures, weird kernel scenarios). But at that point, you probably have bigger problems than worrying about flushing of logs.
Motivation

Fixes the race condition outlined in #1120.

Solution

`Worker` now uses a two-stage shutdown approach. The first shutdown signal is sent through the main message channel to the `Worker` from `WorkerGuard` when it is dropped. Then `WorkerGuard` sends a second signal on a second, zero-capacity channel. This means a `send()` will only succeed when a `recv()` is called on the other end, which guarantees that the `Worker` has flushed all its messages before the `WorkerGuard` can continue with its drop.

With this solution I'm no longer able to reproduce the race using the code sample provided in #1120.

Co-authored-by: Zeki Sherif <[email protected]>
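To make the zero-capacity guarantee concrete, here is a small standalone demonstration (assuming crossbeam-channel, which the appender's channels are built on): a `send()` on a `bounded(0)` channel returns only once the other side calls `recv()`, so the sending side naturally waits for a busy receiver.

```rust
use std::thread;
use std::time::{Duration, Instant};

use crossbeam_channel::bounded;

fn main() {
    // Zero-capacity (rendezvous) channel: there is no buffer to park a message in.
    let (tx, rx) = bounded::<()>(0);

    let receiver = thread::spawn(move || {
        // Stand-in for a worker that is still draining its queue.
        thread::sleep(Duration::from_millis(200));
        rx.recv().unwrap();
    });

    let start = Instant::now();
    tx.send(()).unwrap(); // completes only once the thread above calls recv()
    println!("send() returned after {:?}", start.elapsed()); // roughly 200 ms

    receiver.join().unwrap();
}
```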
Motivation

Can be thought of as a continuation to #1120 and #1125. Example with problematic racy behavior:

```
use std::io::Write;

struct TestDrop<T: Write>(T);

impl<T: Write> Drop for TestDrop<T> {
    fn drop(&mut self) {
        println!("Dropped");
    }
}

impl<T: Write> Write for TestDrop<T> {
    fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
        self.0.write(buf)
    }

    fn flush(&mut self) -> std::io::Result<()> {
        self.0.flush()
    }
}

fn main() {
    let writer = TestDrop(std::io::stdout());
    let (non_blocking, _guard) = tracing_appender::non_blocking(writer);
    tracing_subscriber::fmt().with_writer(non_blocking).init();
}
```

Running this test case in a loop with `while ./test | grep Dropped; do :; done`, it can be seen that sometimes the writer (`TestDrop`) is not dropped and the message is not printed. I suppose that proper destruction of the non-blocking writer should properly destroy the underlying writer.

Solution

The solution involves joining the `Worker` thread (which owns the writer) after waiting for it to almost finish, avoiding the potential deadlock (see #1120 (comment)).
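A rough sketch of that approach is below, under the assumption that the guard joins the worker thread only after the worker has acknowledged shutdown on the zero-capacity channel; the names and the 1 s timeout are illustrative, not the crate's actual code.

```rust
use std::thread;
use std::time::Duration;

use crossbeam_channel::{bounded, Sender};

struct Guard {
    shutdown: Sender<()>,
    handle: Option<thread::JoinHandle<()>>,
}

impl Drop for Guard {
    fn drop(&mut self) {
        // Wait (bounded) for the worker to acknowledge shutdown.
        let acknowledged = self
            .shutdown
            .send_timeout((), Duration::from_millis(1000))
            .is_ok();
        if acknowledged {
            // The worker is about to exit, so join returns promptly and the
            // writer it owns gets dropped (running e.g. TestDrop's Drop impl)
            // before the guard's drop returns.
            if let Some(handle) = self.handle.take() {
                let _ = handle.join();
            }
        }
        // If the worker never acknowledged, skip the join rather than risk
        // hanging forever, which was the deadlock concern discussed above.
    }
}

fn main() {
    let (shutdown_tx, shutdown_rx) = bounded::<()>(0);
    let handle = thread::spawn(move || {
        // Stand-in for the worker: acknowledge shutdown, then drop the writer.
        let _ = shutdown_rx.recv();
        println!("Dropped"); // stand-in for the underlying writer's Drop impl
    });

    drop(Guard { shutdown: shutdown_tx, handle: Some(handle) });
}
```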
Changed

- **non_blocking**: Updated `crossbeam-channel` dependency to 0.5 (tokio-rs#1031)

Fixed

- **non_blocking**: Fixed a race condition when logging on shutdown (tokio-rs#1125)
- Several documentation improvements (tokio-rs#1109, tokio-rs#1110, tokio-rs#941, tokio-rs#953)