tokio_threadpool::blocking locking up tasks via NotReady #726
Comments
@stjepang Interested in your thoughts.
@sfackler you mean
Er, yeah.
I believe I have discovered a bug.

Background

The threadpool allows up to N tasks to be in a blocking state concurrently. This is implemented using a semaphore, and the semaphore implementation was done directly in the guts of the threadpool. As part of an effort to ensure correctness of Tokio's various components, I have been working on a (brute force) concurrency checker. It is still in an early state, but can be found here. I extracted the semaphore component and have been testing it in isolation. The longer term goal is to break up the threadpool implementation to make it more manageable. Doing so has exposed (I believe) the bug.

Semaphore

The future-aware semaphore roughly works by keeping a queue of permits. When a task releases a permit, it is assigned to any pending task. Each task struct has a flag tracking whether or not it currently holds the permit.

The bug

The bug arises from a conflation of a few factors.

The logic behind not checking whether the permit was assigned is that it is already checked once before entering the task: if the permit is assigned while the task is running, the task should run again and consume the permit the second time around. The problem is that the permit is consumed before the task is transitioned to the running state. This is where the race arises:
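As a rough illustration only (a minimal sketch with hypothetical types and flag names, not the actual tokio-threadpool code), the problematic ordering looks something like this:

```rust
// Hypothetical sketch of the ordering problem described above: the permit is
// consumed before the task is marked as running, leaving a window in which a
// concurrent permit assignment / state check can observe stale state.
use std::sync::atomic::{AtomicBool, Ordering};

struct Task {
    running: AtomicBool,
    has_permit: AtomicBool,
}

impl Task {
    // Ordering as described in the bug: consume the permit, *then* mark the
    // task as running. Anything that inspects the task between the two steps
    // sees a task that holds no permit and is not running, even though it is
    // about to run.
    fn enter_buggy(&self) -> bool {
        let had_permit = self.has_permit.swap(false, Ordering::SeqCst);
        self.running.store(true, Ordering::SeqCst);
        had_permit
    }

    // The same steps with the window closed, for comparison: mark the task as
    // running first, then consume the permit.
    fn enter_fixed(&self) -> bool {
        self.running.store(true, Ordering::SeqCst);
        self.has_permit.swap(false, Ordering::SeqCst)
    }
}
```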
Feels like a 🐛
cc @stjepang
Ideally, the way to solve this is to rip out the semaphore impl from threadpool in favor of the one being worked on in syncbox, which should have a saner API.
@sfackler Have you been able to confirm that this is fixed?
Oh, thanks for the reminder. I'll cut a new release of our internal library that switches back to blocking and see how it goes.
@sfackler Just a ping in case there's been any news on this issue since last time.
I just got around to switching our implementation back over to using blocking - it should roll out over the next week or so.
This has been running without issue for a couple of days now, so it seems fixed!
Yay!
Sad update: we saw what looks like this issue happen again 😢
Oh no :-(. Are we sure it's blocking-related? If you run with the threadpool but don't use the blocking fn, do things work fine?
Yeah - we've been using futures-cpupool for DNS for the last couple of months, and switched back to blocking a week or so ago. Same symptoms as before I think, though I didn't go in and turn on trace logging/etc. Rather than just rolling back off of blocking entirely, I think I'll add some instrumentation (e.g. a gauge tracking how many threads are actually in the blocking closure) and some logic to fall back to a futures-cpupool if we wait more than N seconds for a blocking thread slot.
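The gauge part is simple enough to sketch; something like this (a minimal sketch with a hypothetical IN_BLOCKING counter and helper, assuming the futures 0.1 style tokio_threadpool::blocking API; metrics export and the futures-cpupool fallback are omitted):

```rust
// Minimal sketch: count how many threads are currently inside the blocking
// closure so the value can be exported as a gauge. Hypothetical names; not
// tied to any particular metrics library.
use std::sync::atomic::{AtomicUsize, Ordering};

use futures::Poll;
use tokio_threadpool::{blocking, BlockingError};

// Number of threads currently executing inside the blocking closure.
static IN_BLOCKING: AtomicUsize = AtomicUsize::new(0);

fn instrumented_blocking<F, T>(f: F) -> Poll<T, BlockingError>
where
    F: FnOnce() -> T,
{
    blocking(|| {
        // The closure only runs once a blocking slot has been acquired, so
        // the counter reflects threads actually doing blocking work.
        IN_BLOCKING.fetch_add(1, Ordering::SeqCst);
        let result = f();
        // Not panic-safe, which is fine for a rough gauge.
        IN_BLOCKING.fetch_sub(1, Ordering::SeqCst);
        result
    })
}
```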
This is rewritten. |
Version
Platform
Linux 3.10.0-862.3.2.el7.x86_64 #1 SMP Mon May 21 23:36:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Subcrates
tokio-threadpool
Description
This is a continuation/revival of #525. At the time we thought it was introduced by upgrading from tokio-threadpool 0.1.4 to 0.1.5, but didn't know much more than that. The issue has popped back up for us, and I believe we've narrowed it down to the use of tokio_threadpool::blocking rather than the version of tokio-threadpool. Specifically, we were switching from a futures_cpupool to tokio_threadpool::blocking for DNS resolution in our HTTP client, and we still saw the issue after upgrading tokio-threadpool all the way to the newest release.

The bad state we encounter is that at some point under high load, blocking calls start returning WouldBlock even though no blocking work is actually being performed. Trace-level logging just shows this:

We use a custom Hyper Connect implementation, and the first async operation it performs is this:

Since we never see that debug log message, it seems like the WouldBlock logged by tokio_threadpool is coming from the blocking call itself. Thread dumps indicate that there aren't any threads blocked in getaddrinfo, so it seems like the threadpool is somehow losing track of how many outstanding blocking calls are in flight.

Unfortunately, I've been completely unable to reproduce this anywhere other than production :(
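For context, here is a rough sketch (with a hypothetical resolve helper, not our actual Connect implementation) of the usual futures 0.1 pattern for running getaddrinfo-style resolution through tokio_threadpool::blocking; the NotReady branch is the state we appear to get stuck in:

```rust
// Illustrative only: a hypothetical DNS-resolution future built on
// tokio_threadpool::blocking with futures 0.1.
use std::io;
use std::net::{SocketAddr, ToSocketAddrs};

use futures::{future, Async, Future, Poll};
use tokio_threadpool::blocking;

fn resolve(host: String, port: u16) -> impl Future<Item = Vec<SocketAddr>, Error = io::Error> {
    future::poll_fn(move || -> Poll<Vec<SocketAddr>, io::Error> {
        match blocking(|| (host.as_str(), port).to_socket_addrs()) {
            // A blocking slot was available and the lookup ran.
            Ok(Async::Ready(Ok(addrs))) => Ok(Async::Ready(addrs.collect())),
            Ok(Async::Ready(Err(e))) => Err(e),
            // No blocking slot is free yet; the task will be notified later.
            // This is the WouldBlock/NotReady path that never resolves once
            // the runtime is in the bad state.
            Ok(Async::NotReady) => Ok(Async::NotReady),
            // blocking() was called outside a threadpool that supports it.
            Err(e) => Err(io::Error::new(io::ErrorKind::Other, e)),
        }
    })
}
```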