Acquire fewer locks in TaskRunner #8394
base: master
Conversation
Previously each run did this:
- acquire a lock to take a task
- acquire a lock to finish a task
- if crashed, acquire a lock to start a new thread

So to run 10 tasks without any crashes, we'd acquire the lock 20 times.

With this update, we do this:
- acquire a lock to take the first task
- acquire a lock to release task N and take task N + 1

So to run 10 tasks without any crashes, we now acquire the lock 11 times.
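The lock-count arithmetic above can be sketched with a simplified worker loop. This is a standalone illustration, not OkHttp's actual TaskRunner: the `withLock` counter and the `oldScheme`/`newScheme` helpers are hypothetical names. The key point is that the new scheme merges "finish task N" and "take task N + 1" into one acquisition, so N tasks cost N + 1 acquisitions instead of 2N.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

public class LockCountSketch {
  private final ReentrantLock lock = new ReentrantLock();
  private int acquisitions = 0;

  // Runs body while holding the lock and counts each acquisition.
  private <T> T withLock(Supplier<T> body) {
    lock.lock();
    acquisitions++;
    try {
      return body.get();
    } finally {
      lock.unlock();
    }
  }

  /** Old scheme: one acquisition to take each task, another to finish it. */
  static int oldScheme(int tasks) {
    LockCountSketch s = new LockCountSketch();
    Queue<Runnable> queue = new ArrayDeque<>();
    for (int i = 0; i < tasks; i++) queue.add(() -> {});
    for (int i = 0; i < tasks; i++) {
      Runnable task = s.withLock(queue::poll); // acquire to take a task
      task.run();                              // run outside the lock
      s.withLock(() -> null);                  // acquire again to finish it
    }
    return s.acquisitions;                     // 2 * tasks
  }

  /** New scheme: finishing task N and taking task N + 1 share one acquisition. */
  static int newScheme(int tasks) {
    LockCountSketch s = new LockCountSketch();
    Queue<Runnable> queue = new ArrayDeque<>();
    for (int i = 0; i < tasks; i++) queue.add(() -> {});
    Runnable task = s.withLock(queue::poll);   // acquire to take the first task
    while (task != null) {
      task.run();                              // run outside the lock
      task = s.withLock(queue::poll);          // finish N and take N + 1 together
    }
    return s.acquisitions;                     // tasks + 1
  }

  public static void main(String[] args) {
    System.out.println("old scheme: " + oldScheme(10) + " acquisitions"); // 20
    System.out.println("new scheme: " + newScheme(10) + " acquisitions"); // 11
  }
}
```

For 10 tasks this prints 20 acquisitions under the old scheme and 11 under the new one, matching the counts in the description (the final acquisition in the new scheme finds the queue empty and lets the worker exit).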
lock.withLock {
  afterRun(task, delayNanos)
}
currentThread.name = oldName
We also change the thread name fewer times!
"FINE: Q10000 finished run in 0 µs: task",
"FINE: Q10000 run again after 50 µs: task",
I like this new order better
+1, more logical
Does this failure need an update?
I think there’s non-determinism that I need to fix in that test, as I can’t reproduce that failure locally! Lemme do that first, then rebase this on top.
I am confused about what the … If so, I think the idleAtNs of the added connection should be Long.MAX_VALUE - 101 instead of the default value, Long.MAX_VALUE.
This default value will make the connection ignored when cleanupTask tries to figure out whether there are any connections remaining to be closed. Consequently, cleanupTask won't be added back to the cleanupQueue, and the cleanupQueue won't be added back to the readyQueue after its initial execution; activeQueue is empty even though we did not interrupt threads.
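To see why the Long.MAX_VALUE default would hide a connection from cleanup, here is a simplified sketch of an idle-eviction pass that measures idleness as nowNs - idleAtNs. The `longestIdleNs` helper is hypothetical and not OkHttp's actual RealConnectionPool code: it only illustrates that a Long.MAX_VALUE sentinel yields a hugely negative idle duration, so that connection never looks eligible for eviction.

```java
import java.util.List;

public class IdleSketch {
  // Returns the longest idle duration among the given idleAtNs timestamps,
  // computed the way an idle-cleanup pass might: nowNs - idleAtNs.
  static long longestIdleNs(long nowNs, List<Long> idleAtNs) {
    long longest = Long.MIN_VALUE;
    for (long idleAt : idleAtNs) {
      // For a Long.MAX_VALUE sentinel this difference is hugely negative,
      // so the connection can never exceed a keep-alive threshold.
      long idleDuration = nowNs - idleAt;
      if (idleDuration > longest) longest = idleDuration;
    }
    return longest;
  }

  public static void main(String[] args) {
    long nowNs = 1_000_000L;
    // A connection marked idle 100 ns ago reads as 100 ns idle...
    System.out.println(longestIdleNs(nowNs, List.of(nowNs - 100)));
    // ...while one carrying the Long.MAX_VALUE default reads as negative,
    // i.e. effectively invisible to cleanup.
    System.out.println(longestIdleNs(nowNs, List.of(Long.MAX_VALUE)) < 0);
  }
}
```

This matches the concern above: a test connection left at the default idleAtNs would never trigger another cleanup pass, so the cleanup task drains out of the queues.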