Add support for graceful shutdown of the server #1672
Conversation
☔ The latest upstream changes (presumably #1682) made this pull request unmergeable. Please resolve the merge conflicts.

Force-pushed from c5577c0 to 8f35863.

☔ The latest upstream changes (presumably #1677) made this pull request unmergeable. Please resolve the merge conflicts.
I checked this out and tested with ctrl-c, SIGKILL, and SIGTERM, in both civet and hyper mode, on my rebased branch (https://github.com/markcatley/crates.io/tree/hyper-graceful-shutdown). It works exactly as I would expect. I am also pretty confident I tested it staying up until the connections were dropped, although I am not 100% sure how to test this reliably.

Looks good to me.
let (tx, rx) = channel::<()>();
// The signal handler sends on the channel; ignore the error if the
// receiver has already gone away.
ctrlc_handler(move || tx.send(()).unwrap_or(()));
// Block the main thread until a signal arrives, then drop the server
// to trigger its shutdown.
rx.recv().unwrap();
drop(server);
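For reference, a `ctrlc_handler` helper like the one called above could be built on the `ctrlc` crate. This is a sketch under that assumption, not necessarily this PR's actual implementation; note that catching SIGTERM in addition to SIGINT requires the crate's `termination` feature.

// Sketch of a possible `ctrlc_handler` helper built on the `ctrlc`
// crate (an assumption, not the PR's confirmed implementation).
// With the "termination" feature enabled, the callback also fires
// on SIGTERM, not just ctrl-c (SIGINT).
fn ctrlc_handler<F: FnMut() + Send + 'static>(handler: F) {
    ctrlc::set_handler(handler).expect("failed to register signal handler");
}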
The `drop` seems unnecessary, given that `server` will be dropped at the end of the block, which immediately follows.
I'd prefer to leave the `drop` explicit here. Removing the `drop` means `server` is unused, so I have to switch to `Civet(_server)`, which, while equivalent, I think is a bit less clear.
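To illustrate the trade-off being discussed, here is a self-contained sketch; `Server`, `Civet`, and `CivetServer` are stand-in names, and the destructor stands in for the real shutdown logic.

use std::sync::mpsc::channel;

struct CivetServer;

impl Drop for CivetServer {
    fn drop(&mut self) {
        println!("graceful shutdown happens here");
    }
}

enum Server {
    Civet(CivetServer),
}

fn main() {
    let (tx, rx) = channel::<()>();
    tx.send(()).unwrap(); // stand-in for the signal handler firing

    // The form kept in the PR: the shutdown point is explicit.
    let server = Server::Civet(CivetServer);
    rx.recv().unwrap();
    drop(server);

    // Without the `drop`, `server` would be flagged as unused, and the
    // binding would become `Server::Civet(_server)`; the value would
    // then be dropped only at the end of the enclosing block.
}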
ctrlc_handler(move || tx.send(()).unwrap_or(()));

let mut rt = tokio::runtime::Builder::new()
    .core_threads(4)
Should this be configured with an environment variable?
I think it's fine to leave this hard-coded for now. I added it mainly because sometimes we see the server boot with different levels of initial memory consumption, and I'm trying to reduce potential variability (although I doubt this will really affect anything). Long term, I expect we can switch back to all the defaults provided by `Runtime::new()`, which will set the number of core threads to the CPU count.
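If the environment-variable route were ever taken, a sketch along these lines would work with tokio 0.1's builder; `RUNTIME_CORE_THREADS` is a hypothetical variable name, not an existing setting.

// Hypothetical env-var override for the hard-coded value above.
// `RUNTIME_CORE_THREADS` is an invented name for illustration only.
let core_threads = std::env::var("RUNTIME_CORE_THREADS")
    .ok()
    .and_then(|s| s.parse().ok())
    .unwrap_or(4);

let mut rt = tokio::runtime::Builder::new()
    .core_threads(core_threads)
    .build()
    .unwrap();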
Force-pushed from 8f35863 to cd8fabb.
@jtgeibel The compile failure requires a
Looks good to me.
I quickly retested using ctrl-c in both civet and hyper configurations.
The server process now intercepts SIGINT and SIGTERM to initiate a graceful shutdown of the server. This is useful when tracking memory leaks locally, as both `hyper` and `civet` are given a chance to return memory and shut down cleanly. However, this will not improve things in production, as we also run an instance of `nginx`, which will close all connections after receiving a SIGTERM from Heroku. From my preliminary investigation, it appears we may need to customize the buildpack to change this behavior.

Additionally, the server will now briefly sleep before notifying Heroku that it is ready to receive connections.

Also, when using `hyper`, the `Runtime` is configured to use 4 background threads. This overrides the default of one thread per CPU and provides consistency between differently sized dynos. The number of `conduit` background threads is still configured via `SERVER_THREADS` and defaults to 50 in production.
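For hyper mode, this kind of graceful shutdown can be driven by hyper 0.12's `with_graceful_shutdown` hook. A minimal, self-contained sketch follows, with illustrative names rather than this PR's actual wiring; the oneshot sender plays the role of the signal handler.

extern crate futures;
extern crate hyper;

use futures::sync::oneshot;
use futures::Future;
use hyper::service::service_fn_ok;
use hyper::{Body, Response, Server};

fn main() {
    let addr = ([127, 0, 0, 1], 3000).into();
    // Resolving `rx` tells hyper to stop accepting new connections and
    // drain in-flight requests before the server future completes.
    let (tx, rx) = oneshot::channel::<()>();

    let server = Server::bind(&addr)
        .serve(|| service_fn_ok(|_req| Response::new(Body::from("ok"))))
        .with_graceful_shutdown(rx)
        .map_err(|e| eprintln!("server error: {}", e));

    // Here the shutdown fires immediately; in the PR it would come from
    // the SIGINT/SIGTERM handler instead.
    tx.send(()).unwrap_or(());

    hyper::rt::run(server);
}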
Force-pushed from cd8fabb to beeebb4.
@bors r+
📌 Commit beeebb4 has been approved by
Add support for graceful shutdown of the server
☀️ Test successful - checks-travis