Sanic Server WorkerManager refactor #2499
Conversation
FYI - If you are looking at this PR and see the
See sanic-org/sanic-ext#111 for the health monitoring implementation
I'm putting my chop on this. I was satisfied with the walkthrough and the Q&A, and I think, with tests passing as best as can be expected, I'm ready for this to be merged.
@sjsadowski Nice. I am going to merge, which will unblock a few other items I want to get done this week.
Do you have a date for this release? I'm curious and eager to move it to prod :)
Likely Sunday (2022-09-26).
Great, thank you :)
NOTE: When I say "Sanic" I mean Sanic Server. This PR is inapplicable to ASGI mode.
Background
The long-overdue refactor of Sanic multiprocessing...

This PR intends to redo how Sanic creates processes and sockets. As a side effect, it will also change how auto-reload works and, perhaps most importantly, finally fix serving multiple workers on Windows. It will also expose a new API for manually triggering restarts, accessing the current worker process context, and passing objects (like `multiprocessing.Queue`) to the processes to enable sync between single-node, multi-worker instances.

Design concept
As discussed in #2364, there will always* be a main process when you start Sanic. The main process will create a `WorkerManager` that is responsible for the lifecycle of one or more server processes, and optionally a reload process. This pattern will also be extensible by API, allowing Sanic Extensions and other plugins to piggyback off the manager, for example to create a health-check process.

* When I say always, I really mean "sometimes". We will also create a secondary `Sanic.serve_single()` method that creates and runs the server in one process, with no auto-reload or multi-worker support. You will need to go out of your way to use this, for those people that need it. OOTB, we want to make a consistent experience between DEBUG and PROD.

Once this is complete, we can make further enhancements to `Sanic.prepare`. Currently, if you prepare multiple HTTP versions and/or socket bindings, they will run in each process on the same loop. We can improve this to allow HTTP server instances to have their own loops inside individual processes. This is not part of this PR and will likely be an addition in v22.12.

One or more sockets will be created by the main process and passed to each worker process (as needed). Because most of this implementation is all new, it should be pretty easy to maintain backwards compatibility by taking the existing `Sanic.serve` and moving it to `Sanic.serve_legacy`.

New API
- `Sanic.serve_legacy` - Use the old version of `Sanic.serve`
- `Sanic.serve_single` - Run Sanic w/o multiprocessing (no auto-reload allowed)
- `app.run(..., single_process=True)`
- `app.run(..., legacy=True)`
- `app.m.restart()` - Manually restart an application
- `app.shared_ctx` - see below
- `$ sanic path.to:app --inspect` - CLI command to inspect a running application
- `app.manager.manage(...)` - see below

Shared state
For sharing safe values between workers (on the same Sanic runtime), there is a new `app.shared_ctx`. This will be an object explicitly for things like `multiprocessing` sync and shared state objects (`Queue`, `Pipe`, `Value`, `Array`, etc). They should ONLY be added in the main process, like this:
Process management

A user can hook into the Sanic worker manager to have it manage any additional subprocess. All it needs is a callable. If that subprocess will be blocking, it should also handle some common signals.
Sanic will then start up the process and manage its lifecycle.
Worker state
There is an object available on the application multiplexer called "state". It is a dictionary-like object containing basic details about the current state of the current worker process, and it supports some basic dictionary-style operations to interact with it:
It should be noted that the multiplexer object is only available inside of a server worker process.
Inspector
There is a special worker process called the "Inspector". You must opt in to use it:
This will open a local port that is exposed to allow an outside process to get information about, and interact with, the running application. For example, a CLI command can fetch the state of the current instance.
Other commands:
Breaking Changes

- `app.config` values set in `main_process_start` will have no impact. (This is a sort of unintended side effect; probably more a bug that should not have been allowed. Nonetheless, it is no longer possible and may be a breaking change for some.)
- `app.run()` or `Sanic.serve()` must be inside an `if __name__ == "__main__"` block
blockImpacted Issues
If you know of other issues this touches that are not listed below, please LMK.
Closes #2534
Closes #2494
Closes #2467
Closes #2429
Closes #2364
Closes #2312
Closes #1471
Closes #1346
TODO

- Do not pre-bind socket in legacy mode
- Move `health` module to Sanic Extensions

Technical discussion and overview: https://youtu.be/m8HCO8NK7HE