Web workers #1993
Conversation
@afinch7 Have a look at the following disabled tests:
They involve the compiler worker. The problem, I think, is the code at lines 88 to 91 in 129eae0.
Maybe we need a second resource for the worker, like a "stderr" for each worker.
@afinch7 My inclination is that any JSError hit inside the worker should be sent back to the caller, as in the case of lines 126 to 131 in c43cfed.
In the case of the compiler, the caller is clear. But in general it is not... so I'm not sure how to handle that.
I will at least give it a try for sure before I finish this commit. Maybe poll after each call, or use ...
You'll have to change it so both the worker and the main isolate are running in the "main" tokio executor. That is, change the thread spawn into a tokio::spawn.
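A minimal sketch of what that scheduling change looks like with futures 0.1 / tokio 0.1 (the crates deno used at the time); spawn_worker and the trivial futures here are illustrative stand-ins, not deno's actual code:

```rust
use futures::{future, Future}; // futures 0.1 / tokio 0.1 era

fn spawn_worker<F>(worker_future: F)
where
    F: Future<Item = (), Error = ()> + Send + 'static,
{
    // Before: std::thread::spawn(move || { /* block on worker_future */ });
    // After: hand the future to the executor that is already driving the
    // main isolate. This must run inside the runtime, or spawn panics.
    tokio::spawn(worker_future);
}

fn main() {
    // tokio::run stands in for crate::tokio_util::run here; both the "main"
    // work and the worker end up polled by the same executor.
    tokio::run(future::lazy(|| {
        spawn_worker(future::ok(()));
        Ok(())
    }));
}
```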
It took me way too long to figure out that tokio was waiting on the compiler worker future to complete after the main isolate finished. I ended up having to use a second tokio runtime for the compiler worker. I also decided against compile_sync returning a result for now. It might make sense to wrap ...
It seems to be green now. Nice! What was the unit_test hanging problem?
Tokio scheduling. Had to use the same runtime for both spawns: e6a3bab
static ref C_SHARED: Mutex<Option<CompilerShared>> = Mutex::new(None);
// tokio runtime specifically for spawning logic that is dependent on
// completion of the compiler worker future
static ref C_RUNTIME: Mutex<Runtime> = Mutex::new(Runtime::new().unwrap());
Is this runtime different from the tokio runtime (defined by crate::tokio_util::run)?
Ideally we could get all isolates using the same runtime.
I tried that, but it leads to tokio waiting indefinitely for the compiler worker to exit. The compiler worker only has two ways to exit right now: calling workerClose from the worker or throwing an uncaught error. Maybe work on a better way to terminate the compiler worker later?
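For reference, a sketch of the workaround being described, assuming the worker is an ordinary futures 0.1 future; only C_RUNTIME comes from the quoted snippet, the other names are illustrative:

```rust
use futures::Future; // futures 0.1
use lazy_static::lazy_static;
use std::sync::Mutex;
use tokio::runtime::Runtime; // tokio 0.1

lazy_static! {
    // Dedicated runtime whose only job is driving the compiler worker future.
    static ref C_RUNTIME: Mutex<Runtime> = Mutex::new(Runtime::new().unwrap());
}

fn spawn_compiler_worker<F>(worker_future: F)
where
    F: Future<Item = (), Error = ()> + Send + 'static,
{
    // Because this future lives on its own runtime, the main executor's
    // shutdown never has to wait for the worker to call workerClose.
    C_RUNTIME.lock().unwrap().spawn(worker_future);
}
```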
let union =
    futures::future::select_all(vec![worker_receiver, local_receiver.shared()]);

match union.wait() {
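As context for the snippet above: in futures 0.1, shared() turns a future into a cloneable handle whose clones all resolve to the same value, and select_all resolves with whichever future in the list finishes first. A small self-contained sketch, with made-up oneshot channels standing in for worker_receiver and local_receiver:

```rust
use futures::sync::oneshot; // futures 0.1
use futures::Future;

fn main() {
    // Made-up channels standing in for the worker future and the local
    // per-request receiver.
    let (_worker_tx, worker_rx) = oneshot::channel::<i32>();
    let (local_tx, local_rx) = oneshot::channel::<i32>();

    // shared() makes a future cloneable; every handle resolves to the same
    // value, so the worker future can sit in a select_all and be reused.
    let worker = worker_rx.shared();
    let local = local_rx.shared();

    local_tx.send(42).expect("receiver still alive");

    // select_all yields whichever future completes first, together with its
    // index and the futures that are still pending.
    let (value, index, _pending) =
        futures::future::select_all(vec![worker.clone(), local])
            .wait()
            .ok()
            .expect("the local future has already completed");
    assert_eq!(*value, 42);
    assert_eq!(index, 1);
}
```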
Is it possible to refactor this into a compile_async and a compile_sync that calls the async version? I'd like to experiment with parallel compilation at some point.
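A rough sketch of what that split could look like with futures 0.1; the names compile_async and compile_sync follow the suggestion, but the request/response types and the bodies are placeholders rather than deno's actual compiler messages:

```rust
use futures::{future, Future}; // futures 0.1

// Placeholder types standing in for the real compiler messages.
pub struct CompileRequest;
pub struct CompileResult;
pub struct CompileError;

pub fn compile_async(
    _req: CompileRequest,
) -> impl Future<Item = CompileResult, Error = CompileError> {
    // Real version: send the request to the compiler worker and return a
    // future for the matching response.
    future::ok(CompileResult)
}

pub fn compile_sync(req: CompileRequest) -> Result<CompileResult, CompileError> {
    // The sync variant just drives the async one to completion.
    compile_async(req).wait()
}
```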
Maybe in another PR? This would require a major rework of communication with the compiler worker. There is currently no way to relate responses directly to requests, and also no way to relate errors back to requests or even handle those errors without the compiler exiting (permanently).
I'd like to land this PR soon - it's getting quite big and I think it's already an improvement over the existing code base. I'm worried it's going to get difficult to rebase.
My main concern now is removing the worker-specific declaration/bundle/snapshot - I think it's too much complexity. Sharing the main isolate's tokio runtime is my other concern - but that can be done later if it's too much to do now.
LGTM!
This is a massive improvement - thank you very much! I guess we'll have to iterate a bit more on workers - mostly I'd like to get them working in the same Tokio runtime.
* Refactored the way worker polling is scheduled and errors are handled.
* Share the worker future as a Shared
Fixes part of #1955, part of #1222, and part of #1047.