Features/task_closures #162
Conversation
encore_arg_t result = run_closure(current->closure, value);
future_fulfil(current->future, result);

pony_gc_recv();
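For orientation, here is a minimal, self-contained sketch of how the two calls in this hunk might fit together on the fulfilment path. The struct layout, the `next` link, and the exact signatures are inferred from the diff and are assumptions, not the runtime's actual definitions:

```c
#include <stddef.h>

/* Assumed stand-ins so the sketch compiles on its own; in the runtime these
 * live in the closure/future headers and may look different. */
typedef struct closure closure_t;
typedef struct future future_t;
typedef union { void *p; long i; } encore_arg_t;   /* assumed shape */

typedef struct closure_entry {
  closure_t *closure;              /* lambda chained onto the future          */
  future_t  *future;               /* future returned by the chaining         */
  struct closure_entry *next;      /* assumed link to the next chained entry  */
} closure_entry_t;

/* Signatures inferred from the usage shown in the diff above. */
extern encore_arg_t run_closure(closure_t *c, encore_arg_t value);
extern void future_fulfil(future_t *f, encore_arg_t value);

/* Once the producer has a value, walk the chained entries: run each lambda
 * and fulfil the future that its chaining expression returned. */
static void run_chained_closures(closure_entry_t *current, encore_arg_t value)
{
  for (; current != NULL; current = current->next) {
    encore_arg_t result = run_closure(current->closure, value);
    future_fulfil(current->future, result);
  }
}
```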
tabs everywhere!
I made two comments on the source (see above). One is superficial; the other one may not be a bug. Additionally, I'd like to state that I think jokes should not go into master, even more so jokes that can make people uncomfortable, like the 'one night stand' one. Sorry for being the buzzkill here. For others who may want to review and wonder about tests: I confirmed that there is a test that uses the feature (called
You might be right and I took the joke too far. I'll rebase this branch on top of
the current closures are always attached and do not use the enum ATTACHED_CLOSURE. It's code that just lies there, and no one knows whether it's used or not.
implementation of task closures (TASK_CLOSURE). TASK_CLOSURE closures are used whenever a future produced by a task runner is chained onto another function. TASK_CLOSURE closures are different from DETACHED closures in the sense that a detached closure is run by the producer of the future right after the future is fulfilled, while TASK_CLOSURE closures are packed as messages that will be run by any task runner (we don't know which one). E.g.

```
let fut = o.execute() in fut ~~> \(x: int) -> x + 1
```

In this case, chaining on `fut` creates a DETACHED closure on the lambda. As soon as the `o` object finishes the `execute` method, it will check its list of chained closures and run them one by one until there are no more chained closures. This approach is sequential, so if there are 1000 closures chained on the result of `o.execute`, it will take forever.

Let's go for the TASK_CLOSURE closures:

```
let fut = async { ... } in fut ~~> \(x: int) -> x+1
```

In this case, the future produced by the `async` will be fulfilled by any available task runner (currently, one per scheduler). Furthermore, after a task runner finishes running the body of the `async` task, it doesn't run all the lambdas chained on the produced future; instead it creates a new message and schedules it to be run by any other task runner. This means that if we have 1000 chained lambdas on the future, the task runner places them in the global queue, where any task runner can run them. If we have 4 schedulers, we have 4 task runners operating in parallel, picking up these 1000 closures to run, which load-balances the jobs.

Shortcoming: there could be data races if the environment of the closure captures a passive object. A possible fix is to make a copy of the passive object whenever we detect that the closure is impure.
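To picture the dispatch difference described in this commit message, here is a small illustrative C sketch. None of the names below (`chain_entry_t`, `global_task_queue_push`, and so on) are the Encore runtime API; they are stand-ins invented for the example:

```c
#include <stddef.h>

typedef union { void *p; long i; } value_t;        /* stand-in argument type */
typedef value_t (*closure_fn)(value_t arg);

typedef enum { DETACHED, TASK_CLOSURE } chain_kind_t;

typedef struct chain_entry {
  chain_kind_t kind;
  closure_fn fn;
  struct chain_entry *next;
} chain_entry_t;

/* Hypothetical shared queue that every task runner polls. */
extern void global_task_queue_push(closure_fn fn, value_t arg);

/* The producer of the future decides, per entry, whether to run the lambda
 * itself (DETACHED: sequential, all on one thread) or to pack it as a message
 * that any available task runner can pick up (TASK_CLOSURE: load-balanced). */
static void dispatch_chain(chain_entry_t *e, value_t produced)
{
  for (; e != NULL; e = e->next) {
    if (e->kind == DETACHED)
      e->fn(produced);                          /* run inline, one by one   */
    else
      global_task_queue_push(e->fn, produced);  /* defer to a task runner   */
  }
}
```

The sketch only shows the fork in the dispatch logic; how the queue is implemented and how results flow back into futures is left out on purpose.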
tasks were not traced properly and there was a memory leak. It should be solved by this commit.
remove the function call in the test that produces a deadlock. This means that the test will be kept as a failure, as it should be, until someone decides to fix it.
0353b44 to 7a11ad4
The failure has nothing to do with this commit. Tasks have been broken ever since we re-enabled parallelism on Mac.
typedef struct actor_entry actor_entry_t;
typedef struct closure_entry closure_entry_t;
typedef struct message_entry message_entry_t;
typedef enum responsibility_t responsibility_t;
"ISO C forbids forward references to ‘enum’ types". This SO explains why. I will fix it, for I am touching futures now.
task closure chaining on task-produced futures
implementation of one-night stand closures (TASK_CLOSURE). TASK_CLOSURE closures are used whenever a future produced by a task runner is chained onto another function. TASK_CLOSURE closures are different from DETACHED closures in the sense that a detached closure is run by the producer of the future right after the future is fulfilled, while TASK_CLOSURE closures are packed as messages that will be run by any task runner (we don't know which one).
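E.g. the following, which is the first example given in the commit message above:

```
let fut = o.execute() in fut ~~> \(x: int) -> x + 1
```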
In this case, chaining on `fut` creates a DETACHED closure on the lambda. As soon as the `o` object finishes the `execute` method, it will check its list of chained closures and run them one by one until there are no more chained closures. This approach is sequential, so if there are 1000 closures chained on the result of `o.execute`, it will take forever.

Let's go for the TASK_CLOSURE closures:
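This is the same `async` example as in the commit message above:

```
let fut = async { ... } in fut ~~> \(x: int) -> x+1
```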
In this case, the future produced by the `async` will be fulfilled by any available task runner (currently, one per scheduler). Furthermore, after a task runner finishes running the body of the `async` task, it doesn't run all the lambdas chained on the produced future; instead it creates a new message and schedules it to be run by any other task runner. This means that if we have 1000 chained lambdas on the future, the task runner places them in the global queue, where any task runner can run them. If we have 4 schedulers, we have 4 task runners operating in parallel, picking up these 1000 closures to run, which load-balances the jobs.

Shortcoming: there could be data races if the environment of the closure captures a passive object. A possible fix is to make a copy of the passive object whenever we detect that the closure is impure.
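A rough sketch of the kind of fix suggested here. Every name in it (`env_entry_t`, `task_closure_t`, `passive_copy`, the `impure` flag) is hypothetical; it only illustrates the idea of copying captured passive objects before an impure closure is handed to the task runners:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical view of a closure environment: a list of captured objects. */
typedef struct env_entry {
  void *captured;                 /* captured passive object                */
  struct env_entry *next;
} env_entry_t;

typedef struct {
  env_entry_t *env;
  bool impure;                    /* assume the compiler can mark impurity  */
} task_closure_t;

/* Hypothetical deep copy of a passive object. */
extern void *passive_copy(const void *obj);

/* Before packing an impure TASK_CLOSURE as a message, replace each captured
 * passive object with a private copy so concurrent task runners cannot race
 * on the shared original. */
static void isolate_environment(task_closure_t *c)
{
  if (!c->impure)
    return;                       /* pure closures can share their captures */

  for (env_entry_t *e = c->env; e != NULL; e = e->next)
    e->captured = passive_copy(e->captured);
}
```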