
Features/task_closures #162

Merged: 7 commits merged into parapluu:master on May 12, 2015
Conversation

kikofernandez
Contributor

task closure chaining on task-produced futures

implementation of one-night stand closures (TASK_CLOSURE). TASK_CLOSURE
closures are used whenever a future produced by a task runner is chained
onto another function. TASK_CLOSURE closures differ from DETACHED closures
in that a detached closure is run by the producer of the future right after
the future is fulfilled, while a TASK_CLOSURE is packed as a message that
will be run by some task runner (we don't know which one in advance). E.g.

```
  let fut = o.execute() in
    fut ~~> \(x: int) -> x + 1
```

In this case, chaining on `fut` creates a DETACHED closure from the
lambda. As soon as the `o` object finishes the `execute` method, it
checks its list of chained closures and runs them one by one until
there are none left. This approach is sequential: with 1000 closures
chained on the result of `o.execute`, the producer has to run every
single one of them itself.
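Roughly, the fulfilment path looks like the sketch below (`chain_node_t`
and `fulfil_and_run_chain` are made-up names used only to illustrate the
loop; `run_closure`, `future_fulfil` and `encore_arg_t` are taken from the
runtime code quoted later in this PR, and `closure_t`/`future_t` stand for
the runtime's closure and future types):

```
/* Illustrative sketch only: a producer draining its DETACHED closures
 * sequentially after fulfilling a future. */
typedef struct chain_node {
  closure_t         *closure;      /* the chained lambda                  */
  future_t          *next_future;  /* future produced by the chained call */
  struct chain_node *next;
} chain_node_t;

void fulfil_and_run_chain(future_t *fut, encore_arg_t value, chain_node_t *head)
{
  future_fulfil(fut, value);       /* write the value into the future */

  /* The producer itself runs every chained closure, one after another:
   * with 1000 chained closures this loop does all the work on one thread. */
  for (chain_node_t *n = head; n != NULL; n = n->next) {
    encore_arg_t result = run_closure(n->closure, value);
    future_fulfil(n->next_future, result);
  }
}
```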

Now let's look at TASK_CLOSURE closures:

```
  let fut = async { ... } in
    fut ~~> \(x: int) -> x+1
```

In this case, the future produced by the `async` will be fulfilled by
any available task runner (currently, one per scheduler). Furthermore,
after a task runner finishes running the body of the `async` task, it
does not run the lambdas chained on the produced future itself; instead
it packs each one as a new message and schedules it to be run by
whichever task runner picks it up. This means that if we have 1000
lambdas chained on the future, the task runner only has to place them
in the global queue, from which any task runner can run them. With 4
schedulers we have 4 task runners picking up these 1000 closures in
parallel, which load-balances the work.
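A sketch of the difference, with the same caveat that `task_msg_t`,
`global_queue_push` and the function names are invented for illustration
(only `run_closure`, `future_fulfil` and `encore_arg_t` come from the code
quoted in this review):

```
#include <stdlib.h>

/* Illustrative sketch only: fulfilling an async-produced future hands the
 * chained closures off as messages instead of running them in place. */
typedef struct task_msg {
  closure_t    *closure;   /* the chained lambda                      */
  future_t     *future;    /* future that the lambda's result fulfils */
  encore_arg_t  value;     /* value the producer computed             */
} task_msg_t;

void fulfil_async_future(future_t *fut, encore_arg_t value, chain_node_t *head)
{
  future_fulfil(fut, value);

  /* Pack each chained closure as a message and push it on the global
   * queue; with 4 schedulers, 4 task runners drain these messages in
   * parallel instead of one producer running them all sequentially. */
  for (chain_node_t *n = head; n != NULL; n = n->next) {
    task_msg_t *msg = malloc(sizeof(task_msg_t)); /* runtime would use its own allocator */
    msg->closure = n->closure;
    msg->future  = n->next_future;
    msg->value   = value;
    global_queue_push(msg);                       /* hypothetical global-queue call */
  }
}

/* Any task runner that later dequeues the message does essentially what
 * the diff hunk quoted further down in this review shows: */
void task_runner_handle(task_msg_t *current)
{
  encore_arg_t result = run_closure(current->closure, current->value);
  future_fulfil(current->future, result);
}
```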

Shortcoming: there could be data races if the environment of the
closure captures a passive object. A possible fix is to copy the
passive object whenever we detect that the closure is impure.
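Purely as a sketch of that idea (none of the helpers below exist in the
runtime; they only name the proposed fix):

```
/* Hypothetical sketch: before a chained closure is packed into a task
 * message, copy any captured passive object so that task runners never
 * share mutable state.  closure_is_impure, env_captures_passive,
 * closure_env, passive_deep_copy and closure_replace_env are all made-up
 * names for the proposal, not existing runtime functions. */
closure_t *prepare_for_task_message(closure_t *c)
{
  if (closure_is_impure(c) && env_captures_passive(c)) {
    /* give this closure its own copy of the captured passive object */
    return closure_replace_env(c, passive_deep_copy(closure_env(c)));
  }
  return c;  /* pure closures (or ones without passive captures) are safe to share */
}
```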

encore_arg_t result = run_closure(current->closure, value);
future_fulfil(current->future, result);

pony_gc_recv();
Contributor

tabs everywhere!

@kaeluka
Contributor

kaeluka commented May 10, 2015

I made two comments on the source (see above). One is superficial, the other one may not be a bug.

Additionally, I'd like to state that I think that jokes should not go into master. Even more so jokes that can make people uncomfortable, like the 'one night stand' one. Sorry for being the buzzkill here.

For others who may want to review and wonder about tests: I confirmed that there is a test that uses the feature (called async_chain.enc), so I guess we can consider this PR tested.

@kikofernandez
Contributor Author

You might be right and I took the joke too far. I'll rebase this branch on top of master (since it cannot be merged automatically) and rename a couple of things.

Kiko Fernandez Reyes added 7 commits May 11, 2015 11:56
the current closures are always attached and do not use the enum
ATTACHED_CLOSURE. It's code that just lies there, and nobody knows
whether it is used or not.
implementation of task closures (TASK_CLOSURE). TASK_CLOSURE closures are
used whenever a future produced by a task runner is chained onto another
function. TASK_CLOSURE closures differ from DETACHED closures in that a
detached closure is run by the producer of the future right after the
future is fulfilled, while a TASK_CLOSURE is packed as a message that
will be run by some task runner (we don't know which one in advance). E.g.

```
  let fut = o.execute() in
    fut ~~> \(x: int) -> x + 1
```

In this case, chaining on `fut` creates a DETACHED closure from the
lambda. As soon as the `o` object finishes the `execute` method, it
checks its list of chained closures and runs them one by one until
there are none left. This approach is sequential: with 1000 closures
chained on the result of `o.execute`, the producer has to run every
single one of them itself.

Now let's look at TASK_CLOSURE closures:

```
  let fut = async { ... } in
    fut ~~> \(x: int) -> x+1
```

In this case, the future produced by the `async` will be fulfilled by
any available task runner (currently, one per scheduler). Furthermore,
after a task runner finishes running the body of the `async` task, it
does not run the lambdas chained on the produced future itself; instead
it packs each one as a new message and schedules it to be run by
whichever task runner picks it up. This means that if we have 1000
lambdas chained on the future, the task runner only has to place them
in the global queue, from which any task runner can run them. With 4
schedulers we have 4 task runners picking up these 1000 closures in
parallel, which load-balances the work.

Shortcoming: there could be data races if the environment of the
closure captures a passive object. A possible fix is to copy the
passive object whenever we detect that the closure is impure.
tasks were not traced properly and there was a memory leak. It should
be solved with this commit.
remove the function call in the test that produces a deadlock. This means
that the test will be kept as a failure, as it should be, until someone
decides to fix it.
@kikofernandez kikofernandez changed the title Features/ons closures Features/task_closures May 11, 2015
@kikofernandez
Contributor Author

The failure has nothing to do with this commit; tasks have been broken ever since we re-enabled parallelism on Mac.

kaeluka pushed a commit that referenced this pull request May 12, 2015
@kaeluka kaeluka merged commit 14822e0 into parapluu:master May 12, 2015
@kikofernandez kikofernandez deleted the features/ons-closures branch May 12, 2015 12:35
typedef struct actor_entry actor_entry_t;
typedef struct closure_entry closure_entry_t;
typedef struct message_entry message_entry_t;
typedef enum responsibility_t responsibility_t;
Contributor

"ISO C forbids forward references to ‘enum’ types". This SO explains why. I will fix it, for I am touching futures now.
