Execution Engine / dataflows runner #96

Open · rufuspollock opened this issue Jun 22, 2019 · 3 comments

rufuspollock (Member) commented Jun 22, 2019

What is the recommended way to run dataflows in production, on a regular basis, and/or connected to a task queue?

I.e. the equivalent of the datapackage-pipelines runner?

User Stories

As X, running data processing flows, I want to have a queue of data processing tasks run through dataflows

  • Want a queue of tasks which are executed by dataflows ...

As X, I want to have a given dataflow run on a regular basis (e.g. daily, hourly) so that I can process data regularly

  • e.g. web scraping, etc.
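
Not an official recommendation, but as a rough illustration of the first story, a Flow can be wrapped in an ordinary task-queue worker. A minimal sketch, assuming Celery with a Redis broker; the broker URL, source URL and output path below are placeholders:

```python
from celery import Celery
from dataflows import Flow, load, dump_to_path

# Hypothetical worker app; any broker Celery supports would do.
app = Celery('flows', broker='redis://localhost:6379/0')

@app.task
def run_flow(source_url, out_path):
    # Each queued task streams one source through a small dataflows
    # pipeline and dumps the result as a data package on disk.
    Flow(
        load(source_url),
        dump_to_path(out_path),
    ).process()

# Producers enqueue work with e.g.:
#   run_flow.delay('https://example.com/data.csv', 'output/example')
```
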
rufuspollock (Member Author)

@akariv any thoughts here?


micimize commented Jul 7, 2019

It seems to me that datapackage-pipelines is as close as we have to a recommended deployment scheme, by virtue of there being docs for its integration.

In the wild, because my pipeline operates on packages as a unit (#62), deployment has to be custom. I have a container that runs the pipeline on a schedule with crython, iterates over whatever new data sources have been added, and applies the dataflows pipeline to them.

I've thought some about more scalable deployment solutions - I think a generic way to deploy auto-scaling Python workloads to Kubernetes would be a nice fit. I've also been thinking about how feasible it might be to write an adapter from dataflows (or a similarly Pythonic, data-package-based API) to Apache Beam.
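
For reference, a stripped-down version of that kind of container entrypoint might look like the sketch below. It uses the `schedule` library as a stand-in for crython, and the source discovery, flow and paths are placeholders rather than the actual setup described above:

```python
import time

import schedule
from dataflows import Flow, load, dump_to_path

def discover_new_sources():
    # Placeholder: return URLs/paths of sources added since the last run.
    return []

def run_pipeline():
    for i, source in enumerate(discover_new_sources()):
        # Apply the same dataflows pipeline to each newly added source.
        Flow(
            load(source),
            dump_to_path(f'output/source-{i}'),
        ).process()

# Run once a day; cron, crython or Celery beat would work equally well.
schedule.every().day.at('01:00').do(run_pipeline)

while True:
    schedule.run_pending()
    time.sleep(60)
```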

cschloer (Contributor)

I would like to bump this issue @akariv @roll

I'm running dataflows in a production environment and slowly starting to realize that it doesn't handle larger datasets very well. My understanding is that it doesn't run rows through processors in the same way as DPP, so eventually it runs out of memory. Switching back to DPP wouldn't be ideal, as there isn't currently a way to get the results back from running a pipeline in DPP without adding a dump_to_path step and reading from the filesystem. Is there any way to improve dataflows performance, or to update the DPP runner so that it works better when run from within Python?
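
To make the trade-off concrete, the difference between the two dataflows entry points is roughly as in the sketch below ('data/large.csv' and 'out' are placeholder paths):

```python
from dataflows import Flow, load, dump_to_path

# results() collects every row of every resource into Python lists,
# which is convenient but holds the whole dataset in memory.
results, datapackage, stats = Flow(load('data/large.csv')).results()

# process() streams rows through the steps without collecting them;
# dump_to_path writes the output as a data package on disk, which can
# then be read back later, e.g. with load('out/datapackage.json').
datapackage, stats = Flow(
    load('data/large.csv'),
    dump_to_path('out'),
).process()
```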
