Execution Engine / dataflows runner #96
@akariv any thoughts here?
It seems to me that in the wild, because my pipeline operates on packages as a unit (#62), deployment has to be custom. I have a container that runs the pipeline scheduled with I've thought some about more scalable deployment solutions - I think a generic way to deploy auto-scaling Python workloads to Kubernetes would be a nice fit. I've also been thinking about how feasible it might be to write an adapter from
I would like to bump this issue @akariv @roll. I'm running dataflows in a production environment and am slowly starting to realize that it doesn't handle larger datasets very well. My understanding is that it doesn't run through processors in the same way as DPP, so it eventually runs out of memory. Switching back to DPP wouldn't be ideal, as there is currently no way to get the results back from running a pipeline in DPP without adding a dump_to_path step and reading from the filesystem. Is there any way to improve dataflows performance, or to update the DPP runner so that it works better when run from within Python?
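One general way to keep memory bounded on larger datasets is to process rows lazily with generators, so each step handles one row at a time instead of materializing the whole table. This is a minimal, library-free sketch of that pattern; the function names are illustrative and this is not the dataflows API itself:

```python
# Sketch: a generator pipeline where only one row is resident in memory
# at a time, rather than the whole dataset.

def read_rows(n):
    # stand-in for a data source; a real flow would read CSV/JSON lazily
    for i in range(n):
        yield {"id": i, "value": i * 2}

def add_flag(rows):
    # a processor step: transforms rows one at a time
    for row in rows:
        row["flagged"] = row["value"] > 10
        yield row

def run(n):
    # chain the steps lazily; nothing is ever buffered in full
    return sum(1 for row in add_flag(read_rows(n)) if row["flagged"])

print(run(10))  # prints 4
```

If a flow built this way still exhausts memory, the culprit is usually a step that buffers (a sort, a join, or a dump that accumulates rows) rather than the row streaming itself.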
What is the recommended way to run dataflows in production and/or on a regular basis and/or connected to a task queue?
i.e. the equivalent of the datapackage-pipelines runner?
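Absent a built-in runner, one pattern for the task-queue case is a worker that pulls pipeline callables off a queue, which is roughly what an integration with Celery or RQ would do. A minimal stdlib sketch, with illustrative names only (not a dataflows API):

```python
# Sketch: a single worker thread consuming "flows" (here, plain callables)
# from a FIFO task queue. A real runner would enqueue Flow(...).process()
# invocations and handle retries/logging.
import queue
import threading

tasks = queue.Queue()
results = []

def worker():
    while True:
        flow = tasks.get()
        if flow is None:          # sentinel: shut the worker down
            break
        results.append(flow())    # "run" the flow
        tasks.task_done()

t = threading.Thread(target=worker)
t.start()
tasks.put(lambda: "daily-report done")
tasks.put(lambda: "hourly-sync done")
tasks.put(None)
t.join()
print(results)
```

The single-worker FIFO setup keeps execution order deterministic; scaling out would mean more worker threads or processes pulling from the same queue.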
User Stories
As X running data processing flows, I want to have a queue of data processing tasks run through dataflows
As X, I want to have a given dataflow run on a regular basis (e.g. daily, hourly) so that I can process data regularly
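For the scheduled-run story, any cron-style wrapper around a flow needs to compute when the flow should next fire. A small sketch of that logic, with a hypothetical helper name (not part of dataflows):

```python
# Sketch: compute the next run time for an hourly schedule by rounding
# up to the next whole-hour boundary.
from datetime import datetime, timedelta

def next_run(now, interval_hours=1):
    # drop minutes/seconds, then step forward by the interval
    base = now.replace(minute=0, second=0, microsecond=0)
    return base + timedelta(hours=interval_hours)

print(next_run(datetime(2020, 1, 1, 9, 30)))  # 2020-01-01 10:00:00
```

In practice this is the kind of logic usually delegated to cron, a Kubernetes CronJob, or a scheduler library rather than hand-rolled, but the user story above only needs the "run at the next interval" behaviour it demonstrates.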