[Core feature] Injection of env vars at run time [discussion needed] #2447
From Gigi on channel:

> We also need this feature to decide the environment (dev/staging/prod) at runtime so that our application can determine which configuration file to load, for example. Most people use the Flyte domain to decide the environment, but we don't use domains for the environment because we have isolated clusters across environments. As a workaround, we use Docker arg/env, but this forces us to build multiple images. It would be great if we could pass these properties via flytekit or flytectl along with the packaging/registering.
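For illustration, a minimal sketch (not from the issue) of the pattern Gigi describes: the task itself only reads an environment variable, and the open question is how that variable gets set at launch time instead of being baked in at image build time. `APP_ENV` and the config path are hypothetical names:

```python
import os

from flytekit import task


@task
def load_config_path() -> str:
    # APP_ENV is a hypothetical variable the user would like to inject
    # at launch time; today it has to be baked in via Docker ARG/ENV.
    env = os.environ.get("APP_ENV", "dev")  # one of: dev / staging / prod
    return f"/etc/myapp/config.{env}.yaml"
```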
This is not directly related to this thread, but there was a request from a user on how to support injecting different environment variables by domain (values that do not change from one workflow execution to another). This can now be achieved through pod templates and the pod template fallback logic in flytepropeller. Bring up the demo environment with:

```yaml
apiVersion: v1
kind: PodTemplate
metadata:
  name: flyte-template
  namespace: '{{ namespace }}'
template:
  metadata:
    annotations:
      bar: custom-value
  spec:
    containers:
      - name: default
        image: docker.io/rwgrim/docker-noop
        terminationMessagePath: "/dev/foo"
        env:
          - name: DOMAIN_BASED_VALUE
            value: '{{ mydomainvalue }}'
```

Then, place this in the configuration:

```yaml
plugins:
  k8s:
    default-pod-template-name: "flyte-template"
cluster_resources:
  customData:
    - production:
        - mydomainvalue:
            value: "iminprod"
    - staging:
        - mydomainvalue:
            value: "iminstg"
    - development:
        - mydomainvalue:
            value: "imindev"
```

And, if you want, you can also make a default pod template in the flyte namespace:

```yaml
apiVersion: v1
kind: PodTemplate
metadata:
  name: flyte-template
  namespace: flyte
template:
  metadata:
    labels:
      foo: foolabel
    annotations:
      foo: initial-value
      bar: initial-value
  spec:
    containers:
      - name: default
        image: docker.io/rwgrim/docker-noop
        terminationMessagePath: "/dev/foo"
```
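To make the effect concrete, here is a hedged sketch of what a task would observe once the pod templates above are in place; the variable arrives through the ordinary process environment, so no flytekit API is involved:

```python
import os

from flytekit import task


@task
def report_domain_value() -> str:
    # Injected by the domain-scoped pod template above, e.g. "iminprod"
    # for executions in the production domain; "unset" if no template applied.
    return os.environ.get("DOMAIN_BASED_VALUE", "unset")
```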
I think the original issue still stands: a user may want to set env vars for containers at runtime, e.g. `pyflyte run --env "k=v" --env "k=v" ...`, or the equivalent in flytectl. This would give users a direct way to set env vars. For now, I am thinking of starting to work on this.
Think this is ready to close?

Yes, it's done.
Motivation: Why do you think this is important?
A user requested that Flyte be able to inject environment variables into a workflow run at execution launch time, with the variables propagated to all downstream tasks, subworkflows, and launch plans.
I think this feature is possibly reasonable, but it warrants discussion, because it somewhat breaks referential transparency. The injected environment variables are designed to alter the behavior of the task; however, the idea is that these env vars will not affect caching at all, which is a bit strange: two executions launched with different env vars could behave differently yet still share a cache entry.
The original use case for this would be to direct service queries: one may run the same task against production services (non-Flyte systems) or development services, for example.
Goal: What should the final outcome look like, ideally?
Not entirely clear, but basically: at all the points where users can run workflows (flytectl, console, flyte-cli, FlyteRemote, etc.), the user would be able to pass along a dict that would get set at the top execution level and propagate through all nodes/sub-executions.
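As a hedged sketch of what that could look like from FlyteRemote (the `envs` parameter name and the workflow identifiers are assumptions, not an API confirmed by this issue):

```python
from flytekit.configuration import Config
from flytekit.remote import FlyteRemote

# Sketch only: `envs` stands in for the launch-time dict described above,
# propagated to every container in the execution.
remote = FlyteRemote(
    Config.auto(),
    default_project="flytesnacks",
    default_domain="development",
)
wf = remote.fetch_workflow(name="my.workflows.wf", version="v1")
execution = remote.execute(wf, inputs={}, envs={"APP_ENV": "staging"})
```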
Describe alternatives you've considered
It feels more natural to use either a task's domain or the execution's domain to alter behavior, but perhaps not.
Another alternative is to add an explicit service environment variable to all tasks, but this is cumbersome and inelegant.
Propose: Link/Inline OR Additional context
No response
Are you sure this issue hasn't been raised already?
Have you read the Code of Conduct?