allow vars of the same name that refer to the same concrete value #1620
Conversation
Welcome @tkellen!
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: tkellen. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/assign @Liujingfang1
Hmm, I need a bit more time with this (though I would appreciate a review of the approach); it seems to produce non-deterministic results.
```diff
@@ -94,7 +94,21 @@ func (ra *ResAccumulator) MergeAccumulator(other *ResAccumulator) (err error) {
 	if err != nil {
 		return err
 	}
-	return ra.varSet.MergeSet(other.varSet)
+	err = ra.varSet.MergeSet(other.varSet)
```
Just back from dinner. Note to self: because map iteration is non-deterministic and this stops iteration at the first error, this is not always going to work. This method needs to always complete iteration, returning all errors for evaluation by the new clause below.
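A minimal sketch of the merge-all-then-report shape described in that note, using illustrative stand-ins for kustomize's `Var`/`VarSet` types (none of these names or signatures are the project's actual API):

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins for kustomize's Var/VarSet; the real types differ.
type Var struct {
	Name  string
	Value string
}

type VarSet struct {
	set map[string]Var
}

// MergeSet always completes iteration over the incoming set, collecting
// every conflict instead of returning at the first one. Go map iteration
// order is randomized, so an early return would surface a different error
// on each run; collecting them lets the caller evaluate all conflicts.
func (vs *VarSet) MergeSet(other *VarSet) error {
	var errs []error
	for name, v := range other.set {
		if existing, ok := vs.set[name]; ok && existing.Value != v.Value {
			errs = append(errs, fmt.Errorf(
				"var %q conflicts: %q vs %q", name, existing.Value, v.Value))
			continue // keep going: non-conflicting vars still merge
		}
		vs.set[name] = v
	}
	return errors.Join(errs...) // nil when there were no conflicts
}
```

Note `errors.Join` (Go 1.20+) returns nil when given no errors, so the happy path is unchanged.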
can you comment on why the check on var values, and deciding not to return an error if they match, cannot be done in the body of Var.Merge?
i'm not looking at this too deeply, but i thought that was where the fix was going to go
> can you comment on why the check on var values, and deciding not to return an error if they match, cannot be done in the body of Var.Merge?
Yes, I would be glad to comment. I have two responses.
1. I am certain it is possible to perform the check entirely within `Var.Merge`. Perhaps the `Var` type could gain a `Value` method that takes a `resaccumulator` as an argument (correspondingly, the `resaccumulator` would lose `findVarValueFromResources`). With this approach, `Var.Merge` would need access to the appropriate `resaccumulator`s so it could pass them down to the `Value` method.
2. Based on my read of the abstractions at play (such as I understand them after reading the code for less than a day), the `Var` type is, today, primarily concerned with recording where a var may be found and with ensuring that no duplicates exist. The nearest place in the API hierarchy (that I could see clearly) concerned with retrieving values was `resaccumulator`. If you feel that split of responsibilities is still correct given the new constraints we are trying to satisfy, I'll continue the approach I've taken thus far.
Given those two responses, could you comment on which approach hews most closely to your ideals for the codebase, @monopole? I'm quite happy to implement this in either fashion (or an altogether different one, provided you are willing to take the time to explain it or otherwise prompt me to discover it on my own).
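For illustration, here is roughly what the accumulator-side check discussed above could look like. The toy `resAccumulator` below, whose `resolved` map stands in for what `findVarValueFromResources` would compute, is an assumption for sketching purposes, not kustomize's real API: same-named vars merge cleanly when both sides resolve to the same concrete value, and conflict otherwise.

```go
package main

import "fmt"

// Illustrative-only type; kustomize's ResAccumulator differs.
type resAccumulator struct {
	// resolved maps a var name to the concrete value found in this
	// accumulator's resources (a stand-in for findVarValueFromResources).
	resolved map[string]string
}

// mergeVars implements the accumulator-side check: a same-named var
// coming from another accumulator is accepted when both sides resolve
// it to the same concrete value, and rejected otherwise.
func (ra *resAccumulator) mergeVars(other *resAccumulator) error {
	for name, incoming := range other.resolved {
		if existing, ok := ra.resolved[name]; ok && existing != incoming {
			return fmt.Errorf(
				"conflicting values for var %q: %q vs %q", name, existing, incoming)
		}
		ra.resolved[name] = incoming
	}
	return nil
}
```

The key design point is that only the accumulator can see resolved values, which is why the check cannot live in `Var.Merge` without handing `Var` a reference back up the hierarchy.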
thanks, sounds good. this error is in the view of the accumulator.
```diff
@@ -179,6 +179,141 @@ func TestResolveVarsVarNeedsDisambiguation(t *testing.T) {
 	}
 }
 
+func TestResolveVarsAllowableConflict(t *testing.T) {
```
please make some little private factory methods so the test reads more like:

```
rmFooApple := makeResourceMap("envFoo", "dataValApple")
rmBarApple := makeResourceMap("envBar", "dataValApple")
rmBarPeach := makeResourceMap("envBar", "dataValPeach")

vFoo := makeVar("envFoo")
vBar := makeVar("envBar")

ac0 := new accumulator
ac0.Append(rmFooApple)
ac0.MergeVar(vFoo)

ac1 := new accumulator
ac1.Append(rmBarApple)
ac1.MergeVar(vBar)

assert no error merging ac1 into ac0 because values the same

ac2 := new accumulator
ac2.Append(rmBarPeach)
ac2.MergeVar(vBar)
// works

assert error merging ac2 into ac1 because values different
assert error merging ac2 into ac0 because values different
```
Also please - for clarity - write this test as a distinct PR, where presumably all accumulation mergers fail. We'll merge that test, then rebase this PR to include it; this PR then adds the new code and a minor tweak to the test to show what doesn't error anymore.
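As a self-contained illustration of the reviewer's sketch, the factory-helper version of the test might look something like the following toy Go program. Every type and helper here is a hypothetical stand-in, not kustomize's real accumulator API; it only mirrors the shape of the proposed test.

```go
package main

import "fmt"

// Toy fixtures following the reviewer's sketch.
type resource struct{ env, value string }
type variable struct{ name string }

type accumulator struct {
	resources []resource
	vars      []variable
}

// makeResourceMap and makeVar are the "little private factory methods"
// that let the test body read as a sequence of named fixtures.
func makeResourceMap(env, value string) resource { return resource{env, value} }
func makeVar(name string) variable               { return variable{name} }

func (a *accumulator) Append(r resource)   { a.resources = append(a.resources, r) }
func (a *accumulator) MergeVar(v variable) { a.vars = append(a.vars, v) }

// resolve returns the concrete value a var name points at in this
// accumulator's resources.
func (a *accumulator) resolve(name string) (string, bool) {
	for _, r := range a.resources {
		if r.env == name {
			return r.value, true
		}
	}
	return "", false
}

// Merge fails when a same-named var resolves to different values on
// each side, and otherwise folds the other accumulator's vars in.
func (a *accumulator) Merge(other *accumulator) error {
	for _, v := range other.vars {
		theirs, ok := other.resolve(v.name)
		if !ok {
			continue
		}
		if ours, found := a.resolve(v.name); found && ours != theirs {
			return fmt.Errorf("var %q: %q vs %q", v.name, ours, theirs)
		}
		a.resources = append(a.resources, resource{v.name, theirs})
		a.vars = append(a.vars, v)
	}
	return nil
}
```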
Thanks for the clear direction. I'll get this done before EOD tomorrow.
@monopole, this incorporates the changes we discussed. Standing by for any nits, large or small.
(Force-pushed from 036e2f4 to 0e3d2c3; compare.)
The test below demonstrates that this PR, in its current form, is not right. It is actually causing wrong resolution of the variable. It allows two variables with the same name to point at two completely different objects. As long as the values match at the time of "conflict detection" you have the impression it works ok, but the resolveVars step then goes wrong. The following test shows that at the intermediate level, MergeVars does not detect any obvious conflict, since both IMAGE_NAME vars resolve to "bash", but in the final layer we patch the image value in container 1 to nginx. The bug is in the IMAGE_NAME resolution for container 2, which also resolves to nginx instead of bash.

```shell
DEMO_HOME=$(mktemp -d)
mkdir -p ${DEMO_HOME}/
mkdir -p ${DEMO_HOME}/base1
mkdir -p ${DEMO_HOME}/base2
mkdir -p ${DEMO_HOME}/final
mkdir -p ${DEMO_HOME}/intermediate

cat <<'EOF' >${DEMO_HOME}/base1/kustomization.yaml
resources:
- resources.yaml
vars:
- name: IMAGE_NAME
  objref:
    apiVersion: v1
    kind: Pod
    name: component1
  fieldref:
    fieldpath: spec.containers[0].image
EOF

cat <<'EOF' >${DEMO_HOME}/base1/resources.yaml
apiVersion: v1
kind: Pod
metadata:
  name: component1
spec:
  containers:
  - name: component1
    image: bash
    env:
    - name: POD_NAME
      value: component1
    - name: IMAGE_NAME
      value: $(IMAGE_NAME)
EOF

cat <<'EOF' >${DEMO_HOME}/base2/kustomization.yaml
resources:
- resources.yaml
vars:
- name: IMAGE_NAME
  objref:
    apiVersion: v1
    kind: Pod
    name: component2
  fieldref:
    fieldpath: spec.containers[0].image
EOF

cat <<'EOF' >${DEMO_HOME}/base2/resources.yaml
apiVersion: v1
kind: Pod
metadata:
  name: component2
spec:
  containers:
  - name: component2
    image: bash
    env:
    - name: POD_NAME
      value: component2
    - name: IMAGE_NAME
      value: $(IMAGE_NAME)
EOF

cat <<'EOF' >${DEMO_HOME}/intermediate/kustomization.yaml
resources:
- ../base1
- ../base2
EOF

cat <<'EOF' >${DEMO_HOME}/final/kustomization.yaml
resources:
- ../intermediate
patchesStrategicMerge:
- ./patch.yaml
EOF

cat <<'EOF' >${DEMO_HOME}/final/patch.yaml
apiVersion: v1
kind: Pod
metadata:
  name: component1
spec:
  containers:
  - name: component1
    image: nginx
EOF
```

Running `kustomize build ${DEMO_HOME}/final` produces the wrong output:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: component1
spec:
  containers:
  - env:
    - name: POD_NAME
      value: component1
    - name: IMAGE_NAME
      value: nginx
    image: nginx
    name: component1
---
apiVersion: v1
kind: Pod
metadata:
  name: component2
spec:
  containers:
  - env:
    - name: POD_NAME
      value: component2
    - name: IMAGE_NAME
      value: nginx
    image: bash
    name: component2
```
This process has helped me recognize a "strangeness" I have felt about variables since day one with kustomize. My mental model of this project is that every resource enumerated in a kustomization should provide no state to its consumer other than the raw yaml it would produce when rendered in isolation. Vars (as implemented today) break that isolation, and gh-1600 shows this clearly. Bearing this in mind, it seems to me that vars should only be "in scope" of the kustomization.yml where they were defined. Also, they should have no interaction with dollar variables embedded in incoming resources. I have no idea how to assemble a usable workflow under those constraints, but we're well down the theoretical-purity rabbit hole with this project, and I'm depending on a fork of a fork to fill the gap, so we may as well continue the exercise.
kustomize vars have always had a scope clarity problem. We could change / clarify the scope, but such a change will likely break someone's use case. We'd be better off deprecating them entirely, replaced with a mechanism (e.g. #1631) that doesn't rely on sprinkling
At this point I'm in support of removing vars from kustomize entirely, as long as there is some pure-yaml form that allows interpolating values from multiple object references with literally specified values (e.g. a cluster-wide secret prefix that a few hundred microservices append with application-specific values).
Currently on vacation, but I'll respond in detail when I return. @pwittrock, thanks for the prompting!
@pwittrock I do not think this PR should be closed until one of the following things happens:
I would be happy to tackle any of the above if a maintainer can provide some guidance. Honestly, y'all, vars in their current form are completely unusable. The issue tracker for this project is rife with people struggling to resolve symptoms of this issue. I deeply appreciate what you are doing with this project, but I really struggle to be kind in my assessment of how much cognitive energy you are burning on something y'all know to be broken. In my view it borders on unethical. I understand OSS is free, but this project is part of kubectl itself and needs to be held to a higher standard. Standing by for your thoughts.
@tkellen: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
cc @bkuschel who has also attempted to address this problem ^
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@tkellen: Reopened this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This resolves gh-1600 and is dependent on #1634.