Use Manifest Labeller for helm. #2105
Conversation
Codecov Report
@@ Coverage Diff @@
## master #2105 +/- ##
==========================================
+ Coverage 56.28% 56.63% +0.35%
==========================================
Files 180 180
Lines 7796 7728 -68
==========================================
- Hits 4388 4377 -11
+ Misses 2991 2931 -60
- Partials 417 420 +3
Continue to review full report at Codecov.
In general, I like the idea to retrieve the yamls via helm. It looks elegant. However, this changeset has substantial problems. The most important one is that labels are not applied yet (only the yaml resources are patched, but not applied).
Before this change, the patched resources were applied via a three-way json patch. I think this is still the best way to go. But this makes it questionable whether it helps at all to retrieve the yamls in the first place.
To be honest, I'm not sure about the best way to continue here.
@tejal29 Given that kokoro did not fail, we probably need a test which checks the labeller for all deployers? WDYT?
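For illustration, the missing apply step could be as small as piping the patched yamls through kubectl apply, which is what performs the three-way merge against the live objects. A minimal sketch, assuming kubectl is on the PATH; the function and parameter names are illustrative, not skaffold's actual code:

```go
package helm

import (
	"bytes"
	"fmt"
	"os/exec"
)

// applyManifests pipes the already-labelled yamls into `kubectl apply -f -`,
// which performs the three-way merge against the live objects.
func applyManifests(kubeContext, namespace string, manifests []byte) error {
	cmd := exec.Command("kubectl",
		"--context", kubeContext, "--namespace", namespace, "apply", "-f", "-")
	cmd.Stdin = bytes.NewReader(manifests)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl apply: %v: %s", err, out)
	}
	return nil
}
```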
Thanks @corneliusweig. Yes, I was going to add an integration test and also verify it manually for the examples. I will do it today or Monday :).
@corneliusweig and @balopat this is ready for final review.
1. Add kubectl apply command after setting labels.
2. Integration tests for helm are back.
   a. GoogleContainerTools#1823: Use env template in skaffold release name.
3. During helm deploy, a build is assumed, so if no build-artifacts are supplied it fails with the following error: "no build present for gcr.io/k8s-skaffold/skaffold-helm". Since Build and Deploy are now separate (see GoogleContainerTools#922 for notes), use the image value as is if it is not present in the build artifacts. Fix the test for this.

TODO:
1. Add PR description.
2. Figure out why we get 2 pods: one with valid labels which is running, and another one with different labels.
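A small sketch of the fallback described in point 3 above: look the image up in the build results and, when no artifact was built for it, keep the value as-is instead of failing. The function and map names are illustrative:

```go
package helm

// resolveImage returns the freshly built tag for imageName when a build
// artifact exists, and otherwise keeps the value as-is instead of failing
// with "no build present for <image>".
func resolveImage(imageName string, builds map[string]string) string {
	if tag, ok := builds[imageName]; ok {
		return tag
	}
	// No build step ran (e.g. plain `skaffold deploy`): use the image
	// value from the helm values unchanged.
	return imageName
}
```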
Just some small nits, otherwise the code looks good to me.
However, I am a bit worried about your finding that re-labeling the pods in the deployment's podTemplateSpec will redeploy the pods once more. Is this acceptable? After all, this means that every helm deployment will trigger two deployments. And this issue is new, because the previous labeller did not update the podTemplateSpec.
Have you also considered patching the chart templates directly?
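To spell out the concern: labels on the deployment object itself leave the pod template alone, while labels written into spec.template are part of the pod template and force another rollout. A minimal sketch using the upstream API types, for illustration only:

```go
package helm

import appsv1 "k8s.io/api/apps/v1"

// labelDeployment adds the skaffold labels to the deployment object only.
// Labels on the deployment's own metadata leave the pod template untouched,
// so the deployment controller does not roll the pods.
func labelDeployment(d *appsv1.Deployment, labels map[string]string) {
	if d.Labels == nil {
		d.Labels = map[string]string{}
	}
	for k, v := range labels {
		d.Labels[k] = v
	}
	// Writing the same labels into d.Spec.Template.Labels instead would
	// change the pod template and trigger the second rollout discussed above.
}
```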
So there's good news and bad news. 👍 The good news is that everyone that needs to sign a CLA (the pull request submitter and all commit authors) have done so. Everything is all good there. 😕 The bad news is that it appears that one or more commits were authored or co-authored by someone other than the pull request submitter. We need to confirm that all authors are ok with their commits being contributed to this project. Please have them confirm that here in the pull request. Note to project maintainer: This is a terminal state, meaning the
ℹ️ Googlers: Go here for more info.
Thanks @corneliusweig
This is true. I also discovered this when testing.
Yes, I thought about that; helm has a command
CLAs look good, thanks! ℹ️ Googlers: Go here for more info.
@tejal29 Looking at the helm docs suggests that the
Other options that come to my mind:
Thanks @corneliusweig, so I dug into the helm code: helm uses KubeClient to deploy the generated manifest. It's all done via the tiller service, so we probably have to wait until helm 3 for tillerless helm!
Re-open #2075 and get that in!
func (k *NSKubernetesClient) GetPods() *v1.PodList {
	pods, err := k.client.CoreV1().Pods(k.ns).List(meta_v1.ListOptions{})
	if err != nil {
		k.t.Fatalf("Could not find pods for in namespace %s", k.ns)
nit: "Could not find pods for in namespace"
@@ -57,6 +59,10 @@ func NewHelmDeployer(runCtx *runcontext.RunContext) *HelmDeployer {
	return &HelmDeployer{
		HelmDeploy:  runCtx.Cfg.Deploy.HelmDeploy,
		kubeContext: runCtx.KubeContext,
		kubectl: kubectl.CLI{
I'm a little confused by the alias you have above. You do
kubectl kubectl.CLI
but then here you have
kubectl: kubectl.CLI{
so is it actually kubectl.CLI.CLI? Maybe we can just get rid of the alias?
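For reference, a standalone sketch of the pattern in question (import path illustrative): the left-hand kubectl is only the struct field name, the right-hand kubectl.CLI is the type from the aliased package, so no kubectl.CLI.CLI is involved:

```go
package helm

import (
	kubectl "github.com/GoogleContainerTools/skaffold/pkg/skaffold/deploy/kubectl" // aliased import (path illustrative)
)

// HelmDeployer holds a kubectl CLI wrapper. The field is simply *named*
// kubectl; its type is the CLI struct from the aliased package.
type HelmDeployer struct {
	kubectl kubectl.CLI
}

func newHelmDeployer() *HelmDeployer {
	return &HelmDeployer{
		kubectl: kubectl.CLI{}, // field name : composite literal of the type
	}
}
```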
		return nil, fmt.Errorf("error retrieving helm deployment info: %s", releaseInfo.String())
	}
	return bufio.NewReader(&releaseInfo), nil
}

// Retrieve info about all releases using helm get
// Skaffold labels will be applied to each deployed k8s object
// Skaffold labels will be applied to each deployed k8s object including k8object from
nit: Skaffold labels will be applied to each deployed k8s object, including those created from remote charts
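For context, a hedged sketch of what the excerpt above boils down to: shell out to helm get manifest for the release (the helm 2 form of the command), capture the rendered yaml of everything the release deployed, including objects from remote charts, and hand it back as a reader for the labeller. Names are illustrative:

```go
package helm

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
)

// releaseManifests shells out to `helm get manifest <release>` and returns a
// reader over the rendered yaml of everything the release deployed, including
// objects coming from remote charts, so the generic manifest labeller can be
// reused on it.
func releaseManifests(kubeContext, release string) (*bufio.Reader, error) {
	var releaseInfo bytes.Buffer
	cmd := exec.Command("helm", "--kube-context", kubeContext, "get", "manifest", release)
	cmd.Stdout = &releaseInfo
	if err := cmd.Run(); err != nil {
		return nil, fmt.Errorf("error retrieving helm deployment info: %s", releaseInfo.String())
	}
	return bufio.NewReader(&releaseInfo), nil
}
```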
With #2568 this is not needed.
fixes #2074
Description of Changes:
1. Use helm get manifests to get all the helm-deployed objects and re-use the manifest labeller.
2. Integration tests for helm are back.
   a. helm integration tests flake #1823: Use env template in skaffold release name.
3. During helm deploy, a build is assumed, so if no build-artifacts are supplied, it fails with the following error:
   "no build present for gcr.io/k8s-skaffold/skaffold-helm"
   Since Build and Deploy are now separate ("skaffold deploy" is not useful #922 for notes), use the image value as is if it is not present in the build artifacts.
4. In Remove service labeling temporarily #959, we removed adding labels to services for the helm tool (only). The issue "Let the service controller retry when persistUpdate returns a conflict error" kubernetes/kubernetes#68087 is now fixed. The current integration test I have added has a service and it works fine.
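Since the revert of #959 is covered by the new integration test, here is a hedged sketch of the kind of check it could include to confirm the deployed service is labelled; the namespace, service name and label substring below are illustrative:

```go
package helm

import (
	"os/exec"
	"strings"
	"testing"
)

// checkServiceLabels asserts that the service deployed by the helm example
// carries a skaffold label. Namespace, service name and the substring checked
// for are illustrative.
func checkServiceLabels(t *testing.T, ns, svc string) {
	out, err := exec.Command("kubectl", "--namespace", ns,
		"get", "service", svc, "-o", "jsonpath={.metadata.labels}").Output()
	if err != nil {
		t.Fatalf("getting labels for service %s: %v", svc, err)
	}
	if !strings.Contains(string(out), "skaffold") {
		t.Errorf("service %s is missing skaffold labels: %s", svc, out)
	}
}
```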