Chart optionally installs CRD / CR Manager configurable for more strict clusters #344
Conversation
metadata:
  name: externalsecrets.kubernetes-client.io
  annotations:
    app.kubernetes.io/managed-by: helm
Not sure how to address the two different copies of the CRD... I'm also leveraging this annotation in the E2E tests, which is a bit hacky.
Perhaps in 4.x the custom resource manager can be removed in favor of the chart CRD install?
So for helm v3 we should probably put the CRD in the crds folder; these can't be templated though, IIRC.
We could probably source this file in the code as well to avoid copies, and overwrite metadata.annotations["app.kubernetes.io/managed-by"] in the code before upserting it.
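Something like this rough sketch could work (assuming js-yaml is available as a dependency; the resolved path and the replacement annotation value are illustrative):

```js
// Rough sketch: load the chart's CRD manifest from a single location and
// override the helm-managed annotation before the manager upserts it.
const fs = require('fs')
const path = require('path')
const yaml = require('js-yaml') // assumed dependency

function loadCustomResourceManifest () {
  const crdPath = path.resolve(
    __dirname,
    'charts/kubernetes-external-secrets/crds/kubernetes-client.io_externalsecrets_crd.yaml'
  )
  const manifest = yaml.load(fs.readFileSync(crdPath, 'utf8'))

  manifest.metadata.annotations = {
    ...manifest.metadata.annotations,
    // overwrite so the runtime copy isn't attributed to helm (value is illustrative)
    'app.kubernetes.io/managed-by': 'custom-resource-manager'
  }

  return manifest
}

module.exports = { loadCustomResourceManifest }
```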
For helm v2 backwards compat this file could be changed to something like
{{- if .Values.customResourceManagerDisabled }}
{{- if .Values.crds.create }}
{{- range $path, $bytes := .Files.Glob "crds/*.yaml" }}
{{ $.Files.Get $path }}
---
{{- end }}
{{- end }}
{{- end }}
Probably we should also set DISABLE_CUSTOM_RESOURCE_MANAGER to true by default if running helm v3.
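For reference, wiring that value through to the container env could look roughly like this in the deployment template (the value key mirrors the one discussed here; the exact structure of the chart's deployment manifest is assumed):

```yaml
# Illustrative deployment template excerpt, not the chart's actual manifest.
env:
  - name: DISABLE_CUSTOM_RESOURCE_MANAGER
    value: {{ .Values.customResourceManagerDisabled | quote }}
```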
Maybe this is all overkill 😅 Either way we should support both helm v2 and v3. I'll try to check with silas if he has any opinion on removing the CRD management from the runtime; I think it makes sense to favor the chart installing the CRD.
So perhaps we can go straight for the 4.x solution :)
Great suggestions for sourcing the CRD and v2 backwards compat, I'll give them a shot.
Alright, I got the CRD sourced from one location in the repo now, and added your template for helm v2 support. However, I couldn't conditionally set a value depending on whether helm v2 or v3 was being used, so DISABLE_CUSTOM_RESOURCE_MANAGER will have to default to false for now to remain backwards compatible.
The helm docs will need to be updated as a result to include --skip-crds and explain how to install the CRD via the chart using v2 and v3.
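For illustration, the two documented install paths might end up looking roughly like this (the release name and value names follow the snippets in this thread and could differ in the final docs):

```bash
# Helm v3: the CRD under crds/ is applied automatically; skip it to keep
# relying on the custom resource manager (the current default).
helm install my-release external-secrets/kubernetes-external-secrets --skip-crds

# Helm v2: crds/ is ignored, so disable the custom resource manager and let
# the templates render the CRD instead (value names as discussed above).
helm install --name my-release external-secrets/kubernetes-external-secrets \
  --set customResourceManagerDisabled=true
```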
Force-pushed from 40cc74c to 8bf9789
Force-pushed from d412409 to 54f1fcd
Force-pushed from efd9891 to 75e039a
@@ -6,7 +6,7 @@
```bash
$ helm repo add external-secrets https://godaddy.github.io/kubernetes-external-secrets/
-$ helm install external-secrets/kubernetes-external-secrets
+$ helm install --skip-crds external-secrets/kubernetes-external-secrets
```
Should it remain:
$ helm install --name my-release external-secrets/kubernetes-external-secrets
?
Which version of Helm are the docs targeting? I cannot think of a way to set the defaults so that the easy install works for both V3 and V2 without one of the versions requiring an additional --skip-crds or --set.
I'd say they've been targeting helm v2, but with the EOL of v2 around the corner it makes sense to target v3 as the default.
unit tests for manageCrd; always adding CRD manifest to client
setting AWS_DEFAULT_REGION in the helm chart
e2e tests pass locally
first pass at e2e test
fixing e2e test; running tests for both settings
e2e tests work
Helm 2 and 3 friendly solution w/ sane default
externalsecrets and localstack have to be deleted btwn runs to avoid flakiness
sourcing from one location; proper helm v3 and v2 support
fixing crds template for v2
helm version and custom resource manager build matrix
client only
template syntax differs between v2 and v3
matching operator crd naming scheme; fixing e2e args
chart docs
Force-pushed from 75e039a to 7b194c7
Sorry for the delay, I finally got around to testing on our lab OpenShift clusters and it looks good! This controller is great. I'll rebase off of master and update the docs, and then let's try to get this merged?
Force-pushed from 6d5b91f to 2c93d96
Is it correct to assume that currently you can't install the service with helm v3? I get an error on startup about not finding /openapi.
@Sytten can you provide more details? Are you saying master currently does not work with helm v3 or this branch specifically? What is erroring on startup? It might also help to provide which Kubernetes distro and version you are running on. I have not tried master honestly, but we are using helm v3 in argocd to deploy this version of the chart currently without issue.
@Flydiverny @silasbw hey folks, any chance I could get a review this week?
Sorry for the delays on this!
So from what I can tell, with the default settings this is not a breaking change, right? :)
@@ -104,6 +104,20 @@ class CustomResourceManager {
    throw new Error(`Unable to update resource ${resourceName}`)
  }

  async manageCrd ({ customResourceManifest }) {
    this._kubeClient.addCustomResourceDefinition(customResourceManifest)
Does this always need to be added? Perhaps only do this if it's not disabled? I'm thinking it would be discovered by await kubeClient.loadSpec() in the daemon above if CRDs are already in the cluster.
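A minimal sketch of that suggestion, with the constructor wiring assumed (only the method names shown in the diff above come from the actual code):

```js
// Sketch only: skip registering/upserting the CRD when the manager is
// disabled, relying on kubeClient.loadSpec() discovery in the daemon.
class CustomResourceManager {
  constructor ({ kubeClient, logger, disabled = false }) {
    this._kubeClient = kubeClient
    this._logger = logger
    this._disabled = disabled // assumed to come from DISABLE_CUSTOM_RESOURCE_MANAGER
  }

  async manageCrd ({ customResourceManifest }) {
    if (this._disabled) {
      this._logger.info('custom resource manager disabled, skipping CRD upsert')
      return
    }
    this._kubeClient.addCustomResourceDefinition(customResourceManifest)
    // ...the real implementation's upsert logic would continue here
  }
}

module.exports = CustomResourceManager
```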
steps:
  - uses: actions/checkout@v2
  - name: Setup node
    uses: actions/setup-node@v1
    with:
      node-version: 12
  - uses: azure/setup-helm@v1
    with:
      version: v2.16.1
Suggested change:
-      version: v2.16.1
+      version: v2.16.7
I can't merge this since the E2E status check is stuck as expected. I assume that's because of what's on master, but on this branch it's changed (as it should be). I'll try to poke @silasbw
Thanks @hfuss (and @Flydiverny for pinging me on slack)! 🥂
@Flydiverny yessir! I wrote the changes so that they would be backwards compatible for users who previously installed the chart using Helm V2 and relied on the CR manager. Helm V3 users relying on the CR manager will need additional configuration. And thanks @silasbw for merging this! 🍻
Changes
Controller
- Adds a DISABLE_CUSTOM_RESOURCE_MANAGER env var to the daemon config
- Updates the customResourceManager to use this new flag to determine whether or not it should attempt to upsert the CRD
Chart
- Sources the CRD from charts/kubernetes-external-secrets/crds/kubernetes-client.io_externalsecrets_crd.yaml
- Adds a customResourceManagerDisabled option to the Helm chart. This controls the respective env var, the scope of the cluster role, and determines whether or not to install the CRD as part of the chart. Defaults to false to remain backwards compatible.
- The chart apiVersion could be bumped from v1 to v2 for Helm V3 compatibility, only if that is desired, for future 4.x releases.
- Sets the AWS_DEFAULT_REGION env var in the helm chart
E2E Tests
- Adds E2E coverage for both custom resource manager settings
Justification
Closes #300
Also closes #249
Coupling the CRD creation to the controller requires an unnecessarily privileged cluster role.
In our OpenShift clusters, our admins would not be willing to grant the controller service account that role. We require the chart itself to install the CRD, so that the service account or admin installing the chart has the crd:create access while the controller is more tightly scoped. We have been using this fork in our lab clusters for a few weeks now with no issue.
This essentially deprecates the custom resource manager so that it can be removed entirely in future 4.x releases.
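As a rough illustration of what the tighter scoping can look like, the chart's ClusterRole could render the CRD rule only when the controller still manages the CRD itself (the rule below is a sketch, not the chart's actual template):

```yaml
# Illustrative ClusterRole excerpt: only grant CRD write access when the
# custom resource manager is still responsible for upserting the CRD.
{{- if not .Values.customResourceManagerDisabled }}
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["create", "get", "update"]
{{- end }}
```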