
Variable is expected to be string #1721

Closed
davidmogar opened this issue Nov 2, 2019 · 11 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

davidmogar commented Nov 2, 2019

I'm using a variable in my base to map the replicas of a Deployment to the minReplicas of a HorizontalPodAutoscaler. This is the basic config:

Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: puppetserver
spec:
  replicas:  $(REPLICAS)
...

HorizontalPodAutoscaler:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: puppetserver
spec:
  maxReplicas: 15
  minReplicas: 1

Configuration for varReferences:

varReference:
  - path: spec/replicas
    kind: Deployment
  - path: spec/metrics/object/describedObject/name
    kind: HorizontalPodAutoscaler
  - path: spec/scaleTargetRef/name
    kind: HorizontalPodAutoscaler

kustomization.yaml:

configurations:
  - kustomizeconfig/var_references.yaml

resources:
  - deployment.yaml
  - horizontal_pod_autoscaler.yaml

vars:
  - name: REPLICAS
    objref:
      kind: HorizontalPodAutoscaler
      name: puppetserver
      apiVersion: autoscaling/v1
    fieldref:
      fieldpath: spec.minReplicas

This works as expected and the variable gets replaced. But when I try to build an overlay that only contains this:

bases:
  - ../../base

replicas:
- name: puppetserver
  count: 5

I get the following error:

Error: "$(REPLICAS)" is expected to be string

What am I doing wrong?

This is happening with the latest release (3.3.0).

davidmogar reopened this Nov 2, 2019
davidmogar (Author) commented:

I see that if I change the HorizontalPodAutoscaler instead of the replicas directly, and let that change map onto the Deployment, it works as expected. Shouldn't it be possible to do what I'm trying to do anyway? I would expect to be able to set the Deployment replicas to something different from the minReplicas in the HorizontalPodAutoscaler.

jbrette (Contributor) commented Nov 2, 2019

You should probably use patchesStrategicMerge instead. Check here
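
For illustration, a minimal sketch of that suggestion, assuming the overlay wants 5 replicas and achieves it by patching the HorizontalPodAutoscaler's minReplicas rather than using the replicas transformer (the hpa_patch.yaml file name is hypothetical):

overlay/kustomization.yaml:

bases:
  - ../../base

patchesStrategicMerge:
  - hpa_patch.yaml

overlay/hpa_patch.yaml:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: puppetserver
spec:
  minReplicas: 5

With a patch like this, the new minReplicas value should also flow into the Deployment's replicas through the REPLICAS var already declared in the base.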

davidmogar (Author) commented:

You're not using vars in the way the documentation shows. Is this supposed to be a new way to do it? Can you link me to some info about this?

jbrette (Contributor) commented Nov 3, 2019

Sorry. I added some comments in the README.md on how to proceed and declare the variable manually, since you don't have access to the autovar feature.

davidmogar (Author) commented:

Is this the autovar you mean? Can you add a comment on this? I don't understand whether it's something we should actively use from a fork or whether we should do it manually. Is it a feature that is coming, or has it been closed for good?

jbrette (Contributor) commented Nov 4, 2019

1. The autovar feature PR was left to rot for four months, like four or five other feature PRs that are needed for our project.

2. Without the autovar feature, we had kustomization.yaml files that were 3000 lines long, so basically you spend more time trying to generate and manage your kustomization.yaml than getting actual work done. The autovar feature does not modify the syntax of kustomization.yaml, so it is fully backward compatible with kustomize/master.

3. The main maintainers want to make sharing values across Kubernetes objects even more complicated. What fits in one line with autovar requires ten lines in the kustomization.yaml and cannot cross the boundaries of the kustomization folder: check here. The same thread calls for deprecating variables.

I checked the other projects attempting to use kustomize at scale:
a) kind gave up and removed the kustomize dependency altogether
b) kubeflow goes through crazy steps in order to generate a kustomization.yaml that kustomize will potentially accept
c) cluster-api is forced to use "| envsubst" to address the variables issues (a sketch follows at the end of this comment)
d) replicatedhq/ship is stuck on kustomize 2.0.3
e) kubectl and kubeadm are currently stuck on kustomize 2.0.3

4. So to sum up, our project is manageable when we use those feature PRs. Maybe one day those features will be provided using another syntax and we will have to regenerate the kustomization.yaml files. Until then we can get things done with the allinone fork of kustomize, which just means running "git rebase" to deal with the daily refactoring of the kubernetes-sigs/kustomize master code.
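
To illustrate the "| envsubst" workaround from c) above: a minimal, hypothetical template that keeps a shell-style placeholder and relies on a pipeline such as kustomize build overlay | envsubst | kubectl apply -f - to fill it in from the environment. This reuses the Deployment from this thread purely as an example; it is not cluster-api's actual template:

# Hypothetical template: ${REPLICAS} is replaced by envsubst with the value of
# the REPLICAS environment variable before the manifest reaches the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: puppetserver
spec:
  replicas: ${REPLICAS}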

drennalls commented Nov 26, 2019

@jbrette yes, this is a real shame... I can definitely relate to some of the things you mentioned in #1721 (comment), specifically:

so basically you spend more time trying to generate and manage your kustomization.yaml than getting actual work done
...etc..
I checked the other projects attempting to use kustomize at scale:
a) kind gave up and removed the kustomize dependency altogether

... I'm at that point now of giving up and moving on.

fejta-bot commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Feb 24, 2020
fejta-bot commented:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Mar 25, 2020
fejta-bot commented:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

k8s-ci-robot (Contributor) commented:

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
