
namePrefix Configuration is broken #5272

Closed · sass1997 opened this issue Aug 18, 2023 · 9 comments
Labels: kind/bug · lifecycle/rotten · triage/resolved

Comments

@sass1997

What happened?

I have resources to which I don't want the namePrefix applied, e.g. PriorityClass. I would like to exclude them via a configuration.

What did you expect to happen?

I expected the prefix to be skipped only for the PriorityClass. Unfortunately, it is then dropped for all current and future resources; to get it back, you have to explicitly re-add an entry for every kind that should keep the namePrefix, one by one. Oddly, when you add such an entry, a lowercase kind name does not match: you have to write it in PascalCase, exactly as the API expects.

How can we reproduce it (as minimally and precisely as possible)?

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: sta-
resources:
- resources.yaml
configurations:
- kustomize-config-name-prefix.yaml
# resources.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: test
value: 1000000000
globalDefault: false
description: "This priority class is for high prio only!"
---
apiVersion: v1
kind: Service
metadata:
  name: my-lbs
spec:
  ports:
    - name: 5678-8080
      port: 5678
      protocol: TCP
      targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer
# kustomize-config-name-prefix.yaml
namePrefix:
  - path: metadata/name
    group: scheduling.k8s.io
    version: v1
    kind: priorityclass # lowercase due to https://stackoverflow.com/questions/64218188/selectively-apply-nameprefix-namesuffix-in-kustomize
    skip: true

Expected output

# resources.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: test
value: 1000000000
globalDefault: false
description: "This priority class is for high prio only!"
---
apiVersion: v1
kind: Service
metadata:
  name: sta-my-lbs
spec:
  ports:
    - name: 5678-8080
      port: 5678
      protocol: TCP
      targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer

Actual output

# resources.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: test
value: 1000000000
globalDefault: false
description: "This priority class is for high prio only!"
---
apiVersion: v1
kind: Service
metadata:
  name: my-lbs
spec:
  ports:
    - name: 5678-8080
      port: 5678
      protocol: TCP
      targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer

Kustomize version

v5.1.0

Operating system

None

@sass1997 added the kind/bug label Aug 18, 2023
@k8s-ci-robot added the needs-triage label Aug 18, 2023
@DiptoChakrabarty (Member)

Tried this with the same kustomization.yaml, the only difference being these files:

# resources.yaml
apiVersion: v1
kind: Pod
metadata:
  name: prefix-pod-suffix # modified
spec:
  containers:
  - image: image
    name: pod
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: test
value: 1000000000
globalDefault: false
description: "This priority class is for high prio only!"
---
apiVersion: v1
kind: Service
metadata:
  name: my-lbs
spec:
  ports:
    - name: 5678-8080
      port: 5678
      protocol: TCP
      targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer
# kustomize-config-name-prefix.yaml
namePrefix:
  - path: metadata/name
    #group: scheduling.k8s.io
    apiVersion: v1
    kind: Pod
    skip: true

Got this result:

apiVersion: v1
kind: Service
metadata:
  name: my-lbs
spec:
  ports:
  - name: 5678-8080
    port: 5678
    protocol: TCP
    targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer
---
apiVersion: scheduling.k8s.io/v1
description: This priority class is for high prio only!
globalDefault: false
kind: PriorityClass
metadata:
  name: test
value: 1000000000
---
apiVersion: v1
kind: Pod
metadata:
  name: addition-prefix-pod-suffix
spec:
  containers:
  - image: image
    name: pod

Two things seem to be happening here. First, once a configurations file is supplied, the kustomization's namePrefix is no longer applied across the board; only the kinds listed in the configuration are picked up. Second, the skip statement in the prefix config is ignored, as the Pod example above shows (I tried a Service as well, with the same result).
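
The first observation is consistent with the override replacing, rather than extending, kustomize's built-in default fieldspec for namePrefix. To the best of my knowledge, that built-in default is essentially a single catch-all entry (a sketch from memory, not quoted from this thread):

# built-in namePrefix fieldspec (approximate)
namePrefix:
- path: metadata/name

If supplying a configurations file swaps out that catch-all for only the entries listed, then every unlisted kind loses the prefix, which matches the output above.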

@DiptoChakrabarty (Member)

On trying with a kustomization.yaml file like this:

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: addition-
resources:
- resources.yaml

Resultant file

apiVersion: v1
kind: Service
metadata:
  name: addition-my-lbs
spec:
  ports:
  - name: 5678-8080
    port: 5678
    protocol: TCP
    targetPort: 8080
  selector:
    app: my-lbs
  type: LoadBalancer
---
apiVersion: scheduling.k8s.io/v1
description: This priority class is for high prio only!
globalDefault: false
kind: PriorityClass
metadata:
  name: addition-test
value: 1000000000
---
apiVersion: v1
kind: Pod
metadata:
  name: addition-prefix-pod-suffix
spec:
  containers:
  - image: image
    name: pod

@natasha41575 (Contributor)

natasha41575 commented Nov 22, 2023

I can't find any documentation on this skip: true field. It would be helpful for someone to go through the code or past issues and reason about what the behavior of skip: true should be. From there, we can figure out the discrepancy between the "expected" behavior and the actual behavior, and triage this issue. In general, the skip field seems buggy and we need to fix it.

/assign @ncapps

@ncapps (Contributor)

ncapps commented Nov 28, 2023

Hi @natasha41575, I was not able to find any references to a skip: true field for configurations overrides. This test demonstrates that we can select, by resource type, which resources have a prefix, a suffix, or both applied to their names. Resource types are skipped if they are not listed in the override configuration. There may be context I am not aware of, but a skip: true field does not seem necessary.

Example below

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: foo-
namespace: ns1
resources:
  - deployment.yaml
  - config.yaml
configurations:
  - name-prefix-rules.yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment1
# config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
# name-prefix-rules.yaml
namePrefix:
- path: metadata/name
  apiVersion: v1
  kind: Deployment

Output

This example shows that the foo- prefix was added only to the Deployment, because Deployment is listed in the namePrefix configuration override. No prefix was added to the ConfigMap, because ConfigMap is not listed in the override.

apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: ns1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-deployment1
  namespace: ns1

@natasha41575 (Contributor)

Thank you @ncapps for the thorough investigation! Agree that the skip: true field does not seem necessary. Could you leave a comment on #3945 reminding us that the skip: true field is not documented or well-understood, and that we will need to address that if we do decide to keep the configurations field?

For this issue, it looks like the alternative configuration Nick suggested in #5272 (comment) might be sufficient for this use case. @sass1997 Could you give it a try and let us know if it resolves your issue?

/triage resolved
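
Applied to the original reproduction, that alternative would look something like the sketch below (an illustration of the allow-list approach, not a config taken from this thread): list only the kinds that should receive the prefix, and PriorityClass is skipped simply by being omitted, with no skip field at all.

# kustomize-config-name-prefix.yaml (allow-list form, sketch)
namePrefix:
- path: metadata/name
  version: v1
  kind: Service # PascalCase; per the report above, lowercase does not match

With this configuration, the build should prefix the Service as sta-my-lbs and leave the PriorityClass name untouched, matching the expected output in the report.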

@k8s-ci-robot added the triage/resolved label and removed the needs-triage label Dec 12, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Mar 11, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Apr 10, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) May 10, 2024
@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

