kubectl --prune does not delete last managed item(s) in namespace #555

Closed
TheKangaroo opened this issue Nov 5, 2018 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@TheKangaroo

Is this a BUG REPORT or FEATURE REQUEST?:
BUG

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: baremetal/vmware cluster
  • OS (e.g. from /etc/os-release): CoreOS 1855.4.0
  • Kernel (e.g. uname -a): 4.14.67-coreos
  • Install tools: vanilla kubernetes
  • Others:

What happened:
kubectl apply --prune won't delete a resource (or multiple resources) in a namespace if the applied manifests no longer contain any resource with the selected label in that namespace.

What you expected to happen:
prune should delete every resource with the specified label that is not in the set of applied resources, regardless of which namespace it lives in.

How to reproduce it (as minimally and precisely as possible):
Apply one pod to a first namespace and two pods to a second namespace.
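Both namespaces need to exist before the manifests are applied; assuming a fresh cluster, something along these lines:

$ kubectl create namespace first
$ kubectl create namespace second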

$ cat all.yaml
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: default-http-backend
    testlable: managed
  name: default-http-1
  namespace: first
spec:
  containers:
  - image: gcr.io/google_containers/defaultbackend:1.4
    imagePullPolicy: IfNotPresent
    name: default-http-backend
    ports:
    - containerPort: 8080
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: default-http-backend
    testlable: managed
  name: default-http-1
  namespace: second
spec:
  containers:
  - image: gcr.io/google_containers/defaultbackend:1.4
    imagePullPolicy: IfNotPresent
    name: default-http-backend
    ports:
    - containerPort: 8080
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: default-http-backend
    testlable: managed
  name: default-http-2
  namespace: second
spec:
  containers:
  - image: gcr.io/google_containers/defaultbackend:1.4
    imagePullPolicy: IfNotPresent
    name: default-http-backend
    ports:
    - containerPort: 8080
      protocol: TCP


$ kubectl apply --prune -l 'testlable=managed' --cascade=true -f all.yaml
pod/default-http-1 created
pod/default-http-2 created
pod/default-http-1 created
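
To double-check that all three pods are present and carry the label before pruning, something like this should work:

$ kubectl get pods --all-namespaces -l testlable=managed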

If you remove just one of the pod definitions from the second namespace, prune works as expected:

$ cat only-two.yaml
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: default-http-backend
    testlable: managed
  name: default-http-1
  namespace: first
spec:
  containers:
  - image: gcr.io/google_containers/defaultbackend:1.4
    imagePullPolicy: IfNotPresent
    name: default-http-backend
    ports:
    - containerPort: 8080
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: default-http-backend
    testlable: managed
  name: default-http-1
  namespace: second
spec:
  containers:
  - image: gcr.io/google_containers/defaultbackend:1.4
    imagePullPolicy: IfNotPresent
    name: default-http-backend
    ports:
    - containerPort: 8080
      protocol: TCP

$ kubectl apply --prune -l 'testlable=managed' --cascade=true -f only-two.yaml
pod/default-http-1 unchanged
pod/default-http-1 unchanged
pod/default-http-2 pruned

But if I want to delete both resources from the second namespace, nothing happens:

$ cat only-one.yaml
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: default-http-backend
    testlable: managed
  name: default-http-1
  namespace: first
spec:
  containers:
  - image: gcr.io/google_containers/defaultbackend:1.4
    imagePullPolicy: IfNotPresent
    name: default-http-backend
    ports:
    - containerPort: 8080
      protocol: TCP

$ kubectl apply --prune -l 'testlable=managed' --cascade=true -f only-one.yaml
pod/default-http-1 unchanged
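
The two pods in the second namespace are still there afterwards, which is easy to verify with something like:

$ kubectl get pods -n second -l testlable=managed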

Anything else we need to know:
This happens regardless of whether I remove one or several resources from the second namespace: as soon as the applied manifests contain no managed resources for that namespace, prune won't touch the second namespace at all.
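
A possible workaround would be to clean up the leftover resources by label explicitly whenever a namespace drops out of the applied set, roughly:

# sketch only: removes everything in the second namespace that still carries the managed label
$ kubectl delete pods -n second -l testlable=managed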

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 3, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 5, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@schlichtanders

@TheKangaroo can you reopen this?
I seem to have run into the same issue.

@TheKangaroo
Author

I'm sorry, but I no longer work on the clusters where we used apply --prune back then, so I cannot reproduce it anymore. I think it would be best if you open a new issue and link to this one.
