deleting Docker cluster hangs on password prompt: sudo podman ps
#7958
Comments
This is a side-effect of the podman driver; it should probably not run unless configured.
When you do use the podman driver, it first asks you to set up sudo access without a password. Most likely the best approach here is to always use …
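A minimal sketch of a non-interactive approach (an assumption on my part about what is meant above; sudo's `-n` flag makes it fail fast instead of prompting, which would avoid the hang):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `sudo -n` (non-interactive) fails with "a password is required"
	// instead of hanging on a password prompt when passwordless sudo
	// is not configured for this user.
	out, err := exec.Command("sudo", "-n", "podman", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Printf("skipping podman: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}
```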
@andk: thanks for reporting, definitely not intended behaviour! Will open another issue, to avoid always running the docker and podman volume commands.
Added #7960 to avoid running podman in the first place (if not using the podman driver, that is).
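A hedged sketch of what such a guard could look like (not the actual #7960 patch; the function name and signature are invented for illustration):

```go
package main

import (
	"fmt"
	"os/exec"
)

// deletePodmanVolumes sketches the idea behind #7960: only shell out to
// podman when the profile was actually created with the podman driver.
func deletePodmanVolumes(driver, label string) error {
	if driver != "podman" {
		return nil // docker (or any other) driver: never invoke podman, so no sudo prompt
	}
	cmd := exec.Command("sudo", "-n", "podman", "volume", "prune", "-f",
		"--filter", "label="+label)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("prune podman volumes: %v: %s", err, out)
	}
	return nil
}

func main() {
	// With the docker driver this is a no-op, so `minikube delete` never prompts.
	if err := deletePodmanVolumes("docker", "name.minikube.sigs.k8s.io=minikube"); err != nil {
		fmt.Println(err)
	}
}
```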
With the new behaviour (…), this was assuming that docker and podman were installed but not yet given root access (#7963). Otherwise it would look more like: … But nothing to the user: 🙄 "minikube" profile does not exist, trying anyways.
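The pre-flight check mentioned for #7963 might look roughly like this (a sketch under the assumption that it gates on the binary existing and sudo working without a prompt; the helper name is made up):

```go
package main

import (
	"fmt"
	"os/exec"
)

// podmanUsable is a hypothetical pre-flight check: podman must be on PATH
// and sudo must succeed non-interactively (`sudo -n` fails otherwise).
func podmanUsable() bool {
	if _, err := exec.LookPath("podman"); err != nil {
		return false // podman not installed at all
	}
	return exec.Command("sudo", "-n", "podman", "version").Run() == nil
}

func main() {
	fmt.Println("podman usable without prompting:", podmanUsable())
}
```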
minikube delete --all: a prompt from sudo appears (sudo podman ps). It was only partially resolved.
Steps to reproduce the issue:
./out/minikube start
./out/minikube delete --all
The start command looks good. The delete command leads to a prompt from sudo (actually nine prompts, since I always answer with just RETURN). Expected is no prompt from sudo. Is this a regression or a changed behaviour? Last time I tried the same sequence of commands (a couple of days ago), the delete step worked without any sudo prompt. After the third unsuccessful attempt to get something done via sudo, the delete command seems to work, and there are no logs left over (./out/minikube logs answers: There is no local cluster named "minikube"). I feel there's something not correct in this setup. What shall I try?

Full output of failed command:
% ./out/minikube delete --all --alsologtostderr
I0501 11:28:55.690433 31894 cli_runner.go:108] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}
I0501 11:28:55.768173 31894 cli_runner.go:108] Run: docker ps -a --filter label=created_by.minikube.sigs.k8s.io=true --format {{.Names}}
I0501 11:28:55.857771 31894 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0501 11:28:55.945993 31894 cli_runner.go:108] Run: docker exec --privileged -t minikube /bin/bash -c "sudo init 0"
I0501 11:28:57.153000 31894 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0501 11:28:57.355540 31894 oci.go:504] container minikube status is Stopped
I0501 11:28:57.355566 31894 oci.go:516] Successfully shutdown container minikube
I0501 11:28:57.355612 31894 cli_runner.go:108] Run: docker rm -f -v minikube
I0501 11:28:57.489665 31894 volumes.go:34] trying to delete all docker volumes with label created_by.minikube.sigs.k8s.io=true
I0501 11:28:57.490017 31894 cli_runner.go:108] Run: docker volume ls --filter label=created_by.minikube.sigs.k8s.io=true --format {{.Name}}
I0501 11:28:57.557507 31894 cli_runner.go:108] Run: docker volume rm --force minikube
I0501 11:28:57.920941 31894 volumes.go:56] trying to prune all docker volumes with label created_by.minikube.sigs.k8s.io=true
I0501 11:28:57.921009 31894 cli_runner.go:108] Run: docker volume prune -f --filter label=created_by.minikube.sigs.k8s.io=true
🔥 Deleting "minikube" in docker ...
I0501 11:28:58.006147 31894 cli_runner.go:108] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Names}}
I0501 11:28:58.087536 31894 volumes.go:34] trying to delete all docker volumes with label name.minikube.sigs.k8s.io=minikube
I0501 11:28:58.087599 31894 cli_runner.go:108] Run: docker volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}
I0501 11:28:58.179693 31894 volumes.go:56] trying to prune all docker volumes with label name.minikube.sigs.k8s.io=minikube
I0501 11:28:58.179757 31894 cli_runner.go:108] Run: docker volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube
I0501 11:28:58.258713 31894 cli_runner.go:108] Run: sudo podman ps -a --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Names}}
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
[sudo] password for sand:
Sorry, try again.
[sudo] password for sand:
Sorry, try again.
[sudo] password for sand:
I0501 11:29:06.663100 31894 cli_runner.go:147] Completed: sudo podman ps -a --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Names}}: (8.404352766s)
I0501 11:29:06.663135 31894 volumes.go:34] trying to delete all podman volumes with label name.minikube.sigs.k8s.io=minikube
I0501 11:29:06.663208 31894 cli_runner.go:108] Run: sudo podman volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
[sudo] password for sand:
Sorry, try again.
[sudo] password for sand:
Sorry, try again.
[sudo] password for sand:
I0501 11:29:14.111421 31894 cli_runner.go:147] Completed: sudo podman volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}: (7.448182524s)
W0501 11:29:14.111462 31894 delete.go:211] error deleting volumes (might be okay).
To see the list of volumes run: 'docker volume ls'
:[listing volumes by label "name.minikube.sigs.k8s.io=minikube": sudo podman volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}: exit status 1
stdout:
stderr:
sudo: 3 incorrect password attempts
]
I0501 11:29:14.111580 31894 volumes.go:56] trying to prune all podman volumes with label name.minikube.sigs.k8s.io=minikube
I0501 11:29:14.111655 31894 cli_runner.go:108] Run: sudo podman volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
[sudo] password for sand:
Sorry, try again.
[sudo] password for sand:
Sorry, try again.
[sudo] password for sand:
I0501 11:29:23.003258 31894 cli_runner.go:147] Completed: sudo podman volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube: (8.891577654s)
W0501 11:29:23.003307 31894 delete.go:216] error pruning volume (might be okay):
[prune volume by label name.minikube.sigs.k8s.io=minikube: sudo podman volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube: exit status 1
stdout:
stderr:
sudo: 3 incorrect password attempts
]
I0501 11:29:23.005642 31894 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0501 11:29:23.086619 31894 delete.go:75] Unable to get host status for minikube, assuming it has already been deleted: state: unknown state "minikube": docker inspect minikube --format={{.State.Status}}: exit status 1
stdout:
stderr:
Error: No such object: minikube
🔥 Removing /home/sand/.minikube/machines/minikube ...
I0501 11:29:23.097300 31894 lock.go:35] WriteFile acquiring /home/sand/.kube/config: {Name:mk505f793881700367e2a950c92de29206d7625a Clock:{} Delay:500ms Timeout:1m0s Cancel:}
💀 Removed all traces of the "minikube" cluster.
🔥 Successfully deleted all profiles
Full output of minikube start command used, if not already included:

% ./out/minikube start
😄 minikube v1.10.0-beta.2 on Debian bullseye/sid (xen/amd64)
✨ Automatically selected the docker driver
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.18.1 preload ...
> preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4: 525.47 MiB
🔥 Creating docker container (CPUs=2, Memory=3200MB) ...
🐳 Preparing Kubernetes v1.18.1 on Docker 19.03.2 ...
▪ kubeadm.pod-network-cidr=10.244.0.0/16
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"