err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set #15808
Comments
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Facing the same problem on Ubuntu 22.04 LTS.
Same problem: Ubuntu 22.04 LTS, minikube v1.27.0.
What Happened?
When I run: minikube start --driver docker
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
stderr:
W0208 10:17:29.009679 4472 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
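The deprecation warning in that stderr output is about the criSocket value missing a URL scheme. For anyone driving kubeadm directly rather than through minikube, a minimal sketch of the corrected setting (kubeadm v1beta3 config; the file name is illustrative) would look like:

# kubeadm-config.yaml (hypothetical file name)
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock

With the unix:// scheme spelled out, kubeadm no longer has to prepend it for you.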
So I run: sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Wed 2023-02-08 18:17:07 CST; 5s ago
Docs: https://kubernetes.io/docs/
Process: 1575983 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET>
Main PID: 1575983 (code=exited, status=1/FAILURE)
[tcnsh@yourname roger]$ kubelet
E0208 18:17:25.956737 1576104 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
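The error message names the missing flag itself, so as a quick sanity check (assuming cri-dockerd is actually listening on that socket) you could re-run the binary with the endpoint set explicitly:

sudo kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock

It will likely still exit for other reasons (no kubeconfig, no certificates), but it should get past flag validation, which isolates the failure to the missing --container-runtime-endpoint argument.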
journalctl -u kubelet -q | tail
Feb 08 18:18:29 yourname.idle-or-running.project.scaleflux.com kubelet[1578524]: E0208 18:18:29.386498 1578524 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 08 18:18:29 yourname.idle-or-running.project.scaleflux.com systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 08 18:18:29 yourname.idle-or-running.project.scaleflux.com systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 08 18:18:39 yourname.idle-or-running.project.scaleflux.com systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart.
Feb 08 18:18:39 yourname.idle-or-running.project.scaleflux.com systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19371.
Feb 08 18:18:39 yourname.idle-or-running.project.scaleflux.com systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 08 18:18:39 yourname.idle-or-running.project.scaleflux.com systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 08 18:18:39 yourname.idle-or-running.project.scaleflux.com kubelet[1578588]: E0208 18:18:39.636667 1578588 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 08 18:18:39 yourname.idle-or-running.project.scaleflux.com systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 08 18:18:39 yourname.idle-or-running.project.scaleflux.com systemd[1]: kubelet.service: Failed with result 'exit-code'.
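systemd is restarting kubelet every 10 seconds with the same flags (note the restart counter of 19371), so a by-hand fix has to go into the unit configuration. A minimal sketch, assuming the runtime socket is the cri-dockerd one from the logs above; the drop-in file name is arbitrary, and KUBELET_EXTRA_ARGS is the variable the kubeadm-packaged 10-kubeadm.conf already reserves for extra flags:

# /etc/systemd/system/kubelet.service.d/20-container-runtime.conf (hypothetical drop-in)
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///var/run/cri-dockerd.sock"

Then reload and restart:

sudo systemctl daemon-reload
sudo systemctl restart kubelet

Note that with the docker driver, minikube runs its own kubelet inside the minikube container, so the host kubelet unit crash-looping is only relevant if you also run kubeadm on the host, as the installation steps below do.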
Installation process:
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io -y
sudo systemctl start docker
sudo cat /etc/group | grep docker
sudo usermod -aG docker $USER
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
sudo rpm -Uvh minikube-latest.x86_64.rpm
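One gap worth flagging in these steps: they install Docker Engine and kubelet, but nothing that actually serves a CRI socket. kubelet v1.24+ dropped the built-in dockershim, so using Docker as the runtime requires cri-dockerd as a shim. A hedged sketch of filling that gap (the version and RPM name are assumptions; check the cri-dockerd releases page for current artifacts):

# cri-dockerd provides the CRI socket at unix:///var/run/cri-dockerd.sock
curl -LO https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.1/cri-dockerd-0.3.1-3.el7.x86_64.rpm
sudo rpm -ivh cri-dockerd-0.3.1-3.el7.x86_64.rpm
sudo systemctl enable --now cri-docker.socket

Without a CRI endpoint on the host, 'systemctl enable --now kubelet' just starts the crash loop shown in the journalctl output above.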
Attach the log file
log.txt
Operating System
Redhat/Fedora
Driver
Docker