vSphere CSI Node Pods Consistently CrashLoopBackOff #2802
Comments
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
We also have this.
Probably a dup of #2863.
@sathieu This is just a warning message stating that there are no preferential datastores in the K8s cluster; you can ignore this message if you are not using the feature. You do not have to disable the feature.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Hello,
I've been experiencing an issue with my vSphere CSI Node pods.
They continuously enter a CrashLoopBackOff state and won't resume normal operation until I restart the underlying Kubernetes node VM.
So after restarting the VM cube-worker-storage-01-test-dev.dc, the vsphere-csi-node-4bp6l pod is running successfully (a rough sketch of this restart sequence follows the environment details below).

Environment:
vsphere-csi-driver: 3.1.2 - deployed as-is from https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/v3.1.2/manifests/vanilla/vsphere-csi-driver.yaml
k8s: v1.27
vSphere: 7.0.2
Compatibility: ESXi 7.0 U2 and later (VM version 19)
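For reference, the restart workaround looks roughly like this (a sketch, assuming the Kubernetes node name matches the VM name above; the drain/uncordon steps are the usual precaution around a node reboot, not something specific to the CSI driver):

$ kubectl drain cube-worker-storage-01-test-dev.dc --ignore-daemonsets --delete-emptydir-data
$ # reboot the VM from vSphere, then wait for the node to report Ready again
$ kubectl uncordon cube-worker-storage-01-test-dev.dc
$ kubectl -n vmware-system-csi get pods -o wide | grep vsphere-csi-node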
Here is the relevant pod data:
$ k get nodes
$ k get pods
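To narrow this down to just the CSI node DaemonSet pods, something like the following should work (assuming the app=vsphere-csi-node label from the vanilla manifest):

$ kubectl -n vmware-system-csi get pods -l app=vsphere-csi-node -o wide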
I've also gathered the logs from each pod, but haven't been able to identify the cause of this issue.
vmware-system-csi/vsphere-csi-controller-86dffc5954-7x592:csi-attacher
vmware-system-csi/vsphere-csi-controller-86dffc5954-7x592:csi-provisioner
vmware-system-csi/vsphere-csi-controller-86dffc5954-7x592:csi-resizer
vmware-system-csi/vsphere-csi-controller-86dffc5954-7x592:csi-snapshotter
vmware-system-csi/vsphere-csi-controller-86dffc5954-7x592:vsphere-csi-controller
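To collect the controller-side logs listed above in one pass, a small loop like this works (container names as in the list; pod name from my cluster):

$ for c in csi-attacher csi-provisioner csi-resizer csi-snapshotter vsphere-csi-controller; do
>   kubectl -n vmware-system-csi logs vsphere-csi-controller-86dffc5954-7x592 -c "$c" > "controller-$c.log"
> done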
$ k logs vsphere-csi-node-4bp6l -c node-driver-registrar
$ k logs vsphere-csi-node-4bp6l -c vsphere-csi-node
$ k logs --previous vsphere-csi-node-x4j5m -c node-driver-registrar
$ k logs --previous vsphere-csi-node-x4j5m -c vsphere-csi-node
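Besides the raw logs, the last termination state of the crashing containers might point at the failing piece; a sketch, assuming the same pod name as above:

$ kubectl -n vmware-system-csi describe pod vsphere-csi-node-x4j5m
$ # last exit codes of each container in the pod
$ kubectl -n vmware-system-csi get pod vsphere-csi-node-x4j5m -o jsonpath='{.status.containerStatuses[*].lastState.terminated.exitCode}'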
Any advice or suggestions regarding this issue would be greatly appreciated.
Thank you in advance for your help.