CSI driver should not rely on device path reported by OpenStack #150
Comments
This has been fixed in the Kubernetes in-tree Cinder volume plugin in kubernetes/kubernetes#33128 by introducing …
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
PV Watchdog automating manual procedures of the Cisco SOP regarding kubernetes/cloud-provider-openstack#150 and kubernetes/kubernetes#33128 (a rough sketch of this loop follows below):
- watches events for pods
- deletes a pod that
  - has a relevant Cinder emptyPath event
  - is in Pending phase
  - hasn't been deleted in the past 60 sec
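Reading that description, the watchdog is essentially a poll-and-delete loop over Pending pods and their events. Below is a minimal client-go sketch of that loop under stated assumptions: the event text it matches ("empty device path"), interpreting the 60-second guard as pod age, and the polling interval are guesses for illustration, not the actual PV Watchdog code.

```go
// Hypothetical sketch of the watchdog loop described above; the event-message
// match, the 60s age guard, and the 30s poll interval are assumptions.
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	for {
		pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for i := range pods.Items {
			pod := &pods.Items[i]
			// Only consider pods that have been stuck Pending for at least 60 seconds.
			if pod.Status.Phase != corev1.PodPending || time.Since(pod.CreationTimestamp.Time) < 60*time.Second {
				continue
			}
			// Look through the pod's events for the Cinder "empty device path" symptom.
			events, err := client.CoreV1().Events(pod.Namespace).List(context.TODO(), metav1.ListOptions{
				FieldSelector: fmt.Sprintf("involvedObject.kind=Pod,involvedObject.name=%s", pod.Name),
			})
			if err != nil {
				continue
			}
			for _, ev := range events.Items {
				if strings.Contains(ev.Message, "empty device path") { // assumed marker
					// Delete the pod so its controller recreates it and the attach/mount is retried.
					_ = client.CoreV1().Pods(pod.Namespace).Delete(context.TODO(), pod.Name, metav1.DeleteOptions{})
					break
				}
			}
		}
		time.Sleep(30 * time.Second)
	}
}
```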
/kind bug
What happened:
I'm running the CSI driver from this repo and my volume got attached as /dev/vdc, while the CSI driver returned DevicePath: /dev/vdb as attachment metadata. This volume cannot be mounted into a pod because NodePublish can't find the volume.

What you expected to happen:
The driver either reports the correct DevicePath, or it is able to find the volume on the node without trusting DevicePath.
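For illustration, here is a sketch of one way a node plugin can find the device without trusting the cloud-reported path: on KVM/virtio guests the disk serial is typically the Cinder volume ID truncated to 20 characters, so a /dev/disk/by-id symlink can be resolved instead. The function name, the candidate link prefixes, and the fallback behaviour are assumptions for this sketch, not this driver's actual implementation.

```go
// Hypothetical sketch: resolve the block device for a Cinder volume by its
// volume ID (exposed as the disk serial) and only fall back to the path the
// cloud reported. Names and link prefixes are illustrative assumptions.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// findDeviceByVolumeID prefers lookup by serial under /dev/disk/by-id and
// falls back to the cloud-reported path only if that path actually exists.
func findDeviceByVolumeID(volumeID, reportedPath string) (string, error) {
	// virtio truncates the serial to the first 20 characters of the volume ID.
	serial := volumeID
	if len(serial) > 20 {
		serial = serial[:20]
	}
	candidates := []string{
		"/dev/disk/by-id/virtio-" + serial,
		"/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_" + serial,
	}
	for _, link := range candidates {
		if dev, err := filepath.EvalSymlinks(link); err == nil {
			return dev, nil
		}
	}
	if _, err := os.Stat(reportedPath); err == nil {
		return reportedPath, nil
	}
	return "", fmt.Errorf("volume %s not found on this node", volumeID)
}

func main() {
	dev, err := findDeviceByVolumeID("6c3f2a0e-1b2c-4d5e-8f90-123456789abc", "/dev/vdb")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("resolved device:", dev)
}
```

With something like this, NodeStage/NodePublish could mount whatever the by-id lookup resolves to and only fail if neither the by-id links nor the reported path exist.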
How to reproduce it (as minimally and precisely as possible):
I don't know what I did to OpenStack to attach the volume as /dev/vdc instead of /dev/vdb (there is no vdb device), but since that happened I can reproduce the bug reliably by just creating a pod.

Environment:
I don't have access to the actual servers...
Kernel (e.g. uname -a): 3.10.0-862.el7.x86_64