Unremoved RBD volume after removing PVC with snapshot #227
@rollandf can you provide logs from the provisioner and node plugin?
From the logs:
If a snapshot exists for a volume, I think we cannot delete the volume.
Maybe we should add a validation so that a PVC with snapshots cannot be deleted as long as the snapshots exist?
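A minimal sketch of what such a validation could look like, assuming the driver shells out to the rbd CLI and surfaces the case as a CSI FAILED_PRECONDITION error; the package and helper names here are hypothetical, not ceph-csi's actual code:

```go
// Hypothetical pre-delete validation: refuse to delete a volume whose RBD
// image still has snapshots. The CLI-based listing is an assumption.
package rbdsketch

import (
	"encoding/json"
	"fmt"
	"os/exec"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

type rbdSnap struct {
	Name string `json:"name"`
}

// snapshotsOnImage lists snapshots of pool/image via `rbd snap ls --format json`.
func snapshotsOnImage(pool, image string) ([]rbdSnap, error) {
	out, err := exec.Command("rbd", "snap", "ls", "--format", "json", pool+"/"+image).Output()
	if err != nil {
		return nil, fmt.Errorf("rbd snap ls failed: %w", err)
	}
	var snaps []rbdSnap
	if err := json.Unmarshal(out, &snaps); err != nil {
		return nil, err
	}
	return snaps, nil
}

// checkDeletable returns FAILED_PRECONDITION while snapshots remain, so the
// driver can fail the DeleteVolume call cleanly instead of leaking the image.
func checkDeletable(pool, image string) error {
	snaps, err := snapshotsOnImage(pool, image)
	if err != nil {
		return status.Error(codes.Internal, err.Error())
	}
	if len(snaps) > 0 {
		return status.Errorf(codes.FailedPrecondition,
			"rbd image %s/%s still has %d snapshot(s); delete them first", pool, image, len(snaps))
	}
	return nil
}
```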
Status 39 is ENOTEMPTY.
So rbd is doing the right thing but our driver is not following the process.
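For context, errno 39 on Linux is ENOTEMPTY, and the failed `rbd rm` above surfaces it as the process exit status. A small sketch of how a CLI-driven flow could recognize that case (the helper name is illustrative):

```go
// Illustrative only: treat `rbd rm` exiting with ENOTEMPTY (39 on Linux) as
// "image still has snapshots" rather than a generic failure.
package rbdsketch

import (
	"errors"
	"fmt"
	"os/exec"
	"syscall"
)

func removeImage(pool, image string) error {
	err := exec.Command("rbd", "rm", pool+"/"+image).Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == int(syscall.ENOTEMPTY) {
		return fmt.Errorf("image %s/%s still has snapshots: %w", pool, image, err)
	}
	return err
}
```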
Looks like flatten is the way to go to decouple the image and the snapshot.
In the case of snapshots I think it's OK to keep the linking in place and not take the time penalty of a flatten; along those lines, attempts to delete the parent volume while snapshots exist should fail. For clones, or create-from-snapshot, however, we should definitely use flatten, resulting in an independent volume object. I'd be happy to work on this if @Madhu-1 or @rollandf aren't interested.
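A sketch of the flatten step for the clone / create-from-snapshot path, assuming the driver drives the rbd CLI; once flattened, the new image no longer depends on its parent snapshot:

```go
// Illustrative flatten of a cloned image so it becomes independent of the
// parent snapshot; the helper name is hypothetical.
package rbdsketch

import (
	"fmt"
	"os/exec"
)

func flattenImage(pool, image string) error {
	out, err := exec.Command("rbd", "flatten", pool+"/"+image).CombinedOutput()
	if err != nil {
		return fmt.Errorf("rbd flatten %s/%s failed: %w (%s)", pool, image, err, out)
	}
	return nil
}
```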
@j-griffith Thank you, please do send a PR.
/assign j-griffith
One of the design goals for the RBD trash was to handle situations like this: removing the RBD image should actually move it to the RBD trash rather than deleting it outright. The one immediate issue with this is that krbd won't currently allow you to open images that are in the trash (for example, when trying to map one into a pod). This can be worked around by always creating a new clone for volume snapshots and using that cloned image for all snapshot volume operations.
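A rough sketch of that trash-based approach, again assuming the rbd CLI is available to the driver; `moveImageToTrash` is a hypothetical helper, not existing ceph-csi code:

```go
// Illustrative only: move the image to the RBD trash instead of removing it,
// so data referenced by snapshots/clones stays available; purging is a
// separate, later step.
package rbdsketch

import (
	"fmt"
	"os/exec"
)

func moveImageToTrash(pool, image string) error {
	out, err := exec.Command("rbd", "trash", "mv", pool+"/"+image).CombinedOutput()
	if err != nil {
		return fmt.Errorf("rbd trash mv %s/%s failed: %w (%s)", pool, image, err, out)
	}
	return nil
}
```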
Just pointing out that text has been added to the spec to address this condition as well; see here. We could now return pre-condition failures if snapshots exist for a given volume and hence it cannot be deleted. @dillaman given this, what are your thoughts on deleting images with snapshots?
If the natural representation within k8s is that snapshots can have independent lifetimes from the volume they were created from, then personally I like the idea of creating a brand new clone for each snapshot so that they are also represented as first-class citizens within RBD.
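One possible shape of that "independent clone per snapshot" idea, written as a CLI-driven sketch; the snap create/protect, clone, flatten, cleanup sequence and all names are assumptions for illustration, not the driver's actual implementation:

```go
// Hypothetical flow: materialize a volume snapshot as a standalone RBD image
// by cloning a temporary RBD snapshot and flattening the clone.
package rbdsketch

import (
	"fmt"
	"os/exec"
)

func rbd(args ...string) error {
	out, err := exec.Command("rbd", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("rbd %v failed: %w (%s)", args, err, out)
	}
	return nil
}

// cloneSnapshotAsImage turns pool/src into an independent image pool/dst.
func cloneSnapshotAsImage(pool, src, tmpSnap, dst string) error {
	steps := [][]string{
		{"snap", "create", pool + "/" + src + "@" + tmpSnap},
		{"snap", "protect", pool + "/" + src + "@" + tmpSnap},
		{"clone", pool + "/" + src + "@" + tmpSnap, pool + "/" + dst},
		{"flatten", pool + "/" + dst},
		{"snap", "unprotect", pool + "/" + src + "@" + tmpSnap},
		{"snap", "rm", pool + "/" + src + "@" + tmpSnap},
	}
	for _, step := range steps {
		if err := rbd(step...); err != nil {
			return err
		}
	}
	return nil
}
```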
Closing as this is a duplicate of #70.
Describe the bug
Unremoved RBD volume after removing PVC with snapshot
Kubernetes and Ceph CSI Versions
Kubernetes: 1.13
Ceph CSI: 1.0.0
Ceph CSI Driver logs
Post logs from rbdplugin/cephfs plugin, provisioner, etc.
To Reproduce
From the examples/rbd dir (with Block mode for all PVCs):
Expected behavior
The RBD volume from step 1 should be deleted.