This repository has been archived by the owner on Oct 20, 2022. It is now read-only.
[BUG] PVC is mistakenly deleted when the controller reads stale deletionTimestamp information #316
Comments
Hey @srteam2020, really interesting finding. I've never been able to experience such a behavior, but I don't think I've used the HA feature, which is part of 1.19 IIRC. If you can provide a PR it'd be great! Let me know if some help is needed.
Hey @cscetbon
Bug Report
We find that in an HA k8s cluster, casskop can mistakenly delete the PVCs of the Cassandra pods after casskop experiences a restart. After diagnosis and inspection, we found the cause to be potential staleness in one of the apiservers.
More concretely, if casskop receives a stale update event from an apiserver saying "CassandraCluster has a non-nil deletion timestamp", the controller will delete the PVCs for each DataCenter and Rack belonging to the current CassandraCluster by calling `DeletePVCs(ReconcileCassandraCluster, dcName, rackName)`. Inside `DeletePVCs`, the PVCs to be deleted are fetched by Cluster, DataCenter, and Rack name. Note that the stale event actually comes from the deletion of a previous CassandraCluster sharing the same name (but with a different UID) as the currently running one, so the PVCs of the existing CassandraCluster will be listed and deleted mistakenly.

One potential approach to fix this is to label each PVC with the UID of its CassandraCluster at creation time, and to list PVCs by that UID in `ListPVC`, so that we always delete the right PVCs.

What did you do?
We list the concrete steps we took that led to this issue:

1. Create the CassandraCluster `cc` with `deletePVC` enabled. PVCs will be created in the cluster.
2. Delete `cc`. Apiserver1 will send the update events with a non-nil deletion timestamp to the controller, and the controller will trigger `DeletePVCs()` to delete the related PVCs for each rack. Meanwhile, apiserver2 is partitioned, so its watch cache stops at the moment that `cc` is tagged with a deletion timestamp.
3. Create `cc` again. Now the CassandraCluster comes back with a different UID, and so do its PVCs. However, apiserver2 still holds the stale view that `cc` has a non-nil deletion timestamp and is about to be deleted.
4. The controller restarts and reads from apiserver2. After reading the stale view that `cc` has a deletion timestamp, the controller lists all the PVCs belonging to the currently running `cc` (as mentioned above) and deletes them all.

What did you expect to see?
The controller should not try to delete the PVCs even when reading a stale `deletionTimestamp`.

What did you see instead? Under which circumstances?
The controller deletes the PVCs after the restart, as described in the steps above.
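The failure described above can be condensed into a small, self-contained Go sketch. The struct fields, label keys (`cassandracluster`, `dc`, `rack`), and UID values are hypothetical simplifications for illustration; the real casskop code works against the Kubernetes API with label selectors, not these types.

```go
package main

import "fmt"

// PVC is a hypothetical, simplified stand-in for a Kubernetes PVC.
type PVC struct {
	Name   string
	Labels map[string]string
}

// clusterView is a hypothetical view of a CassandraCluster object as
// served by one apiserver. An empty DeletionTimestamp means "not being
// deleted".
type clusterView struct {
	Name              string
	UID               string
	DeletionTimestamp string
}

// listPVCsByName mimics the name-only matching described in the report:
// it selects PVCs by cluster, datacenter, and rack labels, ignoring the
// owning cluster's UID entirely.
func listPVCsByName(pvcs []PVC, cluster, dc, rack string) []PVC {
	var out []PVC
	for _, p := range pvcs {
		if p.Labels["cassandracluster"] == cluster &&
			p.Labels["dc"] == dc && p.Labels["rack"] == rack {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	// After step 3: cc was re-created, so its PVC belongs to a new UID.
	pvcs := []PVC{{Name: "data-cc-dc1-rack1-0", Labels: map[string]string{
		"cassandracluster": "cc", "dc": "dc1", "rack": "rack1"}}}

	// Step 4: the restarted controller reads apiserver2's frozen cache,
	// which still shows the *old* cc (uid-1) with a deletion timestamp.
	stale := clusterView{Name: "cc", UID: "uid-1",
		DeletionTimestamp: "2021-05-07T00:00:00Z"}

	if stale.DeletionTimestamp != "" {
		// Matching by name only, the stale event selects the fresh
		// cluster's PVC for deletion.
		doomed := listPVCsByName(pvcs, stale.Name, "dc1", "rack1")
		fmt.Println(len(doomed)) // prints 1
	}
}
```

The point of the sketch: nothing in the deletion path compares the stale event's UID (`uid-1`) against the running cluster, so the name match alone is enough to doom the new PVC.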
Environment
casskop version: f87c8e0 (master branch)
Kubernetes version information: 1.18.9
Cassandra version: 3.11
Possible Solution
We are willing to help fix this bug by issuing a PR.
As mentioned above, the bug can be avoided by labeling each PVC with the UID of its CassandraCluster and listing PVCs by UID. Each CassandraCluster always gets a different UID, even when re-created with the same name, so the PVCs belonging to the newly created CassandraCluster will not be deleted by stale events for the old one.
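A minimal sketch of that fix, again with hypothetical types and a made-up label key (`cassandracluster-uid`); the real change would go through casskop's `ListPVC` and the Kubernetes label selector API:

```go
package main

import "fmt"

// PVC is a hypothetical, simplified stand-in for a Kubernetes PVC.
type PVC struct {
	Name   string
	Labels map[string]string
}

// createPVC stamps the owning CassandraCluster's UID onto the PVC as a
// label at creation time.
func createPVC(name, clusterUID string) PVC {
	return PVC{Name: name, Labels: map[string]string{
		"cassandracluster-uid": clusterUID}}
}

// listPVCsByUID selects only PVCs owned by the given cluster UID, so a
// stale event carrying an old UID cannot match a re-created cluster's
// PVCs, even though the cluster name is identical.
func listPVCsByUID(pvcs []PVC, clusterUID string) []PVC {
	var out []PVC
	for _, p := range pvcs {
		if p.Labels["cassandracluster-uid"] == clusterUID {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	// The re-created cluster's PVC carries the new UID.
	pvcs := []PVC{createPVC("data-cc-dc1-rack1-0", "uid-2")}

	// The stale deletion event refers to the old, deleted cluster.
	staleEventUID := "uid-1"

	// Listing by UID, the stale event matches nothing to delete.
	fmt.Println(len(listPVCsByUID(pvcs, staleEventUID))) // prints 0
}
```

Because a UID is assigned by the apiserver and never reused across re-creations of a same-named object, filtering on it makes the stale event harmless by construction.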