CORTX-30684: change retain policy to Delete for local-path storage #260
Conversation
Signed-off-by: Keith Pine <[email protected]>
I did a loop of deploy/destroy and noticed that some of the data directories are still left over. In the local-path-provisioner log, I see messages like:

In the destroy script, we manually delete the PVs:

My guess is that manually deleting the volume confuses the provisioner: it doesn't handle a volume being removed while it's still in its work queue.
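The actual snippet from the destroy script isn't reproduced here; as a hypothetical illustration only, the manual cleanup amounts to something like the following (the StorageClass name and the awk column for STORAGECLASS in `kubectl get pv` output are assumptions, not the script's actual code):

```bash
# Hypothetical sketch, not the actual destroy-cortx-cloud.sh code:
# delete every PV provisioned by the local-path StorageClass.
# STORAGECLASS is the 7th column of `kubectl get pv` output.
for pv in $(kubectl get pv --no-headers | awk '$7 == "local-path" {print $1}'); do
  kubectl delete pv "$pv"
done
```

Under a `Delete` reclaim policy the provisioner performs this removal itself, so a manual delete like the one above races with the provisioner's own work queue.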
based on everything else we've seen and have in plan, this lgtm.
Signed-off-by: Keith Pine <[email protected]>
As we're moving towards handling as much k8s functionality as possible, it makes sense to me to remove the manual removal of PVs when the underlying StorageClass and dynamic provisioner take care of that. It will help with separation of roles, as long as we're still able to run atomic deploy/destroy loops.
Description
Change the `reclaimPolicy` of the local path StorageClass from `Retain` to `Delete`. Note that `Delete` is the default policy for a dynamic-provisioner StorageClass that doesn't specify one, and is also the default for the Local Path Provisioner.

Using a `Retain` policy means the cluster administrator needs to manually clean up the backing data for k8s volumes after a user deletes their PVCs and PVs. In development usage, the local node partitions can easily run out of space without active management, causing deployments to fail. In comparison, if the policy is `Delete`, the provisioner automatically removes the backing data on the nodes when the PVCs are deleted (and the PVs are unbound). This is the behavior that makes sense as a default for users of the deployment scripts.
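For reference, a minimal sketch of the resulting StorageClass, assuming the stock Rancher local-path provisioner; the metadata name and binding mode here are illustrative, not necessarily the exact values used by the deployment scripts:

```bash
# Minimal sketch of a local-path StorageClass with the new reclaim policy.
# Name and binding mode are illustrative; reclaimPolicy is the change at hand.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
reclaimPolicy: Delete            # previously Retain
volumeBindingMode: WaitForFirstConsumer
EOF
```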
It is no longer necessary to manually remove PVs, since the provisioner does this itself; in fact, removing them manually at the same time causes the provisioner to fail to remove the node data.
Breaking change
Users should be aware of this change to the default behavior: running the destroy script will now remove all data from the cluster nodes (which is, after all, the intent).
The `destroy-cortx-cloud.sh` script no longer manually removes third-party PVs. This means that when the script is run against a cluster where the PV reclaim policies are `Retain`, the PVs will not be removed; users should manually delete the PVs and the data on the cluster nodes (see Additional information below for an example).

Type of change
Applicable issues
While debugging 31356, I wanted to ensure that no left-over Pod data was influencing repeat runs. Switching to the `Delete` reclaim policy ensures the left-over data is cleaned up automatically.
How was this tested?
Deploy and destroy. I see all node data removed after the destroy.
Saved files into an S3 bucket, and confirmed the data is still accessible after stopping and starting the cluster (stop/start scripts).
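A sketch of the kind of spot check this implies, assuming the provisioner's upstream default data path `/opt/local-path-provisioner` and hypothetical node names (the actual path depends on the StorageClass configuration):

```bash
# Hypothetical verification: after destroy-cortx-cloud.sh finishes, the
# provisioner's backing-data directory on each node should be empty.
# /opt/local-path-provisioner is the upstream default; adjust as configured.
for node in node-1 node-2 node-3; do
  ssh "$node" 'ls -A /opt/local-path-provisioner'   # expect no output
done
```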
Additional information
If users prefer to keep the `Retain` policy, PVs can be modified after they are created: https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/
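For example, per the linked page, an existing PV can be switched back to `Retain` with a patch (`<pv-name>` is a placeholder):

```bash
# Switch an existing PV back to the Retain reclaim policy.
kubectl patch pv <pv-name> \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```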
Note that future plans include allowing the use of any StorageClass, not just our specific local-path provisioner. In that case, users are free to define the StorageClass however they prefer, and to clean up PVs and data as required.
Checklist
If this change addresses a CORTX Jira issue:
CORTX-XXXXX: