From f9a7b089f8205001d04d978008f2db242687caa6 Mon Sep 17 00:00:00 2001 From: Santhosh Lakshmanan Date: Thu, 13 Apr 2023 13:26:41 -0400 Subject: [PATCH 1/2] Added replication module known issue --- content/docs/replication/deployment/powerscale.md | 4 ++-- content/docs/replication/deployment/storageclasses.md | 2 +- content/docs/replication/release/_index.md | 4 +++- content/v1/replication/deployment/powerscale.md | 4 ++-- content/v1/replication/deployment/storageclasses.md | 2 +- content/v1/replication/release/_index.md | 1 + content/v2/replication/deployment/storageclasses.md | 2 +- content/v2/replication/release/_index.md | 1 + content/v3/replication/deployment/storageclasses.md | 2 +- content/v3/replication/release/_index.md | 1 + 10 files changed, 14 insertions(+), 9 deletions(-) diff --git a/content/docs/replication/deployment/powerscale.md b/content/docs/replication/deployment/powerscale.md index 86fe3ef03c..c93b8c8389 100644 --- a/content/docs/replication/deployment/powerscale.md +++ b/content/docs/replication/deployment/powerscale.md @@ -162,8 +162,8 @@ driver: "isilon" reclaimPolicy: "Delete" replicationPrefix: "replication.storage.dell.com" remoteRetentionPolicy: - RG: "Retain" - PV: "Retain" + RG: "Delete" + PV: "Delete" parameters: rpo: "Five_Minutes" ignoreNamespaces: "false" diff --git a/content/docs/replication/deployment/storageclasses.md b/content/docs/replication/deployment/storageclasses.md index 6421c01abe..6be1a87cd1 100644 --- a/content/docs/replication/deployment/storageclasses.md +++ b/content/docs/replication/deployment/storageclasses.md @@ -43,7 +43,7 @@ replication.storage.dell.com/remotePVRetentionPolicy: 'delete' | 'retain' If the remotePVRetentionPolicy is set to 'delete', the corresponding PV would be deleted. -If the remotePVRetentionPolicy is set to 'retain', the corresponding PV would be retained. +If the remotePVRetentionPolicy is set to 'retain', the corresponding PV would be retained. This is not applicable for file system replication. By default, if the remotePVRetentionPolicy is not specified in the Storage Class, replicated PV resources are retained. diff --git a/content/docs/replication/release/_index.md b/content/docs/replication/release/_index.md index a27503d49a..7506451852 100644 --- a/content/docs/replication/release/_index.md +++ b/content/docs/replication/release/_index.md @@ -24,4 +24,6 @@ Description: > ### Known Issues -There are no known issues at this time. +| Github ID | Description | +| --------------------------------------------- | ------------------------------------------------------------------ | +| [753](https://github.com/dell/csm/issues/753) | **PowerScale:** When Persistent Volumes (PVs) are created with quota enabled on CSM versions 1.6.0 and before, an incorrect quota gets set for the target side read-only PVs/directories based on the consumed non-zero source size instead of the assigned quota of the source. This can create issues when the user performs failover and wants to write data to the failed over site. If lower quota limit was set, no new writes can be performed on the target side post failover. Refer to the KB article xxx for workaround. 
| diff --git a/content/v1/replication/deployment/powerscale.md b/content/v1/replication/deployment/powerscale.md index 1d8c61c44f..2757829748 100644 --- a/content/v1/replication/deployment/powerscale.md +++ b/content/v1/replication/deployment/powerscale.md @@ -161,8 +161,8 @@ driver: "isilon" reclaimPolicy: "Delete" replicationPrefix: "replication.storage.dell.com" remoteRetentionPolicy: - RG: "Retain" - PV: "Retain" + RG: "Delete" + PV: "Delete" parameters: rpo: "Five_Minutes" ignoreNamespaces: "false" diff --git a/content/v1/replication/deployment/storageclasses.md b/content/v1/replication/deployment/storageclasses.md index 042d351d72..cee9765d5e 100644 --- a/content/v1/replication/deployment/storageclasses.md +++ b/content/v1/replication/deployment/storageclasses.md @@ -43,7 +43,7 @@ replication.storage.dell.com/remotePVRetentionPolicy: 'delete' | 'retain' If the remotePVRetentionPolicy is set to 'delete', the corresponding PV would be deleted. -If the remotePVRetentionPolicy is set to 'retain', the corresponding PV would be retained. +If the remotePVRetentionPolicy is set to 'retain', the corresponding PV would be retained. This is not applicable for file system replication. By default, if the remotePVRetentionPolicy is not specified in the Storage Class, replicated PV resources are retained. diff --git a/content/v1/replication/release/_index.md b/content/v1/replication/release/_index.md index 33d56c7cf5..dbdf633e6d 100644 --- a/content/v1/replication/release/_index.md +++ b/content/v1/replication/release/_index.md @@ -19,3 +19,4 @@ There are no new features in this release. | Github ID | Description | | --------------------------------------------- | --------------------------------------------------------------------------------------- | | [523](https://github.com/dell/csm/issues/523) | **PowerScale:** Artifacts are not properly cleaned after deletion. | +| [753](https://github.com/dell/csm/issues/753) | **PowerScale:** When Persistent Volumes (PVs) are created with quota enabled on CSM versions 1.6.0 and before, an incorrect quota gets set for the target side read-only PVs/directories based on the consumed non-zero source size instead of the assigned quota of the source. This can create issues when the user performs failover and wants to write data to the failed over site. If lower quota limit was set, no new writes can be performed on the target side post failover. Refer to the KB article xxx for workaround. | diff --git a/content/v2/replication/deployment/storageclasses.md b/content/v2/replication/deployment/storageclasses.md index 042d351d72..cee9765d5e 100644 --- a/content/v2/replication/deployment/storageclasses.md +++ b/content/v2/replication/deployment/storageclasses.md @@ -43,7 +43,7 @@ replication.storage.dell.com/remotePVRetentionPolicy: 'delete' | 'retain' If the remotePVRetentionPolicy is set to 'delete', the corresponding PV would be deleted. -If the remotePVRetentionPolicy is set to 'retain', the corresponding PV would be retained. +If the remotePVRetentionPolicy is set to 'retain', the corresponding PV would be retained. This is not applicable for file system replication. By default, if the remotePVRetentionPolicy is not specified in the Storage Class, replicated PV resources are retained. 
diff --git a/content/v2/replication/release/_index.md b/content/v2/replication/release/_index.md index d110de5734..1f25c6602b 100644 --- a/content/v2/replication/release/_index.md +++ b/content/v2/replication/release/_index.md @@ -28,3 +28,4 @@ Description: > | [514](https://github.com/dell/csm/issues/514) | **PowerScale:** When creating a replicated PV in PowerScale, the replicated PV's AzServiceIP property has the target PowerScale endpoint instead of the one defined in the target Storage class. | | [515](https://github.com/dell/csm/issues/515) | **PowerScale:** If you failover with an application still running and the volume mounted on the target site, then we cannot mount the PVC due to : "mount.nfs: Stale file handle". | | [518](https://github.com/dell/csm/issues/518) | **PowerScale:** On CSM for Replication with PowerScale, after a repctl failover to a target cluster, the source directory has been removed from the PowerScale. The PersistentVolume Object is still present in Kubernetes. | +| [753](https://github.com/dell/csm/issues/753) | **PowerScale:** When Persistent Volumes (PVs) are created with quota enabled on CSM versions 1.6.0 and before, an incorrect quota gets set for the target side read-only PVs/directories based on the consumed non-zero source size instead of the assigned quota of the source. This can create issues when the user performs failover and wants to write data to the failed over site. If lower quota limit was set, no new writes can be performed on the target side post failover. Refer to the KB article xxx for workaround. | diff --git a/content/v3/replication/deployment/storageclasses.md b/content/v3/replication/deployment/storageclasses.md index df85a44833..8869913551 100644 --- a/content/v3/replication/deployment/storageclasses.md +++ b/content/v3/replication/deployment/storageclasses.md @@ -43,7 +43,7 @@ replication.storage.dell.com/remotePVRetentionPolicy: 'delete' | 'retain' If the remotePVRetentionPolicy is set to 'delete', the corresponding PV would be deleted. -If the remotePVRetentionPolicy is set to 'retain', the corresponding PV would be retained. +If the remotePVRetentionPolicy is set to 'retain', the corresponding PV would be retained. This is not applicable for file system replication. By default, if the remotePVRetentionPolicy is not specified in the Storage Class, replicated PV resources are retained. diff --git a/content/v3/replication/release/_index.md b/content/v3/replication/release/_index.md index d110de5734..1f25c6602b 100644 --- a/content/v3/replication/release/_index.md +++ b/content/v3/replication/release/_index.md @@ -28,3 +28,4 @@ Description: > | [514](https://github.com/dell/csm/issues/514) | **PowerScale:** When creating a replicated PV in PowerScale, the replicated PV's AzServiceIP property has the target PowerScale endpoint instead of the one defined in the target Storage class. | | [515](https://github.com/dell/csm/issues/515) | **PowerScale:** If you failover with an application still running and the volume mounted on the target site, then we cannot mount the PVC due to : "mount.nfs: Stale file handle". | | [518](https://github.com/dell/csm/issues/518) | **PowerScale:** On CSM for Replication with PowerScale, after a repctl failover to a target cluster, the source directory has been removed from the PowerScale. The PersistentVolume Object is still present in Kubernetes. 
| +| [753](https://github.com/dell/csm/issues/753) | **PowerScale:** When Persistent Volumes (PVs) are created with quota enabled on CSM versions 1.6.0 and before, an incorrect quota gets set for the target side read-only PVs/directories based on the consumed non-zero source size instead of the assigned quota of the source. This can create issues when the user performs failover and wants to write data to the failed over site. If lower quota limit was set, no new writes can be performed on the target side post failover. Refer to the KB article xxx for workaround. | From df2a726a9cfc40b5e6ef161b8fe2f5e9d3145ca7 Mon Sep 17 00:00:00 2001 From: Santhosh Lakshmanan Date: Mon, 17 Apr 2023 13:22:25 -0400 Subject: [PATCH 2/2] Added workaround steps --- content/docs/replication/release/_index.md | 2 +- content/v1/replication/release/_index.md | 2 +- content/v2/replication/release/_index.md | 2 +- content/v3/replication/release/_index.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/content/docs/replication/release/_index.md b/content/docs/replication/release/_index.md index 7506451852..d0eb2786f2 100644 --- a/content/docs/replication/release/_index.md +++ b/content/docs/replication/release/_index.md @@ -26,4 +26,4 @@ Description: > | Github ID | Description | | --------------------------------------------- | ------------------------------------------------------------------ | -| [753](https://github.com/dell/csm/issues/753) | **PowerScale:** When Persistent Volumes (PVs) are created with quota enabled on CSM versions 1.6.0 and before, an incorrect quota gets set for the target side read-only PVs/directories based on the consumed non-zero source size instead of the assigned quota of the source. This can create issues when the user performs failover and wants to write data to the failed over site. If lower quota limit was set, no new writes can be performed on the target side post failover. Refer to the KB article xxx for workaround. | +| [753](https://github.com/dell/csm/issues/753) | **PowerScale:** When Persistent Volumes (PVs) are created with quota enabled on CSM versions 1.6.0 and before, an incorrect quota gets set for the target side read-only PVs/directories based on the consumed non-zero source size instead of the assigned quota of the source. This can create issues when the user performs failover and wants to write data to the failed over site. If lower quota limit is set, no new writes can be performed on the target side post failover.
**Workaround** using the PowerScale cluster CLI or UI:
For each Persistent Volume on the source Kubernetes cluster:
1. Get the quota assigned to the directory on the source PowerScale cluster. The directory path can be obtained from the specification field of the Persistent Volume object.
2. Verify the quota of the target directory on the target PowerScale cluster. If incorrect quota is set, update the quota on the target directory with the same information as on the source. If no quota is set, create a quota for the target directory. | diff --git a/content/v1/replication/release/_index.md b/content/v1/replication/release/_index.md index dbdf633e6d..57c9559d09 100644 --- a/content/v1/replication/release/_index.md +++ b/content/v1/replication/release/_index.md @@ -19,4 +19,4 @@ There are no new features in this release. | Github ID | Description | | --------------------------------------------- | --------------------------------------------------------------------------------------- | | [523](https://github.com/dell/csm/issues/523) | **PowerScale:** Artifacts are not properly cleaned after deletion. | -| [753](https://github.com/dell/csm/issues/753) | **PowerScale:** When Persistent Volumes (PVs) are created with quota enabled on CSM versions 1.6.0 and before, an incorrect quota gets set for the target side read-only PVs/directories based on the consumed non-zero source size instead of the assigned quota of the source. This can create issues when the user performs failover and wants to write data to the failed over site. If lower quota limit was set, no new writes can be performed on the target side post failover. Refer to the KB article xxx for workaround. | +| [753](https://github.com/dell/csm/issues/753) | **PowerScale:** When Persistent Volumes (PVs) are created with quota enabled on CSM versions 1.6.0 and before, an incorrect quota gets set for the target side read-only PVs/directories based on the consumed non-zero source size instead of the assigned quota of the source. This can create issues when the user performs failover and wants to write data to the failed over site. If lower quota limit is set, no new writes can be performed on the target side post failover.
**Workaround** using the PowerScale cluster CLI or UI:
For each Persistent Volume on the source Kubernetes cluster:
1. Get the quota assigned to the directory on the source PowerScale cluster. The directory path can be obtained from the specification field of the Persistent Volume object.
2. Verify the quota of the target directory on the target PowerScale cluster. If incorrect quota is set, update the quota on the target directory with the same information as on the source. If no quota is set, create a quota for the target directory. | diff --git a/content/v2/replication/release/_index.md b/content/v2/replication/release/_index.md index 1f25c6602b..a321b2ac25 100644 --- a/content/v2/replication/release/_index.md +++ b/content/v2/replication/release/_index.md @@ -28,4 +28,4 @@ Description: > | [514](https://github.com/dell/csm/issues/514) | **PowerScale:** When creating a replicated PV in PowerScale, the replicated PV's AzServiceIP property has the target PowerScale endpoint instead of the one defined in the target Storage class. | | [515](https://github.com/dell/csm/issues/515) | **PowerScale:** If you failover with an application still running and the volume mounted on the target site, then we cannot mount the PVC due to : "mount.nfs: Stale file handle". | | [518](https://github.com/dell/csm/issues/518) | **PowerScale:** On CSM for Replication with PowerScale, after a repctl failover to a target cluster, the source directory has been removed from the PowerScale. The PersistentVolume Object is still present in Kubernetes. | -| [753](https://github.com/dell/csm/issues/753) | **PowerScale:** When Persistent Volumes (PVs) are created with quota enabled on CSM versions 1.6.0 and before, an incorrect quota gets set for the target side read-only PVs/directories based on the consumed non-zero source size instead of the assigned quota of the source. This can create issues when the user performs failover and wants to write data to the failed over site. If lower quota limit was set, no new writes can be performed on the target side post failover. Refer to the KB article xxx for workaround. | +| [753](https://github.com/dell/csm/issues/753) | **PowerScale:** When Persistent Volumes (PVs) are created with quota enabled on CSM versions 1.6.0 and before, an incorrect quota gets set for the target side read-only PVs/directories based on the consumed non-zero source size instead of the assigned quota of the source. This can create issues when the user performs failover and wants to write data to the failed over site. If lower quota limit is set, no new writes can be performed on the target side post failover.
**Workaround** using the PowerScale cluster CLI or UI:
For each Persistent Volume on the source Kubernetes cluster:
1. Get the quota assigned to the directory on the source PowerScale cluster. The directory path can be obtained from the specification field of the Persistent Volume object.
2. Verify the quota of the target directory on the target PowerScale cluster. If incorrect quota is set, update the quota on the target directory with the same information as on the source. If no quota is set, create a quota for the target directory. | diff --git a/content/v3/replication/release/_index.md b/content/v3/replication/release/_index.md index 1f25c6602b..a321b2ac25 100644 --- a/content/v3/replication/release/_index.md +++ b/content/v3/replication/release/_index.md @@ -28,4 +28,4 @@ Description: > | [514](https://github.com/dell/csm/issues/514) | **PowerScale:** When creating a replicated PV in PowerScale, the replicated PV's AzServiceIP property has the target PowerScale endpoint instead of the one defined in the target Storage class. | | [515](https://github.com/dell/csm/issues/515) | **PowerScale:** If you failover with an application still running and the volume mounted on the target site, then we cannot mount the PVC due to : "mount.nfs: Stale file handle". | | [518](https://github.com/dell/csm/issues/518) | **PowerScale:** On CSM for Replication with PowerScale, after a repctl failover to a target cluster, the source directory has been removed from the PowerScale. The PersistentVolume Object is still present in Kubernetes. | -| [753](https://github.com/dell/csm/issues/753) | **PowerScale:** When Persistent Volumes (PVs) are created with quota enabled on CSM versions 1.6.0 and before, an incorrect quota gets set for the target side read-only PVs/directories based on the consumed non-zero source size instead of the assigned quota of the source. This can create issues when the user performs failover and wants to write data to the failed over site. If lower quota limit was set, no new writes can be performed on the target side post failover. Refer to the KB article xxx for workaround. | +| [753](https://github.com/dell/csm/issues/753) | **PowerScale:** When Persistent Volumes (PVs) are created with quota enabled on CSM versions 1.6.0 and before, an incorrect quota gets set for the target side read-only PVs/directories based on the consumed non-zero source size instead of the assigned quota of the source. This can create issues when the user performs failover and wants to write data to the failed over site. If lower quota limit is set, no new writes can be performed on the target side post failover.
**Workaround** using the PowerScale cluster CLI or UI:
For each Persistent Volume on the source Kubernetes cluster:
1. Get the quota assigned to the directory on the source PowerScale cluster. The directory path can be obtained from the specification field of the Persistent Volume object.
2. Verify the quota of the target directory on the target PowerScale cluster. If incorrect quota is set, update the quota on the target directory with the same information as on the source. If no quota is set, create a quota for the target directory. |
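
For reference, a minimal CLI sketch of the workaround described above. This is an illustrative outline only, not part of the patched docs: the PV attribute key (`Path` under `spec.csi.volumeAttributes`) and the exact `isi quota` option names are assumptions and may vary by csi-powerscale driver and OneFS version; verify them in your environment before use.

```bash
# Sketch only: attribute keys and isi options below are assumptions;
# confirm them against your driver and OneFS versions.

# 1. Find the directory path that backs the PV on the source Kubernetes cluster
#    (the path attribute key is assumed to be "Path").
kubectl get pv <pv-name> -o jsonpath='{.spec.csi.volumeAttributes.Path}'

# 2. On the source PowerScale cluster, read the quota assigned to that directory.
isi quota quotas view <source-directory-path> directory

# 3. On the target PowerScale cluster, check the quota on the replicated directory.
isi quota quotas view <target-directory-path> directory

# 4. If the target quota is incorrect, update it to match the source hard threshold...
isi quota quotas modify <target-directory-path> directory --hard-threshold=<source-hard-threshold>

# ...or, if no quota exists on the target directory, create one matching the source.
isi quota quotas create <target-directory-path> directory --hard-threshold=<source-hard-threshold>
```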