From 338d2eb7bbf03316a463606296d81dc780508345 Mon Sep 17 00:00:00 2001 From: Chiman Jain Date: Tue, 27 Jun 2023 13:20:31 +0530 Subject: [PATCH 1/4] update csi ephemeral doc for powerscale --- content/docs/csidriver/features/powerscale.md | 37 ++++++--- content/v1/csidriver/features/powerscale.md | 71 +++++++++------- content/v2/csidriver/features/powerscale.md | 69 +++++++++------- content/v3/csidriver/features/powerscale.md | 81 +++++++++++-------- 4 files changed, 156 insertions(+), 102 deletions(-) diff --git a/content/docs/csidriver/features/powerscale.md b/content/docs/csidriver/features/powerscale.md index 95a8811e0d..4709285ac7 100644 --- a/content/docs/csidriver/features/powerscale.md +++ b/content/docs/csidriver/features/powerscale.md @@ -131,6 +131,7 @@ During the installation of CSI PowerScale driver version 2.0 and higher, no defa The following are the manifests for the Volume Snapshot Class: 1. VolumeSnapshotClass + ```yaml apiVersion: snapshot.storage.k8s.io/v1 @@ -242,7 +243,6 @@ spec: >The Kubernetes Volume Expansion feature can only be used to increase the size of a volume. It cannot be used to shrink a volume. - ## Volume Cloning Feature The CSI PowerScale driver version 1.3 and later supports volume cloning. This allows specifying existing PVCs in the _dataSource_ field to indicate a user would like to clone a Volume. @@ -295,16 +295,19 @@ In case of a failure, one of the standby pods becomes active and takes the posit Additionally by leveraging `pod anti-affinity`, no two-controller pods are ever scheduled on the same node. To increase or decrease the number of controller pods, edit the following value in `myvalues.yaml` file: -``` + +```yaml controllerCount: 2 ``` >**NOTE:** The default value for controllerCount is 2. It is recommended to not change this unless really required. Also, if the controller count is greater than the number of available nodes (where the pods can be scheduled), some controller pods will remain in a Pending state. 
If you are using the `dell-csi-operator`, adjust the following value in your Custom Resource manifest
-```
+
+```yaml
replicas: 2
```
+
For more details about configuring Controller HA using the Dell CSI Operator, refer to the [Dell CSI Operator documentation](../../installation/operator/#custom-resource-specification).

## Ephemeral Inline Volume
@@ -341,7 +344,12 @@ spec:

This manifest creates a pod in a given cluster and attaches a newly created ephemeral inline CSI volume to it.

+**Note**: Storage class is not supported in CSI ephemeral inline volumes and all parameters are driver specific.
+CSI ephemeral volumes allow users to provide volumeAttributes directly to the CSI driver as part of the Pod spec.
+The following `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP.
+
## Topology
+
### Topology Support

CSI PowerScale driver version 1.4.0 and later supports Topology by default which forces volumes to be placed on worker nodes that have connectivity to the backend storage. This results in nodes which have access to PowerScale Array being appropriately labeled. The driver leverages these labels to ensure that the driver components (controller, node) are spawned only on nodes wherein these labels exist.
@@ -356,7 +364,6 @@ When “enableCustomTopology” is set to “true”, the CSI driver fetches cus

**Note:** Only a single cluster can be configured as part of secret.yaml for custom topology.

-
### Topology Usage

To utilize the Topology feature, create a custom `StorageClass` with `volumeBindingMode` set to `WaitForFirstConsumer` and specify the desired topology labels within `allowedTopologies` field of this custom storage class. This ensures that the Pod schedule takes advantage of the topology and the selected node has access to provisioned volumes.
@@ -412,6 +419,7 @@ allowedTopologies:
# To mount volume with NFSv4, specify mount option vers=4. Make sure NFSv4 is enabled on the Isilon Cluster. 
mountOptions: ["", "", ..., ""] ``` + For additional information, see the [Kubernetes Topology documentation](https://kubernetes-csi.github.io/docs/topology.html). ## Support custom networks for NFS I/O traffic @@ -427,8 +435,8 @@ communication (same IP/fqdn as k8s node) by default. For a cluster with multiple network interfaces and if a user wants to segregate k8s traffic from NFS traffic; you can use the `allowedNetworks` option. `allowedNetworks` takes CIDR addresses as a parameter to match the IPs to be picked up by the driver to allow and route NFS traffic. - ## Volume Limit + The CSI Driver for Dell PowerScale allows users to specify the maximum number of PowerScale volumes that can be used in a node. The user can set the volume limit for a node by creating a node label `max-isilon-volumes-per-node` and specifying the volume limit for that node. @@ -472,6 +480,7 @@ If SmartQuota feature is enabled, user can also set other quota parameters such soft grace period using storage class yaml file or pvc yaml file. **Storage Class Example with Quota Limit Parameters:** + ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass @@ -533,6 +542,7 @@ parameters: RootClientEnabled: "false" ``` + **PVC Example with Quota Limit Parameters:** ```yaml @@ -553,7 +563,9 @@ spec: storage: 5Gi storageClassName: isilon ``` + Note + - If quota limit values are specified in both storage class yaml and PVC yaml , then values mentioned in PVC yaml will get precedence. - If few parameters are specified in storage class yaml and few in PVC yaml , then both will be combined and applied while quota creation For Example: If advisory limit = 30 is mentioned in storage class yaml and soft limit = 50 and soft grace period = 86400 are mentioned in PVC yaml . 
@@ -564,24 +576,27 @@ Note This feature is introduced in CSI Driver for PowerScale version 1.6.0 and updated in version 2.0.0 ### Helm based installation + As part of driver installation, a ConfigMap with the name `isilon-config-params` is created, which contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of CSI driver. Users can set the default log level by specifying log level to `logLevel` attribute in values.yaml during driver installation. To change the log level dynamically to a different value user can edit the same values.yaml, and run the following command -``` + +```bash cd dell-csi-helm-installer ./csi-install.sh --namespace isilon --values ./my-isilon-settings.yaml --upgrade ``` Note: here my-isilon-settings.yaml is a values.yaml file which user has used for driver installation. - ### Operator based installation + As part of driver installation, a ConfigMap with the name `isilon-config-params` is created using the manifest located in the sample file. This ConfigMap contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of the CSI driver. To set the default/initial log level user can set this field during driver installation. To update the log level dynamically user has to edit the ConfigMap `isilon-config-params` and update `CSI_LOG_LEVEL` to the desired log level. -``` + +```bash kubectl edit configmap -n isilon isilon-config-params ``` @@ -596,13 +611,14 @@ CSI Driver for Dell PowerScale is supported in the NAT environment. This feature is introduced in CSI Driver for PowerScale version 2.0.0 ### Helm based installation + The permissions for volume directory can now be configured in 3 ways: 1. Through values.yaml 2. Through secrets 3. 
Through storage class -``` +```yaml # isiVolumePathPermissions: The permissions for isi volume directory path # This value acts as a default value for isiVolumePathPermissions, if not specified for a cluster config in secret # Allowed values: valid octal mode number @@ -623,13 +639,13 @@ In the case of operator-based installation, default permission for powerscale di Other ways of configuring powerscale volume permissions remain the same as helm-based installation. - ## PV/PVC Metrics CSI Driver for Dell PowerScale 2.1.0 and above supports volume health monitoring. This allows Kubernetes to report on the condition, status and usage of the underlying volumes. For example, if a volume were to be deleted from the array, or unmounted outside of Kubernetes, Kubernetes will now report these abnormal conditions as events. ### This feature can be enabled + 1. For controller plugin, by setting attribute `controller.healthMonitor.enabled` to `true` in `values.yaml` file. Also health monitoring interval can be changed through attribute `controller.healthMonitor.interval` in `values.yaml` file. 2. For node plugin, by setting attribute `node.healthMonitor.enabled` to `true` in `values.yaml` file and by enabling the alpha feature gate `CSIVolumeHealth`. @@ -641,6 +657,7 @@ To use this feature, enable the ReadWriteOncePod feature gate for kube-apiserver `--feature-gates="...,ReadWriteOncePod=true"` ### Creating a PersistentVolumeClaim + ```yaml kind: PersistentVolumeClaim apiVersion: v1 diff --git a/content/v1/csidriver/features/powerscale.md b/content/v1/csidriver/features/powerscale.md index 085ee57ffd..573489e655 100644 --- a/content/v1/csidriver/features/powerscale.md +++ b/content/v1/csidriver/features/powerscale.md @@ -22,7 +22,7 @@ You can use existent volumes from the PowerScale array as Persistent Volumes in 1. Open your volume in One FS, and take a note of volume-id. 2. Create PersistentVolume and use this volume-id as a volumeHandle in the manifest. 
Modify other parameters according to your needs. 3. In the following example, the PowerScale cluster accessZone is assumed as 'System', storage class as 'isilon', cluster name as 'pscale-cluster' and volume's internal name as 'isilonvol'. The volume-handle should be in the format of =_=_==_=_==_=_= -4. If Quotas are enabled in the driver, it is required to add the Quota ID to the description of the NFS export in this format: +4. If Quotas are enabled in the driver, it is required to add the Quota ID to the description of the NFS export in this format: `CSI_QUOTA_ID:sC-kAAEAAAAAAAAAAAAAQEpVAAAAAAAA` 5. Quota ID can be identified by querying the PowerScale system. @@ -113,7 +113,7 @@ spec: ## Volume Snapshot Feature -The CSI PowerScale driver version 2.0 and later supports managing v1 snapshots. +The CSI PowerScale driver version 2.0 and later supports managing v1 snapshots. In order to use Volume Snapshots, ensure the following components have been deployed to your cluster: @@ -130,7 +130,8 @@ During the installation of CSI PowerScale driver version 2.0 and higher, no defa Following are the manifests for the Volume Snapshot Class: -1. VolumeSnapshotClass +1. VolumeSnapshotClass + ```yaml apiVersion: snapshot.storage.k8s.io/v1 @@ -242,7 +243,6 @@ spec: >The Kubernetes Volume Expansion feature can only be used to increase the size of a volume. It cannot be used to shrink a volume. - ## Volume Cloning Feature The CSI PowerScale driver version 1.3 and later supports volume cloning. This allows specifying existing PVCs in the _dataSource_ field to indicate a user would like to clone a Volume. @@ -295,26 +295,29 @@ In case of a failure, one of the standby pods becomes active and takes the posit Additionally by leveraging `pod anti-affinity`, no two-controller pods are ever scheduled on the same node. 
To increase or decrease the number of controller pods, edit the following value in `myvalues.yaml` file: -``` + +```yaml controllerCount: 2 ``` >**NOTE:** The default value for controllerCount is 2. It is recommended to not change this unless really required. Also, if the controller count is greater than the number of available nodes (where the pods can be scheduled), some controller pods will remain in a Pending state. If you are using the `dell-csi-operator`, adjust the following value in your Custom Resource manifest -``` + +```yaml replicas: 2 ``` + For more details about configuring Controller HA using the Dell CSI Operator, refer to the [Dell CSI Operator documentation](../../installation/operator/#custom-resource-specification). -## Ephemeral Inline Volume +## CSI Ephemeral Inline Volume The CSI PowerScale driver version 1.4.0 and later supports CSI ephemeral inline volumes. This feature serves as use cases for data volumes whose content and lifecycle are tied to a pod. For example, a driver might populate a volume with dynamically created secrets that are specific to the application running in the pod. Such volumes need to be created together with a pod and can be deleted as part of pod termination (ephemeral). They get defined as part of the pod spec (inline). - + At runtime, nested inline volumes follow the lifecycle of their associated pods where the driver handles all phases of volume operations as pods are created and destroyed. - + The following is a sample manifest for creating CSI ephemeral Inline Volume in pod manifest with CSI PowerScale driver. ```yaml @@ -341,14 +344,19 @@ spec: This manifest creates a pod in a given cluster and attaches a newly created ephemeral inline CSI volume to it. +**Note**: Storage class is not supported in CSI ephemeral inline volumes and all parameters are driver specific. +CSI ephemeral volumes allow users to provide volumeAttributes directly to the CSI driver as part of the Pod spec. 
+The following `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP.
+
## Topology
+
### Topology Support

-The CSI PowerScale driver version 1.4.0 and later supports Topology by default which forces volumes to be placed on worker nodes that have connectivity to the backend storage, as a result of which the nodes which have access to PowerScale Array are appropriately labeled. The driver leverages these labels to ensure that the driver components (controller, node) are spawned only on nodes wherein these labels exist.
+The CSI PowerScale driver version 1.4.0 and later supports Topology by default which forces volumes to be placed on worker nodes that have connectivity to the backend storage, as a result of which the nodes which have access to PowerScale Array are appropriately labeled. The driver leverages these labels to ensure that the driver components (controller, node) are spawned only on nodes wherein these labels exist.

This covers use cases where:
-
-The CSI PowerScale driver may not be installed or running on some nodes where Users have chosen to restrict the nodes on accessing the PowerScale storage array.
+
+The CSI PowerScale driver may not be installed or running on some nodes where Users have chosen to restrict the nodes on accessing the PowerScale storage array.

We support CustomTopology which enables users to apply labels for nodes - "csi-isilon.dellemc.com/XX.XX.XX.XX=csi-isilon.dellemc.com" and expect the labels to be honored by the driver.

@@ -356,15 +364,14 @@ When “enableCustomTopology” is set to “true”, the CSI driver fetches cus

**Note:** Only a single cluster can be configured as part of secret.yaml for custom topology.

-
### Topology Usage
-
-To utilize the Topology feature, create a custom `StorageClass` with `volumeBindingMode` set to `WaitForFirstConsumer` and specify the desired topology labels within `allowedTopologies` field of this custom storage class. 
This ensures that the Pod schedule takes advantage of the topology and the selected node has access to provisioned volumes. + +To utilize the Topology feature, create a custom `StorageClass` with `volumeBindingMode` set to `WaitForFirstConsumer` and specify the desired topology labels within `allowedTopologies` field of this custom storage class. This ensures that the Pod schedule takes advantage of the topology and the selected node has access to provisioned volumes. **Note:** Whenever a new storage cluster is being added in secret, even though it is dynamic, the new storage cluster IP address-related label is not added to worker nodes dynamically. The user has to spin off (bounce) driver-related pods (controller and node pods) in order to apply newly added information to be reflected in worker nodes. **Storage Class Example with Topology Support:** - + ```yaml # This is a sample manifest for utilizing the topology feature and mount options. # PVCs created using this storage class will be scheduled @@ -412,6 +419,7 @@ allowedTopologies: # To mount volume with NFSv4, specify mount option vers=4. Make sure NFSv4 is enabled on the Isilon Cluster. mountOptions: ["", "", ..., ""] ``` + For additional information, see the [Kubernetes Topology documentation](https://kubernetes-csi.github.io/docs/topology.html). ## Support custom networks for NFS I/O traffic @@ -421,14 +429,14 @@ has workloads scheduled, there is a possibility that it might lead to backward c Also, the previous workload will still be using the default network and not custom networks. For previous workloads to use custom networks, the recreation of pods is required. -When csi-powerscale driver creates an NFS export, the traffic flows through the client specified in the export. By default, the client is the network interface for Kubernetes +When csi-powerscale driver creates an NFS export, the traffic flows through the client specified in the export. 
By default, the client is the network interface for Kubernetes communication (same IP/fqdn as k8s node) by default. For a cluster with multiple network interfaces and if a user wants to segregate k8s traffic from NFS traffic; you can use the `allowedNetworks` option. `allowedNetworks` takes CIDR addresses as a parameter to match the IPs to be picked up by the driver to allow and route NFS traffic. - ## Volume Limit + The CSI Driver for Dell PowerScale allows users to specify the maximum number of PowerScale volumes that can be used in a node. The user can set the volume limit for a node by creating a node label `max-isilon-volumes-per-node` and specifying the volume limit for that node. @@ -440,7 +448,7 @@ The user can also set the volume limit for all the nodes in the cluster by speci ## Node selector in helm template -Now user can define in which worker node, the CSI node pod daemonset can run (just like any other pod in Kubernetes world.)For more information, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector +Now user can define in which worker node, the CSI node pod daemonset can run (just like any other pod in Kubernetes world.)For more information, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector Similarly, users can define the tolerations based on various conditions like memory pressure, disk pressure and network availability. Refer to https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#taints-and-tolerations for more information. @@ -460,7 +468,7 @@ Let us assume the user creates a PVC with 3 Gi of storage and 'SmartQuotas' have - The user can expand the volume from 3Gi to 6Gi. The driver allows it and sets the hard limit of PVC to 6Gi. - User retries adding 2Gi more data (which has been errored out previously). - The driver accepts the data. - + - When 'enableQuota' is set to 'false' - Driver doesn't set any hard limit against the PVC created. 
- The user adds data of 2Gi to the above said PVC, which is having the size 3Gi (by logging into POD). It works as expected. @@ -468,30 +476,32 @@ Let us assume the user creates a PVC with 3 Gi of storage and 'SmartQuotas' have - Driver allows the user to enter more data irrespective of the initial PVC size (since no quota is set against this PVC) - The user can expand the volume from an initial size of 3Gi to 4Gi or more. The driver allows it. - ## Dynamic Logging Configuration This feature is introduced in CSI Driver for PowerScale version 1.6.0 and updated in version 2.0.0 ### Helm based installation -As part of driver installation, a ConfigMap with the name `isilon-config-params` is created, which contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of CSI driver. + +As part of driver installation, a ConfigMap with the name `isilon-config-params` is created, which contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of CSI driver. Users can set the default log level by specifying log level to `logLevel` attribute in values.yaml during driver installation. To change the log level dynamically to a different value user can edit the same values.yaml, and run the following command -``` + +```bash cd dell-csi-helm-installer ./csi-install.sh --namespace isilon --values ./my-isilon-settings.yaml --upgrade ``` Note: here my-isilon-settings.yaml is a values.yaml file which user has used for driver installation. - ### Operator based installation + As part of driver installation, a ConfigMap with the name `isilon-config-params` is created using the manifest located in the sample file. This ConfigMap contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of the CSI driver. To set the default/initial log level user can set this field during driver installation. To update the log level dynamically user has to edit the ConfigMap `isilon-config-params` and update `CSI_LOG_LEVEL` to the desired log level. 
-``` + +```bash kubectl edit configmap -n isilon isilon-config-params ``` @@ -506,13 +516,14 @@ CSI Driver for Dell PowerScale is supported in the NAT environment. This feature is introduced in CSI Driver for PowerScale version 2.0.0 ### Helm based installation + The permissions for volume directory can now be configured in 3 ways: 1. Through values.yaml 2. Through secrets 3. Through storage class -``` +```yaml # isiVolumePathPermissions: The permissions for isi volume directory path # This value acts as a default value for isiVolumePathPermissions, if not specified for a cluster config in secret # Allowed values: valid octal mode number @@ -533,14 +544,14 @@ In the case of operator-based installation, default permission for powerscale di Other ways of configuring powerscale volume permissions remain the same as helm-based installation. - ## PV/PVC Metrics -CSI Driver for Dell PowerScale 2.1.0 and above supports volume health monitoring. This allows Kubernetes to report on the condition, status and usage of the underlying volumes. +CSI Driver for Dell PowerScale 2.1.0 and above supports volume health monitoring. This allows Kubernetes to report on the condition, status and usage of the underlying volumes. For example, if a volume were to be deleted from the array, or unmounted outside of Kubernetes, Kubernetes will now report these abnormal conditions as events. ### This feature can be enabled -1. For controller plugin, by setting attribute `controller.healthMonitor.enabled` to `true` in `values.yaml` file. Also health monitoring interval can be changed through attribute `controller.healthMonitor.interval` in `values.yaml` file. + +1. For controller plugin, by setting attribute `controller.healthMonitor.enabled` to `true` in `values.yaml` file. Also health monitoring interval can be changed through attribute `controller.healthMonitor.interval` in `values.yaml` file. 2. 
For node plugin, by setting attribute `node.healthMonitor.enabled` to `true` in `values.yaml` file and by enabling the alpha feature gate `CSIVolumeHealth`. ## Single Pod Access Mode for PersistentVolumes- ReadWriteOncePod (ALPHA FEATURE) @@ -551,6 +562,7 @@ To use this feature, enable the ReadWriteOncePod feature gate for kube-apiserver `--feature-gates="...,ReadWriteOncePod=true"` ### Creating a PersistentVolumeClaim + ```yaml kind: PersistentVolumeClaim apiVersion: v1 @@ -567,4 +579,3 @@ spec: When this feature is enabled, the existing `ReadWriteOnce(RWO)` access mode restricts volume access to a single node and allows multiple pods on the same node to read from and write to the same volume. To migrate existing PersistentVolumes to use `ReadWriteOncePod`, please follow the instruction from [here](https://kubernetes.io/blog/2021/09/13/read-write-once-pod-access-mode-alpha/#migrating-existing-persistentvolumes). - diff --git a/content/v2/csidriver/features/powerscale.md b/content/v2/csidriver/features/powerscale.md index 085ee57ffd..b39242ff6c 100644 --- a/content/v2/csidriver/features/powerscale.md +++ b/content/v2/csidriver/features/powerscale.md @@ -22,7 +22,7 @@ You can use existent volumes from the PowerScale array as Persistent Volumes in 1. Open your volume in One FS, and take a note of volume-id. 2. Create PersistentVolume and use this volume-id as a volumeHandle in the manifest. Modify other parameters according to your needs. 3. In the following example, the PowerScale cluster accessZone is assumed as 'System', storage class as 'isilon', cluster name as 'pscale-cluster' and volume's internal name as 'isilonvol'. The volume-handle should be in the format of =_=_==_=_==_=_= -4. If Quotas are enabled in the driver, it is required to add the Quota ID to the description of the NFS export in this format: +4. 
If Quotas are enabled in the driver, it is required to add the Quota ID to the description of the NFS export in this format: `CSI_QUOTA_ID:sC-kAAEAAAAAAAAAAAAAQEpVAAAAAAAA` 5. Quota ID can be identified by querying the PowerScale system. @@ -113,7 +113,7 @@ spec: ## Volume Snapshot Feature -The CSI PowerScale driver version 2.0 and later supports managing v1 snapshots. +The CSI PowerScale driver version 2.0 and later supports managing v1 snapshots. In order to use Volume Snapshots, ensure the following components have been deployed to your cluster: @@ -130,7 +130,8 @@ During the installation of CSI PowerScale driver version 2.0 and higher, no defa Following are the manifests for the Volume Snapshot Class: -1. VolumeSnapshotClass +1. VolumeSnapshotClass + ```yaml apiVersion: snapshot.storage.k8s.io/v1 @@ -242,7 +243,6 @@ spec: >The Kubernetes Volume Expansion feature can only be used to increase the size of a volume. It cannot be used to shrink a volume. - ## Volume Cloning Feature The CSI PowerScale driver version 1.3 and later supports volume cloning. This allows specifying existing PVCs in the _dataSource_ field to indicate a user would like to clone a Volume. @@ -295,16 +295,19 @@ In case of a failure, one of the standby pods becomes active and takes the posit Additionally by leveraging `pod anti-affinity`, no two-controller pods are ever scheduled on the same node. To increase or decrease the number of controller pods, edit the following value in `myvalues.yaml` file: -``` + +```yaml controllerCount: 2 ``` >**NOTE:** The default value for controllerCount is 2. It is recommended to not change this unless really required. Also, if the controller count is greater than the number of available nodes (where the pods can be scheduled), some controller pods will remain in a Pending state. 
If you are using the `dell-csi-operator`, adjust the following value in your Custom Resource manifest
-```
+
+```yaml
replicas: 2
```
+
For more details about configuring Controller HA using the Dell CSI Operator, refer to the [Dell CSI Operator documentation](../../installation/operator/#custom-resource-specification).

## Ephemeral Inline Volume
@@ -312,9 +315,9 @@ For more details about configuring Controller HA using the Dell CSI Operator, re

The CSI PowerScale driver version 1.4.0 and later supports CSI ephemeral inline volumes.

This feature serves as use cases for data volumes whose content and lifecycle are tied to a pod. For example, a driver might populate a volume with dynamically created secrets that are specific to the application running in the pod. Such volumes need to be created together with a pod and can be deleted as part of pod termination (ephemeral). They get defined as part of the pod spec (inline).
- 
+ 
At runtime, nested inline volumes follow the lifecycle of their associated pods where the driver handles all phases of volume operations as pods are created and destroyed.
- 
+ 
The following is a sample manifest for creating CSI ephemeral Inline Volume in pod manifest with CSI PowerScale driver.

```yaml
@@ -341,14 +344,19 @@ spec:

This manifest creates a pod in a given cluster and attaches a newly created ephemeral inline CSI volume to it.

+**Note**: Storage class is not supported in CSI ephemeral inline volumes and all parameters are driver specific.
+CSI ephemeral volumes allow users to provide volumeAttributes directly to the CSI driver as part of the Pod spec.
+The following `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP. 
+ ## Topology + ### Topology Support -The CSI PowerScale driver version 1.4.0 and later supports Topology by default which forces volumes to be placed on worker nodes that have connectivity to the backend storage, as a result of which the nodes which have access to PowerScale Array are appropriately labeled. The driver leverages these labels to ensure that the driver components (controller, node) are spawned only on nodes wherein these labels exist. +The CSI PowerScale driver version 1.4.0 and later supports Topology by default which forces volumes to be placed on worker nodes that have connectivity to the backend storage, as a result of which the nodes which have access to PowerScale Array are appropriately labeled. The driver leverages these labels to ensure that the driver components (controller, node) are spawned only on nodes wherein these labels exist. This covers use cases where: - -The CSI PowerScale driver may not be installed or running on some nodes where Users have chosen to restrict the nodes on accessing the PowerScale storage array. + +The CSI PowerScale driver may not be installed or running on some nodes where Users have chosen to restrict the nodes on accessing the PowerScale storage array. We support CustomTopology which enables users to apply labels for nodes - "csi-isilon.dellemc.com/XX.XX.XX.XX=csi-isilon.dellemc.com" and expect the labels to be honored by the driver. @@ -356,15 +364,14 @@ When “enableCustomTopology” is set to “true”, the CSI driver fetches cus **Note:** Only a single cluster can be configured as part of secret.yaml for custom topology. - ### Topology Usage - -To utilize the Topology feature, create a custom `StorageClass` with `volumeBindingMode` set to `WaitForFirstConsumer` and specify the desired topology labels within `allowedTopologies` field of this custom storage class. This ensures that the Pod schedule takes advantage of the topology and the selected node has access to provisioned volumes. 
+ +To utilize the Topology feature, create a custom `StorageClass` with `volumeBindingMode` set to `WaitForFirstConsumer` and specify the desired topology labels within `allowedTopologies` field of this custom storage class. This ensures that the Pod schedule takes advantage of the topology and the selected node has access to provisioned volumes. **Note:** Whenever a new storage cluster is being added in secret, even though it is dynamic, the new storage cluster IP address-related label is not added to worker nodes dynamically. The user has to spin off (bounce) driver-related pods (controller and node pods) in order to apply newly added information to be reflected in worker nodes. **Storage Class Example with Topology Support:** - + ```yaml # This is a sample manifest for utilizing the topology feature and mount options. # PVCs created using this storage class will be scheduled @@ -412,6 +419,7 @@ allowedTopologies: # To mount volume with NFSv4, specify mount option vers=4. Make sure NFSv4 is enabled on the Isilon Cluster. mountOptions: ["", "", ..., ""] ``` + For additional information, see the [Kubernetes Topology documentation](https://kubernetes-csi.github.io/docs/topology.html). ## Support custom networks for NFS I/O traffic @@ -421,14 +429,14 @@ has workloads scheduled, there is a possibility that it might lead to backward c Also, the previous workload will still be using the default network and not custom networks. For previous workloads to use custom networks, the recreation of pods is required. -When csi-powerscale driver creates an NFS export, the traffic flows through the client specified in the export. By default, the client is the network interface for Kubernetes +When csi-powerscale driver creates an NFS export, the traffic flows through the client specified in the export. By default, the client is the network interface for Kubernetes communication (same IP/fqdn as k8s node) by default. 
For a cluster with multiple network interfaces and if a user wants to segregate k8s traffic from NFS traffic; you can use the `allowedNetworks` option. `allowedNetworks` takes CIDR addresses as a parameter to match the IPs to be picked up by the driver to allow and route NFS traffic. - ## Volume Limit + The CSI Driver for Dell PowerScale allows users to specify the maximum number of PowerScale volumes that can be used in a node. The user can set the volume limit for a node by creating a node label `max-isilon-volumes-per-node` and specifying the volume limit for that node. @@ -440,7 +448,7 @@ The user can also set the volume limit for all the nodes in the cluster by speci ## Node selector in helm template -Now user can define in which worker node, the CSI node pod daemonset can run (just like any other pod in Kubernetes world.)For more information, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector +Now user can define in which worker node, the CSI node pod daemonset can run (just like any other pod in Kubernetes world.)For more information, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector Similarly, users can define the tolerations based on various conditions like memory pressure, disk pressure and network availability. Refer to https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#taints-and-tolerations for more information. @@ -460,7 +468,7 @@ Let us assume the user creates a PVC with 3 Gi of storage and 'SmartQuotas' have - The user can expand the volume from 3Gi to 6Gi. The driver allows it and sets the hard limit of PVC to 6Gi. - User retries adding 2Gi more data (which has been errored out previously). - The driver accepts the data. - + - When 'enableQuota' is set to 'false' - Driver doesn't set any hard limit against the PVC created. - The user adds data of 2Gi to the above said PVC, which is having the size 3Gi (by logging into POD). 
It works as expected. @@ -468,30 +476,32 @@ Let us assume the user creates a PVC with 3 Gi of storage and 'SmartQuotas' have - Driver allows the user to enter more data irrespective of the initial PVC size (since no quota is set against this PVC) - The user can expand the volume from an initial size of 3Gi to 4Gi or more. The driver allows it. - ## Dynamic Logging Configuration This feature is introduced in CSI Driver for PowerScale version 1.6.0 and updated in version 2.0.0 ### Helm based installation -As part of driver installation, a ConfigMap with the name `isilon-config-params` is created, which contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of CSI driver. + +As part of driver installation, a ConfigMap with the name `isilon-config-params` is created, which contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of CSI driver. Users can set the default log level by specifying log level to `logLevel` attribute in values.yaml during driver installation. To change the log level dynamically to a different value user can edit the same values.yaml, and run the following command -``` + +```bash cd dell-csi-helm-installer ./csi-install.sh --namespace isilon --values ./my-isilon-settings.yaml --upgrade ``` Note: here my-isilon-settings.yaml is a values.yaml file which user has used for driver installation. - ### Operator based installation + As part of driver installation, a ConfigMap with the name `isilon-config-params` is created using the manifest located in the sample file. This ConfigMap contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of the CSI driver. To set the default/initial log level user can set this field during driver installation. To update the log level dynamically user has to edit the ConfigMap `isilon-config-params` and update `CSI_LOG_LEVEL` to the desired log level. 
-``` + +```bash kubectl edit configmap -n isilon isilon-config-params ``` @@ -506,13 +516,14 @@ CSI Driver for Dell PowerScale is supported in the NAT environment. This feature is introduced in CSI Driver for PowerScale version 2.0.0 ### Helm based installation + The permissions for volume directory can now be configured in 3 ways: 1. Through values.yaml 2. Through secrets 3. Through storage class -``` +```yaml # isiVolumePathPermissions: The permissions for isi volume directory path # This value acts as a default value for isiVolumePathPermissions, if not specified for a cluster config in secret # Allowed values: valid octal mode number @@ -533,14 +544,14 @@ In the case of operator-based installation, default permission for powerscale di Other ways of configuring powerscale volume permissions remain the same as helm-based installation. - ## PV/PVC Metrics -CSI Driver for Dell PowerScale 2.1.0 and above supports volume health monitoring. This allows Kubernetes to report on the condition, status and usage of the underlying volumes. +CSI Driver for Dell PowerScale 2.1.0 and above supports volume health monitoring. This allows Kubernetes to report on the condition, status and usage of the underlying volumes. For example, if a volume were to be deleted from the array, or unmounted outside of Kubernetes, Kubernetes will now report these abnormal conditions as events. ### This feature can be enabled -1. For controller plugin, by setting attribute `controller.healthMonitor.enabled` to `true` in `values.yaml` file. Also health monitoring interval can be changed through attribute `controller.healthMonitor.interval` in `values.yaml` file. + +1. For controller plugin, by setting attribute `controller.healthMonitor.enabled` to `true` in `values.yaml` file. Also health monitoring interval can be changed through attribute `controller.healthMonitor.interval` in `values.yaml` file. 2. 
For node plugin, by setting attribute `node.healthMonitor.enabled` to `true` in `values.yaml` file and by enabling the alpha feature gate `CSIVolumeHealth`. ## Single Pod Access Mode for PersistentVolumes- ReadWriteOncePod (ALPHA FEATURE) @@ -551,6 +562,7 @@ To use this feature, enable the ReadWriteOncePod feature gate for kube-apiserver `--feature-gates="...,ReadWriteOncePod=true"` ### Creating a PersistentVolumeClaim + ```yaml kind: PersistentVolumeClaim apiVersion: v1 @@ -567,4 +579,3 @@ spec: When this feature is enabled, the existing `ReadWriteOnce(RWO)` access mode restricts volume access to a single node and allows multiple pods on the same node to read from and write to the same volume. To migrate existing PersistentVolumes to use `ReadWriteOncePod`, please follow the instruction from [here](https://kubernetes.io/blog/2021/09/13/read-write-once-pod-access-mode-alpha/#migrating-existing-persistentvolumes). - diff --git a/content/v3/csidriver/features/powerscale.md b/content/v3/csidriver/features/powerscale.md index 81a05cec8c..003f2af7e1 100644 --- a/content/v3/csidriver/features/powerscale.md +++ b/content/v3/csidriver/features/powerscale.md @@ -22,7 +22,7 @@ You can use existng volumes from the PowerScale array as Persistent Volumes in y 1. Open your volume in One FS, and take a note of volume-id. 2. Create PersistentVolume and use this volume-id as a volumeHandle in the manifest. Modify other parameters according to your needs. 3. In the following example, the PowerScale cluster accessZone is assumed as 'System', storage class as 'isilon', cluster name as 'pscale-cluster' and volume's internal name as 'isilonvol'. The volume-handle should be in the format of =_=_==_=_==_=_= -4. If Quotas are enabled in the driver, it is required to add the Quota ID to the description of the NFS export in this format: +4. 
If Quotas are enabled in the driver, it is required to add the Quota ID to the description of the NFS export in this format: `CSI_QUOTA_ID:sC-kAAEAAAAAAAAAAAAAQEpVAAAAAAAA` 5. Quota ID can be identified by querying the PowerScale system. @@ -113,7 +113,7 @@ spec: ## Volume Snapshot Feature -The CSI PowerScale driver version 2.0 and later supports managing v1 snapshots. +The CSI PowerScale driver version 2.0 and later supports managing v1 snapshots. In order to use Volume Snapshots, ensure the following components have been deployed to your cluster: @@ -130,9 +130,9 @@ During the installation of CSI PowerScale driver version 2.0 and higher, no defa The following are the manifests for the Volume Snapshot Class: -1. VolumeSnapshotClass -```yaml +1. VolumeSnapshotClass +```yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: @@ -195,7 +195,7 @@ spec: storage: 5Gi ``` -> Starting from CSI PowerScale driver version 2.2,different isi paths can be used to create PersistentVolumeClaim from VolumeSnapshot. This means the isi paths of the new volume and the VolumeSnapshot can be different. +> Starting from CSI PowerScale driver version 2.2,different isi paths can be used to create PersistentVolumeClaim from VolumeSnapshot. This means the isi paths of the new volume and the VolumeSnapshot can be different. ## Volume Expansion @@ -242,7 +242,6 @@ spec: >The Kubernetes Volume Expansion feature can only be used to increase the size of a volume. It cannot be used to shrink a volume. - ## Volume Cloning Feature CSI PowerScale driver version 1.3 and later supports volume cloning. This allows specifying existing PVCs in the _dataSource_ field to indicate a user would like to clone a Volume. @@ -295,16 +294,19 @@ In case of a failure, one of the standby pods becomes active and takes the posit Additionally by leveraging `pod anti-affinity`, no two-controller pods are ever scheduled on the same node. 
To increase or decrease the number of controller pods, edit the following value in `myvalues.yaml` file: -``` + +```yaml controllerCount: 2 ``` >**NOTE:** The default value for controllerCount is 2. It is recommended to not change this unless really required. Also, if the controller count is greater than the number of available nodes (where the pods can be scheduled), some controller pods will remain in a Pending state. If you are using the `dell-csi-operator`, adjust the following value in your Custom Resource manifest -``` + +```yaml replicas: 2 ``` + For more details about configuring Controller HA using the Dell CSI Operator, refer to the [Dell CSI Operator documentation](../../installation/operator/#custom-resource-specification). ## Ephemeral Inline Volume @@ -312,9 +314,9 @@ For more details about configuring Controller HA using the Dell CSI Operator, re The CSI PowerScale driver version 1.4.0 and later supports CSI ephemeral inline volumes. This feature serves as use cases for data volumes whose content and lifecycle are tied to a pod. For example, a driver might populate a volume with dynamically created secrets that are specific to the application running in the pod. Such volumes need to be created together with a pod and can be deleted as part of pod termination (ephemeral). They get defined as part of the pod spec (inline). - + At runtime, nested inline volumes follow the lifecycle of their associated pods where the driver handles all phases of volume operations as pods are created and destroyed. - + The following is a sample manifest for creating CSI ephemeral Inline Volume in pod manifest with CSI PowerScale driver. ```yaml @@ -341,14 +343,19 @@ spec: This manifest creates a pod in a given cluster and attaches a newly created ephemeral inline CSI volume to it. +**Note**: Storage class is not supported in CSI ephemeral inline volumes and all parameters are driver specific. 
+CSI ephemeral volumes allow users to provide volumeAttributes directly to the CSI driver as part of the Pod spec. +Following `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP. + ## Topology + ### Topology Support -CSI PowerScale driver version 1.4.0 and later supports Topology by default which forces volumes to be placed on worker nodes that have connectivity to the backend storage. This results in nodes which have access to PowerScale Array being appropriately labeled. The driver leverages these labels to ensure that the driver components (controller, node) are spawned only on nodes wherein these labels exist. +CSI PowerScale driver version 1.4.0 and later supports Topology by default which forces volumes to be placed on worker nodes that have connectivity to the backend storage. This results in nodes which have access to PowerScale Array being appropriately labeled. The driver leverages these labels to ensure that the driver components (controller, node) are spawned only on nodes wherein these labels exist. This covers use cases where: - -The CSI PowerScale driver may not be installed or running on some nodes where Users have chosen to restrict the nodes on accessing the PowerScale storage array. + +The CSI PowerScale driver may not be installed or running on some nodes where Users have chosen to restrict the nodes on accessing the PowerScale storage array. We support CustomTopology which enables users to apply labels for nodes - "csi-isilon.dellemc.com/XX.XX.XX.XX=csi-isilon.dellemc.com" and expect the labels to be honored by the driver. @@ -356,15 +363,14 @@ When “enableCustomTopology” is set to “true”, the CSI driver fetches cus **Note:** Only a single cluster can be configured as part of secret.yaml for custom topology. 
- ### Topology Usage - -To utilize the Topology feature, create a custom `StorageClass` with `volumeBindingMode` set to `WaitForFirstConsumer` and specify the desired topology labels within `allowedTopologies` field of this custom storage class. This ensures that the Pod schedule takes advantage of the topology and the selected node has access to provisioned volumes. + +To utilize the Topology feature, create a custom `StorageClass` with `volumeBindingMode` set to `WaitForFirstConsumer` and specify the desired topology labels within `allowedTopologies` field of this custom storage class. This ensures that the Pod schedule takes advantage of the topology and the selected node has access to provisioned volumes. **Note:** Whenever a new storage cluster is being added in secret, even though it is dynamic, the new storage cluster IP address-related label is not added to worker nodes dynamically. The user has to spin off (bounce) driver-related pods (controller and node pods) in order to apply newly added information to be reflected in worker nodes. **Storage Class Example with Topology Support:** - + ```yaml # This is a sample manifest for utilizing the topology feature and mount options. # PVCs created using this storage class will be scheduled @@ -412,6 +418,7 @@ allowedTopologies: # To mount volume with NFSv4, specify mount option vers=4. Make sure NFSv4 is enabled on the Isilon Cluster. mountOptions: ["", "", ..., ""] ``` + For additional information, see the [Kubernetes Topology documentation](https://kubernetes-csi.github.io/docs/topology.html). ## Support custom networks for NFS I/O traffic @@ -421,14 +428,14 @@ has workloads scheduled, there is a possibility that it might lead to backward c Also, the previous workload will still be using the default network and not custom networks. For previous workloads to use custom networks, the recreation of pods is required. 
-When csi-powerscale driver creates an NFS export, the traffic flows through the client specified in the export. By default, the client is the network interface for Kubernetes +When csi-powerscale driver creates an NFS export, the traffic flows through the client specified in the export. By default, the client is the network interface for Kubernetes communication (same IP/fqdn as k8s node) by default. For a cluster with multiple network interfaces and if a user wants to segregate k8s traffic from NFS traffic; you can use the `allowedNetworks` option. `allowedNetworks` takes CIDR addresses as a parameter to match the IPs to be picked up by the driver to allow and route NFS traffic. - ## Volume Limit + The CSI Driver for Dell PowerScale allows users to specify the maximum number of PowerScale volumes that can be used in a node. The user can set the volume limit for a node by creating a node label `max-isilon-volumes-per-node` and specifying the volume limit for that node. @@ -440,7 +447,7 @@ The user can also set the volume limit for all the nodes in the cluster by speci ## Node selector in helm template -Now user can define in which worker node, the CSI node pod daemonset can run (just like any other pod in Kubernetes world.)For more information, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector +Now user can define in which worker node, the CSI node pod daemonset can run (just like any other pod in Kubernetes world.)For more information, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector Similarly, users can define the tolerations based on various conditions like memory pressure, disk pressure and network availability. Refer to https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#taints-and-tolerations for more information. 
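The node selector and tolerations described above can be sketched as a values.yaml fragment. This is illustrative only: the key placement mirrors the commented examples commonly shipped in the chart's own values.yaml and should be verified there, and the worker-node label and taint keys below are example placeholders, not values mandated by the driver.

```yaml
# Illustrative values.yaml fragment (assumed key layout; check the chart's
# shipped values.yaml). Restricts CSI node daemonset pods to labeled workers
# and tolerates example node-condition taints.
nodeSelector:
  node-role.kubernetes.io/worker: ""

tolerations:
  - key: "node.kubernetes.io/memory-pressure"
    operator: "Exists"
    effect: "NoExecute"
  - key: "node.kubernetes.io/disk-pressure"
    operator: "Exists"
    effect: "NoExecute"
  - key: "node.kubernetes.io/network-unavailable"
    operator: "Exists"
    effect: "NoSchedule"
```

With a fragment like this, kubelet schedules the node pods only onto nodes carrying the selector label, which matches the restricted-node use case described in the Topology section.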
@@ -460,7 +467,7 @@ Let us assume the user creates a PVC with 3 Gi of storage and 'SmartQuotas' have - The user can expand the volume from 3Gi to 6Gi. The driver allows it and sets the hard limit of PVC to 6Gi. - User retries adding 2Gi more data (which has been errored out previously). - The driver accepts the data. - + - When 'enableQuota' is set to 'false' - Driver doesn't set any hard limit against the PVC created. - The user adds data of 2Gi to the above said PVC, which is having the size 3Gi (by logging into POD). It works as expected. @@ -468,10 +475,11 @@ Let us assume the user creates a PVC with 3 Gi of storage and 'SmartQuotas' have - Driver allows the user to enter more data irrespective of the initial PVC size (since no quota is set against this PVC) - The user can expand the volume from an initial size of 3Gi to 4Gi or more. The driver allows it. -If SmartQuota feature is enabled, user can also set other quota parameters such as Soft Limit , Advisory Limit and +If SmartQuota feature is enabled, user can also set other quota parameters such as Soft Limit , Advisory Limit and soft grace period using storage class yaml file or pvc yaml file. **Storage Class Example with Quota Limit Parameters:** + ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass @@ -533,6 +541,7 @@ parameters: RootClientEnabled: "false" ``` + **PVC Example with Quota Limit Parameters:** ```yaml @@ -554,35 +563,40 @@ spec: storageClassName: isilon ``` -Note + +Note: + - If quota limits values are specified in both storage class yaml and PVC yaml , then values mentioned in PVC yaml will get precedence. - If few parameters are specified in storage class yaml and few in PVC yaml , then both will be combined and applied while quota creation For Example: If advisory limit = 30 is mentioned in storage class yaml and soft limit = 50 and soft grace period = 86400 are mentioned in PVC yaml . 
- Then values set in quota will be advisory limit = 30, soft limit = 50 and soft grace period =86400. + Then values set in quota will be advisory limit = 30, soft limit = 50 and soft grace period =86400. ## Dynamic Logging Configuration This feature is introduced in CSI Driver for PowerScale version 1.6.0 and updated in version 2.0.0 ### Helm based installation -As part of driver installation, a ConfigMap with the name `isilon-config-params` is created, which contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of CSI driver. + +As part of driver installation, a ConfigMap with the name `isilon-config-params` is created, which contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of CSI driver. Users can set the default log level by specifying log level to `logLevel` attribute in values.yaml during driver installation. To change the log level dynamically to a different value user can edit the same values.yaml, and run the following command -``` + +```bash cd dell-csi-helm-installer ./csi-install.sh --namespace isilon --values ./my-isilon-settings.yaml --upgrade ``` Note: here my-isilon-settings.yaml is a values.yaml file which user has used for driver installation. - ### Operator based installation + As part of driver installation, a ConfigMap with the name `isilon-config-params` is created using the manifest located in the sample file. This ConfigMap contains an attribute `CSI_LOG_LEVEL` which specifies the current log level of the CSI driver. To set the default/initial log level user can set this field during driver installation. To update the log level dynamically user has to edit the ConfigMap `isilon-config-params` and update `CSI_LOG_LEVEL` to the desired log level. -``` + +```bash kubectl edit configmap -n isilon isilon-config-params ``` @@ -597,13 +611,14 @@ CSI Driver for Dell PowerScale is supported in the NAT environment. 
This feature is introduced in CSI Driver for PowerScale version 2.0.0 ### Helm based installation + The permissions for volume directory can now be configured in 3 ways: 1. Through values.yaml 2. Through secrets 3. Through storage class -``` +```yaml # isiVolumePathPermissions: The permissions for isi volume directory path # This value acts as a default value for isiVolumePathPermissions, if not specified for a cluster config in secret # Allowed values: valid octal mode number @@ -624,14 +639,14 @@ In the case of operator-based installation, default permission for powerscale di Other ways of configuring powerscale volume permissions remain the same as helm-based installation. - ## PV/PVC Metrics -CSI Driver for Dell PowerScale 2.1.0 and above supports volume health monitoring. This allows Kubernetes to report on the condition, status and usage of the underlying volumes. +CSI Driver for Dell PowerScale 2.1.0 and above supports volume health monitoring. This allows Kubernetes to report on the condition, status and usage of the underlying volumes. For example, if a volume were to be deleted from the array, or unmounted outside of Kubernetes, Kubernetes will now report these abnormal conditions as events. ### This feature can be enabled -1. For controller plugin, by setting attribute `controller.healthMonitor.enabled` to `true` in `values.yaml` file. Also health monitoring interval can be changed through attribute `controller.healthMonitor.interval` in `values.yaml` file. + +1. For controller plugin, by setting attribute `controller.healthMonitor.enabled` to `true` in `values.yaml` file. Also health monitoring interval can be changed through attribute `controller.healthMonitor.interval` in `values.yaml` file. 2. For node plugin, by setting attribute `node.healthMonitor.enabled` to `true` in `values.yaml` file and by enabling the alpha feature gate `CSIVolumeHealth`. 
## Single Pod Access Mode for PersistentVolumes- ReadWriteOncePod (ALPHA FEATURE) @@ -642,6 +657,7 @@ To use this feature, enable the ReadWriteOncePod feature gate for kube-apiserver `--feature-gates="...,ReadWriteOncePod=true"` ### Creating a PersistentVolumeClaim + ```yaml kind: PersistentVolumeClaim apiVersion: v1 @@ -658,4 +674,3 @@ spec: When this feature is enabled, the existing `ReadWriteOnce(RWO)` access mode restricts volume access to a single node and allows multiple pods on the same node to read from and write to the same volume. To migrate existing PersistentVolumes to use `ReadWriteOncePod`, please follow the instruction from [here](https://kubernetes.io/blog/2021/09/13/read-write-once-pod-access-mode-alpha/#migrating-existing-persistentvolumes). - From 8088245c1e3fba725e41c7deaeb39fcfba6246b8 Mon Sep 17 00:00:00 2001 From: Chiman Jain Date: Tue, 27 Jun 2023 13:31:04 +0530 Subject: [PATCH 2/4] Added reference to example file --- content/docs/csidriver/features/powerscale.md | 3 ++- content/v1/csidriver/features/powerscale.md | 1 + content/v2/csidriver/features/powerscale.md | 3 ++- content/v3/csidriver/features/powerscale.md | 3 ++- 4 files changed, 7 insertions(+), 3 deletions(-) diff --git a/content/docs/csidriver/features/powerscale.md b/content/docs/csidriver/features/powerscale.md index 4709285ac7..8d7c89da6f 100644 --- a/content/docs/csidriver/features/powerscale.md +++ b/content/docs/csidriver/features/powerscale.md @@ -310,7 +310,7 @@ replicas: 2 For more details about configuring Controller HA using the Dell CSI Operator, refer to the [Dell CSI Operator documentation](../../installation/operator/#custom-resource-specification). -## Ephemeral Inline Volume +## CSI Ephemeral Inline Volume The CSI PowerScale driver version 1.4.0 and later supports CSI ephemeral inline volumes. 
@@ -347,6 +347,7 @@ This manifest creates a pod in a given cluster and attaches a newly created ephe **Note**: Storage class is not supported in CSI ephemeral inline volumes and all parameters are driver specific. CSI ephemeral volumes allow users to provide volumeAttributes directly to the CSI driver as part of the Pod spec. Following `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP. +For reference, check the description of parameters in the following example: ## Topology diff --git a/content/v1/csidriver/features/powerscale.md b/content/v1/csidriver/features/powerscale.md index 573489e655..61c9e41527 100644 --- a/content/v1/csidriver/features/powerscale.md +++ b/content/v1/csidriver/features/powerscale.md @@ -347,6 +347,7 @@ This manifest creates a pod in a given cluster and attaches a newly created ephe **Note**: Storage class is not supported in CSI ephemeral inline volumes and all parameters are driver specific. CSI ephemeral volumes allow users to provide volumeAttributes directly to the CSI driver as part of the Pod spec. Following `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP. +For reference, check the description of parameters in the following example: ## Topology diff --git a/content/v2/csidriver/features/powerscale.md b/content/v2/csidriver/features/powerscale.md index b39242ff6c..9785a53eae 100644 --- a/content/v2/csidriver/features/powerscale.md +++ b/content/v2/csidriver/features/powerscale.md @@ -310,7 +310,7 @@ replicas: 2 For more details about configuring Controller HA using the Dell CSI Operator, refer to the [Dell CSI Operator documentation](../../installation/operator/#custom-resource-specification). -## Ephemeral Inline Volume +## CSI Ephemeral Inline Volume The CSI PowerScale driver version 1.4.0 and later supports CSI ephemeral inline volumes. 
@@ -347,6 +347,7 @@ This manifest creates a pod in a given cluster and attaches a newly created ephe **Note**: Storage class is not supported in CSI ephemeral inline volumes and all parameters are driver specific. CSI ephemeral volumes allow users to provide volumeAttributes directly to the CSI driver as part of the Pod spec. Following `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP. +For reference, check the description of parameters in the following example: ## Topology diff --git a/content/v3/csidriver/features/powerscale.md b/content/v3/csidriver/features/powerscale.md index 003f2af7e1..9925933640 100644 --- a/content/v3/csidriver/features/powerscale.md +++ b/content/v3/csidriver/features/powerscale.md @@ -309,7 +309,7 @@ replicas: 2 For more details about configuring Controller HA using the Dell CSI Operator, refer to the [Dell CSI Operator documentation](../../installation/operator/#custom-resource-specification). -## Ephemeral Inline Volume +## CSI Ephemeral Inline Volume The CSI PowerScale driver version 1.4.0 and later supports CSI ephemeral inline volumes. @@ -346,6 +346,7 @@ This manifest creates a pod in a given cluster and attaches a newly created ephe **Note**: Storage class is not supported in CSI ephemeral inline volumes and all parameters are driver specific. CSI ephemeral volumes allow users to provide volumeAttributes directly to the CSI driver as part of the Pod spec. Following `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP. 
+For reference, check the description of parameters in the following example: ## Topology From c822bd48cd24f6e21a09f39616edad0116e53bc9 Mon Sep 17 00:00:00 2001 From: Chiman Jain Date: Tue, 27 Jun 2023 15:02:49 +0530 Subject: [PATCH 3/4] update reference link --- content/docs/csidriver/features/powerscale.md | 2 +- content/v1/csidriver/features/powerscale.md | 2 +- content/v2/csidriver/features/powerscale.md | 2 +- content/v3/csidriver/features/powerscale.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/content/docs/csidriver/features/powerscale.md b/content/docs/csidriver/features/powerscale.md index 8d7c89da6f..de6ea032b3 100644 --- a/content/docs/csidriver/features/powerscale.md +++ b/content/docs/csidriver/features/powerscale.md @@ -347,7 +347,7 @@ This manifest creates a pod in a given cluster and attaches a newly created ephe **Note**: Storage class is not supported in CSI ephemeral inline volumes and all parameters are driver specific. CSI ephemeral volumes allow users to provide volumeAttributes directly to the CSI driver as part of the Pod spec. Following `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP. -For reference, check the description of parameters in the following example: +For reference, check the description of parameters in the following example: [isilon.yaml](https://github.com/dell/csi-powerscale/blob/main/samples/storageclass/isilon.yaml) ## Topology diff --git a/content/v1/csidriver/features/powerscale.md b/content/v1/csidriver/features/powerscale.md index 61c9e41527..b031df9493 100644 --- a/content/v1/csidriver/features/powerscale.md +++ b/content/v1/csidriver/features/powerscale.md @@ -347,7 +347,7 @@ This manifest creates a pod in a given cluster and attaches a newly created ephe **Note**: Storage class is not supported in CSI ephemeral inline volumes and all parameters are driver specific. 
CSI ephemeral volumes allow users to provide volumeAttributes directly to the CSI driver as part of the Pod spec. Following `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP. -For reference, check the description of parameters in the following example: +For reference, check the description of parameters in the following example: [isilon.yaml](https://github.com/dell/csi-powerscale/blob/main/samples/storageclass/isilon.yaml) ## Topology diff --git a/content/v2/csidriver/features/powerscale.md b/content/v2/csidriver/features/powerscale.md index 9785a53eae..6b96d91664 100644 --- a/content/v2/csidriver/features/powerscale.md +++ b/content/v2/csidriver/features/powerscale.md @@ -347,7 +347,7 @@ This manifest creates a pod in a given cluster and attaches a newly created ephe **Note**: Storage class is not supported in CSI ephemeral inline volumes and all parameters are driver specific. CSI ephemeral volumes allow users to provide volumeAttributes directly to the CSI driver as part of the Pod spec. Following `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP. -For reference, check the description of parameters in the following example: +For reference, check the description of parameters in the following example: [isilon.yaml](https://github.com/dell/csi-powerscale/blob/main/samples/storageclass/isilon.yaml) ## Topology diff --git a/content/v3/csidriver/features/powerscale.md b/content/v3/csidriver/features/powerscale.md index 9925933640..556539c080 100644 --- a/content/v3/csidriver/features/powerscale.md +++ b/content/v3/csidriver/features/powerscale.md @@ -346,7 +346,7 @@ This manifest creates a pod in a given cluster and attaches a newly created ephe **Note**: Storage class is not supported in CSI ephemeral inline volumes and all parameters are driver specific. 
CSI ephemeral volumes allow users to provide volumeAttributes directly to the CSI driver as part of the Pod spec. Following `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP. -For reference, check the description of parameters in the following example: +For reference, check the description of parameters in the following example: [isilon.yaml](https://github.com/dell/csi-powerscale/blob/main/samples/storageclass/isilon.yaml) ## Topology From 726706e7ed2ac6a3411bf51a92ff67804d69a1dd Mon Sep 17 00:00:00 2001 From: Chiman Jain Date: Wed, 28 Jun 2023 10:50:34 +0530 Subject: [PATCH 4/4] Fix suggested issues --- content/docs/csidriver/features/powerscale.md | 6 +++--- content/v1/csidriver/features/powerscale.md | 8 ++++---- content/v2/csidriver/features/powerscale.md | 8 ++++---- content/v3/csidriver/features/powerscale.md | 8 ++++---- 4 files changed, 15 insertions(+), 15 deletions(-) diff --git a/content/docs/csidriver/features/powerscale.md b/content/docs/csidriver/features/powerscale.md index de6ea032b3..7b5f7bea5e 100644 --- a/content/docs/csidriver/features/powerscale.md +++ b/content/docs/csidriver/features/powerscale.md @@ -346,7 +346,7 @@ This manifest creates a pod in a given cluster and attaches a newly created ephe **Note**: Storage class is not supported in CSI ephemeral inline volumes and all parameters are driver specific. CSI ephemeral volumes allow users to provide volumeAttributes directly to the CSI driver as part of the Pod spec. -Following `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP. +These `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP. 
 For reference, check the description of parameters in the following example: [isilon.yaml](https://github.com/dell/csi-powerscale/blob/main/samples/storageclass/isilon.yaml)
 
 ## Topology
@@ -449,9 +449,9 @@ The user can also set the volume limit for all the nodes in the cluster by speci
 
 ## Node selector in helm template
 
-Now user can define in which worker node, the CSI node pod daemonset can run (just like any other pod in Kubernetes world.)For more information, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
+Now users can define on which worker nodes the CSI node pod daemonset can run (just like any other pod in Kubernetes). For more information, refer to <https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector>
 
-Similarly, users can define the tolerations based on various conditions like memory pressure, disk pressure and network availability. Refer to https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#taints-and-tolerations for more information.
+Similarly, users can define tolerations based on various conditions like memory pressure, disk pressure, and network availability. Refer to <https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#taints-and-tolerations> for more information.
 
 ## Usage of SmartQuotas to Limit Storage Consumption
diff --git a/content/v1/csidriver/features/powerscale.md b/content/v1/csidriver/features/powerscale.md
index b031df9493..716449e3c8 100644
--- a/content/v1/csidriver/features/powerscale.md
+++ b/content/v1/csidriver/features/powerscale.md
@@ -346,7 +346,7 @@ This manifest creates a pod in a given cluster and attaches a newly created ephe
 **Note**: Storage class is not supported in CSI ephemeral inline volumes and all parameters are driver specific.
 CSI ephemeral volumes allow users to provide volumeAttributes directly to the CSI driver as part of the Pod spec.
-Following `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP.
+These `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP.
 For reference, check the description of parameters in the following example: [isilon.yaml](https://github.com/dell/csi-powerscale/blob/main/samples/storageclass/isilon.yaml)
 
 ## Topology
@@ -354,7 +354,7 @@ For reference, check the description of parameters in the following example: [is
 ### Topology Support
 The CSI PowerScale driver version 1.4.0 and later supports Topology by default which forces volumes to be placed on worker nodes that have connectivity to the backend storage, as a result of which the nodes which have access to PowerScale Array are appropriately labeled. The driver leverages these labels to ensure that the driver components (controller, node) are spawned only on nodes wherein these labels exist.
- 
+
 This covers use cases where:
 The CSI PowerScale driver may not be installed or running on some nodes where Users have chosen to restrict the nodes on accessing the PowerScale storage array.
@@ -449,9 +449,9 @@ The user can also set the volume limit for all the nodes in the cluster by speci
 
 ## Node selector in helm template
 
-Now user can define in which worker node, the CSI node pod daemonset can run (just like any other pod in Kubernetes world.)For more information, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
+Now users can define on which worker nodes the CSI node pod daemonset can run (just like any other pod in Kubernetes). For more information, refer to <https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector>
 
-Similarly, users can define the tolerations based on various conditions like memory pressure, disk pressure and network availability. Refer to https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#taints-and-tolerations for more information.
+Similarly, users can define tolerations based on various conditions like memory pressure, disk pressure, and network availability. Refer to <https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#taints-and-tolerations> for more information.
 
 ## Usage of SmartQuotas to Limit Storage Consumption
diff --git a/content/v2/csidriver/features/powerscale.md b/content/v2/csidriver/features/powerscale.md
index 6b96d91664..29b1522176 100644
--- a/content/v2/csidriver/features/powerscale.md
+++ b/content/v2/csidriver/features/powerscale.md
@@ -346,7 +346,7 @@ This manifest creates a pod in a given cluster and attaches a newly created ephe
 **Note**: Storage class is not supported in CSI ephemeral inline volumes and all parameters are driver specific.
 CSI ephemeral volumes allow users to provide volumeAttributes directly to the CSI driver as part of the Pod spec.
-Following `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP.
+These `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP.
 For reference, check the description of parameters in the following example: [isilon.yaml](https://github.com/dell/csi-powerscale/blob/main/samples/storageclass/isilon.yaml)
 
 ## Topology
@@ -354,7 +354,7 @@ For reference, check the description of parameters in the following example: [is
 ### Topology Support
 The CSI PowerScale driver version 1.4.0 and later supports Topology by default which forces volumes to be placed on worker nodes that have connectivity to the backend storage, as a result of which the nodes which have access to PowerScale Array are appropriately labeled. The driver leverages these labels to ensure that the driver components (controller, node) are spawned only on nodes wherein these labels exist.
- 
+
 This covers use cases where:
 The CSI PowerScale driver may not be installed or running on some nodes where Users have chosen to restrict the nodes on accessing the PowerScale storage array.
@@ -449,9 +449,9 @@ The user can also set the volume limit for all the nodes in the cluster by speci
 
 ## Node selector in helm template
 
-Now user can define in which worker node, the CSI node pod daemonset can run (just like any other pod in Kubernetes world.)For more information, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
+Now users can define on which worker nodes the CSI node pod daemonset can run (just like any other pod in Kubernetes). For more information, refer to <https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector>
 
-Similarly, users can define the tolerations based on various conditions like memory pressure, disk pressure and network availability. Refer to https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#taints-and-tolerations for more information.
+Similarly, users can define tolerations based on various conditions like memory pressure, disk pressure, and network availability. Refer to <https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#taints-and-tolerations> for more information.
 
 ## Usage of SmartQuotas to Limit Storage Consumption
diff --git a/content/v3/csidriver/features/powerscale.md b/content/v3/csidriver/features/powerscale.md
index 556539c080..d8613db4d6 100644
--- a/content/v3/csidriver/features/powerscale.md
+++ b/content/v3/csidriver/features/powerscale.md
@@ -345,7 +345,7 @@ This manifest creates a pod in a given cluster and attaches a newly created ephe
 **Note**: Storage class is not supported in CSI ephemeral inline volumes and all parameters are driver specific.
 CSI ephemeral volumes allow users to provide volumeAttributes directly to the CSI driver as part of the Pod spec.
-Following `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP.
+These `volumeAttributes` are supported: size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP.
 For reference, check the description of parameters in the following example: [isilon.yaml](https://github.com/dell/csi-powerscale/blob/main/samples/storageclass/isilon.yaml)
 
 ## Topology
@@ -353,7 +353,7 @@ For reference, check the description of parameters in the following example: [is
 ### Topology Support
 CSI PowerScale driver version 1.4.0 and later supports Topology by default which forces volumes to be placed on worker nodes that have connectivity to the backend storage. This results in nodes which have access to PowerScale Array being appropriately labeled. The driver leverages these labels to ensure that the driver components (controller, node) are spawned only on nodes wherein these labels exist.
- 
+
 This covers use cases where:
 The CSI PowerScale driver may not be installed or running on some nodes where Users have chosen to restrict the nodes on accessing the PowerScale storage array.
@@ -448,9 +448,9 @@ The user can also set the volume limit for all the nodes in the cluster by speci
 
 ## Node selector in helm template
 
-Now user can define in which worker node, the CSI node pod daemonset can run (just like any other pod in Kubernetes world.)For more information, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
+Now users can define on which worker nodes the CSI node pod daemonset can run (just like any other pod in Kubernetes). For more information, refer to <https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector>
 
-Similarly, users can define the tolerations based on various conditions like memory pressure, disk pressure and network availability. Refer to https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#taints-and-tolerations for more information.
+Similarly, users can define tolerations based on various conditions like memory pressure, disk pressure, and network availability. Refer to <https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#taints-and-tolerations> for more information.
 
 ## Usage of SmartQuotas to Limit Storage Consumption
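
For reviewers, a sketch of how the `volumeAttributes` covered by these hunks would appear in a Pod spec (no StorageClass involved). All attribute values below are illustrative assumptions, not documented defaults; only the attribute names (size, ClusterName, AccessZone, IsiPath, IsiVolumePathPermissions, AzServiceIP) come from the docs being patched:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-csi-app-inline-volume  # hypothetical pod name
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /data
          name: ephemeral-vol
  volumes:
    - name: ephemeral-vol
      csi:
        driver: csi-isilon.dellemc.com
        # CSI ephemeral inline volume: attributes are passed straight to the
        # driver; all of them are driver specific.
        volumeAttributes:
          size: "2Gi"
          ClusterName: "cluster1"          # illustrative value
          AccessZone: "System"             # illustrative value
          IsiPath: "/ifs/data/csi"         # illustrative value
          IsiVolumePathPermissions: "0777" # illustrative value
          AzServiceIP: "192.168.2.1"       # illustrative value
```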